Written by: Suher Adi
2023 was a big year for developments in technology policy and regulation around the world. As the United States, United Kingdom, China, and the European Union raced to pass legislation on emerging technologies like Generative AI, there were also more diplomatic opportunities for countries to engage with each other on global governance and guidelines. As we look forward to the technological advancements of the year to come, we are also keen to track policy developments through these international forums and initiatives.
February 2023: The International Organization for Standardization (ISO), along with the International Electrotechnical Commission (IEC), developed a new standard on artificial intelligence. ISO/IEC 23894 specifically focuses on risk management related to the use and integration of AI and on how to ensure risk management is integrated into processes effectively. We are likely to see more developments to standardize AI management and integration on a global scale throughout 2024.
May 2023: The United Nations (UN) launched the Global Digital Compact. According to UN Secretary General Guterres, the Global Digital Compact set out principles, objectives, and actions for advancing an open, secure and human-centered digital future. Taking a step away from historical discussions on internet governance, the UN aims to position digital governance as a new framework for how to think about regulating the digital world. The UN has not defined the idea of “digital governance,” but it is a notable step in the conversation on international tech regulations.
May 2023: The Hiroshima AI Process was launched following the G7 Hiroshima Summit. After continued discussions among G7 leaders in September and October, the Hiroshima AI Process Comprehensive Policy Framework was released in October. It was seen as the first international framework to include guiding principles and a code of conduct aimed at promoting safe, secure, and trustworthy advanced AI systems. In October, the G7 leaders also agreed on the International Guiding Principles for Advanced AI Systems in addition to the AI Code of Conduct. The principles serve as a foundational document showcasing the G7 countries' agreement on governance for advanced AI systems; the Code of Conduct lays out practical steps organizations can take to ensure that their AI systems are deployed with safety and security measures in place.
May 2023: The EU-US Trade and Technology Council (TTC) worked to develop a “code of conduct” to prevent the harms of Generative AI. This was seen as the impetus for the G7 AI Code of Conduct and a way for the EU and US to jointly demonstrate leadership in the international AI policymaking landscape. Upcoming meetings of the TTC are worth watching, as they could serve as a precursor to broader international discussions on technology trade, data transfers, and other major pieces of technology policy.
May 2023: The New Zealand-U.K. Free Trade Agreement went into full effect. The trade agreement encompasses many aspects, including aspects related to technology. It includes a commitment to developing and using a risk-based approach to AI, developing alignment between the two countries to ensure interoperability of AI policies with each other and other international policy developments, and commitments to cross-border data flow and to not localize data. The agreement is considered to be among the most robust with regard to technology policy developments.
June 2023: South Korea became the first non-founding member to join the Digital Economy Partnership Agreement (DEPA). Signed in June 2020 by Chile, New Zealand, and Singapore, the DEPA was established to promote these countries as platforms for the digital economy. On AI, the DEPA countries have focused on promoting ethical governance frameworks and have taken international principles and developments into consideration in their multilateral work together. Costa Rica is considered likely to join the agreement as well.
July 2023: The United Kingdom officially joined the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). The largest trade agreement since the UK left the European Union, it now spans Asia, the Pacific and Europe. The agreement would allow for easier imports from Asian countries. The CPTPP is notable for its inclusion of cross-border data flow agreements, not requiring access to source code as a condition for importing technology, and ensuring interoperability of policies developed with international agreements and standards.
September 2023: The first Japan-UK Strategic Economic Policy and Trade Dialogue was held in London. This bilateral effort touches on all aspects of trade and investment between the two countries. The Japan-UK Comprehensive Economic Partnership Agreement, which was signed in 2020, is notable in two ways: 1) it included commitments on cross-border data flows, and 2) it included an agreement to bar data localization. The most recent meeting was attended by each country’s ministers of technology and focused on collaboration regarding semiconductors and AI.
October 2023: The UK Online Safety Bill passed the UK Parliament and became law. The law includes a duty of care to prevent users from being harmed on platforms, in addition to stipulations on risk assessment for various harms, mirroring the EU’s Digital Services Act. This includes provisions regarding children around hate speech and graphic content and age verification for websites. Platforms are also expected to promptly remove illegal material once they have been notified of its existence. Some have raised concerns about how new obligations on platforms in the law may negatively impact their ability to continue to offer users high-standard end-to-end encryption. Platforms failing their duty of care can incur fines of up to £18 million or 10% of their annual turnover, whichever is higher. The UK’s communications regulator, Ofcom, will release implementing regulations in the coming months.
October 2023: The Global Partnership on Artificial Intelligence (GPAI) released its Ministerial Declaration. The Ministerial, or New Delhi Declaration, addresses concerns around the spread of misinformation through AI, employment impacts from the use of AI, intellectual property, data protection, and threats to human rights and democracy through the use of AI. In addition, the Declaration is seen as a step toward advancing transparency and fairness in the adoption of AI. It emphasizes collaborative efforts to cultivate skills and knowledge around AI, including in policy development, infrastructure investment, risk management frameworks, and other governance techniques. The Declaration also stresses the importance of making policy conversations inclusive, especially toward low- and middle-income countries, so that everyone can harness the potential of AI advancements and manage risks. India will be the Lead Chair of GPAI for 2024.
October 2023: UN Secretary-General launches an AI Advisory Body to address AI governance. The AI Advisory Body includes 39 members spanning countries and sectors, including the public and private sectors, civil society, and academia. The Advisory Body will focus on building global scientific consensus on the risks and challenges of AI and seek to create avenues for international cooperation on AI governance. It released an interim report on options for international AI governance in December. The final report is expected by August 2024.
November 2023: The UK government hosted an AI Safety Summit. The Summit brought together the U.S., China, India, Japan, and EU countries to create a baseline for future action, including a commitment to use the London-based AI Safety Institute, which aims to “act as a global hub on AI safety,” for future AI-safety testing. In addition to governments, companies like OpenAI, Google DeepMind, Microsoft, Meta, and other AI companies signed on to the Bletchley Declaration. It is not yet clear which AI models will be subject to testing or when testing will take place, but this is the first international declaration stating that we should not allow companies to test their own products or take them at their word; rather, it sets the precedent that more oversight is needed.
November 2023: The Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory updated its definition of “artificial intelligence.” The goal was to create a definition that multiple countries could agree on, allowing for interoperability across jurisdictions. In a departure from previous iterations, the language shifted from “human-defined objectives” to a focus on an AI system’s objectives and the inputs and outputs it uses to turn those objectives into content that can influence physical or virtual environments. The OECD is regarded as having developed the foundational set of AI principles on which most other policy developments have been based.
December 2023: The G7 Endorsed the AI Code of Conduct that came out of the Hiroshima AI process. This Code of Conduct aims to help countries unify their approach to AI governance and address privacy concerns and security risks associated with the technology. It provides “voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.” The Code of Conduct urges companies to take appropriate measures to identify, evaluate and mitigate risks across the AI lifecycle in addition to tackling any patterns of misuse after AI products have been deployed. Since the guidance is voluntary in nature, it allows countries in different jurisdictions to approach the guidelines in their own way.
December 2023: The International Organization for Standardization (ISO), along with the International Electrotechnical Commission (IEC), developed a new standard on artificial intelligence management systems (AIMS) across jurisdictions. ISO/IEC 42001 specifies requirements for establishing and improving an AIMS to ensure the responsible development and use of AI systems. This is the first AI management system standard, and it is likely that more standards will be set by ISO and IEC to help provide guidance in this rapidly evolving field.
December 2023: The European Union’s Artificial Intelligence Act passed when the European Parliament and Council reached an agreement after months of negotiations. Through the AI Act, the EU hopes to set worldwide regulatory standards for the use of AI technology. The AI Act uses a risk-categorization approach to mitigate harms: the greater the risk, the greater the restrictions on that use of the technology. It also bans multiple uses of AI, like social scoring systems, bulk scraping of facial images, and most emotion recognition systems in workplace and educational settings. Providers of “high-risk” AI will be subject to reporting requirements, such as disclosure to public databases and human rights impact assessments. EU countries were able to secure exemptions from some aspects of the law for military or defense uses of AI. The AI Act’s rules come into effect in 2025. Companies found to violate the AI Act will face fines ranging from 1.5% to 7% of global sales.
We expect 2024 to be no less busy, and SIIA will keep you informed about major developments throughout the year.
The Software & Information Industry Association (SIIA), along with five other organizations, expresses concerns regarding Maryland’s proposed legislation, HB 772, the “Internet–Connected Devices and Internet Service Providers – Default Filtering of Obscene Content (Maryland Online Child Protection Act).” While acknowledging the importance of online child protection, the organizations express reservations about the effectiveness and feasibility of a state-specific default filter for internet-connected devices. The organizations argue that such a measure is technologically infeasible, creates unrealistic expectations, and undermines competition among existing filter technologies. The letter suggests alternative approaches, citing California’s AB 873, which focuses on incorporating media literacy into curriculum frameworks. The organizations emphasize the need for informed assessments of existing programs before implementing new regulations and advocate for narrowly tailored, risk-based strategies to address specific age groups and tangible harms. Despite concerns about HB 772, the organizations express commitment to collaborating on effective solutions for children’s online safety.
In a letter addressed to Chair Hawkins and Members of the House Committee on Judiciary, five business organizations, including ACT, CCIA, CTA, TechNet, and SIIA, express appreciation for efforts to protect children online but voice serious concerns about a proposed bill, SB 104. The organizations argue that requiring a state-specific default filter is technologically infeasible and does not effectively address online safety. The letter emphasizes the diverse range of filter technologies in the market and advocates for a risk-based approach to age-specific protections. The organizations propose alternative solutions, citing Florida’s legislation on internet safety training and industry-led campaigns. The organizations urge lawmakers to avoid passing new regulations until existing laws are assessed and commit to collaborating on children’s online safety concerns.
The AI revolution has led countries around the world to grapple with what the future of technology will entail and how best to protect the progress that has been made and ensure a free and open internet. Although the world has seen meaningful strides in this fight, it has also seen questionable actions by some nations, including restricting access to online information, censoring the internet, and enacting surveillance measures in the name of protecting minors.
Recently, certain measures by the Canadian government and Prime Minister Justin Trudeau have shown a dramatic shift away from policies that protect a free and open internet for Canadians.
One of the country’s most headline-grabbing actions to date was C-18, Canada’s Online News Act. While intended to establish a framework that allows digital news intermediary operators and news businesses to enter into agreements on news content, it is widely seen as part of a crusade against Big Tech and an effort to make U.S. tech companies subsidize smaller Canadian publishers, many of which rely on social media to generate traffic. Indeed, earlier this year, Mr. Trudeau claimed Canada is under “attack” and likened the situation to World War II.
The effect, however, has been felt by Canadians. Earlier this year, both Google and Meta announced they would not make news content available in the country because of concerns with the bill, which includes placing unprecedented financial liability on companies simply for providing access to the news. Indeed, the smaller Canadian media outlets the legislation was intended to protect are now likely to be harmed by it, including through greatly diminished traffic and potential job losses.
The Online Streaming Act (C-11) is another example of Canada prioritizing a legislative vendetta at the expense of Canadians’ access to information. Designed to expand Canada’s current Broadcasting Act, the bill puts Canada’s creative industry at risk and sets a dangerous precedent. The rules being written by the CRTC could impose sweeping new taxes on streaming services like Netflix and Disney+ (some have proposed as much as 20%), potentially crushing the multibillion-dollar market for films, TV shows, and music produced in Canada. While a new and restrictive tax regime would benefit incumbent Canadian cable companies, it will not help Canadian customers who are simply looking to be entertained.
But C-11 is not the only tax on digital services being considered. In August, Canada released a draft of its newest proposed legislation, the Digital Services Tax Act. The bill would impose a 3% tax on revenue from specific businesses that meet thresholds of global gross revenues of at least €750 million and in-scope Canadian revenues of at least 20 million Canadian dollars.
Despite revisions, the act still discriminates solely against US companies and violates Canada’s commitments under the Canada-US-Mexico Agreement and World Trade Organization agreements. The Act’s unilateral approach is also likely to undermine the OECD and G20 Inclusive Framework. Rather than working collaboratively with the global community, the Trudeau government seems intent on imposing yet another tax on non-Canadian companies that provide services Canadians value.
This disconnect leads us to question who Canada has in mind as they consider these changes. What is it the government is trying to accomplish? Why are these laws largely targeting American tech companies? Has Mr. Trudeau fully considered the significant economic impact of these laws on his citizens and how Canada appears to be trending away from open information and leadership in shaping a democratic approach to technology?
It was only last year that Canada endorsed the Declaration for the Future of the Internet, calling for a “shared vision for an open, trusted and secure Internet for all.” If the government has endorsed the belief that the internet is indeed the backbone of the global economy, then surely it is the key to Canada’s prosperity as well.
Yet that vision is in stark contrast to the Trudeau government’s new digital policies. Mr. Trudeau needs to fully consider what is most beneficial for Canadians everywhere. Rather than taxing online news and streaming companies for providing the content Canadians use daily, and enacting a digital sales tax that will significantly hinder the OECD process and countless trade agreements, he must seek opportunities and policies that will solidify the country’s position of serving as a model for technology and digital policy rooted in democratic values.
Paul Lekas is the Head of Global Public Policy for the Software & Information Industry Association (SIIA), the principal trade association for the software and digital content industry. Morten Skroejer is the Senior Director for Technology Competition Policy for SIIA.