
Giving with one hand while taking away with the other: Bipartisan Innovation Act and Antitrust are headed towards an irreconcilable conflict

Congress has just begun conferencing to resolve differences between the United States Innovation and Competition Act (USICA) (S.1260) and the America COMPETES Act (H.R. 4521). This “must pass” legislative package contains billions in funding and incentives to spur U.S. innovation, create jobs, and address critical challenges around emerging technologies and a rapidly changing global technology landscape. The measures in the final package will promote federal research and development (R&D) in technology and science, foster stronger public-private partnerships, and build on the huge investments in R&D made by the private sector as federal expenditures have dropped precipitously (as a percentage of GDP) in the last half century.

At the same time, some in Congress continue to urge passage of antitrust legislation aimed at leading U.S. tech companies. If enacted, these bills would likely lead almost immediately to a drop in private sector R&D spending at a time of increasing global competition for leadership in technology innovation.

The antitrust legislation is a mistake.  Although an increase in federal R&D spending is prudent, welcome and necessary, it cannot substitute for the continued innovation of the private sector.  By diverting investment from private R&D, this legislation will imperil both national and economic security.  The U.S. cannot compete globally with one while hamstringing the other. It needs both.

Strengthening U.S. Tech Leadership…

Although nearly a year has gone by since the Senate passed its version of the innovation package – USICA – the legislation remains urgent. As President Biden remarked in his State of the Union address, the legislation is critical to “make record investment in emerging technologies and American manufacturing” to enable the United States “to compete for the best jobs of the future” and “level the playing field with China and other competitors.”

While much has been made of the $52 billion in spending to support domestic semiconductor manufacturing, less attention has gone to how the legislation would jump-start federal government spending on science and technology R&D. As other nations have increased government R&D, the United States has gone in the opposite direction: federal R&D investment as a percentage of GDP has declined in 22 of the 28 years since 1990. Indeed, the fall in federal R&D funding as a share of the economy has been steady over the last 60 years. From its high in 1964, U.S. federal R&D spending has declined by 65 percent, and it is down 40 percent since 1980. China, by contrast, has seen explosive growth in both public and private R&D investment over the last 20 years and is now close to parity with the U.S.

Federal R&D has proven essential to kick-start important innovation in areas where the market has yet to catch up – especially in basic research, where the federal government remains the largest funder in the United States. Often touted is the work of the Defense Advanced Research Projects Agency (DARPA) on the internet and global positioning system (GPS) technology, conducted decades ago. Fostering academic and private sector innovation through federal R&D programs is important and, given the exponential growth in such spending by China, many in the United States see rejuvenation of federal R&D as an existential necessity.

Despite the decline in federal R&D spending over the past half century, the United States has set the pace for innovation worldwide. This is largely owing to significant private sector R&D investment. Since 1980, the private sector has outpaced the federal government in overall R&D spending and now does so by a wide margin. The four largest U.S. technology companies spend almost $100 billion annually on R&D, which represents over 20 percent of total business R&D and is on par with, if not greater than, R&D spending by the entire U.S. federal government.

As ITIF has detailed, five U.S. firms alone—Alphabet, Amazon, Apple, Meta, and Microsoft—had combined R&D expenditures of $128 billion in 2020, exceeding the R&D of all Japanese firms and of two-thirds of the top 2,500 EU-based firms, and amounting to more than a third of all Chinese public and private R&D spending. This massive spending has enabled the United States to lead in critical technological innovation in artificial intelligence (AI), quantum computing, robotics, and other fields. These advances carry direct benefits to consumers in the form of products and services we use every day. They also serve as an engine for continued U.S. innovation and global leadership in technology. Network effects, which are prevalent in the technology sector, feed on themselves and are known to be one of the primary drivers of R&D investment. R&D by top tech companies leads to open source software that spurs additional innovation, new platforms that foster development of novel apps, investment in smaller companies, and many other features that benefit the innovation ecosystem at large.

Whatever form the final innovation package takes, it will undoubtedly include measures to drive public-private collaboration and investment in critical technologies including AI, quantum computing, privacy-enhancing tech, and others. These are central to continued U.S. innovation, maintaining a competitive edge vis-à-vis China, and creating jobs. In addition, sustained investment will help to advance technology in responsible ways to set a model for the democratic world and counter the spread of digital authoritarianism. The investment, and the message this package will send, will have enormous implications for consumer welfare, economic growth, and national security.

…or Undermining Core U.S. Advantages?

Some in Congress are also vigorously pursuing a campaign that would directly undermine the core goals of the bipartisan innovation package. Styled as antitrust reform, bills like the American Innovation and Choice Online Act (AICOA) would cause leading U.S. tech companies to decrease their R&D investments and would force the transfer of proprietary data and technology to competitors, including (based on the current form of the bill) foreign rivals in China and Russia.

The proponents of AICOA claim that the measures are designed to promote consumer choice and help small entrepreneurs looking to make headway. There is notably little economic analysis behind these claims. In fact, a recent economic study found that the likely cost of the antitrust bills to companies and consumers would be more than $300 billion over time, without achieving any of their stated objectives. And there is little doubt that the bill would cause leading tech companies—those that contribute over 20 percent of private sector technology R&D each year—to reduce their investments. This would have dramatic downstream effects on the entire tech ecosystem, on the ability of the U.S. government to promote safe and responsible technologies, on cybersecurity, and on job creation.

It is well recognized that stable funding and the ability—and willingness—to invest for the long term are essential to creating important advances in emerging technologies. China is making those investments today. In the U.S., federal R&D is important, but without investments from the very companies that the antitrust bills seek to undermine, the race for the future is one the U.S. cannot win.

 

US-EU Trade & Tech Council

Recommendations for the Upcoming US-EU Trade & Technology Council

The United States and the European Commission (EC) will gather in France on May 16-17, 2022, for the second meeting of the US-EU Trade & Technology Council (TTC). The meeting provides an opportunity for the two sides to demonstrate in concrete terms how they intend to realize the vision of the Pittsburgh statement and pursue policy based on “shared democratic values.”

The Russian invasion of Ukraine has put into stark relief the strategic imperative of enhancing alignment between the United States and the EU on digital policy. Combined with challenges from other authoritarian regimes, the need for alignment on a vision for digital democracy has never been greater.

SIIA has provided input to both U.S. and EC officials on short-term measures to strengthen transatlantic alignment on digital policy. In this post, we highlight a few key recommendations we have made.

Recommendation for Working Groups 1 & 5: Foster regulatory interoperability on artificial intelligence (AI) by aligning on a risk-based approach for evaluating AI systems.

Alignment on data governance principles is essential to support a transatlantic approach to digital democracy. While the TTC may not serve as a mechanism to convey concerns about current EU digital regulation proposals – such as the Data Act – it does provide a forum to advance a vision of regulatory interoperability. This is of utmost importance for the development and use of AI systems. 

The EC’s introduction of the Artificial Intelligence Act (AI Act) proposal one year ago has driven the global discussion about how to regulate AI systems to address bias, reduce negative externalities, and promote positive uses. Though the AI Act must work through member states and the European Parliament, it has already changed the conversation about AI regulation worldwide.

At a high level, the AI Act incorporates a risk-based approach to AI regulation. SIIA supports a risk-based approach and has been encouraged by the expert-driven, multi-stakeholder work undertaken by the National Institute of Standards and Technology (NIST), the Organisation for Economic Co-operation and Development (OECD), and the Global Partnership on Artificial Intelligence (GPAI). To ensure regulatory interoperability, SIIA has recommended that U.S. and EC officials agree to develop guiding principles or standards for implementing risk-based approaches to AI systems that explicitly build on the work of NIST, the OECD, and GPAI. Such principles or standards will provide guidance for regulators on how to assess AI systems for safety, security, trustworthiness, and bias.

Recommendation for Working Groups 5, 6, and 9: Launch a transatlantic public-private partnership to create large, high-quality, and privacy-protective data sets that are accessible and usable by a wide range of actors for training, testing, and innovation

The limited availability of robust, reliable, and trustworthy data sets is a key impediment to AI innovation. While data is an essential component of the AI stack, developing robust data sets that both meet the standards for responsible AI (including using truly inclusive and representative data sets) and minimize privacy concerns is extremely costly for most companies and entrepreneurs. That cost both limits the potential of AI and allows AI tools to be built on unreliable, untrustworthy, or potentially biased information. Data sets that do not meet standards for accuracy, reliability, and trustworthiness, or that embed bias, carry societal risk.

Creating shared public datasets that can be used by researchers and innovators from the United States and EU member states can be critical to fostering new and better uses of AI technologies and ensuring that the data relied on by AI algorithms meets quality standards. 

SIIA has recommended two approaches to create shared public datasets.

  • First, a public-private effort to create large synthetic data pools that would be accessible to researchers, government, and industry (a minimal sketch of the idea follows this list). Synthetic datasets can enable algorithms to run on data that reflects, rather than relies on, real-world data. This approach would allow for the creation of a robust data lake that can be vetted to ensure accuracy, reliability, fairness, and so on. Moreover, it would not present the privacy and individual rights concerns that may arise from the collection, retention, sharing, and use of datasets that are built directly from personal information. We understand there is interest in the private sector in working with the government on this sort of initiative.
  • Second, a public-private effort to create large open data sets of personal information collected through enhanced notice and consent procedures. This could be modeled on the Casual Conversations dataset developed by Meta. That dataset consists of over 45,000 videos of conversations with paid actors who consented to their information being used openly to help industry test bias in AI systems.
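To make the synthetic data idea concrete, here is a minimal Python sketch of the simplest version of the approach: fit a statistical model to a small table of real records and sample new records from the model, so that analysis and testing run on data that reflects, rather than reveals, real individuals. The column names and values are hypothetical, and production synthetic data generators use far richer models (often with differential privacy guarantees), but the division of labor is the same: the fitted model, not the raw records, is what gets shared.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical "real" records: columns are age, annual income, and weekly app usage hours.
real = np.array([
    [34, 62_000, 12.5],
    [29, 48_000,  9.0],
    [41, 83_000,  6.5],
    [52, 95_000,  4.0],
    [23, 39_000, 15.0],
], dtype=float)

# Fit a simple parametric model: the empirical mean and covariance of the real columns.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic records from the fitted distribution. No synthetic row corresponds
# to a real person, but aggregate statistics (means, correlations) are approximately
# preserved, which is what many training and testing tasks need.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real mean:     ", np.round(mean, 1))
print("synthetic mean:", np.round(synthetic.mean(axis=0), 1))
```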

SIIA has further recommended that NIST – possibly in coordination with an equivalent EU agency – lead the effort to ensure that the large data set is appropriately screened before it is put into wide use. The data pools should be subject to intensive test, evaluation, verification, and validation procedures in accordance with NIST standards and with the involvement of government and private sector experts.

Recommendation for Working Groups 5, 6, and 9: Launch a pilot project to accelerate advances in and applications of privacy-enhancing technologies that would establish public and private use cases for productive uses of data that preserve the privacy and security of the underlying data.

Privacy-enhancing technologies (PETs) are a group of technologies designed to protect the privacy and security of sensitive information; they include homomorphic encryption, differential privacy, federated learning, and synthetic data. PETs now have established uses in a wide range of contexts, including research, health care, financial crime detection, human trafficking mitigation, intelligence sharing, criminal justice, and more.

Despite the myriad uses, adoption of the more advanced and capable PETs is not yet widespread. Further adoption of PETs can be an essential part of a democratic model of emerging technology in practice, as a counter to a model that sacrifices privacy, trust, safety, and transparency. PETs can enable the secure sharing of data between entities and across jurisdictional boundaries, expanding data access and utility and enabling organizations to reduce risk while making faster, better-informed decisions. PETs are also one way to address (as a technical, though not legal, matter) privacy-based restrictions on EU-US data flows.
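To illustrate how one widely used PET works in practice, the short Python sketch below applies the standard Laplace mechanism from differential privacy to a simple counting query. The records and the epsilon value are hypothetical and chosen only to show the mechanics: a data holder can publish the noisy statistic without revealing whether any single individual appears in the data.

```python
import numpy as np

rng = np.random.default_rng()

def dp_count(records, predicate, epsilon):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for this single query release.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical records: each entry is one person's age.
ages = [23, 29, 34, 41, 52, 61, 67, 70]

# Smaller epsilon means more noise and stronger privacy; repeated queries consume
# additional privacy budget and would require more noise or fewer releases.
print(dp_count(ages, lambda age: age >= 65, epsilon=0.5))
```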

 

Endnotes:
1. Joshua New, AI Needs Better Data, Not Just More Data, Center for Data Innovation (Mar. 20, 2019); Tasha Austin, et al., Trustworthy Open Data for Trustworthy AI, Deloitte Insights (Dec. 10, 2021).
2. Meta AI, Casual Conversations Dataset (April 2021).
3. The Center for Data Ethics and Innovation, PETs Adoption Guide, Repository of Use Cases. See also, e.g., Kaitlin Asrow and Spiro Samonos, Federal Reserve Bank of San Francisco, Privacy Enhancing Technologies: Categories, Use Cases, and Considerations (June 1, 2021); Luis T.A.N. Brandao and Rene Peralta, NIST Differential Privacy Blog Series, Privacy-Enhancing Cryptography to Complement Differential Privacy (Nov. 3, 2021).
4. Andrew Imbrie, et al., Privacy Is Power: How Tech Policy Can Bolster Democracy, Foreign Affairs (Jan. 19, 2022).
5. Two use cases involving SIIA members will help to illustrate this point. First is a partnership between Enveil (an SIIA member) and DeliverFund, the leading counter-human trafficking intelligence organization, which leveraged Enveil’s PETs-powered solutions to accelerate reach and efficiency by allowing users to securely and privately screen existing assets at scale by cross-matching and searching across DeliverFund’s extensive data. Second is Meta’s use of secure multi-party computation, on-device learning, and differential privacy tools to minimize the amount of data collected in the advertising space while ensuring that personalized content reaches end users.
6. This is not just an industry view. As the White House stated in announcing the new US-UK challenge, PETs “present an important opportunity to harness the power of data in a manner that protects privacy and intellectual property, enabling cross-border and cross-sector collaboration to solve shared challenges.” White House Office of Science and Technology Policy, US and UK to Partner on Prize Challenges to Advance Privacy-Enhancing Technologies (Dec. 2021); White House, Remarks of Jake Sullivan (July 13, 2021). In addition, the U.S. Census Bureau plans to launch a series of pilot projects to deploy PETs “to build a platform that will enable secure multi-party computation, encryption technologies, and differential privacy to promote better data sharing both domestically and abroad.” White House, Fact Sheet: The Biden-Harris Administration is Taking Action to Restore and Strengthen American Democracy (Dec. 8, 2021). This energy complements growing global interest. For example, the UK Information Commissioner’s Office is exploring guidance on PETs and ways to incorporate PETs into data regulations. Recently, according to reports, the United Nations launched a “PETs Lab” to test PETs against data sets from the United States, the UK, Canada, Italy, and the Netherlands, and to work with researchers and the private sector to develop use cases and create guidance. See United Nations, Global Platform: Data for the World; The Economist, The UN is testing technology that processes data confidentially (Jan. 29, 2022).