
SIIA Applauds Release of NIST AI Risk Management Framework

The following statement can be attributed to Paul Lekas, Senior Vice President, Global Policy, Software & Information Industry Association.

“SIIA congratulates the National Institute of Standards and Technology (NIST) on the release of its Artificial Intelligence Risk Management Framework (AI RMF 1.0), which provides voluntary guidance for designing, developing, deploying, and using AI. The RMF reflects the culmination of a collaborative, transparent, and expert-driven process over the past 18 months. It will have a significant role in guiding discussion about AI policy in the United States and internationally. We commend NIST’s efforts to provide detailed guidance for identifying and managing the risks associated with AI technology to ensure that the benefits of AI are realized and the risks are minimized.

“SIIA looks forward to continued engagement with NIST, the executive branch, and Congress to advance sound AI policy that promotes innovation and fosters safe, responsible, and reliable AI tools and technologies.”



A Look At The Legal Intersection Of AI And Life Sciences

By Ariel Soiffer, Elijah Soko and Paul Lekas (January 20, 2023)

This article was not written by ChatGPT. Will all articles have to start with a statement like this? And will any statement like this be true?

ChatGPT uses artificial intelligence, or AI, to develop written work product. While this application of AI has grabbed the news, there are many other exciting applications of AI, including in the domain of life sciences.

In this article, we start by defining AI in the context of data, algorithms and AI systems. Next, we touch on leading regulatory efforts in the U.S. and abroad, followed by a brief overview of some key issues in compliance. After that, we assess the intersection of AI and intellectual property law. And finally, we mention some of the applications of AI in life sciences.

Artificial Intelligence

AI starts with big data, which refers to large data sets that often come from multiple sources. These data sets include a substantial number of entries, or rows, each with many attributes, or columns.

All of this data is analyzed in models, which are used to explain, predict or influence behavior. Generally, models become more accurate when developed using more data, although the relationship between model accuracy and the amount of data is often nonlinear.
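
To make the rows-and-columns picture concrete, here is a minimal sketch of training a simple predictive model on a tabular data set. The data is entirely synthetic, and the use of Python with the open-source scikit-learn library is our illustrative assumption, not something drawn from the article:

```python
# A minimal sketch: a tabular data set (rows = entries, columns = attributes)
# used to train a simple predictive model. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# 1,000 entries (rows), each with 3 attributes (columns).
X = rng.normal(size=(1000, 3))
# A synthetic outcome that depends on the attributes plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```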

The Organization for Economic Cooperation and Development defines an AI system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

AI systems are designed to operate with varying levels of autonomy. Therefore, AI systems may perform human-like tasks without significant oversight or can learn from experience and improve performance when exposed to data sets.

Frequently, an AI algorithm produces a model from a big data set over time, and that model can be used as a standalone predictive device. Naturally, the output of AI will only be as good as the input data sets.

Machine learning is a subset of AI. Machine learning is an iterative process of modifying algorithms — step-by-step instructions — to better perform complex tasks over time.

In other words, machine learning applies an algorithm to improve an original algorithm’s performance, often checking the output of an analysis in the real world and using the output to iteratively refine the analysis for future inputs. Effectively, machine learning evolves the original algorithm based on analysis of additional inputs.
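
As a rough sketch of that iterative loop (again synthetic and illustrative, not a method described in the article), the following code refines a simple model step by step: it checks the model's outputs against observed outcomes and uses the error to adjust the model for future inputs:

```python
# A minimal sketch of the iterative idea behind machine learning:
# compare model outputs with observed outcomes, then use the error
# to refine the model for future inputs (here, via gradient descent).
import numpy as np

rng = np.random.default_rng(seed=1)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])                 # the relationship to recover
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(2)                                # initial, uninformed model
learning_rate = 0.1
for step in range(100):
    predictions = X @ w
    error = predictions - y                    # check output against outcomes
    gradient = X.T @ error / len(y)
    w -= learning_rate * gradient              # refine for future inputs

print("learned weights:", w)                   # approaches [2.0, -1.0]
```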

The AI Regulatory Landscape

AI systems analyze large data sets and produce predictions and recommendations that often have a real-world impact in areas as varied as hiring, fraud prevention and drug discovery. Because of these many applications, AI has attracted significant attention from policymakers and regulators, and the AI-focused legal and regulatory landscape is changing quickly.

At the state level, bills or resolutions relating to AI were introduced in at least 17 states in 2022. However, only a few states (Colorado, Illinois, Vermont and Washington) enacted AI laws in 2022, and each was focused on a narrow application of AI.

While there is currently no horizontal federal regulation of AI, many generally applicable laws and regulations apply to AI, including in many life sciences contexts. These include the Health Insurance Portability and Accountability Act, which protects personal health data; Federal Trade Commission regulations against unfair or deceptive trade practices; and the Genetic Information Nondiscrimination Act, which prevents requesting genetic information in some cases.

Federal regulatory efforts on AI are focused on sector-specific regulations, voluntary standards and enforcement.

As an example of sector-specific regulations, the U.S. Food and Drug Administration has rules regarding medical devices that incorporate AI software to ensure the safety of those devices.

As an example of voluntary standards, the National Institute of Standards and Technology is finalizing a framework to better manage risks to individuals, organizations and society associated with AI. The NIST risk management framework represents the U.S. government’s leading effort to provide guidance for the use of AI across the private sector.

The FTC has indicated an interest in pursuing enforcement action based on algorithmic bias and other AI-related concerns, including models that reflect existing racial bias in health care delivery. Relatedly, the White House Office of Science and Technology Policy has created a blueprint for an AI Bill of Rights, citing health as a key area of concern for AI systems oversight.

Outside the U.S., the AI regulatory landscape is also developing rapidly.

For example, the European Union is finalizing the Artificial Intelligence Act, which would regulate AI horizontally — across all sectors — and is likely to have a significant global impact, much like what occurred with privacy laws.

The EU approach focuses on high-risk applications of AI, which may include applications in life sciences and related fields. Further, the U.S. and EU, through the U.S.-EU Trade and Technology Council, have developed a road map that aims to guide approaches to AI risk management and trustworthiness based on a shared dedication to democratic values and human rights.

AI Compliance Key Issues

AI raises a number of key compliance issues, including transparency and accountability (often framed as keeping a human in the loop); fairness and bias; explainability and interpretability; safety, security, and resiliency; reliability, accuracy, and validity; and privacy.

We will briefly discuss the first three of these issues in this article. Human in the loop refers to a human playing a role after AI makes a recommendation but before that determination is carried out in the real world.

In life sciences, it is critical to include humans in the process regardless of the regulatory requirements. For example, humans review AI drug discovery output and test that output in a wet laboratory to evaluate the AI output and improve AI’s predictions.

Bias in AI means unwanted, unintended, or unfair assumptions or prejudices built into AI systems, often deriving from algorithms or data. Developers of AI systems should understand and evaluate for bias because bias limits AI’s accuracy and efficacy and creates compliance and reputational challenges. Since data may not be neutral, bias may result from data collection practices.

For example, Winterlight Labs, the developer of an Alzheimer’s detection model that used speech recordings, later discovered that its technology was accurate only for English speakers of a specific Canadian dialect as a result of the training data it used. Bias in the data may result in bias in the AI.
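
One concrete way to evaluate for this kind of bias, sketched below on synthetic data (the library and the group labels are our illustrative assumptions), is to measure a model's accuracy separately for each subgroup rather than only in aggregate:

```python
# A minimal sketch of one bias check: report accuracy per subgroup,
# not just overall. Data and group labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=2)
X = rng.normal(size=(2000, 4))
group = (rng.random(size=2000) > 0.9).astype(int)   # group 1 is a ~10% minority
# The outcome depends on a different attribute in each group, so a model
# trained mostly on the majority group can underperform on the minority.
y = np.where(group == 0,
             (X[:, 0] > 0).astype(int),
             (X[:, 1] > 0).astype(int))

model = LogisticRegression().fit(X, y)
for g in (0, 1):
    mask = group == g
    print(f"group {g} accuracy: {model.score(X[mask], y[mask]):.2f}")
```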

Explainability in AI means the ability to evaluate what output an AI system produces and at least some of the reasons for that output. Developers should be able to explain why certain data was or was not used, and how a model predicts outputs based on its inputs.
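
As one hedged illustration (the article does not endorse any particular method), permutation importance estimates how much each input attribute contributes to a model's predictions, which can support this kind of explanation:

```python
# A minimal sketch of one explainability technique: permutation importance
# estimates each attribute's contribution to predictions. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=3)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)   # only attributes 0 and 2 matter

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"attribute {i}: importance {importance:.3f}")
```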

Intellectual Property Rights in AI

The major categories of intellectual property are patents, trademarks, trade secrets and copyrights.

A patent protects novel inventions by giving the patentee exclusivity for that invention. A trademark protects branding by ensuring that only the owner can use a mark for a particular field. A trade secret is information that has independent economic value by not being generally known. A copyright protects original works of authorship such as books and music, as well as software.

Algorithms and models will often be protected as trade secrets. The algorithms of most major search engines, for example, are generally protected as trade secrets. If a company sought to protect a search engine algorithm with a patent, the algorithm would have to be published in the patent application, which would allow the public to see the algorithm described in detail and would enable copying.

But given that the source code of a search engine cannot easily be reviewed, a patentee would not easily be able to determine whether a competitor had used the patented search algorithm without permission.

AI concepts may be eligible for patent protection, but the U.S. Supreme Court’s 2014 decision in Alice Corp. v. CLS Bank International requires something more than abstract ideas when seeking a software patent.

Therefore, a pure software algorithm will be hard to patent, but an application of AI with a physical-world impact may be patentable. The output of AI, such as discovery of a novel drug, should be patentable if the output otherwise qualifies as patentable.

However, courts have been skeptical of naming an AI system as the sole inventor of a patent; indeed, the U.S. Court of Appeals for the Federal Circuit has confirmed that an AI system cannot be the sole inventor.

Similarly, the U.S. Copyright Office has determined that creative works authored by AI are not eligible for copyright protection. While authorities in the U.S. and EU have rejected patent applications citing AI as the sole inventor, South Africa and Australia have ruled that AI can be considered an inventor on patent applications.

In business transactions involving AI, ownership and rights to use the AI system are generally divided among the parties in a few ways. Many software or service providers want the right to use data to improve their services or to improve their AI.

Three generic models in AI business transactions are (1) a service model, (2) a model rights approach, and (3) an algorithm rights approach. In a service model, the AI provider runs AI as a service while the customer provides input; the AI provider delivers the output, and some rights to use it, to the customer. In a model rights approach, the customer provides input into the AI while the AI provider develops and refines the model.

Once the model is complete, the customer gets rights to use the model, but not the underlying AI or algorithm. The algorithm rights approach allows the AI provider to retain ownership of the underlying AI, while the customer receives some rights to the algorithm.

In the life sciences context, most AI providers will propose a service model, where the AI provider delivers output and rights to use the output. AI providers in life sciences aim to apply their AI neutrally, to all potential customers, and to refine their AI system using varied inputs.

Life Sciences Applications of AI

AI in Drug Discovery

Currently, it is possible to obtain massive data sets of small molecule interactions with target proteins, for example, using DNA-encoded libraries. Eventually, this might be possible with peptides.

For example, the discovery of novel molecules is possible through the application of AI to massive data sets of small molecule interactions with target proteins.

Recently, ZebiAI Therapeutics Inc. applied machine learning to data sets that were the output of DNA-encoded library screens. The AI output could be used to predict novel small molecule targets. Human beings still play an important role, including wet lab testing to confirm and refine results from AI-based analysis.
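
The general pattern, though emphatically not ZebiAI's actual pipeline, might be sketched as follows: train a classifier on screening results so it can score compounds that have not yet been screened. The fingerprint features and labels below are synthetic stand-ins; real work uses chemistry-aware representations:

```python
# A hedged sketch of the general pattern: learn from screening results
# (synthetic binary "fingerprints" here) to predict whether an unscreened
# compound binds a target protein. Not any company's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=4)
fingerprints = rng.integers(0, 2, size=(5000, 128))   # one row per compound
binds_target = (fingerprints[:, :8].sum(axis=1) > 4).astype(int)  # toy label

model = RandomForestClassifier(random_state=0).fit(fingerprints, binds_target)
candidate = rng.integers(0, 2, size=(1, 128))         # an unscreened compound
print("predicted binding probability:", model.predict_proba(candidate)[0, 1])
```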

AI in Clinical Trials

Later stage — Phase II or Phase III — clinical trials have substantial data sets with data at the individual level. AI can assess historical data to predict outcomes such as (1) whether there are subpopulations of patients with better outcomes, (2) how adverse events are distributed, and (3) what subject characteristics are associated with better outcomes, as sketched below.
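
A minimal sketch of such a subgroup analysis, using an interpretable decision tree on synthetic individual-level data (the article does not prescribe any method or tool), might look like this:

```python
# A minimal sketch of subgroup analysis: fit an interpretable decision tree
# on synthetic individual-level trial data to surface patient characteristics
# associated with better outcomes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(seed=5)
n = 1000
age = rng.integers(30, 80, size=n)
biomarker = rng.normal(size=n)
# Synthetic rule: younger patients with a high biomarker respond better.
responded = ((age < 55) & (biomarker > 0.5)).astype(int)

X = np.column_stack([age, biomarker])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, responded)
print(export_text(tree, feature_names=["age", "biomarker"]))
```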

AI in Genomics

AI has improved our understanding of patterns in a genome, i.e., an organism’s complete set of DNA. Next-generation sequencing can efficiently determine the order of the basic structural units of DNA. It can gather genetic data rapidly from individuals and has driven the price of whole genome sequencing down to as little as $2,000, with prices expected to fall further.

Applying AI to massive genomic data sets that become increasingly available as the price of sequencing drops may improve predictions around who may develop a disease or whether certain actions may reduce risk for a disease.
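
As an assumption-laden sketch of that idea (synthetic genotypes, a deliberately simple model, and no claim about how genomic risk models are actually built), the following code fits a risk model to variant-count data and scores a new individual:

```python
# A hedged sketch: predict disease risk from synthetic genotype features
# (0/1/2 copies of each variant per individual).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=6)
genotypes = rng.integers(0, 3, size=(3000, 50))   # one row per individual
risk_variant = genotypes[:, 7]                    # toy causal variant
disease = (risk_variant + rng.normal(scale=0.8, size=3000) > 2).astype(int)

model = LogisticRegression(max_iter=1000).fit(genotypes, disease)
new_person = rng.integers(0, 3, size=(1, 50))
print("predicted disease risk:", model.predict_proba(new_person)[0, 1])
```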

Conclusion

While ChatGPT has grabbed the headlines by being able to write short essays, AI has many other applications, including making a real difference in the life sciences industry.

The opportunities are enormous. While AI innovation has outpaced regulation, the development and use of AI systems are not without challenges, including compliance and reputational risks.

Companies should focus compliance and due diligence on managing the features and risks of AI. Companies must also stay abreast of regulatory developments and prepare for how new laws and policies will have a direct impact on their development and use of AI-based technologies.

Ariel Soiffer is a partner and Elijah Soko is an associate at WilmerHale.

Paul Lekas is the head of global public policy at Software & Information Industry Association.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


SIIA Files Supreme Court Amicus Brief in GONZALEZ V. GOOGLE

On February 21, the Supreme Court will hear arguments in Gonzalez v. Google, which asks whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits their liability when they engage in traditional editorial functions (such as deciding whether to display or withdraw content) with regard to such information. Section 230 prevents businesses from being held liable for republishing content created or developed by third parties.

The Software & Information Industry Association (SIIA) today filed an amicus brief emphasizing how important Section 230 is to the business of information and the ability of all users to find relevant and usable information online. “This is the most significant SCOTUS term for the business of information in at least a decade,” said Chris Mohr, SIIA’s President. “The Court’s decision will affect the way that businesses of all sizes make the internet usable and accessible to all users,” he added.

Key points from SIIA’s amicus brief:

  • Section 230’s text (and especially its definitions) reveals that Congress anticipated the explosion of information that platforms would be faced with, and the need for legal protection that would enable websites to sort, screen, organize, and display third-party information. It specifically included a series of activities that serve as the foundation for how modern platforms publish third-party content.
  • In particular, Section 230 expressly foresaw the need for “access software providers” to be able to “pick, analyze, and organize” an otherwise unnavigable sea of information. These actions all fall under the publishing functions that Congress sought to protect when platforms handle third-party content. 
  • There is no functional or statutory difference between a search and a recommendation. Both are solutions to a well-known problem in information retrieval: filtering massive, unstructured data sets for information that might be relevant to users.
  • Congress provided protections to Internet platforms to ensure that the market for information would be both innovative and competitive. Its experiment has proven to be a huge success. If the petitioners win, only the largest firms will be able to absorb the risks of content moderation.

DC In the Desert: A Brief Dispatch on CES 2023

By Paul Lekas, Senior Vice President, Global Policy, Software & Information Industry Association (SIIA).

Last week, members of SIIA’s policy team – along with over 115,000 others – attended the annual CES trade show in Las Vegas. CES showcased the future of tech innovation in autonomy, AI, health, gaming, robotics, and more. It was also an opportunity to convene thought leaders and government officials in a series of discussions on critical information and technology policy issues – including across SIIA’s policy portfolio. Here are our key takeaways from “DC in the desert”:

  • Resilience is the new watchword. Resilience of global supply chains, resilience of cybersecurity protocols to ensure a healthy digital ecosystem, and resilience of international alliances and partnerships. Policymakers will continue to look for ways to advance resilience across the digital ecosystem, which means advancing robust, secure, reliable, and inclusive frameworks for cybersecurity and digital access. CISA Director Jen Easterly and National Cyber Deputy Director Camille Stewart Gloster spoke eloquently on the topic.
  • Content moderation will be big in the 118th Congress. As the Supreme Court readies to hear challenges to Section 230, Congress is prepared to investigate content moderation and possibly legislate on Section 230 reform. Nevertheless, voices from industry, government, and civil society expressed concern that legislative reform could restrict free speech and have significant negative effects on the Internet as we know it today.
  • International tech standards are critical to ensuring the safety and reliability of technology and comportment with democratic values. Industry has a critical role in participating in the development of international technological standards. Government and industry representatives spoke to the importance of advancing standards around cybersecurity, AI, and other key technologies and involving a greater segment of the private sector.
  • Fostering international cooperation on digital policy. International cooperation is key to fostering interoperability and innovation. Forums such as the US-EU TTC, IPEF, and APEC have a lead role in achieving this. Several speakers conveyed optimism that these new forums will help advance an international data governance framework that comports with democratic values.
  • Continued need for federal privacy legislation. Civil society, industry, and government representatives lamented the failure of the ADPPA in 2022 and hope there is an opportunity to move legislation forward in 2023. Thought leaders were virtually united in the view that the United States needs a national standard for privacy, and anticipate that, in its absence, a growing patchwork of state regulation will exacerbate challenges for consumers and businesses alike.
  • Digital divide. The digital divide in terms of broadband connectivity is alarming, yet there is hope the new funding authorized in 2022 will enable NTIA and the FCC to close the gap. FCC Commissioners Simington and Starks and NTIA Director Davidson spoke passionately about this effort.
  • The FTC will be active yet remains under-resourced. FTC representatives, including Commissioner Slaughter, spoke to the FTC’s continued efforts to police the online ecosystem through existing authorities, yet expressed serious concerns about the FTC’s resourcing.
  • The US-China tech competition. While concerns about the Chinese Communist Party (CCP) took a backseat at CES, Senator Warner described it as “the issue of our time,” pointing to how tech and national security are inextricably intertwined.
  • AI efforts continue and differences between the US and EU approaches remain. The primary vehicles for AI regulation remain, in the U.S., the NIST AI Risk Management Framework – a voluntary framework which will be published this month – and the EU AI Act. The AI Act continues to take a heavier hand in oversight and pre-market regulation, in contrast to the NIST approach.

These CES discussions provided insights into the perspectives of key government officials on important issues of the day. SIIA looks forward to continuing to work closely with Congress and the Executive Branch on the critical issues that align with our 2023 priorities.