I had the honor of representing SIIA at the July 16-17, 2019 “International Conference on AI-Emerging Technologies and Intellectual Property – Connecting the Bits.” Many thanks to the World Intellectual Property Organization (WIPO), the Israeli Patent Office, and the Israeli Innovation Authority. My panel presentation on “AI and Regulation – The Broader Picture” can be found here. During my remarks, I focused on trade agreement protections relevant to AI, protections for proprietary data, and explainability/auditability issues. A few themes struck me from the two days of conversations with colleagues. First, participants did not contemplate omnibus AI legislation or regulation. Second, intellectual property remains a key tool in incentivizing AI innovation. Third, there are a number of interesting conversations underway about AI and patent law, some of which touch directly on SIIA advocacy. Fourth, there was a great deal of interest in the new protections for source code and algorithms in recent trade agreements. While they are found in the digital or e-commerce chapters of those agreements rather than the IP chapters, they constitute an important element in protecting AI innovation. Fifth, explainability and, in effect, auditability will continue to be a major focus of international and national discussions on AI.
OMNIBUS LEGISLATION/REGULATION NOT CONTEMPLATED
As SIIA has written, broad AI regulation encompassing all possible applications would be difficult to draft and would probably be out of date before entry into force. That does not mean that law and regulation are irrelevant; they remain highly relevant. In the United States, for instance, laws such as the Civil Rights Act, the Affordable Care Act, the Genetic Information Nondiscrimination Act, and the Fair Credit Reporting Act continue to apply in the AI context. Updates might be needed to prevent consumer harm, but they should be domain-specific to remain effective. SIIA urges the next European Commission President, Ursula von der Leyen, to keep the OECD AI Principles in mind as the Commission considers how best to promote AI innovation in Europe during the next mandate. Patents and scientific papers in the AI field are dominated by the United States and China. Ensuring “trustworthy AI” for EU citizens, while concomitantly unleashing Europe’s undoubted potential in this space, will be an important challenge for the next Commission.
INTELLECTUAL PROPERTY REMAINS A KEY TOOL IN INCENTIVIZING AI INNOVATION
WIPO Director-General Francis Gurry and Andrei Iancu, United States Under Secretary of Commerce for Intellectual Property and Director of the Patent and Trademark Office, appropriately highlighted the role of IP in promoting AI. Gurry pointed to a 2019 WIPO report on this topic, “WIPO Technology Trends: Artificial Intelligence.” He noted that scientific papers on AI have tripled in ten years, and that AI patent filings through WIPO’s Patent Cooperation Treaty (PCT) really took off in 2013. Iancu said that 26% of patent filings at the United States Patent and Trademark Office (USPTO) now involve AI in some fashion, and added that USPTO is doubling the number of patent examiners with AI expertise. USPTO has been active in this area, including by organizing a January 19, 2019 conference on AI and intellectual property considerations. There seemed to be a general consensus among participants that the current IP system is not impeding AI innovation; on the contrary, it appears to be accomplishing its purpose of promoting innovation. However, certain aspects of patent law might evolve.
DISCUSSION ON THE IMPACT OF AI ON POSSIBLE CHANGES IN PATENT LAW
Professor Ryan Abbott of Surrey University led the discussion on this topic. He noted that Senators Coons and Tillis, through the STRONGER Patents Act, are seeking changes to patent eligibility. Essentially, what they seek to do is make it possible to patent mathematical formulas and algorithms, i.e. abstract ideas, through a change to Section 101 of the U.S. patent law. This would effectively overturn the Alice decision. SIIA opposes this proposed change because there is no evidence that the current patent system is disadvantaging U.S. technology companies – witness the explosion in AI-related patent applications. SIIA testified to this effect before the Senate Judiciary Committee on June 5, 2019. I made it clear in Tel Aviv that SIIA, along with many technology companies and other trade associations, does not support this change.
Abbott went on to discuss briefly the question of AI and a text and data mining exception to copyright (Gurry also flagged this as a significant issue during his remarks). The professor noted the EU’s limited exception for scientific research and Japan’s much broader exception. I emphasized that the EU exception requires lawful access to the text- and data-mined works, as well as compliance with security requirements and other conditions established by rightsholders.
Finally, Abbott talked about the issue of machine invention. This is not a new question, but it is being more widely discussed because of AI developments. Right now, he suggested, people are being credited as inventors of inventions actually produced by machines. The professor suggested that, in order to incentivize investors in AI inventing machines, machines could perhaps one day be considered inventors. Abbott added that questions of how to determine “ordinary skill in the art” and “inventive step” might have to be fine-tuned as we move into an era of AI-enabled machine inventions.
NEW PROTECTIONS FOR SOURCE CODE AND ALGORITHMS
There was considerable interest in the protections for source code in Article 14.17 of the Trans-Pacific Partnership (TPP) and Article 8.73 of the EU-Japan Economic Partnership Agreement, as well as the protections for source code and algorithms in Article 19.16 of the United States-Mexico-Canada Agreement (USMCA). These protections are found in the digital or e-commerce chapters of the agreements, not the intellectual property chapters. In effect, however, they protect intellectual property rights because they prohibit trade agreement parties from requiring disclosure of source code or algorithms as a condition of doing business. The USMCA’s provisions go furthest because they protect algorithms as well as source code. Policymakers should take a particularly close look at the USMCA because the agreement makes clear that national authorities can gain access to source code and algorithms in the course of an investigation. The provision contains language indicating, however, that source code and algorithms disclosed in an investigation are not to be passed on to competitors. Chapter 19 footnote 6 makes this clear: “This disclosure shall not be construed to negatively affect the software source code’s status as a trade secret, if such status is claimed by the trade secret owner.” The USMCA’s Chapter 17 on financial services is also relevant because it combines a binding data flow obligation with regulatory access to data, appropriately cabined so that access is not abused for protectionist purposes.
EXPLAINABILITY A BIG ISSUE GOING FORWARD
There was a lot of conversation about explainability. In its Issue Brief entitled “Algorithmic Fairness” (see especially pages 5-6), SIIA offers suggestions for how companies can enhance trust in the AI-enabled products and services they offer without disclosing source code or algorithms. Besides respect for intellectual property, one participant mentioned the need to ensure that transparency does not allow people to “game” different applications. I mentioned that SIIA is recommending to the U.S. National Institute of Standards and Technology (NIST) that it prioritize standards development in the explainability/auditability area, perhaps focusing on two or three applications that might be relevant more broadly. Professor Tarsky of Haifa University said that explainability was quite feasible from a technical standpoint – “it was just a matter of cost.” Of course, from an industry standpoint cost matters, and explaining to laypersons what analytics-enabled explanations really mean will remain challenging. Nonetheless, it is worthwhile exploring technical solutions to the perceived AI “black box” problem that do not undermine competition in source code and algorithms. That is why SIIA recommended that NIST focus on research into auditability standards, which in effect incorporate the notion of explainability.
THE CONFERENCE DID HELP CONNECT THE BITS
As DG Gurry put it, intellectual property protection is a “necessary but not sufficient” condition for stimulating AI innovation; many other factors are in play as well. From SIIA’s perspective, the right way to go is intellectual property protection, combined with private sector access to data and with carefully considered, targeted (not omnibus) AI regulation, adopted only if needed and focused on preventing harm. Dr. Michal Shur-Ofry, Senior Lecturer at Hebrew University, suggested in this context that the GDPR ought to contain an exception for the use of data for research purposes. (Note: it is possible that the public interest or legitimate interest could serve as a basis for such use of data, but it might be worthwhile considering a more explicit research exception in future GDPR guidance.) She added that privacy law should be viewed in “utilitarian” terms, which implies the kind of harm-based approach to regulation that SIIA supports. Finally, and certainly not least, a number of participants noted the importance of permitting cross-border data flows, a core SIIA policy goal.