
SIIA Statement on EU Adoption of EU-U.S. Data Privacy Framework Adequacy Decision

The following statement can be attributed to Chris Mohr, President, Software & Information Industry Association.

The ability of companies to transfer data across borders is the lifeblood of the digital economy. Since the European Court of Justice’s Schrems II decision, these transatlantic data flows have been saddled with legal uncertainty.

SIIA welcomes the European Commission's adoption today of its adequacy decision under the General Data Protection Regulation, and the strong support expressed last week by the EU's member states. This marks the final step toward operationalizing the EU-U.S. Data Privacy Framework, an agreement that restores certainty for businesses and safeguards the fundamental rights of EU citizens.

We also commend President Biden and his administration for the critical role they have played in reaching this milestone, which will benefit businesses and consumers on both sides of the Atlantic. While today’s achievement is significant, we continue to call on Congress to pass a comprehensive federal privacy law that balances the rights of consumers with business certainty.


SIIA Announces New Partnership With EdSAFE AI Alliance

The Software & Information Industry Association (SIIA) and the EdSAFE AI Alliance (ESAA) announce a new partnership focused on the responsible use of AI in education. The two organizations will work together to engage with educators, technology companies, policymakers and regulatory bodies to advance initiatives that promote safe and equitable AI in education.

“SIIA is thrilled to partner with the EdSAFE AI Alliance to promote safe and equitable AI in education. Our members believe in the importance of the responsible deployment of technology in the classroom, and never has that been more important than with the advances in AI. We look forward to working with our member companies and ESAA on this critical topic,” said Paul Lekas, SVP of Global Public Policy.

“We are proud to launch this collaboration with SIIA and its membership. They will be adding to and joining the global community connecting and providing leadership in the development of a safer, more equitable and trusted education ecosystem of Artificial Intelligence,” said Dale Allen, PhD, Co-Founder of ESAA and President of the DXtera Institute. “ESAA brings together existing networks, frameworks, education organizations, ministries, research and standards bodies to provide global leadership, strengthen local work and develop benchmarks, frameworks and standards to enable innovation and build trust for the safe use of AI in education.”  

With over 450 members, SIIA is the principal trade association for the software and information industry worldwide and the only trade association for the ed tech industry. SIIA members provide critical tools for in-class, remote and hybrid learning that enhance the learning experience at all levels. As AI in education continues to evolve, SIIA remains at the forefront of representing its members and the industry at large to policymakers around the world.

ESAA is a global alliance of AI ed tech and education leaders committed to establishing a healthy ecosystem within the AI education industry. Spread across 20 countries with more than 200 participants, ESAA provides a foundation for building frameworks, community, resources and standards to establish a unified approach that helps users and regulatory bodies work together on AI in ed tech issues.

For more information please reach out to Pam Golden at pam@glapr.com or Lauren Lopez at lauren@dxtera.org.


SIIA statement on the Patent Examination and Quality Improvement Act of 2022

Today the Software & Information Industry Association (SIIA) issued this statement on the Patent Examination and Quality Improvement Act of 2022:

SIIA commends Senators Leahy and Tillis for the introduction of the Patent Examination and Quality Improvement Act of 2022. Poor-quality patents are a tax on innovation in the software and information industries. The bill recognizes the importance of examination time and resources, as well as the technical training needed for examiners of critical emerging technologies like artificial intelligence. We look forward to working with the sponsors to help this bill become law.


Recommendations for the Upcoming US-EU Trade & Technology Council

The United States and the European Commission (EC) will gather in France on May 16-17, 2022, for the second meeting of the US-EU Trade & Technology Council (TTC). The meeting provides an opportunity for the two sides to demonstrate in concrete terms how they intend to realize the vision of the Pittsburgh statement and pursue policy based on “shared democratic values.”

The Russian invasion of Ukraine has put into stark relief the strategic imperative of enhancing alignment between the United States and the EU on digital policy. Combined with challenges from other authoritarian regimes, the need for alignment on a vision for digital democracy has never been greater.

SIIA has provided input to both U.S. and EC officials on short-term measures to strengthen transatlantic alignment on digital policy. In this post, we highlight a few key recommendations we have made.

Recommendation for Working Groups 1 & 5: Foster regulatory interoperability on artificial intelligence (AI) by aligning on a risk-based approach for evaluating AI systems.

Alignment on data governance principles is essential to support a transatlantic approach to digital democracy. While the TTC may not serve as a mechanism to convey concerns about current EU digital regulation proposals – such as the Data Act – it does provide a forum to advance a vision of regulatory interoperability. This is of utmost importance for the development and use of AI systems. 

The EC’s introduction of the Artificial Intelligence Act (AI Act) proposal one year ago has driven the global discussion about how to regulate AI systems to address bias, reduce negative externalities, and promote positive uses. Though the AI Act must work through member states and the European Parliament, it has already changed the conversation about AI regulation worldwide.

At a high level, the AI Act incorporates a risk-based approach to AI regulation. SIIA supports a risk-based approach and has been encouraged by the expert-driven, multi-stakeholder work undertaken by the National Institute of Standards and Technology (NIST), the Organisation for Economic Co-operation and Development (OECD), and the Global Partnership on Artificial Intelligence (GPAI). To ensure regulatory interoperability, SIIA has recommended that U.S. and EC officials agree to develop guiding principles or standards for implementing risk-based approaches to AI systems that explicitly build on the work of NIST, the OECD, and GPAI. Such principles or standards will provide guidance for regulators on how to assess AI systems for safety, security, trustworthiness, and bias.

Recommendation for Working Groups 5, 6, and 9: Launch a transatlantic public-private partnership to create large, high-quality, and privacy-protective data sets that are accessible and usable by a wide range of actors for training, testing, and innovation.

The limited availability of robust, reliable, and trustworthy data sets is a key impediment to AI innovation. While data is an essential component of the AI stack, developing robust data sets that both meet the standards for responsible AI (including using truly inclusive and representative data sets) and minimize privacy concerns is extremely costly for most companies and entrepreneurs. That cost both limits the potential of AI and allows AI tools to be built on unreliable, untrustworthy, or potentially biased information. Data sets that do not meet standards for accuracy, reliability, trustworthiness, and bias mitigation carry societal risk.

Creating shared public datasets that can be used by researchers and innovators from the United States and EU member states can be critical to fostering new and better uses of AI technologies and ensuring that the data relied on by AI algorithms meets quality standards. 

SIIA has recommended two approaches to create shared public datasets.

  • First, a public-private effort to create large synthetic data pools that would be accessible to researchers, government, and industry. Synthetic datasets enable algorithms to run on data that reflect, rather than rely on, real-world data (a minimal illustrative sketch follows this list). This approach would allow for the creation of a robust data lake that can be vetted for accuracy, reliability, fairness, and so on. Moreover, it would not present the privacy and individual rights concerns that may arise from the collection, retention, sharing, and use of datasets built directly from personal information. We understand there is interest in the private sector in working with the government on this sort of initiative.
  • Second, a public-private effort to create large open data sets of personal information collected through enhanced notice and consent procedures. This could be modeled on the Casual Conversations dataset developed by Meta, which consists of over 45,000 videos of conversations with paid actors who consented to their information being used openly to help industry test bias in AI systems.
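
The ideas above are policy recommendations, not technical specifications, but a rough sketch may help make the first one concrete. The short Python example below is our own illustration, not part of SIIA's recommendation: it fits a simple statistical model to a stand-in "real" dataset and samples synthetic records that preserve aggregate structure without copying any individual record. All column names, sizes, and distributions are hypothetical.

```python
# Minimal sketch of the synthetic-data idea: fit a simple model to real
# records, then sample new records that share the aggregate statistics
# but are not copies of any real individual's data.
# All names, sizes, and distributions here are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for a real dataset: 1,000 records with two numeric attributes.
real = np.column_stack([
    rng.normal(40, 12, 1000),        # e.g., "age"
    rng.lognormal(10.5, 0.4, 1000),  # e.g., "annual_income"
])

# Fit a simple parametric model (mean vector + covariance matrix).
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample a synthetic data pool from the fitted model. The synthetic rows
# reflect the real data's aggregate structure without reproducing any
# individual record.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means     :", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```

A production effort of the kind envisioned here would rely on far richer generative models and formal privacy and fidelity testing; the point of the sketch is only that a synthetic pool reflects, rather than reproduces, the underlying data.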

SIIA has further recommended that NIST – possibly in coordination with an EU equivalent agency – lead the effort to ensure that these large data sets are appropriately screened before they are put into wide use. The data pools should be subject to intensive test, evaluation, verification, and validation (TEVV) procedures in accordance with NIST standards and with the involvement of government and private sector experts.
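
As an illustration of what an automated, TEVV-style screen might check before a shared data set is released, here is a hypothetical sketch; the actual criteria would be defined by NIST standards and the experts involved, not by this example.

```python
# Hypothetical sketch of basic pre-release checks on a shared data set:
# completeness, duplication, and representation. Real TEVV procedures
# would be far more extensive and defined by NIST guidance.
import numpy as np

def screen_dataset(features, labels, groups):
    """Run simple quality and balance checks on a candidate data set."""
    report = {}
    report["rows"] = features.shape[0]
    report["missing_values"] = int(np.isnan(features).sum())
    report["duplicate_rows"] = int(
        features.shape[0] - np.unique(features, axis=0).shape[0]
    )
    # Representation check: share of the smallest demographic group.
    _, group_counts = np.unique(groups, return_counts=True)
    report["smallest_group_share"] = round(group_counts.min() / group_counts.sum(), 3)
    # Label balance check: share of the least common outcome.
    _, label_counts = np.unique(labels, return_counts=True)
    report["minority_label_share"] = round(label_counts.min() / label_counts.sum(), 3)
    return report

# Hypothetical data for demonstration only.
rng = np.random.default_rng(0)
features = rng.normal(size=(1_000, 4))
labels = rng.integers(0, 2, 1_000)
groups = rng.choice(["A", "B", "C"], size=1_000, p=[0.6, 0.3, 0.1])
print(screen_dataset(features, labels, groups))
```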

Recommendation for Working Groups 5, 6, and 9: Launch a pilot project to accelerate advances in and applications of privacy enhancing technologies that would establish public and private use cases for productive uses of data that preserve the privacy and security of the underlying data.

Privacy enhancing technologies (PETs) are a group of technologies designed to protect the privacy and security of sensitive information; they include homomorphic encryption, differential privacy, federated learning, and synthetic data. PETs now have established uses in a wide range of contexts, including research, health care, financial crime detection, human trafficking mitigation, intelligence sharing, criminal justice, and more.
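
To make one of these concrete: differential privacy, named above, adds calibrated statistical noise to aggregate results so that the presence or absence of any one individual's record has little effect on what is published. The sketch below is our own minimal illustration with hypothetical data and parameters, not a description of any particular deployment.

```python
# Minimal sketch of differential privacy via the Laplace mechanism:
# publish a noisy count so no single record meaningfully changes the output.
# Data and parameter choices are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

def dp_count(values, threshold, epsilon):
    """Epsilon-differentially-private count of values above a threshold.

    A count query has sensitivity 1 (adding or removing one person changes
    the true count by at most 1), so Laplace noise with scale 1/epsilon
    suffices for epsilon-differential privacy.
    """
    true_count = int(np.sum(np.asarray(values) > threshold))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical sensitive attribute for 10,000 individuals.
incomes = rng.lognormal(mean=10.5, sigma=0.4, size=10_000)

print("true count:", int(np.sum(incomes > 50_000)))
print("dp count  :", round(dp_count(incomes, 50_000, epsilon=0.5), 1))
```

Other PETs named above, such as homomorphic encryption and federated learning, protect data at different points in the pipeline, but they share the same goal: extracting value from data without exposing the underlying records.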

Despite these myriad uses, adoption of the more advanced and capable PETs is not yet widespread. Further adoption of PETs can be an essential part of a democratic model of emerging technology in practice, as a counter to a model that sacrifices privacy, trust, safety, and transparency. PETs can enable the secure sharing of data between entities and across jurisdictional boundaries, expanding data access and utility and enabling organizations to reduce risk while making faster, better-informed decisions. PETs are also one way to address (as a technical, though not legal, matter) privacy-based restrictions on EU-US data flows.

___________________________________

 Footnotes:

1. Joshua New, AI Needs Better Data, Not Just More Data, Center for Data Innovation (Mar. 20, 2019); Tasha Austin, et al., Trustworthy Open Data for Trustworthy AI, Deloitte Insights (Dec. 10, 2021).

2. Meta AI, Casual Conversations Dataset (April 2021).

3. The Center for Data Ethics and Innovation, PETs Adoption Guide, Repository of Use Cases. See also, e.g., Kaitlin Asrow and Spiro Samonos, Federal Reserve Bank of San Francisco, Privacy Enhancing Technologies: Categories, Use Cases, and Considerations (June 1, 2021); Luis T.A.N. Brandao and Rene Peralta, NIST Differential Privacy Blog Series, Privacy-Enhancing Cryptography to Complement Differential Privacy (Nov. 3, 2021).

4. Andrew Imbrie, et al., Privacy Is Power: How Tech Policy Can Bolster Democracy, Foreign Affairs (Jan. 19, 2022).

5. Two use cases involving SIIA members help to illustrate this point. First is a partnership between Enveil (an SIIA member) and DeliverFund, the leading counter-human trafficking intelligence organization, which leveraged Enveil's PETs-powered solutions to accelerate reach and efficiency, allowing users to securely and privately screen existing assets at scale by cross-matching and searching across DeliverFund's extensive data. Second is Meta's use of secure multi-party computation, on-device learning, and differential privacy tools to minimize the amount of data collected in the advertising space while ensuring that personalized content reaches end users.

6. This is not just an industry view. As the White House stated in announcing the new US-UK challenge, PETs “present an important opportunity to harness the power of data in a manner that protects privacy and intellectual property, enabling cross-border and cross-sector collaboration to solve shared challenges.” White House Office of Science and Technology Policy, US and UK to Partner on Prize Challenges to Advance Privacy-Enhancing Technologies (Dec. 2021); White House, Remarks of Jake Sullivan (July 13, 2021). In addition, the U.S. Census Bureau plans to launch a series of pilot projects deploying PETs “to build a platform that will enable secure multi-party computation, encryption technologies, and differential privacy to promote better data sharing both domestically and abroad.” White House, Fact Sheet: The Biden-Harris Administration is Taking Action to Restore and Strengthen American Democracy (Dec. 8, 2021). This energy complements growing global interest. For example, the UK Information Commissioner’s Office is exploring guidance on PETs and ways to incorporate PETs into data regulations. Recently, according to reports, the United Nations launched a “PETs Lab” to test PETs against data sets from the United States, the UK, Canada, Italy, and the Netherlands, and to work with researchers and the private sector to develop use cases and create guidance. See United Nations, Global Platform: Data for the World; The Economist, The UN is testing technology that processes data confidentially (Jan. 29, 2022).

 


Associations Council Lunch & Learn: Maximizing Conferences to Create New Content

This webinar was hosted March 17, 2022.

Conferences and events are a treasure trove of information that can be reused in other ways post-event. Discover how associations are turning information delivered at conferences and events into other kinds of content deliverables such as magazine articles, blog posts, online courses, webinars, videos, podcasts, books and more.

To watch the video, please log in.