On November 7, 2017 I made a short presentation to the AI Caucus event on AI and ethics, which is summarized in this blog.
The application of big data analytics has already improved lives in innumerable ways. It has improved the way teachers instruct students, doctors diagnose and treat patients, lenders find creditworthy customers, financial service companies control money laundering and terrorist financing, and governments deliver services. It promises even more transformative benefits with self-driving cars and smart cities, and a host of other applications will drive fundamental improvements throughout society and the economy. Government policymakers have worked with developers and users of these advanced analytic techniques to promote and protect these publicly beneficial innovations, and they should continue to do so.
In many circumstances, current law and regulation provide an adequate framework for strong public protection. Most of the legal concerns that animate public discussions can be resolved through strong and vigorous enforcement of rules that apply to advanced and tradi ...
Institutions involved in predictive modeling are using ever more advanced techniques to predict outcomes of interest, from credit scoring to facial recognition to spam detection. Institutions assess the performance of these models through standard measures such as accuracy (the number of correct predictions divided by the total number of predictions) or error rate (the number of incorrect predictions divided by the total number of predictions). In addition, they can assess the fairness of their predictions with respect to vulnerable groups using measures such as predictive parity across groups, statistical parity, or equal error rates.
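The measures above can be sketched in a few lines of code. This is a minimal illustration, assuming binary (0/1) predictions and two hypothetical groups; the data is invented for the example and not drawn from any real model.

```python
# Illustrative performance and fairness measures for a binary classifier.
# The predictions, outcomes, and group labels below are hypothetical.

def accuracy(y_true, y_pred):
    # correct predictions divided by total predictions
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def error_rate(y_true, y_pred):
    # incorrect predictions divided by total predictions
    return 1 - accuracy(y_true, y_pred)

def statistical_parity(y_pred, groups):
    # rate of positive predictions per group; parity means equal rates
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def error_rates_by_group(y_true, y_pred, groups):
    # "equal error rates" compares each group's error rate
    rates = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        rates[g] = sum(t != p for t, p in pairs) / len(pairs)
    return rates

# Hypothetical outcomes, predictions, and group membership
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))                     # 0.75
print(statistical_parity(y_pred, group))            # equal positive rates
print(error_rates_by_group(y_true, y_pred, group))  # equal error rates
```

In this toy example both groups happen to receive positive predictions at the same rate and have the same error rate; in practice these metrics can conflict, which is why institutions must choose which notion of fairness matters for a given application.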
Institutions also face legal and ethical obligations to explain the basis of their consequential decisions to those who are affected, to regulators and to the general public. The idea is that people have rights based on autonomy and dignity to be able to understand why institutions make the decisions they do. When predictive models are ...
My recent InfoWorld blog took aim at Elon Musk’s recent call for regulation of AI research. While a deregulation-minded Washington is unlikely to set up a new federal AI agency to oversee AI applications and research, Musk insists that is exactly what he wants.
In remarks after his comments to the National Governors Association meeting, Musk clarified that “the process of seeking the insight required to put in place informed rules about the use and development of AI should start now.” Musk compared it to the process of establishing other government bodies that regulate the use of technology in industry, including the FCC and the FAA. “I don’t think anyone wants the FAA to go away,” he said.
But this is even more worrisome. He is proposing to establish an agency with full regulatory authority over every use of AI. Only after setting up such an omnibus regulatory structure would he have the agency figure out what it should do!
But this ...
Does the EU’s right to be forgotten extend to the whole world? The French data protection authority, CNIL, says yes and wants search engines to delist search results which contain information that violates the European Union’s right to be forgotten – not just for French users, not just for European users, but for all users everywhere. Google is prepared to remove offending search results for European users, but balks at removing material globally just because European courts find that it violates European privacy rules.
I’ve commented frequently about the tendency of foreign governments to interfere with speech rights in pursuit of legitimate public policy objectives. Is there hate speech or terrorist material online? Let’s require websites and social media platforms to purge it from their systems. Is there outdated or irrelevant material online? Let’s require search engines to delete links to this material. Is there fake news? Let’s require online websites to block it. In each case, the law would go too far. It would restrict far more speech than is necessary to achieve legitimate policy goals.
You probably have gotten a call or email from your credit card issuer asking if you made a particular transaction. Ever wonder what triggered it? Turns out it is a form of artificial intelligence called a neural network. Instead of creating general rules about what transactions are likely to be fraudulent, a neural network just looks at all your transactions and figures out your very own individual pattern of usage. If a new transaction is significantly out of pattern, that’s when you get the call or the email.
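The idea can be illustrated with a deliberately simplified sketch. Real fraud systems use neural networks over many transaction features; this toy version, invented for illustration, uses only the dollar amount and flags a transaction when it falls far outside the cardholder's own historical pattern.

```python
# A highly simplified, hypothetical sketch of out-of-pattern detection.
# Production fraud models are neural networks over many features; here we
# use a single feature (amount) and a standard-deviation threshold.

import statistics

def is_out_of_pattern(history, amount, threshold=3.0):
    """Return True when `amount` is more than `threshold` standard
    deviations from this cardholder's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > threshold

# Hypothetical purchase history for one cardholder, in dollars
history = [24.50, 31.20, 18.75, 42.00, 27.30, 35.10, 22.40, 29.90]

print(is_out_of_pattern(history, 30.00))   # typical purchase -> False
print(is_out_of_pattern(history, 950.00))  # far out of pattern -> True
```

The key point survives the simplification: the system learns each cardholder's individual pattern rather than applying one general rule about fraud, which is why the same transaction might trigger a call for one customer and not another.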
U.S. companies have been bringing manufacturing home, and with this has come almost a quarter of a million jobs since 2010. And more are on their way — Deloitte reports that about half of U.S. manufacturing executives plan to bring home some portion of their operations by 2020. But there’s a hard truth beneath this positive trend: While domestic manufacturing is near all-time highs, America is not fully prepared to fill the jobs of the future.
Last week, EU Justice Commissioner Vera Jourova announced that she was going to propose a law on law enforcement access to encrypted data.
At last week’s RightsCon in Brussels, much of the talk was about “fake news” and what to do about it. I was on one of several panels devoted to the topic and found the conversation enlightening. Here’s what I said and some of my reactions from the panel.
The panel’s title was “Resisting Content Regulation in the Post-Truth World: How to Fix Fake News and the Algorithmic Curation of Social Media.” So, unsurprisingly, the panelists largely agreed that the government should stay out of the way. I met no resistance when I said that freedom of expression means that governments should not determine what is and what is not fake news; that’s a path to censorship, and we don’t want to go there.
I also got buy-in from my second big point, which was that Internet platforms are playing and ought to play a crucial role in controlling the spread of fake news.
This role has two distinct components. Platforms ha ...