In 2017, SIIA published its Ethical Principles for Artificial Intelligence and Data Analytics as a guide for companies developing and implementing advanced data analytic systems. There are many other such sets of ethical principles, including the famous Belmont principles of respect for persons, benefic ...
At yesterday’s FTC hearing on the business of big data I outlined some of the important uses of big data and analytics. SIIA companies are industry leaders using analytics and big data to improve business methods and processes. Among their innovative applications, they use these techniques to:
Last week, Google released a blog post outlining seven ethical principles to guide its work in artificial intelligence. The principles are:
Twenty years ago, Hal Varian and Carl Shapiro published what has become the classic introduction to network economics. Called Information Rules, it described and illustrated key economic concepts like network effects, positive feedback loops, standards wars, market tipping points and switching costs, using examples that are now so dated that they would not be recognizable to today’s digital natives. But the text drilled into a generation of entrepreneurs and policymakers the importance of understanding the basic economics of network industries before starting a network business or trying to throw a regulatory net around a network industry.
Today Hal Varian works as Google’s chief economist. In his personal capacity he delivered a crash course on AI and data to the U.S. Chamber of Commerce’s TecGlobal 2018 meeting on April 4.
He illustrated the familiar advances in machine learning through pattern recognition in voice and images, noting that it was t ...
On November 7, 2017, I made a short presentation to the AI Caucus event on AI and ethics, which is summarized in this blog.
The application of big data analytics has already improved lives in innumerable ways. It has improved the way teachers instruct students, doctors diagnose and treat patients, lenders find creditworthy customers, financial service companies control money laundering and terrorist financing, and governments deliver services. It promises even more transformative benefits: self-driving cars, smart cities, and a host of other applications will drive fundamental improvements throughout society and the economy. Government policymakers have worked with developers and users of these advanced analytic techniques to promote and protect these publicly beneficial innovations, and they should continue to do so.
In many circumstances, current law and regulation provide an adequate framework for strong public protection. Most of the legal concerns that animate public discussions can be resolved through strong and vigorous enforcement of rules that apply to advanced and tradi ...
Institutions involved in predictive modeling are using ever more advanced techniques to predict outcomes of interest, from credit scoring to facial recognition to spam detection. Institutions assess the performance of these models through standard measures such as accuracy (the number of correct predictions divided by the total number of predictions) or error rate (the number of incorrect predictions divided by the total number of predictions). In addition, they can assess the fairness of their predictions with respect to vulnerable groups using measures such as predictive parity across groups, statistical parity, or equal error rates.
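To make these measures concrete, here is a minimal sketch of how accuracy, error rate, and per-group comparisons might be computed for a binary classifier. The labels and predictions below are invented for illustration; the function names are my own, not from any standard library.

```python
# Minimal sketch: performance and group-fairness measures for a binary
# classifier. All data below are illustrative, not from any real model.

def accuracy(y_true, y_pred):
    """Correct predictions divided by total predictions."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def error_rate(y_true, y_pred):
    """Incorrect predictions divided by total predictions."""
    return 1 - accuracy(y_true, y_pred)

def positive_rate(y_pred):
    """Share of positive predictions; compared across groups, this
    is the quantity behind statistical parity."""
    return sum(y_pred) / len(y_pred)

# Hypothetical true labels and model predictions for two groups, A and B.
group_a = {"y_true": [1, 0, 1, 1, 0, 0], "y_pred": [1, 0, 1, 0, 0, 1]}
group_b = {"y_true": [0, 1, 0, 1, 1, 0], "y_pred": [0, 1, 0, 1, 0, 0]}

for name, g in [("A", group_a), ("B", group_b)]:
    print(name,
          "accuracy:", round(accuracy(g["y_true"], g["y_pred"]), 3),
          "error rate:", round(error_rate(g["y_true"], g["y_pred"]), 3),
          "positive rate:", round(positive_rate(g["y_pred"]), 3))

# Statistical parity asks whether the positive rates are similar across
# groups; the equal-error-rates criterion asks the same of the error rates.
```

A fairness audit along these lines would flag the gap between the groups' positive rates or error rates; which criterion matters depends on the application and, as the post notes, on legal and ethical context.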
Institutions also face legal and ethical obligations to explain the basis of their consequential decisions to those who are affected, to regulators and to the general public. The idea is that people have rights based on autonomy and dignity to be able to understand why institutions make the decisions they do. When predictive models are ...
My recent InfoWorld blog took aim at Elon Musk’s call for regulation of AI research. While a deregulation-minded Washington is unlikely to set up a new federal AI agency to oversee AI applications and research, Musk insists that he wants exactly that.
In remarks after his comments to the National Governors Association meeting, Musk clarified that “the process of seeking the insight required to put in place informed rules about the use and development of AI should start now.” Musk compared it to the process of establishing other government bodies regulating the use of technology in industry, including the FCC and the FAA. “I don’t think anyone wants the FAA to go away,” he said.
But this is even more worrisome. He is proposing an agency with full regulatory authority over every use of AI. Only after setting up such an omnibus regulatory structure would he have the agency figure out what it should do!
But this ...
Does the EU’s right to be forgotten extend to the whole world? The French data protection authority, CNIL, says yes and wants search engines to delist search results which contain information that violates the European Union’s right to be forgotten – not just for French users, not just for European users, but for all users everywhere. Google is prepared to remove offending search results for European users, but balks at removing material globally just because European courts find that it violates European privacy rules.
I’ve commented frequently about the tendency of foreign governments to interfere with speech rights in pursuit of legitimate public policy objectives. Is there hate speech or terrorist material online? Let’s require websites and social media platforms to purge it from their systems. Is there outdated or irrelevant material online? Let’s require search engines to delete links to this material. Is there fake news? Let’s require online websites to block it. In each case, the law would go too far. It would restrict far more speech than is necessary to achieve legitimate policy goals.