Implementing the Right to Be Forgotten

It isn’t easy to implement the European Court of Justice’s right to be forgotten decision.  According to one press report, Google has received 70,000 requests for link removal through its online form since the program went into effect last month.  Another report estimates that removal requests are arriving at a rate of one every seven seconds.  As predicted here after the court’s decision in May, the results are not pretty. But the fault lies not in the implementation but in the flawed underlying decision, which restricts free expression and puts substantial legal discretion in the hands of search engines.

Let’s recall how extreme the decision was.  It said that under European law privacy trumps free expression in the context of Internet search.  The right to respect for private life and the right to the protection of personal data “override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name.”  There can be an exception to this general rule: “…for particular reasons, such as the role played by the data subject in public life, that the interference with his fundamental rights is justified by the preponderant interest of the general public in having, on account of inclusion in the list of results, access to the information in question.”

The court’s standard for determining whether privacy interests are implicated was whether the information was “inadequate, irrelevant or no longer relevant, or excessive…”  The court added that triggering privacy interests did not require a finding that “the inclusion of the information in question in that list causes prejudice to the data subject.” So what would trigger privacy interests? To say the court’s guidance is extraordinarily vague is an understatement.

Given the court’s reference to search engines making access to information “appreciably easier” and playing “a decisive role” in the dissemination of information on the Internet, it is hard to avoid the conclusion that the court intended to limit the effective dissemination of information on the Internet. But it did so by granting search engines the discretion to make delicate value judgments, without specific guidance on how to make them.

So how’s the implementation going? Certainly Google hasn’t done everything right. Taking down some links and then apparently restoring them seems a clear misstep. But on the whole they’ve done a pretty balanced job.  They require the filing of a request, including a statement of why release of the information would not be in the public interest.   There is no indication that they are granting all requests or turning them all down.  They notify the publisher of the links removed from search results, but they do not reveal the identity of the person requesting the takedown, since this would reveal the very information the data subject was trying to conceal.  And they are following the law by limiting takedowns to EU citizens and to EU search results, rather than extending the EU regime to the world.

Some commenters suggest that search engines are granting too many deletion requests and should instead routinely decline them all, which would force data subjects to go to data protection authorities or the courts to get links removed.

A Dark Day for Free Expression on the Internet

The European Court of Justice’s recent decision granting EU citizens a right to be forgotten by search engines is a major blow to free expression on the Internet.  Reaction from media outlets like the New York Times and the Financial Times has been harshly critical, and rightly so.   The key thing for Internet users and for public policymakers in Europe is to understand how this ruling might reduce the amount of accurate information available on the Internet.

The decision did not spring from any impulse to censorship, but from an honest attempt to vindicate the fundamental right to privacy in a digital age.  That’s why any comparison to authoritarian government censorship of the Internet is just overblown rhetoric. But unless it is modified or re-interpreted through further jurisprudence or legislation, this decision might well be the turning point where free expression on the Internet begins to recede from its current high water mark.

What’s the threat to free expression?  The court attempts to balance the interest of search engine users in access to information and the privacy interests of individuals who are the subject of lawfully published material available on the Internet.  In making that balance, however, the court says:  “the data subject’s rights… override, as a general rule, that interest of internet users…”  That’s the problem in a nutshell: under the decision privacy trumps free expression on the Internet.

The court envisages a process in which a person who thinks that search results intrude into his private life presents a complaint to the search engine stating that one or more links in the results refer to data that appear to be “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed and in the light of the time that has elapsed.”  The search engine must then “duly examine” this complaint, and if it finds that the links meet this standard of “inadequacy, irrelevance or excessiveness” it must delete them from its search results.  This must be done even if the data are accurate and their initial publication was lawful.  An exception to this general requirement allows the search engine to retain the links in search results when “there are particular reasons, such as the role played by the data subject in public life, justifying a preponderant interest of the public in having access to the information when such a search is made.”

This process is heavily weighted in favor of a complainant, and allows free expression to function only as a defense against a finding of a privacy violation.
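To make the structure of that process concrete, here is a minimal sketch, in illustrative Python, of the decision logic the ruling appears to envisage. The field names, and the reduction of the search engine’s “due examination” to boolean flags, are our own simplifications for illustration, not anything found in the opinion:

```python
# Hypothetical sketch of the ruling's decision logic; not an actual system.
from dataclasses import dataclass

@dataclass
class Complaint:
    inadequate: bool        # data "inadequate" for the purposes of processing
    irrelevant: bool        # "irrelevant or no longer relevant"
    excessive: bool         # "excessive ... in the light of the time that has elapsed"
    public_role: bool       # data subject plays a role in public life
    preponderant_public_interest: bool  # public interest overrides privacy

def must_delist(c: Complaint) -> bool:
    """Return True if, under this sketch, the links must be removed."""
    privacy_triggered = c.inadequate or c.irrelevant or c.excessive
    if not privacy_triggered:
        return False
    # Privacy overrides "as a general rule"; the exception applies only for
    # particular reasons such as the data subject's role in public life.
    if c.public_role and c.preponderant_public_interest:
        return False
    return True
```

Notice how the structure itself encodes the imbalance: once any of the three vague triggers fires, removal is the default, and public interest enters only as an exception.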

The particular case before the court illustrates the process. A Spanish man incurred certain debts many years ago, a fact reported accurately in a newspaper at the time.  He has since resolved the debts.  But a search of his name today returns the original story in a prominent place, thereby recirculating true but outdated information about him.  He asked the search engine to remove these links.  Under the new regime, the search engine would be required to go through the process above: if it finds that the information is inadequate, irrelevant or excessive, it must then consider whether the role played by the data subject in public life gives the public a preponderant interest in access to the information.  If not, it must delete the links.

Search engines must assess what this ruling means for their internal policies and practices and bring them into compliance.  The cost and burden to these companies are substantial and might make operating an effective search engine in Europe a nearly impossible task.

But the real impact of the ruling is that it is likely to reduce the amount of accurate information available on the Internet.

Even a preliminary review of the decision reveals substantial challenges:

What do the new standards mean? Are there really three different bases for deleting search results – irrelevance, inadequacy or excessiveness? A standard of excessiveness is particularly troubling and could potentially require search engines to assess whether a publisher gathered too much accurate information about a person.

What role do other interested parties have in a complaint?  The original publisher, for instance, might not want links to his stories suppressed in search results.  Other people and organizations are typically mentioned in published stories.  What if they want links to the stories available and think their rights are violated by suppression?  Are the search engines supposed to convene a process to allow all interested parties to present evidence as if they were a court?

How broadly does the ruling apply?  It covers search engines, but many companies are in the business of aggregating lawfully acquired accurate information from a variety of public and private sources and making it available to the public.  The same story that the search engine would have to delete is also available in thousands of commercially available databases throughout the world.  Are those providers of information services subject to deletion demands from EU data subjects even if they are not based in the EU?

These are just preliminary questions that must be clarified going forward.  But a new day of privacy-based deletion requests is dawning.  Unless EU policymakers intervene, the new day is likely to be a dark one for free expression.


Mark MacCarthy, Vice President, Public Policy at SIIA, directs SIIA’s public policy initiatives in the areas of intellectual property enforcement, information privacy, cybersecurity, cloud computing and the promotion of educational technology.

SIIA Welcomes Administration’s Privacy and Big Data Report; Says Current Regulatory Framework Can Respond to Potential Problems

SIIA today responded to the release of the Administration’s Report on Privacy and Big Data.  SIIA welcomed the report’s assessment that big data provides substantial public benefits and will provide more in the future.  The organization believes the current regulatory framework is adequate to address potential concerns.

As the report recognizes, the collection and analysis of data is leading to better consumer products and services and innovations in healthcare, education, energy, and the delivery of government benefits.  SIIA member companies are driving this innovation by leading the development of techniques for analyzing big data, while also working to safeguard personal data.  We will continue to work with the Administration to promote the responsible use of data to drive innovation, job-creation and economic growth.

The Administration’s work to examine discrimination concerns is extremely important.  It is our view that current law works.  Vigilantly enforced consumer protection and antidiscrimination laws are strong and flexible enough to prevent unfair practices.  Industry efforts are also safeguarding data privacy and preventing discriminatory practices.  Burdensome new legal requirements would only impede data-driven innovation and hurt the ability of U.S. companies to create jobs and drive economic growth.

As recently as three weeks ago the Federal Trade Commission used existing authority under the Fair Credit Reporting Act to bring cases against companies that used data in ways that violated the Act’s consumer protection provisions. Other possible unfair or discriminatory practices in the use of data may already be regulated under other statutes, including Title VII of the Civil Rights Act of 1964, the Equal Credit Opportunity Act, the Fair Housing Act and the Genetic Information Nondiscrimination Act of 2008.

In addition, SIIA is delighted that the report recognized the need to reform the Electronic Communications Privacy Act (ECPA). As users increasingly store email and other communications remotely, it is critical to reform ECPA to establish a warrant requirement for access to these communications, regardless of where they are stored.



Big Data Analytics: Benefits and Discrimination

As SIIA noted in a series of reports, big data analytics is a source of significant innovations in health care, education, energy and the delivery of government benefits.

Analytics has been with us for some time, but big data analytics really is something new. When data sets are very large in volume, diverse in the variety of data types they contain, and changing with dramatic velocity, standard techniques of data analysis can be supplemented with new computational techniques that take full advantage of the wealth of new and different input data.  These techniques enable novel insights to emerge from the data in the form of correlations that could not have been anticipated from previous theories or empirical research.  These unexpected correlations can then become essential elements of increasingly accurate predictive models.

These predictive models have to pass all the normal tests of statistical and empirical significance before they can be successfully used for scientific research or business purposes, which is one reason some of the recent critiques of big data miss the mark.  Moreover, the track record of developing accurate, validated predictive models in a wide range of endeavors is well established.  With the increase in input data coming from sensors embedded in everyday things and linked to communications networks, the phenomenon known as the Internet of Things, it is highly likely that the new analytical techniques will become increasingly accurate and will spread to new domains of activity.
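As a rough illustration of what that validation discipline looks like in practice, here is a minimal sketch on synthetic data: screen many variables for unexpected correlations with an outcome, then test a predictive model on held-out data rather than trusting the in-sample fit. The data, variable indices, and model choice are invented for illustration, not drawn from any study cited here:

```python
# Illustrative sketch: correlation screening plus held-out validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 200))          # many candidate input variables
y = (X[:, 17] + 0.5 * X[:, 42] + rng.normal(size=10_000)) > 0  # hidden signal

# Screen for unexpected correlations between each variable and the outcome.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top = np.argsort(-np.abs(corrs))[:5]
print("strongest candidate predictors:", top)

# The usual safeguard: fit on one sample, then test on data the model
# has never seen, so a spurious correlation cannot masquerade as insight.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```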

The potential common good benefits of this development are so large that a major focus of the Administration’s technology policy should be the promotion and advancement of big data analytics.

It is likely that the upcoming Administration report on privacy and big data will highlight these extraordinary benefits of increasingly accurate analytical predictions.  In his earlier comments at the April 1 workshop on big data and privacy in Berkeley, White House Counselor John Podesta, who has been tasked by President Obama to lead the review effort, referred to the experience of a hospital showing how big data can literally save lives.  This experience is also described in a recent SIIA blog.  The hospital contracted with an outside firm to analyze millions of health data points about newborn infants and discovered that a pattern of invariance across a range of vital-sign indicators predicted the onset of an extremely high and dangerous fever twenty-four hours later.  This advance warning enabled hospital personnel to start treatment ahead of time.
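The report does not describe the firm’s actual method, but the kind of “invariance” signal described above could be sketched, very roughly, along these lines. The window size, threshold, and relative-variability test are hypothetical choices of ours, purely for illustration:

```python
# Hypothetical sketch: flag a window in which vital signs stop varying.
import numpy as np

def invariance_alert(vitals: np.ndarray, window: int = 60,
                     threshold: float = 0.05) -> bool:
    """vitals: (time, signals) array, e.g. per-minute heart rate,
    respiration, and temperature readings.  Returns True if every
    signal's variability in the latest window falls below the
    threshold, relative to its overall variability."""
    recent = vitals[-window:]
    overall_sd = vitals.std(axis=0) + 1e-9   # avoid division by zero
    relative_sd = recent.std(axis=0) / overall_sd
    return bool((relative_sd < threshold).all())

# In a monitoring loop, a True result would prompt earlier treatment.
```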

It is also likely that the report will focus some attention on big data and discrimination. This concern was highlighted several weeks ago, when a coalition of civil rights groups and privacy advocates issued civil rights principles for the era of big data. One of the principles was to ensure “fairness in automated decisionmaking.”  The group also warned that new big data analytical techniques “…can easily reach decisions that reinforce existing inequities.”  These concerns are legitimate and these groups are right to draw attention to these possibilities.

In his earlier April 1 comments, Podesta raised these same issues:

“Big data analysis of information voluntarily shared on social networks has showed how easy it can be to infer information about race, ethnicity, religion, gender, age, and sexual orientation, among other personal details. We have a strong legal framework in this country forbidding discrimination based on these criteria in a variety of contexts. But it’s easy to imagine how big data technology, if used to cross legal lines we have been careful to set, could end up reinforcing existing inequities in housing, credit, employment, health and education.”


Ohlhausen on Big Data and Consumer Harm

At today’s conference on Privacy Principles in the Era of Massive Data, co-sponsored by the Georgetown University McCourt School of Public Policy and the Georgetown Law Center, Maureen K. Ohlhausen, Commissioner at the Federal Trade Commission, delivered a thoughtful keynote address on The Power of Data.

She emphasized the value of the new computational techniques that arise in the context of data sets that are larger in volume than traditional data sets, composed of a greater variety of data types, and changing at a much faster velocity. These characteristics of volume, variety and velocity enable data scientists to generate insights that were previously impossible to anticipate from traditional static databases.

This unanticipated quality of the new computational techniques challenges traditional notions of privacy protection. For instance, it creates a tension with the traditionally understood privacy principles of notice and purpose specification.  As Commissioner Ohlhausen pointed out succinctly, “…companies cannot give notice at the time of collection for unanticipated uses.”  These novel uses also challenge the idea that data collection should be minimized and data discarded as soon as possible:

“Strictly limiting the collection of data to the particular task currently at hand and disposing of it afterwards would handicap the data scientist’s ability to find new information to address future tasks.”

So what should the FTC do?  The Commissioner approvingly referenced the FTC’s action in the Spokeo case, where the agency fined the company for failure to follow the requirements of the Fair Credit Reporting Act.  Going forward she thinks that the FTC “should use its traditional deception and unfairness authority to stop consumer harms that may arise from the misuse of big data.”

SIIA agrees.  In our recent White Paper and in comments filed with the FTC for its consumer scoring workshop, we urged the Commission to use its existing powers under the current regulatory regime to take bad actors to task for failing to follow consumer protection rules.   This can only help the growth of big data analysis by making sure that edge-riders do not tarnish the new computational techniques.

Moreover, the Commissioner thinks that the FTC should continue its convening role in holding workshops to explore “the nature and extent of likely consumer and competitive benefits and risks.”  In this regard, SIIA found the FTC’s March workshop insightful and looks forward to the Commission’s workshop in September on big data and low income and underserved consumers.

As to principles that should govern the FTC’s actions on big data going forward, the Commissioner was clear that the agency “must identify substantial consumer harm before taking action.”  SIIA endorses this idea that only a significant risk of substantial consumer harm justifies new regulatory action.

Ben Wittes of the Brookings Institution, commenting on the discussion panel that followed the Commissioner’s talk, echoed this theme of focusing on harm rather than abstract notions of privacy.  In his view, when a data use falls outside the normal social expectations of the context in which the data was collected, agencies should consider regulatory action only if that use is hostile to the data subject’s interests.  Determining which uses are harmful then becomes a primary task for advocates, industry and policymakers.



Piketty’s Historical Perspective on Economic Inequality

The debate over income inequality and job losses in the U.S. too often devolves into overly simplistic and narrow arguments. Thankfully, deeper and more thoughtful analyses are emerging, and one of them, French economist Thomas Piketty’s recently translated book, Capital in the Twenty-First Century, is making waves, at least among the center-left of the political spectrum in the United States.  It is a major contribution to the inequality debate.  Piketty takes a historical view of inequality, arguing that:

“When the rate of return on capital exceeds the rate of growth of output and income, as it did in the nineteenth century and seems quite likely to do again in the twenty-first, capitalism automatically generates arbitrary and unsustainable inequalities that radically undermine the meritocratic values on which democratic societies are based.”

This long-term trend, he says, accounts for the dramatic growth of economic inequality over the last thirty years. Piketty’s historical perspective reminds us that, in seeking to understand decades-long economic trends, we might be taking a view that is too narrow and too short-term.  It is worth keeping a wider perspective when it comes to thinking of possible policy responses to these trends.
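A stylized numerical illustration (ours, not Piketty’s) shows why r > g matters: wealth compounding at the rate of return on capital steadily outruns incomes growing with output. The rates below are assumed round numbers, chosen only to make the mechanism visible:

```python
# Illustrative only: compound wealth at r and income at g, compare the ratio.
r, g, years = 0.05, 0.015, 50        # assumed rates, for illustration
wealth, income = 100.0, 100.0        # index both to 100 at year zero

for _ in range(years):
    wealth *= 1 + r                  # capital compounds at the return r
    income *= 1 + g                  # incomes grow with output at g

print(f"after {years} years, wealth/income = {wealth / income:.2f}")
# The ratio grows by (1.05 / 1.015) ** 50, roughly 5.4x over fifty years.
```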

This wider perspective, backed by detailed analyses of new historical data sets, promises to generate spirited debate in the coming years.  It also provides a useful counterpoint to the continuing drumbeat of articles and books, such as the very thoughtful best-selling The Second Machine Age, linking job loss and economic inequality to the more recent spread of high technology and software throughout modern economies.

The reality of inequality is far more complex.  For one thing, generalized talk of economic inequality masks several different recent developments: a fall in labor’s share of total income, an increase in the share of compensation going to top executives, and an increase in economic inequality among employees.

Moreover, the short-term causal factors that might contribute to these different types of inequality are hard to disentangle.  The rise in inequality has been associated with the intensification of skill-biased technological change, which increases the demand for skilled workers, and with globalization, which decreases the bargaining power of workers and the pricing leverage of companies. Now a recent study links inequality to another recent development the authors call financialization: an increase in the extent to which non-financial corporations in the United States earn income from providing financial services in addition to their core products and services. How financialization relates to these other factors needs further study.

The role of software needs to be assessed in a fuller way as well. Too often software is linked to job loss based on nothing more than casual empiricism – like noting that if there are fewer accountants, it must be because of accounting software.  The net effect of software on job creation can’t be established simply by noticing that a job that used to take several people can now be done by one person and some software.  Unfortunately, this is how too many people approach the current debate.  The reality is that, as software spreads through the economy, it contributes to GDP, exports and employment in myriad direct and indirect ways.  SIIA will continue to examine the role of software in the economy in the coming year.



Big Data Improves Education Around the World

A recent article by the head of the International Finance Corporation (IFC), an affiliate of the World Bank Group, urged the responsible use of big data analytics to improve student learning around the world. The IFC works in more than 100 developing countries, supporting companies and financial institutions to create jobs and contribute to economic growth.  Supporting improved education is one of its strategic priorities.

The IFC article highlighted several initiatives it is supporting:

  • Bridge International Academies in Kenya uses adaptive learning on a large scale in its 259 nursery and primary schools, with monthly tuition averaging $6. By deploying two versions of a lesson at the same time in a large number of classrooms, Bridge can determine which lesson is most effective and then distribute that lesson throughout the rest of its network (a sketch of this kind of two-version comparison follows the list).
  • SABIS provides K-12 education in 15 countries including in Asia, the Middle East, and North Africa. It mines large data sets for more than 63,000 students, collecting more than 14 million data points on annual student academic performance that are used to shape instruction and achieve learning objectives.
  • Knewton is an adaptive learning platform that partners with companies like Pearson, Cengage, Houghton Mifflin Harcourt, and Wiley to personalize digital courses using predictive analytics.
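As flagged in the first item, here is a minimal sketch of the kind of two-version lesson comparison described. The pass counts, sample sizes, and the choice of a two-proportion z-test are illustrative assumptions of ours, not details from the IFC article:

```python
# Illustrative A/B comparison: is one lesson version measurably better?
from statistics import NormalDist

def two_proportion_z(pass_a: int, n_a: int, pass_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference between two pass rates."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Version A in half the classrooms, version B in the other half.
p = two_proportion_z(pass_a=612, n_a=1000, pass_b=655, n_b=1000)
print(f"p-value: {p:.3f}")  # a small p-value supports rolling out the better lesson
```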

These uses of big data analytics will improve learning in developing countries and the IFC should take pride in its leadership role in spreading these techniques around the globe.

Some are concerned that the new uses of data for improved learning threaten student privacy. As a recent Wall Street Journal article says:

“Perhaps the biggest stumbling block to using data in schools isn’t technological, though. Rather, it’s the fear that doing so will invade the privacy of students.”

The IFC recognizes the concern and urges policymakers to get out in front of the issue and to design privacy protections into big data projects from the ground up to make sure that the information is used appropriately to support learning:

“To realize those benefits – and to do so responsibly – we must ensure that data collection is neither excessive nor inappropriate, and that it supports learning. The private sector, governments, and institutions such as the World Bank Group need to formulate rules for how critical information on student performance is gathered, shared, and used. Parents and students deserve no less.”

SIIA agrees.  As part of our effort to encourage privacy by design in the educational context, we recently published our recommended best practices for providers of educational services to schools, focusing on the need for an educational purpose, transparency, proper authorization and security in the use of student information.

The Administration’s review of privacy and big data is examining this issue in general and as it applies to student privacy.  We look forward to working with them to make sure that the promise of better learning for the world’s students is fulfilled through the responsible use of big data analytics.


