
‘The Core Job of Journalists Isn’t Going Away’ – ALM’s New AI Content Tool Shows Human Plus Machine is the Way Forward

Last month, legal publisher ALM introduced Legal Radar, a “first-of-its-kind website and app” that uses artificial intelligence and natural language generation to offer faster and more personalized user experiences.

Legal Radar puts the reader in charge, allowing users to select the news they would like to see from a list of relevant industries, practice areas, law firms, companies, and geographic regions. It then scrapes information from the federal case database PACER to generate automated summaries (usually between 50 and 80 words) of key details about cases, as well as pulling in original ALM content from other channels.
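The personalization described above (users picking topics, the feed filtering to match) can be sketched roughly as follows. This is an illustrative sketch only; the field names, tags, and matching logic are assumptions, not ALM's actual implementation.

```python
# Illustrative sketch of topic-based feed personalization. All data and
# names here are hypothetical, not ALM's actual implementation.

def personalize_feed(items, selections):
    """Return only the items whose tags overlap the user's selections."""
    wanted = set(selections)
    return [item for item in items if wanted & item["tags"]]

feed = [
    {"headline": "Patent suit filed against Acme Corp", "tags": {"IP", "N.D. Cal."}},
    {"headline": "Securities class action settles", "tags": {"Securities", "S.D.N.Y."}},
]

# A reader who selected only intellectual property news:
print(personalize_feed(feed, ["IP"]))
```

Real selections would span industries, practice areas, firms, companies, and regions, as described above, but the overlap test stays the same.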

“The newsfeed is filled with short, easy-to-digest news briefs that are intended to be scanned, kind of like the experience you would have on a social media app like Twitter or a news app like Flipboard,” says Vanessa Blum, head of newsroom innovation for ALM’s Global Newsroom. “It’s a very mobile-friendly experience and responds to a habit we know our users have, which is reading short news snippets while they are on the go.”

Legal Radar represents a significant shift in the way that content is both generated and consumed. Connectiv spoke with Blum about the realities of building an AI-driven content product, how the customer content experience is changing in B2B media and what the rise of AI really means for editors and journalists.

Connectiv: Vanessa, how does the AI component of Legal Radar work?

Vanessa Blum: We start with a stream of raw data from the federal court system via PACER (Public Access to Court Electronic Records). We apply some data processing on the back end in order to normalize, structure, and clean up that data. Then it’s converted into short summaries using natural language generation (NLG) technology from a platform called Automated Insights.

It goes in as structured data and it comes out as a readable summary. Then, as the final step, we have editors review the summary for accuracy and to make any edits that are necessary.
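The pipeline Blum describes (normalize the raw docket data, fill an NLG template, then hold the draft for editor review) might look schematically like this. Everything here, from the field names to the single template, is a hypothetical simplification; Automated Insights' actual platform and ALM's templates are far richer.

```python
# Illustrative sketch of the pipeline described above: normalize a raw
# docket record, fill a template, and queue the draft for editor review.
# Field names and the template are hypothetical, not ALM's or
# Automated Insights' actual formats.

def normalize(raw):
    """Clean and structure a raw PACER-style record."""
    return {
        "plaintiff": raw["plaintiff"].strip().title(),
        "defendant": raw["defendant"].strip().title(),
        "court": raw["court"].strip(),
        "claim": raw["claim"].strip().lower(),
    }

TEMPLATE = ("{plaintiff} filed a {claim} lawsuit against "
            "{defendant} in the {court}.")

def generate_summary(raw):
    record = normalize(raw)
    # Drafts are marked for human review, mirroring the editorial step.
    return {"text": TEMPLATE.format(**record), "status": "pending_review"}

draft = generate_summary({
    "plaintiff": "  acme widgets ",
    "defendant": "globex corp",
    "court": "Northern District of California",
    "claim": "Breach Of Contract",
})
print(draft["text"])
```

The key point mirrors the interview: structured data in, readable summary out, with a human review flag attached as the final step.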

Connectiv: The release refers to a “first-of-a-kind website and app.” Can you talk more about what makes this first of a kind and how this offers a new customer experience?

Blum: I’ll talk about two things. First is the user experience. There’s never been a legal news product, certainly not a free legal news product, that is so easy to use on mobile, that can be personalized by user selection, and that makes it so seamless to digest and respond to information. We think we nailed that UX in a way that hasn’t been done in legal media.

The second part, which we are really excited about, is the way we are using technology and data processing to generate content for Legal Radar. It’s not the tech in itself; it’s that using technology allows us to be exponentially faster in delivering news to readers and to deliver news across a wide array of topics and interest areas. I’m really excited about what the technology allows us to do, not just the tech itself.

Connectiv: Talk about the interaction of the technology with editors. What does this mean for an editor day-to-day?

Blum: I’ll start with the development process, and how closely our editors and developers worked together in building the back-end system. There are journalistic insights baked into every piece of the data processing engine—it’s the editors who devised how this data should be handled as well as the categories and the tagging that should be applied to it.

And then at the NLG level, these are templates that were created by editors to produce the kind of output that would be useful to readers. They account for over a dozen different fact patterns. It’s not a simple plug-and-play NLG engine; there is really a contribution of journalists and editors throughout the development of Legal Radar. Now that it’s up and running, we have editorial review of every item that’s created. We have staffing around the clock so that an editor is looking over each and every item.

We thought that was necessary for two reasons—one is that the data set we are working with can be messy. We knew we needed something on the back end to protect against an error in the data producing an error in the content.

The other component is the ability of a human to enrich the content that we are putting out. These are very short, very fast-paced summaries but if something catches an editor’s interest, they will take an extra step—they will open a case, they will open a lawsuit and add a few key facts. We think it’s incredibly valuable to have the human judgment at the end of that process to resolve any questions or enrich what we are producing using the automated system.

Connectiv: A lot of publishers are taking a look at AI and trying to understand what they can do. As someone who’s successfully built an AI tool, what takeaways can you share about working with AI and building an AI-driven product?

Blum: I have two main takeaways from this experience. The first is to focus on the end user and not the tech. It’s easy to get enraptured by cool tech, but the best practice is to focus first on what you want to deliver and then on how the tool gets you to that result. In my role, learning about new tech and seeing how other companies are applying it is eye-opening and can spark that creative process, but it’s essential to stay user-focused.

The second thing is to build truly cross-functional teams. Creating Legal Radar required journalists, programmers, product designers and business strategists to all be around the table in a way that was really new for our organization. We tend to have content creators in one area and developers in another. For Legal Radar, content creation and technology are so intertwined that we had to break down the walls and get editors and programmers talking together to solve problems. Not only has that made our product better, it’s made our company better.

Connectiv: What was the biggest strategic takeaway from this experience?

Blum: Staying open-minded. When we first started, we had a different data set in mind that we thought we’d be using to produce automated coverage. We learned early on that that data set wasn’t workable for us, so we had to pivot to something else.

One other thing I’ll mention: we are working with Automated Insights, and it’s a great product, but we found we had to build a lot of solutions at the front end, before the data is fed into Automated Insights, and at the back end, before the content goes into the Legal Radar newsfeed. That’s not something we necessarily anticipated at the outset—how much thought and creativity we’d have to apply both to the data feed going into Automated Insights and to how we would handle the content on the back end.

Connectiv: As the head of newsroom innovation, what are you excited about with content and media? And conversely, what do you think is overrated?

Blum: I’m interested in and excited by the combination of human and machine intelligence. I love watching how other news organizations are using technology, using algorithmic journalism, using AI and combining it with the expertise of their journalists to come up with solutions that are incredibly rich. That’s kind of the secret sauce in my view.

In terms of what I think is overhyped, I hate answering that because I’m sure I’ll be back talking about this a year from now, but I will point to smart speakers and developing news products for Alexa. I don’t get that one yet. I’m not convinced we’ll be receiving our information from smart speakers in the near future.

Connectiv: You’ve talked about journalists and AI working together. What’s your reaction to the idea of AI replacing editors and writers?

Blum: That’s the natural fear that people in our industry have as we begin learning about automated journalism. The more I’ve learned about it, the less that fear seems grounded. What technology is capable of is so different from what humans are capable of that it’s really through combining the two that we will see the most exciting advances. Technology is great at processing reams of data very fast, but in the business I’m in, which involves asking questions, exploring trends, talking to insiders, there’s no potential at this point that a machine will take over those functions.

When you combine the speed and data processing capabilities of the technology and turn that over to a human being to do the investigation and talk to real people, that’s where the magic happens. I think journalists’ jobs will change (my own changed dramatically) and journalists will be forced to become more tech-savvy and more open to using data processing in their work, but the core job of a journalist isn’t going away and cannot be replaced by a computer or an algorithm.


Integrating AI in U.S.-UK Digital Trade Through Technical Standards Cooperation: Financial Services, Cars, and Pharmaceuticals

The Atlantic Council hosted Confederation of British Industry (CBI) Director-General Dame Carolyn Fairbairn in Washington, D.C. on February 5, 2020 for a discussion about the UK’s global trading future post-Brexit. Dame Carolyn was supportive of the UK pursuing a new free trade agreement with the US that would include new standards for tech, including ecommerce, fintech, and artificial intelligence (AI). She suggested that the OECD AI Principles would be a good place to start with respect to operationalizing high AI standards in a U.S.-UK trade deal. There is a lot to be said for this approach, particularly in making Principle 2.5 c) a reality: “Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.”

This approach also makes sense because neither the United States nor the United Kingdom is likely to want to do away with the idea that market access commitments in trade agreements should be technologically neutral, i.e. that if a country commits to open up the market in a given sector, that sector should be open no matter what technology is used to serve it. Cooperating on standards development can, however, have the effect of stimulating the use of innovative technologies such as AI, which is a worthwhile goal.

Standards Cooperation Does Not Mean Countries Must Have Identical Laws

Laws, regulations, and standards are sometimes conflated, which occasionally leads to confusion. The European Committee for Standardization defines a standard as “a technical document designed to be used as a rule, guideline or definition. It is a consensus-built, repeatable way of doing something.” The National Institute of Standards and Technology (NIST) provides examples of AI standards areas such as:

  • Data sets in standardized formats, including metadata for training, validation and testing of AI systems
  • Tools for capturing and representing knowledge and reasoning in AI systems
  • Fully documented use cases providing information regarding specific AI applications and guides for making decisions about when to deploy AI systems
  • Benchmarks to drive AI innovation
  • Testing methodologies
  • Metrics to quantifiably measure and characterize AI technologies
  • AI testbeds
  • Tools for accountability and auditing

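The first item on NIST's list, standardized dataset formats with metadata, can be made concrete with a small sketch. The schema below is purely illustrative and is not drawn from any published standard.

```python
# Hypothetical sketch of standardized dataset metadata of the kind NIST
# lists as an AI standards area: train/validation/test splits and feature
# descriptions recorded in a machine-readable form. Schema is illustrative.
import json

metadata = {
    "dataset": "loan-applications-v2",
    "license": "internal-use-only",
    "splits": {"train": 80000, "validation": 10000, "test": 10000},
    "features": [
        {"name": "income", "type": "float", "unit": "USD/year"},
        {"name": "region", "type": "category"},
    ],
    "collected": "2019-06",
    "known_gaps": ["applicants under 21 underrepresented"],
}

# Serialize so every tool in the pipeline reads the same description.
print(json.dumps(metadata, indent=2))
```

The value of a shared format is that validation, auditing, and benchmarking tools can all consume the same description without bespoke adapters.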
For instance, AI systems often require safeguarding Personally Identifiable Information (PII). The International Organization for Standardization (ISO) standard ISO/IEC 29101:2013 defines a privacy architecture framework for entities that process such data. It does not specify what the definition of PII is (that is a country’s sovereign right to determine), only how to create ICT systems to protect such data. Technical standards cooperation could pay dividends if companies could use standards (methodologies) accepted by regulators on both sides of the Atlantic to demonstrate how transparency, bias avoidance, privacy protection, and other regulatory priorities are being addressed from a technical standpoint. We are really talking about developing common methodologies (technical standards) to achieve certain objectives, such as the protection of privacy, not substantive legal/regulatory convergence. And we are not talking about “checklists” either, because the idea is that companies establish ongoing processes, not compliance checklists for a certain point in time.
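One common building block in privacy architectures of the kind ISO/IEC 29101 addresses is keeping direct identifiers out of downstream processing. A minimal sketch of that idea (a generic technique offered for illustration, not the standard's normative content) is to pseudonymize identifiers with a keyed hash before records leave the collection layer:

```python
# Minimal illustration of one privacy-architecture idea: pseudonymize
# direct identifiers with a keyed hash before records reach analytics.
# This sketches the concept only; ISO/IEC 29101 defines a full framework,
# not this specific code.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # held by the data controller only

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: the same input maps to the same token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "score": 0.87}
safe_record = {
    "subject_id": pseudonymize(record["email"]),  # stable join key, no PII
    "score": record["score"],                     # analytics payload only
}
print(safe_record)
```

Because the hash is deterministic, records about the same person can still be joined for analysis, yet the analytics layer never sees the name or email itself.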

Composition of U.S.-UK Trade

In this context, understanding the composition of U.S.-UK trade and recalling the most modern trade agreement in existence from a digital standpoint – the United States-Mexico-Canada Agreement (USMCA) – is a good place to start. The United States Trade Representative (USTR) notes that in 2018, U.S. goods and services trade with the UK totaled roughly $261.9 billion. For both countries, trade in financial services, cars, and pharmaceuticals is significant. From an AI promotion standpoint, homing in on these sectors could allow the two countries to do some innovative things in a trade agreement. Article 19.14 of the USMCA’s Digital Trade Chapter says that the Parties “shall endeavor” to cooperate on a range of issues important for digital trade. A U.S.-UK trade deal should ideally delineate areas where the U.S. and the UK “shall” cooperate. It might also be worthwhile for the U.S. and the UK to explore whether USMCA Chapter 11 commitments with respect to Technical Barriers to Trade are worth considering in the U.S.-UK context.

Financial Services: Are Robo-Advisors, Use of Public Records, and Alternative Data Ripe for Cooperation?

The USMCA’s Chapter 17 covers financial services and provides a point of departure in thinking about what a U.S.-UK deal might look like with respect to financial services. For example, Article 17.7 provides commitments with respect to “New Financial Services.” What this means is that if one Party permits a new financial service to be offered in its territory, then it must allow the other Parties to offer the same new financial service. See below for the text of this provision:

Each Party shall permit a financial institution of another Party to supply a new financial service that the Party would permit its own financial institutions, in like circumstances, to supply without adopting a law or modifying an existing law. Notwithstanding Article 17.5.1(a) and (e) (Market Access), a Party may determine the institutional and juridical form through which the new financial service may be supplied and may require authorization for the supply of the service. If a Party requires a financial institution to obtain authorization to supply a new financial service, the Party shall decide within a reasonable period of time whether to issue the authorization and may refuse the authorization only for prudential reasons.

There is a “like circumstances” caveat, as well as scope for the Parties to “determine the institutional and juridical form through which the new financial service may be supplied.” The U.S. and the UK may want to consider areas where mutual recognition regimes of some kind make sense. There has been a lot of discussion, for instance, regarding how to regulate automated financial advisory services (“robo-advisors”). See this Lexology piece, for instance, on how regulators in the U.S., the UK, Europe, Canada and Hong Kong are dealing with this issue. Michel Girard’s January 2020 paper, “Standards for Digital Cooperation,” provides some good ideas for what might be possible in this and other sectors. He notes, for instance, that the report from a 2018 High-Level Panel on Digital Cooperation proposes new data governance technical standards to address gaps, such as the creation of audits and certification schemes to monitor the compliance of AI systems with technical and ethical standards. I have also written about how explanations and audits can enhance trust in AI.

Another example where closer U.S.-UK cooperation might be warranted is in the area of know your customer (KYC) and anti-money laundering (AML) services. Although to date trade agreements have appropriately not entered into detail regarding what a privacy law should look like (the Comprehensive and Progressive Agreement for Trans-Pacific Partnership and the USMCA only say that Parties shall have a privacy system), it might be worth clarifying that privacy law should not be an impediment to the provision of these essential services. In practice this would mean that “right to be forgotten” laws would have to be appropriately tailored and that companies would continue to be able to use public records and widely distributed media to provide high quality KYC and AML services.

Alternative data is another area where the U.S. and the UK might want to step up collaboration. For example, the U.S. and UK investment industries could potentially benefit from greater use of voluntary alternative data standards. Standards that improve data documentation; raise data quality; unify data pipeline management; reduce time spent on data delivery and ingestion; ease permissions management and authentication; and simplify vendor due diligence and contracting would be a good thing. Export Britain actually advises UK firms to focus on, among other sectors, financial services in exporting to the United States. The same is undoubtedly true for U.S. financial services firms looking to expand in the UK. It may make sense for regulators on both sides of the Atlantic to work together to promote the use of alternative data standards for the investment industry. There is perhaps also scope to work together on the use of alternative data in making consumer credit decisions. There is substantial evidence suggesting that the use of alternative data in credit scoring can help expand service to underserved markets, as these comments to the Consumer Financial Protection Bureau (CFPB) make clear. Common U.S.-UK alternative data standards could be helpful, particularly if they are coupled with safeguards to ensure that alternative data can be developed through access to public records and widely distributed media in both countries.
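The documentation and data quality benefits of shared alternative data standards can be illustrated with a simple conformance check run before ingestion. The schema and field names below are hypothetical, not an actual industry standard.

```python
# Illustrative sketch: validating an alternative-data delivery against a
# shared schema before ingestion. The schema and fields are hypothetical,
# not drawn from any actual industry standard.

SCHEMA = {
    "ticker": str,
    "metric": str,
    "value": float,
    "as_of": str,   # ISO date expected, e.g. "2020-01-31"
}

def validate(row, schema=SCHEMA):
    """Return a list of problems; an empty list means the row conforms."""
    problems = []
    for field, ftype in schema.items():
        if field not in row:
            problems.append(f"missing field: {field}")
        elif not isinstance(row[field], ftype):
            problems.append(f"wrong type for {field}")
    return problems

good = {"ticker": "XYZ", "metric": "foot_traffic", "value": 1032.0, "as_of": "2020-01-31"}
bad = {"ticker": "XYZ", "value": "n/a"}
print(validate(good))
print(validate(bad))
```

If vendors on both sides of the Atlantic validated deliveries against one agreed schema, buyers would spend less time on ingestion fixes and due diligence, which is the efficiency case the paragraph above makes.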

Cars – Can the United States and the United Kingdom Drive Connectedness?

On January 8, 2020, the Trump Administration released “Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0” (AV 4.0). Clearly, going forward this will be a U.S. strength given the investments being made by U.S. tech firms. But there is plenty of potential interest in the UK as well, as this 2019 Society of Motor Manufacturers and Traders (SMMT) report notes. One of the report’s recommendations is international harmonization of regulations. And as this AV Investor Tracker report establishes, concerns about data privacy are holding back the development of the sector. This Booz Allen Hamilton white paper delineates some of the issues at stake. Perhaps one way to help U.S. and UK carmakers would be to take what is relevant from the U.S. National Institute of Standards and Technology (NIST) Privacy Framework in creating “privacy by design” for AV manufacturers in the UK and the U.S. On the UK side there has been plenty of preparatory thinking about the privacy issues surrounding AVs, particularly what to do about location data. See this piece, for instance, entitled “Where your data is being driven.” The Centre for Connected & Autonomous Vehicles has done innovative work in this space. Perhaps U.S. and UK negotiators could agree on how privacy can be addressed through mutually agreed privacy by design standards for car manufacturers and the apps that will have increasing value add in automobiles.

On February 14, 2019, the United States and the United Kingdom signed a Mutual Recognition Agreement (MRA) with respect to standards. One of the stated purposes of the agreement is to promote trade between the two countries. The agreement at this time focuses on mutual recognition with respect to telecoms equipment, electromagnetic compatibility, and pharmaceutical good manufacturing practices. Perhaps there might be scope to expand this to AVs and other standards important to innovative digital industries?
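A concrete example of what “privacy by design” can mean for AV location data, offered purely as an illustration of one generic technique rather than anything from the NIST Privacy Framework, is coarsening GPS coordinates before they are stored:

```python
# Illustrative "privacy by design" sketch for AV telemetry: round GPS
# coordinates before they are stored, so precise locations never leave
# the vehicle. The precision threshold and field names are hypothetical.

def coarsen(lat: float, lon: float, decimals: int = 2) -> tuple:
    """Round coordinates; two decimal places is roughly 1 km precision."""
    return (round(lat, decimals), round(lon, decimals))

raw_ping = {"vin": "TESTVIN123", "lat": 51.507351, "lon": -0.127758}
stored = {
    "vehicle": raw_ping["vin"],
    "location": coarsen(raw_ping["lat"], raw_ping["lon"]),
}
print(stored["location"])  # (51.51, -0.13)
```

A mutually agreed standard could specify the precision level, where the coarsening must happen (on-vehicle versus server-side), and how exceptions for safety-critical uses are handled.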

Making the Most of AI to Make Drug Discovery Cheaper and Quicker

There is a lot of excitement about the potential for AI to help with drug discovery, but there is arguably a need for standards to realize the potential of the technology. AI startup entrepreneur Charles K. Fisher, Ph.D., has actually asked the FDA to develop such standards. Why not work with NIST and its UK equivalents to do precisely that? The Confidentiality Coalition (composed of a range of different healthcare industry players, including pharmaceutical companies) submitted a January 14, 2019 letter to NIST requesting that it work on a privacy framework that is protective of privacy but at the same time allows needed healthcare data to go where it is needed. One of the Coalition’s requests is for the Privacy Framework to be consistent with HIPAA and other existing privacy frameworks. The NIST Privacy Framework does not explicitly establish a system to comply with specific laws. And, for instance, with respect to international transfers of clinical trial data, there are some differences between HIPAA and the GDPR, as this article notes. Although NIST is appropriately careful to note that its cybersecurity and privacy frameworks are not “checklists,” it might be helpful, especially given the 2019 MRA, to select some sectors, such as healthcare, where additional guidance might be useful. After all, in 2018 the United States imported about $5 billion in pharmaceuticals from the United Kingdom, and in 2016 the United States exported about $2.5 billion in medical and pharmaceutical products to the UK. Despite these seemingly impressive numbers, though, it is hard to think of a sector more in need of a revolution in its innovation model. And beyond the economics, AI-driven drug discovery clearly has the potential to help people in the way that matters most: improving health outcomes through faster development of new drugs.
In this context, given the politics surrounding healthcare, it is worth underscoring that this technical standards cooperation is about ensuring high quality as well as efficiency, and that it has nothing to do with the healthcare delivery models that the United States and the United Kingdom choose. The UK can keep the NHS. And the U.S. can keep its largely private insurance-based system.


The U.S. is putting its money where its mouth is in that federal money is being prioritized for AI R&D. The UK is also a strong AI adopter and leader. And the countries are partners that share similar values. Let’s make the most of these strengths and develop a trade deal that promotes AI-driven innovation.