The Impact of AI on Research and Innovation

Source: Irving Wladawsky-Berger

On December 29, the WSJ published “Will AI Help or Hurt Workers?,” an article based on a research paper by Aidan Toner-Rodgers, a second-year PhD student in MIT’s Economics Department.

One of the reasons the WSJ article caught my attention is that it featured a photo of the MIT graduate student between two of the world’s top economists whose research I’ve closely followed for years: Daron Acemoglu, who in October was named a co-recipient of the 2024 Nobel Memorial Prize in Economic Sciences, and David Autor (along with his dog Shelby), who co-chaired a multi-year, MIT-wide Task Force on the impact of AI on “The Work of the Future.”

Both professors raved about Toner-Rodgers’s research even though they hold somewhat different views on the impact of AI on workers. Professor Autor is more optimistic, arguing that “AI Could Actually Help Rebuild The Middle Class.” His friend and colleague, Professor Acemoglu, worries that AI could actually worsen income inequality while doing little for productivity and GDP growth over the coming decade. “For all the talk about artificial intelligence upending the world, its economic effects remain uncertain,” he said in a recent interview. “There is massive investment in AI but little clarity about what it will produce.” But both agreed that “the research by Toner-Rodgers, 26 years old, is a step toward figuring out what AI might do to the workforce, by examining AI’s effect in the real world.”

Economists have long analyzed the impact of historically transformative technologies, such as steam power, electricity, the internal combustion engine, computers, and the Internet, on the future of work.

In a 1930 essay, for example, renowned English economist John Maynard Keynes wrote about the onset of “a new disease” which he named technological unemployment, that is, “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” Keynes predicted that the standard of living in advanced economies would be so much higher by 2030 that most people would be working a 15-hour week, the number of hours he thought would be enough to satisfy their financial and emotional needs.

Almost one hundred years later, in November of 2024, the US National Academies (NA) published “Artificial Intelligence and the Future of Work,” a report based on a three-year study by experts from universities and private-sector institutions, including Professor Autor. The study’s eleven key findings are listed in the Summary chapter and explained in great detail throughout the report.

The NA report was considerably more positive than Keynes’ essay, noting that although there are widespread concerns about the impacts of AI on jobs, a number of important factors should ameliorate these concerns. First, US unemployment rates have been quite low compared to historical levels. Second, population growth rates in the US and other advanced economies have been declining and are expected to continue to do so for the foreseeable future. And third, the adoption of AI in the workplace is still in its early stages, making it difficult to estimate the longer-term impact of AI on the future of work.

As explained in his research paper, “Artificial Intelligence, Scientific Discovery, and Product Innovation,” Toner-Rodgers came up with an innovative way of estimating the effect of AI on scientific research in the real world, one of the areas where AI is already having a major impact.

“The economic impact of artificial intelligence will depend critically on whether AI technologies not only transform the production of goods and services, but also augment the process of innovation itself,” he wrote in the paper’s introduction. “Recent advances in deep learning show promise in generating scientific breakthroughs, particularly in areas such as drug discovery and materials science where models can be trained on large datasets of existing examples. Yet little is known about how these tools impact invention in a real-world setting, where R&D bottlenecks, organizational frictions, or lack of reliability may limit their effectiveness. As a result, the implications of AI for both the pace and direction of innovation remain uncertain. Moreover, the consequences for scientists are ambiguous, hinging on whether AI complements or substitutes for human expertise.”

Toner-Rodgers studied the impact of AI-based tools on innovation, leveraging the randomized introduction of a new materials discovery technology to 1,018 scientists in the R&D lab of a large U.S. firm. The lab is focused on the applications of materials science in healthcare, optics, and industrial manufacturing, employing researchers with advanced degrees in chemistry, physics, and engineering.

“Traditionally, scientists discover materials through an expensive and time-consuming system of trial and error, conceptualizing many potential structures and testing their properties. The AI technology leverages developments in deep learning to partially automate this process. Trained on the composition and characteristics of existing materials, the model generates ‘recipes’ for novel compounds predicted to possess specified properties. Scientists then evaluate these candidates and synthesize the most promising options. Once researchers create a useful material, they integrate it into new product prototypes that are then developed, scaled, and commercialized.”
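
To make this workflow a bit more concrete, here is a minimal Python sketch of the generate-then-evaluate loop the paper describes. The model, property names, and scoring logic below are hypothetical stand-ins of my own, not the firm’s actual system; the point is only to illustrate the division of labor between the generative model and the researchers who triage its suggestions.

```python
# Purely illustrative sketch of the generate-then-evaluate loop described above.
# The "model" is a random stand-in for a deep generative model trained on known materials.
import random

def generate_candidates(target_properties, n=100):
    """Stand-in for a generative model proposing candidate 'recipes' predicted to
    exhibit the requested properties (ignored here; a real model would condition on them)."""
    return [{"recipe_id": i, "predicted_score": random.random()} for i in range(n)]

def triage(candidates, top_k=10):
    """Researchers evaluate the model's suggestions and keep the most promising
    candidates for synthesis and lab testing."""
    return sorted(candidates, key=lambda c: c["predicted_score"], reverse=True)[:top_k]

def lab_test(candidate):
    """Stand-in for synthesis and measurement; succeeds with the predicted probability."""
    return random.random() < candidate["predicted_score"]

candidates = generate_candidates({"refractive_index": "high", "cost": "low"})
shortlist = triage(candidates)
discoveries = [c for c in shortlist if lab_test(c)]
print(f"Tested {len(shortlist)} candidates, {len(discoveries)} look worth prototyping")
```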

His study developed techniques to quantify the impact of AI on each of the three key stages of the R&D process:

  • Idea generation: compared to existing compounds, AI-generated materials have more distinct physical structures, suggesting that AI unlocks new parts of the design space;

  • Creativity of inventions: the patents filed by scientists are more likely to introduce novel technical terms — a leading indicator of creative new inventions (a toy sketch of this metric follows the list); and

  • Product innovation: AI boosts the share of prototypes that lead to innovative, new product lines rather than to incremental improvements to existing products.
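
To make the second measure more concrete, here is a small, purely hypothetical sketch of how a “novel technical terms” share might be computed by comparing a new patent’s vocabulary against earlier filings. The paper does not spell out its text-analysis pipeline, so treat this only as an illustration of the idea, not as its method.

```python
# Illustrative only: the share of terms in a new patent that never appeared in a
# corpus of earlier filings. Term extraction here is crude lowercase tokenization;
# a real pipeline would use proper NLP and a much larger corpus.
import re

def extract_terms(text):
    return set(re.findall(r"[a-z][a-z\-]{3,}", text.lower()))

def novel_term_share(new_patent_text, prior_patent_texts):
    prior_vocab = set()
    for text in prior_patent_texts:
        prior_vocab |= extract_terms(text)
    new_terms = extract_terms(new_patent_text)
    return len(new_terms - prior_vocab) / max(len(new_terms), 1)

prior = ["a coating of titanium dioxide for optical filters",
         "polymer substrate with improved thermal stability"]
new = "a perovskite-derived coating with tunable refractive index"
print(f"Share of novel terms: {novel_term_share(new, prior):.2f}")
```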

The AI-assisted researchers came up with 44% more potential new materials, which led to a 39% increase in patent filings and 17% more product prototypes based on the new materials. Research productivity increased by 13% to 15%. Overall, the new materials exhibited novel physical structures, which in turn led to more radical innovations.

These results have two major implications. First, they demonstrate the potential of AI-augmented research to improve productivity and accelerate the pace of new discoveries. Second, they confirm that these discoveries translate into innovative new product lines.

In addition, Toner-Rodgers found that the use of AI technologies disproportionately benefits high-ability scientists: the output of the top 10% nearly doubled, while the bottom third of researchers saw little benefit.

“Investigating the mechanisms behind these results, I show that AI automates 57% of ‘idea-generation’ tasks, reallocating researchers to the new task of evaluating model-produced candidate materials,” he wrote. “Top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives. Together, these findings demonstrate the potential of AI-augmented research and highlight the complementarity between algorithms and expertise in the innovative process.”

In other words, AI seems to dramatically change the research discovery process.

In general, the research process is composed of two main kinds of tasks: idea generation and idea evaluation. In the absence of AI, researchers devote nearly half their time to idea-generation tasks, that is, coming up with new potential materials. But once AI is introduced and automates a majority of those tasks, idea generation falls to less than 16% of their time. Scientists then spend 74% of their time evaluating the large number of AI-generated material candidates, a significant change to the R&D process.

The paper further explains why the use of AI disproportionately benefits higher-ability scientists. Based on their skills, experience, and judgment, the top scientists focus first on the most viable candidates suggested by the AI tools, while less skilled researchers waste significant resources investigating less promising suggestions. “Indeed, a significant minority of researchers order their tests no better than random chance, seeing little benefit from the tool. Evaluation ability is positively correlated with initial productivity, explaining the widening inequality in scientists’ performance. These results demonstrate the growing importance of a new research skill, assessing model predictions, that complements AI technologies.”
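
The widening gap is easy to reproduce in a toy simulation (my own illustration, not the paper’s analysis): give each candidate material a true probability of success and a fixed testing budget, then compare a researcher who ranks candidates by a signal correlated with quality against one who effectively tests in random order.

```python
# Toy simulation (not from the paper): a skilled evaluator ranks AI-generated
# candidates by a noisy estimate of their true quality; a less skilled one
# effectively tests in random order. Same testing budget for both.
import random

random.seed(0)

def run_trial(skill_noise, n_candidates=200, budget=20):
    # Each AI-generated candidate has a true probability of turning out useful.
    true_quality = [random.random() for _ in range(n_candidates)]
    # The researcher sees a noisy estimate of that quality; more noise = less evaluation skill.
    signal = [q + random.gauss(0, skill_noise) for q in true_quality]
    # Test the candidates the researcher ranks highest, up to the budget.
    order = sorted(range(n_candidates), key=lambda i: signal[i], reverse=True)
    return sum(random.random() < true_quality[i] for i in order[:budget])

trials = 500
skilled = sum(run_trial(skill_noise=0.1) for _ in range(trials)) / trials
near_random = sum(run_trial(skill_noise=10.0) for _ in range(trials)) / trials
print(f"Average discoveries per 20-test budget: skilled {skilled:.1f}, near-random {near_random:.1f}")
```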

These findings reminded me of a 2017 seminar I attended at MIT — “Exploring the Impact of Artificial Intelligence: Prediction versus Judgment,” by University of Toronto (UofT) professor Avi Goldfarb, based on research with his UofT colleagues Ajay Agrawal and Joshua Gans.

Goldfarb explained that the best way to assess the economic impact of a major new technology is to ask a fundamental question: how does the technology reduce costs? For example, the semiconductor revolution can be viewed as being all about the dramatic reductions in the cost of digital operations thanks to what’s become known as Moore’s Law, which then led to the precipitous decrease in the cost of digital computers. As a result, we’ve learned to define all kinds of tasks beyond arithmetic calculations in terms of digital operations, such as inventory management, financial transactions, word processing, and photography. Similarly, the economic value of the Internet revolution can be described as reducing the cost of communications and of search, thus enabling us to easily find and access all kinds of information, including documents, pictures, music, and videos.

In a November 2016 Harvard Business Review article, “The Simple Economics of Machine Intelligence,” Agrawal, Gans, and Goldfarb wrote that AI is in essence a prediction technology. Prediction means anticipating what will happen in the future, and the dramatically lower costs of AI-based predictions are now ushering in the 21st-century AI revolution. Given their lower costs, we are now seeing a major increase in the use of predictions in a wide variety of tasks in business, government, and research. Over time, we’ll undoubtedly discover that lots of tasks in a variety of disciplines can be reframed as prediction problems.

Predictions are one of the two key ingredients in making decisions. The other is judgment, the part of decision-making that, unlike prediction, cannot be explicitly described to and performed by a machine. Predictions are generally based on analyzing information, but judgments are primarily based on the human ability to understand the impact that different actions have on outcomes, drawing on our intuition, innate skills, and past experiences. As machine predictions become increasingly inexpensive and commonplace, human judgment becomes significantly more valuable.
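
A tiny worked example (my own, with made-up numbers, not from Agrawal, Gans, and Goldfarb) helps fix the distinction: the machine contributes a predicted probability, the human contributes the payoffs that encode judgment about the outcomes, and the decision combines the two through expected value.

```python
# Illustrative decision = prediction + judgment (hypothetical numbers).
# The model predicts the probability that a loan applicant repays;
# a human supplies the payoffs, which encode judgment about outcomes.

def expected_value(p_repay, payoff_repay, payoff_default):
    return p_repay * payoff_repay + (1 - p_repay) * payoff_default

p_repay = 0.92            # prediction: produced by a machine-learning model
payoff_repay = 1_000      # judgment: value of a repaid loan (interest earned)
payoff_default = -10_000  # judgment: cost of a default

decision = "approve" if expected_value(p_repay, payoff_repay, payoff_default) > 0 else "decline"
print(decision)  # with these numbers, expected value = 0.92*1000 - 0.08*10000 = 120 > 0 -> approve
```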

In his paper, Toner-Rodgers concluded that the top scientists’ superior judgment in selecting the best AI-generated compounds is the key reason for their higher research productivity. However, his survey data revealed that these productivity gains come at a cost.

“Researchers experience a 44% reduction in satisfaction with the content of their work. This effect is fairly uniform across scientists, showing that even the winners from AI face costs. Respondents cite skill underutilization and reduced creativity as their top concerns, highlighting the difficulty of adapting to rapid technological progress. Moreover, these results challenge the view that AI will primarily automate tedious tasks, allowing humans to focus on more rewarding activities. While enjoyment from improved productivity partially offsets this negative effect, especially for high-ability scientists, 82% of researchers see an overall decline in wellbeing.”

“In addition to impacting job satisfaction, working with the tool changes materials scientists’ views on artificial intelligence. Belief in the ability of AI to enhance productivity nearly doubles. At the same time, concerns over job loss remain constant, reflecting the continued need for human judgment. However, due to the changing research process, scientists expect AI to alter the skills needed to succeed in their field. Consequently, the number of researchers planning to reskill rises by 71%.”

“These findings show that hands-on experience with AI can meaningfully influence views on the technology. The responses also reveal an important fact: domain experts did not anticipate the effects documented in this paper.”


Irving Wladawsky-Berger is a Research Affiliate at MIT's Sloan School of Management and at Cybersecurity at MIT Sloan (CAMS) and Fellow of the Initiative on the Digital Economy, of MIT Connection Science, and of the Stanford Digital Economy Lab.