Commentary

What the public thinks about AI and the implications for governance

April 9, 2025

In recent years, we’ve witnessed unprecedented investment in AI development, impressive advances in AI capabilities, and competing predictions of impacts ranging from transformative benefits to existential risks. In response, a wide range of governance approaches have emerged, including global dialogues, international summits, executive orders, regulation, industry initiatives, voluntary guidance, and international agreements.
While elite dialogues and expert perspectives have made headlines repeatedly, far less attention has been paid to the views of the public: the people who will use these systems, feel their effects, and elect the politicians in charge of governing them. This inattention is not just a democratic blind spot; it may be a strategic error. Simply put, to achieve their goals surrounding AI (whatever they may be), technologists and policymakers need to understand the public whose lives their actions will shape.
To support a better understanding of the public’s views, our team conducted a comprehensive review of recent survey data on AI from around the world and created a new database called AI SHARE to analyze trends over time. AI SHARE (the AI Survey Hub for Attitudes and Research Exchange) was built by the Governance and Responsible AI Lab (GRAIL) at Purdue University and currently aggregates approximately 1,800 survey questions from 218 studies on AI public opinion conducted between 2014 and 2023. It is continuously updated with new publications, which are organized using a systematic classification of public opinion studies.
Our review of the literature reveals that public opinion on AI is both multifaceted and dynamic. Overall, the U.S. and U.K. publics tend to be more concerned than optimistic about AI’s impacts, though many hold mixed or even inconsistent views. For instance, more people worry about AI’s effects on overall employment than on their own livelihoods; and while there is broad support for AI regulation, people trust neither tech companies nor governments to implement it effectively on their own.
As we argue, understanding public attitudes on AI serves multiple crucial functions: It helps AI developers align products with societal expectations, enables civil society to advocate effectively, and allows policymakers to craft regulations that reflect public values rather than merely technical or commercial imperatives. However, to realize these goals, we need better mechanisms to study and understand the public’s evolving views.
The public is more concerned than optimistic about AI, though many hold mixed views
Public sentiment toward AI appears to lean more negative than positive in Western countries, with many surveys in the U.S. and U.K. showing more people expressing concern than excitement about AI’s impacts, though with some important subtleties.
In the United States, an October 2023 survey found that 49% of respondents thought the risks of AI outweigh benefits, while 34% took the opposite view. An August 2023 survey revealed a similar pattern: 52% of U.S. adults felt more concerned than excited about the increased use of AI, while only 10% expressed more excitement than concern. This represents a significant shift from December 2022, when just 38% reported more concern than excitement, and suggests growing wariness as AI systems become more capable, AI issues gain greater public attention, and public exposure to and experiences with AI increase.
Similar patterns hold in the U.K. An October 2023 survey found that 48% of U.K. respondents believed the risks of AI outweigh the benefits, versus 38% who saw greater benefits than risks. Another August 2023 survey found that 25% of adults thought AI’s impacts would be net negative, compared to just 14% who anticipated more positive outcomes. This trend toward increasing pessimism is also reflected in the growing share of U.K. respondents who believe the risks of AI outweigh the benefits, a share that rose by 5 percentage points between July 2022 and August 2023, according to the same survey.
Surveys that focus on the public’s emotional responses find further evidence of this cautious outlook, though notable fractions of the public also express positive emotions. A 2023 survey found that U.K. and U.S. respondents most commonly reported feeling “nervous” about AI (29% and 23%, respectively), followed by “hopeful” (17% in both countries) and “excited” (17% and 16%, respectively). Similarly, a March 2024 survey found U.S. respondents were more likely to feel “cautious” (54%) or “concerned” (49%) than “curious” (29%), “excited” (19%), or “hopeful” (19%). A more recent survey of the U.S. public shows that feelings of skepticism (up 8 percentage points) and overwhelm (up 6 percentage points) markedly increased between December 2024 and March 2025, while feelings of excitement decreased by 5 percentage points.
However, these surveys also reveal a more complex picture, with mixed views often outweighing simple pessimism or optimism. A January 2023 survey found that the largest fraction of U.S. respondents (46%) thought AI would do equal amounts of harm and good. In two 2023 surveys in the U.K., pluralities or majorities of respondents held more balanced views: One survey found 58% thought the impacts of AI would be neutral, while another showed 43% believed AI posed equal risks and benefits. These mixed emotional and evaluative responses suggest many citizens recognize AI’s potential while simultaneously harboring significant reservations, a complicated perspective that many policymakers may share and that should inform governance approaches.
More people worry about AI’s effects on overall employment than on their own jobs
Public concern about personal job displacement remains moderate, albeit with some indication of increasing worry over time. As of early 2025, 28% of British adults and 31% of U.S. adults reported being fairly or very worried about their own type of work being automated in their lifetime. These figures have, however, risen relative to prior waves of the survey, up from 18% in the U.K. in 2019 and 23% in the U.S. in 2021. Whether this upward trend continues should be of central interest to policymakers and politicians worldwide, as evidence of AI’s potential to displace workers is mounting.
Interestingly, people are consistently more concerned about AI’s broader impact on employment than about their own job prospects. When surveyed in 2023, 53% of U.S. adults and 64% of U.K. adults expected AI to somewhat or significantly increase unemployment. Similarly, a May 2023 survey found that 64% of U.K. adults thought more jobs would be lost to automation than created, while only 7% thought more jobs would be created. And in a January 2023 survey, a large majority (73%) of U.S. adults felt that machines with the ability to think for themselves would hurt jobs and the economy. This number was largely unchanged from as far back as April 2015.
This disconnect between societal and personal concern appears to be a consistent pattern. In May 2023, while 64% of U.K. adults believed robotics and AI would cause net job losses, only 14% expressed worry about AI’s impact on their current job, and only 22% worried about its effect on their future career. Perhaps this is because 59% of respondents thought their own job would still primarily be done by humans in 30 years.
The implications for the public’s policy preferences, and potentially for action by policymakers, are significant. In a June 2023 survey, 64% of U.K. respondents aged 16 and over agreed that the government should create new regulations or laws to prevent potential job losses due to AI. Research further suggests that automation concern can increase support for worker-targeted redistributive policies, such as extending unemployment benefits and compensating job losses, but does not appear to significantly boost support for social investment policies like education and retraining programs. Busemeyer and Tober (2023), for example, examined data from 24 OECD countries and found that individual automation concern increased support for compensatory policies but not for social investment policies.
People support regulating AI but do not trust companies or the government alone to do it well
Surveys have consistently found that support for AI regulation and oversight outweighs opposition. Globally, 71% of people surveyed in October 2022 disagreed with the statement that “AI regulation is not needed.” The same survey found that support for regulation in the U.S. increased from 57% to 66% between 2020 and 2022, while in the U.K. it jumped from 66% to 80%. A separate September 2023 survey found that 60% of U.K. respondents believed their government was doing too little to regulate AI, with only 3% thinking it was doing too much.
However, the public is skeptical about who should be responsible for designing and implementing this regulation. In July 2023, 82% of U.S. voters agreed that tech company executives cannot be trusted to self-regulate the AI industry. Similarly, in a May 2023 survey, only 18% of U.K. adults had confidence that tech companies would develop AI responsibly. Yet, problematically, trust in government oversight was not much higher: 68% of U.K. adults had little or no confidence in the government’s ability to regulate AI effectively. In 2023, majorities in the U.S. (63%) and U.K. (66%) said they believed government regulators lacked the understanding of emerging technologies needed to regulate them effectively. In general, trust in government has been low for decades in the U.S. and appears to be at a recent historical low in the U.K.
This trust deficit presents a significant but not insurmountable challenge for AI governance; navigating it and earning the public’s approval will likely require multi-stakeholder involvement and governmental capacity building. An August 2023 survey found that 56% of the U.S. public did not think companies should determine the standards for AI on their own; instead, majorities thought a wide range of actors, including companies, government agencies, universities, non-governmental groups of ethicists and technologists, and end users, should play a moderate or major role in setting ethical standards for AI. Similarly, an October 2022 survey found that majorities of the global public thought government and existing regulators (67%), an independent AI regulator (67%), and industry (64%) should regulate AI applications, with co-regulation (70%) drawing the highest support.
Another potential avenue is international governance, and early indicators suggest public support for international cooperation on AI. In July 2023, when presented with arguments in favor of each, 41% of U.S. voters preferred international AI regulation over national regulation (24%), with particularly strong support (60%) for internationally regulating AI systems used in military applications, much as nuclear weapons are regulated. In September 2023, 64% of U.K. adults believed international governments were doing too little to regulate AI, and only 3% thought they were doing too much. Of course, while the public may support international governance, such efforts face familiar challenges: compromise and negotiation (already difficult at the national level alone), long timeframes required for passage (if the EU AI Act provides an example), implementation and enforcement (see climate change efforts), and the struggle to keep up with the pace of AI development. And the public may well be aware of some of these challenges of global cooperation: One survey found that 58% of U.K. adults were skeptical that countries can effectively work together on AI safety.
We need to track AI public opinion better to inform effective governance
While our research reveals some clear signals about public sentiment toward AI, we also identified significant gaps in how we track and understand these attitudes. First, there is a striking lack of high-quality longitudinal data. To our knowledge, there is no comprehensive, long-running tracker of AI attitudes in the United States, and there have been only limited efforts in other countries. Efforts by MeMo:KI in Germany and, in the U.K., the Public Attitudes to Data and AI Tracker Survey, the BEIS Public Attitudes Tracker, and the Office for National Statistics have come closest. Without coordinated efforts and consistent measurement over time, it is difficult to identify meaningful trends.
Second, there is significant inconsistency in how questions are designed, making it difficult to compare results across surveys. Simple differences in terminology can yield vastly different responses, as can the time horizon specified in questions about automation (five years versus a lifetime). Other challenges include the neglect of key segments of the public, limited use of scientifically validated survey questions, and a lack of open science practices like data sharing.
To address these challenges, we recommend governments and research institutions invest in high-quality, longitudinal trackers of public opinion on AI. These efforts should employ standardized measures that allow for meaningful comparisons over time and across countries.
As one step in support of these efforts, we created the AI SHARE database to provide researchers and policymakers with a comprehensive resource to understand the current landscape of AI attitudes research. The database is categorized along various dimensions to facilitate systematic analysis and is currently being updated with new research studies (surveys and survey-based experiments), along with improved systematic classifications.
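To give a concrete sense of what this kind of systematic classification can enable, the sketch below models survey questions as structured records that can be filtered by country, year, or topic. It is a minimal illustration only; the field names and categories are hypothetical and do not reflect AI SHARE’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical record structure for one survey question;
# the actual AI SHARE schema may differ.
@dataclass
class SurveyItem:
    study_id: str        # identifier for the source study
    year: int            # year the survey was fielded
    country: str         # country of the sampled population
    topic: str           # e.g., "regulation", "employment", "emotions"
    question_text: str   # verbatim question wording
    sample_size: int     # number of respondents

# Toy examples loosely inspired by the surveys discussed above.
items = [
    SurveyItem("S001", 2023, "US", "regulation",
               "Should the government regulate AI?", 1200),
    SurveyItem("S002", 2023, "UK", "employment",
               "Will AI create or destroy more jobs?", 2000),
    SurveyItem("S003", 2022, "US", "employment",
               "Are you worried AI will automate your own job?", 1500),
]

def filter_items(items, country=None, topic=None):
    """Return items matching the given country and/or topic."""
    return [
        it for it in items
        if (country is None or it.country == country)
        and (topic is None or it.topic == topic)
    ]

# Example: list all U.S. questions about employment.
for it in filter_items(items, country="US", topic="employment"):
    print(it.study_id, it.year, it.question_text)
```

Consistent tagging of this kind is what makes it possible to compare question wording and results across studies, countries, and survey waves.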
By standardizing how we measure and track public opinion, we can build a clearer picture of the public’s attitudes, feelings, and behaviors, and develop governance approaches that better reflect public values and concerns.
As AI systems become more powerful and ubiquitous, and as discourse surrounding AI becomes more politicized, understanding public attitudes will only grow more essential. Clear signals of public perceptions of AI-related risks and benefits, issue priorities, and preferred governance strategies are crucial inputs to democratic policymaking. Moreover, governance efforts that do not account for what people actually think and feel about these technologies risk losing legitimacy and effectiveness.