Artificial intelligence is a pervasive presence in people's lives, filtering emails, prioritizing stories in social media feeds, screening job applications, and more. It has also taken up residence in people's anxieties: last year 52% of the American public expressed some level of worry about it, according to a Pew Research survey. The nonprofit community has increasingly turned its attention to those worries, debating whether and how the technology should be managed and regulated. Groups such as the Nonprofit Alliance, the Center for AI Safety, the European Artificial Intelligence & Society Fund, and many others have cautioned that AI development is moving too fast, with too little research, inclusivity, or accountability. The nonprofit group Accountable Tech describes AI as a “threat to society and democracy.”
It's easy to understand the concerns. In the not-so-distant past, when people imagined a future inhabited by superintelligent computers and machines, they often assumed these creations would perform society's drudgery while humans busied themselves with artistic pursuits and leisure. As it turns out, something closer to the opposite is happening. Artificial intelligence is producing art and other content used for leisure, as well as taking over many jobs that pay well, while humans are at risk of being left to fight over scraps too unimportant or unprofitable for AI.
Technology has always displaced workers. AI advocates are quick to point that out, and they're correct. In 1930 there were approximately 235,000 telephone operators in the U.S. Those jobs, held almost exclusively by women, began disappearing with the arrival of automated telephone exchanges and today are effectively gone. Yet despite the individual hardship those losses may have caused, their passing is barely a historical footnote, because subsequent events and government policies fueled an economic expansion in the 1940s and 1950s that created the midcentury middle class.
During the 1960s, just as the information technology revolution was beginning, a group of scientists and sociologists warned President Lyndon B. Johnson that the coming shift would create “a separate nation of the poor, the unskilled, the jobless, who will be unable to find work and to afford life’s necessities.” The World Economic Forum's Henrik Eklund wrote about the warning earlier this year and judged that those “doom and gloom predictions” never came to pass. But in the U.S., wages stopped keeping pace with economic growth shortly after that warning to Johnson, and today tens of millions of people can't afford life's necessities, among them college and housing. According to one alarming survey published on Nasdaq.com this year, 36% of Americans have less than $100 in savings.
The amount of employment threatened by AI represents a whole new ballgame compared with past worker displacements. Commercial drivers and couriers, retail workers, and anyone in repetitive work may see reduced opportunities. The strikes that paralyzed Hollywood were partly about the uptake of AI by studios. Kristalina Georgieva, writing for the IMF, predicts that AI will eventually affect up to 60% of jobs in wealthy nations such as the United States and 40% worldwide. Affect isn't synonymous with usurp; some jobs, possibly half, will be augmented by AI rather than replaced. But with roughly 3.5 billion people employed globally, even the lower worldwide figure translates to some 1.4 billion jobs, so negative impacts loom for huge numbers of people. AI will also exert downward pressure on wages as well-paid positions dwindle while low-wage jobs remain.
AI's potential effects go far beyond the realm of employment. There are concerns about bias, the environmental impact of supercomputing energy usage, privacy, autonomy, predictive policing, and a host of other issues. The nonprofit Open Markets Institute is sounding some of the loudest alarms. In its November paper, AI in the Public Interest, it states plainly that “a handful of Big Tech companies—by exploiting existing monopoly power and aggressively co-opting other actors—have already positioned themselves to control the future of artificial intelligence and magnify many of the worst problems of the digital age.” The problems listed include misinformation, declining journalism, surveillance advertising, monopolistic abuse of smaller businesses, and the undermining of compensation for creative work.
Various nonprofits have published strategies to prevent or mitigate the many potentially negative effects of AI. Accountable Tech, AI Now Institute, and the Electronic Privacy Information Center (EPIC) collaborated on the Zero Trust AI Governance framework, a slate of recommendations for lawmakers. It urges the enforcement of existing laws, the writing of “bold, easily administrable, bright-line rules,” and placing the burden of proof on AI developers to regularly demonstrate that each advance of their systems is safe. Congress appeared to have heeded this and other warnings when a bipartisan group of senators released a roadmap for regulating artificial intelligence, but some observers, such as the Open Markets Institute, see the proposal as inadequate at best and a waste of time and money at worst.
Probably no proposal would go far enough, because the issues surrounding AI are like invasive ragweed: they extend in all directions, from algorithms that embed racial bias to autonomous weapons that cede decisions about lethal force to inscrutable machine calculations. Many problems are underreported. For example, generative AI systems use enormous amounts of chilled water for cooling. In West Des Moines, Iowa, a single AI data center used approximately 6% of the district's fresh water, a fact that came out only because of a lawsuit. Other figures are educated guesses because the data is proprietary. We Are Water Foundation, citing a Cornell University study, estimates that yearly AI water consumption could reach 6.6 billion cubic meters by 2027, roughly half of what the United Kingdom uses in a year.
In sketching out solutions, Accountable Tech draws a comparison between AI and pharmaceuticals, noting how the latter industry spends billions of dollars screening thousands of compounds, advancing the promising few dozen into trials, and finally sending to market a very small number of proven medicines. But that process exists largely because of the Food and Drug Administration, without which the danger to the public from unsafe medicines would multiply. Unfortunately, most of the proposals to regulate AI contain “voluntary” elements, and ideological opposition among a minority of lawmakers to expanding the federal government stands in the way of creating a legitimate and empowered AI oversight body.
Right now, though, 74% of registered voters favor some form of AI regulation, including 68% of registered Republicans. Some states have taken up the issue. In California, no fewer than seven AI bills have been proposed. A bill targeting algorithmic discrimination passed in Colorado. But a state-by-state approach may not make a discernible impact on a technology that spreads virally and is concentrated in the hands of a few tech giants. The percentage of the public favoring legislation is liable to drop if and when stronger lobbying efforts take hold in the corridors of Washington, D.C.; the moment for nonprofits to advocate for protections, therefore, may never be more opportune.
Nonprofits working in the European Union have advised the European Commission about the need for regulation, but the same problem exists there as in the U.S. Sarah Myers West, writing on AI Now's website, noted that, “The economic power amassed by these firms exceeds that of many nations, and they’ve demonstrated a willingness to flex that muscle when needed to ensure that policy interventions do not perturb their business objectives.” She went on to ask: “Can any single nation amass sufficient regulatory friction to curb unaccountable behavior by large tech firms?”
Solutions are slow in coming for a technology moving at the speed of light. Top experts are concerned. Leaders from Google DeepMind, OpenAI, Anthropic, and other AI developers have gone on record with their fears. But they're often talking to politicians who are insulated from the negative effects of AI by their limited use of advanced technology. Many either don't understand how it works or have learned what they know largely from AI advocates.
Advances in the field of AI promise to improve areas such as medicine, and some say entirely new job-producing sectors of the economy could sprout. It's happened before. Though the correlation isn't direct, yesteryear's telephone operators are dwarfed in number by today's call center employees. But call center jobs are under threat from AI too. At the moment it looks increasingly like the majority of jobs AI creates will be those that service AI itself in some form. These are uncharted waters for the global economy.
Concerns about AI risks have even penetrated the high-walled Vatican City. Pope Francis, earlier this year, asked AI developers to avoid turning human relationships into mere algorithms. He laments that if people lose the ability to make decisions about themselves and their lives, humanity could be condemned to a “future without hope.” “We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs,” he said. “Human dignity itself depends on it.”
According to some, AI has even become a problem of existential proportions. It's well understood by now that AI will chaotically reorder our economic, social, and political landscapes. But last year the nonprofit Center for AI Safety published an open letter, signed by hundreds of AI researchers, engineers, and industry executives, warning of the risk of literal human extinction from AI and urging leaders to make preventing it a global priority on the scale of pandemics and nuclear war. Humans, however, when faced with a choice between harm and profit, have time and again pursued the latter. That means the experts, nonprofits, and concerned citizens who fear artificial intelligence will run wild have their work cut out for them to prevent that outcome.
- Read the White House Blueprint for an AI Bill of Rights.