Generative AI is the biggest, brightest, and shiniest object in the tech world right now. Programs like ChatGPT, Stable Diffusion, and Midjourney dominate the headlines, and the news cycle is saturated with wall-to-wall chatbot coverage. It was evidently all the rage at last week’s SXSW Festival.
This volcanic enthusiasm stands in marked contrast to the high-profile trials and travails plaguing Big Tech and offsets a bit of the deep chill and uncertainty hanging over Silicon Valley following the SVB collapse.
It feels like we’re on the precipice of a major paradigm shift:
Exponential View notes the cost for OpenAI’s ChatGPT API has dropped a gobsmacking 96% compared to last year’s price point for chatbots with similar capabilities.
ChatGPT notably reached 100 million users within two months of launch, making it the fastest-growing consumer application in history.
Wharton Professor Ethan Mollick sees it as a game changer, writing: “the traditional boundaries of jobs have suddenly shifted. Machines can now do tasks that could only be done by highly trained humans. Some valuable skills are no longer useful, and new skills will take their place. And no one really knows what any of this means yet.”
Goldman Sachs Research's Kash Rangan believes that “every product you touch on the consumer side is going to have more AI in it in the next six to 12 months than it did in the last five years.”
Across the board—sales and marketing, R&D, legal and compliance, operations management—potential business use cases abound. Reams of “creative work”—once the near-exclusive domain of human labor—suddenly appear ripe for automation.
There are some gigantic numbers in play as well. The total value of generative AI startups has skyrocketed 6x in a mere three years and now sits just shy of $50bn. Venture funding through Q1 2023 has tripled compared to Q1 2022, even as enthusiasm for Web3 and the Metaverse has fallen off a cliff.
The two charts below capture the trajectory:
[Chart: generative AI startup valuations (Source: Twitter/Dealroom)]
[Chart: venture funding trends (Source: Axios)]
The pace is only quickening: a significantly more powerful large language model, GPT-4, released last week to considerable fanfare, is set to include text-to-video generation, which, per the Evening Standard, will, among other things, allow users “to create short videos based on rough text descriptions of a scene.” The program is also notably sharper than its predecessors when it comes to standardized test-taking:
[Chart: GPT-4 performance on standardized tests (Source: OpenAI)]
Disentangling the hype from the underlying reality of exciting new technologies is always a challenge, but with projected compound annual growth rates through the roof, it’s reasonable to think we could be looking at a trillion-dollar business by decade’s end. From Silicon Valley to Shanghai, the buzz is very real.
Bigger picture
Of course, generative AI is just one piece of the broader puzzle. AI is a general-purpose technology, which is to say, per Azeem Azhar, it “has a widespread impact across industries and society and intersects with other technologies to spur more innovation.” Taken holistically, we’re looking at a series of technologies whose transformative impact—both positive and negative—on virtually everything can hardly be overstated. The US National Security Commission on Artificial Intelligence’s Final Report doesn’t mince words laying out the stakes:
The rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence—and in some instances exceed human performance—is world altering. AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience. AI is also the quintessential “dual-use” technology. The ability of a machine to perceive, evaluate, and act more quickly and accurately than a human represents a competitive advantage in any field—civilian or military. AI technologies will be a source of enormous power for the companies and countries that harness them.
The sudden onset of generative AI has brought home to the general public what the experts have been saying for years: artificial intelligence is on an exponential growth curve. Over the course of the decade, it will radically and rapidly disrupt modern life, with ripple effects hitting everything from work, social interaction, and citizenship, to military competition and the global power balance.
For nation-states, there are gigantic opportunities afoot: to drastically improve governance outcomes across areas like health care, education, climate, public safety, and urban management; to gain a decisive edge over military rivals; and to propel society into the next stage of innovative, high-tech development.
That’s the optimistic line. What’s equally clear is that the road ahead will be paved with massive, hard-to-anticipate, and potentially existential risks—perhaps orders of magnitude greater than anything modern civilization has confronted before. I want to focus here on three such risks, all already well underway.
Rivalry
The US-China relationship, the planet’s most complex and consequential bilateral tie, stands at its lowest point in decades. DC and Beijing are miles apart on, well, every area of consequence: human rights, COVID, trade, and fundamental questions of ideology and regime legitimacy. There’s mounting concern that the two powers will come to blows over the Taiwan Strait—potentially triggering WWIII—in the not-so-distant future.
There’s no escaping geopolitical reality: the age of great-power rivalry is back, and technological prowess—which has long served as a key barometer of superpower status—is the focal point. The zero-sum logic of national security now permeates our highly globalized innovation ecosystem. In both China and America, the martial language of conflict, competition, and power struggle frames much of the discourse around artificial intelligence.
There are a handful of reasons why I think this matters. If attained, AI mastery could allow China to catapult past the United States as the 21st century’s technology leader, an outcome that Washington—through its vigorous application of trade and technology controls, sanctions, and investment safeguards—now appears utterly hellbent on stopping. What we have is a two-horse race between the US (and its allies) and China. No other nation or bloc of nations comes close.
Chinese President Xi Jinping and his coterie—perhaps more so than their counterparts in foreign capitals—grasp the gravity of the situation, seeing AI as utterly crucial to both military and commercial competitiveness. China’s AI policy community is notably razor-sharp when it comes to tracking the latest overseas developments in the field. Over the last six years, Beijing has treated the technology as a front-and-center strategic priority necessitating a whole-of-government mobilization effort.
We see evidence of this in:
High-level political signaling from prominent officials.
AI’s prominence in critical strategic planning documents and policy initiatives.
The firehose of subsidies, research grants, R&D expenditure, and state-backed investments—at the national and sub-national level—aimed at cultivating domestic capabilities.
Comprehensive changes to China’s data/cyber governance regime.
An overhaul of various state agencies tasked with overseeing AI’s development and regulation.
Georgetown University’s excellent Center for Security and Emerging Technology (CSET) summarizes the Chinese state’s approach to AI as follows:
In 2017, China’s State Council released a comprehensive “New Generation AI Development Plan” aimed at making China the world’s leading AI power by 2030. China’s Communist Party, national government, universities, research labs, and technology companies over the subsequent half decade have demonstrated an unwavering commitment to promote AI not only in applications, where China has a solid track record, but also—as per the plan—in cutting-edge research that historically had not been China’s forte.
In my view, there are two major concern points:
Power transition: China’s political system proves more adept at harnessing and managing AI than its Western rivals, giving it a decisive advantage. By doing so, China gains primacy, supplanting the US as the world’s hegemonic power. The post-WWII liberal international order—underpinned by American power, values, and strategic interests—collapses as an authoritarian techno-state reshapes the global trade and security system in its image. Taiwan is coercively unified with the People’s Republic; America’s East Asian alliance network breaks apart; and China sets the agenda across all major international institutions. While this scenario is undoubtedly appealing to the Chinese Communist Party, it would portend disaster for the US, its regional allies, and global democracy writ large.
Power conflict: both sides recklessly plow full-speed ahead in the AI race, with scant regard for the consequences. Major advances in AI, big data, and cyber warfare capacity inject ever-greater levels of paranoia, contentiousness, and hostility into a profoundly fraught relationship that’s been trending in a negative direction. The future of AI-enabled warfighting slams headfirst into outdated military doctrines, significantly increasing the risk of miscalculation. Great-power competition tilts into a devastating kinetic conflict in which two nuclear superpowers—bolstered by autonomous and semi-autonomous weapons systems—inflict massive casualties upon one another, crater the post-Cold War international order, and catalyze the worst economic crisis in a century.
Two epoch-shaping developments—intense Sino-American rivalry and the AI revolution—will likely come to a head, and collide with one another, over the course of this decade. It’s not hyperbolic to point out that the historical stakes are astronomical. The manner in which the two sides succeed (or fail) in managing their burgeoning strategic rivalry, all against the backdrop of disorienting technological change, will be utterly critical in determining humanity’s fate.
Control
The idea that AI and big data will open up powerful new vistas for manipulation, mass surveillance, and techno-totalitarianism has haunted the conversation around digital technologies for decades.
Unsurprisingly, the current epicenter of these fears is China, where an overbearing Communist Party has already melded draconian political controls and strongman-style autocracy with the awe-inspiring analytical powers of Big Data and artificial intelligence.
The recent history here is hardly reassuring: over the past decade, Xi has ratcheted up every instrument of control—suppression, social credit, censorship, propaganda, lavish funding for the military and domestic security apparatus, nationalism, lawfare, and the subordination of private sector interests to the political whims of the party.
China already houses world-beating AI firms in areas like translation, facial recognition, and computer vision. The intersection between a paranoid security state and its burgeoning AI prowess makes for chilling reading, as Bloomberg’s Adrian Wooldridge and The Atlantic’s Ross Andersen have documented.
The surveillance nation can be seen at its most terrifying in Xinjiang province, where the state is using surveillance to grind the Uyghurs and other Turkic minorities into model citizens. The region is festooned with electronic eyes and ears — long “rifle-style” surveillance cameras that can zoom in on details, spherical rotating cameras that follow people as they walk down the street, infra-red cameras that work at night, Wi-Fi “sniffers” that detect the unique ID numbers of smartphones, bar codes at the entrance to every home and business that the police can scan to bring up a list of registered residents and employees, “anti-terrorism swords” that, according to Chin and Lin, can search smartphones for more than 53,000 identifiers of Islamic or political activity, and drones, equipped with facial recognition technology, that fly over isolated areas.
China already has hundreds of millions of surveillance cameras in place. Xi’s government hopes to soon achieve full video coverage of key public areas. Much of the footage collected by China’s cameras is parsed by algorithms for security threats of one kind or another. In the near future, every person who enters a public space could be identified, instantly, by AI matching them to an ocean of personal data, including their every text communication, and their body’s one-of-a-kind protein-construction schema. In time, algorithms will be able to string together data points from a broad range of sources—travel records, friends and associates, reading habits, purchases—to predict political resistance before it happens. China’s government could soon achieve an unprecedented political stranglehold on more than 1 billion people.
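The core identification mechanic Andersen describes is mundane at the code level. Here is a toy sketch using synthetic 128-dimensional vectors (no real biometric model or data; the names and threshold are invented for illustration): reduce each face to an embedding, then match a camera observation against a database of enrolled identities by cosine similarity.

```python
# Toy sketch of embedding-based identification (synthetic vectors only; a
# real system would compute embeddings with a trained face-recognition model).
import numpy as np

rng = np.random.default_rng(1)

# Database of enrolled identities, each represented by a 128-d embedding.
database = {name: rng.standard_normal(128) for name in ("person_a", "person_b", "person_c")}

def identify(observation: np.ndarray, db: dict, threshold: float = 0.8) -> str:
    """Return the enrolled identity whose embedding best matches, if any."""
    best_name, best_sim = "unknown", threshold
    for name, emb in db.items():
        # Cosine similarity between the camera observation and each identity.
        sim = emb @ observation / (np.linalg.norm(emb) * np.linalg.norm(observation))
        if sim >= best_sim:
            best_name, best_sim = name, sim
    return best_name

# A noisy embedding of person_b captured by a street camera:
frame = database["person_b"] + 0.1 * rng.standard_normal(128)
print(identify(frame, database))  # -> person_b
```

Once a match exists, every other record keyed to that identity—travel history, purchases, associates—is a single database join away, which is what makes the sheer scale of the camera network so consequential.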
Surveillance and coercion are only part of the risk equation, however. Manipulation and persuasion—the ability to harness, hold, and shape public opinion—are also emerging as major concern points for anyone worried about the future of liberal democratic values in the digital age. To that end, the Center for a New American Security’s (CNAS) Bill Drexel points out that generative AI enhances authoritarians’ capacity to bombard their citizenry with high-quality pro-regime propaganda orders of magnitude more convincing than in decades past.
Instantaneous, omnidirectional connectivity and complex globalized interdependence—underwritten by dazzling information and communications tech—present a downside: the current paradigm, for all its virtues and successes, often feels disordered, insecure, and precarious. Following a 2019 surge in anti-government protests across the globe, Noah Smith pointed out that political instability may be an endemic feature of the social media age, writing:
Perhaps the internet is not a tool of freedom so much as evolutionary pressure that selects for authoritarianism. Perhaps social media has changed the nature of great-power competition into an endurance match in which control of the internet is key. Perhaps every country that doesn’t implement its own version of the Great Firewall and the 50 Cent Party will eventually fall victim to waves of Twitter-generated unrest.
Regardless of whether or not that thesis is correct—and there’s reason to seriously question it—I think it captures the big bet China’s political elite are making on the trajectory of history: in a time of spiking polarization, instability, and disorder, China’s brand of security-centric authoritarianism—where the state aggressively monitors, controls, and restricts the flow of data, information, communications, and strategic technology—will allow it to best manage internal challenges while outcompeting and outlasting its ideological rivals.
Seen through that prism, artificial intelligence creates an incredibly powerful toolkit for mass-scale electronic surveillance, suppression, and social control that, per Smith, affords Xi and like-minded autocrats a golden opportunity “to achieve a level of totalitarian social control never before imagined except in dystopian novels.” Nor is this alarming dynamic confined to the machinations of the Chinese Communist Party. Global freedom has been in steady decline for 17 years (per Freedom House), and it’s easy to see the allure of China’s “surveillance state” model to autocrats and would-be autocrats everywhere. The model appears ripe for export: Brookings notes that China already has a comparative advantage in facial recognition technology and that its firms have been more than willing to sell their techno-surveillance stacks abroad.
Now, I don’t want to pick too much on China here, because this is far from simply an autocracy story. In a series of bracing reports earlier this month, Wired looked at the implications of big data and machine learning for citizens in liberal Western democracies. I found this section—on the Rotterdam city government’s use of algorithms in determining the allocation of welfare benefits—particularly alarming. I think it’s worth block-quoting at length to get a sense of what these systems can look like in practice:
Rotterdam’s algorithm is best thought of as a suspicion machine. It judges people on many characteristics they cannot control (like gender and ethnicity). What might appear to a caseworker to be a vulnerability, such as a person showing signs of low self-esteem, is treated by the machine as grounds for suspicion when the caseworker enters a comment into the system. The data fed into the algorithm ranges from invasive (the length of someone’s last romantic relationship) and subjective (someone’s ability to convince and influence others) to banal (how many times someone has emailed the city) and seemingly irrelevant (whether someone plays sports). Despite the scale of data used to calculate risk scores, it performs little better than random selection.
Machine learning algorithms like Rotterdam’s are being used to make more and more decisions about people’s lives, including what schools their children attend, who gets interviewed for jobs, and which family gets a loan. Millions of people are being scored and ranked as they go about their daily lives, with profound implications. The spread of risk-scoring models is presented as progress, promising mathematical objectivity and fairness. Yet citizens have no real way to understand or question the decisions such systems make.
Governments typically refuse to provide any technical details to back up claims of accuracy and neutrality. In the rare cases where watchdogs have overcome official stonewalling, they’ve found the systems to be anything but unbiased. Reports have found discriminatory patterns in credit scoring, criminal justice, and hiring practices, among other areas.
Being flagged for investigation can ruin someone’s life, and the opacity of the system makes it nearly impossible to challenge being selected for an investigation, let alone stop one that’s already underway.
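It takes strikingly little machinery to build such a “suspicion machine.” Below is a minimal sketch with invented features and purely synthetic data; a simple logistic regression stands in for whatever model Rotterdam actually used. The point is the pipeline: personal attributes go in, an authoritative-looking risk score comes out.

```python
# Minimal, hypothetical sketch of a welfare risk-scoring system of the kind
# Wired describes -- synthetic data, invented features, not Rotterdam's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is a person; the columns mirror the kinds of inputs Wired reports:
# gender, length of last relationship, emails sent to the city, plays sports.
X = rng.random((500, 4))
y = rng.integers(0, 2, size=500)  # past investigation outcomes -- here, pure noise

model = LogisticRegression().fit(X, y)

# A new applicant is reduced to a feature vector and scored; above some
# threshold, a human investigation is triggered.
applicant = np.array([[1.0, 0.15, 0.8, 0.0]])
score = model.predict_proba(applicant)[0, 1]
print(f"risk score: {score:.2f}", "-> flagged for investigation" if score > 0.5 else "")

# Trained on noise, the model still emits confident-looking scores --
# echoing Wired's finding that the system "performs little better than
# random selection."
```

The sketch’s lesson is that the danger lies less in the model than in the pipeline around it: the score looks precise and objective regardless of whether it measures anything at all.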
The Wired feature speaks to a major threat AI poses to human welfare: alignment. Engineering AI systems to ensure they work as their designers intended, and do not unleash potentially catastrophic side effects, is hard enough. Simultaneously ensuring that the objectives of those designers—profit-maximizing tech companies and control-obsessed governments who will grasp the commanding heights of this new AI economy—align with broader societal interests makes for a mind-bogglingly wicked challenge.
To put it more simply, there are two existential questions at stake here: will the machines, whose operations remain a black box to their human observers, serve the designers, and will the designers, imbued with an awe-inspiring power to influence the direction of society, serve humanity’s best interests?
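To make the first of those questions concrete, here is a toy sketch, with entirely made-up reward functions, of the canonical misalignment failure: an optimizer faithfully maximizes the measurable proxy it was given (“engagement”) while the unmeasured objective its designers actually care about (“user wellbeing”) goes negative.

```python
# Toy illustration of objective misspecification (hypothetical functions,
# not any real system): the optimizer sees only the proxy metric, never the
# true objective its designers care about.

def proxy_reward(sensationalism: float) -> float:
    # What the system can measure: engagement climbs with sensationalism.
    return 10 * sensationalism

def true_value(sensationalism: float) -> float:
    # What the designers actually want: wellbeing peaks at moderate levels
    # of sensationalism, then collapses.
    return 10 * sensationalism - 15 * sensationalism ** 2

# A naive optimizer sweeps the dial and picks whatever maximizes the proxy.
best = max((s / 100 for s in range(101)), key=proxy_reward)

print(f"optimizer chooses sensationalism = {best:.2f}")
print(f"proxy reward (seen):  {proxy_reward(best):+.1f}")
print(f"true value (unseen):  {true_value(best):+.1f}")  # negative: worse than doing nothing
```

The system here is working exactly as specified; the specification is the problem, and no observer watching only the proxy metric would ever notice.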
Looking ahead, the speed and scale of change we’re facing will likely outpace humanity’s ability to comprehend, much less effectively regulate, artificial intelligence. Placing more and more decision-making in the hands of hyper-intelligent thinking machines—while concentrating more and more wealth and power in the hands of a few tech goliaths—opens up potentially terrifying avenues for manipulation, abuse, exploitation, systemic bias, and miscalculation.
Finding a workable framework will be fiendishly difficult. OpenAI openly states that “unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together,” adding that “there is currently no known indefinitely scalable solution to the alignment problem.”
Fracture
Harnessed properly, AI advances could act as a massive social risk alleviator: unlocking growth and innovation across industries, improving governance and public safety, and fostering smarter cities and better ecological management.
That’s only one side of the coin, however.
If the second risk flows from a fear that AI will unlock insidious new forms of manipulation and social control that centralize power in the hands of Big Government and Big Tech, then think of this final risk as something of its opposite: disorder and fracture.
Setting aside pressing questions of geopolitical conflict and potent techno-authoritarianism, we’re peering into a new era in which the risks around AI are abundant, interlocking, and likely to turbocharge many of the divisive, centrifugal forces already plaguing democratic societies. These include:
Widespread automation and the destabilization of white-collar work result in huge swaths of the labor force being stripped of their income, security, status, and dignity.
Relentless, unceasing innovation—which places extreme levels of competitive and cost pressure on rigid hierarchical organizations ill-equipped to adapt—leads to waves of business failure, the sudden collapse of entire industries, and the breakdown of effective government administration.
Skyrocketing inequality, a further hollowing out of the middle class, and ever-greater levels of financial precarity.
Explosions of public unrest, anger, conspiracism, and politically-motivated violence.
Extreme polarization and the empowerment of flagrantly illiberal/anti-democratic forces.
New security threats: a surge in cybercrime, hacking, and cyber espionage mixed with incredibly potent forms of disinformation and computational propaganda which will appear credible to even the most astute human viewer.
To that list, I would add one more risk that I see as particularly insidious: alienation and exhaustion. AI will likely act as an accelerant for many of the prevailing trends and pathologies plaguing the digital age: our discomfort with sudden change, unpredictability, complexity, and hard-to-grasp interdependence; our struggles to manage information overload, digital addiction, and broken attention spans; the inability of many citizens to distinguish truth from fiction and resist misleading conspiracies and falsehoods; and the prevailing sense that modern society is deeply disordered. Perhaps above all else, I fear it will perpetuate a powerful feeling that, both individually and collectively, humanity is being stripped of agency and control.
Given that ominous backdrop, it’s plausible that more and more people—deeply frustrated with their physical realities—will further withdraw into increasingly sophisticated and personalized digital cocoons.
I think that’s where we now stand. Like it or not, every major institution exerting influence over modern life—government, business, media, and the military—will be greatly impacted by the exponential changes quickly coming down the pipeline.
I’m not an unabashed AI dystopian. It’s certainly conceivable that none of these grim scenarios play out in the manner I fear they might. Perhaps the hype around ChatGPT fizzles out within one or two iterations as the technology slams into unforeseen technical hurdles. Perhaps we slow down and figure out collaborative new ways to manage the onslaught of risks looming just over the horizon.
I certainly hope so but remain unconvinced. Regardless, I think it’s important to be crystal clear about the major challenges we will likely face in the years ahead. Seen through that negative prism, AI is less a tool for risk alleviation and pragmatic problem solving and more an existential risk generator.