The risks of Artificial Intelligence:  China vs US

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”  This alarming warning is a direct quote from a statement released six weeks ago by leaders in the emerging field of artificial intelligence (AI). 

This was preceded by a longer statement in March, which had been signed by an even larger group of AI experts.  It recommended a “six-month pause” in research on the most powerful AI systems to provide time to study their risks.

When I wrote in this blog about China’s race to become the world leader in AI technology last year, the topic felt a bit esoteric.  But since then, public interest in AI has skyrocketed, largely as a result of two releases of an experimental program that answers almost any question users ask:  ChatGPT (based on the GPT-3.5 model) and its more powerful successor, GPT-4. 

For example, a college student can type in a request like “Write a ten-page term paper on the War of Jenkins’ Ear (1739),” and the program will do just that.  ChatGPT is free, and it quickly became “the fastest-growing consumer software application in history.”  This led to a stream of white papers and op-eds by experts and amateurs alike, who began clutching their pearls about the risks of AI.

For example, according to a recent report from Goldman Sachs, as many as 300 million jobs could be threatened by AI.  The two categories of jobs most at risk are “office and administrative support jobs” (46% of tasks can be automated) and legal work (44% automatable).  The risk to lawyers may not be surprising, since GPT-4 “recently passed a simulated law school bar exam with a score around the top 10% of test takers.”  AI has even become an issue in the ongoing strike of Hollywood writers, who are afraid of its ability to generate new content and fear that the technology is “coming for their jobs.”  

If you haven’t played with ChatGPT yet, you really should.  Anyone can sign up for the free program and start getting answers in less than five minutes.  I guarantee that if you try this experiment: 

  1. You will be absolutely amazed, and maybe even a bit frightened.
  2. You will spend far more than five minutes playing with the program. 

My favorite description of the current state of ChatGPT came from my brother: “It’s like talking to a brilliant person who has memorized the entire internet.  Only sometimes she’s really drunk.”  But the program is constantly learning from its errors, and gradually sounding more sober.

Many of the people who have been breathlessly writing about AI risks focus on programs that aim to be at least as intelligent as you and me, called AGI – artificial general intelligence.  As I explained in last year’s AI post, there’s just one problem with AGI – it doesn’t exist.  Many experts believe it never will.

The AI programs that do exist, like GPT-4, fall into a completely different category:  they are designed to do just one thing, like play chess, recognize faces, or evaluate mortgage applications.  However, even today’s limited applications come with some risk.

In China, the biggest risk of AI is that it will work too well.  According to NBC News, “A lack of privacy protections and strict party control over the legal system have resulted in near-blanket use of facial, voice and even walking-gait recognition technology to identify and detain those seen as threatening, particularly political dissenters and religious minorities.” 

AI is the technology behind China’s emerging social credit system, which rewards “well-behaved citizens” with a wide range of benefits including discounts on heating bills, skipping hospital waiting rooms, and even getting more matches on dating sites.

This public display of “untrustworthy people” is an example of China’s AI-powered social credit system.

Due to generous government support, China “produces more top-tier AI engineers than any other country—around 45 percent more than the United States… It has also overtaken the United States in publishing high-quality AI research, accounting for nearly 30 percent of citations in AI journals globally in 2021, compared with 15 percent for the United States.”

Meanwhile, the government uses cell phone data to track users’ location minute by minute, not to mention everything they type into their phones. 

Of course, programs that invade privacy like this would be strictly forbidden in the US.  But that doesn’t seem to bother Chinese citizens.  According to a survey conducted by Ipsos last year, “China [is]… the most optimistic country in the world when it comes to AI, with nearly four out of five Chinese nationals professing faith in its benefits over its risks.”  In contrast, according to the same survey, “only 35 percent of Americans” agree.

The US’ list of risks is quite different from China’s.  When the US Senate held hearings on AI risks in May, Senate Judiciary Chair Dick Durbin identified the top AI risks as “weaponized disinformation, housing discrimination, harassment of women, impersonation fraud, voice cloning… [and] workforce displacement.”  Similarly, when Sam Altman, CEO of OpenAI, the company that created ChatGPT, appeared before a Senate subcommittee, he said one of his areas of greatest concern was “the potential for AI to be used to manipulate voters and target disinformation… especially because ‘we’re going to face an election next year and these models are getting better.’”

In my opinion, the greatest risk by far is an accidental war started by an error in a military AI application.  “An accident involving AI could be particularly risky [since] it could be difficult to determine whether an incident was deliberate or not.”

According to a white paper published by the Center for AI Safety, AI has been the subject of many military experiments, including a program that “outperformed experienced F-16 pilots in a series of virtual dogfights… [with] aggressive and precise maneuvers the human pilot couldn’t outmatch. (p. 13)”  According to the same paper, the first known use of AI in battle came in Libya in 2020, when “retreating forces were hunted down and remotely engaged by a drone operating without human oversight.”  Such applications are likely to multiply in an AI arms race as “ubiquitous sensors and advanced technology on the battlefield… [provide a tremendous amount of] information. AIs help make sense of this information, spotting important patterns and relationships that humans might miss.” (p. 14)

If you wanted to maximize the risks of a military accident getting out of hand, you would start with an authoritarian society where people are afraid to criticize their bosses, and the government refuses to acknowledge mistakes.  Oh look.  I just described China’s approach to AI.

A few weeks ago, Foreign Affairs published an article entitled “China is flirting with AI catastrophe” which argued that “from Chernobyl to COVID, history shows that the most acute risks of catastrophe stem from authoritarian states, which are far more prone to systemic missteps that exacerbate an initial mistake or accident.” 

AI does indeed involve risks, but many are simply based on human aversion to change.  In the late 18th and early 19th centuries, groups of workers in UK cotton and wool mills known as Luddites destroyed industrial machines that threatened their jobs.  It didn’t work; they still lost their jobs.  To add insult to injury, the word Luddite has become a pejorative term describing anyone opposed to technological advances.   

So what’s the bottom line?  How much should you worry about AI?

Unless your job is threatened, my answer is not at all.  Most people have already got enough problems to worry about, including health, money, relationships, and whether the Red Sox will still be in last place when the baseball season ends.  If you’ve still got the bandwidth to worry about more than just personal challenges, I’d put climate change first, then accidental war, then the growing gap between rich and poor, and the next pandemic, in that order. 

So in my opinion, whether you are in the US, China, or somewhere else, when it comes to AI risks, I would follow the advice from the old song:  Don’t worry, be happy.
