Politics

Smart Moves

Seeking to gain a technological lead amid growing concern over the risks AI poses, countries worldwide are rushing to regulate AI while trying not to hinder its development

By Xie Ying, Ha Like. Updated Oct. 1


Stories of scammers using AI to pose as relatives pleading for money over video calls are appearing with increasing frequency in Chinese media. Because the scammers look and sound exactly like the relative they impersonate, victims often hand over money without question.

These scams also target officials and their families. According to Outlook, a news magazine under the State-run Xinhua News Agency, officials are public figures, so scammers can easily harvest samples of their video and audio from the media. An unnamed official told the magazine that someone had used his face and voice to scam many of his friends on WeChat.

“Audio synthesis requires at most 32 words and a minimum of only 10,” Zhao Mingming, a cybersecurity expert with the State Grid Information & Telecommunication Industrial Group, told Outlook.  

Aggressive use of AI technologies has triggered worldwide debate, with people increasingly voicing concerns over threats to data and network security, intellectual property infringement, profiling and discrimination.

Earlier this year, British-Canadian computer scientist Geoffrey Hinton, known as the “Godfather of AI” for his pioneering work in deep learning, called AI’s potential to falsify data and information a “more urgent” threat to humanity than climate change. He ended his decade-long tenure at Google in May to speak more openly about the technology’s looming dangers.

Calls to legislate the development and use of AI technologies are increasing, and many countries and regions have issued laws and regulations. However, worries remain over whether such government oversight will obstruct the development of this emerging industry.

Global Guardrails 
Concerns about AI’s negative influence have grown in step with its development. Soon after OpenAI released its large language model GPT-4 in March, Elon Musk – an early backer of OpenAI – joined AI experts and industry leaders in signing an open letter calling for a halt of at least six months on training AI systems more powerful than GPT-4.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs... Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads.  

These risks, according to the letter, include flooding the internet with false information, automating away jobs and developing an intelligence that could potentially deprive humans of their control over civilization. Within a month, the letter received more than 1,000 signatures from scientists, researchers and business leaders.  

The letter was published through the Future of Life Institute (FLI), a US nongovernmental organization founded in 2014 that studies threats from AI. Musk, a longtime FLI donor and advisor, launched his own AI startup, xAI, in July.

The late cosmologist Stephen Hawking, a founding member of FLI’s Scientific Advisory Board, told the BBC in 2014 that AI “could spell the end of the human race.” The same year, Musk told The Washington Post that using AI is like “summoning the demon.” Since then, Musk has tweeted frequently about the management of AI development.

At a US congressional hearing on May 16, OpenAI CEO Sam Altman urged lawmakers to regulate AI and voiced his concerns over the technology’s potential harms. In late May, Altman proposed an international regulatory body to govern AI development.

Earlier, the Business Software Alliance (BSA), which represents tech giants like Microsoft, Adobe and IBM, had publicly called on the US government to draw up AI regulations based on privacy laws.

This is already underway in the European Union. On June 14, the EU Parliament green-lighted negotiations for its AI Act, with 499 votes in favor, 28 against and 93 abstentions. The Parliament, EU member countries and the European Commission will hold talks to finalize the articles.  

The AI Act is expected to pass in late 2023, which would make it the world’s first law of its kind.

“There have always been arguments worldwide about whether to legislate AI technologies. The EU’s AI Act will spur other countries to pass similar legislation,” Zhao Jingwu, an associate law professor at Beihang University in Beijing, told NewsChina.  

A week after the EU vote, US President Joe Biden met with tech industry leaders in San Francisco to discuss the risks of AI. Although the US issued guidance on the management of AI applications in early 2020, the federal government has yet to take strict measures or pass binding laws or regulations. In October 2022, the White House released the Blueprint for an AI Bill of Rights, which provides a framework for the management of AI technologies, though it is not binding.

On June 21, Senate Majority Leader Chuck Schumer proposed his AI framework in a speech at the Center for Strategic and International Studies (CSIS), where he called for an AI act. To speed up legislation, Schumer said he planned to launch a series of AI insight forums starting in September to discuss innovation, intellectual property, national security and privacy issues regarding AI technologies.

Chinese experts said the EU’s and US’s moves to ramp up AI legislation indicate their intention to take the lead in shaping digital strategy.

“The EU’s AI Act will apply not only within the EU but also to foreign users who receive data from AI systems inside the EU... It expands the scope of regulation and signals the EU’s intention to claim jurisdiction over data governance,” Peng Xiaoyan, executive director of the Hangzhou branch of Beijing V&T Law Firm, told NewsChina.

Her view is shared by Jin Ling, who wrote in a February 2022 article for People’s Tribune, a magazine under the People’s Daily, that the AI Act is the EU’s attempt to offset its technological disadvantage by playing a fuller role in regulation.

This helps explain why British Prime Minister Rishi Sunak discussed an international governing body for AI with Biden on June 8. The next day, French President Emmanuel Macron announced a plan to establish an AI management body in France.

Different Directions 
In early June, China’s State Council announced a draft AI law would be submitted to the Standing Committee of the National People’s Congress (NPC), China’s highest legislative body. In July, the Cyberspace Administration of China joined six other agencies in issuing interim measures for the management of generative AI services, which experts believe will lay the foundation for future AI legislation.

China began planning in 2017, when the State Council rolled out an AI development program. In that document, the government proposed to establish ethical norms and regulations for AI technologies in specific fields by 2020, and a full suite of laws and policies for AI-related safety by 2025.

According to the Artificial Intelligence Index Report 2023, released by Stanford University on April 3, the number of laws mentioning AI passed worldwide has grown nearly 6.5-fold since 2016.

“The haste to legislate is a result of the heated competition and development of AI technologies,” Peng said. “Data has increasingly become a crucial strategic element, and all countries hope to lead the legislation... Meanwhile, new social problems and contradictions brought about by the fast development of AI technologies like ChatGPT also spur legislation,” she added.  

The EU’s AI Act has been in the works since April 2021, with revisions added later to cover generative AI services.

One revision requires transparency for general-purpose AI like ChatGPT. For example, developers must label AI-generated content for users and curb illegal content. They must also disclose what data they use to train their models, particularly copyrighted material.

Risk assessment is a primary feature of the AI Act, which categorizes risk into four levels, the highest being “unacceptable.” For example, a system that classifies people according to social behavior or personality is banned.

In the latest draft, the EU expanded the highest-level risk category to include AI that is “invasive” or “discriminatory.” For example, it bans real-time biometric identification in public places, emotion recognition, predictive policing based on profiling, location or criminal records, and the harvesting of facial data from the internet.

The latest version also raised the fine cap from 30 million euros (US$32.8m) or 6 percent of a company’s global annual turnover in the previous financial year to 40 million euros (US$43.7m) or 7 percent, whichever is higher – far stiffer than the penalties under the EU’s General Data Protection Regulation.

“This shows the EU’s resolve to supervise and manage AI technologies. Tech giants like Google, Microsoft and Apple could face tens of billions of dollars in fines if they violate the law,” Peng said.
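
To put those percentages in perspective, the arithmetic behind Peng’s estimate can be sketched as below. The 250-billion-euro turnover is a hypothetical figure for illustration only, not a reported number for any company; the caps follow the drafts described above (draft AI Act: 40 million euros or 7 percent; GDPR: 20 million euros or 4 percent, whichever is higher).

    def fine_cap(turnover_eur: float, flat_cap_eur: float, pct: float) -> float:
        # Return the maximum possible fine: the flat cap or the given
        # percentage of global annual turnover, whichever is higher.
        return max(flat_cap_eur, turnover_eur * pct)

    turnover = 250e9  # assumed global annual turnover: 250 billion euros (illustrative)

    ai_act_cap = fine_cap(turnover, 40e6, 0.07)  # draft AI Act: 40m euros or 7%
    gdpr_cap = fine_cap(turnover, 20e6, 0.04)    # GDPR: 20m euros or 4%

    print(f"Draft AI Act ceiling: {ai_act_cap / 1e9:.1f} billion euros")  # 17.5
    print(f"GDPR ceiling: {gdpr_cap / 1e9:.1f} billion euros")            # 10.0

At that scale, the 7 percent cap works out to 17.5 billion euros, against 10 billion under the GDPR – an order of magnitude consistent with Peng’s estimate for the largest firms.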

China’s interim measures, which took effect on August 15, call for “deliberate” and “classified” supervision of generative AI services. The document states that such services must not harm China’s national security and must respect others’ legal rights and interests. It stresses that no illegal or infringing data may be used to train AI models. Like the EU, China requires AI-generated content to be labeled and puts the onus of monitoring user input on service providers.

“China’s current AI regulation is scattered across different fields and departments... Measures and policies usually target a specific technology or service... They are normally designed and released by the competent departments, but have not yet been made law,” Peng told NewsChina.

According to Zhao Jingwu, compared with the EU and China, US regulation prioritizes commercial development to maintain the country’s competitiveness. Schumer’s framework aims to realize the potential of AI technologies and support US-led innovation.

“US management of AI development remains weak, and its society is inclined to be open and to encourage the innovation and expansion of AI technologies,” Peng said, adding that regulation varies from state to state while remaining “general” and “non-specific” at the federal level.

“The AI Bill of Rights, for example, a milestone in the US’s management of AI development, proposes only five basic principles without more detailed articles or measures... It’s only a framework for guiding the design, use and planning of AI systems,” Peng said.

“Such documents are not compulsory... since intensified management will surely obstruct the development and innovation of an emerging industry like AI,” she added.  

Despite signing the FLI open letter calling for a suspension of AI training, Musk has launched an AI project at X (formerly Twitter) and recruited AI experts, leading some to question whether he intends to hobble the progress of OpenAI, which he left in 2018 and now competes with.

Some scientists and AI leaders have denied they ever signed the FLI open letter. Thomas G. Dietterich, an American pioneer of machine learning, tweeted that the letter is “such a mess of scary rhetoric and ineffective/non-existent policy prescriptions.”  

Yann LeCun, chief AI scientist at Facebook’s parent company Meta, tweeted on March 29: “Nope. I did not sign this letter. I disagree with its premise.” During an April 8 livestream about the FLI letter hosted by tech news site VentureBeat, LeCun criticized the call for a “pause” as backward, saying people cannot slow the progress of science and knowledge.

In the same livestream, renowned AI scientist Andrew Ng argued that AI will create enormous value for many industries and that pausing its progress would prevent it from benefiting the world.

LeCun and Ng suggest managing content rather than development and research, and argue that concerns about human safety are premature. 

Hard to Balance
Their ideas are reflected in China’s July interim measures on generative AI, which experts say are less strict than earlier drafts. The measures focus on supervising content and stress giving equal weight to development and safety. The document devotes an entire chapter to encouraging the innovation and exploration of AI technologies and their applications.

Even the more detailed and stricter AI Act requires EU member states to provide enterprises and startups with at least one regulatory sandbox in which to test systems for compliance. The measure aims to ease the compliance burden on AI enterprises and allow them to concentrate on innovation.

In her article, Jin Ling argued that the EU’s AI Act would increase enterprises’ costs and chill investment, given the uncertainties of risk appraisal. Although the EU claims to support the digital economy, Jin said it is difficult for legislation to balance encouraging innovation with protecting rights.

Zhao Jingwu agrees. He told NewsChina that AI legislation faces many challenges, given the fast pace of technological development and governments’ lack of experience in managing what he calls the three key elements of AI: “data, algorithms and computational power.” Moreover, debate continues over whether legislation should focus on risk control or on ensuring industrial development.

In a June interview with news outlet Jiemian, Zhu Fuyong, an AI professor at the Southwest University of Political Science & Law, said that while legislation can define responsibility, protect people’s legal rights and prevent abuses, many of the risks cited have yet to materialize and remain speculative.

Chen Jidong, a professor at Tongji University in Shanghai, agreed, saying the biggest hurdle for legislation is scientifically appraising the risks of AI systems while the industry is still emerging.
 
That may be why, as some experts note, China’s July measures neither provide detailed rules nor cover every application scenario, making them difficult for enterprises to implement.

LeCun and Ng argued that AI development should be treated like that of other industries, where risk control progresses in step with development.

“China’s legislation should be based on encouraging innovation and enabling AI to develop in a relatively open space... We can just draw a red line,” Peng suggested.  

“We should have an industry development-oriented AI law,” Zhao Jingwu said. “Present measures and norms are enough to meet management demands... Our ultimate purpose is to develop the industry, since legislation is not for curbing industrial development but for guiding and guaranteeing its benign development,” he added.
