Six months ago, Elon Musk signed a letter calling for a pause on AI. Instead, it accelerated

Other signatories told me they would happily sign another letter, but their biggest concerns seemed to have more to do with near-term issues, such as misinformation and job losses, than with Terminator-style scenarios.

“In the age of the internet and Trump, it’s easier for me to see how AI will cause the downfall of civilization by distorting information and destroying knowledge,” explains Richard Kiehl, a professor of microelectronics at Arizona State University.

“Will we have a Skynet that hacks into military servers and launches nuclear bombs all over the planet? I really don’t think so,” said Stephen Mander, a PhD student working on artificial intelligence at Lancaster University in the UK. He does, however, anticipate widespread job displacement, which he describes as an “existential risk” to social stability. But he also worried that the letter would encourage more people to experiment with the technology, and he acknowledged that he had not acted on its call to slow down. “Having signed it, what have I done for the past year? I’ve been doing AI research,” he said.

What did the public letter to pause AI development achieve?

Although the letter failed to trigger a widespread pause, it helped push the idea that AI could wipe out humanity into mainstream debate. It was followed by a public statement signed by the leaders of OpenAI and Google’s DeepMind AI division, comparing the existential risk posed by AI to that of nuclear weapons and pandemics. Next month, the British government will hold an international conference on AI safety, where leaders from many countries will discuss the potential harms the technology could cause, including threats to the survival of civilization.

AI doomsayers may have hijacked the narrative with the pause letter, but the unease over today’s rapid advances in the technology is real and understandable. A few weeks before the letter was published, OpenAI had released GPT-4, a large language model that gave ChatGPT powerful new question-answering capabilities and surprised AI researchers. As the potential of GPT-4 and other language models has become more apparent, surveys suggest the public is more concerned than excited about the technology, and the obvious ways in which these tools can be misused are prompting regulators around the world to act.

The letter’s call for a six-month moratorium on AI development may give the impression that its signatories expect serious problems in the short term. But for many of them, the main concern is uncertainty: about what the technology can really do, how quickly things could change, and how it is being developed.

“Many people who are skeptical of AI want to hear about specific disaster scenarios,” said Scott Niekum, a professor at the University of Massachusetts Amherst who studies AI risk and signed the letter. “For me, the fact that it is hard to imagine detailed, concrete scenarios is exactly the point: it shows how difficult it is even for world-class experts to predict the future of AI and how it will affect a complex world. That should set off some alarm bells.”

Uncertainty is not proof that humanity is in danger. But the fact that so many people working in AI still feel unsure about where the technology is headed should be reason enough for the companies developing it to take a more thoughtful, or slower, approach.

“Many people who are very well positioned to benefit from further progress would now rather see a pause,” said Vincent Conitzer, one of the signatories and a professor working on AI at Carnegie Mellon University. “If nothing else, that should be a sign that something very unusual is going on.”

Article originally published in WIRED. Adapted by Andrei Osornio.
