A tempest has erupted over a letter, co-signed by Elon Musk and thousands of others, calling for a halt to AI research, after researchers cited in the letter denounced its use of their work, some signatories were found to be fake, and others withdrew their support, Entrepreneurng reports.
Musk, cognitive scientist Gary Marcus, and Apple co-founder Steve Wozniak were among the more than 1,800 signatories who, on March 22, demanded a six-month moratorium on the development of systems “more powerful” than GPT-4. Engineers from Amazon, DeepMind, Google, Meta, and Microsoft also signed.
GPT-4 was created by OpenAI, a company that Musk co-founded and that is now backed by Microsoft. It can compose music, summarize lengthy documents, and hold conversations that resemble those of a human. Such AI systems with “human-competitive intelligence” pose profound risks to humanity, the letter claimed.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said.
The Future of Life Institute, the think tank that oversaw the project, cited 12 pieces of research from experts, including university professors and current and former employees of OpenAI, Google, and Google’s subsidiary DeepMind. Four of the experts cited in the letter have since voiced concern about the claims it bases on their research.
When it was first released, the letter lacked verification procedures for signing and accumulated signatures from people who had not actually signed it, including Xi Jinping and Yann LeCun, Meta’s chief AI scientist, who later made clear on Twitter that he did not support it.
The Future of Life Institute (FLI), which is funded primarily by the Musk Foundation, has come under fire from critics for prioritizing imagined catastrophic scenarios over more pressing concerns about AI, such as racist or sexist biases being programmed into the machines.
Among the research cited was the well-known paper “On the Dangers of Stochastic Parrots,” co-authored by Margaret Mitchell, who previously led ethical AI research at Google. Mitchell, now chief ethical scientist at the AI firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4.”
The letter “asserts a set of goals and a narrative on AI that advantages the advocates of FLI by accepting a lot of problematic concepts as a given,” she said. “Some of us don’t have the privilege of ignoring current harms.”
Her co-authors Timnit Gebru and Emily M. Bender also criticized the letter on Twitter, with Bender calling some of its claims “unhinged.” Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, likewise objected to the letter mentioning her work. In a research paper she co-authored last year, she argued that the widespread use of AI already posed serious risks.
The paper made the case that present-day use of AI systems could influence decision-making around existential threats such as nuclear war and climate change.
“AI does not need to reach human-level intelligence to compound those concerns,” she told Reuters. “There are very significant non-existential threats that aren’t given the same level of Hollywood attention.”
When questioned about the criticism, FLI president Max Tegmark said that both the short-term and long-term risks of AI should be taken seriously. “If we cite someone, it simply means we claim they have endorsed that statement. It doesn’t imply that they support the letter or that we agree with everything they say,” he told Reuters.
In short, researchers have denounced the letter’s use of their work after some of its signatures were shown to be fraudulent.
Source: The Guardian