The developer of the chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI).
Sam Altman, the chief executive of OpenAI, the company behind ChatGPT, spoke to a US Senate committee on Tuesday about the possibilities and pitfalls of the new technology.
Several AI models have hit the market in recent months.
Mr Altman said a new agency should be formed to license AI companies.
ChatGPT and similar programmes can produce remarkably human-like answers to queries, but they can also be wildly inaccurate.
Mr Altman, 38, has become a spokesperson of sorts for the industry. He has pushed for stronger regulation and has not shied away from the ethical questions that AI raises.
He compared the significance of AI to “the printing press” while acknowledging the risks it may pose.
“I think if this technology goes wrong, it can go quite wrong…we want to be vocal about that,” Mr Altman said. “We want to work with the government to prevent that from happening.”
He also acknowledged AI’s potential impact on the economy, including the possibility that the technology could replace some workers, leading to job losses in certain industries.
“There will be an impact on jobs. We try to be very clear about that,” he said, adding that the government will “need to figure out how we want to mitigate that”.
Mr Altman added, however, that he is “very optimistic about how great the jobs of the future will be”.
But several senators argued that new laws were needed to make it easier for people to sue OpenAI.
Mr Altman said one of his “areas of greatest concern” was the potential for AI to be used to spread targeted misinformation during elections, which could undermine democracy.
“We’re going to face an election next year,” he said. “And these models are getting better.”
He made several recommendations for how a new US agency could regulate the industry, including “a combination of licensing and testing requirements” for AI firms, which he said could be used to govern the “development and release of AI models above a threshold of capabilities”.
He added that companies like OpenAI should undergo independent audits.