Billionaire entrepreneur Elon Musk announced plans this week to create an AI-powered conversation tool called “TruthGPT” after criticizing the popular AI chatbot ChatGPT for being “politically correct.”
“The surest way to an AI dystopia is to train AI to be deceptive,” Tesla CEO and Twitter owner Musk warned in an interview with Fox News host Tucker Carlson on Monday.
Experts told ABC News that AI chatbots pose significant risks centered on political bias, as models can generate large amounts of speech, potentially shaping public opinion and enabling the spread of misinformation.
However, Musk’s comments underscore the formidable challenge the issue raises: content moderation has itself become a polarizing topic, and Musk has expressed opinions that place his own approach within that hot-button political context, some experts said.
“Musk is right that if we can’t solve the problem of veracity and the problem of credibility, it poses a risk to AI safety,” Gary Marcus, professor emeritus of psychology and neuroscience at New York University who specializes in AI, told ABC News.
“But tying that question to political correctness might actually be a mistake,” he said. “If you want to be credible on the issue of truth, you must separate truth from politics. It is a mistake to try to tie the two together.”
Created by artificial intelligence firm OpenAI, ChatGPT is a chatbot – a computer program that interacts with human users. Neither Musk nor OpenAI responded to a request for comment from ABC News.
ChatGPT uses an algorithm that selects words based on patterns learned from scanning billions of texts on the Internet. The tool has gained popularity thanks to viral posts showing it writing poems in the style of Shakespeare or spotting bugs in computer code.
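At a high level, that word-by-word selection can be pictured as sampling the next word from probabilities learned from text. The toy Python sketch below is purely illustrative: the contexts, words, and probabilities are invented for demonstration, and real systems like ChatGPT learn such statistics with large neural networks trained on vast corpora rather than a hand-built lookup table.

```python
# Illustrative sketch only -- not how OpenAI's models actually work.
# All contexts, words, and probabilities below are invented for demonstration.
import random

# Pretend "training" on text produced these next-word probabilities.
learned_probs = {
    "to be": {"or": 0.6, "a": 0.3, "the": 0.1},
    "be or": {"not": 0.9, "else": 0.1},
    "or not": {"to": 0.8, "at": 0.2},
}

def next_word(context: str) -> str:
    """Sample the next word from the probabilities learned for this context."""
    dist = learned_probs.get(context, {"<end>": 1.0})
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("to be"))  # most often prints "or"
```

In a real chatbot, the lookup table is replaced by a neural network that scores every word in a large vocabulary given the full preceding conversation, but the basic loop of repeatedly choosing a likely next word is the same idea.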
But the technology has also stirred controversy, producing some troubling results. ChatGPT’s designers programmed safeguards meant to prevent it from voicing controversial opinions or expressing hate speech.
Experts told ABC News that content moderation of AI presents legitimate challenges for designers, who must determine which messages are sufficiently offensive or hateful to warrant intervention.
“If a product is going to be used by millions of people, designers have to put security measures in place,” Ruslan Salakhutdinov, professor of computer science at Carnegie Mellon University, told ABC News.
“The question is: How do you make it fair or neutral? That’s a judgment call for the designers,” he said. “You can imagine a GPT that is politically biased.”
In addition, an AI conversation tool’s responses depend heavily on the text on which the model is trained, said Kathleen Carley, another professor of computer science at Carnegie Mellon University.
“There is a view that the information it was trained on leans more to the left and carries some political bias and some political agenda,” Carley said. “That’s where that line of reasoning comes from.”
Musk, who co-founded OpenAI but left the organization in 2018, accused the company of “training AI to be woke” in a December tweet.
While AI chatbots deserve scrutiny over political bias, Musk stands as an imperfect spokesperson for such criticism because of his own high-profile political views, some experts said.
Musk has taken a number of conservative stances in recent months, including expressing support for Republican candidates in last year’s midterm elections and repeatedly criticizing “woke” politics.
“I think what they mean by ‘truth’ is ‘agree with me,'” Oren Etzioni, CEO of the Allen Institute for AI and a computer science professor at the University of Washington, told ABC News.
Still, the polarized political climate poses a challenge for any AI chatbot developer attempting to moderate responses, experts said.
“Politics as it exists today doesn’t exactly draw lines,” Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, told ABC News. “You have to know where the sensible place to draw the line is in order for an AI to draw the line in a sensible place.”