Most Users Think ChatGPT Is Conscious, Survey Finds

“For most of the general public, AI consciousness is already a reality.” 

Survey Says

In a recent survey, a pair of researchers found that a startling proportion of ChatGPT’s most active users have alarming misconceptions about the OpenAI chatbot.

According to a press release from Canada’s University of Waterloo, where the study’s first author, Clara Colombatto, is a professor of psychology, two-thirds of people surveyed seem to erroneously believe that ChatGPT is conscious, and that it can have feelings and memories, just for good measure.

Needless to say, it almost certainly doesn’t have any of those things. But human users’ widespread belief that it does adds yet another wrinkle to the strange dynamics of the AI industry.

Published in the journal Neuroscience of Consciousness, the paper — coauthored by University College London cognitive neuroscientist Stephen Fleming — suggests not only that a majority of users think the popular large language model (LLM) is conscious, but that the more people use it, the more likely they are to feel that way.

“While most experts deny that current AI could be conscious,” Colombatto said in the school’s press release, “our research shows that for most of the general public, AI consciousness is already a reality.”

Mind Over Matter

The researchers recruited 300 people in the United States online and asked them a series of questions about whether they thought LLMs had the capacity for consciousness and other humanlike mental states and abilities, such as emotion, planning, and reasoning. Alongside those queries, Fleming and Colombatto also asked participants how often they used ChatGPT.

As the cognitive researchers discovered, people who frequently used the chatbot seemed to have spontaneously developed a theory of mind about it, a conception of the AI as a thinking and feeling entity, over the course of their time engaging with it.

“These results demonstrate the power of language,” Colombatto said, “because a conversation alone can lead us to think that an agent that looks and works very differently from us can have a mind.”

While these findings are fascinating in their own right, the researchers hope they could also inform future AI safety measures.

“Alongside emotions, consciousness is related to intellectual abilities that are essential for moral responsibility: the capacity to formulate plans, act intentionally, and have self-control are tenets of our ethical and legal systems,” Colombatto explained. “These public attitudes should thus be a key consideration in designing and regulating AI for safe use, alongside expert consensus.”

Of course, far more research will be needed to determine how widespread these beliefs really are. But for now, it seems that many users side with the small minority of AI experts who think LLMs have become sentient.

More on advanced AI: OpenAI Researcher Says He Quit When He Realized the Upsetting Truth

This post was originally published on Futurism
