AI consciousness: scientists say we urgently need answers

A standard method for assessing whether machines are conscious has not yet been developed. Credit: Peter Parks/AFP via Getty

Can artificial intelligence (AI) systems become sentient? A coalition of consciousness scientists says that right now, no one knows — and is concerned about the lack of research on the issue.

In comments to the UN, members of the Association for Mathematical Consciousness Science (AMCS) are calling for more funding to support research into consciousness and AI. They say scientific research on the boundaries between conscious and unconscious systems is urgently needed, and cite ethical, legal and safety issues that make understanding AI consciousness critical. For example, if AI develops consciousness, should humans be able to simply turn it off after use?

Such concerns have been largely absent from recent AI safety discussions, such as the high-profile UK AI Safety Summit, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK. Nor does US President Joe Biden’s executive order seeking responsible development of AI technology address issues raised by sentient AI systems.

“With everything that’s happening in AI, there will inevitably be other adjacent areas of science that will have to catch up,” Mason says. Consciousness is one of them.

It’s not science fiction

Whether there are, or ever could be, conscious AI systems is unknown to science. Even knowing whether one has been developed would be challenging, because researchers have yet to create scientifically validated methods for assessing consciousness in machines, Mason says. “Our uncertainty about AI consciousness is one of the many things about AI that should worry us, given the pace of progress,” says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.

Such concerns are no longer just science fiction. Companies such as OpenAI, the firm that created the chatbot ChatGPT, aim to develop artificial general intelligence: a deep-learning system trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will become possible in 5–20 years. Even so, the field of consciousness research is “very under-supported”, says Mason. He notes that, to his knowledge, there has not been a single grant proposal to research the topic in 2023.

The resulting information gap is outlined in AMCS’s submission to the UN’s High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to publish a report in mid-2024 on how the world should manage AI technology. AMCS’s submission has not been made public, but the body confirmed to AMCS that the group’s comments will form part of its “background material” – documents that inform its recommendations on global oversight of artificial intelligence systems.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to assess the implications of conscious AI systems for society, including their possible dangers. People will have to judge whether such systems share human values and interests; if not, they could pose a risk to humans.

What machines need

But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don’t recognize that an AI system has become conscious, we could inflict pain on a sentient entity, Long says: “We don’t really have much experience in extending moral consideration to entities that don’t look and act like us.” Misattributing consciousness would also be problematic, he says, because people shouldn’t spend resources protecting systems that don’t need protection.

Some of the questions that the AMCS raises to highlight the importance of the consciousness issue are legal ones: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

Scientists also need to educate others. As companies develop increasingly capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers echo this concern. Philosopher Susan Schneider, director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behavior that people are understandably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas others might dismiss or even scoff at concerns over AI consciousness.

To mitigate the risks, the AMCS, whose members include mathematicians, computer scientists and philosophers, is calling on governments and the private sector to fund more research into AI consciousness. Advancing the field would not take much funding: despite the limited support so far, relevant work is already under way. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper¹, published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.

“There’s a lot of potential for advancement,” says Mason.
