Soon we will need ‘neurorights’ to protect our brains and thoughts from technology

In today’s digital world, nothing you do or say is personal. Not only do the walls have ears, but they are also connected to the internet. There is only one place in the world that is truly personal to you, and that is your mind. But even that won’t be true for long.

Elon Musk’s Neuralink may seem like it borders on science fiction. But the day is not far when there will be a machine that can read and maybe even change your mind. Some advocates of neurorights, or human rights specifically aimed at protecting the brain, want to introduce regulations before that becomes a reality.

Jack Gallant, a cognitive scientist at the University of California, Berkeley, and other researchers published a paper demonstrating a rudimentary way to “read minds.” Volunteers in one study watched videos for hours while lying in an fMRI scanner. The researchers then trained a model on the resulting data set, associating the recorded brain activity with each corresponding video frame. They then asked the volunteers to watch new videos, again recording fMRI data, and fed that data into the model they had trained earlier. The model was able to generate a very vague but recognisable reconstruction of some of the images the volunteers had watched. Remarkably, the paper was published back in 2011.
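The train-then-reconstruct pipeline described above can be illustrated with a toy sketch. This is not the study’s actual method (which used real fMRI recordings and a far more sophisticated encoding/decoding model); here, synthetic “brain activity” is generated as a noisy linear function of flattened video frames, and ridge regression learns the inverse mapping back to pixels.

```python
# Illustrative sketch only: synthetic data and a linear decoder standing in
# for the real fMRI-based reconstruction pipeline. All sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test = 500, 10
n_voxels, n_pixels = 200, 64          # hypothetical recording/frame sizes

# Training phase: volunteers watch videos while brain activity is recorded.
frames_train = rng.normal(size=(n_train, n_pixels))
mixing = rng.normal(size=(n_pixels, n_voxels))     # unknown brain response
activity_train = frames_train @ mixing + 0.1 * rng.normal(size=(n_train, n_voxels))

# Train a decoder: ridge regression from brain activity back to pixels.
lam = 1.0
A = activity_train
W = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ frames_train)

# Test phase: record activity for new videos, then reconstruct the frames.
frames_test = rng.normal(size=(n_test, n_pixels))
activity_test = frames_test @ mixing + 0.1 * rng.normal(size=(n_test, n_voxels))
reconstructed = activity_test @ W

# The reconstruction is imperfect but correlated with what was actually seen.
corr = np.corrcoef(reconstructed.ravel(), frames_test.ravel())[0, 1]
print(round(corr, 2))
```

The point of the sketch is the workflow, not the model: pair stimuli with recordings, fit a decoder, then apply it to recordings from unseen stimuli.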

In 2021, Chile’s Senate approved a bill, the first of its kind in the world, to amend the constitution to protect “neurorights,” or brain rights, making Chile the first country to enshrine such rights in its constitution. But did the South American country jump the gun?

Guido Girardi, a former Chilean senator who was instrumental in the legislation, compared neurotechnology to something else lawmakers may have been slow to respond to: social media. Chile did not want to be late again. Neurotechnology, once it becomes more widespread, may have greater implications for society than social media. The argument is that it might be wise to get ahead of the technology for once.

But acting early can have drawbacks of its own, especially when we are not quite sure what the technology will be able to do in the future.

“It’s quite difficult to regulate now, and the reason for that is that it’s not entirely clear what the most widely used applications will be. On the one hand, you can’t wait too long, because the technology develops too quickly; there will be problems no one has thought about, and it will be too late. On the other hand, if you go too early, that can create problems of its own,” Alan McKay, a prominent neurorights advocate, told indianexpress.com. McKay is Deputy Director of the Sydney Institute of Criminology and an Academic Fellow at the University of Sydney’s Faculty of Law.

According to McKay, legal systems around the world must strike a delicate balance. They should not “let the horse bolt” and then close the barn door. But on the other hand, they shouldn’t regulate it so tightly that they spoil the chances of the technology working well. And neurotechnology has great potential to do good.

From paralysis to possibility, and back again

Ian Burkhart suffered a spinal cord injury at the age of 19 that left him quadriplegic, unable to move his legs or arms. In 2014, he signed up for a pioneering trial of a brain-computer interface designed to control muscle stimulation. He received a brain implant that transmitted movement signals to a sleeve of electrodes worn on his arm, which meant he could move his fingers simply by thinking.

But he ultimately had to have the device removed in 2021, long after the trial had ended. “When I first had my spinal cord injury, everyone said you’ll never be able to move anything from your shoulders down again,” he says. “I was able to regain this function and then lose it again. That was difficult,” Burkhart was quoted as saying by MIT Technology Review.

Therapeutic applications are just one of the potentially positive uses of brain-computer interfaces and other neurotechnologies. Many companies are working on technology that could help treat conditions ranging from paralysis to epilepsy. California-based NeuroPace, for example, has an FDA-approved device for epilepsy: its RNS device detects abnormal electrical activity in the brain and responds with electrical pulses to stop the seizure.
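The idea behind a responsive device like this — monitor a signal, detect abnormal activity, respond with stimulation — can be sketched in a few lines. This is a hypothetical illustration only: the real device’s detection algorithms and firmware are proprietary and far more sophisticated, and the threshold-on-amplitude detector here is a stand-in.

```python
# Hypothetical closed-loop "detect and stimulate" illustration. The RMS
# threshold detector is an assumption for demonstration, not the real device.
import math

def detect_abnormal(window, threshold=2.0):
    """Flag a window whose root-mean-square amplitude exceeds a threshold."""
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    return rms > threshold

def closed_loop(signal, window_size=4):
    """Scan the signal in windows; return indices where stimulation fires."""
    stim_events = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        if detect_abnormal(signal[start:start + window_size]):
            stim_events.append(start)   # a pulse would be delivered here
    return stim_events

# Quiet baseline with one burst of high-amplitude "seizure-like" activity
signal = [0.1, -0.2, 0.1, 0.0, 5.0, -4.8, 5.1, -5.2, 0.1, 0.2, -0.1, 0.0]
print(closed_loop(signal))  # fires only on the abnormal middle window
```

The design point is the feedback loop: the device does nothing while activity looks normal and intervenes only when a detection criterion is met.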

The danger we must guard against

Neurotechnology is promising, and the possibilities it offers are truly astounding. It would be cruel to overstate its dangers and risk taking its wonders away from people like Burkhart. Still, the technology poses real challenges for human rights.

McKay is particularly concerned about the potential misuse of the technology in the criminal justice system. Recall the device that treats epilepsy: imagine someone using similar neurotechnology to build a device intended to stop convicted criminals from reoffending by “predicting criminal behavior” and applying some kind of stimulation to the brain.

The possibility of someone developing such dystopian technology is not confined to Black Mirror episodes. Massachusetts-based Brainwave Science already advertises that its “iCognative” product can “reveal hidden information in a suspect’s mind.” An excerpt from its website: “This cutting-edge technology can reveal an individual’s plans and intentions, as well as any past actions related to national security, criminal activities such as fraud and business theft, providing an investigative and intelligence-gathering advantage like never before.”

In addition, many parts of the world already use technology such as electronic ankle bracelets to restrict the movement of offenders. Once the technology is available, it is not far-fetched to imagine criminal justice systems around the world attempting to monitor the minds of convicted criminals using neurotechnology.

But neurotechnology won’t stay within the bounds of therapeutic (and possible criminal justice) use for long. It is entirely possible that brain-computer interfaces will become mass-market devices. Elon Musk has said in the past that Neuralink’s goal is to “merge humans with AI.”

Essentially, it won’t be long before technology that can read and maybe even change your thoughts could find its way into therapeutic medicine, the criminal justice system, and the world at large.

Privacy and transparency

It’s still hard to predict exactly what the neurotechnology of the future will look like, but many have an idea of what neurolaw should be based on. McKay believes that neurolaws should ensure both privacy and transparency: privacy for the user, and transparency about how the technology works.

“Neurotech is increasingly a subset of AI, almost. Sort of like humans merging with AI. There have already been extensive discussions about the ethics of AI and questions about the opacity of black-box systems and how things like bias can creep in,” McKay explained.

A study published in October 2023 by Stanford HAI (Human-Centered Artificial Intelligence) found that foundation models like those built by OpenAI, Google, Meta and others are becoming less transparent. This lack of transparency is nothing new in the tech industry. From opaque content-moderation systems on social media platforms to deceptive ads and unclear payment practices on aggregator apps, transparency problems have long dogged tech companies. And soon, some of these companies may be able to read your brain, and maybe even control it.

Neurorights in India

The concerns are real, and the future can be terrifying. But for now, there may be no need for additional legislation to control or prevent some of the dangerous scenarios touched on above. Technically, some existing legal provisions in India already protect citizens against certain neurotechnological hazards.


“Following the judgments of the Supreme Court in Puttaswamy (2017) and Selvi (2010), it is fair to say that the right to privacy of thoughts is protected by the Indian Constitution. In the Selvi case, the court specifically held that forcible intrusion into a person’s mental processes is an invasion of liberty. It held that narcoanalysis, polygraph testing and similar techniques may not be applied coercively, as this would violate individuals’ right to privacy and, particularly in the context of criminal law, their right against self-incrimination,” said technology lawyer Jaideep Reddy in an email interview with indianexpress.com.

According to Reddy, who focuses on the interaction of law and disruptive technology, the Digital Personal Data Protection Act, 2023 could also play an important role in helping to govern such technology once it comes into effect. “Under this law, personal data may generally only be collected or processed based on consent or other expressly permitted lawful uses. While the state is given fairly wide leeway under this law, any neural interference by private parties will be subject to more safeguards,” Reddy added.

Perhaps the country’s existing legislation and common-law system can protect Indian citizens from the dangers posed by neurotechnology. But even so, it is important that stakeholders, including citizens, regulators and legislators, have a conversation about the technology and whether our laws are sufficient to govern it.
