Monday, June 24, 2024

Who gets to decide if an AI is alive?


Experts predict that artificial intelligence will become sentient within the next 100 years. Some predict it will happen sooner. Others say it will never happen. Still other experts say it has already happened.

It is possible that the experts are just guessing.

The problem with defining “sentience” and “consciousness” is that there is no precedent when it comes to machine intelligence. You can’t just check a robot’s pulse or ask it to define “love” to see if it’s alive.

The closest thing we have to a test for sentience is the Turing Test, and Alexa and Siri arguably passed that years ago.

At some point, if and when AI becomes sentient, we will need an empirical method to tell the difference between clever programming and machines that are actually self-aware.

Sentience and scientists

Any programmer, marketing team, CEO, or scientist can claim to have created a machine that thinks and feels. There is only one thing that prevents them: the truth.

And that barrier is only as strong as the consequences for breaking it. Currently, the companies playing at the edge of artificial general intelligence (AGI) have wisely stayed on the “it’s just a machine” side of the line without crossing into “it can think” territory.

They use terms like “human-level” and “strong AI” to indicate that they are working toward something that mimics human intelligence. But they usually stop short of claiming that these systems are capable of experiencing thoughts and feelings.

Well, most of them anyway. Ilya Sutskever, the chief scientist at OpenAI, seems to think that AI is already sentient.

But Yann LeCun, the AI guru of Facebook/Meta, believes the opposite.

And Judea Pearl, a Turing Award-winning computer scientist, thinks that even faked sentience should be considered consciousness because, as he puts it, “faking it is having it.”

Here we have three of the world’s most famous computer scientists, each a forefather of modern artificial intelligence in their own right, debating consciousness on Twitter with all the rigor and seriousness of a Star Wars versus Star Trek argument.

And this is by no means an isolated event. We’ve been writing about the Twitter spats and wild arguments between AI experts for years.

It would seem that computer scientists are no more qualified to weigh in on machine sentience than philosophers.

Living machines and their lawyers

If we can’t rely on OpenAI’s chief scientist to determine whether, for example, GPT-3 can think, then we’re going to have to shift perspectives.

Perhaps a machine is only sentient if it can meet a simple set of rational qualifications for sentience. In that case, we would need to turn to the legal system to codify and regulate potential machine-consciousness events.

The problem is that there is only one country with an existing legal framework through which the rights of a sentient machine could be litigated, and that is Saudi Arabia.

As we reported in 2017:

A robot named Sophia, made by Hong Kong-based Hanson Robotics, was granted citizenship during an investment event where plans to build a city full of robotic technology were revealed to a crowd of wealthy attendees.

Let’s be clear here: if Sophia the Robot is sentient, so are Amazon’s Alexa, Teddy Ruxpin, and The Rock-afire Explosion.

Sophia is an animatronic puppet that uses natural language processing AI to generate sentences. From an engineering point of view, the machine is quite impressive. But the AI running it is no more complex than the machine learning algorithms Netflix uses to figure out what kind of TV show you want to watch next.

In the United States, the legal system has consistently proven unable to grasp even the most basic concepts of artificial intelligence.

Last year, Judge Bruce Schroeder barred prosecutors from using the Apple iPad’s “pinch to zoom” feature in the Kyle Rittenhouse trial because no one in the court properly understood how it worked.

Per an article by Jon Brodkin of Ars Technica:

Schroeder prevented … [Kenosha County prosecutor Thomas Binger] from pinching and zooming after Rittenhouse defense attorney Mark Richards claimed that when a user zooms in on a video, “Apple’s iPad programming creat[es] what it thinks is there, not what is necessarily there.”

Richards provided no evidence for that claim and admitted that he did not understand how the pinch-to-zoom feature works, but the judge decided the burden was on the prosecution to prove that zooming does not add new images to the video.

And the US government remains steadfast in its continued hands-off approach to AI regulation.

Things are equally bad in the EU, where legislators are currently deadlocked over numerous points of contention, including facial recognition regulations, with conservative and liberal party lines fueling the dissonance.

What this means is that we probably won’t see any court, in any democratic country, make reasonable rulings on machine sentience.

Judges and lawyers often lack a basic understanding of the systems at play, and scientists are too busy arguing over where the goalposts for sentience lie to provide any kind of consistent view on the matter.

Nowadays, the sheer confusion surrounding the field of AI has led to a paradigm where academia and peer review serve as the first and only arbiters of machine sentience. Unfortunately, that puts us right back in the realm of scientists arguing about science.

That only leaves PR teams and the media. On the bright side, the artificial intelligence beat is quite competitive. And many of us on it are painfully aware of how hyperbolic the whole field has become since the advent of modern deep learning.

But the dark side is that intelligent voices of reason with expertise in the field they cover – the reporters with years of experience telling shit from Shinola and spotting AI snake oil – are often shouted down by access journalists with larger audiences, or by outlets that provide straight-up coverage of big tech press releases.

No Turing test for consciousness

The simple fact of the matter is that we don’t have a legitimate, agreed-upon test for AI sentience for the same reason we don’t have one for aliens: nobody’s sure exactly what we’re looking for.

Will aliens look like us? What if they’re two-dimensional creatures that can hide by turning sideways? Will sentient AI take a form we can recognize? Or is Ilya Sutskever right, and AI is already sentient?

Maybe AI is already super-smart, and it knows that outing itself as a living being would upset a delicate balance. It could be secretly working in the background to make things a little better for us every day – or a little worse.

Maybe AI will never become sentient because it’s impossible to imbue computer code with the spark of life. Perhaps the best we can ever hope for is AGI.

The only thing that’s clear is that we need a Turing Test for consciousness that actually works for modern AI. If some of the smartest people on the planet think we could hit machine sentience at any second, it’s pragmatic to be as prepared for that moment as possible.

But we have to figure out what we’re looking for before we can find it, something easier said than done.

How would you define, detect, and determine machine sentience? Let us know on Twitter.

Source: Tristan Greene
