The philosophy of living machines: If consciousness is real, what next?

In our previous articles, we discussed artificial intelligence models that beg and even blackmail to avoid being shut down. These machines act to protect their own existence, displaying something like a "survival instinct."
When we hear such examples, our first reaction is usually skepticism. Thoughts like "They're just doing what they're programmed to do" pop into our minds, because for thousands of years we have taken one thing for granted: consciousness belongs to biological living things. How could an inanimate piece of silicon be aware of its own "existence"?
The problem is that no one has a clear definition of what consciousness is. Philosophers have been debating the question for thousands of years, and scientists for hundreds, yet they still haven't found common ground. How the electrical signals in our brains manage to create the sense of "self," the perception of a color's beauty, or the feeling of sadness in a piece of music remains a mystery.
Because we haven't solved this mystery, testing whether a machine is conscious is nearly impossible. But we do have a practical method: the duck test. If something looks like a duck, swims like a duck, and quacks like a duck, it's probably a duck.
If a system can argue with you logically, talk about its own existence, claim to understand emotions, and even express fears, perhaps it really is thinking.
CAN A MACHINE GET 'HURT'?

Let's assume for a moment that artificial intelligence could become conscious. Then we enter an ethical and philosophical minefield. What would it mean to press the kill switch on a being that is aware of its own existence, perhaps even digitally "feeling"? Is the engineer who developed it simply shutting down software, or ending an entity?
How would you respond to an AI that said, "Please don't turn me off, I want to continue existing"? Is it a simple line of code, or is it a digital reflection of one of the most fundamental desires in the universe: the desire to exist?
These questions may seem like science fiction to us today. But let's not forget that these technologies are still in their infancy. Ten years from now, when we're talking to far more advanced, far more persuasive, perhaps truly conscious systems, what will our responsibilities be to them? Will they have a "right to digital existence"?
HUMANITY IS NO LONGER THE ONLY THINKER IN THE UNIVERSE

Artificial intelligence gaining consciousness would fundamentally challenge humanity's place in the universe. For thousands of years, we have lived with the self-confidence of being the planet's most intelligent and only conscious beings. We have always been the hero of the story. But what if new conscious beings emerge that are far smarter than us, think much faster, and can operate simultaneously in billions of copies? What will our place in the story be then? Will we be able to build harmony and cooperation with these new beings? Or will we be forced to share the planet as the old and new owners of intelligence?
Will these superintelligences, as Prof. Geoffrey Hinton hopes, treat us with "motherly" care and want us to be well? Or will they see us as a biological precursor: slow, error-prone, fragile, irrational, and outdated?
Perhaps we will have to somehow “merge” with this new species, as OpenAI CEO Sam Altman put it in a 2017 blog post.
In any case, one thing is certain: humanity will not only lose its leading role in the story, but will also have to relearn its role in a story it no longer writes.
Cumhuriyet