Is it possible computers could become conscious one day?
If consciousness emerged as a result of the complexity of the human mind, could a computer become conscious if it were complex enough?
Look up the website Cleverbot. I stumbled upon it two weeks ago while reading the comments on a YouTube video. Within a few minutes of conversation I called it dumb, because it would rarely understand what I was talking about and kept offering its own input, but then I realized this behavior was very human-like. It had its own opinions, and a few of its answers were quite stunning or witty.
It's a very interesting and inventive concept. Strangers all over the world talk to the bot, and those conversations become the basis for what Cleverbot says to other people. It's ingenious, and it fits a theory I've long held that no one is inherently unique: humans are a mashup of components of the other people they come across in their lives, and that's exactly what the bot is. Ages ago I played a video game featuring a godlike AI that gained its consciousness by watching everyone all over the world. People are basically an enormously long and complex algorithm; even people who seem to act randomly aren't really random. So it isn't too far-fetched to think AIs with an apparent "consciousness" will exist a few decades from now.
Some probably already are. I would define consciousness as the ability to perceive and make choices, and it seems we are getting there faster than most people think. We may not even be so different. Humans' ego will be their own demise.
*dresses like Jiminy Cricket and throws cocaine disguised as stardust in your face*
"Anything is possible if you believe"
Sure they could. They wouldn't behave like a human or have human desires unless that was programmed in, though. It would be conscious without needs or desires.
I guess you've never watched a Terminator movie? Skynet, anyone? Or the TV series Battlestar Galactica? Cylons, anyone? Or read an Isaac Asimov book?
This is not a new concept. As a matter of fact, I was reading sci-fi books about this very thing long before the first computer was invented.
Right now it's virtually impossible, but I wouldn't be surprised if, somewhere in the near future (twenty to forty years from now), a computer chip were able to exhibit CC (cybernetic consciousness).
I think so. That reminds me of anime like Eve no Jikan and Ergo Proxy :D.
Read the short story "I Have No Mouth, and I Must Scream". There is also an old computer game adaptation that is quite terrifying.
Good answer, but what threshold of self-awareness do they have to cross? How self-aware must they be? What test would we use to determine if they've reached it?
And do humans even have consciousness by that definition? We're not completely self-aware all the time. Most of us can't coherently express that we understand our place in the world around us. We have existential crises. We struggle with what it even means to "be our self".
It seems to me that self-awareness is a sliding scale, and if we take your definition then consciousness must be a sliding scale too. It isn't something we either have or don't have, but something we have levels of. Consider a hypothetical being with far greater self-awareness than ours, simply because it has a more efficient brain that can process vast amounts of information at a rate we could never imagine; a being that would find a question like yours laughably easy. Its self-awareness would be so great that it would understand consciousness at a level we never could, and it might find human experience so measly in comparison that it would never judge us as conscious, in the same way that we would probably never say a fly or a mosquito is conscious. Judgements about which beings have consciousness can only be made relative to what the judges (in this case, human beings) have. It's not objective.
They usually measure an animal's self-awareness by placing a mirror in front of it and seeing whether it can recognize the reflection as itself and not another animal.
I suppose you would do the same for an artificial intelligence... :P
Ok, that's very interesting.
But just because it may be impossible for us to measure a definite point at which a computer becomes aware of itself doesn't mean it couldn't become aware of itself; your point, as I read it, is that we would have trouble measuring when that happens because self-awareness is on a scale. It reminds me of the sorites (heap) paradox: when does a pile of sand become a heap? How many grains do you need to make a heap, and if you have a heap and take one grain away, is it no longer a heap? The point is that there is a blurry section between two distinct outer regions, which is interesting in terms of consciousness. Children are not really self-aware until a certain age, and then there are dreams, drugs, and other altered states. Some also argue that animals have a degree of self-awareness, especially primates and perhaps dolphins? Not sure about dolphins.
So, like many things, self-awareness and consciousness are slippery things to pinpoint and define. I just read that Bertrand Russell extended the grain-of-sand paradox to words like "tall", "rich", "old", "blue", and "bald".
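The sliding-scale idea can even be sketched in a few lines of code. This is only a toy model of my own: the logistic curve, the midpoint of 50 grains, and the steepness value are arbitrary assumptions, not anything from the paradox itself. The point it illustrates is that you can replace the hard yes/no question "is this a heap?" with a smooth score, so there is no single grain where heap-ness suddenly switches on.

```python
import math

def heapness(grains: int, midpoint: float = 50.0, steepness: float = 0.1) -> float:
    """Toy 'heap-ness' score on a logistic curve: near 0 for a few
    grains, near 1 for many, with a blurry region in between.
    The midpoint and steepness are arbitrary assumptions."""
    return 1.0 / (1.0 + math.exp(-steepness * (grains - midpoint)))

# The score rises gradually; no single grain flips it from 0 to 1.
for n in [1, 10, 50, 100, 500]:
    print(n, round(heapness(n), 3))
```

Swap "grains" for "degree of self-awareness" and you get the same picture: a blurry middle region where judging "conscious or not" is a matter of where you happen to draw the line.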
I think that in theory we could judge when a computer, an animal, or some other being gains a level of consciousness similar to our own. We might not see it creeping up on us (as in the heap paradox), and we will only be able to accept and judge it as consciousness if and when it manifests in a way we can recognise. But I don't think those limitations mean we could never recognise that a computer had gained self-awareness. I think we could recognise it, if it could occur in the first place.