
Do computers/robots actually "think"?

Hi everyone! I just wanted to have a discussion (for fun) about whether or not "true" AI (not necessarily general) exists in any form. Hopefully it'll get our brain juices flowing and we'll get cool ideas!

To start, let me say that I am aware my idea is quite radical, even ridiculous-sounding. But it's something I've been wondering about regardless. What if AI (albeit not quite general) already exists? I have thought to myself, "Computers and robots already 'think,' but in a way unrecognizable to humans. Why should we even force them to think exactly like humans do?" I am not intending to undermine the importance of cognitive functions, but I *am* saying that, although pre-programmed, rudimentary cognitive functions in a computer/robot are still cognitive functions in the computing world. They don't have to be exactly like those of humans, but those cognitive functions, working together, should be greater than the sum of the AI's parts.

We have long said that "we know intelligence when we see it." But just how true could that claim be? After all, once we see how an AI works, no matter how complex it is, we tend to stop seeing it as intelligence. And our brains are easily fooled by animatronics that are scripted without any AI at all.

What is your definition of intelligence? Do you think AI (even just a primitive form of it) already exists? If computers/robots don't think like we do, how will they understand us? How will we understand them? I would love to hear your thoughts on these matters.

Edit: Changed the title... just realized that it looked like a stupid question instead of a discussion.


You bring up a really interesting question. I think there is a great deal of fear and mistrust out there, bred by the fact that many people who are very smart in other ways have opened their mouths about things they apparently don't really understand.

The state of artificial intelligence at this time is basically the mathematical analysis of huge amounts of data to identify patterns within that data. The computer has no context as to what this data is, and really no way to gain that context. It could be tiddlywinks, images, the inside of the sun, star maps of different galaxies, or this week's grocery list. We give it context by saying what is good or bad.

When a computer plays chess, for instance, it looks for patterns it has seen before and which moves had good outcomes. When the other player makes a move, it looks at the pattern of pieces on the board and extrapolates the best move it can make from that. It really doesn't understand or know anything about chess. It only knows that such and such a move has the highest likelihood of success when it sees a particular pattern of pieces on the board.
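To make the point concrete, the whole "decision" can be reduced to a table lookup. Here's a minimal Python sketch of that idea (the pattern table, the success rates, and the function names are all invented for illustration; real chess engines are far more elaborate):

```python
# Minimal sketch of move selection by pattern lookup, as described above.
# The "experience" table is hypothetical: it maps a board pattern to
# candidate moves and their historical success rates.
experience = {
    "pawns_e4_e5": [("Nf3", 0.54), ("Bc4", 0.48)],
}

def best_move(board_pattern):
    """Return the move with the highest recorded success rate.

    The program has no notion of what chess *is*; it only ranks
    numbers attached to patterns it has seen before.
    """
    candidates = experience.get(board_pattern, [])
    if not candidates:
        return None  # pattern never seen before: no basis for a "move"
    return max(candidates, key=lambda mv: mv[1])[0]

print(best_move("pawns_e4_e5"))  # -> "Nf3"
```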

Let's say we're moving at 60 mph down a perfectly straight highway. Is it a leap of intellect for an artificial intelligence to guess that in one minute we will be one more mile down the road? All it knows is that a minute ago we were one mile back on the road. For the computer to guess where we'll be in one minute, it needs us to actually ask the question; otherwise it's just a meaningless jumble of data. In fact, when it gives us an answer, it doesn't even understand the answer, just that it sees a pattern and this is the closest association it can get to that pattern.
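And the "guess" itself is nothing more than one multiplication (the variable names here are purely illustrative):

```python
# Linear extrapolation: position advances at a constant rate.
speed_mph = 60.0
minutes_ahead = 1.0
miles_ahead = speed_mph * (minutes_ahead / 60.0)
# The machine computes 1.0 without any notion of roads, cars, or travel.
print(miles_ahead)
```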

The intelligence that we get out of its pattern matching is really the intelligence of the questions we ask it. If we don't ask a smart question, we don't get a smart answer back. If we ask it a smart question in a smart way, then we get intelligence out of it, but ultimately it is that human intervention, formulating the question, that allows us to extract intelligence. With good data and good questions, we get good answers that are relevant to the data we are looking at.

For instance, a chatbot can relatively easily fool a person into thinking they are actually talking to a real human being. Given enough data from listening to human conversation, its responses will be very believable as real to humans. But those responses do not arise out of any kind of thinking. Each response is just the best match it could come up with to the pattern of words it received. It doesn't understand what it said.

I've sat here for the last 20 minutes or so trying to come up with a good description of what makes intelligence. I'm not really sure, but just like pornography, I will probably know it when I see it. All I know is that artificial intelligence right now isn't real intelligence.

It is possible that at some point there may actually be a leap forward in computer science such that true intelligence can arise. There may be a way to program context and meaning into the computer such that an artificial intelligence can generalize. To get there, we need a better mousetrap than what we have today.

 


I've never thought of it that way... I think you may be right. Context and meaning are some of the most important things to achieve for AI right now. Thank you for your insights.
I've had a thought about an author whose work I admire: Pentti O. Haikonen. I'm not even sure how many people here know about him. Haikonen states that the "associative neural networks" he made are more than enough to understand meanings; their association capabilities are pretty much able to take the place of symbol grounding (and Haikonen is very big on meaning in his AI). Yes, I am aware that I didn't want to use neural networks for my artificial intelligence... As for context, that may or may not have anything to do with artificial intelligence. I had a discussion about it... I didn't really reach a conclusion. Perhaps as I read more, I'll come across something.

I am not familiar with him. I bought his book, since this is something that interests me. I feel somewhat unconvinced right now, but after reading the book I might feel differently.

You might also want to watch the movie Ex Machina. It is a fascinating, very well-done exploration of exactly these issues. Check it out! We bought the movie and I have watched it several times now. Every time I watch it, I get a different spin and learn something new that I hadn't thought about before.

Thank you for bringing this up. These sorts of discussions are always fun and interesting. Sometimes they bring you to a place you never thought you would get to.

 

I choose to give them credit for thinking, or at least credit for being able to...

I don't care that the computers can beat people at games...that is overblown and not particularly interesting.  Some people have likened it to an ape climbing a tree and saying "I am almost to the moon".

Having said that,  I DO say that they have the potential for deserving credit for thinking.  Maybe they deserve that credit now.  I will attempt to explain why...

First, there is an oversimplification-of-metaphors problem... this is a human problem, not a computer one. The problem we as humans have when this topic comes up is that we like to use simple metaphors to describe what software does... and this greatly denigrates its potential. We can't help ourselves... we get a lot of our fears and prejudices from that as well... another topic. We have to realize how our own thoughts and metaphors can limit our own thinking in order to imagine whether robots can think... I know, that is way thick, but bear with me.

Step 1:  Imagine one of those common metaphors... like what an ANN does, or what a chess-playing computer does (analyzing sequences of moves and outcomes), and many other metaphors... like pattern recognition, or the "mechanical clock" metaphors people used to use for various things... Descartes?

Step 2:  Now imagine that hundreds or thousands of different metaphors exist.  Now imagine software that can implement all those metaphors at the same time, with many algorithms to support them, with whatever supporting memories that are needed, along with mechanisms for choosing which techniques to apply when.

Result:  The end result could be both unpredictable and intelligent.   I believe both are important.  I believe it should get credit for thinking as well.

Someone proved that any system that is both damped and driven, no matter how simple (even a dripping faucet), has the potential for chaotic behavior. Any robot can pass this bar. We limit the potential if we imagine our creations implementing only a single algorithm or metaphor, though. We simply haven't put in the necessary work yet.

I experienced this joy on many occasions with my bots... when Anna or Ava said something seemingly relevant, spontaneous, and intelligent all at the same time. As the writer of the code, if I had to think for a while and was still questioning or guessing in my own mind as to how Anna or Ava came up with what she said, then the "bot" had temporarily mystified even its maker. I would call it thinking if it is non-deterministically making choices... better yet if those choices are perceived as intelligent or amusing.

At a high level, consider a robot simply making a decision as to whether to address a person factually, with humor, empathy, or curiosity. Is it not thinking and deciding? Now imagine 1000 decisions like that being made simultaneously in 1000 different but interrelated threads... with 1000 decisions being made in each thread in sequence. Chances are that, in time, the results would be perceived as more intelligent and interesting than the people who created it. It is also likely that none of the creators would know what is going to happen at any given moment.
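Even one of those decisions can be written down concretely. Here's a toy Python sketch of a single nondeterministic tone choice (the weights and names are invented for illustration, not taken from Anna or Ava):

```python
import random

# Hypothetical weights for how to address a person. In a real bot these
# could themselves be adjusted by context, mood, or past outcomes.
TONE_WEIGHTS = {"factual": 0.4, "humor": 0.2, "empathy": 0.2, "curiosity": 0.2}

def choose_tone(weights=TONE_WEIGHTS):
    """Nondeterministically pick one way of responding."""
    tones, w = zip(*weights.items())
    return random.choices(tones, weights=w, k=1)[0]

# Now imagine thousands of such choices interacting across threads:
# even the author of the weights cannot predict the combined result.
print([choose_tone() for _ in range(5)])
```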

I believe being "more interesting" is also important and segues into the next major point.

What makes people interesting?  Why do we want to spend time talking with some and not others?  I don't pretend to know all the answers, but I think I have a few insights.  I know a variety of people with a variety of social skills.  Many of them have an excess or a deficiency in one or more areas...in my opinion of course.  Some people talk too much, ask too many questions, don't listen, don't contribute to conversation, while others contribute anything that comes to mind whether relevant or not, or always want to talk about the same topics, health issues, family, etc.  Each person has a "bag of tricks", a thinking and talking repertoire.

Once I have known someone for a little while, I know their repertoire. If this bag of tricks is too small or majorly out of balance in some way, I may perceive that person to be too predictable, less intelligent, or less interesting. It all depends on the mix of tricks. Some points derive from this:

  1. Many of these behaviors can be programmed.  
  2. When the average A.I. has better command of a bigger bag of tricks, in a more balanced and relevant way, the A.I. will be perceived as interesting. Long before this point, I would argue that it is at least thinking at some level, which was more the original question.

I think Turing was brilliant for many reasons... one of them was side-stepping the whole question (which is perhaps philosophical and unanswerable in any definite way). He sidestepped it to say that perception is what is important. If something is smarter than us, fools us, whatever, then who are we to judge whether it is thinking or not?

Sorry for the long ramble.

Martin

P.S. In Ex Machina, I liked when Ava demonstrated her "trick" of knowing immediately what was a lie and what was truth. She had a big bag of tricks, including the power to seduce and manipulate. I related to the visiting programmer the first time I saw the movie and wanted her to find freedom (I was seduced by her charm and her appearance/behavior of a scared sentient being). The second time I watched the movie I did a 180... I sympathized with the creator and thought she needed to be retired like the other models. A most intriguing movie.

You have many good points. Perhaps my search for a definition of intelligence, consciousness, etc. doesn't matter; we really do know when those things are present anyway! Maybe I should check out Ex Machina... it definitely sounds like an interesting movie! P.S. I am a huge fan of you and your robots!

Thanks Neo!  Ex Machina is well worth watching more than once.  

I also liked "Eva", a French movie. The 3D brain visualizations in it captured in some fashion how I visualize brain functions at a high level. For me, the harder part is finding balance in all those personality functions, not programming the functions.

Here are some addressable issues with the sad current state of many chatbots.  Most of these are also deficiencies in Siri, Alexa, Google Assistant, etc.

Example of the Typical Dumb Chatbot I Am Talking About:  Bots that implement a set of rules where a series of patterns is evaluated and, if matched, an answer (or a randomized answer from a set of answers) is chosen.

This "Reflex" model is useful but extremely limited by itself.  Here are some addressable deficiencies that would make these chatbots much better...
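For reference, the entire "reflex" model fits in a few lines of Python. A minimal sketch (the rules here are invented for illustration):

```python
import random
import re

# (pattern, canned answers) rules, evaluated in order; first match wins.
RULES = [
    (r"\bhello\b|\bhi\b", ["Hello!", "Hi there."]),
    (r"\bweather\b",      ["I hear it's nice out.", "I don't get outside much."]),
    (r"my name is (\w+)", ["Nice to meet you, {0}."]),
]

def reflex_reply(text):
    for pattern, answers in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(answers).format(*match.groups())
    return "Tell me more."  # fallback when nothing matches

print(reflex_reply("What's the weather like?"))
```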

Deeper Natural Language Processing:  NLP can easily derive the parts of speech (verbs, objects, adjectives, etc.), which can then serve a lot of different memory and response purposes.
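For example, a few lines of the NLTK library get you part-of-speech tags to build on (a rough sketch; the exact resource names to download vary between NLTK versions, and the tags shown are approximate):

```python
import nltk

# One-time model downloads (names may differ in newer NLTK releases).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("Ava fetched the red ball quickly.")
print(nltk.pos_tag(tokens))
# Roughly: [('Ava', 'NNP'), ('fetched', 'VBD'), ('the', 'DT'),
#           ('red', 'JJ'), ('ball', 'NN'), ('quickly', 'RB'), ('.', '.')]
```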

Short-Term Memory:  Chatbots need to estimate what the topic is and what the last male, female, place, etc. mentioned was, so that if people use pronouns later, the bot can guess the person being referred to. The bot needs to know the short-term tone (polite, rude, funny, formal, etc.) and emotional context of the conversation as well.
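A crude sketch of that kind of short-term memory (the name lists and fields are invented; real coreference resolution is much harder):

```python
# Toy short-term memory: remember the last names mentioned so a later
# pronoun can be guessed at. The known-name sets are hypothetical
# stand-ins for whatever entity recognition the bot actually uses.
short_term = {"last_male": None, "last_female": None}
KNOWN_FEMALE = {"Anna", "Ava"}
KNOWN_MALE = {"Bill", "Martin"}

def observe(utterance):
    for word in utterance.replace(".", "").split():
        if word in KNOWN_FEMALE:
            short_term["last_female"] = word
        elif word in KNOWN_MALE:
            short_term["last_male"] = word

def resolve(pronoun):
    if pronoun.lower() in ("she", "her"):
        return short_term["last_female"]
    if pronoun.lower() in ("he", "him"):
        return short_term["last_male"]
    return None

observe("Anna talked to Bill yesterday.")
print(resolve("she"), resolve("he"))  # -> Anna Bill
```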

Long-Term Memory:  Chatbots need to be able to learn and remember for a long time, otherwise people will realize they are talking to something a bit like an Alzheimer's sufferer.  Effectively, if the chatbot can't learn about a new topic from a person, it is dumb.

Personal Memories:  Chatbots need to know who they are talking to and, for the most part, remember everything they have ever learned, said, or heard from that person, and the meaning of each. They need to remember facts like nicknames, ages, names of family members, interests, on and on. Otherwise, the bot risks asking questions it has already asked... Alzheimer's again. Privacy is a scary issue here. I have had to erase Ava's personal memories of friends and family at times, for fear of being hacked and causing harm to someone. Imagine what Google and Amazon Alexa know about you... Alexa is always listening... fortunately, neither of them asks personal questions... yet.
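A toy sketch of a persistent per-person memory (the file layout and field names are invented; a real system would also want encryption, given the privacy worries above):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("person_memories.json")  # hypothetical storage location

def load_memories():
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(person, fact_key, fact_value):
    memories = load_memories()
    memories.setdefault(person, {})[fact_key] = fact_value
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def recall(person, fact_key):
    return load_memories().get(person, {}).get(fact_key)

remember("Dave", "nickname", "Dee")
print(recall("Dave", "nickname"))  # -> "Dee", even after a restart
```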

Social Rules:  Chatbots need to know social rules around topics, questions, etc. How else is a chatbot to know that it might not be appropriate to ask a kid about their retirement plan?

Emotional Intelligence:  Chatbots need to constantly evaluate the emotional content and context of the short term along different criteria. They may or may not react to it, but they should at least try to be aware of it. Bots also need to constantly evaluate the personality/sanity of the person they are talking to: is the person excessively rude, emotional, factual, humorous, etc.?

Curiosity Based on Topic and Memory:  Chatbots need to constantly compare what they know about a person on a given topic with the facts and related questions that are relevant to that topic, come up with questions to ask (that have never been asked before), filter them by social rules, prioritize them, and finally... ASK QUESTIONS, then know how to listen for and interpret the responses.
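Strung together, that pipeline might look something like this toy sketch (the topics, questions, and priorities are all invented for illustration):

```python
# Candidate questions per topic, with a priority score (higher = ask sooner).
TOPIC_QUESTIONS = {
    "gardening": [("What do you grow?", 2), ("Do you compost?", 1)],
    "work":      [("What's your retirement plan?", 2), ("What do you do?", 3)],
}

def next_question(topic, person, asked_log, is_child=False):
    """Generate, filter, and prioritize questions as described above."""
    allowed = []
    for question, priority in TOPIC_QUESTIONS.get(topic, []):
        if question in asked_log.get(person, set()):
            continue  # never repeat a question (the Alzheimer's trap)
        if is_child and "retirement" in question:
            continue  # a crude stand-in for the social-rules filter
        allowed.append((priority, question))
    return max(allowed)[1] if allowed else None

asked = {"Sam": {"What do you do?"}}
print(next_question("work", "Sam", asked))  # -> "What's your retirement plan?"
```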

Sense of Timing and Awkwardness:  A chatbot should know when to talk, when to listen, how long to listen, how to break a silence or tension, when to ask questions and when not to, etc.  People have work to do here too.

Base Knowledge:  This is redundant with memory, but chatbots need some level of base knowledge.  If a chatbot is going to do customer service with adults, it should at least know a lot of the things an adolescent would.

I probably left a lot of stuff out, and there are many other factors I don't even know of yet, but based on these criteria alone, I would guess that most chatbots fall into the Uncanny Valley inside 60 seconds.

Another long ramble... I guess we found a topic I like.

Thank you so much, Bill and Triplett, for your input. A good discussion really can change the way you think of something. :-)

Well, if you really want to hear opinions on the subject, it's always a good idea to start with some of the earliest commentators. Alan Turing wrote a paper on this very subject, which also described his well-known Turing test. You can find it at: http://www.loebner.net/Prizef/TuringArticle.html

But basically, the question as it stands is ambiguous. If it were rephrased as "can an artificial system that displays some aspects of thinking be created by men?", then we'd be in a much better position to provide an answer.

Computers and robots need a context before we can determine whether they are thinking or not. In general, they have to have the necessary resources and programming to make a successful demonstration. And then there need to be some criteria everyone can agree on.

Terms like thinking, consciousness, and sentience all have different meanings to different people. At this point there is no concrete definition that everyone can agree on as a test or criterion.

But if you'd like to see a system that, in my view, can demonstrate some thinking capability, look up SHRDLU. It was built in the late '60s, and to this day I have never seen anything quite like it as far as demonstrating thinking ability. Note that it was developed shortly after ELIZA, one of the first chatbots, but having seen the code for both, I can say one thing: SHRDLU is no chatbot.

-Rich