Author Topic: Artificial Intelligence and Sentience
Michelle
Moderator
Babbler # 560

posted 25 March 2002 12:47 PM
Well, I can't seem to enforce my self-imposed banishment from babble (I've been staying away because I have one 6-page and three 15-page term papers to write over the next couple of weeks).

So I figured, what the heck, why not start a discussion on what one of my term papers is about?

I'm taking a philosophy of mind course, and the focus is on whether feelings and thought are separate, and what constitutes "consciousness". My term paper is on the "zombie problem". There's no specific question given by the professor of the course, so there's no actual "question" for me to put to you. But here's how I frame the problem.

Most people consider humans and machines to be fundamentally different. Humans are sentient and machines are not. This is because humans are conscious and machines aren't. We have feelings, machines don't. However, bear with me through a thought experiment:

If we humans were to create artificially intelligent machines so complex that they could learn, have behaviour, and distinctive personalities - if we could somehow figure out all the mechanical workings of the brain and body, and recreate that in human-made machines, would those machines be sentient in the way we consider humans to be sentient?

Yes, I know there was a Robin Williams movie made on this subject, based on an Asimov book (darn, the name escapes me).

Anyhow, this is what I'm occupying my mind with at the moment - well, that and a few other fun topics like that for my OTHER term papers.

Anybody want to take on this discussion with me? I think it's fascinating because you can go so many directions with it - what is sentience? What is the origin of feelings (physical responses or some kind of metaphysical entity)? Are feelings and thoughts separate? What are the ethical implications? What are the religious beliefs (or lack of them) about humans and sentience that would be relevant?

[ March 25, 2002: Message edited by: Michelle ]


From: I've got a fever, and the only prescription is more cowbell. | Registered: May 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 25 March 2002 01:30 PM
I have argued this at length before on babble, and I would say yes: unless you can present evidence that they are not sentient, or some justification that can be defined in "material" terms, so to speak, then there is no reason to claim that artificial intelligence is not, well, intelligent.

This, of course, presupposes a definition of "sentience" and "intelligence." To this, I simply propose the Turing Test: if you would treat it as you would a human being, then it is intelligent under any possible explicit definition of the term.

I hew, of course, to the belief that it should be possible to express much if not most of human behaviour in terms of recursive formal constructs (with a little bit of probability thrown in internally and a lot externally, from the environment), which makes sense to me as a generative linguist and AI guy.
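For the concretely minded, here's a toy sketch (in Python) of what I mean by "recursive formal constructs with a bit of probability thrown in": a tiny probabilistic phrase-structure grammar whose rules call themselves. The rules and vocabulary are invented purely for illustration, not taken from any real grammar.

code:
import random

# A toy probabilistic, recursive phrase-structure grammar.  The rules and the
# vocabulary are made up for illustration; nothing here comes from a real grammar.
GRAMMAR = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["the", "N"], 0.7), (["the", "N", "that", "VP"], 0.3)],  # NP can contain a VP: recursion
    "VP": [(["V", "NP"], 0.6), (["V"], 0.4)],                        # VP can contain an NP: recursion
    "N":  [(["robot"], 0.5), (["linguist"], 0.5)],
    "V":  [(["sees"], 0.5), (["imitates"], 0.5)],
}

def expand(symbol):
    """Recursively rewrite a symbol until only words are left."""
    if symbol not in GRAMMAR:                      # a terminal word
        return [symbol]
    expansions, weights = zip(*GRAMMAR[symbol])    # the probability is the "little bit" thrown in
    chosen = random.choices(expansions, weights=weights)[0]
    words = []
    for part in chosen:
        words.extend(expand(part))
    return words

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(expand("S")))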


From: There, there. | Registered: Jun 2001  |  IP: Logged
Michelle
Moderator
Babbler # 560

posted 25 March 2002 01:37 PM
Well, my definition of "sentient" includes more than just intelligence. I think everyone can imagine a computer that can be as "intelligent" as a human and can learn. The question I'm wondering about is: is there any qualitative difference between that and a human?

In other words, is it possible to make an artificial human that experiences the same qualia - the feeling of having a feeling (does that make sense?) - as real human beings do? Would it actually have the sense of "self" that we as humans have?


From: I've got a fever, and the only prescription is more cowbell. | Registered: May 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 25 March 2002 01:49 PM
So, like, how do I know that you have a sense of self or a feeling of having feelings except that you claim to and seem, by all appearances, to have one? I claim that the same criterion should apply to a computer, as we have no other criterion that is not entirely arbitrary.
From: There, there. | Registered: Jun 2001  |  IP: Logged
DrConway
rabble-rouser
Babbler # 490

posted 25 March 2002 02:13 PM
quote:
Yes, I know there was a Robin Williams movie made on this subject, based on an Asimov book (darn, the name escapes me).

The Bicentennial Man. The book-length version of the short story is The Positronic Man, which was co-written with Robert Silverberg; that expansion was exceedingly well done, and on balance I would say it is better than the short story, unlike his other two.

Moving right along...

quote:
I'm taking a philosophy of mind course, and the focus is on whether feelings and thought are separate, and what constitutes "consciousness". My term paper is on the "zombie problem". There's no specific question given by the professor of the course, so there's no actual "question" for me to put to you. But here's how I frame the problem.

Most people consider humans and machines to be fundamentally different. Humans are sentient and machines are not. This is because humans are conscious and machines aren't. We have feelings, machines don't. However, bear with me through a thought experiment:

If we humans were to create artificially intelligent machines so complex that they could learn, have behaviour, and distinctive personalities - if we could somehow figure out all the mechanical workings of the brain and body, and recreate that in human-made machines, would those machines be sentient in the way we consider humans to be sentient?


Computer geeks learn, either formally or informally, about something called the Turing test which, to gloss over some things a bit, tries to determine whether the entity being conversed with is really a person or a computer. The machine passes the test if it cannot be conclusively determined whether the "other entity" is a human or a machine.

In that respect there isn't really a difference between humans and artificial intelligences, in theory. In practice, because such "AI" computers still fail the Turing test, there is a difference.
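To make that setup concrete, here's a bare-bones sketch of the imitation game in Python. The judge, the human and the machine are placeholder callables supplied by whoever runs the experiment; none of this is a standard implementation, just the shape of the protocol.

code:
import random

def run_imitation_game(judge, human_reply, machine_reply, questions):
    """One round of a bare-bones imitation game.  The judge sees two unlabeled
    transcripts (one human, one machine) and guesses which label hides the machine.
    All three participants are placeholder callables, not any standard API."""
    # Randomly hide the two respondents behind the labels "A" and "B".
    pair = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(pair)
    labels = dict(zip("AB", pair))                     # e.g. {"A": ("machine", fn), "B": ...}
    transcripts = {label: [(q, fn(q)) for q in questions]
                   for label, (_, fn) in labels.items()}
    guess = judge(transcripts)                         # judge returns "A" or "B"
    return labels[guess][0] == "machine"               # True = judge caught the machine

if __name__ == "__main__":
    questions = ["How do you feel today?", "What is 12 times 7?"]
    human   = lambda q: "Oh, fine, I suppose."
    machine = lambda q: "84" if "12" in q else "I am operating within normal parameters."
    guessing_judge = lambda transcripts: random.choice(list(transcripts))
    caught = sum(run_imitation_game(guessing_judge, human, machine, questions)
                 for _ in range(1000))
    # If no judge can do better than the ~500/1000 of random guessing, the
    # machine is, in this sense, indistinguishable from the human.
    print(f"machine identified in {caught}/1000 rounds")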

Incidentally, Dr. Julian Jaynes wrote The Origin of Consciousness in the Breakdown of the Bicameral Mind. It's one of the wilder publications out there (by psychological standards), and he got his ass kicked (figuratively) after he published it in 1976. However, it is a very unconventional exposition on how human consciousness developed (in his view, mind).

The book itself jumps around a bit, so it's harder to follow than it needs to be.

Just my two cents.

[ March 26, 2002: Message edited by: DrConway ]


From: You shall not side with the great against the powerless. | Registered: May 2001  |  IP: Logged
skdadl
rabble-rouser
Babbler # 478

posted 26 March 2002 01:15 PM
Consider the human body.
From: gone | Registered: May 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 26 March 2002 02:16 PM
Why?
From: There, there. | Registered: Jun 2001  |  IP: Logged
nonsuch
rabble-rouser
Babbler # 1402

posted 26 March 2002 02:29 PM
Because our experience comes from and through it.
Because our thoughts and activities are motivated by its needs.
Because our feelings are generated by it, before our brains have translated these feelings into emotion.

But why must sentience be human or even human-like?

edited to add: Michelle, it's worth renting Bicentennial Man. Might give you another take on the subject, and it's a good movie (suitable for children, too).

[ March 26, 2002: Message edited by: nonesuch ]


From: coming and going | Registered: Sep 2001  |  IP: Logged
clockwork
rabble-rouser
Babbler # 690

posted 26 March 2002 11:33 PM
quote:
In that respect there isn't really a difference between humans and artificial intelligence entities, in theory. In practice because such "AI" computers still fail the Turing test, there is a difference.

Bah. I found conversing with ELIZA more interesting than some of the conversations I've had on babble. It was almost like we shared the same sense of humour. ELIZA would ask me a dumb question and then I'd think to myself, "what a dumb question", and for a lark I'd respond in kind. But ELIZA replied back with another dumb question. "Stupid little program thinks it's smart, eh? Well, I can play this game, too!" I commented as I started writing another dumb question to ELIZA.

Harhar... great minds.

(Psst: in case people miss it, there is a point to that story. Not much of one, but one nonetheless)
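(For anyone who's never poked at it, the whole trick behind ELIZA is a short list of pattern-matching rules that reflect your own words back at you as questions. Here's a stripped-down sketch, with rules I've made up rather than taken from Weizenbaum's actual script:)

code:
import random
import re

# Pronoun reflections, so "my paper" comes back as "your paper".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A few made-up keyword rules: a regex plus response templates that reuse
# whatever the pattern captured.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?",
                                        "How long have you felt {0}?"]),
    (re.compile(r"i think (.*)", re.I), ["What makes you think {0}?"]),
    (re.compile(r".*"),                 ["Please go on.",
                                         "What does that suggest to you?"]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance):
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*reflected)

if __name__ == "__main__":
    print(respond("I feel that my term paper is writing itself"))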

Isn't the idea of consciousness, sentience, intelligence a red herring anyway? Even feelings and emotions: couldn't I argue that these are system states that we abstract from our experience?


From: Pokaroo! | Registered: May 2001  |  IP: Logged
clockwork
rabble-rouser
Babbler # 690

posted 27 March 2002 01:09 AM
Just to add:
quote:
If we humans were to create artificially intelligent machines so complex that they could learn, have behaviour, and distinctive personalities - if we could somehow figure out all the mechanical workings of the brain and body, and recreate that in human-made machines, would those machines be sentient in the way we consider humans to be sentient?

This is what I'd argue, with a caveat. Our existence, our sentience, our consciousness, I would think, is peculiar to how we are wired and the way we process sensations. I'd still argue that machines can be sufficiently complex to be what we would describe as "sentient", but it would not be the same sentience that we experience. We would not be recreating "us" in a computer.

Another pet theory of mine has to do with emotions. I've long thought that emotive states are geared towards making us act in certain ways in certain situations (and there are theories that say emotions are a part of our decision-making process, although I've since read some critiques of these theories), but I also wonder if emotions and the expression of them served as a primitive language. To me it seems a bit self-evident. The emotional segment of the brain is a part of the evolutionary "old" part of the brain, and to me it would seem like a good hack if you can utilize this system for communication as well as behaviour.
But that is neither here nor there, I guess.

'nother useless tidbit. Not sure if this is true or not, but I seem to remember being told, or reading, that some cultures actually didn't think the head was the centre of consciousness (or thinking, or something). Certain cultures thought the heart was where "thinking" was done. One even thought it was the foot, or something. I have no idea where I could look this up to verify.

[ March 27, 2002: Message edited by: clockwork ]


From: Pokaroo! | Registered: May 2001  |  IP: Logged
DrConway
rabble-rouser
Babbler # 490

posted 27 March 2002 01:40 AM
The Bible has such statements; the Greeks and Romans had similar ideas.
From: You shall not side with the great against the powerless. | Registered: May 2001  |  IP: Logged
mick1000
rabble-rouser
Babbler # 721

posted 27 March 2002 07:58 PM
Hi Michelle . . . I knew that you would eventually need to read some of my favorite authors before you completed your university education. I suggest that you read the book "I, Robot" by Isaac Asimov. Asimov has dealt extensively with the idea of sentient robots in his various writings.
From: Picton, Ontario | Registered: Jun 2001  |  IP: Logged
'lance
rabble-rouser
Babbler # 1064

posted 27 March 2002 08:26 PM
quote:
This, of course, presupposes a definition of "sentience" and "intelligence." To this, I simply propose the Turing Test: if you would treat it as you would a human being, then it is intelligent under any possible explicit definition of the term.

I'm not a philosopher, computer person, or AI guy, so I'm doubtless wading into very murky waters without so much as a piece of scrap lumber for flotation. But something about the Turing Test bothers me. It seems to beg the question, somehow.

Just how does one treat a human being, anyway? That is, when we talk about treating a sentient machine like a human being, we presumably mean recognizing its sentience, interacting with it as equals, recognizing nuances of meaning and emotion and expecting it to do the same for us, and so forth.

But then there have been, and are, entire classes of people who treat sentient human beings... well, like machines. Like objects, that's to say -- you can think of "dissociated" personalities, sociopaths without empathy, to whom other people are at best puppets, or those people who plan and carry out great genocides, or just the cruel bastards who... well, we can all read the news. Or (though the comparison be distasteful) those management theorists who slot people into this or that role in an organization by thinking of them in the abstract, as collections of qualities or skills.

Perhaps, thinking about this, it doesn't constitute a philosophical objection to the Turing Test. (And for all I know, Turing talked about this in his own writing). But it just seems there's a big ol' assumption, at least one, lurking behind that succinct "definition," and one which hardly holds universally.


From: that enchanted place on the top of the Forest | Registered: Jul 2001  |  IP: Logged
DrConway
rabble-rouser
Babbler # 490

posted 27 March 2002 10:23 PM
The whole point of the Turing Test is that it is designed solely with the objective of being able to FAIL to determine whether an entity is a human or a machine. At that point we've reached the state of true AI, and the blurring of distinction between human and machine.
From: You shall not side with the great against the powerless. | Registered: May 2001  |  IP: Logged
nonsuch
rabble-rouser
Babbler # 1402

posted 28 March 2002 02:02 AM
Yabut, aren't you looking into a mirror?
From: coming and going | Registered: Sep 2001  |  IP: Logged
skdadl
rabble-rouser
Babbler # 478

posted 28 March 2002 09:20 AM
quote:
feelings and emotions: couldn't I argue that these are system states that we abstract from our experience?

Can you define the word "experience" there (so as to distinguish it from the other words)?


From: gone | Registered: May 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 28 March 2002 10:18 AM
experience == sensory input for the purposes of this discussion.

Now, my view is somewhat rooted in GOFAI (good old fashioned AI) and symbolic AI, which is the view that seems to be on the wane these days--not really, but people are looking for statistical and biological shortcuts to problems that appear intractable, and I suppose there's nothing wrong with that. But I think that the decline in interest in describing the processing of experience in formal terms is a temporary diversion that may eventually result in new insight with which to return to symbolic AI.

I am inclined to discount the importance of the body. I believe that it is possible to abstract away most of the "body" and sensory input issues from the mind. The attempt to unify or justify the mind in terms of the body is a futile project, until, of course, someone can tell me what "body" is and why I cannot treat the mind as an entity in its own right.

Consider language: sign language, for instance, contains all of the formal grammatical properties that spoken language does. The type of input the formal system of human language receives doesn't seem to matter to it--in fact the input can be extremely disorganized, but in the language acquisition process the input is regularized to an astonishing degree (pidginization and creolization).

Does the input matter? Of course it does. But does it matter how exactly we get the input, or does the fact that we have a liver or a kidney have a significant effect on cognition? Even if it did, I claim that these things can be modeled with logical propositions. To me, it shouldn't matter how we implement the AI, so long as it has some way of receiving input and interacting with the environment. Then the most significant issue is developing the formal system of cognition.
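Here's a toy of what "modeled with logical propositions" could look like in practice: sensory input arrives as bare propositions, and a small forward-chaining engine draws the same conclusions no matter which sense (or organ) the input came through. The facts and rules are invented for illustration; I'm not claiming this is how cognition actually works, only that the input side can be abstracted this way.

code:
def forward_chain(facts, rules):
    """Keep applying rules (premises -> conclusion) until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

if __name__ == "__main__":
    # "Sensory input" as propositions -- it doesn't matter whether they arrived
    # through eyes, ears, or a signed language; the formal system treats them alike.
    facts = {"bright_light", "moving_shape"}
    rules = [
        ({"bright_light"}, "daytime"),
        ({"moving_shape", "daytime"}, "something_approaching"),
        ({"something_approaching"}, "attend_to_it"),
    ]
    print(sorted(forward_chain(facts, rules)))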



As for 'lance, well, I don't have an International Philosophy Authority license either, but it certainly doesn't stop me. Some people think it should, but I don't agree.

Yeah, the most bothersome aspect people seem to find in the Turing test, and in a host of other issues from "strong" AI (as opposed to "weak" AI, which only claims to help with intractable computational problems, not to model human cognition), is that it is an implicit definition rather than an explicit one. This is so because no one agrees on what intelligence is, after all. But the Test is designed under the claim that we can more easily come to an agreement on what it is not. Isn't that plausible? We can both agree that Marvin the Depressed Robot (a Hitchhiker's Guide reference, for the uninitiated) is intelligent, but that this quaint NCD X/Terminal that I'm using now is not.

That people treat other people as machines is a sociological point--it only matters if you believe that if human behaviour can be replicated by machines, it somehow diminishes human dignity. I do not.

[ March 28, 2002: Message edited by: Mandos ]


From: There, there. | Registered: Jun 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 28 March 2002 10:19 AM
I'm not sure what to make of nonesuch's last claim. What does this have to do with mirrors?
From: There, there. | Registered: Jun 2001  |  IP: Logged
skdadl
rabble-rouser
Babbler # 478

posted 28 March 2002 10:58 AM
quote:
But does it matter how exactly we get the input, or does the fact that we have liver or a kidney have a significant effect on cognition? Even if it did, I claim that these can be modeled with logical propositions.

I am understanding you a bit better, Mandos. We'll keep working on this.

However: I can grasp and accept your claim that these can be modelled. But how can you build something that will go in the same way? I mean, anything you build will go on electricity, will it not? Human beings do not go on electricity.

What do human beings go on? And would that not be missing from anything you built? And would that not be an essential part of human mind? Or something.


From: gone | Registered: May 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 28 March 2002 11:21 AM
Skdadl, when you tell me what humans go on (leaving aside, incidentally, the matter of defining go, where I assume we understand each other implicitly) and how it matters to cognition, then I can answer your question. Basically, what you are demanding is that the body, whatever that is, has some essential characteristic that determines mind. This is the standard argument against strong AI, actually. But I find it weak. What if we met an alien species that we can communicate with? Would they have to be running on the same thing, whatever it is? Do all humans have this characteristic, again, whatever it is?

I will make a more radical claim. The only definition of human that we have that doesn't have serious problems is an implicit one--effectively, the Turing Test. The only reason why I don't assume that you, skdadl, are a computer program is that you pass the Turing Test, and that I don't know of any computer program that really does--yet. Particularly not for a whole year of reading your posts. So, by a deduction from the Turing Test, you must be a human being. Even if I met you in person, I would still implicitly be using the Turing Test in order to determine that you were not a machine. This line, of course, would be blurred were we to invent a machine that does pass the Test.


From: There, there. | Registered: Jun 2001  |  IP: Logged
nonsuch
rabble-rouser
Babbler # 1402

posted 29 March 2002 11:17 AM
I read a short story some time ago, about two spacemen in an alien supply depot. They needed food, but did not recognise the stuff in the packages. Finally, one found something edible. How did he know? “Because it tried to eat me.”

We don’t have a test for sentience radically different from our own. Our tests are predicated on likeness. If it thinks like me, it’s intelligent. If it acts like me, it’s self-aware. Of course, something you build can satisfy those criteria if it imitates you.
That’s what I meant by looking into a mirror.

If one believes in God and the Creation, artificial intelligence is quite plausible. We build something complex, program it for pattern-recognition, logical response, then self-preservation and maybe self-replication. This is what God did: First Man was already a man, defined and complete, before God breathed life into him.

Evolution works quite differently. The mind and body developed together, from a lump of molecules to the present complexity. The first awareness/feeling/thought was a need/desire; the first act was a blind response to need. The first recognizable brain was a primitive sense organ. There is no line of demarcation between body and mind: it’s all one ball of squishy substance, striving toward a state that corresponds to the warm end of the puddle with muck floating in it.

In order for a machine to have its own separate consciousness, it would have had to evolve from (e.g.) a ball-bearing; its history would be a quest for smooth surfaces and round pebbles. All its thought would have grown out of that need. We wouldn’t understand it, because it’s not like us.

A sentient computer can only be a human mind in an artificial body.
It would probably burn out in the first five minutes of self-awareness, from the conflict between the will to survive and the futility of survival.


From: coming and going | Registered: Sep 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 30 March 2002 02:19 AM
I would accept your view only if I also believed in a direct adaptive correlation between human intelligence and biological evolution. I think that view misses the possibility (probability!) that most of what we call "intelligence" is an emergent system that can't directly be correlated to the environment. Only then can you claim that there is no distinction between body and mind--but evolution itself doesn't imply any of the philosophical, ahem, meanderings that you've presented. In a sense, evolution and God need not be so distant from each other in character.
In many ways your argument itself suffers from the problem that it identifies: you are merely demanding a different kind of mirror (evolution--as opposed to cognition). But I don't actually see the "mirror" business as a serious criticism, since it is inescapable, even for humans interacting with each other.
The last part of your argument is beyond me: why should that be a serious problem? Where does this "futility" come from, and why does it matter? I don't struggle with that, and I don't see why an AI should. What you have given is not a serious challenge to AI, as the terms you use are not easily defined or applied. The only issue there is evolution, and that is not necessarily important.

From: There, there. | Registered: Jun 2001  |  IP: Logged
nonsuch
rabble-rouser
Babbler # 1402

posted 30 March 2002 09:58 PM
The biological entity i described responds to the environment at every stage - indeed, every instant - of its existence. It - the original string of molecules - is still going: it has replicated and added to itself, but never left anything behind. The experience of all living matter, back to the beginning, is in all living matter now. I am a direct descendant of that needy scum in the first puddle, and so are you, and so are all the plants and animals, bacteria, fish, insects and birds on this planet. That's why we can identify one another as living: we are all related.
Long before we were human; long before we could name our feelings, or had a concept of self and other, we had instincts. These, in turn, grew out of our physical needs. The will to survive is an integral part of biological entities. There is nothing rational about it, even if we've learned to dress it up in fancy words.

Well, the AI doesn't have that. All it has is a command: Survive. Its logic can't justify, or even understand, this command. Logically, there is no purpose in survival. The machine isn't carrying that blind, stupid, hungry bit of pond-scum which drives biological entities.

Okay, if you posit a magic moment, when consciousness suddenly happens, and that consciousness comes from someplace (where?) other than the entity itself, out of something (what?) other than the history of the entity thus far, then all bets are off; anything's possible.
Just because i can't see it, doesn't mean it's not so.


From: coming and going | Registered: Sep 2001  |  IP: Logged
clockwork
rabble-rouser
Babbler # 690

posted 03 April 2002 11:14 AM
I'm stretching the boundaries of my mind, so bear with me. Hope I'm not rambling, nor rehashing old territory.
quote:
I would accept your view only if I also believed in a direct adaptive correlation between human intelligence and biological evolution. I think that view misses the possibility (probability!) that most of what we call "intelligence" is an emergent system that can't directly be correlated to the environment. Only then can you claim that there is no distinction between body and mind--but evolution itself doesn't imply any of the philosophical, ahem, meanderings that you've presented. In a sense, evolution and God need not be so distant from each other in character.

While I believe that intelligence is an emergent phenomenon and that evolution does not have values, I don't see why the two must be divorced. Our minds have given us a great evolutionary advantage. Witness the 6 billion odd people on this world. And if all six billion of us died tomorrow, "intelligence" may never appear again. And while I think that you can view the mind as an abstraction, I'm not at all willing to concede that you can just throw away your grey matter in order to understand it.

At some level, I'd say yes, our minds can operate in a rational fashion. A mind can solve math problems and know that they are indeed solved. It can operate on a system that can be modeled by some set of formalized rules. But our minds rarely work on such a level. Biological factors - your emotions and feelings - are much more involved in the day-to-day workings of what you think (and thus, how you act). Now, it might be possible to describe the way the old brain, the "animal brain" if you will, influences the evolutionarily recent brain (where all the complex thought goes on), but does that mean they can be separated?

So, in my view, I disagree with this completely:

quote:
A sentient computer can only be a human mind in an artificial body.

This presupposes that our sentience is universal. From this, you could also argue that if intelligent life exists somewhere other than this planet, that life is, in essence, human.

Could I claim that intelligence and sentience - being self-aware - are two different things? Intelligence, or rational thought, is a powerful tool that can override instinctual behaviour. Could you not be self-aware, hold belief structures that are false, and still function? For instance, crows are apparently bright little critters. They might not discover general relativity, but they seem able to solve relatively complex problems. But does a crow think about whether what it is doing is right or wrong? Or does it even think of itself? And if it doesn't think of itself - that "self-awareness" - but displays some complex spatial reasoning and such, can you consider it intelligent?

And what do people make of Phineas Gage?

[ April 03, 2002: Message edited by: clockwork ]


From: Pokaroo! | Registered: May 2001  |  IP: Logged
nonsuch
rabble-rouser
Babbler # 1402

posted 03 April 2002 11:33 AM
I did not at all presuppose that 'our' sentience is universal - only that the same sentience pervades all life on Earth.
There may well be other sentient creatures out there (indeed, it's hard to imagine that this one small speck in all the universe is alone in having given rise to intelligent life). But we wouldn't recognize it, unless it was like us (which is probable, given that Earth came out of the same matter as that Other Planet where the Other life-form evolved).

The computer, however, does not have its own origin, evolution and history. It is a human construct. Its intelligence is a human intelligence, lopped off the top of a human mind. It has already been divorced from the crow, the rat, the lizard and the monkey - all of which intelligences we carry within us. Thus, the computer is alien to 95% of human experience, sensation and emotion; kin to only the most recent 5% of abstract, rational human thinking. Hence the name: Artificial (as distinct from natural) Intelligence.

[ April 03, 2002: Message edited by: nonesuch ]


From: coming and going | Registered: Sep 2001  |  IP: Logged
Donovan King
rabble-rouser
Babbler # 1872

posted 04 May 2002 11:51 PM
I suggest you read CYBORG by Steve Mann for a fascinating look at your topic, Michelle. You might find it inspiring...

DK


From: Montreal | Registered: Nov 2001  |  IP: Logged
