rabble.ca - news for the rest of us
» babble   » right brain babble   » humanities & science
Author Topic: Technological singularity spells doom for mankind
Jimmy Brogan
rabble-rouser
Babbler # 3290

posted 13 May 2003 07:13 PM
I brought up sci-fi author and mathematician Vernor Vinge on another thread, and it got me thinking about his spooky idea of what lies just ahead for the human race: a technological singularity that will be the end of humankind. He can explain it far better than I can:

Vinge on the singularity

quote:
Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.


quote:
What is The Singularity?

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)

Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
Biological science may provide means to improve natural human intellect.
The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [17].

Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [20] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work -- the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.




I have a hard time finding flaws in his logic. Maybe the time-frame is overly foreshortened.
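The feedback loop Vinge describes can be sketched numerically. In this toy model (my illustration, not Vinge's own math), capability x improves at a rate proportional to x squared, so growth is hyperbolic rather than exponential: x(t) = 1/(1 - t), which diverges at the finite time t = 1.

```python
# Toy model of the "intelligence improves intelligence" loop: dx/dt = x^2.
# Unlike plain exponential growth, which never diverges, this blows up at
# a FINITE time. Crude Euler integration shows the blow-up directly.

def time_to_blowup(x0=1.0, dt=1e-4, cap=1e9):
    """Integrate dx/dt = x**2 until x exceeds cap; return elapsed time."""
    x, t = x0, 0.0
    while x < cap:
        x += x * x * dt   # each gain in x accelerates the next gain
        t += dt
    return t

print(time_to_blowup())   # close to 1.0, however high the cap is set
```

The design point of the sketch: raising the cap barely moves the answer, because almost all of the growth happens in the last instants before the divergence. That finite-time blow-up, not mere fast growth, is the mathematical shape of the singularity claim.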


More discussion of the singularity


From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002  |  IP: Logged
TommyPaineatWork
rabble-rouser
Babbler # 2956

posted 13 May 2003 11:25 PM
quote:
Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.

While I agree the time frame might be a little compressed, I think we're on track.

It dawned on me when they put electrodes, first into chimpanzees and later into humans, that enabled them to move a cursor on a computer terminal just with their brains.

While we look at this and other technologies as computer prosthetics, surely applications as enhancements are only a millisecond behind.

I envision having the net, or at least your "favorites" hard wired into your brain.

Making winning at "Trivial Pursuit" more a function of resource selection than esoterica recall, for example.


From: London | Registered: Aug 2002  |  IP: Logged
Rebecca West
rabble-rouser
Babbler # 1873

posted 14 May 2003 04:43 PM
I don't think anyone knows enough about singularity mechanics to be able to say, with any degree of certainty, that digital sentience is a possibility. So far, nothing Cycorp, MIT or any AI think tank you can think of has come up with a machine that could pass a Turing test.

As far as designing super-human beings goes, if we do indeed develop the technology to expand the human life span to the point where the sum of individual knowledge no longer goes down the crapper when we die or fall prey to senile dementia, we might explore the limits of the human mental capacity. We may even enhance it. But there are limits to the biological hardware that we are as yet completely ignorant of. Time will tell, of course.

AI research has been around for quite a while, but Bioinformatics is relatively new. And that's where you'll get the technology for human-digital interface. It's a long way away from the kind of sophistication required to significantly enhance human intelligence.

I think Vinge and others who are interested in a singularity that would create a self-aware machine or a super-intelligent human being have been reading too much Asimov. It's an interesting vision of the future, but really too full of "ifs" and "whens" to be a technological course plotted with a foreseeable outcome.

[ 14 May 2003: Message edited by: Rebecca West ]


From: London , Ontario - homogeneous maximus | Registered: Nov 2001  |  IP: Logged
clockwork
rabble-rouser
Babbler # 690

posted 14 May 2003 04:54 PM
quote:
Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.

And I doubt computer networks will suddenly "wake up" either, at least not in the sense we understand. There is no body, no driving need to form some sort of goal (what, are they gonna wake up one day and think that the subjugation of humans is a great thing and enslave us all? Why would it wake up to that? Do machines care about money and power? Would a sufficiently complex computer even be able to grasp that?).

From: Pokaroo! | Registered: May 2001  |  IP: Logged
iworm
rabble-rouser
Babbler # 2976

posted 14 May 2003 05:51 PM
quote:
It dawned on me when they put an electrode, first into chimpanzees ...which enabled them to move a cursor on a computer terminal just with their brains.

So now every computer will come equipped with a monkey! (Albino monkeys for iMacs)

P.S. If your monkey arrives dead, call 1-800-deadmonkey


From: Constantly moving | Registered: Aug 2002  |  IP: Logged
SamL
rabble-rouser
Babbler # 2199

posted 14 May 2003 07:19 PM
I think we're going to have to worry about a collapse of oil reserves much sooner than worrying about escaping the Matrix.
From: Cambridge, MA | Registered: Feb 2002  |  IP: Logged
Rebecca West
rabble-rouser
Babbler # 1873

posted 15 May 2003 11:38 AM
quote:
And I doubt computer networks will suddenly "wake up" either, at least not in the sense we understand. There is no body, no driving need to form some sort of goal (what, are they gonna wake up one day and think that the subjugation of humans is a great thing and enslave us all? Why would it wake up to that? Do machines care about money and power? Would a sufficiently complex computer even be able to grasp that?).

It's an interesting philosophical problem, in a way. I mean, if our brains/minds are a kind of organic computer with biochemical algorithms dictating the flow of information, what makes us self-aware? Is a newborn infant self-aware, or does it require more data input for that? It used to be argued that hardware limitations prevented AI from advancing to the point where a machine could be self-aware, that no machine could hold enough information to truly approximate the human sentient experience.

With quantum computing, and a host of other technological innovations, the hardware limitations aren't an issue anymore. But still, even those who're involved in the most sophisticated AI research and development (in the private corporate sector, natch) cannot claim to have produced a self-aware machine yet.

I think it has to do with our very new and limited understanding of singularity mechanics, the science of creation. God, if you will.


From: London , Ontario - homogeneous maximus | Registered: Nov 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 15 May 2003 11:43 AM
Don't forget self-organized criticality!
From: There, there. | Registered: Jun 2001  |  IP: Logged
Rebecca West
rabble-rouser
Babbler # 1873

posted 15 May 2003 12:00 PM
What's self-organized criticality?
From: London , Ontario - homogeneous maximus | Registered: Nov 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 15 May 2003 12:03 PM
http://pil.phys.uniroma1.it/~zapperi/research/node2.html

It's a short description of the idea. I can PM you with the URL of a detailed slide show I used for a talk I gave on it, if you want. Per Bak's book on it is also very readable, if a little blow-own-trumpety.


From: There, there. | Registered: Jun 2001  |  IP: Logged
Rebecca West
rabble-rouser
Babbler # 1873

posted 15 May 2003 04:15 PM
Sure.

Edited to add: oh, okay. That makes sense. If I'm understanding it correctly, I imagine that it would be some sort of self-organized criticality that would eventually transition to a self-aware "state" for the machine.

[ 15 May 2003: Message edited by: Rebecca West ]


From: London , Ontario - homogeneous maximus | Registered: Nov 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 15 May 2003 07:33 PM
Well, the general idea is that intelligence is a critical state of a complex system, so yeah. Bak's book has a chapter on SOC in the brain. I'm not totally sure I agree with it, but it's interesting stuff.
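For readers who want to poke at the idea directly, the canonical toy model behind Bak's book is easy to code. Below is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile (the grid size and drop count are arbitrary choices of mine, not from the linked page):

```python
import random

# Bak-Tang-Wiesenfeld sandpile, the standard toy model of self-organized
# criticality. Grains are dropped one at a time; any cell holding 4 or
# more grains topples, shedding one grain to each neighbour and possibly
# triggering a cascade. The pile tunes ITSELF to a critical state in
# which avalanche sizes span many orders of magnitude.

N = 20                                   # grid is N x N

def drop(grid, i, j):
    """Add one grain at (i, j), topple until stable, return avalanche size."""
    grid[i][j] += 1
    unstable = [(i, j)]
    size = 0
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue                     # already relaxed below threshold
        grid[x][y] -= 4
        size += 1                        # count each toppling event
        for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
            if 0 <= nx < N and 0 <= ny < N:   # edge grains fall off the table
                grid[nx][ny] += 1
                unstable.append((nx, ny))
    return size

random.seed(0)
grid = [[0] * N for _ in range(N)]
sizes = [drop(grid, random.randrange(N), random.randrange(N))
         for _ in range(20000)]
print(max(sizes))   # the rare largest avalanches dwarf the typical ones
```

No parameter here is tuned to a critical value; the system drifts to criticality on its own, which is the whole point of "self-organized."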
From: There, there. | Registered: Jun 2001  |  IP: Logged
clockwork
rabble-rouser
Babbler # 690

posted 16 May 2003 12:09 AM
I argued long ago that computers, the type that sits on your desktop, couldn't be labeled intelligent or self-aware… but I argued that a robot could… or at least pass a Turing Test for robots (meaning that a human observer, minus the outward visual cues, couldn't decide if the robot was being controlled or was exhibiting intelligent behaviour).

But I want to go back to Mandos's link…

I only mention this as filler to bump the thread for tomorrow so I'll remember it.


From: Pokaroo! | Registered: May 2001  |  IP: Logged
WingNut
rabble-rouser
Babbler # 1292

posted 16 May 2003 12:44 PM
quote:
Originally posted by clockwork:

And I doubt computer networks will suddenly "wake up" either, at least not in the sense we understand. There is no body, no driving need to form some sort of goal (what, are they gonna wake up one day and think that the subjugation of humans is a great thing and enslave us all? Why would it wake up to that? Do machines care about money and power? Would a sufficiently complex computer even be able to grasp that?).


I agree. More than that, I doubt we could ever program deviousness or psychopathic behaviour into a machine.

It is possible, though I don't think in the immediate future, to expand human capabilities with a computer chip. Intelligence is another matter. For example, we could put an entire medical encyclopedia onto a chip and implant it into a human with a direct interface into the brain allowing immediate access.

This does not necessarily lead to the recipient becoming the world's leading brain surgeon, or having any more ability than being able to recall verbatim the contents of the encyclopedia.


From: Out There | Registered: Aug 2001  |  IP: Logged
clockwork
rabble-rouser
Babbler # 690

posted 16 May 2003 11:54 PM
My friend, next to me, has a cigarette, breaks a cold sweat, goes purple and loses consciousness.

I think, "Oh no, these are all the signs of a heart attack!"

So I grab my knife and some PVC tubing that I keep around the house, knowing that with my trusty chip in the back of my head, I know what to do. I must do an incision here below the right ventricle. With luck, I'll insert the tubing and complete a bypass.

I make the first cut… "Ewwwwww! Blood!"

I pass out, but unlike my friend, I wake up…


From: Pokaroo! | Registered: May 2001  |  IP: Logged
clockwork
rabble-rouser
Babbler # 690

posted 18 May 2003 12:57 AM
Err.. so I read your link, Mandos... and, um...

Too technical for me.


From: Pokaroo! | Registered: May 2001  |  IP: Logged
batz
rabble-rouser
Babbler # 3824

posted 21 May 2003 12:30 AM
"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

I don't think there is an adequate understanding of what "human intelligence" is to measure "superhuman" against.

Even if we have an interface that allows us to send instructions to a machine just by "thinking", or with minimal abstraction between squishy neurological synapse firing and calculation, how would this be different from controlling a hammer, a car or a fork with your mind?

To borrow from Arthur C. Clarke, it seems that computers are sufficiently advanced that most people can't distinguish their operation from magic. Just because most people don't understand computers doesn't make them (computers) intelligent, despite what some people think a Turing test ascertains.

The assumption inherent in most models of AI is that there is a consistent computational model of consciousness just waiting to be inevitably uncovered. That is to say, that our entire consciousness can be reduced to and accurately modelled as a set of binary operations.

That really smart people (including some cognitive scientists) believe in a strictly computational model of mind shows more about the limitations of our ability to perceive things than it does about the possibility of replicating it.

I think that we can probably model most things we can imagine computationally, but I would be willing to bet that the more interesting problem is: how can we model things that are not consistently representable computationally?

Some people say "building faster computers",
but doing the wrong thing faster or better doesn't make it right.


From: elsewhere | Registered: Mar 2003  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 21 May 2003 01:02 AM
Holy can of worms, batz-man. I'm an AI researcher (computational linguistics, really) who thinks a strictly computational theory of mind is still quite a reasonable thing to be looking for. You may not agree, but I do resent your presupposition that it's an unreasonable claim. Now watch as I get sucked into the usual time-wasting debate.


Edited to add: do I smell Searle in the air? I hope not. How do you know Searle's head is not a Chinese Room?

[ 21 May 2003: Message edited by: Mandos ]


From: There, there. | Registered: Jun 2001  |  IP: Logged
DrConway
rabble-rouser
Babbler # 490

posted 21 May 2003 01:38 AM
I actually read that page about a year ago or so, and to be honest I thought then and think now that it was quite a bit over-the-top. However, I will make a note to re-read it and in light of comments here, edit this post.
From: You shall not side with the great against the powerless. | Registered: May 2001  |  IP: Logged
clockwork
rabble-rouser
Babbler # 690

posted 21 May 2003 03:31 AM
quote:
However, I will make a note to re-read it and in light of comments here, edit this post.

Are you sure?… I've been waiting for a follow-up comment in the "meaning of life" thread, but it never came.

There are only about 3 or 4 people from whom certain threads attract intelligent comments… I hope I'm one of them, but I usually am not. You may say that here, but you said that in my thread too.

I guess my point is: placeholders are fine, just remember you held that place.


From: Pokaroo! | Registered: May 2001  |  IP: Logged
DrConway
rabble-rouser
Babbler # 490

posted 21 May 2003 04:59 AM
Well, excuuuuuuuuse me for being in the larval stage of an absent-minded professor.
From: You shall not side with the great against the powerless. | Registered: May 2001  |  IP: Logged
Jimmy Brogan
rabble-rouser
Babbler # 3290

posted 21 May 2003 08:58 AM
batz:

I think these Arthur Clarke quotes are more cogent to the discussion:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.

In other words, it's usually a mug's game to ascribe limits to what is technologically feasible.

[ 21 May 2003: Message edited by: JimmyBrogan ]


From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 21 May 2003 10:26 AM
Well, if I am interpreting batz correctly (hopefully not putting words in his/her mouth), it is a philosophical claim that consciousness is not knowable in formal terms, which would, of course, be required for a computational theory of mind. Maybe I haven't looked hard enough, but I haven't yet met a convincing argument that this is so--it all relies on particular notions about subjectivity that I'm not sure I share.
From: There, there. | Registered: Jun 2001  |  IP: Logged
WingNut
rabble-rouser
Babbler # 1292

posted 21 May 2003 10:38 AM
Batz has an interesting argument that is quite philosophical, and historically so. I seem to remember reading, some time ago, an argument about whether God is an engineer or a mechanic. I think that debate is similar in nature to what Batz is suggesting here.

I like his reference to the hammer.

But this is what it comes down to isn't it? Are computers intelligent because they can choose from a number of preprogrammed options based on a number of selected scenarios?

The real issue is: can they think the impossible?

Can they dream, the impossible dream? Reach the unreachable star? Right the unrightable wrong? ... sorry.


From: Out There | Registered: Aug 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 21 May 2003 10:41 AM
The problem is, are we intelligent because we choose from a set of preprogrammed scenarios? Creativity, alas, does not come from nowhere. I do not believe it is unconstrained. If it is constrained, then there is a heuristic. It is only one more step to "can be expressed in formal terms."
From: There, there. | Registered: Jun 2001  |  IP: Logged
batz
rabble-rouser
Babbler # 3824

posted 22 May 2003 12:18 AM
I haven't read Searle. I've just dabbled in reading some Hofstadter and Penrose, with some Foucault thrown in for good measure.

Anyway, yes, I am saying that faith in a strictly computational model of mind is ridiculous, precisely because the very very best you could hope for is that it is internally consistent.

Faith in any model of determinism seems silly, since if it were a consistent deterministic model that was externally consistent with the rest of the universe, you wouldn't need faith, would you?

A tautology or solipsism, maybe, but an important one nevertheless.

As far as technologically feasible goes, what is to say that our machines and networks aren't intelligent now? I can see how someone might have problems with notions of subjectivity (relativism in general), but it becomes a very important factor when you are talking about who decides what is or isn't intelligent.

Further, which is intelligent? The software? The hardware? Some kind of magic gestalt of one operating upon the other? I am all for gestalts, but I wonder if such a relationship would also apply to musical notation and the instrument it is played upon.

I would be willing to say that listening to Glenn Gould do the Goldberg Variations is communing with a disembodied but sentient "intelligence", but I'm not sure it would be consistent with any accepted deterministic scientific models of mind.

I think that consciousness falls into the category of things that, as soon as you bound their definition within a formal system, you find yourself being right, but about the wrong thing.

God is neither an engineer nor a mechanic. There was a great comment from one of the astronomers at the Vatican about how God is Love, and therefore the notion of scientific discoveries undermining the possibility of God's existence is silly. Finding the edges of the universe and the smallest quanta of matter won't harm humanity's experience of Love (and other things) any more than writing software could.

We can express pretty much anything formally, but I suspect that the more precisely and formally we express it, the less important/valuable/externally consistent/true it is.

This is a running aesthetic theme with Gödel, Heisenberg, Shannon, Mandelbrot, Bach and Escher, among others. Maybe the reason we appreciate their work so much is that, by stretching the bounds of our comprehension of their fields, they also showed us the negative space created by the ultimate limitations of their mode of inquiry.

It's a question of the effect of emphasis on our perspective, which is pretty interesting.

There is a cool book called The Philosophical Computer, which I half-read a while ago, that deals with a lot of this stuff.

(For the record, I am not an AI researcher (IANAAIR?), a statistician, or educated enough to engage anyone on the subtleties of Durkheim vs. Derrida vs. Minsky vs. Keanu Reeves. I'm really just a hacker working a night shift, trying to stay out of trouble.)

[ 22 May 2003: Message edited by: batz ]



From: elsewhere | Registered: Mar 2003  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 22 May 2003 12:07 PM
quote:
Anyway, yes, I am saying that faith in a strictly computational model of mind is ridiculous, precisely because the very very best you could hope for is that it is internally consistent.

Faith in any model of determinism seems silly, since if it were a consistent deterministic model that was externally consistent with the rest of the universe, you wouldn't need faith, would you?


You assume that a computational theory of mind relies on determinism. However, in the theory of computation, nondeterminism plays a major role: consider "nondeterministic Turing machines," "nondeterministic pushdown automata," and so on.

In any case, your claim about a "consistent deterministic model" vs. "faith" makes no sense to me. Just because a model is deterministic doesn't mean I know it.

Is the mind deterministic? Probably not entirely. There are very likely nondeterministic choice points that would be satisfied by some form of randomness. However, can the mind be convincingly represented in some formal language? That's a completely different question.
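The distinction Mandos is drawing is easy to demonstrate in code: nondeterminism in the theory-of-computation sense is not randomness. A nondeterministic finite automaton (NFA) may have several legal moves at once and accepts a string if any sequence of choices works; a deterministic program can simulate it by tracking the whole set of states the machine could be in. The machine below is my own invented example, not from the thread:

```python
# NFA over {0,1} that accepts strings whose second-to-last symbol is 1.
# Missing (state, symbol) entries mean that branch of the computation dies.

delta = {
    ('q0', '0'): {'q0'},
    ('q0', '1'): {'q0', 'q1'},   # the nondeterministic choice point
    ('q1', '0'): {'q2'},
    ('q1', '1'): {'q2'},
}

def accepts(s, start='q0', final=frozenset({'q2'})):
    """Deterministic simulation: follow every possible branch at once."""
    states = {start}
    for ch in s:
        states = set().union(*(delta.get((q, ch), set()) for q in states))
    return bool(states & final)          # accept if ANY branch survives

print(accepts('0110'))   # True: second-to-last symbol is 1
print(accepts('0101'))   # False
```

No coin is flipped anywhere; "nondeterministic" here just means the machine is allowed to be in several states simultaneously, and acceptance quantifies over the possibilities.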

quote:

As far as technologically feasable goes, what is to say that our machines and networks aren't intelligent now? I can see how someone might have problems with notions of subjectivity (relativism in general), but it becomes a very important factor when you are talking about who decides what is or isn't intelligent.


This claim is so general as to be trivial. It can be raised against any scientific endeavour, formal or empirical. I mean...
quote:
Further, which is intelligent? The software? The hardware? Some kind of magic gestalt of one operating upon the other? I am all for gestalts, but I wonder if such a relationship would also apply to musical notation and the instrument it is played upon.
...how do I know that you are intelligent? Is it your software? Your hardware? If I saw you, I could possibly say, "you appear to have a human-like biology." But I haven't even met you. Putting it this way is unanswerable for humans, and therefore should not be a barrier to an AI claim, because it is equally unanswerable. In reality, it is only our experience communicating with the system that can be used to decide the question. So yes, it is subjective, but trivially so.
quote:
I would be willing to say that listening to Glen Gould do the Goldberg variations is communing with a disembodied but sentient "intelligence" but I'm not sure it would be consistent with any accepted deterministic scientific models of mind.
I am 100% willing to accept that a gestalt can be classed as intelligent if the gestalt exhibited it. The fallacy which has entrapped you is a desire to "locate" intelligence in a physical object. The reason why I was smelling the nefarious influence of Searle is that he makes precisely this argument: that a contrived gestalt exhibiting human-like sentient properties cannot be classed as intelligent because it is not physically located in any natural vessel (the Chinese Room Argument). Why so? How do I know that you aren't a gestalt? Why should I care? Seems like a rather arbitrary restriction to me!

[ 22 May 2003: Message edited by: Mandos ]


From: There, there. | Registered: Jun 2001  |  IP: Logged
WingNut
rabble-rouser
Babbler # 1292

posted 22 May 2003 12:27 PM
quote:
Originally posted by Mandos:
The problem is, are we intelligent because we choose from a set of preprogrammed scenarios? Creativity, alas, does not come from nowhere. I do not believe it is unconstrained. If it is constrained, then there is a heuristic. It is only one more step to "can be expressed in formal terms."

Do we? To some extent we do. But despite the "programmed scenarios" we fly. We communicate instantly across the world. We debate the possibility of what was seemingly impossible. If creativity is constrained, what constrains it?

From: Out There | Registered: Aug 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 22 May 2003 12:32 PM
This is the very question itself, of course, and not just for creativity. We know, again, from the theory of computation that it is possible for a highly constrained, simple system to produce an infinite variety of output. So in effect we are trying to reverse engineer the system.
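A tiny recursive grammar makes the point concrete: the system below is fully specified by a handful of rules, yet because one rule mentions itself, its output set is infinite. (The grammar is my own toy example.)

```python
import random

# A highly constrained system with unbounded output: a context-free
# grammar with one recursive rule. Nonterminals map to lists of
# alternative expansions; anything not in the table is a terminal.

grammar = {
    'S':  [['NP', 'VP']],
    'NP': [['the machine'], ['the human'], ['NP', 'that sees', 'NP']],  # recursive
    'VP': [['thinks'], ['dreams']],
}

def generate(symbol='S'):
    """Expand a symbol by picking one of its alternatives at random."""
    if symbol not in grammar:
        return symbol            # terminal: emit the words themselves
    return ' '.join(generate(s) for s in random.choice(grammar[symbol]))

random.seed(1)
for _ in range(5):
    print(generate())
```

The recursive NP rule can nest arbitrarily deep ("the machine that sees the human that sees the machine … thinks"), so no finite list could enumerate everything the grammar licenses; reverse-engineering the rule table from the sentences alone is the hard direction, which is Mandos's point.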

For instance, can you visualize a four-dimensional object? We have abstract, formal ways of characterizing n-dimensional systems that may indeed exist in nature. But we cannot perceive their physical manifestation in more than three dimensions. So we have at least one limit to our creativity right there.


From: There, there. | Registered: Jun 2001  |  IP: Logged
WingNut
rabble-rouser
Babbler # 1292

posted 22 May 2003 12:36 PM
I don't know what you are talking about. Should you trace my IP you will discover I am communicating from 4thdimension.com.
From: Out There | Registered: Aug 2001  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 22 May 2003 12:49 PM
URL doesn't work for me, Wingy. In any case, funny funny funny
From: There, there. | Registered: Jun 2001  |  IP: Logged
batz
rabble-rouser
Babbler # 3824

posted 22 May 2003 07:04 PM
I read a brief write-up on Searle, and there was an interesting comment about how he thought that intelligence was an emergent phenomenon.

From a link: "Instead, Searle argues that the relation between consciousness and its causal brain processes involves a kind of non-event causation such as would explain the fact that gravity (a non-event) causes an object to exert pressure on an underlying surface. Searle has put the point another way by describing consciousness as an emergent property of brain processes in the same sense that water's liquidity is an emergent property of the behavior of H2O molecules."

That seems useful. He had another principle about consciousness being "irreducible", which sounds pretty good.

How deterministic a system is is relative anyway, similarly to the way that randomness is best measured by its sufficiency. A good example of how the behaviour of seemingly soulful things like people is deterministically bounded is the cryptographers' saying that "people are a poor source of entropy".

Writing an algorithm that generates sufficiently random data may qualify as superhumanly random, but it is probably reasonable to assume that it isn't intelligent, even if it develops emergent properties.
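The cryptographers' complaint can be made concrete with a quick empirical estimate of Shannon entropy. A minimal sketch (the sample string and the comparison are my own illustration, not from the thread):

```python
from collections import Counter
from math import log2

# A human "typing at random" leans on a few favourite keys in patterns,
# so the empirical entropy per keystroke is far below what a uniform
# source over the same keyboard would deliver.

def entropy_per_symbol(s):
    """Empirical Shannon entropy of a string, in bits per symbol."""
    n = len(s)
    return -sum(c / n * log2(c / n) for c in Counter(s).values())

mashing = "asdfasdfasdfasdfasdfasdfasdf"   # 4 keys, evenly reused
print(entropy_per_symbol(mashing))         # 2.0 bits per symbol
print(log2(26))                            # ~4.70 bits for a uniform a-z source
```

Even this flatters the human: a frequency count ignores the ordering, and the rigid asdf-asdf pattern means the true per-keystroke entropy is lower still once the predictability of the sequence is accounted for.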

I am willing to admit to adding arbitrary restrictions on the definition of intelligence, but only because I see computationalism as requiring equally arbitrary restrictions of a different sort to be consistent.

I don't think I have fallen into the "location" fallacy, though I would speculate that computationalism may substitute "when" with "where" and say it is free of this fallacy.

As for visualizing something in 4 dimensions, isn't that really just showing how it changes over time? 5-D is when you really have to start visually compressing things, which is interesting when posited against Searle's notion of irreducibility. Algebraically we can express N dimensions, but the jury is still out on whether expressing them makes them "real". This handily complements our discussion here, in that expressing intelligence and being intelligent are probably different things.

I thought a few years ago of researching SI, which is Superficial Intelligence: things that seem intelligent but really aren't. It was going to involve going to Future Bakery, asking people about these very things, and measuring the length of their soliloquies. Takes one to know one, I suppose.

The argument from here generally goes on to "well do you have anything better?" to which I reply, "That isn't my burden, it's just worth noting that AI could learn a bit from some other cultural critical discourses, as they could provide some perspective on metrics for success in the field."

My comments weren't _really_ challenging the internal consistency of the theories of AI (despite the validity of such challenges), they were to pose the question of why AI theories aren't externally consistent with some other critical perspectives. Not that they have to be, but it would be interesting to know why they aren't.


From: elsewhere | Registered: Mar 2003  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 23 May 2003 11:01 AM      Profile for Mandos   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
quote:
From a link: "Instead, Searle argues that the relation between consciousness and its causal brain processes involves a kind of non-event causation such as would explain the fact that gravity (a non-event) causes an object to exert pressure on an underlying surface. Searle has put the point another way by describing consciousness as an emergent property of brain processes in the same sense that water's liquidity is an emergent property of the behavior of H2O molecules."
Well, it is certainly possible to analyze and model what it is about H2O molecules that causes water's liquidity. Haven't chemists been doing this for a while?

Describing something as an emergent property does not itself indicate that the property does not have a reality of its own, or that there is only one way that it can emerge. But this is not the whole of Searle's argument.

Searle has, apparently, changed his mind about AI as technology has progressed. However, as I understand it, he believes that it will be achieved by copying the meat. (Hey, is Sisyphus reading this?) I, however, think that the emergent properties of the mind have a reality of their own and can be modeled in another context. Perhaps the correct position is somewhere in between the two, but this is an empirical question.

quote:
Writing an algorithm that generates sufficiently random data may qualify as super-humanly random, but it is probably reasonable to assume that it isn't intelligent, even if it develops emergent properties.

I am willing to admit to adding arbitrary restrictions on the definition of intelligence, but only because I see computationalism as requiring equally arbitrary restrictions of a different sort to be consistent.


Yes, the restriction is arbitrary. "Computationalism," on the other hand, does not suffer from this egregious level of arbitrary restriction, because it generally doesn't say that something cannot be done in a certain way, which is to me an extremely dangerous proposition to live by.
quote:
I don't think I have fallen into the "location" fallacy, though I would speculate that computationalism may substitute "when" with "where" and say it is free of this fallacy.
Do you mean "where" with "when"? In either case, this is unclear. How would you perform that substitution and still continue to make sense?
quote:
As for visualizing something in 4 dimensions, isn't that really just showing how it changes over time? 5-D is when you really have to start visually compressing things, which is interesting when posited against Searle's notion of irreducibility. Algebraically we can express N dimensions, but the jury is still out on whether expressing them makes them "real". This handily complements our discussion here, in that expressing intelligence and being intelligent are probably different things.
I meant four spatial dimensions. Time, AFAIK, is not a spatial dimension.

I find that the idea that something with a "real-world" effect is "irreducible" is a destructive proposition. What are we supposed to do with that claim? It smells obscurantist to me.

Your last claim is once again an effective restatement of the "fallacy of location." How is it that "expressing intelligence" and "being intelligent" are two different things? The only way you could claim that is if you have located intelligence, and claimed a priori that it can arise nowhere else. So what happens if an alien comes up and talks to you...how do you determine whether it is "expressing intelligence" or whether it is "being intelligent"? Heck, how do I know that about you? I can just claim that you are "expressing intelligence."

I saw an amusing illustration of this point a few months ago on USENET. Suppose a pharmaceutical company came up with an extra-strength hospital painkiller with the following warning:

quote:
After taking the recommended dosage, the patient will likely continue to complain of extreme discomfort. Other tests may also confirm the apparent distress.
The medical staff should consider this only to be apparent pain: there can be no real pain after taking this medication. All expressions and indications of pain are actually side-effects of the painkilling agents in this medication, and not to be confused with actual pain.
See what I mean? How can apparent intelligence be superficial if all the signs are there?
quote:
The argument from here generally goes on to "well do you have anything better?" to which I reply, "That isn't my burden, it's just worth noting that AI could learn a bit from some other cultural critical discourses, as they could provide some perspective on metrics for success in the field."

My comments weren't _really_ challenging the internal consistency of the theories of AI (despite the validity of such challenges), they were to pose the question of why AI theories aren't externally consistent with some other critical perspectives. Not that they have to be, but it would be interesting to know why they aren't.


This is a question I have long pondered, and I firmly place the blame on the other camp. The current underlying philosophies of cultural studies often appear to militate against a scientific and mathematical study of the mind, especially judging by the way that certain babblers-who-may-not-be-reading-this-thread, but who are involved in culture and literature, seem to think. Until a reductionist perspective can be (re)established, there isn't much we can do.

For example, I used to discuss biological determinism and sexual behaviour in cultural contexts here quite a bit, but the perspectives on the issue were so incompatible and the rejection was so passionate that I've mostly decided that there is no point in trying. I think the underlying problem was I was trying to dissect culture as a natural object, and people who study culture, literature, and so on have no inclination to accept that kind of analysis. How this creates a difficulty in reconciling classical AI theories with theories of culture should be obvious.

[ 23 May 2003: Message edited by: Mandos ]


From: There, there. | Registered: Jun 2001  |  IP: Logged
DrConway
rabble-rouser
Babbler # 490

posted 23 May 2003 10:06 PM      Profile for DrConway     Send New Private Message      Edit/Delete Post  Reply With Quote 
Well, since clocko got on my case about it, I resolved to sit down with this thread.

By the way, it is perfectly OK to private message me and tell me if you think I'm being a snot, you know.

Ok. My AI research knowledge is a bit fuzzy. However, having said that, I think Vinge's over-the-top-ness comes from his assumption of an exponential increase in machine computing power, without limit.

There are, however, practical limits (there is of course no theoretical limit save that fixed by the Uncertainty Principle). One of them is how small you can make computer chips. The other is how big you can make a computer. I think this will tend to delay the Singularity (to use Vinge's term) or to spread it out over a more manageable time frame than his conception of an ever-compressing interval between increments of machine intelligence improvements.

Where I stand on AI is that it will, some day, be possible to design truly intelligent and self-aware robots. I have a somewhat selfish reason for advocating this as well as an altruistic one: I want a robot to do all my scut work for me. I also want robots to free humans from the yoke of physical labor.

Alvin Toffler has also written of the acceleration of things that happen in societies, but where I differ from Toffler and Vinge is that I don't think humans will accept an ever-accelerating pace of change. We are psychologically resistant to some forces of change; one example is the continuing lack of desire to pay on a piecewise basis for things that people used to pay for up front.

quote:
Instead, in the early '00s we would find our hardware performance curves beginning to level off -- this because of our inability to automate the design work needed to support further hardware improvements. We'd end up with some _very_ powerful hardware, but without the ability to push it further.

Back to Vinge.

In some ways this scenario he outlines is happening.

People who use personal computers have reached a kind of consensus that they don't really "need" a better than 2.5 GHz computer. They feel they don't "need" more than 512 megs of RAM. Et cetera.

The primary driver of computing power increases among the general population (which, I assume, makes up part of the Vingean requirement for the Singularity to form) is whether or not people need to put the brute force of extra computing speed behind something. If they don't, they don't.

A sidebar on cultural analysis follows, though someone more schooled than I am in the social sciences will have to tell me the linkage between it and singularities.

The basic problem, I feel, with applying an overly-deterministic model to cultures is that there is (a) an inherent psychological resistance to the idea that humans are so hard-wired that not only do they act and react based on instinct at the individual level, but also at the cultural level, and (b) the fact that the sheer number of human beings in the average culture requires a way of handling cultures more akin to the analysis of gas molecules: Using statistical techniques, not billiard-ball techniques.
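The gas-molecule point can be sketched in a few lines: model individuals as noisy, unpredictable actors and watch a stable population-level rate emerge anyway. The 30% "action" probability below is an arbitrary made-up parameter, not a claim about any real culture.

```python
# Individual choices modelled as independent weighted coin flips are
# unpredictable one by one, yet the aggregate rate is stable and easy
# to characterise statistically - the "gas molecule" style of analysis.
import random

rng = random.Random(0)

def individual_acts(p, n):
    """n independent individuals, each 'acting' with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

small_group = individual_acts(0.3, 10)
large_group = individual_acts(0.3, 100_000)

print(sum(small_group) / len(small_group))   # noisy: may be far from 0.3
print(sum(large_group) / len(large_group))   # close to 0.3
```

No billiard-ball prediction of any single member is attempted; only the distribution is tractable, which is the statistical-mechanics analogy in miniature.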


From: You shall not side with the great against the powerless. | Registered: May 2001  |  IP: Logged
batz
rabble-rouser
Babbler # 3824

posted 23 May 2003 10:34 PM      Profile for batz     Send New Private Message      Edit/Delete Post  Reply With Quote 
I'll let the rest of it go, as it can be summed up here:

Quoth Mandos:



For example, I used to discuss biological determinism and sexual behaviour in cultural contexts here quite a bit, but the perspectives on the issue were so incompatible and the rejection was so passionate that I've mostly decided that there is no point in trying. I think the underlying problem was I was trying to dissect culture as a natural object, and people who study culture, literature, and so on have no inclination to accept that kind of analysis. How this creates a difficulty in reconciling classical AI theories with theories of culture should be obvious.


You'd think that one side would back down, incorporate or co-opt the other. The alleged militancy on the part of critical theorists against scientistic analysis is really just a product of the same ignorance that makes many engineers objectivist Randroids.

The solution is to attempt to reconcile the external inconsistencies, and assess whether they are indicative of internal ones. An example would be how any good scientist will tell you that they are in the business of collecting evidence, and that Truth is for zealots. Similarly, feminism has railed against the poor science behind things like assumptions of a biologically determined patriarchy in Nature.

There are underlying assumptions that cultural criticism can go a long way toward providing broader perspective on, especially regarding why things behave as they do. Critical discourses cause a lot of people to get their backs up, and that is a reasonable explanation for why many people are reluctant to bother reconciling those inconsistencies.


From: elsewhere | Registered: Mar 2003  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 26 May 2003 03:00 PM      Profile for Mandos   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
I think the problem is deeper than that. I think it goes as far down as epistemology. To use a term from another babbler-I-shall-not-name, I am a One Epistemology Bigot. Cultural studies, etc., don't seem very happy with these assumptions, declaring that they emerge from something Very Evil called "analytic philosophy." It is evil because it is "phallogocentric" or something like that, because logic has to do with penises...

So there appears to be a difficulty reconciling certain philosophical assumptions at a fundamental level. Classical AI generally assumes logic to be a natural and not a "cultural" object. How can a reconciliation take place without divesting these fundamental assumptions?

I am not claiming that the humanities and the social sciences are useless for the study of AI. Quite the contrary. The various traditions of linguistics have a lot to say--because many of them start on a mathematical basis. But a lot of other areas, well, don't. For the reasons I've mentioned.

quote:
The solution is to attempt to reconcile the external inconsistencies, and assess whether they are indicative of internal ones. An example would be how any good scientist will tell you that they are in the business of collecting evidence, and that Truth is for zealots.
I'm not sure this statement really means anything. Evidence is Truth. If there is no Truth, there is no evidence.

But I do realize that you likely mean some kind of overarching claim of absolute knowledge. It is true that the natural sciences cannot afford this. But those of us retrograde rationalists who approach these questions from a formal and mathematical perspective must be able to make Truth claims. And natural scientists too must have some kind of absolute epistemological basis.

quote:
Similarly, feminism has railed against the poor science behind things like assumptions of a biologically determined patriarchy in Nature.
Well, this is a more complicated issue. "Biologically determined patriarchy" elides a lot of issues. I think we can reason out a decent explanation for the historical ubiquity and prevalence of patriarchy based on some simple, obvious biological facts. This has no bearing on whether patriarchy is inevitable in the future, if present conditions continue to exist. However, my attempts to do so on babble usually result in a peculiar series of demands...but I won't get into that here.

From: There, there. | Registered: Jun 2001  |  IP: Logged
Jimmy Brogan
rabble-rouser
Babbler # 3290

posted 27 May 2003 02:16 PM      Profile for Jimmy Brogan   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
Meanwhile . . . the singularity looms closer

quote:
To date, computers have used the binary bit -- represented by either a one or zero -- as their fundamental unit of information. In a quantum computer, the fundamental unit is a quantum bit, or qubit. Because qubits can have more than two states, calculations that would take a supercomputer years to finish will take a quantum computer mere seconds.

Due to the complexities of quantum dynamics, electrons can serve as qubits. They can exist in "up" and "down" states -- single points that are analogous to the ones and zeroes in classical computers -- or in "superposition" states, which are not single points but patterns of probability that exist in several places at once.
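The qubit description above can be illustrated with a toy state-vector sketch. This is a deliberately simplified single-qubit model (a complex 2-vector plus the Born rule), not a description of how any real quantum hardware works.

```python
# A qubit's state is a complex 2-vector; "superposition" means both
# amplitudes are nonzero at once, and measurement probabilities come
# from the squared magnitudes (the Born rule).
import math

up = (1 + 0j, 0 + 0j)    # |0>, the "up" state
down = (0 + 0j, 1 + 0j)  # |1>, the "down" state

def superpose(a, b):
    """Normalised state a|0> + b|1>."""
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return (a / norm, b / norm)

def measure_probs(state):
    """Born rule: probability of reading 0 or of reading 1."""
    return (abs(state[0]) ** 2, abs(state[1]) ** 2)

plus = superpose(1, 1)       # an equal superposition
print(measure_probs(plus))   # about 0.5 each: genuinely both at once
print(measure_probs(up))     # (1.0, 0.0): behaves like a classical bit
```

The "more than two states" in the quote is exactly the continuum of amplitude pairs between `up` and `down`; the speed claims depend on entangling many such qubits, which this single-qubit toy does not show.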



From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002  |  IP: Logged
batz
rabble-rouser
Babbler # 3824

posted 29 May 2003 02:58 PM      Profile for batz     Send New Private Message      Edit/Delete Post  Reply With Quote 
*shrug*
From: elsewhere | Registered: Mar 2003  |  IP: Logged
Mandos
rabble-rouser
Babbler # 888

posted 29 May 2003 03:02 PM      Profile for Mandos   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
What? That's all? I DEMAND you give me more of a fight...just kidding
From: There, there. | Registered: Jun 2001  |  IP: Logged
Jimmy Brogan
rabble-rouser
Babbler # 3290

posted 11 August 2003 04:33 PM      Profile for Jimmy Brogan   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
...and closer

quote:
Theoretical physicists at Stanford and the University of Tokyo think they've found a way to solve the dissipation problem by manipulating a neglected property of the electron - its ''spin,'' or orientation, typically described by its quantum state as ''up'' or ''down.'' They report their findings in the Aug. 7 issue of Science Express, an online version of Science magazine. Electronics relies on Ohm's Law, which says application of a voltage to many materials results in the creation of a current. That's because electrons transmit their charge through the materials. But Ohm's Law also describes the inevitable conversion of electric energy into heat when electrons encounter resistance as they pass through materials.

''We have discovered the equivalent of a new 'Ohm's Law' for spintronics - the emerging science of manipulating the spin of electrons for useful purposes,'' says Shoucheng Zhang, a physics professor at Stanford. Professor Naoto Nagaosa of the University of Tokyo and his research assistant, Shuichi Murakami, are Zhang's co-authors. ''Unlike the Ohm's Law for electronics, the new 'Ohm's Law' that we've discovered says that the spin of the electron can be transported without any loss of energy, or dissipation. Furthermore, this effect occurs at room temperature in materials already widely used in the semiconductor industry, such as gallium arsenide. That's important because it could enable a new generation of computing devices.''


This seems like a bit of a breakthrough. They're taking advantage of a wonderful natural freebie.
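For scale, the dissipation the article says spintronics avoids is ordinary Joule heating from Ohm's law. The current and resistance values below are invented for illustration, not taken from the article.

```python
# In an ordinary conductor, Ohm's law (V = I*R) implies that power
# P = I^2 * R is lost as heat. A dissipationless spin current would
# avoid exactly this term.
def joule_heating(current_amps, resistance_ohms):
    """Power dissipated as heat in a resistive conductor, in watts."""
    return current_amps ** 2 * resistance_ohms

# A hypothetical on-chip interconnect: 1 mA through 100 ohms.
print(joule_heating(1e-3, 100.0))  # about 1e-4 W lost as heat, per wire

# The claimed spintronic analogue transports spin with no I^2*R loss,
# so the corresponding figure is zero by construction.
print(0.0)
```

Multiplied across the millions of interconnects on a chip, that per-wire loss is why heat is a binding constraint on conventional scaling.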

[ 11 August 2003: Message edited by: JimmyBrogan ]


From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002  |  IP: Logged
CanadianAlien
rabble-rouser
Babbler # 1219

posted 15 August 2003 01:52 PM      Profile for CanadianAlien   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
This kind of discussion always seems to get 'philosophical'. But philosophy is a sub-set of our physical structure, which is a sub-set of dynamic, self-organizing systems evolution.

I think time-frame and path to a 'singularity' type event are the only unknowns. A continued evolution of 'intelligence' or self-organization will occur.

Chemical soups self-organized into organic molecular structures, then cells, then multicellular organisms. Complex, unfathomable, but apparently inevitable.

We are here in all our self-aware, complex, multi-trillion-cell structural splendor. Billions of years ago there was no life on Earth. 250 million years ago, most of the life then present went extinct. 80 million years ago that happened again. It has probably happened many other times, to various degrees of 'setback'. Each time, however, living systems relentlessly continued evolving into more complex organizations.

To argue whether Moore's law or some other technological bottleneck will cap this evolution is moot.

Argument about the fine points of the evolution of non-biological entities, assisted by our own creations, is moot in the big picture too. Did a philosophical Australopithecus afarensis hominid, alive 4 million years ago, ponder the horror of some superhuman superseding her and her kind? Maybe. But we have some of her in us, as will whatever we create. Maybe we can even bootstrap ourselves into that future by morphing ourselves into a computational substrate. Bring it on.

www.kurzweil.net has comprehensive coverage on this theme.


From: Toronto | Registered: Aug 2001  |  IP: Logged
cynic
rabble-rouser
Babbler # 2857

posted 15 August 2003 02:12 PM      Profile for cynic     Send New Private Message      Edit/Delete Post  Reply With Quote 
So what happens to the 90% of humanity that does not have access to 19th century technology when the other 10% "evolves" through this singularity? Do we float out to space, leaving the poisoned earth behind for the Third World to clean up?
From: Calgary, unfortunately | Registered: Jul 2002  |  IP: Logged
CanadianAlien
rabble-rouser
Babbler # 1219

posted 15 August 2003 02:15 PM      Profile for CanadianAlien   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
Exactly.
From: Toronto | Registered: Aug 2001  |  IP: Logged
Foxer
rabble-rouser
Babbler # 4251

posted 15 August 2003 02:31 PM      Profile for Foxer     Send New Private Message      Edit/Delete Post  Reply With Quote 
You may well see divergent evolution, cynic, where two races exist for a time. This has happened before in the history of 'man': Australopithecus afarensis and africanus both existed for a time, as I recall, and so did Neanderthal and sapiens.

It may well be that those without access to the technology die off. It may be that access to the technology becomes easier and easier. Who can say?

It's hard to believe, with bio and techno advances happening as fast as they are, that humans will be as they are today 500 years from now. In 2000 years I suspect we'd have trouble recognizing much of our 'civilization'. You can't hold it back; it's like going down a rapid in a small boat - you have to go with the current and hope you can steer when you see a rock.


From: Vancouver BC | Registered: Jul 2003  |  IP: Logged
CanadianAlien
rabble-rouser
Babbler # 1219

posted 16 August 2003 04:21 PM      Profile for CanadianAlien   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
Cynic, my short previous reply wasn't meant to indicate anything other than agreement. While I would like to be around when technology 'explodes', I believe that the initial use and availability of the benefits will be limited to the world's wealthier people. This is of course already happening: people in the developed West have disproportionately better access to advanced infrastructure – healthcare, drinking water, sanitation, waste disposal, education, government, judiciary, computation, communications, etc. This will undoubtedly be the pattern for more advanced technology's benefits.

The truly awesome aspect to this, though, is that as biotech and infotech allow individuals to expand their capabilities in terms of life span and mental capacities (storage, computational, etc.), it is very likely they will dominate in one way or another. This will be similar to centres of wealth today, e.g. the 450-odd billionaires, but with the potential for dramatically greater wealth and influence. Whether it will be benign, enlightened or otherwise remains to be seen. One indication that it may be somewhat enlightened, or simply benign, is the propensity of the world's wealthy toward philanthropy. They may choose to float all the boats, while keeping their own riding higher and higher.

However, when or if a 'singularity' type event does occur, likely those individuals or entities that transcend will simply vanish as far as we can tell, i.e. any meaningful interaction would cease. Even if they did 'ignore us', they might, for example, choose to perform some experiment on the Earth that has the unfortunate side effect of destroying it. Some might even say that is what is happening now.


From: Toronto | Registered: Aug 2001  |  IP: Logged
Jimmy Brogan
rabble-rouser
Babbler # 3290

posted 18 August 2003 02:10 PM      Profile for Jimmy Brogan   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
Sandia team develops cognitive machines

quote:
“In the long term, the benefits from this effort are expected to include augmenting human effectiveness and embedding these cognitive models into systems like robots and vehicles for better human-hardware interactions,” says John Wagner, manager of Sandia’s Computational Initiatives Department. “We expect to be able to model, simulate and analyze humans and societies of humans for Department of Energy, military and national security applications.”



From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002  |  IP: Logged
CanadianAlien
rabble-rouser
Babbler # 1219

posted 23 August 2003 01:21 AM      Profile for CanadianAlien   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
Technological singularity vs biotech singularity .. manufacture new body parts or manipulate existing ones.

New York Times, August 12, 2003

Genetic medicine is making enormous strides, and it may hold the promise of eventually making us something closer to immortal.

"Our life expectancy will be in the region of 5,000 years" in rich countries in the year 2100, predicts Aubrey de Grey, a scholar at Cambridge University.

this story


From: Toronto | Registered: Aug 2001  |  IP: Logged
DrConway
rabble-rouser
Babbler # 490

posted 23 August 2003 01:31 AM      Profile for DrConway     Send New Private Message      Edit/Delete Post  Reply With Quote 
5 thousand years? Good lord. These people really are reaching, I think.

Who would want to live that long?


From: You shall not side with the great against the powerless. | Registered: May 2001  |  IP: Logged
Tackaberry
rabble-rouser
Babbler # 487

posted 25 August 2003 10:50 AM      Profile for Tackaberry   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
Right now I am weeding my way through The Physics of Immortality by Frank Tipler. From what I can grasp it is an interesting read. He talks about the singularity before trying to prove scientifically that life will be reincarnated at the Omega Point.
Some of his arguments are a little brutal, though. He is a reductionist, taking everything down to physics. His critique of Searle is one of the worst I have ever read.

From: Tokyo | Registered: May 2001  |  IP: Logged
CanadianAlien
rabble-rouser
Babbler # 1219

posted 27 August 2003 11:30 PM      Profile for CanadianAlien   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
The thing with the 5,000 years, or living "forever", is that it isn't as if one person lives that long. It is more like a continuous string of evolving people. Along the way, presumably, each of those contiguous and, at times, quite discrete people does want to continue living.

Barrow and Tipler's argument is interesting. I take it as weak tea, though. Many people theorize that, given sufficient computational power, it is possible to recreate the universe. The part about escaping the end of the universe by merging with all intelligence and becoming a deity is more sci-fi. But heck, why not, eh!

I generally don't have a problem with reductionism. It is really just the other side of the coin, i.e. emotion is the result of the physics of atoms, molecules, cells, brain, body, etc. and their interactions, so why not talk about physics instead of emotion? The problem is that self-organizing, complex systems often display unpredictable behaviour that is "greater than the sum of its parts".


From: Toronto | Registered: Aug 2001  |  IP: Logged
Tackaberry
rabble-rouser
Babbler # 487

posted 28 August 2003 01:09 AM      Profile for Tackaberry   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
How was Tipler's work received by other physicists? Did his extensive calculations in the appendix for scientists hold up?
From: Tokyo | Registered: May 2001  |  IP: Logged
CanadianAlien
rabble-rouser
Babbler # 1219

posted 30 August 2003 11:24 AM      Profile for CanadianAlien   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
I've not seen any peer review of the math, pro or contra. From references by other authors and theorists in this genre, it appears he has some regard; I've never heard any slagging off of his work. But this stuff is all so speculative anyway. Recall Stephen Wolfram, universally regarded as a brilliant man, who created the Mathematica software program. He has also developed a theory that the universe results from a simple algorithm operating through cellular automata. He's not well regarded for that. In the end, though, it makes for good reading!
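For reference, the sort of elementary cellular automaton behind Wolfram's claim can be sketched in a few lines. Rule 30 below is one standard example of a trivially simple rule producing complex behaviour; it is not presented as the specific rule Wolfram proposes for the universe.

```python
# Rule 30: each cell's next value depends only on itself and its two
# neighbours, via an 8-entry lookup table encoded in the number 30.
# From a single live cell, the pattern grows complex and hard to predict.
def rule30_step(cells):
    """One update of Rule 30 on a list of 0/1 cells (dead edges)."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right
        nxt[i] = (30 >> pattern) & 1   # bit 'pattern' of 30 = 0b00011110
    return nxt

row = [0] * 15
row[7] = 1                             # single live cell in the middle
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = rule30_step(row)
```

The whole "algorithm" is the integer 30; everything else is bookkeeping, which is what makes the claim that such rules could underlie physics so provocative.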
From: Toronto | Registered: Aug 2001  |  IP: Logged
Jimmy Brogan
rabble-rouser
Babbler # 3290

posted 02 October 2003 12:07 PM      Profile for Jimmy Brogan   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
Technology still has a long way to go to catch up to human brain

quote:
Salk Researcher Provides New View on How the Brain Functions:

As the brain has evolved over millions of years, according to Sejnowski, it has become amazingly efficient and powerful. He says that nature has "optimized the structure and function of cortical networks with design principles similar to those used in electronic networks." To illustrate the brain's tremendous capacity, Sejnowski and Laughlin state that the potential bandwidth of all of the neurons in the human cortex is "comparable to the total world backbone capacity of the Internet in 2002."

But they point out that simply comparing the brain to the digital computers of today does not adequately describe the way it functions and makes computations. The brain, according to Sejnowski, has more of the hallmarks of an "energy efficient hybrid device."

"These hybrids offer the ability of analog devices to perform arithmetic functions such as division directly and economically, combined with the ability of digital devices to resist noise," he writes in Science.

"This is an important era in our understanding of the brain," according to Sejnowski. "We are moving toward uncovering some of the fundamental principles related to how neurons in the brain communicate. There is a tremendous amount of information distributed throughout the far-flung regions of the brain. Where does it come from? Where does it go? And how does the brain deal with all of this information?



From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002  |  IP: Logged
flotsom
rabble-rouser
Babbler # 2832

posted 08 October 2003 12:05 AM      Profile for flotsom   Author's Homepage     Send New Private Message      Edit/Delete Post  Reply With Quote 
If I remember correctly, it is Constantin Virgil Gheorghiu's The Twenty-fifth Hour that is the must-read on the subject. I haven't read the book, unfortunately. Only an excerpt here, a reference to it there. Hard to find, I think.

I bet Rasmus has read it. Probably has a like new first edition sitting on the shelf.

Mr Raven?

A word on Mr Gheorghiu's book, if you please.

[ 08 October 2003: Message edited by: flotsom ]


From: the flop | Registered: Jul 2002  |  IP: Logged
