Topic: Technological singularity spells doom for mankind
Jimmy Brogan
rabble-rouser
Babbler # 3290
posted 13 May 2003 07:13 PM
I brought up sci-fi author and mathematician Vernor Vinge on another thread, and it got me thinking about his spooky idea of what lies just ahead for the human race: a technological singularity that will be the end of humankind. He can explain it far better than I can.

Vinge on the singularity

quote: Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.
quote: What is The Singularity?

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):

- There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
- Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
- Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
- Biological science may provide means to improve natural human intellect.

The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [20] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale.
The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work -- the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.
I have a hard time finding flaws in his logic. Maybe the time-frame is overly foreshortened.
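Vinge's hardware argument leans on the steady doubling trend in chip capability. A quick back-of-the-envelope sketch shows why an unchecked exponential makes his 2005-2030 window look so dramatic; the 1972 baseline and two-year doubling period here are my own rough illustrative assumptions, not figures from Vinge's paper.

```python
# Naive Moore's-law extrapolation. The baseline and doubling period are
# rough illustrative assumptions, not figures from Vinge's paper.

def transistors(year, base_year=1972, base_count=3500, doubling_years=2.0):
    """Projected transistor count if the doubling trend held unbroken."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1993, 2005, 2030):
    print(year, f"{transistors(year):.2e}")
```

Whether the curve really holds that long, rather than leveling off, is exactly the question other posters raise below.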
More discussion of the singularity
From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002
TommyPaineatWork
rabble-rouser
Babbler # 2956
posted 13 May 2003 11:25 PM
quote: Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
While I agree the time frame might be a little compressed, I think we're on track. It dawned on me when they put an electrode, first into chimpanzees and later into humans, that enabled them to move a cursor on a computer terminal just with their brains. While we look at this and other technologies as computer prosthetics, surely applications as enhancements are only a millisecond behind. I envision having the net, or at least your "favorites", hard-wired into your brain -- making winning at "Trivial Pursuit" more a function of resource selection than esoterica recall, for example.
From: London | Registered: Aug 2002
Rebecca West
rabble-rouser
Babbler # 1873
posted 14 May 2003 04:43 PM
I don't think anyone knows enough about singularity mechanics to be able to say, with any degree of certainty, that digital sentience is a possibility. So far, neither Cycorp, MIT nor any AI think tank you can think of has come up with a machine that could pass a Turing test.

As far as designing super-human beings goes, if we do indeed develop the technology to expand the human life span to the point where the sum of individual knowledge no longer goes down the crapper when we die or fall prey to senile dementia, we might explore the limits of the human mental capacity. We may even enhance it. But there are limits to the biological hardware that we are as yet completely ignorant of. Time will tell, of course.

AI research has been around for quite a while, but bioinformatics is relatively new, and that's where you'll get the technology for a human-digital interface. It's a long way away from the kind of sophistication required to significantly enhance human intelligence.

I think Vinge and others who are interested in a singularity that would create a self-aware machine or a super-intelligent human being have been reading too much Asimov. It's an interesting vision of the future, but really too full of "ifs" and "whens" to be a technological course plotted with a foreseeable outcome.

[ 14 May 2003: Message edited by: Rebecca West ]
From: London , Ontario - homogeneous maximus | Registered: Nov 2001
Rebecca West
rabble-rouser
Babbler # 1873
posted 15 May 2003 11:38 AM
quote: And I doubt computer networks will suddenly "wake up" either, at least not in the sense we understand. There is no body, no driving need to form some sort of goal (what, are they gonna wake up one day and think that the subjugation of humans is a great thing and enslave us all? Why would it wake up to that? Do machines care about money and power? Would a sufficiently complex computer even be able to grasp that?).
It's an interesting philosophical problem, in a way. I mean, if our brains/minds are a kind of organic computer with biochemical algorithms dictating the flow of information, what makes us self-aware? Is a newborn infant self-aware, or does it require more data input for that? It used to be argued that hardware limitations prevented AI from advancing to the point where a machine could be self-aware, that no machine could hold enough information to truly approximate the human sentient experience.

With quantum computing, and a host of other technological innovations, the hardware limitations aren't an issue anymore. But still, even those who're involved in the most sophisticated AI research and development (in the private corporate sector, natch) cannot claim to have produced a self-aware machine yet. I think it has to do with our very new and limited understanding of singularity mechanics, the science of creation. God, if you will.
From: London , Ontario - homogeneous maximus | Registered: Nov 2001
clockwork
rabble-rouser
Babbler # 690
posted 16 May 2003 11:54 PM
My friend, next to me, has a cigarette, breaks a cold sweat, goes purple and loses consciousness. I think, "Oh no, these are all the signs of a heart attack!" So I grab my knife and some PVC tubing that I keep around the house, knowing that with my trusty chip in the back of my head, I know what to do. I must make an incision here below the right ventricle. With luck, I'll insert the tubing and complete a bypass. I make the first cut… "Ewwwwww! Blood!" I pass out, but unlike my friend, I wake up…
From: Pokaroo! | Registered: May 2001
batz
rabble-rouser
Babbler # 3824
posted 21 May 2003 12:30 AM
"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

I don't think there is an adequate understanding of what "human intelligence" is to ascertain what "superhuman" is relative to. Even if we have an interface that allows us to send instructions to a machine just by "thinking", or with minimal abstraction between squishy neurological synapse firing and calculation, how would this be different from controlling a hammer, a car or a fork with your mind?

To borrow from Arthur C. Clarke, it seems that computers are sufficiently advanced that most people can't distinguish their operation from magic. Just because most people don't understand computers doesn't make them (computers) intelligent, despite what some people think a Turing test ascertains.

The assumption inherent in most models of AI is that there is a consistent computational model of consciousness just waiting to be inevitably uncovered. That is to say, that our entire consciousness can be reduced and accurately modelled as a set of binary operations. That really smart people (including some cognitive scientists) believe in a strictly computational model of mind shows more about the limitations of our ability to perceive things than it does about the possibility of replicating it.

I think that we can probably model most things we can imagine computationally, but I would be willing to bet that the more interesting problem is: how can we model things that are not consistently representable computationally? Some people say "building faster computers", but doing the wrong thing faster or better doesn't make it right.
From: elsewhere | Registered: Mar 2003
Jimmy Brogan
rabble-rouser
Babbler # 3290
posted 21 May 2003 08:58 AM
batz: I think these Arthur C. Clarke quotes are more cogent to the discussion:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.

In other words, it's usually a mug's game to ascribe limits to what is technologically feasible.

[ 21 May 2003: Message edited by: JimmyBrogan ]
From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002
WingNut
rabble-rouser
Babbler # 1292
posted 21 May 2003 10:38 AM
Batz has an interesting argument that is quite philosophical, and historically so. I seem to remember reading, some time ago, an argument about whether God is an engineer or a mechanic. I think the debate is similar in nature to what Batz is suggesting here. I like his reference to the hammer. But this is what it comes down to, isn't it? Are computers intelligent because they can choose from a number of preprogrammed options based on a number of selected scenarios? The real issue is: can they think the impossible? Can they dream the impossible dream? Reach the unreachable star? Right the unrightable wrong? ... sorry.
From: Out There | Registered: Aug 2001
batz
rabble-rouser
Babbler # 3824
posted 22 May 2003 12:18 AM
I haven't read Searle. I've just dabbled in reading some Hofstadter and Penrose with some Foucault thrown in for good measure. Anyway, yes, I am saying that faith in a strictly computational model of mind is ridiculous, precisely because the very very best you could hope for is that it is internally consistent. Faith in any model of determinism seems silly: if it were a consistent deterministic model that was externally consistent with the rest of the universe, you wouldn't need faith, would you? A tautology or solipsism, maybe, but an important one nevertheless.

As far as technologically feasible goes, what is to say that our machines and networks aren't intelligent now? I can see how someone might have problems with notions of subjectivity (relativism in general), but it becomes a very important factor when you are talking about who decides what is or isn't intelligent. Further, which is intelligent? The software? The hardware? Some kind of magic gestalt of one operating upon the other? I am all for gestalts, but I wonder if such a relationship would also apply to musical notation and the instrument it is played upon. I would be willing to say that listening to Glenn Gould do the Goldberg Variations is communing with a disembodied but sentient "intelligence", but I'm not sure it would be consistent with any accepted deterministic scientific models of mind.

I think that consciousness falls into the category of things that, as soon as you bound their definition within a formal system, you find yourself being right, but about the wrong thing.

God is neither an engineer nor a mechanic. There was a great comment from one of the astronomers from the Vatican about how God is Love, and therefore the notion of scientific discoveries undermining the possibility of God's existence was silly. Finding the edges of the universe and the smallest quanta of matter won't harm humanity's experience of Love (and other things) any more than writing software could.
We can express pretty much anything formally, but I suspect that the more precisely and formally we express it, the less important/valuable/externally consistent/true it is. This is a running aesthetic theme with Gödel, Heisenberg, Shannon, Mandelbrot, Bach and Escher, among others. Maybe the reason we appreciate their work so much is because by stretching the bounds of our comprehension of their fields, they also showed us the negative space created by the ultimate limitations of their mode of inquiry. It's a question of the effect of emphasis on our perspective, which is pretty interesting. There is a cool book called The Philosophical Computer that I half-read a while ago that deals with a lot of this stuff.

(For the record, I am not an AI researcher (IANAAIR?), a statistician, or educated enough to engage anyone on the subtleties of Durkheim vs. Derrida vs. Minsky vs. Keanu Reeves. I'm really just a hacker working a night shift trying to stay out of trouble.)

[ 22 May 2003: Message edited by: batz ]
From: elsewhere | Registered: Mar 2003
Mandos
rabble-rouser
Babbler # 888
posted 22 May 2003 12:07 PM
quote: Anyway, yes, I am saying that faith in a strictly computational model of mind is ridiculous, precisely because the very very best you could hope for is that it is internally consistent. Faith in any model of determinism seems silly: if it were a consistent deterministic model that was externally consistent with the rest of the universe, you wouldn't need faith, would you?
You assume that a computational theory of mind relies on determinism. However, in the theory of computation, nondeterminism plays a major role. "Nondeterministic Turing Machines" or "Nondeterministic Pushdown Automata," and so on. In any case, your claim about a "consistent deterministic model" vs. "faith" makes no sense to me. Just because a model is deterministic doesn't mean I know it. Is the mind deterministic? Probably not--entirely. There are very likely nondeterministic choice points that would be satisfied by some form of randomness. However, can the mind be convincingly represented in some formal language? That's a completely different question. quote:
As far as technologically feasable goes, what is to say that our machines and networks aren't intelligent now? I can see how someone might have problems with notions of subjectivity (relativism in general), but it becomes a very important factor when you are talking about who decides what is or isn't intelligent.
This claim is so general as to be trivial. It can be raised against any scientific endeavour, formal or empirical. I mean... quote: Further, which is intelligent? The software? The hardware? Some kind of magic gestalt of one operating upon the other? I am all for gestalts, but I wonder if such a relationship would also apply to musical notation and the instrument it is played upon.
...how do I know that you are intelligent? Is it your software? Your hardware? If I saw you, I could possibly say, "you appear to have a human-like biology." But I haven't even met you. Putting it this way makes the question unanswerable for humans, and therefore it should not be a barrier to an AI claim, because it is equally unanswerable in both cases. In reality, it is only our experience communicating with the system that can be used to decide the question. So yes, it is subjective, but trivially so. quote: I would be willing to say that listening to Glenn Gould do the Goldberg Variations is communing with a disembodied but sentient "intelligence", but I'm not sure it would be consistent with any accepted deterministic scientific models of mind.
I am 100% willing to accept that a gestalt can be classed as intelligent if the gestalt exhibited it. The fallacy which has entrapped you is a desire to "locate" intelligence in a physical object. The reason why I was smelling the nefarious influence of Searle is that he makes precisely this argument: that a contrived gestalt exhibiting human-like sentient properties cannot be classed as intelligent because it is not physically located in any natural vessel (the Chinese Room Argument). Why so? How do I know that you aren't a gestalt? Why should I care? Seems like a rather arbitrary restriction to me!

[ 22 May 2003: Message edited by: Mandos ]
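Mandos's point that nondeterminism is a routine, mechanizable part of the theory of computation can be made concrete. The sketch below is a toy example of mine (not anything from the thread): it simulates a nondeterministic finite automaton by tracking every state it could be in at once. This particular machine accepts binary strings whose second-to-last symbol is '1'.

```python
# Toy NFA: accepts binary strings whose second-to-last symbol is '1'.
# Transition table maps (state, symbol) -> set of possible next states.
NFA = {
    ('q0', '0'): {'q0'},
    ('q0', '1'): {'q0', 'q1'},   # nondeterministic choice: stay, or guess
    ('q1', '0'): {'q2'},         # that this '1' is second-to-last
    ('q1', '1'): {'q2'},
}
START, ACCEPT = 'q0', {'q2'}

def accepts(s):
    """Follow all possible runs at once (on-the-fly subset construction)."""
    states = {START}
    for ch in s:
        states = set().union(*(NFA.get((q, ch), set()) for q in states))
    return bool(states & ACCEPT)

print(accepts('0110'))  # True: second-to-last symbol is '1'
print(accepts('0101'))  # False: second-to-last symbol is '0'
```

The machine "guesses" when to jump to q1, yet simulating it is perfectly mechanical -- nondeterministic does not mean unknowable or non-formal.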
From: There, there. | Registered: Jun 2001
batz
rabble-rouser
Babbler # 3824
posted 22 May 2003 07:04 PM
I read a brief write-up on Searle, and there was an interesting comment about how he thought that intelligence was an emergent phenomenon. From a link:

"Instead, Searle argues that the relation between consciousness and its causal brain processes involves a kind of non-event causation such as would explain the fact that gravity (a non-event) causes an object to exert pressure on an underlying surface. Searle has put the point another way by describing consciousness as an emergent property of brain processes in the same sense that water's liquidity is an emergent property of the behavior of H2O molecules."

That seems useful. He had another principle about consciousness being "irreducible", which sounds pretty good.

The deterministic-ness of a system is relative anyway, similarly to the way that randomness is best measured by its sufficiency. A good example of how the behaviour of seemingly soulful things like people is bounded deterministically would be in how cryptographers say that "people are a poor source of entropy". Writing an algorithm that generates sufficiently random data may qualify as superhumanly random, but it is probably reasonable to assume that it isn't intelligent, even if it develops emergent properties.

I am willing to admit to adding arbitrary restrictions on the definition of intelligence, but only because I see computationalism as requiring equally arbitrary restrictions of a different sort to be consistent. I don't think I have fallen into the "location" fallacy, though I would speculate that computationalism may substitute "when" with "where" and say it is free of this fallacy.

As for visualizing something in 4 dimensions, isn't that really just showing how it changes over time? 5-D is when you really have to start visually compressing things, which is interesting when posited against Searle's notion of irreducibility. Algebraically we can express N dimensions, but the jury is still out on whether expressing them makes them "real".
This handily complements our discussion here, in that expressing intelligence and being intelligent are probably different things. I thought a few years ago of researching SI, which is Superficial Intelligence: things that seem intelligent but really aren't. It was going to involve going to Future Bakery and asking people about these very things and measuring the length of their soliloquies. Takes one to know one, I suppose.

The argument from here generally goes on to "well, do you have anything better?", to which I reply, "That isn't my burden; it's just worth noting that AI could learn a bit from some other cultural critical discourses, as they could provide some perspective on metrics for success in the field." My comments weren't _really_ challenging the internal consistency of the theories of AI (despite the validity of such challenges); they were to pose the question of why AI theories aren't externally consistent with some other critical perspectives. Not that they have to be, but it would be interesting to know why they aren't.
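The cryptographers' quip batz cites ("people are a poor source of entropy") can be given a number. A minimal sketch, with made-up sample strings of my own, estimates order-0 Shannon entropy in bits per symbol:

```python
import math
from collections import Counter

def entropy_per_symbol(data):
    """Empirical Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

mashed = "asdfasdfasdfasdf"    # human keyboard-mashing: 4 symbols, repeated
hexrand = "0a3f9c217be485d6"   # 16 distinct hex digits, uniform

print(entropy_per_symbol(mashed))   # 2.0 bits per symbol
print(entropy_per_symbol(hexrand))  # 4.0 bits per symbol
```

Even this flatters the human sample, since the symbol-frequency measure ignores the obvious repeating pattern; actual sequential structure drags the usable entropy lower still, which is the cryptographers' point.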
From: elsewhere | Registered: Mar 2003
Mandos
rabble-rouser
Babbler # 888
posted 23 May 2003 11:01 AM
quote: From a link: "Instead, Searle argues that the relation between consciousness and its causal brain processes involves a kind of non-event causation such as would explain the fact that gravity (a non-event) causes an object to exert pressure on an underlying surface. Searle has put the point another way by describing consciousness as an emergent property of brain processes in the same sense that water's liquidity is an emergent property of the behavior of H2O molecules."
Well, it is certainly possible to analyze and model what it is about H2O molecules that causes water's liquidity. Haven't chemists been doing this for a while? Describing something as an emergent property does not itself indicate that the property does not have a reality of its own, or that there is only one way it can emerge. But this is not the whole of Searle's argument. Searle has, apparently, changed his mind about AI as technology has progressed. However, as I understand it, he believes that it will be achieved by copying the meat. (Hey, is Sisyphus reading this?) I, however, think that the emergent properties of the mind have a reality of their own and can be modeled in another context. Perhaps the correct position is somewhere in between the two, but this is an empirical question.

quote: Writing an algorithm that generates sufficiently random data may qualify as superhumanly random, but it is probably reasonable to assume that it isn't intelligent, even if it develops emergent properties. I am willing to admit to adding arbitrary restrictions on the definition of intelligence, but only because I see computationalism as requiring equally arbitrary restrictions of a different sort to be consistent.
Yes, the restriction is arbitrary. "Computationalism," on the other hand, does not suffer from this egregious level of arbitrary restriction, because it generally doesn't say that something cannot be done in a certain way, which is to me an extremely dangerous proposition to live by. quote: I don't think I have fallen into the "location" fallacy, though I would speculate that computationalism may substitute "when" with "where" and say it is free of this fallacy.
Do you mean "where" with "when"? In either case, this is unclear. How would you perform that substitution and still continue to make sense?

quote: As for visualizing something in 4 dimensions, isn't that really just showing how it changes over time? 5-D is when you really have to start visually compressing things, which is interesting when posited against Searle's notion of irreducibility. Algebraically we can express N dimensions, but the jury is still out on whether expressing them makes them "real". This handily complements our discussion here, in that expressing intelligence and being intelligent are probably different things.
I meant four spatial dimensions. Time, AFAIK, is not a spatial dimension. I find that the idea that something with a "real-world" effect is "irreducible" is a destructive proposition. What are we supposed to do with that claim? It smells obscurantist to me.

Your last claim is once again an effective restatement of the "fallacy of location." How is it that "expressing intelligence" and "being intelligent" are two different things? The only way you could claim that is if you have located intelligence, and claimed a priori that it can arise nowhere else. So what happens if an alien comes up and talks to you... how do you determine whether it is "expressing intelligence" or whether it is "being intelligent"? Heck, how do I know that about you? I can just claim that you are "expressing intelligence."

I saw an amusing illustration of this point a few months ago on USENET. Suppose a pharmaceutical company came up with an extra-strength hospital painkiller with the following warning:

quote: After taking the recommended dosage, the patient will likely continue to complain of extreme discomfort. Other tests may also confirm the apparent distress. The medical staff should consider this only to be apparent pain: there can be no real pain after taking this medication. All expressions and indications of pain are actually side-effects of the painkilling agents in this medication, and not to be confused with actual pain.
See what I mean? How can apparent intelligence be superficial if all the signs are there?

quote: The argument from here generally goes on to "well, do you have anything better?", to which I reply, "That isn't my burden; it's just worth noting that AI could learn a bit from some other cultural critical discourses, as they could provide some perspective on metrics for success in the field." My comments weren't _really_ challenging the internal consistency of the theories of AI (despite the validity of such challenges); they were to pose the question of why AI theories aren't externally consistent with some other critical perspectives. Not that they have to be, but it would be interesting to know why they aren't.
This is a question I have long pondered, and I firmly place the blame on the other camp. The current underlying philosophies of cultural studies often appear to militate against a scientific and mathematical study of the mind, especially judging by the way that certain babblers-who-may-not-be-reading-this-thread but who are involved in culture and literature seem to think. Until a reductionist perspective can be (re)established, there isn't much we can do.

For example, I used to discuss biological determinism and sexual behaviour in cultural contexts here quite a bit, but the perspectives on the issue were so incompatible and the rejection was so passionate that I've mostly decided that there is no point in trying. I think the underlying problem was that I was trying to dissect culture as a natural object, and people who study culture, literature, and so on have no inclination to accept that kind of analysis. How this creates a difficulty in reconciling classical AI theories with theories of culture should be obvious.

[ 23 May 2003: Message edited by: Mandos ]
From: There, there. | Registered: Jun 2001
DrConway
rabble-rouser
Babbler # 490
posted 23 May 2003 10:06 PM
Well, since clocko got on my case about it, I resolved to sit down with this thread. By the way, it is perfectly OK to private message me and tell me if you think I'm being a snot, you know.

OK. My AI research knowledge is a bit fuzzy. Having said that, I think Vinge's over-the-top-ness comes from his assumption of an exponential increase in machine computing power, without limit. There are, however, practical limits (there is of course no theoretical limit save that fixed by the Uncertainty Principle). One of them is how small you can make computer chips. The other is how big you can make a computer. I think this will tend to delay the Singularity (to use Vinge's term) or to spread it out over a more manageable time frame than his conception of an ever-compressing interval between increments of machine intelligence improvements.

Where I stand on AI is that it will, some day, be possible to design truly intelligent and self-aware robots. I have a somewhat selfish reason for advocating this as well as an altruistic one: I want a robot to do all my scut work for me, and I also want robots to free humans from the yoke of physical labor.

Alvin Toffler has also written of the acceleration of things that happen in societies, but where I differ from Toffler and Vinge is that I don't think humans will accept an ever-accelerating pace of change. We are psychologically resistant to some forces of change; one example is the continuing lack of desire to pay on a piecewise basis for things that people used to pay for up front.

quote: Instead, in the early '00s we would find our hardware performance curves beginning to level off -- this because of our inability to automate the design work needed to support further hardware improvements. We'd end up with some _very_ powerful hardware, but without the ability to push it further.
Back to Vinge. In some ways the scenario he outlines is happening. People who use personal computers have reached a kind of consensus that they don't really "need" a better than 2.5 GHz computer. They feel they don't "need" more than 512 megs of RAM. Et cetera. The primary driver of computing power increases among the general population (which, I assume, makes up part of the Vingean requirement for the Singularity to form) is whether or not people need to put the brute force of extra computing speed behind something. If they don't, they don't.

A sidebar on cultural analysis, though someone more schooled than me in the social sciences can tell me the linkage between that and singularities: the basic problem, I feel, with applying an overly deterministic model to cultures is that there is (a) an inherent psychological resistance to the idea that humans are so hard-wired that they act and react based on instinct not only at the individual level but also at the cultural level, and (b) the fact that the sheer number of human beings in the average culture requires a way of handling cultures more akin to the analysis of gas molecules: using statistical techniques, not billiard-ball techniques.
From: You shall not side with the great against the powerless. | Registered: May 2001
batz
rabble-rouser
Babbler # 3824
posted 23 May 2003 10:34 PM
I'll let the rest of it go, as it can be summed up here. Quoth Mandos:
For example, I used to discuss biological determinism and sexual behaviour in cultural contexts here quite a bit, but the perspectives on the issue were so incompatible and the rejection was so passionate that I've mostly decided that there is no point in trying. I think the underlying problem was I was trying to dissect culture as a natural object, and people who study culture, literature, and so on have no inclination to accept that kind of analysis. How this creates a difficulty in reconciling classical AI theories with theories of culture should be obvious.
You'd think that one side would back down, incorporate or co-opt the other. The alleged militancy on the part of critical theorists against scientistic analysis is really just a product of the same ignorance that makes many engineers objectivist Randroids.
The solution is to attempt to reconcile the external inconsistencies, and assess whether they are indicative of internal ones. An example would be how any good scientist will tell you that they are in the business of collecting evidence, and that Truth is for zealots. Similarly, feminism has railed against the poor science behind things like assumptions of a biologically determined patriarchy in Nature. There are underlying assumptions that cultural criticism can go a long way toward providing broader perspective on, especially regarding why things behave as they do. Critical discourses cause a lot of people to get their backs up, and that is a reasonable explanation for why many people are reluctant to bother reconciling those inconsistencies.
From: elsewhere | Registered: Mar 2003
| IP: Logged
|
|
Mandos
rabble-rouser
Babbler # 888
|
posted 26 May 2003 03:00 PM
I think the problem is deeper than that. I think it goes as far down as epistemology. To use a term from another babbler-I-shall-not-name, I am a One Epistemology Bigot. Cultural studies, etc, etc don't seem very happy with these assumptions, declaring that they emerge from something that is Very Evil called "analytic philosophy." It is evil because it is "phallogocentric" or something like that, because logic has to do with penises... So there appears to be a difficulty reconciling certain philosophical assumptions at a fundamental level. Classical AI generally assumes logic to be a natural and not a "cultural" object. How can a reconciliation take place without divesting these fundamental assumptions? I am not claiming that the humanities and the social sciences are useless for the study of AI. Quite the contrary. The various traditions of linguistics have a lot to say, because many of them start on a mathematical basis. But a lot of other areas, well, don't, for the reasons I've mentioned. quote: The solution is to attempt to reconcile the external inconsistencies, and assess whether they are indicative of internal ones. An example would be how any good scientist will tell you that they are in the business of collecting evidence, and that Truth is for zealots.
I'm not sure this statement really means anything. Evidence is Truth. If there is no Truth, there is no evidence. But I do realize that you likely mean some kind of overarching claim of absolute knowledge. It is true that the natural sciences cannot afford this. But those of us retrograde rationalists who approach these questions from a formal and mathematical perspective must be able to make Truth claims. And natural scientists too must have some kind of absolute epistemological basis. quote: Similarly, feminism has railed against the poor science behind things like assumptions of a biologically determined patriarchy in Nature.
Well, this is a more complicated issue. "Biologically determined patriarchy" elides a lot of issues. I think we can reason out a decent explanation for the historical ubiquity and prevalence of patriarchy based on some simple, obvious biological facts. This has no bearing on whether patriarchy is inevitable in the future, if present conditions continue to exist. However, my attempts to do so on babble usually result in a peculiar series of demands... but I won't get into that here.
From: There, there. | Registered: Jun 2001
| IP: Logged
|
|
Jimmy Brogan
rabble-rouser
Babbler # 3290
|
posted 11 August 2003 04:33 PM
...and closer. quote: Theoretical physicists at Stanford and the University of Tokyo think they've found a way to solve the dissipation problem by manipulating a neglected property of the electron - its "spin," or orientation, typically described by its quantum state as "up" or "down." They report their findings in the Aug. 7 issue of Science Express, an online version of Science magazine. Electronics relies on Ohm's Law, which says application of a voltage to many materials results in the creation of a current. That's because electrons transmit their charge through the materials. But Ohm's Law also describes the inevitable conversion of electric energy into heat when electrons encounter resistance as they pass through materials. "We have discovered the equivalent of a new 'Ohm's Law' for spintronics - the emerging science of manipulating the spin of electrons for useful purposes," says Shoucheng Zhang, a physics professor at Stanford. Professor Naoto Nagaosa of the University of Tokyo and his research assistant, Shuichi Murakami, are Zhang's co-authors. "Unlike the Ohm's Law for electronics, the new 'Ohm's Law' that we've discovered says that the spin of the electron can be transported without any loss of energy, or dissipation. Furthermore, this effect occurs at room temperature in materials already widely used in the semiconductor industry, such as gallium arsenide. That's important because it could enable a new generation of computing devices."
This seems like a bit of a breakthrough. They're taking advantage of a wonderful natural freebie. [ 11 August 2003: Message edited by: JimmyBrogan ]
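The dissipation problem the article refers to is ordinary Joule heating from charge conduction. A minimal sketch of the arithmetic (the voltage and resistance values are my own illustrative numbers, not from the article):

```python
def charge_current_dissipation(voltage_v, resistance_ohm):
    """Conventional (charge-based) electronics: Ohm's law gives the
    current, and Joule's law gives the power lost as heat."""
    current_a = voltage_v / resistance_ohm      # I = V / R  (Ohm's law)
    heat_w = current_a ** 2 * resistance_ohm    # P = I^2 * R  (Joule heating)
    return current_a, heat_w

# 1 V across a 50-ohm element: 20 mA flows, and 20 mW is lost as heat.
i_amps, p_watts = charge_current_dissipation(1.0, 50.0)
print(i_amps, p_watts)

# The contrast the researchers draw: in their spintronic "Ohm's law",
# it is spin rather than charge that is transported, and the heat term
# above would be zero.
```

The "freebie" is exactly that second term dropping out.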
From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002
| IP: Logged
|
|
CanadianAlien
rabble-rouser
Babbler # 1219
|
posted 15 August 2003 01:52 PM
This kind of discussion always seems to get 'philosophical'. But philosophy is a sub-set of our physical structure, which is a sub-set of dynamic, self-organizing systems evolution. I think the time-frame and path to a 'singularity'-type event are the only unknowns. A continued evolution of 'intelligence', or self-organization, will occur. Chemical soups self-organized into organic molecular structures, then cells, then multicellular life, then organisms. Complex, unfathomable, but apparently inevitable. We are here in all our self-aware, complex multi-million-cell structural splendour.

Billions of years ago there was no life on Earth. 250 million years ago most of the life that was present went extinct. 80 million years ago that happened again. It has probably happened many other times, to various degrees of 'setback'. Each time, however, living systems relentlessly continued evolving into more complex organizations. To argue whether Moore's law or some other potential technological bottleneck or barrier will cap this evolution is moot. Argument about the fine points of the evolution of non-biological entities, assisted by our own creations, is moot in the big picture too.

Did a philosophical Australopithecus afarensis hominid alive 4 million years ago ponder the horror of some super-human superseding her and her kind? Maybe. But we have some of her in us, as will whatever we create. Maybe we can even bootstrap ourselves into that future, by morphing ourselves into a computational substrate. Bring it on. www.kurzweil.net has comprehensive coverage on this theme.
From: Toronto | Registered: Aug 2001
| IP: Logged
|
|
CanadianAlien
rabble-rouser
Babbler # 1219
|
posted 16 August 2003 04:21 PM
Cynic, my short previous reply wasn't meant to indicate anything other than agreement. While I would like to be around when technology 'explodes', I believe that the initial use and availability of the benefits will be limited to the world's wealthier people. This is of course already happening, i.e. western developed peoples have disproportionately better access to advanced infrastructure: healthcare, drinking water, sanitation, waste disposal, education, government, judiciary, computation, communications, etc. This will undoubtedly be the pattern for more advanced technology's benefits.

The truly awesome aspect of this, though, is that as biotech and infotech allow individuals to expand their capabilities in terms of life span and mental (storage, computational, etc.) capacities, it is very likely they will dominate in one way or another. This will be similar to centres of wealth today, e.g. the 450-odd billionaires, but with the potential for dramatically greater wealth and influence. Whether it will be benign, enlightened or otherwise remains to be seen. One indication that it may be somewhat enlightened, or simply benign, is the propensity of the world's wealthy toward philanthropy. They may choose to float all the boats, while keeping their own riding higher and higher.

However, when or if a 'singularity'-type event does occur, those individuals or entities that transcend will likely simply vanish as far as we can tell, i.e. any meaningful interaction would cease. Even if they did 'ignore us', they might, for example, choose to perform some experiment on the Earth that has the unfortunate side effect of destroying it. Some might even say that is what is happening now.
From: Toronto | Registered: Aug 2001
| IP: Logged
|
|
|
CanadianAlien
rabble-rouser
Babbler # 1219
|
posted 23 August 2003 01:21 AM
Technological singularity vs. biotech singularity: manufacture new body parts, or manipulate existing ones.

New York Times, August 12, 2003: Genetic medicine is making enormous strides, and it may hold the promise of eventually making us something closer to immortal. "Our life expectancy will be in the region of 5,000 years" in rich countries in the year 2100, predicts Aubrey de Grey, a scholar at Cambridge University. this story
From: Toronto | Registered: Aug 2001
| IP: Logged
|
|
CanadianAlien
rabble-rouser
Babbler # 1219
|
posted 27 August 2003 11:30 PM
The thing with the 5,000 years, or living "forever", is that it isn't as if it is "one people" living that long. It's more like a continuous string of evolving people. Along the way, presumably, each of those contiguous and, at times, quite discrete people does want to continue living.

Barrow and Tipler's argument is interesting. I take it as weak tea, though. There are many people who theorize that, given sufficient computational power, it is possible to recreate the universe. The part about escaping the end of the universe by merging with all intelligence and becoming a deity is more sci-fi. But heck, why not, eh!

I generally don't have a problem with reductionism. It is really just the other side of the coin, i.e. emotion is the result of the physics of atoms, molecules, cells, brain, body, etc. and their interactions, so why not talk about physics instead of emotion. The problem with it is that self-organizing, complex systems often display unpredictable behaviour that is "greater than the sum of its parts".
From: Toronto | Registered: Aug 2001
| IP: Logged
|
|
Jimmy Brogan
rabble-rouser
Babbler # 3290
|
posted 02 October 2003 12:07 PM
Technology still has a long way to go to catch up to the human brain quote: Salk Researcher Provides New View on How the Brain Functions: As the brain has evolved over millions of years, according to Sejnowski, it has become amazingly efficient and powerful. He says that nature has "optimized the structure and function of cortical networks with design principles similar to those used in electronic networks." To illustrate the brain's tremendous capacity, Sejnowski and Laughlin state that the potential bandwidth of all of the neurons in the human cortex is "comparable to the total world backbone capacity of the Internet in 2002." But they point out that simply comparing the brain to the digital computers of today does not adequately describe the way it functions and makes computations. The brain, according to Sejnowski, has more of the hallmarks of an "energy efficient hybrid device." "These hybrids offer the ability of analog devices to perform arithmetic functions such as division directly and economically, combined with the ability of digital devices to resist noise," he writes in Science. "This is an important era in our understanding of the brain," according to Sejnowski. "We are moving toward uncovering some of the fundamental principles related to how neurons in the brain communicate. There is a tremendous amount of information distributed throughout the far-flung regions of the brain. Where does it come from? Where does it go? And how does the brain deal with all of this information?"
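Out of curiosity, that bandwidth comparison can be sanity-checked with a back-of-envelope estimate. The neuron count, firing rate, and bits-per-spike below are my own order-of-magnitude assumptions, not figures from the Science paper:

```python
# Rough estimate of aggregate cortical "bandwidth" (all constants are
# order-of-magnitude assumptions, not from Sejnowski and Laughlin).
CORTICAL_NEURONS = 2e10  # ~20 billion neurons in the human cortex
MEAN_FIRING_HZ = 10.0    # assumed average sustained spike rate per neuron
BITS_PER_SPIKE = 2.0     # assumed information carried by each spike

bits_per_second = CORTICAL_NEURONS * MEAN_FIRING_HZ * BITS_PER_SPIKE
print(f"~{bits_per_second / 1e9:.0f} gigabits/s")  # hundreds of Gb/s

# Even these crude assumptions land within a couple of orders of
# magnitude of early-2000s backbone capacity (terabits per second),
# which is the spirit of the comparison.
```

The point of the exercise is only that the comparison is plausible at the order-of-magnitude level, not that either side of it is precise.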
From: The right choice - Iggy Thumbscrews for Liberal leader | Registered: Nov 2002
| IP: Logged
|
|
flotsom
rabble-rouser
Babbler # 2832
|
posted 08 October 2003 12:05 AM
If I remember correctly, it is Constantin Virgil Gheorghiu's The Twenty-fifth Hour that is the must-read on the subject. I haven't read the book, unfortunately. Only an excerpt here, a reference to it there. Hard to find, I think.

I bet Rasmus has read it. Probably has a like-new first edition sitting on the shelf. Mr Raven? A word on Mr Gheorghiu's book, if you please. [ 08 October 2003: Message edited by: flotsom ]
From: the flop | Registered: Jul 2002
| IP: Logged
|
|
|