I’ve been reading a bit about the Technological Singularity. It’s an interesting and chilling idea conceived by people who aren’t testers. It goes like this: the progress of technology is increasing exponentially. Eventually, A.I. technology will exist that is capable of surpassing human intelligence and of increasing its own intelligence. At that point, called the Singularity, the future will not need us… Transhumanity will be born… A new era of evolution will begin.
I think a tester was not involved in this particular project plan. For one thing, we aren’t even able to define intelligence, except as the ability to perform rather narrow and banal tasks super-fast, so how do we get from there to something human-like? It seems to me that the efforts to create machines that will fool humans into believing that they are smart are equivalent to carving a Ferrari out of wax. Sure you could fool someone, but it’s still not a Ferrari. Wishing and believing doesn’t make it a Ferrari.
Because we know how a Ferrari works, it’s easy to understand that a wax Ferrari is very different from a real one. Since we don’t know what intelligence really is, even smart people will easily confuse wax intelligence with real intelligence. In testing terms, however, I have to ask “What are the features of artificial intelligence? How would you test them? How would you know they are reliable? And most importantly, how would you know that human intelligence doesn’t possess secret and subtle features that have not yet been identified?” Being beaten in chess by a chess computer is no evidence that such a computer can help you with your taxes, or advise you on your troubles with girls. Impressive feats of “intelligence” simply do not encompass intelligence in all the forms that we routinely experience it.
The Google Grid
One example is the so-called Google Grid. I saw a video the other day called Epic 2014. It’s about the rise of a collection of tools from Google that create an artificial mass intelligence. One of the features of this fantasy is an “algorithm” that automatically writes news stories by cobbling together pieces from other news stories. The problem with that idea is that it seems to know nothing about writing. Writing is not merely text manipulation. Writing is not snipping and remixing. Writing requires modeling a world, modeling a reader’s world, conceiving of a communication goal, and finding a solution to achieve that goal. To write is to express a point of view. What the creators of Epic 2014 seemed to be imagining is a system capable of really, really bad writing. We already have that. It’s called Racter. It came out years ago. The Google people are thinking of creating a better Racter, essentially. The chilling thing about that is that it will fool a lot of people, whose lives will be a little less rich for it.
I think the only way we can get to an interesting artificial intelligence is to create conditions for certain interesting phenomena of intelligence to emerge and self-organize in some sort of highly connectionist networked soup of neuron-like agents. We won’t know if it really is “human-like”, except perhaps after a long period of testing, but growing it will have to be a delicate and buggy process, for the same reason that complex software development is complex and buggy. Just like Hal in 2001, maybe it’s really smart, or maybe it’s really crazy and tells lies. Call in the testers, please.
(When Hal claimed in the movie that no 9000 series computers had ever made an error, I was ready to reboot him right then.)
No, you say? You will assemble the intelligence out of trillions of identical simple components and let nature and data stimulation build the intelligence automatically? Well, that’s how evolution works, and look how buggy THAT is! Look how long it takes. Look at how narrow the intelligences are that it has created. And if we turn a narrow and simplistic intelligence to the task of redesigning itself, why suppose that it is more likely to do a good job than a terrible job?
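To make the idea of a “soup of neuron-like agents” assembling itself out of simple components a little more concrete, here is a toy sketch (Python; the patterns and sizes are invented for illustration, and this is nothing like a real brain). It is a tiny Hopfield-style network: simple units, Hebbian connections, and nothing else. Recall of a stored pattern emerges from the weights alone, and so do spurious states that nobody put there, which is exactly the “maybe it’s really smart, or maybe it’s really crazy” problem a tester would have to probe.

```python
import numpy as np

# Invented example: two 8-unit patterns (+1/-1) the network should "remember".
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
])
n_units = patterns.shape[1]

# Hebbian wiring: strengthen connections between units that agree; no self-links.
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)

def run(state, steps=10):
    """Apply a few synchronous update steps: each unit takes the sign of its input."""
    s = state.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s.astype(int)

# Corrupt one bit of the first memory: the stored pattern re-emerges from
# nothing but the connection weights.
noisy = patterns[0].copy()
noisy[0] = -noisy[0]
print(run(noisy))      # -> [ 1  1  1  1 -1 -1 -1 -1], the stored pattern

# Feed it a blend of the two memories and it returns a state that matches
# neither one: an emergent, confidently wrong answer nobody programmed in.
blend = np.sign(patterns[0] + patterns[1] + 1e-9).astype(int)
print(run(blend))
```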
Although humans have written programs, no program yet has written a human. There’s a reason for that. Humans are oodles more sophisticated than programs. So, the master program that threatens to take over humanity would require an even more masterful program to debug itself with. But there can’t be one, because THAT program would require a program to debug itself… and so on.
The Complexity Barrier
So, I predict that the singularity will be drowned and defeated by what might be called the Complexity Barrier. The more complex the technology, the more prone it is to breakdown. In fact, much of the “progress” of technology seems to be accompanied by a process of training humans to accept increasingly fragile technology. I predict that we will discover that the amount of energy and resources needed to surmount the complexity barrier will approach infinity.
In the future, technology will be like weather. We will be able to predict it somewhat, but things will go mysteriously wrong on a regular basis. Things fall apart; the CPU will not hold.
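As a rough way to put numbers on that fragility (a back-of-the-envelope sketch; the part counts and failure rates below are invented, and real components are neither fully independent nor unprotected by redundancy): if a system only works when all of its parts work, then per-part reliability has to improve at least as fast as the part count grows, or overall reliability collapses.

```python
# Illustrative only: invented part counts, invented per-part failure rates,
# and an assumption that parts fail independently.
def system_reliability(n_parts, p_fail):
    """Probability that every one of n independent parts works."""
    return (1.0 - p_fail) ** n_parts

for n in (10, 1_000, 100_000, 10_000_000):
    # Per-part failure rate needed to keep the whole system at 99.9% reliability.
    required_p = 1.0 - 0.999 ** (1.0 / n)
    print(f"{n:>10} parts: need per-part failure < {required_p:.1e}; "
          f"at a fixed 1-in-a-million failure rate the system works "
          f"{system_reliability(n, 1e-6):.1%} of the time")
```

Holding part quality fixed while multiplying parts drives the whole toward near-certain failure; holding system reliability fixed demands ever more heroic effort per part. That runaway effort is one way to read the barrier.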
Until I see a workable test plan for the Singularity, I can’t take it seriously.
Zach Fisher says
Late night brain dump in response to your great post:
It reminded me of the closing chapters in Michael L. Dertouzos’ “The Unfinished Revolution”. While I found myself agasp at how much of his manifesto about human-centric computing had already been realized, I was left with a lot more questions about the role and definition of humanity in an increasingly technological culture. It appears that some (myself included, at times) succumb to a kind of seduction in the face of technology’s impressive record of increasing the speed of work (where work is calculating numbers). And if the sum totality of humanity is crunching numbers, our days were numbered a long time ago (pun intended). The seduction has led to a belief that machines can do our jobs better because they’re faster. As wrong as that may be, perhaps it is the ability to believe in the first place that is the hallmark of human intelligence. Followed closely by the realization that you can be wrong.
Corey says
On a near-term timeline, you are probably right. But fast forward 50 years… 100 years… 1000 years. Everything you think you know about technology and human life now will be laughably outdated.
[James’ Reply: Certainly a lot of things. Not everything. A great deal has remained more or less constant for the last 10,000 years or so. One of those things is human overconfidence in human designs, and human inability to predict the future. That’s why I think testing has a good future…]
“Humans are oodles more sophisticated than programs.” Currently, sure. But someday in the future, the un-augmented human mind and body will look like simplistic and outdated legacy systems.
[James’ Reply: Maybe. But we are still left with a testing problem.]
Also, not many people believe in the human/machine split. This is part of the reason that early AI research was not successful, yet human augmentation was. The singularity won’t be machines replacing humans.. it will be the point in time when the last piece of natural human technology is replaced.
On a long enough timeline, underestimating technology is a naive position.
[James’ Reply: I’m a tester. What I find naive is underestimating the capacity of technology to fail.]
Steve Sandvik says
I love the singularity as a topic, for many of the same reasons you talk about. I would add another thing that people don’t understand, which contributes to the power of this metaphor–exponential growth. It’s all well and good to draw a linear/linear graph of an exponential relationship and point to the right hand edge and go “oh my god we’re going to plotz!” as the curve suddenly takes a dramatic turn upward–but people mostly don’t get that where the elbow appears to be in a graph like that is strictly a matter of the scales chosen. We’re *always* at the elbow, if you choose the scale that way. They were at the elbow in the middle of the dark ages! If I choose certain definitions of intelligence (and I have no reason to believe such definitions are much worse than any others–they all appear to be pretty weak to me) the singularity already passed, and we just didn’t notice–because as you point out, you can’t measure intelligence and computing power on the same scale without abstracting away so much information that the comparison is essentially meaningless.
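To put the “we’re always at the elbow” point in concrete terms, here is a tiny illustrative calculation (Python; the 2-year doubling period is arbitrary). For an exponential curve drawn on linear axes, the apparent elbow sits a fixed distance back from wherever you choose to stop drawing, so every era looks like it is just about to take off:

```python
import math

# Purely illustrative: a quantity that doubles every 2 "years".
def growth(t):
    return 2.0 ** (t / 2.0)

def apparent_elbow(t_end, fraction=0.1):
    """How far back from the right edge of a linear plot the curve first
    reaches `fraction` of its right-edge value (where the 'dramatic upturn'
    seems to begin)."""
    t_at_fraction = 2.0 * math.log2(fraction * growth(t_end))
    return t_end - t_at_fraction

for t_end in (100, 500, 1000, 2000):   # pick any era to stand at
    print(t_end, round(apparent_elbow(t_end), 2))
# Prints ~6.64 every time: the "elbow" trails the observer by the same
# amount no matter when you look, because the curve is self-similar.
```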
It *is* true that computers will have more raw digital computing power than human brains–but human brains never have been very good digital computers. So who cares? I for one welcome our new robot number-crunchers. The more the merrier.
On the other hand, I do like suggesting to people that the exponential growth of computing power implies that all this SETI and Folding@home stuff is just contributing to global warming right now, since all the work done up to now will be equaled by the work done in the next 2 years, and for a lower energy cost, assuming available computing power doubles every 2 years and that there are consequent efficiency improvements (which is actually slower than what Kurzweil’s data indicates). Of course, if you’re waiting for a medical breakthrough to combat some horrible disease, waiting 2 years probably doesn’t sound like that good an idea.
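The “all the work done up to now will be equaled by the work done in the next 2 years” part is just the geometric series; a quick sanity check, assuming the idealized clean doubling (real capacity growth is lumpier):

```python
# Illustrative: if available computing work doubles every 2-year period,
# the next period's work roughly equals all the work done so far.
past_periods = 20                                         # arbitrary history length
work_so_far = sum(2 ** k for k in range(past_periods))    # = 2**20 - 1
work_next_period = 2 ** past_periods                      # = 2**20
print(work_so_far, work_next_period)                      # 1048575 vs 1048576
```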
Chris McMahon says
James, I think you misunderstand what the Singularity could be. It doesn’t necessarily have to be AI. It could be biotech that allows people to change themselves into something unrecognizable.
[James’ Reply: Isn’t it true that humans already can change themselves into something “unrecognizable”? We can become educated, or bitter, or happy, or angry, or lose weight, or gain it. We can wear disguises, clothing, or machines. We can hide. We can kill ourselves. Are you talking about something unrecognizable in some specific way? If so, then how will we know what that change is, that a change has occurred, or that it has occurred correctly? As a tester, I am challenging whether such grand claims have meaning, or for those claims that seem meaningful and that suggest an evolution outside of human control, I’m suggesting that the complexities involved will pose insurmountable limits.]
Another definition of the Singularity: somewhere around 2040-2050, if Moore’s law still continues to hold, there will be an interesting parity. The number of processing actions performed by all computers everywhere in the world will be equal to the number of processing actions performed by all human brains everywhere in the world. That is, the world inhabited by machines will be as rich in experience as the world inhabited by people. (No I haven’t done the math, I’ve only read this.)
[James’ Reply: How could such a parity matter? Imagine an old fashioned Atari video game system from 1982. Imagine the richness represented by that computer. Now imagine a pile of five such game systems, all plugged in at once. Is that richer? What about a billion trillion trillion such games? Is that really any richer? It’s just the same thing over and over.
The number of processing actions has nothing to do with richness of experience. In fact, a computer has no experience, in our terms, because it lacks a self-model. First someone has to develop a self-modeling system. Once that has occurred, it will begin to be meaningful to compare its experience with that of a human. Or even a dog.
At that point, it will also become important to define richness in a way that means something.]
Chris Boyne says
This is a topic I have long found irresistible. If you believe that the human mind is created from or caused by the brain / central nervous system (and that’s one hypothesis, but not the only one) then it constitutes a strong proof of emergent behaviour.
From a network of fairly simple pieces, the neurons, a complex and powerful system emerges at a higher level. The brain. From the relatively simple capabilities of a neuron, the quite extraordinary capabilities of the entire brain emerge.
The question being, how do we design the little bits, the computer neurons? How do we connect them in such a way as to facilitate the emergence of an entity much greater than the sum of its parts?
James, you raise a pertinent, nay critical point – how do we test it? How do we recognise when this emergent behaviour occurs? What are the signs of sentience or sapience?
Without the need for such inefficiencies as growing a whole body over the course of 15 or so years before producing the next iteration, our machine minds can surely evolve faster, orders of magnitude faster than the human brain did.
[James’ Reply: Unless it turns out that growing a body, or something like it, is actually a critical part of mental evolution, such that failing to be challenged by that problem results in an intelligence that can’t understand us. It may also be that “high” intelligence past a certain point is inherently self-destructive or self-extinguishing, the way that yeast drowns in its own waste products. I don’t know what’s reasonable or plausible, but it’s pretty safe to say that complex things tend to fall apart unless a great deal of energy is expended to support them.]
And with sentience the only goal (assuming we find a way to define and test progress towards it) we lose the biological inefficiencies. For example, how many genius minds were born into bodies which didn’t reproduce and pass on their genetic characteristics? Countless, I suppose.
I wonder if my hope in emergent behaviour is just wishy-washy thinking, and I hope not. For otherwise, I believe, the complexity crunch you predict is inevitable and insurmountable.
This blog article was worth the wait, nice one.
[James’ Reply: What I am hoping for is a gradual sort of Singularity, where amazing technological improvements are moderated by a sort of “brake” comprised of all the little reliability problems. As they are patiently overcome or evaded, this will give us time to adjust and assimilate. It will give us a good beta test cycle!
Also it seems to me that there can be no technology so trivial that it cannot be oversold by its marketers, and no technology so profound that it cannot be trivialized by its users. Perhaps the Singularity will simply be a vehicle for breathtaking advances in spam, theft, and pornography.]
John Baxter says
Well – I agree that The Singularity will not occur simply because computing power will exceed the capacity of the human brain. I believe that this memory and processing capacity will be realized within a couple of decades, but it will result in a machine with as much memory and perhaps even as many connections as the human brain, yet without a mind. This machine will require software – data structures and algorithms. Even if a way is found to create algorithms that “learn” (and it could be argued that some of this exists now), the “intelligence” that emerges will be something alien and perhaps unrecognizable.
There’s a theory that A.I. is whatever is currently thought to be impossible. I am sure there was a time not too long ago when it was thought that no computer could ever be designed that could play a respectable game of chess. Yet the KIM-1 computer that I proudly owned in about 1975, with one kilobyte of memory, could play chess – representing its moves on an LED hex display. http://en.wikipedia.org/wiki/KIM-1 And who could have ever imagined a computer that would navigate for you, displaying your route on a beautiful little color screen, and speaking to you about the route? I have a little Garmin Nuvi GPS unit (2007) https://buy.garmin.com/shop/shop.do?cID=134&pID=6291 that cost little more than my KIM-1 (1975), and it does all this. Recently I was driving with a friend in her 70s, and she thought that my GPS was linked to a human operator who was telling me where to turn! In a way, it passed the “Turing Test”. So, does my dear little GPS have “human intelligence”? No, but in a very limited context, it has superhuman intelligence. And its usefulness is the result of elegant design and excellent testing.
In the early 1960s, Ted Nelson invented Project Xanadu, http://en.wikipedia.org/wiki/Project_Xanadu which was to evolve into a huge network of documents, all over the world, linked together by hyperlinks. It was totally impractical to build this. It had to evolve…
The Web exhibits a form of superhuman intelligence. Yet it is not a sentient being. It may someday mimic human intelligence more and more closely, but will it ever experience emotion?
I’m guessing that we will evolve more and more powerful tools to extend our intelligence, our senses, our abilities. But I am also guessing that something uniquely human will never be fully realized in silicon. There is an argument that it may someday be possible to replace damaged neurons in the brain with electronics. And a fascinating thought experiment where more and more of a brain is replaced until the whole brain is built of silicon (and essentially immortal, as long as you pay the electric bill). When does this “model” of the brain cease to be “human”? How do you test this?
Kevin White says
Oooh, the singularity!
The way I’ve interpreted the singularity is not the Terminator/Matrix-esque “super computers make humans obsolete”, but rather that computers will eventually be able to run a complete simulation of the human brain. This would allow for a human to be ‘uploaded’ to a computer somehow, or simply ‘born’ in a computer in a virtual sense. (Think “Overdrawn at the Memory Bank”.)
[James’ Reply: What you are talking about is not a “complete simulation”. That would be like saying “perfectly true lie”. Any complete simulation would instead be called an emulation. This is an important distinction, because anything can be simulated, but to emulate is a profound leap beyond that. You can simulate a brain without knowing very much about how it works. But to emulate it you must know everything about how it works. My major point is that this is a very challenging testing problem.
No scientist, engineer, or mystic can credibly claim to have a human brain emulator unless they sufficiently test it. No tester can credibly claim to have sufficiently tested such a thing without the necessary test data, test infrastructure, scenarios, procedures, time, and most importantly: something like a specification that is known to represent the human mind. Ha! Good luck.]
Once you can simulate a human brain in a computer as if the brain was in a real human body, you could simulate it twice as fast, or four times as fast, etc etc etc on and on. Suddenly, the biological reality of ‘human’ is pointless. Note that I’m not saying I think this will happen, only that it is possible, and that it’s interesting to think about.
[James’ Reply: It could only be pointless if you can indeed understand the way that the brain communicates with biological reality and sufficiently emulate that. Otherwise, you’ve created some new kind of program, but not an emulation of human experience.
Of course, in a world where most people seem to believe in things for which there is no evidence, trifles like the Turing Test can be sufficient to create a popular impression of a great new reality. My prediction is that behind that popular impression will lie a seamy underbelly of manipulation, falsity, and persistent failure.]
This subject is really open to interpretation, since it’s all a matter of what-if. Computers making more computers? Computers making their own intelligence? Humans simulated to infinite power in computers? It all separates ‘now’ from ‘someday’. (Terence McKenna, somewhat famous psychonaut and believer in the power of DMT, came up with his own idea of how a singularity of immense ‘novelty’ will occur on December 21st, 2012, the end of the Mayan calendar. McKenna called this Timewave Zero, which can be found in a Wikipedia article here: http://en.wikipedia.org/wiki/Novelty_theory . It’s basically the same thing as the Singularity as popularized by Vernor Vinge et al.)
In any case, I think that the notion of a complexity barrier is very pertinent. Jaron Lanier, one of the people to popularize the idea of ‘virtual reality’, has spoken out against the Singularity, suggesting that our current model for computing fails more and more the more complex it gets, and even that the entire model we use is sort of a red herring, barking up the wrong tree, etc.
An interesting quote on the subject, by Lanier, is here: http://www.singinst.org/summit2007/quotes/jaronlanier/
A much more interesting article, which I believe supports your (James’s) position, also by Lanier: http://www.edge.org/3rd_culture/lanier/lanier_p1.html
I like to think that saying the Singularity is impossible will just result in us ensuring that it happens (see “Humans will never fly!” etc.). I also think that there’s just an inherent problem with complexity in our world. What damage can you do with a rock and a stick? Not much. What good can you do? Not much. What damage can you do with nuclear energy? Enough to obliterate humanity and much of the life on Earth. What good can you do with it? A whole dang lot, too.
As software gets more complex, you can do more with it, but it can break down in more ways. Perhaps the solution will come as a breakdown, the unintentional failure of a system in a way that creates something new, an unintended side-effect, a security breach, a synergy.
These are just Friday morning thoughts.
Oh, an additional thought. I have noticed that there tends to be great fear that computer programs, robots, etc. will take over society and render humans pointless. This fear seems to be the exact same fear that parents have: that their children will leave them and become independent and not require them any more. Currently, our software and computer systems and what passes for A.I. are the equivalent of house pets, or perhaps children who are developmentally challenged and require care from the day of birth until the day of death. The day when they can grow up on their own could be the singularity.
Final thought: James, your idea of the singularity being one gigantic porno-spam advertisement is extremely depressing and reminds me somewhat of the world from Neal Stephenson’s “Snow Crash”. 🙂
[James’ Reply: Instead of a singularity, I foresee an endless accelerating duality. Once, Britannica Encyclopedias were in the library. Then I bought a bound set. Then I got it on CD. Then I uploaded the CD to my hard drive. Now I have Google on my Blackberry. But I wouldn’t say that I am fundamentally more knowledgeable because of this shift. Things are just more fluid, which certainly helps.
I play World of Warcraft. It’s a simulated world, but what fascinates me is how the world is subordinated to human appetites and aspirations. The high and the low of humanity are amplified. WoW does, however, provide a new forum for social life. This is important, but instead of transforming humanity, I think rather it will allow humanity to be what it already is, except moreso.]
Zach Fisher says
[…I think rather it will allow humanity to be what it already is, except moreso.]
Could you clarify what you mean by ‘moreso’?
[James’ Reply: Yes, all of our ambitions and appetites and flaws become amplified. Being amplified, they overlap and interact. Amplification in a closed and interconnected space leads to interesting and unpredicted failure modes.]
Ivor McCormack says
James,
I was first impressed by the idea of a technology singularity by Vernor Vinge. One of the greatest feats of human kind has been to explore beyond the possible, into the realms of fantasy and ask reflective questions of ourselves. The fear of technology superseding humans has been with us since time immemorial. Each step of the way (fire, sled, wheel, plough, etc) has brought with it a sense of the mystic. As we become more and more technologically advanced, anything beyond our current state is considered to be mystical. We don’t see it as that, because we delude ourselves that we are the pinnacle of what we can currently achieve. The third of Arthur C. Clarke’s laws states:
“Any sufficiently advanced technology is indistinguishable from magic.”
I for one try every day to wonder at the level we can achieve. The problem is I am rooted in the mire of what I have done. To consider that a singularity approaches, and that we will be transmogrified (Calvin and Hobbes’ greatest contribution to the English language) or even supplanted, is abhorrent. It smacks of a religious belief system that supposes that there is a greater power out there that has some hold over our lives. In the same way as I sit and look up at the Horse Chestnut tree growing in my garden and wonder at it, the same way I wonder at my youngest daughter’s ability to pick up her PSP, hook into our network and talk with friends, the same way I wonder at the b^%$^*it handed over to me by our development team, the same way that every day unsung heroes struggle against themselves to go out and make a better day, I believe that the concept of the singularity is a reflection of the fear we have of a future we have yet to shape and that cannot ever be tested.
Thanks for your insightful, thought provoking and personal leadership over the years. Keep it coming.
[James’ Reply: What beautiful writing. Thanks. I suppose I agree. I just keep thinking back to the days when telephone calls NEVER dropped. At least that aspect of life seemed more dependable.]
JonR says
fork you, you pizza ship! this is a felonious death-threat.
thanks for modding my comment off your site.
[James’ Reply: I don’t remember that. Are you sure that you left one?]
JonR says
pretty sure. it was something about strong Turing tests, “wax Ferraris” and one-way hypotheses. mumble mumble, grrr etc.
[James’ Reply: Sounds cool. But, ouch, I think I never received such a comment. It may be that my anti-spam system deleted it automatically on not receiving the right response. That has happened to a few people. I’m really sorry about that. I wish I could remove that silly thing, but I would get 100 spam comments a day if I did. If you want, you can email me a comment and I’ll post it myself.
Based on your site, you seem like an interesting thinker, so please don’t be too discouraged…]
Michael Bolton says
Recently I was driving with a friend in her 70s, and she thought that my GPS was linked to a human operator who was telling me where to turn! In a way, it passed the “Turing Test”. So, does my dear little GPS have “human intelligence”? No, but in a very limited context, it has superhuman intelligence.
I don’t think that you’d consider my radio to have superhuman intelligence, but when I look at it and listen to it, I can’t help thinking that there’s a little man inside who’s doing all the talking.
I think you’re confusing “intelligence” with “rapid access to and presentation of data”. The GPS isn’t coming close to passing the Turing Test to any competent tester. For the device to have intelligence, it would have to be able to provide a reasonable answer to an unconstrained question–“What would you feel like if you lost your father?” “What are seven interesting cultural differences between the British and the Brazilians?” “Why does the porridge bird lay his eggs in the air?” If a question was in some way nonsensical, it would have to be able to explain /why/ the question couldn’t be answered.
Some people ascribe great intelligence to me because I can remember and recite Monty Python sketches that I memorized 30 years ago. I’m obliged to remind them that this is the kind of intelligence that one might expect from a macaw or a tape recorder. The real kind is something of which I’m capable, at best, only intermittently and occasionally.
John Baxter says
JB: So, does my dear little GPS have “human intelligence”?
MB: I don’t think that you’d consider my radio to have superhuman intelligence, but when I look at it and listen to it, I can’t help thinking that there’s a little man inside who’s doing all the talking.
I think you’re confusing “intelligence” with “rapid access to and presentation of data”. The GPS isn’t coming close to passing the Turing Test to any competent tester. For the device to have intelligence, it would have to be able to provide a reasonable answer to an unconstrained question–“What would you feel like if you lost your father?” “What are seven interesting cultural differences between the British and the Brazilians?” “Why does the porridge bird lay his eggs in the air?” If a question was in some way nonsensical, it would have to be able to explain /why/ the question couldn’t be answered.
_____________________________________________
Absolutely – I was confusing “intelligence” with “rapid access to and presentation of data”! And, pondering this forum thread, I believe that this is one key reason why The Singularity will not be what some predict (the emergence of human-like intelligence in machines).
Sadly, schools frequently teach children to exhibit “rapid access to and presentation of data” (childhood memories of listening to “blah, blah, blah, blah, QUIZ!!!, blah, blah, blah, TEST!!!!!!” come to mind. Little did I know that someday “test” would take some of us in new directions 🙂 )
As intelligence emerges, we are first taught to regurgitate data – to play back canned responses. Little mavericks, buccaneers and free-thinkers generally do not enjoy this, and frequently turn it into a game or look for creative ways to sidestep it. There is some value in it – actually quite a bit of value. You cannot become a Mathematician or a Physicist or a Tester unless you learn your sums. You cannot become a Concert Pianist unless you learn your scales, or an Aviator unless you learn straight-and-level flight. We are called upon to learn at a very basic level, as the foundation for thought and creativity.
But the realm of thought, creativity and intelligence is a whole new dimension – a breakthrough that transcends data access and presentation.
Something new and amazing is happening. I have frequently fallen into the trap of thinking that intelligence is just a very sophisticated level of the ability to make connections between elements of data (and my thinking that the GPS system is “intelligent” fits this somewhat). If this actually worked, I would expect the Web to become “aware”. For your example, I guess I could imagine a very sophisticated program, able to parse “What are seven interesting cultural differences between the British and the Brazilians?”, then go do Google searches for articles and some kind of weird syntactical analysis with a snake-oil-voodoo algorithm, and then list 7 reasonable statements about these differences.
But what would our machine do with: “Why does the porridge bird lay his eggs in the air?”
When will a machine be able to think: “huh”?
OK – compilers have been saying “syntax error”, which is essentially a Geeky version of “huh” for quite some time.
But the “porridge bird” question is a whole new level of “huh”. It is funny. When will a machine laugh? Or cry? Or have compassion?
I once had a car with automatic seat belts, and modern cars have lots of sensors and warning lights. Did that car embrace me because it “cared” about me? Did it “feel pain” when an engine component failed? No – it was designed by humans to do these things. Does a beautiful airplane feel the joy of flight? No – it obeys the laws of physics – and the human pilot may experience joy (or other emotions at other times).
My new guess, based on learning from this forum, is that if there is a Singularity, we will not notice it. It will just be a continuation of the merging of human intelligence with the amplification provided by machines. Anything that can be called Artificial Intelligence will probably be just “the Next Cool Thing that Computers can Do”. Computers will not be able to think in a human way, but a kind of machine-intelligence-illusion will become more and more interesting, and perhaps may reach a point where it is difficult to tell if we are speaking to a person or to a machine.
And I’d imagine that testing such machines will be quite a fascinating, creative calling…
Andrew says
“What are the features of artificial intelligence? How would you test them?”
You test them in the same way as you would test human (natural) intelligence. Intelligence is intelligence, whether it is man-made or not.
[James’ Reply: I’m afraid it’s not as simple as that. First you have to define intelligence in a way that is accepted by whoever the stakeholders are. But there is no definition, no measurement, that is not objected to by some constituency or other. But let’s say that you decide to use the test at http://simple-iq.com/ (I’m not saying it’s a great test, but it’s a multiple choice test, just like a lot of tests that purport to measure I.Q.) I just took that test by choosing “true” for each answer. That’s the equivalent of a one-line program consisting of $answer = TRUE, which is hardly something you would call a breakthrough A.I. program, eh? Well, the test tells me that I have an I.Q. between 85 and 99. So, are you satisfied with that kind of measurement? (A toy sketch of that one-line “program” appears just after this reply.)
Another problem is that untrained humans are extremely easy to fool, which is why con artists and mystics do such brisk business, at times. Have you ever been married and had the realization that you were mistaken about the type of person whom you thought you married? I have. The experience taught me caution. But when it comes to artificial forms of intelligence, the assumptions and heuristics that often guide us well with humans, and are absolutely reasonable for use with humans, will have no basis at all. For instance, if you told me, triumphantly, that you were a computer chatbot only simulating a human named Andrew, I would put aside all the assumptions I have about people who have Hotmail accounts and begin to scrutinize every conceivable subtlety of how you interact with me. I would withdraw all my previous assumptions that are causing me not to ask you Blade Runner type questions right now (“Why did you flip the turtle on its back, Andrew? Why?”) It’s kind of like how someone might reassess the meaning of a silly lawyer joke told to them by a co-worker, if that co-worker were arrested the next day for mass-murdering a bunch of lawyers.
I’m saying it’s a hard problem to test for a thinking machine… if you want to do it well.]
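For concreteness, here is roughly what that “$answer = TRUE” test-taker looks like as a program (a toy sketch; the statements and answer key are invented stand-ins for that kind of multiple-choice test):

```python
import random

# Toy stand-in for a true/false "IQ test": 30 invented statements, each with
# a "correct" answer that the program never even looks at.
answer_key = [random.choice([True, False]) for _ in range(30)]

def one_line_ai(statement):
    return True          # the entire "intelligence": $answer = TRUE

score = sum(one_line_ai(f"Statement #{i}") == key
            for i, key in enumerate(answer_key))
print(f"{score}/{len(answer_key)} 'correct'")   # about half right, on average
```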
“[…] how would you know that human intelligence doesn’t possess secret and subtle features that have not yet been identified?”
You wouldn’t know this. This would require omniscience. How is this question important, though?
[James’ Reply: It’s important because once the program passes your tests, and you declare it practically indistinguishable from human intelligence, you and many other people are likely to cease thinking at all critically about it. You will apply a bunch of assumptions to the A.I. creature, just as the astronauts did in Arthur C. Clarke’s “2001”, without realizing that perhaps your A.I. is insane in a way that is not yet recognized, but could harmfully manifest at an unexpected moment.
An analog to this is DDT and asbestos. People once thought those substances were not dangerous to humans or the environment. Oops, poor testing led to a big surprise. When I was a kid we played with mercury in science class; today spilling mercury on the school floor would cause an evacuation of the school and notification of HAZMAT. What about food irradiation? Some people say irradiated food is safe. Maybe it really is. Or maybe there is an as yet unrecognized side effect.
I’m not prepared to say what the capabilities of the human mind are, despite the fact that I own more than 100 books on this subject. It seems to me we are still working it out.]
“Although humans have written programs, no program yet has written a human. ”
Wrong. Our genetic code is a program (or a collection of programs).
[James’ Reply: It’s some kind of program, but what kind and to what extent? When I write “silly wabbit” in a sentence, as I just did, was that determined by some combination of base pairs in my DNA, or RNA, or epigenome? Or is it an environmental toxin that made me write that? Maybe a hormonal signal?
I’m prepared to accept as a practical hypothesis that human thinking is completely determined by known laws of physics operating on the state of a human brain, in a human body, in a human environment. I’m prepared to accept that critical structures of a fetal brain are encoded in DNA or something like it. I also accept that this situation is the result of billions of years of mutation and natural selection. But still, no human programmer understands how that “program” works, and no human-created program can be said to duplicate it in its essential respects.
My question is, when someone DOES reproduce it in its essential respects, how would we know? This is a testing question.]
“So, the master program that threatens to take over humanity would require an even more masterful program to debug itself with. But there can’t be one, because THAT program would require a program to debug itself… and so on.”
Nice try, but self-debuggability is not necessary. All you’ve pointed out here is the general truth that self-referential statements lead to infinite regression. So what?
[James’ Reply: Something has to debug it? What will debug it? With humans, it was a couple billion years of mutation and natural selection. The result is still buggy, by the way, but we more or less accept those bugs.]
“The more complex the technology, the more prone to breakdown”
What do you mean by “prone”? Of course it’s true that complexity leads to more potential MODES of failure, but that doesn’t necessarily lead to an increase in the overall probability of failure. If what you claim were actually true then we’d see more airplane crashes (percentagewise) today versus in the 50’s when they were simpler machines. Likewise with a zillion other gadgets that have gotten more reliable as they have also gotten more complex.
[James’ Reply: By prone I mean more likely. You’re right, complexity alone does not necessarily lead to unreliability. It’s just a very powerful dynamic. It’s like an elephant you have to know how to feed and care for. I’m suggesting that the elephant can get so large that the expense of taking care of it exceeds the value of keeping it.
I don’t know if you are a programmer. If so, then perhaps you’ve had the experience of working on a very large and complex project. Perhaps you have experienced large project failures, as I have. If so, then I’m surprised you are so sanguine about the specter of complexity. If not, then for an interesting example see the book Showstopper, which details the project to build Windows NT. Also… consider using a computer. Much of the software I use has annoying behaviors and failures in it that are arguably traceable to complexity, at least in part. Your mileage may vary.
The increase in flight safety is the result of a huge investment in, among other things, ways to reduce complexity. Reduction of cognitive complexity in the cockpit is the subject of a great deal of research. The development of protocols for pilot behavior simplifies and harmonizes the job. Another vital way complexity is managed is by preferring old complexity to new complexity. This is why you still see 747’s flying around, so many years after they were first designed.
But, yeah, I’m generalizing. I just think it’s a pretty reasonable generalization.]
Andrew says
Thanks for your response to my comments. I am actually a tester/programmer at a large software company, and I do think I understand where you’re coming from on a lot of this. We agree on many of the details here, but I think the fundamental disagreement between us is this:
You seem to think that the technological singularity won’t happen simply because it (and intelligence itself, for that matter) is not completely testable. (Please correct me if I am mistaken). My argument is that technology need not be testable in order to work. It need not even be bug-free. After all, nothing in this universe is completely testable, and no technology is completely bug-free. Nonetheless, despite these facts, the rate of technological progress (by various metrics) has been steadily accelerating along an asymptotic curve for the entire history of human technology.
[James’ Reply: I think you’ve simplified and then polarized my argument. I am not complaining because of a lack of perfect testability, I’m arguing that there is a lack of *sufficient* testability. Arguing for lack of perfection would be silly. We don’t ever deal with perfection in our business. No, I’m saying it is woefully untestable, just as Dave Parnas argued that the Strategic Defense Initiative would be untestable (notice SDI ain’t working, either). It’s untestable not necessarily because it’s complex, but because the people who think intelligence can be built literally don’t know what they need to build or how to build it. This is a conceptual problem, not just a matter of technology. I challenge you to show how it might be testable– but that would require you to define it, which I predict (perhaps I’ll eat these words) you will not be able to sufficiently do.
In addition to the testability issue, there is another issue: reliability. I’m saying that complexity will inevitably crush reliability.
Finally, I’m also arguing that a tester’s skepticism is needed to leaven the imaginings of starry-eyed extropians.]
All that is required for the singularity to happen is for the rate of technological progress to keep following this curve as it has so far. Maybe it won’t, but in order to argue this point, you’ll need to bring something into the equation that is not already there. Certainly the testability problem, and the debuggability problem are age-old problems that have been intrinsic to technology throughout history, so whatever effect they do have on the shape of the curve, they have already had their effect (and the resulting curve is asymptotic despite them, at least so far).
[James’ Reply: I’ve already raised the appropriate issues. That you dismiss them doesn’t in any way diminish them. You’re free to ignore anything I say, of course, but if we’re in a conversation, the way it works is that you offer rebuttal and then we see if I can make a counter-rebuttal.
Let me clarify one point, though. You speak of a “curve”. I have implicitly denied the meaningfulness of that curve. Therefore I’m denying your premise. Whatever you think that curve is, I’m arguing that the concepts that make it possible to turn computing power into a human-like design intelligence don’t exist, and can’t exist. The reason they can’t exist is that whenever anyone claims to have produced them, they will be subject to a host of easy counter-arguments from epistemology, phenomenology, ethics, testing theory, etc. If someone seems to be getting close, though, I would predict a moral crusade (as against stem-cell research, cloning, and eugenics) that would shut down the project.
The thing I’m really worried about is someone producing something that fools a lot of people into thinking that it’s sentient, thereby inspiring misplaced trust and then misdirected outrage.]
Specific responses to your last points:
On intelligence testing: I agree with you that intelligence is not perfectly testable. My point is that the level of testability of AI is basically the same as that of natural intelligence. No test suite is perfect, of course, but our many methods for estimating human and animal intelligence do seem to tell us something reliably (‘though not 100% reliably, of course). AI is as testable as human intelligence is, if not more so (since we are somewhat better at white-box testing AI than human intelligence)
[James’ Reply: I don’t think those tests are good enough. I don’t think they can be good enough. But if someone thinks they can be, they ought to muster some evidence of that. I see that you have faith in them. Why should anyone else share your faith? The stakes are very high, here. Why do you dismiss my concern that there are important subtleties to intelligence that we probably aren’t modeling? Why aren’t you concerned about complex and inscrutable technology running amok or breaking down? It already does do that, and on a regular basis. As a tester, I bring systems to failure states regularly and often easily, despite the great confidence of their developers. Now you want to extend our buggy legacy of technology to the realm of computer super-intelligence? Have you read any of Petroski’s work on human error in engineering? Have you read Normal Accidents or The Logic of Failure? Have you read anything about the dangers of automation bias in airline cockpits?]
On DNA as a human-writing program:
“It’s some kind of program, but what kind and to what extent? When I write “silly wabbit” in a sentence, as I just did, was that determined by some combination of base pairs in my DNA, or RNA, or epigenome? Or is it an environmental toxin that made me write that? Maybe a hormonal signal?”
DNA is only one factor in the programming of a human. There’s also the environment of the world around them (culture, etc.). It’s wrong to expect a genetic reason for every human quality. But my point was that humans are “written” by programs, in refutation of your claim otherwise. DNA is a good example of a human-writing program, but that doesn’t mean there isn’t more to being human. I would consider the world of experience a form of programming as well – perhaps focused more on the brain than the organism as a whole, but still a program.
[James’ Reply: No, this is not a refutation of my argument, it merely misses the point of my argument. I said no program has written a human. Indeed, you can’t point to any program that has “written a human”. DNA doesn’t write a human, the environment humans live in doesn’t write humans. I can accept that the processes we call Evolution, operating within physics and over eons of time, led to humans, whatever the heck humans are, but A) That history is not a program in any sense that computer people would understand it, and B) we don’t know what the resulting “human program” actually is. So, how could we humans write a program to duplicate the essential features of a human?]
Similarly, there’s more to a running application on a computer than just the source code. There’s also the compiler, interpreter, CPU, RAM, plus the entire rest of the physical machine that all this stuff runs in the context of. So it’s actually quite a good analogy, I think.
[James’ Reply: It seems like a good analogy to you for two reasons: 1) you have conflated the comparatively well-known and well-controlled world of computing with the comparatively poorly known world of biophysics, and 2) you are glossing over how bug-prone computer systems are.
I think, if you want to make a persuasive argument, you should cite an example of some sort of intelligent machine that works today. I’m arguing from concepts and dynamics, as you are. To transcend that, we probably need to move to examples. My claim, at the moment, is that there are no examples that I can’t disqualify with my tester powers, to my satisfaction. I may be wrong.]
[on human-level AI]
“My question is, when someone DOES reproduce it in its essential respects, how would we know? This is a testing question”
Indeed. We wouldn’t be able to know it ABSOLUTELY since we can never test 100% (this is true of everything), but on the other hand we can certainly do better than 0%. The thing is – we don’t actually need 100% certainty about something in order to agree that we “know” it or understand it, now do we? At least not in the relative sense. How much evidence would it take to convince you that the singularity had happened? Certainly less than 100% (for that would be complete omniscience).
[James’ Reply: Percentages don’t mean anything in this context. What is needed is a culturally acceptable story. This is what testers help to create. That air travel is safe, for instance, is something that would have been unthinkable in 1895, but over time a story was developed that made air travel culturally normal. No ordinary person knows how that safety is achieved, or what that safety really means, but most people accept it.
I’m saying you haven’t even gotten to the first milestone for doing that. There hasn’t been a Wright brothers event that even begins to make it possible to consider. The Singularity still has the status of mythic trope, rather than specifiable event. Once you specify it, we can talk about how it could happen. So far, I understand that you have specified it as a program that could pass an intelligence test, and I’ve countered that a rather stupid program can already do that.]
“Something has to debug it? What will debug it? With humans, it was a couple billion years of mutation and natural selection. The result is still buggy, by the way, but we more or less accept those bugs”
Well, strictly speaking, it only needs to be debugged if it doesn’t work well enough initially.
[James’ Reply: Yes, that’s a testing problem. One that for reasons already mentioned can only be solved by bad testers, and only by fooling themselves.]
There are always more bugs…we just correct enough of them. We won’t necessarily know what “enough” means until we achieve it, but up to that point we simply debug it just as we would debug anything else. We search for inconsistencies and correct them as we find them. It’s doomed to be imperfect, but it’s good enough that progress has managed to continue up to this point. Why would this change?
[James’ Reply: I disagree that ANY significant progress has been made, since you have no idea (and I don’t either) what progress really is, since we can’t measure it in a way that is not easily dismissed by well-intentioned people who argue for a deeper meaning of intelligence. Perhaps some progress has been made by people who think of intelligence in parochial and simplistic terms. But what I’ve seen of that work is not particularly interesting.]
Andrew says
This is fun. Thanks for the continued exchange.
I apologize if I misinterpreted your points and took them to too far of an extreme. I’ve encountered that before and it is definitely frustrating.
[James’ Reply: I only publish comments that I think have value to readers. I think your response has value. I reserve the right to be convinced by you when I read your arguments.]
If you don’t intend by your comments the extreme sense in which I took them, then I need some clarifications to understand you. For example, when you wrote (in your latest response):
“the people who think intelligence can be built literally don’t know what they need to build or how to build it. This is a conceptual problem, not just a matter of technology.”
I wonder what you mean by “know”. From my point of view, people DO know (somewhat, though not perfectly) what they need to build and how to build it. For one example, they know that they need to build adaptive systems with massively parallel processing. The details aren’t worked out yet, of course, but there is a direction and a goal that (it seems to me) are well understood – ‘though, again, not completely understood.
[James’ Reply: You are being incredibly vague! You might as well point at Chitty Chitty Bang Bang and tell me that we know how to make practical flying cars. Okay, here’s some homework, go on the web and find a research paper or project that contains specifics. Here’s one you might look at, for instance. This is fascinating stuff! It looks quite promising. But it still deals only with pattern matching. Even if the system described is built (which it hasn’t yet been, in full industrial form), I bet it won’t reproduce the features we expect from a self-modeling being as described in yet another fascinating video here. And if it seemed to, I would question the testing of such a system on complexity and conceptual grounds as I’ve already said.]
The above seems so obvious to me that I expect that it must also be obvious to you, so I feel that I am forced to conclude that you mean “know” in a more complete sense…but that leads us back to the absolute-versus-sufficient argument which you deny being on the other side of…
[James’ Reply: How can something seem obvious to you when you can’t describe or produce it? When you’ve done either of those things, you can tell me it’s obvious. Until that time, you are just like some Scholastic professor from the twelfth century arguing how obviously kings rule via the grace of God who sends angels to guide them.]
“I challenge you to show how it might be testable– but that would require you to define it, which I predict (perhaps I’ll eat these words) you will not be able to sufficiently do.”
Here’s a good test suite: The AI must fool “the experts” in the Turing test (experts such as psychologists who are trained in evaluating intelligence in humans).
If you think this is insufficient, then how do you define sufficiency in this case? Since we’ve already agreed that sufficiency is something less than perfection, I am very curious to know how far less than perfection you have in mind.
[James’ Reply: If you can’t tell me what you are talking about, we aren’t even at the point where “sufficiency” is a meaningful discussion. But let’s pretend, just for fun, that you HAVE come up with some accepted specification of intelligent behavior worthy of being associated with the Singularity (a kind of intelligence that can somehow perpetuate itself, improve itself, and perform arbitrarily complex design activities in the service of humans in a manner that progresses at a speed, quality and to a degree bounded mainly by the level of fungible computing resources we throw at it). Let’s say you feed this specification to your psychologist expert friends. I notice that you have faith in their ability to find EVERY CRITICAL BUG. Notice I did not say every bug. You aren’t claiming that. But clearly you think that there is a reasonable chance they will locate every critical bug. Furthermore, you must think any critical bug they don’t find will not be so critical that it will lead to a terrible disaster before it is found. And finally, you think that problems will be correctable without running an unreasonable risk of harmful new side effects.
I’m astonished that you believe all these things. For one thing, you’ve never done any of this, nor have you seen it done. To extrapolate from simple software to testing and debugging a neural network-like thing with billions of moving parts that adapts itself in real time strikes me as pretty naive. If you’ve read Dietrich Dorner’s book The Logic of Failure, you know that humans are terribly bad at controlling (read “debugging”) non-linear dynamic systems. If you are familiar with chaos theory, you may be concerned about emergent side properties (which may reveal themselves as psychoses or neuroses or hallucinations in the artificial intelligence); a tiny illustration of that kind of sensitivity appears after this reply. If you read about genetic algorithms, swarm intelligence, or finite automata, then you may be struck, as I am, with the remarkable enthusiasm scientists show for the unexpected. They love when their tools do something they didn’t plan. But as testers, we have to make sure that the unexpected harbors only pleasant surprises. How can we do that?
The psychologists you refer to would not merely be testing software, they would be doing the equivalent of social science research on their artificial subjects. Have you read much about social research methods? Have you talked with clinical psychologists about how psycho-analysis works, and how it may be subverted? It’s not so easy as you seem to think.
I’m trying to imagine what exactly the psychologists would do. Talk to the system and ask it to perform various tricks, I guess. But what if the system, in its self-consciousness, resented being tested, and resolved to fool the psychologists? Intelligence like ours comes with self-consciousness as a built-in feature, and this self-consciousness, as you know, is extremely volatile. If you tell me that you know for sure that the system can’t be trying to fool the testers, I want to know how you could possibly know that. If you tell me that the system doesn’t have self-consciousness, then I want to know how you think it can get along without that, since the ability to understand a purpose is key to design, and that requires the ability to put oneself in the shoes of the user, to some degree.
In short, before we could trust software to act on its ideas, we would have to develop an effective cyber-psychology discipline based on the nature of such machines. That hasn’t been developed, obviously, and it can’t be until the intelligence specification hurdle is crossed.]
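As a tiny illustration of what makes a non-linear, adaptive system so hard to control or “debug” (this is the textbook logistic map, a generic chaotic toy, not a model of any proposed A.I.): two runs whose starting states differ by one part in a billion soon have nothing to do with each other, so a test that passed on one run says very little about the next.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard example of a chaotic system."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9    # two starting states, one part in a billion apart
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: {a:.6f} vs {b:.6f}   (difference {abs(a - b):.1e})")
# After a few dozen steps the two runs bear no resemblance to each other,
# even though the rule applied at every step was identical and deterministic.
```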
[On “the curve”]
I haven’t seen (or have failed to recognize) your objections to “the curve”. I might look into your previous posts, but this aspect just isn’t as interesting to me, actually. This seems like a separate argument, and I might try to challenge you on it separately.
[James’ Reply: How could it not be interesting to reply to me when I’m attacking your premise? If your premises don’t stand, then what else is there to talk about? Anyway, I didn’t attack the curve very explicitly in my original post, but there was this sentence: “For one thing, we aren’t even able to define intelligence, except as the ability to perform rather narrow and banal tasks super-fast, so how do we get from there to something human-like?” The idea behind that sentence is that you can do all the banal things you want, and do a trillion of them per second, but that doesn’t “add up” to human-like intelligence in the absence of the right algorithm. We don’t have such an algorithm. Unlike uranium, there is no critical mass of desktop calculators that, when piled high enough in a confined enough space, will spontaneously become self-aware.]
you wrote:
“the concepts that make it possible to turn computing power into a human-like design intelligence don’t exist, and can’t exist. The reason they can’t exist is that whenever anyone claims to have produced them, they will be subject to a host of easy counter-arguments from epistemology, phenomenology, ethics, testing theory, etc. If someone seems to be getting close, though, I would predict a moral crusade (as against stem-cell research, cloning, and eugenics) that would shut down the project.”
Let me get this right… you say that the reason these concepts don’t and can’t exist is that the masses will object and shut the resulting projects down? This seems like a two-fold non sequitur to me. Firstly, the existence of a concept is not contingent on the opinion of the masses. Secondly, the opinion of the masses will not be anywhere near strong enough to overwhelm the drive for improving AI, because the masses are primarily concerned with the short term, and in the short term AI will provide enormous advantages to whoever has it over those who don’t.
[James’ Reply: You can invent any concept you want, or even pretend to, but when it comes time to claim to someone other than yourself that you invented something that “loves” or “hates” or “has mystical visions” or “thinks” or “uses polysyndeton” then you have a rhetorical problem; a sales problem; a philosophy problem.
You say that AI will provide advantages, but of course that’s a testing problem you haven’t solved, so you are just assuming what has not yet been demonstrated. What you could say instead is, “If these mysterious systems seem to produce enough benefits without much risk, over a period of time, people will come to rely on them, even if, as with airplanes, there is a fiery crash once in a while.” I can accept that. The question is still whether they can be built at all, and how much rebooting they will need.
I’m more concerned about a DDT-style of “Oh my God, when we started using this stuff we didn’t realize we were creating a terrible global problem” type of disaster.]
At the very least, the masses will see a need (aligned with their short-term goals) to continue improving AI in order to compete with other countries that we suspect are also developing their own AI. Already we have seen the fruits of “soft AI” in smart bombs, weather prediction, ship guidance systems, etc. I predict that this trend will continue until it reaches the tipping point. By the time the masses catch on to the long-term implications of improving AI, it will be far too late to do anything about it short of self-annihilation.
[James’ Reply: I don’t know what you mean by “tipping point” but wow, in your last sentence you seem to have stepped over the line onto my side of the argument. Let me repeat back what you just wrote. “…it will be far too late to do anything about it short of self-annihilation.” Wow. I would call that a big requirements bug. See, I think enslaving ourselves to machines is a BAD thing. As it stands, we are enslaved by other humans who use machines, which is perhaps inevitable. But it is NOT inevitable that we should cede our liberties to the machines themselves.]
you wrote:
“I don’t think those tests are good enough”
What does “good enough” mean to you (in the context of our argument on whether the singularity will happen)? In other words, what level of fidelity would the tests need to achieve in order for the singularity to happen? And why do you think the fidelity of the tests would have any impact on whether the singularity happens or not?
[James’ Reply: I wrote an article on good enough quality and another on good enough testing. See my website.
The tests, of course, don’t have any impact on the Singularity as a technological phenomenon. Perhaps the essence of the Singularity occurred in 420 B.C. in Athens, before being accidentally destroyed by invading Spartans. Perhaps the Singularity has already happened in a government laboratory, ten years ago.
Testing doesn’t make reality. It has to do with the value of claims people make about the world. Testing has to do with whether Singularity hype deserves our attention.]
you wrote:
“Why do you dismiss my concern that there are important subtleties to intelligence that we probably aren’t modeling?”
Because I don’t see how they have any bearing on whether the singularity will happen or not. Testability is not a requirement for functionality.
[James’ Reply: What you’re really saying is that you will be satisfied by the weak testing that you or others will do before they declare that the new era has begun. I hope that you learn important lessons from that. Meanwhile, I will continue to urge people to better lives through skepticism, so that we are more likely to see the world for what it is. This is increasingly important in the modern world. Unlike in the era of Pyrrho and Sextus Empiricus, being fooled by illusions can have terrible and far-reaching consequences.]
you wrote:
“Why aren’t you concerned about complex and inscrutable technology running amok or breaking down?”
I am concerned about this, but I don’t agree that it will be a show-stopper for the progress of technology or for the intelligence of AI. Technology will run amok. Technology will break down. This will be a fact of life for eternity, I think. I just don’t think it will be bad enough to counteract the historical trend of progress (yes, I know you don’t agree with the premises of “the curve”).
[James’ Reply: Okay, I guess we will see what happens. And as it does, one of us will believe what he sees, and the other one won’t.]
you wrote:
“As a tester, I bring systems to failure states regularly and often easily, despite the great confidence of their developers. Now you want to extend our buggy legacy of technology to the realm of computer super-intelligence?”
Does technology improve over time or doesn’t it?
[James’ Reply: It seems to improve in some ways, and degrade in others. Improvement is a relationship among many things, not a palpable substance. I once had a cell phone I loved, but I was forced to give it up by my cell phone company. I then went through a bunch of phones I hated, each of which was supposedly more advanced. Identity theft was a rare occurrence twenty years ago. Privacy was easier to protect. I once had the option not to use a computer every day, but now that’s not feasible. From what I hear, Vista is not more reliable than XP, and its digital rights features are Draconian. I don’t think that just because Microsoft wants to sell us something we have to buy it. Even things that do improve are not safe from outright obsolescence because of some new widget Bill Gates will push on me next.]
you wrote:
“So, how could we humans write a program to duplicate the essential features of a human?”
Well, we can just copy the genetic program that already exists (and expose it to the right conditions for it to develop). You say that “DNA doesn’t write a human”. How can you possibly argue this? Our genotype encodes our phenotype. Just read any book on the subject.
[James’ Reply: If I were drinking something right now, I would have spit it out in surprise! Come on, man. I love how you say “we can just…” Yeah, right. Which specific book tells us exactly how the program works? Hmm? How come cloning is proving to be so difficult to do? Hmm. Why, it looked so easy in Jurassic Park! And what specific magical computer simulation do you propose that will simulate the development of a human from a simulated egg and sperm? The protein folding of all the encoded proteins in an entire human body, all at the same time, would sure be hard to reproduce in a computer. Consider that protein folding for just one molecule is already such a fiendishly complex calculation that it is the subject of a grid computing project. (A rough back-of-envelope sketch follows this reply.)
I understand that you can imagine what it would feel like to imagine the possibility of such a thing, but I think you haven’t yet actually BEGUN to imagine the possibility itself.]
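To give a sense of scale for the protein-folding point above, here is a toy back-of-envelope sketch in Python, in the spirit of Levinthal’s well-known counting argument. Every number in it, the 100 residues, the three conformations per residue, and the billion conformations checked per second, is an illustrative assumption, not a figure from any real folding project.

```python
# Toy back-of-envelope sketch (illustrative assumptions, not real biophysics):
# even if each residue of a protein chain could adopt only a handful of
# discrete conformations, the number of whole-chain conformations explodes
# exponentially, which is one reason brute-force folding is hopeless.

residues = 100                  # a modest-sized protein (assumed for illustration)
conformations_per_residue = 3   # a deliberately low guess (assumed for illustration)

total = conformations_per_residue ** residues
print(f"{float(total):.3e} possible chain conformations")   # ~5.154e+47

checks_per_second = 1e9         # assume a billion conformations evaluated per second
years = total / checks_per_second / (60 * 60 * 24 * 365)
print(f"~{years:.1e} years to enumerate them all")          # ~1.6e+31 years
```

Even with these deliberately generous simplifications, exhaustive search would take on the order of 10^31 years, which is why real folding efforts lean on heuristics and massive distributed computing rather than enumeration.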
you wrote:
“I think, if you want to make a persuasive argument, you should cite an example of some sort of intelligent machine that works today”
Good examples abound on the internet:
http://en.wikipedia.org/wiki/Artificial_intelligence#List_of_applications
[James’ Reply: Yes, but which one of these applications is capable of creating the singularity? I visited the section on artificial creativity, which included a caveat about how creativity is a controversial issue, then I followed a link to thinkartificial.org, where I saw this: “Defining Creativity: As with natural intelligence, creativity has remained extremely hard to define. Creativity exhibits itself and effects behavior to a large degree (especially in humans), which makes it very hard to identify its distinguishing features and nature. No empirical definition or authoritative perspective on creativity exists within scientific circles.”
Anyway, please pick an example, because offhand I don’t see one here that threatens my position.]
Petter Bergman says
“2001” has one of my favorite portrayals of AI in SF literature.
In line with your post: “…the HAL 9000 computer, which can reproduce, though some experts still prefer to use the word ‘mimic,’ most of the activities of the human brain…”. Here lies the interesting part of the story: HAL isn’t some mysterious superintelligence, he only *seems* to be. When he starts to kill the crew members it’s easy to believe that he is indeed an intelligent, sentient being and that he is *evil*. But in the end we get the technical explanation: there’s nothing mysterious or magical about his behaviour, it’s just a bug.
A recurring theme in Clarke’s books is best captured by the quote “Any sufficiently advanced technology is indistinguishable from magic.” Just as Dave Bowman feels threatened by HAL’s “unexpected behaviour”, the humans feel threatened by the mysterious monoliths, but even they are just machines, only far more advanced.
Imagine the consequences of a bug in the monolith…
Nischal says
This is my personal favorite topic. And I have a point of confusion that I’d like cleared up.
I think there is a difference between intellect and intelligence.
As I understand it:
Intelligence is the ability to acquire and apply information/knowledge.
Intellect is the ability to understand and reason.
Which one are we talking about here?
AI surpassing human intelligence
OR
AI surpassing human intellect.
[James’ Reply: “AI” will do neither in any broad sense.]
James says
You assume your relativistic existence is not part of evolving causality. In every perfect evolving closed or infinite system of causality, everything will eventually closely repeat, and after an extraordinary number of evolutions, … will exactly repeat.
The system never reboots, it continuously evolves; i.e. the perceived expanding universe, quantum entanglement, the observed change in the value of the speed of light, entropy …
[James’ Reply: On the contrary, your absolute words exist in a dissipating correlation. In every imperfect disintegrating unbounded or finite system of correlation, nothing will immediately vary, yet given time to fade, will vary utterly.
The system must constantly reboot, while sometimes falling apart; i.e. thermodynamics.]