

Archive for the ‘Rudy’s Blog’ Category

Computers Will Be Alive and Intelligent

Friday, November 11th, 2005

These are partial notes of my remarks during my debate with Noam Cook of the Philosophy Department at San Jose State University, November 10, 2005. I forgot to bring my recorder, so I didn’t manage to tape it for a podcast.

It was a nice event, with a big and enthusiastic audience, maybe 150 people, who asked a lot of good questions at the end. I was anxious about the event. They say that people have a phobia of public speaking — how about public speaking with a guy there to contradict everything you say! But Noam was a gentleman, and it went smoothly. I’ve incorporated some of my responses to his points in these notes.

Summary. I wish to argue that humans will eventually bring into existence computing machines that are as alive and intelligent as themselves.

After all, why shouldn’t there be alternate kinds of physical hardware which successfully emulate the behavior of humans? The only hard part is finding the right software for these systems. And even if the software is very hard to figure out, we have some hope of finding it by automated search methods.

Definition. A computation is a deterministic process that obeys a finitely describable rule. Saying that the process is deterministic means that identical inputs yield identical outputs. Saying that the rule is finitely describable means that the rule has a finite description, such as a program or a scientific theory.

I believe in what I call universal automatism: It’s possible to view every naturally occurring process as a computation. For a universal automatist, all natural processes are deterministic and finitely describable — the weather, the stock market, the human mind, the course of the universe. The laws of nature are a kind of computer program.

In a broad sense, any object is a computer, but for this debate, let’s use “computer” in the narrow sense of being a manmade machine. Definition. A computer, or computing machine, is a device brought into being by humans using tools and used to carry out computations.

In arguing that we can eventually produce computers equivalent to humans, it’s useful to break my argument into three steps.

(1. Automatism) A human mind is a deterministic finitely complex process; that is, human consciousness is a computation carried out by the body and brain.

(2. Emulation) The human thought process can in principle be emulated on a man-made computer; that is, we can carry out equivalent computations on systems other than human bodies.

(3. Feasibility) We will in fact figure out the design for such a computing system; that is, humans and their tools will eventually bring into existence such human-equivalent systems.

I realize that many people don’t want to accept that (1. Automatism) they are deterministic computations. This is the point I really have to argue for.

Looking ahead, if I accept that (1. Automatism) I’m a computation of some kind, then it’s relatively easy to believe that (2. Emulation) this same computation could be run on a man-made machine, for computers are so programmable and so flexible.

It’s also not so hard to believe the third step, which says (3. Feasibility) if it’s in principle possible to run a human-like computation on a machine, then eventually we fiddling monkeys will figure out a way to do it. It’s only a matter of time; my guess is a hundred years.

1. On Automatism. In arguing for the idea that our mind is a kind of computation, note that our psychology rests on our biology which rests upon physics. And physics itself is, I believe, a large, parallel, deterministic computation.

The uncertainties of quantum mechanics aren’t a lasting problem, by the way; the present-day interpretation of quantum mechanics is simply a scrim of confusion and misinterpretation overlaying a crystalline deterministic substrate which will eventually come clear.

So if we grant that human consciousness is a particular kind of physical process occurring in human bodies, and if we grant that physics is made up of deterministic computations, then we have to conclude that consciousness is a kind of computation.

Let me forestall three objections.

Free Will Objection: If I’m a deterministic computation, why can’t anyone predict what I’m going to do?

Answer to Free Will Objection: When I say the human mind can be regarded as a deterministic computation, I am not denying the experiential fact that our minds are unpredictable. The fact that you can’t predict what you’ll be doing tomorrow or next year is fully consistent with the fact that you are deterministic.

The impossibility of predicting my future results from two factors. Most obviously, my future is hard to predict because I can’t know what kind of inputs I’m going to receive. But, and this is a key point, I’d be unable to predict the workings of my mind even if all my upcoming inputs were known to me. I’d be unpredictable even if, for instance, I were to be placed in a sensory deprivation tank for a few hours.

A gnarly computation such as is carried out by a human mind is irreducibly complex; it doesn’t allow for any rapid shortcuts, not even in principle. This is a fact that computer scientists have only recently begun talking about; see Stephen Wolfram’s book A New Kind of Science, and my own book, The Lifebox, the Seashell, and the Soul.
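To make the idea concrete, here’s a minimal sketch in Python of the kind of rule Wolfram writes about, the elementary cellular automaton Rule 30. The rule is trivially simple and completely deterministic, but so far as anyone knows there is no shortcut formula for what the pattern looks like far down the page; the only way to find out is to run it.

```python
# A toy illustration (not from the debate notes): elementary cellular automaton
# Rule 30, a standard example of a simple deterministic rule whose output is,
# as far as anyone knows, not predictable by any shortcut formula. To learn
# what row n looks like, you essentially have to compute all n rows.

def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells, wrapping around at the edges."""
    n = len(cells)
    # Rule 30: new cell = left XOR (center OR right)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

row = [0] * 31
row[15] = 1                      # start from a single live cell
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = rule30_step(row)
```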

The mind is a deterministic computation, but there are no simple formulas to predict the mind.

“Chinese Room” Objection: Even if a computer can be programmed to emulate all of human behavior, it is still only putting on an act; it has no internal understanding, knowledge, or intentionality. Consider, for instance, the IBM chess-playing program Deep Blue. It excels at playing chess, but it doesn’t “know” anything about chess.

Answer to the “Chinese Room” Objection. To the extent that we can give precise descriptions of our psychological states, we can create AI programs to emulate them. A goal becomes a target state the program wants to reach. A focus of attention is a particular pointer the program can aim at a simulation object. An emotional makeup becomes a system of weights attached to various internal states. Conscious knowledge of something may involve a kind of self-reflexive behavior, in which the system models the world, a self-symbol, the relationship between the world and the self-symbol, and the self-symbol considering the relationship between the world and the self-symbol. To the extent that this can be made precise, it can be modeled. Present-day AI programs lack many of the internal aspects of human psychology simply because these aspects have not yet been well-enough described. But in principle, it can all be modeled.
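Just to show the flavor of the translation, here’s a purely illustrative Python sketch, with made-up names and no real intelligence in it: a goal as a target state, attention as a pointer to one simulation object, emotions as weights, and a crude self-symbol that the system folds back into its own world model.

```python
# Purely illustrative sketch with invented names; this is the shape of the
# modeling described above, not a working AI.

from dataclasses import dataclass, field

@dataclass
class Agent:
    world_model: dict = field(default_factory=dict)   # the system's model of the world
    goal: dict = field(default_factory=dict)           # a target state it wants to reach
    attention: str = ""                                 # pointer to one simulation object
    emotions: dict = field(default_factory=lambda: {"fear": 0.1, "curiosity": 0.8})

    def self_symbol(self):
        # Self-reflexive step: a description of the self and of the self's
        # relation to the world model, placed inside the world model itself.
        unmet = [k for k, v in self.goal.items() if self.world_model.get(k) != v]
        return {"attending_to": self.attention, "unmet_goals": unmet}

a = Agent(world_model={"board": "midgame"}, goal={"board": "checkmate"}, attention="board")
a.world_model["self"] = a.self_symbol()   # the model now contains a model of the modeler
print(a.world_model)
```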

Supernaturalism Objection: Given that humans are the Crown of Creation, God surely loves us so much that we’ve been equipped with some vital essence that wholly transcends the petty, deterministic bookkeeping of computational physics.

Putting much the same notion more secularly, we might say that there are oddball as-yet-unknown physical forces involved in life and in consciousness; perhaps quantum computation has something to do with this, or dark energy, or instantons on D-branes in Calabi-Yau spaces.

Answer to the Supernaturalism Objection: We are already on the point of building physical quantum computers. In the long run, any possible kind of physics should be something that we can put into the devices we make. And who’s to say that God’s special vital essence doesn’t dribble into our devices as well? Zen Buddhists tell the story of a monk who asks the sage, “Does a stone have Buddha-nature?” The sage answers, “The universal rain moistens all creatures.”

2. On Emulation. The second step of my argument is very easy to defend; we’ve known since the 1940s that there is not an endless staircase of more and more sophisticated computation. Relatively simple devices such as a desktop computer are already “universal computers,” meaning that, in principle, your desktop machine can emulate the behavior of any other system. It’s just a matter of equipping your computer with a lot of extra memory, getting it to run fast, and giving it the right software.
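A toy illustration of the universality point (my example, not part of the original talk): one short, fixed Python program that emulates any Turing machine handed to it as a table of rules. The particular machine below, a unary incrementer, is just a placeholder; the emulator itself never changes when the machine being emulated does.

```python
# Toy illustration of universality: one fixed emulator that runs any machine
# described by a rule table of the form (state, symbol) -> (write, move, new_state).

def run_turing_machine(rules, tape, state="start", max_steps=1000):
    cells = dict(enumerate(tape))     # sparse tape, blank cells read as "_"
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: scan right over 1s, then write one more 1 (unary increment).
incrementer = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(incrementer, "111"))   # prints 1111
```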

I estimate the actual computational power of the human brain as being on the order of a quintillion primitive operations per second using a quintillion bytes of memory. In scientific nomenclature, this would be an exaflop exabyte machine.

Extrapolating from present trends, we may well have desktop computers of this power by the year 2060. Of course the hard part is figuring out how to write the human-emulation software for the exaflop exabyte machine.
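For what it’s worth, here’s the back-of-the-envelope arithmetic behind an estimate like that; the 2005 desktop speed and the doubling period in the sketch are my own rough assumptions, not figures from the talk.

```python
# Rough extrapolation; the starting speed and doubling period are assumptions.
import math

ops_2005 = 1e11        # assumed desktop speed in 2005, primitive operations per second
target = 1e18          # one exaflop, the brain estimate above
doubling_years = 2.4   # assumed doubling time for desktop power

doublings = math.log2(target / ops_2005)
year = 2005 + doublings * doubling_years
print(f"{doublings:.1f} doublings, landing around the year {year:.0f}")   # about 2061
```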

3. On Feasibility. There are all sorts of ways of making computers, and some hardware designs are better for certain kinds of problems than others. In making a computer that emulates humans, we face two interlocking problems: finding the best kind of hardware to use, and finding appropriate software to run on that hardware.

These are exceedingly hard problems. My guess is that it will take at least another hundred years for full parity between humans and certain machines. Possibly I’m too pessimistic.

There are certain limitative logic theorems, such as Gödel’s incompleteness theorem, suggesting that it’s in principle impossible to write software equivalent to a human mind. But these theorems do not rule out the possibility of managing to evolve or to stumble upon human-equivalent software. All that is ruled out is the ability to truly understand how the software works.

Evolution, also known as genetic programming, is a widely used technique in computer science. Although artificial evolution doesn’t find the very best algorithms, it is able to find acceptably good algorithms, often in a reasonable amount of time.
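Here’s a toy Python sketch of that loop: score, keep the fittest, mutate, repeat. The target bit string stands in for whatever software we want to discover; real genetic programming evolves whole program trees, but the shape of the search is the same.

```python
# Toy genetic algorithm: not the best possible search, but it reliably finds an
# acceptable answer without anyone designing the answer by hand.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]   # stand-in for "the software we want"

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                              # keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print(f"best genome after {generation} generations scores {fitness(population[0])}/{len(TARGET)}")
```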

It may be that we don’t need to use an evolutionary process. Wolfram argues that whenever you can find a complicated program to do something, you can also find a concise and simple program to do much the same thing. If we had a better idea about the kinds of programs that might generate human-level AI, we might achieve a rapid success simply by doing an exhaustive search through the first, say, trillion possible such programs.
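At a toy scale that kind of search looks like the sketch below (my illustration, using the 256 elementary cellular automaton rules as the space of programs); the hard part, which the placeholder filter cheerfully dodges, is saying precisely what behavior you’re searching for.

```python
# Brute-force search over a tiny program space: all 256 elementary CA rules,
# filtered by a crude, placeholder notion of "interesting behavior".

def step(rule, cells):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def looks_interesting(rule, width=64, steps=64):
    row = [0] * width
    row[width // 2] = 1
    seen = set()
    for _ in range(steps):
        row = step(rule, row)
        seen.add(tuple(row))
    # Placeholder filter: the pattern neither dies out nor settles into a short cycle.
    return any(row) and len(seen) > steps // 2

candidates = [rule for rule in range(256) if looks_interesting(rule)]
print(len(candidates), "of 256 rules pass the crude filter")
```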

It may in fact be that human-style mentation is something that nature “likes” to produce; it could be a ubiquitous pattern like cycles or vortices or pairs of scrolls. In this case the search might not take so long after all.

Debate Today: Can Machines Think?

Thursday, November 10th, 2005

Thursday, November 10th

Topic: “Will Computers Ever be Alive or Intelligent?”

Dr. Noam Cook, “No.”

Dr. Rudy Rucker, “Yes.”

4:30pm – 6:00pm

Martin Luther King Library, Room 225B

(On the San Jose State University campus at 4th St. & San Fernando St., San Jose, California.)

B there or B square.

Slightly nervous about this. I have to speak first, which is never an advantage. Will try and podcast it.

Interview for Ylem with Loren Means on Recent Books

Wednesday, November 9th, 2005

I did an email interview this morning. By the way, you can find all of my email interviews online, if you’re interested.

Q 168. I’m back, Rudy, I interviewed you twenty months ago for my art magazine, Ylem: Journal of Artists Using Science and Technology, and I’m getting ready to publish our interview. I want to bring it up to date with some follow-up questions. What became of that nonfiction book you were talking about, The Lifebox, the Seashell, and the Soul?

A 168. The Lifebox came out last month to my customary blizzard of zero publicity, other than the web page I made for it. It’s a very nice-looking well-produced book with lots of great illos, and I said everything I wanted to about the meaning of computation. But I’m not seeing any reviews of it, other than in the three publishing trade-zines. And, sigh, Amazon posted the one bad review on their page. And I haven’t seen it for sale in many stores. And the science-book-clubs haven't yet picked it up. And we haven't gotten any deals from foreign publishers. So right now I’m discouraged.

I have a sense that the market for science books these days is geared towards books having precisely one idea, which is then buttressed with water-cooler-level discussions of pre-digested news stories that have been fed to us by the media. The recent best-seller Blink is a self-reflexive example of this: Blink says that your very first and most shallow idea on any topic is correct. You don’t even have to read it! Just put it on your shelf. Got it. Like a white-on-white painting with maybe one red dot. No time wasted. And I’m also up against Ray Kurzweil’s snake-oil-sales-pitch The Singularity is Near, which pretty much says, “Buy my book and you’ll live forever.” The guy even sells vitamins.

The Lifebox, the Seashell, and the Soul: What Gnarly Computation Taught Me About Ultimate Reality, the Meaning of Life, and How to Be Happy is ruminative and dialectic in approach; I weigh opposing views of reality and come up with a synthesis or, if that’s not possible, consider holding both views simultaneously. Also I commit the high crime of joking around rather than being deadly serious.

The title itself is a dialectic triad, by the way. The Lifebox thesis is that there can be computer models of human minds, the Soul antithesis is that I feel myself to be a vibrant energy-filled being and not a machine, the Seashell synthesis is that the computational patterns found on cone shells are examples of the gnarly deterministic-but-unpredictable computations that could indeed inhabit my skull.

My book is profound and deeply human, but it’s not very blink at all. Stephen Wolfram likes it in any case; he says it’s a more important book than my publishers or I realize.

Q 169. And you said that after the Lifebox book you were going to write a novel about two crazy mathematicians?

A 169. Yes, Mathematicians in Love. I just finished making the final revisions. It’ll be out from Tor Books in, I suppose, summer or fall of 2006.

I had fun with this novel. For one thing, it gives sfictional life to some of the ideas in my Lifebox tome. For instance I have my two guys making universal paracomputers out of naturally occurring things like vibrating drumheads. Now, in Lifebox I argued that most naturally occurring processes are, although deterministic, impossible to effectively predict by dint of being gnarly computations. But, just for kicks, I set most of Mathematicians in Love in a world where this isn’t the case, and it is actually possible to build a device that predicts the weather, the stock market, other people’s decisions and so on.

Another thing I do in Mathematicians in Love is to satirize our current government, and to have my characters bring it crashing down. President Joe Doakes goes to jail. I found that very satisfying.

Yet another angle is that I use a notion about parallel worlds which I developed in The Lifebox, the Seashell, and the Soul; my idea is that reality might be a series of parallel universes which are linearly ordered, with each one slightly better than the one before, like successive drafts of a novel.

One thing that would pep up my career would be if Michel “The Eternal Sunshine of the Spotless Mind” Gondry actually makes his movie of my novel Master of Space and Time. He’s had the option for two years and is presently working on a script with Dan “Ghost World” Clowes. Michel says he’d like to cast Jim Carrey and Jack Black as the book’s two mad scientist pals. (I know, I already mentioned this on the blog, but hey, this thought is my current security blanket.)

Q 170. What’s next?

A 170. I’m not sure. I’m not up for another big project yet, what with my would-be-earthshaking tome being ignored, and with the long haul of my latest novel just ended. Call it post-partum blues.

Right now I’m writing some short stories. With a couple more, I’ll have enough for a new story anthology. So that’d be an easy book to get out. This summer I read Charles Stross’s great Accelerando, and that got me interested in tackling the Singularity head on; I’m writing two stories about the Singularity right now, and I already sold one of them to Asimov’s.

I’ve been cleaning out my basement this week and putting all my old boxes of papers in one specified corner. Maybe that means I’m getting ready to write a memoir. I’d sort of like to take on that project, but the publishers I’ve mentioned it to aren’t very interested. I also have a few hundred thousand words of journals that could perhaps be published in some form.

The other possibility is, of course, that I write a new novel. I’d been thinking of doing a sequel to Frek and the Elixir, but I don’t have a killer idea for that yet. For Frek itself, I used Campbell’s monomyth structure, one stage per chapter, which gave the book a nice form, but it sort of makes the book a finished whole, so I’m not exactly sure how to do a sequel. Or I could do a fifth Ware book, not that the first four are flying off the shelves anymore.

Another thought is to drop writing for awhile, and wait for the world to catch up with me. I enjoy painting; maybe I could pick up a few bucks doing that. One result of cleaning out the basement is that I’ll have room for a metal rack on which to store all of my family’s accumulated paintings — we’re all artists. With a place to store the accumulated works, I’d be a step closer to painting a bit more. One thing I might do soon is to start selling posters of my paintings on the web. That’d be less painful than trying to get a show.

I piss away a lot of time blogging. On the one hand, it’s an obsessive-compulsive disorder, and I think I’ll be cutting down to one blog post per week so as to have more time. But I do see blogging as an art form; I like for each entry to be a nicely balanced combination of words and pictures — I shoot a lot of pictures with my digital camera, and I recycle old ones, like I’m doing today. I’ve even gotten into podcasting, that is, posting my lectures and spoken interviews online. In my own diffuse and unpredictable fashion, I seem to be creating an electronic lifebox copy of my mind. On the third hand, why bother? Better to go outside and ride my bike.

Trip to MIT and Harvard

Thursday, November 3rd, 2005

I got back from Wyoming just in time to turn around and fly out to Cambridge, Mass, for a scheduled talk and book-signing at MIT, promoting The Lifebox, the Seashell, and the Soul.

While I was in town, I managed to go see a premiere production of (part of) my musical-theater piece, “Mamma Infinity UFO,” a reworking of the play “As Above So Below” based on my short story of the same name, a tale about a mathematician who encounters a UFO shaped like a 3D Mandelbrot set. (Not to be confused with my novel about painter Pieter Bruegel bearing the same title.)

[Set of models of brains on sticks like lollipops or sex-toys in the Harvard Hall of Science: crocodile brain, pigeon brain, rabbit brain, dog brain.]

The Elsewhere Troupe producing my piece only had ten minutes of it worked out; they performed that as part of the Keep Santa Cruz Weird Festival, which was a variety show of different acts in the funky old Rio Theater (a repurposed movie theater) in Santa Cruz that we always pass on the way to the beach. The bill also featured a ukulele band called the UkeAholics, a juggler, a cowboy psychedelic story-teller, a pretty girl carrying title cards, a woman singing rap and opera, experimental movies, and more. Naturally the one act that was so weird that it left the audience temporarily speechless was mine. I’ve seen that effect so often, the stunned silence when my presentation stops. And to me what I’m offering always seems so clear and logical, a gentle extrapolation from familiar facts. In any case, it was great seeing the show; it’s a trip to see my personal psychic states being publicly dramatized.

And then a few days later it’s 6:50 AM; I’m lying in bed in “A Friendly Inn,” a bed and breakfast place in Cambridge, Mass, a classic New England rooming house, only run by Chinese, my room feels almost like a dorm room, here a block away from Harvard Yard, which is the several-city-block compound of buildings and quads making up this quintessential university’s classic core. I see lovely red and yellow autumn leaves out the windows against a clear blue sky, reminding me of days like this forty years ago when I was at Swarthmore.

Harvard was my first choice for a college as a bright high-schooler in 1963, and I didn’t get in, perhaps because of my I-want-to-be-a-beatnik application essay and the I-am-an-existentialist interview I had with a disapproving stock-broker Harvard alum at his office on Market Street in Louisville. Me: “I admire Kerouac and Jean Paul Sartre.” He: “Have those fellows — have those fellows ever met a paycheck?”

For years I carried a resentment of Harvard, the Ivy League, Boston, and even New England as a whole. Yesterday I was sitting alone in the Memorial Church in Harvard Yard, recalling this, and thinking that, after all, it was very good to go to Swarthmore — and I made some progress on setting down that old resentment. Harvard is cute, the best, I’m glad it’s here. I walked into some of the classroom buildings, following the students up the stairs, getting glimpses of the Genuine Harvard Classrooms, with actual chalk blackboards. Wandered into a student dining hall with wood wainscoting and enormous chandeliers (and was immediately evicted — “But I’m a National Merit Scholar!” “Outta here, you beatnik.”). This place really is the platonic ideal of an East coast university.

Thinking back to 1963, I could have gone to MIT, where I applied and was accepted, but on my college tour the school had struck me as too — what was the word I would have used then? We didn’t use “nerd” or “geek” or “dweeb.” Sliderule? Martian? Engineer? My people, now.

(Here’s a photo of Gerry Sussman wearing a “Nerd Pride” pocket protector. I ran into him at MIT this time out, full of ideas, charming, lively, he’s been there 41 years, was two years behind hacker Tyrannosaurus Rex Williaminus Gosper.)

I remember the guy who gave me my MIT campus tour back in 1963 said, “This is the dorm I live in, and some people call it the ‘tool shed.’” “Tool” at that time being a word with some of that same force. But at Swarthmore I saw wide-hipped long-haired girls in jeans sitting on the grassy lawns talking to boys, and I knew this was for me.

Back in the present, at Harvard I walked around the Fogg Art Museum; saw some good stuff like this Roman portrait buried with this person’s mummy.

At MIT I was the guest of my old artificial-life-researcher friend Mitch Resnick of the Media Lab, we met at Alife 1 at Los Alamos in, like, 1987. Mitch has been working with the Lego company over the years, and is now guiding a Lego spin-off startup company Playful Invention Company with a product called the PicoCricket Kit; their aim is to make robotics kits that might appeal to girls as well as to boys. The idea is to involve sound, color, and touch; to make the devices more haptic and sensuous. He says that over the years the Media Lab has become more and more hardware oriented.

Next door to Mitch was the “Bits and Atoms” group, focused on, I think, computational DNA. And a few doors down were some guys making walker exoskeletons under, I believe, a grant from DARPA.

One bittersweet thing about a place like MIT is that the projects one hears of or remembers are often thesis projects which, after a few years, are gone. Always new stuff coming up. One thing I wish I could have seen again was an old Media Lab demo in which a student was stacking up the frames of movies to make virtual 3D objects, sensuous spacetime trails.

We didn’t have time to tour all of the Media Lab because we took time off for lunch at CSAIL (Computer Science and Artificial Intelligence Lab), which is now the coolest building on the MIT campus, a Frank Gehry work. We had lunch with CSAIL director Rod Brooks, he of the Attila, Cog, Kismet, and Roomba fame, coiner of the robot-liberation slogan, “fast, cheap and out of control,” and author of the fascinating book Flesh and Machines. I think the fact that I’m an SF writer impresses these guys more than my work on CS. SF is something they can’t do.

I got to see a few new humanoid robots under development by grad students in Rod’s lab. I was particularly impressed by one named Domo; Rod got me to feel Domo’s hand. Domo’s fingers were so gentle and responsive, like a human’s.

Domo had a gazillion springs in his fingers and arms. He was learning to watch his hand with his eyes, staring at his fingers like a stoned deadhead, with a video screen behind him showing his view with an aqua disk of variable size to indicate the region, if any, that Domo was focusing on. If nothing was happening, the disk would go away, but if I waved my hand in front of Domo’s face, he’d wake up and follow my motions, his eyeballs moving and the aqua disk tracking my hand on the screen. That’s Rod in the background.

A student called Jessica Banks showed me a little robot that was learning to balance on a bowling ball; it was called Eggway. “Can I take your picture for my blog?” I said. “Oh, I get this all the time,” she said, enjoying the attention.

When it came time for my talk, I encountered the all-too-familiar problem of the computer projector not working with my (brand new) computer. But I’d posted my “Gnarly Computation” slides to the Web in advance, and was able to show them using the room’s built-in computer, also I had time to download and install my demo cellular-automata software CAPOW on that machine before the talk. I taped my talk for a 60 Meg MP3 podcast.

Afterwards there were three guys hanging around (from right to left): Bob Hearn, a onetime AppleWorks programmer who returned to MIT to get his CS Ph.D. in AI; Justin, a sophomore math major interested in CS; and Doug, an MIT graduate in philosophy presently doing sysop-type stuff for the MIT computer network. We went out for a nice dinner at Legal Seafood nearby; I had the New England dinner with chowder, mussels, and a steamed lobster, very satisfying. The guys were chatting about the doings of famous Cambridge professors like Gerry Sacks, Marvin Minsky, Gerry Sussman, Max Tegmark, and Rodney Brooks — Sacks, Minsky and Sussman have been there my whole life. I felt like a country priest come to visit Rome; he falls in with some friendly younger priests who break bread with him and gossip about the intrigues of the cardinals and the monsignors. There’s some tension between the software-oriented Minskyian camp and the hardware-loving Brooks camp.

[Even string-king Ed Witten is in Cambridge! VE-RI-TAS!]

I talked to a grad student friend of Bob Hearn’s named Jacob the next day; he said the key breakthrough in most AI problems involves finding the right representation of the problem. The “aha” is all about seeing the problem in the right way. That’s kind of what hierophantics in my novel-under-revision Mathematicians in Love is all about: somehow automatically being able to immediately go to the correct computation-crushing aha representation.

I had a somewhat discouraging book event at the MIT COOP, which isn’t even what I’d call a real bookstore; it’s more like a mall store, a concrete box with greeting cards and stationery and a handful of titles. They didn’t publicize my event, and the turnout was thin. Afterwards, full of self-pity, I felt like this poor beggar I saw playing a one-stringed violin, collecting money in an empty Kleenex box at his feet. Although, obviously, I have it much better. And, yes, I gave the guy some money.

And then I got a cell phone call from my Hollywood agent, enthused about the developments on the Master of Space and Time movie — Michel “The Eternal Sunshine of the Spotless Mind” Gondry is now working on a script with Dan “Ghost World” Clowes, and Gondry has said he plans to cast Jim Carrey and Jack Black in the film. Fingers crossed.

I had dinner with — back to the country-priest-in-Rome analogy — Martin Luther, that is, the iconoclastic Stephen Wolfram, a figure to be reckoned with, but not a part of the academic inner circles. We had a really enjoyable conversation.

I always think I’ve only met three or four people as smart as or smarter than me. Kurt Gödel definitely is the gold standard here, the smartest man I ever met, my guru, a demigod, and to hell with those who say he was nuts; they didn’t really know him. Wolfram was the first person I met after Gödel who seemed truly smart. And later John Walker of Autodesk was the third guy like this that I’ve encountered. That’s about it. Reciprocally, Wolfram seems to find me to be someone he really likes to talk to.

My good old SF buddy Paul DiFilippo came in from Providence to visit with me the next morning. He recently wrote a very funny comic book series called TOP TEN, on the racks right now, I read the first three issues on the plane back and look forward to the closing two. Lots of eyeball kicks.

Paul and I went to look at the Science Hall instrument collection (Paul is with the original cyclotron in the photo), basically scavenged from demo tools and lab equipment left over from earlier times at Harvard which has, after all, been in the education biz for, what, better than 200 years. The woman running the place was didactically asking us if we could see what was “wrong” with a certain mechanical orrery model of the solar system, and I told her, “We don’t know any science, we’re science fiction writers.” She found our levity in poor taste.

I was amused to see that in the Harvard orrery, the crank (= power lever for the cosmos) is locked away behind glass, unlike the crank at the Geneva museum of the history of science which is out there for one and all to touch. (Mentioned in the “Blog the Gnarl #3” entry of my 2004 boing boing blog.) So the spaced-out science fiction writers can’t get at it!

