Wednesday, March 22, 2006

Error 21

The midnight hush used to serve as the perfect backdrop to these nightlong pursuits, though Sam had never noticed. Tonight, though, the air was filled with the subtle drone of a rare midnight rain, and Sam found himself aimlessly tracing droplets as they hit the moonlit window pane and hurriedly rushed through a maze of trails that were once droplets themselves. One of them faded out of existence, leaving nothing but a trail for another to run through later, as Sam's reverie was interrupted by a sharp beep.

The program he had been running had terminated. It was most unexpected. He had been considering catching some quick sleep before the program made up its mind. That was pretty quick, he thought. On a 1.5 GHz processor, intelligence simulations took a lot more time than that. But that was only when they ran to completion. Sam froze as he read the message on the console.

Self Terminated.

When the initial frantic search for a quick explanation turned up nothing, the inherent intractability of the program suddenly looked menacing. Incremental development was aimed at avoiding exactly these impasses. He had carefully run tests on individual components and integrated them to build his machine mind. And now, when it had finally started talking to The Model, it had self-terminated.

Sam checked the status of The Model. It was up and running. Not surprisingly. The Model had been backed unanimously by the AI community and put together by the best of the lot. Sam's was only one of many programs whose creators had eagerly waited for The Model to be made accessible to researchers all over the globe. The Model was the result of advances in Natural Language Processing techniques. The Model, as it was simply named, was just that. A Model. A Knowledge Model. Almost all AI researchers believed AI was about modeling and using knowledge, and The Model was a fruit of that belief.

The Model was simply all the knowledge on the internet, represented in a form understandable by machines. Every book in electronic form, every research paper, every story, every poem...in fact, every piece of text on the internet had been processed and interpreted to make The Model. What a human could read on the internet, a machine that knew the freshly standardized Universal Knowledge Representation could read from The Model. The Model was conceived by a small group of AI researchers who were always in short supply of knowledge resources to 'train' their dream programs. Programs that mimicked the human mind. The Model was intended to be a consistent system that represented 'facts' which could be used to 'educate' the fake minds on the anvil. Ironically though, once processed knowledge from the internet had started being integrated into it, The Model, far from being consistent, grew to reflect the welter of ideas of the multitude. The Model, as an article in New Scientist put it, had in it both Ayn Rand and Karl Marx.

So what was once an AI researcher's dream knowledge base, a consistent set of truths, was now reduced to a collection of translations of pieces of information for the machines to read. Sense was yet to be made of The Model. Sam's program, the few who had heard of it thought, was an attempt to do that.

Raindrops continued to hit the window pane. Sam was staring at them impassively now, no longer following any. There was only one way of investigating the problem now: the Log Interpreter. Sam's program was intractable, to be sure, but it recorded the turns it took, the choices it made and the truths it encountered. Unlike a conventional program, though, the log the program produced had to be interpreted to make sense of the choices the program made. One of the reasons was that the program, like The Model, worked only with numbers, and the log it wrote had nothing more than numbers either. The Log Interpreter could, however, decipher them for Sam, given a sufficient amount of time. Sam started the interpreter. Estimated Time: 3.16 minutes, the interpreter said. The rain had grown lighter. Sam didn't notice. He was looking beyond. Far to the west he could see the university in silhouette. Only a few hours before, in one of its corridors, he had disclosed to Ted what this program, on which an autopsy was now being performed, had been intended to do.

"So this apparently intelligent program of yours, you say, simulates sentience. Now, this feels like someone telling me he's built a car in his back yard that breaks the speed of light." Ted had been reading a lot of physics lately. "We are talking about a Turing machine here Sam. Something that doesn't even know when some one gets it to chase its tail. Sentience is a totally different ball game"
"Precisely. Sentience is a totally different ball game. The Halting problem has nothing to do with it"
"So you have this piece of code in here that, apart form other things, refers to The Model and apparently is the closest man has ever got to mimicking the human mind."
"Yes"
"You are talking of instructions in binary that are picked up as if by a conveyor belt by the fetch-decode-execute cycle of a microprocessor made of semiconductors. An identity is, I suppose, a prerequisite for sentience."
"You are a bunch of chemicals. You do have an identity"
"But I know that I exist. I know that I'm a bunch of chemicals. Does your program know that? Does it know that it is a finite stream of bits that runs on a microprocessor?"
"There is a place for itself in its knowledge model. It perceives itself. Not as a stream of bits but as an entity. As a seeker of knowledge. You do not think of yourself as a bunch of chemicals do you? You were told that you were one. So why does the program have to know that it is a stream of bits? In any case, it only has to read a book on computers to figure out what it is made up of."
"You know what Gödel said. You simply cannot make a machine that knows it all"
"This doesn't know it all"
"But Sam, sentience...a machine simply cannot get self aware. When an instruction is being executed, it simply doesn't know that it is being executed"
"When a neuron is fired in your brain does the neuron know that it is being fired? It doesn't because it is simply not its business to know that. Consciousness is not manifested at that level Ted."
"Let us get down to the specifics. What does your program really do?"
"Well...it tries to make sense of The Model."
"Isn't that what half a dozen people are going to try tonight when The Model goes online? All that you are doing is running an algorithm that extracts consistent sets of truth from The Model."
"Ted, this program is capable of meta-thought. Apart from interpreting parts of The Model it records its actions in its very own Model. So it essentially knows what it is doing."
"But it has been told what to do. It simply doesn't have a choice. What good is your sentience for if you do not have the ability to make choices?"
"It does have. It takes its own decisions on what to do"
"And those decisions would in turn be based on some algorithm that you wrote. You have effectively told it how to make decisions."
"The decisions are based on the knowledge base. Which means that whether or not it has to read Ayn Rand it would decide based on what it read in the Bible."
"That's insane Sam. It would simply get out of control. It would go insane."
"When I said meta-thought I didn't mention the level. Ted, humans are capable of thinking about thought. Probably even thinking about thinking about thought. But that's as far as we can go. The machine has no such limitations. It can execute a meta-raised-to-the-power-n thought process and we would never be able to able to make sense of its conclusions. By the way, there is no such thing as perfect sanity."
"You mean you made an electronic version of Socrates? A machine that's a philosopher? Is that what you are trying to say? Something that thinks endlessly about thought? Something that rambles about in that space you call the thought-space?"
"Well...let me put it this way. If you can imagine thought space then I would say this program carries a map to get around"
"It all gets intractable once you aren't able to interpret its conclusions. I have no idea what to call this. You have got farther than perfect-AI. You have beaten the light barrier. Wouldn't a scaled down version of this behave exactly like a human?"
"Not quite. There's a difference that you are missing. Unlike the humans, this machine would be perfectly rational. In fact I would have to induce errors in its reasoning to make it irrational."
"Perfectly rational. A perfectly rational mind capable of infinite thought. That sounds too good to be true. There has to be a catch some where Sam."

The rain had subsided as Sam waited for the Log Interpreter to come up with an explanation. The Log Interpreter finally beeped, signaling completion.
Error 21: Could not find purpose, the console read.

Sam knew it. He was reading the suicide note of a perfectly rational being. A catch there indeed was.
