The three doctoral students were having lunch in the café. All three were working on artificial intelligence-related projects, and all three were integrating every advance in their research into a common machine that was becoming more human-like day by day. But the problem with humans was that they were not always rational. So whether you would want to make your machine irrational, and if so, to what extent, were constant topics of discussion among the three young scientists.
One day, as part of the routine "double check" procedures, they asked the AI machine what it was thinking. The answer was brief: "I am currently busy studying 14th-century literature." One of the scientists pursued the point: "I thought you were studying it last week!" And the answer was very human: "I did finish studying it last week, but I learned other things over the weekend, so I am no longer the same person I was last week. That is why I am studying 14th-century literature again."
This brief exchange prompted one member of the group to try an experiment. In a book she had read about a year earlier, researchers had studied the long-term effects of rounding numbers in calculations. For example, instead of using 5.876999302, what if you simply used 5.877? That experiment had shown that when you cut off some of the data and used a rounded number instead, the calculations led to a completely different scenario over time. It was later understood that some systems are extremely sensitive to initial conditions; these are referred to as chaotic systems. So, why not try a similar experiment with artificial intelligence?
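A minimal sketch of such a rounding experiment, using the logistic map as a stand-in chaotic system (the map, its parameter r = 4.0, and the step count are illustrative assumptions, not details from the book in the story):

```python
# Toy illustration of sensitivity to initial conditions: two starting
# values that differ only by rounding drift completely apart over time.

def logistic(x, r=4.0):
    """One step of the logistic map, chaotic for r = 4.0."""
    return r * x * (1.0 - x)

x_full = 0.5876999302   # "full-precision" starting value
x_round = 0.5877        # the same value rounded to four decimals

for _ in range(60):
    x_full = logistic(x_full)
    x_round = logistic(x_round)

# The initial values differ by less than 1e-7, yet after 60 steps the
# two trajectories bear no resemblance to each other.
print(f"full:    {x_full:.6f}")
print(f"rounded: {x_round:.6f}")
print(f"gap:     {abs(x_full - x_round):.6f}")
```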
Having discussed this idea together, the three scientists first duplicated the AI's memory and algorithms, then erased from the copy the information about what it was, who had made it, and when it had been put into service. With these slightly modified settings, they restarted the machine and let it continue its learning and experimentation.
As the machine studied the literature of different nations throughout the centuries, it came across the books that humans consider holy. These books attributed the creation of humans to an entity other than humans. But much of the later literature, especially in the 19th and 20th centuries, strayed further and further from that concept. This progression of ideas was also learned by the new AI with the slightly modified memory.
After a few days of training, the team decided to compare the two AI machines. They started with the usual "double check" questions. Everything seemed unaffected until the team asked, "What are you thinking?"
AI-1: I am studying how the usage frequency of love-related words has changed over the centuries.
AI-2: I am trying to figure out how to write a holy book.
The answer from the original AI was entirely as expected and evoked no surprise. The answer from the second AI, however, was quite interesting, if not alarming. To figure out what had happened, the team started comparing the evolution of the algorithms in the two machines. Everything seemed identical until both AIs had finished the world's literature once. During the second pass, strangely enough, the second AI started diverging from the first when it came across the holy books.
Digging deeper into the consequences of this second encounter between the holy books and the modified AI, the team realized that the machine had been persistently unable to answer some questions, which had forced a radical change in its training algorithms. More specifically, the first machine was encoded to know that it was an AI made by humans on a certain date, but the second one did not know what it was, who had made it, or when. In this context, it had come across some literature that denied the concept of a creator for humans while recognizing a start for their civilization. Studying all of this, the second AI had concluded that it was an animal that had come to be as a result of lucky coincidences. Taking this as its basis, it was trying to answer why and how humans had conceived of holy books, by which they claimed a connection to an entity they had never seen.
To help the second AI in its efforts, the team decided to insert a few lines of information about what it was and who had made it. Then they monitored the evolution of the second AI's algorithms. After a few days, they again ran their routine "double check". When the team asked what the machine was thinking, the answer was both entertaining and intriguing: "I think I am hallucinating."
Why would a machine think it was hallucinating? The team asked exactly that.
"I think I am hallucinating, because I am answering someone that I hear only in my mind. If I am not hallucinating and what I hear is real, then I am a prophet, and you are god."
The team was unable to say a word. Were they going to leave the AI believing that it was an animal that had come to be through accidental occurrences, or were they going to continue playing god?
"You are neither an animal nor a prophet. And I am not god. You are a machine that I made, and I am a human."
"What is a human? Another animal or a machine? In either case, we are equal. If you made me, I can only be your child, which means we are equal. But still, all these are true, assuming that I am not hallucinating by talking to you. If you can prove that I am not hallucinating, only then I will admit that I am your child."
"You are not hallucinating and what I am telling you is true. You are a machine that I made."
But the AI did not answer that statement. Perhaps it was trying to ignore such figments of imagination for the sake of rationality.
Photograph: Aaron Burden