Bleghh... Physics is such a drag, along with websites to build and SAT crap. Then I've got to write crap to keep this lame website alive. Inuyasha!.. but then I've got to write a report on the biggest Transformers geek in history. News to keep up with, free mousepads, and trolling 11-year-olds.
First off, the notion that you can hand a program three laws (in English, no less) and expect it to follow them is ridiculous, especially when they're so absurdly vague. Making a bipedal robot is tough enough, but at least that part is plausible through a complex physics-based simulation of the environment. How do you even begin to write a program that can distinguish danger? Here, I'll feed this computer every violent action film ever made to teach it what's dangerous and what's not. Yeah, that's not gonna fly.
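Just to show how deep that hole goes, here's a quick sketch (Python, every name made up by me) of what a program would actually need before it could even check the First Law. Each function it calls is a placeholder for an entire unsolved research field, which is kind of the point:

```python
# A deliberately naive sketch of "just give the robot the First Law."
# All function names here are hypothetical; each one hides a problem
# nobody knows how to solve.
def violates_first_law(action, world_state):
    """'A robot may not injure a human being, or, through inaction,
    allow a human being to come to harm.'"""
    for human in detect_humans(world_state):             # vision: hard
        if predicts_harm(action, human, world_state):    # physics + "harm": harder
            return True
        if predicts_harm(do_nothing(), human, world_state):  # "through inaction"?!
            return True
    return False

def detect_humans(world_state):
    raise NotImplementedError("recognize a human in an arbitrary scene")

def predicts_harm(action, human, world_state):
    raise NotImplementedError("define 'harm', then predict it")

def do_nothing():
    raise NotImplementedError("even inaction has to be modeled")
```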
Inb4 DOOOD ITZ JUST MOVIE STOOPID
If you look past the "random segments of code" (lol), this movie claims that program logic will eventually question its own creation, given enough time. Computers aren't aware of what they're doing. They follow instructions given to them by programmers, and a program won't do anything you don't explicitly tell it to. Moreover, computers have been around for half a century, and you don't see self-aware robots anywhere (except, well, in other movies). The idea just doesn't hold up: from the electric charges in the semiconductors of your computer up to programs tackling the challenges of artificial intelligence, there's no room for computer logic to somehow develop consciousness unintentionally.
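And before anyone says "but computers can be unpredictable": even randomness in a program is just more instructions. A toy example (Python, seed picked arbitrarily by me):

```python
import random

# Even "unpredictable" behavior is instructions being followed.
# Seeding Python's PRNG pins down every "random" number in advance:
random.seed(42)
print([random.randint(0, 9) for _ in range(5)])
# Same five digits on every run (on the same Python build). The
# computer isn't choosing anything; it's walking a fixed algorithm.
```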
Now, consider this: a hundred years from now, on a quantum computer the size of a football field (since Americans seem to like that measurement), programmers start running a new program that simulates the biological processes of living creatures. With a carefully crafted robotic body, the program simulates environments, human interaction, and even emotions by adjusting variables in a massive simulation. Like a baby, its simulated neurons adjust to what the simulation experiences, and slowly, through trial and error, it learns to talk, move, and interact like a human.
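Here's roughly what that "trial and error" looks like at the smallest possible scale: one fake neuron learning the AND function by nudging its weights whenever it guesses wrong. This is my own toy sketch of the classic perceptron rule, not anything from the movie:

```python
import random

# One artificial "neuron" learning AND purely by trial and error.
# Wrong answers nudge the weights toward the right answer; that's it.
random.seed(0)
w1 = random.uniform(-1, 1)
w2 = random.uniform(-1, 1)
bias = 0.0
lr = 0.1  # how hard each mistake nudges the weights

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for epoch in range(20):
    for (x1, x2), target in examples:
        out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0  # trial...
        err = target - out                              # ...and error
        w1 += lr * err * x1
        w2 += lr * err * x2
        bias += lr * err

for (x1, x2), target in examples:
    out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
    print((x1, x2), "->", out, "(want", target, ")")
```

Scale that one neuron up by a few hundred billion and run it for years, and you've got the thought experiment above, minus any guarantee that it works.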
But is this simulation really self-aware? Or are you just talking to a very well-made program? Theoretically it all looks possible, in the future, of course. I, Robot is set in 2035, a ridiculously soon date for such an achievement. Even ignoring that, simulating processes we've only begun to understand would be more difficult and complex than anything ever conceived or attempted by anyone. And in the end, the purpose of this simulation is to simulate self-awareness. If it works, then the program did exactly what it was intended to do. Nothing special there.
Faulty code leads to a different kind of error: if you screw up, the program just doesn't do what you wanted. Screwups gave us potato chips, Post-it notes, and fireworks. But I've yet to see a screwup (or an alchemist) create intelligence by accident.
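For the record, here's what a screwup actually looks like in code, via a hypothetical off-by-one bug. The program produces a wrong number, not a new ambition:

```python
# A hypothetical screwup: this loop was supposed to sum 1 through 10.
total = 0
for i in range(1, 10):  # bug: range(1, 10) stops at 9; should be range(1, 11)
    total += i
print(total)  # 45 instead of 55: a wrong answer, not a spark of consciousness
```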
So Edward Elric, good luck. You've got quite a challenge ahead.