Bleghh... Physics is such a drag, along with websites to do and SAT crap. Then I've got to write crap to keep this lame website alive. Inuyasha!... but then I've got to write a report on the biggest Transformers geek in history. News to keep up with, free mousepads, and trolling 11-year-olds.
First off, the notion that you can give a program three laws (in English, no less) and expect it to follow them is ridiculous, especially when they're so absurdly vague. Making a bipedal robot is tough enough, but that at least is possible through a complex physics-based simulation of the environment. How do you even begin to write a program that can distinguish danger? Here, I'll feed this computer the entire catalog of the world's violence and action films to teach it what's dangerous and what's not. That's not gonna fly real well.
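To see why the vagueness matters, here's a toy sketch (all names and thresholds are hypothetical, nothing like a real robot controller): the only "laws" a program can actually enforce are concrete checks on numbers it can measure, not sentences like "a robot may not injure a human being."

```python
# Hypothetical toy controller: it can only act on conditions the
# programmer explicitly wrote down as measurable quantities.
def is_action_allowed(action, sensor_readings):
    # A hard-coded safety margin, not "comprehension of harm":
    # refuse to move the arm if anything is within 30 cm.
    if action == "move_arm" and sensor_readings["proximity_cm"] < 30:
        return False
    return True

print(is_action_allowed("move_arm", {"proximity_cm": 10}))   # False
print(is_action_allowed("move_arm", {"proximity_cm": 100}))  # True
```

Everything the "law" covers had to be spelled out in advance as a numeric check; anything the programmer didn't anticipate simply isn't a rule.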
Inb4 DOOOD ITZ JUST MOVIE STOOPID
If you look past the "random segments of code" (lol), this movie claims that program logic, given enough time, will question its own creation. Computers aren't aware of what they're doing. They follow instructions given to them by programmers, and they won't do anything you don't explicitly tell them to. Moreover, computers have been around for half a century, but you don't see self-aware robots anywhere (except, well, in other movies). The idea just doesn't hold up: from the electric charges in the semiconductors of your computer up to programs tackling the challenges of artificial intelligence, there's really no room for computer logic to somehow develop consciousness unintentionally.
Now, consider this: a hundred years from now, on a quantum computer the size of a football field (since Americans seem to like this measurement), programmers start running a new program that simulates the biological processes that occur in living creatures. With a carefully crafted robotic body, the program will simulate environments, human interaction, and even emotions by adjusting variables in a massive simulation. Like a baby's neurons, its simulated neurons will adjust to the things the simulation experiences, and slowly, through trial and error, it will learn to talk, move, and interact like a human.
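The trial-and-error idea already exists in miniature. Here's a tiny sketch (a classic single-neuron perceptron learning the OR function, nothing to do with the movie's tech): parameters get nudged after every mistake until the behavior matches the target.

```python
# One simulated "neuron" learning OR by trial and error.
w = [0.0, 0.0]   # weights, adjusted after each mistake
b = 0.0          # bias
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                   # repeated trials
    for x, target in data:
        error = target - predict(x)   # did we get it wrong?
        w[0] += 0.1 * error * x[0]    # nudge toward the right answer
        w[1] += 0.1 * error * x[1]
        b    += 0.1 * error

print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```

Scale that idea up by a few trillion and you have the blog's hypothetical, which is exactly why the next question matters: the thing that comes out is a very well-trained program, not obviously a self-aware one.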
But is this simulation really self-aware? Or are you just speaking to a very well-made program? Theoretically, it all looks possible, in the future, of course. I, Robot is set in 2035, a stupidly soon date for such an achievement. Ignoring that, a simulation of processes we've only begun to understand would be more difficult and complex than anything anyone has ever attempted. In the end, the purpose of this simulation is to simulate self-awareness. If it works, then the program did exactly what it was intended to do. Nothing special there.
Faulty code can make a program do something you didn't intend: if you screw up, the program might not do what you want. Screwups gave us potato chips, Post-it notes, and fireworks. But I've yet to see a screwup (or an alchemist) create intelligence accidentally.
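And even a screwup is still just the code doing what it says. A quick sketch with a deliberate off-by-one bug (a made-up example, obviously): the output is wrong relative to the author's intent, but it's not creative, it's exactly what was written.

```python
# Intended: compute the average. Bug: divides by one too few.
def average(values):
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)   # bug: should be len(values)

print(average([2, 4, 6]))   # intended 4.0, actually prints 6.0
```

The bug produces surprising behavior, but surprising to the programmer is not the same as intelligent.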
So Edward Elric, good luck. You’ve got quite a challenge ahead.