Movies that get AI right
When was the last time you watched an AI-based and thought, that's not how AI works? You're not alone.

Top 5 movies that get AI right

Well, it’s the weekend and we thought we’d make this a fun post. Here are the top 5 AI-based movies you can watch this weekend. We also drew a little analysis of how much of the AI element can be real or achievable. In short, of the many movies based on artificial intelligence, these five movies get it right to a certain extent.

I, Robot

Film premise: One of the executives at USR robotics corporation is murdered. Detective Del Spooner suspects one of the company’s own robots as the perpetrator. 

When it concerns artificial intelligence, I, Robot, the movie addresses the three laws prescribed by Isaac Asimov, in the most direct way. These three laws include:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

Honestly, these laws make for a great starting point for creating safe artificially intelligent machines. However, the film’s main robot character, Sonny, is depicted as having defied these laws and its programming. It goes rogue. 

If you ask an expert, they’ll tell you that this is might not happen. I, Robot, on the other hand, provides us with a reasonable explanation for the change in behavior of the machines. The film justifies this by revealing that an AI named VIKI has introduced the ‘Zeroth law’. This law states that ‘A robot may not harm humanity, or by inaction, allow humanity to come to harm.’

In the film, this law or directive is taken to an extreme when the robots decide that humans are a danger to themselves and hence must be controlled. These are unpredictable consequences that may become a reality if we do not take sufficient care when programming advanced AI.

Having said this, there is something that the film gets wrong about AI. While adding the Zeroth law or any new law that overrides the existing directive in the robot’s programming will certainly alter its behavior, it is also in violation of Asimov’s three laws of robotics. However, the film does not explain how VIKI arrives at the decision of implementing the Zeroth law in the first place. 

Robots cannot change or override their programming on their own. The idea that VIKI could somehow develop a new agenda on its own is fiction. We need to understand that the underlying goals programmed into the machine are “static.”

Colossus: The Forbin Project

Film premise: An American supercomputer is designed to prevent nuclear war. However, it teams up with its Russian counterpart. Together, they take control over most of the world’s nukes and hold humanity at ransom unless humans give up their control over society to these new silicon overlords. 

In Hollywood, in fact, pretty much everywhere, there is this misconception that a machine must possess the ability to feel or have a will of their own in order to oppose humans. We think that is not necessary and is not based on scientific reasoning. 

Additionally, whether or not the machines in the film are sentient is highly debatable and is not a criterion for the machines to go against humans. All that the machine requires is programming that contradicts its own wants. When we give machines certain goals, we need to be careful. If done incorrectly, things can get sour for the lot of us. 

If truth be told, a completely rational computer, that has intelligence far beyond human beings, might actually be able to create a more fair society for everyone. However, we couldn’t help but agree with how the film concludes where the AI says, “You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride.”

Apart from the fact that a computer, operating on punch cards, has enough and more computational power to enslave humanity, we didn’t find anything glaringly incorrect with the film. The progression of any technology occurs in incremental phases. If we create a computer that we cannot control, there is a good chance that we would have created similar computers in the past that we lost control over. Well, the point is, we will have enough and more red flags before we ever got close to building such a computer or robot.  

Bicentennial Man

Film premise: A robot butler becomes human over a period of several generations. It even replaces his mechanical pieces with lab-grown organs

Well, it’s good that we have a non-violent AI as the central character of the film. We didn’t quite enjoy the film; the story wasn’t compelling enough for us. However, it resonates with the optimistic sentiment of experts that humanity and AI will be able to coexist peacefully, alongside each other. There isn’t anything blatantly inaccurate in the film. 

However, what bothers us this – that any robot which is as advanced as Andrew would have any notion or desire to become human seems “somewhat egocentric.” There’s the issue that Andrew has almost always managed to obtain goals and wants outside of his original programming. 

Her

Film premise: A recently divorced writer installs a new sentient operating system on his computer and the two begin dating. (Who’d have thought?)

Samantha, the AI, doesn’t have a body, but she has a voice. Her, the film, portrays the risks of becoming emotionally attached to machines, without having to create a humanoid form for the AI. Imagine a dramatically advanced version of Siri.

There are pitfalls in designing humanoid AIs. There’s a good chance that people are going to be attached emotionally to these ‘machines’. Additionally, AI may have different interests than its human creators. In the film, the writer-protagonist may grow due to his relationship with Samantha, but the two were not an ideal pair.

Think about it, Samantha is free to roam the Internet and the world, carrying out hundreds of conversations at once with anyone. The writer on the other hand is confined to the physical limitations of his body and brain.

Machines don’t require to experience the world at the same pace as humans and that’s precisely what makes them the ideal candidate for performing millions of computations per second. They make awful companions, though. 

What we would have liked to see though is how Samantha works or what it means to evolve beyond the need for matter. Funnily, considering how advanced AI has become, the rest of civilization appeared to be unchanged. 

2001: A Space Odyssey 

Film premise: While investigating a strange signal emanating from a large black monolith on the moon, the crew of Discovery One discovers that their onboard AI (HAL 9000) is malfunctioning. 

We think that the film’s portrayal of AI is the most accurate of any of the movies so far. HAL seems sentient, but the astronauts don’t seem to be so sure or have no way to know. HAL seems to express fear as Dave slowly deactivates it, but the desperate pleading could be shrugged off as an attempt to carry out its mission. 

Like Colossus, HAL never strays from its original goals or programming. All of its seemingly nefarious actions are carried out simply because it believes it is the best way to complete the mission. It’s not an instinct for survival or its emotion that makes HAL a villain. It’s simply programming. The film makes it clear that consciousness is not a requirement for AI opposition. We agree! 

One of the little things that it gets wrong is this — there’s no explanation of how HAL works. But again, we have not built superior intelligence like HAL yet, so there is no way of knowing unless you’d like for us to use some vague science jargon to try and explain it. 

Subscribe to our newsletter

AI whitepaper download

Want to use AI to enhance portfolio management offerings?

Related articles

AI platform for the world’s
data-driven companies