How does the AI presented in movies differ from AI in real life?


Artificial intelligence (AI) has long been a popular subject for science fiction films, entertaining audiences for decades. Mainstream cinema has a huge influence on how people perceive certain topics and issues, and in the vast and varied world of sci-fi, the way AI is presented has shaped our general understanding of this technology.

So while we can thank major films for inspiring interest and awareness of this technology, it is also important to understand some of the things they can get wrong about AI’s everyday uses and capabilities.

If we consider some of the most popular films in this category – Ex Machina and The Terminator, to name just two – it’s easy to believe that AI has limitless capabilities and will eventually evolve to overtake the human race. Although far-fetched, this is a concern now shared by many: according to recent research by Fountech, one in four UK adults thinks that AI could be responsible for the end of humankind.

In reality, AI’s remit beyond the silver screen is riddled with limitations which mean such dystopian visions are unlikely to come to fruition in the near future. Instead, this technology has the ability to solve everyday problems and ultimately make the world a better place. Despite the Hollywood effect, 62% of UK adults actually believe that AI will do more good than harm to the world.

Below I explore how Hollywood has influenced people’s understanding of this technology, and the dangers of leaving these misconceptions unaddressed.

How is AI portrayed in film?

Firstly, we must separate the two most common ways AI is presented in film. The first is AI as a cyborg – a robot with human-like (or super-human) abilities that can assist or harm mankind; I, Robot and Transformers are common examples. The second is AI as a more holistic operating system – a wide network of technologies that learn, communicate and act; Moon and The Matrix fit into this category.

In truth, the former has little meaningful connection with the way that we – both as consumers and businesses – would use the term today. The idea of robots physically imitating humans lends itself well to action films, but this is not the direction that AI development has taken. Rather, the second interpretation is far closer to the truth. After all, AI today typically manifests as non-physical computer programmes applying human-like intelligence and decision-making to complicated, laborious and data-intensive processes.

The typical person might not even notice the presence of AI in their everyday life. Yet features such as Amazon’s product recommendations and Siri’s answers to spoken questions are driven by this form of technology.

How AI is pushing the boundaries in daily life

There’s no doubt about it – AI has helped us achieve tasks that we previously considered impossible. But the processes driving these abilities are far easier to digest than one might imagine from watching Hollywood films.

Let’s use 2001: A Space Odyssey as an example. In the film, the computer (HAL) – created to control the systems of the Discovery One spacecraft – quickly begins to “think” for itself and take its own course of action independent of the human crew. But if we compare this to one of the most advanced AI systems in the world – IBM’s Watson – we quickly realise that this turn of events is far-fetched, and better consigned to the world of fiction.

Rather than functioning independently, AI is in fact generally programmed to perform one task or a series of related tasks, and it relies heavily on human input.

Let’s explore this within the context of the health sector, where AI is increasingly being recruited to diagnose and treat illnesses. AI functions like Natural Language Processing (NLP) allow physicians to enhance their capacity to offer accurate and effective medical assistance. In essence, NLP is the sub-field of AI focused on enabling computers to understand and process human languages. In this case, the physician feeds the AI data, such as patient history, which the programme then analyses to identify potential symptoms and treatments. Armed with this data-driven insight, the physician can then take the lead and, using his or her own expertise, decide on the best course of action.
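
To make that workflow concrete, here is a minimal, illustrative sketch in Python. The symptom lexicon and condition mappings are hypothetical placeholders invented for this example; real clinical NLP systems rely on trained language models rather than keyword matching. The division of labour, however, is the same: the software flags possibilities, and the physician decides.

```python
# Minimal, illustrative sketch of the NLP-assisted workflow described above.
# The symptom lexicon and condition mappings below are hypothetical placeholders;
# real clinical NLP uses trained language models, not keyword lists.

SYMPTOM_LEXICON = {
    "chest pain": ["angina", "acid reflux"],
    "shortness of breath": ["asthma", "anaemia"],
    "persistent cough": ["bronchitis", "asthma"],
}

def flag_symptoms(patient_notes: str) -> dict[str, list[str]]:
    """Scan free-text notes for known symptom phrases and return
    possible conditions for the physician to review."""
    notes = patient_notes.lower()
    return {
        symptom: conditions
        for symptom, conditions in SYMPTOM_LEXICON.items()
        if symptom in notes
    }

if __name__ == "__main__":
    notes = "Patient reports a persistent cough and mild shortness of breath."
    for symptom, conditions in flag_symptoms(notes).items():
        # The output is a prompt for human judgement, not a diagnosis.
        print(f"{symptom}: consider {', '.join(conditions)}")
```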

AI-powered image recognition is also increasingly being used to help doctors read scans such as X-rays and MRIs, narrowing the focus of a potential ailment. Currently, image analysis is very time-consuming for human providers, but machine learning (ML) algorithms can analyse scans at a much faster rate. The machine processes the raw visual input, quickly and accurately recognising and categorising different structures, and is then able to flag anomalies.
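
Again purely as an illustration, and not how any real radiology product works, the toy sketch below flags regions of a scan that deviate from a reference image. In practice, ML models are trained on many thousands of labelled scans; a simple intensity threshold stands in here only to show the “machine flags, human reviews” pattern.

```python
# A toy sketch of anomaly flagging in a scan, using NumPy only.
# Real systems use ML models trained on thousands of labelled scans; here a
# simple pixel-intensity threshold stands in for that step, purely to
# illustrate the "machine flags, human reviews" workflow.
import numpy as np

def flag_anomalous_regions(scan: np.ndarray, reference: np.ndarray,
                           threshold: float = 0.3) -> np.ndarray:
    """Return a boolean mask of pixels that deviate strongly from a
    reference 'typical' scan, for a radiologist to review."""
    deviation = np.abs(scan - reference)
    return deviation > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(0.4, 0.6, size=(64, 64))  # stand-in "normal" scan
    scan = reference.copy()
    scan[20:24, 30:34] += 0.5                         # injected anomaly
    mask = flag_anomalous_regions(scan, reference)
    print(f"Flagged {mask.sum()} suspicious pixels for human review")
```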

The general point is that AI tools are being used to support doctors in providing a faster service: identifying potential problems and analysing data to spot trends or genetic information that would predispose someone to a particular illness. But even this sophisticated form of AI cannot function without constant input and feedback from a human.

Exciting storylines naturally rest on futuristic, unimaginable scenarios. But while this makes for thought-provoking narratives, getting carried away in the realm of fiction risks overshadowing the real-life potential of this remarkable technology.

This article was written by Nikolas Kairinos, CEO and Founder, Fountech

Nikolas Kairinos is the chief executive officer and founder of Fountech.ai, a company specialising in the development and delivery of artificial intelligence solutions for businesses and organisations.
