Can you trust AI? The simplest answer is yes.

Building AI you can trust

Let's address the elephant in the room: trust problems with artificial intelligence (AI). We are increasingly dependent on AI, yet the technology fails to gain our trust.

Tesla's cars running on Autopilot have a history of crashing into stopped vehicles. Amazon's facial recognition system has produced false matches, and another of its programs, an experimental hiring tool, showed a discriminatory bias against women. There is no dearth of such AI failures. I can think of two reasons for them. One, AI needs to get better, faster, at what it already does. This improvement is all but inevitable: the technology will eventually get better at its task, given enough training, and if it doesn't, the blame lies with us rather than with the technology. Two, AI needs to do something different. But what? The point is that we need to stop building AI that addresses only the first reason.

Wouldn't we rather admire a computer system that, from the moment of its assembly, innately grasps the basic concepts of time, space, and causality? Perhaps. But first, let's explore what present-day AI can do with each of these three concepts.

The AI systems we currently have know very little, or nothing, about these three fundamental concepts. What do I mean by this? Run a Google search for 'Did John Hason own a computer?' and see what you get. Technically, the answer is quite straightforward, isn't it? It only requires relating two basic facts: the time when John Hason lived and the time when computers were invented. Yet none of the first ten search results gives you an accurate answer. In fact, the results don't even address the question.
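To see how little machinery the question actually needs, here is a minimal sketch of that two-fact reasoning. The cutoff year is a rough assumption, and the dates passed in are purely illustrative, not facts about John Hason:

```python
# Hypothetical sketch: "Did X own a computer?" reduces to checking
# whether X's lifetime overlaps the era in which computers existed.

FIRST_COMPUTERS_YEAR = 1945  # rough, assumed date for early electronic computers

def could_have_owned_a_computer(birth_year: int, death_year: int) -> bool:
    # A person could have owned a computer only if they were still
    # alive once computers existed.
    return death_year >= FIRST_COMPUTERS_YEAR

# Illustrative dates only.
print(could_have_owned_a_computer(1721, 1783))  # False: died before computers
```

Two facts and one comparison settle the question, yet a search engine that merely correlates keywords cannot perform even this step.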

Talk to Books, an AI tool by Google designed to answer questions with relevant passages from a huge database of text, also fails to answer this question. The results show no meaningful connection between the question asked and the passages returned.

The situation is worse when it comes to how much AI grasps the concepts of space and causality. When a child encounters a book for the first time, she can explore it and work out that it has pages, that the pages carry characters and images, and how to open the book, flip through the pages, and close it. However, no AI today understands how the shape or form of an object relates to its function.

We need to understand that machines can identify what an object is, but not how the object's physical features relate to its causal effects. For certain AI tasks, the dominant data-correlation approach works well enough. For example, you can easily train a deep learning system to identify pictures of fox terriers and pictures of, say, John Hason, and to discriminate between the two. This is why such programs are good at automatic photo tagging.
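As a rough illustration of that correlation-based approach (a generic sketch, not any particular product), here is how one might fine-tune a pretrained network to discriminate two classes. The folder layout and class names are assumptions for the example:

```python
# Sketch: fine-tune a pretrained CNN to tell two classes apart,
# e.g. "fox terrier" vs. "John Hason". Assumes an image folder layout like
#   photos/fox_terrier/*.jpg
#   photos/john_hason/*.jpg
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("photos", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head: two classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

A classifier like this learns correlations between pixels and labels; nothing in it encodes what a fox terrier is, or that John Hason is a person.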

That said, AI doesn't have the conceptual depth to realize that, in this instance, there are many fox terriers but only one John Hason. This means a picture with a bunch of fox terriers is unremarkable, but a picture of even two John Hasons has to be 'remarkable'. The AI cannot call out the fact that a picture with two John Hasons needs closer inspection, because there cannot be two of him. This failure to comprehend is what makes general AI, and general-AI-based robots like WALL-E or Sonny from I, Robot, a fantasy. Think about it: if Sonny cannot understand the basics of how our world functions, can we trust him in our house?
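The missing layer is conceptual rather than perceptual, and it is trivial to express once you have the concept. Here is a hypothetical sketch of such a check sitting on top of a tagger's output; the label names are made up for illustration:

```python
# Hypothetical conceptual layer on top of a photo tagger: flag photos in
# which a known one-of-a-kind entity appears more than once.
from collections import Counter

UNIQUE_ENTITIES = {"john_hason"}  # entities of which only one can exist

def needs_closer_inspection(detected_labels: list) -> bool:
    counts = Counter(detected_labels)
    return any(counts[entity] > 1 for entity in UNIQUE_ENTITIES)

print(needs_closer_inspection(["fox_terrier"] * 5))           # False: unremarkable
print(needs_closer_inspection(["john_hason", "john_hason"]))  # True: remarkable
```

The hard part, of course, is not writing the check but giving the machine the background knowledge that John Hason is one of a kind. That is exactly the common sense today's systems lack.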

In essence, without comprehension of the concepts of time, space, and causality, much of common sense is not possible. This kind of knowledge doesn't have to be taught explicitly; that's why it's called common sense.

General AI is what we are dreaming of, and we know it will be years before we can make it possible. That could mean waiting just as long for AI to gain our trust, but it shouldn't have to be the case. Recognizing this gap also helps address an important question: will AI replace us or take away our jobs? That fear is a less acknowledged reason for us not trusting AI.

Considering my argument so far, you'd agree with me that there are limits to what AI can accomplish today. The AI we experience in our phones and computers is what we call narrow AI: narrow because it can do only the one kind of task it is trained on. It cannot think outside the box or summon creative ideas and solutions. It has great computational power, and if trained well, it can learn and evolve, but only within the context of what it is trained on. Hence, there is zero possibility of machines taking over our jobs.

What are your thoughts on building AI you can trust?
