Understanding the Realities of Artificial Intelligence

November 21, 2023  |  Irina Kiptikova

This year, we have been seeing a lot of hype around Artificial Intelligence, or AI. New opportunities are certainly emerging, and numerous companies are looking to make serious investments in AI. However, as an IT company, we feel obliged to warn businesses against overestimating the technology at its current stage. I invited Adi Hazan, an AI thought leader, and Mark Hillary, a British technology writer and analyst, to discuss the pitfalls that companies may encounter with AI and to suggest the right approach to automation.

Mark:

Hello Adi, it’s great to have this conversation with you. Now, I know that we’re going to be talking about artificial intelligence, maybe particularly focused on generative AI. But maybe we could start by hearing a little bit about your background and expertise and why we’re talking about AI today.

Adi:

Hi Mark, good to be here. I don’t want to focus on me too much. I’m basically an expert in mathematical computations, and we’ve been doing a lot of predictive mathematical modeling since long before AI became popular. We’ve had waves of AI throughout history, but mathematical modeling, which is really just the same thing, has been growing nicely and doing some good things. Completely away from the limelight, behind most of the financial industry and most telcos, are nice mathematical models that work really well. And now there’s this new sort of PR wave of AI. We also market ourselves as an AI company. I’m not saying we’re better than anyone, but there’s a new wave of AI consciousness coming about, and so we’re trying to take the forefront on that as well.

Mark:

Yeah, it’s interesting because I was at college over 30 years ago and I studied computer science. Alan Turing and his work were part of my studies back then. He proposed his famous Turing test in a paper back in 1950. So clearly, people have been writing about this and analyzing it for a long time. Why is there a sudden wave of interest in 2023?

Adi:

There are definitely new capabilities coming about. What we’ve discovered is that the same old stuff we were doing when you and I were at university, if you give it a lot more data and a lot more computational power, yields quite good results. So all of a sudden, for the first time, you can have a non-expert log onto something, onto a generative model, type a question in plain English, and get a result that, at a minimum, seems to make sense.

How useful it is is open to debate, but for the first time, the general public had access to it. It used to be only guys like me sitting in dark rooms, and now everybody can touch it or at least everybody can know that they’re touching it. I mean, people don’t realize when you click on your cell phone, it runs statistics to see where you meant to press, how much bigger your finger is than a pixel. So these things are running all the time. But now, for the first time, we can feel like we’re accessing it, and the press are having a field day because it’s a popular topic. It brings some nice fears that they like to play on and some nice new capabilities.

Mark:

I suppose those capabilities are the positive side of this discussion, especially for business opportunities like automation, for example. I mean, if you were talking to a business executive, and they were not IT literate, you know, not the CIO, perhaps a general kind of board meeting, how would you be talking to them about the business capabilities or opportunities then?

Adi:

To be honest, I don’t think anybody is short on information about capabilities and opportunities. When I talk to people, just like now, I think it’s really important to give some information that you won’t find on your first Google search. What we need now are some cautionary tales and some very sober discussions of what the limitations are.

Process automation is another branch of this, which has been growing steadily for many years now and doing wonderful work. But again, with a limited noise factor. On the other hand, when I look at something like, let’s start with my favorite, the driverless car… Mark, are you sipping your water looking out over the driverless cars?

Mark:

Not just yet. Can’t see any at the moment.

Adi:

Because we were promised it five years ago, four years ago, three years ago, two years ago, and now it’s a year away. And you can imagine the pressure inside vehicle companies to say: “But why haven’t we got it? They’ve got it. Are we spending enough? Let’s spend more”. There’s a lot of hype about a lot of things which simply don’t exist and a lot of pressure on management to spend where nothing will ever come out. What would I say to boards? Time to start looking with a more skeptical eye.

Mark:

So you mean that quite often we see a sort of technological capability, I suppose, in theory. If we were just driving a Tesla around a fixed racetrack without too many distractions, then there would be no problem at all for it to drive itself. But put that vehicle into a real-life environment where you have human drivers mixed with automated drivers and mixed with other kinds of situations, such as lines on the road that have worn away and all these other real-life problems, then you start seeing why the regulators are not quite as gung-ho as the technology companies.

Adi:

Did you notice (you probably didn’t) that even you said “in theory, on a racetrack”? Because we’re all used to just dropping in two little words that get us out of liability: “In theory, on a racetrack. In theory, here”. The problem is that if you take it around a racetrack, nobody will film the mistakes and put them on the air. So you never get the outtakes.

There’s a documentary about Teslas in which one of the ex-technicians admitted that what they did was drive a car driverless the whole day, and every time it didn’t hit something, they kept that piece of video and cut it all together into a nice long video where nothing ever gets hit. The statistic from Cruise, whose robotaxis were just taken off the road in the US, was that it needed to call a human every four to five miles. This is not fantastic. And if you’re saying, “Well, it’s driverless, but keep your hand on the wheel”, you send a confusing message. We all want it to be true, especially the people who spent a fortune buying it. But really, it’s not even available in a Bentley. It’s not available. It doesn’t exist. An autopilot in the sky is all well and good. There are no children playing in the sky.

If I had to describe AI as a person, I would say: imagine we lock this person in a room with a set of rules. We hand in numbers that stand for Chinese words, they apply the rules, and they hand back numbers according to those words. But the person does not speak Chinese. So there is a level of processing. Certainly, there’s thinking that goes into the rules, but there is no understanding. There are only numbers that go into the CPU and numbers that come out.

Mark:

Yeah. And I suppose that’s how ChatGPT has become such a popular tool recently. It was released just about exactly one year ago and became wildly popular with the media because people found they could plan weddings, plan funerals, write speeches, and do just about anything with it. But what you’re saying is that it has no understanding of the language it’s creating. It’s merely using mathematical probability to pick the best next word in a sentence.
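A toy illustration of that idea (with a made-up corpus, nothing from any real model): next-word prediction reduces to counting which word most often follows the current one and emitting it, with no notion of what any word means.

```python
from collections import Counter

# Tiny invented corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word.
follows = {}
for word, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(word, Counter())[nxt] += 1

def next_word(word):
    # Emit the statistically most likely continuation, nothing more.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat": the most frequent follower in the counts
```

Real language models work over tokens with learned probabilities rather than raw counts, but the principle is the same: pick a likely continuation, not a meant one.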

Adi:

Keep in mind that it can’t generate words; it can only ever use words that you’ve given it. So your generated speech almost definitely belongs to someone. Whether they’re alive to claim it or not is a separate story, as is whether it’s close enough for them to put in a legal claim. Certainly on the art front and the poetry front, they’re being sued left, right, and center.

So they say, “No, we’ve put in a corpus, a body of work”. But that body of work was people. First of all, if I don’t know what the numbers I put out mean, there’s no way I can ever deviate from the rules. For a start, I can never actually be creative. Say I have three categories coming out: it’s an architectural piece of software, and I look at a picture and say, “Building brick, building concrete, or building glass”. That’s three categories. I can put out a one or a two or a three. There is absolutely no way you will ever get a situation where I look and say, “Hang on a second. You know what will look good here? A combination of brick and glass in this context will be stunning”. Because that’s not one, two, or three. You would have to give it a four that means two plus three. It does not know what it’s saying.
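A minimal sketch of that point about fixed categories (the labels and scores here are invented for illustration): a classifier that can only emit one of three predefined outputs has no way to express a combination it was never given.

```python
# Hypothetical architectural classifier with a fixed label set.
labels = ["building brick", "building concrete", "building glass"]

def classify(scores):
    # Pick the single highest-scoring category (argmax). There is no
    # output for "brick and glass", so that answer is unreachable.
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best]

print(classify([0.2, 0.1, 0.7]))  # "building glass"
```

Whatever the input, the answer is always one of the three labels; creativity outside the label set is not a matter of more data, it simply has no output to map to.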

The second thing is what happens once you translate life into numbers. Imagine driving under a bridge: it starts as a little stripe that gets bigger and bigger and bigger, and what you’re seeing then goes over the car. Unfortunately, if a truck crosses your path, it also starts as a little stripe and gets bigger and bigger and bigger. But it doesn’t go over your car; it goes straight through it. The numbers look the same. A human being will understand, “Hang on, that’s a truck”, and slam on the brakes. Vehicles don’t. Two people, unfortunately, have already died driving full speed straight under trucks. You can say it’s their fault, their hand wasn’t on the wheel; I don’t know about the liability aspect. I only know it’s sold as full self-driving, and you have a mechanism that can’t see the difference.

So you’re never safe. As for ChatGPT and all the text it puts out (sorry, Mark, I don’t want to ramble), lawyers in the US have asked it to cite cases. A colleague of mine at a university asked it for 10 books it recommended he read on a certain subject. Eight of them existed. Two of them didn’t. It would have made sense for those authors to write those books, but they hadn’t. Now, if you’re presenting work in a court of law, and half of it is fictitious, and you got it from someone who doesn’t know the difference between fiction and reality, you’d best be very, very careful.

And lastly, there’s this notion that it’s going to give us insights. It doesn’t even know what it’s looking at, and now you want it to tell you things about your business that are more insightful than the people in it? Extremely unlikely.

Mark:

You’re saying that it could never create something like the writing of Anthony Burgess or James Joyce, who essentially created new languages.

Adi:

They can also never tell a new story. Even a mediocre writer will put something personal in it. If you’re writing what I call ‘a vanilla blend’, it’s fluff. I have a friend who writes for an advertising company, and that’s what she makes. She loves that stuff: “Write fluff about a yogurt that is smooth and marvelous”. But that’s a very limited use. “Look at my balance sheet and tell me what’s wrong with my company”? God help you. Unfortunately, it’s happening more and more. The statistic is that over 90% of AI projects are failing. That is a fortune of value being destroyed. If you stick to common-sense things, RPA works really well. Analysis works really well. A lot of things work really well. But you get this feeling that your competitor’s got it and you don’t. And the truth is, no one has it.

Mark:

Isn’t that one of the key problems here that we have this sort of fear of missing out? The chief executive calls up his or her team of directors and says, “You know, I’m reading about AI every day. Time Magazine is talking about it. The New York Times is talking about it. Why are we not doing it?” Essentially, the managers are all charged with finding a way to use it.

Adi:

And the more they spend, the more they show off when they announce it. If you look at the structure of most announcements, it’s always a launch: we’re either starting or launching. When these things die, they die very quietly. Nobody advertises a failure. You’re getting information that’s selected to create hype. The person who spent $4 million on a proof of concept and got nothing is not going to make noise about it. The people who took that $4 million knowing you were never going to get anything are certainly not going to make noise about it. So what you have is this dissonance out there: information keeps coming in, and no one’s left feeling comfortable.

To anyone watching this: you have that smart assistant on your phone. How useful is it? Because I can assure you that it is the best of the best, whichever platform you have. They don’t spare a dime on it. And if you try to do anything unusual, including using an unusual accent, you get devastatingly poor results.

Mark:

Yeah. Even Satya Nadella said that Alexa, Cortana, and Siri are all as dumb as a rock. And that was a comment on where AI is today compared to even last year.

Adi:

There’s nothing to indicate it’s going to improve. What you find is that the last bit of improvement tends to cost double the processing for a tenth of the gain. So if it was 90% of the way there, to get to 91 we have to double. To get to 91.1, we have to double again. And there are a lot of applications, such as driving, where you just don’t have that kind of leeway.
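The arithmetic behind that claim can be sketched like this (the numbers are just the ones from the example above, not measurements): each doubling of compute buys a tenth of the previous accuracy gain, so cost grows exponentially while accuracy barely moves.

```python
accuracy = 90.0   # starting point from the example
gain = 1.0        # the first doubling buys one percentage point
compute = 1       # relative processing cost

for _ in range(4):
    compute *= 2      # double the processing...
    accuracy += gain  # ...for a shrinking improvement
    gain /= 10        # each step yields a tenth of the previous gain
    print(f"compute x{compute:<3} accuracy {accuracy:.4f}%")
```

After four doublings (16x the compute), accuracy has crept from 90% only to about 91.11%. That plateau is the point about driving: some applications cannot tolerate being stuck at that level at any price.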

There are also problems with us as a species. When a person runs over a child who broke loose and jumped in front of a car, it’s terribly unfortunate. When a robot does that, it’s not expected, and we don’t perceive that as acceptable.

On the other hand, you spoke of the Turing test, and all the Turing test says is that if you can talk to a computer and can’t tell that it isn’t a human being, then we’ll have achieved artificial intelligence. It’s a simple test, but I would venture to say that most people can talk to ChatGPT and not know that there’s a machine behind it.

But ask it something like: there was a riot at the university and they got locked in. Who got locked in, the students or the lecturers? Or: what’s bigger, a big match or a small car? It will say the match is bigger, because big is bigger than small. It has never seen either, and yet it’ll pass some of these tests. They keep doing little fixes whenever they hear about cases like these. That’s why I don’t discuss too many of them in public. They do these little fixes, but it’s no substitute for understanding.

On the other hand, if you have to read the same bank statement every month, you can get EasyRPA, one of the RPA tools, and it will read it for you, put it in electronic format, probably read it better than you with fewer mistakes, and allow you to take intelligent decisions.

So it’s not like nothing is happening, but what is happening is happening in the quiet. I think all this marketing and all of this noise is going to work against the industry even in the medium term. I’m already meeting sophisticated clients who say, “We’ve already paid our $4 million, fine. Do you have something that works?” Which is music to our ears. We have a less ambitious system. We make what we call weak AI, which ironically works better than strong AI because we can control it.

Mark:

But you said that lots of companies are doing launches and proofs of concept. Isn’t a proof of concept or a pilot normally a low-cost experiment to see what works, which would only then go into production if it’s proven?

Adi:

What will we announce, and what noise can we make to the board, if we take $50,000 and try something? That’s not cool. If we say we’ve invested $50 million and launched a project, our share price goes up and everything is hunky-dory. Shareholders don’t know what AI is. Some of them bought Teslas and are a little bit confused about this full self-driving with your hands on the wheel.

But in general, there are probably four to five thousand people in the world who know exactly how AI works and can control these algorithms. Chances are, you’re in a company that can’t afford them. They are few, and they are very well remunerated. They are creating systems, but it’s psychological: you build something, it works most of the time, and you say, “Release it! We’ll iron out the kinks”. Except those kinks are never going anywhere, and then it’s very difficult to backtrack. It’s difficult to backtrack in public, with your career, with a lot of things.

Mark:

Yeah, and I suppose this is one of the problems then with a process like driving, as you’re saying. There is a lot more to driving than just the technical capability of being able to steer a car down a road. Even to the point where you might pick up your new car from Chevrolet or Ford and they say to you, “What setting do you want inside the vehicle? Would you like it to protect the passenger in the case of an accident, or should it protect pedestrians instead?” So you’re almost being asked to create psychological settings for the vehicle.

Adi:

Human intelligence, our understanding of the human mind, is roughly where science was during the Middle Ages. We kind of know that some things work. There’s no Periodic Table of Consciousness. We know some things work some of the time. So what they’re really asking you is, “Do you want this 80% of the time, or that 80% of the time?” When it comes to human beings, they don’t drive perfectly. But a human is not an engineered solution: there’s no sensor that should have known there’s a kid who might jump out. And when there is a sensor, we don’t want to accept it failing.

So I would say that a lot of what we’re now failing at with AI would have been perfectly acceptable 20 years ago, except that people are promising us what isn’t there. And instead of focusing on what is there and carries no risk, there’s a big ego boost in saying, “We’re going to lead the pack”. And, you know, they call it the second-starter advantage, not the first-starter. The first starter has an accident; the second starter knows to slow down on that curve.

Mark:

Like Uber. When Uber was testing their autonomous vehicles, I think it was after one of them killed a pedestrian that they sold the business and just said, “We’re not going to carry on with this”.

Adi:

Now, that’s clever, because they got money when they sold it. Good for shareholders. Let me tell you one more thing. If you fire 50,000 Uber drivers in London, you’re not just going to have a technical problem; you’re going to have a backlash. Look at a stop sign: a stop sign has a shape. If you cover it with a couple of stickers, the wrong numbers reach the processor. It doesn’t see a stop sign. It doesn’t see anything, and it keeps going. So there are ways to sabotage AI that are not being discussed. It’s so hard to make it work that nobody sits down and asks, “Okay, is it clever enough to withstand an attack?”

The best example I can give you: one of the insurers used AI to look at a picture of a damaged car and tell you what percentage of the car’s value the claim should be. And it worked really well. Fairly soon, certain panel beaters, you won’t believe it, worked out that if they put a car with a broken headlight in front and, just in the background, a complete write-off, a totally wrecked car, the AI couldn’t tell the difference between foreground and background. And for a broken headlight, it would award you the value of the whole car. It sounds so trivial, but once it gets the numbers from a shattered car, it pays out accordingly.

Click to read Part II
