IBA Group recently asked me to interview a leading expert in artificial intelligence (AI), Adi Hazan. Adi founded Analycat over 16 years ago as a specialized AI company that builds language models capable of satisfying companies – and the regulators that watch over them.
I don’t want to simply recap the discussion here. IBA Group has clipped the interview into sections, so you can go to their YouTube channel and watch it at your leisure. However, one of the points Adi made has stuck with me and deserves some more thought.
Adi talked about the corporate Fear of Missing Out (FOMO) that affects the use of AI all over the world. Is FOMO driving a huge amount of flawed AI adoption?
He is right, and we have seen this before with other technologies. In many cases the C-suite orders a new technology to be deployed before ever exploring whether it is the right solution for a business problem. Seeing it on TV or in a news report is enough for the technology to become “essential!”
Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) are two good examples of technologies where this pattern has played out before. There are many examples of disasters with these systems, but the Hershey case study from 1999 is a classic.
Hershey wanted to boost the production of their chocolate and snacks. They bought an SAP ERP system (alongside Siebel CRM and Manugistics supply-chain software) and opted for a ‘big bang’ implementation where it would come on stream overnight, rather than being phased in gradually with modular testing. The implementation was a disaster and snarled up the supply chain. Not only did they fail to test the system before switching it on, but they chose to do it just as clients wanted to place big orders for Halloween and Christmas – the busiest time of the year.
As Adi explained to me, we see this FOMO constantly. Managers read about a new technology or see it mentioned on the news. They believe it is the solution that can save or turbocharge the business, and they rush into an investment without careful consideration. The fear of a rival getting there first is often why the process is rushed – we need to implement this tech before the competition does!
AI is the hot technology of 2023 because ChatGPT launched a year ago and caught the public imagination in a way that had never been seen before. AI researchers have been gradually improving language models for decades, but when OpenAI launched ChatGPT it captured attention on social media. It went viral.
People shared examples of what ChatGPT could do, and within five days of launch it had more than a million users. It passed 100 million regular users within two months.

This growth is astonishing – especially for an AI tool. It’s not a game or a social media platform, and yet it was being downloaded and used by tens of millions of people. Instagram took two and a half years to reach 100 million users. It took TikTok over nine months.
This media (and social media) coverage created intense pressure on managers to explain to their directors what they were doing with the technology. Imagine being a customer service manager and reading in the Washington Post that ChatGPT can respond to customer emails better than human employees – and that some companies have been firing staff and letting ChatGPT manage customer interactions.
This is the problem. AI, and specific tools like ChatGPT, offer the potential to disrupt and automate many tasks, but the broad media coverage suggests that it is a semi-magical tool that can replace skilled employees.
When the media suggests this, the FOMO it instills in company directors is real. When a boss asks why the company employs a team of people to answer emails, when AI could do it better at lower cost, a wave of fear spreads throughout the entire organization.
Adi gives good examples of why the broad media narrative is often wrong. ChatGPT often hallucinates: when the model isn’t sure of the right response, it ploughs on anyway, filling in the blanks so the answer is ‘good enough.’
The problem is, if I were a customer asking about a mortgage application or a tax refund, or asking my health insurer about a medical condition, then ‘good enough’ isn’t good enough. A human who is unsure of the correct answer is expected to check, to call a supervisor, to make sure they find the right answer. Filling in the blanks to create an answer that is probably OK is not acceptable in heavily regulated industries.
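The check-with-a-supervisor principle can be wired into an AI system directly. Below is a minimal, hypothetical sketch of the idea: route any answer the model is not confident about to a human instead of letting it fill in the blanks. Every name here (`answer_with_confidence`, the threshold, the canned answers) is an illustrative assumption, not a real API.

```python
# Hypothetical sketch: escalate low-confidence AI answers to a human
# rather than letting the model guess in a regulated setting.

ESCALATION_THRESHOLD = 0.9  # assumed cut-off; tuning is domain-specific

def answer_with_confidence(question: str) -> tuple[str, float]:
    """Stand-in for a model call returning an answer plus a confidence
    score. A real system might derive the score from token
    log-probabilities or a separate verifier model."""
    known = {
        "what is the standard vat rate?": ("20%", 0.97),  # made-up data
    }
    return known.get(question.lower(), ("I am not sure.", 0.35))

def handle(question: str) -> str:
    answer, confidence = answer_with_confidence(question)
    if confidence < ESCALATION_THRESHOLD:
        # In a regulated industry, 'probably OK' is not OK:
        # hand the query to a human supervisor instead.
        return "Escalated to a human agent."
    return answer

print(handle("What is the standard VAT rate?"))  # confident, so answered
print(handle("Can I deduct my home office?"))    # unsure, so escalated
```

The design choice mirrors Adi’s point: the value is not in making the model answer everything, but in making it know when not to answer.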
This is why Adi suggests focusing on smaller models that are highly specialized for specific areas. We don’t need to train a chatbot on the entire contents of Wikipedia if it is designed to answer questions about your tax bill. It is better to train it on every aspect of tax law and then teach it the gray areas where interpretation of the rules matters.
When Adi talked about a wave of FOMO in modern AI he identified a new wave of an old problem. Smart leaders are watching this carefully. They are exploring and testing solutions. They are figuring out what can be achieved. They are finding how the tools can make their team more productive and efficient, rather than replacing the team.
In business, FOMO can lead to disaster. It’s true that it can sometimes lead to innovation and success, but how often has it worked out for your business to invest in an enormous change of direction without ever testing the ideas first?
IBA Group has designed and deployed many AI and machine learning systems and understands the importance of pilots and tests to prove the business capabilities of these tools. Click the link for examples and more information.