AI in 2024 and Predictions for 2025. Part 1

December 2, 2024  |  Lee Simon Bristow

Highlights of AI Progress, Challenges, and Governance in 2024 with Lee Bristow

The rapid evolution of artificial intelligence continues to redefine technology and its applications. In this article, we bring the insights from Lee Bristow, an author and risk & governance innovator, who reflects on the significant AI breakthroughs and challenges of 2024. This is the first part of our two-part podcast series where Lee discusses everything from frontier model advancements to the growing impact of AI legislation.

Advancements in Frontier Models

Some of the big technology changes that have happened through 2024 were that the frontier models really started to improve. We also saw a divergence between closed models and open models, with Facebook and XAI coming out with their open-source models. The closed-source models really accelerated in terms of big rounds of raises.

Then, probably towards the middle of the year, it became clear that the ability for large language models to improve was starting to slow. We saw the big frontier models starting to implement a technology called inference, which essentially allowed for additional time to re-evaluate its responses. That essentially created another bottleneck around resources in terms of computation, and then we saw a huge amount of money, time, and effort being thrown into that direction. Those were the really big highlights in terms of 2024 with the frontier models.

The next big movement was around agents and the idea of creating these small agents to complete a particular task. They were essentially trained to deliver a particular outcome. We saw technologies like Crew AI really explode onto the front, and we saw technologies like LChain, although they’ve been around since 2022, really accelerate into the frontline enterprises.

Synthetic Data and Real-Time Computing

By the beginning of 2024, most of the open internet had already been scraped or ingested into the large language models. There was a look toward where else can we find new data, and the idea of synthetic data really came to the forefront. The problem is that synthetic data is essentially derivative data, so it didn’t quite work out the way everyone was hoping.

The next area from a data perspective that I think everyone is looking at would be real-time data or 3D data. That hasn’t materialized in any shape, way, or form, but the idea is that it could potentially become an area. That is essentially around spatial computing, almost allowing the models to start ingesting that data because it provides real-time tactile feedback. You can think about this in the context of Tesla, where the car actually drives on a street and picks up information in real time about people, cars, animals, etc.

You can get a sense of how much data lies within that, as opposed to what is being extracted out of our heads and onto paper or into pictures. Those are the main areas I would say in 2024 that really started pushing the envelope of where AI can go.

Key Legislative Milestones

The biggest announcement came in August of this year, with the EU AI Act coming into effect. I mean, we’ve been talking about it since 2021, and the fact that it has arrived is a pretty big thing. It is the first full, all-encompassing AI piece of legislation. I think that’s a really big change or diversion from where the industry had previously gone.

Up until then, everyone was saying we need legislation. We saw it out of the US and a couple of the big names in tech asking for a slowdown in artificial intelligence, all asking for a rethinking in terms of the potential risks around AI. That was really toward the latter part of 2023 and into the beginning of 2024. What’s interesting is that those calls have slowed a little, just in terms of the request for legislation.

That was pretty much about the same time that the EU AI Act actually came out, which was towards the end of July or the beginning of August. A lot of those advocates have rolled back, because in California they attempted to put out their first piece of legislation, and it’s gone back.

The other one would be around May of 2024, when ISO released 42001, the AI Management System. This is a standard that an organization can adopt to implement quality controls inside the organization to ensure that their AI systems, at the data layer, application layer, and in terms of UX, have the governance for designing, implementing, and securing AI technologies.

Maybe just one other thing—though we’ll get into 2025 in a bit— would be around prohibited AI practices. That is coming in early 2025. From a legislative perspective, everyone is starting to understand that this is coming down the line. What I’ve seen is that there is quite a big jump in terms of awareness around the need for ethical AI and the need for governance around AI. We’ll get into the details of what I mean by governance in a bit.

Security Challenges in the Age of Generative AI

In terms of AI security, it’s quite a new area, mainly because we’re only just starting to understand Generative AI in terms of what it can be used for and how it can be used within an organization. One of the areas you’d try and protect from an AI perspective would be around the actual data itself. Before building or training a model, you would almost want to create disruptions in the data within those particular applications.

Also, just in terms of Generative AI, typically, you are rolling that into an antic workflow, which is leveraging relatively old technology, especially when you’re looking at it from a reactive AI perspective. We’ve had reactive AI for the better part of 30-40 years now, so in terms of information security, that would be an area to attempt to address or infect.

The other one would be when you have Generative AI and it needs to contextualize information, that is coming down the line again. You could almost move or shut the AI off its guardrails by affecting the actual application layer and changing the prompts or the expected outcomes, or the guardrails that will be coming down the line. These will essentially be at the application layer.

Again, you’ve still got all the old infrastructure challenges that you typically had, and then of course, you’re going to start running into other challenges around encryption. Leveraging AI to start brute-forcing into organizations that are using potentially exposed software, using AI to generate exploits a lot faster, and using Generative AI to break down encryption layers.

Conclusions

That is the main sort of challenges that are coming down the line. It’s more around the acceleration that bad actors will have as a tool at their disposal. If you’re not using the latest and greatest, you’re not having strong patch cycles, those basic principles are going to expose you a lot faster. That’s really the key takeaway here from cybersecurity at this particular point.

If the tool sets explode like they did when we saw a lot of technology moving into an online space, we saw the exploits really come to the fore, and I think it doesn’t really change the attack vector, which is principally the data that bad actors would be going after. The prize for the data now becomes so much bigger because a lot of that data that they typically would have wanted will be a lot more concentrated, given the current methodologies that are used to string agentic workflows together, which have access to all of that data. The stakes are just higher in terms of information security. But again, because we’re only just starting to bring agentic workflows into our organization, combining reactive AI with limited memory AI, the space is still pretty much in flux.

The other area I would probably lean on is access control. Access control has always been a mainstay pillar when it comes to information security and cybersecurity because if you can essentially lock down your network, lock down access to that, it becomes much harder to break in through exploits. That is essentially maintaining its mainstay as a core practice. The biggest challenges are going to be around the speed at which bad actors will be able to exploit vulnerabilities that will be coming out into the wild.

Click to read Part 2

    Access full story Leave your corporate email to get a file.
    Yes

      Subscribe A bank transforms the way they work and reach
      Yes