foobuzz

by Valentin, January 8, 2025, in tech, random

AGI is the new AI

People struggling to find a good definition for "AGI" (Artificial General Intelligence) seem to forget that we never had a good definition for "AI" (Artificial Intelligence) to begin with. Historically, the practical definition of AI has been "anything we thought we couldn't do with a computer", such as natural language translation.

But now that AI is being integrated into every computer system and marketed as "AI", this practical definition is becoming obsolete. Consumers expect (or will expect) a computer equipped with the "AI feature" to be able to do automatic translation, so AI is no longer "something we thought we couldn't do with a computer"; it's just part of the feature set of computers.

But maybe we now have some "things we thought we couldn't do with AI"? Yes, that is the practical definition of AGI.

Does that mean such systems are actually intelligent? Of course not. "AI" has the word "intelligence" in it, and it is now clear to everybody that AI is not intelligent; there is no reason to believe that AGI will be intelligent just because it has the word "intelligence" in it too.

For example, the "benchmark" for AGI that seems to be making the most rounds in the AI buzzsphere is ARC-AGI. But one of the properties of this benchmark is that the puzzles are particularly easy for humans. Why would you use something trivial for humans as a measure of intelligence? You are simply measuring stupidity in systems that fail the benchmark, and then measuring nothing once they succeed at it (and that is ignoring the fact that the benchmark is just an arbitrary objective function to optimize for). But the point is that current AI systems fail at the benchmark, so it is good enough. As I said, AGI means "things we thought we couldn't do with AI", and nothing more.

The latest blog post from Sam Altman points to a more ambitious and interesting definition of AGI: the ability to do autonomous work in a company:

We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.

This is unfortunately very vague, as "the workforce" already makes heavy use of ChatGPT to increase productivity, and every conversation with it can be seen as invoking an "agent" to do a specific task: asking ChatGPT to write a draft of a press release instead of asking your marketing intern to do so, for example. It is therefore a question of degree: how much of the work the AI can carry out autonomously before human intervention becomes necessary. This leaves plenty of room for interpretation about what an "agent" is, and plenty of opportunity for Altman to declare "AGI is achieved" whenever something is impressive enough that we would have believed AI couldn't do it.

Which is precisely why Altman has us covered with the next iteration:

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.

What is ASI (Artificial Superintelligence)? Well, of course, it's "something we thought we couldn't do with AGI".

Maybe one day one of those A*Is will actually be intelligent. For the moment, we just have to appreciate their great usefulness as tools for our endeavors (and ignore the noise).