Measuring AGI: A More Concrete Approach to Intelligence

Google DeepMind is attempting to bring structure to one of the vaguest concepts in technology: artificial general intelligence. Instead of relying on abstract definitions, it proposes a measurable framework built on specific cognitive traits that together define what “general intelligence” should look like.

Why AGI Is Hard to Define

AGI is often described loosely as AI that matches human intelligence across all tasks. The problem is that current systems are uneven: they excel in some areas yet fail in others, which makes it difficult to judge real progress.

DeepMind’s approach is to replace vague benchmarks with a clearer, testable model of intelligence built around core capabilities.

The 10 Traits of General Intelligence

The framework identifies ten key traits that an AI system would need to demonstrate to be considered truly general.

These include abilities such as reasoning, planning, learning from experience, adapting to new situations, and transferring knowledge across different domains. Communication, especially through language, is also central, along with the ability to understand context and make decisions under uncertainty.

Other traits focus on autonomy and goal-directed behaviour, meaning the system can act independently and pursue objectives over time rather than responding only to prompts.

Taken together, these traits aim to reflect how human intelligence works in practice rather than how AI is currently tested.
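To make the idea of a trait-based decomposition concrete, here is a minimal sketch in Python. The trait names paraphrase the article's description, and the normalised 0-to-1 scoring scale is our own illustrative convention; neither is DeepMind's official specification.

```python
# Illustrative only: these trait names paraphrase the article's summary.
# DeepMind's exact labels and scoring scales are not given here.
TRAITS = [
    "reasoning", "planning", "learning_from_experience", "adaptation",
    "knowledge_transfer", "communication", "context_understanding",
    "decision_under_uncertainty", "autonomy", "goal_directed_behaviour",
]

def trait_profile(scores: dict[str, float]) -> dict[str, float]:
    """Build a full ten-trait profile, defaulting unmeasured traits to 0.0.

    Scores are assumed normalised to [0, 1], an illustrative convention
    rather than part of the framework itself.
    """
    return {t: float(scores.get(t, 0.0)) for t in TRAITS}

profile = trait_profile({"reasoning": 0.9, "planning": 0.4})
```

Representing a system as a profile of per-trait scores, rather than a single number, is what lets researchers say where a system is strong and where it is weak.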

From Benchmarks to Real Capability

A key shift in this framework is moving away from narrow benchmarks. Traditional AI evaluation focuses on isolated tasks like solving math problems or answering questions. DeepMind argues this does not capture real intelligence.

Instead, the proposed model looks at how systems perform across multiple domains and how well they integrate different skills at once. The emphasis is on flexibility, consistency, and the ability to generalise knowledge.
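One way to see this shift is in how per-domain scores could be aggregated. The sketch below uses hypothetical numbers (none come from DeepMind): a plain average rewards isolated peaks of performance, while taking the minimum rewards the consistency across domains that the framework emphasises.

```python
def mean_score(per_domain: list[float]) -> float:
    # Averaging rewards strong peaks even when some domains are weak.
    return sum(per_domain) / len(per_domain)

def floor_score(per_domain: list[float]) -> float:
    # Taking the minimum rewards consistency: under this reading,
    # a system is only as "general" as its weakest domain.
    return min(per_domain)

scores = [0.95, 0.9, 0.2]   # hypothetical per-domain scores
mean_score(scores)          # high, despite a weak domain
floor_score(scores)         # low, exposing the weak domain
```

The choice of aggregation is a design decision, but it captures the article's point: generality is about the whole profile, not the best result.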

The Problem of “Jagged Intelligence”

Current AI systems show what DeepMind describes as uneven or jagged intelligence. They can outperform humans in complex tasks while failing at simple ones.

This inconsistency highlights the gap between today’s models and true general intelligence, reinforcing the need for broader evaluation criteria.
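Jaggedness can be made measurable as the spread of per-trait scores. A brief sketch using Python's standard library; the metric itself is our illustration, not something DeepMind prescribes:

```python
import statistics

def jaggedness(scores: list[float]) -> float:
    """Population standard deviation of per-trait scores: 0.0 for a
    perfectly even profile, larger as ability becomes more uneven."""
    return statistics.pstdev(scores)

even   = jaggedness([0.6, 0.6, 0.6, 0.6])    # flat profile
jagged = jaggedness([0.95, 0.2, 0.9, 0.1])   # superhuman and failing at once
```

Two systems with the same average score can differ sharply on such a measure, which is exactly the gap that single-number benchmarks hide.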

A Framework for Tracking Progress

The goal is not to declare when AGI has been achieved, but to track progress more scientifically. By breaking intelligence into measurable traits, researchers can identify which areas are improving and which remain weak.

This also creates a shared language for comparing systems, something the field has lacked.

Why This Matters

A clearer definition of AGI has practical implications. It can guide research priorities, improve benchmarking, and help policymakers understand what progress actually means.

It also reduces hype by grounding discussions in measurable capabilities rather than speculation.

Conclusion

DeepMind’s framework reframes AGI as a set of observable traits rather than a single milestone.

By focusing on how intelligence behaves across contexts, rather than isolated performance, it offers a more realistic way to measure progress and understand how close current systems are to true general intelligence.