Artificial general intelligence (AGI) has long been described as the point at which machines reach human-level intelligence, yet the concept has remained vague and difficult to measure. Google DeepMind is attempting to bring structure to the problem by introducing a framework that breaks intelligence into distinct, measurable traits.
Rather than treating AGI as a single breakthrough moment, the framework defines it as a combination of capabilities, identifying 10 core cognitive traits that together represent what is generally considered general intelligence.
These include perception, which allows systems to interpret the world through data; attention, which determines what information is prioritised; and memory, which enables retention and recall of knowledge over time.
Learning is another central component, referring to the ability to adapt in light of new information and experience. Reasoning and problem-solving concern how systems process information, draw conclusions, and navigate unfamiliar challenges. Generation relates to producing new outputs, such as language, ideas, or creative content.
Beyond these, the framework introduces more advanced traits. Executive function refers to planning, decision-making, and the ability to manage complex tasks over time. Metacognition involves awareness of one’s own processes, essentially thinking about thinking, which is critical for self-correction and improvement. Social cognition captures the ability to understand and interact with others, including interpreting intent, emotion, and context.
The key shift in this approach is that intelligence is no longer assessed through isolated benchmarks. Traditional AI evaluation often focuses on narrow tasks such as answering questions or recognising images. While systems may perform extremely well in these areas, they can still fail in broader, more general situations. DeepMind’s framework instead evaluates performance across multiple domains, creating a more holistic view.
This also introduces the idea of uneven intelligence. Current AI systems are highly capable in specific areas but lack consistency. A model might excel at language generation but struggle with reasoning or long-term planning. By measuring across all 10 traits, the framework highlights these imbalances and sets a clearer path toward more general capability.
Another important element is the introduction of progression levels. Instead of a binary classification where a system either is or is not AGI, the framework allows for gradual measurement. Systems can be evaluated based on how closely they match human performance across traits, how reliable they are, and whether they can generalise knowledge to new situations.
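To make the idea of per-trait measurement concrete, here is a purely illustrative sketch; it is not taken from DeepMind’s framework, and the trait names, scores, and scoring rules are hypothetical. It shows how an average score can mask uneven intelligence, while a weakest-trait "bottleneck" score exposes it:

```python
# Illustrative only: trait names follow the article, but the scores and
# the scoring rules (average vs. weakest-trait bottleneck) are hypothetical.

TRAITS = [
    "perception", "attention", "memory", "learning",
    "reasoning", "problem_solving", "generation",
    "executive_function", "metacognition", "social_cognition",
]

def profile_summary(scores: dict[str, float]) -> dict[str, float]:
    """Summarise a per-trait profile (0.0 = no capability, 1.0 = human-level)."""
    missing = [t for t in TRAITS if t not in scores]
    if missing:
        raise ValueError(f"missing traits: {missing}")
    values = [scores[t] for t in TRAITS]
    return {
        "average": sum(values) / len(values),  # headline number; can hide gaps
        "bottleneck": min(values),             # generality limited by weakest trait
    }

# A system strong at generation but weak at planning over time:
example = {t: 0.9 for t in TRAITS}
example["executive_function"] = 0.2

summary = profile_summary(example)
print(summary)  # the average looks high, while the bottleneck reveals the gap
```

The design choice here mirrors the article’s point: under a bottleneck-style rule, progress toward generality is capped by the weakest trait, so scaling one strong capability does not move the overall rating.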
This structured approach also addresses the issue of hype. Claims about AGI have often been driven by marketing or isolated achievements rather than consistent, measurable progress. By grounding evaluation in cognitive science, the framework introduces a more objective standard that can be used across the industry.
The implications extend beyond research. A clearer definition of intelligence affects how systems are developed, regulated, and deployed. It influences how companies prioritise investment, how risks are assessed, and how society interprets progress in AI.
Ultimately, the framework reframes AGI as a spectrum rather than a destination. Intelligence is presented not as a single capability but as a system of interacting functions that must work together. Progress toward AGI, therefore, is not about scaling one ability, but about achieving balance, consistency, and adaptability across all of them.
This marks a shift in how the field approaches its most ambitious goal. Instead of asking when AGI will arrive, the focus moves toward how to measure it, understand it, and build it step by step.
