A new study reveals the fundamental difference in how humans and AI learn from the unknown, and why bridging this gap is the key to our technological future.
Have you ever found yourself in a completely new situation—navigating a foreign city with an unfamiliar transit system, or picking up the rules to a complex new board game on the fly? You likely managed to figure it out. You drew on past experiences, made logical leaps, and adapted. This remarkable ability to handle the unknown is a hallmark of human intelligence. Now, consider our most advanced artificial intelligence. While AI can defeat grandmasters at chess and generate stunning works of art, it often stumbles when faced with a scenario it wasn’t explicitly trained for. Why is there such a stark difference? Why do humans adapt so gracefully while machines often fail?
A groundbreaking interdisciplinary study published in Nature Machine Intelligence dives deep into this question. A team of over 20 experts from cognitive science and AI research, including Professor Dr. Barbara Hammer and Professor Dr. Benjamin Paaßen from Bielefeld University, has pinpointed the core of the issue. It all comes down to a single, surprisingly complex concept: generalization.
The Generalization Gap: Speaking Different Languages
In essence, generalization is the ability to take what you know and apply it successfully to what you don’t. It’s how we transfer knowledge to new problems. The trouble, as the researchers discovered, is that humans and machines do this in fundamentally different ways. In fact, the very word “generalization” means one thing to a cognitive scientist studying the human mind and something entirely different to an AI developer building a machine learning model.
For cognitive scientists, generalization is about abstraction and conceptual thinking. It’s the process of forming a high-level understanding, a mental framework that can be applied flexibly. For AI researchers, the term is an umbrella for a variety of technical processes. It might refer to a model’s ability to work with data outside of its training set (“out-of-domain generalization”), a system’s capacity to follow logical rules (“rule-based inference”), or a hybrid approach that combines neural networks with symbolic logic.
“The biggest challenge is that ‘Generalization’ means completely different things for AI and humans,” explains Benjamin Paaßen. This semantic divide has created a chasm between human and artificial intelligence, limiting our ability to build AI that can truly think and adapt alongside us.
The Human Approach: Abstract and Conquer
When a human learns, we don’t just memorize data; we build concepts. Think about the concept of a “chair.” You’ve seen office chairs, dining chairs, beanbag chairs, and park benches. They look vastly different, yet you instantly recognize them all as things to sit on. You’ve abstracted the idea of a chair—a piece of furniture designed for seating—from specific examples. This abstract model allows you to identify a strange-looking three-legged stool you’ve never seen before as a chair, because it fits your conceptual framework.
This ability to think in abstractions is our superpower. It allows us to reason, make educated guesses, and navigate novelty with a level of intuition that machines currently lack. We build mental models of the world, and when we encounter something new, we try to fit it into those models or adjust the models accordingly. This is the fluid, dynamic intelligence that makes us such proficient learners.

The AI Approach: Data and Rules
Artificial intelligence, in contrast, typically generalizes through statistical patterns or rigid rules. A machine learning model trained on a million images of four-legged wooden chairs might become an expert at identifying that specific type of chair. However, it might be completely baffled by a beanbag or a modern, artistic seat that defies its learned patterns. Its “knowledge” is brittle, tied closely to the data it was fed.
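
To make this concrete, here is a deliberately simple sketch of that brittleness. It is illustrative only, not code from the study: a toy nearest-centroid classifier with made-up features is trained solely on conventional four-legged chairs (versus cushions), so it recognizes another four-legged chair but rejects a beanbag that any person would immediately accept as something to sit on.

```python
import numpy as np

# Toy feature encoding (hypothetical): [number of legs, seat height in cm,
# rigidity from 0 (soft) to 1 (rigid)]. The training data covers only a
# narrow slice of the world, mirroring the "million four-legged chairs" case.
chairs = np.array([[4, 45, 1.0], [4, 47, 1.0], [4, 44, 0.9], [4, 46, 1.0]])
cushions = np.array([[0, 10, 0.1], [0, 12, 0.2], [0, 8, 0.1], [0, 11, 0.2]])

X = np.vstack([chairs, cushions])
y = np.array([0] * len(chairs) + [1] * len(cushions))  # 0 = chair, 1 = not a chair

# Nearest-centroid classifier: the "learned pattern" is nothing more than
# the average feature vector of each class in the training data.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# In-distribution: another ordinary four-legged chair is recognised.
print(predict(np.array([4, 45, 0.95])))  # -> 0 (chair)

# Out-of-distribution: a beanbag (no legs, low, soft) is something a human
# instantly treats as seating, but it lies far from the narrow "chair"
# pattern the model learned, so it is rejected.
print(predict(np.array([0, 25, 0.2])))   # -> 1 (not a chair)
```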
Even rule-based systems have their limits. You can program an AI with the rule “if it has a flat surface and legs, it’s a table,” but that rule misses a pedestal table with no legs and wrongly fires on a flat-topped stool. The world is too complex and messy for a finite set of rules. While these methods are incredibly powerful for specific, well-defined tasks, they reveal their limitations when faced with the ambiguity and unpredictability of real life.
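
The same fragility shows up in a few lines of code. The sketch below (again a toy illustration, not the paper’s method) encodes exactly the “flat surface plus legs” rule from the previous paragraph and demonstrates both failure modes:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    has_flat_surface: bool
    leg_count: int

def is_table(item: Item) -> bool:
    # The hand-written rule: "if it has a flat surface and legs, it's a table."
    return item.has_flat_surface and item.leg_count > 0

dining_table = Item("dining table", has_flat_surface=True, leg_count=4)
pedestal_table = Item("pedestal table", has_flat_surface=True, leg_count=0)
flat_stool = Item("flat-topped stool", has_flat_surface=True, leg_count=3)

print(is_table(dining_table))    # True  -- covered by the rule
print(is_table(pedestal_table))  # False -- a real table the rule misses
print(is_table(flat_stool))      # True  -- not a table, but the rule fires anyway
```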
“If we want to integrate AI systems into everyday life, whether in medicine, transportation, or decision-making, we must understand how these systems handle the unknown,” notes Barbara Hammer. The current divergence in how humans and AI generalize is a critical barrier to creating truly collaborative and safe systems.
A Unified Path Forward
To bridge this gap, the international research team proposes a shared framework for understanding generalization across both fields. They suggest aligning the conversation along three key dimensions:
- What do we mean by generalization? Creating a shared definition and vocabulary.
- How is it achieved? Identifying and comparing the different methods used by humans and machines.
- How can it be evaluated? Developing new benchmarks that test for more flexible, human-like adaptation.
This collaborative effort is about more than just academic alignment. It’s a crucial step toward designing the next generation of AI. The research is part of a larger initiative called SAIL (Sustainable Life-Cycle of Intelligent Socio-Technical Systems), which aims to create AI that is transparent, human-centered, and sustainable. The ultimate goal is to build AI systems that can act as true partners, reflecting and supporting human values and decision-making.
By understanding the nuances of our own intelligence, we can begin to imbue machines with a more robust and flexible way of thinking. The future of AI may not lie in simply giving it more data, but in teaching it how to think conceptually, just like us. Only then can we build AI that doesn’t just follow instructions, but can truly adapt and reason in our complex world.
Reference
Hammer, B., Paaßen, B., Sanneman, L., et al. (2024). Aligning generalization between humans and machines. Nature Machine Intelligence. https://doi.org/10.1038/s42256-024-00851-y




