"Your perspective is always limited by how much you know. Expand your knowledge and you will transform your mind." – Bruce Lipton
I was helping my son with a jigsaw recently. The jigsaws have grown in size as he has. Long gone are the toddler puzzles with oversized pieces you could solve with your eyes closed (luckily, because I was half-asleep in those early parenting years). This jigsaw had 750 pieces: a sky the same shade of almost-purple and trees indistinguishable from each other. I found myself regularly stuck.
Some clusters stayed undone for days. I was worried my son would abandon it, but then two awkward fragments that had resisted every attempt finally clicked together. At one point we almost ran out of room on the table. That is a lesson in itself. You can't see how things fit when your surface is too small. You start stacking pieces, losing them under the box, forgetting combinations you've already tried.
I was reminded of George Miller's famous paper showing that we can hold only about seven items in working memory, plus or minus two. That's not much of a table. Many pieces fall off the edge long before we can see how they might connect.
As John Sweller, the cognitive psychologist behind cognitive load theory, put it,
"Working memory... is limited in capacity and duration if dealing with novel information."
His cognitive load theory suggests that once the mind's table is too full, our broader understanding collapses. It isn't intelligence that fails us; it's our cognitive capacity (our table of the mind). We can't hold enough pieces in view long enough to see the picture waiting to emerge.
Pieces We Can't Hold Alone
That jigsaw moment brought to mind something Elliott Aronson shared on The Innovation Show. In the early 1970s, Aronson, one of the most influential living social psychologists, was working in newly desegregated schools in Austin, Texas. While the legal barrier had fallen, the psychological one had not. Children from different racial backgrounds were suddenly placed in the same classrooms, yet understanding didn't follow. In many cases, tensions worsened.
Aronson realised that the traditional classroom made every child a rival for the teacher's attention. Under those conditions, children saw one another as competitors.
His answer became the Jigsaw Classroom.
Aronson broke lessons into fragments and gave each student just one essential piece. No one could grasp the full topic alone. The only way to see the picture was to sit together, listen, and rely on someone else's fragment. Each child held something the others needed.
The atmosphere changed and empathy increased because the task made every child a resource for someone else. The whole picture emerged through the combination of their fragments, rather than individual effort.
The message is that no single person holds enough of the picture alone.
There was a time, of course, when one single person could hold most of the pieces: the world of Renaissance polymaths.
When One Mind Could Hold Enough
"The knowledge of all things is possible." – Leonardo da Vinci
Not to take anything away from such greats as da Vinci, but he could sketch anatomical drawings in the morning, design a flying machine in the afternoon, and paint into the night because the boundaries between fields were looser and the volume of recorded knowledge was modest enough for a single mind to wander it. Waqās Ahmed, a former guest on The Innovation Show, wrote about how Leonardo's range wasn't superhuman so much as suited to a world where knowledge was still relatively unified. Back then, specialisation was limited and people worked across a breadth of tasks. Today, widespread specialisation thwarts creativity, limiting people to their swim lanes of expertise. Polymaths flourished when patronage systems allowed them to roam outside their lanes. During the Renaissance, the table was smaller and the jigsaw pieces were fewer.
That world is gone. Samuel Arbesman captures this in The Half-Life of Facts. Knowledge no longer accumulates gently; it accelerates at an exponential rate. A fact you learned two decades ago may already be outdated. Sometimes even a fact you learned two days ago. Whole scientific domains double within a working lifetime. The puzzle has swollen far beyond the reach of any individual workspace.
Arbesman's point is echoed by others who have tried to take the long view. The great innovation thinker and architect Buckminster Fuller once estimated that all human knowledge from our earliest ancestors to the birth of Christ amounted to a single "knowledge unit," and that it took another 1,500 years to double it. After that, the doubling time kept shrinking.
The physicist and science historian John Ziman, whose work examined how knowledge systems grow, later suggested that global scientific activity doubles roughly every fifteen years, a pattern also known as Ziman's Law.
The picture keeps expanding. The pieces multiply. The puzzle has outgrown the individual table.
So, if a single person can no longer roam the full landscape, how do we continue to make sense of it?
One way is through the kind of breadth David Epstein explores in Range. His argument isn't that generalists know more; it's that they know differently. They've wandered, sampled and moved sideways. Generalists, like polymaths, carry varied fragments from unexpected domains, fragments that often sit dormant until the right problem comes along and suddenly those stray pieces form a bridge no domain specialist could see from their lane.
Generalists survive complexity not because they out-think specialists, but because they out-connect them. They have more edges to test, more varied jigsaw pieces to connect.
But even the best-connected minds face the same biological limits. Miller's seven-plus-or-minus-two still applies, as do Sweller's constraints on working memory. If anything, our capacity has weakened. Many of us now experience a kind of digital dementia: outsourcing the wrong things to machines while feeding the mind a diet of short-form fragments. All of this becomes even more challenging when set against Arbesman's observation that knowledge expands exponentially. Each year the puzzle grows faster than our ability to hold the pieces, and more of them spill off the table.
This is where technology, used in the right way, can play an outsized role.
In our recent three-part series with Manu Kapur, the learning scientist known for pioneering Productive Failure, he shares that learning deepens when we stay with a problem long enough to form structure. However, that productive struggle collapses when the table is saturated. Offloading the overflow keeps the struggle intact while removing the part that suffocates it.
AI is entering that space, or rather, enlarging it.
Don't Take The Bait: Emergence or Not?
"What we call chaos is just patterns we haven't recognized." – Chuck Palahniuk
"Very often, we can't see the larger web of connections that might make a system behave in unwanted ways." – Jamais Cascio and Bob Johansen, Navigating the Age of Chaos
In preparing for the forthcoming episode of The Innovation Show with Jamais Cascio and Bob Johansen, I came across a small story that captures this perfectly. In 2021, a drug dealer in the UK posted a photo of his hand holding a block of Stilton cheese. From that one image the police were able to extract fingerprint data and identify him. The dealer had used encrypted messaging, avoided showing his face, and even turned off metadata. None of it mattered once the image-analysis tools had enough resolution and context to see what he had assumed was harmless.
It reminded me of the jigsaw I was working on with my son: those moments when two pieces that made no sense for days suddenly snapped together because the surrounding picture had grown large enough for their relationship to become visible. The pieces were always connected; we just lacked the context.
That same dynamic underpins what is often considered "emergent" behaviour in AI.
A much-discussed early paper suggested that certain abilities appear unpredictably once a model crosses a mysterious size threshold, as if intelligence simply switches on. But more recent work, including Jin's excellent Medium essay and the analysis reported in Quanta, suggests something far more grounded.
What looks like a leap is really a capacity threshold: the moment when the model finally has enough parameters and enough varied, high-quality data to stabilise a pattern that was already there. The behaviour isn't emergent. The pattern is.
It is the pattern that was waiting to be noticed, not the AI that suddenly became clever.
And the scaling tells the story:
- GPT-2 lived in a world of 1.5 billion parameters.
- GPT-3 expanded to 175 billion.
- GPT-4 reportedly operates in the trillion-parameter range.
- GPT-5 continues that trajectory not only in scale, but in the tools and controls around it: the parameters we use to shape how it thinks and how it interacts with external systems.
Each generation also gained access to new kinds of data, often from previously absent domains. When researchers switched from all-or-nothing scoring to more sensitive measures (partial progress, incremental accuracy), those dramatic jumps flattened into smooth curves. The learning was continuous. It was our measurement that wasn't.
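To make that measurement effect concrete, here is a minimal sketch in Python. It is my own toy illustration, not code from Jin's essay or the Quanta analysis, and it assumes (arbitrarily) that per-token accuracy rises smoothly with the logarithm of the parameter count. It then scores the same model two ways: partial credit versus all-or-nothing exact match on a ten-token answer.

```python
import math

def per_token_accuracy(params: float) -> float:
    """Toy assumption: per-token accuracy rises smoothly
    (logistically) with the log of the parameter count."""
    return 1 / (1 + math.exp(-2.5 * (math.log10(params) - 10.5)))

ANSWER_LENGTH = 10  # exact match requires all ten tokens to be right

print(f"{'params':>10} | {'partial credit':>14} | {'exact match':>11}")
for params in (1.5e9, 1.75e10, 1.75e11, 1.0e12, 1.0e13):
    p = per_token_accuracy(params)   # the smooth, continuous view
    exact = p ** ANSWER_LENGTH       # the all-or-nothing view
    print(f"{params:10.1e} | {p:14.3f} | {exact:11.4f}")
```

The partial-credit column climbs steadily from roughly 0.04 to nearly 1.0, while the exact-match column sits near zero and then appears to switch on past the hundred-billion mark. Nothing in the model jumped; only the scoring did.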
From our perspective, ability appears suddenly.
From the model's perspective, nothing sudden emerged at all.
The table simply became large enough to hold more diverse pieces in parallel.
AI doesn't replace human thought.
It expands the workspace and gives us the bigger table that individuals, and even institutions, can no longer build alone.
This blog has benefited enormously from collecting a wide range of jigsaw pieces. Hosting The Innovation Show has given me access to amazing thinkers across disciplines: neuroscientists, futurists, economists, psychologists, technologists, anthropologists, historians, organisational theorists, learning scientists, and all those still to come. Each guest offers a fragment from a different corner of the puzzle, and over time those fragments start to speak to one another.
Many of the fragments in today's essay come from people who have already joined us on The Innovation Show over the last decade. Elliott Aronson's work on cooperation and human biases, Waqās Ahmed's insights into polymathy, Samuel Arbesman's understanding of how knowledge accelerates, David Epstein's exploration of breadth, and Manu Kapur's work on productive failure all sit somewhere on the table. Two recent pieces of writing also played a part: Jin's thoughtful Medium article on how model behaviour scales, and the Quanta analysis explaining why so-called "emergent abilities" in AI are better understood as capacity thresholds grounded in data richness and dimensional space.
And as I read Navigating the Age of Chaos to prepare a two-part episode with Jamais Cascio and Bob Johansen, it added further pieces to the jigsaw.