S.6 Agent-based models
Helpful prior learning and learning objectives
Helpful prior learning:
Section 1.1.1 The economy and you, which explains what an economy is and how it is relevant to students’ lives
Section 1.1.2 The embedded economy, which explains the relationship between the economy, society, and Earth’s systems
Section S.1 What are systems?, which explains what a system is, the importance of system boundaries, the difference between open and closed systems, and the importance of systems thinking
Section S.2 Systems thinking patterns, which outlines the core components of systems thinking: distinctions (thing/other), systems (part/whole), relationships (action/reaction), and perspectives (point/view)
Section S.3 Systems diagrams and models, which explains the systems thinking embedded in some familiar information tools, as well as the symbols used to represent parts/wholes, relationships and perspectives
Learning objectives:
Explain, with examples, how rule-based relationships between system parts (agents) result in emergent behaviour
Explain the significance of agent-based approaches for understanding how social change can occur
Have you ever seen a murmuration of starlings? Thousands of birds swirl through the sky, forming constantly shifting shapes—sometimes expanding, sometimes folding in on themselves, sometimes suddenly changing direction. It looks almost like a single giant creature moving with one mind. The video below shows this behaviour.
Note: there are a few still frames at the start of the video before it really gets going, so stick with it.
Scientists initially thought these groups must have strong leaders, but it turns out that these flocks organise themselves according to a few simple rules:
Stay close to nearby birds, but don’t collide.
Match the speed and direction of your neighbours.
Move away quickly if a predator appears.
By following these rules, the birds find safety in numbers, and their movements form intricate patterns that look like a dance. This is an example of emergent behaviour: the patterns arise from the simple rules, even though the rules themselves do not dictate the specific shapes we see.
How do agent-based models help us understand this emergent behaviour?
Agent-based models (ABMs) help us study systems like murmurations by focusing on how individual parts of the system, called agents, interact. They differ from other ways of modelling systems in an important way.
Stock-and-flow models describe how accumulations of things (stocks) change over time due to inflows and outflows (Section S.4). Causal loop models focus on how different parts of a system influence one another through reinforcing feedback or balancing feedback (Section S.5). They help explain cause-and-effect relationships, such as how rising temperatures cause polar ice to melt, which exposes darker open water and land that absorb more heat, further increasing temperatures.
ABMs differ from both of these approaches because they focus on the rules rather than flows or feedback loops. In an agent-based model, the relationships between system parts are defined by the rules that agents follow when they interact. The patterns we see in the system emerge from these local rule-based interactions. In a murmuration, the agents are the birds, and their rules shape their movement in response to nearby birds and predators. No single bird plans the shape of the flock—it emerges naturally from their interactions.
Figure 1. The interaction of agents following relationship rules brings about emergent behaviour in a system.
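If you like to code, you can watch these rules at work in a minimal Python sketch. To keep it short, the sky is one-dimensional, the bird count and rule weights are invented for illustration, and the predator rule is left out; it is a sketch of the idea, not a full flocking model.

```python
import random

# A sketch of the murmuration rules on a one-dimensional line of sky.
# Bird count, neighbourhood radius, and rule weights are illustrative
# assumptions; the predator rule is omitted to keep the sketch short.

NUM_BIRDS = 30
STEPS = 20

birds = [{"pos": random.uniform(0, 100), "vel": random.uniform(-1, 1)}
         for _ in range(NUM_BIRDS)]

def neighbours(bird, flock, radius=10):
    """Other birds within `radius`: this bird's local neighbourhood."""
    return [b for b in flock
            if b is not bird and abs(b["pos"] - bird["pos"]) < radius]

for step in range(STEPS):
    for bird in birds:
        near = neighbours(bird, birds)
        if not near:
            continue
        avg_pos = sum(b["pos"] for b in near) / len(near)
        avg_vel = sum(b["vel"] for b in near) / len(near)
        # Rule: stay close to nearby birds...
        bird["vel"] += 0.05 * (avg_pos - bird["pos"])
        # ...but don't collide.
        closest = min(near, key=lambda b: abs(b["pos"] - bird["pos"]))
        if abs(closest["pos"] - bird["pos"]) < 2:
            bird["vel"] -= 0.1 * (closest["pos"] - bird["pos"])
        # Rule: match the speed and direction of your neighbours.
        bird["vel"] += 0.05 * (avg_vel - bird["vel"])
    for bird in birds:
        bird["pos"] += bird["vel"]

# No bird plans the flock's shape, yet the group tightens and moves
# together: emergent cohesion from local rules.
positions = [b["pos"] for b in birds]
print(f"Flock spread after {STEPS} steps: {max(positions) - min(positions):.1f}")
```

Notice that nothing in the code describes the flock as a whole; only local rules are written down, and the group-level pattern emerges when the loop runs.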
What are some examples of agent-based system behaviour?
ABMs help explain many different self-organising systems in human and non-human groups. In each of these cases, the system’s behaviour is shaped by the rules that determine how agents interact.
Figure 2. A school of fish
(Credit: adiprayogo liemena, Pexels license)
Schools of fish
Like the starlings in the video at the start of this section, fish swim in coordinated groups by following simple rules: stay close to your neighbours, move in the same direction, and avoid predators. No single fish controls the group, but their interactions create emergent patterns that help them survive.
Traffic flow in a city
Each driver follows basic rules—stop at red lights, keep a safe distance, follow speed limits. But together, these individual decisions shape traffic patterns. If too many cars enter a road at once, a traffic jam emerges, even though no driver intended to create it.
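A minimal sketch in the spirit of the Nagel–Schreckenberg traffic model (a well-known cellular-automaton model of road traffic) shows how a jam can emerge from these driving rules. The road length, car count, and braking probability below are illustrative assumptions.

```python
import random

# A toy traffic model: cars on a circular road follow three simple
# rules each step. All numbers are illustrative assumptions.

ROAD_LENGTH = 50      # cells on a circular road
NUM_CARS = 20
MAX_SPEED = 5
SLOWDOWN_PROB = 0.3   # chance a driver brakes for no particular reason

positions = sorted(random.sample(range(ROAD_LENGTH), NUM_CARS))
speeds = [0] * NUM_CARS

for step in range(50):
    new_positions = []
    for i, pos in enumerate(positions):
        # Distance to the car ahead (the road wraps around).
        gap = (positions[(i + 1) % NUM_CARS] - pos - 1) % ROAD_LENGTH
        speed = min(speeds[i] + 1, MAX_SPEED)   # rule: speed up if you can
        speed = min(speed, gap)                 # rule: keep a safe distance
        if speed > 0 and random.random() < SLOWDOWN_PROB:
            speed -= 1                          # rule: occasional braking
        speeds[i] = speed
        new_positions.append((pos + speed) % ROAD_LENGTH)
    positions = new_positions

# Jams show up as clusters of stopped cars that no driver intended.
print("Cars stopped at the end:", speeds.count(0))
```

Even though every driver only reacts to the car directly ahead, stop-and-go waves form and travel backwards along the road: a traffic jam with no single cause.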
Figure 3. Stocks and other assets are often traded on the basis of algorithms, which are simply sets of rules.
(Credit: Alesia Kozik, Pexels license)
Algorithmic trading in financial markets
Many stock market transactions are made by algorithms that follow pre-set rules. A single algorithm might be programmed to sell stocks when prices drop slightly or to buy when they start to rise. When many algorithms operate at the same time, their interactions can create market trends—or even flash crashes, where stocks lose much of their value in seconds. No individual trader intends for this to happen, but because the rules shape the relationships between trading agents, these sudden swings emerge from the system itself.
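The sketch below illustrates this cascade with invented numbers. Each algorithm follows one rule: sell if the price dips a little below its recent peak. Because every sale pushes the price down further, a small dip can trigger a crash that no single algorithm intended. It is a toy model, not a real trading system.

```python
import random

# A toy flash-crash model. The price dynamics, thresholds, and
# price impact of a sale are all illustrative assumptions.

NUM_ALGOS = 100
price = 100.0
history = [price]

# Each algorithm sells if the price falls a given fraction below the
# recent peak; the thresholds differ slightly between algorithms.
sell_thresholds = [random.uniform(0.01, 0.05) for _ in range(NUM_ALGOS)]
has_sold = [False] * NUM_ALGOS

peak = price
for step in range(100):
    price += random.uniform(-0.5, 0.5)          # ordinary small fluctuations
    peak = max(peak, price)
    sellers = 0
    for i in range(NUM_ALGOS):
        drop = (peak - price) / peak
        if not has_sold[i] and drop > sell_thresholds[i]:
            has_sold[i] = True                   # rule: sell on a small dip
            sellers += 1
    price -= 0.2 * sellers                       # each sale pushes the price down
    history.append(price)

print(f"Start: {history[0]:.1f}  End: {history[-1]:.1f}  "
      f"Lowest: {min(history):.1f}")
```

Run it a few times: often the price just drifts, but once a few algorithms sell, their sales push the price past other algorithms’ thresholds and the crash feeds on itself.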
Figure 4. Messages spread virally when people follow their own rules about when to share.
(Credit: Cottonbro studio, Pexels license)
The spread of information on social media
A message spreads as each person decides whether to share it, based on their own rules, such as ‘share if it’s interesting’ or ‘share if a friend shared it’. Some messages go viral, not because anyone planned it, but because of the way the rules of sharing interact.
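Here is a minimal sketch of the ‘share if a friend shared it’ rule on an invented random friendship network; the population size, number of friends, and sharing probability are illustrative assumptions.

```python
import random

# A toy model of rule-based sharing on a random friendship network.
# All numbers are illustrative assumptions.

NUM_PEOPLE = 500
FRIENDS_EACH = 5
SHARE_PROB = 0.3   # rule: share if a friend shared it (sometimes)

# Build a random friendship network.
friends = {p: random.sample(range(NUM_PEOPLE), FRIENDS_EACH)
           for p in range(NUM_PEOPLE)}

shared = {0}       # one person finds the message interesting and posts it
frontier = {0}
while frontier:
    next_frontier = set()
    for person in frontier:
        for friend in friends[person]:
            # Each person applies their own rule when exposed.
            if friend not in shared and random.random() < SHARE_PROB:
                shared.add(friend)
                next_frontier.add(friend)
    frontier = next_frontier

# Rerun this a few times: the same rules sometimes fizzle out and
# sometimes go viral. No one plans either outcome.
print(f"{len(shared)} of {NUM_PEOPLE} people shared the message")
```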
Figure 5. Ants follow simple rules that bring about efficient food gathering.
(Credit: Onur Yüksel, Pexels license)
The behaviour of ant colonies
No single ant organises the colony, but simple rules guide their actions. An ant finding food releases a scent trail, and other ants follow it. The colony’s ability to gather food efficiently emerges from these rules.
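The sketch below captures the scent-trail rule with invented numbers: two paths lead to food, ants choose a path in proportion to its scent, and shorter trips lay scent more often, so the colony converges on the short path with no leader directing it.

```python
import random

# A toy model of pheromone-trail foraging. Two paths lead to food;
# one is shorter. Scent levels, trip times, and the evaporation rate
# are illustrative assumptions.

pheromone = {"short": 1.0, "long": 1.0}  # start with equal scent
trip_time = {"short": 2, "long": 4}      # short path returns scent sooner
EVAPORATION = 0.95

for step in range(200):
    # Rule: an ant chooses a path in proportion to its scent.
    total = pheromone["short"] + pheromone["long"]
    path = "short" if random.random() < pheromone["short"] / total else "long"
    # Rule: an ant that finds food lays scent; shorter trips lay scent
    # more often, so the short path is reinforced faster.
    pheromone[path] += 1.0 / trip_time[path]
    # Scent evaporates over time, so old trails fade.
    for p in pheromone:
        pheromone[p] *= EVAPORATION

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"Share of scent on the short path: {share:.0%}")
```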
What insights can we gain from agent-based models?
ABMs help scientists and policymakers understand real-world problems. In epidemiology, agent-based models show how diseases spread by simulating people’s movements and interactions. In urban planning, cities test traffic policies by modelling how drivers respond to new rules. In environmental science, ABMs show how deforestation spreads as farmers and loggers make land-use decisions. These simulations, run with the aid of computer programs, help improve policies by revealing which emergent behaviours are possible and how likely they are.
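As one illustration, here is a minimal agent-based sketch of disease spread. The population size, contact rate, infection probability, and recovery time are invented for demonstration, not drawn from any real epidemic.

```python
import random

# A toy agent-based epidemic. Each infected person meets a few random
# others per day and may pass on the disease. All numbers are
# illustrative assumptions.

POPULATION = 1000
CONTACTS_PER_DAY = 8
INFECTION_PROB = 0.05   # chance that one contact passes on the disease
RECOVERY_DAYS = 7

# 0 = susceptible, 1 = infected, 2 = recovered (and immune)
state = [0] * POPULATION
days_sick = [0] * POPULATION
state[0] = 1            # one infected person starts the outbreak

for day in range(120):
    infected = [p for p in range(POPULATION) if state[p] == 1]
    for p in infected:
        # Rule: each infected person meets a few random others.
        for other in random.sample(range(POPULATION), CONTACTS_PER_DAY):
            if state[other] == 0 and random.random() < INFECTION_PROB:
                state[other] = 1
        days_sick[p] += 1
        if days_sick[p] >= RECOVERY_DAYS:
            state[p] = 2   # rule: recover after about a week

print(f"Recovered after the outbreak: {state.count(2)} of {POPULATION}")
```

Changing one rule, such as lowering CONTACTS_PER_DAY to simulate social distancing, changes the emergent outcome for the whole population, which is exactly how such models inform policy.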
Beyond specific uses, ABMs show how complex behaviours emerge from simple rules. They challenge the idea that systems need a central planner to function. Many systems—whether ecosystems, economies, or social networks—are shaped by local interactions. A small rule change can shift the whole system. For example, if just a few birds in a murmuration react differently to a predator, the whole flock may turn. The same is true for social media—one change in how posts spread can make a message go viral or disappear.
Human cooperation
ABMs also help us understand human cooperation. Classic economic models assume people act selfishly, always seeking personal gain. But people often contribute to shared efforts even when they get no personal benefit. They also punish free riders, people who take advantage of group cooperation without contributing to it, even at a cost to themselves, in order to protect fairness. This behaviour is called strong reciprocity: social rules shape cooperation, influencing how effectively people share resources and work together.
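A minimal sketch of a public goods game with costly punishment, one classic setting where strong reciprocity is studied, shows how punishment can sustain cooperation. The group size, payoffs, and probabilities below are illustrative assumptions.

```python
import random

# A toy public goods game with costly punishment. All payoffs and
# probabilities are illustrative assumptions.

NUM_PLAYERS = 10
ROUNDS = 20
ENDOWMENT = 10          # what each player starts each round with
MULTIPLIER = 1.6        # the shared pot grows before being divided
PUNISH_COST = 1         # punishing a free rider costs the punisher...
PUNISH_FINE = 3         # ...and costs the free rider even more

# Each player starts with some willingness to contribute.
contribute_prob = [random.uniform(0.3, 0.9) for _ in range(NUM_PLAYERS)]
total_payoff = [0.0] * NUM_PLAYERS

for round_ in range(ROUNDS):
    contributes = [random.random() < p for p in contribute_prob]
    pot = MULTIPLIER * ENDOWMENT * sum(contributes)
    for i, c in enumerate(contributes):
        # Everyone shares the pot; free riders also keep their endowment.
        total_payoff[i] += pot / NUM_PLAYERS + (0 if c else ENDOWMENT)
    # Rule: contributors punish free riders, even at a cost to themselves.
    punishers = [i for i, c in enumerate(contributes) if c]
    for i, c in enumerate(contributes):
        if not c:
            total_payoff[i] -= PUNISH_FINE * len(punishers)
            for j in punishers:
                total_payoff[j] -= PUNISH_COST
            # Rule: being punished makes free riding less attractive.
            contribute_prob[i] = min(1.0, contribute_prob[i] + 0.05)

print(f"Average willingness to contribute after {ROUNDS} rounds: "
      f"{sum(contribute_prob) / NUM_PLAYERS:.0%}")
```

In this sketch, cooperation rises over time because the punishment rule makes free riding costly, even though punishing is costly for the punishers too.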
ABMs reveal how the rules communities create and follow shape the systems they live in. Understanding these patterns helps us design regenerative economies that strengthen cooperation, sustain shared resources, and support both human and ecological well-being. When we recognise that rules can encourage reciprocity and trust, we can create policies and institutions that strengthen social and ecological systems.
Activity S.6
Concept: Systems
Skills: Thinking skills (transfer, critical thinking)
Time: varies, depending on the option
Type: Individual or pairs
Option 1: The stadium wave
Time: 5 minutes
Watch the short video of people in a sports stadium doing ‘the wave’.
Alone or with a partner, identify the rules that the people in the audience (agents) follow to make this wave.
Click the arrow to check your ideas:
When the person next to me stands up, I stand up too.
When that person sits down, I sit down too.
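If you like to code, the tiny Python sketch below shows how these two rules alone make a wave travel along a row of spectators; the crowd size is an illustrative assumption.

```python
# A toy stadium wave: a row of spectators, each following the two
# rules above. The crowd size is an illustrative assumption.

CROWD = 30
standing = [False] * CROWD
standing[0] = True   # one enthusiastic spectator starts the wave

for step in range(CROWD):
    print("".join("O" if s else "_" for s in standing))
    # Rule 1: stand up when the person next to you stands up.
    # Rule 2: sit down again when they sit.
    standing = [i > 0 and standing[i - 1] for i in range(CROWD)]
```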
Option 2: An agent-based approach to a current event
Time: 40 minutes, less if discussion is time-limited or kept in small groups or pairs
Taking an agent-based view can help us understand the race between companies and countries to develop Artificial General Intelligence (AGI). Click on the arrow below to reveal the article, then answer the following questions individually, in pairs, or in small groups.
Why do companies and countries feel pressured to develop AI as fast as possible, even if there are risks?
The article compares the AI race to a flock of birds, a traffic jam, and algorithmic trading. How are the decisions of companies and countries in the AI race similar to the way birds or drivers behave in these systems? What rules are the companies and countries following?
If you could change one rule in the AI race to make it safer, what would it be? How might changing this rule affect the overall system?
The AI Arms Race: Why Is Everyone Rushing to Build Smarter Machines?
(reading time 5-10 minutes)
Imagine you are in a race, but the finish line keeps moving. You don’t know exactly where it is, but you do know one thing—if you slow down, you will fall behind. That is how some experts describe the global race to build more powerful artificial intelligence (AI). Companies and governments around the world are working as fast as possible to develop AI systems, despite growing concerns that some of these technologies could be dangerous.
But if people know the risks, why don’t they just slow down? The answer lies in the way different actors—companies, governments, and research groups—are responding to one another.
Why is AI developing so fast?
AI research has been around for decades, but in recent years, progress has exploded. AI can now generate images, write text, and even pass difficult tests that humans take years to prepare for. Some experts believe that in the near future, AI might become much more powerful—potentially even as smart as or smarter than humans. This idea is called Artificial General Intelligence (AGI).
Many technology companies want to be the first to create AGI. They believe that whoever achieves this breakthrough will have a huge advantage—financially, politically, and scientifically. Governments also see AI as a tool for power, with some using it to improve national security, manage economies, and compete with other nations.
This creates intense competition. If one company or country moves forward with AI development, others feel pressure to do the same—otherwise, they might fall behind. Even if some researchers believe we should slow down and think more carefully about AI’s risks, companies feel they have no choice but to continue pushing forward.
What are the risks of racing too fast?
Some AI experts warn that moving too quickly without proper safety measures could be very dangerous. Here are a few concerns:
Job losses: AI could replace many human workers, leaving people unemployed.
Misinformation: AI-generated content could spread false information, making it hard to tell what is real.
Loss of control: If AI becomes too advanced, it might start making decisions that humans can’t fully predict or control.
Despite these risks, companies and governments continue developing AI at high speed. They argue that slowing down could mean letting competitors take the lead, making it harder to shape AI’s future.
How does this relate to agent-based models?
The AI race is a great example of a self-organising system, where the rules guiding each player’s decisions create a pattern that no one fully controls. Companies and countries aren’t necessarily choosing to develop AI recklessly—they are responding to what others are doing.
This is similar to other complex systems, like:
Flocks of birds: Birds don’t plan their flight paths, but by following simple rules (stay close, match speed, avoid predators), they create swirling formations in the sky.
Traffic jams: No single driver causes a traffic jam, but each driver’s decisions—like slowing down suddenly—can create congestion.
Algorithmic trading: Stock market crashes can happen when trading algorithms react too quickly to small price changes, triggering a chain reaction.
In each case, local interactions between individuals create larger, system-wide effects—whether it’s a murmuration of starlings, a blocked highway, or a worldwide race to develop AI.
What happens next?
Because the AI race is not centrally controlled, it is difficult to stop. Some experts argue that we need rules and agreements—like treaties between nations—to slow down and ensure AI is developed safely. Others believe that AI safety research must advance just as quickly as AI itself, to make sure we don’t create something we can’t control.
No one knows exactly what will happen, but one thing is clear: the choices of individual companies and governments are shaping the future of AI, just like the rules of individual birds shape a flock’s movement.
Checking for understanding
Further exploration
Fireflies – An interactive simulation by Nicky Case that explores synchronisation in nature and complex systems. Through a playful and engaging model of fireflies flashing in unison, it demonstrates how simple rules at the individual level can lead to emergent patterns at the system level. Difficulty level: easy.
The Evolution of Trust – An interactive game by Nicky Case that explores how trust forms and breaks in human interactions. Through simple simulations, it shows how cooperation, betrayal, and reciprocity shape relationships over time. A great way to understand how social rules influence cooperation. Difficulty level: medium.
The ultimatum game – a short video explaining the ultimatum game, a classic experiment in behavioural economics that shows complex, non-’rational’ behaviour in human beings. Teachers might like to run this experiment with students, or students could try to run the experiment themselves. Difficulty level: medium.
The Systems Thinking Playbook – A practical guide by Linda Booth Sweeney and Dennis Meadows, offering hands-on exercises to develop systems thinking skills through understanding feedback loops, delays, and interconnected systems in an engaging way. It is widely used in education, leadership training, and sustainability studies. Difficulty level: medium.
Sources
Cabrera, D., & Cabrera, L. (2018). Systems thinking made simple: New hope for solving wicked problems (2nd ed.). Odyssean Press.
Haug, M. (2024, January 29). A race to extinction: How great power competition is making artificial intelligence existentially dangerous. Harvard International Review. https://hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous/
Mulyono, Y. O., Sukhbaatar, U., & Cabrera, D. (2023). ‘Hard’ and ‘soft’ methods in complex adaptive systems (CAS): Agent-based modeling (ABM) and the agent-based approach (ABA). Journal of Systems Thinking, 1(1). https://www.scienceopen.com/hosted-document?doi=10.54120/jost.000009
Van Staveren, I. (2015). Economics after the crisis: An introduction to economics from a pluralist and global perspective. Routledge.
Terminology (in order of appearance)
Coming soon!