Opinion Formation

Paul E. Johnson
Dept. of Political Science
1541 Lilac Lane, Room 504
University of Kansas
Lawrence, Kansas
August 15, 2003

This documentation describes a public opinion simulation model developed by R. Robert Huckfeldt, Indiana University, and Paul Johnson, University of Kansas, with the assistance of Michael Craw of Indiana University.

List of Classes

Simulation Management

ModelSwarm: creates agents and tabulates information on them
ObserverSwarm: orchestrates the interaction of the graphical interface with the agents
Parameters: collects parameters for interactive adjustment at runtime and makes itself available throughout the simulation
Agent Types
Citizen: root class of all agents
Axelrod: subclass of Citizen, offers the functionality of agents described in Axelrod's original model
Coleman: subclass of Citizen, offers the functionality of agents in the model we call "Coleman"
SelectiveCitizen: subclass of Citizen, offers basic functionality that allows agents to be selective in their interactions with one another
HJCitizen: subclass of SelectiveCitizen, offers several variants on methods for selecting discussion partners and responding to them
MikePM2: A partial implementation of Mike Craw's ideas for a model of on-line information processing
Support Classes for Agents
Attribute: a record that agents use to remember qualities of others they meet
MovingAverage: a class to support averaging of experiences
Position: contains information on grid and coordinates relevant to an agent. Agents keep one of these for each grid they visit, so they can find their way

Classes that support batch mode simulations

BatchSwarm: A Non-GUI replacement for ObserverSwarm
BatchColormap: needed by BatchRaster
BatchRaster: writes ppm files to represent rasters
MyArguments: handles command line arguments
TimeStamp: A class which can get today's date and convert it into a string for record keeping

Classes for managing "space" and "position" of agents

MultiGrid2d: two dimensional grid of locations where agents can go
MultiGridCell: standard qualities of the locations within the grid where agents can go
AppSpecificCell: customized qualities of the locations where agents can go
AVLSet: a Swarm collection backed by an AVL tree (a balanced binary search tree)

About the code

OpinionFormation is a program written in Objective-C using the Swarm library of simulation tools. It explores the processes by which public opinion approaches a steady state that sustains heterogeneity across an array of issues. Individual opinions are formed through discussions between individuals. By exploring different rules for individual-level interaction, we hope to explain how heterogeneity in public opinion can be preserved.

Program management

This model follows most of the conventions of standard Swarm simulations, with the possible exception of the scheduling of actions. In particular, the classes are organized hierarchically, with the observer swarm orchestrating the interface, the model swarm managing the creation of objects and record keeping, and the agents themselves executing the substantively important actions.

A Swarm object is mainly marked by the ability to execute actions like "buildObjects", "buildActions" (for creating schedules), and "activateIn:" (which places the actions scheduled in that swarm into the overall time sequence of the simulation). There are several scheduling variants built into this code (see below).

The creation of agents is governed in the ModelSwarm.m file. There one will find the variable "modelType", a key that determines which class of agents is created.
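The dispatch on "modelType" can be pictured with a small sketch. This is Python, not the model's Objective-C, and only the mapping of modelType 0 to the Axelrod class is documented; the other numeric codes shown are hypothetical placeholders.

```python
# Illustrative sketch of ModelSwarm's agent-creation dispatch (Python,
# not the actual Objective-C).  Only modelType 0 -> Axelrod is documented
# in this text; the other codes here are hypothetical placeholders.
class Citizen: pass
class Axelrod(Citizen): pass
class Coleman(Citizen): pass
class SelectiveCitizen(Citizen): pass
class HJCitizen(SelectiveCitizen): pass

AGENT_CLASSES = {
    0: Axelrod,      # documented: modelType=0 reproduces Axelrod's model
    1: Coleman,      # hypothetical code
    2: HJCitizen,    # hypothetical code
}

def create_agents(model_type, n):
    """Create n agents of the class keyed by model_type."""
    cls = AGENT_CLASSES[model_type]
    return [cls() for _ in range(n)]
```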

Building model objects

The model consists of two major kinds of objects: citizens and grids. Citizens hold individual opinions on a range of policy issues, and they may alter their opinions by interacting with each other. Grids define how citizens are related to each other spatially and thus help to structure social interaction. This simulation uses a significantly enhanced grid class called "MultiGrid2d" which allows many citizens to exist within the same cell.

The model can allow agents to move among a large number of different environments. So far, we have given serious consideration to only two kinds of grids: homeGrids and workGrids. The model can have several homeGrids, the number determined by the parameter "numNeighborhoods", and each citizen is assigned to one. Some citizens may have all of their interactions in their home neighborhood, but some may also travel to other places, called workplaces, for discussion.

Multiple workGrids might exist as well. Generally, a workgrid is smaller than a home grid, meaning more agents might be in a cell and interaction might expose people to a greater variety of influences. Moreover, each citizen is randomly assigned to a workGrid and to a position within that workGrid. This is in contrast to the homeGrid, where the default is to evenly distribute citizens among all cells.

The position of a citizen in each grid is significant because the grids help determine who the citizen may interact with. Citizens interact with individuals within their own cells, and with individuals in neighboring cells to the north, south, east and west (NEWS neighbors). Thus, a citizen's position in each grid determines who his neighbors are, and thus whom he will interact with.

In addition, the shape of the grid is determined by a parameter that the user may set. If wrapAround is 0, then each grid is a flat plane, and hence cells on the borders do not have as many neighbors as those in the middle. If wrapAround is 1, then each grid is a toroid, and each cell has four neighbors.
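The neighbor lookup described above can be sketched as follows. This is an illustrative Python sketch, not the actual MultiGrid2d code; the function name and argument names are assumptions made for the example.

```python
# Sketch of NEWS (north/east/south/west) neighbor lookup on a grid of
# size (xsize, ysize).  With wrap_around=1 the grid is a toroid and every
# cell has exactly four neighbors; with wrap_around=0 the grid is a flat
# plane and border cells have fewer.
def news_neighbors(x, y, xsize, ysize, wrap_around):
    candidates = [(x, y - 1), (x + 1, y), (x - 1, y), (x, y + 1)]
    if wrap_around:
        # Toroid: wrap coordinates around the edges.
        return [(cx % xsize, cy % ysize) for cx, cy in candidates]
    # Flat plane: drop candidates that fall off the grid.
    return [(cx, cy) for cx, cy in candidates
            if 0 <= cx < xsize and 0 <= cy < ysize]
```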

This model uses a class called Position to collect all the information an agent needs when it has to "go home" or "go to work". The Position object has variables that record the (x,y) coordinates of the agent's location, as well as a pointer to the grid that represents the neighborhood.

Subclassing Agents through Citizen

The design of the Citizen class is intended to allow easy subclassing to implement alternative theories. Citizen's step method tells agents to interact, but it does not tell them how to find discussants or how to respond to interactions. That work is done in subclasses. See the documentation in Citizen and its subclasses SelectiveCitizen, Axelrod, Coleman, and HJCitizen for more information.

Basically, the object-oriented hierarchy is like this:

  Citizen
    Axelrod
    Coleman
    SelectiveCitizen
      HJCitizen

Each citizen interaction consists of separate stages: selecting a discussant, updating contact records, and adjusting in response to interaction. Many different theories describe how individuals handle each of these stages of argument and persuasion. Hence, the designer can choose between different models of each of these stages. By choosing different parameters, the user can select which set of assumptions s/he wishes to use for each stage of argument.

If you just want to run the original Axelrod model, choose modelType 0. Make sure you choose only one "homegrid". Choose "daylength" equal to 1 and "homeprob" equal to 1 in the graphical interface. That will make agents stay home all day. The setting modelType=0 causes all agents to be created as instances of the Axelrod class.

The Citizen class is a record-keeping and scheduling class. The Citizen is aware of the "positions" where the agent can go and it schedules movement between them and it also triggers "interactions". Subclasses must specify what happens when an interaction occurs. We have added a Swarm "map" of the contacts for each agent. This map serves no purpose in the baseline Axelrod model, except to keep track of how many other agents each one has been in contact with. Some subclasses, though, can use the list of contacts as a basis for choosing among discussants or responding to them strategically.

There are a few other customizations that can be triggered by setting variables in the Citizen class. Recall that, in Axelrod's culture model, one citizen is chosen and then that citizen may interact with someone and copy one of their opinions. In that approach, the "lucky" other agent never adjusts as a result of the experience. We have generalized that somewhat, introducing three parameters that govern persuasion processes: onlySelfAdjusts, onlyOneAdjusts, and consistentAdjustment. If the parameter onlySelfAdjusts is 1, then the original Axelrod approach is followed. If onlySelfAdjusts is 0 and onlyOneAdjusts is 1, then either citizen might execute his routine to adjust his opinion. If onlySelfAdjusts is 0 and onlyOneAdjusts is 0, then a third condition, "consistentAdjustment", takes over. If consistentAdjustment is 1, then one of the two is randomly chosen and asked to consider adjusting its opinion, and if it does not change, then the other is asked to consider changing. All of these methods assure that, if one agent changes, the two agree at the end of the interaction. Not so for the final alternative: if all three of these parameters are set to 0, then both agents are asked to respond to the interaction, and there is a chance that both change to mimic the other and walk away without realizing it.
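The control flow just described can be sketched as follows. This is an illustrative Python sketch, not the actual Objective-C; "adjust(a, b)" stands in for whatever routine a subclass uses to let agent a consider changing in response to agent b, and is assumed to return True when a changed.

```python
import random

# Sketch of the persuasion-control logic: the three flags mirror the
# parameters onlySelfAdjusts, onlyOneAdjusts, and consistentAdjustment.
# 'self_agent' is the citizen who initiated the interaction.
def run_adjustment(self_agent, other, adjust,
                   only_self_adjusts, only_one_adjusts, consistent_adjustment):
    if only_self_adjusts:                 # original Axelrod rule
        adjust(self_agent, other)
    elif only_one_adjusts:                # one randomly chosen side adjusts
        chooser, target = random.sample([self_agent, other], 2)
        adjust(chooser, target)
    elif consistent_adjustment:           # the second agent gets a turn only
        first, second = random.sample([self_agent, other], 2)
        if not adjust(first, second):     # if the first one stands pat
            adjust(second, first)
    else:                                 # both respond; both may change
        adjust(self_agent, other)
        adjust(other, self_agent)
```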

The class SelectiveCitizen is an intermediate record-keeping class that mainly serves as a holding company for several methods and variables that agents need in order to choose discussants selectively. This is the class in which detailed information about new contacts (strangers) is kept. SelectiveCitizen is a superclass of HJCitizen. In HJCitizen, the citizen is able to screen a list of discussants and select the one that appears most appealing. One critical element of the model is the assumption the citizen makes about a stranger (is a stranger more appealing than a person who is known, but disagreeable?), and that stranger record keeping is done in SelectiveCitizen.

For classes that descend from the SelectiveCitizen class, we have built in a "mix-and-match" functionality to allow models to use various parts of the Axelrod-type behavior in combination with other elements of their own models. This is done by setting flags like "axelrodSelect" or "axelrodAdjust", which cause an agent to follow Axelrod's rules in selecting discussants or responding to others, respectively. Subclasses, of course, have to check those flags when they decide what to do.

Agent movement and scheduling of actions

Swarm models proceed in discrete time steps, and use schedules to order events. Basically, a schedule is a swarm object that executes a series of calls to specified methods at specified times. Thus, schedules are the way in which a model's dynamics are defined.

An agent has to be in a "position" in order to find other agents and interact with them. The scheduling systems we have considered all are concerned with ways to allow the agents to move among situations. Movement creates a variety of interaction opportunities.

Scheduling is done on a "daily" basis. Swarm timesteps are grouped into sets of length "dayLength". At any step (think "hour" even though the default is 10 steps per day) the actions that the agent has scheduled for that day are executed by the Swarm activity library. At time 0, the agents are told to carry out their "step" method, and that method has commands that can schedule actions throughout the day.

We currently have a compiler flag "NO_MASTER_SCHEDULE". If the model is compiled with that flag, then each Citizen object creates its own schedule which is activated in the model swarm. This is completely decentralized, autonomous action, in that each agent's actions are scheduled individualistically.

Because of performance (speed) issues with decentralized schedules, we created a default simulation in which there is a single "master" schedule, created in a "holding class" called "AgentSwarm", and all agents can "hang" their planned actions on that schedule. In the first timestep of each day, the agents are told (in random order, if randomizeCitizenUpdateOrder=1) to plan their actions for the day by executing their "step" methods. They decide where to go and when to interact. If the compiler flag "CONCURRENT_GROUP_SCHEDULE" is defined, then all actions scheduled at each timestep are executed in random order. Otherwise, during a day, the ordering of the actions of the agents at each step is the same. This speeds up the simulation a good deal, and because the ordering of agent actions is randomized each day, we do not believe this is much of a sacrifice.
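The master-schedule arrangement can be pictured with a small sketch. This is illustrative Python, not Swarm's activity library; the class and method names are assumptions made for the example.

```python
import random
from collections import defaultdict

# Sketch of the master-schedule idea: at the first step of each day every
# citizen "hangs" planned actions on a shared schedule, and the schedule
# then fires them timestep by timestep.
class MasterSchedule:
    def __init__(self, day_length):
        self.day_length = day_length
        self.actions = defaultdict(list)   # timestep -> list of callables

    def schedule_at(self, step, action):
        self.actions[step].append(action)

    def run_day(self, citizens, randomize_update_order=True,
                concurrent_random=False):
        order = list(citizens)
        if randomize_update_order:         # randomizeCitizenUpdateOrder analogue
            random.shuffle(order)
        for c in order:                    # "step" methods plan the day
            c.step(self)
        for t in range(self.day_length):
            todo = self.actions.pop(t, [])
            if concurrent_random:          # CONCURRENT_GROUP_SCHEDULE analogue
                random.shuffle(todo)
            for action in todo:
                action()
```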

The model swarm keeps track of interactions by polling citizens or grid cells to measure conditions. This is done at the start of each day.

Graphics and measurement

The opinion model is concerned with measuring two things: diversity in public opinion, and the stability of public opinion over time. The model keeps track of measures of both these concepts as it runs, and records their values to an output file called DataCultureX, where X is a "timestamp". It also saves a file called ParamsX which records the settings under which the model was run, using the same timestamp. In addition, when not in batch mode it provides graphic displays of these measures as the program is running.

The ModelSwarm object has several variables to help it keep track of the model's state. It includes a "tabulator" array that records the number of citizens holding each opinion for every issue in the issue space. ModelSwarm also includes counters, one that counts the number of interactions that occur between citizens and another that counts the number of changes in opinion that have occurred. In previous versions of the model (in case you have seen that code), the Citizens would notify the model when they interacted or changed. To simplify things, now the Citizens just make personal notes of their interactions and changes, and the model swarm collects the information when it needs it. That is to say, every time an interaction occurs between two citizens, the agent who initiated the interaction sets an interactionFlag, and every time a citizen changes opinion on some issue, a change flag is set.

In addition, each citizen has three moving average objects that record experiences over his past 20 encounters (20 is the moving average width, currently implemented as a pre-processor flag). The acqMA object records acquaintance, whether or not the citizen shared at least one opinion in common with another agent. The samMA object records harmony, the percentage of issues on which the citizen agreed with a discussant when a full interaction occurred. The identicMA object records identicality, whether or not the citizen agreed on every issue with a discussant. Every citizen has these three moving average objects, and each records this information for the past 20 discussants. The ModelSwarm makes use of this information in computing model averages for acquaintance, harmony and identicality.
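A fixed-width moving average of this kind can be sketched in a few lines. This is illustrative Python, not the model's MovingAverage class; in the model the width of 20 is set by a pre-processor flag.

```python
from collections import deque

# Sketch of a fixed-width moving average like the model's MovingAverage
# objects (acqMA, samMA, identicMA).  Once 'width' values have been added,
# each new value pushes out the oldest.
class MovingAverage:
    def __init__(self, width=20):
        self.values = deque(maxlen=width)  # deque drops old entries itself

    def add(self, value):
        self.values.append(value)

    def average(self):
        if not self.values:
            return 0.0
        return sum(self.values) / len(self.values)
```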

ModelSwarm includes a schedule that instructs it to record data at the start of the "day." The data that gets written to the output file includes the following:

  1. Time period

  2. Run number

  3. Random Seed

  4. Number of interactions over the past "day"

  5. Number of changes in opinion over the past "day"

  6. For each issue, the number of people holding each opinion

  7. The average opinion on each issue

  8. Entropy on each issue

  9. Multidimensional entropy

  10. Acquaintance average

  11. Harmony average

  12. Identicality average

Entropy is a measure of the diversity of opinion on an issue. It ranges from 0 to 1, with 0 being maximum homogeneity (everyone holds the same opinion on the issue) and 1 being maximum heterogeneity (the population is equally distributed over all opinions on the issue). It is computed as Shannon's information measure: the sum over opinion categories of the percentage of people holding each opinion, weighted by the log of that percentage, with that sum divided by the log of 1 over the number of opinion categories.
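A minimal sketch of that formula, in Python rather than the model's Objective-C ("counts" is a tally of how many citizens hold each opinion on one issue):

```python
import math

# Normalized Shannon entropy: sum of p*log(p) over opinion categories,
# divided by log(1/K), where K is the number of categories.  Dividing by
# the negative quantity log(1/K) flips the sign, so the result lies in
# [0, 1]: 0 when everyone agrees, 1 when opinion is spread evenly.
def entropy(counts):
    total = sum(counts)
    k = len(counts)
    s = sum((c / total) * math.log(c / total) for c in counts if c > 0)
    return s / math.log(1.0 / k)
```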

Multidimensional entropy is a measure of the diversity of opinion across all issues. The higher this measure, the more heterogeneous is public opinion. Basically, multidimensional entropy is computed the same way as entropy on individual features, except that the weighting is done by each possible combination of opinions across the issue space.
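A sketch of that computation, again in illustrative Python: each citizen's opinions are represented as a tuple, and the number of possible opinion combinations (opinions per issue raised to the number of issues) takes the role of the number of categories.

```python
import math
from collections import Counter

# Multidimensional entropy: the same normalized Shannon formula, but the
# "categories" are the possible combinations of opinions across all issues.
# 'profiles' is a list of opinion tuples, one per citizen, and
# 'n_combinations' is the number of possible combinations.
def multidimensional_entropy(profiles, n_combinations):
    total = len(profiles)
    s = sum((c / total) * math.log(c / total)
            for c in Counter(profiles).values())
    return s / math.log(1.0 / n_combinations)
```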

Acquaintance average is the average of the percentage of the past 20 (or moving_average_width) interactions that each citizen had that were with a person with whom he agreed on at least one issue. ModelSwarm computes this by asking each citizen to return the percentage of his past 20 interactions that were with a discussant with whom he agreed on at least one issue, and then taking the average of these percentages over all the citizens.

Harmony average is the average over all citizens of the percentage of issues on which a citizen agreed with his discussants over his past 20 interactions. ModelSwarm computes this by asking each citizen to return the average percentage of issues on which he agreed with his discussant over his past 20 interactions, and then taking the average of this over all the citizens.

Identicality average is the average of the percentage of the past 20 interactions that each citizen had that were with a person with whom he agreed on every issue. ModelSwarm computes this by asking each citizen to return the percentage of his past 20 interactions that were with a discussant with whom he agreed on every issue, and then taking the average of these percentages over all the citizens.


As of May 15, 2002, this model is capable of serialization. That is, the state of the model can be saved into a file and then restarted from that point. The save files are in a Lisp format.

I did not have any working example of serialization in Swarm against which to work, so what you see is the result of necessity and invention. A few enhancements in Swarm itself were needed to allow saves of dynamically allocated arrays and Swarm Arrays and I'm proud to say I've got those changes put into the main Swarm distribution.

The basic idea is this. We don't want to save the "space" objects because they are just hollow shells in which agents exist. In the opinion model, there is no "environment" except for that which the agents perceive, remember, and use. Agents of some types--say "HJCitizen"--have complicated recollections of the others they have met, and those recollections have to be saved and then restored. The magic occurs in the "lispOutDeep" methods of the agents, which save the vital information for the agents, and in the "lispInReset" methods. lispInReset causes the restored agent to "scramble about" and reconstruct its environment. The archive also saves the parameters from the model, and upon restart the model goes right back where it was, except that the time in the new model is 0. I don't know how to reset time, and have not tried to.