"Rational Actors Versus Adaptive Agents: Social Science Implications"

Paul E. Johnson

Dept. of Political Science

University of Kansas

Lawrence, Kansas 66045

(785) 864-9086

pauljohn@ku.edu

Abstract

This is a comparison of two research methods, "rational choice theory" and "agent-based modeling." Is agent-based modeling really different and is it really better? The answers are "yes" and "sometimes". Improvements in computer hardware and software now allow a richer kind of modeling that will significantly enhance social science methodology. There are pitfalls, however. Examples of agent-based models built with the Swarm software libraries (made available by the Santa Fe Institute) are offered to illustrate.

Special thanks to Chris Langton, Glen E.P. Ropella, Marcus Daniels, Alex Lancaster, Rick Riolo, Benedikt Stefansson, Sven Thommesen, the Santa Fe Institute, and many others in the Swarm Community. All remaining mistakes are the sole responsibility of the author. This research is partially supported by NSF grant SBR-9709404. Prepared for delivery at the 1998 Annual Meeting of the American Political Science Association, Boston Marriott Copley Place and Sheraton Boston Hotel and Towers, September 3-6, 1998.

Will the field of formal analysis in political science be fundamentally changed by the introduction of agent-based modeling with computers? Some recent research, for example Axelrod's The Complexity of Cooperation (1997), suggests that the answer is yes. The time is now ripe for a careful consideration of the strengths and weaknesses of agent-based modeling.

Agent-based modeling, which has been used in fields like artificial life (Langton, 1989), biology and ecology (Kauffman, 1993), and to some extent economics (Anderson, Arrow, and Pines, 1988), is not so widely practiced in political science as is rational choice theory. An agent-based model is cast at the individual level, as is a rational choice theory. Like a rational choice model, an agent-based model is typically formalized as a set of assumptions about the objectives and opportunities of individual actors. Unlike rational choice theory, which is typically practiced as a pencil-and-paper exercise in mathematics, agent-based models are implemented on computers. Artificial agents are typically presumed to adjust, adapt, and change, usually in incremental steps. Agent-based models usually describe a simpler and less knowledgeable individual actor.

The promise of agent-based models, the proponents argue, is found in the relaxation of the restrictive conditions under which rational choice theory is usually practiced. A rational choice approach typically requires strong assumptions about the information and computational abilities of the individual actors. An agent-based approach, it is argued, can fruitfully do away with those restrictions. Agent-based models allow the investigation of more detailed structures with a higher number of actors.

This essay is narrowly focused on a comparison of rational choice theory versus agent-based modeling in theory development. Framing the question in this way, a number of important questions are (knowingly) pushed to the side. For example, it is assumed from the outset that theories should be formalized. Rational choice theory and agent-based models require detailed specification, and the question of whether theory ought to be practiced in a more rhetorical style is not discussed. Furthermore, the discussion here is focused on individual-level theories. Whereas in ecological modeling there is a major debate over the question of whether models ought to be cast at the aggregate or individual level (Huston, DeAngelis, and Post, 1988; Judson, 1992; DeAngelis and Gross, 1992), this paper assumes a methodologically individualistic position from the start.

With attention focused on this topic, then, some comparisons between the practice of rational choice theory and agent-based modeling can be offered. The first section of the paper compares the strengths and weaknesses of the two approaches as they are experienced in the modeling enterprise. The final part of the paper describes the Swarm simulation toolkit and presents a few examples to highlight the tradeoffs experienced when using one approach or the other.

The conclusion is that agent-based models have a reach that is potentially broader than rational choice theory, and yet there are dangers as well. Whereas rational choice theory has methodological and substantive qualities that rein in the tendency to make models too complicated and unwieldy, agent-based modeling does not. The strength of agent-based modeling is thus its weakness as well. The lack of structure opens up possibilities for creating models, but the creative process is not well regulated.

Comparing Rational Choice and Agent-Based Models

The most important questions in the choice of a modeling strategy usually boil down to this: does the method provide a framework within which to represent an idea and investigate its implications? The major dilemma is that there is a tradeoff between the ability to represent and the power to investigate. The proponents of agent-based modeling argue that it ameliorates the tradeoff, but as we shall see, the arguments on both sides are persuasive.

1. Tractability versus Verisimilitude

Rational choice models often seem hopelessly complicated to those who view the field from the outside, but from the inside one is often struck by their simplicity. Models are simple because they are designed to be that way. Modelers strive for tractability in model design. While tractability is a nebulous and hard-to-define quality, it is nevertheless highly prized. Above all else, a model must be workable: it must allow a "deductive interrogation" of its implications. Theorists want to be able to state results, in particular to describe an equilibrium outcome, a stable pattern of behavior that can be expected to sustain itself.

In order to get deductions, however, one must often sacrifice details. It is hoped that the omitted elements are not part of the "essence of the problem," but there is often no way to be sure. Typically, one cannot work out a model under the "tractable conditions," and then introduce the "intractable conditions" and get a result for comparison (or else one would have used the intractable conditions from the start). Sometimes the concessions to tractability seem minor, such as the treatment of a range of alternatives as a real-valued continuum in a voting model. However, sometimes these seemingly unimportant assumptions can have a pivotal effect. In formal models of sequential interactions, for example, the simplification of a player's range of alternatives to a dichotomy ("HIGH" versus "LOW" or such) can make the model solvable. The solutions obtained under such conditions, however, may not generalize to a continuous strategy space (Hellwig and Leininger, 1987, 1990). Most of the assumptions imposed for reasons of tractability are not rigorously investigated.

The adjustment of assumptions to allow analysis is both a weakness and a strength of the rational choice approach. Rational choice modeling offers graduate students a clear path for study and research. Axelrod argues that it has been the thirst for deductions, more than anything else, that has led to the growth of rational choice theory. "The real advantage of the rational-choice assumption is that it often allows deduction" (1997, p. 4). The implication is that any theory that allows deductions could be readily substituted, an implication which does not ring true. Rational choice theory not only leads to deductions, but to interesting deductions. The ability to make a statement about how behavior will change as conditions change--to state comparative statics--is the most interesting and important part of microeconomics, the spring from which much rational choice theory flows.

Critics of rational choice theory may find that simplifications render the models unrealistic or unhelpful. Such critics will eagerly embrace agent-based modeling. Agent-based models offer tractability galore. If one can think of an assumption, one can have a computer model designed to represent it. In that sense, computer models can be expanded to include a large number of details, subject only to the technical restrictions of hardware capacity and software design limitations. One team of advocates for agent-based modeling, Kollman, Miller, and Page, claim that, "constructing mathematical models is a slow, challenging pursuit, where assumptions must often be guided by tractability rather than realism." They argue that computer models circumvent this problem. "Alternative behavioral and environmental assumptions can be included quickly and at low cost in a computational model" (1997, p. 463).

I suspect that people will agree with this claim if they have more training in computer programming than in mathematical modeling, since "ease" and "difficulty" are in large part matters of familiarity. There is no doubt, however, that the potential for detail and verisimilitude in computer models is higher than in the rational choice approach. Whereas the formal theory might impose regularity assumptions on individual classes of actors, the agent-based model can easily allow for complete differentiation of individuals and indefinitely complicated structures through which they interact. Deductive theory is not workable with a large number of different individuals behaving according to unique, individualized logics.

One should not conclude from this, however, that useful agent-based models will necessarily result from the continuous addition of features. That belief is one source of the downfall of some of the largest computer simulation research projects, according to Bankes (1993). If one strives for an extremely high level of verisimilitude, a model will incorporate more and more assumptions, many of which are ad hoc. Exploring the implications of so many parameters is too expensive and impractical. While the desire to build a computer model that is a replica of reality no doubt inspires many projects (and a lot of science fiction, I suppose), it is not necessarily the right approach. As Bankes argues, the number of uninvestigated assumptions grows very rapidly in such a project and the researcher's handle on the original research question may be lost.

The need to keep a computer simulation within manageable bounds thus imposes some of the same kinds of restrictions that the desire for tractability imposes on formal theory. Bankes's advice for question-driven exploration rings true: design a model that can be executed under a variety of conditions, and let the conditions being varied be governed by the research question as much as possible. Such an investigation does not strive to duplicate reality inside the computer. Rather, it attempts to investigate the implications of changes in conditions, an objective not significantly different from many rational choice projects.

Interestingly enough, even within the agent-based modeling community there are significant differences of opinion about modeling strategy. As some leading figures in ecological modeling observed, "These models offer the greatest promise of realism. But this promise is often not attained because the complexity of the model precludes diagnosis of problems when something goes wrong and inhibits understanding of why interesting results emerge as they do" (Roughgarden et al., 1996, p. 27). Axelrod offers caution to those who seek to build all-inclusive, realistic models. He admits that some models, particularly ones used to predict economic variables, may be complex. "But if the goal is to deepen our understanding of some fundamental process, then simplicity of the assumptions is important, and realistic representation of all the details of a particular setting is not" (1997, p. 5). The emphasis on model simplicity, in particular simple models of individual actors, is certainly consistent with the research agenda in artificial life.

2. Equilibrium versus Emergence.

One of the major differences between rational choice theory and agent-based modeling is the objective of the modeling process. Rational choice theory is mostly a search for equilibrium, a stable pattern of behavior or policy. Such an outcome is commonly referred to as a "solution," in the sense that the participants' behaviors mutually reinforce one another and each can be understood as individually rational. This substantive consideration has a vital methodological role: it keeps the model-building process within manageable bounds. It keeps a research community focused and enhances the possibility that researchers will share a common understanding. The possibility that individual behavior might not follow an equilibrium strategy is usually ignored, or when it is recognized it is not well integrated into the theory. In fact, aside from some advanced elements of game theory (in particular, trembling-hand perfect equilibrium), these issues are not often investigated. And, in that case, the individual actor is thought of as attempting to comply with the dictates of equilibrium, but some unknown force diverts action from it.

In rational choice theory, a finding which indicates that there is no equilibrium is viewed with suspicion, or as a problem waiting to be solved. One might be quick to counter, and argue that some of the most important and influential results in rational choice have been negative results, proofs that a given set of assumptions is not sufficient to justify the conclusion that there is an equilibrium. Arrow's "Impossibility Theorem" (1951) and the McKelvey-Schofield "Chaos Theorems" (McKelvey, 1976; Schofield 1979) are examples of projects with such negative results. These findings are viewed as "research problems," however. Rational choice theorists (mostly) frown on "disequilibrium" theories. The often unspoken assumption of many researchers in rational choice theory is that reality is a stable, predictable thing. Hence, models that have predictions of instability are seen as intrinsically flawed. Rational choice theorists would rather that the structural and behavioral particulars of the model be adjusted until the outcome is brought into a stable alignment. The typical approach is decidedly functionalist. Reasoning that the model could not generate a meaningful result if it were not for new features X, Y, and Z, one concludes that X, Y, and Z exist in reality to serve the purpose of solving the instability.

In contrast to rational choice theories, agent-based models are not driven by the desire to find equilibrium outcomes. While stable patterns of interaction may occur, it is also possible to find endless change and instability in an agent-based model. Analysts, often with the help of computer graphics, search for any interesting or surprising patterns. Whereas the rational choice modeler is uncomfortable in a discussion about how agents might endlessly adjust their mutual expectations about each other, agent-based modelers are quite at home.

The emphasis on equilibrium gives the rational choice modeling enterprise a coherence that might not exist in the agent-based modeling community. If a research team builds an agent-based model, they hope that something surprising or interesting may come out of it. But terms like "interesting" are open to so much interpretation that they may provide no guidance whatsoever. Considering the future of the field called artificial life, for example, one author wondered, "how do we know if the new artificial phenomenon constitutes life as it could be in a possible material world? .... As our ultimate standards for what constitute interesting lifelike phenomena in a virtual universe (possible life), we only have our prescientific ideas and traditional biological intuitions of what makes up real life" (Emmeche, 1994, p. 561).

There are some common threads in agent-based modeling, however, that might help to give investigation more structure. First and foremost, there is the pursuit of "emergent properties." "Emergent property" is an elusive term that comes from the field of complex systems analysis. An emergent property is something that arises from the interaction of many agents, none of whom necessarily intend this aggregate outcome. As Holland recently commented, emergence "occurs only when the activities of the parts do not sum to give activity of the whole" (1998, p. 16). Many political scientists are no doubt familiar with Thomas Schelling's (1971) study of housing segregation, a result that would be called an emergent property.

The field of complex systems has a number of other concepts and tools which may to varying extents be translated into political terms and thereby lend additional structure to the field. The papers on political position taking by Kollman, Miller, and Page, for example, utilize the concept of "landscape ruggedness" as it affects the process of adaptation. Other prototype examples, such as the traveling salesman problem, may be translated into forms that enrich political analysis.

To summarize, while it is the case in both rational choice and agent-based modeling that we search for system-wide implications of individualized behavior, there is a major difference in the flavor of the projects. An equilibrium in economics is obviously an emergent property, in the sense that each agent intends to advance its own welfare and a stable pattern of mutual behavior results. In agent-based modeling, an equilibrium may be a special case. A space of parameters might be explored and the conditions under which a system arrives in equilibrium might be studied. Rational choice theories are typically based on the premise that an equilibrium will result, while agent-based theories are not. In a sense, the search for a variety of emergent properties is the thing that separates agent-based models from rational choice theory. It is also the factor which can make agent-based models unmanageable.

3. Dealing With Bounded Rationality

It may be that agent-based modeling and rational choice theory will part ways over the gut-level issue of complete versus bounded rationality. In rational choice theories, agents are typically assumed to have a huge amount of information and the proof that an equilibrium exists is usually considered to be so valuable a conclusion as to justify the assumption. Partly for reasons of tractability, the adjustment process through which rational actors arrive at that equilibrium is seldom given explicit attention in rational choice.

If the participants in a model are not "completely informed," their incomplete information is represented in a highly stylized way. Doing so allows further assumptions about the type of behavior that will emerge in equilibrium. In order to be workable, games often impose a requirement of "common knowledge" in which the actors' beliefs about each other are somehow known to each other. The height of this kind of regularity is imposed in the so-called Bayesian Nash Equilibrium (Harsanyi, 1967). Each actor knows that it is a certain type of actor, but the actors do not know one another's types. Hence, each must plan against the optimal strategies of each type of actor. An equilibrium is a situation in which each possible type of actor has an optimal plan specified, and when those plans are made available to all actors, none of them would choose to alter its plan for any of its types. To arrive at such an outcome, each actor must work out a conjecture about the optimal plan of action for each type of actor that each of the other players might be. Somehow each actor arrives at the same estimate of each opponent-type's planned behavior. Even a proponent of game theory admits, "Is this assumption 'intuitive'? Well, perhaps not" (Kohlberg, 1990, p. 33). The author proceeds, however, observing that it is no less 'unintuitive' than the equilibrium premise itself.

Resistance to these "heroic" assumptions surfaced in the form of new theories about behavior that is not perfectly rational. Rather, people are assumed to have only bounded rationality (Simon, 1957). Simon suggested a model of an individual with a limited knowledge of the alternatives and scarce computational ability. Such an actor makes limited adjustments according to "heuristic" rules of behavior. This view rings true in the minds of many and it has had significant influence in psychology and interdisciplinary fields like complex systems modeling (see Simon, 1981; 1990). At the fall 1997 International Conference on Complex Systems in Nashua, NH, Simon was the keynote speaker.

The idea of bounded rationality has not had such a major impact in political science, and probably less still in economics. Some concessions to bounded rationality have appeared. Models have introduced limitations on the amount of history that rational actors can recollect. For the most part, it seems safe to say that bounded rationality does not matter to most rational choice theorists. Axelrod's claim that the desire for deduction has fueled the development of rational choice theory is no doubt correct in this case. Simply put, from the rational choice perspective, it is difficult to get deductions from a model of bounded rationality.

The ability to design models of bounded rationality is one of the most appealing aspects of agent-based models (Epstein and Axtell, 1996; Axelrod, 1997). Epstein and Axtell present a computer model called Sugarscape, where agents seek sugar on a landscape. This is meant to be a metaphor for the economic exchange of wealth. Their example shows how agent-based models can take bounded rationality into account. Agents can be restricted in the opportunities they perceive or the ability to adjust their strategies, for example. Like other proponents of agent-based models, Epstein and Axtell offer their version of bounded rationality as a more realistic alternative to the stark depiction of humans in the rational choice genre.

I have often found it ironic that the advocates of computer models champion a model of man that is less able to calculate than the one advanced by pencil-and-paper theorists. It is not necessarily so. Whatever assumptions are invoked in a pencil-and-paper rational choice model can be implemented in a computer model. Typically, however, the people who work in agent-based models are interested in adaptation, learning, and adjustment, and these qualities are most naturally investigated in a model with limited information. These models are usually held up as improvements because the individual-level model is more believable, more realistic.

"Realism" is in the eye of the beholder, however, and the eye is influenced by training and personal experience. The "realistic" alternative presented by Epstein and Axtell describes artificial creatures seeking sugar that has been deposited in the adjacent cells of a lattice. That environment is quite unlike the markets where most people shop, however, and doubts about the realism of the model are sure to arise. In return for the realism of agents with bounded rationality, we are typically forced to accept a number of ad hoc assumptions about what agents know, how they learn, and so forth, assumptions which seem no more or less believable than the assumptions of the rational choice theories. Furthermore, we are not typically offered a sensitivity analysis to probe the impact of the detailed assumptions about the form of limited rationality that is offered.

No model is perfectly realistic, of course, and as noted above, the desire for verisimilitude can kill a research agenda. Rather than realism, models should be designed so the researcher can probe particular kinds of research questions. The problem facing models of bounded rationality is that they lack a common focus or a unifying question. Whereas rational choice theories can derive equilibrium and its properties, students of boundedly rational behavior are not similarly unified. Until some stronger unifying principles and research problems can be found, I suspect that agent-based models will tend to be foils against the more well-established rational choice theories, not genuine alternatives. Epstein and Axtell do a fine job of pointing out the assumptions behind the neoclassical economic model and showing how a model based on different principles might lead to different outcomes. Beyond that, the role of the agent-based model is not clear.

As mentioned above, there is some hope for unifying principles in the rapidly developing field of complex systems and ideas like "emergent property." Computer models in which many boundedly rational individuals act and adjust to each other are examples of "complex systems." Perhaps the general characterizations of complex systems for which they strive will lend structure and coherence to agent-based models of bounded rationality. In particular, the field of artificial life, with its emphasis on the study of interaction among many simple entities, may serve as a guide for the development of more models of bounded rationality in politics.

4. Insights and Presentations.

The intangible factors that influence the progress of research are important. While methodology textbooks may make research seem as though it is a linear process from conception through analysis to conclusion, there is usually digging and scratching for an interesting problem and a method with which to attack it. The avenues through which formal theorists seek an interesting problem are probably quite different from the avenues traveled by agent-based modelers.

Pencil-and-paper rational choice theory offers significant advantages. Because the structure is written out in detail, the logical gaps between models can be probed for interesting research problems. New insights from mathematics also often point the way for new characterizations of equilibrium and changes in quantities of interest. While the deductive structure may block research in certain directions (the 'intractable problems'), there is no doubt that it also fosters useful projects.

Axelrod (1997) refers to agent-based models as "thought experiments," ways to investigate the implications of a theory. That is no doubt correct. While a theory may not allow a deduction that states the tendencies of a system under all conditions, one can use the computer to generate possible worlds and adaptation within them. An important part of the thought experiment is the ability to watch a graphical representation of the unfolding process. The eye can discern patterns in a graph that might be lost in a column of numbers.

Presentation of graphical results is by no means easy, however, when a larger audience must be addressed. If a simulation exercise generates hundreds of time-paths, each of which displays intricate patterns of adjustment, it may be difficult to summarize the research findings. Short of showing people movies of a simulation, one cannot convey the research experience to others. Instead, one must cast about for methods which summarize simulation runs and quantify them. A significant part of the "inspiration" is lost, however, and probably cannot be conveyed.

Another significant problem in the agent-based modeling enterprise is the lack of a standardized modeling approach and terminology. There are a variety of computer languages and simulation approaches in use. One goal of the Swarm toolkit, which is discussed next, is to give some order and method to the "simulation workbench" so that the practitioners may more easily talk with each other and present their results to the general audience.

In order to illustrate, I plan to take up a couple of classic problems in rational choice theory from an agent-based perspective. Using the Swarm toolkit, the plan is to build models of familiar problems and then show how the agent-based approach alters the nature of the research process as well as the conclusions. The first example application is to majority rule "chaos" in multidimensional issue spaces. The second example applies to the study of competitive position-taking by candidates and political parties. Both of these examples show that agent-based models complement the more traditional formal rational choice theory.

The Swarm Toolkit

Christopher Langton, of the Santa Fe Institute, is well known for his research in complex systems and artificial life. To encourage the widespread interdisciplinary adoption of agent-based modeling, he has led a project to develop a software toolkit for modeling of complex systems.

Swarm is made available under the GNU General Public License (GPL), which means it is "open source" software. In a nutshell, this means that the source code is publicly available and free to use. Programmers are free to go into the source to find out how the program works and whether fixes might be necessary. In contrast, users of commercial software are at the mercy of the company that controls the source to inspect it and make changes. The newest version, along with plenty of information, is available on the SFI website (www.santafe.edu/projects/swarm).

A programming toolkit is a code library that is meant to help researchers deal with the major challenges in their projects. When many projects need to get a particular job done, one can write code to accomplish those chores and make it available in the form of a "library." A library is, generally speaking, code that is considered to be workable and of general applicability. The need for libraries is especially clear in the graphical presentation of results. It requires quite a deep knowledge of operating systems and graphics to make a "window," "menu," or "graph" look like it should. And, since the graphical depiction of the results is a vital part of the experience, a software toolkit should make that as elegant and as easy as possible. The Swarm toolkit also provides libraries that do the dirty work of computer programming, especially the allocation of memory.

Another notable feature of the Swarm project is that it uses a computer language called Objective-C. The C language, the workhorse of the Unix operating system, is quite well known. Objective-C includes the C language and adds features to make it "object oriented." The language is designed to allow users to create separate containers that include both information (data) and functionality. The code that creates a general case of such a container is called a "class" and when an example of that class is created, it is called an instance of the class, or more simply, an object. An object may have variables inside it to keep track of its age, position, and so forth, and it also has the ability to respond to "messages." The actions that an object can carry out are called "methods". The method can allow parameters as well.

The Swarm toolkit makes available a number of "classes", kinds of objects from which the user can create "subclasses." A subclass inherits all of the features of the superclass from which it is drawn, and it also can include project-specific ingredients. In dealing with classes like graphs and menu panels or such, many programmers will not have to fiddle with the libraries and can use the classes straight out of the box. The user's attention is on creating the logical structure of the individuals in the model and designing their interaction. After the subclasses for the individual actors are written, the program can create a list of voters (instances of the class), and those instances are objects which can carry out specified actions.
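
To make these terms concrete, here is a minimal sketch in plain Objective-C. It is not Swarm code: it leans on Foundation's NSObject rather than Swarm's own base classes, and the class and method names are hypothetical. It shows a class that bundles data with methods, a subclass that inherits from it, and the creation of an instance that responds to messages.

#import <Foundation/Foundation.h>

// A general agent class: it encapsulates a piece of data (an age) and
// responds to messages that set or report that data.
@interface Agent : NSObject
{
    int age;                       // instance variable, private to each object
}
- (void) setAge: (int) anAge;      // a method that takes a parameter
- (int) getAge;
@end

@implementation Agent
- (void) setAge: (int) anAge { age = anAge; }
- (int) getAge { return age; }
@end

// A subclass inherits everything Agent does and adds project-specific behavior.
@interface Voter : Agent
- (void) step;
@end

@implementation Voter
- (void) step
{
    NSLog(@"Voter aged %d takes its turn.", [self getAge]);
}
@end

int main(void)
{
    Voter *aVoter = [[Voter alloc] init];   // create an instance (an object)
    [aVoter setAge: 42];                    // send it a message with a parameter
    [aVoter step];
    return 0;
}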

The Swarm toolkit makes available not only a set of libraries, but also some examples of complex systems. Through these examples, the authors of Swarm hope to establish a vocabulary that helps researchers to discuss their projects with one another. All Swarm projects of which I am aware follow a basic hierarchy. The first object created is an "ObserverSwarm". It is then told to initialize the control panel, where the user can mouse click to start and stop the simulation. It also creates displays where parameter values can be entered and results displayed.

In addition to creating its display objects, the ObserverSwarm will create an instance of a class called ModelSwarm. The ModelSwarm manages the substantive part of the simulation. The ModelSwarm has code used to collect data from the individual actors and pass it along to the ObserverSwarm for display. The ModelSwarm also has methods which create individual actors. One centerpiece of the Swarm project is the scheduling apparatus, which allows the user, with relative ease, to cause all agents in a class to carry out a certain action. It is not necessary to use this jargon of ObserverSwarm and ModelSwarm, but doing so significantly eases communication with other users and the Swarm development team. In addition, the computer code within each category adheres to a convenient labeling system that helps to maintain the separation of tasks.
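
The division of labor can be suggested with a stripped-down sketch. The classes below are not the real Swarm ObserverSwarm and ModelSwarm (which also handle probes, schedules, and graphics); they only show the shape of the hierarchy, in which an observer-level object builds a model-level object, which in turn builds the agents and steps them period by period.

#import <Foundation/Foundation.h>

@interface Voter : NSObject
- (void) step;
@end
@implementation Voter
- (void) step { /* adjust, vote, adapt ... */ }
@end

// Model level: owns the substantive agents and advances them one period.
@interface MiniModelSwarm : NSObject
{
    NSMutableArray *voters;
}
- (void) buildObjects: (int) numVoters;
- (void) stepPeriod;
@end

@implementation MiniModelSwarm
- (void) buildObjects: (int) numVoters
{
    voters = [[NSMutableArray alloc] init];
    for (int i = 0; i < numVoters; i++)
        [voters addObject: [[Voter alloc] init]];
}
- (void) stepPeriod
{
    for (NSUInteger i = 0; i < [voters count]; i++)   // stand-in for Swarm's schedule
        [[voters objectAtIndex: i] step];
}
@end

// Observer level: creates the model, runs it, and would collect and display data.
@interface MiniObserverSwarm : NSObject
- (void) go;
@end
@implementation MiniObserverSwarm
- (void) go
{
    MiniModelSwarm *model = [[MiniModelSwarm alloc] init];
    [model buildObjects: 100];
    for (int t = 0; t < 500; t++)
        [model stepPeriod];        // graphs and data collection would hook in here
}
@end

int main(void)
{
    [[[MiniObserverSwarm alloc] init] go];
    return 0;
}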

Objective-C is an alternative to a more widely used object-oriented language called C++. Objective-C is preferred for practical and aesthetic reasons. Objective-C is superior because it allows run-time binding. An agent can be told to respond to a certain message, but (unlike other languages) the sort of message or sender may not be known until the program is actually running. Code can be written so that an object is told to check for another object in position (x,y) in a lattice and then adjust its behavior accordingly. Other languages force more details to be fixed into the code itself before the model is compiled. (Compiled means the code is converted into an executable file. Most commercial software is made available in an already-compiled form.)
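
What run-time binding buys can be seen in a short sketch (again with hypothetical classes, not Swarm code): an agent receives a neighboring object typed only as a generic id, asks at run time whether it responds to a particular message, and reacts accordingly.

#import <Foundation/Foundation.h>

@interface Candidate : NSObject
- (double) platform;
@end
@implementation Candidate
- (double) platform { return 37.5; }
@end

@interface Rock : NSObject     // an object that knows nothing about platforms
@end
@implementation Rock
@end

// The caller does not know until run time what kind of object sits at (x, y).
void inspectNeighbor(id neighbor)
{
    if ([neighbor respondsToSelector: @selector(platform)]) {
        double p = [(Candidate *) neighbor platform];
        NSLog(@"Neighbor is a position-taker at %.1f; adjust accordingly.", p);
    } else {
        NSLog(@"Neighbor ignores the 'platform' message; do nothing.");
    }
}

int main(void)
{
    inspectNeighbor([[Candidate alloc] init]);
    inspectNeighbor([[Rock alloc] init]);
    return 0;
}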

The syntax of Objective-C seems naturally designed to suit an agent-based model because it mainly consists of instructions that tell the objects to carry out their methods. Agents can send messages to each other or they can send messages to other constructs, such as a "world" where the positions of all agents are recorded. Aesthetically, the language is elegant and seems naturally suited to the project of agent-based simulation. Suppose there is an object called "voter1" and that voter has a method that ranks two candidates according to some criterion and reports back which is preferred. The syntax to cause the voter to make the comparison would be something like:

[voter1 compare: candidate1 And: candidate2]

In this case, the name of the method being executed is "compare:And:" and candidate1 and candidate2 are names of candidates defined elsewhere in the code. In this way, object-oriented programming is intuitive and understandable.

Object-oriented programs in general, and Objective-C in particular, offer a natural and convenient way to discuss models of limited information in political science. The philosophy of object-oriented programming has a significant effect on the nature of agent-based models, similar to the way in which the philosophy of economic modeling had an effect on rational choice theory. In object-oriented programming, the emphasis is on the elimination of "global variables," variables that can be accessed by all parts of a program; the usage of "globally visible" variables is strongly discouraged. Instead, an agent is an instance of an object, a self-contained entity that encapsulates information and makes it available only under particular conditions. Agents don't know anything about other agents unless that information is explicitly made available. Agents may hold their "genetic" information (preferences or abilities) close to the vest, or they may respond to requests for information from other actors and reveal it. Agents are not necessarily assumed to be aware of features of their environment or the preferences of other actors. Instead, agents develop personal models that represent their beliefs. In this way, one strength of agent-based models is that they come closer to truly representing interaction under conditions of "incomplete information." When an object is created, the code must typically be written to "put values inside" that object. The object can keep track of its age, tastes, etc., in a way that is known only to it. That information is isolated and held in private unless a method is written through which the agent can announce the information to other agents. The methods that put information inside an object are typically prefixed with "set", as in "setIdealPoint", while the methods that retrieve information are prefixed with "get", as in "getIdealPoint".
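
As an illustration, the compare:And: method from the earlier example and the set/get convention might be written roughly as follows (a sketch with hypothetical accessor names and a two-dimensional position; the comparison uses the squared Euclidean distance that also appears in the models below).

#import <Foundation/Foundation.h>
#include <math.h>

@interface Candidate : NSObject
{
    double x, y;                   // current position in the two-dimensional space
}
- (void) setX: (double) newX Y: (double) newY;
- (double) getX;
- (double) getY;
@end

@implementation Candidate
- (void) setX: (double) newX Y: (double) newY { x = newX; y = newY; }
- (double) getX { return x; }
- (double) getY { return y; }
@end

@interface Voter : NSObject
{
    double idealX, idealY;         // held privately, revealed only through methods
}
- (void) setIdealPointX: (double) px Y: (double) py;
- (id) compare: (Candidate *) c1 And: (Candidate *) c2;
@end

@implementation Voter
- (void) setIdealPointX: (double) px Y: (double) py { idealX = px; idealY = py; }

// Report whichever candidate is closer to the voter's ideal point,
// using squared Euclidean distance (the identity weight matrix A).
- (id) compare: (Candidate *) c1 And: (Candidate *) c2
{
    double d1 = pow([c1 getX] - idealX, 2) + pow([c1 getY] - idealY, 2);
    double d2 = pow([c2 getX] - idealX, 2) + pow([c2 getY] - idealY, 2);
    return (d1 <= d2) ? c1 : c2;
}
@end

int main(void)
{
    Candidate *candidate1 = [[Candidate alloc] init];
    Candidate *candidate2 = [[Candidate alloc] init];
    [candidate1 setX: 30.0 Y: 40.0];
    [candidate2 setX: 70.0 Y: 60.0];

    Voter *voter1 = [[Voter alloc] init];
    [voter1 setIdealPointX: 25.0 Y: 50.0];

    id preferred = [voter1 compare: candidate1 And: candidate2];
    NSLog(@"voter1 prefers candidate %@", preferred == candidate1 ? @"1" : @"2");
    return 0;
}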

Swarm Examples: Contrasting Rational Choice and Agent-Based Models

Example #1: Majority Rule Chaos

One of the most interesting areas of rational choice theory investigates majority rule in a spatial model. The space of alternatives is often (though not necessarily) thought of as a Euclidean space, R^m. The voter wants political outcomes to be close to his or her personal ideal point, xi*. An ideal point is a vector of positions, one for each dimension. For example, if there are three dimensions, the ideal point would be xi* = (xi1*, xi2*, xi3*).

Models differ in the way in which proposals are generated. Some models describe an agenda setter. Others present us with candidates for office who make proposals to the voters. Some present the model as a depiction of alternatives in a legislature, where the alternatives arise from some sort of agenda-building process.

As the proposed policy moves away from the individual's ideal point, the desirability of that policy is lowered according to a utility function, Ui(x). For a model with two policy dimensions, we might designate coefficients a11, a12, a22, and then the utility of a policy declines with its "Weighted Euclidean Distance" from the ideal point (Davis and Hinich, 1966; Davis, Hinich, and Ordeshook, 1970):

Ui(x) = -[ a11(x1 - xi1*)^2 + 2 a12(x1 - xi1*)(x2 - xi2*) + a22(x2 - xi2*)^2 ]

For higher numbers of dimensions, matrix algebra is used to describe the utility of the alternatives. Given a matrix of weights A, which indicates the importance of each issue and its interaction with other issues, this is often written as

Ui(x) = -(x - xi*)' A (x - xi*)

The matrix of weights can be adjusted to represent differences in tastes and emphasis among voters. If the matrix A is the identity matrix (a matrix in which all coefficients are zero except those on the main diagonal, and those coefficients--a11, a22, a33, through amm--equal 1), then the voter's utility for a proposal depends solely on its distance from the ideal. Direction of movement is not important in such a case, only distance. Many interesting examples are built with this simple model of tastes. The literature on this problem is reviewed in depth in a recent textbook (Johnson, 1998).

One of the primary research problems in the spatial model concerns the question of equilibrium in a multidimensional voting process. Will the "will of the majority" lead society to a meaningful outcome? Most people guess the answer is yes, but the troubling conclusion of social choice research was a resounding no (see, for example, Riker, 1982). For just about every set of preferences that one can specify, there is no unbeatable proposal.

Not only is there a lack of equilibrium, there are so-called "chaos theorems" which indicate that majority rule is capable of wandering far and wide in the space of alternatives. In the words of Richard McKelvey, "if there is no equilibrium outcome, then the intransitivities extend to the whole policy space in such a way that all points are in the same cycle set. The implications of this result are that it is theoretically possible to design voting procedures which, starting from any given point, will end up at any other point in the space of alternatives, even at Pareto dominated ones" (McKelvey, 1976, p. 472). Schofield then showed that majority rule can wander on a continuous curve. He painted a picture of a society myopically searching for improvements in policy, taking small steps on a circuitous route through the entire policy space. One of his early convention papers on the topic observed, "Any point can be defeated by another outcome in general: even more alarmingly, the 'local' trajectories can lead anywhere..." (1978, p. 2).

The results presented a challenge to the rational choice community: find equilibrium. Gordon Tullock kicked off a symposium in Public Choice with his essay, "Why So Much Stability?" (1981). In the time since, both structural and behavioral changes have been considered which would stabilize the model. The structure-induced equilibrium approach, initially proposed by Kenneth Shepsle (1979), adds details to the model of the political process that stop majority rule from wandering. Behavioral changes, such as the inclusion of voters with foresight about the implications of current votes on future decisions, have been considered (Miller, 1980; McKelvey, 1986). Finding those explanations wanting, others have considered making it expensive to hold votes or adding uncertainty into the voters' minds about the implications of policy change (Lupia and McCubbins, 1997).

Would an agent-based approach have led to a different set of conclusions? Logically, certainly not. However, the interpretation might have been significantly different. Whereas the rational choice tradition keeps searching for a model that generates an equilibrium all the time, the agent-based model seems to lead to the conclusion that majority rule is, on the whole, a rather stable process.

I now present a model of multidimensional majority rule using the Swarm simulation toolkit. The goal is to understand the behavior of a "bare bones" political institution and to help us interpret the chaos theorems. I followed Axelrod's KISS dictum (1997, p. 5) and built a model that has only voters and a random agenda-setter. The model is based on these assumptions.

1. Voter ideal points are created randomly in a multidimensional space. The dimension of the model is arbitrary, but for making pictures and illustrations a two-dimensional model is most workable. In the examples discussed here, the preferred level on each dimension is drawn from a Uniform Distribution on the interval [0,100]. Again for simplicity, the weight matrix A is the identity matrix, so when choosing between two points a voter picks the one closest to his or her ideal point, where distance is defined in the usual way (Euclidean distance).

2. The program is written so that the starting point of the voting process can be chosen by the user or generated at random. Then, a direction of change is chosen and a proposal in that direction is generated by the agenda-setter. The direction is represented by an angle relative to the horizontal axis, measured in radians. The length of the step taken by the proposal is chosen at random as well. If the majority approves that proposal, then the proposal is adopted. Then a new proposal is generated, evaluated by the majority, and if it is preferred, then it is adopted.
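
To make assumptions 1 and 2 concrete, here is a bare sketch of a single voting period in plain C, which Objective-C accepts unchanged. The names are mine, and the real model wraps this logic in Swarm objects, schedules, and graphical displays: each voter compares the standing policy with the proposal by squared Euclidean distance, and the proposal is adopted if a strict majority is closer to it.

#include <math.h>
#include <stdlib.h>
#include <stdio.h>

#define NVOTERS 3
#define NDIM    2

// One period of pure majority rule: does the proposal beat the status quo?
int majorityPrefers(double ideal[NVOTERS][NDIM],
                    double status[NDIM], double proposal[NDIM])
{
    int yeas = 0;
    for (int i = 0; i < NVOTERS; i++) {
        double dStatus = 0.0, dProposal = 0.0;
        for (int k = 0; k < NDIM; k++) {
            dStatus   += pow(status[k]   - ideal[i][k], 2);
            dProposal += pow(proposal[k] - ideal[i][k], 2);
        }
        if (dProposal < dStatus)   // voter i is strictly closer to the proposal
            yeas++;
    }
    return yeas * 2 > NVOTERS;     // strict majority approves
}

int main(void)
{
    // Ideal points drawn uniformly on [0, 100] in each dimension.
    double ideal[NVOTERS][NDIM];
    for (int i = 0; i < NVOTERS; i++)
        for (int k = 0; k < NDIM; k++)
            ideal[i][k] = 100.0 * drand48();

    double status[NDIM]   = {90.0, 90.0};
    double proposal[NDIM] = {85.0, 88.0};
    printf("Proposal adopted? %s\n",
           majorityPrefers(ideal, status, proposal) ? "yes" : "no");
    return 0;
}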

Designing the process through which the agenda setter creates proposals is an example of the specification problems that arise in agent-based models. Code must be written to make proposals pop out of the model, and yet there is little substantive guidance. I did not necessarily want an efficient method of generating proposals; I wanted one that fit the intuition of the chaos models as presented by Schofield, namely that the agenda proceeds by making small changes on a relatively continuous trajectory. In the search for proposals that please the majority, the agenda setter should continue in the direction in which recent successes have been found. The agenda setter is bound to bump up against resistance, however, so changes in direction must be possible. A variety of formalizations of this proposal process were considered, but in the end a very simple model was chosen. The distance of the proposed change is chosen at random from a Uniform Distribution on [0,max]. The longest possible step, max, can be specified at run-time by the user. At any given stage, the direction of the next proposed change is found by taking the current direction, θ, and adding a displacement, δ. The direction is measured in units of π radians, so a displacement of -1 means a complete reversal in direction.

The working hypothesis is that a new proposal ought most likely to proceed in the same direction as the previous change. To capture this logic, the displacement δ is drawn from a truncated Normal Distribution with a mean of 0. The distribution of δ puts more weight on 0 than on any other value. If the agenda setter is intended to move only in a "forward" direction, the value of δ can be restricted to fall between (-.5,+.5). This is done by truncating a Normal distribution at (-2,2) and then rescaling so it fits into the (-.5,+.5) range. If the proposal is allowed to completely reverse itself, then δ would be restricted to fall between (-1,+1).
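
A sketch of one way to code the proposal rule just described (my own reading, not necessarily the exact code behind the figures that follow): the direction is kept in units of π radians, the displacement δ is a normal draw truncated at (-2,2) and rescaled into the allowed arc, and the step length is uniform on [0,max].

#include <math.h>
#include <stdlib.h>
#include <stdio.h>

// A standard normal draw (Box-Muller), truncated to (-2, 2) by rejection.
static double truncatedNormal(void)
{
    double z;
    do {
        double u1 = drand48(), u2 = drand48();
        z = sqrt(-2.0 * log(1.0 - u1)) * cos(2.0 * M_PI * u2);
    } while (z <= -2.0 || z >= 2.0);
    return z;
}

// Update the agenda setter's current point and direction.  "arc" is 0.5 if
// the setter may only search "forward", 1.0 if it may reverse completely.
void nextProposal(double point[2], double *direction, double maxStep, double arc)
{
    double displacement = truncatedNormal() * (arc / 2.0);  // rescale (-2,2) into (-arc,+arc)
    double step = maxStep * drand48();

    *direction += displacement;               // most likely: keep the recent heading
    point[0] += step * cos(*direction * M_PI);
    point[1] += step * sin(*direction * M_PI);
}

int main(void)
{
    srand48(1998);
    double point[2] = {50.0, 50.0};
    double direction = 0.0;
    for (int t = 0; t < 5; t++) {
        nextProposal(point, &direction, 5.0, 1.0);
        printf("proposal %d: (%.2f, %.2f)\n", t, point[0], point[1]);
    }
    return 0;
}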

How much does majority rule wander? In Figure 1, a screen shot of one Swarm simulation is presented. The panels on the top-left control the simulation and its parameters. On the top-right the two-dimensional space is illustrated. This graph has a range from 0 to 100 in each dimension. In this example, there are only three voters, the agenda setter's steps are restricted to the (0,5) interval, and the agenda setter's search is allowed a full range of motion. In honor of the Swarm team, I've used their ant to mark the locations visited by the agenda setter.

(Figure 1 about here)

The agenda setter begins at the top-left portion of the figure, and wanders down into the triangle formed by the ideal points of the voters. As the chaos theorems would suggest, a very diverse set of policies is approved during 500 periods. The graphs on the bottom left illustrate the magnitude of the instability. The agenda setter finds majority support for change more than 44 percent of the time, and policy is hardly ever in place for more than 5 periods.

The extent of the chaos problem, however, may not be so severe in all cases. The ability to probe examples is surely one of the major advantages of the agent-based approach. Consider Figure 2. This run illustrates that when the three ideal points lie almost in a line, then the agenda setter is drawn to the median position. There is no equilibrium in the sense of the formal theory, because the ideal points are not exactly on a line. (In the formal theory, recall that the equilibrium requirement is a knife-edge condition.) The simulation shows that the agenda setter, whose path wanders only a little bit as it approaches the median, finds it difficult to formulate a new proposal that two of the voters will support. The percentage of periods in which proposals are approved is reduced dramatically and there are long periods in between policy changes.

(Figure 2 about here)

Incidentally, it is important to note that policy outcomes are drawn to the multidimensional median of the ideal points, not the average as predicted by some theories. Figure 3 makes this most apparent, as the policy outcomes cluster in a corner of the space near the ideal point of the middle voter.

(Figure 3 about here)

As the size of the population of voters is increased, what is the likely result? Experiments on preferences in discrete choice spaces show the probability of cycles increases in the number of voters (Riker, 1982). In this model, it appears the severity of the chaos is significantly reduced as voters are added. Figures 4, 5, and 6 illustrate runs with 11, 1111, and 11,111 voters respectively. (In the latter, the dots showing voter positions are not illustrated because they create a blackout that obscures everything else in the figure). When the number of voters is 11, the random agenda setter goes to the center of the electorate and wanders somewhat, but the cycle is not global by any means. As the number of electors is increased, first to 1111 and then to 11,111, the proportion of the agenda setter's proposals that find majority support is reduced and there are longer periods of stability between changes.

(Figures 4, 5, and 6 about here)

The agent-based model does not fundamentally change our understanding of the majority voting process, but it might color our interpretation of those results. The chaos theorems did not state that majority rule would always cycle over the whole space. However, from some interpretations, it seemed that way. In an electorate of hundreds or thousands of people, the possibility that majority rule might wander far and wide is not extremely high. In particular, the simulations lend strong support to the position taken by Feld, Grofman, and Miller, who argue that majority rule "creates significant centripetal forces" that draw outcomes into the center of the electorate (1989, p 415). As predicted by their theory, and the model of Ferejohn, McKelvey, and Packel (1984), the tendency of majority rule to wander is limited. A cycle can only be global if an agenda setter is able to string together a set of proposals in a particular way. It seems that is not too likely. (As the old saying goes, monkeys could type the Bible if you gave them enough time and paper.)

The simulations will have to be replicated across a broad category of distributions of preferences. It may also be interesting to investigate the impact of changes in the model of the agenda setter. So far, however, I'm convinced that the simulation approach, particularly the graphical presentation that it makes, does significantly enhance the persuasiveness of the result. I doubt, however, that it replaces the formal analysis.

Example 2: The Difference between Political Parties and Candidates

This section compares the behavior of candidates who seek to maximize their support in the general public with the behavior of political parties who seek to maximize their membership. Starting with Anthony Downs's An Economic Theory of Democracy (1957), many models have equated political parties and political candidates. In my opinion, however, there is a fundamental difference. At election time, people who vote are offered a choice of two positions. Voters can reasonably be expected to choose the candidate they like best.

Party membership is subject to a different kind of logic. There is no reason to expect that a person will join one party rather than another just because its policy promises are closer to the person's ideal point. Rather, a party's positions must be sufficiently close to those of the individual before the person will join the party. Each voter may have an individualized notion of how close a party ought to be.

If the difference between elections and party-building is thus in the mind of the voter, what differences in position-taking are we likely to see in those two contexts? Consider first the problem of a two-candidate spatial election. In two-candidate electoral competitions on a single dimension, it is well known that the median voter's ideal point will attract the candidates. The problem of multidimensional chaos, however, seems to make it more difficult to predict how candidates will behave in a multidimensional contest.

As in the previous model, the issue space is a multidimensional continuum. Voter ideal points are assigned at random. The candidates are adaptive agents, not as intelligent as the ones discussed in Kollman, Miller, and Page (1992, 1997), but not different in spirit. Each candidate begins at a randomly assigned position and makes a proposal to move from that position in a randomly determined direction and distance. Voters register their preference between the proposals by sending a message to one candidate or the other indicating "I'm in your camp now." After observing only its own level of support, each candidate makes a new proposal. Rather than allowing candidates to sample only a subset of the voters (as in Kollman, Miller, and Page, 1992, 1997), this model allows candidates to offer their positions to all voters in each time period. (It is easy enough to limit candidates to samples, but I wanted to investigate the logic of position-taking without the distortion caused by the random sampling of voters).

The process through which candidates make proposals and decide to move from their current positions was the most difficult part of the model to specify (and it is the part that needs the most additional work). After several stabs at the problem, it became clear that the vital question was this: how should a candidate decide that the proposal is a stronger (or weaker) position than the previous position? An increase in voter support might mean only that the opposing candidate has made some kind of mistake. It does not necessarily signal that the proposal is better than the existing position. Similarly, it is possible that a candidate might make a proposal in the correct direction, but still lose support if the opponent also makes a strengthening move. This ambiguity is a significant problem in ecological models of coevolution, models in which each competitor's fitness landscape is constantly deformed and altered by the behavior of other species (Kauffman, 1988; 1993).

The algorithm used in the simulations reported here is as follows. Each candidate keeps records to indicate its "baseline position," pi = (pi1, pi2). That position is offered to the electorate and candidates tally their support. Then each candidate offers a new proposal, ri = (pi1 + D1, pi2 + D2), which at time period 0 is generated randomly. If that proposal caused an increase in support, the candidate takes that proposal as the new baseline position. Furthermore, the candidate assumes that the change was in the right direction. The next proposal from that candidate continues in exactly the same direction and a randomly determined distance from the new baseline. If the candidate's support stayed stable or declined, the existing baseline position remains unchanged, but then the candidate begins to search about in new directions for support. The method of search used by the candidates is the same method described in Example 1 for the agenda setter. The most recent promising direction of movement remains the most likely direction of movement, but proposals can be offered in any direction that falls within an arc of allowable movement. If the candidates are thought of as horses with blinders on, they might only search "in front" of them for new proposals, for example. The width of the search is a run-time decision and the implications of various settings can be explored.
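
In outline, the adjustment rule for a single candidate might be coded as follows (a simplified, hypothetical sketch; the displacement and step length reuse the same truncated-normal and uniform draws as the agenda-setter sketch in Example 1).

#include <math.h>
#include <stdlib.h>

// State carried by one candidate between periods.
typedef struct {
    double baseline[2];     // the position the candidate currently stands on
    double proposal[2];     // the position offered to the voters this period
    double direction;       // last promising direction, in units of pi radians
    int    lastSupport;     // votes won by the previous offer
} Candidate;

static double truncatedNormal(void)    // normal draw truncated to (-2, 2)
{
    double z;
    do {
        double u1 = drand48(), u2 = drand48();
        z = sqrt(-2.0 * log(1.0 - u1)) * cos(2.0 * M_PI * u2);
    } while (z <= -2.0 || z >= 2.0);
    return z;
}

// Called once per period, after the candidate has tallied its support.
void adjust(Candidate *c, int supportThisPeriod, double maxStep, double arc)
{
    if (supportThisPeriod > c->lastSupport) {
        // The proposal gained votes: adopt it as the new baseline and keep
        // heading in exactly the same direction.
        c->baseline[0] = c->proposal[0];
        c->baseline[1] = c->proposal[1];
    } else {
        // Support was flat or fell: keep the old baseline, but cast about in
        // a new direction centered on the last promising one.
        c->direction += truncatedNormal() * (arc / 2.0);
    }
    double step = maxStep * drand48();
    c->proposal[0] = c->baseline[0] + step * cos(c->direction * M_PI);
    c->proposal[1] = c->baseline[1] + step * sin(c->direction * M_PI);
    c->lastSupport = supportThisPeriod;
}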

How would a model of position taking by a political party differ from an election model? Each party operates according to the same logic under which candidates operate in the previous model. A party is a position-taker and searches for proposals that increase its level of support. The difference between the models is in the behavior of the citizens. Suppose that when voters receive the proposals, they respond favorably only if the distance from the proposal to their personal ideal point is smaller than some threshold. One could complicate the model by assigning the tolerance thresholds at random or possibly in correlation with some other attributes of the agent. But, as a first cut at least, this model is designed so that all agents have identical tolerance values. In the simulation shown here, a voter will only join if the squared distance is less than 500, which corresponds to a distance of roughly 22.3 units. As this distance is increased to a high level, the party model converges into the election model because the citizens are going to join whichever party is closer, even if it is quite far away. (In fact, the simulations for the electoral model are done with the exact same code as the party model, except in the election model the voters' tolerance is set at a very large number).
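
The only piece that changes between the two models is the citizen's response rule, which might be expressed as below (a sketch with my own names; a tolerance of 500 squared units gives the party model, while an absurdly large tolerance recovers the election model).

#include <math.h>
#include <stdio.h>

// Squared Euclidean distance between a position-taker's offer and an ideal point.
static double sqDistance(const double offer[2], const double ideal[2])
{
    return pow(offer[0] - ideal[0], 2) + pow(offer[1] - ideal[1], 2);
}

// Returns 0 or 1 for the side the citizen supports, or -1 for "join neither."
// A citizen backs the closer position-taker, but only within the tolerance.
int respond(const double posA[2], const double posB[2],
            const double ideal[2], double tolerance)
{
    double dA = sqDistance(posA, ideal);
    double dB = sqDistance(posB, ideal);
    double best = (dA <= dB) ? dA : dB;
    if (best >= tolerance)
        return -1;                          // no party is close enough to join
    return (dA <= dB) ? 0 : 1;
}

int main(void)
{
    double ideal[2] = {10.0, 10.0}, posA[2] = {30.0, 30.0}, posB[2] = {60.0, 60.0};
    printf("party rule   : %d\n", respond(posA, posB, ideal, 500.0));   // -1: both too far away
    printf("election rule: %d\n", respond(posA, posB, ideal, 1.0e12));  //  0: the closer side
    return 0;
}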

In the presentation of these models, I have encountered one of the classic problems in agent-based modeling. Where one might wish to present a movie showing the evolution of candidate positions, one can instead offer only a snapshot of their development. In a further exploration of the model, a large number of simulations will have to be compared.

At the current time, the state of the findings can best be seen in Figure 7.

This figure illustrates a simulation model of elections on the left and party-building on the right in an electorate with 1111 voters. As one might expect from the previous discussion of the multidimensional voting model, the median seems to have a powerful attraction for the candidates. As the simulation develops, the candidates hover around the median position. They may stray looking for positions of improvement, but inevitably they cycle back to the median. The distance between their positions is almost always less than 10 units, often less than 5.

(Figure 7 about here)

The conditions under which the candidates stray from positions close to the median are quite thought provoking. Such movements may occur if one candidate finds a majority-preferred position while the other stays at the median; the chaos theorems guarantee that such positions exist. If that candidate moves away from the median, the other will eventually follow, because doing so wins the support of voters located between their previous positions. This is, of course, reminiscent of the argument advanced by Glazer, Grofman, and Owen (1989), who argue that when a candidate is unsure of voter tastes, it is rational to move closer to the opponent, even if that means moving away from the median. It is important to note that this strange game of tag can occur in other situations as well, which explains why candidate position taking is not as stable as majority rule with an agenda setter in the previous example. Suppose both candidates are near the multidimensional median, but they simultaneously make the "mistake" of proposing a move away from it. If both propose changes, one is bound to gain support--they cannot both lose electoral support. The one whose mistake is "less severe" gains votes and infers that the move away from the median was beneficial. That candidate will make additional proposals heading away from the median, and the other will eventually follow. In this fashion, the agent/candidates play a peculiar form of the children's game of tag. The candidates wander away from the median until one of them happens to cross the other's path by making a proposal that is closer to the center, and then the competitive process draws them back to the multidimensional median.

In the right column of Figure 7 the competitive positions of two political parties are illustrated. The simulation is based on the same random number stream as the electoral model shown in the left column. In the party mobilization model, each voter is restricted to choose only one political party and the voter will join only if the party's proposal is within 22.3 units. The parties make myopic adjustments in their positions according to the same logic that drove the candidates. Their baseline position is changed only when their proposal is greeted by an increase in support, and then their next proposal continues in the same direction. If a proposal is not successful, they keep the same baseline and make a proposal according to the random process which makes the last successful proposal's direction the most likely direction of change.

The evolution of political positions taken by parties is dramatically different from the pattern exhibited by candidates. Whereas candidates who seek to increase their level of support are drawn into the center, political parties are not. If the positions of the parties begin to converge, one or both will reverse course. Perhaps equally important, unlike the electoral model, which typically predicts close elections, the party model allows the possibility that one party may develop a membership advantage and maintain it.

Does the agent-based approach help in this case? I think so. One strength is that fresh streams of random numbers can be used to repeat the simulation and to compare the developmental processes across runs. Figure 8 shows two more runs of the model. As in the previous example, the parties end up widely separated. The vital difference, however, is that the positions of the parties need not be diametrically opposed on either side of the center of the electorate. It is possible for a two-party system to evolve in which both parties maintain positions on one side of the preference distribution.

(Figure 8 about here)
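
This kind of replication is easy to automate. The sketch below is only schematic: the driver function and the placeholder random-walk "model" inside it are hypothetical stand-ins, not the Swarm code that produced Figures 7 and 8. The point is simply that independent random seeds yield independent developmental histories that can be collected and compared.

```python
import random

def run_party_model(seed, periods=500):
    """Hypothetical driver for one run under a fresh random number stream.
    The model body is reduced to a placeholder random walk; only the
    replication logic is being illustrated."""
    rng = random.Random(seed)
    position = [0.0, 0.0]
    for _ in range(periods):
        position[0] += rng.uniform(-1.0, 1.0)
        position[1] += rng.uniform(-1.0, 1.0)
    return tuple(position)

# Each seed produces an independent developmental history; comparing many
# such runs is how regularities like those in Figures 7 and 8 are assessed.
for seed in range(10):
    print(seed, run_party_model(seed))
```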

The results presented here are, of course, incomplete. The agent-based model affords many possible avenues of further exploration. As currently designed, the code allows the insertion of additional parties. Rules governing their creation and removal can be implemented to investigate the "carrying capacity" of the system. As emphasized above, the limitations on such a modeling enterprise are found not in the intractability of the model but in the exploration and management of its many parameters.

Commentary and Conclusion

This essay has tried to illustrate the differences between two approaches to political theory, rational choice and agent-based modeling. Practitioners of both approaches share a desire to study the interaction of individuals within a political system. In both approaches, there are tools to understand the way system-wide outcomes depend on many diverse individual behaviors. The creation of computer models requires much of the same detail work as the creation of a rational choice model. Assumptions about the individuals--their objectives, their mode of behavior, their choices, and their environment--must all be spelled out in detail.

There are major differences of emphasis between rational choice and agent-based modeling. The emphasis on tractability and equilibrium in rational choice surely separates it from agent-based modeling. By adopting a restrictive structure that guides the deductive process, the rational choice approach seems to block from consideration some of the questions that agent-based modelers find to be the most interesting. Agent-based modeling typically looks for emergent properties, properties which are most interesting in a model that includes individuals who adapt and change in response to their environment. Rather than assuming that actors have a fixed amount of wealth or knowledge, a computer model can allow such quantities to change and adjust over time.

The differences in assumptions and modeling combine to create different mindsets in the two research camps. The agent-based modeling enterprise is not so tightly self-organized (please pardon the use of the phrase) as is rational choice theory. The lack of structure is at once its greatest source of potential and its greatest weakness. Various "what if" questions can be explored in an agent-based setting that might not be accessible within the rational choice perspective. It is an uphill battle, however, to shape observations gleaned from many computer simulations into a coherent set of results.

References

Anderson, Philip W., Kenneth J. Arrow, and David Pines, eds. 1988. The Economy as an Evolving Complex System. Reading, Mass.: Addison-Wesley.

Axelrod, Robert. 1997. The Complexity of Cooperation. Princeton, NJ: Princeton University Press.

Axelrod, Robert. 1984. The Evolution of Cooperation. New York: Basic Books.

Bankes, Steve. 1993. Exploratory Modeling for Policy Analysis. Operations Research. 41(3): 435-449.

Davis, Otto A., and Melvin J. Hinich. 1966. Some Results Related to a Mathematical Model of Policy Formation in a Democratic Society. In J.L. Bernd, ed. Mathematical Applications in Political Science II. Dallas, TX: Southern Methodist University Press.

Davis, Otto A., Melvin J. Hinich, and Peter C. Ordeshook. 1970. An Expository Development of a Mathematical Model of the Political Process. American Political Science Review 64: 426-448.

DeAngelis, D.L., and L.J. Gross, eds. 1992. Individual-based Models and Approaches in Ecology: Populations, Communities, and Ecosystems. Chapman & Hall.

Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper and Row.

Emmeche, Claus. 1994. Is Life as a Multiverse Phenomenon? In Christopher G. Langton, ed. Artificial Life III. Reading, Mass.: Addison-Wesley.

Epstein, Joshua M., and Robert Axtell. 1996. Growing Artificial Societies: Social Science from the Bottom Up. Washington, DC: Brookings Institution Press.

Feld, Scott L., Bernard Grofman, and Nicholas R. Miller. 1989. Limits on Agenda Control in Spatial Voting Games. Mathematical and Computer Modelling 12: 405-416.

Ferejohn, John A., Richard D. McKelvey, and Edward Packel. 1984. Limiting Distributions for Continuous State Markov Voting Models. Social Choice and Welfare 1: 45-67.

Glazer, Amihai, Bernard Grofman, and Guillermo Owen. 1989. A Model of Candidate Convergence Under Uncertainty About Voter Preferences. Mathematical and Computer Modelling, 12: 471-478.

Harsanyi, John. 1967-68. Games with Incomplete Information Played By Bayesian Players. Management Science 14: 159-182, 320-334, 486-502.

Hellwig, Martin, and Wolfgang Leininger. 1990. Subgame-Perfect Equilibria in Discrete and Continuous Games: Does Discretization Matter? In Tatsuro Ichiishi, Abraham Neyman, and Yair Tauman, eds., Game Theory and Applications. San Diego, CA: Academic Press, pp. 381-2.

Hellwig, Martin, and Wolfgang Leininger. 1987. On the Existence of Subgame-Perfect Equilibrium in Infinite-action Games of Perfect Information. Journal of Economic Theory 43: 55-75.

Holland, John H. 1998. Emergence: From Chaos to Order. Reading, Mass.: Helix Books.

Huston, Michael, Donald DeAngelis, and Wilfred Post. 1988. New Computer Models Unify Ecological Theory. BioScience 38(10):682-691.

Johnson, Paul E. 1998. Social Choice: Theory and Research. Thousand Oaks, CA: Sage.

Judson, Olivia P. 1992. The Rise of the Individual-Based Model in Ecology. Tree 7(6):

Kauffman, Stuart A. 1988. The Evolution of Economic Webs. In Philip W. Anderson, Kenneth J. Arrow, and David Pines, eds. The Economy as an Evolving Complex System. Reading, Mass.: Addison-Wesley.

Kauffman, Stuart A. 1993. The Origins of Order: Self-Organization and Selection in Evolution. New York: Oxford University Press.

Kohlberg, Elon. 1990. Refinement of Nash Equilibrium: The Main Ideas. In Tatsuro Ichiishi, Abraham Neyman, and Yair Tauman, eds., Game Theory and Applications. San Diego, CA: Academic Press, pp. 3-45.

Kollman, Ken, John H. Miller, and Scott E. Page. 1992. Adaptive Parties in Spatial Elections. American Political Science Review. 86: 929-937.

Kollman, Ken, John H. Miller, and Scott E. Page. 1997. Computational Political Economy. In W. Brian Arthur, Steven N. Durlauf, and David A. Lane, eds. The Economy as an Evolving Complex System . Reading MA: Addison Wesley, pp. 461-490.

Langton, Christopher G., ed. 1989. Artificial Life. Proceedings of the Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems. Reading, Mass.: Addison-Wesley.

McKelvey, Richard D. 1976. Intransitivities in Multidimensional Voting Models and Social Choice. Journal of Economic Theory 12: 472-482.

McKelvey, Richard D. 1986. Covering, Dominance, and Institution-Free Properties of Social Choice. American Journal of Political Science 30: 283-314.

Miller, Nicholas. 1980. A New Solution Set For Tournaments and Majority Voting. American Journal of Political Science 24: 68-96.

Riker, William. 1982. Liberalism Against Populism. San Francisco, CA: W.H. Freeman.

Schelling, Thomas C. 1971. Dynamic Models of Segregation. Journal of Mathematical Sociology 1: 143-86.

Schofield, Norman. 1978a. Instability of Simple Dynamic Games. Review of Economic Studies 45: 575-594.

Schofield, Norman. 1978b. Generic Instability of Voting Games. Presented at the Annual Meeting of the Public Choice Society, New Orleans, La.

Shepsle, Kenneth. 1979. Institutional Arrangements and Equilibrium in Multidimensional Voting Models. American Journal of Political Science 23: 27-59.

Simon, Herbert. 1957. Models of Man. New York: Wiley.

Simon, Herbert. 1981. The Architecture of Complexity. In The Sciences of the Artificial, 2ed. Cambridge, Mass: MIT Press.

Simon, Herbert. 1990. Prediction and Prescription in Systems Modeling. Operations Research 38 (1): 7-14.

Tullock, Gordon. 1967. The General Irrelevance of the General Impossibility Theorem. Quarterly Journal of Economics 81: 256-270.

Tullock, Gordon. 1981. Why So Much Stability. Public Choice 37: 189-202.