R Costanza, L Wainger, C Folke, K Mäler, 1993 | BioScience, 43(8) 545-555 |
R. Boero and F. Squazzoni, 2005 | Journal of Artificial Societies and Social Simulation 8(4) | html |
Abstract: The paper deals with the use of empirical data in social science agent-based models. Agent-based models are too often viewed just as highly abstract thought experiments conducted in artificial worlds, in which the purpose is to generate and not to test theoretical hypotheses in an empirical way. On the contrary, they should be viewed as models that need to be embedded into empirical data both to allow the calibration and the validation of their findings. As a consequence, the search for strategies to find and extract data from reality, and integrate agent-based models with other traditional empirical social science methods, such as qualitative, quantitative, experimental and participatory methods, becomes a fundamental step of the modelling process. The paper argues that the characteristics of the empirical target matter. According to characteristics of the target, ABMs can be differentiated into case-based models, typifications and theoretical abstractions. These differences pose different challenges for empirical data gathering, and imply the use of different validation strategies.
A Crooks, C Castle, and M Batty, 2007 | Agents2007 |
The five challenges that we see as important to their [agent-based models] development involve the following:
purpose of the model | The purpose of agent-based models ranges from explanatory to predictive | |
---|---|---|
dependence of the model on theory | Agent-based models are often considered generic, independent of any particular field or application, and hence usable for any purpose that arises in an ad hoc way. In short, the scientific standards of the past are often buried in ad hoc model development | |
representation of agents and their dynamics | Entities that do not move, such as cells in cellular automata, we would not define as agents in this context. As we aggregate, it becomes more and more difficult to define the relevant processes | |
calibration, validation and verification of the model against theory and data | ||
the development of operational models through software | the object oriented paradigm allows the integration of additional functionality from libraries not provided by the simulation/modelling toolkit. | Of particular interest here is the integration from GIS software libraries, which provide ABM toolkits with greater data management and spatial analytical capabilities. |
M. W. Macy and R. Willer , 2002 | Annual Review of Sociology 28:143-166 |
Abstract: Sociologists often model social processes as interactions among variables. We review an alternative approach that models social life as interactions among adaptive agents who influence one another in response to the influence they receive. These agent-based models (ABMs) show how simple and predictable local interactions can generate familiar but enigmatic global patterns, such as the diffusion of information, emergence of norms, coordination of conventions, or participation in collective action. Emergent social patterns can also appear unexpectedly and then just as dramatically transform or disappear, as happens in revolutions, market crashes, fads, and feeding frenzies. ABMs provide theoretical leverage where the global patterns of interest are more than the aggregation of individual attributes, but at the same time, the emergent pattern cannot be understood without a bottom up dynamical model of the microfoundations at the relational level. We begin with a brief historical sketch of the shift from “factors” to “actors” in computational sociology that shows how agent-based modeling differs fundamentally from earlier sociological uses of computer simulation. We then review recent contributions focused on the emergence of social structure and social order out of local interaction. Although sociology has lagged behind other social sciences in appreciating this new methodology, a distinctive sociological contribution is evident in the papers we review. First, theoretical interest focuses on dynamic social networks that shape and are shaped by agent interaction. Second, ABMs are used to perform virtual experiments that test macrosociological theories by manipulating structural factors like network topology, social stratification, or spatial mobility. We conclude our review with a series of recommendations for realizing the rich sociological potential of this approach.
M T Parker, 2001 | Journal of Artificial Societies and Social Simulation | 4(1) | html |
Abstract: Ascape is a framework designed to support the development, visualization, and exploration of agent based models. In this article I will argue that agent modeling tools and Ascape, in particular, can contribute significantly to the quality, creativity, and efficiency of social science simulation research efforts. Ascape is examined from the perspectives of use, design, and development. While Ascape has some unique design advantages, a close examination should also provide potential tool users with more insight into the kinds of services and features agent modeling toolkits provide in general.
There are very good tools (AgentSheets, StarLogo) available now that address this need for relatively simple models. For more complex (and in some cases, more testable) models, there is a significant gap. To address this gap, the development of significant end-user programming and composition features […]
R Leombruni and M Richiardi, 2005 | Physica A: Statistical Mechanics and its Applications |
Abstract: Despite many years of active research in the field and a number of fruitful applications, agent-based modeling has not yet made it through to the top ranking economic journals. In this paper we investigate why. We look at the following problematic areas: (i) interpretation of the simulation dynamics and generalization of the results, and (ii) estimation of the simulation model. We show that there exist solutions for both these issues. Along the way, we clarify some confounding differences in terminology between computer science and economic literature.
M Richiardi, R Leombruni, N Saam and M Sonnessa, 2006 | Journal of Artificial Societies and Social Simulation | html |
Abstract: Traditional (i.e. analytical) modelling practices in the social sciences rely on a very well established, although implicit, methodological protocol, both with respect to the way models are presented and to the kinds of analysis that are performed. Unfortunately, computer-simulated models often lack such a reference to an accepted methodological standard. This is one of the main reasons for the scepticism among mainstream social scientists that results in low acceptance of papers with agent-based methodology in the top journals. We identify some methodological pitfalls that, in our view, are common in papers employing agent-based simulations, and propose appropriate solutions. We discuss each issue with reference to a general characterization of dynamic micro models, which encompasses both analytical and simulation models. Along the way, we also clarify some confusing terminology. We then propose a three-stage process that could lead to the establishment of methodological standards in social and economic simulations.
The advantage of agent-based simulations over more traditional approaches lies in the flexibility they allow in model specification. Of course more freedom means more heterogeneity. While analytical models generally build on the work of their predecessors, agent-based simulations often depart radically from the existing literature.
There are some basic features that characterize a simulation model (made concrete in the sketch after the two tables below).
Technical:
the treatment of time | discrete or continuous |
---|---|
the treatment of fate | stochastic or deterministic |
the representation of space | topology |
the population evolution | birth and death processes |
Less technical:
the treatment of heterogeneity | which variables differ across individuals and how |
---|---|
the interaction structure | localized or non-localized |
the coordination structure | centralized, decentralized |
the type of individual behaviour | optimising, satisficing, etc. |
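A minimal sketch of how these features could be recorded declaratively, in Python; all names and defaults here are invented for illustration, not taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Time(Enum):
    DISCRETE = auto()
    CONTINUOUS = auto()

class Fate(Enum):
    STOCHASTIC = auto()
    DETERMINISTIC = auto()

@dataclass
class ModelSpec:
    """Declarative record of the basic features of a simulation model."""
    time: Time = Time.DISCRETE            # treatment of time
    fate: Fate = Fate.STOCHASTIC          # treatment of fate
    space: str = "2d-torus"               # representation of space (topology)
    birth_death: bool = False             # population evolution
    heterogeneous: tuple = ("wealth",)    # which variables differ across individuals
    local_interaction: bool = True        # localized vs non-localized interaction
    centralized: bool = False             # coordination structure
    behaviour: str = "satisficing"        # optimising, satisficing, etc.

# The defaults above describe one typical agent-based configuration.
spec = ModelSpec(time=Time.DISCRETE, behaviour="optimising")
```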
In ABM individual behaviour is generally less sophisticated, and expectations are sometimes not even defined
Good talk about
Benenson and Torrens, 2004 | Computers, Environment and Urban Systems |
The familiar regional models detailing the exchange of population, goods, and jobs between coarsely represented divisions of geographical space have been substituted by simulation of urban systems as collectives of numerous elements acting in the city.
The motivation behind the move to individual-scale simulation is clear:
entity-level representations.
object representation can avoid the drawbacks of the so-called ecological fallacy.
GIS provide an extraordinary background for geosimulation, because the information that they contain relates to 'atomic' urban objects […] GIS enables the encoding of spatial objects and information on their attributes into simulation models and provides methods for relating objects based on their proximity, intersection, adjacency or visibility.
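These object relations are directly available in standard GIS libraries. A hedged sketch using the Python package shapely (assuming it is installed; the parcel geometries are invented toy examples), relating objects by adjacency, intersection and proximity:

```python
from shapely.geometry import Polygon

# Three toy land parcels; coordinates are invented for illustration.
a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
b = Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])  # shares an edge with a
c = Polygon([(5, 5), (6, 5), (6, 6), (5, 6)])  # detached from both

print(a.touches(b))       # True:  adjacency (common boundary, no overlap)
print(a.intersects(c))    # False: no shared points at all
print(a.distance(c))      # proximity: shortest distance between parcels
print(a.distance(c) < 1)  # e.g. a neighbourhood predicate for a simulation
```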
S. C. Bankes, 2002 | PNAS | 14 citations in Scholar |
Abstract: A clear consensus among the papers in this Colloquium is that agent-based modeling is a revolutionary development for social science. However, the reasons to expect this revolution lie more in the potential seen in this tool than through realized results. In order for the anticipated revolution to occur, a series of challenges must be met. This paper surveys the challenges suggested by the papers of this session.
to evaluate this proposed revolution, what matters is not the computer science advances that make ABM possible, but rather the social science challenges that make it necessary.
Three generic reasons that make ABM important to social sciences:
H. Couclelis, 2001 | Proceedings of a Special Workshop on Land-Use/Land-Cover Change |
One major difference in practical terms [between “design” and “analytic”] is that when you design something you have direct (partial or total) control over the outcome, whereas when you analyze something that's “out there” you can only hope that you guessed correctly. (When she says that an agent is designed, she does not believe that a group of agents can be more complex than the sum of its parts.)
Modelling with agents and space leads to four cases:
 | Agents designed | Agents analysed |
---|---|---|
Environment designed | 'Social laboratories'; they serve as abstract thought experiments at best. | Behavioural experiments where natural subjects are observed within controlled laboratory conditions. It is always questionable whether the rules thus derived will also be valid 'out there' in the real world. |
Environment analysed | Robots designed to operate within pre-existing environments. These can be effective in practice, though they can be defeated by the complexity of the real environments within which they operate. | The only case that concerns LUCC: descriptive, predictive or explanatory models. A descriptive model can always be produced given enough free parameters. Predictive models based on theory are by that token also explanatory models, though not all explanatory models are also predictive (e.g., the causal relations identified may change over time in unpredictable ways). Reasonably reliable predictive and explanatory models of land-use change would be of tremendous value to planning and policymaking, but after forty years of effort in that area the success stories are still quite limited. |
The question is whether the benefits of that approach to spatial modeling exceed the considerable costs of the added dimensions of complexity introduced into the modeling effort.
W. Brian Arthur, 2006 | Handbook of Computational Economics |
Abstract: Standard neoclassical economics asks what agents’ actions, strategies, or expectations are in equilibrium with (consistent with) the outcome or pattern these behaviors aggregatively create. Agent-based computational economics enables us to ask a wider question: how agents’ actions, strategies, or expectations might react to—might endogenously change with—the patterns they create. In other words, it enables us to examine how the economy behaves out of equilibrium, when it is not at a steady state. This out-of-equilibrium approach is not a minor adjunct to standard economic theory; it is economics done in a more general way. When examined out of equilibrium, economic patterns sometimes simplify into a simple, homogeneous equilibrium of standard economics; but just as often they show perpetually novel and complex behavior. The static equilibrium approach suffers two characteristic indeterminacies: it cannot easily resolve among multiple equilibria; nor can it easily model individuals’ choices of expectations. Both problems are ones of formation (of an equilibrium and of an “ecology” of expectations, respectively), and when analyzed in formation—that is, out of equilibrium—these anomalies disappear.
how patterns in the economy form, and usually such formation is too complicated to be handled analytically - hence the resort to computer simulation.
individual behaviours collectively create an aggregate outcome; and they react to this outcome. […] what strategies, moves, or allocations are consistent with, given the strategies, moves, allocations his rivals might choose?
To ensure tractability we usually have to assume homogeneous agents, or at most two or three classes of agents. We have to assume agent behaviour that is intelligent but has no incentive to change. […] it is natural to ask how the economy behaves when it is not at a steady state - when it is out of equilibrium.
Two possible objections to doing economics this way:
Some examples that make ABM better than analytical models:
M Wooldridge, N. R. Jennings, 1995 | The Knowledge Engineering Review | 2983 citations in Scholar |
Abstract: The concept of an agent has become important in both AI and mainstream computer science. Our aim in this paper is to point the reader at what we perceive to be the most important theoretical and practical issues associated with the design and construction of intelligent agents. For convenience, we divide these issues into three areas (though as the reader will see, the divisions are at times somewhat arbitrary). Agent theory is concerned with the question of what an agent is, and the use of mathematical formalisms for representing and reasoning about the properties of agents. Agent architectures can be thought of as software engineering models of agents; researchers in this area are primarily concerned with the problem of designing software or hardware systems that will satisfy the properties specified by agent theorists. Finally, agent languages are software systems for programming and experimenting with agents; these languages may embody principles proposed by theorists. The paper is not intended to serve as a tutorial introduction to all the issues mentioned; we hope instead simply to identify the most important issues, and point to work that elaborates on them. The article includes a short review of current and potential applications of agent technology.
Properties of a weak definition for agents: autonomy, social ability, reactivity, and pro-activeness.
[…] in mainstream computer science, the notion of an agent as a self-contained, concurrently executing software process, that encapsulates some state and is able to communicate with other agents via message passing, is seen as a natural development of the object-based concurrent programming paradigm.
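A minimal sketch of that notion, assuming nothing beyond the Python standard library: each agent is a concurrently executing process that encapsulates its own state and is reachable only through message passing.

```python
import threading, queue

class Agent(threading.Thread):
    """Self-contained concurrent process; state is private, interaction is
    exclusively via messages placed in the agent's mailbox."""
    def __init__(self, name):
        super().__init__()
        self.name = name
        self.mailbox = queue.Queue()
        self.received = 0  # encapsulated state, never touched from outside

    def send(self, other, content):
        other.mailbox.put((self.name, content))

    def run(self):
        while True:
            sender, content = self.mailbox.get()
            if content == "stop":
                return
            self.received += 1
            print(f"{self.name} got {content!r} from {sender}")

a, b = Agent("a"), Agent("b")
a.start(); b.start()
a.send(b, "hello"); b.send(a, "hi")
a.send(b, "stop"); b.send(a, "stop")
a.join(); b.join()
```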
In a stronger definition, we have agents with all the above properties, together with some mentalistic notions (knowledge, belief, intention, obligation), and perhaps some emotional characteristics. Some other attributes can be: mobility, veracity, benevolence, and rationality.
It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, one that invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires (Shoham, 1990). Put crudely, the more we know about a system, the less we need to rely on animistic (anthropomorphic), intentional explanations of its behaviour. However, with complex systems, a mechanistic explanation of its behaviour may not be practicable. The paper then gives the example of clicking a mouse on an icon.
An agent must be represented in terms of at least one information attitude (belief, knowledge), and at least one pro-attitude (desire, intention, obligation, choice, etc.).
The authors describe a classical logic augmented with two modal operators, necessarily and possibly, and some new axioms. This logic is unsuitable for representing resource-bounded believers, since it suffers from two problems collectively called logical omniscience: the agent believes every valid formula, and its beliefs are closed under logical consequence. The authors then describe several attempts to formally represent goals, desires, action and intention, with many critiques by other authors. These problems cannot be regarded as solved, yet real agents are resource-bounded. One solution is to ground possible-worlds semantics, giving worlds a precise interpretation in terms of the world (Rosenschein and Kaelbling).
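For reference, this is the standard normal modal logic; the axioms and rule below are textbook material rather than a quotation from the paper. Reading the box as "the agent believes", the necessitation rule makes the agent believe every valid formula, and axiom K closes its beliefs under logical consequence; these are the two faces of logical omniscience.

```latex
% Possibility as the dual of necessity
\Diamond\varphi \;\equiv\; \neg\Box\neg\varphi

% Axiom K: beliefs are closed under logical consequence
\Box(\varphi \rightarrow \psi) \rightarrow (\Box\varphi \rightarrow \Box\psi)

% Necessitation rule: every valid formula is believed
\frac{\vdash \varphi}{\vdash \Box\varphi}
```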
“An agent architecture is a general methodology for designing particular modular decompositions of particular tasks.” (Kaelbling, 91)
The two main problems with symbolic AI are: (1) translating the real world into an accurate, adequate symbolic description, and (2) how to represent and process this information (most algorithms are computationally intractable).
Due to the problems of symbolic AI, some authors propose reactive architectures, without any symbolic world model or symbolic reasoning. Brooks has two key ideas: situatedness and embodiment (real intelligence is situated in the world, not in disembodied systems such as theorem provers or expert systems), and intelligence and emergence (intelligent behaviour arises from an agent's interaction with its environment).
Some reactive approaches are presented, and then the authors describe some hybrid architectures, neither completely reactive nor completely deliberative: agents composed of processing layers that can communicate. Some authors propose reacting to urgent events while using the symbolic machinery to reason about and schedule more persistent actions.
Purely reactive approaches seem to be ad hoc, and this lack of methodology makes the symbolic approach more successful "nowadays". The hybrid approach has advantages over both, but a remaining problem is how to combine multiple deliberative and reactive systems cleanly.
R Axelrod, 1997 | Complexity | 238 citations in Scholar |
Abstract: Advancing the state of the art of simulation in the social sciences requires appreciating the unique value of simulation as a third way of doing science, in contrast to both induction and deduction. This essay offers advice for doing simulation research, focusing on the programming of a simulation model, analyzing the results and sharing the results with others. Replicating other people’s simulations gets special emphasis, with examples of the procedures and difficulties involved in the process of replication. Finally, suggestions are offered for building a community of social scientists who do simulation.
Simulation is a third way of doing science. Like deduction, it starts with a set of explicit assumptions. But unlike deduction, it does not prove theorems. Instead, a simulation generates data that can be analyzed inductively. Unlike typical induction, however, the simulated data comes from a rigorously specified set of rules rather than direct measurement of the real world. While induction can be used to find patterns in data, and deduction can be used to find consequences of assumptions, simulation modeling can be used as an aid to intuition.
when the agents use adaptive rather than optimizing strategies, deducing the consequences is often impossible; simulation becomes necessary.
the goal of agent-based modeling is to enrich our understanding of fundamental processes that may appear in a variety of applications. This requires adhering to the KISS principle. The point is that while the topic being investigated may be complicated, the assumptions underlying the agent-based model should be simple. The complexity of ABM should be in the simulated results, not in the assumptions of the model. [this approach is contested by Edmonds and Moss, 2005].
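A concrete instance of KISS is a bare-bones Schelling-style segregation model: the assumptions take a dozen lines, yet the run yields clustering that is hard to anticipate from the rules. The sketch below is illustrative (not from Axelrod's paper), with made-up parameter values:

```python
import random

SIZE, EMPTY, THRESHOLD, STEPS = 20, 0.1, 0.4, 20_000

# 0 = empty cell; 1 and 2 are the two agent types.
grid = [[0 if random.random() < EMPTY else random.choice((1, 2))
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """Unhappy if the share of like neighbours falls below THRESHOLD."""
    kind, like, total = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            n = grid[(r + dr) % SIZE][(c + dc) % SIZE]  # toroidal wrap
            if n:
                total += 1
                like += n == kind
    return total > 0 and like / total < THRESHOLD

for _ in range(STEPS):  # asynchronous updating: one random agent at a time
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c] and unhappy(r, c):
        er, ec = random.randrange(SIZE), random.randrange(SIZE)
        if not grid[er][ec]:
            grid[er][ec], grid[r][c] = grid[r][c], 0  # relocate

print("unhappy agents left:",
      sum(unhappy(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c]))
```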
The programming of a simulation model should achieve three goals: validity, usability, and extendability.
[…] replication is a feasible, although rarely performed, part of the process of advancing computer simulation in the social sciences. In particular it would pay to replicate a diverse set of simulation models to see what types of problems arise.
The author selects eight models to replicate, and found the following problems when replicating:
The question now is what it would take to promote the growth and success of social science simulation. My answer comes in three parts: progress in methodology, progress in standardization, and progress in institution building.
J M Vidal, P A Buhler, M N Huhns, 2001 | Internet Computing, IEEE | 30 citations in Scholar |
Agent features relevant to implementation are unique identity, proactivity, persistence, autonomy, and sociability. There is a good figure showing the interaction between the agent and the environment.
An agent’s input can be a piece of sensory information, a message from another agent, or an event defined by the agent.
TerraME: Note that these three types of input match the three basic concepts of TerraME: space, behaviour and time, respectively.
It would be interesting for TerraME to provide some methodology for behavioural or spatial events. A behavioural event can be just a function call. Today a spatial event occurs at the beginning of a temporal event (the time to “ground” the agent).
They present UML diagrams for building two types of agents: reactive and BDI (Belief-Desire-Intention).
A behaviour is distinguished from an action in that an action is an atomic event, while a behavior can span a longer period of time.
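The three input types can be made explicit in code. A hedged sketch of a reactive agent that dispatches on sensory information, messages from other agents, and self-scheduled events; the class and method names are my own invention, not from the paper:

```python
import heapq

class ReactiveAgent:
    """Dispatches the three kinds of agent input named above."""
    def __init__(self, name):
        self.name = name
        self.events = []  # (time, event) heap of self-defined events

    def on_sense(self, percept):            # sensory information
        print(self.name, "senses", percept)

    def on_message(self, sender, content):  # message from another agent
        print(self.name, "got", content, "from", sender)

    def schedule(self, at, event):          # event defined by the agent itself
        heapq.heappush(self.events, (at, event))

    def advance(self, now):                 # fire all events due by 'now'
        while self.events and self.events[0][0] <= now:
            _, event = heapq.heappop(self.events)
            print(self.name, "fires", event)

a = ReactiveAgent("a1")
a.on_sense({"cell": (3, 4), "cover": "forest"})
a.on_message("a2", "offer")
a.schedule(1.0, "re-evaluate")
a.advance(1.0)
```

Under the TerraME reading above, on_sense would correspond to space, on_message to behaviour, and schedule/advance to time.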
One of the authors edits and maintains the http://www.multiagent.com/ Web site.
D G Brown, 2006 | In H. Geist, Ed. The Earth’s Changing Land: An Encyclopedia of Land-Use and Land-Cover Change |
Agent-based models for LUCC are nearly always spatially explicit, which means that the agents and/or their actions are referenced to particular locations on the Earth’s surface. For this reason, many agent-based models have either direct or indirect interaction with GIS.
traditional method | alternative |
---|---|
rationality | impose limits on the amount of effort agents use to search for and/or evaluate alternatives |
homogeneity | drawing parameters of a utility function from a statistical distribution, or using alternative decision making approaches for different agent types |
utility maximization | use satisficing behavior, in which agents select alternatives that are “good enough” using heuristics to determine agent choices |
determinism | introduce randomness in environmental conditions, information availability, or decision outcomes using stochastic processes |
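The alternatives in this table compose naturally into one decision routine. A sketch under invented parameters: each agent draws its own utility weight from a distribution (heterogeneity), inspects only a few options (bounded rationality), adds noise to outcomes (stochasticity), and accepts the first option that clears its aspiration level (satisficing):

```python
import random

def make_agent(rng):
    # Heterogeneity: each agent draws its own utility weight.
    return {"weight": rng.gauss(1.0, 0.2), "aspiration": 0.5}

def choose(agent, options, rng, max_evals=3):
    """Bounded, satisficing choice: inspect at most max_evals options and
    return the first whose noisy utility is 'good enough'."""
    inspected = rng.sample(options, min(max_evals, len(options)))
    for value in inspected:
        utility = agent["weight"] * value + rng.gauss(0, 0.05)  # stochastic outcome
        if utility >= agent["aspiration"]:
            return value
    return inspected[-1]  # settle for the last option inspected

rng = random.Random(42)
print(choose(make_agent(rng), [0.2, 0.4, 0.6, 0.8], rng))
```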
What is the difference between rationality and utility maximization?
Because an ABM is a dynamical system, the model can incorporate positive and negative feedbacks, such that the behavior of an agent has an influence on the subsequent behavior of other agents. These feedbacks can be used to represent the endogeneity of various driving forces of land-use and land-cover change.
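A toy illustration of such a positive feedback, with invented numbers: each adoption raises the adoption share, which in turn raises the probability that the remaining agents adopt, so the driving force is endogenous to the model.

```python
import random

rng = random.Random(0)
adopted = [False] * 100
for step in range(51):
    share = sum(adopted) / len(adopted)
    if step % 10 == 0:
        print(step, round(share, 2))
    for i, has in enumerate(adopted):
        # Adoption probability grows with the current share: positive feedback.
        if not has and rng.random() < 0.01 + 0.2 * share:
            adopted[i] = True
```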
Steps for building ABM:
J G Polhill, L R Izquierdo, N M Gotts, 2003 | International Conference of the European Social Simulation Association | html | 10 citations in Scholar |
Abstract: This paper will explore the effects of errors in floating point arithmetic in two published agent-based models: the first a model of land use change, the second a model of the stock market. The first example demonstrates how branching statements with floating point operands of comparison operators create a high degree of nonlinearity, leading in this case to the creation of 'ghost' agents - visible to some parts of the program but not to others. A potential solution to this problem is proposed. The second example shows how mathematical descriptions of models in the literature are insufficient to enable exact replication of work since mathematically equivalent implementations in terms of real number arithmetic are not equivalent in terms of floating point arithmetic.
David Hales noted that reimplementation of agent-based models should not use the original source code, as they would then introduce the same artefacts. A distinction needs to be made between reimplementation and repetition in this context. A supposed advantage of using computer simulation is that experiments can be exactly repeated, allowing the confirmation of the results obtained by other authors, including errors and artefacts. Reimplementation is, of course, a more rigorous test of a reported effect, but this does not necessarily mean that repetition is without value. Repetition is an exercise researchers can undertake themselves with relatively little effort. Compiling the model on different platforms at least provides a check that results are not dependent on such things as operating system, compiler and software library versions.
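The hazard is easy to reproduce: two mathematically equivalent expressions differ in their last bits, and a branch on a floating-point comparison amplifies that difference into a different execution path. A self-contained illustration (my own, not the code from the paper):

```python
a, b, c = 0.1, 0.2, 0.3

# Mathematically (a + b) + c == a + (b + c); in floating point it is not.
print((a + b) + c == a + (b + c))  # False
print((a + b) + c, a + (b + c))    # 0.6000000000000001 0.6

# A branch on the comparison turns the last-bit difference into
# qualitatively different behaviour ('ghost' agents in the paper's case).
for total in ((a + b) + c, a + (b + c)):
    if total > 0.6:
        print("agent acts")
    else:
        print("agent stays idle")
```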
R Sengupta and R Sieber, 2007 | Transactions in GIS |
Abstract: The use of the related terms “agent-based”, “multi-agent”, “software agent” and “intelligent agent” has witnessed significant growth in the Geographic Information Science (GIScience) literature in the past decade. These terms usually refer to both artificial life agents that simulate human and animal behavior and software agents that support human-computer interactions. In this article we first comprehensively review both types of agents. Then we argue that both these categories of agents borrow from Artificial Intelligence (AI) research, requiring them to share the characteristics of and be similar to AI agents. We also argue that geospatial agents form a distinct category of AI agents because they are explicit about geography and geographic data models. Our overall goal is to first capture the diversity of, and then define and categorize GIScience agent research into geospatial agents, thereby capturing the diversity of agent-oriented architectures and applications that have been developed in the recent past to present a holistic review of geospatial agents.
ALGA - Artificial Life Geospatial Agents
SGA - Software Geospatial Agents
whereas ALGA focus on modelling social interactions and responses to stimuli […], SGA act as software assistants to computer users, managing and automating specific hardware/software tasks. The paper presents a vast literature review on these two topics. ALGA: adoption of agricultural practices/subsidies, patterns of human movement (and settlement), human social collaboration, movement of animals, and LUCC.
Characteristics of ALGA:
They discuss the differences between agents and geospatial software, arguing that they are the same as the differences between agents and “common” software.
Differences between agents and ALGA:
The paper concludes that geospatial agents represent a distinct instance of intelligent agents “because of the explicit geographic or spatial nature of the agents and the tools which geographers can bring to examine them (e.g. scale, extent, proximity and topology)”.
D. G. Brown and R. Riolo and D. T. Robinson and M. North and W. Rand, 2005 | Journal of Geographical Systems |
Abstract: The use of object-orientation for both spatial data and spatial process models facilitates their integration, which can allow exploration and explanation of spatial-temporal phenomena. In order to better understand how tight coupling might proceed and to evaluate the possible functional and efficiency gains from such a tight coupling, we identify four key relationships affecting how geographic data (fields and objects) and agent-based process models can interact: identity, causal, temporal and topological. We discuss approaches to implementing tight integration, focusing on a middleware approach that links existing GIS and ABM development platforms, and illustrate the need and approaches with example agent-based models.
ABMs often use relatively limited representations of space. For example, ABMs frequently use hypothetical spaces based on square or hexagonal tessellations, and only recently have ABMs begun to use real-world spatial data. To avoid edge effects on the performance of some models, researchers commonly use a toroidal representation of space, which wraps around top to bottom and left to right, and vice versa. The rich temporal representations (agents and processes) of agent-based models therefore complement the spatial data representations (fields, objects and functions) of GIS. The object-oriented nature of both presents tremendous opportunities for their integration.
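Toroidal wrapping is typically a one-line modulo operation on indices. A minimal sketch (names illustrative):

```python
def torus_neighbours(r, c, rows, cols):
    """Moore neighbourhood on a torus: indices wrap with modulo, so edge
    cells see cells on the opposite edge and no cell has a boundary."""
    return [((r + dr) % rows, (c + dc) % cols)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

print(torus_neighbours(0, 0, 5, 5))  # the 'corner' borders (4, 4), (0, 4), ...
```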
Relationships: identity, causal, temporal and topological.
The authors discuss three types of integration: tight coupling, loose coupling, and developing a third environment that can access both.
D. T. Robinson and D. G. Brown and D. C. Parker and P. Schreinemachers and M. A. Janssen and M. Huigen and H. Wittmer and N. Gotts and P. Promburom and E. Irwin and T. Berger and F. Gatzweiler and C. Barnaud, 2007 | Journal of Land Use Science |
Abstract: The use of agent-based models (ABMs) for investigating land-use science questions has been increasing dramatically over the last decade. Modelers have moved from ‘proofs of existence’ toy models to case-specific, multi-scaled, multi-actor, and data-intensive models of land-use and land-cover change. An international workshop, titled ‘Multi-Agent Modeling and Collaborative Planning—Method2Method Workshop’, was held in Bonn in 2005 in order to bring together researchers using different data collection approaches to informing agent-based models. Participants identified a typology of five approaches to empirically inform ABMs for land use science: sample surveys, participant observation, field and laboratory experiments, companion modeling, and GIS and remotely sensed data. This paper reviews these five approaches to informing ABMs, provides a corresponding case study describing the model usage of these approaches, the types of data each approach produces, the types of questions those data can answer, and an evaluation of the strengths and weaknesses of those data for use in an ABM.
Five approaches to inform ABM for LUCC: sample surveys, participant observation, field and laboratory experiments, companion modeling, and GIS and remotely sensed data.
mathematical and statistical models place emphasis on fitting parameters to observations (Brown et al. 2004, Verburg et al. 2006)
Cosma Shalizi (blog: three-toed sloth) has many good links where you might find something relevant. Here are some:
http://cscs.umich.edu/~crshalizi/notebooks/agent-based-modeling.html
http://www.cscs.umich.edu/~crshalizi/weblog/517.html
and his 'chaos, complexity and inference' course which puts ABM in a wider context:
http://cscs.umich.edu/~crshalizi/weblog/598.html
Somewhere - I can't find where - he also links to this paper:
http://www.isi.edu/~lerman/papers/isitr529.pdf
“A General Methodology for Mathematical Analysis of Multi-Agent Systems” which attempts to show that at least some set of ABMs can be replaced with differential equations - so an implicit critique.
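The flavour of that reduction can be shown on a toy susceptible-infected-susceptible process: a stochastic agent model with uniform mixing and its mean-field differential equation land close to each other for large populations. The sketch is my own illustration, not taken from the linked paper:

```python
import random

N, beta, gamma, T = 1000, 0.3, 0.1, 200
rng = random.Random(1)

# Agent-based version: each agent is 'S' or 'I', mixing is uniform.
agents = ["I"] * 10 + ["S"] * (N - 10)
for _ in range(T):
    infected = agents.count("I")  # count once per step (synchronous update)
    for k, state in enumerate(agents):
        if state == "S" and rng.random() < beta * infected / N:
            agents[k] = "I"
        elif state == "I" and rng.random() < gamma:
            agents[k] = "S"

# Mean-field replacement: di/dt = beta*i*(1 - i) - gamma*i, Euler steps.
i = 10 / N
for _ in range(T):
    i += beta * i * (1 - i) - gamma * i

print("ABM fraction infected:", agents.count("I") / N)
print("ODE fraction infected:", round(i, 3))
```

Both runs settle near the analytical endemic level 1 - gamma/beta, which is the point of the reduction: when mixing is uniform and N is large, the differential equation summarizes the agent model.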
S. Arai, K. Sycara, and T. R. Payne, 2000 | ICMAS2000 |
R. A. Brooks, 1990 | Robotics and Autonomous Systems | 496 citations in Scholar |
Abstract: There is an alternative route to AI that diverges from the directions pursued under that banner for the last thirty some years. The traditional approach has emphasized the abstract manipulation of symbols, whose grounding in physical reality has rarely been achieved. We explore a research methodology which emphasizes ongoing physical interaction with the environment as the primary source of constraint on the design of intelligent systems. We show how this methodology has recently had significant successes on a par with the most successful classical efforts. We outline plausible future work along these lines which can lead to vastly more ambitious systems.
Nouvelle AI is based on the physical grounding hypothesis. This hypothesis states that to build a system that is intelligent it is necessary to have its representations grounded in the physical world. Our experience with this approach is that once this commitment is made, the need for traditional symbolic representations soon fades entirely. The key observation is that the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough.
[…] but for another task (e.g., deciding whether the bananas are rotten) quite a different representation might be important. Psychophysical evidence certainly points to perception being an active and task dependent operation. The effect of the symbol system hypothesis has been to encourage vision researchers to quest after the goal of a general purpose vision system which delivers complete descriptions of the world in a symbolic form. Only recently has there been a movement towards active vision which is much more task dependent, or task driven.
In order to explore the construction of physically grounded systems we have developed a computational architecture known as the subsumption architecture. A subsumption program is built on a computational substrate that is organized into a series of incremental layers, each, in the general case, connecting perception to action. In our case the substrate is networks of finite state machines augmented with timing elements.
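A toy arbitration loop conveys the layering idea, though it glosses over Brooks's actual substrate of concurrent finite state machines with suppression and inhibition wires; everything below is an invented simplification:

```python
# Each layer maps a percept to an action proposal, or None to stay silent.
def avoid(percept):    # lowest-level competence: keep away from obstacles
    return "turn-away" if percept["obstacle"] else None

def explore(percept):  # higher competence: head for a visible goal
    return "approach-goal" if percept["goal_visible"] else None

def wander(percept):   # default competence: always proposes something
    return "move-forward"

# Arbitration: the first layer that proposes an action wins; here obstacle
# avoidance is given top priority and exploring subsumes wandering.
LAYERS = [avoid, explore, wander]

def act(percept):
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:
            return action

print(act({"obstacle": True, "goal_visible": False}))   # turn-away
print(act({"obstacle": False, "goal_visible": True}))   # approach-goal
print(act({"obstacle": False, "goal_visible": False}))  # move-forward
```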
We do not usually complain that a medical expert system, or an analogy program cannot climb real mountains. It is clear that their domain of expertise is somewhat more limited, and that their designers were careful to pick a well circumscribed domain in which to work. Likewise it is unfair to claim that an elephant has no intelligence worth studying just because it does not play chess.
Discussion: The question is whether, in the case of modelling, symbolic AI is more appropriate than the nouvelle AI approach. Why?
Because the model has a fixed representation, and all the beliefs of the agents are based on the limits of the model.
Therefore this active vision is contextualized when we describe all the data that the model will work with. OK, but the question of time complexity remains.
B Edmonds and S Moss, 2005 | LNCS - Multi-Agent and Multi-Agent-Based Simulation | 17 citations in Scholar |
Abstract. A new approach is suggested under the slogan “Keep it Descriptive Stupid” (KIDS) that encapsulates a trend in increasingly descriptive agent-based social simulation. The KIDS approach entails one starts with the simulation model that relates to the target phenomena in the most straightforward way possible, taking into account the widest possible range of evidence, including anecdotal accounts and expert opinion. Simplification is only applied if and when the model and evidence justify this. This contrasts sharply with the KISS approach where one starts with the simplest possible model and only moves to a more complex one if forced to. An example multi-agent simulation of domestic water demand and social influence is described.
We are limited beings, which could explain why simplicity has such advantages and attractions. We have a tendency to project our own characteristics upon the natural world; we would like to think that everything has an “inner” simplicity even if it appears to be very complicated. Sometimes this is expressed as an assumption that simplicity is a (fallible) guide to truth, sometimes by conflating simplicity with generality (hiding the assumption that simpler models will apply to a greater variety of real cases, an assumption that is often unjustified).
The fact that complex outcomes can emerge from apparently simple systems does not mean that the complex phenomena we now observe are reducible to simple models.
MABS facilitates a more direct correspondence between what is observed and what is modelled. One benefit of this move to descriptive MABS is that a whole swath of evidence becomes available for validating our models. What is new is that this evidence may be anecdotal or “common-sense”. Previously such evidence may have been rejected on the ground that it is not “scientific” or “rigorous”, but this was because it was not formalisable in terms of the current modelling technology (analytic mathematics) and hence had no deducible outcomes that could be checked.
formalist stance: not claiming that a particular model represents in any sense any observed phenomena but that one is merely establishing the model’s properties, so that in the future someone may be able to apply it to solving real problems. It is often adopted in fields which are currently lacking significant empirical or practical success (e.g. AI or Economics).
section 3+
In science, truth comes before beauty.
Goodchild, M.F., 2005 | GIS, Spatial Analysis and Modelling |
N Gilbert and P Terna, 2000 | Mind & Society | 72 citations in Scholar |
Abstract: The use of computer simulation for building theoretical models in social science is introduced. It is proposed that agent-based models have potential as a “third way” of carrying out social science, in addition to argumentation and formalisation. With computer simulations, in contrast to other methods, it is possible to formalise complex theories about processes, carry out experiments and observe the occurrence of emergence. Some suggestions are offered about techniques for building agent-based models and for debugging them. A scheme for structuring a simulation program into agents, the environment and other parts for modifying and observing the agents is described. The article concludes with some references to modelling tools helpful for building computer simulations.
G. T. Jones, 2007 | Am J Public Health |
The author replicates the spatial model of drinking behaviour proposed by Gorman et al. He found a typographical error in a formula (where “=” was replaced by “+”), and some other errors in the results. The original authors responded to the letter.
For agent-based modeling to meet its promise, practitioners should resist the temptation to overstate the implications of model outcomes, carefully design agent-based models with an eye toward ecological validity, and provide precise specifications and results based on families of models subjected to repeated tests.
R J Lempert, 2002 | PNAS |
Abstract: Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.
L Henrickson and B McKelvey, 2002 | PNAS |
Abstract: Since the death of positivism in the 1970s, philosophers have turned their attention to scientific realism, evolutionary epistemology, and the Semantic Conception of Theories. Building on these trends, Campbellian Realism allows social scientists to accept real-world phenomena as criterion variables against which theories may be tested without denying the reality of individual interpretation and social construction. The Semantic Conception reduces the importance of axioms, but reaffirms the role of models and experiments. Philosophers now see models as ‘‘autonomous agents’’ that exert independent influence on the development of a science, in addition to theory and data. The inappropriate molding effects of math models on social behavior modeling are noted. Complexity science offers a ‘‘new’’ normal science epistemology focusing on order creation by self-organizing heterogeneous agents and agent-based models. The more responsible core of postmodernism builds on the idea that agents operate in a constantly changing web of interconnections among other agents. The connectionist agent-based models of complexity science draw on the same conception of social ontology as do postmodernists. These recent developments combine to provide foundations for a ‘‘new’’ social science centered on formal modeling not requiring the mathematical assumptions of agent homogeneity and equilibrium conditions. They give this ‘‘new’’ social science legitimacy in scientific circles that current social science approaches lack.
C Cioffi-Revilla, 2002 | PNAS |
Abstract: Agent-based simulation models have a promising future in the social sciences, from political science to anthropology, economics, and sociology. To realize their full scientific potential, however, these models must address a set of key problems, such as the number of interacting agents and their geometry, network topology, time calibration, phenomenological calibration, structural stability, power laws, and other substantive and methodological issues. This paper discusses and highlights these problems and outlines some solutions.
K. S. Barber and C. E. Martin, 1999 | Autonomy Control Software Workshop at Agents’99 |
Abstract: Autonomy is an often cited but rarely agreed upon agent characteristic. As the demand for agents capable of adjustable autonomy increases, a formal definition of agent autonomy becomes necessary to provide a foundation for work in the area as well as to support operational deployment of the concept. This paper presents such a definition. Based on this definition, an autonomy representation is provided through which an agent’s degree of autonomy can be assessed quantitatively. Qualitative observations and additional autonomy constraints are also discussed.