
OBO: Philosophy

Citation Style: Humanities

Computational Science

Paul Humphreys

University of Virginia

Introduction

Early Work

General Overviews

Models and Simulations

Equation Based Simulations

Agent Based Simulations

Large Scale Simulations

Simulations and Experiments

Visualization and Representation

Computational Science and Emergence

Computational Science and Computer Assisted Mathematics

Validation and Verification of Simulations

Quantum Computation, Digital Physics, and Monte Carlo Simulations

Philosophical Offshoots

INTRODUCTION

Computational science is a recent addition to the stock of scientific methods. Many scientists and philosophers hold that it constitutes a third principal mode of scientific investigation, supplementing the traditional methods of theory and experiment. Its most frequently discussed form is digital computer simulation, but data analysis, computer proofs and proof assistants in mathematics, computer-assisted scientific instruments, visualization techniques, and the study of emergent phenomena are all part of computational science. Modern computer simulations were introduced in the 1940s. Philosophers initially focused their attention on uses in artificial intelligence rather than on scientific computation more generally, with the result that most of the philosophical literature on computational science is of relatively recent origin. Because of this, the boundaries of the field and the topics within it are still evolving. Among the features of computational science that raise distinctive philosophical issues are the fact that which theoretical methods can be effectively used depends on the available technology, a lack of epistemic access by humans to the details of many evidential processes, the replacement in some cases of data drawn from material experiments by data generated from simulations, the problems of validating and interpreting enormously complex models, claims that the universe is itself a computational device, and connections with artificial intelligence and the philosophy of mind. In addition, some of the sources clearly indicate that familiar philosophical issues, such as realism and empiricism, epistemological externalism and internalism, and the relation between theory and experiment, take an interesting twist within this new area. The articles in this bibliography have been selected with an eye to illustrating the novel character of the problems raised by computational science. They include some drawn from the social studies of science literature as well as a majority that appeared in squarely philosophical sources. Articles and books that are especially suitable for neophytes and those that require a technical background are identified as such. All other sources are accessible to those with a solid background in philosophy.

EARLY WORK

After the first military applications were run on the ENIAC computer in the mid-1940s, unclassified scientific articles about computational science, including some that emphasized methodological issues, began to appear. Philosophical articles about simulating human cognition, such as Turing 1950 and Newell and Simon 1976, became the focus of much discussion. Humphreys 1991 and Rohrlich 1991 are perhaps the first English-language philosophical papers explicitly focused on computer simulations outside artificial intelligence. Hartmann 1996 is another influential early article. Metropolis and Ulam 1949 is a good example of a self-consciously new scientific method; Galison 1996 puts that method into historical perspective.

Galison, Peter. `Computer Simulations and the Trading Zone’. In The Disunity of Science: Boundaries, Contexts, and Power. Edited by P. Galison and D. Stump, 118-157. Stanford: Stanford University Press, 1996.

An historically oriented paper focused on the introduction of Monte Carlo methods as a novel technique. It emphasizes the importance of direct modeling of stochastic physical processes by those methods rather than by using differential equation models. It also has some important insights into the relations between pure and applied mathematics.

Hartmann, Stephan. `The World as a Process: Simulations in the Natural and Social Sciences’. In Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View. Edited by R. Hegselmann, U. Muller, and K. Troitzsch, 77-100. Dordrecht: Kluwer, 1996.

Provides an influential characterization of simulations as the mimicking of one dynamical process by another.

Humphreys, Paul. `Computer Simulations’. In PSA 1990: Proceedings of the 1990 Philosophy of Science Association Biennial Meetings. Edited by A. Fine, M. Forbes, and L. Wessels, 497-506. East Lansing, MI: Philosophy of Science Association, 1991.

Discusses the limitations of analytically solvable models and argues that computer simulations are a sui generis method that has become the dominant approach in some sciences. Provides a working definition of computer simulations, later modified by Hartmann 1996 (above).

Metropolis, Nicholas and S. Ulam. `The Monte Carlo Method’. Journal of the American Statistical Association 44 (1949): 335-341.

A seminal work on the Monte Carlo method with a number of insightful methodological remarks. Some technical background required.

Newell, Allen and Herbert Simon. `Computer Science as Empirical Inquiry: Symbols and Search’. Communications of the ACM 19 (1976): 113-126.

A highly influential essay that lays out the basis of classical artificial intelligence. Their main claim is that the successful operation of a physical symbol system provides necessary and sufficient conditions for intelligent action. Suitable for all levels.

Rohrlich, Fritz. `Computer Simulations in the Physical Sciences’. In PSA 1990: Proceedings of the 1990 Philosophy of Science Association Biennial Meetings. Edited by A. Fine, M. Forbes, and L. Wessels, 507-518. East Lansing, MI: Philosophy of Science Association, 1991.

Argues that computer simulations constitute a new methodology for the physical sciences. The article anticipates later discussions by drawing parallels between experiments and simulations, noting the role of non-integrable motions, and examining the differences between the syntax of cellular automata and differential equation models.

Turing, Alan. `Computing Machinery and Intelligence’, Mind 59 (1950): 433-460.

A classic article, widely cited and very readable, that contains criteria for judging when a simulation has accurately imitated human verbal behavior. These criteria came to be known as the Turing Test.

GENERAL OVERVIEWS

The books listed below provide a good way into the field, covering some but by no means all of the principal topics in the area. For the most part they are not surveys or general introductions but works arguing for particular perspectives on computational science and related matters. Humphreys 2004 focuses on methodological issues. Winsberg 2010 emphasizes epistemological topics and synthesizes many of the author’s influential papers. Mainzer 2007 contains a comprehensive survey of research activities in complex systems theory. Lenhard 2010 approaches computational science from a science studies perspective, as do some of the papers in Lenhard et al. 2006.

Humphreys, Paul. Extending Ourselves. New York: Oxford University Press, 2004.

Argues that computational science is changing the way that science is done by diminishing the role of human epistemology. Provides an introduction to simulation methods and the role played by models.

Lenhard, Johannes. `Computation and Simulation’. In The Oxford Handbook of Interdisciplinarity. Edited by Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, 246-258. Oxford: Oxford University Press, 2010.

Contains a valuable historical survey of the field and an extensive bibliography that includes cultural and social aspects of the topic.

Lenhard, Johannes, Günter Küppers, and Terry Shinn, eds. Simulation: Pragmatic Construction of Reality. Dordrecht: Springer, 2006.

A collection of articles ranging from straight philosophy to social practice. The Introduction is especially useful. Issues about combining models from different disciplines and levels of treatment are included.

Mainzer, Klaus. Thinking in Complexity. 5th ed. Berlin: Springer-Verlag, 2007.

Although this book is primarily about complex systems and does not directly discuss computational science, many of the models discussed are classic simulation models and there are brief discussions of what the author calls `computer-assisted models’. Some technical background required.

Winsberg, Eric. Science in the Age of Computer Simulation. Chicago: University of Chicago Press, 2010.

The book synthesizes a number of the author’s contributions to the area, including the relation between experiments and simulations, verification and validation, multi-scale simulations, and the role of fictions in simulation models.

MODELS AND SIMULATIONS

The philosophical literature on scientific models was pioneered by, amongst others, Mary Hesse, Patrick Suppes, Michael Redhead, and William Wimsatt. Their emphasis was on the ways in which representations of specific systems are constructed, in contrast to the earlier emphasis on the broad generalizations captured in theories, although what came to be known as the semantic account of theories grew out of Suppes’s contributions. Because interest in computational models was slow in coming, contact between the older literature on models and the more recent literature on computational science has occurred only recently. Frigg and Hartmann 2009 is a standard source on models, with some attention paid to simulations. The Morgan and Morrison 1999 anthology is a good starting point for learning about issues specific to non-computational modeling, whereas Humphreys and Imbert 2011 has papers covering both areas. Frigg and Reiss 2009 and Humphreys 2009 take opposite sides on the issue of whether simulations raise philosophical issues beyond those already treated in the literature on models. Hughes 1999 nicely combines sensitivity to modeling issues with scientific detail.

Frigg, Roman and Stephan Hartmann. `Models in Science’. In The Stanford Encyclopedia of Philosophy (Summer 2009 Edition). Edited by Edward N. Zalta.

A useful survey of models, with section 3.1 devoted to simulations.

Frigg, Roman and Julian Reiss. `The Philosophy of Simulation: Hot New Issues or Same Old Stew?’ Synthese 169 (2009): 593-613.

The authors agree that computer simulations have introduced new methods into science but deny that computer simulations have introduced any new metaphysical, epistemological, semantic or methodological issues into the philosophy of science, suggesting that the literature on models already addresses the relevant topics.

Humphreys, Paul. `The Philosophical Novelty of Computer Simulation Methods’. Synthese 169 (2009): 615-626.

A response to Frigg and Reiss 2009. Argues that there are genuinely new epistemological and semantic issues arising from the use of computational science and that these issues can be obscured by the current need for human/machine interfaces.

Humphreys, Paul and Cyrille Imbert, eds. Models, Simulations and Representations. New York: Routledge, 2011.

A recent collection of papers concerning the topics mentioned in the title. Essays on understanding through simulations, emergence, the role of simulations in education, agent based models in social science, and the role of false idealizations are included.

Morgan, Mary and Margaret Morrison, eds. Models as Mediators. Cambridge: Cambridge University Press, 1999.

A much cited collection of papers, mostly concerned with traditional models, arguing that models are autonomous objects of scientific construction and investigation.

Hughes, R.I.G. `The Ising Model, Computer Simulation, and Universal Physics’. In Morgan and Morrison 1999: 97-145.

Contains a lucid exposition of how the canonical Ising model was implemented computationally, and locates the discussion in the context of Hughes’ DDI (denotation, demonstration, interpretation) account of theoretical representation.

EQUATION BASED SIMULATIONS

These approaches start with a traditional mathematical model and bring to bear computational methods that numerically solve the equations to produce a simulation. The simulation is not, however, just a set of numerical methods because, as Lenhard 2007 emphasizes, it often employs a discrete model that differs in many ways from the original continuous model. Knuuttila and Loettgers 2011 provide a detailed historical case study of the construction of a model that is widely used in population biology and other areas.
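
To make the continuous/discrete contrast concrete, here is a minimal Python sketch of how a continuous Lotka-Volterra model (the example discussed in Knuuttila and Loettgers 2011) is replaced in practice by a discrete stepping rule. The forward Euler scheme and all parameter values are illustrative choices on my part, not drawn from the works cited:

    # Continuous model: dx/dt = a*x - b*x*y, dy/dt = -c*y + d*x*y.
    # The simulation iterates a discrete map (forward Euler) instead; the
    # step size h is part of the discrete model, not of the original equations.
    a, b, c, d = 1.0, 0.1, 1.5, 0.075   # illustrative parameter values
    h = 0.01
    x, y = 10.0, 5.0                    # initial prey and predator populations
    for step in range(10_000):
        x, y = x + h * (a * x - b * x * y), y + h * (-c * y + d * x * y)
    # Forward Euler does not preserve the conserved quantity of the continuous
    # system, so the discrete trajectory slowly spirals outward: the discrete
    # model has dynamics of its own and must be analyzed in its own right.
    print(f"t = {10_000 * h:.0f}: prey = {x:.2f}, predators = {y:.2f}")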

Knuuttila, Tarja and Andrea Loettgers. `The Productive Tension: Mechanisms vs. Templates in Modeling the Phenomena’. In Models, Simulations and Representations. Edited by Paul Humphreys and Cyrille Imbert, 3-24. New York: Routledge, 2011.

Uses the iconic Lotka-Volterra models as an illustration of how the same computational template can be constructed from very different starting principles, using different underlying mechanisms, and with different degrees of abstraction in mind.

Lenhard, Johannes. `Computer Simulation: The Cooperation Between Experimenting and Modeling’. Philosophy of Science 74 (2007): 176-194.

Argues that linkages between experimentation and modeling in simulations allow the data to exert a direct influence on the model. The traditional hypothetico-deductive method then becomes partly empirical in nature. The paper also contains an important argument that the discrete models used in simulations are autonomous in the sense of modeling the phenomena directly rather than serving as tools to solve continuous model equations.

AGENT BASED SIMULATIONS

Agent based simulations are widely used in the social sciences, ecology, statistical physics, and elsewhere, and unlike equation based simulations they contain no global model of the system. They constitute a way of following the evolution of the state of a heterogeneous collection of agents that interact with one another through a specifiable set of rules. Unexpected patterns can emerge through such iterated interactions, and such simulations lessen the need for idealizations such as the representative agents used in many models in economics. Schelling 1978 contains one of the first such models, one that is easily accessible to neophytes. Epstein 2006 contains both an overview of methods in the area and some core examples of agent based models. Tesfatsion and Judd 2006 is an advanced source for models in economics. Lehtinen and Kuorikoski 2007 explore why mainstream economics has resisted these simulations. Epstein 2011 argues that the individualistic orientation of the models must be accompanied by higher level representations. Grüne-Yanoff 2009 discusses the limitations of a well-known simulation in historical anthropology. Di Paolo et al. 2000 address a key question in artificial life that also arises in digital physics: what, if anything, constitutes the difference between a model of a system type and an instance of that system type?
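
A minimal Python sketch in the spirit of Schelling’s segregation model (Schelling 1978, below) conveys the general recipe: local rules, iterated interactions, and an unplanned global pattern. The grid size, tolerance threshold, and relocation rule here are illustrative simplifications, not Schelling’s original specification:

    import random

    SIZE, THRESHOLD = 20, 0.3          # illustrative grid size and tolerance
    grid = [[random.choices(['A', 'B', None], weights=[0.45, 0.45, 0.1])[0]
             for _ in range(SIZE)] for _ in range(SIZE)]

    def unhappy(i, j):
        """An agent is unhappy if fewer than THRESHOLD of its occupied
        neighbors (on a wrap-around grid) are of its own type."""
        me = grid[i][j]
        if me is None:
            return False
        nbrs = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
        occ = [n for n in nbrs if n is not None]
        return bool(occ) and sum(n == me for n in occ) / len(occ) < THRESHOLD

    for _ in range(20_000):            # iterated local interactions
        i, j = random.randrange(SIZE), random.randrange(SIZE)
        if unhappy(i, j):
            empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
                       if grid[x][y] is None]
            x, y = random.choice(empties)
            grid[x][y], grid[i][j] = grid[i][j], None

    # Even mild preferences (agents content with 30% similar neighbors)
    # typically produce conspicuous segregated clusters at the group level.
    for row in grid:
        print(''.join(cell or '.' for cell in row))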

Di Paolo, E. A., J. Noble, and S. Bullock. `Simulation Models as Opaque Thought Experiments’. In Artificial Life VII: Proceedings of the Seventh International Conference. Edited by M. Bedau, J. S. McCaskill, N. H. Packard, and S. Rasmussen, 497-506. Cambridge, MA: MIT Press, 2000.

In disciplines such as artificial life, one can view computer models as examples or instantiations of life or as merely a model from which no novel information about life flows. The authors argue that there is an intermediate position in which viewing the models as thought experiments and understanding their workings can provide knowledge.

Epstein, Brian. `Agent-Based Modeling and the Fallacies of Individualism’. In Models, Simulations and Representations. Edited by Paul Humphreys and Cyrille Imbert, 115-144. New York: Routledge, 2011.

Argues that agent based models cannot always provide micro-level foundations for macro-properties in social and economic systems, so that some macro-properties must be included in the model alongside base-level properties. Identifies two fallacies of agent-based modeling: treating social properties that are not spatially local as if they were, and treating social entities as composed only of individual humans when other kinds of entities are also constituents.

Epstein, Joshua. Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton: Princeton University Press, 2006.

A collection of chapters on the well-known Anasazi model, epidemics, the Prisoners’ Dilemma, and other models. The first two chapters contain valuable methodological discussions about the role of agent based models in the social sciences.

Grüne-Yanoff, Till. `The Explanatory Potential of Artificial Societies’. Synthese 169 (2009): 539-555.

Discusses simulations of the population collapse of the Anasazi civilization and their explanatory potential. Argues that because alternative models can explain the effects equally well, evidential support must come from direct observation, well-confirmed theory, or externally valid behavioral experiments. Also provides criteria for when a potential explanation contributes to understanding, and argues that such simulations offer a functional analysis of their target systems.

Lehtinen, Aki and Jaakko Kuorikoski. `Computing the Perfect Model: Why Do Economists Shun Simulation?’ Philosophy of Science 74 (2007): 304-329.

Drawing a distinction between computational models and simulation models, the authors argue that the former are given preference over the latter in economics because of the suspect epistemic status of the latter. The reasons range from a preference for theorems based on equilibrium and rationality assumptions, through the use of robustness analysis, to a tacit preference for understanding through unifying explanatory theories.

Schelling, Thomas. `Sorting and Mixing: Race and Sex’. In Thomas Schelling, Micromotives and Macrobehavior, 135-166. New York: Norton, 1978.

An accessible version of the author’s classic article in agent based modeling that combines great simplicity with a profound insight into how exploring different mechanisms can suggest that mainstream explanations might be incorrect. The model demonstrates that small preferences on the part of individuals to associate with individuals like themselves can lead to significant segregation at the group level. A seminal example of how simulations can provide `how possibly’ explanations.

Tesfatsion, Leigh and Kenneth L. Judd, eds. Handbook of Computational Economics, Volume 2: Agent-Based Computational Economics. Amsterdam: North-Holland, 2006.

Part 3 is a guide for beginners to agent based modeling. Part 2 contains a number of valuable articles by seminal figures on methodological topics. Although the collection concerns models in economics, many of the issues discussed apply more generally.

LARGE SCALE SIMULATIONS

Models that require the cooperative effort of large groups of scientists or that involve large numbers of variables have a different epistemological status than do small-scale models. The most discussed of these large scale models involve simulations of the Earth’s climate, which use some of the most complex models in science. Because of this complexity, the existence of competing models with different predictions, and the political controversies that accompany such models, climate models pose special methodological problems. Edwards 2010 is a good entrance point to the discussions and can be read in conjunction with Heymann 2010. Parker 2006 is a fine assessment of why multiple models exist in this area and how they are combined into ensembles. Smith 2002 is a more technical discussion of the use of ensembles of models. Küppers and Lenhard 2004 illustrate the semi-autonomy of some climate models from the underlying theory. Norton and Suppe 2001 argue that the use of models in this area does not imply that the results are suspect. Other types of large-scale models, such as those in nuclear weapons research, have been less discussed, perhaps because of restrictions on public discussion of the models. Other areas, such as astrophysics, also use models that require cooperative efforts in model building, and Sundberg 2010 contains an interesting analysis of how the social organization of such efforts affects the reliability and other features of the resulting software.

Edwards, Paul. A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. Cambridge: MIT Press, 2010.

Argues that data models are an unavoidable part of climate simulations and that this does not in itself diminish their value. Also contains a valuable history of climate modeling. A good introduction for those unfamiliar with the area.

Heymann, Matthias. `Understanding and Misunderstanding Computer Simulation: The Case of Atmospheric and Climate Science -- An Introduction’. Studies in History and Philosophy of Modern Physics 41 (2010): 193-200.

An excellent survey article on the current state of climate science with a useful bibliography. Makes the interesting point that computational methods are so pervasive in this area that the term `computational meteorology’ is rarely used, unlike `computational chemistry’ and `computational biology’. Also has remarks about the political and cultural influences on weather and climate forecasting. Suitable for undergraduates.

Küppers, G. and J. Lenhard. `The Controversial Status of Computer Simulations’. In Proceedings of the 18th European Simulation Multiconference. Edited by G. Horton, 271-275. SCS Europe, 2004.

Argues that many simulation models, including those in climate science, are developed in ways that are often independent of the underlying theory. The introduction of physically unmotivated techniques is sometimes required in order to reproduce the long term behavior of the target system.

Norton, Stephen and Frederick Suppe. `Why Atmospheric Modeling is Good Science’. In Changing the Atmosphere: Expert Knowledge and Environmental Governance. Edited by C. Miller and P. Edwards, 67-105. Cambridge: MIT Press, 2001.

Argues that models of data are unavoidable, that simulation models are a source of empirical data, and that simulation models can be used as instruments for probing real-world phenomena. Although this article is often cited, Edwards 2010 (above) makes a less polemical case for the first claim and more nuanced versions of the second claim can be found in the articles listed in the *Simulations and Experiments* section.

Parker, Wendy S. `Understanding Pluralism in Climate Modeling’. Foundations of Science 11 (2006): 349-368.

Examines why climate scientists work with a number of incompatible simulation models: what criteria are used in model evaluation, how incompatible models are combined in multi-model ensembles, how realist, pragmatist, and instrumentalist motives drive different aspects of the simulations, and why no single best model has been selected. Clearly shows how the science of very large scale models differs from that of simple systems with few parameters and variables.

Smith, Leonard A. `What Might We Learn from Climate Forecasts?’ Proceedings of the National Academy of Sciences 99 (2002): 2487-2492.

A technical but methodologically insightful discussion of the role played by large ensembles of models in making predictions for high-dimensional systems. A contrast is drawn with the situation in which a perfect model exists, a situation that the author considers to be a misleading fiction. Although centered on climate models, the morals transfer to most high-dimensional models.

Sundberg, Mikaela. `Organizing Simulation Code Collectives’. Science Studies 23 (2010): 37-57.

Provides a typology of group coding efforts and a discussion of how the organization of these collectives, such as open-source and closed source approaches, affects the ways in which the software is used. Some important distinctions between the users and builders of the models are discussed.

SIMULATIONS AND EXPERIMENTS

One of the most discussed issues in computational science is where simulations fit as a method. There is a lively debate over whether they can be assimilated to the category of experiments, in which the papers Guala 2002, Winsberg 2009, and Parker 2009 have been influential. Barberousse et al. 2009 argue that simulations are essentially different from experiments. Morrison 2009 takes a different angle: in certain circumstances simulations can be considered measuring devices. Gelfert 2011 explores the ways in which one can validate simulation models in a non-circular fashion. Dowling 1999 provides insights into the relations between the two areas drawn from interviews with scientists who use simulations.

Barberousse, Anouk, Sara Franceschelli, and Cyrille Imbert. `Computer Simulation and Experiments’. Synthese 169 (2009): 557-574.

Argues that the fact that simulations are run on physical machines is not a sound basis for comparing them with material experiments; what matters is the representational content of the simulation model. Distinguishes data produced by physical interaction with a detection device, data about a system, and data about the simulation itself, and explains how the third kind can yield the second.

Dowling, Deborah. `Experimenting on Theories’. Science in Context 12 (1999): 261-273.

A paper in the social studies of science tradition that includes interviews with scientists regarding their views on simulations and experiments. Argues that, when considered as experiments, simulations must be treated as black boxes, but that an understanding of the running code is necessary to legitimate the simulation.

Gelfert, Axel. `Scientific Models, Simulation, and the Experimenter’s Regress’. In Models, Simulations and Representations. Edited by Paul Humphreys and Cyrille Imbert, 145-167. New York: Routledge, 2011.

The experimenter’s regress is the problem that with cutting-edge experiments, the only evidence we have that the apparatus is functioning correctly are the data that it produces. The author examines whether there is a similar problem for simulations. One important aspect of this is the difficulty of justifying the use of a particular numerical technique in a simulation without appealing to the `successful’ running of the simulation.

Guala, Francesco. `Models, Simulations, and Experiments’. In Model-Based Reasoning: Science, Technology, Values. Edited by Lorenzo Magnani and Nancy Nersessian, 59-74. Dordrecht: Kluwer, 2002.

Argues that a major difference between experiments and simulations is that the former rely on the sameness of relevant material composition with the target system for reliable inferences whereas the latter rely on a formal similarity of structure with the target system. Despite this difference, contextual factors play a role in determining whether a given investigation counts as an experiment or a simulation.

Morrison, Margaret. `Models, Measurement, and Computer Simulation: The Changing Face of Experimentation’. Philosophical Studies 143 (2009): 33-57.

Argues for epistemic similarities between simulations and experimental measurements on the grounds that models can function as measuring instruments. Differs from other approaches in locating the materiality of a simulation in the simulation model, under a justified physical interpretation, rather than in the computer. Also emphasizes the essential role of a hierarchy of models in obtaining data from both experiments and simulations.

Parker, Wendy. `Does Matter Really Matter? Computer Simulations, Experiments, and Materiality’. Synthese 169 (2009): 483-496.

Using a distinction between computer simulations and computer simulation models, the author argues that the former do not count as experiments whereas scientific investigations using the latter can. The paper also contains an extended discussion of other approaches to the topic.

Winsberg, Eric. `A Tale of Two Methods’. Synthese 169 (2009): 575-592.

Argues that what distinguishes simulations from experiments is the kind of argument given for the legitimacy of inferences from the system under study to the target system, and the character of the background knowledge grounding those inferences. In simulations, it is the process of building the simulation model that justifies substituting the simulation for the target system. Considerations about the kind of background knowledge used to construct models are also provided.

VISUALIZATION AND REPRESENTATION

One of the most powerful advantages of computational science is the ability to provide visualizations of the output data, often in dynamical form. Although much of philosophy has considered the form of a representation to be irrelevant because transformations can be specified that take us from the linguistic to the pictorial mode, these forms can be epistemologically and cognitively very different. Kulvicki 2010 and Vorms 2011 explore some of these philosophical differences, and Oreskes et al. 1994 (see *Validation and Verification of Simulations*) argue that the complexity of these models makes assessing them unusually difficult. Kuorikoski 2011 argues, in contrast, that visualizations can seriously mislead our sense of understanding. Lenhard 2006 is an earlier attempt to explain the role played by visualizations in understanding. Ruivenkamp and Rip 2010 is an accessible survey of visualization methods at the nanoscale.

Kulvicki, John. `Knowing with Images: Medium and Message’. Philosophy of Science 77 (2010): 295-313.

Explores the differences between visual and descriptive representations in science and uses the properties of extractability, syntactic salience and semantic salience to capture the immediacy of the information contained in images.

Kuorikoski, Jaakko. `Simulation and the Sense of Understanding’. In Models, Simulations and Representations. Edited by Paul Humphreys and Cyrille Imbert, 168-187. New York: Routledge, 2011.

Computational science requires a different kind of understanding than do traditional analytic models. Drawing a distinction between a sense of understanding and understanding proper, the author discusses how various psychological features, including those associated with visualizations, can lead to an illusion of understanding.

Lenhard, Johannes. `Surprised by a Nanowire: Simulation, Control, and Understanding.’ Philosophy of Science 73 (2006): 605–616.

Argues that the kind of theoretical understanding available in traditional models through being able to explicitly follow the chains of inferences between inputs and outputs is replaced in simulations by a different kind of pragmatic understanding based on the ability to manipulate the simulation. The acceptability of the simulation is grounded in holistic criteria, including the informativeness of the visualization outputs.

Ruivenkamp, Martin and Arie Rip. `Visualizing the Invisible Nanoscale Study: Visualization Practices in Nanotechnology Community of Practice’. Science Studies 23 (2010): 3-36.

Contains both a philosophical overview of visualization techniques at the nanoscale and the results of a survey on the use of such techniques by scientists. Especially useful for information about how often visualization images are manipulated and thus require careful interpretation.

Vorms, Marion. `Formats of Representation in Scientific Theorizing’. In Models, Simulations and Representations. Edited by Paul Humphreys and Cyrille Imbert, 250-273. New York: Routledge, 2011.

The output of computer models can be presented in a number of different formats – graphical, sentential, in the form of tables, etc. Vorms argues that the format affects the informational costs of the reasoning processes used by different agents. The article also contains a useful discussion of Lagrangian and Hamiltonian representations and of Feynman diagrams and Schwinger’s equations in QED.

COMPUTATIONAL SCIENCE AND EMERGENCE

The widespread use of computers has allowed scientists to study more readily the dynamics of complex systems, the evolution of which can produce unexpected behavior and patterns. Crutchfield et al. 1986 provides an accessible introduction to the best known case, that of chaotic systems. Bedau 1997 furnishes a definition of emergence, illustrated by Conway’s Game of Life, that is suited to diachronic rather than synchronic emergence. Bedau 2011 clarifies that definition, arguing that weakly emergent features are objective rather than artifacts of our epistemological limitations. Dennett 1991 discusses whether the kinds of patterns that occur in computational systems and elsewhere are real (objective) or artificial (conventional). Although emergence and explanation are traditionally considered to be incompatible, Symons 2008 provides an account of how explanations can be given of the emergent features in computational systems.
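
Bedau’s notion can be made vivid with Conway’s Game of Life, mentioned above: in general, the only way to learn the long-run macrostate of a configuration is to iterate the update rule itself. A minimal Python sketch, with the glider chosen as an illustrative initial condition:

    from collections import Counter

    def life_step(live):
        """One synchronous update of Conway's Game of Life, with the live
        cells held as a set of (x, y) coordinates."""
        counts = Counter((x + dx, y + dy) for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a glider
    for _ in range(40):   # no analytic shortcut: iterate the rule itself
        state = life_step(state)
    print(sorted(state))  # the original shape, translated across the plane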

Bedau, Mark. `Weak Emergence’. Philosophical Perspectives 11 (1997): 375-399.

A seminal article defining an emergent macrostate of a system as one that can be derived from the system’s microdynamics, but only by simulation. In later articles Bedau has argued that the emergent features are objective and not a result of epistemological limitations.

Bedau, Mark. `Weak Emergence and Computer Simulation’. In Models, Simulations and Representations. Edited by Paul Humphreys and Cyrille Imbert, 91-114. New York: Routledge, 2011.

In this paper Bedau argues that the emergent features in weak emergence are objective and are not a result of epistemological limitations.

Crutchfield, James, J. Doyne Farmer, Norman H. Packard, and Robert S. Shaw. `Chaos’. Scientific American 255 (1986): 46-57.

A survey article on how chaotic systems can produce novel phenomena and the need to provide higher level structural descriptions on the state space in order to understand and classify the phenomena. Very readable.

Dennett, Daniel. `Real Patterns’. Journal of Philosophy 88 (1991): 27-51.

Addresses the question of whether patterns, including those generated in computational models, are `real’. Argues that different goals can result in different attitudes towards the same data array: simple patterns with much noise are preferred for some purposes, less simple patterns with reduced noise for others.

Symons, John. `Computational Models of Emergent Properties’. Minds & Machines 18 (2008): 475-491.

Shows how simple computational models, especially cellular automata, can explain emergent phenomena. Claims that the explanations come through our ability to manipulate and control the outputs of the simulation rather than through the ontological content of the underlying models, that the explanations are not reductive, and that they do not rest on the generalizations traditionally required by philosophers of science.

COMPUTATIONAL SCIENCE AND COMPUTER ASSISTED MATHEMATICS

Since the proof of the Four Color Theorem appeared, the use of computers in mathematics has spread, although it is still viewed with suspicion by many mathematicians. Borwein and Bailey 2008 is an excellent introduction to the area, with numerous exercises. Tymoczko 1979 was one of the first philosophical assessments of these new techniques and argued that they had changed the concept of what counts as a proof. Detlefsen and Luker 1980 and Teller 1980 are replies. Burge 1998 is a sophisticated treatment of how the unsurveyability of computer proofs does not prevent us from having a warrant for their conclusions.
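
The flavor of the experimental mathematics practiced in Borwein and Bailey 2008 is to compute to high precision and then conjecture a closed form before seeking a proof. A minimal Python sketch using a textbook example (Euler’s evaluation of the series 1/1² + 1/2² + ... as π²/6); the particular series, precision, and tail estimate are illustrative choices, not taken from their book:

    from decimal import Decimal, getcontext

    getcontext().prec = 30
    N = 100_000
    # Partial sum of 1/n^2, plus the standard 1/N estimate for its tail.
    s = sum(Decimal(1) / (Decimal(n) ** 2) for n in range(1, N + 1))
    s += Decimal(1) / N
    pi = Decimal('3.14159265358979323846264338328')
    print(s)            # 1.6449340667982...
    print(pi ** 2 / 6)  # 1.6449340668482...; agreement to about ten digits
                        # suggests the closed form pi^2/6, which proof confirms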

Borwein, Jonathan and David Bailey. Mathematics by Experiment: Plausible Reasoning in the 21st Century. 2nd ed. Natick, MA: A. K. Peters, 2008.

An accessible introduction to computational mathematics by two of its pioneers. Has insightful methodological comments as well as many examples. A companion volume by the same authors, Experimentation in Mathematics: Computational Paths to Discovery, is somewhat more advanced.

Burge, Tyler. `Computer Proof, A Priori Knowledge, and Other Minds’, Noûs 32 (1998, Supplement): 1-37.

Uses the epistemic concept of entitlement as a warrant for the use of opaque justificatory processes in both traditional mathematics and computer assisted mathematics. A rich source of philosophical ideas. Advanced level.

Detlefsen, Michael and M. Luker. `The Four Color Theorem and Mathematical Proof’. Journal of Philosophy 77 (1980): 803-820.

An early response to Tymoczko 1979.

Teller, Paul. `Computer Proof’. Journal of Philosophy 77 (1980): 797-803.

Another early response to Tymoczko’s article, arguing that the traditional concept of a proof remains unchanged despite the use of computers in part of the proof.

Tymoczko, T. `The Four Color Problem and Its Philosophical Significance’. Journal of Philosophy 76 (1979): 57-83.

One of the first philosophical discussions of computer assisted proof. Argues that because such proofs are not surveyable, their existence changes the definition of a mathematical proof, and that the necessity of running the programs on concrete machines introduces empirical content into mathematics.

VALIDATION AND VERIFICATION OF SIMULATIONS

A simulation is validated if it correctly reproduces the phenomena it is intended to model. There is some dispute about what verification of a simulation amounts to. Taken in the sense of formal verification, the program underlying the simulation is verified with respect to some property P if a proof is given that, on the specified inputs, the code will produce P. The concept can also be applied to the algorithm itself, in which case a formal verification of an algorithm is an explicit process of checking that the algorithm has the intended outcome. There are deep divisions about the extent to which program verification is possible. Both DeMillo et al. 1979 and Fetzer 1988 argue, from very different perspectives, that formal verification is impossible. Unfortunately, there is no authoritative source accessible to those without a serious background in computer science that presents the successes in this field: one now dated and very technical text is Loeckx and Sieber 1987. A more recent, somewhat friendlier text is Oberkampf and Roy 2010. Oreskes et al. 1994 is a frequently cited discussion applied to climate models and Parker 2008 contains constructive suggestions of a more general kind.
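
One standard verification technique treated in Oberkampf and Roy 2010 is a convergence study: check that a numerical scheme’s error shrinks at its theoretical rate as the step size is refined, against a problem with a known exact solution. A minimal Python sketch, with forward Euler applied to dy/dt = -y as an illustrative choice of scheme and problem:

    import math

    def euler_decay(h, t_end=1.0):
        """Forward-Euler solution of dy/dt = -y with y(0) = 1."""
        y = 1.0
        for _ in range(round(t_end / h)):
            y += h * (-y)
        return y

    exact = math.exp(-1.0)
    errors = [abs(euler_decay(h) - exact) for h in (0.1, 0.05, 0.025)]
    # A correctly coded first-order method should roughly halve its error
    # when h is halved; the observed ratios test the code, not the model.
    print([e1 / e2 for e1, e2 in zip(errors, errors[1:])])  # both ratios ~ 2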

DeMillo, Richard A., Richard J. Lipton, and Alan J. Perlis. `Social Processes and Proofs of Theorems and Programs’. Communications of the Association for Computing Machinery 22 (1979): 271-280.

Argues that the formal verification of programs is deeply disanalogous to the verification of mathematical proofs and that there are severe limitations on how successful program verification can be. Also argues that the verification process is primarily social.

Fetzer, James. `Program Verification: The Very Idea’. Communications of the Association for Computing Machinery 31 (1988): 1048-1063.

A highly controversial article replying to DeMillo et al. 1979 (above) and arguing that program verification is impossible. Succeeding issues of the CACM contain responses ranging from careful evaluation of the issues to apoplectic rebuttals.

Loeckx, Jacques and Kurt Sieber. The Foundations of Program Verification. 2nd ed. Chichester: Wiley, 1987.

An advanced text on formal program verification. Not for beginners.

Oberkampf, W. and C. Roy. Verification and Validation in Scientific Computing. Cambridge: Cambridge University Press, 2010.

A somewhat more accessible reference than Loeckx and Sieber but requires a technical background.

Oreskes, Naomi, Kristin Shrader-Frechette, and Kenneth Belitz. `Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences’. Science 263 (1994): 641-646.

An early article arguing that verification and validation of complex models, including those used in climate science, is impossible, for reasons that include the underdetermination of models by data, application to open systems, and the use of approximations. Note that the authors equate `verification’ with establishing the truth of a model, an unusually strict usage; usage in the computer science literature varies widely.

Parker, Wendy S. `Franklin, Holmes and the Epistemology of Computer Simulation.’ International Studies in the Philosophy of Science 22 (2008): 165-183.

Argues that a number of strategies used to assess the results from experiments can be transferred, with appropriate modifications, to assess computer simulations.

QUANTUM COMPUTATION, DIGITAL PHYSICS, AND MONTE CARLO SIMULATIONS

Physics has been a heavy user of computers since their invention. Among the earliest distinctive methods in the area were the now widely used Monte Carlo methods; methodological discussions can be found in Humphreys 1994 and Galison 1997. Cellular automata, invented by Stanislaw Ulam and John von Neumann in the 1940s, have become increasingly popular as illustrations of computational behavior. Keller 2003 contains an elementary introduction to the area. One controversial position in this area asserts that all physical phenomena not only can be modeled by cellular automata but are themselves digital computational devices. Wolfram 2002 is often associated with this view, but a better introduction to these ideas can be found in Vichniac 1984, while a philosophical assessment of physical computation is in Piccinini 2010. Those who want to understand the technically difficult material of quantum computation can begin with Mermin 2007.
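
The original spirit of the Monte Carlo method, directly sampling a stochastic physical process rather than solving its governing equations (the contrast stressed in Galison 1997), can be conveyed by a toy one-dimensional particle-transmission sketch in Python. All physical parameters are illustrative, and the model is far simpler than anything discussed in the sources:

    import random

    def transmitted(n_particles, thickness=5.0, mean_free_path=1.0,
                    absorb_prob=0.3):
        """Toy 1-D transmission estimate: follow each particle's random
        collision history directly, with no transport equation in sight."""
        passed = 0
        for _ in range(n_particles):
            x, direction = 0.0, 1.0
            while True:
                x += direction * random.expovariate(1.0 / mean_free_path)
                if x >= thickness:
                    passed += 1              # escaped through the far side
                    break
                if x < 0.0:
                    break                    # scattered back out of the slab
                if random.random() < absorb_prob:
                    break                    # absorbed at this collision
                direction = random.choice((-1.0, 1.0))   # rescatter (1-D toy)
        return passed / n_particles

    print(transmitted(100_000))   # estimated transmission fraction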

Galison, Peter. Image and Logic. Chicago: University of Chicago Press, 1997.

Chapter 8 is an extended discussion of the historical development of Monte Carlo simulations and their role in establishing a `trading zone’ within which experimentalists, theoreticians, programmers, and engineers could communicate.

Humphreys, Paul. `Numerical Experimentation’. In Patrick Suppes: Scientific Philosopher, Volume 2. Philosophy of Physics, Theory Structure and Measurement Theory. Edited by Paul Humphreys, 103-118. Dordrecht: Kluwer Academic Publishers, 1994.

Contains an extended discussion of the Ising model and the Metropolis algorithm for Monte Carlo simulations.

Keller, Evelyn Fox. `Models, Simulation, and “Computer Experiments”’. In The Philosophy of Scientific Experimentation. Edited by Hans Radder, 198-215. Pittsburgh: University of Pittsburgh Press, 2003.

A basic article that discusses cellular automata.

Mermin, N. David. Quantum Computer Science: An Introduction. Cambridge: Cambridge University Press, 2007.

The clearest way into this difficult area. Some background in quantum theory and classical computation is needed.

Piccinini, Gualtiero. `Computation in Physical Systems’. In The Stanford Encyclopedia of Philosophy (Fall 2010 Edition). Edited by Edward N. Zalta.

A sensible assessment (in section 3.4) of the highly speculative literature which asserts that the universe is at root a universal computational device.

Vichniac, G. `Simulating Physics with Cellular Automata’. Physica D 10 (1984): 96-116.

A technical but methodologically provocative article that was an early contribution to digital physics. Contains a clear discussion of the differences between cellular automata as computational tools, as examples of dynamical systems, and as models of other systems. See the references to Fredkin and Feynman therein.

Wolfram, S. A New Kind of Science. Champaign, Ill.: Wolfram Media, 2002.

A long and controversial book that lays out at great length the author’s vision of a world driven by computation, especially cellular automata. The endnotes contain much interesting material.

PHILOSOPHICAL OFFSHOOTS

There are many philosophical articles that indirectly rely on arguments drawn from computational science. Two that address skeptical arguments involving virtual reality scenarios are Bostrom 2003 and Chalmers 2005.

Bostrom, Nick. `Are We Living in a Computer Simulation?’ Philosophical Quarterly 53 (2003): 243-255.

Uses a simple probabilistic argument to conclude that at least one of these claims is true: (1) the human species will very probably become extinct before reaching a posthuman stage; (2) posthuman civilizations are extremely unlikely to run significant numbers of simulations of their evolutionary history; (3) we are almost certainly living in a computer simulation. Assumes, not uncontroversially, that sufficiently sophisticated computation produces consciousness.

Chalmers, David J. `The Matrix as Metaphysics’. In Philosophers Explore The Matrix. Edited by Christopher Grau, 132-176. New York: Oxford University Press, 2005.

An entertaining yet philosophically sophisticated article by a prominent philosopher of mind, accessible to those with little background in philosophy; notes for more advanced readers are appended. Argues that, contrary to the usual views, if our cognitive experiences result from a convincing simulation of what we take to be the external world, skeptical conclusions do not follow, since most of our ordinary beliefs about reality are left untouched. Compares matrix scenarios to Descartes’ dreaming and evil genius arguments.

Acknowledgments: I am indebted to Anouk Barberousse, Mark Bedau, Cyrille Imbert, Tarja Knuuttila, Johannes Lenhard, Margaret Morrison, Wendy Parker, and Michael Stoeltzner for helpful suggestions in compiling this bibliography.
