Data Physics – An environment for Adaptive Agents

or

The Blind Watchmaker with the Invisible Hand

Dissertation project September 2000

Cefn Hoile

MSc Evolutionary and Adaptive Systems

“We must…not be discouraged by the difficulty of interpreting life by the ordinary laws of physics. For that is just what is to be expected from the knowledge we have gained of the structure of living matter.” [Schrodinger 1944]

Abstract

The internet and business intranets represent a forum for virtual interactions between users, their software and the computational resources of the network. The promise of the multi-agent system paradigm was to allow users to assign goals to autonomous programs which could engage in virtual interactions to achieve these goals on their behalf. However, research in this area has been dominated by the top-down, centralised and explicit programming, planning and representation techniques of classical AI. This has led to design difficulties. The complex dynamics of an open network are difficult for human designers to anticipate, and the solutions they build are fragile, over-engineered and inextensible. It is often unclear in what way the agent paradigm has assisted in the delivery of a solution.

This artificial life project is a novel synthesis of ideas from computer science, agoric systems, ecology, ethology, biochemistry, thermodynamics, economics, and sociology, leading to a template for a bottom-up, fully distributed, adaptable and scalable ‘physics’ providing a basis for the open-ended evolution of digital organisms.

It is argued that an open-ended approach to agent evolution can only be achieved by imposing a physics on the agent world, determining the interactions of low level data components, rather than imposing biological abstractions which enforce high-level structure. Within the “Data Physics”, structures can maintain themselves only through meeting users’ requirements, but are otherwise free to explore and exploit a broad range of the strategies seen in natural multi-agent systems in order to achieve survival.

In this approach, top-down, centralised or explicit agent specification and planning are rejected. Instead, local interactions of data objects imply locally defined fitness functions over data complexes. This guides a distributed selection algorithm which credits agents for participation in service delivery to the network user.

It is argued that the features of this architecture, in concert with a human society of users and developers, could allow the open-ended niche specialisation and adaptation required to optimise the productivity of the computational resources which are now available on the network.

An implementation of such an architecture is presented, experiments detailed, and future research challenges identified.

“A major problem in making effective use of computers is dealing with complexity” [Miller and Drexler 1988a]

“The sequence of instructions which tells the computer how to solve a particular problem is called a program. The program tells the machine what to do, step by step, including all decisions which are to be made. It is apparent from this that the computer does not plan for itself, but that all planning must be done in advance. The growth of the computer industry created the need for trained personnel who do nothing but prepare the programs, or sequences of instructions, which direct the computer. The preparation of the list of instructions to the computer is called programming, and the personnel who perform this function are called programmers.”[1] [Bartee 1977]

“Anything produced by a computer must (barring hardware faults) have been generated by the computational principles built/programmed into it.” [Boden 1996a]

“How can computers be programmed so that problem-solving capabilities are built up by specifying ‘what is to be done’ rather than ‘ how to do it’?” [Holland 1975]

“Virtual life is out there, waiting for us to provide environments in which it may evolve.” [Ray 1996]

Introduction

The computer has radically changed the way in which we interact with each other, economically, socially and intellectually. Object-oriented design, optimised compilation algorithms and advances in semiconductor technology are all aspects which have contributed to this success. The common element in all these advances is the participation of the human mind in the design process. However, the human design process represents a bottleneck in the further development of the computing paradigm.

Conventional design is carried out by first agreeing a specification, which encapsulates the objectives which the design must meet and the range of conditions under which the design should function. If the design process is successful, an explicit and human-comprehensible solution results. Comprehensibility is applauded, since it reassures investors, users and future designers that the system will indeed perform as promised. However, as computational systems have become more complex, the limitations of this process of rational human design have been exposed [Hoile and Tateson 2000] [Maes 1991] [Miller and Drexler 1988a].

Furthermore, there may in fact be advantages to an increase in complexity. The eventual demise of Moore’s Law demands the proper co-ordination of multiple processors to meet our expanding computational needs. However, in the exploitation of parallel computers, “[g]etting messages between processors and making sure that processors are fully occupied most of the time is far from easy… and difficult to automate” [Bossomaier and Green 1998].

The multiplicity, responsiveness, originality and complexity of solutions required to shape a productive network ecosystem are beyond the scope of human design. Artificial limitations may be imposed to minimise complexity and maintain control, but the solutions which could exploit complexity will remain unexplored.

When the dynamics of a system are beyond human comprehension, top-down control of the behaviour of that system will certainly be beyond human comprehension. The lesson from nature is that its own agent ‘designs’ – organisms and collectives – thrive in the context of complex ecosystems and economies, and do not suffer from the limitations of a designer’s comprehension. In spite of the anthropomorphisms of commentators (for example ‘Mother Nature’, ‘Blind Watchmaker’, or ‘Invisible Hand’), structures can adapt to exploit the dynamics of the system without the explicit comprehension of those dynamics. Within such systems, well-adapted, self-maintaining, embodied and embedded complexes made up from primitive elements can sustain themselves and adapt to prevailing conditions.

It is proposed to identify the “enabling infrastructure” [Mitleton-Kelly 2000] which can support this phenomenon within a digital environment. The study of natural optimising systems will inform the design of a ‘Data Physics’, an environment in which complexes of data – agents – may maintain themselves through the satisfaction of human-specified ends.

“[I]mprovements in machinery…have been made by the ingenuity of…those who are called philosophers or men of speculation, whose trade it is not to do any thing but to observe every thing and who, upon that account are often capable of combining together the powers of the most distant and dissimilar objects.” [Smith 1776]

Methodology

This paper draws on observations from many disciplines to propose an architecture which could solve some of the design problems of multi-agent systems, (MAS), avoiding some of the simplifying assumptions which devalue the results of other MAS experiments. It is hoped that a synthesis of features identified within these diverse areas of study can provide a network environment for an ecosystem of adaptive agents or virtual organisms, which collaborate to deliver services to users who specify (following Holland [1975]) ‘what is to be done’ rather than ‘how to do it’.

There is a long history of exchange between evolution and the social sciences, and between theories of computation and of self-replication. Malthus’ 1798 “An Essay on the Principle of Population as it Affects the Future Improvement of Society” sought to explain why population did not increase in geometric progression as a simple mathematical treatment might suggest. Amongst other influences, Malthus proposed a limitation of food stocks which would check this exponential growth. This treatment of resource constraints on reproductive populations inspired Darwin’s principle of natural selection. Darwinism in turn inspired social Darwinism and sociobiology [Cohen 1994] and strongly informed the development of Hayek’s concept of “spontaneous order” [Hodgson 1994]. Two common themes have contributed to the fruitful exchange between these domains. Firstly, the exploration of co-operative and competitive strategies under resource constraints. Secondly, niche specialisation or division of labour. These two themes will be treated in later sections.

Computation and self-replication also have a long history together. John von Neumann, who is credited with the conceptual design of the modern stored program computer, proposed that the secret of self-replication was to employ a representation at the heart of the process. To avoid an infinite regress of Russian dolls, each of which must have the tiny seed of its future offspring, he proposed that the representation would play two roles in the life cycle of the self-replicating creature. It could be ‘decoded’[2] to generate the body of the offspring, and it could be duplicated to allow a copy of the information to be passed on, creating a new self-sufficient creature, whose duplicate copy allows it to reproduce indefinitely in the same way. Von Neumann presented an implementation of this concept using a cellular automaton model, in which the replication of the creature was achieved using local interactions between the states of cells in a plane, according to a hand-designed rule table [McMullin 2000]. The ‘software’ or genotype of the artificial creature took the form of a long string of cells, decoded by a ‘hardware’ reading head which constructed the offspring by responding to the information encoded therein – a distinction which still exists in ‘von Neumann’ computers today. This specification is all the more remarkable considering that “von Neumann’s 1949 lectures predated the discovery and elucidation by Watson and Crick in 1953 of the self-replicating structure and genetic role of DNA” [Koza 1994].
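
The dual role of the representation can be illustrated with a minimal Java sketch. The Replicator class below is purely illustrative and is not part of the implementation described in the appendices; the same description is decoded to build the offspring’s body, and duplicated so that the offspring can reproduce in its turn.

    // Illustrative sketch only: the description plays two roles. It is decoded
    // to construct the body of the offspring, and duplicated so that the
    // offspring can itself reproduce indefinitely.
    class Replicator {
        private final String description; // the 'genotype'
        private final String body;        // the 'phenotype'

        Replicator(String description) {
            this.description = description;
            this.body = decode(description); // role 1: decoded into a body
        }

        Replicator reproduce() {
            return new Replicator(description); // role 2: duplicated and passed on
        }

        private static String decode(String d) {
            // stand-in for the 'reading head' which builds a body from the description
            return "body built from [" + d + "]";
        }

        public String toString() { return body; }
    }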

Although the intersections between these disciplines are apparent, many commentators (and research supervisors) warn of the dangers of combining too many features in artificial life experiments, a viewpoint well captured by the following criticism of biological models:

“It may seem natural to think that, to understand a complex system, one must construct a model incorporating everything one knows about the system. There are two snags…The first is that one finishes up with a model so complicated that no-one can understand it: the point of a model is to simplify, not to confuse. The second is that if one constructs a sufficiently complex model one can make it do anything one likes by fiddling with the parameters: a model which can predict anything predicts nothing.” [Maynard-Smith and Szathmary 1999]

It is in the nature of the proposed artificial system that many different aspects are integrated. These criticisms must be addressed head on.

It should be stressed first of all that the proposed system is not a model of a biological, social or economic system, but an actual computational architecture with dynamics which (it is proposed) can be exploited to optimise the design of multi-agent systems with an unprecedented flexibility. Since it draws on metaphors from natural systems, it may indeed shed light on the dynamics of systems which share those features, but its primary role is not to model such systems.

Secondly, it is worth reaffirming the objectives which led to an examination of natural systems in the first place. We wish to provide the conditions for solutions to arise which are presently beyond our understanding. In an ideal world, we will be able to identify the ‘slick tricks’ which agents discover to satisfy our requirements. However, if we restrict their behaviour to strategies we already understand, we have defeated the object of the exercise.

Finally, generality or universality should not be confused with complexity. Although it will be claimed that a very large range of interactions and behaviours can be implemented within the proposed architecture, it is through the minimalism of its specification that this is achieved, rather than the complexity of its specification.

In spite of the rejection of system simplicity as a guiding principle, it is important for the design specification of the architecture itself to satisfy the following criteria.

“…axioms should not be too numerous, their system is to be as simple and transparent as possible, and each axiom should have an immediate intuitive meaning by which its appropriateness can be judged directly.” [von Neumann and Morgenstern 1990]

Section 1 will detail the proposed benefits and recognised difficulties of multi-agent system design. Subsequent sections will establish the means by which natural optimising systems achieve similar benefits whilst avoiding such difficulties. This discussion will lead to the specification of a set of axioms or ‘design principles’ which will preserve the desirable properties of each domain, following Pfeifer [1996]. These design principles (DPs) will be explicitly stated and argued for. The approach is effectively that of reverse engineering.

“The general idea [of Reverse Engineering] is to start with the product…and then to work through the design process in the opposite direction and reveal design ideas that were used to produce a particular product….Stages in reverse engineering are system level analysis, …subsystem analysis… and finally component analysis where physical principles of component are identified.” [Dautenhahn 2000]

Design ideas in nature do not precede the systems which exploit them. They are abstractions which we derive from our observation of the system. Nevertheless, our treatment of natural systems will follow Dautenhahn’s prescription of a three level analysis.

‘Section 2 - System Level Analysis’ examines whole economies and ecosystems, discusses their dynamics, and characterises the way these dynamics are constituted by interactions between constitutive subsystems. This section addresses the highest level of organisation, and profits mainly from economic perspectives for resource allocation.

‘Section 3 - Subsystem Level Analysis’ examines the multiple levels of modularity which can contribute to our understanding of global behaviour at the subsystem level (for example cell, organism, colony, department, firm, corporation). Economics, ethology, evolutionary theory, behavioural ecology, symbiotic theory, game theory and thermodynamics are exploited to provide an understanding of the emergence of niches, and the correlated differentiation and adaptation of subsystems.

‘Section 4 - Component Level Analysis’ of natural systems offers an insight into the lowest levels of composition and interaction through which the higher levels of these systems are constituted.

Section 5 compares the approach adopted to existing work in Alife and computer science.

Finally, in Section 6, the central argument of the paper – a thread running parallel to the derivation of the design principles – is restated and outstanding challenges and future research directions are discussed.

The appendices contain comprehensive details of the Java implementation of the Data Physics architecture. Appendix A provides an overview, and describes the way the architecture satisfies the specified design principles. Appendix B details a simple experimental application exploiting the dynamics discussed. Appendix C provides a fully cross-indexed architectural reference.

Section 1 - Multi Agent Systems

In recent years, the agent-oriented paradigm has promised a powerful way of constructing distributed software architectures. Agents are software entities which are goal-oriented and autonomous [Jennings et al. 1998][Franklin and Graesser 1998][Nwana and Ndumu 1999]. A well-designed Multi-Agent System (MAS) architecture allows the diverse goals of multiple users to be resolved through the interactions of multiple agents representing their interests. Agents can selectively exploit the affordances and resources offered by each other and the network environment they share to achieve these ends, collaborating and competing to maximise benefits for the users they serve.

However, a recent survey [Nwana and Ndumu 1999] has identified serious problems facing the MAS design community in specifying general agent frameworks and hand-designing the agents to populate them. These problems are:

• The reasoning problem

- How can agents determine what actions to take, given their current state, their objectives and the environment which confronts them? In particular, can a general, extensible solution be found?

• The communication/ontology problem

- “The ontology (or concept definitions) specifies the terms each party must understand and use during communication” [Nwana and Ndumu 1999]. Explicit design will allow specialised agents to share data structures of anticipated kinds, and to engage in communicative acts about their experiences. How can this be made extensible?

• The information discovery problem (Information Pull)

- “How do agents learn for themselves to discover new information resources in open environments like the Internet? How do they decide which sources are no longer reliable or up-to-date? …[M]uch of the information discovery problem is currently tackled manually – and as long as this persists, it serves as a limiting factor to deploying multi-agent systems for real since they will essentially remain closed systems.” [Nwana and Ndumu 1999]

• The monitoring problem (Information Push)

- How will agents be prompted with information relevant to their operation?

• The legacy system problem

- How can access to existing technologies be made continuous with an agent world, allowing them to employ existing solutions and techniques without re-inventing the wheel?

Agent-oriented research draws heavily on the reasoning, representation and planning techniques of classical artificial intelligence. As Pryor [1995] notes, crucial assumptions are made about the world in which agents operate in order to allow analytic tractability of the problem space…

“Simplicity: it is possible to know everything about the world that might affect the agent’s actions;

Stasis: there will be no changes in the world except those caused by the agent’s actions;

Certainty: the agent’s actions have deterministic results.” [Pryor 1995]

…but most natural environments display the following characteristics:

“Complexity: it is impossible to know everything;

Dynamism: changes occur as a result of the actions of other agents or of natural phenomena;

Uncertainty: the agent cannot be sure what the results of its actions will be.” [Pryor 1995]

The “benevolent agent assumption…that agents…have common or non-conflicting goals” [Rosenschein and Genesereth 1988] is a further simplification which is commonly made in agent research projects, raising the question: “Since all agents have the same goal, why have agents with separate goals at all?”. More importantly, in a real MAS, conflicts of interest are inevitable, since agents represent their users’ conflicting interests and, additionally, occupy an environment in which limited network resources must be shared between agents. These conflicts introduce game theoretic aspects to the optimisation problem.

“Imagine that we have discovered a set of rules for all participants – to be termed as ‘optimal’ or ‘rational’ – each of which is indeed optimal provided that the other participants conform. Then the question remains as to what will happen if some of the participants do not conform. If that should turn out to be advantageous for them – and quite particularly, disadvantageous to the conformists – then the above ‘solution’ would seem very questionable.” [von Neumann and Morgenstern 1990]

Ignoring these aspects undermines the value of agent research. Agents in open systems whose well-functioning depends upon the “benevolent agent assumption” will have their weaknesses exploited by other agents to suit their objectives.

MAS approaches have great market potential as a means for users to delegate interactions to tireless, network-embedded programs which can serve their interests. However, hand-designed agent architectures have failed to achieve the objectives defined within the simplified, benevolent world of the agent research testbed. Moreover, the assumptions on which these testbed worlds are built are wholly unrepresentative of the environments which MAS will face in their vaunted future applications.

Section 2 - System Level Analysis of Natural Optimising Systems

The complexes arising in natural ‘selective systems’ [Wright 2000] such as social economies and ecosystems, in common with software agents, are goal-oriented, autonomous entities. In this section the global behaviour of such selective systems is examined.

In practice, economies and ecosystems are never isolated. Ecosystems require the input of energy from external sources such as fusion reactions in the sun, or geothermal energy sources, and they may in turn be part of a larger ecosystem. Economies must acquire raw resources from the natural environment, and may in turn be part of a larger economy. Networks must acquire raw resources from their users, and may be connected together into a larger network. For the purposes of study, the boundaries of a social economy, an ecosystem or a network are drawn arbitrarily.

To understand such systems we can identify the exchanges which take place across the defined boundary, and establish general laws which govern the interactions of elements within the boundary.

In the proposed Adaptive Multi-Agent System (AMAS) [Wright 2000], there will be a flow of computational resources into the system from the real world – resources provided by the users of our system and exploited by autonomous network agents. The AMAS correlates strongly with Tom Ray’s Tierra in this respect, in which “digital life can be viewed as using CPU time, to organise memory” [Ray 1996], paralleling organisms’ use of energy to structure matter. From an ecosystems perspective, the inflow of computational resources may be seen as somewhat analogous to the energy received from the sun on earth, while the network organisms are biomass cultivated to satisfy human requirements. However, there are important differences which underlie these parallels. The commodities exchanged over the boundary between human and digital society have a value within each respective system, defining an exchange rate, and coupling the systems. The composition and dynamics of the digital ecosystem are specifically engineered to ensure that its output is of value to the population of users – if the system functions correctly, the crop will be the information requested by users. These differences are clarified later.

Another vital resource provided by users is information. This could be inert information such as financial or meteorological data. It could be active information – operators which are able to carry out a transformation on these data structures. In a parallel to Holland’s notion of the ‘adaptive plan’ we can visualise the domain of action of our AMAS in terms of allocating users’ network resources to a space of “possible operator sequences” [Holland 1975]. Given an initial set of data and a set of operators which can transform them, this space would be defined as the set of possible trajectories which the system could follow, by assigning valid operations to the processors[3] available. Selecting a trajectory may not be a static problem, since the available data and operations would change owing to ongoing user interaction with the system. Since the data and operators cannot be anticipated, it must be possible to determine the kind of data for which a specific operator is appropriate at runtime. This metadata allows the system to determine the trajectories available, i.e. the degrees of freedom of the system at any given time. This leads to the first design principle.

DP1: The degrees of freedom available will be defined within the architecture. Given a set of data and a set of operations, a set of valid {data:operation} pairs will be defined.
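
One way in which DP1 might be realised is sketched below in Java. The DataItem and Operation types are hypothetical and do not correspond to the classes of the implementation described in the appendices; the point is simply that each operation carries metadata allowing the validity of a {data:operation} pairing to be established at runtime.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of DP1: operations declare, at runtime, the data they
    // can act upon, so that the valid {data:operation} pairs - the degrees of
    // freedom currently open to the system - can be enumerated.
    interface DataItem { String kind(); }

    interface Operation {
        String name();
        boolean appliesTo(DataItem d); // runtime metadata: is this pairing valid?
        DataItem apply(DataItem d);    // the transformation itself
    }

    class DegreesOfFreedom {
        static List<String> validPairs(List<DataItem> data, List<Operation> operations) {
            List<String> pairs = new ArrayList<>();
            for (DataItem d : data)
                for (Operation op : operations)
                    if (op.appliesTo(d))
                        pairs.add("{" + d.kind() + ":" + op.name() + "}");
            return pairs;
        }
    }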

The system must finally be constrained by the limits of the computational resources available. In the modern computer these finite resources are allocated by an operating system between tasks whose sequence of operations is specified by programs. This entitles the user to employ the computational resources of his own computer, the full extent of which remains unused most of the time. Structured resources such as data and software are almost always exploited only by the owner of the machine. If these resources were made available for exchange, all users could reap benefits. As Marx [1867] observed, when a single system co-ordinates a large volume of productive power it “effects a revolution in the material conditions of the labour process… a portion of the means of production are now used in common…and therefore on a larger scale than before.” Although these synergistic effects could increase the overall benefit for all users, as a matter of practicality, the satisfaction of our individual preferences must depend on our having individually contributed sufficiently to the shared computational resources, leading to…

DP2: The satisfaction of a user’s requests will depend upon the contribution of sufficient computational resources to the network.

The ultimate measure of success for the AMAS is how fully it exploits the resources contributed to the network by users, allocating them to achieve user-specified requirements. It would take on the role of an operating system, deciding upon and carrying out a resource-efficient sequence of valid operations leading to the satisfaction of our individual specifications.

In classical AI, this sort of problem is addressed by planning theory, and it was presumed to be sufficient to employ systems of symbols to describe the state of a world and to manipulate them according to specific rules in order to predict world behaviour. In classical planning, a single goal state specifies a condition which the final data will fulfil. The planning problem in AI is to work from the goal state to identify a set of operations which can transform the initial state into a state which meets its specification. AI has faced insurmountable difficulties when trying to build a logic which can “specify both the circumstances under which an action may be performed and the effects of that action” [Jennings et al 1998] in any real-world contexts. Furthermore, in uncertain worlds, such a deterministic logic is impossible. However, traditional planning algorithms depend on such a representation.

The sphere of action for symbol manipulation systems, and for economic systems is similar. The ‘goods’ of an economic system are the ‘data’ of the computer system. In a computer, it is possible to transform a set of data into another set of data by employing a function. In an economy it is possible to transform a set of goods into another set of goods according to our technology [Holland 1975].

The discipline of economics concerns a planning problem, except there is no single goal state, the objective being a compromise – to find the best possible allocation of resources as defined by the individuals’ preferences, moderated according to the individual’s contribution of resources. Economics has been characterised as a kind of distributed planning.

“In ordinary language we describe by the word ‘planning’ the complex of interrelated decisions about the allocation of our available resources….Planning in the specific sense in which the term is used in contemporary controversy necessarily means central planning – direction of the whole economic system according to one unified plan. Competition, on the other hand, means decentralized planning by many separate persons.” [Hayek 1945]

In economic theories, it is natural for individual-level preferences both to define the resource allocation problem – i.e. the compromise state – and to positively assist in its solution. However, it is worth contrasting Hayek’s perspective with the more traditional school of economic thought. At first glance, Hayek’s perspective seems to be well captured by the notion of ‘competitive equilibrium’. However, Walrasian economic theory, from which the principle of the ‘competitive equilibrium’ was derived, has tended to adopt the convenience of a single, all-knowing co-ordinator of resources which employs them to achieve a specific state. This approach demonstrates analytically, for certain mathematically ideal economies, that the economy will converge to a ‘competitive equilibrium’ (i.e. a state where supply equals demand), or to Pareto optimality (where no-one could be better off without someone becoming worse off). These mathematically ideal economies are as unsatisfactory as the ideal ‘blocks worlds’ of classical AI, and the benevolent, predictable worlds of the MAS research community, and for the same reasons – they bypass the real problems faced by agents operating in an uncertain world on limited information…

“What is the problem we wish to solve when we try to construct a rational economic order? On certain familiar assumptions the answer is simple enough. If we possess all of the relevant information, if we can start out from a given system of preferences and if we command complete knowledge of available means, the problem which remains is purely one of logic. That is, the answer to the question of what is the best use of available means is implicit in the assumptions…This, however, is emphatically not the economic problem which society faces…The reason for this is that the ‘data’ from which the economic calculus starts are never for the whole society ‘given’ to a single mind which could work out the implications, and can never be so given. The economic problem of society is…how to secure the best use of resources known to any of the members of society for ends whose relative importance only these individuals know….[I]t is a problem of the utilization of knowledge not given to anyone in its totality.” [Hayek 1945]

This also characterises well the problem faced by an adaptive network operating system. The dynamically changing requirements of multiple distributed users, and the computational means to which they have access, could never be encapsulated in a single meaningful representation. Even if they could, the problem would be too complex to solve in real time, or to be centrally co-ordinated. The earlier discussion of MAS and AI planning indicates that the applications of the agent paradigm cannot depend on the possibility of a central controller, the existence of a representation which can be interpreted or reasoned over, or the determinism of the world’s dynamics.

DP3: The system will not employ interpreted representations.

DP4: The system will be decentralised in its operation.

DP5: The system’s well-functioning will not depend on a deterministic world.

We should take note of the way the problem is solved in human social economies, where the system employs only local processing and local interactions between individuals addressing their own local needs.

“Every workman has a great quantity of his own work to dispose of beyond what he himself has occasion for; and every other workman being exactly in the same situation, he is enabled to exchange a great quantity of his own goods for a great quantity, or, what comes to the same thing, for the price of a great quantity of theirs. He supplies them abundantly with what they have occasion for, and they accommodate him as amply with what he has occasion for, and a general plenty diffuses itself through all the different ranks of the society.” [Smith 1776 Book 1 Chapter 1]

This provides the context for a distributed selection algorithm driven by local demand in which “the trials are the various concrete labours that produce commodities, the evaluation mechanisms are the various needs and demands of individual consumers, and selection occurs through the buying and selling of commodities” [Wright 2000]. If a workman does not provide work which he can dispose of, he will not be able to fulfil his own needs, and hence he will cease to practise that trade. In general, this will lead to a modification or a total change of livelihood, rather than a premature death, but the principle remains the same. The effect is to change the distribution of labour and material resources between technologies to favour the demands of the market.

DP6: The system will implement a distributed selection algorithm driven by local demand to allocate network resources to its subsystems.
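
A minimal sketch of the dynamics implied by DP6 is given below; the Provider and LocalMarket classes are illustrative only, not part of the implementation in the appendices. No global fitness function is computed: providers are paid by whatever local demands they happen to satisfy, pay for their own upkeep, and are removed when they can no longer do so.

    import java.util.Iterator;
    import java.util.List;

    // Illustrative sketch of DP6: selection driven purely by local demand.
    class Provider {
        final String service;
        long balance;
        Provider(String service, long balance) { this.service = service; this.balance = balance; }
    }

    class LocalMarket {
        static void round(List<Provider> providers, List<String> demands,
                          long price, long upkeep) {
            // each demand is satisfied by the first provider offering that service
            for (String need : demands)
                for (Provider p : providers)
                    if (p.service.equals(need)) { p.balance += price; break; }
            // every provider pays for the maintenance of its own structure
            for (Iterator<Provider> it = providers.iterator(); it.hasNext(); ) {
                Provider p = it.next();
                p.balance -= upkeep;
                if (p.balance < 0) it.remove(); // it 'ceases to practise that trade'
            }
        }
    }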

As Smith observes, the medium of a conserved currency or fiat money is the lubrication for such a system. Since money is agreed upon by all to have intrinsic value, whatever good you want can be arrived at by collecting together enough money [Shubik]. One could engage in a simple barter with the current owners of the desired goods. However, one would first need to establish what it is they want. Since money has intrinsic value to everyone, and can be exchanged for anything, we can assume that money is something they want. “Currency makes it easier for the equivalent of large multi-way barter deals to occur through separate pair-wise trades” [Miller and Drexler 1988a].

DP7: The system will employ a conserved currency to assist in the allocation of resources.
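
The conservation required by DP7 amounts to a simple invariant, sketched below in illustrative code: a transfer moves currency between accounts, but the sum over all accounts never changes.

    // Illustrative sketch of DP7: currency is conserved. Transfers redistribute
    // funds; they never create or destroy them.
    class Account {
        private long funds;
        Account(long funds) { this.funds = funds; }
        long funds() { return funds; }

        static boolean transfer(Account from, Account to, long amount) {
            if (amount < 0 || from.funds < amount) return false; // cannot overdraw
            from.funds -= amount;
            to.funds += amount;
            return true; // total funds across all accounts are unchanged
        }
    }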

It is possible to view an ecosystem as a system of resource allocation. The biomass on our earth has been transformed from natural resources such as sunlight, raw elements and minerals. The entities which populate our ecosystems must compete for their share of these resources, and the strategies they adopt and the relationships they form have a great deal in common with those adopted by self-interested individuals in economies. However, there are many differences. For example, it appears that “biological ecosystems involve more predation, while idealized market ecosystems involve more symbiosis” [Miller and Drexler 1988a]. Baum suggests one important difference which underlies this asymmetry, regarding the legislative backdrop under which an ecosystem operates.

“Comparing an ecology to an economy, the lack of rules – property rights – stands out most glaringly. Creatures do not even have secure property rights in their own protoplasm: other creatures regard it as lunch. This leads evolution to invest the bulk of its efforts in arms races. Creatures invest heavily in armor and teeth, fast legs, immune systems…Evolution is running in circles. The plants develop toxins, so the animals develop complex livers. In the end, they both have overhead, and neither is obviously better off….The vast majority of the biomass in a forest is in trunks, the sole purpose of which is to put a tree’s leaves higher than the other trees’. The culprit here is that the sunlight has no owner. If the sun had a rational owner, with the right to dispose of sunlight as he saw fit, building a trunk would be irrelevant. The owner would want to be paid. He would auction off the sunlight to the highest bidder. Money and sunlight would trade hands, but there’d be no wasted investment in trunks.” [Baum 1998]

This line of argument leads him to recommend that all elements of an adaptive computational system should be owned by an agent within the system who retains total authority over its fate, an authority backed up by a legislative framework. This leads to a further design principle for the Data Physics.

DP8: All resources must be owned – ownership implies control of access to a resource.
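
The following sketch, again illustrative rather than the implementation of the appendices, shows what DP8 amounts to in practice: a resource can only be reached through its owner, and the owner’s condition for access, here a simple payment, cannot be bypassed.

    // Illustrative sketch of DP8: every resource is owned, and ownership means
    // control of access. The owner's agreement - modelled here as payment of an
    // asking price - must be obtained before access is granted.
    class Wallet {
        long funds;
        Wallet(long funds) { this.funds = funds; }
    }

    class OwnedResource {
        private final Wallet owner;
        private final long askingPrice;
        OwnedResource(Wallet owner, long askingPrice) {
            this.owner = owner;
            this.askingPrice = askingPrice;
        }

        boolean requestAccess(Wallet user) {
            if (user.funds < askingPrice) return false; // access refused, nothing lost
            user.funds -= askingPrice;
            owner.funds += askingPrice;
            return true;
        }
    }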

Fortunately, in computational systems, such a legislative framework can be enforced without exception, by restricting the behaviour of the entities within it, by contrast with a human legislative framework, which can only impose penalties to deter criminals. It will still be possible for irrational agents to be exploited, but their explicit agreement must be obtained before access to the owned resource is permitted. This does not sacrifice the evolutionary benefits of the “Red Queen” effect [Ridley 1994], since sub-populations must still compete against each other under the constraint of limited resources, and will exploit each other’s inefficiencies, hence providing a pressure for change. However, it helps to prevent runaway selection effects and arms races. When such economic entities compete, they are competing to meet the needs of other entities – competing to co-operate. The stress on co-operation rather than competition in economics is borne out by the original definition of the ‘zero sum game’.

“…the economically significant games are most essentially not [zero-sum games]. There the sum of all payments, the total social product, will in general not be zero, and not even constant. I.e. it will depend on the behavior of the players – the participants in the social economy.” [von Neumann and Morgenstern 1990]

That is to say, destructive competition (for example parasitism and predation) is not the only way to win in an economic system. Not every benefit to an agent implies a disadvantage to another in the system. The previous assumptions, applied to a system of rational agents, lead to a vital result.

“[I]f money is conserved, an agent can not increase its money without either increasing the total money pay-in to the system (i.e. our overall goal) or getting the money from somewhere else in the system. If everything is owned by agents, then the money cannot come from somewhere else unless the agent owning it agrees to the transaction. But if this agent is rational, it will not agree unless it also profits. Thus conservation of money, plus ownership of everything by some agent in the system, yields a system that does not make transitions to worse performance…”[Baum 1998]

The increase of total money pay-in for our AMAS correlates with an increase of computational resources to the system – an exchange across our arbitrarily defined border. If users act as rational agents (through avatars) within the network system, they will submit computational resources to the system only if this is to their benefit. Providing access to their computational resources will earn currency for their avatar. This currency provides the avatar with the ability to offer currency in exchange for the completion of tasks. An autonomous agent participating in the system may therefore increase its money either by acquiring it from other rational autonomous agents, which implies that those autonomous agents must benefit, or by increasing the total amount of computational resources submitted to the AMAS by users (and therefore the total liquidity in the system). Any transaction must therefore either increase the efficiency or the adoption of the AMAS. Of course, the behaviour of agents in the system can never be perfectly rational. The ‘approximation to rationality’ with which we will operate, and its consequences, are considered later.

It is natural that our perspective on optimal resource allocation draws heavily on economic theory. This is partly because economic systems have a higher-level purpose, a moral purpose, which offers a foundation for a definition of optimality. No matter what their politics, people will agree that it is possible to have a good economic system, even if they disagree on the specific virtues. An economy exists to fulfil a role in our society. Legislation, whether effective or not, attempts to modify its dynamics in the interests of the members of that society. By contrast, an ecosystem just exists. Excepting the work of Gaia theorists [Lenton], teleological analysis in ecology is generally only appropriate for the lower levels, the units of selection. It is to the differentiation between subsystems, and to an understanding of the subsystems’ rationality which underpins distributed planning, that we now turn.

Section 3 - Subsystem Level Analysis of Natural Optimising Systems


To participate in an economy, an entity must know what it has, what it can do, and what it wants. A competitive entity will select the most profitable sequence of the operations available to it, transforming the goods in its possession, exchanging what it has with others, and acquiring what it wants.

There are certain pre-requisites for an entity to achieve these ends.

a) The entity must be able to operate on goods in some way.

b) It must have the appropriate relationship with entities which want what it has.

c) It must have the appropriate relationship with entities which have what it wants.

The means by which a), b) and c) become finely co-adapted to produce rational, competitive and co-operative entities remains to be described. However, an important result has been achieved. No entity in the system needs to understand the needs of the entire system, or to have the reasoning capacity to employ this knowledge. Human-level capacities as top-down planners are not required for the existence of an economy which can undertake distributed resource allocation. The human economy demonstrates that entities with simple relationships to a subset of others, engaging in simple behaviours, are sufficient.

The goods traded and transformed in a human economy are many and various. This inevitably results in entities which fulfil the above criteria a), b) and c) using different, though complementary, strategies. It is proposed to use life on earth as a model for the constitution and interrelationships of the entities in our network ecosystem. Our ecosystems are characterised by a massive diversity of interdependent structures, each exploiting its own strategy for self-maintenance and self-reproduction in its ecological context. These strategies have arisen from the parallel exploration of metabolic and reproductive configurations over evolutionary time through “[n]atural selection…a power incessantly ready for action…immeasurably superior to man’s feeble efforts” [Darwin 1872].

Maynard-Smith goes further still.

“Given time, any degree of adaptive complexity can be generated by natural selection.” [Maynard-Smith 1975]

In the coming sections, we establish what is required to provide a context for our data organisms to evolve open-endedly to achieve the tasks we specify. In Section 4, the pre-conditions for diversity and open-ended evolvability within populations are examined.

First, we attend to the minimal requirements for metabolic and population stability, specifying the constraints which units of selection may explore, and detail an abstract model of rationality for strategies operating under resource constraints which supports the distributed planning algorithm identified in the previous section.

The preconditions for evolution at any level are, according to Maynard-Smith, “the properties of multiplication, heredity and variation… Any population with these properties will evolve…so as to become better adapted to its environment.”[Maynard-Smith 1975] An important omission from this list of properties is that of ‘conservation’. Without this environmental constraint, the inevitable result is a geometric expansion of individuals. The fact that real populations did not expand geometrically was the phenomenon which Malthus originally sought to explain, and which provided Darwin’s original inspiration for “The Origin of Species”. However, it is understandable that a biologist would take the conservation of energy for granted, since the domain of biology is the physical world, in which this constraint is strict legislation. In an artificial physics, it must be explicitly acknowledged and implemented.

In nature an important constraint on organisms is the second law of thermodynamics, which implies that “the critical resources – biomass and free energy are ‘downwards conserved’: they can be transferred and reduced by the transactions animals are capable of, but not increased.” [Miller and Drexler 1988a] Within these constraints, the form adopted by a ‘unit of selection’, and the strategy of self-maintenance and self-reproduction, (implied by its form), is left open for exploration. The way in which purpose is imposed upon biological organisms is, at bottom, through locally imposed thermodynamic constraints.

“Individual living entities (organisms) maintain their self-identity and their self-organisation while continually exchanging materials, energy, and information with their local environment…[M]etabolism is required for a physical entity to persist in the face of the second law of thermodynamics, so any [such] physical system…must rely on metabolic processes to sustain itself” [Bedau 1996].

The purpose of the Data Physics is to select for organisms with metabolisms which carry out operations of value to users. Imposing resource conservation, and costs on organisms to maintain their structure, places them under a selection pressure to provide valuable services to users, directly or indirectly, since users are the final source of all computational resources. Agents must either participate in exchanges which profit users, in order to earn the currency they need to survive, or they will die.

DP9: Resources will be conserved: they may be transferred and reduced by the transactions network organisms are capable of, but not increased.

DP10: The maintenance of structure must have an associated cost. [4]
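
Taken together, DP9 and DP10 might be realised along the lines of the following illustrative sketch, which is not the implementation of the appendices: at each tick every organism pays rent in proportion to the structure it maintains, the rent is recycled into a common pool rather than destroyed, and organisms which cannot pay are dissolved.

    import java.util.Iterator;
    import java.util.List;

    // Illustrative sketch of DP9 and DP10: structure has an upkeep cost, and the
    // resources paid are conserved - they return to a common pool rather than
    // disappearing from the system.
    class Organism {
        long energy;
        final int structureSize;
        Organism(long energy, int structureSize) {
            this.energy = energy;
            this.structureSize = structureSize;
        }
    }

    class MaintenanceTax {
        long commonPool;

        void tick(List<Organism> population, long rentPerUnit) {
            for (Iterator<Organism> it = population.iterator(); it.hasNext(); ) {
                Organism o = it.next();
                long rent = rentPerUnit * o.structureSize;
                if (o.energy < rent) {
                    commonPool += o.energy; // remaining resources are recycled
                    it.remove();            // the organism cannot sustain its structure
                } else {
                    o.energy -= rent;
                    commonPool += rent;
                }
            }
        }
    }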

For a unit of selection to survive and propagate, the interactions which take place with the environment must be of value in improving its fitness. For units of selection, utility is finally determined in terms of fitness, “[r]oughly…the number of its offspring which survive to reproduce” [Holland 1975].

“[I]t is of evolutionary advantage for animals to be rational.” [McFarland and Bosser 1993]

The concept of utility or payoff is used in economics, game theory and ethology to characterise value for an agent. It provides a measure by which actions may be compared. The rational behaviour for an agent in a given situation is defined as the behaviour with the greatest payoff for it. Given a transitive ordering, there will be a behaviour which lies at the top of the scale, i.e. returns the maximum utility. This will be the rational course of action.

“Economic principles apply not only to humans but to all animal species. All animals have to allocate scarce means among competing ends. The scarce means may be energy, nutrients or time. The competing ends may be growth and reproduction,…or one activity versus another.” [McFarland and Bosser 1993]

Actions constrained by ‘scarce means’ (conservation constraints) are mutually exclusive. Thus a process of ‘action selection’ [Maes 1991] must take place. This does not imply any specific form of implementation, but simply requires that one of the available options is selected. Even inactivity is a choice with an associated cost.
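
Action selection of this kind requires nothing more elaborate than the following illustrative sketch: the mutually exclusive options currently available are compared, and the one with the greatest payoff is chosen, with inactivity included as an option carrying its own cost.

    import java.util.List;

    // Illustrative sketch of action selection under scarce means: choose the
    // option with the greatest payoff; doing nothing is itself an option with
    // an associated (negative) payoff.
    class Option {
        final String name;
        final double payoff;
        Option(String name, double payoff) { this.name = name; this.payoff = payoff; }
    }

    class ActionSelection {
        static Option choose(List<Option> options) {
            Option best = new Option("do nothing", -1.0); // illustrative cost of inactivity
            for (Option o : options)
                if (o.payoff > best.payoff) best = o;
            return best;
        }
    }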

The above definition of rationality is not automatically satisfied by classical reasoning processes, however effective they may be in a theoretical sense. The so-called effective procedures of classical mathematical logic provide a guarantee of arriving at a conclusion, given indefinite, sometimes infinite time and resources. However, creatures in real ecosystems, and agents operating in real-time computational environments, must actually implement their information processing and will incur its costs. These costs may be in terms of energy, material or time. The costs incurred must be balanced by the value of the result. This is known as bounded rationality, rationality operating within the boundary of cost constraints. A timely procedure which achieves a near-optimal result may well be more rational under this definition than a long-winded procedure to find the absolute optimum.

In game theoretic problems, the utility of a specific action for an agent is determined not only by its own actions, but also by the actions of other agents. Once all agents have selected from a range of actions, a payoff matrix is used to determine the respective payoffs for each participating agent [von Neumann and Morgenstern 1990][Poundstone 1992][Kreps 1990][Axelrod 1997]. Game theory experiments are unrealistic in that agents’ activities are synchronised, all agents know the details of the payoff matrix in advance, and the matrix depends only upon agent actions [Jennings et al 1998][Olafsson 1996]. In an ecosystem, the ‘payoff matrix’ is continually changing due to noise, environmental effects and the actions of other agents. As a result, the ordering of actions, or preferences, demonstrated by an organism’s actual responses cannot be expected to conform to the ideal ordering of responses.
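
For concreteness, a payoff matrix of the classical kind is sketched below in its familiar two-player, two-action form; the values are purely illustrative. Each agent’s payoff depends on the joint choice of actions, not on its own action alone.

    // Illustrative two-player payoff matrix: payoffs depend on the joint action.
    class TwoPlayerGame {
        // PAYOFF[rowAction][columnAction] = {row player's payoff, column player's payoff}
        static final int[][][] PAYOFF = {
            { {3, 3}, {0, 5} },  // row player co-operates
            { {5, 0}, {1, 1} }   // row player defects
        };

        static int[] payoffs(int rowAction, int columnAction) {
            return PAYOFF[rowAction][columnAction];
        }
    }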

One way of conceiving of this divergence is to propose a value function [McFarland 1996], which determines the ordering of actions which an animal actually displays. Thus “the entity maximized by a rational agent is a property of the individual, and will vary from one individual to another.” [McFarland and Bosser 1993] The animal’s behaviour is, by definition, rational with respect to this value function. The actual payoffs deriving from a particular action are determined by the ‘cost function’. The individual’s metabolic and reproductive strategy is determined by its form or phenotype. This then dictates the value function and the cost function for different world states.

Schrodinger tries to characterise metabolism in his ‘What Is Life?’.

“[T]he device by which an organism maintains itself stationary at a fairly high level of orderliness…consists in continually sucking orderliness from its environment…In the case of higher animals we know the kind of orderliness they feed upon well enough, viz. the extremely well-ordered state of matter in more or less complicated organic compounds, which serve them as foodstuffs.” [Schrodinger 1944]

To take Schrodinger’s higher organism example, the state of having-digested-an-antelope is of great value to a lioness, having bound free energy into energy-rich organic compounds to help maintain order in her tissues. To achieve this state, she will have passed through the states of having-seen-an-antelope, chasing-an-antelope, having-caught-an-antelope and having-eaten-an-antelope. Appropriate responses to each of these states will improve her chances of moving to the next state in the sequence. The value of each predecessor state may be calculated, following von Neumann, as the value of its successor multiplied by the probability of its occurrence.

The lioness gambles energetic compounds in generating a response to her environment in the hope of an eventual payback. The final payback, for the lioness’s successful ancestors, was their ability to produce offspring, using the energetic compounds they acquired through predatory behaviour. Assuming that the lioness’ current environment has enough in common with her ancestors’ environment, similar paybacks will accrue to her, on average, by following the value function she has inherited. The process of evolution can be interpreted as operating upon the variability of individuals’ structure, and increasing the frequency of those metabolising structures whose ‘value function’ approximates most closely to their ‘cost function’ – the actual utility – since those are most able to reproduce.
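
This backward calculation of state values can be made explicit with a small illustrative sketch; the probabilities are invented for the purpose of the example. The value of each predecessor state is the value of its successor multiplied by the probability of the transition.

    // Illustrative sketch: value of a predecessor state = value of its successor
    // multiplied by the probability of reaching that successor.
    class StateChain {
        static double[] values(double finalPayoff, double[] transitionProbability) {
            int n = transitionProbability.length;
            double[] value = new double[n + 1];
            value[n] = finalPayoff; // e.g. having-digested-an-antelope
            for (int i = n - 1; i >= 0; i--)
                value[i] = transitionProbability[i] * value[i + 1];
            return value;
        }

        public static void main(String[] args) {
            // seen -> chasing -> caught -> eaten -> digested, illustrative probabilities
            double[] p = { 0.5, 0.3, 0.9, 0.95 };
            for (double v : values(100.0, p)) System.out.printf("%.2f%n", v);
        }
    }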

Schrodinger’s characterisation of metabolism rests upon the notion of the entropy of a material – a measure of ‘orderliness’ – whose value determines the amount of free energy available in that material. However, this does not mean that the simple calorific value of a foodstuff determines its utility or value to an organism. The amount of free energy – equivalent to currency [McFarland and Bosser 1993] – which can be extracted from a material state with a given entropy depends upon the nature of the machine which does the extraction. This goes deeper than the simple constitution of foodstuffs. It relates to any regularity in material state from which a creature can extract orderliness to improve its fitness. This includes the regularities which the sensory apparatus exploits to extract important features in the world and better inform the creature’s actions. The utility for an organism of any material state depends upon phenotypic adaptations allowing it to exploit that state in order to improve its fitness. There is a selection pressure towards self-sustaining metabolic systems whose responses are rational, i.e. which maximise utility by this definition.

As Ray [1996] reminds us, “[f]reely evolving creatures will discover means of mutual exploitation and associated [cost functions] that we would never think of.” In a real ecosystem, the success of a specific metabolic strategy depends upon the exchanges made between organisms through the medium of the local environment. There are many different and interdependent forms of trophism (food exploitation) which characterise different forms of metabolism. Autotrophic organisms exist ‘at the boundary’ of the ecosystem, and are responsible for fixing natural energy sources into biomass, either through photosynthesis (heliotrophism) or from naturally occurring chemicals (chemotrophism) such as those found at geothermal vents. All organisms must establish themselves such that their local environment can deliver the appropriate form of nourishment. Organisms further down in the food web depend not only on the affordances of the inorganic environment, but also on an appropriate relationship with others’ metabolisms.

“Ecosystems provide contexts for evolution in two distinct senses. First, the ecosystem’s replicators evolve in response to unchanging aspects of their environment, such as climate and physical principles. Second, the replicators interact …with each replicator providing part of the context for others…[I]n a non-ecological system, the subtlety and sophistication of the selective pressures are fixed.” [Miller and Drexler 1988a]

Not all replicating individuals exploit the same niche. For example, different predators depend upon the metabolisms of different prey to furnish them with energy and nutrients. Their strategies diversify accordingly. Following Baum’s example (see earlier), our focus will be on relationships in which participation is either beneficial (mutualism) or neutral (commensalism) for the parties involved, since the assertion of property rights ensures that exchanges are agreed by all. Rational agents will not agree to disadvantageous interactions.

The existence of a currency within our AMAS simplifies the sharing of energy between entities, but it does not eliminate the distinctions between metabolic strategies. Each strategy will depend upon further resources and environmental features in order to engage in profitable productive activities.

An individual’s connectivity to other metabolising individuals determines the range of metabolic strategies which can be employed. In turn, as the progeny of that individual diversify, so there are new dependencies to exploit. As has already been remarked, there is, strictly speaking, no purpose to bio-diversity in a natural ecosystem. However, the value of diversity within our network ecosystem is that it could provide a means of automatically dividing the problems we set, by providing metabolic niches (implicit fitness functions) which our simple agents may evolve to exploit.

“If we can really succeed in modularizing and ferreting out…[problem] structure, in converging to assign credit accurately, perhaps we can break up these problems, make consistent progress, and solve them given merely vast, but plausible computational resources.” [Baum 1998]

This is effectively the principle of the division of labour. There are more advantages to this process than reducing complex problems into simple ones.

“The division of labour…so far as it can be introduced, occasions in every art a proportionable increase of the productive powers of labour…This great increase of the quantity of work…is owing to three different circumstances; first to the increase of dexterity in every particular workman; secondly, to the saving of time which is commonly lost in passing from one work to another; and lastly to the invention of a great number of machines which facilitate and abridge labour and enable one man to do the work of many.” [Smith 1776 Book 1 Chapter 1]

The three benefits of the division of labour described above show the value of diversity in our network ecosystem. Firstly, as a population of agents undergoes natural selection in a specific niche, it is placed under a selective pressure to adapt its metabolism to exploit that niche more efficiently. Secondly, since agents are specialised to play a specific role in a sequence of software operations, they will not need to reconfigure themselves in order to play a particular role, as agents must in other systems, for example [Clements 2000]. Since there is a cost associated with the use of all computational resources, including the cost of loading specialist agents or operations into memory, a resource-optimum strategy will exist for specialist operations, depending upon their frequency of use. “Main core memory is a high performance resource in short supply. Disk is a lower performance resource in more plentiful supply. …[C]ore memory will typically be a high rent (business) district, while disk will typically be a low-rent (residential) district” [Miller and Drexler 1988b]. The final point, in an entirely mechanised economy, would appear to be the same as the first. However, it is worth noting that amateur programmers and professional developers may interact with the network ecosystem, and may intervene by providing operations or structures for which a need is identified, as the following authors acknowledge.

“When also open to human society, computational market ecosystems will enable diverse authors to create software entities and receive royalties for the services they provide, and enable diverse users to mold the system to their needs by exercising their market power as consumers. Computational markets can be made continuous with the market ecosystem of human society.” [Miller and Drexler 1988b]

The means of revenue generation exploited may be in distributing this optimised code, or in acquiring some part of the resources employed in executing it. As Baum observes of computational economies, “if any agent (or collection of agents) can discover a better way of having the whole system earn money from the world, they can get everybody relevant to go along. This is a kind of universality result, saying that you can’t do better than this system: if we knew a better way of solving, we could simply act as an agent, and get the system to implement our idea with appropriate payoffs.” [Baum 1998]

As discussed above, in real biology, locality determines an entity’s access to others’ metabolic products. It must adapt to the niches offered within its locality, or change its locality to find an appropriate niche. However, digital organisms in a modern computer exist within an environment which has no clear spatial analogue - “[a]ll memory in an ideal von Neumann computer is effectively equidistant.” [Miller and Drexler 1988b]. This seems to imply that all our digital organisms are embedded within the same context – a single layer in which all agents are provided with the same sensory context, and the same opportunities for action. This leads to an architecture which is unscalable (since all organisms must be informed of all events), and agents which are undifferentiated. However, the pre-conditions do exist for digital entities to explore niche specialisation, and to benefit from the effects discussed in the previous section.

Despite the digital revolution, not everyone on earth knows everyone else, or receives every e-mail. The relationships formed between human individuals form ‘small world networks’. Small numbers of connections from individuals to other individuals connect human society into a ‘unique giant component’. This is the inevitable result of connecting multiple nodes together even when using only a few links for each node, according to results from random graph theory [Bossomaier and Green 1998]. As a result, every entity is connected, indirectly, to every other.

This observation does not in itself explain how human societies solve the scalability problem. We must also acknowledge that the connections in our society are not made randomly. Taking a mercenary perspective, human relationships are selected according to criteria b) and c) above, each of which may be partly a function of criterion a). That is, a blacksmith must have appropriate relationships with smelters and ironmongers, owing to his industry, as well as with butchers, bakers and others, owing to his bodily needs. This leads to the formation of a graph in which locality implies shared needs, shared technologies or shared produce. An individual may avoid contact with others who exploit the same niche, but he cannot help sharing producers and consumers, placing his competitors only two hops away. Since the neighbourhood of all agents in the system is determined according to these individual incentives, clusters will form through individuals’ locally determined decisions. These clusters are the key to solving the scalability problem.

In a randomly connected graph, if I wish to locate particular kinds of node, I have no basis for exploring one part of the graph over another. However, in a non-random graph of the kind described, the topology of the space helps to reduce the search problem. Once a member of a cluster has been identified, similar nodes will not be too far away. Furthermore, in the sort of system conceived, an organism’s search will not begin at random anyway. Since its niche is determined by its connectivity, it will begin at a node whose connectivity is determined by the niche it occupies, and will already have appropriate neighbourhood relationships for its needs to be satisfied. For example, in a system of reproducing individuals, their offspring will already find themselves within the location which provided rich resources for their parents. Since the connected graph metaphor is so powerful in delivering scalable solutions, and provides a concept of space for niche specialisation, this discussion has provided us with a further design principle of the system.

DP11: Network organisms will be able to form relationships, and connect together to form small world networks.

It is a useful illustration of the design principle – that the use of computational resources must have an associated cost – that agents have no option but to seek scalable solutions within this framework. Since all resources are owned, and hence have an associated cost for their use, maintaining references will cost too. It will be in the interests of agents to maintain the minimum of references required for their stable existence. This does not mean that all agents must maintain few references, but it means that any references maintained must be worth keeping. Agents may specialise in maintaining large numbers of references, providing a yellow-pages service to other agents. Adopting such a centralised strategy could be the resource-optimal way of delivering information. Whether this is the case or not will be decided by the ‘energetic’ success of the strategy. If fellow agents can acquire the required information by a more cost-effective means then they will not trade with the yellow-pages agent, ensuring its extinction.
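
As a toy illustration of this economics (the class, method and figures below are invented for illustration and are not part of the accompanying implementation), an agent might retain a reference only while the expected benefit of holding it exceeds the rent charged for maintaining it:

// Illustrative sketch only: an agent keeps a reference while its expected
// value per rental period exceeds the rent charged for maintaining it.
public class ReferenceEconomics {

    static boolean worthKeeping(double usesPerPeriod,
                                double incomePerUse,
                                double rentPerPeriod) {
        // Expected income attributable to the reference versus its upkeep cost.
        return usesPerPeriod * incomePerUse > rentPerPeriod;
    }

    public static void main(String[] args) {
        // A rarely used reference to a low-value contact is not worth its rent...
        System.out.println(worthKeeping(0.1, 2.0, 1.0));  // false
        // ...whereas a 'yellow pages' agent whose references are queried
        // constantly can afford to hold very many of them.
        System.out.println(worthKeeping(50.0, 2.0, 1.0)); // true
    }
}

Whether the centralised yellow-pages strategy or a more distributed one prevails is then decided, as argued above, by which strategy covers its rent.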

This simple specification allows for many further strategies which are characteristic of human social organisation. Specialists who don’t do anything themselves, but know someone who does – facilitators or intermediaries – could benefit from their knowledge of particular ‘industry’ clusters, and higher level intermediaries could benefit from their knowledge of lower level intermediaries. Specialist technologists could acquire each other’s techniques, improving the productive process.

The flexibility of interrelationships in the natural kingdom also provides a context for the development of reciprocal relationships between and within species. Symbiosis and sociality are ubiquitous in the animal kingdom, and are believed by some to have been at the heart of innovation during evolutionary history [Margulis 1981][Maynard-Smith and Szathmary 1999][Watson et al 2000][Best 2000][Johnson 2000]. Coral species and lichens represent long-established symbiotic relationships between different phylogenetic taxa. Siphonophores, such as the Portuguese Man O’ War, are in fact colonies of loosely aggregated individuals which have differentiated to perform different roles for the colony [Limoges and Ménard 1994]. The free exchange of compounds between organisms in such colonies enables ‘payoff’ to be distributed between the participants whose strategies contribute to each other’s benefit. Examples include the bacteria in human and ruminant guts which carry out the digestion of specific compounds. This case is interesting on a strategic level, since it is conceivable that the shorter life cycle and the horizontal transposition of bacterial DNA could allow them to respond rapidly to the defence strategies of the plants and animals which resist being digested. The long lifespan and vertical heredity of mammalian evolution imply an adaptive inertia when compared with their symbiotic bacteria.

There are still more astonishing examples of strategic mutualism depending directly upon energy exchange. Organelles such as mitochondria and plastids fulfil a similar role within eukaryotic cells to the digestive bacteria of mammals. These organelles are believed to have derived from formerly free-living prokaryotes [Margulis 1981]. In such cases, the metabolisms of each participant have become inextricably coupled together.

Eukaryotic cells have developed the most complex interrelationships, conspiring together to construct the strongly differentiated and highly organised tissues of multi-cellular organisms – colonies of cells which are both metabolically and reproductively interdependent. The organelles within eukaryotic cells maintain their own store of genetic information, but their reproductive cycle is integrated with that of the host eukaryotic cell. Evolutionary theorists acknowledge that natural selection is not constrained to operating at one specific level, but may operate at multiple levels simultaneously [Lewontin 1970]. “Colonies of social animals can, with some justification, be regarded as ‘super organisms’ in the sense that they can have adaptations…at the colony level.” [Maynard-Smith and Szathmary 1999] Maynard-Smith and Szathmary offer three criteria for a eusocial[5] colony: “the reproductive division of labour”; “an overlap of generations within the colony”; “co-operative care of the young produced by the breeding individuals.” [Maynard-Smith and Szathmary 1999] A multicellular creature is a eusocial colony by their definition.

‘Major transitions’ [Maynard-Smith and Szathmary 1999], such as the evolution of multicellularity, have introduced new classes of biological entity which further exploit the affordances of our physics to achieve reproductive fitness. If reproduction is definitive, then eusociality provides examples of reproductive coupling which define new reproductive entities. If entities are metabolically defined (following Dyson), the metabolic processes of each participant in an energy-sharing symbiosis have become inextricably coupled together – defining a new metabolic entity.

To exploit the potential of novel, unanticipated strategies and interrelationships, the architecture itself must not be too rigid. It must support the differentiation and divergence of individuals in interdependent niches as a route to delivering productive solutions, leading to the next design principle.

DP12: The form of agent-organisms, their interactions and exchanges and hence their strategy for self-maintenance will remain open for exploration.

The advantage of a physics-based approach to the specification of our architecture is that we may allow an open-ended exploration of structure, composed from locally interacting low-level components, which can generate ‘radical novelty’ of metabolic and reproductive strategies [Boden 1996a] through symbiosis, sociality and other mutual interrelationships.

Section 4 – Component Level Analysis of Natural Optimising Systems

“There is only one evolution - the evolution of matter from elementary particles through atoms and molecules to systems of higher and higher levels of organisation” [Keosian 1974].

“What are the deep rules that life on earth exploits?…DNA is just the trick that earthly life uses to exploit the deep rules- it’s not the rules themselves”[Stewart 1999].

In previous sections there has been a focus on possible exploitation strategies – methods of self-maintenance constrained by the available resources. In particular, we have operated under the assumption that agents were able to evolve open-endedly to converge on the rational strategies required to exploit niche opportunities, and hence solve the initial planning problem. Now we turn to exploration strategies – the ways in which a population may arrive at appropriate exploitation strategies. The pre-conditions for open-ended adaptation are determined at the component level of the system.

Darwin’s original argument for natural selection [Darwin 1872] is illustrated with examples from animal husbandry in which genes are redistributed between individuals to satisfy “man’s use or fancy”, leading to changes in their phenotype. He was aware of the particulate nature of inheritance – “the laws of the correlation of growth” – noting for example that “pigeons with feathered feet have skin between their outer toes”. However, it was not until Watson and Crick’s research that the base-pair structure of DNA was elucidated, and its function in the production of proteins understood.

In retrospect we may contrast different forms of variation within a population.

i) the redistribution of existing alleles between individuals in a population

ii) structural modification of individuals leading to new alleles and radical novelty

Darwin’s original examples are of type i). Evolution is still commonly illustrated using the ‘breeder equation’, which details the changing frequencies of a pre-determined set of alleles, given their heritability and utility, for a Mendelian population of a stable species which is otherwise genetically homogeneous. Epistasis[6] and neutrality[7] are eliminated for simplicity.
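
For reference, the breeder equation in its standard quantitative-genetics form (the symbols below are the conventional ones, not terms drawn from this text) relates the response to selection to heritability and the strength of selection:

R = h^2 S

where R is the response to selection (the change in the population mean of the trait between generations), h^2 is the narrow-sense heritability of the trait, and S is the selection differential (the difference between the mean of the selected parents and the mean of the whole population).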

More complex models from evolutionary computation (EC) as explored by the schema theorem [Holland 1975] and neutral networks [Shipman et al 2000] allow for non-linear effects between loci in the genome, but operate on a pre-determined and limited set of possible genotypes with a prescribed phenotypic mapping. Variable length genotypes [Harvey 1997] and graph encodings [Gruau 1995][Koza 1994][Sims 1994] provide for theoretically indefinite complexity. However, all these models require the participation of the designer to specify the phenotypic components in advance, and implement the ‘adaptive plan’ [Holland 1975] which determines the way the population explores and exploits the feature space provided. These specifications integrate the designer’s expectations, and strongly constrain the range of phenotypes and strategies which can be explored to solve the problem.

The process of evolution is commonly viewed in terms of the mutation and recombination of elements in a symbol string specifying the phenotype of an individual. The task is to explore the space of possible symbol strings to find a phenotype which is well adapted to satisfying the engineer’s design criteria.

“Most evolutionary simulations are not open-ended. Their potential is limited by the structure of the model, which generally endows each individual with a predefined set of allelic forms,…The data structures do not contain the mechanism for replication…[T]he mechanisms of selection must also be predetermined by the simulator.” [Ray 1996]

The EC approach is to specify an interpretative system, an evaluative environment and a population management algorithm. These simplifications are based upon biological abstractions which we use to understand the subsystem level of biological dynamics, and are hence assumed to be harmless. The interpretative system maps symbols to phenotypic features – caricaturing DNA interpretation. The evaluative environment provides fitness values for each candidate phenotype. The population management algorithm maintains a ‘conservation of lives’, and implements natural selection using these fitness values to determine the individuals which will contribute to the next generation. Evolution of type i) is possible under these specifications. Prescribed phenotypic traits can be redistributed amongst individuals. This does not extend to the type ii) structural modifications which provide the basis for true open-ended evolution.
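
To make this recipe concrete, the following is a minimal sketch – not part of the accompanying implementation – of the conventional EC loop just described: a fixed-length bit-string genotype, a designer-supplied fitness function standing in for the evaluative environment, and a population management algorithm enforcing a ‘conservation of lives’. All names, parameters and the toy fitness measure are illustrative.

import java.util.*;

// Illustrative sketch of a conventional (type i) evolutionary computation loop:
// the genotype encoding, the fitness function and the population management
// algorithm are all fixed in advance by the designer.
public class ConventionalGA {
    static final int GENOME_LENGTH = 32;
    static final int POPULATION_SIZE = 50;
    static final double MUTATION_RATE = 0.02;
    static final Random RNG = new Random();

    // 'Interpretative system' and 'evaluative environment' rolled into one:
    // fitness is simply the number of 1-bits (the classic OneMax toy problem).
    static int fitness(boolean[] genome) {
        int score = 0;
        for (boolean bit : genome) if (bit) score++;
        return score;
    }

    static boolean[] randomGenome() {
        boolean[] g = new boolean[GENOME_LENGTH];
        for (int i = 0; i < g.length; i++) g[i] = RNG.nextBoolean();
        return g;
    }

    // 'Population management algorithm': tournament selection plus point mutation,
    // with a strict conservation of lives - the population size never changes.
    public static void main(String[] args) {
        List<boolean[]> population = new ArrayList<>();
        for (int i = 0; i < POPULATION_SIZE; i++) population.add(randomGenome());

        for (int generation = 0; generation < 100; generation++) {
            List<boolean[]> next = new ArrayList<>();
            for (int i = 0; i < POPULATION_SIZE; i++) {
                boolean[] a = population.get(RNG.nextInt(POPULATION_SIZE));
                boolean[] b = population.get(RNG.nextInt(POPULATION_SIZE));
                boolean[] winner = fitness(a) >= fitness(b) ? a : b;
                boolean[] child = winner.clone();
                for (int locus = 0; locus < child.length; locus++) {
                    if (RNG.nextDouble() < MUTATION_RATE) child[locus] = !child[locus];
                }
                next.add(child);
            }
            population = next;
        }
        int best = population.stream().mapToInt(ConventionalGA::fitness).max().orElse(0);
        System.out.println("Best fitness after 100 generations: " + best);
    }
}

Everything a creature could ever ‘be’ in such a loop is fixed by the designer’s choice of genome length, fitness function and selection scheme – precisely the limitation Ray identifies above.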

Maynard-Smith and Szathmary focus on eight major transitions in biological evolution which have “made the evolution of complexity possible.” These are cases in which radical novelty and complexity have arisen from modifications of existing self-reproducing structures…

1) Self-replicating molecules and hypercycles → Compartmentalised populations of molecules

2) Independent replicating molecules → Chromosomes

3) RNA in both genetic and catalytic roles → DNA as genotype and protein as catalyst

4) Prokaryote → Eukaryote

5) Asexual clones → Sexually recombinant populations

6) Protists → Animals, plants and fungi

7) Solitary individuals → Colonies

8) Primate societies → Human, language-using societies

Most of these transitions can be seen as a renegotiation of what a reproductive individual is, and how information representing its phenotype can be encoded in stable physical structure. They are not based upon the shuffling around of alleles.

“Systems of matter, in order to be eligible for selective self-organization, have to inherit physical properties which allow for metabolism…and for (‘noisy’) self-reproduction. These prerequisites are indispensable.” [Eigen and Schuster 1979]

Many of the complexities of DNA processes exist to maintain the stability of discrete inherited information in a biochemical context – allowing metabolic strategies to be passed on to the offspring. However, strategies for maintaining structure against thermodynamic noise and reproducing are not always successful, leading to slight modifications, many of which have no immediate impact, but can be retained through heredity. Certain modifications may trigger a distinct change in the structure of the individual. This change in structure leads to a change in activity. Where the strategy implied by this modified activity can survive and propagate, it will proliferate.

Digital reproduction, in contrast to biochemical reproduction, is highly accurate. The proper operation of open-ended, type ii) digital evolution therefore requires the injection of artificial noise to introduce the modifications to structure over which selection can operate.

DP13: Noisy replication must be supported within the architecture, providing the basis for natural selection
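
A minimal sketch of what DP13 demands at the lowest level – an otherwise exact digital copy with artificial noise injected – might look like the following (the byte-array representation and all names are purely illustrative, not taken from the accompanying implementation):

import java.util.Random;

// Illustrative sketch of noisy replication: digital copying is exact by default,
// so variation must be injected deliberately, here as random bit flips.
public class NoisyReplicator {
    private static final Random RNG = new Random();

    static byte[] replicate(byte[] parent, double bitFlipProbability) {
        byte[] offspring = parent.clone(); // exact digital copy
        for (int i = 0; i < offspring.length; i++) {
            for (int bit = 0; bit < 8; bit++) {
                if (RNG.nextDouble() < bitFlipProbability) {
                    offspring[i] ^= (byte) (1 << bit); // injected 'thermodynamic' noise
                }
            }
        }
        return offspring;
    }

    public static void main(String[] args) {
        byte[] parent = "a simple ancestral structure".getBytes();
        byte[] child = replicate(parent, 0.001);
        System.out.println(new String(child)); // usually, but not always, identical
    }
}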

This is just one of the ways in which the domain of digital metabolism and reproduction differs from the biochemical domain. Since DNA is so finely adapted to a biochemical context it is likely that there exist more efficient ways of exploring possible digital metabolisms than the duplication of these processes in a digital physics. We should acknowledge that DNA, (as a phenotypic trait), is a product of evolution itself, and seek to provide a digital context within which equivalent innovation may occur.

In biochemical evolution, noise may introduce phenotypic changes which improve the evolvability[8] of the structure in the future – as is the case with the innovations of RNA, DNA and multicellularity – in these cases, the offspring may not only proliferate, but can also adapt more readily to future niches.

Examining mutation and radical novelty leads to a conception of evolution in which a structure like DNA is understood as a further phenotypic element of a biochemical system, implemented within our physics, which has an active role in metabolism and reproduction. Most importantly of all, it is the interaction of these biochemical systems which implements natural selection within the world’s ecosystems. There is no separation of systems for interpretation, evaluation or population management. No designer needs to specify phenotypic traits, templates for individuals or symbolic encodings.

““[S]election” is nothing other than the expressions of tendencies which inhere in the starting materials themselves, interacting with energy and the general environment, itself determined by general laws.” [Kenyon 1974]

EC researchers’ purely symbolic interpretation of DNA-encoding overlooks the complexity of its role, and the physical properties which underpin it.

“The term code script is…too narrow. The chromosomal structures are at the same time instrumental in bringing about the development they foreshadow. They are law-code and executive power - or, to use another simile, they are architect’s plan and builder’s craft - in one.”[Schrodinger 1944]

Genetics provides a mechanism for the production of a phenotype, not a mere representation of it.

“In spite of our impression that the genome of an organism provides a blueprint for its construction, we find on reflection that we have been deceived by the overwhelming generality, near universality of the mechanisms of genetic information processing…the means of interpreting genetic information are as vital to molecular biological systems as the information itself.” [Wills]

DNA ‘encoding’ employs only 4 bases[9], but this does not restrict its capacity to generate phenotypes. It can build proteins with novel functions through the “embedding of functional operations in the material field.” [Wills] In other words, the possibility of new functional gene loci and alleles in DNA-based organisms depends upon the fact that all strings of bases map to protein polymers made up of the twenty amino acids. In turn the activity of all possible polymers is defined by physical law – determined by low-level interactions. This space of possibilities has a vast range of reactive and catalytic properties ‘embedded’ within it.

This provides a limitless resource of complexes with different activities, which can in turn, interact with and modify pre-existing biochemical systems. The process implemented by DNA machinery builds the polymers, and the polymers have an associated activity. At no point are there symbols whose meaning must be interpreted. Both reproduction and metabolism are processes constituted by the interactions of many elements.

The final solution to this problem of constitution – which provides the component elements for all of the higher level structures so far – owes itself to the following insight, originally due to von Neumann (see Methodology).

“[S]elf-construction in macromolecular systems requires the existence of a reflexive relationship between structural components and the functional operations they perform to synthesize themselves.” [Wills]

Biochemical compounds can play the role of transformer and the role of transformed. This is essential to the possibility of reproduction and metabolism. There is an essential recursiveness to the process of reproduction. The structure of a creature must be able to be constructed (as offspring) and to construct (as parent). This reflexivity exists also for metabolism. The structure of a creature is acted on by its environment, and its phenotype determines how it will respond, by acting on its environment – that is the essence of a metabolic strategy[10].

The link between biochemistry and computer science is clarified by Koza.

“[I]n chemical reactions involving molecules, entities interact in such a way that the entity can serve as both the entity executing a reaction (the program) and the entity being acted upon by the reaction (the data).” [Koza 1994]

DP14: The system will be built up of objects which may act both as data and as function.
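
A minimal sketch of the requirement stated in DP14 (the interface and class names here are invented for illustration and are not those of the accompanying implementation):

// Illustrative sketch of DP14: the same kind of object can be operated upon
// (playing the role of data) and can operate upon others (playing the role
// of function). The names below are invented.
interface DataFunction {
    DataFunction applyTo(DataFunction input);
}

// A trivial example: an 'identity' element which returns whatever it is
// applied to, and which can equally well be handed to another DataFunction
// as its input.
class Identity implements DataFunction {
    public DataFunction applyTo(DataFunction input) {
        return input;
    }
}

public class ReflexivityDemo {
    public static void main(String[] args) {
        DataFunction identity = new Identity();
        // The same object acts as function (the transformer)...
        DataFunction result = identity.applyTo(identity);
        // ...and as data (the thing transformed).
        System.out.println(result == identity); // true
    }
}

The recursive possibility described in the text arises because the output of such an application is itself an object of the same kind, and so may in turn transform or be transformed.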

We have come full circle. The optimisation task which was first specified in terms of the available data and the available operations has led us to a solution in terms of a set of data and operations. The minimal requirement, on this analysis, is the possibility of operations which modify the data which determine the operations. This recursive process, operating in a noisy environment has been responsible for the natural innovations we see around us, and could be responsible for artificial innovation in a network ecosystem. To understand how major transitions give rise to creatures implementing entirely new metabolic and reproductive strategies, we must acknowledge that the space of possibility which is being explored through evolution is that of possible stable and self-replicating physical structure, not a parameter space of anticipated phenotypic traits specified by interpreted symbol strings for a template individual.

The data physics should operate at the lowest possible level, allowing any computationally well-defined set of operations to be possible as part of an agent’s strategy, and avoiding artificial limitations imposed by our conceptions. If we provide fundamental data structures and fundamental operations which may be carried out on them, the way in which these interactions will be employed remains open. With a “certain ‘sufficient set’ of basic procedures [a computer] can do basically anything any other computer can do” [Feynman 1999]. Existing EC approaches could therefore be a subset of the possible agent implementations. However, if the components and their interactions have the appropriate recursive relationship they may provide for the open-ended evolution of different metabolic and reproductive processes, achieving type ii) digital evolution.

Section 5 - Data Physics and other Approaches

At this point it is worth mentioning some existing work in the fields of Artificial Life and computer science, to identify its commonalities with, and differences from, the present approach.

Computational resource allocation features in the work of many computer science researchers [Miller and Drexler 1988a/b] [Huberman 1988] [Baum 1998] [Kearney 2000] [Olafsson 1996] [Ygge and Akkermans 2000] [Czajkowski and Eicken 1998] [Wellman 1988] [Wright 2000]. The ‘agoric systems’ approach [Miller and Drexler 1988b] is a design philosophy for software systems which employ a marketplace (Greek ‘agora’) to determine the value of resources through interactions between individuals. Implementations of such systems focus on the differential allocation of resources to programs with pre-determined roles, and are usually implemented through a centralised marketplace.

The crucial difference between the above approaches to resource allocation and the approach adopted here is that the agents and the management algorithm of the system are all constituted through the local interactions of data objects – a physics-based approach. The guiding principles of the physics-based approach to open-ended software evolution are…

- Constituents and their local interactions specify the physics.

- Physics embodies all functions - representational, metabolic, reproductive.

- Complexes emerge from locally negotiated interactions.

- Fitness is implicit and context-dependent.

John von Neumann, Chris Langton, Walter Fontana and Tom Ray have all undertaken experiments which conform to these guiding principles. John von Neumann designed a 2-dimensional CA to implement the self-reproduction of a complex structure using only locally negotiated transactions between automata [Koza 1994]. Within his 29-state CA universe a specific configuration of many millions of cells would be able to reproduce itself, simply according to the local update rules of the CA cells. Although the replicator itself is extremely fragile to modification, threatening its evolvability, it has been argued by some that this experiment was an attempt to answer the definitive question of open-ended evolution – ‘How can a machine construct something more complex than itself?’ [McMullin 2000]. Chris Langton presented a much simpler CA physics within which simple loop structures were able to self-replicate. Walter Fontana has focused on stable self-replicating autocatalytic sets [Fontana and Buss]. His artificial chemistries employ the lambda calculus to provide a set of chemical species which may operate recursively on one another. Pietro Speroni di Fenizio[11] has extended these results by providing a spatial context, leading to the formation of isolated cell-like structures from random initial configurations. Tom Ray’s Tierra also fulfils all of the above criteria, and is a much closer cousin of this work. Tierra consisted of creatures whose bit-string phenotype was interpreted as assembler code, and whose execution was able to modify the bit strings of their own and their offspring’s phenotypes, hence modifying their execution. Their neighbours’ code could also be executed, leading to complex interdependencies, parasitism and sociality.

Underlying these common principles is a very great contrast. A bottom-up, locally negotiated, fully distributed, recursive, physics-based approach in artificial life is far from conventional as a solution to an optimisation problem. Maes identifies both the benefits and the problems of exploiting bottom-up solutions.

“It has become clear that in complex and unpredictable environments, flexible and robust functionality is difficult to obtain in a “top-down”, programmed, hierarchical architecture. A more successful strategy is to design a functionality in a bottom-up fashion, by making it emerge as a global side-effect of some intensive, local interactions among the components that make up the organism… Disadvantages of systems with emergent functionality are that the resulting behavior selection is less predictable, and related to this, that it is less understood how to obtain the desired global functionality.” [Maes 1991]

As Kearney points out, creating a complex system with complex dynamics is one thing. Making it functional is something else.

“Spontaneously appearing self-sustaining patterns can easily be achieved in simple computational models of dynamical systems involving repeated non-linear interaction of ‘agents’. The patterns are the attractors of the systems. Examples include the ‘game of life’, Kauffman networks, and ‘sandpile’ models. These patterns do not play any functional role, or serve any useful purpose, they just are.” [Kearney 2000]

The difference between such experiments and the approach taken here mirrors the difference between an ecosystem and an economic system mentioned earlier. Economies are for something. Ecosystems are not. For optimisation problems, we need a system in which structure emerges to serve our needs. The structures arising out of ecosystem simulations exist only for themselves.

Ray confronts these problems when trying to exploit Tierran innovation. He hopes to identify operations and strategies within the Tierra system which can be employed in human software design. However, the problems which are being addressed by Tierran creatures are simply not the same problems as those faced by network software. The system does not enforce a correlation between survival and the usefulness of the phenotype to human ends.

The plants and animals which we crop in the conventional sense are engaged in metabolic strategies within the same physical context as our own. As a result, the carbohydrates in wheat are of energetic value to us, and domesticated horses can aid us in our physical movement through 3D space. If the innovations of our data organisms are to be relevant to the problems faced by our software, then they must be engaged in processes with a common context, and hence common goals. Tierran creatures are looking out for themselves in an environment of binary and assembler which poses very different tasks to the high-level operations required of software.

Uncovering and elucidating Tierran innovations is also fraught with difficulty. Although an assembler may be the most intuitive representation for the computer, it is incredibly difficult for a human to extract the strategy of a creature by following the flow of program control, a problem which worsens with the complexity of the strategy. This extraction stage would be required for human beings to be able to integrate the same strategy into their own software design.

In conventional evolutionary computation there is no difficulty in enforcing the correlation between survival and delivering the solution to a task. The clearest example of this approach within the arena of program design is John Koza’s Genetic Programming approach [Koza 1994, 1999][12]. As well as optimising programs and other tree-structured phenotypes for specific optimisation problems, he shares with Ray the desire to discover modules which may be helpfully employed as part of multiple different solutions for a given domain – his ‘Automatically Defined Functions’ or ADFs. The problem is that the convergence of EC populations on a solution is achieved at a great cost, as we have seen. In order to detail an ‘adaptive plan’ to manipulate the population, the task itself must be defined, a measure of success must be provided, and a parameterised template for the individuals in the population detailed. Although results can be achieved, human participation remains a vital part of the process.

The Data Physics system attempts to go further still, and to confront digital organisms with actual real-time software problems. To be truly functional, network organisms must manipulate structured data – objects, not just binary strings. Data objects are capable of representing complex information categories such as ‘House’, ‘Car’, ‘Webpage’, allowing human users to introduce and access the structured information which agents are manipulating. Operations should be able to be introduced in real time, enabling the exchange, transformation and examination of items of structured data. Since the organisms directly address the problem, no extraction of complex strategies is required. Since the problems they have to solve for their own survival are the actual problems we want to solve, there is a strong correlation between survival and human usefulness.

The following design principles ensure that network organisms are embedded in a world in which they may be confronted by real time human-specified problems.

DP15: The Data physics should support structured data components, allowing human relevant information to be manipulated.

DP16: The Data physics should support real time introduction of new instances and new types of data structures and operations.

Section 6 – Conclusion and Future Work

The challenge we face is to build a complex network ecosystem such that the dynamic equilibrium of that system satisfies our requirements as network users. In order to achieve this, it is claimed that the system should be built around the ‘design principles’ identified from natural optimising systems in order to explore, exploit and subdivide niches. The argument we have presented in favour of a ‘Data Physics’, in parallel with the derivation of the design principles, is as follows.

1) The range of well-defined behaviours of the network system is determined by the set of possible operations on available data.

2) The range of possible behaviours is further constrained by the limited resources of the network.

3) The optimal behaviour of the system is determined by a complex of the individual preferences of users.

4) The equilibrium of an economic system implements decentralised planning to assign limited resources to multiple possible operations in order to satisfy the individual preferences of multiple distributed parties, assuming that the participants are rational. [problem features described in 1), 2) and 3)]

5) The selection pressures imposed by limited resources in natural selective systems cause participants to converge on rational behaviour for their niche, assuming open-ended evolution.

6) Open-ended evolution is possible given organisms constituted by components whose structure and activity have a reflexive relationship, (structure determines activity, and activity determines structure), engaging in noisy reproduction.

7) The structure and activity of Data and Operations in a computational system have a reflexive relationship. Data can implement functions and be operated upon by functions. Noisy reproduction may be implemented in a computer – reproduction is trivial and the noise may be introduced.

8) Self-reproducing, metabolising data structures may be constructed from data and functions, which evolve open-endedly to achieve niche-rationality. [from 5), 6) and 7)]

9) If users provide economic niches for network organisms who satisfy their requirements, and the existing individuals demonstrate a sufficient degree of evolvability[13], then the satisfactory behaviour of the network system may be determined by the action of network organisms. [from 4) and 8)]

If the arguments presented here are correct, an architecture which satisfies these design criteria, populated with simple ancestors, would lead to service delivery as an implication of the system’s dynamics.

In the approach proposed, network users offer agents the opportunity to exchange specified data for the currency the agents require for their survival. These offers generate niches for agents to occupy. Agents (as data complexes) can thus maintain a stable ‘metabolism’ only by carrying out the operations which contribute to the satisfaction of the users’ requirements.

The intention of the ‘Data Physics’ approach is to provide the potential for novel forms of interdependency, and hence for novel productive roles to arise from the interactions of data objects. Strategies which perform such a role can be amplified by their own success within a system in which resources are allocated according to productivity. In this way, the problems arising from human participation in the agent design process are bypassed.

The dynamics of niche formation provides a basis for task decomposition, based on mutual exchange via a common currency. Competition between network organisms under limited resources provides a selection pressure for resource-optimisation in the completion of each role, and collaboration between network organisms offers synergistic benefits from the sharing of skills and resources. In some cases, multiple agents will participate in a single production chain to deliver the appropriate data; in other cases agents will negotiate stable niches which contribute to multiple production chains. The interdependency of agents will demand the formation of symbiotic relationships in which resources are shared in order to stabilise the complex of agents.

In this way, the rational behaviour of locally interacting agents provides a basis for both increasing productivity and credit assignment. The connectivity of ‘small world’ reference networks has been used to define localities, allowing agent neighbourhoods to form according to the affinity of individuals. Locally-defined interactions assist with scalability through avoiding the need for centralised information to be maintained. Since locality is correlated with niche, this correlation may also be used to advantage when undertaking searches for objects or seeking specific niches, assisting still further with the scalability of the system.

There remain many important issues to explore in order to capitalise on these ideas. The first is the issue of completeness. Even if the system could identify the most effective sequence of operations to meet users’ requirements, it would still depend upon the primitives provided in the system. The homogeneous agent experiment described in Appendix B bypasses one of the main problems – the selection of a computationally complete set of reflexively defined data and operations. A complete set should ensure that any well-defined flow of control could be constructed as an agent. Early examination of computational completeness suggests that an object-level assembler with conditional and unconditional branch, skip and jump instructions could be employed. This assembler would address data and instructions by index in a list. Modifications, on cloning and mutation, to the order of this list and to the elements of which it is made up could lead to new metabolic and reproductive strategies, in a similar way to the assembler modifications of the Tierran ancestor in Ray’s work.
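
As a sketch of what such an object-level assembler might look like (the instruction set and class names are invented for illustration, and this is not the assembler proposed above), consider a minimal interpreter in which instructions and data share a single indexed list, so that mutating or reordering the list can alter either the program or its operands:

import java.util.*;

// Illustrative sketch only: a minimal 'object-level assembler' in which
// instructions and data occupy one indexed list.
public class TinyObjectAssembler {

    // An entry in the list is either an instruction or a data object.
    interface Cell {}

    static class DataCell implements Cell {
        Object value;
        DataCell(Object value) { this.value = value; }
    }

    // Instructions address other cells by index in the same list.
    static class Instruction implements Cell {
        enum Op { JUMP, SKIP_IF_NULL, COPY, HALT }
        final Op op;
        final int a, b; // operand indices (meaning depends on op)
        Instruction(Op op, int a, int b) { this.op = op; this.a = a; this.b = b; }
    }

    static void run(List<Cell> cells, int maxSteps) {
        int pc = 0; // program counter: an index into the same list
        for (int step = 0; step < maxSteps && pc < cells.size(); step++) {
            Cell cell = cells.get(pc);
            if (!(cell instanceof Instruction)) { pc++; continue; } // data cells are stepped over
            Instruction ins = (Instruction) cell;
            switch (ins.op) {
                case JUMP:                               // unconditional branch to index a
                    pc = ins.a; break;
                case SKIP_IF_NULL:                       // conditional skip: test data at index a
                    DataCell test = (DataCell) cells.get(ins.a);
                    pc += (test.value == null) ? 2 : 1; break;
                case COPY:                               // copy data value from index a to index b
                    DataCell src = (DataCell) cells.get(ins.a);
                    DataCell dst = (DataCell) cells.get(ins.b);
                    dst.value = src.value; pc++; break;
                case HALT:
                    return;
            }
        }
    }

    public static void main(String[] args) {
        List<Cell> cells = new ArrayList<>(Arrays.<Cell>asList(
            new Instruction(Instruction.Op.COPY, 4, 5),          // 0: copy cell 4 into cell 5
            new Instruction(Instruction.Op.SKIP_IF_NULL, 5, 0),  // 1: skip next if cell 5 is still empty
            new Instruction(Instruction.Op.HALT, 0, 0),          // 2: halt
            new Instruction(Instruction.Op.JUMP, 0, 0),          // 3: otherwise try again
            new DataCell("payload"),                              // 4: source datum
            new DataCell(null)                                    // 5: destination datum
        ));
        run(cells, 100);
        System.out.println(((DataCell) cells.get(5)).value); // prints "payload"
    }
}

Because the same list is both program and data, mutations applied on cloning can change what the offspring does as well as what it contains, which is the property the text above requires of the primitive set.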

Computational completeness is not the whole story, however. If we hope to generate innovative new agent phenotypes, this depends upon their accessibility by mutation from existing network organisms. Significant research is needed to establish a set of primitives with appropriate properties. Up until this point, it has been assumed that agents have the power to adapt to fill new niches in the system. Whether this is true or not depends upon the space of possible configurations which they are exploring – whether such solutions exist within the space, and whether a sustainable trajectory exists by which an ancestral line can adapt to reach them.

In addition to the issues of accessibility, examining the transferability of strategic elements between problems would be interesting[14]. As Maynard-Smith and Szathmary comment, “[w]e may be able to breed cows that give more milk, but we could not breed pigs that fly, or horses that talk: there would be no promising variants that we could select and breed from…”. However, general solutions exist in the natural world, such as the four-limbed body plan, and the homeotic gene network which may be modified to exploit many different niches. If such general agent plans could emerge from the open-ended evolutionary process, this could assist in achieving the evolvability required for the system to respond appropriately.

The achievement of innovation at the agent level or the agent community level depends upon the possibility of a stable metabolising population, to provide the raw material for variation. Populations in real ecosystems exhibit Lotka-Volterra oscillations and their stability is sensitive to human interventions. The interdependence of niches implies other problems when trying to establish stable mutual relationships.

“…it is hard to envision evolving because of the chicken and egg problem: why should one agent create information before the other agent can use it? How can the other agent learn to use it before it has been created?”[Baum 1998]

Issues of the transitions between dynamic equilibria, and their stability under noise and in the absence of absolute rationality may be examined in upcoming work. It is important to establish that such features will not disturb the convergence on desired functionality which we hope to achieve in the network ecosystem.

The theoretical work presented in this paper is intended as a step towards a self-organising system for computational resource allocation. As a practical continuation of this work, a Java implementation has been provided to accompany the project. It is detailed in full in the associated appendices. Appendix A describes the way in which the implementation fulfils the design principles derived above. Appendix B contains the specification of an early experiment in the research programme. Finally in Appendix C the complete architectural documentation and commented code are provided.

Appendix A – An Architectural specification which conforms to the stated Design Principles

The following design principles have been derived from the examination of natural optimising systems and from the requirements of a network service-delivery ecosystem:

DP1: The degrees of freedom available will be defined within the architecture. Given a set of data and a set of operations, a set of valid {data:operation} pairs will be defined.

DP2: The satisfaction of a user’s requests will depend upon the contribution of sufficient computational resources to the network.

DP3: The system will not employ interpreted representations.

DP4: The system will be decentralised in its operation.

DP5: The system’s well-functioning will not depend on a deterministic world.

DP6: The system will implement a distributed selection algorithm driven by local demand to allocate network resources to its subsystems.

DP7: The system will employ a conserved currency to assist in the allocation of resources.

DP8: All resources must be owned - ownership implies control of access to a resource.

DP9: Resources will be conserved: they may be transferred and reduced by the transactions network organisms are capable of, but not increased.

DP10: The maintenance of structure must have an associated cost.

DP11: Network organisms will be able to form relationships, and connect together to form small world networks.

DP12: The form of agent-organisms, their interactions and exchanges and hence their strategy for self-maintenance will remain open for exploration.

DP13: Noisy replication must be supported within the architecture, providing the basis for natural selection

DP14: The system will be built up of objects which may act both as data and as function.

DP15: The Data physics should support structured data components, allowing human relevant information to be manipulated.

DP16: The Data physics should support real time introduction of new instances and new types of data structures and operations.

In this appendix, a Java architecture is detailed for the implementation of a system meeting the above design criteria (see Appendix C, Architectural Specification), which has been deliberately conceived to be network-ready, fully distributed and scalable. The satisfaction of the design principles by this architecture is clarified, and an experiment is detailed which will be used as a final proof of concept. This experiment examines the dynamics of niche determination for otherwise homogeneous agents.

In order to exploit the functionality of the architecture, the programmer must voluntarily conform to certain rules in the design of data objects and operations, as specified by the design principles. The architecture explicitly provides objects whose behaviour supports programmers in this effort, if they are used in accordance with their specification. The strict enforcement of specific rules as part of the Java Virtual Machine, security strategies such as public key encryption, and JVM Resource Tracking have been demonstrated elsewhere [Sun] [Czajkowski 1998]. It has been viewed as unnecessary to implement strict legislative controls, since the focus of the research programme is to establish the behaviour which may arise from the dynamics of the system.

The core classes of the architecture are Commodity and Competence – these represent data and functions respectively. Programmers may create their own Commodity and Competence objects which maintain and manipulate structured data of arbitrary complexity, satisfying DP15.

All objects in the system must implement Commodity. The Commodity interface combines and extends two sub-interfaces – MemoryUser and Ownable.

The MemoryUser interface is a set of methods which are required of all objects which employ Memory as a network resource, requiring payments to be made to a ResourceProvider which owns the memory. This contributes to the satisfaction of DP8, DP9 and DP10. To implement this interface, methods must be provided which…

- allow a ResourceProvider to be specified on construction, else the object will be destroyed

- determine the amount of memory occupied by the object at any time, allowing the ResourceProvider to calculate the amount of rental due

- allow the ResourceProvider to identify the ResourceBuyer object responsible for rental payment

- allow the object to be destroyed when rental is unpaid, or when it is no longer required by its Owner

The Ownable interface is a set of methods which are required of all objects which may have an owner. To implement this interface, methods must be provided which…

- set the Owner assigned to the Ownable

- get the Owner assigned to the Ownable, or destroy the object if no owner can be found

- get an Owner of a specified type from further up the ownership hierarchy, or destroy the object if no owner of that type can be found

A Commodity must also implement two methods providing for noisy reproduction.

- A clone method, which duplicates the object

- A mutate method which randomly modifies parameters in the object
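
The interfaces described above might be paraphrased in Java roughly as follows. The method names and signatures here are an illustrative reconstruction of the behaviour just described, not a copy of the definitions in Appendix C:

// Illustrative paraphrase only; see Appendix C for the actual definitions.
interface Owner {}

// An Owner which also implements ResourceBuyer can handle rental payments
// propagating up the ownership hierarchy.
interface ResourceBuyer {
    boolean pay(long amount); // returns false if the buyer cannot meet the payment
}

interface MemoryUser {
    // A ResourceProvider must be supplied on construction (a constructor-level
    // requirement, so it cannot be expressed in the interface itself).
    long memoryOccupied();       // lets the ResourceProvider calculate the rent due
    ResourceBuyer rentalPayer(); // identifies who is responsible for rental payment
    void destroy();              // called when rent is unpaid or the object is unwanted
}

interface Ownable {
    void setOwner(Owner owner);
    Owner getOwner();                   // destroy the object if no owner can be found
    Owner getOwner(Class<?> ownerType); // search further up the ownership hierarchy
}

// A Commodity is both a memory user and an ownable object, and supports
// noisy reproduction through cloning and mutation.
interface Commodity extends MemoryUser, Ownable {
    Commodity duplicate(); // the 'clone' method described above
    void mutate();         // randomly modifies parameters in the object
}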

The AbstractCommodity class provides an implementation of all of these methods, providing a convenient way of constructing objects which conform to the correct behaviour. It is recommended that all objects within the system are subclasses of this object. The lifecycle of an AbstractCommodity proceeds as follows.

- Creation: The proper construction of the object is only possible by a specific kind of execution thread – a GenericProcess. This is a bespoke Thread which maintains a reference to its own Owner and ResourceProvider. The AbstractCommodity will be assigned to the Owner and ResourceProvider of the calling Thread, ensuring that all objects are owned and network resources are accessible to the object. If either of these assignments fails, the object is destroyed.

- Existence: The ResourceProvider requests a payment for rental of the resources employed by the AbstractCommodity. The call for payment is passed up the chain of ownership until an Owner is found which implements ResourceBuyer and can handle the payment. If no Owner is found in order to pay the advance rental, the object is destroyed. This payment is requested in advance for a rental period, ensuring that no system resources come for free. Payment is requested again after every rental period has elapsed until the object is destroyed.

- Cloning: If the AbstractCommodity is cloned, it is assigned to the Owner of the thread which called the cloning method. Subclasses of AbstractCommodity are responsible for copying their specific state if required.

- Mutation: Subclasses of AbstractCommodity are responsible for determining what object-specific changes take place when this method is called. Along with the cloning method above, this meets the requirements of DP13.

- Destruction: The object is destroyed, all resources are freed and all references maintained by other objects are removed.
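
The selective consequence of the Existence stage can be illustrated with a self-contained toy (all names and figures are invented): rent is demanded in advance each period, the request is passed up the ownership chain until a holder of currency can pay, and anything whose chain cannot pay is destroyed.

import java.util.*;

// Illustrative sketch of advance rent collection: objects whose ownership
// chain cannot produce the payment are destroyed, so only complexes which
// earn enough currency persist.
public class RentCollectionSketch {

    static class Tenant {
        final String name;
        long balance;        // currency held (only 'agents' hold currency here)
        Tenant owner;        // next link up the ownership chain, or null
        long memoryOccupied;
        boolean alive = true;
        Tenant(String name, long balance, Tenant owner, long memoryOccupied) {
            this.name = name; this.balance = balance;
            this.owner = owner; this.memoryOccupied = memoryOccupied;
        }
        // Pass the payment request up the chain until someone can pay.
        boolean payRent(long amount) {
            for (Tenant t = this; t != null; t = t.owner) {
                if (t.balance >= amount) { t.balance -= amount; return true; }
            }
            return false;
        }
    }

    public static void main(String[] args) {
        long rentPerUnit = 1;
        Tenant richAgent = new Tenant("rich agent", 100, null, 10);
        Tenant poorAgent = new Tenant("poor agent", 3, null, 10);
        Tenant property  = new Tenant("property of poor agent", 0, poorAgent, 5);
        List<Tenant> tenants = new ArrayList<>(Arrays.asList(richAgent, poorAgent, property));

        for (int period = 1; period <= 3; period++) {
            for (Tenant t : tenants) {
                if (t.alive && !t.payRent(t.memoryOccupied * rentPerUnit)) {
                    t.alive = false; // advance rent unpaid: the object is destroyed
                    System.out.println("Period " + period + ": destroyed " + t.name);
                }
            }
        }
    }
}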

Competence is itself a subclass of AbstractCommodity – that is to say, it is a Commodity in its own right. The simplest form of reflexivity is achieved in this way – Competences may transform Competences. More complex forms of reflexivity take place when the behaviour of the Competence is modified by the state of the Commodities to which it has access, and its behaviour modifies the state of those Commodities. Both forms of reflexivity contribute to the satisfaction of DP14. Competences operate on Commodities through a method called transform(), which takes a Commodity as an input argument and returns a Commodity as output.

Since the method is entirely general (it accepts any Commodity), metadata must be provided by the Competence which describes the Commodities which it can transform; this is a requirement for the satisfaction of DP1. This metadata is provided through two methods, canTransform() and willTransform(). The canTransform() method details the criteria which Commodities must meet to be suitable substrate for the operation. However, the approach adopted does not undermine the commitment of DP3. This ‘description’ is not encoded using interpreted representations. Instead, a test is returned which will return true iff the criterion is satisfied. This test is itself a Competence, which transforms a Commodity into a BooleanCommodity representing true or false. The willTransform() method accepts a CommodityTest description of an input Commodity in the same way and returns a CommodityTest description of the output which would result. At no point has a representation been employed which must be interpreted. Instead, active objects are returned which may be executed to perform a categorisation. There is no requirement for the metadata to be processed or reasoned over before passing a Commodity as input to a Competence. There is no strict contracting within the data physics. Non-deterministic methods are perfectly acceptable, although it is good practice to implement canTransform() and willTransform() with minimal commitments. The stability of an agent’s strategy may be maintained by an operation which works right ‘often enough’; this supports the requirements of DP5.

The metadata provided may assist agents in achieving appropriate combinations of data and operations, although in fact, the only consequence of passing an inappropriate Commodity into a Competence is the waste of processor time.
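
Paraphrased in Java, the shape of the Competence interface described above is roughly the following (again an illustrative reconstruction rather than the Appendix C definitions, and with the MemoryUser and Ownable aspects sketched earlier omitted for brevity). The point to note is that canTransform() hands back an executable test – itself a Competence – rather than a description that must be interpreted:

// Illustrative paraphrase only; see Appendix C for the actual definitions.
interface Commodity { }

interface BooleanCommodity extends Commodity {
    boolean value();
}

interface Competence extends Commodity {
    Commodity transform(Commodity input);   // entirely general: any Commodity in, a Commodity out
    CommodityTest canTransform();           // an executable test for acceptable substrates
    CommodityTest willTransform(CommodityTest inputDescription); // a test describing the output
}

// A CommodityTest is a Competence specialised to return a BooleanCommodity,
// so 'descriptions' are run rather than reasoned over.
interface CommodityTest extends Competence {
    @Override
    BooleanCommodity transform(Commodity input);
}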

This general interface to data and operations on data provides network organisms with the ability to develop strategies which are entirely general, since all Competences and Commodities interact in the same way. The satisfaction of DP16 depends upon this generality of operation, since an agent need not anticipate a Commodity or Competence in order to be able to use it[15]. Note also that if any objects are created as part of the execution of a Competence, they are automatically assigned to the Owner of the GenericProcess which is executing the code.

The next most important core classes are Agents and Users. These represent the network organisms and the user avatars which interact with them.

Agents are subclasses of AbstractCommodity which implement Owner and ResourceBuyer. Since they are AbstractCommodities, they must therefore have an Owner themselves. Since they implement ResourceBuyer, they are capable of paying ResourceProviders when payment requests propagate from below them in the ownership hierarchy. After they have been provided with their first injection of currency, they may not pass these requests to their Owner. Once their currency is exhausted, they are no longer able to pay for the resources used by them and their property, helping to ensure DP7. As a result, they will be destroyed by the ResourceProvider. This provides the basis for natural selection, since the objects below the Agent in the hierarchy must earn enough currency to meet the combined rental due to their ResourceProvider, or they will cease to exist. The self-maintaining complex is defined by the use of the Agent as a shared energy store. The means by which the complex of objects achieves this self-maintaining state is underdetermined by the architecture. This leaves Agents open to explore the set of computationally possible self-maintaining and self-reproducing complexes, satisfying DP12. The local ‘fitness function’ will depend upon the agent’s structure and its local environment, each of which is open to determination by the interactions of data objects. This leads to the satisfaction of DP6.

During construction, agents are provided with a property CommodityList, a reference CommodityList and a ProcessCommodity.

The property list and reference list are both instances of CommodityList. This is a Commodity which implements ReferenceList, i.e. it maintains an index of references to other commodities. A reference to all Commodities directly owned by the agent is maintained automatically by the property list. The reference list may be filled with any number of references to other objects in the system, depending on the activity of the agent. Referencers like CommodityLists provide the basis for the development of small world networks, which help to satisfy DP4 and DP11. Methods are provided for agents to search their neighbourhood for specific objects, and to carry out recursive searches spanning multiple hops through the directed graph of references. One way that an agent may orient itself is by requesting a random reference from its ResourceProvider. Alternatively, its Owner may provide an appropriate starting point after constructing it. Access control to prevent ‘predatory’ behaviour can be managed through Referencers. If an agent does not wish an object to be accessible externally, it need not include it in its externally accessible reference list, and can retain it in its private property list instead.
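
A sketch of the kind of bounded neighbourhood search this makes possible is given below. The graph representation and names are invented, and a breadth-first traversal is shown for simplicity; the real mechanism works over Referencer objects as described above and is given in Appendix C.

import java.util.*;

// Illustrative sketch only: a bounded, breadth-first search over a directed
// graph of references, collecting all nodes of a requested kind within a
// fixed number of hops of a starting node.
public class NeighbourhoodSearch {

    static class Node {
        final String kind; // e.g. "Agent", "CommodityList"
        final List<Node> references = new ArrayList<>();
        Node(String kind) { this.kind = kind; }
    }

    static Set<Node> findWithin(Node start, String kind, int maxHops) {
        Set<Node> found = new LinkedHashSet<>();
        Set<Node> visited = new HashSet<>();
        List<Node> frontier = new ArrayList<>();
        frontier.add(start);
        visited.add(start);
        for (int hop = 0; hop <= maxHops && !frontier.isEmpty(); hop++) {
            List<Node> next = new ArrayList<>();
            for (Node node : frontier) {
                if (node.kind.equals(kind)) found.add(node); // match at this distance
                for (Node neighbour : node.references) {
                    if (visited.add(neighbour)) next.add(neighbour);
                }
            }
            frontier = next; // advance one hop
        }
        return found;
    }

    public static void main(String[] args) {
        Node agentA = new Node("Agent");
        Node list = new Node("CommodityList");
        Node agentB = new Node("Agent");
        agentA.references.add(list);
        list.references.add(agentB);
        // Within two hops of agentA: agentA itself and agentB.
        System.out.println(findWithin(agentA, "Agent", 2).size()); // prints 2
    }
}

Because locality in the reference graph is correlated with niche, a search of this bounded kind tends to reach relevant objects without any centralised index, which is the scalability argument made in Section 3.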

The ProcessCommodity is a Commodity which wraps an instance of GenericProcess. The ProcessCommodity is acquired from a ProcessProvider (a specific implementation of ResourceProvider). The ProcessProvider continues to charge for the GenericProcess so long as the ProcessCommodity has not been destroyed. The GenericProcess is constructed ready to run the Agent’s code, and is owned by the Agent, ensuring that all Commodities constructed by running the Agent’s code are Owned by the Agent (see AbstractCommodity:Creation above). Subclasses of Agent are expected to implement their own behaviour in the run() method.

The User class is a subclass of Agent. Users technically have Owners like all other Commodities, but in practice they are owned, as a matter of convenience, by the App which configures the simulation. The amount of currency which a User can access is unlimited. To ensure conformity to DP7, it is the responsibility of the designer of the App to ensure that the behaviour specified in the run() method simulates the realistic behaviour of a network user. A realistic network user would have a certain income of network currency, acquired by renting their own system resources to Agents. This would be achieved by the use of a ResourceProvider to mediate between the User and the Agents. In fact the ResourceProvider would probably be an agent itself, trying to maximise income for the user, and hence earn income itself. The income earned could then be employed to provide incentives to Agents for service delivery to the user. The amount of currency the user could spend would be moderated by this constraint, as secured by DP7 – it would be no more than the value of the resources the user submitted, hence satisfying DP2.

[Figure: CommodityGraph visualisation of a test run in which a single User has introduced multiple autonomous agents. Ownership relationships are drawn as red (grey) edges, relations of reference as black edges.]

The App is responsible for configuring the simulation and monitoring its progress. Monitoring of the system is simplified by the CommodityObserver interface: a fixed CommodityObserver is able to receive all CommodityEvents issued by objects in the system. The events are:

|Class of Event                             |Notifies observer of…                                  |
|CommodityCreatedEvent                      |creation of a new Commodity                            |
|CommodityDestroyedEvent                    |destruction of an existing Commodity                   |
|Ownable.OwnerChangedEvent                  |change of ownership of an existing Commodity           |
|ResourceUser.ResourceProviderChangedEvent  |change of resource provision for an existing Commodity |
|Referencer.ReferenceAddedEvent             |referencer added a new reference                       |
|Referencer.ReferenceRemovedEvent           |referencer removed an existing reference               |

These events may be used to maintain detailed information about the state of objects and their relationships with others. The CommodityGraph class uses these events to maintain a directed cyclic graph indicating the relationships between objects within the system. The diagram above shows the connected graph representation acquired during a test run of the architecture, in which a single User has introduced multiple autonomous agents. The ownership relationships between objects are represented as red (grey) edges in the graph, whilst relations of reference are denoted by black edges. A spring-balancer algorithm is employed to map the connected graph to a 2d plane, in which spatial neighbourhood is correlated with connectivity. This test run created multiple agents, each of which was initialised with a reference to its predecessor’s reference list. Each agent’s code simply called a depth-first search to a depth of 5 hops and added the references found in the search to its own reference list. As a result of this intensive cross-referencing, a cluster of the agents’ reference CommodityLists (marked LC for ‘list commodity’) is beginning to form.
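
A sketch of how an observer might maintain such a graph from the event stream is given below. The handler names and the Edge representation are assumptions; the real CommodityGraph is driven by the CommodityEvent classes listed in the table above.

    import java.util.HashSet;
    import java.util.Set;

    class GraphObserverSketch {

        enum EdgeKind { OWNERSHIP, REFERENCE }

        record Edge(Object from, Object to, EdgeKind kind) { }

        static class CommodityGraph {
            private final Set<Object> nodes = new HashSet<>();
            private final Set<Edge> edges = new HashSet<>();

            // Invoked on a creation event: add the node and its ownership edge.
            void onCreated(Object commodity, Object owner) {
                nodes.add(commodity);
                if (owner != null) edges.add(new Edge(owner, commodity, EdgeKind.OWNERSHIP));
            }

            // Invoked on a destruction event: remove the node and any hanging edges.
            void onDestroyed(Object commodity) {
                nodes.remove(commodity);
                edges.removeIf(e -> e.from().equals(commodity) || e.to().equals(commodity));
            }

            // Invoked on reference added/removed events.
            void onReferenceAdded(Object referencer, Object target) {
                edges.add(new Edge(referencer, target, EdgeKind.REFERENCE));
            }
            void onReferenceRemoved(Object referencer, Object target) {
                edges.remove(new Edge(referencer, target, EdgeKind.REFERENCE));
            }
        }
    }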

Trials of architectural stability have continually created and destroyed agents of this kind over a 48-hour period, to confirm the stability of memory allocation, thread allocation and the interaction of parallel asynchronous threads. Monitoring CommodityDestroyedEvents allowed hanging references to destroyed data objects to be automatically cleaned up – necessary because the Java virtual machine keeps in memory every object to which a reference remains, employing a centralised garbage collection system. In a bespoke virtual machine without garbage collection, this cleanup could be abandoned, removing the need for any centralised records of data objects. The architecture would then not depend on an event model for its proper functioning, although the event model would remain useful for monitoring system dynamics for research purposes.

Appendix B – An Integer Processing Application

In the first proposed example App, there are three kinds of User.

‘Buyer’ users create a niche for the supply of a specific Commodity by offering a ‘Buy’ competence to their neighbours. This competence transforms a Commodity which meets their requirements into a CurrencyCommodity which the supplying Agent may cash for its own use. The CurrencyCommodity class enables the transfer of currency between individuals. On creation it acquires payment from the current owner. Its ownership may then be transferred, taking the currency with it. On destruction, the value embedded within it is added to the account of its current Owner. This is one of the aspects supporting DP7.
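
The lifecycle of a CurrencyCommodity might be sketched as follows. The Account class and the method names are assumptions used for illustration, not the interfaces of the Architectural Specification.

    class CurrencySketch {

        static class Account {
            long balance;
            Account(long balance) { this.balance = balance; }
        }

        // Value is debited on creation, travels with ownership, and is credited on destruction.
        static class CurrencyCommodity {
            private final long value;
            private Account owner;

            CurrencyCommodity(Account creator, long value) {
                creator.balance -= value;               // creation acquires payment from the creating owner
                this.owner = creator;
                this.value = value;
            }
            void transferTo(Account newOwner) { owner = newOwner; }   // the embedded value travels with ownership
            void destroy() { owner.balance += value; }                // destruction credits the current owner
        }

        public static void main(String[] args) {
            Account buyer = new Account(10), seller = new Account(0);
            CurrencyCommodity payment = new CurrencyCommodity(buyer, 4);
            payment.transferTo(seller);
            payment.destroy();
            System.out.println(buyer.balance + " " + seller.balance);  // prints: 6 4
        }
    }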

‘Seller’ users offer a ‘Sell’ competence to their neighbours making available a specific Commodity in exchange for a CurrencyCommodity of a specific value.

‘Developer’ users are responsible for introducing Agents which can mediate between the two Users, carrying out operations and exchanging Commodities.

The application domain is primitive, but is representative of the kinds of optimisation which the network ecosystem will be expected to handle. The focus of this experiment will be the ways in which reference topology determines niches. Homogeneous agents will be employed to eliminate extraneous effects.

The raw resources available to the Agents in the system are the integers {1, 2, 4, 8, 16, 32, 64, 128}, represented as IntegerCommodities. These are introduced by the ‘Seller’ users, who offer to exchange a certain integer for a certain amount of currency. The Seller users will each offer a different integer at a different price. Processed resources are required by the ‘Buyer’ users, who may require any of the full set of integers from 1 to 255. The longer a Buyer has to wait without satisfaction, the more money it will offer for its prized integer. The ‘Developer’ user will introduce Agents which are provided with a PlusCompetence and a random User reference as a starting point. The PlusCompetence will accept a CommodityList containing two IntegerCommodities, sum them to create a new IntegerCommodity, and destroy the original numbers.
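
Since the raw integers are exactly the powers of two up to 128, every target from 1 to 255 is reachable by summing the distinct powers of two in its binary decomposition, using one PlusCompetence application per additional set bit. The short sketch below (illustrative only, not part of the App) computes such a plan for an example target.

    import java.util.ArrayList;
    import java.util.List;

    class ProductionPlan {
        public static void main(String[] args) {
            int target = 83;                                    // an example Buyer requirement
            List<Integer> rawIntegers = new ArrayList<>();
            for (int bit = 1; bit <= 128; bit <<= 1)
                if ((target & bit) != 0) rawIntegers.add(bit);  // powers of two to purchase from Sellers
            int plusOperations = rawIntegers.size() - 1;        // pairwise summation in a production chain
            System.out.println(target + " = " + rawIntegers + ", requiring " + plusOperations + " Plus operations");
            // prints: 83 = [1, 2, 16, 64], requiring 3 Plus operations
        }
    }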

Pseudocode for this simple ancestor might look as follows.

while(true){
    if(currency > set value)
        clone                                        // cloning reduces currency
    while(operator not satisfied by property list){
        find reference to ‘sell’ competence
        exchange currency for new integer
    }
    call operator on owned commodities: store result
    if(find reference to ‘buy’ competence wanting result)
        exchange result for currency
    else
        create ‘sell’ competence offering result
}
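
A self-contained Java rendering of this ancestor strategy is sketched below. The Market interface bundles the agent’s neighbourhood search, trading and cloning operations purely for illustration; none of these names are taken from the Architectural Specification.

    import java.util.List;

    class AncestorSketch {

        interface Commodity { }

        interface Competence {
            boolean satisfiedBy(List<Commodity> inputs);
            Commodity execute(List<Commodity> inputs);
        }

        // Hypothetical stand-in for the agent's currency account, property list and reference neighbourhood.
        interface Market {
            long currency();
            void cloneAgent();                                   // reduces currency
            List<Commodity> property();
            void buyIntegerFromNeighbour();                      // exchange currency for an IntegerCommodity
            Competence findBuyCompetenceWanting(Commodity c);    // null if no buyer is found
            void sell(Competence buyer, Commodity result);       // exchange the result for currency
            void offerSellCompetence(Commodity result);          // advertise the result to the neighbourhood
        }

        static class AncestorAgent implements Runnable {
            private final Market market;
            private final Competence operator;                   // e.g. a PlusCompetence
            private final long reproductionThreshold;

            AncestorAgent(Market market, Competence operator, long reproductionThreshold) {
                this.market = market;
                this.operator = operator;
                this.reproductionThreshold = reproductionThreshold;
            }

            public void run() {
                while (true) {
                    if (market.currency() > reproductionThreshold)
                        market.cloneAgent();
                    while (!operator.satisfiedBy(market.property()))
                        market.buyIntegerFromNeighbour();
                    Commodity result = operator.execute(market.property());
                    Competence buyer = market.findBuyCompetenceWanting(result);
                    if (buyer != null) market.sell(buyer, result);
                    else market.offerSellCompetence(result);
                }
            }
        }
    }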

When an agent is added, it tries to acquire two integers from its reference neighbourhood, carry out its operation on them, and sell the result. The integers available, and hence the result, will depend upon the agent’s relationships with other agents in the system. The results desired by neighbours will also depend on the agent’s place within the reference graph. If appropriate relationships can be formed, production chains are possible which take in raw integers and generate desired integers. Production chains may be made up of more than one agent, and single agents may play a role in more than one production chain. Each production chain may be able to accomplish the route from raw integers to desired integers with a different level of resource efficiency. The production chains with maximum efficiency will be reinforced. Since all the agents are homogeneous in this scenario, there can be no adaptation at the agent level. Successful formation of a production chain satisfies the earlier specification of a super-organism, through having “[a]daptations at the colony level” [Maynard-Smith and Szathmary 1999].

Owing to the generality of the architecture, the Plus operator may be replaced by another competence without modifying the structure of the agent. The agent does not know what the competence is, but simply tries to acquire Commodities until it is satisfied, then executes it. In this way, the simple agent may complete a stage in any production chain.

The pricing of commodities can also emerge from the initial random configuration. Since agents which spend more on purchases than they earn from sales will die out, it is in the interests of both Sellers and Buyers to adjust their prices to allow intermediary agents at least the subsistence profit margin required: otherwise the Sellers cannot sell their wares and the Buyers cannot acquire their goods.
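
One simple adjustment rule consistent with this (an assumption for illustration, not part of the specified App) is for a Buyer to raise its offer for each period it goes unsatisfied, and for a Seller to lower its price towards a floor for each period it goes unsold:

    class PriceAdjustSketch {
        // Buyer behaviour: the longer it waits without satisfaction, the more it offers.
        static long raiseBid(long currentBid, long step) {
            return currentBid + step;
        }

        // Seller behaviour: unsold stock drives the price down, but never below a subsistence floor.
        static long lowerPrice(long currentPrice, long step, long floor) {
            return Math.max(floor, currentPrice - step);
        }
    }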

This application has been partially implemented, but more work is required. It is hoped that continued development will lead to results suitable for submission to a conference.

Appendix C – Architectural Specification

See separate volume.

Bibliography

Adar, E. and Huberman, B.A. [2000] “Free Riding on Gnutella” Internet Ecologies Area, Xerox Palo Alto Research Center, Palo Alto, CA 94304, available from

Axelrod, Robert [1997] “The Complexity of Cooperation” Princeton University Press

Bartee, Thomas C. [1977] “Digital Computer Fundamentals” McGraw Hill

Baum, Eric [1998] “Manifesto for an Evolutionary Economics of Intelligence” in “Neural Networks and Machine Learning” p285-344, C.M. Bishop (ed.) Springer Verlag, also available at NEC Research Institute

Bedau, M. [1996] “The Nature of Life” in [Boden 1996b]

Bedau, M.A., McKaskill, J.S., Packard, N.H. and Rasmussen, S. [2000] “Proceedings of the Seventh International Conference on Artificial Life” MIT Press

Best, M.L. [2000] “Coevolving Mutualists Guide Simulated Evolution” in [Bedau et al (eds.) 2000]

Bigus, Joseph P. and Jennifer [1998] “Constructing Intelligent Agents with Java” John Wiley and Sons Inc.

Boden, M. [1996a] “The Creative Mind” Abacus

Boden, M. [1996b] “The Philosophy of Artificial Life” OUP

Bossomaier, Terry and Green, David [1998] “Patterns in the Sand” Perseus Books, Reading Mass.

Burian, Richard M. and Richardson, Robert C. [1996] “Form and Order in Evolutionary Biology” in [Boden 1996b]

Chang, Myong-Hun and Harrington, Joseph E. [1999] “Centralization vs. Decentralization in a Multi-Unit Organisation: A Computational Model of a Retail Chain as a Multi-Agent Adaptive System” Santa Fe Institute Working Paper 00-02-010

Clearwater, Scott (ed.) [1996] “Market Based Control – A Paradigm for Distributed Resource Allocation” World Scientific Publishing Co. Pte. Ltd. Singapore

Clements, Ross [2000] Presentation to SOMAS (Simulation of Multi-Agent Systems) conference, Milton Keynes – Unpublished

Coase, Ronald [1937] “The Nature of the Firm” Economica 4 (1937) 386-405, abridged in [Putterman 1997]

Cohen, I. Bernard [1994] “Newton and the Social Sciences” in [Mirowski 1994]

Dautenhahn, Kerstin [2000] “Reverse Engineering of Societies – A biological perspective” Proceedings of the AISB symposium on “Starting from society, the application of social analogies to computational systems”, printed at the University of Birmingham, UK (ISBN 1 902956 13 8)

Darwin, C. [1872] “The Origin of Species” 1998 reprint, Wordsworth Editions Ltd. Also available online at

Dose, Fox, Deborim and Paulovska (eds.) [1974] “The Origin of Life and Evolutionary Biochemistry” Plenum Press, New York

Eigen, M. and Schuster, P. [1979] “The Hypercycle” Springer-Verlag

Faith, Joe [1996] “Why you cannot eliminate intentionality” AISB Quarterly, Winter 1996, No 96

Ferguson, D.F., Nikolaou, C., Sairamesh, J. and Yemini, Y. [1996] “Economic Models for Allocating Resources in Computer Systems” in [Clearwater (ed.) 1996]

Feynman, Richard P. [1999] “Feynman Lectures on Computation” Hey, Anthony G. and Allen, Robin W. (eds.) Penguin

Fontana, W. and Buss, L. [1993] “What would be conserved if the tape were played twice?” Santa Fe Working Paper 93-10-067

Franklin, S. and Graesser, A. [1996] “Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents” in “Proceedings of the Third International Workshop in Agent Theories, Architectures and Languages” Springer-Verlag

FreeNet

Gnutella

Goodwin, Brian [1997] “How the Leopard Changed its Spots” The Guernsey Press Co. Ltd.

Gruau, F. [1995] “Automatic definition of modular neural networks” Adaptive Behavior 3 Vol 2, p151-184

Grzegorz Czajkowski and Thorsten von Eicken [1998] “JRes: A Resource Accounting Interface for Java” in Proceedings of the 1998 ACM OOPSLA Conference, Vancouver, BC, October 1998

Hardin, G. [1968] “The Tragedy of the Commons” Science, 162:1243-1248. Available online at:

Harvey, Inman [1997] “Open the Box” Workshop on Evolutionary Computation with Variable Size Representation, at ICGA97, Intl. Conf. on Genetic Algorithms, July 19-23 1997 at Michigan State University, East Lansing, Michigan, available from

Harvey, Inman [1996] “Untimed and Misrepresented: Connectionism and the Computer Metaphor” AISB Quarterly 96, Winter

Hayek, Friedrich [1988] “The Fatal Conceit: The errors of socialism” in “Collected works of F.A. Hayek” Routledge and Kegan Paul – quoted in [Hodgson 1994]

Hayek, Friedrich [1945] “The Use of Knowledge in Society” The American Economic Review 35 pp519-30, abridged in [Putterman 1997]

Hodgson, Geoffrey M. [1994] “Hayek, Evolution and spontaneous order” in [Mirowski 1994]

Hoile, C. and Tateson, R. [2000] “Design by Morphogenesis” in [Bedau et al (eds.) 2000]

Holland, John H. [1975] “Adaptation in Natural and Artificial Systems” MIT Press

Huberman, B.A. [1988] “The Ecology of Computation” North Holland

Jennings, Nick, Sycara, Katia and Wooldridge, Michael [1998] “A RoadMap of Agent Research and Development” in Autonomous Agents and Multi-Agent Systems 1 p275-306

Jensen, Finn [1995] “Bayesian Networks Basics” AISB Quarterly, Winter 1995, No 94

Johnson, N.L. [2000] “Developmental Insights into Evolving Systems: Roles of Diversity, Non-Selection, Self-Organisation, Symbiosis” in [Bedau et al (eds.) 2000]

Kearney, Paul [due 2000] “Integration of computational models inspired by economics and genetics” Submitted for BT Technology Journal, due October 2000

Keijzer, F.A. [1998] “Some Armchair Worries about Wheeled Behavior” in “From Animals to Animats 5 – Proceedings of the Fifth International Conference on Simulation of Adaptive Behavior”, pages 13-21, Cambridge, MA. MIT Press

Kenyon, D.H. [1974] “Prefigured ordering and Protoselection in the origin of life” in [Dose et al. 1974]

Keosian, J. [1974] “Life’s beginnings – origin or evolution” in [Dose et al. 1974]

Knight, Frank [1921] “Risk, Uncertainty and Profit” abridged in [Putterman 1997]

Koza, John R., Bennett, Forrest H. III, Andre, David and Keane, Martin A. [1999] “Genetic Programming III” San Francisco, CA: Morgan Kaufmann

Koza, John R. [1994] “Artificial Life – Spontaneous Emergence of Self-Replicating and Evolutionary Self-Improving Computer Programs” pp225-261 in [Langton 1994]

Kreps, David M. [1990] “Game Theory and Economic Modelling” Clarendon Press, Oxford

Langton, Christopher G. [1983] “Self-Reproduction in Cellular Automata” in “Evolution, Games and Learning”, edited by J. Doyne Farmer, T. Toffoli and S. Wolfram, Proceedings of an Interdisciplinary Workshop, North Holland Physics Publishing. Referenced in Koza, John R. “Artificial Life: Spontaneous Emergence” in [Langton 1994]

Lenton, T. “Further implications of the Daisyworld Parable”

Levy, Steven [1993] “Artificial Life – The quest for a new creation” Penguin

Lewontin, R.C. [1970] “The Units of Selection” Am. Rev. Ecol. System. Vol I p1-18

MacFarland [1993] “Animal Behaviour” Longman Singapore Publishers (Pte) Ltd.

Macfarland, D. [1996] “Animals as Cost-based Robots” in [Boden 1996b]

Macfarland, D. and Bosser, T. [1993] “Intelligent Behaviour in Animals and Robots” MIT Press

Maes, P. [1991] “A Bottom-Up Mechanism for Behavior Selection in an Artificial Creature” in Meyer, J.A. and Wilson, S.W. (eds.) “From animals to animats: Proceedings of the First International Conference on The Simulation of Adaptive Behavior” pp238-246, MIT Press, 1991

Margulis, Lynn [1981] “Symbiosis in Cell Evolution” W.H. Freeman and Co.

Marx, Karl [1867] “Capital: A critique of Political Economy Volume 1” abridged in [Putterman 1997]

Maynard-Smith, J. and Szathmary, E. [1999] “The Origins of Life” OUP

Maynard-Smith, John [1996] “Evolution, Natural and Artificial” in [Boden 1996b]

McMullin, Barry [2000] “John von Neumann and the Evolutionary Growth of Complexity: Looking backwards, Looking Forwards…” in [Bedau et al (eds.) 2000]

Miller, Mark S. and Drexler, K. Eric [1988a] “Comparative Ecology – A computational Perspective” in [Huberman (ed.) 1988]

Miller, Mark S. and Drexler, K. Eric [1988b] “Markets and Computation: Agoric Open Systems” in [Huberman (ed.) 1988]

Mirowski, Philip (ed.) [1994] “Natural Images in Economic Thought” Cambridge University Press

Mitleton-Kelly, Eve [2000] Keynote Presentation to SOMAS (Simulation of Multi-Agent Systems) conference, Milton Keynes – Unpublished

Morowitz, Harold J. [1970] “Entropy for Biologists” Academic Press Inc., New York

Murphy, James Bernard [1994] “The kinds of order in society” in [Mirowski 1994]

Napster

Nwana, H. and Ndumu, D. [1999] “A Perspective on Software Agents Research” To appear in Knowledge Engineering Review

Olafsson, Sverrir [1996] “Resource Allocation as an Evolving Strategy” Evolutionary Computation 4(1) p33-55, MIT

Ormerod, Paul [1994] “The Death of Economics” Faber and Faber

Ormerod, Paul [1998] “Butterfly Economics” Faber and Faber

Park, Sunju, Durfee, Edmund H. and Birmingham, William P. [2000] “Emergent Properties of a Market-based Digital Library with Strategic Agents” in Autonomous Agents and Multi-Agent Systems, 3, 33-51, Kluwer Academic Publishers

Perkins, David N. [1996] “Beyond the Darwinian Paradigm” in [Boden 1996b]

Pfeifer, R. [1996] “Building ‘Fungus Eaters’: Design principles of autonomous agents” in Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior SAB96 (From Animals to Animats), pages 3-12, Cape Cod, Massachusetts, USA

Poundstone, William [1992] “Prisoner’s Dilemma” Anchor Books, Doubleday

Pryor, Louise [1995] “Decisions, Decisions: Knowledge goals in planning” AISB Quarterly 92, Summer 1995

Putterman, Louis and Kroszner, Randall S. [1996] “The Economic Nature of the Firm – A Reader” Cambridge University Press

Ray, Thomas S. [1996] “An Approach to the Synthesis of Life” in [Boden 1996b]

Richardson, G.B. [1972] “The organization of industry” Economic Journal (1972): 82 pp883-96, abridged in [Putterman 1997]

Ridley, M. [1994] “The Red Queen” Penguin

Schrodinger, E. [1944] “What is Life?” Cambridge University Press, reprinted 1992

Shubik, Martin “The theory of money” Santa Fe Institute Working Paper 00-03-021

Sims, Karl [1994] “Evolving 3D morphology and behavior by competition” in R. Brooks and P. Maes (eds.) “Artificial Life IV” pp 28-39, MIT Press

Smith, Adam [1776] “An Inquiry into the Nature and Causes of the Wealth of Nations” abridged in [Putterman 1997]

Stewart, Ian [1999] “Life’s Other Secret” Penguin

Sun

Terry Fogarty, Brian Carse and Larry Bull [1994] “Classifier Systems: Recent Research” AISB Quarterly 89, Autumn 1994

Thompson, D’Arcy Wentworth [1992] “On Growth and Form” Canto

von Neumann, John and Morgenstern, Oskar [1990] “Theory of Games and Economic Behavior” Princeton University Press

Watson, R.A., Reil, T. and Pollack, J.B. [2000] “Mutualism, Parasitism, and Evolutionary Adaptation” in [Bedau et al (eds.) 2000]

Wellmann, M.P. [1996] “Market-oriented Programming: Some Early Lessons” in [Clearwater (ed.) 1996]

Wills, Peter “Autocatalysis, Information and Coding” Special issue of Biosystems “Physics and Evolution of Symbols and Codes”, Peter R. Wills, Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand. Available from

Wright, Ian [2000] “The Society of Mind Requires an Economy of Mind” Proceedings of the AISB 2000 symposium on “Starting from society, the application of social analogies to computational systems”, printed at the University of Birmingham, UK (ISBN 1 902956 13 8)

Ygge, Fredrik and Akkermans, Hans [2000] “Resource-Oriented Multicommodity Market Algorithms” Autonomous Agents and Multi-Agent Systems, 3, 53-71, Kluwer Academic Publishers

-----------------------

[1] My emphasis.

[2] Thorny issues in the theory of language concerning the basis for interpreting ‘representations’ cannot be explored in depth here. However, it is important to stress that von Neumann’s CA system implements a mechanical process which builds a structure in response to the information encoded in the cells. This bypasses the infinite regress of rule books posited by Wittgenstein, and the normative problems later elucidated by Kripke. For more on these issues please visit my BA dissertation at

[3] In fact, other hardware, such as math co-processors, sound cards, 3d cards or network connections, may be involved in carrying out an operation (i.e. a transformation, recombination or relocation of data within the network). It is convenient for present purposes to think of the processor as carrying out the job.

[4] Conservation, ownership and associated cost for all resources and structures in the system also helps to prevent the problems arising when resources are held in common. ‘Tragedies of the Commons’ [Hardin 1968] depend upon the action of individual rationality when an individual payoff is available, incurring a group cost. In circumstances where the individual’s share of the cost is less than the individual’s payoff there is no point at which it is rational to stop exploiting a resource, even if it leads to its destruction.

[5] A characterisation of true sociality in the natural world. (eu = true).

[6] Epistasis provides for the possibility of a one-to-many mapping from single genes to multiple phenotypic traits. It concerns the interdependencies between genes which control their expression. The productive capability of each gene can be moderated by its own products, the products of other genes, and the products of those products, in an edifice of interdependencies.

[7] Neutrality provides for the possibility of a many to one mapping between genotypes and phenotypes. In biological organisms, multiple genotypes map to phenotypes which are identical in terms of structure or fitness. Many modifications are therefore neutral with respect to fitness.

[8] Evolvability is the potential for adaptivity of a given organism. DNA, for example, offers a way for offspring to explore alternative biochemistries through the combinatorial power of ribosomal transcription, whilst base-pairing, helper enzymes and processes of repair provide the basis for the thermodynamic stability of chromosomes, allowing offspring to exploit an inherited phenotype without too much loss of information. An evolvable organism has a strategy which maintains a good balance between exploration and exploitation to ensure the survival of its progeny.

[9] Although it should be noted that DNA is not simply a string of symbols. It employs methylation, tertiary structure interactions with co-factors, chaperone proteins and many other innovations to modify its operation.

[10] See earlier discussion of self-maintenance in Section 3. This iterative reflexive process – stabilising distal events through the determination of proximal events – is particularly well described in Keijzer’s treatment of the Brunswik lens model in [Keijzer].

[11] Please e-mail psdf@ for more details of this work.

[12] Koza faces some difficulty in evaluating partially recursive programs. Programs may be evolved which do nothing but enter infinite loops, and it is not clear at what point they should be terminated in case they are eventually successful. This is not a problem for an evaluative scheme which imposes a strict cost on all resources. The offspring in a Data Physics simulation may only waste as much resource as has been inherited from its parents. Parents whose costly strategies eventually pay off may have to provide their children with a large initial investment, but it must have been earned by the success of the parent, and is therefore a reasonable investment. Within a data physics, the correlation between a strategy’s success and the amplification of the resources assigned to it operates at all levels. Although Koza’s amplification of lives for the offspring of successful individuals provides a degree of control, the fact that it does not extend to computational resources represents a problem.

[13] In other words, the eventual convergence of agent phenotypes on niche-rational behaviour is not sufficient. Time is a factor in the satisfactory response of the network to users’ demands; therefore evolvability (how readily agents can adapt to the selection pressures they face) is a significant factor.

[14] This issue is also discussed in Appendix A.

[15] It is also a feature of the Java virtual machine that dynamic classloading may take place at runtime, making the introduction of new types of data and operations possible without re-compiling or shutting down the system.
