Systems Research and Behavioral Science 22, 413-430, 2005
LIFE AND SIMPLE SYSTEMS
Ron Cottam, Willy Ranson & Roger Vounckx
The Evolutionary Processing Group, ETRO
Vrije Universiteit Brussel, Brussels, Belgium
Communicating author: Ron Cottam
ricottam@etro.vub.ac.be
Tel/fax: +32 (2) 629.2933
ABSTRACT
This last decade has seen the publication of an extensive literature describing, cataloguing
and analyzing the ‘emergence’ of complexity. This seems very strange. The creation of a
complex assembly is comparatively easy – the difficult job is to generate simplicity from it.
So much is this the case, that the only context within which it takes place is that of life
itself. Although we naturally imagine life as a dynamic process rather than as a static
structure, both of these are critical to its survival. Continuously expanding multi-element
assemblies finally lose their cohesion, and split up into separate parts, or restructure
themselves to redress their stability by generating a simplified umbrella-level of operation.
In large organisms this process may repeat itself, thus creating a multi-leveled self-correlating operational hierarchy. It is not obvious how the associated generation of
simplicity is initiated, but it appears that such a self-correlating hierarchy is itself alive.
Keywords: life, hierarchy, multi-scalar, complexity, simplicity.
INTRODUCTION
One of the preoccupations of Homo sapiens is, and has been, to explain in some way the
genesis of life, both in an abstract sense and in the appearance of individual organisms.
Over the last century there has been a progressive move away from attributing its origin to
some external intelligence and towards attempting an explanation from the bases developed
in the natural (i.e. mainly ‘inorganic’) ‘exact’ sciences. Unfortunately, to some extent this
has been doomed to failure, as these bases were derived for systems in equilibrium, and life
is anything but that! More precisely, the standard scientific scenarios comprise systems
which remain in equilibrium because their constituent parts are also in equilibrium, while
living systems maintain a temporally limited quasi-equilibrium because their constituent
parts are individually far from equilibrium. Within the standard scientific scenarios,
therefore, a reductive approach can provide reasonable understanding of the relationships
which exist across scales, while the application of a reductive scientific approach to living
systems merely provides the description of a dead organism, and not a living one.
If we are to arrive at a degree of ‘understanding’ of life itself, we are therefore constrained
to either adopt a far less mechanistic investigative tool, or to change the formulation of the
scientific tools we already possess to take account, at the very least, of this cross-scalar
difficulty. Within a traditional scientific discourse, there is a formal separation of static and
dynamic aspects of a system. Over the past few decades the investigation of complex
phenomena has brought about the beginnings of recognition that this compartmentalization
must be relaxed, but it is still less than obvious how this can be achieved without
completely abandoning the ‘exact’ nature of the ‘exact’ sciences.
Much has been written of late about complexity and the ‘emergence’ of complexity in both
inorganic and organic contexts. Unfortunately, the word ‘complexity’ appears in so many
guises that it is virtually impossible to use it without attempting some qualification of its
meaning. Our own use of the word is close to that implied by Robert Rosen [see, for
example, Rosen, 1997; Rosen and Kineman, 2005], but there are dozens of other
interpretations [for an overview see, for example, Edmonds, 1997]. It is now fairly
commonplace to associate the nature of life with the ‘emergence’ of complexity, and it is
certainly the case that the two can be in some way associated, but it is difficult to see how
the ‘emergence’ of complexity could be sufficient without a concurrent ‘emergence’ of
simplicity to resolve the difficulties which complexity entails. Complexity itself solves no
problems, eases no paths, and facilitates no response to stimulus – all these are the purview
of simplicity. We suggest that complexity is more the result of development than of
‘emergence’, and that the ‘true’ emergence is, if anything, of simplicity [Cottam et al.,
1998a]. The biggest difficulty which science will have to resolve in attempting an
understanding of life is the unfamiliar concurrence of complexity and simplicity, where the
appearance of a (living) system depends on the context it experiences, which may be very
different from the context which scientific study may attribute to it from an external setting.
In this paper we will explore the hypothesis that life is essentially a relationship between
complexity and simplicity within a unified system. We will regularly fall back on the
application of a computational paradigm in our descriptions. This is not to suggest that life
consists of ‘computation’ per se, but that in the relationships between complexity and
simplicity it provides a useful provisional ‘pictorial’ support in transmitting the ideas we
wish to propagate.
STATIC AND DYNAMIC ASPECTS OF LIFE
It would be difficult to think of life without immediately focusing on its dynamic aspects.
The initial perceptional distinction we draw of our surroundings is between stasis and
movement, and to a large extent it appears to be living things which move. Our mammalian
eyes have developed a degree of pre-neural processing which is mainly directed towards the
location in our field of view of moving entities whose recognition may be vital to survival.
Beyond this simplistic distinction, it becomes less easy to establish criteria for ‘alive or not
alive’ which are neither self-referential nor unsatisfactory across a range of timescales.
Biology has traditionally concentrated on metabolism and reproduction as distinguishing
features, but while these two are certainly necessary for the maintenance and evolution of
living forms, it is questionable whether they relate usefully to our personal momentary
sense of ‘being alive’. It would appear that these characteristics are more the servants of life
than its primitive nature.
While we automatically associate life with phenomenological dynamics, its substrate is the
structural nature of cells, tissue and organs – whose character appears most amenable to
reductive investigation. Even so, we must be careful to what extent we presuppose that
there is a one-to-one relationship between a detailed description of the internal processes of
living material and the external appearance of its importance on a larger scale. The major
difficulty appears to be that structure and process cannot be reductively separated in a
living system, and that their relationships depend on the scale from which they are viewed.
Our basic viewpoint is that which we have stated elsewhere [Cottam et al., 2003a, 2004a]:
if we analyze a dynamic system in terms of the classical separation of time-independent and
time-dependent parts, then we adopt the hierarchical form of supposing that dynamics
consists of two subsidiary complementary parts, namely structure and its counterpart
process. It should be noted that simplification of a binary complement can lead to binary
orthogonality or, by more extreme reduction, to a pair of opposites. Structure and
process are complementary in neither of these simplified senses, but in that indicated by
Niels Bohr: “The opposite of a correct statement is a false statement… but the opposite of a
profound truth may well be another profound truth.” We believe that this relationship
between time-independent and time-dependent complements predated the large-scale
organization of living systems and that it corresponds to the primitive form of
consciousness, which is almost but not quite the same as the most primitive form of energy:
in large suitably-organized information-processing networks this leads to the high-level
consciousness which we ourselves experience [Cottam et al., 1998b].
The most productive route towards an understanding of the grounding of life appears to be
by manipulation of the style and the degree of partiality or completeness we attribute to the
logic with which we describe phenomena which have implications at more than one scale
of an entity. This entails always keeping in mind that complexity engenders simplicity, and
that apparent simplicity conceals an underlying complexity. Characteristically, in this
context of describing or defining ‘life’, any definition of ‘complexity’ will be a simplified
one, and any definition of ‘simple’ will be overly complex!
What is “Complexity”?
Our references to ‘complexity’ in this paper may be associated with a sense of
incompleteness of description, and of consequent analytical difficulties.
Following Robert Rosen [see, for example, Rosen, 1997], Rosen and Kineman [2005]
categorize complexity by
“… if there is no way to completely model all aspects of any given system, it is
considered ‘non-computable’ and is, therefore, ‘complex’ in this unique sense.”
Our own use of the word corresponds to the form of this computational description, but we
distinguish between two very different classes of complexity, which may be associated with
different styles of incompleteness or approximation.
Analog approximation is a reduced representation (i.e. a simplification) of the entire entity
under consideration, whose incompleteness or error is ‘fuzzy’ in style [1]. This corresponds to
the more usual use of the word ‘approximate’. For example, the analogue approximation of
a digital computer would presumably be rather vague across its entire organization and
operation.
Digital approximation is a reduced representation (i.e. a simplification) of the entity by
reduction in the number of its defined sub-units. Although entirely valid, this style of
approximation may appear to be arbitrary in many contexts. For example, the digital
approximation of a digital computer could be missing the central processor – raising the
question of whether it is still a computer. However, this problem analogously raises its head
in specific cases of analog approximation, and it appears to result from ‘intentionally
choosing a context which does not fit the (reduced) description’. Description is always by
reduction, so it is always possible to choose contexts within which a given description is
invalid.
The analog and digital approximations of a real-valued parameter are related to each other
through process and intention. An analog approximation would be to a degree vague in the
definition of each and every digit of its formulation, making it useless to specify its value to
more than a particular number of significant figures. A digital approximation, however,
would effectively lose one or more of the digits of its formulation, similarly limiting its
specification, but by the possible values of that one digit. A nice example of this distinction
is the way in which we relate to traditional or digital clocks. We would normally attribute
analog approximation to a traditional clock, by suggesting that it shows the ‘right’ time (i.e.
it is ‘accurate’) to within a certain (arbitrary) degree. More formally, we would suggest that
a digital clock is (also) only ‘right’ to within plus or minus ‘1’ in its least significant digit.
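As an informal computational illustration of this distinction (a sketch of our own, with purely illustrative values rather than anything drawn from the sources above), the two styles of approximating a real-valued parameter might be contrasted as follows:

```python
import math

value = 3.14159265  # the 'complete' real-valued parameter

def analog_approximation(x, sig_figs=4):
    """Analog style: the whole value is vague - reliable only to a limited
    number of significant figures, with every further digit uncertain."""
    exponent = math.floor(math.log10(abs(x)))
    quantum = 10 ** (exponent - sig_figs + 1)          # size of the vagueness
    return round(x, -(exponent - sig_figs + 1)), quantum

def digital_approximation(digits="3.14159265", dropped=2):
    """Digital style: one defined sub-unit (a single digit) is lost, so the
    value is known only up to the possible values of that digit."""
    lo = digits[:dropped] + "0" + digits[dropped + 1:]
    hi = digits[:dropped] + "9" + digits[dropped + 1:]
    return float(lo), float(hi)

print(analog_approximation(value))   # (3.142, 0.001): fuzzy everywhere
print(digital_approximation())       # (3.04159265, 3.94159265): one digit unknown
```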
While it is possible to (approximately) categorize two types of complexity by associating
their incompletenesses with analog and digital approximations, any attempt at a complete
definition of complexity runs into precisely (!) the problems which complexity itself
engenders. If we look even further into the analog/digital categorization, we find that
complexity is often associated either with the digital representation of an analog context, or
with the analog representation of a digital context – but not always!
[1] It should be noted that this is not the meaning of the word ‘fuzzy’ which is applied in
‘fuzzy logic’, where set membership may be other than 1 or 0, but it is in any case precisely
defined.
While our own use of the word “complex” corresponds to the form of the non-computability criterion Robert Rosen used in his concept of complexity, the implied
content is different, as will be clear from the description above. We are in no way
suggesting that nature itself is complex – we do not have enough information to maintain
such an absolutist assertion. We are rather attributing complexity to the mismatch between
a ‘mode of expression’ of nature and a ‘mode of representation’ of it or, in David Bohm’s
terms [1987], a mismatch between nature’s implicate order and our interpretation of its
explicate order. This appears to be categorically different from Robert Rosen’s view that
nature is complex [2]. We find it difficult to see that this apparent view could extend beyond
belief to measurement. However, we do not imply that it is possible to choose a mode of
representation from a single point of view which would permit ‘complete’ modeling of the
universe.
What is “Simple”?
Sadly, not only do we need a sense of ‘simple’ to define ‘complexity’, we also need a sense
of ‘complexity’ to define ‘simple’! Fortunately, we can fall back on the computational
paradigm to help distinguish between them and give a sense of reality to their difference. In
doing so, we can also attribute a meaning to ‘complication’.
‘Simple’ implies ‘easy to compute’ – i.e. feasible to compute, and not taking too much
time. ‘Complicated’ implies ‘more of the same’ – i.e. feasible to compute, and longer to
compute, but not substantially more difficult: this is similar to Rosen’s [2000] formulation.
‘Complex’, however, implies ultimate incomputability, and that even if an approximation
can be obtained it may take infinite time to obtain it. These three ‘implied definitions’
appear to fit reasonably well with Rosennean complexity as referred to by Rosen and
Kineman [2005].
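To make the contrast concrete, a rough computational sketch (our own illustrative naming, not drawn from Rosen) might read:

```python
def simple_task(n=10):
    """'Simple': feasible to compute, and quick - a short terminating procedure."""
    return sum(range(n))

def complicated_task(n=10_000_000):
    """'Complicated': more of the same - still feasible and guaranteed to
    terminate, just longer to compute."""
    return sum(i * i for i in range(n))

def complex_task(program: str, argument: str):
    """'Complex': ultimately incomputable - by Turing's halting argument there
    is no general procedure which decides, for every (program, argument) pair,
    whether the computation ever finishes; any approximation may take
    unbounded time to obtain."""
    raise NotImplementedError("no general algorithm exists")
```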
However, defining ‘complication’ non-computationally is as tricky as defining complexity,
if not more so. Complication is often associated either with the analog representation of an
analog context, or with the digital representation of a digital context – but not always! If we
accept the usual premise that communication is restricted by relativity, then the temporal
‘snapshot’ of any non-permanent spatially-measurable entity will always exhibit some
degree of ‘incomplete’ complexity. Consequently, any multi-elemental system, however
small and ‘simple’, will always depend on time to fulfill its system-defining entailments,
however simple these are.
An obvious immediate conclusion is that neither ‘simple’ nor ‘complicated’ exist outside
our own definitions, but this argument gets us nowhere. It is far more relevant that different
degrees of ‘incompleteness’ can be envisaged, and that in a hierarchical assembly different
scalar levels depend on different optimal (never zero) ‘incompletenesses’ of their same-scale
entailments for structural or organizational quasi-stability (i.e. fractal degrees of
internal freedom). Persistent scalar levels of a system are those which have managed to
establish some kind of equilibrium between both the positive and the negative implications
of their elemental or procedural ‘vaguenesses’ [3]. Rather than maintaining that complication
always has a complex component, it is more interesting to note that persistent scalar levels
in a self-correlating hierarchy most probably always constitute simple or complicated
self-representations, as they are always computationally viable entities. The stabilization of such
levels appears to be associated with the ejection of sufficient complexity into their adjacent
inter-scalar regions [Cottam et al., 2003a] to achieve optimal at-least-partially-complex
relations with other scales of the system of which they are a part, and although some degree
of incomplete-complexity remains, it is of little importance at that scale.

[2] So far as we are aware, this was Robert Rosen’s belief. E.g. Judith Rosen (private
communication): “in my father's usage, complexity is inherent in the universe”; John Kineman
(private communication): “Rosen, I believe, did assert that nature is complex”; Robert Rosen
(Rosen, 1997): “Complexity is, I feel, a basic feature of the world, the material world, even
the world of physics, the world of machines”.
QUASI-STABILITY IN EXPANDING SYSTEMS
The primary quality of any recognizable entity is its unification. It is easy to bypass this
aspect and concentrate on more observable characteristics, but an entity’s unification
cannot be ignored if we are to come to any understanding of its nature and operation. A
viable characterization of ‘unification’ is provided by comparison between intra-entity
correlative organization and entity-environment inter-correlative organization – between
cohesion and adhesion [Collier, 1999]. Although the difference between these two for a
crystal is substantial, that for a living organism is far higher. This, then, provides us with a
major question. How is it that living entities which consist of a multiplicity of strikingly
different subordinate parts do not simply disintegrate? Clearly, the individual parts cannot
be entirely autonomous, or there would be no reason for them to remain together. But is it
just ‘the whole’ which requires the parts, or ‘the parts’ which require the whole, or both?
John Collier has provided a resolution to this dilemma [Collier, 1999]. He suggests as an
example that through evolution the brain has ceded its biological-support autonomy to the
body, in exchange for a gain in information-processing autonomy. This kind of autonomy
exchange need not necessarily be complete between two individual parts of an entity, but
can spread across the entire assembly, so that in some way all the individual subordinate
parts require and are subordinate to all the others. This networked autonomy exchange
seems to be a good candidate for the global structure of a living organism, especially as it
would be a likely result of survivalist evolution being applied not to complete organisms
but to their individual parts. It suggests that an ecosystemic paradigm can be applied not
only to an assembly of different species, but also to their internal structures.
It is notable that all of the entities we would consider to be alive possess a highly
differentiated internal structure, when compared to a crystal, for example. A net result is
that the difference in information content between a living entity’s smallest and largest
scales is enormous. We may compare this to the change in information content across
scales in a crystal, which is virtually zero. This seems at first sight to be a great
disadvantage, as it is primarily the entity as a whole which must respond to external threats,
and if its global character requires such an enormous pool of descriptive information, then
it must of necessity be slow to react. However, this is not the case.

[3] A term attributable to Stan Salthe.
Communication requires effort. It is a central tenet of information processing that faster
communication requires more energy. As an integrated assembly of small elements
expands, it sooner or later reaches the point where its coherence is insufficient to support
rapid communication between all of its different parts, and it is faced with a ‘choice’:
disintegrate, or change. Disintegration results in a number of different entities, which is of
little interest. But in what way can it change to improve the situation and enable it to remain
integrated? Its only option is to partially split up but to also remain partially integrated. In
that way, the majority of communication can remain local, with only a limited amount
taking place between the new large-scale subordinate parts of the whole. This saves energy
which can be applied to increase the cohesion of the whole, but a result is that a new kind
of communication appears between different scales of the assembly.
Rosen’s modeling relation [Rosen, 1985] refers to just such a situation where one system
relates to another through a complementary assembly of dependence and autonomy.
Unfortunately, the relationship between adjacent scales of a unified system is massively
fractal, in terms both of analog and digital complexity. Kineman [2002] has proposed a
multi-scaled nested form of Rosen’s modeling relation, but this in itself is insufficient to
represent the multiply-fractal nature of the communicational complexity which resides
between different real scales. It should be noted that if it is possible to communicate
between adjacent levels of a hierarchy by means of the rationality of either of those scales,
then that part of the hierarchy will collapse into a single level, unless its structure is
supported by the external imposition of constraints. This form of constrained quasi-hierarchy corresponds to the structure of a digital computer. The only way to model inter-scalar transport in a real self-correlating hierarchy is by reference to all the system’s inter-scalar relationships: this requires bringing into play some kind of nonlocality [Cottam et al.,
1998c, 2003a, 2004b].
Information Transport across Scales
Analyzing the transport of information between different scales of an entity runs into a
major obstacle when it is realized that this cannot follow the same rationality which is
operational at one or other of the scales involved. If we consider the generation of a new
high-level scale, as was suggested above to increase cohesion, then the stability of that new
level depends on condensing an extended description of the interactions between numerous
lower-level elements to a far simpler description of their global properties. In short,
information must be compressed.
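A toy computational illustration of this upscale compression (our own sketch, with arbitrary numbers) is the condensation of a large micro-description into a few global statistics:

```python
import random
import statistics

# element-level detail: one number per lower-level element
micro_state = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# the condensed, 'umbrella-level' description of the same assembly
macro_state = {
    "mean": statistics.fmean(micro_state),
    "spread": statistics.pstdev(micro_state),
    "count": len(micro_state),
}
print(macro_state)   # a handful of numbers standing in for 100,000
# Downward communication from the macro level can only be 'sketchy': infinitely
# many micro-states are compatible with one and the same macro_state.
```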
The compression of information is analogous to the physical compression of a gas, in that
beyond a certain degree of compulsion there will be a phase change to a much more
concentrated form (a liquid, in the case of a gas; an abstraction in the case of information)
which exhibits different properties from those of the initial content. A case in point is the
compression of steam. Not only does it first change to water – a polar liquid whose
universal-solvent properties are arguably responsible for the development of life – it then
changes into ice, which exhibits a series of different forms and properties with increasing
pressure. In general, this degree of compression means not only that much detail is lost
through phase change, but more seriously that the resultant higher abstracted form does not
depend on a description at the lower elemental level, but on what is required by the new
higher one. The only way the higher level can indicate this requirement to the elemental
level is by downward information-expansive communication, but it has only a sketchy
description of the lower level’s character because of the upscale compression. The only
way this can operate satisfactorily to produce a pair of levels whose descriptions are
coordinated is by a negotiation between the two, leading to autonomy exchange.
Autonomy negotiation between adjacent scalar levels requires a stable environment to
support it, but this appears at first sight to be absent from the questionable region of
compression/expansion between the scales. The creation of a previously nonexistent scalar
level above that of an assembly of numerous elements is usually referred to as the
‘emergence’ of a new level. Traditionally, this is attributed to the generation of ‘novelty’ as
a result of upward information pressure. Evidence from the physical structure of biological
molecules (most particularly of lipids) and from cross-scale information transport in single
crystals of gallium arsenide [Cottam et al., 2004a] suggests that the majority of the
information transport between different scalar levels of an entity is related to physical or
informational structure and not novelty. The successful transmission of low-noise
television signals depends on relatively small modulations of predictably-structured sets of
carrier frequencies. In a similar manner, it appears that the cross-scale transport of
information which leads to novel results requires a strongly structured carrier. This can then
provide a stable inter-scalar regime within which the autonomy negotiation can take place.
Generating Simplicity through Partial (En-)closure
The easiest way to reduce the descriptive complication of an entity is to surround it with a
complete barrier and then to establish a limited number of input/output ports through which
carefully limited information can be processed. This is exactly what a biological cell does.
The lipid cell membrane is penetrated by numerous channels which are each associated
with the transfer of specific kinds of information, for example the gap junctions which
permit inter-cellular communication. The cell itself also contains numerous organelles, or
different functional units, most of whose communication is limited by one or more
membranes. The central issue here is how a particular entity maintains suitable
relationships in both the up- and down-scale directions: an individual cell must be in some
sort of active correlation, not only with its organelles, but also with adjacent cells. Although
incoming information may be completely different from outgoing information, the
information cycle crossing the membrane in both directions must not only be self-consistent, but must also be consistent with all of the other cross-scalar cycles within the
organism if its viability as a quasi-stable entity is to be maintained. It is the multiply-scalar
nature of these cycles which produces a local process closure as well as the spatial
enclosure of a cell and ultimately defines the cell’s function in the organism. As a result of
interaction between these closures, the cell may present itself to its environment in ways
which are useful to it, without confusing the issue with detail which is only relevant to its
internal processes. On a relatively short timescale, the cell’s lifetime is then controlled by
survivalist evolution at its own spatial scale and in relation to that of its ‘ecosystem’, much
as Darwinian evolution would proceed at the level of the organism.
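In computational terms the partial (en-)closure described above is close to ordinary encapsulation; a minimal sketch (our own analogy, not a model of real membrane transport) might be:

```python
class Cell:
    def __init__(self):
        # internal 'organelles' and state - invisible to neighbouring cells
        self._energy = 100.0
        self._stress = 0.0

    # the only 'channels' through which information crosses the membrane
    def gap_junction_in(self, signal: float) -> None:
        """Accept a narrowly defined signal from an adjacent cell."""
        self._stress += signal

    def gap_junction_out(self) -> float:
        """Expose a single summary value, never the internal detail."""
        return min(1.0, self._stress / 10.0)

a, b = Cell(), Cell()
a.gap_junction_in(3.0)
b.gap_junction_in(a.gap_junction_out())   # neighbours exchange only port values
```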
Low-level Hierarchy
As soon as we start to describe ‘a whole’ consisting of ‘elements’ we are entering into the
regime of hierarchy. The basic problem of ‘how to correlate different levels’ is independent
of the number of levels as there are already two: ‘inside’ and ‘outside’ is sufficient!
Consequently, it is virtually impossible to take into consideration ‘how something works’
without invoking hierarchy and its associated advantages and difficulties. However, if there
are only a small number of levels, the ‘precision’ of definition of the assembly will be low,
as there will not be much pressure towards conformity from one inter-scalar transition
acting on another. An example of this is the difficulty in attributing a unique internal
mechanism to an entity comprising only a single level of internal operation. Current
investigations of the birth of the universe indicate that a couple of minutes after the big
bang there were no structures bigger than subatomic particles. At that point the entire
universe could be described as a very low-level hierarchy, with subatomic particles at the
peak of evolution. Although we can observe now that these same particles have very little
freedom of expression, as they can be described by a small set of quantum numbers, there is
little reason to suppose that at that time the situation was the same, and we should expect
‘primordial electrons’ to have had a wide range of different properties. At the current age of
the universe, electrons are far down the evolutionary fossil record when compared to
biological organisms. Consequently, the similarity of electrons can be attributed to
downward causal pressure, or ‘slaving’, from the multiplicity of higher-level closure cycles
which contribute to the stability of the entire structure.
Multi-level Hierarchy
The universe as we now experience it constitutes a wide range of observable or modellable
scales, from membranes, through superstrings, quarks, elementary particles, nuclei, atoms,
molecules, bio-molecules, organelles, cells… right up to organisms and societies. If the
arguments we have provided are relevant, we should find that our surroundings are stably
based on a majority of inter-scalar information-structural transport. Although in principle a
self-correlating multi-scalar hierarchy could be far from quasi-stability, our own experience
suggests that this is not the case at scales which are close to our own – if we close our eyes,
and then re-open them, the greater proportion of our surroundings remains the same.
The result of quasi-stability in a multi-level hierarchy is that the different scalar levels all
become very precisely defined, each with its own characteristic structure, rather than being
only vaguely internally self-consistent as in a low-level hierarchy. Virtually all the
imprecision and vagueness is ejected from the scalar levels themselves into the inter-scalar
regions, which become repositories of complexity. A simple example of this can be seen in
Figure 1, which shows the classical logistic plot for the reproduction of rabbits, given
limited food resources:
x_{n+1} = λ x_n (1 − x_n),
x_n = number of rabbits in year n, normalized to within 0→1,
λ = rabbit fertility,
where the different scalar solutions S1-S4 are separated by regions of deterministic chaos.
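A minimal numerical sketch of Figure 1 (with illustrative fertility values of our own choosing; only the equation itself is from the text) shows the successive stable levels and the intervening chaos directly:

```python
def logistic_attractor(lam, x0=0.5, transient=500, keep=64):
    """Iterate x_{n+1} = lam * x_n * (1 - x_n) and sample its long-term values."""
    x = x0
    for _ in range(transient):           # discard the transient behaviour
        x = lam * x * (1.0 - x)
    samples = []
    for _ in range(keep):                # sample the attractor
        x = lam * x * (1.0 - x)
        samples.append(x)
    return samples

for lam in (2.8, 3.2, 3.5, 3.9):         # illustrative fertilities
    distinct = len({round(v, 6) for v in logistic_attractor(lam)})
    print(f"λ = {lam}: about {distinct} recurring value(s)")
# 2.8 settles on one value, 3.2 on two, 3.5 on four; at 3.9 the count stays
# large - the stable levels (S1-S4 in Figure 1) are separated by chaotic regions.
```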
Transit between adjacent scalar levels of a multi-level hierarchy entails passage through an
interface which is not only complex, but which contains fractally-alternating regions of
both analog and digital complexity. The inter-scalar interfaces are archetypically complex
in the sense we referred to earlier – it is not only impossible to completely model them in a
formal manner, it is also impossible to model transit through them. It is easy to see why
this should be so. Autonomy negotiation within the interfaces depends not only on local
properties and processes, but on negotiation in all of the interfaces of the entire scalar
assembly. It is only by reference to global properties and processes that a local inter-scalar
transit can be specified.
Figure 1. The logistic plot: scalar levels S1 – S4 are separated by regions of chaos.
Hierarchical stability depends on the imposition of inter-scalar constraints. These may be
externally imposed, creating an artificial hierarchy, or they can be the result of hierarchy-wide auto-correlation of the scalar levels and their interfaces, generating a natural
hierarchy – the basis of living organisms. Figure 1 illustrates the scalar levels of an
artificial hierarchy, where the appearance of deterministic chaos depends on restriction of
the contextual dimensions. Natural hierarchy, however, has a completely different
character. Any dimensional constraints are the result of auto-correlation, and extant scalar
levels themselves are autonomously generated in such a way that local and global
properties approximately coincide within the levels. This is a major characteristic of any
natural hierarchy: extant scalar levels correspond to Newtonian potential wells. To see why
this should be the case, we must first look at how it is possible to distinguish one entity
from another and one scale from another.
Within the context of universal nonlocality, the restriction on communication speed which
results from relativity creates localization [Prigogine and Stengers, 1984; Cottam et al.,
1998a], and makes the instantaneous correlation of events in different parts of the universe
impossible. Within a multi-level hierarchy, the distinction of one scale from another
depends similarly on a restriction in communication speed, but now it is the inter-scalar
communication which is relevant. A generalized form of relativity [4] applies to any kind of
communication, not just to photonic communication. Communication between glial cells in
the brain, for example, takes place by calcium ion diffusion [Newman and Zahs, 1997],
with a timescale of hundreds of milliseconds for distances of the order of microns. Even at
these tiny velocities, relativistic differentiation is significant.
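A back-of-envelope check (using only the orders of magnitude quoted above, not measured values) makes the point:

```python
distance_m = 1e-6        # ~ a micron between neighbouring glial cells
time_s = 0.3             # ~ hundreds of milliseconds for a calcium wave
signal_speed = distance_m / time_s               # about 3e-6 m/s
speed_of_light = 3e8                             # m/s
print(f"effective signal speed ≈ {signal_speed:.1e} m/s")
print(f"≈ {speed_of_light / signal_speed:.0e} times slower than light")
# Over the diffusion timescale no cell further away than ~1 micron can yet
# 'know' the local state: restricted communication speed creates localization,
# whatever the communication channel happens to be.
```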
Naturally occurring scales develop in regions of a general phase space where local
properties are close to global ones. This provides a way to avoid the most serious
consequences of relativity, by making sure that events at different locations of the global
phase space do not later turn out to have contradicted each other, or to contradict global
constraints. Newtonian physics itself is so successful because it mirrors the local-global
correspondence of natural scales. This character is entirely absent from the artificial
hierarchy of, for example, a digital computer, where the external imposition of quasi-scalar
levels completely disengages local operations from global intentions, thus providing a
favorable environment for catastrophic crashes.
Figure 2. The generalized representation of a natural hierarchy.
[4] Note that we refer here to a generalized form of relativity, and not to Einsteinian General Relativity.
A further major characteristic of natural hierarchy derives from the quasi-autonomy of any
particular scalar level from all of the other scales. If a large organism is to survive by
successfully confronting external threats, it must be endowed with rapid response
mechanisms. The integration of a multitude of individual cells into a higher level
representation supports just such a capability, by providing a scaled set of models of the
organism in its environment which are conducive to rapid computation and stimulus-reaction.
Representing Hierarchy
There are a number of common ways of illustrating or representing hierarchy. Most of these
fall into one of two classes – either scalar hierarchy or specification hierarchy. We do not
find that either of these modes of description portrays sufficiently well the properties of a
natural hierarchy. Figure 2 illustrates a model hierarchical description [Cottam et al., 1999,
2000], which is far richer than purely scalar or specification descriptions. Each vertical line
represents a different scalar level. Every level corresponds to a complete scaled model of
the same entity, from the most detailed version on the left to the simplest one on the right.
Vertical line length indicates the amount of information which is required to completely
describe the entity and its implications at that scale.
In its most general form, the extreme left hand side corresponds to perfect nonlocality,
while the extreme right hand side corresponds to the perfect localization of formal logic.
Consequently, the left hand extreme is completely indescribable, and the right hand
extreme is completely inaccessible, corresponding as it does to a formal rationality (e.g.
Boolean logic) whose domain of operation is completely self-sufficient. Transit between
adjacent scalar levels entails passage through an interface which is not only complex, but
which contains fractally-alternating regions of both analog and digital complexity as we
indicated earlier (see the simplified representation of Figure 2).
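For readers who prefer a concrete encoding, the representation of Figure 2 might be sketched as follows (purely illustratively; the field names and numbers are our own assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class ScalarLevel:
    name: str
    information_content: float   # 'vertical line length': the information needed to
                                 # completely describe the entity at this scale

@dataclass
class ComplexInterface:
    # the inter-scalar region between two adjacent levels; not formally modellable,
    # so here it only records which levels it separates
    lower: str
    upper: str

@dataclass
class NaturalHierarchy:
    # levels ordered from most detailed (left, towards nonlocality)
    # to simplest (right, towards formal localization)
    levels: list = field(default_factory=list)
    interfaces: list = field(default_factory=list)

    def add_level(self, name: str, information_content: float) -> None:
        if self.levels:
            self.interfaces.append(ComplexInterface(self.levels[-1].name, name))
        self.levels.append(ScalarLevel(name, information_content))

organism = NaturalHierarchy()
for name, bits in [("molecules", 1e12), ("organelles", 1e9),
                   ("cells", 1e6), ("organism", 1e3)]:   # illustrative figures only
    organism.add_level(name, bits)
print([level.name for level in organism.levels])
```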
A Limit to Expansion
All natural organisms are scale-limited in size. Insects, for example, cannot be very large
because of the restricted manner in which they distribute oxygen to their internal organs –
this is evidenced by the larger-scaled insects which appear in the geological fossil record at
around the time that the Earth’s atmosphere contained a higher proportion of oxygen than it
does now. In the same way that an expanding system of primitive elements finally loses
coherence, an expanding hierarchy will ultimately suffer a similar fate. However, the
controlling factor is now not the number of similar elements, but the number of the
hierarchy’s scalar levels.
Hierarchical cohesion is directly coupled to correlation across the scalar level assembly. If
differently scaled levels of the entity are well correlated cohesion will be strong, but if there
is a great disparity between different scales the entity will be structurally weak and prone to
fragmentation. This illustrates a fundamental distinction between artificial hierarchies,
where inter-scalar constraints are externally imposed, and natural hierarchies, whose inter-
scalar constraints are the result of hierarchy-wide auto-correlation of the scalar levels and
their interfaces. It seems likely that the lack of hierarchy-wide auto-correlation in artificial
hierarchies will be the biggest block on any attempt to generate artificial intelligence or
artificial life. Any expanding information-processing structure will ultimately relativistically
‘fragment’: the central question will always be whether it can be made to generate a
coherent hierarchy, or whether it simply and uselessly splits into a number of independent
entities.
THE HIERARCHICAL MAINTENANCE OF SIMPLICITY AND COMPLEXITY
The net result of all of the auto-correlatory activity in a natural hierarchy is to maintain a
suitable balance between apparent complexity and apparent simplicity. While it is vital that
an organism has available simple models which enable it to react swiftly to external threats, it
is not necessarily the case that it should appear to be simplistic from an external viewpoint.
In an environment where predators or parasites are capable of perceptional and intelligent
information processing, it may be that apparent simplicity of reaction is taken as a sign of
weakness. Bruce Edmonds [2004] has proposed that this may be at the origin of ‘free will’,
and that an apparently ‘irrational’ response to threat may persuade the threat’s originator
that it has not sufficiently evaluated its target’s capabilities. James Gunderson [5] has found in
experiments to implement escape strategies for submarines that simply doing nothing in the
face of discovery may ward off further attack – presumably because the attacker is worried
that it has not understood what the submariners know or are capable of. This risks opening
up the entire question of ‘who defines intelligence, and from which viewpoint’, but suffice
it to say in our present context that a wide range of reactive options provides more
opportunities than a restricted one, and that the coherently multi-scaled pre-computed
scenarios which are available from a natural hierarchy are invaluable for defense and
survival.
Is a Natural Hierarchy Scalar or Not?
Auto-correlation across a natural hierarchy results not only in assuring cohesive stability of
the entity, but also in strengthening the self-consistency of individual scalar levels by
ejecting an optimal degree of complexity from them into the inter-scalar regions. In
principle, it should be possible to satisfactorily approximate process/structure interactions
within a single scalar level by a limited set of rules, corresponding to a scalar-dependent
rationality. Unfortunately, the rationality associated with each scalar level is then likely to
be specific to that level, and it will be impossible to make a simple rational transit between
scales through the intermediate complex layers. This is to be expected from the arbitrary
negotiated nature of each level’s partial (en-)closure and process closures. To that extent a
natural hierarchy is clearly scalar.
[5] Private communication.
However, closer examination reveals that hierarchy-wide inter-level auto-correlation is
itself a recognizable characteristic of natural hierarchy (and is not, therefore, associable
with artificial hierarchy). As such, the auto-correlation provides a hierarchy-wide
indication of the degree to which all the scalar levels are mutually consistent: it creates a
hyper-scalar level which appears to be a simplification of the entire assembly.
This reveals possibly the most significant aspect of natural hierarchy: it permits an
organism to simultaneously operate as a mono-scalar and a multi-scalar entity. Simple logic
is presented as required; specific complexity is accessible to match contextual
requirements; even more possibilities are available than at first appears to be the case.
GENERATING SIMPLICITY
One of the major challenges in describing how living organisms operate is to identify the
means by which the generation of simplicity is initiated. In common with the majority of
physical phenomena, even if it is relatively easy to describe a system’s trajectory once it is
in operation, it is virtually impossible to be certain about how that operation began, other
than to rely on ‘stochastic processes’. We would ideally like to be able to see a little farther
than that!
An Immune System Approach
The generation of simplicity has a character in common with the usual description of the
evolutionary development of the mammal visual system. Although its inception appears to
be dependent on the presence of some kind of ‘random’ activity as a source of variation
from which selection may be made, this would seem to be far too hit-and-miss an operation
to account for reliable directed development within a restricted timescale. Some ‘creative
force’ is obviously present, and it can apparently be relied on to ‘deliver the goods’ in a
timely manner.
Luis Arata [2004] has proposed an interesting possibility, which is linked to the
development of self-repairing systems. He suggests that an immune system can be viewed
as an imperfect problem-solving mechanism: the autonomous system detects internal
malfunctions and tries to fix them on its own, tinkering with all it has at hand. Internal
innovation happens when a new type of malfunction is fixed. The system remembers the
solution, for use at some future time in an as-yet unknown context, and on that occasion
with a quicker response to stimulus. In this sense the system has adapted, it has innovated
with respect to its previous capabilities, and learned from this action. An interesting aspect
of this process is that the cause of a new type of malfunction may be the result of the
system's interaction with a new environment - the immune response mechanism can
function as an adaptive drive. Arata’s proposed ‘tinkering’, however, again suffers from the
requirement for an as-yet undefined faster-than-stochastic creative generator.
Figure 3. (a) A number of naturally hierarchical entities within the same environment,
and (b) asynchronous auto-correlation across the hierarchy.
We propose that the superposition of a number of different natural hierarchies which use
the same database may help in solving this problem. Figure 3(a) illustrates the
superposition of a number of hierarchically-defined entities within a single unified
environment (we have left their multiple scalar levels out of the picture for clarity). In our
present case, each of these entities can represent a different multi-scaled model of
environmental problems which have already been solved. The aim is to provide a solution
to a recognizable but as-yet un-encountered problem. We suggest that this may be achieved
by one or a number of mutations of previously existing solutions.
The key to creative generation is that although natural hierarchical auto-correlation may be
strong, it will never be perfect. Differently scaled representations of an entity will never
exactly coincide – otherwise, there is only one scale, and no hierarchy. Correlation between
the various scales of a particular entity (e.g. of that shown centrally in Figure 3(a)) will take
place asynchronously between all the neighboring pairs of levels, as indicated in Figures
3(b) and 4(a). However, a small degree of non-coincidence between adjacent scales of the
particular entity and of its neighbors can now have an important effect.
Whereas complete scalar coincidence would mean that auto-correlation is relatively simple
across the entire scalar ensemble, if this is not the case there will be scales present where it
is less than obvious how to access the adjacent ones, as illustrated in Figure 4(b). The net
result can now be a mutation of local scales, by interference in the correlation path of one
entity by the scales of another (see Figure 4(c)).
We suggest that this mechanism can provide useful generation of new problem solutions by
mutation, and help to speed up the creation of new candidates for inclusion in the immune
‘toolbox’ Arata [2004] proposes.
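A highly schematic sketch of this mutation mechanism (our own reading of Figures 3 and 4, with arbitrary numbers standing in for scale representations) could be:

```python
import random

# each entity: an ordered list of scale 'representations' (here, bare numbers)
entity_a = [0.10, 0.32, 0.55, 0.78]     # an already-solved problem model
entity_b = [0.12, 0.36, 0.51, 0.80]     # an overlapping, slightly misaligned neighbour

def correlate(own, neighbour, jitter=0.05):
    """Walk up the hierarchy pairwise and asynchronously; where imperfect alignment
    makes a neighbour's scale look closer to the expected next scale than our own,
    it is picked up instead, mutating the resulting correlation path."""
    path = [own[0]]
    for i in range(1, len(own)):
        expected = own[i] + random.uniform(-jitter, jitter)   # imperfect auto-correlation
        pick = own[i] if abs(own[i] - expected) <= abs(neighbour[i] - expected) else neighbour[i]
        path.append(pick)
    return path

print(correlate(entity_a, entity_b))   # usually entity_a's own path, occasionally mutated
```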
Figure 4. (a) Auto-correlation in a perfectly aligned natural hierarchy, (b) auto-correlation in a partially misaligned natural hierarchy, and (c) auto-correlation
mutation caused by overlapping misaligned scales of two different entities.
One other source of mutation is that transit between adjacent scalar levels involves passage
through both analog and digital complex regions. It is likely that this process itself can
modify the perception of representations between the members of an auto-correlating level
pair.
Defining “What is Alive”
The traditional way of defining ‘what is alive’ is to establish a number of criteria which
could distinguish the living from the inanimate (or inplantate?), and then fix those criteria
so that they define that ‘what we know is alive, is alive’. Self-referential definitions of this
kind are of little use in progressing our understanding of ‘what life is’. Biologists have long
used the combination of metabolism and reproduction to define life, but this approach
addresses more how an entity evolved to its state of ‘living’ than whether it is alive at this
moment. It is, however, possible to derive criteria for living systems from more abstract
considerations, and we present two of these below.
Protein Bending as the Origin of Life
It is reasonable to suppose that the complexities of a living system would not survive the
absence of the complex proteins which to a great extent make up the tissue substrate for
life. Is it possible that we can use the inception of proteins as a criterion?
Starting from a presumption of the big bang, as is intrinsic to evolution of the natural
hierarchy representation we presented in Figure 2, the ‘emergence’ of new structure
initially took the form of discrete (digital) entities emerging from a continuous (analog)
background. From 10⁻⁵ seconds after the big bang protons and neutrons were formed
(localized) out of quarks, through an intervening complex interface. From 3 minutes later,
atomic nuclei were synthesized; from 300,000 years later the first atoms formed; and so on.
From this point onward, and up to the formation of bio-molecules, it is reasonable to
suppose that the vast majority of information which was coded into the created structures
could be simplistically represented by the sequence and organization of atoms with respect
to each other.
The supposition is that this pattern continued up to the generation of the first simple
proteins, but that then a new difficulty raised its head. Normally, protein development is
referred to locations in sequence space – a multi-dimensional representation of the atomic
sequences which can go to making up complicated protein structures - where possible new
sequences of atoms can be visualized to see whether a new molecule is ‘feasible’ or not.
Unfortunately, this takes no account of the difficulties presented to complicated atomic
arrangements which must position themselves in only three spatial dimensions. Even at the
current stage of biological development, sequence space is virtually empty, and there are
vast numbers of possible but as-yet unrealized protein sequences. However, a modified
form of representation which takes account of the real spatial relationships between the
parts of a molecule, namely shape space, is already nearly filled up. Andrade [2000] has
suggested that this is the reason why protein bending developed, as a way of getting round
the limitations imposed on new sequences by their spatial constraints. It is notable that
protein bending makes possible the kind of ‘key in a keyhole’ molecular alignments which
control much of organic metabolism.
The development of protein bending provided bio-systems with the dual analog-digital
coding mechanism which is the basis of evolved life. In its most simple form, this consists
of DNA as a digital representation and the organism itself as an analog derivative of it. It
seems as if the development of protein bending could indicate the genesis of life as we
recognize it.
Redefining “What is Alive”
Further consideration of the implications of natural hierarchy suggests that it is the
stabilization of highly multi-scalar entities which indicates the onset of life. This does not
contradict our previous conclusion with respect to protein bending – it supports it. A major
step in stabilizing a large hierarchy is the organized development of correlated digital-analog coding which can support inter-scalar information transit through the digital/analog
complex interfaces.
We suggest that the most significant contribution of hierarchy to sustainable life is that it
permits an organism to simultaneously operate as a mono-scalar and a multi-scalar entity.
DISCUSSION
So, is nature complex? And is its complexity distinguishable from complexity by assembly?
Responses to these questions depend, as always, on the contexts within which we decide to
ask and answer them. A vital part of any kind of modeling process is to remember that there
are always at least three participants in the narrative. These may be a natural system, a
formal system and an observer; they may be two mutually observing systems and their
interface; they may semiotically be an input object-relation, an interface mediation relation
and an output interpretant relation [Taborsky, 2005] - but there are always at least three. It
is worth remembering that Newtonian mechanics already has problems with the interaction
of three distinct entities. And it is easy to forget that a structural ‘snapshot’ of a living
system yields no information about its temporal organization, and that recourse to
simplistic description demands that all three narrative participants co-exist within one and
the same context for a model or state-description to have anything other than ephemeral
meaning.
Our own approach to modeling, as evidenced in this paper, is in accordance with the
evolution of our species, whose first technological successes were at scales related to our
human bodies, whether of space, or time, or hardness, or any other observable or
measurable phenomenon or property. Only recently have we progressed to investigation,
knowledge (or presumption) and control of more extreme scales, of carbon nanotubes, of
cosmological black holes, of terrestrial plate tectonics, of microscopic computational
devices. Is nature complex? Complexity is certainly in evidence at our own scale, and
clearly perceptible as we progress to scales lower or higher than our own, but can we justify
suggesting that there is a sense in which it is all-pervasive beyond our limited means of
observation? We believe not – we are predisposed to accept our belief that any other
conclusion would only be one of belief!
There is, however, a sense in which a Rosennean-style incompleteness-complexity enters
into every observation we make of our surroundings. With all its failings, Science has
managed to demonstrate that any measurements we make depend on the pre-assembly of
lower-scaled ‘entities’, whether these are molecules, atoms, protons, neutrons and electrons,
quarks, superstrings, … . Measurement and subsequent modeling at every stage of this
Scientific progression has thrown up predictions of as-yet unmeasured scale. Everything we
currently ‘know’ about the physical nature of our universe is portrayed as an assembly. At
every scale, relativistic communicative restriction plays its part, making it impossible to
obtain complete time-independent information about any assembly of entities.
Consequently, in at least this restricted sense, any assembly of entities will exhibit a more
or less relevant degree of incompleteness-complexity. This not only applies to large
systems, whose complexity may derive from this or other causes, it also applies to
complicated and even simple assembled systems: everything ‘is’ complex. Even at the
quantum level, Ioannes Antoniou [1995] has demonstrated that when conventional quantum
theory is extended to large systems there is a breakdown in logical completeness. The
central issue is not one of definition, it is one of relevance to a particular context: is the
incompleteness ‘a difference which makes a difference’? [6] This may appear to be something
of a quibble, as life is most certainly not associated with minimal degrees of complexity,
but it is of prime importance in judging how we decide to model living systems.
[6] “...what we mean by information... is a difference which makes a difference.” - Gregory Bateson.
Newtonian mechanics is ridiculously simplistic, but it reproduces very effectively the
tendency of nature to create simple representations of complex phenomena, because it
reproduces nature’s requirement for ‘computable’ models which exhibit approximate local-to-global correlation [Cottam et al., 2004a]. The complexity which is implicit in its
simplistic avoidance of questions of temporal completeness is not a serious shortcoming, in
so far as it applies to our human scale. The complexity of life is not obviously amenable to
generating the rapid responses an organism needs if it is to avoid succumbing to external
threats in a hostile environment. Organisms create their own simple models of both
themselves and their environments to help them survive. Our own brains implement this
strategy (as ‘fear-learning’) through an approximate-but-fast amygdalic bypass to the
detailed-but-slow cortical processing [LeDoux, 1992]. Multiple scales provide multiple
degrees of organizational complexity, and associated multiple scales of temporally
constrained entailment.
In general, successful modelling involves recognizing how a difficulty in one part of a
model can be relocated to some other part of the model where it can be more easily dealt
with. Rosen and Kineman [2005] present a reappraisal of Robert Rosen’s proposition that
the complex nature of time and its interaction with biological systems is what allows
organisms to develop ‘internal predictive models’ that are built into a living system’s
organization, and that these models involve ‘acts of abstraction’ that lie ‘outside the
dynamics of the living system’. The natural hierarchy we present in this paper instantiates
just such a kind of organization [7], and because the rational inhomogeneity of a ‘real’
hierarchy permits, or rather insists on, the partial enclosure of different scales, multi-scalar
temporal encoding is now embedded in the structural hierarchy which is characteristic of a
reactive organism’s embodiment.8 A major advantage of this reformulation is that it is then
possible to call upon modifications of already-existing theory to resolve the thorny problem
of inter-scalar transit.
A natural hierarchy is essentially birational [Cottam et al., 2004a]: the Newtonian potential
wells indicated in Figure 2 form a self-consistent auto-correlated set of scales; the
intermediate complex layers form a second one. The former Newtonian set is reductive
towards localization (on the right hand side of Figure 2); the complex set is reductive
towards nonlocality (on the left), the two yielding an interleaved complementary pair of
self-correlated mono-rational hierarchies. Each mono-rational hierarchy acts as a scaled
repository of ‘hidden variables’ [Bohm, 1987] for the other, and it seems [Cottam et al.,
2003a] that apparently a-rational inter-scalar transits may now be rationally modelled
through a generic form of quantum error correction: reformulation of the complex substrate
of life as a natural hierarchy pays great dividends!
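The term ‘quantum error correction’ may be unfamiliar; purely as background (and emphatically not as a model of inter-scalar transit), the sketch below simulates the classical three-bit repetition code, the simplest ancestor of the quantum bit-flip code: redundancy plus majority-vote decoding recovers a message corrupted by occasional flips. All names are illustrative.

    import random

    def encode(bit: int) -> list[int]:
        # Repetition code: protect one bit by tripling it.
        return [bit, bit, bit]

    def noisy_channel(codeword: list[int], p_flip: float = 0.1) -> list[int]:
        # Independently flip each bit with probability p_flip.
        return [b ^ 1 if random.random() < p_flip else b for b in codeword]

    def decode(codeword: list[int]) -> int:
        # Majority vote corrects any single bit-flip.
        return 1 if sum(codeword) >= 2 else 0

    random.seed(0)
    received = noisy_channel(encode(1))
    print(received, "->", decode(received))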
7
… although it is a moot point whether the ‘acts of abstraction’ lie outside the dynamics of the living system:
we prefer to use the word ‘dynamics’ to describe all kinds of temporally significant activities, rather than
restricting its use to a small subset of these and therefore placing the ‘acts of abstraction’ outside ‘dynamics’.
8
The developmental origin of this research programme lies in AQuARIUM [Langloh et al., 1993] - an
architecture which was intended to reproduce the multi-temporal response capabilities of organisms in an
optical computer.
CONCLUSIONS
This paper targets the relationship between simple systems and life, which immediately
raises the question “What is a simple system?”. We suggest that a simple system is a nominally stable
composition of nominally stable ‘sub-elements’, where communication across the entire
assembly can take place within a limited and complete set of rationalities9. The
maintenance of simplicity within a complex domain requires the application of external
constraints, in the manner that a causally-chaotic system may be encouraged to exhibit
deterministic chaos by restricting its degrees of freedom (e.g. for the logistic plot of Figure
1, by containing its expression to within two dimensions). In a natural hierarchy individual
Newtonian scales experience precisely this kind of externally applied en-closure, which is
imposed by adjacent scales of the same entity through their complex mutual interfaces:
complexity is a necessary condition for autonomous simplicity!
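The logistic map mentioned above is easily reproduced; the following minimal Python iteration of x(n+1) = r·x(n)·(1 - x(n)) (a generic sketch, not a reproduction of Figure 1) shows how a tightly constrained, fully deterministic rule yields either a settled fixed point or an effectively unpredictable orbit, depending only on the single control parameter r:

    def logistic_orbit(r: float, x0: float, n: int) -> list[float]:
        # Iterate the logistic map x(n+1) = r * x(n) * (1 - x(n)).
        xs = [x0]
        for _ in range(n):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    # r = 2.5 settles to a fixed point; r = 3.9 wanders chaotically,
    # although the rule itself remains simple and deterministic.
    for r in (2.5, 3.9):
        print(r, [round(x, 3) for x in logistic_orbit(r, 0.2, 8)])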
Living organisms depend on their quasi-stable simplistic-system appearances to adequately
relate to their environments, but evolution requires that their ‘sub-elements’ can mutate.
Whereas simple systems derive stability from that of their basic elements, living systems do
so by exploiting the complexity of their organization to circumvent the far less stable nature
of their constituents.
Rosen began his life’s investigations by asking the question “Why are living things alive?”.
We began from within a technological domain by asking “What is a computer?”, which,
surprisingly but (as we now see) inevitably, led us towards complexity and life. Two major
processes are required for computational survival: decision-making and comparison.
Decision-making is locally data-destructive: information is progressively compressed (and
locally thrown away) to arrive at a simple computationally-serial decision. Comparison,
however, must either be computationally-serial and non-destructive of data, which for large
systems is impossibly resource-intensive, or it must proceed in parallel, either through
quantum superposition or by a simulation of it [Langloh et al., 1992]. Biological organisms
exploit complexity to simulate parallel comparison through chaotic computation [Cottam et
al., 1998a]. Karl Pribram [2000] has suggested that this simulation is responsible for
memory storage in the complex axonite meshes which couple together neuron outputs and
inputs.
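The contrast can be made concrete with a toy Python sketch (hypothetical data and names; this is not the chaotic-computation scheme of Cottam et al. [1998a]): decision-making compresses many readings to a single outcome and discards the rest, whereas exhaustive comparison must retain and examine every pairing, whose number grows quadratically with the size of the system:

    from itertools import combinations

    readings = [0.12, 0.87, 0.55, 0.91, 0.33]

    # Decision-making: progressive, data-destructive compression to one answer.
    decision = max(range(len(readings)), key=lambda i: readings[i])
    print("act on channel", decision)  # everything else is thrown away

    # Comparison: non-destructive; every pair must be kept and examined,
    # so serial comparison scales as O(n^2) and rapidly becomes impractical.
    pair_differences = {(i, j): abs(readings[i] - readings[j])
                        for i, j in combinations(range(len(readings)), 2)}
    print(len(pair_differences), "pairwise comparisons retained")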
We have presented elsewhere a network-based relational rationalization of the commonly
presumed Descartes-style duality of body and mind [Cottam et al., 2004a]. The co-evolution of linearly-scaling direct and quadratically-scaling indirect relations in growing
networks ultimately leads to two different quasi-independent externally-interpretable
system characters. One corresponds to the normal Scientific view, which depends on
formally-rational cross-scale information transport, the other to parts of the holistic system
which are inaccessible to a ‘normal Scientific’ viewpoint, and which are associated with the
distributed nature of indirect relations. Complete representation of a system’s interactions
with its environment requires evaluation of both of these characters. We believe that this
bifurcation of system character into dual reductive and holistically-related parts, and the
difference in reductively-rational contemplative accessibility to these two characters, has
led to the conventionally presumed split between body and mind, where the body is
associated with direct ‘Scientific’ bio-systemic relations and the mind is naturally ‘difficult
to understand’ from a ‘normally Scientific’ viewpoint which presupposes that all essential
system aspects can be related to a single localized platform.
9
This is, of course, a simple model of a simple system.
We have proposed [Cottam et al., 2004a] that the dual system characters derived from direct
and indirect relationships in a large network are equivalent to those more usually associated
with formally-rational information processing: as a co-evolution:
the direct relationships lead to (reductive) ‘hardware’;
the indirect relationships lead to (holistic) ‘software’.
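The underlying scaling claim can be illustrated with a small counting sketch. On one plausible reading (ours, for illustration only, and not necessarily the exact construction of Cottam et al. [2004a]), each of n nodes in a growing network maintains a bounded number of direct links, so direct relations grow linearly, while the indirect relations between node pairs grow as n(n-1)/2, i.e. quadratically:

    def relation_counts(n: int, k: int = 2) -> tuple[int, int]:
        # Each node keeps k direct links (linear total), while potential
        # indirect node-to-node relations number n*(n-1)//2 (quadratic total).
        return n * k, n * (n - 1) // 2

    for n in (10, 100, 1000):
        direct, indirect = relation_counts(n)
        print(f"n={n:5d}  direct={direct:7d}  indirect={indirect:9d}")

As n grows, the indirect relations quickly dominate, which is consistent with the later remark that a purely reductive description risks missing the majority of the systemic character.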
The most critical facet of this bifurcation is that of relatedness to a single localized
platform, which corresponds to the instantaneous uniqueness of our human consciousness,
and to its consequent and somewhat egotistical presumption that an albeit simplified
reduction of our environment to a single viewpoint can validly represent natural
phenomena. This is arguably a far greater indictment of the conventional Scientific stance
than those presented by Rosen [see, for example, Rosen, 1991], most particularly in that it
precludes the inter-scalar closures upon which organism survival depends.
If we describe a quasi-externally viewed system solely in terms of reductively specified
interactions (i.e. in our network analysis the ‘hardware’) we unsurprisingly risk missing out
the majority of the systemic character! This conclusion mirrors in many ways Rosen’s
conclusions about living systems. However, if we choose to completely discard the
reductive system interpretation (the ‘hardware’) we end up with an abstract description
which precludes embodiment.
Relationship to the Work of Robert Rosen
As Rosen points out in Life Itself [1991, page 19], a dead organism is as inanimate as
anything; so, to the best of our knowledge, is an un-embodied organization.
We require both the reductive and holistic descriptions of a system to successfully interpret
its operation: similarly, we require both the reductive and organizational descriptions of a
living system to successfully understand its nature. A ‘state-based snapshot’ of a living
system can inform us about its architectural embodiment; “entailment without states”
[Rosen, 1991] can inform us about its more holistic organization.
We are reminded that in his ‘Note to the Reader’, at the beginning of ‘Life Itself’, Rosen
[1991, page xviii] commented that when he tried to integrate relational and structural
descriptions of the same biological systems they did not seem to want to go together
gracefully, but that they must go together, being alternate descriptions of the same systems,
the same material reality.
It appears to us that the results of Robert Rosen’s investigations and of our own are not
essentially contradictory. Starting from “Why are living things alive?”, Rosen arrived at an
appraisal related to the ‘software’ needed for a living system; starting from “What is a
computer?”, we arrive at least at the ‘hardware’ architecture for its embodiment. Rosen’s
specification for life is in terms of the closure to efficient cause of his replicating (M, R)-system
[Rosen, 1991, page 244]; our own is in terms of the self-correlatory completeness
(i.e. closure) of an inhomogeneously-birational multi-scalar natural hierarchy. A natural
hierarchy assumes functional inter-relationships and subsumes closure to efficient cause.
Rosen’s conclusions about life, however, do not appear to take into account the hierarchical
scalarity of living systems which is described in this paper, nor the tendency of scalar levels
to eject complexity into their inter-scalar regions in an attempt to improve their
computability by ‘becoming’ simple machines. Rosen [1991, page 251] points out that in a
replicating (M, R)-system every function is entailed by another function, and that apart
from initial input the environment is irrelevant, so far as entailment is concerned. While we
would agree with the former assertion that in the natural hierarchy of an organism every
function is entailed by another function, recognition that the hierarchy is birational makes
the latter claim unlikely, except when pertaining to a simple model of life, for example in
Rosen’s [1991, page 251] well-known abstract block diagram 10C.6 of the entailments of a
living system.
At every level of an organism’s hierarchy there is a locally-scaled environmental
complement to its locally-scaled description [Cottam et al., 2003b], which injects
environmental entailments into both its own and other adjacent scales. In biological terms,
it seems reasonable to attribute at least part of a cell’s efficient cause to the organism of
which it is a part, if less obviously vice versa (it is important to note that entailment
between biological scales is never unidirectional). It is difficult to see why the birational
entity-ecosystem fertilization of entailment should be restricted to taking place only within
an organism, and to be excluded from the similar entity-ecosystem relationships it
maintains with its external environment.
We conclude that auto-correlatory natural hierarchy lies at the root of the development of
life, and that its major characteristics correspond to the major requirements of living
entities. It seems most likely that the two are inseparable, and that through its inevitable
properties a natural hierarchy is alive. The attribute of logical intelligence derives from the
inter-scalar complex regions (in a way which is related to semiotic abduction), while a
more global form of intelligence, related to wisdom, derives from correlation between the
hyper-scalar levels which are generated from the hierarchy’s entire scalar assembly [Cottam
et al., 2003b].
ACKNOWLEDGEMENT
The authors wish to thank the reviewers of this paper for their many helpful suggestions in
improving the manuscript.
REFERENCES
Andrade E. 2000. From external to internal measurement: a form theory approach to
evolution. BioSystems 57: 49-62.
Antoniou I. 1995. Extension of the conventional quantum theory and logic for large
systems. Presented at the International Conference Einstein meets Magritte,
Brussels, Belgium, 29 May – 3 June.
Arata LO. 2004. Adaptive immunity and adaptive innovation. In Proceedings of
PerMIS’04: International Workshop on Performance Measures for Intelligent
Systems, NIST Special Publication 1036, Messina ER, Meystel AM (eds). NIST:
Gaithersburg, MD; 1-6.
Bohm D. 1987. Hidden variables and the implicate order. In Quantum Implications: Essays
in Honour of David Bohm, Hiley BJ, Peat FD (eds). Routledge and Kegan Paul:
London; 33-45.
Collier JD. 1999. Autonomy in anticipatory systems: significance for functionality,
intentionality and meaning. In Proceedings of CASYS'98, The Second International
Conference on Computing Anticipatory Systems, Dubois DM (ed). Springer-Verlag: New York; 75-81.
Cottam R, Ranson W, Vounckx R. 1998a. Emergence: half a quantum jump? Acta
Polytechnica Scandinavica Ma 91: 12-19.
Cottam R, Ranson W, Vounckx R. 1998b. Consciousness: the precursor to life? In The
Third German Workshop on Artificial Life: Abstracting and Synthesizing the
Principles of Living Systems, Wilke C, Altmeyer S, Martinez T (eds). Verlag Harri
Deutsch: Thun, Germany; 239-248.
Cottam R, Ranson W, Vounckx R. 1998c. Localization and nonlocality in computation. In
Information Processing in Cells and Tissues, Holcombe M, Paton RC (eds). Plenum
Press: New York; 197-202.
Cottam R, Ranson W, Vounckx R. 1999. A biologically consistent hierarchical framework
for self-referencing survivalist computation. In Computing Anticipatory Systems:
CASYS'98 - 2nd International Conference, AIP Conference Proceedings 517,
Dubois DM (ed). CHAOS: Liège, Belgium; 252-262.
Cottam R, Ranson W, Vounckx R. 2000. A diffuse biosemiotic model for cell-to-tissue
computational closure. BioSystems 55: 159-171.
Cottam R, Ranson W, Vounckx R. 2003a. Autocreative hierarchy II: dynamics – self-organization, emergence and level-changing. In International Conference on
Integration of Knowledge Intensive Multi-Agent Systems, Hexmoor H (ed). IEEE:
Piscataway, NJ; 766-773.
Cottam R, Ranson W, Vounckx R. 2003b. Sapient structures for sapient control. In
International Conference on Integration of Knowledge Intensive Multi-Agent
Systems, Hexmoor H (ed). IEEE: Piscataway, NJ; 178-182.
Cottam R, Ranson W, Vounckx R. 2004a. Autocreative hierarchy I: structure – ecosystemic
dependence and autonomy. SEED Journal 4: 24-41.
Cottam R, Ranson W, Vounckx R. 2004b. Back to the future: anatomy of a system. In
Computing Anticipatory Systems: CASYS'98 - 2nd International Conference, AIP
Conference Proceedings 718, Dubois DM (ed). CHAOS: Liège, Belgium; 160-165.
Edmonds B. 1997. Bibliography of measures of complexity.
http://bruce.edmonds.name/combib/ [30 May 2005]
Edmonds B. 2004. Implementing free will. In Visions of the Mind – Architectures for
Cognition and Affect, Davis DN (ed). Kluwer: Dordrecht, 140-156.
Kineman JJ. 2002. Organization and evolution: a 21st century Baldwinian synthesis. In
Proceedings of the 46th Annual Conference of the International Society for the
Systems Sciences, Allen J, Wilby J (eds). ISSS: Toronto; 1-10.
Langloh N, Cottam R, Vounckx R, Cornelis J. 1993. In ESPRIT Basic Research Series,
Optical Information Technology, State-of-the-Art Report, Smith SD, Neale RF
(eds). Springer-Verlag: Berlin; 303-319.
LeDoux JE. 1992. Brain mechanisms of emotion and emotional learning. Current Opinion
in Neurobiology 2: 191-197.
Newman EA, Zahs KR. 1997. Calcium waves in retinal glial cells. Science 275: 844-847.
Pribram K. 2000. Proposal for a quantum physical basis for selective learning.
Presented at the International Conference on Evolution, Complexity, Hierarchy and
Order, Odense, Denmark, 31 July – 4 August.
http://www.hum.sdu.dk/center/filosofi/emergence/Abstracts/Abstract25.PDF [30 May 2005]
Prigogine I, Stengers I. 1984. Order Out of Chaos: Man’s New Dialogue with Nature. Flamingo/Harper Collins: London.
Rosen R. 1985. Anticipatory Systems: Philosophical, Mathematical and Methodological
Foundations. Pergamon Press: New York.
Rosen R. 1991. Life Itself: A Comprehensive Inquiry into the Nature, Origin and
Fabrication of Life. Columbia UP: New York.
Rosen R. 1997. Transcript of a videotaped interview of Dr. Robert Rosen, done in July,
1997, in Rochester, New York, USA by Judith Rosen.
http://www.people.vcu.edu/~mikuleck/rsntpe.html [30 May 2003]
Rosen R. 2000. Essays on Life Itself. Columbia UP: New York; 42-44.
Rosen J, Kineman J. 2006. Anticipatory systems and time: a new look at Rosennean
complexity. Systems Research and Behavioral Science: in this volume.
Taborsky E. 2005. The Nature of the Sign as a WFF – A Well-Formed Formula. SEED
Journal 4: 5-15.