Andrew Kemendo

Relative Complexity and Importance of Systems within Evolved Organizations


In any organization of systems that has adapted and survived as a result of selective pressure, can we assume that some systems are more complex or took more time to develop than others during co-evolution?

Is the modern human eyeball more or less complex, as a system, than the modern Central Nervous System? Which took longer to evolve, the modern eyeball or the modern Central Nervous System? Did their co-evolution make them inseparable to the point where the question is intractable?

If the goal is to replicate the function of an evolved system through planned engineering, and the sub-systems are to be built in a decentralized manner, then under the above assumption it follows that some sub-systems will be easier to build than others.

Should we then assume that the more complex systems will be more critical to the functioning of the whole system? Does it follow that the systems with a higher cost to develop are more important?

Intelligent Systems


A system, given defined physical boundaries, could be considered Intelligent if it does two things:

Sense: Accurately measures its environment

Manipulate: Physically changes the orientation of the system and objects in its environment

You can test for intelligence by asking: is the system sensing, and is it manipulating, its environment? Of note, only Manipulate is directly observable to an outside observer. We can only infer Sense via the latency of the system's manipulations, e.g., reflex tests.

The more precisely a system does these two things, the more “intelligent” it can be considered.

Within these measures, complexity seems to scale at a greater-than-linear rate; e.g., Sense includes undirected exploration, and Manipulate includes abstraction such as tool use. Some measures are more easily observable than others.
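To make the two-part definition concrete, here is a minimal sketch framing Sense and Manipulate as an abstract interface. The names and types are illustrative placeholders, not a specification:

```python
# Illustrative sketch: an intelligent system as something that senses
# and manipulates. Only manipulate() has externally observable effects;
# sense() must be inferred, e.g., from manipulation latency.
from abc import ABC, abstractmethod
from typing import Any


class IntelligentSystem(ABC):
    @abstractmethod
    def sense(self) -> Any:
        """Accurately measure the environment (not directly observable)."""

    @abstractmethod
    def manipulate(self, action: Any) -> None:
        """Physically change the configuration of the system or its environment."""
```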

A third system seems to be required for increasingly precise intelligence:

Model: Maintains an accurate longitudinal representation of sense measurements

It is unclear how to measure the existence of this sub-system, as it's not directly observable. This system could also be called “memory.” Much has been written about attempts to tie physical structures within intelligent systems to the abstracted concept of memory.
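As a sketch only, assuming that a "longitudinal representation" can be approximated by a bounded, time-ordered history of measurements, the Model sub-system might look like the following. The mean-based summary is purely illustrative, not a claim about how memory actually works:

```python
# Illustrative sketch: "Model" as a bounded, time-ordered record of
# sense measurements ("memory"). The running-mean representation is an
# assumption for illustration only.
from collections import deque
from statistics import fmean


class Model:
    def __init__(self, capacity: int = 1000):
        self.history = deque(maxlen=capacity)  # longitudinal record

    def update(self, timestamp: float, measurement: float) -> None:
        self.history.append((timestamp, measurement))

    def representation(self) -> float:
        # One crude longitudinal representation: the mean of stored measurements.
        return fmean(m for _, m in self.history)
```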

This is insufficient to fully describe an intelligent system, as the criteria by which the system optimizes its manipulations are not built into the model. It's necessary to define a vector of manipulation criteria in the context of the model. Said another way, the system must determine the appropriate manipulation action given the ability to Sense, Model, and Manipulate. Hence the need to:

Plan: Generates a future model state

It is unclear how to measure the existence of this sub-system, as it's not directly observable. It's also unclear through which process intelligent systems generate future model states, and what the coupling is between manipulation criteria and Planning; e.g., what influences the proportion of planning that requires the system to manipulate the environment, versus planning that proceeds without any manipulation of the environment.

Stated in a solipsistic way: planning compares what the future world looks like without your input versus what it looks like with your input.
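A sketch of that comparison, assuming a forward model predict(state, action) and a value function exist; the null action standing in for "without your input" is likewise an assumption:

```python
# Illustrative sketch: planning as comparing a predicted future without
# your input (a null action) against predicted futures with it. The
# forward model `predict` and value function `utility` are assumed.
from typing import Callable

State = float
Action = float
NULL_ACTION: Action = 0.0  # stand-in for "no input"


def plan(state: State,
         predict: Callable[[State, Action], State],
         utility: Callable[[State], float],
         candidates: list[Action]) -> Action:
    baseline = utility(predict(state, NULL_ACTION))  # future without your input
    # Keep the null action unless some candidate improves on the baseline.
    best, best_gain = NULL_ACTION, 0.0
    for action in candidates:
        gain = utility(predict(state, action)) - baseline  # future with your input
        if gain > best_gain:
            best, best_gain = action, gain
    return best
```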

The criteria for biasing planning so as to inform manipulation still remain unclear.

I contend that Intelligent systems manipulate their environment for the purpose of reducing uncertainty in future model states. However, this is unsubstantiated.
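If the contention held, action selection might prefer manipulations whose simulated future model states show the least spread. A sketch, assuming a stochastic forward model simulate(state, action) that returns sampled futures:

```python
# Illustrative sketch of the contention: pick the manipulation whose
# sampled future model states have the lowest variance, i.e., the least
# uncertainty. `simulate` is an assumed stochastic forward model.
from statistics import pvariance
from typing import Callable


def least_uncertain_action(state: float,
                           candidates: list[float],
                           simulate: Callable[[float, float], list[float]]) -> float:
    return min(candidates, key=lambda a: pvariance(simulate(state, a)))
```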

Finally, I contend that the meta-comparison of the precision with which a system can manipulate the model of the environment, via a precise catalog of its sensors and manipulators, in conjunction with the ability to explicate the biasing criteria for planning, would be what we consider consciousness.