A system, given defined physical boundaries, could be considered Intelligent if it does two things:
Sense: Accurately measure its environment
Manipulate: Physically change the orientation of the system and of objects in its environment
You can test for intelligence by asking: is the system sensing, and is it manipulating, its environment? Of note, only Manipulate is directly observable to an outside observer. Sense can only be inferred from the latency of the system's manipulation, e.g. reflex tests.
The more precisely a system does these two things, the more “intelligent” it can be considered.
Within these measures, complexity seems to scale at a greater-than-linear rate: Sense includes undirected exploration, Manipulate includes abstraction through tool use, and so on. Some measures are more easily observable than others.
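The two-part test above can be sketched as a minimal interface. This is an illustration of the Sense/Manipulate framing only; the class and function names are my own, and the reflex test stands in for the latency-based inference of Sense described above.

```python
import time
from abc import ABC, abstractmethod


class IntelligentSystem(ABC):
    """A system with defined physical boundaries, tested on two capacities."""

    @abstractmethod
    def sense(self) -> dict:
        """Measure the environment. Not directly observable from outside."""

    @abstractmethod
    def manipulate(self, action: str) -> None:
        """Physically change the system's orientation or objects around it."""


def reflex_latency(system: IntelligentSystem) -> float:
    """Infer Sense indirectly: time the stimulus-to-manipulation interval,
    as in a reflex test. Only the manipulation itself is observable."""
    t0 = time.monotonic()
    system.sense()
    system.manipulate("respond")
    return time.monotonic() - t0
```

An outside observer running `reflex_latency` never sees the return value of `sense()`; they only measure how quickly a manipulation follows a stimulus, which is the point made above.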
A third sub-system seems to be required for increasingly precise intelligence:
Model: Maintain an accurate longitudinal representation of sense measurements
It is unclear how to measure the existence of this sub-system, as it's not directly observable. This system could also be called “memory.” Much has been written about attempts to tie physical structures within intelligent systems to the abstracted concept of memory.
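One way to make "longitudinal representation of sense measurements" concrete is a bounded, timestamped log. This is a toy sketch under my own assumptions (a dict per measurement, a fixed capacity standing in for finite physical memory), not a claim about how intelligent systems actually implement memory.

```python
import time
from collections import deque


class Model:
    """Longitudinal memory: an ordered record of past sense measurements."""

    def __init__(self, capacity: int = 1000):
        # Bounded history: old measurements are evicted, like finite memory.
        self.history: deque = deque(maxlen=capacity)

    def record(self, measurement: dict) -> None:
        """Append a timestamped sense measurement to the record."""
        self.history.append((time.monotonic(), measurement))

    def latest(self) -> dict:
        """The current best estimate of the environment's state."""
        return self.history[-1][1] if self.history else {}
```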
This is still insufficient to fully describe an intelligent system, because the criteria by which the system optimizes manipulation are not built into the model. It is necessary to define a vector of manipulation criteria in the context of the model. Said another way, the system must determine the appropriate manipulation action given its ability to Sense, Model and Manipulate. Hence the need for a fourth sub-system:
Plan: Generate a future model state
It is unclear how to measure the existence of this sub-system, as it's not directly observable. It is also unclear by what process intelligent systems generate future model states, and how tightly manipulation criteria are coupled to Planning, e.g. what determines the proportion of planning that requires the system to manipulate the environment, versus planning that proceeds independently of any manipulation.
Stated in a solipsistic way: Planning compares what the future world looks like without your input against what the future world looks like with your input.
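That comparison can be sketched directly: roll the model forward once with no intervention and once with a candidate action, then diff the two predicted states. The `rollout` function here is a hypothetical predictor I'm assuming for illustration; nothing above specifies how such predictions are generated.

```python
def compare_futures(state: dict, action, rollout) -> dict:
    """Planning as comparison: the predicted future with vs. without input.

    rollout(state, action) -> a predicted future model state (a dict).
    action=None models the world evolving without the system's intervention.
    Returns the keys where the action made a difference, as (without, with).
    """
    without = rollout(state, None)
    with_action = rollout(state, action)
    return {k: (without.get(k), with_action[k])
            for k in with_action if with_action[k] != without.get(k)}
```

A trivial rollout that only moves a box when pushed yields `{"box": ("still", "moved")}` for the push action: the diff is exactly "what changes because of your input."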
The criteria for biasing planning to inform manipulation criteria still remain unclear.
I contend that intelligent systems manipulate their environment with the purpose of reducing uncertainty in future model states. However, this is unsubstantiated.
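The contention can at least be stated precisely: action selection that minimizes the predicted uncertainty (here, Shannon entropy) of the future model state. This is a toy sketch, assuming a `predict` function that maps a model state and action to a discrete probability distribution over outcomes; it formalizes the claim, it does not substantiate it.

```python
import math


def entropy(dist: dict) -> float:
    """Shannon entropy (bits) of a discrete outcome distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)


def choose_action(state, actions, predict):
    """Pick the action whose predicted future model state is least uncertain.

    predict(state, action) -> dict mapping outcomes to probabilities.
    """
    return min(actions, key=lambda a: entropy(predict(state, a)))
```

Under this rule, an informative action ("look", yielding a 0.9/0.1 outcome split, ~0.47 bits) is preferred over an uninformative one ("wait", a 0.5/0.5 split, 1 bit), which matches the intuition that manipulation is in the service of reducing uncertainty.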
Finally, I contend that what we consider consciousness is this meta-comparison: the precision with which a system can manipulate its model of the environment, via a precise catalog of its own sensors and manipulators, in conjunction with the ability to explicate the biasing criteria for its planning.