
Background

The hypothesis here is that intelligent systems use Predictive Modeling to test agent behavior in a "virtual" internal environment before the chosen actions are sent to the effectors. Models increase in granularity and predictive ability the longer they gather observations and the more simulations they run. The internal modeling and simulation system, however, appears to be one of the most interesting aspects here: these stochastic simulation generators are constantly running and providing policy outputs.
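Below is a minimal sketch, in Python, of the kind of always-running stochastic simulation loop described above. All names (WorldModel, simulate, policy_output) and the toy objective are illustrative assumptions, not part of the original design.

import random

# Sketch of a stochastic simulation generator that keeps rolling an internal
# model forward and emits a policy output (the best-scoring candidate action).

class WorldModel:
    """Internal model: predicts the next state given the current state and action."""
    def predict(self, state, action):
        # Placeholder dynamics; a real system would learn these from observations.
        return state + action + random.gauss(0.0, 0.1)

def simulate(model, state, action, horizon=10):
    """Roll the model forward under one action and score the predicted outcome."""
    total = 0.0
    for _ in range(horizon):
        state = model.predict(state, action)
        total += -abs(state)  # toy objective: keep the state near zero
    return total

def policy_output(model, state, candidate_actions, n_rollouts=20):
    """Average stochastic rollouts per candidate action and return the best one."""
    def score(action):
        return sum(simulate(model, state, action) for _ in range(n_rollouts)) / n_rollouts
    return max(candidate_actions, key=score)

# Continuous loop: observe, simulate internally, send the chosen action to effectors.
model, state = WorldModel(), 1.0
for step in range(5):
    action = policy_output(model, state, candidate_actions=[-0.5, 0.0, 0.5])
    state = model.predict(state, action)  # stand-in for acting and re-sensing
    print(step, action, round(state, 3))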

This approach claims that some minimum set of elements [sensor, modeler, effector] of an intelligent system [human/octopus/tree/etc...] is required to generate the functions or capabilities observed that would be considered "intelligent", and rejects perspectives that suggest a decomposed system [brain in a vat] could be considered intelligent. Further, this approach does not intend to model biological systems; rather it intends to build systems with equivalent objective performance, irrespective of the means by which that performance is achieved.

This system, if effective, will not answer questions of what "consciousness", "soul", or "ethics" are, but it would likely inform such concepts by demonstrating material origins.

The biological inspiration here is a thought experiment: if you sliced the spinal cord, the optic nerve, or any nerve within the PNS or CNS in half and just measured the voltage/information coming out, you would see a continuous stream produced by a largely uniform process throughout - though certainly with significant variation in density/amount etc... This leads to my general assumption that the system should be uniform in mathematical composition, not a hybrid approach (despite SOTA results of hybrid systems such as the MCTS + DL combination in AlphaGo).

The goal of such a system is to be predictive about the future state of any input stream, within the set of data stream constraints. E.g. the predictions would only be relevant for the same streams that are coming into the system. The prediction should be generalizable in the sense that modeling and prediction can take as input anything that can be serialized. It is unknown whether prediction of a stream transfers to new inputs after training; however, a bootstrapping function could likely take advantage of previous stream characteristics. In theory, the more streams that are input, the better any individual stream prediction would be - but this has not yet been verified.
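As a concrete illustration of "anything that can be serialized" and of bootstrapping a new stream from previous stream characteristics, here is a minimal sketch in Python. The order-1 byte-context model and the names (StreamPredictor, observe, predict_next) are assumptions for illustration only, not the actual mechanism proposed.

from collections import defaultdict, Counter

# Sketch of a generic stream predictor: any input serialized to bytes can be
# observed, and the next symbol is predicted from an order-1 context model.

class StreamPredictor:
    def __init__(self, prior=None):
        self.counts = defaultdict(Counter)  # counts[context][next_byte] -> frequency
        if prior:
            for ctx, c in prior.items():
                self.counts[ctx].update(c)  # bootstrap from another stream's statistics
        self.last = None

    def observe(self, data: bytes):
        for b in data:
            if self.last is not None:
                self.counts[self.last][b] += 1
            self.last = b

    def predict_next(self):
        """Most likely next byte given the current context, or None if unseen."""
        dist = self.counts.get(self.last)
        return max(dist, key=dist.get) if dist else None

# Train on one stream, then bootstrap a second stream with the first one's counts.
s1 = StreamPredictor()
s1.observe(b"abcabcabcab")
print(s1.predict_next())               # predicts ord('c') after the trailing 'b'

s2 = StreamPredictor(prior=s1.counts)  # new stream starts from s1's characteristics
s2.observe(b"ab")
print(s2.predict_next())               # also predicts ord('c'), before seeing any 'c' itself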

Attributes of an idealized stream inference system:

Example of a streaming prediction:



Streaming input > Model Prediction
Sensors, primarily light sensors, stream a 3D representation of this set of blocks falling to the rest of the system. The agent may or may not be paying full attention to this, but if the stream is coming into the receptor, attention will likely be paid to it over other, static activities in the environment. Depending on the age of the agent, the model will predict when the blocks will hit the table/ground and also predict (anticipate) that there will be a sound and roughly what that sound will sound like.
Despite not being able to communicate a precise estimate/prediction of when the blocks would hit the table**, an agent could be expected to give a rough estimate of where the blocks may fall, proven by interceding with some effector that is fast enough to "catch" a falling block using the future physical state estimate. The agent has some concept of what the next state will be.
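A minimal sketch, in Python, of the kind of future-state estimate the "catch" relies on: given a sensed height and velocity, predict when and where a block reaches the table plane so an effector could be sent to intercept it. The constant-gravity ballistic model and all names are illustrative assumptions, not part of the original text.

import math

G = 9.81  # gravitational acceleration, m/s^2

def time_to_impact(height_m, v_down=0.0):
    """Time until a block at height_m above the table reaches it (downward positive)."""
    # Solve height = v_down*t + 0.5*G*t^2 for t, taking the positive root.
    return (-v_down + math.sqrt(v_down**2 + 2 * G * height_m)) / G

def landing_point(x, y, vx, vy, height_m, v_down=0.0):
    """Predicted (x, y) on the table plane and the time of impact."""
    t = time_to_impact(height_m, v_down)
    return x + vx * t, y + vy * t, t

x_hit, y_hit, t = landing_point(x=0.2, y=0.0, vx=0.1, vy=0.0, height_m=1.0)
print(f"catch near ({x_hit:.2f}, {y_hit:.2f}) m in {t:.2f} s")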

Hypotheses about emergent attributes

MISC [*]

v0.1 - Updated 1/20/2021
2024 revisit: No further revisions coming - this direction was integrated into my operational learning architecture, which is not public
Copyright (c) 2020 Andrew Kemendo