
Artificial General Intelligence Hypothesis

March 17 2019


Disclaimer: This hypothesis is not going to have the rigor that is appropriate for this topic, nor will I cite or support it in this format. I just need to get it down somewhere.

A system will have to follow the process below, either faster or more accurately than humans, across most domains of human action, in order to qualify as AGI:
Sense > Model > Plan > Act
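
For concreteness, here is a minimal sketch of that loop in Python. The Sensor, WorldModel, Planner, and Effector objects are hypothetical placeholders for whatever subsystems would implement each stage, not claims about how those stages should actually be built:

    # A minimal sketch of the Sense > Model > Plan > Act loop.
    # Sensor, Model, Planner, and Effector are hypothetical
    # placeholders, not claims about real implementations.

    class Agent:
        def __init__(self, sensor, model, planner, effector):
            self.sensor = sensor      # Sense: gather observations
            self.model = model        # Model: maintain a world-state estimate
            self.planner = planner    # Plan: predict outcomes, choose actions
            self.effector = effector  # Act: change the environment

        def step(self):
            observation = self.sensor.sense()
            state = self.model.update(observation)
            action = self.planner.plan(state)
            self.effector.act(action)

        def run(self):
            while True:  # repeat across whatever domains the agent operates in
                self.step()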

We will achieve the goal of Artificial General Intelligence only when we can build an independent, coordinated system of systems, with measurable boundaries, that can follow this model.

Sensing:

Human-level systems will only appear when sensing is as granular as human sensing. That means an AGI must achieve parity across all forms of human sensing: visual, auditory, tactile, and chemical sampling (taste, smell). Superhuman AGI would exceed human sensing capabilities into non-human sense ranges such as X-ray imaging and nano-scale tactile sensing.

Modeling:

Human-level systems will only appear when modeling is as accurate as human modeling. That means an AGI must achieve parity with humans across physical modeling (navigation and spatial awareness), longitudinal modeling (change of physical systems over time, causal mapping), and conceptual modeling (social mapping, emotional mapping, ontological mapping). Superhuman AGI would exceed the specificity and granularity of a human model in each domain in which it has sensors.
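
To make that three-way decomposition concrete, here is a hypothetical sketch of how such a world model might be structured; the three fields mirror the three modeling domains above, and their types are placeholders rather than proposals:

    # Hypothetical decomposition of the world model into the three
    # modeling domains named above; the field types are placeholders.

    from dataclasses import dataclass

    @dataclass
    class WorldModel:
        physical: object      # navigation and spatial awareness
        longitudinal: object  # change of systems over time, causal maps
        conceptual: object    # social, emotional, and ontological maps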

Planning:

Human-level systems will only appear when planning can repeatedly produce options as viable as those of human planning. That means an AGI can create predictions of the state of its physical, longitudinal, and conceptual models, both with and without the AGI's input, that are as accurate as human predictions. Superhuman AGI would be able to predict a future world more accurately or more granularly than a human could.
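
One way to read the "with and without the AGI's input" requirement is as a counterfactual rollout: predict the future state of the models under a planned sequence of actions, and again under no action at all, then compare. A sketch, where model.predict(state, action) is an assumed forward-prediction method rather than a real API, and action None stands for "the AGI does nothing":

    # Hypothetical counterfactual-rollout sketch. model.predict() is an
    # assumed forward-prediction method; None means "no agent input".

    def rollout(model, state, plan):
        for action in plan:
            state = model.predict(state, action)
        return state

    def predicted_effect(model, state, plan):
        # Compare the predicted world with the AGI's input against the
        # predicted world without it.
        with_input = rollout(model, state, plan)
        without_input = rollout(model, state, [None] * len(plan))
        return with_input, without_input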

Acting:

Human-level systems will only appear when acting on the environment can be as granular as human actions. That means an AGI must be able to show that its effectors can change the environment in which it acts in a way that is consistent with its planning capabilities, to the level of granularity of human actions. In simple terms, this means that when the AGI acts, the outcomes of its actions align with the intended outcomes of its planning, based on its current model of the world. A Superhuman AGI would be able to affect the environment in a more granular way than a human could, given the same tools (or tools improved by its own design).
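
In code terms, that alignment criterion amounts to a closed-loop check: predict the outcome of an action, execute it, re-sense, and measure the gap. A sketch under the same hypothetical interfaces as above, where distance() is a stand-in for a domain-specific error metric:

    # Hypothetical act-and-verify sketch: the observed outcome of an
    # action should match what was predicted from the current model.

    def act_and_verify(agent, state, action, distance, tolerance):
        intended = agent.model.predict(state, action)        # planned outcome
        agent.effector.act(action)                           # change the world
        observed = agent.model.update(agent.sensor.sense())  # re-sense
        return distance(intended, observed) <= tolerance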

AGI is not possible unless the AGI has direct control of its environmental sensors and effectors, free of outside influence. That means there is no such thing as an AGI with a human in the loop. Superhuman AGI would be made worse by the existence of a human in the loop, as a human would introduce less granular modeling, planning, and acting capabilities than the AGI's own.

Copyright (c) 2020 Andrew Kemendo