

Let's say you wanted to build a learning system. Where would you start?

1. You need an environment to start with, something with rules or laws. If you want the system to do something on Earth, the rules of the environment need to match the rules of Earth. If you plan on it operating somewhere else, or across places with multiple rule sets, then the environment needs to be able to encapsulate those variations.

2. You need a programmable agent with a physical representation. No ghosts allowed (I don't think). That means something with sensors that can receive instructions and effectors that can give you feedback.
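The two requirements above can be sketched as a toy interaction loop. This is a minimal illustration under my own assumptions, not anything from a real framework: the class and method names (Environment, Agent, step, sense, act) are invented for the sketch, and the only "law of physics" is that existing costs fuel.

```python
class Environment:
    """Requirement #1: a world with rules. Here the only law is that
    the agent's fuel drains every tick, a stand-in for real physics."""
    def __init__(self, fuel=5):
        self.fuel = fuel

    def step(self, action):
        if action == "eat":
            self.fuel += 2      # acting on the world can replenish fuel
        self.fuel -= 1          # the rule: existing costs energy
        return self.fuel        # the reading fed back to the agent


class Agent:
    """Requirement #2: a programmable agent that receives readings
    via a sensor and emits actions via effectors."""
    def sense(self, reading):
        self.last_reading = reading

    def act(self):
        # Even this toy agent spends most of its time on "boring"
        # existential upkeep: getting fuel.
        return "eat" if self.last_reading < 3 else "rest"


env = Environment()
agent = Agent()
agent.sense(env.fuel)
for _ in range(10):
    action = agent.act()
    agent.sense(env.step(action))
```

Note that the loop settles into eating roughly every other tick just to stay alive, which is the point of the argument that follows.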

If you look at an environment like MuJoCo, [1] it provides something resembling #1. However, it's an abstraction, not something that translates into the real world. The same goes for pretty much any other "AI" model: these are abstract computing machines, not agents in the world. Abstracting away most of the hard parts of an agent and focusing on the high level misses so much that you're back to the brain-in-a-jar problem. Questions like "how do I get fuel?" are fundamental existential things that humans spend a significant portion of their lives trying to figure out.

Go out and make an agent, and the majority of what it does is eat and sleep. Whoops, you just made a baby. And babies are pretty useless for the first 10 or so years.

The computer hardware and OS abstract away power switching and memory, the physics engine abstracts away a real environment, and the programmed agent abstracts away the real-world agent. All of these abstractions are just distractions. People working on algorithms want to tackle the "hard problems" while ignoring the "boring" ones, which actually turn out to be the hardest ones after all.


Copyright (c) 2020 Andrew Kemendo