Andrew Kemendo

Environment Model Alignment

If we assume that humans sense-model-plan-act, then it would seem safe to assume that where the senses receive the same input, e.g. situations where people experience the same actions or events, the result should be similar models. In fact that often doesn't seem to be the case.

There are 7 billion different models of how the world works. How much similarity is there between them? Perhaps what fills in the gaps is what matters. I think the concept of noble lies, filling in truly unknown or unknowable questions so that there is commonality, is important.

Maybe there is such a thing as a whole-person intelligence test: a maze, mechanical puzzles, eliciting information, active listening, making a song, recreating an image, describing a fundamental concept. How would you make it generalizable or scalable? The longer it goes, the more granular it gets, and maybe the more intelligence it reveals?

The Global Crisis of Meaning


It seems as though the world is going through a slowly evolving existential crisis.

By the end of World War II, the European West had been effectively razed and set back to infancy. The Nazi terror and the subsequent war had so effectively blanched any liberal idealism from the collective European West that it was left at the philosophical starting blocks.

Let's consider this the “birth” of the modern philosophical world.

The head start that the American West had coming out of WWII, as the named victor, ensured that American philosophy became the philosophical “north star” for the western world, if only because of the vacuum left in the wake of the war. American-style vocal liberal democracy and the extreme fetishization of markets – with a protestant bent and casual racism thrown in as a mixer – became the notary stamp of progress.

Communism, the philosophical counterbalance of the post-WWII period, was in retrospect a competitor primarily in name. While communism (in reality Soviet allyism) was nominally spreading, liberal trade and democratic institutions remained the low noise floor of how people actually behaved in nearly all of these communist countries.

The 60s and 70s served as something of a pre-teen period of philosophical chaos, with an active and loud general questioning of authority and challenging of fundamental assumptions about the structure of the world. Technological progress – driven by the classic liberal philosophical concept of democratizing knowledge – helped catalyze these challenges, but in the end drove the majority of people to consumerism, reinforcing those fundamental roots of extreme markets and liberal democratic ideals.

The largely unrestrained economic growth in the West from the 70s until the late 2000s unwittingly promoted the philosophy of consumerism riding on top of extreme markets and choose-your-own-adventure liberal democracy. The dissolution of the Soviet Union only served to stamp the one remaining philosophical competitor with the western philosophical notary seal.

The consumerism train started going off the tracks in the early 2000s, however. Pushback on consumption as a habit, and mounting evidence of consumer-driven anthropogenic climate change, grew from a whisper to a chorus. The Great Recession in 2007 officially ended the consumerism party, and by 2016 the US elections and Brexit typified the death of the “spend more, live better” philosophy.

The Philosophical north stars of the past have vanished

“Country First”: Nationalism is Nazism
“God First”: God is long dead
“Spend and Prosper”: Consumption will lead to collapse

Collectively, we are ridding ourselves of the leaps of faith that were the north stars of our communities – not that everyone always believed them, but at one point we all at least played nicely in systems that assumed those truths were the underlying bedrock. The nobility of monarchs is a wink and a nod, deity-appointed rulers are tongue in cheek at best, etc.

We've continued to whittle away at the mysteries of the world, and the collective hallucinations we shared to keep communion and community are becoming widely recognized for what they are – as baseless in fundamental truths as anything else. We're slowly working our way down the abstraction levels, asking the basal ganglia where we should find our philosophy.

We as humans still need some north star, and it seems like “happiness” is trying to take the lead. The idea of the “balanced” life seems to reflect this: a diversity of activities and goals that keeps the brain engaged but not obsessed, alternating between challenge and comfort so as not to push the system into over-extension. Anxiety is the enemy, and we prove it by pointing to how it reduces longevity. Optimize the work-rest cycle to extend the exploration phase of life and maximize the dopamine pumps.

We fetishize parenting because we have no other fundamental north star to guide our lives. The biological response to family makes it impossible to prove hollow, because it feels important – you can't disprove that you feel a certain way about your children – it's just there and you can taste it. In that sense, it's no different than hedonism.

There is no further purpose than for the feeling of peace and harmony that is unique to communion with others – and most powerfully with an offspring. It's as empty as any other – but feels fulfilling. This philosophy sits pretty firmly at the monkey brain level of abstraction – the lowest common denominator as a singular vector. I'd worry though that it simply leads to hedonism – but maybe that's ok.

Which brings us to Camus. We have a choice, at the end of the day, to find meaning. You can escape the absurdity of the lack of objective meaning by removing yourself from the equation: suicide. However, we must take it head on – though I'm still working out why this is necessary – something about turtles, I'm sure. In which case you must either take a leap of faith, or you must choose what to make your meaning about.

My singular vector is collective understanding – helping us wiggle toward a conscious universe. But what's behind that? Unknowable, but that's the choice I make.

Synth Setup Part 1

I decided to start actually making music after 20 years of just editing and DJ'ing. I realized that in order to make the music I wanted, I needed to get a keyboard synthesizer, but I didn't know where to start.

I had a chance conversation with a semi-pro musician and he said “Just go to Guitar Center and play around with the synths.” So that's what I did.

A day later I had a $230 used microKorg.

In order to use this effectively to make music, I realized after playing with it that I needed a sampler and recorder. Instead of buying new hardware, I figured there were probably good software samplers, so the next step was to feed my Korg into my computer. However, modern laptops don't have sound cards you can plug a 3.5mm cable into like my old desktop did. I assumed a MIDI-out-to-USB controller would do the job, but that wasn't the case; it turns out you need a USB audio interface.

So I bought the entry level Focusrite Scarlett Solo.

I also re-downloaded Audacity for the millionth time, which is a great open-source audio editor. The Scarlett Solo comes with some free sample packs and other extras, so I think those will be interesting to play around with.

This is what the whole setup looks like:

[Image: the full setup]

Artificial General Intelligence Hypothesis


Disclaimer: This hypothesis is not going to have the rigor that is appropriate for this topic nor will I cite or support it in this format. I just need to get it down somewhere.

A system will have to follow the below process, either faster or more accurately than humans, across most domains of human action, in order to qualify as AGI:

Sense > Model > Plan > Act

We will achieve the goal of Artificial General Intelligence only when we can build an independent coordinated system of systems, with measurable boundaries that can follow this model.

Sensing: Human Level systems will only appear when sensing is as granular as human sensing. That means an AGI must achieve parity along all forms of human sensing: visual, auditory, tactile, and chemical sampling (taste, smell). Superhuman AGI would exceed human sensing capabilities into non-human sense ranges such as X-Ray, nano-scale tactile etc...

Modeling: Human Level systems will only appear when modeling is as accurate as human modeling. That means AGI must achieve parity with humans along physical modeling (navigation and spatial awareness), longitudinal modeling (change of physical systems over time, causal mapping) and conceptual modeling (social mapping, emotional mapping, ontological mapping). Superhuman AGI would exceed the specificity and granularity of a human model in each domain that it has sensors.

Planning: Human Level systems will only appear when planning can repeatedly produce options as sustainable as those of human planning. That means an AGI can create predictions of the state of its physical, longitudinal and conceptual models, both with and without the AGI's input, that are as accurate as human predictions. Superhuman AGI would be able to predict a future world more accurately or more granularly than a human could.

Acting: Human Level systems will only appear when acting on the environment can be as granular as human actions. That means an AGI must be able to show that its effectors can change the environment in which it acts in a way that is consistent with its planning capabilities, to the level of granularity of human actions. In simple terms this means that when the AGI acts, the outcomes of its actions align with the intended outcome of its planning, based on its current model of the world. A Superhuman AGI would be able to affect the environment in a more granular way than a human could, given the same (or improved, based on its own design) tools.

AGI is not possible unless the AGI has direct control of its environmental sensors and effectors, free of outside influence. That means there is no such thing as an AGI with a human in the loop. Superhuman AGI would be made worse by the existence of a human in the loop, as a human would introduce less granular modeling, planning and effecting capability than the AGI would.
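A minimal sketch of the loop in code, under the assumption that each capability can be isolated behind an interface (every name here is my own illustration, not any existing system's API):

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Skeleton of the Sense > Model > Plan > Act loop described above.
    Purely illustrative; the class and method names are assumptions."""

    @abstractmethod
    def sense(self):
        """Gather raw observations (visual, auditory, tactile, chemical)."""

    @abstractmethod
    def model(self, observations):
        """Update the internal world model: physical, longitudinal, conceptual."""

    @abstractmethod
    def plan(self, world_model):
        """Predict future states with and without the agent's input; choose an action."""

    @abstractmethod
    def act(self, action):
        """Drive effectors so the outcome aligns with the plan's intended outcome."""

    def step(self):
        observations = self.sense()
        world_model = self.model(observations)
        action = self.plan(world_model)
        self.act(action)
```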

Identifying Cranks


On their own, many of these are normal things that all researchers and scientists face at some point. However, the more of them that apply, the more likely it is that somebody is a crank.


They have been working on the same problem for decades with little progress

Experts in their field ignore their work

They publish in alt-journals

They start their own journal

Their production quality is consistently low

They quote themselves

They don't have many peers who are respected in their field

Anyone successful in their field is “doing it wrong”

The successful people in their field are “just doing what they did years ago”

Nobody is quite sure how they make a living

They are the primary person promoting their own work
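To make the “more of them apply” idea concrete, here is a toy additive score (my own sketch, not a validated metric; the uniform weighting is an assumption):

```python
CRANK_INDICATORS = [
    "decades on the same problem with little progress",
    "experts in the field ignore their work",
    "publishes in alt-journals",
    "started their own journal",
    "consistently low production quality",
    "quotes themselves",
    "few respected peers in the field",
    "anyone successful is 'doing it wrong'",
    "successful people are 'just doing what they did years ago'",
    "nobody is sure how they make a living",
    "primary promoter of their own work",
]

def crank_score(applies: set) -> float:
    # Fraction of indicators that apply: higher means more likely a crank,
    # though any single indicator on its own is normal for many researchers.
    return len(applies & set(CRANK_INDICATORS)) / len(CRANK_INDICATORS)

# Example: two indicators apply -> score ~0.18, probably not a crank yet
print(crank_score({"quotes themselves", "publishes in alt-journals"}))
```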

Lifestyle Company


A few years ago Noam Wasserman came up with the Rich vs King Concept:

https://hbr.org/2008/02/the-founders-dilemma

The simple version is: if you start a company, you need to know if you care about monetary rewards or power to control the company. In almost every case you can't have both.

Around the same time, the popularization of the “lifestyle business” appeared, in contrast to the cult of the massive-growth startup. The 4-Hour Workweek is the spirit guide to those who are inclined toward the lifestyle business.

Within the VC/growth startup world, calling a company a “lifestyle business” is usually done dismissively, implying that whatever the company and its founder are doing is trivial or not worth taking seriously.

Having been around a lot of startups and founders over the past 7 years in many different places, these two ways of thinking about business, Rich vs King and Lifestyle vs Growth, highlight the major difference in how people who go into business see the world. So I came up with a new set of categories:

True Believers: These people start a company to promote an Ideology. They are ideally Kings of Growth Companies.

Hedonists: These people start a company to take control of their wealth. They are ideally Rich and work as little as possible.

True Believers either get huge and start massive movements, or they implode, often in spectacular fashion. They are flashy and draw a crowd. They care more about “changing the world” than getting rich. These are the home run champs like Mark McGwire or Barry Bonds.

Hedonists typically can build something that has good numbers consistently, and their failure mode is pretty low impact. They are steady and usually considered more predictable. They care more about growing their pie than long term social impact. These are consistent base hitters like Cal Ripken Jr or Craig Biggio.

Obviously these are broad generalizations, but it seems like the majority of business people fall into the Hedonist category, while the majority of “startup” people fall into the True Believer category.

I think, though, that business might be the wrong place for true believers. I'm not sure where they do fit – and I'm trying to work that out because I am one – but it might not be as a company founder. At the end of the day, making money is only in service to the ideology, not an end in itself, so there is a conflict there.

Segmenting the Intelligent from the Non-Intelligent


It seems we should first do binary classification:

Is this system intelligent? YES / NO

Then we can do [gradient based segmentation] within intelligence:

Where does this intelligent system lie on an evaluation scale? [Gf...Gc...Gf]

[Image: radar graph]
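As a rough sketch of that two-step evaluation, assuming we can score a system along a handful of hand-picked dimensions (the dimension names and the threshold below are my own placeholders, not an established taxonomy):

```python
from dataclasses import dataclass

@dataclass
class IntelligenceProfile:
    is_intelligent: bool   # step 1: binary classification
    scores: dict           # step 2: per-dimension scores for the radar graph

def evaluate(measurements: dict, threshold: float = 0.1) -> IntelligenceProfile:
    """measurements maps dimension name -> score in [0, 1].
    Dimensions (spatial, causal, social, ...) are illustrative placeholders."""
    # Step 1: binary gate - does the system register on any dimension at all?
    intelligent = any(score > threshold for score in measurements.values())
    # Step 2: if it passes, keep the full radar of per-dimension scores
    return IntelligenceProfile(intelligent, measurements if intelligent else {})

# Example with made-up scores
print(evaluate({"spatial": 0.6, "causal": 0.4, "social": 0.05}))
```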

So we lead to the questions:

  1. What are the innumerable variables that could be measured for any intelligent system?
  2. Is there a coefficient that scores each variable as additive toward a global variable or are variable coefficients always environmentally bounded?
  3. How do you compare two systems if they have different contextual environmental boundaries?

Maybe let's put this in plain terms:

Assuming you have a system that is deemed “intelligent,” what do you measure in order to determine that the measures are correlated with the system being able to act towards an outcome (either local or global), and further how would you compare these systems if they reside in different environments? This is the topic of Jose Hernandez-Orallo's book: The Measure of All Minds

The last portion would require a global action vector, or a generalized action vector for intelligent systems. Discovering if there is in fact a generalized action vector (AKA answering the question “What is life about”) would be a significant achievement.


Is intentionality of action a determinant of intelligence? If a system didn't want to do anything, how would a system that did rate in comparison?

Is the idea of a ratings system even coherent? Why would we want to rate or compare systems? We only do that to allocate resources to be efficient to some ends – back to the intention based evaluation.

Everything in intelligence evaluation seems to need an action vector:

Intention > Action > Result

Why should we make the distinction between intelligent and non-intelligent systems? What purpose does it serve to segment these two things?

Last-Mile Delivery and Autonomous vehicles


I've been working in and around the furniture industry since 2015 and one of the things you learn is that having firm control of last mile delivery logistics is what will make or break a furniture company.

It's not price or fashion or anything like that, though those are important. It's really delivery logistics. The reason American Furniture Warehouse is so successful is because it's a logistics company that also sells furniture.

The idealized transport system is on-demand and can travel on surface roads. We've largely solved long-range mass travel with public transportation. Where this falls apart is between the public transit stop and the home.

I would argue that whoever can solve the last mile problem for human logistics will win the market for autonomous vehicles.

I can imagine a “swarm” of autonomous vehicles that only operate within an X-mile radius of a metro stop. The cars in the swarm all communicate with each other and maintain a running map of the area. If they are being used, they go to the rider's destination within some boundary. When done, they return to the station and get in queue. They pick up riders along the way in both directions, up to X riders. Uber and Lyft already have enough information to know how to schedule multiple stops.

A question comes up: how many vehicles are required to operate within the boundary to ensure there is burst capacity and wait times stay minimal?
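One crude way to get at that number is a back-of-the-envelope capacity estimate; the figures in the example are invented purely for illustration:

```python
import math

def vehicles_needed(arrivals_per_hour: float,
                    avg_trip_minutes: float,
                    riders_per_vehicle: int,
                    burst_factor: float = 1.5) -> int:
    """Rough fleet-size estimate for one metro-stop boundary.
    burst_factor adds headroom for peak demand so wait times stay low."""
    trips_per_hour_per_vehicle = 60.0 / avg_trip_minutes
    riders_served_per_vehicle = trips_per_hour_per_vehicle * riders_per_vehicle
    return math.ceil(burst_factor * arrivals_per_hour / riders_served_per_vehicle)

# Example (invented numbers): 120 riders/hour, 20-minute round trips,
# 4 riders pooled per trip -> ceil(1.5 * 120 / 12) = 15 vehicles
print(vehicles_needed(120, 20, 4))
```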

My guess is that this is what Uber and Lyft want to eventually do: Own the public transport market. Everyone wants a piece of the government market.

That's a bad idea.

Governments should be investing in creating their own public autonomous last mile human delivery systems.


Other thoughts on transportation:

If you look at human transportation like all other logistics, what you find is that every household is running a logistics operation.

Some outsource all or portions of the operation to government entities (public transport, school buses). Some have formed cooperatives (carpooling, ride sharing). The majority of households, though, run all operations in-house and outsource the maintenance.

Artificial Intelligence and Privacy are Incompatible

Robot Butler

The Robot Butler has been a trope of science fiction since we could dream of computing. Today, Siri and Alexa serve as disembodied ancestors to the future robotic personal assistants we’ve always dreamed of. And boy do people like them. 43 million Americans own some form of dedicated virtual assistant. That’s almost 20% of US households.

Besides hands free commands such as setting a timer or checking the weather, some of the fastest growing command categories are recommendations. Things like wine pairings, recipe suggestions and movie suggestions are all easy tasks that can inform your decisions in a friendly and conversational way.

And wouldn’t you know it, the best way to increase the accuracy of personalized recommendations is to give these assistants more information about your wants, likes, needs and behaviors. Doing so helps Amazon or Apple build a better profile that they can “learn” from to serve the user better in future interactions, just like human personal assistants do. Every day, 43 million Americans provide petabytes of this private data to these companies through their personal assistants.

So maybe it’s worth thinking for a moment about what a “personal assistant” is anyway.

Putting the Person in…personal assistant

A personal assistant at their apex is a worker who has a deep understanding of every relevant attribute of their client, in order to help the client improve their effectiveness and efficiency. Let's use two fictional examples of personal assistants:

Scene from The Devil Wears Prada Courtesy 20th Century Fox

In the movie The Devil Wears Prada, Andrea (Anne Hathaway) works as a personal assistant for Miranda (Meryl Streep) and spends every waking hour trying to meet her overly demanding needs. The amount of personal, private data that Andrea requires to predict and satisfy the needs and wants of Miranda is nearly equivalent to that of a spouse. Not only does she need to know buying preferences, but also medical needs, allergies, sexual preferences, work and sleep patterns, food likes and dislikes, family composition and personalities, location data, preferred activities, the list goes on and on. Without this information she can’t do her job successfully and the value of the relationship is diminished.

Scene from The Fresh Prince of Bel Air Courtesy Warner Brothers

In the Fresh Prince of Bel Air, Geoffrey the butler interacts with each family member in a unique, loving and personalized way because he has developed a personal relationship with each member over years of service. Geoffrey tailors his interactions with each person based on what he knows of them and the situation they find themselves in. He understands and internalizes their individual personalities, histories, even their secrets, and how they will respond to different styles of suggestion to make their lives easier or more productive.

In both cases, the assistant and butler have a deeply intimate relationship with their clients, one that goes well beyond anything that most people would have with anyone other than a family member. They are part of our private life.

If we expect that Amazon’s Alexa or Google’s Assistant will fulfill our personalized needs and desires in the same way a human assistant would, then aren’t we required to build a similarly personal relationship with Amazon or Google? That means sharing our preferences and behaviors with them is a required step to create the feedback loops and deep understanding necessary to provide personalized value. Can you trust Amazon or Google with the same information you would give Geoffrey the butler? Should you bring Google into your private life?

You can Trust Us

The number one disqualifier when looking for a personal assistant, butler, maid, plumber, babysitter — really any job, come to think of it — is a lack of Trust. If you think a housekeeper will steal your jewels or your babysitter will ignore your child, you’re not going to hire them.

It’s not unreasonable to assume that a big part of why Amazon is leading in the smart home market is that Amazon is the second most trusted brand in the US. People trust Amazon already, so it’s easy to build a stronger relationship with them. However, this assumes that users are aware of the data they are giving away and how it is being used, and have made a calculated decision about how much personal information to share based on their level of trust in Amazon. Probably a bad assumption.

It’s unknown to what extent users understand how much data they are giving away everyday, so it’s hard to know if they are wittingly trusting companies with their data or if they are unwitting participants in the data game. How Amazon uses your data is all there if you want to read it, however most people don’t read it. Even if you did read it, it’s not exactly clear what is happening with your data, so most people either go with gut feel, or simply don’t think about it at all.

The European Union’s General Data Protection Regulation is taking a hard stance on this, with the goal of having companies explicitly show users exactly how their data is being used, but I’m not convinced this will have the effect they intend. They want to force the question of trust into the spotlight, and that might work, but just like couples therapy, you can only force the conversation so much before one side loses trust.

So trust is really what it comes down to. Can you trust a major company with your most intimate secrets? Chances are you probably already are.

Long term relationship with AI

Whether you know it or not, you are creating this personal private relationship with companies just by virtue of living in the modern world. Just search for “[Company] tracking/spying” and you’ll find hundreds of articles expressing concerns about privacy and how much data a company is collecting from users.

This behavior is not restricted to the “smart” tech corporations by the way. Even if you don’t have a smartphone, your phone carrier knows (generally) where you are at all times, because knowing your location is part of how cell service works. The Safeway/Kroger discount card you use, or the mileage rewards program you enroll in, or any other “loyalty” program you’ve had for two decades, all of these are trying to do one thing:

Predict your preferences and behaviors so that they can put the coupon/sale/product/content that matches your preferences in front of you at the right time and right place.

The Early Days of big data

The difference between now and 20 years ago is that users provide orders of magnitude more specific and persistent data in an easily digestible way. With advances over the last decade in Machine Learning and blended prediction systems, companies can process this data and get more and more accurate behavioral predictions. For example, YouTube made their recommendation system substantially better using Deep Neural Networks. Amazon published their DSSTNE engine (pronounced “Destiny”), which uses Deep Neural Networks to build better recommendations from user behaviors. Both of these are in the category of “AI,” if you aren’t familiar.

Better tools and predictions create better services for users, more tailored product offerings, more accurate recommendations and more efficient markets that keep you coming back. As more companies use increasingly precise behavioral models to predict user actions with “AI,” the deeper these relationships will get. Organizations will seek to build ever more personal relationships with their users because they want to serve your preferences.

By default the recommendation systems that people are responding positively to require your personal input to be successful. Just like personal assistants do.
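To make that feedback loop concrete, here is a minimal user-based collaborative filtering sketch; the ratings matrix is invented, and real systems like YouTube's or Amazon's are far more sophisticated than this:

```python
import numpy as np

# Rows = users, columns = items; entries are past behavior (e.g. ratings), 0 = unseen.
# Made-up data: the system can only recommend because users supplied these signals.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 0],
    [0, 1, 5, 4],
], dtype=float)

def recommend(user: int, k: int = 1) -> list:
    # Cosine similarity between the target user and every other user
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0
    # Score items by similarity-weighted ratings from the other users
    scores = sims @ ratings
    scores[ratings[user] > 0] = -np.inf  # never re-recommend what the user already saw
    return list(np.argsort(scores)[::-1][:k])

print(recommend(0))  # item 2 is the only item user 0 hasn't interacted with
```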

Wait, I don’t want to have a relationship with AI!

Actually, you probably do.

Remember the 20% of American households that literally have an always-listening, voice-based assistant in their home? They don’t keep those because they are forced to; they do so because the assistants really do provide value that people want.

Anytime you check in for your flight on your phone, search for something on Google, like a post on Facebook, create a board on Pinterest, rate a product on Amazon or leave a review on Yelp, you are dancing with the system, leading with your left while the system follows with its right. You are giving them your private preferences in exchange for an increasingly tailored service. Your behavioral data is used to tailor the product to your wants, to personalize your experience, which theoretically keeps you happy and using it — or at least addicted to it, the sinister downside of giving people what they want. This is, however, what people have been asking for: better personalization. Congratulations, you’re in it.

I will make a prediction of my own: more deeply intertwined AI based services will increase the decision making capabilities for users over the next several decades to such an extent, that it will be considered irresponsible not to use them.

Ok, fine then we’ll do it offline!

I can hear you now: “Well fine, I value my privacy but I also agree that these services are beneficial. So we will just make systems that never touch Amazon, Google or Facebook or whatever mega corporation services. We’ll own our own data and I’ll just keep all my private data locally in my own home and have my open source Smart Speaker totally off the grid. Or maybe just send out relevant data where necessary. I’ll build my own DDPG based Deep Learning systems and teach it everything it needs to know!”

But will you? Will 100 Million Americans put the effort in to tweak and modify their own systems? Will 3 billion users worldwide? Is it feasible to think you can build a “smart” enough network with strictly compartmentalized data just from one person?

The recent lament of how we “lost our way” with the internet is the latest proof that humans are great at consolidating power, so why would this time be any different? The cyber-utopian fantasy of egalitarian connectivity is, in my estimation, low probability, and I think it’s irresponsible to move forward with it as a thesis.

Practically speaking, robust information networks don’t work that way; they need to exchange information up and down to become better, faster and more accurate. I’m not just talking IT here, I’m talking about plant root systems, dolphin pods and migratory bird flocks. All nodes share information up the branches to make the system stronger, more responsive and more efficient. Whoever owns the most connected nodes has the power. We should recognize that as a natural law and build our social systems to compensate for it.

So how should we as individuals choose to enter personal relationships with organizations that provide services to us in exchange for our private data? Who should we trust?

The first step is to recognize what these relationships look like: what data is being shared, how it is being used, and how we can mutually benefit from the relationship as both users and creators without blowing the whole thing up.

Second, we should acknowledge that you cannot simultaneously have systems which adapt to users' behaviors while also keeping those behaviors out of reach.

Third, we need to study the cost vs benefit for these systems. My gut tells me that the tradeoff is net positive, but we need more evidence to show that. This is especially important for engineers and designers as we have a duty to provide value and not just extract value.


Reposted from my original Medium post here: https://medium.com/@andrewkemendo/artificial-intelligence-and-privacy-are-incompatible-5375035f15c0

Relative Complexity and importance of Systems within Evolved Organizations


In any organization of systems that has adapted and survived as a result of selective pressure, can we assume that some systems are more complex or took more time to develop than others during co-evolution?

Is the modern human eyeball more or less complex, as a system, than the modern Central Nervous System? Which took longer to evolve, the modern eyeball or the modern Central Nervous System? Did their co-evolution make them inseparable to the point where the question is intractable?

If the goal is to replicate the function of an evolved system through planned engineering, and the sub-systems are going to be built in a decentralized manner, under the above assumption we can derive that some systems will be easier to build than others.

Should we then assume that the more complex systems will be more critical to the functioning of the whole system? Does it follow that the systems with a higher cost to develop are more important?