Andrew Kemendo

Synth Setup Part 1

I decided to start actually making music after 20 years of just editing and DJ'ing. I realized that in order to make the music I wanted, I needed to get a keyboard synthesizer, but I didn't know where to start.

I had a chance conversation with a semi-pro musician and he said “Just go to Guitar Center and play around with the synths.” So that's what I did.

A day later I had a $230 used microKorg.

After playing with it, I realized that in order to effectively use it to make music, I needed a sampler and a recorder. Instead of buying new hardware, I figured there were probably good software samplers, so the next step was feeding the Korg into my computer. However, modern laptops don't have sound cards you can plug a 3.5mm jack into like my old desktop did. I assumed a MIDI-to-USB controller would do the job, but that wasn't the case; it turns out you need a USB audio interface.

So I bought the entry level Focusrite Scarlett Solo.

I also re-downloaded Audacity for the millionth time, which is a great open source audio editor. The Scarlett Solo comes with some free sample packs and other extras, so I think those will be interesting to play around with.
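If you'd rather script a quick capture than open Audacity every time, a minimal sketch with the Python sounddevice and soundfile libraries (assuming both are installed and the Scarlett shows up as an input device) looks something like this:

```python
import sounddevice as sd
import soundfile as sf

# Assumes the Scarlett Solo is plugged in and its device name contains "Scarlett".
sd.default.device = "Scarlett"

samplerate, seconds = 44100, 10

# Record a 10-second stereo take from the USB audio interface.
take = sd.rec(int(seconds * samplerate), samplerate=samplerate, channels=2)
sd.wait()  # block until the recording is finished
sf.write("take.wav", take, samplerate)
```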

This is what the whole setup looks like:

setup

Artificial General Intelligence Hypothesis


Disclaimer: This hypothesis will not have the rigor appropriate for the topic, nor will I cite or support it in this format. I just need to get it down somewhere.

To qualify as AGI, a system will have to follow the process below, either faster or more accurately than humans, across most domains of human action:

Sense > Model > Plan > Act

We will achieve the goal of Artificial General Intelligence only when we can build an independent, coordinated system of systems, with measurable boundaries, that can follow this model.

Sensing: Human-level systems will only appear when sensing is as granular as human sensing. That means an AGI must achieve parity across all forms of human sensing: visual, auditory, tactile, and chemical sampling (taste, smell). Superhuman AGI would exceed human sensing capabilities into non-human sense ranges such as X-ray or nano-scale tactile sensing.

Modeling: Human-level systems will only appear when modeling is as accurate as human modeling. That means AGI must achieve parity with humans in physical modeling (navigation and spatial awareness), longitudinal modeling (change of physical systems over time, causal mapping), and conceptual modeling (social mapping, emotional mapping, ontological mapping). Superhuman AGI would exceed the specificity and granularity of a human model in each domain in which it has sensors.

Planning: Human-level systems will only appear when planning can repeatedly produce options as sustainable as those of human planning. That means an AGI can generate predictions of the state of its physical, longitudinal, and conceptual models, both with and without the AGI's input, that are as accurate as human predictions. Superhuman AGI would be able to predict a future world more accurately or more granularly than a human could.

Acting: Human-level systems will only appear when acting on the environment can be as granular as human action. That means an AGI must be able to show that its effectors can change the environment in which it acts in a way that is consistent with its planning capabilities, to the level of granularity of human actions. In simple terms, when the AGI acts, the outcomes of its actions align with the intended outcomes of its planning, based on its current model of the world. A superhuman AGI would be able to affect the environment in a more granular way than a human could, given the same tools (or tools improved by its own design).
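To make the shape of that claim concrete, here is a minimal sketch (my own illustration, not a proposal for how to build such a system) of the Sense > Model > Plan > Act loop as an interface:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Skeleton of the Sense > Model > Plan > Act loop described above."""

    @abstractmethod
    def sense(self):
        """Measure the environment (visual, auditory, tactile, chemical...)."""

    @abstractmethod
    def model(self, observations):
        """Update physical, longitudinal, and conceptual models of the world."""

    @abstractmethod
    def plan(self, world_model):
        """Predict future model states with and without the agent's input."""

    @abstractmethod
    def act(self, plan):
        """Drive effectors so outcomes match the plan's intended outcomes."""

    def step(self):
        observations = self.sense()
        world_model = self.model(observations)
        plan = self.plan(world_model)
        self.act(plan)
```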

AGI is not possible unless the AGI has direct control of its environmental sensors and effectors, free of outside influence. That means there is no such thing as an AGI with a human in the loop. Superhuman AGI would be made worse by the existence of a human in the loop, as the human would introduce less granular modeling, planning, and effecting capability than the AGI's own.

Identifying Cranks


On their own, many of these are normal things that all researchers and scientists face at some point. However, the more of them that apply, the more likely it is that somebody is a crank.


They have been working on the same problem for decades with little progress

Experts in their field ignore their work

They publish in alt-journals

They start their own journal

Their production quality is consistently low

They quote themselves

They don't have many peers who are respected in their field

Anyone successful in their field is “doing it wrong”

The successful people in their field are “just doing what they did years ago”

Nobody is quite sure how they make a living

They are the primary person promoting their own work

Lifestyle Company


A few years ago Noam Wasserman came up with the Rich vs King Concept:

https://hbr.org/2008/02/the-founders-dilemma

The simple version is: if you start a company, you need to know if you care about monetary rewards or power to control the company. In almost every case you can't have both.

Around the same time, the popularization of the “lifestyle business” appeared, in contrast to the cult of the massive-growth startup. The 4-Hour Workweek is the spirit guide for those inclined toward the lifestyle business.

Within the VC/growth-startup world, calling a company a “lifestyle business” is usually done dismissively, implying that whatever the company and its founder are doing is trivial or not worth taking seriously.

Having been around a lot of startups and founders over the past 7 years in many different places, I've found that these two ways of thinking about business, Rich vs King and Lifestyle vs Growth, highlight the major difference in how people who go into business see the world. So I came up with a new set of categories:

True Believers: These people start a company to promote an ideology. They are ideally Kings of Growth companies.

Hedonists: These people start a company to take control of their wealth. They are ideally Rich and work as little as possible.

True Believers either get huge and start massive movements, or they implode, often in spectacular fashion. They are flashy and draw a crowd. They care more about “changing the world” than getting rich. These are the home run champs like Mark McGwire or Barry Bonds.

Hedonists can typically build something that has good numbers consistently, and their failure mode is pretty low impact. They are steady and usually considered more predictable. They care more about growing their pie than about long-term social impact. These are consistent base hitters like Cal Ripken Jr. or Craig Biggio.

Obviously these are broad generalizations, but it seems like the majority of business people fall into the Hedonist category, while the majority of “startup” people fall into the True Believer category.

I think, though, that business might be the wrong place for true believers. I'm not sure where they do fit – and I'm trying to work that out because I am one – but it might not be as a company founder. At the end of the day, making money is only in service to the ideology, not an end in itself, so there is a conflict there.

Segmenting the Intelligent from the Non-Intelligent


It seems we should first do binary classification:

Is this system intelligent? YES / NO

Then we can do gradient-based segmentation within intelligence:

Where does this intelligent system lie on an evaluation scale? [Gf...Gc...Gf]

Radar Graph

This leads to the questions:

  1. What are the innumerable variables that could be measured for any intelligent system?
  2. Is there a coefficient that scores each variable as additive toward a global variable, or are variable coefficients always environmentally bounded?
  3. How do you compare two systems if they have different contextual environmental boundaries?

Maybe let's put this in plain terms:

Assuming you have a system that is deemed “intelligent,” what do you measure in order to determine that those measures correlate with the system being able to act towards an outcome (either local or global)? And further, how would you compare two such systems if they reside in different environments? This is the topic of José Hernández-Orallo's book The Measure of All Minds.

The last portion would require a global action vector, or a generalized action vector for intelligent systems. Discovering if there is in fact a generalized action vector (AKA answering the question “What is life about”) would be a significant achievement.
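To make questions 2 and 3 above concrete, here is a toy sketch in which every variable name and weight is invented. It shows why environmentally bounded coefficients make cross-environment comparison depend on some shared weighting, i.e. a generalized action vector:

```python
# Toy sketch: scoring capability measurements with environment-bounded
# coefficients. All variable names and weights are made up for illustration.

def score(measurements: dict, weights: dict) -> float:
    """Weighted sum of capability measurements under one environment's weights."""
    return sum(weights.get(k, 0.0) * v for k, v in measurements.items())

# One system, measured along a few (hypothetical) variables:
system = {"navigation": 0.8, "causal_mapping": 0.6, "social_mapping": 0.3}

# If coefficients are environmentally bounded, each environment weights
# the same variables differently...
forest_weights = {"navigation": 0.7, "causal_mapping": 0.2, "social_mapping": 0.1}
office_weights = {"navigation": 0.1, "causal_mapping": 0.3, "social_mapping": 0.6}

print(score(system, forest_weights))   # scores well in the forest
print(score(system, office_weights))   # scores poorly in the office

# ...so comparing two systems from different environments only makes sense
# if some shared weighting (a "generalized action vector") actually exists.
```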


Is intentionality of action a determinant of intelligence? If a system didn't want to do anything, how would a system that did rate in comparison?

Is the idea of a ratings system even coherent? Why would we want to rate or compare systems? We only do that to allocate resources efficiently toward some end – which brings us back to intention-based evaluation.

Everything in intelligence evaluation seems to need an action vector:

Intention > Action > Result

Why should we make the distinction between intelligent and non-intelligent systems? What purpose does it serve to segment these two things?

Last-Mile Delivery and Autonomous vehicles


I've been working in and around the furniture industry since 2015 and one of the things you learn is that having firm control of last mile delivery logistics is what will make or break a furniture company.

It's not price or fashion or anything like that, though those are important. It's really delivery logistics. The reason American Furniture Warehouse is so successful is because it's a logistics company that also sells furniture.

The idealized transport system is on-demand and can travel on surface roads. We've largely solved long-range mass travel with public transportation. Where this falls apart is between the public transit stop and the home.

I would argue that whoever can solve the last-mile problem for human logistics will win the market for autonomous vehicles.

I can imagine a “swarm” of autonomous vehicles that only operate within an X-mile radius of a metro stop. The cars in the swarm all communicate with each other and maintain a running map of the area. When in use, they go to the rider's destination within some boundary. When done, they return to the station and get in the queue. They pick up riders along the way in both directions, up to X riders per vehicle. Uber and Lyft already have enough information to know how to schedule multiple stops.

An immediate question comes up: how many vehicles need to operate within the boundary to ensure there is burst capacity and wait times stay minimal?
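As a back-of-envelope answer, you could treat the swarm as an M/M/c queue and size the fleet with an Erlang C calculation. All of the numbers below are invented for illustration, and this ignores pooling riders into the same car:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, vehicles):
    """M/M/c estimate: probability a rider waits, and mean wait (in hours)."""
    a = arrival_rate / service_rate          # offered load in "vehicle-equivalents"
    if vehicles <= a:
        return 1.0, float("inf")             # not enough vehicles to keep up
    top = (a ** vehicles / factorial(vehicles)) * (vehicles / (vehicles - a))
    bottom = sum(a ** k / factorial(k) for k in range(vehicles)) + top
    p_wait = top / bottom
    mean_wait = p_wait / (vehicles * service_rate - arrival_rate)
    return p_wait, mean_wait

# Made-up numbers: 90 riders/hour leaving the metro stop, each round trip
# occupying a vehicle for ~10 minutes, i.e. 6 trips per vehicle per hour.
arrival_rate, service_rate = 90, 6
for n in range(16, 26):
    p, w = erlang_c(arrival_rate, service_rate, n)
    print(f"{n} vehicles: P(wait)={p:.2f}, mean wait={w * 60:.1f} min")
```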

My guess is that this is what Uber and Lyft want to eventually do: Own the public transport market. Everyone wants a piece of the government market.

That's a bad idea.

Governments should be investing in creating their own public autonomous last mile human delivery systems.


Other thoughts on transportation:

If you look at human transportation like all other logistics, what you find is that every household is running a logistics operation.

Some outsource all or portions of the operation to government entities (public transport, school buses). Some have formed cooperatives (carpooling, ride sharing). The majority of households, though, run all operations in-house and outsource the maintenance.

Artificial Intelligence and Privacy are Incompatible

Robot Butler

The Robot Butler has been a trope of science fiction since we could dream of computing. Today, Siri and Alexa serve as disembodied ancestors of the future robotic personal assistants we’ve always dreamed of. And boy, do people like them. 43 million Americans own some form of dedicated virtual assistant. That’s almost 20% of US households.

Besides hands free commands such as setting a timer or checking the weather, some of the fastest growing command categories are recommendations. Things like wine pairings, recipe suggestions and movie suggestions are all easy tasks that can inform your decisions in a friendly and conversational way.

And wouldn’t you know it, the best way to increase the accuracy of personalized recommendations is to give these assistants more information about your wants, likes, needs, and behaviors. Doing so helps Amazon or Apple build a better profile that the assistant can “learn” from to serve the user better in future interactions, just like human personal assistants do. Every day, 43 million Americans provide petabytes of this private data to these companies through their personal assistants.

So maybe it’s worth thinking for a moment about what a “personal assistant” is anyway.

Putting the Person in…personal assistant

A personal assistant at their apex is a worker who has a deep understanding of every relevant attribute of their client, in order to help the client improve their effectiveness and efficiency. Let’s use two fictional examples of personal assistants:

Scene from The Devil Wears Prada Courtesy 20th Century Fox

In the movie The Devil Wears Prada, Andrea (Anne Hathaway) works as a personal assistant for Miranda (Meryl Streep) and spends every waking hour trying to meet her overly demanding needs. The amount of personal, private data that Andrea requires to predict and satisfy the needs and wants of Miranda is nearly equivalent to that of a spouse. Not only does she need to know buying preferences, but also medical needs, allergies, sexual preferences, work and sleep patterns, food likes and dislikes, family composition and personalities, location data, preferred activities, the list goes on and on. Without this information she can’t do her job successfully and the value of the relationship is diminished.

Scene from The Fresh Prince of Bel Air Courtesy Warner Brothers

In The Fresh Prince of Bel-Air, Geoffrey the butler interacts with each family member in a unique, loving, and personalized way because he has developed a personal relationship with each member over years of service. Geoffrey tailors his interactions with each person based on what he knows of them and the situation they find themselves in. He understands and internalizes their individual personalities, histories, even their secrets, and how they will respond to different styles of suggestion to make their lives easier or more productive.

In both cases, the assistant and butler have a deeply intimate relationship with their clients, one that goes well beyond anything that most people would have with anyone other than a family member. They are part of our private life.

If we expect that Amazon’s Alexa or Google’s Assistant will fulfill our personalized needs and desires the same way a human assistant would, then aren’t we required to build a similarly personal relationship with Amazon or Google? That means sharing our preferences and behaviors with them is a required step to create the feedback loops and deep understanding necessary to provide personalized value. Can you trust Amazon or Google with the same information you would give Geoffrey the butler? Should you bring Google into your private life?

You can Trust Us

The number one disqualifier when looking for a personal assistant, butler, maid, plumber, babysitter — really any job, come to think of it — is a lack of trust. If you think a housekeeper will steal your jewels or your babysitter will ignore your child, you’re not going to hire them.

It’s not unreasonable to assume that a big part of why Amazon is leading the smart home market is that Amazon is the second most trusted brand in the US. People trust Amazon already, so it’s easy for Amazon to build a stronger relationship with them. However, this assumes that users are aware of the data they are giving away and how it is being used, and have made a calculated decision on how much personal information to share based on their level of trust in Amazon. Probably a bad assumption.

It’s unknown to what extent users understand how much data they are giving away every day, so it’s hard to know whether they are wittingly trusting companies with their data or are unwitting participants in the data game. How Amazon uses your data is all there if you want to read it; however, most people don’t. Even if you did read it, it’s not exactly clear what is happening with your data, so most people either go with gut feel or simply don’t think about it at all.

The European Union’s General Data Protection Regulation is taking a hard stance on this, with the goal of having companies explicitly show users exactly how their data is being used, but I’m not convinced it will have the effect they intend. They want to force the question of trust into the spotlight, and that might work, but just like in couples therapy, you can only force the conversation so much before one side loses trust.

So trust is really what it comes down to. Can you trust a major company with your most intimate secrets? Chances are you probably already are.

Long term relationship with AI

Whether you know it or not, you are creating this personal private relationship with companies just by virtue of living in the modern world. Just search for “[Company] tracking/spying” and you’ll find hundreds of articles expressing concerns about privacy and how much data a company is collecting from users.

This behavior is not restricted to the “smart” tech corporations by the way. Even if you don’t have a smartphone, your phone carrier knows (generally) where you are at all times, because knowing your location is part of how cell service works. The Safeway/Kroger discount card you use, or the mileage rewards program you enroll in, or any other “loyalty” program you’ve had for two decades, all of these are trying to do one thing:

Predict your preferences and behaviors so that they can put the coupon/sale/product/content that matches your preferences in front of you at the right time and right place.

The Early Days of big data

The difference between now and 20 years ago is that users provide orders of magnitude more specific and persistent data in an easily digestible way. With advances over the last decade in Machine Learning and blended prediction systems, companies can process this data and get more and more accurate behavioral predictions. For example, YouTube made their recommendation system dramatically better using Deep Neural Networks. Amazon open-sourced their DSSTNE engine (pronounced “destiny”), which uses Deep Neural Networks to build better recommendations from user behaviors. Both of these are in the category of “AI,” if you aren’t familiar.
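Neither production system is public in full detail, but the basic shape — turning behavioral history into a preference vector and matching it against items — can be sketched in a few lines. This is my own toy illustration with made-up embeddings, not either company’s method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration only: 5 items with made-up 4-dimensional "taste" embeddings.
item_embeddings = rng.normal(size=(5, 4))

# A user's profile is just the average embedding of items they interacted with.
watched = [0, 2]
user_profile = item_embeddings[watched].mean(axis=0)

# Recommend the unwatched item whose embedding best matches the profile.
scores = item_embeddings @ user_profile
scores[watched] = -np.inf
print("recommended item:", int(scores.argmax()))

# The point: the more behavior ("watched") the user hands over, the better
# the profile, and therefore the recommendation, gets.
```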

Better tools and predictions create better services for users, more tailored product offerings, more accurate recommendations, and more efficient markets that keep you coming back. As more companies use increasingly precise behavioral models to predict user actions with “AI,” the deeper these relationships will get. Organizations will seek to build ever more personal relationships with their users because they want to serve your preferences.

By default the recommendation systems that people are responding positively to require your personal input to be successful. Just like personal assistants do.

Wait, I don’t want to have a relationship with AI!

Actually, you probably do.

Remember that 20% of American households that literally have an always-listening, voice-based assistant in their home? They don’t keep those because they are forced to; they do so because the assistants really do provide value that people want.

Anytime you check in for your flight on your phone, search for something on Google, like a post on Facebook, create a board on Pinterest, rate a product on Amazon, or leave a review on Yelp, you are dancing with the system, leading with your left while the system follows with its right. You are giving them your private preferences for an increasingly tailored service. Your behavioral data is used to tailor the product to your wants, to personalize your experience, which theoretically keeps you happy and using it — or at least addicted to it, the sinister downside of giving people what they want. This is, however, what people have been asking for: better personalization. Congratulations, you’re in it.

I will make a prediction of my own: more deeply intertwined AI-based services will increase the decision-making capabilities of users over the next several decades to such an extent that it will be considered irresponsible not to use them.

Ok, fine then we’ll do it offline!

I can hear you now: “Well fine, I value my privacy but I also agree that these services are beneficial. So we will just make systems that never touch Amazon, Google or Facebook or whatever mega corporation services. We’ll own our own data and I’ll just keep all my private data locally in my own home and have my open source Smart Speaker totally off the grid. Or maybe just send out relevant data where necessary. I’ll build my own DDPG based Deep Learning systems and teach it everything it needs to know!”

But will you? Will 100 Million Americans put the effort in to tweak and modify their own systems? Will 3 billion users worldwide? Is it feasible to think you can build a “smart” enough network with strictly compartmentalized data just from one person?

The recent lament about how we “lost our way” with the internet is the latest proof that humans are great at consolidating power, so why would this time be any different? The cyber-utopian fantasy of egalitarian connectivity is, in my estimation, low probability, and I think it’s irresponsible to move forward with that as a thesis.

Practically speaking, robust information networks don’t work that way; they need to exchange information up and down to become better, faster, and more accurate. I’m not just talking about IT here; I’m talking about plant root systems, dolphin pods, and migratory bird flocks. All nodes share information up the branches to make the system stronger, more responsive, and more efficient. Whoever owns the most connected nodes has the power. We should recognize that as a natural law and build our social systems to compensate for it.

So how should we as individuals choose to enter personal relationships with organizations that provide services to us in exchange for our private data? Who should we trust?

The first step is to recognize what these relationships look like: what data is being shared, how it is being used, and how we can mutually benefit from the relationship as both users and creators without blowing the whole thing up.

Second, we should acknowledge that you cannot simultaneously have systems that adapt to users’ behaviors while also keeping those behaviors out of reach.

Third, we need to study the cost vs benefit for these systems. My gut tells me that the tradeoff is net positive, but we need more evidence to show that. This is especially important for engineers and designers as we have a duty to provide value and not just extract value.


Reposted from my original Medium post here: https://medium.com/@andrewkemendo/artificial-intelligence-and-privacy-are-incompatible-5375035f15c0

Relative Complexity and importance of Systems within Evolved Organizations


In any organization of systems that has adapted and survived as a result of selective pressure, can we assume that some systems are more complex or took more time to develop than others during co-evolution?

Is the modern human eyeball more or less complex, as a system, than the modern Central Nervous System? Which took longer to evolve, the modern eyeball or the modern Central Nervous System? Did their co-evolution make them inseparable to the point where the question is intractable?

If the goal is to replicate the function of an evolved system through planned engineering, and the sub-systems are going to be built in a decentralized manner, under the above assumption we can derive that some systems will be easier to build than others.

Should we then assume that the more complex systems will be more critical to the functioning of the whole system? Does it follow that the systems with a higher cost to develop are more important?

Intelligent Systems


A system, given defined physical boundaries, could be considered Intelligent if it does two things:

Sense: Accurately measures its environment

Manipulate: Physically changes the orientation of the system and objects in its environment

You can test for intelligence by asking: is the system sensing, and is it manipulating its environment? Of note, only Manipulate is directly observable to an outside observer. We can only infer Sense via system manipulation latency, e.g. reflex tests.

The more precisely a system does these two things, the more “intelligent” it can be considered.

Within these measures, complexity seems to scale at a greater-than-linear rate. For example, Sense includes undirected exploration, Manipulate includes abstraction with tool use, and so on. Some measures are more easily observable than others.

A third system seems to be required for increasingly precise intelligence:

Model: Maintain an accurate longitudinal representation of sense measurements

It is unclear how to measure the existence of this sub-system, as it's not directly observable. This system could also be called “memory.” Much has been written about attempts to tie physical structures within intelligent systems to the abstracted concept of memory.

This is insufficient to fully describe an intelligent system, as the criteria by which the system optimizes its manipulation are not built into the model. It's necessary to define a vector of manipulation criteria in the context of the model. Said another way, the system must determine the appropriate manipulation action given the ability to Sense, Model, and Manipulate. Hence the need to:

Plan: Generation of a future model state

It is unclear how to measure the existence of this sub-system, as it's not directly observable. It's also unclear through which process intelligent systems generate future model states, and what the coupling is between manipulation criteria and Planning, e.g. what influences the proportion of planning that requires the system to manipulate the environment versus planning independent of any manipulation.

Stated in a solipsistic way: Planning intends to compare what the future world looks like without your input, versus what the future world looks like with your input.

The criteria for biasing planning to inform manipulation criteria still remains unclear.

I contend that intelligent systems manipulate their environment with the purpose of reducing uncertainty in future model states. However, this is unsubstantiated.
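One way to make that contention concrete (purely illustrative; the distributions below are invented) is to score candidate manipulations by how much they shrink the entropy of the predicted future model state:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a predicted-state distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Predicted distribution over future model states if the system does nothing...
no_action = [0.25, 0.25, 0.25, 0.25]

# ...versus predicted distributions if it takes one of two candidate actions.
candidate_actions = {
    "probe":  [0.70, 0.10, 0.10, 0.10],   # a manipulation that resolves uncertainty
    "wander": [0.30, 0.30, 0.20, 0.20],   # a manipulation that barely helps
}

baseline = entropy(no_action)
for name, predicted in candidate_actions.items():
    print(name, "reduces uncertainty by", round(baseline - entropy(predicted), 2), "bits")
```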

Finally, I contend that the meta-comparison of the precision with which a system can manipulate its model of the environment, via a precise catalog of its sensors and manipulators, in conjunction with the ability to explicate the biasing criteria for planning, is what we would consider consciousness.