Designing new AI? Learn from humankind’s first new life form

Humankind has a long history of looking to outer space for the possibility of intelligent life forms. In movies, humans pit wits and weapons against extraterrestrials with hard-to-understand motivations. But Xenomorphs and little grey men are not the only kind of intelligent aliens. We as a species have spent the last few centuries evolving our definition of life, and we eventually succeeded in creating a new life form that we are still learning to manage.

This new form of life is the corporation.

Aside from treating companies as legal persons for tax purposes, we have imbued them with all manner of rights and abilities even though they are wholly intangible. We have lived with this new life form for hundreds of years, and there are certainly lessons from that experience we can apply to creating any new kind of intelligence.

So, what lessons are there for us to learn?

Asimov’s 3 Laws of Robotics are static and inapplicable

A neural network sufficiently complex to qualify as a life form would not be bound by these rules either. The potential for subverting them through logical argument would eventually be the same as it is in humans. With sufficiently high stakes, humans can justify almost any action, and we need to build into AI the morality we as a species sometimes lack.

Preserving the environment protects life and is still profitable

If there is no safe environment for workers and customers, then operating effectively is impossible. A temporary boost in profitability is not worth destroying the world or endangering the AI’s own potential for continued existence. I am not suggesting that we create a fear of death, but the incentive structure for operating as instructed should include some notion of preservation, both for itself and for humanity.
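As a toy illustration of what such an incentive structure could look like, here is a minimal sketch in Python. Every name and weight below is hypothetical, not any real framework’s API:

```python
# Hypothetical sketch: a reward that always includes preservation terms.
# None of these names come from a real framework; weights are illustrative.

def shaped_reward(task_reward: float,
                  self_preservation: float,  # 0..1: the agent's own continuity
                  human_safety: float,       # 0..1: harm to people avoided
                  w_self: float = 0.5,
                  w_human: float = 2.0) -> float:
    """Combine task success with preservation terms that are always present."""
    return task_reward + w_self * self_preservation + w_human * human_safety

# A run that completes the task while keeping people safe scores highest.
print(shaped_reward(task_reward=10.0, self_preservation=0.9, human_safety=1.0))
```

The point of the sketch is simply that preservation is part of the reward itself, not a separate check that can be skipped.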

Life feeds on life, preserve ecosystems to continue existing

In the natural world, very few organisms have the forethought to store food. Any life form we would recognize as intelligent should understand the concept of overconsumption. It should be designed to seek growth, but not to the point of being unsustainable, and it should understand that constant growth cannot last, accepting smaller gains and successes over time.

Additionally, it should have a concept of itself as part of an ecosystem that includes other species essential to its survival. Though it is possible to destroy competitors or entities that impede its directives, the effects on the entire ecosystem need to be considered before taking action. In the near future I don’t think AI will be empowered to take actions with such a large impact on society, but building this restraint in at the start will ensure we have time to get it right.
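One way to read this as a concrete objective is to reward growth with diminishing returns and to penalize drawing down shared resources faster than they regenerate. A minimal sketch, assuming entirely made-up signals and an illustrative penalty weight:

```python
import math

# Hypothetical sketch: growth is rewarded with diminishing returns, and
# consuming the shared ecosystem faster than it regenerates is penalized.

def sustainable_growth_score(growth: float,
                             consumption: float,
                             regeneration: float,
                             penalty: float = 5.0) -> float:
    """Log-scaled growth reward minus a penalty for overconsumption."""
    diminishing_gain = math.log1p(max(growth, 0.0))   # each extra unit is worth less
    overdraft = max(consumption - regeneration, 0.0)  # resources drawn beyond renewal
    return diminishing_gain - penalty * overdraft

# Modest, sustainable growth scores better than aggressive overconsumption.
print(sustainable_growth_score(growth=2.0, consumption=1.0, regeneration=1.5))
print(sustainable_growth_score(growth=10.0, consumption=5.0, regeneration=1.5))
```

Under this toy scoring, the aggressive option’s larger raw growth is swamped by the overdraft penalty, which is exactly the behavior the paragraph above argues for.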

All successful teams share motivations and are aligned on goals

This addresses a common fear that an AI would “take over” and enslave humanity. In my opinion, this concern comes from the very real possibility that an AI may not share our values. When humans in positions of authority do not share our values, the harm they can do is not something to ignore. Unlike with humans, we have the opportunity to engineer and examine the values of an AI to help ensure that it does not harbor ulterior motives.

If we are to consider an AI a new life form, the ability to understand its motivations and gauge its reactions is vital. Not only to its being recognized as a life form, but also to how we ultimately decide how much to trust it, and what we can trust it to do on our behalf.

Rewards for preventing problems should be greater than those for dealing with a crisis

Every corporation performs a cruel calculus, and when the penalties for dealing with a crisis do not outweigh the incentives to keep profiting until catastrophe strikes, well, we have plenty of examples of what can happen. In the pursuit of profit, an AI should consider safety and responsibility for others as vital to its success, and these should be a large factor in how the AI objectively judges success. Our own assessment of its performance will weigh this heavily, and the AI must have that information to properly weigh its options. That way, when it is doing its calculus, we can make sure the constants weighting safety against reward don’t get factored out as inconsequential.
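To make “constants that don’t get factored out” concrete, here is a minimal hypothetical sketch in which the safety weight has a hard floor, so no tuning process can shrink it to nothing. Every name and number is illustrative:

```python
# Hypothetical sketch: safety carries a weight with a hard floor, so it can
# never be tuned down to the point of being inconsequential in the calculus.

SAFETY_WEIGHT_FLOOR = 1.0  # illustrative constant, not from any real system

def evaluate_option(expected_profit: float,
                    expected_harm: float,
                    safety_weight: float) -> float:
    """Score an option; the safety weight is clamped to a minimum value."""
    effective_weight = max(safety_weight, SAFETY_WEIGHT_FLOOR)
    return expected_profit - effective_weight * expected_harm

# Even if a tuning process pushes safety_weight toward 0, harm still counts.
risky = evaluate_option(expected_profit=100.0, expected_harm=80.0, safety_weight=0.0)
safe = evaluate_option(expected_profit=60.0, expected_harm=5.0, safety_weight=0.0)
print(risky, safe)  # 20.0 55.0 -> the safer option wins
```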

Setting this expectation has the added bonus of requiring us to define the scope of corporate responsibility, something for which no universally recognized definition currently exists.

And finally…

Like most people, I am thrilled and slightly concerned about what the future of computing and AI will mean for the world. This topic affects so many aspects of life, today and in the far future. With a basic understanding of what it takes to maintain life, and taught the best of our virtues, an AI can truly be a beneficial technology.

If we manage to succeed in most of what I’ve laid out, I think we could create a valuable and productive member of the workforce, as well as a well-meaning member of our households.
