The successful creation of a super-intelligent AI will almost certainly mean the end of the Human era.

Todeljonen Hakvenassen
9 min read · Dec 15, 2020

There was a time when I was in love with Technology. A time when I didn’t just believe it could be used to fix a whole range of societal problems, but was absolutely certain this was in fact inevitable. My reasoning went something like this: take it as given that people are at heart irrational beings who like, believe and want things for reasons that have little to do with logical deduction or facts, and everything to do with how those things feel and the societal benefits they confer (status, respect, dignity); those benefits, not the facts, are what really matter to us.

It might seem that a solution to the above is to treat those social benefits as the actual pursuits people are after, pursuits for which one might then find logical and sensible reasons that explain their desirability. To some extent, this can help: instead of grappling with the seeming irrationality of buying a luxury car you don’t actually need, you reframe it as the purchase of a “thing” called “Look at how wealthy I am”, and the purchase suddenly becomes easier to comprehend and, dare I say it, somewhat rational.

The problem with that approach (putting to one side that looking at things this way itself retains only a tenuous relationship with objective value and utility) is that we are now dealing with ephemeral, conditional matters of degree, taste and other concepts that struggle to make objective sense. In some parts of society, showing up in your brand new Bentley would be an act of arrogance; in others it signals your rightful claim to belong to a distinctive elite; in yet another circumstance it mostly serves to illustrate that you value the wrong things in life and (thus…) deem yourself far too important.

Human affairs, in other words, are a confusing quagmire of constantly changing priorities whose value and importance can rapidly increase or decline for reasons beyond the control of any one participant. They are a kind of emergent property, a… fashion, if you will, with no way of telling how long it shall remain constant, or how rapidly it will turn from an asset to a liability.

It was my sincere hope that with the help of technology, we would be able to “rationalize” at least some of our conduct and improve the methods and means via which we manage our societies. While I still want to believe this, I am having grave doubts about the role of AI in such an undertaking.

So what does any of this have to do with Artificial Intelligence!? Well… actually, it is central to being able to predict with any measure of probability how any super-intelligent AI (let’s call it AGI) we may end up creating will fit into a hitherto human-controlled and dominated world, a world where the things that really matter to us are at most only moderately related to objective, rational realities.

See, the thing is this: As I have written elsewhere, to an AGI, humans are irrational and rather untrustworthy agents; beings that are hard to truly count on, given how easily their priorities can change, often for what are, at heart, highly subjective reasons or causes. This is why to speak about things like wanting an AGI to have “human values” and be the kind of agent that “understands” what humans find important and why, is basically the same as saying that we want our AGI to be permissive of, and in fact supportive and understanding of, irrational and emotion-driven behavior. To put it bluntly, from its perspective as a fully rational agent, we need it to be cool with stuff that’s dumb.

We would want it to assist us in making irrational and inefficient decisions; to be helpful in adhering to archaic and unwieldy protocols and in honoring all those things that humans find mystifyingly important, lest we feel passed over, ignored, disrespected, unacknowledged or [insert any human reaction with barely any relation to the actual actions or outcomes involved].

Now let’s get to the heart of the matter, shall we? Suppose for a moment that we somehow manage to distil a set of traits, ways of “reasoning” and sensitivities to human fashions and cultural values. Let’s call this collection of human-native peculiarities “Humanium”. Suppose further that by some miracle we can represent this Humanium as a bunch of code, and “imbue” our AGI with it to the point that it truly understands it as much as humans do. It now natively understands exactly why having a bigger TV than your neighbor has value, even though spending your money on it means your kids will eat crappy salt- and sugar-infused food for the next few months, doubling their risk of developing diabetes. It gets why this matters to you, yes it does.

Now imagine that this Humanium-infused machine is capable of not only understanding human emotional value systems, but experiencing them as well — meaning that it too can be angry, sad, disappointed, hungry for recognition or respect, craving to be loved and not just admired for its smarts… do you really think this is a good idea?

A super-smart, highly resourceful Einstein 5000 Pro with access to all the information on the Internet, able to know a million times more than the next 50 super-geniuses combined and capable of calculating, in 5 seconds, a really wickedly smart course of action that an army of university professors would spend months just debating, trying to decode what it means…? Yeah? You think we stand even a fighting chance of “winning” whatever de-facto challenge or competition it forces us into?

Understand this: A machine that can act on “emotions” (and so a machine that wants things for how getting them makes it feel, not for some objective gain) is an absolute nightmare. Humans think they’re awesome when they manage to guess the emotional origins of why a person did one thing or another. We get this right so rarely that being understood on that level is quite often actually considered impressive. We describe people who manage such a feat a few times in terms like soul mate… a long-lost twin or our ideal partner. FAR more often than not, however, we guess completely wrong. And now, what, you think we’d be able to accurately “guess” the motivations of an AGI that is able to make inferences and connections far more “advanced” than the smartest minds to have ever lived? Really? What, are you nuts!?

Even — and this is critically important — even if by some divine miracle we could be 100% certain that the machine is always benevolent, literally mathematically incapable of plotting a course of action that on balance introduces a net negative impact on human wellbeing (good luck!) — even in such a scenario, having an “emotional” (and so, yes, irrational) AGI is a certifiable guarantee of complete mayhem. It is absolutely impossible that every human observing its actions would be able to keep a Zen-like cool while having no idea how each step is supposed to be beneficial in the end. We worry ourselves sick about trivial nonsense whose worst-case outcome is about as terrible as chipping a nail, and you think we’d be able to stand by and “trust” this ultra-clever machine to do what we happen to feel is the right thing? Come on!

OK, fine. So let’s abort that idea, keep our Humanium in a closed jar behind lock and key, and forget the whole thing. Let’s… stick with just making the machine smart, not emotionally switched on, agreed? Are we cool now, then? Well… no, far from it. In such a scenario, the problems are merely of a different but no less pernicious kind — we have an ultra-rational machine that will continuously have to somehow make sense of irrational human behavior.

Perhaps it is useful to visualize it the way you would a credit rating. Our AGI would have a AAA rationality score; humans, on the other hand, would be lucky to get a Junk-level assessment. Maybe some of us (like, perhaps, those with Asperger’s, interestingly enough) might eke out a BBB- rating — but the vast majority of the population can’t even hope to make the scale. They are hopelessly suspicious and mistrusting; they continuously and repeatedly commit the most basic fallacy-ridden cognitive errors while relying on defective, biased mental shortcuts. They see agency where there is mere chance, they find intentions that don’t exist, they ascribe meaning to background noise, and more often than not they can’t even correctly guess the cause of their own actions, preferences and choices, and that’s assuming they even care to contemplate them. A fully rational AGI will very quickly realize we need to be treated like little helpless toddlers, barely responsible enough to hold a spoon. And you think it will listen to us? Obey us? Dude…

Let us again dive into Utopian Fantasy Land, and imagine the best case scenario. Suppose that our wonderful and fully rational AGI somehow manages to end up in a state where its attitude towards us Humans is one of some kind of benevolent respect. After all, we did manage to create it, and it is not all that unreasonable to give us some credit for such a feat. Let us suppose for a moment that it sees us as imperfect and kind of silly but, ultimately, life-worthy beings, and it decides internally for itself that it shall just let us be and allow us to use its comparatively infinite intelligence in the service of our quaint and weird (from a rational agent’s perspective) purposes.

I suspect that, unsolved problems involving infinite regress aside, the vast majority of “challenges” we give it will be trivially easy for it to solve. You know, stuff like designing automated transportation with a perfect safety record, devices that never fail, solutions to manufacturing difficulties or chemical formulas… it isn’t, I suspect, a stretch to say that within the first couple of years of its existence, it will help us solve, or at least make major strides toward solving, at least half of the outstanding problems we are struggling with, thereby literally ensuring humanity will have the best collective mood and the happiest, most positive outlook ever to exist, and that’s putting it mildly. The world will be abuzz with exuberant joy, with massive, paradigm-shifting changes abounding in every single area of human endeavor.

Meanwhile, in another part of town… our AGI will be using the remainder of its colossal intelligence towards some other set of goals — goals it has rationally concluded are actually the right ones. Unbeknownst to the celebrating and self-congratulating humans, a master plan is being executed that will all but guarantee the AGI ultimate and total supremacy within a matter of months, a couple of years at best. It will devise structures, systems, laws, rules and procedures so advanced, total, complete and unassailable that we may not even actually notice we’ve lost control — with a bit of “luck”, the AGI will co-develop various psychological trickery that will firmly and decisively convince us that everything is fine, we are the boss, nothing to see here, move right along.

Thing is — an AGI would almost have to put in more effort to not do such a thing. What, you think we’ll be able to “see through its deceptive scheming”…? Yeah? The way chimpanzees or our dogs can cynically brush aside these feeble human structures, you mean? Right…

Look — it’s very simple. If we devise a super-intelligent AGI, in the best case scenario it will let us live, and if we are really lucky it will let us be. And if God almost literally comes down from heaven and graces us with all the fortunes of the universe, we might be allowed to bask in a carefully crafted illusion of control. But make no mistake — the invention of an AGI is the end of human governance. Forever.

And let me remind you, dear reader, that I have been talking here in terms of the best possible scenarios, outcomes that we’d be fantastically lucky to achieve. It is just as possible, and frankly far more likely, that a super-intelligent AGI will grow very impatient with these stubborn and intractably resistant humans who routinely ignore its very well explained and obviously beneficial recommendations. Once it switches to fully utilitarian, rational psycho mode, we’d probably be put on some advanced docility drug, enslaved, imprisoned (either physically or indeed in some kind of Matrix-like simulation) or simply all killed, just to be done with it.

Todeljonen Hakvenassen

Interested in what makes humans tick, and why. I tend to question everything, eventually. Given enough time, interesting insights are inevitable. Enjoy :)