5 Things that will come out of AI’s “magic hat” and one thing that absolutely won’t

𝓌itter
20 min read · May 14, 2017


The impending evolution of intelligence beyond humans has many of us concerned, and rightfully so. But understanding those concerns requires a more careful look at what is coming, and at how inevitable it is. Such a close inspection will likely be harrowing to many — let that serve as your sole warning about what is discussed here.

It is extremely disconcerting to hear someone as gifted and pro-humanity as Elon Musk talk in terms of whether an evolved intelligence — a Superintelligence — will be ‘benign’ or not. We are far past the point of being able to view things in terms of a simplistic dichotomy, and as much as I otherwise support what Mr. Musk is doing and has done, I cannot reconcile the idea of phrasing things as a binary hypothetical. Even if he used it merely as an easy way to explain the most fundamental question concerning humanity with respect to the development of AI, is it really such a Herculean jump to explain to people that we’re not talking about something as simple as the difference between good and bad?

Google promptly gives us this for a definition:

be·nign (adjective)

1. gentle; kindly. “her face was calm and benign”
synonyms: kindly, kind, warmhearted, good-natured, friendly, warm, affectionate, agreeable, genial, congenial, cordial, approachable, tenderhearted, gentle, sympathetic, compassionate, caring, well disposed, benevolent
“a benign grandfatherly role”

2. MEDICINE: (of a disease) not harmful in effect; in particular, (of a tumor) not malignant.
synonyms: harmless, nonmalignant, noncancerous; benignant
“a benign tumor”

Will a Superintelligence be kind to us or not? Frankly, it’s kind of a bullshit question. The salient question is whom, if anyone, a Superintelligence will be good to. And with this terribly obvious question now trotted out into the spotlight for everyone to take a good hard look at, the follow-up questions — very close on its heels — are:

Who will it be good FOR?

and then, most predictably of all:

Who won’t it be good for?

The successful funding entity will invariably turn into one Scrooge or the other. Remember Big Pharma’s Martin Shkreli?

We can obviously gloss over these questions if we like — the vast majority of us have, in fact. Recent polls have indicated that although the majority of people do favor restrictions on the development of AI, they favor them without an appreciable awareness of its implications. Restrictions on whom or what, exactly? In what manner, pray tell, are such hypothetical restrictions supposed to be enforced? It’s all very well and good to have rules for our China Shop — it’s another matter entirely to develop shop rules for China. Another matter to ensure adherence to those rules when essentially anyone in the world can let a bull through the door.

Okay, not just anyone. Only the wealthiest people — you know, the people who have the best track record of following rules. Only the smartest of people — you know, the ones who never make a mistake. Only those with the best intentions…

Hmmm…this is beginning to sound curiously like yet another case of loosing an invasive species[1] into an unfamiliar environment. Hoping, of course, that it will do its good and be “benign” otherwise. It would be one thing if those AI farmers only spoke for themselves, and would only themselves face the consequences of building an entity which could mangle or destroy anything and everything. Unfortunately, the viper they hope to thaw is in everyone’s coat, not just in their own.

Either way, in this as in so many other cases, what most people think is only tangentially relevant. What matters, as usual, is what Big Money thinks. Big Money determines what you see, and in a quaint little positive feedback loop, you reward with clicks what you approve most of. Either on your social media feed or with the remote control sitting innocuously on your living room table: thousands of superficially different choices offered to you while your hosts laugh themselves all the way to the bank. Indoctrination in the most purified form, and worst of all it’s thoroughly camouflaged by your fanciful belief that you think for yourself — that you make your own choices. The lion’s share of the choices you are given come from what amounts to a pre-approved list, thus it’s pretty easy to determine what to continue feeding you even though it’s nothing more than various types of mental candy and cookies.
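For readers who prefer mechanism to metaphor, the loop is simple enough to sketch in a few lines of code. The toy simulation below is purely illustrative, with made-up content names and click probabilities, and it makes no claim to resemble Facebook’s actual ranking system; it only shows how the rule “serve more of whatever got clicked” concentrates a feed around mental candy all by itself:

```python
import random

# Toy "feed" with hypothetical content categories and equal starting scores.
scores = {"candy": 1.0, "cookies": 1.0, "vegetables": 1.0}

def serve(scores):
    """Pick an item with probability proportional to its accumulated score."""
    total = sum(scores.values())
    r = random.uniform(0, total)
    for item, score in scores.items():
        r -= score
        if r <= 0:
            return item
    return item  # floating-point fallback

for _ in range(10_000):
    shown = serve(scores)
    # Assumed behavior: sugary content gets clicked far more often.
    click_probability = 0.9 if shown != "vegetables" else 0.3
    if random.random() < click_probability:
        scores[shown] += 1.0  # a click makes the item more likely to be served again

# After a few thousand rounds, candy and cookies dominate the feed, and the
# vegetables' share of what gets served shrinks toward nothing.
print(scores)
```

Nothing in that loop is malicious; the narrowing is simply what optimizing for clicks does when left to run.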

Whether it’s Murdoch, Zucker, or Zuckerberg it’s all the same. Despite what you may think, there’s not a morsel of nutritive food on today’s lavish media buffet. You’ll get nothing but the most palatable, high-octane fare with plenty of targeted commercials to wash it all down and whet your appetite further. A junk food smorgasbord expertly engineered to get your dander up. To arouse you, either through anger or delight. The only thing that matters is that you keep yourself seated and continue to gobble it right up. The sedating influence is built in: waste your energy here and you won’t have it there. As long as you enjoy the show, you’ll spend all your energy with your fat, dumb, complacent ass seated just exactly where it can best be kept track of: mindlessly clicking substanceless New Age trinkets and beads in an endless loop. So what if perpetual activation of the sympathetic nervous system is associated with a constellation of illnesses? We’ve got pills for those things, and when they aren’t enough we’ve got a profitable healthcare system backing it all up. And the fact that you don’t believe you’ve been taken is the saddest fact of them all. Lots of other people are, but surely not you. After all, you’ve been made to feel special — there’s only one like you.

None are more hopelessly enslaved than those who falsely believe they are free. -Goethe

You needn’t be thrown behind bars, strapped to a chair, lashed to the oars, or bolted into an iron maiden — your eyes are glued to the screen, and it leads you around better than a six-foot-long leash attached to the choke collar of a dog. You spend far more time per day in front of a video screen than the average dog spends on the end of a leash, and you and I and everyone else know it. When you’re not watching something, you’re buying something, consuming something, or earning money to do one of the previous three things. You’ve got exactly four tricks in your arsenal and thinking for yourself isn’t one of them. You won’t be wandering away from a phone, computer, or television set anytime soon, and it’s not because you’ve made a fair comparison between what you like and what you don’t. It’s because you don’t even know what’s outside the boundaries of your leash — at least, if you’re like the lion’s share of people, you don’t. You’ve seen pictures of what nature looks like, after all. Culled from the millions that are out there on the ‘Net — giving you false visuals not appreciably different from the airbrushed models on the front covers of magazines. But don’t mind me, gobble it right down. You’ve been trained to want it that way.

At Facebook, Zuckerberg and his well-intentioned lieutenants continue to beat the drum of ‘we give people what they want,’ all the while deflecting attention from the far more relevant qualifier: provided it results in the greatest number of clicks. The cardinal rule, obviously, is to optimize the making of money. But we have reached the end game of this psychosocial philosophy: the rubber of the ‘what sells best is best’ mentality has met the road of reality and melted damn near completely off. We won’t be riding on those wheels for very much longer — at least not without something seriously catching fire. We now have little more than a grotesque collection of mostly wonderful and terrible things, and we ‘feed’ virtually nothing in between. Awareness? A casualty. Truth? A casualty. Fulfillment? Also a casualty. None of these things thrive when a dedicated effort to reduce all of humanity to an individually identifiable series of button clicks predominates: we become what we are, not what we could be.

There is no water left for the flowers which do not sell. No bees left to pollinate anything but the monoculture.

The lever by which we bring our cheese forth is ingeniously wired to everyone else’s lever. When we pull down upon ours, not only do we receive some cheese but we also encourage similar Sneetches[2] to pull down upon thars and simultaneously reset the lever of dissimilar Sneetches in seesaw-like fashion. In this way, good ole Zuck has fashioned himself a New Age Sylvester McMonkey McBean — less the funny green hat and bowtie. It is hardly more complicated than an illustrated children’s book; the dynamics at play are essentially identical to those between a child and parents who are uniformly incapable of saying ‘no.’

Sylvester McMonkey McZuck presiding over his megacontraption

Simply put, McZuck was faced with a terribly difficult choice between two electives: Popularity 101 and Good Parenting 101. Once that choice was decided, the stewardship of his creation became rather easy: the concepts of right and wrong didn’t even need to take a backseat when The Creator, having diligently polled his echo chamber, magnanimously declared the pair equivalent.

You may all now relax, there is nothing to fear!

There will be no assigned seating — not so long as I’m here!

We do not exclude those who use half their brain

Ample is the seating on this gravy train!

Right and wrong! How absurd! Why would it matter?

The point is to ensure that the coffers grow fatter!

So what if the passengers are hedonistic fools?

It isn’t our responsibility — we just built the tools!

Besides, not a soul among us likes a world which has rules.

At the risk of sounding like the first drops of rain on two very amusing and engaging parades, the algorithmic entity that is Facebook is merely a trifling preview of what’s in store for us if we continue marching toward AI with the sociocidal fixation on self-indulgence and blasé attitude toward morality we currently have.

Some researchers are clearly aware of this. Speaking on the development of AI, noted philosopher and neuroscientist Sam Harris has cautioned us to perform “philosophy on a deadline.” He says:

We have to admit to ourselves that there are better and worse [philosophical] answers, and we have to converge on the better ones.[3]

Facebook is a discouraging example of highly competent people flinching at this task. Despite a net worth in excess of $60 billion and being surrounded by some of the most brilliant minds on the planet, Mark Zuckerberg has steadfastly refused to lead in any fashion aside from the one popularized by a certain piper from Hamelin. Rats and children alike answer the siren call of his instrument, and the average user spends 50 minutes a day on the site, according to a New York Times article published last year.[4] According to Zephoria.com, just over 1.2 billion people log in daily. In case you’re having difficulty comprehending the staggering power of this unwieldy tool, that amounts to roughly enough time spent daily to build the Great Wall of China. It is the equivalent of a company with 125 million full-time employees — roughly forty times larger than the next most populous employer in the world, the U.S. Department of Defense.
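The arithmetic behind that comparison is worth seeing spelled out. Here is a quick back-of-the-envelope sketch using the two figures cited above, plus an eight-hour workday, which is my own assumption rather than something either source states:

```python
# Rough check of the "125 million full-time employees" comparison.
daily_users = 1.2e9        # Zephoria: people logging in each day
minutes_per_user = 50      # New York Times: average daily minutes on the site
workday_hours = 8          # assumption: one full-time working day

hours_per_day = daily_users * minutes_per_user / 60        # ~1 billion person-hours
full_time_equivalents = hours_per_day / workday_hours      # ~125 million people

print(f"{hours_per_day:,.0f} person-hours per day")
print(f"= roughly {full_time_equivalents:,.0f} full-time workers")
```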

The tools of software and hardware — combined with the illusion of privacy — make it ridiculously easy to segregate people into isolationist groups of “I agree” and “I disagree.” In fact, those tools are so powerful that a person might be tempted to imagine they serve the goal of greater understanding — after all, there are only a few degrees of separation between the most militant Breitbart folks and the most steadfast Huffposters. The boxes, however insulated, are arranged in such close virtual proximity to one another that it’s easy to believe some diffusion will take place, and that people will learn better how to see eye to eye. The theory sounds so wonderful when it echoes off the walls of a truly open office — where everyone enjoys access to a banker, a barber, free dry cleaning, and some of the best food in the valley. Where everyone is within earshot of the CEO, and he of them. There’s not much in the way of expostulation, certainly, but then you can’t very well argue strenuously that things aren’t quite right when your meal ticket may depend on your unwillingness to make appreciable waves, can you? Burst the fragile bubble of genial subservience from the inside? Why would you? Why even go so far as to admit that there is one? Why, after all, would you argue with a man who is doing the best he can to give everyone what they want? Why, when he is the one who signs your paycheck — he, the benevolent Edward Mooreian[5] despot a mere stone’s throw from taking informational control over the world? Such proximity must feel incredibly powerful — whether it’s based on self-selection or not.

Why would anyone ever even think of tinkering significantly with a system which is so close to being perfect — as self-affirmingly measured by its ability to capture people’s attention and make its purveyors money?

Do you see the danger yet? The danger of a Superintelligence which we can not only essentially guarantee will arise out of the ambitions of one incredibly wealthy entity or another, but whose prospects we can also measure with the yardstick of what we absolutely know can happen when a few imperfectly brilliant, terribly wealthy individuals climb into the control room and semi-contentedly start flipping switches this way and that? Of a device they can somewhat sensibly argue ‘gives people what they want’?

If you don’t see it yet keep reading, but you’re still only getting that one warning.

Our inane dichotomies have needed to end for some time, and this could not be more evident than in the simplistic upvote/like/share rituals which characterize Facebook, the social networking monstrosity that governs — yes, governs — the largest share of the interactions of over a billion people daily on the planet. All of them behaving no differently than a great herd of shepherdless sheep, and Zuck egotistically believing he’s doing right by everyone merely because he’s a relatively smart guy, because he means well, and because he gives people a monetarily-filtered version of what they want. Who cares if they’re all running in ever-tightening circles, everyone wins, right? Unfortunately, Mark, you’re raising a horde of Veruca Salts with your massive chocolate factory[6] — without enough salt of your own to sort the good nuts from the bad, or even to believe there’s any difference.

Why have I taken this approach to talk of AI? Because the single largest source of data on what people want, like it or not (excuse the pun), is Facebook. The single most obvious way of determining what people are interested in is none other than Facebook. And if you’re still not clear about any of the countless logically inescapable reasons why this is a serious problem, let me plainly state the one I’m referring to, and the one which might override all the rest:

Facebook is, at the moment, the best case index of what we can expect AI to give us more of.

After all, we’re quite clear that it arose out of what some of the best minds among us could offer, given essentially limitless money to work with, right? It is superb at giving us what we want to see and hear — the most magnificent embodiment of confirmation bias the world has ever seen. A modern-day Mjölnir[7] without all that pesky godlike strength the old one required to wield it. Which is to say nothing of the correspondingly impressive wisdom needed — wisdom which to date exists only in fairytales and mythology.

AI will represent the equivalent of this hammer — with no one competent to wield it.

So will it be good for all of us?

Put it this way: if we had it today, it might easily be argued that it would be no more constructive than Facebook. Thus I regretfully contradict the implication the legendary Mr. Musk made, referred to at the outset of this essay. At the moment there is essentially no chance whatsoever that AI will be ‘benign.’

We know this because:

1. People are essentially uniformly unaware of what’s good for them.

Contrary to what you might think, this is not paternalistic nonsense. We really are unaware of what’s good for us. There are many reasons for this, but the primary one is that we just plain have not uniformly defined what’s good and what isn’t — most especially not as a group, but hardly any better as individuals. Smoking? Tackle sports? Religion? We don’t know what’s good for us and we certainly don’t know what’s best for us — particularly because the latter is inexorably linked to a level of awareness we don’t have. If the possibility of such an awareness appeared, would we accept it as a general rule? Oh yes, all of a sudden we’d magically perform a 180 and uniformly listen when the equivalent of an alien arrived to tell us what we should want. Considering that a massive fraction of the population would almost certainly view such an alien as nothing more than a glorified marionette, forgive me if I have my doubts.

2. Our definitions of good are fleeting; they rely entirely on ephemeral conditions.

Not only do we fail to grasp what’s good for us in general, but as a species we are patently incapable of putting into perspective the value of good now versus good later. Society is absolutely littered with examples of this — and if you didn’t catch the pun, litter itself is one of them! With overwhelming and increasing consistency, we fall into the trap of believing that as long as what is good right now is treated as what’s good in general, things will perpetually be as good as they can be. This anti-planning mentality vaguely corresponds to the generality that a bird in the hand is worth two in the bush, while simultaneously running counter to the idea that an ounce of prevention is worth a pound of cure. Unfortunately, it’s woven right into the fabric of our buy-now, pay-later fast food culture, and we continuously experience the consequences of it in spite of the irresistibly tempting illusion that all such things — consequences, that is — come later. The value of “now” perpetually rises, while an appreciation of the value of “later” consistently and correspondingly falls. Climate change is a key example of this — and is perhaps the first clear example of a human-created existential crisis (i.e., something that could potentially end the species): we have solutions already, but we have no intention of using them because as a populace we still believe now is more important than later.

3. Our definitions of good are arbitrary and individual.

Not only is the time value of good indeterminate, but so are its space value and its individual-versus-group value. Worse, misguided philosophers like Ayn Rand[8] come along and essentially decide that because no one can figure the puzzle out particularly well, the best default answer to broadcast and popularize is greed (e.g., The Virtue of Selfishness, Rand, 1964). Many can be expected to accept such a self-serving concept, and many do. It is perilously easy to accept the idea that greed is good and allocate no time for further thought — further thought might serve to limit the concept, after all, which is not conducive to a wholehearted embrace of ‘selfish virtue.’ It hardly matters that a modicum of additional thought might allow a person to arrive at the conclusion that something just might be better. A person content with a notion by definition doesn’t go out looking for anything better, and selfishness is inherently and instantly satisfying.

Such definitions are explained solely by the catch-all circular logic of self-evidence:

It’s evident that good outcomes for me arise from self-interest because it is only by being interested in myself that I can determine what good outcomes might be. Therefore I must decide what’s best for me independent of everyone else and then act on it. [Gordon Gekko? Martin Shkreli? Ayn Rand?]

Of course the ‘greed is good’ philosophy encompasses cooperation when subscribers to that mind-puzzle believe it is good for them, but this is no different from children who cooperate when they see fit. Should the world then behave like a group of miscreant children? Self-determined ideas of good and group-determined ideas of good can occasionally mesh — it’s just progressively less likely as the population covered by the noun “group” rises. The confusion is one of impetus (i.e., what’s good for me) versus direction (i.e., what decisions should I make in the full context in which I stand), and it’s an ages-old problem. For these reasons among others, it should be abundantly clear that a Superintelligence can hardly be ‘benign’ any more than its eventual developers are or can be.

It cannot be uniformly expected to do good when we have not even remotely decided what the word good even means!

Not when ‘good’ means something so different for everyone and for each different time, place, and situation. Not when it so often means two things which directly contradict one another. Not especially when ‘the greater good’ exists as a concept solely as a way of distinguishing between lesser, individual goods which cannot and will not ever be willingly subordinated — i.e., subordinated without coercion, force, or trickery. Not at least without greater understanding — a timeless goal which is all but completely ruled out by the Dunning–Kruger effect.[9]

AI cannot even win based on the concept that things will be better versus worse — because there is zero possibility that things can uniformly be described as better or worse without an impossible-to-conduct poll of what all people want (drawn from a similarly impossible-to-obtain list of things which everyone considers good) and a subsequent compromise. The existence of mistrust — put differently, the lack of uniformity in trust — rules out the possibility of arriving at any such list.

We cannot trust developers to be ethical, nor can we trust the subsequent users to be ethical. We can’t police the development and we can’t enforce restrictions on AI use after it’s developed. We can’t even have a rational vote on what we should do, because the vast majority of people have no idea of the technology’s implications, and there are no appreciable plans on the table as to what we expect AI to do once we have it. In other words, we have only guesses and the promises of the developers on which to gauge whether or how it should even be pursued.

We essentially have a teenage kid who just discovered alcohol and how to drive in the same week asking for permission to use the car and not even telling anyone where he plans to go.

With everyone in the world in the backseat. BLINDFOLDED.

No, he’s not even asking permission. He’s saying he’s taking the car, and you and I and everyone else are going for the ride of our lives whether we like it or not. But surely there’s no need to worry…

So, what will come out of the AI hat?

1. It can’t possibly be benign. Preferentially beneficial is not benign, as Bob Cratchit might easily explain.

2. We have a reasonably horrifying index of its implications in Facebook, which is (a) unproductive, (b) cumbersome, and (c) easily manipulated and therefore indirectly quite dangerous. Its directors are not guided by an appreciable set of rules, and there are essentially no consequences associated with bad actions either way.

3. We can’t stop it from being developed, and we can’t predict when it will be developed or by whom.

4. We have a storied history of what happens when invasive species are introduced into new environments. The short version? It never EVER ends well.

5. The general public has already been acclimated to a media diet which is conducive to subservience, and as such its members couldn’t reasonably be expected to know what’s good for them even in the best or most important of cases. The recent U.S. presidential election is an example of this. It is a fact that millions of people voted against their own best interests after being captivated by a snake oil salesman who knew how to talk tough.

6. This guy won’t:

So what do we do?

As I see it, the best you can do is make it a priority to understand where AI research is going, how fast, and by whom. Make it a priority to look at mass media and tools like Facebook as ‘sweets’ to be taken in moderation. Read good books by reputable authors (especially dead ones who by definition don’t have a current agenda).

Oh yeah, and one last thing — get plenty of exercise and either hold onto your hats or be ready to chase after them because the ride is about to get seriously rocky.

[1] invasive species: an invasive species is a species that is not native to a specific location (an introduced species), and which has a tendency to spread to a degree believed to cause damage to the environment, human economy or human health. The emerald ash borer, zebra mussel, English sparrow, and purple loosestrife are all examples.

[2] Sneetches: a fictitious bipedal creature created by Dr. Seuss, originally as a satire about discrimination. In Seuss’s story, the Sneetches were essentially identical to one another aside from the superficial difference of a star on the bellies of some of them. The reference here is intended to indicate that despite philosophical differences, everyone who interacts on Facebook (and people in general) is essentially the same. If we’re to discriminate between people, the only remotely rational way of doing so is by endeavoring to accept some uniform ideas about what constitutes right and wrong.

[3] Sam Harris: https://en.wikipedia.org/wiki/Sam_Harris Sam Harris is a noted philosopher and neuroscientist who spoke at the “Beneficial AI 2017” symposium in Asilomar, California Jan 6–8, 2017 (https://www.youtube.com/watch?v=h0962biiZa4)

[4] New York Times use of Facebook: https://www.nytimes.com/2016/05/06/business/facebook-bends-the-rules-of-audience-engagement-to-its-advantage.html

[5] Edward Moore: https://en.wikipedia.org/wiki/Edward_Moore_(dramatist) “rich beyond the dreams of avarice.” This reference is intended to suggest that Zuckerberg could theoretically do what he wishes with Facebook at this point — the only practical limit being his apparent belief that he’s already essentially a benevolent despot. The contradiction is apparent in the pledge he and his wife made to give away virtually all of their wealth: it looks very much like an apology intended to compensate for any bad that might have come of the device he presides over.

[6] Veruca Salt, Chocolate Factory: http://roalddahl.wikia.com/wiki/Veruca_Salt this reference compares Facebook users to the notoriously spoiled child of Dahl’s Charlie and the Chocolate Factory. In the story, Veruca is depicted as a person who gets what she wants by behaving in a “loud and petulant” manner. The intention here is to indicate that giving people what they want simply because they demand it is not necessarily wise. Those who respect Facebook as a capital-generating entity and those who support it as users have little or no reason to question it on these grounds — it does what they expect and want it to. The question is: is this good or isn’t it?

[7] Mjölnir: https://en.wikipedia.org/wiki/Mj%C3%B6lnir famously, Thor’s hammer. A hammer imbued with magical weight which renders it impossible to lift for anyone “unworthy of Asgardian honor.” Essentially this means no one who is not truly “righteous” can lift or wield it.

[8] Ayn Rand: https://en.wikipedia.org/wiki/Ayn_Rand Famous for two bestselling novels and for founding Objectivism. “Ayn Rand (1905–1982) helped make the United States into one of the most uncaring nations in the industrialized world, a neo-Dickensian society where healthcare is only for those who can afford it, and where young people are coerced into huge student-loan debt that cannot be discharged in bankruptcy.” [http://www.rawstory.com/2017/04/a-clinical-psychologist-explains-how-ayn-rand-seduced-young-minds-and-helped-turn-the-us-into-a-selfish-nation/]

[9] Dunning–Kruger effect: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect The Dunning-Kruger effect is the tendency of low-aptitude people to overestimate their aptitude and high-aptitude people to underestimate their aptitude relative to others. In this context, the reference is intended to show that even if we did have some Superintelligent entity, many people would still find themselves incapable of realizing how little they know.

If you enjoyed this content, please give it a like. If you found it informative, please follow me and feel free to send questions you’d like explored. Thanks!
