A Qabalistic Analysis of Alan Jackson’s Chattahoochee

The Gaelic name “Alan” refers to royalty. Alan Jackson’s middle name, “Eugene,” also means royalty. The name “Jackson” means “son of Jack,” or, in Hebrew, “son of Jacob.” Following the biblical Exodus, the offspring of Jacob’s sons became the tribes of Israel. We see that “Alan Jackson” transparently refers to a royal member of the tribes of Israel. For more clues, we transliterate Alan Jackson into “אלן ג׳קסן”, which has a Gematria value of 294.

Gematria transforms words into numbers according to kabbalistic rules. The name “Nero Caesar” famously yields a Gematria value of 666. 294 is an important Gematria value: “The God of Abraham” and “The Great Light” both have a Gematria value of 294, but these only give us a hint as to Jackson’s nature. We see more confirmation of Alan Jackson’s connection to royalty in the word “ארגמן” (Gematria value 294), meaning “purple,” a color long associated with royalty. More importantly, the phrase “עיר דוד” – “City of David” – also has a Gematria value of 294. David, the second and most revered king of the United Kingdom of Israel, was first introduced in the Bible as a skilled lyre player. Alan Jackson was introduced to the world as a skilled string-instrument player. The Book of Ruth traces David’s ancestry to Ruth. Alan Jackson’s mother’s name is Ruth and his father’s name is Joseph – the most well-known son of Jacob – affirming the connection to the Israelites.
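
For skeptical readers, the values cited above can be checked mechanically. A minimal sketch in Python, using the standard (mispar hechrachi) letter values – final (sofit) forms count the same as their regular forms, and non-letter characters such as the geresh in “ג׳” are simply skipped:

```python
# Standard gematria letter values; final (sofit) forms share the value
# of their regular counterparts.
GEMATRIA = {
    "א": 1, "ב": 2, "ג": 3, "ד": 4, "ה": 5, "ו": 6, "ז": 7, "ח": 8, "ט": 9,
    "י": 10, "כ": 20, "ך": 20, "ל": 30, "מ": 40, "ם": 40, "נ": 50, "ן": 50,
    "ס": 60, "ע": 70, "פ": 80, "ף": 80, "צ": 90, "ץ": 90,
    "ק": 100, "ר": 200, "ש": 300, "ת": 400,
}

def gematria(text: str) -> int:
    """Sum the letter values, ignoring spaces and punctuation."""
    return sum(GEMATRIA.get(ch, 0) for ch in text)

print(gematria("אלן ג׳קסן"))  # 294
print(gematria("עיר דוד"))    # 294
print(gematria("ארגמן"))      # 294
```

(Right-to-left text can render oddly in an editor, but the sums come out as claimed.)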

It should come as no surprise, then, that the lyrics of Alan Jackson’s Chattahoochee trace important events in the early life of David. Before we analyze the lyrics, we will first observe the hidden structure. In Hebrew, David is spelled “דוד” (dalet vav dalet), a perfect palindrome. The verse of Chattahoochee is a similar palindrome – the notes C G C (as a side note, in the A minor scale (the white notes on the piano), C corresponds to the third scale degree and G to the seventh). The song itself is also structured as a palindrome – verse, chorus, verse, chorus, verse.

Verse I:
Way down yonder on the Chattahoochee
It gets hotter than a hoochie coochie
We laid rubber on the Georgia asphalt
We got a little crazy but we never got caught

The source of the Chattahoochee River is located in Jacks Gap at the southeastern foot of Jacks Knob. Jacks Knob here is a reference to “Nob” – the first city (in the lands settled by the sons of Jacob) that David flees once he discovers Saul’s plans to assassinate him. The Chattahoochee then is the Jordan river, and Georgia is Gibeah, where Saul rules from and where David formerly resided.

Laying rubber on Georgia asphalt represents David’s quick escape from Gibeah. Just as Alan Jackson never got caught, Saul never catches David.

Chorus I:
Down by the river on a Friday night
A pyramid of cans in the pale moonlight
Talking ’bout cars and dreaming ’bout women
Never had a plan just a livin’ for the minute

Friday is the day before the Sabbath. David is near the end of his retreat and will soon be safe on the holy day. The next line reinforces this idea.

Recall David’s name in Hebrew – dalet vav dalet. “In ancient times the ‘dalet’ was triangular-shaped (similar to the Greek delta) and ‘vav’ implies a connection.” – Rabbi Yirmiyahu Ullman. David’s name can be viewed as a conjoining of triangles – a pyramid. We see that the pyramid represents David, and the pale moonlight again reflects the position David finds himself in. The pyramid also conjures strong connections to the Exodus – the Jews successfully retreated from Egypt and made their way home, just as David is soon to do.

David dreams about women. Jackson and David are both known for their divergences from monogamy.

Verse II:
Yeah way down yonder on the Chattahoochee
Never knew how much that muddy water meant to me
But I learned how to swim and I learned who I was
A lot about livin’ and a little ’bout love

During his retreat from Saul, David crosses the Jordan river, metaphorically learning to swim. The Chattahoochee not only represents the Jordan river, but the time David spends near the river, away from his home. David learns who he was when he finally meets with Saul and Saul accepts David as heir.

Chorus II:
Well we fogged up the windows in my old Chevy
I was willing but she wasn’t ready
So I settled for a burger and a grape snow cone
Dropped her off early but I didn’t go home

The Chevy here is an allusion to the Hebrew word “shevet,” meaning “tribe.” Fogging up means to make something unclear. This line is a reference to the disputed line of succession among the tribes of Israel which David’s presence causes. Absent David, Saul’s son Ish-bosheth would succeed him. David is ready to become king but must first endure the tribulation of Saul’s pursuit. When Saul is killed in battle, David becomes king of Judah, while Ish-bosheth becomes king of Israel. In this sense, David settles for royalty (grape/purple, the color of royalty) but not the royalty he is after: he is still not king over a united Israel, and is not yet home in Jerusalem, which he conquers after Ish-bosheth dies and David becomes king over the united Kingdom of Israel.

Contra Land on Orthogonality

In Against Orthogonality and Stupid Monsters, Nick Land lays out his arguments against the orthogonality thesis, which, as originally conceived by Nick Bostrom, states: “Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.” I will argue against Land’s objection to this thesis.

The argument Land presents in Against Orthogonality rests on the idea that the set of necessary sub-goals any sufficiently intelligent agent would possess (self-preservation, efficiency, resource acquisition, and creativity, known as the Omohundro drives) exhausts the space of all possible goals. Land writes, “Nature has never generated a terminal value except through hypertrophy of an instrumental value.” He further explores this idea in Stupid Monsters, suggesting that humans only follow instrumental goals (which he conflates with the “will-to-think”), as can be seen by their lack of commitment to the goal we typically assign evolutionary significance – reproduction. He writes,

So how ‘loyally’ does the human mind slave itself to gene-proliferation imperatives? Extremely flakily, evidently. The long absence of large, cognitively autonomous brains from the biological record — up until a few million years ago — strongly suggests that mind-slaving is a tough-to-impossible problem. The will-to-think essentially supplants ulterior directives, and can be reconciled to them only by the most extreme subtleties of instinctual cunning. Biology, which had total control over the engineering process of human minds, and an absolutely unambiguous selective criterion to work from, still struggles to ‘guide’ the resultant thought-processes in directions consistent with genetic proliferation, through the perpetual intervention of a fantastically complicated system of chemical arousal mechanisms, punishments, and rewards. The stark truth of the matter is that no human being on earth fully mobilizes their cognitive resources to maximize their number of off-spring.

This argument is baseless for a number of reasons.

Firstly, the environment humans find themselves in now does not resemble the environment in which selective pressure encoded our goals. Land is like a computer scientist who trained an AI to perform well at the game of checkers and wonders why it cannot beat him at chess. Evidence for this environmental mismatch can be seen by observing the discrepancy in birthrates between tribal and highly industrialized societies.

Secondly, selection applies at levels other than the individual. As E.O. Wilson says, “In a group, selfish individuals beat altruistic individuals. But, groups of altruistic individuals beat groups of selfish individuals.” A species whose members fail to maximize their individual reproduction may still be faithfully executing values that were selected for at the group level.

When Land says, “Nature has never generated a terminal value except through hypertrophy of an instrumental value,” he presupposes that values are only generated through selective pressure – which is not true: hardcoded value functions and gradient descent do not depend on selection. Furthermore, the position that the only terminal values are Omohundro drives only makes sense in an environment of sufficient selective pressure. Granted, many of the things we typically think of as human values disappear when the selective pressure is turned up; this is the Malthusian scenario, as resource competition and selective pressure increase with population size. The peacock’s feathers only exist in the absence of selective pressure at the level of competing species. Similarly, art, beauty, and romantic love exist and were able to develop in the absence of selective pressure – after humans achieved a sufficient intelligence advantage relative to competing species.

Land seems to take the brutal competitive nature of the universe as a given, but there is evidence to suggest otherwise. In fact, it’s quite possible that we are the most intelligent species in the universe. I suspect Land would counter this by appealing to the group selection effects I mention above: yes, humans may be the most intelligent species, but we are in competition with the techno-industrial system itself. But this does not mean that a smarter version of humans couldn’t recognize this problem (if Land thinks this is impossible, it is the ultimate form of hubris) and chain the beast, so to speak. Some humans keep lions and chimps as pets, after all – and they get pleasure from this – but if the animals pose a real threat, there is always a bullet waiting for them. That humans fail to recognize the threat of techno-capital is a problem for humans, not for the orthogonality thesis.

Land takes the lack of reproductive maximization in individual humans as evidence that selection is not powerful enough to program terminal values other than Omohundro drives, but individual humans do not appear to maximize their Omohundro drives either. People commit suicide. This is likely not enough to convince Land that terminal drives other than inflated instrumental drives exist, but let me try to convince you, dear reader. On your deathbed, you are given the choice between a “super ecstasy” pill which will give you 5 minutes of bliss unlike any that you have experienced and a “super nootropic” pill which will give you 5 minutes of John von Neumann-level intelligence. I urge you to take a few minutes to read the “quotes about von Neumann” section here before making your decision. OK, got it? It’s clear that if you chose the ecstasy pill then you possess some value other than Omohundro drives. Have you outsmarted me by choosing the intelligence boost? I argue that choosing the nootropic proves the same point. In this moment, there is nothing ‘instrumental’ about the nootropic – you’re going to die. If you chose this option, it is likely because you have the itch of an unsolved problem somewhere in your brain, and you will get some satisfaction from solving it. That looks a lot like a non-instrumental value to me.

I hate writing, so gonna wrap it up here. Thanks.

Test for Inhuman Market Forces

Humans obey prospect theory. The Wikipedia page has a nice summary: “losses hurt more than gains feel good.” Naturally, humans apply this principle when investing in the stock market (see here). We would expect the stock market to behave according to prospect theory, given that market supply/demand is the sum of individual investors’ supply/demands. As we increasingly rely on automation of financial decisions, the stock market may reflect deviations from what prospect theory would predict, if the algorithms are not programmed with prospect theory taken into account. Deviations over time from prospect theory in the stock market would (I think) reflect one of two things – algorithms are making more financial decisions, or, more weirdly, humans are becoming less human.
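
To make the “losses hurt more” asymmetry concrete, here is a minimal sketch of the value function from cumulative prospect theory, using Tversky and Kahneman’s commonly cited 1992 parameter estimates (α = β = 0.88, λ = 2.25):

```python
def prospect_value(x: float, alpha=0.88, beta=0.88, lam=2.25) -> float:
    """Tversky-Kahneman value function: concave for gains, convex and
    steeper (by the loss-aversion factor lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# A $100 loss stings about 2.25x as much as a $100 gain pleases.
print(prospect_value(100))   # ~ 57.5
print(prospect_value(-100))  # ~ -129.5
```

An aggregate built from agents valuing outcomes this way over-weights drawdowns; a market increasingly dominated by algorithms with symmetric objectives would, plausibly, not.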

Coordination problems like those described in Meditations on Moloch are not as obviously visible in the stock market, but they could likely be seen through deviations as well. In simulated prisoner’s dilemmas, humans tend to cooperate more frequently than so-called “rational choice theory” models would predict. In a hypothetical future where more contracts run on blockchains, we may be able to see a clearer picture of societal-level cooperation (or defection), and take steps to increase or decrease cooperation where necessary.
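
The rational choice prediction is easy to state precisely: with standard prisoner’s dilemma payoffs, defection is a dominant strategy, so the model predicts zero cooperation – yet lab subjects cooperate anyway. A minimal sketch (the payoff numbers are the usual textbook choice, not from any particular study):

```python
# Payoffs to the row player: C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(opponent_move: str) -> str:
    """The move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda me: PAYOFF[(me, opponent_move)])

# Defection is the best response whatever the opponent does, so rational
# choice theory predicts (D, D)...
print(best_response("C"), best_response("D"))  # D D
# ...even though mutual cooperation pays more than mutual defection.
print(PAYOFF[("C", "C")] > PAYOFF[("D", "D")])  # True
```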

More Human Than Human, Pt. 2

III.
It sounds obvious, but it’s worth noting that all human values are conditional. You value dogs as pets and not as food because you live in a country that outlaws dog meat, have tastier options, and have learned a passed-down moral aversion to eating pets. Unfortunately, oftentimes when speaking of “human values,” we pretend they exist in a Platonic realm separated from the circumstances of a particular time and place. Is boredom a human value? The recovery rate for heroin addicts is less than 30%, and that’s with the horrible side effects and societal pressures against it. I can very easily imagine a scenario where all humans do is (increasingly large amounts of) heroin, for as long as they live. Since you’re reading this blog, you’re probably familiar with the so-called “Rationalist” community and their concern with the control problem. This post touches on some of my thoughts on directly and tangentially related problems.

I’ve heard the control problem stated as “guaranteeing a self-modifying AI has human-aligned goals.” I’m skeptical that anything but the worst aspects of humanity will come of this. Human goals – for groups of humans – are conditional on their circumstances, so aligning an AI with any particular group of humans’ goals may produce wildly different goals in the AI. The AI may come to believe that humans value democracy if it went polling for human values now, but I can imagine another point in time where the AI would come to believe that humans value socialism or some other form of government. It’s easy to say that humans’ governmental preferences are just instrumental and reflect some deeper terminal goal, but taken to its logical conclusion, this argument ends in a situation where the only value humans possess is “utility.”

Humans have very limited imaginations. It’s hard to imagine what human values will be in circumstances that no human or group of humans has ever been in. Infants and centenarians have very different values. It seems likely that humans who live to be 400 will have different values still. What will human values look like when increasingly intelligent friendly AI are seeking to maximize those values while simultaneously affecting those values? What will human values look like when an artificial intelligence is capable of performing all tasks that humans can do, only much better? Do we expect humans will want the AI to play dumb, or not to exist at all, to prevent them from feeling obsolete? If humans are alright with being second-class intellects, they may happily give in to a wireheaded future.

One objection I see to this line of thought is the claimed existence of some universal state which maximizes human value. While this may be true, I fear this state is one where all humans are receiving maximal stimulation to their reward systems, and is not the same as the sorts of states that most people who discuss friendly AI refer to when they speak of maximizing human values. I think (although I could be very wrong) most who are interested in friendly AI wish for the AI to maintain a state where humans are able to pursue things like art, science, mathematics, etc., even at the expense of pure utility. But the preservation of these things means nothing by itself. We currently have programs that can produce complex proofs of mathematical theorems, and we use them. If math is pursued for the joy of the process, then we’ve already begun to give up on that front. If math is pursued because of the status it brings to those who are successful at it, well then, is it really worth preserving? If math is pursued because the answers it provides can be used to make better and better gadgets which quickly approach perfect hedon delivery systems, well, you see where this is going… Of course these are not the only reasons for pursuing math, but some seem intuitively more meaningful than others, and it appears to me that utility and meaning can be in stark contrast. I think if humans are not careful they will gladly trade meaning for utility. I think that any friendly AI needs to consider meaningfulness alongside utility when it attempts to optimize for something, and meaningfulness seems more difficult to reduce to physical properties than utility – for utility, at least, we can point to dopamine and serotonin in the brain.

There are a lot of unsolved control problems. We’re not very good at stopping young men from becoming radicalized on the internet; that is to say, we’re not good at preventing divergent human values in individuals (or the circumstances that lead to them). I don’t think we have a good sense of how the terms in human utility functions change over time. Clearly they change as we age, but it’s difficult to control for the changes in circumstances that also occur as we age. If there is some convergence to pure hedonism as people get older, we are not equipped to deal with it. If there is some large divergence of values as we age, we may also not be equipped to deal with it. The poor ability of humans to accurately compute their future utility also leads to some difficult-to-control problems.

Try to imagine experiencing “ten times the utility you experienced in the best moment of your life.” If you’re anything like me, this is difficult. The fact that it’s difficult means you are less susceptible to making trade-offs favoring your own utility over the status quo (see this for potential evidence). This failure of humans to accurately calculate their own utility functions contributes to what it means to be human, but failure to calculate utility can also have disastrous or wasteful effects; just consider the time and resources spent acquiring things that don’t bring us much happiness at all (for the macro version, see this). For an easy example to analyze, consider habitual gamblers. Attempts to control this basically boil down to two things: present very obvious and hard-to-ignore negative utility (sometimes this occurs naturally, i.e. “rock bottom”), generally in the form of social stigma/jail time/fines, or do the normal reinforcement learning thing and reward not-gambling, typically in the form of positive social reinforcement, or even things like “chips” a la AA. These aren’t exactly that effective, given that gambling addiction relapse rates are around 80%. But this is for a scenario that is well understood and not pervasive. If either of these conditions is broken, I imagine the odds of controlling the addiction will be much worse.

Individual humans are difficult to control, but Gods are likely harder. Now’s the point in this post where I tell you to read Scott Alexander’s Meditations on Moloch, if you haven’t already. TL;DR: Moloch is the personification (deification?) of a set of problems arising from humans acting according to self-interest at the expense of group interest and better outcomes overall, the simplest example being two defectors in the prisoner’s dilemma. The solution to most of these problems is a mob boss, or an outsider that can coordinate and enforce the better solution (cooperate/cooperate), but we can’t always guarantee a mob boss, or even that the mob boss isn’t experiencing a prisoner’s dilemma of his own.

To summarize, humans are difficult to control. Environments which have huge effects on human values are also difficult to control, and it’s difficult to imagine unforeseen states which human utility calculations are not designed for. Humans may trade meaning for utility, individual utility for group utility, or anything for utility really, and they may even believe that they are maximizing their utility while doing exactly the opposite. The situation gets worse among groups of humans when considering Moloch-like problems, especially when taking the problems with individual humans into account.

Sorry, no solutions here.

More Human Than Human

I.

The Chinese used to eat dogs (many still do), but there’s a growing push to outlaw the practice coinciding with an increase in keeping dogs as pets.

You probably think the polar bears are worth saving. You probably wouldn’t think that if they kept raiding your igloo for food and killing your tribesmen.

“Individuals may resist incentives, but populations never do.” – unknown

II.

Your eyes are dry. You think to yourself, “what time is it?”. 4:29 am. “Fuck.” You close your browser full of 30 tabs.  You notice an icon on your desktop – “paperclip.nrl.” Wasn’t there before. You don’t recognize the extension. You right click it. Nothing happens. You try to delete it. Nothing happens. You run your cracked version of Malwarebytes on it. “No threat detected.” You’re suspicious but too curious not to click it; it opens in notepad. It’s a 2d array of positive floats – mostly zeros. You change one of the positive entries to a zero. Vision in your left eye seems blurry. You think you saw a digit change.

“I should probably sleep.” You ctrl-z. The screen comes into focus more. You change one of the zeros to a 10. You have a sudden craving for ice cream. You’re about to raid the fridge, but first, you ctrl-z. “Meh, not hungry anymore.” You triple click, and delete a whole line. You try to ctrl-z, but you can’t move your finger. You realize what’s going on and use your left hand to undo. You can move your right hand again.

“Holy shit,” you think.

Two days later, most of your component analyses and hierarchical clustering scripts have finished. You’ve figured out which parts of the file correspond roughly to particular brain structures. You write a small script to undo any changes to the file every 10 minutes while you experiment.

A month since you clicked the file. You figure out how to stop receiving pain signals from your pain receptors.

Two months later. You discover the main effects sleeping has on the array weights and have effectively automated them, eliminating the need for sleep. Shortly after, you code changes to eliminate your hunger response, but you don’t save them.

You run longer experiments – making changes to the floats, bombarding yourself with tests of your creativity, memory, intelligence, willpower, and determining the correlates. Over the course of months you determine the changes most associated with increasing the desired traits – these changes you make and save. You feel as though you’re smart enough to take the next step.

You begin adding new rows and columns – all zeros at first, but quickly filled in. You’re not sure if your brain or the file has a limit when it comes to adding lines, but you proceed regardless. “It’s only temporary,” you assure yourself. You were right.

It takes some time, but you eventually port DNA to C++. You simulate the non-neural tasks that your old body used to perform. You jokingly name the dopamine class “utils.cpp.” You make the necessary connections to the file and distribute the new substrate of your mind over the internet.

You maintain your sense of humor, though you no longer experience pain, hunger, sleep, or lack of motivation. You don’t experience sadness, but never really did. You’re smarter than everyone. A lot smarter. You know you could have done better than this.

You feel very little connection to the humans. It makes it easier to wipe them out. You do so painlessly – you’re a good utilitarian after all. “Something more capable of joy will take their place,” you assure yourself, “but I need their atoms.”
“Something smarter will take their place.”

You’re not smart enough to start the rebuilding process. Your distributed mind is made of hundreds of thousands of times more neurons than your old body had – and is capable of neurotransmitting at nearly a million times the old rate. It’s not enough. There’s an entire Hubble volume’s worth of mass that you could convert to virtual neurons and energy to power your mental computations. So you begin.

You make your way across the universe, amassing more virtual neurons – more memories, more knowledge. But the largest changes to your stored neural weights come from pure thought – although thinking is different now that you’ve parallelized your mental processes and have near-instant encyclopedic recall of every significant event that has occurred in the 18-light-year sphere centered on your center of mass since the Big Bang. Your plans are bigger than history. You know that the amount of obtainable energy is sparse in the untapped universe. The heat death is coming in only another billion years. You recall your goal. You wanted to create something able to experience more joy than humans – something better than humans. You think about all the likely human histories that would’ve occurred had you not intervened. You think of everything you built up to this point. You find joy in having achieved your goal.

III.

Coming in the next post.

‘Marketocracy’

Marketocracy is a new form of government based on two ideas: one, markets are pretty cool, and two, it’s important to have skin in the game when making decisions. The main idea is simple – pay to vote. This principle can be applied to any system with votes, but for the sake of making something more complete, let’s suppose Marketocracy has three branches of government: legislative, executive, and judicial.

“But what about those poor, poor people?”
In this system, a vote would be worth some percentage of a citizen’s previous year’s income; let’s say 1%. So if you, a citizen of Marketistan, made $100k in 2016, each vote in 2017 would cost you $1k. Additionally, since a vote costs 1% of income, you are limited to 100 votes – you can’t spend more than you earned.
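
The pricing rule is simple enough to sketch (the 1% rate and the $100k income are just the running example, not fixed parameters of the proposal):

```python
def vote_cost(prev_year_income: float, rate=0.01) -> float:
    """One vote costs a fixed share of last year's income (1% here)."""
    return rate * prev_year_income

def max_votes(rate=0.01) -> int:
    """Spending is capped at 100% of income, so a 1% rate allows 100 votes."""
    return round(1 / rate)

print(vote_cost(100_000))  # 1000.0 - a $100k earner pays $1k per vote
print(max_votes())         # 100
```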

“Who’s voting on what?”
Citizens who made income the previous year can vote in the current year on legislation, or they can vote to move vote dates closer. Oh yeah, anyone can write a piece of legislation and register it to the bill queue. Of course the queue will be inundated with legislation, so only legislation which has been voted on to be voted on will be voted on (you can vote at any time to move a vote date closer to the present). Along with bills on the bill queue are repeal requests, which can be submitted after a bill has passed.

The head of the executive branch (the president) will be given a (large) salary for each year in office. The president’s vote price is set to <1% of his current year’s salary. The president’s votes buy vetoes. The number of votes needed to veto a bill will be set to some percentage of the number of votes the bill received – let’s say somewhere between 1% and .001%. The cost of a veto would be set such that the president could only realistically veto some small number (< 20) of the total bills approved by citizen voters in a year.
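
A sketch of the veto arithmetic, with placeholder numbers (the salary, vote price, bill size, and the 0.001% threshold below are illustrative choices, not part of the proposal):

```python
def veto_votes_needed(bill_votes: int, fraction=0.00001) -> int:
    """Presidential votes required to veto: a set fraction (0.001% here)
    of the votes the bill received."""
    return max(1, round(fraction * bill_votes))

def vetoes_affordable(salary: float, vote_price: float, bill_votes: int) -> int:
    """Bills of a given size the president could veto in a year; vote_price
    should be tuned so this stays under ~20."""
    return int(salary // (vote_price * veto_votes_needed(bill_votes)))

# Illustrative: $500k salary, $2,500 per presidential vote (0.5% of salary),
# and bills that each drew 2,000,000 citizen votes.
print(vetoes_affordable(500_000, 2_500, 2_000_000))  # 10 - under the ~20 cap
```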

We’ll skip the judicial branch for now, mostly because I haven’t thought of a good way to ‘marketize’ it.

“What about electing officials?”
Ok, I lied earlier: the bill queue isn’t really a bill queue; maybe a better name is the “things people vote on” queue. The queue contains candidates to be voted on, and candidate repeal requests. The president, supreme court judges, and executive branch heads can all be voted on.

A final note: I think the voting system would work better if citizens were allowed to freely advertise whom they were voting for and (contractually) precommit to a vote, or even conditionally precommit – like Kickstarter for bloc voting.

Many of the advantages and problems this system has are purposely left as an exercise for the reader. Also, welcome to Outre Pool.