Categories
Computing Evolution Psychology

Why our genes do not doom us

In the world at large (and even, it would appear, sometimes within the biological sciences) there can be a certain amount of trepidation surrounding the idea of a 'genetic tendency'. For example, if we posit a 'genetic tendency' towards aggression in human males, it seems as if we are saying something terrible. It seems even worse to posit a gene or genes 'for' aggression, even though, as Dawkins explains in The Extended Phenotype, this is exactly the same thing. A genetic basis for, a gene for, a genetic tendency towards, genetically influenced - however we phrase it, we are saying the same thing - that genes in some way contribute to the causal path.

The bogeyman of genetic tendency is at root a fear that if we have a 'gene for' X, that means that we will ineluctably do X. It can also go under the name of 'genetic determinism'. It is the idea that genes in some way control us, despite what we would like, and we (or other people) are doomed to carry out certain actions contrary to our best interests or contrary to morality or contrary to some other important social goal depending on what 'X' is.

The fear has a certain superficial credibility. After all, we all have 'genes for' hands, and we all (or almost all) end up with hands. Similarly for feet, hearts, livers, eyes, stomachs and so on. If men have a 'gene for' aggression, then surely all men end up violent?

This tempting line of reasoning contains a critical flaw. While genes for hands build hands, genes for aggression do not build violence (whatever that could mean); rather, they build an aggression mechanism. Any time we have genes 'for' a behaviour, that necessarily means that the genes build a mechanism for producing that behaviour. Possessing a mechanism for a behaviour does not tell us much about how much, if at all, we should expect to see that behaviour. Let's consider why.

Let's suppose that we have a subroutine for punching people. I'm going to go ahead and write some pseudo-code:

function punch(thePersonToPunch){
    executePunchOn(thePersonToPunch);
}

Wow! How deterministic! If you call function 'punch' with an input of who to punch, it just punches them!

But while a mechanism for aggression must call something like the 'punch' function, that can't be the whole of the aggression mechanism. There's nothing in this code to say when it runs. It surely can't run constantly, or the poor human would just spend all its time and energy punching people, and we know that doesn't happen. So the mechanism might look a bit more like this:

function aggressionMechanism(context){
    if (context.containsThreat()){
        punch(context.getThreat());
    }
}

This makes a bit more sense. We can run the aggressionMechanism constantly, and it will only output a punch if it detects that its environment contains a threat. We can say that the aggression mechanism outputs violence conditionally, in other words only under certain conditions.

Of course we can make the mechanism a bit more sensitive:

function aggressionMechanism(context){
    threatLevel = context.getThreatLevel();
    if (threatLevel < 10){
        doNothing();
    } 
    else if (threatLevel < 20){
        giveHardStare();
    }
    else if (threatLevel < 30){
        puffChest();
    }
    else {
        punch(context.getThreat());
    }
}

Now our mechanism is somewhat sensitive to the level of the threat. If it's only a mild threat, it ignores it. Slightly bigger threats gain a hard stare or a puffing of the chest. Finally, it is only very large threats that get a punch.

Hopefully we're already starting to feel somewhat safer. An 'aggression mechanism' does not need to mean constant mayhem and indeed no gene that generated constant mayhem would be likely to make it into the next generation.

But a 'gene for' a mechanism does not necessarily mean that the code of the mechanism is completely determined by the genes. We can imagine an aggression mechanism that is sensitive to the way that other people have historically treated the person who possesses the mechanism. Let's say that after a year of being treated nicely, the owner of the above mechanism now has a mechanism that looks like:

function aggressionMechanism(context){
    threatLevel = context.getThreatLevel();
    /* NOTE THE NEW THRESHOLDS! */
    if (threatLevel < 20){ // Was 10 before
        doNothing();
    } 
    else if (threatLevel < 40){ // Was 20 before
        giveHardStare();
    }
    else if (threatLevel < 60){ // Was 30 before
        puffChest();
    }
    else {
        punch(context.getThreat());
    }
}

For this hypothetical example, the structure of the mechanism has stayed the same, but the thresholds have changed. It takes a lot more for this individual to get riled up now. There's no reason a neural mechanism should not change like this. Just because something is a 'mechanism' does not mean it is completely unchangeable. Much encoding of mechanisms in the brain is done with synapse strengths and those are highly flexible.
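To make the idea of experience-tunable thresholds concrete, here is a minimal sketch in JavaScript. Everything here is invented for illustration (the class name, the `calmExperience` update rule, the specific numbers): the point is only that the thresholds can be mutable state, started at innate defaults and nudged by how the owner is treated.

```javascript
// Illustrative sketch only: thresholds stored as mutable state,
// adjusted by experience rather than fixed once by the genes.
class AggressionMechanism {
  constructor() {
    // Innate starting thresholds (the "genetically determined" defaults)
    this.thresholds = { ignore: 10, hardStare: 20, puffChest: 30 };
  }

  // Called whenever the owner is treated kindly: raise every threshold,
  // so it takes a bigger threat to trigger each response.
  calmExperience() {
    for (const key of Object.keys(this.thresholds)) {
      this.thresholds[key] += 1;
    }
  }

  respond(threatLevel) {
    if (threatLevel < this.thresholds.ignore) return "doNothing";
    if (threatLevel < this.thresholds.hardStare) return "giveHardStare";
    if (threatLevel < this.thresholds.puffChest) return "puffChest";
    return "punch";
  }
}

const m = new AggressionMechanism();
console.log(m.respond(25)); // "puffChest" with the innate thresholds
// A year of kind treatment...
for (let day = 0; day < 365; day++) m.calmExperience();
console.log(m.respond(25)); // now merely "doNothing"
```

The genes fix the structure of `respond` and the starting values; the environment is free to move the numbers.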

So a 'genetically determined' aggression mechanism can nevertheless be highly conditional, firing rarely if at all, and it can be highly sensitive to environmental input, meaning that it can be configured by external input.

But we can go further. When you're considering a complex system (like a human brain), it's always a mistake to consider one component in isolation. It's important to consider the interactions between all components if we're going to understand the behaviour of the whole.

Let's make a crude model of human behavioural output. At any given moment we can decide to do any of a great number of behaviours - singing, dancing, talking, walking, running, jumping and so on. We can imagine some kind of central authority that decides which of the many possible available actions to execute. We could suppose something like:

function centralExecutive(){
    options = new List;
    options.add(getSingingMechanismRecommendation());
    options.add(getDancingMechanismRecommendation());
    options.add(getTalkingMechanismRecommendation());
    options.add(getWalkingMechanismRecommendation());
    options.add(getRunningMechanismRecommendation());
    options.add(getJumpingMechanismRecommendation());
    /* And our aggression mechanism */
    options.add(getAggressionMechanismRecommendation());

    actionToExecute = getHighestScoring(options);
    execute(actionToExecute);
}

In this toy central executive, lots of different mechanisms are polled, each one 'recommending' an action. I imagine you've experienced something similar. In the mid-afternoon, you might ask yourself, "What do I feel like doing this evening?", and you might be aware of several different options, which we can suppose come from different parts of your brain. One part says "I quite fancy some Indian food." Another part says "I should really go to the gym". Another part says "I'm tired, I just want to go home." Maybe another part says, "I'd like to meet up with some friends."

We can suppose that we have something roughly like the central executive function running all day every day. At every moment, our brain is trying to figure out what is the best thing to be doing right now, and then executing its choice.

We can also suppose that the different parts of the brain must make their recommendations in some kind of common currency, otherwise how is the executive to choose between them? We experience this subjectively as how much we 'feel like' or 'want to' do something. Generally the thing we 'want to do' the most wins. But these feelings of wanting must somehow be in the same currency or we could not compare them.
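The common-currency idea can be sketched in a few lines of JavaScript. The action names and scores below are all invented; the only thing the sketch is meant to show is that once every mechanism reports its recommendation as a plain number, the executive can compare otherwise unlike options and simply take the maximum.

```javascript
// Illustrative sketch: each mechanism reports its recommendation in the
// same numeric "currency" (a desirability score), so the central
// executive can compare unlike options directly and pick the strongest.
function centralExecutive(recommendations) {
  return recommendations.reduce((best, r) => (r.score > best.score ? r : best));
}

const eveningOptions = [
  { action: "eatIndianFood", score: 62 },
  { action: "goToGym",       score: 48 },
  { action: "goHomeAndRest", score: 70 },
  { action: "meetFriends",   score: 55 },
];

console.log(centralExecutive(eveningOptions).action); // "goHomeAndRest"
```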

So let's suppose that there is a threat in the environment. It's a threat at level 70, so even our most recent, milder aggression mechanism is triggered. But to incorporate this new element of the central executive, we need to change our aggression mechanism slightly.

function aggressionMechanism(context){
    threatLevel = context.getThreatLevel();
    if (threatLevel < 20){
        recommendDoNothing();
    } 
    else if (threatLevel < 40){
        recommendHardStare();
    }
    else if (threatLevel < 60){
        recommendPuffChest();
    }
    else {
        recommendPunch(context.getThreat());
    }
}

The difference from the previous mechanism is that now the mechanism only recommends courses of action to the central executive, rather than taking those actions itself.

So even though the threat level is now at 70, the aggression mechanism just says to the central mechanism "Hey, I'm strongly recommending that we punch this threat."

But crucially, the central executive has a chance to evaluate this course of action before executing it. Perhaps it runs it by the 'future simulator' function, asking it "Hey future simulator, what would happen if I punch the threat?" The future simulator might calculate the likely results and say, "Well, I think you would damage the threat but it's likely you'd end up in jail and that would be very bad."

So the central executive weighs up the pros and cons and decides against the violence, despite the aggression mechanism strongly recommending it.
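We can sketch this veto step too. The payoff table, the function names and the numbers are all made up for illustration; the sketch just shows an executive that combines the strength of an urge with a crude simulation of its consequences before acting.

```javascript
// Illustrative sketch: the executive consults a toy "future simulator"
// before acting, so a strongly recommended action can still be vetoed
// if its simulated long-term consequences are bad enough.
function futureSimulator(action) {
  // Toy consequence table: net long-term payoff of each action.
  const predictedPayoff = { punch: -100, hardStare: -5, doNothing: 0 };
  return predictedPayoff[action] ?? 0;
}

function centralExecutive(recommendation) {
  // Combine the urge ("I want to punch: 70") with simulated consequences.
  const netValue = recommendation.urge + futureSimulator(recommendation.action);
  return netValue > 0 ? recommendation.action : "doNothing";
}

// The aggression mechanism urges a punch at strength 70...
console.log(centralExecutive({ action: "punch", urge: 70 })); // "doNothing"
// ...but a hard stare at the same strength would survive the veto.
console.log(centralExecutive({ action: "hardStare", urge: 70 })); // "hardStare"
```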

Hopefully now we're feeling a lot safer.

The fact is that we can have a 'gene for' a behaviour, and that gene can 100% reliably (or close to it) build a mechanism for producing that behaviour, just as genes build hands, and yet that behaviour may be produced only under very rare or specific circumstances, if at all.

The existence of a mechanism does not mean it's going to produce the product it was designed to produce. I've lived in my flat for 5 years and I've never once used the central heating mechanism.

Of course we know that people aren't aggressive all the time or even most of the time, and we know that it only happens under tightly circumscribed circumstances. The point of this article is to illustrate how this is perfectly compatible with a genetic tendency towards or even genes for (it's the same thing!) aggression. The same goes for genes for any other behaviour we may care to consider.

Categories
Computing Consciousness Evolution Psychology

Is the human brain a computer?

Sometimes people object when I describe the human brain as a computer. The most common objections are things like:

  • Computers are made by humans whereas brains are biological
  • Computers are made out of silicon and our brains are made of cells
  • Computers have addressable memory whereas our brains have neural networks
  • Computers don't have emotions whereas we do
  • Computers are mechanical whereas we have flexibility and free will

Some of the disagreement is simply arguing over definitions. If someone is using the word 'computer' to refer solely to devices made out of silicon designed by humans, then of course they're not going to agree that human brains are computers. So let me short-circuit some of the disagreement by making the following stipulative definition of 'computer':

A computer is a device that transforms input into useful output.

If we substitute in my definition, we can rephrase the question in the title of this post as "Is the human brain a device that transforms input into useful output?"

Using this definition, there doesn't seem to be much room for disagreement. The human brain takes input from the senses, and from internally stored information (for example memories), and transforms that input into useful behaviours.

If it really is fair to see the human brain as a computer, that suggests that we should be able to use much of the content of computer science to characterise and analyse the workings of the brain. We might expect to find some or all of the following concepts usefully applicable:

  • Variables
  • Subroutines
  • Data encoding
  • Memory storage and retrieval
  • Lookup tables
  • Ranking and sorting (eg in action prioritization subroutines)
  • Daemon processes (processes with their own largely separate cause and effect chains)
  • Concurrency
  • Testing
  • Bugs and debugging
  • Caching
  • Mechanisms optimised for speed or low resource usage or for accuracy
  • Subsystems

and many others.

Categories
Computing Consciousness Evolution Psychology

List of Axioms

When discussing a topic as broad as human behaviour, it helps to make any philosophical/scientific assumptions explicit so that any reader can see if he or she has some fundamental difference of position to that of the author. I therefore give the following list of assumptions as axioms, which I take for granted elsewhere on the site (justifications/discussions are in the links [coming soon]):

  1. The universe is deterministic. If precisely the same starting conditions are set up twice, precisely the same result will occur each time. Every process can therefore be described as mechanical, including consciousness.
  2. Natural selection is the only known process in the universe capable of building complex adaptations.
  3. "The ultimate goal that the mind was designed to attain is maximizing the number of copies of the genes that created it." (Steven Pinker)
  4. It's legitimate to hypothesize about function in plain language or any computer language, even though functions are actually implemented in neurons, synapses etc in the brain. Functional hypotheses are at a layer of abstraction above implementation and can therefore be implementation-agnostic.

Categories
Computing Consciousness Evolution Psychology

Relevant Quotations

"We are survival machines — robot vehicles blindly programmed to preserve the selfish molecules known as genes." - Richard Dawkins - The Selfish Gene

"I am not apologizing for using the language of robotics. I would use it again without hesitation." - Richard Dawkins - The Extended Phenotype

"The ultimate goal that the mind was designed to attain is maximizing the number of copies of the genes that created it." - Steven Pinker - How The Mind Works

"Talk is cheap. Show me the code." - Linus Torvalds, creator of Linux operating system

"Those who study species from an adaptationist perspective adopt the stance of an engineer. In discussing sonar in bats, e.g., Dawkins proceeds as follows: "...I shall begin by posing a problem that the living machine faces, then I shall consider possible solutions to the problem that a sensible engineer might consider; I shall finally come to the solution that nature has actually adopted" (1986, pp. 21-22). Engineers figure out what problems they want to solve, and then design machines that are capable of solving these problems in an efficient manner. Evolutionary biologists figure out what adaptive problems a given species encountered during its evolutionary history, and then ask themselves, "What would a machine capable of solving these problems well under ancestral conditions look like?" Against this background, they empirically explore the design features of the evolved machines that, taken together, comprise an organism. Definitions of adaptive problems do not, of course, uniquely specify the design of the mechanisms that solve them. Because there are often multiple ways of achieving any solution, empirical studies are needed to decide "which nature has actually adopted". But the more precisely one can define an adaptive information-processing problem -- the "goal" of processing -- the more clearly one can see what a mechanism capable of producing that solution would have to look like. This research strategy has dominated the study of vision, for example, so that it is now commonplace to think of the visual system as a collection of functionally integrated computational devices, each specialized for solving a different problem in scene analysis -- judging depth, detecting motion, analyzing shape from shading, and so on." - Leda Cosmides & John Tooby - Evolutionary Psychology: A Primer

"In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history." - Charles Darwin - On the Origin of Species

"The human mind consists of a set of evolved information-processing mechanisms … produced by natural selection over evolutionary time." - John Tooby & Leda Cosmides - The Adapted Mind

"Information and computation reside in patterns of data and in relations of logic that are independent of the physical medium that carries them." - Steven Pinker - How The Mind Works

"The brain’s special status comes from a special thing the brain does, which makes us see, think, feel, choose, and act. That special thing is information processing, or computation." - Steven Pinker - How The Mind Works

"I have a friend who's an artist, and he sometimes takes a view which I don't agree with. He'll hold up a flower and say, "Look how beautiful it is," and I'll agree. But then he'll say, "I, as an artist, can see how beautiful a flower is. But you, as a scientist, take it all apart and it becomes dull." I think he's kind of nutty. ... There are all kinds of interesting questions that come from a knowledge of science, which only adds to the excitement and mystery and awe of a flower. It only adds. I don't understand how it subtracts." - Richard Feynman

"You are a computer, built by selection, and melted or disordered by entropy." - John Tooby

Categories
Computing

The Simplicity Of Computers

Computers do clever things - they play chess, run spreadsheets, do voice recognition, translate languages, predict stock markets and offer thousands of other useful, or fun, or even life-saving applications. It is very tempting to suppose that they work in some irreducibly complex way. Indeed, as Arthur C. Clarke put it, "Any sufficiently advanced technology is indistinguishable from magic."

However, in their fundamental workings, computers are ridiculously simple. They are so simple that you can build computers out of wood, or lego, or tubes filled with water. All their cleverness comes from the way their very simple components have been combined. The components themselves are laughably trivial.

Suppose we play a game in which you hold out your hands and I put apples in them. I can put an apple in your left hand or leave it empty. Similarly I can put an apple in your right hand or leave it empty. If you end up with two apples, one in each hand, you say "Yes", otherwise you say "No". That's it. Quite a simple game. Congratulations, you are now a fully functioning "AND gate", a critical component of computers. An AND gate takes two inputs and if the first input AND the second input are both positive, it outputs "Yes" otherwise it outputs "No".

Let's change the game slightly. This time if you end up with an apple in your left hand OR your right hand OR both, you say "Yes" otherwise you say "No". In other words, you only say "No" if you get no apples at all. Congratulations, you are now an "OR gate", another crucial component of computers. An OR gate takes two inputs and if the first OR the second OR both are positive, it outputs "Yes", otherwise "No".
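The two apple games map directly onto one-line functions. Here is a sketch in JavaScript, purely illustrative, using 1 for "an apple" and 0 for "no apple":

```javascript
// The two apple games as functions: inputs are 1 (apple) or 0 (no apple),
// and the output is 1 ("Yes") or 0 ("No").
function AND(a, b) { return a === 1 && b === 1 ? 1 : 0; }
function OR(a, b)  { return a === 1 || b === 1 ? 1 : 0; }

console.log(AND(1, 1)); // 1 ("Yes": an apple in each hand)
console.log(AND(1, 0)); // 0 ("No")
console.log(OR(1, 0));  // 1 ("Yes": at least one apple)
console.log(OR(0, 0));  // 0 ("No": no apples at all)
```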

You can build these 'logic gates' out of just about anything. Imagine an 'OR gate' built out of dominoes: if you knock over either the lower left or lower right domino, or both, all the dominoes will fall. So we can see the lowest two dominoes as the inputs and the domino at the top as the output.

As a matter of fact, the gates in computers are typically made out of silicon and work electronically, but this is irrelevant to the cleverness of computers. What makes computers clever is the way the very simple gates are combined.

So how can we combine gates to do something clever?

Let us suppose we want to build a computer that can add two numbers together, for example 4 and 2.

Clearly we need to give those numbers to a set of gates somehow, to "input" those numbers. But we have a problem. As we saw above, gates typically take two inputs, but they can only take inputs of "yes" or "no", an apple or no apple, or in the case of computers, 1 or 0. We can't give the number 4 or 2 to a single gate.

The solution is to convert each number to a set of 1s and 0s. In other words, we need to convert the inputs to binary. If you are not conversant with binary, it's just another way of representing numbers, like our normal decimal system or Roman numerals. In the decimal system, columns represent 1s, 10s, 100s etc, so 523 means 3 + 20 + 500 = 523. In binary, columns represent 1s, 2s, 4s, 8s etc. So the binary number 101 means 1 + 4 = 5 in decimal.
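The conversion itself is a simple mechanical procedure: divide by 2 repeatedly and collect the remainders. A small sketch (the function name and 3-digit padding are just choices for this example):

```javascript
// Convert a decimal number to binary by repeatedly dividing by 2 and
// collecting the remainders, padded to a fixed width for this example.
function toBinary(n, width) {
  let bits = "";
  while (n > 0) {
    bits = (n % 2) + bits;   // the remainder becomes the next digit
    n = Math.floor(n / 2);
  }
  return bits.padStart(width, "0");
}

console.log(toBinary(4, 3)); // "100"
console.log(toBinary(2, 3)); // "010"
console.log(toBinary(5, 3)); // "101", i.e. 1 + 4 = 5
```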

So if we convert our inputs 4 and 2 to binary, we get:

Decimal   Binary   4s   2s   1s
4         100      1    0    0
2         010      0    1    0

Now that our inputs are '100' and '010', they are in a format where we can give them to some gates. But what gates?

Both numbers, '100' and '010' have 3 digits. What if we had a row of 3 OR gates and we gave each OR gate one digit from each number? It would look like this:

          OR gate 1   OR gate 2   OR gate 3
Input 1   1           0           0
Input 2   0           1           0
Output    1           1           0

If we 'OR' together '100' and '010', we get '110'.

'110' in binary is 6 in decimal:

Decimal   Binary   4s   2s   1s
6         110      1    1    0

In binary 100+010 = 110 (6 in decimal) and in decimal 4+2 = 6. It worked! By 'OR'ing together binary versions of our numbers, we got the correct sum of the two numbers. (Note that 'OR'ing together numbers like this to add them works fine if the two numbers don't have a '1' in the same column. If they both have a '1' in the same column, like adding 101 and 001, the gates need to be a bit more complicated, because you need to 'carry' values, like when you add 9 and 7 in decimal and 'carry' 1 to give you 16.)
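Both halves of this paragraph can be sketched in a few lines. The OR-ing of digits is exactly the table above, and the "bit more complicated" gate arrangement for the carry case is conventionally called a half adder: XOR gives the sum digit, AND gives the carry. Function names here are just illustrative:

```javascript
// OR two binary strings of equal length digit by digit, as in the table above.
// This only gives the correct sum when no column has a 1 in both numbers.
function orAdd(a, b) {
  let out = "";
  for (let i = 0; i < a.length; i++) {
    out += (a[i] === "1" || b[i] === "1") ? "1" : "0";
  }
  return out;
}
console.log(orAdd("100", "010")); // "110", i.e. 4 + 2 = 6

// The carry case: a "half adder" combines two gates per column.
// XOR produces the sum digit, AND produces the carry.
function halfAdder(a, b) {
  return { sum: a ^ b, carry: a & b };
}
console.log(halfAdder(1, 1)); // { sum: 0, carry: 1 } - like 1+1 = binary 10
```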

So you can see that we have done something quite clever, adding together two numbers, using a purely mechanical process built out of very simple components.

Everything that computers do, including sophisticated functions like playing chess or running spreadsheets or translating languages, is built up out of very simple gates like the ones we have looked at in this article.

Of course, the main point of this post is to suggest that just as the cleverness of computers is the aggregated result of simple components, so too may be the workings of our minds. The sophistication of our thought processes need not necessarily indicate that something irreducibly complex is going on.