# Brain is Turing-equivalent?

## Definitions

A Turing-equivalent system can compute the same set of functions that an idealized Turing machine can.

• All of our computers (variations on the von Neumann architecture) are Turing-equivalent for functions (programs) that use finite memory and terminate in finite time (halt).
• Lots of subtleties around the idea of finite memory
• No real computer can truly be Turing-equivalent, since the Turing machine is assumed to be able to allocate an unbounded amount of memory
• So a computer with finite memory is actually a finite-state machine (a deterministic finite automaton).
• But since the memory could be unbounded in principle, we can pretend that ordinary computers are Turing machines, and we won’t run into problems as long as memory is not full
• Although the number of states is finite, it’s incomprehensibly large: 2 raised to the power of the number of bits, for the RAM alone
• We’re only interested in computers that could physically exist (i.e. have finite memory and finite input length), so I’ll use “Turing machine” and “bounded Turing machine” interchangeably. If this seems like a huge problem, just pretend that there are infinitely many RAM slots that you can fill up as necessary.
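To get a feel for how large that finite state space is, here is a quick back-of-the-envelope calculation (a sketch; the 1 KiB figure is an arbitrary example, far smaller than any real machine):

```python
# A computer with finite memory is a finite-state machine with one state
# per possible RAM configuration: 2 ** (number of bits).
ram_bytes = 1024  # hypothetical machine with just 1 KiB of RAM
ram_bits = ram_bytes * 8
num_states = 2 ** ram_bits

# The number itself has thousands of digits; report its magnitude instead.
print(f"2^{ram_bits} distinct states (~{len(str(num_states))} decimal digits)")
```

Even this toy machine has a state count with nearly 2,500 decimal digits, which is why treating real computers as finite automata, while technically correct, is rarely useful.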

Uncomputable functions cannot be evaluated on a Turing machine.

• Standard example: the function that determines, given any program, whether that program halts, is uncomputable (the Halting Problem).
• Running a program doesn’t tell you whether it halts: if it hasn’t halted yet, you can’t tell whether it ever will, so you might wait forever without getting a conclusive answer
• The assertion here is that even if you can write a program that analyzes the structure of certain programs to determine whether they halt, you cannot write one that does so for all programs
• The term “computational power” doesn’t refer to speed of calculation, but rather to ability.
• If computer X has more computational power than a Turing machine, then X can compute at least one function that no Turing machine can compute in any finite number of steps.
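The standard diagonalization argument behind the Halting Problem can be sketched in code. The `halts` function below is a hypothetical placeholder, not anything that can actually be implemented; the point is that its mere existence would be self-contradictory:

```python
# Suppose, for contradiction, that we had a perfect halting analyzer:
def halts(program) -> bool:
    """Pretend this returns True iff program() eventually halts."""
    raise NotImplementedError("no such function can exist")

def contrary():
    # Do the opposite of whatever the analyzer predicts about us:
    if halts(contrary):
        while True:
            pass  # analyzer said we halt, so we loop forever
    # analyzer said we loop forever, so we halt immediately

# Whatever halts(contrary) returns, contrary() does the opposite.
# Therefore no correct halts() can exist for all programs.
```

This is why the claim is about *all* programs: analyzers that work on restricted classes of programs are perfectly possible, but `contrary` can always be built to defeat any fully general one.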

The terms super-Turing computation and hypercomputation are used interchangeably to refer to computing uncomputable functions.

• If general intelligence requires evaluating uncomputable functions, then we have no hope of running AGI on traditional computers.
• We want to show that the brain is no more powerful than a Turing machine, and no less powerful, so the functions that a brain can compute are all computable on standard computers and vice versa.

## Turing machines ⊆ brains

Claim: the brain is at least as computationally powerful as a Turing machine.

• Turing based his computational model on a human manually doing computation, so this is straightforwardly true.

• Humans can evaluate any computer program with pen-and-paper.

• In this scenario the paper is the Turing machine’s tape
• This excludes programs that access external resources like the Internet, but the idealized Turing machine can’t access external resources either, so not a problem
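The pen-and-paper scenario can be made concrete with a minimal Turing-machine simulator, where a dictionary plays the role of the paper. This is a sketch; the particular machine below (which appends a 1 to a block of 1s) is an arbitrary example:

```python
def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    """Simulate a Turing machine. `rules` maps (state, symbol) to
    (new_symbol, move, new_state). The `tape` dict is the 'paper'."""
    tape = dict(tape)
    for _ in range(max_steps):
        if state == "halt":
            return tape
        symbol = tape.get(pos, "_")  # unwritten cells read as blank "_"
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += {"R": 1, "L": -1}[move]
    raise RuntimeError("step limit reached (machine may not halt)")

# Example machine: scan right past the 1s, then write one more 1 and halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
tape = run_tm(rules, {0: "1", 1: "1"})
print("".join(tape[i] for i in sorted(tape)))  # → 111
```

Every step here (look up the current symbol, consult a finite rule table, write, move) is something a person can do by hand, which is exactly the observation Turing started from.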

## Brains ⊆ Turing machines

Claim: the brain is no more computationally powerful than a Turing machine. *

• Can all physical processes be simulated by a Turing machine?

• This is sometimes called the physical Church-Turing thesis.[1]
• If this were true, then the brain would necessarily be at most Turing-equivalent, as its operation is a physical process.
• It is not known if this is true.
• All proposed super-Turing machines are impossible to physically instantiate.

• Examples of hypothetical super-Turing computers[2]
1. An oracle that can solve the halting problem, plus a Turing machine with access to the oracle, is strictly more powerful than a Turing machine alone
2. A Turing machine that computes each step in half the time needed for the previous step (“Zeno’s machine”)
• This machine can perform infinitely many operations in finite time (2 time units; sum the geometric series)
• So it can solve the halting problem: simply run the program, and check whether the machine has halted after 2 time units.
3. Many papers claim that certain kinds of (artificial) recurrent neural networks have super-Turing capabilities
• Basically all of these papers assume infinite numerical precision
• None of these machines can be realized[2]
1. Need to create the oracle, which is itself a super-Turing machine
2. Speeding up by 2x each step implies that information would eventually need to move faster than the speed of light in any computer of finite size; alternatively, driving a signal frequency toward infinity would require infinite energy.
3. Infinite numerical precision is unphysical, due to the presence of noise in measurements.[3]
• Uncomputable functions are essentially mathematical curiosities, with little applicability to the real world.
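The “finite total time” claim for Zeno’s machine is just the geometric series 1 + 1/2 + 1/4 + … = 2, which a quick numerical check illustrates (a sketch; 50 terms is an arbitrary cutoff):

```python
# Zeno's machine: step n takes 1/2**n time units, so although it performs
# infinitely many steps, the total time converges to 2.
partial = sum(1 / 2**n for n in range(50))
print(partial)  # approaches 2 as more terms are included
```

Since every step of the simulated program finishes before the 2-time-unit mark, checking the machine’s state at time 2 answers the halting question, which is exactly why such a machine cannot be physical.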

To summarize:

• Humans can evaluate any program manually, so the brain is no less powerful than a Turing machine.
• Super-Turing machines appear to be physically unrealizable, in which case the brain cannot be super-Turing.
• It follows that the brain is exactly as powerful as a Turing machine. *

The second set inclusion is unsatisfying. Unfortunately, I can’t see any way to directly prove this hypothesis without tackling the physical Church-Turing thesis, so we’ll have to be satisfied with a conjecture backed by considerable evidence. If the brain is in fact super-Turing, and the super-Turing facilities are instrumental in the mechanism of intelligence, then AGI is impossible on modern computers. That said, I don’t see any reason that evolutionary fitness would benefit from the computation of uncomputable functions, considering how exotic they are. To use classical computers as a substrate, we must make the relatively safe assumption that the brain is exactly as capable as a Turing machine.


[1] Copeland, B. Jack, “The Church-Turing Thesis”, The Stanford Encyclopedia of Philosophy (Summer 2020 Edition), Edward N. Zalta (ed.).

[2] Broersma, Hajo, Susan Stepney, and Göran Wendin, “Computability and complexity of unconventional computing devices”, Computational Matter, Springer, Cham, 2018, pp. 185–229.

[3] Piccinini, Gualtiero, “Computation in Physical Systems”, The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N. Zalta (ed.).