Continuous-time computation

The brain operates asynchronously over continuous time. Since our artificial computers use global clocks and discrete time, a natural question is whether the continuity of time matters with respect to the results or efficiency of the computation.

Fortunately, it is easy to see that we can ignore most of the implications of continuous-time computation. Dynamical systems can be approximated arbitrarily well in discrete time by decreasing the duration of the timestep. This is because the system is governed by differential equations, whose solutions are differentiable and hence locally well-approximated by their tangent lines, so a repeated tangent-line approximation can be made (Euler's method). Of course, the approximate solution could still deviate, and perhaps even become arbitrarily distant, from the true solution for a particular initial condition, because the per-step truncation error accumulates and the system's state is represented with finite precision.
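
To make the tangent-line idea concrete, here is a minimal sketch (my own toy example, not from the text) of forward Euler applied to dx/dt = -x, whose exact solution is x(t) = x₀e⁻ᵗ. The function and parameter names are illustrative; the point is only that shrinking the timestep shrinks the approximation error.

```python
import math

def euler(f, x0, t_end, dt):
    """Repeated tangent-line steps: x_{k+1} = x_k + dt * f(x_k)."""
    n_steps = int(round(t_end / dt))
    x = x0
    for _ in range(n_steps):
        x += dt * f(x)
    return x

f = lambda x: -x          # vector field of the toy system dx/dt = -x
exact = math.exp(-1.0)    # exact value of x(1) for x0 = 1

for dt in (0.1, 0.01, 0.001):
    approx = euler(f, x0=1.0, t_end=1.0, dt=dt)
    print(f"dt={dt:<6} x(1)≈{approx:.6f}  error={abs(approx - exact):.2e}")
```

Each tenfold reduction in dt reduces the error roughly tenfold, which is the usual first-order behavior of Euler's method.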

However, all physical systems are subject to noise from a variety of sources, so the state vector effectively has finite precision even in real life. This is like a solution “band” instead of a solution curve; the noise groups nearby solutions into equivalence classes. To reliably perform computation, the system must be robust to a certain level of noise, and entire solution bands must converge to the desired result. So long as our timestep is small enough that the approximation error at each step is less than the noise contribution, we can expect any sufficiently robust computational system to produce the desired result, whether through attractor dynamics or error-correction mechanisms.
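
Here is a hedged illustration of the "solution band" picture, using a toy bistable system of my own choosing (not from the text): dx/dt = x - x³ has attractors at x = ±1. Even with per-step noise and a coarse Euler timestep, every trajectory starting on the positive side settles onto the same attractor, so the computed answer (the sign of x) is robust.

```python
import random

def noisy_euler(x0, dt=0.05, t_end=20.0, noise=0.01, seed=0):
    """Euler integration of dx/dt = x - x**3 with additive Gaussian noise."""
    rng = random.Random(seed)
    x = x0
    for _ in range(int(t_end / dt)):
        drift = x - x**3                      # -V'(x) for V(x) = x**4/4 - x**2/2
        x += dt * drift + noise * rng.gauss(0.0, 1.0) * dt**0.5
    return x

# Several different initial conditions on the positive side all land near +1.
for x0 in (0.05, 0.3, 0.7, 1.5):
    print(f"x0={x0:4}  ->  x(T)={noisy_euler(x0):+.3f}")
```

The noise smears each trajectory into a band, but because the whole band converges to the same attractor, the result of the "computation" is unaffected.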

The presence of noise encourages us to believe that all physical computers must be structurally stable, and likely prohibits the exploitation of chaotic dynamics in practice.* Be skeptical! This relates closely to a conjecture by Cristopher Moore,1 which was brought to my attention by an interesting and relevant blog post by Hessam Akhlaghpour.
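
A toy demonstration of why noise gets in the way of exploiting chaos (my own example, not Moore's construction): in the chaotic logistic map, two states that differ by far less than any realistic noise floor diverge to completely different trajectories within a few dozen steps, so any answer encoded in that fine-grained state cannot survive.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

x_a, x_b = 0.4, 0.4 + 1e-10    # initial difference far below typical noise levels
for step in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")
```

By around forty steps the separation is of order one, meaning the two trajectories share no usable information about their common origin.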

The question of efficiency remains; numerically approximating trajectories of a high-dimensional dynamical system that performs computation like the brain may be intractable on modern hardware. Can we do much better, by finding computational primitives in the brain that define a rule-based system independent of physical time, and then implementing those primitives directly in software? This question will guide future exploration.
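
As a rough, order-of-magnitude sketch of the intractability worry (all of the numbers below are my assumptions, not figures from the text), consider the cost of stepping every synapse of a brain-scale dynamical system at a sub-millisecond Euler timestep:

```python
# Back-of-envelope estimate; every constant here is an assumed order of magnitude.
synapses       = 1e14    # assumed synapse count
dt             = 1e-4    # assumed timestep in seconds (0.1 ms)
ops_per_update = 10      # assumed arithmetic operations per synapse per step
gpu_flops      = 1e15    # assumed sustained throughput of one modern accelerator

ops_per_sim_second = synapses * ops_per_update / dt
print(f"operations per simulated second: {ops_per_sim_second:.1e}")
print(f"slowdown on one accelerator: ~{ops_per_sim_second / gpu_flops:.0f}x real time")
```

Under these assumptions a single accelerator runs the simulation thousands of times slower than real time, which is what motivates looking for time-independent computational primitives instead.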



  1. Moore, Cristopher. “Finite-dimensional analog computers: Flows, maps, and recurrent neural networks.” Proceedings of the First International Conference on Unconventional Models of Computation (UMC’98), 1998. ↩︎