Description
Reservoir computing exploits the properties of random dynamical systems to perform computation. Two components of a reservoir computer:

- A source of nonlinear dynamics (the “reservoir”)
  - For example, a random recurrent neural network, or a bucket of water with a camera observing the surface^{1}
  - Provides complex interactions that can be mapped to computations
  - Reservoir parameters are fixed, so the reservoir is not involved in training
- A linear readout system
  - Can measure values from the reservoir
    - Imagine that there are “probes” that the readout system uses to measure certain aspects of the system
    - Measurement happens at the same fixed “place” (could be a specific variable, or some combination of variables) at regular intervals
  - Usually a linear classifier
    - Parameters are trained per problem
    - Multiple readouts can be used for parallel computation
- Since the reservoir is fixed, fewer trainable parameters are needed for the same model complexity
- The key idea is that by feeding data into the reservoir, computations happen “naturally” and just need to be decoded linearly
- Two kinds of reservoir computing are relevant to the brain:
  - Echo state networks^{2} (ESNs) are a random recurrent neural network + linear classifier
  - Liquid state machines^{3} (LSMs) are a random spiking neural network + linear classifier
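
The ESN recipe above can be made concrete in a short sketch. This is a minimal NumPy illustration, not any particular library's API; the reservoir size, spectral radius of 0.9, weight scales, and the two target tasks are all illustrative choices. The reservoir weights are fixed and random; only the linear readouts are fit (here by ridge regression), and two readouts are trained in parallel on the same states: one recalls the input from 5 steps ago, the other computes a nonlinear function of the current input.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Fixed random reservoir (never trained) ---
N = 300                                       # reservoir size (illustrative)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9, a common
                                                 # heuristic for the echo state property
W_in = rng.uniform(-0.5, 0.5, size=N)         # input weights
b = rng.uniform(-0.5, 0.5, size=N)            # unit biases

# --- Drive the reservoir with a random input signal ---
T = 2000
u = rng.uniform(-1, 1, size=T)
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t] + b)      # nonlinear reservoir update
    states[t] = x

# --- Two parallel linear readouts trained on the same states ---
# target 1: the input from 5 steps ago (memory)
# target 2: the cube of the current input (a nonlinear function)
Y = np.stack([np.roll(u, 5), u ** 3], axis=1)

washout, split = 100, 1500                    # drop the transient, then train/test split
Xtr, Ytr = states[washout:split], Y[washout:split]
Xte, Yte = states[split:], Y[split:]

ridge = 1e-6                                  # ridge (Tikhonov) regularization
W_out = np.linalg.solve(Xtr.T @ Xtr + ridge * np.eye(N), Xtr.T @ Ytr)

rmse = np.sqrt(np.mean((Xte @ W_out - Yte) ** 2, axis=0))
print(rmse)                                   # one error per readout
```

Note that training touches only `W_out`: both tasks are "found" in the same fixed dynamics and decoded by different linear maps, which is the parallel-readout idea from the list above.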
Discussion
- Surprisingly, computation is possible even if the computer is mostly random
- An easy way to get parallelism is to use “branches” (multiple readout layers, in this case)
  - This is true even for regular neural networks
 The reservoir turns a learning problem into a search problem: find the desired nonlinear dynamics in the reservoir
 so a random reservoir has to be much larger than a learned reservoir to reliably contain those dynamics*Be skeptical!
 similarly, a learned reservoir of a given size will always outperform a random one*Be skeptical!
 (biological) evolution would favor learning to reduce size and energy consumption*Be skeptical!
- The reservoir framework doesn’t say much about how to construct the reservoir
- Synapses in the brain are not random^{4}
The most interesting fact from reservoir computing is that you can put a dynamical system into some state, and then decode the state into the result of a computation without the system converging to a stable attractor. Such a system has memory of its past states, and one important goal for future research is to explore the connection between this memory and human memory.
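
One way to see this memory concretely: drive a reservoir with one of two short input patterns, wait several silent steps, and then decode which pattern was seen from the current state alone. The sketch below is illustrative, with hypothetical patterns and parameter choices (reservoir size, spectral radius, a 5-step silent gap); each trial starts from a random initial state, so the classifier must rely on the pattern's lingering trace in the transient dynamics, not on any stable attractor.

```python
import numpy as np

rng = np.random.default_rng(7)

# Fixed random reservoir, as in a standard ESN setup.
N = 200
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, size=N)

# Two hypothetical 3-step input patterns, labeled +1 and -1.
patterns = {+1: [1.0, -1.0, 1.0], -1: [-1.0, 1.0, 1.0]}
silence = 5   # steps of zero input between pattern end and readout

def final_state(label, rng):
    """Run one trial from a random initial state and return the
    reservoir state `silence` steps after the pattern ended."""
    x = 0.5 * rng.normal(size=N)                  # unknown initial condition
    for u in patterns[label] + [0.0] * silence:
        x = np.tanh(W @ x + W_in * u)
    return x

labels = rng.choice([+1, -1], size=400)
X = np.array([final_state(lab, rng) for lab in labels])

# Least-squares linear readout: classify the pattern from the later state.
Xtr, ytr, Xte, yte = X[:300], labels[:300], X[300:], labels[300:]
w = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]
acc = np.mean(np.sign(Xte @ w) == yte)
print(acc)    # held-out classification accuracy
```

If the readout classifies held-out trials well above chance, the state still carries linearly decodable information about an input that ended several steps earlier, which is the fading-memory property the paragraph above points at.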
This diagram is how I imagine information decoding in the brain could work. Some stimulus (a visual scene) induces a dynamical state (the squiggly arrows) in a brain region (the first lumpy thing). Then, probes connected to a readout system (the dots and straight arrows) extract relevant information and transmit it to another brain region, inducing a dynamical state there, and so on.
Overall, I don’t think reservoir computing gives us much information about how to design a brain. Even so, it proves that computers don’t need to be hierarchical and structured in the ways we’re used to. As we study the properties of the brain, it will help to keep that in mind.

Fernando, C., & Sojakka, S. (2003). Pattern Recognition in a Bucket. Lecture Notes in Computer Science, 588–597. ↩︎

Jaeger, H. (2007). Echo state network. Scholarpedia, 2(9):2330. ↩︎

Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11), 2531–2560. ↩︎

Sporns, O. (2007). Brain connectivity. Scholarpedia, 2(10):4695. ↩︎