Reservoir computing



The most interesting lesson from reservoir computing is that you can drive a dynamical system into some transient state and then decode that state into the result of a computation, without the system ever converging to a stable attractor. Such a system retains a memory of its past inputs, and one important goal for future research is to explore the connection between this kind of memory and human memory.
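To make this concrete, here is a minimal sketch of an echo state network, one standard flavor of reservoir computing: a fixed random recurrent network is driven by an input, and only a linear readout is trained to decode the reservoir's transient states. All hyperparameters (reservoir size, spectral radius, the delayed-recall task) are illustrative choices, not from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_res = 200              # reservoir size (illustrative)
spectral_radius = 0.9    # < 1 gives fading "echo" memory

# Fixed random recurrent weights, rescaled to the chosen spectral radius.
W = rng.standard_normal((n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect every state."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        # The state never settles into an attractor; it keeps echoing inputs.
        x = np.tanh(W @ x + W_in * u_t)
        states[t] = x
    return states

# Toy task: reproduce the input delayed by 5 steps -- pure short-term memory.
delay = 5
u = rng.uniform(-1, 1, size=2000)
target = np.roll(u, delay)

X = run_reservoir(u)
washout = 100                      # discard the initial transient
X_tr, y_tr = X[washout:], target[washout:]

# Train only the linear readout (ridge regression); the reservoir is untouched.
ridge = 1e-6
W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(n_res), X_tr.T @ y_tr)

pred = X_tr @ W_out
print("readout correlation:", np.corrcoef(pred, y_tr)[0, 1])
```

The point of the sketch is the division of labor: the reservoir's only job is to hold a rich, fading trace of recent inputs in its transient dynamics, and all of the learning happens in the cheap linear decoder bolted on top.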

This diagram shows how I imagine information decoding in the brain could work. Some stimulus (a visual scene) induces a dynamical state (the squiggly arrows) in a brain region (the first lumpy thing). Then probes connected to a readout system (the dots and straight arrows) extract relevant information and transmit it to another brain region, inducing a dynamical state there, and so on.

Overall, I don’t think reservoir computing tells us much about how to design a brain. Even so, it demonstrates that computers don’t need to be hierarchical and structured in the ways we’re used to. As we study the properties of the brain, it will help to keep that in mind.


  1. Fernando, C., & Sojakka, S. (2003). Pattern recognition in a bucket. Lecture Notes in Computer Science, 588–597. ↩︎

  2. Jaeger, H. (2007). Echo state network. Scholarpedia, 2(9):2330. ↩︎

  3. Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560. ↩︎

  4. Sporns, O. (2007). Brain connectivity. Scholarpedia, 2(10):4695. ↩︎