We describe a style of computing that differs from traditional numeric and symbolic computing and is suited to modeling neural networks. We focus on one aspect of "neurocomputing," namely, computing with large random patterns, or high-dimensional random vectors: we ask what kind of computing they support and whether they can help us understand how the brain processes information and how the mind works. Rapidly developing hardware technology will soon be able to produce the massive circuits that this style of computing requires. This chapter develops a theory on which such computing could be based.
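One property that makes computing with high-dimensional random vectors attractive, and that the theory builds on, is that two independently drawn random patterns are almost certainly far apart: in a space of thousands of binary dimensions they disagree in very close to half their bits. The sketch below is illustrative only (the dimensionality, seed, and function names are assumptions, not from the chapter) and merely demonstrates this near-orthogonality numerically.

```python
import random

DIM = 10_000  # illustrative dimensionality; high dimension is what makes the effect reliable

def random_pattern(rng):
    # A random binary pattern: each bit is 0 or 1 with equal probability.
    return [rng.randint(0, 1) for _ in range(DIM)]

def hamming(a, b):
    # Number of positions in which the two patterns differ.
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(42)
a, b = random_pattern(rng), random_pattern(rng)

# Two independent random patterns disagree in about half their bits,
# so relative distance concentrates tightly around 0.5.
print(hamming(a, b) / DIM)  # close to 0.5
```

Because the relative distance between unrelated patterns concentrates so sharply around 0.5, a pattern need only be somewhat closer than that to another to be recognizably related to it, which is what makes this style of computing robust to noise.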
Chapter V includes these articles:

Kanerva, P. Analogy as a basis of computation. (pp. 254-272)
Sjoedin, G. The Sparchunk Code: A method to build higher-level structures in a sparsely encoded SDM. (pp. 272-282)
Kristoferson, J. Some results on activation and scaling of sparse distributed memory. (pp. 283-289)
Karlsson, R. A fast activation mechanism for the Kanerva SDM memory. (pp. 289-293)
Karlgren, J. and Sahlgren, M. From words to understanding. (pp. 294-308)

A compressed PostScript file of the chapter can be found at /home/kanerva/rwibook/final/V-SICS.ps.gz (493 kb).