Computing with large random patterns
Number of authors: 11
2001 (English)
In: Foundations of Real-World Intelligence, Stanford, California: CSLI Publications, 2001, 1, pp. 251-311
Chapter in book, part of anthology (Refereed)
Abstract [en]
We describe a style of computing that differs from traditional numeric and symbolic computing and is suited for modeling neural networks. We focus on one aspect of "neurocomputing," namely, computing with large random patterns, or high-dimensional random vectors, and ask what kind of computing they perform and whether they can help us understand how the brain processes information and how the mind works. Rapidly developing hardware technology will soon be able to produce the massive circuits that this style of computing requires. This chapter develops a theory on which the computing could be based.
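To make the idea concrete, here is a minimal sketch, not taken from the chapter, of computing with high-dimensional random binary vectors in the general style the abstract describes: binding by bitwise XOR, bundling by a bitwise majority vote, and comparison by normalized Hamming distance. The dimensionality (10,000), the specific operators, and the toy role-filler record are illustrative assumptions, written here in Python.

# Illustrative sketch (not from the chapter): computing with high-dimensional
# random binary vectors. Binding is bitwise XOR, bundling is a per-bit
# majority vote, and similarity is normalized Hamming distance.
import numpy as np

DIM = 10_000                       # "large" dimensionality is essential
rng = np.random.default_rng(0)

def random_pattern():
    """A dense random binary vector; any two are nearly orthogonal."""
    return rng.integers(0, 2, DIM, dtype=np.uint8)

def bind(a, b):
    """XOR binding: invertible, and the result is dissimilar to both inputs."""
    return a ^ b

def bundle(*vectors):
    """Per-bit majority vote: the result stays similar to each input."""
    counts = np.sum(vectors, axis=0)
    return (counts * 2 > len(vectors)).astype(np.uint8)

def hamming(a, b):
    """Normalized Hamming distance: ~0.5 for unrelated patterns, 0 for equal."""
    return np.mean(a != b)

# Encode a tiny record {color: red, shape: ball} as a single vector.
color, red, shape, ball = (random_pattern() for _ in range(4))
record = bundle(bind(color, red), bind(shape, ball), random_pattern())

# Query: unbind the role "color" and see which filler the result resembles.
probe = bind(record, color)
print(hamming(probe, red))    # well below 0.5: recognizably "red"
print(hamming(probe, ball))   # about 0.5: unrelated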
Place, publisher, year, edition, pages
Stanford, California: CSLI Publications, 2001, 1. pp. 251-311
National subject category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-22648
OAI: oai:DiVA.org:ri-22648
DiVA, id: diva2:1042213
Note
Chapter V includes these articles:
Kanerva, P. Analogy as a basis of computation. (pp. 254-272)
Sjoedin, G. The Sparchunk Code: A method to build higher-level structures in a sparsely encoded SDM. (pp. 272-282)
Kristoferson, J. Some results on activation and scaling of sparse distributed memory. (pp. 283-289)
Karlsson, R. A fast activation mechanism for the Kanerva SDM memory. (pp. 289-293)
Karlgren, J. and Sahlgren, M. From words to understanding. (pp. 294-308)
A compressed PostScript file for the chapter can be found at /home/kanerva/rwibook/final/V-SICS.ps.gz (493 kb)
Available from: 2016-10-31 Created: 2016-10-31 Last updated: 2023-05-09 Bibliographically approved