Neuromorphic computing: The story so far
Inspired by theories about the mechanisms of memory and recall in the brain, neural networking is a digital simulation of how synapses may retain information once they have been trained to recognize patterns. Neural nets enable a computer, or perhaps a cloud-based service, to recognize the characters of printed text without a program explicitly specifying what text is, or to spot a certain face in a crowd after having seen several photographs of that face.
As a neural networking problem becomes linearly broader — for example, distinguishing one form of written text from another — the data required to train it grows exponentially larger. There’s a valid argument that some of the tasks being envisioned for neural nets, such as spotting when someone is becoming depressed or agitated, may be impossible even with today’s storage and memory technologies. So the revelations by researchers that structures composed of completely random assemblies of nanometer-scale wires can exhibit the electrical characteristics of memory in a brain perhaps shouldn’t be dismissed for much longer.
“I want to create a synthetic brain,” wrote Dr. James K. Gimzewski in October 2012 [PDF]. “I want to create a machine that thinks, a machine that possesses physical intelligence… Such a system does not exist and promises to cause a revolution one might call the post-human revolution.”
It is the stuff of so much science fiction that lately, sci-fi authors have steered away from the topic, for fear of coming off sounding like a retread — or a retread of a retread — of Isaac Asimov. The mechanism foreseen by Dr. Gimzewski and his colleagues at the California NanoSystems Institute of UCLA is, strangely enough, not a digital processor and not, in the context of modern electronics, a semiconductor. It is not, at least for now, about programming.
The question at the heart of his team’s research is this: If the process that constitutes natural memory is, at least at the atomic level, essentially mechanical, then instead of building a digital simulation of that mechanism, why not explore building an actual machine at the same atomic level that performs the same functions in the same way? Put another way, if the brain is an atomic machine, then why can’t an atomic machine be a brain?
Earlier at ZDNet, we introduced you to the concept of neuromorphic computing, and contrasted it with the realm of digital neural net simulation. In conventional simulations, the relative strength of a “synapse” compared with other synapses is represented by a value in memory — or, to be more accurate, in RAM. A “learned” pattern may weight the synapse so that, when an image closely matches one that the system has “seen” before, the weighted synapse is given precedence, and “fires” in an event analogous to the electrical impulse of a synapse in the brain.
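To make that weighting concrete, here is a minimal sketch in Python of a simulated synapse. The function name, weights, and threshold are all illustrative inventions, not anything from a production system: the weights live in RAM as plain values, and the “neuron” fires when the weighted sum of its inputs crosses a threshold.

```python
# A minimal sketch of a simulated "synapse": each input is multiplied by
# a weight held in RAM, and the neuron "fires" when the weighted sum of
# the inputs crosses a threshold. All names and numbers are illustrative.

def fires(inputs, weights, threshold=1.0):
    """Return True if the weighted inputs exceed the firing threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation >= threshold

# A pattern the system has "seen" before carries heavier weights, so a
# close match pushes the activation over the threshold.
print(fires([1, 0, 1], [0.9, 0.2, 0.8]))  # strong match -> True
print(fires([0, 1, 0], [0.9, 0.2, 0.8]))  # weak match -> False
```

Everything interesting in a real network happens in how those weights are set, which is the subject of training.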
Any neuromorphic architecture is an effort to build a system that actually works this way, rather than simulating it digitally. What conventional semiconductor-based computers lack with respect to simulating neural activity is, for lack of a more appropriate word, scale. A 2013 research project pairing Germany’s Jülich Research Centre and Japan’s RIKEN laboratory, involving RIKEN’s K supercomputer — the fastest at that time — successfully simulated one second of the neural activity observed in approximately 1 percent of a human brain, in a run that took about 40 minutes to execute. It took another five years of algorithmic resequencing before that team announced a methodology that pared down otherwise ancillary neural activity, speeding up execution by about five times.
At this rate, they should be able to simulate the neural activity required for a presidential tweet by about 2050.
Dr. Gimzewski’s epiphany — inspired by his close work over the decades, not only with Intel but with colleagues in physics and chemistry, including a Nobel laureate or two — is that the structures produced naturally through chemical reactions already possess behaviors similar to the switches (digital or physical) used in simulating the operation of synapses, particularly in how they conduct electricity. They resist the application of current, but over time, they resist less — a phenomenon associated with brain activity when an individual is perceived to be learning.
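That “resist less over time” behavior can be caricatured in a few lines of code. This is a toy decay model with made-up constants, not the team’s actual device physics: each stimulation relaxes the junction’s resistance toward a floor, so repeated use makes conduction progressively easier, the electrical analogue of learning.

```python
# A toy model (not real device physics) of a junction whose resistance
# drops each time current is driven through it, so repeated stimulation
# makes conduction easier. All constants are illustrative.

class Junction:
    def __init__(self, resistance=100.0, floor=5.0, decay=0.8):
        self.resistance = resistance  # starts high (arbitrary units)
        self.floor = floor            # minimum achievable resistance
        self.decay = decay            # fraction of excess retained per use

    def stimulate(self):
        """Pass current through; resistance relaxes toward the floor."""
        self.resistance = self.floor + (self.resistance - self.floor) * self.decay
        return self.resistance

j = Junction()
for _ in range(5):
    j.stimulate()
print(round(j.resistance, 1))  # -> 36.1, down from 100.0
```

The key point the sketch captures is that the “memory” is the physical state of the junction itself, not a number stored elsewhere.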
The UCLA team’s research centers on leveraging natural chemical phenomena at the atomic level as atomic switches, and their evidence reveals that if their chemically produced systems are treated like a natural memory (the receptive components of a brain that retain information), then they behave like a natural memory.
“If you take a machine learning analogy, we have a network, and we have some inputs and some outputs. In such systems, you have to train the network,” explained Gimzewski in an interview with ZDNet Scale. “In a conventional system, you have to train the network in that each synaptic connection in the system has a thing called a ‘weight.’ It’s just a number. The bigger the weight, the stronger the effect.”
The act of training the network — for example, by giving it more samples of the same class of data, such as recordings of one person’s voice, or images of one person’s face — changes the weights’ values. To the extent that these values become relatively high, developers say the system is “learning.” The greater the variety of possible, learnable entities in the training set (for example, multiple people’s faces), the more weights are required to establish differentiation. Even today, conventional digital supercomputers find learning complex patterns from nature difficult, and the results are less than optimal.
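The “training changes the weights” loop can be sketched with a bare-bones perceptron-style update, a deliberately simple stand-in for real training algorithms (the learning rate, threshold, and sample vectors are arbitrary): each sample the network fails to recognize nudges the weights along that sample’s active inputs.

```python
# A bare-bones perceptron-style sketch of "training": each misrecognized
# sample nudges the weights along its active inputs. Not any specific
# production algorithm; the learning rate and threshold are arbitrary.

def train(weights, samples, label, lr=0.1, threshold=1.0):
    for x in samples:
        predicted = sum(xi * wi for xi, wi in zip(x, weights)) >= threshold
        error = label - predicted  # +1 means "should have fired but didn't"
        weights = [wi + lr * error * xi for xi, wi in zip(x, weights)]
    return weights

# Repeated exposure to one pattern (say, one person's face vector)
# strengthens the weights on that pattern's active inputs until it fires.
w = [0.0, 0.0, 0.0]
for _ in range(20):
    w = train(w, [[1, 0, 1]], label=1)
print(w)  # weights on the two active inputs have grown; the middle stays 0
```

Scale this up to millions of weights and millions of samples, and the storage-and-memory objection in the opening paragraphs becomes easy to see.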
In a neuromorphic system, these weights are not digital. They are the products of atomic switches — devices composed of ions or pairs of ions whose binary quantum attributes may be manipulated to one state or the other. They’re like binary digits or bits, but in this case, they’re not electronic. An atomic switch may be “manufactured,” at least in one sense, by directly coercing a covalently bonded pair of ions to swap positions with one another, using a dynamic force microscope whose tip, like the needle of an atomic record player, is sharpened to a width of one atom.
But Gimzewski’s atomic switches aren’t made so much as grown. Continuing the work begun by Prof. Masakazu Aono at Japan’s International Center for Materials Nanoarchitectonics (MANA), his team chemically produces networks whose circuits are formed by silver sulfide nanowires. To be more specific, they treat a grid of copper posts, stationed one micron apart from one another, with silver nitrate. As a result, nanowires grow from these posts in completely random directions. One word to describe the shape of these structures is dendritic, which — not coincidentally — is also used to describe the structure of synapses in the brain.
After these dendrites have formed, sulfurizing the product enables the junctions formed where the nanowires touch to become connections. Gimzewski refers to these connections as synapses. At the atomic level, these synapses behave like the simulated synapses in a digital neural network, even though they are technically non-electronic.
Yet if you’ve ever studied a railway system, you know that simple switches determine the routes that trains take. If we assert that, like the internet itself, a system is defined by the routes it forms, it’s not too great a leap of logic to conclude that a system such as a brain is physically composed of the neurons, axons, and synapses that collectively carry out its functions. This is the attribute of the brain that neurologists call neuroplasticity. Applied to an artificial device such as a processor, an analogous attribute would be the device’s ability to build onto itself to fulfill a new function. The simplest way to attain this attribute would be through a rearrangement of switches.
To pretend that physicists and chemists are just now coming to the point of leveraging natural processes for computational or mathematical purposes, is to do an injustice to the people who gave rise to computing in the first place. Among Charles Babbage’s intentions for his calculating engine was to make obvious the notion that mathematics was merely a human interpretation of a greater, divine mechanism. As Babbage wrote in 1838:
To illustrate the distinction between a system to which the restoring hand of its contriver is applied, either frequently or at distant intervals, and one which had received at its first formation the impress of the will of its author, foreseeing the varied but yet necessary laws of its action throughout the whole of its existence, we must have recourse to some machine, the produce of human skill. But far as all such engines must ever be placed at an immeasurable interval below the simplest of Nature’s works, yet, from the vastness of those cycles which even human contrivance in some cases unfolds to our view, we may perhaps be enabled to form a faint estimate of the magnitude of that lowest step in the chain of reasoning, which leads us up to Nature’s God.
The work of James Gimzewski’s team has demonstrated that a mechanism assembling itself from the random whimsy of a chemical process can exhibit a phenomenon commonly associated with a digital simulation, whose own intent is to behave like the thing in nature this mechanism calls, if faintly, to mind: the brain’s neocortex. Nature can imitate the imitator, and perhaps in so doing, have the last laugh.
But it is here that the professor would have us take the biggest leap of both faith and logic: a mental reach of Babbagian proportions. In moving forward with his research, he sought to model what neurologists call the neuropil — the densest collection of synapses in the brain, collecting countless nerve fibers together. At one point, he estimated a synthetic interconnection density of one billion per square centimeter, which is denser than the arrays of transistors in modern semiconductors.
This neuronal “fabric,” to borrow a term from computer networking, is grown chemically in what Gimzewski calls a “bottom-up” process. It’s then interfaced with an electrode grid, an ordinary device composed of 64 (or, on occasion, 128) copper outputs, conventionally fabricated from the top down. That interface enables a multi-electrode readout, similar to how neurologists scan for brain activity.
“In the type of circuit we produce,” Gimzewski continued, “the behavior of an individual element — in the atomic switch, an individual junction — is not so important to us. It’s the system-wide activity of the whole device, and how it’s spatially and temporally organized, that we’re concerned with.”
The dendritic network formed by these self-assembling atomic switches, Gimzewski asserts, has adopted a style of learning — a model that corresponds in many ways to what engineers of neuronal networks (using simulated neurons) call reservoir computing (RC).
There isn’t necessarily any linear correlation between the sequence of the input signals and the signals recorded from the outputs. So, for instance, a perfect sine wave employed for the input would not yield a sine wave in any one of the outputs individually.
What happens nevertheless, states the professor, for reasons that are not yet totally explainable, is that the dendritic paths appear to work things out for themselves. “When they’re all combined, they start to talk to each other,” he said. “In a way, the whole circuitry comes alive, in a sense, in that every part is interacting with every other part. And there are pathways in which we can establish stronger neuromorphic connections.”
In an RC network, weights are associated at the output layer, where the results are registered. “Then by a thing called linear regression, which is conventional computation, we reconstruct the waveform.”
Meaning, all of the outputs put together form a matrix upon which linear regression may be applied, to extract a pattern reconstructed from the inputs. So if sine waves were input, the result of the computation would be a sine wave; if a person’s voice was used as input, an audible extrapolation of that voice would appear in the outputs.
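The readout procedure the professor describes maps directly onto textbook reservoir computing. Below is a compact, hypothetical sketch using numpy (all sizes, couplings, and the random seed are invented for illustration): a fixed random “reservoir” stands in for the nanowire network, 64 simulated readouts stand in for the electrode grid, and only a linear regression over those readouts is trained to reconstruct the sine-wave input.

```python
# A compact reservoir computing (RC) sketch: a fixed random reservoir
# transforms the input nonlinearly, and only a linear regression over
# its readouts is trained. All sizes and seeds are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 64          # reservoir readouts, loosely analogous to the electrode grid
T = 300         # time steps of the driving signal

# Fixed random couplings -- in RC these are never trained.
w_in = rng.uniform(-1.0, 1.0, N)
W = rng.uniform(-0.1, 0.1, (N, N))

u = np.sin(0.1 * np.arange(T))       # sine-wave input
states = np.zeros((T, N))
s = np.zeros(N)
for t in range(T):
    # Each readout is a nonlinear, history-dependent mix of the input.
    s = np.tanh(W @ s + w_in * u[t])
    states[t] = s

# Train only the linear readout, via least-squares linear regression.
w_out, *_ = np.linalg.lstsq(states, u, rcond=None)
recon = states @ w_out
print(np.abs(recon - u).max())       # maximum reconstruction error
```

Notice that no individual column of `states` looks like the sine wave, just as no single electrode output does; the waveform only reappears once the linear readout combines them all, which is why the scheme suits a physical medium whose internal dynamics can’t be programmed.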
Thus a network of naturally occurring phenomena that is grown, not programmed, may be treated like a neural network, and in response it behaves like one — not an ordinary neural net, mind you, but the most sophisticated class in current use.
His extrapolation does not stop there. Gimzewski goes on to correlate the behavior of his neuropil network with an actual psychology — an actual theory of human cognition. The so-called Multistore Model is a theoretical framework for human memory, first proposed in 1968 by UCSD Chancellor Emeritus Dr. Richard C. Atkinson and Indiana University cognitive science professor Dr. Richard M. Shiffrin. It divides memory into three structural components: short-term sensory retention, relatively short-term “working memory,” and long-term, permanent memory. Information gathered from the senses travels through the shorter-term stages toward the permanent store, or else is allowed to decay and be forgotten.
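As a schematic only (the thresholds and rehearsal counts below are invented, and the real model is a far richer account), the multistore flow can be expressed as a simple decision path: unattended input decays from the sensory store, insufficiently rehearsed items decay from working memory, and only the rest consolidates.

```python
# A deliberately crude schematic of the Atkinson-Shiffrin multistore
# flow: sensory input either decays immediately, decays from working
# memory, or -- with enough rehearsal -- consolidates into long-term
# memory. The rehearsal threshold is arbitrary and purely illustrative.

def remember(item, attended, rehearsals, rehearsal_threshold=3):
    """Trace an item through the three stores; return its fate."""
    if not attended:
        return "decayed from sensory store"
    if rehearsals < rehearsal_threshold:
        return "decayed from working memory"
    return "consolidated into long-term memory"

print(remember("phone number", attended=True, rehearsals=5))
print(remember("background noise", attended=False, rehearsals=0))
```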
It wasn’t the way the dendritic network remembers information that compelled Gimzewski to draw this correlation with Atkinson-Shiffrin, but rather how human-like the system seemed when it forgot.
“It’s dangerous to directly correlate things like, ‘This is a brain!’” the professor acknowledged at one point. “It’s exhibiting electrical characteristics which are very similar to a functional MRI of brains, similar to the electric characteristics of neuronal cultures, and also EEG patterns. We call it self-organized criticality, which is a whole area of science that’s accepted, more or less. Some people may disagree, but it’s generally accepted now that the brain does exhibit a similar electrical characteristic to what we have in our circuitry, [which] is fairly unique in terms of its function. We’re not trying to make a deterministic system. We’re letting the system self-assemble, and then observing what it does and try to learn from it.”
In my family, there’s a phrase that crops up in conversation that dates back to about 1973, when the historian Jacob Bronowski appeared on NBC’s Today show to discuss his book, The Ascent of Man . He was explaining to then-host Frank McGee why he believed ancient civilizations, such as the Aztecs, had a deeper and more accurate understanding of space and time than in the Renaissance, based on his examinations of their calendars. The phrase, uttered first by McGee and mocked shamelessly by my mother for three decades thereafter, is this: “What’s that have to do with the price of tea in China?”
So you can grow a memory in a jar, plug it into a Radio Shack electrode grid kit, and make it repeat things after you. Would that bring to an end the war in Vietnam?
It’s fair to say we’re not talking about a system that, once implanted into a Galaxy S29 smartphone, would leverage neuroplasticity to make itself into a Galaxy S30. From a purely practical standpoint, the Gimzewski team’s research points the way toward replacing conventional digital supercomputers with an entirely new form of machine for tasks that require inductive reasoning. It would be a time-sharing system, provisioned through a cloud or cloud-like service, and at least theoretically, it may be far more economical to run and manage.
But today, and for the foreseeable future, it’s like the state of my bedroom in 1973: a science experiment.
“If you saw the device itself, it’s connected by a whole bunch of wires to machines that are basically connected to the computer, which does all the analysis,” said the professor, repressing a few giggles. “It’s not like we could operate this thing without any transistors or integrated circuits — we can’t… It’s just part of the whole system. There’s not really just the brain and nothing else.”
Yet if the brain is like that UCLA laboratory — a bunch of wires thrown together haphazardly, which through some imperceptible process concocts the processes with which we drive cars, converse, write long articles, and create new neuromorphic devices — then Prof. Gimzewski is bringing all of us to the same doorstep of realization that Charles Babbage and Jacob Bronowski led us to (excepting, of course, Frank McGee). If we can reproduce, down to the last atom, everything that comprises the physical human system, and yet we end up with just another cloud computing service, then there must be some significant element we’re missing on our checklist.
“Memory is what we are,” wrote Atkinson and Shiffrin near the 50th anniversary of their modal model of memory, “and what defines us as individuals.” If that is true, then we may want to revisit the subject of what it is we truly are, once we have successfully automated the process of growing it in a glass jar.
Learn more — From the CBS Interactive Network
What neuromorphic engineering is, and why it’s triggered an analog revolution by Scott M. Fulton, III, ZDNet
Intel’s neuro guru slams deep learning: ‘It’s not actually learning’ by Tiernan Ray, ZDNet
The AI chip unicorn that’s about to revolutionize everything has computational Graph at its Core by George Anadiotis, ZDNet
Neuromorphic Atomic Switch Networks by James K. Gimzewski, Masakazu Aono, et al.
Why Initialize a Neural Network with Random Weights? by Dr. Jason Brownlee, Machine Learning Mastery
ORNL Neuromorphic Work Suggests Direct Computer-to-Brain Potential by John Russell, HPC Wire
The “memory map” featured in this edition of ZDNet Scale was inspired by the drawings of Santiago Ramón y Cajal, who discovered the guiding principles for the operation of neurons in the brain. He first published his discoveries accompanied by original ink drawings, leveraging his skills as an artist — using, many would say today, both sides of his brain. For this, Cajal was awarded the Nobel Prize in 1906. The example above is in the collection of the Instituto Cajal in Madrid, and this photo is in the public domain.
Neuromorphic computing and the brain that wouldn’t die was published on www.zdnet.com on February 27, 2019.