Source: http://www.physicscentral.com/buzz/blog/index.cfm?postid=7058513908763186701
======================================================================
An overview of one of the neuron designs. The key components include the superconducting detectors (running parallel within the square), the connecting waveguides coming out from three sides, and the LED (bottom right).
Image credit: J. M. Shainline et al., Phys. Rev. Applied (2017)
Thursday, March 23, 2017
You have 100 billion neurons in your brain, each one connected to a multitude of others. Every time you think, feel, or move, neurons in this massive network react, rapidly sending, processing, and receiving signals. Through this behind-the-scenes activity we learn about and navigate the world. Well, through our brains and Google.
Traditional computers excel at logic and math and are excellent rule followers. From business transactions to social interactions, they have revolutionized our way of life. However, they often fall short when it comes to complicated problems like modeling climate systems, monitoring markets, and detecting computer hacking attacks. These systems involve large, noisy sets of data, and it’s difficult for computers to tease out the important information.
In research published in the American Physical Society’s journal Physical Review Applied, a team of scientists from the National Institute of Standards and Technology (NIST) in Boulder, Colorado, proposes a new hardware system for addressing these problems. This system is based on a network model, like the brain, and is made of superconductors and light. Their designs could enable powerful new computers that can perceive their environment, learn, and solve complicated problems.
“I’m inspired by the vast possibilities for information processing that can be realized if we move beyond the hardware utilized by existing computers, beyond the architecture that really hasn’t changed much in five decades, and beyond the semiconductor physics of transistors that has proven so useful for implementing binary digits,” says Jeffrey Shainline, lead author of the new paper. “What can be achieved if we add light to the platform? What happens if we add new materials—like superconductors? We have no idea where the possibilities end.”
Typical modern computers have one to four central processing units with several registers that hold information. They process information sequentially and behave according to a set of programmed rules. A combination of history, economics, and technology led to the popularity of this design over others and it works well for a variety of applications. For complicated, noisy situations like modeling the climate, though, it’s much more efficient to use a different kind of computer—one that can process many things all at once and learn over time (like the human brain). These are called neural networks because they are inspired by the giant web of interconnected neurons in the brain and nervous system.
Neurons are highly specialized cells. They consist of a body that houses the cell nucleus, branching structures called dendrites that typically detect signals, and a long arm called an axon that typically transmits information. Neurons are connected to one another by synapses, special junctions through which neurons can send chemical and electrical signals.
The basic process works like this: A dendrite receives a chemical signal from another neuron upstream; the cell then fires an electrical signal—really just a flood of charged ions like sodium and potassium—down the axon. The axon transmits this electrochemical signal across the synapse to one or more neurons downstream. In reality it’s not that simple, of course. Each neuron can be connected to thousands of other neurons, there are many different types of signals, and not every signal gets passed on every time—signals may be combined and then subject to thresholds or limits.
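To make that thresholding behavior concrete, here is a minimal sketch in Python of a toy "fire when the combined input crosses a threshold" neuron. The function name and the threshold value are illustrative inventions, not anything from the NIST paper:

    # A toy threshold neuron: incoming signal strengths are summed, and
    # the neuron "fires" only if the combined input crosses a threshold.
    # This is a pedagogical simplification, not the NIST design.
    def fire_if_threshold(inputs, threshold=1.0):
        """Return True (fire) if the combined input crosses the threshold."""
        return sum(inputs) >= threshold

    # Three weak upstream signals combine to cross the threshold...
    print(fire_if_threshold([0.4, 0.4, 0.4]))  # True: signal passed on
    # ...but a single weak signal is not passed on.
    print(fire_if_threshold([0.4]))            # False: below threshold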
In a neural network computer system, every neuron is a processing unit that sends and receives signals from several other units. Making a neural network on the order of the human brain would require 100 billion processing units that can each interact with thousands of other units. Imagine trying to do that with traditional electronics and wires! It’s just not practical to make that many connections and the system would consume a huge amount of power.
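A quick back-of-envelope calculation, using only the round numbers quoted above, shows the scale of the wiring problem (a rough sketch, not a figure from the paper):

    # Rough count of the point-to-point links a brain-scale network needs,
    # using the article's own round numbers.
    neurons = 100e9     # 100 billion processing units
    fanout = 1_000      # each talks to thousands of others (order of magnitude)
    links = neurons * fanout
    print(f"{links:.0e} physical connections")  # ~1e+14: impractical to wire directly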
But recent advances in technology and machine learning made it possible for Shainline, Sonia Buckley, Richard Mirin, and Sae Woo Nam at NIST to imagine creating this kind of network using superconducting components as neurons (instead of cells) and light as the signal (instead of ions). Not only did they imagine this network, they designed it.
Over the course of four years, the team identified the components needed to build it, chose the best options from among current technologies, outlined specific designs, and calculated the energy required per neuron firing for two different designs—in both cases, less than the energy the brain expends per firing. The design is compact, simple to build, energy-efficient, and easily scaled up or down.
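For a sense of scale, here is a back-of-envelope estimate of the brain’s energy cost per neuron firing, using commonly cited round numbers (about 20 watts of brain power, 100 billion neurons, and an average firing rate on the order of 1 Hz). These inputs are illustrative assumptions, not values from the paper:

    # Estimate the brain's energy budget per neuron firing.
    # All inputs are commonly cited round numbers, not paper values.
    brain_power = 20.0   # watts, a typical estimate for the human brain
    neurons = 1e11       # about 100 billion neurons
    rate_hz = 1.0        # average firing rate, order of magnitude
    energy_per_firing = brain_power / (neurons * rate_hz)
    print(f"{energy_per_firing:.0e} J per firing")  # ~2e-10 J (0.2 nanojoules)

A faint optical pulse of a hundred or so photons, by contrast, carries only on the order of 1e-17 J, which suggests why a light-based design can plausibly come in below the brain’s budget.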
This system transmits information via very faint pulses of light, so the “neurons” must be able to detect, process, and transmit these faint pulses. The researchers suggest that this can be accomplished with tiny superconducting wires—wires that have zero resistance to electricity at extremely low temperatures, but a higher resistance at higher temperatures.
When a superconducting wire absorbs a light pulse, its temperature rises. Their design hinges on biasing the wire just below its transition temperature: cold enough to be superconducting, but close enough to the threshold that absorbing even a faint pulse of light warms it into the non-superconducting, resistive state.
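The following toy model captures that biasing trick; every number in it is a placeholder chosen for illustration, not a real device parameter:

    # Toy model of the superconducting detector: the wire sits just below
    # its transition temperature, so even one absorbed pulse's worth of
    # heat pushes it into the resistive state.
    # All values are illustrative placeholders, not device parameters.
    T_C = 10.0        # transition temperature (arbitrary units)
    T_BIAS = 9.9      # operating point, just below T_C
    DT_PULSE = 0.2    # temperature rise from one absorbed light pulse

    def is_superconducting(temperature):
        return temperature < T_C

    temp = T_BIAS
    print(is_superconducting(temp))  # True: zero resistance
    temp += DT_PULSE                 # a faint light pulse arrives
    print(is_superconducting(temp))  # False: the wire is now resistive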
Let’s say you give electricity the option to travel either through the superconducting wire or through a small LED. Current takes the path of least resistance, so it flows through the superconducting wire instead of the LED (LEDs have some internal resistance). But if the wire heats up so that its resistance exceeds the LED’s, the current diverts through the LED instead, powering it to produce a pulse of light.
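The same switching logic can be sketched as a two-branch current divider; again, the resistance values below are placeholders rather than the paper’s device parameters:

    # Current steering between the wire and the LED, modeled as a simple
    # two-branch current divider. All values are illustrative placeholders.
    def led_current(bias_current, r_wire, r_led):
        """Fraction of the bias current that flows through the LED branch."""
        if r_wire == 0.0:
            return 0.0  # superconducting wire shorts out the LED branch
        return bias_current * r_wire / (r_wire + r_led)

    I_BIAS = 1.0   # bias current (arbitrary units)
    R_LED = 50.0   # LED branch resistance (placeholder)

    print(led_current(I_BIAS, r_wire=0.0, r_led=R_LED))    # 0.0: LED stays dark
    print(led_current(I_BIAS, r_wire=500.0, r_led=R_LED))  # ~0.91: LED emits a pulse

Once the wire cools back below its transition temperature, its resistance drops back to zero and the circuit is ready to detect the next pulse.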
In this analogy, the superconducting wire takes the place of the dendrites in detecting the signal, and the LED takes the place of the axon in sending the signal downstream. The synapses are replaced by tiny waveguides that route the light pulses to other neurons.
This physics forms the backbone of the team’s design. Their work involves careful study of the necessary components, possible designs of each component based on current technology, and possible geometrical arrangements to optimize energy-efficiency, space, and functionality.
The personal computers and phones of the future probably won’t be neural networks. The structure of the typical computer can meet most of our needs more efficiently and simply than neural networks. Neural networks can be powerful tools, but they are complicated and expensive even when made from more traditional materials.
As we work to understand complicated situations with large, messy data sets that evolve over time, neural networks based on these new designs could enable significant progress. In addition, say the authors, we could use these designs to build biologically realistic hardware that could simulate complex systems, like the visual cortex in the brain, and fill in some of the gaps in our understanding of how these systems manage information.
“We refer to the present as the Information Age, but the changes we’ve witnessed due to semiconductor-based computation is just the infancy of a much larger evolution,” says Shainline. “At the present moment, we simply cannot conceive of the power of [advanced computing] systems, and we cannot conceive of how our species will change in the presence of these technologies, yet we cannot ignore the potential.”