Neuromorphic computing could solve the tech industry’s looming crisis

What’s the best computer in the world? The most souped-up, high-end gaming rig? Whatever supercomputer took the number one spot in the TOP500 this year? The kit inside the datacentres that Apple or Microsoft rely on? Nope: it’s the one inside your skull. 

As computers go, brains are way ahead of the competition. They’re small, lightweight, have low energy consumption, and are amazingly adaptable. And they’re also set to be the model for the next wave of advanced computing.

These brain-inspired designs are known collectively as ‘neuromorphic computing’. Even the most advanced computers don’t come close to the human brain — or even most mammal brains — but our grey matter can give engineers and developers a few pointers on how to make computing infrastructure more efficient, by mimicking the brain’s own synapses and neurones.

First, the biology. Neurones are nerve cells, and work as the cabling that carries messages from one part of the body to another. Those messages are passed from one neurone to another until they reach the part of the body where they can produce an effect — by causing us to be aware of pain, move a muscle, or form a sentence, for example. 

Neurones pass messages to each other across a gap called a synapse. Once a neurone has received enough input to trigger it, it passes a chemical or electrical impulse, known as an action potential, onto the next neurone, or onto another cell, such as a muscle or gland. 

Next, the technology. Neuromorphic computing software seeks to recreate these action potentials through spiking neural networks (SNNs). SNNs are made of neurons that signal to other neurons by generating their own action potentials, conveying information through the timing and frequency of those spikes.
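A minimal sketch of the idea in Python, using a leaky integrate-and-fire neuron (the simplest common spiking-neuron model); the parameter values here are illustrative and not drawn from any particular chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a common building block
# of spiking neural networks. All parameter values are illustrative.

def lif_run(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate one LIF neuron over discrete time steps.

    The membrane potential decays by `leak` each step and accumulates
    input; when it crosses `threshold`, the neuron emits a spike (1)
    and resets -- a rough software analogue of an action potential.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = v * leak + i          # leaky integration of incoming signal
        if v >= threshold:        # enough input to trigger the neuron
            spikes.append(1)      # fire a spike
            v = reset             # potential resets after firing
        else:
            spikes.append(0)
    return spikes

# Constant sub-threshold input: the neuron fires periodically, encoding
# the input's strength in the timing of its spikes.
print(lif_run([0.3] * 10))   # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Stronger input would make the neuron spike more often, which is how a rate-coded SNN carries magnitude information.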

Intel Teams with Uncle Sam to Research Chip Packages, ‘Neuromorphic’ Systems

Intel (INTC) is using Manufacturing Day 2020 to announce a pair of new chip R&D projects with the federal government.

One of the projects is a 3-year partnership with Sandia National Labs — a government R&D lab focused on nuclear research — to research the use of neuromorphic computing to handle demanding computational problems.

Neuromorphic computing, which has also been researched by the likes of IBM (IBM), Qualcomm (QCOM) and Micron (MU), involves developing chip architectures that are patterned on the functioning of neurons in the human brain — and which by doing so can exhibit a human brain’s flexibility, power efficiency and ability to respond to new data.

A lot of current neuromorphic computing R&D involves “edge” applications such as robotics, industrial automation and understanding and responding to the physical actions of smartphone users. By contrast, Intel and Sandia plan to research demanding spiking neural network workloads such as physics modeling and graph analytics.

The research will involve the use of a system that contains 50 million artificial neurons and relies on Intel’s recently announced Loihi neuromorphic chip. With each Loihi chip containing only 130,000 neurons, this points to the use of more than 380 chips (given how such systems are designed, 384 chips is a possibility). Intel and Sandia also plan to eventually develop more powerful systems that rely on a next-gen Intel neuromorphic research chip.
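That chip count can be checked with a quick back-of-the-envelope sketch. It assumes each Loihi chip actually carries 131,072 neurons (128 cores of 1,024 neurons each, usually rounded to “130,000”) and that chips are grouped on 32-chip boards, as in Intel’s Nahuku boards; both figures are assumptions for this sketch rather than claims from the article:

```python
import math

# Rough check on the "more than 380 chips" figure.
# Assumptions (not from the article): 131,072 neurons per Loihi chip
# (128 cores x 1,024 neurons), and chips grouped on 32-chip boards.
NEURONS_PER_CHIP = 128 * 1024   # 131,072
CHIPS_PER_BOARD = 32

target_neurons = 50_000_000
min_chips = math.ceil(target_neurons / NEURONS_PER_CHIP)   # 382
boards = math.ceil(min_chips / CHIPS_PER_BOARD)            # 12
chips = boards * CHIPS_PER_BOARD                           # 384

print(min_chips, chips, chips * NEURONS_PER_CHIP)
# → 382 384 50331648
```

Rounding up to whole boards lands on exactly 384 chips, just over 50 million neurons, which is consistent with the article’s parenthetical.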

The second government project announced on Friday involves working with the Naval Surface Warfare Center to develop advanced multi-chip packages that could be used in military equipment. The effort, known as SHIP, will include developing prototype chip packages that pair “special-purpose government chips” with an assortment of Intel silicon, including CPUs, ASICs and

Intel inks agreement with Sandia National Laboratories to explore neuromorphic computing

As a part of the U.S. Department of Energy’s Advanced Scientific Computing Research program, Intel today inked a three-year agreement with Sandia National Laboratories to explore the value of neuromorphic computing for scaled-up AI problems. Sandia will kick off its work using the 50-million-neuron Loihi-based system recently delivered to its facility in Albuquerque, New Mexico. As the collaboration progresses, Intel says the labs will receive systems built on the company’s next-generation neuromorphic architecture.

Along with Intel, researchers at IBM, HP, MIT, Purdue, and Stanford hope to leverage neuromorphic computing — circuits that mimic the nervous system’s biology — to develop supercomputers 1,000 times more powerful than any today. Chips like Loihi excel at constraint satisfaction problems, which require evaluating a large number of potential solutions to identify the one or few that satisfy specific constraints. They’ve also been shown to rapidly identify the shortest paths in graphs and perform approximate image searches, as well as to mathematically optimize specific objectives over time in real-world optimization problems.
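For context on the graph workloads mentioned above, this is what a conventional shortest-path computation looks like (Dijkstra’s algorithm over a made-up toy graph); neuromorphic hardware attacks the same problem by racing spikes through the network instead of iterating a priority queue:

```python
import heapq

def dijkstra(graph, source):
    """Classic shortest-path search: the kind of graph-analytics task
    that Loihi has reportedly solved by propagating spikes instead."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # found a shorter route to v
                heapq.heappush(heap, (nd, v))
    return dist

# Toy weighted graph, purely illustrative.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 5)], "c": [("d", 1)]}
print(dijkstra(g, "a"))   # → {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```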

Intel’s 14-nanometer Loihi chip contains over 2 billion transistors, 130,000 artificial neurons, and 130 million synapses. Uniquely, the chip features a programmable microcode engine for on-die training of asynchronous spiking neural networks (SNNs), or AI models that incorporate time into their operating model such that the components of the model don’t process input data simultaneously. Loihi processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors, and it can solve certain types of optimization problems with gains in speed and energy efficiency greater than three orders of magnitude, according to Intel. Moreover, Loihi maintains real-time performance results and uses only 30% more power when scaled up 50 times, whereas traditional hardware uses 500% more power to do the same.
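Intel’s scaling claim can be restated as rough performance-per-watt arithmetic. The sketch below assumes both systems actually deliver the 50x workload, and reads “500% more power” as six times the baseline draw; both readings are interpretive assumptions, not figures from Intel:

```python
# Restating the scaling claim as work-per-watt.
# Assumptions: both systems complete 50x the baseline workload;
# "30% more power" = 1.3x baseline draw, "500% more" = 6x baseline.
scale = 50
loihi_power = 1.30          # 1x baseline + 30%
conventional_power = 6.0    # 1x baseline + 500%

loihi_gain = scale / loihi_power                # work per watt vs baseline
conventional_gain = scale / conventional_power

print(round(loihi_gain, 1), round(conventional_gain, 1))
# → 38.5 8.3
```

Under those assumptions, the neuromorphic system’s efficiency advantage at scale is roughly 38.5x versus 8.3x, i.e. about 4.6 times better work-per-watt than conventional hardware.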

Intel and Sandia hope to apply neuromorphic computing to workloads in scientific computing.