Commentary

Intel Follows Qualcomm Down Neural Network Path

23 June 2014

One of the reasons Intel Corp. is interested in putting FPGA die next to its Xeon processors (see Intel to Package FPGA with Xeon Processor) is so that it can deploy neural networks alongside its x86 processors. Of course, in the longer term Intel could try to go for a monolithic implementation of CPU cores and FPGA fabric if it can obtain the appropriate IP.

Neural networks, often implemented as software on conventional processors including those based on the x86 architecture, were a hot topic 25 years ago, when the first software simulations of weighted summing networks started to show the interesting ability to learn how to process data. However, in those days networks of a few tens or hundreds of neurons represented a practical limit, and fell a long way short of the biological systems on which they were based.
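
For readers unfamiliar with the term, a weighted summing network of that era amounted to little more than this: each neuron multiplies its inputs by adjustable weights, sums them, applies a threshold, and nudges the weights whenever it gets the answer wrong. The Python sketch below illustrates the idea with a single neuron learning a logical AND function; the data, learning rate and epoch count are invented purely for illustration and do not describe any product discussed here.

    # Minimal sketch of a single weighted-summing neuron (classic perceptron rule).
    # Training data and learning rate are invented for illustration only.
    import random

    def step(x):
        return 1 if x >= 0 else 0

    def train(samples, epochs=20, lr=0.1):
        # one weight per input, plus a bias term
        n = len(samples[0][0])
        w = [random.uniform(-0.5, 0.5) for _ in range(n)]
        b = 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                # weighted sum of inputs, then threshold
                y = step(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
                err = target - y
                # adjust weights toward the desired output
                w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
                b += lr * err
        return w, b

    # Learn a simple AND function from examples
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train(data)
    print(weights, bias)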

Similarly, hardware implementations, whether in custom silicon or on FPGAs, could not scale much further, and the radical differences between the neural network and processor programming models meant that joining two such systems together was difficult.

However, as Moore's Law has marched forward the trade-offs have changed, and it looks like neural networks are coming back. Scaling conventional processors is now more expensive than it used to be, while no longer delivering the increases in performance it once did. Problems with programming coarse-grained arrays of old-fashioned CPU cores and heterogeneous systems are leading researchers to re-evaluate other processing ideas and architectures.

Fabless chip company Qualcomm announced back in October 2013 that it was working with, and supporting, Brain Corp. (San Diego, Calif.) to develop a neural processor architecture to provide human-brain-like cognition and processing for mobile computing (see Qualcomm Working on Neural Processor Core).

Qualcomm's Zeroth processor

Qualcomm has been working on what it calls the Zeroth processor project for a few years. Qualcomm's vision is to include one or more neural processing units (NPUs) within future mobile system chips so that appropriate work can be hosted on conventional cores while other work, more amenable to learning and human-machine interface functions, can be offloaded.

The idea seems very similar to what Intel has said it wants to do with neural networks in an FPGA local to the x86 processor.

"Our internal analysis indicates that adding an FPGA that can be programmed to accelerate specific functions (e.g. complex neural networks, video codecs or search algorithms) could deliver up to 10x performance efficiency across a variety of workloads, and integrating the FPGA with coherent and non-coherent links within the Xeon package (versus discreet FPGAs) could lead to an additional 2x performance improvement," said an Intel spokesperson in email correspondence with Electronics360.

It is an interesting and slightly counterintuitive cultural note that Intel, an IDM, is considering the implementation of neural networks as a software layer on an FPGA fabric, while fabless Qualcomm is looking to produce a hardware neural network core, including a spiking neuron communications model, that can be implemented by its foundry silicon supplier.
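
Qualcomm has not published the details of its spiking model, but in general a spiking neuron communicates through discrete pulses rather than continuous values: its membrane potential integrates incoming current, leaks over time, and emits a spike when a threshold is crossed. The sketch below is a minimal leaky integrate-and-fire neuron with invented parameters, intended only to illustrate the concept rather than Qualcomm's or Brain Corp.'s design.

    # Minimal leaky integrate-and-fire neuron, illustrating the general idea of a
    # spiking communications model. All parameters are invented for this example.

    def simulate(input_current, threshold=1.0, leak=0.9, reset=0.0):
        """Return the time steps at which the neuron emits a spike."""
        potential = 0.0
        spikes = []
        for t, current in enumerate(input_current):
            # membrane potential leaks each step and integrates the incoming current
            potential = potential * leak + current
            if potential >= threshold:
                spikes.append(t)       # communicate with a discrete spike event
                potential = reset      # reset after firing
        return spikes

    # A constant input drive produces a regular spike train
    print(simulate([0.3] * 20))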

It should be noted that a neural network implemented on top of an FPGA fabric is unlikely to be as energy efficient as a custom-designed network and connection infrastructure, but it has the advantage of being reprogrammable to perform more conventional DSP or other functions.

It is also notable that none of the FPGA vendors appear to have published much by way of research into hosting neural networks on their fabrics in recent years.

"We have not disclosed which specific FPGA will be integrated or which future Intel Xeon processor this will happen on, nor have we disclosed the timeframe in which this future products will come to market. Rather this was a directional statement regarding our intent to provide customers with additional options to optimize Intel processors for their specific workloads," the spokesperson said.

Across the electronics industry an increasing number of application-specific processors are being deployed, for image and vision processing among other tasks, and many of those have architectures that seem to be edging closer to their biological counterparts. Neural networking is also back now that Moore's Law can deliver systems with thousands, if not hundreds of thousands, of nodes.

But has the industry learned how to co-ordinate the collaboration of such different architectures?

Related links and articles:

www.intel.com

News articles:

Qualcomm Working on Neural Processor Core

Intel to Package FPGA with Xeon Processor

Inventor Seeks to Cash-in Memristor Patents

Micron Preps Memory-Based Automata Processor


