Photonic processor can classify images at a glance


Engineers from the University of Pennsylvania claim to have developed a photonic deep neural network processor capable of analyzing billions of images every second with high accuracy using the power of light.

It may sound like science fiction or an optical engineer’s fever dream, but that’s exactly what researchers from the US university’s School of Engineering and Applied Science claim to have done in a paper published in the journal Nature earlier this month.

The standalone light-driven chip — it’s not another PCIe accelerator or coprocessor — handles data by simulating brain neurons that have been trained to recognize specific patterns. This is useful for a variety of applications including object detection, facial recognition, and audio transcription, to name a few.

Traditionally, this has been achieved by simulating an approximation of neurons using standard silicon chips, such as GPUs and other ASICs. The academics said their chip is the first to do this optically, using light signals.

“The low power consumption and ultra-low computation time offered by our photonic classifier chip can revolutionize applications such as salient and event-driven object detection,” the paper’s authors wrote.

In a proof of concept detailed in Nature, the photonic chip was able to categorize an image in less than 570 picoseconds with an accuracy of 89.8-93.8%. According to the authors, this puts the chip on par with high-end GPUs for image classification.

To put that into perspective, that equates to just over half a billion frames in the time it takes you to blink (1/3 of a second). And the team posits that even faster processing – on the order of 100 picoseconds per frame – is possible using commercial manufacturing processes available today.
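
The arithmetic is easy to check. A quick back-of-the-envelope in Python, taking the reported 570 picoseconds per image and a blink of roughly a third of a second:

```python
# Sanity-check the throughput figures quoted above.
frame_time = 570e-12   # seconds per image, as reported
blink = 1.0 / 3.0      # a blink, roughly a third of a second

print(f"{blink / frame_time:.2e} frames per blink")   # ~5.8e+08, just over half a billion
print(f"{blink / 100e-12:.2e} at 100 ps per frame")   # ~3.3e+09
```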

According to the paper, this offers many advantages, including low power consumption, high throughput, and fewer bottlenecks compared with existing deep neural network technologies, which are either physically separate from the image sensor or tied to a clock frequency.

“Clockless direct processing of optical data eliminates analog-to-digital conversion and the need for a large memory module, enabling faster, more power-efficient neural networks for the next generation of deep learning systems,” the authors wrote.

Also, since all calculations are done on-chip, no separate image sensor is required. In fact, because the processing is done optically, the chip effectively is the image sensor.

The test

Before you get too excited, the images used in the proof of concept were positively tiny, measuring just 30 pixels in total. The actual test involved classifying hand-drawn “P” and “d” characters projected onto the chip. Nevertheless, it was able to achieve accuracy only slightly lower than that of a model built with the popular Keras deep learning API running in Python, which scored 96 percent on the same images.
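
For a sense of what that digital baseline looks like, here is a minimal sketch of a tiny Keras classifier for the same task: flattened 5×6-pixel images, two classes. The layer sizes are illustrative guesses, not the architecture used in the paper.

```python
# A tiny Keras model for classifying 5x6-pixel images as "P" or "d".
# Layer sizes are illustrative assumptions, not taken from the paper.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(30,)),                     # 5x6 image, flattened
    keras.layers.Dense(9, activation="relu"),     # small hidden layer
    keras.layers.Dense(2, activation="softmax"),  # two classes: "P" and "d"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: shape (N, 30), pixel intensities in [0, 1]
# y_train: shape (N,), labels 0 ("P") and 1 ("d")
# model.fit(x_train, y_train, epochs=50)
```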

However, the team notes that resolution isn’t the limiting factor here, and there’s nothing stopping them from upgrading the chip to support higher resolutions. Additionally, they claim that the technology could be used to classify any data that can be converted into an optical signal.

If true, the technology has implications for a variety of fields, video object detection being the most obvious, since the processing could effectively be done in real time and would not be limited by the frame rate of a traditional digital image sensor.

“The wide bandwidth available at optical frequencies together with the low propagation loss of nanophotonic waveguides – serving as interconnects – make photonic integrated circuits a promising platform for implementing fast and energy-efficient processing units,” the paper said.

How it works

The nine-millimeter-square chip is made up of two layers: an optical layer that handles the computation, and an optoelectronic layer responsible for signal processing.

The optical layer has a 5×6 array of grating couplers that act as input pixels. Light from these pixels is split into three overlapping 3×4-pixel sub-images, which are then routed to nine artificial neurons spread across three layers using nanophotonic waveguides.
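
To make the routing concrete, here is a digital analogue of that optical front end in NumPy. The window positions and weights are illustrative assumptions; on the chip, the routing is fixed in waveguides and the weights are set in hardware.

```python
# Digital analogue of the optical front end: a 5x6 input image is split
# into three overlapping 3x4 sub-images, each feeding a first-layer
# neuron as a weighted sum. Window offsets and weights are assumptions.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((5, 6))            # stand-in for the 30 input pixels

# Three overlapping 3x4 windows, offset by one row each (an assumption).
windows = [image[r:r + 3, 0:4] for r in range(3)]

weights = rng.random((3, 12))         # one weight vector per sub-image

# Each "neuron" sums its weighted sub-image, much as attenuated optical
# signals are summed on a photodiode.
layer1 = np.array([w @ win.ravel() for w, win in zip(weights, windows)])
print(layer1)
```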

The optoelectronic layer then converts the optical signal to a voltage, amplifies it, and passes it to a micro-ring modulator, which converts the signal back into light that can then be interpreted by a digital signal processor.
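
A rough stand-in for one such neuron might look like the following; the saturating transfer curve is a generic first-order model of a ring modulator's nonlinearity, not the measured response from the paper.

```python
# One optoelectronic neuron, loosely modeled: a photodiode converts
# optical power to voltage, an amplifier scales it, and a micro-ring
# modulator re-emits light with a nonlinear, saturating response.
# The curve below is illustrative, not the device's measured behavior.
import numpy as np

def neuron(optical_power, gain=2.0, width=0.5):
    voltage = gain * optical_power                  # photodiode + amplifier
    return voltage**2 / (voltage**2 + width**2)     # saturating ring response

print(neuron(np.linspace(0.0, 1.0, 5)))
```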

However, before the chip can produce usable results, it must be trained. The researchers achieved this by projecting a series of training images onto a secondary pixel array on the chip.

The output from these images was then fed into a digital neural network, a Keras model running in Python that replicates the chip, to determine the optimal weight vectors. A combination of microcontrollers and digital-to-analog converters was then used to write these weights back to the chip.
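
The write-back step might be sketched like this, assuming 8-bit converters; the resolution, the helper name, and the idea of rescaling the weights to integer codes are all assumptions for illustration.

```python
# Sketch of the weight write-back: quantize the digital twin's trained
# weights to the discrete codes a DAC can output. The 8-bit resolution
# and helper are assumptions, not details from the paper.
import numpy as np

def quantize_for_dac(weights, bits=8):
    """Map floating-point weights onto integer DAC codes."""
    lo, hi = float(weights.min()), float(weights.max())
    codes = np.round((weights - lo) / (hi - lo) * (2**bits - 1))
    return codes.astype(np.uint16), (lo, hi)   # keep the scale for reference

# After training the Keras twin, for each Dense layer:
# w, b = layer.get_weights()
# codes, scale = quantize_for_dac(w)
# ...which a microcontroller would then write out through the DACs.
```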

Once trained, all classifications are managed within the chip.

According to the researchers, the technology addresses several of the inherent limitations of today’s GPU- and ASIC-based deep neural networks, and has the potential to “revolutionize” several applications, including object detection.

The team further claims that by increasing the chip size, higher resolutions or larger numbers of neurons could be achieved, the only limitations being the bandwidth of the micro-ring modulators and silicon-germanium photodiodes in the optoelectronic layer.

Additionally, the researchers posit that commercial manufacturing processes offering monolithic integration of electronic and photonic components could accelerate the chip further, enabling bandwidths on the order of tens of gigahertz and processing times of less than 100 picoseconds. ®
