What exactly is a computer? Is it mechanical, electronic, or somewhere in between? Analog or digital? How many bits are in a byte? How many bits should a CPU operate on at one time? How do the fundamental units, like processing, memory, and storage, interact with one another? These questions are all but settled now (though quantum computing may shake things up again), but there was a time when there were nearly as many different answers to these questions as there were computer scientists.
It makes a lot of sense that we ended up with standardized architectures and instruction sets, because without them, interoperability and technological progress would be seriously hindered. But this standardization also comes at a cost, as we are now seeing with the rise of artificial intelligence (AI). Computers are being asked to do things that had not been imagined when the basic designs were drawn up, and as it turns out, those designs are not well-suited to running large AI algorithms.
A new kind of computing is needed to handle today's AI workloads, and that is what a company called Lightmatter is promising with their recently announced photonic processor. They have demonstrated that their non-traditional computing system can perform advanced AI tasks accurately and efficiently. This technology appears to have the potential to push computing beyond its present limitations.
Lightmatter's processor uses photons (particles of light) instead of electrons to perform calculations. This approach offers several built-in advantages, including high bandwidth, ultra-low latency, and significantly improved energy efficiency. Most importantly, it enables parallel computing at a scale that is simply not possible with existing systems.
The processor package itself contains six chips, packed with one million photonic components and 50 billion transistors, connected via high-speed vertical interconnects. Despite its complexity, it consumes only 78 watts of electrical power and 1.6 watts of optical power while achieving a processing speed of 65.5 trillion operations per second.
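For a rough sense of scale, those figures work out to a little under one trillion operations per second per watt. The quick Python calculation below does the arithmetic; note that simply summing the electrical and optical power draws is an assumption made here for the estimate, not something stated by Lightmatter.

```python
# Back-of-the-envelope efficiency from the figures above; combining the
# electrical and optical power draws into one total is an assumption
ops_per_second = 65.5e12      # 65.5 trillion operations per second
power_watts = 78 + 1.6        # electrical + optical power
print(f"{ops_per_second / power_watts / 1e12:.2f} TOPS per watt")  # ~0.82
```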
The chip uses a numerical system called Adaptive Block Floating Point (ABFP), which allows the analog photonic computations to maintain the precision needed for deep learning tasks. Instead of assigning an exponent to every number (as in floating-point math), ABFP assigns a shared exponent to blocks of numbers, such as neural network weights or activations. These values are then normalized, processed by the photonic hardware in fixed-point format, and finally rescaled using the shared exponent. This technique drastically reduces the quantization errors that usually plague analog systems.
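To make the idea concrete, here is a minimal NumPy sketch of block floating point in the spirit of what's described above. It is an illustration of the general technique, not Lightmatter's actual ABFP implementation; the mantissa width, rounding, and exponent choice are all assumptions.

```python
import numpy as np

def abfp_quantize(block, mantissa_bits=8):
    """Assign one shared exponent to a whole block of values and convert
    them to fixed-point mantissas (simplified block floating point)."""
    max_val = np.max(np.abs(block))
    if max_val == 0:
        return np.zeros(len(block), dtype=np.int32), 0
    # Shared exponent chosen so the largest value fits the mantissa range
    exponent = int(np.floor(np.log2(max_val))) + 1 - (mantissa_bits - 1)
    limit = 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(block / 2.0 ** exponent),
                        -limit, limit).astype(np.int32)
    return mantissas, exponent

# Two blocks of values (e.g., weights and activations), each quantized
# with its own shared exponent
weights = np.array([0.42, -0.07, 0.19, -0.33])
acts = np.array([1.20, 0.80, -0.50, 0.90])
wm, we = abfp_quantize(weights)
am, ae = abfp_quantize(acts)

# The multiply-accumulate runs entirely in fixed point (standing in for
# the analog photonic hardware); only the final result is rescaled using
# the two shared exponents
fixed_point_dot = int(np.dot(wm, am))
result = fixed_point_dot * 2.0 ** (we + ae)
print(result, np.dot(weights, acts))  # quantized vs. exact dot product
```

Because every value in a block shares the same exponent, only small integer mantissas need to flow through the analog hardware, and a single rescaling step at the end restores the dynamic range.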
What makes this approach practical, not just theoretical, is its integration with standard AI tools like PyTorch and TensorFlow. Developers don't have to reinvent their software stacks to use Lightmatter's hardware. For this reason, the processor has been able to successfully run a number of advanced neural networks, including ResNet, BERT, and DeepMind's Atari-playing reinforcement learning models, without modifying the models or using special training techniques.
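To illustrate what running a model "without modifying" it means in practice, the snippet below is ordinary PyTorch/torchvision inference code on an off-the-shelf ResNet. The article does not describe Lightmatter's actual software interface, so the hand-off to the photonic hardware is only noted in a comment rather than shown.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# An unmodified, off-the-shelf model: no retraining or special quantization
model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

# Ordinary inference; in Lightmatter's flow the matrix multiplies inside
# this forward pass would be dispatched to the photonic processor (that
# backend is not shown here, since its API is not described in the article)
x = torch.randn(1, 3, 224, 224)   # stand-in for a 224x224 RGB image
with torch.no_grad():
    logits = model(x)
print(logits.argmax(dim=1))
```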
Lightmatter's processor is not a replacement for digital computing, at least not yet. Instead, it is a complementary technology, much like GPUs are today. But as scaling problems grow, photonic computing could become increasingly important to the future of computing.
The photonic processor is available on a PCI-e card (📷: Lightmatter)
A rack of photonic processors (📷: Lightmatter)
A micrograph of the chip (📷: Lightmatter)