IDTechEx Discusses Optical Compute: How the New Age of Computation Seems So Close, and Yet Still So Far Away

Modern technology is largely built upon the back of tiny electrical circuits connected together to form chips. These come in a variety of configurations and can be found in all kinds of household appliances (from rice cookers to air conditioners), consumer electronics (laptops, smartphones, TVs), and products and services necessary to everyday life in an interconnected world (such as airplanes, trains, and cars). The technology at the heart of this is the transistor, invented in the 1940s: a device that uses an electrical signal to switch between an off state and an on state. Though it took roughly another twenty years for the technology to reach wafer-scale manufacture (via a process known as complementary metal-oxide-semiconductor, or CMOS), semiconductor chips housing hundreds of millions to billions of transistors in packages measured in square millimeters are now ubiquitous.

The improvements to chip technology since the 1960s have been – for a number of years – succinctly described by Moore’s law, which is not an actual physical law but an empirical observation that the number of transistors on an integrated circuit (referred to here as a chip, for brevity) doubles roughly every two years. The law is named after Gordon Moore, the recently deceased co-founder of Intel, who posited this doubling in 1965. Ultimately, Moore’s law must come to an end due to the physical impossibility of producing a transistor smaller than a single atom. That is the theoretical limit, though – as IDTechEx explains in its new report on silicon photonics, “Semiconductor Photonic Integrated Circuits 2023-2033” – the practical limit for creating a functional device is likely to arrive well before that stage, and Moore’s law has slowed noticeably in recent years (indeed, one could argue that a slowing of Moore’s law is synonymous with its end, since the observation no longer holds true).
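To make the scale of that observation concrete, the short Python sketch below projects transistor counts under an idealized two-year doubling. The starting figure of roughly 2,300 transistors corresponds to an early-1970s microprocessor; all numbers are illustrative, and, as noted above, the real-world trend has slowed in recent years.

# Illustrative only: Moore's-law-style doubling of transistor count.
# Starting count and years are example values, not figures from the report.

def transistor_count(start_count: int, start_year: int, year: int,
                     doubling_period: float = 2.0) -> float:
    """Project transistor count assuming a doubling every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Example: starting from roughly 2,300 transistors in 1971
for y in (1971, 1981, 1991, 2001, 2011, 2021):
    print(y, f"{transistor_count(2_300, 1971, y):,.0f}")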

An end to Moore’s law is not inherently a bad thing… or, at least, it wouldn’t be if it existed in a vacuum. But it doesn’t, and what makes the need to continuously improve and build on compute power so important is the growing demand for data, particularly from data centers and the users accessing them. While those users may previously have been largely restricted to financial and research institutions using data center resources for high-performance computing workloads, the general populace is growing ever more data-hungry, with the likes of video-on-demand and streaming services delivered to edge devices driving up data center operational costs.

However, electronic integrated circuits are only one physical system that can be used for computation. As mature as the technology is, and as widespread as the use of electronic chips is globally, it is easy to forget that this is not the only physical system we can manipulate at such a small scale. Another, utilizing photonics (light), was until recently consigned to the realm of academic research. Recent advances by several innovative start-ups, however, have seen optical compute platforms become a reality.

Silicon photonics, where light is used as the data carrier rather than electricity, is used largely for data transmission within the data center. Whereas electrons experience resistance as they move within a conductive wire, a loss that increases with the length of the wire, light guided through a fiber or waveguide does not suffer this resistance, so a photonic link is preferable to an electrical one, especially where long distances are concerned. Add to this that nothing can travel faster than light, and it is no surprise that platforms using light to transmit data between chips and between systems are now commonplace, enabling high-bandwidth data transmission.
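A rough, back-of-the-envelope comparison makes the point. The figures in the sketch below are ballpark assumptions rather than values from the report: high-speed electrical cabling can lose several decibels per metre, whereas standard single-mode fibre loses on the order of 0.2 dB per kilometre.

# Rough, illustrative link-loss comparison (ballpark assumptions, not report data):
# copper at high data rates can lose several dB per metre, while standard
# single-mode fibre loses roughly 0.2 dB per kilometre.

def loss_db(length_m: float, loss_db_per_m: float) -> float:
    """Total attenuation for a link of the given length."""
    return length_m * loss_db_per_m

COPPER_DB_PER_M = 5.0        # assumed high-speed electrical cable loss
FIBRE_DB_PER_M = 0.2 / 1000  # ~0.2 dB/km for single-mode fibre

for length in (1, 10, 100):  # metres, typical intra-data-centre distances
    print(f"{length:>4} m  copper: {loss_db(length, COPPER_DB_PER_M):6.1f} dB"
          f"   fibre: {loss_db(length, FIBRE_DB_PER_M):6.3f} dB")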

While this area of silicon photonics is now well developed, optical compute has been a trickier riddle to solve. Beginning with the building blocks: optical components are required that can together perform the same function as transistors. In the case of photonic integrated circuits (PICs), a laser is needed to provide light to the system, a waveguide is needed to confine the light, a modulator is needed to change at least one property of the light, and a coupler is needed to pick up the emerging beam. Each of these comes with challenges. Considering the laser, for example: if silicon/silica is used for the waveguide and substrate material, then the laser must be made from a different material, given that silicon is an indirect-bandgap semiconductor and so cannot produce laser light. In that case, it is common for the laser to be mounted off-chip, which requires coupling into the chip. Recent advances by the likes of Scintil Photonics have, however, made the integration of III-V lasers onto silicon substrates possible. While this solves the coupling problem, having a laser on-chip introduces heat to the system, so thermal management is necessary to ensure the structural stability of the chip.
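A toy link budget illustrates why each of these components, and each coupling interface between them, matters. The loss figures below are assumptions chosen purely for illustration, not measurements from any particular device.

# Toy optical link budget for a PIC transmit path (all values are assumptions
# for illustration, not measured figures from any vendor).

laser_power_dbm = 10.0  # on- or off-chip laser output
losses_db = {
    "laser-to-chip coupling": 3.0,   # avoided if the laser is integrated on-chip
    "waveguide propagation":  1.0,
    "modulator insertion":    4.0,
    "chip-to-fibre coupling": 2.0,
}

received_power_dbm = laser_power_dbm - sum(losses_db.values())
print(f"received power: {received_power_dbm:.1f} dBm")  # 0.0 dBm in this example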

In addition to the above, computation is a non-linear process: a change in an output is not simply proportional to a change in a single input but, where computation is concerned, depends on changes across multiple inputs. Photonic systems can achieve this in several ways. The first is by modulating (changing) the wavelength of the light. In a web of connected nodes (where each node represents a calculation unit that sends signals to, and receives signals from, other nodes along the web), wavelength division multiplexers (WDMs), which combine signals of different wavelengths into one signal, first combine the signals from one layer of connected nodes into a single input signal to the node of interest on the next layer. Microring resonators (MRRs) can then be used as tunable filters, one acting on each individual wavelength incorporated into the input signal by the WDM.
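Functionally, this arrangement performs a weighted sum: each node’s signal occupies its own wavelength, each microring attenuates its wavelength by a tunable weight, and a photodetector sums what remains. The idealized, lossless sketch below reduces this to a dot product; the numbers are arbitrary.

import numpy as np

# Minimal sketch of WDM-based weighted addition (idealised, lossless):
# each node's signal occupies its own wavelength, a microring resonator
# attenuates that wavelength by a tunable weight in [0, 1], and a
# photodetector sums the surviving optical power into one electrical output.

inputs  = np.array([0.8, 0.2, 0.5, 0.9])   # optical power per wavelength (arbitrary units)
weights = np.array([0.1, 0.7, 0.3, 0.5])   # per-wavelength MRR transmission

output = np.dot(weights, inputs)           # what the photodetector measures
print(f"weighted sum seen at the detector: {output:.3f}")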

Another method is to manipulate the modes of the light. Mode-based weighted interconnections employ beam splitters and phase shifters to implement the matrix operations required for weighted addition. These can take the form of Mach-Zehnder interferometers (MZIs), in which the output signals from each of the connected nodes are branched off and phases applied to them, or of thermal tuning via microheaters, where phases are applied to the signals by changing the refractive indices of the waveguides through which the different signals pass.
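In the idealized textbook picture, a 2 x 2 MZI built from two 50/50 beam splitters and two phase shifters realizes a tunable transfer matrix: setting the phases sets how the two input signals are mixed onto the two outputs. The sketch below uses one common convention for these matrices, purely as an illustration.

import numpy as np

# Idealised 2x2 Mach-Zehnder interferometer: two 50/50 beam splitters with an
# internal phase shift theta and an external phase shift phi. Conventions vary;
# this is one common textbook form, shown purely for illustration.

def beam_splitter() -> np.ndarray:
    return (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])

def phase(theta: float) -> np.ndarray:
    return np.array([[np.exp(1j * theta), 0], [0, 1]])

def mzi(theta: float, phi: float) -> np.ndarray:
    """Transfer matrix mapping the two input field amplitudes to the two outputs."""
    return phase(phi) @ beam_splitter() @ phase(theta) @ beam_splitter()

u = mzi(theta=np.pi / 3, phi=np.pi / 4)
print(np.round(np.abs(u) ** 2, 3))  # power splitting ratios set by the phases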

Despite these significant challenges, in 2021 Lightelligence, a Boston-based start-up, produced the first example of an optical compute platform comprising a 64 x 64 systolic array (an array of tightly coupled nodes that can be used to perform matrix multiplications). An impressive accomplishment indeed, but one that does not yet represent a turning point for optical compute: while Lightelligence’s oMAC product has achieved a considerable speedup over competing GPUs, this is only in relation to a very limited workload, the oMAC being designed to solve the Ising problem for a 2D arrangement of 64 x 64 spins. Photonic systems for computing purposes are taking steps – steps that are crucial to future development and adoption – but are not yet taking strides.
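For context, the Ising problem amounts to evaluating (and ultimately minimizing) an energy over a grid of ±1 spins, a calculation dominated by exactly the kind of large matrix and vector arithmetic a 64 x 64 systolic array accelerates. The sketch below shows a generic nearest-neighbour version of that energy; it is not Lightelligence’s actual formulation.

import numpy as np

# Sketch of the Ising energy for a grid of +/-1 spins with nearest-neighbour
# couplings. Not Lightelligence's actual formulation -- just to show why the
# workload reduces to large matrix/vector arithmetic of the kind a 64 x 64
# systolic array accelerates.

rng = np.random.default_rng(0)
N = 64
spins = rng.choice([-1, 1], size=(N, N))
J = 1.0  # uniform nearest-neighbour coupling strength (assumption)

def ising_energy(s: np.ndarray, j: float) -> float:
    """E = -J * sum over nearest-neighbour pairs of s_i * s_j (periodic boundaries)."""
    right = np.sum(s * np.roll(s, -1, axis=1))
    down  = np.sum(s * np.roll(s, -1, axis=0))
    return -j * (right + down)

print(f"energy of a random 64 x 64 configuration: {ising_energy(spins, J):.0f}")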

The question then remains: is an inflection point coming for optical compute, and if so, when? Right now, according to IDTechEx, it appears nowhere on the horizon. While optical computing platforms will certainly see some uptake in the academic and private research segments over the next five years, they are nowhere near being able to compete with semiconductor-based electronic compute systems in terms of applicability and adoption. While optical interconnects have a unique selling point in that they enable fast data transmission with low losses, computing platforms require cascadability (a node’s output signal should be powerful enough to drive the next node, and so on through the entire network), low power consumption, and logic-level restoration, all while keeping design, manufacturing, and operational costs to a minimum. All of these attributes are presently provided by electronic transistors, though costs will increase as technologies improve and processes move to ever-smaller node sizes. Optical compute will not be the short-term savior of Moore’s law, but the usage of such platforms should not be counted out altogether, as a combination of optical and electrical (O/E/O) components, as well as all-optical implementations, will likely be employed to dramatic effect for research across HPC, scientific computing, and artificial intelligence.

Report Coverage

More information on optical computing and the possibilities afforded by photonic integrated circuits is given in the IDTechEx report, where the technologies inherent to PIC design and manufacture are examined and their use cases evaluated.

More generally, the report covers the global PIC market across six application segments, giving a 10-year forecast of the market value of PIC systems for each, up to and including 2033. Additionally, the report evaluates the materials used in PIC design and manufacture, examines the developments being undertaken by large companies and start-ups alike to improve key metrics such as performance and cost from both a systems and an architecture perspective, and provides a thorough review of the product offerings of key market players.

While the importance of photonic circuitry at the chip scale is already in evidence within the communications industry, there are questions around the ease and cost of adopting PICs in other markets compared with incumbent technologies. IDTechEx’s latest report, “Semiconductor Photonic Integrated Circuits 2023-2033”, addresses the major questions, challenges, and opportunities facing photonic integrated circuit technologies. For further understanding of the markets, players, technologies, opportunities, and challenges, please refer to it.


