Data Center Requirements: Change is Coming Sooner Than You Think
Sponsored content
A zettabyte (ZB) is a serious amount of data. One ZB is 1,000,000,000,000,000,000,000 bytes, a 1 followed by 21 zeros. Yet according to Cisco’s Global Cloud Index (2015-2020), annual data center traffic will reach 14.1 ZB in 2020, up from 3.9 ZB in 2015, driven by exponential data growth from cloud computing and an ever-increasing range of internet of things (IoT) devices (seemingly every device made by man).
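Those two figures imply roughly 30% compound annual growth, the rate cited below. A minimal Python sketch of the arithmetic, using only the Cisco numbers quoted above, makes that explicit:

```python
# Implied compound annual growth rate (CAGR) of data center traffic,
# computed from the Cisco Global Cloud Index figures quoted above.
traffic_2015_zb = 3.9    # zettabytes per year in 2015
traffic_2020_zb = 14.1   # zettabytes per year forecast for 2020
years = 5

cagr = (traffic_2020_zb / traffic_2015_zb) ** (1 / years) - 1
print(f"Implied annual traffic growth: {cagr:.1%}")  # roughly 29%
```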
To meet the challenges of a hugely interconnected world, network infrastructures and data centers are looking toward future requirements with the understanding that they will have to evolve at unprecedented rates. Traffic growth of roughly 30% year over year, combined with equipment lifecycles of approximately five years, will continue to drive massive new rollouts and require new performance gains to support these trends.
So that you’re not left behind by the rapidity of change that’s coming, let’s start at the top.
All of the computing and storage elements inside a data center are networked together using a switching architecture known as a fabric. The switches in this network are all connected using optical signals carried over glass fiber cables. In a typical optical link, high-speed digital signals are first transferred from the electrical to the optical domain via modulated laser light, and those signals are then sent over the fiber optic cables. Optical links are typically classified by their reach, or length, which can range from tens of meters to thousands of meters. Regardless of length, however, a receiving transceiver ultimately converts the high-speed optical signals back into electrical signals.
Data center switch electro-optical transceivers are packaged into small modules that plug into the switch and connect to the fiber optic cable. One of the most popular form factors in use today is the QSFP (referred to as QSFP28 at 100G data rates), an 8.5 mm x 18 mm x 72 mm physical module with power consumption generally less than 3.5 watts.
Traditionally, data center interconnect transceivers have been based on non-return-to-zero (NRZ) modulation, which transmits one bit per symbol. NRZ relies on currently available technology and will continue its linear evolution. Two distinct signal levels are used to represent a 0 and a 1, so NRZ can also be referred to as PAM-2 (pulse amplitude modulation, 2-level): two intensity levels of the transmitted light, with each symbol carrying one bit of information.
The next step up in modulation is four-level pulse amplitude modulation (PAM4), which transmits two bits in each symbol, thereby doubling the data rate without doubling the required bandwidth.
From a practical standpoint, once a module is required to support 28GBd PAM4 on the electrical interface, it will be prohibitive in both cost and power consumption to continue using optical transceivers capable of only 25G NRZ on the optical interface. Since 28GBd PAM4 has effectively the same bandwidth requirements as 25G NRZ, most of today’s 25G NRZ lasers should be able to transition to a PAM4 implementation as long as their noise levels are low enough. But once the module is required to support 100 Gb/s using 53GBd PAM4 on the optical interface, the bandwidth of the components will need to be doubled.
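The relationship among symbol rate, modulation levels, and line rate behind those figures is simple. The sketch below, an illustration that uses the article's rounded baud rates and does not model FEC or coding overhead explicitly, shows why PAM4 roughly doubles the bit rate for about the same optical bandwidth, and why 53GBd PAM4 doubles it again:

```python
from math import log2

def line_rate_gbps(baud_gbd: float, levels: int) -> float:
    """Raw line rate in Gb/s: symbol rate multiplied by bits per symbol."""
    return baud_gbd * log2(levels)

# NRZ (PAM-2) carries 1 bit per symbol; PAM4 carries 2 bits per symbol.
print(line_rate_gbps(25, 2))   # 25 Gb/s  -> 25G NRZ lane
print(line_rate_gbps(28, 4))   # 56 Gb/s  -> "50G" PAM4 lane, similar bandwidth to 25G NRZ
print(line_rate_gbps(53, 4))   # 106 Gb/s -> "100G" PAM4 lane, roughly double the bandwidth
```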
Accelerating Data Rates
Electrical I/O data rates are changing faster now than in the past, and this will have a major impact on optics development cycles. In the data center, the 10G NRZ SerDes had a relatively long run; it was a number of years before 25G NRZ ICs were available, and in the meantime 10 Gb/s lanes were used to achieve the interim 40 Gb/s Ethernet standard. For its part, 25G NRZ will have been out less than three years before 28GBaud PAM4 (50G) SerDes ICs are available. And it is possible that 28GBd PAM4 SerDes may be out less than two years before 53GBd (100G) ICs are available.
Today, 100Gb Ethernet rates are no longer unusual. The largest data centers moved to 40Gb Ethernet years ago, and starting in 2016, many hyperscale cloud content and service providers (those operating facilities with something on the order of 100,000 or more servers), such as Facebook, Amazon, Google and Microsoft, began deploying 100GbE connections using single-mode optics-based infrastructure in their new data centers.
The migration from 10G and 40G to 100G data rates is expected to accelerate through 2020, with more than 15 million 100G ports expected to be deployed. The migration to >100G optical transceivers will begin in 2019 as 12.8 Tb/s switching ICs begin to be used in systems.
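To put the 12.8 Tb/s figure in perspective, the simple port-count arithmetic below (an illustration only, not a specific product configuration) shows why switch silicon at that capacity pulls the transceiver ecosystem beyond 100G:

```python
# Ports supported by a 12.8 Tb/s switch ASIC at different per-port speeds.
# Illustrative arithmetic only; real systems are also constrained by SerDes
# lane counts and faceplate density.
switch_capacity_gbps = 12_800
for port_speed_gbps in (100, 400):
    print(f"{port_speed_gbps}G ports: {switch_capacity_gbps // port_speed_gbps}")
# 100G ports: 128
# 400G ports: 32
```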
Moving Toward 400G
As mentioned, PAM4 technology will pave the way for the next generation of 100G and 400G transceiver modules, enabling data centers to meet the explosive growth targets of the Ethernet Alliance, which has set a goal of 10 Tb/s Ethernet speeds about a decade from now. To get there, the research community is putting a lot of effort into demonstrating 100 Gb/s single-lane interfaces, which would allow 400G by continuing the four-lane scaling used today. First-generation 400 Gb/s optical transceivers may need eight-lane data channels, each delivering at least 50 Gb/s, probably in PAM4 format; ultimately, though, this application will be served in volume by four lanes of 100 Gb/s.
Indeed, the 802.3bs task force of the Institute of Electrical and Electronics Engineers (IEEE) adopted PAM4 as the optical signaling standard going forward for 100G-per-lane, next-generation 400 Gb/s Ethernet networks at 500m distances. This signaling, commonly referred to as 53GBaud, carries 2 bits per symbol for a line rate of 106 Gb/s (the extra capacity above 100 Gb/s is needed for forward-error correction). Although the official IEEE roadmap has yet to precisely detail what lies beyond 400 GbE, doubling to 800 GbE will likely become a reality after single-lane 100 Gb/s links start deploying in the market.
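The 53GBaud and 106 Gb/s figures fall out of the Ethernet coding overheads. The sketch below works through the arithmetic, assuming the 256b/257b transcoding and Reed-Solomon RS(544,514) FEC used in 802.3bs; those overhead factors are stated here as assumptions rather than taken from the article.

```python
# Where "53 GBaud" and "106 Gb/s per lane" come from for 400 Gigabit Ethernet.
# Assumes 802.3bs-style 256b/257b transcoding and RS(544,514) "KP4" FEC.
mac_rate_gbps = 400                     # 400GbE MAC data rate
transcode_overhead = 257 / 256          # 256b/257b transcoding
fec_overhead = 544 / 514                # Reed-Solomon RS(544,514) FEC
lanes = 4                               # four optical lanes
bits_per_symbol = 2                     # PAM4

total_line_rate = mac_rate_gbps * transcode_overhead * fec_overhead  # 425.0 Gb/s
per_lane_rate = total_line_rate / lanes                              # 106.25 Gb/s
symbol_rate = per_lane_rate / bits_per_symbol                        # 53.125 GBd
print(per_lane_rate, symbol_rate)
```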
Recently, too, the 100G Lambda Multi-Source Agreement (MSA) Group announced the release of specifications based on 100 Gb/s per wavelength PAM4 optical technology for 2km and 10km reaches. Under the MSA, member companies can address the technical challenges of achieving optical interfaces using 100 Gb/s per wavelength PAM4 technology, enabling interoperability among optical transceivers produced by different manufacturers and in various form factors.
As further evidence that Ethernet speeds are advancing faster than ever, consider that there is already an IEEE forum for 112 Gb/s SerDes, which will be critical both for standards beyond 400G and for smaller form factor, lower-power transceivers at 100-400G.
As a result of this high-speed development, architects working on switch roadmaps are pushing for more rapid adoption of new technologies to support the higher electrical I/O data rates. The transceiver industry, as part of this ecosystem, has to keep pace with what's going on in SerDes development.
One concern is that, in trying to meet these accelerating schedules, some optical components may not be able to keep up. One of the key elements driving performance is the laser and its modulation. In existing sub-2 km 100G applications (using 25G NRZ), directly modulated lasers (DMLs) provide acceptable performance. Going forward, however, DMLs might have to be cooled to achieve 53GBd PAM4 and meet link budget requirements, which would impact their current lower-cost packaging implementation.
Cooled electro-absorption modulated lasers (EMLs) would then have packaging costs similar to DMLs, with the added benefit of more bandwidth margin and better dispersion performance for WDM applications. On the other hand, EMLs are larger devices, so their use represents a trade-off in InP fab capacity, leading to some additional cost. Uncooled EMLs are another possible way to achieve the required performance at better price points.
We also know that for 100G transceivers, single-wavelength PAM4 technology reduces the number of lasers to one and eliminates the need for optical multiplexing, simplifying the transceiver design and reducing manufacturing costs.
Suppliers Are Responding
Intense industry demand for lower cost and higher density is the key driver powering the shift to single-wavelength 100G PAM4 for cloud data center applications. And suppliers are responding.
Source Photonics, with its EML capabilities, is well situated to continue being a key supplier in the electro-optical communications world. The company has shown the ability to anticipate the rapid expansion in deployment of 100G modules and has positioned itself as a market leader through its investment in 100G small form factor, long-reach single-mode devices. In parallel, R&D investments are funding multiple projects for 28GBd and 53GBd PAM4-based next-generation technology for 100G and 400G products expected to be released for production later this year.
In all, as a leading global provider of advanced technology solutions for communications and data connectivity, Source Photonics understands the need to be nimble in a rapidly changing marketplace — anticipating what’s around the corner. Headquartered in West Hills, California, with manufacturing facilities, R&D and sales offices worldwide, the company is inventing next-generation solutions to provide data centers with low power, high data rate technology to meet the demands of a rapidly growing industry.
For more information, please visit Source Photonics.