Marc Stackler from Teledyne e2v comments on the differences between parallel and serial interfaces and the applications they best serve.
The use of high-speed data converters is on the rise, as more end-users implement them to improve system performance and increase capabilities. Applications range from communications (ground- and satellite-based) to high-energy physics (particle accelerators) and defense (electronic warfare and radar jamming), to name a few.
Many applications that use data converters are trending toward a full SDR (software defined radio) system. This requires higher bandwidth capability, which complicates the interface between the FPGA (Field Programmable Gate Array) and the data converter, since SDR system architecture is directly linked to the data converter's sampling speed. Today there are two means of interfacing at high speed between an FPGA and a data converter: the high-speed LVDS parallel interface and the high-speed serial interface.
A parallel interface uses a number of lanes to transmit data simultaneously, plus an additional lane to carry the clock between the transmitter and receiver. This is the traditional way of interfacing with a data converter, as it is straightforward in terms of PCB and firmware design. On each clock cycle, one bit value is transmitted on each data lane; because the receiver also has the clock signal, it recovers the data easily. This works well at low data rates, but when the amount of data to transmit grows or a higher data rate is required, you need either very fast transmission or an increased number of lanes, complicating the design.
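The lane-count trade-off above can be made concrete with a small back-of-the-envelope calculation. The figures below are hypothetical examples, not taken from any specific device:

```python
# Illustrative calculation (hypothetical figures): how many LVDS data
# lanes does a parallel interface need to carry the output of a
# high-speed ADC at a given per-lane bit rate?

import math

def lvds_lanes_needed(sample_rate_hz: float, bits_per_sample: int,
                      lane_rate_bps: float) -> int:
    """Number of data lanes needed to carry the ADC output stream."""
    total_bps = sample_rate_hz * bits_per_sample
    return math.ceil(total_bps / lane_rate_bps)

# Hypothetical example: a 1.5 GSps, 12-bit ADC with LVDS lanes running
# at 1 Gbps each (one extra differential pair then carries the clock).
lanes = lvds_lanes_needed(1.5e9, 12, 1.0e9)
print(lanes)  # 18 data lanes, i.e. 19 pairs including the clock lane
```

Even at an aggressive 1 Gbps per lane, the pin count and routing effort grow quickly with sample rate, which is exactly what pushes high-speed designs toward serial links.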
This is because, when using a parallel interface at high speed (above 1 GHz), many parameters that are negligible at lower speeds begin to limit performance. As a result, new serial interface options began to appear on the market around ten years ago, and they are now the preferred interface in the majority of applications.
A serial interface has a much simpler layout. It uses fewer lanes than a parallel interface to transmit encoded data between transmitter and receiver via high-speed transceivers, comprising a serializer on the transmit side and a deserializer on the receive side. There is also no need to send the clock on a separate path: it is extracted from the data at the receiver by the CDR (Clock and Data Recovery) circuit, which allows serial interfaces to work at much higher speeds than parallel interfaces.
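The serializer/deserializer pair can be sketched in a few lines. This is a purely illustrative software model, not transceiver firmware; in real hardware the receiver has no separate clock and the CDR recovers timing from the data transitions themselves:

```python
# Minimal sketch of serialization/deserialization (illustrative only):
# the transmitter's serializer flattens parallel words into a bit
# stream, and the receiver's deserializer rebuilds the words.

def serialize(words, width=16):
    """Flatten parallel words (MSB first) into a flat list of bits."""
    bits = []
    for w in words:
        bits.extend((w >> i) & 1 for i in range(width - 1, -1, -1))
    return bits

def deserialize(bits, width=16):
    """Regroup the serial bit stream back into parallel words."""
    words = []
    for i in range(0, len(bits), width):
        w = 0
        for b in bits[i:i + width]:
            w = (w << 1) | b
        words.append(w)
    return words

data = [0x1234, 0xABCD]
assert deserialize(serialize(data)) == data  # lossless round trip
```

The model also shows why a protocol is needed on a real link: the receiver must find the word boundaries and trust the bit stream, which is where encoding and synchronization mechanisms come in.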
For a serial interface to work efficiently and avoid any loss of transmitted data, it needs a suitable protocol. Teledyne e2v has developed the open-source ESIstream protocol to simplify serial data processing. The protocol uses two-stage encoding with two overhead bits: the first stage is a scrambling process and the second a disparity process. The protocol has been designed for simple implementation, which also improves latency and resource requirements thanks to its reduced data overhead.
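To illustrate what such two-stage encoding involves, here is a simplified sketch in the same spirit: scrambling followed by a disparity decision, with two overhead bits per frame. This is NOT the actual ESIstream specification; the LFSR taps, seed, and bit layout below are invented for illustration only:

```python
# Illustrative two-stage encoder/decoder (NOT the real ESIstream spec):
# 14 data bits are scrambled with a toy LFSR-based pseudo-random word,
# then a disparity bit decides whether the frame is inverted to keep
# the running disparity (ones minus zeros) bounded, as needed on
# AC-coupled links. Two overhead bits complete the 16-bit frame.

def lfsr_word(state: int):
    """Produce a 14-bit pseudo-random word from a toy 16-bit LFSR."""
    word = 0
    for _ in range(14):
        bit = ((state >> 15) ^ (state >> 13)) & 1  # invented taps
        state = ((state << 1) | bit) & 0xFFFF
        word = (word << 1) | bit
    return word, state

def encode(samples, seed=0xACE1):
    state, disparity, frames = seed, 0, []
    for s in samples:
        prbs, state = lfsr_word(state)
        scrambled = (s & 0x3FFF) ^ prbs            # stage 1: scrambling
        frame_disp = 2 * bin(scrambled).count("1") - 14
        if disparity * frame_disp > 0:             # stage 2: disparity
            scrambled ^= 0x3FFF                    # invert to rebalance
            frame_disp, dbit = -frame_disp, 1
        else:
            dbit = 0
        disparity += frame_disp
        # Pack: 14 scrambled bits + disparity bit + a second overhead
        # bit (a fixed 1 here stands in for a clock/sync bit).
        frames.append((scrambled << 2) | (dbit << 1) | 1)
    return frames

def decode(frames, seed=0xACE1):
    state, samples = seed, []
    for f in frames:
        prbs, state = lfsr_word(state)
        scrambled = (f >> 2) & 0x3FFF
        if (f >> 1) & 1:                           # undo inversion
            scrambled ^= 0x3FFF
        samples.append(scrambled ^ prbs)           # undo scrambling
    return samples
```

Scrambling guarantees the transition density the CDR needs, while the disparity bit keeps the stream DC-balanced for AC coupling; the receiver simply runs the same pseudo-random sequence to undo both stages.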
Encoding and decoding the data is mandatory: without it, the transmission suffers bit errors (a degraded BER, Bit Error Rate) and synchronization loss. This is partly because of the CDR and partly because of the AC-coupled interface. The CDR recovers the clock signal from the data on the receiver side, since it is not transmitted directly. The recovered clock has therefore been subject to the same timing effects as the data up to the CDR stage, so these effects on the data and the clock largely cancel each other out, allowing much higher speeds. CDR also allows the data rate per lane to be increased, meaning the same amount of data can be sent over fewer lanes, saving PCB space and simplifying the layout. However, it can cause bit errors and synchronization loss, as mentioned above, which in turn mandates a protocol, adding latency and requiring more resources from the FPGA (LUTs and a FIFO or elastic buffer). Although this overhead is generally small enough not to cause an issue, it can complicate timing closure within the FPGA if the application demands a lot of resources or high-speed digital design.
In terms of applications, the main benefits of parallel interfaces are that they are easy to use, cost-effective, and have short latency. They are now less common than serial interfaces, but are still used where an application requires short latency or only low data throughput. One application in which parallel interfaces persist is electronic warfare, where short latency is a decisive advantage: a few nanoseconds can mean the difference between being spotted and remaining invisible to enemy radar systems.
When deciding which interface to choose, match the interface to the application's requirements. It is also worth seeking expert advice, as the technology behind interfaces keeps evolving: modulated serial interfaces are already emerging, marking the next step in high-speed data transmission and bringing new problems to solve.
Nowadays, serial interfaces are the technology of choice thanks to their bandwidth capabilities, but parallel interfaces remain necessary for applications that require low latency, such as defense. For low-cost or low-speed applications, either solution could be considered, depending on factors such as development time, reuse of already-developed sub-systems, and other application requirements.