PCI Express
Back in 2003, electronics engineers at standards bodies and device manufacturers such as Intel, Dell, HP, and IBM set out to improve on the PCI (Peripheral Component Interconnect) interface for internal computer components. Toward the end of its
life, PCI bandwidth was reaching its theoretical limits, but applications that
required more were only just gaining in popularity. When the standard came into
being in the mid-’90s, there wasn’t a whole lot of streaming audio and
on-demand video available for consumption, but by the end of its lifespan
nearly a decade later, those applications were just on the horizon, and they
demanded real-time performance from servers and computers that PCI could not
handle in its end-of-life incarnation. PCI was built for computers and servers but ended up being used in mobile, communications, and embedded applications as well, making it one of the most flexible interfaces ever created.
The
two notches in this 32-bit PCI card indicate that it can be used in either a
3.3V or 5V PCI slot.
The PC’s Split Personality
Traditionally, a computer’s components have been grouped into two distinct hubs. The first is the CPU and memory subsystem; the second is the I/O hub, which handles everything else, such as graphics (though this component is also very memory-bound, as we’ll see later), peripherals, storage devices, and other internal components. The
CPU and memory subsystems change rapidly, generally keeping step with Moore’s
Law, whereas the I/O hub is a much less volatile environment. This part of the
computer is the bread and butter of the consumer electronics industry. Key
considerations for the interfaces on this side of the PC include backward
compatibility, low power, a highly scalable architecture, and support for a
variety of form factors. Protocols on this hub include USB, FireWire, ATA, and
of course, PCI.
The
AGP (Accelerated Graphics Port) standard gave graphics cards a significant
boost in bandwidth compared to PCI.
The AGP Stopgap
PCI’s shared bandwidth problem also
contributed to the rise of AGP, or the Accelerated Graphics Port, also
developed by Intel. Like the proposed PCIe, AGP was a point-to-point connection
that was dedicated to the graphics subsystem. PCI-based graphics cards were
required to copy textures from system memory to the graphics card’s memory. AGP-based graphics cards, on the other hand, could set aside a portion of system memory for storing textures, eliminating the copy step that slowed PCI-based cards. With AGP’s benefits rendered redundant by PCIe’s debut, the dedicated graphics interface didn’t last long past the newer specification’s launch.
During its initial decade, PCI saw several
incremental improvements. The original spec was for a 32-bit bus that operated
at 5 volts and 33MHz. Later, the PCI-SIG (PCI Special Interest Group) added a
66MHz 3.3V standard, a double-wide 64-bit bus, and a 133MHz PCI-X extension,
the latter two of which eventually became popular in servers.
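To put those figures in perspective, a parallel bus’s theoretical peak is simply bus width times clock rate. Here’s a minimal sketch of that arithmetic in Python (the helper name is our own; the resulting numbers are the standard theoretical peaks):

    def parallel_peak_mbps(bus_width_bits, clock_mhz):
        # Theoretical peak of a parallel bus in MB/s: bits per transfer
        # times transfers per second, divided by 8 bits per byte.
        return bus_width_bits * clock_mhz / 8

    print(parallel_peak_mbps(32, 33.33))   # original PCI:   ~133 MB/s
    print(parallel_peak_mbps(32, 66.66))   # 66MHz PCI:      ~266 MB/s
    print(parallel_peak_mbps(64, 66.66))   # 64-bit, 66MHz:  ~533 MB/s
    print(parallel_peak_mbps(64, 133.33))  # PCI-X 133:      ~1,066 MB/s

Every device on the bus shares that single peak figure, which is the root of the problem described in the next section.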
This
Gigabit Ethernet network adapter features the tremendously long 64-bit PCI
interface.
The Parallel/Serial Seesaw
PCI uses a parallel bus interface in which
every device connected to the PCI host shares the same address, data, and
control lines. Due to this architecture, a PCI host can allow only a single bus master to transfer data at a time, and in just one direction. PCI’s clocking scheme also limits the overall bandwidth to the speed of the slowest connected component. If you run legacy components (as many businesses tend to), this largely nullifies the PCI specification’s incremental updates. The PCI host on a given system is also limited to five devices.
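That slowest-device penalty is easy to quantify. A back-of-the-envelope sketch under the same assumptions as above (the helper is hypothetical; the clock-down behavior is the real PCI constraint):

    def shared_bus_peak_mbps(bus_width_bits, device_clocks_mhz):
        # A shared parallel bus clocks at the rate of its slowest
        # device, and every device splits that single peak figure.
        return bus_width_bits * min(device_clocks_mhz) / 8

    # A 66MHz-capable bus hosting one legacy 33MHz card falls back to 33MHz:
    print(shared_bus_peak_mbps(32, [66.66, 66.66, 33.33]))  # ~133 MB/s, not ~266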
Prior to AGP, you could easily max out a
system by installing a PCI-based graphics card, wired NIC, wireless NIC, sound
card, and an auxiliary storage controller.
Here
are some examples of the different PCIe slots.
Early computing relied heavily on serial
connections that transmitted data over a single wire in packets. Advantages included the ability to send data over a single wire, and dedicated pins and wires allowed bi-directional communication to take place. Serial connections were reliable but not fast enough to keep up with computing’s accelerating demands. Serial connections eventually gave way to parallel connections, which used multiple wires to send multiple bits simultaneously, whole bytes at once. PCI’s parallel bus also ran into problems, including electromagnetic interference between the wires. Properly shielding additional wires for higher throughputs simply became cost prohibitive.
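As a toy illustration of that seesaw (the helper is hypothetical; the tradeoff is the real one): parallel moves a whole byte or word per clock while serial pays per bit, so serial only wins by clocking far higher, which is exactly the route PCIe would take.

    def cycles_to_send(num_bytes, wires):
        # Clock cycles needed to move num_bytes over a link that carries
        # `wires` bits per cycle (1 = serial, 32 = 32-bit parallel bus).
        total_bits = num_bytes * 8
        return -(-total_bits // wires)  # ceiling division

    print(cycles_to_send(4, 1))   # serial link:         32 cycles for 4 bytes
    print(cycles_to_send(4, 32))  # 32-bit parallel bus:  1 cycle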
PCI’s engineers looked back to the serial
bus for a solution. And why not? Other serial protocols had stepped up since
parallel’s rise to prominence, including USB and FireWire. The earliest name
for the new protocol was HSI (High Speed Interconnect), and later it became 3GIO (3rd Generation I/O) before finally adopting PCI Express as the interface’s name. When it was initially launched in 2003, PCI Express essentially serialized the parallel PCI interface. PCIe is a point-to-point serial interface that
supports dedicated links between each component and the PCIe hub. PCIe lets
full-duplex communication take place between the hub and component. As such,
multiple PCIe devices can communicate with the hub simultaneously, without
suffering a delay or taking a bandwidth hit from a slower-performing component.
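To make that payoff concrete, here is a minimal sketch of first-generation PCIe lane math (the 2.5GT/s signaling rate and 8b/10b encoding overhead are the published first-generation figures; the helper name is ours):

    def pcie_gen1_peak_mbps(lanes, full_duplex=False):
        # Each first-generation lane signals at 2.5GT/s, and 8b/10b
        # encoding spends 10 bits on the wire per 8 bits of data,
        # leaving 250 MB/s of usable bandwidth per lane, per direction.
        per_lane = 2500 * (8 / 10) / 8  # MT/s * data fraction / bits per byte
        return lanes * per_lane * (2 if full_duplex else 1)

    print(pcie_gen1_peak_mbps(1))                     # x1:  250 MB/s each way
    print(pcie_gen1_peak_mbps(16, full_duplex=True))  # x16: 8,000 MB/s combined

Because each link is dedicated, those figures apply per device rather than being split across a shared bus, and a slow card in one slot no longer drags down its neighbors.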