I/O Bus Requirements

This section summarizes requirements for the I/O bus, with emphasis on requirements related to the PCI bus.

System provides an I/O bus based on an industry-standard specification

Required

Currently, for most systems, this requirement is met with PCI.

System supports a 32-bit bus architecture

Required

For example, for PCI, the server system must support the 32-bit physical address space; PCI adapters must be capable of addressing any location in that address space.

System supports a 64-bit bus architecture

                  Windows NT Server   Enterprise Edition   Small Business Server
Basic Server      Recommended         Recommended          Optional
Enterprise        Recommended         Required             Optional
SOHO              Recommended         Recommended          Optional

For example, the server system with a 64-bit PCI bus should support the 64-bit physical address space; 64-bit PCI adapters must be able to address any location in the address space supported by the platform.

For servers with 64-bit processors or running Windows NT Server/Enterprise Edition version 5.0 or later, the system must support a 64-bit PCI bus. Additionally, support for a 66-MHz PCI bus is recommended.

PCI bus and devices comply with PCI 2.1 and other requirements

Required

Recommended: PCI controllers implemented as peer bridges to provide more effective bus bandwidth. Also, servers with more than 4 GB of memory should support the PCI dual address cycle (DAC) for the 64-bit physical address space. DAC support does not preclude hardware from using 32-bit addressing.

If PCI is present in the system, the PCI bus and PCI expansion connectors must meet the requirements defined in PCI Local Bus Specification, Revision 2.1 or later (PCI 2.1), plus any additional PCI requirements in this guide. It is recommended that PCI devices, chip sets, and expansion slots support the requirements defined in the PCI 2.2 specification. The system must also support the addition of PCI bridge cards, and all PCI connectors on the system board set must allow any PCI expansion card to have bus master privileges.

All server systems also must meet the PCI requirements defined in this section, which include requirements to ensure effective Plug and Play support. In particular, see the required implementation for PCI 2.1 Subsystem Vendor IDs in requirement #30, “Device IDs include PCI Subsystem IDs.”

System makes a best effort to provide each PCI slot and device type access to a non-shared interrupt line

Required

System designers must make a best effort to provide each PCI slot and device type with access to a non-shared interrupt line, subject to the platform constraints described below.

The high-end and low-end commodity server platforms present different design challenges. For high-end servers, PCI 2.1 taken by itself imposes a limitation on Intel Architecture-based systems: the values written to the Interrupt Line register in Configuration Space must correspond to IRQ numbers 0–15 of the standard dual 8259 configuration, or to the value 255, which means “unknown” or “no connection.” The values from 16 through 254 are reserved. Read in isolation, this fixed-connection legacy dual 8259 model constrains Intel Architecture-based systems even when they use sophisticated interrupt-routing hardware and APIC support.

For low-end servers, some core logic offerings provide little or no interrupt-routing support, and designers implement rotating access to interrupt resources using simple wire-OR techniques, such as those illustrated in the implementation note in Section 2.2.6 of the PCI 2.1 specification.
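
As an illustration of the Interrupt Line register semantics just described, the following minimal C sketch interprets the register value for one PCI function. The pci_cfg_read8 helper is hypothetical; a real implementation is platform specific.

    /* Interpret the Interrupt Line register (offset 3Ch) according to the
       PCI 2.1 semantics for Intel Architecture systems described above. */
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical platform-specific Configuration Space read helper. */
    extern uint8_t pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

    #define PCI_INTERRUPT_LINE 0x3C

    void print_irq_routing(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        uint8_t line = pci_cfg_read8(bus, dev, fn, PCI_INTERRUPT_LINE);

        if (line <= 15)
            printf("routed to 8259 IRQ %u\n", line);  /* IRQ 0-15 */
        else if (line == 255)
            printf("unknown or no connection\n");
        else
            printf("reserved value %u\n", line);      /* 16-254 are reserved */
    }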

Windows NT, with its support for both MPS 1.4 and ACPI, uses mechanisms beyond the legacy method of routing all PCI interrupts through the cascaded 8259 interrupt controllers to determine the proper allocation and routing of PCI bus IRQs. This capability allows the use of interrupts beyond the 0–15 range permitted by a strict reading of the current PCI 2.1 specification language for Intel Architecture systems. System designers should include sufficient interrupt resources in their systems to provide at least one dedicated interrupt per PCI function for embedded devices and one interrupt per INTA#–INTD# line on each PCI slot. This will become a requirement for all servers in a future version of this guideline.

When system designers cannot provide a non-shared interrupt line to a particular PCI device or slot because of the situations described above, the server system’s documentation must clearly explain to the end user how interrupt resources are allocated on the platform and which devices cannot avoid sharing interrupts. System designers may provide this documentation or information through whatever mechanism they deem most appropriate for their product.

Some instances need additional clarification to fit within the context of this guideline; in such cases, PCI devices can share an interrupt line at the system designer’s discretion.

System does not contain ghost devices

Required

A computer must not include any ghost devices, which are devices that do not correctly decode the Type 1/Type 0 indicator. Such a device will appear on multiple PCI buses.

A PCI card should be visible through hardware configuration access at only one bus/device/function coordinate.
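
To make the single bus/device/function coordinate concrete, the following sketch shows Configuration Mechanism #1 as commonly implemented on Intel Architecture systems. The host bridge converts this CONFIG_ADDRESS encoding into a Type 0 cycle on its own bus and a Type 1 cycle forwarded toward subordinate buses; a ghost device is one that mis-decodes that indicator and therefore answers at more than one coordinate. This sketch assumes x86 Linux with port I/O privileges (for example, via iopl) and is not a portable interface.

    /* Configuration Mechanism #1: encode bus/device/function into the
       CONFIG_ADDRESS register at 0CF8h and read data from 0CFCh. */
    #include <stdint.h>
    #include <sys/io.h>   /* outl/inl; requires iopl(3) or root */

    #define PCI_CONFIG_ADDRESS 0xCF8
    #define PCI_CONFIG_DATA    0xCFC

    uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
    {
        uint32_t addr = (1u << 31)                     /* enable bit */
                      | ((uint32_t)bus << 16)          /* bus number */
                      | ((uint32_t)(dev & 0x1F) << 11) /* device number */
                      | ((uint32_t)(fn & 0x07) << 8)   /* function number */
                      | (off & 0xFC);                  /* dword-aligned offset */
        outl(addr, PCI_CONFIG_ADDRESS);
        return inl(PCI_CONFIG_DATA);
    }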

System uses standard method to close BAR windows on nonsubtractive decode PCI bridges

Required

PCI-to-PCI bridges must comply with the PCI to PCI Bridge Architecture Specification, Revision 1.0. Setting a window’s base register to its maximum value and its limit register to zero must effectively close the corresponding I/O or memory window in that bridge.
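
A minimal sketch of this behavior, assuming a hypothetical pci_cfg_write16 Configuration Space write helper: programming the memory window of a Type 1 (bridge) header with the base at its maximum and the limit at zero, which must leave the window closed.

    /* Close the memory window of a PCI-to-PCI bridge by setting
       base > limit (base = maximum, limit = zero). */
    #include <stdint.h>

    /* Hypothetical platform-specific Configuration Space write helper. */
    extern void pci_cfg_write16(uint8_t bus, uint8_t dev, uint8_t fn,
                                uint8_t off, uint16_t val);

    #define PPB_MEM_BASE  0x20   /* Memory Base, Type 1 header */
    #define PPB_MEM_LIMIT 0x22   /* Memory Limit, Type 1 header */

    void close_bridge_mem_window(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        pci_cfg_write16(bus, dev, fn, PPB_MEM_BASE,  0xFFF0); /* maximum base */
        pci_cfg_write16(bus, dev, fn, PPB_MEM_LIMIT, 0x0000); /* zero limit   */
    }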

PCI devices do not use the <1 MB BAR type

Required

Recommended for Enterprise-class servers: devices on a 64-bit PCI bus should accept any 64-bit BAR address.

Devices must accept any 32-bit BAR address.
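
The BAR encoding involved is in the low bits of each base address register. As a sketch (reusing the hypothetical pci_cfg_read32 helper), the following decodes a BAR’s type field and flags the forbidden below-1-MB memory type:

    /* Decode a BAR's type bits: bit 0 selects I/O vs. memory; for memory
       BARs, bits 2:1 are 00 (32-bit), 01 (below 1 MB; forbidden here),
       or 10 (64-bit). An unused BAR should read as zero. */
    #include <stdint.h>
    #include <stdio.h>

    extern uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

    void check_bar_type(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t bar_off)
    {
        uint32_t bar = pci_cfg_read32(bus, dev, fn, bar_off);

        if (bar == 0)
            printf("BAR %02Xh: unused\n", bar_off);
        else if (bar & 0x1)
            printf("BAR %02Xh: I/O space\n", bar_off);
        else if (((bar >> 1) & 0x3) == 0x1)
            printf("BAR %02Xh: <1 MB memory type (not allowed)\n", bar_off);
        else
            printf("BAR %02Xh: %s memory\n", bar_off,
                   ((bar >> 1) & 0x3) == 0x2 ? "64-bit" : "32-bit");
    }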

PCI devices decode only their own cycles

Required

PCI devices must not decode cycles that are not their own to avoid contention on the PCI bus. Notice that this requirement does not in any way prohibit the standard interfaces provided for by the PCI cache support option discussed in PCI 2.1, which allows the use of a snooping cache coherency mechanism. Auxiliary hardware that is used to provide non-local console support is permitted within the scope of this requirement.

VGA-compatible devices do not use non-video I/O ports

Required

Recommended: Device includes a mode that does not require ISA VGA ports to function.

A VGA-compatible device must not use any legacy I/O ports that are not set aside for video in the PCI 2.1 specification.
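
For reference, the legacy I/O ranges conventionally set aside for VGA are 3B0h–3BBh (monochrome) and 3C0h–3DFh (color/VGA). A small check, as a sketch:

    /* True if a legacy I/O port falls inside the VGA-reserved ranges. */
    #include <stdint.h>
    #include <stdbool.h>

    static bool is_vga_legacy_port(uint16_t port)
    {
        return (port >= 0x3B0 && port <= 0x3BB) ||   /* monochrome range */
               (port >= 0x3C0 && port <= 0x3DF);     /* color/VGA range  */
    }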

PCI chip sets support Ultra DMA if primary host controller uses ATA

                  Windows NT Server   Enterprise Edition   Small Business Server
Basic Server      Required            Not applicable       Required
Enterprise        Required            Not applicable       Required
SOHO              Required            Not applicable       Required

For servers that implement PCI ATA connectivity, PCI chip sets must implement DMA and Ultra DMA (also known as Ultra-ATA) as defined in ATA/ATAPI-4 Revision 17 (ATA-4).

Ultra DMA is required to avoid the bottleneck created by the current 16.6 MB per second limit on ATA disk transfer. Ultra DMA also provides error checking for improved robustness over previous ATA implementations.

An exemption exists for PCI ATA-connected CD drives used solely for the purpose of software installation on a server system. Such devices cannot be used for any other purpose, including access to data by client systems. This exemption will not be allowed in the next version of these guidelines.

This requirement does not apply for servers running Windows NT Server/Enterprise Edition, which do not use ATA for primary storage.

Functions in a multifunction PCI device do not share writable PCI Configuration Space bits

Required

The operating system treats each function of a multifunction PCI device as an independent device. As such, there can be no sharing between functions of writable PCI Configuration Space bits (such as the Command register).
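
A function’s independence can be seen in the header: bit 7 of the Header Type byte (offset 0Eh) marks a multifunction device, and each function then exposes its own Command register and other writable bits. A sketch, assuming the same hypothetical pci_cfg_read8 helper:

    /* Report whether a device exposes multiple independent functions. */
    #include <stdint.h>
    #include <stdbool.h>

    extern uint8_t pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

    #define PCI_HEADER_TYPE 0x0E

    bool is_multifunction(uint8_t bus, uint8_t dev)
    {
        /* Bit 7 of the Header Type byte of function 0. */
        return (pci_cfg_read8(bus, dev, 0, PCI_HEADER_TYPE) & 0x80) != 0;
    }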

Devices use the PCI Configuration Space for their Plug and Play identifiers

Required

The PCI 2.1 specification describes the Configuration Space used by the system to identify and configure each device attached to the bus. The Configuration Space is made up of a 256-byte address space for each device and contains sufficient information for the system to identify the capabilities of the device. Configuration of the device is also controlled from this address space.

The Configuration Space is made up of a header region and a device-dependent region. Each Configuration Space must have a 64-byte header at offset 0. All the device registers that the device circuit uses for initialization, configuration, and catastrophic error handling must fit within the space between byte 64 and byte 255.

All other registers that the device uses during normal operation must be located in normal I/O or memory space. Reads to unimplemented or reserved registers must complete normally and return zero; writes to such registers must complete normally, and the data must be discarded.

All registers required by the device at interrupt time must be in I/O or memory space.
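
For reference, the 64-byte Type 0 header layout can be expressed as a packed C structure; this is a sketch of the layout defined in PCI 2.1, not an operating system interface.

    /* 64-byte Type 0 Configuration Space header (PCI 2.1). The
       device-dependent region occupies bytes 64-255. */
    #include <stdint.h>

    #pragma pack(push, 1)
    typedef struct {
        uint16_t vendor_id;            /* 00h */
        uint16_t device_id;            /* 02h */
        uint16_t command;              /* 04h */
        uint16_t status;               /* 06h */
        uint8_t  revision_id;          /* 08h */
        uint8_t  prog_if;              /* 09h */
        uint8_t  sub_class;            /* 0Ah */
        uint8_t  base_class;           /* 0Bh */
        uint8_t  cache_line_size;      /* 0Ch */
        uint8_t  latency_timer;        /* 0Dh */
        uint8_t  header_type;          /* 0Eh: bit 7 = multifunction */
        uint8_t  bist;                 /* 0Fh */
        uint32_t bar[6];               /* 10h-24h: base address registers */
        uint32_t cardbus_cis;          /* 28h */
        uint16_t subsystem_vendor_id;  /* 2Ch */
        uint16_t subsystem_id;         /* 2Eh */
        uint32_t expansion_rom;        /* 30h */
        uint8_t  cap_ptr;              /* 34h */
        uint8_t  reserved[7];          /* 35h-3Bh */
        uint8_t  interrupt_line;       /* 3Ch */
        uint8_t  interrupt_pin;        /* 3Dh */
        uint8_t  min_gnt;              /* 3Eh */
        uint8_t  max_lat;              /* 3Fh */
    } pci_config_header_type0;
    #pragma pack(pop)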

Device IDs include PCI Subsystem IDs

Required

The Subsystem ID (SID) and Subsystem Vendor ID (SVID) fields are required to comply with the Subsystem ID ECN to PCI 2.1 or the equivalent requirement in PCI 2.2. The Subsystem ID ECN is available to PCI SIG members on the web at http://www.pcisig.com.

The device designer is responsible for ensuring that the SID and SVID registers are implemented. The adapter designer or system-board designer who uses this device is responsible for ensuring that these registers are loaded with valid non-zero values before the operating system accesses this device.

Valid non-zero values in the Subsystem ID fields are necessary for the correct enumeration of the device. When the Subsystem ID fields are populated correctly for the adapter, the operating system can differentiate between adapters based on the same PCI chip.

Valid non-zero values in the Subsystem ID fields also allow the operating system to load system miniports for system-board devices, so Subsystem ID fields must also be populated on system-board devices. The exceptions to this requirement are PCI-to-PCI bridges, core chip sets, and OEM-unique system-board devices for system management that are not visible to the operating system. Notice that a feature integrated into a core chip set, such as graphics or audio, must still meet this requirement.

The PCI specification and these guidelines require that all PCI functions ensure that the Subsystem ID fields are loaded with valid non-zero values before the operating system accesses the function’s Configuration Space registers. This is required both at initial operating system load and after any transition of the PCI bus from B3 (the unpowered state) back to B0 (the fully powered state).

For add-on cards, this requirement must be met by hardware on the card itself, such as a serial EEPROM that loads the values at power-up, and not by an extension BIOS or device driver. Extension BIOS code or driver code is not guaranteed to run in all relevant cases, especially during system sleep transitions or dynamic bus power state transitions in which the bus becomes unpowered.

If a device is designed to be used exclusively on the system board, the system-board vendor can load valid non-zero values into these registers using the system BIOS power-on self test (POST) code or ACPI control methods (_PS0 for PCI bus B3-to-B0 transitions), because this code always runs before the operating system accesses a function’s Configuration Space registers. Once the operating system has control of the system, Subsystem IDs must not be directly writable; that is, the read-only bit must be set and valid. See also the note on Subsystem Vendor IDs related to multiple-monitor support for display devices in requirement #13, “Unique Plug and Play ID is provided for each system device and add-on device.”
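
The check itself is simple; a minimal sketch, assuming a hypothetical pci_cfg_read16 Configuration Space read helper:

    /* Verify that a function presents valid non-zero Subsystem IDs:
       Subsystem Vendor ID at 2Ch and Subsystem ID at 2Eh. */
    #include <stdint.h>
    #include <stdbool.h>

    extern uint16_t pci_cfg_read16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

    #define PCI_SUBSYSTEM_VENDOR_ID 0x2C
    #define PCI_SUBSYSTEM_ID        0x2E

    bool subsystem_ids_valid(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        uint16_t svid = pci_cfg_read16(bus, dev, fn, PCI_SUBSYSTEM_VENDOR_ID);
        uint16_t sid  = pci_cfg_read16(bus, dev, fn, PCI_SUBSYSTEM_ID);

        return svid != 0x0000 && sid != 0x0000;
    }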

Configuration Space is correctly populated

Required

Windows NT places extra constraints on a few configuration registers. Microsoft provides a program named Pci.exe to help debug the use of the Configuration Space. This program is available at http://www.microsoft.com/hwdev/pci/.

The following items are specific requirements for the Configuration Space:

Follow the base class, sub-class, and programming interface values outlined in PCI 2.1 or later.

See PCI 2.1 or later for the correct usage of the base address registers (BARs at 10h, 14h, 18h, 1Ch, 20h, and 24h). Notice that unused BARs should return zero, indicating that no memory or I/O space is needed.

Registers that have specific timing or latency requirements must not be placed in PCI Configuration Space.

Interrupt routing is supported using ACPI

Required

The system must provide interrupt routing information using a _PRT object, as defined in Section 6.2.3 of the ACPI 1.0 specification.

BIOS does not configure I/O systems to share PCI interrupts

                  Windows NT Server   Enterprise Edition   Small Business Server
Basic Server      Recommended         Required             Recommended
Enterprise        Recommended         Required             Recommended
SOHO              Recommended         Required             Recommended

This applies to boot devices configured by the BIOS on systems that use Intel Architecture processors. The operating system should configure all other devices.

BIOS configures boot device IRQ and writes to the interrupt line register

Required

This requirement does not apply for DEC Alpha servers. This requirement applies only to boot devices configured by the BIOS. All other devices should be configured by Windows NT because, after an interrupt request (IRQ) is assigned by the system BIOS, Windows NT cannot change the IRQ, even if necessary. If the BIOS assigns the IRQ and Windows needs it for another device, a sharing problem occurs.

The BIOS must configure the boot device IRQ to a PCI-based IRQ and write the IRQ into the Interrupt Line register (offset 3Ch), even if the BIOS does not enable the device. This way, the operating system can still enable the device with the known IRQ at configuration time, if possible.
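
From the firmware side, the write itself is a single Configuration Space store; a minimal sketch, assuming a hypothetical pci_cfg_write8 helper:

    /* BIOS-side sketch: record the IRQ assigned to a boot device in the
       Interrupt Line register (offset 3Ch) so the operating system can
       enable the device with a known IRQ later. */
    #include <stdint.h>

    extern void pci_cfg_write8(uint8_t bus, uint8_t dev, uint8_t fn,
                               uint8_t off, uint8_t val);

    #define PCI_INTERRUPT_LINE 0x3C

    void record_boot_device_irq(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t irq)
    {
        pci_cfg_write8(bus, dev, fn, PCI_INTERRUPT_LINE, irq);
    }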

Systems that support hot swapping for any PCI device use ACPI-based methods

Required

Windows NT 5.0 supports dynamic enumeration, installation, and removal of PCI devices only if there is a supported hardware insert/remove notification mechanism. The hardware insert/remove notification mechanism must be implemented as defined in Section 5.6.3 of the ACPI 1.0 specification.

All 66-MHz and 64-bit PCI buses in a server system comply with PCI 2.1 and other requirements

Required

If PCI buses that are 66 MHz, 64 bit, or both are present in a server system, all devices connected to these buses must meet the requirements defined in PCI 2.1 or later.

It is recommended that 33-MHz/32-bit PCI devices and 66-MHz/64-bit PCI devices be placed on separate PCI buses to allow the best use of I/O bandwidth in a server system.

All PCI devices complete memory write transaction (as a target) within specified times

Required

All devices must comply with the Maximum Completion Time ECN that is approved for the PCI 2.1 specification. This requirement is also documented in the PCI 2.2 specification. Complying with this requirement ensures shorter transaction latencies on PCI, allowing more robust handling of isochronous streams in the system.

All PCI components comply with PCI Bus Power Management Interface specification

                  Windows NT Server   Enterprise Edition   Small Business Server
Basic Server      Recommended         Recommended          Recommended
Enterprise        Recommended         Recommended          Recommended
SOHO              Required            Required             Required

The PCI bus, any PCI-to-PCI bridges on the bus, and all add-on capable devices on the PCI bus must comply with PCI Bus Power Management Interface Specification, Revision 1.1 or later. This includes correct implementation of the PCI Configuration Space registers used by power management operations, and the appropriate device state (Dx) definitions for the PCI bus, any PCI-to-PCI bridges on the bus, and all add-on-capable devices on the PCI bus. ACPI is not an acceptable alternative.
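
The registers in question live in the capability list. The following sketch walks the list to the Power Management capability (ID 01h) and reads the function’s current device state (Dx) from the PMCSR register, as defined in the PCI Bus Power Management Interface specification; pci_cfg_read8 and pci_cfg_read16 are hypothetical helpers.

    /* Find the PM capability and report the function's current D-state. */
    #include <stdint.h>
    #include <stdio.h>

    extern uint8_t  pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
    extern uint16_t pci_cfg_read16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

    #define PCI_STATUS      0x06
    #define PCI_CAP_PTR     0x34
    #define PCI_CAP_ID_PM   0x01
    #define STATUS_CAP_LIST 0x0010   /* Status bit 4: capability list */

    int report_pm_state(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        if (!(pci_cfg_read16(bus, dev, fn, PCI_STATUS) & STATUS_CAP_LIST))
            return -1;                         /* no capability list */

        uint8_t ptr = pci_cfg_read8(bus, dev, fn, PCI_CAP_PTR) & 0xFC;
        while (ptr) {
            if (pci_cfg_read8(bus, dev, fn, ptr) == PCI_CAP_ID_PM) {
                uint16_t pmcsr = pci_cfg_read16(bus, dev, fn, ptr + 4);
                printf("function is in D%u\n", pmcsr & 0x3);
                return 0;
            }
            ptr = pci_cfg_read8(bus, dev, fn, ptr + 1) & 0xFC; /* next cap */
        }
        return -1;                             /* PM capability absent */
    }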

System provides support for 3.3Vaux if system supports S3 or S4 state

                  Windows NT Server   Enterprise Edition   Small Business Server
Basic Server      Recommended         Recommended          Recommended
Enterprise        Recommended         Recommended          Recommended
SOHO              Required            Required             Required

System support for delivery of 3.3Vaux to the PCI bus must be capable of powering a single PCI slot with 375 mA at 3.3V, and each of the other PCI slots on the segment with 20 mA at 3.3V, whenever the PCI bus is in the B3 state.

Systems must be capable of delivering 375 mA at 3.3V to all PCI slots whenever the PCI bus is in any “bus powered” state: B0, B1, or B2.
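
As a worked example of the B3 budget above, a segment with a hypothetical six slots must be able to deliver 375 mA to one slot plus 20 mA to each of the other five, or 475 mA total at 3.3V:

    /* 3.3Vaux current budget for a PCI segment in B3: one slot at
       375 mA plus 20 mA for each remaining slot. */
    #include <stdio.h>

    static unsigned int aux_budget_ma(unsigned int slots)
    {
        return slots ? 375 + (slots - 1) * 20 : 0;
    }

    int main(void)
    {
        printf("%u mA\n", aux_budget_ma(6));   /* prints 475 */
        return 0;
    }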

PCI bus power states are correctly implemented

                  Windows NT Server   Enterprise Edition   Small Business Server
Basic Server      Recommended         Recommended          Recommended
Enterprise        Recommended         Recommended          Recommended
SOHO              Required            Required             Required

The PCI bus must be in a bus state (Bx) no higher than the system sleeping state (Sx). This means that if the system enters S1, the bus must be in B1, B2, or B3. If the system enters S2, the bus must be in B2 or B3, and if the system enters S3, the bus must be in B3. Of course, in S4 and S5, the system power is removed, so the bus state is B3. A PCI bus segment must not transition to the B3 state until all downstream devices have transitioned to D3.
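
The rule reduces to requiring that the bus state number be at least the sleeping state number; a minimal sketch:

    /* True if bus state Bb is permitted while the system is in Ss:
       S1 allows B1-B3, S2 allows B2-B3, S3 (and S4/S5) allows only B3. */
    #include <stdbool.h>

    static bool bus_state_allowed(unsigned int s, unsigned int b)
    {
        if (s >= 4)
            s = 3;                 /* S4/S5: power removed, bus is in B3 */
        return b <= 3 && b >= s;
    }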

Control of a PCI bus segment’s power is managed using the originating bus bridge for that PCI bus segment.