Site Preparation Guide: HP Integrity rx8640 and HP 9000 rp8440 Servers
HP Part Number: AB297–9014A
Published: September 2007
Edition: Fourth Edition
About This Document
This document covers the HP Integrity rx8640 and the HP 9000 rp8440 server systems.
This document does not describe system software or partition configuration in any detail. For
detailed information concerning those topics, refer to the HP System Partitions Guide:
Administration for nPartitions.
Book Layout
This document contains the following chapters and appendices:
• Chapter 1 - Server Overview
• Chapter 2 - System Specifications
• Appendix A - Templates
• Index
Intended Audience
This document is intended to be used by customer engineers assigned to support the HP Integrity
rx8640 and HP 9000 rp8440 servers.
Publishing History
The following publishing history identifies the editions and release dates of this document. Updates are made to this document on an unscheduled, as-needed basis. The updates consist of a new release of this document and pertinent online or CD-ROM documentation.
First Edition: March 2006
Second Edition: September 2006
Third Edition: January 2007
Fourth Edition: September 2007
Related Information
You can access other information on HP server hardware management, Microsoft® Windows® administration, and diagnostic support tools at the following Web sites.
The HP documentation Web site provides HP nPartition server hardware management information, including site preparation, installation, and more.
Windows Operating System Information
You can find information about administration of the Microsoft® Windows® operating system at several Web sites, among others.
Diagnostics and Event Monitoring: Hardware Support Tools
Complete information about HP hardware support tools, including online and offline diagnostics and event monitoring tools, and other reference material is available on the Web.
HP IT Resource Center
The HP IT resource center Web site provides information for IT professionals on a wide variety of topics, including software, hardware, and networking.
Books about HP-UX Published by Prentice Hall
This Web site lists the HP books that Prentice Hall currently publishes, such as HP-UX books including:
• HP-UX 11i System Administration Handbook and Toolkit
• HP-UX Virtual Partitions
HP books are available worldwide through bookstores, online booksellers, and office and computer stores.
Typographic Conventions
The following notational conventions are used in this publication.
WARNING! A warning lists requirements that you must meet to avoid personal injury.
CAUTION: A caution provides information required to avoid losing data or avoid losing system
functionality.
NOTE: A note highlights useful information such as restrictions, recommendations, or important
details about HP product features.
• Commands and options are represented using this font.
• Text that you type exactly as shown is represented using this font.
• Text to be replaced with text that you supply is represented using this font.
Example:
"Enter the ls -l filename command" means you must replace filename with your own text.
• Keyboard keys and graphical interface items (such as buttons, tabs, and menu items) are represented using this font.
Examples:
The Control key, the OK button, the General tab, the Options menu.
• Menu —> Submenu represents a menu selection you can perform.
Example:
"Select the Partition —> Create Partition action" means you must select the Create Partition menu item from the Partition menu.
• Example screen output is represented using this font.
HP Encourages Your Comments
HP encourages your comments concerning this document. We are committed to providing
documentation that meets your needs. Send any errors found, suggestions for improvement, or
compliments to:
Include the document title, manufacturing part number, and any comment, error found, or
suggestion for improvement you have concerning this document.
1 HP Integrity rx8640 and HP 9000 rp8440 Server
Overview
The HP Integrity rx8640 server and the HP 9000 rp8440 server are members of the HP business-critical computing platform family of mid-range, mid-volume servers, positioned between the HP Integrity rx7640 and HP 9000 rp7440 servers and the HP Integrity Superdome servers.
IMPORTANT: Aside from the differences between the HP Integrity rx8640 and the HP 9000 rp8440 servers noted in this document, these sx2000-based systems share common hardware and technology throughout.
The server is a 17U high, 16-socket symmetric multiprocessor (SMP) rack-mount or standalone server. Features of the server include:
• Up to 512 GB of physical memory provided by dual inline memory modules (DIMMs).
• Up to 32 processors with a maximum of 4 processor modules per cell board and a maximum of 4 cell boards. Supports dual-core processors.
• One cell controller (CC) per cell board.
• Turbo fans to cool CPUs and CCs on the cell boards.
• Up to four embedded hard disk drives.
• Up to two internal DVD drives or one DVD drive and one DDS-4 DAT drive.
• Nine front chassis mounted N+1 fans.
• Twelve rear chassis mounted N+1 fans.
• Six N+1 PCI-X card cage fans.
• Up to six N+1 bulk power supplies.
• Two N+1 PCI-X power supplies.
• N+1 hot-swappable system clock oscillators.
• Sixteen PCI-X slots divided into two I/O chassis. Each I/O chassis can accommodate up to eight PCI/PCI-X/PCIe/PCI-X 2.0 cards.
• Up to two core I/O cards.
• One failover service processor per core I/O card.
• Four 220 V AC power plugs. Two are required, and the other two provide power source redundancy.
Detailed Server Description
The following section provides detailed information about the server components.
1. The U is a unit of measurement specifying product height. One U is equal to 1.75 inches.
Figure 1-1 16-Socket Server Block Diagram (four cell boards, each with four CPUs, memory, and a cell controller, connect through the crossbar (XBC) on the system backplane; SBAs on the PCI-X backplane fan out through LBAs to the PCI-X slots and to the two core I/O cards with LAN, SCSI, and MP; the mass storage board serves the disk bays and DVD drives; bulk power supplies, clocks, and an I/O expansion connector are also shown)
Dimensions and Components
The following section describes server dimensions and components.
The server has the following dimensions:
• Depth: Defined by cable management constraints to fit into a standard 36-inch deep rack:
  - 25.5 inches from front rack column to PCI connector surface
  - 26.7 inches from front rack column to core I/O card connector surface
  - 30 inches overall package dimension, including 2.7 inches protruding in front of the front rack columns
• Width: 17.5 inches, constrained by EIA standard 19-inch racks
• Height: 17 U (29.55 inches), constrained by package density
The mass storage section located in the front enables access to removable media drives without
removal of the bezel. The mass storage bay accommodates two 5.25-inch removable media drives
and up to four 3.5-inch hard disk drives. The front panel display, containing LEDs and the system
power switch, is located directly above the hard drive media bays.
Below the mass storage section and behind a removable bezel are two PCI-X power supplies.
Each PCI-X power supply powers both I/O partitions. Two PCI-X power supplies offer an N+1 configuration.
Enclosed with protective finger guards are nine front online replace (OLR) fan modules.
The bulk power supply is partitioned by a sealed metallic enclosure located in the bottom of the
server. This enclosure houses the N+1 fully redundant bulk power supplies. Install these power
supplies from the front of the server after removing the front bezel. Each power supply measures 2.45 x 5.625 x 20.0 inches.
Figure 1-4 Server (Rear View) (callouts: PCI OLR fans, PCI I/O card section, core I/O cards, rear OLR fans, AC power receptacles)
Access the PCI-X I/O card section, located toward the rear, by removing the top cover.
The PCI card bulkhead connectors are located at the rear top.
The PCI-X OLR fan modules are located in front of the PCI cards. They are housed in plastic carriers.
The 12 rear OLR fans attached outside the chassis house 120-mm exhaust fans.
The cell boards are located on the right side of the server behind a removable side cover. For rack-mounted servers on slides, remove the rack front door if it is hinged on the right side of the rack. Removal allows unrestricted access to the sides of the server after sliding the server out for service.
The two redundant core I/O cards are positioned vertically end-to-end at the rear of the chassis.
Redundant line cords attach to the AC power receptacles at the bottom rear. Two 20-amp cords
are required to power the server. Two additional line cords provide redundancy.
Access the system backplane by removing the left side cover. The system backplane hinges from
the lower edge and is anchored at the top with a single large jack screw assembly.
The SCSI ribbon cable assembly also routes across and fastens to the backside of the system
backplane near the connectors that attach the core I/O boards.
The blue deployment handles hinge outward for manual lift. When the server is slide mounted, the handles retract against the chassis to enable slide action without obstruction.
Front Panel
Front Panel Indicators and Controls
The front panel, located on the front of the server, includes the power switch. Refer to Figure 1-5.
Enclosure Status LEDs
The following status LEDs are on the front panel:
• Locate LED (blue)
• Power LED (tricolor)
• Management processor (MP) status LED (tricolor)
• Cell 0, 1, 2, 3 status LEDs (tricolor)
Figure 1-5 Front Panel LEDs and Power Switch
Cell Board
The cell board contains the processors, the memory subsystem, and the cell controller (CC), an application-specific integrated circuit (ASIC) that interfaces the processors and memory with the I/O. The CC is the heart of the cell board, providing a crossbar connection that enables communication with other cell boards in the system. It connects to the processor dependent hardware (PDH) and microcontroller hardware. Each cell board holds up to four processor modules and 16 memory DIMMs. One to four cell boards can be installed in the server. A cell board can be selectively powered off for adding processors or memory, or for maintenance of the cell board, without affecting cells in other configured partitions.
Figure 1-6 Cell Board
The server has a 48 V distributed power system and receives the 48 V power from the system
backplane board. The cell board contains DC-to-DC converters to generate the required voltage
rails. The DC-to-DC converters on the cell board do not provide N+1 redundancy.
The cell board contains the following major buses:
• Front side buses (FSB) for each of the four processors
• Four memory buses (one going to each memory quad)
• Incoming and outgoing I/O bus that goes off board to an SBA chip
• Incoming and outgoing crossbar buses that communicate to the crossbar chips on the system backplane
• PDH bus that goes to the PDH and microcontroller circuitry
All of these buses come together at the CC chip.
Because of space limitations on the cell board, the PDH and microcontroller circuitry reside on
a riser board that plugs at a right angle into the cell board. The cell board also includes clock
circuits, test circuits, and decoupling capacitors.
PDH Riser Board
The server PDH riser board is a small card that plugs into the cell board at a right angle. The PDH riser interface contains the following components:
• Microprocessor memory interface microcircuit
• Hardware including the processor dependent hardware (PDH) flash memory
• Manageability microcontroller with associated circuitry
The PDH obtains cell board configuration information from cell board signals and from the cell
board local power module (LPM).
Central Processor Units
The cell board can hold up to four CPU modules. Each CPU module can contain up to two CPU cores on a single die. Modules are populated in increments of one. On a cell board, the processor modules must be of the same family, type, and clock frequency; mixing processors of different types or clock frequencies on a cell board is not supported. Table 1-1 shows the load order of the cell board for installing processor modules.
NOTE: Unlike previous HP cell based systems, the server cell board does not require that a
termination module be installed at the end of an unused FSB. System firmware is allowed to
disable an unused FSB in the CC. This enables both sockets of the unused bus to remain
unpopulated.
Table 1-1 Cell Board CPU Module Load Order
Number of CPU Modules Installed | Socket 2 | Socket 3 | Socket 1 | Socket 0
1 | Empty slot | Empty slot | Empty slot | CPU installed
2 | Empty slot | Empty slot | CPU installed | CPU installed
3 | CPU installed | Empty slot | CPU installed | CPU installed
4 | CPU installed | CPU installed | CPU installed | CPU installed
Figure 1-7 Socket Locations on Cell Board (sockets 2, 3, 1, and 0 surround the cell controller)
Memory Subsystem
Figure 1-8 shows a simplified view of the memory subsystem. It consists of four independent access paths, each path having its own address bus, control bus, data bus, and DIMMs. Address and control signals are fanned out through register ports to the synchronous dynamic random access memory (SDRAM) on the DIMMs.
The memory subsystem comprises four independent quadrants. Each quadrant has its own
memory data bus connected from the cell controller to the two buffers for the memory quadrant.
Each quadrant also has two memory control buses: one for each buffer.
Figure 1-8 Memory Subsystem
D
D
I
I
MM
MM
D
D
I
I
MM
MM
P
D
H Riser
B
o
ard
A
d
d
r
e
ss
/
Address/
C
o
n
t
r
o
l
l
e
r
Controller
Bu
f
f
e
r
Bu
f
f
e
r
Bu
f
f
e
r
Bu
f
f
e
r
B
u
f
f
e
r
Buffer
D
D
I
I
MM
MM
D
I
MM
MM
DI
D
D
I
I
MM
MM
D
D
I
I
MM
MM
A
d
d
r
e
ss
/
Address/
C
o
n
t
r
o
l
l
e
r
Controller
Bu
f
f
e
r
Bu
f
f
e
r
Bu
f
f
e
r
Bu
f
f
e
r
B
u
f
f
e
r
Buffer
D
D
I
I
MM
MM
D
I
MM
MM
DI
F
2
ron
t
S
i
d
e
B
u
s 1
F
r
on
t
S
i
d
e
1
Bu
s 0
C
C
ell
C
ontroller
CPU
CP
U
3
C
P
U
PU
0
DIMMs
The memory DIMMs used by the server are custom designed by HP. Each DIMM contains DDR-II
SDRAM memory that operates at 533 MT/s. Industry standard modules do not support the high
availability and shared memory features of the server. Therefore, industry standard DIMM
modules are not supported.
Table 1-2 lists each supported DIMM size, the resulting total server capacity, and the memory component density; the arithmetic behind the capacity column is sketched after the table. Each DIMM is connected to two buffer chips on the cell board.
Table 1-2 DIMM Sizes Supported
DIMM Size | Total Capacity | Memory Component Density
1 GB | 64 GB | 256 Mb
2 GB | 128 GB | 512 Mb
4 GB | 256 GB | 1024 Mb
8 GB | 512 GB | 2048 Mb
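The total capacity column follows directly from the slot count: 16 DIMM slots per cell board times four cell boards gives 64 slots, so a fully loaded server holds 64 times the DIMM size. A minimal sketch of that arithmetic (Python is used here for illustration only; it is not part of the manual):

    # Illustrative check of Table 1-2: capacity = DIMM size x 16 slots x 4 cell boards.
    DIMM_SLOTS_PER_CELL = 16
    CELL_BOARDS = 4

    def total_capacity_gb(dimm_size_gb: int) -> int:
        """Server capacity with every slot on every cell board holding this DIMM size."""
        return dimm_size_gb * DIMM_SLOTS_PER_CELL * CELL_BOARDS

    for size_gb in (1, 2, 4, 8):
        print(f"{size_gb} GB DIMMs -> {total_capacity_gb(size_gb)} GB total")
    # 1 GB DIMMs -> 64 GB total ... 8 GB DIMMs -> 512 GB total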
Valid Memory Configurations
The first cell must have one DIMM pair loaded in slots 0A/0B. The server can support as little as 2 GB of main memory using two 1 GB DIMMs installed on one of the cell boards, and as much as 512 GB by filling all 16 DIMM slots on all four cell boards with 8 GB DIMMs.
The following rules explain the memory configuration:
1. DIMMs must be loaded in pairs (same size within a pair).
2. DIMM pairs must be loaded in slot order (0A/0B, 1A/1B, 2A/2B, ...).
3. The largest DIMMs must be loaded first, followed by progressively smaller DIMM module sizes.
A paired set of DIMMs is called a rank. DIMMs in a rank must be of the same capacity. See Table 1-3 for the load order; a sketch of the same rules follows the table. Adding a rank enables a dedicated DDR-II bus on a cell to increase the amount of usable memory bandwidth available. Available memory bandwidth is proportional to the amount of memory installed.
Table 1-3 DIMM Load Order
Number of DIMMs Installed | Action Taken | DIMM Location on Cell Board | Quad Location
2 DIMMs = 1 rank | Install first | 0A and 0B | Quad 2
4 DIMMs = 2 ranks | Add second | 1A and 1B | Quad 1
6 DIMMs = 3 ranks | Add third | 2A and 2B | Quad 3
8 DIMMs = 4 ranks | Add fourth | 3A and 3B | Quad 0
10 DIMMs = 5 ranks | Add fifth | 4A and 4B | Quad 2
12 DIMMs = 6 ranks | Add sixth | 5A and 5B | Quad 1
14 DIMMs = 7 ranks | Add seventh | 6A and 6B | Quad 3
16 DIMMs = 8 ranks | Add last | 7A and 7B | Quad 0
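As referenced above, the following is a minimal sketch of the pairing and load-order rules, with the slot-pair-to-quad mapping taken from Table 1-3. The function name and return shape are illustrative assumptions, not part of the manual:

    # Slot pair n (nA/nB) maps to a memory quad per Table 1-3.
    QUAD_BY_PAIR = [2, 1, 3, 0, 2, 1, 3, 0]

    def dimm_load_order(num_dimms: int):
        """Return the (slot pair, quad) install sequence for num_dimms DIMMs on one cell board."""
        if num_dimms % 2 != 0 or not 2 <= num_dimms <= 16:
            raise ValueError("DIMMs load in pairs; a cell board holds 2 to 16 DIMMs")
        return [(f"{pair}A/{pair}B", QUAD_BY_PAIR[pair]) for pair in range(num_dimms // 2)]

    print(dimm_load_order(8))
    # [('0A/0B', 2), ('1A/1B', 1), ('2A/2B', 3), ('3A/3B', 0)]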
Figure 1-9 DIMM Slot Layout (front edge of cell board; slots 6A, 1A, 1B, 6B, 2B, 5B, 5A, and 2A sit beside quads 3 and 1, and slots 0A, 0B, 7A, 7B, 3B, 3A, 4B, and 4A sit beside quads 2 and 0; the rear edge plugs into the server backplane)
Cells and nPartitions
An nPartition comprises one or more cells working as a single system. Any I/O chassis that is
attached to a cell belonging to an nPartition is also assigned to the nPartition. Each I/O chassis
has PCI card slots, I/O cards, attached devices, and a core I/O card assigned to the I/O chassis.
On the server, each nPartition has its own dedicated portion of the server hardware which can
run a single instance of the operating system. Each nPartition can boot, reboot, and operate
independently of any other nPartitions and hardware within the same server complex.
The server complex includes all hardware within an nPartition server: all cabinets, cells, I/O
chassis, I/O devices and racks, management and interconnecting hardware, power supplies, and
fans.
A server complex can contain one or more nPartitions, enabling the hardware to function as a
single system or as multiple systems.
NOTE: Partition configuration information is available on the Web. Refer to HP System Partitions Guide: Administration for nPartitions for details.
Internal Disk Devices
Figure 1-10 shows the internal disk devices of a server cabinet. The top internal disk drives connect to cell 0 through the core I/O for cell 0. The bottom internal disk drives connect to cell 1 through the core I/O for cell 1. The upper removable media drive connects to cell 0 through the core I/O card for cell 0, and the lower removable media drive connects to cell 1 through the core I/O card for cell 1.
Figure 1-10 Internal Disk Locations (slot 0 and slot 1 removable media bays at the top; disk drive slots 0 through 3 below)
Table 1-4 Removable Media Drive Path
Removable Media | Path
Slot 0 media | 0/0/0/2/1.X.0
Slot 1 media | 1/0/0/2/1.X.0
X equals 2 for a DVD drive; X equals 3 for a DDS-4 DAT drive.
Table 1-5 Hard Disk Drive Path
Hard Drive | Path
Slot 0 drive | 0/0/0/2/0.6.0
Slot 1 drive | 0/0/0/3/0.6.0
Slot 2 drive | 1/0/0/2/0.6.0
Slot 3 drive | 1/0/0/3/0.6.0
System Backplane
The system backplane board contains the following components:
• Two crossbar chips (XBC)
• Clock generation logic
• Preset generation logic
• Power regulators
• Two local bus adapter (LBA) chips that create internal PCI buses for communicating with the core I/O card
The backplane also contains connectors for attaching the cell boards, the PCI-X backplane, MP core I/O cards, SCSI cables, bulk power, chassis fans, the front panel display, intrusion switches, and external system bus adapter (SBA) link connectors.
Figure 1-11 System Backplane Block Diagram (cells 0 through 3 connect to two XBC crossbar chips; two LBAs link core I/O 0 and core I/O 1 to the PCI-X backplane; cell boards are perpendicular to the system backplane)
The two LBA PCI bus controllers on the system backplane create the PCI bus for the core I/O
cards. You must shut down the partition for the core I/O card before removing the card.
Having the SCSI connectors on the system backplane allows replacement of the core I/O card
without having to remove cables in the process.
System Backplane to Cell Board Connectivity
The system backplane provides four sets of connectors, one set for each cell board.
The system backplane routes the signals from the cell boards to the communication crossbars.
Cell boards 0 and 1 are directly connected to the I/O backplane found in the server. Cell boards
2 and 3 can be connected to a separate I/O expansion chassis connected to the system backplane.
System Backplane to Core I/O Card Connectivity
The core I/O cards connect at the rear of the system backplane through two connectors. SCSI and
LAN on a core I/O are accessed via a PCI-X 66 MHz bus. Two LBA bus controllers located on
the system backplane allow communication to the I/O devices. The LBAs are connected to the
SBA on the PCI-X backplane by single ropes.
The system backplane routes the signals to the various components in the system. The core I/O
signals include the SCSI bus for the system hard drives and the bus for the removable media
devices. Each core I/O card provides SCSI buses for the mass storage devices.
The management processor for the chassis resides on the core I/O card, so the system backplane
also provides interfaces required for management of the system. These interfaces and the
manageability circuitry run on standby power.
You can remove the core I/O cards from the system as long as you shut down the partition for
the core I/O card before removing the card. The hot-plug circuitry that enables this feature is
located on the system backplane near the core I/O sockets.
System Backplane to PCI-X Backplane Connectivity
The PCI-X backplane uses two connectors for the SBA link bus and two connectors for the
high-speed data signals and the manageability signals.
SBA link bus signals are routed through the system backplane to the cell controller on each
corresponding cell board.
The high-speed data signals are routed from the SBA chips on the PCI-X backplane to the two
LBA PCI bus controllers on the system backplane.
Clocks and Reset
The system backplane contains reset and clock circuitry that propagates through the whole
system. The system backplane central clocks drive all major chip set clocks. The system central
clock circuitry features redundant, hot-swappable oscillators.
PCI/PCI-X I/O Subsystem
The cell board to PCI-X board path runs from the CC to the SBA, and from the SBA over ropes to the LBAs. The CC on cell board 0 and cell board 1 communicates with an SBA over the SBA link. The SBA link consists of both an inbound and an outbound link with a peak bandwidth of approximately 11.5 GB/s at 3.2 GT/s. The SBA converts the SBA link protocol into "ropes." A rope is defined as a high-speed, point-to-point data bus. The SBA can support up to 16 of these high-speed bidirectional rope links for a total aggregate bandwidth of approximately 11.5 GB/s.
There are LBA chips on the PCI-X backplane that act as a bus bridge, supporting either one or
two ropes for PCI-X 133 MHz slots and the equivalent bandwidth of four ropes for PCI-X 266
slots. Each LBA acts as a bus bridge, supporting one or two ropes and capable of driving 33 MHz
or 66 MHz for PCI cards. The LBAs can also drive at 66 MHz or 133 MHz for PCI-X mode 1 cards,
and at 266 MT/s for PCI-X mode 2 cards installed in mode 2 capable slots. When cell board 2 and
cell board 3 are present, the cell boards attach to their own associated SBA and LBA chips on the
PCI-X board in the Server Expansion Unit.
Figure 1-12 PCI-X Board to Cell Board Block Diagram
Table 1-6 and Table 1-7 list the PCI-X slot boot paths for the cell boards installed in the server.
Table 1-6 PCI-X Slot Boot Paths Cell 0
Cell | PCI Slot | Ropes | Path
0 | 1 | 8/9 | 0/0/8/1/0
0 | 2 | 10/11 | 0/0/10/1/0
0 | 3 | 12/13 | 0/0/12/1/0
0 | 4 | 14/15 | 0/0/14/1/0
0 | 5 | 6/7 | 0/0/6/1/0
0 | 6 | 4/5 | 0/0/4/1/0
0 | 7 | 2/3 | 0/0/2/1/0
0 | 8 | 1 | 0/0/1/1/0
Table 1-7 PCI-X Slot Boot Paths Cell 1
Cell | PCI Slot | Ropes | Path
1 | 1 | 8/9 | 1/0/8/1/0
1 | 2 | 10/11 | 1/0/10/1/0
1 | 3 | 12/13 | 1/0/12/1/0
1 | 4 | 14/15 | 1/0/14/1/0
1 | 5 | 6/7 | 1/0/6/1/0
1 | 6 | 4/5 | 1/0/4/1/0
1 | 7 | 2/3 | 1/0/2/1/0
1 | 8 | 1 | 1/0/1/1/0
The server supports two internal SBAs. Each SBA provides the control and interfaces for eight PCI-X slots. The interface is through the rope bus (16 ropes per SBA). For each SBA, the ropes are divided in the following manner:
• A single rope is routed to support the core I/O boards through LBAs located on the system backplane.
• A single rope is routed to an LBA on the PCI backplane to support a slot for PCI and PCI-X cards (slot 8).
• Six ropes are bundled into double ropes to three LBAs. They support slots 1, 2, and 7 for PCI and PCI-X mode 1 cards.
• Eight fat ropes are bundled into quad ropes to four LBAs. They support slots 3, 4, 5, and 6 for PCI and PCI-X mode 2 cards.
NOTE: PCI-X slots 1-7 are dual rope slots while slot 8 is a single rope slot. A rope is defined as
a high-speed, point-to-point data bus.
Each of the 16 slots is capable of 33 MHz/66 MHz PCI or 66 MHz/133 MHz PCI-X. Four slots in
PCI-X support 266 MHz. All 16 PCI slots are keyed for 3.3 V connectors (accepting both Universal
and 3.3 V cards). The PCI-X backplane does not provide any 5 V slots for the I/O cards. Table 1-8
summarizes the PCI-X slot types.
The PCI-X backplane is physically one board, yet it behaves like two independent partitions.
SBA 0 and its associated LBAs and eight PCI-X slots form one I/O partition. SBA 1 and its
associated LBAs and eight PCI-X slots form the other I/O partition. One I/O partition can be reset
separately from the other I/O partition but cannot be powered down independently.
IMPORTANT: Always refer to the PCI card's manufacturer for the specific PCI card performance
specifications. PCI, PCI-X mode 1, and PCI-X mode 2 cards are supported at different clock
speeds. Select the appropriate PCI-X I/O slot for best performance.
Table 1-8 PCI-X Slot Types
I/O Partition | Slot | Maximum MHz (see note 1) | Maximum Peak Bandwidth | Ropes | Supported Cards | PCI Mode Supported
0 | 8 | 66 (see note 2) | 533 MB/s | 001 | 3.3 V | PCI or PCI-X Mode 1
0 | 7 | 133 | 1.06 GB/s | 002/003 | 3.3 V | PCI or PCI-X Mode 1
0 | 6 | 266 | 2.13 GB/s | 004/005 | 3.3 V or 1.5 V | PCI-X Mode 2
0 | 5 | 266 | 2.13 GB/s | 006/007 | 3.3 V or 1.5 V | PCI-X Mode 2
0 | 4 | 266 | 2.13 GB/s | 014/015 | 3.3 V or 1.5 V | PCI-X Mode 2
0 | 3 | 266 | 2.13 GB/s | 012/013 | 3.3 V or 1.5 V | PCI-X Mode 2
0 | 2 | 133 | 1.06 GB/s | 010/011 | 3.3 V | PCI or PCI-X Mode 1
0 | 1 | 133 | 1.06 GB/s | 008/009 | 3.3 V | PCI or PCI-X Mode 1
1 | 8 | 66 (see note 2) | 533 MB/s | 001 | 3.3 V | PCI or PCI-X Mode 1
1 | 7 | 133 | 1.06 GB/s | 002/003 | 3.3 V | PCI or PCI-X Mode 1
1 | 6 | 266 | 2.13 GB/s | 004/005 | 3.3 V or 1.5 V | PCI-X Mode 2
1 | 5 | 266 | 2.13 GB/s | 006/007 | 3.3 V or 1.5 V | PCI-X Mode 2
1 | 4 | 266 | 2.13 GB/s | 014/015 | 3.3 V or 1.5 V | PCI-X Mode 2
1 | 3 | 266 | 2.13 GB/s | 012/013 | 3.3 V or 1.5 V | PCI-X Mode 2
1 | 2 | 133 | 1.06 GB/s | 010/011 | 3.3 V | PCI or PCI-X Mode 1
1 | 1 | 133 | 1.06 GB/s | 008/009 | 3.3 V | PCI or PCI-X Mode 1
1. Each slot auto-selects the proper speed for the card installed, up to the maximum speed for the slot. Placing high-speed cards into slow-speed slots causes the card to be driven at the slow speed.
2. Slot is driven by a single rope and has a maximum speed of 66 MHz.
PCIe Backplane
The 16-slot (8 PCI/PCI-X, 8 PCI-Express) mixed PCI-X/PCI-Express ("PCIe") I/O backplane was introduced for the Dual-Core Intel® Itanium® processor 9100 series release and is heavily leveraged from the PCI-X backplane design. Only the differences are described here. See "PCI/PCI-X I/O Subsystem" for content common to the two boards.
The PCI-Express I/O backplane comprises two logically independent I/O circuits (partitions) on one physical board.
• The I/O chip in cell location zero (0) and its associated four PCI-X ASICs, four PCIe ASICs, and their respective PCI/PCI-X/PCIe slots form PCI-Express I/O partition 0 plus core I/O.
• The I/O chip in cell location one (1) and its associated four PCI-X ASICs, four PCIe ASICs, and their respective PCI/PCI-X/PCIe slots form PCI-Express I/O partition 1 plus core I/O.
Each PCI/PCI-X slot has a host-to-PCI bridge associated with it, and each PCIe slot has a host-to-PCIe bridge associated with it. A dual-slot hot-swap controller chip and related logic is also associated with each pair of PCI or PCIe slots. The I/O chip on either cell location 0 or 1 is a primary I/O system interface. Upstream, the I/O chips communicate directly with the cell controller ASIC on the host cell board via a high-bandwidth logical connection known as the HSS link. When installed in the SEU chassis within a fully configured system, the ASIC on cell location 0 connects to the cell controller chip on cell board 2, and the ASIC on cell location 1 connects to the cell controller chip on cell board 3 through external link cables.
Downstream, the ASIC spawns 16 logical "ropes" that communicate with the core I/O bridge on the system backplane, PCI interface chips, and PCIe interface chips. Each PCI chip produces a single 64-bit PCI-X bus supporting a single PCI or PCI-X add-in card. Each PCIe chip produces a single x8 PCI-Express bus supporting a single PCIe add-in card.
The ropes in each I/O partition are distributed as follows:
• One PCI-X ASIC is connected to each I/O chip with a single rope capable of peak data rates of 533 MB/s (PCI-X 66).
• Three PCI-X ASICs are connected to each I/O chip with dual ropes capable of peak data rates of 1.06 GB/s (PCI-X 133).
• Four PCIe ASICs are connected to each I/O chip with dual fat ropes capable of peak data rates of 2.12 GB/s (PCIe x8).
In addition, each I/O chip provides an external single-rope connection for the core I/O.
Each PCI-Express slot on the PCIe I/O board is controlled by its own ASIC and is also independently supported by its own half of the dual hot-swap controller. All PCIe slots are designed to be compliant with PCIe Rev. 1.0. The PCI-Express I/O backplane provides slot support for VAUX3.3, SMB*, and JTAG.
PCIe Slot Boot Paths
PCIe slot boot paths are directly leveraged from the PCI-X backplane. See Table 1-6 and Table 1-7.
NOTE: The differences between the PCI-X backplane and the PCIe backplane are as follows:
• Twelve ropes are bundled in two-rope pairs to 6 LBAs to support 6 slots for PCI and PCI-X cards instead of 14. These ropes are capable of 133 MHz.
• Sixteen ropes are bundled into dual fat ropes to 8 LBAs to support 8 additional slots for PCIe cards. These ropes are capable of 266 MHz.
Table 1-9 PCIe Slot Types
I/O Partition | Slot | Maximum MHz (see note 1) | Maximum Peak Bandwidth | Ropes | Supported Cards | PCI Mode Supported
0 | 8 | 66 (see note 2) | 533 MB/s | 001 | 3.3 V | PCI or PCI-X Mode 1
0 | 7 | 133 | 1.06 GB/s | 002/003 | 3.3 V | PCI or PCI-X Mode 1
0 | 6 | 266 | 2.13 GB/s | 004/005 | 3.3 V | PCIe
0 | 5 | 266 | 2.13 GB/s | 006/007 | 3.3 V | PCIe
0 | 4 | 266 | 2.13 GB/s | 014/015 | 3.3 V | PCIe
0 | 3 | 266 | 2.13 GB/s | 012/013 | 3.3 V | PCIe
0 | 2 | 133 | 1.06 GB/s | 010/011 | 3.3 V | PCI or PCI-X Mode 1
0 | 1 | 133 | 1.06 GB/s | 008/009 | 3.3 V | PCI or PCI-X Mode 1
1 | 8 | 66 (see note 2) | 533 MB/s | 001 | 3.3 V | PCI or PCI-X Mode 1
1 | 7 | 133 | 1.06 GB/s | 002/003 | 3.3 V | PCI or PCI-X Mode 1
1 | 6 | 266 | 2.13 GB/s | 004/005 | 3.3 V | PCIe
1 | 5 | 266 | 2.13 GB/s | 006/007 | 3.3 V | PCIe
1 | 4 | 266 | 2.13 GB/s | 014/015 | 3.3 V | PCIe
1 | 3 | 266 | 2.13 GB/s | 012/013 | 3.3 V | PCIe
1 | 2 | 133 | 1.06 GB/s | 010/011 | 3.3 V | PCI or PCI-X Mode 1
1 | 1 | 133 | 1.06 GB/s | 008/009 | 3.3 V | PCI or PCI-X Mode 1
1. Each slot auto-selects the proper speed for the card installed, up to the maximum speed for the slot. Placing high-speed cards into slow-speed slots causes the card to be driven at the slow speed.
2. Slot is driven by a single rope and has a maximum speed of 66 MHz.
Core I/O Card
Up to two core I/O cards can be plugged into the server. Two core I/O cards enable two I/O
partitions to exist in the server. The server can have up to two partitions. When a Server Expansion
Unit with two core I/O cards is attached to the server, two additional partitions can be configured.
A core I/O card can be replaced with standby power applied. The system power to the core I/O
is handled in the hardware the same way a hot-plug PCI/PCI-X card is handled. Standby power
to core I/O is handled by power manager devices to limit inrush current during insertion.
Core I/O Boot Paths
The server's internal I/O devices are located on the core I/O. Table 1-10 outlines the paths assigned to the hard disk and removable media disk bays located on the front of the server chassis. Core I/O card 0 refers to the core I/O located in the upper slot at the rear of the system. Core I/O card 1 refers to the core I/O located in the lower slot at the rear of the system. Core I/O cards 2 and 3 are located in the SEU (if available).
Table 1-10 Core I/O Boot Paths
Core I/O Card | Device | Path | Description
0 | 1Gb LAN | 0/0/0/1/0 | Core I/O 0 SYS LAN connector.
0 | SCSI Drive | 0/0/0/2/0.6.0 | Hard drive located in the upper left disk bay.
0 | SCSI Drive | 0/0/0/2/1.X.0 | Removable media DVD (X=2) or DDS-4 (X=3) tape drive located in the upper disk bay.
0 | SCSI Drive | 0/0/0/3/0.6.0 | Hard drive located in the upper right disk bay.
0 | SCSI Drive | 0/0/0/3/1 | SCSI drive connected to the external SCSI Ultra3 connector on the core I/O card.
1 | 1Gb LAN | 1/0/0/1/0 | Core I/O 1 SYS LAN connector.
1 | SCSI Drive | 1/0/0/2/0.6.0 | Hard drive located in the lower left disk bay.
1 | SCSI Drive | 1/0/0/2/1.X.0 | Removable media DVD (X=2) or DDS-4 (X=3) tape drive located in the lower disk bay.
1 | SCSI Drive | 1/0/0/3/0.6.0 | Hard drive located in the lower right disk bay.
1 | SCSI Drive | 1/0/0/3/1 | SCSI drive connected to the external SCSI Ultra3 connector on the core I/O card.
Mass Storage (Disk) Backplane
Internal mass storage connections to disks are routed on the mass storage backplane, which has connectors and termination logic. All hard disks are hot-plug, but removable media disks are not hot-plug. The server accommodates two internal removable media devices. Power connectors for removable media devices are on the mass storage backplane. For more information, refer to Figure 1-13.
Figure 1-13 Mass Storage Block Diagram (SCSI buses SCSI_1-1, SCSI_1-2, SCSI_2-1, and SCSI_2-2 with termination; SCA connectors for the hard drives; DVD power connectors; 12 V and 5 V power managers; and I2C FRU and monitor expanders)
2 System Specifications
This chapter describes the basic system configuration, physical specifications, and requirements for the server.
Dimensions and Weights
This section provides dimensions and weights of the server and server components. Table 2-1
gives the dimensions and weights for a fully configured server.
Table 2-1 Server Dimensions and Weights
Dimension | Standalone | Packaged
Height - Inches (centimeters) | 29.55 (75.00) | 86.50 (219.70)
Width - Inches (centimeters) | 17.50 (44.50) | 40.00 (101.60)
Depth - Inches (centimeters) | 30.00 (76.20) | 48.00 (122.00)
Weight - Pounds (kilograms) | 368.00 (166.92) (see note 1) | 813.00 (368.77) (see note 2)
1. This weight represents a fully configured server before it is installed in a rack.
2. The packaged weight represents a server installed in a 2-m rack. The packaged weight includes a fully configured server in a 2-m rack with a rear door, rail slide kit, line cord anchor kit, interlock assembly, cable management arm, 120-lb ballast kit, and a 60-A PDU. The shipping box, pallet, and container, not included in the packaged weight in Table 2-1, add approximately 150 lb to the total system weight when shipped. The size and number of miscellaneous pallets will be determined by the equipment ordered by the customer.
Table 2-2 provides component weights for calculating the weight of a server that is not fully configured. To determine the overall weight, follow the example in Table 2-3, then use Table 2-4 as a worksheet for your own configuration; a small calculation sketch follows Table 2-3.
Table 2-2 Server Component Weights
Quantity | Description | Weight lb (kg)
1 | Chassis | 131.00 (59.42)
1 | System backplane | 20.0 (9.07)
1 | PCI-X I/O backplane assembly | 20.40 (9.25)
2 | PCI-X power supply | 5.00 (2.27) each
6 | Bulk power supply | 12.00 (5.44) each
1 | Mass storage backplane | 1.00 (0.45)
1 - 4 | Cell board | 27.80 (12.61) each
1 - 4 | Hard disk drive | 1.60 (0.73) each
1 - 2 | Removable media disk drive | 2.20 (1.00) each
Table 2-3 Example Weight Summary
Component | Quantity | Multiply By | Weight
Cell board | 4 | 27.8 lb (12.16 kg) | 107.20 lb (48.64 kg)
PCI card (varies; sample value used) | 4 | 0.34 lb (0.153 kg) | 1.36 lb (0.61 kg)
Power supply (BPS) | 6 | 12 lb (5.44 kg) | 72 lb (32.66 kg)
DVD drive | 2 | 2.2 lb (1.0 kg) | 4.4 lb (2.0 kg)
Hard disk drive | 4 | 1.6 lb (0.73 kg) | 6.40 lb (2.90 kg)
Chassis with skins and front bezel cover | 1 | 131 lb (59.42 kg) | 131 lb (59.42 kg)
Total weight | | | 322.36 lb (146.22 kg)
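As noted above, the worksheet arithmetic is a sum of quantity times unit weight. A minimal sketch of the same calculation (illustrative Python; unit weights follow the example values in Table 2-3, and the dictionary keys are assumptions for illustration):

    # Component unit weights in kg (Table 2-3 example values; PCI card is the sample value).
    UNIT_KG = {
        "cell board": 12.16,
        "pci card": 0.153,
        "bulk power supply": 5.44,
        "dvd drive": 1.0,
        "hard disk drive": 0.73,
        "chassis with skins and bezel": 59.42,
    }

    def total_weight_kg(config: dict) -> float:
        """Sum quantity x unit weight over all components in the configuration."""
        return sum(UNIT_KG[name] * qty for name, qty in config.items())

    example = {"cell board": 4, "pci card": 4, "bulk power supply": 6,
               "dvd drive": 2, "hard disk drive": 4, "chassis with skins and bezel": 1}
    print(round(total_weight_kg(example), 2))  # ~146.2 kg, matching Table 2-3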
Table 2-4 Weight Summary
Component | Quantity | Multiply By | Weight
Cell board | _____ | 27.8 lb (12.16 kg) | _____ lb (_____ kg)
PCI card | _____ | varies lb (varies kg) | _____ lb (_____ kg)
Power supply (BPS) | _____ | 12 lb (5.44 kg) | _____ lb (_____ kg)
DVD drive | _____ | 2.2 lb (1.0 kg) | _____ lb (_____ kg)
Hard disk drive | _____ | 1.6 lb (0.73 kg) | _____ lb (_____ kg)
Chassis with skins and front bezel cover | _____ | 131 lb (59.42 kg) | _____ lb (_____ kg)
Total weight | | | _____ lb (_____ kg)
Electrical Specifications
This section provides electrical specifications for the HP Integrity rx8640 and the HP 9000 rp8440 servers. These servers share common specifications; the exceptions are the system power specifications and the power dissipation and cooling requirements, which are listed separately later in this chapter.
Grounding
The site building shall provide a safety ground and protective earth for each AC service entrance
to all cabinets.
Install a protective earthing (PE) conductor that is identical in size, insulation material, and thickness to the branch-circuit supply conductors. The PE conductor must be green with yellow stripes. The earthing conductor is to be connected from the unit to the building installation earth or, if supplied by a separately derived system, at the supply transformer or motor-generator set grounding point.
Circuit Breaker
The Marked Electrical rating for the server is 15 amps per line cord. The recommended circuit breaker
size is 20 amps for North America. For countries outside North America, consult your local
electrical authority having jurisdiction for the recommended circuit breaker size.
The server contains four C20 power receptacles located at the bottom rear bulkhead. A minimum
of two power cords (A0–A1) must be used to maintain normal operation of the server. A second
set of two cords (B0–B1) can be added to improve system availability by protecting, for example,
against power source failures or accidentally tripped circuit breakers. The server can receive AC
input from two different AC power sources.
System AC Power Specifications
Power Cords
Table 2-5 lists the various power cables available for use with the server. Each power cord is 15 feet (4.5 m) in length with an IEC 60320-1 C19 female connector attached to one end.
Table 2-5 Power Cords
Part Number | Description | Where Used
8120-6895 | Stripped end, 240 volt | International - Other
8120-6897 | Male IEC309, 240 volt | International
8121-0070 | Male GB-1002, 240 volt | China
8120-6903 | Male NEMA L6-20, 240 volt | North America/Japan
System Power Specifications
Table 2-6 lists the AC power requirements for the servers. This table provides information to
help determine the amount of AC power needed for your computer room.
Table 2-6 HP Integrity rx8640 and HP 9000 rp8440 AC Power Requirements
Requirements | Value | Comments
Nominal input voltage | 200-240 VAC |
Minimum operating voltage | 180 VAC |
Maximum operating voltage | 269 VAC |
Frequency range (minimum - maximum) | 50/60 Hz |
Number of phases | 1 | Per line cord
Rated line current | 15 A | Per line cord
Maximum inrush current | 54 A peak for 20 ms |
Dropout carry-through time at minimum line voltage | 20 ms |
Circuit breaker rating | 20 A | Per line cord
Power factor correction | >0.98 | At all loads of 50%-100% of supply rating
Power factor correction | >0.95 | At all loads of 25%-50% of supply rating
Ground leakage current | <3.0 mA | Per line cord
Table 2-7 HP Integrity rx8640 System Power Requirements
Power Required (50-60 Hz) | Watts | VA | Comments
Maximum Theoretical Power | 5862 | 5982 | See Note 1
Marked Electrical Power | - | 5400 | 30 A @ 180 VAC; see Note 2
User Expected Maximum Power | 3883 | 3962 | See Note 3
Note 1: Maximum Theoretical Power, or "Maximum Configuration" (input power at the AC input expressed in watts and volt-amperes to take into account power factor correction): the calculated sum of the maximum worst-case power consumption for every subsystem in the server. This number will never be exceeded by a functioning server for any combination of hardware and software under any conditions.
Note 2: Marked Electrical Power (input power at the AC input expressed in volt-amperes): the Marked Electrical Power is the rating given on the chassis label and represents the input power required for facility AC power planning and wiring requirements. This number represents the expected maximum power consumption for the server based on the power rating of the bulk power supplies. This number can safely be used to size AC circuits and breakers for the system under all conditions.
Note 3: Typical Maximum Power, or User Expected Maximum Power (input power at the AC input expressed in watts and volt-amperes): the measured maximum worst-case power consumption. This number represents the largest power consumption that HP engineers were able to produce for the server with any combination of hardware under laboratory conditions, using aggressive software applications designed specifically to work the system at maximum load. This number can safely be used to compute thermal loads and power consumption for the system under all conditions.
Table 2-8 HP 9000 rp8440 System Power Requirements
Power Required (50-60 Hz) | Watts | VA | Comments
Maximum Theoretical Power | 5720 | 5837 | See Note 1
Marked Electrical Power | - | 5400 | 30 A @ 180 VAC; see Note 2
User Expected Maximum Power | 3789 | 3866 | See Note 3
Note 1: Maximum Theoretical Power, or "Maximum Configuration" (input power at the AC input expressed in watts and volt-amperes to take into account power factor correction): the calculated sum of the maximum worst-case power consumption for every subsystem in the server. This number will never be exceeded by a functioning server for any combination of hardware and software under any conditions.
Note 2: Marked Electrical Power (input power at the AC input expressed in volt-amperes): the Marked Electrical Power is the rating given on the chassis label and represents the input power required for facility AC power planning and wiring requirements. This number represents the expected maximum power consumption for the server based on the power rating of the bulk power supplies. This number can safely be used to size AC circuits and breakers for the system under all conditions.
Note 3: User Expected Maximum Power (input power at the AC input expressed in watts and volt-amperes): the measured maximum worst-case power consumption. This number represents the largest power consumption that HP engineers were able to produce for the server with any combination of hardware under laboratory conditions, using aggressive software applications designed specifically to work the system at maximum load. This number can safely be used to compute thermal loads and power consumption for the system under all conditions.
Environmental Specifications
This section provides the environmental, power dissipation, noise emission, and air flow
specifications for the server.
Temperature and Humidity
The cabinet is actively cooled using forced convection in a Class C1-modified environment. The
recommended humidity level for Class C1 is 40 to 55% relative humidity (RH).
Operating Environment
The system is designed to run continuously and meet reliability goals in an ambient temperature of 5° C to 32° C at sea level. The maximum allowable temperature is derated 1° C per 1,000 feet of elevation above 3,000 feet above sea level, down to 25° C at 10,000 feet. For optimum reliability and performance, the recommended operating range is 20° C to 25° C. Table 2-9 shows an example ASHRAE thermal report; a short sketch of the derating rule follows.
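A minimal sketch of this derating rule (illustrative Python; the function name is an assumption, not from the manual):

    def max_ambient_temp_c(elevation_ft: float) -> float:
        """Maximum allowable ambient temperature: 32 C at up to 3,000 ft,
        derated 1 C per 1,000 ft above that, down to 25 C at 10,000 ft."""
        if elevation_ft <= 3000:
            return 32.0
        return max(32.0 - (elevation_ft - 3000) / 1000.0, 25.0)

    print(max_ambient_temp_c(0))      # 32.0
    print(max_ambient_temp_c(10000))  # 25.0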
Table 2-9 Example ASHRAE Thermal Report (measured at 208 volts)
Condition | Typical Heat Release (Watts) | Airflow, Nominal cfm (m3/hr) | Airflow, Maximum at 32° C cfm (m3/hr) | Weight lb (kg) | Overall System Dimensions, Inches (mm)
Minimum configuration | 971 | 960 (1631) | 960 (1631) | 178 (81) | h=29.55, w=17.50, d=30.00 (750.57 x 444.50 x 762.00)
Full configuration | 3883 | 960 (1631) | 960 (1631) | 370 (168) | h=29.55, w=17.50, d=30.00 (750.57 x 444.50 x 762.00)
Typical configuration | 2380 | 960 (1631) | 960 (1631) | 286 (130) | h=29.55, w=17.50, d=30.00 (750.57 x 444.50 x 762.00)
Minimum configuration: 1 cell board, 2 CPUs, 2 GB, 1 core I/O card
Full configuration: 4 cell boards, 16 CPUs, 128 GB, 2 core I/O cards, 16 I/O cards, 4 hard disks
Typical configuration: 2 cell boards, 8 CPUs, 64 GB, 1 core I/O card, 8 I/O cards, 2 hard disks
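In Table 2-9, the 960 cfm airflow corresponds to 1631 m3/hr: cubic feet per minute convert to cubic meters per hour by a factor of approximately 1.699. A quick illustrative check in Python:

    CFM_TO_M3_PER_HR = 1.699  # 1 cubic foot per minute is about 1.699 cubic meters per hour

    print(round(960 * CFM_TO_M3_PER_HR))  # 1631, matching Table 2-9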
Environmental Temperature Sensor
To ensure that the system is operating within the published limits, the ambient operating
temperature is measured using a sensor placed on the server backplane. Data from the sensor is
used to control the fan speed and to initiate system overtemp shutdown.
Non-Operating Environment
The system is designed to withstand ambient temperatures between -40° C and 70° C under non-operating conditions.
Cooling
Internal Chassis Cooling
The cabinet incorporates front-to-back airflow across the system backplane. Nine 120-mm fans
mounted externally on the front chassis wall behind the cosmetic front bezel push air into the
unit. Twelve 120-mm fans housed in cosmetic plastic fan carriers and mounted externally to the
rear chassis wall pull air through the unit.
Each fan is controlled by a smart fan control board embedded in the fan module plastic housing.
The smart fan control board receives fan control input from the system fan controller on the
system backplane and returns fan status information to the system fan controller. The smart fan
control board also controls the power and the pulse width modulated control signal to the fan
and monitors the speed indicator back from the fan. The fan status LED is driven by the smart
fan control board.
Bulk Power Supply Cooling
Cooling for the bulk power supplies (BPS) is provided by two 60-mm fans contained within each
BPS. Air flows into the front of the BPS and is exhausted out of the top of the power supply
through upward-facing vents near the rear of the supply. The air is then ducted out of the rear of
the chassis.
PCI/Mass Storage Section Cooling
Six 92-mm fans located between the mass storage devices and the PCI card cage provide airflow
through these devices. The PCI fans are powered from housekeeping power and run at full
speed at all times. The air is pulled through the mass storage devices and pushed through the
PCI card cage. Separation is provided between the PCI bulkheads to allow adequate exhaust
ventilation and to help reduce the localized airflow dead spots that typically occur at the faceplate
tail of each PCI card.
Standby Cooling
Several components within the chassis consume significant amounts of power while the system
is in standby mode. The system fans will run at a portion of full speed during standby to remove
the resulting heat from the cabinet. The fans within the power supply will operate at full speed
during standby.
Typical HP Integrity rx8640 Server Power Dissipation and Cooling
Table 2-10 provides calculations for the rx8640 configurations as described in the table.
Table 2-10 Typical HP Integrity rx8640 Server Configurations
Cell Boards (Qty) | Memory per Cell Board (GB) | PCI Cards, assumes 10 W each (Qty) | DVDs (Qty) | Hard Disk Drives (Qty) | Core I/O (Qty) | Bulk Power Supplies (Qty) | Typical Power (Watts) | Typical Cooling (BTU/hr)
4 | 32 | 16 | 2 | 4 | 2 | 6 | 3883 | 13257
4 | 16 | 16 | 2 | 4 | 2 | 6 | 3627 | 12383
4 | 8 | 16 | 2 | 4 | 2 | 6 | 3419 | 11672
2 | 32 | 8 | 0 | 2 | 2 | 4 | 2749 | 9385
2 | 16 | 8 | 0 | 2 | 2 | 4 | 2461 | 8402
2 | 8 | 8 | 0 | 2 | 2 | 4 | 2397 | 8183
1 | 8 | 8 | 0 | 1 | 1 | 3 | 1893 | 6463
• Watts x 0.860 = kcal/hour
• Watts x 3.414 = BTU/hour
• BTU/hour divided by 12,000 = tons of refrigeration required
A sketch of these conversions follows the note below.
NOTE: When determining power requirements, you must consider any peripheral equipment
that will be installed during initial installation or as a later update. Refer to the applicable
documentation for such devices to determine the power and air-conditioning that is required to
support these devices.
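As referenced above, here is a minimal sketch of the conversion factors applied to a typical-power figure from Table 2-10 (illustrative Python; the function name is an assumption, not from the manual):

    def cooling_figures(watts: float) -> dict:
        """Convert an electrical load to heat and refrigeration figures
        using the conversion factors listed above."""
        btu_per_hour = watts * 3.414
        return {
            "kcal_per_hour": watts * 0.860,
            "btu_per_hour": btu_per_hour,
            "tons_of_refrigeration": btu_per_hour / 12000,
        }

    print(cooling_figures(3883))
    # ~3339 kcal/hour, ~13257 BTU/hour, ~1.1 tons of refrigeration for a 3883 W load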
Typical HP 9000 rp8440 Server Power Dissipation and Cooling
Table 2-11 provides calculations for the rp8440 configurations as described in the table.
Table 2-11 Typical HP 9000 rp8440 Server Configurations
Cell Boards (Qty) | Memory per Cell Board (GB) | PCI Cards, assumes 10 W each (Qty) | DVDs (Qty) | Hard Disk Drives (Qty) | Core I/O (Qty) | Bulk Power Supplies (Qty) | Typical Power (Watts) | Typical Cooling (BTU/hr)
4 | 32 | 16 | 2 | 4 | 2 | 6 | 3789 | 12936
4 | 16 | 16 | 2 | 4 | 2 | 6 | 3533 | 12062
4 | 8 | 16 | 2 | 4 | 2 | 6 | 3325 | 11352
2 | 32 | 8 | 0 | 2 | 2 | 4 | 2702 | 9225
2 | 16 | 8 | 0 | 2 | 2 | 4 | 2414 | 8241
2 | 8 | 8 | 0 | 2 | 2 | 4 | 2350 | 8023
1 | 8 | 8 | 0 | 1 | 1 | 3 | 1893 | 6463
• Watts x 0.860 = kcal/hour
• Watts x 3.414 = BTU/hour
• BTU/hour divided by 12,000 = tons of refrigeration required
NOTE: When determining power requirements, you must consider any peripheral equipment
that will be installed during initial installation or as a later update. Refer to the applicable
documentation for such devices to determine the power and air-conditioning that is required to
support these devices.
Acoustic Noise Specification
The acoustic noise specification for the servers is 55.6 dB (sound pressure level at bystander position). It is appropriate for dedicated computer room environments, not office environments. The sound power level (LwA) is 7.4 bels. Care should be taken to understand the acoustic noise specifications relative to operator positions within the computer room, or when adding servers to computer rooms with existing noise sources.
Air Flow
The recommended server cabinet air intake temperature is between 20° C and 25° C (68° F and
77° F) at 960 CFM.
Figure 2-1 illustrates the location of the inlet and outlet air ducts on a single cabinet. Air is drawn
into the front of the server and forced out the rear.
Figure 2-1 Airflow Diagram
A Templates
This appendix contains blank floor plan grids and equipment templates. Combine the necessary
number of floor plan grid sheets to create a scaled version of the computer room floor plan.
Figure A-1 illustrates the overall dimensions required for the servers.
Figure A-1 Server Space Requirements
Equipment Footprint Templates
Equipment footprint templates are drawn to the same scale as the floor plan grid (1/4 inch = 1
foot). These templates show basic equipment dimensions and space requirements for servicing.
The service areas shown on the template drawings are lightly shaded.
Use the equipment templates with the floor plan grid to define the location of the equipment
that will be installed in your computer room.
NOTE: Photocopying typically changes the scale of drawings copied. If you copy any templates,
then you must also copy all templates and floor plan grids.
Computer Room Layout Plan
Use the following procedure to create a computer room layout plan:
1. Make copies of the floor plan grid in this appendix.
2. Cut and join them together (as necessary) to create a scale model floor plan of your computer room.
3. Select the equipment templates for the equipment to be installed.
4. Cut out each template selected in step 3; then place it on the floor plan grid created in step 2.
5. Position pieces until you obtain the desired layout, then fasten the pieces to the grid. Mark locations of computer room doors, air-conditioning floor vents, utility outlets, and so on.
NOTE: Attach a reduced copy of the completed floor plan to the site survey. HP installation specialists use this floor plan during equipment installation.
Figure A-2 Server Cabinet Template