
IBM System p5 520 and 520Q  
Technical Overview and Introduction  
Finer system granularity using Micro-Partitioning technology to help lower TCO
Support for versions of AIX 5L and Linux operating systems
From Web servers to integrated cluster solutions
Giuliano Anselmi  
Charlie Cler  
Carlo Costantini  
Bernard Filhol  
SahngShin Kim  
Gregor Linzmeier  
Ondrej Plachy  
Redpaper  
International Technical Support Organization  
IBM System p5 520 and 520Q  
Technical Overview and Introduction  
September 2006  
Note: Before using this information and the product it supports, read the information in “Notices” on page vii.
Second Edition (September 2006)  
This edition applies to the IBM System p5 520 and 520Q (product number 9131-52A), Linux, and IBM AIX 5L Version 5.3, product number 5765-G03.
© Copyright International Business Machines Corporation 2006. All rights reserved.  
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule  
Contract with IBM Corp.  
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Notices  
This information was developed for products and services offered in the U.S.A.  
IBM may not offer the products, services, or features discussed in this document in other countries. Consult  
your local IBM representative for information on the products and services currently available in your area. Any  
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,  
program, or service may be used. Any functionally equivalent product, program, or service that does not  
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to  
evaluate and verify the operation of any non-IBM product, program, or service.  
IBM may have patents or pending patent applications covering subject matter described in this document. The  
furnishing of this document does not give you any license to these patents. You can send license inquiries, in  
writing, to:  
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions are  
inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS  
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,  
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,  
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of  
express or implied warranties in certain transactions; therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made  
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make  
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time  
without notice.  
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any  
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the  
materials for this IBM product and use of those Web sites is at your own risk.  
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring  
any obligation to you.  
Any performance data contained herein was determined in a controlled environment. Therefore, the results  
obtained in other operating environments may vary significantly. Some measurements may have been made  
on development-level systems and there is no guarantee that these measurements will be the same on  
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their  
specific environment.  
Information concerning non-IBM products was obtained from the suppliers of those products, their published  
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the  
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the  
capabilities of non-IBM products should be addressed to the suppliers of those products.  
This information contains examples of data and reports used in daily business operations. To illustrate them  
as completely as possible, the examples include the names of individuals, companies, brands, and products.  
All of these names are fictitious and any similarity to the names and addresses used by an actual business  
enterprise is entirely coincidental.  
COPYRIGHT LICENSE:  
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in  
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application  
programs conforming to the application programming interface for the operating platform for which the sample  
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,  
cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and  
distribute these sample programs in any form without payment to IBM for the purposes of developing, using,  
marketing, or distributing application programs conforming to IBM's application programming interfaces.  
Trademarks  
The following terms are trademarks of the International Business Machines Corporation in the United States,  
other countries, or both:  
eServer®
Redbooks (logo)  
pSeries®  
AIX 5L™  
HACMP™  
IBM®  
Micro-Partitioning™  
OpenPower™  
PowerPC®  
PTX®  
Redbooks™  
RS/6000®  
Service Director™  
System p™  
AIX®  
Chipkill™  
DS4000™  
DS6000™  
DS8000™  
FICON®  
POWER™  
System p5™  
System Storage™  
TotalStorage®  
Virtualization Engine™  
1350™  
POWER Hypervisor™  
POWER4™  
POWER5™  
POWER5+™  
The following terms are trademarks of other companies:  
Internet Explorer, Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United  
States, other countries, or both.  
UNIX is a registered trademark of The Open Group in the United States and other countries.  
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.  
Preface  
This IBM Redpaper is a comprehensive guide that covers the IBM® System p5™ 520 and  
520Q UNIX® servers. It introduces major hardware offerings and discusses their prominent  
functions.  
Professionals who want to acquire a better understanding of IBM System p™ products should  
read this document. The intended audience includes:  
Clients  
Marketing representatives  
Technical support professionals  
IBM Business Partners  
Independent software vendors  
This document expands the current set of IBM System p documentation and provides a  
desktop reference that offers a detailed technical description of the p5-520 and the p5-520Q  
system.  
This publication does not replace the latest IBM System p marketing materials and tools. It is  
intended as an additional source of information that you can use, together with existing  
sources, to enhance your knowledge of IBM server solutions.  
The team that wrote this Redpaper  
This Redpaper was produced by a team of specialists from around the world working at the  
International Technical Support Organization (ITSO), Austin Center.  
Giuliano Anselmi is a certified pSeries® Presales Technical Support Specialist who works in  
the Field Technical Sales Support group based in Rome, Italy. For seven years, he was an  
IBM eServer pSeries Systems Product Engineer, supporting the Web Server Sales
Organization in EMEA, IBM Sales, IBM Business Partners, Technical Support Organizations,  
and IBM Dublin eServer Manufacturing. Giuliano has worked for IBM for 14 years, devoting  
himself to RS/6000® and pSeries systems with his in-depth knowledge of the related  
hardware and solutions.  
Charlie Cler is a Certified IT Specialist for IBM and has over 21 years of experience with IBM.  
He currently works in the United States as a presales Systems Architect representing IBM  
Systems and Technology Group product offerings. He has been working with IBM System p  
servers for over 16 years.  
Carlo Costantini is a Certified IT Specialist for IBM and has over 28 years of experience with  
IBM and IBM Business Partners. He currently works in Italy Presales Field Technical Sales  
Support for IBM Sales Representatives and IBM Business Partners for all pSeries and IBM  
System p5 systems offerings. He has broad marketing experience. He is a certified specialist  
for pSeries and IBM System p servers.  
Bernard Filhol is a UNIX Server Customer Satisfaction Resolution Team Leader for NEE  
and SWE IOTs in Montpellier, France. He has more than 25 years of experience in  
mainframes and five years of experience in pSeries Customer Satisfaction. He holds a  
degree in Electronics from Montpellier University Institute of Technology. His areas of  
expertise include Mainframe Channel Subsystem, FICON®, and pSeries RAS. He has written  
extensively on FICON.  
SahngShin Kim is a sales specialist on the STG infra-solution sales team in Seoul, Korea. For three years, he was a sales specialist for IBM eServer pSeries, then for two years for grid computing, and for one year for infra-solutions. SahngShin has worked for IBM for six years,
devoting himself to RS/6000 and pSeries systems and STG server products and as an  
architect for these products.  
Gregor Linzmeier is an IBM Advisory IT Specialist for RS/6000 and pSeries workstation and  
entry servers as part of the Systems and Technology Group in Mainz, Germany, supporting  
IBM sales, IBM Business Partners, and clients with pre-sales consultation and  
implementation of client/server environments. He has worked for more than 15 years as an  
infrastructure specialist for RT, RS/6000, and AIX® in large CATIA client/server projects.  
Ondrej Plachy is an IT specialist in IBM Czech Republic responsible for project design,  
implementation, and support of large scale computer systems. He has 11 years of experience  
in the UNIX field. He holds the Ing. academic degree in Computer Science from Czech  
Technical University (CVUT), Prague. He has worked at Supercomputing Centre of Czech  
Technical University for four years and currently works for IBM (seven years) in the AIX 5L™  
support team.  
The project that produced this document was managed by:  
Scott Vetter  
IBM U.S.  
Thanks to the following people for their contributions to this project:  
Larry Amy, Baba Arimilli, Ron Arroyo, Joergen Berg, Terry Brennan, Erin Burke, Mark Dewalt,  
Bob Foster, Ron Gonzalez, Dan Henderson, David A. Hepkin, Tenley Jackson, Hal Jennings,  
Carolyn Jones, Brian J King, Bill Mihaltse, Thoi Nguyen, Ken Rozendal, Craig Shempert,  
Doug Szerdi, and Dave Willoughby  
IBM  
Become a published author  
Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with  
specific products or solutions, while getting hands-on experience with leading-edge  
technologies. You'll team with IBM technical professionals, Business Partners, or clients.  
Your efforts will help increase product acceptance and client satisfaction. As a bonus, you'll  
develop a network of contacts in IBM development labs, and increase your productivity and  
marketability.  
Find out more about the residency program, browse the residency index, and apply online at:  
Comments welcome  
Your comments are important to us!  
We want our papers to be as helpful as possible. Send us your comments about this  
Redpaper or other Redbooks™ in one of the following ways:  
Use the online Contact us review redbook form found at:  
Send your comments in an e-mail to:  
Mail your comments to:  
IBM Corporation, International Technical Support Organization  
Dept. HYTD Mail Station P099  
2455 South Road  
Poughkeepsie, NY 12601-5400  
Chapter 1. General description
The IBM System p5 520 and IBM System p5 520Q rack-mount and deskside servers  
(9131-52A) give you new tools for managing on demand business, greater application  
flexibility, and innovative technology in 1-core, 2-core, and 4-core configurations — all  
designed to help you capitalize on the on demand business revolution. To simplify naming,  
both products are referred to as p5-520 or p5-520Q.  
The p5-520 and p5-520Q have POWER5+™ processors, which provide performance and reliability enhancements over the POWER5™ architecture that they replace. Chief among the enhancements is 90 nm processor fabrication technology.
The p5-520 processor is packaged either as a single-core module (SCM) running at 1.65 GHz with no L3 cache, as a single-core module running at 2.1 GHz with 36 MB of L3 cache, or as a dual-core module (DCM) running at 1.65, 1.9, or 2.1 GHz with 36 MB of L3 cache. The p5-520Q offers the same features but comes with a 4-core POWER5+ quad-core module (QCM) running at 1.5 or 1.65 GHz with two 36 MB L3 caches.
When you purchase a p5-520 or p5-520Q Express Product Offering, which is available only on an initial order request, you might qualify for a processor activation at no extra charge. The number of processors, the total memory, the quantity and size of disks, and the presence of a media device are the only features that determine whether you are entitled to a processor activation at no additional charge. Contact your marketing representative regarding the Express Product Offering or volume offering features.
The p5-520 and p5-520Q servers have a base of 1 GB of DDR2 memory that can be expanded to 32 GB, designed for performance and the exploitation of 64-bit addressing as used in large database applications.
The p5-520 and p5-520Q include four front-accessible, hot-swap capable disk bays in a  
minimum configuration with an additional four hot-swap capable disk bays as an optional  
feature. The eight disk bays can accommodate up to 2.4 TB of disk storage using the 300 GB  
Ultra320 SCSI disk drives. Other features included in the p5-520 and p5-520Q are six hot-plug PCI-X slots with Enhanced Error Handling (EEH), an integrated service processor, integrated two-port 10/100/1000 Mbps Ethernet, two system ports, two USB ports, two Hardware Management Console (HMC) ports, an integrated dual-channel Ultra320 SCSI controller, hot-swappable power and cooling, and optional redundant power.
Three non-hot-swappable media bays are used to accommodate additional devices. Two  
media bays only accept slim-line media devices, such as DVD-ROM or DVD-RAM drives, and  
one half-height bay is used for a tape drive. The rack-mount model also has I/O extension  
capability using the RIO-2 bus that allows attachment of the 7311 Model D20 I/O drawers.  
For partitioning, we recommend an HMC. Dynamic LPAR is supported on the p5-520 and  
p5-520Q servers, allowing up to two logical partitions. In addition, the optional Advanced  
POWER™ Virtualization feature supports up to 40 micro-partitions using Micro-Partitioning™  
technology. The Integrated Virtualization Manager provides partition management in settings  
where an HMC is unavailable or not desired.  
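The 40 micro-partition maximum follows from the per-partition minimum processor entitlement. As a back-of-the-envelope illustration (a minimal sketch; the 0.1 processing-unit minimum per micro-partition is a POWER5 virtualization characteristic assumed here rather than stated in this section):

```python
# Maximum micro-partitions as a function of installed cores, assuming the
# POWER5 Micro-Partitioning minimum entitlement of 0.1 processing units
# per micro-partition. A 4-core p5-520Q yields 4 / 0.1 = 40.
MIN_ENTITLEMENT = 0.1  # processing units per micro-partition (assumed)

for cores in (1, 2, 4):
    print(cores, "cores ->", round(cores / MIN_ENTITLEMENT), "micro-partitions maximum")
```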
Additional reliability and availability features include redundant hot-swappable cooling fans  
and redundant power supplies. Along with these components, the p5-520 and p5-520Q are  
designed to provide an extensive set of reliability, availability, and serviceability (RAS)  
features that include a dual service processor, fault isolation, recovery from errors without  
stopping the system, avoidance of recurring failures, and predictive failure analysis.  
The p5-520 and p5-520Q are backed by a three-year limited warranty. Check with your IBM  
representative for particular warranty availability in your region.  
1.1 System specifications  
Table 1-1 lists the general system specifications of the p5-520 and p5-520Q systems.  
Table 1-1 IBM System p5 520 and IBM System p5 520Q specifications  
Description                     Range
Operating temperature           5 to 35 degrees Celsius (41 to 95 F)
Relative humidity               8% to 80%
Operating voltage               100 to 127 or 200 to 240 V ac (auto-ranging)
Operating frequency             47/63 Hz
Maximum power consumption       750 watts maximum
Maximum thermal output          2560 BTU/hour (maximum)
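The last two rows of Table 1-1 are consistent with each other. As a quick sanity check (a minimal sketch; the 3.412 BTU/hour-per-watt conversion factor is a standard physical constant, not taken from this document):

```python
# Convert the maximum power consumption to a thermal output figure.
# Dissipating 1 watt continuously produces about 3.412 BTU per hour.
WATTS_TO_BTU_PER_HOUR = 3.412

max_power_watts = 750  # maximum power consumption from Table 1-1
thermal_output = max_power_watts * WATTS_TO_BTU_PER_HOUR
print(round(thermal_output))  # 2559, matching the 2560 BTU/hour rating
```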
1.2 Physical package  
This section discusses the major physical attributes of the p5-520 and p5-520Q systems in  
rack-mounted and deskside versions that are selectable through a feature code.  
1.2.1 Deskside model  
The p5-520 and p5-520Q can be configured as deskside models. Table 1-2 lists the physical attributes, and Figure 1-1 on page 4 shows the system.
Table 1-2 Physical attributes of the deskside model

Dimension (a)                        Deskside (FC 7919)
Height                               533 mm (21.0 in.)
Width                                201 mm (7.9 in.)
Depth (without rear cover)           630.0 mm (23.0 in.)
Depth (with rear cover; FC 6587)     706.0 mm (27.8 in.)
Weight                               43 kg (95 lb.)
Shipping weight                      50 kg (110 lb.)

a. For a specific region, such as China, check specifications for specific dimensions.
Note: One Electronic Industries Association Unit (1U) is 44.45 mm (1.75 in.).
Figure 1-1 The deskside model (FC 7184) and the acoustic cover (FC 7185, right)
The p5-520 or p5-520Q, when configured as a deskside server, is ideal for environments that  
require local access to the machine, such as applications that require a native graphics  
display. To order a system as a deskside version, FC 7184 or FC 7185 is required. FC 7185 is  
designed for quiet operation in office environments. The system is designed to be set up by  
the client and, in most cases, does not require the use of any tools. The system includes full  
setup instructions.  
The GXT135P 2D graphics accelerator with analog and digital interfaces (FC 1980) is  
available and is supported for SMS, firmware menus, and other low-level functions, as well as  
when AIX 5L or Linux® starts the X11-based graphical user interface. You can use graphical  
AIX 5L system tools for configuration management if the adapter is connected to the primary  
console, such as the IBM 15-inch, 17-inch, 19-inch, or 20-inch TFT Color Monitor (FC 3641,  
FC 3645, FC 3644, and FC 3643).  
1.2.2 Rack-mount model  
The IBM System p5 520 or IBM System p5 520Q can be configured as a 4U rack-mount  
model with the selected feature code. Table 1-3 lists the physical attributes and Figure 1-2  
shows the system.  
Table 1-3 Physical attributes of the rack-mount model

Dimension (a)       Rack (FC 7918)
Height              178 mm (7.0 in.)
Width               437 mm (17.2 in.)
Depth               584 mm (23.0 in.)
Weight              43.0 kg (95 lb.)
Shipping weight     53.0 kg (117 lb.)

a. For a specific region, such as China, check specifications for specific dimensions.
Figure 1-2 IBM System p5 520 and IBM System p5 520Q rack-type model (FC 7160)  
The p5-520 or p5-520Q, when configured as a 4U rack-mounted server, is intended to be  
installed in a 19-inch rack, thereby enabling efficient use of computer room floor space. If the  
IBM 7014 T42 rack is used to mount the server, it is possible to place up to 10 systems in an  
area of 644 mm (25.5 in.) x 1147 mm (45.2 in.).  
To order a p5-520 or p5-520Q system as a rack-mounted version, FC 7190 must be selected.  
The server can be installed in either IBM or OEM racks. Therefore, in addition to the rack-mounted version, you are required to select one of the following features:
IBM Rack-mount Drawer Rail Kit (FC 7160)  
OEM Rack-mount Drawer Rail Kit (FC 7161)  
Included with the rack-mounted server packaging are all of the components and instructions  
necessary to enable installation in a 19-inch rack using suitable tools.  
The GXT135P 2D graphics accelerator with analog and digital interfaces (FC 1980) is  
available and is supported for SMS, firmware menus, and other low-level functions, as well as  
when AIX 5L or Linux starts the X11-based graphical user interface. You can use graphical  
AIX 5L system tools for configuration management if the adapter is connected to a common  
maintenance console, such as the 7316-TF3 rack-mounted flat-panel display.  
1.3 Minimum and optional features  
The systems feature a flexible, modular design based on POWER5+ processors. The server is available in 1-core, 2-core, and 4-core configurations that feature the following:
1.65 GHz (SCM and DCM), 1.9 or 2.1 GHz (DCM), and 1.5 or 1.65 GHz (QCM) POWER5+
processors.  
From 1 GB to 32 GB of total system memory capacity using 533 MHz DDR2 DIMM  
technology.  
Four SCSI disk drives in a minimum configuration, eight SCSI disk drives with an optional  
second 4-pack enclosure for a total internal storage capacity of 2.4 TB using 300 GB disk  
drives.  
Six PCI-X slots (one 266 MHz 64-bit PCI-X 2.0, three 133 MHz 64-bit PCI-X, two 66 MHz 32-bit PCI-X). All slots support Enhanced Error Handling (EEH).
Two slim-line media bays for optional storage devices.  
One half-high bay for an optional tape device.  
The p5-520 and p5-520Q, including the service processor that is described in 3.2.1, “Service  
processor” on page 83, support the following native ports:  
Two 10/100/1000 Ethernet ports on a single controller  
Two system ports  
Two USB 2.0 ports on a single controller  
Optionally, a 1.44 MB external USB diskette drive (FC 2591) is available.
Two HMC ports  
Optional GX+ Bus to RIO-2 adapter card (FC 2888)  
Two SPCN ports  
In addition, the p5-520 and p5-520Q feature one internal Ultra320 SCSI dual channel  
controller, redundant hot-swap power supply (optional), and cooling fans.  
The system supports 32-bit and 64-bit applications and requires specific levels of the AIX 5L and Linux operating systems. For more information, see 2.14, “Operating system support”.
1.3.1 Processor features  
The p5-520 features one or two POWER5+ processors, each with one or two cores running at  
1.65 GHz, 1.9 GHz, or 2.1 GHz, or the p5-520Q with four cores running at 1.5 GHz or  
1.65 GHz. The processors are installed on either single-core modules (SCM), dual-core  
modules (DCM), or quad-core modules (QCM). The POWER5+ processor modules are  
mounted directly to the system planar. Table 1-4 on page 7 lists the available processor  
features.  
Table 1-4 Processor feature codes

Feature code    Description
8321            1-core 1.65 GHz POWER5+ Processor Card, no L3 Cache
8323            2-core 1.65 GHz POWER5+ Processor Card, 36 MB L3 Cache
8330            2-core 1.9 GHz POWER5+ Processor Card, 36 MB L3 Cache
8315            1-core 2.1 GHz POWER5+ Processor Card, 36 MB L3 Cache
8316            2-core 2.1 GHz POWER5+ Processor Card, 36 MB L3 Cache
8333            4-core 1.5 GHz POWER5+ Processor Card, 2 x 36 MB L3 Cache
8314            4-core 1.65 GHz POWER5+ Processor Card, 2 x 36 MB L3 Cache
Note: When configuring p5-520 and p5-520Q systems, remember that the processor  
modules are mounted directly on the system planar and cannot be upgraded.  
1.3.2 Memory features  
The minimum memory requirement for the p5-520 and p5-520Q servers is 1 GB, and the  
maximum capacity is 32 GB using 533 MHz DDR2 technology. The planar of each system  
has eight sockets for memory DIMMs. Table 1-5 lists the available memory features.  
Table 1-5 Memory feature codes

Feature code    Description
1930            1 GB (2 x 512 MB) DIMMs, 276-pin DDR2, 533 MHz SDRAM
1931            2 GB (2 x 1 GB) DIMMs, 276-pin DDR2, 533 MHz SDRAM
1932            4 GB (2 x 2 GB) DIMMs, 276-pin DDR2, 533 MHz SDRAM
1934            8 GB (2 x 4 GB) DIMMs, 276-pin DDR2, 533 MHz SDRAM
Note that the POWER Hypervisor always uses some amount of memory, even when the machine is not partitioned. You can use the System Planning Tool to calculate the amount of memory that is available to an operating system for a given machine configuration.
1.3.3 Disk and media features  
The minimum configuration includes a 4-pack disk drive enclosure. A second 4-pack disk  
drive enclosure can be installed by ordering FC 6574 or FC 6594, so that the maximum  
internal storage capacity can reach 2.4 TB (using the disk drive features available at the time  
of writing). The p5-520 and p5-520Q feature up to eight disk drive bays, two slim-line media  
device bays, and one half-height media bay. The minimum configuration requires at least one  
disk drive. Table 1-6 shows the disk drive feature codes that each bay can contain.  
Table 1-6 Hot-swappable disk drive options

Feature code    Description
1968            73.4 GB Ultra320 10K rpm SCSI hot-swappable disk drive
1969            146.8 GB Ultra320 10K rpm SCSI hot-swappable disk drive
1970            36.4 GB Ultra320 15K rpm SCSI hot-swappable disk drive
1971            73.4 GB Ultra320 15K rpm SCSI hot-swappable disk drive
1972            146.8 GB Ultra320 15K rpm SCSI hot-swappable disk drive
1973            300 GB Ultra320 10K rpm SCSI hot-swappable disk drive
You can install any combination of the following DVD-ROM and DVD-RAM drives in the two  
slim-line bays:  
DVD-RAM drive, FC 1993  
DVD-ROM drive, FC 1994  
A logical partition running a supported release of Linux requires a DVD-ROM or DVD-RAM drive to run the diagnostics CD for hardware diagnostics. Concurrent diagnostics, as provided by the AIX 5L diag command, are not available on the Linux operating system at the time of writing.
You can install supplementary devices in the half-height media bay, such as:  
Internal 4 mm 36/72 GB LVD tape drive, FC 1991  
IBM 80/160 GB internal tape drive VXA, FC 1992  
IBM 160/320 GB internal tape drive with VXA-3 technology, FC 1892  
IBM 200/400 GB LTO2 tape drive, FC 1997  
DVD devices installed in the slim-line bays must be assigned as a group to a single LPAR on  
a partitioned system.  
A dual-channel RAID enablement daughter card is also available (FC 1907).  
1.3.4 USB diskette drive  
The externally attached USB diskette drive provides storage capacity up to 1.44 MB  
(FC 2591) on high-density (2HD) floppy disks and 720 KB on a double density floppy disk. It  
includes a 350 mm (13.7 in.) cable with standard USB connector. This super slim-line and  
lightweight USB V2-attached diskette drive takes its power requirements from the USB port.  
The drive can be attached to the integrated USB ports or to a USB adapter (FC 2738). A  
maximum of one USB diskette drive is supported per integrated controller/adapter. The same  
controller can share a USB mouse and keyboard.  
1.3.5 I/O drawers  
The p5-520 and p5-520Q have six internal PCI-X slots: three long slots and three short slots. If you need more PCI-X slots, for example to extend the number of LPARs, you can connect up to four 7311 Model D20 drawers to the optional RIO-2 ports (FC 2888) on the rear of the system (these ports are not part of the minimum configuration).
The 7311 Model D20 I/O drawer is a 4U full-size drawer, which must be mounted in a rack. It  
features seven hot-pluggable PCI-X slots and, optionally, up to 12 hot-swappable disks  
arranged in two 6-packs. Redundant, concurrently maintainable power and cooling is an  
optional feature (FC 6268). The 7311 Model D20 I/O drawer offers a modular growth path for  
a system with increasing I/O requirements. When a p5-520 or p5-520Q is fully configured with  
four attached 7311 Model D20 drawers, the combined system supports up to 34 PCI-X  
adapters (in a maximum configuration, remote I/O expansion cards are required) and  
56 hot-swappable SCSI disks, for a total internal capacity of 16.8 TB using 300 GB disks.  
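The capacity figures above follow directly from the bay counts and the 300 GB drive size. A minimal sketch of the arithmetic (all inputs are taken from this section):

```python
# Disk capacity of a fully configured p5-520/p5-520Q with four 7311-D20 drawers.
DRIVE_GB = 300          # largest supported Ultra320 SCSI drive

internal_bays = 8       # two 4-pack enclosures in the system unit
drawer_bays = 12        # two 6-packs per 7311 Model D20 drawer
drawers = 4             # maximum number of attached drawers

total_bays = internal_bays + drawers * drawer_bays
print(internal_bays * DRIVE_GB / 1000)  # 2.4 TB internal
print(total_bays)                       # 56 hot-swappable disks
print(total_bays * DRIVE_GB / 1000)     # 16.8 TB combined
```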
PCI-X and PCI cards are inserted from the top of the I/O drawer down into the slot from the  
drawer’s front service position. The installed adapters are protected by plastic separators,  
which are designed to prevent grounding and damage when adding or removing adapters.  
The drawer has the following attributes:  
4U rack-mount enclosure assembly  
Seven PCI-X slots 3.3 volt, keyed, 133 MHz hot-pluggable  
Two 6-pack hot-swappable SCSI bays (optional)  
Redundant hot-swap power (optional)  
Two RIO-2 ports and two SPCN ports  
Note: A 7311 Model D20 I/O drawer initial order or an existing 7311 Model D20 I/O drawer  
that is migrated from another pSeries system must have the RIO-2 ports available  
(FC 6417).  
The I/O drawer has the following physical characteristics:  
Width: 482 mm (19.0 in.)  
Depth: 610 mm (24.0 in.)  
Height: 178 mm (7.0 in.)  
Weight: 45.9 kg (101 lb.)  
Figure 1-3 shows different views of the 7311-D20 I/O drawer. The front (service access) view shows the adapter positions and the seven PCI-X slots (numbered 1 through 7); the rear view shows the two power supplies, the RIO ports, the SPCN ports, the reserved ports, the rack indicator, and the SCSI disk locations and IDs (8, 9, A, B, C, and D in each 6-pack).

Figure 1-3 7311-D20 I/O drawer views
Note: The 7311 Model D20 I/O drawer is designed to be installed by an IBM service  
representative. Only the 7311 Model D20 I/O drawer is supported on a p5-520 or p5-520Q  
system.  
1.3.6 Hardware Management Console models  
A p5-520 or p5-520Q can be either HMC-managed or non-HMC-managed. In HMC-managed  
mode, an HMC is required as a dedicated workstation that allows you to configure and  
manage partitions. The HMC provides a set of functions to manage the system LPARs,  
dynamic LPAR operations, virtual features, Capacity on Demand, inventory and microcode  
management, and remote power control functions. These functions also include the handling  
of the partition profiles that define the processor, memory, and I/O resources allocated to an  
individual partition. For detailed information about the HMC, see 2.13, “Hardware Management Console”.
Note: Non-HMC-managed modes are full system partition modes, where only one partition  
contains all system resources that exist on the system. For more information about using  
the Integrated Virtualization Manager (IVM), see 2.12.5, “Integrated Virtualization Manager”.
Table 1-7 lists the HMC options for POWER5 processor-based systems that are available at  
the time of writing. You can also use existing HMC models.  
Table 1-7 Supported HMC models

Type-model    Description
7310-C05      IBM 7310 Model C05 Deskside Hardware Management Console
7310-CR3      IBM 7310 Model CR3 Rack-Mount Hardware Management Console
Systems require Ethernet connectivity between the HMC and one of the Ethernet ports of the
service processor. Ensure that sufficient HMC Ethernet ports are available to enable public  
and private networks if you need both. The 7310 Model C05 is a deskside model with one  
native 10/100/1000 Ethernet port. It can be extended with two additional two-port  
10/100/1000 Gb adapters. The 7310 Model CR3 is a 1U, 19-inch rack mountable drawer that  
has two native Ethernet ports and can be extended with one additional two-port 10/100/1000  
Gb adapter.  
In HMC-managed installations that demand high availability, consider deploying two HMCs. The service processor allows for the connection of two HMCs,
and there is no need for special handling of a dual HMC environment. HMCs provide a locking  
mechanism so that only one HMC has write access to the service processor at a time.  
When an HMC is connected to the system, the integrated system ports are disabled.  
To support a non-Ethernet HACMP™ heartbeat, you need to provide an asynchronous  
adapter (FC 5723 or FC 2943).  
Note: It is not possible to connect POWER4™ with POWER5 or POWER5+  
processor-based systems simultaneously to the same HMC. However, it is possible to  
connect POWER5 and POWER5+ processor-based systems together to the same HMC.  
1.4 Express Product Offerings  
The Express Product Offerings provide a convenient way to order any of several  
configurations that are designed to meet typical client requirements. Special reduced pricing  
is available when a system order satisfies specific configuration requirements for memory,  
disk drives, and processors.  
1.4.1 Express Product Offerings requirements  
When you order an Express Product Offering, the configurator offers a choice of starting  
points onto which you can add. You can configure systems with one or two processor cards  
and two or four processor activations.  
With the purchase of an Express Product Offering, for each paid processor activation, you are  
entitled to one processor activation at no additional charge, if the following requirements are  
met:  
The system must have at least two disk drives of at least 73.4 GB each.  
There must be at least 2 GB of memory installed for each active processor.  
If you order a p5-520 server Express Product Offering as defined here, you might qualify for a processor activation at no extra charge. The number of processors, total memory, quantity and size of disk, and presence of a media device are the only features that determine whether a client is entitled to a processor activation at no additional charge.
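As an illustration only, the two requirements above can be expressed as a simple check (a minimal sketch; the function and its inputs are hypothetical, and your IBM representative makes the actual determination):

```python
# Hypothetical check of the two Express Product Offering requirements
# listed above: qualifying disk drives and memory per active processor.
def qualifies_for_express(disk_sizes_gb, memory_gb, active_processors):
    """Return True if the configuration meets both Express requirements."""
    enough_disks = len([s for s in disk_sizes_gb if s >= 73.4]) >= 2
    enough_memory = memory_gb >= 2 * active_processors
    return enough_disks and enough_memory

# Example: two 73.4 GB drives and 2 GB of memory with one active processor.
print(qualifies_for_express([73.4, 73.4], memory_gb=2, active_processors=1))  # True
```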
When you purchase an Express Product Offering, you are entitled to a lower priced AIX 5L or  
Linux operating system license, or you can choose to purchase the system with no operating  
system. The lower priced AIX 5L or Linux operating system is processed via a feature number  
on AIX 5L and either Red Hat or SUSE Linux. You can choose either the lower priced AIX 5L  
or Linux subscription, but not both.  
If you choose AIX 5L for your lower priced operating system, you can also order Linux but will  
purchase your Linux subscription at full price versus the reduced price. The same is true if  
you choose a Linux subscription as your lower priced operating system. Systems with a  
reduced price AIX 5L offering are the IBM System p5 Express Product Offering, AIX 5L  
edition. Systems with a lower priced Linux operating system are referred to as the  
IBM System p5 Express Product Offering, OpenPower™ edition. In the case of Linux, only  
the first subscription purchased is lower priced. So, for example, additional licenses  
purchased for Red Hat to run in multiple partitions will be at full price.  
You can make changes to the standard features as needed and still qualify for processor  
entitlements at no additional charge and a reduced price AIX 5L or Linux operating system  
license.  
If the system was initially ordered as an Express Product Offering, the system can be  
expanded at a later time using Express Product Offering pricing, when additional processors  
and activations along with the required memory are ordered on the same hardware upgrade  
order. The upgraded p5-520Q configuration must satisfy the Express Product Offering  
requirements for disk drives, memory, and processors. However, if the selection of total  
memory or disk drives is smaller than the total defined as the minimums, it disqualifies the  
order as an Express Product Offering.  
1.4.2 Configurator starting points for Express Product Offerings  
All Express Product Offerings have a set of standard features for the rack-mounted or  
deskside versions as listed in Table 1-8 on page 12.  
Table 1-8 Express Product Offering standard set of feature codes

Feature code description        Rack-mounted feature codes    Deskside feature codes
System bezel and hardware       7190 x 1                      7916 x 1
Rack-mount rail kit             7160 x 1                      n/a
850 Watt power supply           5159 x 1                      5159 x 1
IDE DVD-ROM                     1994 x 1                      1994 x 1
Media backplane                 7877 x 1                      7877 x 1
4-pack disk drive enclosure     6574 x 1                      6574 x 1
73.4 GB 10 K disk drives        1968 x 2                      1968 x 2
A specific Express Product Offering ID or specific offering feature code is used to select the  
processor type and quantity, and the associated memory feature code and quantity, on top of  
the standard set. Table 1-9 and Table 1-10 provide these configuration differences.  
Table 1-9 Express Product Offering features - SCM and DCM configurations

                                    Configuration
Description                         1-core       2-core       2-core       1-core       2-core
                                    1.65 GHz     1.65 GHz     1.9 GHz      2.1 GHz      2.1 GHz
Processor cards                     8321 x 1     8323 x 1     8330 x 1     8315 x 1     8316 x 1
Processor activations               n/a          7309 x 1     7320 x 1     n/a          7271 x 1
Zero-priced express activations     8418 x 1     8419 x 1     8410 x 1     8480 x 1     8481 x 1
Total active processors             1            2            2            1            2
Minimum memory                      1 GB         2 GB         2 GB         1 GB         2 GB
Table 1-10 Express Product Offering features - QCM configurations

                                    Configuration
Description                         4-core 1.5 GHz     4-core 1.65 GHz
Processor cards                     8333 x 1           8314 x 1
Processor activations               7337 x 2           7269 x 2
Zero-priced express activations     8421 x 2           8479 x 2
Total active processors             4                  4
Minimum memory                      4 GB               4 GB
1.5 System racks  
The IBM 7014 Model S11, S25, T00, and T42 Racks are 19-inch racks for general use with  
IBM System p and OpenPower Edition rack-mount servers. The racks provide increased  
capacity, greater flexibility, and improved floor space utilization.  
If a server is to be installed in a non-IBM rack or cabinet, you must ensure that the rack conforms to the EIA standard EIA-310-D (see 1.5.9, “OEM rack” on page 21).
Note: It is the client’s responsibility to ensure that the installation of the drawer in the  
preferred rack or cabinet results in a configuration that is stable, serviceable, safe, and  
compatible with the drawer requirements for power, cooling, cable management, weight,  
and rail security.  
1.5.1 IBM 7014 Model T00 rack  
The 1.8-meter (71-inch) Model T00 is compatible with past and present IBM System p  
systems. It is a 19-inch rack and is designed for use in all situations that have previously used  
the earlier rack models R00 and S00. The T00 rack has the following features:  
36 EIA units (36U) of usable space.  
Optional removable side panels.  
Optional highly perforated front door.  
Optional side-to-side mounting hardware for joining multiple racks.  
Standard business black or optional white color in OEM format.  
Increased power distribution and weight capacity.  
Optional reinforced (ruggedized) rack feature (FC 6080) provides added earthquake  
protection with modular rear brace, concrete floor bolt-down hardware, and bolt-in steel  
front filler panels.  
Support for both ac and dc configurations.  
The dc rack height is increased to 1926 mm (75.8 in.) if a power distribution panel is fixed  
to the top of the rack.  
Up to four power distribution units (PDUs) can be mounted in the PDU bays (see Figure 1-4 on page 17); additional PDUs can fit inside the rack. See 1.5.6, “The ac power distribution unit and rack content” on page 16.
Weights:  
– T00 base empty rack: 244 kg (535 pounds)  
– T00 full rack: 816 kg (1795 pounds)  
1.5.2 IBM 7014 Model T42 rack  
The 2.0-meter (79.3-inch) Model T42 addresses the client requirement for a tall enclosure to  
house the maximum amount of equipment in the smallest possible floor space. The features  
that differ in the Model T42 rack from the Model T00 include:  
42 EIA units (42U) of usable space (6U of additional space).  
The Model T42 supports ac only.  
Weights:  
– T42 base empty rack: 261 kg (575 lb.)  
– T42 full rack: 930 kg (2045 lb.)  
Note: The Electronic Industries Alliance (EIA) is accredited by the American National Standards Institute (ANSI) and provides a forum for industry to develop standards and publications throughout the electronics and high-tech industries.
Optional Rear Door Heat eXchanger (FC 6858)  
Improved cooling from the heat exchanger enables the client to populate individual racks more densely, freeing valuable floor space without the need to purchase additional air conditioning units. The Rear Door Heat eXchanger features:
Water-cooled heat exchanger door designed to dissipate heat generated from the back of  
computer systems before it enters the room  
An easy-to-mount rear door design that attaches to client-supplied water, using industry  
standard fittings and couplings  
Up to 15 kW (approximately 50,000 BTUs/hr.) of heat removed from the air exiting the back of a fully populated rack
One year, limited warranty  
Physical specifications  
The physical specifications are:  
Approximate height: 1945.5 mm (76.6 in.)  
Approximate width: 635.8 mm (25.03 in.)  
Approximate depth: 141.0 mm (5.55 in.)  
Approximate weight: 31.9 kg (70.0 lb.)  
Client responsibilities  
The client responsibilities are:  
Secondary water loop (to the building chilled water)  
Pump solution (for secondary loop)  
Delivery solution (hoses and piping)  
Connections: standard 3/4-inch internal threads  
1.5.3 IBM 7014 Model S11 rack  
The Model S11 rack satisfies many light-duty requirements for organizing smaller rack-mount  
servers and expansion drawers. The 0.6-meter-high rack has a perforated, lockable front  
door; a heavy-duty caster set for easy mobility; a complete set of blank filler panels for a  
finished look; EIA unit markings on each corner to aid assembly; and a retractable stabilizer  
foot. The Model S11 rack has the following specifications:  
Width: 520 mm (20.5 in.) with side panels  
Depth: 874 mm (34.4 in.) with front door  
Height: 612 mm (24.0 in.)  
Weight: 37 kg (75.0 lb.)  
The S11 rack has a maximum load limit of 16.5 kg (36.3 lb.) per EIA unit for a maximum  
loaded rack weight of 216 kg (475 lb.).  
1.5.4 IBM 7014 Model S25 rack  
The 1.3-meter-high Model S25 rack satisfies many light-duty requirements for organizing  
smaller rack-mount servers. Front and rear rack doors include locks and keys, helping keep  
your servers secure. Side panels are a standard feature, simplifying ordering and shipping.  
This 25U rack can be shipped configured and can accept server and expansion units up to  
28-inches deep.  
The front door is reversible so that it can be configured for either left or right opening. The rear  
door is split vertically in the middle and hinges on both the left and right sides. The S25 rack  
has the following specifications:  
Width: 605 mm (23.8 in.) with side panels  
Depth: 1001 mm (39.4 in.) with front door  
Height: 1344 mm (49.0 in.)  
Weight: 100.2 kg (221.0 lb.)  
The S25 rack has a maximum load limit of 22.7 kg (50 lb.) per EIA unit for a maximum loaded  
rack weight of 667 kg (1470 lb.).  
1.5.5 S11 rack and S25 rack considerations  
The S11 and S25 racks do not have vertical mounting space that will accommodate FC 7188  
PDUs. All PDUs required for application in these racks must be installed horizontally in the  
rear of the rack. Each horizontally mounted PDU occupies 1U of space in the rack, and  
therefore reduces the space available for mounting servers and other components.  
FC 0469 Customer Specified Rack Placement provides the ability to specify the physical  
location of the system modules and attached expansion modules (drawers) in the racks. The  
client’s request is reviewed by eConfig for safe handling by checking the weight distribution  
within the rack. The Manufacturing Plant provides the final approval for the configuration. This  
information is then used by IBM Manufacturing to assemble the system components  
(drawers) in the rack according to the client’s request.  
The CFReport from eConfig must be submitted to the following site:  
Table 1-11 on page 16 lists the machine types that are supported in the S11 and S25 racks.  
Table 1-11 Models supported in S11 and S25 racks

Machine type-model    Name                                           7014-S11 rack    7014-S25 rack
7037-A50              IBM System p5 185                              Y                Y
7031-D24/T24          EXP24 Disk Enclosure                           Y                Y
7311-D20              I/O Expansion Drawer                           Y                Y
9110-510              IBM System p5 510                              Y                Y
9111-520              IBM System p5 520                              Y                Y
9113-550              IBM System p5 550                              Y                Y
9115-505              IBM System p5 505                              Y                Y
9123-710              OpenPower 710                                  Y                Y
9124-720              OpenPower 720                                  Y                Y
9110-51A              IBM System p5 510 and 510Q                     Y                Y
9131-52A              IBM System p5 520 and 520Q                     Y                Y
9133-55A              IBM System p5 550 and 550Q                     Y                Y
9116-561              IBM System p5 560Q                             Y                Y
9910-P33              3000VA UPS (2700 watt)                         Y                Y
9910-P65              500VA UPS (208-240V)                           N                Y
7310-CR3              Rack-mount HMC                                 N                Y
7315-CR3              Rack-mount HMC                                 N                Y
7026-P16              LAN-attached remote asynchronous node (RAN)    N                Y
7316-TF3              Rack-mounted flat-panel console kit            N                Y
1.5.6 The ac power distribution unit and rack content  
Note: Each server, or system drawer to be mounted in the rack, requires two power cords,  
which are not included in the base order. For maximum availability, we highly recommend  
that you connect power cords from the same server or system drawer to two separate  
PDUs in the rack. These PDUs could be connected to two independent client power  
sources.  
For rack models T00 and T42, 12-outlet PDUs (FC 9188 and FC 7188) are available. For rack  
models S11 and S25, FC 7188 is available.  
Four PDUs can be mounted vertically in the T00 and T42 racks. See Figure 1-4 on page 17  
for the placement of the four vertically mounted PDUs. In the rear of the rack, two additional  
PDUs can be installed horizontally in the T00 rack and three in the T42 rack. The four vertical  
mounting locations will be filled first in the T00 and T42 racks. Mounting PDUs horizontally  
consumes 1U per PDU and reduces the space available for other racked components. When  
mounting PDUs horizontally, we recommend that you use fillers in the EIA units occupied by  
these PDUs to facilitate proper air flow and ventilation in the rack.  
The S11 and S25 racks support as many PDUs as there is available rack space.  
For detailed power cord requirements and power cord feature codes, see IBM System p5, IBM eServer p5 and i5, and OpenPower Edition Planning, SA38-0508. For an online copy, select Map of pSeries books to the information center → Planning → Printable PDFs → Planning at the following Web site:
Note: Ensure that the appropriate power cord feature is configured to support the power  
that is supplied.  
The Base/Side Mount Universal PDU (FC 9188) and the optional, additional Universal PDU  
(FC 7188) support a wide range of country requirements and electrical power specifications.  
The PDU receives power through a UTG0247 power line connector. Each PDU requires one  
PDU-to-wall power cord. Nine power cord features are available for different countries and  
applications by varying the PDU-to-wall power cord, which must be ordered separately. Each  
power cord provides the unique design characteristics for the specific power requirements. To  
match new power requirements and save previous investments, you can request these power  
cords with an initial order of the rack or with a later upgrade of the rack features.  
The PDU has 12 client-usable IEC 320-C13 outlets. There are six groups of two outlets fed by  
six circuit breakers. Each outlet is rated up to 10 amps. Each group of two outlets is fed from  
one 15 amp circuit breaker.  
Note: Based on the power cord that is used, the PDU can supply from 4.8 kVA to 19.2 kVA.  
The total kilovolt ampere (kVA) of all the drawers plugged into the PDU must not exceed  
the power cord limitation.  
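As a rough illustration of that sizing rule (a minimal sketch; the drawer wattages and the simplification that nameplate watts approximate volt-amperes are assumptions for the example, not figures from this document):

```python
# Check the total load on one PDU against the kVA limit of its power cord
# (4.8 to 19.2 kVA depending on the cord). For a rough check, treat one
# watt of nameplate power as one volt-ampere.
drawer_watts = [750, 750, 750, 750]  # e.g., four p5-520 systems at maximum draw
cord_limit_kva = 4.8                 # lowest-rated PDU-to-wall power cord

total_kva = sum(drawer_watts) / 1000.0
print(total_kva, "kVA of", cord_limit_kva, "kVA available")
if total_kva > cord_limit_kva:
    print("Exceeds the power cord limit; use a higher-rated cord or another PDU.")
```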
The Universal PDUs are compatible with previous models.  
Figure 1-4 PDU placement and PDU view  
1.5.7 Rack-mounting rules  
The primary rules that you should follow when you mount the server into a rack are:  
The p5-520 or p5-520Q is designed to be placed at any location in the rack. For rack  
stability, we advise that you start filling a rack from the bottom.  
Any remaining space in the rack can be used to install other systems or peripherals,  
provided that the maximum permissible weight of the rack is not exceeded and the  
installation rules for these devices are followed.  
Before placing or sliding a p5-520 or p5-520Q into the service position, it is essential that  
you have followed the rack manufacturer’s safety instructions regarding rack stability.  
The availability of 14-foot, 9-foot, and 6-foot jumper cords (between the drawer and the PDU)  
provides several options to ensure that all cables are accounted for inside the rack space.  
Depending on the current implementation and future enhancements of additional 7311 Model  
D20 drawers that are connected to the system, Table 1-12 shows examples of the minimum  
and maximum configurations for different combinations of servers and attached 7311 Model  
D20 I/O drawers.  
Table 1-12 Minimum and maximum configurations for servers and 7311-D20s

Rack           Only servers    One server,      One server,
                               one 7311-D20     four 7311-D20s
7014-T00       9               4                1
7014-T42       10              5                2
7014-S11       2               1                0
7014-S25       6               3                1
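These maximums follow from simple EIA-unit arithmetic. A minimal sketch (it assumes 36U, 42U, 11U, and 25U of usable space for the four racks, a 4U server, and 4U per 7311-D20 drawer, and it ignores any space consumed by horizontally mounted PDUs):

```python
# Reproduce Table 1-12 from rack heights and enclosure sizes in EIA units.
SERVER_U = 4   # p5-520/p5-520Q rack-mount server
DRAWER_U = 4   # 7311 Model D20 I/O drawer

racks = {"7014-T00": 36, "7014-T42": 42, "7014-S11": 11, "7014-S25": 25}

for rack, usable_u in racks.items():
    only_servers = usable_u // SERVER_U
    one_drawer = usable_u // (SERVER_U + DRAWER_U)
    four_drawers = usable_u // (SERVER_U + 4 * DRAWER_U)
    print(rack, only_servers, one_drawer, four_drawers)
# 7014-T00 9 4 1 | 7014-T42 10 5 2 | 7014-S11 2 1 0 | 7014-S25 6 3 1
```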
1.5.8 Additional options for the rack  
This section highlights some solutions available to provide a single point of management for  
environments composed of multiple p5-520 or p5-520Q servers or other IBM System p5  
servers.  
IBM 7212 Model 103 IBM TotalStorage storage device enclosure  
The IBM 7212 Model 103 is designed to provide efficient and convenient storage expansion  
capabilities for selected System p servers. The IBM 7212 Model 103 is a 1U rack-mountable  
option to be installed in a standard 19-inch rack using an optional rack-mount hardware  
feature kit. The 7212 Model 103 has two bays that can accommodate any of the following  
storage drive features:  
Digital Data Storage (DDS) Gen 5 DAT72 Tape Drive provides a physical storage capacity  
of 36 GB (72 GB with 2:1 compression) per data cartridge.  
VXA-2 Tape Drive comes with a physical media capacity of up to 80 GB (160 GB with 2:1 compression) per data cartridge.
VXA-320 Tape Drive comes with a physical media capacity of up to 160 GB (320 GB with 2:1 compression) per data cartridge.
Half-High LTO-2 Tape Drive comes with media physical capacity of up to 200 GB (400 GB  
with 2:1 compression) data storage per Ultrium 2 cartridge and a sustained data transfer  
rate of 24.0 MB per second (48 MB per second with 2:1 compression). In addition to  
reading and writing on Ultrium 2 tape cartridges, it is also read and write compatible with  
Ultrium 1 cartridges.  
SLR60 Tape Drive (QIC format) comes with 37.5 GB native data physical capacity per  
tape cartridge and a native physical data transfer rate of up to 4 MB per second and uses  
2:1 compression to achieve a single tape cartridge physical capacity up to 75 GB of data.  
SLR100 Tape Drive (QIC format) comes with 50 GB native data physical capacity per tape  
cartridge and a native physical data transfer rate of up to 5 MB per second and uses 2:1  
compression to achieve single tape cartridge storage of up to 100 GB of data.  
DVD-RAM 2 drive can read and write on 4.7 GB and 9.4 GB DVD-RAM media. The  
DVD-RAM 2 uses only bare media, which reduces media costs, and is also read  
compatible with multisession CD, CD-RW, and 2.6 GB and 5.2 GB DVD-RAM media. The  
9.4 GB physical capacity of DVD-RAM allows storage of more data than on conventional  
CD-R media. Fast performance also allows quick access to information, while downward  
compatibility helps provide investment protection.  
Note: Disc capacity options are 2.6 GB and 4.7 GB per side. The 5.2 GB and 9.4 GB  
capacities can be achieved by using double-sided DVD-RAM discs.  
Flat panel display options  
The IBM 7316-TF3 Flat Panel Console Kit can be installed in the system rack. This 1U  
console uses a 17-inch thin film transistor (TFT) LCD with a viewable area of 337.9 mm x 270.03 mm and a resolution of 1280 x 1024 picture elements (pels). The 7316-TF3 Flat Panel Console Kit has the following attributes:
A 17-inch, flat screen TFT color monitor that occupies only 1U (1.75 inches) in a 19-inch  
standard rack.  
Ability to mount the IBM Travel Keyboard in the 7316-TF3 rack keyboard tray.  
Support for the new 1x8 LCM switch (FC 4280), the Netbay LCM2 (FC 4279) with access  
to and control of as many as 64 servers, and support of both USB and PS/2 server-side  
keyboard and mouse connections.  
IBM Travel Keyboard mounts in the rack keyboard tray (Integrated Track point and  
UltraNav).  
IBM PS/2 Travel Keyboards are supported on the 7316-TF3 for use in configurations where  
only PS/2 keyboard ports are available.  
The IBM 7316-TF3 Flat Panel Console Kit provides an option for the USB Travel Keyboards  
with UltraNav. The keyboard enables the 7316-TF3 to be connected to systems that do not  
have PS/2 keyboard ports. The USB Travel Keyboard can be directly attached to an available  
integrated USB port or a supported USB adapter (2738) on System p5 servers or 7310-CR3  
and 7315-CR3 HMCs.  
The IBM 7316-TF3 flat-panel, rack-mounted console is now available with two console switch  
options, which let you inexpensively cable, monitor, and manage your rack servers: the new  
1x8 LCM Console Switch (FC 4280) and the LCM2 console switch (FC 4279).  
The 1x8 Console Switch is a cost-effective, densely-packed solution that helps you set up and  
control selected System p rack-mounted IBM servers:  
Supports one local user with PS/2 keyboard, PS/2 mouse, and video connections  
Features an 8-port, CAT5 console switch for single-user local management  
Supports both USB and PS/2 server-side keyboard and mouse connections  
Occupies only 1U (1.75 in) in a 19-inch standard rack  
The 1x8 Console Switch can be mounted in one of the following racks: 7014-T00, 7014-T42,  
7014-S11, or 7014-S25.  
The 1x8 Console Switch supports GXT135P (FC 1980 and FC 2849) graphics accelerators.  
The following cables are used to attach the IBM servers to the 1x8 Console Switch:  
IBM 3M Console Switch Cable (PS/2) (FC 4282)  
IBM 3M Console Switch Cable (USB) (FC 4281)  
The 1x8 Console Switch supports the following monitors:  
7316-TF3 rack console monitor  
pSeries TFT monitors (FC 3641, FC 3643, FC 3644, and FC 3645)  
Separately available switch cables convert KVM signals for CAT5 cabling for servers with USB and PS/2 ports. A minimum of one USB cable feature (FC 4281) or PS/2 cable feature (FC 4282) is required to connect the IBM 1x8 Console Switch (FC 4280) to a supported server. The 3-meter cable FC 4281 has one HD15 connector for video and one USB connector for keyboard and mouse. The 3-meter cable FC 4282 has one HD15 connector for video, one PS/2 connector for keyboard, and one PS/2 connector for the mouse.
The 1x8 Console Switch is a 1U (1.75-inch) rack-mountable LCM switch containing eight  
analog rack interface ports for connecting switches using CAT5 cable. The switch supports a  
maximum video resolution of 1280x1024.  
The Console Switch allows for two levels of tiering and supports up to 64 servers at a single  
user location through switch tiering. The previous VGA switch (FC 4200), the LCM (FC 4202),  
and LCM2 (FC 4279) switches can be tiered with the 1x8 Console Switch.  
Note: When the 1x8 Console Switch is tiered with the previous VGA switch (FC 4200) or  
LCM (FC 4202) switch, it must be at the top level of the tier. When the 1x8 Console Switch  
is tiered with the LCM2 (FC 4279) switch, it must be at the secondary level of the tier.  
The IBM Local 2x8 Console Manager (LCM2) switch (FC 4279) provides users with single-point access to and control of up to 1024 servers. It supports connection to servers with either PS/2 or USB connections with installation of the appropriate options. The maximum resolution is 1280 x 1024 at 75 Hz. The LCM2 switch can be tiered, and three levels of tiering are supported.
A minimum of one LCM feature (FC 4268) or USB feature (FC 4269) is required with an IBM  
Local 2x8 Console Manager (LCM2) switch (FC 4279). Each feature can support up to four  
systems. When connecting to a p5-520 or p5-520Q, FC 4269 provides connection to the  
POWER5+ USB ports. Only the PS/2 keyboard is supported when attaching the 7316-TF3 to  
the LCM Switch.  
When selecting the LCM Switch, consider the following information:  
The KVM Conversion Option (KCO) cable (FC 4268) is used with systems with PS/2 style  
keyboard, display, and mouse ports.  
The USB cable (FC 4269) is used with systems with USB keyboard or mouse ports.  
The switch offers four ports for server connections. Each port in the switch can connect a  
maximum of 16 systems:  
– One KCO cable (FC 4268) or USB cable (FC 4269) is required for every four systems  
supported on the switch.  
– A maximum of 16 KCO cables or USB cables per port can be used with the Netbay  
LCM Switch to connect up to 64 servers.  
Note: A server microcode update might be required on installed systems for boot-time  
System Management Services (SMS) menu support of the USB keyboards. For microcode  
updates, see:  
We recommend that you have the 7316-TF3 installed between EIA 20 and EIA 25 of the  
rack for ease of use. The 7316-TF3 or any other graphics monitor requires a  
POWER GXT135P graphics accelerator (FC 1980 and FC 2849) installed in the server, or  
some other graphics accelerator, if supported.  
Hardware Management Console 7310 Model CR3  
The 7310 Model CR3 Hardware Management Console (HMC) is a 1U, 19-inch  
rack-mountable drawer that is supported in the 7014 racks. For additional HMC specifications,  
1.5.9 OEM rack  
The p5-520 or p5-520Q can be installed in a suitable OEM rack, provided that the rack conforms to the EIA-310-D standard for 19-inch racks. This standard is published by the Electronic Industries Alliance, and a summary of this standard is available in the publication IBM System p5, IBM eServer p5 and i5, and OpenPower Planning, SA38-0508.
The key points mentioned in this documentation are as follows:  
The front rack opening must be 451 mm ± 0.75 mm (17.73 in. ± 0.03 in.) wide, and the rail-mounting holes must be 465 mm ± 0.8 mm (18.3 in. ± 0.03 in.) apart on center
(horizontal width between the vertical columns of holes on the two front-mounting flanges  
and on the two rear-mounting flanges). Figure 1-5 on page 22 shows a top view of the  
specification dimensions.  
Figure 1-5 Top view of non-IBM rack specification dimensions  
The vertical distance between the mounting holes must consist of sets of three holes spaced (from bottom to top) 15.9 mm (0.625 in.), 15.9 mm (0.625 in.), and 12.7 mm (0.5 in.) on center, making each three-hole set of vertical hole spacing 44.45 mm (1.75 in.) apart on center. Rail-mounting holes must be 7.1 mm ± 0.1 mm (0.28 in. ± 0.004 in.) in diameter. See Figure 1-6 and Figure 1-7 on page 23 for the top and bottom front specification dimensions.
Figure 1-6 Rack specification dimensions, top front view  
Figure 1-7 Rack specification dimensions, bottom front view  
It might be necessary to supply additional hardware, such as fasteners, for use in some manufacturers' racks.
The system rack or cabinet must be capable of supporting an average load of 15.9 kg  
(35 lb.) of product weight per EIA unit.  
The system rack or cabinet must be compatible with drawer mounting rails, including a  
secure and snug fit of the rail-mounting pins and screws into the rack or cabinet rail  
support hole.  
Note: Only ac-powered drawers are supported in an OEM rack. We strongly recommend that you use a power distribution unit (PDU) that meets the same specifications as the PDUs used to supply rack power. Rack or cabinet power distribution devices must meet the drawer power requirements, as well as the requirements of any additional products that will be connected to the same power distribution device.
2 Architecture and technical overview
This chapter discusses the overall system architecture of the p5-520 and p5-520Q. Figure 2-1  
details the base system hardware and the DCM or QCM options. (You cannot mix an  
installation of DCM and QCM options.) The bandwidths in this chapter are theoretical  
maximums that are provided for reference. We always recommend that you obtain real-world  
performance measurements using production workloads.  
Figure 2-1 IBM System p5 520 and IBM System p5 520Q architecture with QCM or DCM
2.1 The POWER5+ processor  
The IBM POWER5+ processor capitalizes on all the enhancements brought by the POWER5  
processor. For a detailed description of the POWER5 processor, refer to IBM System p5 520  
Technical Overview and Introduction, REDP-9111. Figure 2-2 shows a high level view of the  
POWER5+ processor.  
Figure 2-2 POWER5+ processor
The CMOS10S technology in the POWER5+ processor uses a 90 nanometer (nm) fabrication process, which enables:
Performance gains through faster clock rates
Processor size reduction (243 mm² compared with 389 mm²)
The POWER5+ processor is 37% smaller than the POWER5 processor. It consumes less  
power and requires less cooling. Thus, you can use the POWER5+ processor in servers  
where previously you could only use lower frequency processors due to cooling restrictions.  
The POWER5+ design provides the following additional enhancements:  
New page sizes in ERAT and TLB. Two new page sizes (64 KB and 16 GB) were recently added in the PowerPC® architecture.
New segment size in SLB. One new segment size (1 TB) was recently added in PowerPC  
architecture.  
The TLB size has been doubled in the POWER5+ over the POWER5 processors. The  
TLB in POWER5+ has 2048 entries.  
Floating-point round to integer instructions. New instructions (frfin, frfiz, frfip, frfim) have  
been added to round floating-point numbers with the following rounding modes: nearest,  
zero, integer plus, and integer minus.  
Improved floating-point performance.  
Lock performance enhancement.  
Enhanced SLB read.  
True Little-Endian mode. Support for the True Little-Endian mode as defined in the  
PowerPC architecture.  
Double the SMP support. Changes have been made in the fabric, L2 and L3 controller,  
memory controller, GX controller, and processor RAS to provide support for the QCM that  
allows the SMP system sizes to be double that which is available in POWER5 DCM-based  
servers. However, current POWER5+ implementations only support single address loop.  
Several enhancements have been made in the memory controller for improved  
performance. The memory controller is ready to support DDR2 667 MHz DIMMs in the  
future.  
Enhanced redundancy in L1 cache, L2 cache, and L3 directory. Independent control of the  
L2 cache and the L3 directory for redundancy to allow split-repair action has been added.  
More word line redundancy has been added in the L1 Dcache. In addition, Array Built-In  
Self Test (ABIST) column repair for the L2 cache and the L3 directory has been added.  
2.2 Processor and cache  
In the p5-520 and p5-520Q, the POWER5+ processors, associated L3 cache (if present), and  
memory DIMMs are packaged on the system planar. The p5-520 1-core, 2-core, and  
p5-520Q 4-core systems use different POWER5+ processor modules.  
Note: Because the POWER5+ processor modules are soldered directly to the system  
planar, you must take special care in sizing and selecting the ideal CPU configuration.  
2.2.1 POWER5+ single-core module  
The 1-core p5-520 POWER5+ system planar contains a single-core module (SCM) and the local memory storage subsystem for that SCM. The POWER5+ single-core processor is packaged in the SCM. L3 cache is not available in the 1-core 1.65 GHz configuration. Figure 2-3 on page 27 shows the layout of a 1.65 GHz p5-520 SCM and
associated memory.  
Figure 2-3 p5-520 POWER5+ 1.65 GHz SCM with DDR2 memory socket layout view
The 1-core 2.1 GHz p5-520 system planar contains a single-core module (SCM), the local  
memory storage subsystem for that SCM, and the L3 Cache. Figure 2-4 shows the layout of a  
2.1 GHz p5-520 SCM and associated memory.  
Figure 2-4 p5-520 POWER5+ 2.1 GHz SCM with DDR2 memory socket layout view
The storage structure for the POWER5+ processor is a distributed memory architecture that  
provides high-memory bandwidth. The processor is interfaced to eight memory slots that are  
controlled by two Synchronous Memory Interface II (SMI-II) chips, which are located in close  
physical proximity to the processor module.  
I/O connects to the p5-520 processor module using the GX+ bus. The processor module  
provides a single GX+ bus. The GX+ bus provides an interface to I/O devices through the  
RIO-2 connections.  
The theoretical maximum throughput of the L3 cache is 16 byte read, 16 byte write at a bus  
frequency of 1.05 GHz (based on a 2.1 GHz processor clock), which equates to 33600 MBps  
or 33.60 GBps. Additional throughput details are provided in Table 2-3 on page 33.  
2.2.2 The p5-520 POWER5+ dual-core module  
The 2-core p5-520 system planar contains a dual-core module (DCM) and the local memory  
storage subsystem for that DCM. The POWER5+ dual-core processor and its associated L3  
cache are packaged in the DCM.  
Figure 2-5 on page 28 shows a layout view of p5-520 DCM and associated memory.  
Figure 2-5 The p5-520 POWER5+ 2.1 GHz DCM with DDR2 memory socket layout view
The storage structure for the POWER5+ processor is a distributed memory architecture that  
provides high-memory bandwidth, although each processor can address all memory and  
sees a single shared memory resource. They are interfaced to eight memory slots, controlled  
by two SMI-II chips, which are located in close physical proximity to the processor modules.  
I/O connects to the p5-520 processor module using the GX+ bus. The processor module  
provides a single GX+ bus. The GX+ bus provides an interface to I/O devices through the  
RIO-2 connections.  
The theoretical maximum throughput of the L3 cache is 16 byte read, 16 byte write at a bus  
frequency of 1.05 GHz (based on a 2.1 GHz processor clock), which equates to 33600 MBps  
or 33.60 GBps. Additional throughput details are provided in Table 2-3 on page 33.  
2.2.3 The p5-520Q quad-core module  
The 4-core p5-520Q system planar contains a new quad-core module (QCM) and the local  
memory storage subsystem for that QCM. Two POWER5+ dual-core processors and their  
associated L3 cache are packaged in the QCM.  
Figure 2-6 shows a layout view of a p5-520Q QCM with associated memory.  
Figure 2-6 The p5-520Q POWER5+ 1.65 GHz QCM with DDR2 memory socket layout view
The storage structure for the POWER5+ processor is a distributed memory architecture that  
provides high-memory bandwidth. Each processor in the QCM can address all memory and  
see a single shared memory resource. In the QCM, one POWER5+ processor has direct  
access to eight memory slots, controlled by two SMI-II chips, which are located in close  
physical proximity to the processor modules. The other POWER5+ processor has access to  
the same memory slots through the Vertical Fabric Bus.  
I/O connects to the p5-520Q QCM using the GX+ bus. The QCM provides a single GX+ bus.  
One POWER5+ processor has direct access to the GX+ Bus using its GX+ Bus controller and  
the other uses the Vertical Fabric Bus controlled by the Fabric Bus controller. The GX+ bus  
provides an interface to I/O devices through the RIO-2 connections.  
Note that the POWER5+ processor that does not have direct access to memory is the one with direct access to the GX+ bus.
The theoretical maximum throughput of the L3 cache is 16 byte read, 16 byte write at a bus  
frequency of 825 MHz (based on a 1.65 GHz processor clock), which equates to 26400 MBps  
or 26.4 GBps per L3 cache. There are two L3 caches on the QCM, which provide a total L3  
cache bandwidth of 52800 MBps or 52.8 GBps per QCM. Additional throughput details are provided in Table 2-3 on page 33.
2.2.4 Available processor speeds  
Table 2-1 lists the available processor capacities and speeds for the p5-520 and p5-520Q  
systems.  
Table 2-1 p5-520 and p5-520Q available processor capacities and speeds

         p5-520 @   p5-520 @   p5-520 @   p5-520Q @  p5-520Q @
         1.65 GHz   1.9 GHz    2.1 GHz    1.5 GHz    1.65 GHz
1-core   Yes        No         Yes        No         No
2-core   Yes        Yes        Yes        No         No
4-core   No         No         No         Yes        Yes
To determine the processor characteristics, use one of the following commands:
lsattr -El procX
In this command, X is the number of the processor. For example, proc0 is the first processor in the system. The output from the command is similar to the following output (False, as used in this output, signifies that the value cannot be changed through an AIX 5L command interface):
frequency   1498500000     Processor Speed       False
smt_enabled true           Processor SMT enabled False
smt_threads 2              Processor SMT threads False
state       enable         Processor state       False
type        powerPC_POWER5 Processor type        False
pmcycles -m
The pmcycles command (AIX 5L) uses the performance monitor cycle counter and the processor real-time clock to measure the actual processor clock speed in MHz. The
following output is from a 4-core p5-520Q system running at 1.5 GHz with simultaneous  
multithreading enabled:  
Cpu 0 runs at 1498 MHz  
Cpu 1 runs at 1498 MHz  
Cpu 2 runs at 1498 MHz  
Cpu 3 runs at 1498 MHz  
Note: The pmcycles command is part of the bos.pmapi fileset. This fileset must be installed before you use the pmcycles command; you can check whether it is installed with the lslpp -l bos.pmapi command.
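For example, a quick check on a running AIX 5L partition might look like the following (a minimal sketch using the two commands named above):

# Verify that the bos.pmapi fileset is installed
lslpp -l bos.pmapi
# If the fileset is listed, measure the actual clock speed of each processor
pmcycles -m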
2.3 Memory subsystem  
The p5-520 and p5-520Q servers offer pluggable DDR2 DIMMs for memory. DDR2 DIMMs transfer data at twice the rate of DDR DIMMs (which in turn transfer at twice the rate of SDRAM), which enables up to four times the performance of traditional SDRAM. The system planar provides eight slots for up to eight pluggable DDR2 DIMMs.
The minimum memory for a p5-520 or p5-520Q server is 1.0 GB (2 x 512 MB) and 32 GB is  
the maximum installable memory option. Figure 2-7 shows the memory slot and location  
codes. All memory is accessed by two Synchronous Memory Interface (SMI)-II chips that are  
located between the memory and the processor. The SMI-II supports multiple data flow  
modes.  
Figure 2-7 Memory placement for the p5-520 and p5-520Q servers
2.3.1 Memory placement rules  
Table 2-2 lists the memory features that are available at the time of writing for the p5-520 and  
p5-520Q servers.  
Table 2-2 Available memory features

Feature code  Description
1930          1 GB (2 x 512 MB) DIMMs, 276-pin DDR2, 533 MHz SDRAM
1931          2 GB (2 x 1 GB) DIMMs, 276-pin DDR2, 533 MHz SDRAM
1932          4 GB (2 x 2 GB) DIMMs, 276-pin DDR2, 533 MHz SDRAM
1934          8 GB (2 x 4 GB) DIMMs, 276-pin DDR2, 533 MHz SDRAM
Memory can be plugged in pairs or quads, as required by the total memory requirement. Memory feature numbers might be mixed within a system. The DIMM slots are accessed by first removing the PCI riser card.
When additional memory is added to a system using FC 1930, an additional feature, FC  
1930, must be added to the original pair to make a quad, allowing one additional quad to be  
added to the system. Memory is installed in the first quad in the following order: J2A, J0A,  
J2C, and J0C; and for the second quad, in the order J2B, J0B, J2D, and J0D. Memory must  
be balanced across the DIMM quad slots. The Service Information label, located on the top cover of the system, provides memory DIMM slot location information.
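To review the installed DIMMs from a running AIX 5L partition, the lscfg command can be used; a minimal sketch (the exact record names in the output vary by system and firmware level):

# List vital product data for installed memory modules and their location codes
lscfg -vp | grep -p DIMM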
To determine how much memory is installed in a system, use the following command:
# lsattr -El sys0 | grep realmem
realmem 524288 Amount of usable physical memory in Kbytes False
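The realmem value is reported in kilobytes; for example, converting the value above with shell arithmetic:

# 524288 KB / 1024 = 512 MB of usable physical memory
echo $(( 524288 / 1024 ))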
Note: A quad must consist of a single feature (that is, be made of identical DIMMs). Mixed  
DIMM capacities in a quad will result in reduced RAS.  
2.3.2 OEM memory  
OEM memory is not supported or certified by IBM for use in an IBM System p5 server. If the  
server is populated with OEM memory, you could experience unexpected and unpredictable  
behavior, especially when the system is using Micro-Partitioning technology.  
All IBM memory is identified by an IBM logo and a white label that is printed with a barcode  
and an alphanumeric string, as illustrated in Figure 2-8.  
Figure 2-8 IBM memory certification label  
2.3.3 Memory throughput  
The memory subsystem throughput is based on the speed of the memory. An elastic  
interface, contained in the POWER5+ processor, buffers reads and writes to and from  
memory and the processor. There are two Synchronous Memory Interface (SMI-II) chips,  
each with a single 8-byte read and 2-byte write high speed Elastic Interface-II bus to the  
memory controller of the processor. The bus allows double reads or writes per clock cycle.  
Because the bus operates at 1056 MHz, the peak processor-to-memory throughput for read
is (8 x 2 x 1056) = 16896 MBps or 16.89 GBps. The peak processor-to-memory throughput  
for write is (2 x 2 x 1056) = 4224 MBps or 4.22 GBps, making a total of 21.12 GBps.  
The 533 MHz DDR2 memory DIMMs operate at 528 MHz through four 8-byte paths. Read
and write operations share these paths. There must be at least four DIMMs installed to  
effectively use each path. In this case, the throughput between the SMI-II and the DIMMs is  
(8 x 4 x 528) or 16.89 GBps.  
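These peak figures can be reproduced with simple shell arithmetic (a worked check of the calculations above, not a measurement tool):

# Peak processor-to-memory read: 8 bytes x 2 transfers per cycle x 1056 MHz
echo $(( 8 * 2 * 1056 ))    # 16896 MBps
# Peak processor-to-memory write: 2 bytes x 2 transfers per cycle x 1056 MHz
echo $(( 2 * 2 * 1056 ))    # 4224 MBps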
These values are maximum theoretical throughputs for comparison purposes only. Table 2-3  
provides the theoretical throughput values for different configurations.  
Table 2-3 Theoretical throughput rates

Processor speed (GHz)  Processor type  Cores   Memory (GBps)  L2 to L3 (GBps)  GX+ (GBps)
1.65                   POWER5+         1-core  21.1           26.4             4.4
1.65                   POWER5+         2-core  21.1           26.4             4.4
1.9                    POWER5+         2-core  21.1           30.4             5.1
2.1                    POWER5+         1-core  21.1           33.6             5.6
2.1                    POWER5+         2-core  21.1           33.6             5.6
1.5                    POWER5+         4-core  21.1           48               4
1.65                   POWER5+         4-core  21.1           52.8             4.4
2.4 I/O buses  
This section provides additional information related to the internal RIO-2 buses and GX+ buses.
The QCM or DCM provides a GX+ bus. In the past, the 6XX bus was the front end from the  
processor to memory, PCI Host bridge, cache, and other devices. The follow-on to the 6XX  
bus is the GX bus, connecting the processor to the I/O subsystems. Compared with the 6XX  
bus, the GX+ bus is both wider and faster and connects to the Enhanced I/O Controller.  
The Enhanced I/O Controller is a GX+ to PCI and PCI-X 2.0 Host bridge chip. It contains a  
GX+ passthru port and four PCI-X 2.0 buses. The GX+ passthru port allows other GX+ bus  
hubs to be connected into the system. Each Enhanced I/O Controller can provide four  
separate PCI-X 2.0 buses. Each PCI-X 2.0 bus is 64 bits in width and individually capable of  
running either PCI, PCI-X, or PCI-X 2.0 (DDR only).  
The p5-520 and p5-520Q systems do not have RIO-2 ports integrated on the system planar  
to connect supported external I/O subsystems. As shown in Figure 2-9 on page 34, one  
Remote I/O expansion card (FC 2888) is required to connect the supported external I/O  
subsystems. When this card is present, the Enhanced I/O Controller routes the GX+ bus to  
the external RIO-2 ports.  
Figure 2-9 p5-520 or p5-520Q GX+ Bus connection overview
Depending on the processor speed, the I/O subsystem is capable of supporting 5.6 GBps when using the 2.1 GHz processor, or 4.4 GBps when using a 1.65 GHz processor. The bus is a dual 4-byte wide bus running at a 3:1 processor-to-bus ratio.
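The GX+ figures follow directly from the 3:1 ratio; as a worked check using shell arithmetic:

# Dual 4-byte (8 bytes total) GX+ bus at one third of the processor clock
echo $(( 2100 / 3 * 8 ))    # 5600 MBps for a 2.1 GHz processor
echo $(( 1650 / 3 * 8 ))    # 4400 MBps for a 1.65 GHz processor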
2.5 Internal I/O subsystem  
PCI-X, where the X stands for extended, is an enhanced PCI bus, delivering a bandwidth of  
up to 2 GBps, running a 64-bit bus at 133 MHz or 266 MHz. PCI-X is backward compatible,  
so the systems can support existing 3.3 volt PCI adapters.  
The system planar provides six PCI-X slots and several integrated I/O devices. The PCI-X  
slot 1, slot 5, and slot 6 are 64-bit capable running at 133 MHz. The PCI-X slot 2 and slot 3  
are 32-bit capable running at 66 MHz, but PCI-X 64-bit short adapters can be used in these  
slots.  
All the PCI-X slots and the integrated I/O devices, except the PCI-X slot 4, are connected  
through two EADS-X chips that function as PCI-X to PCI-X bridges to the Enhanced I/O  
Controller. The connections of the PCI-X slots and integrated I/O devices to the PCI-X to  
PCI-X bridges are distributed to maximize overall system performance.
The first three PCI-X slots accept short PCI-X or PCI cards; the remaining PCI-X slots accept full-length cards. PCI-X slot 4 is a PCI-X DDR 266 MHz, 64-bit capable slot and is
driven by the Enhanced I/O Controller directly. The dual 10/100/1000 Mbps Ethernet adapter  
and the Dual Channel SCSI Ultra320 adapter are some of the integrated devices on the  
system planar.  
The PCI-X slots in the p5-520 and p5-520Q system support hot-plug and Extended Error  
Handling (EEH). In the unlikely event of a problem, EEH-enabled adapters respond to a  
special data packet generated from the affected PCI-X slot hardware by calling system  
firmware, which will examine the affected bus, allow the device driver to reset it, and continue  
without a system reboot.  
2.6 64-bit and 32-bit adapters  
IBM offers 64-bit adapter options for the p5-520 and p5-520Q, as well as 32-bit adapters.  
Higher-speed adapters use 64-bit slots because they can transfer 64 bits of data for each  
data transfer phase. Generally, 32-bit adapters can function in 64-bit PCI-X slots; however,  
some 64-bit adapters cannot be used in 32-bit slots. For a full list of the adapters that are  
supported on the systems and for important information regarding adapter placement, see  
the IBM Systems Hardware Information Center at:  
The internal PCI-X slots support a wide range of PCI-X I/O adapters to handle your I/O  
requirements.  
2.6.1 LAN adapters  
To connect a p5-520 or p5-520Q to a local area network (LAN), you can use the dual port  
internal 10/100/1000 Mbps RJ-45 Ethernet controller that is integrated on the system planar.  
Table 2-4 lists the additional LAN adapters that are available for an initial system order at the time of writing. IBM supports an installation with NIM using Ethernet and token-ring adapters (CHRP¹ is the platform type). Token-ring is not allowed as the initial order.
Table 2-4 Available LAN adapters

Feature code  Adapter description                Type    Slot      Size   Max
1954          4-port 10/100/1000 Ethernet        Copper  32 or 64  Short  4
1978          Gigabit Ethernet                   Fibre   32 or 64  Short  6
1979          Gigabit Ethernet                   Copper  32 or 64  Short  6
5721          10 Gigabit Ethernet - short reach  Fibre   32 or 64  Short  3
5722          10 Gigabit Ethernet - long reach   Fibre   32 or 64  Short  3
1983          2-port Gigabit Ethernet            Copper  32 or 64  Short  6
1984          2-port Gigabit Ethernet            Fibre   32 or 64  Short  6
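To see which Ethernet adapters a running AIX 5L partition has configured, a minimal sketch (device names such as ent0 depend on the configuration):

# List all configured adapters and filter for Ethernet devices
lsdev -Cc adapter | grep ent
# Show the attributes of the first Ethernet adapter
lsattr -El ent0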
2.6.2 SCSI adapters  
To connect to external SCSI devices, the adapters listed in Table 2-5 are available, at the time of writing, to be configured with an initial order.
Table 2-5 Available SCSI adapters

Feature code  Adapter description              Slot  Size   Max
1912          Dual Channel Ultra320 SCSI       64    Short  6
1913          Dual Channel Ultra320 SCSI RAID  64    Long   3
Note: Previous SCSI adapters are also supported for use in the p5-520 and p5-520Q but  
cannot be part of an initial order configuration. If you want to connect existing external  
SCSI devices, contact your IBM service representative.  
¹ CHRP stands for Common Hardware Reference Platform, a specification for PowerPC-based systems that can run multiple operating systems.
You also have the option to make the internal Ultra320 SCSI channel externally accessible on  
the rear side of the system by installing FC 4275. No additional SCSI adapter is required in  
this case. If FC 4275 is installed, a second 4-pack disk enclosure (FC 6574 or FC 6594)  
cannot be installed, which limits the maximum number of internal disks to four. Slot 5 cannot  
be used when FC 4275 is installed. For more information about the internal SCSI system, see  
2.6.3 Integrated RAID options  
The p5-520 and p5-520Q can be configured with the optional SCSI RAID daughter card  
(FC 1907) that plugs directly on the system board or with a Dual Channel Ultra320 SCSI  
RAID adapter (FC 1913) to drive one 4-pack disk enclosure.  
RAID implementation requires a minimum of three disk drives to form a RAID set.  
Important: RAID capacity limitation. There are limits to the amount of disk drive capacity allowed in a single RAID array. Using the 32-bit AIX 5L kernel, there is a capacity limitation of 1 TB per RAID array. Using the 64-bit kernel, there is a capacity limitation of 2 TB per RAID array. For RAID adapters and RAID enablement cards, this limitation is enforced by AIX 5L when RAID arrays are created using the PCI-X SCSI Disk Array Manager.
These are the different internal RAID options that you can consider:
Install FC 1907 and up to 4 disk drives in the default 4-pack disk enclosure. This allows  
RAID capabilities within a single 4-pack.  
Install FC 1907 and a second 4-pack disk enclosure (FC 6574). This allows RAID  
capabilities across two 4-packs.  
Install FC 1907 (or later) and the Ultra320 SCSI 4-Pack Enclosure for Disk Mirroring  
(FC 6594). Install the PCI-X Dual Channel Ultra320 SCSI RAID adapter (FC 1913) and  
the SCSI cable (FC 4267), which connects the PCI-X adapter to the optional 4-pack disk  
enclosure. This RAID configuration provides increased reliability over the first and second options.
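RAID arrays themselves are created and managed through the PCI-X SCSI Disk Array Manager mentioned in the Important note above; a hedged sketch (the SMIT fastpath name pdam is an assumption; check the menus available at your AIX 5L level):

# List physical volumes to identify candidate disks for the array
lspv
# Open the PCI-X SCSI Disk Array Manager menus (fastpath assumed)
smit pdam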
Note: Because the p5-520 and p5-520Q have up to eight disk drive slots, if you are  
upgrading, you must plan appropriately to ensure the correct handling of your RAID arrays.  
2.6.4 iSCSI  
iSCSI is an open, standards-based approach by which SCSI information is encapsulated  
using the TCP/IP protocol to allow its transport over IP networks. It allows transfer of data  
between storage and servers in block I/O format (as defined by the iSCSI protocol) and thus
enables the creation of IP SANs. iSCSI allows an existing network to transfer SCSI  
commands and data with full location independence and defines the rules and processes to  
accomplish the communication. The iSCSI protocol is defined in iSCSI IETF draft-20. For  
more information about this standard, see:  
Although iSCSI can be, by design, supported over any physical media that supports TCP/IP  
as a transport, today's implementations are only on Gigabit Ethernet. At the physical and link  
level layers, iSCSI supports Gigabit Ethernet and its frames so that systems supporting iSCSI  
can be directly connected to standard Gigabit Ethernet switches and IP routers. iSCSI also  
enables the access to block-level storage that resides on Fibre Channel SANs over an IP  
network using iSCSI-to-Fibre Channel gateways such as storage routers and switches.  
The iSCSI protocol is implemented on top of the physical and data-link layers and presents to  
the operating system a standard SCSI Access Method command set. It supports SCSI-3  
commands and reliable delivery over IP networks. The iSCSI protocol runs on the host  
initiator and the receiving target device. It can either be optimized in hardware for better  
performance on an iSCSI host bus adapter (such as FC 1986 and FC 1987 supported in IBM  
System p5 servers) or run in software over a standard Gigabit Ethernet network interface  
card. IBM System p5 systems support iSCSI in the following two modes:  
Hardware    Using iSCSI adapters (see “IBM iSCSI adapters” on page 37).
Software    Supported on standard Gigabit adapters; additional software (see “IBM iSCSI software Host Support Kit” on page 38) must be installed. The main processor is utilized for processing related to the iSCSI protocol.
Initial iSCSI implementations are targeted at small to medium-sized businesses and  
departments or branch offices of larger enterprises that have not deployed Fibre Channel  
SANs. iSCSI is an affordable way to create IP SANs from a number of local or remote storage  
devices. If Fibre Channel is present, which is typical in a data center, it can be accessed by  
the iSCSI SANs (and vice versa) via iSCSI-to-Fibre Channel storage routers and switches.  
iSCSI solutions always involve the following software and hardware components:  
Initiators  These are the device drivers and adapters that reside on the client. They encapsulate SCSI commands and route them over the IP network to the target device.
Targets     The target software receives the encapsulated SCSI commands over the IP network. The software can also provide configuration support and storage-management support. The underlying target hardware can be a storage appliance that contains embedded storage, and it can also be a gateway or bridge product that contains no internal storage of its own.
IBM iSCSI adapters  
New iSCSI adapters in IBM System p5 systems provide the advantage of increased  
bandwidth through the hardware support of the iSCSI protocol. The 1 Gigabit iSCSI TOE  
(TCP/IP Offload Engine) PCI-X adapters support hardware encapsulation of SCSI commands  
and data into TCP and transports them over the Ethernet using IP packets. The adapter  
operates as an iSCSI TOE. This offload function eliminates host protocol processing and  
reduces CPU interrupts. The adapter uses a Small form factor LC type fiber optic connector  
or a copper RJ45 connector.  
Table 2-6 provides the orderable iSCSI adapters.
Table 2-6 Available iSCSI adapters

Feature code  Description                                       Slot  Size   Max
1986          Gigabit iSCSI TOE PCI-X on copper media adapter   64    Short  3
1987          Gigabit iSCSI TOE PCI-X on optical media adapter  64    Short  3
IBM iSCSI software Host Support Kit  
The iSCSI protocol can also be used over standard Gigabit Ethernet adapters. To utilize this  
approach, download the appropriate iSCSI Host Support Kit for your operating system from  
the IBM NAS support Web site at:  
The iSCSI Host Support Kit on AIX 5L and Linux acts as a software iSCSI initiator and allows  
you to access iSCSI target storage devices using standard Gigabit Ethernet network  
adapters. To ensure the best performance, enable the TCP Large Send, TCP send and  
receive flow control, and Jumbo Frame features of the Gigabit Ethernet Adapter and the  
iSCSI Target. Tune network options and interface parameters for maximum iSCSI I/O  
throughput on the operating system.  
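As an illustration, such tuning might look like the following on AIX 5L (a hedged sketch; the adapter attribute names and buffer values are examples and should be verified with lsattr -El ent0 for your adapter):

# Enable jumbo frames and TCP large send offload on the Gigabit adapter
chdev -l ent0 -a jumbo_frames=yes
chdev -l ent0 -a large_send=yes
# Raise TCP buffer sizes for iSCSI throughput (values are examples only)
no -o tcp_sendspace=262144 -o tcp_recvspace=262144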
IBM System Storage N series  
The combination of IBM System p5 and IBM System Storage™ N series as the first of a whole new generation of iSCSI-enabled storage products provides an end-to-end set of solutions. Currently, the System Storage N series features three models: N3700, N5200, and N5500.
All models provide:  
Support for entry-level and midrange clients requiring Network Attached Storage (NAS) or  
Internet Small Computer System Interface (iSCSI) functionality  
Support for Network File System (NFS), Common Internet File System (CIFS), and iSCSI  
protocols  
Data ONTAP software (at no charge), with many additional functions such as data movement, consistent snapshots, and NDMP server protocol, some available through optional licensed functions
Enhanced reliability with optional clustered (2-node) failover support  
2.6.5 Fibre Channel adapter  
The p5-520 and p5-520Q servers support direct or SAN connection to devices using Fibre  
Channel adapters. Single-port Fibre Channel adapters are available in 2 Gbps and 4 Gbps  
speeds. A dual-port 4 Gbps Fibre Channel adapter is also available. Table 2-7 provides a  
summary of the available Fibre Channel adapters.  
All of these adapters have LC connectors. If you are attaching a device or switch with an SC  
type fibre connector, then an LC-SC 50 Micron Fiber Converter Cable (FC 2456) or an LC-SC  
62.5 Micron Fiber Converter Cable (FC 2459) is required.  
Supported distances between the server and the attached device or switch are as follows: up to 500 meters at a 1 Gbps data rate, up to 300 meters at 2 Gbps, and up to 150 meters at 4 Gbps. When these adapters are used with IBM supported Fibre Channel storage switches that support long-wave optics, distances of up to 10 kilometers are supported at 1 Gbps, 2 Gbps, and 4 Gbps data rates.
Table 2-7 Available Fibre Channel adapters

Feature code  Description                                                 Slot  Size   Max
1905          4 Gigabit single-port Fibre Channel PCI-X 2.0 Adapter (LC)  64    Short  6
1910          4 Gigabit dual-port Fibre Channel PCI-X 2.0 Adapter (LC)    64    Short  6
1977          2 Gigabit Fibre Channel PCI-X Adapter (LC)                  64    Short  6
2.6.6 Graphic accelerators  
The p5-520 and p5-520Q support up to four enhanced POWER GXT135P (FC 1980) 2D  
graphic accelerators. The POWER GXT135P is a low-priced 2D graphics accelerator for  
IBM System p5 servers. This adapter supports both analog and digital monitors and is  
supported for System Management Services (SMS), firmware, and other functions, as well as  
when AIX 5L or Linux starts an X11-based graphic user interface (GUI).  
2.6.7 InfiniBand Host Channel adapter  
The p5-520 and p5-520Q support the RIO-2 expansion cards (FC 2888) to connect the  
supported additional I/O subsystems. The server also supports one GX Dual-port 4x  
InfiniBand Host Channel Adapter (FC 1812) that enables the attachment of the Topspin  
Server Switch models 120 and 270. Only a single GX Dual-port 4x InfiniBand HCA or RIO-2  
expansion card can plug into the system planar, using the GX slot, at a time. Connection to  
the Topspin Server Switches is accomplished by using the 4x IB Cables.  
Topspin Server Switch models 120 and 270  
Switches are the fundamental components of an InfiniBand fabric. An IBM System p5 server proposal might also include the Topspin Server Switch models 120 and 270 in an initial system order.
The Topspin Server Switch models 120 and 270 are programmable switching platforms that  
consist of a switched multiple-terabit interconnect and an intelligent control architecture. The  
high-bandwidth, low-latency interconnection is extremely adaptable. The switches enable an  
outstanding level of application scaling, rapid deployment, and resource consolidation.  
For more information about Topspin Server Switch, see:  
2.6.8 Asynchronous PCI-X adapters  
Asynchronous PCI-X adapters provide connection of asynchronous EIA-232 or RS-422  
devices. If you have a cluster configuration or high-availability configuration and plan to  
connect the IBM System p5 servers using a serial connection, the use of the two default ports  
is not supported. You should use one of the features listed in Table 2-8.  
Table 2-8 Asynchronous PCI-X adapters

Feature code  Description
2943          8-Port Asynchronous Adapter EIA-232/RS-422
5723          2-Port Asynchronous EIA-232 PCI Adapter
In many cases, the FC 5723 asynchronous adapter is configured to supply a backup HACMP heartbeat. In these cases, a serial cable (FC 3927 or FC 3928) must also be configured. Both of these serial cables and the FC 5723 adapter have 9-pin connectors.
2.6.9 PCI-X Cryptographic Coprocessor  
The PCI-X Cryptographic Coprocessor (FIPS 4) (FC 4764) for selected System p servers  
provides both cryptographic coprocessor and secure-key cryptographic accelerator functions  
in a single PCI-X card. The coprocessor functions are targeted to banking and finance  
applications. Financial PIN processing and credit card functions are provided. EMV is a standard for integrated chip-based credit cards. The secure-key accelerator functions are
targeted to improving the performance of Secure Sockets Layer (SSL) transactions. The  
FC 4764 provides the security and performance required to support On Demand Business  
and emerging digital signature application.  
The FC 4764 provides secure storage of cryptographic keys in a
tamper resistant hardware security module (HSM), which is designed to meet FIPS 140  
security requirements. FIPS 140 is a U.S. Government National Institute of Standards &  
Technology (NIST)-administered standard and certification program for cryptographic  
modules. The firmware for the FC 4764 is available on a separately ordered and distributed  
CD. This firmware is an LPO product: 5733-CY1 Cryptographic Device Manager. The FC  
4764 also requires LPP 5722-AC3 Cryptographic Access Provider to enable data encryption.  
Note: This feature has country-specific usage. Refer to the IBM representatives in your  
country for availability or restrictions.  
2.6.10 Additional support for PCI-X adapters you own  
The major PCI-X adapters that you can configure in a p5-520 or p5-520Q when you build an initial configuration order are described in 2.6.1, “LAN adapters” on page 35 through 2.6.8, “Asynchronous PCI-X adapters” on page 39. The list of all the supported PCI-X adapters, with the related support for additional external devices, is more extensive.
If you would like to use PCI-X adapters you already own, contact your IBM service  
representative to verify whether those adapters are supported.  
2.6.11 Internal system ports  
The system ports S1 and S2, at the rear of the system, are only available if the system is not  
managed using a Hardware Management Console (HMC). In this case, the S1 and S2 ports  
support the attachment of a serial console and a modem and are of limited function.  
If an HMC is connected, a virtual serial console is provided by the HMC (logical device vsa0  
under AIX 5L), and you can also connect a modem to the HMC. The S1 and S2 ports are not  
usable in this case.  
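From AIX 5L, you can confirm which device serves the console; a minimal sketch:

# Show the virtual serial adapter on an HMC-managed system
lsdev -C | grep vsa
# Display the device currently assigned as the system console
lscons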
If you need serial port function, optional PCI adapters are available. For more information,  
2.6.12 Ethernet ports  
The two built-in Ethernet ports provide 10/100/1000 Mbps connectivity over CAT-5 cable for  
up to 100 meters. Table 2-9 lists the attributes of the LEDs that are visible on the side of the  
jack.  
Table 2-9 Ethernet LED descriptions

LED       Light  Description
Link      Off    No link; could indicate a bad cable, not selected, or configuration error.
Link      Green  Connection established.
Activity  On     Data activity.
Activity  Off    Idle.
2.7 Internal storage  
There is one dual channel Ultra320 SCSI controller that is managed by the EADS-X chips,  
integrated into the system planar, and that is used to drive the internal disk drives. The eight  
internal drives plug into the disk drive backplane, which has two separate SCSI buses with  
four disk drives per bus.  
The internal disk drive can be used in two different modes based on whether the SCSI RAID  
Enablement Card (FC 1976) is installed (see 2.6.3, “Integrated RAID options” on page 36).  
The p5-520 and p5-520Q support two 4-pack disk drive enclosures using a backplane that is designed for hot-pluggable disk drives. The disk drive backplane docks directly to the system planar.
The virtual SCSI Enclosure Services (VSES) hot-plug control functions are provided by the  
Ultra320 SCSI controllers.  
2.7.1 Internal media devices  
The p5-520 and p5-520Q provide two slim-line media bays for optional DVD-ROM (FC 1994)  
and optional DVD-RAM (FC 1993) and one media bay for a tape drive. Table 2-10 shows all  
additional media devices for the systems.  
Table 2-10 Available optical and tape drives

Feature code  Description
1993          4.7 GB IDE Slimline DVD-RAM drive
1994          IDE Slimline DVD-ROM drive
1892          VXA-320 160/320 GB Internal Tape Drive
1991          36/72 GB 4 mm Internal Tape Drive
1992          IBM 80/160 GB Internal Tape Drive with VXA Technology
1997          200/400 GB Half High Ultrium 2 Tape Drive
2.7.2 Internal hot-swappable SCSI disks  
The p5-520 and p5-520Q can have up to eight hot-swappable disk drives plugged in the two  
4-pack disk drive backplanes. The hot-swappable process is controlled by the SCSI  
enclosure service (SES), which is located in the 4-pack disk drives backplane (AIX 5L  
assigns the name ses0 to the first 4-pack, and ses1 to the second, if present). The two hot-swappable 4-pack disk drive backplanes can accommodate the devices listed in Table 2-11.
Table 2-11 Available hot-swappable disk drives

Feature code  Description
1968          73.4 GB ULTRA320 10 K rpm SCSI hot-swappable disk drive
1969          146.8 GB ULTRA320 10 K rpm SCSI hot-swappable disk drive
1970          36.4 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
1971          73.4 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
1972          146.8 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
1973          300 GB ULTRA320 10 K rpm SCSI hot-swappable disk drive
At the time of writing, if a new order is placed with two 4-pack DASD backplanes (FC 6574)  
and more than one disk, the system configuration shipped from manufacturing balances the  
total number of SCSI disks between the two 4-pack SCSI backplanes. This is for  
manufacturing test purposes and not because of any limitation. Having the disks balanced  
between the two 4-pack DASD backplanes allows the manufacturing process to  
systematically test the SCSI paths and devices related to them.  
Prior to the hot-swap of a disk in the hot-swap capable bay, all necessary operating system  
actions must be undertaken to ensure that the disk is capable of being deconfigured. After the  
disk drive has been deconfigured, the SCSI enclosure device will power off the slot, enabling  
safe removal of the disk. You should ensure that the appropriate planning has been given to  
any operating system-related disk layout, such as the AIX 5L Logical Volume Manager, when  
using disk hot-swap capabilities. For more information, see Problem Solving and  
Troubleshooting in AIX 5L, SG24-5496.  
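A hedged sketch of the operating system side of this procedure, assuming hdisk3 is the disk to be replaced and has already been removed from its volume group:

# Deconfigure the disk so that the SES can power off the slot
rmdev -l hdisk3
# After physically replacing the disk, configure the new device
cfgmgr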
Note: After you have deconfigured the disk, we recommend that you follow this procedure  
when removing a hot-swappable disk:  
1. Release the tray handle on the disk.  
2. Pull out the disk assembly a little bit from the original position.  
3. Wait up to 20 seconds until the internal disk stops spinning.  
4. Now, you can safely remove the disk from the 4-pack DASD backplane.  
After the SCSI disk hot-swap procedure, you can expect to find SCSI_ERR10 logged in the  
AIX 5L error log, with the second word of the sense data equal to 0017. This error is  
generated from a SCSI bus reset that is issued by the SES to reset all processes when a  
drive is inserted, and this error is not an issue.  
Hot-swappable disks and Linux  
Hot-swappable disk drives on IBM System p5 systems are supported with SUSE Linux  
Enterprise Server 9 for POWER, or later, and Red Hat Enterprise Linux AS for POWER  
Version 3, or later.  
2.8 External I/O subsystem  
This section describes the external I/O subsystem, the 7311 Model D20 I/O drawer, which is the only drawer supported on the p5-520 and p5-520Q systems.
2.8.1 I/O drawers  
As described in Chapter 1, “General description” on page 1, the p5-520 or p5-520Q systems  
have six internal PCI-X slots, which is enough in many cases. If more PCI-X slots are needed  
to dedicate more adapters to a partition or to increase the bandwidth of network adapters, up  
to four 7311 Model D20 I/O drawers can be added to the p5-520 or p5-520Q systems.  
The p5-520 or p5-520Q systems have a standard RIO-2 bus to connect the internal PCI-X  
slots through the PCI-X to PCI-X bridges and support up to four external I/O drawers.  
An optional RIO-2 adapter (FC 2888) is required for external RIO-2 devices, such as I/O  
drawers.  
The 7311 Model D20 I/O drawer must have the RIO-2 loop adapter (FC 6417) to be  
connected to the p5-520 or p5-520Q systems. The PCI-X host bridge inside the I/O drawer  
provides two primary 64-bit PCI-X buses running at 133 MHz. Therefore, a maximum  
bandwidth of 1 GBps is provided by each bus. To avoid overloading an I/O drawer, you  
should follow the recommendation in the IBM System p5 Hardware Information Center at:  
Figure 2-10 shows a conceptual diagram of the 7311 Model D20 I/O drawer subsystem.
Figure 2-10 Conceptual diagram of the 7311-D20 I/O drawer  
7311 Model D20 internal SCSI cabling  
A 7311 Model D20 supports hot-swappable disks using two 6-pack disk bays for a total of 12 disks. Additionally, SCSI cables (FC 4257) are used to connect a SCSI adapter (which can have various features) in slot 7 to each of the 6-packs, or two SCSI adapters, one in slot 4 and one in slot 7 (Figure 2-11).
Figure 2-11 7311 Model D20 internal SCSI cabling  
Note: Any 6-packs and the related SCSI adapter can be assigned to a partition. If one  
SCSI adapter is connected to both 6-packs, both 6-packs can be assigned only to the  
same partition. When the server is configured with the Advanced POWER Virtualization  
hardware feature and the Virtual I/O Server is used for virtual SCSI, the disks can be  
shared between partitions.  
2.8.2 7311 I/O drawer RIO-2 cabling  
As described in 2.8, “External I/O subsystem” on page 43, you can connect up to four I/O  
drawers in the same loop to the p5-520 or p5-520Q system. Each RIO-2 port can operate at  
1 GHz in bidirectional mode and is capable of passing data in each direction on each cycle of  
the port. Therefore, the maximum data rate is 4 GBps per I/O drawer in double barrel mode.  
There is one default primary RIO-2 loop in any p5-520 or p5-520Q system. This feature  
provides two Remote I/O ports for attaching up to four 7311 Model D20 I/O drawers to the  
system in a single loop.  
Figure 2-12 shows how you could connect four I/O drawers to one system.
Figure 2-12 RIO-2 connections  
The RIO-2 cables used have different lengths to satisfy the different connection  
requirements:  
Remote I/O cable, 1.2 m (FC 3146)  
Remote I/O cable, 1.75 m (FC 3156)  
Remote I/O cable, 2.5 m (FC 3168)  
Remote I/O cable, 3.5 m (FC 3147)  
Remote I/O cable, 10 m (FC 3148)  
2.8.3 7311 Model D20 I/O drawer SPCN cabling  
The SPCN is used to control and monitor the status of power and cooling within the I/O drawer. The SPCN is a loop: the cabling starts from SPCN port 0 on the p5-520 or p5-520Q system and goes to SPCN port 0 on the first I/O drawer. The loop is closed by connecting SPCN port 1 of the I/O drawer back to port 1 of the p5-520 or p5-520Q system. If you have more than one I/O drawer, you continue the loop by connecting the following drawer (or drawers) with the same rule.
Figure 2-13 shows SPCN cabling examples.
Figure 2-13 SPCN cabling examples  
There are different SPCN cables to satisfy different length requirements:  
SPCN cable drawer to drawer, 2 m (FC 6001)  
SPCN cable drawer to drawer, 3 m (FC 6006)  
SPCN cable rack to rack, 6 m (FC 6008)  
SPCN cable rack to rack, 15 m (FC 6007)  
SPCN cable rack to rack, 30 m (FC 6029)
2.9 External disk subsystems  
The p5-520 and p5-520Q have internal hot-swappable drives. When the AIX 5L operating system is installed on an IBM System p5 server, the internal disks are usually used for the AIX 5L rootvg volume group and paging space. Specific client requirements can be satisfied with the several external disk options that the system supports.
2.9.1 IBM TotalStorage EXP24 Expandable Storage  
The IBM TotalStorage® EXP24 Expandable Storage disk enclosure, Model D24 or T24, can be purchased together with the p5-520 or p5-520Q and provides low-cost Ultra320 (LVD) SCSI disk storage. This disk storage enclosure device provides more than 7 TB of disk storage in a 4U rack-mount (Model D24) or compact deskside (Model T24) unit. Whether you need high availability storage solutions or simply high capacity storage for a single server installation, the unit provides a cost-effective solution. It provides 24 hot-swappable disk bays, 12 accessible from the front and 12 from the rear. Disk options that can be accommodated in any of the four 6-pack disk drive enclosures are 73.4 GB, 146.8 GB, or 300 GB 10K rpm drives or 36.4 GB, 73.4 GB, or 146.8 GB 15K rpm drives. Each of the four 6-pack disk drive enclosures can be attached independently to an Ultra320 SCSI or Ultra320 SCSI RAID adapter. For high availability configurations, a dual bus repeater card (FC 5742) allows each 6-pack to be attached to two SCSI adapters, installed in one or multiple servers or logical partitions. Optionally, the two front or two rear 6-packs can be connected together to form a single Ultra320 SCSI bus of 12 drives.
2.9.2 IBM System Storage N3000 and N5000  
The IBM System Storage N3000 and N5000 line of iSCSI-enabled storage offerings provides  
the flexibility for implementing a Storage Area Network over an Ethernet network. The N3000  
supports up to 16.8 TB of physical storage and the N5000 supports up to 84 TB of physical  
disk. Additional information about IBM iSCSI-based storage systems is available at:  
2.9.3 IBM TotalStorage DS4000 Series  
The IBM System Storage DS4000™ line of Fibre Channel-enabled storage offerings provides a wide range of storage solutions for your Storage Area Network (SAN). The DS4000 Storage server family consists of the following models: DS4100, DS4300, DS4500, and DS4800. The Model DS4100 Express model is the smallest and scales up to 44.8 TB; the Model DS4800 is the largest and scales up to 89.6 TB of disk storage at the time of this writing. Model DS4300 provides up to 16 bootable partitions, or 64 bootable partitions if the turbo option is selected, that are attached with a Fibre Channel Adapter. Model DS4500 provides up to 64 bootable partitions. Model DS4800 provides 4 Gbps switched interfaces. In most cases, both the IBM TotalStorage DS4000 family and the IBM System p5 servers are connected to a storage area network. If you only need space for the rootvg, the Model DS4100 is a good solution.
For support of additional features and for further information about the IBM TotalStorage  
DS4000 Storage Server family, refer to the following Web site:  
2.9.4 IBM TotalStorage DS6000 and DS8000 Series  
The IBM TotalStorage Models DS6000™ and DS8000™ are the premier high-end storage solutions for use in storage area networks and use a POWER technology-based design to provide fast and efficient serving of data. The IBM TotalStorage DS6000 provides enterprise-class capabilities in a space-efficient modular package. It scales to 67.2 TB of physical storage capacity by adding storage expansion enclosures. The Model DS8000 series is the flagship of the IBM TotalStorage DS family. The DS8000 scales to 192 TB; however, the system architecture is designed to scale to over one petabyte. The Model DS6000 and DS8000 systems can also be used to provide disk space for booting logical partitions (LPARs) or partitions using Micro-Partitioning technology. DS6000 and DS8000 and the IBM System p5 servers are usually connected together to a storage area network.
For further information about the DS6000 and DS8000 series, refer to the following Web site:
2.10 Logical partitioning  
Dynamic logical partitions (LPARs) and virtualization increase utilization of system resources  
and add a new level of configuration possibilities. This section provides details and  
configuration specifications about this topic. The virtualization discussion includes  
virtualization enabling technologies that are standard on the system, such as the POWER  
Hypervisor™, and optional ones, such as the Advanced POWER Virtualization feature.  
2.10.1 Dynamic logical partitioning  
Logical partitioning (LPAR) was introduced with the POWER4 processor-based product line  
and the AIX 5L Version 5.1 operating system. This technology offered the capability to divide  
a pSeries system into separate logical systems, allowing each LPAR to run an operating  
environment on dedicated attached devices, such as processors, memory, and I/O  
components.  
Later, dynamic LPAR increased the flexibility, allowing selected system resources, such as  
processors, memory, and I/O components, to be added and deleted from dedicated partitions  
while they are executing. AIX 5L Version 5.2, with all the necessary enhancements to enable  
dynamic LPAR, was introduced in 2002. The ability to reconfigure dynamic LPARs  
encourages system administrators to dynamically redefine all available system resources to  
reach the optimum capacity for each defined dynamic LPAR.  
Operating system support for dynamic LPAR  
Table 2-12 lists AIX 5L and Linux support for dynamic LPAR capabilities.  
Table 2-12 Operating system supported function

Function      AIX 5L        AIX 5L        Linux     Linux        Linux
              Version 5.2   Version 5.3   SLES 9    RHEL AS 3    RHEL AS 4
Dynamic LPAR capabilities (add, remove, and move operations)
Processor     Y             Y             Y         N            Y
Memory        Y             Y             N         N            N
I/O slot      Y             Y             Y         N            Y
2.11 Virtualization  
With the introduction of the POWER5 processor, partitioning technology moved from a  
dedicated resource allocation model to a virtualized shared resource model. This section  
briefly discusses the key components of virtualization on IBM System p servers.  
For more information about virtualization, see the following Web site:  
You can also consult the following IBM Redbooks:  
Advanced POWER Virtualization on IBM System p5, SG24-7940  
Advanced POWER Virtualization on IBM eServer p5 Servers: Architecture and
Performance Considerations, SG24-5768  
2.11.1 POWER Hypervisor  
Combined with features designed into the POWER5 and POWER5+ processors, the POWER Hypervisor delivers functions that enable other system technologies, including Micro-Partitioning technology, virtualized processors, an IEEE VLAN-compatible virtual switch, virtual SCSI adapters, and virtual consoles. The POWER Hypervisor is a basic component of system firmware that is always active, regardless of the system configuration.
The POWER Hypervisor provides the following functions:
Provides an abstraction between the physical hardware resources and the logical partitions using them.
Enforces partition integrity by providing a security layer between logical partitions.
Controls the dispatch of virtual processors to physical processors. (For more information, see 2.12.2, “Logical, virtual, and physical processor mapping” on page 52.)
Saves and restores all processor state information during logical processor context switch.
Controls hardware I/O interrupt management facilities for logical partitions.
Provides virtual LAN channels between logical partitions that help to reduce the need for physical Ethernet adapters for inter-partition communication.
The POWER Hypervisor is always active when the server is running, whether or not the server is partitioned, and even when the server is not connected to the HMC. It requires
memory to support the logical partitions on the server. The amount of memory required by the  
POWER Hypervisor firmware varies according to several factors. Factors influencing the  
POWER Hypervisor memory requirements include the following:  
Number of logical partitions  
Partition environments of the logical partitions  
Number of physical and virtual I/O devices used by the logical partitions  
Maximum memory values given to the logical partitions  
Note: Use the System Planning Tool to estimate the memory requirements of the POWER  
Hypervisor.  
In AIX 5L V5.3, the lparstat command using the -h and -H flags displays the POWER Hypervisor statistical data. Using the -h flag adds summary POWER Hypervisor statistics to the default lparstat output.
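For example, the following invocations (a brief sketch; the interval and count arguments follow standard lparstat usage) report summary and detailed Hypervisor statistics five times at one-second intervals:

lparstat -h 1 5
lparstat -H 1 5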
The minimum amount of physical memory for each partition is 128 MB, but in most cases the actual requirements and recommendations are between 256 MB and 512 MB for AIX 5L, Red Hat Linux, and Novell SUSE Linux. Physical memory is assigned to partitions in increments of a Logical Memory Block (LMB). For POWER5+ processor-based systems, the LMB size can be adjusted from 16 MB to 256 MB.
The POWER Hypervisor provides the following types of virtual I/O adapters:  
Virtual SCSI  
Virtual Ethernet  
Virtual (TTY) console  
Virtual SCSI  
The POWER Hypervisor provides a virtual SCSI mechanism for virtualization of storage  
devices (a special logical partition to install the Virtual I/O Server is required to use this  
feature, see 2.12.3, “Virtual I/O Server” on page 54). The storage virtualization is  
accomplished using two paired adapters: a virtual SCSI server adapter and a virtual SCSI  
client adapter. Only the Virtual I/O Server partition can define virtual SCSI server adapters; other partitions are client partitions. The Virtual I/O Server is available with the optional Advanced POWER Virtualization feature (FC 7940).
Virtual Ethernet  
The POWER Hypervisor provides a virtual Ethernet switch function that allows partitions on the same server to use a fast and secure form of communication without any need for physical interconnection. The virtual Ethernet allows a transmission speed in the range of 1 to 3 Gbps, depending on the maximum transmission unit (MTU) size and CPU entitlement. Virtual Ethernet requires a system with either AIX 5L Version 5.3 or an appropriate level of Linux supporting virtual Ethernet devices (see 2.14, “Operating system support” on page 64). The virtual Ethernet is part of the base system configuration.
Virtual Ethernet has the following major features:  
The virtual Ethernet adapters can be used for both IPv4 and IPv6 communication and can  
transmit packets with a size up to 65408 bytes. Therefore, the maximum MTU for the  
corresponding interface can be up to 65394 (65390 if VLAN tagging is used).  
The POWER Hypervisor presents itself to partitions as a virtual 802.1Q-compliant switch. The maximum number of VLANs is 4096. You can configure virtual Ethernet adapters as either untagged or tagged (following the IEEE 802.1Q VLAN standard).
A partition can support up to 256 virtual Ethernet adapters. Besides a default port VLAN ID, the
number of additional VLAN ID values that can be assigned per virtual Ethernet adapter is  
20, which implies that each virtual Ethernet adapter can be used to access 21 virtual  
networks.  
Each partition operating system detects the virtual local area network (VLAN) switch as an  
Ethernet adapter without the physical link properties and asynchronous data transmit  
operations.  
Any virtual Ethernet can also have connectivity outside of the server if layer-2 bridging to a physical Ethernet adapter is set up in one Virtual I/O Server partition (see 2.12.3, “Virtual I/O Server” on page 54 for more details about shared Ethernet).
Note: Virtual Ethernet is based on the IEEE 802.1Q VLAN standard. No physical I/O  
adapter is required when creating a VLAN connection between partitions, and no access to  
an outside network is required.  
Virtual (TTY) console  
Each partition needs to have access to a system console. Tasks such as operating system  
installation, network setup, and some problem analysis activities require a dedicated system  
console. The POWER Hypervisor provides the virtual console using a virtual TTY or serial  
adapter and a set of Hypervisor calls to operate on them. Virtual TTY does not require the  
purchase of any additional features or software such as the Advanced POWER Virtualization  
feature.  
Depending on the system configuration, the operating system console can be provided by the  
Hardware Management Console virtual TTY, IVM virtual TTY, or from a terminal emulator  
connected to a system port.  
2.12 Advanced POWER Virtualization feature  
The Advanced POWER Virtualization feature (FC 7940) is an optional, additional cost feature.  
This feature enables the implementation of more fine-grained virtual partitions on IBM  
System p5 servers.  
The Advanced POWER Virtualization feature includes:  
Firmware enablement for Micro-Partitioning technology.  
Support for up to 10 partitions per processor using 1/100 of the processor granularity.  
Minimum CPU requirement per partition is 1/10. All processors are enabled for  
micro-partitions (the number of processors on the system equals the number of Advanced  
POWER Virtualization features ordered).  
Installation image for the Virtual I/O Server software that is shipped as a system image on  
DVD. Client partitions can be either AIX 5L Version 5.3 or Linux. It supports:  
– Ethernet adapter sharing (Ethernet bridge from virtual Ethernet to external network).  
– Virtual SCSI Server.  
– Partition management using Integrated Virtualization Manager (Virtual I/O Server  
Version 1.2 or later only).  
Partition Load Manager (AIX 5L Version 5.3 only)  
– Automated CPU and memory reconfiguration.  
– Real-time partition configuration and load statistics.  
– Graphical user interface.  
For more details about Advanced POWER Virtualization and virtualization in general, see:  
2.12.1 Micro-Partitioning technology  
The concept of Micro-Partitioning technology allows you to allocate fractions of processors to  
the partition. The Micro-Partitioning technology is only available with POWER5 and  
POWER5+ processor-based systems. From an operating system perspective, a virtual  
processor cannot be distinguished from a physical processor, unless the operating system  
has been enhanced to be made aware of the difference. Physical processors are abstracted  
into virtual processors that are available to partitions. See 2.12.2, “Logical, virtual, and physical processor mapping” on page 52 for more details.
When defining a shared partition, you have to define several options:  
Minimum, desired, and maximum processing units. Processing units are defined as the  
processing power, or the fraction of time, that the partition is dispatched on physical  
processors.  
The processing sharing mode, either capped or uncapped.  
Weight (preference) in the case of an uncapped partition.  
Minimum, desired, and maximum number of virtual processors.  
POWER Hypervisor calculates a partition’s processing entitlement based on its desired  
processing units and logical processor settings, sharing mode, and also based on other  
active partitions’ requirements. The actual entitlement is never smaller than the desired  
processing unit’s value and can exceed the desired processing unit’s value if the LPAR is an  
uncapped partition.  
A partition can be defined with a processor capacity as small as 0.10 processing units. This  
represents one-tenth of a physical processor. Each physical processor can be shared by up  
to 10 shared processor partitions, and a partition’s entitlement can be incremented  
fractionally by as little as one-hundredth of the processor. The shared processor partitions are  
dispatched and time-sliced on the physical processors under control of the POWER  
Hypervisor. The shared processor partitions are created and managed by the HMC or  
Integrated Virtualization Manager (included with Virtual I/O Server software version 1.2 or
later). There is only one pool of shared processors at the time of writing this publication and  
all shared partitions are dispatched by Hypervisor within this pool. Dedicated partitions and  
micro-partitions can coexist on the same POWER5+ processor-based server as long as  
enough processors are available.  
The systems support up to a 4-core processor configuration; therefore, up to four dedicated partitions or up to 40 micro-partitions can be created. It is important to point out that the stated maximums are supported by the hardware, but the practical limits depend on application workload demands.
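As an illustration, a shared partition's processing units can also be changed dynamically from the HMC command line. The following sketch assumes hypothetical managed system and partition names (SYS1 and LPAR1) and adds 0.2 processing units to a running partition:

chhwres -r proc -m SYS1 -o a -p LPAR1 --procunits 0.2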
2.12.2 Logical, virtual, and physical processor mapping  
The meaning of the term physical processor in this section is a processor core. For example,  
in a 2-core server with a DCM (dual-core module), there are two physical processors, and in a  
4-core configuration with a QCM (quad-core module), there are four physical processors.  
In dedicated mode, physical processors are assigned as a whole to partitions. The simultaneous multithreading feature in the POWER5+ processor core allows the core to execute instructions from two independent software threads simultaneously. To support this feature, the concept of logical processors was introduced. The operating system (AIX 5L or Linux) sees one physical processor as two logical processors if the simultaneous multithreading feature is on. It can be turned off while the operating system is executing (for AIX 5L, use the smtctl command). If simultaneous multithreading is off, each physical processor is presented as one logical processor, and, thus, only one thread is executed on the physical processor at a time.
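For example, on AIX 5L the following smtctl invocations disable simultaneous multithreading immediately and re-enable it at the next boot:

smtctl -m off -w now
smtctl -m on -w boot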
In a micro-partitioned environment with shared mode partitions, an additional concept of  
virtual processors was introduced. Shared partitions can define any number of virtual  
processors (maximum number is 10 times the number of processing units assigned to the  
partition). From the POWER Hypervisor point of view, the virtual processors represent  
dispatching objects (for example, the POWER Hypervisor dispatches virtual processors to  
physical processors according to partition’s processing unit entitlement). At the end of the  
POWER Hypervisor’s dispatch cycle, all partitions should receive total CPU time equal to  
their processing unit entitlement. Virtual processors are either running (dispatched) on a  
physical processor or standby (waiting). The operating system is able to dispatch its software  
threads to these virtual processors and is completely screened from the actual number of  
physical processors. The logical processors are defined on top of virtual processors in the  
same way that physical processors are defined. So, even with a virtual processor, the  
concept of logical processor exists and the number of logical processors depends on whether  
the simultaneous multithreading is turned on or off.  
Some additional information related to the virtual processors:  
There is a one-to-one mapping of running virtual processors to physical processors at any  
given time. No more virtual processors can be active at any given time than the total  
number of physical processors in the shared processor pool.  
A virtual processor can be either running (dispatched) on a physical processor or standby  
waiting for a physical processor to become available.  
Virtual processors do not introduce any additional abstraction level; they are really only a dispatch entity. When running on a physical processor, they run at the full speed of the physical processor.
Each partition’s profile defines a CPU entitlement that determines how much processing  
power any given partition should receive. The total sum of CPU entitlement of all partitions  
cannot exceed the number of available physical processors in the shared processor pool.  
A partition has the same amount of processing power regardless of the number of virtual  
processors that it defines.  
A partition can use more processing power, regardless of its entitlement, if it is defined as  
an uncapped partition in the partition profile. If there is spare processing power available in  
the shared processor pool or other partitions are not using their entitlement, an uncapped  
partition can use additional processing units if its entitlement is not enough to satisfy its  
application processing demand in the given processing entitlement.  
When the partition is uncapped, the number of defined virtual processors determines the  
limitation of the maximum processing power it can receive. For example, if the number of  
virtual processors is two, then the maximum usable processor units is two.  
You are allowed to define more virtual processors than physical processors. In that case,  
the virtual processor is waiting for dispatch more often, and you should consider some  
performance impact caused by redispatching virtual processors on physical processors.  
Also, some applications might benefit from using more virtual processors than physical  
processors.  
You can change the number of virtual processors dynamically through a dynamic LPAR  
operation.  
Virtual processor recommendations  
For each partition, you can define a number of virtual processors set to the maximum  
processing power the partition could ever request. If there are, for example, four physical  
processors installed in the system, one production partition and three test partitions, then:  
Define the production LPAR with four virtual processors, so that it can receive full  
processing power of all four physical processors during the time that the other partitions  
are idle.  
If you know that the test system never consumes more than one processor computing  
unit, then you should define the test system with one virtual processor. Some test systems  
might require additional virtual processors, such as four, in order to use idle processing  
power left over by a production system during off-business hours.  
Figure 2-14 on page 54 shows logical, virtual, and physical processor mapping, and an  
example of how the virtual processor and logical processor can be dispatched to the physical  
processor.  
Figure 2-14 Logical, virtual, and physical processor mapping  
In Figure 2-14, a system with four physical processors and four partitions is presented; one partition (LPAR4) is in dedicated mode, and three partitions (LPAR1, LPAR2, and LPAR3) are running in shared mode, with entitlements of 0.5, 0.5, and 1.0 processing units, respectively. Dedicated-mode LPAR4 is using one physical processor and, thus, three processors are available for the shared processor pool. LPAR1 defines five virtual processors and the simultaneous multithreading feature is on (thus, it sees 10 logical processors). LPAR2 defines one virtual processor and simultaneous multithreading is off (one logical processor). LPAR3 defines two virtual processors and simultaneous multithreading is on. At the sample time, virtual processors 2 and 3 of LPAR1 and virtual processor 0 of LPAR2 are dispatched on physical processors in the shared pool. Other virtual processors are idle, waiting for dispatch by the Hypervisor. When more virtual processors are defined within a partition, all virtual processors share equal parts of the partition's processing entitlement.
2.12.3 Virtual I/O Server  
The Virtual I/O Server (VIOS) is a special purpose partition that provides virtual I/O resources  
to other partitions. The Virtual I/O Server owns the physical resources (actually SCSI, Fibre  
Channel and network adapters, and optical devices) and allows client partitions to share  
access to them, thus, minimizing the number of physical adapters in the system. The Virtual  
I/O Server eliminates the requirement that every partition own a dedicated network adapter,  
disk adapter, and disk drive.  
Figure 2-15 on page 55 shows an organization view of a micro-partitioned system including  
the Virtual I/O Server. The figure also includes virtual SCSI and Ethernet connections and  
mixed operating system partitions.  
Figure 2-15 Micro-Partitioning technology and VIOS  
Because the Virtual I/O Server is an operating system-based appliance server, redundancy  
for physical devices attached to the Virtual I/O Server can be provided by using capabilities  
such as Multipath I/O and IEEE 802.3ad Link Aggregation.  
Installation of the Virtual I/O Server partition is performed from a special system backup DVD  
that is provided to clients that order the Advanced POWER Virtualization feature. This  
dedicated software is only for the Virtual I/O Server (and IVM in case it is used) and is only  
supported in special Virtual I/O Server partitions.  
The Virtual I/O Server can be installed by:
Media (assigning the DVD-ROM drive to the partition and booting from the media)
The HMC (inserting the media in the DVD-ROM drive on the HMC and using the installios command)
Using the Network Install Manager (NIM)
Note: To increase the performance of I/O-intensive applications, use dedicated physical  
adapters using dedicated partitions.  
We recommend that you install the Virtual I/O Server in a partition with dedicated  
resources or at least a 0.5 processor entitlement to help ensure consistent performance.  
The Virtual I/O Server supports RAID configurations and SAN-attached devices (possibly  
with multipath driver). Logical volumes created on RAID or JBOD configurations are  
bootable, and the number of logical volumes is limited to the amount of storage available  
and architectural limits of the Logical Volume Manager.  
Two major functions are provided with the Virtual I/O Server: a shared Ethernet adapter and  
Virtual SCSI.  
Shared Ethernet adapter  
A shared Ethernet adapter (SEA) is a Virtual I/O Server service that acts as a layer 2 network  
bridge between a physical Ethernet adapter or aggregation of physical adapters  
(EtherChannel) and one or more Virtual Ethernet adapters defined by the Hypervisor on the  
Virtual I/O Server. A SEA enables LPARs on the virtual Ethernet to share access to the  
physical Ethernet and communicate with stand-alone servers and LPARs on other systems.  
The shared Ethernet network provides this access by connecting the internal Hypervisor  
VLANs with the VLANs on the external switches. Because the shared Ethernet network  
processes packets at layer 2, the original MAC address and VLAN tags of the packet are  
visible to other systems on the physical network. IEEE 802.1 VLAN tagging is supported.  
The virtual Ethernet adapters that are used to configure a shared Ethernet adapter are  
required to have the trunk setting enabled. The trunk setting causes these virtual Ethernet  
adapters to operate in a special mode, so that they can deliver and accept external packets  
from the POWER5+ internal switch to the external physical switches. The trunk setting should  
only be used for the virtual Ethernet adapters that are part of a shared Ethernet network setup  
in the Virtual I/O server.  
A single SEA setup can have up to 16 virtual Ethernet trunk adapters and each virtual  
Ethernet trunk adapter can support up to 20 VLAN networks. Therefore, it is possible for a  
single physical Ethernet to be shared between 320 internal VLANs. The number of shared  
Ethernet adapters that can be set up in a Virtual I/O Server partition is limited only by the  
resource availability because there are no configuration limits.  
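As a brief sketch of how a SEA is defined from the Virtual I/O Server command line (the device names are hypothetical: ent0 is the physical adapter and ent2 is the virtual Ethernet trunk adapter), the mkvdev command creates the bridge:

mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1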
For a more detailed discussion about virtual networking, see:  
Virtual SCSI  
Access to real storage devices is implemented through the virtual SCSI services, a part of the  
Virtual I/O Server partition. You accomplish this by using a pair of virtual adapters: a virtual  
SCSI server adapter and a virtual SCSI client adapter. The virtual SCSI server and client  
adapters are configured using an HMC or through Integrated Virtualization Manager on  
smaller systems. The virtual SCSI server (target) adapter is responsible for executing any  
SCSI commands it receives. It is owned by the Virtual I/O Server partition. The virtual SCSI  
client adapter allows a client partition to access physical SCSI and SAN-attached devices and  
LUNs that are assigned to the client partition.  
Physical disks owned by the Virtual I/O Server partition can either be exported and assigned  
to a client partition as a whole device, or they can be configured into a volume group and  
partitioned into several logical volumes. These logical volumes can then be assigned to  
individual partitions. From the client partition point of view, these two options are equivalent.  
The Virtual I/O Server provides mapping between backing devices (physical devices or logical volumes assigned to client partitions in VIOS nomenclature) and client partitions through a command line interface. The appropriate command is the mkvdev command. For syntax and semantics, see the Virtual I/O Server documentation.
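For example, the following sketch (assuming a hypothetical physical disk hdisk2 and virtual SCSI server adapter vhost0) exports a whole disk to a client partition as the virtual target device vtscsi0:

mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0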
All current storage device types, such as SAN, SCSI, and RAID are supported. SSA and  
iSCSI are not supported at the time of writing.  
For more information about the specific storage devices supported, see:  
Important: We do not recommend using Mirrored Logical Volumes (LVs) at the Virtual I/O Server level as backing devices. If mirroring is required, two independent devices (possibly from two separate VIO servers) should be assigned to the client partition, and then the client partition should define mirroring on top of them.
Virtual I/O Server version 1.3  
Virtual I/O Server version 1.3 brings a host of new enhancements, including improved monitoring, such as additional topas and viostat performance metrics and the bundling of the Performance ToolKit (PTX®) agent. Virtual SCSI and Virtual Ethernet performance increases, command line enhancements, and the enablement of additional storage solutions are also included.
Virtual I/O Server version 1.3 introduced several enhancements for Virtual SCSI and shared  
Fibre Channel adapter support:  
Independent Software Vendor/Independent Hardware Vendor Virtual I/O enablement  
iSCSI TOE adapter  
iSCSI directly attached N3700 storage subsystem
HP storage  
Virtual SCSI functional enhancements:  
– Support for SCSI Reserve/Release for limited configurations  
– Changeable queue depth  
– Updating virtual device capacity non-disruptively so that the virtual disk can "grow"  
without requiring a reconfig  
– Configurable fast fail time (number of retries on failure)  
– Error log enhancements  
Virtual I/O Server version 1.3 also introduced several enhancements for virtual Ethernet and shared Ethernet adapter support, including TCP/IP Acceleration: Large Block Send.
2.12.4 Partition Load Manager  
Partition Load Manager (PLM) provides automated processor and memory distribution  
between a dynamic LPAR and a Micro-Partitioning technology capable logical partition  
running AIX 5L. The PLM application is based on a client/server model to share system information, such as processor or memory events, across the concurrently active logical partitions.
The following events are registered on all managed partition nodes:  
Memory-pages-steal high thresholds and low thresholds  
Memory-usage high thresholds and low thresholds  
Processor-load-average high threshold and low threshold  
Note: PLM is supported on AIX 5L Version 5.2 and AIX 5L Version 5.3. It is not supported  
on Linux.  
2.12.5 Integrated Virtualization Manager  
In order to ease virtualization technology adoption in any IBM System p5 environment, IBM  
has developed the Integrated Virtualization Manager (IVM), a simplified hardware
management solution that inherits some HMC features, thus avoiding the necessity of a  
dedicated control workstation. This solution enables the administrator to reduce system setup  
time. IVM is targeted at small and medium systems.  
IVM supports up to the maximum 16-core configuration. The IVM provides a management  
model for a single system. Although it does not provide the full flexibility of an HMC, it enables  
the exploitation of the IBM Virtualization Engine™ technology. IVM is an enhancement of the  
Virtual I/O Server, offered as part of Virtual I/O Server Version 1.2 and follow-on versions,  
which is the product that enables I/O virtualization in POWER5 and POWER5+ systems. It  
provides the same Virtual I/O Server features plus a Web-based graphical user interface that  
enables the administrator to remotely manage the System p5 server with an Internet browser.  
You can use IVM to:  
Create and manage logical partitions.  
Configure the virtual Ethernet networks.  
Manage storage in the Virtual I/O Server.  
Create and manage user accounts.  
Create and manage serviceable events through Service Focal Point.  
Download and install updates to device microcode and to Virtual I/O Server software.  
Back up and restore logical partition configuration information.  
View application logs and the device inventory.  
The requirements for an IVM-managed server are as follows:  
A server managed by IVM cannot be simultaneously managed by an HMC.  
IVM (with Virtual I/O Server) must be installed as the first operating system.  
An IVM partition requires a minimum of one virtual processor and 512 MB of RAM.  
Virtual I/O Server version 1.3 introduced enhancements to IVM. The Integrated Virtualization Manager (IVM) adds an industry-leading function in this release: support for Dynamic Logical Partitioning (DLPAR) for memory and processors in managed partitions. Additionally, a number of usability enhancements include support through the browser-based interface for IP configuration of the Virtual I/O Server:
DLPAR support for memory and processors in managed partitions
GUI support for System Plan management, including the Logical Partition (LPAR) Deployment Wizard
Web UI support for:
– IP configuration support
– Task Manager for long-running tasks
– Various usability enhancements, including the ability to create a new partition based on an existing one
The major considerations of IVM in comparison to an HMC-managed system are as follows:  
All physical adapters are owned by IVM, and LPARs use virtual devices.  
There is only one profile per partition.  
A maximum of four virtual Ethernet networks are available inside the system.  
Each LPAR can have a maximum of one Virtual SCSI adapter assigned.  
IVM supports a single Virtual I/O Server to support all of your mission critical production  
needs.  
Service Agent (see 3.2.3, “Service Agent” on page 85) for reporting hardware errors to IBM is not available on IVM.
IVM cannot be used by HACMP software to activate Capacity on Demand (CoD)  
resources on machines that support CoD.  
IVM provides advanced virtualization functionality without the need for an extra-cost  
workstation. For more information about IVM functionality and best practices, see Virtual I/O  
Server Integrated Virtualization Manager, REDP-4061 at this Web site:  
Figure 2-16 shows how a system with IVM is organized. There is a Virtual I/O Server and IVM  
installed in one partition that owns all of the physical server resources and four client  
partitions. IVM communicates to the POWER Hypervisor to create, manage, and provide  
virtual I/O for client partitions. But the dispatch of partitions on physical processors is done  
by the POWER Hypervisor, as in HMC-managed servers. The rules for mapping the physical processors, virtual processors, and logical processors also apply to shared partitions managed by IVM.
Figure 2-16 IVM principles  
Note: IVM and HMC are two separate management systems and cannot be used at the  
same time. IVM targets ease of use, while HMC targets flexibility and scalability. The  
internal design is so different that you should never connect an HMC to a working IVM  
system. If you want to migrate an environment from IVM to HMC, you have to rebuild the  
configuration setup manually.  
Operating system support for advanced virtualization  
Table 2-13 on page 60 lists AIX 5L and Linux support for advanced virtualization.  
Table 2-13 Operating system supported functions

Advanced POWER             AIX 5L        AIX 5L        Linux     Linux        Linux
Virtualization feature     Version 5.2   Version 5.3   SLES 9    RHEL AS 3    RHEL AS 4
Micro-partitions           N             Y             Y         Y            Y
(1/10th of processor)
Virtual Storage            N             Y             Y         Y            Y
Virtual Ethernet           N             Y             Y         Y            Y
Partition Load Manager     Y             Y             N         N            N
2.13 Hardware Management Console  
The Hardware Management Console (HMC) is a dedicated workstation that provides a  
graphical user interface for configuring, operating, and performing basic system tasks for the  
IBM System p5 servers that function in either non-partitioned, LPAR, or clustered  
environments. In addition, the HMC is used to configure and manage partitions. One HMC is  
capable of controlling multiple POWER5 and POWER5+ processor-based systems.  
At the time of writing, one HMC supports up to 48 POWER5 and POWER5+ processor-based  
systems and up to 254 LPARs using the HMC machine code Version 5.2. For updates of the  
machine code and HMC functions and hardware prerequisites, refer to the following Web site:  
POWER5+ and POWER5 processor-based system HMCs require Ethernet connectivity  
between the HMC and the server’s service processor. Moreover, if dynamic LPAR operations  
are required, all AIX 5L and Linux partitions must be enabled to communicate over a network  
to the HMC. Ensure that sufficient Ethernet adapters are available to enable public and  
private networks, if you need both:  
The HMC 7310 Model C05 is a deskside model with one integrated 10/100/1000 Mbps  
Ethernet port and two additional PCI slots.  
The 7310 Model CR3 is a 1U, 19-inch rack-mountable drawer that has two native  
10/100/1000 Mbps Ethernet ports and two additional PCI slots.  
For any partition in a server, it is possible to use the shared Ethernet adapter in the Virtual I/O Server for a unique connection from the HMC to the partitions. Therefore, a partition does not require its own physical adapter to communicate with an HMC.
It is a good practice to connect the HMC to the first HMC port on the server, which is labeled  
as HMC Port 1, although other network configurations are possible. You can attach a second  
HMC to HMC Port 2 of the server for redundancy (or vice versa). Figure 2-17 on page 61  
shows a simple network configuration to enable the connection from the HMC to the server  
and to enable Dynamic LPAR operations. For more details about HMC and the possible  
network connections, refer to Hardware Management Console (HMC) Case Configuration  
Study for LPAR Management, REDP-3999, at:  
Figure 2-17 HMC to service processor and LPARs network connection  
The default mechanism for allocation of the IP addresses for the service processor HMC ports is dynamic. The HMC can be configured as a DHCP server, providing the IP address at the time the managed server is powered on. If the service processor of the managed server does not receive a DHCP reply before time-out, predefined IP addresses are set up on both ports. Static IP address allocation is also an option. You can configure the IP address of the service processor ports with a static IP address by using the Advanced System Management Interface (ASMI) menus. See 2.15.7, “Service processor” on page 73 for predefined IP addresses and additional information.
Note: If you need to access ASMI (for example, to set up the IP address of a new  
POWER5+ processor-based server when HMC is not available or not providing DHCP  
services), you can connect any client to one of the service processor HMC ports with any  
kind of Ethernet cable, and use a Web browser to access the predefined IP address, such  
as the following example:  
Functions performed by the HMC include:  
Creating and maintaining a multiple partition environment  
Displaying a virtual operating system session terminal for each partition  
Displaying a virtual operator panel of contents for each partition  
Detecting, reporting, and storing changes in hardware conditions  
Powering managed systems on and off  
Acting as a service focal point  
The HMC provides both graphical and command line interface for all management tasks.  
Remote connection to the HMC using Web-based System Manager or SSH is possible. For  
accessing the graphical interface, you can use the Web-based System Manager Remote  
Client running on the AIX 5L, Linux, or Windows® operating systems. The Web-based  
System Manager client installation image can be downloaded from the HMC itself from the  
following URL:  
http://<hmc_address_or_name>/remote_client.html  
Both unencrypted and encrypted Web-based System Manager connections are supported.  
The command line interface is also available by using the SSH secure shell connection to the  
HMC. The command line interface can be used by an external management system or a  
partition to perform HMC operations remotely.  
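For example, a brief sketch of remote command line usage over SSH (the HMC host name hmc1 and managed system name SYS1 are hypothetical) lists the partitions and their states:

ssh hscroot@hmc1 lssyscfg -r lpar -m SYS1 -F name,state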
2.13.1 High availability using the HMC  
The HMC is an important hardware component. HACMP Version 5.3 High Availability cluster  
software can be used to activate resources automatically (where available), thus becoming  
an integral part of the cluster. For some environments, we recommend that you work with  
redundant HMCs.  
POWER5 and POWER5+ processor-based systems have two service processor interfaces  
(HMC port 1 and HMC port 2) available for connections to the HMC. We recommend that you  
use both of them for redundant network configuration. Depending on your environment, you  
have multiple options to configure the network. Figure 2-18 shows one possible highly  
available configuration.  
In the configuration shown, LAN1 is the hardware management network for the first FSP ports (private), LAN2 is the hardware management network for the second FSP ports (private, on separate network hardware from LAN1), and LAN3 is the management network for WebSM access to the HMC from outside (public) and for HMC-to-LPAR communication.
Figure 2-18 Highly available HMC and network architecture  
Note that, to keep the figure simple, only the hardware management networks (LAN1 and LAN2) are shown as highly available. However, the management network (LAN3) can be made highly available by using a similar concept and adding more Ethernet adapters to LPARs and HMCs.
2.13.2 IBM System Planning Tool  
The IBM System Planning Tool (SPT) is the next generation of the IBM LPAR Validation Tool  
(LVT). It contains all of the function from the LVT and is integrated with the IBM Systems  
Workload Estimator (WLE). System plans generated by the SPT can be deployed on the  
system by the Hardware Management Console (HMC). The SPT is available to assist the  
user in system planning, design, validation, and to provide a system validation report that  
reflects the user’s system requirements while not exceeding system recommendations. The  
SPT is a PC-based browser application designed to run in a stand-alone environment.  
The IBM System Planning Tool can be downloaded at no additional charge from:  
The System Planning Tool (SPT) helps you design a system to fit your needs. You can use  
the SPT to design a logically partitioned system or you can use the SPT to design an  
   
unpartitioned system. You can create an entirely new system configuration, or you can create  
a system configuration based upon any of the following:  
Performance data from an existing system that the new system is to replace  
Performance estimates that anticipate future workloads that you must support  
Sample systems that you can customize to fit your needs  
Integration between the SPT and both the Workload Estimator (WLE) and IBM Performance  
Management (PM) allows you to create a system that is based upon performance and  
capacity data from an existing system or that is based on new workloads that you specify.  
You can use the SPT before you order a system to determine what you must order to support  
your workload. You can also use the SPT to determine how you can partition a system that  
you already have.  
Important: We recommend using the IBM System Planning Tool to estimate Hypervisor  
requirements and to determine the memory resources that are required for all partitioned  
and non-partitioned servers.  
Figure 2-19 shows the estimated Hypervisor memory requirements based on sample  
partition requirements.  
Figure 2-19 IBM System Planning Tool window showing Hypervisor requirements  
2.14 Operating system support  
The p5-520 and p5-520Q are capable of running the AIX 5L and Linux operating systems.  
The AIX 5L operating system has been developed and enhanced specifically to exploit and to  
support the extensive RAS features on IBM System p systems.  
2.14.1 AIX 5L  
If you are installing AIX 5L on the server, the following minimum requirements must be met:  
AIX 5L for POWER V5.2 with the 5200-09 Technology Level (APAR IY82425), or later  
AIX 5L for POWER V5.3 with the 5300-05 Technology Level (APAR IY82426), or later  
Note: The Advanced POWER Virtualization feature (FC 7940) is not supported on AIX 5L  
V5.2. It requires AIX 5L V5.3.  
IBM periodically releases maintenance packages for the AIX 5L operating system. These  
packages are available on CD-ROM or you can download them from the Internet at:  
The Web page provides information about how to obtain the CD-ROM.  
You can also get individual operating system fixes and information about obtaining AIX 5L  
service at this Web site. In AIX 5L V5.3, the suma command is also available, which helps the administrator to automate the task of checking for and downloading operating system fixes. For more information about the suma command, refer to:
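For example, a brief sketch of previewing the latest available fixes with suma (standard suma syntax; requires Internet connectivity from the system):

suma -x -a Action=Preview -a RqType=Latest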
Electronic Software Delivery (ESD) for AIX 5L V5.2 and V5.3 for POWER5 systems was  
made available. This is a way for clients to receive software and associated publications  
online, as opposed to waiting for a physical shipment to arrive. Clients requesting ESD should  
order FC 3450.  
ESD has the following requirements:  
POWER5 system  
Internet connectivity from a POWER5 system or PC and reasonable connection speed for  
downloading large products such as AIX 5L  
Registration on the ESD Web site  
For additional information, contact your IBM marketing representative.  
Software support for new features in the POWER5+ processor  
For a complete list of the new features introduced in the POWER5+ processor, see 2.1, “The  
POWER5+ processor” on page 26. Support for two new virtual memory page sizes was  
introduced: 64 KB and 16 GB as well as support for 1 TB segment size. While 16 GB pages  
are intended for use only in very high performance environments, 64 KB pages are  
general-purpose. AIX 5L Version 5.3 with the 5300-04 Technology Level 64-bit kernel is  
required for 64 KB and 16 GB page size support.  
As with all previous versions of AIX, 4 KB is the default page size. A process continues to use  
4 KB pages, unless a user specifically requests that another page size is used. AIX 5L has  
rich support of 64 KB pages. They are easy to use, and we expect that many applications will  
see performance benefits when using 64 KB pages rather than 4 KB pages. No system  
configuration changes are necessary to enable a system to use 64 KB pages; they are fully  
pageable, and the size of the pool of 64 KB page frames on a system is dynamic and fully  
managed by AIX 5L.  
The main benefit of a larger page size is improved performance for applications that allocate  
and repeatedly access large amounts of memory. The performance improvement from larger  
page sizes is due to the overhead of translating a page address as it is used in an application,  
to a page address that is understood by the computer's memory subsystem. To improve  
performance, the information needed to translate a given page is usually cached in the  
processor. In POWER5+, this cache takes the form of a translation lookaside buffer (TLB).  
Because there are a limited number of TLB entries, using a large page size increases the  
amount of address space that can be accessed without incurring translation delays. Also, the  
size of TLB in POWER5+ has been doubled compared to POWER5.  
Huge pages (16 GB) are intended for use only in very high performance environments, and  
AIX 5L does not automatically configure a system to use these page sizes. A system  
administrator must configure AIX 5L to use these page sizes and specify their number using  
an HMC before the partition starts.  
A user can specify the page sizes to use for three regions of a process's address space with an environment variable or with settings in an application's XCOFF binary using the ldedit or ld commands. These three regions are: data, stack, and program text. An application programmer can also select the page size to use for System V shared memory using a new SHM_PAGESIZE command to the shmctl() system call.
The following is an example of using system variables to start a program with 64 KB page  
size support:  
LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K <program>  
System commands (ps, vmstat, svmon, and pagesize) have been enhanced to report various page size usage.
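As an alternative to the environment variable shown above, the page sizes can also be set in an existing binary, and the supported page sizes can be listed with pagesize. This is a sketch using the ldedit options named in the text; a.out is a placeholder executable name:

ldedit -btextpsize=64K -bdatapsize=64K -bstackpsize=64K a.out
pagesize -a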
2.14.2 Linux  
For the p5-520 and p5-520Q, Linux distributions are available through Novell SUSE and  
Red Hat at the time of writing. The server requires the following version of Linux distributions:  
SUSE Linux Enterprise Server 9 for POWER Systems or SUSE Linux Enterprise Server  
10 for POWER Systems, or later  
Red Hat Enterprise Linux AS 4 for POWER, or later  
Note: Not all features available on AIX 5L are available on Linux. IDE CD-ROM/DVD-RAM DLPAR operation is not supported by Red Hat Enterprise Linux AS 4 for POWER.
For information about the features and external devices that are supported by Linux, refer to:  
For information about SUSE Linux Enterprise Server 9, refer to:  
For information about Red Hat Enterprise Linux AS, refer to:  
Many of the features that are described in this document are operating system dependent  
and might not be available on Linux. For more information, see:  
Note: IBM only supports the Linux systems of clients with a SupportLine contract that  
covers Linux. Otherwise, contact the Linux distributor for support.  
Specially priced Linux subscriptions  
Linux subscriptions are now available when ordered through IBM and combined with an IBM  
System p5 Express Product Offering. Clients can purchase a specially priced one-year subscription or receive a greater discount with a three-year subscription.
These new Linux options, available on IBM System p5 Express Product Offering servers,  
bring improved pricing and price performance to our clients interested in Linux as their  
primary operating system. Clients interested in AIX 5L can also obtain an Express Product  
Offering that fits their needs.  
Clients are still encouraged to purchase support for their Linux subscription either through  
IBM Global Services or through the distributor to receive updates and technical assistance as  
needed. Support is not included in the price of the subscription.  
The new lower-priced Linux subscriptions, when combined with the lower package prices of  
the IBM System p5 Express Product Offering, make these products an exceptional value for  
our smaller to mid-market clients, as well as larger enterprises.  
Refer to the following Web site for Red Hat information:  
For additional information about Linux on POWER, visit:  
2.15 Service information  
The p5-520 and p5-520Q are customer setup (CSU) servers and are shipped with materials  
to assist in the general installation of the server. The server cover has a quick reference  
service information label that provides graphics that can aid you in identifying features and  
locating information. This section provides some additional service-related information.  
2.15.1 Touch point colors  
Blue (IBM blue) or terra-cotta (orange) on a component indicates a touch point (for electronic  
parts) where you can grip the hardware to remove it from or install it into the system, open or  
close a latch, and so on. IBM defines the touch point colors as follows:  
Blue
The system must be shut down before the task can be
performed, for example, installing additional processors contained
in the second processor book.
Terra-cotta  
The system can remain powered on while this task is performed.  
Keep in mind that some tasks might require that you have to  
perform other steps first. One example is deconfiguring a physical  
volume in the operating system before removing a disk from a  
4-pack disk enclosure of the p5-520 and p5-520Q.  
Blue and terra-cotta
Terra-cotta takes precedence in this color combination, and the
rules for a terra-cotta-only touch point apply.
Important: It is important to adhere to the touch point colors on the system. Not doing so  
can compromise your safety and damage the system.  
2.15.2 Securing a rack-mounted system into a rack  
The optional rack-mount drawer rail kit is a unique kit designed for use with the rack-mounted  
model. No tools are required to install the server or drawer rails into the system rack.  
The kit has a modular design that you can adapt to accommodate various rack depth  
specifications. The drawer rails are equipped with thumb-releases on the sides, toward the
front of the server, that allow the server to slide out easily from its rack position for servicing.
Note: Always exercise standard safety precautions when installing or removing devices  
from racks. By placing the rack-mounted system or expansion unit in the service position,  
you can access the inside of the unit.  
2.15.3 Placing a rack-mounted system into the service position
To place the rack-mounted system or expansion unit into the service position:  
1. If necessary, open the front rack door.  
2. Remove the two thumbscrews (A) that secure the system or expansion unit (B) to the
rack, as shown in Figure 2-20.
Figure 2-20 Pull the server to the service position  
3. Release the rack latches (C) on both the left and right sides, as shown in Figure 2-20.
4. Review the following notes, and then slowly pull the system or expansion unit out from the  
rack until the rails are fully extended and locked:  
– If the procedure you are performing requires you to unplug cables from the back of the  
system or expansion unit, do so before you pull the unit out from the rack.  
– Ensure that the cables at the rear of the system or expansion unit do not catch or bind  
as you pull the unit out from the rack.  
– When the rails are fully extended, the rail safety latches lock into place. This action  
prevents the system or expansion unit from being pulled out too far.  
Caution: This unit weighs approximately 43 kg (95 lb.). Ensure that you can safely  
support this weight when removing the server unit from the system rack.  
The IBM Systems Hardware Information Center is available for more information or to view  
available video-clips that describe several of the maintenance repair-action procedures.  
2.15.4 Cable-management arm  
The rack-mounted model is shipped with a cable-management arm that routes all the cables
through hooks along the arm and secures them with the straps provided. The
cable-management arm simplifies cable management when a service action
requires you to pull the rack-mounted system out of the rack.
2.15.5 Operator control panel  
The service processor provides an interface to the control panel that is used to display server  
status and diagnostic information. See Figure 2-21 for operator control panel physical details  
and buttons.  
Figure 2-21 Operator control panel physical details and buttons  
Note: For servers managed by the HMC, use the HMC to perform control panel functions.  
Primary control panel functions  
The primary control panel functions are defined as functions 01 to 20, including options to  
view and manipulate IPL modes, server operating modes, IPL speed, and IPL type.  
The following list describes the primary functions:  
Function 01: Display selected IPL type, system operating mode, and IPL speed  
Function 02: Select IPL type, IPL speed override, and system operating mode  
Function 03: Start IPL  
Function 04: Lamp test  
Function 05: Reserved  
Function 06: Reserved  
Function 07: SPCN functions  
Function 08: Fast power off  
Functions 09 to 10: Reserved  
Functions 11 to 19: System reference code  
Function 20: System type, model, feature code, and IPL type  
All the functions mentioned are accessible using the Advanced System Management  
Interface (ASMI), HMC, or the control panel.  
Extended control panel functions  
The extended control panel functions consist of two major groups:  
Functions 21 through 49, which are available when you select Manual mode from Function
02.
Support service representative functions 50 through 99, which are available when you
select Manual mode from Function 02, and then select and enter customer service switch 1
(Function 25), followed by service switch 2 (Function 26).
Function 30 – CEC SP IP address and location  
Function 30 is one of the Extended control panel functions and is only available when Manual  
mode is selected. You can use this function to display the central electronic complex (CEC)  
Service Processor IP address and location segment. Table 2-14 shows an example of how to  
use Function 30.  
Table 2-14 CEC SP IP address and location

Information on operator panel     Action or description
3 0                               Use the increment or decrement buttons to
                                  scroll to Function 30.
3 0 * *                           Press Enter to enter sub-function mode.
3 0 0 0                           Use the increment or decrement buttons to
                                  select an IP address:
                                  0 0 = Service Processor ETH0 or HMC1 port
                                  0 1 = Service Processor ETH1 or HMC2 port
S P A: E T H 0: _ _ _ T 5         Press Enter to display the selected IP address.
1 9 2 . 1 6 8 . 2 . 1 4 7
3 0 * *                           Use the increment or decrement buttons to
                                  select sub-function exit.
3 0                               Press Enter to exit sub-function mode.
2.15.6 System firmware  
Server firmware is the part of the Licensed Internal Code that enables hardware, such as the  
service processor. Depending on your service environment, you can download, install, and  
manage your server firmware fixes using different interfaces and methods, including the  
HMC, or by using functions specific to your operating system. See 3.2.4, “IBM System p5  
firmware maintenance” on page 87 for a detailed description of IBM System p5 firmware.  
Note: Normally, installing the server firmware fixes through the operating system is a  
nonconcurrent process.  
Temporary and permanent firmware sides  
The service processor maintains two copies of the server firmware:  
One copy is considered the permanent or backup copy and is stored on the permanent  
side, sometimes referred to as the p side.  
The other copy is considered the installed or temporary copy and is stored on the  
temporary side, sometimes referred to as the t side. We recommend that you start and run  
the server from the temporary side.  
The copy actually booted from is called the activated level, sometimes referred to as b.  
Note: The default value, from which the system boots, is temporary.  
The following examples show the output of the lsmcode command for AIX 5L and Linux,
with the firmware levels as they are displayed:
AIX 5L  
The current permanent system firmware image is SF220_005.  
The current temporary system firmware image is SF220_006.  
The system is currently booted from the temporary image.  
Linux  
system:SF220_006 (t) SF220_005 (p) SF220_006 (b)  
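As a hedged sketch of producing this output from the command line (the -c flag is the
commonly documented AIX 5L option for display without the diagnostics menus; verify it at
your level):
lsmcode -c     # AIX 5L: display the firmware levels on the command line
lsmcode        # Linux: provided by the ppc64-utils service aids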
When you install a server firmware fix, it is installed on the temporary side.  
Note: The following points are of special interest:  
The server firmware fix is installed on the temporary side only after the existing  
contents of the temporary side are permanently installed on the permanent side (the  
service processor performs this process automatically when you install a server  
firmware fix).  
If you want to preserve the contents of the permanent side, you need to remove the  
current level of firmware (copy the contents of the permanent side to the temporary  
side) before you install the fix.  
However, if you get your fixes using the Advanced features on the HMC interface and  
you indicate that you do not want the service processor to automatically accept the  
firmware level, the contents of the temporary side are not automatically installed on the  
permanent side. In this situation, you do not need to remove the current level of  
firmware to preserve the contents of the permanent side before you install the fix.  
You might want to use the new level of firmware for a period of time to verify that it works  
correctly. When you are sure that the new level of firmware works correctly, you can  
permanently install the server firmware fix. When you permanently install a server firmware  
fix, you copy the temporary firmware level from the temporary side to the permanent side.  
Conversely, if you decide that you do not want to keep the new level of server firmware, you  
can remove the current level of firmware. When you remove the current level of firmware, you  
copy the firmware level that is currently installed on the permanent side from the permanent  
side to the temporary side.  
System firmware download Web site  
For the system firmware download Web site, go to:  
Chapter 2. Architecture and technical overview  
71  
Download from Www.Somanuals.com. All Manuals Search And Download.  
Receive server firmware fixes using an HMC  
If you use an HMC to manage your server and you periodically configure several partitions on
the server, you need to download and install fixes for your server and power
subsystem firmware.
How you get the fix depends on whether the HMC or server is connected to the Internet:  
The HMC or server is connected to the Internet.  
There are several repository locations from which you can download the fixes using the  
HMC. For example, you can download the fixes from your service provider's Web site or  
support system, from optical media that you order from your service provider, or from an  
FTP server on which you previously placed the fixes.  
Neither the HMC nor your server is connected to the Internet (server firmware only).  
You need to download your new server firmware level to a CD-ROM media or FTP server.  
For both of these two options, you can use the interface on the HMC to install the firmware fix  
(from one of the repository locations or from the optical media). The Change Internal Code  
wizard on the HMC provides a step-by-step process for you to perform the procedure to  
install the fix. Perform these steps:  
1. Ensure that you have a connection to the service provider (if you have an Internet  
connection from the HMC or server).  
2. Determine the available levels of server and power subsystem firmware.  
3. Create optical media (if you do not have an Internet connection from the HMC or server).  
4. Use the Change Internal Code wizard to update your server and power subsystem  
firmware.  
5. Verify that the fix installed successfully.  
Receive server firmware fixes without an HMC  
Periodically, you need to install fixes for your server firmware. If you do not use an HMC to  
manage your server, you must get your fixes through your operating system. In this situation,  
you can get server firmware fixes through the operating system regardless of whether your  
operating system is AIX 5L or Linux.  
To do this, complete the following tasks:  
1. Determine the existing level of server firmware using the lsmcode command.
2. Determine the available levels of server firmware.  
3. Get the server firmware.  
4. Install the server firmware fix to the temporary side.  
5. Verify that the server firmware fix installed successfully.  
6. Install the server firmware fix permanently (optional).  
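As an illustration of steps 3 and 4 on an AIX 5L partition, a downloaded firmware image is
typically applied to the temporary side with the update_flash service aid. The following is a
minimal sketch; the directory and image file name are placeholders, and the path and -f flag
reflect commonly documented AIX usage, so verify them for your release:
cd /tmp/fwupdate                                      # placeholder download directory
/usr/lpp/diagnostics/bin/update_flash -f 01SF230_145_120.img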
Note: To view existing levels of server firmware using the lsmcode command, you need to  
have the following service tools installed on your server:  
AIX 5L  
You must have AIX 5L diagnostics installed on your server to perform this task. AIX 5L  
diagnostics are installed when you install AIX 5L on your server. However, it is possible  
to deselect the diagnostics. Therefore, you need to ensure that the online AIX 5L  
diagnostics are installed before proceeding with this task.  
Linux  
– Platform Enablement Library: librtas-nnnnn.rpm  
– Service Aids: ppc64-utils-nnnnn.rpm  
– Hardware Inventory: lsvpd-nnnnn.rpm  
Where nnnnn represents a specific version of the RPM file.  
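A quick, hedged way to verify that these tools are present is to query the base package
names listed above (RPM package naming can vary by distribution):
rpm -q librtas ppc64-utils lsvpd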
If you do not have the service tools on your server, you can download them at the  
following Web site:  
2.15.7 Service processor  
The service processor is an embedded controller running the service processor internal  
operating system. The service processor operating system contains specific programs and  
device drivers for the service processor hardware. The host interface is a 32-bit PCI-X  
interface connected to the Enhanced I/O Controller.  
The service processor is used to monitor and manage the system hardware resources and  
devices. The service processor offers two Ethernet 10/100 Mbps ports:  
Both Ethernet ports are only visible to the service processor and can be used to attach the  
server to an HMC or to access the Advanced System Management Interface (ASMI)  
options from a client Web browser, using the http-server integrated into the service  
processor internal operating system.  
Both Ethernet ports have a default IP address:
– Service processor Eth0 or HMC1 port is configured as 192.168.2.147 with netmask  
255.255.255.0  
– Service processor Eth1 or HMC2 port is configured as 192.168.3.147 with netmask  
255.255.255.0  
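As a hedged illustration, from a client on the service network you can verify that a default
address answers and then point a Web browser at it to reach the ASMI (the https scheme is
an assumption; some firmware levels also answer plain http):
ping 192.168.2.147     # default Service Processor Eth0 or HMC1 address
# Then browse to https://192.168.2.147 and log in to the ASMI.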
For the major functions of the Service Processor, see 3.2.1, “Service processor” on page 83.  
2.15.8 Hardware management user interfaces  
This section provides a brief overview of the different hardware management user interfaces  
available.  
Advanced System Management Interface  
The Advanced System Management Interface (ASMI) is the interface to the service  
processor that enables you to set flags that affect the operation of the server, such as auto  
power restart, and to view information about the server, such as the error log and vital product  
data.  
This interface is accessible using a Web browser on a client system that is connected directly
to the service processor (in this case, you can use either a standard Ethernet cable or a
crossover cable) or through an Ethernet network. Using the network configuration menu, the
ASMI enables you to change the service processor IP addresses and to apply security
policies that block access from undesired IP addresses or ranges. You can also
access the ASMI using a terminal attached to the system service processor ports on the
server, if the server is not HMC-managed. The service processor and the ASMI are standard
on all IBM System p servers.
You might be able to use the service processor's default settings. In that case, accessing the  
ASMI is not necessary.  
Accessing the ASMI using a Web browser  
The Web interface to the Advanced System Management Interface is accessible through, at  
the time of writing, Microsoft® Internet Explorer® 6.0, Netscape 7.1, Mozilla Firefox, or  
Opera 7.23 running on a PC or mobile computer connected to the service processor. The  
Web interface is available during all phases of system operation including the initial program  
load and run time. However, some of the menu options in the Web interface are unavailable  
during IPL or run time to prevent usage or ownership conflicts if the system resources are in  
use during that phase.  
Accessing the ASMI using an ASCII console  
The Advanced System Management Interface on an ASCII console supports a subset of the  
functions provided by the Web interface and is available only when the system is in the  
platform standby state. The ASMI on an ASCII console is not available during some phases  
of system operation, such as the initial program load and run time.  
Accessing the ASMI using an HMC  
To access the Advanced System Management Interface using the Hardware Management  
Console, complete the following steps:  
1. Ensure that the HMC is set up and configured.  
2. In the navigation area, expand the managed system with which you want to work.  
3. Expand Service Applications and click Service Focal Point.  
4. In the content area, click Service Utilities.  
5. From the Service Utilities window, select the managed system with which you want to  
work.  
6. From the Selected menu on the Service Utilities window, select Launch ASM.  
System Management Services  
Use the System Management Services (SMS) menus to view information about your system  
or partition and to perform tasks such as changing the boot list or setting the network  
parameters.  
To start System Management Services, perform the following steps:  
1. For a server that is connected to an HMC, use the HMC to restart the server or partition.  
If the server is not connected to an HMC, stop the system, and then restart the server by  
pressing the power button on the control panel.  
2. For a partitioned server, watch the virtual terminal window on the HMC.  
For a full server partition, watch the firmware console.  
3. Look for the power-on self-test (POST) indicators: memory, keyboard, network, SCSI, and
speaker that appear across the bottom of the screen. Press the numeric 1 key after the
word keyboard appears and before the word speaker appears.
The SMS menus are useful for defining the operating system installation method, choosing the
installation boot device, or setting the boot device priority list for a fully managed server or a
logical partition. In the case of a network boot, SMS menus are provided to set up the network  
parameters and network adapter IP address.  
HMC  
The Hardware Management Console is a system that controls managed systems, including  
IBM System p5 hardware, logical partitions, and Capacity on Demand. To provide flexibility  
and availability, there are different ways to implement HMCs, including a local HMC, remote  
HMC, redundant HMC, and the Web-based System Manager Remote Client.  
Local HMC  
A local HMC is any physical HMC that is directly connected to the server that it manages  
through a private service network. An HMC in a private service network can be a Dynamic
Host Configuration Protocol (DHCP) server from which the managed server obtains the address for
its firmware. Additional local HMCs in your private service network cannot be other DHCP  
servers, but they can be DHCP clients.  
Remote HMC  
A remote HMC is a stand-alone HMC or an HMC installed in a rack that is used to access  
another HMC remotely. A remote HMC can be present in an open network.  
Redundant HMC  
A redundant HMC manages a server that is already managed by another HMC. When two  
HMCs manage one server, those HMCs are peers and can be used simultaneously to  
manage the server. The redundant HMC in your private service network is usually a DHCP  
client.  
Web-based System Manager Remote Client  
The Web-based System Manager Remote Client is an application that you typically install
on a PC and that you can download directly from an installed HMC. After you have installed an
HMC, and you have assigned HMC Ethernet IP addresses, you can download the  
Web-based System Manager Remote Client from a Web browser, using the following URL:  
You can then use the PC to access other HMCs remotely. Web-based System Manager  
Remote Clients can be present in private and open networks. You can perform most  
management tasks using the Web-based System Manager Remote Client.  
The remote HMC and the Web-based System Manager Remote Client allow you the flexibility  
to access your managed systems (including HMCs) from multiple locations using multiple  
HMCs.  
For more detailed information about the use of the HMC, refer to the IBM Systems Hardware  
Information Center.  
Open Firmware  
An IBM System p5 server has one instance of Open Firmware both when used in the  
partitioned environment and when running as a full system partition. Open Firmware has  
access to all devices and data in the server. Open Firmware is started when the server goes  
through a power-on reset. Open Firmware, which runs in addition to the Hypervisor in a  
partitioned environment, runs in two modes: global and partition. Each mode of Open  
Firmware shares the same firmware binary that is stored in the flash memory.  
In a partitioned environment, Open Firmware runs on top of the global Open Firmware  
instance. The partition Open Firmware is started when a partition is activated. Each partition  
has its own instance of Open Firmware and has access to all the devices assigned to that  
partition. However, each instance of Open Firmware has no access to devices outside of the  
partition in which it runs. Partition firmware resides within the partition memory and is  
replaced when AIX 5L or Linux takes control. Partition firmware is needed only for the time  
that is necessary to load AIX 5L or Linux into the partition server memory.  
The global Open Firmware environment includes the partition manager component. That  
component is an application in the global Open Firmware that establishes partitions and their  
corresponding resources (such as CPU, memory, and I/O slots), which are defined in partition  
profiles. The partition manager manages the operational partitioning transactions. It responds  
to commands from the service processor external command interface that originate in the
application running on the HMC.
You can access the Open Firmware prompt during boot time or by using the ASMI and
selecting the boot to Open Firmware prompt option.
For more information about Open Firmware, refer to Partitioning Implementations for IBM
Eserver p5 Servers, SG24-7039, at:
Chapter 3. RAS and manageability
This chapter provides information about IBM System p5 design features that help lower the
total cost of ownership (TCO). IBM reliability, availability, and serviceability (RAS) technology
allows you to lower your TCO by reducing unplanned downtime. This chapter
includes several features based on the benefits that are available when you use AIX 5L.
Support of these features using Linux can vary.
3.1 Reliability, availability, and serviceability  
Excellent quality and reliability are inherent in all aspects of the IBM System p5 processor  
design and manufacturing. The fundamental objective of the design approach is to minimize  
outages. The RAS features help to ensure that the system operates when required, performs  
reliably, and efficiently handles any failures that might occur. This is achieved using
capabilities that both the hardware and the AIX 5L operating system provide.
The p5-520 or p5-520Q as a POWER5+ server enhances the RAS capabilities that are  
implemented in POWER4-based systems. RAS enhancements available on POWER5 and  
POWER5+ servers are:  
Most firmware updates allow the system to remain operational.  
The ECC has been extended to inter-chip connections for the fabric and processor bus.  
Partial L2 cache deallocation is possible.  
The number of L3 cache line deletes improved from two to ten for better self-healing  
capability.  
The following sections describe the concepts that form the basis of leadership RAS features  
of IBM System p5 systems in more detail.  
3.1.1 Fault avoidance  
IBM System p5 servers are built on a quality-based design that is intended to keep errors  
from happening. This design includes the following features:  
Reduced power consumption and cooler operating temperatures for increased reliability,  
which is enabled by the use of copper circuitry, silicon-on-insulator, and dynamic clock  
gating  
Mainframe-inspired components and technologies  
3.1.2 First-failure data capture  
If a problem should occur, the ability to diagnose that problem correctly is a fundamental  
requirement upon which improved availability is based. The p5-520 and p5-520Q incorporate  
advanced capability in start-up diagnostics and in run-time first-failure data capture (FFDC)
based on strategic error checkers built into the processors.  
Any errors detected by the pervasive error checkers are captured into Fault Isolation  
Registers (FIRs), which can be interrogated by the service processor. The service processor  
has the capability to access system components using special purpose ports or by access to  
the error registers. Figure 3-1 on page 79 shows a schematic of a Fault Register  
Implementation.  
Figure 3-1 Schematic of Fault Isolation Register (FIR) implementation: error checkers in the
CPU, L1 cache, L2/L3 cache, memory, and disk capture a unique fingerprint of each error in
the FIRs, which the service processor reads and logs to non-volatile RAM
The FIRs are important because they enable an error to be uniquely identified, thus enabling  
the appropriate action to be taken. Appropriate actions might include such things as a bus  
retry, ECC correction, or system firmware recovery routines. Recovery routines can include  
dynamic deallocation of potentially failing components.  
Errors are logged into the system non-volatile random access memory (NVRAM) and the  
service processor event history log, along with a notification of the event to AIX 5L for capture  
in the operating system error log. Diagnostic Error Log Analysis (diagela) routines analyze  
the error log entries and invoke a suitable action such as issuing a warning message. If the  
error can be recovered, or after suitable maintenance, the service processor resets the FIRs  
so that they can record any future errors accurately.  
The ability to correctly diagnose any pending or firm errors is a key requirement before any  
dynamic or persistent component deallocation or any other reconfiguration can take place.  
3.1.3 Permanent monitoring  
The service processor (SP) included in the p5-520 or p5-520Q provides a way to monitor the  
system even when the main processor is inoperable.  
Mutual surveillance  
The SP can monitor the operation of the firmware during the boot process, and it can monitor  
the operating system for loss of control. This allows the service processor to take appropriate  
action, including calling for service, when it detects that the firmware or the operating system  
has lost control. Mutual surveillance also allows the operating system to monitor for service  
processor activity and can request a service processor repair action if necessary.  
Environmental monitoring  
Environmental monitoring related to power, fans, and temperature is done by the System  
Power Control Network (SPCN). Environmental critical and non-critical conditions generate  
Early Power-Off Warning (EPOW) events. Critical events (for example, Class 5 ac power loss)  
trigger appropriate signals from the hardware to the impacted components in order to prevent  
any data loss without the operating system or firmware involvement. Non-critical  
environmental events are logged and reported using Event Scan.  
The operating system cannot program or access the temperature threshold using the SP.  
EPOW events can, for example, trigger the following actions:  
Temperature monitoring, which increases the fan’s rotation speed when ambient  
temperature is above a preset operating range.  
Temperature monitoring warns the system administrator of potential environment-related  
problems. It also performs an orderly system shutdown when the operating temperature  
exceeds a critical level.  
Voltage monitoring provides warning and an orderly system shutdown when the voltage is  
out of the operational specification.  
3.1.4 Self-healing  
For a system to be self-healing, it must be able to recover from a failing component by first  
detecting and isolating the failed component, taking it offline, fixing or isolating it, and  
reintroducing the fixed or replacement component into service without any application  
disruption. Examples include:  
Bit steering to redundant memory in the event of a failed memory module to keep the  
server operational  
Bit-scattering, thus allowing for error correction and continued operation in the presence  
of a complete chip failure (Chipkill™ recovery)  
Single bit error correction using ECC without reaching error thresholds for main, L2, and  
L3 cache memory  
L3 cache line deletes extended from 2 to 10 for additional self-healing  
ECC extended to inter-chip connections on fabric and processor bus  
Memory scrubbing to help prevent soft-error memory faults  
Memory reliability, fault tolerance, and integrity  
The p5-520 and p5-520Q use Error Checking and Correcting (ECC) circuitry for system  
memory to correct single-bit and to detect double-bit memory failures. Detection of double-bit  
memory failures helps maintain data integrity. Furthermore, the memory chips are organized  
such that the failure of any specific memory module only affects a single bit within a four-bit  
ECC word (bit-scattering), thus allowing for error correction and continued operation in the  
presence of a complete chip failure (Chipkill recovery). The memory DIMMs also use  
memory scrubbing and thresholding to determine when spare memory modules within each  
bank of memory should be used to replace memory modules that have exceeded their  
threshold of error count (dynamic bit-steering). Memory scrubbing is the process of reading  
the contents of the memory during idle time and checking and correcting any single-bit errors  
that have accumulated by passing the data through the ECC logic. This function is a  
hardware function on the memory controller and does not influence normal system memory  
performance.  
3.1.5 N+1 redundancy  
The use of redundant parts allows the p5-520 and p5-520Q to remain operational with full  
resources:  
Redundant spare memory bits in L1, L2, L3, and main memory  
Redundant fans  
Redundant power supplies (optional)  
Note: With this optional feature, every deskside or rack-mounted p5-520 or p5-520Q
requires two power cords, which are not included in the base order. For maximum
availability, we highly recommend that you connect the power cords from the same p5-520 or
p5-520Q to two separate PDUs in the rack, which are connected to two independent client
power sources. For a deskside p5-520 or p5-520Q, you need to plug the power cords into two
independent power sources in order to achieve maximum availability.
3.1.6 Fault masking  
If corrections and retries succeed and do not exceed threshold limits, the system remains  
operational with full resources, and no intervention is required:  
CEC bus retry and recovery  
PCI-X bus recovery  
ECC Chipkill soft error  
3.1.7 Resource deallocation  
If recoverable errors exceed threshold limits, resources can be deallocated with the system  
remaining operational, allowing deferred maintenance at a convenient time.  
Dynamic or persistent deallocation  
Dynamic deallocation of potentially failing components is nondisruptive, allowing the system  
to continue to run. Persistent deallocation occurs when a failed component is detected, which  
is then deactivated at a subsequent reboot.  
Dynamic deallocation functions include:  
Processor  
L3 cache line delete  
Partial L2 cache deallocation  
PCI-X bus and slots  
For dynamic processor deallocation, the service processor performs a predictive failure  
analysis based on any recoverable processor errors that have been recorded. If these  
transient errors exceed a defined threshold, the event is logged and the processor is  
deallocated from the system while the operating system continues to run. This feature  
(named CPU Guard) enables maintenance to be deferred until a suitable time. Processor  
deallocation can only occur if there are sufficient functional processors (at least two).  
To verify whether CPU Guard has been enabled, run the following command:  
lsattr -El sys0 | grep cpuguard  
If enabled, the output is similar to the following:  
cpuguard enable CPU Guard True
If the output shows CPU Guard as disabled, enter the following command to enable it:  
chdev -l sys0 -a cpuguard='enable'  
Cache or cache-line deallocation is aimed at performing dynamic reconfiguration to bypass  
potentially failing components. This capability is provided for both L2 and L3 caches. Dynamic  
run-time deconfiguration is provided if a threshold of L1 or L2 recovered errors is exceeded.  
In the case of an L3 cache run-time array single-bit solid error, the spare resources are used  
to perform a line delete on the failing line.  
PCI hot-plug slot fault tracking helps prevent slot errors from causing a system machine  
check interrupt and subsequent reboot. This provides superior fault isolation, and the error  
affects only the single adapter. Run-time errors on the PCI bus caused by failing adapters  
result in recovery action. If this is unsuccessful, the PCI device is shut down gracefully. Parity  
errors on the PCI bus itself result in bus retry, and if uncorrected, the bus and any I/O  
adapters or devices on that bus are deconfigured.  
The p5-520 or p5-520Q supports PCI Extended Error Handling (EEH), if it is supported by the  
PCI-X adapter. In the past, PCI bus parity errors caused a global machine check interrupt,  
which eventually required a system reboot in order to continue. In the p5-520 or p5-520Q  
system, hardware, system firmware, and AIX 5L interaction have been designed to allow  
transparent recovery of intermittent PCI bus parity errors and graceful transition to the I/O  
device available state in the case of a permanent parity error in the PCI bus.  
EEH-enabled adapters respond to a special data packet generated from the affected PCI slot  
hardware by calling system firmware, which examines the affected bus, allows the device  
driver to reset it, and continues without a system reboot.  
Persistent deallocation functions include:  
Processor  
Memory  
Deconfigure or bypass failing I/O adapters  
L3 cache  
Following a hardware error that has been flagged by the service processor, the subsequent  
reboot of the system invokes extended diagnostics. If a processor or L3 cache is marked for  
deconfiguration by persistent processor deallocation, the boot process attempts to proceed to  
completion with the faulty device deconfigured automatically. Failing I/O adapters are  
deconfigured or bypassed during the boot process.  
Note: The auto-restart (reboot) option, when enabled, can reboot the system automatically  
following an unrecoverable software error, software hang, hardware failure, or  
environmentally induced failure (such as a loss of the power supply).  
3.1.8 Serviceability  
Increasing service productivity means the system is up and running for a longer time. The  
p5-520 and p5-520Q improve service productivity by providing the functions described in the  
following sections.  
Error indication and LED indicators  
The p5-520 and p5-520Q are designed for client setup of the machine and for the subsequent  
addition of most hardware features. The p5-520 and p5-520Q also allow clients to replace  
service parts (Client Replaceable Unit). To accomplish this, the p5-520 or p5-520Q provides  
internal LED diagnostics that identify the parts that require service. The error is indicated
through a series of light attention signals, starting on the exterior of the system with the
System Attention LED, which is located on the front of the system, and ending with an LED
near the failing Field Replaceable Unit.
For more information about Client Replaceable Units, including videos, see:  
System attention LED  
The attention indicator is represented externally by an amber LED on the operator panel and  
on the back of the system unit. The amber LED indicates that the system is in one of the  
following states:  
Normal state, LED is off.  
Fault state, LED is on solid.  
Identify state, LED is blinking.  
Additional LEDs on I/O components such as PCI-X slots and disk drives provide status  
information such as power, hot-swap, and need for service.  
Concurrent maintenance  
Concurrent Maintenance provides replacement of the following parts while the system  
remains running:  
Disk drives  
Cooling fans  
Power subsystems  
PCI-X adapter cards  
Operator Panel (requires HMC-guided support)  
GX RIO-2/HSL-2 Adapter (FC 2888)  
– All PCI-X adapters connected to the involved RIO loop must be first varied offline from  
the operating system.  
– This concurrent maintenance task requires HMC-guided support.  
3.2 Manageability  
The next sections describe the functions and tools provided for IBM System p5 servers to
ease management.
3.2.1 Service processor  
The service processor (SP) is always working. The CEC can be in the following states:
Power standby mode (power off)  
Operating, ready to start partitions  
Operating with some partitions running and an AIX 5L or Linux system in control of the  
machine.  
The SP is still working and checking the system for errors, ensuring the connection to the  
HMC (if present) for manageability purposes and accepting Advanced System Management  
Interface (ASMI) SSL network connections. The SP provides the capability to view and  
manage the machine-wide settings using the ASMI and allows complete system and partition  
management from the HMC. Also, the surveillance function of the SP is monitoring the  
operating system to check that it is still running and has not stalled.  
Note: The IBM System p5 service processor enables the analysis of a system that does
not boot. The analysis can be performed from the ASMI, an HMC, or an ASCII console
(depending on the presence of an HMC). The ASMI is provided in any case.
Figure 3-2 shows an example of the ASMI accessed from a Web browser.  
Figure 3-2 Advanced System Management main menu  
3.2.2 Partition diagnostics  
The diagnostics consist of stand-alone diagnostics, which are loaded from the DVD-ROM  
drive, and online diagnostics (available in AIX 5L):  
Online diagnostics, when installed, are resident with AIX 5L on the disk or server. They  
can be booted in single-user mode (service mode), run in maintenance mode, or run  
concurrently (concurrent mode) with other applications. They have access to the AIX 5L  
error log and the AIX 5L configuration data:  
– Service mode (requires service mode boot) enables you to check system devices and  
features. Service mode provides the most complete checkout of the system resources.  
All system resources, except the SCSI adapter and the disk drives used for paging,  
can be tested.  
– Concurrent mode enables the normal system functions to continue while you are  
checking selected resources. Because the system is running in normal operation,  
some devices might require additional actions by the user or diagnostic application  
before testing can be done.  
– Maintenance mode enables checking of most system resources. Maintenance mode
provides the same test coverage as service mode. The difference between the
two modes is the way you invoke them. Maintenance mode requires that all activity on
the operating system is stopped. You use the shutdown -m command to stop all activity
on the operating system and put the operating system into maintenance mode (see
the sketch after this list).
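The following is a minimal sketch of entering maintenance mode and starting the online
diagnostics from there (menu navigation within diag is omitted):
shutdown -m     # stop all activity and enter maintenance mode
diag            # start the AIX 5L diagnostics menus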
The System Management Services (SMS) error log is accessible from the SMS menu for  
tests performed through SMS programs. For results of service processor tests, access the  
error log from the service processor menu.  
Note: Because the p5-520 and p5-520Q systems have an optional DVD-ROM (FC 1994)
and DVD-RAM (FC 1993), alternate methods for maintaining and servicing the system
need to be available if you do not order the DVD-ROM or DVD-RAM. You can also use the
Network Install Manager (NIM) server for this purpose.
3.2.3 Service Agent  
Service Agent is an application program that operates on an IBM System p server and  
monitors the server for hardware errors. It reports detected errors, assuming they meet  
certain criteria for severity, to IBM for service with no intervention. It is an enhanced version of  
Service Director™ with a graphical user interface.  
Key things you can accomplish using Service Agent for the IBM System p5, pSeries, and  
RS/6000 include:  
Automatic VPD collection  
Automatic problem analysis  
Problem-definable threshold levels for error reporting  
Automatic problem reporting where service calls are placed to IBM without intervention  
Automatic client notification  
In addition, there are:  
Commonly viewed hardware errors. You can view hardware event logs for any monitored  
machine in the network from any Service Agent host user interface.  
High-availability cluster multiprocessing (HACMP) support for full fallback.  
Network environment support with minimum telephone lines for modems.  
A communication base is provided for the Performance Management (PM/AIX) performance
data collection and reporting tool. For more information about PM/AIX, see:
You use the Service Agent user interface to define machines. After you define the machines,  
they are registered with the IBM Service Agent Server (SAS). During the registration process,  
an electronic key is created that becomes part of your resident Service Agent program. This  
key is used each time the Service Agent places a call for service. The IBM Service Agent  
Server checks the current client service status from the IBM entitlement database. If this  
reveals that you are not on Warranty or MA, the service call is refused and a message is  
posted back using an e-mail notification.  
You can configure Service Agent to connect to IBM using either a modem or a network
connection. In any case, the communication is encrypted and strong authentication is used.
Service Agent sends outbound transmissions only and does not allow any inbound  
connection attempts. Only hardware machine configuration, machine status, or error  
information is transmitted. Service Agent does not access or transmit any other data on the  
monitored systems.  
Three principal ways of communication are possible:  
Dial-up using an attached modem device (uses the AT&T Global Network dialer for  
modem access; it does not accept incoming calls to the modem)  
VPN (IPsec is used in this case)  
HTTPS (can be configured to work with firewalls and authenticating proxies)  
Figure 3-3 shows possible communication paths for an IBM System p5 server that is  
configured to use all the features of Service Agent. In this figure, communication to IBM  
support can be through either a modem or the network. If an HMC is present, Service Agent  
is an integral part of the HMC and, if activated, collects hardware-related information and  
error messages about the entire system and partitions. If software level information (such as  
performance data) is also required, you can also install Service Agent on any of the partitions  
and configure Service Agent to act as either a gateway and a connection manager or as a  
client. When you configure Service Agent as a gateway and a connection manager, it gathers  
data from clients and communicates to IBM on behalf of them.  
Figure 3-3 Service Agent and possible connections to IBM  
Service Agent provides these additional services:  
My Systems: Client and IBM employees authorized by the client can view hardware and  
software information and error messages that are gathered by Service Agent on  
Electronic Services WWW pages at:  
Premium Search: A search service using information gathered by Service Agents (this is a  
paid service that requires a special contract).  
Performance Management: Service Agent provides the means for collecting long-term  
performance data. The data is collected in reports accessed by the client on WWW pages  
of Electronic Services (this is a paid service that requires a special contract).  
You can download the latest version of Service Agent at:  
Service Focal Point  
Traditional service strategies become more complicated in a partitioned environment. Each  
logical partition reports errors it detects, without determining if other logical partitions also  
detect and report the errors. For example, if one logical partition reports an error for a shared  
resource, such as a managed system power supply, other active logical partitions might  
report the same error. The Service Focal Point application helps you to avoid long lists of  
repetitive call-home information by recognizing that these are repeated or duplicate errors  
and correlating them into one error.  
Service Focal Point is an application on the HMC that enables you to diagnose and repair  
problems on the system. In addition, you can use Service Focal Point to initiate service  
functions on systems and logical partitions that are not associated with a particular problem.  
You can configure the HMC to use the Service Agent call-home feature to send IBM event  
information. Service Focal Point is available also in Integrated Virtualization Manager. It  
allows you to manage serviceable events, create serviceable events, manage dumps, and  
collect vital product data (VPD), but no reporting through Service Agent is possible.  
3.2.4 IBM System p5 firmware maintenance  
The IBM System p5, pSeries, and RS/6000 Client-Managed Microcode is a methodology that  
enables you to manage and install microcode updates on IBM System p5, pSeries, and  
RS/6000 systems and associated I/O adapters. The IBM System p5 microcode can be  
installed either from an HMC or from a running partition. For update details, see 2.15.6,
"System firmware" on page 70.
If you use an HMC to manage your server, you can use the HMC interface to view the levels  
of server firmware and power subsystem firmware that are installed on your server and are  
available to download and install.  
Each IBM System p5 server has the following levels of server firmware and power subsystem  
firmware:  
Installed level – This is the level of server firmware or power subsystem firmware that has
been installed and will be installed into memory after the managed system is powered off
and powered on. It is installed on the t side of system firmware. For additional discussion,
see "Temporary and permanent firmware sides" on page 70.
Activated level – This is the level of server firmware or power subsystem firmware that is
active and running in memory.
Accepted level – This is the backup level of server or power subsystem firmware. You can
return to this level of server or power subsystem firmware if you decide to remove the
installed level. It is installed on the p side of system firmware. For additional discussion,
see "Temporary and permanent firmware sides" on page 70.
IBM introduced the Concurrent Firmware Maintenance (CFM) function on System p5 systems  
in system firmware level 01SF230_126_120, which was released on 16 June 2005. This  
function supports applying nondisruptive system firmware service packs to the system  
concurrently (without requiring a reboot to activate changes). For systems that are not  
managed by an HMC, the installation of system firmware is always disruptive.  
The concurrent levels of system firmware can, on occasion, contain fixes that are known as  
deferred. These deferred fixes can be installed concurrently but are not activated until the  
next IPL. For deferred fixes within a service pack, only the fixes in the service pack, which  
cannot be concurrently activated, are deferred. Figure 3-4 shows the system firmware file  
naming convention.  
Figure 3-4 System firmware file naming convention: 01SFXXX_YYY_ZZZ, where XXX is the
firmware release level, YYY is the firmware service pack level, and ZZZ is the last disruptive
firmware service pack level
An installation is disruptive if:  
The release levels (XXX) of currently installed and new firmware are different.  
The service pack level (YYY) and the last disruptive service pack level (ZZZ) are equal in  
new firmware.  
Otherwise, an installation is concurrent if:  
The service pack level (YYY) of the new firmware is higher than the service pack level  
currently installed on the system and the above conditions for disruptive installation are  
not met.  
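The two disruptive conditions can be expressed compactly. The following bash sketch
(is_disruptive is a hypothetical helper, not an IBM tool) parses two 01SFXXX_YYY_ZZZ
level strings and applies the rules above:
# Hypothetical helper: decide whether installing the new level ($2) over
# the currently installed level ($1) is disruptive or concurrent.
is_disruptive() {
    cur_rel=${1:4:3}                                     # XXX of the installed level
    new_rel=${2:4:3}; new_sp=${2:8:3}; new_dis=${2:12:3} # XXX, YYY, ZZZ of the new level
    if [ "$cur_rel" != "$new_rel" ] || [ "$new_sp" = "$new_dis" ]; then
        echo disruptive
    else
        echo concurrent
    fi
}
is_disruptive 01SF230_126_120 01SF230_145_120    # prints: concurrent
is_disruptive 01SF230_126_120 01SF235_160_160    # prints: disruptive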
3.3 Cluster solution  
Today's IT infrastructure requires that servers meet increasing demands, while offering the  
flexibility and manageability to rapidly develop and deploy new services. IBM clustering  
hardware and software provide the building blocks, with availability, scalability, security, and  
single-point-of-management control, to satisfy these needs. The advantages of clusters are:  
Large-capacity data and transaction volumes, including support of mixed workloads  
Scale-up (add processors) or scale-out (add servers) without down time  
Single point-of-control for distributed and clustered server management  
Simplified use of IT resources  
Designed for 24x7 access to data applications  
Business continuity in the event of a disaster  
The POWER processor-based AIX 5L and Linux clusters target scientific and technical
computing, large-scale databases, and workload consolidation. IBM Cluster Systems  
Management software (CSM) is designed to provide a robust, powerful, and centralized way  
to manage a large number of POWER5 processor-based servers, all from a single  
point-of-control. Cluster Systems Management can help lower the overall cost of IT  
ownership by helping to simplify the tasks of installing, operating, and maintaining clusters of  
servers. Cluster Systems Management can provide one consistent interface for managing  
both AIX 5L and Linux nodes (physical systems or logical partitions), with capabilities for  
remote parallel network installation, remote hardware control, and distributed command  
execution.  
Cluster Systems Management for AIX 5L and Linux on POWER processor-based servers is  
supported on the p5-520 and the p5-520Q servers. For hardware control, an HMC is  
required. One HMC can also control several IBM System p5 servers that are part of the  
cluster. If a server that is configured in partition mode (with physical or virtual resources) is  
part of the cluster, all partitions must be part of the cluster.  
Monitoring is much easier to use, and the system administrator can monitor all of the network  
interfaces, not just the switch and administrative interfaces. The management server pushes  
information out to the nodes, which releases the management server from having to trust the  
node. In addition, the nodes do not have to be network-connected to each other. This means  
that giving root access on one node does not mean giving root access on all nodes. The base  
security setup is all performed automatically at installation time.  
For information regarding the IBM Cluster Systems Management for AIX 5L, HMC control,  
cluster building block servers, and cluster software available, visit the following links:  
Cluster 1600  
Cluster 1350™  
The CSM ships with AIX 5L itself (a 60-day Try and Buy license ships with AIX). The CSM  
client side is installed automatically and is ready when you install AIX 5L. So, each system or  
logical partition is cluster-ready.  
The CSM V1.5 on AIX 5L and Linux introduces an optional IBM CSM High Availability  
Management Server feature, which is designed to allow automated failover of the CSM  
management server to a backup management server. In addition, sample scripts for setting
up Network Time Protocol (NTP) and network tuning (AIX 5L only) configurations, and the
capability to copy files across nodes or node groups in the cluster, can improve cluster ease of
use and site customization.
Related publications  
The publications listed in this section are considered particularly suitable for a more detailed  
discussion of the topics that are covered in this Redpaper.  
IBM Redbooks  
For information about ordering these publications, see “How to get IBM Redbooks” on  
page 93. Note that some of the documents that are referenced here might be available in  
softcopy only.  
Advanced POWER Virtualization on IBM System p5, SG24-7940
Partitioning Implementations for IBM eServer p5 Servers, SG24-7039
Advanced POWER Virtualization on IBM eServer p5 Servers: Architecture and Performance Considerations, SG24-5768
IBM eServer pSeries Sizing and Capacity Planning: A Practical Guide, SG24-7071
Problem Solving and Troubleshooting in AIX 5L, SG24-5496
IBM eServer p5 590 and 595 System Handbook, SG24-9119
LPAR Simplification Tools Handbook, SG24-7231
Virtual I/O Server Integrated Virtualization Manager, REDP-4061
IBM eServer p5 590 and 595 Technical Overview and Introduction, REDP-4024
IBM eServer p5 510 Technical Overview and Introduction, REDP-4001
IBM eServer p5 520 Technical Overview and Introduction, REDP-9111
IBM eServer p5 550 Technical Overview and Introduction, REDP-9113
IBM eServer p5 570 Technical Overview and Introduction, REDP-9117
IBM System p5 505 Express Technical Overview and Introduction, REDP-4079
IBM System p5 510 and 510Q Technical Overview and Introduction, REDP-4136
IBM System p5 550 and 550Q Technical Overview and Introduction, REDP-4138
IBM System p5 560Q Technical Overview and Introduction, REDP-4139
Hardware Management Console (HMC) Case Configuration Study for LPAR Management, REDP-3999
Other publications  
These publications are also relevant as further information sources:  
7014 Series Model T00 and T42 Rack Installation and Service Guide, SA38-0577,  
contains information regarding the 7014 Model T00 and T42 Rack, in which you can install  
this server.  
7316-TF3 17-Inch Flat Panel Rack-Mounted Monitor and Keyboard Installation and  
Maintenance Guide, SA38-0643, contains information regarding the 7316-TF3 Flat Panel  
Display, which you can install in your rack to manage your system units.  
IBM eServer Hardware Management Console for pSeries Installation and Operations Guide, SA38-0590, provides information to operators and system administrators about how to use an IBM Hardware Management Console for pSeries (HMC) to manage a system. It also discusses the issues associated with logical partitioning planning and implementation.
Planning for Partitioned-System Operations, SA38-0626, provides information to  
planners, system administrators, and operators about how to plan for installing and using  
a partitioned server. It also discusses some issues associated with planning and  
implementing partitioning.  
RS/6000 and eServer pSeries Diagnostics Information for Multiple Bus Systems, SA38-0509, contains diagnostic information, service request numbers (SRNs), and failing function codes (FFCs).
System p5, eServer p5 Customer service support and troubleshooting, SA38-0538, contains information regarding slot restrictions for adapters that you can use in this system.
System Unit Safety Information, SA23-2652, contains translations of safety information  
used throughout the system documentation.  
Online resources  
These Web sites and URLs are also relevant as further information sources:  
AIX 5L operating system maintenance packages downloads  
News on new computer technologies  
Copper circuitry  
IBM Systems Hardware Information Center documentation  
IBM Systems Information Centers  
IBM microcode downloads  
Support for IBM System p servers  
Technical help database for AIX 5L  
IBMlink  
Linux for IBM System p5  
Microcode Discovery Service  
How to get IBM Redbooks  
You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft  
publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at  
this Web site:  
Help from IBM  
IBM Support and downloads  
IBM Global Services  
IBM System p5 520 and 520Q Technical Overview and Introduction

Redpaper

This IBM Redpaper is a comprehensive guide that covers the IBM System p5 520 and 520Q UNIX servers. It introduces major hardware offerings and discusses their prominent functions.

Professionals who want to acquire a better understanding of IBM System p products should read this document. The intended audience includes:
• Clients
• Marketing representatives
• Technical support professionals
• IBM Business Partners
• Independent software vendors

This document expands the current set of IBM System p documentation and provides a desktop reference that offers a detailed technical description of the IBM System p5 520 and the p5 520Q servers.

This publication does not replace the latest IBM System p marketing materials and tools. It is intended as an additional source of information that you can use, together with existing sources, to enhance your knowledge of IBM server solutions.

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.