IBM System z10 Business Class (z10 BC)
Reference Guide
The New Face of
Enterprise Computing
April 2009
IBM System z10 Business Class (z10 BC)

Overview

In today's world, IT is woven into almost everything that a business does and consequently is pivotal to the business. Yet technology leaders are challenged to manage sprawling, complex distributed infrastructures and the ever growing flow of data while remaining highly responsive to the demands of the business. And they must continually evaluate and decide when and how to adopt a multitude of innovations to keep the company competitive. IBM has a vision that can help—the Dynamic Infrastructure®—an evolutionary model that helps reset the economics of IT and can dramatically improve operational efficiency. It can also help reduce and control rising costs and improve provisioning speed and data center security and resiliency—at any scale. It will allow you to be highly responsive to any user need. And it aligns technology and business—giving you the freedom and the tools you need to innovate and be competitive. IBM System z is an excellent choice as the foundation for a highly responsive infrastructure.

Think Big, Virtually Limitless

The Information Technology industry has recognized the business value of exploiting virtualization technologies on any and all server platforms. The leading edge virtualization capabilities of System z, backed by over 40 years of technology innovation, are the most advanced in the industry. With utilization rates of up to 100%, it's the perfect platform for workload consolidation, both traditional and new.

• Want to deploy dozens—or hundreds—of applications on a single server for lower total cost of ownership? Want a more simplified, responsive infrastructure?
• Want investment protection where new generation technology typically allows application growth at no extra cost?

The virtualization technology found in z/VM® with the System z® platform may help clients achieve all of these operational goals while also helping to maximize the financial return on their System z investments.

The z10 BC can have big advantages over traditional server farms. The z10 BC is designed to reduce energy usage and save floor space when used to consolidate x86 servers. With increased capacity, the z10 BC virtualization capabilities can help to support hundreds of virtual servers in a single 1.42 square meter footprint. When consolidating on System z you can create virtual servers on demand; achieve network savings through HiperSockets™ (internal LAN); improve systems management of virtual servers; and, most importantly, consolidate software from many distributed servers to a single consolidated server.

So why run hundreds of standalone servers when the z10 BC could do the work more efficiently, in a smaller space, at a lower cost, virtually? Less power. Less space. Less impact on the environment.

New world. New business. A whole new mainframe. Meet the IBM System z10™ Business Class (z10 BC™), the technology that could change the way you think about enterprise solutions—the technology that delivers the scalability, flexibility, virtualization, and breakthrough performance you need, at the lower capacity entry point you want. This is the technology that fights old myths and perceptions—it is not just for banks and insurance companies. This is the technology for any business that wants to ramp up innovation, boost efficiencies and lower costs—pretty much any enterprise, any size, any location. This is mainframe technology for a new kind of data center—resilient, responsive, energy efficient—this is z10 BC. And it's about to rewrite the rules and deliver new freedoms for your business. Whether you want to deploy new applications quickly, grow your business without growing IT costs or consolidate your infrastructure for reduced complexity, look no further—

z Can Do IT
More Solutions, More Affordable

Today's businesses with extensive investments in hardware assets and core applications are demanding more from IT—more value, more transactions, more for the money. Above all, they are looking for business solutions that can help enable business growth while driving costs out of the business. System z has an ever growing set of solutions that are being enhanced to help you lower IT costs. From enterprise wide applications such as SAP or Cognos® BI to the consolidation of infrastructure workloads, z10 BC has low cost solutions that also help you save more as your demand grows. So, consider consolidating your IT workloads on the z10 BC server if you want the right solutions on a premier platform at a price you can afford.

IBM System z Parallel Sysplex® technology allows for greater scalability and availability by coupling mainframes together. Using Parallel Sysplex clustering, System z servers are designed for up to 99.999% availability.

The convergence of Service-Oriented Architecture (SOA) and mainframe technologies can also help liberate these core business assets by making it easier to enrich, modernize, extend and reuse them well beyond their original scope of design. The ultimate implementation of flexibility for today's On Demand Business is a Service Oriented Architecture—an IT architectural style that allows you to design your applications to solve real business problems. The z10 BC, along with the inherent strengths and capabilities of multiple operating system choices and innovative System z software solutions from WebSphere®, CICS®, Rational® and Lotus®, strengthens the flexibility of doing SOA and strengthens System z as an enterprise hub.

Special workloads, specialty engines, affordable technology

The z10 BC continues the long history of providing integrated technologies to optimize a variety of workloads. The use of specialty engines can help users expand the use of the mainframe for new workloads, while helping to lower the cost of ownership. The IBM System z specialty engines can run independently or complement each other. For example, the zAAP and zIIP processors enable you to purchase additional processing capacity exclusively for specific workloads, without affecting the MSU rating of the IBM System z model designation. This means that adding a specialty engine will not cause increased charges for IBM System z software running on general purpose processors in the server.

In order of introduction:

The Internal Coupling Facility (ICF) processor was introduced to help cut the cost of Coupling Facility functions by reducing the need for an external Coupling Facility.

The Integrated Facility for Linux (IFL) processor offers support for Linux and brings a wealth of available applications that can be run in a real or virtual environment on the z10 BC. An example is the z/VSE™ strategy, which supports integration between the IFL, z/VSE and Linux on System z to help customers integrate timely production of z/VSE data into new Linux applications, such as data warehouse environments built upon a DB2® data server. To consolidate distributed servers onto System z, the IFL with Linux and the System z virtualization technologies fulfill the qualifications for business-critical workloads as well as for infrastructure workloads. For customers interested in using a z10 BC only for Linux workloads, the z10 BC can be configured as a server with IFLs only.

The System z10 Application Assist Processor (zAAP) is designed to help enable strategic integration of new application technologies such as Java™ technology-based Web applications and XML-based data interchange services with core business database environments. This helps provide a more cost-effective, specialized z/OS® application Java execution environment. Workloads eligible for the zAAP (with z/OS V1.8) include all Java processed via the IBM Solution Developers Kit (SDK) and XML processed locally via z/OS XML System Services.

The System z10 Integrated Information Processor (zIIP) is designed to support select data and transaction processing and network workloads, and thereby make the consolidation of these workloads onto the System z platform more cost effective. Workloads eligible for the zIIP (with z/OS V1.7 or later) include remote connectivity to DB2 to help support these workloads: Business Intelligence (BI), Enterprise Relationship Management (ERP), Customer Relationship Management (CRM) and Extensible Markup Language (XML) applications. In addition to supporting remote connectivity to DB2 (via DRDA over TCP/IP), the zIIP also supports DB2 long running parallel queries—a workload integral to Business Intelligence and Data Warehousing solutions. The zIIP (with z/OS V1.8) also supports IPSec processing, making the zIIP an IPSec encryption engine helpful in creating highly secure connections in an enterprise. In addition, the zIIP (with z/OS V1.10) supports select z/OS Global Mirror (formerly called Extended Remote Copy, XRC) disk copy service functions. z/OS V1.10 also introduces zIIP-Assisted HiperSockets for large messages (available on System z10 servers only).

The new capability provided with z/VM-mode partitions increases flexibility and simplifies systems management by allowing z/VM 5.4 to manage guests to operate Linux on System z on IFLs, to operate z/VSE and z/OS on CPs, to offload z/OS system software overhead, such as DB2 workloads, on zIIPs, and to offer an economical Java execution environment under z/OS on zAAPs, all in the same z/VM LPAR.

The New Face Of System z

IBM's mainframe capabilities are legendary. Customers deploy systems that remain available for years because they are expected to, and continue to, work above expectations. However, these systems have seen significant innovative improvements for running new applications and consolidating workloads in the last few years, and customers can see real gains in price/performance by taking advantage of this new technology.

IBM provides affordable world-class technology to help today's enterprises respond to business conditions quickly and with flexibility. From automation to advanced virtualization technologies to new applications supported by open industry standards such as SOA, IBM servers teamed with IBM's Storage Systems, Global Technology Services and IBM Global Financing help deliver competitive advantages for a Dynamic Infrastructure.

z Can Do IT. The future runs on IBM System z and the future begins today!
z/Architecture

The z10 BC continues the line of upward compatible mainframe processors and retains application compatibility since 1964. The z10 BC supports all z/Architecture®-compliant operating systems. The heart of the processor unit is the IBM z10 Enterprise Quad Core processor chip, running at 3.5 GHz, designed to help improve the execution of CPU intensive workloads.

The z10 BC, like its predecessors, supports 24-, 31-, and 64-bit addressing, as well as multiple arithmetic formats. High-performance logical partitioning via Processor Resource/Systems Manager™ (PR/SM™) is complemented by the industry-leading virtualization support provided by z/VM. The z10 BC is also able to run numerous operating systems concurrently on a single server; these include z/OS, z/VM, z/VSE, z/TPF, TPF and Linux for System z. These operating systems are designed to support existing application investments without anticipated change and help you realize the benefits of the z10 BC. z10 BC—the new business equation.

A change to the z/Architecture on the z10 BC is designed to allow memory to be extended to support large (1 megabyte (MB)) pages. Use of large pages can improve CPU utilization for exploiting applications. Large page support is primarily of benefit for long-running applications that are memory-access-intensive; large pages are not recommended for general use, and short-lived processes with small working sets are normally not good candidates for them.

Large page support is exclusive to System z10 running either z/OS or Linux on System z.
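As a concrete illustration for the Linux on System z case, the sketch below backs a 16 MB buffer with 1 MB page frames through hugetlbfs. The mount point (/dev/hugepages), the reserved huge page count, and the file name are configuration assumptions for this example, not defaults this guide guarantees.

```c
/* Minimal sketch: backing a buffer with 1 MB large pages on Linux on
 * System z via hugetlbfs. Assumes hugetlbfs is mounted at /dev/hugepages
 * and that huge pages have been reserved (e.g. via vm.nr_hugepages). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LARGE_PAGE (1UL << 20)   /* 1 MB page frame */

int main(void)
{
    int fd = open("/dev/hugepages/scratch", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 16 * LARGE_PAGE;  /* sixteen 1 MB frames */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching the region faults in 1 MB frames; a long-running,
     * memory-access-intensive workload then needs far fewer TLB
     * entries to cover the same working set. */
    memset(buf, 0, len);

    munmap(buf, len);
    close(fd);
    unlink("/dev/hugepages/scratch");
    return 0;
}
```

On z/OS, large pages are exploited through operating system services rather than hugetlbfs; the point of the sketch is simply that covering a working set with 1 MB frames instead of 4 KB frames reduces TLB pressure.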
z10 BC Architecture

Rich CISC Instruction Set Architecture (ISA)
• 894 instructions (668 implemented entirely in hardware)
• Multiple address spaces with robust inter-process security
• Multiple arithmetic formats

Architectural extensions for z10 BC
• 50+ instructions added to z10 BC to improve compiled code efficiency
• Enablement for software/hardware cache optimization
• Support for 1 MB page frames
• Full hardware support for decimal floating point via the Hardware Decimal Floating-point Unit (HDFU)

z/Architecture operating system support

Delivering the technologies required to address today's IT challenges takes much more than just a server; it requires all of the system elements to work together. IBM System z10 operating systems and servers are designed with a collaborative approach to exploit each other's strengths.

z/OS

On August 5, 2008, IBM announced z/OS V1.10. This release of the z/OS operating system builds on leadership capabilities, enhances time-tested technologies, and leverages deep synergies with the IBM System z10 and the IBM System Storage™ family of products. z/OS V1.10 supports new capabilities designed to provide:

• Storage scalability. Extended Address Volumes (EAVs) enable you to define volumes as large as 223 GB to relieve storage constraints and help you simplify storage management by providing the ability to manage fewer, large volumes as opposed to many small volumes.
• Application and data serving scalability. Up to 64 engines, up to 1.5 TB per server with up to 1.0 TB of real memory per LPAR, and support for large (1 MB) pages on the System z10 can help provide scale and performance for your critical workloads.
• Intelligent and optimized dispatching of workloads. HiperDispatch can help provide increased scalability and performance of higher n-way System z10 systems by improving the way workload is dispatched within the server.
• Low-cost, high-availability disk solution. The Basic HyperSwap™ capability (enabled by TotalStorage® Productivity Center for Replication Basic Edition for System z) provides a low-cost, single-site, high-availability disk solution which allows the configuration of disk replication services using an intuitive browser-based graphical user interface (GUI) served from z/OS.
• Improved total cost of ownership. zIIP-Assisted HiperSockets for Large Messages, IBM Scalable Architecture for Financial Reporting enabled for zIIP (a service offering of IBM Global Business Services), zIIP-Assisted z/OS Global Mirror (XRC), and additional z/OS XML System Services exploitation of zIIP and zAAP help make these workloads more attractive on System z.
• Improved management of temporary processor capacity. The Capacity Provisioning Manager, which is available on z/OS V1.10, and on z/OS V1.9 with PTFs, can monitor z/OS systems on System z10 servers. Activation and deactivation of temporary capacity can be suggested or performed automatically based on user-defined schedules and workload criteria. RMF™ or an equivalent function is required to use the Capacity Provisioning Manager.
• Improved network security. z/OS Communications Server introduces new defensive filtering capability. Defensive filters are evaluated ahead of configured IP filters, and can be created dynamically, which can provide added protection and minimal disruption of services in the event of an attack.
• Improved productivity. z/OS V1.10 provides improvements in or new capabilities for: simplifying diagnosis and problem determination; expanded Health Check Services; network and security management; automatic dump and re-IPL capability; as well as overall z/OS, I/O configuration, sysplex, and storage operations.

With z/OS 1.9, IBM delivers functionality that continues to solidify System z leadership as the premier data server. z/OS 1.9 offers enhancements in the areas of security, networking, scalability, availability, application development, integration, and improved economics with more exploitation for specialty engines. A foundational element of the platform is the z/OS tight interaction with the System z hardware and its high level of system integrity.

With z/OS 1.9, IBM introduces:

• A revised and expanded Statement of z/OS System Integrity
• Large Page Support (1 MB)
• Capacity Provisioning™
• Support for up to 64 engines in a single image (on the IBM System z10 Enterprise Class (z10 EC™) model only)
• Simplified and centralized policy-based networking
• Expanded IBM Health Checker
• Simplified RACF® Administration
• Hardware Decimal Floating Point
• Parallel Sysplex support for InfiniBand® Coupling Links
• NTP Support for STP
• HiperSockets Multiple Write Facility
• OSA-Express3 support
• Advancements in ease of use for both new and existing IT professionals coming to z/OS
• Support for zIIP-assisted IPSec, System Data Mover (SDM) offload to zIIP, and support for eligible portions of DB2 9 XML parsing workloads to be offloaded to zAAP processors
• Expanded options for AT-TLS and System SSL network security
• Support for RSA key, ISO Format-3 PIN block, 13-digit through 19-digit PAN data, secure key AES, and SHA algorithms (also supported by z/OS V1.10)
• Improved creation and management of digital certificates with RACF, SAF, and z/OS PKI Services
• Additional centralized ICSF encryption key management functions for applications
• Improved availability with Parallel Sysplex and Coupling Facility improvements
• Enhanced application development and integration with the new System REXX facility, Metal C facility, and z/OS UNIX® System Services commands
• Enhanced Workload Manager in managing discretionary work and zIIP and zAAP workloads

Commitment to system integrity

First issued in 1973, IBM's MVS™ System Integrity Statement and subsequent statements for OS/390® and z/OS stand as a symbol of IBM's confidence in and commitment to the z/OS operating system. Today, IBM reaffirms its commitment to z/OS system integrity.

IBM's commitment includes designs and development practices intended to prevent unauthorized application programs, subsystems, and users from bypassing z/OS security—that is, to prevent them from gaining access, circumventing, disabling, altering, or obtaining control of key z/OS system processes and resources unless allowed by the installation. Specifically, z/OS "System Integrity" is defined as the inability of any program not authorized by a mechanism under the installation's control to circumvent or disable store or fetch protection, access a resource protected by the z/OS Security Server (RACF), or obtain control in an authorized state; that is, in supervisor state, with a protection key less than eight (8), or Authorized Program Facility (APF) authorized. In the event that an IBM System Integrity problem is reported, IBM will always take action to resolve it.

IBM's long-term commitment to System Integrity is unique in the industry, and forms the basis of the z/OS industry leadership in system security. z/OS is designed to help you protect your system, data, transactions, and applications from accidental or malicious modification. This is one of the many reasons System z remains the industry's premier data server for mission-critical workloads.

z/VM

z/VM V5.4 is designed to extend its System z virtualization technology leadership by exploiting more capabilities of System z servers, including:

• Greater flexibility, with support for the new z/VM-mode logical partitions, allowing all System z processor types (CPs, IFLs, zIIPs, zAAPs, and ICFs) to be defined in the same z/VM LPAR for use by various guest operating systems
• Capability to install Linux on System z as well as z/VM from the HMC on a System z10, eliminating the need for any external network setup or a physical connection between an LPAR and the HMC
• Enhanced physical connectivity by exploiting all OSA-Express3 ports, helping service the network and reducing the number of required resources
• Dynamic memory upgrade support that allows real memory to be added to a running z/VM system. With z/VM V5.4, memory can be added nondisruptively to individual guests that support the dynamic memory reconfiguration architecture. Systems can now be configured to reduce the need to re-IPL z/VM. Processors, channels, OSA adapters, and now memory can be dynamically added to both the z/VM system itself and to individual guests.

The operation and management of virtual machines has been enhanced with new systems management APIs, improvements to the algorithm for distributing a guest's CPU share among virtual processors, and usability enhancements for managing a virtual network.

Security capabilities of z/VM V5.4 include an upgraded LDAP server at the functional level of the z/OS V1.10 IBM Tivoli® Directory Server for z/OS and enhancements to the RACF Security Server to create LDAP change log entries in response to updates to RACF group and user profiles, including user passwords and password phrases. The z/VM SSL server now operates in a CMS environment, instead of requiring a Linux distribution, thus allowing encryption services to be deployed more quickly and helping to simplify installation, service, and release-to-release migration.
The z/VM hypervisor is designed to help clients extend the business value of mainframe technology across the enterprise by integrating applications and data while providing exceptional levels of availability, security, and operational ease. z/VM virtualization technology is designed to provide the capability for clients to run hundreds to thousands of Linux servers on a single mainframe, together with other System z operating systems such as z/OS, or as a large-scale Linux-only enterprise-server solution. z/VM V5.4 can also help to improve productivity by hosting non-Linux workloads such as z/OS, z/VSE, and z/TPF.

On August 5, 2008, IBM announced z/VM 5.4. Enhancements in z/VM 5.4 include:

• Increased flexibility with support for new z/VM-mode logical partitions
• Dynamic addition of memory to an active z/VM LPAR by exploiting System z dynamic storage-reconfiguration capabilities
• Enhanced physical connectivity by exploiting all OSA-Express3 ports
• Capability to install Linux on System z from the HMC without requiring an external network connection
• Enhancements for scalability and constraint relief
• Operation of the SSL server in a CMS environment
• Systems management enhancements for Linux and other virtual images

For the most current information on z/VM, refer to the z/VM Web site.

z/VSE

z/VSE 4.1, the latest advance in the ongoing evolution of VSE, is designed to help address the needs of VSE clients with growing core VSE workloads and/or those who wish to exploit Linux on System z for new, Web-based business solutions and infrastructure simplification.

z/VSE 4.1 is designed to support:

• z/Architecture mode only
• 64-bit real addressing and up to 8 GB of processor storage
• System z encryption technology including CPACF, configurable Crypto Express2, and TS1120 encrypting tape
• Midrange Workload License Charge (MWLC) pricing, including full-capacity and sub-capacity options

IBM has previewed z/VSE 4.2. When available, z/VSE 4.2 is designed to help address the needs of VSE clients with growing core VSE workloads. z/VSE V4.2 is designed to support:

• More than 255 VSE tasks to help clients grow their CICS workloads and to ease migration from CS/VSE to CICS Transaction Server for VSE/ESA™
• Up to 32 GB of processor storage
• Sub-Capacity Reporting Tool running "natively"
• Encryption Facility for z/VSE as an optional priced feature
• IBM System Storage TS3400 Tape Library (via the TS1120 Controller)
• IBM System Storage TS7740 Virtualization Engine Release 1.3

z/VSE V4.2 plans to continue the focus on hybrid solutions exploiting z/VSE and Linux on System z, service-oriented architecture (SOA), and security. It is the preferred replacement for z/VSE V4.1, z/VSE V3, or VSE/ESA. It is designed to protect and leverage existing VSE information assets.

z/TPF

z/TPF is a 64-bit operating system that allows you to move legacy applications into an open development environment, leveraging large scale memory spaces for increased speed, diagnostics and functionality. The open development environment allows access to commodity skills and enhanced access to open code libraries, both of which can be used to lower development costs. Large memory spaces can be used to increase both system and application efficiency, as constraints from I/O and memory management can be eliminated.
z/TPF is designed to support:

• 64-bit mode
• Linux development environment (GCC and HLASM for Linux)
• 32 processors/cluster
• Up to 84* engines/processor
• 40,000 modules
• Workload License Charge

Linux on System z

The System z10 BC supports the following Linux on System z distributions (most recent service levels):

• Novell SUSE SLES 9
• Novell SUSE SLES 10
• Red Hat RHEL 4
• Red Hat RHEL 5

Operating system support summary:

Operating System                                             ESA/390 (31-bit)   z/Architecture (64-bit)
z/OS V1R8, 9 and 10                                          No                 Yes
z/OS V1R7(1)(2) with IBM Lifecycle Extension for z/OS V1.7   No                 Yes
Linux on System z(2), Red Hat RHEL 4, & Novell SUSE SLES 9   Yes                Yes
Linux on System z(2), Red Hat RHEL 5, & Novell SUSE SLES 10  No                 Yes
z/VM V5R2(3), 3(3) and 4                                     No*                Yes
z/VSE V3R1(2)(4)                                             Yes                No
z/VSE V4R1(2)(5) and 2(5)                                    No                 Yes
z/TPF V1R1                                                   No                 Yes
TPF V4R1 (ESA mode only)                                     Yes                No

1. z/OS V1.7 support on the z10 BC requires the Lifecycle Extension for z/OS V1.7, 5637-A01. The Lifecycle Extension for z/OS R1.7 plus the zIIP Web Deliverable is required on z10 to enable HiperDispatch (a zIIP is not required). z/OS V1.7 support was withdrawn September 30, 2008. The Lifecycle Extension for z/OS V1.7 (5637-A01) makes fee-based corrective service for z/OS V1.7 available through September 2009. With this Lifecycle Extension, z/OS V1.7 supports the z10 BC server. Certain functions and features of the z10 BC server require later releases of z/OS. For a complete list of software support, see the PSP buckets and the Software Requirements section of the z10 BC announcement letter, dated October 21, 2008.
2. Compatibility Support for listed releases. Compatibility support allows the OS to IPL and operate on the z10 BC.
3. Requires Compatibility Support, which allows z/VM to IPL and operate on the System z10, providing IBM System z9® functionality for the base OS and guests. *z/VM supports 31-bit and 64-bit guests.
4. z/VSE V3 is 31-bit mode only. It does not implement z/Architecture, and specifically does not implement 64-bit mode capabilities. z/VSE is designed to exploit select features of System z10, System z9, and IBM eServer™ zSeries® hardware.
5. z/VSE V4 is designed to exploit 64-bit real memory addressing, but does not support 64-bit virtual memory addressing.

Note: Refer to the z/OS, z/VM, and z/VSE subsets of the 2098DEVICE Preventive Service Planning (PSP) bucket prior to installing a z10 BC.
z10 BC

The IBM System z10 Business Class (z10 BC) delivers innovative technologies for small and medium enterprises that give you a whole new world of capabilities to run modern applications. Ideally suited for a Dynamic Infrastructure, this competitively priced server delivers unparalleled qualities of service to help manage growth and reduce cost and risk in your business.

The z10 BC further extends the leadership of System z by delivering expanded granularity and optimized scalability for growth, enriched virtualization technology for consolidation of distributed workloads, improved availability and security to help increase business resiliency, and just-in-time management of resources. The z10 BC is at the core of the enhanced System z platform and is the new face of System z.

The z10 BC has the machine type of 2098, with one model (E10) offering from one to ten configurable Processor Units (PUs). This model design offers increased flexibility over the two-model IBM System z9® Business Class (z9 BC) by delivering seamless growth within a single model, both temporary and permanent.

The z10 BC delivers improvements in both granular increments and total scalability compared to previous System z midrange servers, achieved by increasing both the performance of the individual PU and the number of PUs per server. The z10 BC Model E10 is designed to provide up to 1.5 times the total system capacity for general purpose processing, and over 40% more configurable processors, than the z9 BC Model S07.

The z10 BC advances the innovation of the System z10 platform and brings value to a wider audience. It is built using a redesigned air-cooled drawer package which replaces the prior "book" concept in order to reduce cost and increase flexibility. A redesigned I/O drawer offers higher availability and can be concurrently added or replaced when at least two drawers are installed. Reduced capacity, reduced price I/O features will continue to be offered on the z10 BC to help lower your total cost of acquisition.

The quad core z10 processor chip delivers higher frequency, introduced at 3.5 GHz, which can help improve the execution of CPU intensive workloads on the z10 BC. These design approaches facilitate the high availability, dynamic capabilities and lower cost that differentiate the z10 BC from other servers.

The z10 BC supports from 4 GB up to 248 GB of real customer memory—almost four times the maximum memory available on the z9 BC. The increased available memory on the server can benefit workloads that perform better with larger memory configurations, such as DB2, WebSphere and Linux. In addition to the customer purchased memory, an additional 8 GB of memory is included for the Hardware System Area (HSA). The HSA holds the I/O configuration data for the server and is entirely fenced off from customer memory.

High speed connectivity and high bandwidth out to the data and the network are critical in achieving high levels of transaction throughput and enabling resources inside and outside the server to satisfy application requirements. The z10 BC has a host bus interface with a link data rate of 6 GBps using the industry standard InfiniBand protocol to help satisfy requirements for coupling (ICF and server-to-server connectivity), cryptography (Crypto Express2 with secure coprocessors and SSL transactions), I/O (ESCON®, FICON® or FCP) and LAN (OSA-Express3 Gigabit, 10 Gigabit and 1000BASE-T Ethernet features). High Performance FICON for System z (zHPF) also brings new levels of performance when accessing data on enabled storage devices such as the IBM System Storage DS8000™.
PUs defined as Internal Coupling Facilities (ICFs), Integrated Facilities for Linux (IFLs), System z10 Application Assist Processors (zAAPs) and System z10 Integrated Information Processors (zIIPs) are no longer grouped together in one pool as on the IBM eServer™ zSeries® 890 (z890), but are each grouped in their own pool, where they can be managed separately. The separation significantly simplifies capacity planning and management for LPARs and can have an effect on weight management, since CP weights and zAAP and zIIP weights can now be managed separately. Capacity BackUp (CBU) features are available for IFLs, ICFs, zAAPs and zIIPs.

LAN connectivity has been enhanced with the introduction of the third generation of Open Systems Adapter-Express (OSA-Express3). This new family of LAN adapters has been introduced to reduce latency and overhead, deliver double the port density of OSA-Express2 and provide increased throughput. The z10 BC continues to support OSA-Express2 1000BASE-T and GbE Ethernet features, and supports IP version 6 (IPv6) on HiperSockets. While OSA-Express2 OSN (OSA for NCP) is still available on the System z10 BC to support the Channel Data Link Control (CDLC) protocol, OSA-Express3 will also provide this function.

Additional channel and networking improvements include support for Layer 2 and Layer 3 traffic, an FCP management facility for z/VM and Linux for System z, FCP security improvements, and Linux support for HiperSockets IPv6. STP enhancements include additional support for NTP clients and STP over InfiniBand links.

Like the System z9 BC, the z10 BC offers a configurable Crypto Express2 feature, with PCI-X adapters that can be individually configured as a secure coprocessor or an accelerator for SSL, and the TKE workstation with optional Smart Card Reader. It also provides the following CP Assist for Cryptographic Function (CPACF) algorithms:

• DES, TDES, AES-128, AES-192, AES-256
• SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
• Pseudo Random Number Generation (PRNG)

The z10 BC is designed to deliver the industry leading Reliability, Availability and Serviceability (RAS) customers expect from System z servers. RAS is designed to reduce all sources of outages—unscheduled, scheduled and planned. Planned outages are further reduced by reducing preplanning requirements.

z10 BC preplanning improvements are designed to avoid planned outages and include:

• Reduced pre-planning to avoid a Power-on Reset (POR)
  – "Fixed" HSA amount
  – Dynamic I/O enabled by default
  – Add Logical Channel Subsystem (LCSS)
  – Change LCSS Subchannel Sets
  – Add/delete logical partitions
• Reduced pre-planning to avoid an LPAR deactivate
  – Change partition logical processor configuration
  – Change partition crypto coprocessor configuration
• CoD – flexible activation/deactivation
• Elimination of unnecessary CBU passwords
• Enhanced Driver Maintenance (EDM) upgrades
  – Multiple "from" sync point support
  – Improved control of channel LIC levels
• Plan ahead memory
• Concurrent I/O drawer add/repair
Additionally, several service enhancements have been designed to avoid unscheduled outages, including a continued focus on firmware quality, a reduced chip count on the Single Chip Module (SCM), and memory subsystem improvements. In the area of scheduled outages, enhancements include a redundant 100 Mb Ethernet service network with VLAN, rebalancing of PSIFB and I/O fanouts, and single processor core sparing and checkstop. Exclusive to the System z10 is the ability to hot swap ICB-4 and InfiniBand hub cards.

With the logical partition (LPAR) group capacity limit on z10 BC, z10 EC, z9 EC and z9 BC, you can now specify LPAR group capacity limits, allowing you to define each LPAR with its own capacity and one or more groups of LPARs on a server. This is designed to allow z/OS to manage the groups in such a way that the sum of the LPARs' CPU utilization within a group will not exceed the group's defined capacity. Each LPAR in a group can still optionally define an individual LPAR capacity limit.
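The sketch below is an illustrative model of the group-capacity rule just described—not actual z/OS WLM code (WLM's real algorithm, for instance, manages to a rolling average rather than an instantaneous proportional scale-back). All names and numbers are hypothetical.

```c
/* Illustrative model of an LPAR group capacity limit: each LPAR may
 * carry its own cap, and the group's combined consumption is managed
 * down to the group's defined capacity. Hypothetical code only. */
#include <stdio.h>

struct lpar {
    const char *name;
    double demand_msu;      /* what the LPAR currently wants to consume */
    double individual_cap;  /* optional per-LPAR cap; 0 = none          */
};

static double effective_demand(const struct lpar *p)
{
    if (p->individual_cap > 0 && p->demand_msu > p->individual_cap)
        return p->individual_cap;   /* honor the LPAR's own limit */
    return p->demand_msu;
}

int main(void)
{
    struct lpar group[] = {
        { "PROD1", 120.0, 0.0  },
        { "PROD2",  80.0, 90.0 },
        { "TEST1",  60.0, 40.0 },
    };
    const double group_cap_msu = 200.0;  /* group capacity limit */
    double total = 0.0;

    for (int i = 0; i < 3; i++)
        total += effective_demand(&group[i]);

    /* If the group overshoots its cap, scale consumption back so the
     * sum does not exceed the group's defined capacity. */
    double scale = total > group_cap_msu ? group_cap_msu / total : 1.0;
    for (int i = 0; i < 3; i++)
        printf("%s: %.1f MSU\n", group[i].name,
               effective_demand(&group[i]) * scale);
    return 0;
}
```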
Enterprises with an IBM System z9 BC or IBM z890 may upgrade to any z10 Business Class model. Model upgrades within the z10 BC are concurrent. If you desire a consolidation platform for your mainframe and Linux capable applications, you can add capacity and even expand your current application workloads in a cost-effective manner. If your traditional and new applications are growing, you may find the z10 BC a good fit with its base qualities of service and its specialty processors designed for assisting with new workloads. Value is leveraged with improved hardware price/performance and System z10 BC software pricing strategies.

The z10 BC has one model with a total of 130 capacity settings available as new build systems and as upgrades from the z9 BC and z890.

The z10 BC model is designed with a Central Processor Complex (CPC) drawer with Single Chip Modules (SCMs) that provides up to 10 Processor Units (PUs) that can be characterized as Central Processors (CPs), IFLs, ICFs, zAAPs or zIIPs.

The z10 BC is specifically designed and optimized for full z/Architecture compatibility. New features enhance enterprise data serving performance, industry leading virtualization capabilities, and energy efficiency at the system and data center levels. The z10 BC is designed to further extend and integrate key platform characteristics such as dynamic flexible partitioning and resource management in mixed and unpredictable workload environments, providing scalability, high availability and Qualities of Service (QoS) to emerging applications such as WebSphere, Java and Linux.

Some of the significant enhancements in the z10 BC that help bring improved performance, availability and function to the platform have been identified above. The following sections highlight the functions and features of the z10 BC.
z10 BC Design and Technology

The System z10 BC is designed to provide balanced system performance. From processor storage to the system's I/O and network channels, end-to-end bandwidth is provided and designed to deliver data where and when it is needed.

Speed and precision in numerical computing are important for all our customers. The z10 BC offers improvements for decimal floating point instructions: each z10 processor chip has its own hardware decimal floating point unit, designed to improve performance over that provided by the System z9. Decimal calculations are often used in financial applications, and those done using other floating point facilities have typically been performed by software through the use of libraries. With a hardware decimal floating point unit, some of these calculations may be done directly and accelerated.
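To see why hardware decimal floating point matters for this class of work, consider the classic mismatch between binary floating point and money values. The sketch below assumes a compiler with the C decimal floating point extension (for example GCC's _Decimal64 type, which on System z can be compiled to the hardware decimal instructions with the appropriate target options, such as -mhard-dfp).

```c
/* 0.10 is not exactly representable in binary floating point, but it
 * is exact in decimal floating point. Assumes GCC-style _Decimal64
 * support; on System z10 these operations can map to the HDFU. */
#include <stdio.h>

int main(void)
{
    double     b = 0.0;       /* binary floating point  */
    _Decimal64 d = 0.0DD;     /* decimal floating point */

    for (int i = 0; i < 100; i++) {
        b += 0.10;            /* accumulates binary rounding error */
        d += 0.10DD;          /* stays exact in decimal            */
    }

    printf("binary sum : %.17g (not exactly 10)\n", b);
    printf("decimal sum: %d cents (exactly 10.00)\n", (int)(d * 100.0DD));
    return 0;
}
```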
The processor subsystem is comprised of one CPC, which houses the processor units (PUs), Storage Controllers (SCs), memory, Self-Timed Interconnects (STI)/InfiniBand (IFB) and the Oscillator/External Time Reference (ETR). The z10 BC design provides growth paths up to a 10 engine system where each of the 10 PUs has full access to all system resources, specifically memory and I/O.

The design of the z10 BC provides the flexibility to configure the PUs for different uses. There are 12 PUs per system, two of which are designated as standard System Assist Processors (SAPs). The remaining 10 PUs are available to be characterized as CPs, ICF processors for Coupling Facility applications, IFLs for Linux applications and z/VM hosting Linux as a guest, System z10 Application Assist Processors (zAAPs), System z10 Integrated Information Processors (zIIPs), or as optional SAPs, providing you with tremendous flexibility in establishing the best system for running your applications.

The z10 BC uses the same processor chip as the z10 EC, relying on only 3 of the 4 functional cores per chip. Each chip is individually packaged in an SCM. Four SCMs are plugged into the processor board, providing the 12 PUs for the design. The clock frequency is 3.5 GHz.

With three active cores per chip, each PU has an L1 cache divided into a 64 KB cache for instructions and a 128 KB cache for data, plus a 3 MB L1.5 cache. Each L1 cache has a Translation Look-aside Buffer (TLB) of 512 entries associated with it. The PU, which uses a high-frequency z/Architecture microprocessor core, is built on CMOS 11S chip technology and has a cycle time of approximately 0.286 nanoseconds.

The z10 BC can support from the 4 GB minimum memory up to 248 GB of available real memory per server for growing application needs. A new 8 GB fixed HSA is managed separately from customer memory. This fixed HSA is designed to improve availability by avoiding the outages that were necessary on prior models to increase its size. There are up to 12 I/O interconnects per system at 6 GBps each.

The PU chip includes data compression and cryptographic functions. Hardware data compression can play a significant role in improving performance and saving costs over doing compression in software. Standard clear key cryptographic processors integrated as part of the PU translate to high-speed cryptography for protecting data in storage.

The z10 BC supports a combination of Memory Bus Adapter (MBA) and Host Channel Adapter (HCA) fanout cards. New MBA fanout cards are used exclusively for ICB-4, and new ICB-4 cables are needed for the z10 BC. The InfiniBand Multiplexer (IFB-MP) card replaces the Self-
Timed Interconnect Multiplexer (STI-MP) card. There are two types of HCA fanout cards: the HCA2-C, which is copper and is always used to connect to I/O (the IFB-MP card), and the HCA2-O, which is optical and is used for customer InfiniBand coupling.

z10 BC Model
The z10 BC has one model, the E10 (Machine Type 2098), offering from 1 to 10 processor units (PUs), which can be configured to provide a highly scalable solution designed to meet the needs of both high transaction processing applications and On Demand Business. The PUs can be characterized as CPs, IFLs, ICFs, zAAPs, zIIPs or optional SAPs. An easy-to-enable ability to "turn off" CPs or IFLs is available on the z10 BC, allowing you to purchase capacity for future use with minimal or no impact on software billing. An MES feature will enable the "turned off" CPs or IFLs for use when you require the increased capacity. There is a wide range of upgrade options available for getting to and growing within the z10 BC.

The z10 BC has been designed to offer high performance and an efficient I/O structure. The z10 BC ships with a single frame, the A-Frame, which supports the installation of up to four I/O drawers. Each drawer supports up to eight I/O cards, four in front and four in the rear, providing support for up to 480 channels (32 I/O features).

To increase the I/O device addressing capability, the I/O subsystem has been enhanced by introducing support for multiple subchannel sets (MSS), which are designed to allow improved device connectivity for Parallel Access Volumes (PAVs). To support the highly scalable system design, the z10 BC I/O subsystem uses the Logical Channel SubSystem (LCSS), which provides the capability to install up to 512 CHPIDs across the I/O drawers (256 per operating system image). The Parallel Sysplex Coupling Link architecture and technology continues to support high speed links, providing efficient transmission between the Coupling Facility and z/OS systems. HiperSockets provides high-speed capability to communicate among virtual servers and logical partitions. HiperSockets is now improved with IP version 6 (IPv6) support; it is based on high speed TCP/IP memory speed transfers and provides value in allowing applications running in one partition to communicate with applications running in another without dependency on an external network. Industry standards and openness are design objectives for I/O on the z10 BC.

The z10 BC hardware model number (E10) on its own does not indicate the number of PUs which are being used as CPs. For software billing purposes only, there will be a Capacity Indicator associated with the number of PUs that are characterized as CPs. This number will be reported by the Store System Information (STSI) instruction for software billing purposes only. There is no affinity between the hardware model and the number of CPs.
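On Linux on System z, the data STSI returns is surfaced through /proc/sysinfo, so the machine type (2098 for a z10 BC) and model information can be read without privileged code. A minimal sketch, assuming the usual /proc/sysinfo field layout (verify the exact field names on your distribution):

```c
/* Read the machine type and model lines from /proc/sysinfo, which the
 * Linux on System z kernel populates from STSI data. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/sysinfo", "r");
    if (!f) { perror("/proc/sysinfo"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        /* e.g. "Type:                 2098" plus model/capacity lines */
        if (!strncmp(line, "Type:", 5) || !strncmp(line, "Model", 5))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```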
z10 BC capacity identifiers: nxx, where n = subcapacity engine size and xx = number of CPs

• Total of 130 Capacity Indicators for "software settings"
• A00 for systems with IFL(s) or ICF(s) only

Memory:

• DIMM sizes: 2 GB and 4 GB
• Maximum physical memory: 256 GB per system
  – Minimum physical installed = 16 GB, of which 8 GB is for the fixed HSA
• Customer memory increments: from 8 to 32 GB in 4 GB increments, and from 32 to 248 GB in 8 GB increments
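The increment rule above is easy to express as a checkable predicate; the sketch below is illustrative only, with the boundary values taken straight from the bullets (and the 4 GB minimum configuration mentioned earlier treated as a special case):

```c
/* Valid customer memory sizes: 4 GB minimum, then 4 GB steps from 8 to
 * 32 GB and 8 GB steps from 32 to 248 GB. Illustrative sketch. */
#include <stdbool.h>
#include <stdio.h>

static bool valid_customer_memory_gb(unsigned gb)
{
    if (gb == 4)              return true;         /* minimum memory   */
    if (gb >= 8 && gb <= 32)  return gb % 4 == 0;  /* 8, 12, ... 32    */
    if (gb > 32 && gb <= 248) return gb % 8 == 0;  /* 40, 48, ... 248  */
    return false;
}

int main(void)
{
    printf("%d %d %d %d\n",
           valid_customer_memory_gb(12),   /* 1                        */
           valid_customer_memory_gb(36),   /* 0: not on an 8 GB step   */
           valid_customer_memory_gb(40),   /* 1                        */
           valid_customer_memory_gb(256)); /* 0: above the 248 GB max  */
    return 0;
}
```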
z10 BC Model Capacity IDs:

• A00, and A01 to Z01, A02 to Z02, A03 to Z03, A04 to Z04, and A05 to Z05
• Capacity setting A00 does not have any CP engines
• nxx, where n = the capacity setting of the engine (A, smallest, through Z, largest) and xx = the number of PUs characterized as CPs in the CPC

[Capacity matrix and upgrade diagram: the 130 capacity settings form a 26 x 5 grid—A01 through Z01 (1-way), A02 through Z02 (2-way), A03 through Z03 (3-way), A04 through Z04 (4-way) and A05 through Z05 (5-way)—with specialty engines available at every setting. Upgrade paths: z9 BC Models R07 and S07 and z890 Model A04 upgrade to the z10 BC Model E10, and the z10 BC Model E10 upgrades to the z10 EC Model E12.]

z10 BC model upgrades

The z10 BC provides for dynamic and flexible capacity growth for mainframe servers. There are full upgrades within the z10 BC and upgrades from any z9 BC or z890 to any z10 BC. Temporary capacity upgrades are available through On/Off Capacity on Demand (CoD).
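A tiny, hypothetical decoder for the nxx notation used above (not an IBM API) can be handy when scripting against capacity-setting names:

```c
/* Decode a z10 BC capacity ID: a setting letter (A..Z, A = smallest)
 * plus a two-digit CP count (01..05); "A00" marks an IFL/ICF-only
 * configuration. Hypothetical helper, illustrative only. */
#include <ctype.h>
#include <stdio.h>

static int decode_capacity_id(const char *id, char *setting, int *cps)
{
    if (!id || !isupper((unsigned char)id[0]) ||
        !isdigit((unsigned char)id[1]) || !isdigit((unsigned char)id[2]))
        return -1;
    *setting = id[0];                          /* engine capacity: A..Z */
    *cps = (id[1] - '0') * 10 + (id[2] - '0'); /* CP count: 00..05      */
    return (*cps <= 5) ? 0 : -1;
}

int main(void)
{
    const char *ids[] = { "A00", "J03", "Z05" };
    for (int i = 0; i < 3; i++) {
        char s; int n;
        if (decode_capacity_id(ids[i], &s, &n) == 0)
            printf("%s -> setting %c, %d CP(s)\n", ids[i], s, n);
    }
    return 0;
}
```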
For the z10 BC models, there are twenty-six capacity
settings per engine for central processors (CPs). Sub-
capacity processors have availability of z10 BC features/
functions and any-to-any upgradeability is available within
the sub-capacity matrix. All CPs must be the same capac-
ity setting size within one z10 BC. All specialty engines run
at full speed.
The one for one entitlement to purchase one zAAP and/or
one zIIP for each CP purchased is the same for CPs of
any speed.
z10 BC Performance

The performance design of the z/Architecture can enable the server to support a new standard of performance for applications through expanding upon a balanced system approach. As CMOS technology has been enhanced to support not only additional processing power, but also more PUs, the entire server is modified to support the increase in processing power. The I/O subsystem supports a greater amount of bandwidth than previous generations through internal changes, providing for a larger and faster volume of data movement into and out of the server. Support of larger amounts of data within the server required improved management of storage configurations, made available through integration of the operating system and hardware support of 64-bit addressing. The combined balanced system design allows for increases in performance across a broad spectrum of work.

Large System Performance Reference

IBM's Large Systems Performance Reference (LSPR) method is designed to provide comprehensive z/Architecture processor capacity ratios for different configurations of Central Processors (CPs) across a wide variety of system control programs and workload environments. For the z10 BC, the z/Architecture processor capacity identifier is defined with an (A0x-Z0x) notation, where x is the number of installed CPs, from one to five. There are a total of 26 subcapacity levels, designated by the letters A through Z.

In addition to the general information provided for z/OS V1.9, the LSPR also contains performance relationships for z/VM and Linux operating environments.

Based on an LSPR mixed workload, the performance of the z10 BC (2098) Z01 is expected to be up to 1.4 times that of the z9 BC (2096) Z01.
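As a back-of-envelope use of such a ratio, the sketch below scales a hypothetical measured z9 BC Z01 internal throughput by the published 1.4x factor. The baseline number is invented purely for illustration; real sizing should use the full LSPR workload data.

```c
/* Scale a measured baseline ITR by an LSPR-style capacity ratio to
 * project the follow-on machine. Baseline value is made up. */
#include <stdio.h>

int main(void)
{
    double z9bc_z01_itr  = 100.0;   /* hypothetical measured baseline */
    double lspr_ratio    = 1.4;     /* z10 BC Z01 vs z9 BC Z01        */
    double z10bc_z01_itr = z9bc_z01_itr * lspr_ratio;

    printf("Projected z10 BC Z01 ITR: %.1f (vs z9 BC Z01 %.1f)\n",
           z10bc_z01_itr, z9bc_z01_itr);
    return 0;
}
```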
Moving from a System z9 partition to an equivalently sized System z10 BC partition, a z/VM workload will experience an ITR ratio that is somewhat related to the workload's instruction mix, MP factor, and level of storage overcommitment. Workloads with higher levels of storage overcommitment or higher MP factors are likely to experience lower than average z10 BC to z9 ITR scaling ratios. The range of likely ITR ratios is wider than the range has been for previous processor migrations.

The LSPR contains the Internal Throughput Rate Ratios (ITRRs) for the z10 BC and the previous-generation zSeries processor families based upon measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, and the workload processed. Therefore no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated. For more detailed performance information, consult the Large Systems Performance Reference (LSPR).

CPU Measurement Facility

The CPU Measurement Facility is a hardware facility which consists of counters and samples. The facility provides a means to collect run-time data for software performance tuning. The detailed architecture information for this facility can be found in the System z10 Library in Resource Link™.
z10 BC I/O Subsystem

A new host bus interface using InfiniBand, with a link data rate of 6 GBps, was introduced on the z10 BC. It provides enough throughput to support the full capacity and processing power of the CPC. The z10 BC contains an I/O subsystem infrastructure which uses up to four I/O drawers, each providing eight I/O slots. There are two I/O domains per drawer, and four I/O cards per domain. I/O cards are horizontal and may be added concurrently. Concurrent replacement and/or repair is available with systems containing more than one I/O drawer. Drawers may be added concurrently should the need for more connectivity arise.

ESCON, FICON Express4, FICON Express2, FICON Express, OSA-Express3, OSA-Express2, and Crypto Express2 features plug into the z10 BC I/O drawer along with any ISC-3s and InfiniBand Multiplexer (IFB-MP) cards. All I/O features and their support cards can be hot-plugged in the I/O drawer. Each model ships with one I/O drawer as standard in the A-Frame (the A-Frame also contains the Central Processor Complex [CPC]), where the I/O drawers are installed. Each IFB-MP has a bandwidth of up to 6 GigaBytes per second (GB/sec) for I/O domains, and MBA fanout cards provide 2.0 GB/sec for ICB-4s.

The z10 BC continues to support all of the features announced with the System z9 BC, such as:

• Logical Channel Subsystems (LCSSs) and support for up to 30 logical partitions
• Increased number of Subchannels (63.75k)
• Multiple Subchannel Sets (MSS)
• Redundant I/O Interconnect
• Physical Channel IDs (PCHIDs)
• System Initiated CHPID Reconfiguration
• Logical Channel SubSystem (LCSS) Spanning

System I/O Configuration Analyzer

Today the information needed to manage a system's I/O configuration has to be obtained from many separate applications. The System's I/O Configuration Analyzer (SIOA) tool is a SE/HMC-based tool that allows the system hardware administrator access to the information from these many sources in one place. This makes it much easier to manage I/O configurations, particularly across multiple CPCs. The SIOA is a "view-only" tool; it does not offer any options other than viewing options.

First the SIOA tool analyzes the current active IOCDS on the SE. It extracts information about the defined channels, partitions, link addresses and control units. Next the SIOA tool asks the channels for their node ID information. The FICON channels support remote node ID information, so that is also collected from them. The data is then formatted and displayed on five screens:

1) PCHID Control Unit Screen – shows PCHIDs, CSS, CHPIDs and their control units
2) PCHID Partition Screen – shows PCHIDs, CSS, CHPIDs and what partitions they are in
3) Control Unit Screen – shows the control units, their PCHIDs and their link addresses in each of the CSSs
4) Link Load Screen – shows the link address and the PCHIDs that use it
5) Node ID Screen – shows the node ID data under the PCHIDs

The SIOA tool allows the user to sort on various columns and export the data to a USB flash drive for later viewing.
z10 BC Channels and I/O Connectivity

ESCON Channels

The z10 BC supports up to 480 ESCON channels. The high density ESCON feature has 16 ports, 15 of which can be activated for customer use. One port is always reserved as a spare, which is activated in the event of a failure of one of the other ports. For high availability, the initial order of ESCON features will deliver two 16-port ESCON features, with the active ports distributed across those features.

Fibre Channel Connectivity

The on demand operating environment requires fast data access, continuous data availability, and improved flexibility, all with a lower cost of ownership. The four-port FICON Express4 and FICON Express2 features available on the z9 BC continue to be supported on the System z10 BC.

FICON Express4 Channels

The z10 BC supports up to 128 FICON Express4 channels, each one operating at 1, 2 or 4 Gb/sec auto-negotiated. The FICON Express4 features are available in long wavelength (LX) and short wavelength (SX). For customers exploiting LX, there are two options available for unrepeated distances of up to 4 kilometers (2.5 miles) or up to 10 kilometers (6.2 miles). Both LX features use 9 micron single mode fiber optic cables. The SX feature uses 50 or 62.5 micron multimode fiber optic cables. Each FICON Express4 feature has four independent channels (ports) and can be configured to carry native FICON traffic or Fibre Channel (SCSI) traffic. LX and SX cannot be intermixed on a single feature. The receiving devices must correspond to the appropriate LX or SX feature. The maximum number of FICON Express4 features is 32, using four I/O drawers.

Exclusive to the z10 BC and z9 BC is the availability of a lower cost FICON Express4 2-port feature: the FICON Express4-2C 4KM LX and the FICON Express4-2C SX. These features support two FICON 4 Gbps LX or SX channels respectively. The FICON Express4 2-port cards are designed to operate like the 4-port cards, but with the flexibility of having fewer ports per card.

FICON Express2 Channels

The z10 BC supports carrying forward FICON Express2 channels, each one operating at 1 or 2 Gb/sec auto-negotiated. The FICON Express2 features are available in long wavelength (LX) using 9 micron single mode fiber optic cables and short wavelength (SX) using 50 and 62.5 micron multimode fiber optic cables. Each FICON Express2 feature has four independent channels (ports) and each can be configured to carry native FICON traffic or Fibre Channel (SCSI) traffic. LX and SX cannot be intermixed on a single feature. The maximum number of FICON Express2 features is 20, using four I/O drawers.

Choose the FICON Express4 features that best meet your business requirements. To meet the demands of your Storage Area Network (SAN), provide granularity, facilitate redundant paths, and satisfy your infrastructure requirements, there are five features from which to choose:

Feature                    FC #   Infrastructure      Ports per Feature
FICON Express4 10KM LX     3321   Single mode fiber   4
FICON Express4 4KM LX      3324   Single mode fiber   4
FICON Express4-2C 4KM LX   3323   Single mode fiber   2
FICON Express4 SX          3322   Multimode fiber     4
FICON Express4-2C SX       3318   Multimode fiber     2

Choose the features that best meet your granularity, fiber optic cabling, and unrepeated distance requirements.
FICON Express Channels

The z10 BC also supports carrying forward FICON Express LX and SX channels from the z9 BC and z990, each channel operating at 1 or 2 Gb/sec auto-negotiated. Each FICON Express feature has two independent channels (ports).

The System z10 BC Model E10 is limited to 32 FICON features in total – any combination of FICON Express4, FICON Express2 and FICON Express LX and SX features.
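The feature limits quoted in this section distill into a simple configuration check; the function below is illustrative only (real configurations are validated by IBM ordering tools, and the names here are hypothetical):

```c
/* Limits from this section: at most 32 FICON features in total on a
 * Model E10, at most 32 FICON Express4 features, and at most 20
 * FICON Express2 features. */
#include <stdbool.h>
#include <stdio.h>

static bool ficon_config_ok(int express4, int express2, int express)
{
    if (express4 < 0 || express2 < 0 || express < 0) return false;
    if (express4 > 32) return false;             /* FICON Express4 cap */
    if (express2 > 20) return false;             /* FICON Express2 cap */
    return express4 + express2 + express <= 32;  /* Model E10 total    */
}

int main(void)
{
    printf("%d\n", ficon_config_ok(16, 10, 4)); /* 1: 30 features, ok  */
    printf("%d\n", ficon_config_ok(20, 14, 0)); /* 0: 34 exceeds total */
    return 0;
}
```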
Continued Support of Spanned Channels and Logical
Partitions
The FICON Express4 and FICON Express2, FICON and
FCP (CHPID types FC and FCP) channel types, can be
defined as a spanned channel and can be shared among
logical partitions within and across LCSSs.
The FICON Express4, FICON Express2 and FICON
Express feature conforms to the Fibre Connection (FICON)
architecture and the Fibre Channel (FC) architecture, pro-
viding connectivity between any combination of servers,
directors, switches, and devices in a Storage Area Network
(SAN). Each of the four independent channels (FICON
Express only supports two channels per feature) is capa-
ble of 1 Gigabit per second (Gb/sec), 2 Gb/sec, or 4
Gb/sec (only FICON Express4 supports 4 Gbps) depend-
ing upon the capability of the attached switch or device.
The link speed is auto-negotiated, point-to-point, and is
transparent to users and applications. Not all switches and
devices support 2 or 4 Gb/sec link data rates.
Modes of Operation
There are two modes of operation supported by FICON
Express4 and FICON Express2 SX and LX. These modes
are configured on a channel-by-channel basis – each of
the four channels can be configured in either of two sup-
ported modes.
• Fibre Channel (CHPID type FC), which is native FICON
or FICON Channel-to-Channel (server-to-server)
• Fibre Channel Protocol (CHPID type FCP), which sup-
ports attachment to SCSI devices via Fibre Channel
switches or directors in z/VM, z/VSE, and Linux on
System z10 environments
FICON Express4 and FICON Express2 Performance
Your enterprise may benefit from FICON Express4 and FICON Express2 with:
• Increased data transfer rates (bandwidth)
• Improved performance
• Increased number of start I/Os
• Reduced backup windows
• Channel aggregation to help reduce infrastructure costs

Native FICON Channels
Native FICON channels and devices can help to reduce bandwidth constraints and channel contention to enable easier server consolidation, new application growth, large business intelligence queries and exploitation of On Demand Business.
For more information about FICON, refer to the IBM Redbook IBM System z Connectivity Handbook, SG24-5444. Additional FICON I/O connectivity information is available at: www-03.ibm.com/systems/z/connectivity/.
The FICON Express4, FICON Express2 and FICON
Express channels support native FICON and FICON
Channel-to-Channel (CTC) traffic for attachment to servers,
disks, tapes, and printers that comply with the FICON
architecture. Native FICON is supported by all of the
z10 BC operating systems. Native FICON and FICON
CTC are defined as CHPID type FC.
Figure: Two-site director topologies. In a non-cascaded topology each CEC connects to directors in both sites; in a cascaded topology each CEC connects to local directors only. With Inter Switch Links (ISLs), less fiber cabling may be needed for cross-site connectivity.
Because the FICON CTC function is included as part of
the native FICON (FC) mode of operation, FICON CTC is
not limited to intersystem connectivity (as is the case with
ESCON), but will support multiple device definitions.
FCP Channels
z10 BC supports FCP channels, switches and FCP/SCSI disks with full fabric connectivity under Linux on System z, under z/VM 5.2 (or later) for Linux as a guest, and under z/VSE 3.1 for system usage including install and IPL. Support for FCP devices
means that z10 BC servers are capable of attaching to
select FCP-attached SCSI devices and may access these
devices from Linux on z10 BC and z/VSE. This expanded
attachability means that enterprises have more choices
for new storage solutions, or may have the ability to use
existing storage devices, thus leveraging existing invest-
ments and lowering total cost of ownership for their Linux
implementations.
FICON Support for Cascaded Directors
Native FICON (FC) channels support cascaded directors.
This support is for a single hop configuration only. Two-
director cascading requires a single vendor high integrity
fabric. Directors must be from the same vendor since
cascaded architecture implementations can be unique.
This type of cascaded support is important for disaster
recovery and business continuity solutions because it can
help provide high availability, extended distance connec-
tivity, and (particularly with the implementation of 2 Gb/sec
Inter Switch Links) has the potential for fiber infrastructure
cost savings by reducing the number of channels for inter-
connecting the two sites.
The same FICON features used for native FICON chan-
nels can be defined to be used for Fibre Channel Protocol
(FCP) channels. FCP channels are defined as CHPID type
FCP. The 4 Gb/sec capability on the FICON Express4
channel means that 4 Gb/sec link data rates are available
for FCP channels as well.
FCP – increased performance for small block sizes
The Fibre Channel Protocol (FCP) Licensed Internal Code has been modified to help provide increased I/O operations per second for small block sizes. With FICON Express4, there may be up to 57,000 I/O operations per second (all reads, all writes, or a mix of reads and writes), an 80% increase compared to System z9. These results are achieved in a laboratory environment using one channel configured as CHPID type FCP with no other processing occurring and do not represent actual field measurements. A significant increase in I/O operations per second for small block sizes can also be expected with FICON Express2.

This FCP performance improvement is transparent to operating systems that support FCP, and applies to all the FICON Express4 and FICON Express2 features when configured as CHPID type FCP, communicating with SCSI devices.

FCP full fabric connectivity
FCP full fabric support means that any number of (single vendor) FCP directors/switches can be placed between the server and an FCP/SCSI device, thereby allowing many “hops” through a Storage Area Network (SAN) for I/O connectivity. FCP full fabric connectivity enables multiple FCP switches/directors on a fabric to share links and therefore provides improved utilization of inter-site connected resources and infrastructure.

Figure: FICON and FCP for connectivity to disk, tape, and printers.

SCSI IPL now a base function
The SCSI Initial Program Load (IPL) enablement feature, first introduced on z990 in October of 2003, is no longer required. The function is now delivered as part of the server Licensed Internal Code. SCSI IPL allows an IPL of an operating system from an FCP-attached SCSI disk.

High Performance FICON – improvement in performance and RAS
Enhancements have been made to the z/Architecture and the FICON interface architecture to deliver optimizations for online transaction processing (OLTP) workloads. When exploited by the FICON channel, the z/OS operating system, and the control unit, High Performance FICON for System z (zHPF) is designed to help reduce overhead and improve performance. Additionally, the changes to the architectures offer end-to-end system enhancements to improve reliability, availability, and serviceability (RAS).

zHPF channel programs can be exploited by OLTP I/O workloads – DB2, VSAM, PDSE, and zFS – which transfer small blocks of fixed size data (4K blocks). The zHPF implementation by the DS8000 is exclusively for I/Os that transfer less than a single track of data.

The maximum number of I/Os is designed to be improved by up to 100% for small data transfers that can exploit zHPF. Realistic production workloads with a mix of data transfer sizes can see 30 to 70% of FICON I/Os utilizing zHPF, resulting in up to a 10 to 30% savings in channel utilization. Sequential I/Os transferring less than a single track size (for example, 12x4k bytes/IO) may also benefit.

The FICON Express4 and FICON Express2 features support both the existing FICON protocol and the zHPF protocol concurrently in the server Licensed Internal Code. High Performance FICON is supported by z/OS for DB2, VSAM, PDSE, and zFS applications. zHPF applies to all FICON Express4 and FICON Express2 features (CHPID type FC) and is exclusive to System z10. Exploitation is required by the control unit.
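As a rough, hedged illustration of those utilization figures (assuming a simple linear relationship between the quoted endpoints, which the text does not state), a small Python planning sketch might look like this:

# Hypothetical planning aid, not an IBM tool: interpolate the quoted
# figures (30-70% of FICON I/Os using zHPF maps to roughly 10-30%
# savings in channel utilization) for a given zHPF mix.

def estimated_channel_savings(zhpf_fraction):
    # Clamp to the quoted 30-70% range, then interpolate linearly
    # between the (0.30, 0.10) and (0.70, 0.30) endpoints.
    f = min(max(zhpf_fraction, 0.30), 0.70)
    return 0.10 + (f - 0.30) * 0.5

for f in (0.30, 0.50, 0.70):
    print(f"{f:.0%} zHPF I/Os -> ~{estimated_channel_savings(f):.0%} "
          f"channel utilization savings")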
IBM System Storage DS8000 Release 4.1 delivers new
capabilities to support High Performance FICON for
System z, which can improve FICON I/O throughput on a
DS8000 port by up to 100%. The DS8000 series Licensed
Machine Code (LMC) level 5.4.2xx.xx (bundle version
64.2.xx.xx), or later, is required.
Platform and name server registration in FICON channel
The FICON channel now provides the same information to the fabric as is commonly provided by open systems, registering with the name server in the attached FICON directors. With this information, your storage area network (SAN) can be more easily and efficiently managed, enhancing your ability to perform problem determination and analysis.

Registration allows other nodes and/or SAN managers to query the name server to determine what is connected to the fabric, what protocols are supported (FICON, FCP) and to gain information about the System z10 using the attributes that are registered. The FICON channel is now designed to perform registration with the fibre channel’s Management Service and Directory Service.

It will register:
• Platform’s:
  – Worldwide node name (node name for the platform – same for all channels)
  – Platform type (host computer)
  – Platform name (includes vendor ID, product ID, and vendor specific data from the node descriptor)
• Channel’s:
  – Worldwide port name (WWPN)
  – Node port identification (N_PORT ID)
  – FC-4 types supported (always 0x1B and additionally 0x1C if any Channel-to-Channel (CTC) control units are defined on that channel)
  – Classes of service supported by the channel

Platform registration is a service defined in the Fibre Channel – Generic Services 4 (FC-GS-4) standard (INCITS (ANSI) T11 group).

Platform and name server registration applies to all of the FICON Express4, FICON Express2, and FICON Express features (CHPID type FC). This support is exclusive to System z10 and is transparent to operating systems.

Preplanning and setup of SAN for a System z10 environment
The worldwide port name (WWPN) prediction tool is now available to assist you with preplanning of your Storage Area Network (SAN) environment prior to the installation of your System z10 server. This standalone tool is designed to allow you to set up your SAN in advance, so that you can be up and running much faster once the server is installed. The tool assigns WWPNs to each virtual Fibre Channel Protocol (FCP) channel/port using the same WWPN assignment algorithms a system uses when assigning WWPNs for channels utilizing N_Port Identifier Virtualization (NPIV).

The tool needs the FCP-specific I/O device definitions in the form of a .csv file. This file can either be created manually or exported from Hardware Configuration Definition/Hardware Configuration Manager (HCD/HCM). The tool then creates the WWPN assignments, which are required to set up your SAN, and also creates a binary configuration file that can later be imported by your system.

The WWPN prediction tool can be downloaded from Resource Link and is applicable to all FICON channels defined as CHPID type FCP (for communication with SCSI devices). Check Preventive Service Planning (PSP) buckets for required maintenance.
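For illustration only, the sketch below mimics what such a planning flow could look like: reading FCP device definitions from a .csv file and deriving repeatable WWPNs. The file name, column names, and the hash-based derivation are assumptions for the example; the actual assignment algorithm used by the tool and the system is IBM's own and is not reproduced here.

import csv
import hashlib

# Illustrative only: derive a stable, repeatable WWPN for each virtual
# FCP channel/port from its device definition, so the same input
# definitions always yield the same WWPNs and the SAN can be zoned
# before the server arrives.

def derive_wwpn(css, chpid, devno):
    digest = hashlib.sha256(f"{css}:{chpid}:{devno}".encode()).digest()
    # Use an IEEE "registered" NAA 5 format for the 64-bit name.
    wwpn = bytes([0x50]) + digest[:7]
    return ":".join(f"{b:02x}" for b in wwpn)

# Hypothetical export from HCD/HCM with assumed columns css, chpid, devno.
with open("fcp_devices.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["devno"], derive_wwpn(row["css"], row["chpid"], row["devno"]))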
Extended distance FICON – improved performance at extended distance
An enhancement to the industry standard FICON architecture (FC-SB-3) helps avoid degradation of performance at extended distances by implementing a new protocol for “persistent” Information Unit (IU) pacing. Control units that exploit the enhancement to the architecture can increase the pacing count (the number of IUs allowed to be in flight from channel to control unit). Extended distance FICON also allows the channel to “remember” the last pacing update for use on subsequent operations to help avoid degradation of performance at the start of each new operation.

Improved IU pacing can help to optimize the utilization of the link, for example help keep a 4 Gbps link fully utilized at 50 km, and allows channel extenders to work at any distance, with performance results similar to that experienced when using emulation.

The requirements for channel extension equipment are simplified with the increased number of commands in flight. This may benefit z/OS Global Mirror (Extended Remote Copy – XRC) applications as the channel extension kit is no longer required to simulate specific channel commands. Simplifying the channel extension requirements may help reduce the total cost of ownership of end-to-end solutions.

To support extended distance without performance degradation, the buffer credits in the FICON director must be set appropriately. The number of buffer credits required is dependent upon the link data rate (1 Gbps, 2 Gbps, or 4 Gbps), the maximum number of buffer credits supported by the FICON director or control unit, as well as application and workload characteristics. High bandwidth at extended distances is achievable only if enough buffer credits exist to support the link data rate.

Extended distance FICON is transparent to operating systems and applies to all the FICON Express2 and FICON Express4 features carrying native FICON traffic (CHPID type FC). For exploitation, the control unit must support the new IU pacing protocol. The channel will default to current pacing values when operating with control units that cannot exploit extended distance FICON.

Exploitation of extended distance FICON is supported by IBM System Storage DS8000 series Licensed Machine Code (LMC) level 5.3.1xx.xx (bundle version 63.1.xx.xx), or later.

FICON Express enhancements for Storage Area Networks

N_Port ID Virtualization
N_Port ID Virtualization is designed to allow for sharing of a single physical FCP channel among multiple operating system images. Virtualization function is currently available for ESCON and FICON channels, and is now available for FCP channels. This function offers improved FCP channel utilization due to fewer hardware requirements, and can reduce the complexity of physical FCP I/O connectivity.

Program Directed re-IPL
Program Directed re-IPL is designed to enable an operating system to determine how and from where it had been loaded. Further, Program Directed re-IPL may then request that it be reloaded again from the same load device using the same load parameters. In this way, Program Directed re-IPL allows a program running natively in a partition to trigger a re-IPL. This re-IPL is supported for both SCSI and ECKD devices. z/VM 5.3 provides support for guest exploitation.
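As a hedged back-of-envelope aid (assuming roughly 5 microseconds per kilometer of propagation in fiber and full 2112-byte Fibre Channel frames; actual requirements also depend on the director, control unit, and workload, as noted above), the buffer credit estimate can be sketched as:

import math

# Back-of-envelope estimate, not vendor guidance: buffer credits needed
# to keep a FICON link fully utilized at distance. One credit covers one
# frame in flight, so credits ~ data rate * round-trip time / frame size.

def credits_needed(distance_km, link_gbps, frame_bytes=2112):
    data_rate = link_gbps * 100e6          # FC "Gbps" ~ 100 MB/s per 1G
    round_trip_s = 2 * distance_km * 5e-6  # ~5 us/km each way in fiber
    frames_in_flight = data_rate / frame_bytes * round_trip_s
    return math.ceil(frames_in_flight)

# Example: the 4 Gbps at 50 km case mentioned in the text.
print(credits_needed(50, 4))  # ~95 credits, i.e. roughly 2 per km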
FICON Link Incident Reporting
FICON Link Incident Reporting is designed to allow an operating system image (without operator intervention) to register for link incident reports, which can improve the ability to capture data for link error analysis. The information can be displayed and is saved in the system log.

Serviceability Enhancements
Request Node Identification Data (RNID) is designed to facilitate the resolution of fiber optic cabling problems. You can now request RNID data for a device attached to a native FICON channel.

Local Area Network (LAN) connectivity

OSA-Express3 – the newest family of LAN adapters
The third generation of Open Systems Adapter-Express (OSA-Express3) features has been introduced to help reduce latency and overhead, deliver double the port density of OSA-Express2, and provide increased throughput.

OSA-Express3 for reduced latency and improved throughput
To help reduce latency, the OSA-Express3 features now have an Ethernet hardware data router; what was previously done in firmware (packet construction, inspection, and routing) is now performed in hardware. With direct memory access, packets flow directly from host memory to the LAN without firmware intervention. OSA-Express3 is also designed to help reduce the round-trip networking time between systems. Up to a 45% reduction in latency at the TCP/IP application layer has been measured.

The OSA-Express3 features are also designed to improve throughput for standard frames (1492 byte) and jumbo frames (8992 byte) to help satisfy the bandwidth requirements of your applications. Up to a 4x improvement has been measured (compared to OSA-Express2).

The above statements are based on OSA-Express3 performance measurements performed in a laboratory environment on a System z10 and do not represent actual field measurements. Results may vary.

Choose the OSA-Express3 features that best meet your business requirements
To meet the demands of your applications, provide granularity, facilitate redundant paths, and satisfy your infrastructure requirements, there are seven features from which to choose. In the 10 GbE environment, Short Reach (SR) is being offered for the first time.

Feature                      Infrastructure      Ports per Feature
OSA-Express3 GbE LX          Single mode fiber   4
OSA-Express3 10 GbE LR       Single mode fiber   2
OSA-Express3 GbE SX          Multimode fiber     4
OSA-Express3 10 GbE SR       Multimode fiber     2
OSA-Express3-2P GbE SX       Multimode fiber     2
OSA-Express3 1000BASE-T      Copper              4
OSA-Express3-2P 1000BASE-T   Copper              2

Note that software PTFs or a new release may be required (depending on CHPID type) to support all ports.

Port density or granularity
The OSA-Express3 features have Peripheral Component Interconnect Express (PCI-E) adapters. The preceding table identifies whether the feature has 2 or 4 ports for LAN connectivity. Select the density that best meets your business requirements. Doubling the port density on a single feature helps to reduce the number of I/O slots required for high-speed connectivity to the Local Area Network.

The OSA-Express3 10 GbE features support Long Reach (LR) using 9 micron single mode fiber optic cabling and Short Reach (SR) using 50 or 62.5 micron multimode fiber optic cabling. The connector is new; it is now the small form factor, LC Duplex connector. Previously the SC Duplex connector was supported for LR. The LC Duplex connector is common with FICON, ISC-3, and OSA-Express2 Gigabit Ethernet LX and SX.
OSA-Express3 Ethernet features – Summary of benefits
OSA-Express3 10 GbE LR (single mode fiber), 10 GbE SR (multimode fiber), GbE LX (single mode fiber), GbE SX (multimode fiber), and 1000BASE-T (copper) are designed for use in high-speed enterprise backbones, for local area network connectivity between campuses, to connect server farms to System z10, and to consolidate file servers onto System z10. With reduced latency, improved throughput, and up to 96 ports of LAN connectivity (when all are 4-port features, 24 features per server), you can “do more with less.”

The key benefits of OSA-Express3 compared to OSA-Express2 are:
• Reduced latency (up to 45% reduction) and increased throughput (up to 4x) for applications
• More physical connectivity to service the network and fewer required resources:
  – Fewer CHPIDs to define and manage
  – Reduction in the number of required I/O slots
  – Possible reduction in the number of I/O drawers
  – Double the port density of OSA-Express2
  – A solution to the requirement for more than 48 LAN ports (now up to 96 ports)

The OSA-Express3 features are exclusive to System z10.

OSA-Express2 availability
OSA-Express2 Gigabit Ethernet and 1000BASE-T Ethernet continue to be available for ordering, for a limited time, if you are not yet in a position to migrate to the latest release of the operating system for exploitation of two ports per PCI-E adapter and if you are not resource-constrained. There are operating system dependencies for exploitation of two ports in OSD mode per PCI-E adapter. Whether it is a 2-port or a 4-port feature, only one of the ports will be visible on a PCI-E adapter if operating system exploitation updates are not installed.

Historical summary: Functions that continue to be supported by OSA-Express3 and OSA-Express2:
• Queued Direct Input/Output (QDIO) – uses memory queues and a signaling protocol to directly exchange data between the OSA microprocessor and the network software for high-speed communication.
  – QDIO Layer 2 (Link layer) – for IP (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA) workloads. Using this mode the Open Systems Adapter (OSA) is protocol-independent and Layer-3 independent. Packet forwarding decisions are based upon the Medium Access Control (MAC) address.
  – QDIO Layer 3 (Network or IP layer) – for IP workloads. Packet forwarding decisions are based upon the IP address. All guests share OSA’s MAC address.
• Jumbo frames in QDIO mode (8992 byte frame size) when operating at 1 Gbps (fiber or copper) and 10 Gbps (fiber)
• 640 TCP/IP stacks per CHPID – for hosting more images
• Large send for IPv4 packets – for TCP/IP traffic and CPU efficiency, offloading the TCP segmentation processing from the host TCP/IP stack to the OSA-Express feature
• Concurrent LIC update – to help minimize the disruption of network traffic during an update; when properly configured, designed to avoid a configuration off or on (applies to CHPID types OSD and OSN)
• Multiple Image Facility (MIF) and spanned channels – for sharing OSA among logical channel subsystems
The OSA-Express3 and OSA-Express2 Ethernet features support the following CHPID types:

CHPID   OSA-Express3,             Purpose/Traffic
Type    OSA-Express2 Features
OSC     1000BASE-T                OSA-Integrated Console Controller (OSA-ICC); TN3270E, non-SNA DFT, IPL to CPC and LPARs; operating system console operations
OSD     1000BASE-T, GbE, 10 GbE   Queued Direct Input/Output (QDIO); TCP/IP traffic when Layer 3; protocol-independent when Layer 2
OSE     1000BASE-T                Non-QDIO; SNA/APPN/HPR and/or TCP/IP passthru (LCS)
OSN     1000BASE-T, GbE           OSA for NCP; supports channel data link control (CDLC)

OSA-Express3 10 Gigabit Ethernet LR
The OSA-Express3 10 Gigabit Ethernet (GbE) long reach (LR) feature has two ports. Each port resides on a PCIe adapter and has its own channel path identifier (CHPID). There are two PCIe adapters per feature. OSA-Express3 10 GbE LR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE LR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

OSA-Express3 10 Gigabit Ethernet SR
The OSA-Express3 10 Gigabit Ethernet (GbE) short reach (SR) feature has two ports. Each port resides on a PCIe adapter and has its own channel path identifier (CHPID). There are two PCIe adapters per feature. OSA-Express3 10 GbE SR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE SR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

OSA-Express3 Gigabit Ethernet LX
The OSA-Express3 Gigabit Ethernet (GbE) long wavelength (LX) feature has four ports. Two ports reside on a PCIe adapter and share a channel path identifier (CHPID). There are two PCIe adapters per feature. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE LX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

OSA-Express3 Gigabit Ethernet SX
The OSA-Express3 Gigabit Ethernet (GbE) short wavelength (SX) feature has four ports. Two ports reside on a PCIe adapter and share a channel path identifier (CHPID). There are two PCIe adapters per feature. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE SX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.
OSA-Express3-2P Gigabit Ethernet SX
The OSA-Express3-2P Gigabit Ethernet (GbE) short wavelength (SX) feature has two ports which reside on a single PCIe adapter and share one channel path identifier (CHPID). Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3-2P GbE SX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

Four-port exploitation on OSA-Express3 GbE SX and LX
For the operating system to recognize all four ports on an OSA-Express3 Gigabit Ethernet feature, a new release and/or PTF is required. If software updates are not applied, only two of the four ports will be “visible” to the operating system.

Activating all four ports on an OSA-Express3 feature provides you with more physical connectivity to service the network and reduces the number of required resources (I/O slots, I/O cages, fewer CHPIDs to define and manage).

Four-port exploitation is supported by z/OS, z/VM, z/VSE, z/TPF, and Linux on System z.

OSA-Express3 1000BASE-T Ethernet
The OSA-Express3 1000BASE-T Ethernet feature has four ports. Two ports reside on a PCIe adapter and share a channel path identifier (CHPID). There are two PCIe adapters per feature. Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express3 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).

OSA-Express3-2P 1000BASE-T Ethernet
The OSA-Express3-2P 1000BASE-T Ethernet feature has two ports which reside on a single PCIe adapter and share one channel path identifier (CHPID). Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express3-2P 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs. Software updates are required to exploit both ports.
When configured at 1 Gbps, the 1000BASE-T Ethernet
feature operates in full duplex mode only and supports
jumbo frames when in QDIO mode (CHPID type OSD).
OSA-Express QDIO data connection isolation for the z/VM environment
Multi-tier security zones are fast becoming the network configuration standard for new workloads. Therefore, it is essential for workloads (servers and clients) hosted in a virtualized environment (shared resources) to be protected from intrusion or exposure of data and processes from other workloads.

With Queued Direct Input/Output (QDIO) data connection isolation you:
• Have the ability to adhere to security and HIPAA-security guidelines and regulations for network isolation between the operating system instances sharing physical network connectivity
• Can establish security zone boundaries that have been defined by your network administrators
• Have a mechanism to isolate a QDIO data connection (on an OSA port), ensuring all internal OSA routing between the isolated QDIO data connections and all other sharing QDIO data connections is disabled. In this state, only external communications to and from the isolated QDIO data connection are allowed. If you choose to deploy an external firewall to control the access between hosts on an isolated virtual switch and sharing LPARs, then an external firewall needs to be configured and each individual host and/or LPAR must have a route added to its TCP/IP stack to forward local traffic to the firewall.

Internal “routing” can be disabled on a per QDIO connection basis. This support does not affect the ability to share an OSA-Express port. Sharing occurs as it does today, but the ability to communicate between sharing QDIO data connections may be restricted through the use of this support. You decide whether an operating system’s or z/VM’s Virtual Switch OSA-Express QDIO connection is to be non-isolated (default) or isolated.

QDIO data connection isolation applies to the device statement defined at the operating system level. While an OSA-Express CHPID may be shared by an operating system, the data device is not shared.

QDIO data connection isolation applies to the z/VM 5.3 and 5.4 with PTFs environment and to all of the OSA-Express3 and OSA-Express2 features (CHPID type OSD) on System z10 and to the OSA-Express2 features on System z9.

Network Traffic Analyzer
With the large volume and complexity of today’s network traffic, the z10 BC offers systems programmers and network administrators the ability to more easily solve network problems. With the introduction of the OSA-Express Network Traffic Analyzer and QDIO Diagnostic Synchronization on the System z and available on the z10 BC, customers will have the ability to capture trace/trap data and forward it to z/OS 1.8 tools for easier problem determination and resolution.

This function is designed to allow the operating system to control the sniffer trace for the LAN and capture the records into host memory and storage (file systems), using existing host operating system tools to format, edit, and process the sniffer records.
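A minimal sketch of the isolation rule described above, assuming a simplified model of QDIO data connections (this is illustrative, not z/VM code):

from dataclasses import dataclass

# Simplified model: frames between QDIO data connections sharing one
# OSA port are delivered internally only if NEITHER side is isolated;
# an isolated connection may still talk to the external LAN.

@dataclass
class QdioConnection:
    name: str
    isolated: bool = False  # default is non-isolated, as in the text

def internal_delivery_allowed(src, dst):
    return not (src.isolated or dst.isolated)

lpar_a = QdioConnection("LPARA")
vswitch = QdioConnection("VSWITCH1", isolated=True)
# False: traffic must go external (for example, through a firewall).
print(internal_delivery_allowed(lpar_a, vswitch))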
OSA-Express Network Traffic Analyzer is exclusive to the z10 BC, z9 BC, z10 EC, and z9 EC, is applicable to the OSA-Express3 and OSA-Express2 features when configured as CHPID type OSD (QDIO), and is supported by z/OS.

Dynamic LAN idle for z/OS
Dynamic LAN idle is designed to reduce latency and improve network performance by dynamically adjusting the inbound blocking algorithm. When enabled, the z/OS TCP/IP stack is designed to adjust the inbound blocking algorithm to best match the application requirements.

For latency sensitive applications, the blocking algorithm is modified to be “latency sensitive.” For streaming (throughput sensitive) applications, the blocking algorithm is adjusted to maximize throughput. The z/OS TCP/IP stack can dynamically detect the application requirements, making the necessary adjustments to the blocking algorithm. The monitoring of the application and the blocking algorithm adjustments are made in real time, dynamically adjusting the application’s LAN performance.

System administrators can authorize the z/OS TCP/IP stack to enable a dynamic setting, which was previously a static setting. The z/OS TCP/IP stack is able to help determine the best setting for the current running application, based on system configuration, inbound workload volume, CPU utilization, and traffic patterns.

Link aggregation for z/VM in Layer 2 mode
z/VM Virtual Switch-controlled (VSWITCH-controlled) link aggregation (IEEE 802.3ad) allows you to dedicate an OSA-Express2 (or OSA-Express3) port to the z/VM operating system when the port is participating in an aggregated group and is configured in Layer 2 mode. Link aggregation (trunking) is designed to allow you to combine multiple physical OSA-Express3 and OSA-Express2 ports (of the same type, for example 1GbE or 10GbE) into a single logical link for increased throughput and for nondisruptive failover in the event that a port becomes unavailable. Link aggregation support includes:
• Aggregated link viewed as one logical trunk and containing all of the Virtual LANs (VLANs) required by the LAN segment
• Load balance communications across several links in a trunk to prevent a single link from being overrun
• Link aggregation between a VSWITCH and the physical network switch
• Point-to-point connections
• Up to eight OSA-Express3 or OSA-Express2 ports in one aggregated link
• Ability to dynamically add/remove OSA ports for “on demand” bandwidth
• Full-duplex mode (send and receive)
• Target links for aggregation must be of the same type (for example, Gigabit Ethernet to Gigabit Ethernet)

The Open Systems Adapter/Support Facility (OSA/SF) will provide status information on an OSA port – its “shared” or “exclusive use” state. OSA/SF is an integrated component of z/VM.

Link aggregation is exclusive to System z10 and System z9, is applicable to the OSA-Express3 and OSA-Express2 features in Layer 2 mode when configured as CHPID type OSD (QDIO), and is supported by z/VM 5.3 and later.
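For illustration, here is a sketch of the general 802.3ad-style behavior, assuming a simple MAC-pair hash for conversation-to-port assignment (the actual VSWITCH load-balancing algorithm is not shown here):

import zlib

# Illustrative sketch of link-aggregation load balancing: conversations
# are spread across up to eight ports in the aggregated group, and a
# failed port is simply removed from the group nondisruptively.

class AggregatedLink:
    MAX_PORTS = 8  # up to eight OSA-Express ports per group, per the text

    def __init__(self, ports):
        assert 1 <= len(ports) <= self.MAX_PORTS
        self.ports = list(ports)

    def pick_port(self, src_mac, dst_mac):
        # Hash the MAC pair so one conversation stays on one port.
        key = f"{src_mac}-{dst_mac}".encode()
        return self.ports[zlib.crc32(key) % len(self.ports)]

    def port_failed(self, port):
        self.ports.remove(port)  # traffic re-hashes over remaining ports

group = AggregatedLink(["OSA1", "OSA2", "OSA3"])
print(group.pick_port("02:00:00:00:00:01", "02:00:00:00:00:02"))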
Layer 2 transport mode: When would it be used?
If you have an environment with an abundance of Linux images in a guest LAN environment, or you need to define router guests to provide the connection between these guest LANs and the OSA-Express3 features, then using the Layer 2 transport mode may be the solution. If you have Internetwork Packet Exchange (IPX), NetBIOS, and SNA protocols, in addition to Internet Protocol Version 4 (IPv4) and IPv6, use of Layer 2 could provide “protocol independence.”

The OSA-Express3 features have the capability to perform like Layer 2 type devices, providing the capability of being protocol- or Layer-3-independent (that is, not IP-only). With the Layer 2 interface, packet forwarding decisions are based upon Link Layer (Layer 2) information, instead of Network Layer (Layer 3) information. Each operating system attached to the Layer 2 interface uses its own MAC address. This means the traffic can be IPX, NetBIOS, SNA, IPv4, or IPv6.

An OSA-Express3 feature can filter inbound datagrams by Virtual Local Area Network identification (VLAN ID, IEEE 802.1q), and/or the Ethernet destination MAC address. Filtering can reduce the amount of inbound traffic being processed by the operating system, reducing CPU utilization.

Layer 2 transport mode is supported by z/VM and Linux on System z.

OSA Layer 3 Virtual MAC for z/OS
To simplify the infrastructure and to facilitate load balancing when an LPAR is sharing the same OSA Media Access Control (MAC) address with another LPAR, each operating system instance can now have its own unique “logical” or “virtual” MAC (VMAC) address. All IP addresses associated with a TCP/IP stack are accessible using their own VMAC address, instead of sharing the MAC address of an OSA port. This applies to Layer 3 mode and to an OSA port shared among Logical Channel Subsystems.

This support is designed to:
• Improve IP workload balancing
• Dedicate a Layer 3 VMAC to a single TCP/IP stack
• Remove the dependency on Generic Routing Encapsulation (GRE) tunnels
• Improve outbound routing
• Simplify configuration setup
• Allow WebSphere Application Server content-based routing to work with z/OS in an IPv6 network
• Allow z/OS to use a “standard” interface ID for IPv6 addresses
• Remove the need for PRIROUTER/SECROUTER function in z/OS

OSA Layer 3 VMAC for z/OS is exclusive to System z, and is applicable to OSA-Express3 and OSA-Express2 features when configured as CHPID type OSD (QDIO).
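A conceptual sketch of the difference between the two transport modes, assuming toy forwarding tables (illustrative only):

# The same frame is forwarded by destination MAC in Layer 2 mode, but
# by destination IP in Layer 3 mode -- which is why Layer 2 can carry
# IPX, NetBIOS, SNA, IPv4, or IPv6, while Layer 3 is IP-only.

def forward_layer2(frame, mac_table):
    # Link-layer decision: protocol-independent.
    return mac_table.get(frame["dst_mac"], "flood")

def forward_layer3(frame, ip_table):
    # Network-layer decision: requires an IP packet.
    if "dst_ip" not in frame:
        raise ValueError("non-IP traffic needs Layer 2 mode")
    return ip_table.get(frame["dst_ip"], "default_route")

frame = {"dst_mac": "02:aa:bb:cc:dd:01", "payload": b"SNA data"}
print(forward_layer2(frame, {"02:aa:bb:cc:dd:01": "guest7"}))  # guest7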
Direct Memory Access (DMA)
OSA-Express3 and the operating systems share a common storage area for memory-to-memory communication, reducing system overhead and improving performance. There are no read or write channel programs for data exchange. For write processing, no I/O interrupts have to be handled. For read processing, the number of I/O interrupts is minimized.

Hardware data router
With OSA-Express3, much of what was previously done in firmware (packet construction, inspection, and routing) is now performed in hardware. This allows packets to flow directly from host memory to the LAN without firmware intervention.

With the hardware data router, the “store and forward” technique is no longer used, which enables true direct memory access, a direct host memory-to-LAN flow, returning CPU cycles for application use. This avoids a “hop” and is designed to reduce latency and to increase throughput for standard frames (1492 byte) and jumbo frames (8992 byte).

IBM Communication Controller for Linux (CCL)
CCL is designed to help eliminate hardware dependencies, such as 3745/3746 Communication Controllers, ESCON channels, and Token Ring LANs, by providing a software solution that allows the Network Control Program (NCP) to be run in Linux on System z, freeing up valuable data center floor space.

CCL helps preserve mission critical SNA functions, such as SNI, and z/OS application workloads which depend upon these functions, allowing you to collapse SNA inside a z10 BC while exploiting and leveraging IP.

The OSA-Express3 and OSA-Express2 GbE and 1000BASE-T Ethernet features provide support for CCL. This support is designed to require no changes to operating systems (it does require a PTF to support CHPID type OSN) and also allows TPF to exploit CCL. It is supported by z/VM for Linux and z/TPF guest environments.

OSA-Express3 and OSA-Express2 OSN (OSA for NCP)
OSA-Express for Network Control Program (NCP), Channel path identifier (CHPID) type OSN, is now available for use with the OSA-Express3 GbE features as well as the OSA-Express3 1000BASE-T Ethernet features.

OSA-Express for NCP, supporting the channel data link control (CDLC) protocol, provides connectivity between System z operating systems and IBM Communication Controller for Linux (CCL). CCL allows you to keep your business data and applications on the mainframe operating systems while moving NCP functions to Linux on System z. CCL provides a foundation to help enterprises simplify their network infrastructure while supporting traditional Systems Network Architecture (SNA) functions such as SNA Network Interconnect (SNI).

Communication Controller for Linux on System z (Program Number 5724-J38) is the solution for companies that want to help improve network availability by replacing Token-Ring networks and ESCON channels with an Ethernet network and integrated LAN adapters on System z10, OSA-Express3 or OSA-Express2 GbE or 1000BASE-T.

OSA-Express for NCP is supported in the z/OS, z/VM, z/VSE, TPF, z/TPF, and Linux on System z environments.
Remove L2/L3 LPAR-to-LPAR Restriction
OSA ports shared between virtual switches can now communicate whether the transport mode is the same (Layer 2 to Layer 2) or different (Layer 2 to Layer 3). This enhancement is designed to allow seamless mixing of Layer 2 and Layer 3 traffic, helping to reduce the total cost of networking. Previously, Layer 2 and Layer 3 TCP/IP connections through the same OSA port (CHPID) were unable to communicate with each other LPAR-to-LPAR using the Multiple Image Facility (MIF).

This enhancement is designed to facilitate a migration from Layer 3 to Layer 2 and to continue to allow LAN administrators to configure and manage their mainframe network topology using the same techniques as their non-mainframe topology.

OSA/SF Virtual MAC and VLAN ID Display Capability
The Open Systems Adapter/Support Facility (OSA/SF) has the capability to support virtual Medium Access Control (MAC) and Virtual Local Area Network (VLAN) identifications (IDs) associated with OSA-Express2 features configured as a Layer 2 interface. This information is now displayed as part of an OSA Address Table (OAT) entry. This information is independent of IPv4 and IPv6 formats. There can be multiple Layer 2 VLAN IDs associated to a single unit address. One group MAC can be associated to multiple unit addresses.

OSA Integrated Console Controller
The OSA-Express Integrated Console Controller (OSA-ICC) support is a no-charge function included in Licensed Internal Code (LIC) on z10 BC, z10 EC, z9 EC, z9 BC, z990, and z890 servers. It is available via the OSA-Express2 and OSA-Express 1000BASE-T Ethernet features, and supports Ethernet-attached TN3270E consoles.

The OSA-ICC provides a system console function at IPL time and operating systems support for multiple logical partitions. Console support can be used by z/OS, z/OS.e, z/VM, z/VSE, z/TPF, and TPF. The OSA-ICC also supports local non-SNA DFT 3270 and 328x printer emulation for TSO/E, CICS, IMS, or any other 3270 application that communicates through VTAM.

With the OSA-Express3 and OSA-Express2 1000BASE-T Ethernet features, the OSA-ICC is configured on a port by port basis, using the Channel Path Identifier (CHPID) type OSC. Each port can support up to 120 console session connections, can be shared among logical partitions using Multiple Image Facility (MIF), and can be spanned across multiple Channel Subsystems (CSSs).

For additional information, see the IBM Redbook IBM System z Connectivity Handbook (SG24-5444).
HiperSockets
The HiperSockets function, also known as internal Queued Direct Input/Output (iQDIO) or internal QDIO, is an integrated function of the z10 BC server that provides users with attachments to up to sixteen high-speed “virtual” Local Area Networks (LANs) with minimal system and network overhead. HiperSockets eliminates the need to utilize I/O subsystem operations and the need to traverse an external network connection to communicate between logical partitions in the same z10 BC server.

Now, the HiperSockets internal networks on z10 BC can support two transport modes: Layer 2 (Link Layer) as well as the current Layer 3 (Network or IP Layer). Traffic can be Internet Protocol (IP) version 4 or version 6 (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA). HiperSockets devices are now protocol-independent and Layer 3 independent. Each HiperSockets device has its own Layer 2 Media Access Control (MAC) address, which is designed to allow the use of applications that depend on the existence of Layer 2 addresses, such as DHCP servers and firewalls.

Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network configuration is simplified and intuitive, and LAN administrators can configure and maintain the mainframe environment the same as they do a non-mainframe environment. With support of the new Layer 2 interface by HiperSockets, packet forwarding decisions are now based upon Layer 2 information, instead of Layer 3 information. The HiperSockets device performs automatic MAC address generation and assignment to allow uniqueness within and across logical partitions (LPs) and servers. MAC addresses can also be locally administered. The use of Group MAC addresses for multicast is supported, as well as broadcasts to all other Layer 2 devices on the same HiperSockets network. Datagrams are only delivered between HiperSockets devices that are using the same transport mode (Layer 2 with Layer 2 and Layer 3 with Layer 3). A Layer 2 device cannot communicate directly with a Layer 3 device in another LPAR.

A HiperSockets device can filter inbound datagrams by Virtual Local Area Network identification (VLAN ID, IEEE 802.1q), the Ethernet destination MAC address, or both. Filtering can help reduce the amount of inbound traffic being processed by the operating system, helping to reduce CPU utilization.

Analogous to the respective Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors or multicast routers. This is designed to enable the creation of high performance and high availability Link Layer switches between the internal HiperSockets network and an external Ethernet, or to connect the HiperSockets Layer 2 networks of different servers. The HiperSockets Multiple Write Facility for z10 BC is also supported for Layer 2 HiperSockets devices, thus allowing performance improvements for large Layer 2 datastreams.

HiperSockets Layer 2 support is exclusive to System z10 and is supported by z/OS, Linux on System z environments, and z/VM for Linux guest exploitation.
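For illustration, generating a locally administered, unicast MAC address of the kind described above might look like the following sketch (the firmware's actual generation scheme is an implementation detail not shown here):

import secrets

# Illustrative: a locally administered MAC has the "local" bit set and
# the multicast bit clear in the first octet, so it cannot collide with
# vendor-assigned (universally administered) addresses.

def locally_administered_mac():
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE  # set local bit, clear multicast bit
    return ":".join(f"{b:02x}" for b in octets)

print(locally_administered_mac())  # e.g. '06:1f:9a:...', unique per device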
HiperSockets Multiple Write Facility for increased performance
HiperSockets provides high-speed internal TCP/IP connectivity between logical partitions within a System z server, but it can draw excessive CPU utilization for large outbound messages. This may lead to increased software licensing cost, because HiperSockets large outbound messages are charged to a general purpose CPU, which can incur high general purpose CPU costs. It may also lead to performance issues due to synchronous application blocking: HiperSockets large outbound messages block a sending application while synchronously moving data.
A solution is HiperSockets Multiple Write Facility.
HiperSockets performance has been enhanced to allow
for the streaming of bulk data over a HiperSockets link
between logical partitions (LPARs). The receiving LPAR
can now process a much larger amount of data per I/O
interrupt. This enhancement is transparent to the operating system in the receiving LPAR. HiperSockets Multiple Write Facility, with fewer I/O interrupts, is designed to reduce CPU utilization of the sending and receiving LPAR.

The HiperSockets Multiple Write solution moves multiple output data buffers in one write operation. If the function is disabled, one output data buffer is moved in one write operation; this is how HiperSockets functioned in the past. If the function is enabled, multiple output data buffers are moved in one write operation, reducing the CPU utilization related to large outbound messages. When enabled, HiperSockets Multiple Write will be used anytime a message spans an IQD frame, requiring multiple output data buffers (SBALs) to transfer the message. Spanning multiple output data buffers can be affected by a number of factors including:
• IQD frame size
• Application socket send size
• TCP send size
• MTU size

The HiperSockets Multiple Write Facility is supported in the z/OS environment. For a complete description of the System z10 connectivity capabilities, refer to the IBM System z Connectivity Handbook, SG24-5444.

HiperSockets Enhancement for zIIP Exploitation
In z/OS V1.10, specifically, the z/OS Communications Server allows the HiperSockets Multiple Write Facility processing for outbound large messages originating from z/OS to be performed on a zIIP. The combination of HiperSockets Multiple Write Facility and zIIP enablement is described as “zIIP-Assisted HiperSockets for large messages.” zIIP-Assisted HiperSockets can help make highly secure, available, virtual HiperSockets networking a more attractive option. z/OS application workloads based on XML, HTTP, SOAP, Java, etc., as well as traditional file transfer, can benefit from zIIP enablement by helping to lower general purpose processor utilization for such TCP/IP traffic.

Only outbound z/OS TCP/IP large messages which originate within a z/OS host are eligible for HiperSockets zIIP-Assisted processing. Other types of network traffic such as IP forwarding, Sysplex Distributor, inbound processing, small messages, or other non-TCP/IP network protocols are not eligible for zIIP-Assisted HiperSockets. When the workload is eligible, the TCP/IP HiperSockets device driver layer (write) processing is redirected to a zIIP, which will unblock the sending application. zIIP-Assisted HiperSockets for large messages is available with z/OS V1.10 with PTF and System z10 only. This function is not supported if z/OS is running as a guest in a z/VM environment and is supported for large outbound messages only.

To estimate the potential zIIP offload, use PROJECTCPU for current and existing workloads. This is accurate and very simple, but you must be running z/OS 1.10 with the enabling PTFs on a System z10 server, and you need to be performing HiperSockets Multiple Write workload already on z/OS.
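To make the buffer-spanning rule concrete, here is a small sketch (assuming selectable IQD frame sizes of 16, 24, 40, and 64 KB and ignoring per-frame headers, both of which are simplifications) of how many SBALs a large send would need:

import math

# Assumed, simplified model: a message larger than one IQD frame spans
# multiple output buffers (SBALs); Multiple Write moves them in a
# single write operation instead of one write per buffer.

IQD_FRAME_SIZES = (16 * 1024, 24 * 1024, 40 * 1024, 64 * 1024)

def buffers_needed(message_bytes, iqd_frame):
    return math.ceil(message_bytes / iqd_frame)

msg = 256 * 1024  # a bulk 256 KB send
for frame in IQD_FRAME_SIZES:
    n = buffers_needed(msg, frame)
    note = "one write with Multiple Write" if n > 1 else "single buffer"
    print(f"{frame // 1024} KB frames: {n} SBALs ({note})")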
Security
Today’s world mandates that your systems are secure and available 24/7. The z10 BC employs some of the most advanced security technologies in the industry, helping you to meet rigid regulatory requirements that include encryption solutions, access control management, and extensive auditing features. It also provides disaster recovery configurations and is designed to deliver 99.999% application availability to help avoid the downside of planned downtime, equipment failure, or the complete loss of a data center.

When you need to be more secure, more resilient – z Can Do IT. The z10 processor chip has on board cryptographic functions. Standard clear key integrated cryptographic coprocessors provide high speed cryptography for protecting data in storage. CP Assist for Cryptographic Function (CPACF) supports DES, TDES, Secure Hash Algorithms (SHA) for up to 512 bits, Advanced Encryption Standard (AES) for up to 256 bits and Pseudo Random Number Generation (PRNG). Audit logging has been added to the new TKE workstation to enable better problem tracking.

New integrated clear key encryption security features on z10 BC include support for a higher advanced encryption standard and more secure hashing algorithms. Performing these functions in hardware is designed to contribute to improved performance.

System z is investing in accelerators that provide improved performance for specialized functions. The Crypto Express2 feature for cryptography is an example. The Crypto Express2 feature can be configured as a secure key coprocessor or for Secure Sockets Layer (SSL) acceleration. The feature includes support for 13- through 19-digit Personal Account Numbers for stronger protection of data. And the tamper-resistant cryptographic coprocessor is certified at FIPS 140-2 Level 4. To help customers scale their Crypto Express2 investments for their business needs, Crypto Express2 is also available on z10 BC as a single PCI-X adapter which may be defined as either a coprocessor or an accelerator.

System z security is one of the many reasons why the world’s top banks and retailers rely on the IBM mainframe to help secure sensitive business transactions. z Can Do IT securely.

Cryptography
The z10 BC includes both standard cryptographic hardware and optional cryptographic features for flexibility and growth capability. IBM has a long history of providing hardware cryptographic solutions, from the development of Data Encryption Standard (DES) in the 1970s to delivering integrated cryptographic hardware in a server to achieve the US Government’s highest FIPS 140-2 Level 4 rating for secure cryptographic hardware.

The IBM System z10 BC cryptographic functions include the full range of cryptographic operations needed for e-business, e-commerce, and financial institution applications. In addition, custom cryptographic functions can be added to the set of functions that the z10 BC offers.

Enhancements to eliminate preplanning in the cryptography area include the System z10 function to dynamically add Crypto to a logical partition. Changes to image profiles, to support Crypto Express2 features, are available without an outage to the logical partition. Crypto Express2 features can also be dynamically deleted or moved.

CP Assist for Cryptographic Function (CPACF)
CPACF supports clear-key encryption. All CPACF functions can be invoked by problem state instructions defined by an extension of System z architecture. The function is activated using a no-charge enablement feature and offers the following on every CPACF that is shared between two Processor Units (PUs) and designated as CPs and/or Integrated Facility for Linux (IFL):
• DES, TDES, AES-128, AES-192, AES-256
• SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
• Pseudo Random Number Generation (PRNG)
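Purely as an illustration of the clear-key operations CPACF accelerates, the same algorithms can be exercised with ordinary software libraries (hashlib is in the Python standard library; the AES portion assumes the third-party 'cryptography' package is installed):

import hashlib
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

data = b"sensitive record"

# SHA-2 message digests (CPACF supports SHA-1 through SHA-512).
print(hashlib.sha256(data).hexdigest())
print(hashlib.sha512(data).hexdigest())

# Clear-key AES-256: the key is visible to the operating system, which
# is what distinguishes CPACF clear-key operations from secure-key
# operations on Crypto Express2.
key, iv = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
ciphertext = encryptor.update(data) + encryptor.finalize()
print(ciphertext.hex())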
Enhancements to CP Assist for Cryptographic Function (CPACF)
CPACF has been enhanced to include support of the following on CPs and IFLs:
• Advanced Encryption Standard (AES) for 192-bit keys and 256-bit keys
• SHA-384 and SHA-512 for message digest

SHA-1, SHA-256, and SHA-512 are shipped enabled and do not require the enablement feature.

Support for CPACF is also available using the Integrated Cryptographic Service Facility (ICSF). ICSF is a component of z/OS, and is designed to transparently use the available cryptographic functions, whether CPACF or Crypto Express2, to balance the workload and help address the bandwidth requirements of your applications.

The enhancements to CPACF are exclusive to the System z10 and supported by z/OS, z/VM, z/VSE, and Linux on System z.

Configurable Crypto Express2
The Crypto Express2 feature has two PCI-X adapters. Each of the PCI-X adapters can be defined as either a Coprocessor or an Accelerator.

Crypto Express2 Coprocessor – for secure-key encrypted transactions (default):
• Designed to support security-rich cryptographic functions, use of secure-encrypted-key values, and User Defined Extensions (UDX)
• Designed to support secure and clear-key RSA operations
• The tamper-responding hardware and lower-level firmware layers are validated to the U.S. Government FIPS 140-2 standard: Security Requirements for Cryptographic Modules at Level 4

Crypto Express2 Accelerator – for Secure Sockets Layer (SSL) acceleration:
• Designed to support clear-key RSA operations
• Offloads compute-intensive RSA public-key and private-key cryptographic operations employed in the SSL protocol

Crypto Express2 features can be carried forward on an upgrade to the System z10 BC, so users may continue to take advantage of the SSL performance and the configuration capability.

The configurable Crypto Express2 feature is supported by z/OS, z/VM, z/VSE, and Linux on System z. z/VSE offers support for clear-key operations only. Current versions of z/OS, z/VM, and Linux on System z offer support for both clear-key and secure-key operations.

Crypto Express2-1P
An option of one PCI-X adapter per feature, in addition to the current two PCI-X adapters per feature, is being offered for the z10 BC to help satisfy small and midrange security requirements while maintaining high performance. The Crypto Express2-1P feature, with one PCI-X adapter, can continue to be defined as either a Coprocessor or an Accelerator. A minimum of two features must be ordered.

Additional cryptographic functions and features with Crypto Express2 and Crypto Express2-1P:

Key management – Added key management for remote loading of ATM and Point of Sale (POS) keys. The elimination of manual key entry is designed to reduce downtime due to key entry errors, service calls, and key management costs.
Improved key exchange – Added improved key exchange with non-CCA cryptographic systems. New features added to IBM Common Cryptographic Architecture (CCA) are designed to enhance the ability to exchange keys between CCA systems and systems that do not use control vectors, by allowing the CCA system owner to define permitted types of key import and export while preventing uncontrolled key exchange that can open the system to an increased threat of attack.

These are supported by z/OS and by z/VM for guest exploitation.

Cryptographic enhancements to Crypto Express2 and Crypto Express2-1P

Dynamically add crypto to a logical partition
Today, users can preplan the addition of Crypto Express2 features to a logical partition (LP) by using the Crypto page in the image profile to define the Cryptographic Candidate List, Cryptographic Online List, and Usage and Control Domain Indexes in advance of crypto hardware installation. With the change to dynamically add crypto to a logical partition, changes to image profiles, to support Crypto Express2 features, are available without outage to the logical partition. Users can also dynamically delete or move Crypto Express2 features. Preplanning is no longer required.

This enhancement is supported by z/OS, z/VM for guest exploitation, z/VSE, and Linux on System z.

Support for ISO 16609
Support for ISO 16609 CBC Mode T-DES Message Authentication (MAC) requirements. ISO 16609 CBC Mode T-DES MAC is accessible through ICSF function calls made in the PCI-X Cryptographic Adapter segment 3 Common Cryptographic Architecture (CCA) code.

This is supported by z/OS and by z/VM for guest exploitation.
Secure Key AES
The Advanced Encryption Standard (AES) is a National Institute of Standards and Technology specification for the encryption of electronic data. It is expected to become the accepted means of encrypting digital information, including financial, telecommunications, and government data.
AES is the symmetric algorithm of choice, instead of Data Encryption Standard (DES) or Triple-DES, for the encryption and decryption of data. The AES encryption algorithm is supported with secure (encrypted) keys of 128, 192, and 256 bits. The secure key approach, similar to what is supported today for DES and TDES, provides the ability to keep the encryption keys protected at all times, including the ability to import and export AES keys, using RSA public key technology.
Refer to the ICSF Application Programmer's Guide, SA22-7522, for additional details.
Support for RSA keys up to 4096 bits
The RSA services in the CCA API are extended to support RSA keys with modulus lengths up to 4096 bits. The services affected include key generation, RSA-based key management, digital signatures, and other functions related to these.
Support for 13- through 19-digit Personal Account Numbers
Credit card companies sometimes perform card security code computations based on Personal Account Number (PAN) data. Currently, ICSF callable services CSNBCSV (VISA CVV Service Verify) and CSNBCSG (VISA CVV Service Generate) are used to verify and to generate a VISA Card Verification Value (CVV) or a MasterCard Card Verification Code (CVC). The ICSF callable services currently support 13-, 16-, and 19-digit PAN data. To provide additional flexibility, new keywords PAN-14, PAN-15, PAN-17, and PAN-18 are implemented in the rule array for both CSNBCSG and CSNBCSV to indicate that the PAN data is comprised of 14, 15, 17, or 18 PAN digits, respectively.
Support for 13- through 19-digit PANs is exclusive to System z10 and is offered by z/OS and z/VM for guest exploitation.
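As an illustration of how these rule-array keywords are used, the sketch below builds an ICSF-style rule array for a 14-digit PAN. It is a minimal, hypothetical sketch: the stubbed generate_cvv() function stands in for the real CSNBCSG service, whose complete parameter list (key identifiers, expiration date, service code, and so on) is defined in the ICSF Application Programmer's Guide, SA22-7522. Only the general 8-byte, blank-padded rule-array convention is assumed here.

#include <stdio.h>

/* Schematic stand-in for the ICSF CSNBCSG (VISA CVV Service
 * Generate) callable service. The real service takes additional
 * parameters (DES key identifiers, expiration date, service code,
 * CVV output); see SA22-7522 for the actual interface. */
static void generate_cvv(long *return_code, long *reason_code,
                         long rule_array_count,
                         const char rule_array[][8],
                         const char *pan_data)
{
    /* A real call would drive the CPACF / Crypto Express2 hardware.
     * Here we only echo what would be passed. */
    printf("rule count=%ld, first keyword=%.8s, PAN=%s\n",
           rule_array_count, rule_array[0], pan_data);
    *return_code = 0;
    *reason_code = 0;
}

int main(void)
{
    /* ICSF rule-array keywords are 8 bytes, left-justified and
     * blank-padded. "PAN-14" tells the service the PAN data
     * carries 14 digits (new with System z10). */
    const char rule_array[1][8] = { { 'P','A','N','-','1','4',' ',' ' } };
    const char *pan = "12345678901234";          /* 14-digit PAN */
    long rc = 0, rs = 0;

    generate_cvv(&rc, &rs, 1L, rule_array, pan);
    return (int)rc;
}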
TKE 5.3 workstation
The Trusted Key Entry (TKE) workstation and the TKE 5.3 level of Licensed Internal Code are optional features on the System z10 BC. The TKE 5.3 Licensed Internal Code (LIC) is loaded on the TKE workstation prior to shipment. The TKE workstation offers security-rich local and remote key management, providing authorized persons a method of operational and master key entry, identification, exchange, separation, and update. The TKE workstation supports connectivity to an Ethernet Local Area Network (LAN) operating at 10 or 100 Mbps. Up to ten TKE workstations can be ordered.
Enhancement with TKE 5.3 LIC
The TKE 5.3 level of LIC includes support for the AES encryption algorithm, adds 256-bit master keys, and includes the master key management functions required to load or generate AES master keys to cryptographic coprocessors in the host. Support for the AES encryption algorithm includes the master key management functions required to load or generate AES master keys, update those keys, and re-encipher key tokens under a new master key.
TKE 5.3 LIC has added the capability to store key parts on DVD-RAMs and continues to support the ability to store key parts on paper, or optionally on a smart card. TKE 5.3 LIC has limited the use of floppy diskettes to read-only. The TKE 5.3 LIC can remotely control host cryptographic coprocessors using a password-protected authority signature key pair, either in a binary file or on a smart card.
Also included is an embedded screen capture utility to permit users to create and to transfer TKE master key entry instructions to diskette or DVD. Under 'Service Management' a "Manage Print Screen Files" utility is available to all users.
The TKE workstation and TKE 5.3 LIC are available on the z10 EC, z10 BC, z9 EC, and z9 BC.
Smart Card Reader
Support for an optional Smart Card Reader attached to the TKE 5.3 workstation allows for the use of smart cards that contain an embedded microprocessor and associated memory for data storage. Access to and the use of confidential data on the smart cards is protected by a user-defined Personal Identification Number (PIN).
The Smart Card Reader, attached to a TKE workstation with the 5.3 level of LIC, will support System z10 BC, z10 EC, z9 EC, and z9 BC. However, TKE workstations with 5.0, 5.1 and 5.2 LIC must be upgraded to TKE 5.3 LIC.
TKE additional smart cards – new feature
You have the capability to order Java-based blank smart cards, which offer a highly efficient cryptographic and data management application built into read-only memory for storage of keys, certificates, passwords, applications, and data. The TKE blank smart cards are compliant with FIPS 140-2 Level 2. When you place an order for a quantity of one, you are shipped 10 smart cards.
System z10 BC cryptographic migration
Clients using a User Defined Extension (UDX) of the Common Cryptographic Architecture should contact their UDX provider for an application upgrade before ordering a new System z10 BC machine, or before planning to migrate or activate a UDX application to firmware driver level 73 and higher.
• The Crypto Express2 feature is supported on the z9 BC and can be carried forward on an upgrade to the System z10 BC
• You may continue to use TKE workstations with 5.3 licensed internal code to control the System z10 BC
• TKE 5.0 and 5.1 workstations (#0839 and #0859) may be used to control z9 EC, z9 BC, z890, and IBM eServer zSeries 990 (z990) servers
Improved Key Exchange With Non-CCA Cryptographic Systems
IBM Common Cryptographic Architecture (CCA) employs Control Vectors to control usage of cryptographic keys. Non-CCA systems use other mechanisms, or may use keys that have no associated control information. This enhancement provides the ability to exchange keys between CCA systems and systems that do not use Control Vectors. Additionally, it allows the CCA system owner to define permitted types of key import and export, which can help to prevent uncontrolled key exchange that can open the system to an increased threat of attack.
Remote Loading of Initial ATM Keys
Typically, a new ATM has none of the financial institution's keys installed. Remote Key Loading refers to the process of loading Data Encryption Standard (DES) keys to Automated Teller Machines (ATMs) from a central administrative site without the need for personnel to visit each machine to manually load DES keys. This has been done by manually loading each of the two clear text key parts individually and separately into ATMs. Manual entry of keys is one of the most error-prone and labor-intensive activities that occur during an installation, making it expensive for the banks and financial institutions.
Remote Key Loading Benefits
• Provides a mechanism to load initial ATM keys without the need to send technical staff to ATMs
• Reduces downtime due to key entry errors
• Reduces service call and key management costs
• Improves the ability to manage ATM conversions and upgrades
Integrated Cryptographic Service Facility (ICSF), together with Crypto Express2, supports the basic mechanisms in Remote Key Loading. The implementation offers a secure bridge between the highly secure Common Cryptographic Architecture (CCA) environment and the various formats and encryption schemes offered by the ATM vendors. The following ICSF services are offered for Remote Key Loading:
• Trusted Block Create (CSNDTBC): This callable service is used to create a trusted block containing a public key and some processing rules
• Remote Key Export (CSNDRKX): This callable service uses the trusted block to generate or export DES keys for local use and for distribution to an ATM or other remote device
Refer to the ICSF Application Programmer's Guide, SA22-7522, for additional details.
These enhancements are exclusive to System z10 and System z9 and are supported by z/OS and z/VM for z/OS guest exploitation.
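The two callable services above are used in sequence: a trusted block is created once and then reused to derive or export keys for particular ATMs. The sketch below shows that control flow only; trusted_block_create() and remote_key_export() are hypothetical stand-ins for CSNDTBC and CSNDRKX, whose real parameter lists are given in the ICSF Application Programmer's Guide, SA22-7522.

#include <stdio.h>

/* Hypothetical stand-ins for the ICSF services CSNDTBC (Trusted
 * Block Create) and CSNDRKX (Remote Key Export). Parameter lists
 * are simplified; see SA22-7522 for the real interfaces. */
static int trusted_block_create(const char *public_key,
                                const char *processing_rules,
                                char *trusted_block, size_t len)
{
    /* Packages the ATM vendor's public key plus the rules that
     * restrict how the block may later be used. */
    snprintf(trusted_block, len, "TB{%s|%s}", public_key, processing_rules);
    return 0;                      /* 0 = success, as with ICSF */
}

static int remote_key_export(const char *trusted_block,
                             const char *atm_id,
                             char *des_key_out, size_t len)
{
    /* Uses the trusted block to generate/export a DES key for one
     * remote device, wrapped for that device's key scheme. */
    snprintf(des_key_out, len, "KEY-for-%s-under-%.10s",
             atm_id, trusted_block);
    return 0;
}

int main(void)
{
    char block[128], key[128];

    /* Step 1: one-time creation of the trusted block. */
    if (trusted_block_create("RSA-pub-key", "export-DES-only",
                             block, sizeof block) != 0)
        return 1;

    /* Step 2: repeated per-ATM key derivation/export. */
    if (remote_key_export(block, "ATM-0042", key, sizeof key) != 0)
        return 1;

    printf("initial key material for ATM-0042 prepared: %s\n", key);
    return 0;
}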
On Demand Capabilities
It may sound revolutionary, but it’s really quite simple. In
the highly unpredictable world of On Demand business,
you should get what you need, when you need it. And you
should pay for only what you use. Radical? Not to IBM. It’s
the basic principle underlying IBM capacity on demand for
the IBM System z10.
Just-in-time deployment of System z10 BC Capacity on Demand (CoD) is a radical departure from previous System z and zSeries servers. This new architecture allows:
• Up to eight temporary records to be installed on the CPC and active at any given time
• Up to 200 temporary records to be staged on the SE
• Variability in the amount of resources that can be activated per record
• The ability to control and update records independent of each other
• Improved query functions to monitor the state of each record
• The ability to add capabilities to individual records concurrently, eliminating the need for constant ordering of new temporary records for different user scenarios
• Permanent LIC-CC upgrades to be performed while temporary resources are active
All activations can be done without having to interact with IBM—when it is determined that capacity is required, no passwords or phone connections are necessary. As long as the total z10 BC can support the maximums that are defined, then they can be made available. With the z10 BC it is now possible to add permanent capacity while temporary capacity is activated, without having to return first to the original configuration.
The z10 BC also introduces an architectural approach for temporary offerings that can change the thinking about on demand capacity. One or more flexible configuration definitions can be used to solve multiple temporary situations, and multiple capacity configurations can be active at once (for example, activation of just two CBUs out of a definition that has four CBUs is acceptable). This means that On/Off CoD can be active while up to seven other offerings are active simultaneously. Tokens can be purchased for On/Off CoD so hardware activations can be prepaid.
These capabilities allow you to access and manage processing capacity on a temporary basis, providing increased flexibility for on demand environments. The CoD offerings are built from a common Licensed Internal Code – Configuration Code (LIC-CC) record structure. These Temporary Entitlement Records (TERs) contain the information necessary to control which type of resource can be accessed and to what extent, how many times and for how long, and under what condition – test or real workload. Use of this information gives the different offerings their personality.
Capacity on Demand – Temporary Capacity
The set of contract documents which support the various Capacity on Demand offerings available for z10 BC has been completely refreshed. While customers with existing contracts for Capacity Back Up (CBU) and Customer Initiated Upgrade (CIU) – On/Off Capacity on Demand (On/Off CoD) may carry those contracts forward to z10 BC machines, new CoD capability and offerings for z10 BC are only supported by this new contract set.
The new contract set is structured in a modular, hierarchical approach. This new approach eliminates redundant terms between contract documents, simplifying the contracts for our customers and IBM.
Capacity Back Up (CBU): Temporary access to dormant
processing units (PUs), intended to replace capacity lost
within the enterprise due to a disaster. CP capacity or any
and all specialty engine types (zIIP, zAAP, SAP, IFL, ICF)
can be added up to what the physical hardware model
can contain for up to 10 days for a test activation or 90
days for a true disaster recovery.
On System z10 the CBU entitlement records contain an expiration date that is established at the time of order and is dependent upon the quantity of CBU years. You will now have the capability to extend your CBU entitlements through the purchase of additional CBU years. The number of CBU years per instance of CBU entitlement remains limited to five, and fractional years are rounded up to the nearest whole integer when calculating this limit. For instance, if there are two years and eight months to the expiration date at the time of order, the expiration date can be extended by no more than two additional years. One test activation is provided for each additional CBU year added to the CBU entitlement record.
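The rounding rule above can be made concrete with a few lines of arithmetic. The helper below is an illustrative calculation only, not an IBM ordering tool: it rounds the remaining entitlement up to whole years and caps the total at the five-year limit, reproducing the worked example of two years and eight months remaining.

#include <stdio.h>

/* Maximum additional CBU years that may be ordered, given the time
 * remaining on the entitlement. Fractional years round UP to the
 * nearest whole year before applying the five-year cap. Illustrative
 * only; actual terms are governed by the CBU contract. */
static int max_additional_cbu_years(int months_remaining)
{
    int whole_years = (months_remaining + 11) / 12;  /* ceiling */
    int limit = 5 - whole_years;
    return limit > 0 ? limit : 0;
}

int main(void)
{
    /* Two years and eight months remaining -> counts as 3 years,
     * so at most 5 - 3 = 2 additional years may be ordered. */
    printf("max extension: %d years\n", max_additional_cbu_years(32));
    return 0;
}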
CBU Tests: The allocation of the default number of test activations has changed. Rather than a fixed default number of five test activations for each CBU entitlement record, the number of test activations per instance of the CBU entitlement record will coincide with the number of CBU years, the number of years assigned to the CBU record. This equates to one test activation per year for each CBU entitlement purchased. Additional test activations are now available in quantities of one, and the number of test activations remains limited to 15 per CBU entitlement record.
These changes apply only to System z10 and to CBU entitlements purchased through the IBM sales channel or directly from Resource Link.
There are terms governing System z Capacity Back Up (CBU) now available which allow customers to execute production workload on a CBU Upgrade during a CBU Test. While all new CBU contract documents contain the new CBU Test terms, existing CBU customers will need to execute a contract to expand their authorization for CBU Test upgrades if they want to have the right to execute production workload on the CBU Upgrade during a CBU Test.
Amendment for CBU Tests
The modification of CBU Test terms is available for existing CBU customers via the IBM Customer Agreement Amendment for IBM System z Capacity Backup Upgrade Tests (in the US this is form number Z125-8145). This amendment can be executed at any time, and separate from any particular order.
Capacity for Planned Event (CPE): Temporary access to dormant PUs, intended to replace capacity lost within the enterprise due to a planned event such as a facility upgrade or system relocation. This offering is available only on the System z10. CPE is similar to CBU in that it is intended to replace lost capacity; however, it differs in its scope and intent. Where CBU addresses disaster recovery scenarios that can take up to three months to remedy, CPE is intended for short-duration events lasting up to three days, maximum. Each CPE record, once activated, gives you access to all dormant PUs on the machine that can be configured in any combination of CP capacity or specialty engine types (zIIP, zAAP, SAP, IFL, ICF).
On/Off Capacity on Demand (On/Off CoD): Temporary access to dormant PUs, intended to augment the existing capacity of a given system. On/Off CoD helps you contain workload spikes that may exceed permanent capacity, such that Service Level Agreements cannot be met and business conditions do not justify a permanent upgrade. An On/Off CoD record allows you to temporarily add CP capacity or any and all specialty engine types (zIIP, zAAP, SAP, IFL, ICF) up to the following limits:
• The quantity of temporary CP capacity ordered is limited by the quantity of purchased CP capacity (permanently active plus unassigned)
• The quantity of temporary IFLs ordered is limited by the quantity of purchased IFLs (permanently active plus unassigned)
• Temporary use of unassigned CP capacity or unassigned IFLs will not incur a hardware charge
• The quantity of permanent zIIPs plus temporary zIIPs can not exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zIIPs can not exceed the quantity of permanent zIIPs
• The quantity of permanent zAAPs plus temporary zAAPs can not exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zAAPs can not exceed the quantity of permanent zAAPs
• The quantity of temporary ICFs ordered is limited by the quantity of permanent ICFs as long as the sum of permanent and temporary ICFs is less than or equal to 16
• The quantity of temporary SAPs ordered is limited by the quantity of permanent SAPs as long as the sum of permanent and temporary SAPs is less than or equal to 32
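The entitlement rules in the list above lend themselves to a simple feasibility check. The sketch below encodes them directly; the struct layout and function are illustrative assumptions, not part of any IBM interface, and the real checks are enforced by Resource Link and the machine itself.

#include <stdbool.h>
#include <stdio.h>

/* Purchased (permanent + unassigned) and requested temporary
 * resources for an On/Off CoD order. Illustrative model only. */
struct cod_order {
    int purchased_cp, temp_cp;
    int purchased_ifl, temp_ifl;
    int permanent_ziip, temp_ziip;
    int permanent_zaap, temp_zaap;
    int permanent_icf, temp_icf;
    int permanent_sap, temp_sap;
};

static bool on_off_cod_order_valid(const struct cod_order *o)
{
    int cp_pool = o->purchased_cp + o->temp_cp;  /* bounds zIIPs/zAAPs */

    return o->temp_cp  <= o->purchased_cp                 /* CP rule    */
        && o->temp_ifl <= o->purchased_ifl                /* IFL rule   */
        && o->permanent_ziip + o->temp_ziip <= cp_pool    /* zIIP rules */
        && o->temp_ziip <= o->permanent_ziip
        && o->permanent_zaap + o->temp_zaap <= cp_pool    /* zAAP rules */
        && o->temp_zaap <= o->permanent_zaap
        && o->temp_icf <= o->permanent_icf                /* ICF rules  */
        && o->permanent_icf + o->temp_icf <= 16
        && o->temp_sap <= o->permanent_sap                /* SAP rules  */
        && o->permanent_sap + o->temp_sap <= 32;
}

int main(void)
{
    struct cod_order o = { .purchased_cp = 4, .temp_cp = 2,
                           .permanent_ziip = 1, .temp_ziip = 1 };
    printf("order %s\n", on_off_cod_order_valid(&o) ? "ok" : "rejected");
    return 0;
}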
Although the System z10 BC will allow up to eight temporary records of any type to be installed, only one temporary On/Off CoD record may be active at any given time. An On/Off CoD record may be active while other temporary records are active.
Management of temporary capacity through On/Off CoD is further enhanced through the introduction of resource tokens. For CP capacity, a resource token represents an amount of processing capacity that will result in one MSU of SW cost for one day – an MSU-day. For specialty engines, a resource token represents activation of one engine of that type for one day – an IFL-day, a zIIP-day or a zAAP-day. The different resource tokens are contained in separate pools within the On/Off CoD record. The customer, via the Resource Link ordering process, determines how many tokens go into each pool. Once On/Off CoD resources are activated, tokens will be decremented from their pools every 24 hours. The amount decremented is based on the highest activation level for that engine type during the previous 24 hours.
Resource tokens are intended to help customers bound the hardware costs associated with using On/Off CoD. The use of resource tokens is optional, and they are available on either a prepaid or post-paid basis. When prepaid, the customer is billed for the total amount of resource tokens contained within the On/Off CoD record. When post-paid, the total billing against the On/Off CoD record is limited by the total amount of resource tokens contained within the record. Resource tokens within an On/Off CoD record may also be replenished.
Resource Link offers an ordering wizard to help determine how many tokens you need to purchase for different activation scenarios. For more information on the use and ordering of resource tokens, refer to the Capacity on Demand User's Guide, SC28-6871.
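A small simulation makes the daily decrement rule concrete. The function below is an illustrative model of the mechanism described above — one pool per engine type, drawn down every 24 hours by the highest activation level seen in that window. It is not IBM's billing code; pool sizing is done through the Resource Link ordering wizard.

#include <stdio.h>

/* Decrement one resource-token pool for one 24-hour window.
 * 'peak_activation' is the highest number of that engine type
 * (or, for CPs, the peak MSU level) active during the window.
 * Returns the tokens remaining; illustrative model only. */
static long decrement_pool(long tokens, long peak_activation)
{
    long remaining = tokens - peak_activation;
    return remaining > 0 ? remaining : 0;   /* pool exhausts at zero */
}

int main(void)
{
    long ifl_day_tokens = 30;               /* prepaid IFL-day pool */
    long daily_peaks[] = { 2, 2, 5, 1 };    /* peak IFLs per day    */
    int  day;

    for (day = 0; day < 4; day++) {
        ifl_day_tokens = decrement_pool(ifl_day_tokens, daily_peaks[day]);
        printf("after day %d: %ld IFL-day tokens left\n",
               day + 1, ifl_day_tokens);
    }
    return 0;
}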
Capacity Provisioning
Hardware working with software is critical. The activation of On/Off CoD on the z10 BC can be simplified or automated by using z/OS Capacity Provisioning (available with z/OS V1.10 and z/OS V1.9). This capability enables the monitoring of multiple systems based on Capacity Provisioning and Workload Manager (WLM) definitions. When the defined conditions are met, z/OS can suggest capacity changes for manual activation from a z/OS console, or the system can add or remove temporary capacity automatically and without operator intervention. z10 BC Can Do IT better.
z/OS Capacity Provisioning allows you to set up rules defining the circumstances under which additional capacity should be provisioned in order to fulfill a specific business need. The rules are based on criteria such as a specific application, the maximum additional capacity that should be activated, and time and workload conditions. This support provides a fast response to capacity changes and ensures sufficient processing power will be available with the least possible delay, even if workloads fluctuate.
An installed On/Off CoD record is a necessary prerequisite for automated control of temporary capacity through z/OS Capacity Provisioning.
See z/OS MVS Capacity Provisioning User's Guide (SA33-8299) for more information.
On/Off CoD Test: On/Off CoD allows for a no-charge test. No IBM charges are assessed for the test, including IBM charges associated with temporary hardware capacity, IBM software, or IBM maintenance. This test can be used to validate the processes to download, stage, install, activate, and deactivate On/Off CoD capacity non-disruptively. Each On/Off CoD-enabled server is entitled to only one no-charge test. This test may last up to a maximum duration of 24 hours, commencing upon the activation of any capacity resources contained in the On/Off CoD record. Activation levels of capacity may change during the 24 hour test period. The On/Off CoD test automatically terminates at the end of the 24 hour period. In addition to validating the On/Off CoD function within your environment, you may choose to use this test as a training session for your personnel who are authorized to activate On/Off CoD.
Capacity on Demand – Permanent Capacity
Customer Initiated Upgrade (CIU) facility: When your business needs additional capacity quickly, Customer Initiated Upgrade (CIU) is designed to deliver it. CIU is designed to allow you to respond to sudden increased capacity requirements by requesting a System z10 BC PU and/or memory upgrade via the Web, using IBM Resource Link, and downloading and applying it to your System z10 BC server using your system's Remote Support connection. Further, with the Express option on CIU, an upgrade may be made available for installation as fast as within a few hours after order submission.
Permanent upgrades: Orders (MESs) of all PU types and memory for System z10 BC servers that can be delivered by Licensed Internal Code, Control Code (LIC-CC) are eligible for CIU delivery. CIU upgrades may be performed up to the maximum available processor and memory resources on the installed server, as configured. While capacity upgrades to the server itself are concurrent, your software may not be able to take advantage of the increased capacity without performing an Initial Programming Load (IPL).
                           System z9                         System z10
Resources                  CP, zIIP, zAAP, IFL, ICF          CP, zIIP, zAAP, IFL, ICF, SAP
Activation                 Requires access to IBM/RETAIN®    No password required or access
                           to activate                       to IBM/RETAIN to activate
Offerings                  CBU, On/Off CoD;                  CBU, On/Off CoD, CPE;
                           one offering at a time            multiple offerings active
Permanent upgrades         Requires de-provisioning of       Concurrent with temporary
                           temporary capacity first          offerings
Replenishment              No                                Yes w/ CBU & On/Off CoD
CBU Tests                  5 tests per record                Up to 15 per record
CBU Expiration             No expiration                     Specific term length
Capacity Provisioning      No                                Yes
Manager Support
SNMP API (Simple Network Management Protocol Application Programming Interface) enhancements have also been made for the new Capacity On Demand features. More information can be found in the System z10 Capacity On Demand User's Guide, SC28-6871.
Reliability, Availability, and Serviceability
(RAS)
In today’s on demand environment, downtime is not only
unwelcome—it’s costly. If your applications aren’t consis-
tently available, your business suffers. The damage can
extend well beyond the financial realm into key areas of
customer loyalty, market competitiveness and regulatory
compliance. High on the list of critical business require-
ments today is the need to keep applications up and run-
ning in the event of planned or unplanned disruptions to
your systems.
While some servers are thought of as offering weeks or even months of uptime, System z thinks of this in terms of achieving years. The z10 BC continues our commitment to deliver improvements in hardware Reliability, Availability and Serviceability (RAS) with every new System z server. They include microcode driver enhancements, dynamic segment sparing for memory and fixed HSA, as well as a new I/O drawer design. The z10 BC is a server that can help keep applications up and running in the event of planned or unplanned disruptions to the system.
The System z10 BC is designed to deliver the industry leading reliability, availability and security our customers have come to expect from System z servers. System z10 BC RAS is designed to reduce all sources of outages by reducing unscheduled, scheduled and planned outages. Planned outages are further reduced with the introduction of concurrent I/O drawer add and the elimination of pre-planning requirements. These features are designed to reduce the need for a Power-on-Reset (POR) and help eliminate the need to deactivate/activate/IPL a logical partition.
RAS Design Focus
High Availability (HA) – The attribute of a system designed to provide service during defined periods, at acceptable or agreed upon levels, and mask UNPLANNED OUTAGES from end users. It employs fault tolerance, automated failure detection, recovery, bypass reconfiguration, testing, problem and change management.
Continuous Operations (CO) – The attribute of a system designed to continuously operate and mask PLANNED OUTAGES from end users. It employs non-disruptive hardware and software changes, non-disruptive configuration and software coexistence.
Continuous Availability (CA) – The attribute of a system designed to deliver non-disruptive service to the end user 7 days a week, 24 HOURS A DAY (there are no planned or unplanned outages). It includes the ability to recover from a site disaster by switching computing to a second site.
Availability Functions
With the z10 BC, significant steps have been taken in the area of server availability, with a focus on reducing pre-planning requirements. Pre-planning requirements are minimized by delivering and reserving 8 GB for HSA, so that the maximum configuration capabilities can be exploited, and by introducing the ability to seamlessly accommodate such events as creation of LPARs, inclusion of logical subsystems, changing logical processor definitions in an LPAR, and the introduction of cryptography into an LPAR. Features that carry forward from previous generation processors include the ability to dynamically enable I/O and the dynamic swapping of processor types.
Redundant I/O Interconnect
In the event of a failure or customer initiated action such
as the replacement of an HCA/STI fanout card, the z10 BC
is designed to provide access to your I/O devices through
another HCA/STI to the affected I/O domains. This is exclu-
sive to System z10 and System z9.
Enhanced Driver Maintenance
One of the greatest contributors to downtime during planned outages is Licensed Internal Code (LIC) updates. When properly configured, the z10 BC is designed to permit select planned LIC updates.
A new query function has been added to validate LIC EDM requirements in advance. Enhanced programmatic internal controls have been added to help eliminate manual analysis by the service team of certain exception conditions.
With the z10 BC, PR/SM code has been enhanced to allow multiple EDM 'From' sync points. Automatic apply of EDM licensed internal change requirements is now limited to EDM and the licensed internal code changes update process.
There are several reliability, availability, and serviceability (RAS) enhancements that have been made to the HMC/SE based on the feedback from the System z9 Enhanced Driver Maintenance field experience:
• Change to better handle intermittent customer network issues
• EDM performance improvements
• New EDM user interface features to allow customer and service personnel to better plan for the EDM
• A new option to check all licensed internal code, which can be executed in advance of the EDM preload or activate
Hardware System Area (HSA)
A fixed HSA of 8 GB is provided as standard with the z10 BC. The HSA has been designed to eliminate planning for HSA and makes all the memory purchased by customers available for customer use. Preplanning for HSA expansion for configurations will be eliminated as HCD/IOCP will, via the IOCDS process, always reserve:
• 2 Logical Channel Subsystems (LCSS), pre-defined
• 30 Logical Partitions (LPARs), pre-defined
• Subchannel set 0 with 63.75K devices
• Subchannel set 1 with 64K-1 devices
• Dynamic I/O Reconfiguration – always enabled by default
• Concurrent Patch – always enabled by default
• Add/Change the number of logical CP, IFL, ICF, zAAP, zIIP processors per partition and add SAPs to the configuration
• Dynamic LPAR PU assignment optimization for CPs, ICFs, IFLs, zAAPs, zIIPs, SAPs
• Dynamically Add/Remove Crypto (no LPAR deactivation required)
Dynamic Oscillator Switchover
The z10 BC has two oscillator cards, a primary and a backup. In most cases, should a failure occur on the primary oscillator card, the backup can detect it, switch over, and provide the clock signal to the system transparently, with no system outage. Previously, in the event of a failure of the active oscillator, a system outage would occur; the subsequent system Power On Reset (POR) would select the backup, and the system would resume operation. Dynamic Oscillator Switchover is exclusive to System z10 and System z9.
Transparent Sparing
The z10 BC offers 12 PUs, two of which are designated as System Assist Processors (SAPs). In the event of processor failure, if there are spare processor units available (undefined), these PUs are used for transparent sparing.
Concurrent Memory Upgrade
Memory can be upgraded concurrently using LIC-CC if physical memory is available on the machine, either through the Plan Ahead Memory feature or by having more physical memory installed in the machine than has been activated.
Plan Ahead Memory
Future memory upgrades can now be preplanned to be nondisruptive. The preplanned memory feature will add the necessary physical memory required to support target memory sizes. The granularity of physical memory in the System z10 design is more closely associated with the granularity of logical, entitled memory, leaving little room for growth. If you anticipate an increase in memory requirements, a "target" logical memory size can now be specified in the configuration tool, along with a "starting" logical memory size. The configuration tool will then calculate the physical memory required to satisfy this target memory. Should additional physical memory be required, it will be fulfilled with the preplanned memory features.
The preplanned memory feature is offered in 4 gigabyte (GB) increments. The quantity assigned by the configuration tool is the number of 4 GB blocks necessary to increase the physical memory from that required for the "starting" logical memory to the physical memory required for the "target" logical configuration. Activation of any preplanned memory requires the purchase of preplanned memory activation features. One preplanned memory activation feature is required for each preplanned memory feature. You now have the flexibility to activate memory to any logical size offered between the starting and target size.
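The feature-quantity calculation described above is straightforward, and the sketch below mirrors it. It is an illustrative calculation, not the IBM configuration tool: it simply counts the 4 GB preplanned blocks (and matching activation features) needed to grow from the "starting" to the "target" logical memory size.

#include <stdio.h>

/* Number of 4 GB preplanned memory features needed to grow from
 * the "starting" logical memory size to the "target" size. One
 * activation feature is required per preplanned feature.
 * Illustrative only; the configuration tool performs the real
 * calculation against the physical memory granularity. */
static int preplanned_features(int starting_gb, int target_gb)
{
    int growth = target_gb - starting_gb;
    if (growth <= 0)
        return 0;
    return (growth + 3) / 4;        /* ceiling of growth / 4 GB */
}

int main(void)
{
    /* Growing from 16 GB to 26 GB needs ceil(10/4) = 3 features. */
    printf("features needed: %d\n", preplanned_features(16, 26));
    return 0;
}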
Service Enhancements
z10 BC service enhancements designed to avoid scheduled outages include:
• Concurrent firmware fixes
• Concurrent driver upgrades
• Concurrent parts replacement
• Concurrent hardware upgrades
• DIMM FRU indicators
• Single processor core checkstop
• Single processor core sparing
• Rebalance PSIFB and I/O Fanouts
• Redundant 100 Mb Ethernet service network with VLAN
Environmental Enhancements
Power and cooling discussions have entered the budget
planning of every IT environment. As energy prices have
risen and utilities have restricted the amount of power
usage, it is important to review the role of the server in bal-
ancing IT spending.
IBM Systems Director Active Energy Manager
IBM Systems Director Active Energy Manager™ (AEM) is a building block which enables customers to manage actual power consumption and the resulting thermal loads that IBM servers place in the data center. The z10 BC provides support
for IBM Systems Director Active Energy Manager (AEM)
for Linux on System z for a single view of actual energy
usage across multiple heterogeneous IBM platforms within
the infrastructure. AEM for Linux on System z will allow
tracking of trends for both the z10 BC as well as multiple
server platforms. With this trend analysis, a data center
administrator will have the data to help properly estimate
power inputs and more accurately plan data center con-
solidation or modification projects.
Power Monitoring
The "mainframe gas gauge" feature, introduced on the System z9 servers, provides power and thermal information via the System Activity Display (SAD) on the Hardware Management Console and is available on the z10 BC, giving a point-in-time reference of the information. The current total power consumption in watts and BTU/hour, as well as the air input temperature, will be displayed.
On System z10, the HMC will now provide support for the
Active Energy Manager (AEM) which will display power
consumption/air input temperature as well as exhaust
temperature. AEM will also provide some limited status/
configuration information which might assist in explaining
changes to the power consumption. AEM is exclusive to
System z10.
Power Estimation Tool
To assist in energy planning, Resource Link provides tools
to estimate server energy requirements before a new
server purchase. A user will input the machine model,
memory, and I/O configuration and the tool will output
an estimate of the system total heat load and utility input
power. A customized planning aid is also available on
Resource Link which provides physical characteristics
of the machine along with cooling recommendations,
environmental specifications, system power rating, power
plugs/receptacles, line cord wire specifications and the
machine configuration.
Parallel Sysplex Cluster Technology
IBM System z servers stand alone against competition and have stood the test of time with our business resiliency solutions. Our coupling solutions with Parallel Sysplex technology allow for greater scalability and availability.
Parallel Sysplex clustering is designed to bring the power of parallel processing to business-critical System z10, System z9, z990 or z890 applications. A Parallel Sysplex cluster consists of up to 32 z/OS images coupled to one or more Coupling Facilities (CFs or ICFs) using high-speed specialized links for communication. The Coupling Facilities, at the heart of the Parallel Sysplex cluster, enable high speed, read/write data sharing and resource sharing among all the z/OS images in a cluster. All images are also connected to a Sysplex Timer®, or synchronized by implementing the Server Time Protocol (STP), so that all events can be properly sequenced in time.
Parallel Sysplex Resource Sharing enables multiple system resources to be managed as a single logical resource shared among all of the images. Some examples of resource sharing include JES2 Checkpoint, GRS "star," and Enhanced Catalog Sharing; all of which provide simplified systems management, increased performance and/or scalability.
Although there is significant value in a single footprint and multi-footprint environment with resource sharing, those customers looking for high availability must move on to a database data sharing configuration. With the Parallel Sysplex environment, combined with the Workload Manager and CICS TS, DB2 or IMS, incoming work can be dynamically routed to the z/OS image most capable of handling the work. This dynamic workload balancing, along with the capability to have read/write access to data from anywhere in the Parallel Sysplex cluster, provides scalability and availability. When configured properly, a Parallel Sysplex cluster is designed with no single point of failure and can provide customers with near continuous application availability over planned and unplanned outages.
With the introduction of the z10 EC, we have the concept of n-2 on the hardware as well as the software. The z10 BC participates in a Sysplex with System z10 EC, System z9, z990 and z890 only, and currently supports z/OS 1.8 and higher and z/VM 5.2 for a guest virtualization coupling facility test environment.
For detailed information on IBM's Parallel Sysplex technology, visit: http://www-03.ibm.com/systems/z/pso/.
Coupling Facility Control Code (CFCC) Level 16
CFCC Level 16 is being made available on the IBM System z10 BC.
Improved service time with Coupling Facility Duplex-
ing enhancements: Prior to Coupling Facility Control
Code (CFCC) Level 16, System-Managed Coupling
Facility (CF) Structure Duplexing required two duplexing
protocol exchanges to occur synchronously during pro-
cessing of each duplexed structure request. CFCC Level
16 allows one of these protocol exchanges to complete
asynchronously. This allows faster duplexed request ser-
vice time, with more benefits when the Coupling Facilities
are further apart, such as in a multi-site Parallel Sysplex
environment.
List notification improvements: Prior to CFCC Level 16, when a shared queue (subsidiary list) changed state from empty to non-empty, the CF would notify ALL active connectors. The first one to respond would process the new message, but when the others tried to do the same, they would find nothing, incurring additional overhead.
CFCC Level 16 can help improve the efficiency of coupling communications for IMS Shared Queue and WebSphere MQ Shared Queue environments. The Coupling Facility notifies only one connector in a sequential fashion. If the shared queue is processed within a fixed period of time, the other connectors do not need to be notified, saving the cost of the false scheduling. If a shared queue is not read within the time limit, then the other connectors are notified as they were prior to CFCC Level 16.
When migrating CF levels, lock, list and cache structure sizes might need to be increased to support new function. For example, when you upgrade from CFCC Level 15 to Level 16, the required size of the structure might increase. This adjustment can have an impact when the system allocates structures or copies structures from one coupling facility to another at different CF levels.
The coupling facility structure sizer tool can size structures for you and takes into account the amount of space needed for the current CFCC levels. Access the tool at:
CFCC Level 16 is exclusive to System z10 and is supported by z/OS and z/VM for guest exploitation.
Coupling Facility Configuration Alternatives
IBM offers multiple options for configuring a functioning Coupling Facility:
• Standalone Coupling Facility: The standalone CF provides the most "robust" CF capability, as the CPC is wholly dedicated to running the CFCC microcode — all of the processors, links and memory are for CF use only. A natural benefit of this characteristic is that the standalone CF is always failure-isolated from exploiting z/OS software and from the server that z/OS is running on, for environments without System-Managed CF Structure Duplexing. The z10 BC with capacity indicator A00 is used for systems with ICF(s) only. There are no software charges associated with such a configuration.
• Internal Coupling Facility (ICF): Customers considering clustering technology can get started with Parallel Sysplex technology at a lower cost by using an ICF instead of purchasing a standalone Coupling Facility. An ICF feature is a processor that can only run Coupling Facility Control Code (CFCC) in a partition. Since CF LPARs on ICFs are restricted to running only CFCC, there are no IBM software charges associated with ICFs. ICFs are ideal for Intelligent Resource Director and resource sharing environments as well as for data sharing environments where System-Managed CF Structure Duplexing is exploited.
System-Managed CF Structure Duplexing
System-Managed Coupling Facility (CF) Structure Duplexing provides a general purpose, hardware-assisted, easy-to-exploit mechanism for duplexing CF structure data. This provides a robust recovery mechanism for failures such as loss of a single structure or CF, or loss of connectivity to a single CF, through rapid failover to the backup instance of the duplexed structure pair. CFCC Level 16 provides CF Duplexing enhancements described previously in the section titled "Coupling Facility Control Code (CFCC) Level 16".
[Figure: System-Managed CF Structure Duplexing – z/OS and ICF images duplexed across two servers (System z10/z9, zSeries 990/890), providing a robust failure recovery capability]
Introducing long reach InfiniBand coupling links
Now, InfiniBand can be used for Parallel Sysplex coupling and STP communication at unrepeated distances up to 10 km (6.2 miles) and even greater distances when attached to a qualified optical networking solution. InfiniBand coupling links supporting extended distance are referred to as 1x (one pair of fiber) IB-SDR or 1x IB-DDR.
• Long reach 1x InfiniBand coupling links support single data rate (SDR) at 2.5 gigabits per second (Gbps) when connected to a DWDM capable of SDR
• Long reach 1x InfiniBand coupling links support double data rate (DDR) at 5 Gbps when connected to a DWDM capable of DDR
Depending on the capability of the attached DWDM, the link data rate will automatically be set to either SDR or DDR.
Parallel Sysplex Coupling Connectivity
The Coupling Facilities communicate with z/OS images in the Parallel Sysplex environment over specialized high-speed links. As processor performance increases, it is important to also use faster links so that link performance does not become constrained. The performance, availability and distance requirements of a Parallel Sysplex environment are the key factors that will identify the appropriate connectivity option for a given configuration.
When connecting between System z10, System z9 and z990/z890 servers, the links must be configured to operate in Peer Mode. This allows for higher data transfer rates to and from the Coupling Facilities. The peer link acts simultaneously as both a CF Sender and CF Receiver link, reducing the number of links required. Larger and more data buffers and improved protocols may also improve long distance performance.
The IBM System z10 introduces InfiniBand coupling link technology designed to provide a high-speed solution and increased distance (150 meters) compared to ICB-4 (10 meters).
InfiniBand coupling links also provide the ability to define up to 16 CHPIDs on a single PSIFB port, allowing physical coupling links to be shared by multiple sysplexes. This also provides additional subchannels for Coupling Facility communication, improving scalability and reducing contention in heavily utilized system configurations. It also allows for one CHPID to be directed to one CF, and another CHPID directed to another CF on the same target server, using the same port.
[Figure: Parallel Sysplex coupling connectivity – 12x PSIFB (up to 150 meters) between z10 EC/z10 BC and z9 EC/z9 BC S07 via HCA2-O fanouts; 1x PSIFB (up to 10/100 km) via HCA2-O LR; ICB-4 (10 meters, new ICB-4 cable) to z10 EC, z10 BC, z9 EC, z9 BC, z990, z890; and ISC-3 (up to 10/100 km) from the I/O drawer (IFB-MP, HCA2-C) to z10 EC, z10 BC, z9 EC, z9 BC, z990, z890]
Like other coupling links, external InfiniBand coupling
links are also valid to pass time synchronization signals for
Server Time Protocol (STP). Therefore the same coupling
links can be used to exchange timekeeping informa-
tion and Coupling Facility messages in a Parallel Sysplex
environment.
The IBM System z10 BC also takes advantage of
InfiniBand as a higher-bandwidth replacement for the Self-
Timed Interconnect (STI) I/O interface features found in
prior System z servers.
System z now supports 12x InfiniBand single data rate (12x IB-SDR) coupling link attachment between System z10 and System z9 general purpose servers (no longer limited to standalone coupling facilities).
Coupling Connectivity for Parallel Sysplex
Five coupling link options: The z10 BC supports Internal Coupling channels (ICs), Integrated Cluster Bus-4 (ICB-4), InterSystem Channel-3 (ISC-3) (peer mode), and 12x and 1x InfiniBand (IFB) links for communication in a Parallel Sysplex environment.
1) Internal Coupling Channels (ICs) can be used for internal communication between Coupling Facilities (CFs) defined in LPARs and z/OS images on the same server.
2) Integrated Cluster Bus-4 (ICB-4) links are for short distances. ICB-4 links use 10 meter (33 feet) copper cables, of which 3 meters (10 feet) is used for internal routing and strain relief. ICB-4 is used to connect z10 BC-to-z10 BC, z10 EC, z9 EC, z9 BC, z990, and z890. Note: If connecting to a z9 BC or a z10 BC with ICB-4, those servers cannot be installed with the non-raised floor feature. Also, if the z10 BC is ordered with the non-raised floor feature, ICB-4 cannot be ordered.
3) InterSystem Channel-3 (ISC-3) supports communication over unrepeated distances of up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables and even greater distances with System z qualified optical networking solutions. ISC-3s are supported exclusively in peer mode (CHPID type CFP).
4) 12x InfiniBand coupling links (12x IB-SDR or 12x IB-DDR) offer an alternative to ISC-3 in the data center and facilitate coupling link consolidation; physical links can be shared by multiple systems or CF images on a single system. The 12x IB links support distances up to 150 meters (492 feet) using industry-standard OM3 50 micron fiber optic cables.
5) Long Reach 1x InfiniBand coupling links (1x IB-SDR or 1x IB-DDR) are an alternative to ISC-3 and offer greater distances with support for point-to-point unrepeated connections of up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables. Greater distances can be supported with System z qualified optical networking solutions. Long reach 1x InfiniBand coupling links support the same sharing capability as the 12x InfiniBand version, allowing one physical link to be shared across multiple CF images on a system.
InfiniBand coupling links are CHPID type CIB.
Note: The InfiniBand link data rates do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. Specifically, with 12x InfiniBand coupling links, while the link data rate can be higher than that of ICB, the service times of coupling operations are greater, and the actual throughput is less.
Refer to the Coupling Facility Configuration Options whitepaper for a more specific explanation of when to continue using the current ICB or ISC-3 technology versus migrating to InfiniBand coupling links. The whitepaper is available at: www.ibm.com/systems/z/advantages/pso/whitepaper.html
z10 Coupling Link Options

Type    Description     Use                     Link data rate   Distance                z10 BC Max/z10 EC Max
PSIFB   1x IB-DDR LR    z10 to z10              5 Gbps           10 km unrepeated        12*/32*
                                                                 (6.2 miles);
                                                                 100 km repeated
PSIFB   12x IB-DDR      z10 to z10,             6 GBps,          150 meters (492 ft)***  12*/32*
                        z10 to z9               3 GBps**
IC      Internal        Internal communication  Internal speeds  N/A                     32/32
        Coupling        between OS and CF
        Channel
ICB-4   Copper          z10, z9, z990, z890     2 GBps           10 meters*** (33 ft)    12/16
        connection
        between OS
        and CF
ISC-3   Fiber           z10, z9, z990, z890     2 Gbps           10 km unrepeated        48/48
        connection                                               (6.2 miles);
        between OS                                               100 km repeated
        and CF

• The maximum number of Coupling Links combined cannot exceed 64 per server (PSIFB, ICB-4, ISC-3). There is a maximum of 64 Coupling CHPIDs (CIB, ICP, CBP, CFP) per server.
• For each MBA fanout installed for ICB-4s, the number of possible customer HCA fanouts is reduced by one.
* Each link supports definition of multiple CIB CHPIDs, up to 16 per fanout
** z10 negotiates to 3 GBps (12x IB-SDR) when connected to a System z9
*** 3 meters (10 feet) reserved for internal routing and strain relief
Note: The InfiniBand link data rates of 6 GBps, 3 GBps, 2.5 Gbps, or 5 Gbps do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB (12x IB-SDR or 12x IB-DDR) or ISC-3 (1x IB-SDR or 1x IB-DDR), the service times of coupling operations are greater, and the actual throughput may be less than with ICB links or ISC-3 links.
Time synchronization and time accuracy on z10 BC
If you require time synchronization across multiple servers (for example, you have a Parallel Sysplex environment), or you require time accuracy either for one or more System z servers, or you require the same time across heterogeneous platforms (System z, UNIX, AIX®, etc.), you can meet these requirements by either installing a Sysplex Timer Model 2 (9037-002) or by implementing Server Time Protocol (STP).
The Sysplex Timer Model 2 is the centralized time source that sets the Time-Of-Day (TOD) clocks in all attached servers to maintain synchronization. The Sysplex Timer Model 2 provides the stepping signal that helps ensure that all TOD clocks in a multi-server environment increment in unison to permit full read or write data sharing with integrity. The Sysplex Timer Model 2 is a key component of an IBM Parallel Sysplex environment and a Geographically Dispersed Parallel Sysplex™ (GDPS®) availability solution for On Demand Business.
The z10 BC server requires the External Time Reference (ETR) feature to attach to a Sysplex Timer. The ETR feature is standard on the z10 BC and supports attachment at an unrepeated distance of up to three kilometers (1.86 miles) and a link data rate of 8 Megabits per second. The distance from the Sysplex Timer to the server can be extended to 100 km using qualified Dense Wavelength Division Multiplexers (DWDMs). However, the maximum repeated distance between Sysplex Timers is limited to 40 km.
Server Time Protocol (STP)
STP messages: STP is a message-based protocol in which timekeeping information is transmitted between servers over externally defined coupling links. ICB-4, ISC-3, and InfiniBand coupling links can be used to transport STP messages.
Server Time Protocol enhancements
The following Server Time Protocol (STP) enhancements are available on the z10 EC, z10 BC, z9 EC, and z9 BC. The prerequisites are that you install the STP feature and that the latest MCLs are installed for the applicable driver.
STP configuration and time information restoration after Power on Resets (POR) or power outage: This enhancement delivers system management improvements by restoring the STP configuration and time information after Power on Resets (PORs) or a power failure that affects both servers of a two server STP-only Coordinated Timing Network (CTN). To enable this function, the customer has to select an option that will assure that no other servers can
join the two server CTN. Previously, if both the Preferred Time Server (PTS) and the Backup Time Server (BTS) experienced a simultaneous power outage (site failure), or both experienced a POR, reinitialization of time and special roles (PTS, BTS, and CTS) was required. With this enhancement, you will no longer need to reinitialize the time or reassign the roles for these events.
NTP client support: This enhancement addresses the requirements of customers who need to provide the same accurate time across heterogeneous platforms in an enterprise. The STP design has been enhanced to include support for a Simple Network Time Protocol (SNTP) client on the Support Element. By configuring an NTP server as the STP External Time Source (ETS), the time of an STP-only Coordinated Timing Network (CTN) can track to the time provided by the NTP server, and maintain a time accuracy of 100 milliseconds.
Note: NTP client support has been available since October 2007.
Enhanced accuracy to an External Time Source: The time accuracy of an STP-only CTN has been improved by adding the capability to configure an NTP server that has a pulse per second (PPS) output signal as the ETS device. This type of ETS device is available worldwide from several vendors that provide network timing solutions.
STP has been designed to track to the highly stable, accurate PPS signal from the NTP server, and maintain an accuracy of 10 microseconds as measured at the PPS input of the System z server. A number of variables, such as the accuracy of the NTP server to its time source (GPS, radio signals, for example) and the cable used to connect the PPS signal, will determine the ultimate accuracy of STP to Coordinated Universal Time (UTC).
For this enhancement, the NTP output of the NTP server has to be connected to the Support Element (SE) LAN, and the PPS output of the same NTP server has to be connected to the PPS input provided on the External Time Reference (ETR) card of the System z10 or System z9 server.
In comparison, the IBM Sysplex Timer is designed to maintain an accuracy of 100 microseconds when attached to an ETS with a PPS output. If STP is configured to use a dial-out time service or an NTP server without PPS, it is designed to provide a time accuracy of 100 milliseconds to the ETS device.
Preview – Improved STP System Management with new z/OS Messaging: This is a new function planned to generate z/OS messages when various hardware events occur that affect the External Time Sources (ETS) configured for an STP-only CTN. This may improve problem determination and correction times. Previously, the messages were generated only on the Hardware Management Console (HMC).
The ability to generate z/OS messages will be supported on IBM System z10 and System z9 servers with z/OS 1.11 (with enabling support rolled back to z/OS 1.9) in the second half of 2009.
Continuous Availability of NTP servers used as Exter-
nal Time Source: Improved External Time Source (ETS)
availability can now be provided if you configure different
NTP servers for the Preferred Time Server (PTS) and the
Backup Time Server (BTS). Only the PTS or the BTS can
be the Current Time Server (CTS) in an STP-only CTN.
Prior to this enhancement, only the CTS calculated the
time adjustments necessary to maintain time accuracy.
With this enhancement, if the PTS/CTS cannot access the
NTP Server or the pulse per second (PPS) signal from the
NTP server, the BTS, if configured to a different NTP server,
may be able to calculate the adjustment required and
propagate it to the PTS/CTS. The PTS/CTS in turn will per-
form the necessary time adjustment steering.
attaching NTP servers to the SE LAN. The HMC, via a
separate LAN connection, can access an NTP server avail-
able either on the intranet or Internet for its time source.
Note that when using the HMC as the NTP server, there is
no pulse per second capability available. Therefore, you
should not configure the ETS to be an NTP server using
PPS.
Enhanced STP recovery when Internal Battery Feature
is in use: Improved availability can be obtained when
power has failed for a single server (PTS/CTS), or when
there is a site power outage in a multi site configuration
where the PTS/CTS is installed (the site with the BTS is a
different site not affected by the power outage).
If an Internal Battery Feature (IBF) is installed on your
System z server, STP now has the capability of receiving
notification that customer power has failed and that the
IBF is engaged. When STP receives this notification from a
server that has the role of the PTS/CTS, STP can automati-
cally reassign the role of the CTS to the BTS, thus automat-
ing the recovery action and improving availability.
This avoids a manual reconfiguration of the BTS to be the
CTS, if the PTS/CTS is not able to access its ETS. In an
ETR network when the primary Sysplex Timer is not able
to access the ETS device, the secondary Sysplex Timer
takes over the role of the primary – a recovery action not
always accepted by some customers. The STP design
provides continuous availability of ETS while maintaining
the special roles of PTS and BTS as – signed by the cus-
tomer.
STP configuration and time information saved across
Power on Resets (POR) or power outages: This
enhancement delivers system management improvements
by saving the STP configuration across PORs and power
failures for a single server STP-only CTN. Previously, if
the server was PORed or experienced a power outage,
the time, and assignment of the PTS and CTS roles would
have to be reinitialized. You will no longer need to reinitial-
ize the time or reassign the role of PTS/CTS across POR or
power outage events.
NTP Server on Hardware Management Console:
Improved security can be obtained by providing NTP
server support on the HMC. If an NTP server (with or with-
out PPS) is configured as the ETS device for STP, it needs
to be attached directly to the Support Element (SE) LAN.
The SE LAN is considered by many users to be a private
dedicated LAN to be kept as isolated as possible from the
intranet or Internet.
Since the HMC is normally attached to the SE LAN, providing an NTP server capability on the HMC addresses the potential security concerns most users may have for attaching NTP servers to the SE LAN. The HMC, via a separate LAN connection, can access an NTP server available either on the intranet or Internet for its time source. Note that when using the HMC as the NTP server, there is no pulse per second capability available; therefore, you should not configure the ETS to be an NTP server using PPS.
Note that this enhancement is also available on the z990 and z890 servers.
Application Programming Interface (API) to automate
STP CTN reconfiguration: The concept of “a pair and
a spare” has been around since the original Sysplex
Couple Data Sets (CDSs). If the primary CDS becomes
unavailable, the backup CDS would take over. Many sites
have had automation routines bring a new backup CDS
online to avoid a single point of failure. This idea is being
extended to STP. With this enhancement, if the PTS fails
and the BTS takes over as CTS, an API is now available
on the HMC so you can automate the reassignment of the
PTS, BTS, and Arbiter roles. This can improve availability
by avoiding a single point of failure after the BTS has taken
over as the CTS.
Prior to this enhancement, the PTS, BTS, and Arbiter roles had to be reassigned manually using the System (Sysplex) Time task on the HMC.
For additional details on the API, please refer to System z Application Programming Interfaces, SB10-7030-11.
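SB10-7030-11 documents the actual API; the Python sketch below only illustrates the shape of an automation routine a site might build on top of it. The hmc handle and its get_ctn_roles/assign_ctn_roles methods are hypothetical stand-ins, not real HMC calls.

    def restore_ctn_redundancy(hmc, spare_server):
        """After the BTS has taken over as CTS, reassign the PTS, BTS, and
        Arbiter roles so that no single point of failure remains."""
        roles = hmc.get_ctn_roles()  # e.g. {"PTS": "CPC1", "BTS": "CPC2", "ARBITER": "CPC3", "CTS": "CPC2"}
        if roles["CTS"] == roles["BTS"]:          # the BTS took over after a PTS failure
            hmc.assign_ctn_roles(
                pts=roles["BTS"],                 # promote the acting CTS to PTS
                bts=roles["ARBITER"],             # back-fill the BTS role from the Arbiter
                arbiter=spare_server)             # bring in a healthy spare as Arbiter

    class FakeHmc:
        """Toy stand-in so the sketch runs; real automation would drive the HMC API."""
        def get_ctn_roles(self):
            return {"PTS": "CPC1", "BTS": "CPC2", "ARBITER": "CPC3", "CTS": "CPC2"}
        def assign_ctn_roles(self, pts, bts, arbiter):
            print(f"PTS={pts} BTS={bts} Arbiter={arbiter}")

    restore_ctn_redundancy(FakeHmc(), "CPC4")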
Additional information is available on the STP Web page.
The following Redbooks are available on the Redbooks Web site:
• Server Time Protocol Planning Guide, SG24-7280
• Server Time Protocol Implementation Guide, SG24-7281
Message Time Ordering (Sysplex Timer Connectivity to Coupling
Facilities)
As processor and Coupling Facility link technologies have
improved, the requirement for time synchronization toler-
ance between systems in a Parallel Sysplex environment
has become ever more rigorous. In order to enable any
exchange of timestamped information between systems
in a sysplex involving the Coupling Facility to observe the
correct time ordering, time stamps are now included in
the message-transfer protocol between the systems and
the Coupling Facility. Therefore, when a Coupling Facility
is configured on any System z10 or System z9, the Cou-
pling Facility will require connectivity to the same 9037
Sysplex Timer or Server Time Protocol (STP) configured
Coordinated Timing Network (CTN) that the systems in its
Parallel Sysplex cluster are using for time synchroniza-
tion. If the ICF is on the same server as a member of its
Parallel Sysplex environment, no additional connectivity is
required, since the server already has connectivity to the
Sysplex Timer.
However, when an ICF is configured on any z10 which does not host any systems in the same Parallel Sysplex cluster, it is necessary to attach the server to the 9037 Sysplex Timer or implement STP.
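A toy example of why a common time base matters (illustrative only; the real mechanism lives in the CF message-transfer protocol): if the clocks behind two systems' timestamps are not synchronized, sorting by timestamp can invert cause and effect.

    # SYSB's clock runs 5 ms behind SYSA's.  Its write happens *after* SYSA's,
    # yet its timestamp sorts first, so the dependent update appears to
    # precede the update it depends on.
    events = [
        ("SYSA initial write", 1000.000),            # actual first event
        ("SYSB dependent write", 1000.003 - 0.005),  # actual second event, skewed clock
    ]
    for name, ts in sorted(events, key=lambda e: e[1]):
        print(f"{ts:.3f}  {name}")    # prints the dependent write first: wrong order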
Internal Battery Feature Recommendation
Single data center
• CTN with 2 servers: install IBF on at least the PTS/CTS
  – Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
• CTN with 3 or more servers: IBF not required for STP recovery, if an Arbiter is configured
Two data centers
• CTN with 2 servers (one in each data center): install IBF on at least the PTS/CTS
  – Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
• CTN with 3 or more servers: install IBF on at least the PTS/CTS
  – Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
HMC System Support
The new functions available on the Hardware Management Console (HMC) version 2.10.1 as described apply exclusively to System z10. However, the HMC version 2.10.1 will continue to support the systems shown in the following table.

The 2.10.1 HMC will continue to support up to two 10/100 Mbps Ethernet LANs. Token Ring LANs are not supported. The 2.10.1 HMC applications have been updated to support HMC hardware without a diskette drive; DVD-RAM, CD-ROM, and/or USB flash memory drive media will be used.

Family    Machine Type   Firmware Driver   SE Version
z10 BC    2098           76                2.10.1
z10 EC    2097           73                2.10.0
z9 BC     2096           67                2.9.2
z9 EC     2094           67                2.9.2
z890      2086           55                1.8.2
z990      2084           55                1.8.2
z800      2066           3G                1.7.3
z900      2064           3G                1.7.3
9672 G6   9672/9674      26                1.6.2
9672 G5   9672/9674      26                1.6.2
Internet Protocol, Version 6 (IPv6)
HMC version 2.10.1 and Support Element (SE) version 2.10.1 can now communicate using IP Version 4 (IPv4), IP Version 6 (IPv6), or both. It is no longer necessary to assign a static IP address to an SE if it only needs to communicate with HMCs on the same subnet. An HMC and SE can use IPv6 link-local addresses to communicate with each other.
HMC/SE support is addressing the following requirements:
• The availability of addresses in the IPv4 address space is becoming increasingly scarce.
• The demand for IPv6 support is high in Asia/Pacific countries since many companies are deploying IPv6.
• The U.S. Department of Defense and other U.S. government agencies are requiring IPv6 support for any products purchased after June 2008.
More information on the U.S. government requirements can be found in memoranda/fy2005/m05-22.pdf and FAQs.pdf.
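As an aside for readers unfamiliar with link-local addressing, the Python lines below show its one practical wrinkle: a link-local address (fe80::/10) is only meaningful together with an interface, so a scope identifier travels with it. The address and interface name are placeholders, not values from an HMC or SE.

    import socket

    # A link-local IPv6 address must carry a scope (the interface it lives on).
    # "%eth0" is the textual form; getaddrinfo turns it into a numeric scope_id.
    infos = socket.getaddrinfo("fe80::1%eth0", 443, socket.AF_INET6, socket.SOCK_STREAM)
    family, socktype, proto, _, sockaddr = infos[0]
    print(sockaddr)  # (address, port, flowinfo, scope_id) -- note the non-zero scope_id

    # A connect() with this sockaddr would then reach that host on that link only:
    # with socket.socket(family, socktype, proto) as s:
    #     s.connect(sockaddr)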
HMC/SE Console Messenger
On systems prior to System z9, the remote browser capability was limited to Platform Independent Remote Console (PIRC), with a very small subset of functionality. Full functionality via Desktop On-Call (DTOC) was limited to one user at a time; it was slow, and was rarely used.
With System z9, full functionality to multiple users was delivered with a fast Web browser solution. You liked this, but requested the ability to communicate with other remote users.
There is now a new Console Manager task that offers
basic messaging capabilities to allow system operators or
administrators to coordinate their activities. The new task
may be invoked directly, or via a new option in Users and
Tasks. This capability is available for HMC and SE local
and remote users permitting interactive plain-text com-
munication between two users and also allowing a user to
broadcast a plain-text message to all users. This feature is
a limited instant messenger application and does not inter-
act with other instant messengers.
Enhanced installation support for z/VM using the HMC
HMC version 2.10.1, along with Support Element (SE) version 2.10.1 on the z10 BC and corresponding z/VM 5.4 support, now gives you the ability to install Linux on System z in a z/VM virtual machine using the HMC DVD drive. This
new function does not require an external network con-
nection between z/VM and the HMC, but instead, uses the
existing communication path between the HMC and SE.
This support is intended for customers who have no alter-
native, such as a LAN-based server, for serving the DVD
contents for Linux installations. The elapsed time for instal-
lation using the HMC DVD drive can be an order of magni-
tude, or more, longer than the elapsed time for LAN-based
alternatives.
Using the legacy support and the z/VM 5.4 support, z/VM can be installed in an LPAR, and both z/VM and Linux on System z can be installed in a virtual machine from the HMC DVD drive without requiring any external network setup or a connection between an LPAR and the HMC. This addresses the security concerns and additional configuration effort of the only other previous solution, an external network connection from the HMC to the z/VM image.

Support for the enhanced installation support for z/VM using the HMC is exclusive to z/VM 5.4 and the System z10.

HMC z/VM Tower System Management Enhancements
Building upon the previous z/VM Systems Management support from the Hardware Management Console (HMC), which offered management support for already defined virtual resources, new HMC capabilities are being made available allowing selected virtual resources to be defined. In addition, further enhancements have been made for managing defined virtual resources.
Enhancements are designed to deliver out-of-the-box
integrated graphical user interface-based (GUI-based)
management of selected parts of z/VM. This is especially
targeted to deliver ease-of-use for enterprises new to
System z. This helps to avoid the purchase and installa-
tion of additional hardware or software, which may include
complicated setup procedures. You can more seamlessly
perform hardware and selected operating system man-
agement using the HMC Web browser-based user inter-
face.
Support for HMC z/VM tower systems management
enhancements is exclusive to z/VM 5.4 and the System z10.
Implementation Services for Parallel Sysplex
IBM Implementation Services for Parallel Sysplex DB2 Data Sharing
To assist with the assessment, planning, implementation, testing, and backup and recovery of a System z DB2 data sharing environment, IBM Global Technology Services announced and made available the IBM Implementation Services for Parallel Sysplex Middleware – DB2 data sharing on February 26, 2008.
This DB2 data sharing service is designed for clients who want to:
1) Enhance the availability of data
2) Enable applications to take full utilization of all servers' resources
3) Share application system resources to meet business goals
4) Manage multiple systems as a single system from a single point of control
5) Respond to unpredicted growth by quickly adding computing power to match business requirements without disruption
6) Build on the current investments in hardware, software, applications, and skills while potentially reducing computing costs
The offering consists of six selectable modules; each is a stand-alone module that can be individually acquired. The first module is an infrastructure assessment module, followed by five modules which address the following DB2 data sharing disciplines:
1) DB2 data sharing planning
2) DB2 data sharing implementation
3) Adding additional data sharing members
4) DB2 data sharing testing
5) DB2 data sharing backup and recovery

IBM Implementation Services for Parallel Sysplex CICS and WAS Enablement
IBM Implementation Services for Parallel Sysplex Middleware – CICS enablement consists of five fixed-price and fixed-scope selectable modules:
1) CICS application review
2) z/OS CICS infrastructure review (module 1 is a prerequisite for this module)
3) CICS implementation (module 2 is a prerequisite for this module)
4) CICS application migration
5) CICS health check
IBM Implementation Services for Parallel Sysplex Middleware – WebSphere Application Server enablement consists of three fixed-price and fixed-scope selectable modules:
1) WebSphere Application Server network deployment planning and design
2) WebSphere Application Server network deployment implementation (module 1 is a prerequisite for this module)
3) WebSphere Application Server health check
For a detailed description of this service, refer to Services Announcement 608-041 (RFA47367), dated June 24, 2008.
For more information on these services contact your IBM representative or refer to: www.ibm.com/services/server.
Fiber Quick Connect for FICON LX Environments
Fiber Quick Connect (FQC), an optional feature on z10
BC, is offered for all FICON LX (single-mode fiber) chan-
nels, in addition to the current support for ESCON (62.5
micron multimode fiber) channels. FQC is designed to
significantly reduce the amount of time required for on-site
installation and setup of fiber optic cabling. FQC facilitates
adds, moves, and changes of ESCON and FICON LX fiber
optic cables in the data center, and may reduce fiber con-
nection time by up to 80%.
FQC is for factory installation of Fiber Transport System
(FTS) fiber harnesses for connection to channels in the I/O
drawer. FTS fiber harnesses enable connection to FTS
direct-attach fiber trunk cables from IBM Global Technol-
ogy Services.
FQC, coupled with FTS, is a solution designed to help
minimize disruptions and to isolate fiber cabling activities
away from the active system as much as possible.
IBM provides the direct-attach trunk cables, patch panels,
and Central Patching Location (CPL) hardware, as well
as the planning and installation required to complete the
total structured connectivity solution. An ESCON example:
Four trunks, each with 72 fiber pairs, can displace up
to 240 fiber optic jumper cables, the maximum quantity
of ESCON channels in one I/O drawer. This significantly
reduces fiber optic jumper cable bulk.
At CPL panels you can select the connector to best meet
your data center requirements. Small form factor connec-
tors are available to help reduce the floor space required
for patch panels.
CPL planning and layout is done prior to arrival of the
server on-site using the default CHannel Path IDdentifier
(CHPID) placement report, and documentation is provided
showing the CHPID layout and how the direct-attach har-
nesses are plugged.
FQC supports all of the ESCON channels and all of the
FICON LX channels in the I/O drawer of the server. On
an upgrade from a z890 or z9 BC, ESCON channels that
are NOT using FQC cannot be used on the z10 BC FQC
feature.
GDPS
Geographically Dispersed Parallel Sysplex (GDPS) is designed to provide a comprehensive end-to-end continuous availability and/or disaster recovery solution for System z servers; Geographically Dispersed Open Clusters (GDOC) is designed to address the same need for open systems. When available, GDPS 3.5 will support GDOC for coordinated disaster recovery across System z and non-System z servers if Veritas Cluster Server is already installed. GDPS and the new Basic HyperSwap (available with z/OS V1.9) solutions help to ensure system failures are invisible to employees, partners and customers, with dynamic disk-swapping capabilities that ensure applications and data are available. z10 BC – big on service, low on cost.

GDPS is a multi-site or single-site end-to-end application availability solution that provides the capability to manage the remote copy configuration and storage subsystems (including IBM TotalStorage), to automate Parallel Sysplex operation tasks, and to perform failure recovery from a single point of control.

GDPS helps automate recovery procedures for planned and unplanned outages to provide near-continuous availability and disaster recovery capability. Additional information is available at: 03.ibm.com/systems/z/gdps/.
z10 BC Physical Characteristics
Physical Planning
A System z10 BC feature may be ordered to allow use of the z10 BC in a non-raised floor environment. This capability may help ease the cost of entry into the z10 BC; a raised floor may not be necessary for some infrastructures.
The non-raised floor z10 BC implementation is designed to meet all electromagnetic compatibility standards. Feature #7998 must be ordered if the z10 BC is to be used in a non-raised floor environment. A Bolt-down kit (#7992) is also available for use with a non-raised floor z10 BC, providing frame stabilization and bolt-down hardware to help secure a frame to a non-raised floor. The Bolt-down kit (#7992) may be ordered for an initial box or MES starting January 28, 2009.

z10 BC System Power
                       1 I/O Drawer   2 I/O Drawers   3 I/O Drawers   4 I/O Drawers
Normal room (<28 °C)   3.686 kW       4.542 kW        5.308 kW        6.253 kW
Warm room (>=28 °C)    4.339 kW       5.315 kW        6.291 kW        7.266 kW
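For rough energy planning, the per-drawer power increments fall straight out of the table; the small Python snippet below simply computes them (values copied from the table above).

    normal = [3.686, 4.542, 5.308, 6.253]  # kW for 1-4 I/O drawers, normal room
    warm   = [4.339, 5.315, 6.291, 7.266]  # kW for 1-4 I/O drawers, warm room
    for label, row in (("normal room", normal), ("warm room", warm)):
        deltas = [round(b - a, 3) for a, b in zip(row, row[1:])]
        print(label, deltas)  # incremental kW per added I/O drawer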
z10 BC Highlights and Physical Dimensions
                       z10 BC                        z9 BC
Number of Frames       1 Frame                       1 Frame
Height (with covers)   201.5 cm / 79.3 in (42 EIA)   194.1 cm / 76.4 in (40 EIA)
Width (with covers)    77.0 cm / 30.3 in             78.5 cm / 30.9 in
Depth (with covers)    180.6 cm / 71.1 in            157.7 cm / 62.1 in
Height Reduction       180.9 cm / 71.2 in (EIA)      178.5 cm / 70.3 in (EIA)
Width Reduction        None                          None
Machine Area           1.42 sq. m. / 15.22 sq. ft.   1.24 sq. m. / 13.31 sq. ft.
Service Clearance      3.50 sq. m. / 37.62 sq. ft.   3.03 sq. m. / 32.61 sq. ft.
                       (IBF contained w/in frame)    (IBF contained w/in frame)

The Installation Manual for Physical Planning (GC28-6875) is available on Resource Link and should always be referred to for detailed planning information.

Maximum of 480 CHPIDs, four I/O drawers, 32 I/O slots (8 I/O slots per I/O drawer):
[Diagram: z10 BC A Frame, front and rear views - showing the Integrated Batteries (battery-enabled circuit breakers J1), system power supplies, Central Processor Complex (CPC) drawer, Support Elements, and I/O drawers 1-4 (frame locations A00S/A00H, EIA positions A-Z).]
z10 BC Configuration Detail
Features           Min #      Max #      Max              Increments        Purchase
                   Features   Features   Connections      per Feature       Increments
16-port ESCON      0 (1)      32         480 channels     16 channels       4 channels
                                                          (1 reserved
                                                          as a spare)
FICON Express4*    0 (1)      32         64/128*          2/4* channels     2/4* channels
                                         channels
FICON Express2**   0 (1)      20         80 channels      4 channels        4 channels
FICON Express**    0 (1)      20         40 channels      2 channels        2 channels
ICB-4              0 (1)      6          12 links (2)(3)  2 links           1 link
ISC-3              0 (1)      12         48 links (2)     4 links           1 link
1x PSIFB           0 (1)      6          12 links (2)     2 links           2 links
12x PSIFB          0 (1)      6          12 links (2)(3)  2 links           2 links
OSA-Express3*      0          24         48/96* ports     2 or 4 ports      2 ports/4 ports
OSA-Express2**     0          24         24/48 ports      1 or 2 ports      2 ports/1 port
Crypto Express2    0          8          8/16 PCI-X       1/2* PCI-X        2* PCI-X
                                         adapters         adapters          adapters (4)

1) Minimum of one I/O feature (ESCON, FICON) or Coupling Link (PSIFB, ICB-4, ISC-3) required.
2) The maximum number of external Coupling Links combined cannot exceed 56 per server. There is a maximum of 64 coupling link CHPIDs per server (ICs, ICB-4s, active ISC-3 links, and IFBs).
3) ICB-4 and 12x IB-DDR are not included in the maximum feature count for I/O slots but are included in the CHPID count.
4) Initial order of Crypto Express2 is 2/4 PCI-X adapters (two features). Each PCI-X adapter can be configured as a coprocessor or an accelerator.
* FICON Express4-2C 4KM LX has two channels per feature, OSA-Express3 GbE and 1000BASE-T have 2 and 4 port options, and Crypto Express2-1P has 1 coprocessor.
** Available only when carried forward on an upgrade from z890 or z9 BC. Limited availability for OSA-Express2 GbE features.

z10 BC Concurrent PU Conversions
• Must order (characterize one PU as) a CP, an ICF or an IFL
• Concurrent model upgrade is supported
• Concurrent processor upgrade is supported if PUs are available
  – Add CP, IFL, unassigned IFL, ICF, zAAP, zIIP or optional SAP
• PU Conversions
  – Standard SAP cannot be converted to other PU types

From \ To        CP    IFL   Unassigned IFL   ICF   zAAP   zIIP   Optional SAP
CP               X     Yes   Yes              Yes   Yes    Yes    Yes
IFL              Yes   X     Yes              Yes   Yes    Yes    Yes
Unassigned IFL   Yes   Yes   X                Yes   Yes    Yes    Yes
ICF              Yes   Yes   Yes              X     Yes    Yes    Yes
zAAP             Yes   Yes   Yes              Yes   X      Yes    Yes
zIIP             Yes   Yes   Yes              Yes   Yes    X      Yes
Optional SAP     Yes   Yes   Yes              Yes   Yes    Yes    X

Exceptions: Disruptive if ALL current PUs are converted to different types; conversions may require individual LPAR disruption if dedicated PUs are converted.
z10 BC Model Structure
z10 Model E10 – Single Frame

Model   PUs for    Max Subcapacity   Standard   Standard   Max CP/IFL/ICF/   Customer     Max
        Customer   CPs               SAPs       Spares     zAAP/zIIP**       Memory       Chan.
E10     10         5                 2          0          5/10/10/5/5       4 - 248 GB   480*

* Max is for ESCON channels.
** For each zAAP and/or zIIP installed there must be a corresponding CP. The CP may satisfy the requirement for both the zAAP and/or zIIP. The combined number of zAAPs and/or zIIPs can not be more than 2x the number of general purpose processors (CPs).

z10 BC System weight and IBF hold-up times
System weight:
w/o IBF   1890 lbs.
w/ IBF    2100 lbs.

z10 BC IBF hold-up time:
               1 I/O Drawer   2 I/O Drawers   3 I/O Drawers   4 I/O Drawers
1 CPC Drawer   13 min         11 min          9 min           7 min

z10 BC Memory
       Minimum   Maximum
E10    4 GB      248 GB
Memory DIMM sizes: 2 GB and 4 GB. (Fixed HSA not included; up to 248 GB for customer use June 30, 2009.)
System z CF Link Connectivity – Peer Mode only

Connectivity Options       z10 ISC-3   z10 ICB-4   z10 1x PSIFB   z10 12x PSIFB
z10/z9/z990/z890 ISC-3     2 Gbps      N/A         N/A            N/A
z10/z9/z990/z890 ICB-4     N/A         2 GBps      N/A            N/A
z9 with PSIFB              N/A         N/A         N/A            3 GBps*
z10 1x PSIFB (>150m)       N/A         N/A         5 Gbps*        N/A
z10 12x PSIFB              N/A         N/A         N/A            6 GBps*

• N-2 server generation connections allowed
• Theoretical maximum rates shown
• 1x PSIFBs support single data rate (SDR) at 2.5 Gbps when connected to a DWDM capable of SDR speed, and double data rate (DDR) at 5 Gbps when connected to a DWDM capable of DDR speed
• System z9 does NOT support 1x IB-DDR or SDR InfiniBand Coupling Links

*Note: The InfiniBand link data rate of 6 GBps, 3 GBps or 5 Gbps does not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB, the service times of coupling operations are greater, and the actual throughput may be less than with ICB links.
Coupling Facility – CF Level of Support

CF Level   Function                                                z10 EC/   z9 EC/   z990/
                                                                   z10 BC    z9 BC    z890
16         CF Duplexing Enhancements;                              X         -        -
           List Notification Improvements;
           Structure Size increment increase from 512 MB -> 1 MB
15         Increasing the allowable tasks in the CF                X         X        -
           from 48 to 112
14         CFCC Dispatcher Enhancements                            X         X        X
13         DB2 Castout Performance                                 X         X        X
12         z990 Compatibility; 64-bit CFCC Addressability;         X         X        X
           Message Time Ordering; DB2 Performance;
           SM Duplexing Support for zSeries
11         z990 Compatibility; SM Duplexing Support                -         -        -
           for 9672 G5/G6/R06
10         z900 GA2 Level                                          -         -        -
9          Intelligent Resource Director; IC3/ICB3/ISC3            X         X        X
           Peer Mode; MQSeries Shared Queues;
           WLM Multi-System Enclaves

Note: zSeries 900/800 and prior generation servers are not supported with System z10 for Coupling Facility or Parallel Sysplex levels.
Statement of Direction
IBM intends to support optional water cooling on future
high end System z servers. This cooling technology will
tap into building chilled water that already exists within the
datacenter for computer room air conditioning systems.
External chillers or special water conditioning will not be
required. Water cooling technology for high end System z
servers will be designed to deliver improved energy effi-
ciencies.
The System z10 will be the last server to support connec-
tions to the Sysplex Timer (9037). Servers that require time
synchronization, such as to support a base or Parallel Sys-
plex, will require Server Time Protocol (STP). STP has been
available since January 2007 and is offered on the System
z10, System z9, and zSeries 990 and 890 servers.
ESCON channels to be phased out: It is IBM’s intent for
ESCON channels to be phased out. System z10 EC and
System z10 BC will be the last servers to support greater
than 240 ESCON channels.
IBM intends to support the ability to operate from High
Voltage DC power on future System z servers. This will
be in addition to the wide range of AC power already
supported. A direct HV DC datacenter power design can
improve data center energy efficiency by removing the
need for an additional DC to AC inversion step.
ICB-4 links to be phased out: (Restatement of SOD from RFA46507) IBM intends to not offer Integrated Cluster Bus-4 (ICB-4) links on future servers. IBM intends for System z10 to be the last server to support ICB-4 links.
The System z10 will be the last server to support Dynamic
ICF expansion. This is consistent with the System z9 hard-
ware announcement 107-190 dated April 18, 2007, IBM
System z9 Enterprise Class (z9 EC) and System z9 Busi-
ness Class (z9 BC) – Delivering greater value for every-
one, in which the following Statement of Direction was
made: IBM intends to remove the Dynamic ICF expansion
function from future System z servers.
Publications
The following Redbook publications are available now:
z10 BC Technical Overview                                 SG24-7632
z10 BC Technical Guide                                    SG24-7516
System z Connectivity Handbook                            SG24-5444
Server Time Protocol Planning Guide                       SG24-7280
Server Time Protocol Implementation Guide                 SG24-7281

The following publications are shipped with the product and available in the Library section of Resource Link:
Hardware Management Console Operations Guide (V2.10.1)    SC28-6873
IOCP User's Guide                                         SB10-7037
Maintenance Information for Fiber Optic Links             SY27-2597
OSA-Express Customer's Guide                              SA22-7935
OSA-ICC User's Guide                                      SA22-7990
Planning for Fiber Optic Links                            GA23-0367
PR/SM Planning Guide                                      SB10-7153
SCSI IPL - Machine Loader Messages                        SC28-6839
Service Guide for HMCs and SEs                            GC28-6861
Service Guide for Trusted Key Entry Workstations          GC28-6862
Standalone IOCP User's Guide                              SB10-7152
Support Element Operations Guide (Version 2.10.0)         SC28-6879
System Safety Notices                                     G229-9054
z10 BC Installation Manual                                GC28-6874
z10 BC Safety Inspection Guide                            GC28-6877
z10 BC Service Guide                                      GC28-6878

The following publications are available in the Library section of Resource Link:
Agreement for Licensed Machine Code                       SC28-6872
Application Programming Interfaces                        SB10-7030
Application Programming Interfaces for Java               API-JAVA
Capacity on Demand User's Guide                           SC28-6871
CHPID Mapping Tool User's Guide                           GC28-6825
Common Information Model (CIM) Management Interface       SB10-7154
Coupling Links I/O Interface Physical Layer               SA23-0395
ESCON and FICON CTC Reference                             SB10-7034
ESCON I/O Interface Physical Layer                        SA23-0394
FICON I/O Interface Physical Layer                        SA24-7172
System z Functional Matrix                                ZSW0-1335
TKE PCIX Workstation User's Guide                         SA23-2211
z10 BC Installation Manual - Physical Planning (IMPP)     GC28-6875
z10 BC Parts Catalog                                      GC28-6876
z10 BC System Overview                                    SA22-1085

Publications for System z10 Business Class can be obtained at Resource Link by accessing the following Web site: www.ibm.com/servers/resourcelink
© Copyright IBM Corporation 2009
IBM Systems and Technology Group
Route 100
Somers, NY 10589
U.S.A.
Produced in the United States of America,
04-09
All Rights Reserved
References in this publication to IBM products or services do not imply
that IBM intends to make them available in every country in which IBM
operates. Consult your local IBM business contact for information on the
products, features, and services available in your area.
IBM, IBM eServer, the IBM logo, the e-business logo, AIX, APPN, CICS,
Cognos, Cool Blue, DB2, DRDA, DS8000, Dynamic Infrastructure, ECKD,
ESCON, FICON, Geographically Dispersed Parallel Sysplex, GDPS,
HiperSockets, HyperSwap, IMS, Lotus, MQSeries, MVS, OS/390, Parallel
Sysplex, PR/SM, Processor Resource/Systems Manager, RACF, Rational,
Redbooks, Resource Link, RETAIN, REXX, RMF, Scalable Architecture
for Financial Reporting, Sysplex Timer, Systems Director Active Energy
Manager, System Storage, System z, System z9, System z10, Tivoli,
TotalStorage, VSE/ESA, VTAM, WebSphere, z9, z10, z10 BC, z10 EC, z/Architecture, z/OS, z/VM, z/VSE, and zSeries are trademarks or registered trademarks of the International Business Machines Corporation in the United States and other countries.
InfiniBand is a trademark and service mark of the InfiniBand Trade Asso-
ciation.
Java and all Java-based trademarks and logos are trademarks or regis-
tered trademarks of Sun Microsystems, Inc. in the United States or other
countries.
Linux is a registered trademark of Linus Torvalds in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation in the United States, other countries, or both.
Intel is a trademark of the Intel Corporation in the United States and other
countries.
Other trademarks and registered trademarks are the properties of their
respective companies.
IBM hardware products are manufactured from new parts, or new and
used parts. Regardless, our warranty terms apply.
Performance is in Internal Throughput Rate (ITR) ratio based on measure-
ments and projections using standard IBM benchmarks in a controlled
environment. The actual throughput that any user will experience will vary
depending upon considerations such as the amount of multiprogramming
in the user’s job stream, the I/O configuration, the storage configuration,
and the workload processed. Therefore, no assurance can be given that
an individual user will achieve throughput improvements equivalent to the
performance ratios stated here.
All performance information was determined in a controlled environment.
Actual results may vary. Performance information is provided “AS IS” and
no warranties or guarantees are expressed or implied by IBM.
Photographs shown are of engineering prototypes. Changes may be
incorporated in production models.
This equipment is subject to all applicable FCC rules and will comply with
them upon delivery.
Information concerning non-IBM products was obtained from the suppli-
ers of those products. Questions concerning those products should be
directed to those suppliers.
All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
ZSO03021-USEN-02