NEC Enterprise Server
NEC Express5800/1000 Series
NEC Express5800/1000 Technology Guide Vol.1
Powered by the Dual-Core Intel® Itanium® Processor
NEC Express5800/1000 Series
Reliability and Performance through
the fusion of the NEC “A3” chipset and
the Dual-Core Intel® Itanium® processor
1320Xf/1160Xf
1080Rf
Internal Connections of the
Express5800/1000 Series
System Hardware Layout of the Express5800/1000
Series Server (1320Xf)
[Figure: system hardware layout of the Express5800/1000 series server (1320Xf): Cell cards (processors, memory and Cell controller), crossbar cards for increased inter-Cell data transfer speeds, clock cards (*1), service processors (*1), fan boxes (N+1 redundant), PCI boxes with PCI slots, HDD bays, power distribution units (PDU) and power bays (redundant configurations available). Many components are hot-pluggable (*2).]
*1 Redundancy is optional
*2 Ability to replace a failed component without shutting down other partitions
Mainframe-class RAS features

Reliability / Availability
• Dual-Core Intel® Itanium® processor: error handling by hardware and the operating system through the Machine Check Architecture (MCA)
• Memory mirroring: continuous operation even in the event of a non-correctable error
• Partial chipset degradation: avoids multi-partition shutdowns resulting from chipset failures
• Highly available center plane: system restoration after the replacement of a failed crossbar no longer requires a system shutdown
• Complete modularization and redundancy: improvements in fault resilience, continuous operation and serviceability
• Clock modularization, redundancy and 16-processor domain segmentation: minimizes downtime and avoids multi-partition shutdowns due to clock failure
• Diagnostics of the error detection circuits: substantial strengthening of data integrity
• Enhanced error detection of the high-speed interconnect: intricate error handling through multi-bit error detection and retransmission of errored data
• Two independent power sources: avoids system shutdown due to failures of the power distribution units

Serviceability
Autonomic reporting of logs with pinpoint prognosis of failed components allows for the realization of mainframe-class platform serviceability.
Supercomputer-class Performance
Features for performance improvement

• Dual-Core Intel® Itanium® processor and high-speed inter/intra-Cell cache-to-cache data transfer: at the heart of the Express5800/1000 series server is the 64-bit Dual-Core Intel® Itanium® processor, redesigned for even faster processing of larger data sets.
• Increased memory bandwidth (improved inter/intra-Cell memory data transfer): the system is equipped with the NEC-designed "A3" chipset in order to improve performance by exploiting, to its full extent, the massive 24MB of cache memory built into the Dual-Core Intel® Itanium® processor.
• Very Large Cache (VLC) architecture [1320Xf/1160Xf]: high-speed, low-latency intra-Cell cache-to-cache data transfer.
• Dedicated Cache Coherency Interface (CCI) [1320Xf/1160Xf]: high-speed, low-latency inter-Cell cache-to-cache data transfer. Technologies to increase cache-to-cache data transfer, such as the VLC architecture and CCI, maximize performance for enterprise mission-critical computing.
• Crossbar-less configuration [1080Rf]: improved data transfer latency between Cell/Cell and Cell/IO.
High processing power of the Dual-Core Intel® Itanium® processor
Dual-Core, massive L3 cache and EPIC (Explicitly Parallel Instruction Computing) architecture
The Dual-Core Intel® Itanium® processor is the first member of the Intel® Itanium® processor family with two complete 64-bit cores on one processor, and also the first to include Hyper-Threading Technology, which provides four times the number of application threads of earlier single-core implementations.
With a maximum of 24MB of On-Die L3 cache, the Dual-Core Intel®
Itanium® processor excels at high volume data transactions.
EPIC architecture provides a variety of advanced implementations
of parallelism, predication, and speculation, resulting in superior
Instruction-Level Parallelism (ILP) to help address the current
and future requirements of high-end enterprise and technical
workloads.
[Figure: conventional superscalar RISC processor versus parallel processing with the EPIC architecture. With a conventional superscalar RISC processor, the compiler turns the original source code into sequential machine code and the hardware achieves only partial parallelization at run time, so some level of parallelism is achieved but it is neither maximized nor efficient. With the Intel® Itanium® processor and an Itanium-supported compiler, the source is parallelized at compile time, and efficient parallel processing is made possible through this thorough parallelization.]

In the EPIC architecture, parallelization is performed at compile time, allowing for maximum parallelization with minimal run-time scheduling.
VLC Architecture
High-speed / low latency Intra-Cell cache-to-cache data transfer

The Express5800/1000 series server implements the VLC architecture, which allows for low-latency cache-to-cache data transfer between multiple CPUs within a Cell, increasing enterprise application performance through reduced cache memory access latency.

In a split-bus architecture, a cache-to-cache data transfer must pass through a chipset. The overhead of transferring data through the chipset leads to higher cache memory access latency, non-uniform cache-to-cache data transfers and inconsistent performance. In the VLC architecture, however, the caches can be accessed directly by one another, bypassing the chipset. These direct CPU-to-CPU transfers lower the latency between the caches, which results in faster data transfers.

[Figure: cache access latency versus data size. With the Intel® Itanium® 2 processor (Madison, 9MB L3) on a front-side bus, accessing the L3 of another CPU on a different FSB suffers roughly 3x the latency of accessing the L3 of a CPU on the same FSB. With the Dual-Core Intel® Itanium® processor (Montvale, 24MB L3), this higher-latency region grows as the cache size increases, while the VLC architecture's high-speed cache-to-cache transfers avoid it. The image does not depict actual numbers.]
Dedicated Cache Coherency Interface (CCI)
High-speed / low latency Inter-Cell cache-to-cache data transfer

Another technology implemented in the Express5800/1000 series server to improve cache-to-cache data transfer is the Cache Coherency Interface (CCI). CCI, the inter-Cell counterpart of the VLC architecture, allows for lower-latency cache-to-cache data transfer between Cells.

Information about the location and state of cached data is required for a CPU to access specific data stored in cache memory; by consulting this information, the CPU is able to retrieve the desired data. Two main mechanisms exist for cache-to-cache data transfer between Cells: directory-based and TAG-based cache coherency. This cache information is stored in external memory (DIR memory) in the directory-based mechanism, and within the chipset in the TAG-based mechanism.

In a directory-based system, the requesting CPU first accesses the external DIR memory (which manages cache line information for all of the memory loaded on a Cell card) to confirm the location of the cached data, and then accesses the appropriate cache memory. In a TAG-based system, on the other hand, the requesting CPU broadcasts a request to all other caches simultaneously via the TAG memory (which manages cache line information for all of the CPUs loaded on a Cell card).

The benefit of the TAG-based mechanism, as implemented in the A3 chipset of the Express5800/1000 series server, is that by consulting the TAG, unnecessary inquiries to the cache memory are filtered out for a smoother transfer of data. Furthermore, the Express5800/1000 series server includes a dedicated high-speed cache coherency interface (CCI) which connects the Cells directly to one another without using a crossbar. This interface is used for snooping broadcasts and other cache coherency transactions, allowing for even faster cache-to-cache data transfer.
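As a rough conceptual sketch (a toy model for illustration, not the A3 chipset's actual logic), the snoop-filtering benefit of TAG-based coherency can be shown as a table that records which CPUs may hold a cache line, so a request only disturbs the caches that matter:

```python
class TagDirectory:
    """Toy sketch of TAG-based snoop filtering: the chipset-resident TAG
    tracks which CPUs may hold each cache line, so unnecessary inquiries
    to the other caches are filtered out."""

    def __init__(self):
        self.sharers = {}  # cache line address -> set of CPU ids

    def record_fill(self, addr, cpu):
        # Called when a CPU brings a line into its cache.
        self.sharers.setdefault(addr, set()).add(cpu)

    def snoop_targets(self, addr, requestor):
        # Only CPUs the TAG lists as possible holders need to be snooped;
        # everyone else is left undisturbed.
        return self.sharers.get(addr, set()) - {requestor}


tag = TagDirectory()
tag.record_fill(0x80, cpu=1)
tag.record_fill(0x80, cpu=2)
assert tag.snoop_targets(0x80, requestor=1) == {2}       # only CPU 2 is snooped
assert tag.snoop_targets(0x40, requestor=1) == set()     # line cached nowhere: no snoops
```

In a directory-based scheme the same lookup would require a round trip to external DIR memory before the cache access; keeping the TAG inside the chipset is what removes that extra hop.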
Crossbar-less configuration
Improved data transfer latency through direct attached Cell configuration
Within the Express5800/1000 series server lineup, the 1080Rf lowers data transfer latency by removing the crossbar and directly connecting Cell to Cell, and Cell to PCI box. Even with the crossbar-less configuration, virtualization of the Cell cards and I/O boxes has been retained so as not to diminish computing and I/O resources.
Mainframe-class RAS Features
RAS Design Philosophy
Realization of a mainframe-class continuous operation through the pursuit of
reliability and availability in a single server construct
Generally, in order to achieve reliability and availability on an
open server, clustering would be implemented. However,
clustering comes with a price tag. To keep costs at a minimum,
the Express5800/1000 series servers were designed to
achieve a high level of reliability and availability, but within a
single server.
Continuous operation through failures, minimizing the spread of failures, and smooth recovery after failures were the goals that led to the implementation of technologies such as memory mirroring, increased redundancy of critical components, and modularization. Through these technologies, a mainframe level of continuous operation was achieved.
The Express5800/1000 series server’s powerful RAS features
were developed through the pursuit of dependable server
technology.
[Figure: RAS technology map. Dependable server technology raises the Express5800/1000 series from the PC-server and conventional open-server levels to the mainframe level of reliability, availability and serviceability, grouped by goal:

Continuous operation through failures (redundant components, error prediction and error correction allow for continuous operation):
- Chipset: hot-pluggable*4
- Clock: duplexed*1, 16-processor domain segmentation, dynamic recovery, hot-pluggable*4
- Core I/O: Core I/O relief, hot-pluggable*4
- PCI card: hot-pluggable

Minimized spread of failures (technology to minimize the effects of hardware failures on the system; reduction of performance degradation and multi-node shutdowns):
- Center plane: no chipset on the center plane
- Chipset: ECC protection of main data paths, partial chipset degradation, intricate error detection of the high-speed interconnects
- Memory: ECC protection, SDDC memory, memory mirroring*1
- CPU: Intel® Cache Safe Technology*3 for the L3 cache

Smooth recovery after failures (ability to replace failed components without shutting down operations):
- Power: N+1 redundant, two independent power sources*2, hot-pluggable*4
- HDD: software RAID / hardware RAID, hot-pluggable]

*1 Available only on the 1320Xf/1160Xf
*2 Available only on the 1320Xf
*3 Intel® technology designed to avoid cache-based failures
*4 Replacement of a failed component without shutting down other partitions
The Dual-Core Intel® Itanium® processor MCA
(Machine Check Architecture)
The framework for hardware, firmware and OS error handling

The Dual-Core Intel® Itanium® processor, designed for high-end enterprise servers, not only excels in performance, but is also rich in RAS features. At the core of the processor's RAS feature set is the error handling framework called MCA.

MCA provides a three-stage error handling mechanism: hardware, firmware, and operating system. In the first stage, the CPU and chipset attempt to handle errors through ECC (Error Correcting Code) and parity protection, detecting and correcting a wide range of hardware errors in the main data structures. If the error cannot be handled by the hardware, it is passed to the second stage, where the firmware attempts to resolve the issue seamlessly, logs the error details, and defines a report flow for the OS. In the third stage, if the error cannot be handled by the first two stages, the operating system logs the error and runs recovery procedures based on the error report and error log it received; in this way the firmware and OS aid in the correction of complex platform errors to restore the system. In the event of a critical error, the system will automatically reset, to significantly reduce the possibility of a system failure.
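As a hedged illustration of the ECC principle behind the hardware stage (a textbook Hamming(7,4) code, not the processor's actual ECC scheme), a single flipped bit can be located and corrected by recomputing parity checks:

```python
def hamming_encode(data_bits):
    # Hamming(7,4): data bits d1..d4 become the codeword p1 p2 d1 p3 d2 d3 d4,
    # where each parity bit covers the positions whose index includes its power of two.
    d1, d2, d3, d4 = data_bits
    p1 = d1 ^ d2 ^ d4      # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4      # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4      # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]


def hamming_correct(code):
    # Recompute the parity checks; the syndrome is the 1-based position
    # of a single flipped bit (0 means no error detected).
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the errored bit back
    return c, syndrome


data = [1, 0, 1, 1]
code = hamming_encode(data)
corrupted = list(code)
corrupted[4] ^= 1              # inject a single-bit error at position 5
fixed, pos = hamming_correct(corrupted)
assert fixed == code and pos == 5
```

Real server ECC uses wider codes over 64-bit or 128-bit words, but the mechanism is the same: the hardware stage of MCA corrects such errors transparently, and only uncorrectable cases escalate to firmware and the OS.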
Memory Mirroring
Continuous operation even in the event of a non-correctable memory error

The Express5800/1000 series server supports high-level memory RAS features to ensure that the server can rapidly detect memory errors, reduce multi-bit errors and continue operating even in the event of memory chip or memory controller failures. Memory scan, memory chip sparing (SDDC*) and memory scrubbing are examples of those features.

A memory scan is run on all loaded memory modules at each OS boot. If the system detects a memory failure, the failed component is immediately isolated and detached from the system, preventing possible downtime during business operations.

Chip sparing (SDDC*) memory is a memory system loaded with several DRAM chips that can correct errors at the chip level. If a failure occurs in the memory, the error can be corrected immediately to allow for continuous operation.

Memory scrubbing checks memory content regularly (every few milliseconds) during operation without affecting performance. When an error is detected, it is corrected and then reported. The scrubbing function is effective in detecting errors in a timely manner, which ultimately results in the reduction of multi-bit errors.

With memory mirroring, the same data is continuously written onto two separate memory blocks instead of one (available only on the 1160Xf and 1320Xf). In the event of a non-correctable error, because the data exists on two independent blocks, operations are able to continue without interruption. This construct allows for continuous operation through all non-correctable memory errors, covering not only the memory modules themselves but also the memory interfaces and the memory controllers.

[Figure: each Cell controller drives mirrored pairs of memory controllers and memory interfaces. Memory mirroring covers the memory controller, memory interface and memory, while standard chip sparing covers the memory alone; degradation on the Express5800/1000 series occurs at the mirrored-block level.]

* Single Device Data Correction
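The mirroring principle can be sketched conceptually (an illustrative model only, not NEC's firmware): every write goes to two independent blocks, so a non-correctable error in one block is survived by reading from the other.

```python
class MirroredMemory:
    """Conceptual sketch of memory mirroring: writes land on two
    independent blocks, and reads fall back to the mirror when the
    primary copy has a non-correctable error."""

    def __init__(self, size):
        self.primary = [0] * size
        self.mirror = [0] * size
        self.failed_primary = set()   # addresses with non-correctable errors

    def write(self, addr, value):
        # The same data is written to both blocks.
        self.primary[addr] = value
        self.mirror[addr] = value

    def read(self, addr):
        if addr in self.failed_primary:
            return self.mirror[addr]  # fall back to the intact copy
        return self.primary[addr]


mem = MirroredMemory(16)
mem.write(3, 42)
mem.failed_primary.add(3)             # simulate a non-correctable error
assert mem.read(3) == 42              # operation continues uninterrupted
```

The real system mirrors whole memory controller and interface paths, not individual words, but the failover idea is the same.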
Partial Chipset degradation
Avoid multi-partition shutdowns resulting from chipset failures

In certain instances, when multiple server partitions share a common crossbar controller, the effects of a failure in a single partition may result in a multi-partition shutdown. To resolve this issue, the Express5800/1000 series servers have been designed to allow for the partial degradation of chipsets.

Within each of the LSI chips that make up the chipset, multiple sub-units exist. These sub-units are connected to other sub-units located on separate LSI chips, and the combined sub-units together make up a single partition. If an error occurs on an LSI sub-unit, that sub-unit alone can be degraded, isolating the failure to a single partition and thus preventing it from spreading to other partitions.

Furthermore, the downed partition can automatically reboot itself after isolating the failed sub-unit, resuming operations in a degraded mode without the intervention of a system administrator. This is made possible on the Express5800/1000 series servers by the redundant paths between the Cells and the I/O.

[Figure: a failure occurs at a sub-unit of crossbar controller A, which is shared by partition 0 (Cell 0, PCI box 0) and partition 1 (Cell 1, PCI box 1). Partition 0 is shut down so that the failed sub-unit can be isolated, and is then rebooted in degraded mode; partition 1 is not affected. Additional sub-units exist in actuality.]
Highly Available Center Plane
System restoration after the replacement of a failed crossbar card
no longer requires planned system downtime

The Express5800/1000 series server has separated and modularized the crossbar controller, which would ordinarily reside on the system center plane. By moving the crossbar controller (LSI) off of the center plane, a reduction in center plane failures has been realized.

In the unlikely event of a crossbar failure, only the partitions linked directly to the failed crossbar will be temporarily shut down, allowing the other partitions to continue operations uninterrupted, including during the replacement of the crossbar card. The failed crossbar card can thus be replaced without halting other business operations. (The 1080Rf has a crossbar-less configuration.)
Complete modularization and redundancy
Improvements in fault resilience, continuous operation and serviceability

Major components of the Express5800/1000 series servers have been modularized, allowing for better serviceability and easy replacement in the event of a component failure. Furthermore, to minimize the existence of single points of failure, many of these modules are redundant, allowing for continuous operation (fault resilience).

[Figure: modular layout of the Express5800/1320Xf, front and back: Cell cards, crossbar cards, clock cards, service processor, fan boxes, PCI boxes and PCI modules, HDD modules and power distribution units.]

[Figure: redundancy in a sample 1320Xf configuration:
- Spare Cell card: quick recovery is possible by swapping a failed Cell card with the spare.
- Redundant crossbar cards (the 1080Rf is crossbar-less).
- Redundant clock modules (redundancy or segmentation): full redundancy is available on the 1320Xf/1160Xf; segmentation is available on the 1320Xf.
- Redundant service processors: available on the 1320Xf/1160Xf.
- N+1 redundant cooling fans.
- N+1 / 2N redundant power supplies: 2N is included in the 1320Xf, and is offered as an option on the 1160Xf/1080Rf.]
Modularization, redundancy and domain segmentation of the system clock
Minimizes downtime, and avoids multi-partition shutdown due to clock failure
Through modularization and redundancy, system downtime due to clock failures has been minimized, and the Express5800/1000 series server takes this one step further. In many cases, when a system is said to have a redundant clock, only the oscillator is actually redundant; integral clock distribution mechanisms such as the clock driver or the amplifier are often not redundant. Such a construct leaves single points of failure in the system. The Express5800/1000 series servers have redundancy not only in the oscillator, but also in the clock distribution mechanisms, so that system downtime can be minimized.

The 1320Xf system additionally allows for the division of the system into two 16-processor segments, where each segment utilizes its own system clock. A failure in one system clock therefore will not result in a shutdown of the entire system.
[Figure: clock redundancy configurations:
- Redundant Configuration A (conventional): only the clock module (oscillator) is redundant (active/standby); the shared clock distribution remains a single point of failure (SPOF) and is not hot-pluggable.
- Redundant Configuration B (Express5800/1000 series, available on the 1320Xf/1160Xf): the clock modules and the clock distribution are redundant (active/standby) and hot-pluggable, so a failed component can be replaced without a system halt.*1
- 16-processor domain segmentation (available on the 1320Xf): each 16-processor domain uses its own clock, minimizing the spread of a clock failure.]

*1: Hot plugging of the redundant oscillator is possible; however, hot plugging of the single clock driver is not possible.
Diagnostics of the error detection circuits
Substantial strengthening of data integrity

The main data paths of the A3 chipset on the Express5800/1000 series servers are protected by ECC. When a single-bit error is detected, a hardware error correction is carried out. Furthermore, the paths between the A3 chipset interfaces (from the memory controllers on the Cell card, through the Cell controllers and crossbar controllers on the crossbar card, to the I/O routers in the PCI box) support multi-bit error detection and the resending of errored data, with built-in high-speed error checks on the inter-chipset paths.

In addition to maintaining data integrity through these RAS features, the Express5800/1000 series server has the ability to run diagnostics on its own error detection circuits. During every system boot, all error detection circuits are diagnosed for possible failures. Without this feature, a failure in these circuits could result in the inability to detect errors during system operation.
Enhanced error detection of the high-speed interconnect
Intricate error handling through multi-bit error detection
and resending of errored data

Since higher-speed interconnects are implemented to increase system performance, there is a higher probability that interference noise will cause errors along these interconnects. One method of handling these interconnect errors is to disable the errored interconnect and operate in a degraded mode.

In addition to the above method, the Express5800/1000 series servers implement a methodology prevalent in supercomputers, whereby intricate multi-bit error detection is carried out and errored data is resent upon detection of an error. This allows the Express5800/1000 series servers to handle the intermittent errors that occur along the high-speed interconnects without impacting system performance.

[Figure: without check features, bad data resulting from even a simple single-bit error cannot be blocked if a failure exists within the error detection circuits themselves; the error goes undetected and bad data reaches the logic circuits. With the circuit-check feature, diagnostics of the error detection circuits at every system boot ensure data integrity: the failure in the detection circuit is itself detected and reported.]
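The detect-and-resend principle can be sketched with a checksum and a retransmission loop (an illustrative model using CRC32; the actual interconnect protocol and coding are not described in this guide):

```python
import zlib


def send_with_retry(payload, link, max_retries=3):
    """Sketch of detect-and-resend error handling on an interconnect:
    a CRC detects corruption (including multi-bit errors) and the sender
    retransmits, so intermittent noise does not force the link into a
    degraded mode."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    for _ in range(max_retries + 1):
        received = link(frame)            # the link may corrupt the frame
        data, crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == crc:
            return data                   # delivered intact
    raise IOError("persistent interconnect error; degrade the link")


attempts = {"n": 0}

def noisy_link(frame):
    # Simulated link: an intermittent error corrupts the first transmission only.
    attempts["n"] += 1
    if attempts["n"] == 1:
        return b"\x00" + frame[1:]
    return frame


assert send_with_retry(b"cache-line", noisy_link) == b"cache-line"
```

Only when retries are exhausted, i.e. the error is persistent rather than intermittent, does the link fall back to the degraded-mode handling described above.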
Two independent power sources
Avoid system shutdown due to failures of the power distribution units
The previous 32-processor and 16-processor models supported two independent power supplies, while the 8-processor model did not. This feature is now available on the new 8-processor system (1080Rf), so that the system can continue operations even in the event of a failure within a power distribution unit. Implementation of an Uninterruptible Power Supply (UPS) can further increase availability. The two independent power source feature is standard on the 1320Xf and is available as an option for the 1160Xf and 1080Rf.
Autonomic reporting of error logs with pinpoint prognosis
of failed components
Realization of a mainframe-class platform serviceability

The Express5800/1000 series servers are equipped with a service processor which handles server management and platform error handling. The service processor can be considered the core component supporting the RAS features of the system. One feature of the service processor is its ability to analyze detailed logs (BID: built-in diagnosis) collected by the chipset in the event of an error. The BID is able to diagnose the location of the error and will pinpoint the required FRU (Field Replaceable Unit), so that the time required to replace the component and recover the system can be minimized.

In the event of a failure, the Express5800/1000 series servers also have the capability to automatically send detailed error logs to maintenance personnel, further lessening the time required to resolve a system error. Furthermore, to minimize the possibility of a critical error, the diagnostics engine is able to proactively predict errors rather than merely react to them.

[Figure: reporting flow. In the customer environment, the hardware collects a detailed error log including transaction history; a diagnostics agent diagnoses retry tendencies and confirms whether thresholds were exceeded; the service processor sends the error information summary as an encrypted email over the Internet to the maintenance group, where it is analyzed to determine the cause of the failure. If required, the detailed log is analyzed further by the development group.]
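The threshold-style prediction described above can be sketched conceptually (hypothetical log format and threshold, for illustration only, not NEC's diagnostics engine):

```python
from collections import Counter


def predict_failing_frus(error_log, threshold=3):
    """Sketch of proactive error prediction: correctable-error retries
    are counted per FRU (Field Replaceable Unit), and any unit whose
    retry tendency exceeds the threshold is flagged for replacement
    before a critical error can occur."""
    retries = Counter(entry["fru"] for entry in error_log
                      if entry["type"] == "corrected-retry")
    return sorted(fru for fru, count in retries.items() if count > threshold)


# A DIMM that keeps needing corrections is flagged; a one-off event is not.
log = [{"fru": "DIMM-3", "type": "corrected-retry"} for _ in range(5)]
log.append({"fru": "CELL-1", "type": "corrected-retry"})
assert predict_failing_frus(log) == ["DIMM-3"]
```

The point of the threshold is that individually harmless corrected errors become a replacement signal when they cluster on one component.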
Flexibility and Operability
Pursuit of flexibility and operability in a system
— Flexible resource virtualization using floating I/O for improved operability
Investment Protection
Smooth migration to future processors
[Figure: Intel® Itanium® Processor Family Roadmap]
The Express5800/1000 series servers now support the Dual-Core Intel® Itanium® processors, with two complete 64-bit cores on each processor. From the beginning of development, state-of-the-art technologies have been built into the Itanium® processors to meet the stringent levels of throughput, scalability, reliability and availability required of server platforms, while also providing top-level performance. With the deployment of the present-day dual-core system, a smooth migration to future multi-core systems is assured.
2002: Intel® Itanium® 2 processor, 1GHz, 3MB L3
2003: Intel® Itanium® 2 processor, 1.5GHz, 6MB L3
2004: Intel® Itanium® 2 processor, 1.6GHz, 9MB L3
2006: Dual-Core Intel® Itanium® processor, 1.6GHz, 24MB L3
2007: Dual-Core Intel® Itanium® processor, 1.6GHz, 24MB L3
Future: Tukwila*
* Intel codename
Resource virtualization through floating I/O
Flexible resource management allows for robust server virtualization
The Express5800/1000 series employs floating I/O to allow for the flexible combination of Cell cards and PCI boxes (I/O). The computational and I/O resources can be virtualized, providing the flexibility to reallocate system resources into the most optimal configuration according to operation or load: insufficient computing resources are resolved by allocating an additional Cell card from the resource pool, and insufficient I/O resources by allocating an additional PCI box.

Furthermore, with the existence of a spare Cell card, the system can swap a failed Cell card with the spare in the event of a failure and reboot, so that business operations can resume without losing valuable computational resources.
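The floating I/O idea can be sketched as a resource pool (hypothetical names and API, for illustration only; the real reallocation is done by the platform firmware and service processor):

```python
class FloatingIOPool:
    """Conceptual sketch of floating I/O: Cell cards and PCI boxes live in
    a shared pool, can be assigned to partitions as load requires, and a
    spare Cell card can replace a failed one."""

    def __init__(self, cells, pci_boxes, spare):
        self.free_cells = set(cells)
        self.free_pci = set(pci_boxes)
        self.spare = spare
        self.partitions = {}  # name -> {"cells": set, "pci": set}

    def allocate(self, name, n_cells, n_pci):
        # Draw resources from the pool into a partition.
        cells = {self.free_cells.pop() for _ in range(n_cells)}
        pci = {self.free_pci.pop() for _ in range(n_pci)}
        self.partitions[name] = {"cells": cells, "pci": pci}

    def replace_failed_cell(self, name, failed):
        # Swap in the spare Cell card; the partition reboot is not modeled.
        cells = self.partitions[name]["cells"]
        cells.discard(failed)
        cells.add(self.spare)


pool = FloatingIOPool({"cell0", "cell1"}, {"pci0", "pci1"}, spare="cell-spare")
pool.allocate("db", n_cells=1, n_pci=1)
failed = next(iter(pool.partitions["db"]["cells"]))
pool.replace_failed_cell("db", failed)
assert "cell-spare" in pool.partitions["db"]["cells"]
```

The partition keeps its full complement of compute and I/O resources after the swap, which is the "without losing valuable computational resources" property described above.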
Multi OS support / Rich application lineup
Windows® operating system and Linux operating systems supported
Along with the industry's prevalent Microsoft® Windows® operating system, the Express5800/1000 series servers also support the Linux operating system. By dividing the system into multiple partitions, it is possible to run multiple operating systems within a single server.
With the inception of the Itanium® Solutions Alliance (ISA),
whose main objective is to promote the advancement of
Itanium®-based solutions, applications streamlined to perform
on the Itanium®-based servers, such as the Express5800/1000
series servers, have increased considerably.
Superior standard chassis configuration
Small footprint and a highly scalable I/O
With the ability to load 32 Dual-Core Intel® Itanium® processors (1320Xf) into an industry-standard 19-inch rack footprint, the Express5800/1000 series server delivers the industry's highest level of performance per unit area. Because no additional datacenter space is required to accommodate the Express5800/1000 series server, it is an ideal candidate for the replacement or consolidation of older systems.

The 1080Rf is a very compact 8U model which can support up to 8 internal 3.5-inch HDDs and 16 PCI cards.
NEC Express5800/1000 series Specifications
Model: 1080Rf | 1160Xf | 1320Xf
CPU: Dual-Core Intel® Itanium® processor
Intel® Processor Number (clock frequency): 9120N (1.42GHz) / 9140N (1.60GHz) / 9150N (1.60GHz), all models
Maximum number of CPUs (cores): 8 (16) | 16 (32) | 32 (64)
On-chip cache:
  L1 cache/core: 16KB (I) / 16KB (D)
  L2 cache/core: 1MB (I) / 256KB (D)
  L3 cache/core: 6MB (9120N) / 9MB (9140N) / 12MB (9150N)
  L3 cache/CPU: 12MB (9120N) / 18MB (9140N) / 24MB (9150N)
Maximum memory capacity: 128GB | 512GB | 1TB
Maximum number of I/O slots: 16 | 16/32 | 32/64
Internal disk drives:
  Disk bays: 8 | 16 | 32
  Maximum capacity: 2,400GB (300GB x 8) | 4,800GB (300GB x 16) | 9,600GB (300GB x 32)
LAN interface: 10/100Base-T (for management console)
Cabinet type: Rack mount (8U) | Standalone (37U) | Standalone (37U)
Dimensions (W x D x H): 441 x 857 x 351 mm | 600 x 1070 x 1800 mm | 600 x 1070 x 1800 mm
Weight: 110kg | 464kg | 563.4kg
Power supply: AC 200-240V / 50Hz-60Hz
Temperature/humidity: 5-35 degrees C / 20-80% RH (operating); 5-45 degrees C / 8-80% RH (non-operating), without condensation
Supported OS: Microsoft® Windows Server® 2008 for Itanium-based Systems; Microsoft® Windows Server® 2003 Enterprise Edition / Datacenter Edition; Red Hat Enterprise Linux
* NEC is a registered trademark and Empowered by Innovation a trademark of NEC Corporation and/or one or more of its subsidiaries. All are used under license. * Intel, Intel logo, Itanium and
Itanium inside are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. * Microsoft and Windows are registered trademarks or
trademarks of the US Microsoft Corporation in the United States and other countries. * Red Hat and Shadow Man logos are registered trademarks or trademarks of Red Hat Inc. in the United States
and other countries. * Linux is a trademark or registered trademark of Linus Torvalds in the United States and other countries. * All other trademarks and registered trademarks are the property of
their respective owners.
Safety notes
Please read carefully before use and observe the cautions and prohibitions in the instruction, installation, planning, operations and other manuals.
Incorrect usage may cause fire, electric shock, or injury.
Company names and product names used in this catalogue are trademarks or registered trademarks of the respective companies.
If this product (including the software) comes under the regulations of Foreign Exchange and Foreign Trade Law as a regulated article or other item, observe the procedures (such as application for
export permission) required by the Japanese government when taking the product out of Japan.
The colors of the products in this catalogue may be slightly different from the actual colors. Specifications are subject to change without prior notice for the purpose of improving the product.
© 2008 NEC Corporation. All rights reserved.
Information in this document is subject to change without notice.
Cat. No. E07H001