ServerView Resource Orchestrator
Cloud Edition V3.1.0
Operation Guide
Windows/Linux
J2X1-7611-03ENZ0(00)
October 2012
Purpose
This manual explains how to operate ServerView Resource Orchestrator (hereinafter Resource Orchestrator).
Target Readers
This manual is written for system administrators who will use Resource Orchestrator to operate the infrastructure in private cloud or data
center environments.
When setting up systems, it is assumed that readers have the basic knowledge required to configure the servers, storage, network devices,
and server virtualization software to be installed. Additionally, a basic understanding of directory services such as Active Directory and
LDAP is necessary.
Organization
This manual is composed as follows:

Part 1 Overview

Chapter 1 Overview of Operations, Maintenance, and Monitoring
Provides an overview of the operation, maintenance, and monitoring of Resource Orchestrator.

Part 2 Operation

Chapter 2 Starting and Stopping Managers and Agents
Explains the methods for deliberately starting and stopping managers and agents.

Chapter 3 Managing User Accounts
Explains the management of user accounts.

Chapter 4 Managing Tenants
Explains the management of tenants.

Chapter 5 Managing Templates
Explains the management of templates.

Chapter 6 Managing Resources and Resource Pools
Explains the management of resources and resource pools.

Chapter 7 Management of L-Platform
Explains the management of L-Platforms.

Chapter 8 Changing Settings
Explains how to modify various setting information.

Part 3 Maintenance

Chapter 9 Hardware Maintenance
Explains the maintenance of hardware.

Chapter 10 Backup and Restoration
Explains how to use the backup and restore functions provided by Resource Orchestrator.

Part 4 Monitoring

Chapter 11 Monitoring Resources
Explains how to monitor the configuration and status of managed resources.

Chapter 12 Collecting Power Consumption Data and Displaying Graphs
Explains how to export the power consumption data collected from registered power monitoring targets, how to display it as graphs, and the format of the exported data.

Chapter 13 Monitoring Resource Pools (Dashboard)
Explains the monitoring of resource pools.

Chapter 14 Monitoring L-Platforms
Explains the monitoring of L-Platforms.

Chapter 15 Accounting
Explains charging.

Chapter 16 Monitoring Logs
Explains the monitoring of logs.

Part 5 High Availability and Disaster Recovery

Chapter 17 High Availability of Managed Resources
Explains failover.

Chapter 18 Disaster Recovery
Explains the Disaster Recovery function for L-Servers.

Appendix A Notes on Operating ServerView Resource Orchestrator
Gives important reminders for the operation of Resource Orchestrator.

Appendix B Metering Log
Explains metering logs.

Glossary
Explains the terms used in this manual. Please refer to it when necessary.
Notational Conventions
The notation in this manual conforms to the following conventions.
- When the functions of Resource Orchestrator differ depending on the basic software (OS) in use, this is indicated as follows:
[Windows Manager] Sections related to Windows manager
[Linux Manager] Sections related to Linux manager
[Windows] Sections related to Windows (When not using Hyper-V)
[Linux] Sections related to Linux
[Solaris] Sections related to Solaris or Solaris Containers
[VMware] Sections related to VMware
[Hyper-V] Sections related to Hyper-V
[Xen] Sections related to RHEL5-Xen
[KVM] Sections related to RHEL-KVM
[Solaris Containers] Sections related to Solaris containers
[Oracle VM] Sections related to Oracle VM
[Physical Servers] Sections related to physical servers
[VM host] Sections related to Windows Server 2008 with VMware or Hyper-V enabled
- Unless specified otherwise, the blade servers mentioned in this manual refer to PRIMERGY BX servers.
- Oracle Solaris may also be indicated as Solaris, Solaris Operating System, or Solaris OS.
- References and character strings or values requiring emphasis are indicated using double quotes ( " ).
- Window names, dialog names, menu names, and tab names are shown enclosed by brackets ( [ ] ).
- Button names are shown enclosed by angle brackets (< >) or square brackets ([ ]).
- The order of selecting menus is indicated using [ ]-[ ].
- Text to be entered by the user is indicated using bold text.
- Variables are indicated using italic text and underscores.
- The ellipses ("...") in menu names, indicating settings and operation window startup, are not shown.
- The ">" used in Windows is included in usage examples. When using Linux, read ">" as meaning "#".
- The URLs in this manual were correct when the manual was written.
Menus in the ROR console
Operations on the ROR console can be performed using either the menu bar or pop-up menus.
By convention, procedures described in this manual only refer to pop-up menus.
Regarding Installation Folder Paths
The installation folder path may be given as C:\Fujitsu\ROR in this manual.
Replace it as shown below.
When using Windows 64-bit (x64): C:\Program Files (x86)\Resource Orchestrator
When using Windows 32-bit (x86): C:\Program Files\Resource Orchestrator
Abbreviations
The following abbreviations are used in this manual:
Windows
- Microsoft(R) Windows Server(R) 2008 Standard
- Microsoft(R) Windows Server(R) 2008 Enterprise
- Microsoft(R) Windows Server(R) 2008 R2 Standard
- Microsoft(R) Windows Server(R) 2008 R2 Enterprise
- Microsoft(R) Windows Server(R) 2008 R2 Datacenter
- Microsoft(R) Windows Server(R) 2003 R2, Standard Edition
- Microsoft(R) Windows Server(R) 2003 R2, Enterprise Edition
- Microsoft(R) Windows Server(R) 2003 R2, Standard x64 Edition
- Microsoft(R) Windows Server(R) 2003 R2, Enterprise x64 Edition
- Windows(R) 7 Professional
- Windows(R) 7 Ultimate
- Windows Vista(R) Business
- Windows Vista(R) Enterprise
- Windows Vista(R) Ultimate
- Microsoft(R) Windows(R) XP Professional operating system

Windows Server 2008
- Microsoft(R) Windows Server(R) 2008 Standard
- Microsoft(R) Windows Server(R) 2008 Enterprise
- Microsoft(R) Windows Server(R) 2008 R2 Standard
- Microsoft(R) Windows Server(R) 2008 R2 Enterprise
- Microsoft(R) Windows Server(R) 2008 R2 Datacenter

Windows 2008 x86 Edition
- Microsoft(R) Windows Server(R) 2008 Standard (x86)
- Microsoft(R) Windows Server(R) 2008 Enterprise (x86)

Windows 2008 x64 Edition
- Microsoft(R) Windows Server(R) 2008 Standard (x64)
- Microsoft(R) Windows Server(R) 2008 Enterprise (x64)

Windows Server 2003
- Microsoft(R) Windows Server(R) 2003 R2, Standard Edition
- Microsoft(R) Windows Server(R) 2003 R2, Enterprise Edition
- Microsoft(R) Windows Server(R) 2003 R2, Standard x64 Edition
- Microsoft(R) Windows Server(R) 2003 R2, Enterprise x64 Edition

Windows 2003 x64 Edition
- Microsoft(R) Windows Server(R) 2003 R2, Standard x64 Edition
- Microsoft(R) Windows Server(R) 2003 R2, Enterprise x64 Edition

Windows 7
- Windows(R) 7 Professional
- Windows(R) 7 Ultimate

Windows Vista
- Windows Vista(R) Business
- Windows Vista(R) Enterprise
- Windows Vista(R) Ultimate

Windows XP
- Microsoft(R) Windows(R) XP Professional operating system

Linux
- Red Hat(R) Enterprise Linux(R) 5 (for x86)
- Red Hat(R) Enterprise Linux(R) 5 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.1 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.1 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.2 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.2 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.3 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.3 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.4 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.4 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.5 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.5 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.6 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.6 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.7 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.7 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.8 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.8 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 6.2 (for x86)
- Red Hat(R) Enterprise Linux(R) 6.2 (for Intel64)
- SUSE(R) Linux Enterprise Server 11 for x86
- SUSE(R) Linux Enterprise Server 11 for EM64T

Red Hat Enterprise Linux
- Red Hat(R) Enterprise Linux(R) 5 (for x86)
- Red Hat(R) Enterprise Linux(R) 5 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.1 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.1 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.2 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.2 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.3 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.3 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.4 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.4 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.5 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.5 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.6 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.6 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.7 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.7 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.8 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.8 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 6.2 (for x86)
- Red Hat(R) Enterprise Linux(R) 6.2 (for Intel64)

Red Hat Enterprise Linux 5
- Red Hat(R) Enterprise Linux(R) 5 (for x86)
- Red Hat(R) Enterprise Linux(R) 5 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.1 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.1 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.2 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.2 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.3 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.3 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.4 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.4 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.5 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.5 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.6 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.6 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.7 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.7 (for Intel64)
- Red Hat(R) Enterprise Linux(R) 5.8 (for x86)
- Red Hat(R) Enterprise Linux(R) 5.8 (for Intel64)

Red Hat Enterprise Linux 6
- Red Hat(R) Enterprise Linux(R) 6.2 (for x86)
- Red Hat(R) Enterprise Linux(R) 6.2 (for Intel64)

RHEL5-Xen
- Red Hat(R) Enterprise Linux(R) 5.4 (for x86) Linux Virtual Machine Function
- Red Hat(R) Enterprise Linux(R) 5.4 (for Intel64) Linux Virtual Machine Function

RHEL-KVM
- Red Hat(R) Enterprise Linux(R) 6.2 (for x86) Virtual Machine Function
- Red Hat(R) Enterprise Linux(R) 6.2 (for Intel64) Virtual Machine Function

DOS
- Microsoft(R) MS-DOS(R) operating system, DR DOS(R)

SUSE Linux Enterprise Server
- SUSE(R) Linux Enterprise Server 11 for x86
- SUSE(R) Linux Enterprise Server 11 for EM64T

Oracle VM
- Oracle VM Server for x86

ESC
- ETERNUS SF Storage Cruiser

GLS
- PRIMECLUSTER GLS

Navisphere
- EMC Navisphere Manager

Solutions Enabler
- EMC Solutions Enabler

MSFC
- Microsoft Failover Cluster

Solaris
- Solaris(TM) 10 Operating System

SCVMM
- System Center Virtual Machine Manager 2008 R2
- System Center 2012 Virtual Machine Manager

VMware
- VMware vSphere(R) 4
- VMware vSphere(R) 4.1
- VMware vSphere(R) 5

VMware ESX
- VMware(R) ESX(R)

VMware ESX 4
- VMware(R) ESX(R) 4

VMware ESXi
- VMware(R) ESXi(TM)

VMware ESXi 5.0
- VMware(R) ESXi(TM) 5.0

VMware Tools
- VMware(R) Tools

VMware vSphere 4.0
- VMware vSphere(R) 4.0

VMware vSphere 4.1
- VMware vSphere(R) 4.1

VMware vSphere 5
- VMware vSphere(R) 5

VMware vSphere Client
- VMware vSphere(R) Client

VMware vCenter Server
- VMware(R) vCenter(TM) Server

VMware vClient
- VMware(R) vClient(TM)

VMware FT
- VMware(R) Fault Tolerance

VMware DRS
- VMware(R) Distributed Resource Scheduler

VMware DPM
- VMware(R) Distributed Power Management

VMware vDS
- VMware(R) vNetwork Distributed Switch

VMware Storage VMotion
- VMware(R) Storage VMotion

VIOM
- ServerView Virtual-IO Manager

BladeLogic
- BMC BladeLogic Server Automation

ServerView Agent
- ServerView SNMP Agents for MS Windows (32bit-64bit)
- ServerView Agents Linux
- ServerView Agents VMware for VMware ESX Server

RCVE
- ServerView Resource Coordinator VE

ROR
- ServerView Resource Orchestrator

ROR VE
- ServerView Resource Orchestrator Virtual Edition

ROR CE
- ServerView Resource Orchestrator Cloud Edition

Resource Coordinator
- Systemwalker Resource Coordinator
- Systemwalker Resource Coordinator Virtual server Edition
Export Administration Regulation Declaration
Documents produced by FUJITSU may contain technology controlled under the Foreign Exchange and Foreign Trade Control Law of
Japan. Documents which contain such technology should not be exported from Japan or transferred to non-residents of Japan without first
obtaining authorization from the Ministry of Economy, Trade and Industry of Japan in accordance with the above law.
Trademark Information
- BMC, BMC Software, the BMC logos, and other BMC marks are trademarks or registered trademarks of BMC Software, Inc. in the
U.S. and/or certain other countries.
- EMC, EMC2, CLARiiON, Symmetrix, and Navisphere are trademarks or registered trademarks of EMC Corporation.
- HP is a registered trademark of Hewlett-Packard Company.
- Linux is a trademark or registered trademark of Linus Torvalds in the United States and other countries.
- Microsoft, Windows, MS, MS-DOS, Windows XP, Windows Server, Windows Vista, Windows 7, Excel, Active Directory, and
Internet Explorer are either registered trademarks or trademarks of Microsoft Corporation in the United States and other countries.
- NetApp is a registered trademark of Network Appliance, Inc. in the US and other countries. Data ONTAP, Network Appliance, and
Snapshot are trademarks of Network Appliance, Inc. in the US and other countries.
- Oracle and Java are registered trademarks of Oracle and/or its affiliates in the United States and other countries.
- Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
- Red Hat, RPM and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc. in the United
States and other countries.
- SUSE is a registered trademark of SUSE LINUX AG, a Novell business.
- VMware, the VMware "boxes" logo and design, Virtual SMP, and VMotion are registered trademarks or trademarks of VMware, Inc.
in the United States and/or other jurisdictions.
- ServerView and Systemwalker are registered trademarks of FUJITSU LIMITED.
- All other brand and product names are trademarks or registered trademarks of their respective owners.
Notices
- The contents of this manual shall not be reproduced without express written permission from FUJITSU LIMITED.
- The contents of this manual are subject to change without notice.
Month/Year Issued, Edition / Manual Code
November 2011, First Edition   J2X1-7611-01ENZ0(00)
December 2011, 1.1             J2X1-7611-01ENZ0(01)
January 2012, 1.2              J2X1-7611-01ENZ0(02)
February 2012, 1.3             J2X1-7611-01ENZ0(03)
March 2012, 1.4                J2X1-7611-01ENZ0(04)
April 2012, 1.5                J2X1-7611-01ENZ0(05)
July 2012, 2                   J2X1-7611-02ENZ0(00)
October 2012, Third Edition    J2X1-7611-03ENZ0(00)
Copyright FUJITSU LIMITED 2010-2012
Contents
Part 1 Overview........................................................................................................................................................................1
Chapter 1 Overview of Operations, Maintenance, and Monitoring...........................................................................................2
1.1 Operation, Maintenance, and Monitoring by Infrastructure Administrators.......................................................................................3
1.2 Operation, Maintenance, and Monitoring by Tenant Administrators..................................................................................................4
1.3 Operation, Maintenance, and Monitoring by Tenant Users.................................................................................................................4
Part 2 Operation.......................................................................................................................................................................5
Chapter 2 Starting and Stopping Managers and Agents..........................................................................................................6
2.1 Starting and Stopping the Manager.....................................................................................................................................................6
2.2 Starting and Stopping an Agent...........................................................................................................................................................8
Chapter 3 Managing User Accounts.......................................................................................................................................11
Chapter 4 Managing Tenants.................................................................................................................................................12
Chapter 5 Managing Templates.............................................................................................................................................13
Chapter 6 Managing Resources and Resource Pools...........................................................................................................14
6.1 Managing Resource Pools.................................................................................................................................................................14
6.2 Managing Resources..........................................................................................................................................................................14
6.3 Managing L-Servers..........................................................................................................................................................................14
Chapter 7 Management of L-Platform....................................................................................................................................15
7.1 Review for L-Platform Usage Applications......................................................................................................................................15
7.2 Administration of L-Platform............................................................................................................................................................15
7.2.1 Deleting Unnecessary Data.........................................................................................................................................................15
7.2.2 Updating the Cloning Image.......................................................................................................................................................15
7.2.3 Importing to L-Platform..............................................................................................................................................................15
7.2.3.1 Network Information Settings for Converted L-Servers.....................................................................................................16
7.2.3.2 Importing L-Servers.............................................................................................................................................................16
7.2.3.3 Releasing L-Servers.............................................................................................................................................................16
7.2.4 Setting OS with Deployment on RHEL-KVM...........................................................................................................................16
7.2.5 Startup Priority Level Settings ...................................................................................................................................................17
7.2.6 Action to Take when an Error has Occurred..............................................................................................................................17
7.2.7 Setting Alive Monitoring............................................................................................................................................................18
7.2.8 Redundancy Settings...................................................................................................................................................................19
7.2.9 Automatic Server Release Settings.............................................................................................................................................19
7.2.10 Definition VM Specific Information Definition File................................................................................................................20
7.2.11 Changing Server Specifications on a VM Host........................................................................................................................20
Chapter 8 Changing Settings.................................................................................................................................................21
8.1 Registering and Deleting Application Process Assessors..................................................................................................................21
8.1.1 Registering an Application Process Assessor.............................................................................................................................21
8.1.2 Deleting an Application Process Assessor..................................................................................................................................21
8.1.2.1 Deleting an Infrastructure Administrator/dual-role Administrator from IflowUsers Group...............................................21
8.2 Settings for Sending Email................................................................................................................................................................22
8.3 Settings for Port Number of Management Servers............................................................................................................................23
8.4 Editing Information in the Home Window........................................................................................................................................25
8.5 Settings for L-Platform Management................................................................................................................................................26
8.5.1 Settings for Permissions to Change L-Platform Templates........................................................................................................26
8.5.2 Subnet Settings at Segment Editing............................................................................................................................................27
8.5.3 Settings for the Simplified Reconfiguration Function................................................................................................................27
8.5.4 Distribution Ratio Settings..........................................................................................................................................................28
8.5.5 Application Process Settings......................................................................................................................................................29
8.5.5.1 How to Modify the Application Process Settings................................................................................................................29
8.5.5.2 How to Modify Application Process to be Used.................................................................................................................29
8.5.6 Editing the Environment Setup File for the L-Platform API......................................................................................................29
8.5.7 Edit the License Agreement........................................................................................................................................................29
8.5.8 Settings when RHEL5-Xen is used............................................................................................................................................29
8.5.9 Default Password Setting for Sent Emails..................................................................................................................................29
8.5.10 Settings for the Maximum Number of Connections for the L-Platform Template..................................................................30
8.5.11 Customizing the User Rights for L-Platform Operations.........................................................................................................30
8.6 Settings for Tenant Management and Account Management............................................................................................................30
8.6.1 Settings for Tenant Management and Account Management.....................................................................................................31
8.6.2 Editing the User Agreement when Registering a User...............................................................................................................34
8.7 Accounting Settings...........................................................................................................................................................................34
8.7.1 Display Function Settings for Estimated Price...........................................................................................................................34
8.7.2 Currency Information Settings....................................................................................................................................................36
8.7.3 Metering Log Settings................................................................................................................................................................36
8.7.4 Usage Charge Calculator Settings..............................................................................................................................................38
8.8 System Condition Server List Settings..............................................................................................................................................39
8.9 Settings for Event Log Output for CMDB Agent..............................................................................................................................41
Part 3 Maintenance.................................................................................................................42
Chapter 9 Hardware Maintenance..........................................................................................................................................43
9.1 Overview............................................................................................................................................................................................43
9.2 Blade Server Maintenance.................................................................................................................................................................47
9.2.1 Maintenance LED.......................................................................................................................................................................47
9.2.2 Reconfiguration of Hardware Properties....................................................................................................................................48
9.2.3 Replacing Servers.......................................................................................................................................................................49
9.2.4 Replacing Non-Server Hardware................................................................................................................................................51
9.3 Maintenance for Servers Other Than Blade Servers.........................................................................................................................52
9.3.1 Reconfiguration of Hardware Properties....................................................................................................................................52
9.3.2 Replacing Servers.......................................................................................................................................................................54
9.3.3 Replacing and Adding Server Components................................................................................................................................58
9.3.4 Replacing Non-server Hardware................................................................................................................................................60
9.4 For Servers not Using Server Management Software.......................................................................................................................60
9.5 Network Device Maintenance...........................................................................................................................................................61
9.5.1 Replacement Procedure of Network Devices.............................................................................................................................61
9.5.1.1 When the Device Targeted for Replacement has Broken Down.........................................61
9.5.1.2 When the Device Targeted for Restoration is Undamaged...................................62
9.5.2 Regular Maintenance Procedure of Network Devices................................................................................................................65
9.5.3 Procedure for Addition of Network Devices..............................................................................................................................66
9.5.3.1 Adding L2 Switches to Handle Insufficient Numbers of Ports when Adding Servers........................................................66
9.5.3.2 Adding Firewalls, Server Load Balancers, and L2 Switches for Additional Tenants.........................................................68
9.5.4 Procedure for Addition or Modification of Connection Destinations of Network Devices.......................................................70
9.6 Storage Device Maintenance.............................................................................................................................................................71
9.7 Power Monitoring Device (PDU or UPS) Maintenance....................................................................................................................71
Chapter 10 Backup and Restoration...................................................................................................................................... 73
10.1 Backup and Restoration of Admin Servers......................................................................................................................................73
10.1.1 Mechanism of Backup and Restoration....................................................................................................................................74
10.1.2 Offline Backup of the Admin Server........................................................................................................................................80
10.1.2.1 Stopping the Manager........................................................................................................................................................81
10.1.2.2 Back up the Resources of this Product..............................................................................................................................81
10.1.2.3 Starting the Manager..........................................................................................................................................................81
10.1.3 Online Backup of the Admin Server.........................................................................................................................................82
10.1.3.1 Items to be Determined Before Periodic Execution..........................................................................................................84
10.1.3.2 Settings for Periodic Execution of Backup........................................................................................................................84
10.1.4 Restoring the Admin Server.....................................................................................................................................................87
10.1.4.1 Stopping the Manager........................................................................................................................................................87
10.1.4.2 Restoring the Resources of This Product...........................................................................................................................87
10.1.4.3 Starting the Manager..........................................................................................................................................................88
10.1.4.4 Disabling L-Platform Applications....................................................................................................................................88
10.1.4.5 Updating the configuration information in the operational status information.................................................................88
10.1.5 Online Backup Settings for Metering.......................................................................................................................................88
10.2 Backup and Restoration of Network Devices..................................................................................................................................91
10.2.1 Mechanism of Backup and Restoration....................................................................................................................................92
10.2.2 Backup of Network Devices.....................................................................................................................................................92
10.2.3 Restoration of Network Devices...............................................................................................................................................93
Part 4 Monitoring....................................................................................................................................................................94
Chapter 11 Monitoring Resources..........................................................................................................................................95
11.1 Overview..........................................................................................................................................................................................95
11.2 Resource Status................................................................................................................................................................................96
11.3 Addressing Resource Failures.........................................................................................................................................................99
11.4 Monitoring Networks.......................................................................................................................................................................99
11.4.1 Identification of Error Locations............................................................................................................................................100
11.4.1.1 When Notified of an Error by a Tenant Administrator or Tenant User...........................................................................100
11.4.1.2 When Changing State is Detected during Status Confirmation Using the ROR Console...............................................101
11.4.2 Firewall Status Confirmation..................................................................................................................................................102
11.4.2.1 When an L-Platform Using a Firewall is Identified........................................................................................................102
11.4.2.2 When a Firewall Changing State is Detected during Status Confirmation Using the ROR Console..............................103
11.4.3 Server Load Balancer Status Confirmation............................................................................................................................104
11.4.3.1 When an L-Platform Using a Server Load Balancer is Identified...................................................................................104
11.4.4 L2 Switch Status Confirmation...............................................................................................................................................106
11.4.5 Status Confirmation of Other Network Devices.....................................................................................................................107
11.5 Monitoring Storage........................................................................................................................................................................107
Chapter 12 Collecting Power Consumption Data and Displaying Graphs............................................................................109
12.1 Overview........................................................................................................................................................................................109
12.2 Exporting Power Consumption Data.............................................................................................................................................109
12.3 Power Consumption Data File (CSV Format)...............................................................................................................................109
12.4 Displaying Power Consumption Data Graphs...............................................................................................................................110
Chapter 13 Monitoring Resource Pools (Dashboard)...........................................................................................................111
Chapter 14 Monitoring L-Platforms.......................................................................................................................................112
Chapter 15 Accounting.........................................................................................................................................................113
15.1 Overview........................................................................................................................................................................................113
15.2 Manage Accounting Information...................................................................................................................................................114
15.2.1 Information Maintained by Product Master............................................................................................................................115
15.2.2 Accounting Information File Format......................................................................................................................................118
15.3 Operate Accounting Information...................................................................................................................................................121
15.3.1 Register Accounting Information...........................................................................................................................................122
15.3.2 Modify Accounting Information Command...........................................................................................................................123
15.3.3 Delete Accounting Information..............................................................................................................................................125
15.3.4 Reference Accounting Information........................................................................................................................................126
15.4 Calculation of Usage charges........................................................................................................................................................126
15.4.1 Overview of Usage charge Calculation..................................................................................................................................127
15.4.2 Resource Usage Times............................................................................................................................................................127
15.4.3 How to Charge for Resources.................................................................................................................................................127
15.4.4 Resource Usage Amounts and Times.....................................................................................................................................128
15.4.5 Example of Usage charge Calculation....................................................................................................................................128
15.4.6 Sending Usage charges...........................................................................................................................................................131
15.4.6.1 Usage Charge List File....................................................................................................................................................132
15.4.6.2 Usage charge Detail File..................................................................................................................................................133
Chapter 16 Monitoring Logs.................................................................................................................................................135
16.1 Operation Logs..............................................................................................................................................................................135
16.1.1 Overview.................................................................................................................................................................................135
16.1.2 Usage Method.........................................................................................................................................................................138
16.1.3 Retention.................................................................................................................................................................................139
16.1.4 Scope of Operations Recorded in Operation Logs.................................................................................................................140
16.2 Audit Logs.....................................................................................................................................................................................140
16.2.1 Configuration Management Audit Log...................................................................................................................................140
16.2.2 Audit Logs of Output by the Tenant Management, Accounting, Access Control and System Condition.............................146
16.2.3 Application Process Audit Log...............................................................................................................................................152
16.3 Operation Logs (Activity)..............................................................................................................................................................154
16.3.1 Operation Logs for Accounting..............................................................................................................................................154
16.4 Investigation Logs..........................................................................................................................................................................155
16.4.1 Investigation Logs on Admin Servers....................................................................................................................................155
Part 5 High Availability and Disaster Recovery....................................................................................................................158
Chapter 17 High Availability of Managed Resources...........................................................................................................159
17.1 High Availability of Managed Resources......................................................................................................................................159
17.1.1 High Availability of L-Servers...............................................................................................................................................159
17.1.2 Blade Chassis High Availability.............................................................................................................................................162
17.1.3 High Availability for Storage Chassis....................................................................................................................................164
17.2 High Availability for Admin Servers.............................................................................................................................................169
Chapter 18 Disaster Recovery.............................................................................................................................................173
Appendix A Notes on Operating ServerView Resource Orchestrator..................................................................................174
Appendix B Metering Log.....................................................................................................................................................178
B.1 Types of Metering Logs..................................................................................................................................................................178
B.2 Output Contents of Metering Logs.................................................................................................................................................179
B.3 Formats of Metering Log Files.......................................................................................................................................................184
B.4 Deleting Metering Logs..................................................................................................................................................................187
Glossary...............................................................................................................................................................................188
Part 1 Overview
Chapter 1 Overview of Operations, Maintenance, and Monitoring...................................................................2
Chapter 1 Overview of Operations, Maintenance, and Monitoring
This chapter provides an overview of operation, maintenance, and monitoring of Resource Orchestrator.
For additional information on the operation, maintenance, and monitoring of this product, refer to the configuration information in the
"Setup Guide CE".
Flow of Service Provision Using Applications
The flow of service provision using applications in an environment where Resource Orchestrator has been installed is as shown below.
Figure 1.1 Flow of Service Provision Using Applications
* Note: Necessary when using firewalls (Firewall), server load balancers (SLB), or L2 switches.
1. Application for use
The tenant user applies to use an L-Platform.
For details, refer to "5.2 Subscribe to an L-Platform" in the "User's Guide for Tenant Users CE".
2. Approval
The tenant administrator approves the application by a tenant user to use an L-Platform.
For details, refer to "9.3 Approving an Application" in the "User's Guide for Tenant Administrators CE".
3. Assessment
The infrastructure administrator assesses the content of the application to use an L-Platform from the tenant administrator or tenant
user.
For details, refer to "10.2 Assessing an Application" in the "User's Guide for Infrastructure Administrators CE".
4. Notification of settings for firewalls and server load balancers
When using a firewall or a server load balancer, the infrastructure administrator prepares a script for configuring the firewall or
server load balancer.
Based on the information provided by the infrastructure administrator, the tenant administrator notifies the tenant user of the
configuration information for the firewall or server load balancer.
5. Configure firewalls
When configuring an application on an L-Server that has been deployed on the public LAN, the tenant user needs to create a rule
that enables access to that L-Server from the public LAN.
6. Configure applications
The tenant user performs the installation and environment settings necessary for the application to be provided as a service by the
L-Server.
7. Confirm communication with applications
The tenant user checks that there are no problems with the applications installed on the L-Server, and that the L-Server can be
accessed from the public LAN.
When a firewall has not been configured, configure one.
If there are no problems in the communication check, proceed to the next step.
If there are problems, resolve them and check communication again.
8. Configure server load balancers
When an L-Platform has a server load balancer deployed, the tenant user performs configuration of the server load balancer.
9. Check communication with server load balancers
When an L-Platform has a server load balancer deployed, test that the settings of the server load balancer are correct.
It is necessary to configure the rules for the firewall so that communication using the virtual IP address configured for the server
load balancer is possible.
If there are no problems in the communication check, proceed to the next step.
If there are problems, resolve them and check communication again.
10. Configure firewalls
Configure address translation and firewall rules, and then test that communication with the L-Server is possible.
If the test shows no problems, configuration of the L-Platform operation environment is complete.
If there are problems, resolve them and check communication again.
1.1 Operation, Maintenance, and Monitoring by Infrastructure
Administrators
This section explains operation, maintenance, and monitoring by infrastructure administrators when using Resource Orchestrator.
Infrastructure Administrator (infra_admin) Operations
Refer to the "User's Guide for Infrastructure Administrators (Resource Management) CE" for the [Resource] tab operations.
Refer to the "User's Guide for Infrastructure Administrators CE" for other operations.
Infrastructure Administrator (infra_admin) Management Operations
The operations that infrastructure administrators (infra_admin) can perform are as follows:
- Management of resources and resource pools
- Registration, modification, and deletion of resources
- Creation, deletion, modification of global pools and tenant local pools
- Review and confirmation of application status and L-Platform usage applications
- Management of tenants
- Creation, modification, and deletion of tenants
- Creation of tenant administrators
- Creation, modification, and deletion of user accounts
- Management of templates
- Creation, modification, and deletion of L-Platform templates (*)
- Creation, modification, and deletion of L-Server templates
* Note: To check subscription requests submitted by using a created L-Platform template, use a dual-role administrator account.
For more details, refer to "Appendix B Applying (Subscribe) for L-Platform Usage by Dual-Role Administrators" in "User's Guide
for Infrastructure Administrators".
Infrastructure Administrator (infra_admin) Maintenance Operations
Maintenance operations that infrastructure administrators (infra_admin) can perform are as follows:
- Hardware maintenance
- System maintenance (*2)
- Backup and restoration of admin servers (*2)
*2: OS administrative privileges are necessary for OS maintenance and backup and restore of admin servers.
Infrastructure Administrator (infra_admin) Monitoring Operations
Monitoring operations that infrastructure administrators (infra_admin) can perform are as follows:
- Monitoring of resource pools using dashboard
- Monitoring of L-Platform operation statuses
- Monitoring of resource capacity (servers, storage and network)
- Monitoring of operation logs and audit logs
1.2 Operation, Maintenance, and Monitoring by Tenant
Administrators
For details on operations, maintenance and monitoring by tenant administrators, refer to the "User's Guide for Tenant Administrators CE".
1.3 Operation, Maintenance, and Monitoring by Tenant Users
For details on operations, maintenance and monitoring by tenant users, refer to the "User's Guide for Tenant Users CE".
Part 2 Operation
Chapter 2 Starting and Stopping Managers and Agents..................................................................................6
Chapter 3 Managing User Accounts...............................................................................................................11
Chapter 4 Managing Tenants.........................................................................................................................12
Chapter 5 Managing Templates.....................................................................................................................13
Chapter 6 Managing Resources and Resource Pools...................................................................................14
Chapter 7 Management of L-Platform............................................................................................................15
Chapter 8 Changing Settings.........................................................................................................................21
Chapter 2 Starting and Stopping Managers and Agents
This chapter explains how to manually start or stop managers and agents.
To use Resource Orchestrator, both the manager and agents must be running.
The manager and agent services are configured to start automatically upon startup of their respective servers (admin server, managed
server). Normally, there should be no need to manually start or stop either the manager or agents. To start or stop a manager or an agent
intentionally, refer to "2.1 Starting and Stopping the Manager" and "2.2 Starting and Stopping an Agent".
Note
When using the HBA address rename function, ensure that the manager is started before starting any managed servers. The power on
procedure should be managed as follows: first, start the admin server together with any storage devices, and start the managed servers 10
minutes later.
Managed servers will not boot up properly if they are started before the manager. Make sure that the manager is running before starting
managed servers.
Additionally, when using the HBA address rename function, the HBA address rename setup service should be started on a dedicated server
(HBA address rename server) and left running continuously. For details on starting, stopping, and confirming the state of the HBA address
rename setup service, refer to "Chapter 10 Settings for the HBA address rename Setup Service" in the "Setup Guide CE".
2.1 Starting and Stopping the Manager
The Resource Orchestrator manager starts automatically on the admin server.
This section explains how to manually start or stop the manager and how to check its running state.
[Windows Manager]
The manager is made up of the following two groups of Windows services:
- Manager Services
Resource Coordinator Manager
Resource Coordinator Task Manager
Resource Coordinator Web Server (Apache)
Resource Orchestrator Sub Web Server (Mongrel)
Resource Orchestrator Sub Web Server (Mongrel2)
Resource Coordinator Sub Web Server (Mongrel3)
Resource Coordinator Sub Web Server (Mongrel4)
Resource Coordinator Sub Web Server (Mongrel5)
Resource Coordinator DB Server (PostgreSQL)
ServerView Resource Orchestrator Service Catalog Manager DB Service(Dashboard)
ServerView Resource Orchestrator Service Catalog Manager DB Service(Charging)
ServerView Resource Orchestrator Service Catalog Manager REST Service(Charging)
- Related Services
Deployment Service
TFTP Service
PXE Services
DHCP Server (*)
Systemwalker SQC DCM
Interstage BPM Analytics eRule Engine (EFServer)
Systemwalker MpJobsch9
Systemwalker MpMjes
Systemwalker MpMjes9
Systemwalker Runbook Automation DB Service
Shunsaku Conductor cmdbc
Shunsaku Sorter cmdbo01
* Note: Required when there are managed servers that belong to different subnets from the admin server.
From the Windows Control Panel, open [Administrative Tools]. Then, open the [Services] window to check the state of each service.
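As a supplementary command-line check (not part of the product itself, and shown here only as a sketch), the currently started Windows services can also be listed from a command prompt and filtered by display name. Service display names may vary slightly depending on the installed edition.

>net start | findstr "Resource" <RETURN>

Started services whose display names contain "Resource" are listed; stopped services do not appear in the output.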
Services are started and stopped using the rcxmgrctl command (start and stop subcommands).
Using this command, manager services and related services can be started or stopped at the same time.
For details on the command, refer to "5.19 rcxmgrctl" in the "Reference Guide (Command/XML) CE".
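For example, a manual stop and restart of all manager services might look like the following (an illustrative sketch; the command is assumed to be run from the manager's bin folder under the installation folder, such as Installation_folder\SVROR\Manager\bin):

>rcxmgrctl stop
>rcxmgrctl start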
To start or stop a manager in a clustered configuration, right-click the manager application shown under the failover cluster manager tree,
and select either [Bring this service or application online] or [Take this service or application offline].
[Linux Manager]
The manager is made up of the following two groups of Linux services:
- Manager Services
rcvmr
Manager services also include the following daemons.
rcxmanager
rcxtaskmgr
rcxmongrel1
rcxmongrel2
rcxmongrel3
rcxmongrel4
rcxmongrel5
rcxhttpd
- Database (PostgreSQL)
rcxdb
- Related Services
scwdepsvd
scwpxesvd
scwtftpd
dhcpd (*)
* Note: Required when there are managed servers that belong to different subnets from the admin server.
The status of each of those services can be confirmed from the service command, as shown below.
# service rcvmr status <RETURN>
# service scwdepsvd status <RETURN>
# service scwpxesvd status <RETURN>
# service scwtftpd status <RETURN>
# service dhcpd status <RETURN>
Services are started and stopped using the rcxmgrctl command (start and stop subcommands).
Using this command, manager services and related services can be started or stopped at the same time.
For details on the command, refer to "5.19 rcxmgrctl" in the "Reference Guide (Command/XML) CE".
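For example, a manual stop and restart of all manager services on a Linux manager might look like the following (an illustrative sketch; /opt/FJSVrcvmr/bin is an assumed installation path):

# /opt/FJSVrcvmr/bin/rcxmgrctl stop <RETURN>
# /opt/FJSVrcvmr/bin/rcxmgrctl start <RETURN>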
To start or stop a manager in a clustered configuration, use the cluster administration view (Cluster Admin).
For details, refer to the PRIMECLUSTER manual.
Note
- When using ServerView Deployment Manager on an admin LAN, all services related to Resource Orchestrator are automatically disabled. To prevent conflicts with ServerView Deployment Manager, do not start these services. For details, refer to
"Appendix B Co-Existence with ServerView Deployment Manager" in the "Setup Guide VE".
- Resource Orchestrator cannot be operated if any of the manager services are stopped. Ensure that all services are running when
Resource Orchestrator is running.
- If the manager is unable to communicate on the admin LAN when started up (because of LAN cable disconnections or any other
causes), PXE Services may not start automatically. If PXE Services are stopped, investigate the network interface used for the admin
LAN and confirm whether it can communicate with other nodes on the admin LAN.
If the manager cannot communicate with admin LAN nodes, restore the admin LAN itself and restart the manager.
- In Basic mode, the following manager services are started.
In Basic mode, the procedures to start and stop the services and to check their statuses are the same as those in standard mode.
[Windows Manager]
- Manager Services
Resource Coordinator Manager
Resource Coordinator Task Manager
Resource Coordinator Web Server (Apache)
Resource Orchestrator Sub Web Server (Mongrel)
Resource Orchestrator Sub Web Server (Mongrel2)
Resource Coordinator DB Server (PostgreSQL)
[Linux Manager]
- Manager Services
rcvmr
Manager services also include the following daemons.
rcxmanager
rcxtaskmgr
rcxmongrel1
rcxmongrel2
rcxhttpd
2.2 Starting and Stopping an Agent
The Resource Orchestrator agent starts automatically on managed servers.
This section explains how to manually start or stop an agent and how to check its running state.
Note
To prevent conflicts, related services are uninstalled from the Resource Orchestrator agent when using ServerView Deployment Manager
on the admin LAN. In such cases, there is no need to start or stop those services when starting or stopping the Resource Orchestrator agent.
[Windows] [Hyper-V]
The agent consists of the following two Windows services:
- Agent Service
Resource Coordinator Agent
- Related Services
- Deployment Agent
- Systemwalker SQC DCM
From the Windows Control Panel, open [Administrative Tools]. Then, open the [Services] window to check the state of each service.
The following explains how to start and stop each service.
- Agent Service
Agents can be started and stopped using the start and stop subcommands of the rcxadm agtctl command.
For details of the command, refer to "5.3 rcxadm agtctl" in the "Reference Guide (Command/XML) CE".
- Related Services
From the Windows Control Panel, open [Administrative Tools]. Then, open the [Services] window to stop or start the following
service.
- Deployment Agent
- Systemwalker SQC DCM
[Linux] [VMware] [Xen] [KVM]
The agent consists of the following services.
- Agent Service
- Related Services
- Deployment Agent
For VMware vSphere 4.0 or later, Deployment Agent is not started automatically because the backup, restore, and cloning functions cannot be used. There is no need to start it.
[Linux]
- Systemwalker SQC DCM
Execute the following commands to check whether the agent is running. If these commands show that the processes for the agent and deployment services are running, the agent is running.
- Agent Service
# /bin/ps -ef | grep FJSVssagt <RETURN>
- Related Services
# /bin/ps -ef | grep scwagent <RETURN>
To check the running state of the service of Systemwalker SQC DCM, execute the following command:
# /etc/rc0.d/K00ssqcdcm <RETURN>
The following explains how to start and stop each service.
- Agent Service
Agents can be started and stopped using the start and stop subcommands of the rcxadm agtctl command.
For details of the command, refer to "5.3 rcxadm agtctl" in the "Reference Guide (Command/XML) CE".
- Related Services
Execute the following command to start or stop the collection of image files, deployment of image files, and server startup control.
Start
# /etc/init.d/scwagent start <RETURN>
# /etc/rc2.d/S99ssqcdcm start <RETURN>
Stop
# /etc/init.d/scwagent stop <RETURN>
# /etc/rc0.d/K00ssqcdcm stop <RETURN>
[Solaris]
The agent consists of the following services.
- Agent Service
Execute the following command to check whether the agent is running. If the command shows that the agent process is running, the agent is running.
# /bin/ps -ef | grep FJSVrcvat <RETURN>
The following explains how to start and stop each service.
Agents can be started and stopped using the start and stop subcommands of the rcxadm agtctl command.
For details of the command, refer to "5.3 rcxadm agtctl" in the "Reference Guide (Command/XML) CE".
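For example, a manual restart of the agent might look like the following (an illustrative sketch; run the command from the agent's command directory, which differs by platform; refer to the Reference Guide for the exact location):

# rcxadm agtctl stop <RETURN>
# rcxadm agtctl start <RETURN>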
Chapter 3 Managing User Accounts
This chapter explains the management of user accounts.
Creation, Viewing, and Modification of User Accounts
Only users that hold the role of infrastructure administrator, tenant administrator, or administrator can create user accounts.
For details on operations by infrastructure administrators, refer to "Chapter 3 Configuring Users for Infrastructure Administrators" in the
"User's Guide for Infrastructure Administrators (Resource Management) CE".
For details on operations by tenant administrators, refer to "Chapter 10 Tenant" in the "User's Guide for Tenant Administrators CE".
Viewing and Modification of Information of Logged in Users
To view and modify the information of logged in users, use [Account] on the ROR console for the operation.
For details on [Account], refer to "Chapter 13 Account" in the "User's Guide for Infrastructure Administrators CE".
Chapter 4 Managing Tenants
This chapter explains the management of tenants.
Tenant Creation
The flow of tenant creation is as follows:
1. Register Tenants
Input the tenant information and register tenants.
2. Create a Tenant Administrator
Create a tenant administrator.
3. Create a Local Pool for Tenants
The following two types of resource pools can be used:
- Local Pool
A resource pool which can be used only by a specific tenant.
Resources stored in a local pool cannot be used by users of other tenants.
- Global Pool
A common resource pool which can be used by the entire system.
For details on how to select resource pools, refer to "Chapter 6 Defining Tenants and Resource Pools" in the "Design Guide CE".
When using a local pool, create a local pool used only by the tenant registered in step 1.
4. Register Resources
Register resources in the local pool created in step 3.
For details, refer to "11.3 Creating a Tenant" in the "User's Guide for Infrastructure Administrators CE".
Tenant Operation
Use the [Tenant] tab in the ROR console for the following operations.
For details on the [Tenant] tab, refer to "Chapter 11 Tenant" in the "User's Guide for Infrastructure Administrators CE".
- Register Tenants
- Create a Tenant Administrator
- Create a Tenant Resource Pool
- Create and Delete Tenants
Chapter 5 Managing Templates
This chapter explains the management of templates.
- L-Platform Templates
An L-Platform template defines the logical configuration of ICT resources and software.
An L-Platform is created based on an L-Platform template.
Use the [Template] tab to create, modify, and delete L-Platform templates.
For details on the [Template] tab, refer to "Chapter 8 Template" in the "User's Guide for Infrastructure Administrators CE".
- L-Server Templates
An L-Server template is the template defining the specifications of an L-Server (number of CPUs, memory capacity, disk capacity,
and number of NICs) used for an L-Platform.
Use the [Resource] tab to create, modify, and delete L-Server templates.
For details on L-Server template operations, refer to "Chapter 15 L-Server Template Operations" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
Chapter 6 Managing Resources and Resource Pools
This chapter explains the management of resources and resource pools.
6.1 Managing Resource Pools
This section explains the management of resource pools.
The following resource pool operations are possible:
- Addition and modification of tenants and local pools
- Deletion of local pools within tenants
- Addition, modification, and deletion of global pools
When changing the global pool that can be used by tenants, perform the operation from the [Tenant] tab on the ROR console.
For details on the [Tenant] tab, refer to "Chapter 11 Tenant" in the "User's Guide for Infrastructure Administrators CE".
6.2 Managing Resources
This section explains the management of resources.
Use the [Resource] tab in the ROR console to register, change, or delete resources.
- Register Resources
Refer to "Chapter 5 Registering Resources" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".
- Change Resources
Refer to "Chapter 7 Changing Resources" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".
- Delete Resources
Refer to "Chapter 9 Deleting Resources" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".
For details on how to add network devices, refer to "9.5.3 Procedure for Addition of Network Devices", and for details on how to add or
modify connection destinations of network devices, refer to "9.5.4 Procedure for Addition or Modification of Connection Destinations of Network Devices".
6.3 Managing L-Servers
This section explains the management of L-Servers.
L-Server Operations
Refer to "Chapter 17 L-Server Operations" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".
Use of Physical Servers or Virtual Machines as L-Servers
Configured physical servers or virtual machines can be used as L-Servers.
For details, refer to "Chapter 18 Linking L-Servers with Configured Physical Servers or Virtual Machines" in the "User's Guide for
Infrastructure Administrators (Resource Management) CE".
Chapter 7 Management of L-Platform
This chapter explains the management of L-Platforms.
7.1 Review for L-Platform Usage Applications
Use the [Request] tab of the ROR console to review applications from tenant users for operations such as L-Platform usage applications,
configuration modifications, and cancellations.
For details on the [Request] tab, refer to "Chapter 10 Request" in the "User's Guide for Infrastructure Administrators CE".
7.2 Administration of L-Platform
This section explains how to perform L-Platform operations.
7.2.1 Deleting Unnecessary Data
If an L-Platform or server deployed by this product was accidentally deleted using virtualization software such as VMware, the unneeded
information about that system or server remaining in this product can be deleted.
Specifically, the status of the unneeded L-Platform information or server information is changed to "Finished with return" with the
cfmg_deletesysdata (Unnecessary Data Deletion) command so that it is no longer displayed in the L-Platform management view.
Refer to "12.3 cfmg_deletesysdata (Unnecessary Data Deletion)" in the "Reference Guide (Command/XML) CE" for details on this
command.
7.2.2 Updating the Cloning Image
In this product, the resource ID is used when managing the cloning image.
If the cloning image has been updated, the resource ID will be changed, so the image information must be updated in order to return the
resource ID back to its previous setting.
If the cloning image has been updated, use an operation from the "Template" window to update the image information.
Refer to "8.3.8 Synchronizing Image Information" in the "User's Guide for Infrastructure Administratorsfor CE" details.
Point
To use both cloning masters (the cloning master before the update and the cloning master after the update) in the L-Platform template,
updating a single cloning master will not work. Instead, these cloning masters must be collected separately.
7.2.3 Importing to L-Platform
This section explains how to import physical servers, virtual machines, and L-Servers to an L-Platform.
There are the following two ways of importing physical servers, virtual machines, and L-Servers to the L-Platform Management function:
- Convert configured physical servers or virtual machines into L-Servers, then import the converted L-Servers to the L-Platform
Follow the procedure described below to convert configured physical servers or virtual machines into L-Servers and import to the L-
Platform:
a. Convert the physical servers or virtual machines into L-Servers
b. Network Information Settings for Converted L-Servers
c. Importing the L-Server for which network information has been set
- Import L-Servers created in the ROR console into the L-Platform
7.2.3.1 Network Information Settings for Converted L-Servers
Set the network information for the L-Servers converted as described in "Use of Physical Servers or Virtual Machines as L-Servers".
Execute the rcxadm lserver attach -define command to set the network information.
Network information can only be set with the rcxadm lserver attach -define command before the L-Server is imported into the L-Platform
management function.
Also, the rcxadm lserver attach -define command can only be executed when using Solaris Containers.
If adding multiple network interface cards (NICs), execute the rcxadm lserver attach -define command the same number of times as there
are NICs to be added.
Refer to "3.6 rcxadm lserver" in the "Reference Guide (Command/XML) CE" for details.
7.2.3.2 Importing L-Servers
The Import L-Server command (cfmg_importlserver) can be used to import servers that have been deployed, or the VM guests that have
been imported using the ROR Console, to the L-Platform Management function.
Refer to "12.4 cfmg_importlserver (Import L-Server)" in the "Reference Guide (Command/XML) CE" for details on this command.
Note
- When an L-Server for infrastructure administrator is imported to an L-Platform, the operation privileges of the L-Server are transferred
to the tenant administrator or the tenant user.
When this L-Server is released from the L-Platform by the cfmg_deletelserver command, the L-Server is changed back to the one for
infrastructure administrator.
- L-Servers without network interface cards (NICs) cannot be imported.
- No initial password information will be set for an L-Server that has been imported without image information being specified. "initial
password is [.]" will be displayed on the initial password confirmation window of the system details window of the L-Platform
management window.
- An L-Server that is under a tenant cannot be imported to a different tenant.
- When importing an L-Server that is not under a tenant, switch the power off for the L-Server targeted for import.
- Do not import physical L-Servers that have VM hosts installed. Refer to "Appendix D Installing VM Hosts on Physical L-Servers"
in the "Setup Guide CE" for information on installing VM hosts on a physical L-Server.
7.2.3.3 Releasing L-Servers
L-Servers that have been imported into the L-Platform management function can be released from the L-Platform by using the L-Server
release command (cfmg_deletelserver).
Refer to "12.2 cfmg_deletelserver (Release L-Server)" in the "Reference Guide (Command/XML) CE" for more information about the
command.
7.2.4 Setting OS with Deployment on RHEL-KVM
Windows OS (excluding Microsoft(R) Windows Server(R) 2008 R2)
When the virtualization software is RHEL-KVM, the following OS configuration procedure is necessary on each deployed server with
Windows OS to enable the settings of the IP addresses, the default gateway, and the host name.
The infrastructure administrator must look up the IP addresses and the host name in the L-Platform management window and the default
gateway in the resource management window, and then connect a console to the deployed server and configure the OS manually.
Users cannot access the server until this configuration is completed.
The administrator should include a description like "The IP address needs to be set by the administrator after deployment" in the description
field of the L-Platform template, and notify users when the server becomes accessible after the configuration is complete.
Linux OS with SELinux enabled
The administrator must disable SELinux when creating an image, and should include a description like "SELinux needs to be enabled
after the deployment has been completed" in the description field of the L-Platform template.
Because Linux OS is deployed with SELinux disabled, ensure that there are procedures in place to advise the user to enable SELinux after
deployment.
7.2.5 Startup Priority Level Settings
Any server with a startup priority level set to 0 will not start up or shut down when bundled power supply operations are performed.
An information message will be output to vsys_trace_log for any server that did not start up or shut down.
If a server that did not start up or shut down actually needs to be started or stopped, refer to the information message and use the
StartLServer or StopLServer command to start or stop it individually.
Refer to "2.3.5 StartLServer (Starts a Server)" and "2.3.6 StopLServer (Stops a Server)" in the "Reference Guide (API)" for details on these
commands.
7.2.6 Action to Take when an Error has Occurred
When an error has occurred during a cancellation application by a tenant user, it may no longer be possible for the user to apply for cancellation.
In that case, confirm the system ID of the L-Platform, and use the Disable L-Platform Application command to make the cancellation
application possible again for the user.
For information on the Disable L-Platform Application command, refer to "12.12 recoverService (Disable L-Platform Application)" in
the "Reference Guide (Command/XML) CE".
When a Problem Occurs during L-Platform Operation
The flow of corrective actions when a problem occurs after a tenant user has performed the following operations on an L-Platform is shown below.
- Creation, modification, or deletion of an L-Platform
- Configuration or modification of network devices such as firewalls or server load balancers
Figure 7.1 Flow of Corrective Actions when a Problem Occurs during L-Platform Operation
1. The tenant user performs L-Platform creation, modification or deletion, or network device configuration or modification.
2. Problem Occurrence
L-Platform creation, modification or deletion, or network device configuration or modification ends abnormally.
3. Investigation Request
The tenant user requests investigation of the cause of the operation failure by the tenant administrator. When requesting investigation,
provide detailed information about the failed operation or output message.
The tenant administrator provides the information obtained from the tenant user to the infrastructure administrator, and requests
investigation of the cause of the operation failure.
4. Problem Cause Investigation
Based on the information obtained from the tenant administrator, the infrastructure administrator investigates problems with the
script configuring the network devices, the hardware, or the communication route.
5. Corrective Action
The infrastructure administrator performs the following corrective actions:
- When there are errors in the script configuring the network device, the infrastructure administrator modifies the script.
- When an error occurs on hardware or the communication route, the infrastructure administrator replaces the hardware.
6. Reporting of Investigation Results
After completing corrective action, the infrastructure administrator reports the results of investigation to the tenant administrator or
the tenant user, and requests operation of the L-Platform.
7. Operation of the L-Platform
The tenant user performs operation of the L-Platform again.
7.2.7 Setting Alive Monitoring
When using alive monitoring on an L-Platform, the L-Platform must be deployed by specifying an L-Server template in which the alive
monitoring setting is enabled.
For information on creating L-Server templates, refer to "Chapter 15 L-Server Template Operations" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
If the settings are changed after deployment, change the "Type of Server(specifications)" by reconfiguring the L-Platform.
Register different L-Server templates where heartbeat settings are enabled and disabled.
Also use L-Server template names that make it easy to distinguish whether heartbeat settings are enabled or disabled.
Example
- L-Server template name where heartbeat settings are disabled
VMware_Small
- L-Server template name where heartbeat settings are enabled
VMware_Small_Monitoring
7.2.8 Redundancy Settings
When deploying a server with redundancy settings to an L-Platform, it is necessary to deploy by specifying an L-Server template that has
redundancy settings enabled.
Refer to "Chapter 15 L-Server Template Operations" in the "User's Guide for Infrastructure Administrators (Resource Management) CE"
for information on how to create L-Server templates.
If the settings are changed after deployment, change the "Type of Server(specifications)" by reconfiguring the L-Platform.
Register different L-Server templates where redundancy settings are enabled and disabled.
Also use L-Server template names that make it easy to distinguish whether redundancy settings are enabled or disabled.
Example
- L-Server template name where redundancy settings are disabled
VMware_Small
- L-Server template name where redundancy settings are enabled
VMware_Small_HA
7.2.9 Automatic Server Release Settings
When deploying a server with automatic server release settings to an L-Platform, it is necessary to deploy by specifying an L-Server
template that has automatic server release settings enabled.
Refer to "Chapter 15 L-Server Template Operations" in the "User's Guide for Infrastructure Administrators (Resource Management) CE"
for information on how to create L-Server templates.
If the settings are changed after deployment, change the "Type of Server(specifications)" by reconfiguring the L-Platform.
Register different L-Server templates where automatic server release settings are enabled and disabled.
Also use L-Server template names that make it easy to distinguish whether automatic server release settings are enabled or disabled.
Example
- L-Server template name where automatic server release settings are disabled
VMware_Small
- L-Server template name where automatic server release settings are enabled
VMware_Small_Repurpose
7.2.10 VM Specific Information Definition File
If an overcommit value has not been set for the L-Server template selected in "type" on the Reconfiguration page of the L-Platform
subscription window, the values set in the VM specific information definition file will not be used, even if the file exists. Instead, the
following values are applied:
[VMware]
- CPU Reserved: 0.1GHz
- CPU Shares: 1000
- Memory Reserved: Memory Size
- Memory Shares: Memory Size * 10240
[Hyper-V]
- CPU Reserved: 0.1GHz
- CPU Weight: 100
- Memory RAM: Memory Size
- Memory Weight: 5000
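For example, assuming the Memory Size in the VMware formulas above is expressed in GB (an assumption made here for illustration), a VMware server with 4 GB of memory and no overcommit value in its L-Server template would receive a memory reservation of 4 GB and 4 * 10240 = 40960 memory shares.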
With L-Server templates used in L-Platforms, set the values for overcommit in the L-Server templates rather than in the VM specific
information definition file.
7.2.11 Changing Server Specifications on a VM Host
In L-Platform subscription and L-Platform configuration changes, the upper limits of the server's specifications are determined by the
maximum number of CPUs, maximum CPU frequency, and the maximum memory capacity values specified in the L-Platform template's
image information.
If the server's number of CPUs, CPU frequency, or memory capacity is to be changed from the VM host after an L-Platform has been
deployed, make sure to specify values that do not exceed the maximum values specified in the image information of the L-Platform template.
If the values specified for the number of CPUs, CPU frequency, or memory capacity exceed the maximum values specified in the image
information of the L-Platform template, those values will be changed back to the maximum values when the configuration is changed.
Point
Avoid changing the server's number of CPUs, CPU frequency, or memory capacity from the VM host where possible.
Chapter 8 Changing Settings
This chapter explains how to change settings.
8.1 Registering and Deleting Application Process Assessors
This section explains how to register and delete application process assessors.
8.1.1 Registering an Application Process Assessor
This section explains how to register an infrastructure administrator or dual-role administrator as an application process assessor.
Add all infrastructure administrators and dual-role administrators to the IflowUsers group of the directory service in order to use application
processes. Use an LDIF file to register an application process assessor on the directory server. Follow the procedure below to register an
application process assessor.
1. Create an infrastructure administrator or dual-role administrator.
2. Add the infrastructure administrator or dual-role administrator as a member of the IflowUsers group.
Note
- Infrastructure administrators and dual-role administrators who have not been registered in the "IflowUsers" group cannot
conduct assessment in application processes. Also, if infrastructure administrators and dual-role administrators not registered
in the "IflowUsers" group select the Request tab in the ROR Console, the following error message appears:
Error message : Failed to authenticate the user.
- Administrators (dual-role administrators) created during installation are not registered in the "IflowUsers" group. Add them to
the "IflowUsers" group.
- If an email address is not set, assessment request emails are not sent, and reservation notification emails are not sent when an
error occurs.
- If no infrastructure administrators or dual-role administrators are registered in the IflowUsers group, the following message is
displayed after the application is forwarded from the Forward screen when the user subscribes to the service:
PCS1002
An error occurred while processing application.
Please contact the infrastructure administrators.
Refer to "19.2.1 Registering an Application Process Assessor" in the "Setup Guide CE" for information on how to register application
process assessor.
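As an illustration, for OpenDS, an LDIF file that adds an administrator to the IflowUsers group (mirroring the deletion example in "8.1.2.1"; the DN values are placeholders that depend on your directory configuration) might look like the following:

# Add manager to IflowUsers
dn: cn=IflowUsers,ou=group,dc=fujitsu,dc=com
changetype: modify
add: member
member: cn=manager,ou=users,dc=fujitsu,dc=com

Apply the file with the ldapmodify command in the same way as shown in "8.1.2.1 Deleting an Infrastructure Administrator/Dual-role Administrator from the IflowUsers Group".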
8.1.2 Deleting an Application Process Assessor
This section explains how to delete an infrastructure administrator or dual-role administrator from the application process assessors.
8.1.2.1 Deleting an Infrastructure Administrator/Dual-role Administrator from the IflowUsers Group
Follow the procedure below to delete an infrastructure administrator or dual-role administrator from the IflowUsers group members.
For OpenDS
1. Create an LDIF file.
Edit a sample LDIF file to create the file. An example of an LDIF file is shown below.
# Delete manager from IflowUsers
dn: cn=IflowUsers,ou=group,dc=fujitsu,dc=com
changetype: modify
delete: member
member:cn=manager,ou=users,dc=fujitsu,dc=com
2. Execute the ldapmodify command.
[Windows Manager]
Specify the created LDIF file, and then execute the ldapmodify command.
"OpenDS Installation_folder\bat\ldapmodify.bat" -p <port number> -f <ldif file> -D <administrator user DN> -w <password>
An execution example is shown below.
c:\> c:\Program Files (x86)\Fujitsu\ServerView Suite\opends\bat\ldapmodify -p 1473 -D
"cn=Directory Manager" -w admin -f c:\ldif\deleteuserfromgroup.ldif
Processing MODIFY request for cn=IflowUsers,ou=group,dc=fujitsu,dc=com
MODIFY operation successful for DN cn=IflowUsers,ou=group,dc=fujitsu,dc=com
[Linux Manager]
Specify the created LDIF file, and then execute the ldapmodify command.
# "OpenDS Installation_folder/bin/ldapmodify" -p <port number> -f <ldif file> -D <administrator user DN> -w <password>
An execution example is shown below.
# /opt/fujitsu/ServerViewSuite/opends/bin/ldapmodify -p 1473 -D "cn=Directory Manager" -f /tmp/
ldif/adduser2group.ldif -w admin
Processing MODIFY request for cn=IflowUsers,ou=group,dc=fujitsu,dc=com
MODIFY operation successful for DN cn=IflowUsers,ou=group,dc=fujitsu,dc=com
Note
- In the command input line, enter the command as one line without entering any line feeds.
- For the directory service port number, administrator DN, and administrator DN password, enter the values that were set during
installation.
For Active Directory
1. From the Start menu, open [Control Panel]-[Administrative Tools]-[Active Directory Users and Computers].
2. Select the name of a domain that is managed by Active Directory.
3. Right-click "IflowUsers" of the organizational unit "Group", and select [Property].
4. Select the [Members] tab, and select the members to delete from the member list, and click the [Remove] button.
5. A confirmation dialog will be displayed. Click [Yes].
6. After returning to the property window of the group, confirm that the members have been deleted correctly,
and click the [OK] button.
8.2 Settings for Sending Email
This section explains how to change settings for sending an email.
Settings for the Email Sent from Tenant Management
Email sent from the tenant management is enabled only when the setting to perform tenant management is enabled.
When an operation such as registering a tenant or adding or changing a user has been performed, notification to that effect is sent to the
tenant administrators, tenant users, and tenant email addresses within that same tenant.
Refer to "19.1 Settings for Sending Email" in the "Setup Guide CE" for information on how to change settings for e-mail sent from tenant
administrators.
Settings for Email Sent from the L-Platform Management Window
Email sent from the L-Platform management window notifies the end or failure of processing to the tenant administrators and the tenant
users when the tenant users have used the ROR Console to perform an application to use L-Platform, an L-Platform modification, or an
application to cancel L-Platform.
Refer to "19.1 Settings for Sending Email" in the "Setup Guide CE" for information on how to change settings for email sent from the L-
Platform Management window.
Email Sent from the Usage Charge Calculator
This email is enabled when the usage charge calculator is used.
A usage charge file containing the monthly usage charges for each tenant is sent to the configured email address.
Refer to "19.1.4 Email Sent from the Usage Charge Calculator" in the "Setup Guide CE" for information on the email settings used for
the usage charge calculator.
Email Sent via an Application Process
An email will be sent via the application process when changes or requests have been made.
The following notification will be sent for an application to use L-Platform, an L-Platform modification, or an application to cancel L-
Platform from the L-Platform management window:
- Notification of acceptance of application, rejection of application, and dismissal of application to the tenant users
- Notification of request for approval and dismissal of application to the tenant administrators
- Notification of request for assessment to the infrastructure administrators
Refer to "19.1.6 Settings for Email Sent via the Application Process" in the "Setup Guide CE" for information on how to change settings
for email sent from an application process."
Settings for the Email Sent from the Dashboard
Email sent from the dashboard is enabled only when the dashboard alert function is used.
The dashboard sends notifications to the email addresses set in Customizing Email Send Settings when the global pool usage rate exceeds
the threshold value.
Refer to "19.1.7 Settings for Email Sent from the Dashboard" in the "Setup Guide CE" for information on how to configure the settings
for email sent from the dashboard alert function.
8.3 Settings for Port Number of Management Servers
This section explains how to change the port number of the Management Server as follows:
- ROR console server port number
- L-Platform management port number
Stop the manager before changing the port number. Restart the manager after changing the port number.
Refer to "2.1 Starting and Stopping the Manager" for information on how to start and stop the manager.
Note that there is no need to stop and start the manager for each port number.
Changing the Port Number of the ROR Console Server
The procedure for changing the port number is as follows.
1. Modify the portal.properties file.
Open the following file:
[Windows Manager]
Installation_folder\RCXCTMG\SecurityManagement\conf\portal.properties
[Linux Manager]
/etc/opt/FJSVctsec/conf/portal.properties
Change the port numbers specified in the following URLs. Set the same values in the port numbers:
- portalSsl.url
- authedPortal.url
- sendmail.auth.url
2. Start the Interstage Management Console.
The procedure for starting the Interstage Management Console is as follows:
[Windows Manager]
From the Start menu, select All Programs > Interstage > Application Server > Interstage Management Console.
[Linux Manager]
1. Start the Web browser.
2. Specify the URL of the Interstage Management Console.
The URL format is as follows:
(If SSL encrypted communication is not being used)
http://[Host name]:[Port number]/IsAdmin/
(If SSL encrypted communication is being used)
https://[Host name]:[Port number]/IsAdmin/
3. Login to the Interstage Management Console.
3. Change the port number.
Select System > Services > Web Server > RCXCT-ext > Web Server Settings to change the port number.
Changing the port number of L-Platform management
The procedure for changing the port number is as follows.
1. Modify the portal.properties file.
Open the following file:
[Windows Manager]
Installation_folder\RCXCTMG\SecurityManagement\conf\portal.properties
[Linux Manager]
/etc/opt/FJSVctsec/conf/portal.properties
Change the port numbers specified in the following URL:
- vsys.host
An example is shown below. The parts in italics show the information that is changed.
vsys.host = http://192.168.11.22:8013/vsys/services/VSYS/
2. Modify the managerview_config.xml file.
Open the following file.
[Windows Manager]
Installation_folder\RCXCTMG\MyPortal\config\managerview_config.xml
[Linux Manager]
/etc/opt/FJSVctmyp/config/managerview_config.xml
Modify the value of the entry tag with vsys-port as the key value.
- The entry tag with vsys-port as the key value
An example is shown below. The section in italics is the information to be modified.
<entry key="vsys-port">8013</entry>
8.4 Editing Information in the Home Window
This section explains how to edit the information that is displayed on the lower part of the home window of the ROR Console.
Point
- The information can also be used to notify tenant administrators and tenant users of who to contact.
- The messages can also be edited in the home window. Refer to "3.2 Editing the Home Messages" in the "User's Guide for Infrastructure
Administrators CE" for details.
The information is divided into information for infrastructure administrators and information for tenant administrators and tenant users, so use the following
respective text files to edit it:
For infrastructure administrators
[Windows Manager]
Installation_folder\SVROR\Manager\etc\customize_data\home_tab\home_infra_mes.txt
[Linux Manager]
/etc/opt/FJSVrcvmr/customize_data/home_tab/home_infra_mes.txt
For tenant administrators and tenant users
[Windows Manager]
Installation_folder\SVROR\Manager\etc\customize_data\home_tab\home_tenant_mes.txt
[Linux Manager]
/etc/opt/FJSVrcvmr/customize_data/home_tab/home_tenant_mes.txt
Settings
Enter the message, line by line, in the following format:
date, message
- UTF-8 must be used as the character code in the text file.
- There is no schedule format specified. If no schedule is required, use a comma at the start of the line, and then subsequently enter the
message.
- Enter a string of up to 30 characters for the schedule. Commas (,) cannot be included.
- Enter a string of up to 250 characters for the message. Commas (,) can be included.
Example of settings
2011/11/11,Maintenance is scheduled for the Kanto network on the weekend.
,Upgraded the operation management software.
8.5 Settings for L-Platform Management
This section explains how to change the settings for L-Platform management.
8.5.1 Settings for Permissions to Change L-Platform Templates
Specify whether to permit modification of the value specified in the L-Platform template when an L-Platform usage application is made
in the L-Platform Management window.
Note that if modification is not permitted, it is not possible to modify the configuration of L-Platforms that have already been deployed.
Point
Settings for Permissions to Change L-Platform Templates can be set by "Setup Wizard" on the ROR Console.
For details of "Setup Wizard", refer to "3.1 Setup Wizard" in the "User's Guide for Infrastructure Administrators CE".
Stopping the manager
Stop the manager.
Changing L-Platform templates
The procedure for changing the settings for whether or not changes to the L-Platform templates in the L-Platform management window
are to be permitted is as follows:
Open the following file.
[Windows Manager]
Installation_folder\RCXCTMG\MyPortal\config\custom_config.xml
[Linux Manager]
/etc/opt/FJSVctmyp/config/custom_config.xml
The following information must be modified:
- The entry tag with no-configuration as the key value
Modify the value of the entry tag with no-configuration as the key value. The section in italics is the information to be modified.
Specify "false" to allow the L-Platform template to be modified. Specify "true" to not allow it to be modified. The default value is "false".
<entry key="no-configuration">false</entry>
Starting the manager
Start the manager.
8.5.2 Subnet Settings at Segment Editing
It is possible to change the method for setting up the subnets that are allocated to segments when performing an application to use L-
Platform. Use the following procedure to use network resource names rather than IP addresses to select which subnets to allocate to
segments during subnet setup.
Refer to "8.3.14 L-Platform Reconfiguration" in the "User's Guide for Tenant Administrators" for details on changing the configuration.
Point
Subnet Setting at Segment Editing can be set by "Setup Wizard" on the ROR Console.
For details of "Setup Wizard", refer to "3.1 Setup Wizard" in the "User's Guide for Infrastructure Administrators CE".
1. Open the Manager View settings file in a text editor.
The Manager View settings file is stored in the following location:
[Windows Manager]
Installation_folder\RCXCTMG\MyPortal\config\managerview_config.xml
[Linux Manager]
/etc/opt/FJSVctmyp/config/managerview_config.xml
2. Add the following key and value (a setting example is shown after this procedure).
Key name: network-list-show-resource-name
Content:
- false: Uses the IP address to select a subnet. (This is the default value. This is applicable even when this key is not defined.)
- true: Uses the network resource name to select a subnet.
3. Save the file.
4. Restart the manager.
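A setting example for step 2 is shown below (using the entry format already present in managerview_config.xml; the value "true" selects subnets by network resource name):

<entry key="network-list-show-resource-name">true</entry>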
8.5.3 Settings for the Simplified Reconfiguration Function
The simplified reconfiguration function is a function that allows specification changes to be performed for a server, when a new system
is being created or when a configuration is being changed after deployment, simply by selecting a server type.
When this function is enabled, individual values cannot be changed directly.
To change the settings for this function, perform the following procedure.
Refer to "8.3.14 L-Platform Reconfiguration" in the "User's Guide for Tenant Administrators CE" for details on changing the configuration.
Point
Settings for the Simplified Reconfiguration Function can be set by "Setup Wizard" on the ROR Console.
For details of "Setup Wizard", refer to "3.1 Setup Wizard" in the "User's Guide for Infrastructure Administrators CE"
1. Open the settings file in a text editor.
The settings file is stored in the following location:
[Windows Manager]
Installation_folder\RCXCTMG\MyPortal\config\managerview_config.xml
[Linux Manager]
/etc/opt/FJSVctmyp/config/managerview_config.xml
2. Add the following key and value (a setting example is shown after this procedure):
Key name: enable-easy-reconfigure
Content:
- false: Disables the function. (This is the default value. This is applicable even when this key is not defined.)
- true: Enables the function.
3. Save the file.
4. Restart the manager.
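A setting example for step 2 is shown below (using the entry format of managerview_config.xml; "true" enables the simplified reconfiguration function):

<entry key="enable-easy-reconfigure">true</entry>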
8.5.4 Distribution Ratio Settings
The distribution ratio settings set a simple selection method for the distribution ratios of CPUs and memory that correspond to the
distribution ratio settings of VMware.
Note that the settings are enabled only if the simplified reconfiguration function has been disabled.
To change the settings, implement the following procedure:
Refer to "8.3.14 L-Platform Reconfiguration" in the "User's Guide for Tenant Administrators CE" for details on changing the configuration.
Point
Distribution Ratio Settings can be set by "Setup Wizard" on the ROR Console.
For details of "Setup Wizard", refer to "3.1 Setup Wizard" in the "User's Guide for Infrastructure Administrators CE".
1. Use the editor to open the settings file.
The settings file is stored in the following location:
[Windows Manager]
Installation_folder\RCXCTMG\MyPortal\config\managerview_config.xml
[Linux Manager]
/etc/opt/FJSVctmyp/config/managerview_config.xml
2. Add the following key and value (a setting example is shown after this procedure):
Key name: share-easy-setting
Content:
- false: Directly edits values. (This is the default value. This is applicable even when this key is not defined.)
- true: Selects from a list box the values to be set that show the distribution ratio of memory. The values to be set are as follows:

  Value              Distribution ratio (share)
  Low (500)          500
  Standard (1,000)   1,000
  High (2,000)       2,000
3. Save the file.
4. Restart the manager.
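A setting example for step 2 is shown below ("true" enables selection of the distribution ratio from a list box):

<entry key="share-easy-setting">true</entry>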
8.5.5 Application Process Settings
This section explains how to modify the application process settings.
8.5.5.1 How to Modify the Application Process Settings
This section explains how to modify the setting whether to use the application process.
Note
If the application process is being changed from "use" to "do not use" after the manager starts its operation, make sure that there are no
pending processes before changing it. If there are pending processes, finish all of them by cancelling, approving, rejecting, accepting, or
dismissing each of them.
Refer to "19.2 Application Process Settings" in the "Setup Guide CE" for information on how to change the setting for whether or not an
application process is used.
8.5.5.2 How to Modify Application Process to be Used
This section explains how to modify the application process to be used.
Note
If the application process to be used is being changed after the manager starts its operation, make sure that there are no pending processes
before changing it. If there are pending processes, finish all of them by cancelling, approving, rejecting, accepting, or dismissing each of
them.
Refer to "19.2 Application Process Settings" in the "Setup Guide CE" for information on how to change the application process to be used.
8.5.6 Editing the Environment Setup File for the L-Platform API
Refer to "19.7 Editing the Environment Setup File for the L-Platform API" in the "Setup Guide CE" for information on how to change
the environment settings for the L-Platform API.
8.5.7 Edit the License Agreement
Refer to "19.12 Edit the License Agreement" in the "Setup Guide CE" for information on how to edit the license displayed in the L-Platform
Management window.
8.5.8 Settings when RHEL5-Xen is used
Refer to "19.8 Settings when RHEL5-Xen is used" in the "Setup Guide CE" for information on the settings for when RHEL5-Xen is to
be used.
8.5.9 Default Password Setting for Sent Emails
Refer to "19.1 Settings for Sending Email" in the "Setup Guide CE" for information on how to set whether to include the deployed server's
default password in the emails sent when an L-Platform is deployed or a server is added to an L-Platform.
8.5.10 Settings for the Maximum Number of Connections for the L-Platform
Template
The maximum number of L-Servers that can be placed in an L-Platform Template and the maximum number of NICs in a segment of an
L-Platform Template can be modified.
1. Use the editor to open the settings file.
The settings file is stored in the following location:
[Windows Manager]
Installation_folder\RCXCTMG\MyPortal\config\managerview_config.xml
[Linux Manager]
/etc/opt/FJSVctmyp/config/managerview_config.xml
2. Add the following keys and values (a setting example is shown after this procedure):
Key name: maximum-number-of-connections-in-template
Content: Specify the maximum number of L-Servers that can be placed in the L-Platform Template. Without a key, the default value is 30.

Key name: maximum-number-of-connections-in-segment
Content: Specify the maximum number of NICs in the segment of the L-Platform Template. Without a key, the default value is 30.
Note
If there is a firewall, the maximum number of connections in the segment defined in the ruleset will be the smaller value out of the
maximum number of servers in the ruleset and the configured value of the "maximum-number-of-connections-in-segment".
Segments that are not defined in the firewall ruleset will use the configured value.
3. Save the file.
4. Restart the manager.
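A setting example for step 2 is shown below (the values are illustrative only; both keys default to 30 when omitted):

<entry key="maximum-number-of-connections-in-template">50</entry>
<entry key="maximum-number-of-connections-in-segment">50</entry>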
8.5.11 Customizing the User Rights for L-Platform Operations
It is possible to customize the user rights in the L-Platform management window to suit the role of the user (tenant administrator or tenant
user).
Customize user rights according to the design changes made in "5.2 Customizing Access Authority for L-Platform Operations" in the
"Design Guide CE".
You can use a command to customize user rights.
Refer to "Chapter 11 Access Authority Customize Commands" in the "Reference Guide (Command/XML) CE" for information on the
command to customize user rights.
8.6 Settings for Tenant Management and Account Management
This section explains how to change the settings for the tenant management and the account management, and explains how to edit the
user agreement that is displayed when registering a user.
8.6.1 Settings for Tenant Management and Account Management
This section explains how to change the settings for the tenant management and the account management.
- Display setting for user list
This section explains the procedure for changing the setting for whether or not tenant users are to be displayed when an infrastructure
administrator uses the tenant management to display the user list.
- Setting for registration format of tenant users
This section explains the procedure for changing the setting for whether a provisional account of a tenant user is to be registered or
whether the tenant user is to be registered directly, when a tenant administrator registers a tenant user.
- Setting for execution authority of the tenant management
This section explains the procedure for changing the setting for whether or not a tenant administrator can perform the following tenant
management:
- Add users
- Delete users
- Delegate user privileges
- Set user password
- Password change notification email settings
This section explains the procedure for modifying the settings when determining whether or not to include a new password within the
body of the password change notification email that is sent once password settings are complete, in the event that a tenant administrator
sets the user's password.
- Setting for execution authority of the account management
This section explains the procedure for changing the setting for whether or not a tenant administrator or tenant user can perform the
following account management:
- Changing the user's own information
- Changing the user's own password
- Directory service operation setting
This section explains the procedure for changing the setting for whether or not registration to directory service can be performed and
for whether or not password modification is to be allowed, when an infrastructure administrator or a tenant administrator registers a
user.
Point
Setting for registration format of tenant users and Setting for execution authority of the tenant management can be set by "Setup Wizard"
on the ROR Console. See the table below for the setting items that can be set by "Setup Wizard" on the ROR Console.
For details of "Setup Wizard", refer to "3.1 Setup Wizard" in the "User's Guide for Infrastructure Administrators CE".
Stopping the manager
Stop the manager.
Tenant Management Settings
The procedure for changing the setting of the tenant management is as follows.
1. Open the following file.
[Windows Manager]
Installation_folder\RCXCTMG\SecurityManagement\conf\portal.properties
[Linux Manager]
/etc/opt/FJSVctsec/conf/portal.properties
2. The following information must be modified:
Setting for infrastructure administrators operation

- visible.tenantuser
  Specify "on" if both tenant administrators and tenant users are to be displayed in the user list for the tenant management, and specify "off" if only tenant administrators are to be displayed. The initial value is "on".
  If "off" has been specified, the tenant users will not be displayed in the User List window of the tenant management.

Setting for tenant administrator operation

- provisional.acount (*)
  Specify "on" if a provisional account of a tenant user is to be created when the tenant management is to be used to register the tenant user, and specify "off" if the tenant user is to be registered directly. The initial value is "on".
  If "off" has been specified, the window for directly registering a tenant user will be displayed when registering a tenant user.

- allowUpdate (*)
  Specify "on" if the tenant management is to be performed, and specify "off" if it is not to be performed. The initial value is "off".
  If "off" has been specified, the Tenant tab will not be displayed on the ROR Console.

- setPassword.tenantadmin.mailwithpasswd
  When setting the user's password in tenant management, configure the setting to "on" to include the new password in the body of the password change notification email, or to "off" when not including the password in the email. The default value is "on".
  A new password will be included in the body of the password change notification email in the event that this value is omitted or the key is undefined.

- leftMenu.modifyUser.admin.visible
  Specify "on" if changing the user account is to be performed using the account management, and specify "off" if it is not to be performed. The initial value is "on".
  If "off" has been specified, the Change user account button will not be displayed in the Account window of the account management.

- leftMenu.changePassword.admin.visible
  Specify "on" if changing the user password is to be performed using the account management, and specify "off" if it is not to be performed. The initial value is "on".
  If "off" has been specified, the Change user password button will not be displayed in the Account window of the account management.

Setting for tenant user operation

- leftMenu.modifyUser.user.visible
  Specify "on" if changing the user account is to be performed using the account management, and specify "off" if it is not to be performed. The initial value is "on".
  If "off" has been specified, the Change user account button will not be displayed in the Account window of the account management.

- leftMenu.changePassword.user.visible
  Specify "on" if changing the user password is to be performed using the account management, and specify "off" if it is not to be performed. The initial value is "on".
  If "off" has been specified, the Change user password button will not be displayed in the Account window of the account management.

* Note: These items can be set by "Setup Wizard" on the ROR Console.
A setting example is shown below.
If any of these lines is missing from the file, add it.
... omitted
allowUpdate = on
setPassword.tenantadmin.mailwithpasswd=off
... omitted
leftMenu.modifyUser.admin.visible=on
leftMenu.changePassword.admin.visible=on
leftMenu.modifyUser.user.visible=on
leftMenu.changePassword.user.visible=on
visible.tenantuser=on
provisional.acount=on
3. Open the following directory service operation definition file.
[Windows Manager]
Installation_folder\ROR\SVROR\Manager\etc\customize_data\ldap_attr.rcxprop
[Linux Manager]
/etc/opt/FJSVrcvmr/customize_data/ldap_attr.rcxprop
4. The following information must be modified:
Setting item: directory_service (*)
(Setting for infrastructure administrators, tenant administrator, and tenant user operation)

Settings:
Specify "true" if user registration to directory service can be performed and password modification is to be allowed when the tenant management is to be used to register a user, and specify "false" if no user registration to directory service is to be performed and no password modification is to be allowed. The initial value is "true".
If "false" has been specified, the Set password button will not be displayed in the User List window of the tenant management. In addition, the Change user password button will not be displayed in the Account window of the account management.
Note that, if "false" is specified, users must already be registered in the directory service. Perform user registration according to the directory service to be used.

* Note: This item can be set by "Setup Wizard" on the ROR Console.
Edit only the "directory_service" line in the definition file.
A setting example is shown below.
directory_service=true
Starting the manager
Start the manager.
8.6.2 Editing the User Agreement when Registering a User
Refer to "19.13 Editing the User Agreement when Registering a User" in the "Setup Guide CE" for information on how to edit the agreement
displayed in the "Registering Users" window when new tenant users perform the registration procedure.
8.7 Accounting Settings
This section explains how to modify the accounting settings.
8.7.1 Display Function Settings for Estimated Price
Usage fee (the estimated price) for the L-Platform template can be displayed in the L-Platform Management window based on L-Platform
template accounting information.
This section describes how to modify settings according to whether usage fee (the estimated price) for the L-Platform template will be
displayed.
Point
Display Function Settings for Estimated Price can be set by "Setup Wizard" on the ROR Console. See the table below for the setting
items that can be set by "Setup Wizard" on the ROR Console.
For details of "Setup Wizard", refer to "3.1 Setup Wizard" in the "User's Guide for Infrastructure Administrators CE".
Procedures to modify the settings are as follows:
1. Open the following file.
[Windows Manager]
Installation_folder\RCXCFMG\config\vsys_config.xml
[Linux Manager]
/etc/opt/FJSVcfmg/config/vsys_config.xml
2. The following information must be modified:
Key: use-charge (*)
Description: Specifies whether usage fee (the estimated price) for the L-Platform template will be displayed.
- yes: Display
- no: Do not display
Default value: no

* Note: This item can be set by "Setup Wizard" on the ROR Console.
A setting example is shown below.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
... omitted
<entry key="use-charge">yes</entry>
<entry key="charge-host">localhost</entry>
<entry key="charge-port">3550</entry>
<entry key="charge-uri">/resource/ver1.0</entry>
... omitted
</properties>
3. Open the following file.
[Windows Manager]
Installation_folder\RCXCTMG\MyPortal\config\custom_config.xml
[Linux Manager]
/etc/opt/FJSVctmyp/config/custom_config.xml
4. The following information must be modified:
Key: estimation-mode (*1)
Description: Specifies whether usage fee (the estimated price) for the L-Platform template will be displayed.
- 3: Display
- 0: Do not display
Set to "3" if use-charge in vsys_config.xml is set to "yes", or to "0" if use-charge is set to "no".
Default value: 0

Key: compatible-estimation (*2)
Description: When overcommit is enabled, specifies whether to calculate usage fee (the estimated price) using the operating value or the reserved value.
- true: Calculate using the operating value (CPU performance and/or memory capacity)
- false: Calculate using the reserved value (reserved CPU performance and/or reserved memory capacity)
Default value: false
*1: This can be set by "Setup Wizard" on the ROR Console.
*2: These settings are only valid when the overcommit function is enabled. Refer to "19.6 Settings for the Overcommit Function" in the "Setup Guide CE" for further details.
Note also that these settings are only valid when either VMware or Hyper-V is used as the virtualization software. If any other virtualization software is used, the usage fee (the estimated price) will be calculated using the operating value regardless of whether overcommit is enabled or disabled.
A setting example is shown below.
<?xml version="1.0" encoding="UTF-8"?>
<properties>
<entry key="estimation-mode">3</entry>
<entry key="compatible-estimation">true</entry>
... omitted
</properties>
5. Restart the manager.
8.7.2 Currency Information Settings
Currency information can be changed. The default setting is United States Dollar ($).
The currencies that can be used are shown below.
Currency                 Currency sign    Number of decimal places
United States Dollar     $                2
Japanese Yen             ¥                0
Euro                     EUR              2
Singapore dollar         S$               2
To change the currency information, perform the following procedure:
1. Stop the manager.
2. Execute the Change currency information setting command to change the currency information.
Refer to "10.3 currencyset (Change Currency Information Setting)" in the "Reference Guide (Command/XML) CE" for information
on how to use the Change currency information setting command.
3. Start the manager.
Note
Determine the currency used when installing the system.
Do not change the currency information once the operation starts.
8.7.3 Metering Log Settings
This section explains how to change the metering log operational settings.
Follow the steps below to change the metering log operational settings.
1. Open the following operational settings file for metering logs:
[Windows Manager]
Installation_folder\RCXCTMG\Charging\conf\metering_log.properties
[Linux Manager]
/etc/opt/FJSVctchg/conf/metering_log.properties
2. Change the relevant items in the operational settings file for metering logs:
Key: retention_period
Description: Retention period of log entries.
Logs will be deleted once their retention period has passed.
Use the following format to specify the retention period:
YYYY-MM-DD
Example:
0000-03-00: Retain logs for 3 months.
0005-00-00: Retain logs for 5 years.
Default value: 0000-03-00

Key: periodic_log_use (*1)
Description: Specify whether or not to use the periodic log function:
- yes: Use
- no: Do not use
Default value: yes

Key: periodic_log_schedule_time (*1)
Description: Output time of the periodic log.
Use the following format to specify the output time:
HH:mm
Default value: 00:00

Key: periodic_log_schedule_type (*1)
Description: Output frequency of the periodic log.
Specify one of the following strings:
- DAILY: Every day
- WEEKLY: Every week
- MONTHLY: Every month
Default value: DAILY

Key: periodic_log_schedule_day (*1)
Description: Output day of the periodic log.
If periodic_log_schedule_type is WEEKLY or MONTHLY, this item is mandatory. (*2)
- If periodic_log_schedule_type is WEEKLY:
Use the following strings to specify the day of the week: MON, TUE, WED, THU, FRI, SAT, SUN
Commas can be used as delimiters to specify a number of days of the week.
- If periodic_log_schedule_type is MONTHLY:
Use one of the following methods to specify a date:
- A numeric value from 1 to 28 indicating the date
- The LASTDAY string indicating the last day of the month
A number of days cannot be specified with this method.
Default value: No specification
*1: Changes to this setting are enabled by executing the Change periodic log schedule settings command after changing the
settings file.
*2: If periodic_log_schedule_type is DAILY, the periodic_log_schedule_day value will be ignored.
An example of setting the operational settings file is shown below:
# delete setting of meteringlog database
# YYYY-MM-DD
# ex. 3 months ago : 0000-03-00
retention_period=0000-03-00
# schedule of periodlog insert
periodic_log_use=yes
periodic_log_schedule_time=00:00
periodic_log_schedule_type=DAILY
periodic_log_schedule_day=
3. If an item other than retention_period has been changed, execute the Change periodic log schedule settings command.
Refer to "10.1 ctchg_chgschedule (Change Periodic Log Schedule Settings)" in the "Reference Guide (Command/XML) CE" for
information on the Change periodic log schedule settings command.
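When the usage charge calculator is not used (see the note below), the schedule can be changed; for example, to output the periodic log every Monday and Thursday at 01:00 instead of daily, the items could be set as follows (an illustrative combination of the values described above), after which the Change periodic log schedule settings command must be executed:

periodic_log_use=yes
periodic_log_schedule_time=01:00
periodic_log_schedule_type=WEEKLY
periodic_log_schedule_day=MON,THU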
Note
Points to note when using the usage charge calculator
- Specify either the default value (3 months) or a period longer than the default value for retention_period.
- Do not change the values for the following keys from the default:
- periodic_log_use
- periodic_log_schedule_time
- periodic_log_schedule_type
- periodic_log_schedule_day
8.7.4 Usage Charge Calculator Settings
This section describes how to change the settings for the usage charge calculator function.
The procedure for changing the settings is as follows:
1. Open the following operating environment file:
[Windows Manager]
Installation_folder\RCXCTMG\Charging\conf\accounting.properties
[Linux Manager]
/etc/opt/FJSVctchg/conf/accounting.properties
2. Set the following items in the operating environment file:
Key: accounting.use (*)
Description: Specify whether to use the usage charge calculator.
- yes: Use the usage charge calculator.
- no: Do not use the usage charge calculator.
Default value: no

Key: gui.cutoffdate (*)
Description: Specify the default for the cut-off date displayed in the tenant management window of the ROR console.
Specify a value between 1 and 31.
In cases where the specified date does not exist, the cut-off date will be the end of the month. For example, if 31 is specified, but the month only contains 30 days, then the 30th will be the cut-off date.
Default value: 31

Key: gui.sendmailaddress (*)
Description: Specify the default for the address to send the usage fee displayed in the tenant management window of the ROR console.
Default value: None

Key: usedtime.metering.cpu.perf.vserver
Description: When overcommit is enabled, specify whether to calculate the CPU clock of the virtual server using CPU performance or CPU reserve performance.
- cpu_perf: Calculate with CPU performance
- cpu_reserve: Calculate with CPU reserve performance
Default value: cpu_reserve

Key: usedtime.metering.memory.vserver
Description: When overcommit is enabled, specify whether to calculate the memory usage of the virtual server using memory size or memory reserve.
- memory_size: Calculate with memory size
- memory_reserve: Calculate with memory reserve
Default value: memory_reserve
* note: This can be set by "Setup Wizard" on the ROR Console.
An example is shown below:
accounting.use = yes
... omitted
gui.cutoffdate = 20
gui.sendmailaddress = example@xxx.com
... omitted
usedtime.metering.cpu.perf.vserver = cpu_reserve
usedtime.metering.memory.vserver = memory_reserve
... omitted
3. Restart the Manager.
8.8 System Condition Server List Settings
This section explains how to change the System Condition Server List settings.
If the L-Platform Management overcommit function is enabled, the CPU and memory settings displayed in the System Condition Server
List can be changed. Refer to "19.6 Settings for the Overcommit Function" in the "Setup Guide CE" for information on the L-Platform
Management overcommit function settings.
Note
If the overcommit function is used, the settings must match those of the L-Platform Management overcommit function.
Point
System Condition Server List Setting can be set by "Setup Wizard" on the ROR Console.
For details of "Setup Wizard", refer to "3.1 Setup Wizard" in the "User's Guide for Infrastructure Administrators CE".
Use the procedure below to change the System Condition Server List settings.
1. Open the following file:
[Windows Manager]
Installation_folder\SWRBAM\CMDB\FJSVcmdbm\CMDBConsole\WEB-INF\classes\viewlist_en.xml
[Linux Manager]
/opt/FJSVcmdbm/CMDBConsole/WEB-INF/classes/viewlist_en.xml
2. Set the following items:
Settings item: serverByOrg_ROR.bottom.column.11.isEnable
Explanation: Set "true" to display the CPU Reserve Clock Rate. Set "false" to hide it. The default value is "false".

Settings item: serverByOrg_ROR.bottom.column.14.isEnable
Explanation: Set "true" to display the Memory Reserve Size. Set "false" to hide it. The default value is "false".
A settings example is shown below.
<?xml version="1.0" encoding="UTF-8"?>
<properties>
... omitted
<entry key="serverByOrg_ROR.bottom.column.11.isEnable">false</entry>
<entry key="serverByOrg_ROR.bottom.column.11.label">Reserved CPU clock speed(GHz)</entry>
<entry key="serverByOrg_ROR.bottom.column.11.path">/cmdb:item/cmdb:record[@type='observed']/
rc:LogicalServer/@reservedCPUClock</entry>
<entry key="serverByOrg_ROR.bottom.column.11.width">135</entry>
... omitted
<entry key="serverByOrg_ROR.bottom.column.14.isEnable">false</entry>
<entry key="serverByOrg_ROR.bottom.column.14.label">Reserved memory size(GB)</entry>
<entry key="serverByOrg_ROR.bottom.column.14.path">/cmdb:item/cmdb:record[@type='observed']/
rc:LogicalServer/@reservedMemorySize</entry>
<entry key="serverByOrg_ROR.bottom.column.14.width">140</entry>
... omitted
</properties>
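For example, to display both the CPU Reserve Clock Rate and the Memory Reserve Size columns, only the two isEnable entries shown above need to be changed to "true"; all other entries are left unchanged.

<entry key="serverByOrg_ROR.bottom.column.11.isEnable">true</entry>
<entry key="serverByOrg_ROR.bottom.column.14.isEnable">true</entry>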
Note
- If the L-Platform Management overcommit function is disabled but the overcommit setting in viewlist_en.xml is enabled, the CPU Reserve Clock Rate and Memory Reserve Size columns are displayed but their values are not displayed.
- When editing the viewlist_en.xml file, do not change any settings items other than serverByOrg_ROR.bottom.column.11.isEnable
and serverByOrg_ROR.bottom.column.14.isEnable.
- Save the viewlist_en.xml file before you edit the file. If any settings other than serverByOrg_ROR.bottom.column.11.isEnable and
serverByOrg_ROR.bottom.column.14.isEnable are changed, restore the saved file.
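A simple way to keep a restorable copy is to copy the file before editing it; for example (the paths are those shown in step 1, and the backup file name is arbitrary):

[Windows Manager]
copy Installation_folder\SWRBAM\CMDB\FJSVcmdbm\CMDBConsole\WEB-INF\classes\viewlist_en.xml viewlist_en.xml.bak

[Linux Manager]
# cp /opt/FJSVcmdbm/CMDBConsole/WEB-INF/classes/viewlist_en.xml /opt/FJSVcmdbm/CMDBConsole/WEB-INF/classes/viewlist_en.xml.bak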
8.9 Settings for Event Log Output for CMDB Agent
This section explains how to change the settings that determine whether the start/end messages for CMDB agent information collection
are output to event logs.
Use the following procedure to change the setting that determines whether start/end messages are output to event logs.
1. Open the following files
[Windows Manager]
Installation_folder\SWRBAM\CMDB\FJSVcmdbm\axis2\WEB-INF\services\mdr_cfmg\cmdb.properties
Installation_folder\SWRBAM\CMDB\FJSVcmdbm\axis2\WEB-INF\services\mdr_ror\cmdb.properties
[Linux Manager]
/opt/FJSVcmdbm/axis2/WEB-INF/services/mdr_cfmg/cmdb.properties
/opt/FJSVcmdbm/axis2/WEB-INF/services/mdr_ror/cmdb.properties
2. Set the following items:
- schedule.syslog.disable
Set this item to "true" to prevent start/end messages from being output to event logs, or to "false" otherwise.
A modification example is shown below; change only the value of this item.
schedule.syslog.disable=true
3. Restart the CMDB.
Open a command prompt and execute the following commands:
[Windows Manager]
Installation_folder\SWRBAM\CMDB\FJSVcmdbm\bin\cmdbstop
Installation_folder\SWRBAM\CMDB\FJSVcmdbm\bin\cmdbstart
[Linux Manager]
/opt/FJSVcmdbm/bin/cmdbstop.sh
/opt/FJSVcmdbm/bin/cmdbstart.sh
Part 3 Maintenance
Chapter 9 Hardware Maintenance
Chapter 10 Backup and Restoration
Chapter 9 Hardware Maintenance
This chapter explains how to perform hardware maintenance.
9.1 Overview
This section explains how to perform maintenance on the hardware devices managed by Resource Orchestrator.
Hardware Maintenance Flow
The flow of maintenance for hardware used to operate an L-Platform is shown below.
Figure 9.1 Flow of Maintenance for L-Platform Hardware
1. Notify Users of Maintenance Operations
The infrastructure administrator managing hardware devices notifies the tenant administrator and the tenant users that manage or
use L-Platforms running on the hardware which is the target of maintenance (regular maintenance or patch application) that
maintenance operations will be implemented.
As one method of notification, Resource Orchestrator provides a function which presents this information to tenant administrators and tenant users by displaying "Information" on the ROR console.
For details, refer to "3.2 Editing the Home Messages" in the "User's Guide for Infrastructure Administrators CE".
2. Stop Managed Servers
When using a physical L-Server, the tenant user must stop the managed server.
3. Change to Maintenance Status
The infrastructure administrator places the resource of the maintenance target into maintenance mode.
- When the maintenance target is a managed server
- When using physical L-Servers
The infrastructure administrator places the physical L-Server into maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
- When using a server for which spare servers are not configured
When using a server for which spare servers are not configured, the infrastructure administrator places the managed server
into maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
- When the maintenance target is a network device
When the maintenance target is a network device, the infrastructure administrator places the relevant network device into
maintenance mode, and excludes it from being the target of monitoring and auto-configuration.
For details, refer to "Chapter 22.1 Switchover of Maintenance Mode" in the "User's Guide for Infrastructure Administrators
(Resource Management) CE".
4. Implement Regular Maintenance
The infrastructure administrator performs the maintenance operations such as regular maintenance or patch application.
For details on the maintenance of managed servers, refer to "9.2 Blade Server Maintenance" and "9.3 Maintenance for Servers Other Than Blade Servers".
For details on the maintenance of network devices, refer to "9.5 Network Device Maintenance".
5. Release Maintenance Status
The infrastructure administrator releases the resource from maintenance mode.
- When the maintenance target is a managed server
- When using physical L-Servers
The infrastructure administrator releases the physical L-Servers from maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
- When using a server for which spare servers are not configured
When using a server for which spare servers are not configured, the infrastructure administrator releases the managed server
from maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
- When the maintenance target is a network device
The infrastructure administrator adds the network device as the target of monitoring or auto-configuration by releasing it from
maintenance mode.
For details, refer to "Chapter 22.1 Switchover of Maintenance Mode" in the "User's Guide for Infrastructure Administrators
(Resource Management) CE".
6. Notify Users of Maintenance Operation Completion
The infrastructure administrator notifies tenant administrators and tenant users that manage or use L-Platforms running on the
hardware which was the target of maintenance that maintenance operations have been completed.
As one method of notification, Resource Orchestrator provides a function which presents this information to tenant administrators and tenant users by displaying "Information" on the ROR console.
For details, refer to "3.2 Editing the Home Messages" in the "User's Guide for Infrastructure Administrators CE".
Flow of Corrective Actions when Hardware Fails
The flow of corrective actions when hardware fails is as below.
Figure 9.2 Flow of Corrective Actions when Hardware on which an L-Platform Operates Fails
1. Error Detection
Errors are detected in the following cases:
- When operation errors are reported to the tenant user by the user of the application running on the L-Platform
- When hardware errors are detected by the infrastructure administrator or the infrastructure monitor
- When the status of L-Platform resources displayed on the ROR console changes to something other than "normal"
2. Investigation Request
The tenant user asks the tenant administrator to investigate the cause of the error, based on the information related to the detected error (error details, the name of the resource where the error occurred, and the L-Platform name).
The tenant administrator asks the infrastructure administrator to investigate the cause of the error, based on the information obtained from the tenant user.
3. Status Confirmation
The infrastructure administrator identifies the hardware to which resources are allocated using the obtained information, and
confirms their status.
4. Problem Cause Investigation
The infrastructure administrator identifies the cause, by investigating the hardware on which the problem occurred.
5. Corrective Action
The infrastructure administrator takes corrective actions in order to resolve the problems with hardware.
6. Reporting of Investigation Results
The infrastructure administrator reports the results of the investigation, after completing corrective action.
Flow of Hardware Maintenance when a Server Fails
The following flowchart shows the procedure for maintaining hardware when failures occur on registered servers.
Figure 9.3 Flow of Hardware Maintenance when a Server Fails
*2: For details on how to configure and release the maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for
Infrastructure Administrators (Resource Management) CE".
*3: For details on server switchover, failback, and takeover, refer to "Chapter 4 Server Switchover" in the "Operation Guide VE".
*4: For details on backing up and restoring system images, refer to "Chapter 16 Backup and Restore" in the "User's Guide VE".
*5: For details on maintenance LED operations, refer to "9.2.1 Maintenance LED". Please note that maintenance LED operations are only
supported for PRIMERGY BX servers.
*6: For details on re-configuring hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
*7: For details on power control, refer to "Chapter 14 Power Control" in the "User's Guide VE".
The following hardware replacements can be performed:
- Replacing Servers
Replace a server that has been registered in Resource Orchestrator.
- Replacing Server Components
Replace hardware components (such as a NIC, HBA, or hard disk) of a registered server.
For details on replacing or adding server components, refer to "9.3.3 Replacing and Adding Server Components".
- Replacing Non-Server Hardware
Replace registered chassis, management blades, or any other hardware components external to servers.
9.2 Blade Server Maintenance
This section explains the maintenance of blade servers.
9.2.1 Maintenance LED
This section explains how to operate maintenance LEDs.
Activating a server blade's maintenance LED makes it easy to distinguish that server from the others. When replacing servers, it is recommended to use this function to identify which server blade should be replaced.
To activate the maintenance LED of a managed server running either a physical OS or a VM host, the server should be placed into
maintenance mode first.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure Administrators
(Resource Management) CE".
Note
- Maintenance LED control is only available for PRIMERGY BX servers. The actual LED used as an identification LED differs between
server models.
- For PRIMERGY BX600 servers, the power LED is used (blinks when activated).
- For PRIMERGY BX900 servers, the ID indicator is used (lit when activated).
- If SNMP agent settings within the management blade configuration are incorrect, maintenance LED operations in Resource
Orchestrator will end successfully, but the state of the identification LED will not change. Configure the settings correctly, referring
to "8.2 Configuring the Server Environment" in the "Design Guide CE".
Activating a Maintenance LED
Use the following procedure to activate a server blade's maintenance LED.
1. In the ROR console server resource tree, right-click the target server, and select [LED]-[ON] from the popup menu.
The [Turning on Maintenance LED] dialog is displayed.
2. Click <OK>.
Selecting the [Automatically turn off] checkbox will automatically shut down the server after activating its maintenance LED.
Note
Once the maintenance LED of a server blade is activated, new errors detected in that server cannot be checked from its LED anymore.
Check the server status directly from the ROR console.
Deactivating a Maintenance LED
Use the following procedure to deactivate a server blade's maintenance LED.
1. In the ROR console server resource tree, right-click the target server, and select [LED]-[OFF] from the popup menu.
The [Turning off Maintenance LED] dialog is displayed.
2. Click <OK>.
The maintenance LED is turned off.
9.2.2 Reconfiguration of Hardware Properties
This section explains how to re-configure hardware properties for replaced hardware.
After hardware replacement, it is necessary to re-configure Resource Orchestrator with the new hardware properties.
For PRIMERGY BX servers, the hardware properties are automatically re-configured.
Note
- Ensure this operation is performed only after the replacement of one of the following: a server itself, the NIC used for either the admin
or public LAN, or the HBA.
If it is not, there is a possibility that operations on the server will not run correctly.
- After replacing the hardware, the server status becomes "unknown". The appropriate status can be restored by re-configuring the
hardware properties from the server.
Prerequisites
The following prerequisites must be satisfied before this operation can be performed:
- Both the replaced server and replacement server must be the same model
A warning message is shown if the model of the replacement server differs from that of the replaced server.
- When replacing a PRIMERGY BX server, the replacement server must be inserted into the same slot as used for the replaced server
Hardware properties cannot be re-configured from a server inserted in a different slot. An error occurs if no server is inserted in the
slot occupied by the previous server.
- The replaced server and replacement server must both be of the same blade type
If the blade types of the replaced and replacement servers are different, an error will occur.
To move a server to a different slot within a chassis, the server must be deleted first, and then registered again after being inserted in its
new slot.
Pre-Configuration
For PRIMERGY BX servers, the hardware properties are automatically re-configured.
If automatic re-configuration is not necessary for PRIMERGY BX servers, delete the following file, and then restart the manager.
Placeholder for the Definition File
[Windows Manager]
Installation_folder\SVROR\Manager\etc\customize_data
[Linux Manager]
/etc/opt/FJSVrcvmr/customize_data
Name of the Definition File
auto_replace.rcxprop
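For example, the definition file can be deleted as follows (using the storage locations given above); restart the manager afterwards.

[Windows Manager]
del Installation_folder\SVROR\Manager\etc\customize_data\auto_replace.rcxprop

[Linux Manager]
# rm /etc/opt/FJSVrcvmr/customize_data/auto_replace.rcxprop <RETURN>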
Re-configuring Hardware Properties after Server Replacement
If the definition file has already been created, there is no need to set the hardware information again.
If the definition file has not been created, use the following procedure to re-configure properties for replaced hardware.
1. After hardware replacement, insert the server and check that the following message is displayed in the event log.
Server blade added
After the message is displayed, shut down the server if it is still powered on.
2. After approximately 30 seconds, right-click the target server in the ROR console server resource tree, and select [Hardware
Maintenance]-[Re-configure] from the popup menu.
The [Re-configure Hardware Properties] dialog is displayed.
3. Click <OK>.
The original hardware properties of the selected managed server are updated with new hardware properties obtained from the
replacement server. If the maintenance LED is on it will be turned off automatically.
Note
When registering an agent and performing backups of system images or cloning images, perform one of the following.
- Restart the managed server after reconfiguring the hardware properties
9.2.3 Replacing Servers
This section details the procedure to follow when replacing servers.
Information
- Follow the same procedure when replacing servers where VM hosts are running.
- No specific action is required in Resource Orchestrator when replacing admin servers or HBA address rename setup service servers.
- Replacing a Server Assigned with Spare Servers
Use the following procedure to switch applications over to a spare server and replace a server with minimal interruption.
1. Perform Server Switchover
Switch over the server to replace with its spare server.
For server switchover, refer to "Chapter 4 Server Switchover" in the "Operation Guide VE".
After the server has been switched over, its maintenance LED is automatically activated, and the server is powered down.
2. Replace the Server
Replace the server whose maintenance LED is activated.
Change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
3. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
After hardware properties have been re-configured, the maintenance LED is automatically turned off in the ROR console.
4. Perform Post-server Switchover Operations
For details on the operations that must be performed after server switchover, refer to "4.3 Post-Switchover Operations" in the
"Operation Guide VE".
- Replacing a Server with no Spare Server Assigned
Use the following procedure to smoothly replace a server and resume its applications.
1. Place the Server into Maintenance Mode
Place the primary server to replace into maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
2. Create a System Image Backup
For local boot servers, create a system image backup when possible.
For details on backing up system images, refer to "Chapter 16 Backup and Restore" in the "User's Guide VE".
In SAN boot environments, the boot disk can be restored without having to back up and restore a system image.
3. Activate the Maintenance LED
Activate the maintenance LED on the server that is to be replaced before shutting it down.
4. Replace the Server
Replace the server whose maintenance LED is activated.
Change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
5. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
After hardware properties have been re-configured, the maintenance LED is automatically turned off in the ROR console.
6. Restore the Boot Disk
- Local Boot
There is no need to restore the boot disk if the original disk is installed on the replaced server. Simply power on the
replacement server.
If the boot disk was replaced and a system image backup was collected, restore that backup.
Refer to "16.3 Restoring a System Image" in the "User's Guide VE" for details on how to restore a system image. After the
system image is restored, the server will be automatically powered on.
If there is no backup of the system image, run the installation program again.
- SAN Boot
As the replaced server can be easily configured to access the original boot disk using HBA address rename there is no need
to restore the boot disk. Simply power on the replacement server.
7. Release Maintenance Mode
Release the replaced server from maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
- Servers with no Agent Registered
Use the following procedure to replace servers on which no Resource Orchestrator agent was registered.
1. Activate the Maintenance LED
Activate the maintenance LED on the server that is to be replaced and shut down the server if it is still powered on.
2. Replace the Server
Replace the server whose maintenance LED is activated.
Change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
3. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
After hardware properties have been re-configured, the maintenance LED is automatically turned off in the ROR console.
9.2.4 Replacing Non-Server Hardware
This section explains how to replace hardware external to servers.
- Replacing Chassis
No specific action is required in Resource Orchestrator.
- Replacing Management Blades
No specific action is required in Resource Orchestrator.
- Replacing LAN Switch Blades
No specific action is required for PRIMERGY BX900/BX400 LAN switch blades in IBP mode.
For other LAN switch blades of PRIMERGY BX models, after replacing a switch blade, update the new LAN switch blade with the
VLAN settings that were previously configured in Resource Orchestrator.
Use the following procedure to replace a LAN switch blade.
1. Replace the faulty LAN switch blade.
2. Restore the LAN switch blade configuration backup (which includes all of the LAN switch blade settings) to the new LAN
switch blade.
If the LAN switch blade configuration was not backed up in advance, it has to be restored by configuring each setting
(except VLAN settings) to the same values set during the initial installation.
Refer to the manual of the LAN switch blade used for details on how to back up and restore LAN switch blade configurations.
3. Update the new LAN switch blade with the latest VLAN settings configured in Resource Orchestrator.
a. In the ROR console server resource tree, right-click the target LAN switch, and select [Restore] from the popup menu.
The [Restore LAN Switch] dialog is displayed.
b. Click <OK>.
VLAN settings are applied to the specified LAN switch blade.
Note
To replace LAN switch blades with different models, first delete the registered LAN switch blade, and then register the replacement
LAN switch blade.
After the LAN switch blade is registered, the VLAN settings must be configured for the internal and external ports.
For details on the VLAN settings, refer to "5.4.4 Configuring VLANs on LAN Switch Blades" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
When changing from IBP to another mode, or vice versa, when using a PRIMERGY BX900/BX400 series LAN switch blade, delete
the registered LAN switch blade and register it again.
- Replacing Fibre Channel Switch Blades
No specific action is required in Resource Orchestrator.
In Resource Orchestrator, the settings for Fibre Channel switch blades are not restored.
Restore the settings for Fibre Channel switch blades based on the information in hardware manuals.
- Replacing Storage Blades
No specific action is required in Resource Orchestrator when replacing a storage blade that does not contain the boot disk of a server
blade.
Use the following procedure to replace a storage blade that contains the boot disk of a server blade.
1. Replace the storage blade.
2. Insert the server blade's boot disk in the new storage blade.
3. If the boot disk's content was backed up, restore it.
Information
The backup and restore functions available in Resource Orchestrator can be used to restore the boot disk contents.
For details, refer to "Chapter 16 Backup and Restore" in the "User's Guide VE".
9.3 Maintenance for Servers Other Than Blade Servers
This section explains server maintenance for other than blade servers.
9.3.1 Reconfiguration of Hardware Properties
This section explains how to re-configure hardware properties for replaced hardware.
After hardware replacement, it is necessary to re-configure Resource Orchestrator with the new hardware properties.
Note
- Ensure this operation is performed only after the replacement of one of the following: a server itself, the NIC used for either the admin
or public LAN, or the HBA.
If it is not, there is a possibility that operations on the server will not run correctly.
- When the system board or GSPB of a PRIMEQUEST server has been changed, ensure that this operation is performed.
If it is not, there is a possibility that operations on the server will not run correctly.
- After replacing the hardware, the server status becomes "unknown". The appropriate status can be restored by re-configuring the
hardware properties from the server.
Prerequisites
The following prerequisites must be satisfied before this operation can be performed:
- Both the replaced server and replacement server must be the same model
A warning message is shown if the model of the replacement server differs from that of the replaced server.
- The replaced server and replacement server must both be of the same blade type
If the blade types of the replaced and replacement servers are different, an error will occur.
To move a server to a different slot within a chassis, the server must be deleted first, and then registered again after being inserted in its
new slot.
Pre-Configuration
For SPARC Enterprise servers, the hardware properties are automatically re-configured.
Re-configuring Hardware Properties after Server Replacement
- For Rack Mount and Tower Servers
Use the following procedure to re-configure properties for replaced hardware.
1. If the agent or ServerView Agents has already been registered, power on the server.
Additional Information
When a server using SAN boot has a hardware exchange that results in the MAC address used for the admin LAN being
changed, the OS and an agent cannot be started.
In this case, the server should be started once, and the MAC address confirmed on the BIOS (hardware) screen. After the
MAC address is confirmed, power off the server again.
2. In the ROR console server resource tree, right-click the target server and select [Hardware Maintenance]-[Re-configure] from
the popup menu.
The [Re-configure Hardware Properties] dialog is displayed.
3. Enter MAC addresses for the network interfaces used on the admin LAN.
This step can be skipped if no network interface was replaced.
- Admin LAN MAC Address (NIC1)
Required only if the agent is not registered.
Additional Information
When the server has been powered off for the reason given in the Additional Information in step 1, the input field for the value of NIC1 is displayed. In this case, enter the MAC address confirmed as described in the Additional Information in step 1.
- MAC address (NIC2) under SAN Boot/admin LAN redundancy
This item is only required for the following cases:
- When using the HBA address rename setup service
- When using GLS for admin LAN redundancy on the target server
- For the spare server of a managed server using admin LAN redundancy
4. Click <OK>.
The original hardware properties of the selected managed server are updated with new hardware properties obtained from the
replacement server.
- For PRIMEQUEST Servers
Use the following procedure to re-configure properties for replaced hardware.
1. Replace the system board or GSPB, and insert the server.
2. After approximately 30 seconds, right-click the target server in the ROR console server resource tree, and select [Hardware
Maintenance]-[Re-configure] from the popup menu.
The [Re-configure Hardware Properties] dialog is displayed.
3. Click <OK>.
The original hardware properties of the selected managed server are updated with new hardware properties obtained from the
replacement server.
- For SPARC Enterprise Servers
Note that in this case there is no need to set the hardware information again.
Note
When registering an agent and performing backups of system images or cloning images, perform one of the following.
- Restart the managed server after reconfiguring the hardware properties
9.3.2 Replacing Servers
This section details the procedure to follow when replacing servers.
Information
- Follow the same procedure when replacing servers where VM hosts are running.
- No specific action is required in Resource Orchestrator when replacing admin servers or HBA address rename setup service servers.
For Rack Mount and Tower Servers
- Replacing a Server Assigned with Spare Servers
Use the following procedure to switch applications over to a spare server and replace a server with minimal interruption.
1. Perform Server Switchover
Switch over the server to replace with its spare server.
For server switchover, refer to "Chapter 4 Server Switchover" in the "Operation Guide VE".
The server to replace is automatically powered off after switchover.
2. Replace the Server
Replace the server.
Change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
Configure the remote management controller of the replacement server with the same IP address, user name, password, and
SNMP trap destination as those set on the original server.
3. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
4. Perform Post-server Switchover Operations
For details on the operations that must be performed after server switchover, refer to "4.3 Post-Switchover Operations" in the
"Operation Guide VE".
- Replacing a Server with no Spare Server Assigned
Use the following procedure to smoothly replace a server and resume its applications.
1. Place the Server into Maintenance Mode
Place the primary server to replace into maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
2. Create a System Image Backup
For local boot servers, create a system image backup when possible.
For details on backing up system images, refer to "Chapter 16 Backup and Restore" in the "User's Guide VE".
In SAN boot environments, the boot disk can be restored without having to back up and restore a system image.
3. Power OFF
Shut down the server to replace if it is still powered on.
For details on shutting down servers, refer to "Chapter 14 Power Control" in the "User's Guide VE".
4. Replace the Server
Replace the server.
Change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
Configure the remote management controller of the replacement server with the same IP address, user name, password, and
SNMP trap destination as those set on the original server.
5. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
6. Restore the Boot Disk
- Local Boot
There is no need to restore the boot disk if the original disk is installed on the replaced server. Simply power on the
replacement server.
If the boot disk was replaced and a system image backup was collected, restore that backup.
Refer to "16.3 Restoring a System Image" in the "User's Guide VE" for details on how to restore a system image. After the
system image is restored, the server will be automatically powered on.
If there is no backup of the system image, run the installation program again.
- SAN Boot
The replaced server can be easily configured to access the original boot disk using I/O virtualization. Therefore, there is no
need to restore the boot disk. Simply power on the replacement server.
7. Release Maintenance Mode
Release the replaced server from maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
- Servers with no Agent Registered
Use the following procedure to replace servers on which no Resource Orchestrator agent was registered.
1. Power OFF
Shut down the server to replace if it is still powered on.
For details on shutting down servers, refer to "Chapter 14 Power Control" in the "User's Guide VE".
2. Replace the Server
Replace the target server.
Change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
Configure the remote management controller of the replacement server with the same IP address, user name, password, and
SNMP trap destination as those set on the original server.
3. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
For SPARC Enterprise Servers
- Replacing a server assigned with spare servers
Use the following procedure to switch applications over to a spare server and replace a server with minimal interruption.
- When Replacing an HBA
1. Perform Server Switchover
Switch over the server to replace with its spare server.
For server switchover, refer to "Chapter 4 Server Switchover" in the "Operation Guide VE".
The server to replace is automatically powered off after switchover.
2. Replace the Server
Replace the HBA of the server.
Change the OBP settings of the replacement server to match the operating environment.
For details on OBP settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing OBP settings.
Configure the remote management controller of the replacement server with the same IP address, user name, password, and
SNMP trap destination as those set on the original server.
3. Change the WWN Information Settings
Change the WWN information settings so that they reflect the WWN value of the HBA after server replacement.
Leave the value of the target CA unchanged from before the replacement.
4. Perform Post-server Switchover Operations
For details on the operations that must be performed after server switchover, refer to "4.3 Post-Switchover Operations" in
the "Operation Guide VE".
Note
If takeover was performed before replacement of the HBA, release the spare server settings. Change the WWN information
settings following the procedure in "Replacing a server with no spare server assigned".
- When not Replacing an HBA
1. Perform Server Switchover
Switch over the server to replace with its spare server.
For server switchover, refer to "Chapter 4 Server Switchover" in the "Operation Guide VE".
The server to replace is automatically powered off after switchover.
2. Replace the Server
Replace components (other than the HBA) of the server.
Change the OBP settings of the replacement server to match the operating environment.
For details on OBP settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing OBP settings.
Configure the remote management controller of the replacement server with the same IP address, user name, password, and
SNMP trap destination as those set on the original server.
3. Perform Post-server Switchover Operations
For details on the operations that must be performed after server switchover, refer to "4.3 Post-Switchover Operations" in
the "Operation Guide VE".
- Replacing a Server with no Spare Server Assigned
When WWN information has been configured, use the following procedure to change the WWN information to the WWPN value of the HBA after replacement.
1. Delete the Target CA
When there are target CA settings in the WWN information, stop the server and then delete the target CA settings (set them as
hyphens ("-")).
2. Replace the Server
Replace the HBA of the server.
Change the OBP settings of the replacement server to match the operating environment.
For details on OBP settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing OBP settings.
When the target CA was deleted in step 1., configure zoning and host affinity settings in the WWPN value of the replacement
HBA.
For details, refer to the ESC users guide.
3. Change the WWN Information Settings
Change the WWN information settings so that they reflect the WWN value of the HBA after server replacement.
When the target CA was deleted in step 1., configure a new target CA.
After configuration, restart the server.
After starting the server, check the status of the server's HBA from ESC.
When the HBA status is "unknown", delete it.
When the HBA status is displayed as "access path inheritance is required" (yellow icon), perform access path inheritance.
For details, refer to the ESC users guide.
When the target CA was not deleted in step 1., configure the target CA as a hyphen ("-").
For PRIMEQUEST Servers
- Replacing a Server Assigned with Spare Servers
Use the following procedure to switch applications over to a spare server and replace a server with minimal interruption.
1. Perform Server Switchover
Switch over the server to replace with its spare server.
For server switchover, refer to "Chapter 4 Server Switchover" in the "Operation Guide VE".
The server is automatically powered off after switchover.
2. Replace the Server
Replace the server.
Use the Maintenance Wizard of the Management Board Web-UI to perform replacement.
For details on the Maintenance Wizard, refer to the PRIMEQUEST manual.
Also, change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
3. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
4. Perform Post-server Switchover Operations
For details on the operations that must be performed after server switchover, refer to "4.3 Post-Switchover Operations" in the
"Operation Guide VE".
- Replacing a Server with no Spare Server Assigned
Use the following procedure to smoothly replace a server and resume its applications.
1. Place the Server into Maintenance Mode
Place the primary server to replace into maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
2. Create a System Image Backup
For local boot servers, create a system image backup when possible.
For details on backing up system images, refer to "Chapter 16 Backup and Restore" in the "User's Guide VE".
In SAN boot environments, the boot disk can be restored without having to back up and restore a system image.
3. Power OFF
Shut down the server to replace if it is still powered on.
For details on shutting down servers, refer to "Chapter 14 Power Control" in the "User's Guide VE".
4. Replace the Server
Replace the server.
Use the Maintenance Wizard of the Management Board Web-UI to perform replacement.
For details on the Maintenance Wizard, refer to the PRIMEQUEST manual.
Also, change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
5. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
6. Restore the Boot Disk
- Local Boot
There is no need to restore the boot disk if the original disk is installed on the replaced server. Simply power on the
replacement server.
If the boot disk was replaced and a system image backup was collected, restore that backup.
Refer to "16.3 Restoring a System Image" in the "User's Guide VE" for details on how to restore a system image. After the
system image is restored, the server will be automatically powered on.
If there is no backup of the system image, run the installation program again.
- SAN Boot
As the replaced server can be easily configured to access the original boot disk using HBA address rename there is no need
to restore the boot disk. Simply power on the replacement server.
7. Release Maintenance Mode
Release the replaced server from maintenance mode.
For details on maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
- Servers with no Agent Registered
Use the following procedure to replace servers on which no Resource Orchestrator agent was registered.
1. Power OFF
Shut down the server to replace if it is still powered on.
For details on shutting down servers, refer to "Chapter 14 Power Control" in the "User's Guide VE".
2. Replace the Server
Replace the server.
Use the Maintenance Wizard of the Management Board Web-UI to perform replacement.
For details on the Maintenance Wizard, refer to the PRIMEQUEST manual.
Also, change the BIOS settings of the replacement server to match the operating environment.
For details on BIOS settings, refer to "8.2 Configure the Server Environment" in the "Design Guide CE".
Shut down the server after completing BIOS settings.
3. Re-configure Hardware Properties after Replacement
After replacing the server, re-configure Resource Orchestrator with the latest hardware properties.
For details on how to re-configure hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
9.3.3 Replacing and Adding Server Components
This section explains how to replace and add server components.
- Replacing and Adding Network Interfaces (Admin LAN, Public LAN)
The procedure used to replace and add network interfaces is the same as that described in "9.3.2 Replacing Servers".
When adding or removing network interfaces, if the target server is running Red Hat Enterprise Linux 5 or Citrix XenServer, after
completing the steps described in "9.3.2 Replacing Servers", log in with administrative privileges on the managed server and execute
the following command.
# /usr/local/sbin/macbindconfig create <RETURN>
[Xen]
When using Citrix XenServer, reinstall XenServer referring to the Citrix XenServer manual.
When using Red Hat Enterprise Linux 5 Virtualization (Xen-Based) and not using I/O Virtualization (VIOM), perform the following
procedure.
1. Execute the following command to temporarily disable automatic startup of the xend daemon and then restart the managed
server.
# chkconfig xend off <RETURN>
2. Once the server has restarted, execute the following commands to update MAC address bindings, re-enable automatic startup
of the xend daemon, and restart the xend daemon itself.
# /usr/local/sbin/macbindconfig create <RETURN>
# chkconfig xend on <RETURN>
# service xend start <RETURN>
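As a supplementary check (not part of the original procedure), the standard chkconfig listing can be used to confirm that automatic startup of the xend daemon has been re-enabled.

# chkconfig --list xend <RETURN>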
[Linux]
When the configuration of server components has been changed, check the configuration file of the OS, and make any necessary
corrections. For details, refer to "Configuration File Check" in "2.1.1.1 Software Preparation and Checks" in the "Setup Guide CE".
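Although the exact checks are described in the referenced section, on Red Hat Enterprise Linux they typically concern the MAC addresses recorded in the network interface configuration files. As an illustrative check only (not a replacement for the referenced procedure), the HWADDR lines can be listed and compared against the new hardware:

# grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth* <RETURN>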
[Red Hat Enterprise Linux 6]
When adding or removing network interfaces, if the target server is running Red Hat Enterprise Linux 6, after completing the steps described in "9.3.2 Replacing Servers", check the configuration file of the OS, and make any necessary corrections.
For details, refer to "Configuration File Check" in "2.2.1.1 Software Preparation and Checks" in the "Setup Guide CE".
- Replacing a GSPB
The procedure used to replace a GSPB is the same as that described in "Replacing and Adding Network Interfaces (Admin LAN, Public LAN)" above.
Read "NIC" as "GSPB" in that procedure.
- Replacing an HBA
- When using I/O virtualization, the replacement HBA will automatically inherit the WWN originally set on the replaced HBA.
Therefore, there is no need to re-configure access paths on the storage side.
- When configuring WWN information, it is necessary to change WWN information settings to the replaced HBA WWN values.
For details on how to change WWN information, refer to "9.1.12 Changing WWN Settings for ETERNUS SF Storage Cruiser
Integration" in the "User's Guide VE".
- Replacing a boot disk (in local boot environments)
Use the following procedure to replace a boot disk.
1. Replace the faulty boot disk with a new one.
2. If the boot disk's content was backed up, restore it.
Information
The backup and restore functions available in Resource Orchestrator can be used to restore the boot disk contents.
For details, refer to "Chapter 16 Backup and Restore" in the "User's Guide VE".
- Replacing a System Board
The procedure used to replace a system board is the same as that described in "9.3.2 Replacing Servers".
- Replacing an IO Board
No specific action is required in Resource Orchestrator when replacing an IO board.
- Replacing Other Server Components
No specific action is required in Resource Orchestrator when replacing onboard server components like memory modules or other
parts.
[Solaris Containers]
When replacing, adding, or removing CPUs, add the replaced or added CPUs to the resource pool for Solaris Containers, or remove the removed CPUs from it, depending on your environment.
For details, refer to "C.7 Solaris Containers" in the "Setup Guide CE".
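Before adjusting the resource pool, the CPUs currently recognized by the OS can be checked with the standard Solaris psrinfo command (a supplementary check only; the resource pool changes themselves are described in the referenced section).

# psrinfo <RETURN>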
9.3.4 Replacing Non-server Hardware
This section explains how to replace hardware external to servers.
- Replacing Management Blades
No specific action is required in Resource Orchestrator.
- Replacing Management Boards
No specific action is required in Resource Orchestrator.
- Replacing LAN Switches
No specific action is required in Resource Orchestrator when replacing a LAN switch.
9.4 For Servers not Using Server Management Software
This section explains how to maintain servers not using server management software.
When replacing an entire server, a CPU, or memory, if there are differences in the CPU or memory specifications before and after the
replacement, reflect the new CPU and memory specifications of the replaced server in the definition file before reconfiguring hardware
information.
- For Virtual L-Servers
For details on definition files, refer to "C.1.4 Configuration when Creating a Virtual L-Server Using a Server which cannot Use
ServerView Agents" in the "Setup Guide CE".
- For Physical L-Servers
For details on definition files, refer to "B.1.6 Configuration when Creating a Physical L-Server without Specifying a Model Name in
the L-Server Template" in the "Setup Guide CE".
In the following cases, reflect the correct values in the definition file before reconfiguring the hardware information.
- When defining the configuration information (CPU core count, CPU clock speed, memory capacity, etc.) of the target server after
registering it with Resource Orchestrator
- When modifying the configuration information (CPU core count, CPU clock speed, memory capacity, etc.) of a server that has been
registered with Resource Orchestrator, in the definition file
For details on re-configuring hardware properties, refer to "9.3.1 Reconfiguration of Hardware Properties".
9.5 Network Device Maintenance
This section explains how to maintain network devices that are the target of management in Resource Orchestrator.
9.5.1 Replacement Procedure of Network Devices
This section explains the procedure to replace network devices when it becomes necessary due to failure.
Figure 9.4 Image of Network Device Replacement
This procedure assumes that replacement is performed while operations are continued, using network devices in a redundancy configuration consisting of active and standby switches.
Unless otherwise noted, the operations are performed by an infrastructure administrator.
9.5.1.1 When the Device to be Replaced has Failed
This section explains the replacement procedure when the device to be replaced has failed.
When the Management Function for Network Device Configuration Files is not Used
1. Announcement of planned maintenance operations.
2. Change the target network device to the "maintenance mode".
3. Replace the network devices. (Hardware maintenance person)
4. Restore the configuration of the replaced network device, following the maintenance procedure for the network device.
5. Release the "maintenance mode" of the network device once any problems with the network device after replacement have been resolved.
6. Notification that maintenance operations are complete.
When the Management Function for Network Device Configuration Files is Used
1. Announcement of planned maintenance operations.
2. Change the target network device to the "maintenance mode".
3. Replace the network devices. (Hardware maintenance person)
4. Restore the configuration of the replaced network device using the network device file backed up beforehand, following the maintenance procedure for the network device.
- When performing restoration by logging in to the replaced network device directly:
1. Export the network device file backed up beforehand, using the rcxadm netdevice cfexport command.
2. Restore the exported network device file following the maintenance procedure for the network device.
- When performing restoration using the restoration function of the management function for network device configuration files:
1. Configure the definitions of the replaced network device that are needed for operation management.
2. Restore the network device file.
Information
- When the replaced network device is a "Cisco ASA 5500 series" device, restoration using the rcxadm netdevice cfrestore command is not required.
A function of the "Cisco ASA 5500 series" automatically applies the same configuration as the device in the active state.
For details, refer to the manual of the "Cisco ASA 5500 series".
For the maintenance procedure of the network device, refer to the manual of the network device.
5. Back up the current network device configuration files from the network devices in operational status.
If the content of the backed up device configuration file is up to date, this step is not required.
If the content is not up to date, take a backup of the network device configuration file using the rcxadm netdevice cfbackup command.
The date and time of backup can be checked using the rcxadm netdevice cflist command.
6. Check that there are no differences in the definitions that would cause problems in the redundancy configuration, by comparing the network device file used in step 4. with the network device configuration file backed up in step 5.
Export each network device configuration file with the rcxadm netdevice cfexport command and check for differences.
When there is a difference that causes a problem, resolve the difference following the maintenance procedure for the network devices.
For the maintenance procedure of the network devices, refer to the manuals of the network devices.
7. Release the "maintenance mode" of the network device once any problems with the network device after replacement have been resolved.
8. Notification that maintenance operations are complete.
9.5.1.2 When the Device to be Replaced has not Failed
This section explains the replacement procedure when the network device to be replaced has not failed.
When the management function for network device configuration files is not used
1. Announcement of planned maintenance operations.
2. Log in to the network device directly to check whether the target network device for replacement is in active status or standby status.
When the target network device for replacement is in active status, switch it over with the standby network device of the redundancy configuration, so that the status of the target network device for replacement changes from active to standby.
3. Change the target network device to the "maintenance mode".
4. Back up the current environment (such as definitions) from the network devices that are switched to "maintenance mode".
5. Replace the network devices. (Hardware maintenance person)
6. Restore the configuration of the replaced network device using the environment backed up in step 4., following the maintenance procedure for the network device.
7. Back up the current definitions from the network devices with operational status.
8. Check that there are no differences in the definitions in the redundancy configuration, by comparing the environment backed up in step 7. with the environment definitions backed up in step 4.
When there is a difference that is a problem, log in to the replaced network device directly, and resolve the difference.
9. Release the "maintenance mode" of the network device once any problems with the network device after replacement have been resolved.
10. Notification that maintenance operations are complete.
When the management function for network device configuration files is used
1. Announcement of planned maintenance operations.
2. Log in to the network device directly to check whether the target network device for replacement is in active status or standby status.
When the target network device for replacement is in active status, switch it over with the standby network device of the redundancy configuration, so that the status of the target network device for replacement changes from active to standby.
3. Change the target network device to the "maintenance mode".
4. Back up the current network device files from the network devices that are switched to "maintenance mode".
If the content of the backed up device configuration file is up to date, this step is not required.
If the content is not up to date, take a backup of the network device configuration file using the rcxadm netdevice cfbackup
command.
The date and time of backup can be checked using the rcxadm netdevice cflist command.
5. Replace the network devices. (Hardware maintenance person)
Information
When registering a "Nexus 5000 series" device as a network device using the management function for network device configuration files, confirm the note in "9.4.8.2 When using management function of file for configuration of network device" in the "Design Guide CE" before executing restoration in step 6.
6. Restore the configuration of the replaced network device using the network device file backed up in step 4., following the maintenance procedure for the network device.
- When performing restoration by logging in to the replaced network device directly:
1. Export the network device file backed up in step 4., using the rcxadm netdevice cfexport command.
2. Restore the exported network device file following the maintenance procedure for the network device.
- When performing restoration using the restoration function of the management function for network device configuration files:
1. Configure the definitions of the replaced network device that are needed for operation management.
2. Restore the network device file.
Information
- When the replaced network device is a "Cisco ASA 5500 series" device, restoration using the rcxadm netdevice cfrestore command is not required.
A function of the "Cisco ASA 5500 series" automatically applies the same configuration as the device in the active state.
For details, refer to the manual of the "Cisco ASA 5500 series".
For the maintenance procedure of the network device, refer to the manual of the network device.
7. Back up the current network device files from the network devices with operational status.
If the content of the backed up device configuration file is up to date, this step is not required.
If the content is not up to date, take a backup of the network device configuration file using the rcxadm netdevice cfbackup
command.
The date and time of backup can be checked using the rcxadm netdevice cflist command.
8. Check that there are no differences in the definitions that would cause problems in the redundancy configuration, by comparing the network device file used in step 4. with the network device configuration file backed up in step 7.
Export each network device configuration file with the rcxadm netdevice cfexport command and check for differences.
When there is a difference that causes a problem, resolve the difference following the maintenance procedure for the network devices.
For the maintenance procedure of the network devices, refer to the manuals of the network devices.
9. Release the "maintenance mode" of the network device once any problems with the network device after replacement have been resolved.
10. Notification that maintenance operations are complete.
See
- Depending on the network device, it may not be possible to check the configuration of the device from the network device file. Confirm in advance, using the manual of the network device, whether the configuration of the device can be checked from the network device file.
- For details on how to configure and release the maintenance mode, refer to "22.1 Switchover of Maintenance Mode" in the "User's
Guide for Infrastructure Administrators (Resource Management) CE".
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
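As an illustration of the rcxadm netdevice subcommands used in the preceding procedures, a backup, list, and export sequence on a Linux Manager might look as follows. This is a minimal sketch: the device name "Firewall1", the export directory, and the option names shown are assumed examples, and the exact syntax is described in "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
Example
# /opt/FJSVrcvmr/bin/rcxadm netdevice cfbackup -name Firewall1 <RETURN>
# /opt/FJSVrcvmr/bin/rcxadm netdevice cflist -name Firewall1 <RETURN>
# /opt/FJSVrcvmr/bin/rcxadm netdevice cfexport -name Firewall1 -dir /tmp/netdevice_export <RETURN>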
Note
- Replacement using the same model is a prerequisite when replacing network devices.
- Confirm the following items, using the manuals of network devices, in advance.
- The replacement procedure when using redundancy configurations
- The operations for network devices (status check and switchover)
- Environmental differences which become problems when configuring redundancy configurations
When the management function for Network device configuration file is not used, confirm the following item, using the manuals of
network devices, in advance.
- The operations for network devices (backup methods and restore methods)
- When using network devices in redundancy configurations, replace the network devices in the following order.
1. Replace the network device in standby status.
2. Switch the network device that was active before the replacement to standby status, and then replace it.
When replacing multiple network devices in redundancy configurations simultaneously, perform the replacement operations in units of the same redundancy configuration.
- When the network device to replace has failed, step 4. cannot be performed. It is recommended to back up environments regularly in preparation for failures of network devices.
By performing regular backups of environments, the restoration work required after replacement of a network device can be reduced, by using the latest backed up environment.
9.5.2 Regular Maintenance Procedure of Network Devices
This section explains the procedure of regular maintenance (patch application or firmware update) of network devices.
Use the following procedure when performing maintenance operations while continuing operations using the network devices of
redundancy configurations by switching between active and standby.
Unless otherwise noted, the following operations are performed by an infrastructure administrator.
1. Announce that regular maintenance operations are being started.
2. Confirm that the network device that is the target of regular maintenance is in standby status, by directly logging in to the network
device.
3. Back up the current network device configuration file from the network device with standby status.
- When the management function for network device configuration files is used
If the content of the configuration file is up to date, this step is not required.
If the content is not up to date, back up the configuration file using the rcxadm netdevice cfbackup command.
The date and time of backup can be checked using the rcxadm netdevice cflist command.
- When the management function for network device configuration files is not used
Back up the configuration files from network devices.
For information about the backup method, refer to the network device manuals.
4. Change the target network device in standby status to "maintenance mode".
5. A hardware maintenance person performs the regular maintenance operations for network devices (patch application or firmware update).
6. Back up the current network device configuration files from the network devices with operational status.
- When the management function for network device configuration files is used
If the content of the configuration file is up to date, this step is not required.
If the content is not up to date, back up the configuration file using the rcxadm netdevice cfbackup command.
The date and time of backup can be checked using the rcxadm netdevice cflist command.
- When the management function for network device configuration files is not used
Back up the configuration files from network devices.
For information about the backup method, refer to the network device manuals.
7. Check that there are no differences between the network device configuration files backed up in 3. and those backed up in 6.
- When the management function for network device configuration files is used
Export the configuration files using the rcxadm netdevice cfexport command, and check for any differences. When there is a difference that is a problem, log in to the network device with standby status, and resolve the difference.
- When the management function for network device configuration files is not used
Compare the configuration files backed up from the network devices and check for any differences. When there is a difference that is a problem, log in to the network device with standby status, and resolve the difference.
For information about how to export configuration files, refer to the network device manuals.
For information about how to log in to network devices, refer to the network device manuals.
8. Release the network device from "maintenance mode", after checking that problems with network devices with standby status have
been solved.
9. Switch over the network device in active status that is the target of regular maintenance and the network device of the redundancy
configuration which is in standby status.
10. Then change the status of the remaining network device that is the target of regular maintenance from operational status to standby
status, and perform steps 3. to 8.
11. Announce that maintenance operations are complete.
See
- For details on how to configure and release the maintenance mode, refer to "22.1 Switchover of Maintenance Mode" in the "User's
Guide for Infrastructure Administrators (Resource Management) CE".
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
Note
- Regular maintenance may not be able to be performed using the described procedure depending on the maintenance details for
individual network devices. Before performing regular maintenance operations, ensure you check the information provided from the
network device vendors regarding the maintenance operations of network devices.
- Confirm the following items, using the manuals of network devices, in advance.
- The operations for network devices (status check, switchover and backup methods)
- Environmental differences which become problems due to redundancy configurations
- When performing regular maintenance for multiple network devices in redundancy configurations simultaneously, perform the maintenance operations in units of the same redundancy configuration.
9.5.3 Procedure for Addition of Network Devices
This section explains the operation for adding network devices.
9.5.3.1 Adding L2 Switches to Handle Insufficient Numbers of Ports when Adding
Servers
This section explains the addition procedure, assuming a case where it is necessary to add L2 switches because the LAN ports of the L2 switch to connect to are insufficient when adding servers.
The explanation mainly covers operations related to L2 switches.
Unless otherwise noted, the following operations are performed by an infrastructure administrator.
Figure 9.5 Image of L2 Switches to Add
1. Design additional configurations. (Network device administrator)
2. Provide the additional network device information to the infrastructure administrator. (Network device administrator)
Add a network device in the state where the following operations have been completed.
- Initial configuration
- Operation test
- Integration of the device into a physical network configuration
3. Register the resources of the server.
It is necessary to register chassis or LAN switch blades for a blade server.
4. Create network configuration information (XML definition) using the acquired network device information.
5. Register an additional L2 switch as a network device.
Use the rcxadm netdevice create command to register as a network device.
6. When any of the following applies to the additional network device, create and register rulesets.
- When adding an L2 switch of a model for which sample scripts are not provided, or an L2 switch of a model that has not been used in the system until now
In this case, it is necessary to create a directory to allocate rulesets to.
- When using a model for which sample scripts are not provided, or a model which has been used in the system until now, but configuring definitions using rules (scripts) different from those already used
- When using a model for which sample scripts are provided, but configuring definitions using rules (scripts) different from the sample scripts
Note
The sample scripts may be reviewed and modified in future updates. If rulesets created by directly modifying the provided sample scripts are used, those modifications will be lost when the sample scripts are updated, because the modified scripts are replaced with the updated sample scripts.
To prevent this type of problem, when creating scripts based on the sample scripts, copy the rulesets of the sample scripts, create the new rulesets from the copies, and then make the necessary modifications.
7. Modify all network resources that use the added network devices.
When adding a blade server, it is necessary to add the information about the uplink ports of the added chassis.
Use the rcxadm network modify command to modify a network resource.
8. Register the added server as a resource in the necessary resource pool.
See
- For details on the initial configurations of network devices, refer to "9.2.3 Settings for Managed Network Devices" in the "Design
Guide CE".
- For details on how to create network configuration information (XML definition), refer to "14.6 Network Configuration Information"
in the "Reference Guide (Command/XML) CE".
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
- For details on ruleset creation and the registration destinations, refer to "F.3 Creating a Folder for Registering Rulesets" in the "Setup
Guide CE".
- For details on the rcxadm network command, refer to "3.9 rcxadm network" in the "Reference Guide (Command/XML) CE".
- For details on how to register a resource in a resource pool, refer to "Chapter 19 Resource Operations" in the "User's Guide for
Infrastructure Administrators (Resource Management) CE".
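As an illustration, steps 5. and 7. might be executed as follows on a Linux Manager. This is a minimal sketch: the XML file names and the network resource name shown are assumed examples, and the exact option syntax is described in "3.8 rcxadm netdevice" and "3.9 rcxadm network" in the "Reference Guide (Command/XML) CE".
Example
# /opt/FJSVrcvmr/bin/rcxadm netdevice create -file /tmp/l2switch01.xml <RETURN>
# /opt/FJSVrcvmr/bin/rcxadm network modify -name network01 -file /tmp/network01.xml <RETURN>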
9.5.3.2 Adding Firewalls, Server Load Balancers, and L2 Switches for Additional
Tenants
This section explains the procedure for addition, assuming a case where it is necessary to add a network device or a server in order to add
a tenant.
The explanation is mainly about operations related to firewalls, server load balancers, and L2 switches.
Unless otherwise noted, the following operations are performed by an infrastructure administrator.
Figure 9.6 Image of Tenants to Add
1. Design additional configurations. (Network device administrator)
2. Provide the additional network device information to the infrastructure administrator. (Network device administrator)
Add a network device in the state where the following operations have been completed.
- Initial configuration
- Operation test
- Integration of the device into a physical network configuration
3. Register the resources of the server.
It is necessary to register chassis or LAN switch blades for a blade server.
4. Create network configuration information (XML definition) using the acquired network device information.
5. Register the added firewall, server load balancer, and L2 switch as network devices.
Use the rcxadm netdevice create command to register as a network device.
6. When any of the following applies to the additional network device, create and register rulesets.
- When adding a firewall, server load balancer, or L2 switch of a model for which sample scripts are not provided, or one of a model that has not been used in the system until now
In this case, it is necessary to create a directory to allocate rulesets to.
- When using a model for which sample scripts are not provided, or a model which has been used in the system until now, but configuring definitions using rules (scripts) different from those already used
- When using a model for which sample scripts are provided, but configuring definitions using rules (scripts) different from the sample scripts
Note
The sample scripts may be reviewed and modified in future updates. If rulesets created by directly modifying the provided sample scripts are used, those modifications will be lost when the sample scripts are updated, because the modified scripts are replaced with the updated sample scripts.
To prevent this type of problem, when creating scripts based on the sample scripts, copy the rulesets of the sample scripts, create the new rulesets from the copies, and then make the necessary modifications.
7. Create a tenant and register a tenant administrator.
8. Back up environments using the functions provided by the firewall and server load balancer.
Backups can be used for restoration when replacing firewalls or server load balancers due to device failure.
For details on how to back up environments, refer to the manuals of the firewall and server load balancer being used.
9. Register additional servers, firewalls and server load balancers in a resource pool for tenants as resources.
See
- For details on the initial configurations of network devices, refer to "9.2.3 Settings for Managed Network Devices" in the "Design
Guide CE".
- For details on how to create network configuration information (XML definition), refer to "14.6 Network Configuration Information"
in the "Reference Guide (Command/XML) CE".
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
- For details on ruleset creation and the registration destinations, refer to "F.3 Creating a Folder for Registering Rulesets" in the "Setup
Guide CE".
- For details on how to create a tenant, refer to "11.3 Creating Tenants" in the "User's Guide for Infrastructure Administrators CE".
- For details on how to register tenant administrators, refer to "Chapter 3 Operating User Accounts" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
- For details on how to register a resource in a resource pool, refer to "Chapter 19 Resource Operations" in the "User's Guide for
Infrastructure Administrators (Resource Management) CE".
9.5.4 Procedure for Addition or Modification of Connection Destinations of
Network Devices
This section explains the procedure for adding or modifying destinations for network device connection.
Unless otherwise noted, the following operations are performed by an infrastructure administrator.
1. Notify your infrastructure administrator about the addition or modification of the destination for network device connection.
(Network device administrator)
2. Create network configuration information (XML definition) using the acquired network device information.
3. Confirm there are no differences besides the link information (under Links tag) regarding the added or modified destinations for
connection, by comparing the network configuration information of network devices registered in Resource Orchestrator and the
network configuration information created in 2.
If there is any difference, check with the system administrator that network device configurations have not been modified, and
change the network configuration information if necessary.
The network configuration information of network devices registered in Resource Orchestrator can be obtained using the rcxadm
netconfig export command.
4. Modify the network device by setting the confirmed network configuration information as the input information.
Use the rcxadm netdevice modify command to modify network devices.
5. Confirm from the ROR console that the network device information has changed, and the status is normal.
See
- For details on how to create network configuration information (XML definition), refer to "14.6 Network Configuration Information"
in the "Reference Guide (Command/XML) CE".
- For details on the rcxadm netconfig command, refer to "3.7 rcxadm netconfig" in the "Reference Guide (Command/XML) CE".
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
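As an illustration, steps 3. and 4. might use the following commands on a Linux Manager. This is a minimal sketch: the file names, the device name, and the option names shown are assumed examples, and the exact syntax is described in "3.7 rcxadm netconfig" and "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
Example
# /opt/FJSVrcvmr/bin/rcxadm netconfig export -file /tmp/registered_netconfig.xml <RETURN>
# diff /tmp/registered_netconfig.xml /tmp/new_netconfig.xml <RETURN>
# /opt/FJSVrcvmr/bin/rcxadm netdevice modify -name L2Switch01 -file /tmp/new_netconfig.xml <RETURN>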
9.6 Storage Device Maintenance
This section explains how to maintain storage devices.
- Replacing storage devices
No specific action is required in Resource Orchestrator.
In Resource Orchestrator, the settings for storage devices are not restored.
Restore the settings for storage devices based on the information in hardware manuals.
- Replacing Fibre Channel switches
No specific action is required in Resource Orchestrator.
In Resource Orchestrator, the settings for Fibre Channel switches are not restored.
Restore the settings for Fibre Channel switches based on the information in hardware manuals.
9.7 Power Monitoring Device (PDU or UPS) Maintenance
This section explains how to maintain power monitoring devices (PDU or UPS).
- Replacing power monitoring devices (PDU or UPS)
After replacing a power monitoring device, re-configure hardware properties of the power monitoring device (PDU or UPS).
Use the following procedure to replace a power monitoring device.
1. Replace the faulty power monitoring device.
2. Set the admin LAN IP address and SNMP community on the replacement device to the same values as those that were set on
the faulty device.
3. Re-configure the power monitoring device's hardware properties.
a. In the ROR console server resource tree, right-click the target power monitoring device (PDU or UPS), and from the
popup menu, select [Hardware Maintenance]-[Re-configure].
The [Re-configure Hardware Properties] dialog is displayed.
b. Click <OK>.
The target power monitoring device's hardware properties are re-configured.
Chapter 10 Backup and Restoration
This chapter describes how to operate the backup and restoration of ServerView Resource Orchestrator Cloud Edition.
10.1 Backup and Restoration of Admin Servers
This section describes how to operate the backup and restoration of the admin server of ServerView Resource Orchestrator Cloud Edition.
Backing up the Admin Server
The two methods of backing up the Management Server are shown below.
With either method, backup can be performed by executing one command (the rcxmgrbackup command).
For details on the rcxmgrbackup command, refer to "6.6 rcxmgrbackup" in the "Reference Guide (Command/XML) CE".
- Offline Backup
The manager of this product is stopped and then resources are backed up. A backup is taken at the following times:
- When installation of this product has completed
When using an offline backup, backup of the following resources is not performed:
- Audit log
- Network device configuration file
- Online Backup
Resources are backed up without the manager of this product being stopped. Backing up periodically is recommended for online
backups.
To save time when restoring configuration information during operation, take an online backup whenever configuration information is updated.
When using an online backup, backup of the following resources is not performed:
- Dashboard Information
- Operational Status Information
- Audit Log
- Application Information
- Definition File
- Some L-Platform Management Settings
- Tenant Management and Account Management Settings
- Operational Status Server List Settings
- CMDB Agent Event Log Output Settings
- Network device configuration file
Online backup uses the PostgreSQL Point-In-Time Recovery (PITR) mechanism.
There are two online backup methods, each with a different database collection range, as follows:
- Base Backup
Base backup is the backup of the entire database cluster (file group in which database data is recorded). A base backup is taken
by executing the rcxmgrbackup -base command.
- Differential Backup
With differential backup, the contents of updates to the database are output to multiple files in 16 MB-sized lots.
These files are called Write-Ahead Logging (WAL) files.
Usually, for each 16 MB written, the WAL file being written to is switched, and the WAL file for which writing has been completed
is saved to the "wal" directory under the backup directory.
Periodically executing the rcxmgrbackup command allows the contents of updates to the database that are recorded in saved WAL
files to be maintained for a certain period of time. For example, when the rcxmgrbackup command is executed every hour, the
contents of updates performed in the most recent one-hour period will be saved.
Restoring the Admin Server
Database restoration is performed by applying, to the base backup, the contents of the database updates recorded in the WAL files output after the base backup was taken. The rcxmgrrestore command is used for restoration.
10.1.1 Mechanism of Backup and Restoration
This section describes the mechanism of backup and restoration, including environment requirements, points to note, and the restart position
after restoration, in a system in which this product is installed.
Backup and Restoration Environment Requirements
The following conditions must be satisfied in the environments to be backed up and restored:
- The operating systems match.
Note that this does not include operating system version differences.
- The host information (host name or IP address) matches.
- The character code systems match.
- The directory services match.
- The product installation folders match.
Points to Note at Backup and Restoration
This section describes the points to note when backup and restoration is being performed.
- Administrator privileges are required in order to execute the commands.
- If, and only if, resources backed up using online backup are to be restored, execute the command that disables a service application
for which an application process no longer exists (recoverAllService command).
- Perform backup and restore operations as described below for ServerView Operations Manager, which is mandatory software. Refer
to a ServerView Operations Manager manual for details.
- Backup Operations
Perform system backup in accordance with Management Server system backup operations.
When a registration or update of user information has occurred, perform backup again
- Restoration Operations
a. Perform restoration of the entire Management Server system.
b. Use the restoration procedure described in this document to perform restoration.
- If systems backed up using offline and online backup are to be restored, perform restoration in the following order:
1. Restoration of systems backed up using offline backup
2. Restoration of systems backed up using online backup
- When moving backup resources, move the folder specified in the backup destination as well as all folders and files under that folder.
- Do not delete backup resources during execution of the restore command.
- To delete backup resources, delete the folder specified in the backup destination as well as all folders and files under that folder.
- Backup to the following media cannot be performed using the backup command:
- Backup to optical disks such as CD-R and DVD-R
To back up user resources to optical disks, back up once to local disks, and then write to the media.
- Backup to folders that include spaces.
- Restoration from the following folders cannot be performed using the restore command:
- Restoration from folders that include spaces.
- When the management function for network device configuration files is used, after offline backup is performed, save the following folder, including all folders and files under it.
[Windows Manager]
Installation_folder\SVROR\Manager\var\netdevice\
[Linux Manager]
/var/opt/FJSVrcvmr/netdevice/
- When the management function for network device configuration files is used, before performing restoration of the admin server, replace the folder at its original location with the folder saved during backup.
Resources Managed by This Product and Timing of Update
The resource files managed by Resource Orchestrator are as shown below. When an operation described in the "When Backup is Necessary" column has been performed, a backup should be made.
Table 10.1 Resources to be Backed Up and Timing of Update
Target Resources: Certificates
When Backup is Necessary: None
Necessity of Stopping Managers: Not required

Target Resources: Session encryption keys
When Backup is Necessary: After password saving (after execution of the rcxlogin -save command)
Necessity of Stopping Managers: Not required

Target Resources: System images and cloning images
When Backup is Necessary: After addition, deletion, and modification of Physical L-Server images
Necessity of Stopping Managers: Not required
Remarks: Backup of system images and cloning images of virtual L-Servers is performed as a part of virtual machine backup operations. Perform backup operations of virtual machines using the corresponding functions of VM management software.

Target Resources: Configuration definition information
When Backup is Necessary: After creation, registration, modification, unregistration, and deletion of L-Servers and resources
Necessity of Stopping Managers: Not required

Target Resources: Information related to image files
When Backup is Necessary: Physical L-Server registration, deletion, movement, usage changes, power operations, conversion, and reversion; after the registration and unregistration of VM hosts
Necessity of Stopping Managers: Not required

Target Resources: Definition files
When Backup is Necessary: Modification of definition files
Necessity of Stopping Managers: No (Note)
Remarks: Note) If the following definition files are to be backed up, the Manager must be stopped and offline backup must be performed.
- Some L-Platform management settings
- Tenant management and account management settings
- Operational status server list settings
- CMDB agent event log output settings

Target Resources: Image management information
When Backup is Necessary: After rcxadm imagemgr command operations
Necessity of Stopping Managers: Not required

Target Resources: Metering information
When Backup is Necessary: Creation, change, deletion, move, or power operation of an L-Platform or L-Server
Necessity of Stopping Managers: Not required

Target Resources: Home window announcement information
When Backup is Necessary: Announcement change
Necessity of Stopping Managers: Yes

Target Resources: License agreement when users are added
When Backup is Necessary: License agreement change
Necessity of Stopping Managers: Yes

Target Resources: Dashboard information
When Backup is Necessary: As required
Necessity of Stopping Managers: Yes
Remarks: Because it is updated as needed, stop the Manager and take a backup as required.

Target Resources: Operational status information
When Backup is Necessary: As required
Necessity of Stopping Managers: Yes
Remarks: Because it is updated as needed, stop the Manager and take a backup as required.

Target Resources: Application information
When Backup is Necessary: Application to use, modify, or cancel an L-Platform, or approval, assessment, dismissal, or cancellation of an application
Necessity of Stopping Managers: Yes

Target Resources: Terms of use or terms of cancellation of L-Platform
When Backup is Necessary: Change to terms of use or terms of cancellation
Necessity of Stopping Managers: Yes
Remarks) Capacity Planning data is not included in backups. Export the data to CSV or Excel files by using the Capacity Planning window
as necessary.
Disk Space Necessary for the Backup
Disk space necessary for the backup is as follows.
Table 10.2 Resource to be Backed Up and Disk Space Necessary for the Backup
Target Resources: Configuration definition information
Disk Space Necessary for the Backup: The file size of the backup files varies depending on the number of resources defined in the configuration definition information. When the number of VM guests is 1,000, the collected information temporarily becomes about 150 MB, and becomes less than 2 MB after compression. Prepare the backup area referring to this size.

Target Resources: System image or cloning image
Disk Space Necessary for the Backup: Space is necessary for every backup. For example, when the backup is executed three times, three times the capacity of the image storage area is necessary. For the capacity of the image storage area, refer to "2.4.2.5 Dynamic Disk Space" in the "Design Guide VE".

Target Resources: Certificates, session cryptography key, definition files, image management information, home window announcement information, license agreement when users are added, and terms of use or terms of cancellation of L-Platform
Disk Space Necessary for the Backup: The size of the backed up files changes according to the number of files backed up. Even if 100 files of 10 KB or less each are stored, less than 1 MB is necessary. Prepare the backup area referring to this size.

Target Resources: Metering information
Disk Space Necessary for the Backup:
[Offline backup] The size of the backed up files increases and decreases in proportion to the number of resources subject to metering and the retention period of the metering log. (*)
[Online backup] 1 KB (*)

Target Resources: Dashboard information
Disk Space Necessary for the Backup: The size of the backed up files increases and decreases in proportion to the number of L-Server templates and the number of tenants. Prepare the backup area referring to the following formula, which assumes that 10 L-Server templates are defined. When the number of tenants is 100, about 6.6 GB is necessary.
disk space = (67.0 + (55.5 * number of tenants)) * 1.2 (MB)

Target Resources: Usage condition information
Disk Space Necessary for the Backup: 20 MB

Target Resources: Application information
Disk Space Necessary for the Backup: The size of the backed up files changes according to the operating environment. Calculate the total amount of disk space used under the following directories, and prepare a backup area of that amount.
[Windows Manager]
- system drive\SWRBADB (if the directory exists)
- Installation_folder\SWRBAM
- Installation_folder\SWOMGR
[Linux Manager]
- /etc/opt/FJSVswrbam
- /var/opt/FJSVswrbam
- /var/opt/FJSVJMCMN/etc
- /var/opt/FJSVjmcal
- /var/opt/FJSVJOBSC
- /var/opt/FJSVfwseo/config/JM
- /opt/FJSVJOBSC/bin
- /etc/mjes
- /var/spool/mjes
* Note: For an offline backup, the backup of metering information is stored under the directory specified in the rcxmgrbackup command; for an online backup, it is stored in the area specified separately from that directory.
Calculate the disk space necessary for backing up metering information referring to the following examples.
For an offline backup, the "Disk space necessary for base backup" is necessary.
For an online backup, the total of the "Disk space necessary for base backup" and the "Disk space necessary for difference backup" is necessary.
Under the following conditions, about 1.3 GB of disk space is necessary for an offline backup, and about 12.6 GB for online backups.
Table 10.3 Condition Necessary for Backup of Metering Information
Item: Number of operating L-Platforms
Estimated Value: 1,000

Item: Number of resources per L-Platform
Estimated Value: L-Server: 1, Expansion disk: 1, Software: 2

Item: Usage status
Estimated Value:
- The following operations are executed every day
- Return and deployment of 10 L-Platforms
- Starting of 1,000 L-Servers when starting operations
- Stopping of 1,000 L-Servers when finishing operations
- A regular log is acquired every day
- Keep metering logs for one year

Item: Online backup frequency
Estimated Value:
- Execute monthly base backups (every 30 days)
- Execute hourly difference backups
Table 10.4 Formula for Metering Logs per Day
- Target capacity for metering logs
- Event logs for an L-Platform: 2.3 KB per operation (A)
- Event logs for other than an L-Platform: 0.6 KB per operation (B)
- Regular logs: 2.3 KB per L-Platform (C)
- Metering logs per day
(A) * number of L-Platform operations per day
+ (B) * number of operations other than L-Platform operations per day
+ (C) * number of operating L-Platforms
= 2.3 KB * 20 + 0.6 KB * 2,000 + 2.3 KB * 1,000
= approximately 3.5 MB
Table 10.5 Disk Space Necessary for Base Backup
Metering logs per day * one year
3.5 MB * 365 = approximately 1.3 GB
Table 10.6 Disk Space Necessary for Difference Backup
Disk space per WAL file: 16 MB
Number of WAL files kept until the next base backup (30 days of hourly backups): 24 * 30
16 MB * 24 * 30 = approximately 11.3 GB
Table 10.7 Disk Space Necessary for Operation of Online Backup
Disk space necessary for operation of online backup
= Disk space necessary for base backup + Disk space necessary for difference backup
= 1.3 GB + 11.3 GB
= approximately 12.6 GB
Storage Destination for Backing Up Resources
This section describes the storage destination for backing up Admin Server resources.
Use the rcxmgrbackup command to specify a storage destination folder, except in the case of metering information.
The folders described in the following table are automatically created in the storage destination folder in order to store the resources. When restoration is performed, the information in the latest folder of each type is used. Therefore, even if older folders are deleted due to disk capacity constraints, this will not cause a problem, because the information in the latest folder will remain.
When the rcxmgrbackup command is used with the -cleanup option specified, all information older than the latest information collected
by the command will be deleted.
Table 10.8 Relationship between Backup Method and Data Collected
Online Backup
(Base Backup)
Online Backup
(Differential Backup)
Folder
Offline Backup
CTMG_OFFyyyymmddhhmmss
CTMG_BASEyyyymmddhhmmss
CTMGyyyymmddhhmmss
CFMGyyyymmddhhmmss
RORyyyymmddhhmmss
RBAyyyymmddhhmmss
RORSCW
Yes
No
No
Yes
No
No
No
No
Yes
Yes
Yes
No
Yes
Yes
Yes
Yes
Yes
Yes
No
Yes
Yes
"yyyymmddhhmmss" is the date and time the command was executed.
Note
Backup files of the admin server should be stored on external storage media to prevent the backup data from being corrupted due to server
failure.
Refer to "10.1.5 Online Backup Settings for Metering" for information on the storage destination for metering.
Guideline for Backup and Restore Processing Times
Estimate the time necessary for backup and restoration of the admin server referring to the following command processing times.
The command processing time differs according to the operation environment, so check the time necessary in your environment. Base your estimates on the values you obtain from such tests.
Example
- Environment
- CPU: 4 CPUs
- Memory Size : 16 GB
- Command processing time
- offline backup : 3 minutes
- online backup (base backup) : 2 minutes
- online backup (differential backup) : 2 minutes
- restore : 4 minutes
System Restart Point after Restoration
In the system operations flow used by this product, the restart point of the system will vary according to the timing of the backup of
resources. Therefore, to ensure that the system can restart from the status it had before backup, operate the system in accordance with
whether offline backup or online backup is used for backup, and perform restoration processing in accordance with each backup type.
Figure 10.1 Schedule for Backup and Restoration of Admin Servers (Example)
The flow of the preceding diagram is as follows:
1. A problem occurs in the system (Example: Occurs at 12:35 on August 1, 2011)
2. In order to restore the system, restoration is performed on the environment backed up offline and the environment backed up online.
(Example: Restores at 14:00 on August 1, 2011)
3. The latest differential backup files are dated (12:00 on August 1, 2011), so the restart point will be from that date and time.
10.1.2 Offline Backup of the Admin Server
Before performing backup, the systems of this product must be stopped.
1. Stop the Manager
2. Back up the Resources of this Product
3. Start the Manager
10.1.2.1 Stopping the Manager
Stop the Manager and check that it is in a stopped state.
Stopping the Manager
Execute the command shown below to stop the Manager.
For details on the command, refer to "5.19 rcxmgrctl" in the "Reference Guide (Command/XML) CE".
[Windows Manager]
>Installation_folder\SVROR\Manager\bin\rcxmgrctl stop <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxmgrctl stop <RETURN>
Checking the status of the services of this product
Check that the Manager and services of this product are stopped.
10.1.2.2 Back up the Resources of this Product
Back up the resources of this product. Execute the command shown below.
For details on the command, refer to "6.6 rcxmgrbackup" in the "Reference Guide (Command/XML) CE".
[Windows Manager]
>Installation_folder\SVROR\Manager\bin\rcxmgrbackup -dir directory [-cleanup] <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxmgrbackup -dir directory [-cleanup] <RETURN>
When the management function for network device configuration files is used, save the following folder, including all folders and files under it.
[Windows Manager]
Installation_folder\SVROR\Manager\var\netdevice\
[Linux Manager]
/var/opt/FJSVrcvmr/netdevice/
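For example, on a Linux Manager, the backup and the saving of the network device configuration file folder might be performed as follows. This is a minimal sketch: the backup destination /backup/ror and the save destination /backup/ror_netdevice are assumed examples.
Example
# /opt/FJSVrcvmr/bin/rcxmgrbackup -dir /backup/ror <RETURN>
# cp -rp /var/opt/FJSVrcvmr/netdevice /backup/ror_netdevice <RETURN>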
10.1.2.3 Starting the Manager
Execute the command shown below to start the Manager.
For details on the command, refer to "5.19 rcxmgrctl" in the "Reference Guide (Command/XML) CE".
[Windows Manager]
> Installation_folder\SVROR\Manager\bin\rcxmgrctl start <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxmgrctl start <RETURN>
Saving Image Management Information
Check the following image management information, the number of stored snapshot generations and the image file storage folder:
- Number of Stored Snapshot Generations
- Image File Storage Folder
The following command outputs the number of stored snapshot generations and the image file storage folder information to standard output. Redirect the output to a file and save it.
[Windows Manager]
> Installation_folder\SVROR\Manager\bin\rcxadm imagemgr info > file <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxadm imagemgr info > file <RETURN>
Parameter
file
Specify the name of the file for output.
For details on the rcxadm imagemgr command, refer to "5.9 rcxadm imagemgr" in the "Reference Guide (Command/XML) CE".
10.1.3 Online Backup of the Admin Server
This section describes online backup of the admin server.
When using online backup of the admin server, the PostgreSQL Point-In-Time Recovery (PITR) mechanism is used in the backup of the
metering database.
When using PITR backup and restoration, the following two resource types are backed up and restored:
- Base Backup
The entire database cluster (file group in which database data is recorded) is backed up.
- WAL File
Write-Ahead Log (WAL) files are files in which the contents of updates to the database are recorded.
Backup of the two preceding resource types is taken when the commands for the two corresponding backup methods (base backup and
differential backup) are executed.
Resources that are backed up and restored using PITR are stored in the directory that has been specified in the settings file, regardless of
the storage destination specified when the command was executed. Refer to "10.1.5 Online Backup Settings for Metering" for information
on how to set this.
Each of the backup methods for performing online backup of the admin server is described below.
Base Backup
Base backup is the backup of the entire database cluster (file group in which database data is recorded).
Execute the command shown below.
For details on the command, refer to "6.6 rcxmgrbackup" in the "Reference Guide (Command/XML) CE".
[Windows Manager]
>Installation_folder\SVROR\Manager\bin\rcxmgrbackup -dir directory -base [-cleanup] <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxmgrbackup -dir directory -base [-cleanup] <RETURN>
Note
The backup command may not end normally at times, so do not perform the following operations:
- Forced end using Ctrl+C during execution of the backup command
- Stopping the Manager during execution of the backup command
If the operations listed above have been performed, the following action will be required, depending on the status:
If the base backup does not end normally from the next execution onward, execute the command shown below.
When this command is executed, the base backup will end normally.
For details on the command, refer to "12.10 ctmg_resetbackuperror (Recover Base Backup Error)" in the "Reference Guide (Command/
XML) CE".
<Installation_folder>\RCXCTMG\bin\ctmg_resetbackuperror.bat
The processing result is output as standard output.
The contents and meaning of the processing result are shown in the table below.
Processing Result: Normal end
Return Value: 0
Message: Successfully reset the base-backup error.

Processing Result: Error
Return Value: Other than 0
Message: Failed to reset the base-backup error.
- If starting of the Manager fails
When operations continue after a while without the command mentioned above (ctmg_resetbackuperror) being executed, and then
the Manager is stopped, subsequent starts of the Manager may fail. If this happens, an error message will be output to the database
log files, as follows:
- Database Log Files
Installation_folder\RCXCTMG\Charging\log\psql-nn.log (*)
* Note: The "nn" part is a 2-digit numeral indicating the day on which the log was output.
- Error Message
Example: If the access control database failed to start
LOG: could not open file "pg_xlog/xxxxxxxx" (log file 0, segment xx): No such file or directory
(*)
LOG: invalid checkpoint record
PANIC: could not locate required checkpoint record
HINT: If you are not restoring from a backup, try removing the file
"C:/Fujitsu/ROR/RCXCTMG/Charging/pgsql/data/backup_label".
* Note: The "xxxxxxxx" and "xx" parts of the log are undefined.
In a case like this, delete the file shown below. When this file is deleted, start of the Manager will end normally.
Installation_folder\RCXCTMG\Charging\pgsql\data\backup_label
Differential Backup
With differential backup, the contents of updates to the database are output to multiple files in 16 MB-sized lots.
These files are called Write-Ahead Logging (WAL) files.
Usually, for each 16 MB written, the WAL file being written to is switched, and the WAL file for which writing has been completed is
saved to the "wal" directory under the backup directory.
Execute the rcxmgrbackup command.
Periodically executing the rcxmgrbackup command allows the contents of updates to the database that are recorded in saved WAL files
to be maintained for a certain period of time. For example, when the rcxmgrbackup command is executed every hour, the contents of
updates performed in the most recent one-hour period will be saved.
For details on the command, refer to "6.6 rcxmgrbackup" in the "Reference Guide (Command/XML) CE".
[Windows Manager]
>Installation_folder\SVROR\Manager\bin\rcxmgrbackup -dir directory [-cleanup] <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxmgrbackup -dir directory [-cleanup] <RETURN>
10.1.3.1 Items to be Determined Before Periodic Execution
The following items must be determined before periodic execution is implemented.
Items: Frequency and timing of base backup
What to Decide: Determine the frequency of base backup.
Example: 3:00 a.m. on the 1st of every month

Items: Frequency and timing of differential backup
What to Decide: Determine the frequency of differential backup.
Example: Once per hour

Items: Location of backup
What to Decide: Determine the location of the backup. Setting a disk other than the disk on which the product is installed is recommended. In addition, sufficient capacity is required to store the backup files.
Refer to "10.1.5 Online Backup Settings for Metering" for information on how to set the backup destination.
10.1.3.2 Settings for Periodic Execution of Backup
The following two settings are required in order to perform periodic execution of backup:
- Settings for periodic execution of base backup
- Setting for periodic execution of WAL file save
Note
- If this procedure has been used to start batch files using the Task Scheduler, a command prompt is displayed while the batch files are
being executed. Do not close this command prompt.
- When an error occurs at periodic execution of online backup, a message with the error code 67198 will be output to the log files. If
the occurrence of errors is being monitored, use software for monitoring the log files to monitor this error message. For details on the
error message, refer to "Message number 67198" in "Messages".
Settings for Periodic Execution of Base Backup
Specify settings so that a base backup is taken periodically.
Use Windows Task Scheduler as a mechanism for periodically executing commands. Refer to Windows Help for information on how to
set the Task Scheduler.
This section describes an example of a setup procedure for implementing a backup at 3:00 a.m. on the 1st of every month.
1. From the Windows [Start] menu, select [Administrative Tools]-[Task Scheduler] to start the Task Scheduler.
2. To manage tasks hierarchically, use the following procedure to create a folder:
a. In the Task Scheduler menu, after selecting Task Scheduler Library, select [Actions]-[New Folder], and then enter any folder
name in the dialog box that is displayed and click <OK>.
b. Selecting the created folder and then creating another folder by selecting [Actions]-[New Folder] from the Task Scheduler
menu allows a further hierarchical level to be added.
Point
Creating a folder in the Task Scheduler allows tasks to be managed hierarchically. In cases such as where multiple tasks are to be registered, creating a folder allows task management to be performed efficiently.
3. From the Task Scheduler menu, select [Actions]-[Create Basic Task] to display the Create Basic Task Wizard.
Point
When a subsequent operation is performed after any folder is selected, the task will be registered under that folder. If a folder is not
selected, the task will be registered in the Task Scheduler Library.
4. In the [Name] field, enter a task name (for example, "Monthly backup") and click <Next>.
5. Select "Monthly" as the task trigger and click <Next>.
6. In the Start field, set the date and time at which periodic backup is to start.
Example: Set a date of the "1st" of the following month and "3:00:00".
7. In the [Months] field, select the "Select all months" checkbox.
8. In the [Days] field, select the "1" checkbox.
9. Click <Next>.
10. Select "Start a program" as the task action and click <Next>.
11. Click <Browse> and in the [Program/script] field, set the batch files for base backup.
Example
C:\work\backupall.bat
@echo off
echo "Resource Manager Cloud Edition Resources backup Start"
call "{Installation_folder}\SVROR\Manager\bin\rcxmgrbackup" -directory {storage destination
folder} -base
echo "Resource Manager Cloud Edition Resources backup End"
- For details on the rcxmgrbackup command, refer to "6.6 rcxmgrbackup" in the "Reference Guide (Command/XML) CE".
12. In the [Add arguments (optional)] field, set a character string to be used for outputting the command output to log files.
Example: >>F:\backup\backupall.log 2>&1
13. Click <Next>.
14. Check the task settings, and if they are correct, click <Finish>.
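The preceding procedure uses the Windows Task Scheduler. On a Linux Manager, an equivalent periodic base backup could instead be scheduled with cron, for example as shown below. This is a minimal sketch: the backup destination /backup/ror and the log file path are assumed examples.
Example (crontab entry for 3:00 a.m. on the 1st of every month)
0 3 1 * * /opt/FJSVrcvmr/bin/rcxmgrbackup -dir /backup/ror -base >> /var/tmp/rcxmgrbackup.log 2>&1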
Settings for Periodic Execution of Differential Backup
Specify settings so that a differential backup is taken periodically.
This section describes an example of a setup procedure for saving a differential backup every hour.
Point
When using the Create Basic Task Wizard, execution for a time interval shorter than one day cannot be set, so once the task has been
registered for a different frequency, change the properties.
1. From the Windows [Start] menu, select [Administrative Tools]-[Task Scheduler] to start the Task Scheduler.
2. To manage tasks hierarchically, use the following procedure to create a folder:
a. From the Task Scheduler menu, after selecting Task Scheduler Library, select [Actions]-[New Folder], and then enter a folder
name in the dialog box that is displayed and click the <OK> button.
b. Selecting the created folder and then creating another folder by selecting [Actions]-[New Folder] from the Task Scheduler
menu allows a further hierarchical level to be added.
Point
Creating a folder in the Task Scheduler allows tasks to be managed hierarchically. In cases where multiple tasks are to be
registered, creating folders allows task management to be performed efficiently.
3. From the Task Scheduler menu, select [Actions]-[Create Basic Task] to display the Create Basic Task Wizard.
Point
When a subsequent operation is performed after any folder is selected, the task will be registered under that folder. If a folder is not
selected, the task will be registered in the Task Scheduler Library.
4. In the [Name] field, enter a task name (for example, "Periodic WAL switching") and click <Next>.
5. Select "Daily" as the task trigger and click <Next>.
6. In the Start field, set the date and time at which online backup (WAL) is to start.
Example: Set the date of the following day and "0:00:00".
7. In the [Recur every] field, set 1 day (set by default).
8. Click <Next>.
9. Select "Start a program" as the task action and click <Next>.
10. Click the <Browse> button and in the [Program/script] field, set the batch files for online backup (WAL).
Example
C:\work\hourlybackup.bat
@echo off
echo "Resource Manager Cloud Edition Resources backup Start"
call "{Installation_folder}\SVROR\Manager\bin\rcxmgrbackup" -dir {storage destination folder}
echo "Resource Manager Cloud Edition Resources backup End"
- For details on the rcxmgrbackup command, refer to "6.6 rcxmgrbackup" in the "Reference Guide (Command/XML) CE".
11. In the Add arguments (optional) field, set a character string to be used for outputting the command output to log files.
Example: >>F:\backup\backupall.log 2>&1
12. Click <Next>.
13. Check the task settings, and if they are correct, click <Finish>.
14. Select the registered task from the task list, and select [Action]-[Properties] to open the Properties dialog.
- 86 -
15. Open the [Triggers] tab, select the existing trigger, and click <Edit>.
16. Select the [Repeat task every:] checkbox in [Advanced settings], and select "1 hour" (set by default).
17. For [for a duration of:], select "1 day" (set by default).
18. Click <OK>.
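Point
As with the base backup, the hourly task can also be registered directly with the schtasks command, which avoids the wizard limitation on intervals shorter than one day described above. The task name and batch file path below reuse the example in this section and are only a sketch.

>schtasks /Create /TN "Periodic WAL switching" /TR "C:\work\hourlybackup.bat" /SC HOURLY /MO 1 /ST 00:00 <RETURN>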
10.1.4 Restoring the Admin Server
This chapter describes how to restore resources that have been backed up.
1. Stop the manager
2. Restore the resources of the Manager
3. Start the manager
4. Update the configuration information in the operational status information
10.1.4.1 Stopping the Manager
Execute the command shown below to stop the Manager.
For details on the command, refer to "5.19 rcxmgrctl" in the "Reference Guide (Command/XML) CE".
[Windows Manager]
>Installation_folder\SVROR\Manager\bin\rcxmgrctl stop <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxmgrctl stop <RETURN>
10.1.4.2 Restoring the Resources of This Product
Restore the resources of this product. Execute the command shown below.
For details on the command, refer to "6.17 rcxmgrrestore" in the "Reference Guide (Command/XML) CE".
[Windows Manager]
>Installation_folder\SVROR\Manager\bin\rcxmgrrestore -dir directory <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxmgrrestore -dir directory <RETURN>
When the management function for network device configuration files is used and the folder was saved according to the procedure in
"10.1.2.2 Back up the Resources of this Product", replace the contents of the following folder with the saved information.
[Windows Manager]
>Installation_folder\SVROR\Manager\var\netdevice\
[Linux Manager]
# /var/opt/FJSVrcvmr/netdevice/
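Example
The following is an example of copying the saved folder back. The saving destinations C:\backup\netdevice and /backup/netdevice are only examples; replace them with the destination actually used when the folder was saved.

[Windows Manager]

>xcopy C:\backup\netdevice\* Installation_folder\SVROR\Manager\var\netdevice\ /E /H /K /X <RETURN>

[Linux Manager]

# cp -pR /backup/netdevice/* /var/opt/FJSVrcvmr/netdevice/ <RETURN>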
Restoring Image Management Information
If the following image management information that was saved at backup has been changed, reset it:
- Number of stored snapshot generations
- 87 -
[Windows Manager]
In value, specify the number of stored generations that were saved at backup.
>Installation_folder\SVROR\Manager\bin\rcxadm imagemgr set -attr vm.snapshot.maxversion=value
<RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxadm imagemgr set -attr vm.snapshot.maxversion=value <RETURN>
For details on the rcxadm imagemgr command, refer to "5.9 rcxadm imagemgr" in the "Reference Guide (Command/XML) CE".
10.1.4.3 Starting the Manager
Execute the command shown below to start the Manager.
For details on the command, refer to "5.19 rcxmgrctl" in the "Reference Guide (Command/XML) CE".
[Windows Manager]
>Installation_folder\SVROR\Manager\bin\rcxmgrctl start <RETURN>
[Linux Manager]
# /opt/FJSVrcvmr/bin/rcxmgrctl start <RETURN>
10.1.4.4 Disabling L-Platform Applications
Disable an L-Platform application for which an application process no longer exists, if online backup of the admin server has been
performed.
For details on the command, refer to "12.11 recoverAllService (Disable L-Platform Application)" in the "Reference Guide (Command/
XML) CE".
[Windows Manager]
>Installation_folder\RCXCTMG\MyPortal\bin\recoverAllService.bat <RETURN>
[Linux Manager]
# /opt/FJSVctmyp/bin/recoverAllService.sh <RETURN>
10.1.4.5 Updating the configuration information in the operational status information
Execute the following command to update the configuration for the operational status information:
For details on the command, refer to "12.8 cmdbrefresh (Refresh Configuration Information of System Condition)" in the "Reference
Guide (Command/XML) CE".
[Windows Manager]
>Installation_folder\SWRBAM\CMDB\FJSVcmdbm\bin\cmdbrefresh.exe -a -q <RETURN>
[Linux Manager]
# /opt/FJSVcmdbm/bin/cmdbrefresh.sh -a -q <RETURN>
10.1.5 Online Backup Settings for Metering
Among the resources obtained in online backup of the admin server, the backup resources that use the PostgreSQL point-in-time recovery
(PITR) mechanism (metering resources) are stored in the directory specified in the settings file, regardless of the storage destination
specified with the command.
- 88 -
This section describes how to change the backup destination directory for the online backup of metering resources.
To change the backup destination directory, the files and items in the table shown below must be changed.
File to be Changed                          | File Name             | Item to be Changed
Operational settings file for the database  | postgresql.conf       | WAL save directory
Operational settings file for online backup | ctmgbackup.properties | Backup directory, WAL save directory
Stopping the Manager
Stop the manager.
Changing the Backup Destination Directory
This section describes how to change the backup destination directory, based on the example shown below.
[Windows Manager]

Directory Type     | Directory Path (Before Change)      | Directory Path (After Change)
Backup directory   | C:\Fujitsu\ROR\RCXCTMG\backup\data  | D:\basebackup
WAL save directory | C:\Fujitsu\ROR\RCXCTMG\backup\wal   | E:\walbackup

[Linux Manager]

Directory Type     | Directory Path (Before Change)  | Directory Path (After Change)
Backup directory   | /var/opt/FJSVctchg/backup/data  | /basebackup
WAL save directory | /var/opt/FJSVctchg/backup/wal   | /walbackup
1. Create a new backup directory.
[Windows Manager]
> D: <RETURN>
> cd \ <RETURN>
> mkdir basebackup\Charging <RETURN>
> E: <RETURN>
> cd \ <RETURN>
> mkdir walbackup\Charging <RETURN>
[Linux Manager]
# mkdir /basebackup <RETURN>
# mkdir /walbackup <RETURN>
- 89 -
2. Set access privileges for users connected with the database, for the directory.
[Windows Manager]
>cacls D:\basebackup\Charging /T /E /G rcxctdbchg:F <RETURN>
>cacls E:\walbackup\Charging /T /E /G rcxctdbchg:F <RETURN>
[Linux Manager]
# chown -R rcxctdbchg:rcxctdbchg /basebackup <RETURN>
# chown -R rcxctdbchg:rcxctdbchg /walbackup <RETURN>
3. Copy (move) files from the existing directory to the new directory.
[Windows Manager]
>xcopy c:\Fujitsu\ROR\RCXCTMG\backup\data\* D:\basebackup\ /E /H /K /X <RETURN>
>xcopy c:\Fujitsu\ROR\RCXCTMG\backup\wal\* E:\walbackup\ /E /H /K /X <RETURN>
[Linux Manager]
# cp -pR /var/opt/FJSVctchg/backup/data/* /basebackup/. <RETURN>
# cp -pR /var/opt/FJSVctchg/backup/wal/* /walbackup/. <RETURN>
4. Modify the operational settings file for the database.
Change the settings for the following operational settings file for each database cluster:
[Windows Manager]
Installation_folder\RCXCTMG\Charging\pgsql\data\postgresql.conf
Change as follows:
- Setting before Change

archive_command = 'copy "%p" "C:\\Fujitsu\\ROR\\RCXCTMG\\backup\\wal\\Charging\\%f"'   # command to use to archive a logfile segment

- Setting after Change

archive_command = 'copy "%p" "E:\\walbackup\\Charging\\%f"'   # command to use to archive a logfile segment
Point
Use "\\" as a delimiter.
[Linux Manager]
/var/opt/FJSVctchg/pgsql/data/postgresql.conf
Change as follows:

- Setting before Change

archive_command = 'cp "%p" "/var/opt/FJSVctchg/backup/wal/%f"'   # command to use to archive a logfile segment

- Setting after Change

archive_command = 'cp "%p" "/walbackup/%f"'   # command to use to archive a logfile segment
- 90 -
Point
Use "/" as a delimiter.
5. Modify the operational settings file for the online backup.
Open the following file:
[Windows Manager]
Installation_folder\RCXCTMG\bin\conf\ctmgbackup.properties
Change as follows:
- Setting before Change
BASE_BACKUP_DIR=C:/Fujitsu/ROR/RCXCTMG/backup/data
WAL_ARCHIVE_DIR=C:/Fujitsu/ROR/RCXCTMG/backup/wal
- Setting after Change
BASE_BACKUP_DIR=D:/basebackup
WAL_ARCHIVE_DIR=E:/walbackup
Point
Use "/" as a delimiter.
[Linux Manager]
/opt/FJSVctmg/bin/conf/ctmgbackup.properties
Change as follows:
- Setting before Change
BASE_BACKUP_DIR=/var/opt/FJSVctchg/backup/data
WAL_ARCHIVE_DIR=/var/opt/FJSVctchg/backup/wal
- Setting after Change
BASE_BACKUP_DIR=/basebackup
WAL_ARCHIVE_DIR=/walbackup
Point
Use "/" as a delimiter.
Starting the Manager
Start the manager.
10.2 Backup and Restoration of Network Devices
This section explains how to back up and restore network devices.
- 91 -
10.2.1 Mechanism of Backup and Restoration
By backing up network device configuration files, restoration of network device configurations can be done quickly when network devices
are replaced due to network failures.
Network device configuration files can be backed up and saved for up to 5 generations. If the number of saved generations exceeds 5,
older generations are deleted, starting with the oldest.
Only 1 generation of the network device environment file can be backed up and saved. When a new backup is performed, the previously
saved network device environment file is deleted automatically.
This product supports the backup and restoration of the following network devices configuration files.
Table 10.9 Network Devices that are supported by Device Configuration File Management

Fujitsu
- SR-X series
  Configuration files (config1/config2): running-config / startup-config
  Environment file: None
- IPCOM EX series (*)
  Configuration files (config1/config2): running-config.cli / startup-config.cli
  Environment file: ipcomenv-host_name-firmware_version_number-time.tgz
    host_name: Target IPCOM EX series host name
    firmware_version_number: current firmware version number
    time: The time when backup was performed
    Example: ipcomenv-ipcom-E20L10NF0001B01-20120802-103845.tgz
- NS Appliance
  Configuration files (config1/config2): running-config.cli / startup-config.cli
  Environment file: nsappliance-IP_address.tgz
    Example: nsappliance-192.168.1.1.tgz

Cisco
- Catalyst series (*)
  Configuration files (config1/config2): running-config / startup-config
  Environment file: None
- ASA5500 series (*)
  Configuration files (config1/config2): running-config / startup-config
  Environment file: None
- Nexus 5000 series
  Configuration files (config1/config2): running-config / startup-config
  Environment file: None

F5 Networks
- BIG-IP Local Traffic Manager series
  Configuration file (config1): config.scf
  Environment file: environment.ucs
*: For details on the target network devices, refer to "Table 2.69 Supported Network Devices" in the "Design Guide CE".
10.2.2 Backup of Network Devices
Whether or not to back up the network device file is determined by the "CONFIG_BACKUP" parameter in the unm_mon.rcxprop file.
Backup is performed in the following cases:
- When performing auto-configuration of network devices
- When executing the rcxadm netdevice cfbackup command specifying the network device
Point
To keep the latest network device configuration file under management, be sure to back up the network device configuration file using
the rcxadm netdevice cfbackup command after performing the following operation.
- When the network device configuration file has been updated by logging in to the network device directly, for example to modify
the management information or to update the server certificate
- 92 -
Backups are stored in the network device file storage area of the ROR management server.
The backup method used differs according to the specifications of the network device.
- When the network device has an FTP server function
The backup is obtained by connecting directly to the specified device over the network.
- When the network device does not have an FTP server function
The backup is obtained via the external FTP server defined in advance using the rcxadm netconfig command.
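Example
The following is a sketch of backing up the files of a registered network device on demand. The device name "Firewall1" is only an example, and the -name option follows the pattern of the other rcxadm netdevice subcommands; confirm the exact syntax in "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".

[Windows Manager]

>Installation_folder\SVROR\Manager\bin\rcxadm netdevice cfbackup -name Firewall1 <RETURN>

[Linux Manager]

# /opt/FJSVrcvmr/bin/rcxadm netdevice cfbackup -name Firewall1 <RETURN>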
See
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
- For details on the rcxadm netconfig command, refer to "3.7 rcxadm netconfig" in the "Reference Guide (Command/XML) CE".
- For details on the unm_mon.rcxprop file, refer to "9.4.8.3 Network Device Management Function Definition File" in the "Design
Guide CE".
10.2.3 Restoration of Network Devices
Restoration is performed in the following case.
- When executing the rcxadm netdevice cfrestore command
This section describes the procedure for restoring backed up network device files.
When the backed up network device file is either the "network device configuration file" or the "network device
environment file"
Restore the backed up network device file using the rcxadm netdevice cfrestore command.
When both the "network device configuration file" and the "network device environment file" have been backed
up
1. Check which backup is more recent by displaying the backup dates of the latest "network device configuration file" and "network
device environment file" using the rcxadm netdevice cflist command.
2. Perform restoration using the procedure corresponding to the confirmed backup dates.
- When the backup date of the latest "network device configuration file" is more recent
a. Restore the "network device environment file" using the rcxadm netdevice cfrestore command.
b. Restore the "network device configuration file" using the rcxadm netdevice cfrestore command.
- When the backup date of the "network device environment file" is more recent
a. Restore only the "network device environment file" using the rcxadm netdevice cfrestore command.
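Example
The following is a sketch of the procedure above. The device name "Firewall1" is only an example, and the options for selecting the network device and the file type to restore should be confirmed in "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".

[Windows Manager]

>Installation_folder\SVROR\Manager\bin\rcxadm netdevice cflist -name Firewall1 <RETURN>
>Installation_folder\SVROR\Manager\bin\rcxadm netdevice cfrestore -name Firewall1 <RETURN>

[Linux Manager]

# /opt/FJSVrcvmr/bin/rcxadm netdevice cflist -name Firewall1 <RETURN>
# /opt/FJSVrcvmr/bin/rcxadm netdevice cfrestore -name Firewall1 <RETURN>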
See
For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
- 93 -
Part 4 Monitoring
Chapter 11 Monitoring Resources..................................................................................................................95
Chapter 12 Collecting Power Consumption Data and Displaying Graphs....................................................109
Chapter 13 Monitoring Resource Pools (Dashboard)...................................................................................111
Chapter 14 Monitoring L-Platforms...............................................................................................................112
Chapter 15 Accounting.................................................................................................................................113
Chapter 16 Monitoring Logs.........................................................................................................................135
- 94 -
Chapter 11 Monitoring Resources
This chapter explains how to monitor the configuration and status of managed resources.
11.1 Overview
Resource Orchestrator can centrally monitor the configuration and status of servers or other managed resources directly from the ROR
console. This enables the identification of resources experiencing problems, which reduces the time spent on system maintenance.
Moreover, Resource Orchestrator can easily launch external management software to precisely locate faulty parts within a managed
resource.
Monitoring is based on the following three components:
- Resources
Resource Orchestrator can centrally monitor the configuration and status of servers and other managed resources (Chassis, LAN
switches, LAN switch blades, network devices, physical OSs, VM hosts and guests, power monitoring devices, etc.) directly from the
ROR console.
When a hardware problem occurs on a server, affected guest operating systems can be easily detected.
Note
Power monitoring devices are not subject to monitoring.
- Events
Resource Orchestrator displays events such as hardware failures, server switchovers triggered by hardware failures, and the results of
every performed operation.
- Recent Operations
Resource Orchestrator displays the progress status of the various operations performed on resources.
The following table shows the level of monitoring performed for each resource monitored in Resource Orchestrator.
Table 11.1 Monitoring Level for Each Resource Type

Resource                 | Status Monitoring | Event Monitoring
Chassis                  | Yes               | Yes
Server                   | Yes               | Yes
Physical OS              | Yes               | No
VM Hosts                 | Yes               | No
VM Guest                 | Yes               | No
VM management software   | Yes               | No
LAN switch blade         | Yes               | Yes
Network device           | Yes               | No
Power monitoring device  | No                | No
Yes: Supported
No: Not supported
Regular Update of Resource Data
The Resource Orchestrator manager regularly updates resource data with information gathered from the following resources.
- 95 -
Table 11.2 List of Regularly Updated Resources and Their Related Resources

Resources Subject to Regular Update | Related Resources                                      | Data Source
Chassis                             | Chassis                                                | Server management unit
Server                              | Physical OS, VM host (*1), VM guest (*1)               | ServerView Agents (*2), Server management unit, Server virtualization software
LAN switch blade                    | Server, LAN switch blade                               | Server management unit (*3), LAN switch blade
Network device                      | L2 switch, Firewall, Server load balancer              | L2 switch, Firewall, Server load balancer
VM management software              | VM management software, VM host (*1), VM guest (*1)    | VM management software
*1: When no VM management software is registered, the status of VM hosts and VM guests is updated during a regular update of their
physical server. When VM management software is registered, their status is updated during the regular update of the VM management
software.
*2: Only for PRIMERGY servers and PRIMEQUEST.
*3: Only for LAN switch blades mounted in a PRIMERGY BX chassis.
The time required to update all resources depends on the number of registered resources. For 1 chassis that contains 10 servers and 4 LAN
switches, the update takes about 2 minutes. For 5 chassis that have identical configurations, the update should take about 10 minutes.
VM management software updates are independent from other resource updates, and take approximately 3 minutes.
In the following cases, resource data is refreshed without waiting for the regular update.
- When a resource's state is changed as the result of an operation performed by Resource Orchestrator
- When a failure-triggered SNMP Trap is received from a resource
If a resource was operated externally to Resource Orchestrator, there may be a slight delay before its state is updated in the ROR console.
To force an update of a resource's data, right-click the resource and select [Update] from the displayed menu. The time required to update
resource data depends on the device. Generally, update should take no more than 40 seconds.
In order to restrain device and network load, resource data is not refreshed for 7 seconds following the last update time. However, when
a failure-triggered SNMP Trap is received, resource data is refreshed unconditionally. When manually updating a resource from the menu
right after performing an operation on that resource, if its data is not refreshed within 40 seconds, try updating it from the menu again.
11.2 Resource Status
Resources are monitored in the [Status] tab of the ROR console.
The [Status] tab shows the number of servers listed under the statuses "warning", "unknown", "error", or "fatal".
Servers whose status is "warning" or "unknown" are counted under "Warning", and servers whose status is "fatal" or "error" are counted
under "Error".
Clicking on "Error" or "Warning", displays the resources under the corresponding status in the [Resource List] tab.
The status of resources can also be monitored from both the resource tree and the [Resource List] tab. When an error occurs, a status icon
is added to the icon of the resource concerned.
Double-clicking on a resource icon displays the [Resource Details] tab, which provides detailed information about the corresponding
resource.
Icons Displayed in the ROR Console
The following table shows the resource icons used in the ROR console and their associated meanings.
- 96 -
Table 11.3 Resource Icons
Icon
Meaning
Server resource
Chassis
Server
Physical OS
VM host
VM guest
LAN switch blade
L2 switch
Firewall
Server load balancer
Integrated network device
Power monitoring device (*)
PDU (*)
UPS (*)
Management software
* Note: Power monitoring devices (PDU or UPS) are not subject to monitoring.
The following table shows the status icons used in Resource Orchestrator and their associated meanings. It also shows which status icons
require corrective actions.
Table 11.4 Status Icons

Status             | Meaning                                                                        | Corrective Action
normal (Normal)    | -                                                                              | No action is necessary
warning (Warning)  | An error has occurred but the resource can still be used. (*1)                 | Action must be taken
unknown (Unknown)  | The status of the resource cannot be obtained. (*2, *3)                        | Action must be taken
stop (Stop)        | The resource has stopped and cannot be used.                                   | No action is necessary
error (Error)      | An error whose cause is unknown has occurred and the resource cannot be used. | Action must be taken
fatal (Fault)      | A fault has occurred in the resource and the resource cannot be used.          | Action must be taken

No status icon is displayed for resources in the "normal" status.
*1: When a LAN switch is in "warning" status, it may mean that the LAN switch has been replaced with another model.
To use the LAN switch as it is, first delete the registered LAN switch, and then register it again.
*2: When a VM guest is in "unknown" status, check the operation status of the VM host on which the VM guest is running.
*3: When a LAN switch is in "unknown" status, check the physical connection between the LAN switch and admin LAN as well as whether
or not the LAN switch is responding to commands.
- 97 -
Note
- On the SPARC Enterprise T series, as statuses cannot be obtained from ILOM using SNMP, only "normal", "stop" or "unknown"
statuses are shown, while "warning", "error", and "fatal" statuses cannot be detected. If an SNMP Trap indicating an error is displayed,
check the status of the server on ILOM.
- For other servers, hardware statuses cannot be obtained from server management software (ServerView). Therefore, only "normal",
"stop" or "unknown" statuses are shown, while "warning", "error", and "fatal" statuses cannot be detected.
- For PRIMEQUEST, all partitions within the same chassis may temporarily become "unknown" depending on the timing of change
in power control status of the partition.
Table 11.5 Physical Server Icons
Icon
Meaning
Spare server
Maintenance mode server
Table 11.6 OS Icons
Icon
Meaning
Windows OS
Linux OS
Solaris OS
VMware host OS
Hyper-V host OS
Citrix XenServer host OS
Linux Xen host OS
KVM host OS
Solaris Container host OS
Information
- For server virtualization software, the following information is also displayed.
- VM management software
VM management software statuses can be one of the following: "normal" or "unknown".
If "unknown" is shown, check whether the VM management software is operating properly.
- VM host
The status of a VM host is displayed in the same way as for a physical OS.
- VM guest
Errors detected from server virtualization software are reflected in VM guest statuses.
VM guest statuses can be one of the following: "normal", "warning", "error", "unknown", or "stop".
For details, refer to "D.3 Functional Differences between Products" in the "Design Guide VE".
- For LAN switches, "error" and "fatal" are not displayed.
Only "warning", "normal", or "unknown" are displayed.
- 98 -
11.3 Addressing Resource Failures
This section explains how to address problems like hardware failures that occur in a system.
Basic Procedure
The following procedure is used to confirm and resolve problems using the ROR console:
1. Confirm the Existence of a Problem
For the confirmation method, refer to "11.2 Resource Status" and "A.3 Status Panel" in the "User's Guide for Infrastructure
Administrators (Resource Management) CE".
2. Check the Event Log
Use the event log to check the device where the error occurred and the content of the event.
In some cases, a single problem can cause a series of events, so search back through past events to find events with dates that are
close together.
3. Check the Status of Resources
From the resource tree, open the resource where the problem occurred and look for any affected chassis, physical servers, LAN
switches, physical OS's, VM hosts, or VM guests.
If Auto-Recovery has been enabled for a physical OS or VM host, it will be automatically switched over with a spare server. If
Auto-Recovery has not been enabled, server switchover can still be performed manually as long as a spare server has been designated.
For server switchover, refer to "4.2 Switchover" in the "Operation Guide VE".
4. Perform Detailed Investigation and Recovery
From the [Resource Details] tab of the failed resource, launch the external management software to investigate the precise cause of
the problem.
When no management software is available, confirm with the maintenance staff of the failed resource to investigate the problem.
Once this is done, perform the necessary maintenance work on any faulty hardware identified.
If a server hardware failure requires replacing a managed server, carry out the replacement operation as described in "9.3.2 Replacing
Servers".
5. Perform Post-recovery Verification
Following recovery, confirm that there are no more icons indicating problems on the ROR console.
11.4 Monitoring Networks
This section explains how to detect status changes of the network devices to which automatic network settings are applied.
The network monitoring of this product performs alive monitoring, state monitoring, and SNMP trap monitoring of network devices. For
error monitoring such as port faults on network devices, use network management software.
An infrastructure administrator becomes aware of a status change of a network device in one of the following ways.
- An error report from another administrator or user
- Detection of a status change when checking the system using the ROR console
  - Event logs showing port status changes of network devices are output.
    If a port status change on a registered network device is detected, message number 22784 is output.
    If the port status is "down" or "unknown" and the change was not made by the infrastructure administrator, it indicates a problem
    with the network device. Check the network device.
  - Status icons of resources in the network device tree change to a status other than normal.
- Detection of a status change by network management software
The infrastructure administrator checks the status of the network device and takes corrective action if the problem can be solved.
If the problem cannot be solved by the infrastructure administrator, the following information is sent to the network device administrator
with a request to solve the problem.
- 99 -
- The name of the network device on which the status change occurred
- The status of the network device
- The phenomenon observed when the confirmation request was received from another administrator or user
11.4.1 Identification of Error Locations
This section explains how to identify the network device on which an error has occurred.
11.4.1.1 When Notified of an Error by a Tenant Administrator or Tenant User
Use the following procedure to determine the location of the problem.
1. Confirm the network device status using the ROR console.
- Check whether event logs showing port status changes of network devices have been output.
If a port status change on a registered network device is detected, message number 22784 is output.
If the port status is "down" or "unknown" and the change was not made by the infrastructure administrator, it may indicate
a problem with the network device.
- Check whether the status icons of resources in the network device tree have changed to a status other than normal.
2. When there is a network device whose status has changed, check whether the device is a network device used by the L-Platform
for which the notification was received.
When the status change has occurred on a network device that is not used by the L-Platform for which the notification was received,
refer to "11.4.1.2 When Changing State is Detected during Status Confirmation Using the ROR Console".
- When the network device on which changing state has occurred is a firewall or a server load balancer
a. In the orchestration tree, select the L-Platform providing services for which notification has been received from other
administrator or user that an error has occurred.
b. Select the [Resource List] tab of the Main Panel.
c. Confirm [Use Resource] of [Firewall List].
When the status of a used resource is something other than "(normal)", and the name is the same as the network device
confirmed in 1. on which changing state has occurred, the error will be identified as having occurred on the network
device (firewall).
Additionally, confirm the status of the specified firewall.
For details on how to confirm the status of the firewall, refer to "11.4.2 Firewall Status Confirmation".
d. Confirm [Use Resource] of the [Server Load Balancer List].
When the status of a used resource is something other than "(normal)", and the name is the same as the network device
confirmed in 1. on which changing state has occurred, the error will be identified as having occurred on the network
device (server load balancer).
Additionally, confirm the status of the specified server load balancer.
For details on how to confirm the status of the server load balancer, refer to "11.4.3 Server Load Balancer Status Confirmation".
- When the network device on which changing state occurs is something other than a firewall and a server load balancer
a. Confirm the name of the network resource.
The name of the network resource can be confirmed in the items of the displayed results of the rcxadm netdevice show
command (AllocatedResources[Network]). Specify the name of the network device confirmed in 1. for the name of the
network device name for the name option.
b. Select the L-Platform from the orchestration tree.
c. Select the L-Server under the L-Platform.
- 100 -
d. Select the [Resource Details] tab of the Main Panel.
e. Confirm [Network Resource] of [Network Information]. When the network device name is the same as the name
confirmed in a., the error on the network device (L2 switch) can be identified on the specified network device that was
confirmed in a.
Additionally, confirm the status of the specified L2 switch.
For details on how to confirm the L2 switch status, refer to "11.4.4 L2 Switch Status Confirmation".
When there is no network device on which changing state has occurred, one of the following errors may have occurred.
- The wrong access control rules have been applied to the firewall, and passing of communication packet data is being rejected
Use the following procedure to confirm the status of the firewall.
a. Identify the firewall used for the notified L-Platform.
For details on how to specify the firewall, refer to "When the network device on which changing state has occurred is a
firewall or a server load balancer ".
b. Confirm the status of the firewall.
For details on how to confirm the status of the firewall, refer to "11.4.2 Firewall Status Confirmation".
- An error occurred in a network device that is managed using Resource Orchestrator
Request investigation from the network device administrator from the following point of view.
- If an error has occurred on a network device connecting to the network device used for the L-Platform.
- If an error has occurred on a network device on the communication route to the L-Platform
The network device administrator must confirm the network device status, using the following procedure.
a. Log in directly to a network device other than the L-Platform.
b. Confirm the status of the network device.
See
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
- For details on the operations (such as status confirmation) for network devices, refer to the manuals of network devices.
11.4.1.2 When Changing State is Detected during Status Confirmation Using the ROR
Console
When changing state is detected during status confirmation using the ROR console, it can be determined that an error might have occurred
on the network device which has indicated changing state (a firewall, a server load balancer or an L2 switch).
- When the network device on which changing state has occurred is a firewall
Confirm the status of the firewall.
For details on how to confirm the status of the firewall, refer to "11.4.2 Firewall Status Confirmation".
- When the network device on which changing state has occurred is a server load balancer
Confirm the status of the server load balancer.
For details on how to confirm the status of the server load balancer, refer to "11.4.3 Server Load Balancer Status Confirmation".
- When the network device on which changing state has occurred is an L2 switch
Check the status of the L2 switch.
For details on how to confirm the L2 switch status, refer to "11.4.4 L2 Switch Status Confirmation".
- 101 -
See
For details on the operations (such as status confirmation) for network devices, refer to the manuals of network devices.
11.4.2 Firewall Status Confirmation
This section explains the confirmation procedure of firewall status.
11.4.2.1 When an L-Platform Using a Firewall is Identified
Use the following procedure to confirm the status of the firewall.
1. In the orchestration tree, select the network device of a firewall under the L-Platform.
2. Select the [Resource Details] tab, and click the link of [Preserved resource] of [Network Device] of [Basic Information of Network
Device].
The [Resource Details] tab of the network device is displayed.
3. Confirm the displayed detailed information.
When the target network device is in a redundant configuration, confirm the statuses of both the active device and the standby device.
- When there is a link of [Launch Network Device Web UI] in [Hardware Details]
a. Click the link and start the firewall management screen.
b. From the management window that starts, confirm the event log, the statuses (interface, system condition, and operation
status), and whether communication packets can pass, and check for errors detected by the firewall.
- When there is no link of [Launch Network Device Web UI] in [Hardware Details]
Confirm the following information displayed in the Main Panel.
Basic Information - Device Status
The status of the firewall is displayed.
When the status is something other than "normal", it indicates that an error might have occurred.
Port Information - Link Status
The port status of the firewall is displayed.
When the status is something other than "up" that is not intended by infrastructure administrator, it indicates that a port error
might have occurred.
Additionally, confirm status(system condition and operation status) and whether communication packet can pass or not by
logging the firewall directly, and check the error detected by the firewall.
4. Confirm the status of the firewall.
- When passing of communication packets is rejected by a firewall or an event log is output
The infrastructure administrator must confirm if the following items using auto-configuration are correct.
- Scripts for configurations
- Parameter files
- Configuration files for interfaces
- When it is possible that the hardware has failed, that is, when the firewall device status is "unknown" or the link status is
"down" and this is not intended by the infrastructure administrator
The infrastructure administrator must request confirmation of the status from the administrator of the network device, to check
whether the firewall hardware has failed. The network device administrator should request a hardware maintenance person to
take corrective action when the hardware has failed.
- 102 -
5. Take corrective action based on the results of checked scripts and files.
- When there are no errors in the scripts or files checked in 4.
Request confirmation from a tenant administrator or tenant user that there are no errors in the parameters taken over during the
L-Platform update.
- When there are errors in the scripts or files checked in 4.
The infrastructure administrator will log in to the firewall directly, delete the failed configuration (such as rejection of
communication packets), and modify error scripts or files.
6. Take corrective action based on the results of parameter checks.
- When there are no errors in the parameters taken over during the L-Platform update
Confirm with the administrator of the network device that the firewall configuration has not been modified, since an unexpected
definition modification may have been made.
- When there are errors in the parameters taken over during the L-Platform update
The infrastructure administrator will log in to the firewall directly and delete the failed configuration (such as rejection of
communication packets).
7. Take corrective action based on the check results if definitions have been modified.
- When the network device administrator has not modified the configuration
Extract the firewall definitions and check the content. When inappropriate settings have been configured, log in to the firewall
directly, and modify the definitions.
- When a network device administrator has modified the configuration
Check if the configuration modification is necessary.
- When the configuration modification is not necessary
The infrastructure administrator must log in to the firewall directly, and delete or modify the problem-causing configuration
(such as rejection of communication packets).
- When configuration modifications were necessary based on the system operation policy
Review if the details of scripts, parameter files, and interface configuration files are based on the operation policy.
11.4.2.2 When a Firewall Changing State is Detected during Status Confirmation Using
the ROR Console
Use the following procedure to confirm the status of the firewall.
1. Select the network device of firewall on which changing state occurs from the network device tree.
2. Select the [Resource Details] tab.
3. Confirm the status of the firewall.
4. Identify the L-Platform in use.
a. Confirm the name of the firewall allocated using auto-configuration by checking the items in displayed results of the rcxadm
netdevice show command (AllocatedResources[Firewall]).
b. Confirm the name of the L-Platform using the firewall by checking the items in displayed results of the rcxadm firewall show
command (L-Platform Name). Specify the firewall name confirmed in a. as the firewall name to be specified for the name
option.
5. For the subsequent confirmation procedure, refer to the operations after step 3 of "11.4.2.1 When an L-Platform Using a Firewall is Identified".
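Example
The commands used in step 4 can be sketched as follows. The network device name "Firewall1" and the firewall resource name "firewall1" are only examples; confirm the exact options in the reference sections listed below.

[Windows Manager]

>Installation_folder\SVROR\Manager\bin\rcxadm netdevice show -name Firewall1 <RETURN>
>Installation_folder\SVROR\Manager\bin\rcxadm firewall show -name firewall1 <RETURN>

[Linux Manager]

# /opt/FJSVrcvmr/bin/rcxadm netdevice show -name Firewall1 <RETURN>
# /opt/FJSVrcvmr/bin/rcxadm firewall show -name firewall1 <RETURN>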
- 103 -
See
- For details on firewall operations (login, status confirmation, definition extraction, definition modification), refer to the manuals of
firewalls.
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
- For details on the rcxadm firewall command, refer to "3.4 rcxadm firewall" in the "Reference Guide (Command/XML) CE".
11.4.3 Server Load Balancer Status Confirmation
This section explains the confirmation procedure of server load balancer status.
11.4.3.1 When an L-Platform Using a Server Load Balancer is Identified
Use the following procedure to confirm the status of server load balancers.
1. In the orchestration tree, select the network device of a server load balancer under the L-Platform.
2. Select the [Resource Details] tab, and click the link for [Preserved resource] of [Network Device] of [Basic Information of Network
Device].
The [Resource Details] tab of the network device is displayed.
3. Confirm the displayed detailed information.
When the target network device is in a redundant configuration, confirm the statuses of both the active device and the standby device.
- When there is a link for [Launch Network Device Web UI] in [Hardware Details]
a. Click the link and start the server load balancer management screen.
b. From the management window that starts, confirm the event log and the statuses (interface, system condition, and operation
status), and check for errors detected by the server load balancer.
- When there is no link for [Launch Network Device Web UI] in [Hardware Details]
Confirm the following information displayed in the Main Panel.
Basic Information - Device Status
The status of the server load balancer is displayed.
When the status is something other than "normal" that is not intended by infrastructure administrator, it indicates that an
error might have occurred.
Port Information - Link Status
The port status of the server load balancer is displayed.
When the status is something other than "up" that is not intended by infrastructure administrator, it indicates that a port error
might have occurred.
Additionally, confirm status(system condition and operation status) by logging the server load balancer directly, and check the
error detected by the server load balancer.
4. Confirm the status of the server load balancer.
- When server load balancing is not performed as expected or an event log is output
The infrastructure administrator must confirm if the following items using auto-configuration are correct.
- Scripts for configurations
- Parameter files
- Configuration files for interfaces
- 104 -
Further, confirm that the server to be load balanced is operating normally because the problem may be due to an error in the
server.
- When it is possible that the hardware has failed, that is, when the server load balancer device status is "unknown" or the link
status is "down" and this is not intended by the infrastructure administrator
The infrastructure administrator should request confirmation of the status from the administrator of the network device, to check
whether the server load balancer hardware has failed. The network device administrator should request a hardware maintenance
person to take corrective action when the hardware has failed.
5. Take corrective action based on the results of checked scripts and files.
- When there are no errors in the scripts or files checked in 4.
Request confirmation from a tenant administrator or tenant user that there are no errors in the parameters taken over during the
L-Platform update.
- When there are errors in the scripts or files checked in 4.
The infrastructure administrator should log in to the server load balancer directly, delete the failed configuration (such as load
balancing rules), and modify any scripts or files containing errors.
6. Take corrective action based on the results of parameter checks.
- When there are no errors in the parameters taken over during the L-Platform update
Confirm with the administrator of the network device that the server load balancer configuration has not been modified, since
an unexpected modification may have been made to the definition.
- When there are errors in the parameters taken over during the L-Platform update
The infrastructure administrator should log in to the server load balancer directly, and delete the failed configuration (such as
load balancing rules).
7. Take corrective action based on the check results if definitions have been modified.
- When the network device administrator has not modified the configuration
Extract the server load balancer definitions and check the content. When inappropriate settings have been configured, log in to
the server load balancer directly, and modify the definitions.
- When a network device administrator has modified the configuration
Check if the configuration modification is necessary.
- When the configuration modification is not necessary
The infrastructure administrator should log in to the server load balancer directly, and delete or modify the problem-causing
configuration (such as load balancing rules).
- When configuration modifications were necessary based on the system operation policy
Review if the details of scripts, parameter files, and interface configuration files are based on the operation policy.
11.4.3.2 When a Changing State of Server Load Balancer is Detected during Status
Confirmation Using the ROR Console
Use the following procedure to confirm the status of server load balancers.
1. Select the network device of the server load balancer on which an error has occurred from the network device tree.
2. Select the [Resource Details] tab.
3. Confirm the status of the server load balancer.
4. Identify the L-Platform in use.
a. Confirm the name of the server load balancer allocated using auto-configuration by checking the items displayed in the results
for the rcxadm netdevice show command (AllocatedResources[SLB]).
- 105 -
b. Confirm the name of the L-Platform using the server load balancer by checking the items in results displayed for the rcxadm
slb show command (L-Platform Name). Specify the server load balancer name confirmed in a. as the server load balancer
name to be specified for the name option.
5. For the subsequent confirmation procedure, refer to the operations after step 3 of "11.4.3.1 When an L-Platform Using a Server Load Balancer is Identified".
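Example
As in "11.4.2.2", the commands used in step 4 can be sketched as follows. The network device name "SLB1" and the server load balancer resource name "slb1" are only examples; confirm the exact options in the reference sections listed below.

[Windows Manager]

>Installation_folder\SVROR\Manager\bin\rcxadm netdevice show -name SLB1 <RETURN>
>Installation_folder\SVROR\Manager\bin\rcxadm slb show -name slb1 <RETURN>

[Linux Manager]

# /opt/FJSVrcvmr/bin/rcxadm netdevice show -name SLB1 <RETURN>
# /opt/FJSVrcvmr/bin/rcxadm slb show -name slb1 <RETURN>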
See
- For details on server load balancer operations (login, status confirmation, definition extraction, definition modification, etc.), refer to
the manuals of server load balancers.
- For details on the rcxadm netdevice command, refer to "3.8 rcxadm netdevice" in the "Reference Guide (Command/XML) CE".
- For details on the rcxadm slb command, refer to "3.12 rcxadm slb" in the "Reference Guide (Command/XML) CE".
11.4.4 L2 Switch Status Confirmation
This section explains the confirmation procedure of L2 switch status.
Use the following procedure to confirm the status of the L2 switch.
1. Select the network device on which the status icon has changed from normal to another using the network device tree.
2. Select the [Resource Details] tab.
3. Confirm the displayed detailed information.
- When there is a link of [Launch Network Device Web UI] in [Hardware Details]
a. Click the link and start the L2 switch management screen.
b. Confirm event logs or status from the started management window, and check the error detected by the L2 switch.
- When there is no link of [Launch Network Device Web UI] in [Hardware Details]
Confirm the following information displayed in the Main Panel.
Basic Information - Device Status
The status of the L2 switch is displayed.
When the status is something other than "normal", it indicates that an error might have occurred.
Port Information - Link Status
The status of the port of the L2 switch is displayed.
When the status is something other than "up", it indicates that a port error might have occurred.
4. Take corrective action based on the results of L2 switch status checks.
- When an error is detected on an L2 switch or an event log is output
The infrastructure administrator must confirm if the specifications of configuration scripts or parameter files used in auto-
configuration are correct.
- When it is possible that the hardware has failed, that is, when the L2 switch device status is "unknown" or the link status is
"down" and this is not intended by the infrastructure administrator
The infrastructure administrator must request confirmation of the status of the L2 switch from the network device administrator.
The network device administrator should request a hardware maintenance person to take corrective action when hardware has
failed.
5. Take corrective action based on the results of checked scripts and files.
- When there are no errors in the scripts or settings in files checked in 4.
Confirm with the administrator of the network device that the L2 switch configuration has not been modified, since an unexpected
definition modification may have been made.
- 106 -
- When there are errors in the scripts or settings in files checked in 4.
The infrastructure administrator will log in to the L2 switch directly, delete the failed configurations, and modify error scripts
or files.
6. Take corrective action based on the check results if definitions have been modified.
- When the network device administrator has not modified the configuration
Extract the L2 switch definitions and check the content. When inappropriate settings have been configured, log in to the L2
switch directly, and modify the definitions.
- When a network device administrator has modified the configuration
Check that the configuration modification is necessary.
- When the configuration modification is not necessary
The infrastructure administrator must log in to the L2 switch directly, and delete or modify the problem-causing
configuration.
- When configuration modifications were necessary based on the system operation policy
Review if the details of scripts and parameter files are based on the operation policy.
See
For details on L2 switch operations (login, status confirmation, definition extraction, definition modification), refer to the manuals of L2
switches.
11.4.5 Status Confirmation of Other Network Devices
This section explains the confirmation procedure for other network devices.
Use the following procedure to check the statuses of other network devices.
1. Log in to the network device directly.
2. Confirm the status using the network device functions.
See
For details on the operations (such as login and status confirmation) on the network devices, refer to the manuals of network devices.
11.5 Monitoring Storage
This section explains how to monitor storage.
The following is possible when monitoring storage.
- Display of virtual storage resource and disk resource information
- Monitoring of storage unit errors
Use storage management software to monitor storage.
- For ETERNUS storage
Refer to the "ETERNUS SF Storage Cruiser User's Guide" and the "ETERNUS SF Storage Cruiser Message Guide".
- For EMC CLARiiON storage
Refer to the "EMC Navisphere Manager Administrator's Guide".
- For EMC Symmetrix DMX storage and EMC Symmetrix VMAX storage
Refer to the "EMC Navisphere Manager Administrator's Guide".
- 107 -
- For NetApp storage
Refer to the "Data ONTAP Storage Management Guide".
- 108 -
Chapter 12 Collecting Power Consumption Data and
Displaying Graphs
This chapter explains how to export the power consumption data collected from registered power monitoring targets and how to display
it as graphs, and also describes the exported data's format.
12.1 Overview
This section details the power consumption data that is collected from registered power monitoring targets.
Resource Orchestrator calculates the power (in Watts) and energy (Watt-hours) consumed by a power monitoring target by multiplying
its collected electrical current (Amperes) by its registered voltage value (Volts).
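For example, if a current of 2.0 A is collected for a target whose registered voltage is 200 V, a power value of 400 W is recorded for that
sample.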
This data can then be exported to a file in CSV format or as a graph.
The data can then be summarized or visualized as a graph, using an external tool such as Excel, to obtain a graphical representation of the
power consumed by each power monitoring target.
Information
In Resource Orchestrator, power consumption is calculated as the product of electrical current (A) multiplied by voltage (V). Normally,
power consumption is the product of an electrical current multiplied by a voltage and an additional phase factor (if the phase difference
between the current and voltage is defined as "θ", this factor is expressed as "cosθ").
Note
This data should only be used as a reference to evaluate the power consumption status. It should not be used as an exact power consumption
measurement for billing purposes.
12.2 Exporting Power Consumption Data
For details of how to export power consumption data, refer to "13.1 Exporting Power Consumption Data" in the "User's Guide for
Infrastructure Administrator (Resource Management) CE".
12.3 Power Consumption Data File (CSV Format)
This section explains the power consumption data file format (CSV format).
Each defined item of the exported power consumption data is separated by a comma (",").
Each line is exported in the following format.
- Data format
Data is exported using the following format:
Time,power_monitoring_target_name(data_type)[,power_monitoring_target_name(data_type)]...
time1,data1[,data1]...
time2,data2[,data2]...
- Header line
The header line contains column titles identifying the data (from line 2 and later) that is displayed under each column. Each column
title is set according to the data types that have been selected in the [Export Environmental Data (power_monitoring_target_type)]
dialog.
- 109 -
- Time
This column displays the date and time at which each data sample was collected.
Within data lines, the entry corresponding to this column is displayed in the following format: "YYYY-MM-DD hh:mm:ss"
("YYYY": Year, "MM": Month, "DD": Date, "hh:mm:ss": Hours:Minutes:Seconds). The time is displayed according to the time
zone set in the admin server operating system.
- power_monitoring_target_name(data_type)
The power_monitoring_target_name part displays the name of the selected target.
The data_type part displays the following data types:
- Power (W) is shown as "power"
- Average Power (W) as "power-average"
- Energy (Wh) as "energy"
- Data lines
Each data line contains data values corresponding to each of the column titles shown in the header line.
A hyphen ("-") is displayed for any data that could not be collected.
Note
- Regardless of the specified power monitoring target, the data held within Resource Orchestrator that fits the conditions given for the
selected time span and rate will be exported.
- Depending on the statuses of specified power monitoring targets, the data corresponding to the specified time span and rate may not
have been collected.
In this case, a hyphen ("-") will be displayed for any data that could not be collected.
Hyphens can be displayed when data was collected from another power monitoring target (including a deleted one) at the same
collection time, and data was not collected from the specified power monitoring target.
No data is collected from servers on which ServerView Agents is not running. In this case, missing data is shown using hyphens ("-").
- When power consumption data is exported, if the latest data is being collected at that point, some data may be shown using hyphens
("-").
- If the "Finest sampling" "rate" is selected in the [Export Environmental Data (power_monitoring_target)] dialog, the power and average
power values will be equal for each data sample.
- If a "rate" other than "Finest sampling" has been selected in the [Export Environmental Data (power_monitoring_target)] dialog,
values for each sample are displayed as follows. If data was collected at the displayed sample time, that value is displayed. If no data
was collected at the displayed sample time, the last data collected in the time interval between that sample and the previous sample
will be displayed.
- The energy (Wh) value of a finest sample is calculated under the assumption that the power value (W) collected for the sample stayed
at the same value until the next sampling (in other words it is assumed that power values (W) do not vary during the duration of the
polling interval).
- Only daily average data can be collected from blade chassis.
- Data collected from servers does not include power consumed by storage blades.
- For rates other than "Finest sampling", the energy value is calculated as the sum of energy samples. In such cases, the energy value
of samples for which no data could be collected will be deemed to be 0.
- The average power (W) of each sample is calculated from the energy value (Wh) of that sample and its corresponding time interval.
12.4 Displaying Power Consumption Data Graphs
For details of how to display graphs of power consumption data, refer to "13.2 Displaying Power Consumption Data Graphs" in the "User's
Guide for Infrastructure Administrator (Resource Management) CE".
- 110 -
Chapter 13 Monitoring Resource Pools (Dashboard)
This chapter explains how to monitor resource pools.
Use the [Dashboard (Pool Conditions)] tab of the ROR console to monitor the usage status of resource pools allocated to tenants and
resource pools of overall systems.
For details on the [Dashboard (Pool Conditions)] tab, refer to "Chapter 4 Dashboard (Pool Conditions)" in the "User's Guide for
Infrastructure Administrators CE".
Chapter 14 Monitoring L-Platforms
This chapter explains how to monitor L-Platforms.
The operation status of L-Servers can be monitored from the [Dashboard (System Conditions)] tab on the ROR console.
For details on the [Dashboard (System Conditions)] tab, refer to "Chapter 5 Dashboard (System Conditions)" in the "User's Guide for
Infrastructure Administrators CE".
Chapter 15 Accounting
This chapter explains accounting.
15.1 Overview
This section provides an overview of accounting.
Accounting provides the basis for charging usage fees corresponding to the L-Platforms used by each tenant.
Accounting includes the following functions.
Manage accounting information of L-Platform templates
Add, modify, or delete the accounting information of L-Platform templates. Based on the accounting information, usage fee (the
estimated price) of L-Platform templates can be displayed on the L-Platform management window, etc. Usage charges can also be
viewed in the usage charge window.
The database where the accounting information of L-Platform templates is stored is called the product master.
Note
Usage fee (the estimated price) for the L-Platform template will be displayed provided that settings for the estimated price display
function are enabled. Refer to "8.7.1 Display Function Settings for Estimated Price" for information on settings for the estimated price
display function.
Usage charge calculation
Usage charges are calculated based on the amount that the L-Platform is used and the charge information. The calculated usage charge
is sent to the email address specified in the tenant management window of the ROR console. The usage charge is also stored in the
usage charge database.
Viewing usage charges
Usage charges can be viewed in the ROR console. If necessary, the usage charge information can also be downloaded as a file.
Operation image
The operation image of accounting is shown below.
[Management and operation of accounting information by the infrastructure administrator]
1. The infrastructure administrator registers L-Platform template information using the L-Platform management window of the ROR
Console.
2. The infrastructure administrator registers the accounting information using the product master maintenance command.
3. The infrastructure administrator publishes the L-Platform template.
[Create tenant]
4. The infrastructure administrator creates tenants.
Set the cut-off date for the calculation of usage charges and the email address to send them to as necessary.
[Operation in the tenant]
5. The subscriber references the usage fee (the estimated price) of the L-Platform templates displayed on the L-Platform management
window, and subscribes to the L-Platform.
6. The tenant administrator references the usage fee (the estimated price) of the L-Platform templates displayed in the Request on
the ROR Console, then approves or rejects the subscription.
7. The infrastructure administrator views the monthly estimated charges for L-Platform templates displayed in the Request in the ROR
console and assesses the L-Platform subscriptions.
[Invoice for usage charges]
8. The accounts manager receives the usage charge file.
Usage charges are automatically calculated based on the amount that each L-Platform is used and the charge information, then stored
in the usage charge database. The usage charges for the previous month are confirmed the day after the cut-off date, and then sent as
a usage charge file to the email address.
9. The accounts manager views the usage charge file, creates the invoice, and then sends it to the tenant.
[View usage charges]
10. The infrastructure administrator and the tenant administrator can view the confirmed usage charges. If necessary, the usage charge
information can also be downloaded as a file.
15.2 Manage Accounting Information
This section explains how to manage the accounting information. The accounting information is managed in the product master.
The accounting information in the product master is managed using the product master maintenance command. The product master
maintenance command provides the following functions.
- Register the accounting information of an L-Platform template on the product master
- Output the accounting information of an L-Platform template that is registered on the product master to the accounting information
file
Refer to "10.4 productmaintain (Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the
product master maintenance command.
Information
Usage fee (the estimated price) of L-Platform templates will be displayed when the following operations are performed by registering the
accounting information.
- L-Platform subscription from the L-Platform management window
- Modifying L-Platform templates
- Approving application list from the ROR Console
- Assessment from the ROR Console
Note
Usage fee (the estimated price) for the L-Platform template will be displayed provided that settings for the estimated price display function
are enabled. Refer to "8.7.1 Display Function Settings for Estimated Price" for information on settings for the estimated price display
function.
15.2.1 Information Maintained by Product Master
The accounting information can be set in the product master as products for each element type (category) that makes up the L-Platform.
The categories that can be set as products are shown below. Note that unit prices can be specified on a yearly, monthly, or hourly basis.
Category            Explanation
------------------  -------------------------------------------------------------------------------
CPU                 Unit price per CPU (*1).
CPU clock           Unit price per CPU clock of 0.1GHz (*1).
Memory capacity     Unit price per memory capacity of 0.1GB.
Virtual L-Server    Unit price per virtual L-Server.
                    Set the unit price based on the system disk, snapshots, software, etc.
                    Do not include the price for CPU, CPU clock, or memory in the specified value.
Physical L-Server   Unit price per physical L-Server.
                    Set the unit price based on the system disk, snapshots, software, etc.
                    Do not include the price for CPU, CPU clock, or memory in the specified value.
                    Note that the usage fee (the estimated price) displayed on the L-Platform
                    Management window may differ from the actual price. (*2)
Data disk capacity  Unit price per disk capacity of 0.1GB. (*3)
Template            Unit price per L-Platform template.
                    Set the unit price based on the firewall cost, network resource (network line,
                    global IP, segment, NIC) usage cost, system SE cost, etc.
                    Do not include the price for virtual servers or virtual disks in the specified value.
*1: The price per CPU allocated as usage fee (the estimated price) is the sum of "Unit price per CPU" and "Unit price per CPU clock *
CPU clock".
Example) An example price calculation is shown below, assuming the following conditions.
[Conditions]
- CPU Clock: $0.10/0.1GHz
- CPU: $0.80/unit
[Example calculation]
- Price for a CPU of 3.2GHz: $0.10 * 32 + $0.80 = $4.00
- Price for two CPUs each of 1.0GHz: ( $0.10 * 10 + $0.80 ) * 2 = $3.60
*2: When a physical L-Server is to be newly deployed, the estimated price for the CPU performance and memory capacity is calculated
according to the values entered on the L-Platform Management window. The physical L-Server that is actually deployed will have the
performance that is closest to the values entered on the L-Platform Management window. Accordingly, the estimated price displayed
may differ from the actual price. For physical L-Servers that have already been deployed, the estimated price is calculated based on the
actual CPU performance and memory capacity.
*3: The estimated price for data disk capacity is added according to the disk capacity usage of each storage pool, with no differentiation
between extension disks and existing disks. Note also that if the same existing disk is attached to multiple L-Servers, the existing disk
capacity is counted once for each L-Server to which the disk is attached. Resource allocation methods differ between extension disks
and existing disks. For this reason, the storage pool used for an existing disk should not be the same as one used for an extension disk.
For operations that use separate storage pools, charges for the existing disk can be calculated using the following two methods:
- If charging according to the total capacity of an existing disk attached to L-Server(s):
In the storage pool, set the amount to be charged (in units of 0.1GB) when attaching an existing disk to an L-Server. If the same
existing disk is attached to multiple different L-Servers at this time, the disk capacity will be added together for the total number of
attached L-Servers and charged accordingly.
- If charging by the available existing disk from the L-Platform:
Rather than setting the price for the existing disk defined by the storage pool, perform settings instead to include the usage charge for
the available existing disk in the price for the L-Platform template.
Point
L-Platform operation when reconfiguration is not permitted
If reconfiguration of the L-Platform template is not permitted in the operation, L-Servers and other resources are allocated using the
system configuration defined in the template. In this case, only the prices for the templates need to be set; the prices for the L-Servers do not.
About Setting Prices
- In Solaris containers, it is possible to allocate more CPU and memory to non-global zones than is physically installed in the host
operating system. If the CPU load and memory usage in each non-global zone then increase, the allocated CPU capacity and memory
capacity may not actually be available.
When creating non-global zones, the allocated amounts can be guaranteed by allocating CPU and memory so that they do not exceed
the capacity physically held by the host operating system.
However, in development environments, or when non-global zones exceeding the physical CPU and memory are created, the allocated
amounts cannot be guaranteed. If CPU and memory are to be charged for in environments where the allocated amounts cannot be
guaranteed, consider setting the CPU and memory unit prices lower than normal.
- When overcommit is enabled, for example when L-Platforms are imported from related L-Servers on VMware or Hyper-V
virtualization software, the allocated amounts of CPU and memory may likewise not be guaranteed, so give due consideration to the
price settings.
The usage fee (the estimated price) displayed on the L-Platform management window and others is calculated as follows.
L-Platform usage fee (the estimated price) = monthly template price
                                             + monthly prices for all L-Servers
                                             + monthly prices for all data disks

Template monthly price = template unit price * converted monthly price

Virtual L-Server monthly price = image unit price * converted monthly price
                                 + (CPU unit price * converted monthly price
                                    + CPU clock unit price * CPU clock * converted monthly price) * number of CPUs
                                 + memory unit price * amount of memory * converted monthly price

Physical L-Server monthly price = image unit price * converted monthly price
                                  + (CPU unit price * converted monthly price
                                     + CPU clock unit price * CPU clock * converted monthly price) * number of CPUs
                                  + memory unit price * amount of memory * converted monthly price

Data disk monthly price = disk unit price * amount of disk * converted monthly price

- If the unit price is on an hourly basis, the usage fee (the estimated price) is calculated as 24 hours * 30 days.
- If the unit price is on a yearly basis, the usage fee (the estimated price) is one twelfth of that price.
- The usage fee (the estimated price) is rounded to the nearest hundredth of a dollar (cent) when displayed.
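The following is a minimal Python sketch, not an official implementation, of how the above estimate could be computed. It assumes the converted monthly price follows the rules above (hourly prices multiplied by 24 hours * 30 days, yearly prices divided by 12) and that the CPU clock, memory, and disk unit prices are per 0.1 GHz and 0.1 GB as described in "15.2.1 Information Maintained by Product Master"; all function names and sample values are illustrative only.

def to_monthly(unit_price, unit_code):
    # Convert a unit price to its monthly equivalent ("converted monthly price").
    return {"hour": unit_price * 24 * 30,
            "year": unit_price / 12,
            "month": unit_price}[unit_code]

def lserver_monthly(image, cpu, cpu_clock, memory, clock_ghz, cpu_count, memory_gb):
    # Monthly price of one L-Server; each price argument is a (unit_price, unit_code) pair.
    return (to_monthly(*image)
            + (to_monthly(*cpu) + to_monthly(*cpu_clock) * clock_ghz / 0.1) * cpu_count
            + to_monthly(*memory) * memory_gb / 0.1)

def lplatform_estimate(template, lserver_prices, disks):
    # Template price + all L-Server prices + all data disk prices, rounded to the cent for display.
    total = to_monthly(*template) + sum(lserver_prices) + sum(
        to_monthly(*price) * gb / 0.1 for price, gb in disks)
    return round(total, 2)

# Example: one virtual L-Server (2 CPUs of 1.0GHz, 4GB memory) priced hourly.
server = lserver_monthly(image=(0.10, "hour"), cpu=(0.001, "hour"),
                         cpu_clock=(0.002, "hour"), memory=(0.001, "hour"),
                         clock_ghz=1.0, cpu_count=2, memory_gb=4)
print(lplatform_estimate((2.00, "month"), [server], [((0.001, "hour"), 10)]))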
15.2.2 Accounting Information File Format
The accounting information of L-Platform templates is written to the accounting information file. The format of accounting information
file is as follows.
- Character code is UTF-8.
- Each column is separated by a comma.
- Comments cannot be written.
- Character string data is enclosed in double quotations. When a double quotation is a character in a character string, enter two
double quotations to escape it.
- When a single double quotation appears inside a character string that is enclosed in double quotations, that single double quotation is ignored.
Example
When the following string is entered in the accounting information file, the value becomes aaaaa"a:
"aaaaa""a"
When the following string is entered in the accounting information file, the single double quotation is ignored and the value becomes aaaaaa:
"aaa"aaa"
Columns represented in an accounting information file are shown below.
No  Name                      Omit  Explanation
--  ------------------------  ----  -----------------------------------------------------------------
1   Product ID                No    Specify the ID that identifies the product.
                                    Register one product for each resource identifier.
                                    Specify 16 or less characters using alphanumeric characters,
                                    _ (underscores), or - (hyphens).
2   Priority                  No    Specify the priority within a product.
                                    Multiple unit prices can be specified for a product; a larger
                                    priority value indicates a higher priority. (*1)
                                    The same priority cannot be set twice for the same product.
                                    Specify a numerical value from 0 to 999.
3   Start of applicable date  No    Specify the start of the applicable period.
                                    Format: YYYY-MM-DDThh:mm:ss.SSSZ
                                    Example) 2011-04-01T00:00:00.000+0900
4   End of applicable date    Yes   Specify the end of the applicable period.
                                    Format: YYYY-MM-DDThh:mm:ss.SSSZ
                                    Example) 2012-03-31T23:59:59.999+0900
                                    Specify a date later than the start of applicable date.
5   Category code             No    Specify the product category using the following codes.
                                    - cpu: CPU
                                    - cpu_clock: CPU clock
                                    - memory: memory capacity
                                    - vm: virtual L-Server
                                    - pm: physical L-Server
                                    - disk: extension disk capacity
                                    - template: template
6   Resource identifier       No    Specify information to identify the accounting target resource
                                    using 128 or less alphanumeric characters.
                                    Resource identifiers vary by category. (*2)
                                    - cpu, cpu_clock, memory: VM pool name or server pool name
                                    - vm: virtual L-Server image name
                                    - pm: physical L-Server image name
                                    - disk: storage pool name
                                    - template: template ID
7   Unit code                 No    Specify the accounting unit using the following codes.
                                    - year: yearly
                                    - month: monthly
                                    - hour: hourly
8   Unit price                No    Specify the unit price.
                                    Specify 11 or less digits for the integer portion and 4 or less
                                    digits for the fractional portion.
                                    Specify the fractional currency unit (e.g. cent) as 1 for U.S.
                                    dollars, euros, or Singapore dollars. Specify the currency unit
                                    as 1 for Japanese yen. (*3)
                                    Example)
                                    Specify 23 if the setting value is $0.23.
                                    Specify 23 if the setting value is ¥23.
9   Product name              No    Specify the name used to distinguish the product using 128 or
                                    less characters.
10  Description               Yes   Specify the description of the product using 1024 or less
                                    characters.
                                    The comma that follows the product name must be entered even
                                    when this item is omitted.
                                    Example)
                                    ...,Unit price,"Product name",
*1: An example is shown below, assuming the following settings.
- Example settings for product ID "ME-0001"

         Product ID  Priority  Unit price  Start of applicable date      End of applicable date
         ----------  --------  ----------  ----------------------------  ----------------------------
  Data A ME-0001     0         $2.00       2012-01-01T00:00:00.000+0900  None
  Data B ME-0001     1         $1.00       2012-08-01T00:00:00.000+0900  2012-08-31T23:59:59.999+0900

- The unit price for product ID "ME-0001" is $2.00 on 2012-07-01 because only data A is applicable on that date.
- The unit price for product ID "ME-0001" is $1.00 on 2012-08-15 because data B has higher priority, even though both data A and
data B are in their applicable periods.
- An example description in the accounting information file:

"ME-0001",0,"2012-01-01T00:00:00.000+0900",,"memory","/VMPool","month",200,"Standard memory","Standard memory"
"ME-0001",1,"2012-08-01T00:00:00.000+0900","2012-08-31T23:59:59.999+0900","memory","/VMPool","month",100,"Standard memory(Campaign)","Standard memory(Campaign)"
*2: The value of the resource identifier can be obtained from the value of the corresponding XML tag of the template information list that
is output in XML format by using the template information list display command. Refer to "9.12 cfmg_listtemplate (Displaying Template
Information List)" in the "Reference Guide (Command/XML) CE" for information on how to use the template information list display
command.
The resource identifier for each category code and the corresponding template information (XML tag) are shown below.
- Resource identifiers and corresponding template information (XML tags)

  Category code  Resource identifier               Template information (XML tag)
  -------------  --------------------------------  ------------------------------
  template       Template ID                       <template><id>
  vm             Virtual L-Server image name       <image><imageName>
  pm             Physical L-Server image name      <image><imageName>
  cpu            VM pool name or server pool name  <server><pool>
  cpu_clock      VM pool name or server pool name  <server><pool>
  memory         VM pool name or server pool name  <server><pool>
  disk           Storage pool name                 <server><storagePool>

Specify the VM pool name, server pool name, and storage pool name in a format that starts with / (slash).
If the VM pool name, server pool name, or storage pool name was omitted, it is output to the template information as <pool /> or
<storagePool />, respectively. In this case, refer to the resource pool and perform settings on the ROR console [Resource] tab. Refer
to the "User's Guide for Infrastructure Administrators (Resource Management) CE" for information on the ROR console [Resource]
tab.
*3: The currency used is determined by the currency information setting. Refer to "8.7.2 Currency Information Settings" for information
on the currency information setting.
An example description of template information and corresponding accounting information is shown below.
- Example of template information
<?xml version="1.0" encoding="Windows-31J"?>
<templates>
<template>
<id>templateId1</id>
...
<servers>
<server>
<pool>/VMHostPool</pool>
<storagePool>/StoragePool</storagePool>
...
<image>
<imageName>image1</imageName>
...
</image>
</server>
</servers>
</template>
</templates>
- Example description of accounting information file
"TP-0001",0,"2012-01-01T00:00:00.000+0900",,"template","templateId1","month",1000,"Web/DB
Windows Server 2008 R2 Standard","Service Windows Server 2008 R2 Standard"
"VM-0001",0,"2012-01-01T00:00:00.000+0900",,"vm","image1","month",500,"Windows Server 2008 R2
Standard","VM Windows Server 2008 R2 Standard"
"CP-0001",0,"2012-01-01T00:00:00.000+0900",,"cpu","/VMHostPool","month",100,"Xeon5110","Xeon5110"
"CL-0001",0,"2012-01-01T00:00:00.000+0900",,"cpu_clock","/VMHostPool","month",
50,"Xeon5110","Xeon5110"
"ME-0001",0,"2012-01-01T00:00:00.000+0900",,"memory","/VMHostPool","month",200,"Standard
memory","Standard memory"
"DI-0001",0,"2012-01-01T00:00:00.000+0900",,"disk","/StoragePool","month",10,"normal
disk","normal disk"
Note
- The accounting information will not be deleted from the product master automatically, even after the end of applicable date has passed.
- If the end of applicable date is specified, prepare other accounting information of the same product ID without specifying the end of
applicable date.
- If the overcommit function is enabled, the category codes cpu_clock and memory will be calculated by default as CPU reserve
performance and memory reserve capacity, respectively, and usage fee (the estimated price) will be displayed on the L-Platform
Management window.
Refer to "8.7.1 Display Function Settings for Estimated Price" for information on charge settings when the overcommit function is
enabled.
15.3 Operate Accounting Information
This section explains how to operate the accounting information.
The operation of the accounting information consists of registration, modification, deletion, and reference.
Register accounting information
Register the accounting information when newly created L-Platform templates are enabled.
The accounting information is set to the elements of L-Platform templates that are created. After finishing registration of the accounting
information, enable the L-Platform templates.
It is also possible to register accounting information for L-Servers that were imported into the system using the L-Server import
command.
Refer to "15.3.1 Register Accounting Information" for information on how to register the accounting information.
Modify accounting information
Modify the accounting information of L-Platform templates that are already enabled. This corresponds to cases such as setting a
limited-time campaign price or raising the price.
The price is modified in the accounting information of the L-Platform templates registered in the product master.
It is also possible to modify accounting information for L-Servers that were imported into the system.
Refer to "15.3.2 Modify Accounting Information Command" for information on how to modify the accounting information.
Delete accounting information
Delete the accounting information if the following conditions are met.
- The L-Platform templates are disabled.
- The log that shows the usage result of the corresponding L-Platform templates does not exist in the metering log.
Refer to "15.3.3 Delete Accounting Information" for information on how to delete the accounting information. Refer also to
Reference accounting information
Reference the accounting information registered on the product master when calculating the accounting. Refer to "15.3.4 Reference
Accounting Information" for information on how to reference the accounting information.
15.3.1 Register Accounting Information
Methods for registering accounting information differ for L-Platform templates and L-Servers that were imported into the system.
Register Accounting Information of L-Platform template
Create new L-Platform templates, and set the accounting information to the elements of the L-Platform template that was created. After
finishing the registration of the accounting information, enable the L-Platform templates.
Follow the procedure below to register the accounting information.
1. Register new L-Platform templates.
Refer to "8.3.2 Creating New L-Platform Template" in the "User's Guide for Infrastructure Administrators CE" for information on
how to register L-Platform templates.
2. Obtain a list of the template information registered.
Refer to "9.12 cfmg_listtemplate (Displaying Template Information List)" in the "Reference Guide (Command/XML) CE" for
information on how to obtain the template information list.
3. Execute the output function of the product master maintenance command.
The accounting information of the L-Platform templates registered on the product master will be output to the specified accounting
information file by executing the output function of the product master maintenance command. Refer to "10.4 productmaintain
(Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the output function of the product
master maintenance command.
This operation is unnecessary for the initial registration. Create a new accounting information file from step 4.
4. Add the accounting information of the L-Platform templates to the output accounting information file based on the template
information list.
Specify a date prior to the L-Platform template enabled date as the start of applicable date in the accounting information to be added.
Omit the end of applicable date. Refer to "15.2.2 Accounting Information File Format" for information on the accounting information
file format.
Example)
         Product ID  Priority  Unit price  Start of applicable date      End of applicable date
         ----------  --------  ----------  ----------------------------  ----------------------
  Data A ID001       0         $2.00       2012-01-01T00:00:00.000+0900  None
5. Notify the tenant of the contents of the newly enabled L-Platform templates (L-Platform template name, summary of the L-Platform
template, usage fee (the estimated price), start of applicable date, etc.).
6. Execute the register function of the product master maintenance command.
Specify the accounting information file updated, and execute the product master maintenance command. Refer to "10.4
productmaintain (Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the register
function of the product master maintenance command.
7. Enable the L-Platform template.
Enable the L-Platform template on the start of applicable date. Refer to "8.3.6 Publishing and Hiding L-Platform Template" in the
"User's Guide for Infrastructure Administrators CE" for information on how to enable the L-Platform templates.
Registering Accounting Information for L-Servers imported into the System
Use the following procedures to import an L-Server into the system and register accounting information for the imported L-Server.
1. Use the L-Server import command to import the L-Server into the system.
Refer to "12.4 cfmg_importlserver (Import L-Server)" in the "Reference Guide (Command/XML) CE" for information on how to
use the L-Server import command.
Note
- If the <VM pool name> option is omitted when executing the L-Server import command, it will not be possible to register
accounting information for the CPU, CPU clock or memory capacity.
- If the <storage pool name> option is omitted when executing the L-Server import command, it will not be possible to register
accounting information for the L-Server data disk.
2. Obtain the template information list that was automatically generated by the L-Server import command. The template information
list can be obtained using the template information list display command. Refer to "9.12 cfmg_listtemplate (Displaying Template
Information List)" in the "Reference Guide (Command/XML) CE" for information on how to use the template information list
display command.
3. Identify the template that was automatically generated for the imported L-Server in the template information list obtained in step 2.
Point
Check the following tags to distinguish automatically generated templates:
- Template ID (<template><id>): The template ID output when the L-Server import command was executed.
- L-Server name (<template><description>): The L-Server name specified when the L-Server import command was executed.
4. Execute the output function of the product master maintenance command.
The accounting information of the L-Platform templates registered on the product master will be output to the specified accounting
information file by executing the output function of the product master maintenance command. Refer to "10.4 productmaintain
(Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the output function of the product
master maintenance command.
This operation is unnecessary for the initial registration. In that case, create a new accounting information file in step 5.
5. Add the accounting information to the output accounting information file based on the template information list.
When applying a start date for the accounting information to be added, specify a date/time prior to the date/time that the L-Server
import command was executed. Omit the end of applicable date. Refer to "15.2.2 Accounting Information File Format" for
information on the accounting information file format.
Example)
         Product ID  Priority  Unit price  Start of applicable date      End of applicable date
         ----------  --------  ----------  ----------------------------  ----------------------
  Data A ID001       0         $2.00       2012-01-01T00:00:00.000+0900  None
6. Execute the register function of the product master maintenance command.
Specify the accounting information file updated, and execute the product master maintenance command. Refer to "10.4
productmaintain (Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the register
function of the product master maintenance command.
15.3.2 Modify Accounting Information Command
Modify the accounting information of the L-Platform templates that are already enabled. This corresponds to the case of raising the price
or setting limited time price for campaign.
It is also possible to modify accounting information for L-Servers imported into the system.
Follow the procedures below to modify accounting information for either L-Platform templates registered in the product master or L-
Servers imported into the system.
1. Execute the output function of the product master maintenance command.
The accounting information of the L-Platform templates registered on the product master will be output to the specified accounting
information file by executing the output function of the product master maintenance command. Refer to "10.4 productmaintain
(Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the output function of the product
master maintenance command.
2. Modify the price, or specify the applicable time period, in the output accounting information file.
Examples of modifying the accounting information file are shown below. Specify a future date as the modification date.
a. To change the unit price of a product that is already registered in the product master, add the modified unit price as new data (see the sketch after this procedure).
Example) Modify the unit price of the product whose product ID is "ID001" to $2.10 from 2012-08-01.
Before modification:

         Product ID  Priority  Unit price  Start of applicable date      End of applicable date
         ----------  --------  ----------  ----------------------------  ----------------------------
  Data A ID001       0         $2.00       2012-01-01T00:00:00.000+0900  None

After modification:

         Product ID  Priority  Unit price  Start of applicable date      End of applicable date
         ----------  --------  ----------  ----------------------------  ----------------------------
  Data A ID001       0         $2.00       2012-01-01T00:00:00.000+0900  2012-07-31T23:59:59.999+0900
  Data B ID001       1         $2.10       2012-08-01T00:00:00.000+0900  None
Set as follows for existing data A.
- End of applicable date: Modify to the date just before the modification date.
Set as follows for new data B.
- Priority: Higher priority than the existing data A
- Unit price: New unit price
- Start of applicable date: Modification date
- End of applicable date: Omit
b. Add new data for a specific time period if a limited-time price, such as a campaign price, is to be registered.
Example) Halve the price in the period between 2012-08-01 and 2012-08-31.
Before modification:

         Product ID  Priority  Unit price  Start of applicable date      End of applicable date
         ----------  --------  ----------  ----------------------------  ----------------------------
  Data A ID001       0         $2.00       2012-01-01T00:00:00.000+0900  None

After modification:

         Product ID  Priority  Unit price  Start of applicable date      End of applicable date
         ----------  --------  ----------  ----------------------------  ----------------------------
  Data A ID001       0         $2.00       2012-01-01T00:00:00.000+0900  None
  Data B ID001       1         $1.00       2012-08-01T00:00:00.000+0900  2012-08-31T23:59:59.999+0900
Do not modify existing data A.
Set as follows for new data B.
- Priority: Higher priority than the existing data A
- Unit price: New unit price
- Start of applicable date: Start date of the period
- End of applicable date: End date of the period
Refer to "15.2.2 Accounting Information File Format" for information on the accounting information file format.
3. Notify the tenant of the contents of the modified L-Platform templates (L-Platform template name, usage fee (the estimated price),
start of applicable date, etc.).
4. Execute the register function of the product master maintenance command.
Specify the accounting information file updated, and execute the product master maintenance command. Refer to "10.4
productmaintain (Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the register
function of the product master maintenance command.
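As an illustration of step 2.a above, the following minimal Python sketch (a hypothetical helper, not an official tool) closes the existing unit price just before the modification date and adds a higher-priority row carrying the new unit price; the dictionary keys mirror the accounting information columns.

def modify_unit_price(existing, new_price, start_of_new, end_of_old):
    # Close existing data A just before the modification date.
    updated_old = dict(existing, **{"End of applicable date": end_of_old})
    # New data B: higher priority, new unit price, end of applicable date omitted.
    new_row = {"Product ID": existing["Product ID"],
               "Priority": existing["Priority"] + 1,
               "Unit price": new_price,
               "Start of applicable date": start_of_new,
               "End of applicable date": None}
    return updated_old, new_row

data_a = {"Product ID": "ID001", "Priority": 0, "Unit price": 2.00,
          "Start of applicable date": "2012-01-01T00:00:00.000+0900",
          "End of applicable date": None}
print(modify_unit_price(data_a, 2.10,
                        "2012-08-01T00:00:00.000+0900",
                        "2012-07-31T23:59:59.999+0900"))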
Note
- If the applicable start date is not set to a future date and time, modify the charge information after stopping the system. If system
operation is not stopped, the pre-modification amounts may be displayed for the estimated charges and usage charges.
- For subscriptions made from specifications that were saved before the price was modified, the displayed price does not change.
Notify the tenant that, after the modification date, the price applied will not be the price displayed on the L-Platform management
window but the modified price.
15.3.3 Delete Accounting Information
Delete the accounting information if the accounting calculation becomes unnecessary for the L-Platform templates or the L-Server imported
into the system.
The accounting information becomes unnecessary if the following conditions are met.
- The L-Platform templates are disabled.
- The log that shows the usage results of the corresponding L-Platform templates or L-Servers imported into the system does not exist
in the metering log.
Follow the procedure below from disabling the L-Platform templates to deleting the accounting information.
Begin from "Check Metering Log" for L-Servers imported into the system.
Disable L-Platform templates
1. Notify the tenant of the contents of the L-Platform templates to be disabled (L-Platform template name, usage fee (the estimated
price), start of applicable date, etc.).
2. Disable the L-Platform templates on the date that was notified.
Disable the L-Platform templates using the template management window of the ROR Console. Refer to "8.3.6 Publishing and
Hiding L-Platform Template" in the "User's Guide for Infrastructure Administrators CE" for information on the setting for disabling
the L-Platform template.
Check Metering Log
1. Check that the log that shows the usage results of the corresponding L-Platform templates does not exist in the metering log.
Output the metering log using the metering log output command, and check that the metering log contains no entries whose item ID
and value correspond to the category code and resource identifier of that accounting information.
The correspondence relationships between category codes and resource identifiers in the accounting information and item IDs in
the metering log are shown below.
  Accounting information                   Metering log
  Category code  Resource identifier       Item ID
  -------------  ------------------------  ----------------
  cpu            VM pool name              vm_pool
                 Server pool name          server_pool
  cpu_clock      VM pool name              vm_pool
                 Server pool name          server_pool
  memory         VM pool name              vm_pool
                 Server pool name          server_pool
  vm             Image name                image_name
  pm             Image name                image_name
  disk           Storage pool name         storage_pool
  template       Template ID               base_template_id
Delete accounting information
1. Execute the output function of the product master maintenance command.
The accounting information of the L-Platform templates registered on the product master will be output to the specified accounting
information file by executing the output function of the product master maintenance command. Refer to "10.4 productmaintain
(Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the output function of the product
master maintenance command.
2. Delete accounting information that becomes unnecessary from the output accounting information file.
3. Execute the register function of the product master maintenance command.
Specify the accounting information file updated, and execute the product master maintenance command. Refer to "10.4
productmaintain (Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the register
function of the product master maintenance command.
15.3.4 Reference Accounting Information
Follow the procedure below to reference the accounting information registered on the product master.
1. Execute the output function of the product master maintenance command.
The accounting information of the L-Platform templates registered on the product master will be output to the specified accounting
information file by executing the output function of the product master maintenance command. Refer to "10.4 productmaintain
(Product Master Maintenance)" in the "Reference Guide (Command/XML) CE" for information on the output function of the product
master maintenance command.
2. Reference the output accounting information file.
Reference the accounting information file that was output by the output function of the product master maintenance command.
Refer to "15.2.2 Accounting Information File Format" for information on the accounting information file format.
15.4 Calculation of Usage charges
This section explains how to calculate usage charges.
See
Refer to "Appendix B Metering Log" for information on the metering log, which is the information upon which the calculation of usage
charges is based.
15.4.1 Overview of Usage charge Calculation
When a user subscribes to or unsubscribes from an L-Platform, or performs operations such as starting or stopping an L-Server, these are
recorded as operation logs in the metering log.
Resource (L-Platform, L-Server, etc.) usage time is then aggregated from this operation log. Usage charges are calculated for each L-
Platform based on the aggregated usage time, the amount of resources used, and the unit price in the charge information. The monthly
usage charges are confirmed the day after the cut-off date set for the tenants, and then sent to the email address.
15.4.2 Resource Usage Times
Resource usage times include the time deployed and the time operated. These are aggregated based on the information in the metering
log. Deployment time is the time between when the resource is deployed and when it is terminated. Operating time is the time between
when the resource is started and when it is stopped.
Usage time is aggregated according to the following rules:
Usage time under one hour (in minutes)
Each day's usage time is aggregated as minutes.
Rounding
Times are rounded off to the nearest minute. Times 30 seconds and over are counted as 1 minute. Times less than 30 seconds are
counted as 0 minutes.
The following shows an example of an L-Server deployed time and operated time.
This example shows when an L-Server is added at 7:00, started at 7:40, stopped at 18:20, and then deleted at 19:00.
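A minimal Python sketch of this aggregation rule is shown below (illustrative only, not part of the product); it assumes timestamps are available as datetime values and the function name is hypothetical.

from datetime import datetime

def usage_minutes(start, end):
    # Count usage in minutes; 30 seconds and over round up to 1 minute,
    # under 30 seconds round down to 0.
    seconds = (end - start).total_seconds()
    minutes, remainder = divmod(int(seconds), 60)
    return minutes + (1 if remainder >= 30 else 0)

# Example based on the operating time above: started at 7:40, stopped at 18:20.
print(usage_minutes(datetime(2012, 7, 1, 7, 40), datetime(2012, 7, 1, 18, 20)))  # 640 minutes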
15.4.3 How to Charge for Resources
Resources can be charged by a fixed rate or by the amount used. With fixed charging, the usage charges are charged according to a fixed
fee rather than the usage time of the resources. With charging by the amount used, the usage charges are charged according to the usage
time of the resources.
Unit codes for the charge information are used when charging. Fixed charging is used when either year or month is specified as the unit
code. Charging by amount is used when hour is specified as the unit code.
The following table shows the relationships between the unit code specified in the charge information, the charging method, and the
calculation of a month's usage charges:
  Unit code  Charging method   Usage charges for one month
  ---------  ----------------  ---------------------------------------------------------------
  year       Fixed charging    Unit price / 12 (months)
  month      Fixed charging    Unit price
  hour       Charge by amount  Unit price / 60 (minutes) x usage time (minutes) x amount used
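The following minimal Python sketch (not an official implementation) expresses the table above; the function name and arguments are illustrative.

def monthly_charge(unit_price, unit_code, usage_minutes=0, amount_used=0):
    # One month's usage charge for a single resource.
    if unit_code == "year":
        return unit_price / 12        # fixed charging
    if unit_code == "month":
        return unit_price             # fixed charging
    return unit_price / 60 * usage_minutes * amount_used  # "hour": charge by amount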
Point
The unit price, unit code, and one month's usage charges for each resource specified in the charge information are displayed as usage
charge information for each month in the [Usage Charge Detail] window, accessed from the [Accounting] tab of the ROR console.
Refer to "12.3 Usage Charge Detail" in the "User's Guide for Infrastructure Administrators CE" for information about the Usage Charge
Detail window. Also, tenant administrators can refer to the same information. Refer to "11.2 Usage Charge Detail" in the "User's Guide
for Tenant Administrators CE" for information about the Usage Charge Detail window viewable by tenant administrators.
15.4.4 Resource Usage Amounts and Times
If charging by amount is to be used for resources, the usage charge is determined by the amount of resources used and the time they are
used for.
The following table shows the relationship between the product categories for each resource, the amount of resources used, and the usage
time:
  Resource               Product category    Resource usage amount                  Usage time
  ---------------------  ------------------  -------------------------------------  -------------------------------
  L-Platform             Template            Total number of deployed L-Platforms   L-Platform deployed time
  L-Server               Virtual L-Server    Total number of deployed L-Servers     Deployment time of the L-Server
  (virtual or physical)  Physical L-Server
                         CPU                 CPU number                              Operating time of the L-Server
                         CPU clock           Use one of the following:               Operating time of the L-Server
                                             - CPU performance (GHz)
                                             - CPU reserved performance (GHz) (*)
                         Memory capacity     Use one of the following:               Operating time of the L-Server
                                             - Memory (GB)
                                             - Memory reserved (GB) (*)
  Extension disk         Data disk capacity  Disk size (GB)                          Disk deployment time
* Note: This is the default value when overcommit is enabled on the virtual L-Server. These can be changed by changing the settings.
Point
Usage charges for physical L-Servers
The usage charges for the CPU, CPU clock, and memory capacity of a physical L-Server are calculated using the values entered in the
L-Platform Management window (input values). The physical L-Server that is actually deployed is a server in the specified server pool
that is close to those input values. For that reason, when charging for physical L-Servers, perform operations such as preparing multiple
physical servers that are close to the expected input values in the server pool.
CPU usage charges
The usage charge for the CPU is calculated based on the CPU and the CPU clock. The calculation formula is as follows:
(CPU unit price / 60 (minutes) + CPU clock unit price / 60 (minutes) x CPU clock count / 0.1) x CPU count x usage time
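A minimal Python sketch of this formula is shown below (illustrative only); the CPU clock is assumed to be given in GHz and is divided by 0.1 because the CPU clock unit price is per 0.1 GHz.

def cpu_usage_charge(cpu_unit_price, cpu_clock_unit_price, cpu_clock_ghz, cpu_count, usage_minutes):
    # (CPU unit price / 60 + CPU clock unit price / 60 * clock / 0.1) * CPU count * usage time
    return ((cpu_unit_price / 60
             + cpu_clock_unit_price / 60 * cpu_clock_ghz / 0.1)
            * cpu_count * usage_minutes)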
15.4.5 Example of Usage charge Calculation
Here we will describe an example of calculation, using an L-Platform with the configuration shown below.
Configuration
Charge information
The charge information is as follows:
  Resource   Resource identifier  Unit code  Unit price
  ---------  -------------------  ---------  ----------
  template   TE_01                month      $2.00
  vm         IM_01                hour       $0.10
  cpu        /VMPool              hour       $0.001
  cpu_clock  /VMPool              hour       $0.002
  memory     /VMPool              hour       $0.001
  disk       /StoragePool         hour       $0.001
Operation
Operations on the L-Platform are as follows:
  Date/time            Operation
  -------------------  ------------------------------
  2012-07-01 10:10:00  Deploy L-Platform 1
  2012-07-01 10:11:00  Start L-Server 1
  2012-07-02 18:10:00  Stop L-Server 1
  2012-07-02 18:11:00  Add L-Server 2
  2012-07-02 18:12:00  Start L-Server 1
  2012-07-02 18:13:00  Start L-Server 2
  2012-07-05 18:00:00  Stop L-Server 1 and L-Server 2
  2012-07-05 18:01:00  Terminate L-Platform 1
Usage time
The usage time for each resource is as follows:

  Resource          Deployed time                    Running time
  ----------------  -------------------------------  -------------------------------
  L-Platform1       128 hours 51 minutes (7731 min)  -
  L-Server1         128 hours 51 minutes (7731 min)  128 hours 51 minutes (7731 min)
  L-Server2         71 hours 50 minutes (4310 min)   71 hours 48 minutes (4308 min)
  Extension disk 1  128 hours 51 minutes (7731 min)  -
  Extension disk 2  71 hours 50 minutes (4310 min)   -
Usage charges
The usage charges for L-Platform 1 are as follows:
Point
Fractions in usage charges
When the calculations of usage charge produce a figure that includes fractions less than the lowest unit in the currency, the remainder is
rounded off.
L-Platform transfers
When an L-Platform is moved, the usage charges for resources charged by amount are divided between the departure tenant and destination
tenant, according to the amounts used. When resources are charged by a fixed amount, the departure tenant is billed.
15.4.6 Sending Usage charges
The usage charges for the previous month, which are confirmed the day after the cut-off date, are attached as a usage charge file and sent
to the email address. The usage charge file is a zip file combining the usage charge list file and the usage charge detail files.
The details of these files are as follows:
  File                      File name                              Description
  ------------------------  -------------------------------------  ------------------------------------------------
  Usage charge file         YYYYMM_tenant_name[_date_deleted].zip  File of usage charges for the tenant, in zip
                                                                   format.
                                                                   YYYYMM is the cut-off date.
                                                                   "_date_deleted" is added when the tenant has
                                                                   already been deleted.
  Usage charge list file    YYYYMM_tenant_name[_date_deleted].csv  File in CSV format showing the tenant's previous
                                                                   month's usage charges plus the usage charges for
                                                                   each L-Platform the tenant owns.
                                                                   YYYYMM is the cut-off date.
                                                                   "_date_deleted" is added when the tenant has
                                                                   already been deleted.
  Usage charge detail file  YYYYMM_L-PlatformID.csv                File in CSV format showing the breakdown of
                                                                   usage charges for each L-Platform.
                                                                   YYYYMM is the cut-off date.
Point
Size of the usage charges file
The size of usage charge files depends on the number of L-Platforms deployed for the tenant and the configuration used. For example, if
200 L-Platforms each including two L-Servers are deployed, the file size is about 160 KB.
15.4.6.1 Usage Charge List File
The usage charge list file is a file in CSV format showing the tenant's previous month's usage charges plus the usage charges for each
L-Platform the tenant owns.
The output items are as follows:
  Item                  Description
  --------------------  -----------------------------------------------------------------------------
  TenantName            This is the tenant name. (*1)
  TenantDisplayName     This is the display name. (*1)
  TenantDeletedDate     This is the date and time the tenant was deleted.
                        Output only if the tenant has already been deleted. An empty string is output
                        if the tenant has not been deleted.
                        Format: YYYY-MM-DD HH:mm:ss.SSS (*1)
                        Example: 2013-03-31 23:59:59.999
  Date                  This is the cut-off date.
                        Format: YYYY-MM (*1)
                        Example: 2013-03
  TotalChargeAmount     The total value of the usage charges for the tenant.
                        The amounts are values, so they are not enclosed with double-quotes (").
                        Format: ZZZZZZZZZZZ9 (*2)
                        Example: 800
  FileVersion           This is the version of the file. Fixed as 1.0.
  LplatformId           This is the L-Platform ID. (*1)
  LplatformName         This is the L-Platform name. (*1)
  LplatformDeletedDate  The date that the L-Platform was returned.
                        Output only if the L-Platform has already been returned. An empty string is
                        output if it has not been returned.
                        Format: YYYY-MM-DD HH:mm:ss.SSS (*1)
                        Example: 2013-03-31 23:59:59.999
  ChargeAmount          The amount of usage charges for each L-Platform.
                        The amounts are values, so they are not enclosed with double-quotes (").
                        Format: ZZZZZZZZZZZ9 (*2)
                        Example: 1000
*1: Data enclosed with double-quotes (").
*2: The amount is shown only, without the currency symbol.
The following is an example of file output:
#TenantName,TenantDisplayName,TenantDeletedDate,Date,TotalChargeAmount,FileVersion,LplatformId,LplatformName,LplatformDeletedDate,ChargeAmount
"TNT00001","tenant01","2012-02-01 09:30:40.001","2012-02",1800,"1.0","","","",
"","","","",,"","TNT00001-LPlatform01","template-TNT00001-LPlatform01","2012-01-10 09:30:40.001",1000
"","","","",,"","TNT00001-LPlatform02","template-TNT00001-LPlatform02","",800
15.4.6.2 Usage charge Detail File
The usage charge detail file is a file in CSV format showing the breakdown of usage charges for each L-Platform.
The output items are as follows:
  Item                  Description
  --------------------  -----------------------------------------------------------------------------
  TenantName            This is the tenant name. (*1)
  TenantDisplayName     This is the display name. (*1)
  TenantDeletedDate     This is the date and time the tenant was deleted.
                        Output only if the tenant has already been deleted. An empty string is output
                        if the tenant has not been deleted.
                        Format: YYYY-MM-DD HH:mm:ss.SSS (*1)
                        Example: 2013-03-31 23:59:59.999
  LplatformId           This is the L-Platform ID. (*1)
  LplatformName         This is the L-Platform name. (*1)
  LplatformDeletedDate  The date that the L-Platform was returned.
                        Output only if the L-Platform has already been returned. An empty string is
                        output if it has not been returned.
                        Format: YYYY-MM-DD HH:mm:ss.SSS (*1)
                        Example: 2013-03-31 23:59:59.999
  Date                  This is the cut-off date.
                        Format: YYYY-MM (*1)
                        Example: 2013-03
  ChargeAmount          The amount of usage charges for each L-Platform.
                        The amounts are values, so they are not enclosed with double-quotes (").
                        Format: ZZZZZZZZZZZ9 (*2)
                        Example: 1000
  FileVersion           This is the version of the file. Fixed as 1.0.
  ItemColumn1           This is a breakdown item (item 1).
                        The template name, virtual L-Server name, or physical L-Server name is output.
  ItemColumn2           This is a breakdown item (item 2).
                        The category of the resource. An empty string is output for an L-Platform
                        template, virtual L-Server, or physical L-Server.
  ItemColumn3           This is a breakdown item (item 3).
                        The image name or resource identifier is output. (*1)
  ItemColumn4           This is a breakdown item (item 4).
                        This is the unit corresponding to the unit price. A hyphen (-) is output for
                        an L-Platform template, virtual L-Server, or physical L-Server.
  UnitPrice             This is the unit price. (*1)
  UsedFrequency         This is the usage time. (*1)
  ItemAmount            This is a breakdown of the amounts.
                        Format: ZZZZZZZZZZZ9 (*2)
                        Example: 50
*1: Data enclosed with double-quotes (").
*2: The amount is shown only, without the currency symbol.
The following is an example of file output:
#TenantName,TenantDisplayName,TenantDeletedDate,LplatformId,LplatformName,LplatformDeleteDate,Date,ChargeAmount,FileVersion,ItemColumn1,ItemColumn2,ItemColumn3,ItemColumn4,UnitPrice,UsedFrequency,ItemAmount
"TNT00001","tenant01","2012-02-01 09:30:40.001","TNT00002-LPlatform01","template-TNT00002-LPlatform01","2012-02-01 09:30:40.001","2012-02",1000,"1.0","","","","","","",
"","","","","","","",,"","Template","","TE_01","-","\200.0000/month","1 month",200
"","","","","","","",,"","Virtual server(serverId001)","","image001","-","\1.2000/h","5h",6
"","","","","","","",,"","Virtual server(serverId001)","CPU","/VMPool","1core,10GHz","\1.6667,\0.5560/h","18h",26
"","","","","","","",,"","Virtual server(serverId001)","Memory capacity","/VMPool","10GB","\20.0000/h","18h",360
"","","","","","","",,"","Virtual server(serverId001)","Disk Capacity","/StragePool","10GB","\8.7500/h","18h",158
"","","","","","","",,"","Virtual server(serverId001)","Disk Capacity","/StragePool2","10GB","\9.0000/h","2h",18
"","","","","","","",,"","Virtual server(serverId001)","Disk Capacity","/StragePool3","10GB","\9.7230/h","6h",59
"","","","","","","",,"","Virtual server(serverId003)","","image003","-","\1.2000/h","5h",6
"","","","","","","",,"","Virtual server(serverId003)","CPU","/VMPool","1core,10GHz","\1.6667,\0.5560/h","18h",26
"","","","","","","",,"","Virtual server(serverId003)","Memory capacity","/VMPool","10GB","\20.0000/h","18h",360
"","","","","","","",,"","Virtual server(serverId003)","Disk Capacity","/StragePool","10GB","\8.7500/h","18h",158
"","","","","","","",,"","Virtual server(serverId003)","Disk Capacity","/StragePool2","10GB","\9.0000/h","2h",18
"","","","","","","",,"","Virtual server(serverId003)","Disk Capacity","/StragePool3","10GB","\9.7230/h","6h",59
"","","","","","","",,"","Physical server(serverId002)","","image002","-","\1.2000/h","5h",6
"","","","","","","",,"","Physical server(serverId002)","CPU","/ServerPool","1core,10GHz","\1.6667,\0.5560/h","18h",26
"","","","","","","",,"","Physical server(serverId002)","Memory capacity","/ServerPool","10GB","\20.0000/h","18h",360
"","","","","","","",,"","Physical server(serverId002)","Disk Capacity","/StragePool","10GB","\8.7500/h","18h",158
"","","","","","","",,"","Physical server(serverId002)","Disk Capacity","/StragePool2","10GB","\9.0000/h","2h",18
"","","","","","","",,"","Physical server(serverId002)","Disk Capacity","/StragePool3","10GB","\9.7230/h","6h",59
Chapter 16 Monitoring Logs
This chapter explains how to monitor logs.
16.1 Operation Logs
This section explains the operation logs of Resource Orchestrator.
Note
- Operation logs should be used only by infrastructure administrators or administrators, as they allow all user operations of Resource
Orchestrator to be viewed.
- Displaying resource names arranged in hierarchies is not supported.
16.1.1 Overview
Resource Orchestrator provides functions to record user operations as operation logs.
Using this function, administrators can monitor the following information:
- Time events were recorded in the operation logs
- User ID
- User group name
- IP address
- Status
- Resource name
- Operations
The operation logs are output in the following format:

Date                    User  Group  IP          Progress  Resource  Event
----------------------- ----- ------ ----------- --------- --------- -------
Element Name  Description               Remarks
------------  ------------------------  -----------------------------------------------------------------
Date          Time events were          Time events are output in the following format:
              recorded in the           YYYY-MM-DD HH:MM:SS.XXX
              operation logs            The time events recorded in the operation logs are output in the
                                        local time. If daylight savings time (a regulation of time in
                                        summer) is set on the operating system, the time events are output
                                        in daylight savings time.
User          User ID                   The user ID of the logged in user is output.
                                        When a special administrator uses a command, a hyphen ("-") is
                                        output.
Group         User group name           The name of the user group the logged in user belongs to is output.
                                        When the logged in user does not belong to a user group, a hyphen
                                        ("-") is output.
IP            IP address                The IP addresses of the connected clients are output.
Progress      Status                    The starting and stopping of operations, and errors, are output.
                                        The following statuses are output:
                                        - Start of operations
                                          Starting(Operation_identifier)
                                        - End of operations
                                          Completed(Operation_identifier)
                                        - Errors during operations
                                          Error(Operation_identifier)
Resource      Resource name             A resource name and a resource identifier are output in the
                                        following format:
                                        "Resource_identifier(Resource_name)"
Event         Operations                The parameters received by the manager are output.
                                        For the information that will be output, refer to "16.1.4 Scope of
                                        Operations Recorded in Operation Logs".
Note
- A hyphen ("-") may be output for an operation identifier.
- For operations involving multiple resources, the same resource name as the one displayed in the Recent Operations on the ROR console
and the event log is output in "Resource".
Example
Output the latest 1 day's worth of data.
For details on how to operate operation logs, refer to "5.13 rcxadm logctl" in the "Reference Guide (Command/XML) CE". For the
information that will be output, refer to "16.1.4 Scope of Operations Recorded in Operation Logs".
>rcxadm logctl list -latest -duration 1D <RETURN>
Date                    User  Group  IP          Progress               Resource                 Event
----------------------- ----- ------ ----------- ---------------------- ------------------------ ---------------------
2011-03-10 21:15:00.390 -     -      10.20.30.53 Starting(BX620-1_21)   BX620-1_473(snap)        l_servers create
2011-03-10 21:15:06.250 -     -      10.20.30.53 Error(BX620-1_21)      BX620-1_473(snap)        l_servers create
2011-03-10 21:26:05.953 -     -      10.20.30.73 Starting(BX620-1_25)   BX620-1_510(snap)        l_servers create
2011-03-10 21:29:21.150 -     -      10.20.30.73 Completed(BX620-1_25)  BX620-1_510(snap)        l_servers create
2011-03-10 23:15:39.750 admin -      10.20.30.53 Starting(BX620-1_35)   BX620-1_510(snap)        server_images snapshot
2011-03-10 23:15:46.781 admin -      10.20.30.53 Completed(BX620-1_35)  BX620-1_510(snap)        server_images snapshot
2011-03-10 23:16:23.625 admin -      10.20.30.53 Starting(BX620-1_36)   BX620-1_510(snap)        server_images restore
2011-03-10 23:16:28.484 admin -      10.20.30.53 Completed(BX620-1_36)  BX620-1_510(snap)        server_images restore
2011-03-10 23:17:00.859 admin -      10.20.30.53 Starting(BX620-1_37)   BX620-1_510(snap)        server_images destroy
2011-03-10 23:17:04.718 admin -      10.20.30.53 Completed(BX620-1_37)  BX620-1_510(snap)        server_images destroy
2011-03-10 23:19:25.734 admin -      10.20.30.53 Starting(BX620-1_38)   BX620-1_744(image_test)  server_images create
2011-03-10 23:27:29.640 admin -      10.20.30.53 Completed(BX620-1_38)  BX620-1_744(image_test)  server_images create
2011-03-10 23:42:37.171 admin -      10.20.30.53 Starting(BX620-1_40)   BX620-1_578(image_test)  server_images destroy
2011-03-10 23:42:47.460 admin -      10.20.30.53 Completed(BX620-1_40)  BX620-1_578(image_test)  server_images destroy
2011-03-10 23:51:06.620 userA groupA 127.0.0.1   Starting(BX620-1_41)   BX620-1_806(LS_RT_A001)  l_servers create
2011-03-10 23:53:06.437 userA groupA 127.0.0.1   Completed(BX620-1_41)  BX620-1_806(LS_RT_A001)  l_servers create
2011-03-10 23:53:39.265 userA groupA 127.0.0.1   Starting(BX620-1_42)   BX620-1_806(LS_RT_A001)  l_servers start
2011-03-10 23:54:26.640 userA groupA 127.0.0.1   Completed(BX620-1_42)  BX620-1_806(LS_RT_A001)  l_servers start
2011-03-10 23:54:45.531 userA groupA 127.0.0.1   Starting(BX620-1_43)   BX620-1_806(LS_RT_A001)  l_servers restart
2011-03-10 23:55:26.859 userA groupA 127.0.0.1   Completed(BX620-1_43)  BX620-1_806(LS_RT_A001)  l_servers restart
2011-03-10 23:55:48.953 userA groupA 127.0.0.1   Starting(BX620-1_44)   BX620-1_806(LS_RT_A001)  l_servers stop
2011-03-10 23:56:26.390 userA groupA 127.0.0.1   Completed(BX620-1_44)  BX620-1_806(LS_RT_A001)  l_servers stop
2011-03-10 23:57:11.968 userA groupA 127.0.0.1   Starting(BX620-1_46)   BX620-1_806(LS_RT_A001)  l_servers attach
2011-03-10 23:58:21.359 userA groupA 127.0.0.1   Completed(BX620-1_46)  BX620-1_806(LS_RT_A001)  l_servers attach
2011-03-10 23:58:35.620 userA groupA 127.0.0.1   Starting(BX620-1_47)   BX620-1_806(LS_RT_A001)  l_servers detach
2011-03-10 23:59:23.343 userA groupA 127.0.0.1   Completed(BX620-1_47)  BX620-1_806(LS_RT_A001)  l_servers detach
2011-03-10 23:59:40.265 userA groupA 127.0.0.1   Starting(BX620-1_48)   BX620-1_806(LS_RT_A001)  l_servers migrate
2011-03-11 00:00:53.984 userA groupA 127.0.0.1   Completed(BX620-1_48)  BX620-1_806(LS_RT_A001)  l_servers migrate
2011-03-11 00:01:09.296 userA groupA 127.0.0.1   Starting(BX620-1_50)   BX620-1_806(LS_RT_A001)  l_servers update
2011-03-11 00:02:58.125 userA groupA 127.0.0.1   Completed(BX620-1_50)  BX620-1_806(LS_RT_A001)  l_servers update
2011-03-11 00:04:42.640 userA groupA 127.0.0.1   Starting(BX620-1_57)   BX620-1_806(LS_RT_A001)  l_servers destroy
2011-03-11 00:05:22.921 userA groupA 127.0.0.1   Completed(BX620-1_57)  BX620-1_806(LS_RT_A001)  l_servers destroy
2011-03-11 00:35:44.250 userA groupA 127.0.0.1   Starting(BX620-1_117)  BX620-1_954(LS_RT_A001)  folders move_resource
2011-03-11 00:35:44.625 userA groupA 127.0.0.1   Completed(BX620-1_117) BX620-1_954(LS_RT_A001)  folders move_resource
2011-03-11 01:04:34.880 admin -      10.20.30.53 Starting(BX620-1_570)  BX620-1_2193(master-52)  l_servers convert
2011-03-11 01:04:36.650 admin -      10.20.30.53 Completed(BX620-1_570) BX620-1_2193(master-52)  l_servers convert
2011-03-11 01:05:05.568 admin -      10.20.30.53 Starting(BX620-1_571)  BX620-1_2193(master-52)  l_servers revert
2011-03-11 01:05:06.451 admin -      10.20.30.53 Completed(BX620-1_571) BX620-1_2193(master-52)  l_servers revert
Note
The starting point (Starting) of recording for each operation is when the operation is displayed in the Recent Operations on the ROR
console.
- 137 -
16.1.2 Usage Method
This section explains the methods for configuring and operating operation logs.
Perform the following procedure:
1. Disk space estimation
Infrastructure administrator (infra_admin) estimates the disk size to use for storing operation logs.
Estimate the amount of disk space using the following formula, and then decide the number of days to retain operation logs.
{(Number_of_operations_of_the_resource_in_1_day) * (Number_of_target_resources) * 1(KB)} *
(retention_period)
Example
Disk space when estimating that the retention period is 180 days (the default), the target resource is operated 4 times a day, and the number of target resources is 256:

Retention period: 180 days worth
Formula: 4 * 256 * 1 (KB) * 180 = 184320 (KB)
Necessary disk space: Approx. 185 MB
2. Check the settings in the [Date and Time properties] dialog of the operating system
Check whether the following tabs of the OS are configured correctly. If the settings are incorrect, set them correctly.
- [Date and Time] tab
- [Time Zone] tab
- [Internet Time (or the Network Time Protocol)] tab
3. Configure the retention period of operation logs
Execute the rcxadm logctl set command, to configure the retention period for operation logs.
For details on the rcxadm logctl set command, refer to "5.13 rcxadm logctl" in the "Reference Guide (Command/XML) CE".
4. Start recording of the operation log
Execute the rcxadm logctl start command to start recording the operation log.
When the rcxadm logctl start command is executed, the operation logs will be recorded for the configured retention period from
the date recording of operation logs is started.
Days when no events occur, or days when recording is not possible due to the manager being stopped are not counted as dates for
recording.
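Example
An illustrative command sequence for steps 3. and 4. is shown below. The retention period option syntax is shown here only as an assumption; confirm the exact syntax in "5.13 rcxadm logctl" in the "Reference Guide (Command/XML) CE".

>rcxadm logctl set -attr retention=180 <RETURN>
>rcxadm logctl start <RETURN>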
Note
- When the settings of the OS [Date and Time Properties] dialog have been changed after starting the recording of operation logs, it may not be possible to correctly display or delete the operation logs.
- When changing the settings of the OS [Date and Time Properties] dialog after starting the recording of operation logs, refer to "Modification of the settings in the [Date and Time properties] dialog after starting the recording of operation logs" in "16.1.3 Retention".
- When a Cloud Edition license has been registered, recording of operation logs is started automatically when the manager is started.
- 138 -
16.1.3 Retention
This section explains the retention of operation logs.
- Periodic deletion
Due to extended periods of operation or modification of retention periods, operation logs which have exceeded a certain retention
period are periodically deleted.
The timing of deletion (based on retention period checks) is set to take place as the first operation after the date changes.
Note
- The recording period is the retention period + 1. After periodic deletion is performed, the recording period will be equal to the
retention period.
- Periodic deletion is executed when the next recording is started (the first recording after the date changes), and operation logs will
be deleted in chronological order.
- Deletion
Users can delete unnecessary operation logs by defining a retention period.
- Backup and Restore
Use the following procedure for backup and restoration of operation logs.
- Backup
1. Confirm the "retention folder" for the operation logs.
2. Stop recording operation logs.
3. Back up the "retention folder" confirmed in 1.
4. Start recording operation logs.
- Restore
1. Confirm the "retention folder" for the operation logs.
2. Stop recording operation logs.
3. Restore the backed-up folder into the "retention folder" confirmed in 1.
4. Start recording operation logs.
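Example
A minimal sketch of the backup procedure above, assuming a Linux manager, assuming /backup/oplogs as the backup destination, and assuming that recording is stopped and started with the stop and start subcommands of rcxadm logctl (refer to "5.13 rcxadm logctl" in the "Reference Guide (Command/XML) CE" for the actual subcommands):

# rcxadm logctl stop <RETURN>
# cp -rp <retention_folder> /backup/oplogs <RETURN>
# rcxadm logctl start <RETURN>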
- Modification of the settings in the [Date and Time properties] dialog after starting the recording of operation logs
Use the following procedure to make changes.
1. Stop recording operation logs.
2. Display the operation logs and back up the necessary portions from the records.
3. Confirm the "retention folder" for the operation logs.
4. Empty the "retention folder" by moving all files in the "retention folder" checked in 3. to a new location.
5. Modify the settings in the [Date and Time properties] dialog.
6. Start recording operation logs.
For details on operations for operation logs, refer to "5.13 rcxadm logctl" in the "Reference Guide (Command/XML) CE".
Note
- Once recording operations are stopped, user operations are not recorded until the recording of logs is started again.
- Performing this procedure resets the recording period of operation logs to 0.
16.1.4 Scope of Operations Recorded in Operation Logs
The scope of operations recorded in operation logs and the character strings displayed in the Event column are as indicated below.
Table 16.1 Scope of Operations
The table lists, for each resource type, the action and the string displayed in the Event column.

Resource Type: L-Server
- Creation: l_servers create
- Deletion: l_servers destroy
- Modification: l_servers update
- Powering on: l_servers start
- Powering off: l_servers stop
- Restarting: l_servers restart
- Disk addition: l_servers attach
- Disk detachment: l_servers detach
- Folder migration: folders move_resource
- Migration: l_servers migrate
- Physical L-Server configuration: l_servers set_attrs
- Transmission of network information to physical L-Servers: l_servers setup
- Conversion: l_servers convert
- Reversion: l_servers revert

Resource Type: Image
- Creation: server_images create
- Deletion: server_images destroy
- Restore: server_images restore
- Snapshot collection: server_images snapshot

Resource Type: L-Platform
- Creation: l_platforms create
- Deletion: l_platforms destroy
- Modification: l_platforms update

Resource Type: Tenant
- Creation: tenants create
- Deletion: tenants destroy
- Modification: tenants update

Resource Type: Resource Folders
- Creation: folders create
- Deletion: folders destroy
- Modification: folders update
16.2 Audit Logs
This section explains audit logs.
By looking up audit logs, it is possible to find out who performed what operation when.
16.2.1 Configuration Management Audit Log
This section explains the configuration management audit log.
- 140 -
The file name, file size, and number of generations of the audit log are shown below:
Log name: vsys_audit_log
Description: Audit logs are output to this log.
File size: 10 MB
Number of generations: 10 generations (*)
* Note: If the number of saved generations exceeds 10 generations, old generations will be deleted, starting with the oldest generation.
The file size and the number of generations to be saved can be changed. Refer to "Procedure for changing the size of the file" and "Procedure
for changing the number of generations to be saved" for details.
With default settings, audit logs will be held for approximately 50 days if 10 people use this product for approximately one hour per day
per person. If necessary, back up audit logs at appropriate intervals according to the usage frequency. The names of the files to be backed
up are shown below:
vsys_audit_log.[n] (where "n" is the generation number)
Example: To back up three generations' worth of files, the names of the files to be backed up are as
follows:
- vsys_audit_log.1
- vsys_audit_log.2
- vsys_audit_log.3
Output Destination
The log output destinations are shown below:

[Windows Manager]
Output folder: Installation_folder\RCXCFMG\logs
Output destination file: vsys_audit_log.[n] (n is the number of generations)

[Linux Manager]
Output folder: /var/opt/FJSVcfmg/logs
Output destination file: vsys_audit_log.[n] (n is the number of generations)
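For example, to back up all saved generations of the audit log on a Linux manager (a sketch; the backup destination /backup/audit is an assumption, adjust it to your environment):

# cp -p /var/opt/FJSVcfmg/logs/vsys_audit_log.* /backup/audit/ <RETURN>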
Output format
Audit logs are CSV files where the following items are output in the following order:

<date/time when the operation was performed>,<user ID>,<organization ID>,<operation type>,<parameters>,<operation result>

Item: date/time when the operation was performed
Description: YYYY-MM-DD HH:MM:SS.sss (local time).

Item: user ID
Description: The user ID of the user that executed the operation.

Item: organization ID
Description: The organization ID of the user that executed the operation.

Item: operation type
Description: A string indicating the content of the operation. (*)

Item: parameters
Description: The parameters specified by the request.

Item: operation result
Description: "SUCCESS" if the operation was successful and "FAILURE" if the operation failed.
* Note: The operation types are as follows:

Operation type: Description
- AccessControl#evaluate: Checking access permissions.
- AccessControl#getAuthority: Obtaining information about resource operation privileges for a user.
- DeployMaster#delete: Deleting configuration information.
- DeployMaster#getDetail: Obtaining detailed configuration information.
- DeployMaster#getSaveList: Obtaining a list of configuration information.
- DeployMaster#getTemplate: Obtaining a deployment master from an L-Platform template.
- DeployMaster#save: Saving configuration information.
- DeployMaster#setStatus: Setting the status of configuration information.
- Event#notify: Notifying events that occurred on a server. With the current version, changes to the power status, migration, reconfiguring, and pool information events are notified.
- EventLog#getList: Obtaining a list of event logs.
- Images#changeShow: Changing whether images are displayed.
- Images#getDetail: Obtaining detailed image information.
- Images#getList: Obtaining a list of images registered.
- Images#getServerType: Obtaining a list of L-Server Templates.
- Images#register: Registering image information with this product.
- Images#search: Searching image information.
- Images#unregister: Deregistering image information from this product.
- Images#update: Updating image information.
- Network#addCategory: Registering a network resource type.
- Network#deleteCategory: Deleting a network resource type.
- Network#getCategoryList: Obtaining a list of network types.
- Network#getList: Obtaining a list of networks.
- Network#getRulesetList: Obtaining a list of rulesets.
- Network#detailRuleset: Obtaining detailed ruleset information.
- Parameters#delete: Deleting parameter information.
- Parameters#getDetail: Obtaining detailed parameter information.
- Parameters#getList: Obtaining a list of parameter information.
- Parameters#set: Registering or updating parameter information.
- Softwares#create: Registering software information.
- Softwares#delete: Deleting software information.
- Softwares#getDetail: Obtaining detailed software information.
- Softwares#getList: Obtaining a list of software.
- Softwares#update: Updating software information.
- Templates#changeShow: Changing the settings as to whether L-Platform templates can be displayed.
- Templates#deletePublic: Deleting L-Platform templates.
- Templates#getDetail: Obtaining detailed L-Platform template information.
- Templates#importPublic: Registering L-Platform template information.
- Templates#search: Searching an L-Platform template.
- Templates#updatePublic: Updating templates.
- VDisk#getList: Obtaining a list of existing disks.
- VServer#addPatch: Registering patch information.
- VServer#attach: Adding an expansion disk.
- VServer#backup: Creating a snapshot.
- VServer#cancelError: Releasing the error status of snapshots or restorations.
- VServer#changeSpec: Changing server performance.
- VServer#cloning: Collecting a cloning image from a deployed server.
- VServer#create: Adding a server.
- VServer#deletePatch: Deleting patch information.
- VServer#detach: Deleting an expansion disk.
- VServer#getBackupList: Obtaining a list of snapshots.
- VServer#getInitPW: Obtaining the initial password for a server.
- VServer#release: Returning a server.
- VServer#removeBackup: Deleting a snapshot.
- VServer#restore: Restoring a server.
- VServer#search: Obtaining server information.
- VServer#start: Starting a server.
- VServer#stop: Stopping a server.
- VSYS#addNetwork: Adding a segment to an L-Platform.
- VSYS#changeOrg: Changing a user ID or organization ID for an L-Platform.
- VSYS#convertFolder: Converting a system deployed with Systemwalker Software Configuration Manager V14.1 to an L-Platform for this product.
- VSYS#delete: Deleting an L-Platform from just the resource window after deleting failed, etc.
- VSYS#deleteLServer: Deleting a server included in an L-Platform from the L-Platform.
- VSYS#deleteNetwork: Deleting a segment from an L-Platform.
- VSYS#deploy: Deploying an L-Platform.
- VSYS#flowCancel: Canceling the application to deploy an L-Platform or change a configuration.
- VSYS#flowCancelRetry: Canceling the flow for saving a configuration and then making another application.
- VSYS#flowDeploy: Deploying an L-Platform. (For flows: no operation after authorization.)
- VSYS#flowEnableDeploy: Deploying an L-Platform. (For flows: operation to perform after authorization.)
- VSYS#flowEnableRelease: Returning all L-Platforms as a batch. (For flows: operation to perform after authorization.)
- VSYS#flowEnableUpdate: Reconfiguring an L-Platform. (For flows: operation to perform after authorization.)
- VSYS#flowError: Executing post-processing when an error has occurred with an application flow for deploying an L-Platform or changing a configuration.
- VSYS#flowForward: Applying to deploy an L-Platform or change a configuration.
- VSYS#flowRejectApplication: Rejecting a deployment application.
- VSYS#flowRelease: Returning all L-Platforms as a batch. (For flows: no operation after authorization.)
- VSYS#flowReleaseApplication: Applying to return an L-Platform.
- VSYS#flowReleaseForward: Setting the status of a return application to "forwarding complete".
- VSYS#flowSaveCancel: Canceling the application to deploy an L-Platform or change a configuration.
- VSYS#flowUpdate: Reconfiguring an L-Platform. (For flows: no operation after authorization.)
- VSYS#getConfigurations: Obtaining configuration information.
- VSYS#getCurrency: Obtaining product currency information.
- VSYS#getDetail: Obtaining detailed information for an L-Platform.
- VSYS#getHostnameCounter: Obtaining a list of host name serial number counters.
- VSYS#getList: Obtaining a list of L-Platforms.
- VSYS#getLoginDate: Obtaining the date/time when the user logged in to the L-Platform Manager View.
- VSYS#getOperationLNetDevResult: Obtaining the operation logs for the server load balancer.
- VSYS#getPoolList: Obtaining a list of resource pools for ServerView Resource Orchestrator.
- VSYS#getTask: Obtaining task information.
- VSYS#getTenantList: Obtaining a list of tenants.
- VSYS#importLServer: Importing an L-Server under resource management to an L-Platform.
- VSYS#lock: Locking an L-Platform.
- VSYS#operateLNetDev: Executing server load balancer operations.
- VSYS#recoverDisk: Recovering a disk.
- VSYS#recoverNet: Recovering a network segment.
- VSYS#recoverNic: Recovering an NIC.
- VSYS#recoverServer: Recovering a server.
- VSYS#recoverSystem: Recovering an L-Platform.
- VSYS#release: Returning all L-Platforms as a batch.
- VSYS#resetHostnameCounter: Resetting the host name serial number counter.
- VSYS#setChangeInfo: Recovering an L-Platform name, server name, or server specification.
- VSYS#setDisplayStatus: Changing the status of an L-Platform.
- VSYS#setLoginDate: Updating the date/time when the user logged in to the L-Platform Manager View.
- VSYS#setRecoverInfo: Recovering a resource ID or disk number.
- VSYS#setServerStatus: Recovering the status of a server.
- VSYS#setUndeploy: Changing the status of an L-Platform, server or disk to "undeployed".
- VSYS#start: Starting the servers in an L-Platform as a batch.
- VSYS#startServers: Starting the servers for multiple tenants as a batch.
- VSYS#stop: Stopping the servers in an L-Platform as a batch.
- VSYS#stopServers: Stopping the servers for multiple tenants as a batch.
- VSYS#syncServerStatus: Synchronizing server power statuses in a batch.
- VSYS#syncSpec: Synchronizing server performance information with the actual state.
- VSYS#unlock: Unlocking an L-Platform.
- VSYS#update: Reconfiguring an L-Platform.
- VSYS#updateLNetDev: Updating the parameters for the firewall and server load balancer.
- VSYS#updateRemarks: Changing the L-Platform remarks column input values after deploying.
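An illustrative example of a single audit log record in this format is shown below. The values are hypothetical and the parameters string varies by operation; it is shown here only to make the column order concrete.

2011-03-10 23:51:06.437,user001,tenantA,VServer#start,"<parameters specified by the request>",SUCCESS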
Procedure for changing output destination
Use the following procedure to change the audit log output destination.
1. Rewrite the settings file.
The following table shows the settings file and the location to change:

[Windows Manager]
Log name: vsys_audit_log
Settings file: Installation_folder\RCXCFMG\config\vsys_log4j.xml
Location to change (one location): <param name="File" value="C:\ProgramData\Fujitsu\SystemwalkerCF-MG\logs\vsys_audit_log" />

[Linux Manager]
Log name: vsys_audit_log
Settings file: /etc/opt/FJSVcfmg/config/vsys_log4j.xml
Location to change (one location): <param name="File" value="/var/opt/FJSVcfmg/logs/vsys_audit_log" />
2. Restart the manager.
Procedure for changing the size of the file
Use the following procedure to change the audit log file size.
1. Rewrite the settings file.
The location to change is shown below:

[Windows Manager]
Log name: vsys_audit_log
Settings file: Installation_folder\RCXCFMG\config\vsys_log4j.xml
Location to change (one location): Change the value ("10MB") of the <param name="MaxFileSize" value="10MB" /> element under the <appender name="auditfileout" class="org.apache.log4j.RollingFileAppender"> element to the desired value.
Example: value="100MB" (to change the size of the audit log file to 100 MB)

[Linux Manager]
Log name: vsys_audit_log
Settings file: /etc/opt/FJSVcfmg/config/vsys_log4j.xml
Location to change (one location): Change the value ("10MB") of the <param name="MaxFileSize" value="10MB" /> element under the <appender name="auditfileout" class="org.apache.log4j.RollingFileAppender"> element to the desired value.
Example: value="100MB" (to change the size of the audit log file to 100 MB)
2. Restart the manager.
Procedure for changing the number of generations to be saved
Use the following procedure to change the number of audit log generations to be saved.
1. Rewrite the settings file.
The location to change is shown below:

[Windows Manager]
Log name: vsys_audit_log
Settings file: Installation_folder\RCXCFMG\config\vsys_log4j.xml
Location to change (one location): Change the value ("9") of the <param name="MaxBackupIndex" value="9" /> element under the <appender name="auditfileout" class="org.apache.log4j.RollingFileAppender"> element to the desired value.
Example: value="100" (to change the number of audit log generations to 100)

[Linux Manager]
Log name: vsys_audit_log
Settings file: /etc/opt/FJSVcfmg/config/vsys_log4j.xml
Location to change (one location): Change the value ("9") of the <param name="MaxBackupIndex" value="9" /> element under the <appender name="auditfileout" class="org.apache.log4j.RollingFileAppender"> element to the desired value.
Example: value="100" (to change the number of audit log generations to 100)
2. Restart the manager.
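The relevant part of vsys_log4j.xml, as referenced by the two procedures above, looks roughly like the following excerpt. This is an illustrative sketch only; the actual file contains additional parameters and should not be edited beyond the values described above.

<appender name="auditfileout" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="/var/opt/FJSVcfmg/logs/vsys_audit_log" />
  <param name="MaxFileSize" value="10MB" />
  <param name="MaxBackupIndex" value="9" />
  ...
</appender>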
16.2.2 Audit Logs of Output by the Tenant Management, Accounting, Access
Control and System Condition
This section explains audit logs of output by the tenant management, accounting, access control and system condition functions.
Point
- If L-Platform Management is operated, audit logs will be output to Configuration Manager. Refer to "16.2.1 Configuration
Management Audit Log" for details.
- Audit logs relating to the registration, modification or deletion of infrastructure administrators, infrastructure operators, infrastructure
monitors, administrators, operators and/or monitors can be checked in the OpenDS access log.
The storage locations and file names of OpenDS access logs are as follows.
[Windows Manager]
OpenDS Installation_folder\opends\logs\access
- 146 -
[Linux Manager]
/opt/fujitsu/ServerViewSuite/opends/logs/access
Refer to the OpenDS website for details on OpenDS access logs.
URL: https://docs.opends.org/2.2/page/DefAccessLog (As of February 2012)
Output format
Audit logs are CSV files where the following items are output in the following order:
Output format
operation date/time,user ID,tenant name,operation type,operation information,operation result
Item: operation date/time
Description: The date/time when the operation was performed. The date/time is output using the following format: YYYY-MM-DD HH:MM:SS.sss (local time)

Item: user ID
Description: The user ID of the user that performed the operation.

Item: tenant name
Description: The tenant name of the user that executed the operation. If the operation is performed from the tenant management GUI, the tenant name is fixed as "ctmgadm".

Item: operation type
Description: The type of the operation performed.

Item: operation information
Description: Detailed information for the operation type.

Item: operation result
Description: The result of the operation performed. One of the following values is output:
SUCCESS: When the operation was successful
FAILURE: When the operation failed
Output files
Audit logs are output to the following files:

Function: Tenant management (GUI operations from the ROR Console)
[Windows Manager] Installation_folder\RCXCTMG\SecurityManagement\log\ctsec_audit_a.log
[Linux Manager] /var/opt/FJSVctsec/log/ctsec_audit_a.log

Function: Tenant management (creating users from the ROR Console: Provisional account registration method)
[Windows Manager] Installation_folder\RCXCTMG\SecurityManagement\log\ctsec_audit_s.log
[Linux Manager] /var/opt/FJSVctsec/log/ctsec_audit_s.log

Function: Accounting
[Windows Manager] Installation_folder\RCXCTMG\Charging\log\ctchg_audit.log
[Linux Manager] /var/opt/FJSVctchg/log/ctchg_audit.log

Function: Accounting (GUI operations from the ROR Console)
[Windows Manager] Installation_folder\RCXCTMG\Charging\log\charging_audit.log
[Linux Manager] /var/opt/FJSVctchg/log/charging_audit.log

Function: Accounting (Published API operation)
[Windows Manager] Installation_folder\RCXCTMG\Charging\log\accounting_audit.log
[Linux Manager] /var/opt/FJSVctchg/log/accounting_audit.log

Function: Access Control
[Windows Manager] Installation_folder\RCXCTMG\SecurityManagement\log\ctac_audit.log
[Linux Manager] /var/opt/FJSVctsec/log/ctac_audit.log

Function: System condition
[Windows Manager] Installation_folder\SWRBAM\CMDB\FJSVcmdbm\var\log\audit\uigui\cmdb_audit.log
[Linux Manager] /opt/FJSVcmdbm/var/log/audit/uigui/cmdb_audit.log
Procedure for changing the file size and the number of generations held
By default, audit log files are rotated when they reach 10 MB.
To change the maximum size of audit log files or the maximum number of generations held, perform the following procedure:
1. Stop the manager.
2. Edit the appropriate items in the following definition files:
- Definition files

Function: Tenant management (GUI operations from the ROR Console)
[Windows Manager] Installation_folder\RCXCTMG\conf\auditsecalog4j.xml
[Linux Manager] /etc/opt/FJSVctmg/conf/auditsecalog4j.xml

Function: Tenant management (creating users from the ROR Console: Provisional account registration method)
[Windows Manager] Installation_folder\RCXCTMG\conf\auditsecslog4j.xml
[Linux Manager] /etc/opt/FJSVctmg/conf/auditsecslog4j.xml

Function: Accounting
[Windows Manager] Installation_folder\RCXCTMG\conf\auditchglog4j.xml
[Linux Manager] /etc/opt/FJSVctmg/conf/auditchglog4j.xml

Function: Accounting (GUI operations from the ROR Console)
[Windows Manager] Installation_folder\RCXCTMG\conf\auditchgguilog4j.xml
[Linux Manager] /etc/opt/FJSVctmg/conf/auditchgguilog4j.xml

Function: Accounting (Published API operation)
[Windows Manager] Installation_folder\RCXCTMG\conf\auditacntlog4j.xml
[Linux Manager] /etc/opt/FJSVctmg/conf/auditacntlog4j.xml

Function: Access Control
[Windows Manager] Installation_folder\RCXCTMG\conf\auditaclog4j.xml
[Linux Manager] /etc/opt/FJSVctmg/conf/auditaclog4j.xml

Function: System condition
[Windows Manager] Installation_folder\SWRBAM\CMDB\FJSVcmdbm\CMDBConsole\WEB-INF\classes\log4j.properties
[Linux Manager] /opt/FJSVcmdbm/CMDBConsole/WEB-INF/classes/log4j.properties

- Setting items

Setting item: MaxFileSize
Description: This item sets the maximum size of audit log files. The file size can be specified using a combination of an integer greater than 0 and a unit (KB, MB or GB). (*1), (*2)
Example: <param name="MaxFileSize" value="500KB"/>

Setting item: MaxBackupIndex
Description: This item sets the maximum number of generations of the audit log file. An integer greater than 0 can be specified. (*1)
Example: <param name="MaxBackupIndex" value="50"/>
*1: Do not specify decimal fractions. Also, do not leave a blank space.
*2: Do not specify a maximum file size that is larger than the size of the disk. Conversely, do not set values that are too
small for the maximum file size, or else the logs will be overwritten frequently.
3. Start the manager.
Operation types and operation information
The following table shows the operation types and operation information that are output to audit logs:

Function: Tenant management (GUI operations from the ROR Console)

- Operation type: registUser
  Content: Notify user registration
  Operation information (*1): "mail=""xxx@com"""

- Operation type: createUser
  Content: Create users
  Operation information (*1): "userid=""<user ID of the user created>""&mail=""xxx@com""&lastname=""<last name>""&firstname=""<first name>""&auth=""tenant_admin|tenant_operator|tenant_monitor|tenant_user""&explanation=""xxxx""&corporatename=""fujitsu""&emergencymail=""yyy@com""&emergencytel=""0000""" (output only if the operation was performed by the infrastructure administrator, or by the tenant administrator using the direct registration method)

- Operation type: deleteUser
  Content: Delete users
  Operation information (*1): "userid=""<user ID of the user deleted>"""

- Operation type: updateUser
  Content: Update user information
  Operation information (*1): "userid=""<user ID of the user updated>""&mail=""xxx@com""&lastname=""<last name>""&firstname=""<first name>""&auth=""infra_admin|infra_operator|administrator|monitor|operator|tenant_admin|tenant_operator|tenant_monitor|tenant_user""&explanation=""xxxx""&corporatename=""fujitsu""&emergencymail=""yyy@com""&emergencytel=""0000"""

- Operation type: listUser
  Content: Get a list of user information
  Operation information (*1): None.

- Operation type: moveUser
  Content: Relocate users
  Operation information (*1): "userid=""<user ID of the user that has been relocated>""&oldorgid=""<tenant name of the original tenant>""&neworgid=""<tenant name of the tenant to which the user has been relocated>"""

- Operation type: updatePassword
  Content: Update passwords
  Operation information (*1): "userid=""<user ID of the user whose password has been updated>"""

- Operation type: createOrg (*2) (*3)
  Content: Create a tenant
  Operation information (*1):
  - With no calculation of usage charges: "orgid=""<tenant name of the tenant that has been created>""&orgname=""<tenant name>""&mail=""xxx@com""&globalpool=""<global pool that has been set>"""
  - With calculation of usage charges: "orgid=""<tenant name of the tenant that has been created>""&orgname=""<tenant name>""&mail=""xxx@com""&globalpool=""<global pool that has been set>""&cutoffdate=""<cut off date>""&accountingmail=""<email address where usage charges are sent>"""

- Operation type: deleteOrg (*3)
  Content: Delete tenants
  Operation information (*1): "orgid=""<tenant name of the tenant that has been deleted>"""

- Operation type: updateOrg (*2) (*3)
  Content: Update tenant information
  Operation information (*1):
  - With no calculation of usage charges: "orgid=""<tenant name of the tenant that has been updated>""&orgname=""<tenant name>""&mail=""xxx@com""&globalpool=""<global pool that has been set>"""
  - With calculation of usage charges: "orgid=""<tenant name of the tenant that has been updated>""&orgname=""<tenant name>""&mail=""xxx@com""&globalpool=""<global pool that has been set>""&cutoffdate=""<cut off date>""&accountingmail=""<email address where usage charges are sent>"""

- Operation type: listOrg
  Content: Get a list of tenant information
  Operation information (*1): None.

Function: Tenant management (creating users from the ROR Console: Provisional account registration method)

- Operation type: createUser
  Content: Create users
  Operation information (*1): "userid=""<user ID of the user created>""&mail=""xxx@com""&lastname=""<last name>""&firstname=""<first name>""&auth=""tenant_user""&explanation=""xxxx""&corporatename=""fujitsu""&emergencymail=""yyy@com""&emergencytel=""0000"""

Function: Accounting

- Operation type: updatePMaster
  Content: Update product master
  Operation information (*1): None.

- Operation type: listPMaster
  Content: Get a list of product master
  Operation information (*1): None.

- Operation type: updateMlogSch
  Content: Update periodic log schedule settings
  Operation information (*1): "use=""yes|no""&time=""<time of output of periodic log>""&type=""<frequency of output of periodic log>""&day=""<day of output of periodic log>"""

- Operation type: listMlogSch
  Content: Get a list of periodic log schedule settings
  Operation information (*1): "use=""yes|no""&time=""<time of output of periodic log>""&type=""<frequency of output of periodic log>""&day=""<day of output of periodic log>"""

- Operation type: listMeteringlog
  Content: Get a list of metering logs
  Operation information (*1): "start=""<start date of the acquisition period>""&end=""<end date of the acquisition period>""&type=""event|period"""

- Operation type: deleteMlog
  Content: Delete metering logs
  Operation information (*1): "retention=""<log entry retention period>"""

Function: Accounting (GUI operations from the ROR Console)

- Operation type: listLplatformCharge
  Content: Get a list of usage charge for each L-Platform
  Operation information (*1): None.

- Operation type: getDetailCharge
  Content: Get a breakdown of L-Platform usage charges
  Operation information (*1): None.

- Operation type: listTenantCharge
  Content: Get a list of usage charge for each tenant
  Operation information (*1): None.

- Operation type: listLplatformChargeByTenant
  Content: Get a list of usage charge for each L-Platform under the specified tenant
  Operation information (*1): None.

- Operation type: downloadFile
  Content: File download
  Operation information (*1): "target=""LplatformChargeList|DetailCharge|TenantChargeList|LplatformChargeListByTenant"""

Function: Accounting (Published API operation)

- Operation type: getResourceUsage
  Content: Get resource usage
  Operation information (*1): None.

- Operation type: getUsagePoint
  Content: Get usage frequency
  Operation information (*1): None.

- Operation type: registerUsagePoint
  Content: Register usage frequency (*4)
  Operation information (*1): "date=""<date of data registered>(*5)""&id=""<L-Platform ID>""&name=""<L-Platform name>""&tenantname=""<tenant name of managed tenant>""&tenantdeletedate=""<date when managed tenant was deleted>(*6)"""

- Operation type: getDailyCharge
  Content: Get daily usage charges
  Operation information (*1): None.

- Operation type: registerDailyCharge
  Content: Register daily usage charges (*4)
  Operation information (*1): "date=""<date of data registered>""&id=""<L-Platform ID>""&name=""<L-Platform name>""&tenantname=""<tenant name of managed tenant>""&tenantdeletedate=""<date when managed tenant was deleted>"""

- Operation type: getMonthlyCharge
  Content: Get usage charges
  Operation information (*1): None.

- Operation type: registerMonthlyCharge
  Content: Register monthly usage charges (*4)
  Operation information (*1): "date=""<date of data registered>""&id=""<L-Platform ID>""&name=""<L-Platform name>""&tenantname=""<tenant name of managed tenant>""&tenantdeletedate=""<date when managed tenant was deleted>"""

- Operation type: getTenants
  Content: Get tenant information
  Operation information (*1): None.

Function: Access Control

- Operation type: updateAuthority
  Content: Access authority modifications
  Operation information (*1): "roleid=""<role name of modification target>""&actionid=""<action ID of modification target>""&permission=""<allow/deny status of specified action>""" (The above information will be output as follows: one information item when a role name is specified, or, if a file is specified, the number of information items will match the number of action IDs.)

Function: System condition (*7)

- Operation type: dispUsageStatus
  Content: Display usage condition
  Operation information (*1): None.
*1: If a value is not set for an item, """" is output.
An example is shown below.
... &globalpool=""""...
*2: If multiple global pools have been set, the global pools are output separated by commas.
An example is shown below.
...&globalpool=""/AddressPool,/ImagePool""...
*3: For the operation result of createOrg, deleteOrg, or updateOrg, the processing result will be output. Use the operation log (resource
operation) to check the actual processing result. Refer to "16.1 Operation Logs" for information on how to check the operation log (resource
operation).
*4: Multiple lines may be output each time there is a registration operation.
*5: Format is "yyyy-MM-dd".
*6: Format is "yyyy-MM-ddTHH:mm:ss.SSSZ".
*7: Audit logs for usage condition are output only when operations are performed from the ROR Console.
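An illustrative example of a record in this format is shown below. The values are hypothetical and are based on the deleteUser operation information described above.

2012-04-11 10:15:30.123,tenant_admin001,tenantA,deleteUser,"userid=""user005""",SUCCESS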
16.2.3 Application Process Audit Log
This section explains the application process audit log.
Usage
To obtain the process instance audit log for the application process, execute the get process instance audit information command.
Installation_folder\SWRBAM\bin\swrba_audit
The process instance audit information that can be obtained is the information for process instances that completed after the previous execution of the command.
Privilege Required/Execution Environment
[Windows Manager]
- 152 -
- Administrator privileges are required. If the operating system is Windows Server 2008, execute as the administrator.
- This command can be executed on the admin server.
[Linux Manager]
- System administrator (superuser) privileges are required.
- This command can be executed on the admin server.
Output files
The following table shows the file name, file size, and generations of the audit log:

Log name: swrba_audit.log
Description: Audit logs are output.
File size: 10 MB
Number of generations: 10 (*)
* Note:
- The trigger for the generation switchover is when the file exceeds 10MB.
If the file size does not reach 10 MB, the log continues to be output to the same file even if the swrba_audit command is executed
multiple times.
- Once 10 generations (100 MB) is exceeded, the oldest file (swrba_audit9.log) is deleted.
Destination
The table below shows the log output destination.

[Windows Manager]
Output folder: Installation_folder\SWRBAM\var\audit
Output destination files: swrba_audit.log.[n] (n is the generation)

[Linux Manager]
Output folder: /opt/FJSVswrbam/var/audit
Output destination files: swrba_audit.log.[n] (n is the generation)
Output format
The audit log is a CSV format file. The following items are output in the following order.
One item of process instance information is displayed as one record.
"Process instance start time","Process instance starter","Process instance name","Process instance
state","Process instance end time","Activity name","Task execution date/time","Person
responsible","Status","Task process",...,"Result of application"
Item: Process instance start time
Description: Time when the application was executed

Item: Process instance starter
Description: User ID of applicant

Item: Process instance name
Description: Process instance name (L-PlatformSubscription_xxx, L-PlatformChange_xxx, L-PlatformUnsubscription_xxx)

Item: Process instance state
Description: State of process instance (closed: Closed)

Item: Process instance end time
Description: Time that the process instance ended (yyyy-mm-dd hh:mm:ss.sss)

Item: Activity name
Description: Activity name (Application, Approve, Assess, Pending state, Cancel)

Item: Task execution date/time
Description: The date/time the task was executed

Item: Person responsible
Description: User ID of the user that executed the task

Item: Status
Description: Shows the state of the task (COMPLETED: Completed)

Item: Task process
Description: Button name executed by the activity:
- Application: Apply
- Approve: 0:Approve or 1:Reject
- Assess: 0:Accept or 1:Dismiss
- Pending state: 0:Cancel
- Cancel: 0:Cancel

Item: Result of application
Description: Result of application:
- Accepted: When [Accept] is executed for an assessment task
- Approved: When [Approve] is executed for an approval task (when ApproverOnly)
- Rejected: When [Reject] is executed for an approval task
- Dismissed: When [Dismiss] is executed for an assessment task
- Canceled: When [Cancel] is executed for a cancellation task
- Canceled: When [Cancel] is executed for a pending task
Output example
The example below shows the output of the audit information for the process instance with the name "L-Platform usage application_100".
"2012-04-11 17:03:53.580","tenant_user001","L-PlatformSubscription_100","closed","2012-04-11
17:04:25.471","Application","2012-04-11
17:03:56.111","tenant_user001","COMPLETED","Apply","Approve","2012-04-11
17:04:15.908","tenant_admin001","COMPLETED","0:Approve","Assess","2012-04-11
17:04:21.346","infra_admin001","COMPLETED","0:Accept","Accepted"
Point
Using an operating system feature such as Task Scheduler (Windows Manager) or cron (Linux Manager), configure the settings so that
the get process instance audit information command is executed at fixed intervals so that an audit log of process instances can be obtained
between command executions.
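For example, on a Windows manager the command can be registered with the Task Scheduler roughly as follows. This is an illustrative sketch only; the task name, schedule, and start time are arbitrary, and Installation_folder must be replaced with the actual installation folder.

>schtasks /create /tn swrba_audit_daily /tr "Installation_folder\SWRBAM\bin\swrba_audit" /sc daily /st 03:00 <RETURN>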
16.3 Operation Logs (Activity)
This section explains operation logs.
Operation logs are used as maintenance information when investigating and dealing with errors.
16.3.1 Operation Logs for Accounting
The following table shows the files where operation logs used for maintenance information accounting are output:
Copy the files described below to collect maintenance information.
- 154 -
[Windows Manager]
Type: Operation log
Destination of files output: Installation_folder\RCXCTMG\Charging\log\accounting_calc_mail.log (*1)
Description: The results of sending the usage charge file are output. (*2)

*1: This log file is 5 MB in size and holds 5 generations.
*2: Refer to "Chapter 9 Messages Starting with ctact" in the "Messages" for information on the execution result that is output.

[Linux Manager]
Type: Operation log
Destination of files output: /var/opt/FJSVctchg/log/accounting_calc_mail.log (*1)
Description: The results of sending the usage charge file are output. (*2)

*1: This log file is 5 MB in size and holds 5 generations.
*2: Refer to "Chapter 9 Messages Starting with ctact" in the "Messages" for information on the execution result that is output.
16.4 Investigation Logs
This section explains investigation logs.
Investigation logs are used as maintenance information when investigating and dealing with errors.
16.4.1 Investigation Logs on Admin Servers
The investigation logs on Admin Servers are classified into the following types:
Log name: vsys_trace_log
Description: Trace logs for the management of L-Platform templates and L-Platforms are output to this log.
File size: 10 MB
Number of generations: 10 generations (*)

Log name: vsys_batch_log
Description: Trace logs for the batch processing section of the management of L-Platform templates and L-Platforms are output to this log.
File size: 10 MB
Number of generations: 10 generations (*)

Log name: myportal_trace.log
Description: Trace logs for the Manager View are output to this log.
File size: 10 MB
Number of generations: 10 generations (*)

Log name: cfmg_api_log
Description: Logs for the CFMG APIs are output to this log.
File size: 10 MB
Number of generations: 10 generations (*)

Log name: Event log
Description: Information such as errors that occur while the Manager View is being used is output to this log. Refer to "Chapter 20 Messages Starting with VSYS" in the "Messages" for information on event logs.
File size: -
Number of generations: -

* Note: If this number is exceeded, old generations will be deleted, starting with the oldest generation.
Output Destination
The log output destinations are shown below:

[Windows Manager]
Output folder: Installation_folder\RCXCFMG\logs
Output destination file: vsys_trace_log, vsys_batch_log, cfmg_api_log

Output folder: Installation_folder\RCXCTMG\MyPortal\log
Output destination file: myportal_trace.log

[Linux Manager]
Output destination directory: /var/opt/FJSVcfmg/logs
Output destination file: vsys_trace_log, vsys_batch_log, cfmg_api_log

Output destination directory: /var/opt/FJSVctmyp/log
Output destination file: myportal_trace.log
Output format

<date/time> <log level> <message ID> <message text>

Item: date/time
Description: yyyy-mm-dd hh:mm:ss,sss

Item: log level
Description: One of the following:
- info: Information level message
- warn: Warning level message
- error: Error level message
- fatal: Fatal level message

Item: message ID (*)
Description: Prefix and message number:
- The prefix for myportal_trace.log is "MGRV".
- The prefix for cfmg_api_log is "PAPI".
- The prefix for all other logs is "VSYS".

Item: message text (*)
Description: Message content

* Note: Refer to the "Messages" for information on the message ID and message text.
Procedure for changing the investigation output destination
Use the following procedure to change the output destination for investigation logs.
1. Rewrite the settings files corresponding to each log.
The following table shows the settings file and the location to change:
[Windows Manager]

Log name: vsys_trace_log
Settings file: Installation_folder\RCXCFMG\config\vsys_log4j.xml
Location to change (one location): <param name="File" value="C:\ProgramData\Fujitsu\SystemwalkerCF-MG\logs\vsys_trace_log" />

Log name: vsys_batch_log
Settings file: Installation_folder\RCXCFMG\config\batch_log4j.xml
Location to change (one location): <param name="File" value="C:\ProgramData\Fujitsu\SystemwalkerCF-MG\logs\vsys_batch_log" />

Log name: myportal_trace.log
Settings file: Installation_folder\RCXCTMG\MyPortal\config\managerview_log4j.xml
Location to change (one location): <param name="file" value="C:/Fujitsu/ROR/RCXCTMG/MyPortal/log/myportal_trace.log" />

Log name: cfmg_api_log
Settings file: Installation_folder\RCXCFMG\config\api_log4j.xml
Location to change (one location): <param name="file" value="C:\ProgramData\Fujitsu\SystemwalkerCF-MG\logs\cfmg_api_log" />

[Linux Manager]

Log name: vsys_trace_log
Settings file: /etc/opt/FJSVcfmg/config/vsys_log4j.xml
Location to change (one location): <param name="File" value="/var/opt/FJSVcfmg/logs/vsys_trace_log" />

Log name: vsys_batch_log
Settings file: /etc/opt/FJSVcfmg/config/batch_log4j.xml
Location to change (one location): <param name="File" value="/var/opt/FJSVcfmg/logs/vsys_batch_log" />

Log name: myportal_trace.log
Settings file: /etc/opt/FJSVctmyp/config/managerview_log4j.xml
Location to change (one location): <param name="file" value="/var/opt/FJSVctmyp/log/myportal_trace.log" />

Log name: cfmg_api_log
Settings file: /etc/opt/FJSVcfmg/config/api_log4j.xml
Location to change (one location): <param name="File" value="/var/opt/FJSVcfmg/logs/cfmg_api_log" />
2. Restart the manager.
- 157 -
Part 5 High Availability and Disaster
Recovery
Chapter 17 High Availability of Managed Resources...................................................................................159
Chapter 18 Disaster Recovery.....................................................................................................................173
- 158 -
Chapter 17 High Availability of Managed Resources
This chapter explains failover.
17.1 High Availability of Managed Resources
This section explains how to realize high availability of managed resources.
The methods of environment creation and operation for enabling higher availability of managed resources vary depending on resources
involved.
- Servers
- Blade Chassis
- Storage Chassis
17.1.1 High Availability of L-Servers
This section explains high availability for L-Servers.
Regardless of the server type, select the [HA] checkbox on the [Server] tab to perform redundancy settings when creating an L-Server.
- For physical L-Servers
Select the [HA] checkbox to enable the selection of the pool of a spare server.
The server pool in which the spare server to be used by the physical server for automatic switchover is registered can be specified.
For details on the [Server] tab, refer to "16.2.2 [Server] Tab" in the "User's Guide for Infrastructure Administrators (Resource
Management) CE".
For the details on conditions for switchover to spare servers, refer to "9.3 Server Switchover Conditions" in the "Setup Guide VE".
- For virtual L-Servers
Settings differ according to the server virtualization software being used.
Refer to "16.3.2 [Server] Tab" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".
Checks of Spare Server Models and Configurations
The L-Server definition and spare server configuration are checked when configuring server redundancy during creation or modification
of a physical L-Server.
Confirmation is performed on the following items of an L-Server definition and spare server configuration:
- Server Model
Check if there is a server compatible with the L-Server definition in the specified server pool.
The compatibility of server models is evaluated by the definition file for checks on spare server models and configurations.
The definition file is created when installing managers.
When installing a new compatible server model, update the definition file by adding the server model.
In the following cases, the same server model of physical server as the server specified in the L-Server definition is selected.
- When there is no definition file
- When there is an error in definitions
- 159 -
- Number of NICs
Check the number of NICs mounted on a compatible server and L-Server.
A server is considered as a server satisfying the conditions, in the following cases:
- The number of NICs mounted on the compatible server is the same as the number in the L-Server definition
- The number is more than the number in the L-Server definition
Selection of Suitable Spare Servers
When the [Use a low spec server] checkbox is not selected, a spare server is automatically set.
Based on the following conditions, an L-Server definition is compared to physical L-Servers existing in the specified resource pool as a
spare server:
- When specifying the model name on creation of a physical L-Server
- If the "model name" is the same or compatible
- Number of NICs
- If CPUs and memories are specified when specifying a physical L-Server
- CPU core count
- CPU clock speed
- Memory capacity
- Number of NICs
Configurations are not checked.
A server that satisfies or exceeds all conditions and has the nearest specifications to the L-Server definition is selected.
When the [Use a low spec server] checkbox is selected, physical servers with models matching that in the L-Server definition are selected
as spare servers.
For details, refer to "B.9 Methods for Selecting Physical Servers Automatically" in the "Setup Guide CE".
Modification of Checks on Spare Server Models and Configurations
The switchover policy and the definition of server model compatibility can be modified.
Although the definition file is deployed when installing managers, a server model with a matching L-Server definition and physical server
model is selected as a spare server in the following cases:
- When there is no definition file
- When there is an error in definitions
When the [Use a low spec server] checkbox is selected, physical servers with models matching that in the L-Server definition are selected
as spare servers.
The definition file for checks on spare server models and configurations is stored in the following location.
Location of the Definition File
[Windows Manager]
Installation_folder\SVROR\Manager\etc\customize_data
[Linux Manager]
/etc/opt/FJSVrcvmr/customize_data
Definition File Name
spare_server_config.rcxprop
- 160 -
Definition File Format
For the definition file, write each line in the following format:
Key = Value
When adding comments, start with a number sign ("#").
Definition File Items
Specify the following items in a definition file.
Table 17.1 List of Items Specified in Definition Files for Checking Spare Server Models and Configurations

Item: Policy of Switchover
Key: OVERALL_POLICY
Value: skip | warning | error
Remarks: Configure the check policy of the L-Server definition for the whole system and the spare server configuration.
- skip
  Omits the check of whether or not there is a server that satisfies the spare server conditions.
  Even if a server pool that does not satisfy the conditions is selected, creation and modification of an L-Server are performed without checking.
- warning
  Server pools without servers that satisfy the spare server conditions are also displayed.
  When a server pool that does not satisfy the conditions is selected, a warning message is displayed during creation and modification of an L-Server, checking whether to continue the operation.
- error
  Server pools without servers that satisfy the spare server conditions are not displayed.
  Therefore, a server pool that does not satisfy the spare server conditions cannot be selected.
  Server switchover is not possible when the destination cannot be found while performing the switchover, even if the check has been made in advance.

Item: Compatibility of Server Model (*)
Key: SPARE_SVR_COMPATX (X is a number from 0 - 255)
Value: ["modelA","modelB"]
Remarks: Define compatible server models as a group.
When checking the L-Server definition and the spare server configuration, the server models in the same group are the target of switchover.
Examples of server models are as below.
- BX920 S1
- BX920 S2

* Note: The value of SPARE_SVR_COMPAT is changeable. Server switchover may fail depending on the combination of changed values, such as when a server with no compatibility is set as a spare server.
Example Definition File
# Spare server check configuration
# Spare server check logic's policy: skip | warning | error
OVERALL_POLICY=skip

# Server model compatibility list
SPARE_SVR_COMPAT0=["BX920 S1", "BX920 S2"]
17.1.2 Blade Chassis High Availability
This section explains high availability for when operating L-Servers on blade servers.
When the physical L-Server is a blade server, this function enables restarting of L-Servers by manually switching over servers to spare servers, when blade servers on other chassis are specified as the spare servers.
Prerequisites
For details on prerequisites for high availability of blade chassis, refer to "7.1 Blade Chassis High Availability Design" in the "Design
Guide CE".
Installation
This section explains the preparations for configuration of server switchover.
Use the following procedure for configuration:
1. Register a server in a spare blade chassis in the server pool.
Perform one of the following steps:
- Register a server in a spare blade chassis in the server pool
- Register a server in a spare chassis in the server pool for a spare server
2. Create an L-Server.
For details, refer to "16.1 Creation Using an L-Server Template" or "16.2 Creation of Physical L-Servers Using Parameters" in the
"User's Guide for Infrastructure Administrators (Resource Management) CE".
In this case, perform redundancy for the L-Server.
On the [Server] tab of the [Create an L-Server] dialog, set the following items.
- Check the [HA] checkbox.
- When the server in the chassis was registered in the spare server pool in step 1., specify the spare server pool.
Operations
This section explains how to perform switchover of L-Servers.
Use the following procedure to perform switchover of L-Servers:
1. A chassis error is detected.
If the following events occur, determine whether to perform chassis switchover regarding the event as the chassis failure.
- When both management blades cannot be accessed because of a management blade failure
- Communication with the admin LAN is not possible
- When communication is not possible because there is trouble with the LAN switches that connect with the admin LAN
2. Check the chassis on which the L-Server for which an error was detected is operating.
Execute the rcxadm chassis show command to check.
For details on the rcxadm chassis show command, refer to "3.2 rcxadm chassis" in the "Reference Guide (Command/XML) CE".
- 162 -
3. Check if the status of the L-Server that is the source for switchover is stop.
If the status of the L-Server is not stop, stop the L-Server.
If the L-Server cannot be stopped because the management blade cannot be accessed, stop the L-Server on the managed server using the console of the managed server.
Note
If switchover is performed while an L-Server in a chassis that has trouble is still operating, there is a possibility that another instance
of the L-Server will be started and its disks damaged. Ensure the L-Server is stopped.
4. Place the server into maintenance mode.
For details on how to place servers into maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for
Infrastructure Administrators (Resource Management) CE".
Execute the rcxadm server command from the command line.
For details on the rcxadm server command, refer to "3.11 rcxadm server" in the "Reference Guide (Command/XML) CE".
5. Start the L-Server.
Start the L-Server on the switchover destination server.
For details on how to start an L-Server, refer to "17.1.1 Starting an L-Server" in the "User's Guide for Infrastructure Administrators
(Resource Management) CE".
6. Release the server that has been switched to from maintenance mode.
For details on how to release servers from maintenance mode, refer to "Appendix C Maintenance Mode" in the "User's Guide for
Infrastructure Administrators (Resource Management) CE".
Execute the rcxadm server command from the command line.
For details on the rcxadm server command, refer to "3.11 rcxadm server" in the "Reference Guide (Command/XML) CE".
Restoration
This section explains restoration.
Use the following procedure for restoration:
1. If a management blade has failed, replace it with new one. If an admin LAN or switches fail, replace them. After that, initialize the
management blade. To initialize the management blade, select "controlled by VIOM" in the management window of the management
blade. After performing the configuration to forcibly reset the local VIOM settings on the displayed screen, restart the management
board. In either case, if any concerns arise, contact Fujitsu technical staff.
2. Mount a blade on the chassis.
3. After unregistering the restored chassis using the ROR console, register the chassis again.
4. After powering on the server in the restored chassis from the server resource tree, forcibly power off the server.
After restoring the blade chassis, use the following procedure to relocate the L-Server.
1. Stop an L-Server.
For details on how to stop an L-Server, refer to "17.1.2 Stopping an L-Server" in the "User's Guide for Infrastructure Administrators
(Resource Management) CE".
2. Perform modification of the physical server to use.
In the [Modify an L-Server] dialog, specify the restored server for [Physical server].
For details on how to modify the physical server to use, refer to "17.2.1 Modifying Specifications" in the "User's Guide for
Infrastructure Administrators (Resource Management) CE".
- 163 -
3. Start the L-Server.
For details on how to start an L-Server, refer to "17.1.1 Starting an L-Server" in the "User's Guide for Infrastructure Administrators
(Resource Management) CE".
17.1.3 High Availability for Storage Chassis
This section explains high availability of storage chassis connected to physical L-Servers.
If performing replication between two storage units of LUNs used by a physical L-Server, quick restoration of physical L-Servers is
possible, even when storage units have failed.
This section explains the switchover of disks used by physical L-Servers, between two storage units on which replication of LUNs is
managed by a single manager.
- When performing switchover of physical L-Servers and disks used by physical L-Servers in a Disaster Recovery environment, refer to "Chapter 18 Disaster Recovery".
- When the disk resource is created using dynamic LUN mirroring, refer to "When Using Dynamic LUN Mirroring".

Figure 17.1 Switchover of Operating or Standby Status of Storage
Prerequisites
For details on prerequisites for high availability of storage chassis, refer to "7.2 Storage Chassis High Availability Design" in the "Design
Guide CE".
Replication of Storage Unit LUNs
- For ETERNUS
Define the LUN replication using ETERNUS SF AdvancedCopy Manager.
- For EMC CLARiiON
Define the LUN replication using the MirrorView function.
- For EMC Symmetrix DMX storage and EMC Symmetrix VMAX storage
Define the device replication using the SRDF function.
- 164 -
Replication Definition Files
The replication definition file must be created in advance. In the replication definition file, describe the relationship between the operating
storage and the standby storage for replication.
The format of the replication definition file is as follows:
IP_address_of_operating_storage_unit,Operating_volume_identifier,IP_address_of_standby_storage_unit,Standby_volume_identifier
Configure the definition file using a unique combination of an IP address for an operating or standby storage unit, and an identifier for an
operating or standby volume. When the information overlaps in the replication definition file, an error will occur when creating a failover
or a failback script.
- For ETERNUS
The relationship for replication can be checked using ETERNUS SF AdvancedCopy Manager.
Specify the IP address of a storage for the storage identifier. Check the IP address of a storage using the rcxadm storage list command.
For details on volume identifiers, check them from ETERNUS SF AdvancedCopy Manager. Volume identifiers are written in
hexadecimal format without zero suppression.
Example
192.168.1.24,0x0001,192.168.2.25,0x0005
192.168.1.24,0x0002,192.168.2.25,0x0006
192.168.3.25,0x0001,192.168.4.26,0x0005
Information
When replicating using the Copy Control Module of ETERNUS SF AdvancedCopy Manager, a replication definition file can be created by executing the rcxrepdef command.
For details on the rcxrepdef command, refer to "5.22 rcxrepdef" in the "Reference Guide (Command/XML) CE".
- For EMC CLARiiON
The relationship for replication can be checked using the MirrorView function.
Specify an IP address for the storage identifier. Check the IP address, using the rcxadm storage list command.
For details on volume identifiers, check them using the MirrorView function. Volume identifiers are written in hexadecimal format
without zero suppression.
Example
192.168.1.24,0x0001,192.168.2.25,0x0005
192.168.1.24,0x0002,192.168.2.25,0x0006
192.168.3.25,0x0001,192.168.4.26,0x0005
- 165 -
- For EMC Symmetrix DMX storage and EMC Symmetrix VMAX storage
The relationship for replication can be checked using the SRDF function.
Specify SymmID for the storage identifier. Check SymmID, using the rcxadm storage list command.
Specify a device for the volume identifier. Check the device, using the SRDF function. Volume identifiers are written in hexadecimal
format without zero suppression.
Example
000192601264,0001,000192601265,0005
000192601264,0002,000192601265,0006
000192601274,0001,000192601275,0005
When Performing Switchover from Operating to Standby (Failover)
This section explains the procedure to perform switchover from operating storage units to standby storage units.
1. Create the replication definition file.
2. Create the following scripts by executing the rcxstorage -failover command.
- failover script
- Physical L-Server stopping script
- Physical L-Server startup script
For details on the rcxstorage command, refer to "5.23 rcxstorage" in the "Reference Guide (Command/XML) CE".
- Create these scripts in units of operating storage.
- These scripts are created based on the configuration information at the time the command is executed. If the configuration is changed afterwards, create these scripts again.
- Execute the rcxstorage command with the -failover option, when an operating and a standby storage unit are displayed in the
storage tree. The script can be created as long as the operating storage unit is displayed in the storage tree, even if it has failed.
Note
These scripts cannot be created when the operation target storage unit is not displayed in the storage tree.
3. Execute the physical L-Server stopping script on the server where the manager is being operated. This operation stops the physical
L-Server targeted by the failover script. To forcibly stop the server without shutting down the operating system running on the L-
Server, specify the -force option when executing the physical L-Server stopping script.
If an error occurs during execution of the script, contact Fujitsu technical staff.
Note
Physical L-Servers are stopped in the order of the entries in the physical L-Server stopping script. When specification of the order
of stopping physical L-Servers is necessary, edit the physical L-Server stopping script.
4. Delete the zoning combining the WWPN of the HBA of the physical L-Server and the WWPN of the port of the operating storage
from the Fibre Channel switch. For ETERNUS storage, this step is not necessary as the zoning for Fibre Channel switch will be
deleted by Resource Orchestrator when the failover script is executed.
5. If the replication function for storage is in operation, stop it.
6. Execute the failover script on the server where the manager is being operated.
- If error message number 62513 occurs during script execution
The Thin Provisioning/Thick Provisioning attributes of the operating disk resource and the standby disk resource may not be the same. Check the replication definition file, and define the disk resources so that their Thin Provisioning/Thick Provisioning attributes match.
- When an error other than the above has occurred
Contact Fujitsu technical staff.
7. To access the standby storage, add the zoning combining the WWPN of the HBA of the physical L-Server and the WWPN of the
port of the standby storage to the Fibre Channel switch. For ETERNUS storage, this step is not necessary as the zoning for Fibre
Channel switch will be added by Resource Orchestrator when the failover script is executed.
8. When performing reading or writing of the LUNs of the standby storage, modify the replication settings if necessary.
9. Execute the physical L-Server startup script on the server where the manager is being operated. This operation starts the physical
L-Server.
If an error occurs during execution of the script, contact Fujitsu technical staff.
Note
Physical L-Servers are started in the order of the entries in the physical L-Server startup script. When the specification of the order
of starting physical L-Servers is necessary, edit the physical L-Server startup script.
10. When operating an L-Platform, use the cfmg_syncdiskinfo command to reflect the information for switched disks on the L-Platform
configuration information.
[Windows Manager]
>Installation_folder\RCXCFMG\bin\cfmg_syncdiskinfo <RETURN>
[Linux Manager]
# /opt/FJSVcfmg/bin/cfmg_syncdiskinfo <RETURN>
For details on the cfmg_syncdiskinfo command, refer to "12.7 cfmg_syncdiskinfo (disk information synchronization)" in the
"Reference Guide (Command/XML) CE".
When Performing Switchover from Standby to Operating (Failback)
This section explains the procedure for performing switchover from standby storage units to operating storage units.
1. Request repair of the operating storage units.
2. Using storage management software, restore the logical configuration (RAID, LUN) of an operating storage unit.
3. Using storage management software, check the LUN masking definition and LUN mapping definition of the operating storage unit.
If the definitions relating to the WWPN of the HBA of the physical L-Server remain, delete them.
Information
When the storage is ETERNUS, delete the affinity group for the LUN of the operating storage unit
4. By modifying the settings of the replication function, perform replication of the storage unit from the operating to the standby, then
wait until the status of the LUNs of the operating and standby storage become equivalent.
5. Prepare the replication definition file.
Use the same replication definition file as that for failover. When changing the configuration after executing the failover script,
correct the replication definition file.
6. Create the following scripts by executing the rcxstorage -failback command.
- failback script
- Physical L-Server stopping script
- Physical L-Server startup script
For details on the rcxstorage command, refer to "5.23 rcxstorage" in the "Reference Guide (Command/XML) CE".
- Create these scripts in units of operating storage.
- These scripts can be created after executing the failover script and performing switchover to the standby storage unit.
- These scripts are created based on the configuration information at the time the command is executed. If the configuration is changed afterwards, create these scripts again.
- Execute the rcxstorage command with the -failback option, when an operating and a standby storage unit are displayed in the
storage tree. These scripts cannot be created when the operation target storage unit is not displayed in the storage tree.
7. Execute the physical L-Server stopping script on the server where the manager is being operated. This operation stops the physical
L-Server targeted by the failback script. To forcibly stop the server without shutting down the operating system running on the L-
Server, specify the -force option when executing the physical L-Server stopping script.
If an error occurs during execution of the script, contact Fujitsu technical staff.
Note
Physical L-Servers are stopped in the order of the entries in the physical L-Server stopping script. When specification of the order
of stopping physical L-Servers is necessary, edit the physical L-Server stopping script.
8. Delete the zoning combining the WWPN of the HBA of the physical L-Server and the WWPN of the port of the standby storage
from the Fibre Channel switch. For ETERNUS storage, this step is not necessary as the zoning for Fibre Channel switch will be
deleted by Resource Orchestrator when the failback script is executed.
9. Stop the storage replication function.
10. Execute the failback script on the server where the manager is being operated.
- If error message number 62513 occurs during script execution
The Thin Provisioning/Thick Provisioning attributes of the operating disk resource and the standby disk resource may not be the same. Check the replication definition file, and define the disk resources so that their Thin Provisioning/Thick Provisioning attributes match.
- When an error other than the above has occurred
Contact Fujitsu technical staff.
11. To access the operating storage, add the zoning combining the WWPN of the HBA of the physical L-Server and the WWPN of the
port of the operating storage to the Fibre Channel switch. For ETERNUS storage, this step is not necessary as the zoning for Fibre
Channel switch will be added by Resource Orchestrator when the failback script is executed.
12. By modifying the settings of the replication function, perform replication of the storage unit from the standby to the operating, then
wait until the status of the LUNs of the standby and operating storage become equivalent.
13. Execute the physical L-Server startup script on the server where the manager is being operated. This operation starts the physical
L-Server.
If an error occurs during execution of the script, contact Fujitsu technical staff.
Note
Physical L-Servers are started in the order of the entries in the physical L-Server startup script. When the specification of the order
of starting physical L-Servers is necessary, edit the physical L-Server startup script.
14. When operating an L-Platform, use the cfmg_syncdiskinfo command to reflect the information for switched disks on the L-Platform
configuration information.
[Windows Manager]
>Installation_folder\RCXCFMG\bin\cfmg_syncdiskinfo <RETURN>
[Linux Manager]
# /opt/FJSVcfmg/bin/cfmg_syncdiskinfo <RETURN>
For details on the cfmg_syncdiskinfo command, refer to "12.7 cfmg_syncdiskinfo (disk information synchronization)" in the
"Reference Guide (Command/XML) CE".
When Using Dynamic LUN Mirroring
This section explains changes and points to note when using dynamic LUN mirroring.
Prerequisites
Refer to "7.2 Storage Chassis High Availability Design" in the "Design Guide CE".
Replication of Storage Unit LUNs
If disk resources are automatically generated from virtual storage, replication may be automatically set depending on the settings in
the mirroring definition file. Replication settings use REC for CCM (machine-to-machine copying, transfer mode: Stack mode), so
copy groups or pairs are automatically generated. The name of the generated copy group will be the value entered in the mirroring
definition file.
Replication Definition Files
When replication using CCM is set, a replication definition file can be generated by executing the rcxrepdef command.
For details on the rcxrepdef command, refer to "5.22 rcxrepdef" in the "Reference Guide (Command/XML) CE".
When Performing Switchover from Operating to Standby (Failover)
The procedure for switchover is the same as that for when a LUN is created in the storage unit beforehand.
Note
The disk resources created using dynamic LUN mirroring are not deleted when failover to the standby node occurs.
Information
For dynamic LUN mirroring, copying is started using REC (transfer mode: Stack mode) of CCM. When stopping the replication function in step 5., do so as follows:
1. Use the acec suspend command by specifying the -force option to forcibly suspend the REC session.
2. Use the acec change command to change the REC transfer mode to Through mode.
3. Suspend the REC session using the acec cancel command. In this case, do not specify the -force option.
When Performing Switchover from Standby to Operating (Failback)
The procedure for switchover is the same as that for when a LUN is created in the storage unit beforehand.
17.2 High Availability for Admin Servers
This section explains high availability of managers.
Prerequisites for Manager Cluster Operation
For details on prerequisites for operating managers in cluster environments, refer to "7.3 Admin Server High Availability Design" in the
"Design Guide CE".
Manager Cluster Operation in Windows Guest Environments on Hyper-V
- Install an operating system and configure a domain controller on the domain controller server.
- Perform installation and configuration of the admin server.
The following operations are necessary:
- Primary Node
- Connection with shared storage
- Configure BIOS
- Install Hyper-V roles
- Install and configure EMC Solutions Enabler (when used)
- Add a failover clustering function
- Create a Hyper-V virtual network
- Create clusters
- Prepare virtual machines
- Register virtual machines in clusters
- Install and configure storage management software
- Install and configure VM management software
- Install and configure ServerView Operations Manager and ServerView Virtual-IO Manager
- Install the Resource Orchestrator manager
- Set up Resource Orchestrator
- Secondary Node
- Connection with shared storage
- Configure BIOS
- Install Hyper-V roles
- Install and configure EMC Solutions Enabler (when used)
- Add a failover clustering function
- Create a Hyper-V virtual network
For details on the following items, refer to the Hyper-V manual.
- Install Hyper-V Roles
- Add a Failover Clustering Function
- Create a Hyper-V Virtual Network
- Create Clusters
- Prepare Virtual Machines
- Register Virtual Machines in Clusters
- Operation
When an error occurs on a VM guest, the operation will continue if the VM guest is switched over.
Note
- When performing configuration, modification, or deletion of managed server environments, such as L-Server creation, if an error occurs on a VM guest, the operation may fail.
In this case, part of the managed server environment may have been created. Delete the created environment and then perform the operation again.
- When performing L-Server creation or ETERNUS configuration information modification using ETERNUS, if an error occurs
on a VM guest, ETERNUS may not be able to return from the status of processing to normal status. In this case, to restore, forcibly
log on from ETERNUSmgr, then log off. In the case of ETERNUS DX60/DX80/DX90, contact Fujitsu technical staff. For details
on how to restore ETERNUS, refer to the ETERNUS manual.
Manager Cluster Operation in Windows and Linux Environments
The settings and deletion operations described below are required for cluster operation.
For details on the settings for cluster operation and the procedure for deletion, refer to "Appendix D Manager Cluster Operation Settings
and Deletion" in the "Setup Guide VE".
- Settings
- Primary Node
- Create cluster resources
- Copy dynamic disk files
- Perform link settings for folders on the shared disk
- Set folder and file access rights
- Set access rights for the Resource Orchestrator database
- Change the IP address set for the manager's admin LAN
- Register service resources
- Start the cluster service
- Secondary Node
- Perform link settings for folders on the shared disk
- Set access rights for the Resource Orchestrator database
- Change the IP address set for the manager's admin LAN
- Start the cluster service
- Deletion
- Primary Node
- Stop the cluster service
- Delete service resources
- Uninstall the manager
- Secondary Node
- Uninstall the manager
- Delete shared disk files
- Delete cluster resources
Note
- If switchover of an admin server occurs while L-Servers are operating, the operation being performed may fail.
If you were creating or registering resources, delete unnecessary resources and then perform the operation again.
- When performing L-Server creation or ETERNUS configuration information modification using ETERNUS, if an error occurs on an
admin server, ETERNUS may not be able to return from the status of processing to normal status. In this case, to restore, forcibly log
on from ETERNUSmgr, then log off. In the case of ETERNUS DX60/DX80/DX90, contact Fujitsu technical staff. For details on how
to restore ETERNUS, refer to the ETERNUS manual.
Chapter 18 Disaster Recovery
This chapter provides information on Disaster Recovery for the admin server where Resource Orchestrator manager operates.
Resource Orchestrator provides simple and highly reliable Disaster Recovery, through exporting and importing the following information
that Resource Orchestrator manager handles:
- L-Platform Templates
- L-Platform Configuration Information
- Resource Information
- Accounting Information
- Metering Logs
For details, refer to "DR Option Instruction".
Appendix A Notes on Operating ServerView Resource
Orchestrator
This appendix provides important reminders for the operation of Resource Orchestrator.
Redundancy Configurations for the Admin LAN
If communication issues occur on the admin LAN, or one of the network interfaces used by a managed server on the admin LAN fails,
the following operations may result in errors. In such cases, restore the admin LAN network as quickly as possible.
- Backup and restore operations
- Collection and deployment of cloning images
- Server switchover and failback
HBA address rename
- With Resource Orchestrator, the factory-set WWN of a managed server's HBA is overridden when the HBA address rename function
is used. The WWN is reset to its factory-set value when the server is deleted from Resource Orchestrator.
Before using HBAs in an environment that is not managed by Resource Orchestrator, first delete the server in which it is mounted
using the ROR console.
For details on how to delete a server, refer to "9.2 Deleting Managed Servers" in the "User's Guide for Infrastructure Administrators
(Resource Management) CE".
- The WWN of a managed server is set during startup, using a network boot session to connect to the admin server. Once set up with
a proper WWN, the managed server reboots into its own Operating System.
Therefore, a managed server may reboot during its startup.
- Do not move HBAs whose HBA address rename settings have been set up to different managed servers. If HBAs are used without resetting their WWNs, the same WWN may be configured on multiple servers, and data may be damaged when those servers access the same volume.
Changing the Manager's System Time
When the admin server's system time is reset to a time in the past, resource monitoring by the manager stops for that period. If the system time must be set back by more than just a few minutes, change the time and then restart the manager.
Restarting Managers
By default, the manager services restart at 3:45 am every day for stable system operation.
The settings for restarting can be changed depending on the authority level. To change the configuration, perform the following:
- Configuration File
[Windows Manager]
Installation_folder\SVROR\Manager\rails\config\rcx\rcx_manager_params.rb
[Linux Manager]
/opt/FJSVrcvmr/rails/config/rcx/rcx_manager_params.rb
- Configuration Parameters
Table A.1 Configuration Parameters

- RESTART_ENABLE
  Meaning: Select the restart operation status. When restarting, specify "true". When not restarting, specify "false".
  Initial value: true
- RESTART_HOUR
  Meaning: Specify the restart time (hour) from 0 to 23.
  Initial value: 3
- RESTART_MIN
  Meaning: Specify the restart time (minutes) from 0 to 59.
  Initial value: 45
- RESTART_CYCLE
  Meaning: Specify the restart interval (days) from 1 to 5.
  Initial value: 1
- Parameter Change Procedure
1. Stop the manager.
2. Use an editor and change the parameters of the rcx_manager_params.rb file.
3. Start the manager.
Note
The conditions for restarting are that more than RESTART_CYCLE * 24 hours have passed since the manager was started, and that it is the time specified by RESTART_HOUR and RESTART_MIN.
For the stable operation of systems, configure the restarting of managers to occur on a daily basis.
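
As an illustration of the restart condition described in the Note above, the following sketch (written in Python; not the manager's actual implementation) evaluates whether a restart would be due for given RESTART_CYCLE, RESTART_HOUR, and RESTART_MIN values.

from datetime import datetime, timedelta

def restart_due(manager_started_at, now, restart_cycle=1, restart_hour=3, restart_min=45):
    """Sketch of the documented restart condition: more than RESTART_CYCLE * 24
    hours have passed since the manager was started, and the current time
    matches RESTART_HOUR:RESTART_MIN. This mirrors the Note above; it is not
    the implementation used by the manager itself."""
    uptime_ok = now - manager_started_at > timedelta(hours=restart_cycle * 24)
    time_ok = (now.hour, now.minute) == (restart_hour, restart_min)
    return uptime_ok and time_ok

# Example: a manager started two days earlier is restarted at 03:45.
print(restart_due(datetime(2012, 10, 1, 9, 0), datetime(2012, 10, 3, 3, 45)))  # True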
Changing Multiple Operations of Managers
When executing multiple operations simultaneously, the upper limit is set for the number of simultaneous operations.
The upper limit of the number of simultaneous operations can be changed depending on the usage environment. To change the
configuration, edit the following definition file.
When there is no definition file, create one.
Placeholder for the Definition File
[Windows Manager]
Installation_folder\SVROR\Manager\etc\customize_data
[Linux Manager]
/etc/opt/FJSVrcvmr/customize_data
Definition File Name
rcx_base.rcxprop
Format of the Definition File
Describe the definition file in individual lines as below:
Key = Value
Items in the Definition File
Specify the following items.
Table A.2 Items in the Definition File

- Item: Multiplicity
  Key: TASK_WORKER_COUNT
  Value: Specify the multiplicity from 5 to 30. The default value is "30". For Basic mode, the default value is "5".
  Remarks: When there is no definition file, the manager operates with the default value.
Example Definition File
An example definition file is indicated below. In this example, the multiplicity is set to "10".
TASK_WORKER_COUNT=10
Procedures for Changing the Definition File
- When the manager is operating in a normal environment
1. Stop the manager.
2. Change TASK_WORKER_COUNT values for rcx_base.rcxprop files.
When there is no rcx_base.rcxprop file, create one.
3. Start the manager.
- When the manager is operating in a cluster environment
[Windows Manager]
1. Stop the manager.
2. Place the shared disk of the manager online. Place other cluster resources offline.
3. Change the TASK_WORKER_COUNT values for rcx_base.rcxprop files on the shared disk. When there is no rcx_base.rcxprop
file, create one.
Drive_name\Fujitsu\ROR\SVROR\customize_data\rcx_base.rcxprop
4. Start the manager.
[Linux Manager]
1. Stop the manager.
2. Mount the shared disk of the admin server on the primary node or the secondary node.
3. Change the TASK_WORKER_COUNT values for rcx_base.rcxprop files in the shared disk.
When there is no rcx_base.rcxprop file, create one.
Destination_to_mount_shared_disk/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/customize_data/rcx_base.rcxprop
4. Unmount the shared disk for the admin server from the node mounted in step 2.
5. Restart the manager.
Note
Advisory Notes for Basic Mode
Memory usage will increase according to the multiplicity.
For details on the increase in memory usage, refer to "Table A.3 Increased Memory Use with Multiple Operations".
Calculate the memory used from a value in the table and the memory size required for the manager operations described in "2.4.2.6 Memory
Size" in the "Design Guide CE", and then add memory if necessary.
Table A.3 Increased Memory Use with Multiple Operations

- Multiplicity 5: no increase
- Multiplicity 6 to 14: increase in memory use (Unit: MB) = 1080 + (Multiplicity * 40)
- Multiplicity 15 to 30: increase in memory use (Unit: MB) = 2104 + (Multiplicity * 40)
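For example, with a multiplicity of 10 the increase in memory use is 1080 + (10 * 40) = 1480 MB, and with a multiplicity of 20 it is 2104 + (20 * 40) = 2904 MB.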
Appendix B Metering Log
This appendix explains metering logs.
Metering logs are saved in the database as information for bill calculation, so that fees can be charged based on the resources that have been used by tenants. The Output metering log command is used to output metering logs to metering log files.
Refer to "10.2 ctchg_getmeterlog (Output Metering Logs)" in the "Reference Guide (Command/XML) CE" for information on the Output
metering log command.
See
Refer to "Chapter 15 Accounting" for information on the overview of the accounting, the accounting information and the calculation of
usage charges.
B.1 Types of Metering Logs
There are two types of metering logs: event logs and periodic logs.
Each of these types of logs is explained below:
Event logs
Event logs are logs that are output when the status of a resource has changed, such as when an L-Platform is created or canceled or when
an L-Server is started or stopped. An event log records resource attribute information such as the date and time at which the event occurred,
event type, information for identifying the resource, and allocation amount.
The following table shows the timing at which event logs will be output:
- Adding (event identifier: ADD)
  Timing of event: When a resource is added and when the resource is actually secured
- Changing (event identifier: CHANGE)
  Timing of event: When a resource is changed
- Deleting (event identifier: DELETE)
  Timing of event: When a resource deletion is requested. If an approval workflow is enabled, when an approval or assessment is performed
- Starting (event identifier: START)
  Timing of event: When starting a resource is complete
- Stopping (event identifier: STOP)
  Timing of event: When stopping a resource is requested

Target resources for ADD, CHANGE, and DELETE: L-Platform, L-Server, Disk, L-Platform template (*1), Software (*2)
Target resources for START and STOP: L-Server
*1: The event log for L-Platform template is output for tenant-specific templates. The event log is not output for global templates. The
CHANGE log will be output if the template name is changed. If only other basic information and configuration information are changed,
the CHANGE log will not be output.
*2: The event log for software will be output based on the software information that has been registered in the image information of
the L-Server. The timing of software addition or deletion events is synchronized with L-Server addition or deletion events, respectively.
Periodic logs
Periodic logs are logs that show the periodic operational status of resources.
The following table shows the timing at which periodic logs will be output:
- Period (event identifier: PERIOD)
  Timing of event: Every day at 0:00 (default) (*1)
  Target resources: L-Platform, L-Server, Disk, L-Platform template (*2), Software
*1: If the timing of output of the periodic log is to be changed, overwrite the metering log operational settings file and then execute
the Change periodic log schedule settings command.
Refer to "8.7.3 Metering Log Settings" for information on the metering log operational settings file and refer to "10.1 ctchg_chgschedule
(Change Periodic Log Schedule Settings)" in the "Reference Guide (Command/XML) CE" for the Change periodic log schedule settings
command.
*2: The periodic log for L-Platform template is output for tenant-specific templates. The periodic log is not output for global templates.
Note
'Create' processing for periodic logs and 'delete' processing for metering logs are implemented periodically.
The processing results of 'create' processing for periodic logs and 'delete' processing for metering logs will be output to the following files
as operation logs:
- 'Create' processing for periodic logs
[Windows Manager]
Installation_folder\RCXCTMG\Charging\log\metering_period.log
[Linux Manager]
/var/opt/FJSVctchg/log/metering_period.log
- 'Delete' processing for metering logs
[Windows Manager]
Installation_folder\RCXCTMG\Charging\log\metering_log_db_delete.log
[Linux Manager]
/var/opt/FJSVctchg/log/metering_log_db_delete.log
Refer to "Chapter 14 Messages Starting with meter" in the "Messages" to check whether processing is operating normally. Check 'create'
processing for periodic logs every day at 0:00 or at least 5 minutes after the changed time. 'Delete' processing for metering logs is
implemented at 0:35 every day, so check at that time or later.
B.2 Output Contents of Metering Logs
The contents that are output to metering logs will vary according to the resource.
The following tables show the log contents that are output in common for all resources as well as the log contents that are output for each
resource:
Common
All of the following items are output for every event (ADD, CHANGE, DELETE, START, STOP, and PERIOD).

- version (Version)
  Fixed at 2.0
- event_time (Time event occurred)
  String, format YYYY-MM-DDThh:mm:ss.SSSZ
- event (Event)
  ADD: Deploy, CHANGE: Change, DELETE: Delete, START: Start, STOP: Stop, PERIOD: Period
- resource_type (Configured resource type)
  vsys: L-Platform, vserver: Virtual L-Server, pserver: Physical L-Server, vdisk: Expansion disk, template: L-Platform template, software: Software
L-Platform
In addition to the common items output, the following items will be output:
- org_id (Tenant name)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. Name of the tenant.
- user_id (User account)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. User that owns the L-Platform.
- vsys_id (L-Platform ID)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. ID identifying the L-Platform.
- system_name (L-Platform name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the L-Platform (comment).
- base_template_id (L-Platform template ID)
  Output for: ADD, CHANGE, PERIOD
  ASCII: 1 to 32 characters. Template ID of the L-Platform creation source.
Virtual L-Server
In addition to the common items output, the following items will be output:
- org_id (Tenant name)
  Output for: ADD, CHANGE, DELETE, START, STOP, PERIOD
  ASCII: 1 to 32 characters. Name of the tenant.
- user_id (User account)
  Output for: ADD, CHANGE, DELETE, START, STOP, PERIOD
  ASCII: 1 to 32 characters. User that owns the L-Platform.
- vsys_id (L-Platform ID)
  Output for: ADD, CHANGE, DELETE, START, STOP, PERIOD
  ASCII: 1 to 32 characters. ID identifying the L-Platform.
- server_id (L-Server ID)
  Output for: ADD, CHANGE, DELETE, START, STOP, PERIOD
  ASCII: 1 to 64 characters. ID identifying the L-Server.
- system_name (L-Platform name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the L-Platform (comment).
- server_name (L-Server name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the L-Server (comment).
- image_name (Image name)
  Output for: ADD, CHANGE, START, STOP, PERIOD
  ASCII: 1 to 32 characters.
- vm_pool (VM pool name)
  Output for: ADD, CHANGE, PERIOD
  Variable-length string (unlimited).
- cpu_num (Number of CPUs)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999
- cpu_input_num (Value entered for number of CPUs)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999
- cpu_perf (CPU performance)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GHz). Maximum 99999.9 GHz.
- cpu_input_perf (Value entered for CPU performance)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GHz). Maximum 99999.9 GHz.
- cpu_reserve (Reserve value of CPU performance)
  Output for: ADD, CHANGE, PERIOD
  0 to 999999 (Unit: 0.1 GHz). Maximum 99999.9 GHz.
- memory_size (Memory capacity)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GB). Maximum 99999.9 GB.
- memory_reserve (Reserve value of memory capacity)
  Output for: ADD, CHANGE, PERIOD
  0 to 999999 (Unit: 0.1 GB). Maximum 99999.9 GB.
- storage_pool (Storage pool name)
  Output for: ADD, CHANGE, PERIOD
  Variable-length string (unlimited).
- disk_size (System disk size)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GB). Maximum 99999.9 GB.
- server_template_name (L-Server template name)
  Output for: ADD, CHANGE, PERIOD
  ASCII: 1 to 32 characters.
- status (Status)
  Output for: PERIOD
  RUNNING: Active, STOPPED: Stopped
Physical L-Server
In addition to the common items output, the following items will be output:
- org_id (Tenant name)
  Output for: ADD, CHANGE, DELETE, START, STOP, PERIOD
  ASCII: 1 to 32 characters. Name of the tenant.
- user_id (User account)
  Output for: ADD, CHANGE, DELETE, START, STOP, PERIOD
  ASCII: 1 to 32 characters. User that owns the L-Platform.
- vsys_id (L-Platform ID)
  Output for: ADD, CHANGE, DELETE, START, STOP, PERIOD
  ASCII: 1 to 32 characters. ID identifying the L-Platform.
- server_id (L-Server ID)
  Output for: ADD, CHANGE, DELETE, START, STOP, PERIOD
  ASCII: 1 to 64 characters. ID identifying the L-Server.
- system_name (L-Platform name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the L-Platform (comment).
- server_name (L-Server name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the L-Server (comment).
- image_name (Image name)
  Output for: ADD, CHANGE, START, STOP, PERIOD
  ASCII: 1 to 32 characters.
- server_pool (Server pool name)
  Output for: ADD, CHANGE, PERIOD
  Variable-length string (unlimited).
- cpu_num (Number of CPUs)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999
- cpu_input_num (Value entered for number of CPUs)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999
- cpu_perf (CPU performance)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GHz). Maximum 99999.9 GHz.
- cpu_input_perf (Value entered for CPU performance)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GHz). Maximum 99999.9 GHz.
- memory_size (Memory capacity)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GB). Maximum 99999.9 GB.
- memory_input_size (Value entered for memory capacity)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GB). Maximum 99999.9 GB.
- storage_pool (Storage pool name)
  Output for: ADD, CHANGE, PERIOD
  Variable-length string (unlimited).
- disk_size (System disk size)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GB). Maximum 99999.9 GB.
- server_template_name (L-Server template ID)
  Output for: ADD, CHANGE, PERIOD
  ASCII: 1 to 32 characters.
- status (Status)
  Output for: PERIOD
  RUNNING: Active, STOPPED: Stopped
Disk
In addition to the common items output, the following items will be output:
- org_id (Tenant name)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. Name of the tenant.
- user_id (User account)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. User that owns the L-Platform.
- vsys_id (L-Platform ID)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. ID identifying the L-Platform.
- server_id (L-Server ID)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. ID identifying the L-Server.
- disk_id (Disk ID)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. ID identifying the disk.
- system_name (L-Platform name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the L-Platform (comment).
- disk_name (Disk name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the disk.
- storage_pool (Storage pool name)
  Output for: ADD, CHANGE, PERIOD
  Variable-length string (unlimited).
- disk_size (Disk size)
  Output for: ADD, CHANGE, PERIOD
  1 to 999999 (Unit: 0.1 GB). Maximum 99999.9 GB.
L-Platform template
In addition to the common items output, the following items will be output:
- org_id (Tenant name)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. Name of the tenant.
- user_id (User account)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. User that owns the L-Platform.
- template_id (L-Platform template ID)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. ID identifying the L-Platform template.
- template_name (L-Platform template name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the L-Platform template.
Software
In addition to the common items output, the following items will be output:
- org_id (Tenant name)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. Name of the tenant.
- user_id (User account)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. User that owns the L-Platform.
- vsys_id (L-Platform ID)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. ID identifying the L-Platform.
- server_id (L-Server ID)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. ID identifying the L-Server.
- software_id (Software ID)
  Output for: ADD, CHANGE, DELETE, PERIOD
  ASCII: 1 to 32 characters. ID identifying the software.
- system_name (L-Platform name)
  Output for: ADD, CHANGE, PERIOD
  UTF-8: 1 to 256 characters. Name of the L-Platform (comment).
B.3 Formats of Metering Log Files
Metering logs are output to CSV format files or XML format files.
Use the options of the Output metering log command to specify the format in which metering logs are to be output. Refer to "10.2
ctchg_getmeterlog (Output Metering Logs)" in the "Reference Guide (Command/XML) CE" for information on the Output metering log
command.
This section explains each of the file formats.
CSV format files
The format of CSV format files is as follows:
- The item ID is output with a one-byte hash (#) sign added to the first character of the first row.
- Items are output with commas used as delimiters between each item.
- String data is output enclosed by double quotation marks and numeric data is output as it is.
- If an item is not output using string data, "" is output.
- If an item is not output using numeric data, nothing is output. (A comma is output immediately after the previous item.)
- For a Reserved item, "" is output.
The order of the items that are output to a CSV format file is as follows:
No.  Item ID               Item name                          Data type
1    version               Version                            Numeric
2    event_time            Time of event                      String
3    Reserved              Reserved                           String
4    vsys_id               L-Platform ID                      String
5    org_id                Tenant name                        String
6    event                 Event                              String
7    resource_type         Configured resource type           String
8    status                Status                             String
9    user_id               User account                       String
10   server_id             L-Server ID                        String
11   disk_id               Disk ID                            String
12   software_id           Software ID                        String
13   system_name           L-Platform name                    String
14   server_name           L-Server name                      String
15   disk_name             Disk name                          String
16   template_id           Template ID                        String
17   Reserved              Reserved                           String
18   base_template_id      Used template ID                   String
19   image_name            Image name                         String
20   storage_pool          Storage pool name                  String
21   disk_size             Disk size                          Numeric
22   vm_pool               VM pool name                       String
23   cpu_num               Number of CPUs                     Numeric
24   cpu_perf              CPU performance                    Numeric
25   memory_size           Memory capacity                    Numeric
26   cpu_reserve           Reserve value of CPU performance   Numeric
27   memory_reserve        Reserve value of memory capacity   Numeric
28   server_template_name  L-Server template name             String
29   server_pool           Server pool name                   String
30   cpu_input_num         Value entered for number of CPUs   Numeric
31   cpu_input_perf        Value entered for CPU frequency    Numeric
32   memory_input_size     Value entered for memory capacity  Numeric
33   template_name         Template name                      String
An output example of a CSV format file is shown below:
#version,event_time,Reserved,vsys_id,org_id,event,resource_type,status,user_id,server_id,disk_id,software_id,system_name,server_name,disk_name,template_id,Reserved,base_template_id,image_name,storage_pool,disk_size,vm_pool,cpu_num,cpu_perf,memory_size,cpu_reserve,memory_reserve,server_template_name,server_pool,cpu_input_num,cpu_input_perf,memory_input_size,template_name
2.0,"2011-07-02T00:00:00.000+0900","","vsysId001","orgId001","ADD","pserver","","userId001","serverId001","","","systemName001","ServerName001","","","","","image001","/StoragePool001",100,"",1,3,6,,,serverTemplateName001,serverPool001,2,5,8,
2.0,"2011-08-08T10:20:10.000+0900","","","orgId001","PERIOD","template","","userId001","","","","","","","id001","","","","",,"",,,,,,,,,,,name001
2.0,"2011-08-08T10:20:10.000+0900","","vsysId001","orgId001","PERIOD","software","","userId001","serverId001","","softwareId001","systemName001","","","","","","","",,"",,,,,,,,,,,
2.0,"2011-08-08T10:20:10.000+0900","","vsysId001","orgId001","PERIOD","vserver","RUNNING","userId001","serverId001","","","systemName001","serverName002","","","","","imageInfoName001","/StoragePool002",100,"/VMPool001",2,30,60,10,50,serverType001,,,,,
2.0,"2011-08-08T10:20:10.000+0900","","vsysId001","orgId001","PERIOD","vdisk","","userId001","serverId001","diskId001","","systemName001","","diskName001","","","","","/StoragePool002",200,"",,,,,,,,,,,
2.0,"2011-08-08T10:20:10.000+0900","","vsysId001","orgId001","PERIOD","vsys","","userId001","","","","systemName001","","","","","baseTemplateId001","","",,"",,,,,,,,,,,
XML format files
The format of XML format files is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<meterlog>
<entry><Item ID>Value</Item ID>...</entry>
<entry><Item ID>Value</Item ID>...</entry>
...
</meterlog>
- The first row of the CSV format corresponds to the <entry> element.
- The first column of the CSV format corresponds to the <Item ID> element.
- If an item is not output, the <Item ID> tag itself is not output.
An output example of an XML format file is shown below:
<?xml version="1.0" encoding="UTF-8"?>
<meterlog>
<entry>
<version>2.0</version>
<event_time>2011-07-02T00:00:00.000+0900</event_time>
<vsys_id>vsysId001</vsys_id>
<org_id>orgId001</org_id>
<event>ADD</event>
<resource_type>pserver</resource_type>
<user_id>userId001</user_id>
<server_id>serverId001</server_id>
<system_name>systemName001</system_name>
<server_name>serverName001</server_name>
<image_name>image001</image_name>
<storage_pool>storagePool001</storage_pool>
<disk_size>1</disk_size>
<cpu_num>1</cpu_num>
<cpu_perf>3</cpu_perf>
<memory_size>6</memory_size>
<server_template_name>serverTemplateName001</server_template_name>
<server_pool>serverPool001</server_pool>
<cpu_input_num>2</cpu_input_num>
<cpu_input_perf>5</cpu_input_perf>
<memory_input_size>8</memory_input_size>
</entry>
<entry>
...
</entry>
...
</meterlog>
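
A metering log in XML format can be processed in a similar way. The following minimal sketch (written in Python; not part of Resource Orchestrator, and the file name is an assumption) converts each <entry> element into a dictionary, relying on the rule above that omitted items simply have no tag.

import xml.etree.ElementTree as ET

# Minimal sketch: turn each <entry> of an XML-format metering log into a dict.
# Only the elements present in an entry appear in the resulting dict.
def read_meterlog_entries(path):
    root = ET.parse(path).getroot()  # <meterlog> root element
    return [{child.tag: child.text for child in entry} for entry in root.iter("entry")]

for entry in read_meterlog_entries("metering_log.xml"):
    print(entry.get("event"), entry.get("resource_type"), entry.get("org_id"))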
Point
- If a change of tenant has been performed for an L-Platform, an event log with CHANGE as the event will be output.
- If snapshot restore has been performed, an event log with CHANGE as the event will be output even if there was no change.
- At execution of the Import L-Server command of the L-Platform Management function, the following event log will be output:
- If a stopped L-Server has been imported
Event log with ADD as the event
- If an active L-Server has been imported
Event log with ADD and START as the events
Note
- An event indicating a stop of an L-Server (STOP) occurs for a stop or a forced stop of an L-Server. The forced stop of an L-Server
can be performed even if the L-Server has a stopped status (STOPPED). Therefore, take into account that even if an L-Server has a
stopped status, an event indicating a stop of the L-Server (STOP) may exist.
- When starting virtual L-Servers that use RHEL-KVM as virtualization software, the amount of memory used by the L-Server is first set to the upper limit, and is then reduced by the RHEL-KVM management application to the value specified for the L-Server. Therefore, metering logs will be output in the following order when a RHEL-KVM L-Server is started.
1. L-Server starting (START)
2. L-Server changing (CHANGE) : Memory capacity = The upper limit
3. L-Server changing (CHANGE) : The upper limit > Memory capacity > The value specified to the L-Server (*)
4. L-Server changing (CHANGE) : Memory capacity = The value specified to the L-Server
* Note: The change log in step 3. may not be output in some cases.
Refer to "C.6.10 Overcommit" in the "Setup Guide CE" for the RHEL-KVM memory amount.
B.4 Deleting Metering Logs
Metering logs are periodically deleted from the database.
Delete processing is implemented every day at 0:35 and the logs for which the log entry retention period that was set in the metering
operational settings file has passed will be deleted.
When metering logs are deleted, accounting can no longer be calculated, so set the log entry retention period so that only the metering
logs for which accounting calculations are complete will be deleted.
Refer to "8.7.3 Metering Log Settings" for information on the metering log operational settings file.
Glossary
access path
A logical path configured to enable access to storage volumes from servers.
active mode
The state where a managed server is performing operations.
Managed servers must be in active mode in order to use Auto-Recovery.
Move managed servers to maintenance mode in order to perform backup or restoration of system images, or collection or deployment
of cloning images.
active server
A physical server that is currently operating.
admin client
A terminal (PC) connected to an admin server, which is used to operate the GUI.
admin LAN
A LAN used to manage resources from admin servers.
It connects managed servers, storage, and network devices.
admin server
A server used to operate the manager software of Resource Orchestrator.
affinity group
A grouping of the storage volumes allocated to servers. A function of ETERNUS.
Equivalent to the LUN mapping of EMC.
agent
The section (program) of Resource Orchestrator that operates on managed servers.
aggregate
A unit for managing storage created through the aggregation of a RAID group.
Aggregates can contain multiple FlexVols.
alias name
A name set for each ETERNUS LUN to distinguish the different ETERNUS LUNs.
Auto Deploy
A function for deploying VMware ESXi 5.0 to servers using the PXE boot mechanism.
Automatic Storage Layering
A function that optimizes performance and cost by automatically rearranging data in storage units based on the frequency of access.
Auto-Recovery
A function which continues operations by automatically switching over the system image of a failed server to a spare server and
restarting it in the event of server failure.
This function can be used when managed servers are in a local boot configuration, SAN boot configuration, or a configuration such
as iSCSI boot where booting is performed from a disk on a network.
- When using a local boot configuration
The system is recovered by restoring a backup of the system image of the failed server onto a spare server.
- When booting from a SAN or a disk on a LAN
The system is restored by having the spare server inherit the system image on the storage.
Also, when a VLAN is set for the public LAN of a managed server, the VLAN settings of adjacent LAN switches are automatically
switched to those of the spare server.
backup site
An environment prepared in a different location, which is used for data recovery.
BACS (Broadcom Advanced Control Suite)
An integrated GUI application (comprised from applications such as BASP) that creates teams from multiple NICs, and provides
functions such as load balancing.
Basic Mode
A function that can be used by configuring a Cloud Edition license after installing ROR VE.
BASP (Broadcom Advanced Server Program)
LAN redundancy software that creates teams of multiple NICs, and provides functions such as load balancing and failover.
blade server
A compact server device with a thin chassis that can contain multiple server blades, and has low power consumption.
As well as server blades, LAN switch blades, management blades, and other components used by multiple server blades can be mounted
inside the chassis.
blade type
A server blade type.
Used to distinguish the number of server slots used and servers located in different positions.
BladeViewer
A GUI that displays the status of blade servers in a style similar to a physical view and enables intuitive operation.
BladeViewer can also be used for state monitoring and operation of resources.
BMC (Baseboard Management Controller)
A Remote Management Controller used for remote operation of servers.
boot agent
An OS for disk access that is distributed from the manager to managed servers in order to boot them when the network is started during
image operations.
CA (Channel Adapter)
An adapter card that is used as the interface for server HBAs and fibre channel switches, and is mounted on storage devices.
CCM (ETERNUS SF AdvancedCopy Manager Copy Control Module)
This is a module that does not require installation of the ETERNUS SF AdvancedCopy Manager agent on the server that is the source
of the backup, but rather uses the advanced copy feature of the ETERNUS disk array to make backups.
chassis
A chassis used to house server blades and partitions.
Sometimes referred to as an enclosure.
cloning
Creation of a copy of a system disk.
cloning image
A backup of a system disk, which does not contain server-specific information (system node name, IP address, etc.), made during
cloning.
When deploying a cloning image to the system disk of another server, Resource Orchestrator automatically changes server-specific
information to that of the target server.
Cloud Edition
The edition which can be used to provide private cloud environments.
data center
A facility that manages client resources (servers, storage, networks, etc.), and provides internet connections and maintenance/
operational services.
directory service
A service for updating and viewing the names (and associated attributes) of physical/logical resource names scattered across networks,
based on organizational structures and geographical groups using a systematic (tree-shaped structure) management methodology.
disk resource
The unit for resources to connect to an L-Server. An example being a virtual disk provided by LUN or VM management software.
DN (Distinguished Name)
A name defined as a line of an RDN, which contains an entry representing its corresponding object and higher entry.
Domain
A system that is divided into individual systems using partitioning. Also used to indicate a partition.
DR Option
The option that provides the function for remote switchover of servers or storage in order to perform disaster recovery.
Dual-Role Administrators
The administrators with both infrastructure administrator's and tenant administrator's role.
dynamic LUN mirroring
This is a feature whereby a mirror volume is generated at the remote site when a volume is generated at the local site, and copies are
maintained by performing REC.
dynamic memory
A function that optimizes physical memory allocation for virtual machines, depending on their execution status on Hyper-V.
end host mode
This is a mode where the uplink port that can communicate with a downlink port is fixed at one, and communication between uplink
ports is blocked.
environmental data
Measured data regarding the external environments of servers managed using Resource Orchestrator.
Measured data includes power data collected from power monitoring targets.
ESC (ETERNUS SF Storage Cruiser)
Software that supports stable operation of multi-vendor storage system environments involving SAN, DAS, or NAS. Provides
configuration management, relation management, trouble management, and performance management functions to integrate storage
related resources such as ETERNUS.
ETERNUS SF AdvancedCopy Manager
This is storage management software that makes highly reliable and rapid backups, restorations and replications using the advanced
copy feature of the ETERNUS disk array.
Express
The edition which provides server registration, monitoring, and visualization.
external FTP server
An FTP server used to relay network device files between the ROR manager and network devices that do not possess their own FTP
server function.
FC switch (Fibre Channel Switch)
A switch that connects Fibre Channel interfaces and storage devices.
Fibre Channel
A method for connecting computers and peripheral devices and transferring data.
Generally used with servers requiring high-availability, to connect computers and storage systems.
Fibre Channel port
The connector for Fibre Channel interfaces.
When using ETERNUS storage, referred to as an FC-CA port, when using NetApp storage, referred to as an FC port, when using EMC
CLARiiON, referred to as an SP port, when using EMC Symmetrix DMX or EMC Symmetrix VMAX, referred to as a DIRECTOR
port.
fibre channel switch blade
A fibre channel switch mounted in the chassis of a blade server.
FlexVol
A function that uses aggregates to provide virtual volumes.
Volumes can be created in an instant.
FTRP
The pool for physical disks created by Automatic Storage Layering for ETERNUS.
In Resource Orchestrator, FTRPs are used as virtual storage resources on which Thin Provisioning attributes are configured.
FTV
The virtual volumes created by Automatic Storage Layering for ETERNUS.
In Resource Orchestrator, FTVs are used as disk resources on which Thin Provisioning attributes are configured.
global pool
A resource pool that contains resources that can be used by multiple tenants.
It is located in a different location from the tenants.
By configuring a global pool with the attributes of a tenant, it becomes possible for tenant administrators to use the pool.
global zone
The actual OS that is used for a Solaris container.
A Solaris environment that has been installed on a physical server.
GLS (Global Link Services)
Fujitsu network control software that enables high availability networks through the redundancy of network transmission channels.
GSPB (Giga-LAN SAS and PCI_Box Interface Board)
A board which mounts onboard I/O for two partitions and a PCIe (PCI Express) interface for a PCI box.
GUI (Graphical User Interface)
A user interface that displays pictures and icons (pictographic characters), enabling intuitive and easily understandable operation.
HA (High Availability)
The concept of using redundant resources to prevent suspension of system operations due to single problems.
hardware initiator
A controller which issues SCSI commands to request processes.
In iSCSI configurations, NICs fit into this category.
hardware maintenance mode
In the maintenance mode of PRIMEQUEST servers, a state other than Hot System Maintenance.
HBA (Host Bus Adapter)
An adapter for connecting servers and peripheral devices.
Mainly used to refer to the FC HBAs used for connecting storage devices using Fibre Channel technology.
HBA address rename setup service
The service that starts managed servers that use HBA address rename in the event of failure of the admin server.
HBAAR (HBA address rename)
I/O virtualization technology that enables changing of the actual WWN possessed by an HBA.
host affinity
A definition of the server HBA that is set for the CA port of the storage device and the accessible area of storage.
It is a function for association of the Logical Volume inside the storage which is shown to the host (HBA) that also functions as security
internal to the storage device.
Hyper-V
Virtualization software from Microsoft Corporation.
Provides a virtualized infrastructure on PC servers, enabling flexible management of operations.
I/O virtualization option
An optional product that is necessary to provide I/O virtualization.
The WWNN address and MAC address provided is guaranteed by Fujitsu Limited to be unique.
Necessary when using HBA address rename.
IBP (Intelligent Blade Panel)
One of operation modes used for PRIMERGY switch blades.
This operation mode can be used for coordination with ServerView Virtual I/O Manager (VIOM), and relations between server blades
and switch blades can be easily and safely configured.
ICT governance
A collection of principles and practices that encourage desirable behavior in the use of ICT (Information and Communication
Technology) based on an evaluation of the impacts and risks posed in the adoption and application of ICT within an organization or
community.
ILOM (Integrated Lights Out Manager)
The name of the Remote Management Controller for SPARC Enterprise T series servers.
image file
A system image or a cloning image. Also a collective term for them both.
infrastructure administrator
A user who manages the resources comprising a data center.
infra_admin is the role that corresponds to the users who manage resources.
Infrastructure administrators manage all of the resources comprising a resource pool (the global pool and local pools), provide tenant
administrators with resources, and review applications by tenant users to use resources.
integrated network device
A network device with integrated firewall or server load balancing functions.
The IPCOM EX IN series fits into this category.
IPMI (Intelligent Platform Management Interface)
IPMI is a set of common interfaces for the hardware that is used to monitor the physical conditions of servers, such as temperature,
power voltage, cooling fans, power supply, and chassis.
These functions provide information that enables system management, recovery, and asset management, which in turn leads to reduction
of overall TCO.
IQN (iSCSI Qualified Name)
Unique names used for identifying iSCSI initiators and iSCSI targets.
iRMC (integrated Remote Management Controller)
The name of the Remote Management Controller for Fujitsu's PRIMERGY servers.
iSCSI
A standard for using the SCSI protocol over TCP/IP networks.
iSCSI boot
A configuration function that enables the starting and operation of servers via a network.
The OS and applications used to operate servers are stored on iSCSI storage, not the internal disks of servers.
iSCSI storage
Storage that uses an iSCSI connection.
LAG (Link Aggregation Group)
A single logical port created from multiple physical ports using link aggregation.
LAN switch blades
A LAN switch that is mounted in the chassis of a blade server.
LDAP (Lightweight Directory Access Protocol)
A protocol used for accessing Internet standard directories operated using TCP/IP.
LDAP provides functions such as direct searching and viewing of directory services using a web browser.
license
The rights to use specific functions.
Users can use specific functions by purchasing a license for the function and registering it on the manager.
link aggregation
Function used to multiplex multiple ports and use them as a single virtual port.
By using this function, it becomes possible to use a band equal to the total of the bands of all the ports.
Also, if one of the multiplexed ports fails its load can be divided among the other ports, and the overall redundancy of ports improved.
local pool
A resource pool that contains resources that can only be used by a specific tenant.
They are located in tenants.
logical volume
A logical disk that has been divided into multiple partitions.
L-Platform
A resource used for the consolidated operation and management of systems such as multiple-layer systems (Web/AP/DB) comprised
of multiple L-Servers, storage, and network devices.
L-Platform template
A template that contains the specifications for servers, storage, network devices, and images that are configured for an L-Platform.
LSB (Logical System Board)
A system board that is allocated a logical number (LSB number) so that it can be recognized from the domain, during domain
configuration.
L-Server
A resource defined using the logical specifications (number of CPUs, amount of memory, disk capacity, number of NICs, etc.) of the
servers, and storage and network devices connected to those servers.
An abbreviation of Logical Server.
L-Server template
A template that defines the number of CPUs, memory capacity, disk capacity, and other specifications for resources to deploy to an
L-Server.
LUN (Logical Unit Number)
A logical unit defined in the channel adapter of a storage unit.
MAC address (Media Access Control address)
A unique identifier that is assigned to Ethernet cards (hardware).
Also referred to as a physical address.
Transmission of data is performed based on this identifier. It is described as a combination of the vendor-unique identifying number assigned to each maker by the IEEE and the number that each maker assigns to its own hardware.
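The hypothetical address below illustrates this structure: the first three octets are the vendor identifier assigned by the IEEE, and the last three octets are assigned by the maker.
Example
00:1A:2B:3C:4D:5E (vendor identifier: 00:1A:2B, maker-assigned number: 3C:4D:5E)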
maintenance mode
The state where operations on managed servers are stopped in order to perform maintenance work.
In this state, the backup and restoration of system images and the collection and deployment of cloning images can be performed.
However, when using Auto-Recovery, it is necessary to change from this mode back to active mode, as it is not possible to switch over to a spare server if a server fails while in maintenance mode.
managed server
A collective term referring to a server that is managed as a component of a system.
management blade
A server management unit that has a dedicated CPU and LAN interface, and manages blade servers.
Used for gathering server blade data, failure notification, power control, etc.
Management Board
The PRIMEQUEST system management unit.
Used for gathering information such as failure notification, power control, etc. from chassis.
manager
The section (program) of Resource Orchestrator that operates on admin servers.
It manages and controls resources registered with Resource Orchestrator.
master configuration file
This is the original network device configuration file that is backed up from each network device immediately after Resource
Orchestrator is set up.
It is used for the following purposes:
- When initializing the settings of network devices
- When checking the differences between the current and original configurations
- For providing the initial settings when creating a new system with the same configuration
With regard to the network device file management function, these files are excluded from the scope of version management (they are not automatically deleted).
master slot
A slot that is recognized as a server when a server that occupies multiple slots is mounted.
member server
A collective term that refers to a server in a Windows network domain that is not a domain controller.
migration
The migration of a VM guest to a different VM host. The following two types of migration are available:
- Cold migration
Migration of an inactive (powered-off) VM guest.
- Live migration
Migration of an active (powered-on) VM guest.
multi-slot server
A server that occupies multiple slots.
NAS (Network Attached Storage)
A collective term for storage that is directly connected to a LAN.
network device
The unit used for registration of network devices.
L2 switches, firewalls, and server load balancers fit into this category.
network device configuration file
These files contain definitions of settings regarding communication, such as VLAN information for network devices and interfaces,
rules for firewalls and server load balancers, etc.
As the content of these files changes each time settings are configured from the CLI, they are the target of automatic backup by Resource Orchestrator, and a fixed number of versions (32 by default) are kept inside Resource Orchestrator.
Many network devices have two types of network device configuration files: "running config", which holds the current configuration
details, and "startup config", which holds the configuration that is valid directly after startup.
In Resource Orchestrator these two types of files are the target of backup and restore operations.
network device environment file
A collective term that refers to the files necessary for operating devices, such as CA certificates, user authentication databases,
customized user information, etc. (but excluding the network device configuration file).
As these files are not usually changed after they have been configured, Resource Orchestrator does not back them up each time automatic
configuration is performed.
network device file
Regarding the network device file management function, this is a collective term that refers to the files held by network devices that
are the target of backup and restore operations.
The two types of network device files are network device configuration files and network device environment files.
network map
A GUI function for graphically displaying the connection relationships of the servers and LAN switches that compose a network.
network view
A window that displays the connection relationships and status of the wiring of a network map.
NFS (Network File System)
A system that enables the sharing of files over a network in Linux environments.
NIC (Network Interface Card)
An interface used to connect a server to a network.
non-global zone
A virtual machine environment that has been prepared in a global zone. Its OS kernel is shared with the global zone. Non-global zones
are completely separate from each other.
OS
The OS used by an operating server (a physical OS or VM guest).
overcommit
A function to virtually allocate more resources than the actual amount of resources (CPUs and memory) of a server.
This function is used to enable allocation of more resources than are physically mounted in the target server.
PDU (Power Distribution Unit)
A device for distributing power (such as a power strip).
Resource Orchestrator uses PDUs with current value display functions as Power monitoring devices.
physical LAN segment
A physical LAN that servers are connected to.
Servers are connected to multiple physical LAN segments that are divided based on their purpose (public LANs, backup LANs, etc.).
Physical LAN segments can be divided into multiple network segments using VLAN technology.
physical network adapter
An adapter, such as a LAN card, used to connect physical servers or VM hosts to a network.
physical OS
An OS that operates directly on a physical server without the use of server virtualization software.
physical server
The same as a "server". Used when it is necessary to distinguish actual servers from virtual servers.
pin-group
This is a group, set with the end host mode, that has at least one uplink port and at least one downlink port.
Pool Master
On Citrix XenServer, it indicates one VM host belonging to a Resource Pool.
It handles setting changes and information collection for the Resource Pool, and also performs operation of the Resource Pool.
For details, refer to the Citrix XenServer manual.
port backup
A function for LAN switches which is also referred to as backup port.
port VLAN
A VLAN in which the ports of a LAN switch are grouped, and each LAN group is treated as a separate LAN.
port zoning
The division of ports of fibre channel switches into zones, and setting of access restrictions between different zones.
power monitoring devices
Devices used by Resource Orchestrator to monitor the amount of power consumed.
PDUs and UPSs with current value display functions fit into this category.
power monitoring targets
Devices from which Resource Orchestrator can collect power consumption data.
pre-configuration
Performing environment configuration for Resource Orchestrator on another separate system.
primary server
The physical server that is switched from when performing server switchover.
primary site
The environment that is usually used by Resource Orchestrator.
private cloud
A private form of cloud computing that provides ICT services exclusively within a corporation or organization.
public LAN
A LAN used for operations by managed servers.
Public LANs are established separately from admin LANs.
rack
A case designed to accommodate equipment such as servers.
rack mount server
A server designed to be mounted in a rack.
RAID (Redundant Arrays of Inexpensive Disks)
Technology that realizes high-speed and highly-reliable storage systems using multiple hard disks.
RAID management tool
Software that monitors disk arrays mounted on PRIMERGY servers.
The RAID management tool differs depending on the model or the OS of PRIMERGY servers.
RDM (Raw Device Mapping)
A function of VMware. This function provides direct access from a VMware virtual machine to a LUN.
RDN (Relative Distinguished Name)
A name used to identify the lower entities of a higher entry.
Each RDN must be unique within the same entry.
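For illustration, in the hypothetical distinguished name shown below, "cn=user01" is the RDN of the entry relative to its parent entry "ou=Sales,dc=example,dc=com".
Example
cn=user01,ou=Sales,dc=example,dc=com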
Remote Management Controller
A unit used for managing servers.
Used for gathering server data, failure notification, power control, etc.
- For Fujitsu PRIMERGY servers
iRMC2
- For SPARC Enterprise
ILOM (T series servers)
XSCF (M series servers)
- For HP servers
iLO2 (integrated Lights-Out)
- For Dell/IBM servers
BMC (Baseboard Management Controller)
Remote Server Management
A PRIMEQUEST feature for managing partitions.
Reserved SB
Indicates the spare system board that will be incorporated into a partition in place of a failed system board, when the hardware of a system board in the partition fails and the failed board must be disconnected.
resource
General term referring to the logical definition of the hardware (such as servers, storage, and network devices) and software that
comprise a system.
resource folder
An arbitrary group of resources.
resource pool
A unit for management of groups of similar resources, such as servers, storage, and network devices.
resource tree
A tree that displays the relationships between the hardware of a server and the OS operating on it using hierarchies.
role
A collection of operations that can be performed.
ROR console
The GUI that enables operation of all functions of Resource Orchestrator.
ruleset
A collection of script lists for performing configuration of network devices, configured as combinations of rules based on the network
device, the purpose, and the application.
SAN (Storage Area Network)
A specialized network for connecting servers and storage.
SAN boot
A configuration function that enables the starting and operation of servers via a SAN.
The OS and applications used to operate servers are stored on SAN storage, not the internal disks of servers.
SAN storage
Storage that uses a Fibre Channel connection.
script list
Lists of scripts for the automation of operations such as status and log display, and definition configuration of network devices.
Used to execute multiple scripts in one operation. The scripts listed in a script list are executed in the order that they are listed.
As with individual scripts, they are created by the infrastructure administrator, and can be customized to meet the needs of tenant administrators.
They are used to configure virtual networks for VLANs on physical networks, in cases where it is necessary to perform auto-
configuration of multiple switches at the same time, or to configure the same rules for network devices in redundant configurations.
The script lists contain the scripts used to perform automatic configuration.
There are the following eight types of script lists:
- script lists for setup
- script lists for setup error recovery
- script lists for modification
- script lists for modification error recovery
- script lists for setup (physical server added)
- script lists for setup error recovery (physical server added)
- script lists for deletion (physical server deleted)
- script lists for deletion
server
A computer (operated with one operating system).
server blade
A server blade has the functions of a server integrated into a single board. Server blades are mounted in the chassis of blade servers.
server management unit
A unit used for managing servers.
A management blade is used for blade servers, and a Remote Management Controller is used for other servers.
server name
The name allocated to a server.
server NIC definition
A definition that describes the method of use for each server's NIC.
For the NICs on a server, it defines which physical LAN segment to connect to.
server virtualization software
Basic software which is operated on a server to enable use of virtual machines. Used to indicate the basic software that operates on a
PC server.
ServerView Deployment Manager
Software used to collect and deploy server resources over a network.
ServerView Operations Manager
Software that monitors a server's (PRIMERGY) hardware state, and notifies of errors by way of the network.
ServerView Operations Manager was previously known as ServerView Console.
ServerView RAID
One of the RAID management tools for PRIMERGY.
ServerView Update Manager
This is software that performs jobs such as remote updates of BIOS, firmware, drivers, and hardware monitoring software on servers
being managed by ServerView Operations Manager.
ServerView Update Manager Express
Software that performs batch updates of BIOS, firmware, drivers, and hardware monitoring software.
It is used by inserting the ServerView Suite DVD1 or ServerView Suite Update DVD into the server requiring updating and starting the server.
Single Sign-On
A mechanism that allows external software to be used without further login operations once authentication has been performed.
slave slot
A slot that is not recognized as a server when a server that occupies multiple slots is mounted.
SMB (Server Message Block)
A protocol that enables the sharing of files and printers over a network.
SNMP (Simple Network Management Protocol)
A communications protocol to manage (monitor and control) the equipment that is attached to a network.
software initiator
An initiator processed by software using OS functions.
Solaris container resource pool
The Solaris Containers resource pool used in the global zone and the non-global zone.
Solaris Containers
Solaris server virtualization software.
On Solaris servers, it is possible to configure multiple virtual Solaris servers, which are referred to as Solaris Zones.
Solaris Zone
A software partition that virtually divides a Solaris OS space.
SPARC Enterprise Partition Model
A SPARC Enterprise model with a partitioning function that enables multiple system configurations, separating a server into multiple areas, each running its own OS and applications.
spare server
A server which is used to replace a failed server when server switchover is performed.
storage blade
A blade-style storage device that can be mounted in the chassis of a blade server.
storage management software
Software for managing storage units.
storage resource
Collective term that refers to virtual storage resources and disk resources.
storage unit
Used to indicate the entire secondary storage as one product.
surrogate pair
A method for expressing one character as 32 bits.
In the UTF-16 character code, 0xD800 - 0xDBFF are referred to as "high surrogates", and 0xDC00 - 0xDFFF are referred to as "low
surrogates". Surrogate pairs use "high surrogate" + "low surrogate".
switchover state
The state in which switchover has been performed on a managed server, but neither failback nor continuation has been performed.
system administrator
The administrator who manages the entire system. They perform pre-configuration and installation of Resource Orchestrator.
Administrator privileges for the operating system are required. Normally the roles of the infrastructure administrator and system
administrator are performed concurrently.
System Board
A board which can mount up to 2 Xeon CPUs and 32 DIMMs.
system disk
The disk on which the programs (such as the OS) and files necessary for the basic functions of servers (including booting) are installed.
system image
A copy of the contents of a system disk made as a backup.
Different from a cloning image as changes are not made to the server-specific information contained on system disks.
tenant
A unit for the division and segregation of management and operation of resources based on organizations or operations.
tenant administrator
A user who manages the resources allocated to a tenant.
tenant_admin is the role for performing management of resources allocated to a tenant.
Tenant administrators manage the available space on resources in the local pools of tenants, and approve or reject applications by
tenant users to use resources.
tenant user
A user who uses the resources of a tenant, or creates and manages L-Platforms, or a role with the same purpose.
Thick Provisioning
Allocation of the actual requested capacity when allocating storage resources.
Thin Provisioning
Allocating only the capacity actually used when allocating storage resources.
tower server
A standalone server with a vertical chassis.
TPP (Thin Provisioning Pool)
One of the resources defined using ETERNUS. Thin Provisioning Pools are resource pools of physical disks created using Thin Provisioning.
TPV (Thin Provisioning Volume)
One of the resources defined using ETERNUS. Thin Provisioning Volumes are physical disks created using the Thin Provisioning function.
UNC (Universal Naming Convention)
Notational system for Windows networks (Microsoft networks) that enables specification of shared resources (folders, files, shared
printers, shared directories, etc.).
Example
\\hostname\dir_name
UPS (Uninterruptible Power Supply)
A device containing rechargeable batteries that temporarily provides power to computers and peripheral devices in the event of power
failures.
Resource Orchestrator uses UPSs with current value display functions as power monitoring devices.
URL (Uniform Resource Locator)
The notational method used for indicating the location of information on the Internet.
VIOM (ServerView Virtual-IO Manager)
The name of both the I/O virtualization technology used to change the MAC addresses of NICs and the software that performs the
virtualization.
Changes to values of WWNs and MAC addresses can be performed by creating a logical definition of a server, called a server profile,
and assigning it to a server.
Virtual Edition
The edition that can use the server switchover function.
Virtual I/O
Technology that virtualizes the relationship between servers and I/O devices (mainly storage and network), thereby simplifying the allocation of I/O resources to servers, modifications to those resources, and server maintenance.
For Resource Orchestrator it is used to indicate HBA address rename and ServerView Virtual-IO Manager (VIOM).
virtual server
A server that is operated on a VM host as a virtual machine.
virtual storage resource
This refers to a resource that can dynamically create a disk resource.
Examples include RAID groups and logical storage managed by server virtualization software (such as VMware datastores).
In Resource Orchestrator, disk resources can be dynamically created from ETERNUS RAID groups, NetApp aggregates, and logical
storage managed by server virtualization software.
virtual switch
A function provided by server virtualization software to manage the networks of VM guests (L-Servers) as virtual LAN switches.
The relationships between the virtual NICs of VM guests and the NICs of the physical servers used to operate VM hosts can be managed using operations similar to those for wiring normal LAN switches.
VLAN (Virtual LAN)
A function that enables the creation of virtual LANs (networks that are treated as logically separate by software) by grouping ports on a LAN switch.
Using a virtual LAN, the network configuration can be changed flexibly without modifying the physical network configuration.
VLAN ID
A number (between 1 and 4,094) used to identify VLANs.
The value 0 is reserved for priority-tagged frames, and 4,095 (FFF in hexadecimal) is reserved for implementation use.
VM (Virtual Machine)
A virtual computer that operates on a VM host.
VM guest
A virtual server that operates on a VM host, or an OS that is operated on a virtual machine.
VM Home Position
The VM host that is home to VM guests.
VM host
A server on which server virtualization software is operated, or the server virtualization software itself.
VM maintenance mode
One of the settings of server virtualization software, that enables maintenance of VM hosts.
For example, when using high availability functions (such as VMware HA) of server virtualization software, setting VM maintenance mode makes it possible to prevent VM guests from being moved to VM hosts undergoing maintenance.
For details, refer to the manuals of the server virtualization software being used.
VM management software
Software for managing multiple VM hosts and the VM guests that operate on them.
Provides value-adding functions such as the movement of VM guests between servers (migration).
VMware
Virtualization software from VMware Inc.
Provides a virtualized infrastructure on PC servers, enabling flexible management of operations.
VMware DPM (VMware Distributed Power Management)
A function of VMware. This function is used to reduce power consumption by automating power management of servers in VMware
DRS clusters.
VMware DRS (VMware Distributed Resource Scheduler)
A function of VMware. This function is used to monitor the load conditions on an entire virtual environment and optimize the load
dynamically.
VMware Teaming
A function of VMware. By using VMware Teaming, redundancy can be achieved by connecting a single virtual switch to multiple physical network adapters.
Web browser
A software application that is used to view Web pages.
WWN (World Wide Name)
A 64-bit address allocated to an HBA.
Refers to a WWNN or a WWPN.
WWNN (World Wide Node Name)
A name that is set as a common value for the Fibre Channel ports of a node. However, the definitions of nodes vary between
manufacturers, and may also indicate devices or adapters. Also referred to as a node WWN.
WWPN (World Wide Port Name)
A name that is set as a unique value for each Fibre Channel port (HBA, CA, Fibre Channel switch ports, etc.), based on a globally unique identifier administered by the IEEE.
As each Fibre Channel port has a unique WWPN, WWPNs are used as identifiers during Fibre Channel port login. Also referred to as a port WWN.
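A WWPN is usually written as 16 hexadecimal digits (64 bits); the value below is a hypothetical illustration of the format only.
Example
50:01:23:45:67:89:AB:CD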
WWPN zoning
The division of ports into zones based on their WWPN, and setting of access restrictions between different zones.
Xen
A type of server virtualization software.
XSB (eXtended System Board)
Unit for domain creation and display, composed of physical components.
XSCF (eXtended System Control Facility)
The name of the Remote Management Controller for SPARC Enterprise M series servers.
zoning
A function that provides security for Fibre Channels by grouping the Fibre Channel ports of a Fibre Channel switch into zones, and
only allowing access to ports inside the same zone.