Hitachi Command Control Interface (CCI)
User and Reference Guide
Hitachi Universal Storage Platform V/VM
Hitachi TagmaStore® Universal Storage Platform
Hitachi TagmaStore® Network Storage Controller
Hitachi Lightning 9900™ V Series
Hitachi Lightning 9900™
MK-90RD011-25
Copyright © 2008 Hitachi Data Systems Corporation, ALL RIGHTS RESERVED
Notice: No part of this publication may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying and recording, or stored in a
database or retrieval system for any purpose without the express written permission of
Hitachi Data Systems Corporation (hereinafter referred to as “Hitachi Data Systems”).
Hitachi Data Systems reserves the right to make changes to this document at any time
without notice and assumes no responsibility for its use. Hitachi Data Systems products and
services can only be ordered under the terms and conditions of Hitachi Data Systems’
applicable agreements. All of the features described in this document may not be currently
available. Refer to the most recent product announcement or contact your local Hitachi
Data Systems sales office for information on feature and product availability.
This document contains the most current information available at the time of publication.
When new and/or revised information becomes available, this entire document will be
updated and distributed to all registered users.
Trademarks
Hitachi, the Hitachi logo, and Hitachi Data Systems are registered trademarks and service
marks of Hitachi, Ltd. The Hitachi Data Systems logo is a trademark of Hitachi, Ltd.
Hitachi Lightning 9900 and Hitachi TagmaStore are registered trademarks or trademarks of
Hitachi Data Systems Corporation.
All other brand or product names are or may be trademarks or service marks of and are used
to identify products or services of their respective owners.
Notice of Export Controls
Export of technical data contained in this document may require an export license from the
United States government and/or the government of Japan. Please contact the Hitachi Data
Systems Legal Department for any export compliance questions.
Document Revision Level
Revision                Date            Description
MK-90RD011-00           July 2000       Initial Release
MK-90RD011-01 thru
MK-90RD011-22           --              The release information for revisions 01-22 has been
                                        omitted. See MK-90RD011-23 for release information
                                        for these revisions.
MK-90RD011-23           September 2007  Revision 23, supersedes and replaces MK-90RD011-22
MK-90RD011-24           January 2008    Revision 24, supersedes and replaces MK-90RD011-23
MK-90RD011-25           May 2008        Revision 25, supersedes and replaces MK-90RD011-24
Source Documents for this Revision
■ RAID Manager Basic Specifications, revision 64 (3/24/2008)
Changes in this Revision
■ Added support for the following host platforms (section 3.1):
  – Microsoft Windows 2008
  – HP OpenVMS 8.3 support for IPv6
  – HP OpenVMS for Integrity Server
  – 64-bit RAID Manager for RH/IA64
■ Added “SSB” to the output of the EX_CMDRJE error message (Table 5.3).
■ Added support for Oracle 10g H.A.R.D.:
  – Added identification of “NON zero checking” to the output of the raidvchkdsp and
■ Added “pathID” as HORCM_INSTP in horcm.conf (section 2.8.4).
Preface
This document describes and provides instructions for installing and using the Command
Control Interface (CCI) software for Hitachi RAID storage systems. CCI enables the user to
configure and manage operations for the following data management/business continuity
features from the open-systems host:
■ TrueCopy
■ ShadowImage
■ Copy-on-Write Snapshot
■ Universal Replicator
■ Database Validator
■ Data Retention Utility/Open LDEV Guard
This document applies to the following Hitachi RAID storage systems:
■ Hitachi Universal Storage Platform V/VM (USP V/VM)
■ Hitachi TagmaStore® Universal Storage Platform (USP)
■ Hitachi TagmaStore Network Storage Controller (NSC)
■ Hitachi Lightning 9900™ V Series (9900V)
■ Hitachi Lightning 9900 (9900)
This document assumes the following:
■ The user has a background in data processing and understands RAID storage systems and
  their basic functions.
■ The user is familiar with the Hitachi RAID storage systems and has read and understands
  the User and Reference Guide for the storage system.
■ The user is familiar with the host operating system.
■ The user is familiar with the Hitachi business continuity features.
Notes:
■ The term “Hitachi RAID storage system” refers to all supported Hitachi storage systems,
  unless otherwise noted.
■ The terms used for the Hitachi RAID storage systems refer to all models of the storage
  system, unless otherwise noted. For example, “Universal Storage Platform V” refers to
  all models of the USP V, unless otherwise noted.
Notice: The use of the CCI software and all other Hitachi Data Systems products is governed
by the terms of your agreement(s) with Hitachi Data Systems.
CCI Software Version
This document revision applies to CCI software version 01-22-03/02.
Conventions for Storage Capacity Values
Storage capacity values for logical devices (LDEVs) on the Hitachi RAID storage systems are
calculated based on the following values:
1 KB (kilobyte) = 1,024 bytes
1 MB (megabyte) = 1,024² bytes
1 GB (gigabyte) = 1,024³ bytes
1 TB (terabyte) = 1,024⁴ bytes
1 PB (petabyte) = 1,024⁵ bytes
1 block = 512 bytes
Referenced Documents
Hitachi Universal Storage Platform V/VM documents:
■ Universal Storage Platform V/VM User and Reference Guide, MK-96RD635
■ Storage Navigator User’s Guide, MK-96RD621
■ Hitachi ShadowImage User’s Guide, MK-96RD618
■ Hitachi TrueCopy User’s Guide, MK-96RD622
■ Data Retention Utility User’s Guide, MK-96RD612
■ Database Validator User’s Guide, MK-96RD611
■ Copy-on-Write Snapshot User’s Guide, MK-96RD607
■ Universal Replicator User’s Guide, MK-96RD624
Hitachi TagmaStore USP V/VM and NSC documents:
■ Universal Storage Platform User and Reference Guide, MK-94RD231
■ Network Storage Controller User and Reference Guide, MK-95RD279
■ Storage Navigator User’s Guide, MK-94RD206
■ Hitachi ShadowImage User Guide, MK-94RD204
■ Hitachi TrueCopy User and Reference Guide, MK-94RD215
■ Data Retention Utility User’s Guide, MK-94RD210
■ Database Validator User’s Guide, MK-94RD207
■ Copy-on-Write Snapshot User’s Guide, MK-95RD277
■ Universal Replicator User’s Guide, MK-94RD223
Hitachi Lightning 9900™ V Series documents:
■ User and Reference Guide, MK-92RD100
■ Remote Console – Storage Navigator User’s Guide, MK-92RD101
■ Hitachi ShadowImage User’s Guide, MK-92RD110
■ Hitachi TrueCopy User and Reference Guide, MK-92RD108
■ Open LDEV Guard User’s Guide, MK-93RD158
■ DB Validator Reference Guide, MK-92RD140
Hitachi Lightning 9900™ documents:
■ User and Reference Guide, MK-90RD008
■ Remote Console User’s Guide, MK-90RD003
■ Hitachi ShadowImage User’s Guide, MK-90RD031
■ Hitachi TrueCopy User and Reference Guide, MK-91RD051
Comments
Please send us your comments on this document. Make sure to include the document title,
number, and revision. Please refer to specific section(s) and paragraph(s) whenever possible.
■ E-mail: doc.comments@hds.com
■ Fax: 858-695-1186
■ Mail: Technical Writing, M/S 35-10
        Hitachi Data Systems
        10277 Scripps Ranch Blvd.
        San Diego, CA 92131
Thank you! (All comments become the property of Hitachi Data Systems Corporation.)
Contents
Chapter 1 Overview of CCI Functionality...............................................................................................1
Chapter 2 Overview of CCI Operations ..................................................................................................7
2.1 Overview...........................................................................................8
2.2 Features of Paired Volumes.....................................................................9
2.8 CCI Software Structure ........................................................................ 45
2.9 Configuration Definition File ..................................................................76
Chapter 3 Preparing for CCI Operations............................................................................................111
3.1 System Requirements......................................................................... 112
3.2 Hardware Installation ........................................................................ 128
3.3 Software Installation ......................................................................... 129
3.6 CCI Startup..................................................................................... 155
Chapter 4 Performing CCI Operations...............................................................................................163
4.1 Environmental Variables ..................................................................... 164
4.2 Creating Pairs (Paircreate) .................................................................. 168
4.3 Splitting and Deleting Pairs (Pairsplit)..................................................... 173
4.6 Monitoring Pair Activity (Pairmon) ......................................................... 192
4.8 Displaying Pair Status (Pairdisplay)......................................................... 211
4.11 Displaying Configuration Information ...................................................... 229
4.13 Controlling CCI Activity....................................................................... 261
4.13.10 Mount Subcommand................................................. 275
4.13.11 Umount and Umountd Subcommands .............................. 277
4.13.12 Environment Variable Subcommands.............................. 279
4.14 CCI Command Tools ........................................................................... 280
4.16 Protection Facility............................................................................. 296
4.20 Host Group Control ........................................................................... 320
4.21 Using CCI SLPR Security ...................................................................... 322
4.22 Controlling Volume Migration ............................................................... 329
Chapter 5 Troubleshooting.................................................................................................................337
5.1 General Troubleshooting..................................................................... 338
5.3 Error Reporting ................................................................................ 343
Appendix A Maintenance Logs and Tracing Functions.......................................................................353
A.1 Log Files ........................................................................................ 353
A.2 Trace Files ..................................................................................... 355
Appendix B Updating and Uninstalling CCI..........................................................................................359
B.1 Uninstalling UNIX CCI Software ............................................................. 359
Appendix C Fibre-to-SCSI Address Conversion ..................................................................................361
Acronyms and Abbreviations .................................................................................................................367
List of Figures
Figure 2.1 Concept of Paired Volumes ...............................................................9
Figure 2.10 Sidefile Quantity Limit .................................................................. 25
Figure 2.21 CCI Software Structure .................................................................. 47
Figure 2.27 Configuration for Multiple Networks .................................................. 58
Figure 2.28 Network Configuration for IPv6 ........................................................ 59
Figure 2.35 Flow of Command Issue ................................................................. 66
Figure 2.38 Current Assignment Sequence.......................................................... 69
Figure 2.48 Pairdisplay on HORCMINST0 ........................................................... 102
Figure 2.49 Pairdisplay on HORCMINST1 ........................................................... 102
Figure 2.50 Pairdisplay on HORCMINST0 ........................................................... 103
Figure 2.58 System Failover and Recovery ........................................................ 109
Figure 3.4 CCI Configuration on VIO Client ...................................................... 124
Figure 3.5 Library and System Call for IPv6...................................................... 126
Figure 4.1 Pair Creation............................................................................. 168
Figure 4.2 Pair Splitting............................................................................. 173
Figure 4.5 Pair Resynchronization................................................................. 181
Figure 4.9 Swap Operation ......................................................................... 186
Figure 4.11 Pair Event Waiting ...................................................................... 187
Figure 4.22 Pairdisplay -m Example ................................................................ 214
Figure 4.29 Example of -find Option for Raidscan ............................................... 231
Figure 4.67 Inqraid: Example of -gvinf Option .................................................... 287
Figure 4.68 Inqraid: Example of -svinf[=PTN] Option ............................................ 287
Figure 4.72 Definition of the Protection Volume ................................................. 296
Figure 4.75 Definition of the Group Version....................................................... 306
Figure 4.76 LDM Volume Configuration............................................................. 307
Figure 4.77 LDM Volume Flushing ................................................................... 311
Figure 4.79 Directory Mount Structure ............................................................. 318
Figure 4.81 SLPR Configuration on a Single Host ................................................. 324
Figure 4.84 SLPR Configuration on Dual Hosts .................................................... 326
Figure 4.87 TrueCopy Operation using SLPR ...................................................... 328
Figure 4.88 Volume Migration Configurations..................................................... 329
List of Tables
Table 2.10  CCI Files for UNIX-based Systems.......................................71
Table 3.1   Supported Platforms for TrueCopy...................................... 114
Table 3.2   Supported Platforms for ShadowImage................................... 115
Table 3.3   Supported Platforms for TrueCopy Async................................ 116
Table 3.4   Supported Platforms for Universal Replicator.......................... 117
Table 3.5   Supported Guest OS for VMware......................................... 118
Table 3.6   Supported Platforms: IPv6 vs IPv6..................................... 118
Table 3.7   Supported Platforms: IPv4 vs IPv6..................................... 119
Table 3.8   Relationship between CCI and RAID Storage System...................... 120
Table 4.1   HORCM, Hitachi TrueCopy, and ShadowImage Variables.................... 164
Table 4.2   Relationship Between -I[inst#] Option and $HORCMINST and HORCC_MRCF .. 167
Table 4.3   Paircreate Command Parameters......................................... 169
Table 4.4   Specific Error Codes for Paircreate................................... 172
Table 4.5   Specific Error Codes for Pairsplit.................................... 177
Table 4.6   Pairresync Command Parameters......................................... 182
Table 4.7   Specific Error Codes for Pairresync................................... 185
Table 4.10  Specific Error Codes for Pairevtwait.................................. 189
Table 4.14  Specific Error Codes for Pairvolchk................................... 197
Table 4.20  Specific Error Code for Paircurchk.................................... 218
Table 4.49  Specific Error Code for Pairsyncwait.................................. 294
Table 4.50  Registration for the Mirror Descriptor................................ 297
Table 5.1   Operational Notes for CCI Operations.................................. 338
Table 5.2   System Log Messages................................................... 343
Table 5.3   Command Error Messages................................................ 344
Table 5.4   Generic Error Codes (horctakeover, paircurchk, paircreate, pairsplit,
            pairresync, pairevtwait, pairvolchk, pairsyncwait, pairdisplay)....... 348
Table 5.5   Generic Error Codes (raidscan, raidqry, raidar, horcctl).............. 349
Table 5.6   Specific Error Codes.................................................. 350
Chapter 1 Overview of CCI Functionality
1.1 Overview of Command Control Interface
The Hitachi Command Control Interface (CCI) software product enables you to configure and
control Hitachi data replication and data protection operations by issuing commands from
the open-systems host to the Hitachi RAID storage systems. This document covers CCI
operations for the following Hitachi storage systems: Universal Storage Platform V/VM (USP
V/VM), Universal Storage Platform (USP), Network Storage Controller (NSC), Lightning 9900V,
and Lightning 9900.
The Hitachi data replication operations supported by CCI include (see section 1.2):
■ TrueCopy (Synchronous and Asynchronous)
■ ShadowImage
■ Universal Replicator (USP V/VM, TagmaStore USP/NSC)
■ Copy-on-Write Snapshot (USP V/VM, TagmaStore USP/NSC)
The Hitachi data protection operations supported by CCI include (see section 1.3):
■ Database Validator
■ Data Retention Utility (called “Open LDEV Guard” on Lightning 9900V/9900)
For remote copy operations, CCI interfaces with the system software and high-availability
(HA) software on the host as well as the Hitachi software on the RAID storage system. CCI
provides failover and operation commands that support mutual hot standby in conjunction
with industry-standard failover products (e.g., MC/ServiceGuard, HACMP, FirstWatch®). CCI
also supports a scripting function for defining multiple operations in a script (or text) file.
Using CCI scripting, you can set up and execute a large number of commands in a short
period of time while integrating host-based high-availability control over copy operations.
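As an illustration of such scripting, a host-side script might chain CCI commands as in the
following sketch. This is an assumption-laden example, not taken from this guide: the group
name oradb, instance number 0, and timeout value are all hypothetical.

#!/bin/sh
# Select the CCI instance that the commands are issued to (instance 0 assumed).
HORCMINST=0
export HORCMINST
# Create the pairs for the group and wait for the initial copy to complete.
paircreate -g oradb -vl
pairevtwait -g oradb -s pair -t 3600
# Split the pairs so that the secondary volumes can be used for backup.
pairsplit -g oradb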
1.2 Overview of Hitachi Data Replication Functions
The Hitachi data replication features controlled by CCI include:
■ TrueCopy (section 1.2.1)
■ ShadowImage (section 1.2.2)
■ Universal Replicator (section 1.2.3)
■ Copy-on-Write Snapshot (section 1.2.4)
1.2.1 Hitachi TrueCopy
The Hitachi TrueCopy feature enables you to create and maintain remote copies of the data
stored on the RAID storage systems for data backup and disaster recovery purposes.
TrueCopy operations can be performed across distances of up to 43 km (26.7 miles) using
standard ESCON® support, and up to 30 km (18.6 miles) using fibre-channel (FC) interface.
Long-distance TrueCopy solutions are provided, based on user requirements and workload
characteristics, using approved channel extenders and communication lines.
Hitachi TrueCopy operations can be performed using the Command Control Interface (CCI)
software on the UNIX/PC server host, or the TrueCopy software on Storage Navigator. The
CCI software on the UNIX/PC server displays Hitachi TrueCopy information and allows you to
perform TrueCopy operations from the UNIX command line or via a script file. The CCI
software interfaces with the RAID storage systems through a dedicated LU called a command
device. The Hitachi TrueCopy software also displays TrueCopy information and allows you to
perform TrueCopy operations via a Windows-based GUI.
Hitachi TrueCopy can be used in conjunction with ShadowImage to maintain multiple copies
of critical data at your primary and/or secondary (remote) sites. This capability provides
maximum flexibility in data backup and duplication activities.
For details on TrueCopy operations, please refer to the TrueCopy User’s Guide for the
storage system (e.g., Hitachi TagmaStore® USP/NSC TrueCopy User’s Guide).
Note: The 7700E remote copy feature/software is called Hitachi Open Remote Copy (HORC).
1.2.2 Hitachi ShadowImage
The ShadowImage data duplication feature enables you to set up and maintain multiple
copies of logical volumes within the same storage system. The RAID-protected ShadowImage
duplicates are created and maintained at hardware speeds. ShadowImage operations for
UNIX/PC server-based data can be performed using either the Command Control Interface
(CCI) software on the UNIX/PC server host, or the ShadowImage software on Storage
Navigator.
The Hitachi CCI software on the UNIX/PC server displays ShadowImage information and
allows you to perform ShadowImage operations by issuing commands from the UNIX
command line or by executing a script file. The CCI software interfaces with the storage
system through a dedicated LU called a command device. The ShadowImage remote console
software also displays ShadowImage information and allows you to perform ShadowImage
operations using a Windows-based GUI. The ShadowImage software interfaces with the RAID
storage system via its service processor (SVP).
ShadowImage can be used in conjunction with Hitachi TrueCopy to maintain multiple copies
of critical data at your primary and/or secondary (remote) sites. This capability provides
maximum flexibility in data backup and duplication activities.
For details on ShadowImage operations, please refer to the ShadowImage User’s Guide for
the storage system (e.g., Hitachi TagmaStore® USP/NSC ShadowImage User’s Guide).
Note: The 7700E data duplication feature/software is called Hitachi Open Multi-RAID
Coupling Feature (HOMRCF).
1.2.3 Hitachi Universal Replicator
Universal Replicator (UR) provides a RAID storage-based hardware solution for disaster
recovery which enables fast and accurate recovery for large databases spanning multiple
volumes. Universal Replicator provides update sequence consistency for user-defined journal
groups (i.e., large databases) as well as protection for write-dependent applications in the
event of a disaster. Universal Replicator enables you to configure and manage highly reliable
data replication systems by using journal volumes to reduce chances of suspension of copy
operations.
Universal Replicator can be used in conjunction with TrueCopy as part of a 3DC Cascading
Configuration and/or a 3DC Multi-Target Configuration. Universal Replicator can also be used
with ShadowImage to maintain multiple copies of critical data at primary and secondary
(remote) sites. These capabilities provide maximum flexibility in data backup and
duplication activities.
Note: Universal Replicator is available on USP V/VM and TagmaStore USP/NSC (not
9900V/9900).
For details on Universal Replicator operations, refer to the Universal Replicator User’s Guide
for the storage system, or contact your Hitachi Data Systems account team.
1.2.4 Hitachi Copy-on-Write Snapshot
Copy-on-Write (COW) Snapshot provides ShadowImage functionality using less capacity of the
disk storage system and less time for processing than ShadowImage. COW Snapshot enables
you to create copy pairs, just like ShadowImage, consisting of primary volumes (P-VOLs) and
secondary volumes (S-VOLs). The COW Snapshot P-VOLs are logical volumes (OPEN-V LDEVs),
but the COW Snapshot S-VOLs are virtual volumes (V-VOLs) with pool data stored in memory.
Copy-on-Write Snapshot is recommended for copying and managing data in a short time with
reduced cost. However, since only some of the P-VOL data is copied by COW Snapshot, the
data stored in the S-VOL is not guaranteed in certain cases (e.g., physical P-VOL failure).
ShadowImage copies the entire P-VOL to the S-VOL, so even if a physical failure occurs, the
P-VOL data can be recovered using the S-VOL. ShadowImage provides higher data integrity
than COW Snapshot, so you should consider the use of ShadowImage when data integrity is
more important than the copy speed or the capacity of the disk storage system.
Note: Copy-on-Write Snapshot is available on USP V/VM and TagmaStore USP/NSC (not
9900V/9900).
For details on Copy-on-Write Snapshot operations, see the Copy-on-Write Snapshot User’s
Guide for the storage system, or contact your Hitachi Data Systems account team.
1.3 Overview of Hitachi Data Protection Functions
The Hitachi data protection features controlled by CCI include:
■ Database Validator (section 1.3.1)
■ Data Retention Utility (section 1.3.2)
1.3.1 Hitachi Database Validator
The Database Validator feature is designed for the Oracle® database platform to prevent
data corruption between the database and the storage system. Database Validator prevents
corrupted data blocks generated in the database-to-storage system infrastructure from being
written onto the storage disk. The combination of networked storage and database
management software has a risk of data corruption while writing data on the storage. This
data corruption rarely occurs; however, once corrupted data is written into storage, it can
be difficult and time-consuming to detect the underlying cause, restore the system, and
recover the database. Database Validator helps prevent corrupted data environments and
minimizes risk and potential costs in backup, restore, and recovery operations. Database
Validator combined with the Oracle9i Database product provides a resilient system that can
operate for 24 hours a day, 365 days a year to provide the uptime required by enterprises
today.
The Hitachi RAID storage systems support parameters for validation checking at the volume
level, and these parameters are set through the command device using the Command
Control Interface (CCI) software. CCI supports commands to set and verify these parameters
for validation checking. Once validation checking is turned on, all write operations to the
specified volume must have valid Oracle checksums. CCI reports a validation check error to
the syslog file each time an error is detected.
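For example, the validation parameters might be set and verified with the raidvchkset and
raidvchkdsp commands. The following is a sketch only: the group name vg01 and the check-type
keyword are assumptions, so consult the command reference for the exact option keywords
supported by your CCI version.

# Enable Oracle validation checking on the volumes of group vg01 (assumed name).
raidvchkset -g vg01 -vt data8
# Display the validation-checking parameters now in effect for the group.
raidvchkdsp -g vg01 -v cflag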
Database Validator requires the CCI software product and a separate license key. Database
Validator is not controlled via the Storage Navigator remote console software.
For details on Database Validator operations, please see the Database Validator Reference
Guide for the storage system (e.g., Hitachi TagmaStore® USP/NSC Database Validator User’s
Guide), or contact your Hitachi Data Systems account team.
1.3.2 Hitachi Data Retention Utility (Open LDEV Guard)
Data Retention Utility (called Open LDEV Guard on 9900V/9900) enables you to prevent
writing to specified volumes by having the RAID storage system guard the volumes. Data
Retention Utility is similar to the Database Validator feature in that it sets a guarding
attribute on the specified LU.
The RAID storage system supports parameters for guarding at the volume level. You can set
and verify these parameters for guarding of open volumes using either the Storage Navigator
software or the Command Control Interface (CCI) software on the host. Once guarding is
enabled, the RAID storage system conceals the target volumes from SCSI commands (e.g.,
SCSI Inquiry, SCSI Read Capacity), prevents reading and writing to the volume, and protects
the volume from being used as a copy volume (i.e., TrueCopy and ShadowImage paircreate
operation fails).
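As a sketch of host-side control, guarding might be enabled and verified with the raidvchkset
and raidvchkdsp commands. The group name vg01 and the attribute keyword below are
assumptions; consult the raidvchkset command reference for the exact guarding keywords.

# Guard the volumes of group vg01 (assumed) against writes (keyword assumed).
raidvchkset -g vg01 -vg wtd
# Display the guarding flags now in effect for the group.
raidvchkdsp -g vg01 -v gflag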
For details on Data Retention Utility operations, please see the Data Retention Utility (or
Open LDEV Guard) User’s Guide for the storage system (e.g., Hitachi TagmaStore® USP/NSC
Data Retention Utility User’s Guide), or contact your Hitachi Data Systems account team.
Chapter 2 Overview of CCI Operations
This chapter provides a high-level description of the operations that you can perform with
Hitachi Command Control Interface:
■ Overview (section 2.1)
■ Overview of CCI ShadowImage Operations (section 2.3)
■ Hitachi TrueCopy/ShadowImage Volumes (section 2.4)
■ Applications of Hitachi TrueCopy/ShadowImage Commands (section 2.5)
■ Overview of Copy-on-Write Snapshot Operations (section 2.6)
■ Overview of CCI Data Protection Operations (section 2.7)
■ CCI Software Structure (section 2.8)
■ Error Monitoring and Configuration Confirmation (section 2.10)
2.1 Overview
CCI allows you to perform Hitachi TrueCopy and ShadowImage operations by issuing
TrueCopy and ShadowImage commands from the UNIX/PC server host to the Hitachi RAID
storage system. Hitachi TrueCopy and ShadowImage operations are nondisruptive and allow
the primary volume of each volume pair to remain online to all hosts for both read and write
operations. Once established, TrueCopy and ShadowImage operations continue unattended
to provide continuous data backup.
This document covers the requirements for using Hitachi TrueCopy and ShadowImage in HA
configurations. UNIX/PC servers in HA configurations normally support disk duplicating
functions to enhance disk reliability (e.g., mirroring provided by the LVM or device driver,
RAID5 or equivalent function provided by the LVM). UNIX/PC servers also feature hot standby
and mutual hot standby functions in case of failures on the server side. However, mutual hot
standby for disaster recovery has not yet been achieved, since it requires the remote
mirroring function.
Hitachi TrueCopy supports the remote mirroring function, linkage function with the failover
switch, and remote backup operation among servers, all of which are required by UNIX/PC
servers in HA configurations for disaster recovery. For detailed information on TrueCopy
operations, please refer to the TrueCopy User and Reference Guide for the storage system.
ShadowImage supports the mirroring function within a storage system. For detailed
information on ShadowImage operations, please refer to the ShadowImage User’s Guide for
the storage system.
2.2 Features of Paired Volumes
Logical volumes that have been handled independently by server machines can be combined
into a pair that is handled uniformly by the Hitachi TrueCopy and/or ShadowImage pairing
function, or separated again. TrueCopy and ShadowImage regard the two volumes to be
combined or separated as a single paired logical volume used by the servers. Paired volumes
can also be handled as groups, by grouping them in units of server software or in units of a
database and its attributes.
[Figure: group oradb consists of the paired logical volumes Oradb1 (Volume A–Volume C) and
Oradb2 (Volume B–Volume D); server A accesses volumes A and B through special files A and B,
while servers B and C access volumes C and D through special files C and D.]
Figure 2.1 Concept of Paired Volumes
Addressing paired logical volumes: The correspondence between the paired logical volumes
and the physical volumes is defined by the user, who describes the intended paired logical
volume names and group names in the configuration definition file of each server. A server
can be defined for the paired logical volumes in units of group name. Each paired logical
volume must belong to a group in order to determine the corresponding server.
Specification of volumes by commands: Volume names specified in TrueCopy commands must
be given as paired logical volume names or group names.
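For illustration, the HORCM_DEV section of a configuration definition file might define the
group shown in Figure 2.1 as follows; the port, target ID, and LU numbers are illustrative
assumptions only.

HORCM_DEV
#dev_group    dev_name    port#    TargetID    LU#    MU#
oradb         oradev1     CL1-A    1           1
oradb         oradev2     CL1-A    1           2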
2.2.1 ShadowImage Duplicated Mirroring
Duplicated mirroring of a single primary volume is possible when the ShadowImage feature is
used. The duplicated mirror volumes of the P-VOL are expressed as virtual volumes using the
mirror descriptors (MU#0-2) in the configuration definition file as shown below.
[Figure: P-VOLs A and B are mirrored three times using the mirror descriptors: group oradb
(MU#0) pairs Volume A–Volume C and Volume B–Volume D; group oradb-1 (MU#1) pairs
Volume A–Volume E and Volume B–Volume F; group oradb-2 (MU#2) pairs Volume A–Volume G
and Volume B–Volume H.]
Figure 2.2 ShadowImage Duplicated Mirrors
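In the configuration definition file, the three mirror groups of Figure 2.2 might be described
by repeating the same P-VOL with different MU numbers, as in the following sketch; the port,
target ID, and LU values are illustrative assumptions.

HORCM_DEV
#dev_group    dev_name     port#    TargetID    LU#    MU#
oradb         oradb1       CL1-A    1           1      0
oradb-1       oradb1-1     CL1-A    1           1      1
oradb-2       oradb2-1     CL1-A    1           1      2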
2.2.2 ShadowImage Cascading Pairs
ShadowImage provides a cascading function for the ShadowImage S-VOL. The cascading
mirrors of the S-VOL are expressed as virtual volumes using the mirror descriptors (MU#1-2)
in the configuration definition file as shown below. The MU#0 of a mirror descriptor is used
for connection of the S-VOL.
MU# 0
Group name: oradb
Group name: oradb1
Volume A
Volume C
Volume C
Volume E
Oradb1
MU#0
Oradb11
MU#0
Volume A
P-VOL
P-VOL
MU# 1
P-VOL
MU# 1
S-VOL
S-VOL
Volume B
Volume D
Volume D
Volume F
Oradb2
MU#0
Oradb21
MU#0
Volume B
P-VOL
P-VOL
MU# 1
P-VOL
MU# 1
S-VOL
S-VOL
Group name: oradb2
MU# 2
Volume C
Volume G
Oradb21
MU#0
P-VOL
MU# 2
S-VOL
Volume D
Volume H
Oradb22
MU#0
P-VOL
MU# 2
S-VOL
Figure 2.3 ShadowImage Cascade Volume Pairs
2.2.2.1 Restrictions for ShadowImage Cascading Volumes
Pair Creation. Pair creation of the S-VOL pair (oradb1) can only be performed after pair
creation of the S/P-VOL pair (oradb). If pair creation of oradb1 is attempted while oradb is
in the SMPL or PSUS state, the paircreate command is rejected with EX_CMDRJE or EX_CMDIOE.
Pair Splitting. Pair splitting of the S-VOL pair (oradb1) can only be performed when the
S/P-VOL pair (oradb) is in the SMPL or PSUS state, because ShadowImage copies
asynchronously. If pair splitting of oradb1 is attempted while oradb is in the COPY or PAIR
state, the pairsplit command is rejected with EX_CMDRJE or EX_CMDIOE.
Pair Restore. Pair restore (resynchronization from the S-VOL (oradb1) to the S/P-VOL) can
only be performed when the S-VOL pair (oradb) and the other P-VOL pair (oradb2) on the
S/P-VOL are in the SMPL state. If pair restore of oradb1 is attempted while oradb or oradb2
is in the COPY, PAIR, or PSUS state, the pairresync command (-restore option) is rejected
with EX_CMDRJE or EX_CMDIOE.
2.2.2.2 Restriction for TrueCopy/ShadowImage Cascading Volumes
Pair restore (resynchronization from the S-VOL (oradb1) to the S/P-VOL) can only be
performed when the TrueCopy volume (oradb) is in the SMPL or PSUS(SSUS) state, and the
other P-VOL pair (oradb2) on the S/P-VOL is in the SMPL or PSUS state. If pairresync of the
S-VOL (oradb1) is attempted while the S/P-VOL (oradb or oradb2) is in any other state, the
pairresync command (-restore option) is rejected with EX_CMDRJE or EX_CMDIOE.
2.2.2.3 Overview of CCI TrueCopy Operations
CCI TrueCopy operates in conjunction with the software on the UNIX/PC servers and the
Hitachi TrueCopy (HORC) functions of the RAID storage systems. The CCI software provides
failover and other functions such as backup commands to allow mutual hot standby in
cooperation with the failover product on the UNIX/PC server (e.g., MC/ServiceGuard,
FirstWatch, HACMP). For the proper maintenance of Hitachi TrueCopy operations, it is
important to find failures in paired volumes, recover the volumes from the failure as soon as
possible, and continue operation in the original system.
Note: For information on the operational requirements for TrueCopy, please refer to the
Hitachi TrueCopy User and Reference Guide for the storage system.
2.2.3 Hitachi TrueCopy Takeover Commands
Figure 2.4 illustrates the server failover system configuration. The failover software’s
Cluster Manager (CM) monitors the server programs; when a server software error or a node
error is detected, the CM of the standby node automatically activates the HA control script
of the corresponding server program. The HA control script usually contains the database
recovery procedures, server program activation procedures, and other procedures. The
takeover commands provided by Hitachi TrueCopy are activated by the HA control script and
execute the control needed for failover of the server.
[Figure: Hosts A and B each run a Cluster Manager (CM), HA control script, server program
(active on host A, standby on host B), and HORCM (CCI); each host reaches its Hitachi RAID
storage system through a command device, and the paired primary/secondary volumes are
split and swapped between the two storage systems during failover.]
Figure 2.4 Server Failover System Configuration
In a high-availability (HA) environment, a package is a group of applications that are scripted
to run on the secondary host in the event of a primary host failure. When the HA software
(e.g., MC/ServiceGuard) is used, the package can be transferred to the standby node on
demand. When a package transfer is performed in an environment in which Hitachi TrueCopy
is used, the volume is switched from primary to secondary as if an error had occurred, even
though data consistency is assured. When returning the package to the current node, it is
necessary to copy the secondary volume data into the primary volume, and this operation can
take as much time as the initial copy operation for the pair. In actual operation, therefore,
no package can be transferred this way when TrueCopy is used. Instead, the secondary
package is switched to the primary package, and vice versa, when the primary volume is
switched to the secondary volume. Therefore, the primary and secondary TrueCopy volumes
should be switched depending on the package state.
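For illustration, an HA control script typically invokes the takeover command for the
package’s volume group. This is a sketch only; the group name oradb and the timeout value
are assumptions.

# Take over the TrueCopy volumes of group oradb as part of package startup.
horctakeover -g oradb -t 300
# The exit code indicates which takeover action (e.g., swap-takeover or
# SVOL-takeover) was executed; see the horctakeover command reference.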
[Figure: package transfer under HA software. When the active package moves to the standby
node, the primary and secondary volumes are swapped so that the active node always uses
the primary volume.]
Figure 2.5 Package Transfer on High Availability (HA) Software
2.2.4 Hitachi TrueCopy Remote Commands
Figure 2.6 illustrates a Hitachi TrueCopy remote configuration. The Hitachi TrueCopy remote
commands link volume backup operations among UNIX servers with the operation
management of the server system. The Hitachi TrueCopy remote pair commands are also
used to copy volumes in the failover configuration of the servers and to recover the volumes
after the takeover.
■ Pair creation command: Creates a new volume pair. Volume pairs can be created in
  units of volume or group.
■ Pair splitting command: Splits a volume pair and allows read and write access to the
  secondary volume.
■ Pair resynchronization command: Resynchronizes a split volume pair based on the
  primary volume. The primary volume remains accessible during resynchronization.
  – Swaps(p) option (TrueCopy only): Swaps the volume from S-VOL (P-VOL) to
    P-VOL (S-VOL) when the S-VOL (P-VOL) side is in the suspended state, and
    resynchronizes the NEW_SVOL based on the NEW_PVOL. As a result of this operation,
    the volume attributes on the local host become those of the NEW_PVOL (S-VOL).
■ Event waiting command: Used to wait for completion of volume pair creation or
  resynchronization and to check the pair status.
■ Pair status display and configuration confirmation command: Displays the pair status
  and configuration of the volume pairs; used for checking the completion of pair creation
  or pair resynchronization.
[Figure: Hosts A and B, each running operation management commands, server software, and
HORCM (CCI), are connected to separate Hitachi RAID storage systems through command
devices; pair generation and resync run from the primary to the secondary volume, and pair
splitting makes the secondary volume available.]
Figure 2.6 Hitachi TrueCopy Remote System Configuration
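A typical remote backup cycle using these commands might look like the following sketch;
the group name oradb and the fence level data are assumptions, not values from this guide.

paircreate -g oradb -vl -f data        # create the remote pairs (local host owns the P-VOLs)
pairevtwait -g oradb -s pair -t 3600   # wait until the initial copy reaches PAIR
pairsplit -g oradb                     # split; the S-VOLs become read/write accessible
pairresync -g oradb                    # later, resynchronize the pairs from the P-VOLs
pairdisplay -g oradb                   # confirm the pair status and configuration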
2.2.5 Hitachi TrueCopy Local Commands
Figure 2.7 illustrates a Hitachi TrueCopy local configuration. The TrueCopy local commands
link volume backup operations among UNIX servers with the operation management of the
server system. The TrueCopy local commands perform the same functions as the remote
commands, but within a single storage system instead of between two storage systems.
[Figure: Hosts A and B share one Hitachi RAID storage system; both run operation
management commands, server software, and HORCM (CCI), and the primary/secondary
volumes are paired, resynchronized, and split through a command device within the same
storage system.]
Figure 2.7 Hitachi TrueCopy Local System Configuration
2.3 Overview of CCI ShadowImage Operations
Figure 2.8 illustrates the ShadowImage configuration. The ShadowImage commands link
volume backup operations among UNIX servers with the operation management of the server
system. For detailed information on the operational requirements for ShadowImage, please
refer to the Hitachi ShadowImage User’s Guide for the storage system.
■ Pair creation command: Creates a new volume pair. Volume pairs can be created in
  units of volume or group.
■ Pair splitting command: Splits a volume pair and allows read and write access to the
  secondary volume.
■ Pair resynchronization command: Resynchronizes a split volume pair based on the
  primary volume. The primary volume remains accessible during resynchronization.
  – Restore option: Resynchronizes a split pair based on the secondary volume (reverse
    resync). The primary volume is not accessible during resync with the restore option.
■ Event waiting command: Used to wait for completion of volume pair creation or
  resynchronization and to check the pair status.
■ Pair status display and configuration confirmation command: Displays the pair status
  and configuration of the volume pairs; used for checking the completion of pair creation
  or pair resynchronization.
[Figure: Hosts A and B share one Hitachi RAID storage system; both run operation
management commands, server software, and HORCM (CCI), and the ShadowImage
primary/secondary volumes are paired, resynchronized, and split through a command device
within the storage system.]
Figure 2.8 ShadowImage System Configuration
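The same command set drives ShadowImage when the commands are issued in ShadowImage
(MRCF) mode, for example via the HORCC_MRCF environment variable. The following sketch
assumes the group name oradb.

HORCC_MRCF=1                # issue commands as ShadowImage (MRCF) rather than TrueCopy
export HORCC_MRCF
paircreate -g oradb -vl     # create the in-system copy pair
pairsplit -g oradb          # split; use the S-VOL for backup while the P-VOL stays online
pairresync -g oradb         # resynchronize the pair from the P-VOL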
2.4 Hitachi TrueCopy/ShadowImage Volumes
Hitachi TrueCopy commands allow you to create volume pairs consisting of one primary
volume (P-VOL) and one secondary volume (S-VOL). The TrueCopy P-VOL and S-VOL can be in
different storage systems. Hitachi TrueCopy provides synchronous and asynchronous copy
modes. TrueCopy Asynchronous can only be used between separate storage systems (not
within one storage system). The maximum number of TrueCopy pairs in one storage system
is 16,383 for TagmaStore USP/NSC, 8191 for 9900V, and 4095 for 9900, provided that one
LUN is dedicated to the command device. For details on TrueCopy volumes and operations,
please refer to the Hitachi TrueCopy User and Reference Guide for the storage system.
ShadowImage commands allow you to create volume pairs consisting of one P-VOL and up to
nine S-VOLs using the ShadowImage cascade function. ShadowImage pairs are contained
within the same storage system and are maintained using asynchronous update copy
operations. The maximum number of ShadowImage pairs in one storage system is 8191 for
TagmaStore USP/NSC, 4095 for 9900V, and 2047 for 9900. For details on ShadowImage
volumes and operations, please refer to the Hitachi ShadowImage User’s Guide for the
storage system.
Each volume pair that you want to create must be registered in the CCI configuration file.
ShadowImage volume pairs must include an MU (mirrored unit) number assigned to the
S-VOL. The MU number indicates that the pair is a ShadowImage pair and not a Hitachi
TrueCopy pair. Once the correspondence between the paired logical volumes has been
defined in the HORCM_DEV section of the configuration file, you can use the configuration
file to group the paired volumes into volume groups that can be managed by the host
operating system’s LVM (logical volume manager).
The host’s LVM allows you to manage the Hitachi TrueCopy/ShadowImage volumes as
individual volumes or by volume group. TrueCopy/ShadowImage commands can specify
individual logical volumes or group names. For LUN Expansion (LUSE) volumes, you must
enter commands for each volume (LDEV) within the expanded LU. If you define volume
groups and you want to issue commands to those volume groups, you must register the
volume groups in the configuration file. For further information on the LVM, refer to the
user documentation for your operating system.
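For example, once a group is registered in the configuration file, commands can address
either the whole group or one paired volume within it; the names below are illustrative
assumptions.

pairdisplay -g oradb               # all paired volumes in group oradb
pairdisplay -g oradb -d oradev1    # only the paired volume named oradev1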
2.4.1 TrueCopy/ShadowImage/Universal Replicator Volume Status
Each TrueCopy pair consists of one P-VOL and one S-VOL, and each ShadowImage pair
consists of one P-VOL and up to nine S-VOLs when the cascade function is used.
Table 2.1 lists and describes the Hitachi TrueCopy and ShadowImage pair status terms. The
P-VOL controls the pair status for the primary and secondary volumes. The major pair
statuses are SMPL, PAIR, PSUS/PSUE, and COPY/RCPY. Read and write requests from the host
are accepted or rejected depending on the pair status of the volume.
The pair status can change when a CCI command is executed. The validity of the specified
operation is checked according to the status of the volume (primary volume).
■ Table 2.2 shows the relationship between pair status and TrueCopy/Universal Replicator
  command acceptance.
■ Table 2.3 shows the relationship between pair status and ShadowImage command
  acceptance.
■ Table 2.4 shows the relationship between pair status and COW Snapshot command
  acceptance.
Table 2.1   Hitachi TrueCopy and ShadowImage Pair Status

SMPL
  TrueCopy: Unpaired volume.
  ShadowImage: Unpaired volume.
  Primary: R/W enabled.  Secondary: R/W enabled.

PAIR
  TrueCopy: Paired volume. Initial copy is complete. Updates are processed synchronously
  or asynchronously.
  ShadowImage: Paired volume. Initial copy is complete. Updates are processed
  asynchronously.
  Primary: R/W enabled.  Secondary: R enabled.

COPY
  TrueCopy: In paired state, but the initial copy, pairsplit, or resync operation is not
  complete. Includes COPY(PD), COPY(SP), and COPY(RS) status.
  ShadowImage: Same as for TrueCopy.
  Primary: R/W enabled.  Secondary: R enabled.

RCPY
  TrueCopy: Not used for Hitachi TrueCopy.
  ShadowImage: In paired state, but the reverse resync operation is not complete. Includes
  COPY(RS-R) status.
  Primary: R enabled.  Secondary: R enabled.

PSUS (split)
  TrueCopy: In paired state, but updates to the S-VOL data are suspended due to a
  user-requested pairsplit. The RAID storage system keeps track of P-VOL and S-VOL
  updates while the pair is split.
  ShadowImage: Same as for TrueCopy.
  Primary: R/W enabled.  Secondary: R/W enabled when using the write-enable pairsplit
  option.

PSUE (error) or PFUS
  TrueCopy: In paired state, but updates to the S-VOL data are suspended due to an error
  condition. (PSUE is PSUS with reason of internal error; PFUS is PSUS with reason of
  sidefile full.)
  ShadowImage: In paired state, but updates to the S-VOL data are suspended due to an
  error condition. When a PSUE pair is resynched, the RAID storage system copies the
  entire P-VOL to the S-VOL (same as the initial copy).
  Primary: R/W enabled if no error occurs in the primary volume.  Secondary: R enabled.

PDUB
  TrueCopy: Used for Hitachi TrueCopy LUSE pairs only. In paired state, but updates to one
  or more LDEVs within the LUSE pair are suspended due to an error condition.
  ShadowImage: Not used for ShadowImage.
  Primary: R/W enabled if no error occurs in the primary volume.  Secondary: R enabled.
■ Accepted = Accepted and executed. When the operation terminates normally, the status
  changes to the numbered status shown.
■ Acceptable = Accepted, but no operation is executed.
■ Rejected = Rejected, and the operation terminates abnormally.
Table 2.2   Pair Status versus TrueCopy and Universal Replicator Commands

            Paircreate                 Pairsplit                                    Pairresync
Status      Copy        Nocopy         -r or -rw    -S option    -P option          Resync
① SMPL      Accepted ②  Accepted ③     Rejected     Acceptable   Rejected           Rejected
② COPY      Acceptable  Acceptable     Accepted ④   Accepted ①   Rejected           Acceptable
③ PAIR      Acceptable  Acceptable     Accepted ④   Accepted ①   Accepted ④         Acceptable
④ PSUS      Acceptable  Rejected       Acceptable   Accepted ①   Rejected           Accepted ② (see Note)
⑤ PSUE      Rejected    Rejected       Rejected     Accepted ①   Rejected           Accepted ② (see Note)
⑥ PDUB      Rejected    Rejected       Rejected     Accepted ①   Rejected           Accepted ② (see Note)
Pairsplit of a Hitachi TrueCopy Asynchronous volume returns after verifying a state
transition that waits until the delta data has been synchronized from the P-VOL to the S-VOL.
Note: In the SSWS state after an SVOL-SSUS-takeover, the pairresync command (from P-VOL
to S-VOL) is rejected because the delta data for the S-VOL becomes dominant; this state
expects the -swaps(p) option of pairresync to be used. If the pairresync command (from
P-VOL to S-VOL) is rejected, confirm this special state using the -fc option of the pairdisplay
command.
Table 2.3   Pair Status versus ShadowImage Commands

              Paircreate                       Pairsplit                                          Pairresync
Status        No -split   -split               -E option    -C option            -S option        Resync
① SMPL        Accepted ②  Accepted [2] ②→④     Rejected     Rejected             Acceptable       Rejected
② COPY/RCPY   Acceptable  Accepted [1] ②→④     Accepted ⑤   Accepted [1] ②→④     Accepted ①       Acceptable
③ PAIR        Acceptable  Accepted [2] ②→④     Accepted ⑤   Accepted [2] ②→④     Accepted ①       Acceptable
④ PSUS        Rejected    Rejected             Accepted ⑤   Acceptable           Accepted ①       Accepted ②
⑤ PSUE        Rejected    Rejected             Acceptable   Acceptable           Accepted ①       Accepted ②
Note: If no writes are issued to the P-VOL in the PAIR state, the S-VOL data is guaranteed to
be identical to the P-VOL data. Therefore, to use the S-VOL in the SMPL state, stop writes to
the P-VOL, create the paired volume, and then split the paired volume after confirming that
it has reached the PAIR status. In the PSUE state, ShadowImage does not manage differential
data for the P-VOL or S-VOL. Therefore, a pairresync issued to a pair in the PSUE state
performs a full copy, but the copy progress rate returned by the -fc option of the pairdisplay
command indicates “0%”.
Note 1: The state change (②→④) applies only to a COPY state that was entered without
specifying -split for the paircreate command.
Note 2: The (②→④) state change is displayed as PVOL_PSUS & SVOL_COPY (see the display
example below), and reading and writing are enabled for the S-VOL in the SVOL_COPY state.
# pairsplit -g oradb
# pairdisplay -g oradb -fc
Group PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#.P/S, Status,  %, P-LDEV# M
oradb oradev3(L)   (CL2-N , 3, 4-0)  8071  28..P-VOL  PSUS,  100  29      W
oradb oradev3(R)   (CL2-N , 3, 5-0)  8071  29..S-VOL  COPY,   97  28      -
PVOL_PSUS & SVOL_COPY is a not-yet-reflected PSUS state in which data is still being copied
from the P-VOL to the S-VOL. This state has the following specific behavior:
– If you attempt to read non-reflected data on the S-VOL in the PVOL_PSUS & SVOL_COPY
  state, HOMRCF copies the non-reflected data from the P-VOL to the S-VOL and returns the
  correct data after the copy. This degrades read performance on the S-VOL (1/6 to 1/15
  in IOPS).
– If you attempt to write non-reflected data on the S-VOL in the PVOL_PSUS & SVOL_COPY
  state, HOMRCF copies the non-reflected data from the P-VOL to the S-VOL, and the written
  data is managed as delta data for the S-VOL after the copy. This degrades write
  performance on the S-VOL (1/6 to 1/8 in IOPS).
– If you attempt to write data on the P-VOL that has not yet been reflected to the S-VOL,
  HOMRCF copies the non-reflected data from the P-VOL to the S-VOL, and the written data
  is managed as delta data for the P-VOL. This degrades write performance on the P-VOL
  (1/6 to 1/8 in IOPS).
– The state changes for pairsplit are as follows (WD = Write Disable, WE = Write Enable).
  If the P-VOL has non-reflected data in the PAIR state:

      Behavior of OLD pairsplit at T0        Behavior of First pairsplit at T0
      T0: PVOL_PAIR ↔ SVOL_PAIR(WD)          PVOL_PAIR ↔ SVOL_PAIR(WD)
      T1: PVOL_COPY ↔ SVOL_COPY(WD)          PVOL_PSUS ↔ SVOL_COPY(WE)
      T2: PVOL_PSUS ↔ SVOL_SSUS(WE)          PVOL_PSUS ↔ SVOL_SSUS(WE)

  If the P-VOL has reflected all data to the S-VOL in the PAIR state:

      Behavior of OLD pairsplit at T0        Behavior of First pairsplit at T0
      T0: PVOL_PAIR ↔ SVOL_PAIR(WD)          PVOL_PAIR ↔ SVOL_PAIR(WD)
      T1: PVOL_PSUS ↔ SVOL_SSUS(WE)          PVOL_PSUS ↔ SVOL_SSUS(WE)

– The state changes for paircreate -split are:

      Behavior of OLD paircreate -split at T0    Behavior of First paircreate -split at T0
      T0: SMPL      ↔ SMPL                       SMPL      ↔ SMPL
      T1: PVOL_COPY ↔ SVOL_COPY(WD)              PVOL_PSUS ↔ SVOL_COPY(WE)
      T2: PVOL_PSUS ↔ SVOL_SSUS(WE)              PVOL_PSUS ↔ SVOL_SSUS(WE)

– If you attempt “pairevtwait -s psus” in the PVOL_PSUS & SVOL_COPY state, pairevtwait
  returns immediately, even if the S-VOL is still in the SVOL_COPY state, because the P-VOL
  is already in the PVOL_PSUS state. If you want to wait for the SVOL_SSUS state, you must
  confirm that the S-VOL status becomes SVOL_PSUS via the return code of the
  “pairvolchk -ss” command on the S-VOL side or the “pairvolchk -ss -c” command on the
  P-VOL side. Alternatively, you can use “pairevtwait -ss ssus” on both the P-VOL and S-VOL
  sides, or “pairevtwait -ss ssus -l” locally on the S-VOL side.
– If you attempt “pairresync -restore” or “pairsplit -S” in the PVOL_PSUS & SVOL_COPY
  state, HOMRCF rejects the command because it cannot be performed. In this case, you
  need to wait until the S-VOL state becomes SVOL_SSUS (a sketch of this wait follows).
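As a sketch of the waiting technique described above (the group name oradb is an
assumption carried over from the display example earlier in this section):

pairsplit -g oradb
# Waiting with "-s psus" would return as soon as the P-VOL reaches PVOL_PSUS;
# "-ss ssus" waits until the S-VOL side itself reaches SSUS.
pairevtwait -g oradb -ss ssus -t 3600
pairsplit -g oradb -S    # rejected while SVOL_COPY persists; safe after the wait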
Table 2.4   Pair Status versus Snapshot Commands

               Paircreate                Pairsplit                                Pairresync
Status         No -split    -split       -E option   -C option    -S option       Resync
① SMPL         Accepted ②   Rejected     Rejected    Rejected     Acceptable      Rejected
② COPY/RCPY    Acceptable   Acceptable   Rejected    Rejected     Rejected        Acceptable
③ PAIR         Acceptable   Accepted* ④  Rejected    Accepted ④   Accepted ①      Acceptable
④ PSUS (PFUS)  Acceptable   Acceptable   Rejected    Rejected     Accepted ①      Accepted* ②
⑤ PSUE         Rejected     Rejected     Rejected    Rejected     Accepted ①      Accepted* ②
Accepted* = The command is accepted and issued; whether the command is executed or not
depends on the microcode version of the RAID storage system.
Notes:
■ Pairsplit (“simplex -S”) of a Snapshot volume returns without verification of the state
  transition that waits until the SMPL state. In the SMPL state, the volume that was the
  S-VOL becomes R/W disabled, and its data is discarded.
■ In the PSUE state, Snapshot does not manage differential data between the primary
  volume and the secondary volume.
2.4.2 TrueCopy Async, TrueCopy Sync CTG, and Universal Replicator Volumes
Hitachi TrueCopy Asynchronous/Universal Replicator provides paired volumes which utilize
asynchronous transfer to ensure the sequence of writing data between the primary volume
and secondary volume. The sequence of writing data between the primary and secondary
volumes is guaranteed within each consistency (CT) group (see Figure 2.9).
Restrictions:
■ Group definition of TrueCopy Async/Universal Replicator/TrueCopy Sync CTG volumes:
  All volumes in a group must be contained within the same storage system. If two or more
  CCI groups include the same CT group (CTGID), then a pair operation with the group
  specification is handled for the CT group in its entirety.
■ Registration of CTGID number and limitations: CCI registers the CTGID to the RAID
  storage system automatically when paired volumes are created by the paircreate
  command, and the groups in the configuration definition files are mapped to the CTGID.
  The maximum number of CT groups is 256 for USP V/VM and USP/NSC (CTGID 0 to 255),
  128 for 9900V (CTGID 0 to 127), and 64 for 9900 (CTGID 0 to 63) (16 for 7700E). A
  TrueCopy Async/Universal Replicator pair command terminates with EX_ENOCTG when
  the maximum number of CT groups is exceeded.
■ Relationship between CTGID and journal group ID: CT group numbers 0-127 are used for
  TrueCopy Asynchronous, TrueCopy Sync CTG, and Universal Replicator. The remaining CT
  group numbers, 128-255, are used only for Universal Replicator and are mapped to the
  journal groups.
Table 2.5   Assignment of CT Groups

CTG          Assignment
0 - 127      TrueCopy Async / TrueCopy Sync CTG (CTG 0-127)
             Universal Replicator (JNG 0-127)
128 - 255    Universal Replicator (JNG 128-255)
■   At-time Split for TrueCopy Sync CTG: The operation for ensuring data consistency is supported only by the following options:
    – pairsplit -g <group> ... [-r]
    – pairsplit -g <group> ... -rw
TrueCopy Asynchronous/Universal Replicator volumes have the following characteristics:
■   PAIR state: A Hitachi TrueCopy Async pair changes to the PAIR status as soon as all pending recordsets have been placed in the queue at the primary volume, without waiting for the updates to complete at the secondary volume.
■   Pair splitting: When a TrueCopy Async pair is split or deleted, all pending recordsets at the primary volume are sent to the secondary volume, and then the pair status changes to PSUS or SMPL. For pairsplit only, updates to the primary volume which occur during and after the pairsplit operation are marked on the bitmap of the primary volume.
■   Pair resynchronization: The pairresync command resynchronizes the secondary volume based on the primary volume. This resynchronization does not guarantee sequenced data transfer.
■   Error suspending: Pending recordsets that have not yet been sent to the secondary volume are marked on the bitmap of the primary volume and deleted from the queue, and the pair status then changes to PSUE.
■   Group operations: HORCM registers the CTGID to the storage system automatically when paired volumes are created by the paircreate command, and the groups in the configuration definition file are mapped to CTGIDs. If more than one group defined in the configuration definition file is assigned to the same CT group ID, then pair operations with the group specification apply to the entire CT group.
[Figure: An HA software package with processes A and B issues ordered writes (1)-(5) to primary volumes in a CT group; recordsets are transferred asynchronously through FIFO queues (sidefiles) to the secondary volumes while both sides are in PAIR state. On PSUS/PSUE, updates are managed by bitmaps on the primary and secondary volumes and merged at resynchronization. Note: write() denotes synchronous writing or a DB commit().]
Figure 2.9 Hitachi TrueCopy Asynchronous Consistency Groups
2.4.2.1 Sidefile Cache for Hitachi TrueCopy Asynchronous
The first-in-first-out (FIFO) queue of each CT group is placed in an area of cache called the
sidefile. The sidefile is used for transferring Hitachi TrueCopy Async recordsets to the RCU.
The sidefile is not a fixed area in cache but has variable capacity for write I/Os for the
primary volume. If the host write I/O rate is high and the MCU cannot transfer the Hitachi
TrueCopy Async recordsets to the RCU fast enough, then the sidefile capacity expands
gradually. The sidefile has a threshold that controls the quantity of host-side write I/O data transfer:
host write I/Os are delayed in response when the sidefile exceeds the defined quantity limit on cache
in the storage system (see Figure 2.10).
[Figure: The sidefile area occupies 30% to 70% of cache (total cache minus FlashAccess); the high water mark is at 30% of cache. For TagmaStore USP the threshold range is 0 to 70% of cache and the default is 40%. Above the high water mark, write responses are delayed; above the sidefile threshold, writes wait until the sidefile falls back under the threshold.]
Figure 2.10 Sidefile Quantity Limit
Sidefile area: Sidefile area = 30% to 70% of cache as set on Storage Navigator (or SVP)
(default sidefile = 40% for TagmaStore USP/NSC, 50% for 9900V/9900).
Write I/O control at the high-water mark (HWM): When the quantity of data in the sidefile reaches
30% of cache, the Hitachi TrueCopy Async pair status is HWM within the PAIR state, and host
write I/Os receive a delayed response in the range of 0.5 seconds to 4 seconds. The following is an
arithmetic expression of the HWM as a percentage of 100% of the sidefile space:

    HWM(%) = High water mark(%) / Sidefile threshold (30 to 70) * 100
Write I/O control at sidefile threshold: When the quantity of data in sidefile reaches the
defined sidefile area, host write I/Os are delayed until there is enough sidefile space to
store the next new write data. The copy pending timeout group option can be set between 1 second
and 255 seconds (600 seconds for Universal Replicator). The timeout value is defined on
Storage Navigator (or SVP) and specifies the maximum delay between the M-VOL update and
the corresponding R-VOL update. The default timeout value is 90 seconds, 60 seconds for
Universal Replicator. If the timeout occurs during this waiting state, the pair status changes
from PAIR to PSUS (sidefile full), and host write I/Os continue with updates being managed
by the cylinder bitmap. Important: The copy pending timeout value should be less than the
I/O timeout value of the host system.
2.4.2.2 Hitachi TrueCopy Asynchronous Transition States
Hitachi TrueCopy Async volumes have special states for sidefile control during status transitions.
The suspending and deleting states are temporary internal states within the RAID storage
system. CCI cannot detect these transition states, because the storage system reports the previous
state while they are in effect. These states are therefore concealed inside the
pairsplit command. After the pairsplit command is accepted, host write I/Os for the P-VOL
are managed by the cylinder bitmap (normal), non-transmitted data remaining in the
P-VOL’s FIFO queue is transferred to the S-VOL’s FIFO queue, and the pair status is then set
to PSUS [SMPL] state when all data in the P-VOL’s FIFO queue has been transmitted.
PFUL. If the quantity of data in sidefile cache exceeds 30% of cache storage, the internal
status of the RAID storage system is PFUL, and host write I/Os receive delayed response in
the range of 0.5 seconds to 4 seconds.
PFUS. If the quantity of data in the sidefile cache exceeds the user-defined sidefile area (30%-70%),
then host write I/Os must wait until there is enough sidefile space to store the next new write
data. If the copy pending timeout occurs during this waiting state, then the pair status changes
from PAIR to PFUS, host write I/Os are accepted again, and the write data is managed by the
bitmap.
The CCI software can detect and report the PFUL and PFUS states as follows:
■   As a return code of the pairvolchk command
■   As the status code displayed in the code item by the pairmon command
■   As the paired status displayed in the status item using the -fc option of the pairdisplay command
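For example, these states could be observed as follows (a sketch; the group name "oradb" is a placeholder):

    # Show the pair status, including PFUL/PFUS, via the -fc option
    pairdisplay -g oradb -fc
    # Obtain the status as a return code, e.g., for use in scripts
    pairvolchk -g oradb -ss
    echo $?    # the exit code encodes the status (PAIR, PFUL, PFUS, ...)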
Table 2.6   State Table for Hitachi TrueCopy Sync vs. TrueCopy Async

CCI     Storage System                                              Transfer Data   Writing Data on     Response
State   Internal State   Description                                via ESCON       TC Async Vol
SMPL    SMPL             Same as TC Sync                            None            Normal              Usual
COPY    COPY             Same as TC Sync                            Via Sidefile    Sidefile & bitmap   Usual [Note 1]
PAIR    Deleting         SMPL from COPY by using [pairsplit -S]     Via Sidefile    Sidefile            Usual
PAIR    Suspending       PSUS from COPY by using [pairsplit]        Via Sidefile    Sidefile            Usual
PAIR    PAIR             Asynchronized; less than HWM               Via Sidefile    Sidefile            Usual
PAIR    PAIR             Synchronized; sidefile in use              Via Sidefile    Sidefile            Usual
PAIR    PFUL             HWM to threshold                           Via Sidefile    Sidefile            Delayed
PAIR    PFUL             Over threshold                             Via Sidefile    Sidefile            Wait until under threshold
PSUS    Deleting         SMPL from PAIR by using [pairsplit -S]     N/A             Sidefile            Usual
PSUS    Suspending       PSUS from PAIR by using [pairsplit]        N/A             Sidefile            Usual
PSUS    PSUS             Same as TC Sync                            None            Via Bitmap          Usual
PFUS    PFUS             Timeout of over threshold                  None            Via Bitmap          Usual
PSUE    PSUE             Same as TC Sync (link down, etc.)          None            Via Bitmap          Usual
PDUB    PDUB             Same as TC Sync                            None            Via Bitmap          Usual

Note 1: If the host issues more write I/Os in the COPY state, the host write I/Os are delayed until there is enough space in the sidefile.
Explanation of terms in Table 2.6:
■   Bitmap: Host writes are noted (without ordering) in a delta data bitmap.
■   Normal: Host-side write data is not managed by bitmap or sidefile.
■   Usual: The host-side write response is not delayed.
■   HWM (High Water Mark): The sidefile quantity is over 30% of total cache storage (minus Cache Residency Manager usage).
(1) Suspending and [Deleting] status: These are temporary states internal to the storage system, and CCI cannot detect them definitively because the storage system reports the previous state while they are in effect. Therefore, these states are concealed inside the pairsplit command. After a pairsplit command has been accepted, host-side write I/Os for the primary volume are managed by the bitmap [normal]: non-transmitted data remaining in the FIFO queue of the primary volume is transferred to the FIFO queue of the secondary volume, and the pair status is set to the "PSUS" ["SMPL"] state when the data transfer is complete.
(2) PFUL status: If the sidefile quantity is over 30% of cache storage, then the internal status of the storage system is "PFUL", and host-side write I/Os receive a delayed response in the range of 0.5 seconds (minimum) to 4 seconds (maximum).
2.4.2.3 TrueCopy Async/Universal Replicator ERROR State
In the case of an ESCON or fibre-channel (FC) failure, the S-VOL FIFO queue is missing a data
block that was transferred from the P-VOL FIFO queue. The RCU waits to store the next
sequenced data block in the S-VOL FIFO queue until the TrueCopy Async copy pending
timeout occurs (defined using Hitachi TrueCopy remote console software). If the timeout
occurs during this waiting state, the pair status is changed from PAIR to PSUE, and non-
sequenced data blocks are managed by the S-VOL bitmap. The missing data block can be
recovered using the pairresync command, which merges the S-VOL bitmap with the P-VOL
bitmap. Figure 2.11 shows this situation on the secondary side.
[Figure: On the RCU (secondary) side, recordsets arrive by asynchronous transfer into the FIFO (sidefile). After an ESCON®/fibre failure, a data block (e.g., #4) is missing; when the copy pending timeout expires, the pair status changes from PAIR to PSUE, and the non-sequenced data blocks (e.g., #3, #5) are managed by the bitmap of the secondary volume.]
Figure 2.11 Hitachi TrueCopy Async Suspension Condition
2.4.3 TrueCopy Sync/Async and Universal Replicator Fence-Level Settings
Hitachi TrueCopy volume pairs are assigned a fence level for write I/Os to ensure the
mirroring consistency of critical volumes. Accordingly, when the secondary volume takes over from the primary volume, the takeover action is determined by the relationship between the Hitachi TrueCopy pair status and the fence level (see Table 2.7).
Table 2.7   Relationship between Hitachi TrueCopy Pair Status and Fence Level

                                      Fence Level and Write Response
Pair Status of Volume         [1] Data           [2] Status         [3] Never          [4] Async
Primary: P / Secondary: P
  Write response              OK                 OK                 OK                 OK
  Consistency                 Mirroring          Mirroring          Mirroring          Data
                              consistency        consistency        consistency        consistency
                              assured            assured            assured            assured
  S-VOL data                  Valid              Valid              Valid              Valid
Primary: E / Secondary: E
  Write response              ERROR              OK                 OK                 OK
  Consistency                 Mirroring          Mirroring          Mirroring          Data
                              consistency        consistency not    consistency not    consistency
                              assured            assured            assured            assured
  S-VOL data                  Valid              Not Valid          Not Valid          Not Valid
Primary: E / Secondary: P
  Write response              ERROR              ERROR              OK                 OK
  Consistency                 Mirroring          Mirroring          Mirroring          Data
                              consistency        consistency        consistency not    consistency
                              assured            assured            assured            assured
  S-VOL data                  Valid              Valid              Not Valid          Not Valid

(P = PAIR status; E = error/suspended status.)
Mirror consistency = identity and sequence of the data are assured, via error notification at I/O completion.
Data consistency = the sequence of data is assured, in I/O order based on the host.
[1] When fence level is data: Mirroring consistency is assured, since a write error is returned
if mirror consistency with the remote SVOL is lost. The secondary volume can continue
operation, regardless of the status. Note: A PVOL write that discovers a link down situation
will, in addition to returning an error to the host, likely be recorded on [only] the PVOL side.
[2] When fence level is status: If there is a mirror consistency problem (i.e., PSUE) and it is possible to set the S-VOL to PSUE, the P-VOL write completes OK. If the S-VOL cannot be set to PSUE for any reason, the P-VOL write completes with an error. The mirror consistency of the S-VOL depends on its status:
■   PSUE: The secondary volume is dubious.
■   PAIR: The secondary volume can continue operation.
[3] When fence level is never: Writing to the P-VOL remains enabled even when mirror consistency with the S-VOL is lost, regardless of whether the secondary volume status was updated. Thus, the secondary volume can have these states:
■   PSUE: The secondary volume is dubious.
■   PAIR: The secondary volume is also dubious, since it can continue operation while its consistency is unconfirmed. The P-VOL status must be checked to confirm the mirroring consistency.
[4] When fence level is async: TrueCopy Async/Universal Replicator uses asynchronous transfers to ensure the sequence of write data between the P-VOL and S-VOL. Writing to the P-VOL is enabled, regardless of whether the S-VOL status is updated. Thus the mirror consistency of the secondary volume is dubious (similar to the "never" fence):
■   PSUE: The S-VOL mirroring consistency is not assured, but the PSUE suspended state preserves the sequence of data for the CT group, so data consistency is also assured during the PSUE state. In the PSUE state, P-VOL writes still complete and are also noted in a bitmap for future transfer. Because the bitmap is used in the suspend state, data consistency is not assured during a copy-state resync.
■   PAIR: If the P-VOL and S-VOL are both in the PAIR state, mirror consistency is not assured (the S-VOL may be behind) but data consistency is assured (what has reached the S-VOL is in the proper order).
2.4.3.1 How to Set the Fence Level
Figure 2.12 shows the relationship between redo log files (journal) and data files. If the S-VOL takes over while its redo log file is inconsistent (for example, after a failure has occurred), the secondary host leaves the data (V) unprocessed in the roll-back processing and cannot recover completely. Therefore, the fence level of a redo log file must be defined as data. Once the fence level is set to data, the P-VOL returns an error if the data may possibly be inconsistent when a write request is issued by the host. Since the write to the data file is not executed after a write error on the redo log file, the log file stays consistent with the data file. However, when the fence level is set to data, a write I/O error occurs even when operation is suspended due to an error in the S-VOL. Accordingly, the duplication becomes meaningless when the S-VOL takes over. Thus, applications using paired volumes with the data fence level should be able to handle write I/O errors properly. For example, Oracle creates multiple redo log files by itself (three by default). In such a case, where disk errors are permissible because multiple files are created, the fence level can be set to data.
Since most UNIX file systems (excluding JFS and VxFS) have no journal files, the fence level
should be defined as Never. When a takeover by the S-VOL occurs, fsck is executed on the
volume and the file system is cleaned up, even if the S-VOL is undefined at the secondary
host. The data that will be lost depends on how much differential data is contained in the
P-VOL when the S-VOL is suspended. During operation, error recovery should be performed
when the suspended status (PSUE or PDUB) is detected (when one error occurs).
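For example, pairs could be created with fence levels matching their contents (a sketch; the group names are placeholders):

    # Redo log volumes: return a write error if mirror consistency is lost
    paircreate -g oralog -f data -vl
    # File system volumes without journaling: keep the P-VOL writable
    paircreate -g orafs -f never -vl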
[Figure: In paired status, the primary volume holds Data (V) and Log (V), while the secondary volume holds Data (V) and Log (IV).]
Figure 2.12 Relation between Logs and Data in Paired Status
2.5 Applications of Hitachi TrueCopy/ShadowImage Commands
This section provides examples of tasks that can be performed using the Hitachi TrueCopy and ShadowImage commands:
■   Backing up the secondary volume in paired status (TrueCopy or ShadowImage)
■   Restoring the secondary volume to the primary volume in split status (TrueCopy or ShadowImage)
■   Swapping paired volumes for duplex operation (TrueCopy only)
■   Restoring the secondary volume for duplex operation (TrueCopy only)
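The backup flow shown in Figure 2.13 could be scripted on the backup server roughly as follows (a sketch; the group name, device, and mount point are placeholders, and the database freeze/flush/unfreeze steps run on the OLTP server):

    # Split the pair after the database has been frozen and flushed
    pairsplit -g oradb
    # Wait until the S-VOL becomes readable, then mount it read-only
    pairevtwait -g oradb -s psus -t 3600
    mount -r /dev/dsk/c2t1d0 /backup
    # ... run the backup, then unmount and resynchronize
    umount /backup
    pairresync -g oradb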
[Figure: OLTP (DB) server and backup server, pair in PAIR status with R/W on the primary — (1) backup request and database freezing; (2) database flushing while the backup server waits for the PSUS event; (3) pairsplit (read enabled on the S-VOL) and database unfreezing; (4) database mount -r and backup execution on the backup server; (5) database unmount after backup completion; (6) pair resynchronization (COPY), after which the servers wait for the PAIR event.]
Figure 2.13 Backing Up S-VOL in Paired Status Using Hitachi TrueCopy
[Figure: The same backup flow using ShadowImage — after the pairsplit request, the pair passes through the COPY status while the differential data is copied asynchronously to the secondary volume; when the copy completes, the status changes to PSUS, the backup server mounts the S-VOL read-only (mount -r) and executes the backup, and the pair is then re-synchronized. The primary volume remains R/W-enabled throughout.]
Figure 2.14 Backing Up S-VOL in Paired Status Using ShadowImage
Note: When you issue the pairsplit command to a ShadowImage paired volume, the pair
status changes to COPY, and the differential data due to asynchronous copy is copied to the
secondary volume. When this copy is finished, the pair status changes to PSUS. The primary
volume remains write-enabled throughout the pairsplit operation (COPY and PSUS status).
[Figure: OLTP (DB) server and DSS server, restoring the S-VOL to the P-VOL with Hitachi TrueCopy — (1) pairsplit to simplex (SMPL) and unmount on the DSS server; (2) pair generation in the reverse (remote) direction; (3) event waiting (PAIR); (4) restoration request with database freezing; (5) database flushing and pairsplit (read); (6) database unfreezing and mount -r on the DSS server, leaving both volumes in PSUS with R/W access.]
Figure 2.15 Restoring S-VOL to P-VOL in Split Status Using Hitachi TrueCopy
[Figure: The same restore flow using ShadowImage — pairsplit to simplex (SMPL) with swapping of the paired volumes, unmount, pair generation (remote), event waiting (PAIR), restoration request with database freezing and flushing, pairsplit during which the differential data is copied (COPY status), database unfreezing, and mount -r once the status reaches PSUS.]
Figure 2.16 Restoring S-VOL to P-VOL in Split Status Using ShadowImage
Note: When a swap of the primary/secondary is performed, only one paired volume is
possible.
[Figure: Swapping paired volumes between Server A and Server B — (1) R/W stop and server swapping instruction with pairsplit to simplex (SMPL); (2) pair generation (No Copy, local) in the reverse direction; (3) event waiting (PAIR); the pair is then split from the swapped duplex state with pairsplit (R/W), giving both servers read/write access in PSUS status.]
Figure 2.17 Swapping Paired Volume for Duplex Operation — Hitachi TrueCopy Only
[Figure: Restoring the S-VOL for duplex operation between Server A and Server B — (1) DB shutdown, completion notification, and pairsplit to simplex (SMPL); (2) pair generation (remote) toward the former primary; (3) event waiting (PAIR); (4) pairsplit to simplex again; (5) pair generation (No Copy) to re-establish duplex operation in the original direction.]
Figure 2.18 Restoring S-VOL for Duplex Operation (Hitachi TrueCopy Only)
2.6 Overview of Copy-on-Write Snapshot Operations
Copy-on-Write Snapshot normally creates virtual volumes for copy-on-write without specifying LUNs as S-VOLs. However, to use a SnapShot volume via the host, it is necessary to map the SnapShot S-VOL to a LUN. Therefore, CCI provides a combined command so that the user or application can use the same CCI commands, maintaining compatibility with ShadowImage.
SnapShot uses two techniques: V-VOL mapping and copy-on-write snapshots. SnapShot data is kept in pooled volumes called the SnapShot pool, and the SnapShot pool is specified by its pool ID when a snapshot is made. SnapShot and volume mapping are illustrated in Figure 2.19.
[Figure: A P-VOL (LDEV 10) has an S-VOL (10-0) and virtual volumes VVOL 10-1 and VVOL 10-2 backed by the SnapShot pool; V-VOLs mapped to LUNs on ports CL1-B and CL1-C (LDEV 20 and LDEV 22) report "OPEN-0V" to a standard SCSI Inquiry.]
Figure 2.19 Copy-on-Write Snapshot and Volume Mapping
2.6.1 Creating SnapShot
The CCI command for creating a COW SnapShot pair is the same as for ShadowImage. The
RAID storage system determines whether the pair is a ShadowImage or SnapShot pair by the
LDEV attribute of the S-VOL. A SnapShot pair is generated in the following two cases.
■   When a V-VOL (of the type called OPEN-0V) that is not mapped as the S-VOL of a SnapShot is specified as the S-VOL.
■   When no S-VOL is specified.
The V-VOL has the following characteristics:
■   It is displayed as "OPEN-0V" so that a V-VOL can be identified easily via SCSI Inquiry or CCI.
■   A V-VOL that is not mapped as the S-VOL of a SnapShot replies to SCSI Inquiry, but reading and/or writing is not allowed. The LDEV replies with its capacity setting as an LU to SCSI Read Capacity.
■   A V-VOL that has become the S-VOL of a SnapShot replies to SCSI Inquiry, and reading and/or writing is allowed.
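For example, a SnapShot pair could be created with the same ShadowImage-style command environment (a sketch; the group name "snapdb" is a placeholder, and the S-VOL defined for the group is assumed to be an OPEN-0V V-VOL):

    # Select the ShadowImage/SnapShot (HOMRCF) command environment
    HORCC_MRCF=1
    export HORCC_MRCF
    # The storage system chooses SnapShot because of the S-VOL's LDEV attribute
    paircreate -g snapdb -vl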
2.6.2 SnapShot Volume Specifications
■   Allowable type of paired volume: The supported volume type is OPEN-V only for the P-VOL, and OPEN-0V for the S-VOL.
■   Number of volumes (SnapShots) that can be paired: This depends on the P-VOL capacity, the SnapShot pool capacity, and the shared memory capacity of the RAID storage system.
■   Duplicated writing mode: Copy on write.
■   Number of mirror volumes: Up to 64 secondary volumes can be defined for each P-VOL.
2.6.3 SnapShot Volume Characteristics
Each paired volume consists of a primary volume (P-VOL) and a secondary volume (S-VOL), and each volume has a status that controls its pairing state. The P-VOL controls the pairing state, which is reflected in the status of the S-VOL. The major pairing statuses are "SMPL", "PAIR", "PSUS", "COPY", and "RCPY". The status changes when a CCI command is issued. A read or write request from the host is allowed or rejected according to the status (see Table 2.8).
[Figure: Read/write access to the primary and secondary volumes, with copy-on-write from the primary volume to the secondary volume and restore copy from the secondary volume back to the primary volume.]
Table 2.8   SnapShot Pairing Status

Status          Pairing Status                                              Primary          Secondary
SMPL            Unpaired (SnapShot) volume                                  R/W enabled      R/W disable (Note 1)
PAIR (PFUL)     The snapshot-available state; the resource is allocated.    R/W enabled      R/W disable
COPY            The preparing state, which allocates the resource for       R/W enabled      R/W disable
                the snapshot.
RCPY            The state of copying from the snapshot to the primary       R/W disable      R/W disable
                volume by using the restore option.
PSUS (PFUS)     The differences in the updated data of the primary and      R/W enabled      R/W enabled
                secondary volumes are controlled with copy on write.                         (Note 2)
PSUE (Error)    "PSUS" status owing to an internal failure. The             R/W enabled      R/W disable
                differences in the updated data for the snapshot volume
                are not controlled.

Note 1: A V-VOL that is not mapped as the S-VOL of a SnapShot replies to SCSI Inquiry, but reading and/or writing is not allowed.
Note 2: Reading and writing are enabled as long as no failure occurs in the primary volume.
2.7 Overview of CCI Data Protection Operations
User data files normally reach a disk through several software layers, such as the file system, LVM, disk driver, SCSI protocol driver, bus adapter, and SAN switching fabric. Data corruption can be caused by bugs in these software layers or by human error. The purpose of Data Protection is to prevent writing to volumes by having the RAID storage system guard the volumes.
The CCI Data Protection functions include:
■   Database Validator (sections 2.7.1 and 2.7.2). For detailed information about Database Validator, please refer to the Database Validator Reference Guide for the storage system.
■   Data Retention Utility (DRU) (called "Open LDEV Guard" on 9900V/9900) (sections 2.7.3 and 2.7.4). For detailed information, please refer to the Data Retention Utility User's Guide for USP V/VM and USP/NSC (Open LDEV Guard User's Guide for Lightning 9900V and 9900).
2.7.1 Database Validator
The purpose of Database Validator (9900V and later) is to prevent data corruption by validating Oracle data before each Oracle data block is written to disk.
■   Data block corruption: Oracle data can be corrupted by an intervening software layer or hardware component. The RAID storage system can check the validity of the data block before the Oracle data block is written to disk.
■   Data block address corruption: The OS (file system, LVM, disk driver) may write blocks to the wrong location. The RAID storage system can check the validity of the data block address to verify that the Oracle data block is written to the correct location on disk.
■   Protection of Oracle volumes: Oracle data files might be overwritten by a non-Oracle application or by human operation using a command. The RAID storage system can protect volumes storing Oracle files to prevent the volumes from being modified by another application or by human error.
2.7.2 Restrictions on Database Validator
■   Oracle tablespace location
    – File system-based Oracle files are not supported by Database Validator. All Oracle tablespace files must be placed directly on raw volumes (including LVM raw volumes).
    – If host-based striping is used on the raw volumes, then the stripe size must be an exact multiple of the Oracle block size.
    – Oracle redo log files (including archive logs) must be on separate volumes (different LUs) from the data files (including control files). In other words, Oracle redo log files and data files must not be mixed on the same LU.
■   Restoring of Oracle files
    – Before restoring Oracle data files from a backup, data validation may need to be temporarily turned off for those data files that were backed up before the Oracle checksum was enabled. Old blocks may exist on disk without checksum information if the database previously ran without checksum enabled.
■   Oracle on LVM (VxVM)
    – The LVM block size must be a multiple of the Oracle block size. The Oracle block size must be less than or equal to the minimum of the LVM stripe size and the largest block size that LVM will not fracture (known as the "Logical Track Group" in LVM), which is 256 KB in LVM.
    – When adding new physical volumes (PVs) to a logical volume (LV) used as an Oracle datafile, controlfile, or online log, data validation should be re-enabled in order to have HARD checking take effect on those new PVs. Similarly, to stop HARD checking on PVs that have been removed from an LV previously used by Oracle, HARD checking should be explicitly disabled on the device corresponding to the PV.
    – If host-based mirroring such as LVM mirroring is used, all component PV mirrors must be HARD-enabled; otherwise the entire logical volume (LV) is exposed. That is, if a user takes an unmirrored HARD-enabled LV and then mirrors it on the fly without HARD-enabling all sides of the mirror, that entire LV is exposed to data corruption.
    – LVM bad block relocation is not allowed on PVs that are HARD-enabled.
■   Oracle and LVM (VxVM) on an HA cluster server
    – If HA cluster software writes to LVM metadata at regular intervals in order to confirm whether its disks are available, then the LVM metadata area must be excluded from Database Validator checking by using the "-vs <bsize> SLBA ELBA" option.
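For example, the validation parameters could be set with the raidvchkset command (a sketch; the group name, block size, and LBA range are placeholders, and the exact semantics of the range arguments should be confirmed against the raidvchkset command reference for your CCI version):

    # Enable data validation for the group, restricting the checked area
    # with start/end LBA arguments so the LVM metadata area is out of checking
    raidvchkset -g oradb -vs 8 1024 2048000    # -vs <bsize> SLBA ELBA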
2.7.3 Data Retention Utility/Open LDEV Guard
The purpose of the Data Retention Utility (DRU) (Open LDEV Guard on 9900V) is to prevent writing to volumes by having the RAID storage system guard the volume. DRU is similar to the command support for Database Validator, setting a protection attribute on the specified LU.
■   Hiding from the Inquiry command: The RAID storage system conceals the target volumes from the SCSI Inquiry command by responding with "unpopulated volume" (0x7F) as the device type.
■   SIZE 0 volume: The RAID storage system replies with "SIZE 0" for the target volumes through the SCSI Read Capacity command.
■   Protection from reading: The RAID storage system prevents reading of the target volumes by responding with the check condition "Illegal function" (SenseKey = 0x05, SenseCode = 0x2200).
■   Protection from writing: The RAID storage system replies with "Write Protect" in the Mode Sense header and prevents writing to the target volumes by responding with the check condition "Write Protect" (SenseKey = 0x07, SenseCode = 0x2700).
■   S-VOL disabling: The RAID storage system prevents the target volumes from becoming an S-VOL through pair creation.
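These protection attributes could be applied with the raidvchkset command, for example (a sketch; the group name and the guard-type keywords are assumptions to be checked against the raidvchkset command reference for your CCI version):

    # Guard the volumes in the group: write-protect and disable S-VOL use
    raidvchkset -g oradb -vg wtd svd
    # Release the guarding
    raidvchkset -g oradb -vg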
2.7.4 Restrictions on Data Retention Utility Volumes
■   File systems using Data Retention Utility (Open LDEV Guard)
    – When using UNIX file system volumes with DRU/Open LDEV Guard, the volumes must be mounted with the read-only option; set DRU/Open LDEV Guard after the volumes are unmounted.
    – For Windows 2003/2008 file systems, you must use the "-x mount" and "-x umount" options of the CCI commands with the above procedure.
    – DRU/Open LDEV Guard volumes set to Write Protect mode (read-only) cannot be used for a file system (NTFS, FAT) on Windows.
■   LVM (VxVM) on Data Retention Utility (Open LDEV Guard)
    – When operating LVM volumes used with DRU/Open LDEV Guard, LVM commands may write to the volumes; DRU/Open LDEV Guard should then be re-enabled.
■   Data Retention Utility (Open LDEV Guard) on an HA cluster server
    – If HA cluster software writes to the metadata at regular intervals in order to confirm whether its disks are available, then DRU/Open LDEV Guard should not be used in HA environments.
■   Dynamic disks on Windows systems
    – DRU/Open LDEV Guard volumes cannot be used as dynamic disks, because a dynamic disk does not handle volumes set to Write Protect mode (read-only). DRU/Open LDEV Guard volumes must be used with basic disks only.
■   LUN#0
    – Some operating systems cannot recognize LUNs above LUN#0 if LUN#0 is set to "inv" as the DRU/Open LDEV Guard attribute, because some HBA drivers do not scan all LUNs on a port when LUN#0 is invisible.
2.7.5 Operations
The Hitachi storage systems (9900V and later) have parameters for protection checking on each LU, and these parameters are set through the command device by CCI. CCI supports the following commands for setting and verifying the protection-checking parameters for each LU:
■   raidvchkset: Sets the parameters for protection checking on the specified volumes.
■   raidvchkdsp: Shows the parameters for protection checking on the specified volumes, based on the CCI configuration definition file.
■   raidvchkscan: Shows the parameters for protection checking on the specified volumes, based on the raidscan command. This command is also used to discover the journal volume list set via the SVP within the storage system, and displays information for the journal volumes.
2.8 CCI Software Structure
Figure 2.20 illustrates the CCI software structure: the CCI components on the RAID storage system, and the CCI instance on the UNIX/PC server. The CCI components on the storage system include the command device(s) and the Hitachi TrueCopy and/or ShadowImage volumes. Each CCI instance on a UNIX/PC server includes:
■   HORC Manager (HORCM):
    – Log and trace files
    – A command server
    – Error monitoring and event reporting files
    – A configuration management feature
■   Configuration definition file (defined by the user)
■   The Hitachi TrueCopy and/or ShadowImage user execution environments, which contain the TrueCopy/ShadowImage commands, a command log, and a monitoring function.
2.8.1 HORCM Operational Environment
The HORCM operates as a daemon process on the host server and is activated automatically
when the server machine starts up or manually by the start-up script. HORCM refers to the
definitions in the configuration file when it is activated. The environmental variable
HORCM_CONF is used to define the configuration file to be referenced.
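For example, an instance could be started manually as follows (a sketch; the configuration file path and the instance number are placeholders):

    # Start CCI instance 0; HORCM reads /etc/horcm0.conf by default
    horcmstart.sh 0
    # Or define the configuration file explicitly for a single instance
    HORCM_CONF=/etc/horcm.conf
    export HORCM_CONF
    horcmstart.sh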
[Figure: On the server machine, HORCM reads the configuration file defined by HORCM_CONF, accepts commands, issues read/write commands to the command device, and communicates with the HORCM (CCI) instance in the remote server.]
Figure 2.20 HORCM Operational Environment
2.8.2 CCI Instance Configurations
The basic unit of the CCI software structure is the CCI instance. Each copy of CCI on a server
is a CCI instance. Each instance uses a defined configuration file to manage volume
relationships while maintaining awareness of the other CCI instances. Each CCI instance
normally resides on one server (one node). If two or more nodes are run on a single server
(e.g., for test operations), it is possible to activate two or more instances using instance
numbers. The CCI command, Hitachi TrueCopy or ShadowImage, is selected by the
environment variable (HORCC_MRCF). The default command execution environment for CCI
is Hitachi TrueCopy.
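For example, the command execution environment could be switched per shell (a sketch; the group name is a placeholder):

    # Commands in this shell now operate on ShadowImage (HOMRCF) pairs
    HORCC_MRCF=1
    export HORCC_MRCF
    pairdisplay -g oradb
    # Unset the variable to return to the default TrueCopy environment
    unset HORCC_MRCF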
Figure 2.21 illustrates the CCI instance on a host connected to a RAID storage system. The remote execution link is a network connection to another PC that allows you to execute CCI functions remotely. The connection between the CCI instance and the storage system represents the connection between the CCI software on the host and the command device. The command device accepts TrueCopy and ShadowImage CCI commands and communicates read and write I/Os between the host and the volumes on the storage system. The host does not communicate Hitachi TrueCopy or ShadowImage commands directly to the volumes on the storage system; the CCI commands always go through the command device.
[Figure: CCI software structure — the TrueCopy and ShadowImage user execution environments (commands, command logs, and a monitoring command) and the user-defined configuration definition file feed the HORCM (CCI) instance, which contains the TrueCopy command server, error and event monitoring, configuration management, and log and trace files (syslog file, HORCM log, HORCM trace, command trace, command core file), plus remote execution to other instances; HORCM drives TrueCopy and ShadowImage control of the object volumes on the Hitachi RAID storage systems.]
Figure 2.21 CCI Software Structure
Connecting the host to one storage system allows you to maintain multiple copies of your
data for testing purposes or offline backup. Connecting the host to two storage systems
enables you to migrate data or implement disaster recovery by maintaining duplicate sets of
data in two different storage systems. You can implement disaster recovery solutions by
placing the storage systems in different geographic areas. Having two attached hosts, one
for the primary volume and one for the secondary volume, allows you to maintain and
administer the primary volumes while the secondary volumes can be taken offline for
testing. Two hosts connected to two storage systems also allows the most flexible disaster
recovery plan, because both sets of data are administered by different hosts, which guards
against storage system as well as host failure.
The four possible CCI instance configurations are:
■   One host connected to one storage system. Each CCI instance has its own operation manager, server software, and scripts and commands, and each CCI instance communicates independently with the command device. The RAID storage system contains the command device, which communicates with the CCI instances, as well as the primary and secondary volumes of both CCI instances.
■   One host connected to two storage systems. Each CCI instance has its own operation manager, server software, and scripts and commands, and each CCI instance communicates independently with the command device. Each RAID storage system has a command device which communicates with each CCI instance independently. Each storage system contains the primary volumes of its connected CCI instance and the secondary volumes of the other CCI instance (located on the same host in this case).
■   Two hosts connected to one storage system. The CCI instances are connected via the LAN so that they can maintain awareness of each other. The RAID storage system contains the command device, which communicates with both CCI instances, and the primary and secondary volumes of both CCI instances.
■   Two hosts connected to two storage systems. The CCI instances are connected via the LAN so that they can maintain awareness of each other. Each RAID storage system has a command device which communicates with each CCI instance independently. Each storage system contains the primary volumes of its connected CCI instance and the secondary volumes of the other CCI instance (located on a different host in this case).
2.8.3 Host Machines that Can be Paired
Host machines can be combined when a paired logical volume is defined, provided the host machines run operating systems (OS) of the same architecture. Otherwise, a host machine may be incapable of recognizing the paired volume of another host, even though HORC manages the volumes properly.
When a particular application uses HORC, users sometimes use the HORC volume as the retention volume for server data backup. In this case, RAID Manager requires a RAID Manager instance corresponding to each OS platform, located at the secondary site, for the pair operations that back up the primary servers of each OS platform.
However, it is possible to prepare only one server at the secondary site, because RAID Manager supports communication among different OSs (including the converter for "little-endian" versus "big-endian").
Figure 2.22 shows the supported communication (32-bit, 64-bit, MPE/iX) among different OSs. Please note the following terms that are used in the example:
■   RM-H: RAID Manager instance with the HORCMFCTBL environment variable set for HP-UX on Windows
■   RM-S: RAID Manager instance with the HORCMFCTBL environment variable set for Solaris on Windows
[Figure: RAID Manager instances on HP-UX, Solaris, and Windows hosts (including RM-H and RM-S instances on Windows) communicate with each other and with the command devices of two RAID storage systems, where P-VOLs are paired with S-VOLs for the HP-UX (H), Solaris (S), and Windows (W) platforms.]
Figure 2.22 RAID Manager Communication Among Different Operating Systems
Table 2.9   Supported HORCM Communication

                       HORCM:  32-bit            64-bit            MPE/iX
HORCM                          little    big     little    big     big
32-bit    little               AV        AV      AV        AV      AV
          big                  AV        AV      AV        AV      AV
64-bit    little               AV        AV      AV        AV      NA
          big                  AV        AV      AV        AV      NA
MPE/iX    big                  AV        AV      NA        NA      AV

(AV = available; NA = not available.)
Restriction: RAID Manager for MPE/iX cannot communicate with 64-bit HORCM.
Restriction: RAID Manager communication among different operating systems is supported on HP-UX, Solaris, AIX®, Linux®, and Windows (but not on Tru64™ UNIX (Digital UNIX)). RAID Manager does not require the HORCMFCTBL environment variable to be set, except for RM-H and RM-S instances (to ensure that the behavior of the operating system platform is the same across the different operating systems).
2.8.4 Configuration Definition File
The CCI configuration definition file is a text file that defines the connected hosts and the volumes and groups known to the CCI instance. Physical volumes (special files) used independently by the servers are combined when paired logical volume names and group names are given to them. The configuration definition file describes the correspondence between the physical volumes used by the servers, the paired logical volumes, and the names of the remote servers connected to the volumes. Figure 2.23 illustrates the configuration definition of paired volumes; Figure 2.24 shows an example configuration file for UNIX-based servers, and Figure 2.25 shows an example configuration file for a Windows operating system.
[Figure: Configuration definition files on HOSTA, HOSTB, and HOSTC. HOSTA defines G1,Oradb1...P1,T1,L1 with G1...HOSTB, plus G2,Oradb2...P2,T2,L3 and G2,Oradb3...P2,T2,L4 with G2...HOSTC; HOSTB defines G1,Oradb1...P3,T2,L2 with G1...HOSTA; HOSTC defines G2,Oradb2...P4,T1,L1 and G2,Oradb3...P4,T1,L2 with G2...HOSTA. The special files map through port (Pn), target ID (Tn), and LUN (Ln) to the paired logical volumes (G1 = Oradb1; group G2 = Oradb2, Oradb3) on the Hitachi RAID storage systems.]
Figure 2.23 Configuration Definition of Paired Volumes
HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
HST1           horcm      1000          3000

HORCM_CMD
#unitID 0... (seq#30014)
#dev_name             dev_name        dev_name
/dev/rdsk/c0t0d0
#unitID 1... (seq#30015)
#dev_name             dev_name        dev_name
/dev/rdsk/c1t0d0

HORCM_DEV
#dev_group    dev_name    port#     TargetID    LU#    MU#
oradb         oradb1      CL1-A     3           1      0
oradb         oradb2      CL1-A     3           1      1
oralog        oralog1     CL1-A     5           0
oralog        oralog2     CL1-A1    5           0
oralog        oralog3     CL1-A1    5           1
oralog        oralog4     CL1-A1    5           1      h1

HORCM_INST
#dev_group    ip_address    service
oradb         HST2          horcm
oradb         HST3          horcm
oralog        HST3          horcm
Figure 2.24 Configuration File Example — UNIX-Based Servers
Figure 2.25 Configuration File Example — Windows Servers
HORCM_MON. The monitor parameter (HORCM_MON) defines the following values:
■   ip_address: The network address (IPv4 or IPv6) of the local host. When HORCM has two or more network addresses on different subnets, or on MPE/iX, enter NONE for IPv4 or NONE6 for IPv6 here.
■   service: Specifies the UDP port name assigned to the HORCM communication path, which is registered in /etc/services (\WINNT\system32\drivers\etc\services on Windows, SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT on OpenVMS). If a port number is specified instead of a port name, the port number is used.
■   poll: The interval for monitoring paired volumes. To reduce the HORCM daemon load, make this interval longer. If set to -1, the paired volumes are not monitored. The value of -1 is specified when two or more CCI instances run on a single machine.
■   timeout: The time-out period for communication with the remote server.
HORCM_CMD. The command parameter (HORCM_CMD) defines the UNIX device path or
Windows physical device number of the command device. The command device must be
mapped to the SCSI/fibre using the LUN Manager remote console software (or SVP). You can
define more than one command device to provide failover in case the original command
device becomes unavailable (see section 2.8.6).
Note: To enable dual pathing of the command device under Solaris systems, make sure to
include all paths to the command device on a single line in the HORCM_CMD section of the
config file. Putting the path information on separate lines may cause parsing issues, and
failover may not occur unless the HORCM startup script is restarted on the Solaris system.
When a server is connected to two or more storage systems, HORCM identifies each storage system by unit ID, assigned in the order in which the command devices are described in this section of the configuration definition file. When a storage system is shared by two or more servers, each server must be able to verify that a given unit ID refers to the same Serial# (Seq#) on every server. This can be verified using the raidqry command.
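For example (a sketch):

    # List the connected storage systems with their unit IDs and serial numbers
    raidqry -l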
[Figure: Two servers, each running HORCM (CCI), are connected via ESCON®/fibre-channel and a HUB to two storage systems (Ser# 30014 and Ser# 30015), each containing a command device; both servers define unitID 0 = Ser# 30014 and unitID 1 = Ser# 30015.]
Figure 2.26 Configuration and Unit IDs for Multiple Storage Systems
dev_name for Windows
In a Windows SAN environment, "Volume{guid}" changes on every reboot under MSCS/Windows 2003/2008 if Windows finds the same signature on a command device connected with multi-path. The user must then find the new "Volume{guid}" and change the "Volume{guid}" described in the CCI configuration file. Therefore, CCI supports the following naming format, which specifies Serial#/LDEV#/Port# as the notation for the command device (Windows only):

\\.\CMD-Ser#-ldev#-Port#

HORCM_CMD
#dev_name                   dev_name        dev_name
\\.\CMD-30095-250-CL1-A
To allow more flexibility, CCI allows the following formats:
■   Minimum specification. Specifies the use of any command device for Serial# 30095:
    \\.\CMD-30095
    If Windows has two different array models that share the same serial number, fully define the serial number, LDEV#, port, and host group for the command device.
■   Under a multi-path driver. Specifies the use of any port as the command device for Serial# 30095, LDEV# 250:
    \\.\CMD-30095-250
■   Full specification. Specifies the command device for Serial# 30095, LDEV# 250 connected to port CL1-A, host group #1:
    \\.\CMD-30095-250-CL1-A-1
■   Other examples:
    \\.\CMD-30095-250-CL1-A
    \\.\CMD-30095-250-CL1
dev_name for UNIX
In a UNIX SAN environment, there are situations in which the device file name changes: after a failover operation in the SAN, or on every reboot under Linux when the SAN is reconfigured. The CCI user then needs to find the new device special file and change the HORCM_CMD entry in the CCI configuration file. Therefore, CCI supports the following naming format, which specifies Serial#/LDEV#/Port#:HINT as the notation for the command device on UNIX:

\\.\CMD-Ser#-ldev#-Port#:HINT

HORCM_CMD
#dev_name                                dev_name        dev_name
\\.\CMD-30095-250-CL1-A-1:/dev/rdsk/

If these names are specified, HORCM finds "\\.\CMD-Serial#-Ldev#-Port#" from the device files specified by HINT at HORCM start-up. HINT must be specified as a directory terminated with '/' on the device file name, or as a directory including a device file name pattern, for example:

/dev/rdsk/     -> finds a specified CMD from /dev/rdsk/*
/dev/rdsk/c10  -> finds a specified CMD from /dev/rdsk/c10*
/dev/rhdisk    -> finds a specified CMD from /dev/rhdisk*
The device files discovered via HINT are filtered with the following patterns:
HP-UX:   /dev/rdsk/* or /dev/rdisk/disk*
Solaris: /dev/rdsk/*s2
AIX:     /dev/rhdisk*
Linux:   /dev/sd...
zLinux:  /dev/sd...
MPE/iX:  /dev/...
Tru64:   /dev/rrz*c or /dev/rdisk/dsk*c or /dev/cport/scp*
DYNIX:   /dev/rdsk/sd*
IRIX64:  /dev/rdsk/*vol or /dev/rdsk/node_wwn/*vol/*
Once a HINT has been specified, ":HINT" can be omitted for subsequent command devices; these are then found from HORCM's cached Inquiry information, which avoids unnecessary device scanning.

HORCM_CMD
#dev_name                             dev_name
\\.\CMD-30095-250-CL1:/dev/rdsk/      \\.\CMD-30095-250-CL2

Example for minimum specification (any command device for Serial# 30095):
\\.\CMD-30095:/dev/rdsk/
Example under a multi-path driver (any port as the command device for Serial# 30095, LDEV# 250):
\\.\CMD-30095-250:/dev/rdsk/
Example for full specification (the command device for Serial# 30095, LDEV# 250 connected to port CL1-A, host group #1):
\\.\CMD-30095-250-CL1-A-1:/dev/rdsk/
Other examples:
\\.\CMD-30095-250-CL1:/dev/rdsk/
\\.\CMD-30095-250-CL2
\\.\CMD-30095:/dev/rdsk/c1
\\.\CMD-30095:/dev/rdsk/c2
HORCM_DEV. The device parameter (HORCM_DEV) defines the RAID storage system device addresses for the paired logical volume names. When the server is connected to two or more storage systems, the unit ID is expressed by a port# extension. Each group name is a unique name determined by the server which uses the volumes, the attributes of the volumes (such as database data, redo log file, UNIX file), recovery level, etc. The group and paired logical volume names described in this item must also be defined in the remote server. The hardware SCSI/fibre bus, target ID, and LUN need not be the same.
The following values are defined in the HORCM_DEV parameter:
■   dev_group: Names a group of paired logical volumes. A command is executed for all corresponding volumes according to this group name.
■   dev_name: Names the paired logical volume within a group (i.e., name of the special file or unique logical volume). The name of a paired logical volume must be different from the "dev_name" in other groups.
■   port#: Defines the RAID storage system port number of the volume that corresponds to the dev_name volume. The "n" suffix below denotes the unit ID when the server is connected to two or more storage systems (e.g., CL1-A1 = CL1-A in unit ID 1). If the "n" option is omitted, the unit ID is 0. The port is not case sensitive (e.g., CL1-A = cl1-a = CL1-a = cl1-A).
            Basic          Option         Option         Option
CL1         An Bn Cn Dn    En Fn Gn Hn    Jn Kn Ln Mn    Nn Pn Qn Rn
CL2         An Bn Cn Dn    En Fn Gn Hn    Jn Kn Ln Mn    Nn Pn Qn Rn

The following ports can only be specified for the 9900V:

            Basic          Option         Option         Option
CL3         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL4         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
For 9900V, CCI supports four types of port names for host groups:
■   Specifying the port name without a host group:
    CL1-A
    CL1-An        (where n = unit ID for multiple RAID systems)
■   Specifying the port name with a host group:
    CL1-A-g       (where g = host group)
    CL1-An-g      (where n-g = host group g on CL1-A in unit ID n)
The following ports can only be specified for USP/NSC and USP V/VM:

            Basic          Option         Option         Option
CL5         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL6         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL7         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL8         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CL9         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLA         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLB         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLC         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLD         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLE         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLF         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
CLG         an bn cn dn    en fn gn hn    jn kn ln mn    nn pn qn rn
■   target ID: Defines the SCSI/fibre target ID number of the physical volume on the specified port. See Appendix C for further information on fibre address conversion.
■   LU#: Defines the SCSI/fibre logical unit number (LU#) of the physical volume on the specified target ID and port.
    Note: In the case of fibre channel, if the TID and LU# displayed on the system are different from the TID on the fibre address conversion table, then you must use the TID and LU# indicated by the raidscan command in the CCI configuration file.
■   MU# for HOMRCF: Defines the mirror unit number (0-2) for the identical LU on HOMRCF. If this number is omitted, it is assumed to be zero (0). The cascaded mirroring of the S-VOL is expressed as virtual volumes using the mirror descriptors (MU#1-2) in the configuration definition file. MU#0 of a mirror descriptor is used for the connection of the S-VOL. The SnapShot feature has 64 mirror descriptors, as shown below:

    Feature      SMPL                      P-VOL                     S-VOL
                 MU#0-2     MU#3-63        MU#0-2     MU#3-63        MU#0      MU#1-63
    HOMRCF       Valid      Invalid        Valid      Invalid        Valid     Invalid
    SnapShot     Valid      Valid          Valid      Valid          Valid     Invalid
■   MU# for HORC/Universal Replicator: Defines the mirror unit number (0-3) of one of four possible HORC/UR bitmap associations for an LDEV. If this number is omitted, it is assumed to be zero (0). The Universal Replicator mirror descriptor is described in the MU# column by adding "h" in order to identify the identical LU as a mirror descriptor for UR. The MU# for HORC must be specified as blank or "0". HORC has only one mirror descriptor, but UR has four, as shown below:

    State/Feature           SMPL                    P-VOL                   S-VOL
                            MU#0     MU#h1-h3       MU#0     MU#h1-h3       MU#0     MU#h1-h3
    TrueCopy                Valid    Not Valid      Valid    Not Valid      Valid    Not Valid
    Universal Replicator    Valid    Valid          Valid    Valid          Valid    Valid
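For example, a HORCM_DEV entry could combine these fields as follows (a sketch; the group names, device name, and address values are placeholders):

    HORCM_DEV
    #dev_group   dev_name   port#    TargetID   LU#   MU#
    ora          ora_dev1   CL1-A1   3          1          # HORC/TrueCopy (MU# omitted = 0)
    orasi        ora_dev1   CL1-A1   3          1     1    # ShadowImage mirror MU#1
    oraur        ora_dev1   CL1-A1   3          1     h2   # Universal Replicator mirror h2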
HORCM_INST. The instance parameter (HORCM_INST) defines the network address (IP address) of the remote server (active or standby). It is used to refer to or change the status of a paired volume on the remote server (active or standby). When the primary volume is shared by two or more servers, there are two or more remote servers using the secondary volume, so it is necessary to describe the addresses of all of these servers.
The following values are defined in the HORCM_INST parameter:
■   dev_group: The group name described in dev_group of HORCM_DEV.
■   ip_address: The network address of the specified remote server.
■   service: The port name assigned to the HORCM communication path (registered in the /etc/services file). If a port number is specified instead of a port name, the port number is used.
When HORCM has two or more network addresses on different subnets for communication,
the ip_address of HORCM_MON must be NONE. This configuration for multiple networks can
be found using raidqry -r <group> command option on each host. The current network
address of HORCM can be changed using horcctl -NC <group> on each host.
Configuration on HST1 (P-VOL side):

HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
NONE           horcm      1000          3000

HORCM_INST
#dev_group    ip_address    service
oradb         HST2_IPA      horcm
oradb         HST3_IPA      horcm
oradb         HST2_IPB      horcm
oradb         HST3_IPB      horcm

Configuration on HST2 and HST3 (S-VOL side):

HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
NONE           horcm      1000          3000

HORCM_INST
#dev_group    ip_address    service
oradb         HST1_IPA      horcm
oradb         HST1_IPB      horcm

[Figure: HST1 (P-VOL) and HST2/HST3 (S-VOL) are connected on two subnets, SubnetA (IPA) and SubnetB (IPB), with the oradb group paired between the Hitachi RAID storage systems.]
Figure 2.27 Configuration for Multiple Networks
For example:
# horcctl -ND -g IP46G
Current network address = 158.214.135.106,services = 50060
# horcctl -NC -g IP46G
Changed network address(158.214.135.106,50060 -> fe80::39e7:7667:9897:2142,50060)
When only IPv6 is used, the configuration must be defined as HORCM/IPv6.
Host A (HORCM/IPv6):

HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
NONE6          horcm0     1000          3000
#fe80::202:a5ff:fe55:c1d2    horcm0    1000    3000

#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2

#/********** For HORCM_LDEV ****************/
HORCM_LDEV
#dev_group    dev_name    Serial#    LDEV#    MU#
IPV6G         dev1        63502      677

#/********** For HORCM_INST ****************/
HORCM_INST
#dev_group    ip_address                  service
IPV6G         fe80::209:6bff:febe:3c17    horcm0

Host B (HORCM/IPv6):

HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
NONE6          horcm0     1000          3000
#fe80::209:6bff:febe:3c17    horcm0    1000    3000

#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2

#/********** For HORCM_LDEV ****************/
HORCM_LDEV
#dev_group    dev_name    Serial#    LDEV#    MU#
IPV6G         dev1        63502      577

#/********** For HORCM_INST ****************/
HORCM_INST
#dev_group    ip_address                  service
IPV6G         fe80::202:a5ff:fe55:c1d2    horcm0
Figure 2.28 Network Configuration for IPv6
With IPv4-mapped IPv6 addresses, it is possible to communicate between HORCM/IPv4 and HORCM/IPv6.
Host A (HORCM/IPv4):

HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
NONE           horcm4     1000          3000
#158.214.127.64    horcm4    1000    3000

#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2

#/********** For HORCM_LDEV ****************/
HORCM_LDEV
#dev_group    dev_name    Serial#    LDEV#    MU#
IPM4G         dev1        63502      577

#/********** For HORCM_INST ****************/
HORCM_INST
#dev_group    ip_address         service
IPM4G         158.214.135.105    horcm6

Host B (HORCM/IPv6, using IPv4-mapped IPv6):

HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
NONE6          horcm6     1000          3000
#::ffff:158.214.135.105    horcm6    1000    3000

#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2

#/********** For HORCM_LDEV ****************/
HORCM_LDEV
#dev_group    dev_name    Serial#    LDEV#    MU#
IPM4G         dev1        63502      677

#/********** For HORCM_INST ****************/
HORCM_INST
#dev_group    ip_address               service
IPM4G         ::ffff:158.214.127.64    horcm4
#IPM4G        158.214.127.64           horcm4

"::ffff:158.214.127.64" is an IPv4-mapped IPv6 address. If the ip_address is specified in IPv4 format, HORCM converts it to IPv4-mapped IPv6.
Figure 2.29 Network Configuration for IPv4 Mapped IPv6
When IPv4 and IPv6 are mixed, HORCM/IPv4 and HORCM/IPv6 instances can communicate
with each other using IPv4-mapped IPv6, and HORCM/IPv6 instances can communicate with
each other using native IPv6.
(Diagram: two HORCM/IPv4 hosts and two HORCM/IPv6 hosts; RM commands on each host go through the local HORCM instance. HORCM/IPv4 and HORCM/IPv6 communicate through IPv4-mapped IPv6, and the HORCM/IPv6 hosts communicate with each other through native IPv6.)

Configuration file for the HORCM/IPv4 host 158.214.127.64:
HORCM_MON
#ip_address       service   poll(10ms)   timeout(10ms)
NONE              horcm4    1000         3000
#158.214.127.64   horcm4    1000         3000

#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2

#/********** For HORCM_LDEV ****************/
HORCM_LDEV
#dev_group   dev_name   Serial#   LDEV#   MU#
IP46G        dev1       63502     577

#/********** For HORCM_INST ****************/
HORCM_INST
#dev_group   ip_address        service
IP46G        158.214.135.105   horcm4
IP46G        158.214.135.106   horcm6

Configuration file for the HORCM/IPv4 host 158.214.135.105:
HORCM_MON
#ip_address        service   poll(10ms)   timeout(10ms)
NONE               horcm4    1000         3000
#158.214.135.105   horcm4    1000         3000

#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2

#/********** For HORCM_LDEV ****************/
HORCM_LDEV
#dev_group   dev_name   Serial#   LDEV#   MU#
IP46G        dev1       63502     677

#/********** For HORCM_INST ****************/
HORCM_INST
#dev_group   ip_address       service
IP46G        158.214.127.64   horcm4
IP46G        158.214.127.65   horcm6

Configuration file for the HORCM/IPv6 host fe80::202:a5ff:fe55:c1d2:
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
NONE6         horcm6    1000         3000

#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2

#/********** For HORCM_LDEV ****************/
HORCM_LDEV
#dev_group   dev_name   Serial#   LDEV#   MU#
IP46G        dev1       63502     677

#/********** For HORCM_INST ****************/
HORCM_INST
#dev_group   ip_address                 service
IP46G        158.214.127.64             horcm4
IP46G        fe80::209:6bff:febe:3c17   horcm6

Configuration file for the HORCM/IPv6 host fe80::209:6bff:febe:3c17:
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
NONE6         horcm6    1000         3000

#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2

#/********** For HORCM_LDEV ****************/
HORCM_LDEV
#dev_group   dev_name   Serial#   LDEV#   MU#
IP46G        dev1       63502     577

#/********** For HORCM_INST ****************/
HORCM_INST
#dev_group   ip_address                 service
IP46G        158.214.135.105            horcm4
IP46G        fe80::202:a5ff:fe55:c1d2   horcm6

Figure 2.30 Network Configuration for Mixed IPv4 and IPv6
HORCM_LDEV. The HORCM_LDEV parameter is used for specifying stable LDEV# and Serial#
as the physical volumes corresponding to the paired logical volume names. Each group name
is unique and typically has a name fitting its use (e.g., database data, Redo log file, UNIX
file). The group and paired logical volume names described in this item must also be known
to the remote server.
(a) dev_group: This parameter is the same as in HORCM_DEV.
(b) dev_name: This parameter is the same as in HORCM_DEV.
(c) MU#: This parameter is the same as in HORCM_DEV.
(d) Serial#: This parameter describes the serial number of the RAID storage system.
(e) CU:LDEV(LDEV#): This parameter describes the LDEV number in the RAID storage
system and supports three formats for the LDEV#.
HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
oradb        dev1       30095     02:40            0
oradb        dev2       30095     02:41            0

–  Specifying "CU:LDEV" in hex, as used by the SVP or Web console.
   Example for LDEV# 260: 01:04
–  Specifying "LDEV" in decimal, as used by the inqraid command of RAID Manager.
   Example for LDEV# 260: 260
–  Specifying "LDEV" in hex, as used by the inqraid command of RAID Manager.
   Example for LDEV# 260: 0x104
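For example, the following three HORCM_LDEV entries all describe the same LDEV# 260 (a sketch for comparison; the group, device name, and serial number are hypothetical, and only one form would be used in an actual file):

HORCM_LDEV
#dev_group   dev_name   Serial#   LDEV#   MU#
oradb        dev1       30095     01:04   0
oradb        dev1       30095     260     0
oradb        dev1       30095     0x104   0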
Note: The HORCM_LDEV format can be used for the Lightning 9900V and later. The LDEV# is
internally converted to the "Port#, Targ#, Lun#" mapping for that LDEV, because the RAID
storage system needs "Port#, Targ#, Lun#" to specify the target device. This feature
depends on the TagmaStore USP/NSC and 9900V microcode; if HORCM fails to start, use
HORCM_DEV instead.
HORCM_INSTP. This parameter is used to specify a "pathID" for the TrueCopy link, in
addition to the information defined by HORCM_INST.
HORCM_INSTP
#dev_group   ip_address   service   pathID
VG01         HSTA         horcm     1
VG02         HSTA         horcm     2
Note: A pathID can be specified for TrueCopy on the USP V/VM and USP/NSC; it cannot be
specified for Universal Replicator (UR). The pathID is used by the paircreate command and
by pairresync -swapp[s], so it must be specified at both the P-VOL and S-VOL sites. If the
pathID is not specified, the path is used as CU free.
2.8.5 Command Device
The Hitachi TrueCopy/ShadowImage commands are issued by the HORC Manager (HORCM) to
the RAID storage system command device. The command device is a user-selected,
dedicated logical volume on the storage system which functions as the interface to the CCI
software on the UNIX/PC host. The command device is dedicated to CCI communications and
cannot be used by any other applications. The command device accepts the TrueCopy and
ShadowImage read and write commands that are executed by the storage system, and it
returns the results of read requests to the UNIX/PC host. The volume designated as
the command device is used only by the storage system and is blocked from the user. The
command device uses 16 MB, and the remaining volume space is reserved for CCI and its
utilities. The command device can be any OPEN-x device (e.g., OPEN-3, OPEN-8) that is
accessible by the host. A LUSE volume cannot be used as a command device. A Virtual
LVI/LUN volume as small as 36 MB (e.g., OPEN-3-CVS) can be used as a command device.
WARNING: Make sure the volume to be selected as the command device does not contain
any user data. The command device will be inaccessible to the UNIX/PC server host.
The CCI software on the host issues read and write commands to the command device. When
CCI receives an error notification in reply to a read or write request to the RAID storage
system, the CCI software will switch to an alternate command device, if one is defined. If a
command device is blocked (e.g., online maintenance), you can switch to an alternate
command device manually. If no alternate command device is defined or available, all
Hitachi TrueCopy and ShadowImage commands will terminate abnormally, and the host will
not be able to issue commands to the storage system. To avoid this downtime, the user
should define one or more alternate command devices.
Each command device must be set using the LUN Manager remote console software. If the
remote LUN Manager feature is not installed, please ask your Hitachi Data Systems
representative about LUN Manager configuration services. Each command device must also
be defined in the HORCM_CMD section of the configuration file for the CCI instance on the
attached host. If an alternate command device is not defined in the configuration file, the
CCI software may not be able to switch to an alternate command device. See the applicable
sections of this document for information on setting and defining command devices. A
command device can also be set with an attribute to indicate protection ON or OFF.
Notes:
„
For Solaris operations, the command device must be labeled.
„
To enable dual pathing of the command device under Solaris systems, make sure to
include all paths to the command device on a single line in the HORCM_CMD section of
the configuration file. Figure 2.31 shows an example with two paths (c1 and c2) to the
command device. Putting the path information on separate lines may cause parsing
issues, and failover may not occur unless the HORCM startup script is restarted on the
Solaris system.
HORCM_CMD
#dev_name dev_name dev_name
/dev/rdsk/c1t66d36s2 /dev/rdsk/c2t66d36s2
Figure 2.31 Example of Alternate Path for Command Device for Solaris Systems
2.8.6 Alternate Command Device Function
The CCI software issues commands to the command device via the UNIX/PC raw I/O
interface. If the command device fails in any way, all Hitachi TrueCopy/ShadowImage
commands are terminated abnormally, and the user cannot use any commands. Because the
use of alternate I/O pathing depends on the platform, restrictions are placed upon it. For
example, on HP-UX systems only devices subject to the LVM can use the alternate path PV-
LINK. To avoid command device failure, CCI supports an alternate command device function.
„
Definition of alternate command devices. To use an alternate command device, you
must define two or more command devices for the HORCM_CMD item in the
configuration definition file. When two or more devices are defined, they are
recognized as alternate command devices.
„
Timing of alternate command devices. When HORCM receives an error notification in
reply from the operating system via the raw I/O interface, the command device is
alternated. It is also possible to alternate the command device forcibly by issuing the
alternating command provided by Hitachi TrueCopy (horcctl -C).
„
Operation of the alternating command. If the command device is to be blocked due to
online maintenance (e.g., microcode replacement), the alternating command should be
issued in advance. When the alternating command is issued again after completion of
the online maintenance, the previous command device is activated again.
„
Multiple command devices on HORCM startup. If at least one of the command devices
described in the configuration definition file is available, HORCM starts using an
available command device and writes a warning message to the startup log. The user
should confirm either that all command devices can be switched using the horcctl -C
command option, or that HORCM started without a warning message in the HORCM
startup log.
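For example, before blocking the command device for online maintenance, an operator could switch to the alternate device as follows (a sketch only: the device names are hypothetical, and the "Current control device" display is an assumption modeled on the horcctl -DI output shown in section 2.8.9.2):

# horcctl -D
Current control device = /dev/rdsk/c1t66d36s2
# horcctl -C
# horcctl -D
Current control device = /dev/rdsk/c2t66d36s2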
(Diagram: HORCM (CCI) on the host issues TrueCopy/ShadowImage commands to the command device in the Hitachi RAID storage system, and alternates to another command device when necessary.)
Figure 2.32 Alternate Command Device Function
2.8.7 Command Interface with Hitachi TrueCopy/ShadowImage
When CCI commands are converted into SCSI commands of a special format, a SCSI
pass-through driver that can send such special SCSI commands to the RAID storage system is
needed; as a result, CCI support could come to depend on the OS supplier. Accordingly, it is
necessary to use read/write commands that can easily be issued by many UNIX/PC server
platforms. Ioctl() can be used for the following platforms: HP-UX, Linux, Solaris, Windows,
IRIX®64, OpenVMS, and zLinux®.
Format of SCSI commands used. CCI uses RD/WR commands. These RD/WR commands are
valid only for the special LDEV (the command device), so that they can be discriminated
from normal RD/WR commands.
Recognition of the control command area (LBA#). The host issues control commands
through a special file for raw I/O of the special LDEV. Since the specific LU (command
device) receiving these commands is a normal disk as viewed from the SCSI interface, the
operating system may access its local control area. The RAID storage system must
distinguish such accesses from control command accesses. Normally, several megabytes of
the OS control area are used from the initial LBA#. To avoid using this area, a specific LBA#
area is defined, and control commands are issued within this area. The command LBA#
recognized by the storage system is shown below, provided the maximum OS control area is
16 MB.
32768 <= LBA# <= 32768 * 2 (in units of blocks; 512 bytes per block)
The host seeks 32768 * 512 bytes and issues a command.
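As a quick check of this boundary (a sketch; any POSIX shell can compute it):

# echo $((32768 * 512))
16777216

That is, the command area begins at byte offset 16 MB, just past the maximum OS control area.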
(Diagram: the special file (raw I/O volume) maps onto the special LDEV space of the target LU; the OS control area occupies the initial LBAs, and the host seeks to the command area LBA before issuing a command.)
Figure 2.33 Relation between Special File and Special LDEV
Acceptance of commands. A command is issued in the LBA area of the special LDEV
explained above. An RD/WR command meeting this requirement is received and handled as
a CCI command. A command is issued in the form of WR or WR-RD. When a command is
issued in the form of RD, it is regarded as an inquiry (equivalent to a SCSI inquiry), and a
CCI recognition character string is returned.
2.8.7.1 Command Competition
The CCI commands are asynchronous commands issued via the SCSI interface. Accordingly, if
several processes issue these commands to a single LDEV, the storage system cannot take
the proper action. To avoid this problem, command initiators should not issue two or more
WR commands to a single LDEV, unless the storage system can simultaneously receive
commands for each independent (initiator number * LDEV number) combination.
(Diagram: multiple command processes issue RD/WR commands through HORCM (CCI) to the command device on the Hitachi RAID storage system.)
Figure 2.34 HORCM and Command Issue Process
2.8.7.2 Flow of Commands
Figure 2.35 shows the flow of RD/WR command control in a specified LBA#.
(Diagram: on the host side, Write() places the write/read processing code and input parameters into the command area of the LDEV (Target#, LUN#); the Hitachi RAID side edits the data, and the host scans the edited data with Read().)
Figure 2.35 Flow of Command Issue
2.8.7.3 Issuing Commands for LDEV(s) within a LUSE Device
A LUSE device is a group of LDEVs regarded as a single logical unit. Since it is necessary to
know about the configuration of the LDEVs when issuing a command, a new command is
used. This command specifies a target LU and acquires LDEV configuration data
(see Figure 2.36).
(Diagram: a target LU (Port#, SCSI ID#, LU#) composed of LDEV# n, n+1, and n+2; the command area begins at the initial LBA of the special LDEV space.)
Figure 2.36 LUSE Device and Command Issue
2.8.8 Logical DKC per 64K LDEVs
The Universal Storage Platform V/VM controller manages internal LDEV numbers as a four-
byte data type in order to support more than 64K LDEVs. Because the LDEV number for the
host interface is defined as a two-byte data type, the USP V/VM implements the concept of
the logical DKC (LDKC) in order to maintain the compatibility of this host interface and to
make operation possible for more than 64K LDEVs without changing the host interface.
(Diagram: the command device on the USP V (2 × 64K LDEVs) mediates between the host view (LDKC# + 2-byte LDEV#, e.g., for Inquiry) and the internal 4-byte LDEV#; the conversions (c), (d), and (e) are described below.)
Figure 2.37 Relation between LDEVs and Command Device on LDKC
(c) Converting to LDKC# and 2-byte LDEV#: returns the LDEV number to the host by
converting the internal 4-byte LDEV# to LDKC# and a 2-byte LDEV#.
LDKC# (0 or 1 for 128K LDEVs) = internal DKC LDEV# / 64K
LDEV# for the host = internal DKC LDEV# % 64K
(d) Converting to a 4-byte LDEV#: converts LDKC# and a 2-byte LDEV# to the internal
4-byte LDEV#.
internal DKC LDEV# = 64K * LDKC# (0 or 1 for 128K LDEVs) + LDEV# from the host
(e) Filtering to a 2-byte LDEV#: returns a 2-byte LDEV# to the host by filtering the 4-byte
LDEV# via the command device.
LDEV# for the host = internal DKC LDEV# - 64K * LDKC# (0 or 1 for 128K LDEVs)
If 0 ≤ LDEV# for the host < 64K, the LDEV# for the host is returned; otherwise NO LDEV is
returned.
Restrictions:
„
For TrueCopy Async, you cannot create a CT group across LDKCs.
„
For ShadowImage and COW Snapshot, you cannot create a paired volume across LDKCs.
„
You must configure LUSE/POOL/JNL using multiple LDEVs within the same LDKC.
2.8.9 Command Device Guarding
In a customer environment, the command device may be accessed unintentionally by a
maintenance program on a Solaris server (for example, through incorrect operation by UNIX
server maintenance personnel), after which the usable instances are exhausted. As a result,
CCI instances can no longer start on any server except the one that caused the problem.
Because the command device is visible to maintenance personnel as an ordinary device file,
it needs protection against this kind of human error.
Therefore, the RAID firmware (for the command device) and CCI support a protection
mechanism to guard against such access.
2.8.9.1 Guarding Method
Previously, assignment of an instance via the command device was a ONE-phase operation:
as shown in Figure 2.38, RAID Manager issued a Read (instance request), the command
device marked the entry as allocated in its instance table and returned the LBA, and RAID
Manager then issued a Write with the acquired LBA to get the configuration. Consequently,
if the special allocation area for instances is read through a maintenance tool or similar
program, the command device interprets each read as an instance assignment from RAID
Manager, and the instance table eventually fills up.
RAID Manager now uses TWO phases: it reads to acquire a usable LBA, and then writes with
the acquired LBA in the attaching sequence to the command device. This allows the
command device to confirm whether an access really was an instance assignment from RAID
Manager, by detecting and recording two status bits in the instance assignment table.
Figure 2.38 Current Assignment Sequence
Hitachi Command Control Interface (CCI) User and Reference Guide
69
(Diagram: RAID Manager on the host issues a Read (instance request); the command device sets "temporary allocation (1 0)" in its instance table and returns an LBA. RAID Manager then issues a Write with the acquired LBA (to get the configuration), and the command device sets "actual allocation (1 1)".)
Figure 2.39 Improved Assignment Sequence
The command device assigns an instance in TWO phases, recording "temporary allocation
(1 0)" and then "actual allocation (1 1)" in the instance assignment table. If the command
device is attacked, the instance assignment table fills with entries in the "temporary
allocation (1 0)" state; the command device then detects that the instance table is full,
clears all "temporary allocation (1 0)" entries, and re-assigns the required instances
automatically. Service personnel therefore do not need to power the command device
OFF/ON to clear the instance table.
2.8.9.2 Verifying the RM Instance Number
RAID Manager provides a way to verify the numbers of "temporary allocation (1 0)" and
"actual allocation (1 1)" entries in the instance table, so that users can confirm the validity
of the RM instance number they are using. The horcctl -DI command shows the number of
RM instances as follows when HORCM has been started.
Example without command device security:
# horcctl -DI
Current control device = /dev/rdsk/c0t0d0 AI = 14 TI = 0 CI = 1
Example with command device security:
# horcctl -DI
Current control device = /dev/rdsk/c0t0d0* AI = 14 TI = 0 CI = 1
AI : number of actual instances in use
TI : number of temporary instances in RAID
CI : number of instances using the current (own) instance
2.8.10 CCI Software Files
The CCI software product consists of files supplied to the user, log files created internally,
and files created by the user. These files are stored on the local disk in the server machine.
Table 2.10 lists the files for UNIX-based systems, Table 2.11 lists the files for Windows-based
systems, and Table 2.12 lists the files for OpenVMS®-based systems.
Table 2.10 CCI Files for UNIX-based Systems
No.  Title                             File name                    Command name      Mode   User*  Group
01   HORCM                             /etc/horcmgr                 horcmd            0544   root   sys
02   HORCM_CONF                        /HORCM/etc/horcm.conf        –                 0444   root   sys
03   Takeover                          /usr/bin/horctakeover        horctakeover      0544   root   sys
04   Accessibility check               /usr/bin/paircurchk          paircurchk        0544   root   sys
05   Pair generation                   /usr/bin/paircreate          paircreate        0544   root   sys
06   Pair splitting                    /usr/bin/pairsplit           pairsplit         0544   root   sys
07   Pair resynchronization            /usr/bin/pairresync          pairresync        0544   root   sys
08   Event waiting                     /usr/bin/pairevtwait         pairevtwait       0544   root   sys
09   Error notification                /usr/bin/pairmon             pairmon           0544   root   sys
10   Volume check                      /usr/bin/pairvolchk          pairvolchk        0544   root   sys
11   Pair configuration confirmation   /usr/bin/pairdisplay         pairdisplay       0544   root   sys
12   RAID scanning                     /usr/bin/raidscan            raidscan          0544   root   sys
13   RAID activity reporting           /usr/bin/raidar              raidar            0544   root   sys
14   Connection confirming             /usr/bin/raidqry             raidqry           0544   root   sys
15   Trace control                     /usr/bin/horcctl             horcctl           0544   root   sys
16   HORCM activation script           /usr/bin/horcmstart.sh       horcmstart.sh     0544   root   sys
17   HORCM shutdown script             /usr/bin/horcmshutdown.sh    horcmshutdown.sh  0544   root   sys
18   Connection confirming             /HORCM/usr/bin/inqraid       --                0544   root   sys
19   Synchronous waiting               /usr/bin/pairsyncwait        pairsyncwait      0544   root   sys
20   Configuration file making         /HORCM/usr/bin/mkconf.sh     --                0544   root   sys
21   Database Validator setting        /usr/bin/raidvchkset         raidvchkset       0544   root   sys
22   Database Validator confirmation   /usr/bin/raidvchkdsp         raidvchkdsp       0544   root   sys
23   Database Validator confirmation   /usr/bin/raidvchkscan        raidvchkscan      0544   root   sys
*Note: For information and instructions on changing the UNIX user for the CCI software,
please see section 3.3.4.
Table 2.11 CCI Files for Windows-based Systems
No.   Title                             File name                           Command name
001   HORCM                             \HORCM\etc\horcmgr.exe              horcmd
002   HORCM_CONF                        \HORCM\etc\horcm.conf               -
003   Takeover                          \HORCM\etc\horctakeover.exe         horctakeover
004   Accessibility check               \HORCM\etc\paircurchk.exe           paircurchk
005   Pair generation                   \HORCM\etc\paircreate.exe           paircreate
006   Pair split                        \HORCM\etc\pairsplit.exe            pairsplit
007   Pair re-synchronization           \HORCM\etc\pairresync.exe           pairresync
008   Event waiting                     \HORCM\etc\pairevtwait.exe          pairevtwait
009   Error notification                \HORCM\etc\pairmon.exe              pairmon
010   Volume checking                   \HORCM\etc\pairvolchk.exe           pairvolchk
011   Pair configuration confirmation   \HORCM\etc\pairdisplay.exe          pairdisplay
012   RAID scanning                     \HORCM\etc\raidscan.exe             raidscan
013   RAID activity reporting           \HORCM\etc\raidar.exe               raidar
014   Connection confirmation           \HORCM\etc\raidqry.exe              raidqry
015   Trace control                     \HORCM\etc\horcctl.exe              horcctl
016   HORCM activation script           \HORCM\etc\horcmstart.exe           horcmstart
017   HORCM shutdown script             \HORCM\etc\horcmshutdown.exe        horcmshutdown
018   Synchronous waiting               \HORCM\etc\pairsyncwait.exe         pairsyncwait
019   Connection confirmation           \HORCM\etc\inqraid.exe              inqraid
020   Configuration file making         \HORCM\Tool\mkconf.exe              mkconf
021   Oracle Validation setting         \HORCM\etc\raidvchkset              raidvchkset
022   Oracle Validation confirmation    \HORCM\etc\raidvchkdsp              raidvchkdsp
023   Oracle Validation confirmation    \HORCM\etc\raidvchkscan             raidvchkscan
024   Tool                              \HORCM\Tool\chgacl.exe              chgacl
025   Tool                              \HORCM\Tool\svcexe.exe              svcexe
026   Sample script for svcexe          \HORCM\Tool\HORCM0_run.txt          -
027   Tool                              \HORCM\Tool\TRCLOG.bat              TRCLOG.bat
028   Takeover                          \HORCM\usr\bin\horctakeover.exe     horctakeover
029   Accessibility check               \HORCM\usr\bin\paircurchk.exe       paircurchk
030   Pair generation                   \HORCM\usr\bin\paircreate.exe       paircreate
031   Pair split                        \HORCM\usr\bin\pairsplit.exe        pairsplit
032   Pair re-synchronization           \HORCM\usr\bin\pairresync.exe       pairresync
033   Event waiting                     \HORCM\usr\bin\pairevtwait.exe      pairevtwait
034   Volume check                      \HORCM\usr\bin\pairvolchk.exe       pairvolchk
035   Synchronous waiting               \HORCM\usr\bin\pairsyncwait.exe     pairsyncwait
036   Pair configuration confirmation   \HORCM\usr\bin\pairdisplay.exe      pairdisplay
037   RAID scanning                     \HORCM\usr\bin\raidscan.exe         raidscan
038   Connection confirmation           \HORCM\usr\bin\raidqry.exe          raidqry
039   Oracle Validation setting         \HORCM\usr\bin\raidvchkset          raidvchkset
040   Oracle Validation confirmation    \HORCM\usr\bin\raidvchkdsp          raidvchkdsp
041   Oracle Validation confirmation    \HORCM\usr\bin\raidvchkscan         raidvchkscan
Notes:
„
The \HORCM\etc\ commands are used from the console window. If these commands are
executed without an argument, the interactive mode will start up.
„
The \HORCM\usr\bin commands have no console window, and can therefore be used from
the application.
„
The \HORCM\usr\bin commands do not support the directory mounted volumes in
subcommands.
Table 2.12 CCI Files for OpenVMS®-based Systems
No.   Title                             File name                                   Command name      User
001   HORCM                             $ROOT:[HORCM.etc]horcmgr.exe                horcmd            sys
002   HORCM_CONF                        $ROOT:[HORCM.etc]horcm.conf                 –                 sys
003   Takeover                          $ROOT:[HORCM.usr.bin]horctakeover.exe       horctakeover      sys
004   Volume Accessibility check        $ROOT:[HORCM.usr.bin]paircurchk.exe         paircurchk        sys
005   Pair generation                   $ROOT:[HORCM.usr.bin]paircreate.exe         paircreate        sys
006   Pair splitting                    $ROOT:[HORCM.usr.bin]pairsplit.exe          pairsplit         sys
007   Pair re-synchronization           $ROOT:[HORCM.usr.bin]pairresync.exe         pairresync        sys
008   Event waiting                     $ROOT:[HORCM.usr.bin]pairevtwait.exe        pairevtwait       sys
009   Error notification                $ROOT:[HORCM.usr.bin]pairmon.exe            pairmon           sys
010   Volume checking                   $ROOT:[HORCM.usr.bin]pairvolchk.exe         pairvolchk        sys
011   Pair configuration confirmation   $ROOT:[HORCM.usr.bin]pairdisplay.exe        pairdisplay       sys
012   RAID scan                         $ROOT:[HORCM.usr.bin]raidscan.exe           raidscan          sys
013   RAID activity report              $ROOT:[HORCM.usr.bin]raidar.exe             raidar            sys
014   Connection confirmation           $ROOT:[HORCM.usr.bin]raidqry.exe            raidqry           sys
015   Trace control                     $ROOT:[HORCM.usr.bin]horcctl.exe            horcctl           sys
016   HORCM activation script           $ROOT:[HORCM.usr.bin]horcmstart.exe         horcmstart.sh     sys
017   HORCM shutdown script             $ROOT:[HORCM.usr.bin]horcmshutdown.exe      horcmshutdown.sh  sys
018   Connection confirmation           $ROOT:[HORCM.usr.bin]inqraid.exe            -                 sys
019   Synchronous waiting               $ROOT:[HORCM.usr.bin]pairsyncwait.exe       pairsyncwait      sys
020   Configuration file making         $ROOT:[HORCM.usr.bin]mkconf.exe             -                 sys
021   Database Validator setting        $ROOT:[HORCM.usr.bin]raidvchkset.exe        raidvchkset       sys
022   Database Validator confirmation   $ROOT:[HORCM.usr.bin]raidvchkdsp.exe        raidvchkdsp       sys
023   Database Validator confirmation   $ROOT:[HORCM.usr.bin]raidvchkscan.exe       raidvchkscan      sys
024   Sample file for horcmstart        $ROOT:[HORCM]loginhorcm*.com                -                 sys
025   Sample file for horcmstart        $ROOT:[HORCM]runhorcm*.com                  -                 sys
Notes:
„
$ROOT is defined as SYS$POSIX_ROOT. $POSIX_ROOT is necessary when using the C RTL.
„
The user name for OpenVMS is "System".
2.8.11 Log and Trace Files
The CCI software (HORCM) and Hitachi TrueCopy and ShadowImage commands maintain
start-up log files, execution log files, and trace files which can be used to identify the
causes of errors and keep records of the status transition history of the paired volumes.
Please refer to Appendix A for a complete description of the CCI log and trace files.
2.8.12 User-Created Files
Script Files. CCI supports scripting to provide automated and unattended copy operations. A
CCI script contains a list of CCI commands which describes a series of TrueCopy and/or
ShadowImage operations. The scripted commands for UNIX-based platforms are defined in a
shell script file. The scripted commands for Windows-based platforms are defined in a text
file. The host reads the script file and sends the commands to the command device to
execute the TrueCopy/ShadowImage operations automatically. The CCI scripts are:
„
HORCM startup script (horcmstart.sh, horcmstart.exe): a script that sets the
environment variables needed by HORCM (e.g., HORCM_CONF, HORCM_LOG,
HORCM_LOGS) and starts HORCM (/etc/horcmgr).
„
HORCM shutdown script (horcmshutdown.sh, horcmshutdown.exe): a script for stopping
HORCM (/etc/horcmgr).
„
HA control script: a script for executing takeover processing automatically when the
cluster manager (CM) detects a server error.
When constructing the HORCM environment, the system administrator should make a copy of
the HORCM_CONF file. The copied file should be set according to the system environment
and registered as the following file (* is the instance number):
UNIX-based systems: /etc/horcm.conf or /etc/horcm*.conf
Windows-based systems: \WINNT\horcm.conf or \WINNT\horcm*.conf
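For example, a typical sequence on a UNIX host looks as follows (a sketch; instance number 0 is a hypothetical choice, and the Windows .exe scripts take the same instance argument):

# cp /HORCM/etc/horcm.conf /etc/horcm0.conf    (then edit the copy for the local system)
# horcmstart.sh 0                              (start HORCM instance 0)
# horcmshutdown.sh 0                           (stop HORCM instance 0)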
2.9 Configuration Definition File
This section shows sample configurations, the configuration definition file(s) for each
configuration, and examples of CCI command use for each configuration.
The command device is defined using the system raw device name (character-type device
file name). For example, the command devices for Figure 2.40 would be:
„
HP-UX:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1
„
Solaris:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1s2
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1s2
Note: For Solaris operations with CCI version 01-09-03/04 and higher, the command
device does not need to be labeled during the format command.
„
AIX:
HORCM_CMD of HOSTA = /dev/rhdiskXX
HORCM_CMD of HOSTB = /dev/rhdiskXX
where XX = device number assigned by AIX
„
Tru64 UNIX:
HORCM_CMD of HOSTA = /dev/rrzbXXc
HORCM_CMD of HOSTB = /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX
„
DYNIX/ptx®:
HORCM_CMD of HOSTA = /dev/rdsk/sdXX
HORCM_CMD of HOSTB = /dev/rdsk/sdXX
where XX = device number assigned by DYNIX/ptx®
„
Windows 2000/2003/2008:
HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#
(an example with concrete values follows this list)
„
Windows NT®:
HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#
„
Linux, zLinux:
HORCM_CMD of HOSTA = /dev/sdX
HORCM_CMD of HOSTB = /dev/sdX
where X = device number assigned by Linux, zLinux
„
IRIX:
HORCM_CMD for HOSTA ... /dev/rdsk/dks0d0l1vol or /dev/rdsk/node_wwn/lun1vol/c0p0
HORCM_CMD for HOSTB ... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
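For example, a Windows command device entry might be written as follows (a sketch with hypothetical values: storage serial# 30053, command device LDEV# 258, port CL1-A):

HORCM_CMD
#dev_name
\\.\CMD-30053-258-CL1-A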
Figure 2.40 Hitachi TrueCopy Remote Configuration Example
Example of CCI commands with HOSTA:
„
Designate a group name (Oradb) for the case of a local host P-VOL:
# paircreate -g Oradb -f never -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in Figure 2.40).
„
Designate a volume name (oradev1) for the case of a local host P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vl
This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in Figure 2.40).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#),  Seq#, LDEV#..P/S, Status, Fence, Seq#, P-LDEV# M
oradb oradev1(L)  (CL1-A, 1,1) 30053  18...P-VOL  COPY    NEVER, 30054  19     -
oradb oradev1(R)  (CL1-D, 2,1) 30054  19...S-VOL  COPY    NEVER, -----  18     -
oradb oradev2(L)  (CL1-A, 1,2) 30053  20...P-VOL  COPY    NEVER, 30054  21     -
oradb oradev2(R)  (CL1-D, 2,2) 30054  21...S-VOL  COPY    NEVER, -----  20     -
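After the paired volumes reach PAIR status, the same group can be split and later resynchronized with the corresponding commands (a sketch of typical follow-on operations; see the command descriptions elsewhere in this guide for options):

# pairsplit -g Oradb
# pairresync -g Oradb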
Example of CCI commands with HOSTB:
„
Designate a group name for the case of a remote host P-VOL:
# paircreate -g Oradb -f never -vr
This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in Figure 2.40).
„
Designate a volume name (oradev1) for the case of a remote host P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vr
This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in Figure 2.40).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#),  Seq#, LDEV#..P/S, Status, Fence, Seq#, P-LDEV# M
oradb oradev1(L)  (CL1-D, 2,1) 30054  19...S-VOL  COPY    NEVER, -----  18     -
oradb oradev1(R)  (CL1-A, 1,1) 30053  18...P-VOL  COPY    NEVER, 30054  19     -
oradb oradev2(L)  (CL1-D, 2,2) 30054  21...S-VOL  COPY    NEVER, -----  20     -
oradb oradev2(R)  (CL1-A, 1,2) 30053  20...P-VOL  COPY    NEVER, 30054  21     -
The command device is defined using the system raw device name (character-type device
file name). For example, the command devices for Figure 2.41 would be:
„
HP-UX:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1
„
Solaris:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1s2
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1s2
Note: For Solaris operations with CCI version 01-09-03/04 and higher, the command
device does not need to be labeled during the format command.
„
AIX:
HORCM_CMD of HOSTA = /dev/rhdiskXX
HORCM_CMD of HOSTB = /dev/rhdiskXX
where XX = device number assigned by AIX
„
Tru64 UNIX:
HORCM_CMD of HOSTA = /dev/rrzbXXc
HORCM_CMD of HOSTB = /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX
„
DYNIX/ptx®:
HORCM_CMD of HOSTA = /dev/rdsk/sdXX
HORCM_CMD of HOSTB = /dev/rdsk/sdXX
where XX = device number assigned by DYNIX/ptx®
„
Windows 2008/2003/2000:
HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#
„
Windows NT:
HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#
„
Linux, zLinux:
HORCM_CMD of HOSTA = /dev/sdX
HORCM_CMD of HOSTB = /dev/sdX
where X = device number assigned by Linux, zLinux
„
IRIX:
HORCM_CMD for HOSTA ... /dev/rdsk/dks0d0l1vol or /dev/rdsk/node_wwn/lun1vol/c0p0
HORCM_CMD for HOSTB ... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
(Diagram: HOSTA (ip address HST1) and HOSTB (ip address HST2) are connected by LAN; each runs HORCM with its own configuration file and connects through a fibre-channel port (C0/C1) to ports CL1-A through CL1-D of the Hitachi RAID storage system. Group Oradb pairs oradev1 (T1,L1 P-VOL with T2,L1 S-VOL) and oradev2 (T1,L2 P-VOL with T2,L2 S-VOL); the command device is at T0,L1. Tx: Target ID, Lx: LUN.)
Note: Use of the command device by the user is not possible (the command device is established from the Remote Console PC or SVP).
Configuration file for HOSTA (/etc/horcm.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm

Configuration file for HOSTB (/etc/horcm.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST2          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-D   2          1
Oradb        oradev2    CL1-D   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm
Figure 2.41 Hitachi TrueCopy Local Configuration Example
Example of CCI commands with HOSTA:
„
Designate a group name (Oradb) for the case of a local host P-VOL:
# paircreate -g Oradb -f never -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in Figure 2.41).
„
Designate a volume name (oradev1) for the case of a local host P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vl
This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in Figure 2.41).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#),  Seq#, LDEV#..P/S, Status, Fence, Seq#, P-LDEV# M
oradb oradev1(L)  (CL1-A, 1,1) 30053  18..P-VOL   COPY    NEVER, 30053  19     -
oradb oradev1(R)  (CL1-D, 2,1) 30053  19..S-VOL   COPY    NEVER, -----  18     -
oradb oradev2(L)  (CL1-A, 1,2) 30053  20..P-VOL   COPY    NEVER, 30053  21     -
oradb oradev2(R)  (CL1-D, 2,2) 30053  21..S-VOL   COPY    NEVER, -----  20     -
Example of CCI commands with HOSTB:
„
Designate a group name for the case of a remote host P-VOL:
# paircreate -g Oradb -f never -vr
This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in Figure 2.41).
„
Designate a volume name (oradev1) for the case of a remote host P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vr
This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in Figure 2.41).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#),  Seq#, LDEV#..P/S, Status, Fence, Seq#, P-LDEV# M
oradb oradev1(L)  (CL1-D, 2,1) 30053  19..S-VOL   COPY    NEVER, -----  18     -
oradb oradev1(R)  (CL1-A, 1,1) 30053  18..P-VOL   COPY    NEVER, 30053  19     -
oradb oradev2(L)  (CL1-D, 2,2) 30053  21..S-VOL   COPY    NEVER, -----  20     -
oradb oradev2(R)  (CL1-A, 1,2) 30053  20..P-VOL   COPY    NEVER, 30053  21     -
The command device is defined using the system raw device name (character-type device
file name). The command device defined in the configuration definition file must be
established for each instance, following one of the examples below. If one command device
is shared between different instances on the same SCSI port, then up to 16 instances can be
used per command device. If this restriction is exceeded, use a different SCSI path for each
instance.
„
HP-UX:
HORCM_CMD of HORCMINST0 = /dev/rdsk/c0t0d1
HORCM_CMD of HORCMINST1 = /dev/rdsk/c1t0d1
„
Solaris:
HORCM_CMD of HORCMINST0 = /dev/rdsk/c0t0d1s2
HORCM_CMD of HORCMINST1 = /dev/rdsk/c1t0d1s2
Note: For Solaris operations with CCI version 01-09-03/04 and higher, the command
device does not need to be labeled during the format command.
„
AIX:
HORCM_CMD of HORCMINST0 = /dev/rhdiskXX
HORCM_CMD of HORCMINST1 = /dev/rhdiskXX
where XX = device number assigned by AIX
„
Tru64 UNIX:
HORCM_CMD of HORCMINST0 = /dev/rrzbXXc
HORCM_CMD of HORCMINST1 = /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX
„
DYNIX/ptx®:
HORCM_CMD of HORCMINST0 = /dev/rdsk/sdXX
HORCM_CMD of HORCMINST1 = /dev/rdsk/sdXX
where XX = device number assigned by DYNIX/ptx®
„
Windows 2008/2003/2000:
HORCM_CMD of HORCMINST0 = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HORCMINST1 = \\.\CMD-Ser#-ldev#-Port#
„
Windows NT:
HORCM_CMD of HORCMINST0 = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HORCMINST1 = \\.\CMD-Ser#-ldev#-Port#
„
Linux, zLinux:
HORCM_CMD of HORCMINST0 = /dev/sdX
HORCM_CMD of HORCMINST1 = /dev/sdX
where X = device number assigned by Linux, zLinux
„
IRIX:
HORCM_CMD for HOSTA (/etc/horcm0.conf)... /dev/rdsk/dks0d0l1vol or /dev/rdsk/node_wwn/lun1vol/c0p0
HORCM_CMD for HOSTA (/etc/horcm1.conf)... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
(Diagram: HOSTA (ip address HST1) runs two instances, HORCMINST0 and HORCMINST1, each with its own configuration file and fibre-channel port (C0/C1) to ports CL1-A through CL1-D of the Hitachi RAID storage system. Group Oradb pairs oradev1 (T1,L1 P-VOL with T2,L1 S-VOL) and oradev2 (T1,L2 P-VOL with T2,L2 S-VOL); the command device is at T0,L1. Tx: Target ID, Lx: LUN.)
Note: Use of the command device by the user is not possible (the command device is established from the Remote Console PC or SVP).
Configuration file for HORCMINST0 (horcm0.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm0    1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm1

Configuration file for HORCMINST1 (horcm1.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm1    1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-D   2          1
Oradb        oradev2    CL1-D   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm0
Figure 2.42 Hitachi TrueCopy Configuration Example for Two Instances
Example of CCI commands with Instance-0 on HOSTA:
„
When the command execution environment is not set, set the instance number.
For C shell: # setenv HORCMINST 0
For Windows: set HORCMINST=0
„
Designate a group name (Oradb) for the case of a local instance P-VOL:
# paircreate -g Oradb -f never -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in Figure 2.42).
„
Designate a volume name (oradev1) for the case of a local instance P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vl
This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in Figure 2.42).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#),  Seq#, LDEV#..P/S, Status, Fence, Seq#, P-LDEV# M
oradb oradev1(L)  (CL1-A, 1,1) 30053  18..P-VOL   COPY    NEVER, 30053  19     -
oradb oradev1(R)  (CL1-D, 2,1) 30053  19..S-VOL   COPY    NEVER, -----  18     -
oradb oradev2(L)  (CL1-A, 1,2) 30053  20..P-VOL   COPY    NEVER, 30053  21     -
oradb oradev2(R)  (CL1-D, 2,2) 30053  21..S-VOL   COPY    NEVER, -----  20     -
Example of CCI commands with Instance-1 on HOSTA:
„
When the command execution environment is not set, set the instance number.
For C shell: # setenv HORCMINST 1
For Windows: set HORCMINST=1
„
Designate a group name for the case of a remote instance P-VOL:
# paircreate -g Oradb -f never -vr
This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in Figure 2.42).
„
Designate a volume name (oradev1) for the case of a remote instance P-VOL:
# paircreate -g Oradb -d oradev1 -f never -vr
This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL1-D,T2,L1 for the configuration in Figure 2.42).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb
Group PairVol(L/R) (P,T#,L#),  Seq#, LDEV#..P/S, Status, Fence, Seq#, P-LDEV# M
oradb oradev1(L)  (CL1-D, 2,1) 30053  19..S-VOL   COPY    NEVER, -----  18     -
oradb oradev1(R)  (CL1-A, 1,1) 30053  18..P-VOL   COPY    NEVER, 30053  19     -
oradb oradev2(L)  (CL1-D, 2,2) 30053  21..S-VOL   COPY    NEVER, -----  20     -
oradb oradev2(R)  (CL1-A, 1,2) 30053  20..P-VOL   COPY    NEVER, 30053  21     -
The command device is defined using the system raw device name (character-type device
file name). For example, the command devices for Figure 2.43 would be:
„
HP-UX:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1
HORCM_CMD of HOSTC = /dev/rdsk/c1t0d1
HORCM_CMD of HOSTD = /dev/rdsk/c1t0d1
„
Solaris:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1s2
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1s2
HORCM_CMD of HOSTC = /dev/rdsk/c1t0d1s2
HORCM_CMD of HOSTD = /dev/rdsk/c1t0d1s2
Note: For Solaris operations with CCI version 01-09-03/04 and higher, the command
device does not need to be labeled during the format command.
„
AIX:
HORCM_CMD of HOSTA = /dev/rhdiskXX
HORCM_CMD of HOSTB = /dev/rhdiskXX
HORCM_CMD of HOSTC = /dev/rhdiskXX
HORCM_CMD of HOSTD = /dev/rhdiskXX
where XX = device number assigned by AIX
„
Tru64 UNIX:
HORCM_CMD of HOSTA = /dev/rrzbXXc
HORCM_CMD of HOSTB = /dev/rrzbXXc
HORCM_CMD of HOSTC = /dev/rrzbXXc
HORCM_CMD of HOSTD = /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX
„
DYNIX/ptx®:
HORCM_CMD of HOSTA = /dev/rdsk/sdXX
HORCM_CMD of HOSTB = /dev/rdsk/sdXX
HORCM_CMD of HOSTC = /dev/rdsk/sdXX
HORCM_CMD of HOSTD = /dev/rdsk/sdXX
where XX = device number assigned by DYNIX/ptx®
„
Windows 2008/2003/2000:
HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTC = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTD = \\.\CMD-Ser#-ldev#-Port#
„
Windows NT:
HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTC = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTD = \\.\CMD-Ser#-ldev#-Port#
„
Linux, zLinux:
HORCM_CMD of HOSTA = /dev/sdX
HORCM_CMD of HOSTB = /dev/sdX
HORCM_CMD of HOSTC = /dev/sdX
HORCM_CMD of HOSTD = /dev/sdX
where X = device number assigned by Linux, zLinux
„
IRIX:
HORCM_CMD for HOSTA ... /dev/rdsk/dks0d0l1vol or /dev/rdsk/node_wwn/lun1vol/c0p0
HORCM_CMD for HOSTB ... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
HORCM_CMD for HOSTC ... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
HORCM_CMD for HOSTD ... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
(Diagram: four hosts, HOSTA (HST1) through HOSTD (HST4), are connected by LAN; each runs HORCM with its own configuration file and connects through a fibre-channel port to ports CL1/CL2 of the Hitachi RAID storage system. Group Oradb (MU#0) pairs oradev1/oradev2 (T1,L1 and T1,L2 P-VOLs with T2,L1 and T2,L2 S-VOLs); group Oradb1 (MU#1) pairs oradev1-1/oradev1-2 with S-VOLs at T2,L1 and T2,L2, and group Oradb2 (MU#2) pairs oradev2-1/oradev2-2 with S-VOLs at T2,L1 and T2,L2. The command device is at T0,L1. Tx: Target ID, Lx: LUN.)
Note: Use of the command device by the user is not possible (the command device is established from the Remote Console PC or SVP).
Figure 2.43 ShadowImage Configuration Example (the configuration files follow)
Configuration file for HOSTA (/etc/horcm.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb        oradev1     CL1-A   1          1     0
Oradb        oradev2     CL1-A   1          2     0
Oradb1       oradev1-1   CL1-A   1          1     1
Oradb1       oradev1-2   CL1-A   1          2     1
Oradb2       oradev2-1   CL1-A   1          1     2
Oradb2       oradev2-2   CL1-A   1          2     2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm
Oradb1       HST3         horcm
Oradb2       HST4         horcm

Configuration file for HOSTB (/etc/horcm.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST2          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL2-B   2          1
Oradb        oradev2    CL2-B   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm

Configuration file for HOSTC (/etc/horcm.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST3          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb1       oradev1-1   CL2-C   2          1
Oradb1       oradev1-2   CL2-C   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb1       HST1         horcm

Configuration file for HOSTD (/etc/horcm.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST4          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb2       oradev2-1   CL2-D   2          1
Oradb2       oradev2-2   CL2-D   2          2

HORCM_INST
#dev_group   ip_address   service
Oradb2       HST1         horcm
Example of CCI commands with HOSTA (group Oradb):
„
When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
„
Designate a group name (Oradb) for the case of a local host P-VOL:
# paircreate -g Oradb -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in Figure 2.43).
„
Designate a volume name (oradev1) for the case of a local host P-VOL:
# paircreate -g Oradb -d oradev1 -vl
This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL2-B,T2,L1 for the configuration in Figure 2.43).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb
Group PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb oradev1(L)  (CL1-A, 1, 1 - 0)  30053  18..P-VOL   COPY   30053  20      -
oradb oradev1(R)  (CL2-B, 2, 1 - 0)  30053  20..S-VOL   COPY   -----  18      -
oradb oradev2(L)  (CL1-A, 1, 2 - 0)  30053  19..P-VOL   COPY   30053  21      -
oradb oradev2(R)  (CL2-B, 2, 2 - 0)  30053  21..S-VOL   COPY   -----  19      -
Example of CCI commands with HOSTB (group Oradb):
„
When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
„
Designate a group name for the case of a remote host P-VOL:
# paircreate -g Oradb -vr
This command creates pairs for all LUs assigned to group Oradb in the configuration
definition file (two pairs for the configuration in Figure 2.43).
„
Designate a volume name (oradev1) for the case of a remote host P-VOL:
# paircreate -g Oradb -d oradev1 -vr
This command creates pairs for all LUs designated as oradev1 in the configuration
definition file (CL1-A,T1,L1 and CL2-B,T2,L1 for the configuration in Figure 2.43).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb
Group PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb oradev1(L)  (CL2-B, 2, 1 - 0)  30053  20..S-VOL   COPY   -----  18      -
oradb oradev1(R)  (CL1-A, 1, 1 - 0)  30053  18..P-VOL   COPY   30053  20      -
oradb oradev2(L)  (CL2-B, 2, 2 - 0)  30053  21..S-VOL   COPY   -----  19      -
oradb oradev2(R)  (CL1-A, 1, 2 - 0)  30053  19..P-VOL   COPY   30053  21      -
Example of CCI commands with HOSTA (group Oradb1):
„
When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
„
Designate a group name (Oradb1) for the case of a local host P-VOL:
# paircreate -g Oradb1 -vl
This command creates pairs for all LUs assigned to group Oradb1 in the configuration
definition file (two pairs for the configuration in Figure 2.43).
„
Designate a volume name (oradev1-1) for the case of a local host P-VOL:
# paircreate -g Oradb1 -d oradev1-1 -vl
This command creates pairs for all LUs designated as oradev1-1 in the configuration
definition file (CL1-A,T1,L1 and CL2-C,T2,L1 for the configuration in Figure 2.43).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb1
Group  PairVol(L/R)  (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb1 oradev1-1(L)  (CL1-A, 1, 1 - 1) 30053  18..P-VOL   COPY   30053  22      -
oradb1 oradev1-1(R)  (CL2-C, 2, 1 - 0) 30053  22..S-VOL   COPY   -----  18      -
oradb1 oradev1-2(L)  (CL1-A, 1, 2 - 1) 30053  19..P-VOL   COPY   30053  23      -
oradb1 oradev1-2(R)  (CL2-C, 2, 2 - 0) 30053  23..S-VOL   COPY   -----  19      -
Example of CCI commands with HOSTC (group Oradb1):
„
When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
„
Designate a group name for the case of a remote host P-VOL:
# paircreate -g Oradb1 -vr
This command creates pairs for all LUs assigned to group Oradb1 in the configuration
definition file (two pairs for the configuration in Figure 2.43).
„
Designate a volume name (oradev1-1) for the case of a remote host P-VOL:
# paircreate -g Oradb1 -d oradev1-1 -vr
This command creates pairs for all LUs designated as oradev1-1 in the configuration
definition file (CL1-A,T1,L1 and CL2-C,T2,L1 for the configuration in Figure 2.43).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb1
Group  PairVol(L/R)  (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb1 oradev1-1(L)  (CL2-C, 2, 1 - 0) 30053  22..S-VOL   COPY   -----  18      -
oradb1 oradev1-1(R)  (CL1-A, 1, 1 - 1) 30053  18..P-VOL   COPY   30053  22      -
oradb1 oradev1-2(L)  (CL2-C, 2, 2 - 0) 30053  23..S-VOL   COPY   -----  19      -
oradb1 oradev1-2(R)  (CL1-A, 1, 2 - 1) 30053  19..P-VOL   COPY   30053  23      -
Example of CCI commands with HOSTA (group Oradb2):
„
When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
„
Designate a group name (Oradb2) for the case of a local host P-VOL:
# paircreate -g Oradb2 -vl
This command creates pairs for all LUs assigned to group Oradb2 in the configuration
definition file (two pairs for the configuration in Figure 2.43).
„
Designate a volume name (oradev2-1) for the case of a local host P-VOL:
# paircreate -g Oradb2 -d oradev2-1 -vl
This command creates pairs for all LUs designated as oradev2-1 in the configuration
definition file (CL1-A,T1,L1 and CL2-D,T2,L1 for the configuration in Figure 2.43).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb2
Group  PairVol(L/R)  (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb2 oradev2-1(L)  (CL1-A, 1, 1 - 2) 30053  18..P-VOL   COPY   30053  24      -
oradb2 oradev2-1(R)  (CL2-D, 2, 1 - 0) 30053  24..S-VOL   COPY   -----  18      -
oradb2 oradev2-2(L)  (CL1-A, 1, 2 - 2) 30053  19..P-VOL   COPY   30053  25      -
oradb2 oradev2-2(R)  (CL2-D, 2, 2 - 0) 30053  25..S-VOL   COPY   -----  19      -
Example of CCI commands with HOSTD (group Oradb2):
„
When the command execution environment is not set, set the HORCC_MRCF
environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
„
Designate a group name for the case of a remote host P-VOL:
# paircreate -g Oradb2 -vr
This command creates pairs for all LUs assigned to group Oradb2 in the configuration
definition file (two pairs for the configuration in Figure 2.43).
„
Designate a volume name (oradev2-1) for the case of a remote host P-VOL:
# paircreate -g Oradb2 -d oradev2-1 -vr
This command creates pairs for all LUs designated as oradev2-1 in the configuration
definition file (CL1-A,T1,L1 and CL2-D,T2,L1 for the configuration in Figure 2.43).
„
Designate a group name and display the pair status:
# pairdisplay -g Oradb2
Group  PairVol(L/R)  (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb2 oradev2-1(L)  (CL2-D, 2, 1 - 0) 30053  24..S-VOL   COPY   -----  18      -
oradb2 oradev2-1(R)  (CL1-A, 1, 1 - 2) 30053  18..P-VOL   COPY   30053  24      -
oradb2 oradev2-2(L)  (CL2-D, 2, 2 - 0) 30053  25..S-VOL   COPY   -----  19      -
oradb2 oradev2-2(R)  (CL1-A, 1, 2 - 2) 30053  19..P-VOL   COPY   30053  25      -
The command device is defined using the system raw device name (character-type device
file name). The command device defined in the configuration definition file must be
established for each instance, following one of the examples below. If one command device
is shared between different instances on the same SCSI port, then up to 16 instances can be
used per command device. If this restriction is exceeded, use a different SCSI path for each
instance.
„
HP-UX:
HORCM_CMD of HORCMINST0 = /dev/rdsk/c0t0d1
HORCM_CMD of HORCMINST1 = /dev/rdsk/c1t0d1
„
Solaris:
HORCM_CMD of HORCMINST0 = /dev/rdsk/c0t0d1s2
HORCM_CMD of HORCMINST1 = /dev/rdsk/c1t0d1s2
Note: For Solaris operations with CCI version 01-09-03/04 and higher, the command
device does not need to be labeled during the format command.
„
AIX:
HORCM_CMD of HORCMINST0 = /dev/rhdiskXX
HORCM_CMD of HORCMINST1 = /dev/rhdiskXX
where XX = device number assigned by AIX
„
Tru64 UNIX:
HORCM_CMD of HORCMINST0 = /dev/rrzbXXc
HORCM_CMD of HORCMINST1 = /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX
„
DYNIX/ptx®:
HORCM_CMD of HORCMINST0 = /dev/rdsk/sdXX
HORCM_CMD of HORCMINST1 = /dev/rdsk/sdXX
where XX = device number assigned by DYNIX/ptx®
„
Windows 2008/2003/2000:
HORCM_CMD of HORCMINST0 = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HORCMINST1 = \\.\CMD-Ser#-ldev#-Port#
„
Windows NT:
HORCM_CMD of HORCMINST0 = \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HORCMINST1 = \\.\CMD-Ser#-ldev#-Port#
„
Linux, zLinux:
HORCM_CMD of HORCMINST0 = /dev/sdX
HORCM_CMD of HORCMINST1 = /dev/sdX
where X = device number assigned by Linux, zLinux
„
IRIX:
HORCM_CMD for HOSTA (/etc/horcm0.conf)... /dev/rdsk/dks0d0l1vol or /dev/rdsk/node_wwn/lun1vol/c0p0
HORCM_CMD for HOSTA (/etc/horcm1.conf)... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
(Diagram: HOSTA (HST1) runs HORCMINST0 and HORCMINST1, each with its own configuration file and fibre-channel port (C0/C1) to ports CL1-A through CL1-D of the Hitachi RAID storage system. Group Oradb pairs the P-VOLs at T1,L1/T1,L2 with the S/P-VOLs at T2,L1/T2,L2; cascaded group Oradb1 (MU#1, oradev11/oradev12) pairs those S/P-VOLs with the S-VOLs at T3,L1/T3,L2, and group Oradb2 (MU#2, oradev21/oradev22) is defined for T4,L1/T4,L2 (SMPL). The command device is at T0,L1.)
Configuration file for HOSTA (/etc/horcm0.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm0    1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-A   1          1     0
Oradb        oradev2    CL1-A   1          2     0
Oradb1       oradev11   CL1-D   3          1     0
Oradb1       oradev12   CL1-D   3          2     0
Oradb2       oradev21   CL1-D   4          1     0
Oradb2       oradev22   CL1-D   4          2     0

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm1
Oradb1       HST1         horcm1
Oradb2       HST1         horcm1

Configuration file for HOSTA (/etc/horcm1.conf):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm1    1000         3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0
Oradb        oradev2    CL1-D   2          2     0
Oradb1       oradev11   CL1-D   2          1     1
Oradb1       oradev12   CL1-D   2          2     1
Oradb2       oradev21   CL1-D   2          1     2
Oradb2       oradev22   CL1-D   2          2     2

HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm0
Oradb1       HST1         horcm0
Oradb2       HST1         horcm0
Figure 2.44 ShadowImage Configuration Example with Cascade Pairs
Example of CCI commands with Instance-0 on HOSTA:
„
When the command execution environment is not set, set the instance number.
For C shell:
# setenv HORCMINST 0
# setenv HORCC_MRCF 1
For Windows:
set HORCMINST=0
set HORCC_MRCF=1
„
Designate a group name (Oradb) for the case of a local instance P-VOL:
# paircreate -g Oradb -vl
# paircreate -g Oradb1 -vr
These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the
configuration definition file (four pairs for the configuration in Figure 2.44).
„
Designate a group name and display the pair status:
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L) (CL1-A , 1, 1-0) 30053 266.. P-VOL PAIR, 30053 268 -
oradb oradev1(R) (CL1-D , 2, 1-0) 30053 268.. S-VOL PAIR, ----- 266 -
oradb1 oradev11(R) (CL1-D , 2, 1-1) 30053 268.. P-VOL PAIR, 30053 270 -
oradb2 oradev21(R) (CL1-D , 2, 1-2) 30053 268.. SMPL ----, ----- ---- -
oradb oradev2(L) (CL1-A , 1, 2-0) 30053 267.. P-VOL PAIR, 30053 269 -
oradb oradev2(R) (CL1-D , 2, 2-0) 30053 269.. S-VOL PAIR, ----- 267 -
oradb1 oradev12(R) (CL1-D , 2, 2-1) 30053 269.. P-VOL PAIR, 30053 271 -
oradb2 oradev22(R) (CL1-D , 2, 2-2) 30053 269.. SMPL ----, ----- ---- -
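If a script needs to wait for the paircreate operations to finish before the pairs are
used, the pairevtwait command can be used. This is a sketch only; the timeout value of 300
is illustrative:
# pairevtwait -g Oradb -s pair -t 300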
Example of CCI commands with Instance-1 on HOSTA:
„ When the command execution environment is not set, set the instance number.
For C shell:
# setenv HORCMINST 1
# setenv HORCC_MRCF 1
For Windows:
set HORCMINST=1
set HORCC_MRCF=1
„ Designate a group name (Oradb) and designate the remote instance volume as the P-VOL.
# paircreate -g Oradb -vr
# paircreate -g Oradb1 -vl
These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the
configuration definition file (four pairs for the configuration in Figure 2.44).
„ Designate a group name and display the pair status.
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L) (CL1-D , 2, 1-0)30053 268..S-VOL PAIR,----- 266 -
oradb1 oradev11(L) (CL1-D , 2, 1-1)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(L) (CL1-D , 2, 1-2)30053 268..SMPL ----,----- ---- -
oradb oradev1(R) (CL1-A , 1, 1-0)30053 266..P-VOL PAIR,30053 268 -
oradb oradev2(L) (CL1-D , 2, 2-0)30053 269..S-VOL PAIR,----- 267 -
oradb1 oradev12(L) (CL1-D , 2, 2-1)30053 269..P-VOL PAIR,30053 271 -
oradb2 oradev22(L) (CL1-D , 2, 2-2)30053 269..SMPL ----,----- ---- -
oradb oradev2(R) (CL1-A , 1, 2-0)30053 267..P-VOL PAIR,30053 269 -
The command device is defined using the system raw device name (character-type device
file name). The command device defined in the configuration definition file must be
established for each instance, as shown in the following examples. If one command device
is shared by different instances on the same SCSI port, then up to 16 instances can use
that command device. If this restriction is exceeded, then use a different SCSI path for
each instance.
„ HP-UX:
HORCM_CMD of HOSTA (/etc/horcm.conf) ... /dev/rdsk/c0t0d1
HORCM_CMD of HOSTB (/etc/horcm.conf) ... /dev/rdsk/c1t0d1
HORCM_CMD of HOSTB (/etc/horcm0.conf) ... /dev/rdsk/c1t0d1
„ Solaris:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rdsk/c0t0d1s2
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rdsk/c1t0d1s2
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... /dev/rdsk/c1t0d1s2
Note: For Solaris operations with CCI version 01-09-03/04 and higher, the command
device does not need to be labeled during the format command.
„ AIX:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rhdiskXX
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rhdiskXX
HORCM_CMD of HOSTB(/etc/horcm0.conf)... /dev/rhdiskXX
where XX = device number assigned by AIX
„ Tru64 UNIX:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rrzbXXc
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rrzbXXc
HORCM_CMD of HOSTB(/etc/horcm0.conf)... /dev/rrzbXXc
where XX = device number assigned by Tru64 UNIX
„ DYNIX/ptx®:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rdsk/sdXX
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rdsk/sdXX
HORCM_CMD of HOSTB(/etc/horcm0.conf)... /dev/rdsk/sdXX
where XX = device number assigned by DYNIX/ptx®
„ Windows 2008/2003/2000:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... \\.\CMD-Ser#-ldev#-Port#
„ Windows NT:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... \\.\CMD-Ser#-ldev#-Port#
„ Linux, zLinux:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/sdX
HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/sdX
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... /dev/sdX
where X = device number assigned by Linux, zLinux
„ IRIX:
HORCM_CMD for HOSTA (/etc/horcm.conf) ... /dev/rdsk/dks0d0l1vol or
/dev/rdsk/node_wwn/lun1vol/c0p0
HORCM_CMD for HOSTB (/etc/horcm.conf) ... /dev/rdsk/dks1d0l1vol or
/dev/rdsk/node_wwn/lun1vol/c1p0
HORCM_CMD for HOSTB (/etc/horcm0.conf)... /dev/rdsk/dks1d0l1vol or
/dev/rdsk/node_wwn/lun1vol/c1p0
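As an illustration only (the serial number, LDEV number, and port are hypothetical values
in the style of this chapter's figures), a Windows HORCM_CMD entry naming the command
device by serial number, LDEV, and port might look like this:
HORCM_CMD
#dev_name
\\.\CMD-30053-264-CL1-A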
(Figure: HOSTA (IP address HST1, instance HORCMINST) and HOSTB (IP address HST2, instances
HORCMINST and HORCMINST0) are connected over the LAN; each instance has its own
configuration file and HORCM daemon. Two Hitachi RAID storage systems are linked by
ESCON®/Fibre, each with a command device at T0,L1. TrueCopy group Oradb pairs oradev1 and
oradev2 (P/S-P-VOL at T1,L1 and T1,L2 on the HOSTA-side system; S/P-VOL at T2,L1 and T2,L2
on the HOSTB-side system). On the HOSTB-side system, ShadowImage group Oradb1 (MU#0,
oradev11 and oradev12) cascades to S-VOLs at T3,L1 and T3,L2, and group Oradb2 (MU#1,
oradev21 and oradev22, SMPL) uses T4,L1 and T4,L2.)

Config file for HOSTA (/etc/horcm.conf):

HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
HST1        horcm   1000       3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group dev_name port# TID LU# MU#
Oradb      oradev1  CL1-A 1   1
Oradb      oradev2  CL1-A 1   2

HORCM_INST
#dev_group ip_address service
Oradb      HST2       horcm
Oradb      HST2       horcm0   # shaded portion

Config file for HOSTB (/etc/horcm.conf):

HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
HST2        horcm   1000       3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group dev_name port# TID LU# MU#
Oradb      oradev1  CL1-D 2   1
Oradb      oradev2  CL1-D 2   2
Oradb1     oradev11 CL1-D 2   1   0
Oradb1     oradev12 CL1-D 2   2   0
Oradb2     oradev21 CL1-D 2   1   1
Oradb2     oradev22 CL1-D 2   2   1

HORCM_INST
#dev_group ip_address service
Oradb      HST1       horcm
Oradb1     HST2       horcm0
Oradb2     HST2       horcm0

Config file for HOSTB (/etc/horcm0.conf):

HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
HST2        horcm0  1000       3000

HORCM_CMD
#dev_name
/dev/xxx [Note 1]

HORCM_DEV
#dev_group dev_name port# TID LU# MU#
Oradb      oradev1  CL1-D 2   1       # shaded portion
Oradb      oradev2  CL1-D 2   2       # shaded portion
Oradb1     oradev11 CL1-D 3   1   0
Oradb1     oradev12 CL1-D 3   2   0
Oradb2     oradev21 CL1-D 4   1   0
Oradb2     oradev22 CL1-D 4   2   0

HORCM_INST
#dev_group ip_address service
Oradb      HST1       horcm   # shaded portion
Oradb1     HST2       horcm
Oradb2     HST2       horcm

Shaded portions: If HORCMINST0 needs to operate Hitachi TrueCopy's paired volume, then
describe oradb.
Figure 2.45 Hitachi TrueCopy/ShadowImage Configuration Example with Cascade Pairs
Example of CCI commands with HOSTA and HOSTB:
„ Designate a group name (Oradb) in the Hitachi TrueCopy environment of HOSTA.
# paircreate -g Oradb -vl
„ Designate a group name (Oradb1) in the ShadowImage environment of HOSTB. When the
command execution environment is not set, set HORCC_MRCF.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
# paircreate -g Oradb1 -vl
These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the
configuration definition file (four pairs for the configuration in Figure 2.45).
„
Designate a group name and display pair status on HOSTA.
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L) (CL1-A , 1, 1-0)30052 266..SMPL ----,----- ---- -
oradb oradev1(L) (CL1-A , 1, 1) 30052 266..P-VOL COPY,30053 268 -
oradb1 oradev11(R) (CL1-D , 2, 1-0)30053 268..P-VOL COPY,30053 270 -
oradb2 oradev21(R) (CL1-D , 2, 1-1)30053 268..SMPL ----,----- ---- -
oradb oradev1(R) (CL1-D , 2, 1) 30053 268..S-VOL COPY,----- 266 -
oradb oradev2(L) (CL1-A , 1, 2-0)30052 267..SMPL ----,----- ---- -
oradb oradev2(L) (CL1-A , 1, 2) 30052 267..P-VOL COPY,30053 269 -
oradb1 oradev12(R) (CL1-D , 2, 2-0)30053 269..P-VOL COPY,30053 271 -
oradb2 oradev22(R) (CL1-D , 2, 2-1)30053 269..SMPL ----,----- ---- -
oradb oradev2(R) (CL1-D , 2, 2) 30053 269..S-VOL COPY,----- 267 -
Example of CCI commands with HOSTB:
„ Designate a group name (Oradb) in the Hitachi TrueCopy environment of HOSTB.
# paircreate -g Oradb -vr
„ Designate a group name (Oradb1) in the ShadowImage environment of HOSTB. When the
command execution environment is not set, set HORCC_MRCF.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
# paircreate -g Oradb1 -vl
These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the
configuration definition file (four pairs for the configuration in Figure 2.45).
„ Designate a group name and display the pair status in the TrueCopy environment of HOSTB.
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1 oradev11(L) (CL1-D , 2, 1-0)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(L) (CL1-D , 2, 1-1)30053 268..SMPL ----,----- ---- -
oradb oradev1(L) (CL1-D , 2, 1) 30053 268..S-VOL PAIR,----- 266 -
oradb oradev1(R) (CL1-A , 1, 1-0)30052 266..SMPL ----,----- ---- -
oradb oradev1(R) (CL1-A , 1, 1) 30052 266..P-VOL PAIR,30053 268 -
oradb1 oradev12(L) (CL1-D , 2, 2-0)30053 269..P-VOL PAIR,30053 271 -
oradb2 oradev22(L) (CL1-D , 2, 2-1)30053 269..SMPL ----,----- ---- -
oradb oradev2(L) (CL1-D , 2, 2) 30053 269..S-VOL PAIR,----- 267 -
oradb oradev2(R) (CL1-A , 1, 2-0)30052 267..SMPL ----,----- ---- -
oradb oradev2(R) (CL1-A , 1, 2) 30052 267..P-VOL PAIR,30053 269 -
„ Designate a group name and display the pair status in the ShadowImage environment of HOSTB.
# pairdisplay -g oradb1 -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1 oradev11(L) (CL1-D , 2, 1-0)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(L) (CL1-D , 2, 1-1)30053 268..SMPL ----,----- ---- -
oradb oradev1(L) (CL1-D , 2, 1) 30053 268..S-VOL PAIR,----- 266 -
oradb1 oradev11(R) (CL1-D , 3, 1-0)30053 270..S-VOL PAIR,----- 268 -
oradb1 oradev12(L) (CL1-D , 2, 2-0)30053 269..P-VOL PAIR,30053 271 -
oradb2 oradev22(L) (CL1-D , 2, 2-1)30053 269..SMPL ----,----- ---- -
oradb oradev2(L) (CL1-D , 2, 2) 30053 269..S-VOL PAIR,----- 267 -
oradb1 oradev12(R) (CL1-D , 3, 2-0)30053 271..S-VOL PAIR,----- 269 -
„ Designate a group name and display the pair status in the ShadowImage environment of HOSTB
(HORCMINST0).
# pairdisplay -g oradb1 -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1 oradev11(L) (CL1-D , 3, 1-0)30053 270..S-VOL PAIR,----- 268 -
oradb1 oradev11(R) (CL1-D , 2, 1-0)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(R) (CL1-D , 2, 1-1)30053 268..SMPL ----,----- ---- -
oradb oradev1(R) (CL1-D , 2, 1) 30053 268..S-VOL PAIR,----- 266 -
oradb1 oradev12(L) (CL1-D , 3, 2-0)30053 271..S-VOL PAIR,----- 269 -
oradb1 oradev12(R) (CL1-D , 2, 2-0)30053 269..P-VOL PAIR,30053 271 -
oradb2 oradev22(R) (CL1-D , 2, 2-1)30053 269..SMPL ----,----- ---- -
oradb oradev2(R) (CL1-D , 2, 2) 30053 269..S-VOL PAIR,----- 267 -
2.9.1 Configuration Definition for Cascading Volume Pairs
The CCI software (HORCM) is capable of keeping track of up to seven pair associations per
LDEV (1 for TC/UR, 3 for UR, 3 for SI/Snapshot, 1 for Snapshot). Through this management,
up to seven groups per LU can be assigned, corresponding to the seven mirror descriptors
of a configuration definition file.
(Figure: mirror descriptors of an LDEV and their group assignments. TrueCopy group Oradb
uses MU#0; Universal Replicator groups Oradb4-6 use MU#1-#3; ShadowImage group Oradb1 uses
MU#0, groups Oradb2-3 use MU#1-2, and groups Oradb7~ (Snapshot) use MU#3-63.)
Figure 2.46 Mirror Descriptors and Group Assignment
2.9.1.1 Correspondence of the Configuration File and Mirror Descriptors
The group name and MU# which are described in HORCM_DEV of a configuration definition
file are assigned to the corresponding mirror descriptors, as summarized in Table 2.13.
Omission of the MU# is handled as MU#0, and the specified group is registered to MU#0 of
ShadowImage and TrueCopy. The MU# for ShadowImage can be assigned in any order, regardless
of the MU# numbering sequence (for example, 2, 1, 0).
Table 2.13 Mirror Descriptors and Group Assignments

                                              Assigned Mirror Descriptors
HORCM_DEV Parameter in                        MU#0                    ShadowImage (Snapshot)   Univ Repl Only
Configuration File                            TrueCopy / ShadowImage  Only MU#1-#2 (MU#3-#63)  MU#1-#3

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
Oradb      oradev1  CL1-D 2        1          oradev1 / oradev1       -                        -

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
Oradb      oradev1  CL1-D 2        1          oradev1 / oradev1       oradev11 (MU#1)          -
Oradb1     oradev11 CL1-D 2        1   1                              oradev21 (MU#2)
Oradb2     oradev21 CL1-D 2        1   2

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
Oradb      oradev1  CL1-D 2        1          oradev1 / oradev11      oradev21 (MU#1)          -
Oradb1     oradev11 CL1-D 2        1   0                              oradev31 (MU#2)
Oradb2     oradev21 CL1-D 2        1   1
Oradb3     oradev31 CL1-D 2        1   2

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
Oradb      oradev1  CL1-D 2        1   0      - / oradev1             -                        -

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
Oradb      oradev1  CL1-D 2        1   0      - / oradev1             oradev11 (MU#1)          -
Oradb1     oradev11 CL1-D 2        1   1                              oradev21 (MU#2)
Oradb2     oradev21 CL1-D 2        1   2

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
Oradb      oradev1  CL1-D 2        1          oradev1 / oradev11      -                        oradev21 (MU#1)
Oradb1     oradev11 CL1-D 2        1   0                                                       oradev31 (MU#2)
Oradb2     oradev21 CL1-D 2        1   h1                                                      oradev41 (MU#3)
Oradb3     oradev31 CL1-D 2        1   h2
Oradb4     oradev41 CL1-D 2        1   h3
2.9.1.2 Cascade Function and Configuration Files
A volume of a cascading connection is described as an entity in a configuration definition
file on the same instance, and the connection of the volume is classified through the
mirror descriptor. In the case of a Hitachi TrueCopy/ShadowImage cascading connection,
too, the volume entity is described in a configuration definition file on the same
instance. Figure 2.47 shows an example of this.
(Figure: on host HST1, instances HORCMINST0 and HORCMINST1 describe the same cascaded
ShadowImage volumes. Group Oradb pairs the P-VOL at T3L0 (MU#0) with the S/P-VOL at T3L2;
on the S/P-VOL, mirror descriptors MU#1 and MU#2 cascade to the Oradb1 S-VOL at T3L4
(MU#0) and the Oradb2 S-VOL at T3L6 (MU#0).)

Configuration file for HORCMINST0:

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
Oradb      oradev1  CL1-D 3        0   0
Oradb1     oradev11 CL1-D 3        4   0
Oradb2     oradev21 CL1-D 3        6   0

HORCM_INST
#dev_group ip_address service
Oradb      HST1       horcm1
Oradb1     HST1       horcm1
Oradb2     HST1       horcm1

Configuration file for HORCMINST1:

HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
Oradb      oradev1  CL1-D 3        2   0
Oradb1     oradev11 CL1-D 3        2   1
Oradb2     oradev21 CL1-D 3        2   2

HORCM_INST
#dev_group ip_address service
Oradb      HST1       horcm0
Oradb1     HST1       horcm0
Oradb2     HST1       horcm0
Figure 2.47 ShadowImage Cascade Connection and Configuration File
2.9.1.3 ShadowImage
ShadowImage is a mirror configuration within one storage system. Therefore, a volume of
the cascading connection can be described in two configuration definition files. In the
case of a cascading connection of ShadowImage only, the specified group is assigned to a
mirror descriptor (MU#) of ShadowImage, with "0" described explicitly as the MU#. Figures
2.48 through 2.50 show the pairdisplay information for each configuration.
(Figure: LDEV 266 (P-VOL, oradb) pairs with LDEV 268 (S/P-VOL); descriptors MU#1 and MU#2
of LDEV 268 pair with LDEV 270 (Oradb1) and LDEV 272 (Oradb2), each at MU#0.)
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L) (CL1-D , 3, 0-0)30053 266..P-VOL PAIR,30053 268 -
oradb oradev1(R) (CL1-D , 3, 2-0)30053 268..S-VOL PAIR,----- 266 -
oradb1 oradev11(R) (CL1-D , 3, 2-1)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(R) (CL1-D , 3, 2-2)30053 268..P-VOL PAIR,30053 272 -
Figure 2.48 Pairdisplay on HORCMINST0
(Figure: from the viewpoint of HORCMINST1, LDEV 268 (S/P-VOL) is the S-VOL of LDEV 266
(P-VOL, Oradb) and, at MU#1 and MU#2, the P-VOL of LDEV 270 (Oradb1) and LDEV 272
(Oradb2).)
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L) (CL1-D , 3, 2-0)30053 268..S-VOL PAIR,----- 266 -
oradb1 oradev11(L) (CL1-D , 3, 2-1)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(L) (CL1-D , 3, 2-2)30053 268..P-VOL PAIR,30053 272 -
oradb oradev1(R) (CL1-D , 3, 0-0)30053 266..P-VOL PAIR,30053 268 -
Figure 2.49 Pairdisplay on HORCMINST1
(Figure: specifying the device file /dev/rdsk/c0t3d4, which corresponds to LDEV 270
(S-VOL, Oradb1), shows the cascade from that volume's viewpoint: LDEV 268 pairs with 266
(Oradb), 270 (Oradb1), and 272 (Oradb2).)
# pairdisplay -d /dev/rdsk/c0t3d4 -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1 oradev11(L) (CL1-D , 3, 4-0)30053 270..S-VOL PAIR,----- 268 -
oradb1 oradev11(R) (CL1-D , 3, 2-1)30053 268..P-VOL PAIR,30053 270 -
oradb oradev1(R) (CL1-D , 3, 2-0)30053 268..S-VOL PAIR,----- 266 -
oradb2 oradev21(R) (CL1-D , 3, 2-2)30053 268..P-VOL PAIR,30053 272 -
Figure 2.50 Pairdisplay on HORCMINST0
2.9.1.4 Cascading Connections for Hitachi TrueCopy and ShadowImage
The cascading connections for Hitachi TrueCopy/ShadowImage can be set up by using three
configuration definition files that describe the cascading volume entity in a
configuration definition file on the same instance. The mirror descriptor for ShadowImage
describes "0" explicitly as the MU#, while the mirror descriptor for Hitachi TrueCopy does
not describe "0" as the MU# (the MU# is omitted).
(Figure: HOST1 runs HORCMINST for TrueCopy; HOST2 runs HORCMINST for the
TrueCopy/ShadowImage environment (the ShadowImage environment uses HORCC_MRCF=1) and
HORCMINST0 for ShadowImage. TrueCopy group Oradb pairs the P-VOL at T3L0 on the HOST1 side
with the S/P-VOL at T3L2 on the HOST2 side (MU#0); on the S/P-VOL, ShadowImage descriptors
MU#0 and MU#1 cascade to the Oradb1 S-VOL at T3L4 (MU#0) and the Oradb2 volume at T3L6
(MU#0).)

Configuration file for HOST1 (/etc/horcm.conf):

HORCM_DEV
#group dev_name port# TID LU MU
Oradb  oradev1  CL1-D 3   0

HORCM_INST
#dev_group ip_address service
Oradb      HST2       horcm
Oradb      HST2       horcm0   # shaded portion

Configuration file for HOST2 (/etc/horcm.conf):

HORCM_DEV
#group dev_name port# TID LU MU
Oradb  oradev1  CL1-D 3   2
Oradb1 oradev11 CL1-D 3   2  0
Oradb2 oradev21 CL1-D 3   2  1

HORCM_INST
#dev_group ip_address service
Oradb      HST1       horcm
Oradb1     HST2       horcm0
Oradb2     HST2       horcm0

Configuration file for HOST2 (/etc/horcm0.conf):

HORCM_DEV
#group dev_name port# TID LU MU
Oradb  oradev1  CL1-D 3   2      # shaded portion
Oradb1 oradev11 CL1-D 3   4  0
Oradb2 oradev21 CL1-D 3   6  0

HORCM_INST
#dev_group ip_address service
Oradb      HST1       horcm     # shaded portion
Oradb1     HST2       horcm
Oradb2     HST2       horcm
Note: Shaded portions: If HORCMINST0 needs to operate Hitachi TrueCopy’s paired volume,
then “oradb” must describe that there is a connection to HST1 via HORCMINST0.
Figure 2.51 TrueCopy/ShadowImage Cascading Connection and Configuration File
Figures 2.52 through 2.55 show the pairdisplay information for each configuration.
(Figure: TrueCopy pair oradb links P-VOL 266 (Seq#30052) to S/P-VOL 268 (Seq#30053);
ShadowImage descriptors MU#0 and MU#1 of LDEV 268 link to LDEV 270 (Oradb1) and LDEV 272
(Oradb2); the ShadowImage descriptor of LDEV 266 remains SMPL.)
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L) (CL1-D , 3, 0-0)30052 266..SMPL ----,----- ---- -
oradb oradev1(L) (CL1-D , 3, 0) 30052 266..P-VOL COPY,30053 268 -
oradb1 oradev11(R) (CL1-D , 3, 2-0)30053 268..P-VOL COPY,30053 270 -
oradb2 oradev21(R) (CL1-D , 3, 2-1)30053 268..P-VOL PSUS,30053 272 W
oradb oradev1(R) (CL1-D , 3, 2) 30053 268..S-VOL COPY,----- 266 -
Figure 2.52 Pairdisplay for Hitachi TrueCopy on HOST1
(Figure: from HORCMINST on HOST2, LDEV 268 (Seq#30053) is the TrueCopy S-VOL of P-VOL 266
(Seq#30052, Oradb) and, at MU#0 and MU#1, the ShadowImage P-VOL of LDEV 270 (Oradb1) and
LDEV 272 (Oradb2).)
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1 oradev11(L) (CL1-D , 3, 2-0)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(L) (CL1-D , 3, 2-1)30053 268..P-VOL PSUS,30053 272 W
oradb oradev1(L) (CL1-D , 3, 2) 30053 268..S-VOL PAIR,----- 266 -
oradb oradev1(R) (CL1-D , 3, 0-0)30052 266..SMPL ----,----- ---- -
oradb oradev1(R) (CL1-D , 3, 0) 30052 266..P-VOL PAIR,30053 268 -
Figure 2.53 Pairdisplay for Hitachi TrueCopy on HOST2 (HORCMINST)
(Figure: from the ShadowImage environment of HORCMINST on HOST2, LDEV 268 (Seq#30053) is
the P-VOL of S-VOL 270 (Oradb1, MU#0) and of 272 (Oradb2, MU#1), and the TrueCopy S-VOL of
266 (Seq#30052, Oradb).)
# pairdisplay -g oradb1 -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1 oradev11(L) (CL1-D , 3, 2-0)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(L) (CL1-D , 3, 2-1)30053 268..P-VOL PSUS,30053 272 W
oradb oradev1(L) (CL1-D , 3, 2) 30053 268..S-VOL PAIR,----- 266 -
oradb1 oradev11(R) (CL1-D , 3, 4-0)30053 270..S-VOL PAIR,----- 268 -
Figure 2.54 Pairdisplay for ShadowImage on HOST2 (HORCMINST)
(Figure: from HORCMINST0 on HOST2, LDEV 270 (S-VOL, Oradb1) is paired with LDEV 268, which
in turn is the ShadowImage P-VOL of 272 (Oradb2, MU#1) and the TrueCopy S-VOL of 266
(Oradb); the device file /dev/rdsk/c0t3d4 corresponds to LDEV 270.)
# pairdisplay -g oradb1 -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1 oradev11(L) (CL1-D , 3, 4-0)30053 270..S-VOL PAIR,----- 268 -
oradb1 oradev11(R) (CL1-D , 3, 2-0)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(R) (CL1-D , 3, 2-1)30053 268..P-VOL PSUS,30053 272 W
oradb oradev1(R) (CL1-D , 3, 2) 30053 268..S-VOL PAIR,----- 266 -
# pairdisplay -d /dev/rdsk/c0t3d4 -m cas
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1 oradev11(L) (CL1-D , 3, 4-0)30053 270..S-VOL PAIR,----- 268 -
oradb1 oradev11(R) (CL1-D , 3, 2-0)30053 268..P-VOL PAIR,30053 270 -
oradb2 oradev21(R) (CL1-D , 3, 2-1)30053 268..P-VOL PSUS,30053 272 W
oradb oradev1(R) (CL1-D , 3, 2) 30053 268..S-VOL PAIR,----- 266 -
Figure 2.55 Pairdisplay for ShadowImage on HOST2 (HORCMINST0)
2.10 Error Monitoring and Configuration Confirmation
CCI supports error monitoring and configuration confirmation commands for linkage with the
system operation management of the UNIX/PC server.
2.10.1 Error Monitoring for Paired Volumes
The HORC Manager (HORCM) monitors all volumes defined in the configuration definition file
at a certain interval regardless of the Hitachi TrueCopy/ShadowImage commands.
„ Objects and scope of monitoring: The HORCM operates as a daemon process on the host
server and monitors all the paired volumes defined in the configuration definition file,
not the volume groups. The HORC Manager's monitoring applies to the primary volumes
only (since the primary volumes control the status). The HORC Manager monitors the
changes in the pair status of these volumes. It regards a change as an error only when
the PAIR status changes to the PSUS status and that change is caused by a failure (such
as a P-VOL error or S-VOL suspension).
„ Monitoring time and interval: The HORC Manager always issues I/O instructions to the
storage system to obtain information for monitoring. It is possible to specify the
monitoring interval in the configuration definition file in order to adjust the daemon
load (see the example at the end of this section).
„ Error notification by HORCM: If the mirroring status is suspended in the normal Hitachi
TrueCopy operation, an error message is displayed by Storage Navigator (and the SVP).
However, no error message may be displayed, depending on the form of system operation.
Since the operation management of the UNIX server checks syslog to find system errors
in many cases, Hitachi TrueCopy error messages are output to syslog for linkage with
the system operation management.
„ Error notification command: Hitachi TrueCopy supports the error notification function
using commands in order to allow the UNIX server client to monitor errors. This
command is connected to the HORCM (daemon) to obtain the transition of the pairing
status and report it. When an error is detected, this command outputs an error message.
This command waits until an error occurs, or reports that no error has occurred if it
finds no errors in the pairing status transition queue of the HORCM's pairing monitor.
These operations can be specified using the options. If the command finds status
transition data in the status transition queue, it displays the data of all volumes.
Data in the HORCM's status transition queue can be erased by specifying the option of
this command.
Note: CCI (HORCM) does not support the syslog function for OpenVMS systems. As an
alternative, the HORCM daemon uses a HORCM logging file.
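For example (values are illustrative), the monitoring interval mentioned above is the
poll(10ms) field of HORCM_MON in the configuration definition file; a value of 6000
corresponds to a 60-second polling cycle:
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
HST1        horcm   6000       3000
The error notification command referred to above is pairmon. A typical invocation (a
sketch only; see the command reference for the full option list) reports all queued status
transitions without waiting:
# pairmon -allsnd -nowait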
2.10.2 Error Monitoring for Database Validator
CCI reports the following message to the syslog file as a validation error whenever one of
the statistical error counters for a volume is updated:
„ [HORCM_103] Detected a validation check error on this volume (unit#, ldev#):
CfEC=n, MNEC=n, SCEC=n, BNEC=n
2.10.3 Pair Status Display and Configuration Confirmation
The CCI pairing function (configuration definition file) combines the physical volumes in the
storage system used independently by the servers. Therefore, you should make sure that the
servers’ volumes are combined as intended by the server system administrator.
The pairdisplay command displays the pairing status to enable you to verify the completion
of pair creation or pair resynchronization (see Figure 2.56). This command is also used to
confirm the configuration of the paired volume connection path (physical link of paired
volumes among the servers). For further information on the pairdisplay command, see
section 4.8.
(Figure: HOSTA and HOSTB each hold a configuration definition file (G1,Oradb1 at P1,T1,L1
for HOSTA; G1,Oradb1 at P2,T2,L2 for HOSTB) and a special file. The paired logical
volumes, primary LDEV #20 and secondary LDEV #30, are connected over the fibre bus between
the two Hitachi RAID storage systems. In the output below, the first columns (1) show the
link information and the remaining columns (2) show the pair status:)
Group PairVol(Local/Remote) P,T#,L#, Seq#, LDEV#   P/S, Status, Fence, Seq#, Pair-LDEV#
G1    Oradb1(L)             P1,T1,L1, Seq#, 20     P-VOL, Pair, Never, Seq#, 30
G1    Oradb1(R)             P2,T2,L2, Seq#, 30     S-VOL, Pair, Never, Seq#, 20
Figure 2.56 Example of Pair Configuration Confirmation (Pairdisplay)
The raidscan command displays the SCSI port, target ID, LDEVs mapped to LUNs, and status
of those LDEVs, regardless of the configuration definition file (see Figure 2.57). When a
port number is specified, this command displays information about all target IDs and LUNs
of that port.
Port#, TargetID#, Lun#, Num(LDEV#...), P/S, Status, Fence, LDEV#, Seq#, Pair-LDEV#
CL1-A,         3,    1, 3(3,5,6),      P-VOL, Pair, Never, 3,     Seq#, 30
Figure 2.57 Example of Raidscan Command
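For example (the port name is illustrative), specifying a port displays information for
all target IDs and LUNs on that port:
# raidscan -p cl1-a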
2.11 Recovery Procedures for HA Configurations
After configuring and starting Hitachi TrueCopy operations, the system administrator
should conduct operational tests for possible failures in the system. In normal operation,
service personnel obtain information for identifying the failure cause on the SVP;
however, the trigger for this action should be given by the Hitachi TrueCopy operation
commands. Figures 2.58 and 2.59 show the system failover/regression and Hitachi TrueCopy
recovery procedures.
(Figure: two scenarios, each proceeding from the mirror state through the takeover state
to the recovery state. Top row: host A goes down and host B performs a takeover of the
S-VOL. Bottom row: the P-VOL goes down and host B performs a takeover that leaves the
S-VOL in SSUS (SSWS); after the P-VOL recovers, pairresync -swaps swaps the P-VOL and
S-VOL and copies only the difference data.)
1. A failure occurs in the host A server (1-top) or in the P-VOL (1-bottom).
2. Host B detects the failure of host A or the P-VOL and issues a takeover command to make
the S-VOL usable. Host B takes over processing from host A. In the case of a host A
failure (1-top), the Swap-takeover command will be executed. In the case of a P-VOL
failure (1-bottom), the SVOL-SSUS-takeover command will be executed.
3. While host B continues processing, the P-VOL and S-VOL are swapped (pairresync -swaps),
and the delta data (BITMAP) updated by host B is fed back to host A.
4. After host A or the P-VOL has recovered, host A can take over processing from host B by
executing the swap-takeover (horctakeover) command.
Figure 2.58 System Failover and Recovery
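As a sketch of steps 2 and 3 above (the group name Oradb and the timeout value are
illustrative), host B would issue the takeover and, after the P-VOL recovers, swap the
volumes with a differential resynchronization:
# horctakeover -g Oradb -t 30
# pairresync -g Oradb -swaps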
(Figure: two scenarios, each proceeding from the mirroring state through the regression
state to the recovery state. Top row: the S-VOL goes down and processing continues in
regression; after the S-VOL recovers, pairsplit -S and paircreate rebuild the pair with an
entire copy. Bottom row: the link is disconnected and processing continues in regression;
after recovery, pairresync copies only the difference data.)
1. The P-VOL detects a failure in the S-VOL and causes suspension of the duplicated
writing. (The fence level determines whether host A continues processing or host B takes
over the processing from host A.)
2. The P-VOL changes the paired volume status to PSUE and keeps track of the difference
data. The HORCM detects the status change and outputs a message to syslog. If the client
of host A has initiated the monitoring command, the message concerned is displayed on the
screen of the client.
3. The S-VOL recovers from the failure. Host A issues the pairsplit -S and paircreate -vl
commands, or the pairresync command, to update the pair by copying the entire data or by
copying the differential data only. The updated data is fed back to the S-VOL.
Figure 2.59 Degeneracy and Recovery in Case of System Error
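As a sketch of step 3 above (the group name Oradb is illustrative), host A can rebuild the
pair with an entire copy:
# pairsplit -g Oradb -S
# paircreate -g Oradb -vl
or resynchronize with a differential copy only:
# pairresync -g Oradb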
Chapter 3 Preparing for CCI Operations
This chapter covers the following topics:
„ System requirements (section 3.1)
„ Hardware installation (section 3.2)
„ Software installation (section 3.3)
„ Creating/editing the configuration file (section 3.4)
„ CCI startup (section 3.6)
„ Starting CCI as a Service (Windows Systems) (section 3.7)
3.1 System Requirements
CCI operations involve the CCI software on the UNIX/PC server host and the RAID storage
system(s) containing the command device(s) and the Hitachi TrueCopy and/or ShadowImage
pair volumes. The system requirements for CCI are:
„ CCI software product. The CCI software is supplied on CD-ROM or diskette. The CCI
software files take up 2.5 MB of space. The log files can take up to 3 MB of space.
„ Host platform. CCI is supported on the following host platforms:
Solaris, Solaris/x86, HP-UX, AIX, Linux, Linux/IA64, DYNIX/ptx®, IRIX, OpenVMS,
OpenVMS/IA, VMware, Windows 2008, Windows 2003, Windows 2000, Windows NT,
Windows CE .NET
Note: TrueCopy Asynchronous platform and storage system support may vary. Please
contact your Hitachi Data Systems team for the latest information on Hitachi RAID
storage system support for CCI.
– Root/administrator access to the host is required to perform CCI operations.
– Static memory capacity: minimum = 300 KB, maximum = 500 KB
– Dynamic memory capacity (set in HORCM_CONF): maximum = 500 KB per unit ID
– CCI supports several failover products, including FirstWatch, MC/ServiceGuard,
HACMP, TruCluster, and ptx/CLUSTERS. Please contact your Hitachi Data Systems
account team for the latest information on failover software support for CCI.
– The system which runs and operates Hitachi TrueCopy in an HA configuration must
be a duplex system having a hot standby configuration or mutual hot standby
(mutual takeover) configuration. The remote copy system must be designed for
remote backup among servers and configured so that servers cannot share the
primary and secondary volumes at the same time. The information in this document
does not apply to fault-tolerant system configurations such as Oracle Parallel Server
(OPS) in which nodes execute parallel accesses. However, two or more nodes can
share the primary volumes of the shared OPS database, and must use the secondary
volumes as backup volumes.
– Host servers which are combined when paired logical volumes are defined should run
on the operating system of the same architecture. If not, one host may not be able
to recognize a paired volume of another host, even though HORCM can run properly.
„ Hitachi RAID storage system(s). The Hitachi TagmaStore USP, Hitachi TagmaStore NSC,
Lightning 9900V, and Lightning 9900 storage systems support CCI operations. Hitachi
TrueCopy Synchronous and Asynchronous are supported for all storage system models.
Please contact your Hitachi Data Systems representative for further information on
storage system configurations.
– Microcode: The minimum microcode levels for CCI software version 01-22-03/02 are:
Universal Storage Platform V/VM: 60-03-xx
TagmaStore USP/NSC: 50-08-05 (same as for CCI 01-20-03/02)
Lightning 9900V: 21-14-28 (same as for CCI 01-20-03/02)
Lightning 9900: 01-19-93 (same as for CCI 01-20-03/02)
The CCI function for Oracle10g H.A.R.D. requires USP V/VM microcode 60-03-xx or
higher.
The CCI function for command device guarding requires USP/NSC microcode
50-07-30 or higher and 9900V microcode 21-14-24 or higher.
– Command Device: The CCI command device must be defined and accessed as a raw
device (no file system, no mount operation).
– TrueCopy: The TrueCopy option must be installed and enabled on the storage systems.
Bi-directional swap must be enabled between the primary and secondary volumes.
The port modes (LCP, RCP, RCU target, etc.) and MCU-RCU paths must be defined.
– TrueCopy Async: The TrueCopy Async option must be installed and enabled.
– ShadowImage: ShadowImage must be installed and enabled on the storage
system(s). Minimum 9900V microcode for Host Group support is 21-06-00.
– Database Validator: All USP V/VM and USP/NSC features support Database Validator.
The 9900V DB Validator feature (DKC-F460I-8HSF, -8HLF, or -16HSF) must be
installed in the 9900V. Minimum 9900V microcode for DB Validator support is
21-02-00/00.
– Data Retention Utility: The Data Retention Utility feature (Open LDEV Guard on
9900V) must be installed and enabled on the storage system(s). Minimum 9900V
microcode for Open LDEV Guard support is 21-06-00.
– Universal Replicator: The Universal Replicator feature (USP V/VM, USP/NSC) must
be installed and enabled on the storage system. Additionally, the path between the
CUs must be set (using Storage Navigator or the SVP), and bi-directional swap must
be enabled between the primary and secondary volumes.
– Copy-on-Write Snapshot: Both ShadowImage and Copy-on-Write Snapshot must be
installed and enabled on the storage system(s). Minimum USP/NSC microcode for
Copy-on-Write Snapshot support via CCI is 50-04-05.
„ Hitachi and other RAID storage systems: see section 3.1.2.
3.1.1 Supported Platforms
Table 3.1 Supported Platforms for TrueCopy

Vendor     Operating System                          Failover Software   Volume Manager   I/O Interface
Sun        Solaris 2.5                               First Watch         VxVM             SCSI/Fibre
           Solaris 10 /x86                           —                   VxVM             Fibre
HP         HP-UX 10.20/11.0/11.2x                    MC/Service Guard    LVM, SLVM        SCSI/Fibre
           HP-UX 11.2x on IA64*                      MC/Service Guard    LVM, SLVM        Fibre
           Digital UNIX 4.0                          TruCluster          LSM              SCSI
           Tru64 UNIX 5.0                            TruCluster          LSM              SCSI/Fibre
           OpenVMS 7.3-1                             —                   —                Fibre
IBM        DYNIX/ptx 4.4                             ptx/CLUSTERS        LVM              SCSI/Fibre
           AIX 4.3                                   HACMP               LVM              SCSI/Fibre
           zLinux (Suse 8)                           —                   —                Fibre (FCP)
           For restrictions on zLinux, see section 3.1.3.
Microsoft  Windows NT 4.0; Windows 2000, 2003, 2008  MSCS                LDM              Fibre/iSCSI
           Windows 2003/2008 on IA64*                MSCS                LDM              Fibre
           Windows 2003/2008 on EM64T
Red Hat    Red Hat Linux 6.0/7.0/AS2.1, 3.0, 4.0     —                   —                SCSI/Fibre**
           AS 2.1, 3.0 Update2, 4.0 on IA64*         —                   —                Fibre
           AS 4.0 on EM64T
SGI        IRIX64 6.5                                —                   —                SCSI/Fibre
* IA64: using IA-32EL on IA64 (except CCI for Linux/IA64)
Table 3.2 Supported Platforms for ShadowImage

Vendor     Operating System                          Failover Software   Volume Manager   I/O Interface
Sun        Solaris 2.5                               First Watch         VxVM             SCSI/Fibre
           Solaris 10 /x86                           —                   VxVM             Fibre
HP         HP-UX 10.20/11.0/11.2x                    MC/Service Guard    LVM, SLVM        SCSI/Fibre
           HP-UX 11.2x on IA64*                      MC/Service Guard    LVM, SLVM        Fibre
           Digital UNIX 4.0                          TruCluster          LSM              SCSI
           Tru64 UNIX 5.0                            TruCluster          LSM              SCSI/Fibre
           OpenVMS 7.3-1                             —                   —                Fibre
IBM        DYNIX/ptx 4.4                             ptx/CLUSTERS        LVM              SCSI/Fibre
           AIX 4.3                                   HACMP               LVM              SCSI/Fibre
           zLinux (Suse 8)                           —                   —                Fibre (FCP)
           For restrictions on zLinux, see section 3.1.3.
Microsoft  Windows NT 4.0; Windows 2000, 2003, 2008  MSCS                LDM              iSCSI/Fibre
           Windows 2003/2008 on IA64*                MSCS                LDM              Fibre
           Windows 2003/2008 on EM64T
Red Hat    Red Hat Linux 6.0/7.0/AS2.1, 3.0, 4.0     —                   —                SCSI/Fibre**
           AS2.1, 3.0 Update2, 4.0 on IA64*          —                   —                Fibre
           AS 4.0 on EM64T
SGI        IRIX64 6.5                                —                   —                SCSI/Fibre
* IA64: using IA-32EL on IA64 (except CCI for Linux/IA64)
Table 3.3 Supported Platforms for TrueCopy Async

Vendor     Operating System                          Failover Software   Volume Manager   I/O Interface
Sun        Solaris 2.5                               First Watch         VxVM             SCSI/Fibre
           Solaris 10 /x86                           —                   VxVM             Fibre
HP         HP-UX 10.20/11.0/11.2x                    MC/Service Guard    LVM, SLVM        SCSI/Fibre
           HP-UX 11.2x on IA64*                      MC/Service Guard    LVM, SLVM        Fibre
           Digital UNIX 4.0                          TruCluster          LSM              SCSI
           Tru64 UNIX 5.0                            TruCluster          LSM              SCSI/Fibre
           OpenVMS 7.3-1                             —                   —                Fibre
IBM        DYNIX/ptx 4.4                             ptx/CLUSTERS        LVM              SCSI/Fibre
           AIX 4.3                                   HACMP               LVM              SCSI/Fibre
           zLinux (Suse 8)                           —                   —                Fibre (FCP)
           For restrictions on zLinux, see section 3.1.3.
Microsoft  Windows NT 4.0; Windows 2000, 2003, 2008  MSCS                LDM              iSCSI/Fibre
           Windows 2003/2008 on IA64*                MSCS                LDM              Fibre
           Windows 2003/2008 on EM64T
Red Hat    Red Hat Linux 6.0/7.0/AS 2.1, 3.0, 4.0    —                   —                SCSI/Fibre**
           AS 2.1, 3.0 Update2, 4.0 on IA64*         —                   —                Fibre
           AS 4.0 on EM64T
SGI        IRIX64 6.5                                —                   —                SCSI/Fibre
* IA64: using IA-32EL on IA64 (except CCI for Linux/IA64)
Table 3.4 Supported Platforms for Universal Replicator

Vendor     Operating System                     Failover Software   Volume Manager   I/O Interface
SUN        Solaris 2.8                          VCS                 VxVM             Fibre
           Solaris 10 /x86                      —                   VxVM             Fibre
HP         HP-UX 11.0/11.2x                     MC/Service Guard    LVM, SLVM        Fibre
           HP-UX 11.2x on IA64*                 MC/Service Guard    LVM, SLVM        Fibre
IBM        AIX 5.1                              HACMP               LVM              Fibre
Microsoft  Windows 2000, 2003, 2008             MSCS                LDM              Fibre
           Windows 2003/2008 on IA64*           MSCS                LDM              Fibre/iSCSI
           Windows 2003/2008 on EM64T
Red Hat    Red Hat Linux AS 2.1, 3.0, 4.0       —                   —                Fibre**
           AS 2.1, 3.0 Update2, 4.0 on IA64*    —                   —                Fibre**
           AS 4.0 on EM64T
HP         Tru64 UNIX 5.0                       TruCluster          LSM              Fibre
           OpenVMS 7.3-1                        —                   —                Fibre
SGI        IRIX 64 6.5                          —                   —                Fibre
* IA64: using IA-32EL on IA64 (except CCI for Linux/IA64)
Table 3.5 Supported Platforms for Copy-on-Write Snapshot

Vendor     Operating System                     Failover Software   Volume Manager   I/O Interface
SUN        Solaris 2.8                          —                   VxVM             Fibre
           Solaris 10 /x86                      —                   VxVM             Fibre
HP         HP-UX 11.0/11.2x                     —                   LVM, SLVM        Fibre
           HP-UX 11.2x on IA64*                 —                   LVM, SLVM        Fibre
IBM        AIX 5.1                              —                   LVM              Fibre
Microsoft  Windows 2000, 2003, 2008             —                   LDM              Fibre
           Windows 2003/2008 on IA64*           —                   LDM              Fibre/iSCSI
           Windows 2003/2008 on EM64T
Red Hat    Red Hat Linux AS 2.1, 3.0, 4.0       —                   —                Fibre**
           AS 2.1, 3.0 Update2, 4.0 on IA64*    —                   —                Fibre**
           AS 4.0 on EM64T
HP         Tru64 UNIX 5.0                       —                   LSM              Fibre
           OpenVMS 7.3-1                        —                   —                Fibre
SGI        IRIX64 6.5                           —                   —                Fibre
* IA64: using IA-32EL on IA64 (except CCI for Linux/IA64)
Table 3.6 Supported Guest OS for VMware

VM Vendor                   Layer    Guest OS              CCI Support Confirmation       Volume Mapping   I/O Interface
VMware ESX Server 2.5.1     Guest    Windows 2003 SP1      Confirmed                      RDM*             Fibre
or later using Linux                 Windows 2000 Server   Unconfirmed                    RDM*             Fibre
Kernel 2.4.9 [Note 1]                Windows NT 4.0        Unconfirmed                    RDM*             Fibre
                                     RHAS 3.0              Confirmed                      RDM*             Fibre
                                     SLES 9.0              Unconfirmed                    RDM*             Fibre
                                     Solaris 10 u3 (x86)   Confirmed                      RDM*             Fibre
                            SVC      Linux Kernel 2.4.9    Confirmed                      Direct           Fibre
IBM AIX 5.3 VIO             Client   AIX 5.3               Confirmed                      Physical mode    Fibre
Server [Note 2]             Server   AIX 5.3               See (4) in section 3.1.4.2.                     Fibre
* RDM: Raw Device Mapping using Physical Compatibility Mode.
Table 3.7 Supported Platforms: IPv6 vs IPv6

(Matrix not fully reproduced. Rows and columns are the CCI/IPv6 platforms [Note 1]: HP-UX,
Solaris, AIX, Windows, Linux, Tru64, and OpenVMS. Each cell indicates whether CCI using
IPv6 on the row platform can communicate with CCI using IPv6 on the column platform.)
AV: Available for communicating with different platforms.
N/A: Not Applicable (Windows LH does not support IPv4 mapped IPv6).
Table 3.8 Supported Platforms: IPv4 vs IPv6

(Matrix not fully reproduced. Rows are the CCI/IPv4 platforms: HP-UX, Solaris, AIX,
Windows, Linux, Tru64, OpenVMS, IRIX64, and DYNIX; columns are the CCI/IPv6 platforms
[Note 1]. Each cell indicates whether CCI using IPv4 on the row platform can communicate
with CCI using IPv6 on the column platform.)
AV: Available for communicating with different platforms.
N/A: Not Applicable (Windows LH does not support IPv4 mapped IPv6).
Minimum platform versions for CCI/IPv6 support:
– HP-UX: HP-UX 11.23 (PA/IA) or later
– Solaris: Solaris 8/Sparc or later, Solaris 10/x86/64 or later
– AIX: AIX 5.1 or later
– Windows: Windows 2008 (LH), Windows 2003 + IPv6 Install
– Linux: Linux Kernel 2.4 (RH8.0) or later
– Tru64: Tru64 v5.1A or later. Note that v5.1A does not support the getaddrinfo()
function, so the IP address must be specified directly.
– OpenVMS: OpenVMS 8.3 or later
3.1.2 Using CCI with Hitachi and Other RAID Storage Systems
Table 3.9 shows the relationship between CCI and the RAID storage system type.
„ The following common API/CLI commands are rejected with EX_ERPERM (*1), depending on
the connectivity of CCI with the RAID storage system:
horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairvolchk, pairevtwait,
pairdisplay, raidscan (except the -find option), raidar, raidvchkset, raidvchkdsp,
raidvchkscan
„ The following XP API/CLI commands are rejected with EX_ERPERM (*2) on a Hitachi
storage system even when both CCI and Raid Manager XP (provided by HP) are installed:
pairvolchk -s, pairdisplay -CLI, raidscan -CLI, paircreate -m noread for TrueCopy,
paircreate -m dif/inc for ShadowImage
Table 3.9 Relationship between CCI and RAID Storage System

CCI Version                 Installation              RAID System   Common API/CLI   XP API/CLI
CCI 01-08-03/00 or higher   CCI                       Hitachi       Enable           Cannot use (except CLI)
                                                      HP® XP        EX_ERPERM(*1)    Cannot use (except CLI)
                            CCI and Raid Manager XP   Hitachi       Enable           EX_ERPERM(*2)
                                                      HP® XP        Enable           Enable
Raid Manager XP             Raid Manager XP           HP® XP        Enable           Enable
01.08.00 or higher                                    Hitachi       EX_ERPERM(*1)    EX_ERPERM(*2)
(provided by HP®)           Raid Manager XP and CCI   HP® XP        Enable           Enable
                                                      Hitachi       Enable           EX_ERPERM(*2)
(Figure: an application (APP) can use the common API/CLI through either CCI or Raid
Manager XP, each of which reaches the HITACHI array and the HP® XP array through a command
device (-CM); the XP API/CLI can be used on the XP array only. Common API/CLI commands are
allowed only under both installations.)
Figure 3.1 Relationship between APP, CCI, and Storage System
3.1.3 Restrictions on zLinux
In the following example, zLinux defines the Open Volumes that are connected to FCP as
/dev/sd*. Also, the mainframe volumes (3390-xx) that are connected to FICON are defined
as /dev/dasd*.
(Figure: a z990 host runs MVS and, under z/VM, a zLinux guest with RAID Manager (RM). The
zLinux guest reaches the command device and the OPEN volumes in the RAID storage system
through FCP, while the mainframe volumes (3390-9A) are reached through FICON.)
Figure 3.2 Example of a RAID Manager Configuration on zLinux
The restrictions for using CCI with zLinux are:
„ Command device. CCI uses a SCSI Path-through driver to access the command device. As
such, the command device must be connected through FCP adaptors.
„ Open Volumes via FCP. You can control the ShadowImage and TrueCopy pair operations
without any restrictions.
„ Mainframe (3390-9A) Volumes via FICON. You cannot control the volumes (3390-9A)
that are directly connected to FICON for ShadowImage pair operations. Also, mainframe
volumes must be mapped to a CHF port so that the target volumes can be addressed using
a command device on an FCP adaptor.
Note: ShadowImage supports only 3390-9A multiplatform volumes. TrueCopy does not
support multiplatform volumes (including 3390-9A) via FICON.
„ Volume discovery via FICON. The inqraid command discovers the FCP volume
information by using SCSI inquiry. FICON volumes can only be discovered by using RAID
Manager to convert the mainframe interface (Read_device_characteristics or
Read_configuration_data) to SCSI Inquiry. As such, the information that is required to
run the inqraid command cannot be fully reported, as shown in the following example:
sles8z:/HORCM/usr/bin# ls /dev/dasd* | ./inqraid
/dev/dasda  -> [ST] Unknown Ser =  1920 LDEV =   4 [HTC ] [0704_3390_0A]
/dev/dasdaa -> [ST] Unknown Ser = 62724 LDEV =4120 [HTC ] [C018_3390_0A]
/dev/dasdab -> [ST] Unknown Ser = 62724 LDEV =4121 [HTC ] [C019_3390_0A]
sles8z:/HORCM/usr/bin# ls /dev/dasd* | ./inqraid -CLI
DEVICE_FILE PORT SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
dasda       -      1920    4 -   -      00C0 -       0704_3390_0A
dasdaa      -     62724 4120 -   -      9810 -       C018_3390_0A
dasdab      -     62724 4121 -   -      9810 -       C019_3390_0A
In the previous example, the Product_ID, C019_3390_0A, has the following associations:
„ C019 indicates the Devno
„ 3390 indicates the Dev_type
„ 0A indicates the Dev_model
Note: The following commands cannot be used because there is no PORT information:
„ raidscan -pd <device>, raidar -pd <device>, raidvchkscan -pd <device>
„ raidscan -find [conf], mkconf
3.1.4 Restrictions on VM
3.1.4.1 VMware ESX Server
Whether CCI (RM) runs or not depends on the support of the guest OS by VMware. In
addition, the guest OS depends on VMware support of the virtual H/W (HBA). Therefore, the
following guest OS restrictions must be followed when using CCI on VMware.
(Figure: CCI instances #1 and #2 run in one guest OS and CCI #3 in another guest OS on a
VMware ESX Server host; the guests reach the Hitachi RAID storage system through the
server's HBA. One command device (-CM) is mapped for CCI #1 and #2, and a separate command
device for CCI #3.)
Figure 3.3 RAID Manager Configuration on Guest OS/VMware
The restrictions for using CCI with VMware are:
1. Guest OS. CCI must use a guest OS that is supported by CCI and that is also a VMware-
supported guest OS (e.g., Windows Server 2003, Red Hat Linux, SuSE Linux). Refer to
Table 3.6.
2. Command device. CCI uses the SCSI path-through driver to access the command device.
Therefore, the command device must be mapped as Raw Device Mapping using Physical
Compatibility Mode. At least one command device must be assigned for each guest OS.
3. CCI (RM) instance numbers among different guest OSs must be different, even if a
command device is assigned to each guest OS, because the command device cannot
distinguish among guest OSs that present the same WWN as the VMHBA.
4. About invisible LUNs. A LUN assigned to the guest OS must be visible to SCSI Inquiry
when VMware (the host OS) starts. For example, an S-VOL used by VSS is set to Read Only
and Hidden, and is therefore hidden from SCSI Inquiry. If VMware (the host OS) is
started while the volume is in this state, the host OS will hang.
5. LUN sharing between guest and host OS. Sharing a command device or a normal LUN
between the guest OS and the host OS is not supported.
6. About running on the SVC. The ESX Server 3.0 SVC (service console) is a limited
distribution of Linux based on Red Hat Enterprise Linux 3, Update 6 (RHEL 3 U6). The
service console provides an execution environment to monitor and administer the entire
ESX Server host. The CCI user will be able to run CCI by installing "CCI for Linux" on
the SVC. The volume mapping (/dev/sd??) on the SVC is a physical connection without
converting SCSI Inquiry, so CCI will perform as if running on Linux, regardless of the
guest OS. However, VMware protects the service console with a firewall. According to
current documentation, the firewall allows only ports 902, 80, 443, and 22 (SSH) plus
ICMP (ping), DHCP, and DNS by default, so the CCI user must enable a port for CCI
(HORCM) using the "iptables" command.
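For example (a sketch only; the UDP port number 11000 is hypothetical, so use the port
assigned to your HORCM instance in /etc/services), a rule such as the following could open
the HORCM port on the service console:
# iptables -I INPUT -p udp --dport 11000 -j ACCEPT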
3.1.4.2 Restrictions on AIX VIO
Whether CCI can function completely or not depends on how VIO Client/Server supports
virtual HBA(vscsi), and there are some restrictions in the case of volume discovery. Figure
3.4 shows CCI configuration on AIX VIO client.
(Figure: on a System p host, the VIO Client runs CCI instances #0 and #1 and reaches the
Hitachi RAID storage system through vscsi devices served by the VIO Server's HBA via the
Hypervisor. One command device (-CM) is mapped for the VIO Client's CCI instances and
another command device for other use.)
Figure 3.4 CCI Configuration on VIO Client
CCI on AIX VIO should be used with the following restrictions:
1. Command device. CCI uses the SCSI path-through driver to access the command device.
Therefore, the command device must be mapped as a RAW device in Physical Mapping Mode.
At least one command device must be assigned for each VIO Client. The CCI instance
numbers among different VIO Clients must be different, even if a command device is
assigned to each VIO Client, because the command device cannot distinguish between VIO
Clients due to the use of the same WWN via vscsi.
2. LUN sharing between VIO Server and VIO Clients. It is not possible to share a command
device between the VIO Server and VIO Clients, and a normal LUN also cannot be shared
between the VIO Server and VIO Clients.
3. Volume discovery via vscsi on the VIO Client. The inqraid command discovers the volume
information by using SCSI inquiry, but the VIO Client cannot report the real SCSI
inquiry (Page0 and Page83), as shown below.
Example for the inqraid command:
lsdev -Cc disk | /HORCM/usr/bin/inqraid
hdisk0  -> NOT supported INQ. [AIX ] [VDASD ]
hdisk1  -> NOT supported INQ. [AIX ] [VDASD ]
hdisk2  -> NOT supported INQ. [AIX ] [VDASD ]
 :
hdisk19 -> NOT supported INQ. [AIX ] [VDASD ]
The following command options discover the volumes by issuing SCSI inquiry. These
options cannot be used, because there is no Port/LDEV RAID information:
raidscan -pd <device>, raidar -pd <device>, raidvchkscan -pd <device>
raidscan -find [conf], mkconf.sh, inqraid
pairxxx -d[g] <device>, raidvchkdsp -d[g] <device>, raidvchkset -d[g] <device>
\\.\CMD-Serial#-LDEV#-Port#:/dev/rhdisk on horcm.conf
So the user needs to know the volume mapping information (/dev/rhdisk??) on the VIO
Client by referring to the physical volume mapping through the VIO Server.
4. About running on the VIO Server. The volume mapping (/dev/rhdisk??) on the VIO Server
is a physical connection without converting SCSI Inquiry, so CCI will perform as if
running on AIX 5.3. However, IBM does not allow running applications in the VIO Server.
Since commands or scripts would have to be run outside the restricted shell, it may be
necessary to get IBM's approval to run them in the VIO Server. The user would have to
change their scripts so that, when run in a VIO Server, they issue the oem_setup_env
command to access the non-restricted shell.
3.1.5 About Platforms Supporting IPv6
Library and System Call for IPv6
CCI uses the following functions of the IPv6 library to get and convert from hostname to
IPv6 address.
„ IPv6 library to resolve hostname and IPv6 address:
– getaddrinfo()
– inet_pton()
– inet_ntop()
„ Socket system call to communicate using UDP/IPv6:
– socket(AF_INET6)
– bind(), sendmsg(), sendto(), rcvmsg(), recvfrom()…
If CCI linked these functions statically in the object (exe), a core dump could occur on
an old platform (e.g., Windows NT, HP-UX 10.20, Solaris 5) that does not support them.
So CCI links these functions dynamically, resolving the symbols after determining whether
the shared library and functions for IPv6 exist. Whether CCI can support IPv6 depends on
the support of the platform. If the platform does not support the IPv6 library, then CCI
uses its own internal functions corresponding to inet_pton() and inet_ntop(); in this
case, a hostname cannot be used to describe an IPv6 address.
(Figure: the CCI communication layer calls getaddrinfo(), inet_xxxx(), and the socket
functions through a dynamic link to the platform's IPv6 library: Ws2_32.dll on Windows,
/usr/lib/libc.sl on HP-UX (PA), and /usr/lib/XX on other OSs.)
Figure 3.5 Library and System Call for IPv6
Environment Variable
CCI loads and links the library for IPv6 by specifying a PATH as follows:
For Windows systems: Ws2_32.dll
For HP-UX (PA/IA) systems: /usr/lib/libc.sl
However, CCI may need to specify a different PATH to use the library for IPv6. For this
purpose, CCI also supports the following environment variables for specifying a PATH:
„ $IPV6_DLLPATH (valid only for HP-UX, Windows): This variable is used to change the
default PATH for loading the library for IPv6. For example:
export IPV6_DLLPATH=/usr/lib/hpux32/lib.so
horcmstart.sh 10
„ $IPV6_GET_ADDR: This variable is used to change the "AI_PASSIVE" value used as the
default when calling the getaddrinfo() function for IPv6. For example:
export IPV6_GET_ADDR=9
horcmstart.sh 10
HORCM Start-Up Log
The support level of the IPv6 feature depends on the platform and OS version. In certain
OS platform environments, CCI cannot perform IPv6 communication completely, so CCI logs
whether or not the OS environment supports the IPv6 feature:
/HORCM/log*/curlog/horcm_HOST NAME.log
*****************************************************************************
- HORCM STARTUP LOG - Fri Aug 31 19:09:24 2007
*****************************************************************************
19:09:24-cc2ec-02187- horcmgr started on Fri Aug 31 19:09:24 2007
:
:
19:09:25-3f3f7-02188- ******** starts Loading library for IPv6 *******
[ AF_INET6 = 26, AI_PASSIVE = 1 ]
19:09:25-47ca1-02188- dlsym() : Symbl = 'getaddrinfo' : dlsym: symbol "getaddrinfo" not
found in "/etc/horcmgr"
getaddrinfo() : Unlinked on itself
inet_pton() : Linked on itself
inet_ntop() : Linked on itself
19:09:25-5ab3e-02188- ******** finished Loading library **************
:
HORCM set to IPv6 ( INET6 value = 26)
:
3.2 Hardware Installation
Installation of the hardware required for CCI is performed by the user and the Hitachi Data
Systems representative. To install the hardware required for CCI operations:
1. User:
a) Identify the Hitachi TrueCopy and/or ShadowImage primary and secondary volumes,
so that the CCI hardware and software components can be installed and configured
properly.
b) Make sure that the UNIX/PC server hardware and software are properly installed and
configured.
2. Hitachi Data Systems representative:
a) Connect the RAID storage system(s) to the UNIX/PC server host(s). Please refer to
the Maintenance Manual and the Configuration Guide for the platform (e.g.,
Microsoft Windows Configuration Guide, IBM AIX Configuration Guide).
b) Install and enable the Hitachi TrueCopy and ShadowImage features on the RAID
storage system(s).
c) Configure the RAID storage systems which will contain the Hitachi TrueCopy and/or
ShadowImage primary volumes to report sense information to the host(s).
d) Set the SVP clock to local time so the TrueCopy/ShadowImage time stamps will be
correct.
e) Hitachi TrueCopy only: install the remote copy connections between the TrueCopy
main and remote control units (MCUs and RCUs). For detailed information on
installing the TrueCopy remote copy connections, please refer to the Hitachi
TrueCopy User and Reference Guide for the storage system.
3. User and Hitachi Data Systems Rep: Ensure that the storage systems are accessible via
Storage Navigator. For older storage systems, install, configure, and connect the Remote
Console PC to the storage systems. Enable the applicable options (e.g., TrueCopy,
ShadowImage, LUN Manager, Data Retention Utility).
For information and instructions, see the Storage Navigator User’s Guide for the storage
system (e.g., Hitachi TagmaStore USP/NSC Storage Navigator User’s Guide).
4. User: For Hitachi TrueCopy only, you must configure the RAID storage system for
TrueCopy operations as follows before you can create TrueCopy volume pairs using CCI.
For detailed instructions on configuring Hitachi TrueCopy operations, please refer to the
Hitachi TrueCopy User and Reference Guide for the storage system.
a) For 9900V and later, make sure that all TrueCopy MCUs are connected to the Storage
Navigator LAN.
For 9900, add all TrueCopy MCUs to the 9900 Remote Console PC at the main site.
b) Change the MCU and RCU remote copy ports to the correct mode (LCP, RCP,
initiator, target, RCU target).
c) Establish the MCU-RCU paths.
3.3 Software Installation
Installation of the CCI software on the host server(s) is performed by the user, with
assistance as needed from the Hitachi Data Systems representative.
3.3.1 Software Installation for UNIX Systems
If you are installing CCI from CD-ROM, please use the RMinstsh and RMuninst scripts on the
CD-ROM to automatically install and uninstall the CCI software. For other media, please use
the following instructions. Note: The following instructions refer to UNIX commands which
may be different on your platform. Please consult your operating system documentation
(e.g., UNIX man pages) for platform-specific command information.
New Installation into Root Directory:
1. Insert the installation medium into the proper I/O device.
2. Move to the current root directory: # cd /
3. Copy all files from the installation medium using the cpio command:
# cpio -idmu < /dev/XXXX
XXXX = I/O device
Preserve the directory structure (d flag) and file modification times (m flag), and copy
unconditionally (u flag). For floppy disks, load them sequentially, and repeat the
command. An I/O device name of floppy disk designates a surface partition of the raw
device file (unpartitioned raw device file).
4. Execute the HORCM installation command: # /HORCM/horcminstall.sh
5. Verify installation of the proper version using the raidqry command:
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-22-03/02
Usage: raidqry [options]
New Installation into Non-Root Directory:
1. Insert the installation medium (e.g., CD-ROM) into the proper I/O device.
2. Move to the desired directory for CCI. The specified directory must be mounted on a
partition other than the root disk, or on an external disk.
# cd /Specified Directory
3. Copy all files from the installation medium using the cpio command:
# cpio -idmu < /dev/XXXX
XXXX = I/O device
Preserve the directory structure (d flag) and file modification times (m flag), and copy
unconditionally (u flag). For floppy disks, load them sequentially, and repeat the
command. An I/O device name of floppy disk designates a surface partition of the raw
device file (unpartitioned raw device file).
4. Make a symbolic link for /HORCM:
# ln -s /Specified Directory/HORCM /HORCM
5. Execute the HORCM installation command: # /HORCM/horcminstall.sh
6. Verify installation of the proper version using the raidqry command:
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-22-03/02
Usage: raidqry [options]
Version Up. To install a new version of the CCI software:
1. Confirm that HORCM is not running. If it is running, shut it down:
One CCI instance: # horcmshutdown.sh
Two CCI instances: # horcmshutdown.sh 0 1
If Hitachi TrueCopy/ShadowImage commands are running in interactive mode,
terminate the interactive mode and exit these commands using the -q option.
2. Insert the installation medium (e.g., CD-ROM) into the proper I/O device.
3. Move to the directory containing the HORCM directory (e.g., # cd / for root directory).
4. Copy all files from the installation medium using the cpio command:
# cpio -idmu < /dev/XXXX
XXXX = I/O device
Preserve the directory structure (d flag) and file modification times (m flag), and copy
unconditionally (u flag). For floppy disks, load the disks sequentially and repeat the
command. When the I/O device is a floppy disk, the device name designates the raw
device file for the entire disk surface (the unpartitioned raw device file).
5. Execute the HORCM installation command: # /HORCM/horcminstall.sh
6. Verify installation of the proper version using the raidqry command:
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-22-03/02
Usage: raidqry [options]
3.3.2 Software Installation for Windows Systems
Make sure to install CCI on all servers involved in CCI operations. If a TCP/IP network is
not established, install the Windows networking components and add the TCP/IP protocol.
To install the CCI software on a Windows system:
1. If a previous version of CCI is already installed, uninstall it as follows:
a) Confirm that HORCM is not running. If it is running, shut it down:
One CCI instance: D:\HORCM\etc> horcmshutdown
Two CCI instances: D:\HORCM\etc> horcmshutdown 0 1
b) If Hitachi TrueCopy/ShadowImage commands are running in interactive mode,
terminate the interactive mode and exit these commands using the -q option.
c) Remove the previous version of CCI using the Add/Remove Programs control panel.
2. Insert the installation medium (e.g., CD-ROM) into the proper I/O device.
3. Run Setup.exe, and follow the instructions on screen to complete the installation.
4. Verify installation of the proper version using the raidqry command:
D:\HORCM\etc> raidqry -h
Model: RAID-Manager/Windows2000
Ver&Rev: 01-22-03/02
Usage: raidqry [options]
3.3.3 Software Installation for OpenVMS® Systems
Make sure to install CCI on all servers involved in CCI operations. Establish the network
(TCP/IP), if not already established. CCI is provided as the following PolyCenter
Software Installation (PCSI) files:
HITACHI-ARMVMS-RM-V0122-2-1.PCSI
HITACHI-I64VMS-RM-V0122-2-1.PCSI
CCI also requires that a POSIX root exists on the system, so you must define
SYS$POSIX_ROOT before installing the CCI software. It is recommended that you define
the following logical names for CCI in LOGIN.COM:
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT “Device:[directory]”
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:[horcm.etc]
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
$ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
$ SET PROCESS/PARSE_STYLE=EXTENDED
where Device:[directory] is defined as SYS$POSIX_ROOT
New installation. To install the CCI software on an OpenVMS® system:
1. Insert and mount the provided CD or diskette.
2. Execute the following command:
$ PRODUCT INSTALL RM /source=Device:[PROGRAM.RM.OVMS]/LOG -
_$ /destination=SYS$POSIX_ROOT:[000000]
where Device:[PROGRAM.RM.OVMS] is the directory in which HITACHI-ARMVMS-RM-V0122-2-1.PCSI exists
3. Verify installation of the proper version using the raidqry command:
$ raidqry -h
Model: RAID-Manager/OpenVMS
Ver&Rev: 01-22-03/02
Usage: raidqry [options]
Version update. To update the CCI software version on an OpenVMS® system:
1. Perform the update after making sure that HORCM is not running:
$ horcmshutdown          (for one HORCM instance)
$ horcmshutdown 0 1      (for two HORCM instances)
If a command is being used in interactive mode, terminate it using the -q option.
2. Insert and mount the provided CD or diskette.
3. Execute the following command:
$ PRODUCT INSTALL RM /source=Device:[PROGRAM.RM.OVMS]/LOG
where Device:[PROGRAM.RM.OVMS] is the directory in which HITACHI-ARMVMS-RM-V0122-2-1.PCSI exists
4. Verify installation of the proper version using the raidqry command:
$ raidqry -h
Model: RAID-Manager/OpenVMS
Ver&Rev: 01-22-03/02
Usage: raidqry [options]
3.3.4 Changing the CCI User (UNIX Systems)
The CCI software is initially configured to allow only the root user (system administrator) to
execute CCI commands. If desired (e.g., CCI administrator does not have root access), the
system administrator can change the CCI user from root to another user name.
To change the CCI user:
1. Change the owner of the following CCI files from the root user to the desired user name:
/HORCM/etc/horcmgr
All CCI commands in the /HORCM/usr/bin directory
All CCI log directories in the /HORCM/log* directories
2. Change the owner of the raw device file of the HORCM_CMD command device in the
configuration definition file from the root user to the desired user name.
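For example, a minimal sketch assuming the new CCI user is named rmadm and the
command device file is /dev/rdsk/c1t0d1 (both names are hypothetical; substitute
your own):
# chown rmadm /HORCM/etc/horcmgr
# chown rmadm /HORCM/usr/bin/*
# chown -R rmadm /HORCM/log*
# chown rmadm /dev/rdsk/c1t0d1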
3. Optional: Establish the HORCM (/etc/horcmgr) startup environment. If the environment
variables (HORCM_LOG and HORCM_LOGS) are fully specified, users can start the
horcmstart.sh command without an argument. In this case, the HORCM_LOG and
HORCM_LOGS directories must be owned by the CCI administrator. Set the environment
variables (HORCMINST, HORCM_CONF) as needed.
4. Optional: Establish the command execution environment. If the environment variable
(HORCC_LOG) is specified, the HORCC_LOG directory must be owned by the CCI
administrator. Set the environment variable (HORCMINST) as needed.
Note: On a Linux system, a user account must have the “CAP_SYS_ADMIN” and
“CAP_SYS_RAWIO” privileges to use the SCSI class driver (command device). The system
administrator can apply these privileges by using the PAM_capability module. If the
system administrator cannot set these user privileges, use the following method, in which
the HORCM daemon is started by the root user while other users execute the CCI
commands.
„ System administrator: Place a script that starts up horcmstart.sh in the /etc/init.d
directory so that the system can start HORCM from /etc/rc.d/rc.
„ Users: When the log directory is accessible only by the system administrator, you cannot
use the inqraid or raidscan -find commands. Therefore, set the command log directory
by setting the environment variable (HORCC_LOG) before executing the RM commands.
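For example, a sketch assuming /tmp/horcclog as a user-writable log directory
(a hypothetical path):
# HORCC_LOG=/tmp/horcclog
# export HORCC_LOG
# ls /dev/rdsk/* | inqraid -CLI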
3.3.5 Changing the CCI User (Windows Systems)
Usually, RAID Manager commands can be executed only by the system administrator,
because they must open the PhysicalDrive directly.
When the CCI administrator does not have the “administrator” privilege, or the system
administrator and the CCI administrator are different users, the CCI administrator can
use CCI commands after the following tasks are performed:
System Administrator Tasks
1. Add a user_name to the PhysicalDrive.
Add the user name of the CCI administrator to the Device Objects of the command
device for HORCM_CMD in the configuration definition file. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Phys
PhysicalDrive0 -> \Device\Harddisk0\DR0
\\.\PhysicalDrive0 : changed to allow ‘RMadmin’
2. Add a user_name to the Volume{GUID}.
If the CCI administrator needs to use the “-x mount/umount” option for CCI commands,
the system administrator must add the user name of the CCI administrator to the Device
Objects of the Volume{GUID}. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Volume
Volume{b0736c01-9b14-11d8-b1b6-806d6172696f} -> \Device\CdRom0
\\.\Volume{b0736c01-9b14-11d8-b1b6-806d6172696f} : changed to allow ‘RMadmin’
Volume{b0736c02-9b14-11d8-b1b6-806d6172696f} -> \Device\Floppy0
\\.\Volume{b0736c02-9b14-11d8-b1b6-806d6172696f} : changed to allow ‘RMadmin’
Volume{b0736c00-9b14-11d8-b1b6-806d6172696f} -> \Device\HarddiskVolume1
\\.\Volume{b0736c00-9b14-11d8-b1b6-806d6172696f} : changed to allow ‘RMadmin’
3. Add a user_name to the ScsiX.
If the CCI administrator needs to use the “-x portscan” option for CCI commands, the
system administrator must add the user name of the CCI administrator to the Device
Objects of the ScsiX. For example:
C:\HORCM\tool\>chgacl /A:RMadmin Scsi
Scsi0: -> \Device\Ide\IdePort0
\\.\Scsi0: : changed to allow ‘RMadmin’
Scsi1: -> \Device\Ide\IdePort1
\\.\Scsi1: : changed to allow ‘RMadmin’
Note: Because the ACL (Access Control List) of the Device Objects is reset every time
Windows starts up, these ACL settings must be applied again at each Windows start-up.
The ACL must also be set when new Device Objects are created.
CCI Administrator Tasks
1. Establish the HORCM (/etc/horcmgr) startup environment.
By default, the configuration definition file is located in the following directory:
%SystemDrive%:\windows\
Because users cannot write to this directory, the CCI administrator must change the
directory by using the HORCM_CONF variable. For example:
C:\HORCM\etc\>set HORCM_CONF=C:\Documents and Settings\RMadmin\horcm10.conf
C:\HORCM\etc\>set HORCMINST=10
C:\HORCM\etc\>horcmstart [This must be started without arguments]
Notes: The mountvol command is denied to users without administrator privilege;
therefore, the directory mount option of RM commands, which uses the mountvol
command, cannot be executed.
The inqraid “-gvinf” option uses the %SystemDrive%:\windows directory, so this option
cannot be used unless the system administrator allows writing to that directory.
However, RAID Manager can be switched from the %SystemDrive%:\windows directory
to the %TEMP% directory by setting the “HORCM_USE_TEMP” environment variable.
For example:
C:\HORCM\etc\>set HORCM_USE_TEMP=1
C:\HORCM\etc\>inqraid $Phys -gvinf
2. Ensure that RAID Manager commands and CCI (HORCM) run with the same privileges.
If the RAID Manager command and CCI execute with different privileges (different
users), the RAID Manager command is unable to attach to CCI, because the RAID
Manager command and CCI are denied communication through the Mailslot.
However, RAID Manager does permit a HORCM connection through the
“HORCM_EVERYCLI” environment variable, as shown in the following example.
C:\HORCM\etc\>set HORCM_CONF=C:\Documents and Settings\RMadmin\horcm10.conf
C:\HORCM\etc\>set HORCMINST=10
C:\HORCM\etc\>set HORCM_EVERYCLI=1
C:\HORCM\etc\>horcmstart [This must be started without arguments]
In this example, users who execute the RAID Manager command must be restricted to
using only that command. This can be done using Windows Explorer or the cacls
command.
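For example, a hedged sketch that restricts one command to the hypothetical user
RMadmin (the path and user name are assumptions; adjust them to your installation).
Without the /E option, cacls replaces the file’s ACL so that only the listed user retains
access, and R (read) permission is sufficient to execute the command:
C:\HORCM\etc\>cacls C:\HORCM\etc\pairdisplay.exe /P RMadmin:R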
3.3.6 Uninstallation
Uninstalling permanently removes software.
Uninstallation for UNIX systems. To uninstall the CCI software:
1. Confirm that CCI (HORCM) is not running. If it is running, shut it down:
One CCI instance: # horcmshutdown.sh
Two CCI instances: # horcmshutdown.sh 0 1
If Hitachi TrueCopy/ShadowImage commands are running in interactive mode,
terminate the interactive mode and exit these commands using the -q option.
2. When HORCM is installed in the root directory (/HORCM is not a symbolic link):
Execute the horcmuninstall command: # /HORCM/horcmuninstall.sh
Move to the root directory: # cd /
Delete the product using the rm command: # rm -rf /HORCM
3. When HORCM is not installed in the root directory (/HORCM is a symbolic link):
Execute the horcmuninstall command: # /Directory/HORCM/horcmuninstall.sh
Move to the root directory: # cd /
Delete the symbolic link for /HORCM: # rm /HORCM
Delete the product using the rm command: # rm -rf /Directory/HORCM
Uninstallation for Windows systems. To uninstall the CCI software:
1. Confirm that CCI (HORCM) is not running. If it is running, shut it down:
One CCI instance: D:\HORCM\etc> horcmshutdown
Two CCI instances: D:\HORCM\etc> horcmshutdown 0 1
2. Delete the installed CCI (RAID Manager) using the Add/Remove Programs control panel.
Uninstallation for OpenVMS® systems. To uninstall the CCI software:
1. Confirm that CCI (HORCM) is not running. If it is running, shut it down:
For one instance: $ horcmshutdown
For two instances: $ horcmshutdown 0 1
If a command is being used in interactive mode, terminate it using the -q option.
2. Delete the installed CCI software by using the following command:
$ PRODUCT REMOVE RM /LOG
3.4 Creating/Editing the Configuration File
The configuration definition file is a text file which is created and/or edited using any
standard text editor (e.g., UNIX vi editor, Windows Notepad). A sample configuration
definition file, HORCM_CONF (/HORCM/etc/horcm.conf), is included with the CCI software.
This file should be used as the basis for creating your configuration definition file(s). The
system administrator should copy the sample file, set the necessary parameters in the
copied file, and place the copied file in the proper directory.
Table 3.10 lists the parameters defined in the configuration file and specifies the default
value, type, and limit for each parameter.
Caution: Do not edit the configuration definition file while HORCM is running. Shut down
HORCM, edit the configuration file as needed, and then restart HORCM.
Note: Do not mix pairs created with the At-Time Split option (-m grp) and pairs created
without this option in the same group defined in the CCI configuration file. If you do, the
pairsplit operation might end abnormally, or S-VOLs of the P-VOLs in the same consistency
group might not be created correctly at the time when the pairsplit request is received.
Restrictions for a ShadowImage volume group (9900V and later) in the CCI configuration file:
„ ShadowImage volume group:
– A group cannot extend across multiple storage systems.
– If a CT group contains more than one device group, pair operations act on the entire
CT group.
– If a ShadowImage volume will be cascaded with a TrueCopy/UR volume, data
consistency is not maintained with pairsplit.
„ CTGID number. CCI assigns a CTGID to the disk array automatically when a user creates
ShadowImage volumes with the “paircreate -m grp” command, and the group in the
configuration file is mapped to that CTGID (see the example following this list). If CCI
cannot assign a free CTGID, the “paircreate -m grp” command terminates with
EX_ENOCTG. MAX CTGID:
– USP V/VM: 256 (0-255)
– USP/NSC: 256 (0-255)
– 9900V: 128 (0-127)
„ Number of configurable LDEVs with the “-m grp” option. Maximum number of
configurable LDEVs in the same CTGID:
– USP V/VM: 8192
– USP/NSC: 4096
– 9900V: 1024
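For example, a minimal sketch of creating ShadowImage pairs with an automatically
assigned consistency group (the group name oradb is hypothetical, and HORCC_MRCF=1
is assumed to be set for the ShadowImage environment):
# paircreate -g oradb -vl -m grp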
Table 3.10 Configuration (HORCM_CONF) Parameters
Parameter                 Default value  Type                               Limit
ip_address                None           Character string                   64 characters
service                   None           Character string or numeric value  15 characters
poll (10 ms)              1000           Numeric value                      None. See Note
timeout (10 ms)           3000           Numeric value                      None. See Note
dev_name for HORCM_DEV    None           Character string                   31 characters. Recommended value = 8 char. or less
dev_group                 None           Character string                   31 characters. Recommended value = 8 char. or less
port #                    None           Character string                   31 characters
target ID                 None           Numeric value                      7 characters. See Note
LU#                       None           Numeric value                      7 characters. See Note
MU#                       0              Numeric value                      7 characters. See Note
Serial#                   None           Numeric value                      12 characters
CU:LDEV(LDEV#)            None           Numeric value                      6 characters
dev_name for HORCM_CMD    None           Character string                   63 characters. Recommended value = 8 char. or less
Note: Use decimal notation for numeric values (not hexadecimal).
3.5 Porting Notice for OpenVMS
On OpenVMS, the UNIX system calls are supported as functions of the CRTL (C Run-Time
Library) in the user process, but the CRTL for OpenVMS does not fully support POSIX and
the POSIX shell as UNIX does. In addition, RAID Manager uses the UNIX domain socket for
IPC (Inter-Process Communication), but OpenVMS does not support the AF_UNIX socket.
As an alternative, RAID Manager accomplishes IPC between the RAID Manager command
and the HORCM daemon by using the Mailbox driver on OpenVMS.
As a result, RAID Manager has the following restrictions when ported to OpenVMS.
3.5.1 Requirements and Restrictions
(1) Version of OpenVMS.
CCI uses the CRTL and requires the following versions, which support the POSIX root directory:
„ OpenVMS Version 7.3-1 or later
„ A CRTL version that is installed prior to running CCI (Compaq C V6.5-001 was used in
testing)
(2) Defining the SYS$POSIX_ROOT.
CCI requires that a POSIX root exists on the system, so you must define SYS$POSIX_ROOT
before running CCI. For example:
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT “Device:[directory]”
where Device:[directory] is defined as SYS$POSIX_ROOT
(3) IPC method using the Mailbox driver.
As an alternative to the UNIX domain socket for IPC (Inter-Process Communication), RAID
Manager uses the Mailbox driver to enable communication between the RAID Manager
command and HORCM.
Therefore, if the RAID Manager command and HORCM will execute in different jobs
(different terminals), you must redefine LNM$TEMPORARY_MAILBOX in the
LNM$PROCESS_DIRECTORY table as follows:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
(4) Start-up method for the HORCM daemon.
On UNIX, HORCM can be started as a daemon process from the shell. However, with the
‘vfork’ of the CRTL, when a parent process calls exit(), its child process ends at the same
time; in other words, OpenVMS cannot create a daemon process from a POSIX program.
Therefore, horcmstart.exe has been changed to wait, after starting horcmgr, until HORCM
is terminated by horcmshutdown.exe. Following the OpenVMS rules for creating processes,
horcmstart.exe should be started as a detached process or a batch job by using a DCL
command; this method closely resembles the horcmd process on UNIX.
Hitachi Command Control Interface (CCI) User and Reference Guide
139
For example, using a detached process:
If you want the HORCM daemon to run in the background, you must create a detached
LOGINOUT.EXE process by using the OpenVMS ‘RUN /DETACHED’ command, and you must
create the command file given to LOGINOUT.EXE.
The following are examples of the “loginhorcm*.com” files given to SYS$INPUT for
LOGINOUT.EXE; in these examples, “VMS4$DKB100:[SYS0.SYSMGR.]” was defined as
SYS$POSIX_ROOT.
loginhorcm0.com
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT "VMS4$DKB100:[SYS0.SYSMGR.]"
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:[horcm.etc]
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
$ horcmstart 0
loginhorcm1.com
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT "VMS4$DKB100:[SYS0.SYSMGR.]"
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:[horcm.etc]
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
$ horcmstart 1
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm0.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.out -
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.err
%RUN-S-PROC_ID, identification of created process is 00004160
$
$
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm1 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm1.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.out -
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.err
%RUN-S-PROC_ID, identification of created process is 00004166
You can verify that the HORCM daemon is running as a detached process by using the
‘SHOW PROCESS’ command.
$ show process horcm0
25-MAR-2003 23:27:27.72   User: SYSTEM          Process ID:   00004160
                          Node: VMS4            Process name: "HORCM0"
Terminal:
User Identifier:    [SYSTEM]
Base priority:      4
Default file spec:  Not available
Number of Kthreads: 1
Soft CPU Affinity:  off
$
$ horcmshutdown 0 1
inst 0:
HORCM Shutdown inst 0 !!!
inst 1:
HORCM Shutdown inst 1 !!!
$
(5) Command device.
CCI uses the SCSI class driver to access the command device on the 9900V/9900, because
OpenVMS does not provide raw I/O devices as UNIX does; the command device is defined
with “DG*, DK*, GK*” as its logical name. The SCSI class driver requires the following
privileges: DIAGNOSE and PHY_IO or LOG_IO (for details, see the OpenVMS documentation).
In CCI version 01-12-03/03 or earlier, you need to define the physical device as DG*, DK*,
or GK* by using the DEFINE/SYSTEM command. For example:
$ show device
Device        Device          Error  Volume    Free      Trans  Mnt
Name          Status          Count  Label     Blocks    Count  Cnt
VMS4$DKB0:    Online              0
VMS4$DKB100:  Mounted             0  ALPHASYS  30782220    414    1
VMS4$DKB200:  Online              0
VMS4$DKB300:  Online              0
VMS4$DQA0:    Online              0
$1$DGA145:    (VMS4) Online       0
$1$DGA146:    (VMS4) Online       0
:
$1$DGA153:    (VMS4) Online       0
$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:
(6) -zx option for RAID Manager commands. The -zx option of the RAID Manager commands
uses the select() function to wait for an event from STDIN, but the OpenVMS select()
function does not support waiting for events from STDIN, and select() on a terminal (STDIN)
cannot echo back the terminal input.
Therefore, the -zx option is not supported for RAID Manager commands, and its description
has been removed from the Help & Usage displays.
(7) Syslog function. OpenVMS does not support a UNIX-like syslog function, so CCI does not
support the syslog function. As an alternative, use the HORCM log files for the HORCM
daemon.
(8) Start-up log files. At HORCM start-up, CCI shares a start-up log file between the two
start-up processes, but the CRTL does not correctly support sharing a file between two
processes. As a workaround, CCI keeps two separate start-up log files distinguished by PID,
as follows.
For example, under the SYS$POSIX_ROOT:[HORCM.LOG*.CURLOG] directory:
HORCMLOG_VMS4 HORCM_VMS4_10530.LOG HORCM_VMS4_10531.LOG
(9) Option syntax and case sensitivity.
VMS users are not accustomed to case-sensitive commands and UNIX-style option syntax.
Therefore, CCI relaxes case sensitivity and the “-xxx” option syntax to match the
expectations of VMS users as much as possible. CCI allows the “/xxx” syntax for options as
well as the “-xxx” syntax, but “/xxx” is a minor option form.
The following upper-case strings are not case sensitive:
„ DG*, DK*, or GK* for logical device names
„ -CLI, -FCA (-FHORC), or -FBC (-FMRCF) for the pair* command options
„ -CLI, -CLIWP, -CLIWN, or -CM for the inqraid options
„ Environment variable names such as HORCMINST … controlled by the CRTL
You also need to define the following logical name in your LOGIN.COM in order to
distinguish uppercase and lowercase:
$ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
$ SET PROCESS/PARSE_STYLE=EXTENDED
(10) Using the SPAWN command.
You can also start the HORCM process easily by using the SPAWN command. The following
examples use the SPAWN command on DCL.
For example, using spawn:
$ spawn /NOWAIT /PROCESS=horcm0 horcmstart 0
%DCL-S-SPAWNED, process HORCM0 spawned
$
starting HORCM inst 0
$ spawn /NOWAIT /PROCESS=horcm1 horcmstart 1
%DCL-S-SPAWNED, process HORCM1 spawned
$
starting HORCM inst 1
$
Note: The subprocess (HORCM) created by SPAWN is terminated when the terminal logs off
or the session is terminated. If you want a process that is independent of terminal logoff,
use the “RUN /DETACHED” command.
(11) Privileges for using RAID Manager.
„ A user account for RAID Manager must have the same privileges as “SYSTEM” so that it
can use the SCSI class driver and Mailbox driver directly. However, some OpenVMS
system administrators may not allow RAID Manager to run from the SYSTEM account
(equivalent to root on UNIX). In that case, it is recommended to create another account
on the system, such as “RMadmin”, that has privileges equivalent to “SYSTEM”. This
alleviates the problem of system administrators being reluctant to give out system
passwords.
„ RAID Manager uses the Mailbox driver to enable communication between RAID Manager
commands and HORCM, so RAID Manager commands and HORCM must have the same
privileges. If the RAID Manager command and HORCM execute with different privileges
(different users), the RAID Manager command hangs or is unable to attach to HORCM,
because the RAID Manager command and HORCM are denied communication through
the Mailbox.
(12) Installation.
RAID Manager is provided as the following PCSI (PolyCenter Software Installation) files:
- HITACHI-ARMVMS-RM-V0122-2-1.PCSI
- HITACHI-I64VMS-RM-V0122-2-1.PCSI
RAID Manager also requires that a POSIX root exists on the system, so you must define
POSIX_ROOT before installing RAID Manager.
It is recommended to define the following logical names for RAID Manager in LOGIN.COM
beforehand:
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT "Device:[directory]"
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:[horcm.etc]
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
$ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
$ SET PROCESS/PARSE_STYLE=EXTENDED
where Device:[directory] is defined as SYS$POSIX_ROOT
To install:
$ PRODUCT INSTALL RM /source=Device:[directory]/LOG -
_$ /destination=SYS$POSIX_ROOT:[000000]
where Device:[directory] is the directory in which HITACHI-ARMVMS-RM-V0122-2-1.PCSI exists
:
:
$ PRODUCT SHOW PRODUCT RM
----------------------------------------- ----------- ------------
PRODUCT KIT TYPE STATE
----------------------------------------- ----------- ------------
HITACHI ARMVMS RM V1.22-2 Full LP Installed
----------------------------------------- ----------- ------------
$ raidqry -h
Model : RAID-Manager/OpenVMS
Ver&Rev: 01-22-03/02
:
:
To display the installation history:
$ PRODUCT SHOW HISTORY RM /FULL
To remove:
$ PRODUCT REMOVE RM /LOG
(13) Exit codes of commands on DCL.
RAID Manager return codes are the same on all platforms; however, if the process was
invoked from DCL, the status is interpreted by DCL and a message is displayed as shown
below.
---------------------------on DCL of OpenVMS-------------------------
$ pairdisplay jjj
PAIRDISPLAY: requires '-jjj' or '/jjj' as argument
PAIRDISPLAY: [EX_REQARG] Required Arg list
Refer to the command log(SYS$POSIX_ROOT:[HORCM.LOG]HORCC_RMOVMS.LOG
(/HORCM/log/horcc_rmovms.log)) for details.
$ sh sym $status
$STATUS == "%X0035A7F1"
$
$ pairdisplay -g aaa
PAIRDISPLAY: [EX_ATTHOR] Can't be attached to HORC manager
Refer to the command log(SYS$POSIX_ROOT:[HORCM.LOG]HORCC_RMOVMS.LOG
(/HORCM/log/horcc_rmovms.log)) for details.
$ sh sym $status
$STATUS == "%X0035A7D9"
--------------------------on DCL of OpenVMS--------------------------
You can derive the exit code of a RAID Manager command from the DCL $status value
using the following formula:
Exit code of RM command = ($status % 2048) / 8
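As a worked example using the $STATUS values shown above (and integer division):
%X0035A7F1 is 3516401 in decimal, 3516401 % 2048 = 2033, and 2033 / 8 = 254, so the
exit code for the EX_REQARG case is 254. Similarly, %X0035A7D9 gives 2009 / 8 = 251
for the EX_ATTHOR case.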
3.5.2 Known Issues
Rebooting in the PAIR state (write-disabled)
OpenVMS does not show write-disabled volumes (e.g., SVOL_PAIR) at system start-up;
therefore, the SVOLs are hidden after rebooting in the PAIR state or SUSPEND (read-only)
mode. You can verify that the “show device” and “inqraid” commands do not show the
SVOLs after reboot, as below (note that the DGA148 and DGA150 devices are SVOL_PAIR).
$ sh dev dg
Device        Device          Error  Volume  Free    Trans  Mnt
Name          Status          Count  Label   Blocks  Count  Cnt
$1$DGA145:    (VMS4) Online       0
$1$DGA146:    (VMS4) Online       0
$1$DGA147:    (VMS4) Online       0
$1$DGA149:    (VMS4) Online       0
$1$DGA151:    (VMS4) Online       0
$1$DGA152:    (VMS4) Online       0
$1$DGA153:    (VMS4) Online       0
$ inqraid DKA145-153 -cli
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
DKA145       CL1-H   30009   145    -  -          -  -        OPEN-9-CM
DKA146       CL1-H   30009   146    -  s/P/ss  0004  5:01-11  OPEN-9
DKA147       CL1-H   30009   147    -  s/S/ss  0004  5:01-11  OPEN-9
DKA148       -           -     -    -  -          -  -        -
DKA149       CL1-H   30009   149    -  P/s/ss  0004  5:01-11  OPEN-9
DKA150       -           -     -    -  -          -  -        -
DKA151       CL1-H   30009   151    -  P/s/ss  0004  5:01-11  OPEN-9
DKA152       CL1-H   30009   152    -  s/s/ss  0004  5:01-11  OPEN-9
DKA153       CL1-H   30009   153    -  s/s/ss  0004  5:01-11  OPEN-9
$ inqraid DKA148
sys$assign : DKA148 -> errcode = 2312
DKA148 -> OPEN: no such device or address
After making the SVOLs write-enabled by using the “pairsplit” or “horctakeover”
command, you need to run the “mcr sysman” command in order to use the SVOLs for
backup or disaster recovery.
$ pairsplit -g CAVG -rw
$ mcr sysman
SYSMAN> io auto
SYSMAN> exit
$ sh dev dg
Device        Device          Error  Volume  Free    Trans  Mnt
Name          Status          Count  Label   Blocks  Count  Cnt
$1$DGA145:    (VMS4) Online       0
$1$DGA146:    (VMS4) Online       0
$1$DGA147:    (VMS4) Online       0
$1$DGA148:    (VMS4) Online       0
$1$DGA149:    (VMS4) Online       0
$1$DGA150:    (VMS4) Online       0
$1$DGA151:    (VMS4) Online       0
$1$DGA152:    (VMS4) Online       0
$1$DGA153:    (VMS4) Online       0
3.5.3 Start-up Procedures Using Detached Process on DCL
(1) Create the shareable logical names for RAID if not defined initially.
In CCI version 01-12-03/03 or earlier, CCI (RAID Manager) requires the physical devices
($1$DGA145 …) to be defined as DG*, DK*, or GK* logical names by using the SHOW DEVICE
and DEFINE/SYSTEM commands; the devices do not need to be mounted.
$ show device
Device        Device          Error  Volume  Free    Trans  Mnt
Name          Status          Count  Label   Blocks  Count  Cnt
$1$DGA145:    (VMS4) Online       0
$1$DGA146:    (VMS4) Online       0
:
$1$DGA153:    (VMS4) Online       0
$
$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:
(2) Define the environment for RAID Manager in LOGIN.COM.
You need to define the path for the RAID Manager commands in DCL$PATH as foreign
commands. Refer to the section about Automatic Foreign Commands in the OpenVMS
User’s Manual.
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin],SYS$POSIX_ROOT:[horcm.etc]
If the RAID Manager command and HORCM will execute in different jobs (different
terminals), then you must redefine LNM$TEMPORARY_MAILBOX in the
LNM$PROCESS_DIRECTORY table as follows:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
(3) Discover and describe the command device on SYS$POSIX_ROOT:[etc]horcm0.conf.
$ inqraid DKA145-151 -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
DKA145       CL1-H   30009   145    -  -          -  -        OPEN-9-CM
DKA146       CL1-H   30009   146    -  s/S/ss  0004  5:01-11  OPEN-9
DKA147       CL1-H   30009   147    -  s/P/ss  0004  5:01-11  OPEN-9
DKA148       CL1-H   30009   148    -  s/S/ss  0004  5:01-11  OPEN-9
DKA149       CL1-H   30009   149    -  s/P/ss  0004  5:01-11  OPEN-9
DKA150       CL1-H   30009   150    -  s/S/ss  0004  5:01-11  OPEN-9
DKA151       CL1-H   30009   151    -  s/P/ss  0004  5:01-11  OPEN-9

SYS$POSIX_ROOT:[etc]horcm0.conf
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
127.0.0.1     30001     1000         3000

HORCM_CMD
#dev_name     dev_name     dev_name
DKA145
You must start HORCM without descriptions for HORCM_DEV and HORCM_INST because the
target IDs and LUNs are unknown. You can easily determine the mapping between the
physical devices and the logical names by using the raidscan -find command option.
(4) Execute an ‘horcmstart 0’.
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm0.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.out -
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.err
%RUN-S-PROC_ID, identification of created process is 00004160
(5) Verify a physical mapping of the logical device.
$ HORCMINST := 0
$ raidscan -pi DKA145-151 -find
DEVICE_FILE  UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PRODUCT_ID
DKA145         0    F  CL1-H     0    1   30009   145  OPEN-9-CM
DKA146         0    F  CL1-H     0    2   30009   146  OPEN-9
DKA147         0    F  CL1-H     0    3   30009   147  OPEN-9
DKA148         0    F  CL1-H     0    4   30009   148  OPEN-9
DKA149         0    F  CL1-H     0    5   30009   149  OPEN-9
DKA150         0    F  CL1-H     0    6   30009   150  OPEN-9
DKA151         0    F  CL1-H     0    7   30009   151  OPEN-9
$ horcmshutdown 0
inst 0:
HORCM Shutdown inst 0 !!!
(6) Describe the known HORCM_DEV on SYS$POSIX_ROOT:[etc]horcm*.conf
For horcm0.conf:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H          0     2     0
VG01         oradb2     CL1-H          0     4     0
VG01         oradb3     CL1-H          0     6     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        horcm1

For horcm1.conf:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H          0     3     0
VG01         oradb2     CL1-H          0     5     0
VG01         oradb3     CL1-H          0     7     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTA        horcm0
Note: Define the UDP port names for HORCM communication in the
SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT file, as in the example below.
horcm0   30001/udp
horcm1   30002/udp
(7) Start horcm0 and horcm1 as detached processes.
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm0.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.out -
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run0.err
%RUN-S-PROC_ID, identification of created process is 00004160
$
$
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm1 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm1.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.out -
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.err
%RUN-S-PROC_ID, identification of created process is 00004166
You can verify that the HORCM daemons are running as detached processes by using the
SHOW PROCESS command.
$ show process horcm0
25-MAR-2003 23:27:27.72   User: SYSTEM          Process ID:   00004160
                          Node: VMS4            Process name: "HORCM0"
Terminal:
User Identifier:    [SYSTEM]
Base priority:      4
Default file spec:  Not available
Number of Kthreads: 1
Soft CPU Affinity:  off
3.5.4 Command Examples in DCL
(1) Setting the environment variables by using symbols.
$ HORCMINST := 0
$ HORCC_MRCF := 1
$ raidqry -l
No  Group  Hostname  HORCM_ver     Uid  Serial#  Micro_ver    Cache(MB)
 1  ---    VMS4      01-22-03/02     0    30009  50-04-00/00       8192
$
$ pairdisplay -g VG01 -fdc
Group  PairVol(L/R)  Device_File  M ,Seq#,LDEV#.P/S,Status,  % ,P-LDEV# M
VG01   oradb1(L)     DKA146       0  30009  146..S-VOL PAIR,  100    147 -
VG01   oradb1(R)     DKA147       0  30009  147..P-VOL PAIR,  100    146 -
VG01   oradb2(L)     DKA148       0  30009  148..S-VOL PAIR,  100    149 -
VG01   oradb2(R)     DKA149       0  30009  149..P-VOL PAIR,  100    148 -
VG01   oradb3(L)     DKA150       0  30009  150..S-VOL PAIR,  100    151 -
VG01   oradb3(R)     DKA151       0  30009  151..P-VOL PAIR,  100    150 -
$
(2) Removing an environment variable.
$ DELETE/SYMBOL HORCC_MRCF
$ pairdisplay -g VG01 -fdc
Group  PairVol(L/R)  Device_File  ,Seq#,LDEV#.P/S,Status,Fence,  % ,P-LDEV# M
VG01   oradb1(L)     DKA146       30009  146..SMPL ----  ------,-----  ----  -
VG01   oradb1(R)     DKA147       30009  147..SMPL ----  ------,-----  ----  -
VG01   oradb2(L)     DKA148       30009  148..SMPL ----  ------,-----  ----  -
VG01   oradb2(R)     DKA149       30009  149..SMPL ----  ------,-----  ----  -
VG01   oradb3(L)     DKA150       30009  150..SMPL ----  ------,-----  ----  -
VG01   oradb3(R)     DKA151       30009  151..SMPL ----  ------,-----  ----  -
$
(3) Changing the default log directory.
$ HORCC_LOG := /horcm/horcm/TEST
$ pairdisplay
PAIRDISPLAY: requires '-x xxx' as argument
PAIRDISPLAY: [EX_REQARG] Required Arg list
Refer to the command log(SYS$POSIX_ROOT:[HORCM.HORCM.TEST]HORCC_VMS4.LOG (/HORCM
/HORCM/TEST/horcc_VMS4.log)) for details.
(4) Turning back to the default log directory.
$ DELETE/SYMBOL HORCC_LOG
(5) Specifying the devices described in scandev.LIS.
$ define dev_file SYS$POSIX_ROOT:[etc]SCANDEV
$ type dev_file
DKA145-150
$
$ pipe type dev_file | inqraid -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
DKA145       CL1-H   30009   145    -  -          -  -        OPEN-9-CM
DKA146       CL1-H   30009   146    -  s/S/ss  0004  5:01-11  OPEN-9
DKA147       CL1-H   30009   147    -  s/P/ss  0004  5:01-11  OPEN-9
DKA148       CL1-H   30009   148    -  s/S/ss  0004  5:01-11  OPEN-9
DKA149       CL1-H   30009   149    -  s/P/ss  0004  5:01-11  OPEN-9
DKA150       CL1-H   30009   150    -  s/S/ss  0004  5:01-11  OPEN-9
(6) Creating the configuration file automatically.
You can omit steps (3) through (6) of the start-up procedures by using the mkconf
command.
$ type dev_file
DKA145-150
$
$ pipe type dev_file | mkconf -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
starting HORCM inst 9
DEVICE_FILE  Group  PairVol  PORT   TARG  LUN  M  SERIAL  LDEV
DKA145       -      -        -         -    -  -   30009   145
DKA146       URA    URA_000  CL1-H     0    2  0   30009   146
DKA147       URA    URA_001  CL1-H     0    3  0   30009   147
DKA148       URA    URA_002  CL1-H     0    4  0   30009   148
DKA149       URA    URA_003  CL1-H     0    5  0   30009   149
DKA150       URA    URA_004  CL1-H     0    6  0   30009   150
HORCM Shutdown inst 9 !!!
Please check 'SYS$SYSROOT:[SYSMGR]HORCM9.CONF','SYS$SYSROOT:[SYSMGR.LOG9.CURLOG]
HORCM_*.LOG', and modify 'ip_address & service'.
HORCM inst 9 finished successfully.
$
SYS$SYSROOT:[SYSMGR]horcm9.conf (/sys$sysroot/sysmgr/horcm9.conf)
# Created by mkconf on Thu Mar 13 20:08:41

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
127.0.0.1     52323     1000         3000

HORCM_CMD
#dev_name     dev_name     dev_name
#UnitID 0 (Serial# 30009)
DKA145
# ERROR [CMDDEV] DKA145  SER = 30009 LDEV = 145 [ OPEN-9-CM ]

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
# DKA146  SER = 30009 LDEV = 146 [ FIBRE FCTBL = 3 ]
URA          URA_000    CL1-H          0     2     0
# DKA147  SER = 30009 LDEV = 147 [ FIBRE FCTBL = 3 ]
URA          URA_001    CL1-H          0     3     0
# DKA148  SER = 30009 LDEV = 148 [ FIBRE FCTBL = 3 ]
URA          URA_002    CL1-H          0     4     0
# DKA149  SER = 30009 LDEV = 149 [ FIBRE FCTBL = 3 ]
URA          URA_003    CL1-H          0     5     0
# DKA150  SER = 30009 LDEV = 150 [ FIBRE FCTBL = 3 ]
URA          URA_004    CL1-H          0     6     0

HORCM_INST
#dev_group   ip_address   service
URA          127.0.0.1    52323
(7) Using $1$* naming as the native device name.
You can use the native devices without the DEFINE/SYSTEM command by specifying the
$1$* names directly.
$ inqraid $1$DGA145-155 -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
$1$DGA145    CL2-H   30009   145    -  -          -  -        OPEN-9-CM
$1$DGA146    CL2-H   30009   146    -  s/P/ss  0004  5:01-11  OPEN-9
$1$DGA147    CL2-H   30009   147    -  s/S/ss  0004  5:01-11  OPEN-9
$1$DGA148    CL2-H   30009   148    0  P/s/ss  0004  5:01-11  OPEN-9

$ pipe show device | INQRAID -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
$1$DGA145    CL2-H   30009   145    -  -          -  -        OPEN-9-CM
$1$DGA146    CL2-H   30009   146    -  s/P/ss  0004  5:01-11  OPEN-9
$1$DGA147    CL2-H   30009   147    -  s/S/ss  0004  5:01-11  OPEN-9
$1$DGA148    CL2-H   30009   148    0  P/s/ss  0004  5:01-11  OPEN-9
$ pipe show device | MKCONF -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
starting HORCM inst 9
DEVICE_FILE  Group  PairVol  PORT   TARG  LUN  M  SERIAL  LDEV
$1$DGA145    -      -        -         -    -  -   30009   145
$1$DGA146    URA    URA_000  CL2-H     0    2  0   30009   146
$1$DGA147    URA    URA_001  CL2-H     0    3  0   30009   147
$1$DGA148    URA    URA_002  CL2-H     0    4  0   30009   148
HORCM Shutdown inst 9 !!!
Please check 'SYS$SYSROOT:[SYSMGR]HORCM9.CONF','SYS$SYSROOT:[SYSMGR.LOG9.CURLOG]
HORCM_*.LOG', and modify 'ip_address & service'.
HORCM inst 9 finished successfully.
$
$ pipe show device | RAIDSCAN -find
DEVICE_FILE  UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PRODUCT_ID
$1$DGA145      0    F  CL2-H     0    1   30009   145  OPEN-9-CM
$1$DGA146      0    F  CL2-H     0    2   30009   146  OPEN-9
$1$DGA147      0    F  CL2-H     0    3   30009   147  OPEN-9
$1$DGA148      0    F  CL2-H     0    4   30009   148  OPEN-9
$ pairdisplay -g BCVG -fdc
Group  PairVol(L/R)  Device_File  M ,Seq#,LDEV#..P/S,Status,  % ,P-LDEV# M
BCVG   oradb1(L)     $1$DGA146    0  30009  146..P-VOL PAIR,  100    147 -
BCVG   oradb1(R)     $1$DGA147    0  30009  147..S-VOL PAIR,  100    146 -
$
$ pairdisplay -dg $1$DGA146
Group  PairVol(L/R)  (Port#,TID, LU-M) ,Seq#,LDEV#..P/S,Status, Seq#,P-LDEV# M
BCVG   oradb1(L)     (CL1-H , 0, 2-0)   30009  146..P-VOL PAIR,30009    147 -
BCVG   oradb1(R)     (CL1-H , 0, 3-0)   30009  147..S-VOL PAIR,-----    146 -
$
3.5.5 Start-up Procedures in Bash
Using CCI (RAID Manager) through bash is not recommended, because bash is not provided
as an official release in OpenVMS 7.3-1.
(1) Create the shareable logical names for RAID if not defined initially.
You need to define the physical devices ($1$DGA145 …) as DG*, DK*, or GK* logical names
by using the SHOW DEVICE and DEFINE/SYSTEM commands; the devices do not need to be
mounted.
$ show device
Device        Device          Error  Volume  Free    Trans  Mnt
Name          Status          Count  Label   Blocks  Count  Cnt
$1$DGA145:    (VMS4) Online       0
$1$DGA146:    (VMS4) Online       0
:
$1$DGA153:    (VMS4) Online       0
$
$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:
(2) Define the environment for RAID Manager in LOGIN.COM.
If the RAID Manager command and HORCM will execute in different jobs (different
terminals), then you must redefine LNM$TEMPORARY_MAILBOX in the
LNM$PROCESS_DIRECTORY table as follows:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
(3) Discover and describe the command device on /etc/horcm0.conf.
bash$ inqraid DKA145-151 -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
DKA145       CL1-H   30009   145    -  -          -  -        OPEN-9-CM
DKA146       CL1-H   30009   146    -  s/S/ss  0004  5:01-11  OPEN-9
DKA147       CL1-H   30009   147    -  s/P/ss  0004  5:01-11  OPEN-9
DKA148       CL1-H   30009   148    -  s/S/ss  0004  5:01-11  OPEN-9
DKA149       CL1-H   30009   149    -  s/P/ss  0004  5:01-11  OPEN-9
DKA150       CL1-H   30009   150    -  s/S/ss  0004  5:01-11  OPEN-9
DKA151       CL1-H   30009   151    -  s/P/ss  0004  5:01-11  OPEN-9

/etc/horcm0.conf
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
127.0.0.1     52000     1000         3000

HORCM_CMD
#dev_name     dev_name     dev_name
DKA145

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#

HORCM_INST
#dev_group   ip_address   service
You must start HORCM without descriptions for HORCM_DEV and HORCM_INST because the
target IDs and LUNs are unknown. You can easily determine the mapping between the
physical devices and the logical names by using the raidscan -find command option.
(4) Execute ‘horcmstart 0’ in the background.
bash$ horcmstart 0 &
18
bash$
starting HORCM inst 0
(5) Verify a physical mapping of the logical device.
bash$ export HORCMINST=0
bash$ raidscan -pi DKA145-151 -find
DEVICE_FILE  UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PRODUCT_ID
DKA145         0    F  CL1-H     0    1   30009   145  OPEN-9-CM
DKA146         0    F  CL1-H     0    2   30009   146  OPEN-9
DKA147         0    F  CL1-H     0    3   30009   147  OPEN-9
DKA148         0    F  CL1-H     0    4   30009   148  OPEN-9
DKA149         0    F  CL1-H     0    5   30009   149  OPEN-9
DKA150         0    F  CL1-H     0    6   30009   150  OPEN-9
DKA151         0    F  CL1-H     0    7   30009   151  OPEN-9
(6) Describe the known HORCM_DEV on /etc/horcm*.conf.
For horcm0.conf:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H          0     2     0
VG01         oradb2     CL1-H          0     4     0
VG01         oradb3     CL1-H          0     6     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        horcm1

For horcm1.conf:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H          0     3     0
VG01         oradb2     CL1-H          0     5     0
VG01         oradb3     CL1-H          0     7     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTA        horcm0
(7) Start ‘horcmstart 0 1’.
Note: The subprocesses (HORCM) created by bash are terminated when bash exits.
bash$ horcmstart 0 &
19
bash$
starting HORCM inst 0
bash$ horcmstart 1 &
20
bash$
starting HORCM inst 1
3.6 CCI Startup
After you have installed the CCI software and created/edited the configuration definition
file(s), you can start CCI (HORCM) and begin performing Hitachi TrueCopy and/or
ShadowImage operations on the attached storage systems.
3.6.1 Startup for UNIX Systems
One Instance. To start up one instance of CCI on a UNIX system:
1. Modify /etc/services to register the port name/number (service) of the configuration
definition file. Make the port name/number the same on all servers.
horcm xxxxx/udp
xxxxx = the port name/number of horcm.conf
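For example, a sketch using an assumed free UDP port number (any unused port may be
chosen):
horcm 11000/udp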
2. If you want HORCM to start automatically each time the system starts up, add
/etc/horcmstart.sh to the system automatic start-up file (e.g., /sbin/rc).
3. Execute the horcmstart.sh script manually to start the CCI instance:
# horcmstart.sh
4. Set the log directory (HORCC_LOG) in the command execution environment as needed.
5. If you want to perform Hitachi TrueCopy operations, do not set the HORCC_MRCF
environment variable. If you want to perform ShadowImage operations, set the
HORCC_MRCF environment variable for the HORCM execution environment.
For B shell:
# HORCC_MRCF=1
# export HORCC_MRCF
For C shell:
# setenv HORCC_MRCF 1
# pairdisplay -g xxxx
xxxx = group name
Two Instances. To start up two instances of CCI on a UNIX system:
1. Modify /etc/services to register the port name/number (service) of each configuration
definition file. The port name/number must be different for each CCI instance.
horcm0 xxxxx/udp
horcm1 yyyyy/udp
xxxxx = the port name/number for horcm0.conf
yyyyy = the port name/number for horcm1.conf
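For example, a sketch using assumed free UDP port numbers (any two unused ports may be
chosen):
horcm0 11000/udp
horcm1 11001/udp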
2. If you want HORCM to start automatically each time the system starts up, add
/etc/horcmstart.sh 0 1 to the system automatic start-up file (e.g., /sbin/rc).
3. Execute the horcmstart.sh script manually to start the CCI instances:
# horcmstart.sh 0 1
4. Set an instance number to the environment which executes a command:
For B shell:
# HORCMINST=X
X = instance number = 0 or 1
# export HORCMINST
For C shell:
# setenv HORCMINST X
5. Set the log directory (HORCC_LOG) in the command execution environment as needed.
6. If you want to perform Hitachi TrueCopy operations, do not set the HORCC_MRCF
environment variable. If you want to perform ShadowImage operations, set the
HORCC_MRCF environment variable for the HORCM execution environment.
For B shell:
# HORCC_MRCF=1
# export HORCC_MRCF
For C shell:
# setenv HORCC_MRCF 1
# pairdisplay -g xxxx
xxxx = group name
3.6.2 Startup for Windows Systems
One Instance. To start up one instance of CCI on a Windows system:
1. Modify \WINNT\system32\drivers\etc\services to register the port name/number
(service) of the configuration definition file. Make the port name/number the same on
all servers: horcm xxxxx/udp
xxxxx = the port name/number of horcm.conf
2. If you want HORCM to start automatically each time the system starts up, add
\HORCM\etc\horcmstart to the system automatic start-up file (e.g., \autoexec.bat).
3. Execute the horcmstart script manually to start CCI: D:\HORCM\etc> horcmstart
4. Set the log directory (HORCC_LOG) in the command execution environment as needed.
5. If you want to perform Hitachi TrueCopy operations, do not set the HORCC_MRCF
environment variable. If you want to perform ShadowImage operations, set the
HORCC_MRCF environment variable for the HORCM execution environment:
D:\HORCM\etc> set HORCC_MRCF=1
D:\HORCM\etc> pairdisplay -g xxxx
xxxx = group name
Two Instances. To start up two instances of CCI on a Windows system:
1. Modify \WINNT\system32\drivers\etc\services to register the port name/number
(service) of the configuration definition files. Make sure that the port name/number is
different for each instance:
horcm0 xxxxx/udp
horcm1 yyyyy/udp
xxxxx = the port name/number of horcm0.conf
yyyyy = the port name/number of horcm1.conf
2. If you want HORCM to start automatically each time the system starts up, add
\HORCM\etc\horcmstart 0 1 to the system automatic start-up file (e.g., \autoexec.bat).
3. Execute the horcmstart script manually to start CCI: D:\HORCM\etc> horcmstart 0 1
4. Set an instance number to the environment which executes a command:
D:\HORCM\etc> set HORCMINST=X
X = instance number = 0 or 1
5. Set the log directory (HORCC_LOG) in the command execution environment as needed.
6. If you want to perform Hitachi TrueCopy operations, do not set the HORCC_MRCF
environment variable. If you want to perform ShadowImage operations, set the
HORCC_MRCF environment variable for the HORCM execution environment:
D:\HORCM\etc> set HORCC_MRCF=1
D:\HORCM\etc> pairdisplay -g xxxx
xxxx = group name
3.6.3 Startup for OpenVMS® Systems
One Instance. To start up one instance of CCI on an OpenVMS® system:
1. For a new installation, the configuration definition sample file is supplied
(SYS$POSIX_ROOT:[HORCM.etc]horcm.conf). Make a copy of the file:
$ COPY SYS$POSIX_ROOT:[HORCM.etc]horcm.conf SYS$POSIX_ROOT:[etc]
Edit this file according to the system configuration using a text editor (e.g., eve).
Register the port name (service) of the configuration definition file in
“SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT”:
horcm xxxxx/udp      (where “xxxxx” denotes a port number)
Use the same port number on all servers. The port number can also be specified
directly without registering it in “SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT”.
2. Manually execute the HORCM startup command.
$ spawn /nowait /process=horcm horcmstart
Note: The subprocess (HORCM) created by SPAWN is terminated when the terminal
logs off or the session is terminated. If you want a process that is independent of
terminal logoff, use the “RUN /DETACHED” command (refer to item (4) in section 3.5.1).
3. Confirm the configuration.
Set the log directory (HORCC_LOG) in the command execution environment as required.
Note: If the log directory under SYS$POSIX_ROOT is shared with other nodes, the log
directory of Horc Manager must be set for each node. The log directory of Horc
Manager can be changed by setting the parameter of horcmstart (see Table 4.35).
When the command issued is for HOMRCF, set the environment variable (HORCC_MRCF).
$ HORCC_MRCF:=1
$ pairdisplay -g xxxx Where “xxxx” denotes a group name.
Note: If a system configuration change or a RAID configuration change causes this file to
change (e.g., cache size change or microcode change), these changes will not take effect
until you stop HORCM (horcmshutdown) and restart HORCM (horcmstart). Use the “-c”
option of the pairdisplay command to verify that there are no configuration errors.
Two Instances. To start up two instances of CCI on an OpenVMS® system:
1. For a new installation, the configuration definition sample file is supplied
(SYS$POSIX_ROOT:[HORCM.etc]horcm.conf). Copy the file twice, once for each instance:
$ COPY SYS$POSIX_ROOT:[HORCM.etc]horcm.conf SYS$POSIX_ROOT:[etc]horcm0.conf
$ COPY SYS$POSIX_ROOT:[HORCM.etc]horcm.conf SYS$POSIX_ROOT:[etc]horcm1.conf
Edit these two files according to the system configuration using a text editor (e.g., eve).
Register the port names (services) of the configuration definition files in
“SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT”:
horcm0 xxxxx/udp      (where “xxxxx” denotes a port number)
horcm1 yyyyy/udp      (where “yyyyy” denotes a port number)
Each instance should have a unique port number. The port numbers can also be
specified directly without registering them in
“SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT”.
2. Execute the HORCM startup command.
$ spawn /nowait /process=horcm0 horcmstart 0
$ spawn /nowait /process=horcm1 horcmstart 1
Note: The subprocess (HORCM) created by SPAWN is terminated when the terminal
logs off or the session is terminated. If you want a process that is independent of
terminal logoff, use the “RUN /DETACHED” command (refer to item (4) in section 3.5.1).
3. Set the HORCM instance numbers in the environment in which the command is to be
executed:
$ HORCMINST:=X
where “X” denotes an instance number (0 or 1)
4. Confirm the configuration using a RAID Manager command.
Set the log directory (HORCC_LOG) in the command execution environment as required.
Note: If the log directory under SYS$POSIX_ROOT is shared with other nodes, the log
directory of Horc Manager must be set for each node. The log directory of Horc
Manager can be changed by setting the parameter of horcmstart (see Table 4.35).
When the command issued is for HOMRCF, set the environment variable (HORCC_MRCF).
$ HORCC_MRCF:=1
$ pairdisplay -g xxxx Where “xxxx” denotes a group name.
Note: If a system configuration change or a RAID configuration change causes this file to
change (e.g., cache size change, microcode change), these changes will not take effect until
you stop HORCM (horcmshutdown 0 1) and restart HORCM (horcmstart 0 and horcmstart 1).
Use the “-c” option of the pairdisplay command to verify that there are no configuration errors.
3.7 Starting CCI as a Service (Windows Systems)
Usually, CCI (HORCM) is started by executing the start-up script from the Windows services.
However, in the VSS environment, there is no interface to start CCI automatically. Therefore,
CCI provides the svcexe.exe command and a sample script file (HORCM0_run.txt) so that
CCI can be started automatically from the services:
C:\HORCM\tool\>svcexe
Usage for adding [HORCM_START_SVC]:  svcexe /A=command_path
   for deleting [HORCM_START_SVC]:   svcexe /D
   for specifying a service:         svcexe /S=service_name
   for dependent services:           svcexe /C=service_name,service_name
This command example uses HORCM0 as the registered service name for HORCM
instance #0:
Example for adding [HORCM0]:  svcexe /S=HORCM0 “/A=C:\HORCM\tool\svcexe.exe”
   for deleting [HORCM0]:     svcexe /S=HORCM0 /D
   for starting [HORCM0]:     [1] make a C:\HORCM\tool\HORCM0_run.txt file.
                              [2] set a user account to this service.
                              [3] confirm to start by ‘horcmstart 0’.
                              [4] confirm to stop by ‘horcmshutdown 0’.
                              [5] start from a service by ‘net start HORCM0’.
Performing Additional Configuration Tasks
1. Registering the HORCM instance as a service.
The system administrator must add the HORCM instance using the following command:
C:\HORCM\tool\>svcexe /S=HORCM0 “/A=C:\HORCM\tool\svcexe.exe”
2. Customizing a sample script file.
The system administrator must customize the sample script file (HORCM0_run.txt)
according to the HORCM instance. For details, please refer to the descriptions in the
HORCM0_run.txt file.
160
Chapter 3 Preparing for CCI Operations
3. Setting the user account.
The system administrator must set the user account for the CCI administrator as needed.
In case of using the GUI, use “Administrative Tools → Services → Select HORCM0 → Logon”.
In case of using the CUI, use the “sc config” command as follows:
C:\HORCM\tool\>sc config HORCM0 obj= AccountName password= password
If the system administrator uses the default account (LocalSystem), add
“HORCM_EVERYCLI=1”:
# **** For INSTANCE# X, change to HORCMINST=X as needed ****
START:
set HORCM_EVERYCLI=1
set HORCMINST=0
set HORCC_LOG=STDERROUT
C:\HORCM\etc\horcmstart.exe
exit 0
4. Starting the HORCM instance from the service.
After you have confirmed starting and stopping using “horcmstart 0” and
“horcmshutdown 0”, verify that HORCM0 starts from the service and is started
automatically after a reboot, using the following command:
C:\HORCM\tool\>net start HORCM0
5. Stopping the HORCM instance as a service.
Instead of using the “horcmshutdown 0” command, use the following command to stop
HORCM0:
C:\HORCM\tool\>net stop HORCM0
(If you use the “horcmshutdown 0” command, the script written in HORCM0_run.txt
will automatically restart HORCM0.)
Chapter 4 Performing CCI Operations
This chapter covers the following topics:
„ Environmental variables (section 4.1)
„ Creating pairs (paircreate) (section 4.2)
„ Splitting and deleting pairs (pairsplit) (section 4.3)
„ Resynchronizing pairs (pairresync) (section 4.4)
„ Confirming pair operations (pairevtwait) (section 4.5)
„ Monitoring pair activity (pairmon) (section 4.6)
„ Checking attribute and status (pairvolchk) (section 4.7)
„ Displaying pair status (pairdisplay) (section 4.8)
„ Checking Hitachi TrueCopy pair currency (paircurchk) (section 4.9)
„ Performing Hitachi TrueCopy takeover operations (horctakeover) (section 4.10)
„ Performing data protection operations (raidvchkset, raidvchkdsp, raidvchkscan) (section 4.12)
„ Controlling CCI activity (horcmstart, horcmshutdown, horcctl) (section 4.13)
„ Synchronous waiting command (pairsyncwait) for Hitachi TrueCopy Async (section 4.15)
„ Protection facility (section 4.16)
„ LDM volume discovery and flushing for Windows (section 4.18)
„ Host group control (section 4.20)
4.1 Environmental Variables
When activating HORCM or initiating a command, users can specify any of the environmental
variables that are listed in Table 4.1.
Table 4.1 HORCM, Hitachi TrueCopy, and ShadowImage Variables
HORCM (/etc/horcmgr) environmental variables:
$HORCM_CONF: Names the HORCM configuration file, default = /etc/horcm.conf.
$HORCM_LOG: Names the HORCM log directory, default = /HORCM/log/curlog.
$HORCM_TRCSZ: Specifies the size of the HORCM trace file in KB, default = 1 MB. The trace
file size cannot be changed using the horcctl command.
$HORCM_TRCLVL: Specifies the HORCM trace level (0 - 15), default = 4. If a negative value is
specified, trace mode is canceled. The trace level can be changed using horcctl -c -l command.
$HORCM_TRCBUF: Specifies the HORCM trace mode. If this variable is specified, data is
written in the trace file in the non-buffer mode. If not, data is written in the buffer mode. The trace
mode can be changed using the horcctl -c -b command.
$HORCM_TRCUENV: Specifies whether or not to succeed the trace control parameters
(TRCLVL and TRCBUF) as they are when a command is issued. When this variable is specified,
the Hitachi TrueCopy default trace control parameters are used to the trace control parameters
of HORCM as global parameters. If not, the default trace control parameters for Hitachi
TrueCopy commands are used and tracing level = 4, trace mode = buffer mode.
$HORCMFCTBL: Changes the fibre address conversion table number, used when the target ID
indicated by the raidscan command is different than the TID on the system.
Hitachi TrueCopy command $HORCC_LOG: Specifies the command log directory name, default = /HORCM/log*
environmental variables
(* = instance number). If this variable has “STDERROUT” as magic strings, then the command
will change an output of the logging to STDERR. This strings is used to inhibit an output of the
logging when the user script does handle in prospect of an error code for the command.
$HORCC_TRCSZ: Specifies the size of the command trace file in KB, default = HORCM trace
file size. The default Hitachi TrueCopy trace file size can be changed using horcctl -d -s, and it
becomes effective from later executing a command.
$HORCC_TRCLVL: Specifies the command trace level (0 = 15), default = 4 or the specified
HORCM trace level. If a negative value is specified, trace mode is canceled. The default trace
level for Hitachi TrueCopy commands can be changed using the horcctl -d -l, and it becomes
effective from later executing a command.
$HORCC_TRCBUF: Specifies the command trace mode. If specified, data is written in the trace
file in the non-buffer mode. If not, the HORCM trace mode is used. The default trace mode for
Hitachi TrueCopy commands can be changed using the horcctl -d -b, and it becomes effective
from later executing a command.
$HORCC_LOGSZ: This variable is used to specify a maximum size (in units of KB) and normal
logging for the current command. “/HORCM/log*/horcc_HOST.log” file is moved to
“/HORCM/log*/horcc_HOST.oldlog” file when reaching in the specified maximum size. If this
variable is not specified or specified as ‘0’, it is same as the current logging for only command
error.
HORCM instance
environmental variable
$HORCMINST: Specifies the instance number when using two or more CCI instances on the
same server. The command execution environment and the HORCM activation environment
require an instance number to be specified. Set the configuration definition file (HORCM_CONF)
and log directories (HORCM_LOG and HORCC_LOG) for each instance.
164
Chapter 4 Performing CCI Operations
Variable
Functions
ShadowImage command
environmental variables
$ HORCC_MRCF: Sets the execution environment of the ShadowImage commands. The
selection whether the command functions as that of Hitachi TrueCopy or ShadowImage is made
according to this variable. The HORCM is not affected by this variable. When issuing a Hitachi
TrueCopy command, do not set the HORCC_MRCF variable for the execution environment of
the command. When issuing a ShadowImage command, set the environmental variable
HORCC_MRCF=1 for the execution environment of the command.
Note: The following environment variables are validated for USP V/VM and USP/NSC only, and
are also validated on TC-TC/SI cascading operation using “-FHOMRCF [MU#]” option. To
maintain compatibility across RAID storage systems, these environment variables are ignored by
9900V/9900, which enables you to use a script with “$HORCC_SPLT, $HORCC_RSYN,
$HORCC_REST” for USP/NSC on the 9900V/9900 storage systems.
$ HORCC_SPLT:
=NORMAL
The “pairsplit” and “paircreate -split” will be performed as Non quick mode regardless of setting
of the system option mode 122 via SVP.
=QUICK
The “pairsplit” and “paircreate -split” will be performed as Quick Split regardless of setting of the
system option mode 122 via SVP.
$ HORCC_RSYN:
=NORMAL
The “pairresync” will be performed as Non quick mode regardless of setting of the system option
mode 87 via SVP.
=QUICK
The “pairresync” will be performed as Quick Resync regardless of setting of the system option
mode 87 via SVP.
$ HORCC_REST:
=NORMAL
The “pairresync -restore” will be performed as Non quick mode regardless of setting of the
system option mode 80 via SVP.
=QUICK
The “pairresync -restore” will be performed as Quick Restore regardless of setting of the system
option mode 80 via SVP.
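To illustrate how these variables are typically used together, the following is a minimal sketch for a UNIX host with a Bourne-type shell; the log size, group name, and use of instance 0 are illustrative assumptions:
# Values below are illustrative assumptions.
HORCC_LOGSZ=2048; export HORCC_LOGSZ   # roll horcc_HOST.log to horcc_HOST.oldlog at 2048 KB
HORCC_SPLT=QUICK; export HORCC_SPLT    # USP V/VM and USP/NSC only: force Quick Split
horcmstart.sh 0                        # start HORCM instance 0
pairdisplay -g oradb                   # commands issued from this shell inherit these settings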
4.1.1 $HORCMINST and $HORCC_MRCF Supported Options
CCI commands depend on the $HORCMINST and $HORCC_MRCF environment variables as
described in the table above. However, CCI also supports the following options, which
do not depend on these environment variables.
4.1.1.1 Specifying Options
„ -I[instance#]: Used to specify the instance# of HORCM. For example, to set HORCMINST=5:
# pairdisplay -g <group> -I5 …
To run without HORCMINST:
# pairdisplay -g <group> -I …
„ -IH[instance#] or -ITC[instance#]: Used to specify the command as HORC, and to specify the instance# of HORCM. For example, to set HORC(TC) mode:
# pairdisplay -g <group> -IH …
To set HORC(TC) mode and HORCMINST=5:
# pairdisplay -g <group> -IH5 …
„ -IM[instance#] or -ISI[instance#]: Used to specify the command as HOMRCF, and to specify the instance# of HORCM. For example, to set HOMRCF(SI) mode:
# pairdisplay -g <group> -IM …
To set HOMRCF(SI) mode and HORCMINST=5:
# pairdisplay -g <group> -IM5 …
Note: In interactive mode (-z option), the HORCM instance# cannot be changed, because the command is already attached to the log directory for its instance# at that time.
4.1.1.2 Relationship Between the -I[H][M][inst#] Option and $HORCMINST, $HORCC_MRCF
If this option is not specified, the behavior of the command is determined by the
combination of the -I[inst#] option and the $HORCMINST and $HORCC_MRCF environment
variables, as shown in Table 4.2.
Table 4.2   Relationship Between -I[inst#] Option and $HORCMINST and HORCC_MRCF

-I[inst#] option   $HORCMINST      Behavior
-I                 Don’t care      Attaching w/o HORCMINST
-IX                Don’t care      Attaching to HORCMINST=X
Unspecified        HORCMINST=X     Attaching to HORCMINST=X
Unspecified        Unspecified     Attaching w/o HORCMINST

-IH, -IM or -ITC, -ISI option   $HORCC_MRCF     Behavior
-IH or -ITC                     Don’t care      Executing as HORC(TC) mode
-IM or -ISI                     Don’t care      Executing as HOMRCF(SI) mode
Unspecified                     HORCC_MRCF=1    Executing as HOMRCF(SI) mode
Unspecified                     Unspecified     Executing as HORC(TC) mode

X: the instance number
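For example, the following two invocations address the same ShadowImage pair on instance 5; the group name oradb is an assumption for illustration:
# Using the environment variables:
HORCMINST=5; export HORCMINST
HORCC_MRCF=1; export HORCC_MRCF
pairdisplay -g oradb
# Using the command option instead (no environment variables required):
pairdisplay -g oradb -ISI5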
4.1.2 Verifying $HORCC_MRCF and $HORCMINST
RAID Manager provides a way to verify the $HORCC_MRCF and $HORCMINST environment
variables so that users can confirm the RM instance number and copy mode they have set.
# pairdisplay -h
Model  : RAID-Manager/Solaris
Ver&Rev: 01-22-03/02
Usage  : pairdisplay [options] for HORC[5]
  -h           Help/Usage
  -I[#]        Set to the instance# of HORCM
  -IH[#]       Set to HORC mode [and the instance# of HORCM]
  -IM[#]       Set to HOMRCF mode [and the instance# of HORCM]
  -z           Set to the interactive mode
  -zx          Set to the interactive mode and HORCM monitoring
  -q           Quit(Return to main())
  -g <group>   Specify the group_name
„ Interactive mode:
# pairdisplay -z
pairdisplay[HORC[5]]: -IM
pairdisplay[HOMRCF[5]]: -q
#
4.2 Creating Pairs (Paircreate)
WARNING: Use the paircreate command with caution. The paircreate command starts the
Hitachi TrueCopy/ShadowImage initial copy operation, which overwrites all data on the
secondary/target volume. If the primary and secondary volumes are not identified correctly,
or if the wrong options are specified (e.g., -vl instead of -vr), data will be transferred in the
wrong direction.
The paircreate command generates a new volume pair from two unpaired volumes. The
paircreate command can create either a paired logical volume or a group of paired volumes.
The paircreate command allows you to specify the direction (local or remote) of the pair
generation. If local (-vl option) is specified, the server issuing the command has the primary
volume. If remote (-vr option) is specified, the remote server has the primary volume. The
-split option of the paircreate command (ShadowImage only) allows you to simultaneously
create and split pairs using a single CCI command. When -split is used, the pair status
changes from COPY to PSUS (instead of PAIR) when the initial copy operation is complete.
Note: Snapshot support for the TagmaStore USP/NSC depends on the microcode version.
Figure 4.1 Pair Creation (diagram: a pair generation command is issued from Server A with “local” specified or Server B with “remote” specified; the entire primary volume is copied to the secondary volume of the paired logical volumes)
Before issuing the paircreate command, make sure that the secondary volume is not
mounted on any system. If the secondary volume is found to be mounted after paircreate,
delete the pair (pairsplit -S), unmount the secondary volume, and then reissue the
paircreate command.
Note: The paircreate command terminates before the initial copy operation is complete
(except when the nocopy option is specified). Use the pair event waiting or pair display
command to verify that the initial copy operation completed successfully (status changes
from COPY to PAIR, or from COPY to PSUS if the -split option was specified). The execution
log file also shows completion of the initial copy operation.
Hitachi TrueCopy only: The paircreate command cannot reject the copy operation in the
case of an error condition that has placed the target volume under maintenance work.
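As a minimal usage sketch (the group name oradb and the timeout value are illustrative assumptions), a TrueCopy pair can be created from the host owning the primary volume and the initial copy confirmed as follows:
paircreate -g oradb -vl -f never      # local host owns the P-VOL; fence level = never
pairevtwait -g oradb -s pair -t 600   # wait up to 600 seconds for COPY -> PAIR
pairdisplay -g oradb                  # verify the resulting pair status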
Table 4.3   Paircreate Command Parameters

Parameter      Value
Command Name   paircreate
Format         paircreate { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] | -FHORC [MU#] | -d[g] <seq#> <LDEV#> [MU#] | -f[g] <fence> [CTGID] | -v | -c <size> | -nocopy | -nomsg | -split | [-m <mode>] | -jp <id> | -js <id> | -pid <PID> | -fq <mode> | -cto <o-time> <c-time> <r-time> | -nocsus }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the paircreate command enter interactive mode.
The -zx option also monitors HORCM while in interactive mode. When this option detects a HORCM
shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-g <group>: Specifies a group name defined in the configuration definition file. The command is
executed for the specified group unless the -d <pair Vol> option is specified.
-d <pair Vol>: Specifies paired logical volume name defined in the configuration definition file. When this
option is specified, the command is executed for the specified paired logical volume.
-d[g] <raw_device> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified raw_device, and if the specified raw_device is contained in the group, the target volume is
executed as the paired logical volume (-d) or group (-dg). This option is effective without specification of
“-g <group>“ option. If the specified raw_device is contained in two or more groups, the command is
executed on the first group.
-d[g] <seq#> <LDEV#> [MU#]: Searches a group on the configuration definition file (local instance) for
the specified LDEV, and if the specified LDEV is in the group, the target volume is executed as the paired
logical volume (-d) or group (-dg). This option is effective without specification of “-g <group>“ option. If
the specified LDEV is contained in two or more groups, the command is executed on the first group. The
<seq #> <LDEV #> values can be specified in hexadecimal (by addition of “0x “) or decimal notation.
-f[g] <fence> [CTGID] (Hitachi TrueCopy or UR only): Specifies the fence level for assuring the
consistency of paired volume data. A fence level of “data”, “status”, “never”, or “async” must be specified.
Fence level "-f async" can be specified only for TC Async/UR. The “-fg” option is used to make TC Sync
CTG volume, and fence level must be specified as "-fg data", "-fg status", or "-fg never".
A CTGID (CT Group ID) is assigned automatically if you do not specify the “CTGID” option in this
command (and define it in the config file). If “CTGID” is not specified (with “-f async” or “-fg” option) and
the maximum number of CT groups already exists (e.g., 256 for USP/NSC, 128 for 9900V), an
EX_ENOCTG error will be returned. Therefore, the “CTGID” option can forcibly assign a volume group to
an existing CTGID (0-15/63/127/255) on the Hitachi RAID storage systems. The CTGID option is ignored
unless you specify the “-f async” or “-fg” option.
-vl or -vr: Specifies the data flow direction and must always be specified. The -vl option specifies “local”
and the host which issues the command possesses the primary volume. The -vr option specifies “remote”
and the remote host possesses the primary volume while the local host possesses the secondary volume.
-c <size>: Specifies the number of extents (1 - 15) to be used for the initial data copy. If this option is not
specified a default value is used.
-nocopy: Creates paired volumes without copying data in the case in which the data consistency of
simplex volumes is assured by the user.
-nomsg: Suppresses messages to be displayed when this command is executed. It is used to execute
this command from a user program. This option must be specified at the beginning of a command
argument. The command execution log is not affected by this option.
-split (ShadowImage only): Splits the paired volume after the initial copy operation is complete. This
option returns immediately after the state has changed to PVOL_PSUS & SVOL_COPY; the SVOL state
changes to SVOL_SSUS when the copying is complete.
-m <mode>:
mode = noread (ShadowImage only): Specifies the noread mode for hiding the secondary volume. The
secondary volume becomes read-disabled when this mode option is specified. The secondary volume is
read-enabled when this mode option is omitted. Note: The primary volume becomes read-disabled during
a reverse resync operation (restore option of pairresync command).
mode = cyl (9900V only): sets TrueCopy bitmap difference management to cylinder.
mode = trk (9900V only): sets TrueCopy bitmap difference management to track.
Note: If the mode (cyl or track) is not specified, the default values are used: default is track for OPEN-3
and OPEN-9; default is cylinder for OPEN-E and OPEN-L.
Note: For TrueCopy volumes paired between 9900V and 9900 storage systems, the bitmap tables will be
managed at the Cylinder level, even if Track is specified.
mode=grp [CTGID] (9900V ShadowImage only). Makes a group for splitting all ShadowImage pairs
specified in a group. Like a TrueCopy Async/UR consistency group, ShadowImage guarantees data
consistency among multiple LUNs in a group at a single point in time when doing a split using the
“pairsplit -g <group>“ command (except “-S” or “-E” option).
A CTGID (CT Group ID) is assigned automatically if you do not specify the “CTGID” option in this
command. If “CTGID” is not specified and the maximum number of CT groups already exists, an
EX_ENOCTG error will be returned. Therefore, the “CTGID” option can forcibly assign a volume group to
an existing CTGID.
Returned values
This command sets the following returned values during exit allowing the user to check the execution
results.
Normal termination: 0. When creating groups, 0 = normal termination for all pairs.
Abnormal termination: other than 0, refer to the execution logs for error details.
Table 4.4   Specific Error Codes for Paircreate

Category        Error Code   Error Message                              Recommended Action                                        Value
Volume status   EX_ENQVOL    Unmatched volume status within the group   Confirm status using the pairdisplay command. Make sure   236
                                                                        all volumes in the group have the same fence level and
                                                                        volume attributes.
                EX_INCSTG    Inconsistent status in group               Confirm pair status using pairdisplay.                    229
                EX_INVVOL    Invalid volume status                      Confirm pair status using pairdisplay -l.                 222
                EX_INVSTP    Invalid pair status                        Confirm pair status using pairdisplay.                    228
Unrecoverable   EX_ENQSIZ    Unmatched volume size for pairing          Confirm volume size or number of LUSE volumes using       212
                                                                        raidscan -f, and make sure volume sizes are identical.
Resource        EX_ENOCTG    Not enough CT groups in the RAID           Choose an existing CTGID (pairvolchk displays CTGIDs).    217
                                                                        Use ‘-f async <CTGID>‘ or ‘-m grp <CTGID>‘ option of
                                                                        paircreate to force the pair into a pre-existing CTGID.
                EX_ENXCTG    No CT groups left for OPEN Vol use         Confirm whether all CT groups are already used by         215
                                                                        TC/TC390 Async or SI/SI390.
Unrecoverable   EX_ENOPOL    Not enough Pool in RAID                    The pool for executing the command could not be           206
                                                                        retained because the threshold rate was exceeded.
                                                                        Delete unnecessary/earlier-generation paired volumes,
                                                                        or resynchronize unnecessary/earlier-generation split
                                                                        volumes.
Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the
command. If the command failed, the detailed status will be logged in the CCI command log
($HORCC_LOG), even if the user script has no error handling.
4.3 Splitting and Deleting Pairs (Pairsplit)
The pairsplit command stops updates to the secondary volume of a pair and can either
maintain (status = PSUS) or delete (status = SMPL) the pairing status of the volumes (see
Table 4.3). The pairsplit command can be applied to a paired logical volume or a group of
paired volumes. The pairsplit command allows read access or read/write access to the
secondary volume, depending on the selected options. When the pairsplit command is
specified, acceptance of write requests to the primary volume depends on the fence level of
the pair (data, status, never, or async).
Figure 4.2 Pair Splitting (diagram: a pair splitting command issued from Server A or Server B splits the paired logical volume into its primary and secondary volumes)
The primary volume’s server is automatically detected by the pairsplit command, so the
server does not need to be specified in the pairsplit command parameters. If the -S option
(simplex) is used, the volume pair is deleted, the volumes are returned to the simplex state,
and the primary and secondary volume status is lost. Paired volumes are split as soon as the
pairsplit command is issued. If you want to synchronize the volumes, any unwritten data
must be flushed to the primary volume before the pairsplit command is issued (see section
4.3.1 for examples).
Note: You can create and split ShadowImage pairs simultaneously using the -split option of
the paircreate command (refer to section 4.2).
Note on Quick Split: If “$HORCC_SPLT=QUICK” environment variable is set (USP V/VM or
USP/NSC), the “pairsplit” and “paircreate -split” operations will be performed as Quick Split
regardless of the system option mode 122 setting on the SVP. The $HORCC_SPLT
environment variable is ignored by 9900V/9900.
Table 4.5   Pairsplit Command Parameters

Parameter      Value
Command Name   pairsplit
Format         pairsplit { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] | -FHORC [MU#] | -FMRCF [MU#] | -d[g] <seq#> <LDEV#> [MU#] | -r | -rw | -S | -R[S][B] | -P | -l | -nomsg | -C <size> | -E | -fq <mode> }
Options
Note: Only one pairsplit option (-r, -rw, -S, -R, or -P) can be specified. If more than one option is
specified, only the last option will be executed.
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits this command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the pairsplit command enter the interactive mode.
The -zx option also monitors HORCM while in the interactive mode. When this option detects a HORCM
shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-g <group>: Specifies a group name defined in the configuration definition file. This option must always
be specified. The command is executed for the specified group unless the -d <pair Vol> option is
specified.
-d <pair Vol>: Specifies the paired logical volume name defined in the configuration definition file. When
this option is specified, the command is executed for the specified paired logical volumes.
-d[g] <raw_device> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified raw_device, and if the specified raw_device is contained in the group, the target volume is
executed as the paired logical volume (-d) or group (-dg). This option is effective without specification of
“-g <group>“ option. If the specified raw_device is contained in two or more groups, the command is
executed on the first group.
-d[g] <seq#> <LDEV#> [MU#]: Searches a group on the configuration definition file (local instance) for
the specified LDEV, and if the specified LDEV is in the group, the target volume is executed as the paired
logical volume (-d) or group (-dg). This option is effective without specification of “-g <group>“ option. If
the specified LDEV is contained in two or more groups, the command is executed on the first group. The
<seq #> <LDEV #> values can be specified in hexadecimal (by addition of “0x “) or decimal notation.
-r or -rw (TrueCopy only): Specifies a mode of access to the SVOL after paired volumes are split. The -r
option (default) allows read-only to the SVOL. The -rw option enables read and write access to the SVOL.
-S: Selects simplex mode (deletes the pair). When the pairing direction is reversed among the hosts
(e.g., disaster recovery), this mode is established once, and then the paircreate command is issued.
When splitting a pair, changing the pair status of the P-VOL to SMPL takes priority, regardless of whether
the pair status of the S-VOL can be changed. Therefore, if the pair status of the S-VOL cannot be changed
to SMPL, the pair status of the P-VOL might not correspond with that of the S-VOL. When a path failure
has occurred, the pair status of the S-VOL cannot be changed to SMPL.
-R: Brings the secondary volume into the simplex mode forcibly. It is issued by the secondary host, if the
host possessing the primary volume is down or has failed.
-R[S][B] (Specifiable for HORC only): This option is used to bring the secondary volume forcibly into
simplex mode. It is issued by the secondary host if the host possessing the primary volume goes down
due to a failure.
The -RS option is used to bring the secondary volume forcibly into SSWS mode.
The -RB option is used to return the secondary volume forcibly from SSWS to PSUS(PSUE)(SSUS) mode.
This makes it possible to revert to the primary volume when the user wants to fail back from a secondary
host that is in the SSWS state (due to a link failure) to the primary host.
-P (Specifiable for TC/UR only): For TrueCopy Sync, this option is used to bring the primary volume
forcibly into write disabled mode like PSUE with "fence=data". It is issued by the secondary host to disable
PVOL data changes by the host possessing the primary volume.
For TrueCopy Async/JNL, this option is used to suspend and purge the remaining data in the
SideFile/Journal, like a link failure (PSUE), without updating the SVOL. This enables the user to stop
journal operations forcibly when the journal utilization traffic becomes high. This is the same as a disaster
case in which the S-VOL data is not up to date, but “-rw -P” can be specified to enable writing. In that
situation, if the user will use the SVOL as a file system (i.e., UFS, NTFS, HANFS), then an FSCK (CHKDSK)
is necessary before mounting the volume, even after the PVOL is unmounted.
-l: When this command cannot use the remote host (for example, because the host is down), this option
enables a pairsplit operation by the local host only. Except with the -R option, the target volume of the
local host must be a P-VOL. (For ShadowImage volumes, only the S-VOL can be split.)
-nomsg: Suppresses messages to be displayed when this command is executed. It is used to execute a
command from a user program. This option must be specified at the beginning of a command argument.
The command execution log is not affected by this option.
-C <size> (ShadowImage/Snapshot only): Copies difference data retained in the primary volume into the
secondary volume, then enables reading and writing from/to the secondary volume after completion of the
copying. (This is the default option.) For <size>, specify the copy pace for the pairsplit (range = 1 to 15
track extents). If not specified, the value used for paircreate is used.
-E (ShadowImage only): Suspends a paired volume forcibly when a failure occurs. Not normally used.
-FHORC [MU#] or -FCA [MU#]: Forcibly specifies a cascading Hitachi TrueCopy volume for the specified
volume pair in a ShadowImage environment (see example in Figure 4.3). If the -l option is specified, a
cascading TrueCopy volume is split on a local host (near site). If the -l option is not specified, a cascading
TrueCopy volume is split on a remote host (far site).
Returned values
Normal termination: 0. When splitting groups, 0 = normal termination for all pairs.
Abnormal termination: other than 0, refer to the execution logs for error details.
Table 4.6   Specific Error Codes for Pairsplit

Category        Error Code   Error Message                              Recommended Action                                        Value
Volume status   EX_ENQVOL    Unmatched volume status within the group   Confirm status using the pairdisplay command. Make sure   236
                                                                        all volumes in the group have the same fence level and
                                                                        volume attributes.
                EX_INCSTG    Inconsistent status in group               Confirm pair status using pairdisplay.                    229
                EX_INVVOL    Invalid volume status                      Confirm pair status using pairdisplay -l.                 222
                EX_EVOLCE    Pair Volume combination error              Confirm pair status using pairdisplay, and change the     235
                                                                        combination of volumes.
                EX_INVSTP    Invalid pair status                        Confirm pair status using pairdisplay.                    228
Unrecoverable   EX_EWSUSE    Pair suspended at WAIT state               Issue pairresync manually to the identified failed        234
                                                                        paired volume to try to recover it. If the trouble
                                                                        persists, call the Hitachi Data Systems Support Center.
Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the
command. If the command failed, the detailed status will be logged in the CCI command log
($HORCC_LOG), even if the user script has no error handling.
Figure 4.3 Example of -FHORC Option for Pairsplit (ShadowImage environment: “pairsplit -g oradb1 -rw -FHORC” operates on the cascading TrueCopy pair Ora between Seq# 30052 and Seq# 30053, with ShadowImage groups Oradb1 and Oradb2)
Figure 4.4 Example of -FMRCF Option for Pairsplit (TrueCopy environment: “pairsplit -g ora -FMRCF 1”; ShadowImage groups Oradb1 and Oradb2 cascade from TrueCopy group Ora between Seq# 30052 and Seq# 30053)
4.3.1 Timing Pairsplit Operations
The pairsplit command terminates after verifying that the status has changed according to
the pairsplit command options (to PSUS or SMPL). If you want to synchronize the volume
pair, the non-written data (in the host buffer) must be written before you issue the pairsplit
command. When the pairsplit command is specified, acceptance of write requests to the
primary volume depends on the fence level of the pair (data, status, never, or async). Some
examples are shown below.
Instantaneous offline backup of UNIX file system:
– Unmount the primary volume, and then split the volume pair.
– Mount the primary volume (mount -rw).
– Verify that the pairsplit is complete, and mount the secondary volume (mount -r).
– Execute the backup.
– Restore the volumes to their previous state, and resynchronize the volume pair.
Online backup of UNIX file system:
– Issue the sync command to a mounted primary volume to flush the file system buffer, and then split the volume pair using the -rw option.
– Verify that the pairsplit is complete, and then use the fsck command to check the consistency of the secondary volume file system.
– Mount (mount -r) the secondary volume.
– Execute the backup.
– Restore the volumes to their previous state, and resynchronize the volume pair.
Instantaneous offline backup of Windows file system:
– Execute -x umount on the PVOL, then split the volume pair with the -rw option.
– Execute -x mount on the primary volume.
– Make sure that the paired volume is split, then execute -x mount on the SVOL.
– Execute the backup, and unmount the SVOL (-x umount).
– Resynchronize the volume pair and restore the previous state.
Online backup of Windows file system:
– Issue -x sync while the primary volume is mounted to flush the file system buffer, then split the paired volume with the -rw option.
– Make sure that the paired volume is split, then execute -x mount on the SVOL.
– Execute the backup, and unmount the SVOL (-x umount).
– Resynchronize the volume pair.
Note: If the primary volume is divided by LVM or partitioning, the LVM or partition control
information on the primary volume is also copied to the secondary volume. When executing
the backup from the secondary volume, it is necessary to import this control information,
and to execute pairsplit with the -rw option when activating the secondary volume.
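The online UNIX backup procedure above might be scripted along the following lines; the device names, mount point, and group name are illustrative assumptions:
#!/bin/sh
sync                                              # flush the file system buffer on the mounted P-VOL
pairsplit -g oradb -rw                            # split the pair; S-VOL becomes read/write enabled
pairevtwait -g oradb -s psus -t 300 || exit 1     # confirm that the split completed
fsck /dev/rdsk/c1t2d0s2                           # check the S-VOL file system consistency
mount -r /dev/dsk/c1t2d0s2 /backup                # mount the S-VOL read-only
# ... execute the backup here ...
umount /backup
pairresync -g oradb                               # restore the previous paired state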
4.3.2 Deleting Pairs (Pairsplit -S)
The pair delete operation is executed by using the -S option of the pairsplit command. When
the pairsplit -S command is issued, the specified Hitachi TrueCopy or ShadowImage pair is
deleted, and each volume is changed to SMPL (simplex) mode. If you want to re-establish a
pair which has been deleted, you must use the paircreate command (not pairresync).
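For example (group name assumed), deleting a pair and later re-establishing it might look like this:
pairsplit -g oradb -S                 # delete the pair; both volumes return to SMPL
pairdisplay -g oradb                  # confirm that both volumes show SMPL
paircreate -g oradb -vl -f never      # re-pairing requires a full initial copy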
4.4 Resynchronizing Pairs (Pairresync)
The pairresync command re-establishes a split pair and then restarts the update copy
operations to the secondary volume. The pairresync command can resynchronize either a
paired logical volume or a group of paired volumes. The normal direction of
resynchronization is from the primary volume to the secondary volume. If the -restore
option is specified (ShadowImage only), the pair is resynchronized in the reverse direction
(see Figure 4.6 for normal and restore resync operations). The primary volume remains
accessible during pairresync, except when the -restore option is specified. The secondary
volume becomes write-disabled when the pairresync command is issued. Table 4.7 lists and
describes the pairresync command parameters and returned values. The primary volume’s
server is automatically detected by the pairresync command, so the server does not need to
be specified in the pairresync command parameters.
The pairresync command terminates before resynchronization of the secondary (or primary)
volume is complete. Use the pair event waiting or pair display command to verify that the
resync operation completed successfully (status changes from COPY to PAIR). The execution
log file also shows completion of the resync operation. The status transition of the paired
volume is judged by the status of the primary volume. The fence level is not changed
(TrueCopy, TrueCopy Async, and UR only).
If no data was written to the secondary volume while the pair was split, the differential data
on the primary volume is copied. If data was written to the secondary volume, the
differential data on the primary volume and secondary volume is copied. This process is
reversed when the ShadowImage -restore option is specified.
Before issuing the pairresync command (normal or reverse direction), make sure that the
secondary volume is not mounted on any UNIX system. Before issuing a reverse pairresync
command, make sure that the primary volume is not mounted on any UNIX system.
Note on Quick Resync/Restore: If the “$HORCC_RSYN=QUICK” /”$HORCC_REST=QUICK”
environment variable is set (USP V/VM or USP/NSC), the “pairresync” operation will be
performed as Quick Resync regardless of the system option mode 87/80 setting via SVP. The
$HORCC_RSYN and $HORCC_REST environment variables are ignored by 9900V/9900.
Hitachi TrueCopy only: The -swaps(p) option is used to swap the volumes from SVOL(PVOL)
to PVOL(SVOL) when the SVOL(PVOL) side is in the suspended state, and to resynchronize
the NEW_SVOL based on the NEW_PVOL. As a result of this operation, the volume attributes
of the local host become the attributes of the NEW_PVOL(SVOL). The command cannot
reject the copy operation in the case of an error condition on the target volume that
requires maintenance work. The -swaps(p) option will:
„ Ignore the -l option.
„ Use a default of three for the number of copy tracks when the -c <size> option is omitted.
„ Execute at PAIR state as well as PSUS/PSUE state (not applicable to COPY and SMPL).
„ Skip the operation for the target volume of the local host if it is already the PVOL(SVOL).
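As a sketch of the fast-failback usage (the TrueCopy group name G1 is an illustrative assumption), issued from the host on the SVOL side while the pair is suspended:
pairresync -g G1 -swaps    # swap the P-VOL/S-VOL designations and resync the delta data
pairdisplay -g G1 -fc      # confirm the new P-VOL/S-VOL designations and copy progress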
Figure 4.5 Pair Resynchronization (diagram: a pair resynchronization command issued from Server A or Server B performs a differential/entire data copy from the primary volume to the secondary volume of the paired logical volumes)
Figure 4.6 Normal Resync and ShadowImage Restore Resync (diagram: normal resync copy (COPY) copies write data from the P-VOL (Read/Write) to the S-VOL (Read*); restore resync copy (RCPY, ShadowImage only) copies write data from the S-VOL (Read) to the P-VOL (Read*))
Table 4.7   Pairresync Command Parameters

Parameter      Value
Command Name   pairresync
Format         pairresync { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] | -FHORC [MU#] | -FMRCF [MU#] | -d[g] <seq#> <LDEV#> [MU#] | -c <size> | -nomsg | -l | -restore | -swaps | -swapp | -fq <mode> | -cto <o-time> <c-time> <r-time> | -f[g] <fence> [CTGID] }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits this command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the pairresync command enter the interactive
mode. The -zx option also monitors HORCM while in the interactive mode. When this option detects a
HORCM shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-g <group>: This option is used to specify a group name defined in the configuration definition file. This
option must always be specified. The command is executed for the specified group unless the -d <pair Vol>
option is specified.
-d <pair Vol>: Specifies a paired logical volume name defined in the configuration definition file. When this
option is specified, the command is executed for the specified paired logical volumes.
-d[g] <raw_device> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified raw_device, and if the specified raw_device is contained in the group, the target volume is
executed as the paired logical volume (-d) or group (-dg). This option is effective without specification of “-g
<group>“ option. If the specified raw_device is contained in two or more groups, the command is
executed on the first group.
-d[g] <seq#> <LDEV#> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified LDEV, and if the specified LDEV is contained in the group, the target volume is executed as the
paired logical volume (-d) or group (-dg). This option is effective without specification of “-g <group>“ option.
If the specified LDEV is contained in two or more groups, the command is executed on the first group. The
<seq #> <LDEV #> values can be specified in hexadecimal (by addition of “0x “) or decimal notation.
-FHORC [MU#] or -FCA [MU#]: Forcibly specifies a cascading Hitachi TrueCopy volume for specified pair
logical volumes on ShadowImage environment (see example in Figure 4.7). If the -l option is specified, this
option resyncs a cascading TrueCopy volume on a local host (near site). If no -l option is specified, this
option resyncs a cascading TrueCopy volume on a remote host (far site). The target TrueCopy volume must
be a P-VOL, the -swapp option cannot be specified.
-FMRCF [MU#] or -FBC [MU#]: Forcibly specifies a cascading ShadowImage volume for specified pair
logical volumes on TrueCopy environment (see example in Figure 4.8). If the -l option is specified, this
option resyncs a cascading ShadowImage volume on a local host (near site). If no -l option is specified, this
option resyncs a cascading ShadowImage volume on a remote host (far site). The target ShadowImage
volume must be a P-VOL.
-swaps with -FHORC [MU#]: This option is used to swap the cascading UR volume from the primary node
for failback. In a failback operation after a 3DC cascade site failure, if a user wants to fail back to DC1
directly from DC3, all cascading volumes must be operated from DC1. To make this operation possible,
RAID Manager supports the “pairresync -swaps -FHORC” option, which swaps the UR volume on the
cascading CA-Sync/UR volume.
-c <size>: Specify the copy pace for the resync operation (range = 1 to 15 track extents). If not specified,
the value used for paircreate is used.
-nomsg: Suppresses messages to be displayed when this command is executed. It is used to execute this
command from a user program. This option must be specified at the beginning of a command argument.
The command execution log is not affected by this option.
-l: When this command cannot use the remote host (for example, because the host is down), this option
enables a pairresync operation by the local host only. The target volume of the local host must be a P-VOL.
(For ShadowImage volumes, only the S-VOL can be resynchronized.)
-restore (ShadowImage only): Performs reverse resync (from secondary volume to primary volume).
-swaps (TrueCopy only): Executed from the SVOL side when there is no host on the PVOL side to help.
Typically executed in PSUS state to facilitate “fast failback” without requiring a full copy. In Figure 4.9, the
left side shows T0 for both the PVOL and SVOL (before command execution), and the right side shows T1,
after the command has executed. For both -swaps and -swapp, the delta data from the original SVOL
becomes dominant and is copied to the original PVOL, then the S/PVOL designations are swapped.
-swapp (TrueCopy only): Executes the equivalent of a -swaps from the original PVOL side. Unlike -swaps,
-swapp does require the cooperation of hosts at both sides.
-fq <mode> (9900V ShadowImage only): This option is used to specify whether “pairresync” is performed
as “QUICK”.
mode = normal: pairresync is performed in non-quick mode regardless of the setting of the $HORCC_RSYN
environment variable and/or the system option mode 87 via SVP.
mode = quick: pairresync is performed as Quick Resync regardless of the setting of the $HORCC_RSYN
environment variable and/or the system option mode 87 via SVP.
Returned
values
Normal termination: 0. When resynching groups, 0 = normal termination for all pairs.
Abnormal termination: other than 0, refer to the execution logs for error details.
Table 4.8   Specific Error Codes for Pairresync

Category        Error Code   Error Message                              Recommended Action                                        Value
Volume status   EX_ENQVOL    Unmatched volume status within the group   Confirm status using the pairdisplay command. Make sure   236
                                                                        all volumes in the group have the same fence level and
                                                                        volume attributes.
                EX_INCSTG    Inconsistent status in group               Confirm pair status using pairdisplay.                    229
                EX_INVVOL    Invalid volume status                      Confirm pair status using pairdisplay -l.                 222
Unrecoverable   EX_INVSTP    Invalid pair status                        Confirm pair status using pairdisplay.                    228
Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the
command. If the command failed, the detailed status will be logged in the CCI command log
($HORCC_LOG) (see Table A.2), even if the user script has no error handling.
The primary and secondary volumes must not be mounted on any UNIX system, because this
command renews data on both the primary and secondary volumes. This command cannot
reject the copy operation in the case of a failure (a single error in cache memory, etc.) that
has placed the target volume under maintenance work. (HORC only)
Figure 4.7 Example of -FHORC Option for Pairresync (ShadowImage environment: “pairresync -g oradb1 -FHORC” resynchronizes the cascading TrueCopy pair Ora between Seq# 30052 and Seq# 30053, with ShadowImage groups Oradb1 and Oradb2)
Figure 4.8 Example of -FMRCF Option for Pairresync (TrueCopy environment: “pairresync -g ora -FMRCF 1”; ShadowImage groups Oradb1 and Oradb2 cascade from TrueCopy group Ora between Seq# 30052 and Seq# 30053)
Figure 4.9 Swap Operation (diagram: at T0, “pairresync -swaps” is issued on the SVOL or “pairresync -swapp” on the PVOL; at T1, the P-VOL and S-VOL have become the NEW_SVOL and NEW_PVOL, and the write data delta has been copied)
Figure 4.10 Example of -swaps Option with -FHORC [MU#] (3DC diagram: after DC1 recovery, APP1 fails back from DC3 using “pairresync -g G1 -FHORC 1” and “pairresync -g G1 -swapp”; G1 (Sync) links DC1 and DC2, and G2 (UR) links to DC3)
4.5 Confirming Pair Operations (Pairevtwait)
The pair event waiting (pairevtwait) command is used to wait for completion of pair
creation and pair resynchronization and to check the status (see Figure 4.11). It waits
(“sleeps”) until the paired volume status becomes identical to a specified status and then
completes. The pairevtwait command can be used for a paired logical volume or a group of
paired volumes. The primary volume’s server is automatically detected by the pair event
waiting command, so the server does not need to be specified in the pairevtwait command
parameters.
The pair event waiting command waits until the specified status is established, and
terminates abnormally if an abnormal status is detected. The transition of the paired volume
status is judged by the status of the primary volume. If the event waiting command is issued
for a group, the command waits until the status of each volume in the group changes to the
specified status. When the event waiting command with the -nowait option is issued for a
group, the status is returned if the status of each volume in the group is identical. For
ShadowImage pairs, this command must be used to confirm a pair status transition.
Figure 4.11 Pair Event Waiting (diagram: the event waiting command issued from Server A or Server B monitors the status of the primary and secondary volumes of the paired logical volume)
Table 4.9   Pairevtwait Command Parameters

Parameter      Value
Command Name   pairevtwait
Format         pairevtwait { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] | -FHORC [MU#] | -FMRCF [MU#] | -d[g] <seq#> <LDEV#> [MU#] | -s[s] <status> ... | -t <timeout>[interval] | -nowait[s] | -l | -nomsg }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits this command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the pairevtwait command enter the interactive
mode. The -zx option also monitors HORCM while in the interactive mode. When this option detects a
HORCM shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-g <group>: Specifies a group name defined in the configuration definition file. This option must always
be specified. The command is executed for the specified group unless the -d <pair Vol> option is specified.
-d <pair Vol>: Specifies a paired logical volume name defined in the configuration definition file. When
this option is specified, the command is executed for the specified paired logical volumes.
-d[g] <raw_device> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified raw_device, and if the specified raw_device is contained in the group, the target volume is
executed as the paired logical volume (-d) or group (-dg). This option is effective without specification of “-g
<group>“ option. If the specified raw_device is contained in two or more groups, the command is
executed on the first group.
-FHORC [MU#] or -FCA [MU#]: Forcibly specifies a cascading TrueCopy volume for specified pair logical
volumes on ShadowImage environment (see example in Figure 4.12). If the -l option is specified, this
option tests status of a cascading TrueCopy volume on a local host (near site). If no -l option is specified,
this option tests status of a cascading TrueCopy volume on a remote host (far site). The target TrueCopy
volume must be P-VOL or SMPL.
-FMRCF [MU#] or -FBC [MU#]: Forcibly specifies a cascading ShadowImage volume for specified pair
logical volumes on TrueCopy environment (see example in Figure 4.13). If the -l option is specified, this
option tests status of a cascading ShadowImage volume on a local host (near site). If no -l option is
specified, this option tests status of a cascading ShadowImage volume on a remote host (far site). The
target ShadowImage volume must be P-VOL or SMPL.
-d[g] <seq#> <LDEV#> [MU#]: Searches a group on the configuration definition file (local instance) for
the specified LDEV, and if the specified LDEV is contained in the group, the target volume is executed as
the paired logical volume (-d) or group (-dg). This option is effective without specification of “-g <group>“
option. If the specified LDEV is contained in two or more groups, the command is executed on the first
group. The <seq #> <LDEV #> values can be specified in hexadecimal (by addition of “0x “) or decimal
notation.
-s <status> ...: Specifies the waiting status: “smpl”, “copy/rcpy”, “pair”, “psus”, or “psue/pdub”. If
two or more statuses are specified following -s, waiting is done according to the logical OR of the specified
statuses. This option is valid when the -nowait option is not specified.
-ss <status> ...: Specifies the waiting status on the SVOL: “smpl”, “copy” (“RCPY” is included), “pair”,
“ssus”, or “psue”. If two or more statuses are specified following -ss, waiting is done according to the
logical OR of the specified statuses. This option is valid when the -nowait option is not specified.
-t <timeout> [interval]: Specifies the interval for monitoring a status specified using the -s option and the
timeout period, in units of 1 second. Unless [interval] is specified, the default value is used. This option is
valid when the -nowait option is not specified. If <timeout> is specified as more than 1999999, a
“WARNING” message is displayed.
-nowait: When this option is specified, the pair status at that time is reported without waiting. The pair
status is set as a returned value for this command. When this option is specified, the -t and -s options are
not needed.
-nowait[s]
When this option is specified, the pairing status on SVOL at that time is reported without waiting. The
pairing status is set as a returned value for this command.
When this option is specified, the -t and -s options are not needed.
-l: When this command cannot use a remote host (for example, because the host is down), this option
executes the command by the local host only. The target volume of the local host must be SMPL or P-VOL.
(For ShadowImage volumes, the command can also be specified from the S-VOL.)
-nomsg: Suppresses messages to be displayed when this command is executed. It is used to execute a
command from a user program. This option must be specified at the beginning of a command argument.
The command execution log is not affected by this option.
188
Chapter 4 Performing CCI Operations
Returned values
When the -nowait option is specified:
Normal termination:
1: The status is SMPL
2: The status is COPY or RCPY
3: The status is PAIR
4: The status is PSUS
5: The status is PSUE
When monitoring groups, 1/2/3/4/5 = normal termination for all pairs.
Abnormal termination: other than 0 to 127, refer to the execution logs for error details.
When the -nowaits option is specified:
Normal termination:
1: The status is SMPL
2: The status is COPY or RCPY
3: The status is PAIR
4: The status is SSUS (note that SVOL_PSUS is displayed as SSUS)
5: The status is PSUE
When the -nowait and/or -nowaits option is not specified:
Normal termination: 0. When monitoring groups, 0 = normal termination for all pairs.
Abnormal termination: other than 0 to 127, refer to the execution logs for error details.
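Because the -nowait return value encodes the pair status, a script can branch on it directly; a minimal sketch (group name assumed):
pairevtwait -g oradb -nowait
case $? in
    1) echo "SMPL";;
    2) echo "COPY or RCPY";;
    3) echo "PAIR";;
    4) echo "PSUS";;
    5) echo "PSUE";;
    *) echo "abnormal termination: see the execution logs"; exit 1;;
esac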
Table 4.10   Specific Error Codes for Pairevtwait

Category        Error Code   Error Message                              Recommended Action                                        Value
Volume status   EX_ENQVOL    Unmatched volume status within the group   Confirm status using the pairdisplay command. Make sure   236
                                                                        all volumes in the group have the same fence level and
                                                                        volume attributes.
                EX_INCSTG    Inconsistent status in group               Confirm pair status using pairdisplay.                    229
                EX_INVVOL    Invalid volume status                      Confirm pair status using pairdisplay -l.                 222
                EX_EVOLCE    Pair Volume combination error              Confirm pair status using pairdisplay, and change the     235
                                                                        combination of volumes.
Unrecoverable   EX_EWSUSE    Pair suspended at WAIT state               Issue pairresync manually to the identified failed        234
                                                                        paired volume to try to recover it. If the trouble
                                                                        persists, call the Hitachi Data Systems Support Center.
Timer           EX_EWSTOT    Timeout waiting for specified status       Increase the timeout value using the -t option.           233
Recoverable     EX_EWSLTO    Timeout waiting for specified status       Confirm that CCI (HORCM) on the remote host is            232
                             on the local host                          running.
Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the
command. If the command failed, the detailed status will be logged in the CCI command log
($HORCC_LOG), even if the user script has no error handling.
Figure 4.12 Example of -FHORC Option for Pairevtwait (ShadowImage environment: “pairevtwait -g oradb1 -s psus -t 10 -FHORC” tests the cascading TrueCopy pair Ora between Seq# 30052 and Seq# 30053, with ShadowImage groups Oradb1 and Oradb2)
Figure 4.13 Example of -FMRCF Option for Pairevtwait (TrueCopy environment: “pairevtwait -g ora -s psus -t 10 -FMRCF 1”; ShadowImage groups Oradb1 and Oradb2 cascade from TrueCopy group Ora between Seq# 30052 and Seq# 30053)
Using the -ss <status> ... and -nowaits options
In the PVOL_PSUS & SVOL_COPY state of HOMRCF quick mode, pairevtwait returns
immediately, even if the S-VOL is still in SVOL_COPY state, because the PVOL is already in
PVOL_PSUS state. If you want to wait for the SVOL_SSUS state, use the -ss <status> and
-nowaits options to wait on the pair status of the SVOL side. This is needed before operating
pairresync -restore or pairsplit -S.
The figure below shows five examples of waiting until the “PVOL_PSUS” & “SVOL_COPY”
state changes to SVOL_SSUS.
pairevtwait -g G1 -ss ssus -t 600            Wait on SVOL in communication with local and remote
pairevtwait -g G1 -ss ssus -t 600            Wait on SVOL in communication with local and remote
pairevtwait -g G1 -ss ssus -FHOMRCF -t 600   Wait on SVOL in communication with remote only
pairevtwait -g G1 -ss ssus -l -t 600         Wait on SVOL directly
pairevtwait -g G1 -ss ssus -l -t 600         Wait on PVOL by finding from PVOL to SVOL

Figure 4.14 Example of Waiting on HOMRCF (group G1: PVOL in PSUS, SVOL changing from COPY to SSUS)
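For instance, a script that performs a quick split and then a reverse resync might wait for the SVOL side to settle first; the group name and timeout below are illustrative assumptions:
pairsplit -g G1                       # quick split returns at PVOL_PSUS & SVOL_COPY
pairevtwait -g G1 -ss ssus -t 600     # wait until the SVOL side reaches SSUS
pairresync -g G1 -restore             # now the reverse resync can be operated safely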
190
Chapter 4 Performing CCI Operations
The horctakeover command suspends G2 (CA-Jnl) automatically if horctakeover returns
“Swap-takeover” as the exit code. In a DC1 host failure, if APP1 needs to wait until DC3
reaches the suspend state, the “SSUS” state can be verified using the pairevtwait command
as shown below.
Figure 4.15 Example of Waiting for “SSUS” on 3DC Using TC/UR (diagram: APP1 issues “horctakeover -g G1” on G1 (Sync) between DC1 and DC2, then “pairevtwait -g G3 -FHORC 1 -ss ssus -t” confirms that the G3 (UR) SVOL at DC3 reaches SSUS)
4.6 Monitoring Pair Activity (Pairmon)
The pairmon command, which is connected to the HORCM daemon, obtains the pair status
transition of each volume pair and reports it. If the pair status changes (due to an error or a
user-specified command), the pairmon command reports the status transition.
The pair status transition events exist in the HORCM pair state transfer queue. The -resevt
option (reset event) deletes one/all events from the HORCM pair state transfer queue. If
reset event is not specified, the pair state transfer queue is maintained. If the -s option is
not specified, pairmon displays all events for which it receives information from HORCM. If
the -s option is specified, only the specified status transitions are displayed.
The CCI software supports the error monitoring and configuration confirmation commands
for linkage with the system operation management of the UNIX server.
Table 4.11   Pairmon Command Parameters

Parameter      Value
Command Name   pairmon
Format         pairmon { -h | -q | -z | -D | -allsnd | -resevt | -nowait | -s <status> ... }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits this command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the pairmon command enter the interactive mode.
The -zx option also monitors HORCM while in the interactive mode. When this option detects a HORCM
shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-D: Selects the default report mode. In the default mode, if there is pair status transition information to be
reported, one event is reported and the event is reset. If there is no pair status transition information to be
reported, the command waits. The report mode is modified by three flags: the -allsnd, -resevt, and -nowait
options.
-allsnd: Reports all events if there is pair status transition information.
-resevt: Reports events if there is pair status transition information, and then resets all events.
-nowait: When this option is specified, the command does not wait when there is no pair status transition
information.
-s <status> ...: Specifies the pair status transitions to be reported: smpl, copy (includes rcpy), pair, psus,
psue. If two or more statuses are specified following -s, masking is done according to the logical OR of the
specified statuses. If this option is not specified, pairmon displays all events for which it receives
information from HORCM.
# pairmon -allsnd -nowait
Group  Pair vol  Port   targ#  lun#  LDEV#…  Oldstat  code  –>  Newstat  code
oradb  oradb1    CL1-A  1      5     145…    SMPL     0x00  –>  COPY     0x01
oradb  oradb2    CL1-A  1      6     146…    PAIR     0x02  –>  PSUS     0x04

Figure 4.16 Pairmon Command Example
Output of the pairmon command:
„ Group: This column shows the group name (dev_group) described in the configuration definition file.
„ Pair vol: This column shows the paired volume name (dev_name) in the specified group, as described in the configuration definition file.
„ Port targ# lun#: These columns show the port ID, TID, and LUN, which are described in the configuration definition file. For further information on fibre-to-SCSI address conversion, see Appendix C.
„ LDEV#: This column shows the LDEV ID for the specified device.
„ Oldstat: This column shows the old pair status when the status of the volume is changed.
„ Newstat: This column shows the new pair status when the status of the volume is changed.
„ code: This column shows the storage-system-internal code for the specified status.
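A simple monitoring loop built on pairmon might look like the following sketch; the log path and polling interval are illustrative assumptions:
#!/bin/sh
# Append pair status transitions to a log, polling HORCM every 30 seconds.
while true
do
    pairmon -allsnd -resevt -nowait >> /tmp/pairmon.log
    sleep 30
done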
Table 4.12 Results of Pairmon Command Options

-D       -nowait  -resevt  -allsnd  Actions
-D       -        -        -        When HORCM does not have an event, the command waits until an event
                                    occurs. If one or more events exist, it reports one event and resets
                                    the event which it reported.
Invalid  -        -        -allsnd  When HORCM does not have an event, the command waits until an event
                                    occurs. If one or more events exist, it reports all events.
Invalid  -        -resevt  -        When HORCM does not have an event, the command waits until an event
                                    occurs. If one or more events exist, it reports one event and resets
                                    all events.
Invalid  -        -resevt  -allsnd  When HORCM does not have an event, the command waits until an event
                                    occurs. If one or more events exist, it reports all events and resets
                                    all events.
Invalid  -nowait  -        -        When HORCM does not have an event, the command reports nothing. If
                                    one or more events exist, it reports one event and resets the event
                                    which it reported.
Invalid  -nowait  -        -allsnd  When HORCM does not have an event, the command reports nothing. If
                                    one or more events exist, it reports all events.
Invalid  -nowait  -resevt  -        When HORCM does not have an event, the command reports nothing. If
                                    one or more events exist, it reports one event and resets all events.
Invalid  -nowait  -resevt  -allsnd  When HORCM does not have an event, the command reports nothing. If
                                    one or more events exist, it reports all events and resets all events.
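For illustration, the following is a minimal monitoring sketch (not from the manual; the instance number, log path, and polling interval are hypothetical) that combines the -allsnd and -nowait flags to poll for status transitions without blocking:

#! /bin/sh
# Minimal sketch: poll HORCM instance 0 for pair status transitions and
# log any transition lines that mention PSUE (suspend by error).
HORCMINST=0
export HORCMINST
while true
do
    pairmon -allsnd -nowait | grep PSUE >> /tmp/pair_events.log
    sleep 30
done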
4.7 Checking Attribute and Status (Pairvolchk)
The pairvolchk command acquires and reports the attribute of a volume or group connected
to the local host (issuing the command) or remote host. The volume attribute is SMPL
(simplex), P-VOL (primary volume), or S-VOL (secondary volume). The -s[s] option reports the pair status in addition to the volume attribute. Figure 4.17 shows examples of the pairvolchk command and its output. Table 4.13 lists and describes the pairvolchk command parameters, and Table 4.16 shows the state transition table for an HA control script using the pairvolchk and horctakeover commands.
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR fence = ASYNC CTGID = 2]  ← TC Async
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR fence = DATA ]            ← TrueCopy Sync
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR ]                         ← ShadowImage
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR CTGID = 1]                ← ShadowImage at-time split
Figure 4.17 Pairvolchk Command Examples
Table 4.13 Pairvolchk Command Parameters

Parameter     Value
Command Name  pairvolchk
Format        pairvolchk{ -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] |
              -FHORC [MU#] | -FMRCF [MU#] | -d[g] <seq#> <LDEV#> [MU#] | -c | -ss | -nomsg }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the pair volume check command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the pairvolchk command enter the interactive mode. The -zx option watches for the HORCM while in the interactive mode; when this option detects a HORCM shutdown, the interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF] and is used to specify the instance# of HORCM.
-g <group>: Specifies the group name defined in the configuration definition file. This option must always
be specified. The command is executed for the specified group unless the -d <pair Vol> option is specified.
-d <pair Vol>: Specifies the paired logical volume name defined in the configuration definition file. When
this option is specified, the command is executed for the specified paired logical volumes.
-d[g] <raw_device> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified raw_device, and if the specified raw_device is contained in the group, the target volume is
executed as the paired logical volume (-d) or group (-dg). This option is effective without specification of “-g
<group>" option. If the specified raw_device is contained in two or more groups, the command is
executed on the first group.
-d[g] <seq#> <LDEV#> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified LDEV. If the specified LDEV is contained in the group, the target volume is executed as the paired
logical volume (-d) or group (-dg). This option is effective without specification of “-g <group>“ option. If the
specified LDEV is contained in two or more groups, the command is executed on the first group. The <seq
#> <LDEV #> values can be specified in hexadecimal (by addition of “0x “) or decimal notation.
-c: Checks the conformability of the paired volumes of the local and remote hosts and reports the volume
attribute of the remote host. If this option is not specified, the volume attribute of the local host is reported.
-ss: Used to acquire the attribute of a volume and the pair status of a volume. If this option is not specified,
the volume attribute is reported.
-nomsg: Suppresses messages to be displayed when this command is executed. It is used to execute a
command from a user program. This option must be specified at the beginning of a command argument.
The command execution log is not affected by this option.
-FHORC [MU#] or -FCA [MU#]: Forcibly specifies a cascading TrueCopy volume for the specified pair logical
volumes in a ShadowImage environment (see the example in Figure 4.18). If the -c option is not specified, this
option acquires the attributes of a cascading TrueCopy volume on the local host (near site). If the -c option is
specified, this option acquires the attributes of a cascading TrueCopy volume on the remote host (far site).
-FMRCF [MU#] or -FBC [MU#]: Forcibly specifies a cascading ShadowImage volume for the specified pair
logical volumes in a TrueCopy environment (see the example in Figure 4.19). If the -c option is not specified,
this option acquires the attributes of a cascading ShadowImage volume on the local host (near site). If the -c
option is specified, it acquires the attributes of a cascading ShadowImage volume on the remote host (far site).
-MINAP: Shows the minimum number of active paths for the specified group in HORC/HORC Async on the PVOL.
Note: If the RAID firmware does not support reporting the number of active paths, the "MINAP" item is not
displayed, as follows:
pairvolchk : Volstat is P-VOL.[status = PAIR fence = ASYNC CTGID = 2]
Display example for ShadowImage/Snapshot:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR ]
Display example for ShadowImage (specified with “-m grp” option):
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR CTGID = 1]
Display example for TrueCopy:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR fence = DATA MINAP = 2 ]
Display example for TrueCopy Sync CTG:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR fence = DATA CTGID = 2 MINAP = 2 ]
Display example for TrueCopy Async:
# pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR fence = ASYNC CTGID = 2 MINAP = 2 ]
MINAP displays the following two conditions (status) according to the pair status:
Returned values
When the -ss option is not specified:
Normal termination:
  1: The volume attribute is SMPL.
  2: The volume attribute is P-VOL.
  3: The volume attribute is S-VOL.
Abnormal termination: Other than 0 to 127; refer to the execution log files for error details.
When the -ss option is specified:
Normal termination:
  11: The status is SMPL.
  For Hitachi TrueCopy Sync/ShadowImage:
  22: The status is PVOL_COPY or PVOL_RCPY.
  23: The status is PVOL_PAIR.
  24: The status is PVOL_PSUS.
  25: The status is PVOL_PSUE.
  26: The status is PVOL_PDUB (TrueCopy & LUSE volume only).
  29: The status is PVOL_INCSTG (inconsistent status in group). Not returned.
  32: The status is SVOL_COPY or SVOL_RCPY.
  33: The status is SVOL_PAIR.
  34: The status is SVOL_PSUS.
  35: The status is SVOL_PSUE.
  36: The status is SVOL_PDUB (TrueCopy & LUSE volume only).
  39: The status is SVOL_INCSTG (inconsistent status in group). Not returned.
  To identify TrueCopy Async/UR, the pairvolchk command returns a value that is 20 more than the
  corresponding TrueCopy Sync status code, and adds the PFUL and PFUS states to the return codes to
  identify the sidefile status of TrueCopy Async or the UR journal file.
  For Hitachi TrueCopy Async and Universal Replicator:
  42: The status is PVOL_COPY.
  43: The status is PVOL_PAIR.
  44: The status is PVOL_PSUS.
  45: The status is PVOL_PSUE.
  46: The status is PVOL_PDUB. (TrueCopy & LUSE volume only)
  47: The status is PVOL_PFUL.
  48: The status is PVOL_PFUS.
  52: The status is SVOL_COPY or SVOL_RCPY.
  53: The status is SVOL_PAIR.
  54: The status is SVOL_PSUS.
  55: The status is SVOL_PSUE.
  56: The status is SVOL_PDUB. (TrueCopy & LUSE volume only)
  57: The status is SVOL_PFUL.
  58: The status is SVOL_PFUS.
  For group status:
  214: EX_EXQCTG
  216: EX_EXTCTG
  236: EX_ENQVOL
  237: EX_CMDIOE
  235: EX_EVOLCE ... when the -c option is specified only
  242: EX_ENORMT ... when the -c option is specified only
  For a SnapShot volume: SnapShot needs to report the Full status of the SnapShot pool as a snapshot
  condition. For this purpose, SnapShot also uses the PFUL and PFUS statuses, which for TrueCopy Async
  indicate the Full status of the sidefile. The application can refer to this status as the return value.
  22: The status is PVOL_COPY or PVOL_RCPY.
  23: The status is PVOL_PAIR.
  24: The status is PVOL_PSUS.
  25: The status is PVOL_PSUE.
  26: The status is PVOL_PDUB. (HORC && LUSE volumes only)
  27: The status is PVOL_PFUL. (PAIR nearing the Full status of the SnapShot pool)
  28: The status is PVOL_PFUS. (PSUS due to the Full status of the SnapShot pool)
  29: The status is PVOL_INCSTG. (Inconsistent status in group) ... Not returned
32: The status is SVOL_COPY or SVOL_RCPY.
33: The status is SVOL_PAIR.
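For illustration, a minimal sketch (not from the manual; the group name is hypothetical) of branching on the -ss return code for a TrueCopy Sync group, using the P-VOL status codes listed above:

#! /bin/sh
# Minimal sketch: act on the pairvolchk -ss return code (22-26 = P-VOL states).
pairvolchk -g oradb -ss -nomsg
rc=$?
case $rc in
    23) echo "oradb: PVOL_PAIR (healthy)" ;;
    24) echo "oradb: PVOL_PSUS (split)" ;;
    25) echo "oradb: PVOL_PSUE (suspended by error)" ;;
    *)  echo "oradb: status/error code $rc" ;;
esac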
Table 4.14 Specific Error Codes for Pairvolchk

Category       Error Code  Error Message               Recommended Action                            Value
Volume status  EX_ENQVOL   Unmatched volume status     Confirm status using the pairdisplay          236
                           within the group            command. Make sure all volumes in the group
                                                       have the same fence level and volume
                                                       attributes.
Unrecoverable  EX_EVOLCE   Pair Volume combination     Confirm pair status using pairdisplay, and    235
                           error                       change combination of volumes.

Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the command. If
the command failed, the detailed status will be logged in the CCI command log ($HORCC_LOG) (see
Table A.2), even if the user script has no error handling.
Figure 4.18 shows a pairvolchk example that acquires the status of the intermediate P/Pvol through the specified pair group in a ShadowImage environment. Figure 4.19 shows a pairvolchk example that acquires the status (PVOL_PSUS) of the intermediate S/Pvol (MU#1) through the specified pair group in a Hitachi TrueCopy environment.
[Diagram: ShadowImage environment - pairvolchk -g oradb1 -c -s -FHORC targets the cascading TrueCopy pair between the P/P-VOL (Seq# 30052) and the S-VOL (Seq# 30053); ShadowImage groups Oradb1 (MU#0) and Oradb2 (MU#1) cascade from the P/P-VOL.]
Figure 4.18 Example of -FHORC Option for Pairvolchk
[Diagram: TrueCopy environment - pairvolchk -g ora -c -s -FMRCF 1 targets the cascading ShadowImage volume (MU#1) of the S/P-VOL (Seq# 30053), which is the TrueCopy S-VOL of the P-VOL on Seq# 30052; ShadowImage groups Oradb1 (MU#0) and Oradb2 (MU#1) cascade from the S/P-VOL.]
Figure 4.19 Example of -FMRCF Option for Pairvolchk
Table 4.15 Truth Table for Pairvolchk Group Status Display

        Status of Each Volume in the Group
Option  COPY*  PSUE   PDUB   PFUS   PSUS   PFUL   PAIR   Group Status
-s      TRUE   x      x      x      x      x      x      COPY* (see Notes below)
        false  TRUE   x      x      x      x      x      PSUE
        false  false  TRUE   x      x      x      x      PDUB
        false  false  false  TRUE   x      x      x      PFUS
        false  false  false  false  TRUE   x      x      PSUS
        false  false  false  false  false  TRUE   x      PFUL
        false  false  false  false  false  false  TRUE   PAIR
-ss     TRUE   x      x      x      x      x      x      COPY*
        false  TRUE   x      x      x      x      x      PSUE
        false  false  TRUE   x      x      x      x      PDUB
        false  false  false  false  false  TRUE   x      PFUL
        false  false  false  false  false  false  TRUE   PAIR
        false  false  false  TRUE   x      x      x      PFUS
        false  false  false  false  TRUE   x      x      PSUS

*COPY = COPY or RCPY
x = true or false (does not matter).
Notes:
„ The PFUL state is displayed as PAIR by all commands (except the -fc option of the pairdisplay command), since PFUL indicates the PAIR state with the sidefile at the HWM.
„ The PFUS state is displayed as PSUS by all commands (except the -fc option of the pairdisplay command), since PFUS indicates the SUSPENDED state due to a full sidefile.
„ The SVOL_PSUS state is displayed as SSUS by the pairdisplay command and other commands.
„ This option can be used under the condition that the "USE_OLD_VCHK" variable is defined for 'pairvolchk -s'.
Table 4.16 State Transition Table for HA Control Script

State  Volume Attributes and Pair Status  Results of Executing pairvolchk and horctakeover from DC1(DC2)
No.    DC1(DC2)   DC2(DC1)                pairvolchk -s    pairvolchk -s -c        Pair       Horctakeover
                                          (local volume)   (remote volume)         Status     Result
1      SMPL       SMPL                    SMPL             SMPL                    XXX        EX_VOLCRE
2      SMPL       P-VOL COPY              SMPL             PVOL_XXX                PVOL_XXX   Nop
3      SMPL       P-VOL PAIR/PFUL         SMPL             PVOL_XXX                PVOL_XXX   Nop
4      SMPL       P-VOL PSUS              SMPL             PVOL_XXX                PVOL_XXX   Nop
4-1    SMPL       P-VOL PFUS (SSWS)       SMPL             PVOL_XXX                PVOL_XXX   Nop
5      SMPL       P-VOL PSUE              SMPL             PVOL_XXX                PVOL_XXX   Nop
6      SMPL       P-VOL PDUB              SMPL             PVOL_XXX                PVOL_XXX   Nop
8      SMPL       S-VOL                   SMPL             SVOL_YYY                EX_EVOLCE  EX_EVOLCE
9      SMPL       Unknown                 SMPL             EX_ENORMT or EX_CMDIOE  XXX        (EX_ENORMT) or (EX_CMDIOE)
10     P-VOL      SMPL                    PVOL_XXX         SMPL                    PVOL_XXX   EX_VOLCRE
11     P-VOL      P-VOL                   PVOL_XXX         PVOL_XXX                EX_EVOLCE  EX_EVOLCE
12     P-VOL      S-VOL                   PVOL_XXX         SVOL_YYY                PVOL_XXX   fence = data or status & status = PSUE or PDUB:
                                                                                              PVOL-PSUE → 12 or PVOL-SMPL → 8; other: Nop
13     P-VOL      Unknown                 PVOL_XXX         EX_ENORMT or EX_CMDIOE  PVOL_XXX   fence = data or status & status = PSUE or PDUB:
                                                                                              PVOL-PSUE → 13 or PVOL-SMPL → 9; other: Nop
14     S-VOL      SMPL                    SVOL_YYY         SMPL                    EX_EVOLCE  EX_EVOLCE
15     S-VOL      P-VOL COPY              SVOL_YYY         PVOL_XXX                PVOL_XXX   SVOL_E* → 4,5
16     S-VOL      P-VOL PAIR/PFUL         SVOL_YYY         PVOL_XXX                PVOL_XXX   Swap → 12
17     S-VOL      P-VOL PSUS/PFUS         SVOL_YYY         PVOL_XXX                PVOL_XXX   SVOL_E → 4 or SVOL → 4-1
18     S-VOL      P-VOL PSUE/PDUB         SVOL_YYY         PVOL_XXX                PVOL_XXX   data: SVOL → 5,6; status: SVOL_E → 5,6;
                                                                                              never: SVOL_E → 5,6; async: SVOL → 5,6
21     S-VOL      S-VOL                   SVOL_YYY         SVOL_YYY                EX_EVOLCE  EX_EVOLCE
22     S-VOL      Unknown COPY            SVOL_YYY         EX_ENORMT or EX_CMDIOE  YYY        SVOL_E* → 4,5
23     S-VOL      Unknown PAIR/PFUL       SVOL_YYY         EX_ENORMT or EX_CMDIOE  YYY        data: SVOL → 4; status: SVOL → 4;
                                                                                              never: SVOL_E → 4; async: SVOL → 4
24     S-VOL      Unknown PSUS/PFUS       SVOL_YYY         EX_ENORMT or EX_CMDIOE  YYY        SVOL_E → 4 or SVOL → 4-1
25     S-VOL      Unknown PSUE/PDUB       SVOL_YYY         EX_ENORMT or EX_CMDIOE  YYY        data: SVOL → 5,6; status: SVOL_E → 5,6;
                                                                                              never: SVOL_E → 5,6; async: SVOL → 5,6
Explanation of terms in Table 4.16:
XXX = Pair status of P-VOL returned by “pairvolchk -s” or “pairvolchk -s -c” command
YYY = Pair status of S-VOL returned by “pairvolchk -s” or “pairvolchk -s -c” command
PAIR STATUS = Since the P-VOL controls status, PAIR STATUS is reported as PVOL_XXX
(except when the P-VOL’s status is Unknown).
PVOL-PSUE = PVOL-PSUE-takeover
PVOL-SMPL = PVOL-SMPL-takeover
Nop = Nop-takeover
Swap = Swap-takeover. When the horctakeover command execution succeeds, the state transitions to the indicated (→) state number.
SVOL = SVOL-SSUS takeover or Swap-takeover. In case of a host failure, this function
executes Swap-takeover. In case of an ESCON/fibre-channel or P-VOL site failure, this
function executes SVOL-SSUS-takeover.
SVOL_E = Execute SVOL-SSUS takeover and return EX_VOLCUR.
SVOL_E* = Return EX_VOLCUR.
When the horctakeover command execution succeeds, the state transitions to the indicated (→) state number. For example, if the HA control script sees SVOL_PAIR at the local (near) volume and PVOL_PAIR at the remote (far) volume (as in state 16 above), it will perform a swap-takeover, which results in a state 12 situation.
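As a minimal illustration of how an HA control script can combine these commands (a sketch, not the manual's script; the group name and timeout are hypothetical):

#! /bin/sh
# Minimal sketch: check the local volume attribute, then let horctakeover
# select Swap/SVOL/PVOL/Nop-takeover per the state table above.
pairvolchk -g oradb -nomsg
case $? in
    1)   echo "oradb: SMPL - nothing to take over" ;;
    2|3) horctakeover -g oradb -t 300 || exit 1 ;;
    *)   echo "oradb: unexpected pairvolchk result - operator decision needed"
         exit 1 ;;
esac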
4.7.1 Recovery in Case of SVOL-Takeover
While DC1 is conducting processing (normally state = 4), and when DC2 has recovered from the failure, the following commands must be issued to make the DC1 side the PVOL:
In case of operations on the DC1 side:
1. pairsplit -S
2. paircreate -vl
3. pairevtwait (wait for PAIR)
In case of operations on the DC2 side:
1. pairsplit -S
2. paircreate -vr
3. pairevtwait (wait for PAIR)
[Diagram: starting point is State No. 4 after horctakeover - the DC1 (Host A) volume is PVOL_PSUS and the DC2 (Host B) volume is SMPL.]
[Diagram: recovery sequence - (1) pairsplit -S brings DC1 and DC2 to State No. 1 (SMPL/SMPL); (2) paircreate brings them to State No. 15 (PVOL_COPY/SVOL_COPY); (3) pairevtwait waits until State No. 16 (PVOL_PAIR/SVOL_PAIR) is reached.]
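For illustration, the DC1-side procedure above could be scripted as follows (a sketch, not from the manual; the group name, fence level, and timeout are hypothetical):

#! /bin/sh
# Minimal sketch: dissolve the old pair, re-create it with the DC1 volume
# as the P-VOL, and wait until the pair status becomes PAIR.
pairsplit   -g oradb -S
paircreate  -g oradb -vl -f never
pairevtwait -g oradb -s pair -t 3600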
After operations (state is No.16), when the DC2 takes over processing from the DC1, the
horctakeover command will execute a swap-takeover operation due to (DC2)SVOL &
(DC1)PVOL_PAIR on the (DC2) side.
If the DC1 side has not performed this operation, and the DC2 takes over processing from the DC1, the horctakeover command returns EX_VOLCRE due to (DC2)PVOL & (DC1)SMPL on the (DC2) side. → state is No. 10.
In this case, the pairvolchk (-s) command returns PVOL_PSUS on the (DC2) side, and the pairvolchk (-s -c) command returns SMPL on the (DC2) side.
[Diagram: State No. 10 - the DC2 volume is PVOL_PSUS and the DC1 volume is SMPL.]
If the DC2 takes over processing from the DC1 after the pairsplit operation, the horctakeover command returns EX_VOLCRE due to (DC2)SMPL & (DC1)SMPL on the (DC2) side. → state is No. 1.
[Diagram: State No. 1 after pairsplit -S - both the DC1 and DC2 volumes are SMPL.]
As for the other case: If the DC2 takes over processing from the DC1 while the pairsplit operation is in progress, the horctakeover command returns EX_ENQVOL (unmatched volume status within the group), because the group's volume attribute is not the same on each volume ((DC2)SMPL & (DC2)PVOL) on the (DC2) side. In this case, the pairvolchk (-s) command returns EX_ENQVOL on the (DC2) side.
When the DC1 side has performed this operation, and while the DC1 is in the COPY state (DC1 PVOL-COPY & DC2 SVOL-COPY), if the DC2 takes over processing from the DC1, an operator decision and/or a pairevtwait (wait for PAIR) is needed on the (DC2) side. → state is No. 15.
If the DC2 takes over processing from the DC1 without these confirmation operations, the horctakeover command returns SVOL_E (executes SVOL-takeover and returns EX_VOLCUR) on the (DC2) side. → state is No. 15.
[Diagram: State No. 15 after paircreate -vl - the DC1 volume is PVOL_COPY and the DC2 volume is SVOL_COPY.]
As for the other case: If the DC2 takes over processing from the DC1 while the paircreate operation is in progress, the horctakeover command returns EX_ENQVOL (unmatched volume status within the group), because the group's volume attribute is not the same on each volume ((DC2)SMPL & (DC2)SVOL) on the (DC2) side. In this case, the pairvolchk (-s) command returns EX_ENQVOL on the (DC2) side.
As for the other case in state No. 16: If the DC2 takes over processing from the DC1 without the pairevtwait (-s pair) operation, the horctakeover command returns SVOL_E (executes SVOL-takeover and returns EX_VOLCUR), because the group's volume status is not the same on each volume ((DC2)SVOL_PAIR & (DC2)SVOL_COPY) on the (DC2) side. In this case, the pairvolchk (-s) command returns SVOL_COPY on the (DC2) side.
[Diagram: State No. 16 - the DC1 volume is PVOL PAIR/COPY and the DC2 volume is SVOL PAIR/COPY.]
In case of state No. 17: This is the case where the pair has been suspended by the operator (using the pairsplit command). When the DC1 takes over processing from the DC2 while the volumes are in the PSUS state (DC1 SVOL-PSUS & DC2 PVOL-PSUS), an operator decision and/or a pairresync on the DC1 side is needed. If the DC1 takes over processing from the DC2 without these confirmation operations, the horctakeover command returns SVOL_E (executes SVOL-takeover and returns EX_VOLCUR) on the (DC1) side. → state is No. 17.
[Diagram: State No. 17 - the DC2 volume is PVOL_PSUS and the DC1 volume is SVOL_PSUS.]
Consideration for state No. 9:
The horctakeover command fails with EX_ENORMT in the following nested failure case (state No. 4 → 9). Therefore, the HA Control Script needs to ask the operator for a decision, and does nothing on the DC1 side.
[Diagram: nested failure - after a Host B failure, State No. 16 changes to State No. 23 and the takeover leads to State No. 4; a subsequent DC2 site failure leads to State No. 9.]
4.7.2 PVOL-PSUE-Takeover
The horctakeover command executes PVOL-PSUE-takeover when the primary volume cannot be used (a PSUE or PDUB volume is contained in the group, or the link is down such that the pair status is PVOL_PAIR/SVOL_PAIR and the AP (active path) value is 0), and returns "PVOL-PSUE-takeover" as the return value. PVOL-PSUE-takeover changes the primary volume to the suspend state (PSUE or PDUB → PSUE*, PAIR → PSUS), which permits WRITE to all primary volumes of the group.
[Diagram: an ESCON® or fibre-channel failure occurs while all pairs are in the PAIR state; horctakeover changes the P-VOLs to PSUS (and failed volumes to PSUE*), while the S-VOLs remain PAIR/PSUE.]
The horctakeover command also returns PVOL-PSUE-takeover in the following nested failure case.
[Diagram: a host failure occurs after an ESCON®/fibre failure; the P-VOL group remains intermingled PSUS/PSUE* and horctakeover again returns PVOL-PSUE-takeover, while the S-VOLs remain PAIR.]
Even if the ESCON or FC link to the S-VOL is still connected, PVOL-PSUE-takeover changes only the primary volume to the suspend state (the SVOL's state is not changed), since this maintains the consistency of the secondary volume at the point the horctakeover command was accepted.
[Diagram: an S-VOL failure occurs while the pairs are in the PAIR/PSUE state; horctakeover changes the P-VOLs to PSUS/PSUE*, while the S-VOL states are not changed.]
Group status of the P-VOL: PSUE and/or PSUS statuses are intermingled in the group through the action of this PVOL-PSUE-takeover. This intermingled pair status is PSUE as the group status; therefore, the pairvolchk command gives priority to PSUE (PDUB) over PSUS as the group status. The group status of the PVOL is therefore also maintained after the PVOL-PSUE-takeover.
4.7.3 Recovery in Case of PVOL-PSUE-Takeover
This special state (PSUE*) turns back to the original state after successful execution of the pairresync command (after the recovery of the ESCON/fibre-channel link). If the pairresync command fails because the ESCON or fibre-channel link is not restored, this special state (PSUE*) is NOT changed.
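For illustration, a minimal recovery sketch (not from the manual; the group name and timeout are hypothetical):

#! /bin/sh
# Minimal sketch: after the link is restored, clear the PSUE* state from
# the P-VOL (local) side and wait for the pair to return to PAIR.
pairresync  -g oradb -l
pairevtwait -g oradb -s pair -t 3600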
[Diagram: after the link recovery, pairresync -l on Host C changes the PSUS/PSUE* volumes to COPY and then to PAIR, turning the group back to its original state.]
4.7.4 SVOL-SSUS Takeover in Case of ESCON/Fibre/Host Failure
The SVOL-Takeover executes SVOL-SSUS-takeover to enable writing without changing the SVOL to SMPL. SVOL-SSUS-takeover changes the SVOL to the suspend state (PAIR, PSUE → SSUS), which permits write and maintains delta data (bitmap) for all SVOLs of the group.
[Diagram: an ESCON®/fibre/host failure occurs while the pairs are in the PAIR* state; horctakeover changes the S-VOLs to SSUS, while the P-VOLs remain PAIR (or PSUE).]
PAIR* = PAIR for CA Sync; PAIR → PSUE for Hitachi TrueCopy Async/UR.
SSUS = SVOL_PSUS
Group status of SVOL-SSUS-takeover: After SVOL-SSUS-takeover completes, the SVOL status is displayed as SSUS by the pairdisplay command, and the pairvolchk command returns the SVOL status as SVOL_PSUS. This special state is also displayed as SSWS using the -fc option of the pairdisplay command. This special state (PVOL_PSUE and SVOL_PSUS) between the PVOL and SVOL may need to be handled by the HA Control Script.
Hitachi TrueCopy Async/UR: Before the SVOL is changed to SSUS, the SVOL-takeover will try to copy non-transmitted data (which remains in the FIFO queue (sidefile) of the PVOL) to the SVOL. In case of an ESCON/FC failure, this data synchronization operation may fail. Even so, the SVOL-takeover function will execute the force split to SSUS, enabling the SVOL to be used.
Note: Non-transmitted data (which remains in the FIFO queue (sidefile) of the PVOL) will be reflected to the bitmap to empty the FIFO queue, and the pair state will be set to PSUE. This non-transmitted data which is reflected to the bitmap will be lost (resynchronized as NEW_SVOL) by issuing the pairresync -swaps command for recovery from SVOL-SSUS-takeover on the takeover site (Host B) (see next section).
4.7.5 Recovery from SVOL-SSUS-Takeover
After recovery of the ESCON/FC link, this special state (PVOL_PSUE and SVOL_PSUS) is changed to the COPY state: the original SVOL is swapped to become the NEW_PVOL, and the NEW_SVOL (the cast-off original PVOL) is resynchronized based on the NEW_PVOL, by issuing the pairresync -swaps command on the takeover site (Host B).
[Diagram: pairresync -swaps issued on Host B only; the SSUS volumes become the NEW_PVOL, a delta COPY runs, and after PAIR is reached the P-VOL and S-VOL roles are swapped.]
If the pairresync -swaps command fails because the ESCON/FC link is not restored, this special state (PVOL_PSUE and SVOL_PSUS) is NOT changed.
Failback after recovery on Host B: After recovery with execution of the pairresync -swaps command on Host B, if you stop the applications on Host B and restart the applications on Host A, horctakeover will execute Swap-Takeover, even if Host A cannot communicate with the remote Host B.
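For illustration, a minimal fast-failback sketch (not from the manual; the group name, copy pace, and timeout are hypothetical):

#! /bin/sh
# Minimal sketch: on the takeover site (Host B) after an SVOL-SSUS-takeover,
# -swaps makes the old S-VOL the NEW_PVOL and resynchronizes the other side.
pairresync  -g oradb -swaps -c 15
pairevtwait -g oradb -s pair -t 3600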
[Diagram: horctakeover on Host A swaps the volumes back while all pairs remain PAIR; the Host A side becomes the P-VOL again.]
Failback without recovery on Host B: After recovery of the ESCON/FC link and hosts, if you stopped the applications without executing the pairresync -swaps command on Host B and restarted the applications on Host A, you must use the following procedure for recovery. At this time, the pairvolchk command on Host A returns PVOL_PSUE & SVOL_PSUS as the state combination.
[Diagram: pairresync -swapp issued on Host A changes the pairs from SSUS to COPY and then to PAIR with the roles swapped (the Host A volume becomes the NEW_SVOL); after PAIR is reached, horctakeover performs a delta COPY and restores the Host A side as the P-VOL.]
Note: The pairresync -swapp option is used to swap the volume from PVOL to SVOL while in the suspended state on the PVOL side, and resynchronizes the NEW_SVOL (the cast-off original PVOL) based on the NEW_PVOL. As a result of this operation, the volume attribute of the own host (local host) becomes the NEW_SVOL. The target volume of the local host must be the P-VOL, and the remote host is needed for this operation.
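For illustration, a minimal sketch of this failback path (not from the manual; the group name, copy pace, and timeout are hypothetical):

#! /bin/sh
# Minimal sketch: failback without recovery on Host B, run on Host A (the
# P-VOL side); -swapp makes the Host A volume the NEW_SVOL, and after PAIR
# is reached, horctakeover swaps the roles back.
pairresync   -g oradb -swapp -c 15
pairevtwait  -g oradb -s pair -t 3600
horctakeover -g oradb -t 300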
4.7.6 SVOL-Takeover in Case of Host Failure
After SVOL-takeover changes only the SVOL to the suspend state (PAIR, PSUE → SSUS), the internal operation of SVOL-takeover executes the pairresync -swaps command to maintain mirror consistency between the NEW_PVOL and NEW_SVOL, and then returns Swap-takeover as the return value of the horctakeover command.
Hitachi TrueCopy Async/UR: Before the SVOL is changed to SSUS, the SVOL-takeover copies non-transmitted data (which remains in the FIFO queue (sidefile) of the PVOL) to the SVOL side. The SVOL-takeover operation waits for the copy of the non-transmitted P-VOL data up to a timeout value (specified by the -t <timeout> option). After a synchronous state between the PVOL and SVOL is reached, the SVOL-takeover splits the pair, the state is changed to SSUS, and the operation after that is the same.
[Diagram: host failure; before running the application, SVOL-takeover internally executes pairresync -swaps - the pairs go from PAIR to PSUS/SSUS, and then the P-VOL and S-VOL roles are swapped back to PAIR.]
Since the SVOL is already in the SSWS state following the SVOL-SSUS-takeover, the horctakeover command does nothing in the following nested failure case.
[Diagram: nested failure with Host D; after the first horctakeover the S-VOLs are already SSUS, so the second horctakeover makes no change.]
4.8 Displaying Pair Status (Pairdisplay)
The pairdisplay command displays the pair status allowing you to verify completion of pair
operations (e.g., paircreate, pairresync). The pairdisplay command is also used to confirm
the configuration of the pair connection path (the physical link of paired volumes and
servers). The pairdisplay command can be used for a paired volume or a group of paired volumes. Table 4.17 lists and describes the pairdisplay command parameters.

Table 4.17 Pairdisplay Command Parameters

Parameter     Value
Command name  pairdisplay
Format        pairdisplay{ -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] | -FHORC [MU#] |
              -FMRCF [MU#] | -d[g] <seq#> <LDEV#> [MU#] | -c | -l | -f[xcdm] | -CLI | -m <mode> | -v jnl[t] |
              -v ctg | -v pid }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the pair volume check command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the pairdisplay command enter the interactive mode. The -zx option watches for the HORCM while in the interactive mode; when this option detects a HORCM shutdown, the interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF] and is used to specify the instance# of HORCM.
-g <group>: Specifies the group name defined in the configuration definition file. This option must always be
specified. The command is executed for the specified group unless the -d <pair Vol> option is specified.
-d <pair Vol>: This option is used to specify the paired logical volume name defined in the configuration
definition file. When this option is specified, the command is executed for the specified paired logical volumes.
-d[g] <raw_device> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified raw_device, and if the specified raw_device is contained in the group, the target volume is executed
as the paired logical volume (-d) or group (-dg). This option is effective without specification of “-g <group>“
option. If the specified raw_device is contained in two or more groups, the command is executed on the
first group.
-FHORC [MU#] or -FCA [MU#]: Forcibly specifies a cascading TrueCopy volume for the specified pair logical
volumes in a ShadowImage environment. If the -l option is specified, this option displays the status of a
cascading TrueCopy volume on the local host (near site). If the -l option is not specified, this option displays
the status of a cascading TrueCopy volume on the remote host (far site). This option cannot be specified with
the -m <mode> option on the same command line.
-FMRCF [MU#] or -FBC [MU#]: Forcibly specifies a cascading ShadowImage volume for the specified pair
logical volumes in a TrueCopy environment. If the -l option is specified, this option displays the status of a
cascading ShadowImage volume on the local host (near site). If the -l option is not specified, this option
displays the status of a cascading ShadowImage volume on the remote host (far site). This option cannot be
specified with the -m <mode> option on the same command line.
-d[g] <seq#> <LDEV#> [MU#]: Searches a group in the configuration definition file (local instance) for the
specified LDEV, and if the specified LDEV is contained in the group, the target volume is executed as the
paired logical volume (-d) or group (-dg). This option is effective without specification of the "-g <group>"
option. If the specified LDEV is contained in two or more groups, the command is executed on the first group.
The <seq#> <LDEV#> values can be specified in hexadecimal (by addition of "0x") or decimal notation.
-c: Checks the configuration of the paired volume connection path (physical link of paired volume among the
servers) and displays illegal pair configurations. If this option is not specified, the status of the specified paired
volume is displayed without checking the path configuration.
-l: Displays the paired volume status of the local host (which issues this command).
-fx: Displays the LDEV ID as a hexadecimal number.
-fc: Displays copy operation progress, sidefile percentage, bitmap percentage, or UR journal percentage.
Displays PFUL/PFUS for TrueCopy Async/UR. Used to confirm the SSWS state as an indication following
SVOL_SSUS-takeover.
-fd: Displays the relation between the Device_File and the paired volumes, based on the group (as defined in
the local instance configuration definition file). If the Device_File column shows "Unknown" for either the local
or the remote host (instance), the volume is not recognized on the own host, and pair operations are rejected
(except local operations such as "-l") in protection mode. Display example:
# pairdisplay -g oradb -fd
Group PairVol(L/R) Device_File M ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L)   c0t3d0      0  35013   17..P-VOL COPY, 35013    18 -
oradb oradev1(R)   c0t3d1      0  35013   18..S-VOL COPY, 35013    17 -
-fm: Displays the Bitmap mode in the output of the M column.
-fe: Displays the serial# and LDEV# of the external LUNs mapped to the LDEV, and additional information for
the pair volume. This option displays this information by adding it to the last columns, ignoring the 80-column
format. This option is invalid if the cascade options (-m all, -m cas) are specified.
Display example for TrueCopy:
# pairdisplay -g horc0 -fdxe
Group ... LDEV#.P/S,Status,Fence,Seq#,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV#
horc0 ... 41.P-VOL PAIR ASYNC ,63528  40 - 0 - 2 - - - -
horc0 ... 40.S-VOL PAIR ASYNC ,-----  41 - 0 - - - - - -
Display example for ShadowImage/Snapshot:
# pairdisplay -g horc0 -fe
Group ... Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M CTG CM EM E-Seq# E-LDEV#
Returned values
1: The volume attribute is SMPL.
2: The volume attribute is P-VOL.
3: The volume attribute is S-VOL. When displaying groups, 1/2/3 = normal termination for all pairs.
Abnormal termination (other than 0 to 127): refer to the execution log files for error details.
# pairdisplay -g oradb -fcx
Group Pair Vol(L/R) (P,T#,L#),   Seq#, LDEV#..P/S, Status, Fence, Copy%, P-LDEV# M
oradb oradb1(L)     (CL1-B, 1,0) 1234  64..P-VOL   PAIR    Never, 75     C8      -
oradb oradb1(R)     (CL1-A, 1,0) 5678  C8..S-VOL   PAIR    Never, ----   64      -
Figure 4.20 Hitachi TrueCopy Pairdisplay Command Example
# pairdisplay -g oradb
Group Pair Vol(L/R) (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Fence, Seq#,  P-LDEV# M
oradb oradb1(L)     (CL1-A, 1,0)      30053 18..P-VOL   PAIR    Never, 30053  19      -
oradb oradb1(R)     (CL1-D, 1,0)      30053 19..S-VOL   PAIR    Never, ----   18      -
Figure 4.21 ShadowImage/Snapshot Pairdisplay Command Example
[Diagram: cascade configuration for the -m option examples - P-VOL 266 (group oradb, Serial# 30052) is paired with S/P-VOL 268 (Serial# 30053), which cascades to ShadowImage S-VOL 270 (Oradb1, MU#0) and S-VOL 272 (Oradb2, MU#1).]
Display example for -m cas:
# pairdisplay -g oradb -m cas
Group PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb oradev1(L) (CL1-D , 3, 0-0) 30052 266....SMPL ----, ----- ---- -
oradb oradev1(L) (CL1-D , 3, 0) 30052 266....P-VOL COPY, 30053 268 -
oradb1 oradev11(R) (CL1-D , 3, 2-0) 30053 268....P-VOL COPY, 30053 270 -
oradb2 oradev21(R) (CL1-D , 3, 2-1) 30053 268....P-VOL PSUS, 30053 272 W
oradb oradev1(R) (CL1-D , 3, 2) 30053 268....S-VOL COPY, ----- 266 -
Display examples for -m all:
# pairdisplay -g oradb -m all
Group PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb oradev1(L) (CL1-D , 3, 0-0) 30052 266....SMPL ----, ---- ---- -
----- -----(L) (CL1-D , 3, 0-1) 30052 266....SMPL ----, ---- ---- -
----- -----(L) (CL1-D , 3, 0-2) 30052 266....SMPL ----, ---- ---- -
oradb oradev1(L) (CL1-D , 3, 0) 30052 266....P-VOL PAIR, 30053 268 -
oradb1 oradev11(R) (CL1-D , 3, 2-0) 30053 268....P-VOL COPY, 30053 270 -
oradb2 oradev21(R) (CL1-D , 3, 2-1) 30053 268....P-VOL PSUS, 30053 272 W
----- -----(R) (CL1-D , 3, 2-2) 30053 268....SMPL ----, ----- ---- -
oradb oradev1(R) (CL1-D , 3, 2) 30053 268....S-VOL COPY, ----- 266 -

# pairdisplay -d /dev/rdsk/c0t3d0 -l -m all
Group PairVol(L/R) (Port#,TID,LU-M), Seq#, LDEV#..P/S, Status, Seq#, P-LDEV# M
oradb oradev1(L) (CL1-D , 3, 0-0) 30052 266....SMPL ----, ---- ---- -
----- -----(L) (CL1-D , 3, 0-1) 30052 266....SMPL ----, ---- ---- -
----- -----(L) (CL1-D , 3, 0-2) 30052 266....SMPL ----, ---- ---- -
oradb oradev1(L) (CL1-D , 3, 0) 30052 266....P-VOL PAIR, 30053 268 -
Figure 4.22 Pairdisplay -m Example
Output of the pairdisplay command:
„ Group = group name (dev_group) as described in the configuration definition file.
„ Pair Vol(L/R) = paired volume name (dev_name) as described in the configuration definition file. (L) = local host; (R) = remote host.
„ (P,T#,L#) (TrueCopy) = port, TID, and LUN as described in the configuration definition file. For further information on fibre-to-SCSI address conversion, see Appendix C.
„ (Port#,TID,LU-M) (ShadowImage) = port number, TID, LUN, and MU number as described in the configuration definition file.
„ Seq# = serial number of the RAID storage system.
„ LDEV# = logical device number.
„ P/S = volume attribute.
„ Status = status of the paired volume.
„ Fence (TrueCopy only) = fence level.
„ % (TrueCopy only) = copy operation completion, or percent pair synchronization, as follows:
        Hitachi TrueCopy Async    Hitachi TrueCopy Sync    ShadowImage
Vol.    COPY  PAIR  OTHER         COPY  PAIR  OTHER        COPY  PAIR  PVOL_PSUS/  OTHER
                                                                       SVOL_COPY
PVOL    CR    SF    BMP           CR    CR    BMP          CR    CR    BMP         BMP
SVOL    -     SF    BMP           -     CR    BMP          CR    CR    CR          BMP

        UR Status
Volume  COPY  PAIR  PSUS/SSUS (PJNS/SJNS)  OTHER
PVOL    CR    JF    JF                     BMP
SVOL    -     JF    JF                     BMP

CR:  Shows the copy operation rate (identical rate of a pair).
BMP: Shows the identical percentage of the BITMAP of both PVOL and SVOL.
SF:  Shows the sidefile percentage of each CT group, where 100% is the sidefile space on the cache of
     both PVOL and SVOL. The following arithmetic expression uses the High Water Mark (HWM) as a
     percentage of the sidefile space:
     HWM(%) = High water mark(%) / Sidefile space(30 to 70) * 100
JF:  Shows the usage rate of the current journal data, where 100% is the journal data space.
„ P-LDEV# = LDEV number of the partner volume of the pair.
„ M =
  – For P-VOL and "PSUS" state:
    M="W" shows that the S-VOL is suspended with R/W enabled through the pairsplit.
    M="-" shows that the S-VOL is suspended with Read only through the pairsplit.
  – For S-VOL and "SSUS" state:
    M="W" shows that the S-VOL has been altered since entering the SSUS state.
    M="-" shows that the S-VOL has NOT been altered since entering the SSUS state.
  – For "COPY/RCPY/PAIR/PSUE" states:
    M="N" shows that the volume is Read-disabled through the paircreate '-m noread' option.
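For illustration, a minimal monitoring sketch (not from the manual; the group name and polling interval are hypothetical) that uses pairdisplay -fc to wait for the initial copy to finish:

#! /bin/sh
# Minimal sketch: poll the local volume until it leaves the COPY state,
# then show the final status with hexadecimal LDEV numbers.
while pairdisplay -g oradb -l -fc | grep -q COPY
do
    sleep 30
done
pairdisplay -g oradb -fcx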
4.9 Checking Hitachi TrueCopy Pair Currency (Paircurchk)
The CCI paircurchk command checks the currency of the Hitachi TrueCopy secondary
volume(s) by evaluating the data consistency based on pair status and fence level.
Table 4.18 specifies the data consistency for each possible state of a TrueCopy volume. A
paired volume or group can be specified as the target of the paircurchk command. The
paircurchk command assumes that the target is an S-VOL. If the paircurchk command is
specified for a group, the data consistency of each volume in the group is checked, and all
inconsistent volumes are found in the execution log file and displayed. Paircurchk is also
executed as part of the TrueCopy takeover (horctakeover) command (see next section).
Table 4.18 Data Consistency Displayed by the Paircurchk Command

Object Volume                                Currency
Attribute  Status      Fence                 Paircurchk       SVOL_takeover
SMPL       −           −                     To be confirmed  −
P-VOL      −           −                     To be confirmed  −
S-VOL      COPY        −                     Inconsistent     Inconsistent
           PAIR        Data                  OK               OK
                       Status                OK               OK
                       Never                 To be analyzed   To be analyzed
           PAIR/PFUL   Async                 To be analyzed   OK (assumption)
           PSUS/PFUS   Data                  Suspected        Suspected
                       Status                Suspected        Suspected
                       Never                 Suspected        Suspected
                       Async                 Suspected        OK (assumption)
           PSUE/PDUB   Data                  OK               OK
                       Status                Suspected        Suspected
                       Never                 Suspected        Suspected
                       Async                 Suspected        OK (assumption)
           SSWS        Data/Status/Never/    Suspected        OK (assumption)
                       Async
Notes:
1. To be confirmed = It is necessary to check the object volume, since it is not the secondary
volume.
2. Inconsistent = Data in the volume is inconsistent because it was being copied.
3. OK (assumption) = Mirroring consistency is not assured, but as S-VOL of Hitachi TrueCopy
Async/UR, the sequence of write data is ensured.
Figure 4.23 shows an example of the paircurchk command for a group. Table 4.20 lists the specific error code for the paircurchk command.

# paircurchk -g oradb
Group Pair vol Port  targ# lun# LDEV# Volstatus Status Fence  To be...
oradb oradb1   CL1-A 1     5    145   S-VOL     PAIR   NEVER  Analyzed
oradb oradb2   CL1-A 1     6    146   S-VOL     PSUS   STATUS Suspected
Figure 4.23 Paircurchk Command Example
Table 4.19 Paircurchk Command Parameters

Parameter     Value
Command Name  paircurchk
Format        paircurchk { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] |
              -d[g] <seq#> <LDEV#> [MU#] | -nomsg }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the paircurchk command enter the interactive mode. The -zx option watches for the HORCM while in the interactive mode; when this option detects a HORCM shutdown, the interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF] and is used to specify the instance# of HORCM.
-g <group>: Specifies a group name defined in the configuration definition file. The command is executed
for the specified group unless the -d <pair Vol> option is specified.
-d <pair Vol>: Specifies paired logical volume name defined in the configuration definition file. When this
option is specified, the command is executed for the specified paired logical volume.
-d[g] <raw_device> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified raw_device, and if the specified raw_device is contained in the group, the target volume is
executed as the paired logical volume (-d) or group (-dg). This option is effective without specification of “-g
<group>" option. If the specified raw_device is contained in two or more groups, the command is
executed on the first group.
-d[g] <seq#> <LDEV#> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified LDEV, and if the specified LDEV is contained in the group, the target volume is executed as the
paired logical volume (-d) or group (-dg). This option is effective without specification of “-g <group>“
option. If the specified LDEV is contained in two or more groups, the command is executed on the first
group. The <seq #> <LDEV #> values can be specified in hexadecimal (by addition of “0x “) or decimal
notation.
-nomsg: Suppresses messages to be displayed when this command is executed. This option must be
specified at the beginning of a command argument. The command execution log is not affected by this
option.
Returned values
Normal termination (data is consistent): 0
Abnormal termination: other than 0, refer to the execution logs for error details.
Table 4.20 Specific Error Code for Paircurchk

Category        Error Code  Error Message         Recommended Action                         Value
Volume status,  EX_VOLCUR   S-VOL currency error  Check volume list to see if an operation   225
Unrecoverable                                     was directed to the wrong S-VOL.

Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the command. If
the command failed, the detailed status will be logged in the CCI command log ($HORCC_LOG) (see
Table A.2), even if the user script has no error handling.
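For illustration, a minimal sketch (not from the manual; the group name is hypothetical) that verifies S-VOL currency before allowing a manual takeover, using the 0 = consistent return value described above:

#! /bin/sh
# Minimal sketch: gate a manual takeover on S-VOL data consistency.
if paircurchk -g oradb -nomsg
then
    echo "oradb: S-VOL data is consistent"
else
    echo "oradb: currency error (e.g., EX_VOLCUR = 225) - check the volume list"
    exit 1
fi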
4.10 Performing Hitachi TrueCopy Takeover Operations
The Hitachi TrueCopy takeover command (horctakeover) is a scripted command for
executing several Hitachi TrueCopy operations. The takeover command checks the specified
volume’s or group’s attributes (paircurchk), decides the takeover function based on the
attributes, executes the chosen takeover function, and returns the result. The four Hitachi
TrueCopy takeover functions designed for HA software operation are (see section 4.10.1):
takeover-switch, swap-takeover, PVOL-takeover, and SVOL-takeover. A paired volume or a
group can be specified as the target of the TrueCopy takeover command. If SVOL-takeover is
specified for a group, the data consistency check is executed for all volumes in the group,
and all inconsistent volumes are found in the execution log file and displayed (same as
paircurchk command).
The takeover command allows swapping of the primary and secondary volumes, so that if
the primary or secondary volume is switched due to a server error or package transfer,
duplex operations can be continued using the reversed volumes. When control is handed
over to the current node, swapping the volumes again eliminates the need to copy them.
The takeover command also allows the secondary volume to be separated for disaster
recovery operations.
Table 4.21 lists and describes the horctakeover command parameters and returned values.
Table 4.22 lists and describes the error codes for the horctakeover command.
Table 4.21 Horctakeover Command Parameters

Parameter     Value
Command Name  horctakeover
Format        horctakeover { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] |
              -d[g] <seq#> <LDEV#> [MU#] | -S | -l | -t <timeout> | -nomsg }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the horctakeover command enter the interactive mode. The -zx option watches for the HORCM while in the interactive mode; when this option detects a HORCM shutdown, the interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF] and is used to specify the instance# of HORCM.
-g <group>: Specifies a group name defined in the configuration definition file. The command is
executed for the specified group unless the -d <pair Vol> option is specified.
-d <pair Vol>: Specifies paired logical volume name defined in the configuration definition file. When this
option is specified, the command is executed for the specified paired logical volume.
-d[g] <raw_device> [MU#]: Searches a group on the configuration definition file (local instance) for the
specified raw_device, and if the specified raw_device is contained in the group, the target volume is
executed as the paired logical volume (-d) or group (-dg). This option is effective without specification of
"-g <group>" option. If the specified raw_device is contained in two or more groups, the command is
executed on the first group.
-d[g] <seq#> <LDEV#> [MU#]: Searches a group on the configuration definition file (local instance) for
the specified LDEV, and if the specified LDEV is contained in the group, the target volume is executed as
the paired logical volume (-d) or group (-dg). This option is effective without specification of “-g <group>“
option. If the specified LDEV is contained in two or more groups, the command is executed on the first
group. The <seq #> <LDEV #> values can be specified in hex (by addition of “0x”) or decimal notation.
-S: Selects and executes SVOL-takeover. The target volume of the local host must be an S-VOL. If this
option is specified, then the following “-l” option is invalid.
-l: Enables read and write to the primary volume(s) by a local host only without a remote host, and
executes PVOL-takeover when the primary volume cannot be used because it is fenced (fence = DATA or
STATUS, state = PSUE or PDUB, or PSUE or PDUB volume is contained in the group). If the primary
volume can be accessed, nop-takeover is executed. The target volume of the local host must be a P-VOL.
-t <timeout>: Must be specified for async volumes only, ignored for sync. Specifies the maximum time to
wait (in seconds) for swap-takeover and SVOL-takeover operation to synchronize the P-VOL and S-VOL.
If this timeout occurs, the horctakeover command fails with EX_EWSTOT. To avoid timeout, set this value
less than or equal to the start-up timeout value of the HA Control Script.
-nomsg: Suppresses messages to be displayed when this command is executed. This option must be
specified at beginning of a command argument. The command execution log is not affected by this option.
Returned values
Normal termination:
0: Nop-takeover (no operation).
1: Swap-takeover was successfully executed.
2: SVOL-takeover was successfully executed.
3: PVOL-SMPL-takeover was successfully executed.
4: PVOL-PSUE-takeover was successfully executed. (This value depends on the microcode level.)
5: SVOL-SSUS-takeover was successfully executed. (This value depends on the microcode level.)
Abnormal termination: other than 0-5, refer to the execution logs for error details.
Table 4.22 Specific Error Codes for Horctakeover

Category       Error Code  Error Message                    Recommended Action                            Value
Volume status  EX_ENQVOL   Unmatched volume status within   Confirm status using pairdisplay command.     236
                           the group                        Make sure all volumes in the group have the
                                                            same fence level and volume attributes.
               EX_INCSTG   Inconsistent status in group     Confirm pair status using pairdisplay.        229
               EX_EVOLCE   Pair Volume combination error    Confirm pair status using pairdisplay, and    235
                                                            change combination of volumes.
               EX_VOLCUR   S-VOL currency error             Check volume list to see if an operation      225
                                                            was directed to the wrong S-VOL.
               EX_VOLCUE   Local Volume currency error      Confirm pair status of the local volume.      224
Unrecoverable  EX_VOLCRE   Local and Remote Volume          Confirm pair status of remote and local       223
                           currency error                   volumes using pairdisplay command.
Timer          EX_EWSTOT   Timeout waiting for specified    Increase timeout value using -t option.       233
(Recoverable)              status

Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the command. If
the command failed, the detailed status will be logged in the CCI command log ($HORCC_LOG) (see
Table A.2), even if the user script has no error handling.
Recovery from EX_EWSTOT: If horctakeover failed with [EX_EWSTOT], recover as follows:
1. Wait until the SVOL state becomes "SVOL_PSUS" by using the return code of the "pairvolchk -g <group> -ss" command, and try the start-up of the HA Control Script again.
2. Make an attempt to re-synchronize the original PVOL based on the SVOL, using "pairresync -g <group> -swaps -c <size>" for a fast failback operation.
If this pairresync operation fails with [EX_CMDRJE] or [EX_CMDIOE], the cause is an ESCON link down and/or site failure.
If this operation fails, the HA Control Script reports the following message: "After a recovery from failure, please try 'pairresync -g <group> -swaps -c <size>' command."
To avoid the above recovery steps, the timeout value should be greater than (or equal to) the start-up timeout value for the HA Control Script.
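For illustration, a minimal sketch of these recovery steps (not from the manual; the group name, copy pace, and polling interval are hypothetical):

#! /bin/sh
# Minimal sketch: wait for SVOL_PSUS (pairvolchk -ss return code 54 for
# TrueCopy Async, per section 4.7), then resynchronize with -swaps.
while true
do
    pairvolchk -g oradb -ss -nomsg
    [ $? -eq 54 ] && break
    sleep 10
done
pairresync -g oradb -swaps -c 15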
4.10.1 Horctakeover Command Functions
4.10.1.1 Takeover-Switch Function
The control scripts activated by the HA software are used the same way by all nodes of a cluster; they do not discriminate between primary and secondary volumes. The takeover command, when activated by a control script, checks the combination of volume attributes on the local and remote nodes and selects the TrueCopy takeover action for each combination of attributes (Table 4.23).
Table 4.23 Volume Attributes and Takeover Actions

Local Node (Takeover Node)                    Remote Node
Volume Attribute   Fence Level and Status     Volume Attribute   P-VOL Status   Takeover Action
SMPL               -                          SMPL [1]           -              NG
                   -                          P-VOL [2]          -              Nop-Takeover
                   -                          S-VOL [3]          -              Volumes not conform
                   -                          Unknown [4]        -              NG
P-VOL (primary)    Fence = Data or Status     SMPL               -              NG
                   and Status = PSUE or       P-VOL              -              Volumes not conform
                   PDUB, or MINAP = 0         S-VOL              -              PVOL-Takeover
                                              Unknown Status     -              PVOL-Takeover
                                              (e.g., LAN down)
                   Fence = Never,             SMPL               -              NG
                   Status = others            P-VOL              -              Volumes not conform
                                              S-VOL              -              Nop-Takeover
                                              Unknown Status     -              Nop-Takeover
                                              (e.g., LAN down)
S-VOL (secondary)  Status = SSWS [5]          Any                -              Nop-Takeover
                   (after SVOL_SSUS-takeover)
                   Others                     SMPL               -              Volumes not conform
                                              P-VOL              PAIR or PFUL   Swap-Takeover
                                                                 Others         SVOL-Takeover
                                              S-VOL              -              Volumes not conform
                                              Unknown            -              SVOL-Takeover
Notes:
1. NG = The takeover command is rejected, and the operation terminates abnormally.
2. Nop-Takeover = The takeover command is accepted, but no operation is performed.
3. Volumes not conform = The volumes are not in sync, and the takeover command
terminates abnormally.
4. Unknown = The remote node attribute is unknown and cannot be identified. The remote
node system is down or cannot communicate.
5. SSWS = Suspend for Swapping with SVOL side only. The SSWS state is displayed as SSUS
(SVOL_PSUS) by ALL commands except the “-fc” option of the pairdisplay command.
4.10.1.2 Swap-Takeover Function
When the P-VOL status of the remote node is PAIR and the S-VOL data is consistent, it is
possible to swap the primary and secondary volumes. The swap-takeover function is used by
the HA control script when a package is manually moved to an alternate data center while
all hardware is operational. Swap-takeover can be specified for a paired volume or a group.
The swap-takeover function internally executes the following commands to swap the primary
and secondary volumes:
1. Execute Suspend for Swapping for the local volume (S-VOL). If this step fails, swap-
takeover is disabled and an error is returned.
2. Execute Resync for Swapping to switch to the primary volume: the local volume (S-VOL) is swapped to become the NEW_PVOL, and the NEW_SVOL is resynchronized based on the NEW_PVOL. As for copy tracks, if the remote host is known, the command uses the value of the P-VOL specified at paircreate time. If the remote host is unknown, the command uses the default number of tracks (three). If this step fails, swap-takeover returns as SVOL-SSUS-takeover, and the local volume (S-VOL) is maintained in the SSUS(PSUS) state, which permits and keeps track of write I/Os using a bitmap for the S-VOL. This special state is displayed as SSWS using the -fc option of the pairdisplay command.
Note: The swap-takeover function does not use SMPL or No Copy mode for swapping, in order to guarantee mirror consistency; this is included as a function of SVOL-takeover.
Note for Hitachi TrueCopy Async: The CCI software on the S-VOL side will issue a Suspend
for Swapping to the S-VOL side RAID storage system. Non-transmitted data which remains in
the FIFO queue (sidefile) of the P-VOL will be copied to the S-VOL, and a Resync for
Swapping operation will be performed (after the copy process). The swap operation is
required to copy non-transmitted P-VOL data within a given timeout value (specified by the
-t <timeout> option).
4.10.1.3 SVOL-Takeover Function
The SVOL-takeover function allows the takeover node to use the secondary volume (except
in COPY state) in SSUS(PSUS) state (i.e., reading and writing are enabled), on the assumption
that the remote node (possessing the primary volume) cannot be used. The data consistency
of the Hitachi TrueCopy SVOL is evaluated by its pair status and fence level (same as paircurchk; refer to Table 4.18). If the primary and secondary volumes are not consistent, the SVOL-
takeover function fails. If primary and secondary volumes are consistent, the SVOL-takeover
function attempts to switch to the primary volume using Resync for Swapping. If successful,
the SVOL-takeover function returns Swap-takeover as the return value of the horctakeover
command. If not successful, the SVOL-takeover function returns SVOL-SSUS-takeover as the
return value of the horctakeover command. In case of a host failure, Swap-takeover is
returned. In case of an ESCON/FC or P-VOL site failure, SVOL-SSUS-takeover is returned.
SVOL-takeover can be specified for a paired volume or a group. If the SVOL-takeover is
specified for a group, a data consistency check is executed for all volumes in the group, and
all inconsistent volumes are displayed (see example in Figure 4.24).
Group  Pair vol        Port  targ# lun# LDEV# Volstatus Status Fence  To be...
oradb1 /dev/dsk/hd001  CL1-A 1     5    145   S-VOL     PAIR   NEVER  Analyzed
oradb1 /dev/dsk/hd002  CL1-A 1     6    146   S-VOL     PSUS   STATUS Suspected
Figure 4.24 Display of Inconsistent Volumes for SVOL-Takeover of Group
Note for Hitachi TrueCopy Async/UR: The CCI software on the S-VOL side will issue a Suspend for Swapping to the S-VOL-side RAID storage system. Non-transmitted data of the P-VOL will be copied to the S-VOL side, and a Resync for Swapping operation will be performed (after the copy process). In case of a host failure, this data synchronization operation will be accomplished, and the SVOL-takeover function will return Swap-takeover after attempting a Resync for Swapping. In case of an ESCON/FC or P-VOL site failure, this data synchronization operation may fail. Even so, the SVOL-takeover function will do a Suspend for Swapping and enable the S-VOL to be used. As a result, this function returns SVOL-SSUS-takeover. Through this behavior, when SVOL-takeover returns SVOL-SSUS-takeover, you can judge that the non-transmitted data of the P-VOL was not transmitted completely.
The SVOL-takeover operation is required to copy the non-transmitted P-VOL data within a given timeout value (specified by the -t <timeout> option). If the timeout occurs (before SVOL-takeover has completed all S-VOL changes to the SSWS state), the horctakeover command fails with EX_EWSTOT. Therefore this timeout value should be greater than (or equal to) the start-up timeout value for the HA Control Script.
If the horctakeover command fails due to a timeout, try to recover as follows:
1. Wait until the S-VOL state becomes SSWS (use the pairdisplay -g <group> -l -fc
   command), and then retry the start-up of the HA Control Script.
2. Attempt to resynchronize the original P-VOL based on the S-VOL using pairresync
   -g <group> -swaps -c <size> for Fast Failback Performance. If this operation fails with
   [EX_CMDRJE] or [EX_CMDIOE], the cause is an ESCON/FC link down and/or a site failure.
   After recovery from the failure, try this command again.
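As a minimal sketch of this recovery sequence (the group name oradb and the track size 15
are hypothetical values):

# pairdisplay -g oradb -l -fc        (repeat until the S-VOL status shows SSWS)
# pairresync -g oradb -swaps -c 15   (resynchronize the original P-VOL based on the S-VOL)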
4.10.1.4 PVOL-Takeover Function
The PVOL-takeover function releases the pair state of a group in order to maintain the
consistency of the secondary volume at the point the horctakeover command was accepted,
when the primary volume is fenced (fence level is “data” or “status” and the volume is in
PSUE or PDUB state, or a PSUE or PDUB volume is contained in the group). This function
allows the takeover node to use the primary volume (i.e., reading and writing are enabled),
on the assumption that the remote node (possessing the secondary volume) cannot be used.
PVOL-takeover can be specified for a paired volume or a group.
The PVOL-takeover function executes the following two operations:
 PVOL-PSUE-takeover: Changes the primary volume to the suspend (PSUE, PSUS) state,
  which enables write I/Os to all primary volumes of the group. The action of
  PVOL-PSUE-takeover can cause PSUE and/or PSUS to be intermingled in the group. This
  intermingled pair status is PSUE as the group status; the pairvolchk command gives
  priority to PSUE (PDUB) over PSUS when returning the group status. This special state
  returns to the original state when the pairresync command is issued.
 PVOL-SMPL-takeover: Changes the primary volume to the simplex (SMPL) state.
  PVOL-takeover executes PVOL-PSUE-takeover before PVOL-SMPL-takeover. If the
  PVOL-PSUE-takeover function fails, the PVOL-SMPL-takeover function is executed.
Note for Hitachi TrueCopy Async/UR: PVOL-takeover is not executed. It becomes
Nop-takeover, because the fence level for TrueCopy Asynchronous is Async, which is treated
the same as Never.
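To return a group from the special PVOL-PSUE-takeover state to its original state, the
pairresync command referenced above can be issued against the group; as a minimal sketch
(the group name oradb is hypothetical):

# pairresync -g oradb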
4.10.2 Applications of the Horctakeover Command
The basic Hitachi TrueCopy commands (takeover, pair creation, pair splitting, pair
resynchronization, event waiting) can be combined to enable recovery from a disaster,
backup of paired volumes, and many other operations (e.g., restoration of paired volumes).
Figure 4.25 shows the flow of starting operations on a UNIX server at the secondary site
using the horctakeover command.
Figure 4.25 illustrates a takeover executed at the secondary site, activated manually, from
the remote console, or from HA software, with the following steps (executed from the user's
shell script and logged by the HORC Manager):
1. horctakeover: Communication with the primary site is disabled; accordingly,
   SVOL-takeover is executed as SVOL-SSUS-takeover. The secondary volume is R/W-enabled
   (SMPL or SVOL_SSUS).
2. vgchange -a e: The LVM is activated in exclusive mode, and the S-VOL server gets the
   right to use the volume. The S-VOL accepts R/W.
3. fsck: The conformability of the file system is checked.
4. mount: The file system is mounted for R/W.
5. Server software activation: The server DB software is activated, and the database is
   rolled back and rolled forward.
6. Application activation: The user program is activated.
Figure 4.25 Application/Example of TrueCopy Takeover (UNIX-based System)
Figure 4.26 illustrates the equivalent Windows-based takeover, activated manually or from
HA software, with the following steps (executed from the user's script and logged by the
HORC Manager):
1. horctakeover: Communication with the primary site is disabled; accordingly,
   SVOL-takeover is executed as SVOL-SSUS-takeover. The secondary volume is R/W-enabled
   (SMPL or SVOL_SSUS).
2. -x mount: The file system is mounted for R/W using the CCI subcommand.
3. chkdsk: The conformability of the file system is checked.
4. Server software activation: The server DB software is activated, and the database is
   rolled back and rolled forward.
5. Application activation: The user program is activated.
Figure 4.26 Application/Example of TrueCopy Takeover (Windows-based System)
4.11 Displaying Configuration Information
4.11.1 Raidscan Command
The raidscan command displays configuration and status information for the specified
port/TID(s)/device(s). The information is acquired directly from the storage system (not
from the configuration definition file). Table 4.24 lists and describes the raidscan
command parameters.
Note: If sync has failed, you need to confirm the following conditions:
 The logical and physical drives designated as the objects of the sync command are not
  open to any applications. For example, confirm that Explorer is not pointing at the
  target drive; if it is, the target drive will be open.
 The sync command does not ignore errors detected on the NT file system, so sync executes
  successfully only in the normal (no-error) case on the NT file system. For example,
  confirm via the Event Viewer that the target drive has no failure on the system. If it
  does, you must reboot the system or delete the partition and reconfigure the target drive.
Table 4.24 Raidscan Command Parameters

Parameter      Value
Command Name   raidscan
Format         raidscan { -h | -q | -z | -p <port> [hgrp] | -pd[g] <raw_device> | -s <Seq#> | -t <targ> | -l <lun> | [ -f[xfgde] ] | -CLI | -find[g] [op] [MU#] | -pi <strings> | -m <MU#> }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the raidscan command enter interactive mode.
The -zx option monitors the HORCM daemon in interactive mode; when this option detects a HORCM
shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-p <port> [hgrp]: Specifies the port ID of the port to be scanned. Valid ports are CL1-A to CL1-R and
CL2-A to CL2-R (excluding CL1-I, CL1-O, CL2-I, CL2-O)
For USP V/VM: CL3-a to CL3-r, or CLG-a to CLG-r for the expanded port.
For TagmaStore USP: CL3-a to CL3-r, or CLG-a to CLG-r for the expanded port.
For TagmaStore NSC: CL3-a to CL3-h, or CL8-a to CL8-h for the expanded port.
For 9900V: CL3-a to CL3-r, or CL4-a to CL4-r for the expanded port.
The port is not case sensitive (e.g. CL1-A= cl1-a= CL1-a= cl1-A, CL3-a= CL3-A= cl3-a= cl3-A).
This option must be specified if the “-find” or “-pd <raw_device>” option is not specified.
[hgrp] is specified to display only the LDEVs mapped to a host group on a port (9900V and later).
-pd[g] <raw_device>: Specifies the raw device name. This option finds the Seq# and port name of the
storage system to which the specified device is connected, and scans the port of the storage system
corresponding to the unit ID derived from that Seq#. This option must be specified if the “-find”
option is not specified. If this option is specified, the following -s <Seq#> option is invalid.
The -pdg form shows a LUN in the host view by finding a host group (9900V and later).
-s <Seq#>: Used to specify the Seq# (serial#) of the storage system when the unit ID cannot be
specified in the “-p <port>” option. This option scans the port specified by the “-p <port>” option
of the storage system corresponding to the unit ID derived from the Seq#. If this option is
specified, the unit ID contained in the “-p <port>” option is invalid.
-t <targ>: Specifies a target ID (0 to 15) of the specified port. If this option is not specified, the command
applies to all target IDs.
-l <lun>: Specifies a LUN (0 to 7) of the specified target ID. If this option is not specified, the command
applies to all LUNs. If this option is specified, the TID must also be specified.
-f or -ff: Specifies display of the volume type as a display column. If this is specified, the -f[g][d] options are invalid.
-fx: Displays the LDEV number in hexadecimal notation.
-fg: Specifies display of the group name as a display column. This option searches the configuration
definition file (local CCI instance) for a group containing the scanned LDEV, and displays the
group_name when the scanned LDEV is contained in a group. If this option is specified, the -f[f]
option is not allowed and the -f[d] option is invalid.
-fd: Displays the Device_File that was registered to the group of the HORCM in the output, based on
the LDEV (as defined in the local instance configuration definition file). If this option is
specified, the -f[f][g] options are not allowed.
-fe: Displays the serial# (E-Seq#) and LDEV# (E-LDEV#) of only the external LUNs mapped to the LDEV.
If no external LUN is mapped to the LDEV on the specified port, this option displays nothing. Also,
if this option is specified, the -f[f][g][d] options are not allowed. Display example:
# raidscan -p cl1-a-0 -fe -CLI
PORT#   /ALPA/C TID# LU# Seq#  Num LDEV# P/S  Status Fence E-Seq# E-LDEV#
CL1-A-0 ef    0    0  48 62468   2   256 SMPL -      -      30053      17
CL1-A-0 ef    0    0  49 62468   2   272 SMPL -      -      30053      23
CL1-A-0 ef    0    0  50 62468   1   288 SMPL -      -      30053      28
-CLI: Specifies display for a command line interface (CLI). This option displays each item at a
fixed column position and displays a single header. The delimiters between columns are spaces or
hyphens (-). Display example:
Port# TargetID# Lun# Seq#  Num LDEV# P/S   Status Fence P-Seq# P-LDEV#
CL1-C         1    0 30053   1   274 SMPL  -      -          -       -
CL1-C         2    2 30053   1   260 P-VOL PAIR   NEVER  30053     268
CL1-C         2    3 30053   1   261 P-VOL PAIR   NEVER  30053     269
-m <MU#>: Displays only the cascading mirror specified by the -m <MU#> option. To display the
cascading mirror descriptor for UR, -m <MU#> must be specified in the TrueCopy or ShadowImage
command environment. To display all cascading mirror descriptors, specify “-m all”.
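For illustration only (the port cl1-a is a hypothetical example):

# raidscan -p cl1-a -m all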
-pi <strings>: Replaces the strings normally read from STDIN by the -find option with “<strings>”.
If this option is specified, the -find option ignores any raw device file provided via STDIN and
uses <strings> as input. <strings> must be no longer than 255 characters.
-find [op] [MU#]: Executes the specified [op] using a raw device file provided via STDIN. If the
-pi <strings> option is specified, this option does not read STDIN; <strings> is used as the input instead.
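As a sketch of the -pi form (the device file name is a hypothetical example):

# raidscan -pi /dev/rdsk/c0t0d4 -find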
# raidscan -p cl1-r
Port#, TargetID#, Lun# Num(LDEV#...) P/S,  Status, Fence, LDEV#, P-Seq# P-LDEV#
CL1-R,        15,    7 5(100,101...) P-VOL PAIR    NEVER    100,   5678     200
CL1-R,        15,    6 5(200,201...) SMPL  ----    ----    ----,   ----    ----

# raidscan -p cl1-r -f
Port#, TargetID#, Lun# Num(LDEV#...) P/S,  Status, Fence, LDEV#, Vol.Type
CL1-R,        15,    7 5(100,101...) P-VOL PAIR    NEVER    100,  OPEN-3
CL1-R,        15,    6 5(200,201...) SMPL  ----    ----    ----,  OPEN-3

# raidscan -pd /dev/rdsk/c0t15d7 -fg
Port#, TargetID#, Lun# Num(LDEV#...) P/S,  Status, Fence, LDEV#, Group
CL1-R,        15,    7 5(100,101...) P-VOL PAIR    NEVER    100,  oradb
CL1-R,        15,    6 5(200,201...) SMPL  ----    ----    ----,  oradb1
Specified device is LDEV# 0100.
Figure 4.27 Raidscan Command Examples for SCSI Ports
# raidscan -p cl1-r
PORT#/ALPA/C,  TID#, LU# Num(LDEV#...) P/S,  Status, Fence, LDEV#, P-Seq# P-LDEV#
CL1-R/ ce/15,   15,    7 5(100,101..)  P-VOL PAIR    NEVER    100,   5678     200
CL1-R/ ce/15,   15,    6 5(200,201..)  SMPL  ----    ----    ----,   ----   -----

# raidscan -p cl1-r -f
PORT#/ALPA/C,  TID#, LU# Num(LDEV#...) P/S,  Status, Fence, LDEV#, Vol.Type
CL1-R/ ce/15,   15,    7 5(100,101..)  P-VOL PAIR    NEVER    100,  OPEN-3
CL1-R/ ce/15,   15,    6 5(200,201..)  SMPL  ----    ----    ----,  OPEN-3
Figure 4.28 Raidscan Command Examples for Fibre-Channel Ports
# ls /dev/rdsk/* | raidscan -find
DEVICE_FILE       UID S/F PORT  TARG LUN SERIAL LDEV PRODUCT_ID
/dev/rdsk/c0t0d4    0  S  CL1-M    0   4  31168  216 OPEN-3-CVS-CM
/dev/rdsk/c0t0d1    0  S  CL1-M    0   1  31168  117 OPEN-3-CVS
/dev/rdsk/c1t0d1    -  -  CL1-M    -   -  31170  121 OPEN-3-CVS
Figure 4.29 Example of -find Option for Raidscan
Output of the raidscan command:
 SCSI: Port#, TargetID#, Lun# = port ID, TID, and LU number (LUN)
 Fibre: Port#, ALPA/C, TID#, LU# = port ID, arbitrated loop physical address, TID, LUN. For
  further information on fibre-to-SCSI address conversion, see Appendix C.
Note: For ShadowImage, raidscan displays the MU# for each LUN (e.g., LUN 7-0, 7-1, 7-2).
 Num(LDEV#…) = number of LDEVs and LDEV IDs for the LUSE volume
 P/S = volume attribute
 Status = status of the paired volume
 Fence (TrueCopy only) = fence level
 P-Seq# = serial # of the storage system which contains the partner volume of the pair
 P-LDEV# = LDEV number of the partner volume of the pair
 Vol.Type = logical unit (LU) type (e.g., OPEN-V, OPEN-9)
 Group = group name (dev_group) as described in the configuration definition file
 UID = unit ID for a multiple storage system configuration. If UID is displayed as ‘-’,
  the command device for HORCM_CMD is not found.
 S/F = whether the port is SCSI or fibre
 PORT = RAID storage system port number
 TARG = target ID (converted by the fibre conversion table)
 LUN = LUN (converted by the fibre conversion table)
 SERIAL = production (serial#) number of the RAID storage system
 LDEV = LDEV# within the RAID storage system
 PRODUCT_ID = product-id field in the STD inquiry page
4.11.2 Raidar Command
The raidar command displays configuration, status, and I/O activity information for the
specified port/TID(s)/device(s) at the specified time interval. The configuration information
is acquired directly from the storage system (not from the configuration definition file).
Figure 4.30 shows an example of the raidar command and its output.
Note: The I/O activity of a TrueCopy S-VOL in the COPY or PAIR state includes TrueCopy
remote I/Os (update copy operations) in addition to host-requested I/Os. The I/O activity of
a ShadowImage S-VOL in the COPY or PAIR state includes only host-requested I/Os
(ShadowImage update copy operations are excluded). The I/O activity of a P-VOL or simplex
volume includes only host-requested I/Os. If the status changes to SMPL while S-VOL (COPY,
PAIR) I/O activity is being monitored, the I/O activity of the intervening period is
reported under the SMPL state.
Table 4.25 Raidar Command Parameters

Parameter      Value
Command Name   raidar
Format         raidar { -h | -q | -z | -p <port> <targ> <lun> | -pd[g] <raw_device> | -s [interval] [count] }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the raidar command enter interactive mode.
The -zx option monitors the HORCM daemon in interactive mode; when this option detects a HORCM
shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-p <port> <targ> <lun> [mun]....: Monitors one or more (up to 16) devices at a time.
<port>: Specifies the port to be reported: CL1-A to CL1-R and CL2-A to CL2-R (excluding CL1-I,
CL1-O, CL2-I, CL2-O).
For USP V/VM: CL3-a to CL3-r, or CLG-a to CLG-r for the expanded port.
For TagmaStore USP: CL3-a to CL3-r, or CLG-a to CLG-r for the expanded port.
For TagmaStore NSC: CL3-a to CL3-h, or CL8-a to CL8-h for the expanded port.
For 9900V: CL3-a to CL3-r, or CL4-a to CL4-r for the expanded port.
The port is not case sensitive (e.g. CL1-A= cl1-a= CL1-a= cl1-A, CL3-a= CL3-A= cl3-a= cl3-A).
<targ>: Specifies the SCSI TID (0 to 15) of the specified port (see Appendix C for fibre address
conversion information).
<lun>: Specifies the LUN (0 to 7) on the specified TID.
[mun]: Specifies the MU number of the specified LUN (ShadowImage only).
-pd[g] <raw_device>: Allows designation of an LDEV by raw device file name.
-pdg option is used to show a LUN on the host view by finding a host group (9900V and later).
-s [interval] or -sm [interval]: Designates the time interval.
-s: Interprets the time interval as seconds.
-sm: Interprets the time interval as minutes.
[interval]: Designates the time interval value (1 to 60). If not specified, the default interval (3) is used.
[count]: Designates the number of repeats. When omitted, this command repeats until CNTL-C.
# raidar -p cl1-a 15 6 -p cl1-b 14 5 -p cl1-a 12 3 -s 3
TIME[03]  PORT    T  L VOL   STATUS  IOPS  HIT(%) W(%) IOCNT
13:45:25  -       -  - -     -       -     -      -    -
13:45:28  CL1-A  15  6 SMPL  ---     200.0 80.0   40.0 600
          CL1-B  14  5 P-VOL PAIR    133.3 35.0   13.4 400
          CL1-A  12  3 P-VOL PSUS    200.0 35.0   40.6 600
Figure 4.30 Raidar Command Example
Output of the raidar command:
 IOPS = # of I/Os (read/write) per second (total I/O rate)
 HIT(%) = hit rate for read I/Os (read hit rate)
 W(%) = ratio of write I/Os to total I/Os (percent writes)
 IOCNT = number of write and read I/Os
4.11.3 Raidqry Command
The raidqry command (RAID query) displays the configuration of the connected host and
storage system. Figure 4.31 shows examples of the raidqry command and its output.

# raidqry -l
No Group Hostname HORCM_ver   Uid Serial# Micro_ver   Cache(MB)
 1 ---   HOSTA    01-22-03/02   0   30053 50-04-00/00       256
 1 ---   HOSTA    01-22-03/02   1   30054 50-04-00/00       256

# raidqry -r oradb
No Group Hostname HORCM_ver   Uid Serial# Micro_ver   Cache(MB)
 1 oradb HOSTA    01-22-03/02   0   30053 50-04-00/00       256
 2 oradb HOSTB    01-22-03/02   0   30053 50-04-00/00       256
 1 oradb HOSTA    01-22-03/02   1   30054 50-04-00/00       256
 2 oradb HOSTB    01-22-03/02   1   30054 50-04-00/00       256

# raidqry -l -f
No Group Floatable Host HORCM_ver   Uid Serial# Micro_ver   Cache(MB)
 1 ---   FH001          01-22-03/02   0   30053 50-04-00/00       256
Figure 4.31 Raidqry Command Examples
Output of the raidqry command:
 No: This column shows the order when the group name (dev_group) described in the
  configuration definition file has multiple remote hosts.
 Group: When the -r option is used, this column shows the group name (dev_group)
  described in the configuration definition file.
 Floatable Host: When the -f option is used, this column shows the host name (ip_address)
  described in the configuration definition file. Up to 30 host names can be displayed.
  The -f option interprets the host name as utilizing a floatable IP for a host.
 HORCM_ver: This column shows the version of the HORC Manager on the local or remote
  host. The -l option specifies the local host; the -r option specifies the remote host.
 Uid Serial# Micro_ver: This column shows the unit ID, serial number, and (DKCMAIN)
  microcode version of the storage system connected to the local or remote host. The -l
  option specifies the local host; the -r option specifies the remote host.
 Cache(MB): Shows the logical cache capacity (in MB) of the storage system connected to
  the local or remote host. The -l option specifies the local host; the -r option
  specifies the remote host.
Table 4.26 Raidqry Command Parameters

Parameter      Value
Command Name   raidqry
Format         raidqry { -h | -q | -z | -l | -r <group> | [ -f ] | -g }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the raidqry command enter interactive mode.
The -zx option monitors the HORCM daemon in interactive mode; when this option detects a HORCM
shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-l: Displays the configuration information for the local host and the local RAID storage system.
-r <group>: Displays the configuration information for the remote host and the remote storage system
which contains the specified group.
-f: Displays the hostname (ip_address) as specified in the configuration definition file. Use this option if
“floatable IP address” is used for the hostname (ip_address) in the configuration file.
-g: Displays a list of the group names (dev_group) described in the configuration file of the local
host (instance).
# raidqry -g
GNo Group RAID_type IV/H IV/M MUN/H MUN/M
  1 ora   HTC_RAID    12    9     4    64
  2 orb   XP_RAID     12    9     4    64
  3 orc   HTC_DF       8    6     1     1
 GNo: Shows the order of the group names (dev_group) described in the configuration
  definition file.
 Group: Shows the group name (dev_group) described in the configuration definition file.
 RAID_type: Shows the type of RAID configured for the group.
 IV/H: Shows the interface version for HORC that maintains consistency in the group; this
  is used for maintenance.
 IV/M: Shows the interface version for HOMRCF that maintains consistency in the group;
  this is used for maintenance.
 MUN/H: Shows the maximum number of MUs for HORC/UR that maintain consistency in the group.
 MUN/M: Shows the maximum number of MUs for HOMRCF that maintain consistency in the group.
4.12 Performing Data Protection Operations
CCI supports the following three commands to set and verify the parameters for protection
checking (Data Retention Utility, Database Validator) on each LU. The protection checking
functions are available on the USP V/VM, USP/NSC, and Lightning 9900V (not 9900).
 raidvchkset
 raidvchkdsp
 raidvchkscan
For further information on data protection operations, see section 2.7.
4.12.1 Raidvchkset Command
The raidvchkset command sets the parameters for validation checking of the specified
volumes, and can also be used to turn off all validation checking without specifying [type].
The unit of validation checking is based on the group in the CCI configuration definition
file. Table 4.27 lists and describes the raidvchkset command parameters.
Note: This command is controlled as a protection facility. The command can be rejected with
EX_ERPERM by the connectivity checking between CCI and the RAID storage system.
raidvchkset -g oralog -vt redo8         ← Sets volumes in oralog group as redo log files prior to Oracle9i.
raidvchkset -g oradat -vt data8 -vs 16  ← Sets volumes in oradat group as data files with an Oracle block size of 8 KB.
raidvchkset -g oradat -vt data8 -vs 32  ← Sets volumes in oradat group as data files with an Oracle block size of 16 KB.
raidvchkset -g oralog -vt               ← Releases all checking on volumes in oralog group.
raidvchkset -g oralog -vt rd10g         ← Sets Oracle10g volumes in oralog group as redo log files.
raidvchkset -g oradat -vt rd10g -vs 16  ← Sets Oracle10g volumes in oradat group as data files with a block size of 8 KB.
raidvchkset -g oralog -vg wtd           ← Disables writing to volumes in oralog group.
raidvchkset -g oralog -vg wtd 365       ← Disables writing and sets the retention time (365 days) on volumes in oralog group.
raidvchkset -g oralog -vg               ← Releases all guarding on volumes in oralog group.
Figure 4.32 Raidvchkset Command Examples
Table 4.27 Raidvchkset Command Parameters

Parameter      Value
Command Name   raidvchkset
Format         raidvchkset { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] | -d[g] <seq#> <LDEV#> [MU#] | -nomsg | -vt [type] | -vs <bsize> [slba] [elba] | -vg [type] [rtime] }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the raidvchkset command enter interactive mode.
The -zx option monitors the HORCM daemon in interactive mode; when this option detects a HORCM
shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-g <group>: Specifies a group name written in the configuration definition file.
-d <pair Vol>: Specifies paired logical volume name defined in the configuration definition file. When this
option is specified, the command is executed for the specified paired logical volume.
-d[g] <raw_device> [MU#]: Searches the configuration definition file (local instance) for a group
containing the specified raw_device; if the specified raw_device is contained in a group, the
command is executed on the target volume as the paired logical volume (-d) or group (-dg). This
option is effective without specification of the “-g <group>” option. If the specified raw_device
is contained in two or more groups, the command is executed on the first group.
-d[g] <seq#> <LDEV#> [MU#]: Searches the configuration definition file (local instance) for a group
containing the specified LDEV; if the specified LDEV is in a group, the command is executed on the
target volume as the paired logical volume (-d) or group (-dg). This option is effective without
specification of the “-g <group>” option. If the specified LDEV is contained in two or more groups,
the command is executed on the first group. The <seq#> and <LDEV#> values can be specified in
hexadecimal (by addition of “0x”) or decimal notation.
-nomsg: Suppresses messages to be displayed when this command is executed. It is used to execute
this command from a user program. This option must be specified at the beginning of a command
argument. The command execution log is not affected by this option.
-vt [type]: Specifies the following data types, which assume that the target volumes are used by an
Oracle database. If [type] is not specified, this option disables all checking.
redo8: Sets the validation-checking parameter on the target volumes for Oracle redo log files
(including archive logs) prior to Oracle9i. This option sets <bsize> to 1 (512 bytes) or 2 (1024 bytes).
data8: Sets the validation-checking parameter on the target volumes for Oracle data files (including
control files) prior to Oracle9i.
redo9: Sets the validation-checking parameter on the target volumes for Oracle redo log files
(including archive logs) for Oracle9iR2 or later. This option sets <bsize> to 1 (512 bytes) or 2 (1024 bytes).
data9: Sets the validation-checking parameter on the target volumes for Oracle data files (including
control files) for Oracle9iR2 or later.
In the case of Oracle for Tru64 or Windows, the user must set the parameter
“_HARD_PROTECTION = TRUE” in the init.ora file. If not, the validation parameter must be changed by
using the “-vmf we” option: raidvchkset -vt data9 -vmf we
rd10g: Sets the validation-checking parameter on the target volumes for all Oracle files (including
redo, data, and RMAN backup pieces) for Oracle10gR2 or later. This option sets <bsize> to
1 (512 bytes) or 2 (1024 bytes), and sets the low 5 bits of the DBA for checking regarding CHK-F2.
-vs <bsize> [slba] [elba]: Specifies the data block size of Oracle I/O and a region on a target
volume for validation checking.
<bsize> specifies the data block size of Oracle I/O, in units of 512 bytes. <bsize> can be specified
from 1 (512 bytes) to 64 (32 KB); the effective size for Oracle is also 1-64.
[slba] [elba] specify a region defined between Start_LBA (0-based) and End_LBA on a target volume
for checking, in units of 512 bytes. [slba] and [elba] can be specified in hexadecimal (by addition
of “0x”) or decimal notation.
If this option is not specified, the region for a target volume is set to all blocks (slba=0, elba=0).
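As a sketch of restricting the checked region (the group name, type, and LBA range are
hypothetical values):

# raidvchkset -g oradat -vt data9 -vs 16 0 102400

This would check an 8 KB Oracle block size over LBAs 0 through 102400 only.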
-vg [type]: Specifies the following guard types for the target volumes for the Data Retention
Utility (Open LDEV Guard on 9900V). If [type] is not specified, this option disables all guarding.
inv: The target volumes are concealed from the SCSI Inquiry command by responding “unpopulated volume”.
sz0: The target volumes reply with “SIZE 0” to the SCSI Read Capacity command.
rwd: The target volumes are disabled for reading and writing.
wtd: The target volumes are disabled for writing.
svd: If the target volume is SMPL, it is protected from paircreate (from becoming an S-VOL). If the
target volume is a P-VOL, it is protected from pairresync restore or pairresync swaps(p). If the
target volume is SVOL_PSUS(SSUS), it is protected from pairresync synchronous copy.
[rtime]: Specifies the retention time, in units of days. If [rtime] is not specified, the default
time defined by the storage system is used. The default time is zero in 9900V microcode version
21-08-xx. This option is ignored (default = infinite) in 9900V microcode versions 21-06-xx and 21-07-xx.
This option sets four flags for each guarding type as follows:

type  INQ  RCAP  READ  WRITE
inv    1    1     1     1
sz0    0    1     1     1
rwd    0    0     1     1
wtd    0    0     0     1
Returned values
The command sets one of the following returned values in exit(), which allows users to check the
execution results from a user program.
Normal termination: 0
Abnormal termination: The raidvchkset -vg option returns the following specific error code (see
Table 4.28 below) as well as the generic errors (refer to Table 5.3).
Table 4.28 Specific Error Code for raidvchkset -vg Option

Category                       Error Code  Error Message                              Recommended Action                                 Value
Volume Status (Unrecoverable)  EX_EPRORT   Mode changes denied due to retention time  Please confirm the retention time for the target    208
                                                                                      volume by using the raidvchkscan -v gflag command.
Setting for Oracle H.A.R.D.
Oracle 10g supports ASM (Automatic Storage Management), so users must change the setting
according to whether ASM is used. The USP V/VM and TagmaStore USP/NSC support the settings
shown in Table 4.29.
Table 4.29 Setting H.A.R.D for USP V/VM and TagmaStore USP/NSC

Storage System      Oracle Version  CHKDBA   ASM     Setting Parameter
TagmaStore USP/NSC  9iR2            Disable  -       Same as current setting
                                    Enable   -       -vt redo9/data9 -vbf we bne (VM=9 is fixed)
                    10gR2           Disable  unused  -vt rd10g
                                             used    -vt rd10g -vbf (disable Block# check)
                                    Enable   unused  Impossible (due to being fixed as VM=9)
                                             used    Impossible (due to being fixed as VM=9)
USP V/VM            9iR2            Disable  -       Same as current setting
                                    Enable   -       -vt redo9/data9 -vbf we bne (VM=9 is Val.)
                    10gR2           Disable  unused  -vt rd10g
                                             used    -vt rd10g -vbf we nzd
                                    Enable   unused  -vt rd10g -vbf we bne (VM=5 is Val.)
                                             used    -vt rd10g -vbf we bne nzd (VM=5 is Val.)
4.12.2 Raidvchkdsp Command
The raidvchkdsp command displays the parameters for validation checking of the specified
volumes. The unit of validation checking is based on the group in the CCI configuration
definition file. Table 4.30 lists and describes the raidvchkdsp command parameters.
Note: This command is controlled as a protection facility. A non-permitted volume is shown
without LDEV# information (the LDEV# information is “-”). The command can be rejected with
EX_ERPERM by the connectivity checking between CCI and the storage system.
Table 4.30 Raidvchkdsp Command Parameters

Parameter      Value
Command Name   raidvchkdsp
Format         raidvchkdsp { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] | -d[g] <seq#> <LDEV#> [MU#] | -f[xde] | -v <op> | -c }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the raidvchkdsp command enter interactive mode.
The -zx option monitors the HORCM daemon in interactive mode; when this option detects a HORCM
shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-g <group>: Specifies a group name written in the configuration definition file.
-d <pair Vol>: Specifies paired logical volume name defined in the configuration definition file. When this
option is specified, the command is executed for the specified paired logical volume.
-d[g] <raw_device> [MU#]: Searches the configuration definition file (local instance) for a group
containing the specified raw_device; if the specified raw_device is contained in a group, the
command is executed on the target volume as the paired logical volume (-d) or group (-dg). This
option is effective without specification of the “-g <group>” option. If the specified raw_device
is contained in two or more groups, the command is executed on the first group.
-d[g] <seq#> <LDEV#> [MU#]: Searches the configuration definition file (local instance) for a group
containing the specified LDEV; if the specified LDEV is in a group, the command is executed on the
target volume as the paired logical volume (-d) or group (-dg). This option is effective without
specification of the “-g <group>” option. If the specified LDEV is contained in two or more groups,
the command is executed on the first group. The <seq#> and <LDEV#> values can be specified in
hexadecimal (by addition of “0x”) or decimal notation.
-fx: Displays the LDEV/STLBA/ENLBA number in hexadecimal.
-fd: Displays the relation between the Device_File and the paired volumes, based on the group (as
defined in the local instance configuration definition file). If the Device_File column shows
“Unknown” for the host (instance) (Figure 4.33), the volume is not recognized on the local host,
and the raidvchkdsp command is rejected in protection mode. A non-permitted volume is shown without
LDEV# information (the LDEV# is “-”).
-fe: Displays the serial# and LDEV# of the external LUNs mapped to the LDEV for the target volume.
-c: When RAID Manager starts, HORCM_DEV entries in horcm.conf are translated from port/target/LUN
numbers to CU:LDEV information, and HORCM_LDEV entries in horcm.conf are translated from CU:LDEV
information to port/target/LUN numbers, because the RAID storage system needs both “Port#, Targ#,
Lun#” and “LDEV” to specify the target device. HORCM keeps this information as an internal database
for the configuration.
If a storage administrator changes the LDEV-to-LUN/port mapping, such that
– a new/different LDEV is mapped to a previously used port/LUN, or
– an LDEV is mapped to a different/new port,
then pair operations might be rejected because the new mapping differs from the mapping information
in the database of the running HORCM instance. A pairdisplay command shows the real LDEV mapping at
the time of the command execution and hence shows different information than what is stored in the
internal database of the HORCM instance.
The “-c” option for raidvchkdsp allows the user to see whether there is a difference between the
information in the currently running HORCM instance and the real mapping. This indication should be
used to find such issues, which indicate that:
– the HORCM instance should be restarted to discover and use the new mapping information, or
– a configuration change occurred without changing the affected configuration files of the HORCM
  instance.
Example: change from LDEV#785 to LDEV#786:
# raidvchkdsp -g VG000 -c
Group PairVol Port#   TID LU Seq#  LDEV#(conf) -change-> LDEV#
VG000 vg0001  CL4-E-0   0 17 63528 785(conf)   -change-> 786
# raidvchkdsp -g VG000 -c -fx
Group PairVol Port#   TID LU Seq#  LDEV#(conf) -change-> LDEV#
VG000 vg0001  CL4-E-0   0 17 63528 311(conf)   -change-> 312

Example: remove LDEV#785 from a port:
# raidvchkdsp -g VG000 -c
Group PairVol Port#   TID LU Seq#  LDEV#(conf) -change-> LDEV#
VG000 vg0001  CL4-E-0   0 17 63528 785(conf)   -change-> NO LDEV
# raidvchkdsp -g VG000 -c -fx
Group PairVol Port#   TID LU Seq#  LDEV#(conf) -change-> LDEV#
VG000 vg0001  CL4-E-0   0 17 63528 311(conf)   -change-> NO LDEV
# raidvchkdsp -g vg01 -fd -v cflag      ← Example of -fd option showing an Unknown volume.
Group PairVol Device_File Seq# LDEV# BR-W-E-E MR-W-B BR-W-B SR-W-B-S
vg01  oradb1  Unknown     2332     - - - - -  - - -  - - -  - - - -
vg01  oradb2  c4t0d3      2332     3 D E B R  D D D  D E E  D E D D

# raidvchkdsp -g horc0 -v gflag -fe     ← Example of -fe option.
Group ... TID LU Seq#  LDEV# GI-C-R-W-S PI-C-R-W-S R-Time EM E-Seq# E-LDEV#
horc0 ...   0 20 63528    65 E E E E E  E E E E E       0  -      -       -
horc0 ...   0 20 63528    66 E E E E E  E E E E E       0  -      -       -
Figure 4.33 Raidvchkdsp Command Examples with -fd and -fe Options
Output of the raidvchkdsp command with the -fe option:
 EM: This column displays the external connection mode.
  H = Mapped E-LUN is hidden from the host.
  V = Mapped E-LUN is visible to the host.
  - = Unmapped to the E-LUN.
  BH = Mapped E-LUN is hidden from the host, but the LDEV is blockaded.
  BV = Mapped E-LUN is visible to the host, but the LDEV is blockaded.
  B = Unmapped to the E-LUN, but the LDEV is blockaded.
 E-Seq#: This column displays the production (serial) number of the external LUN
  (‘Unknown’ is shown as ‘-’).
 E-LDEV#: This column displays the LDEV# of the external LUN (‘Unknown’ is shown as ‘-’).
# raidvchkdsp -g vg01 -fd -v cflag
Group PairVol Device_File Seq# LDEV# BR-W-E-E MR-W-B BR-W-B-Z SR-W-B-S
vg01  oradb1  c4t0d2      2332     2 D E B R  D D D  D E E E  D E D D
vg01  oradb2  c4t0d3      2332     3 D E B R  D D D  D E E E  D E D D
Figure 4.34 Raidvchkdsp Command Example with -v cflag Option
Output of the raidvchkdsp command with the -v cflag option:
 BR-W-E-E: This column displays the flags for checking regarding data block size.
  R = E: Checking for data block size on Read is enabled. D: disabled.
  W = E: Checking for data block size on Write is enabled. D: disabled.
  E = L: Data block on Read/Write is interpreted as little endian format. B: big endian format.
  E = W: Warning only; Read/Write is not rejected when a validation error is detected.
   C: Read/Write is rejected when a validation error is detected.
 MR-W-B: This column displays the flags for checking regarding CHK-F3 in the data block.
  R = E: Checking for CHK-F3 on Read is enabled. D: disabled.
  W = E: Checking for CHK-F3 on Write is enabled. D: disabled.
  B = E: Checking for CHK-F3 in data block #0 is enabled. D: disabled.
 BR-W-B-Z: This column displays the flags for checking regarding CHK-F2 in the data block.
  R = E: Checking for CHK-F2 on Read is enabled. D: disabled.
  W = E: Checking for CHK-F2 on Write is enabled. D: disabled.
  B = E: Comparing for CHK-F2 in the data block is enabled. D: disabled.
  Z = E: The non-zero checking for CHK-F2 in the data block is enabled. D: disabled.
 SR-W-B-S: Displays the flags for checking regarding CHK-F1 in the data block.
  R = E: Checking for CHK-F1 on Read is enabled. D: disabled.
  W = E: Checking for CHK-F1 on Write is enabled. D: disabled.
  B = E: Checking for CHK-F1 in data block #0 is enabled. D: disabled.
  S = E: Referring to the CHK-F1 flag contained in the data block is enabled. D: disabled.
# raidvchkdsp -g vg01 -fd -v offset     ← Example of -v offset option.
Group PairVol Device_File Seq# LDEV# Bsize STLBA  ENLBA BNM
vg01  oradb1  c4t0d2      2332     2  1024     1 102400   9
vg01  oradb2  c4t0d3      2332     3  1024     1 102400   9
Figure 4.35 Raidvchkdsp Command Example with -v offset Option
Output of the raidvchkdsp command with the -v offset option:
 Bsize: This column displays the data block size of Oracle I/O, in units of bytes.
 STLBA: Displays the start LBA on a target volume for checking, in units of 512 bytes.
 ENLBA: Displays the end LBA on a target volume for checking, in units of 512 bytes.
Note: If STLBA and ENLBA are both zero, all blocks are checked.
 BNM: Displays the number of bits for checking regarding CHK-F2, in units of bits. If BNM
  is zero, checking for CHK-F2 is disabled.
# raidvchkdsp -g vg01 -fd -v errcnt     ← Example of -v errcnt option.
Group PairVol Device_File Seq# LDEV# CfEC MNEC SCEC BNEC
vg01  oradb1  c4t0d2      2332     2    0    0    0    0
vg01  oradb2  c4t0d3      2332     3    0    0    0    0
Figure 4.36 Raidvchkdsp Command Example with -v errcnt Option
Output of the raidvchkdsp command with the -v errcnt option:
 CfEC: This column displays the error counter for block size validation checking.
 MNEC: Displays the error counter for CHK-F3 validation checking.
 SCEC: Displays the error counter for CHK-F1 validation checking.
 BNEC: Displays the error counter for CHK-F2 validation checking.
# raidvchkdsp -g vg01 -fd -v gflag      ← Example of -v gflag option.
Group PairVol Device_File Seq# LDEV# GI-C-R-W-S PI-C-R-W-S R-Time
vg01  oradb1  c4t0d2      2332     2 E E D D E  E E D D E     365
vg01  oradb2  c4t0d3      2332     3 E E D D E  E E D D E       -
Figure 4.37 Raidvchkdsp Command Example with -v gflag Option
Output of the raidvchkdsp command with the -v gflag option:
 GI-C-R-W-S: This displays the guarding flags for the target volume.
  I → E: Enabled for the Inquiry command. D: Disabled for the Inquiry command.
  C → E: Enabled for the Read Capacity command. D: Disabled for the Read Capacity command.
  R → E: Enabled for the Read command. D: Disabled for the Read command.
  W → E: Enabled for the Write command. D: Disabled for the Write command.
  S → E: Enabled for becoming the S-VOL. D: Disabled for becoming the S-VOL.
 PI-C-R-W-S: This displays the permission flags that show whether each mode flag can be
  changed to enabled or not.
  I → E: The “I” flag can be changed to enabled. D: The “I” flag cannot be changed to enabled.
  C → E: The “C” flag can be changed to enabled. D: The “C” flag cannot be changed to enabled.
  R → E: The “R” flag can be changed to enabled. D: The “R” flag cannot be changed to enabled.
  W → E: The “W” flag can be changed to enabled. D: The “W” flag cannot be changed to enabled.
  S → E: The “S” flag can be changed to enabled. D: The “S” flag cannot be changed to enabled.
 R-Time: This displays the retention time for write protection, in units of days. A hyphen
  (-) shows that the retention time is infinite. An application can determine whether the
  target volume is prevented from being changed to write-enabled by referring to R-Time.
  Audit lock status is shown as the retention time plus 1000000; that is,
  “R-Time + 1000000” shows the retention time with Audit lock status.
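For example (an illustrative value, not taken from this guide), a displayed R-Time of
1000365 would indicate a 365-day retention time under Audit lock status (365 + 1000000).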
# raidvchkdsp -g vg01 -v pool
Group PairVol Port# TID LU Seq#  LDEV# Bsize Available   Capacity
vg01  oradb1  CL2-D   2  7 62500   167  2048    100000 1000000000
vg01  oradb2  CL2-D   2 10 62500   170  2048    100000 1000000000
Figure 4.38 Raidvchkdsp Command Example with -v pool Option
Output of the raidvchkdsp command with the -v pool option:
 Bsize: This displays the data block size of the pool, in units of blocks (512 bytes).
 Available(Bsize): This displays the available capacity for the volume data in the
  SnapShot pool, in units of Bsize.
 Capacity(Bsize): This displays the total capacity of the SnapShot pool, in units of Bsize.
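As a worked illustration using the values in Figure 4.38 above: Bsize = 2048 blocks ×
512 bytes = 1 MB per Bsize unit, so an Available value of 100000 corresponds to 100000 MB
(about 98 GB) of usable SnapShot pool space.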
[Display example]
# raidvchkdsp -v aou -g AOU
Group PairVol Port# TID LU Seq#  LDEV# Used(MB) LU_CAP(MB) U(%) T(%) PID
AOU   AOU_001 CL2-D   2  7 62500   167    20050    1100000   10   70   1
AOU   AOU_002 CL2-D   2 10 62500   170   110000    1100000   10   70   1
Figure 4.39 Raidvchkdsp Command Example with -v aou Option
Output of the raidvchkdsp command with the -v aou option:
 Used(MB): Displays the usage size of the allocated blocks on this LUN.
  Range: 0 ≤ Used(MB) < LU_CAP(MB) + 42 MB.
 LU_CAP(MB): Displays the LUN capacity returned in response to the Read Capacity command
  on the SCSI interface.
 U(%): Displays the usage rate of the allocated blocks on the AOU pool containing this LU.
 T(%): Displays the threshold rate set on the AOU pool as the high water mark.
 PID: Displays the AOU pool ID assigned to this AOU volume.
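As a worked illustration of the Used(MB) range using the values in Figure 4.39 above: with
LU_CAP(MB) = 1100000, Used(MB) can range from 0 up to (but not including) 1100042 MB.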
4.12.3 Raidvchkscan Command
The raidvchkscan command displays the fibre port of the storage system (9900V and later),
the target ID, the LDEV mapped to each LUN#, and the parameters for validation checking,
regardless of the CCI configuration definition file. Table 4.31 lists and describes the
raidvchkscan command parameters.
Note: This command can be rejected with EX_ERPERM by the connectivity checking between CCI
and the Hitachi RAID storage system.
Table 4.31 Raidvchkscan Command Parameters

Parameter      Value
Command Name   raidvchkscan
Format         raidvchkscan { -h | -q | -z | -p <port> [hgrp] | -pd[g] <raw_device> | -s <seq#> | -t <target> | -l <lun> | [ -f[x] ] | -v <op> }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the raidvchkscan command enter interactive
mode. The -zx option monitors the HORCM daemon in interactive mode; when this option detects a
HORCM shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-g <group>: Specifies a group name written in the configuration definition file.
-p <port> [hgrp]: Specifies the port ID of the port to be scanned. Valid ports are CL1-A to CL1-R and
CL2-A to CL2-R (excluding CL1-I, CL1-O, CL2-I, CL2-O). In addition:
For USP V/VM: CL3-a to CL3-r, or CLG-a to CLG-r for the expanded port
For USP: CL3-a to CL3-r, or CLG-a to CLG-r for the expanded port
For NSC: CL3-a to CL3-h, or CL8-a to CL8-h for the expanded port.
For 9900V: CL3-a to CL3-r, or CL4-a to CL4-r for the expanded port
The port is not case sensitive (e.g., CL1-A = cl1-a = CL1-a = cl1-A, CL3-a = CL3-A = cl3-a = cl3-A).
This option must be specified if the “-pd <raw_device>” option is not specified.
[hgrp] is specified to display only the LDEVs mapped to a host group on a port (9900V and later).
-pd[g] <raw_device>: Specifies the raw device name. This option finds the Seq# and port name of the
storage system to which the specified device is connected, and scans the port of the storage system
corresponding to the unit ID derived from that Seq#. If this option is specified, the following
-s <Seq#> option is invalid.
-pdg (9900V and later): Shows a LUN in the host view by finding a host group.
-s <Seq#>: Used to specify the Seq# (serial#) of the storage system when the unit ID cannot be
specified in the “-p <port>” option. This option scans the port specified by the “-p <port>” option
of the storage system corresponding to the unit ID derived from the Seq#. If this option is
specified, the unit ID contained in the “-p <port>” option is invalid.
-t <target>: Specifies a target ID (0 to 15) of the specified port. If this option is not specified, the
command applies to all target IDs.
-l <lun>: Specifies a LUN (0 to 7) of the specified target ID. If this option is not specified, the command
applies to all LUNs. If this option is specified, the TID must also be specified.
-fx: Displays the LDEV/STLBA/ENLBA number in hexadecimal notation.
-v [op]: Specifies one of the following operations, each of which displays parameters for
validation checking:
cflag: Displays all flags for checking regarding data block validation for the target volumes (see
Figure 4.40).
offset: Displays the range setting for the data block size of Oracle I/O and the region on a target
volume for validation checking (see Figure 4.41).
errcnt: Displays the statistical information counted as errors for each check on the target volumes
(see Figure 4.42). Each error counter is cleared when the individual flag for validation checking
is disabled.
pool: Displays the pool capacity and the usable capacity for the pool ID to which the LDEV belongs
(see Figure 4.44). This helps in deciding whether a restore operation is possible.
aou: Displays the LUN capacity and usage rate for HDP volumes mapped to the specified port (see
Figure 4.45).
# raidvchkscan -p CL1-A -v cflag
PORT# /ALPA/C TID# LU# Seq# Num LDEV# BR-W-E-E MR-W-B BR-W-B-Z SR-W-B-S
CL1-A / ef/ 0    0   0 2332   1     0 D E B R  D D D  D E E E  D E D D
CL1-A / ef/ 0    0   1 2332   1     1 D E B R  D D D  D E E E  D E D D
Figure 4.40 Raidvchkscan Command Example with -v cflag Option
Output of the raidvchkscan command with the -v cflag option:
 BR-W-E-E: This column displays the flags for checking regarding data block size.
  R = E: Checking for data block size on Read is enabled. D: disabled.
  W = E: Checking for data block size on Write is enabled. D: disabled.
  E = L: Data block on Read/Write is interpreted as little endian format. B: big endian format.
  E = W: Warning only; Read/Write is not rejected when a validation error is detected.
   C: Read/Write is rejected when a validation error is detected.
 MR-W-B: This column displays the flags for checking regarding CHK-F3 in the data block.
  R = E: Checking for CHK-F3 on Read is enabled. D: disabled.
  W = E: Checking for CHK-F3 on Write is enabled. D: disabled.
  B = E: Checking for CHK-F3 in data block #0 is enabled. D: disabled.
 BR-W-B-Z: This column displays the flags for checking regarding CHK-F2 in the data block.
  R = E: Checking for CHK-F2 on Read is enabled. D: disabled.
  W = E: Checking for CHK-F2 on Write is enabled. D: disabled.
  B = E: Comparing for CHK-F2 in the data block is enabled. D: disabled.
  Z = E: The non-zero checking for CHK-F2 in the data block is enabled. D: disabled.
 SR-W-B-S: Displays the flags for checking regarding CHK-F1 in the data block.
  R = E: Checking for CHK-F1 on Read is enabled. D: disabled.
  W = E: Checking for CHK-F1 on Write is enabled. D: disabled.
  B = E: Checking for CHK-F1 in data block #0 is enabled. D: disabled.
  S = E: Referring to the CHK-F1 flag contained in the data block is enabled. D: disabled.
# raidvchkscan -p CL1-A -v offset
PORT# /ALPA/C TID# LU# Seq# Num LDEV# Bsize STLBA  ENLBA BNM
CL1-A / ef/ 0    0   0 2332   1     0  1024     1 102400   9
CL1-A / ef/ 0    0   1 2332   1     1  1024     1 102400   9
CL1-A / ef/ 0    0   2 2332   1     2  1024     1 102400   9
CL1-A / ef/ 0    0   3 2332   1     3  1024     1 102400   9
CL1-A / ef/ 0    0   4 2332   1     4  1024     1 102400   9
Figure 4.41 Raidvchkscan Command Example with -v offset Option
Output of the raidvchkscan command with the -v offset option:
 Bsize: This column displays the data block size of Oracle I/O, in units of bytes.
 STLBA: Displays the start LBA on a target volume for checking, in units of 512 bytes.
 ENLBA: Displays the end LBA on a target volume for checking, in units of 512 bytes.
Note: If STLBA and ENLBA are both zero, all blocks are checked.
 BNM: Displays the number of bits for checking regarding CHK-F2, in units of bits. If BNM
  is zero, checking for CHK-F2 is disabled.
# raidvchkscan -p CL1-A -v errcnt
PORT# /ALPA/C TID# LU# Seq# Num LDEV# CfEC MNEC SCEC BNEC
CL1-A / ef/ 0    0   0 2332   1     0    0    0    0    0
CL1-A / ef/ 0    0   1 2332   1     1    0    0    0    0
CL1-A / ef/ 0    0   2 2332   1     2    0    0    0    0
CL1-A / ef/ 0    0   3 2332   1     3    0    0    0    0
CL1-A / ef/ 0    0   4 2332   1     4    0    0    0    0
Figure 4.42 Raidvchkscan Command Example with -v errcnt Option
Output of the raidvchkscan command with the -v errcnt option:
 CfEC: This column displays the error counter for block size validation checking.
 MNEC: Displays the error counter for CHK-F3 validation checking.
 SCEC: Displays the error counter for CHK-F1 validation checking.
 BNEC: Displays the error counter for CHK-F2 validation checking.
# raidvchkscan -p CL1-A -v gflag        ← Example of -v gflag option.
PORT# /ALPA/C TID# LU# Seq# Num LDEV# GI-C-R-W-S PI-C-R-W-S R-Time
CL1-A / ef/ 0    0   0 2332   1     0 E E D D E  E E D D E     365
CL1-A / ef/ 0    0   1 2332   1     1 E E D D E  E E D D E       -
CL1-A / ef/ 0    0   2 2332   1     2 E E D D E  E E D D E       0
Figure 4.43 Raidvchkscan Command Example with -v gflag Option
Output of the raidvchkscan command with the -v gflag option:
 GI-C-R-W-S: This displays the guarding flags for the target volume.
  I → E: Enabled for the Inquiry command. D: Disabled for the Inquiry command.
  C → E: Enabled for the Read Capacity command. D: Disabled for the Read Capacity command.
  R → E: Enabled for the Read command. D: Disabled for the Read command.
  W → E: Enabled for the Write command. D: Disabled for the Write command.
  S → E: Enabled for becoming the S-VOL. D: Disabled for becoming the S-VOL.
 PI-C-R-W-S: This displays the permission flags that show whether each mode flag can be
  changed to enabled or not.
  I → E: The “I” flag can be changed to enabled. D: The “I” flag cannot be changed to enabled.
  C → E: The “C” flag can be changed to enabled. D: The “C” flag cannot be changed to enabled.
  R → E: The “R” flag can be changed to enabled. D: The “R” flag cannot be changed to enabled.
  W → E: The “W” flag can be changed to enabled. D: The “W” flag cannot be changed to enabled.
  S → E: The “S” flag can be changed to enabled. D: The “S” flag cannot be changed to enabled.
 R-Time: This displays the retention time for write protection, in units of days. A hyphen
  (-) shows that the retention time is infinite. An application can determine whether the
  target volume is prevented from being changed to write-enabled by referring to R-Time.
  Audit lock status is shown as the retention time plus 1000000; that is,
  “R-Time + 1000000” shows the retention time with Audit lock status.
# raidvchkscan -v pool -p CL2-d-0
PORT#   /ALPA/C TID# LU# Seq#  Num LDEV# Bsize Available   Capacity
CL2-D-0 /e4/  0    2   0 62500   1   160  2048    100000 1000000000
CL2-D-0 /e4/  0    2   1 62500   1   161  2048    100000 1000000000
Figure 4.44 Raidvchkscan Command Example with -v pool Option
Output of the raidvchkscan command with the -v pool option:
 Bsize: This displays the data block size of the pool, in units of blocks (512 bytes).
 Available(Bsize): This displays the available capacity for the volume data in the
  SnapShot pool, in units of Bsize.
 Capacity(Bsize): This displays the total capacity of the SnapShot pool, in units of Bsize.
# raidvchkscan -v aou -p CL2-d-0
PORT#   /ALPA/C TID# LU# Seq#  Num LDEV# Used(MB) LU_CAP(MB) U(%) T(%) PID
CL2-D-0 /e4/  0    2   0 62500   1   160    20050    1100000    1   60   1
CL2-D-0 /e4/  0    2   1 62500   1   161   200500    1100000   18   60   2
Figure 4.45 Raidvchkscan Command Example with -v aou Option
Output of the raidvchkscan command with the -v aou option:
 Used(MB): Displays the usage size of the allocated blocks on this LUN.
  Range: 0 ≤ Used(MB) < LU_CAP(MB) + 42 MB.
 LU_CAP(MB): Displays the LUN capacity returned in response to the Read Capacity command
  on the SCSI interface.
 U(%): Displays the usage rate of the allocated blocks on the AOU pool containing this LU.
 T(%): Displays the threshold rate set on the AOU pool as the high water mark.
 PID: Displays the AOU pool ID assigned to this AOU volume.
4.12.4 Raidvchkscan Command for Journal (UR)
The raidvchkscan command supports the -v jnl [t] [unit#] option to find the journal volume
list that was set via the SVP, and displays information for the journal volumes. The
Universal Replicator function is available on the Hitachi USP V/VM and USP/NSC storage
systems.
Table 4.32 Raidvchkscan Command Parameters (UR)

Parameter      Details
Command Name   raidvchkscan: Validation checking confirmation command
Format         raidvchkscan { -h | -q | -z | -v jnl [t] [unit#] | [ -s <Seq#> ] | [ -f[x] ] }
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx: Makes the raidvchkscan command enter interactive mode. The -zx option monitors the HORCM
daemon in interactive mode; when this option detects a HORCM shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used
to specify the instance# of HORCM.
-s <Seq#>: Used to specify the Seq# (serial#) of the storage system when the unit ID cannot be
specified in the “-v jnl” option. If this option is specified, the unit ID contained in “-v jnl” is
invalid.
-fx: Displays the LDEV number in hexadecimal notation.
# raidvchkscan -v jnl 0
JID MU CTG JNLS AP U(%) Q-Marker  Q-CNT D-SZ(BLK) Seq#  Num LDEV#
001  0   1 PJNN  4   21 43216fde     30    512345 62500   2   265
002  1   2 PJNF  4   95 3459fd43  52000    512345 62500   3   270
002  2   2 SJNS  4   95 3459fd43  52000    512345 62500   3   270
003  0   3 PJSN  4    0 -             -    512345 62500   1   275
004  0   4 PJSF  4   45 1234f432     78    512345 62500   1   276
005  0   5 PJSE  0    0 -             -    512345 62500   1   277
006  -   - SMPL  -    - -             -    512345 62500   1   278
007  0   6 SMPL  4    5 345678ef     66    512345 62500   1   278
Figure 4.46 Raidvchkscan Command Example with -v jnl 0 Option
Output of the raidvchkscan command with -v jnl 0 option:
„
JID: Displays the journal group ID.
„
MU: Displays the mirror descriptions on UR.
„
CTG: Displays the CT group ID.
„
JNLS: Displays the following status of the journal group:
–
SMPL: the journal volume is not paired, or is being deleted.
–
P(S)JNN: "P(S)VOL Journal Normal Normal".
–
P(S)JNS: "P(S)VOL Journal Normal Suspend", created with the -nocsus option.
–
P(S)JSN: "P(S)VOL Journal Suspend Normal".
–
PJNF: "P(S)VOL Journal Normal Full".
–
P(S)JSF: "P(S)VOL Journal Suspend Full".
–
P(S)JSE: "P(S)VOL Journal Suspend Error", including link failure.
–
P(S)JES: "P(S)VOL Journal Error Suspend", created with the -nocsus option.
„
AP: Displays one of the following two conditions (status), according to the pair status:
–
Shows the number of active paths on the initiator port in UR links. 'Unknown' is shown as '-'.
[Diagram: active paths (AP) carrying read data/control between the P-JNL and S-JNL, with the activity monitor (AM) on the S-JNL side]
–
AM: The activity monitor detects whether there is a request for data from the initiator at regular intervals. If AM detects a timeout, the P-JNL state is changed from PJNN to PJSE.
Note: The same path information is used for AP by three commands (pairvolchk, pairdisplay, raidvchkscan). The difference is that pairvolchk and pairdisplay show a special meaning in the SSUS(SSWS) state.
„
Q-Marker: Displays the sequence # of the journal group ID, called the Q-Marker. For P-JNL, the Q-Marker shows the latest sequence # on the P-JNL volume. For S-JNL, the Q-Marker shows the latest sequence # written to the cache (DFW).
„
Q-CNT: Displays the number of remaining Q-Markers within each journal volume.
[Diagram: Q-Markers queued in the P-JNL of the PVOL (e.g., #9 through #3) are transferred asynchronously to the S-JNL of the SVOL (e.g., #7 through #3); Q-CNT is the number of Q-Markers remaining on each side]
Figure 4.47 Example of Q-Marker and Q-CNT
„
U(%): Displays the usage rate of the journal data.
„
D-SZ: Displays the capacity of the journal data on the journal volume.
„
Seq#: Displays the serial number of the RAID storage system.
„
Num: Displays the number of LDEVs configuring the journal volume.
„
LDEV#: Displays the first LDEV number of the LDEVs configuring the journal volume.
By combining the JNLS status with the other information, an application can determine the detailed state of the journal volume.
Table 4.33 lists information about the different journal volume statuses. QCNT=0 indicates that the number of remaining Q-Markers is '0'. The letter 'N' indicates a non-zero value.
Table 4.33 Detailed Status of the Journal Volume

JNLS                 Other Information
P-JNL     S-JNL      QCNT  AP  Description
SMPL      SMPL       0     -   Configured as journal volume, but NOT pair
                     N     -   Deleting the journal volume
PJNN      SJNN       0     -   Normal state of the journal volume without data
(PJNS)    (SJNS)
PJNN      -          N     N   Normal state of the journal volume with data
(PJNS)
-         SJNN       N     N   Normal state of the journal volume with data
          (SJNS)     N     0   Still normal state of the journal volume at Link failure
PJSN      SJSN       0     -   Suspended journal volume via operation
                     N     -   Suspending the journal volume
PJNF      -          N     -   High water mark state
PJSF      SJSF       0     -   Suspended journal volume due to full journal
                     N     -   Suspending the journal volume due to full journal
PJSE      -          0     -   Suspended journal volume due to failure/Link failure
                     N     -   Suspending the journal volume due to failure/Link failure
-         SJSE       0     N   Suspended journal volume due to failure
                     0     0   Suspended journal volume due to Link failure
                     N     N   Suspending the journal volume due to failure
                     N     0   Suspending the journal volume due to Link failure
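A script can combine JNLS and Q-CNT in the same way. The following is a minimal sketch (UNIX shell, not from this manual), assuming unit 0 and the column positions of the -v jnl output shown above (JNLS is column 4, Q-CNT is column 8):
# Report journal groups that are suspended or still holding journal data.
raidvchkscan -v jnl 0 | awk 'NR > 1 { if ($4 ~ /JS/) print "JID " $1 ": suspended (" $4 ")"; else if ($8+0 > 0) print "JID " $1 ": " $8 " Q-Markers remaining" }'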
# raidvchkscan -v jnlt
JID MU CTG JNLS AP U(%) Q-Marker  Q-CNT D-SZ(BLK) Seq#  DOW PBW APW
001  0   1 PJNN  4   21 43216fde     30    512345 63528  20 300  40
002  1   2 PJNF  4   95 3459fd43  52000    512345 63528  20 300  40
003  0   3 PJSN  4    0 -             -    512345 63528  20 300  40
Figure 4.48 Raidvchkscan Command Example with -v jnlt Option
Output of the raidvchkscan command with -v jnlt option:
„
DOW: Shows the "Data Overflow Watch" timer setting (in seconds) per journal group.
„
PBW: Shows the "Path Blockade Watch" timer setting (in seconds) per journal group. This is shown as "0" in the "SMPL" state.
„
APW: Shows the "Active Path Watch" timer (in seconds) for detecting link failure.
4.12.5 Raidvchkscan Command for Snapshot Pool and Dynamic Provisioning
The raidvchkscan command supports the -v pid[a] [unit#] option to find the SnapShot pool or HDP pool settings via the SVP, and displays information for the SnapShot pool or HDP pool.
Table 4.34 Raidvchkscan Command Parameters (Snapshot/HDP)

Command Name: raidvchkscan (validation checking confirmation command)
Format: raidvchkscan { -h | -q | -z | -v pid[a] [unit#] | [-s <Seq#>] | [-f[x]] }
Options:
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx: Makes the raidvchkscan command enter the interactive mode. The -zx option prevents the use of HORCM in the interactive mode; when a HORCM shutdown is detected, the interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used to specify the instance# of HORCM.
-s <Seq#>: Specifies the Seq# (serial#) of the storage system when the unitID contained in the -v pid option cannot be specified. If this option is specified, the unitID contained in -v pid is ignored.
-fx: Displays the LDEV number in hexadecimal notation.
# raidvchkscan -v pid 0
PID POLS U(%) SSCNT Available(MB) Capacity(MB) Seq# Num LDEV# H(%)
001 POLN   10   330      10000000   1000000000 62500  2   265  80
002 POLF   95  9900        100000   1000000000 62500  3   270  70
003 POLS  100 10000           100   1000000000 62500  1   275  70
004 POLE    0     0             0            0 62500  0     0  80
Figure 4.49 Raidvchkscan Command Example with -v pid Option
Output of the raidvchkscan command with -v pid option:
„
PID: Displays the SnapShot pool ID.
„
POLS: Displays the following status of the SnapShot pool:
–
POLN = "Pool Normal"
–
POLF = "Pool Full"
–
POLS = "Pool Suspend"
–
POLE = "Pool failure". In this state, information for the pool cannot be displayed.
„
U(%): Displays the usage rate of the SnapShot pool.
„
SSCNT: Displays the number of SnapShot volumes in the SnapShot pool.
„
Available(MB): Displays the available capacity for the volume data on the SnapShot pool.
„
Capacity(MB): Displays the total capacity of the SnapShot pool.
„
Seq#: Displays the serial number of the RAID storage system.
„
Num: Displays the number of LDEVs configuring the SnapShot pool.
„
LDEV#: Displays the first LDEV number of the LDEVs configuring the SnapShot pool.
„
H(%): Displays the threshold rate set for the SnapShot pool as the high water mark. 'Unknown' is shown as '-'.
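The usage rate and the high water mark can be compared in a monitoring script. The following is a minimal sketch (UNIX shell, not from this manual), assuming unit 0 and the column positions of the -v pid output shown above (U(%) is column 3, H(%) is column 10):
# Warn when a SnapShot pool reaches its high water mark.
raidvchkscan -v pid 0 | awk 'NR > 1 && $3+0 >= $10+0 { print "pool " $1 " at " $3 "% (HWM " $10 "%)" }'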
# raidvchkscan -v pida 0
PID POLS U(%) AV_CAP(MB) TP_CAP(MB) W(%) H(%) Num LDEV# LCNT TL_CAP(MB)
001 POLN   10      10000  100000000   50   80   2   265   33   65000000
002 POLF   95   10000000  100000000   80   90   3   270  900  100000000
004 POLN    0   45000000   50000000   50   80   2   280    0          0
Figure 4.50 Raidvchkscan Command Example with -v pida Option
Output of the raidvchkscan command with -v pida option:
„
PID: Displays the HDP pool ID.
„
POLS: Displays the status of the HDP pool:
–
POLN = "Pool Normal"
–
POLF = "Pool Full"
–
POLS = "Pool Suspend"
–
POLE = "Pool failure". In this state, information for the pool cannot be displayed.
„
U(%): Displays the usage rate of the HDP pool.
„
AV_CAP(MB): Displays the available capacity for the HDP volumes mapped to this pool.
„
TP_CAP(MB): Displays the total capacity of the HDP pool.
„
W(%): Displays the threshold rate set for this HDP pool as "WARNING".
„
H(%): Displays the threshold rate set for the HDP pool as the high water mark.
„
Num: Displays the number of LDEVs configuring the HDP pool.
„
LDEV#: Displays the first LDEV number of the LDEVs configuring the HDP pool.
„
LCNT: Displays the total number of HDP volumes mapped to this HDP pool.
„
TL_CAP(MB): Displays the total capacity of all HDP volumes mapped to this HDP pool.
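For capacity planning, TL_CAP can be compared with TP_CAP to obtain the over-provisioning ratio. The following is a minimal sketch (UNIX shell, not from this manual), assuming unit 0 and the column positions of the -v pida output shown above (TP_CAP is column 5, TL_CAP is the last column):
# Print the over-provisioning ratio of each HDP pool.
raidvchkscan -v pida 0 | awk 'NR > 1 && $5+0 > 0 { printf "pool %s: %.1fx over-provisioned\n", $1, $NF/$5 }'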
4.13 Controlling CCI Activity
4.13.1 Horcmstart Command
The horcmstart command is a shell script that starts the HORCM application (/etc/horcmgr).
This shell script also sets the environment variables for HORCM as needed (e.g., HORCM_CONF, HORCM_LOG, HORCM_LOGS). Table 4.35 lists and describes the horcmstart command parameters.
Table 4.35 Horcmstart Command Parameters

Command Name: horcmstart
Format:
horcmstart.sh { inst ... } (UNIX systems)
horcmstart.exe { inst ... } (Windows systems)
Options:
inst: Specifies the HORCM instance number (numerical value). When this option is specified, the horcmstart shell script sets the environment variables (HORCMINST, HORCM_CONF, HORCM_LOG, HORCM_LOGS) corresponding to the instance number, and starts the specified HORCM instance. (Environment variables set by the user become invalid.) When this option is not specified, the horcmstart shell script starts one HORCM instance and uses the environment variables set by the user. If you have designated full environment variables, use horcmstart.sh without any arguments. If you did not designate the environment variables (HORCM_CONF, HORCM_LOG, HORCM_LOGS), this shell script sets them as follows:
For UNIX-based platforms:
If HORCMINST is specified:
HORCM_CONF = /etc/horcm*.conf (* is instance number)
HORCM_LOG = /HORCM/log*/curlog
HORCM_LOGS = /HORCM/log*/tmplog
If no HORCMINST is specified:
HORCM_CONF = /etc/horcm.conf
HORCM_LOG = /HORCM/log/curlog
HORCM_LOGS = /HORCM/log/tmplog
For Windows platform:
If HORCMINST is specified:
HORCM_CONF = \WINNT\horcm*.conf (* is instance number)
HORCM_LOG = \HORCM\log*\curlog
HORCM_LOGS = \HORCM\log*\tmplog
If no HORCMINST is specified:
HORCM_CONF = \WINNT\horcm.conf
HORCM_LOG = \HORCM\log\curlog
HORCM_LOGS = \HORCM\log\tmplog
[Environment variables]
The HORCM_LOGS environment variable specifies the log file directory for automatic storing. When HORCM starts up, the log files created in the operation are stored automatically in the HORCM_LOGS directory. This log directory must be of the same class as the HORCM_LOG directory.
HORCMSTART_WAIT (for waiting for the RM instance at start-up): horcmgr forks/execs horcmd_XX as a daemon process and verifies/waits until HORCM reaches the ready state. This timeout is used only to avoid an infinite loop; the current default is 200 seconds, chosen in consideration of the maximum number of LDEVs. However, the default timeout value may need to be increased when starting HORCM under high server load or with a remote command device. In such a case, this environment variable changes the timeout value (in seconds) from the current default (200 sec); the value must be at least 5 seconds and a multiple of 5 seconds. For example, to set 500 sec:
HORCMSTART_WAIT=500
export HORCMSTART_WAIT
For the OpenVMS® platform: OpenVMS needs to run the detached LOGINOUT.EXE process as a JOB in the background by using the 'RUN /DETACHED' command. Refer to item (4) in section 3.5.1 for details.
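For example, a startup script might launch two instances for a local/remote pair configuration. The following is a minimal sketch (UNIX shell, not from this manual), assuming /etc/horcm0.conf and /etc/horcm1.conf already exist:
# Start HORCM instances 0 and 1; horcmstart.sh returns non-zero on failure.
horcmstart.sh 0 1 || { echo "HORCM failed to start" >&2; exit 1; }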
4.13.2 Horcmshutdown Command
The horcmshutdown command is a shell script for stopping the HORCM application (/etc/horcmgr). Table 4.36 lists and describes the horcmshutdown command parameters.
Table 4.36 Horcmshutdown Command Parameters

Command Name: horcmshutdown
Format:
horcmshutdown.sh { inst ... } (UNIX systems)
horcmshutdown.exe { inst ... } (Windows systems)
Option:
inst: Specifies the HORCM (CCI) instance number (numerical value). When this option is specified, the command stops the specified HORCM instance. When this option is not specified, the command refers to the instance (environment variable HORCMINST) of the execution environment of this shell script and stops the HORCM instance as follows:
When HORCMINST is specified, this command stops the HORCM instance of the execution environment of this shell script.
When HORCMINST is not specified, this command stops the HORCM instance that has no instance setting.
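A matching shutdown step for the startup sketch in section 4.13.1 could be (same assumed instance numbers):
# Stop HORCM instances 0 and 1.
horcmshutdown.sh 0 1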
4.13.3 Horcctl Command
The HORCM and Hitachi TrueCopy software have logs that identify the cause of software
and/or hardware errors as well as a tracing function for investigating such errors. The
location of the log files depends on the user’s command execution environment and the
HORC Manager’s execution environment. The command trace file and core file reside
together under the directory specified in the HORC Manager’s execution environment. See
Appendix A for log file and log directory information.
The Hitachi TrueCopy horcctl command can be used for both maintenance and troubleshooting. The horcctl command allows you to change and display the internal trace control parameters (e.g., level, type, buffer size) of the HORC Manager and/or Hitachi TrueCopy commands. If a new value for a parameter is not specified, the current trace control parameters are displayed.
Caution: Do not change the trace level unless directed to do so by a Hitachi Data Systems
representative. Level 4 is the normal trace level setting. Levels 0-3 are for troubleshooting.
Setting a trace level other than 4 may impact problem resolution. If you request a change of
the trace level using the horcctl -l <level> command, a warning message is displayed, and
this command enters interactive mode.
Table 4.37 Horcctl Command Parameters

Command Name: horcctl
Format: horcctl { -h | -q | -z | -d | -c | -l <level> | -b <y/n> | -s <size(KB)> | -t <type> | -S | -D[I] | -C | -u <unitid> | -ND | -NC | -g <group> }
Options:
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the horcctl command enter the interactive mode. The -zx option prevents the use of HORCM in the interactive mode; when a HORCM shutdown is detected, the interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used to specify the instance# of HORCM.
-d: Interprets the control options following this option (-l <level>, -b <y/n>, -s <size(KB)>, and -t <type>)
as the parameters of the TrueCopy commands.
-c: Interprets the control options following this option (-l <level>, -b <y/n> and -t <type>) as the
parameters of the HORC Manager (HORCM).
-l <level>: Sets the trace level (range = 0 to 15). If a negative value is specified, the trace mode is
canceled. A negative value “n” must be specified as “--n”.
Caution: Do not change the trace level unless directed to do so by a Hitachi Data Systems
representative. Level 4 is the normal trace level setting. Levels 0-3 are for troubleshooting. Setting a trace
level other than 4 may impact problem resolution. If you request a change of the trace level using the
horcctl -l <level> command, a warning message is displayed, and this command enters interactive
mode.
-b <y/n>: Sets the trace writing mode: Y = buffer mode, N = synchronous mode.
-t <type>: Sets the trace type (range = 0 to 511). When this option is used, only traces of the specified
type are output. One or more values can be specified.
-s <size(KB)>: Changes the default trace buffer size, which is 1 MB, in units of 1024 bytes.
-S: Shuts down HORCM.
-D: Displays the command device name currently used by HORCM. If the command device is blocked
due to online maintenance (microcode replacement) of the storage system, you can check the command
device name in advance using this option.
-C: Changes the command device name being used by HORCM and displays the new command device
name. If the command device is blocked due to online maintenance (microcode replacement) of the
storage system, you can change the command device in advance using this option.
Note: The horcctl -D and -C commands designate a protection-mode command device by adding '*' to the device file name, as follows:
HP-UX example:
# horcctl -D
Current control device = /dev/rdsk/c0t0d0*
The horcctl -DI command shows the number of RM instances since HORCM was started, as follows:
HP-UX example without command device security:
# horcctl -DI
Current control device = /dev/rdsk/c0t0d0 AI = 14 TI = 0 CI = 1
AI: number of actual instances in use
TI: number of temporary instances in RAID
CI: number of instances using the current (own) instance
-u <unitid>: Used to specify the unit ID of a command device as the target. This option is effective when
the -D or -C option is specified. If this option is not specified, the unit ID is 0.
-ND -g <group>: Displays the network address and port name being used by HORCM. The -g <group>
option is used to specify the group name defined in the configuration definition file.
-NC -g <group>: Changes the network address and port name being used by HORCM and displays the
new network address name. The -g <group> option specifies the group name defined in the
configuration definition file.
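For example, before online microcode replacement, the command device in use can be checked and switched in advance. A minimal sketch (UNIX shell); the displayed device name is illustrative only:
# Show the command device currently in use, then switch to the alternate one.
horcctl -D        # e.g., Current control device = /dev/rdsk/c0t0d0
horcctl -C        # changes to and displays the next command device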
4.13.4 3DC Control Command using HORC/UR NEW
The horctakeoff command is a scripted command that executes several HORC operation commands in combination. It checks the (optionally specified) volume attributes and decides on a takeoff action.
The horctakeoff operation changes a 3DC multi-target configuration to a 3DC multi-hop configuration while the application keeps running; after that, the horctakeover command can configure 3DC multi-target on the remote site without stopping the application.
This command can operate at the granularity of either a logical volume or a volume group.
Table 4.38 Horctakeoff Command Parameters

Command Name: horctakeoff
Format: horctakeoff { -h | -q | -z | -g[s] <group> | -d[s] <pair Vol> | -d[g][s] <raw_device> [MU#] | -d[g][s] <seq#> <LDEV#> [MU#] | -jp <id> | -js <id> | [-t <timeout>] | -nomsg }
Options:
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits this command.
-z or -zx (OpenVMS cannot use the -zx option): Makes this command enter the interactive mode. The -zx option prevents using HORCM in the interactive mode; the interactive mode terminates upon HORCM shutdown.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used to specify the instance# of HORCM.
-g[s] <group>: Specifies a group name (defined in the configuration definition file). The command is executed for the specified group unless the -d <pair Vol> option shown below is specified.
-d[s] <pair Vol>: Specifies a logical (named) volume (defined in the configuration definition file). When this option is specified, the command is executed for the specified paired logical volume.
-d[g][s] <raw_device> [MU#]: Searches the RM configuration file (local instance) for a volume that matches the specified raw device. If a volume is found, the command is executed on the paired volume (-d) or group (-dg). This option is effective without specification of the -g <group> option. If the specified raw_device is listed in multiple device groups, this applies to the first one encountered.
-d[g][s] <seq#> <LDEV#> [MU#]: Searches the RM instance configuration file (local instance) for a volume that matches the specified sequence # and LDEV. If a volume is found, the command is executed on the paired logical volume (-d) or group (-dg). This option is effective without specification of the -g <group> option. If the specified LDEV is listed in multiple device groups, this applies to the first one encountered. <seq#> <LDEV#> can be specified in hexadecimal (by adding "0x") or decimal.
-jp <id> (HORC/UR only): The horctakeoff command changes the 3DC configuration from 3DC multi-target to 3DC multi-hop. To create the 3DC multi-hop configuration (CA_Sync → CA_Sync/UR_PVOL → UR), a journal group ID must be specified for UR_PVOL; this option is used for that purpose. If this option is not specified, the journal group ID for UR_PVOL used in the 3DC multi-target configuration is inherited automatically.
-js <id> (HORC/UR only): The horctakeoff command changes the 3DC configuration from 3DC multi-target to 3DC multi-hop. To create the 3DC multi-hop configuration (CA_Sync → CA_Sync/UR → UR_SVOL), a journal group ID must be specified for UR_SVOL; this option is used for that purpose. If this option is not specified, the journal group ID for UR_SVOL used with the 3DC multi-target configuration is inherited automatically. The CTGID is also inherited automatically for the internal paircreate command.
-t <timeout>: Specifies the maximum time to wait for the Sync_PVOL to Sync_SVOL delta-data resynchronizing operation. It is used for the internal pairresync command, with the timeout period in units of seconds. If this option is not specified, the default timeout value (7200 sec) is used.
-nomsg: Suppresses messages when this command is executed from a user program. This option must be specified at the beginning of the command arguments.
Returned values:
The horctakeoff command returns one of the following values in exit(), which allows users to check the execution results using a user program or script.
„
Normal termination: 0
„
Abnormal termination: The horctakeoff command returns the following error codes as well as generic errors.

Specific error codes for horctakeoff:

Category                      Error Code  Error Message                             Value
Volume status (Unrecoverable) EX_ENQVOL   Unmatched volume status within the group  236
                              EX_INCSTG   Inconsistent status in group              229
                              EX_EVOLCE   Pair Volume combination error             235
                              EX_VOLCRE   Local and Remote Volume currency error    223
Timer (Recoverable)           EX_EWSTOT   Timeout waiting for specified status      233
266
Chapter 4 Performing CCI Operations
Note: An unrecoverable error should be handled based on the returned error code, without re-executing the command. When the command fails, the detailed status is logged in the Raid Manager command log ($HORCC_LOG), even if the user script has no error handling.
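A calling script can branch on the exit value, retrying only the recoverable timeout case. The following is a minimal sketch (UNIX shell, not from this manual), using the groups G1 and G2 from the examples in section 4.13.4.1:
# Retry horctakeoff once on EX_EWSTOT (233); treat any other non-zero code as fatal.
horctakeoff -g G1 -gs G2
rc=$?
if [ "$rc" -eq 233 ]; then
    horctakeoff -g G1 -gs G2    # recoverable timeout: retry once
elif [ "$rc" -ne 0 ]; then
    echo "horctakeoff failed (rc=$rc); see the command log" >&2
    exit "$rc"
fi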
4.13.4.1 Horctakeoff Command Examples
[Diagram: transition between 3DC multi-target and 3DC multi-hop among sites L1, L2, and L3 (groups G1, G2, G3)]
„
horctakeoff command on L1 local site
# horctakeoff -g G1 -gs G2
horctakeoff : 'pairsplit -g G1 -S -FHORC 2' is in progress
horctakeoff : 'pairsplit -g G1' is in progress
horctakeoff : 'pairsplit -g G2 -S' is in progress
horctakeoff : 'paircreate -g G1 -gs G2 -FHORC 2 -nocopy -f async -jp
0 -js 1' is in progress
horctakeoff : 'pairsplit -g G1 -FHORC 2' is in progress
horctakeoff : 'pairresync -g G1' is in progress
horctakeoff : 'pairresync -g G1 -FHORC 2' is in progress
horctakeoff : horctakeoff done
[Diagram: transition between 3DC multi-target and 3DC multi-hop among sites L1, L2, and L3 (groups G1, G2, G3)]
„
horctakeoff command on L2 local site
# horctakeoff -g G1 -gs G3
horctakeoff : 'pairsplit -g G1 -S -FHORC 1' is in progress.
horctakeoff : 'pairsplit -g G1' is in progress.
horctakeoff : 'pairsplit -g G3 -S' is in progress.
horctakeoff : 'paircreate -g G1 -gs G3 -FHORC 1 -nocopy -f async -jp
0 -js 1' is in progress.
horctakeoff : 'pairsplit -g G1 -FHORC 1' is in progress.
horctakeoff : 'pairresync -g G1' is in progress.
horctakeoff : 'pairresync -g G1 -FHORC 1' is in progress.
horctakeoff : horctakeoff done.
[Diagram: transition between 3DC multi-target and 3DC multi-hop among sites L1, L2, and L3 (groups G1, G2, G3)]
„
horctakeoff command on L1 remote site
# horctakeoff -g G1 -gs G2
horctakeoff : 'pairsplit -g G2 -S' is in progress.
horctakeoff : 'pairsplit -g G1' is in progress.
horctakeoff : 'pairsplit -g G1 -FHORC 2 -S' is in progress.
horctakeoff : 'paircreate -g G2 -vl -nocopy -f async -jp 0 -js 1' is
in progress.
horctakeoff : 'pairsplit -g G2' is in progress.
horctakeoff : 'pairresync -g G1' is in progress.
horctakeoff : 'pairresync -g G2' is in progress.
horctakeoff : horctakeoff done.
[Diagram: transition between 3DC multi-target and 3DC multi-hop among sites L1, L2, and L3 (groups G1, G2, G3)]
„
horctakeoff command on L2 remote site
# horctakeoff -g G1 -gs G3
horctakeoff : 'pairsplit -g G3 -S' is in progress.
horctakeoff : 'pairsplit -g G1' is in progress.
horctakeoff : 'pairsplit -g G1 -FHORC 1 -S' is in progress.
horctakeoff : 'paircreate -g G3 -vl -nocopy -f async -jp 0 -js 1' is
in progress.
horctakeoff : 'pairsplit -g G3' is in progress.
horctakeoff : 'pairresync -g G1' is in progress.
horctakeoff : 'pairresync -g G3' is in progress.
horctakeoff : horctakeoff done.
4.13.5 Windows Subcommands
The CCI software provides subcommands for the Windows platforms which are executed as
options (-x <command> <arg>) of another command. When you specify a subcommand as the
only option of a command, you do not need to start HORCM. If another option of the
command and the subcommand are specified on the same command line, place the other
option after the subcommand.
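For example (a sketch using the sync subcommand described in section 4.13.9; the group name is illustrative), the subcommand and its argument come first, followed by the other options of the command:
pairsplit -x sync C: -g oradb -rw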
4.13.6 Findcmddev Subcommand
The findcmddev subcommand (find command device) searches for command devices within the specified range of disk drive numbers. If one is found, the command device is displayed in the same format as in the configuration definition file. This subcommand is used when the command device name is not known. The following shows an example of the findcmddev subcommand used as an option of the raidscan command and its output.
D:\HORCM\etc> raidscan -x findcmddev hdisk0, 20
cmddev of Ser# 62496 = \\.\PhysicalDrive0
cmddev of Ser# 62496 = \\.\E:
cmddev of Ser# 62496 = \\.\Volume{b9b31c79-240a-11d5-a37f-00c00d003b1e}
This example searches for command devices in the range of disk drive numbers 0-20.
Figure 4.51 Findcmddev Subcommand Example
Caution: The findcmddev subcommand must be used when HORCM is not running.
Note: The findcmddev subcommand searches for the physical and logical drives associated with the command device. If the command device is indicated as a logical drive in addition to a physical drive, then a drive letter is assigned to the command device. You must delete the drive letter assigned to the command device to prevent utilization by general users.
The "Volume{GUID}" must be created by setting a partition using Disk Management without formatting a file system; it is used to keep the same command device even when the physical drive numbers change at every reboot in a SAN environment.
Table 4.39 Findcmddev Subcommand Parameters

Command Name: findcmddev
Format: -x findcmddev drive#(0-N)
Argument: drive#(0-N): Specifies the range of disk drive numbers on the Windows system.
4.13.7 Drivescan Subcommand
The drivescan subcommand displays the relationship between the disk numbers assigned by the Windows system and the LDEVs on the RAID storage system, and also displays attribute information for each device. The following shows an example of the drivescan subcommand used as an option of the raidscan command and its output.
Table 4.40 Drivescan Subcommand Parameters

Command Name: drivescan
Format: -x drivescan drive#(0-N)
Argument: drive#(0-N): Specifies the range of disk drive numbers on the Windows system.
raidscan -x drivescan harddisk0,20
Harddisk 0... Port[ 1] PhId[ 0] TId[ 0] Lun[ 0] [HITACHI] [DK328H-43WS]
Harddisk 1... Port[ 2] PhId[ 4] TId[ 29] Lun[ 0] [HITACHI] [OPEN-3]
Port[CL1-J] Ser#[ 30053] LDEV#[ 9(0x009)]
HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
Harddisk 2... Port[ 2] PhId[ 4] TId[ 29] Lun[ 1] [HITACHI] [OPEN-3]
Port[CL1-J] Ser#[ 30053] LDEV#[ 10(0x00A)]
HORC = S-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0004 CTGID = 3
Harddisk 3... Port[ 2] PhId[ 4] TId[ 29] Lun[ 6] [HITACHI] [OPEN-3-CM]
Port[CL1-J] Ser#[ 30053] LDEV#[ 15(0x00F)]
Note: This example displays the devices for the range of disk drive numbers from 0 to 20.
Figure 4.52 Drivescan Subcommand Example
Output of the drivescan subcommand:
„
Harddisk #: Shows the hard disk recognized by the Windows system.
„
Port: Shows the port number on the device adapter recognized by the Windows system.
„
Phid: Shows the bus number on the device adapter port recognized by the Windows system.
„
Tid: Shows the target ID of the hard disk(s) on the specified port and bus. For further information on fibre-to-SCSI address conversion, see Appendix C.
„
LUN: Shows the LU number of the hard disk on the specified port, bus, and TID.
„
Port[CLX-Y]: Shows the port number on the storage system.
„
Ser#: Shows the production number (serial number) of the storage system.
„
LDEV#: Shows the LDEV ID (hexadecimal) of the specified volume.
„
HORC: Shows the TrueCopy attribute (P-VOL, S-VOL, or SMPL) of the specified volume.
„
ShadowImage: Shows the ShadowImage attribute (P-VOL, S-VOL, or SMPL) and MU number (0-2) of the specified volume.
„
RAIDX[Group]: Shows the physical location (frame number-parity group number) of the specified volume and the RAID level of this parity group.
„
SSID: Shows the SSID of the specified volume.
„
CTGID (TrueCopy Async/UR only): Shows the consistency group ID of the specified volume.
4.13.8 Portscan Subcommand
The portscan subcommand displays the devices on the specified ports. The following shows an example of the portscan subcommand used as an option of the raidscan command and its output.

Table 4.41 Portscan Subcommand Parameters

Command Name: portscan
Format: -x portscan port#(0-N)
Argument: port#(0-N): Specifies the range of port numbers on the Windows system.
raidscan -x portscan port0,20
PORT[ 0] IID [ 7] SCSI Devices
PhId[ 0] TId[ 3] Lun[ 0] [MATSHIT] [CD-ROM CR-508   ] ...Claimed
PhId[ 0] TId[ 4] Lun[ 0] [HP     ] [C1537A          ] ...Claimed
PORT[ 1] IID [ 7] SCSI Devices
PhId[ 0] TId[ 0] Lun[ 0] [HITACHI] [DK328H-43WS     ] ...Claimed
PORT[ 2] IID [ 7] SCSI Devices
PhId[ 0] TId[ 5] Lun[ 0] [HITACHI] [OPEN-3          ] ...Claimed
PhId[ 0] TId[ 5] Lun[ 1] [HITACHI] [OPEN-3          ] ...Claimed
PhId[ 0] TId[ 5] Lun[ 2] [HITACHI] [OPEN-3          ] ...Claimed
PhId[ 0] TId[ 6] Lun[ 0] [HITACHI] [3390-3A         ] ...Claimed
Note: This example displays the devices for the range of ports from 0 to 20.
Figure 4.53 Portscan Subcommand Example
Output of the portscan subcommand:
„
Port: Shows the port number on the device adapter recognized by the Windows system.
„
IID: Shows the initiator ID on the specified device adapter port.
„
Phid: Shows the bus number on the specified device adapter port.
„
Tid: Shows the target ID of the hard disk(s) on the specified adapter port and bus. For further information on fibre-to-SCSI address conversion, see Appendix C.
„
LUN: Shows the LU number of each hard disk on the specified device adapter port and bus.
4.13.9 Sync and Syncd Subcommands
The sync (synchronization) subcommand sends unwritten data remaining on the Windows server to the specified device(s) to synchronize the pair(s) before the CCI command is executed. The syncd (synchronization delay) subcommand waits for the delayed (paging) IO for dismount after issuing sync. Table 4.42 lists and describes the sync and syncd subcommand parameters.
Table 4.42 Sync and Syncd Subcommand Parameters

Command Name: sync, syncd
Format:
-x sync[d] A: B: C: ...
-x sync[d] all
-x sync[d] drive#(0-N)
-x sync[d] Volume# ... (Windows 2008/2003/2000 systems)
-x sync[d] D:\Directory or \Directory pattern ... (Windows 2008/2003/2000 systems)
Arguments:
A: B: C: [\directory or \Directory pattern]: Specifies the logical drive(s) to be synchronized. Data is flushed to the specified logical drive and the physical drive corresponding to the logical drive. If the specified logical drive has directory mount volumes, then SYNC is executed for all of the volumes on the logical drive, as shown below:
pairsplit -x sync D:
[SYNC] D: HarddiskVolume2
[SYNC] D:\hd1 HarddiskVolume8
[SYNC] D:\hd2 HarddiskVolume9
[\directory or \Directory pattern] is used to find the directory mount point on the logical drive. If the directory is specified, then SYNC is executed only for the directory mounted volume:
pairsplit -x sync D:\hd1
[SYNC] D:\hd1 HarddiskVolume8
If the directory pattern is specified, then SYNC is executed for all directory mounted volumes matching "\directory pattern":
pairsplit -x sync D:\h
[SYNC] D:\hd1 HarddiskVolume8
[SYNC] D:\hd2 HarddiskVolume9
all: Synchronizes all logical drives and the physical drives corresponding to the logical drives, assuming that they are on the hard disks. The logical drive on which the CCI software is installed and the logical drive containing the Windows directory are excluded. If a logical drive has directory mount volumes, then SYNC is executed for all volumes on the logical drive, as shown below:
pairsplit -x sync all
[SYNC] C: HarddiskVolume1
[SYNC] D:\hd1 HarddiskVolume8
[SYNC] D:\hd2 HarddiskVolume9
[SYNC] G: HarddiskVolume10
drive#(0-N): Specifies the range of drives on the Windows system.
Volume#(0-N): Specifies the LDM volumes to be flushed. Volume# must be specified as '\Vol#', '\Dms#', '\Dmt#', or '\Dmr#' for an LDM volume on Windows 2008/2003/2000 systems. To flush HarddiskVolumeX:
-x sync \VolX
The following example shows the sync subcommand used as an option of the pairsplit command. In this example, the unwritten data of the C: and D: drives is written to disk, all pairs in the specified group are split (status = PSUS), and read/write access is enabled for all S-VOLs in the specified group.
pairsplit -x sync C: D: -g oradb -rw
Figure 4.54 Sync Subcommand Example – Pairsplit
In the following example, the unwritten data of hdisk2 and hdisk3 is written to disk, and all pairs in the specified group are deleted (status = SMPL), which enables read/write access for all secondary volumes.
pairsplit -x sync hdisk2 hdisk3 -g oradb -S
Figure 4.55 Sync Subcommand Example – Pairsplit -S
Note: Sync behaves as follows under these conditions:
„
If the logical drives designated as objects of the sync command are not opened by any applications, sync flushes the system buffer to the drive and puts the drive in the dismounted state.
„
If the logical drives designated as objects of the sync command are already opened by applications, sync only flushes the system buffer to the drive.
This allows the system buffer to be flushed before pairsplit without unmounting the PVOL (open state), and is indicated by the following [WARNING]:
pairsplit -x sync C:
WARNING: Only flushed to [\\.\C:] drive due to be opening
[SYNC] C: HarddiskVolume3
Note: Syncd behaves as follows:
„
If the logical drives designated as objects of the syncd command are not opened by any applications, syncd flushes the system buffer to the drive, puts the drive in the dismounted state, and then waits (30 sec) for the delayed (paging) IO for dismount.
„
This avoids the problem that NTFS on the PVOL is split in an inconsistent state because Windows 2003 delays the IO for dismounting.
4.13.10 Mount Subcommand
The mount subcommand mounts the specified drive to the specified partition on the specified hard disk drive using the drive letter. When the mount subcommand is executed without an argument, all currently mounted drives (including directory mounted volumes) are displayed; if a logical drive is mounting an LDM volume, the Harddisk#[n] configuring the LDM volume is displayed. Figures 4.56 and 4.57 show examples of the mount subcommand used as an option of the pairsplit command and its output.
Table 4.43 Mount Subcommand Parameters

Command Name: mount
Format:
-x mount
-x mount drive: hdisk# [partition#] (Windows NT)
-x mount drive: Volume# (Windows 2008/2003/2000)
-x mount drive: [\directory] Volume# (Windows 2008/2003/2000)
Arguments:
drive: hdisk# [partition#]: Specifies the logical drive, hard disk drive (number), and partition to be mounted.
drive: [\directory] Volume#: Specifies the logical drive, directory mount point, and LDM volume number to be mounted. Volume# must be specified as '\Vol#', '\Dms#', '\Dmt#', or '\Dmr#' for an LDM volume on Windows 2008/2003/2000. To mount HarddiskVolumeX: -x mount C: hdX or -x mount C: \VolX
[\directory]: Specifies the directory mount point on the logical drive.
pairsplit -x mount D:\hd1 \Vol8
D:\hd1 <+> HarddiskVolume8
pairsplit -x mount D:\hd2 \Vol9
D:\hd2 <+> HarddiskVolume9
Restriction:
The partition on the specified disk drive (hard disk) must be recognized on the Windows system.
[\directory] for the mount must specify a mount point without an embedded space character.
If [\directory] is detected as a mount point with an embedded space (e.g., 'aaa bbb'), then the directory is shown with "…" appended to the first string, as below:
pairsplit -x mount
Drive   FS_name VOL_name Device          Partition ... Port PathID Targ Lun
D:      NTFS    Null     Harddiskvolume3           ... Harddisk2
D:\aaa… NTFS    Null     Harddiskvolume4           ... Harddisk3
The same method is used for “inqraid $LETALL” and “raidscan -pi $LETALL -find” command.
pairsplit -x mount F: hdisk2 p1 -x mount G: hdisk1 p1
pairsplit -x mount
Drive FS_name VOL_name Device    Partition  ... Port PathID Targ Lun
C:    FAT     Null     Harddisk0 Partition1 ...  1    0      0    0
F:    FAT     Null     Harddisk2 Partition1 ...  2    0      5    1
G:    NTFS    Null     Harddisk1 Partition1 ...  2    0      5    0
Z:    CDFS    Null     CdRom0               ... Unknown
Figure 4.56 Mount Subcommand Example for Windows NT
This example mounts the "F:" drive to partition1 on disk drive 2 and the "G:" drive to partition1 on disk drive 1, and then displays the mounted devices.
pairsplit -x mount F: hdisk2
pairsplit -x mount
Drive  FS_name VOL_name Device                      Partition ... Port PathID Targ Lun
C:     NTFS    Null     Harddiskvolume1                       ... Harddisk0
F:     NTFS    Null     Harddiskvolume2                       ... Harddisk1
D:     NTFS    Null     Harddiskvolume3                       ... Harddisk2
D:\hd1 NTFS    Null     Harddiskvolume4                       ... Harddisk3
D:\hd2 NTFS    Null     Harddiskvolume5                       ... Harddisk4
G:     NTFS    Null     HarddiskDmVolumes\…\Volume1           ... Harddisk5[3]
Figure 4.57 Mount Subcommand Example for Windows 2008/2003/2000
This example mounts the F: drive and then displays the mounted devices: the F: drive is mounted to harddiskvolume2, D: is mounted to harddiskvolume3, the D:\hd1 directory ('hd1' directory on the D: drive) is mounted to harddiskvolume4, the D:\hd2 directory is mounted to harddiskvolume5, and the G: drive is mounted to HarddiskDmVolumes\…\Volume1, a spanned volume configured with three hard disks.
Output of the mount subcommand:
„
Drive: Shows the logical drive recognized by the Windows system.
„
FS_name: Shows the name of the file system formatted on the specified drive.
„
VOL_name: Shows the volume label name for the specified drive.
„
Device, Partition: Shows the device name and partition for the specified drive.
„
Port, PathID, Targ, Lun: Shows the port number, path ID (bus), target ID, and LUN for the specified drive. For further information on fibre-to-SCSI address conversion, see Appendix C.
4.13.11 Umount and Umountd Subcommands
The umount subcommand unmounts the specified logical drive and deletes the drive letter. Before deleting the drive letter, this subcommand executes sync internally for the specified logical drive and flushes unwritten data. The umountd subcommand unmounts the logical drive after waiting for the delayed (paging) IO for dismount. Table 4.44 lists the umount and umountd subcommand parameters. Figure 4.58 shows an example of the umount subcommand used as an option of the pairsplit command.
Table 4.44 Umount and Umountd Subcommand Parameters

Command Name: umount, umountd
Format:
-x umount[d] drive: [time]
-x umount[d] drive:[\directory] [time] (Windows 2008/2003/2000)
Arguments:
drive: Specifies the mounted logical drive.
[\directory]: Specifies the directory mount point on the logical drive.
pairsplit -x umount D:\hd1 \Vol8
D:\hd1 <-> HarddiskVolume8
pairsplit -x umount D:\hd2 \Vol9
D:\hd2 <-> HarddiskVolume9
Example for waiting 45 sec:
pairsplit -x umount D: 45
D: <-> HarddiskVolume8
Restriction: The logical drive to be unmounted and the corresponding physical drive must be closed to all applications.
pairsplit -x umount F: -x umount G: -g oradb -rw
pairsplit -x mount
Drive FS_name VOL_name Device    Partition  ... Port PathID Targ Lun
C:    FAT     Null     Harddisk0 Partition1 ...  1    0      0    0
Z:    Unknown Unknown  CdRom0               ... Unknown
Figure 4.58 Umount Subcommand Example
This example unmounts the F: and G: drives, splits all pairs in the specified group (status = PSUS), enables read/write access to all secondary volumes in the specified group, and then displays all mounted drives.
Output of the umount subcommand:
„
Drive: Shows the logical drive recognized by the Windows system.
„
FS_name: Shows the name of the file system formatted on the specified drive.
„
VOL_name: Shows the volume label name for the specified drive.
„
Device, Partition: Shows the device name and partition for the specified drive.
„
Port, PathID, Targ, Lun: Shows the port number, path ID (bus), target ID, and LUN for the specified drive. For further information on fibre-to-SCSI address conversion, see Appendix C.
Note: The umount subcommand flushes (syncs) the system buffer of the associated drive before deleting the drive letter. If umount fails, confirm the following conditions:
„
The logical and physical drives designated as objects of the umount subcommand are not opened by any applications. For example, confirm that Explorer is not pointing at the target drive; if it is, the target drive is open.
„
The umount subcommand does not ignore errors detected on the NT file system, so umount succeeds only in the normal (no error) case on the NT file system. For example, confirm that the target drive has no failure logged in the Event Viewer. If it does, you must reboot the system or delete the partition and reconfigure the target drive.
Note: Umountd behaves as follows:
„
It unmounts the logical drive after flushing the system buffer to the drive and then waiting (30 sec) for the delayed (paging) IO for dismount.
„
This avoids a problem (Windows 2003 only) where NTFS on the PVOL is split in an inconsistent state because Windows 2003 delays the IO for dismounting. It also avoids a problem where the delayed (paging) IO for dismounting is written to the SVOL in the SVOL_PAIR (write-disabled) state by rescan and is logged as a Windows event (i.e., ID51, ID57). These problems do not occur on Windows 2008 systems.
4.13.12 Environment Variable Subcommands
If no environment variables are set in the execution environment, the environment variable subcommands set or cancel an environment variable within the CCI command. The setenv subcommand sets the specified environment variable(s). The usetenv subcommand deletes the specified environment variable(s). The env subcommand displays the environment variable(s). The sleep subcommand causes CCI to wait for the specified time. Table 4.45 lists and describes the environment variable subcommands and their parameters.
Table 4.45 Environment Variable Subcommand Parameters

Command Name: setenv, usetenv, env, sleep
Format:
-x setenv vaname value
-x usetenv vaname
-x env
-x sleep time
Arguments:
vaname: Specifies the environment variable to be set or canceled.
value: Specifies the value or character string of the environment variable to be set.
time: Specifies the sleep time in seconds.
Restriction: The environment variables must be set before connecting to HORCM, and must be specified during interactive mode (-z option). Changing an environment variable after a CCI command execution error is invalid.
Figure 4.59 shows an example of the setenv and usetenv subcommands used as an option of the raidscan command. This example changes the execution environment of the raidscan command in interactive (dialog) mode from "HORC" to "HOMRCF" by setting "HORCC_MRCF" as an environment variable, and then restores it by deleting the variable.
raidscan[HORC]: -x setenv HORCC_MRCF 1
raidscan[MRCF]:
raidscan[MRCF]: -x usetenv HORCC_MRCF
raidscan[HORC]:
Figure 4.59 Environment Variable Subcommand Examples
4.14 CCI Command Tools
4.14.1 Inqraid Command Tool
CCI provides the inqraid command tool for confirming the drive connection between the storage system and the host system. The inqraid command displays the relation between special file(s) on the host system and the actual physical drives of the RAID storage system. Figure 4.60 shows examples of using inqraid with system commands to display the connection between special files and physical drives. The figures that follow show examples of the -find, -findc, -CLI, -sort[CM], -gvinf, and -svinf options.
Table 4.46 Inqraid Command Parameters

Command Name: /HORCM/usr/bin/inqraid
Format: /HORCM/usr/bin/inqraid [ -h | quit | -inqdump | -f[x][p][l][g] | -find[c] | <special file> | -CLI[WPN] | -sort | -CM | -gvinf | -svinf | -gplba | -pin | -fv (Windows only) ]
Options:
-h: Displays Help/Usage.
quit: Terminates waiting for STDIN and exits this command.
-inqdump: Displays information for standard inquiry with a hexadecimal dump image.
-fx: Displays the LDEV number in hexadecimal.
-find[c]: Finds the appropriate group within the configuration file using a special file provided by STDIN.
-find: Searches for a group in the configuration definition file (local instance) using the <special file> from STDIN, and uses the following options of the pairdisplay command to display its state. HORCMINST must be specified as the command execution environment.
For ShadowImage: pairdisplay -d <Seq#> <LDEV#> 0 1 2 -l [-fx] [-CLI] 2>/dev/null
For Hitachi TrueCopy: pairdisplay -d <Seq#> <LDEV#> -l [-fx] [-CLI] 2>/dev/null
Note: <Seq#> and <LDEV#> are obtained using the SCSI Inquiry command.
<special file>: Specifies the special file name as the argument of the command. If no argument is given, the command waits for input from STDIN.
-findc: Uses the following options of the pairdisplay command and displays in CLI format by editing the pairdisplay output.
For ShadowImage: pairdisplay -d <Seq#> <LDEV#> <MU#> -fd -CLI 2>/dev/null
For Hitachi TrueCopy: pairdisplay -d <Seq#> <LDEV#> -fd -CLI 2>/dev/null
Note: <Seq#> and <LDEV#> are obtained using the SCSI Inquiry command.
<special file>: Specifies a special file name as the argument of a command.
No argument: Expects STDIN to provide the arguments.
-CLI: Displays structured column output for Command Line Interface (CLI) parsing. Also used for the -find option. The delimiters between columns can be spaces and/or dashes (-).
-CLIWP, -CLIWN: Displays the WWN (world wide name of the HOST adapter) and LUN in CLI format; also used for the -find option.
-sort [CM]: Sorts the target devices in Serial#, LDEV# order.
[CM]: Displays only the command device, in horcm.conf image. This option is valid within the -sort option.
-gvinf (Windows systems only)
-gvinfex (for GPT disks on Windows 2008/2003): Gets the signature and volume layout information of a raw device file provided via STDIN or arguments, and saves this information to the system disk in the following format: \WindowsDirectory\VOLssss_llll.ini, where ssss = serial# and llll = LDEV#.
Normally this option is used to save the signature and volume layout information once, after the user has set the new partition for the SVOL using Windows Disk Management.
280
Chapter 4 Performing CCI Operations
Parameter
Value
-svinf[=PTN] (Windows systems only)
-svinfex[=PTN] (for GPT disks on Windows 2008/2003): Sets the signature and volume layout information that was saved to the system disk to a raw device file provided via STDIN or arguments. Gets the serial# and LDEV# of the target device using SCSI Inquiry, and applies the signature and volume layout information in the VOLssss_llll.ini file to the target device. This option sets the information correctly because the signature and volume layout information is managed by serial# and LDEV#, without depending on Harddisk#, even if Harddisk# is changed by configuration changes.
[=PTN]: Specifies a string pattern used to interpret the strings provided via STDIN as raw devices.
\Device\HarddiskVolume#(number) is created in the sequential order in which -svinf is executed on each Harddisk, and its number remains the same as long as the system configuration is not changed. To make \Device\HarddiskVolume#(number) more deterministic, create \Device\HarddiskVolume# in serial# and LDEV# order by using the -sort option, as shown below:
D:\HORCM\etc>echo hd5 hd4 hd3 | inqraid -svinf -sort
[OPEN-3     ] [VOL61459_451_5296A763] -> Harddisk3
[OPEN-3     ] [VOL61459_452_5296A760] -> Harddisk4
[OPEN-3     ] [VOL61459_453_5296A761] -> Harddisk5
-gplba (Windows systems only)
-gplbaex (for GPT disks on Windows 2008/2003): Displays the usable LBAs on a physical drive in units of 512 bytes; used to specify the [slba] [elba] options for the raidvchkset command.
Example:
C:\HORCM\etc>inqraid $Phys -CLI -gplba -sort
Harddisk11 : SLBA = 0x00003f00 ELBA = 0x000620d9 PCNT = 7 [OPEN-3-CVS ]
Harddisk12 : SLBA = 0x00003f00 ELBA = 0x00042ad1 PCNT = 4 [OPEN-3-CVS ]
Harddisk13 : SLBA = 0x0000003f ELBA = 0x000620d9 PCNT = 1 [OPEN-3-CVS ]
SLBA: Displays the usable starting LBA in units of 512 bytes.
ELBA: Displays the usable ending LBA (ELBA - 1) in units of 512 bytes.
PCNT: Displays the number of partitions.
Example for setting Harddisk11:
C:\HORCM\etc>raidvchkset -d hd11 -vs 16 0x00003f00 0x000620d9
-fv (Windows 2008/2003/2000 systems only): Displays the Volume{GUID} via $Volume in wide format.
Example:
C:\HORCM\etc>inqraid -CLI $Vol -fv
DEVICE_FILE                                            PORT  SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}\Vol3\Dsk0 CL2-D 62496  256  -   -      -    -       OPEN-3-CVS-CM
-fp or -fl or -pin: Shows a data protection volume with the -CLI option by appending '*' to the device file name. If the -fp option is specified, the data protection volume is a Database Validator volume. If the -fl option is specified, the data protection volume is a Data Retention Utility (Open LDEV Guard on 9900V) volume. If the -pin option is specified, shows that the volume is a PIN track volume, typically due to an HDD double drive failure and/or external connection disk failure.
# ls /dev/rdsk/c57t4* | ./inqraid -CLI -fp
DEVICE_FILE PORT  SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
c57t4d0*    CL1-D 62496  32   -   s/P/ss 0004 5:01-03 OPEN-3
c57t4d3*    CL1-D 62496  35   -   s/P/ss 0004 5:01-03 OPEN-3
c57t4d4     CL1-D 62496  36   -   s/P/ss 0004 5:01-01 OPEN-3
c57t4d5     CL1-D 62496  37   -   s/P/ss 0004 5:01-02 OPEN-3
This example shows that c57t4d0 and c57t4d3 (marked by *) are set to enable Database Validator protection.
-fg (9900V and later): Shows a LUN in the host view by finding a host group (9900V and later).
-fw: Displays all of the cascading volume statuses on the STD Inquiry Page. If this option is not specified, then only four cascading mirrors are displayed.
Example:
# ls /dev/rdsk/* | inqraid -CLI -fw
DEVICE_FILE PORT  SERIAL LDEV CTG H../M/..             SSID R:Group PRODUCT_ID
c1t2d10s2   CL2-D 62500  266  -   Psss/P/PP----------- 0005 1:01-02 OPEN-3
c1t2d11s2   CL2-D 62500  267  -   ssss/P/PP----------- 0005 1:01-02 OPEN-3
-CLIB -sort: This option is used to determine how many paired volumes can be created on the actual array; it calculates the total Bitmap pages for HORC/HOMRCF and the unused Bitmap pages by sorting the specified special files (from standard input or the arguments) in Serial#, LDEV# order. The default is HOMRCF. This option is valid within the -sort option.
Note: Identical LDEVs sorted in Serial#, LDEV# order are counted once when calculating the Bitmap pages (LDEVs shared by multiple ports are calculated as one LDEV). Also, command devices are excluded from the total.
Example:
# ls /dev/rdsk/* | inqraid -sort -CLIB
DEVICE_FILE PORT  SERIAL LDEV  SL CL +SI/SI UNUSED PRODUCT_ID
c1t0d0      CL1-E 63516      0  0  0      -      - OPEN-9-CM
c1t0d1      CL1-E 63516  12288  0  0      1  30718 OPEN-3
c1t0d2      CL1-E 63516  12403  0  0      4  30718 OPEN-9
c1t0d3      CL1-E 63516  12405  0  0      9  30718 OPEN-E
c1t0d4      CL1-E 63516  12800  0  0     12  30718 OPEN-8
c1t0d5      CL1-E 63516  12801  0  0     18  30718 OPEN-8*2
c1t0d6      CL1-E 63516  13057  0  0     31  30718 OPEN-L
c2t0d6      CL2-E 63516  13057  0  0     31  30718 OPEN-L
-fh[c]: Used to specify the Bitmap pages for HORC/UR. The -fc option is used to calculate the Bitmap pages of cylinder size for HORC.
Example:
# ls /dev/rdsk/* | inqraid -sort -CLIB -fh
DEVICE_FILE PORT  SERIAL LDEV  SL CL +TC/UR UNUSED PRODUCT_ID
c1t0d0      CL1-E 63516      0  0  0      -      - OPEN-9-CM
c1t0d1      CL1-E 63516  12288  0  0      1  11605 OPEN-3
c1t0d2      CL1-E 63516  12403  0  0      3  11605 OPEN-9
c1t0d3      CL1-E 63516  12405  0  0     10  11605 OPEN-E
c1t0d4      CL1-E 63516  12800  0  0     11  11605 OPEN-8
c1t0d5      CL1-E 63516  12801  0  0     13  11605 OPEN-8*2
c1t0d6      CL1-E 63516  13057  0  0     21  11605 OPEN-L
c2t0d6      CL2-E 63516  13057  0  0     21  11605 OPEN-L
SL: Displays the SLPR number of the LDEV.
CL: Displays the CLPR number of the LDEV.
+SI/SI: Shows the total Bitmap pages for HOMRCF. The increment shows the Bitmap pages needed for one HOMRCF paired volume.
+TC/UR: Shows the total Bitmap pages for HORC or UR. The increment shows the Bitmap pages needed for one HORC or UR volume.
UNUSED: Shows the unused Bitmap pages for each of HOMRCF and HORC/UR. One page is about 64 KB.
Restriction:
STDIN or special files are specified as follows (lines starting with '#' via STDIN are interpreted as comments):
HP-UX: /dev/rdsk/* or /dev/rdisk/disk*
Solaris: /dev/rdsk/*s2 or c*s2
Linux: /dev/sd... or /dev/rd..., /dev/raw/raw*
zLinux: /dev/sd... or /dev/dasd… or /dev/rd..., /dev/raw/raw*
AIX: /dev/rhdisk* or /dev/hdisk* or hdisk*
DIGITAL or Tru64: /dev/rrz*c or /dev/rdisk/dsk*c or /dev/cport/scp*
DYNIX: /dev/rdsk/sd* or sd* (unpartitioned raw devices only)
IRIX64: /dev/rdsk/*vol or /dev/rdsk/node_wwn/*vol/* or /dev/dsk/*vol or /dev/dsk/node_wwn/*vol/*
OpenVMS: $1$* or DK* or DG* or GK*
Windows NT: hdX-Y, $LETALL, $Phys, D:\DskX\pY, \DskX\pY
Windows 2008/2003/2000: hdX-Y, $LETALL, $Volume, $Phys, D:\Vol(Dms,Dmt,Dmr)X\DskY, \Vol(Dms,Dmt,Dmr)X\DskY
HP-UX System:
# ioscan -fun | grep rdsk | ./inqraid
/dev/rdsk/c0t2d1 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP      ] [OPEN-3     ]
          HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdsk/c0t4d0 -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP      ] [OPEN-3-CM  ]
          RAID5[Group 2- 1] SSID = 0x0008

Linux and zLinux System:
# ls /dev/sd* | ./inqraid
/dev/sdh -> CHNO = 0 TID = 1 LUN = 7
          [HP] CL2-B Ser = 30053 LDEV = 23 [HP      ] [OPEN-3     ]
          HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 1- 2] SSID = 0x0004 CTGID = 2
/dev/sdi -> CHNO = 0 TID = 4 LUN = 0
          [HP] CL2-B Ser = 30053 LDEV = 14 [HP      ] [OPEN-3-CM  ]
          RAID5[Group 1- 2] SSID = 0x0004

Solaris System:
# ls /dev/rdsk/* | ./inqraid
/dev/rdsk/c0t2d1 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP      ] [OPEN-3     ]
          CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdsk/c0t4d0 -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP      ] [OPEN-3-CM  ]
          RAID5[Group 2- 1] SSID = 0x0008

AIX System:
# lsdev -C -c disk | grep hdisk | ./inqraid
hdisk1 -> [SQ] CL2-D Ser = 30053 LDEV = 9 [HITACHI ] [OPEN-3     ]
          HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
hdisk2 -> [SQ] CL2-D Ser = 30053 LDEV = 14 [HITACHI ] [OPEN-3-CM  ]
          RAID5[Group 2- 1] SSID = 0x0008

Windows System:
C:\HORCM\etc> echo hd1-2 | inqraid  (or inqraid hd1-2)
Harddisk 1 -> [SQ] CL2-D Ser = 30053 LDEV = 9 [HITACHI ] [OPEN-3     ]
          HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
Harddisk 2 -> [SQ] CL2-D Ser = 30053 LDEV = 14 [HITACHI ] [OPEN-3-CM  ]
          RAID5[Group 2- 1] SSID = 0x0008

Tru64 UNIX System:
# ls /dev/rdisk/dsk* | ./inqraid
/dev/rdisk/dsk10c -> [SQ] CL2-D Ser = 30053 LDEV = 9 [HITACHI ] [OPEN-3     ]
          HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdisk/dsk11c -> [SQ] CL2-D Ser = 30053 LDEV = 14 [HITACHI ] [OPEN-3-CM  ]
          RAID5[Group 2- 1] SSID = 0x0008

DYNIX® System:
# dumpconf -d | grep sd | ./inqraid
sd1 -> [SQ] CL2-D Ser = 30053 LDEV = 9 [HITACHI ] [OPEN-3     ]
          HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
sd2 -> [SQ] CL2-D Ser = 30053 LDEV = 14 [HITACHI ] [OPEN-3-CM  ]
          RAID5[Group 2- 1] SSID = 0x0008
Figure 4.60 Inqraid Command Tool Examples (continues on the next page)
IRIX System with FC_AL:
# ls /dev/rdsk/*vol | ./inqraid
/dev/rdsk/dks1d6vol -> [SQ] CL2-D Ser = 30053 LDEV = 9 [HITACHI ] [OPEN-3      ]
       HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
       RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdsk/dks1d7vol -> [SQ] CL2-D Ser = 30053 LDEV = 14 [HITACHI ] [OPEN-3-CM   ]
       RAID5[Group 2- 1] SSID = 0x0008
IRIX System with Fabric:
# ls /dev/rdsk/*/*vol/* | ./inqraid
/dev/rdsk/50060e8000100262/lun3vol/c8p0 -> [SQ] CL2-D Ser = 30053 LDEV = 9 [HITACHI] [OPEN-3]
       HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
       RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdsk/50060e8000100262/lun4vol/c8p0 -> [SQ] CL2-D Ser = 30053 LDEV = 14 [HITACHI] [OPEN-3-CM]
       RAID5[Group 2- 1] SSID = 0x0008
OpenVMS System:
$ inqraid dka145-146
DKA145 -> [ST] CL2-D Ser = 30053 LDEV = 9 [HITACHI ] [OPEN-3      ]
       HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
       RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
DKA146 -> [ST] CL2-D Ser = 30053 LDEV = 14 [HITACHI ] [OPEN-3-CM   ]
       RAID5[Group 2- 1] SSID = 0x0008
The following items are output by the inqraid command tool:
CLX-Y: Displays the port number on the RAID storage system.
Ser: Displays the production (serial) number of the RAID storage system.
LDEV: Displays the LDEV# within the RAID storage system.
HORC: Displays the attribute (P-VOL/S-VOL/SMPL) of the volume as a TrueCopy volume in the RAID storage system.
HOMRCF: Displays the attribute (P-VOL/S-VOL/SMPL) of the volume as MU#0-2 of ShadowImage/Snapshot in the RAID storage system.
Group: Displays the physical position of the LDEV according to the LDEV mapping in the RAID storage system:
LDEV Mapping        Display Formats
RAID Group          RAID1[Group Group number - Sub number]
                    RAID5[Group Group number - Sub number]
                    RAID6[Group Group number - Sub number]
SnapShot SVOL       SNAPS[PoolID poolID number]
Unmapped            UNMAP[Group 00000]
External LUN        E-LUN[Group External Group number]
HDP (AOU) volume    A-LUN[PoolID poolID number]
SSID: Displays the Sub System ID of the LDEV in the RAID storage system.
CTGID: Displays the CT group ID of TrueCopy Async/UR when the LDEV has been defined as the P-VOL or S-VOL of a TrueCopy Async/UR pair.
CHNO: Displays the channel number of the device adapter recognized on the Linux host. Displayed only for Linux systems.
TID: Displays the target ID of the hard disk connected on the device adapter port. Displayed only for Linux systems.
LUN: Displays the logical unit number of the hard disk connected on the device adapter port. Displayed only for Linux systems.
Note: The display of Group, SSID, and CTGID depends on the storage system microcode level. The CHNO, TID, and LUN items are displayed only for Linux systems.
# ls /dev/sd* | inqraid -find
/dev/sdb -> No such on the group
Group   PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, Seq#,P-LDEV# M
oradb   oradev2(L)   (CL2-N , 3,  2) 8071   22..SMPL ---- ------,----- ----  -
->/dev/sdc
Figure 4.61 Inqraid: Example of -find Option (Linux example shown)
# echo /dev/rdsk/c23t0d0 /dev/rdsk/c23t2d3 | ./inqraid -find
Group   PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status,  Seq#,P-LDEV# M
horc1   dev00(L)     (CL2-J , 0,  0-0)61456  192..S-VOL SSUS,-----   193  -
->/dev/rdsk/c23t0d0
Group   PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status,  Seq#,P-LDEV# M
horc1   dev10(L)     (CL2-J , 2,  3-0)61456  209..S-VOL SSUS,-----   206  -
->/dev/rdsk/c23t2d3
Figure 4.62 Inqraid: Example of -find Option (HP-UX example shown)
# echo /dev/rdsk/c23t0d0 /dev/rdsk/c23t2d3 | ./inqraid -findc
DEVICE_FILE   M  Group   PairVol   P/S    Stat  R_DEVICE   M  P/S    Stat  LK
c23t0d0       0  horc1   dev00     S-VOL  SSUS  c23t0d1    0  P-VOL  PSUS  OK
/dev/rdsk/c23t0d0[1] -> No such on the group
/dev/rdsk/c23t0d0[2] -> No such on the group
DEVICE_FILE   M  Group   PairVol   P/S    Stat  R_DEVICE   M  P/S    Stat  LK
c23t2d3       0  horc1   dev10     S-VOL  SSUS  c23t2d2    0  P-VOL  PSUS  OK
/dev/rdsk/c23t2d3[1] -> No such on the group
/dev/rdsk/c23t2d3[2] -> No such on the group
# echo /dev/rdsk/c23t0d0 /dev/rdsk/c23t2d3 | ./inqraid -findc -CLI
DEVICE_FILE   M  Group   PairVol   P/S    Stat  R_DEVICE   M  P/S    Stat  LK
c23t0d0       0  horc1   dev00     S-VOL  SSUS  c23t0d1    0  P-VOL  PSUS  OK
c23t2d3       0  horc1   dev10     S-VOL  SSUS  c23t2d2    0  P-VOL  PSUS  OK
Figure 4.63 Inqraid: Example of -findc Option (HP-UX example shown)
DEVICE_FILE: Device file name.
M: MU# of the local and remote volumes.
Group: Group name (dev_group) defined in the configuration file.
PairVol: Paired volume name (dev_name) within the group, as defined in the configuration file.
P/S: Volume attribute (P-VOL, S-VOL, or simplex).
Stat: Status of the paired volume.
R_DEVICE: Device file name at the remote site.
LK: Check result of the paired-volume connection path.
# ls /dev/sd* | ./inqraid -CLI
DEVICE_FILE   PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
sdh           CL2-B    30053    23    2  S/P/ss  0004  5:02-01   OPEN-3
sdi           CL1-A    64015    14    -  -       0004  E:00002   OPEN-3-CM
sdj           -            -     -    -  -       -     -         -
Figure 4.64 Inqraid: Example of -CLI Option (Linux example shown)
DEVICE_FILE: Displays the device file name only.
PORT: Displays the RAID storage system port number.
SERIAL: Displays the production (serial) number of the storage system.
LDEV: Displays the LDEV# within the storage system.
CTG: Displays the CT group ID of TrueCopy Async/UR when the LDEV has been defined as a TrueCopy Async/UR P-VOL or S-VOL.
H/M/12: Displays the attributes (P-VOL is "P", S-VOL is "S", SMPL is "s") of the volume as a TrueCopy volume, a ShadowImage/Snapshot volume, and ShadowImage/Snapshot MU#1,2 volumes.
SSID: Displays the Sub System ID of the LDEV in the storage system.
R:Group: Displays the physical position of the LDEV according to the LDEV mapping in the storage system:
LDEV Mapping        R:                              Group
RAID Group          RAID level                      RAID Group number - Sub number
                    (1 -> RAID1, 5 -> RAID5,
                     6 -> RAID6)
SnapShot SVOL       S                               Pool ID number
Unmapped            U                               00000
External LUN        E                               External Group number
HDP (AOU) volume    A                               Pool ID number
PRODUCT_ID: Displays the product-id field in the STD inquiry page.
Note: In the case of a command device, PORT/SERIAL/LDEV/PRODUCT_ID show the SCSI Inquiry information for the external command device if the command device is mapped as an E-LUN (R: = E).
# echo /dev/rdsk/c23t0d0 /dev/rdsk/c23t0d1 | ./inqraid -CLIWP
DEVICE_FILE  PWWN              AL  PORT   LUN  SERIAL  LDEV  PRODUCT_ID
c23t0d0      500060e802f01018  -   CL2-J   -    61456   192  OPEN-3
c23t0d1      500060e802f01018  -   CL2-J   -    61456   193  OPEN-3
# echo /dev/rdsk/c0t2d3 | ./inqraid -CLIWN
DEVICE_FILE  NWWN              AL  PORT   LUN  SERIAL  LDEV  PRODUCT_ID
c0t2d3       5000E000E0005000  -   CL1-A   -    30015  2054  OPEN3-CVS
Figure 4.65 Inqraid: Example of -CLIWP and -CLIWN Options (HP-UX example shown)
DEVICE_FILE: Displays the device file name only.
WWN: The CLIWP option displays the Port_WWN of the host adapter included in the STD inquiry page. The CLIWN option displays the Node_WWN of the host adapter included in the STD inquiry page.
AL: Always displayed as "-".
PORT: Displays the RAID storage system port number.
LUN: Always displayed as "-".
SERIAL: Displays the production (serial) number of the storage system.
LDEV: Displays the LDEV# within the storage system.
PRODUCT_ID: Displays the product-id field in the STD inquiry page.
# ioscan -fun | grep rdsk | ./inqraid -sort -CM -CLI
HORCM_CMD
#dev_name
dev_name
dev_name
#UnitID 0 (Serial# 30012)
/dev/rdsk/c0t3d0
/dev/rdsk/c1t2d1
#UnitID 1 (Serial# 30013)
/dev/rdsk/c2t3d0
Figure 4.66 Inqraid: Example of -sort[CM] Option (HP-UX example shown)
D:\HORCM\etc> inqraid $Phys -gvinf -CLI
# Harddisk0 -> [VOL61459_448_DA7C0D91] [OPEN-3      ]
# Harddisk1 -> [VOL61459_449_DA7C0D92] [OPEN-3      ]
# Harddisk2 -> [VOL61459_450_DA7C0D93] [OPEN-3      ]
(In each bracketed name, the fields are the S/N, the LDEV, and the Signature.)
Figure 4.67 Inqraid: Example of -gvinf Option
D:\HORCM\etc>pairdisplay -l -fd -g URA
Group PairVol(L/R)  Device_File   M ,Seq#,LDEV#.P/S,Status,  Seq#,P-LDEV# M
URA   URA_000(L)    Harddisk3     0 61459  451..S-VOL SSUS,-----   448  -
URA   URA_001(L)    Harddisk4     0 61459  452..S-VOL SSUS,-----   449  -
URA   URA_002(L)    Harddisk5     0 61459  453..S-VOL SSUS,-----   450  -

D:\HORCM\etc>pairdisplay -l -fd -g URA | inqraid -svinf=Harddisk
[VOL61459_451_5296A763] -> Harddisk3 [OPEN-3      ]
[VOL61459_452_5296A760] -> Harddisk4 [OPEN-3      ]
[VOL61459_453_5296A761] -> Harddisk5 [OPEN-3      ]
Caution: If the S-VOL is created with the "Noread" option (ShadowImage only) and the system is rebooted, the system cannot create a device object (\Device\HarddiskVolume#) and Volume{guid} for the S-VOL; however, a device object (\Device\HarddiskVolume#) and Volume{guid} will be created by using the -svinf option after the S-VOL is split.
Figure 4.68 Inqraid: Example of -svinf[=PTN] Option
4.14.2 Mkconf Command Tool
The mkconf command tool is used to make a configuration file from a list of special files (raw device files) provided via STDIN. Execute the following steps to make a configuration file (a condensed sketch follows this list):
1. Make a configuration file containing only HORCM_CMD by executing "inqraid -sort -CM -CLI".
2. Start a HORCM instance without descriptions for HORCM_DEV and HORCM_INST, so that the raidscan command can be executed in the next step.
3. Make a configuration file including HORCM_DEV and HORCM_INST by executing "raidscan -find conf" against the special files (raw device files) provided via STDIN.
4. Start a HORCM instance with the configuration file including HORCM_DEV and HORCM_INST to verify the configuration file.
5. Execute "raidscan -find verify" to verify the correspondence between the device files and the configuration file.
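For orientation only, the five steps can be sketched as the following hedged shell sequence. mkconf.sh automates all of this; instance number 9 and the Solaris-style device list are placeholders, and it is assumed that HORCMINST=9 is exported and that HORCM reads /etc/horcm9.conf:

ls /dev/rdsk/* | inqraid -sort -CM -CLI > /etc/horcm9.conf   # step 1: HORCM_CMD only
horcmstart.sh 9                                              # step 2: start without HORCM_DEV/HORCM_INST
ls /dev/rdsk/* | raidscan -find conf 0 >> /etc/horcm9.conf   # step 3: append HORCM_DEV/HORCM_INST (MU#0)
horcmshutdown.sh 9 && horcmstart.sh 9                        # step 4: restart with the full file
ls /dev/rdsk/* | raidscan -find verify                       # step 5: verify device file vs. configuration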
Figure 4.69 shows an example of the mkconf command. The configuration file is created as "horcm*.conf" in the current directory. The HORCM log directory is specified as "log*" in the current directory. The user must modify the "ip_address & service" entries of the resulting configuration file as needed.
Table 4.47 Mkconf Command Parameters

Parameter       Value
Command Name    /HORCM/usr/bin/mkconf.sh (UNIX systems)
                \HORCM\Tool\mkconf.exe (Windows and OpenVMS systems)
Format          mkconf.sh  [ -g[g] <group> [-m <mu#>] [-i <inst#>] [-s <service>] [-a] ]
                mkconf.exe [ -g[g] <group> [-m <mu#>] [-i <inst#>] [-s <service>] [-a] [-c <drive#>] ]
Options         No argument: Displays Help/Usage.
                -g <group>: Specifies the "dev_group" name for the configuration file. If not specified, 'VG' is used as the default.
                -gg (9900V and later): Shows a LUN in the host view by finding a host group.
                -m <mu#>: Specifies the mirror descriptor for a ShadowImage volume. Do not specify a mirror descriptor for TrueCopy volumes.
                -i <inst#>: Specifies the instance number for HORCM.
                -s <service>: Specifies the service name for the configuration file. If not specified, '52323' is used as the default.
                -a: Specifies addition of the group to an existing configuration file.
                -c <drive#> (Windows only): Specifies the range of drive numbers to examine in order to discover the command devices. If not specified, '$PhysicalDrive' is used as the default.
                -c <DKA#-#> (OpenVMS only): Specifies the range of drive numbers to examine in order to discover the command devices. If not specified, '$1$DGA0-10000 DKA0-10000 DGA0-10000' is used as the default.
# cd /tmp/test
# cat /etc/horcmperm.conf | /HORCM/usr/bin/mkconf.sh -g ORA -i 9 -m 0
starting HORCM inst 9
HORCM inst 9 starts successfully.
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
starting HORCM inst 9
HORCM inst 9 starts successfully.
DEVICE_FILE          Group   PairVol   PORT   TARG  LUN  M  SERIAL  LDEV
/dev/rdsk/c23t0d0    ORA     ORA_000   CL2-J     0    0  0   61456   192
/dev/rdsk/c23t0d1    ORA     ORA_001   CL2-J     0    1  0   61456   193
/dev/rdsk/c23t0d2    ORA     ORA_002   CL2-J     0    2  0   61456   194
/dev/rdsk/c23t0d3    ORA     ORA_003   CL2-J     0    3  0   61456   195
/dev/rdsk/c23t0d4    ORA     ORA_004   CL2-J     0    4  0   61456   256
/dev/rdsk/c23t0d5    ORA     ORA_005   CL2-J     0    5  0   61456   257
/dev/rdsk/c23t0d6    ORA     ORA_006   CL2-J     0    6  0   61456   258
/dev/rdsk/c23t0d7    -       -         -         -    -  0   61456   259
HORCM Shutdown inst 9 !!!
Please check '/tmp/test/horcm9.conf','/tmp/test/log9/curlog/horcm_*.log', and modify
'ip_address & service'.
# ls                    <- Verify configuration and log files.
horcm9.conf log9
# vi *.conf             <- Verify the config file; check ip_address & service.
# Created by mkconf.sh on Mon Jan 22 17:59:11 JST 2001

HORCM_MON
#ip_address        service      poll(10ms)     timeout(10ms)
127.0.0.1          52323        1000           3000

HORCM_CMD
#dev_name          dev_name     dev_name
#UnitID 0 (Serial# 61456)
/dev/rdsk/c23t3d0

HORCM_DEV
#dev_group         dev_name     port#     TargetID   LU#   MU#
# /dev/rdsk/c23t0d0 SER = 61456 LDEV = 192 [ FIBRE FCTBL = 4 ]
ORA                ORA_000      CL2-J     0          0     0
# /dev/rdsk/c23t0d1 SER = 61456 LDEV = 193 [ FIBRE FCTBL = 4 ]
ORA                ORA_001      CL2-J     0          1     0
# /dev/rdsk/c23t0d2 SER = 61456 LDEV = 194 [ FIBRE FCTBL = 4 ]
ORA                ORA_002      CL2-J     0          2     0
# /dev/rdsk/c23t0d3 SER = 61456 LDEV = 195 [ FIBRE FCTBL = 4 ]
ORA                ORA_003      CL2-J     0          3     0
# /dev/rdsk/c23t0d4 SER = 61456 LDEV = 256 [ FIBRE FCTBL = 4 ]
ORA                ORA_004      CL2-J     0          4     0
# /dev/rdsk/c23t0d5 SER = 61456 LDEV = 257 [ FIBRE FCTBL = 4 ]
ORA                ORA_005      CL2-J     0          5     0
# /dev/rdsk/c23t0d6 SER = 61456 LDEV = 258 [ FIBRE FCTBL = 4 ]
ORA                ORA_006      CL2-J     0          6     0
# ERROR [CMDDEV] /dev/rdsk/c23t0d7 SER = 61456 LDEV = 259 [ OPEN-3-CM ]   <- See Notes below.

HORCM_INST
#dev_group         ip_address   service
ORA                127.0.0.1    52323     <- Check and update as needed.
Figure 4.69 Mkconf Command Tool Example (HP-UX example shown)
Notes on mkconf:
- A unitID is assigned in Serial# order. If two or more command devices exist in the storage system, this option selects the multiple device files linked to one command device (an LDEV).
- If the target device is the command device, the target device is suppressed as a comment, as shown below:
  # ERROR [CMDDEV] /dev/rdsk/c23t0d7 SER = 61456 LDEV = 259 [ OPEN-3-CM ]
- If the target device shares an LDEV among multiple device files and the LDEV has already been displayed for another target device, the target device is suppressed as a comment, as shown below:
  # ERROR [LDEV LINK] /dev/rdsk/c24t0d3 SER = 61456 LDEV = 195 [FIBRE FCTBL = 4]
- If the target device does not have a valid MU#, the target device is suppressed as a comment, as shown below:
  # ERROR [INVALID MUN (2 < 1)] /dev/rdsk/c24t0d3 SER = 61456 LDEV = 195 [ OPEN-3 ]
- If the target device mixes different RAID types, the target device is suppressed as a comment, as shown below:
  # ERROR [MIXING RAID TYPE] /dev/rdsk/c24t0d3 SER = 61456 LDEV = 195 [ OPEN-3 ]
4.15 Synchronous Waiting Command (Pairsyncwait) for Hitachi TrueCopy Async/UR
Robust systems need to confirm data consistency between the Hitachi TrueCopy Async/UR P-VOL and S-VOL. In database operations (e.g., Oracle), this CCI-unique API command can confirm that the commit() of a DB transaction has reached the remote site. The pairsyncwait command is used to confirm that the required writes have been stored in the DFW area of the RCU; it can confirm whether or not the last write issued just before this command has reached the RCU DFW area.
When a client issues the pairsyncwait command, the command is placed on a queue buffer in the HORCM daemon as a command request. HORCM gets the latest sequence # from the MCU sidefile and the sequence # whose block has been transferred and stored in the RCU DFW area with data consistency, and compares the latest sequence # of the MCU sidefile with the sequence # of the RCU DFW area within the specified term. HORCM returns a code to this command when the write in the MCU sidefile has been stored in the RCU DFW area. Using this function, a client can confirm that a commit() has reached the remote site, and a backup utility at the remote site can split the cascading ShadowImage volumes (TrueCopy Async/UR -> ShadowImage) without splitting the TrueCopy Async/UR pair.
[Figure: an HA software package runs Process-A (write(1), write(2), write(4), and pairsyncwait) and Process-B (write(3)); the note in the figure states that write() represents synchronous writing or a DB commit(). While the volumes are in PAIR status within a CT group, writes enter the MCU sidefile as a FIFO sequence and are transferred asynchronously to the RCU DFW area, each side tracking its own sequence #. In PSUS/PSUE status, bitmaps on the primary and secondary volumes record changes for resynchronization between the two Hitachi RAID storage systems.]
Figure 4.70 Synchronization for Hitachi TrueCopy Async/UR
Table 4.48 lists and describes the pairsyncwait command and its parameters. The pairsyncwait command is used to confirm that the required writes have been stored in the DFW area of the RCU, and it can confirm whether or not the last write issued just before this command has reached the RCU DFW area. This command gets the latest sequence # of the MCU sidefile (the P-VOL's latest sequence # within the CT group ID) and the sequence # of the RCU DFW within the CT group ID corresponding to the <group> or <raw_device> specified to pairsyncwait, and compares the MCU and RCU sequence #s at that time and at regular intervals. If the RCU sequence # exceeds the MCU sequence # within the term specified to pairsyncwait, this command reports return code 0, meaning completion of synchronization. The -nowait option shows the latest sequence # (Q-marker) of the MCU P-VOL and the CTGID. The marker is shown as ten hexadecimal characters.
Table 4.48 Pairsyncwait Command Parameters

Parameter       Value
Command Name    pairsyncwait
Format          pairsyncwait { -h | -q | -z | -g <group> | -d <pair Vol> | -d[g] <raw_device> [MU#] |
                -d[g] <seq#> <LDEV#> [MU#] | -m <marker> | -t <timeout> | -nowait | -nomsg | -fq }
Options         -h: Displays Help/Usage and version information.
                -q: Terminates the interactive mode and exits the command.
                -z or -zx (OpenVMS cannot use the -zx option): Makes the pairsyncwait command enter the interactive mode. The -zx option monitors HORCM while in the interactive mode; when it detects a HORCM shutdown, the interactive mode terminates.
                -I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used to specify the HORCM instance#.
                -g <group>: Specifies a group name defined in the configuration definition file. The command is executed for the specified group unless the -d <pair Vol> option is specified.
                -d <pair Vol>: Specifies a paired logical volume name defined in the configuration definition file. When this option is specified, the command is executed for the specified paired logical volume.
                -d[g] <raw_device> [MU#]: Searches the configuration definition file (local instance) for a group containing the specified raw_device. If the raw_device is found, the command is executed on the paired logical volume (-d) or group (-dg). This option is effective without the "-g <group>" option. If the specified raw_device is contained in two or more groups, the command is executed on the first group found.
                -d[g] <seq#> <LDEV#> [MU#]: Searches the configuration definition file (local instance) for a group containing the specified LDEV. If the LDEV is found, the command is executed on the paired logical volume (-d) or group (-dg). This option is effective without the "-g <group>" option. If the specified LDEV is contained in two or more groups, the command is executed on the first group found. The <seq#> and <LDEV#> values can be specified in hexadecimal (by adding "0x") or decimal.
                -m <marker>: Specifies the sequence # of the MCU P-VOL, called the Q-marker. If the application obtained a Q-marker as the result of a pairsyncwait execution that timed out or used "-nowait", the application can reconfirm the completion of the asynchronous transfer by calling pairsyncwait with that Q-marker. If the application does not specify a Q-marker, CCI uses the latest sequence # at the time CCI receives the pairsyncwait command. It is also possible to wait for completion from the S-VOL side with this option.
                Q-Marker format: iissssssss, where ii = incarnation # of the pair volume and ssssssss = P-VOL serial #.
                -t <timeout>: Specifies the timeout value to wait for completion at the RCU DFW area, in units of 100 ms. The MCU gets the latest sequence # from the RCU at regular intervals.
                -nowait: Gets the latest sequence # of the MCU P-VOL and the CTGID without waiting. When this option is specified, the latest sequence # of the MCU P-VOL is reported immediately, and the -t <timeout> option is ignored.
                -nomsg: Suppresses messages when this command is executed from a user program. This option must be specified at the beginning of the command arguments.
                -fq: Displays the number of remaining Q-Markers within the CT group by adding a "QM-Cnt" column at the end. "QM-Cnt" is shown as follows:
                - When "-nowait -fq" is specified: QM-Cnt shows the number of Q-Markers currently remaining within the CT group.
                - When "-nowait -m <marker> -fq" is specified: QM-Cnt shows the number of Q-Markers remaining from the specified <marker> within the CT group.
                - When "TIMEOUT" occurs without "-nowait": QM-Cnt shows the number of Q-Markers remaining at the timeout within the CT group.
                QM-Cnt is shown as "-" if the Q-Marker status is invalid (i.e., the status is "BROKEN" or "CHANGED").
Example:
# pairsyncwait -g oradb -nowait -fq
UnitID  CTGID  Q-Marker    Status   Q-Num  QM-Cnt
     0      3  01003408ef  NOWAIT       2     120
# pairsyncwait -g oradb -nowait -m 01003408e0 -fq
UnitID  CTGID  Q-Marker    Status   Q-Num  QM-Cnt
     0      3  01003408e0  NOWAIT       2     105
# pairsyncwait -g oradb -t 50 -fq
UnitID  CTGID  Q-Marker    Status   Q-Num  QM-Cnt
     0      3  01003408ef  TIMEOUT      2       5
Returned values
When the -nowait option is specified:
  Normal termination: 0. The status is NOWAIT.
  Abnormal termination: other than 0 to 127; refer to the execution logs for error details.
When the -nowait option is not specified:
  Normal termination: 0: The status is DONE (completion of synchronization).
                      1: The status is TIMEOUT (timeout).
                      2: The status is BROKEN (the Q-marker synchronization process was rejected).
                      3: The status is CHANGED (the Q-marker is invalid due to resynchronization).
  Abnormal termination: other than 0 to 127; refer to the execution logs for error details.
Restriction
The specified <group> volumes must be P-VOLs in PAIR status; other cases return an error (EX_INVVOL). It is possible to issue pairsyncwait from the S-VOL side with -m <marker>.
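For illustration, a minimal shell sketch (not from this guide) that waits for a commit to reach the RCU and branches on the return codes above might look like this; the group name oradb and the 30-second timeout are placeholders:

#!/bin/sh
# Wait up to 30 s (-t is in units of 100 ms) for writes issued before
# this point to reach the RCU DFW area for group oradb.
pairsyncwait -g oradb -t 300
case $? in
  0) echo "DONE: synchronization completed" ;;
  1) echo "TIMEOUT: transfer not yet complete" ;;
  2) echo "BROKEN: Q-marker synchronization process rejected" ;;
  3) echo "CHANGED: pair was resynchronized; Q-marker invalid" ;;
  *) echo "abnormal termination: see the CCI execution logs" ;;
esac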
Table 4.49 Specific Error Code for Pairsyncwait

Category        Error Code   Error Message           Recommended Action                             Value
Volume status   EX_INVVOL    Invalid volume status   Confirm the pair status using pairdisplay -l.  222
(Unrecoverable)
Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the command. If the command fails, the detailed status is logged in the CCI command log, even if the command was executed with the -nomsg option.
The output of the pairsyncwait command is:
- UnitID: Unit ID, in the case of multiple storage system connections.
- CTGID: CT group ID within the Unit ID.
- Q-Marker: The latest sequence # of the MCU P-VOL (marker) at the time the command is received.
- Status: The status after execution of the command.
- Q-Num: The number of processes queued waiting for synchronization within the CT group ID.
- QM-Cnt: The number of remaining Q-Markers within the CT group of the Unit. TrueCopy Async/UR sends a token called the "dummy recordset" at regular intervals, so QM-Cnt always shows "2" or "3" even if the host performs no writes.
The following arithmetic expression determines the remaining data in a CT group:
  Remaining data in CT group = Sidefile capacity * Sidefile percentage / 100
The sidefile percentage is the rate shown in the "%" column (in the PAIR state) by the pairdisplay command. The sidefile capacity is the capacity set aside for the sidefile, within 30% to 70% of cache.
The following arithmetic expression determines the average data per Q-Marker in a CT group:
  Data per Q-Marker = Remaining data in CT group / QM-Cnt
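As a worked illustration with invented numbers: if the sidefile capacity is 10 GB and pairdisplay shows 30% in the "%" column, the remaining data in the CT group is 10 GB * 30 / 100 = 3 GB; if QM-Cnt is 120, the average data per Q-Marker is about 3 GB / 120, or roughly 25.6 MB.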
# pairsyncwait -g oradb -nowait          <- -nowait is specified.
UnitID  CTGID  Q-Marker    Status   Q-Num
     0      3  01003408ef  NOWAIT       2
# pairsyncwait -g oradb -t 100           <- -nowait is not specified.
UnitID  CTGID  Q-Marker    Status   Q-Num
     0      3  01003408ef  DONE         2
# pairsyncwait -g oradb -t 1
UnitID  CTGID  Q-Marker    Status   Q-Num
     0      3  01003408ef  TIMEOUT      3
# pairsyncwait -g oradb -t 100 -m 01003408ef
UnitID  CTGID  Q-Marker    Status   Q-Num
     0      3  01003408ef  DONE         0
# pairsyncwait -g oradb -t 100
UnitID  CTGID  Q-Marker    Status   Q-Num
     0      3  01003408ef  BROKEN       0
# pairsyncwait -g oradb -t 100 -m 01003408ef
UnitID  CTGID  Q-Marker    Status   Q-Num
     0      3  01003408ef  CHANGED      0   <- The Q-Marker (01003408ef) is invalid because the P-VOL
                                               was resynchronized while this command was executing.
Figure 4.71 Pairsyncwait Command Examples
4.16 Protection Facility
The Protection Facility permits operations only on volumes that the user can see on the host, preventing erroneous operations. CCI controls protected volumes based on the result of protection recognition: CCI recognizes only the volumes that the host shows. For this purpose, the current Hitachi SANtinel is used in the CCI environment.
It is not possible to turn the Protection Facility ON or OFF from CCI. The Protection Facility ON/OFF is controlled by the Remote Console/SVP or SNMP. The Protection Facility uses an enhanced command device that the user defines using the LUN Manager remote console software (or SNMP). When the user defines the command device, the Protection Facility is turned ON or OFF for each command device; the command device has an attribute to enable the Protection Facility. CCI distinguishes the attribute ON from OFF when CCI recognizes the command device.
Note: If the command device is set to enable protection mode, there is no impact on normal CCI operations; CCI controls pairs under the current specification.
[Figure: LUN Security limits the volumes each host (HOST1, HOST2) can see; of the volumes described in horcm.conf via a protection "On" command device, those visible to the host are permitted volumes, and the rest are protected volumes.]
Figure 4.72 Definition of the Protection Volume
4.16.1 Protection Facility Specification
Only permitted volumes can be registered in horcm.conf. When making the horcm.conf file, the user can describe only volumes from the view that the host shows. CCI manages the mirror descriptors (Hitachi TrueCopy, ShadowImage MU#0/1/2) as the unit of control. The Protection Facility imposes two requirements: the volume must be visible from the host (for example, via an Inquiry tool), and the mirror descriptor volume must be registered in horcm.conf (see Table 4.50).
Table 4.50 Registration for the Mirror Descriptor

Mirror Descriptor on Horcm.conf                          Permitted Volumes
TrueCopy    ShadowImage
            MU#0        MU#1        MU#2
E           E           none        none                 /dev/rdsk/c0t0d0
E           none        E           none                 Unknown
none        E           none        E                    Unknown

E = Mirror descriptor volume to be registered in horcm.conf.
Unknown: Volumes that the local host cannot recognize, even though they were registered in horcm.conf.
- CCI permits operations only after the "permission command" at startup of HORCM. The targets are the volumes registered in the horcm.conf file.
- The "permission command" is needed first to permit the protected volumes. The permission command compares an identification of the horcm.conf volumes against all volumes of the local host, and the result is registered within HORCM. HORCM builds tables of protected volumes and permitted volumes from horcm.conf and the Inquiry results; the Inquiry results are based on the configuration of the Hitachi Data Retention Utility. When the user controls pair volumes, a request to a protected volume is rejected with error code "EX_ENPERM" (see the sketch after this list).
- The Protection Facility is based on the host-side view resulting from Hitachi SANtinel. You need to configure SANtinel before CCI operation; CCI checks SANtinel by Inquiry from within CCI.
- The Protection Facility is supported for Lightning 9900 storage systems and later (not for the 7700E). For the Hitachi 7700E, you can protect the volumes by using Hitachi SANtinel.
- The Protection Facility can be enabled separately for each command device. If you want to use protection and non-protection modes in the same storage system at the same time, you can define two (or more) command devices: one with protection ON, one with protection OFF. Protection mode is enabled for the host that has Hitachi SANtinel and an ON command device.
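As a hedged sketch only (the group name oradb and the HP-UX discovery pipeline are placeholders, and the exact exit code of EX_ENPERM is not assumed here, only that the command fails), a script could react to a protection rejection by refreshing the permissions:

# If a pair operation is rejected for a non-permitted volume (EX_ENPERM),
# re-run the permission command and retry.
if ! paircreate -g oradb -vl -f never
then
    echo "pair operation failed; if the error was EX_ENPERM, refresh permissions:"
    ioscan -fun | grep rdsk | raidscan -find inst
fi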
4.16.2 Examples for Configuration and Protected Volumes
In the two-host configuration (Figure 4.73), Ora2 on HOST2 is not permitted to operate the paired volume, because Grp4 is Unknown on HOST2. In the one-host configuration (Figure 4.74), Ora2 is not permitted to operate the paired volume, because Grp2 and Grp4 are Unknown on HOST1. If HOST1 has a protection OFF command device, then Ora1 and Ora2 are permitted to operate the paired volume.
Note: The Protection Facility is implemented by CCI only. CCI needs to know the protection attribute of the command device to decide whether an operation on a paired volume should be permitted. If HORCM has a protection ON command device at that time, HORCM checks permission for the paired volume.
[Figure: HOST1's horcm.conf describes Ora1 (volumes for Grp1 and Grp3); HOST2's horcm.conf describes Ora2 (volumes for Grp2 and Grp4); LUN Security makes Grp1, Grp3, and Grp4 visible to HOST1 and only Grp2 visible to HOST2; CM = protection "On" command device.]
Figure 4.73 Example for the Two Host Configuration
[Figure: HOST1 runs two instances, horcm0.conf (Ora1: volumes for Grp1 and Grp3) and horcm1.conf (Ora2: volumes for Grp2 and Grp4); HOST2 runs horcm0.conf and horcm1.conf (Ora3: volumes for Grp2 and Grp4); LUN Security makes Grp1 and Grp3 visible to HOST1, and Grp2 and Grp4 visible to HOST2; CM = protection "On" command device.]
Figure 4.74 Example for the One Host Configuration
4.16.3 Target Commands for Protection
The following commands are controlled by the Protection Facility: horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairvolchk, pairevtwait, pairsyncwait, raidvchkset, and raidvchkdsp. Pairdisplay is not included. When a command is issued to non-permitted volumes, CCI rejects the request with error code "EX_ENPERM".
- The pairdisplay command shows all volumes, so you can identify the non-permitted volumes. Non-permitted volumes are shown without LDEV# information: the LDEV# information is "****" ("-" with -CLI), as shown below.
# pairdisplay -g oradb
Group PairVol(L/R) (Port#,TID,LU-M),Seq#, LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L)   (CL1-D , 3, 0-0) 35013  ****..---- ----,-----  ----  -
oradb oradev1(R)   (CL1-D , 3, 1-0) 35013  ****..---- ----,-----  ----  -
- The raidscan command shows all volumes, the same as the current specification, because it does not need HORCM_DEV and HORCM_INST in horcm.conf. If you want to know the permitted volumes at raidscan time, use raidscan -find. The -find option shows the device file name and storage system information using internal Inquiry results. You can use raidscan -find to make horcm.conf, because only permitted volumes are shown with the host-side view. Example for HP-UX systems:
# ioscan -fun | grep rdsk | raidscan -find
DEVICE_FILE        UID  S/F  PORT   TARG  LUN   SERIAL  LDEV  PRODUCT_ID
/dev/rdsk/c0t3d0     0   F   CL1-D     3    0    35013    17  OPEN-3
/dev/rdsk/c0t3d1     0   F   CL1-D     3    1    35013    18  OPEN-3
4.16.4 Permission Command
CCI recognizes permitted volumes as the result of the "permission command". The permission command is the -find inst option of raidscan. This option issues an Inquiry to each specified device file to get the Ser# and LDEV# from the RAID storage system, checks the identification of the horcm.conf volumes against all volumes of the local host, and then stores the result within HORCM of the instance. This permission command is started by /etc/horcmgr automatically.
The following example shows the relation between the device files and horcm.conf in the case of a manual operation on an HP-UX system. All of the volumes found by ioscan are permitted.
# ioscan -fun | grep rdsk | raidscan -find inst
DEVICE_FILE        Group   PairVol  PORT   TARG  LUN  M  SERIAL  LDEV
/dev/rdsk/c0t3d0   oradb   oradev1  CL1-D     3    0  -   35013    17
/dev/rdsk/c0t3d0   oradb   oradev1  CL1-D     3    0  0   35013    17
4.16.5 New Options for Security
(1) raidscan
-find inst. The -find inst option registers the device file name to all mirror descriptors of the LDEV map table for CCI and permits the matching volumes in horcm.conf in protection mode. It is started from /etc/horcmgr automatically, so the user does not normally need to use this option. This option issues an Inquiry to each device file given via STDIN, and CCI gets the Ser# and LDEV# from the RAID storage system. CCI then compares the Inquiry results with the contents of horcm.conf, stores the result within HORCM of the instance, and at the same time displays the resulting relation. This option also terminates by itself, to avoid wasteful scanning, once the registration based on horcm.conf has finished, because HORCM needs no further registration.
# ioscan -fun | grep rdsk | raidscan -find inst
DEVICE_FILE        Group   PairVol  PORT   TARG  LUN  M  SERIAL  LDEV
/dev/rdsk/c0t3d0   oradb   oradev1  CL1-D     3    0  -   35013    17
/dev/rdsk/c0t3d0   oradb   oradev1  CL1-D     3    0  0   35013    17
Note: When multiple device files share the same LDEV, the first device file is registered to the LDEV map table.
-find verify [MU#]. This option shows the relation between the groups in horcm.conf and the Device_File registered in the LDEV map tables, for each DEVICE_FILE given via STDIN.
# ioscan -fun | grep rdsk | raidscan -find verify -fd
DEVICE_FILE        Group   PairVol  Device_File  M  SERIAL  LDEV
/dev/rdsk/c0t3d0   oradb   oradev1  c0t3d0       0   35013    17
/dev/rdsk/c0t3d1   oradb   oradev2  Unknown      0   35013    18
/dev/rdsk/c0t3d2   -       -        -            0   35013    19
Note: A difference between DEVICE_FILE and Device_File indicates an LDEV shared among multiple device files. The user can also use this option with a command device in non-protection mode; it serves to show the relation between DEVICE_FILE and the groups in horcm.conf.
-f[d]. The -f[d] option shows the Device_File that was registered in the group of HORCM, based on the LDEV (as defined in the local instance configuration definition file).
# raidscan -p cl1-d -fd
Port# ,TargetID#,Lun#..Num(LDEV#....)...P/S, Status,Fence,LDEV#,Device_File
CL1-D ,  3, 0...1(17)............SMPL ---- ------ ----,c0t3d0
CL1-D ,  3, 1...1(18)............SMPL ---- ------ ----,c0t3d1
(2) pairdisplay
-f[d]. The -f[d] option shows the relation between the Device_File and the paired volumes (protected volumes and permitted volumes) based on the group, even though this option has no particular relation to protection mode.
# pairdisplay -g oradb -fd
Group PairVol(L/R) Device_File  M ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L)   c0t3d0       0 35013   17..P-VOL COPY, 35013    18  -
oradb oradev1(R)   c0t3d1       0 35013   18..S-VOL COPY, 35013    17  -
If either the local or the remote host (instance) has not shown the Device_File, then pair operations are rejected in protection mode (except local options such as "-l") because of the Unknown volume, as shown in the following example.
# pairdisplay -g oradb -fd
Group PairVol(L/R) Device_File  M ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb oradev1(L)   c0t3d0       0 35013   17..P-VOL COPY, 35013    18  -
oradb oradev1(R)   Unknown      0 35013 ****..---- ----, -----   ----  -
4.16.6 Permitting the Protected Volumes
Protection mode requires a recognition step that checks the accessible volumes against horcm.conf at the startup of HORCM in protection mode. The protected volumes must be registered to enable the Protection Facility at each startup of HORCM; this registration process is executed automatically by /etc/horcmgr (see (1) and (2) below).
(1) The following is executed to register the permitted volume file ($HORCMPERM file) if the $HORCMPERM file exists and there are permitted volumes. If the user wants to permit only specific volumes, the volume list must be described in the $HORCMPERM file.
Naming of the $HORCMPERM file:
- UNIX systems. $HORCMPERM is "/etc/horcmperm.conf" or "/etc/horcmperm*.conf" (* is the instance number) by default. Example for HP-UX systems:
cat $HORCMPERM | /HORCM/usr/bin/raidscan -find inst
# The following is an example that permits the LVM volume groups.
# For MU# 0
vg00 /dev/rdsk/c0t3d0 /dev/rdsk/c0t3d1
vg00 /dev/rdsk/c0t3d2 /dev/rdsk/c0t3d3
# For MU# 1
vg01 /dev/rdsk/c0t3d0 /dev/rdsk/c0t3d1
vg01 /dev/rdsk/c0t3d2 /dev/rdsk/c0t3d3
Verifying a group for vg01. The following examples verify whether the LVM volume group is mapped correctly to a group (MU#1 for ShadowImage) in horcm.conf:
# export HORCC_MRCF=1
# cat /etc/horcmperm.conf | grep vg01 | raidscan -find verify 1 -fd
    OR
# vgdisplay -v /dev/vg01|grep dsk|sed 's/\/*\/dsk\//\/rdsk\//g'|raidscan -find verify 1 -fd
DEVICE_FILE        Group   PairVol  Device_File  M  SERIAL  LDEV
/dev/rdsk/c0t3d0   oradb1  oradev1  c0t3d0       1   35013    17
/dev/rdsk/c0t3d1   oradb1  oradev2  c0t3d1       1   35013    18
/dev/rdsk/c0t3d2   oradb   oradev3  c0t3d2       1   35013    19   <- Mapping to another group on horcm.conf !!
/dev/rdsk/c0t3d3   -       -        -            1   35013    20   <- Unknown on horcm.conf !!
- Windows systems. $HORCMPERM is "\WINNT\horcmperm.conf" or "\WINNT\horcmperm*.conf" (* is the instance number) by default.
type $HORCMPERM | x:\HORCM\etc\raidscan.exe -find inst
# The following is an example that permits the DB volumes.
# Note: a numerical value is interpreted as Harddisk#.
# DB0 For MU# 0
Hd0-10
harddisk12 harddisk13 harddisk17
# DB1 For MU# 1
hd20-23
Verifying a group for DB1. The following is an example that verifies whether the DB volume group is mapped correctly to a group (MU#1 for ShadowImage) in horcm.conf:
D:\HORCM\etc> set HORCC_MRCF=1
D:\HORCM\etc> echo hd20-23 | raidscan -find verify 1 -fd
DEVICE_FILE   Group   PairVol  Device_File  M  SERIAL  LDEV
Harddisk20    oradb1  oradev1  Harddisk20   1   35013    17
Harddisk21    oradb1  oradev2  Harddisk21   1   35013    18
Harddisk22    oradb   oradev3  Harddisk22   1   35013    19   <- Mapping to another group on horcm.conf !!
Harddisk23    -       -        -            1   35013    20   <- Unknown on horcm.conf !!
(2) If no $HORCMPERM file exists, the following is executed to permit all volumes on the host:
For HP-UX: ioscan -fun | grep -e rdisk -e rdsk | /HORCM/usr/bin/raidscan -find inst
For Linux: ls /dev/sd* | /HORCM/usr/bin/raidscan -find inst
For zLinux: ls /dev/sd* /dev/dasd* | /HORCM/usr/bin/raidscan -find inst
For Solaris: ls /dev/rdsk/* | /HORCM/usr/bin/raidscan -find inst
For AIX: lsdev -C -c disk | grep hdisk | /HORCM/usr/bin/raidscan -find inst
For Tru64 UNIX: ls /dev/rdisk/dsk* | /HORCM/usr/bin/raidscan -find inst
For Digital UNIX: ls /dev/rrz* | /HORCM/usr/bin/raidscan -find inst
For DYNIX/ptx: /etc/dumpconf -d | grep sd | /HORCM/usr/bin/raidscan -find inst
For IRIX64: ls /dev/rdsk/*vol /dev/rdsk/*/*vol/* | /HORCM/usr/bin/raidscan -find inst
For OpenVMS: /HORCM/usr/bin/raidscan -pi '$1$DGA0-10000 DKA0-10000 DGA0-10000' -find inst
For Windows: x:\HORCM\etc\raidscan.exe -pi $PhysicalDrive -find inst
Note: This registration process carries some risk, because it is executed automatically by /etc/horcmgr, without checking for protection mode, in order to validate the -fd option. The registration slows down horcmstart.sh (the HORCM daemon itself runs as usual); the impact depends on how many devices the host has. In non-protection mode, if the user wants HORCM to start faster, either create a $HORCMPERM file of size 0 bytes as a dummy file or set HORCMPERM=MGRNOINST. In that case, the -fd option shows the Device_File name as Unknown, and the user can later run raidscan -find inst to validate the -fd option.
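As a hedged sketch of that faster-startup workaround (non-protection mode only; instance 0 and the HP-UX pipeline are placeholders):

: > /etc/horcmperm.conf      # 0-byte dummy $HORCMPERM file skips auto-registration
horcmstart.sh 0              # HORCM starts without scanning all devices
# -fd now shows Device_File as Unknown; validate it on demand later:
ioscan -fun | grep rdsk | raidscan -find inst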
4.16.7 Environmental Variables
$HORCMPROMOD. This environment variable turns protection mode ON, as specified in Table 4.51. If your command device is set for non-protection mode, this parameter sets it to protection mode.

Table 4.51 Relation between HORCMPROMOD and Command Device

Command Device        HORCMPROMOD     Mode
Protection mode       Don't care      Protection mode
Non-protection mode   Not specified   Non-protection mode
Non-protection mode   Specified       Protection mode
$HORCMPERM. This variable is used to specify the HORCM permission file name. If no file name is specified, "/etc/horcmperm.conf" or "/etc/horcmperm*.conf" (* is the instance number) is the default.
- If the HORCM permission file exists, then "/etc/horcmgr" executes the following command to permit the volumes specified.
  Example for UNIX systems:
  cat $HORCMPERM | /HORCM/usr/bin/raidscan -find inst
  Example for Windows systems:
  type $HORCMPERM | x:\HORCM\etc\raidscan.exe -find inst
- If no HORCM permission file exists, then "/etc/horcmgr" executes the built-in command to permit all volumes of the local host. Examples:
  HP-UX: ioscan -fun | grep -e rdisk -e rdsk | /HORCM/usr/bin/raidscan -find inst
  Linux: ls /dev/sd* | /HORCM/usr/bin/raidscan -find inst
  zLinux: ls /dev/sd* /dev/dasd* | /HORCM/usr/bin/raidscan -find inst
  Solaris: ls /dev/rdsk/* | /HORCM/usr/bin/raidscan -find inst
  AIX: lsdev -C -c disk | grep hdisk | /HORCM/usr/bin/raidscan -find inst
  Tru64 UNIX: ls /dev/rdisk/dsk* | /HORCM/usr/bin/raidscan -find inst
  Digital UNIX: ls /dev/rrz* | /HORCM/usr/bin/raidscan -find inst
  DYNIX/ptx: /etc/dumpconf -d | grep sd | /HORCM/usr/bin/raidscan -find inst
  IRIX64: ls /dev/rdsk/*vol /dev/rdsk/*/*vol/* | /HORCM/usr/bin/raidscan -find inst
  OpenVMS: /HORCM/usr/bin/raidscan -pi '$1$DGA0-10000 DKA0-10000 DGA0-10000' -find inst
  Windows: x:\HORCM\etc\raidscan.exe -pi $PhysicalDrive -find inst
- "/etc/horcmgr" does not execute the built-in command if the following is specified for $HORCMPERM; this is used to execute a system command that permits the specified volumes from a user's shell script:
  HORCMPERM=MGRNOINST
4.16.8 Determining the Protection Mode Command Device
The inquiry page is not changed for a command device with protection mode ON; therefore, CCI provides a way to identify the protection mode command device. To determine the currently used command device, use the horcctl -D command. This command indicates a protection mode command device by adding an asterisk (*) to the device file name.
Example for HP-UX systems:
# horcctl -D
Current control device = /dev/rdsk/c0t0d0*     <- * indicates protection ON.
4.17 Group Version Control for Mixed Storage System Configurations
Before executing each option of a command, CCI internally checks the facility version of the Hitachi storage system to verify that the same version is installed across a mixed storage system configuration. If the configuration includes older storage systems (e.g., 9900), this method may not meet the requirements of the mixed storage system environment, because the older storage system limits the availability of enhancements in later facility versions: if the facility versions of the storage systems differ, the user cannot use USP/NSC-specific facilities, because CCI applies the minimum version to all storage systems. To expand the capability of mixed storage system configurations and avoid such problems, CCI supports the following "group version control" to manage a version for each group:
- CCI (the HORCM daemon) assigns a facility version to each group, based on the configuration file, at HORCM startup.
- In a mixed storage system configuration, if the facility versions of the storage systems (e.g., USP/NSC and 9900V) differ within a group, CCI applies the minimum version to that group (see Figure 4.75).
[Figure: a mixed configuration of a 9900V (facility version N) and a USP/NSC (facility version N+1); based on the configuration file, group A is assigned facility version N, group B facility version N+1, and group C facility version N.]
Figure 4.75 Definition of the Group Version
4.18 LDM Volume Discovery and Flushing for Windows
Windows systems support the Logical Disk Manager (LDM) (similar to VxVM), and a logical drive letter is typically associated with an LDM volume ("\Device\HarddiskVolumeX"). As a result, the user cannot directly see the relationship between LDM volumes and the physical volumes of the RAID storage system. To make the CCI configuration file, the user needs to know this relationship, which is illustrated in Figure 4.76.
[Figure: mounted points (drive letters E:, F:, G:, each backed by a Volume{guid}) map to LDM volumes (\Device\HarddiskVolumeX, \Device\HarddiskVolumeY, or \Device\HarddiskDmVolumes\...\VolumeX or \...\StripeX, including a mirrored volume), which are built on physical volumes (\Device\HarddiskX\DR??, PhysicalDriveY); the physical volumes correspond to the groups (ORA, ORB) in the CCI configuration file.]
Figure 4.76 LDM Volume Configuration
4.18.1 Volume Discovery Function
CCI supports a volume discovery function with three levels that show the relationship between LDM volumes and the physical volumes:
- Physical level. CCI shows the relationship between the 'PhysicalDrive' and the LDEV, given $Physical as the KEY WORD for discovery.
- LDM volume level. CCI shows the relationship between the 'LDM volume & PhysicalDrives' and the LDEV, given $Volume as the KEY WORD for discovery.
- Drive letter level. CCI shows the relationship between the 'drive letter & LDM volume & PhysicalDrives' and the LDEV, given $LETALL as the KEY WORD for discovery.
The KEY WORD ($Physical, $Volume, $LETALL) can be used with the 'raidscan -find', 'inqraid', and 'mkconf' commands.
In Windows, DOS devices (i.e., C:, Volume{}) are linked to a Device Object Name (\Device\...). CCI indicates them as follows by abbreviating the long Device Object Name.
- Device Object Name of the LDM for Windows 2008/2003/2000:
  \Device\HarddiskVolumeX for a partition -> \VolX\DskY
  DskY shows that VolX is configured on HarddiskY.
- Device Object Name of the LDM for Windows 2003/2000:
  \Device\HarddiskDmVolumes\...\VolumeX for a spanned volume -> \DmsX\DskYs
  \Device\HarddiskDmVolumes\...\StripeX for a striped volume -> \DmtX\DskYs
  \Device\HarddiskDmVolumes\...\RaidX for a RAID-5 volume -> \DmrX\DskYs
  DskYs shows that DmsX (DmtX, DmrX) volumes are configured by bundling multiple HarddiskY1, Y2, ....
- Device Object Name of the PhysicalDrive for Windows 2008/2003/2000:
  \Device\HarddiskX\DR?? -> HarddiskX
The user can see the relationship between LDM volumes and LDEVs by giving a KEY WORD to the inqraid command.
inqraid $LETALL -CLI
DEVICE_FILE    PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
D:\Vol2\Dsk4   -            -     -    -  -       -     -         DDRS-34560D
E:\Vol44\Dsk0  CL2-K    61456   194    -  s/s/ss  0004  1:01-10   OPEN-3
F:\Vol45\Dsk0  CL2-K    61456   194    -  s/s/ss  0004  1:01-10   OPEN-3
G:\Dmt1\Dsk1   CL2-K    61456   256    -  s/s/ss  0005  1:01-11   OPEN-3
G:\Dmt1\Dsk2   CL2-K    61456   257    -  s/s/ss  0005  1:01-11   OPEN-3
G:\Dmt1\Dsk3   CL2-K    61456   258    -  s/s/ss  0005  1:01-11   OPEN-3
inqraid $Volume -CLI
DEVICE_FILE    PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
\Vol2\Dsk4     -            -     -    -  -       -     -         DDRS-34560D
\Vol44\Dsk0    CL2-K    61456   194    -  s/s/ss  0004  1:01-10   OPEN-3
\Vol45\Dsk0    CL2-K    61456   194    -  s/s/ss  0004  1:01-10   OPEN-3
\Dmt1\Dsk1     CL2-K    61456   256    -  s/s/ss  0005  1:01-11   OPEN-3
\Dmt1\Dsk2     CL2-K    61456   257    -  s/s/ss  0005  1:01-11   OPEN-3
\Dmt1\Dsk3     CL2-K    61456   258    -  s/s/ss  0005  1:01-11   OPEN-3
inqraid $Phy -CLI
DEVICE_FILE    PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
Harddisk0      -            -     -    -  -       -     -         DDRS-34560D
Harddisk1      CL2-K    61456   194    -  s/s/ss  0004  1:01-10   OPEN-3
Harddisk2      CL2-K    61456   256    -  s/s/ss  0005  1:01-11   OPEN-3
Harddisk3      CL2-K    61456   257    -  s/s/ss  0005  1:01-11   OPEN-3
Harddisk4      CL2-K    61456   258    -  s/s/ss  0005  1:01-11   OPEN-3
- Device Object Name of the Partition for Windows NT:
  \Device\HarddiskX\PartitionY -> \DskX\pY
- Device Object Name of the PhysicalDrive for Windows NT:
  \Device\HarddiskX\Partition0 -> HarddiskX
inqraid $LETALL -CLI
DEVICE_FILE    PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
D:\Dsk0\p1     -            -     -    -  -       -     -         DDRS-34560D
E:\Dsk1\p1     CL2-K    61456   194    -  s/s/ss  0004  1:01-10   OPEN-3
F:\Dsk1\p2     CL2-K    61456   194    -  s/s/ss  0004  1:01-10   OPEN-3
inqraid $Phy -CLI
DEVICE_FILE    PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
Harddisk0      -            -     -    -  -       -     -         DDRS-34560D
Harddisk1      CL2-K    61456   194    -  s/s/ss  0005  1:01-11   OPEN-3
If the user wants to know the relationship between LDM volumes and a group of the configuration file, the group can be found by giving a KEY WORD to the "raidscan -find verify" command.
raidscan -pi $LETALL -find verify
DEVICE_FILE    Group   PairVol   PORT   TARG  LUN  M  SERIAL  LDEV
E:\Vol44\Dsk0  ORA     ORA_000   CL2-K     7    2  -   61456   194
F:\Vol45\Dsk0  ORA     ORA_000   CL2-K     7    2  -   61456   194
G:\Dmt1\Dsk1   ORB     ORB_000   CL2-K     7    4  -   61456   256
G:\Dmt1\Dsk2   ORB     ORB_001   CL2-K     7    5  -   61456   257
G:\Dmt1\Dsk3   ORB     ORB_002   CL2-K     7    6  -   61456   258
raidscan -pi $LETALL -find
DEVICE_FILE    UID  S/F  PORT   TARG  LUN   SERIAL  LDEV  PRODUCT_ID
E:\Vol44\Dsk0    0   F   CL2-K     7    2    61456   194  OPEN-3
F:\Vol45\Dsk0    0   F   CL2-K     7    2    61456   194  OPEN-3
G:\Dmt1\Dsk1     0   F   CL2-K     7    4    61456   256  OPEN-3
G:\Dmt1\Dsk2     0   F   CL2-K     7    5    61456   257  OPEN-3
G:\Dmt1\Dsk3     0   F   CL2-K     7    5    61456   258  OPEN-3
4.18.2 Mountvol Attached to Windows 2008/2003/2000 Systems
Note that the 'mountvol /D' command provided with Windows 2008, 2003, and 2000 systems does not flush the system buffer associated with the specified logical drive. The mountvol command shows volumes mounted as Volume{guid}, as follows:
mountvol
Creates, deletes, or lists a volume mount point.
...
MOUNTVOL [drive:]path VolumeName
MOUNTVOL [drive:]path /D
MOUNTVOL [drive:]path /L
\\?\Volume{56e4954a-28d5-4824-a408-3ff9a6521e5d}\
        G:\
\\?\Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}\
        F:\
The user can find out how '\\?\Volume{guid}\' is configured, as follows:
inqraid $Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e} -CLI
DEVICE_FILE   PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
\Vol46\Dsk1   CL2-K    61456   193    -  S/s/ss  0004  1:01-10   OPEN-3

raidscan -pi $Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e} -find
DEVICE_FILE   UID  S/F  PORT   TARG  LUN   SERIAL  LDEV  PRODUCT_ID
\Vol46\Dsk1     0   F   CL2-K     7    1    61456   193  OPEN-3
4.18.3 System Buffer Flushing Function
The logical drive to be flushed can be specified in two ways. One way is to specify the logical drive directly (e.g., the G:\hd1 drive below), but this requires knowing which logical drive corresponds to a group before executing the sync command; and when the volume is mounted on a directory, this method also requires finding its volume name. To remove this complication, CCI supports a method that flushes the system buffer associated with a logical drive by finding the Volume{guid} that corresponds to a group of the configuration file. This method does not depend on the mount point, so it can flush a volume mounted on a directory. It is used by specifying a group to the "raidscan -find sync" command.
[Figure: the NT file system buffer for the mounted points (drive letters E:, F:, and the directory mount G:\hd1, each backed by a Volume{guid}) is flushed down through the LDM volumes (\Device\HarddiskVolumeX, \Device\HarddiskVolumeY, \Device\HarddiskDmVolumes\...\VolumeX or \...\StripeX, including a mirrored volume) to the physical volumes (\Device\HarddiskX\DR??, PhysicalDriveY) that correspond to the groups (ORA, ORB) in the CCI configuration file.]
Figure 4.77 LDM Volume Flushing
The following example flushes the system buffer associated with the ORB group through $Volume.
raidscan -pi $Volume -find sync -g ORB
[SYNC] : ORB ORB_000[-] -> \Dmt1\Dsk1 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_001[-] -> \Dmt1\Dsk2 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_002[-] -> \Dmt1\Dsk3 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
The following example flushes the system buffer associated with all groups of the local instance.
raidscan -pi $Volume -find sync
[SYNC] : ORA ORA_000[-] -> \Vol44\Dsk0 : Volume{56e4954a-28d5-4824-a408-3ff9a6521e5d}
[SYNC] : ORA ORA_000[-] -> \Vol45\Dsk0 : Volume{56e4954a-28d5-4824-a408-3ff9a6521e5e}
[SYNC] : ORB ORB_000[-] -> \Dmt1\Dsk1 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_001[-] -> \Dmt1\Dsk2 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
[SYNC] : ORB ORB_002[-] -> \Dmt1\Dsk3 : Volume{bf48a395-0ef6-11d5-8d69-00c00d003b1e}
Note: Windows NT does not support the LDM volume, so the user must specify $LETALL instead of $Volume.
Hitachi Command Control Interface (CCI) User and Reference Guide
311
1. Offline backup using 'raidscan -find sync' for a Windows NT file system:
   'raidscan -find sync' flushes the system buffer by finding the logical drive that corresponds to a group of the configuration file, so it can be used without the -x mount and -x umount commands. The following is an example for group ORB.
   P-VOL side: Close all logical drives on the P-VOL by the application.
   P-VOL side: Flush the system buffer for the P-VOL using "raidscan -pi $LETALL -find sync -g ORB".
   P-VOL side: Split the paired volume using "pairsplit -g ORB" with r/w mode.
   S-VOL side: Back up the S-VOL data.
   S-VOL side: Flush the system buffer for the S-VOL updates using "raidscan -pi $LETALL -find sync -g ORB" when the backup is finished.
   P-VOL side: Open all logical drives on the P-VOL by the application.
   P-VOL side: Resynchronize the paired volume using "pairresync -g ORB".

2. Offline backup using 'raidscan -find sync' for a Windows 2008/2003/2000 file system:
   'raidscan -find sync' flushes the system buffer associated with a logical drive by finding the Volume{guid} that corresponds to a group of the configuration file, so it can be used without the -x mount and -x umount commands. The following is an example for group ORB.
   P-VOL side: Close all logical drives on the P-VOL by the application.
   S-VOL side: Flush the system buffer for the new S-VOL data using "raidscan -pi $Volume -find sync -g ORB".
   P-VOL side: Flush the system buffer for the P-VOL using "raidscan -pi $Volume -find sync -g ORB".
   P-VOL side: Split the paired volume using "pairsplit -g ORB" with r/w mode.
   S-VOL side: Back up the S-VOL data.
   S-VOL side: Flush the system buffer for the S-VOL updates using "raidscan -pi $Volume -find sync -g ORB" when the backup is finished.
   P-VOL side: Open all logical drives on the P-VOL by the application.
   P-VOL side: Resynchronize the paired volume using "pairresync -g ORB".

3. Online backup using 'raidscan -find sync' for a Windows NT file system:
   'raidscan -find sync' flushes the system buffer by finding the logical drive that corresponds to a group of the configuration file, so it can be used without the -x mount and -x umount commands. The following is an example for group ORB.
   P-VOL side: Freeze the DB on the open P-VOL by the application.
   P-VOL side: Flush the system buffer for the P-VOL using "raidscan -pi $LETALL -find sync -g ORB".
   P-VOL side: Split the paired volume using "pairsplit -g ORB" with r/w mode.
   P-VOL side: Unfreeze the DB on the open P-VOL by the application.
   S-VOL side: Back up the S-VOL data.
   S-VOL side: Flush the system buffer for the S-VOL updates using "raidscan -pi $LETALL -find sync -g ORB" when the backup is finished.
   P-VOL side: Resynchronize the paired volume using "pairresync -g ORB".

4. Online backup using 'raidscan -find sync' for a Windows 2008/2003/2000 file system:
   'raidscan -find sync' flushes the system buffer associated with a logical drive by finding the Volume{guid} that corresponds to a group of the configuration file, so it can be used without the -x mount and -x umount commands. The following is an example for group ORB.
   P-VOL side: Freeze the DB on the open P-VOL by the application.
   S-VOL side: Flush the system buffer for the new S-VOL data using "raidscan -pi $Volume -find sync -g ORB".
   P-VOL side: Flush the system buffer for the P-VOL using "raidscan -pi $Volume -find sync -g ORB".
   P-VOL side: Split the paired volume using "pairsplit -g ORB" with r/w mode.
   P-VOL side: Unfreeze the DB on the open P-VOL by the application.
   S-VOL side: Back up the S-VOL data.
   S-VOL side: Flush the system buffer for the S-VOL updates using "raidscan -pi $Volume -find sync -g ORB" when the backup is finished.
   P-VOL side: Resynchronize the paired volume using "pairresync -g ORB".
Notes:
- The P-VOL side must stop WRITE I/O to the logical drive that corresponds to the [-g name] before issuing the "raidscan -find sync" command.
- The S-VOL side must close the logical drive that corresponds to the [-g name] before issuing the "raidscan -find sync" command.
4.19 Special Facilities for Windows 2008/2003/2000 Systems
CCI provides the following special facilities for Windows 2008/2003/2000 systems:
- Signature changing facility (section 4.19.1)
- Directory mount facility (section 4.19.2)
4.19.1 Signature Changing Facility for Windows 2008/2003/2000 Systems
Consider the following Microsoft Cluster Server (MSCS) configuration, in which an MSCS P-VOL is shared by MSCS Node1 and Node2, and the copied S-VOL is used for backup on Node2. If Node2 reboots while in the standby state, MSCS on Node2 has a problem assigning a drive letter to the S-VOL: it assigns the previous P-VOL drive letter. This problem happens when:
- Node1 is active.
- Node2 is in the standby state (the P-VOL on Node2 is hidden by MSCS), and Node2 reboots.
[Figure: Node1 (MSCS) and Node2 (MSCS + Backup) share the P-VOL; ShadowImage (HOMRCF) copies the P-VOL to the S-VOL, including the disk signature (Sig).]
Figure 4.78 Configurations with MSCS and ShadowImage (HOMRCF)
MSCS on Node2 misidentifies the S-VOL as an MSCS cluster resource, because the signature of the S-VOL is the same as that of the P-VOL due to the copy: MSCS manages cluster resources by the signature only. As a result, the S-VOL on Node2 cannot be used for backup, because MSCS on Node2 takes the S-VOL away. This is a problem of the MSCS service, because a Windows system changes the signature on reboot if a duplicate signature is detected when the MSCS service is not running; MSCS does not accommodate LUNs with duplicate signatures and partition layouts. The best way to avoid this problem is to transport the S-VOL to another host outside the cluster, but this forces the setup of a backup server, so CCI supports a facility to put back the signature as a second way.
The signature can be changed by using the "dumpcfg.exe" command included in the Windows Resource Kit, but if the S-VOL was created with the "Noread" option and the system is rebooted, the "dumpcfg.exe" command fails to change the signature, because the system does not know the signature and volume layout information for the S-VOL.
CCI adopts the following way with this point in view:
■ The user must save the signature and volume layout information to the system disk by using the “inqraid -gvinf” command, after the S-VOL has been given its signature and new partition by Windows disk management.
■ The user can then put back the signature by writing the signature and volume layout information saved on the system disk to the S-VOL, using the “inqraid -svinf” command after splitting the S-VOL. Even if the S-VOL was created with the “Noread” option and the system was rebooted, so that the system could not create a device object (\Device\HarddiskVolume#) and Volume{guid} for the S-VOL, the “-svinf” option creates the device object (\Device\HarddiskVolume#) and Volume{guid} without using Windows disk management.
Note: The Cluster Disk Driver does not permit use of a “Noread” volume, reporting “Device is not ready” at boot time, because the Cluster Disk Driver is a non-Plug-and-Play driver. The user can verify this situation using the inqraid command as follows:
inqraid $Phy -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
Harddisk0    -      -       -     -    -       -     -        -
Harddisk1    -      -       -     -    -       -     -        -
In this case, you need to perform the following procedure to disable the Cluster Disk Driver:
1. In the Computer Management window, double-click System Tools, and then click Device
Manager.
2. On the View menu, click Show Hidden Devices. Non-Plug and Play Drivers appear in the
list in the right pane.
3. Open Non-Plug and Play Drivers, right-click Cluster Disk, and then click Disable. When
you are prompted to confirm whether you want to disable the cluster disk, click Yes.
When you are prompted to restart the computer, click Yes.
4. Verify that you can see the “Noread” volume using the inqraid command, as follows.
inqraid $Phy -CLI
DEVICE_FILE  PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
Harddisk0    CL2-K  61456   194   -    s/S/ss  0004  1:01-10  OPEN-3
Harddisk1    CL2-K  61456   256   -    s/S/ss  0005  1:01-11  OPEN-3
5. After starting CCI and splitting the S-VOL, put back the signature by using the “inqraid -svinf” command.
6. In the Computer Management window, enable the Cluster Disk Driver again, and restart the computer.
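For illustration, the save/restore cycle might look as follows. This is a sketch only: the group name ORB is an assumption, and it assumes the “-gvinf” and “-svinf[=PTN]” forms of inqraid that parallel the “-gvinfex”/“-svinfex” options described in section 4.19.2.

D:\HORCM\etc>inqraid $Phy -gvinf -CLI                              ... save signature and volume layout
D:\HORCM\etc>pairsplit -g ORB                                      ... split the S-VOL
D:\HORCM\etc>pairdisplay -l -fd -g ORB | inqraid -svinf=Harddisk   ... put the saved signature back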
4.19.2 GPT Disk for Windows 2003/2008
Windows 2003/2008 supports a basic disk called a “GPT disk” that uses a GUID partition instead of the signature. A GPT disk can also be used as an S-VOL of ShadowImage, so RAID Manager supports saving/restoring the GUID DiskId of the GPT basic disk through the inqraid command.
■ -gvinfex option (Windows 2003/2008 only)
This option retrieves the LUN signature and volume layout information by way of a raw device file provided via STDIN or arguments, and saves it in a system disk file with the following format:
\WindowsDirectory\VOLssss_llll.ini
where ssss = serial#, llll = LDEV#
Normally, this option is used to save the disk signature/GUID DiskId and volume layout information once, after it has been written on a potential S-VOL (and before its paircreate). The user does not need to view these host files directly.
[For example] … saves the volume information for all physical drives:

D:\HORCM\etc>inqraid $Phys -gvinfex -CLI
\\.\PhysicalDrive10:
# Harddisk10 -> [VOL61459_448_DA7C0D91] [OPEN-V          ]
\\.\PhysicalDrive11:
# Harddisk11 -> [VOL61459_449_D4CB5F17-2ADC-4FEE-8650-D3628379E8F5] [OPEN-V          ]
\\.\PhysicalDrive12:
# Harddisk12 -> [VOL61459_450_9ABDCB73-3BA1-4048-9E94-22E3798C3B61] [OPEN-V          ]
■ -svinfex[=PTN] option (Windows 2003 only)
This option writes the LUN signature/GUID DiskId and volume layout information (that had previously been saved in a system disk file) by way of a raw device file provided via STDIN or arguments.
This option gets the Serial# and LDEV# of the RAID storage system for the target device using SCSI Inquiry, and writes the signature/GUID DiskId and volume layout information from the VOLssss_llll.ini file to the target device.
This option works correctly (even if the Harddisk# changes due to configuration changes) because the signature/GUID DiskId and volume layout information is associated with the array Serial# and LDEV# (not the Harddisk#).
[=PTN]
This suboption specifies a string pattern used to select only the pertinent output lines provided via STDIN. In the example below, only the pairdisplay output lines containing “Harddisk” are used to trigger signature writing.
D:\HORCM\etc>pairdisplay -l -fd -g URA | inqraid -svinfex=Harddisk
[VOL61459_448_DA7C0D91] -> Harddisk10 [OPEN-V          ]
[VOL61459_449_D4CB5F17-2ADC-4FEE-8650-D3628379E8F5] -> Harddisk11 [OPEN-V          ]
[VOL61459_450_9ABDCB73-3BA1-4048-9E94-22E3798C3B61] -> Harddisk12 [OPEN-V          ]
■ -gplbaex option (Windows 2003 only)
This option displays the usable LBAs on a physical drive in units of 512 bytes, and is used to specify the [slba] and [elba] options for the raidvchkset command, as sketched below.

C:\HORCM\Tool>inqraid -CLI -gplbaex hd10,13
Harddisk10 : SLBA = 0x0000003f ELBA = 0x013fe5d9 PCNT = 1 [OPEN-V          ]
Harddisk11 : SLBA = 0x00000022 ELBA = 0x013fffdf PCNT = 2 [OPEN-V          ]
Harddisk12 : SLBA = 0x00000022 ELBA = 0x013fffdf PCNT = 3 [OPEN-V          ]

– SLBA: displays the usable starting LBA in units of 512 bytes
– ELBA: displays the usable ending LBA (ELBA - 1) in units of 512 bytes
– PCNT: displays the number of partitions
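For example, the displayed range might be passed to raidvchkset roughly as follows. This is a sketch only, assuming the “raidvchkset -vs <bsize> [slba] [elba]” form of the command; the group name VG01 and the block size of 8 are arbitrary values used for illustration:

C:\HORCM\Tool>raidvchkset -g VG01 -vs 8 0x0000003f 0x013fe5d9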
4.19.3 Directory Mount Facility for Windows Systems
The mountvol command included with Windows (2008, 2003, or 2000) supports directory mounting, but it does not support a directory mount function that flushes the system buffer associated with a logical drive, as UNIX systems do. The directory mount structure on Windows is only a symbolic link between a directory and a Volume{guid}, as illustrated in Figure 4.79 below. As such, CCI supports a function to discover the volumes mounted by a directory, and supports mount/umount operations through the subcommand option.
Figure 4.79 Directory Mount Structure
(The figure shows logical drive D: with directory mounts \hd1 and \hd2, each a symbolic link to a Volume{guid}; the Volume{guid}s map to the LDM volumes \Device\HarddiskVolume2, \Device\HarddiskVolume8, and \Device\HarddiskVolume9, which map to the physical volumes \Device\Harddisk7, \Device\Harddisk0, and \Device\Harddisk1.)
Volume discovery for directory mounted volumes: CCI can discover a directory mounted volume by using $LETALL, which shows the relationship between the logical drives and the physical volumes. The keyword ($LETALL) can also be used with the raidscan -find and mkconf commands.
D:\HORCM\etc>inqraid $LETALL -CLI
DEVICE_FILE       PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
D:\Vol2\Dsk7      -      -       -     -    -       -     -        DDRS-34560D
D:\hd1\Vol8\Dsk0  CL2-F  61459   448   -    s/s/ss  0005  1:01-01  OPEN-3
D:\hd2\Vol9\Dsk1  CL2-F  61459   449   -    s/s/ss  0005  1:01-01  OPEN-3
G:\Dms1\Dsk2      CL2-K  61456   256   -    s/s/ss  0005  1:01-11  OPEN-3
G:\Dms1\Dsk3      CL2-K  61456   257   -    s/s/ss  0005  1:01-11  OPEN-3
G:\Dms1\Dsk4      CL2-K  61456   258   -    s/s/ss  0005  1:01-11  OPEN-3
Subcommand for directory mounted volumes: CCI supports the directory mount with the “-x mount”, “-x umount”, and “-x sync” options, so that a directory mount can be used for mounting/unmounting the S-VOL, as sketched below.
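For illustration, the subcommands might be combined as follows. This is a sketch only, reusing the directory D:\hd1 and HarddiskVolume8 from Figure 4.79 as assumed values:

C:\HORCM\etc>raidscan -x mount D:\hd1 \Device\HarddiskVolume8
D:\hd1 <+> HarddiskVolume8
C:\HORCM\etc>raidscan -x sync D:\hd1
C:\HORCM\etc>raidscan -x umount D:\hd1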
Mount and sync using Volume{GUID} for Windows 2008/2003/2000: RAID Manager supports the mount command option specified with a device object name, such as “\Device\HarddiskvolumeX”. However, Windows changes the device number of the device object name after recovery from a failure of the PhysicalDrive; as a result, a mount command that specifies the device object name may fail. Therefore, RAID Manager also supports a mount command option that specifies a Volume{GUID} as well as the device object name.
■ Mount
– The mount command option can specify a Volume{GUID} as well as the device object name.
– If a Volume{GUID} is specified, it is executed by converting the Volume{GUID} to a device object name.
– The user can discover the Volume{GUID}s by using the “inqraid $Volume -fv” command option.
Examples:
C:\HORCM\etc>inqraid -CLI $Vol -fv
DEVICE_FILE                                             PORT   SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}\Vol3\Dsk0  CL2-D  62496   256   -    -       -     -        OPEN-3-CVS-CM
[ Mount using DefineDosDevice() ]
Note: The mounted volume may be forcibly dismounted at LOG-OFF of Windows 2008/2003/2000. For example:
C:\HORCM\etc>raidscan -x mount E: Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}
E: <+> HarddiskVolume3
[ Mount using a directory mount ]
Note: This prevents the forcible removal of the volume at LOG-OFF of Windows 2008/2003/2000. For example:
C:\HORCM\etc>raidscan -x mount E:\ Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}
E:\ <+> HarddiskVolume3
■ Sync
– The sync command option can also specify a Volume{GUID} as well as the device object name.
– If a Volume{GUID} is specified, it is executed by converting the Volume{GUID} to a device object name.
Example:
C:\HORCM\etc>raidscan -x sync Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}
[SYNC] Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}
4.20 Host Group Control
The Hitachi RAID storage systems (9900V and later) can define host groups on a port and can allocate a host LU to each host group. CCI does not use this host LU; it specifies volumes by the absolute LUN in the port. This can confuse the user, because the LUN in CCI notation does not correspond to the LUN seen on the host view and the Remote Console. Therefore, CCI supports a way of specifying a host group and the LUN on the host view.
4.20.1 Specifying a Host Group
(1) Defining the formats
Adding a host group argument to the raidscan command and the configuration file would break compatibility with the conventional CLI. Therefore, CCI adopts a form that specifies the host group within the port string, as follows.
■ CL1-A-GRP# (GRP# can be up to 127)
– Specifying the host group for the raidscan command:
raidscan -p CL1-A-5
– Specifying the host group in the configuration file:
#dev_group   dev_name   port#     TargetID  LU#  MU#
ORA          ORA_000    CL2-D-1   4         1    0
ORA          ORA_001    CL2-D-1   4         2    0
If a port name that includes a host group is specified, the maximum number of specifiable LUNs is 255.
(2) Specifiable port strings
As a result, CCI supports four forms of port name, as illustrated below.
– Specifying the port name without a host group:
CL1-A
CL1-An       where n: unit ID for multiple RAID
– Specifying the port name with a host group:
CL1-A-g      where g: host group
CL1-An-g     where n-g: host group g on CL1-A in unit ID n
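For illustration, the four forms might be given to raidscan as follows; unit ID 1 and host group 5 are arbitrary values used only for this sketch:

raidscan -p CL1-A        ... port CL1-A, no host group
raidscan -p CL1-A1       ... port CL1-A on unit ID 1, no host group
raidscan -p CL1-A-5      ... host group 5 on port CL1-A
raidscan -p CL1-A1-5     ... host group 5 on port CL1-A of unit ID 1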
4.20.2 Commands and Options Including a Host Group
(1) Specifiable commands for a host group
The following commands can specify a host group in the port string:
■ raidscan -p <port>, raidar -p <port>, raidvchkscan -p <port>
# raidscan -p CL2-D-1
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL2-D-1 /da/ 0, 4, 0.1(256)...........SMPL ---- ------ ----, ----- ----
CL2-D-1 /da/ 0, 4, 1.1(257)...........SMPL ---- ------ ----, ----- ----
CL2-D-1 /da/ 0, 4, 2.1(258)...........SMPL ---- ------ ----, ----- ----
(2) New options including a host group
CCI supports new options for the following commands in order to show the LUN on the host view by finding the host group via the specified device:
■ raidscan -pdg <device>, raidar -pdg <device>, raidvchkscan -pdg <device>
# raidscan -pdg /dev/rdsk/c57t4d1
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL2-D-1 /da/ 0, 4, 0.1(256)...........SMPL ---- ------ ----, ----- ----
CL2-D-1 /da/ 0, 4, 1.1(257)...........SMPL ---- ------ ----, ----- ----
CL2-D-1 /da/ 0, 4, 2.1(258)...........SMPL ---- ------ ----, ----- ----
Specified device(hgrp=1) is LDEV# 0257
■ raidscan -findg
# ls /dev/rdsk/c57* | raidscan -findg
DEVICE_FILE        UID  S/F  PORT     TARG  LUN  SERIAL  LDEV  PRODUCT_ID
/dev/rdsk/c57t4d0  0    F    CL2-D-1  4     0    62500   256   OPEN3-CVS-CM
/dev/rdsk/c57t4d1  0    F    CL2-D-1  4     1    62500   257   OPEN3-CVS
/dev/rdsk/c57t4d2  0    F    CL2-D-1  4     2    62500   258   OPEN3-CVS
■ raidscan -findg conf, mkconf -gg
# ls /dev/rdsk/c57* | raidscan -findg conf 0 -g ORA
HORCM_DEV
#dev_group   dev_name   port#     TargetID  LU#  MU#
# /dev/rdsk/c57t4d1  SER = 62500 LDEV = 257 [ FIBRE FCTBL = 4 ]
ORA          ORA_000    CL2-D-1   4         1    0
# /dev/rdsk/c57t4d2  SER = 62500 LDEV = 258 [ FIBRE FCTBL = 4 ]
ORA          ORA_001    CL2-D-1   4         2    0
■ inqraid -fg
# ls /dev/rdsk/c57* | ./inqraid -CLI -fg
DEVICE_FILE  PORT     SERIAL  LDEV  CTG  H/M/12  SSID  R:Group  PRODUCT_ID
c57t4d0      CL2-D-1  62500   256   -    -       -     -        OPEN-3-CVS-CM
c57t4d1      CL2-D-1  62500   257   -    s/P/ss  0005  1:01-02  OPEN-3-CVS
c57t4d2      CL2-D-1  62500   258   -    s/P/ss  0005  1:01-02  OPEN-3-CVS
4.21 Using CCI SLPR Security
The Virtual Partition Manager (VPM) feature of the Hitachi RAID storage systems (USP V/VM and TagmaStore USP/NSC) supports Storage Logical Partitioning (SLPR), a feature that partitions the ports and volumes of the RAID storage system. If CCI does not have SLPR security, it can operate target volumes across SLPRs through the command device. The purpose of CCI SLPR security is to prevent CCI from operating volumes in another SLPR (SLPR#N) through the command device from the SLPR (SLPR#M) assigned to its host. You can use CCI SLPR security by defining the command device through the Web console or the SVP-installed ‘VPM’ feature, so that CCI can protect the target volumes.
The following figure represents the SLPR protection facility.
Figure 4.80 Protection of the Command Device that has the SLPR Attribute
(The figure shows a host running RM instances INST0 and INST1 with a command device (CM) in SLPR0; the command device controls access to volumes in SLPR#M and SLPR#N.)
4.21.1 Specifying the SLPR Protection Facility
When you want to access certain SLPRs from a single host, use the CCI protection facility so that the host can access multiple SLPRs through a single command device. The following outline reviews the setup tasks for the SLPR protection facility.
1. Setting SLPR on the command device: The command device has an SLPR number and an associated bitmap, so you can set multiple SLPRs. You accomplish this by sharing a command device (using ports connected to different SLPRs), setting the command device through SLPR#0 (called the Storage Administrator) on the Web console or SVP.
For example, if the command device is shared with the ports on SLPR#1 and SLPR#2, the command device automatically sets the bitmap corresponding to SLPR#1 and SLPR#2.
2. Testing SLPR: CCI verifies whether or not the command device can access a target within an SLPR. If the command device belongs to SLPR#0, or if CCI has no SLPR function, then SLPR protection is ignored. However, if the command device is shared with the ports on SLPR#1 and SLPR#2, CCI allows you to operate the volumes on SLPR#1 and SLPR#2.
3. Rejecting commands: If access is denied on the specified port (or target volume), CCI rejects the following commands and outputs the error code EX_ESPERM:
– horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairvolchk, pairevtwait, pairsyncwait
– raidscan (except “-find verify” and “-find inst”), raidar, pairdisplay
– raidvchkset, raidvchkscan (except “-v jnl”), raidvchkdsp
[EX_ESPERM]
Permission denied with the SLPR
[Cause]: The specified command device does not have permission to access the other SLPR.
[Action]: Please configure the SLPR so that the target port and the command device belong to the same SLPR.
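As a hypothetical illustration (the exact output wording may differ), a raidscan issued against a port belonging to a non-permitted SLPR would be rejected roughly as follows:

# raidscan -p CL3-A
raidscan: [EX_ESPERM] Permission denied with the SLPR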
4.21.2 SLPR Configuration Examples
4.21.2.1 Single Host
Figure 4.81 provides an example in which control is denied to the paircreate and raidscan commands in the following cases:
■ The volume described on RM INST1 belongs to a different SLPR than the command device, so the paircreate command cannot control the paired volume.
■ The specified port belongs to a different SLPR than the command device, so the raidscan -p CL3-A command cannot scan any ports that are defined in SLPR#N.
Figure 4.81 SLPR Configuration on a Single Host
(The figure shows a host with RM INST0 and INST1; the command device (CM), port CL1-A, and the P-VOL are in SLPR#M, while port CL3-A and the S-VOL are in SLPR#N.)
To operate SLPR#N, assign a command device. If RM INST1 has a command device for SLPR#N, the paircreate command is permitted. However, the raidscan -p CL3-A command (via RM INST0) will be unable to scan the port, because the specified port belongs to a different SLPR than the command device. In this case, -p CL3-A must be operated via RM INST1, as shown in the following example.
Figure 4.82 Operation Across SLPRs Using Two Command Devices on a Single Host
(The figure shows the same host, now with a command device in SLPR#M for RM INST0 and a second command device in SLPR#N for RM INST1.)
To operate SLPR#N, share the command device. If RM INST1 has a shared command device for SLPR#N, the paircreate command is permitted. Additionally, the raidscan -p CL3-A command (via RM INST0) is permitted to scan the port, because the shared command device has the bitmap settings of SLPR#M and SLPR#N.
Figure 4.83 Operation Across SLPRs Using a Shared Command Device on a Single Host
(The figure shows the host with a single command device in SLPR0 shared by RM INST0 and INST1, covering the P-VOL in SLPR#M and the S-VOL in SLPR#N.)
4.21.2.2 Dual Hosts
In the following example, the paircreate command is unable to operate the paired volume because the volume described on HostB belongs to a different SLPR than the command device. Also, the raidscan -p CL3-A command (via both hosts) is unable to scan the port because the specified port belongs to a different SLPR than the command device.
Figure 4.84 SLPR Configuration on Dual Hosts
(The figure shows HostA with RM INST0 and HostB with RM INST1; the command device (CM), port CL1-A, and the P-VOL are in SLPR#M, while port CL3-A and the S-VOL are in SLPR#N.)
To operate SLPR#N, assign a command device. If HostB has a command device for SLPR#N, the paircreate command will be permitted. However, the raidscan -p CL3-A command via HostA will be unable to scan the port because the specified port belongs to a different SLPR than the command device. In this case, the raidscan -p CL3-A command must be operated via HostB.
Figure 4.85 Operation Across SLPRs Using Two Command Devices on Dual Hosts
(The figure shows HostA using a command device in SLPR#M and HostB using a second command device in SLPR#N.)
To operate SLPR#N, share the command device. If HostB has a shared command device for SLPR#N, the paircreate command is permitted. Also, the raidscan -p CL3-A command (via HostA) is allowed to scan the port because the shared command device has the bitmap settings of SLPR#M and SLPR#N.
Figure 4.86 Operating SLPR#N by Sharing the Command Device
(The figure shows HostA and HostB sharing a single command device in SLPR0 that has the bitmap settings of SLPR#M and SLPR#N.)
4.21.2.3 TrueCopy Using Dual Hosts
In the following example, the pair-operation command (except with the -l option) determines whether the operation on the paired volumes should be permitted at the remote site. The result is that the paircreate command is not allowed to operate the paired volume, because the volume described on HostB belongs to a different SLPR than the command device. Also, the raidscan -p CL3-A command (on HostB) is not allowed to scan the port.
Figure 4.87 TrueCopy Operation Using SLPR
(The figure shows HostA with RM INST0, a command device, and the P-VOL in SLPR#M, connected by TrueCopy to HostB with RM INST1, a command device, port CL3-A, and the S-VOL in SLPR#N.)
4.22 Controlling Volume Migration
In a Data Lifecycle Management (DLCM) solution, volume migration, including migration of external volumes, must be controllable through the CLI. Volume migration that cooperates with CC (Cruising Copy) and the external connection can be supported by operating the current ShadowImage functions and the VDEV mapping of the external connection.
It is also important to keep the support of CC (Cruising Copy) compatible with the current CLI interface, because CCI already supports ShadowImage and the external connection. For this purpose, CCI provides a CLI interface that works with minimal changes to the application, by specifying the COPY mode for CC (Cruising Copy) in the CCI CLI.
4.22.1 Specifications for Volume Migration
To control a volume of the external connection, CCI requires the volume to be mapped to the RAID port used for pooling. Therefore, the external volume must be mapped in advance to a port of the RAID storage system without being connected to the host. The following is an execution example of the volume migration executed for LDEV#18.
Figure 4.88 Volume Migration Configurations
(The figure shows a host with RM INST0 and INST1 issuing a volume migration command. ShadowImage (SI) S-VOLs and a CC pair exist between P-VOL LDEV#18 and S-VOL LDEV#19; external LDEV E-LDEV#30 sits on a RAID group behind the port for pooling (CL1-A). After the copy, the mapping for the LDEV is swapped.)
(1) Command specification
CCI operates the volume migration by specifying it in horcm*.conf, the same as for SI and TC, because volume migration using CCI requires the mapping for the target volume to be defined.
An MU# that is not used by SI (that is, one in SMPL state as SI) is used for the CC operation.
The original volume for the migration is defined as the P-VOL. The target volume for the migration is defined as the S-VOL. In other words, the original volume is migrated from P-VOL to S-VOL, and the mapping between LDEV and VDEV is swapped after the copy.
(2) Mapping specification
The mapping between LUN and LDEV is maintained in the SCSI Inquiry reply, so that the host recognizes the LUN as identical after the mapping changes.
To find out whether the mapping has been changed, use the “-fe” option of the pairdisplay and/or raidscan commands, which shows the connection to the external volumes. LUs of the external connection and LUs of the RAID group are intermingled on the port for pooling, but this can also be confirmed with the same option of the raidscan command.
(3) Group operation
Volume migration can be executed as a group by describing the group in horcm*.conf; however, the LUs (LDEVs) that are mapped to the S-VOLs after command execution do not maintain consistency as a group. In other words, the user must consider the volumes mapped to the S-VOLs after execution to be discarded volumes.
If the HORCM daemon is killed or the host crashes during a group operation, the group whose command execution was aborted has LUNs of the external connection and the RAID group mixed within the group. In this case, when the user issues the identical command once again, CCI skips the LUs that were already executed and issues the CC command to the unexecuted LUs.
(4) Using MU#
CCI manages the status of TC/SI using the MU#, so CCI uses an empty MU# among those managed for SI. Therefore, the user needs to execute the volume migration commands in the SI environment, that is, with the HORCC_MRCF environment variable set. An example is shown below.
(The figure shows two examples. For volume oradb, whose S/P-VOL already uses MU#0 and MU#1 for SI, it is possible to specify MU#2 for CC. For volume oradb1, whose P-VOL uses only MU#0 for SI, it is possible to specify MU#1 or MU#2 for CC.)
(5) HORCM instance
The original and target volumes for the volume migration can be described on an unused MU# as another group in the horcm*.conf of the HORCM instances for SI and/or TC. It is also possible to define the original and target volumes for the volume migration in a horcm*.conf for a HORCM instance that is independent of SI/TC, as sketched below.
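A minimal sketch of such a definition, assuming a group named CC01 on the free MU#1; the port, target ID, and LU# values are assumptions only, and the paired instance would describe the target volume the same way:

HORCM_DEV
#dev_group   dev_name   port#     TargetID  LU#  MU#
CC01         cc_000     CL1-A-0   0         1    1

The migration command is then executed in the SI environment:

HORCC_MRCF=1
export HORCC_MRCF
paircreate -g CC01 -vl -m cc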
4.22.2 Commands to Control the Volume Migration
(1) Command for volume migration
CCI supports the volume migration by adding an option (-m cc) to the paircreate
command.
paircreate -g <group> -d <pair vol> … -m <mode> -vl[r] -c <size>
-m <mode>
mode = cc (specifiable with HOMRCF only)
This option specifies the Cruising Copy mode for the volume migration.
Note: This option cannot be specified with the “-split” option in the same command.
-vl[r]
The -vl option specifies “local” and copies from the local instance LU (P-VOL) to the remote instance LU (S-VOL); the original volume, as the local instance LU, is migrated from P-VOL to S-VOL, and the physical volume mapping between P-VOL and S-VOL is swapped after the copy.
The -vr option specifies “remote” and copies from the remote instance LU (P-VOL) to the local instance LU (S-VOL); the original volume, as the remote instance LU, is migrated from P-VOL to S-VOL, and the physical volume mapping between P-VOL and S-VOL is swapped after the copy.
-c <size>
This option specifies the track size for copying the paired volume, from 1 to 15 extents. If this option is omitted, the default value (3) is used for the track size.
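Putting the options together, the migration of one group might be started as follows; group horc0 is taken from the status example later in this section, and “-c 15” is an arbitrary choice of the maximum track size:

# paircreate -g horc0 -vl -m cc -c 15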
(2) Command for discovering an external volume
External volumes can be discovered by using the “-fe” option of the raidscan command.
raidscan -p <port> -fe
-fe
This option displays the serial# and LDEV# of only the external LUNs mapped to the LDEV.
If no external LUN is mapped to an LDEV on the specified port, this option does nothing. Also, if this option is specified, the -f[f][g][d] options are not allowed.
[Display example]
# raidscan -p cl1-a-0 -fe -CLI
PORT#   /ALPA/C TID# LU#  Seq#   Num LDEV# P/S  Status Fence E-Seq# E-LDEV#
CL1-A-0  ef  0   0     8  62496  1   19    SMPL -      -     30053  30
CL1-A-0  ef  0   0     9  62496  1   21    SMPL -      -     30053  32
CL1-A-0  ef  0   0    10  62496  1   22    SMPL -      -     30053  33
E-Seq#: Displays the production (serial) number of the external LUN.
E-LDEV#: Displays the LDEV# of the external LUN.
(3) Command for confirming the status
The status of CC can be confirmed by using the “-fe” option of the pairdisplay command.
pairdisplay -g <group> -fe
-fe
This option displays the serial# and LDEV# of the external LUNs mapped to the LDEV, plus additional information for the pair volume.
This option displays the above information by appending it to the last column, and then ignores the 80-column format.
This option is invalid if the cascade options (-m all, -m cas) are specified.
Display example:
[Before execution of CC command]
# pairdisplay -g horc0 -fe
Group ... Seq#, LDEV#.P/S,  Status, Seq#, P-LDEV# M CTG CM EM E-Seq# E-LDEV#
horc0 ... 62496   18.SMPL   ----,  -----   ----   - -   -  -  -      -
horc0 ... 62496   19.SMPL   ----,  -----   ----   - -   -  H  30053  30

# paircreate -g horc0 -vl -m cc

[During execution of CC command, the progress is displayed in the copy %]
# pairdisplay -g horc0 -fe
Group ... Seq#, LDEV#.P/S,  Status, Seq#, P-LDEV# M CTG CM EM E-Seq# E-LDEV#
horc0 ... 62496   18.P-VOL  COPY,  62496     19   - -   C  -  -      -
horc0 ... 62496   19.S-VOL  COPY,  -----     18   - -   C  H  30053  30

[After completion of CC command]
# pairdisplay -g horc0 -fe
Group ... Seq#, LDEV#.P/S,  Status, Seq#, P-LDEV# M CTG CM EM E-Seq# E-LDEV#
horc0 ... 62496   18.P-VOL  PSUS,  62496     19   - -   C  V  30053  30
horc0 ... 62496   19.S-VOL  SSUS,  -----     18   - -   C  -  -      -
CM: Displays the copy mode.
N → non-SnapShot
S → SnapShot. For the SMPL state, this shows that the pair volume will be created as SnapShot.
C → Cruising Copy
EM: Displays the external connection mode.
H → Mapped E-lun, hidden from the host
V → Mapped E-lun, visible to the host
‘-’ → Unmapped to the E-lun
BH → Mapped E-lun, hidden from the host, but the LDEV is blocked
BV → Mapped E-lun, visible to the host, but the LDEV is blocked
B → Unmapped to the E-lun, but the LDEV is blocked
E-Seq#: Displays the production (serial) number of the external LUN. ‘Unknown’ is shown as ‘-’.
E-LDEV#: Displays the LDEV# of the external LUN. ‘Unknown’ is shown as ‘-’.
(4) Command for discovering an external volume via the device file
External volumes can also be discovered by using the inqraid command.
Example in Linux:
# ls /dev/sd* | ./inqraid -CLI
DEVICE_FILE  PORT   SERIAL  LDEV   CTG  H/M/12  SSID  R:Group  PRODUCT_ID
sdh          CL2-G  63528   15360  -    s/s/ss  0100  5:01-09  OPEN-V
sdu          CL2-G  63528   2755   -    s/s/ss  000B  S:00001  OPEN-0V
sdv          CL2-G  63528   2768   -    s/s/ss  000B  U:00000  OPEN-0V
sdw          CL2-G  63528   2769   -    s/s/ss  000B  E:16384  OPEN-V
■ R:Group: This item displays the physical position of an LDEV according to the LDEV mapping in the RAID storage system.

LDEV mapping   R:                              Group
RAID group     RAID level (1 → RAID1,          RAID group number - sub number
               5 → RAID5, 6 → RAID6)
SnapShot SVOL  S                               Pool ID number
Unmapped       U                               00000
External LUN   E                               External group number
Example in Linux:
# ls /dev/sd* | ./inqraid
/dev/sdh -> CHNO = 0 TID = 1 LUN = 1
[SQ] CL2-G Ser = 63528 LDEV =15360 [HITACHI ] [OPEN-V          ]
HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 1- 9] SSID = 0x0100
/dev/sdu -> CHNO = 0 TID = 1 LUN = 14
[SQ] CL2-G Ser = 63528 LDEV =2755 [HITACHI ] [OPEN-V          ]
HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
SNAPS[PoolID 0001] SSID = 0x000B
/dev/sdv -> CHNO = 0 TID = 1 LUN = 15
[SQ] CL2-G Ser = 63528 LDEV =2768 [HITACHI ] [OPEN-V          ]
HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
UNMAP[Group 00000] SSID = 0x000B
/dev/sdw -> CHNO = 0 TID = 1 LUN = 16
[SQ] CL2-G Ser = 63528 LDEV =2769 [HITACHI ] [OPEN-V          ]
HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
E-LUN[Group 16384] SSID = 0x000B
■ Group: This item shows the physical position of an LDEV according to the LDEV mapping in the RAID storage system.

LDEV mapping   Display format
RAID group     RAID1[Group group number - sub number]
               RAID5[Group group number - sub number]
               RAID6[Group group number - sub number]
SnapShot SVOL  SNAPS[PoolID pool ID number]
Unmapped       UNMAP[Group 00000]
External LUN   E-LUN[Group external group number]
4.22.3 Relations between “cc” Command Issues and Status
The migration volumes can be handled by issuing the CCI pair creation and pair splitting commands. The validity of the specified operation is checked according to the status of the paired volume (primary volume).
Table 4.52 shows the relations between the migration volume statuses and command acceptance.
Table 4.52 Command Issues and Pairing Status Transition

Pairing Status   Pair Creation (CC: -m cc)                  Pair Splitting (Simplex: -S)
(1) SMPL         Accepted ((2); then (2)→(3) or (2)→(4))    Acceptable
(2) COPY         Acceptable                                 Accepted (→(1))
(3) PSUS         Rejected                                   Accepted (→(1))
(4) PSUE         Rejected                                   Accepted (→(1))
PDUB             Rejected                                   Rejected

Explanation of terms in Table 4.52:
■ Accepted: The command is accepted and executed. When the command execution succeeds, the status changes to the status of the shown number.
■ Acceptable: No operation is executed, though the command is accepted.
■ Rejected (the shaded portions of the original table): Command execution is rejected and the operation terminates abnormally.
Notes:
■ Other commands and options (e.g., pairresync) for operating a paired volume are rejected.
■ The “-m cc” option cannot be specified with the “-split” option in the same command.
4.22.4 Restrictions for Volume Migration
Volume migration must be used within the following restrictions:
■ ShadowImage (HOMRCF): The operation for the volume migration must be performed in the “SMPL”, “PAIR”, or “COPY” state. Otherwise, the “paircreate -m cc” command is rejected with EX_CMDRJE or EX_CMDIOE. Also, HOMRCF cannot operate on a CC S-VOL that is moving in Cruising Copy. While a CC S-VOL is being copied, the copy operation for the volume migration is stopped if a pairsplit command for HOMRCF is executed.
(The figure shows a P-VOL and an S/P-VOL with ShadowImage pairs in the COPY or PAIR state on some MU#s, and CC S-VOLs on the remaining MU#s.)
■ TrueCopy (HORC): The operation for the volume migration must be performed in the “SMPL” or “PSUS” state. Otherwise, the “paircreate -m cc” command is rejected with EX_CMDRJE or EX_CMDIOE. Also, HORC cannot operate on a CC S-VOL that is being copied in Cruising Copy. Conversely, while a CC S-VOL is being copied, the copy operation for the volume migration is stopped if a pairresync command for HORC is executed.
(The figure shows a TrueCopy P-VOL/S-VOL pair in the PSUS state, with CC S-VOLs on MU#1 and MU#2.)
■ LDEV type for CC: The volume of the external connection used for the volume migration must be mapped to an LDEV as OPEN-V.
Chapter 5 Troubleshooting
This chapter contains the following resources to address issues that you may encounter while
working with the CCI software:
■ General Troubleshooting (section 5.1)
■ Changing the IO Way of the Command Device for AIX (section 5.2)
■ Error Reporting (section 5.3)
■ Calling the Hitachi Data Systems Support Center (section 5.4)
5.1 General Troubleshooting
If you have a problem with the CCI software, first make sure that the problem is not being caused by the UNIX/PC server hardware or software, and try restarting the server. Table 5.1 provides operational notes and restrictions for CCI operations.
For maintenance of Hitachi TrueCopy and ShadowImage volumes, if a failure occurs, it is important to find the failure in the paired volumes, recover the volumes, and continue operation in the original system. When a CCI (HORCM), Hitachi TrueCopy, or ShadowImage failure is detected, please collect the data in the error log file and trace data (all files in the HORCM_LOG directory), and report the failure to your Hitachi Data Systems representative.
Note: Use the FD Dump Tool or FDCOPY function (refer to the Storage Navigator User’s Guide for the storage system) to copy the Storage Navigator configuration information onto diskette, and give the diskette(s) to the Hitachi Data Systems service personnel. The Storage Navigator Error Codes document for the storage system provides a list of the error codes displayed by Storage Navigator.
Table 5.1 Operational Notes for CCI Operations

Condition: Startup/shutdown restrictions
Recommended Action: When the server starts up, the secondary volume may be updated by the primary volume’s server. The secondary volume must not be mounted automatically in the startup sequence. If the secondary volume is used by the LVM, the volume group of the LVM must be deactivated. The secondary volume must be mounted in the split state or in the simplex mode. When the server starts up, the secondary volume can be activated without confirmation when the server shutdown sequence can guarantee that the secondary volume was PSUS (R/W enabled) or in the SMPL state.

Condition: Hot standby operations
Recommended Action: Hitachi TrueCopy commands cannot execute hot standby operations between the primary and secondary volumes. Use the takeover command intended for the HA configuration to execute the hot standby operation. In hot standby operation, two servers are used, and the active (primary) and standby (secondary) server programs are run alternately in each server in case of failure in one server. Follow these precautions:
– Operation across volumes: Since each Hitachi TrueCopy command causes the server software to handle the volumes volume by volume, a single volume should not be partitioned in a way that allows it to be used by several servers.
– Using LVM and Hitachi TrueCopy together: When constructing the LVM on the paired volume in the mutual hot standby configuration, the LVM logical volumes must be constructed in units of volume to prevent the volumes from being mounted by the LVM.

Condition: Coexistence of LVM mirror and Hitachi TrueCopy
Recommended Action: When the LVM mirror and Hitachi TrueCopy volumes are used together, the LVM mirror handles write errors and changes the volumes. Thus, the fence level of the volumes used by the LVM mirror must be set to data.

Condition: Using paired volumes in a single host
Recommended Action: When constructing paired volumes in a single host, it is necessary to activate two or more CCI instances. To activate two or more CCI instances, instance numbers must be assigned using the environment variable HORCMINST, and the HORCM and Hitachi TrueCopy/ShadowImage commands must possess this environment variable. A configuration definition file and a log directory must be set for each instance, and the command device described in the configuration definition file must be defined for each instance. If a command device is used between different instances on the same SCSI port, the maximum number of instances per command device is 16. If this number is exceeded, use a different SCSI path for each instance.

Condition: Sharing volumes in a hot standby configuration
Recommended Action: When a paired volume is used as the disk shared by the hosts in a hot standby configuration using HA software, use the primary volume as the shared disk and describe the corresponding hosts using the paired volume in the configuration definition file as illustrated below. In the HA configuration, if a TrueCopy command issued by host C fails in host B (because host B has gone down and/or an IO_ERROR occurred on the command device), host A is connected and the command execution is retried.
(The figure shows an HA configuration with Host A and Host B sharing the primary volume, Host C attached to the secondary volume, and the paired volume between them.)

Condition: Linkage with HA software
Recommended Action: The HORC Manager must not be an object of process monitoring by the HA software (cluster manager), because HORCM should run at the same level as the cluster manager. Cooperation with HA software is done by activating the takeover command from the shell script activated by the cluster manager in units of the package software. Note: A paired volume cannot be used as the cluster lock disk that the HA software uses for election.

Condition: Maintenance
Recommended Action: A restart of HORCM is required if the storage system configuration is changed (e.g., microcode exchange, cache memory install/uninstall). Hitachi TrueCopy only: in the case of an error (e.g., a single error in cache memory) that affects the pair volume and is accompanied by maintenance work, the pairresync or paircreate command cannot execute the copy and is rejected.

Condition: Command device
Recommended Action: Each Hitachi TrueCopy/ShadowImage command is executed by issuing a command to the command device; the command is read or written from/into a specific block area of the command device. Therefore, the command device cannot be used by the user. In addition, this device must not belong to an LVM volume group. For Windows systems, do not assign a drive letter to the command device, to prevent utilization by general users.

Condition: SCSI alternate path restrictions
Recommended Action: If the P-VOL and S-VOL are on the same server, an alternate path from the P-VOL to the S-VOL cannot be used. Use of a SCSI alternate path to a volume pair is limited to among primary (secondary) volumes. Alternate pathing using Hitachi Path Manager (Safe Path) is limited to primary volumes.

Condition: Horctakeover (Swap-Takeover)
Recommended Action: When executing horctakeover on a standby server manually, I/O on the active server must be stopped. When the package software fails over to a standby server under HA software control, the HA software must guarantee I/O insulation of the active server.

Condition: HORCM failure to activate
Recommended Action: After a new system has been constructed, a failure to activate HORCM may occur due to improper environment settings and/or configuration definitions by the user. Refer to the HORCM activation log, and correct the setting(s).

Condition: Abnormal termination of a command
Recommended Action: Refer to the command log file and HORCM log file to identify the cause of the error. If a command terminates abnormally because of a remote server failure, recover the server from the failure, then re-execute the command. If HORCM has shut down, restart HORCM. If an unrecoverable error occurs, obtain the log files (see Table A.2) and contact the Hitachi Data Systems Support Center.
Hitachi Command Control Interface (CCI) User and Reference Guide
339
Condition: Error in paired volume operation
Recommended Action: Hitachi TrueCopy only: If an error occurs in duplicated writing in paired volumes (i.e., pair suspension), the server software using the volumes may detect the error by means of the fence level of the paired volume. In such a case, check the error notification command or the syslog file to identify the failed paired volume.
The system administrator can confirm that duplicated writing in a paired volume is suspended due to a failure and that the system is running in a regressed state by using the error notification command of Hitachi TrueCopy. HORCM monitors failures in paired volumes at regular intervals; when it detects a failure, it outputs it to the host’s syslog file. Thus, the system administrator can detect the failure by checking the syslog file. Concerning the operation of the RAID storage system, the failure can also be found on the Remote Console PC (or SVP) provided.
Issue the Hitachi TrueCopy commands manually to the identified failed paired volume to try to recover it. If the secondary volume is proved to be the failed volume, issue the pair resynchronization command to recover it. If the primary volume fails, delete the paired volume (pair splitting to simplex) and use the secondary volume as the substitute volume.

Condition: About the “/var(usr)/tmp” directory
Recommended Action: CCI uses “/var/tmp” or “/usr/tmp” as the directory for the UNIX domain socket for IPC (Inter-Process Communication), and creates the directory and files as “/var/tmp/.lcm*” in CCI version 01-16-06 or earlier.
Caution: This “/var/tmp/.lcm*” must not be removed while HORCM is running.
In the case of Red Hat Linux, cron executes the following “/etc/cron.daily/tmpwatch” file by default:
------------------------------------------------------------
/usr/sbin/tmpwatch 240 /tmp
/usr/sbin/tmpwatch 720 /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
    if [ -d "$d" ]; then
        /usr/sbin/tmpwatch -f 720 $d
    fi
done
------------------------------------------------------------
The command on the second line removes the “/var/tmp/.lcm*” directory 720 hours after HORCM start-up, even though the CCI command is still in use.
Action: The administrator therefore needs to add the following command to avoid this problem:
------------------------------------------------------------
/bin/touch -c /var/tmp/.lcm* 2>/dev/null
/usr/sbin/tmpwatch 240 /tmp
/usr/sbin/tmpwatch 720 /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
    if [ -d "$d" ]; then
        /usr/sbin/tmpwatch -f 720 $d
    fi
done
------------------------------------------------------------
5.1.1 About Linux Kernel 2.6.9.XX Support for ioctl(SG_IO)
RAID Manager currently uses ioctl(SCSI_IOCTL_SEND_COMMAND) for sending control commands to the command device. However, on RHEL 4.0 using kernel 2.6.9.XX, the following message is output to the syslog file (/var/log/messages) with every ioctl():
program horcmgr is using a deprecated SCSI ioctl, please convert it to SG_IO
This seems to originate from the following kernel code in drivers/scsi/scsi_ioctl.c, as a warning that ioctl(SCSI_IOCTL_…) in kernel 2.6.9.XX does not handle an error of the HBA driver correctly.
-------------------------------------------------------------------------------------------
/* Check for deprecated ioctls ... all the ioctls which don't follow the new unique
numbering scheme are deprecated */
switch (cmd) {
case SCSI_IOCTL_SEND_COMMAND:
case SCSI_IOCTL_TEST_UNIT_READY:
case SCSI_IOCTL_BENCHMARK_COMMAND:
case SCSI_IOCTL_SYNC:
case SCSI_IOCTL_START_UNIT:
case SCSI_IOCTL_STOP_UNIT:
printk(KERN_WARNING "program %s is using a deprecated SCSI "
"ioctl, please convert it to SG_IO\n", current->comm);
-------------------------------------------------------------------------------------------
Thus, RAID Manager supports changing to ioctl(SG_IO) automatically for the horcmgr and inqraid commands, if the Linux kernel supports ioctl(SG_IO). However, at a customer site, RAID Manager may encounter a Linux kernel that does not fully support ioctl(SG_IO). With this in mind, RAID Manager also supports forcing the use of ioctl(SCSI_IOCTL_SEND_COMMAND) by defining either the following environment variable or the “/HORCM/etc/USE_OLD_IOCTL” file (size=0).
For Example:
export USE_OLD_IOCTL=1
horcmstart.sh 10
HORCM/etc:
-rw-r--r-- 1 root root
0 Nov 11 11:12 USE_OLD_IOCTL
-r--r--r-- 1 root sys 32651 Nov 10 20:02 horcm.conf
-r-xr--r-- 1 root sys 282713 Nov 10 20:02 horcmgr
5.2 Changing the IO Way of the Command Device for AIX
RAID Manager tries to use ioctl(DK_PASSTHRU), the SCSI pass-through way, as much as possible; if that fails, it changes to RAW_IO, following the conventional way. Even so, RAID Manager may encounter an AIX FCP driver that does not fully support ioctl(DK_PASSTHRU) at a customer site. With this in mind, RAID Manager also supports forcing the use of RAW_IO by defining either the following environment variable or the “/HORCM/etc/USE_OLD_IOCTL” file (size=0).
For Example:
export USE_OLD_IOCTL=1
horcmstart.sh 10
HORCM/etc:
-rw-r--r-- 1 root root
0 Nov 11 11:12 USE_OLD_IOCTL
-r--r--r-- 1 root sys 32651 Nov 10 20:02 horcm.conf
-r-xr--r-- 1 root sys 282713 Nov 10 20:02 horcmgr
5.3 Error Reporting
Table 5.2 lists and describes the HORCM system log messages, their causes, and the recommended actions for resolving the error conditions. Table 5.3 lists and describes the command error messages and their return values, and provides guidelines for resolving the error conditions.
Table 5.2 System Log Messages

HORCM_001
Condition: The HORCM log file cannot be opened.
Cause: The file cannot be created in the HORCM directory.
Recommended Action: Create space on the disk on which the root directory resides.

HORCM_002
Condition: The HORCM trace file cannot be opened.
Cause: The file cannot be created in the HORCM directory.
Recommended Action: Create space on the disk on which the root directory resides.

HORCM_003
Condition: The HORCM daemon process cannot create a child process due to an error.
Cause: The HORCM daemon attempted to create more processes than the maximum allowable number.
Recommended Action: Terminate unnecessary programs or daemon processes running simultaneously.

HORCM_004
Condition: HORCM assertion failed, resulting in a fatal internal error in the HORCM.
Cause: An internal error which could not be identified by the HORCM occurred.
Recommended Action: Restart the system, and call the Hitachi Data Systems Support Center.

HORCM_005
Condition: The CCI software failed to create the end point for remote communication.
Cause: HORCM failed to create a socket, or an error exists in the HORCM configuration file ($HORCM_CONF).
Recommended Action: Refer to the HORCM startup log to identify the cause of the error.

HORCM_006
Condition: HORCM memory allocation failed.
Cause: HORCM memory could not be secured.
Recommended Action: Increase the system virtual memory, or close any unnecessary programs.

HORCM_007
Condition: An error exists in the HORCM setup file.
Cause: An error exists in the HORCM setup file.
Recommended Action: Refer to the startup log and reset the parameters.

HORCM_008
Condition: HORCM configuration file parameters could not be read.
Cause: An error exists in the format or parameters of the HORCM configuration file ($HORCM_CONF).
Recommended Action: Refer to the HORCM startup log to identify the cause of the error.

HORCM_009
Condition: HORC/HOMRCF connection to the CCI software failed.
Cause: System devices are improperly connected, or an error exists in the HORCM configuration file.
Recommended Action: Refer to the HORCM startup log to identify the cause of the error.

HORCM_101
Condition: HORC/HOMRCF and CCI software communication fails.
Cause: A system I/O error occurred, or an error exists in the HORCM configuration file ($HORCM_CONF).
Recommended Action: Refer to the HORCM startup log to identify the cause of the error.

HORCM_102
Condition: The volume is suspended.
Cause: The pair status was suspended due to code XXXX.
Recommended Action: Call the Hitachi Data Systems Support Center.

HORCM_103
Condition: Detected a validation check error on this volume (xxxx unit#x,ldev#x): CfEC=n, MNEC=n, SCEC=n, BNEC=n.
Cause: A validation error occurred on the database volume, or the validation parameters for this volume are illegal.
Recommended Action: Please confirm the following items, and use the raidvchkdsp -v <op> command to verify the validation parameters:
(1) Check if the block size (-vs <size>) is an appropriate size.
(2) Check if the type for checking (-vt <type>) is an appropriate type.
(3) Check if the data validations are disabled for LVM configuration changes.
(4) Check if the data validations are not used based on the file system.
(5) Check if the redo log and data file are separated among the volumes.
Table 5.3 Command Error Messages

EX_COMERR (return value 255): “Can’t be communicated with HORC Manager”
Condition: This command failed to communicate with the CCI software.
Recommended Action: Verify that HORCM is running by using UNIX commands [ps -ef | grep horcm].

EX_REQARG (return value 254): “Required Arg list”
Condition: An option or the arguments of an option are not sufficient.
Recommended Action: Please designate the correct option using the -h option.

EX_INVARG (return value 253): “Invalid argument”
Condition: An option or the arguments of an option are incorrect.
Recommended Action: Please designate the correct option using the -h option.

EX_UNWOPT (return value 252): “Unknown option”
Condition: An unknown option was designated.
Recommended Action: Please designate the correct option using the -h option.

EX_ATTHOR (return value 251): “Can’t be attached to HORC Manager”
Condition: Could not connect with HORCM.
Recommended Action: Please verify that HORCM is running and/or that HORCMINST is set correctly.

EX_ATTDBG (return value 250): “Can’t be attached to a Debug layer”
Condition: Failed to communicate with HORCM, or cannot make a log directory file.
Recommended Action: Verify that HORCM is running by using UNIX commands [ps -ef | grep horcm].

EX_INVNAM (return value 249): “Invalid name of option”
Condition: The name specified in an argument of an option is not appropriate.
Recommended Action: Please designate the correct option using the -h option.

EX_OPTINV (return value 248): “A specified option is invalid”
Condition: Detected a contradiction in information that the RAID storage system reported.
Recommended Action: Call the Hitachi Data Systems Support Center.

EX_ENOENT (return value 247): “No such device or group”
Condition: The designated device or group name does not exist in the configuration file.
Recommended Action: Verify the device or group name and add it to the configuration file of the remote and local hosts.

EX_ENODEV (return value 246): “No such device”
Condition: The designated device name does not exist in the configuration file.
Recommended Action: Verify the device name and add it to the configuration file of the remote and local hosts.

EX_ENOUNT (return value 219): “No such RAID unit”
Condition: The designated RAID unit ID does not exist in the configuration file.
Recommended Action: Verify the RAID unit ID and add it to the configuration file of the remote and local hosts.

EX_ENQSER (return value 218): “Unmatched Serial# vs RAID unitID”
Condition: The group designated by ShadowImage paircreate does not have the same RAID unit, or the unit ID is not identical to the unit ID in the same RAID serial# (Seq#).
Recommended Action: Please confirm the serial# (Seq#) using the pairdisplay command, or confirm the serial# (Seq#) of the RAID storage system using the raidqry -r command.

EX_ENOMEM (return value 245): “Not enough core”
Condition: Insufficient memory exists.
Recommended Action: Increase the virtual memory capacity of the system, or close any unnecessary programs and/or daemon processes.

EX_ERANGE (return value 244): “Result too large”
Condition: Tried to use arguments for an option beyond the maximum allowed, or a result beyond the maximum was created.
Recommended Action: Please refer to the error message, and designate an appropriate value.

EX_ENAMLG (return value 243): “File name too long”
Condition: Undefined error.
Recommended Action: Call the Hitachi Data Systems Support Center.

EX_ENORMT (return value 242): “No remote host alive for remote commands, or remote HORCM might be blocked (sleeping) on an existing I/O”
Condition: A timeout occurred on remote communication, and HORC Manager failed to re-execute.
Recommended Action: Please confirm that the HORC Manager in the remote host is running, and then increase the value of the timeout in the configuration file.

EX_INVMOD (return value 241): “Invalid RAID command mode”
Condition: Detected a contradiction for a command.
Recommended Action: Call the Hitachi Data Systems Support Center.
EX_INVCMD (return value 240): “Invalid RAID command”
Condition: Detected a contradiction for a command.
Recommended Action: Call the Hitachi Data Systems Support Center.

EX_ENOGRP (return value 239): “No such group”
Condition: The designated device or group name does not exist in the configuration file, or the network address for remote communication does not exist.
Recommended Action: Verify the device or group name and add it to the configuration file of the remote and local hosts.

EX_UNWCOD (return value 238): “Unknown function code”
Condition: Detected a contradiction for a command.
Recommended Action: Call the Hitachi Data Systems Support Center.

EX_CMDIOE (return value 237): “Control command I/O error”
Condition: A read/write to the command device failed with an I/O error.
Recommended Action: Refer to the host syslog file, and investigate the cause of the error. If the problem persists, call the Hitachi Data Systems Support Center.

EX_CMDRJE (return value 221): “An order to the control/command device was rejected”
Condition: The request to the command device failed or was rejected. Note: This error code is sometimes caused by the operating system and reported as EX_CMDIOE instead of EX_CMDRJE (see the next entry).
Recommended Action: Verify that the Hitachi TrueCopy/ShadowImage functions are installed. Verify that the ports (RCP, LCP, etc.) are set. Verify that the CU paths have been established. Verify that the target volume is available. CCI displays “SSB” in the output of the commands so a service representative can identify the cause of EX_CMDRJE (except for Tru64 and DYNIX). Example:
# paircreate -g G1 -f never -vl -nocopy
paircreate: [EX_CMDRJE] An order to the control/command device was rejected
Refer to the command log (/HORCM/log10/horcc_u1-1.log) for details.
It was rejected due to SKEY=0x05, ASC=0x26, SSB=0xB9BF,0xB9C7 on Serial#(63502).

EX_CMDIOE (return value 237): “Control command I/O error or rejected”
Condition: A read/write to the command device failed with an I/O error or was rejected.
Recommended Action: Refer to the host syslog file, and investigate the cause of the error. If the cause is an “Illegal Request (0x05)” Sense Key, please confirm the following items. If the problem persists, call the Hitachi Data Systems Support Center. Verify that the Hitachi TrueCopy/ShadowImage functions are installed. Verify that the ports (RCP, LCP, etc.) are set. Verify that the CU paths have been established. Verify that the target volume is available.

EX_ENQVOL (return value 236): “Unmatched volume status within the group”
Condition: The volume attribute or the fence level within a group is not identical.
Recommended Action: Confirm the status using the pairdisplay command. Make sure all volumes in the group have the same fence level and volume attributes.

EX_EVOLCE (return value 235): “Pair Volume combination error”
Condition: The combination of volumes is unsuitable between the remote and local host.
Recommended Action: Confirm the volume status using the pairdisplay command, and change the combination of volumes properly.

EX_EWSUSE (return value 234): “Pair suspended at WAIT state”
Condition: Detected a suspended status (PSUE) for the paired volume before it reached the designated status.
Recommended Action: Please issue the pairresync command manually to the identified failed paired volume to try to recover it. If the trouble persists, call the Hitachi Data Systems Support Center.
EX_EWSTOT (return value 233): “Timeout waiting for specified status”
Condition: Detected a timeout before the volume reached the designated status.
Recommended Action: Please increase the value of the timeout using the -t option.

EX_EWSLTO (return value 232): “Timeout waiting for specified status on the local host”
Condition: Timeout error because the remote host did not give notice of the expected status in time.
Recommended Action: Please confirm that HORC Manager on the remote host is running.

EX_ESTMON (return value 231): “HORCM Monitor stopped”
Condition: HORC Manager monitoring was refused.
Recommended Action: Please confirm the value of “poll” in the configuration file.

EX_UNWCMD (return value 230): “Unknown command”
Condition: An unknown command was attempted.
Recommended Action: Please confirm the command name.

EX_INCSTG (return value 229): “Inconsistent status in group”
Condition: The pair status of a volume within a group is not identical to the status of the other volumes in the group.
Recommended Action: Please confirm the pair status using the pairdisplay command.

EX_INVSTP (return value 228): “Invalid pair status”
Condition: The pair status of the target volume is not appropriate.
Recommended Action: Please confirm the pair status using the pairdisplay command.

EX_INVVOL (return value 222): “Invalid volume status”
Condition: The volume status of the target volume is not appropriate.
Recommended Action: Please confirm the pair status using the pairdisplay -l command.

EX_INVMUN (return value 220): “Invalid mu# with HORC or HOMRCF”
Condition: The MU# of the volume to be operated is not appropriate.
Recommended Action: Please confirm the MU# for the specified group using the pairdisplay command. MU #1/2 cannot be used for Hitachi TrueCopy, and MU #1/2 must be P-VOL for ShadowImage.

EX_ENLDEV (return value 227): “No such LDEV within the RAID”
Condition: A device defined in the configuration file does not have a mapping to a real LUN and target ID within the RAID storage system.
Recommended Action: Please confirm that the Port, Target ID, and LUN are defined correctly under HORCM_DEV in the configuration file.

EX_INVRCD (return value 226): “Invalid return code”
Condition: Wrong return code.
Recommended Action: Call the Hitachi Data Systems Support Center.

EX_VOLCUR (return value 225): “S-Vol currency error”
Condition: Currency check error for the S-VOL. Cannot guarantee identical data on the S-VOL.
Recommended Action: Check the volume list to see if an operation was directed to the wrong S-VOL.

EX_VOLCUE (return value 224): “Local volume currency error”
Condition: The volume specified with the SVOL-takeover command is not the same as the P-VOL.
Recommended Action: Please confirm the pair status of the local volume.

EX_VOLCRE (return value 223): “Local and remote vol. currency error”
Condition: The combination of the volumes specified with Swap-takeover is unsuitable.
Recommended Action: Please confirm the pair status of the remote and local volumes using the pairdisplay command.

EX_UNWERR (return value --): “Unknown error code”
Condition: Wrong error code.
Recommended Action: Call the Hitachi Data Systems Support Center.

EX_ENOCTG (return value 217): “Not enough CT groups in the RAID”
Condition: The CTGID could not be registered because it is beyond the maximum number of CT groups (0-255 for USP V/VM, 0-255 for USP/NSC, 0-127 for 9900V, 0-63 for 9900, 0-15 for 7700E) for an async volume.
Recommended Action: Choose an existing CTGID (use pairvolchk to display CTGIDs). Use the ‘-f async <CTGID>’ option of the paircreate command to force the pair into a pre-existing CTGID.

EX_EXTCTG (return value 216): “Extended CT group across RAIDs”
Condition: A Hitachi TrueCopy Async or ShadowImage volume is defined in the configuration file (HORCM_CONF) as a group that extends across storage systems.
Recommended Action: Please confirm the serial # of the volumes by using the pairdisplay command to verify that the CT group is contained completely within one RAID storage system.
346
Chapter 5 Troubleshooting
Error Code
Error Message
Condition
Recommended Action
Value
EX_ENXCTG
No CT groups left
for OPEN Vol use.
An available CT group for OPEN
Volume does not exist (TrueCopy
Async or ShadowImage).
Please confirm whether all CT groups are already 215
used by mainframe volumes (TC and TC390
Async, SI and SI390).
EX_ENQCTG
Unmatched CTGID
within the group
The CT group references within a
group do not have an identical
CTGID.
Please confirm the CTGID using the pairvolchk
command and/or confirm that group references
within the configuration file (HORCM_CONF)
refer to the same CT group.
214
EX_ENPERM
EX_ENQSIZ
EX_ERPERM
EX_ESVOLD
Permission denied
with the LDEV
A device mentioned in the
configuration file does not have a
permission for a pair-operation.
Please confirm if a device which a pair-operation
was permitted by using the pairdisplay or
‘raidscan -find verify’ command.
213
212
211
Unmatched volume
size for pairing
Size of a volume is unsuitable
between the remote and local
volume.
Please confirm volume size or number of LUSE
volume using the ‘raidscan -f’ command, and
make sure the volume sizes are identical.
Permission denied
with the RAID
A storage system (RAID) mentioned
in the configuration file does not
have a permission for CCI.
Please confirm if the type of storage system is
permitted for a CCI by using the ‘inqraid -CLI’ and
‘raidqry -h’ commands.
SVOL denied due to A target volume for SVOL is denied
be disabling
Please confirm whether a target volume is setting 209
to SVOL disabling by using ‘inqraid -fl’ or
‘raidvchkscan -v gflag’ command.
to become SVOL via LDEV
guarding.
EX_ENOSUP
EX_EPRORT
Micro code not
supported
The storage system does not
support a function for CCI.
Please confirm the microcode version by using
the ‘raidqry -l’ command.
210
Mode changes
denied due to
retention time
A target volume is denied to be
changing due to retention time via
LDEV guarding.
Please confirm the retention time for a target
volume using ‘raidvchkscan -v gflag’ command.
208
EX_ESPERM
EX_ENOPOL
Permission denied
with the SLPR
A specified command device does
not have a permission to access
other SLPR.
Please make the SLPR so that the target port and 207
the command device belongs to the same SLPR.
Not enough Pool in
RAID
Could not retain the pool for
executing a command due to be
exceeded the threshold rate.
Please deletes unnecessary/earlier generations
paired volume, or re-synchronizes
206
unnecessary/earlier generations split volume.
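Many of the recommended actions above share the same pattern: confirm the status with pairdisplay, retry the recovery command, then re-confirm. The following is an illustrative sketch only (the group name G1 is hypothetical), showing a manual recovery attempt for a pair reported with EX_EWSUSE:

# pairdisplay -g G1     <- confirm the pair status (look for PSUE volumes)
# pairresync -g G1      <- re-issue pairresync manually to try to recover the pair
# pairdisplay -g G1     <- verify that the pair returns to the COPY/PAIR status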
The following generic error codes apply to horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairevtwait, pairvolchk, pairsyncwait, and pairdisplay. For an unrecoverable error, handle the error code without re-executing the command. For a recoverable error, the command can be re-executed after handling the error code.

Table 5.4 Generic Error Codes (horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairevtwait, pairvolchk, pairsyncwait, pairdisplay)
Category: Syntax for Argument (Unrecoverable)
  Error Code   Error Message                                           Value
  EX_REQARG    Required Arg list                                       254
  EX_INVARG    Invalid argument                                        253
  EX_INVNAM    Invalid name of option                                  249
  EX_UNWOPT    Unknown option                                          252
  EX_UNWCOD    Unknown function code                                   238
  EX_UNWCMD    Unknown command                                         230
  EX_ERANGE    Result too large                                        244
  EX_ENAMLG    File name too long                                      243

Category: Configuration (Unrecoverable)
  EX_INVRCD    Invalid return code                                     226
  EX_ENOGRP    No such group                                           239
  EX_ENOENT    No such device or group                                 247
  EX_ENODEV    No such device                                          246
  EX_ENLDEV    No such LDEV within the RAID                            227
  EX_ENOUNT    No such RAID unit                                       219
  EX_INVMUN    Invalid mu# with HORC or HOMRCF                         220
  EX_ENQSER    Unmatched Serial# vs RAID unitID                        218
  EX_EXTCTG    Extended CTgroup across RAIDs                           216
  EX_ENQCTG    Unmatched CTGID within the group                        214
  EX_ENPERM    Permission denied with the LDEV                         213
  EX_ERPERM    Permission denied with the RAID                         211
  EX_ESPERM    Permission denied with the SLPR                         207

Category: Command I/O to RAID (Recoverable)
  EX_CMDRJE    An order to the control/command device was rejected     221
  EX_CMDIOE    Control command I/O error, or rejected                  237
  EX_OPTINV    A specified option is invalid                           248
  EX_INVMOD    Invalid RAID command mode                               241
  EX_INVCMD    Invalid RAID command                                    240

Category: Communication for HORCM (Recoverable)
  EX_ATTHOR    Cannot attach to HORC manager                           251
  EX_ATTDBG    Cannot attach to a Debug layer                          250
  EX_COMERR    Cannot communicate with HORC manager                    255
  EX_ENORMT    No remote host alive for remote commands, or remote     242
               CCI might be blocked (sleeping) on an existing I/O.

Category: Resource (Unrecoverable)
  EX_ENOMEM    Not enough core                                         245
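Because a CCI command returns its error code as the command's exit status (for example, the command log in Appendix A shows pairdisplay exiting with 239 = EX_ENOGRP), a script can branch on recoverable versus unrecoverable values. The following is a minimal sketch only, not a prescribed procedure: the group name G1, the retry interval, and the selected codes are illustrative, and the recoverable values shown are the Communication-for-HORCM codes listed above.

#!/bin/sh
# Retry a CCI command only when its error code is recoverable.
paircreate -g G1 -vl
rc=$?
case $rc in
0)    echo "paircreate completed" ;;
242|250|251|255)                  # EX_ENORMT, EX_ATTDBG, EX_ATTHOR, EX_COMERR
      sleep 30                    # wait, then re-execute once
      paircreate -g G1 -vl ;;
*)    echo "unrecoverable error code $rc: do not re-execute" >&2
      exit $rc ;;
esac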
The following generic error codes apply to raidscan, raidqry, raidar, and horcctl. For an unrecoverable error, handle the error code without re-executing the command. For a recoverable error, the command can be re-executed after handling the error code.

Table 5.5 Generic Error Codes (raidscan, raidqry, raidar, horcctl)
Category: Syntax for Argument (Unrecoverable)
  Error Code   Error Message                          Value
  EX_REQARG    Required Arg list                      254
  EX_INVARG    Invalid argument                       253
  EX_INVNAM    Invalid name of option                 249
  EX_UNWOPT    Unknown option                         252
  EX_UNWCOD    Unknown function code                  238
  EX_UNWCMD    Unknown command                        230
  EX_ERANGE    Result too large                       244
  EX_ENAMLG    File name too long                     243

Category: Configuration (Unrecoverable)
  EX_INVRCD    Invalid return code                    226
  EX_ENLDEV    No such LDEV within the RAID           227
  EX_ENOUNT    No such RAID unit                      219
  EX_INVMUN    Invalid mu# with HORC or HOMRCF        220
  EX_ERPERM    Permission denied with the RAID        211
  EX_ENOSUP    Micro code not supported               210
  EX_ESPERM    Permission denied with the SLPR        207

Category: Command I/O to RAID (Recoverable)
  EX_CMDIOE    Control command I/O error              237
  EX_OPTINV    A specified option is invalid          248
  EX_INVMOD    Invalid RAID command mode              241
  EX_INVCMD    Invalid RAID command                   240

Category: Communication for HORCM (Recoverable)
  EX_ATTHOR    Cannot attach to HORC manager          251
  EX_ATTDBG    Cannot attach to a Debug layer         250
  EX_COMERR    Cannot communicate with HORC manager   255

Category: Resource (Unrecoverable)
  EX_ENOMEM    Not enough core                        245
The following specific error codes apply to horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairevtwait, pairvolchk, pairsyncwait, and raidvchkset. For an unrecoverable error, handle the error code without re-executing the command. For a recoverable error, the command can be re-executed (except for EX_EWSTOT of horctakeover) after handling the error code.
Table 5.6 Specific Error Codes

Category: Volume Status (Unrecoverable)
  Error Code   Error Message                                           Value
  EX_ENQVOL    Unmatched volume status within the group                236
  EX_INCSTG    Inconsistent status in group                            229
  EX_INVVOL    Invalid volume status                                   222
  EX_EVOLCE    Pair Volume combination error                           235
  EX_INVSTP    Invalid pair status                                     228
  EX_VOLCUR    S-VOL currency error                                    225
  EX_VOLCUE    Local Volume currency error                             224
  EX_VOLCRE    Local and Remote Volume currency error                  223
  EX_EWSUSE    Pair suspended at WAIT state                            234
  EX_ENQSIZ    Unmatched volume size for pairing                       212
  EX_ESVOLD    SVOL denied due to being disabled                       209
  EX_EPRORT    Mode changes denied due to retention time               208

Category: Timer (Recoverable)
  EX_EWSTOT    Timeout waiting for specified status                    233
  EX_EWSLTO    Timeout waiting for specified status on the local host  232

Category: Resource (Unrecoverable)
  EX_ENOCTG    Not enough CT groups in the RAID                        217
  EX_ENXCTG    No CT groups left for OPEN Vol use                      215
  EX_ENOPOL    Not enough Pool in RAID                                 206
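For the Timer category, the recommended recovery is to lengthen the wait rather than treat the timeout as fatal. As an illustrative sketch only (the group name G1, the status, and the timeout value are hypothetical), a wait that failed with EX_EWSTOT could be retried with a larger -t value:

# pairevtwait -g G1 -s psus -t 600    <- wait longer for the pair to reach the PSUS status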
5.4 Calling the Hitachi Data Systems Support Center
If you need to call the Hitachi Data Systems Support Center, please provide as much information about the problem as possible, including:
•  The Storage Navigator configuration information saved on diskette using the FD Dump Tool or FDCOPY function (see the Storage Navigator User's Guide for the storage system).
•  The circumstances surrounding the error or failure.
•  The exact content of any error messages displayed on the host system(s).
•  The remote service information messages (R-SIMs) logged by Storage Navigator and the reference codes and severity levels of the recent R-SIMs.
The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, please call:
•  United States: (800) 446-0744
•  Outside the United States: (858) 547-4526
Appendix A Maintenance Logs and Tracing Functions
A.1 Log Files
The CCI software (HORCM) and the Hitachi TrueCopy/ShadowImage commands maintain internal logs and traces which can be used to identify the causes of errors and to keep records of the status transition history of the paired volumes.
HORCM logs are classified into start-up logs and execution logs. The start-up logs contain data on errors which occur before HORCM becomes ready to provide services. Thus, if HORCM fails to start up due to an improper environment setting, users should refer to the start-up logs to resolve the problem. The HORCM execution logs (error log, trace, and core files) contain data on errors which are caused by software or hardware problems. These logs contain internal error data which does not apply to any user settings, so users do not need to refer to the HORCM execution logs. When an error occurs during execution of a command, data on the error is collected in the command log file. Users may refer to the command log file if a command execution error occurs.
Figure A.1 Logs and Traces. (The figure shows the command execution environment, in which each command writes the command log file, command traces, and command core file to its log directory, and the HORCM execution environment, in which HORCM writes the HORCM start-up logs, HORCM logs, HORCM traces, and HORCM core file to its log directory.)
The start-up log, error log, trace, and core files are stored as shown in Table A.1. The user should specify the directories for the HORCM and command log files using the HORCM_LOG and HORCC_LOG environment variables. If HORCM cannot create the log files, or if an error occurs before the log files are created, the error logs are output in the system log file. If the HORCM activation fails, the system administrator should check the system log file and follow the recommended actions for the messages output there. The system log file for UNIX-based systems is the syslog file. The system log file for Windows-based systems is the event log file.
Table A.1 Log Files

File          UNIX-Based Systems                       Windows-Based Systems
Start-up log  HORCM start-up log:                      HORCM start-up log:
              $HORCM_LOG/horcm_HOST.log                $HORCM_LOG\horcm_HOST_log.txt
              Command log:                             Command log:
              $HORCC_LOG/horcc_HOST.log                $HORCC_LOG\horcc_HOST_log.txt
              $HORCC_LOG/horcc_HOST.oldlog             $HORCC_LOG\horcc_HOST_oldlog.txt
Error log     HORCM error log:                         HORCM error log:
              $HORCM_LOG/horcmlog_HOST/horcm.log       $HORCM_LOG\horcmlog_HOST\horcm_log.txt
Trace         HORCM trace:                             HORCM trace:
              $HORCM_LOG/horcmlog_HOST/horcm_PID.trc   $HORCM_LOG\horcmlog_HOST\horcm_PID_trc.txt
              Command trace:                           Command trace:
              $HORCM_LOG/horcmlog_HOST/horcc_PID.trc   $HORCM_LOG\horcmlog_HOST\horcc_PID_trc.txt
Core          HORCM core:                              HORCM core:
              $HORCM_LOG/core_HOST_PID/core            $HORCM_LOG\core_HOST_PID\core
              Command core:                            Command core:
              $HORCM_LOG/core_HOST_PID/core            $HORCM_LOG\core_HOST_PID\core
Note: HOST denotes the host name of the corresponding machine. PID denotes the process
ID of that machine.
The location of the directory which contains the log file depends on the user’s command
execution environment and the HORCM execution environment. The command trace file and
core file reside together under the directory specified in the HORCM execution environment.
A directory specified using the environmental variable HORCM_LOG is used as the log
directory in the HORCM execution environment. If no directory is specified, the directory
/tmp is used. A directory specified using the environmental variable HORCC_LOG is used as
the log directory in the command execution environment. If no directory is specified, the
directory /HORCM/log* is used (* = instance number). A nonexistent directory may be
specified as a log directory using the environmental variable.
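For example, the following minimal sketch selects the log directories before starting an instance (the directory paths and the instance number 0 are illustrative; horcmstart.sh is the standard start-up script):

# HORCM_LOG=/var/horcm/log0       <- HORCM log, trace, and core directory
# HORCC_LOG=/var/horcm/cmdlog0    <- command log directory
# export HORCM_LOG HORCC_LOG
# horcmstart.sh 0                 <- start CCI instance 0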
Table A.2 Log Directories

Directory Name  Definition
$HORCM_LOG      A directory specified using the environmental variable HORCM_LOG. The HORCM log file,
                trace file, and core file as well as the command trace file and core file are stored in
                this directory. If no environmental variable is specified, "/HORCM/log/curlog" is used.
$HORCC_LOG      A directory specified using the environmental variable HORCC_LOG. The command log file is
                stored in this directory. If no environmental variable is specified, the directory
                "/HORCM/log*" is used (* is the instance number). While HORCM is running, the log files
                are stored in the $HORCM_LOG directory shown in (a). When HORCM starts up, the log files
                created in operation are stored automatically in the $HORCM_LOGS directory shown in (b).
                a. HORCM log file directory in operation:
                   $HORCM_LOG = /HORCM/log*/curlog (* is the instance number)
                b. HORCM log file directory for automatic storing:
                   $HORCM_LOGS = /HORCM/log*/tmplog (* is the instance number)
A.2 Trace Files
The command trace file is used for maintenance aimed at troubleshooting; it is not created during normal operation. If the cause of an error cannot be identified using the log file, the environmental variables or trace control commands with trace control parameters are issued to start tracing, and the trace file is created. The trace control parameters include the trace level, file size, mode, etc. More detailed tracing is enabled by increasing the trace level. Tracing wraps around within the range of the file size. HORCM creates its trace file according to the trace level specified in the HORCM start-up shell script used to activate HORCM.
A.3 Trace Control Command
The trace control command (one of the HORCM control commands) sets or changes the trace control parameters. This command is used for troubleshooting and maintenance. If the trace control parameters cannot be specified using the environmental variables in the user's command execution environment, the trace control parameters can be changed using the trace control command.
Table A.3 Trace Command Parameters

Parameter                 Function
Trace level parameter     Specifies the trace level, range = 0 to 15.
Trace size parameter      Specifies the trace file size in KB.
Trace mode parameter      Specifies the buffer mode or non-buffer mode for writing data in the trace file.
Trace type parameter      Specifies the trace type defined internally.
Trace change instruction  Specifies either the command or the HORCM (CCI instance) for which the trace
                          control parameters are changed.
A.4 Logging Commands for Audit
RAID Manager previously supported logging of command errors only, so that logging function cannot be used to audit the scripts issuing the commands. RAID Manager therefore also supports logging the results of command executions by expanding the current logging. This function has the following control parameters.
•  $HORCC_LOGSZ variable
   This variable is used to specify a maximum size (in units of KB) and to enable normal logging for the current command. The '/HORCM/log*/horcc_HOST.log' file is moved to the '/HORCM/log*/horcc_HOST.oldlog' file when it reaches the specified maximum size. If this variable is not specified or is specified as 0, only command errors are logged, as before.
   This variable can be defined as an environment variable and/or in the 'horcc_HOST.conf' file, as discussed below.
   For example, setting a 2-MB size:
      HORCC_LOGSZ=2048
      export HORCC_LOGSZ
•  /HORCM/log*/horcc_HOST.conf file
   This file is used to describe the 'HORCC_LOGSZ' variable and the masking variables for logging. If 'HORCC_LOGSZ' is not specified as an environment variable, the 'HORCC_LOGSZ' variable in this file is used. If neither variable is specified, only command errors are logged, as before.
   •  HORCC_LOGSZ variable
      This variable must be described in the following format, for example:
         HORCC_LOGSZ=2048
   •  The masking variable
      This variable is used to mask (disable) the logging by specifying a combination of command and exit code (except for inqraid and the EX_xxx error codes). This variable is valid for a NORMAL exit.
      If the user executes pairvolchk repeatedly at a fixed interval (e.g., every 30 seconds), they may not want every invocation to be logged. They can mask it by specifying HORCC_LOGSZ=0 as shown below; however, they will need to change their scripts if the tracing is ON.
      For example, masking pairvolchk in a script:
         export HORCC_LOGSZ=0
         pairvolchk -g xxx -s
         unset HORCC_LOGSZ
The masking feature enables tracing without changing the scripts, and it is available for all RM commands (except inqraid and the EX_xxx error codes). For example, to mask pairvolchk (which returns 22) and raidqry, specify the following:
   pairvolchk=22
   raidqry=0
Users can track the behavior of their scripts, and then decide what to mask by auditing the command logging file as needed.
Relationship between the environment variable and horcc_HOST.conf
Whether logging is performed depends on the $HORCC_LOGSZ environment variable and/or the horcc_HOST.conf file, as shown below.

$HORCC_LOGSZ   horcc_HOST.conf              Behavior
=value         Don't care                   Tracing within this APP
=0             Don't care                   No tracing within this APP
Unspecified    HORCC_LOGSZ=value            Global tracing within this RM instance
Unspecified    HORCC_LOGSZ=0                Global no tracing within this RM instance
Unspecified    Unspecified or nonexistent   Use the default value (0): the same as logging command errors only
•  /HORCM/log* directory
[root@raidmanager log9]# ls -l
total 16
drwxr-xr-x 3 root root    4096 Oct 27 17:33 curlog
-rw-r--r-- 1 root root    3936 Oct 27 17:36 horcc_raidmanager.log
-rw-r--r-- 1 root root 2097452 Oct 27 17:29 horcc_raidmanager.oldlog
-rw-r--r-- 1 root root      46 Oct 27 17:19 horcc_raidmanager.conf
drwxr-xr-x 3 root root    4096 Oct 27 17:19 tmplog
•  /HORCM/log*/horcc_HOST.log file
COMMAND NORMAL : EUserId for HORC : root (0) Tue Nov 1 12:21:53 2005
CMDLINE : pairvolchk -ss -g URA
12:21:54-2d27f-10090- [pairvolchk][exit(32)]
COMMAND NORMAL : EUserId for HORC : root (0) Thu Oct 27 17:36:32 2005
CMDLINE : raidqry -l
17:36:32-3d83c-17539- [raidqry][exit(0)]
COMMAND ERROR : EUserId for HORC : root (0) Thu Oct 27 17:31:28 2005
CMDLINE : pairdisplay -g UR
17:31:28-9a206-17514- ERROR:cm_sndrcv[rc < 0 from HORCM]
17:31:28-9b0a3-17514- [pairdisplay][exit(239)]
[EX_ENOGRP] No such group
[Cause ]:The group name which was designated or the device name doesn't exist in the
configuration file, or the network address for remote communication doesn't exist.
[Action]:Please confirm if the group name exists in the configuration file of the local and
remote host
•  /HORCM/log*/horcc_HOST.conf file
# For Example
HORCC_LOGSZ=2048
#The masking variable
#This variable is used to disable the logging by the command and exit code.
#For masking below log pairvolchk returned '32'(status is SVOL_COPY)
#COMMAND NORMAL : EUserId for HORC : root (0) Tue Nov 1 12:21:53 2005
#CMDLINE : pairvolchk -ss -g URA
#12:21:54-2d27f-10090- [pairvolchk][exit(32)]
pairvolchk=32
pairvolchk=22
Appendix B Updating and Uninstalling CCI
B.1 Uninstalling UNIX CCI Software
After verifying that the CCI software is not running, you can uninstall the CCI software. If the CCI software is still running when you want to uninstall it, shut down the CCI software using the horcmshutdown.sh command to ensure a normal end to all TrueCopy/ShadowImage functions.
Caution: Before uninstalling CCI, make sure that all device pairs are in simplex status.
To uninstall the CCI software from the root directory, issue the uninstall command, go to the root directory, and delete the HORCM directory (see Figure B.1). To uninstall the CCI software from a non-root directory, issue the uninstall command, go to the root directory, delete the HORCM link, and delete the HORCM directory (see Figure B.2).

#/HORCM/horcmuninstall.sh    <- Issue the uninstall command.
#cd /                        <- Change directories.
#rm -rf /HORCM               <- Delete the CCI directory.
Figure B.1 Uninstalling the CCI Software from a Root Directory
#/HORCM/horcmuninstall.sh                <- Issue the uninstall command.
#cd /                                    <- Change directories.
#rm /HORCM                               <- Delete the CCI link.
#rm -rf /non-root_directory_name/HORCM   <- Delete the CCI directory.
Figure B.2 Uninstalling the CCI Software from a Non-Root Directory
B.2 Upgrading UNIX CCI Software
After verifying that CCI is not running, you can upgrade the CCI software. If CCI is still
running when you want to upgrade software versions, shut down the CCI software using the
horcmshutdown.sh command to ensure a normal end to all Hitachi TrueCopy/ShadowImage
functions. To upgrade the CCI software in a UNIX environment, follow the installation instructions provided in Chapter 3.
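For example, a minimal sketch of the upgrade sequence (the instance numbers 0 and 1 are illustrative):

# horcmshutdown.sh 0 1    <- stop the running CCI instances normally
(install the new CCI version as described in Chapter 3)
# raidqry -h              <- confirm the installed CCI version after the upgrade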
B.3 Uninstalling Windows CCI Software
After verifying that the CCI software is not running, you can uninstall the CCI software. If
the CCI software is still running when you want to uninstall, shut down the CCI software
using the horcmshutdown command to ensure a normal end to all TrueCopy/ShadowImage
functions.
Caution: Before uninstalling the CCI software, make sure that all device pairs are in simplex
mode.
To uninstall the CCI software:
1. On the Control Panel, select the Add/Remove Programs option.
2. When the Add/Remove Program Properties panel opens, choose the Install/Uninstall tab
and select CCI/HORC from the program products list.
3. Click Add/Remove to remove the CCI software.
B.4 Upgrading Windows CCI Software
After verifying that the CCI software is not running, you can upgrade the CCI software. If the
CCI software is still running when you want to upgrade software versions, shut down the CCI
software using the horcmshutdown command to ensure a normal end to all Hitachi TrueCopy
and/or ShadowImage functions. To upgrade the CCI software:
1. On the Control Panel, select the Add/Remove Programs option.
2. When the Add/Remove Program Properties panel opens, choose the Install/Uninstall tab
and select CCI/HORC from the program products list.
3. Click Add/Remove to remove the CCI software.
4. Insert the program product CD or floppy disk into the server, and on the Start menu choose the Run command.
5. When the Run window opens, enter A:\Setup.exe (where A: is the floppy or CD drive) in the Open pull-down list box.
6. When the InstallShield window opens, follow the on-screen instructions to install the CCI software.
7. Reboot the Windows server, and verify that the correct version of the CCI software is
running on your system by executing the raidqry -h command.
Appendix C Fibre-to-SCSI Address Conversion
Disks connected with fibre channel display as SCSI disks on UNIX hosts, and disks connected with fibre-channel connections can be fully utilized.
Figure C.1 Example Fibre Address Conversion. (The figure shows a fibre AL_PA with LU #0 through LU #n being converted through the conversion table to a SCSI target ID with LU #0 through LU #n.)
Note: Use fixed address AL_PA (0xEF) when using iSCSI.
CCI converts fibre-channel physical addresses to SCSI target IDs (TIDs) using a conversion table that depends on the host operating system. Table C.1 shows the limits for target IDs and LUNs on the supported operating systems.
Table C.1 Limits for Target IDs and LUNs

              HP-UX, other Systems    Solaris, IRIX Systems    Windows Systems
Port          TID        LUN          TID        LUN           TID        LUN
Fibre/iSCSI   0 to 15    0 to 1023    0 to 125   0 to 1023     0 to 31    0 to 1023
SCSI          0 to 15    0 to 7       0 to 15    0 to 7        0 to 15    0 to 7
Conversion table for Windows: The conversion table for Windows is based on conversion by an Emulex driver. If the fibre-channel adapter is different (e.g., Qlogic, HP), the target ID indicated by the raidscan command may be different from the target ID on the Windows host.
Figure C.2 shows an example of using the raidscan command to display the TID and LUN of Harddisk6 (HP driver). Note: You must start HORCM without the descriptions of HORCM_DEV or HORCM_INST in the configuration definition file because the TIDs and LUNs are unknown.
C:\>raidscan -pd hd6 -x drivescan hd6
Harddisk 6... Port[ 2] PhId[ 4] TId[ 3] Lun[ 5] [HITACHI ] [OPEN-3 ]
Port[CL1-J] Ser#[ 30053] LDEV#[ 14(0x00E)]
HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 1- 2] SSID = 0x0004
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL1-J / e2/4, 29, 0.1(9).............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 1.1(10)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 2.1(11)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 3.1(12)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 4.1(13)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 5.1(14)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/4, 29, 6.1(15)............SMPL ---- ------ ----, ----- ----
Specified device is LDEV# 0014
Figure C.2 Using Raidscan to Display TID and LUN for Fibre-Channel Devices
In this case, the target ID indicated by the raidscan command must be used in the
configuration definition file. This can be accomplished using either of the following two
methods:
•  Using the default conversion table: use the TID# and LU# indicated by the raidscan command in the HORCM configuration definition file (TID=29, LUN=5 in Figure C.2).
•  Changing the default conversion table: change the default conversion table using the HORCMFCTBL environment variable (see Figure C.3):

C:\> set HORCMFCTBL=X    <- 'X' is the fibre conversion table number.
C:\> horcmstart ...      <- Start HORCM.

Result of the 'set HORCMFCTBL=X' command:
C:\>raidscan -pd hd6 -x drivescan hd6
Harddisk 6... Port[ 2] PhId[ 4] TId[ 3] Lun[ 5] [HITACHI ] [OPEN-3 ]
Port[CL1-J] Ser#[ 30053] LDEV#[ 14(0x00E)]
HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 1- 2] SSID = 0x0004
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S,Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL1-J / e2/0, 3, 0.1(9).............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 1.1(10)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 2.1(11)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 3.1(12)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 4.1(13)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 5.1(14)............SMPL ---- ------ ----, ----- ----
CL1-J / e2/0, 3, 6.1(15)............SMPL ---- ------ ----, ----- ----
Specified device is LDEV# 0014
Figure C.3 Using HORCMFCTBL to Change the Default Fibre Conversion Table
C.1 LUN Configurations on the RAID Storage Systems
The Hitachi RAID storage systems (9900V and later) manage the LUN configuration on a port
through the LUN security as shown in Figure C.4.
Figure C.4 LUN Configuration. (The figure shows the absolute LUNs 0 through MAX on a port being divided by LUN security into Group A (LUN 0 - N), Group B (LUN N+1 - M), and Group C (LUN M+1 - MAX); the LUNs on each group are renumbered from 0 and mapped to hosts by WWN: WWN1-WWN3 for Host1, WWN4-WWN5 for Host2, and WWN6 for Host3.)
Explanation of terms:
Group: A group name registered by the LUN security configuration on the port.
WWN: The WWN list of a group registered by the LUN security configuration on the port.
MAX: The maximum LUN: 2048 for USP V/VM, 2048 for USP/NSC, 512 for 9900V.
CCI uses absolute LUNs to scan a port, whereas the LUNs on a group are mapped to the host system, so that the target ID and LUN indicated by the raidscan command will be different from the target ID and LUN shown by the host system. In this case, the target ID and LUN indicated by the raidscan command should be used. You must start HORCM without a description for HORCM_DEV and HORCM_INST because the target ID and LUN are unknown. Use the port, target ID, and LUN displayed by the raidscan -find command (see Figure C.5).
# ls /dev/rdsk/* | raidscan -find
DEVICE_FILE        UID S/F PORT  TARG LUN SERIAL LDEV PRODUCT_ID
/dev/rdsk/c0t0d4     0  S  CL1-M    0   4  31168  216 OPEN-3-CVS-CM
/dev/rdsk/c0t0d1     0  S  CL1-M    0   1  31168  117 OPEN-3-CVS
/dev/rdsk/c1t0d1     -  -  CL1-M    -   -  31170  121 OPEN-3-CVS
Figure C.5 Displaying the Port, TID, and LUN Using raidscan
UID: displays the unit ID for a multiple-RAID configuration. If UID is displayed as '-', the command device for HORCM_CMD was not found.
S/F: displays whether the port is SCSI or fibre.
PORT: displays the RAID storage system port number.
TARG: displays the target ID (converted by the fibre conversion table; see the next section).
LUN: displays the logical unit number (converted by the fibre conversion table).
SERIAL: displays the production number (serial#) of the RAID storage system.
LDEV: displays the LDEV# within the RAID storage system.
PRODUCT_ID: displays the product-id field in the STD inquiry page.
C.2 Fibre Address Conversion Tables
•  Table number 0 = HP-UX systems (see Table C.2)
•  Table number 1 = Solaris and IRIX systems (see Table C.3)
•  Table number 2 = Windows systems (see Table C.4)
Note: The conversion table for Windows systems is based on the Emulex driver. If a different fibre-channel adapter is used, the target ID indicated by the raidscan command may be different from the target ID indicated by the Windows system.
Note on Table 3 for other platforms: Table 3 is used to indicate the LUN without a target ID when the FC_AL conversion table is unknown or fibre-channel fabric (fibre-channel world wide name) is used. In this case the target ID is always zero, so Table 3 is not described in this document. Table 3 is used as the default for platforms other than those listed above. If the host uses the WWN notation for the device files, then this table number should be changed by using the $HORCMFCTBL variable.
Note: If the TID displayed on the system is different from the TID indicated in the fibre address conversion table, you must use the TID (and LU#) returned by the raidscan command to specify the device(s).
Table C.2 Fibre Address Conversion Table for HP-UX Systems (Table 0)

TID   C0     C1     C2     C3     C4     C5     C6     C7
      AL-PA  AL-PA  AL-PA  AL-PA  AL-PA  AL-PA  AL-PA  AL-PA
 0    EF     CD     B2     98     72     55     3A     25
 1    E8     CC     B1     97     71     54     39     23
 2    E4     CB     AE     90     6E     53     36     1F
 3    E2     CA     AD     8F     6D     52     35     1E
 4    E1     C9     AC     88     6C     51     34     1D
 5    E0     C7     AB     84     6B     4E     33     1B
 6    DC     C6     AA     82     6A     4D     32     18
 7    DA     C5     A9     81     69     4C     31     17
 8    D9     C3     A7     80     67     4B     2E     10
 9    D6     BC     A6     7C     66     4A     2D     0F
10    D5     BA     A5     7A     65     49     2C     08
11    D4     B9     A3     79     63     47     2B     04
12    D3     B6     9F     76     5C     46     2A     02
13    D2     B5     9E     75     5A     45     29     01
14    D1     B4     9D     74     59     43     27     -
15    CE     B3     9B     73     56     3C     26     -
Table C.3 Fibre Address Conversion Table for Solaris and IRIX Systems (Table 1)

C0          C1          C2          C3          C4          C5          C6          C7
AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID
EF     0    CD    16    B2    32    98    48    72    64    55    80    3A    96    25   112
E8     1    CC    17    B1    33    97    49    71    65    54    81    39    97    23   113
E4     2    CB    18    AE    34    90    50    6E    66    53    82    36    98    1F   114
E2     3    CA    19    AD    35    8F    51    6D    67    52    83    35    99    1E   115
E1     4    C9    20    AC    36    88    52    6C    68    51    84    34   100    1D   116
E0     5    C7    21    AB    37    84    53    6B    69    4E    85    33   101    1B   117
DC     6    C6    22    AA    38    82    54    6A    70    4D    86    32   102    18   118
DA     7    C5    23    A9    39    81    55    69    71    4C    87    31   103    17   119
D9     8    C3    24    A7    40    80    56    67    72    4B    88    2E   104    10   120
D6     9    BC    25    A6    41    7C    57    66    73    4A    89    2D   105    0F   121
D5    10    BA    26    A5    42    7A    58    65    74    49    90    2C   106    08   122
D4    11    B9    27    A3    43    79    59    63    75    47    91    2B   107    04   123
D3    12    B6    28    9F    44    76    60    5C    76    46    92    2A   108    02   124
D2    13    B5    29    9E    45    75    61    5A    77    45    93    29   109    01   125
D1    14    B4    30    9D    46    74    62    59    78    43    94    27   110
CE    15    B3    31    9B    47    73    63    56    79    3C    95    26   111
Table C.4 Fibre Address Conversion Table for Windows Systems (Table 2)

C5 (PhId5)   C4 (PhId4)              C3 (PhId3)              C2 (PhId2)              C1 (PhId1)
AL-PA TID    AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID
EF     1     E4    30    CC    15    B1    30    98    15    72    30    56    15    3C    30    27    15
E8     0     E2    29    CB    14    AE    29    97    14    71    29    55    14    3A    29    26    14
             E1    28    CA    13    AD    28    90    13    6E    28    54    13    39    28    25    13
             E0    27    C9    12    AC    27    8F    12    6D    27    53    12    36    27    23    12
             DC    26    C7    11    AB    26    88    11    6C    26    52    11    35    26    1F    11
             DA    25    C6    10    AA    25    84    10    6B    25    51    10    34    25    1E    10
             D9    24    C5     9    A9    24    82     9    6A    24    4E     9    33    24    1D     9
             D6    23    C3     8    A7    23    81     8    69    23    4D     8    32    23    1B     8
             D5    22    BC     7    A6    22    80     7    67    22    4C     7    31    22    18     7
             D4    21    BA     6    A5    21    7C     6    66    21    4B     6    2E    21    17     6
             D3    20    B9     5    A3    20    7A     5    65    20    4A     5    2D    20    10     5
             D2    19    B6     4    9F    19    79     4    63    19    49     4    2C    19    0F     4
             D1    18    B5     3    9E    18    76     3    5C    18    47     3    2B    18    08     3
             CE    17    B4     2    9D    17    75     2    5A    17    46     2    2A    17    04     2
             CD    16    B3     1    9B    16    74     1    59    16    45     1    29    16    02     1
                         B2     0          -    73     0          -    43     0          -    01     0
Acronyms and Abbreviations
3DC            three-data-center
AL-PA          arbitrated loop-physical address
AOU            allocation on use (another name for Hitachi Dynamic Provisioning)
BMP            bitmap
C RTL          C Run-Time Library
CCI            Command Control Interface
CD-ROM         compact disk - read-only memory
CLPR           Cache Logical Partition
CM             Cluster Manager
COW            Copy-on-Write
CTGID          consistency group ID
CU             control unit
CVS            custom volume size
DB             database
DFW            DASD fast write
DRU            Data Retention Utility
ELBA           ending logical block address
ESCON          Enterprise System Connection (IBM trademark for optical channels)
FC             fibre-channel
FCP            fibre-channel protocol
FIFO           first in, first out
GB             gigabyte
GUI            graphical user interface
HA             high availability
HACMP          High Availability Cluster Multiprocessing
HARD           Hardware Assisted Resilient Data
hdisk          hard disk
HDLM           Hitachi Dynamic Link Manager
HDP            Hitachi Dynamic Provisioning
HOMRCF         Hitachi Open Multi-RAID Coupling Feature (old name for ShadowImage)
HORC           Hitachi Open Remote Copy (old name for TrueCopy)
HORCM          HORC Manager
HRX            Hitachi RapidXchange
HWM            high water mark
I/O            input/output
INST           instance number
KB             kilobytes
LBA            logical block address
LCP            local control port
LDEV           logical device
LDKC           logical disk controller (used for USP V/VM)
LDM            Logical Disk Manager
LU             logical unit
LUN            logical unit number
LUSE           Logical Unit Size Expansion
LV             logical volume
LVM            logical volume manager
MB             megabytes
MCU            main control unit (Hitachi TrueCopy only)
MRCF           Multi-RAID Coupling Feature (refers to ShadowImage)
MSCS           Microsoft Cluster Server
MU             mirrored unit
NSC            Hitachi TagmaStore Network Storage Controller
OPS            Oracle Parallel Server
OS             operating system
PB             petabyte
PC             personal computer system
PCSI           PolyCenter Software Installation
PnP            Plug-and-Play
PV             physical volume
P-VOL          primary volume
RAID600, R600  factory model number for the Universal Storage Platform V/VM
RAID500, R500  factory model number for the TagmaStore USP/NSC
RAID450, R450  factory model number for the Lightning 9900V
RAID400, R400  factory model number for the Lightning 9900
R/W, RD/WR     read/write
RCP            remote control port (used for Hitachi TrueCopy)
RCU            remote control unit (used for Hitachi TrueCopy)
RD             read
RM             RAID Manager (another name for CCI)
S/W            software
SCSI           small computer system interface
SF             sidefile
SI             ShadowImage
SLPR           Storage Logical Partition
SVC            service console
S-VOL          secondary volume
SVP            service processor
TB             terabyte
TC             TrueCopy
TID            target ID
UR             Hitachi Universal Replicator
USP            Universal Storage Platform
VPM            Virtual Partition Manager
V-VOL          virtual volume
VxVM           VERITAS Volume Manager
WR             write