DFSMS/MVS V1R4 Technical Guide: June 1997
SG24-4892-00
International Technical Support Organization
Take Note!
Before using this information and the product it supports, be sure to read the general information in
Appendix B, “Special Notices” on page 143.
This edition applies to Version 1, Release 4 of DFSMS/MVS, Program Number 5695-DF1 for use with the MVS/ESA
platform and OS/390 operating system.
Warning
This book is based on a pre-GA version of a product and may not apply when the product becomes generally
available. It is recommended that, when the product becomes generally available, you destroy all copies of
this version of the book that you have in your possession.
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The Team That Wrote This Redbook . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments Welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 2. DFSMSdfp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1 Space Allocation Failure Reduction . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.1 Allocations Affected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.2 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.3 Volume Selection Failure Recovery . . . . . . . . . . . . . . . . . . . . 8
2.1.4 New Allocations (VSAM and non-VSAM) . . . . . . . . . . . . . . . . . 9
2.1.5 Extents on New Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.6 Non-VSAM Data Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.7 VSAM Data Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.8 Exceptions to Using Volume Selection Failure Recovery . . . . . . . . 10
2.1.9 ISMF Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.10 ISMF Message Changes . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.11 Migration and Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 SAM Tailored Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Tailored Compression and Generic Compression . . . . . . . . . . . . 13
2.2.2 Sampling the Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.3 Enablement of Tailored Compression . . . . . . . . . . . . . . . . . . . 15
2.2.4 Wellness Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.5 Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.6 Functional Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.7 Migration and Coexistence Considerations . . . . . . . . . . . . . . . . 18
2.3 O/C/EOV Serviceability Enhancements . . . . . . . . . . . . . . . . . . . . . 19
2.3.1 New Trace Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.2 Enabling IFGOCETR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.3 IFGOCETR Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.4 Migration and Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4 Enhanced Protection of Checkpointed Sequential DASD Data Sets . . . . 22
2.5 Optical Access Method SMF Recording Enhancement . . . . . . . . . . . . 29
2.5.1 Start Time and End Time Accuracy . . . . . . . . . . . . . . . . . . . . 30
2.6 Program Management 3 (PM3) . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.6.1 Program Management - Background . . . . . . . . . . . . . . . . . . . 32
2.6.2 PM3 Enhancements for DFSMS/MVS V1R4 . . . . . . . . . . . . . . . . 33
Chapter 4. DFSMShsm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1 Duplex Tape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1.1 Automatic TAPECOPY Scheduling . . . . . . . . . . . . . . . . . . . . . 52
4.1.2 Supported DFSMShsm Functions . . . . . . . . . . . . . . . . . . . . . . 52
4.1.3 Invocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.1.4 Initial Tape Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.1.5 TAPECOPY Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.1.6 Tape Exception Processing . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.1.7 AUDIT Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.1.8 LIST Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.1.9 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.1.10 Migration and Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2 Alter without Recall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2.1 New IDCAMS Interface with Catalog . . . . . . . . . . . . . . . . . . . . 56
4.2.2 New Catalog Notification of DFSMShsm . . . . . . . . . . . . . . . . . . 56
4.2.3 DFSMShsm Processing Considerations . . . . . . . . . . . . . . . . . . 56
4.2.4 DFSMShsm Error Processing . . . . . . . . . . . . . . . . . . . . . . . . 57
4.3 ABARS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.3.1 Output File Stacking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.3.2 Up to 64 Concurrent Requests . . . . . . . . . . . . . . . . . . . . . . . 58
4.3.3 Invocation of ARCBEEXT Extended to DFSMSdss Processing . . . . . 58
4.3.4 GDG Base Name in ALLOCATE Statement . . . . . . . . . . . . . . . . 59
4.3.5 Automatic Delete of ABARS Activity Log during Roll-Off Processing 59
4.3.6 CPU Time for Aggregate Processing in WWFSR . . . . . . . . . . . . . 59
4.3.7 TGTGDS and OPTIMIZE Keyword Externalized . . . . . . . . . . . . . 60
4.3.8 ISMF Changes for ABARS Accounting Information . . . . . . . . . . . 61
4.3.9 Migration and Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.4 CDS Record Level Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.4.1 Invocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.4.2 CDS Access in Record Level Sharing Mode . . . . . . . . . . . . . . . 63
4.4.3 QUERY CONTROLDATASETS Command . . . . . . . . . . . . . . . . . 63
4.4.4 Multicluster CDSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.4.5 CDS Creation or Redefinition . . . . . . . . . . . . . . . . . . . . . . . . 63
4.4.6 DCOLLECT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4.7 ARCIMPRT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4.8 CDS Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4.9 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.4.10 Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Chapter 6. DFSMSrmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.1 Journal Usage Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.1.1 Update PARMLIB Member EDGRMMxx . . . . . . . . . . . . . . . . . . 79
6.1.2 Automating Control Data Set Backup and Journal Clearing . . . . . . 80
6.1.3 Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.1.4 TSO Subcommand Variables by Name . . . . . . . . . . . . . . . . . . 82
6.1.5 Migration and Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.2 Nonintrusive Backup of the CDS . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.2.1 Functional Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.2.2 Migration and Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.2.3 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.3 Problem Determination Aid Trace . . . . . . . . . . . . . . . . . . . . . . . . 85
6.3.1 Functional Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.3.2 Migration and Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.3.3 Support Use Information . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.3.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.4 Support for DFSMShsm Alternate Tape Processing . . . . . . . . . . . . . 88
6.5 Recognition of External Data Managers . . . . . . . . . . . . . . . . . . . . 89
6.6 DFSMSrmm Inventory Management Trial Runs . . . . . . . . . . . . . . . . 90
6.6.1 Inventory Management Processing . . . . . . . . . . . . . . . . . . . . 91
6.6.2 Inventory Management VRSEL Processing . . . . . . . . . . . . . . . . 92
6.7 Migration and Coexistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.7.1 New EXEC Parameters for EDGHSKP . . . . . . . . . . . . . . . . . . . 93
6.7.2 New PARMLIB Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.7.3 New and Changed Messages . . . . . . . . . . . . . . . . . . . . . . . . 94
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Figures
1. Tailored Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2. Wellness Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3. Generic and Tailored Compression Measurements . . . . . . . . . . . . 16
4. Dictionary Token . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5. VSAM Loading Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . 40
6. VSAM Last Reference Date at CLOSE . . . . . . . . . . . . . . . . . . . . . 42
7. VSAM Data Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
8. DCOLLECT Command Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . 49
9. Problems with ALLMULTI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
10. Journal Usage Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
11. CDS and Journal Backup and Clearing . . . . . . . . . . . . . . . . . . . . 81
12. DFSMSrmm REXX Variables for the Journal . . . . . . . . . . . . . . . . . 82
13. DSSOPT DD Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
14. DFSMSrmm Problem Determination Aid . . . . . . . . . . . . . . . . . . . 86
15. EDGUX100 Installation Exit . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
16. Inventory Management Trial Runs . . . . . . . . . . . . . . . . . . . . . . . 92
17. DFSMS/MVS NFS client feature . . . . . . . . . . . . . . . . . . . . . . . . 96
18. The DFSMS/MVS DFM DataAgent. . . . . . . . . . . . . . . . . . . . . . . 102
19. Elements of the DFSMS/MVS Optimizer . . . . . . . . . . . . . . . . . . 116
20. ISMF Primary Option Menu . . . . . . . . . . . . . . . . . . . . . . . . . . 121
21. DFSMSdfp NaviQuest Component Primary Option Menu . . . . . . . . 121
22. Test Case Generation Selection Menu . . . . . . . . . . . . . . . . . . . 122
23. Test Case Generator from Saved ISMF List Entry Panel . . . . . . . . . 123
24. ACS Test Listings Comparison Panel . . . . . . . . . . . . . . . . . . . . 124
25. Enhanced ACS Test Listing Entry Panel . . . . . . . . . . . . . . . . . . . 125
26. Test Case Update with Test Results Entry Panel . . . . . . . . . . . . . 126
27. Batch Testing/Configuration Management Selection Menu . . . . . . . 127
28. Saved ISMF List Operations Batch Samples Selection Menu . . . . . . 127
29. Configuration Changes Batch Samples Selection Menu . . . . . . . . . 128
30. First Edit Screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
31. Second Edit Screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
32. Catalog Search Interface Invocation Structure (Parameter List) . . . . 133
33. The Catalog Search Interface Work Area . . . . . . . . . . . . . . . . . . 137
Some of the new features of DFSMS/MVS V1R4 covered in this redbook include:
• Reducing space-related outages
• DFSMShsm Duplex Tape, ABARS, and VSAM RLS CDS processing
enhancements
• DFSMSrmm trial-run capability
• System-managed buffering and load enhancements of VSAM Extended
Format KSDSs
• SAM compression improvements
• New OAM SMF records
• Improved access to enterprise data through the DFM/MVS data agent
• Catalog Search Interface
• Batch ACS testing improvements
Peter Zerbini is an Adviser in Germany. He has worked at IBM for 24 years and
has 12 years of experience in storage software and storage management. His
areas of expertise include DFSMS/MVS, DFSMS Optimizer, DASD, tape, and
optical. Peter is currently the Marketing Support and Last Level specialist for
DFSMS/MVS, DFSMS Optimizer, and optical in Germany.
Thanks to the following people for their invaluable contributions to this project:
Ed Baker
Storage Systems Division - Tucson
Charlie Burger
Advanced Technical Support Organization - San Jose
Jerry Codde
Storage Systems Division - San Jose
Ed Daray
Storage Systems Division - San Jose
Scott Drummond
DFSMS Brand Manager - San Jose
Tina Dunton
Storage Systems Division - San Jose
Nadine Hart
Storage Systems Division - San Jose
Bob Kern
Storage Systems Division - Tucson
Ron Kern
Storage Systems Division - Tucson
Larry Law
Storage Systems Division - San Jose
William Nettles
Storage Systems Division - San Jose
Deborah Norberg
Storage Systems Division - San Jose
Tony Pearson
Storage Systems Division - Tucson
Jerry Pence
Storage Systems Division - Tucson
Savur Rao
Storage Systems Division - San Jose
Carmen Yep
Storage Systems Division - San Jose
Joy Nakamura
Storage Systems Division - San Jose
Ron Ward
Storage Systems Division - Tucson
Taylor Winship
Storage Systems Division - San Jose
Mike Wood
Software Services - IBM UK
Thanks are also due to the many other collaborators and reviewers who
contributed to this book. I especially want to acknowledge our technical editor,
Maggie Cutler, whose work substantially improved the quality of this book.
Comments Welcome
Your comments are important to us!
Chapter 1. Introduction to DFSMS/MVS Version 1 Release 4
What has become apparent since DFSMS/MVS was first delivered is that many of
the most significant functional enhancements can be realized only if the data is
managed by system-managed storage (SMS). That is true of this release of
DFSMS/MVS as well, as you will discover as you read this book.
The space allocation failure reduction further reduces the incidence of these
allocation and extension failures. It applies to SMS-managed data sets only and
is limited to new data set allocations and extends on new volumes. The space
allocation failure reduction enhancement does not provide relief for data set
extensions on the current volume.
The following enhancements may make having separate storage groups for
large, medium, and small data sets less of a requirement:
• The five-extent limit in DADSM for new data set allocation and for extends to
new volumes has been removed.
• Data sets can grow to either 59 volumes (which is the limit imposed for
multivolume data sets by the task input/output table (TIOT)) or the number of
volumes in the storage group (as long as a volume in the storage group has
space), whichever is smaller.
• The maximum number of extents per component of a VSAM data set is now
255, instead of 123, for multivolume data sets.
• Data sets that currently have a 16-extent or 123-extent per volume limit will
continue to have the same limit.
Solving the allocation and extent problem results in the added benefit of not
having to run DEFRAG as often, especially in a sysplex environment. DEFRAG
can move extents of a data set only when it is not in use anywhere in the
sysplex and requires GRS ring/star or equivalent product support to ensure data
integrity.
Of the two failing cases (case 2 and case 3), space outage reduction addresses
case 2. It is based on retrying allocation failures by a combination of:
• Spreading the requested space quantity over multiple volumes
• Reducing the requested space quantity by the allowed limit
• Using more than five extents to satisfy the allocation
If a volume selection failure occurs for a reason other than space, the space
outage reduction solution will not apply.
Note: Spreading data over multiple volumes means that SMS allocates the
primary space allocation over as few volumes as possible under the constraints
imposed by the maximum number of extents that may be allocated on each
volume. (For VSAM data sets, the constraints are based on the number of
extents allowed for the entire component over all volumes.)
In the paragraphs that follow, we illustrate how SMS processes these new fields.
If a user specifies Y for SPACE CONSTRAINT RELIEF and the volume count for
the allocation is 1, SMS will redrive the allocation after reducing the requested
space quantity (which could be PRIMARY or SECONDARY space quantity) on the
basis of the REDUCE SPACE UP TO parameter. If the reduced space quantity
(the requested space multiplied by the REDUCE SPACE UP TO parameter) is less
than 1, it will be rounded up to 1. Simultaneously SMS will also allow more than
five extents to allocate this recomputed space amount.
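The retry arithmetic described above can be sketched as follows. This is an illustrative model, not actual SMS code; the function name is invented, and the reduce-by-percentage interpretation follows the description in the text.

```python
import math

def reduced_space_quantity(requested, reduce_up_to_pct):
    """Model of the REDUCE SPACE UP TO retry arithmetic (illustrative).

    requested        -- space quantity (for example, in tracks)
    reduce_up_to_pct -- the data class REDUCE SPACE UP TO value (0-99)

    A value of 0 means: keep the full amount but allow more than five
    extents. Any nonzero reduction is rounded up to at least 1.
    """
    if reduce_up_to_pct == 0:
        return requested
    reduced = requested * (100 - reduce_up_to_pct) / 100
    return max(1, math.ceil(reduced))

# Example: a 1000-track request with REDUCE SPACE UP TO (30)
# may be retried with as little as 700 tracks.
print(reduced_space_quantity(1000, 30))  # → 700
```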
Note: It is valid for a user to specify 0 space quantity for primary space (when
allocating a model data set control block). The space quantity for non-VSAM data
sets is the PRIMARY space for new allocations and the SECONDARY space for
extensions. The space quantity for EF and non-EF VSAM data sets is the
PRIMARY space for new allocations and either the PRIMARY or the SECONDARY
space for extensions on new volumes (depending on a parameter in the data
class). Such allocations should not fail for space except in the rare
circumstance where there is no space in the VTOC for allocating a DSCB. In any
event, allocations where the primary space quantity is 0 will not be retried.
If the user specifies a Y and the volume count for the allocation is more than 1,
on retry SMS will attempt to allocate the requested space using more than one
but as few volumes as possible.
For allocation, the maximum number of volumes is equal to the largest of the
unit count, volume count, and volser count. If none of these three values is
specified, the volume count in the data class definition is used. This volume
selection failure recovery method is called best-fit allocation. It applies only
during the initial allocation of a data set, not during extensions to new
volumes.
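As a hedged sketch (not actual SMS volume selection code), best-fit allocation amounts to satisfying the request on as few volumes as possible, capped at the maximum number of volumes computed above:

```python
def best_fit_volumes(requested, per_volume_free, max_volumes):
    """Illustrative best-fit allocation: satisfy the request on as few
    volumes as possible, up to max_volumes (the largest of unit count,
    volume count, and volser count).

    per_volume_free -- free space available on each candidate volume
    Returns the list of amounts taken per volume, or None on failure.
    """
    # Prefer the volumes with the most free space so fewer are needed.
    candidates = sorted(per_volume_free, reverse=True)[:max_volumes]
    taken, remaining = [], requested
    for free in candidates:
        if remaining <= 0:
            break
        amount = min(free, remaining)
        taken.append(amount)
        remaining -= amount
    return taken if remaining <= 0 else None

# A 900-track request over volumes with 300, 500, and 400 free tracks:
print(best_fit_volumes(900, [300, 500, 400], 3))  # → [500, 400]
```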
Note: For non-VSAM data sets, if the secondary space quantity is zero, the data
set is not extended on the current or new volume.
The amount of space allocated during extent processing of VSAM data sets
differs from that of non-VSAM data sets. If a VSAM data set is extended on the
current volume, the SECONDARY space specified by the user is allocated. If the
VSAM data set is extended to a new volume, the PRIMARY or SECONDARY
space specified is allocated as follows: the default is PRIMARY space; the
SECONDARY quantity is used if the data class allows it and the data set is
allocated in extended format.
If the best-fit allocation fails, SMS will retry the allocation as follows:
• If REDUCE SPACE UP TO (X%) is specified and X is between 0 and 99, SMS
will reduce the requested space quantity by X% and redrive the best-fit
allocation. A specification of 0 implies that the user wants to use more than
five extents to satisfy the allocation without reducing the allocation amount.
• SMS will use as many extents as are allowed.
The data class Define/Alter page 3 panel, DGTDCDC5, and the data class Display
page 3 panel, DGTICDC3, have been modified to accommodate the new SPACE
CONSTRAINT RELIEF and REDUCE SPACE UP TO fields. The fields have been
added to the last page, at the end, after all existing fields.
Data class List Display panel DGTLGP21, List Sort page 2 panel DGTDCDC4, List
View page 2 panel DGTDVW42, and List Print page 2 panel DGTDPR42 have
been modified to accommodate the new SPACE CONSTRAINT RELIEF and
REDUCE SPACE UP TO columns. The new columns are added at the end, or
existing entries are moved down or across columns to place the new entries in
the appropriate position.
Generally speaking, it is beneficial to specify Y for SPACE CONSTRAINT RELIEF and a
nonzero value for REDUCE SPACE UP TO unless the user application cannot
tolerate a reduction in the amount of space requested. Data class changes
apply to both new and existing data sets.
• Application programmer
Application programmers must be aware of the new messages that they may
encounter. If they expect the requested space to be satisfied within five
extents and on a single volume, the changes introduced by space outage
reduction could affect applications. If the new data class attribute
parameters are not used, applications will not be affected.
With these PTFs applied, lower-level systems can access all 255 extents of a
VSAM component that was created on a DFSMS/MVS V1R4 system. It will not
be possible, however, to extend VSAM components on lower-level systems if
they occupy 123 or more extents.
If customers do not want to use the new parameters in the data class, they need
take no action, and the system will continue to work as before. If customers
want to use the new parameters, they have to define new data classes or modify
existing definitions.
Tailored compression support does not apply to VSAM KSDSs, which can
continue to be compressed with generic DBB dictionary compression (as
illustrated in Figure 1).
Figure 1. Tailored Compression. The IGDSMSxx PARMLIB member specifies whether tailored compression or
generic DBB compression will be used. VSAM KSDSs can use only generic compression.
Standard DBBs consist of a fixed set of padding run blocks, numeric blocks, and
text strings commonly found in the English language. As such, standard generic
compression does not work well for non-English languages or for any text data
that involves repetitive strings other than common English words; for example,
any data set made up of information not found in the available DBBs, such as
the repeating occurrence of a person's name.
In contrast to generic, tailored dictionaries are built from data strings extracted
from the data to be compressed; tailored compression is therefore completely
insensitive to the language of the text. As with generic compression, tailored
compression samples the first portion of the data as it is written, choosing the
optimal data strings to use. These data strings are then assembled into a
compression dictionary and stored in the first few tracks of the SAM data set.
This compression dictionary is read and activated whenever the data set is
opened.
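The idea of tailoring can be illustrated conceptually: sample the first portion of the data and keep the most frequent strings. This sketch is only an analogy; the real dictionary format and the string selection algorithm are internal to DFSMS and are not modeled here.

```python
from collections import Counter

def sample_dictionary(data, sample_size=4096, entry_len=4, entries=16):
    """Conceptual sketch of tailored-dictionary building: sample the
    first sample_size bytes and keep the most frequent entry_len-byte
    strings. Illustrative only, not the DFSMS implementation.
    """
    sample = data[:sample_size]
    counts = Counter(sample[i:i + entry_len]
                     for i in range(len(sample) - entry_len + 1))
    return [s for s, _ in counts.most_common(entries)]

# Repetitive non-English data still yields useful dictionary entries,
# because the entries come from the data itself:
entries = sample_dictionary(b"Schmidt,Schmidt,Schmidt," * 100)
print(entries[0])  # a frequent 4-byte string taken from the data
```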
Evidence shows that for large data sets, tailored compression significantly
improves the data compression ratio while it reduces the CPU time needed to
compress a data set (when compared to generic), although this CPU time is still
much higher than not using compression at all. Therefore, customers must still
make a trade-off between CPU time costs, channel traffic, and DASD space
savings, but tailored compression significantly reduces DASD space usage for
small as well as large data sets, even when taking into consideration the cost of
storing the compression dictionary. Also, because less DASD space is used, the
elapsed time to read or write the data set is reduced.
When evaluating whether or not to use compression for all data sets, it is
important to take into consideration factors other than the size of the data sets,
such as how long the data set is expected to be kept. When a data set is reused
(either read or rewritten), the CPU cost is much less. Therefore, it may be
desirable to restructure applications so that sampling takes place less
frequently.
Compression is never used for temporary data sets. The above discussion
applies only to permanent data sets.
Customers can choose at the data set level through a DATACLAS parameter
whether or not a data set should be compressed.
For example, suppose that in DFSMS/MVS V1R3 a data set was compressed by
only 8% and the CPU time cost was 1 sec. In DFSMS/MVS V1R4, the data set
would be compressed by 1.6%, but the CPU time for sampling would be reduced
to 0.2 sec (as illustrated in Figure 2).
2.2.5 Measurements
All measurements were done using a 9672-RX73 processor and IEBGENER to
copy an existing data set. The CPU cost of compression was derived by first
measuring the CPU time when only the output data set is compressed and then
subtracting the CPU time when both the input and output data sets are
noncompressed. Likewise, the CPU cost of decompression was derived by first
measuring the CPU time when only the input data set is compressed and then
subtracting the CPU time when both the input and output data sets are
noncompressed.
In the case of tailored compression, two variations were measured. The first
time IEBGENER was run is called DISP=NEW, which includes the cost of
sampling. The second time IEBGENER was run is called DISP=OLD, which
avoids the cost of sampling. Generic compression was not measured with
DISP=OLD, but it is expected that the cost of generic sampling is relatively
small compared to the cost of tailored compression sampling.
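The subtraction method above can be expressed as a pair of small helpers. The numbers in the example are illustrative only, not taken from the measurements:

```python
def compression_cpu_cost(cpu_output_compressed, cpu_both_noncompressed):
    """CPU cost attributed to compression: the CPU time of the run
    where only the output data set is compressed, minus the baseline
    run where both input and output are noncompressed."""
    return cpu_output_compressed - cpu_both_noncompressed

def decompression_cpu_cost(cpu_input_compressed, cpu_both_noncompressed):
    """Analogous subtraction for decompression on the input side."""
    return cpu_input_compressed - cpu_both_noncompressed

# Illustrative (not measured) CPU seconds:
print(compression_cpu_cost(3.5, 1.5))  # → 2.0
```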
Note that although most of these data sets might be considered small or medium
size data sets, the same compression ratios would occur if the data were
replicated to create larger data sets.
For tailored compression with DISP=OLD, the normalized CPU time per kilobyte
for each of these data sets is in a narrow range between 330 and 355 µsec/KB.
(This range is sensitive to the CPU model.) Although generic DISP=OLD
measurements were not done, it appears likely that such a metric would land in
the same range as with tailored compression, except for data set 1, which
generic compression does not compress well. In those cases where data does
not compress well, the CPU time per KB is higher.
If it is determined that the data set is eligible for tailored compression, the new
function:
• Builds a tailored dictionary using the initial data written to the data set
• Validates the usability of the resulting dictionary against a preset
compression criterion
• Compresses the data set (including the initial sampled data), or rejects
compression for the data set if the preset compression criterion is not met
A dictionary token consists of 32 bytes and is stored in the catalog. Today, the
dictionary token may be found in output displayed by functions such as LISTCAT
and DCOLLECT, and in certain SMF records. The dictionary token is modified to
contain tailored compressed data set indicators. The first byte of the dictionary
token indicates the type of compression used for the data set (see Figure 4 on
page 18):
The second four bits in the first byte of the token contain bits to denote the
number of blocks containing system data from the beginning of the data set until
the user data starts:
The start of user data will begin in the physical block following the blocks
containing system data. A compressed data set with a tailored token or a
rejection token may contain a number of these system data blocks.
Figure 4. Dictionary Token. Tailored compression is indicated from the first four bits of the first byte.
For tailored compressed data sets, the number of system blocks is always
nonzero, as this is where the tailored dictionary is stored. When a compressed
format data set contains a rejection token, the number of system blocks may still
be nonzero.
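Assuming the nibble layout described above (compression type in the first four bits of the first token byte, system-block count in the second four), parsing that byte might be sketched as follows. The specific type values are not modeled, as they are not given here.

```python
def parse_token_first_byte(first_byte):
    """Split the first byte of the 32-byte dictionary token into its two
    documented nibbles: compression type (first four bits) and the count
    of system data blocks preceding the user data (second four bits).
    """
    compression_type = (first_byte >> 4) & 0x0F
    system_blocks = first_byte & 0x0F
    return compression_type, system_blocks

# A byte of 0x12 would indicate type 1 with 2 system data blocks.
print(parse_token_first_byte(0x12))  # → (1, 2)
```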
When a compressed data set has system blocks, these blocks are reflected in
the DS1LSTAR field in the data set's format 1 DSCB. On reusing an existing
data set, it is possible for a compressed data set to contain no user data but still
have a nonzero DS1LSTAR.
The REGION size parameters of the JOB and EXEC JCL statements do not have
to be modified to use tailored compression, even though 3MB more storage
above 16MB is used.
Tailored compression does not apply to VSAM data sets at this time. Even
though the system option might specify tailored compression, VSAM will use
only standard generic dictionaries.
• The O/C/EOV abend message reflects the name of the module that detected
the error instead of the name of the module that issued the abend. For
example, during a B37, D37, or E37 abend, module IFG0554T is documented
in the message text, even though module IFG0554P actually requested the
abend. The message text now indicates IFG0554P.
• Currently if a user tries to open a PDS for output by specifying DISP=SHR,
and the PDS is already open in this condition, the user has failed to serialize
access before attempting to open the data set. The user has no information
about who owns the PDS for output. The data set might be in the same
system or another system that is sharing the volume. O/C/EOV is enhanced
to report helpful information to find out who has the PDS for output. If an
abend 213 reason code 30 is issued, a new message, IEC813I, is issued
documenting which address space, JOB, and task control block (TCB) owns
the PDS resource that is preventing the open from being allowed. The
information provided by the IEC813I message provides first-time data capture
information.
• A new SMF type 42 subtype 9 record is written for B37, D37, and E37 abends,
documenting such information as job name, data set name, volser, number
of extents on the volume, and secondary allocation amount that system
programmers can use to prevent x37 abends in the future.
• An above 16MB O/C/EOV work area extension has been implemented. This
extension does not provide new function; it has been implemented because
of storage constraints in the main O/C/EOV work area.
The DISPLAY and DELETE parameters are mutually exclusive with DSN, DDN, and
JN; if DISPLAY or DELETE is specified with DSN, DDN, or JN, it is ignored.
DSN, DDN, and JN can be specified only once in each operator response. If
multiple trace entries are to be created, each must be created separately.
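These exclusivity rules can be sketched as a validation check. The function and its message strings are illustrative, not part of IFGOCETR:

```python
def validate_trace_params(params):
    """Check the IFGOCETR parameter rules described above (sketch):
    DISPLAY and DELETE are mutually exclusive with DSN, DDN, and JN,
    and DSN, DDN, and JN may each appear only once per response.
    `params` is a list of (keyword, value) pairs from one operator
    response. Returns a list of problems (empty means valid).
    """
    problems = []
    keys = [k for k, _ in params]
    selectors = {"DSN", "DDN", "JN"}
    if ({"DISPLAY", "DELETE"} & set(keys)) and (selectors & set(keys)):
        problems.append("DISPLAY/DELETE ignored when DSN/DDN/JN given")
    for k in selectors:
        if keys.count(k) > 1:
            problems.append(f"{k} specified more than once")
    return problems

print(validate_trace_params([("DSN", "A.B.C"), ("DSN", "D.E.F")]))
# → ['DSN specified more than once']
```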
If the data set to be traced is in a generation data group (GDG), DSN can be
either the complete generation data set (GDS) name (including the GnnnnVnn
low-level qualifier) or the GDG base name. If the complete GDS name is
specified, only that generation data set is traced. If the GDG base name is
specified, all of the generations are eligible for tracing, depending on the other
parameters specified.
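This matching rule can be sketched as follows; the data set names in the example are hypothetical:

```python
import re

def gdg_trace_matches(spec, gds_name):
    """Sketch of the DSN matching rule for GDGs: a complete GDS name
    matches only that generation; a GDG base name matches every
    generation (the base plus a GnnnnVnn low-level qualifier)."""
    if spec == gds_name:
        return True
    pattern = re.escape(spec) + r"\.G\d{4}V\d{2}"
    return re.fullmatch(pattern, gds_name) is not None

print(gdg_trace_matches("PROD.PAYROLL", "PROD.PAYROLL.G0007V00"))  # → True
print(gdg_trace_matches("PROD.PAYROLL.G0007V00",
                        "PROD.PAYROLL.G0008V00"))  # → False
```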
The started task does not have to be active at the time of the data set tracing.
The started task is used only so the operator can enter parameters and cause
the trace table to be built or updated. The trace table, once obtained, is not
explicitly freed; it remains allocated and available for the entire duration of the
IPL. Only a re-IPL frees the table.
If the trace table becomes full, trace table entries must be deleted before a new
entry can be added.
A single bit, DS1CPOIT, was used to indicate both IMSC and physical sequential
MVSC data sets; the primary intent was to stop migration or any kind of
DFSMSdss processing, such as DEFRAG. Remember that we are not talking
about checkpoint data sets but checkpointed data sets. For MVSC, this means
physical sequential data sets that were open during a checkpoint SVC. For
IMSC, this means physical sequential (GSAM) data sets that, as indicated at
open time, would be used during an IMS checkpoint. MVSC and IMSC data sets
have different properties. GSAM data sets can be DEFRAGed, migrated and
recalled, dumped and restored, and copied, as long as the amount of data per
volume does not change.
The handling of checkpointed data sets in this way results in several problems:
• DEFRAG was changed to recognize DS1CPOIT and not move the data set to
support the more restrictive MVSC. This resulted in problems for those
installations that were not using MVSC but used IMSC extensively and
depended on DEFRAGing their tape mount management (TMM) DASD
volumes to prevent allocation failures.
• Other DFSMSdss operations such as logical dump, restore, and copy ignore
DS1CPOIT altogether and move the data sets, which can result in a failure
on restarts.
• DFSMSdss physical operations also ignore DS1CPOIT, but they do not
change the amount of data per volume or device geometry (track size), and
they do not reblock the data set.
• There was no way to satisfy all customers because of the use of a single bit
for these different types of data sets.
MVSC will continue to set the DS1CPOIT (checkpointed) indicator and now also
sets the DS1DSGU (unmovable) indicator in the format 1 DSCB of the first and
current volume being processed for PS data sets. Open will continue to set
DS1CPOIT for IMSC data sets on the first volume of a multivolume data set. The
bit may not be (and need not be) propagated to secondary volumes for GSAM data sets.
DS1DSGU is new for SMS-managed data sets. It is called the SMS unmovable
indicator. It is treated differently from the DS1DSGU indicator on
non-SMS-managed volumes. In SMS, DS1DSGU indicates that the data set is
unmovable until it is no longer required to be used on a restart. The criterion of
when a data set can be moved is based on when it was accessed during a
checkpoint. Because this time of access is not retained in the format 1 DSCB, it
has to be based on the number of days since last access (last reference date).
New unmovable data sets (PSUs) cannot be created on SMS volumes. The
checks in SMS will continue to be enforced to prevent creation of unmovable
data sets.
Finding DS1DSGU on the format 1 DSCB of a physical sequential SMS data set is
not treated as an error.
CONVERTV: DFSMSdss allows a data set that has DS1CPOIT set, with or
without DS1DSGU set, to be converted to non-SMS only if the FORCECP(days)
keyword is specified. If a checkpointed data set is encountered during
conversion to non-SMS and either FORCECP(days) is not specified, or
FORCECP(days) is specified and the minimum days have not elapsed, message
ADR878E RC102 is issued. If conversion is allowed, DS1CPOIT and DS1DSGU, as
well as DS1SMSDS, are reset for this data set.
The FORCECP(days) parameter specifies that the checkpointed data set resident
on the SMS-managed volume can be converted to a non-SMS-managed volume.
All checkpoint indications are removed from the data set during conversion.
• days is a 1- to 3-digit number in the range 0 to 255. It indicates the number
of days that must have elapsed since the last date referenced before
conversion can take place.
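As an illustrative sketch only (the volume serial is hypothetical), a DFSMSdss
CONVERTV step that permits checkpointed data sets not referenced for at least 5
days to be converted to non-SMS might specify:
  CONVERTV DYNAM(SMS001) -
           NONSMS -
           FORCECP(5)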
COPY DATASET: The IMS GSAM restart facility can tolerate the relocation of an
extent on a volume. MVS Checkpoint/Restart cannot tolerate the relocation of
any extent needed for restart.
APAR OW08803 changed the DEFRAG function not to move extents of any data
set for which DS1CPOIT was set. DFSMSdss changes the code provided with
APAR OW08803 so that FORCECP(days) is required to move the extents of a
checkpointed data set if both DS1CPOIT and DS1DSGU are set in the format 1
DSCB. When FORCECP(days) is not specified, or when FORCECP(days) is
specified and the minimum days have not elapsed, existing message ADR211I is
issued.
The DFSMSdss logical data set dump function is invoked by DFSMShsm for
MIGRATION, BACKUP, and ABACKUP.
An SMS-managed physical sequential data set that has DS1CPOIT set, with or
without DS1DSGU set, is not logically dumped by DFSMSdss unless
FORCECP(days) is specified. If FORCECP(days) is not specified, or when
FORCECP(days) is specified and the minimum days have not elapsed, message
ADR298E is issued.
DUMP DATASET PHYSICAL: To be consistent with logical data set dump, users
should check the checkpointed data set indicator before they perform a physical
data set dump. The problem is that a physical data set restore may have as its
source either a physical data set dump or a full volume dump. Therefore, to
ensure that a checkpointed data set is not inadvertently destroyed, there must
be a check on the physical restore side. Dump itself is nondestructive, unless
DELETE is also specified. Since there is no other protection extended to
checkpointed data sets to prevent deletion, it was decided that users should be
able to physically dump with delete, if they so choose.
FULL/TRACKS RESTORE: Both full volume restore and tracks restore are
allowed for a data set that has DS1CPOIT set, with or without DS1DSGU set.
Neither DS1CPOIT nor DS1DSGU is checked or reset.
A full volume dump of an SMS-managed volume is always restored to a device
with like track geometry and always uses the volser of the source volume as the
volser of the target volume.
DFSMShsm invokes the DFSMSdss function of logical data set restore for
RECALL, ARECOVER, and RECOVER (from BACKUP) and the DFSMSdss function
of physical restore for RECOVER (from DUMP) if DFSMSdss is defined as the
data mover in DFSMShsm.
The DFSMSdss logical data set restore function restores checkpointed data sets
only if FORCECP(days) is specified. If FORCECP(days) is not specified, or when
FORCECP(days) is specified and the minimum days have not elapsed, the data
set is not restored and message ADR298E is issued.
When the target of a logical data set restore is preallocated and indicated as
checkpointed, the same eligibility criteria apply to the target. If the target data
set is eligible to be replaced, DS1CPOIT and DS1DSGU are reset for the target.
The DFSMSdss physical data set restore function restores checkpointed data
sets that have DS1DSGU set only if FORCECP(days) is specified. If
FORCECP(days) is not specified, or when FORCECP(days) is specified and the
minimum days have not elapsed, the data set is not restored, and message
ADR298E is issued.
When the target of a physical data set restore is preallocated and has DS1DSGU
set, the same eligibility criteria apply to the target. If the target data set is
eligible to be replaced, DS1CPOIT and DS1DSGU are reset for the target.
For physical data set restore of checkpointed data sets that do not have
DS1DSGU set, FORCECP is not required, and DS1CPOIT is not reset. For
preallocated checkpointed data sets having only DS1CPOIT set, the setting of
DS1CPOIT is taken from the dumped data set rather than the preallocated data
set when the settings are different.
For IMSC data sets that are physically restored, FORCECP(days) is not required,
and the checkpoint indications are not removed from the data sets, whether or
not FORCECP(days) is specified.
Checkpoint bits DS1CPOIT and DS1DSGU are reset when allocated but unused
space in a checkpointed data set is released, and message ADR297I is issued. If
a checkpointed data set is eligible for RELEASE but has no allocated but unused
tracks, the checkpoint bits are not reset.
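As a hedged sketch (the volume serial and data set filter are hypothetical), a
DFSMSdss RELEASE operation that frees allocated but unused space, and thereby
resets the checkpoint bits on eligible data sets, might look like:
  RELEASE INCLUDE(PAY.CKPT.**) -
          DYNAM(PROD01)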
Installations that support MVSC data sets and/or IMSC data sets and use either
DFSMSdss or DFSMShsm functions must be aware of the support provided in
DFSMS/MVS V1R4 to protect those data sets from inadvertent movement.
Selection of the optimal value for days may vary from function to function and
from application to application, depending on job frequency and use of the
affected data sets.
Note: X′nn′ is the hexadecimal representation for the number of days. The
default is 5.
A DEFRAG APAR, which allows extents to be moved when DS1CPOIT is set, will
be maintained for previous levels of DFSMSdss.
Note: OEM vendor products or user applications using the format-1 DSCB fields
should be aware of the use of the DS1DSGU field introduced by the enhanced
protection of checkpointed data sets.
The SMF recording enhancement assists customers who are using OAM in their
performance monitoring, analysis, tuning, and capacity planning activities.
This enhancement introduces a new SMF record for OAM at the OAM
programming interface level (the OSREQ macro interface). In an ImagePlus
environment, these records are used to account for the OAM portion of
document image processing.
The MVS system operator or system programmer can dynamically select the
OAM SMF record subtypes to be recorded.
OAM records SMF records in the SMF data set to account for OAM activity.
Each OAM SMF record contains three sections:
• Standard 48-byte SMF record header
• 112-byte OAM product section
• Variable length OAM data section
Table 1 lists the OAM SMF record subtypes that OAM SMF record type 85 (X′55′)
supports.
Table 1. OAM SMF Record Subtypes
Subtype  Size (bytes)                       Description
1        324                                OSREQ access
2        324                                OSREQ store
3        324                                OSREQ retrieve
4        324                                OSREQ query
5        324                                OSREQ change
6        324                                OSREQ delete
7        324                                OSREQ unaccess
32       336                                OSMC storage group processing
33       336                                OSMC DASD space management
34       336                                OSMC optical disk recovery utility
35       336                                OSMC MOVEVOL utility
36       296                                OSMC single object recovery utility
37       184                                OSMC library space management
64       256                                LCS optical drive vary online
65       256                                LCS optical drive vary offline
66       256                                LCS optical library vary online
67       256                                LCS optical library vary offline
68       284                                LCS optical cartridge entry
69       284                                LCS optical cartridge eject
70       284                                LCS optical cartridge label
71       284                                LCS optical volume audit
72       284                                LCS optical volume mount
73       284                                LCS optical volume demount
74       variable (min. 380, max. 32,744)   LCS optical write request
75       380                                LCS optical read request
76       380                                LCS logical delete request
77       variable (min. 380, max. 32,744)   LCS optical physical delete request
78       variable (min. 380, max. 32,744)   LCS object tape write request
79       380                                LCS object tape read request
87       228                                LCS object tape volume demount (OAM usage)
Every attempt is made to capture the start and end times of each OAM function
as close to the actual function boundaries as possible, so that the elapsed time
of the function includes as much OAM processing time as possible.
A new reason code is returned in register 0, following the OSREQ macro, if the
tracking token is contained in a virtual storage area to which the application
program does not have both fetch and store authorization.
2.5.1.2 Invocation
Before activating OAM SMF recording following an MVS IPL, the OAM subsystem
identification (OAM) must be defined in the active SMF PARMLIB (SMFPRMxx).
If the OAM subsystem identification has not been defined to SMF, add the
following statements to the SMFPRMxx:
SUBSYS(OAM,TYPE(85))
and then activate it by entering the following MVS operator SET command:
SET SMF=xx
If the OAM subsystem identification has been defined to SMF, the MVS system
operator or MVS systems programmer can dynamically change OAM SMF
recording, using one of the following methods:
• Update the SMF PARMLIB member to include the OAM SMF record
subtypes, for example:
SUBSYS(OAM,TYPE(85(2:3)))
and activate the SMFPRMxx by entering the following MVS operator SET
command:
SET SMF=xx
• Update the SMF options dynamically by entering the following MVS operator
SETSMF command:
SETSMF SUBSYS(OAM,TYPE(85(4:6)))
For additional information about the MVS operator SET SMF and SETSMF
commands, see the MVS/ESA System Commands Reference, GX22-0015-03.
2.6 Program Management 3 (PM3)
DFSMS/MVS provides program management services to create, load, modify,
list, read, transport, and copy executable programs. DFSMS/MVS 1.1 introduced
the program management binder and the program management loader. The
binder extends the services formerly provided by the MVS/DFP linkage editor
and batch loader. These enhancements include support for an executable unit
called a program object, which includes all of the functions of a load module,
with additional functional and usability improvements. The loader adds to the
capabilities of program fetch and can load both program objects and load
modules into storage for execution.
In addition to support for the load module, the binder introduced a new
executable unit, the program object. The program object removes all structural
limitations that have long restricted the load module. Flexibility in the design
and structure supports ongoing enhancements.
Chapter 3. DFSMSdfp VSAM
The order of precedence for specifying values is: JCL over data class, with data
class having precedence over specifications in the ACB. The MACRF values of
SEQ, DIR, and SKP are used with a specification of SYSTEM to determine how
buffers will be acquired. Also used with SYSTEM are the storage class values
for BIAS and/or MSR, should they be specified.
Specifying the type of Record Access Bias through the AMP parameter in JCL
will override anything specified in the data class for this parameter. If nothing
has been specified for this parameter, the default is USER. USER indicates that
VSAM will continue to use buffers as it now does without SMB. Record Access
Bias is ignored if the data set is not in EF.
SMB either weights the buffer handling toward sequential or direct processing or
optimizes the buffer handling for sequential or direct processing. Weighting
means that when the Record Access Bias specifies SW (sequential weighted)
most buffers will be used to support sequential processing but some will be
reserved for index buffers to help any direct processing. Conversely, when the
Record Access Bias specifies DW (direct weighted), most buffers will be used to
support fast direct access to the data, with relatively few buffers reserved for any
sequential processing that might occur. The manner in which buffer selection
may be affected by the user program is specified by the use of the ACB MACRF
parameters of SEQ, SKP, and DIR and the use of the storage class BIAS and/or
MSR values (see Table 2 on page 37).
A value of SYSTEM (including SO, SW, DO, or DW for ACCBIAS) specifies that
VSAM is to determine the number of buffers to obtain for the data set when NSR
processing is used. When NSR processing is specified or defaulted, and VSAM
chooses DO (direct optimized) as the most appropriate type of access, the
buffering technique changes from NSR to LSR. LSR is also used if direct
optimization is forced by specifying ACCBIAS=DO on the AMP parameter.
None of SO, SW, DO, or DW is specified in the data class, as these are
determined from a consideration of the storage class BIAS and/or MSR values
and the ACB MACRF values, as shown in Table 2.
The amount of virtual buffer space to be acquired when opening the data set for
LSR processing can be specified with the SMBVSP subparameter of AMP. The
format is:
SMBVSP=nnK
where nn is 1 to 204800
or
SMBVSP=nnM
The SMBVSP parameter can be used to override the default buffer space to be
obtained, which generally speaking is calculated assuming 20% of the data will
account for 80% of the accesses. The buffer space acquired is split across two
LSR pools; one for the index and one for the data.
Note: The value specified is the total amount of virtual storage that can be
addressed in a single address space. It does not take into consideration the
storage required by the system or the access method.
When direct optimization is used, the write processing of modified buffers can be
deferred until the data set is closed or the buffer is required for a different
request. The SMBDFR subparameter of AMP is used to specify deferred write.
The format is:
SMBDFR=Y|N
where Y is the default for SHAREOPTIONS (1,3) and (2,3), and N is the default for
SHAREOPTIONS (3,3), (4,3), and (x,4)
The amount of hiperspace to be used for LSR buffers can be specified with the
SMBHWT subparameter of AMP. The value specified for SMBHWT is used as
the hiperspace weighting factor. The format is:
SMBHWT=nn
The hiperspace buffer size will be a multiple of 4096 (4K). These buffers may be
allocated for the base data component of the sphere.
If the control interval (CI) size of the data component is not a multiple of 4K,
both virtual space and hiperspace are wasted. In addition, excessive use of this
facility will have a negative effect on performance for both the system and the
user program. Overhead is involved in locating a buffer for the user program
that may not be in hiperspace storage because of system requirements.
The SMBHWT value is not a direct multiple of the number of virtual buffers
allocated to the resource pool but acts as a weighting factor for the number of
hiperspace buffers to be established. If SMBHWT is not specified, hiperspace
is not used.
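The SMB-related AMP subparameters can be combined on a single DD statement.
The following JCL is an illustrative sketch (the data set name and the chosen
values are hypothetical), requesting direct-optimized buffering with a 100MB
buffer space limit, deferred writes, and a hiperspace weighting factor of 2:
  //VSAMDD DD DSN=MY.VSAM.KSDS,DISP=SHR,
  //     AMP=('ACCBIAS=DO,SMBVSP=100M,SMBDFR=Y,SMBHWT=2')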
The RMODE31 parameter that is currently specifiable in the ACB will be merged
with, or overridden by, the new JCL RMODE31= AMP subparameter.
Note: The use of the JCL AMP RMODE31= parameter is valid only on systems
with DFSMS/MVS V1R4 installed. A JCL error will be returned for earlier
releases.
With DFSMS/MVS V1R4 the number of LSR pools has been increased from 16 to
256. The value specified for SHRPOOL will now be a number between 0 and 255.
This change affects the SHRPOOL= values of the BLDVRP, DLVRP, ACB,
GENCB ACB, and MODCB ACB macro instructions.
The number of data buffers is optimized for load mode processing, which is
sequential; system-managed buffers adjust to the load mode requirements by
determining the current state of the data set. Sufficient buffers are acquired
such that each control area (CA) is written with a single I/O request and full
overlap of I/O and processing occurs. Before this release at least two I/O
requests would have been needed for each CA, more if the data set was defined
with the FREESPACE parameter specifying one or more free CIs per CA.
The index component (including the sequence set) should be updated only once
per data CA area during load. Before this release the index was updated
multiple times per CA. This change should improve load performance
considerably.
Performance should also improve because the entire CA is written to cache with
a single I/O request, and a device end is returned before all of the data is written
to DASD. Even in cases where the algorithms bypass cache, performance
should improve as the entire CA is written with a minimum of disk rotations.
3.2.1 Invocation
The following specifications are required for the load enhancements to take
effect:
• System managed buffering
• SPEED specified in the IDCAMS DEFINE command
3.2.2 Interfaces
The interfaces used for this support are the IDCAMS DEFINE command and
either the data class or the JCL AMP parameter.
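Combining these interfaces, a cluster defined for the load enhancements might be
sketched as follows (the cluster name, key, and allocation values are
hypothetical); SPEED is specified on the DEFINE, and system-managed buffering is
requested separately through the data class or the JCL AMP parameter:
  DEFINE CLUSTER (NAME(MY.LOADED.KSDS) -
         INDEXED -
         KEYS(8 0) -
         RECORDSIZE(200 400) -
         CYLINDERS(50 10) -
         SPEED)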
Before DFSMS/MVS V1R4, the last reference date (LRD) for VSAM data sets was
updated at open time. Thus there was always the potential for VSAM data sets
that had been open for many days to be migrated unnecessarily.
Updating the LRD only at OPEN time is of great concern to some installations
whose transaction-based systems may have a large number of data sets and/or
databases open for many days or even weeks. When the transaction system is
stopped, all of the data sets are eligible for migration by DFSMShsm according
to the LRD set at OPEN time. Depending on the number and size of the data
sets, the time to recall them could be long, thus delaying the time to bring up the
transaction system again.
As illustrated in Figure 6 on page 42, updating the LRD at CLOSE time prevents
the immediate migrate and recall activity and brings VSAM in line with
non-VSAM data sets, for which the LRD is updated at CLOSE time.
For non-RLS VSAM, the IDATMSTP routine is called during the open of the data
set to retrieve a return code that specifies whether or not the date in the VTOC
is to be changed. VSAM keeps this information until the data set is closed. For
VSAM RLS, date stamp processing is always performed. Date stamp processing
at close compares the date on which the data set was opened with the date on
which the data set is closed to determine whether the date has changed.
DFSMS/MVS V1R4 supports EA VSAM KSDS only. It does not support other
types of VSAM data sets for EA.
To use VSAM RLS the installation must have implemented a Parallel Sysplex
environment. A coupling facility is therefore a prerequisite before VSAM RLS can
be implemented. For a brief overview of a Parallel Sysplex and VSAM RLS, see
Appendix A, “Parallel Sysplex and VSAM Record Level Sharing” on page 139.
Just as with VSAM non-RLS EA, an application can use the TESTCB macro to
test for extended addressability with VSAM RLS EA with, for example:
TESTCB ACB=EFKSDS,ATRB=XADDR
Products that rely on the expanded data set size to determine the most efficient
method of processing the data or return the logical data set size to the user can
still do so for both EA and non-EA data sets, using an externally callable service.
For example:
SHOWCB ACB=EAKSDS,
FIELDS=(XAVSPAC,XENDRBA,XHALCRBA)
could be used to return the amount of space, the last RBA value of the data set,
and the high-used RBA of the data set. The field values are returned as 8-byte
values instead of 4-byte values.
Use IDCAMS REPRO to migrate from non-EA KSDSs to EA KSDSs or vice versa.
You can also use IMPORT/EXPORT as long as the data set is not larger than
4GB. It is not possible to use ALTER to change an existing extended format
non-EA format KSDS into an EA format KSDS.
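For example (the data set names are hypothetical; the EA target must already
have been defined with an EA-capable data class), the migration could be
sketched as:
  REPRO INDATASET(MY.NONEA.KSDS) -
        OUTDATASET(MY.EA.KSDS)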
Only data sets on SMS-managed volumes are eligible for EA because the
mechanism for obtaining EA is the specification of a Data Set Name Type of EXT
and Extended Addressability set to Y in the data class.
VSAM RLS KSDS EA will operate on all processors supported by MVS/ESA 5.2 or
later. Coupling Facility hardware in the System/390 Sysplex is required. See
Appendix A, “Parallel Sysplex and VSAM Record Level Sharing” on page 139
for further information.
DFSMS/MVS V1R4 has enhanced ISMF and the Data Class Application to allow
these attributes to be defined in the data class.
If JCL and the data class are used to set up a VSAM data set, attributes
conflicting with each other, or with the data set definition, can be caught when
the data set is created rather than when the data set is first opened.
When a data set is created, SMS applies the data class values for the attributes
only if they apply to the data set. For example, none of these attributes is
compatible with TYPE(LINEAR) and therefore none is applied to linear data sets.
The BWO, LOG, and LOGSTREAM ID attributes are saved in the RLS cell in the
catalog. The RLS toleration PTFs will fail OPEN on releases before DFSMS/MVS
V1R3 if the RLS cell exists for the VSAM sphere.
With DFSMS/MVS V1R4 most VSAM data sets can be defined completely through
JCL without requiring a separate IDCAMS DEFINE or IDCAMS ALTER.
A new parameter for the DCOLLECT command enables you to avoid producing
output for unneeded volumes. The new parameter, EXCLUDEVOLUME, eliminates
volumes from the set identified by the STORAGEGROUP and VOLUMES
parameters, which specify the volumes for which data is to be collected.
Note: The size of both records (M and B) has increased by six characters.
The DCOLLECT command has been enhanced with the optional parameter,
EXCLUDEVOLUME. The EXCLUDEVOLUME parameter can be abbreviated as
EXCLVOL, EXV, or EXCLUDE.
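As an illustrative sketch (the output DD name, storage group, and volume serial
are hypothetical), a DCOLLECT invocation that excludes one volume from a storage
group might be:
  DCOLLECT OUTFILE(DCOUT) -
           STORAGEGROUP(SGPRIME) -
           EXCLUDEVOLUME(VOL001)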
DFSMShsm V1R4 has been enhanced to write data to two tapes simultaneously.
Thus you can make a backup copy or migrate data to tape on two different
devices at the same time. With this new function, you can address a second,
perhaps remote, Automated Tape Library (ATL).
DFSMShsm will provide more account data as well as aliases for keywords on
specified DFSMShsm commands.
The alternate tape will have one of the following data set names:
• prefix.COPY.HMIGTAPE.DATASET
• prefix.COPY.BACKTAPE.DATASET
This new TCN record is type E, and the key is either M- for migration tape or B-
for backup tape, followed by the 6-byte volume serial number, and padded with
blanks.
During secondary space management, if any TCN records exist for migration
volumes, another TAPECOPY is scheduled.
During autobackup on the primary host, if any TCN records exist for backup
volumes, another TAPECOPY is scheduled. This copy occurs only on the
primary host, to avoid multiple TAPECOPYs for the same volumes.
Chapter 4. DFSMShsm 53
• When an internal TAPECOPY MWE is being processed, if a valid alternate
already exists in the TTOC record, the TAPECOPY is not performed; only the
TCN record is deleted.
The (H)LIST BVOL output includes the duplex status and the alternate volser.
4.1.9 Limitations
Some parameters of nongeneric TAPECOPY commands are not available with
the duplex tape option.
If you change to duplexed output and have been using the
PARTIALTAPE(REUSE) option, consider marking the existing partial tapes full,
as partial tapes are not selected in a duplexing environment. Only by marking
partial tapes full will they be considered by recycle.
You thus waste time until the data set is recalled and incur unnecessary CPU
and I/O use. Alter without recall relieves this constraint: an IDCAMS ALTER
entryname command submitted by an end user that changes only the storage class
and/or management class causes the storage class and/or management class to be
altered for a migrated data set without recalling the data set.
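For example (the entry name and construct names are hypothetical), such a
command might look like:
  ALTER USER1.MIGRATED.DATA -
        STORAGECLASS(SCFAST) -
        MANAGEMENTCLASS(MCLONG)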
The DFSMShsm record used for SMS space management processing has been
updated with the new storage class and/or management class to keep it
consistent with the construct names in the catalog.
If DFSMShsm fails to process the catalog update request for the requested class
names, catalog will not update the class name(s) in the catalog and the IDCAMS
command will fail.
When catalog fails to update the class name(s) in the catalog after DFSMShsm′s
updates, the DFSMShsm record will retain the updates made, and the IDCAMS
command will fail.
The new alter without recall function causes both DFSMShsm′s and catalog′s
records to be updated and kept synchronized. The HALTER EXEC updates only
DFSMShsm records, leaving DFSMShsm′s and catalog′s records
unsynchronized. The HALTER EXEC has been modified to reflect the fact that
IDCAMS ALTER can now perform the alter without recall function directly.
IGG026DU sees the command in both cases, but because the do-not-recall
indicator is set, the request is passed directly to catalog.
DFSMShsm sends a return code back to the catalog, indicating the success or
failure of the CDS record update. When the return code from DFSMShsm is not
zero, the catalog update will not be performed and the IDCAMS alter function
will fail.
4.3 ABARS
The following ABARS enhancements are included in DFSMShsm V1R4:
• ABARS output files can be stacked on a minimum number of tape volumes.
• The number of concurrent active ABARS requests has been increased to 64.
• Invocation of ARCBEEXT has been extended to data sets that are being
dumped by DFSMSdss processing so that installations can bypass data sets
that fail.
• GDG base names can be specified in the ALLOCATE list. Thus you can back
up and restore GDG base definitions without having to back up an associated
generation data set (GDS).
• ABARS activity log on DASD or tape is automatically deleted when
aggregate roll-off occurs, during either automatic roll-off or EXPIREBV
processing.
• CPU time for processing ABACKUP or ARECOVER is maintained in the
ABACKUP/ARECOVER Function Statistics record (WWFSR) and aggregate
backup and recovery (ABR) record. A new 32-character accounting
information attribute in the aggregate group definition is saved in the
WWFSR and the ABR record.
• The TGTGDS and OPTIMIZE keywords used when invoking DFSMSdss to
dump and restore data sets are externalized.
ABARS provides a new function that allows the ABACKUP output files to be
stacked on a minimum number of tape cartridges. The minimum number of tape
cartridges could be 1 if the aggregate is small.
4.3.1.1 Invocation of Output File Stacking
The SETSYS command has been updated with a new parameter,
ABARSTAPES(STACK|NOSTACK), with STACK as default.
The STACK parameter directs ABACKUP to stack the ABACKUP output files
onto a minimum number of tape volumes; NOSTACK directs ABACKUP not to
stack the ABARS tapes. Specifying NOSTACK causes ABACKUP to operate as
it does in earlier releases of DFSMShsm.
4.3.2.1 Invocation
The existing SETSYS MAXABARSADDRESSSPACE(n) parameter has been
enhanced to allow the value of n to range from 1 to 64, with the default
remaining as 1.
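As an illustrative sketch (the value 8 is an arbitrary choice), an installation
enabling output file stacking and raising the number of concurrent ABARS address
spaces might issue:
  SETSYS ABARSTAPES(STACK)
  SETSYS MAXABARSADDRESSSPACE(8)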
The started task identifier contains the task number as part of the identifier. The
identifier is currently ABARnntt, where nn is the task number, and tt is a time
stamp. The value of nn can now range from 1 to 64. For example, task 56 would
get an identifier of ABAR56tt.
When one of the above failures occurs, the failing data set name and an
indicator that a DFSMSdss failure occurred are passed to ARCBEEXT. Thus you
can determine whether the data set should be skipped. If ARCBEEXT does not
indicate that the data set should be skipped, ABACKUP fails at that point;
otherwise the data set in error is skipped and the next data set is processed.
To enable this support, you must specify SETSYS EXITON(BE) before issuing the
ABACKUP command.
ABARS has been enhanced so that you can specify a GDG base name in the
ALLOCATE statement and have ABARS predefine the GDG base name during
ARECOVER processing if it does not currently exist. You can specify a GDG
base name even though you have not specified an associated GDS name in the
selection data set.
During ARECOVER processing, all GDG base names are defined before any
GDSs specified in the aggregate are restored.
The ABR record has been enhanced to maintain an ABACKUP CPU time and an
ARECOVER CPU time.
If an ARECOVER request fails and is reissued with a valid RESTART data set, the
CPU time in the WWFSR reflects only the time to process the remaining data
sets. The ABR record, however, accumulates the CPU times of each restart until
the recovery of the aggregate is successful.
The ISMF aggregate group definition panels have been enhanced to allow
specification of a 32-character accounting code. This accounting code is also
written to the WWFSR control block, ABR record, and the ABACKUP control file.
It is written to the control file so that the account code can be propagated to the
recovery site without an aggregate definition at the recovery site.
If ABACKUP OPTIMIZE is not specified, ABACKUP will use the value defined for
n in the SETSYS ABARSOPTIMIZE(n) keyword.
4.3.9.2 Migration
Use the NOSTACK parameter on the ARECOVER DATASETNAME command to
ARECOVER backups performed on a downlevel version of DFSMShsm.
4.3.9.3 Coexistence
Use the ARECOVER AGGREGATE command on a DFSMShsm V1R4 system to
ARECOVER a backup performed on a downlevel DFSMShsm. This command
also allows a downlevel version of DFSMShsm to ARECOVER an aggregate
backed up on DFSMShsm V1R4, whether or not the output files are stacked.
DFSMShsm V1R4 provides an option for its CDSs to be accessed in RLS mode.
Thus DFSMShsm can take advantage of the S/390 coupling facility to reduce CDS
contention.
For a brief description of the benefits of a Parallel Sysplex with a coupling facility
and how it works, see Appendix A, “Parallel Sysplex and VSAM Record Level
Sharing” on page 139, which also contains an overview of VSAM RLS and VSAM
RLS locking.
DFSMShsm uses VSAM RLS with LOG(NONE), which means that DFSMShsm
does not use the MVS Logger facility. The DFSMShsm VSAM RLS
implementation uses the CACHE and LOCK coupling facility structures. The
DFSMShsm CDS recover process is unchanged. DFSMShsm still uses its own
journal for forward recovery.
4.4.1 Invocation
Startup procedure keyword CDSSHR has been enhanced to accept RLS as a
parameter. When RLS is specified, the DFSMShsm CDSs are accessed in RLS
mode. When the CDSs are accessed in RLS mode, any values specified for
CDSQ and CDSR are ignored.
CDSSHR = {YES|RLS|NO}
YES Perform multiple-processor serialization of the type requested
by the CDSQ and CDSR keywords.
DFSMShsm continues to support key ranges for CDSs not accessed in RLS
mode.
4.4.6 DCOLLECT
DCOLLECT has been changed so that it first attempts to open the CDSs in RLS
mode. If the CDSs are not RLS eligible, OPEN error message IDC161I RC9 is
issued. This message can be ignored for non-RLS-eligible data sets because the
OPEN is retried for non-RLS mode. If both OPENs fail, a new DFSMShsm
message is issued.
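The RLS-first, non-RLS-fallback behavior can be sketched as follows. This is an illustrative Python sketch, not DCOLLECT code; the two open callables and the error type are assumptions standing in for the VSAM OPEN attempts.

```python
def open_cds(open_rls, open_nonrls):
    """Attempt an RLS-mode open first; on failure (analogous to the
    IDC161I RC9 case, which can be ignored), retry in non-RLS mode.
    Only when both attempts fail is the error surfaced to the caller.
    Both arguments are hypothetical callables that return a handle or
    raise OSError."""
    try:
        return open_rls(), "RLS"
    except OSError:
        pass  # not RLS eligible; the first failure can be ignored
    try:
        return open_nonrls(), "non-RLS"
    except OSError as exc:
        # both OPENs failed: the equivalent of the new DFSMShsm message
        raise RuntimeError("CDS open failed in both modes") from exc
```

The key point is that the first failure is expected for non-RLS-eligible data sets and is not an error by itself.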
4.4.7 ARCIMPRT
The parameter passed to ARCIMPRT, the enhanced CDS recovery function utility,
specifies which cluster of a multicluster CDS should be recovered.
If you are recovering a single cluster of a multicluster CDS, the parameters are:
BCDS n | BACKUPCONTROLDATASET n
MCDS n | MIGRATIONCONTROLDATASET n
where n is a value from 1 to 4. (The OCDS cannot be defined as a
multicluster CDS.)
4.4.9 Migration
Changes to the DFSMShsm CDSs and the MVS/ESA or OS/390 systems are
required before the CDSs can be accessed in RLS mode.
4.4.10 Coexistence
When one system accesses the DFSMShsm CDSs in RLS mode, all systems in
the hsmplex must access the CDSs in RLS mode.
4.5.5 EDGTVEXT
DFSMSrmm does not use the DFSMShsm tape volume exit, ARCTVEXT, to
manage tapes that DFSMShsm uses. It has its own interface from DFSMShsm to
avoid conflicts over use of the ARCTVEXT installationwide exit.
If you have a product with similar requirements for releasing tapes from
DFSMSrmm as DFSMShsm, you can use either the EDGDFHSM or EDGTVEXT
program interface. The difference between EDGDFHSM and EDGTVEXT is that
EDGTVEXT accepts the ARCTVEXT parameter list, and the EDGDFHSM interface
accepts only the first volume in the ARCTVEXT parameter list.
4.5.5.1 Invocation
EDGTVEXT can be invoked with the LOAD, CALL, or LINK macro.
4.5.5.2 Input
The input is a parameter list that describes the volumes DFSMShsm is releasing
and the actions required. The parameter list is identical to the list DFSMShsm
passes to ARCTVEXT.
On entry, register 13 contains the address of a standard 18-word save area, and
register 14 contains the return address.
4.5.5.3 Output
EDGTVEXT and EDGDFHSM issue messages when problems are encountered.
EDGTVEXT always sets the ARCTVEXT return code value to zero. It does not
always pass a zero register 15 return code back to the caller. A nonzero return
code is the register 15 return code from the subsystem request attempted by
EDGDFHSM. You can obtain information about the return codes in MVS/ESA SP
Version 5 Using the Subsystem Interface.
4.5.5.4 Environment
EDGTVEXT must be link edited in an APF-authorized library. It runs in
AMODE(31) RMODE(ANY).
4.5.5.5 Migration
There are migration concerns to consider if you invoked ARCTVEXT for
DFSMSrmm support. The DFSMSrmm code that was previously shipped in
ARCTVEXT exit must be removed, or DFSMSrmm could be invoked twice by
DFSMShsm to remove the tape volume from its inventory. If the ARCTVEXT exit
was only invoked with the previously shipped DFSMSrmm ARCTVEXT code, turn
off the ARCTVEXT exit with the SETSYS EXITOFF(TV) command. ADSM
customers who previously invoked the ARCTVEXT exit to use DFSMSrmm
services should now invoke the new DFSMSrmm general-use programming
interface, EDGTVEXT, directly from the ADSM installationwide deletion exit.
Change the DELETIONEXIT option from ARCTVEXT to EDGTVEXT in the MVS
server options file. For more information about the ADSM deletion exit, refer to
the ADSTAR Distributed Storage Manager for MVS/ESA Administrator's Guide.
Only those hosts with DFSMShsm V1R4 will use DAE suppression for DFSMShsm
dumps.
DFSMShsm V1R4 uses DAE for functions that run in both the primary address
space and the ABARS secondary address space.
This support does not apply to dumps produced by DFSMShsm as a result of the
TRAP command.
4.6.4 DFSMShsm Processing in the Primary Space
For eligible dumps taken in the primary address space, DFSMShsm will
fill in additional fields in the SDWA to build a symptom string before taking a
dump. The following variables are filled in by DFSMShsm:
• Load module name
• Csect name
• Name of the recovery routine
• DFSMShsm component identifier
• DFSMShsm component base number (prefix)
The DAE function uses the symptom string to decide whether the dump
requested is a duplicate dump.
Two FSRs are now a subtype 17 instead of a subtype 6. Thus record subtype 6
can be written only for user-requested migrated data set deletes. FSRs are
created when the scheduled delete of the migrated data set takes place. An
ARC0734I message is issued when the delete is scheduled for:
• Expire data set from ML1
• Expire data set from ML2
Two new bits have been added to one of the flag bytes in the FSR mapping to
indicate whether the data set that is being expired is from ML1 or ML2. If
neither of these bits is set, the data set being expired is from L0. Thus you can
map the expired data set FSR subtype 17 to the different volume categories of
L0, ML1, or ML2.
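The volume-category mapping can be sketched as a simple bit test. The bit positions below are hypothetical; the actual positions are defined by the FSR mapping.

```python
ML1_BIT = 0x80  # hypothetical bit positions; the real FSR mapping
ML2_BIT = 0x40  # defines which bits of the flag byte are used

def expired_from(flag_byte):
    """Classify a subtype-17 expired-data-set FSR by volume category:
    the ML1 bit set means the data set was expired from ML1, the ML2
    bit from ML2, and neither bit set means it was expired from L0."""
    if flag_byte & ML1_BIT:
        return "ML1"
    if flag_byte & ML2_BIT:
        return "ML2"
    return "L0"
```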
Three new bits have been added to the flag bytes in the FSR mapping to
indicate:
• Whether the data set is being deleted by its expiration date in the
catalog or by the management class attributes
• Whether the incremental backup version is being deleted by the EXPIREBV
command
• Whether the incremental backup version being deleted is on a tape volume
There may be migration concerns for the conversion of two FSRs (expire from
ML1 and expire from ML2) from a subtype 6 to a subtype 17. If the SMF subtype
6 information was previously collected, the values will change because the
expires from ML1 and ML2 are no longer included. To continue to collect the
information for expires from ML1 and ML2, subtype 17 must be collected.
4.7.4 DFSMS/MVS Optimizer HSM Monitor/Tuner
Currently DFSMShsm communicates with the Optimizer Monitor/Tuner through
the DFSMShsm initialization exit ARCINEXT. In DFSMShsm V1R4, DFSMShsm
directly invokes GFTEMINX. It utilizes a new interface, GFTEMSDX, to the
DFSMS/MVS Optimizer component.
The DFSMSdss multivolume selection changes affect logical data set DUMP, data
set COPY, and CONVERTV processing. The support adds a new selection
keyword, SELECTMULTI(ALL|ANY|FIRST). This keyword replaces and improves
on the current ALLMULTI keyword, which will become obsolete.
For logical data set dump, data set copy, and CONVERTV processing, the new
keyword enables a multivolume data set to be selected according to whether the
volume list includes all of the volumes on which the data set resides (ALL),
some of the volumes (ANY), or at least the volume that contains the first extent
(FIRST).
Current DFSMSdss jobs that use the ALLMULTI keyword will continue to run as
before with the specification of ALLMULTI mapping to SELECTMULTI(ANY),
unless CONVERTV to SMS is specified. In this case ALLMULTI will map to
SELECTMULTI(FIRST). If neither ALLMULTI nor SELECTMULTI is specified, the
default will be SELECTMULTI(ALL).
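The three selection rules can be sketched as follows. This is an illustrative Python sketch of the selection logic, not DFSMSdss code; the function and parameter names are assumptions.

```python
def selected(ds_volumes, volume_list, mode):
    """Decide whether a multivolume data set is selected under
    SELECTMULTI. ds_volumes lists the volumes holding the data set in
    extent order (the first entry holds the first extent);
    volume_list is the LOGINDDNAME/LOGINDYNAM volume list."""
    vols = set(volume_list)
    if mode == "ALL":    # every volume of the data set is in the list
        return set(ds_volumes) <= vols
    if mode == "ANY":    # any piece of the data set is in the list
        return bool(set(ds_volumes) & vols)
    if mode == "FIRST":  # the volume with the first extent is in the list
        return ds_volumes[0] in vols
    raise ValueError(mode)
```

Applying this to the example in 5.1: a data set on VOL001 and VOL003 is selected by a job that lists only VOL001 and VOL002 under ANY and FIRST, but not under ALL.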
5.1 ALLMULTI
The ALLMULTI keyword can cause multiple dumps of multivolume data sets to
be taken. Consider the situation where you have two jobs, one that dumps
VOL001 and VOL002, and the other that dumps VOL003. A large multivolume
data set is stored on VOL001 and VOL003. When both jobs include ALLMULTI,
the multivolume data set is included in the dump for volumes VOL001 and
VOL002 as well as in the dump for VOL003 (see Figure 9 on page 74).
Although this limited case can be "handled" by changing the volumes that the
jobs reference, in a real-life environment, this is often impossible to achieve. To
avoid missing any multivolume dumps, most installations using logical dumps
code ALLMULTI on each DFSMSdss job.
In the above situation, to ensure that each data set is dumped once, only one
DFSMSdss job can be run, which is impractical because:
• The total dump will be excessively long in terms of elapsed time.
• The total restore time will be unacceptably long in terms of restoring all of
the data within a disaster recovery window.
DFSMSdss users want to select data sets during a "logical volume dump"
according to the following criteria:
• The entire data set, if the first primary extent is found on any volume that is
referenced, even though not all volumes are on the list.
• The entire data set, if any part of the data set is found on any volume that is
referenced, even though not all of the volumes are on the list.
The first method of selection is what most DFSMSdss users really want and
need, but DFSMSdss until now has only supported the second method through
the ALLMULTI keyword.
5.2 SELECTMULTI
The SELECTMULTI keyword is applicable during logical data set DUMP, data set
COPY, and conversion of volumes to or from SMS management. It is a
functional replacement for the ALLMULTI keyword. Because SELECTMULTI is
meaningful only when there is a volume list from which to select, SELECTMULTI,
like ALLMULTI, requires that either LOGINDDNAME or LOGINDYNAM be
specified. If SELECTMULTI is present but both LOGINDDNAME and
LOGINDYNAM are absent, existing message ADR138E is issued.
Figure 9. Problems with ALLMULTI. When two DFSMSdss jobs are run, one to dump VOL001 and VOL002, and
the other to dump VOL003, with both specifying ALLMULTI, the large multivolume data set is dumped twice.
5.2.4 Interactions
The specification of SELECTMULTI(ANY) has the same effect as the specification
of ALLMULTI on previous levels of DFSMSdss except when you convert a volume
to SMS. When converting a volume to SMS, specify SELECTMULTI(ANY) to
select a data set for processing if any part of the data set is on a volume in the
volume list.
The specification SELECTMULTI(ALL) has the same effect as the absence of the
specification of ALLMULTI on previous levels of DFSMSdss.
Specifying SELECTMULTI(FIRST) enables you, for logical data set DUMP, data set
COPY, and CONVERTV to non-SMS, to select a data set for processing only when
the first volume on which it resides is included in the volume list. The
specification of SELECTMULTI(FIRST) has the same effect as the specification of
ALLMULTI on previous levels of DFSMSdss when you convert a volume to SMS.
5.2.5 Errors
• If you specify ALLMULTI but not SELECTMULTI, SELECTMULTI(ANY) is set,
and a new message, ADR146I, is issued.
• If you specify both ALLMULTI and SELECTMULTI(xyz), SELECTMULTI(xyz) is
set, and message ADR146I is issued.
• If you alter the setting of ALLMULTI, either from OFF to ON or from ON to
OFF, the SELECTMULTI(xyz) set by the keyword is not altered, and a new
message, ADR147W, is issued.
• If the installation exit, ADRUIXIT, altered the setting of SELECTMULTI(xyz),
whether set by keyword or by default, the setting is altered and existing
message ADR035I is issued.
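The keyword-resolution rules described above can be sketched as follows. This is an illustrative Python sketch following the mapping stated earlier in this chapter (explicit SELECTMULTI wins, ALLMULTI alone maps to ANY except FIRST on CONVERTV to SMS, default ALL); the function name and parameters are assumptions.

```python
def effective_selectmulti(allmulti, selectmulti, convertv_to_sms):
    """Resolve the effective SELECTMULTI value from the keywords:
    an explicit SELECTMULTI wins over ALLMULTI (message ADR146I),
    ALLMULTI alone maps to ANY, except on CONVERTV to SMS, where it
    maps to FIRST, and the overall default is ALL."""
    if selectmulti is not None:
        return selectmulti  # ADR146I if ALLMULTI was also specified
    if allmulti:
        return "FIRST" if convertv_to_sms else "ANY"
    return "ALL"
```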
No toleration PTFs are expected to be released for this announcement. The only
potential issue is in a shared spool environment with processors or LPARs
running different levels of DFSMS/MVS. Jobs executing DFSMSdss using the new
SELECTMULTI keyword must not be allowed to run on levels of DFSMS/MVS
previous to DFSMS/MVS V1R4.
5.2.10 Resources
No new control blocks have been added or deleted as a result of the
multivolume selection enhancement announcement. The sizes of existing
control blocks are not changed, and the announcement does not change the
DFSMSdss storage requirements.
Customers have become concerned that they are not informed when the journal
is becoming full. When the journal is full, all tape processing must stop until the
journal and the control data set have been backed up and the journal cleared.
This procedure is disruptive and could involve a tape processing outage.
6.1.1.1 JOURNALFULL(nn)
Specify JOURNALFULL to define a percentage-full threshold for the journal data
set. When DFSMSrmm detects that the journal has reached this threshold, it
issues message EDG2107E. DFSMSrmm also issues message EDG2107E at
DFSMSrmm startup if the journal has already reached the threshold specified. If
you specify a value of 0, DFSMSrmm issues no warnings on that system.
Different threshold values can be specified on systems sharing the RMM control
data sets.
6.1.1.2 BACKUPPROC(procname)
BACKUPPROC specifies the name of the procedure that you want started
automatically when the journal percentage full threshold is reached.
To find out the setting of the threshold and the current journal utilization, either
issue this command:
RMM LISTCONTROL CNTL OPTION
or use the DFSMSrmm ISPF panels to show the System Options Panel Display.
If a journal is not allocated, the journal is disabled, or the utilization is less than
0.5%, a value of zero is returned for the current utilization value.
You can automate control data set backup and clearing of the journal.
Figure 11 shows a sample procedure for backing up the control data set and
clearing the journal.
Notes:
1. The names of the control data set and journal are obtained from the active
DFSMSrmm subsystem.
2. EDGHSKP backs up and clears the data set. If the RMM subsystem is not
active, you must use EDGBKUP, which does not clear the journal.
3. This example shows how DFSMSdss and DFSMSrmm work together to back
up the CDS and journal using concurrent copy; see 6.2, “Nonintrusive
Backup of the CDS” on page 82 for further information.
6.1.3 Messages
The following messages are displayed in relation to journal threshold
processing:
EDG2103E PERMANENT JOURNAL ERROR - REPLY "R" TO RETRY, "I" TO
IGNORE, "D" TO DISABLE, OR "L" TO LOCK
• Look for a previous message with the EDG prefix that shows the
error.
• Notify your system programmer.
EDG2104E JOURNAL FILE IS FULL - SCHEDULE CONTROL DATA SET BACKUP
TO CLEAR IT
• Manually start the DFSMSrmm backup job to reset the journal.
• Notify your system programmer.
• There is no reply for this message. This message is followed by
message EDG2103D, to which you must reply.
Chapter 6. DFSMSrmm 81
EDG2107E JOURNAL THRESHOLD REACHED - JOURNAL IS percentage_value%
FULL. tracks TRACKS (kilobytesK) AVAILABLE
• The journal has reached the specified threshold value. If an
autostart procedure for backup is defined, RMM starts it
automatically. Otherwise follow your installation-defined backup
procedure.
• Notify your system programmer.
EDG2108E JOURNAL IS percentage_value% FULL. tracks TRACKS (kilobytesK)
AVAILABLE
• This message is issued for every additional 5% full, or every 1%
once over 90% full. If a backup procedure has not been started,
follow your installation-defined backup procedure.
• Notify your system programmer.
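The warning cadence described by these messages can be sketched as follows. This is an illustrative Python sketch, not DFSMSrmm code; it assumes the first warning is issued at the JOURNALFULL threshold, then every further 5 percentage points, and every 1 point once utilization passes 90%.

```python
def journal_warnings(threshold, readings):
    """Return the utilization values at which a warning would be
    issued: the first at the JOURNALFULL threshold (EDG2107E), then
    every additional 5% (EDG2108E), and every 1% once over 90% full.
    A threshold of 0 disables warnings entirely."""
    warnings = []
    if threshold == 0:
        return warnings
    last = None  # utilization at the last warning issued
    for pct in readings:
        if pct < threshold:
            continue
        if last is None:
            warnings.append(pct)   # EDG2107E: threshold reached
            last = pct
        elif pct > 90 and pct >= last + 1:
            warnings.append(pct)   # EDG2108E: every 1% over 90% full
            last = pct
        elif pct >= last + 5:
            warnings.append(pct)   # EDG2108E: every additional 5% full
            last = pct
    return warnings
```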
┌──────────┬──────────┬───────────────────────────┬──────────────┐
│Variable  │Subcommand│Content                    │Format        │
│Name      │          │                           │              │
├──────────┼──────────┼───────────────────────────┼──────────────┤
│EDG@JDS   │LC        │Journal name               │44 characters │
├──────────┼──────────┼───────────────────────────┼──────────────┤
│EDG@JRNF  │LC        │JOURNALFULL PARMLIB        │Numeric 0-99  │
│          │          │operand value              │              │
├──────────┼──────────┼───────────────────────────┼──────────────┤
│EDG@JRNU  │LC        │Journal percentage used    │Numeric 0-100 │
└──────────┴──────────┴───────────────────────────┴──────────────┘
Serialization prevents updates from occurring to the control data set and journal
until the backups are complete, thus ensuring the integrity of the backups so that
recovery is possible. While the backups are taking place, however, any tape
processing must wait.
//EDGHSKP EXEC PGM=EDGHSKP,PARM='BACKUP(DSS)'
//MESSAGE DD DISP=SHR,DSN=RMM.MESSAGE
//SYSPRINT DD SYSOUT=*
//BACKUP DD DISP=(,CATLG),UNIT=TAPE,DSN=BACKUP.CDS(+1),
// LABEL=(,SL)
//JRNLBKUP DD DISP=(,CATLG),UNIT=TAPE,DSN=BACKUP.JRNL(+1),
// LABEL=(2,SL),VOL=REF=*.BACKUP
//DSSOPT DD *
CONCURRENT OPTIMIZE(4) VALIDATE COMPRESS
/*
The DUMP command operands you can specify in the DSSOPT DD statement are
controlled and validated by DFSMSdss, not by DFSMSrmm. If unsupported
command operands are specified, DFSMSdss fails the dump operation.
Comments can be included in the DSSOPT records, but they must be comments
acceptable to DFSMSdss. Refer to the DFSMS/MVS V1R3 DFSMSdss Storage
Administration Reference for command operands that are supported.
Backups taken on either level of code without using DFSMSdss are eligible on
either level of code.
When BACKUP(DSS) is requested, updates to the control data set are allowed
during journal backup. In addition, updates during control data set backup are
allowed if concurrent copy is used. The journal backup includes records that are
not included in the control data set backup. For this reason the restore process
now involves forward recovery from the latest journal backup as well as the
restore of the latest control data set backup.
Using DFSMSdss with concurrent copy can dramatically reduce the time during
which DFSMSrmm is unavailable for tape management processing. The dump
output is fully compatible with dumps produced without concurrent copy.
6.2.3 Performance
Although DFSMSrmm nonintrusive backup does not speed up the actual dump, it
dramatically reduces the time during which DFSMSrmm is unavailable for
performing tape management functions. With concurrent copy, DFSMSrmm is
unavailable for only a few seconds, rather than the minutes required before this
option was available.
The DFSMSrmm trace recording function receives the trace data scheduled for
output and writes it to a file on DASD. The PDA trace consists of two separate
log data sets. DFSMSrmm recognizes these log data sets by their DD names,
EDGPDOX and EDGPDOY. Recording takes place in the data set defined by
EDGPDOX. When that data set is filled, the two data set names are swapped,
and recording continues on the newly defined data set.
When the newly defined data set is filled, the names are again swapped, and the
output switches to the other data set, thus overlaying the previously recorded
data. The larger the file, the longer the period of time that will be represented
by the accumulated data.
Figure 14. DFSMSrmm Problem Determination Aid
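The two-file swap scheme can be sketched as follows. This is an illustrative Python sketch, not DFSMSrmm code; the "data sets" are in-memory lists keyed by the EDGPDOX/EDGPDOY names, and the capacity value is an assumption.

```python
class SwapLog:
    """Two-file wrap logging in the style of EDGPDOX/EDGPDOY: trace
    records go to the active data set until it is full, then the two
    roles swap and the other data set is overwritten."""
    def __init__(self, max_records):
        self.files = {"EDGPDOX": [], "EDGPDOY": []}
        self.active = "EDGPDOX"        # recording starts on EDGPDOX
        self.max_records = max_records
    def write(self, record):
        if len(self.files[self.active]) >= self.max_records:
            # swap roles and overlay the previously recorded data
            self.active = "EDGPDOY" if self.active == "EDGPDOX" else "EDGPDOX"
            self.files[self.active] = []
        self.files[self.active].append(record)
```

As in the text, the oldest trace data is always the data that is overwritten, so a larger capacity means a longer retained history.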
The in-storage circular file is a contiguous area of storage divided into blocks.
Each block contains multiple variable-length trace entries. Each block is written
to DASD as a separate unit of data. The size of each block and number of
blocks in the file are controlled by user-defined values within an in-storage
circular file. The DASD block size for the EDGPDOX/EDGPDOY data sets and the
number of blocks or buffers that make up the trace wrap table can be changed
with PARMLIB options:
PDABLKSZ(min=1, max=31) and
PDABLKCT(min=3, max=255)
The total size of the in-storage trace wrap table is the product of these two items
in kilobytes (KB). For example, if you specify PDABLKSZ(4) and PDABLKCT(12),
you get a total in-storage trace table of 48KB. The default in-storage trace wrap
table sizes if PDABLKSZ and PDABLKCT are not specified are based on where
the RMM trace data sets are placed. These defaults are:
EDGPDOX on 3380 = 5610K
EDGPDOX on 3390 = 6885K
default if no EDGPDOX = 3060K
The PDA trace came from DFSMShsm. The reuse of this trace facility will allow
consistency in how the two products implement a trace facility. Customers and
support groups familiar with the DFSMShsm trace facility will be able to use the
DFSMSrmm trace facility in the same way.
6.3.1.1 Invocation
The trace facility is enabled at the end of the DFSMSrmm startup unless
specifically inhibited by command in the startup procedure.
Typical commands that can be used from the console to control the DFSMSrmm
PDA trace are:
F DFRMM,PDA=ON (turns on tracing)
F DFRMM,PDALOG=SWAP (used to SWAP the DASD trace data sets)
Frequently, only several minutes to one hour of the PDA trace will be required to
analyze a problem. To reduce the amount of data to be sent for analysis, use
the DFSMSrmm trace formatter program, ARCPRPDO, to select and copy all
trace entries created during the time of interest.
IEFBR14 could of course be used to preallocate these data sets. An initial size
recommendation for these data sets is 20 cylinders. Then adjust them according
to installation requirements.
The DFSMShsm PDA formatter (ARCPRPDO utility) can be used to format the
EDGPDOX and EDGPDOY files. Users are not required to have the DFSMShsm
license to use the ARCPRPDO utility to format the DFSMSrmm PDA files.
6.3.4 Performance
Although the trace circular file processing is performed inline, the logging to
DASD is an independent task. This task is not merged inline with DFSMSrmm
function, has minimal dependence on the product structure or design, and
competes separately for time and resources. Therefore, the trace′s potential for
degrading DFSMSrmm performance and reliability is minimal.
where:
altvol Specifies the volser of the copy that is being used to replace
the original.
prefix Specifies the DFSMShsm-defined backup or migration prefix.
xxxx Specifies either BACK or HMIG, depending on the DFSMShsm
tape type.
Customers may have applications that manage their own pool of tapes and the
data sets on these tapes. Until now these applications (known in some other
tape management systems as external data managers) have stored information
about these data sets, and DFSMSrmm has duplicated it in its control data set.
Thus more disk space was required (for the data set information being
duplicated), and the backup for DFSMSrmm′s control data set and journal was
taking longer than necessary.
When DFSMSrmm is told through the EDGUX100 installation exit to record only
information for the first file on a tape volume, it still performs normal volume
validation, but it can no longer perform data set name checking for the second
and subsequent files on the volume.
DFSMSrmm can still be used to manage the volume based on the first file details
and the statistics maintained at the volume level.
Figure 15 on page 90 shows a sample EDGUX100 installation exit with the table
coded to include the names of four external data managers.
EDMTAB   DS    0F                          *START OF TABLE
*              JOBNAME       DATA SET NAME
*
         DC    CL8'*       ',CL44'BACKUP*'
         DC    CL8'ABC*'                   PROGRAM NAME
*
         DC    CL8'STSGWD* ',CL44'*'
         DC    CL8'A*'                     PROGRAM NAME
*
         DC    CL8'STSG%D* ',CL44'STSG%%.BACKUP.*'
         DC    CL8'DEF%MAIN'               PROGRAM NAME
*
         DC    CL8'STSGDPW ',CL44'DAVE.TOOMUCH.DATA.*'
         DC    CL8'AB999*'                 PROGRAM NAME
*
         DC    CL8'EDMEND'                 *END OF TABLE MARKER
Figure 15. EDGUX100 Installation Exit. The table entries are coded to include four external data managers.
Using the EDGUX100 exit to specify external data managers allows quite a lot of
flexibility in specifying the type of jobs, data sets, and programs for recording
only the first file on a tape volume. For example, the first entry in Figure 15 asks
the exit to look for all data sets with a high-level qualifier starting with the
characters BACKUP that are being written by any program starting with the
characters ABC. In this particular example, the job name is specified with an *,
so all jobs would be considered.
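The matching implied by the table can be sketched as follows. This is an illustrative Python sketch, assuming (consistently with Figure 15) that * matches any run of characters and % matches exactly one; the function names and the tuple layout are assumptions, not DFSMSrmm interfaces.

```python
import re

def mask_to_regex(mask):
    """Convert a mask in the style of Figure 15 to a regular
    expression: '*' matches any run of characters, '%' exactly one."""
    parts = []
    for ch in mask:
        if ch == "*":
            parts.append(".*")
        elif ch == "%":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("".join(parts) + r"\Z")

def is_edm(job, dsn, pgm, table):
    """Return True when a (jobname, data set name, program) triple
    matches an entry of the external-data-manager table; 'table'
    holds (job_mask, dsn_mask, pgm_mask) triples."""
    return any(
        mask_to_regex(jm).match(job)
        and mask_to_regex(dm).match(dsn)
        and mask_to_regex(pm).match(pgm)
        for jm, dm, pm in table
    )
```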
To reduce the possible exposures occurring from changes made to VRS policies,
such as causing changes to tape data set and volume expiration processing,
customers have requested the following changes to DFSMSrmm:
• A trial run capability for inventory management
• A report of updates made by inventory management
• The option to enforce a trial run if policy changes are made
• The provision of thresholds for inventory key actions
Now that the date can be specified on the trial inventory management run, an
installation can check what effect the existing or changed VRS policies would
have at some time in the future. For example, it would be possible to show how
many tapes would be moved offsite after month end processing had completed.
It would also be possible to see whether sufficient scratch tapes would be
released and available for use on the weekend.
The ACTIVITY file is now supported, and, during VRSEL processing, details of
changes made to data set information are added to it. If the ACTIVITY file is not
allocated for a VERIFY run, existing message EDG6101E is issued, and
processing fails.
Figure 16 on page 92 shows what you must do to set up the environment for
inventory management trial runs.
Figure 16. Inventory Management Trial Runs
Once a successful inventory management trial run has completed (VERIFY was
specified as a parameter on the EXEC statement), a production inventory
management job can be submitted, even if the VRSCHANGE PARMLIB option
specifies VERIFY, because DFSMSrmm knows that no changes to VRS have been
made.
DFSMSrmm now counts all VRSs (volume, data set name, and name) that it does
not delete and writes this number with message EDG2229I to the MESSAGE data
set. DFSMSrmm also compares this number to the number specified through the
VRSMIN PARMLIB option and uses the VRSMIN action value (FAIL, WARN, or
INFO) to decide whether, and how, processing is to continue.
If the ACTIVITY file is allocated and open, details of changes to data set records
and their vital record status are added to the file during processing.
Also, to control what DFSMSrmm should do when the
count is not reached, specify one of the following:
FAIL Issue message EDG2229I to the MESSAGE file
and stop inventory management processing.
A return code of 8 is set. This is the default.
WARN Issue message EDG2229I to the MESSAGE file
and continue processing. A return code of 4 is
set.
INFO Issue message EDG2229I to the MESSAGE file
and continue processing.
VRSCHANGE This operand determines which action DFSMSrmm should
take during inventory management following changes to
VRS policies using ADD or DELETE subcommands.
Optionally one of the following can be specified:
INFO Changes to VRS policies will not force a
VERIFY run to be made
VERIFY VRS policy changes must be verified by
running EDGHSKP vital records selection with
the VERIFY parameter. This program must run
successfully before a production run can be
performed.
VERIFY is the default.
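The VRSMIN action handling can be sketched as follows. This is an illustrative Python sketch of the FAIL/WARN/INFO semantics; the return code of 0 for INFO is an assumption, since the text gives no return code for that case.

```python
def vrsmin_check(vrs_count, vrsmin, action="FAIL"):
    """Apply the VRSMIN action when the retained-VRS count falls
    below the minimum. Returns (continue_processing, return_code).
    EDG2229I is issued in every case; FAIL stops processing with a
    return code of 8, WARN continues with 4, INFO continues with 0
    (assumed)."""
    if vrs_count >= vrsmin:
        return True, 0
    if action == "FAIL":
        return False, 8
    if action == "WARN":
        return True, 4
    return True, 0  # INFO
```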
The NFS protocols have become one of the most pervasive methods for
interoperability between different types of operating systems. The NFS protocol
is designed to allow transparent access between a client system and the server
system, with the files contained on the server system appearing as if they were
available locally on the client system. The NFS client program on the client
system maps local file system calls into network calls and sends a request for
data or control information to the NFS server. In this way, NFS client and server
deliver remote data access to an application without special interfaces in the
application.
The files from the server are mounted on the client′s file system. Data transfer
occurs when files are read or written and the data is transferred using TCP/IP
facilities across the network. A client can see much more data than can be held
locally, and many clients can share the same data, thus avoiding distribution of
data, downloading, and needless duplication.
Several functional changes were introduced with DFSMS/MVS V1R3 NFS, which
was announced in September 1996:
• Provide DFSMS/MVS NFS client support
With the DFSMS/MVS NFS client support, DFSMS/MVS has the complete client
and server implementation of NFS for MVS.
Figure 17. DFSMS/MVS NFS client feature. The NFS MVS client allows you to read and write data on UNIX and
other MVS machines.
Migrated PDS and PDSE members are not supported because a migrated
member's attributes are not accessible through this interface.
Attributes such as file size and time stamp are saved to DASD. Subsequent file
size requests do not cause a recall of the supported SMS-managed migrated
data set, thus improving performance.
If the data set is modified outside the server by a non-NFS application (for
example, by ISPF edit) before it was migrated, the stored file size could be
incorrect. When the data set is accessed again by the DFSMS/MVS NFS server,
it must be recalled to determine the correct file size.
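The saved-attribute behavior, including the stale-size case, can be sketched as follows. This is an illustrative Python sketch, not NFS server code; the class and method names are assumptions, and "recall" stands in for the DFSMShsm recall that refreshes the real attributes.

```python
class AttrCache:
    """File size is cached on first use so that a later size query
    need not recall the migrated data set. If the data set was
    modified outside the server before migration, the cached size is
    stale; accessing the data again forces a recall and refreshes
    the cache."""
    def __init__(self, recall):
        self.recall = recall   # callable returning (size, timestamp)
        self.saved = None      # cached (size, timestamp), or None
        self.recalls = 0
    def size(self):
        # served from the cache when possible: no recall needed
        if self.saved is None:
            self.saved = self._do_recall()
        return self.saved[0]
    def access_data(self):
        # reading the data always recalls, which corrects a stale size
        self.saved = self._do_recall()
        return self.saved
    def _do_recall(self):
        self.recalls += 1
        return self.recall()
```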
The DFM product set is based on the Distributed Data Management (DDM)
Architecture that has introduced the terms source and target for data requests.
These terms are equivalent to the terms client and server, respectively. One of
the primary ingredients of the Distributed FileManager suite of products is Data
Access Services (DAS) for both record-oriented and stream-oriented data on
local systems and remote servers. With DFM you can invoke commands to
create, delete, rename, and copy data sets on a remote system.
DFSMS/MVS DFM uses the DDM protocol, which enables like and unlike
computer systems to share file systems across a network. It enables your
MVS/ESA system to act as a server (target) to remote client (source) systems.
DFSMS/MVS DFM is designed to work with operating system platforms that
support DDM requests.
The advantage of the DFSMS/MVS DFM DataAgent approach over some other
client/server packages is that it allows client applications to expand their
function beyond basic data access (read, write, update, and delete) by actually
executing MVS jobs that range from running simple TSO and REXX commands to
fairly elaborate programs.
This ability to initiate jobs on remote MVS/ESA systems provides the basis for
much greater flexibility to tailor access from workstation applications to specific
business requirements. For instance, one DataAgent could be invoked to
preprocess data by making extracts from various files and repositories and
placing the data into a temporary work file. The remote workstation application
could then process the data in the temporary file and, upon completion, another
DataAgent could be invoked to perform any required postprocessing.
The DFSMS/MVS DataAgent provides the ability to initiate jobs on the remote
MVS/ESA platform in a manner similar to remote procedure call. This
represents a significant increase in capability as the workstation application's
processing ability is extended beyond basic data access by allowing the actual
initiation of jobs from the remote platform.
The data routing and conversion services let you write client applications without
concern for the ___location and format of the server data. You do not have to know
whether the data server is local or remote, you do not have to know how to
address the data formats on the server systems, and you do not have to change
your application if the data is relocated.
The record access utility can help you access record-oriented data, in general
used in mainframe environments, even though workstation data is typically
stream-oriented. You can use exactly the same interface to access record data
anywhere, so you do not have to remember, or learn, how to use several
different data management systems.
The record access utility and the data sorting tools provide you with new data
management capabilities on the workstation that were only available on the
mainframe in the past. They can help you establish a more complete
development environment on the workstation by letting you test against server
data or simulate server data on the workstation.
From a system management perspective, the SMARTdata UTILITIES give you the
flexibility to develop, migrate, and run your applications independently of the way
you distribute your data. By making it easier to share data without replicating it,
the SMARTdata UTILITIES can help simplify data administration, enhance data
integrity, and reduce the overall costs of a client/server environment.
A DFSMS/MVS DFM DataAgent can also be used to extract data from MVS files
and databases at the beginning of a workstation application in preparation for
subsequent retrieval by the client through normal SMARTdata UTILITIES DFM
interfaces. You could, for example, use DFM DataAgent to access data sets not
otherwise supported by DFM and copy all or some of the data to a temporary file.
In this way, all client platforms that have SMARTdata UTILITIES DFM installed
can use the DDMOpen remote file request to MVS/ESA or OS/390 to trigger
DataAgent processing on the host.
For both stream and record access, parameter declarations pertain to the object
for which a DCLFIL (declare file) is done. This could be a file, a directory, or a
drive.
In any case, two different agents cannot run in the same conversation or session
for a given object. Also, no more than 32 DataAgent routines can be active
concurrently on the DFSMS/MVS system, and no more than one instance of a
given DataAgent routine can run at a time if the DFMXLPRM (locate extended
DataAgent) parameter is used.
7.3.2.1 DDMOpen
The current DDMOpen function is used to provide a file name suffix that can be
used to trigger DFM DataAgent processing on MVS.
Although it is possible to run some existing PROCLIB members that have no
particular initialization requirements by using only the AGENT keyword, we do
not recommend that you do so, because return codes will not be passed back to
DFM. Also, there is usually a need for extended parameter passing. For
these two reasons, use the AGENT parameter in conjunction with the PARM
or PGM parameter (or both), even if the PARM parameter is the value of PARM() or
the PGM name is the same as the agent name.
DFSMS/MVS DFM imposes a limit of 255 bytes on the file name and file name
suffix, and therefore on the total length of the parameters (AGENT, PARM,
PC_CCSID, START) that can be passed.
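The 255-byte limit above can be checked before issuing the request. The following is an illustrative C helper, not part of any IBM-supplied interface; the suffix string shown in the usage note is a hypothetical example.

```c
#include <string.h>

/* Return 1 if the combined file name and file name suffix fit within the
 * 255-byte DFM limit described above, 0 otherwise. */
static int ddm_name_fits(const char *file_name, const char *suffix)
{
    return strlen(file_name) + strlen(suffix) <= 255;
}
```

For example, ddm_name_fits("USER.DATA.SET", "(AGENT(DFMXTSO))") indicates whether that name and suffix can be passed to DFM.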
If the DataAgent routine does not exist or an error occurs while loading a
DataAgent, a VALNSPRM (parameter value not supported) will be returned. If
the agent returns an error as a result of a DCLFIL or later call, an invalid request
error reply message (INVRQSRM) will be returned.
The DataAgent can be called a number of times. It can be called at DCLFIL time
for preprocessing necessary to build a temporary file for use by an application or
for one-time processing. It can also be called at DELDCL (delete declaration)
time for postprocessing necessary to move temporary file data to its final
destination. Note that this temporary file is located on the server site, so there
is no data traffic over the network.
DataAgent Extended Parameter List: DCLFIL and DELDCL processing calls the
DataAgent routine with a second parameter consisting of an area of storage
containing the following:
1. Halfword length field
The length of the parameter list that follows
2. Reserved halfword
Reserved for future use
3. Two-byte command code point
Refer to the DDM architecture for the definitions of code points. In the first
release the agent will be taken only for DCLFIL, code point X'102C', and
DELDCL, code point X'102D'. This code point can be used by the DataAgent
routine to determine whether to do preprocessing, postprocessing, or
one-time processing.
4. Two-byte code point representing the type of object to be declared
The agent can be taken for an object whose type is DRCNAM (directory
name), code point X'1165', and for FILNAM (file name), code point X'110E'.
5. Original MVS file or directory name
Two-byte length field and a 54-byte character string containing the file or
directory name padded with blanks. This is the name as provided by the
workstation.
6. Modified MVS file name
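The fixed-length portion of this parameter area, as described in the numbered fields above, might be mapped in C as follows. This is a sketch pieced together from the field descriptions in the text; the struct and field names are illustrative, not taken from an IBM header, and the modified MVS file name (item 6) follows the fixed portion.

```c
#include <stdint.h>

#pragma pack(1)                 /* fields are contiguous and unaligned */
typedef struct {
    uint16_t parm_len;          /* 1. length of the parameter list that follows */
    uint16_t reserved;          /* 2. reserved for future use                   */
    uint16_t command_cp;        /* 3. DDM command code point:
                                      0x102C = DCLFIL, 0x102D = DELDCL          */
    uint16_t object_cp;         /* 4. object type code point:
                                      0x1165 = DRCNAM, 0x110E = FILNAM          */
    uint16_t orig_name_len;     /* 5. length of the original name               */
    char     orig_name[54];     /*    original MVS file or directory name,
                                      padded with blanks                        */
    /* 6. the modified MVS file name follows the fixed portion */
} DfmAgentParms;
#pragma pack()
```

With the packing shown, the fixed portion occupies 64 bytes (2+2+2+2+2+54).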
Function syntax:
DFMXLPRM(AGENTNAME,PARMPTR)
− AGENTNAME (input)
This variable contains the name of the agent routine (which is not
necessarily the same as the procedure name) whose extended
parameter list is to be located. It is eight characters long and padded
with blanks, if necessary.
− PARMPTR (output)
A 4-byte pointer to the extended parameter list. Set to zero if the
parameter list cannot be found.
For example, a DataAgent exit written in C could call this function and test the
result as follows:
DFMXLPRM("DFMXAGENT",&p_extra);
if (p_extra == NULL) {
   /* extended parameter list not found; handle the error */
}
A nonzero value in register 15 will terminate any further processing for the
DataAgent routine regardless of the extended reason code setting.
The general procedure for converting an old PROCLIB member so that it can run
as a DataAgent is to:
1. Copy the member to a new PROCLIB member (preferably with a prefix of
DFMX).
2. Add DFMINIT= to the PROC statement.
3. Change the program name on the EXEC statement to PGM=&DFMINIT.
4. Invoke the DataAgent routine by specifying AGENT(new_procname)
PGM(original_program_name).
IKJEFT01 will obtain commands to issue from both the parameter list and the
userid.DFMXTSO.SYSTIN file. Output will be written to the
userid.DFMXTSO.SYSTSPRT file.
SYSTSPRT can be browsed from the workstation to determine the results of the
commands executed.
The agent routine is written in such a way as to obtain commands to issue from
the parameter list and not from a SYSTIN file. Output is written to
userid.DFMQTSO.SYSTSPRT.
SYSTSPRT can be browsed from the workstation to determine the result of the
command executed. Remember that the SYSTSPRT file may be empty if the
PARM field specifies an invalid command.
Note that the remote file name is optional for AGENT, disallowed for QTSO, TSO,
and START, and required for the free-form invocation. If a remote file name is
specified, it must exist, but other than checking that it can be opened for
input, DFMACALL does not use it.
QTSO invokes quick TSO exit DFMQTSO, TSO invokes regular TSO exit
DFMXTSO, AGENT invokes the specified MVS procedure, START runs the
specified MVS command, and the free-form invocation takes its parameters
from the filename_suffix, if any.
To use this application you have to initialize the procedure's input file on MVS.
The provided samples use userid.AGENT_NAME.SYSIN or
userid.AGENT_NAME.SYSTSIN, depending on whether the agent is DFMXTSO.
For example, when you use the TSO DataAgent routine DFMXTSO, the SYSTSIN
input file might contain something like %MYCLIST PARM1 ....PARMn.
After running the application, you can view the results in the MVS SYSOUT file,
for example, in userid.AGENT_NAME.SYSPRINT (or
userid.AGENT_NAME.SYSTSPRT for TSO agents DFMQTSO and DFMXTSO).
Because TSO batch defaults to no prefixing of data set names, you might want to
specify "profile prefix(userid)" in either the PARM or SYSTSIN.
Sample agent DFMXSORT sorts an input file based on column numbers specified
in the parameter list, builds a temporary file consisting of the sorted output, and
returns the name of the temporary file to DFM for subsequent retrieval by
SMARTdata UTILITIES DFM.
Explanation: During DFM DataAgent processing, the MVS function shown failed
with the indicated return and reason codes.
However, common errors have more specific text. For example, if the DataAgent
routine cannot be found in JOBLIB, STEPLIB, or LPALIB, "LOCATING MODULE
DataAgent_routine_name" will be substituted for function, return_code, and
reason_code.
The DFSMS Optimizer uses historical and real-time data to provide an overall
picture of data usage on each system. It gives an installation the ability to
understand how it is managing storage today. With that information the
installation can then make informed decisions about how it should manage this
storage in the future.
The Optimizer uses the following analyzers to provide detailed and summarized
information about storage use (Figure 19 on page 116):
• Performance analyzer
• Management class analyzer
The DFSMS Optimizer uses input data from several sources and processes it,
using an extract program that merges the data and builds the Optimizer
database.
By specifying different filters, you can produce reports that help you build a
detailed management picture of your enterprise. You can use the charting
facility to produce color charts and graphs from the report data to greatly
enhance the value of this information.
In addition, the HSM Monitor/Tuner is available to keep you informed about the
status of DFSMShsm's activity and to provide dynamic control and tuning
capability.
The management class analyzer can also be used to predict the amount of level
0 (L0) and migration level 1 (ML1) storage that would be required if a new set
of migration attributes were used.
Additionally, the management class analyzer can be used to determine the most
cost-effective management class assignments for data sets that are to be
converted to SMS or to tune the current management class settings to reduce
possible DFSMShsm thrashing.
While viewing each chart, you can dynamically set attributes—style of chart,
colors, text placement—to create the charts that best fit the needs of your
installation.
The charting facility accepts any of the report files and automatically converts
each file into multiple charts. Color charts are available in various formats,
including column, tabular, and three-dimensional.
The HSM Monitor/Tuner OS/2 interface monitors all DFSMShsm functions running
on each of the associated MVS hosts. It helps you set up system resources
correctly to ensure that HSM runs in an efficient and timely manner.
CSI was developed to give users easy read-only access to information stored in
ICF catalogs. It provides an alternative to using the AMS LISTCAT command.
9.1 NaviQuest
SMS provides comprehensive and cost-effective management of an installation's
most valuable asset, data. Recognizing that some customers have not yet
exploited SMS through the DFSMS/MVS product, additional assistance is being
provided through the NaviQuest component of DFSMSdfp.
NaviQuest helps customers who are implementing SMS for the first time. It
enables the storage administrator to test policies and configurations before
running production data. NaviQuest also helps customers who have already
implemented SMS and want to restructure their SMS routines to add new data
types to be managed or exploit their SMS investment by using new or additional
SMS functions.
NaviQuest offers solutions for simplified DFSMS testing and additional storage
management capabilities:
• Familiar ISPF/ISMF panel interface
• SMS implementation assistance
• Fast, easy, bulk test case creation
• Automatic class selection (ACS) testing automation (that is, regression
testing)
• Storage reporting assistance
• Additional tools to aid with storage administration tasks.
Rather than presenting every panel and discussing every field, in this section we
show the main panels as they apply to two main areas for which the NaviQuest
component is likely to be used:
• Regression testing for changes to ACS routines and the DFSMS configuration
• Interactive selection of storage functions for batch processing
Your installation probably has an ISMF Saved List that contains the names and
attributes of many data sets that are directly associated with TSO. You create
the ISMF Saved List from ISMF's Data Set Selection Entry Panel and then, on
the resulting Data Set List panel, by typing SAVE TSOLIST on the command line.
When using NaviQuest, it would be normal practice to limit the ISMF Saved List
to a single grouping of data sets (or data subtype) that have the same data
class, storage class, management class, and storage group policy names.
On the ISMF Primary Option Panel shown in Figure 20 on page 121, choose
option 11, Enhanced ACS Management, to go directly to the NaviQuest Primary
Option Menu (POM), which includes seven NaviQuest options, plus the option to
Exit (see Figure 21 on page 121).
Panel Help
--------------------------------------------------------------------------------
ACBSMDP0 ENHANCED ACS MANAGEMENT - NAVIQUEST PRIMARY OPTION MENU
Enter Selection or Command ===> 1______________________________________________
Choose option 1, Test Case Generation, to get to the Test Case Generation
Selection Menu, which, as you can see in Figure 22 on page 122, enables you to
create bulk test cases from one of four different sources of data.
You would choose option 4, VMA Extract Data, to generate test cases for tape
data sets and tape volumes that should be stored and accessed from a
system-managed tape library (that is, the IBM 3494 and/or 3495 Automated Tape
Library Data Servers).
In the example in Figure 22, we use option 1, Saved ISMF List. We call the list
TSOLIST.
Panel Help
--------------------------------------------------------------------------------
ACBSFLG4 TEST CASE GENERATION SELECTION MENU
Enter Selection or Command ===>1_______________________________________________
Figure 23 on page 123 shows the Test Case Generator from Saved ISMF List
Entry Panel from which test cases can be generated from a Saved ISMF List. In
this example, the DFSMS policies are assigned on the basis of the information
contained in the Saved ISMF List, which includes such information as the data
set name, data set organization, the disk volume on which the data set was
stored, and the size of the data set.
It is also possible to specify other variables on this panel that would apply to all
test cases in the Saved ISMF List, such as ddname and program name.
We continue to use the name of the previously Saved ISMF List: TSOLIST. Each
test case is stored as a separate member in a library; in this example, the
library is 'NAVIQ.ONL.TESTLIB'. Each test case is given a member name (for
example, TSOL1, TSOL2, ...).
To generate test cases, specify the following information and press Enter:
Saved ISMF List . . . . . . TSOLIST_ (Data set list)
After these test cases have been generated, if a typical member is browsed, it
would contain information such as:
BROWSE NAVIQ.ONL.TESTLIB(TSOL1)
DESCRIPTION1:
TEST CASE CREATED 96/11/04 at 19:58:5 BY LITTLEP
DSN: NAVT001.DAILY.REPORT
DSORG: PS
DSTYPE: PERM
ACSENVIR: RECALL
JOB: DFHSM
SIZE: 55
STORCLAS: STD
MGMTCLAS: STD
NVOL: 1
VOL: 01
MIGRAT
UNIT: 3390
Once the test cases have been generated, you can test them, using option 7,
Automatic Class Selection (see Figure 20 on page 121). Save the output from
this run in the LISTING data set as it will act as the baseline against which future
runs with modified ACS routines are checked.
The full procedure for running regression testing is documented in the NaviQuest
User's Guide (SC26-7194). Basically, whenever the ACS routines are to be
changed such that a new data subtype is to be managed, you must do the
following:
1. Ensure that the test case library contains the new data subtype.
2. Test these test cases against the new ACS routine, using option 7, Automatic
Class Selection, and write the output to a new listing file.
This panel contains the names of the listing files entered, from before and after
the changes were made to the ACS routines, the name of the library used to
hold the test cases, and the names of two output data sets:
1. Comparison Results Data Set, a sequential report file used to highlight the
differences for the exception cases
2. Exception Test Case PDS, a data set that holds copies of the members that
were the exception test cases.
The Exception Test Case PDS also holds details of all test cases that either have
never been run before or have been given different data class, storage class,
management class, or storage group assignments. If these exceptions identify
errors in assigning the DFSMS policies, then the ACS routines must be recoded
and the process rerun.
Note: The only valid exceptions should be for the new data subtype that has
never been through this process before and for which, therefore, expected
results are not stored in the test case library for the new test cases.
Panel Help
--------------------------------------------------------------------------------
ACBDFLC1 ACS TEST LISTINGS COMPARISON PANEL
Command ===> __________________________________________________________________
To compare ACS listings, specify the following information and press Enter:
Input Data Sets:
Base ACS Test Listing (Before latest ACS routine changes)
===> LISTING_________________________________________________
New ACS Test Listing (After latest ACS routine changes)
===> NEW.LISTING_____________________________________________
Reference Data Set for Compare:
Test Case PDS (Test source for listings above)
===> ′ NAVIQ.ONL.TESTLIB′_____________________________________
If the ACS routines have logic errors that produce incorrect assignments for any
of the data types (or data subtypes), you can use the Enhanced ACS Test Listing
Entry Panel (select option 3, Enhanced ACS Test Listing, on the NaviQuest
POM) to find further information for each data set that was tested. As you can
see in Figure 25 on page 125, you can choose which additional information
should be included in the output listing.
To generate enhanced ACS test listing, specify the following and press Enter:
ACS Test Listing
===> NEW.LISTING_____________________________________________
DSN . . . . . Y JOBNAME . . . . Y
EXPDT . . . . N SIZE . . . . . . N
UNIT . . . . . Y PROGRAM . . . . Y
Once you have corrected all of the ACS routine errors and ensured that all
exceptions are valid (that is, only for the new data subtypes), you can use
NaviQuest to place the results of the test into the test case definitions, as the
saved expected results for later regression testing.
To save the test results, choose option 4, Test Case Update with Test Results,
from the NaviQuest POM to get the Test Case Update with Test Results Entry
Panel. Figure 26 on page 126 shows this panel filled in with the:
1. Latest listing (NEW.LISTING) after the changes to the ACS routines
2. Names of the comparison results and exception test case data sets
3. Test case library
The test case members for the exceptions are read and copied into the test bed
library. The saved expected results are obtained from the comparison report
and are also saved in the test bed library.
To update test cases with test results, specify the following and press Enter:
The batch jobs are grouped into four major divisions by function as shown in
Figure 27 on page 127. All of the jobs, complete with sample JCL, are stored in
the SYS1.SACBCNTL library. Once you have selected the specific type of job to
be run, you can edit a copy of the sample JCL from an ISPF Edit panel.
Let's look at an example of using NaviQuest to set up a batch job, in this case,
creating a data set listing. On the Batch Testing/Configuration Selection Menu,
select ISMF option 1, Saved ISMF List Operations Batch Samples, to get to the
Saved ISMF List Operations Batch Samples Selection Menu (Figure 28 on
page 127).
The panel shows the names of the different sample jobs related to data set and
volume lists. You can scroll down to show three further jobs related to
producing tape volume lists. In our example, we type S next to Create Data Set
List.
Panel Help
--------------------------------------------------------------------------------
ACBSMDJ2 SAVED ISMF LIST OPERATIONS BATCH SAMPLES SELECTION MENU
Command ===> __________________________________________________________________
Select an option by typing ′ S′ or enter Data Set to Edit and press Enter:
More: +
S Create Data Set List
_ Create Data Set List and Save Query
_ Create Data Set List from a Saved Query
_ Generate Test Cases from a Data Set List
_ Generate Model Commands from a Saved List
_ Generate Data Set Report
_ Create Data Set List and Generate Data Set Report
_ Create DASD Volume List
_ Create DASD Volume List and Save Query
_ Create DASD Volume Query from a Saved Query
_ Generate DASD Volume Report
_ Create DASD Volume List and Generate DASD Volume Report
Pressing Enter will put you into ISPF Edit for the sample JCL. Once you have
modified the JCL and/or commands, you can submit the job. If you press PF3 (or
If you want to define or modify the SMS configuration in batch, select option 3,
Configuration Changes Batch Samples, from the Batch Testing/Configuration
Selection Menu to get to the Configuration Changes Batch Samples Selection
Menu (Figure 29). This panel lists the sample jobs that you can select to define
or change various DFSMS policies in the DFSMS configuration. In our example,
we select Define/Alter Management Class.
Panel Help
--------------------------------------------------------------------------------
ACBSMDJ5 CONFIGURATION CHANGES BATCH SAMPLES SELECTION MENU
Command ===> __________________________________________________________________
Select an option by typing ′ S′ or enter Data Set to Edit and press Enter:
Selecting this option will put you into ISPF Edit, and the JCL will be displayed as
shown in Figure 30 on page 129 and Figure 31 on page 130. Changes will have
to be made to the JOBCARD, prefix, and available parameters.
Here are the parameters for defining or altering a management class:
NaviQuest Key Word ISMF Field Name
SCDS CDS Name
MGMTCLAS Management Class Name
DESCR Description
EXPNOUSE Expire after Days Non-usage
EXPDTDY Expire after Date/Days
RETNLIM Retention Limit
PARTREL Partial Release
PRINOUSE Primary Days Non-usage
LV1NOUSE Level 1 Days Non-usage
CMDORAUT Command or Auto Migrate
PRIGDGEL # GDG Elements on Primary
GDGROLL Rolled-off GDS Action
BACKUPFR Backup Frequency
NUMBKDSE Number of Backup Vers
(Data Set Exists)
Figure 31. Second Edit Screen. Shows the command and parameters that may be changed
For reference purposes the parameters for changing a pool storage group in
batch are:
NaviQuest Key Word ISMF Field Name
SCDS CDS Name
STORGRP Storage Group Name
DESCR Description
AUTOMIG Auto Migrate
MIGSYSNM Migrate Sys/Sys Group Name
AUTOBKUP Auto Backup
BKUPSYS Backup Sys/Sys Group Name
AUTODUMP Auto Dump
DMPSYSNM Dump Sys/Sys Group Name
DUMPCLAS Dump Class
HIGHTHRS Allocation/migration Threshold: High
LOWTHRS Allocation/migration Threshold: Low
GUARBKFR Guaranteed Backup Frequency
SGSTATUS SMS SG Status
DUMPCLAS can accept up to 5 values separated by commas.
SGSTATUS can accept up to 32 values separated by commas.
For reference purposes the parameters for changing a tape storage group in
batch are:
NaviQuest Key Word ISMF Field Name
SCDS CDS Name
STORGRP Storage Group Name
DESCR Description
LIBNAME Library Names
SGSTATUS SMS SG Status
LIBNAME can accept up to 8 values separated by commas.
SGSTATUS can accept up to 32 values separated by commas.
Just as with a pool storage group, you can use an asterisk (*) as a subparameter
of SGSTATUS to leave the status unmodified for a particular system or sysgroup.
The CSI is used to quickly obtain information about entries in the ICF catalogs.
The ICF catalog entries are selected by using a generic filter key provided as
input. The generic filter key can be either a fully qualified name that returns one
entry or a partially qualified name (wild cards) that returns multiple entries on a
single invocation. Several sample programs are provided to facilitate fast and
easy use of the CSI programming interface (see 9.2.5, “CSI Sample Programs”
on page 137).
9.2.1 Overview
To obtain information about ICF catalog entries, you request field information for
each entry by specifying field names. Thus you do not have to know whether the
information is in the Basic Catalog Structure (BCS) or the VSAM Volume Data
Set (VVDS).
9.2.2 Invocation
CSI can be invoked in either 24-bit or 31-bit addressing mode. CSI is reentrant
and reusable. CSI can be invoked in any protection key and in either supervisor
or problem state.
The contents of the reason area depend on the contents of GPR 15.
Refer to the “Return Status Information” section in the Catalog Search Interface
User ′ s Guide for a full explanation of return codes in GPR 15.
Generic Filter Key Field (CSIFILTK): CSI uses a generic filter key supplied in
CSIFILTK. A generic filter key is a character string that describes the catalog
entry names for which you want information returned. The generic filter key can
contain the following symbols:
Catalog Name (CSICATNM): CSICATNM is used for catalog selection. CSI uses
the catalog name supplied in the CSICATNM field to search for entries if
CSICATNM is not blank. If this field is blank, catalog management attempts to
use the high-level qualifier of CSIFILTK to locate an alias or multiple aliases that
match. If an alias is found, the user catalog for that alias is searched.
Otherwise, the master catalog is searched.
If a tape volume catalog library entry type or a tape volume catalog volume entry
type is specified in CSIDTYPS, a tape volume catalog is searched. Tape volume
catalog library entry types and tape volume catalog volume entry types should
not be mixed with ICF catalog entry types.
The VSAM data and index components are returned with the cluster; thus there
are no type specifications for them. However, D and I types will appear in the
output information.
The valid types can be mixed and specified in any order. Blanks cannot separate
the types. For instance, ABH might be specified to get only non-VSAM, GDG, and
GDS entries.
If CSIDTYPS is set to all blanks, types A, B, C, G, H, R, X, and U are returned.
These are the ICF catalog types. L and W must be explicitly specified to get
the tape volume catalog entries.
It is possible to get a list of names without any field information. In this case, set
CSINUMEN to zero.
There is a limit of 100 field names per invocation of CSI. CSINUMEN cannot be
greater than 100.
Field Names (CSIFLDNM): CSIFLDNM is a list of 8-byte field names. If the field
name is not eight characters long, it must be padded on the right with blanks to
make eight characters. Refer to the “Field Name Directory” section in the
Catalog Search Interface User ′ s Guide for valid field names that can be used in
the list and the information returned for each field name.
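The padding rule above can be captured in a small helper that builds a CSIFLDNM-style list and enforces the 100-name limit on CSINUMEN. This is an illustrative sketch, not part of the IBM-supplied interface, and the field names used in the usage note are examples; check the Field Name Directory for the names valid at your level.

```c
#include <string.h>
#include <stddef.h>

#define CSI_MAX_FIELDS 100   /* CSI limit on field names per invocation */

/* Copy n field names into out as a CSIFLDNM-style list: each name is
 * padded on the right with blanks to exactly 8 bytes. Returns n, or -1
 * if n exceeds the CSI limit or a name is longer than 8 characters. */
static int build_fldnm_list(const char **names, int n, char *out)
{
    if (n < 0 || n > CSI_MAX_FIELDS)
        return -1;
    for (int i = 0; i < n; i++) {
        size_t len = strlen(names[i]);
        if (len > 8)
            return -1;
        memset(out + 8 * i, ' ', 8);      /* blank padding on the right */
        memcpy(out + 8 * i, names[i], len);
    }
    return n;
}
```

For example, passing the two names "ENTYPE" and "VOLSER" produces the 16-byte list "ENTYPE  VOLSER  ", ready to be placed after CSINUMEN in the parameter area.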
The second field is the minimum length required to return a catalog entry and
one entry's worth of data. If the minimum length is greater than the work area
length, the work area length must be increased to at least the minimum length.
If this is not done and a resume condition occurs, the user program will appear
to be in an endless loop, because the same information will be returned for
each resume until the work area length is increased to contain the entire
entry. CSI sees a GDG with all of its associated GDSs as one record; thus the
required length for this entry is apt to be large.
The third field is the amount of space that was used in the work area. CSI
always returns a full entry. If the last entry does not fully fit in the remaining
work area space, the resume flag is set and the space at the end of the work
area is unused. The unused space is usually small.
A catalog name entry is returned for every catalog processed. A catalog entry
can be identified because its type is X'F0'. This is an artificial type
invented so that the next catalog entry can be found. The catalog entry is
always followed by return information. The return code portion will be zero if
no problems were encountered while processing the catalog during the call.
Following the catalog entry are one or more entries contained in the catalog
that match the search criteria (filter key). Each entry has flags, followed by
its type and name. If the flags so indicate, a module ID, reason code, and
return code follow the entry name; otherwise, the field information for the
entry follows.
The MODULE ID / RSN / RC returned in the work area for each entry is
information returned by catalog management.
If no errors or messages occurred, the field information for the entry is returned
as a set of lengths with the data corresponding to the lengths.
The first length field is the total length of the length fields including this field and
all of the data returned for this entry. The total length field is 2 bytes long. After
that is a reserved 2-byte field.
Next is a set of lengths corresponding to the number of field names passed in.
Each length gives the length and position of the returned data for the entry.
For example, if three field names were supplied on input, there will be three
field lengths, each describing the data immediately following the lengths. If
the lengths had values 4, 6, and 8, then, immediately following the last
length, there would be 4 bytes of data for the first field; starting 4 bytes
after the last length, 6 bytes of data for the second field; and starting 10
bytes after the last length, 8 bytes of data for the third field.
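The offset arithmetic above is just a running sum of the preceding lengths. A minimal sketch in C, assuming each field length is a 2-byte value (consistent with the 2-byte total length field described earlier):

```c
#include <stdint.h>
#include <stddef.h>

/* Given the per-field lengths that follow the 2-byte total length and the
 * reserved 2-byte field, return the offset of field i's data, measured
 * from the end of the length array (the data areas are laid out back to
 * back, as described in the text). */
static size_t field_data_offset(const uint16_t *lengths, int i)
{
    size_t off = 0;
    for (int k = 0; k < i; k++)
        off += lengths[k];
    return off;
}
```

With lengths 4, 6, and 8, the three fields start at offsets 0, 4, and 10 after the last length, matching the example in the text.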
VSAM RLS extends the DFSMS/MVS storage hierarchy to support data sharing
across multiple systems in a System/390 parallel sysplex. Enhancements in
DFSMS/MVS V1R4 take advantage of VSAM RLS data access and thus exploit
the capabilities of a parallel sysplex. In this appendix we review VSAM RLS and
parallel sysplex technology.
Major transaction and database systems that can exploit the coupling facility
technology within a parallel sysplex are IMS/ESA, DB2, and CICS VSAM RLS.
DFSMShsm can now also exploit the functions and performance of VSAM RLS by
allowing the migration, backup, and offline control data sets (MCDS, BCDS, and
OCDS) to be cached in the coupling facility.
VSAM RLS relies on the following four functions, which are provided by other
software:
• Central logging
VSAM data can be accessed by multiple users. For integrity, backup, and
recovery purposes, the exploiting subsystem software can choose to log
changes to the VSAM data set by using the MVS Logger.
• Central locking
To use VSAM RLS, the VSAM data set must be SMS managed. The storage
class assigned to the VSAM data set determines which cache structures to use
in the coupling facility, if the VSAM data set is opened in RLS mode.
VSAM RLS supports the KSDS, ESDS, RRDS, and VRRDS file organizations.
KSDS and ESDS access is also supported through path access.
RLS specifies that VSAM record-level sharing protocols are used. It implies
that VSAM uses cross-system record-level locking (as opposed to CI locking),
uses the coupling facility for buffer consistency and performance across the
sysplex, and manages a systemwide local cache. With VSAM RLS the coupling
facility is used as a store-through cache, so the data on the shared DASD
always reflects the most recent updates to the data set. When a VSAM data set
is accessed in RLS mode, the SHAREOPTIONS values are ignored, as the VSAM RLS
code always assumes multiple readers and writers and takes responsibility for
data integrity.
Currently, when a VSAM data set is opened in RLS mode, it cannot be accessed
by any other application in any other mode (NSR, LSR, or GSR) at the same
time. NSR reads will be allowed by 3Q97.
The SMSVSAM server uses the coupling facility for its cache and lock structures.
With VSAM RLS, the application buffers and most VSAM control blocks are
allocated in a 2GB data space associated with the SMSVSAM address space.
The only VSAM control blocks still in the application address space are the ACB,
RPLs, and EXLST.
The SMSVSAM servers (one per MVS image) coordinate with each other to
logically maintain a single control block structure for each open VSAM RLS data
set across the sysplex and assume responsibility for maintaining
synchronization across the sysplex.
Because a data set CI can reside in more than one local buffer, it can be
updated concurrently. Because record-level locking is performed, any
concurrent updates are against different records in the same CI. The changed
records from the multiple copies of the CI are merged to create a copy of the CI
that contains all of the changed records. This avoids the update problem where
an update can be lost when multiple independent tasks attempt to update the
same block of data from their own buffers, not realizing that they do not have
exclusive control over this block of data.
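The merge described above can be sketched as follows. The fixed record length, the number of records per CI, and the in-memory layout are assumptions made purely for illustration; the point is that record-level locking guarantees the two copies changed disjoint sets of records, so each record can simply be taken from whichever copy changed it.

```c
#include <string.h>

#define REC_LEN 8        /* assumed fixed record length, for illustration */
#define RECS_PER_CI 4    /* assumed records per CI, for illustration      */

/* Merge two locally updated copies of a CI against the base image.
 * Because the changed record sets are disjoint, the merged CI takes each
 * record from whichever copy changed it (or from the base if neither did). */
static void merge_ci(const char *base, const char *copy_a,
                     const char *copy_b, char *out)
{
    for (int r = 0; r < RECS_PER_CI; r++) {
        const char *rec = base + r * REC_LEN;
        if (memcmp(copy_a + r * REC_LEN, rec, REC_LEN) != 0)
            rec = copy_a + r * REC_LEN;   /* copy A changed this record */
        else if (memcmp(copy_b + r * REC_LEN, rec, REC_LEN) != 0)
            rec = copy_b + r * REC_LEN;   /* copy B changed this record */
        memcpy(out + r * REC_LEN, rec, REC_LEN);
    }
}
```

Neither copy's unchanged records can overwrite the other's updates, which is exactly the lost-update problem the paragraph above describes.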
Locking at the record level rather than at the CI, CA, or data set level provides a
much finer granularity of locking than would otherwise be the case. This helps
performance by limiting exclusive lockouts to cases where multiple applications
attempt to update the same record.
Information in this book was developed in conjunction with use of the equipment
specified, and is limited in application to those specific hardware and software
products and levels.
IBM may have patents or pending patent applications covering subject matter in
this document. The furnishing of this document does not give you any license to
these patents. You can send license inquiries, in writing, to the IBM Director of
Licensing, IBM Corporation, 500 Columbus Avenue, Thornwood, NY 10594 USA.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact IBM Corporation, Dept.
600A, Mail Drop 1329, Somers, NY 10589 USA.
The information contained in this document has not been submitted to any
formal IBM test and is distributed AS IS. The use of this information or the
implementation of any of these techniques is a customer responsibility and
depends on the customer's ability to evaluate and integrate them into the
customer's operational environment. While each item may have been reviewed
by IBM for accuracy in a specific situation, there is no guarantee that the same
or similar results will be obtained elsewhere. Customers attempting to adapt
these techniques to their own environments do so at their own risk.
Reference to PTF numbers that have not been released through the normal
distribution process does not imply general availability. The purpose of
including these reference numbers is to alert IBM customers to specific
information relative to the implementation of the PTF when it becomes available
to each customer according to the normal IBM PTF distribution process.
AIX, BookManager, BookMaster, and C/370 are trademarks of IBM Corporation.
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
This information was current at the time of publication, but is continually subject to change. The latest
information may be found at http://www.redbooks.ibm.com.
Redpieces
For information so current it is still in the process of being written, look at "Redpieces" on the Redbooks Web
Site (http://www.redbooks.ibm.com/redpieces.htm). Redpieces are redbooks in progress; not all redbooks
become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the
information out much quicker than the formal publishing process allows.
• E-mail orders (IBMMAIL / Internet)
In United States: usib6fpl at ibmmail / [email protected]
In Canada: caibmbkz at ibmmail / [email protected]
Outside North America: dkibmbsh at ibmmail / [email protected]
• Telephone orders
We accept American Express, Diners, Eurocard, Master Card, and Visa. Payment by credit card is not
available in all countries. A signature is mandatory for credit card payment.
Index

messages (continued)
  IEC161I 43
  IEC214I 19
  IEC813I 20
  IEC980A 20, 21
  IEC980I 19
multivolume selection 73

N
NaviQuest
  ACS management option panels 120
  ACS test listings comparison 124
  define/alter management class in batch 129
  DFSMS FIT 120
  IGDACSSC exit 122
  ISMF functions in batch 126
  ISMF saved lists 120
  migration considerations 131
  overview 119
  regression testing 120
  SAVE TSOLIST command 120
  SYS1.SACBCNTL library 126
  test case generation 121
Network File System 95
NOSTACK keyword 58

O
O/C/EOV serviceability enhancements
  abend message 19
  activating IFGOCETR 20
  DFSMSdfp enhancements 23
  DFSMSdss enhancements 24
  DFSMShsm enhancements 27
  FORCEP parameter 24
  IFGOCETR parameters 21
  IFGOCETR started task 20
  migration considerations 22, 28
  OAM SMF recording enhancements 29
  overview 19
  PDS resource held for output 20
  SMF recording 20
  trace processing 20
OAM SMF recording
  activating 31
  OSREQ macro 31
  SMF record format 29
  SMF record subtypes 29
  TTOKEN keyword 31
OpenEdition access method support 99
OPTIMIZE keyword 61
Optimizer
  HSM monitor/tuner 117
  input data 115
  management class analyzer 117
  overview 115
  performance analyzer 116
OSREQ macro 31

P
Parallel Sysplex
  benefits 139
  definition 139
  exploiters 139
  hardware required 139
PDA trace 85
physical file system 98
Program Management 3 (PM3)
  background 32
  Dynamic Load Libraries 33
  enhancement description 33
  OpenEdition support 33

R
record access bias
  buffer optimization 35
  buffering handling 36
  data class specification 36
  ISMF data class panels 37
  JCL specification 36
record level sharing
  DFSMShsm exploitation 62
RLS KSDS extended addressability
  data class definition 42
  hardware requirements 44
  JCL specification 44
  migration considerations 43
  software prerequisites 43
RMODE31 JCL AMP parameter 38

S
SAM tailored compression 12
SETSYS command
  ABARSDELETEACTIVITY parameter 59
  ABARSOPTIMIZE keyword 61
  ABARSTAPES parameter 58
  ARECOVERTGTGDS parameter 60
  CDSVERSIONBACKUP parameter 66
  DUPLEX parameter 53
  EXITOFF parameter 67
  EXITON parameter 58, 67
  MAXABARSADDRESSSPACE parameter 58
  NOUSERUNITTABLE parameter 71
  SMF parameter 60
  SYS1DUMP parameter 69
  USERUNITTABLE parameter 71
SMARTdata UTILITIES 103
  addressing distributed data 104
  calling the utilities 104
  client/server development 104
  data access functions 105
  data description and conversion 103
  DFM 103

T
tailored compression dictionary token 17
tailored compression, evaluating 14
TAPECOPY processing 53
TGTGDS keyword 60
TTOKEN keyword 31

V
VRSMIN option 92, 93
VSAM
  DCOLLECT enhancements 47
  DS1REFD 41
  enhancements in DFSMS/MVS V1R4 35
  fast load implementation 40
  last reference date at CLOSE 41
  load enhancements 39
  RLS KSDS extended addressability 41
  system-managed buffering 35
VSAM attributes in data class 44
  migration considerations 47
ITSO Redbook Evaluation
DFSMS/MVS V1R4 Technical Guide
SG24-4892-00
Your feedback is very important to help us maintain the quality of ITSO redbooks. Please complete this
questionnaire and return it using one of the following methods:
• Use the online evaluation form found at http://www.redbooks.com
• Fax this form to: USA International Access Code + 1 914 432 8264
• Send your comments in an Internet note to [email protected]
Please rate your overall satisfaction with this book using the scale:
(1 = very good, 2 = good, 3 = average, 4 = poor, 5 = very poor)
Was this redbook published in time for your needs? Yes____ No____
Printed in U.S.A.
SG24-4892-00