
MVS Systems Programming: Chapter 3c - MVS Internals

http://www.mvsbook.fsnet.co.uk/chap03c.htm


The Web version of this chapter is split into four pages; this is page 3, which covers:

Section 3.6 - Serialisation
  3.6.1 Introduction
  3.6.2 Running Disabled
  3.6.3 The ENQ/DEQ mechanism
  3.6.4 Locks
  3.6.5 Intersects
Section 3.7 - Program Management
  3.7.1 Program Fetch
  3.7.2 Program Modes, Attributes, and Properties
  3.7.3 The Linklist and LLA

3.6 Serialisation
3.6.1 Introduction
One potential problem in a multiprogramming operating system (i.e. one which can interleave multiple units of work, running them all concurrently) is the danger of two units of work attempting to update the same resource at the same time, or attempting to use a process which can only handle one requestor at a time. This could lead to serious integrity problems if it were allowed to occur. Imagine, for example, two tasks attempting to update the same record of a dataset at the same time. Each would read the record, update its copy in storage, then write its updated copy back to the dataset. The second updated copy to be written back, however, would overwrite the first one, and the first update would be lost. (The sketch below illustrates this race.)

MVS provides several mechanisms to prevent such problems. These are:

* running disabled for interrupts
* the ENQ/DEQ mechanism
* the lock mechanism
* the intersect mechanism

The following sections will cover each of these in turn.
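As a concrete illustration of the lost-update race, here is a minimal sketch in C using POSIX threads. It is not MVS code - the shared integer simply stands in for the dataset record, and the two threads stand in for the two tasks; all names are illustrative.

    /* Two threads each read a shared "record", update a private copy,
     * and write it back with no serialisation.  Updates are lost when
     * one write-back overwrites the other.  Compile with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    static int record = 0;          /* stands in for the dataset record */

    static void *updater(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            int copy = record;      /* read the record                  */
            copy = copy + 1;        /* update the private copy          */
            record = copy;          /* write back - may clobber the     */
                                    /* other task's update              */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, updater, NULL);
        pthread_create(&b, NULL, updater, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* With proper serialisation this would always be 200000;
         * without it, it is usually less. */
        printf("record = %d\n", record);
        return 0;
    }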

3.6.2 Running Disabled


We have already come across this in the hardware chapter and the section above on I/O processing. Whenever an interrupt is accepted by a processor, it stores the current PSW in the old PSW location for the interrupt type concerned, moves the corresponding new PSW into the current PSW, and allows execution to resume. It is clear that access to the old PSW must be serialised - if another interrupt came along and overwrote the same old PSW before the interrupt handler had stored it away somewhere more permanent, then the status of the task that was interrupted the first time would be lost forever!

Serialisation of the old PSW is achieved by disabling the processor for further interrupts of the same type before allowing the interrupt handler to start executing. This is done by setting a bit in the PSW corresponding to the type of interrupt to be disabled. If any further interrupts of the same type occur while this bit is set, the processor will not accept them until the bit is turned off again. The interrupt handler must store the old PSW along with the rest of the interrupted task's execution environment (registers etc.) before re-enabling the processor for interrupts of the same type.


Disabling the processor is an effective mechanism for serialising interrupt processing, but it would be impractical for other types of serialisation. It is essentially a hardware mechanism which is suitable only for controlling hardware events. It relates to an individual processor, as the PSW bits which it uses to control serialisation are kept in a register of the processor concerned, so it is not suitable for serialising resources which are shared between multiple processors. And it requires a dedicated bit in the PSW for each resource requiring serialisation, which would be totally impractical for serialising access to resources such as datasets, of which there could be tens of thousands in your installation, and which have unpredictable names - how would the system know which bit related to which dataset?

3.6.3 The ENQ/DEQ mechanism


Serialisation on datasets and most other resources used by applications is done using the ENQ and DEQ macro instructions. These invoke the services of the MVS component known as GRS (Global Resource Serialisation), which has its own address space running constantly. The ENQ/DEQ mechanism itself is completely independent of the resource being serialised and of the process of using it. In other words, there is nothing to prevent a user accessing the resource without going through the ENQ/DEQ mechanism, and serialisation depends on each user following the conventions governing the use of ENQ/DEQ. In many cases, the MVS routines you use to access a resource will themselves invoke ENQ/DEQ for you, thus preventing the user from accessing the resource outside the serialisation mechanism, but it should still be clear that the mechanism is separate from the actual process of accessing the resource. This means that the same mechanism can be used for serialising access to a wide variety of resources which are accessed in many different ways.

The basic convention is that each resource to be serialised has a name associated with it, and every user of the resource must serialise on the same name, using the ENQ macro instruction, and release the resource when it is finished with it, using the DEQ macro instruction. The name of each resource consists of two parts - the queue name (QNAME), which describes the type of resource, and the resource name (RNAME), which relates to the particular resource to be used. So, for example, whenever MVS performs dataset allocation, it serialises access to the dataset by issuing an ENQ with QNAME = SYSDSN and RNAME = datasetname.

GRS builds a set of control blocks corresponding to each ENQ, and uses these to determine whether future ENQ requests for the same resource should be allowed to proceed or not. For each resource with an outstanding ENQ, GRS constructs a QCB (Queue Control Block), containing among other things the QNAME and the RNAME of the resource. For each outstanding requestor of the resource, GRS chains a QEL (Queue Element) control block behind the corresponding QCB. All these control blocks are held in GRS's private address space. Figure 3.6 shows the structure of the queues.

Each ENQ request can have a "scope" of SYSTEMS, SYSTEM, or STEP. SYSTEMS means the resource is to be serialised across all MVS systems known to (and communicating with) the GRS address space on the current system. This is discussed in more detail in Chapter 14. SYSTEM means the resource is to be serialised across all address spaces on the current MVS system, and STEP means the resource is to be serialised only within the current address space. The scope should correspond to the usage of the resource - if a dataset is shared between multiple MVS systems, the scope of an ENQ for it should be SYSTEMS, for example, to ensure that requestors of the same resource on another system sharing the dataset are not able to ignore the first requestor. Similarly, if a control block exists only within a given address space and can only be used by that address space, an ENQ for it should only have a scope of STEP, or the ENQ could hold up requestors of a control block with the same name in a different address space!

Each ENQ must also specify whether the requestor requires SHARED or EXCLUSIVE control of the resource. Usually, requestors who only need read access to a resource will ENQ it as SHARED, while those who need to update it will ENQ it with EXCLUSIVE control. GRS will allow multiple requestors to hold the same resource with SHARED control, but only one to hold it with EXCLUSIVE control. For example, if you specify DISP=SHR on a JCL DD statement for a dataset, MVS will ENQ it for SHARED access at allocation time, but if you specify DISP=OLD, MVS will ENQ it for EXCLUSIVE access at allocation time. As a result, only one job at a time can allocate a given dataset with DISP=OLD, whereas many can allocate it concurrently with DISP=SHR.


The action GRS takes in response to an ENQ macro is as follows:

* check the existing QCBs in its address space to see if there is an outstanding ENQ for this resource
* if there is not, create a QCB for the resource and add it to the QCB chain, then create a QEL for this request and add it to the queue for this QCB, and finally return control to the task issuing the ENQ
* if there is already a QCB for this resource, add a QEL representing this request to the queue for the resource, then check to see if the request can be satisfied
* if the request is for exclusive control and there is already a QCB for the resource (and therefore a QEL representing an outstanding ENQ), then the requestor will be suspended, and GRS will not return control to it until the request reaches the top of the queue
* if the request is for shared control and there is a QCB for the resource and a QEL representing an outstanding ENQ for exclusive control, then this requestor will also be suspended (whether the previous request for exclusive control has been granted yet or not) until all previous requests for exclusive control have been released with DEQ instructions
* if the request is for shared control and there is a QCB but all QELs on the queue represent requests for shared control, then this requestor will be allowed to proceed, and GRS will return control to it.

When the DEQ macro is issued, GRS will:

* delete the QEL representing the request
* if this is the only QEL for the QCB concerned, delete the QCB
* if there are other QELs for this QCB, rebuild the QEL chain without the deleted QEL
* if there are other QELs for this QCB which were suspended but can now be released, allow the tasks owning them to resume execution

This is an effective method of serialising access to a wide range of resources, but there are a couple of potential problems which make it unsuitable for serialising on vital system resources. One is simply the inefficiency of holding up shared requests when there are exclusive requests above them in the queue, even though all the current holders of the resource hold only shared control. Thus, one user mistakenly specifying DISP=OLD on a widely shared dataset (e.g. an ISPF dataset) can cause many others to grind to a halt, with little hope of the exclusive request ever getting to the top of the queue itself. This can only be dealt with by cancelling the user concerned and ensuring they do not make the same mistake again.

More insidious, however, is the case of the "deadly embrace". This can occur if task A gains exclusive control of resource X, then attempts to gain control of resource Y, while task B has already gained exclusive control of resource Y and is now attempting to gain control of resource X. Similar problems can occur with a chain of tasks making interrelated requests, but the two-task two-resource version is the commonest. In this case it is logically impossible for either task ever to resume, as they will both wait indefinitely for each other. The only solution once again is to cancel one of the tasks and try to prevent the recurrence of the problem. In practice, this can usually be done by amending the program code concerned so that all tasks which will attempt to gain control of the same group of resources always do the ENQs for the different resources in the same order. Thus, in our example above, if both tasks attempted to ENQ on X first and then Y, the deadly embrace would never occur.
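The grant rules above can be made concrete with a short sketch in C. This is a model of the logic GRS applies, not actual GRS code; the structure layouts and the can_run function are illustrative assumptions.

    /* Requests queue in arrival order as QELs behind the QCB for a
     * resource.  A request may run only if no earlier QEL conflicts
     * with it: shared requests conflict only with earlier exclusive
     * QELs, while an exclusive request conflicts with any earlier QEL. */
    #include <stdbool.h>

    typedef struct qel {
        bool        exclusive;  /* EXCLUSIVE or SHARED request        */
        struct qel *next;       /* next requestor on this QCB's queue */
    } QEL;

    typedef struct qcb {
        char qname[8];          /* e.g. "SYSDSN  "                    */
        char rname[44];         /* e.g. the dataset name              */
        QEL *queue;             /* first (oldest) outstanding request */
    } QCB;

    /* May the request represented by 'req' proceed yet? */
    static bool can_run(const QCB *qcb, const QEL *req)
    {
        for (const QEL *q = qcb->queue; q != NULL && q != req; q = q->next) {
            if (req->exclusive)
                return false;   /* any earlier QEL blocks exclusive   */
            if (q->exclusive)
                return false;   /* an earlier exclusive blocks shared */
        }
        return true;            /* only earlier shared QELs, if any   */
    }

Note how the model reproduces the inefficiency described above: a shared request queued behind an exclusive one is blocked even while the current holders are all shared.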
These problems can be resolved for applications, as there are tools available which will display the contention occurring and allow the cancellation of the guilty tasks. For system code, however, which may bring the whole system to a halt when it is forced to wait, long waits and deadlocks are unacceptable - both because of the impact on other users, and because it may be impossible to recover at all without re-IPLing the system (the tools for diagnosing and solving the problem may themselves be locked out).

3.6.4 Locks
Locks are the serialisation mechanism used by MVS for fundamental system modules. They do not suffer from the "deadly embrace" danger, they have a much shorter "path-length" than ENQ/DEQ (i.e. they use fewer instructions, so execute faster), and unlike the process of disabling the processor for interrupts, they are capable of serialising access to resources which are shared between multiple processors in a multiprocessor CPC. They are used for serialising access to processes and resources (particularly control blocks) used in storage management (real, virtual, and auxiliary), dispatching, the I/O supervisor, and cross-memory services, among others.


Locks differ from ENQs in that the system does not need to search a chain of control blocks to see if a lock can be satisfied - instead, there is a specific location, known as a lockword, whose contents can be tested to reveal the status of a lock. Lockwords are tested and set by an MVS component known as the Lock Manager, in response to requests made using the SETLOCK macro. There is one lockword for each type of lock, except for locks which control multiple resources of the same type (e.g. UCBs). These are known as "multiple locks", and have one lockword for each occurrence of the resource (in the case of a UCB, for example, the lockword for each UCB is found in an area of storage immediately in front of the UCB concerned).

Each CPU also has a group of bits in its PSA which are used to indicate which locks it currently holds. When a requestor issues the SETLOCK macro, the Lock Manager first checks the bits in the PSA, then accesses the lockword if the bits indicate that the lock request can proceed. The lockwords are in storage which is shared between all processors in the complex, and the value in them is either hex zeros (indicating the lock is not held) or an indicator of the CPU-id of the processor holding the lock. If the lock is not already held by another processor, the Lock Manager will set the value of the requesting CPU in the lockword and return control to the requestor. If the lock is already held, the Lock Manager will force the requestor to wait.

However, there are different types of lock, and the precise action which the Lock Manager takes will depend on the type of lock. For a SUSPEND lock, the Lock Manager will suspend the requestor if the lock is unavailable, which allows other tasks to be dispatched on the processor in the meantime. For a SPIN lock, however, the Lock Manager will place the requestor in a spin loop and disable it for interrupts if the lock is unavailable. This prevents other work from being dispatched - even interrupts - and ensures the requestor obtains the lock as quickly as possible. The Lock Manager also disables the requestor for interrupts when it does obtain a SPIN lock, in order to allow it to release the lock again as quickly as possible. One implication of this is that units of work requesting SPIN locks must ensure they will not encounter page faults during the time they hold the lock (since these cannot be resolved without taking an I/O interrupt), by fixing any pages of virtual storage they will require in real storage before asking for the lock.

Locks are also divided into "local" and "global" locks, analogous to STEP and SYSTEM scope for ENQs - local locks serialise access to resources within an address space, while global locks serialise across the entire system.

The feature of locks which prevents deadlocks occurring is the existence of a hierarchy of locks. If a task has obtained one lock, it may only obtain other locks which are higher in the hierarchy - if it attempts to obtain a lower lock than one it holds already, it is abended. This enforces the procedure recommended above for preventing deadlocks using ENQ - i.e. it ensures that requests for different resources are always made in the same order, so there is no danger of one task asking for lock A then lock B, while another is asking for lock B then lock A. It is the process of setting and checking the lock bits in the PSA which determines whether a higher-level lock is held when a SETLOCK macro is issued. Figure 3.7 shows some of the lock types in hierarchical order.
Note that all the global spin locks appear at the top, followed by global suspend locks and finally local suspend locks (there are no local spin locks).
Lock Name   Category   Type      Resource Serialised
ASM         Global     Spin      Auxiliary Storage Management
RSM         Global     Spin      Real Storage Management
DISP        Global     Spin      Dispatcher (ASVT and dispatching queue)
IOSUCB      Global     Spin      UCB Updates
SRM         Global     Spin      SRM Control Blocks
CMS         Global     Suspend   Cross Memory Services
LOCAL       Local      Suspend   Local storage

Figure 3.7 - Partial Hierarchy of MVS Locks (highest first)
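As an illustration of how a lockword and the per-CPU "locks held" bits interact, here is a sketch in C using C11 atomics as a stand-in for the hardware compare-and-swap. The lock levels follow Figure 3.7, but the data structures and functions are assumptions for illustration, not MVS interfaces.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Lock levels from Figure 3.7, lowest first. */
    enum lock_level { LOCAL, CMS, SRM, IOSUCB, DISP, RSM, ASM, NLOCKS };

    static _Atomic uint32_t lockword[NLOCKS]; /* 0 = free, else CPU id + 1 */
    static uint32_t held_bits[64];            /* per-CPU bits modelling the
                                                 PSA "locks held" bits     */

    /* Obtain a spin lock.  Returns false (real MVS: abends the unit of
     * work) if the hierarchy would be violated. */
    static bool obtain_spin_lock(uint32_t cpu, enum lock_level lvl)
    {
        /* Hierarchy check: in this model, holding any lock at this
         * level or above means the new request must be refused. */
        if (held_bits[cpu] & ~((1u << lvl) - 1u))
            return false;

        uint32_t expected = 0;
        /* Spin until the compare-and-swap installs our CPU id.  In
         * real MVS the processor is also disabled for interrupts. */
        while (!atomic_compare_exchange_weak(&lockword[lvl], &expected,
                                             cpu + 1u))
            expected = 0;  /* CAS overwrote 'expected'; reset it */

        held_bits[cpu] |= 1u << lvl;
        return true;
    }

    static void release_lock(uint32_t cpu, enum lock_level lvl)
    {
        held_bits[cpu] &= ~(1u << lvl);
        atomic_store(&lockword[lvl], 0u);
    }

The hierarchy check comes first, against the cheap per-CPU bits, exactly as described above; only then is the shared lockword touched.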


3.6.5 Intersects
Intersects are used in addition to the lock mechanism to serialise access to the queues used by the MVS dispatcher. They are designed to give the dispatcher itself priority in the use of its queues, but to allow other MVS functions to update those queues when the dispatcher is not using them.


Whenever a function other than the dispatcher wishes to use one of these queues, it must first obtain the relevant lock (the local lock if the queue is one which belongs to a specific address space, such as the TCB queue, or the dispatcher lock for a shared queue, such as the ASCB ready queue). This ensures that only one function other than the dispatcher itself can attempt to access a given queue at any one time. It must then "request an intersect" to find out if the dispatcher is using the queue. If the dispatcher is using the queue, it will have set a bit to indicate this; this bit will be detected by the intersect process, and the requestor will spin until the dispatcher frees the resource by resetting the bit (these bits are in the ASCB for local queues, and the Supervisor Vector Table - SVT - for global queues). Once the requestor can gain control of the resource, it sets the bit itself; once it has completed, it resets the bit, releasing the intersect, and then relinquishes the lock.

Thus the dispatcher need only check/set the intersect bit to obtain control of one of its queues, while any other function must first obtain the relevant lock and then check/set the intersect bit. A sketch of this protocol follows.
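Here the intersect bit is modelled in C as a single atomic test-and-set flag. This is a model of the convention described above, not MVS code; the names are illustrative.

    #include <stdatomic.h>

    /* The intersect bit for one dispatcher queue - kept in the ASCB
     * for a local queue, or in the SVT for a global one. */
    static atomic_flag intersect = ATOMIC_FLAG_INIT;

    /* Both the dispatcher and other MVS functions serialise on the
     * queue by test-and-setting this bit.  The difference is in what
     * the caller must do first: the dispatcher calls this directly,
     * while any other function must already hold the relevant lock
     * (local lock or dispatcher lock), so at most one non-dispatcher
     * requestor can be spinning here at any time. */
    static void obtain_intersect(void)
    {
        while (atomic_flag_test_and_set(&intersect))
            ;   /* spin until the current holder resets the bit */
    }

    /* Resetting the bit releases the intersect; a non-dispatcher
     * function then also relinquishes its lock. */
    static void release_intersect(void)
    {
        atomic_flag_clear(&intersect);
    }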

3.7 Program Management


3.7.1 Program Fetch
Whenever you attempt to load a program via an EXEC statement in your JCL, or via a LOAD, LINK, XCTL, or ATTACH macro, the MVS Program Fetch function is invoked (DFSMS/MVS replaces Program Fetch with a similar function called the Loader - not to be confused with the Loader in previous versions of DFP, which did something similar to the linkage-editor). This does two jobs - it finds the program you want, then, if it is not already present in the virtual storage addressable by the requesting address space, it loads the program into storage. If fetch was invoked via the LOAD macro, it will then return the address of the program it has loaded; if it was invoked using any of the other interfaces listed above, it will continue by passing control to the fetched program.

Fetch finds the requested program by issuing a BLDL SVC. BLDL searches through a hierarchy of locations and libraries until it finds a program with the requested name. This hierarchy is as follows (in the order of search):

* the Job Pack Area (JPA) - this is a notional area of storage within the caller's address space containing programs which have already been loaded and are available for reuse. It is represented by a chain of control blocks known as the JPA Queue, consisting of LLEs (Load List Elements) and CDEs (Contents Directory Entries). There is one LLE for each program which has been loaded, and each LLE has a pointer to a chain of CDEs - one for each name by which the program may be invoked, giving, among other information, the virtual address of the entry point corresponding to that name. If the program is found here, there is no need to reload it, and the fetch process is now complete.

* any tasklibs which have been specified for the current task, in order of their concatenation, followed by any STEPLIBs specified in the JCL in concatenation order, or (if there are no STEPLIBs) by any JOBLIBs specified in the JCL in concatenation order. Unless you have used the LLA (Library Lookaside) facility of MVS/ESA to load the directories of these libraries into virtual storage, BLDL will need to perform physical I/O to read the directories of each of these libraries in order to determine whether the program is there. If the program is found in any of these libraries, it will then be loaded into storage.

* the Link Pack Area (LPA), starting with the Fixed LPA if one is present, then the Modified LPA, and finally the Pageable LPA. The PLPA is an area of common storage which is loaded at IPL time (when you do a cold start, i.e. specify CLPA in your IPL parameters) with all the members of SYS1.LPALIB and any other libraries which are specified in the active LPALSTxx member(s) of SYS1.PARMLIB. The MLPA is loaded at every IPL with those modules listed in the active IEALPAxx member of SYS1.PARMLIB, and is used for temporary changes to the PLPA. The FLPA is also loaded at IPL time, with those modules listed in the active IEAFIXxx member of SYS1.PARMLIB, and contains (antiquated!) modules which must be kept in fixed storage frames (i.e. cannot be paged out). To speed up the process of finding modules in the LPA, there are two control block chains known as the LPA Directory and the LPA Queue. The LPA Directory contains an entry (LPDE - Link Pack Directory Entry) for each module in the PLPA, including its virtual storage address, and the LPA Queue contains an entry (a CDE) for each member of the FLPA and MLPA, and each member of the PLPA which is currently in use. If the program is found in the LPA, it need not be loaded in from disk, as the copy in the LPA can be addressed and used by any address space.

* the last place BLDL looks for the program is the linklist. This is the list of libraries in the LNKLSTxx member of SYS1.PARMLIB which was selected at the last IPL, concatenated behind SYS1.LINKLIB in the order in which they appear in that list. Since the introduction of MVS/XA, searches of linklist libraries have been speeded up substantially by the use of LLA (Linklist Lookaside, known as Library Lookaside under MVS/ESA). LLA is a separate address space, started at IPL time (from IEACMD00), which holds a copy of the directory entries of all the linklist libraries (see below for more details of how the linklist and LLA operate). BLDL can therefore tell whether a requested module exists in any of those libraries, and if so what its physical (CCHHRR) address is on the relevant pack, without performing any I/O to the libraries' directories. Whether or not the directory entries are held in virtual storage, however, physical I/O will usually be required to load the program into the caller's private address space if it is found in a linklist library (unless you are using VLF to manage your linklist libraries - see the section on LLA below).


* if the program is not found in any of these, program fetch will abend the caller with system abend code 806.

A sketch of this search order appears at the end of this section. If the program is found in a tasklib, STEPLIB, JOBLIB, or linklist library, it must now be loaded into virtual storage in the caller's private address space. Fetch has its own IOS driver (see the I/O Management section above) which bypasses EXCP and access methods and optimises the process of loading in the program. The text of the load module is passed to the "relocating loader", which resolves any pointers in the program which are dependent on the virtual address at which it is loaded into storage. Finally the JPA Queue is updated to reflect the addition of the new module, and control is passed to the fetched program (or back to the caller if the fetch was performed in response to a LOAD macro).
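Here is the promised sketch of the BLDL search order in C. All of the function names and types are hypothetical stand-ins for the real control block searches (they are stubbed out so the sketch compiles); the point is only the order in which the levels are tried.

    #include <stddef.h>

    typedef struct {
        const char *name;         /* module name                        */
        void       *entry_point;  /* virtual address of the entry point */
    } Module;

    /* Hypothetical stand-ins for the real searches; each returns NULL
     * when the module is not found at that level. */
    static Module *search_jpa(const char *n)      { (void)n; return NULL; }
    static Module *search_tasklibs(const char *n) { (void)n; return NULL; }
    static Module *search_lpa(const char *n)      { (void)n; return NULL; }
    static Module *search_linklist(const char *n) { (void)n; return NULL; }

    static Module *program_fetch(const char *name)
    {
        Module *m;
        if ((m = search_jpa(name)))      return m; /* already loaded     */
        if ((m = search_tasklibs(name))) return m; /* TASKLIB/STEPLIB/   */
                                                   /* JOBLIB: load copy  */
        if ((m = search_lpa(name)))      return m; /* FLPA, MLPA, PLPA:  */
                                                   /* shared, no load    */
        if ((m = search_linklist(name))) return m; /* via LLA: load copy */
        return NULL;                               /* S806 abend         */
    }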

3.7.2 Program Modes, Attributes, and Properties


There are a number of modes, attributes, and properties which programs may possess which affect the way they are handled by program management and other areas of MVS.

Each program has an addressing mode (AMODE) and residency mode (RMODE) associated with it at link-edit time. The AMODE can be 24, 31, or ANY, and determines whether the program runs in 24-bit addressing mode or 31-bit addressing mode when it is executed. Many programs from the days of MVS Version 1 still execute in 24-bit mode, but programs which need to address storage above the 16 megabyte line must execute in 31-bit mode. During execution, the mode can be changed using the BSM or BASSM instructions, but it is more common for each program to execute in one mode only - the initial mode is set by the fetch process depending on the AMODE of the program. At any time, the current mode is indicated by the high-order bit of the next instruction address in the PSW, which is 1 in 31-bit mode and 0 in 24-bit mode. Similarly, each program is assigned a residency mode (RMODE) at link-edit time, which determines whether FETCH will load it below or above the 16 megabyte line. The RMODE can be 24 or ANY. With RMODE = 24, a program will always be loaded below the 16 megabyte line, and with RMODE = ANY, it may be loaded above or below, depending on the AMODE of the caller.

Program attributes are also assigned at link-edit time, and describe the scope for sharing a single copy of a program between concurrent users. Each program can be non-reusable, reusable, re-entrant, or refreshable:

* non-reusable - the default - simply means that a single copy of the program cannot be shared between multiple concurrent users. Thus, if multiple tasks running in the same address space all wish to load and execute the program, each will have to load its own copy, even though there is already a copy loaded into the address space. This is clearly a wasteful use of virtual storage. However, many batch application programs never need to be shared between multiple tasks concurrently, and so there is no real objection to making these non-reusable. The advantage of making them non-reusable is that they do not have to comply with the rules relating to programs with the other attributes - in particular, they may be self-modifying, and may therefore include work areas within the program itself.

* reusable or "serially reusable" programs can be shared between multiple tasks, but only one task can use a given copy at a time. If multiple tasks in the address space wish to load such a program, only one copy will be loaded, but if multiple tasks wish to execute it at the same time, only the first will be allowed to proceed. The others will be suspended until the first task completes. Serially reusable programs may be self-modifying, but must restore themselves to their initial state before terminating, so that they perform identically for each task. Programs which modify their own machine code are rare these days (though this used to be a frequent trick when storage was at a premium and execution speeds were highly sensitive to the number of machine instructions executed). However, many programs include their own data areas, buffers, and switches, and it is these that usually have to be re-initialised by serially reusable programs before they terminate.

* re-entrant programs are much more acceptable for serious sharing between multiple concurrent users. These programs allow genuine multi-threading - several different tasks can be using the same program concurrently (see the sketch at the end of this section).
Because of this, single copies can serve multiple users simultaneously, and such programs are eligible to be loaded into common areas of storage and shared between users in multiple address spaces. They can be used in the FLPA or MLPA, and they can be loaded into CSA and shared from there (e.g. some third party systems software products give the appearance of dynamic modification to the PLPA by loading modules into the CSA then updating the LPA directory to point to the copy in the CSA instead of the original version in the PLPA proper). To be re-entrant, modules should generally not modify themselves at all - this means that any data areas, switches, buffers, etc. required by the program must usually be placed in separate work areas which are GETMAINed at the beginning of the program and FREEMAINed at the end. Each task using the program therefore has its own copy of the work area. In theory, self-modifying sections are allowed in re-entrant programs as long as they are preceded by an ENQ, followed by a DEQ, and restore the modified section to its original value before issuing the DEQ. These restrictions ensure that only one user can use the modifying section of code at any one time; however, this introduces potential delays for widely-shared programs and is not recommended.

* refreshable programs are very similar to re-entrant ones, but no self-modification whatsoever is allowed. They can be fully shared between multiple tasks in multiple address spaces. Programs to be loaded into the PLPA must be refreshable, because of the way paging works for this area of storage. The copy of the PLPA pages on auxiliary storage (i.e. in the paging datasets) is created at IPL time, and from then on, PLPA pages are never paged out. If the RSM decides to steal a PLPA page, it always does it without a page-out operation, as these areas are never updated, so the original copy in the page dataset is always considered to be usable. The next page-in for the page will always page in the copy which was created at IPL time. As any page of the PLPA could be paged out at any time, it is clear that any modification to it would be lost when it was paged back in. To prevent problems of this sort, PLPA pages are page-protected, so they cannot be updated, and programs to be loaded into the PLPA must be refreshable. If you place non-refreshable programs into the PLPA they will abend 0C4 as soon as they attempt to modify themselves.

Another attribute which can be assigned at link-edit time is APF authorisation, though this is only effective if the library into which the program is linked is also APF authorised. This is discussed in more depth in the Storage Management section earlier in this chapter.

Properties can also be assigned to programs in the MVS Program Properties Table. This can be used to make programs non-swappable or non-cancellable, or to assign special storage keys to them. Prior to MVS 2.2 you had to reassemble the module IEFSD060 to update this table, but you can now specify these properties in the SCHEDxx member of SYS1.PARMLIB which you select at IPL time.
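The re-entrancy distinction above boils down to where a program keeps its working storage. Here is a sketch of that distinction in C - malloc/free stand in for GETMAIN/FREEMAIN, and the function names and work area size are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    /* Non-reusable style: the work area lives inside the program
     * itself, so two tasks executing this one copy concurrently
     * would corrupt each other's state. */
    static char static_workarea[256];

    void format_message_nonreusable(const char *user)
    {
        snprintf(static_workarea, sizeof static_workarea,
                 "Hello, %s", user);
        puts(static_workarea);
    }

    /* Re-entrant style: the program is never modified; each
     * invocation acquires its own work area (GETMAIN in MVS terms,
     * malloc here) and releases it on exit (FREEMAIN/free), so any
     * number of tasks can execute the same copy concurrently. */
    void format_message_reentrant(const char *user)
    {
        char *workarea = malloc(256);   /* per-task work area */
        if (workarea == NULL)
            return;
        snprintf(workarea, 256, "Hello, %s", user);
        puts(workarea);
        free(workarea);
    }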

3.7.3 The Linklist and LLA


As was mentioned earlier, the linklist concatenation is established at IPL time. It consists of SYS1.LINKLIB, followed by the libraries specified in the LNKLSTxx member(s) of SYS1.PARMLIB which were selected in the IPL parameters. This happens early in the IPL process, before any user catalogs are accessible, so only datasets whose catalog entries are in the master catalog may be included in the linklist. They are concatenated in the order in which they appear in the LNKLSTxx members, and a DEB (Data Extent Block) is built describing the concatenation. This contains details of each physical extent allocated to the linklist.

MVS provides no mechanism for changing this DEB, so you cannot usually add any new linklist libraries except by re-IPLing your system. More seriously, if any linklist library extends into secondary extents which were not present at IPL time, those members which are placed in the new extents will not be accessible through the linklist until the next IPL. This can cause serious problems if the library extends, existing members are replaced (so the directory now points to the version of the member in the new extent), and then attempts are made to use the program. Now the old version is inaccessible because the directory points to the new one, and the new one is unusable because it is in an extent which is not in the linklist's DEB. The answer is either to ensure that your linklist libraries are always allocated in single extents (i.e. with zero secondary space specified), or to install a third party software product which allows you to update the linklist DEB without an IPL.

Prior to MVS/XA, linklist datasets were automatically APF authorised, which was a serious security exposure if you wanted to simplify application JCL by placing application program libraries in the linklist. Now, however, it is possible to specify LNKAUTH=APFTAB in the IEASYSxx member of SYS1.PARMLIB, which means that only linklist datasets which are specified in the APF list (IEAAPFxx) will be authorised. This is highly recommended - and update access to all libraries in your APF list should be strictly controlled, as anyone with sufficient knowledge of Assembler who can place a program in an APF authorised library will be able to circumvent MVS security (this is discussed further in chapter 15).

Since the introduction of MVS/XA, there has been an optional MVS facility called Linklist Lookaside (LLA), which was mentioned above. LLA is initialised at IPL time by a "START LLA" command in IEACMD00 unless you remove this command. The only good reason for removing it would be if you were running a third party substitute which provided equivalent function, as LLA gives substantial performance benefits. It does this by keeping copies of the directories of all linklist libraries in its address space, so that when a BLDL is issued against the linklist, the program can be found (or not!) without the need to physically read the directory of each library in the concatenation in turn until the program is found. This eliminates a large amount of I/O against these directories, with performance benefits not only for the task issuing the BLDL but for all other users wishing to access datasets on packs holding linklist datasets.

One interesting side effect, which has probably saved the bacon of several systems programmers, is that if you accidentally delete a linklist library, the system will carry on working quite happily, and carry on finding modules in the deleted library!
This is because the physical addresses of the members which FETCH uses are still held by LLA even though it is no longer possible to open the directory. This only works, of course, until the space is reused by something else, but at least it gives you a breathing space to wriggle your way out!

In other circumstances, however, this "advantage" of LLA can seem more like a problem. The fact that LLA's addresses do not reflect changes to the directories of the linklist datasets means that when you change a module in a linklist library, the system continues to use the old version even though the directory entry has been updated to point to the updated version of the module. To update LLA's copy of the directory entries, you must issue the "F LLA,REFRESH" or "F LLA,UPDATE=xx" console command.

Another similar problem is that posed by the need to compress a linklist library. Under MVS/XA, the only safe way to do this was to stop the LLA address space with a "P LLA" command, compress the library, then restart LLA. Otherwise, the directory entries in LLA for every module moved by the compress operation would be invalid, leading to abends whenever a user attempted to load one of these, until a refresh command was issued.

With MVS/ESA, LLA has been renamed Library Lookaside, as it can now be extended to cover non-linklist libraries, and it now uses an ESA facility called the Virtual Lookaside Facility (VLF) to extend its function to include keeping commonly used load modules in virtual storage, so that fetches for them can be resolved without any disk I/O at all. Refreshes can now be issued for a single member or a single library, and libraries can be dynamically added to or removed from LLA control.

Probably the most interesting extension to LLA, however, is the use of VLF. As LLA resolves BLDL requests, it keeps a record of the most heavily used load modules, and calculates an index known as the "net staging value" for each module, which takes account of how often it is loaded, how large it is, and how great the response time saving arising from keeping it in memory would be. It then "stages" the modules with the highest net staging values into a VLF dataspace, and any future FETCH requests for them will be resolved from here instead of through physical I/O to the dataset on disk. Modules may later be purged from VLF for various reasons - for example, if the dataspace runs short of storage, it will purge the least recently used modules, or if a refresh is issued which covers the module, it will be purged.

LLA now also has two modes - a "freeze" mode and a "nofreeze" mode. Freeze mode is the default for linklist libraries, and works as we have described LLA processing above. Nofreeze mode, however, bypasses LLA directory processing and only does VLF staging for the library concerned. This is useful for libraries which are frequently updated, or updated by application programmers, as it means that it is unnecessary to issue refreshes every time a member of the library is updated. When a FETCH request is issued for a module which is already in the VLF dataspace but whose CCHHRR address has changed, LLA detects that the module has been updated, purges it from VLF, and resolves the FETCH request from disk. This does not work if you update a program in place (e.g. using AMASPZAP), so you still need to issue a refresh command when you do this.
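The invalidation rule just described - serve the staged copy only while its directory address is unchanged - can be sketched as a simple cache in C. This is a conceptual model; the types, cache layout, and CCHHRR comparison are illustrative assumptions, not the real VLF interface.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* One staged module in the (hypothetical) cache. */
    typedef struct {
        char     name[8];   /* module name                    */
        uint64_t cchhrr;    /* directory address when staged  */
        void    *text;      /* staged copy of the load module */
        bool     valid;
    } StagedModule;

    #define CACHE_SIZE 64
    static StagedModule cache[CACHE_SIZE];

    /* Serve from the cache only if the module is staged AND its
     * current directory address still matches the one recorded when
     * it was staged; otherwise purge the stale copy and fall back to
     * disk.  (An in-place zap leaves the address unchanged, which is
     * why that case defeats the check.) */
    static void *fetch(const char *name, uint64_t current_cchhrr)
    {
        for (int i = 0; i < CACHE_SIZE; i++) {
            StagedModule *m = &cache[i];
            if (m->valid && memcmp(m->name, name, 8) == 0) {
                if (m->cchhrr == current_cchhrr)
                    return m->text;  /* resolved with no disk I/O */
                m->valid = false;    /* address changed: purge    */
                break;
            }
        }
        /* Not staged (or just purged): resolve from disk here, and
         * possibly restage if the net staging value justifies it. */
        return NULL;
    }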

This page last changed: 5 July 1998. All text on this site © David Elder-Vass. Please see conditions of use. E-mail comments to: dave@mvsbook.fsnet.co.uk (Please check the FAQs page first!)

None of the statements or examples on this site are guaranteed to work on your particular MVS system. It is your responsibility to verify any information found here before using it on a live system.
