Oracle9i Real Application Clusters Deployment and Performance
Release 2 (9.2)

Part Number A96598-01
A
Configuring Multi-Block Lock Assignments (Optional)

This appendix explains how to configure locks that manage access to multiple blocks. Refer to this appendix only in the rare circumstances in which you need to override the default resource control scheme of Oracle Real Application Clusters, as performed by the Global Cache Service (GCS) and the Global Enqueue Service (GES). The topics in this appendix are:

Before You Override the Global Cache and Global Enqueue Service Resource Control Mechanisms

The default scheme provides exceptional performance for all system types in all Real Application Clusters environments. In addition, manually assigning locks requires extra administrative effort. Therefore, using the default scheme is preferable to performing the tasks required to override it as described in this appendix.


Note:

Only use the information in this appendix for exceptional cases. An example of this is an application whose data access patterns are almost exclusively reads.


Deciding to Override Global Cache and Global Enqueue Service Processing

Cache Fusion provides exceptional scalability and performance using cache-to-cache transfers of data that is not cached locally. In other words, before an instance reads a data block from disk, Oracle attempts to obtain the requested data from another instance's cache. If the requested block exists in another cache, then the data block is transferred across the interconnect from the holding instance to the requesting instance.

Real Application Clusters' resource control scheme guarantees the integrity of changes to data made by multiple instances. By default, each data block in an instance's buffer cache is protected by the Global Cache Service. The GCS tracks the access modes, roles, privileges, and states of these resources.

In rare situations, you may want to override the GCS and the GES by configuring multi-block locks, where one lock covers multiple data blocks in a file. If blocks are frequently accessed from the same instance, or if blocks are accessed from multiple nodes but in compatible modes, such as shared mode for concurrent reads, then assigning multi-block locks may improve performance.

To do this, set the GC_FILES_TO_LOCKS parameter to specify the number of locks that Oracle uses for particular files. The syntax of the parameter also enables you to specify lock allocations for groups of files as well as the number of contiguous data blocks to be covered by each lock. If you choose values for GC_FILES_TO_LOCKS indiscriminately, then adverse performance, such as excessive forced disk writes, can result. Therefore, only set GC_FILES_TO_LOCKS for:

When to Use Locks

Using multiple locks for each file can be useful for the types of data shown in Table A-1.

Table A-1 When to Use Locks

Situation: The composition of the data is mostly read-only.

Reason: A few locks can cover many blocks without requiring frequent lock operations. These locks are released only when another instance needs to modify the data. Assigning locks can result in better performance on read-only data with parallel execution processing. If the data is strictly read-only, then consider designating the tablespace as read-only (an example statement follows this table).

Situation: A large amount of data is modified by a relatively small set of instances.

Reason: Lock assignments permit access to a block that is not cached locally to proceed without further Global Cache Service activity, provided the lock covering the block is already held by the requesting instance.

Using multi-block locks can also cause additional cross-instance cache management activity, because conflicts can occur between instances that modify different database blocks covered by the same lock. Resolving false forced disk writes or excessive forced disk writes can require writing several blocks from the cache of the instance that currently owns access to the blocks.
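If you do designate such a tablespace as read-only, the statement is straightforward. The following is a minimal sketch; the tablespace name app_data is an illustrative assumption, not taken from this appendix:

   -- Hypothetical tablespace name; issue this only when no further writes are needed.
   ALTER TABLESPACE app_data READ ONLY;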

Setting GC_FILES_TO_LOCKS

Set the GC_FILES_TO_LOCKS initialization parameter to specify the number of locks covering data blocks in a datafile or set of datafiles. This section covers:

GC_FILES_TO_LOCKS Syntax

The syntax of the GC_FILES_TO_LOCKS parameter enables you to specify the relationship between locks and files. The syntax is as follows, and the meanings of its variables are shown in Table A-2:

GC_FILES_TO_LOCKS="{file_list=#locks[!blocks] [EACH][:]} . . ." 

Table A-2 GC_FILES_TO_LOCKS Variables and their Meanings
Variable Meaning

file_list

file_list specifies a single file, range of files, or list of files and ranges as follows: fileidA[-fileidC][,fileidE[-fileidG]] ...

Query the data dictionary view DBA_DATA_FILES to find the correspondence between file names and file ID numbers (an example query follows this table).

#locks

Sets the number of locks to assign to file_list.

!blocks

Specifies the number of contiguous data blocks to be covered by each lock; also called the blocking factor.

EACH

Specifies #locks as the number of locks to be allocated to each file in file_list.
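For example, the following query lists the file ID for each datafile. It is a simple sketch that uses the standard FILE_ID and FILE_NAME columns of DBA_DATA_FILES:

   -- List datafiles with the IDs used in GC_FILES_TO_LOCKS
   SELECT FILE_ID, FILE_NAME
   FROM DBA_DATA_FILES
   ORDER BY FILE_ID;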


Note:

All instances must have identical values for GC_FILES_TO_LOCKS. Also, do not use spaces within the quotation marks of the GC_FILES_TO_LOCKS parameter syntax.


The default value for !blocks is 1. When you specify blocks, each of the #locks locks covers that many contiguous data blocks. To specify a value for blocks, use the exclamation point (!) separator. Specify blocks, and omit the EACH keyword, primarily to allocate sets of locks that cover multiple datafiles.

Always set the !blocks value if you have also used the optional free list groups, to avoid interfering with the data partitioning that free list groups provide. In this case you normally do not need to preallocate extents: when a row is inserted into a table and Oracle allocates new extents, Oracle allocates the number of contiguous blocks specified with !blocks in GC_FILES_TO_LOCKS to the free list group associated with an instance.
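For example, if extents for an instance's free list group are allocated in units of 16 contiguous blocks, a setting such as the following keeps each lock aligned with one 16-block group. The file ID and lock count shown here are illustrative assumptions only:

GC_FILES_TO_LOCKS = "7=200!16"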

Lock Assignment Examples

For example, you can assign 300 locks to file 1 and 100 locks to file 2 by adding the following line to your initialization parameter file:

GC_FILES_TO_LOCKS = "1=300:2=100"

The following entry specifies a total of 1500 locks: 500 each for files 1, 2, and 3:

GC_FILES_TO_LOCKS = "1-3=500EACH"

By contrast, the following entry specifies a total of only 500 locks spread across the three files:

GC_FILES_TO_LOCKS = "1-3=500"

The following entry indicates that Oracle should use 1000 distinct locks to protect file 1. The data in the file is protected in groups of 25 contiguous blocks:

GC_FILES_TO_LOCKS = "1=1000!25"

If you define a datafile with the AUTOEXTEND clause, or if you issue an ALTER DATABASE DATAFILE ... RESIZE statement, then you may also need to adjust the lock assignment.
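For example, the following statement resizes a datafile; the file name and size are illustrative assumptions only. After such a change, review the corresponding entry in GC_FILES_TO_LOCKS before the next restart:

   ALTER DATABASE DATAFILE '/u02/oracle/rac/users01.dbf' RESIZE 500M;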

When you add new datafiles, decide whether these new files should be subject to the default control of the GCS or whether you want to assign locks using the GC_FILES_TO_LOCKS initialization parameter.

The following examples show different methods of mapping blocks to locks and how the same locks are used on multiple datafiles.

Figure A-1 Mapping Locks to Data Blocks



Example 1

Figure A-1 shows an example of mapping blocks to locks for the parameter value GC_FILES_TO_LOCKS = "1=60:2-3=40:4=140:5=30".

In datafile 1 shown in Figure A-1, 60 locks map to 120 blocks, which is a multiple of 60. Each lock covers two data blocks.

In datafiles 2 and 3, 40 locks map to a total of 160 blocks. A lock can cover either one or two data blocks in datafile 2, and two or three data blocks in datafile 3. Thus, one lock can cover three, four, or five data blocks across both datafiles.

In datafile 4, each lock maps exactly to a single data block, since there is the same number of locks as data blocks.

In datafile 5, 30 locks map to 170 blocks, which is not a multiple of 30. Each lock therefore covers five or six data blocks.

Each lock illustrated in Figure A-1 can be held in either shared read mode or read-exclusive mode.

Example 2

The following parameter setting allocates 500 locks to datafile 1; 400 locks each to files 2, 3, 4, 10, 11, and 12; 150 locks to file 5; 250 locks to file 6; and 300 locks collectively to files 7 through 9:

GC_FILES_TO_LOCKS = "1=500:2-4,10-12=400EACH:5=150:6=250:7-9=300" 

This example assigns a total of (500 + (6*400) + 150 + 250 + 300) = 3600 locks. You can specify more than this number of locks if you add more datafiles.

Example 3

In Example 2, 300 locks are allocated to datafiles 7, 8, and 9 collectively with the clause "7-9=300"; the keyword EACH is omitted. If each of these datafiles contains 900 data blocks, for a total of 2700 data blocks, then each lock covers nine data blocks. Because the number of data blocks in each datafile is a multiple of 300, each lock covers three data blocks in each datafile.

Example 4

The following parameter value allocates 200 locks each to files 1 through 3; 50 locks to datafile 4; 100 locks collectively to datafiles 5, 6, 7, and 9; and 20 locks in contiguous 50-block groups to datafiles 8 and 10 combined:

GC_FILES_TO_LOCKS = "1-3=200EACH 4=50:5-7,9=100:8,10=20!50" 

In this example, a lock assigned to the combined datafiles 5, 6, 7, and 9 covers one or more data blocks in each datafile, unless a datafile contains fewer than 100 data blocks. If datafiles 5 to 7 contain 500 data blocks each and datafile 9 contains 100 data blocks, then each lock covers 16 data blocks: one in datafile 9 and five each in the other datafiles. Alternatively, if datafile 9 contained 50 data blocks, half of the locks would cover 16 data blocks (one in datafile 9); the other half of the locks would only cover 15 data blocks (none in datafile 9).

The 20 locks assigned collectively to datafiles 8 and 10 cover contiguous groups of 50 data blocks. If the datafiles contain multiples of 50 data blocks and the total number of data blocks is not greater than 20 times 50, that is, 1000, then each lock covers data blocks in either datafile 8 or datafile 10, but not in both. This is because each of these locks covers 50 contiguous data blocks. If the size of datafile 8 is not a multiple of 50 data blocks, then one lock must cover data blocks in both files. If the sizes of datafiles 8 and 10 exceed 1000 data blocks, then some locks must cover more than one group of 50 data blocks, and the groups might be in different files.

Example 5

GC_FILES_TO_LOCKS="1-2=4"

In this example, four locks are specified for files 1 and 2. Therefore, the number of blocks covered by each lock is eight ((16+16)/4). The blocks are not contiguous.

Figure A-2 GC_FILES_TO_LOCKS Example 5



Example 6

GC_FILES_TO_LOCKS="1-2=4!8"

In this example, four locks are specified for files 1 and 2. However, each lock covers eight contiguous blocks.

Figure A-3 GC_FILES_TO_LOCKS Example 6



Example 7

GC_FILES_TO_LOCKS="1-2=4!4EACH"

In this example, four locks are specified for file 1 and four for file 2. Each lock covers four contiguous blocks.

Figure A-4 GC_FILES_TO_LOCKS Example 7



Example 8

GC_FILES_TO_LOCKS="1=4:2=0"

In this example, file 1 has multi-block lock control with four locks. File 2, with a lock count of 0, remains under the default per-block resource control of the Global Cache Service.

Figure A-5 GC_FILES_TO_LOCKS Example 8



Additional Considerations for Setting GC_FILES_TO_LOCKS

Setting GC_FILES_TO_LOCKS in Real Application Clusters has further implications. For example, setting it can increase monitoring overhead and you may have to frequently adjust the parameter when the database grows or when you add files. Moreover, you cannot dynamically change the setting for GC_FILES_TO_LOCKS. To change the setting, you must stop the instances, alter the setting, and restart all the instances. In addition, consider the following topics in this section:

Expanding or Adding Datafiles

Sites that run continuously cannot afford to shut down for parameter value adjustments. Therefore, when you use the GC_FILES_TO_LOCKS parameter, remember to provide room for growth or room for files to extend.

You must also carefully consider how you use locks on files that do not grow significantly, such as read-only or read-mostly files. It is possible that better performance would result from assigning fewer locks, each covering multiple blocks. However, if the expected CPU and memory savings due to fewer locks do not outweigh the administrative overhead, use the resource control scheme of the Global Cache and Global Enqueue Services.

Files To Avoid Including in GC_FILES_TO_LOCKS Settings

Never include the following types of files in the GC_FILES_TO_LOCKS parameter list:

Tuning Parallel Execution on Real Application Clusters

To optimize parallel execution in Real Application Clusters environments when you are not using the default resource control scheme, you must set the GC_FILES_TO_LOCKS parameter accurately. Data block address locking in its default behavior assigns one lock to each block. For example, during a full table scan, a lock must be acquired for each block read into the scan. To accelerate full table scans, you can use one of the following three possibilities:

The following guidelines affect memory usage, and thus indirectly affect performance:

Analyzing Real Application Clusters I/O Statistics

If you set GC_FILES_TO_LOCKS, then Cache Fusion is disabled. In this case, you can use three statistics in the V$SYSSTAT view to measure the I/O performance related to global cache synchronization:

DBWR cross-instance writes occur when Oracle resolves inter-instance data block usage by writing the requested block to disk before the requesting node can use it.

Cache Fusion eliminates the disk I/O for current and consistent-read versions of blocks. This can lead to a substantial reduction in physical writes and reads performed by each instance.

Analyzing Real Application Clusters I/O Statistics Using V$SYSSTAT

You can obtain the following statistics to quantify the write I/Os required for global cache synchronization.

  1. Use this syntax to query the V$SYSSTAT view:
       SELECT NAME, VALUE FROM V$SYSSTAT 
       WHERE NAME IN ('DBWR cross-instance writes', 
       'remote instance undo block writes',
       'remote instance undo header writes',
       'physical writes');
    
    
    

    Oracle responds with output similar to:

    NAME                                                           VALUE
    --------------------------------------------------------- ----------
    physical writes                                                41802
    DBWR cross-instance writes                                      5403
    remote instance undo block writes                                  0
    remote instance undo header writes                                 2
    4 rows selected.
    
    
    

    The statistic physical writes refers to all physical writes performed by DBWR from a particular instance. The value for DBWR cross-instance writes accounts for all writes caused by writing a dirty buffer containing a data block that is requested for modification by another instance. Because the DBWR process also performs cross-instance writes, DBWR cross-instance writes are a subset of all physical writes.

  2. Calculate the ratio of Real Application Clusters-related I/O to overall physical I/O using this equation:

    ratio of Real Application Clusters-related I/O to overall physical I/O =
        DBWR cross-instance writes / physical writes
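    As a convenience, the same ratio can be computed in a single query. This is a sketch, not part of the original procedure; it assumes only the statistic names queried in step 1:

        SELECT W.VALUE / DECODE(P.VALUE, 0, NULL, P.VALUE) "RAC WRITE RATIO"
        FROM V$SYSSTAT W, V$SYSSTAT P
        WHERE W.NAME = 'DBWR cross-instance writes'
        AND P.NAME = 'physical writes';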

  3. Use this equation to calculate how many writes to rollback segments occur when a remote instance needs to read from rollback segments that are in use by a local instance:

    rollback segment write ratio =
        (remote instance undo block writes + remote instance undo header writes) / physical writes

    The ratio shows how much disk I/O is related to writes to rollback segments.

  4. To estimate the number or percentage of reads due to global cache synchronization, use the number of lock requests for conversions from NULL(N) to Shared mode (S) counted in V$LOCK_ACTIVITY and the physical reads statistics from V$SYSSTAT.

    The following formula computes the percentage of reads that are only for local work where lock buffers for read represents the N-to-S block access mode conversions:

    percentage of reads for local work =
        ((physical reads - lock buffers for read) / physical reads) * 100


    These so-called forced reads occur when a data block that was previously modified by the local instance had to be written to disk in response to a request from another instance; the local instance then re-acquires the block for a read.
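    To obtain the lock buffers for read value used in this formula, you can read the NULL-to-S conversion count directly from V$LOCK_ACTIVITY. This query is a sketch based on the FROM_VAL and TO_VAL columns used later in this appendix:

        SELECT COUNTER "LOCK BUFFERS FOR READ"
        FROM V$LOCK_ACTIVITY
        WHERE FROM_VAL = 'NULL'
        AND TO_VAL = 'S';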

Monitoring Multi-Block Lock Usage by Detecting False Forced Writes

False forced writes occur when Oracle down-converts a lock that protects two or more blocks if the blocks are concurrently updated from different nodes. Assume that each node is updating a different block covered by the same lock. In this case, each node must write both blocks to disk even though the node is updating only one of them. This is necessary because the same lock covers both blocks.

Statistics are not available to show false forced write activity. To assess false forced write activity you can only consider circumstantial evidence as described in this section.

The following SQL statement computes the forced disk write rate: the ratio of the number of blocks actually written (DBWR cross-instance writes) to the number of lock down-conversions from exclusive mode that can force a write:

   SELECT VALUE/(A.COUNTER + B.COUNTER + C.COUNTER) "PING RATE" 
     FROM V$SYSSTAT, 
       V$LOCK_ACTIVITY A, 
       V$LOCK_ACTIVITY B, 
       V$LOCK_ACTIVITY C 
   WHERE A.FROM_VAL = 'X' 
       AND A.TO_VAL = 'NULL' 
       AND B.FROM_VAL = 'X' 
       AND B.TO_VAL = 'S' 
       AND C.FROM_VAL = 'X' 
       AND C.TO_VAL = 'SSX' 
       AND NAME = 'DBWR cross-instance writes'; 

Table A-3 shows how to interpret the forced disk write rate.

Table A-3  Interpreting the Forced Write Rate
Forced Disk Write Rate Meaning

Less than 1

False forced writes may be occurring, but there are more lock operations than forced disk writes. DBWR is writing blocks quickly enough that some lock operations find the buffer already written and therefore require no write. This is also known as a soft ping, meaning that the forced disk write requires only lock activity, not I/O activity.

Equal to 1

Each lock activity involving a potential write causes the write to occur. False forced writes may be occurring.

Greater than 1

False forced writes are definitely occurring.

Use this formula to calculate the percentage of false forced writes:

percentage of false forced writes =
    ((forced disk write rate - 1) / forced disk write rate) * 100

where forced disk write rate is the PING RATE computed by the previous query.


Then check the total number of writes and calculate the number due to false forced writes:

   SELECT Y.VALUE "ALL WRITES", 
       Z.VALUE "PING WRITES", 
       Z.VALUE * R.RATE "FALSE PINGS" 
   FROM V$SYSSTAT Z, 
       V$SYSSTAT Y, 
       PING_RATE R 
   WHERE Z.NAME = 'DBWR cross-instance writes' 
   AND Y.NAME = 'physical writes'; 

Here, the false forced write rate (the RATE column) comes from the PING_RATE view, which is created by the following SQL statement:

   CREATE OR REPLACE VIEW PING_RATE AS
   SELECT ((VALUE/(A.COUNTER+B.COUNTER+C.COUNTER))-1)/
       (VALUE/(A.COUNTER+B.COUNTER+C.COUNTER)) RATE
   FROM V$SYSSTAT,
       V$LOCK_ACTIVITY A,
       V$LOCK_ACTIVITY B,
       V$LOCK_ACTIVITY C
   WHERE A.FROM_VAL = 'X'
       AND A.TO_VAL   = 'NULL'
       AND B.FROM_VAL = 'X'
       AND B.TO_VAL   = 'S'
       AND C.FROM_VAL = 'X'
       AND C.TO_VAL   = 'SSX'
       AND NAME = 'DBWR cross-instance writes';

The goal is not only to reduce overall forced disk writes, but also to reduce false forced writes. To do this, look at the distribution of instance locks in GC_FILES_TO_LOCKS and check the data in the files.
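For example, to see which segments reside in a file whose lock distribution you are reconsidering, you can query DBA_EXTENTS. This is a sketch; the file ID 8 is an illustrative assumption:

   -- Show the segments and block ranges stored in one datafile
   SELECT SEGMENT_NAME, SEGMENT_TYPE, BLOCK_ID, BLOCKS
   FROM DBA_EXTENTS
   WHERE FILE_ID = 8
   ORDER BY BLOCK_ID;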

Lock Names and Lock Formats

This section describes lock names and lock name formats. The topics in this section are:

Lock Names and Lock Name Formats

Internally, Oracle global lock names use one of the following formats; the parameters are described in Table A-4:

For example, a space management lock might be named ST00. A buffer cache management lock might be named BL 1 900.

The clients of the lock manager define the lock type, for example, BL for a buffer cache management lock, and two parameters, id1 and id2, and pass these parameters to the GCS API to open a lock. The lock manager does not distinguish between different types of locks. Each component of Oracle defines the type and the two parameters for its own needs; in other words, id1 and id2 have a meaning consistent with the requirements of each component.
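For example, you can see the type and the two identifiers of locks currently held or requested by querying the V$LOCK view. This query is a sketch and is not part of the original text; the TM lock type is used only as an illustration:

   -- Show held mode and requested mode for DML enqueues
   SELECT TYPE, ID1, ID2, LMODE, REQUEST
   FROM V$LOCK
   WHERE TYPE = 'TM';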

Lock Names

All Global Cache Service locks are Buffer Cache Management locks, which are of type BL. The syntax of lock names is type ID1 ID2, where:

Examples of lock names are:

Lock Types and Names

There are several different types and names of locks as shown in Table A-5:

Table A-5  Lock Types and Names

Type     Lock Name
CF       Controlfile Transaction
CI       Cross-Instance Call Invocation
DF       Datafile
DL       Direct Loader Index Creation
DM       Database Mount
DX       Distributed Recovery
FS       File Set
IN       Instance Number
IR       Instance Recovery
IS       Instance State
IV       Library Cache Invalidation
KK       Redo Log Kick
L[A-P]   Library Cache Lock
MM       Mount Definition
MR       Media Recovery
N[A-Z]   Library Cache Pin
PF       Password File
PR       Process Startup
PS       Parallel Execution Process Synchronization
Q[A-Z]   Row Cache
RT       Redo Thread
SC       System Change Number
SM       SMON
SN       Sequence Number
SQ       Sequence Number Enqueue
ST       Space Management Transaction
SV       Sequence Number Value
TA       Transaction Recovery
TM       DML Enqueue
TS       Temporary Segment (also Table-Space)
TT       Temporary Table
TX       Transaction
UL       User-Defined Locks
UN       User Name
WL       Being Written Redo Log
XA       Instance Registration Attribute Lock
XI       Instance Registration Lock


Copyright © 1999, 2002 Oracle Corporation. All Rights Reserved.