Category: architecture

OLR and GPnP Profile

OLR

The location of the OLR (Oracle Local Registry) is recorded in /etc/oracle/olr.loc; the registry itself lives under the Grid home, viz. $GIHOME/cdata/<hostname>.olr.

$ ls -ltr /etc/oracle/olr.loc

$ cat olr.loc
olrconfig_loc=/u01/app/oracle/grid/11.2.0/cdata/<hostname>.olr
crs_home=/u01/app/oracle/grid/11.2.0

The OLR status and configuration can be checked with ocrcheck:

$ ./ocrcheck -local

$ ./ocrcheck -local -config

Oracle Local Registry configuration is :
Device/File Name         : /u01/app/oracle/grid/11.2.0/cdata/<hostname>.olr

 


GPnP

GPnP Daemon (GPnPD)

The GPnP (Grid Plug and Play) profile is a small XML file, maintained on each cluster node by the GPnP daemon, that holds the bootstrap configuration a node needs to join the cluster, such as the cluster name, the public and private network interface definitions, and the ASM discovery string. It is stored at:

GRID_HOME/gpnp/<hostname>/profiles/peer/profile.xml

The profile can be dumped with the gpnptool utility:

[root@host01 peer]# gpnptool get

On Cardinality

In SQL (Structured Query Language), the term cardinality refers to the uniqueness of the data values contained in a particular column (attribute) of a database table. The lower the cardinality, the more duplicated values in the column; a column with the lowest possible cardinality would have the same value in every row. SQL databases use cardinality to help determine the optimal query plan for a given query.

Values of Cardinality

When dealing with columnar value sets, there are three types of cardinality: high-cardinality, normal-cardinality, and low-cardinality.

High-cardinality refers to columns with values that are very uncommon or unique. High-cardinality column values are typically identification numbers, email addresses, or user names. An example would be a USERS table with a column named USER_ID containing unique values from 1 to n. Each time a new user is created in the USERS table, a new number is generated in the USER_ID column to identify that user uniquely. Since the values held in the USER_ID column are unique, this column’s cardinality type is high-cardinality.

Normal-cardinality refers to columns with values that are somewhat uncommon. Normal-cardinality column values are typically names, street addresses, or vehicle types. An example would be a CUSTOMER table with a column named LAST_NAME containing the last names of customers. While some people have common last names, such as Smith, others have uncommon ones, so an examination of all the values in the LAST_NAME column would show “clumps” of names in some places (e.g. many Smiths) surrounded on both sides by long runs of unique values. Since there is a variety of possible values in this column, its cardinality type is normal-cardinality.

Low-cardinality refers to columns with few unique values. Low-cardinality column values are typically status flags, Boolean values, or major classifications such as gender. An example would be a CUSTOMER table with a column named NEW_CUSTOMER containing only two distinct values, Y or N, denoting whether the customer is new. Since there are only two possible values in this column, its cardinality type is low-cardinality.

Reference: http://en.wikipedia.org/wiki/Cardinality_%28SQL_statements%29
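Using the example tables above, the cardinality of each column can be gauged by comparing its distinct-value count to the total row count. A minimal illustrative query, assuming a CUSTOMER table shaped as described:

-- Compare distinct values per column against total rows (illustrative only)
SELECT COUNT(*)                     AS total_rows,
       COUNT(DISTINCT last_name)    AS last_name_values,     -- normal-cardinality
       COUNT(DISTINCT new_customer) AS new_customer_values   -- low-cardinality (Y/N)
FROM   customer;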

About Checkpoints and Commits

What is the checkpoint (CKPT) process?

  • At specific times, all modified database buffers in the SGA are written to the datafiles by DBWn. This event is called a checkpoint. The checkpoint process is responsible for signaling DBWn at checkpoints and updating all the datafiles and control files of the database to indicate the most recent checkpoint.
  • When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.
  • Control files also record information about checkpoints. Every three seconds, the checkpoint process (CKPT) records information in the control file about the checkpoint position in the online redo log. This information is used during database recovery to tell Oracle that all redo entries recorded before this point in the online redo log group are not necessary for database recovery; they were already written to the datafiles.

Further discussion can be found on the AskTom website (“Questions regarding checkpoint, version 8.1.6” – AskTom).
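The checkpoint position that CKPT records can be observed directly. A minimal sketch using the standard dynamic performance views:

-- Checkpoint SCN currently recorded in the control file
SELECT checkpoint_change# FROM v$database;

-- Checkpoint SCN and time recorded in each datafile header (updated by CKPT)
SELECT file#, checkpoint_change#, checkpoint_time
FROM   v$datafile_header;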

MISCONCEPTION: CHECKPOINT NOT COMPLETE – MISGUIDED SOLUTIONS

A common problem for people with very active systems who use filesystems to store their datafiles is the error “Thread cannot allocate new log, sequence ; Checkpoint not complete”. The most commonly recommended remedies are to use larger online redo logs or to add more of them.
Unfortunately, if ‘Checkpoint not complete’ is a chronic problem, neither of these remedies will eliminate it. They may postpone the error, or even reduce its frequency, but the underlying problem will not be solved.
‘Checkpoint not complete’ occurs when an instance fills and switches through all available online redo logs before one checkpoint can complete. Because Oracle must always be able to recover from instance failure from the last checkpoint forward, the rules of recovery prevent an instance from reusing any online redo log that contains changes newer than the last checkpoint.
A checkpoint is the writing of all dirty blocks in the buffer cache to the datafiles as of a particular SCN. In general, if a checkpoint cannot complete in a timely fashion, it is the result of slow I/O to the datafiles. The possible solutions either eliminate the I/O bottleneck or compensate for a slow I/O subsystem.
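The effect can be seen in v$log: a log group whose checkpoint has not yet completed remains in ACTIVE status and cannot be reused. A minimal illustrative query:

-- Log groups still in ACTIVE status are awaiting checkpoint completion
-- and cannot be overwritten
SELECT group#, sequence#, bytes, members, status
FROM   v$log
ORDER  BY sequence#;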

The problem with the recommendation to increase the size or number of redo logs is this: if the rate of DML activity is so high that checkpoints cannot keep up, there is no reason to think that adding more online redo log space will make the I/O subsystem any better able to keep up. It will simply take longer to fill all the online logs; the basic problem of not being able to write out dirty blocks as fast as the database is changing will still be there.
The first step in diagnosing ‘checkpoint not complete’ is to determine whether the problem is chronic. If the error appears in the alert log many times a day, or consistently during peak hours, the problem is chronic. If it appears at the same time every day or every week, or only occasionally, it is not.
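Counting log switches per hour from v$log_history is a quick, hedged way to see whether the instance is cycling through its redo logs at a chronically high rate:

-- Log switches per hour; a sustained high rate during business hours
-- points to a chronic problem
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;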

Non-chronic ‘checkpoint not complete’ probably does not require any re-engineering of the system’s architecture. It is most likely the result of a single application suddenly making a large amount of DML (inserts, updates, deletes) against the database in a short time. The best way to solve this problem is to find out whether the application can reduce its redo generation by performing its changes ‘nologging’: bulk inserts can be done in direct-path (append) mode against NOLOGGING tables and generate no significant redo, and deletes that clear a whole table or a whole class of records can be converted to truncates of the table or of a partition. At the very least, the application can be throttled back to a rate of change that the I/O subsystem can keep up with. Even the crude solution of increasing the number or size of the redo logs may cure sporadic, non-chronic occurrences of ‘checkpoint not complete.’
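A hedged sketch of those redo-reducing techniques (the table names are purely illustrative, and direct-path NOLOGGING loads cannot be recovered from the redo stream, so a backup is needed afterwards):

-- Direct-path load into a NOLOGGING table generates minimal redo
ALTER TABLE stage_orders NOLOGGING;
INSERT /*+ APPEND */ INTO stage_orders
SELECT * FROM orders_src;
COMMIT;

-- Replace a bulk DELETE of an entire table (or partition) with TRUNCATE
TRUNCATE TABLE stage_orders;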
Chronic ‘checkpoint not complete’ is a more complicated problem. It means that, overall, the rate of DML in the instance is higher than the I/O subsystem can support. In systems with chronically slow I/O, application performance is degraded because the buffer cache is not purged of dirty blocks fast enough or frequently enough. Such systems show relatively long time_waited for the ‘buffer busy waits’ and ‘write complete waits’ events in v$system_event. The solution is either to compensate by making checkpointing more aggressive, or to address the root cause by making the I/O more efficient.
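Those waits can be checked with a simple query (event names as they appear in v$system_event):

-- Large time_waited values here suggest dirty blocks are not being
-- written out fast enough
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('buffer busy waits', 'write complete waits')
ORDER  BY time_waited DESC;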

To understand the solution to this problem, it is first necessary to understand something about how checkpoints work.  When a periodic checkpoint is being performed, a certain portion of the database writer’s capacity, or “write batch,” is made available for the checkpoint to use.  If the checkpoint can’t complete in time, it is valid to infer that Oracle is not using enough of the database writer’s write batch for the checkpoint, and that it should probably use a larger portion.  Note that none of this has anything to do with the CKPT background process.  Checkpoints are performed by the database writer.  CKPT just relieves the log writer from updating file header SCNs when checkpoints complete.

In Oracle8, a new feature, sometimes called “incremental checkpoint” or “fast-start instance recovery,” was introduced. It is enabled with the initialization parameter FAST_START_MTTR_TARGET in 9i (FAST_START_IO_TARGET in 8i) and completely changes Oracle’s checkpointing behavior. Instead of performing large checkpoints at periodic intervals, the database writer tries to keep the number of dirty blocks in the buffer cache low enough to guarantee rapid recovery in the event of a crash, and it frequently updates the file headers to reflect the fact that there are no dirty buffers older than a particular SCN. If the number of dirty blocks starts to grow too large, a greater portion of the database writer’s write batch is given over to writing those blocks out. Using FAST_START_MTTR_TARGET is one way to avoid ‘checkpoint not complete’ while living with a chronically slow I/O subsystem.
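A minimal sketch of enabling incremental checkpointing in 9i and watching its effect (the 90-second target is only an example value):

-- Ask the instance to keep crash-recovery time to roughly 90 seconds
ALTER SYSTEM SET fast_start_mttr_target = 90;

-- Estimated recovery time versus the target, plus checkpoint write activity
SELECT target_mttr, estimated_mttr, ckpt_block_writes
FROM   v$instance_recovery;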
In Oracle7, although there is no incremental checkpoint feature, there is an “undocumented” initialization parameter that can be set to devote a larger portion of the write batch to checkpoints while they are in progress. The parameter is _DB_BLOCK_CHECKPOINT_BATCH, and to set it sensibly you need to find out the size, in blocks, of the write batch and the current checkpoint batch. These values can be obtained from the internal memory structure x$kvii.
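A hedged look at those internals (x$ structures are undocumented, can change between releases, and are visible only when connected as SYS; the column names below are as commonly reported for x$kvii):

-- List the fixed internal values; look for the entries describing the
-- DBWR write batch and checkpoint batch sizes
SELECT kviitag, kviidsc, kviival
FROM   x$kvii;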
Another way to compensate for slow I/O is to increase the number of database writers. By dedicating more processes to writing blocks out, it may be possible for checkpoints to keep up with the rate of DML activity on the database. Bear in mind that certain filesystems, such as AdvFS on Compaq Tru64 Unix, obtain no benefit from multiple database writers: such filesystems exclusively lock a file for write whenever any block is written to it, causing the database writers to queue up behind each other while waiting to write blocks to a particular file. Oracle has published notes on Metalink about such filesystems.
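Additional database writers are configured with the DB_WRITER_PROCESSES parameter (the value 4 is only an example; the parameter is static, so it requires an spfile change and an instance restart):

-- Start four DBWn processes at the next instance startup (example value)
ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;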
If you are more inclined to address the root cause of the problem than to compensate for it, then there are a few measures you can take.  Oracle supports asynchronous I/O on most platforms, if datafiles are stored in raw or logical volumes.  Conversion to raw or LVs requires significant engineering, but is much easier than totally replacing the storage hardware.  Using asynchronous I/O also relieves the aforementioned file-locking bottleneck on certain types of filesystems.
I/O hardware upgrade or replacement is the most complex approach to solving the problem of slow I/O.  Using RAID disk arrays allows data to be “striped” across many disks, allowing a high rate of write-out. Caching disk controllers add a battery-protected cache for fast write-out of data.
Reference: “Misconceptions on Checkpoints”

What happens when a transaction commits:

When a transaction is committed, the following occurs:
1. The internal transaction table for the associated rollback segment records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
2. The log writer process (LGWR) writes redo log entries in the SGA’s redo log buffers to the online redo log file. It also writes the transaction’s SCN to the online redo log file. This atomic event constitutes the commit of the transaction.
3. Oracle releases locks held on rows and tables.
4. Oracle marks the transaction complete.
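The SCN assigned at commit can be observed with DBMS_FLASHBACK (the ACCOUNTS table is hypothetical and used only to illustrate the sequence of events):

SELECT dbms_flashback.get_system_change_number AS scn_before FROM dual;

UPDATE accounts SET balance = balance - 100 WHERE account_id = 42;
COMMIT;  -- LGWR writes the redo entries and the commit SCN to the online redo log,
         -- then the row and table locks are released

-- The commit was assigned an SCN between scn_before and scn_after
SELECT dbms_flashback.get_system_change_number AS scn_after FROM dual;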

Other references:
Oracle 9.2 Concepts Documentation