Manually Creating an ASM Instance


Contents
  1. Overview
  2. Configuring Oracle Cluster Synchronization Services (CSS)
  3. Creating the ASM Instance
  4. Identify RAW Devices
  5. Starting the ASM Instance
  6. Verify RAW / Logical Disks Are Discovered
  7. Creating Disk Groups
  8. Using Disk Groups
  9. Startup Scripts


Automatic Storage Management (ASM) is a new feature in Oracle10g that relieves the DBA of having to manually manage and tune the disks used by Oracle databases. ASM provides the DBA with a file system and volume manager that makes use of an Oracle instance (referred to as an ASM instance) and can be managed using either SQL or Oracle Enterprise Manager.
Only one ASM instance is required per node. The same ASM instance can manage ASM storage for all 10g databases running on the node.
When the DBA installs the Oracle10g software and creates a new database, creating an ASM instance is a snap. The DBCA provides a simple check box and an easy wizard to create an ASM instance as well as an Oracle database that makes use of the new ASM instance for ASM storage. But what happens when the DBA is migrating to Oracle10g, or didn't opt to use ASM when a 10g database was first created? The DBA will need to know how to manually create an ASM instance, and that is what this article provides.


Configuring Oracle Cluster Synchronization Services (CSS)
Automatic Storage Management (ASM) requires the use of Oracle Cluster Synchronization Services (CSS), and as such, CSS must be configured and running before attempting to use ASM. The CSS service is required to enable synchronization between an ASM instance and the database instances that rely on it for database file storage.
In a non-RAC environment, the Oracle Universal Installer will configure and start a single-node version of the CSS service. For Oracle Real Application Clusters (RAC) installations, the CSS service is installed with Oracle Cluster Ready Services (CRS) in a separate Oracle home directory (also called the CRS home directory). For single-node installations, the CSS service is installed in and runs from the same Oracle home as the Oracle database.
Because CSS must be running before any ASM instance or database instance starts, Oracle Universal Installer configures it to start automatically when the system starts. For Linux / UNIX platforms, the Oracle Universal Installer writes the CSS configuration tasks to the root.sh script, which is run by the DBA after the installation process.
With Oracle10g R1, CSS was always configured regardless of whether you chose to configure ASM or not. On the Linux / UNIX platform, CSS was installed and configured via the root.sh script. This caused a lot of confusion: many DBAs did not know what this process was, and most of them did not want the CSS process running since they were not using ASM.
Oracle listened carefully to the concerns (and strongly worded complaints) about the CSS process, and in Oracle10g R2 only configures it when it is absolutely necessary. In Oracle10g R2, for example, if you don't choose to configure an ASM stand-alone instance and don't configure a database that uses ASM storage, Oracle will not automatically configure CSS in the root.sh script.
In the case where the CSS process is not configured to run on the node (see above), you can make use of the $ORACLE_HOME/bin/localconfig script on Linux / UNIX or the %ORACLE_HOME%\bin\localconfig.bat batch file on Windows. For example, on Linux, run the following command as root to configure CSS after the fact, outside of the root.sh script:
$ su
# $ORACLE_HOME/bin/localconfig all

/etc/oracle does not exist. Creating it now.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized

Adding to inittab
Startup will be queued to init within 90 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.
        linux3
CSS is active on all nodes.
Oracle CSS service is installed and running under init(1M)
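
As a quick sanity check after the configuration completes, you can verify that the CSS daemon is actually running (the grep pattern below is just one way to do this; you should see the ocssd.bin process owned by the oracle user):

$ ps -ef | grep ocssd | grep -v grep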
Note that if you attempt to configure ASM after the fact, the Database Configuration Assistant (DBCA) detects whether CSS is configured on the node. If it does not detect CSS as configured (and running), the DBCA prompts the user to run 'localconfig add' as necessary.

When performing an Oracle10g Custom install, the issue can become a bit more confusing. During a custom install, Oracle does not ask the database configuration questions during the install itself; it invokes the DBCA at the end of the install in "custom" mode, and the DBCA asks all the questions. As such, at the time Oracle prompts the user to run the root.sh script during a custom install, it does not yet know whether the user will choose to configure ASM. Oracle errs on the side of what the majority of people would do: it does not configure CSS at all in the root.sh script for a custom install, since the majority of users will not be using ASM anyway. Here, Oracle relies on the fact that if CSS is not configured, the DBCA will prompt the user to run 'localconfig add' as root. Once this is done, CSS will be configured and the DBCA will allow the user to proceed with the configuration of ASM.
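
If you do find yourself prompted by the DBCA, the fix is the same localconfig script shown earlier, run as root (the output should resemble the localconfig all run above):

$ su
# $ORACLE_HOME/bin/localconfig add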


Creating the ASM Instance
The following steps can be used to create a fully functional ASM instance named +ASM. The node I am using in this example also has a regular 10g database running named TESTDB. These steps should all be carried out by the oracle UNIX user account:
1.      Create Admin Directories
We start by creating the admin directories from the ORACLE_BASE. The admin directories for the existing database on this node (TESTDB) are located at $ORACLE_BASE/admin/TESTDB. The new +ASM admin directories will be created alongside the TESTDB database:
UNIX
mkdir -p $ORACLE_BASE/admin/+ASM/bdump
mkdir -p $ORACLE_BASE/admin/+ASM/cdump
mkdir -p $ORACLE_BASE/admin/+ASM/hdump
mkdir -p $ORACLE_BASE/admin/+ASM/pfile
mkdir -p $ORACLE_BASE/admin/+ASM/udump
Microsoft Windows
mkdir %ORACLE_BASE%\admin\+ASM\bdump
mkdir %ORACLE_BASE%\admin\+ASM\cdump
mkdir %ORACLE_BASE%\admin\+ASM\hdump
mkdir %ORACLE_BASE%\admin\+ASM\pfile
mkdir %ORACLE_BASE%\admin\+ASM\udump
2.      Create Instance Parameter File
In this step, we will manually create an instance parameter file for the ASM instance. This is actually an easy task, as most of the parameters used for a normal instance do not apply to an ASM instance. Note that accepting the default size for the database buffer cache, shared pool, and most of the other SGA memory structures is fine. The only exception is the large pool; I like to manually set this value to at least 12MB. In most cases, the SGA memory footprint is less than 100MB. Let's start by creating the file init.ora and placing it in $ORACLE_BASE/admin/+ASM/pfile. The initial parameters to use for the file are:
UNIX
$ORACLE_BASE/admin/+ASM/pfile/init.ora
###########################################
# Automatic Storage Management
###########################################
# _asm_allow_only_raw_disks=false
# asm_diskgroups='TESTDB_DATA1'

# Default asm_diskstring values for supported platforms:
#     Solaris (32/64 bit)   /dev/rdsk/*
#     Windows NT/XP         \\.\orcldisk*
#     Linux (32/64 bit)     /dev/raw/*
#     HPUX                  /dev/rdsk/*
#     HPUX(Tru 64)          /dev/rdisk/*
#     AIX                   /dev/rhdisk/*
# asm_diskstring=''

###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=/u01/app/oracle/admin/+ASM/bdump
core_dump_dest=/u01/app/oracle/admin/+ASM/cdump
user_dump_dest=/u01/app/oracle/admin/+ASM/udump

###########################################
# Miscellaneous
###########################################
instance_type=asm
compatible=10.1.0.4.0

###########################################
# Pools
###########################################
large_pool_size=12M

###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive
Microsoft Windows
%ORACLE_BASE%\admin\+ASM\pfile\init.ora
###########################################
# Automatic Storage Management
###########################################
# _asm_allow_only_raw_disks=false
# asm_diskgroups='TESTDB_DATA1'

# Default asm_diskstring values for supported platforms:
#     Solaris (32/64 bit)   /dev/rdsk/*
#     Windows NT/XP         \\.\orcldisk*
#     Linux (32/64 bit)     /dev/raw/*
#     HPUX                  /dev/rdsk/*
#     HPUX(Tru 64)          /dev/rdisk/*
#     AIX                   /dev/rhdisk/*
# asm_diskstring=''

###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\bdump
core_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\cdump
user_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\udump

###########################################
# Miscellaneous
###########################################
instance_type=asm
compatible=10.1.0.4.0

###########################################
# Pools
###########################################
large_pool_size=12M

###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive

After creating the $ORACLE_BASE/admin/+ASM/pfile/init.ora file, UNIX users should create the following symbolic link:
$ ln -s $ORACLE_BASE/admin/+ASM/pfile/init.ora $ORACLE_HOME/dbs/init+ASM.ora


Identify RAW Devices
Before starting the ASM instance, we should identify the RAW device(s) (UNIX) or logical drives (Windows) that will be used as ASM disks. For the purpose of this article, I have four RAW devices set up on Linux:
# ls -l /dev/raw/raw[1234]
crw-rw----  1 oracle dba 162, 1 Jun  2 22:04 /dev/raw/raw1
crw-rw----  1 oracle dba 162, 2 Jun  2 22:04 /dev/raw/raw2
crw-rw----  1 oracle dba 162, 3 Jun  2 22:04 /dev/raw/raw3
crw-rw----  1 oracle dba 162, 4 Jun  2 22:04 /dev/raw/raw4
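
If the raw device bindings do not already exist on your system, they are typically created with the raw(8) utility on this era of Linux and made persistent through /etc/sysconfig/rawdevices. A minimal sketch, assuming four partitions /dev/sdb1 through /dev/sdb4 (illustrative device names; substitute your own):

# raw /dev/raw/raw1 /dev/sdb1
# raw /dev/raw/raw2 /dev/sdb2
# raw /dev/raw/raw3 /dev/sdb3
# raw /dev/raw/raw4 /dev/sdb4
# chown oracle:dba /dev/raw/raw[1-4]
# chmod 660 /dev/raw/raw[1-4]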

Attention Linux Users!
This article does not use Oracle's ASMLib I/O libraries. If you plan on using ASMLib, you will need to install and configure it, and then mark all disks using:
/etc/init.d/oracleasm createdisk <ASM_VOLUME_NAME> <LINUX_DEV_DEVICE>
For more information on using Oracle ASMLib, see "Installing Oracle10g Release 1 (10.1.0) on Linux - (RHEL 4)".

Attention Windows Users!
A task that must be performed by Microsoft Windows users is to tag the logical drives that will be used for ASM storage. This is done using a new utility included with Oracle10g called asmtool, which can be run either before or after creating the ASM instance. asmtool initializes the drive headers and marks drives for use by ASM, which greatly reduces the risk of overwriting a drive that is being used for normal operating system files.
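
As a rough illustration of tagging a logical drive with asmtool (the partition path and label below are made up; consult the Oracle10g Windows platform documentation for the exact syntax on your release):

C:\> asmtool -list
C:\> asmtool -add \Device\Harddisk1\Partition1 ORCLDISKASM0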


Starting the ASM Instance
Once the instance parameter file is in place, it is time to start the ASM instance. It is important to note that an ASM instance never mounts an actual database. The ASM instance is responsible for mounting and managing disk groups.
Attention Windows Users!
If you are running on Microsoft Windows, you will need to manually create a new Windows service to run the new instance. This is done using the ORADIM utility, which allows you to create both the instance and the service in one command.
UNIX
# su - oracle
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"

SQL> startup
ASM instance started

Total System Global Area   75497472 bytes
Fixed Size                   777852 bytes
Variable Size              74719620 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
ORA-15110: no diskgroups mounted

SQL> create spfile from pfile='/u01/app/oracle/admin/+ASM/pfile/init.ora';

SQL> shutdown
ASM instance shutdown

SQL> startup
ASM instance started
Microsoft Windows
C:\> oradim -new -asmsid +ASM -syspwd change_on_install
    -pfile C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora -spfile
    -startmode manual -shutmode immediate

Instance created.

C:\> oradim -edit -asmsid +ASM -startmode a

C:\> set oracle_sid=+ASM
C:\> sqlplus "/ as sysdba"

SQL> startup pfile='C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora';
ASM instance started

Total System Global Area    125829120 bytes
Fixed Size                    769268 bytes
Variable Size               125059852 bytes
Database Buffers                   0 bytes
Redo Buffers                       0 bytes
ORA-15110: no diskgroups mounted

SQL> create spfile from pfile='C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora';
File created.

SQL> shutdown
ASM instance shutdown

SQL> startup
ASM instance started
You will notice that when starting the ASM instance, we received the error:
ORA-15110: no diskgroups mounted
This error can be safely ignored; no disk groups have been created yet.
Notice also that we created a server parameter file (SPFILE) for the ASM instance. This allows Oracle to automatically record new disk group names in the asm_diskgroups instance parameter, so that those disk groups can be automatically mounted whenever the ASM instance is started.
Now that the ASM instance is started, all other Oracle database instances running on the same node will be able to find it.
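
Once a disk group has been created (see the "Creating Disk Groups" section below), you can confirm that it was recorded in the SPFILE by querying the parameter from the ASM instance. The value shown below assumes the TESTDB_DATA1 disk group created later in this article:

SQL> show parameter asm_diskgroups

NAME              TYPE     VALUE
----------------- -------- --------------
asm_diskgroups    string   TESTDB_DATA1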


Verify RAW / Logical Disks Are Discovered
At this point, we have an ASM instance running, but no disk groups to speak of. ASM disk groups are created from RAW (or logical) disks.
Available (candidate) disks for ASM are discovered by means of the asm_diskstring instance parameter. This parameter contains the path(s) that Oracle will use to discover (or see) candidate disks. In most cases you shouldn't have to set this value, as the default is appropriate for each supported platform. The following table lists the default value of asm_diskstring on supported platforms when the instance parameter is NULL (not set):
Operating System          Default Search String
-----------------------   ---------------------
Solaris (32/64 bit)       /dev/rdsk/*
Windows NT/XP             \\.\orcldisk*
Linux (32/64 bit)         /dev/raw/*
HP-UX                     /dev/rdsk/*
HP-UX (Tru64)             /dev/rdisk/*
AIX                       /dev/rhdisk/*
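
If your candidate disks live somewhere other than the platform default, asm_diskstring can be set explicitly, either in the parameter file or dynamically from the ASM instance. A brief sketch (the path below is just an example):

SQL> alter system set asm_diskstring = '/u02/oradata/asmdisks/*' scope=both;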
Recall from the "Identify RAW Devices" section that I have four RAW devices set up on Linux: /dev/raw/raw1 through /dev/raw/raw4.
I now need to determine if Oracle can find these four disks. The view V$ASM_DISK can be queried from the ASM instance to determine which disks are being used or may potentially be used as ASM disks. Note that you must log into the ASM instance with SYSDBA privileges. Here is the query that I ran from the ASM instance:
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"

SQL> SELECT group_number, disk_number, mount_status, header_status, state, path
  2  FROM   v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE    PATH
------------ ----------- ------- ------------ -------- ---------------
           0           0 CLOSED  CANDIDATE    NORMAL   /dev/raw/raw1
           0           1 CLOSED  CANDIDATE    NORMAL   /dev/raw/raw2
           0           2 CLOSED  CANDIDATE    NORMAL   /dev/raw/raw3
           0           3 CLOSED  CANDIDATE    NORMAL   /dev/raw/raw4
Note the value of zero in the GROUP_NUMBER column for all four disks. This indicates that a disk is available but hasn't yet been assigned to a disk group. The next section details the steps for creating a disk group.


Creating Disk Groups
In this section, I will create a new disk group named TESTDB_DATA1 and assign all four discovered disks to it. The disk group will be configured for NORMAL REDUNDANCY, which results in two-way mirroring of all files within the disk group. Within the disk group, I will configure two failure groups, which define two independent sets of disks that should never contain more than one copy of mirrored data (mirrored extents).
For the purpose of this article, it is assumed that /dev/raw/raw1 and /dev/raw/raw2 are on one controller while /dev/raw/raw3 and /dev/raw/raw4 are on another controller. I want to configure the ASM disks so that any data written to /dev/raw/raw1 and /dev/raw/raw2 is mirrored to /dev/raw/raw3 and /dev/raw/raw4. I want ASM to guarantee that data on /dev/raw/raw1 is never mirrored to /dev/raw/raw2, and that data on /dev/raw/raw3 is never mirrored to /dev/raw/raw4. With this type of configuration, I can lose an entire controller and still have access to all of my data. When configuring failure groups, you should put all disks that share a controller (or any resource, for that matter) into their own failure group. If that resource were to fail, you would still have access to the data, as ASM guarantees that mirrored copies of the same data never exist in the same failure group.
The new disk group should be created from the ASM instance using the following SQL:
SQL> CREATE DISKGROUP testdb_data1 NORMAL REDUNDANCY
  2  FAILGROUP controller1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  3  FAILGROUP controller2 DISK '/dev/raw/raw3', '/dev/raw/raw4';

Diskgroup created.

Now, let's take a look at the new disk group and disk details:
SQL> select group_number, name, total_mb, free_mb, state, type
  2  from v$asm_diskgroup;

GROUP_NUMBER NAME             TOTAL_MB    FREE_MB STATE       TYPE
------------ -------------- ---------- ---------- ----------- ------
           1 TESTDB_DATA1          388        282 MOUNTED     NORMAL

SQL> select group_number, disk_number, mount_status, header_status, state, path, failgroup
  2  from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE    PATH            FAILGROUP
------------ ----------- ------- ------------ -------- --------------- ------------
           1           0 CACHED  MEMBER       NORMAL   /dev/raw/raw1   CONTROLLER1
           1           1 CACHED  MEMBER       NORMAL   /dev/raw/raw2   CONTROLLER1
           1           2 CACHED  MEMBER       NORMAL   /dev/raw/raw3   CONTROLLER2
           1           3 CACHED  MEMBER       NORMAL   /dev/raw/raw4   CONTROLLER2
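
Should more storage become available later, disks can be added to the disk group online with ALTER DISKGROUP. A hypothetical example, assuming two new devices /dev/raw/raw5 and /dev/raw/raw6 (these do not exist in my configuration and are shown only for illustration; ASM automatically rebalances data onto the new disks):

SQL> ALTER DISKGROUP testdb_data1
  2  ADD FAILGROUP controller1 DISK '/dev/raw/raw5'
  3  FAILGROUP controller2 DISK '/dev/raw/raw6';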


Using Disk Groups
Finally, let's start making use of the new disk group! Disk groups can be used in place of actual file names when creating database files, redo log members, control files, etc.
Let's now log in to the database instance running on the node that will be making use of the new ASM instance. For this article, I already have a database instance named TESTDB created and running on the node. The database was created using the local file system for all database files, redo log members, and control files:
$ ORACLE_SID=TESTDB; export ORACLE_SID
$ sqlplus "/ as sysdba"

SQL> @dba_files_all

Tablespace Name
File Class           Filename                                                        File Size
-------------------- ---------------------------------------------------------- --------------
SYSAUX               /u05/oradata/TESTDB/datafile/o1_mf_sysaux_19cv6mwk_.dbf       241,172,480
SYSTEM               /u05/oradata/TESTDB/datafile/o1_mf_system_19cv5rmv_.dbf       471,859,200
TEMP                 /u05/oradata/TESTDB/datafile/o1_mf_temp_19cv6sy9_.tmp          24,117,248
UNDOTBS1             /u05/oradata/TESTDB/datafile/o1_mf_undotbs1_19cv6c37_.dbf     214,958,080
USERS                /u05/oradata/TESTDB/datafile/o1_mf_users_19cv72yw_.dbf          5,242,880
[ CONTROL FILE    ]  /u03/oradata/TESTDB/controlfile/o1_mf_19cv5m84_.ctl
[ CONTROL FILE    ]  /u04/oradata/TESTDB/controlfile/o1_mf_19cv5msk_.ctl
[ CONTROL FILE    ]  /u05/oradata/TESTDB/controlfile/o1_mf_19cv5n34_.ctl
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_1_19cv5n8d_.log            10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_2_19cv5o6l_.log            10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pdy_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nbr_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_2_19cv5oml_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pt4_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nsf_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_2_19cv5p1b_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_3_19cv5q8j_.log            10,485,760
                                                                                --------------
sum                                                                              1,051,721,728

Let's now create a new tablespace that makes use of the new disk group:
SQL> create tablespace users2 datafile '+TESTDB_DATA1' size 100m;

Tablespace created.

And that's it! The CREATE TABLESPACE command (above) uses a datafile named +TESTDB_DATA1. Note that the plus sign (+) in front of the name TESTDB_DATA1 indicates to Oracle that this is a disk group name, not an operating system file name. In this example, the TESTDB instance asks the ASM instance for a new file in that disk group and uses that file for the tablespace data. Let's take a look at that new file name:
SQL> @dba_files_all

Tablespace Name
File Class           Filename                                                        File Size
-------------------- ---------------------------------------------------------- --------------
SYSAUX               /u05/oradata/TESTDB/datafile/o1_mf_sysaux_19cv6mwk_.dbf       241,172,480
SYSTEM               /u05/oradata/TESTDB/datafile/o1_mf_system_19cv5rmv_.dbf       471,859,200
TEMP                 /u05/oradata/TESTDB/datafile/o1_mf_temp_19cv6sy9_.tmp          24,117,248
UNDOTBS1             /u05/oradata/TESTDB/datafile/o1_mf_undotbs1_19cv6c37_.dbf     214,958,080
USERS                /u05/oradata/TESTDB/datafile/o1_mf_users_19cv72yw_.dbf          5,242,880
USERS2               +TESTDB_DATA1/testdb/datafile/users2.256.560031579            104,857,600
[ CONTROL FILE    ]  /u03/oradata/TESTDB/controlfile/o1_mf_19cv5m84_.ctl
[ CONTROL FILE    ]  /u04/oradata/TESTDB/controlfile/o1_mf_19cv5msk_.ctl
[ CONTROL FILE    ]  /u05/oradata/TESTDB/controlfile/o1_mf_19cv5n34_.ctl
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_1_19cv5n8d_.log            10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_2_19cv5o6l_.log            10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pdy_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nbr_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_2_19cv5oml_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pt4_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nsf_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_2_19cv5p1b_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_3_19cv5q8j_.log            10,485,760
                                                                                --------------
sum                                                                              1,156,579,328
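
Rather than naming the disk group in every command, you can also make it the default location for new database files by way of Oracle Managed Files. A short sketch run from the TESTDB instance (the users3 tablespace is hypothetical):

SQL> alter system set db_create_file_dest = '+TESTDB_DATA1';
SQL> create tablespace users3;

The second statement creates its datafile in the TESTDB_DATA1 disk group automatically, with no file name specified.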


Startup Scripts
Most Linux / UNIX users have a script used to start and stop Oracle services on system restart. On UNIX platforms, the convention is to put all start / stop commands in a single shell script named dbora. The dbora script may differ slightly from one database server to the next, as each server has different requirements for handling Apache, the TNS listener, and other services. The dbora script should be placed in /etc/init.d.
In this section, I will provide a dbora shell script that can be used to start all required Oracle services: the ASM instance, the database server(s), and the Oracle TNS listener process. (Oracle Cluster Synchronization Services itself is started by init through the /etc/inittab entry discussed later in this section.) The script utilizes the Oracle-supplied scripts $ORACLE_HOME/bin/dbstart and $ORACLE_HOME/bin/dbshut to handle starting and stopping the Oracle database(s). The dbora script is run by the UNIX init process and reads the /etc/oratab file to dynamically determine which database(s) to start and stop.

Create dbora File
The first step is to create the dbora shell script and place it in the /etc/init.d directory:
/etc/init.d/dbora
# +------------------------------------------------------------------------+
# | FILE         : dbora                                                   |
# | DATE         : 09-AUG-2006                                             |
# | HOSTNAME     : linux3.idevelopment.info                                |
# +------------------------------------------------------------------------+

# +---------------------------------+
# | FORCE THIS SCRIPT TO BE IGNORED |
# +---------------------------------+
# exit

# +---------------------------------+
# | PRINT HEADER INFORMATION        |
# +---------------------------------+
echo " "
echo "+----------------------------------+"
echo "| Starting Oracle Database Script. |"
echo "| 0 : $0          |"
echo "| 1 : $1                        |"
echo "+----------------------------------+"
echo " "

# +-----------------------------------------------------+
# | ALTER THE FOLLOWING TO REFLECT THIS SERVER SETUP    |
# +-----------------------------------------------------+

HOSTNAME=linux3.idevelopment.info
ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1
SLEEP_TIME=120
ORACLE_OWNER=oracle
DATE=`date "+%m/%d/%Y %H:%M"`

export HOSTNAME ORACLE_HOME SLEEP_TIME ORACLE_OWNER DATE

# +----------------------------------------------+
# | VERIFY THAT ALL NEEDED SCRIPTS ARE AVAILABLE |
# | BEFORE CONTINUING.                           |
# +----------------------------------------------+
if [ ! -f $ORACLE_HOME/bin/dbstart -o ! -d $ORACLE_HOME ]; then
  echo " "
  echo "+-------------------------------------+"
  echo "| ERROR:                              |"
  echo "| Oracle startup: cannot start        |"
  echo "|                 cannot find dbstart |"
  echo "+-------------------------------------+"
  echo " "
  exit
fi

# +---------------------------+
# | START/STOP CASE STATEMENT |
# +---------------------------+
case "$1" in

start)

        echo " "
        echo "+----------------------------------------+"
        echo "| ************************************** |"
        echo "| >>>>>>>>>   START PROCESS   <<<<<<<<<< |"
        echo "| ************************************** |"
        echo "+----------------------------------------+"
        echo " "

        echo "Going to sleep for $SLEEP_TIME seconds..."
        sleep $SLEEP_TIME
        echo " "
        su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart"

        echo " "
        echo "+---------------------------------------------------+"
        echo "| About to start the listener process in            |"
        echo "| $ORACLE_HOME                                      |"
        echo "+---------------------------------------------------+"
        echo " "

        su - $ORACLE_OWNER -c "lsnrctl start listener"

        touch /var/lock/subsys/dbora

        ;;

stop)

        echo " "
        echo "+----------------------------------------+"
        echo "| ************************************** |"
        echo "| >>>>>>>>>>   STOP PROCESS   <<<<<<<<<< |"
        echo "| ************************************** |"
        echo "+----------------------------------------+"
        echo " "


        echo " "
        echo "+-------------------------------------------------------+"
        echo "| About to stop the listener process in                 |"
        echo "| $ORACLE_HOME                                          |"
        echo "+-------------------------------------------------------+"
        echo " "

        su - $ORACLE_OWNER -c "lsnrctl stop listener"

        echo " "
        echo "+-------------------------------------------------------+"
        echo "| About to stop all Oracle databases                    |"
        echo "| running.                                              |"
        echo "+-------------------------------------------------------+"
        echo " "

        su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut"

        rm -f /var/lock/subsys/dbora

        ;;

*)

        echo $"Usage: $prog {start|stop}"
        exit 1

esac

echo " "
echo "+----------------------+"
echo "| ENDING ORACLE SCRIPT |"
echo "+----------------------+"
echo " "

exit
After the dbora shell script is in place, perform the following tasks as the root user:
# chmod 755 dbora
# chown root:root dbora

# ln -s /etc/init.d/dbora /etc/rc5.d/S99dbora
# ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
# ln -s /etc/init.d/dbora /etc/rc6.d/K10dbora
# exit

Modify oratab File
The next step is to edit the /etc/oratab file to allow the dbora script to automatically start and stop databases. Simply alter the final field of the +ASM and TESTDB entries from N to Y.
Ensure that the ASM instance is started BEFORE any databases that are making use of disk groups contained in it.
...
+ASM:/u01/app/oracle/product/10.1.0/db_1:Y
TESTDB:/u01/app/oracle/product/10.1.0/db_1:Y
...

Modify /etc/inittab File
The final step is to manually edit /etc/inittab so that the entry that respawns init.cssd comes before the runlevel 3 entry.
Original /etc/inittab file:

(...)
# System initialization.
si::sysinit:/etc/rc.d/rc.sysinit

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
(...)
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null

Modified /etc/inittab file:

(...)
# System initialization.
si::sysinit:/etc/rc.d/rc.sysinit

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
(...)
For Solaris users: you will need to manually edit /etc/inittab so that the entry for init.cssd comes before the runlevel 3 entry. As explained in Metalink Note ID 264235.1, the fix is as follows:
Original /etc/inittab file:

(...)
s2:23:wait:/sbin/rc2  >/dev/msglog 2<>/dev/msglog </dev/console
s3:3:wait:/sbin/rc3   >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5   >/dev/msglog 2<>/dev/msglog </dev/console
(...)
h1:3:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null

Modified /etc/inittab file:

(...)
s2:23:wait:/sbin/rc2  >/dev/msglog 2<>/dev/msglog </dev/console
h1:3:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
s3:3:wait:/sbin/rc3   >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5   >/dev/msglog 2<>/dev/msglog </dev/console
(...)

Bug 3458327 - Automatic Startup On Reboot Fails When Database Uses ASM
This bug is NOT fixed in the 10.1.0.4.0 patch set!
If you have been following this article, applied the 10.1.0.4 patch set, and modified the /etc/inittab file to force init.cssd to run (actually, to respawn) before runlevel 3, this bug should not affect you. If you are using 10.1.0.3 (or below), however, this bug may prevent the Oracle ASM instance from starting, which in turn prevents any other instances with disk groups in that ASM instance from starting. As shipped, the dbstart and dbshut scripts are not ASM aware in 10.1.0.3 and below. Even with patch set 10.1.0.4.0, we had to manually modify the /etc/inittab file. When the dbora script attempts to start the ASM instance before the ocssd.bin daemon is up and running, you will receive the error:
ORA-29701: unable to connect to Cluster Manager
The problem is simply a matter of the order in which services are started, which is why we needed to modify the /etc/inittab file. Upon entering a given runlevel (e.g. runlevel 3), init starts all of the 'respawn' lines AFTER the 'wait' lines have finished. It is important to understand that the S96init.cssd line does not actually start CSSD; it merely removes the 'NORUN' flag. Then S99dbora tries to start the instances (and fails). Then, finally, init starts CSSD.
Note that I used /etc/rc5.d/S99dbora to start the dbora script. The dbora script MUST run after /etc/init.d/init.cssd if you are starting an ASM instance. On Linux, the OUI (and manually running localconfig all) places the start link for init.cssd at /etc/rc3.d/S96init.cssd.
You will also notice that I put a sleep 120 in the dbora script before starting any databases/instances. The script sleeps for 120 seconds to help ensure that the ocssd.bin daemon is running before any ASM instance is started.
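
A fixed sleep is admittedly crude. One possible refinement, shown here only as a sketch, is to poll for the ocssd.bin daemon with a bounded timeout instead of sleeping unconditionally:

# Wait up to 120 seconds for the CSS daemon before starting any instances.
i=0
while [ $i -lt 24 ]; do
    if ps -ef | grep '[o]cssd.bin' > /dev/null; then
        break
    fi
    sleep 5
    i=`expr $i + 1`
done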
