Ans: Oracle RAC enables you to cluster Oracle databases. Oracle RAC uses Oracle Clusterware as the infrastructure to bind multiple servers so that they operate as a single system. Oracle Clusterware is a portable cluster management solution that is integrated with Oracle Database.
Ans: The file storage options provided by Oracle Database for Oracle RAC are Oracle Automatic Storage Management (Oracle ASM), a certified cluster file system (such as OCFS2), and a certified NFS file system.
Ans: A Cluster File System (CFS) is a file system that can be accessed (read and write) by all nodes in a cluster at the same time. This implies that all members of the cluster have the same view of the data.
Ans: In a RAC environment, it is the sharing of data blocks, which are shipped across the interconnect from remote database caches (SGA) to the local node, in order to satisfy the requirements of a transaction (DML, query of the data dictionary).
Ans: When the database nodes in a cluster are unable to communicate with each other, they may continue to process and modify data blocks independently. If the same block is modified by more than one instance, synchronization/locking of the data blocks does not take place and blocks may be overwritten by other instances in the cluster. This state is called split brain.
Ans: Either the Network Time Protocol (NTP) can be configured or, from 11gR2 onward, the Cluster Time Synchronization Service (CTSS) can be used.
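As a quick check, the crsctl utility reports whether CTSS is running in active or observer mode (a minimal sketch; the exact output varies by version):
crsctl check ctss    # reports CTSS state and time offset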
Ans: The Clusterware is installed on each node (in an Oracle home) and on the shared storage (the voting disks and the OCR file).
Ans: crs_stat -t -v (-t -v are optional)
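Note that crs_stat is deprecated from 11gR2 onward; a minimal sketch of the equivalent with the newer syntax:
crsctl stat res -t    # tabular status of all cluster resources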
Ans: You can create a RAC with just one server.
Ans: RAC background processes are: LMON, LMDx, LMSn, LCKx and DIAG.
Ans: SPFILEs, control files, datafiles and redo log files should be created on shared storage.
Ans: Network ping failures are written to the logs under $CRS_HOME/log.
Ans: The ocrconfig -showbackup command can be run to list both the automatic and the manually taken OCR backups.
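A minimal sketch of listing and taking OCR backups with ocrconfig (run as root on a cluster node):
ocrconfig -showbackup      # list automatic and manual OCR backups
ocrconfig -manualbackup    # take an on-demand OCR backup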
Ans: It is a private network that is used to ship data blocks from one instance to another for Cache Fusion. Both physical data blocks and data dictionary blocks are shared across this interconnect.
Ans: One of the ways is to look at the database alert log for the time period when the database was started up.
Ans: You can use either the logical or the physical OCR backup copy to restore the Repository.
Ans: The hangcheck timer regularly checks the health of the system. If the system hangs or stops responding, the node is restarted automatically.
There are two key parameters for this module: hangcheck_tick, which defines how often the health check runs, and hangcheck_margin, which defines the maximum hang delay tolerated before the node is reset.
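A minimal sketch of loading the module on Linux, assuming the commonly documented values of 30 and 180 seconds (adjust to your platform and release):
# Load the hangcheck-timer kernel module with a 30s check interval and 180s margin
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180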
Ans: When an instance crashes in a single-node database, crash recovery takes place on startup. In a RAC environment, the same recovery for a failed instance is performed by one of the surviving nodes; this is called instance recovery.
Ans: You can query the V$ACTIVE_INSTANCES view to determine the member instances of the RAC cluster.
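A minimal sketch of such a query, using the documented columns of V$ACTIVE_INSTANCES:
-- List the instances that are currently active in the cluster
SELECT inst_number, inst_name FROM v$active_instances;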
Ans: This is the parameter (ASM_POWER_LIMIT) which controls the number of allocation units the ASM instance will try to rebalance at any given time. In ASM versions earlier than 11.2.0.2 the maximum value is 11; in later versions the upper limit was raised to 1024.
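Assuming the parameter in question is ASM_POWER_LIMIT, a minimal sketch of adjusting rebalance power either for the instance or for a single operation (the diskgroup name DATA is an assumption):
-- Raise the default rebalance power for the ASM instance
ALTER SYSTEM SET asm_power_limit = 4;
-- Override the power for one rebalance operation only
ALTER DISKGROUP data REBALANCE POWER 8;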
Ans: A patch is considered rolling if it can be applied to the cluster binaries without having to shut down the database across the RAC environment. All nodes in the cluster are patched in a rolling manner, one by one, with only the node currently being patched unavailable while all other instances remain open.
Ans: In 10g the default SGA size is 1G, in 11g it is set to 256M, and in 12c ASM it is set back to 1G.
Ans: You can use the DBA_HIST_SEG_STAT view.
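A minimal sketch of a query against it, ranking segments by global-cache buffer busy activity recorded in AWR (the column names and the join to DBA_HIST_SEG_STAT_OBJ should be verified against your version's reference):
-- Top segments by GC buffer busy waits captured in AWR snapshots
SELECT o.owner, o.object_name, SUM(s.gc_buffer_busy_delta) AS gc_buffer_busy
FROM   dba_hist_seg_stat s
JOIN   dba_hist_seg_stat_obj o
ON     s.dbid = o.dbid AND s.obj# = o.obj# AND s.dataobj# = o.dataobj#
GROUP  BY o.owner, o.object_name
ORDER  BY gc_buffer_busy DESC;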
Ans: The VIP is an alternate virtual IP address assigned to each node in a cluster. During a node failure, the VIP of the failed node moves to a surviving node and signals to the application that the node has gone down. Without a VIP, the application would wait for a TCP timeout before finding out that the session is no longer alive because of the failure.
Ans: The backups should include OLR, OCR and ASM Metadata.
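A minimal sketch of taking each of these backups (run ocrconfig as root and asmcmd as the Grid Infrastructure owner; the backup path and diskgroup name are assumptions):
ocrconfig -local -manualbackup                  # on-demand OLR backup
ocrconfig -manualbackup                         # on-demand OCR backup
asmcmd md_backup /backup/asm_md_backup -G DATA  # ASM metadata backup for the assumed diskgroup DATA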
Ans: You can run the opatch lsinventory -all_nodes command from a single node to look at the inventory details for all nodes in the cluster.
Ans: You can use md_backup to back up the ASM diskgroup metadata and md_restore to recreate the diskgroup configuration in case of ASM diskgroup storage loss.
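A minimal sketch of the backup/restore pair in asmcmd (exact options vary by version; the backup file and diskgroup name are assumptions):
asmcmd md_backup /backup/data_dg_md -G DATA          # capture the DATA diskgroup metadata
asmcmd md_restore --full -G DATA /backup/data_dg_md  # recreate the diskgroup and its directories/templates from the backup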
Ans: In 11g the following files can be stored in ASM diskgroups: datafiles, control files, SPFILEs, online redo logs, archived redo logs, flashback logs, RMAN backup sets and image copies, block change tracking files, Data Pump dump sets and, from 11gR2, the OCR and voting files.
In 12c the database password file can also now be stored in an ASM diskgroup.
Ans: The Cluster Health Monitor (CHM) stores operating system metrics in the CHM repository for all nodes in a RAC cluster. It collects information on CPU, memory, processes, the network and other OS data, and this information can later be retrieved and used to troubleshoot and identify cluster-related issues. It is a default component of the 11gR2 Grid installation. The data is stored in a master repository and replicated to a standby repository on a different node.
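A minimal sketch of pulling recent CHM data with the oclumon utility (the time-window format should be checked against your version's documentation):
# Dump the last 15 minutes of node metrics from the CHM repository for all nodes
oclumon dumpnodeview -allnodes -last "00:15:00"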
Ans: All processing will slow down to the CPU speed of the slowest server.
Ans: The Oracle Local Registry (OLR) contains the information that allows the cluster processes to be started up when the OCR is stored in ASM. Since ASM is not available until the Grid processes are started, a local copy of the OCR contents is required, and this is what is stored in the OLR.
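A minimal sketch of inspecting the OLR on a node (run as root; the -local flag makes ocrcheck report on the OLR rather than the OCR):
ocrcheck -local    # show OLR location, version, used/available space and integrity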
Ans: Some of the RAC-specific parameters are: CLUSTER_DATABASE, CLUSTER_DATABASE_INSTANCES, INSTANCE_NUMBER, THREAD, UNDO_TABLESPACE, CLUSTER_INTERCONNECTS and REMOTE_LISTENER.
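A minimal sketch of how such parameters typically appear in an SPFILE/PFILE for a two-instance cluster (the instance names orcl1/orcl2 and tablespace names are assumptions):
*.cluster_database=TRUE
orcl1.instance_number=1
orcl2.instance_number=2
orcl1.thread=1
orcl2.thread=2
orcl1.undo_tablespace='UNDOTBS1'
orcl2.undo_tablespace='UNDOTBS2'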
Ans: The Grid Infrastructure software is becoming more and more capable of providing high availability not just for Oracle databases but also for other applications, including Oracle's own applications. With 12c there are more features and functionality built in, and it is easier to deploy these pre-built solutions, which are available for common Oracle applications.