The previous chapters gave a brief introduction to backing up and completely recovering a database in a RAC environment. Compared with a single-instance environment, RAC requires routine backups not only of the control files, log files, data files, and parameter file, but also of the OCR and the voting disks. This chapter briefly covers backing up and recovering the voting disks and the OCR.
I: Backing up the voting disks. The voting disk records cluster node membership information, such as which nodes are members and records of node additions and deletions. Its size is 20MB.
Check the voting disk locations:
[oracle@rac1 ~]$ crsctl query css votedisk
 0.     0    /dev/raw/raw7
 1.     0    /dev/raw/raw8
 2.     0    /dev/raw/raw9
located 3 votedisk(s).

Back up a voting disk:

[oracle@rac1 ~]$ dd if=/dev/raw/raw7 of=votedisk.bak
587744+0 records in
587744+0 records out
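The single dd command above backs up only one voting disk. A minimal sketch that backs up every voting disk reported by crsctl follows; the parsing of the `crsctl query css votedisk` output and the backup directory are assumptions for illustration, not part of the original procedure:

```shell
#!/bin/sh
# Sketch: back up every voting disk listed by crsctl.
# Assumes 10g-style output lines such as " 0.  0  /dev/raw/raw7",
# where the device path is the 3rd field. BACKUP_DIR is hypothetical.
BACKUP_DIR=${BACKUP_DIR:-/home/oracle/votedisk_backups}

# Extract device paths from crsctl output (3rd field of numbered lines).
parse_votedisks() {
    awk '$1 ~ /^[0-9]+\.$/ { print $3 }'
}

backup_votedisks() {
    mkdir -p "$BACKUP_DIR"
    crsctl query css votedisk | parse_votedisks | while read -r dev; do
        # One dated image per device, e.g. raw7_20120301.bak
        dd if="$dev" of="$BACKUP_DIR/$(basename "$dev")_$(date +%Y%m%d).bak"
    done
}
```

Note that the dd images are only usable for restore while the cluster configuration (number and location of voting disks) is unchanged.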
II: Backing up the OCR. The OCR records the configuration of cluster resources such as the database, ASM, instances, listeners, and VIPs. It can be stored on raw devices or on a cluster file system; the recommended size is 100MB. Perform the backup as the root user.
Check the disk or raw device that holds the OCR:
[root@rac1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/raw/raw5
ocrmirrorconfig_loc=/dev/raw/raw6
local_only=FALSE

[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     293624
         Used space (kbytes)      :       3864
         Available space (kbytes) :     289760
         ID                       :  450284450
         Device/File Name         : /dev/raw/raw5
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw6
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded

Back up the OCR:

[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/ocrconfig -export /home/oracle/ocrbak.dbf
[root@rac1 ~]# file /home/oracle/ocrbak.dbf
/home/oracle/ocrbak.dbf: data
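For routine use, the export can be wrapped in a small script that also reports which devices hold the OCR. This is a sketch under assumptions: only `/etc/oracle/ocr.loc` and the `ocrconfig -export` call come from the article; the dated filename and the helper function are illustrative:

```shell
#!/bin/sh
# Sketch: dated logical OCR export (run as root).
CRS_HOME=${CRS_HOME:-/u01/app/oracle/product/10.2.0/crs_1}
OCR_LOC_FILE=${OCR_LOC_FILE:-/etc/oracle/ocr.loc}

# Read a key such as ocrconfig_loc from the ocr.loc key=value file.
ocr_loc_value() {
    key=$1
    sed -n "s/^${key}=//p" "$OCR_LOC_FILE"
}

export_ocr() {
    echo "OCR device:  $(ocr_loc_value ocrconfig_loc)"
    echo "OCR mirror:  $(ocr_loc_value ocrmirrorconfig_loc)"
    # Logical export; restored later with ocrconfig -import.
    "$CRS_HOME/bin/ocrconfig" -export "/home/oracle/ocrbak_$(date +%Y%m%d).dbf"
}
```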
III: Shut down the database and all CRS services, then corrupt the voting disks and the OCR
[oracle@rac1 ~]$ srvctl stop database -d racdb -o immediate
[oracle@rac1 ~]$ crs_stop -all
[oracle@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    OFFLINE   OFFLINE
ora....C1.lsnr application    0/5    0/0    OFFLINE   OFFLINE
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac1.ons   application    0/3    0/0    OFFLINE   OFFLINE
ora.rac1.vip   application    0/0    0/0    OFFLINE   OFFLINE
ora....SM2.asm application    0/5    0/0    OFFLINE   OFFLINE
ora....C2.lsnr application    0/5    0/0    OFFLINE   OFFLINE
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac2.ons   application    0/3    0/0    OFFLINE   OFFLINE
ora.rac2.vip   application    0/0    0/0    OFFLINE   OFFLINE
ora.racdb.db   application    0/0    0/1    OFFLINE   OFFLINE
ora....b1.inst application    0/5    0/0    OFFLINE   OFFLINE
ora....b2.inst application    0/5    0/0    OFFLINE   OFFLINE

[root@rac1 ~]# for i in {5..9};do dd if=/dev/zero of=/dev/raw/raw$i bs=50M count=2; done
2+0 records in
2+0 records out
2+0 records in
2+0 records out
2+0 records in
2+0 records out
2+0 records in
2+0 records out
2+0 records in
2+0 records out

With the disks corrupted, querying the voting disks and starting the CRS resources now fail. (Note that `crsctl check crs` still reports the daemons as healthy, because the already-running daemons have not yet re-read the corrupted storage.)

[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl query css votedisk
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage
[oracle@rac1 ~]$ crs_start -all
rac1 : CRS-1019: Resource ora.rac2.ASM2.asm (application) cannot run on rac1
rac1 : CRS-1019: Resource ora.rac2.ASM2.asm (application) cannot run on rac1
CRS-0184: Cannot communicate with the CRS daemon.
IV: Restore the voting disks and the OCR from the backups
[root@rac1 ~]# for i in {7..9};do dd if=/home/oracle/votedisk.bak of=/dev/raw/raw$i;done
587744+0 records in
587744+0 records out
587744+0 records in
587744+0 records out
587744+0 records in
587744+0 records out

After the voting disks are restored, the CRS services still fail to start:

[oracle@rac1 ~]$ crs_start -all
CRS-0184: Cannot communicate with the CRS daemon.

Querying the voting disks at this point reveals an invalid format:

[oracle@rac1 ~]$ crsctl query css votedisk
OCR initialization failed with invalid format: PROC-22: The OCR backend has an invalid format

Restore the OCR:

[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/ocrconfig -import /home/oracle/ocrbak.dbf
[oracle@rac1 ~]$ crsctl query css votedisk
 0.     0    /dev/raw/raw7
 1.     0    /dev/raw/raw8
 2.     0    /dev/raw/raw9
located 3 votedisk(s).

After restarting the CRS processes, everything is back to normal:

[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[oracle@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
ora.racdb.db   application    0/0    0/1    ONLINE    ONLINE    rac2
ora....b1.inst application    0/5    0/0    ONLINE    ONLINE    rac1
ora....b2.inst application    0/5    0/0    ONLINE    ONLINE    rac2
[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     293624
         Used space (kbytes)      :       3864
         Available space (kbytes) :     289760
         ID                       :  450284450
         Device/File Name         : /dev/raw/raw5
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw6
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
V: Recovering when no manual backup of the OCR and voting disks exists. CRS automatically backs up the OCR every 4 hours; these automatic backups can be listed with the ocrconfig -showbackup command.
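Before falling back to the full re-initialization below, it is worth trying the automatic physical backups first. A physical backup taken by CRS is restored with `ocrconfig -restore`, while a file written by `-export` is restored with `-import`. The sketch below assumes the default 10g backup location under `$CRS_HOME/cdata` and `.ocr` file names; check `ocrconfig -showbackup` on your system for the real paths:

```shell
#!/bin/sh
# Sketch: restore the OCR from an automatic physical backup (run as root).
CRS_HOME=${CRS_HOME:-/u01/app/oracle/product/10.2.0/crs_1}

# Newest *.ocr physical backup in a directory (by modification time).
newest_ocr_backup() {
    ls -t "$1"/*.ocr 2>/dev/null | head -n 1
}

restore_from_auto_backup() {
    # List the automatic backups so the choice can be verified first.
    "$CRS_HOME/bin/ocrconfig" -showbackup
    backup=$(newest_ocr_backup "$CRS_HOME/cdata/crs")
    # -restore is for physical (automatic) backups;
    # -import is only for files written by -export.
    "$CRS_HOME/bin/ocrconfig" -restore "$backup"
}
```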
1. Shut down the database instances and all CRS services, then corrupt the voting disks and the OCR
[oracle@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
ora.racdb.db   application    ONLINE    ONLINE    rac2
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
[oracle@rac1 ~]$ srvctl stop database -d racdb -o immediate
[oracle@rac1 ~]$ crs_stop -all
Attempting to stop `ora.rac1.gsd` on member `rac1`
Attempting to stop `ora.rac1.ons` on member `rac1`
Attempting to stop `ora.rac2.gsd` on member `rac2`
Attempting to stop `ora.rac2.ons` on member `rac2`
Stop of `ora.rac1.gsd` on member `rac1` succeeded.
Stop of `ora.rac2.gsd` on member `rac2` succeeded.
Stop of `ora.rac1.ons` on member `rac1` succeeded.
Stop of `ora.rac2.ons` on member `rac2` succeeded.
Attempting to stop `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1`
Attempting to stop `ora.rac1.ASM1.asm` on member `rac1`
Attempting to stop `ora.rac2.LISTENER_RAC2.lsnr` on member `rac2`
Attempting to stop `ora.rac2.ASM2.asm` on member `rac2`
Stop of `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1` succeeded.
Stop of `ora.rac2.LISTENER_RAC2.lsnr` on member `rac2` succeeded.
Attempting to stop `ora.rac1.vip` on member `rac1`
Attempting to stop `ora.rac2.vip` on member `rac2`
Stop of `ora.rac1.vip` on member `rac1` succeeded.
Stop of `ora.rac2.vip` on member `rac2` succeeded.
Stop of `ora.rac2.ASM2.asm` on member `rac2` succeeded.
Stop of `ora.rac1.ASM1.asm` on member `rac1` succeeded.
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@rac1 ~]# for i in {5..9};do dd if=/dev/zero of=/dev/raw/raw$i bs=20M count=3; done
3+0 records in
3+0 records out
3+0 records in
3+0 records out
3+0 records in
3+0 records out
3+0 records in
3+0 records out
3+0 records in
3+0 records out
2. Starting the CRS services now reports errors
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[oracle@rac1 ~]$ crsctl query css votedisk
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage
[oracle@rac1 ~]$ ocrcheck
PROT-602: Failed to retrieve data from the cluster registry
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
3. Run the $ORA_CRS_HOME/install/rootdelete.sh script on every node in the RAC environment
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
4. Run the $ORA_CRS_HOME/install/rootdeinstall.sh script on any one node
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/install/rootdeinstall.sh
Removing contents from OCR mirror device
2560+0 records in
2560+0 records out
Removing contents from OCR device
2560+0 records in
2560+0 records out
5. Run the $ORA_CRS_HOME/root.sh script on every node in the RAC environment
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw7
Now formatting voting device: /dev/raw/raw8
Now formatting voting device: /dev/raw/raw9
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The interface "255.255.255.0/eth0" specified in the input parameters is invalid.
Running the script on the second node fails at the final step. Handle it the same way as during a RAC installation: run vipca as the root user on node 2.
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/vipca
[oracle@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
6. Register the listeners in the OCR
[oracle@rac1 ~]$ netca
[oracle@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
7. Register the ASM instances
[oracle@rac1 ~]$ srvctl add asm -n rac1 -i +ASM1 -o /u01/app/oracle/product/10.2.0/db_1/
[oracle@rac1 ~]$ srvctl add asm -n rac2 -i +ASM2 -o /u01/app/oracle/product/10.2.0/db_1/
[oracle@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    OFFLINE   OFFLINE
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    OFFLINE   OFFLINE
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
[oracle@rac1 ~]$ srvctl start asm -n rac1
[oracle@rac1 ~]$ srvctl start asm -n rac2
[oracle@rac1 ~]$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
[oracle@rac1 ~]$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
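After each registration step, a quick scripted check that nothing is left OFFLINE can save a round of eyeballing crs_stat output. This is a sketch; the column layout it assumes matches the 10g `crs_stat -t` output shown in this article:

```shell
#!/bin/sh
# Sketch: fail if any CRS resource is reported OFFLINE.

# Count resource lines whose State column (last field on OFFLINE rows,
# since they have no Host) reads OFFLINE.
count_offline() {
    awk '$1 ~ /^ora\./ && $NF == "OFFLINE" { n++ } END { print n+0 }'
}

check_crs_online() {
    offline=$(crs_stat -t | count_offline)
    if [ "$offline" -gt 0 ]; then
        echo "$offline resource(s) still OFFLINE" >&2
        return 1
    fi
    echo "all CRS resources ONLINE"
}
```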
8. Register the database and the database instances
[oracle@rac1 ~]$ srvctl add database -d racdb -o /u01/app/oracle/product/10.2.0/db_1/
[oracle@rac1 ~]$ srvctl add instance -d racdb -i racdb1 -n rac1
[oracle@rac1 ~]$ srvctl add instance -d racdb -i racdb2 -n rac2
[oracle@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
ora.racdb.db   application    0/0    0/1    OFFLINE   OFFLINE
ora....b1.inst application    0/5    0/0    OFFLINE   OFFLINE
ora....b2.inst application    0/5    0/0    OFFLINE   OFFLINE
[oracle@rac1 ~]$ srvctl start database -d racdb
[oracle@rac1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac1
Instance racdb2 is running on node rac2

This article is reposted from the "斩月" blog on 51CTO. Original link: http://blog.51cto.com/ylw6006/751100. For reprinting, please contact the original author, ylw6006.