Environment: CentOS 7.9, Oracle 11.2.0.4
### Preparation ###
1. Two cloud servers
IP: 10.10.161.60, 10.10.161.68
2. Five shared disks (request from the infrastructure team).
Disk layout:
one 100G data disk, one 50G archive disk, three 10G OCR disks
/dev/sdb ocr
/dev/sdc ocr
/dev/sdd ocr
/dev/sde arch
/dev/sdf data
3. Two NICs per server (request from the infrastructure team).
4. Two private (heartbeat) IPs, two VIPs, and one SCAN IP (reserve with the network team).
Reserved private IPs:
172.16.10.21
172.16.10.22
VIPs:
10.10.161.160
10.10.161.168
Reserved SCAN IP:
10.10.161.200
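These addresses normally also go into /etc/hosts on both nodes before the grid install. A sketch using the plan above; the hostnames rac1-priv, rac1-vip, rac-scan and so on are conventional naming choices (assumptions, not requirements):

```shell
# name/IP pairs from the plan above; append these lines to /etc/hosts on BOTH nodes
hosts_entries='10.10.161.60   rac1
10.10.161.68   rac2
172.16.10.21   rac1-priv
172.16.10.22   rac2-priv
10.10.161.160  rac1-vip
10.10.161.168  rac2-vip
10.10.161.200  rac-scan'
printf '%s\n' "$hosts_entries"   # as root: printf '%s\n' "$hosts_entries" >> /etc/hosts
```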
With the preparation complete, we can begin.
二、Disk partitioning
We'll keep this brief, since anyone installing Oracle knows how to partition a disk. Create one primary partition on each of the five shared disks (shown here for /dev/sdb):
fdisk /dev/sdb
n
p
(press Enter)
(press Enter)
(press Enter)
w
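The keystrokes above can be scripted so all five disks are partitioned identically. A minimal sketch; this is destructive, so run it as root only after double-checking the device names:

```shell
# create one primary partition spanning the disk, answering fdisk's prompts:
# n (new), p (primary), three Enters to accept the defaults, w (write)
partition_disk() {
  fdisk "$1" <<'EOF'
n
p



w
EOF
}
# for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do partition_disk "$d"; done
```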
三、Upload the software packages to the servers
四、Install the cluster prerequisite packages (run on both nodes)
# Note: a working yum repository must be configured first
# Install the dependency packages
yum install -y binutils elfutils-libelf-devel zlib-devel sysstat make libXtst libXi libstdc++-devel libstdc++ libgcc libaio-devel libaio ksh glibc-devel glibc gcc-c++ compat-libstdc++-33 compat-libcap1 gcc
# Run once more with the full list to catch anything missed
yum -y install binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel libXi libXtst make sysstat unixODBC unixODBC-devel
# After installation, verify that every dependency is present
rpm -q binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel libXi libXtst make sysstat unixODBC unixODBC-devel | grep "not installed"
五、Set the hostnames and disable the firewall and SELinux (run on both nodes)
# Node 1
hostnamectl set-hostname rac1
bash
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
# Node 2
hostnamectl set-hostname rac2
bash
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
六、Create the users (run on both nodes)
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmoper
groupadd -g 506 asmdba
groupadd -g 507 oinstall
useradd -g oinstall -G dba,asmdba,oper oracle
useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
# Set the passwords (pick something you will remember)
passwd oracle
passwd grid
七、Create the directories (run on both nodes)
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
八、Set resource limits (run on both nodes)
[root@rac1 /]# vi /etc/security/limits.d/20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
#* soft nproc 4096
root soft nproc unlimited
* - nproc 16384
[root@rac1 /]# vi /etc/security/limits.conf
#ORACLE SETTING
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
On CentOS 7.2 and later, this parameter must be changed before installing grid; otherwise the ASM instance will not start and CRS works only intermittently:
[root@rac1 /]# vi /etc/systemd/logind.conf
RemoveIPC=no
[root@rac1 /]# systemctl daemon-reload
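One caveat, from general systemd behavior rather than the original notes: `daemon-reload` re-reads unit files, not logind.conf, so restarting systemd-logind (or rebooting) is usually needed for RemoveIPC=no to take effect. A sketch (`apply_removeipc` is a hypothetical helper name):

```shell
# hypothetical helper; run as root on both nodes after editing logind.conf
apply_removeipc() {
  systemctl daemon-reload
  systemctl restart systemd-logind
}
# apply_removeipc
```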
九、Set the grid user's environment (run on both nodes)
# Node 1
[root@rac1 /]# su - grid
Last login: Tue Dec 21 10:08:32 CST 2021 on pts/0
[grid@rac1 ~]$ vi .bash_profile
PS1="[`whoami`@`hostname`:"'$PWD]$'
export PS1
umask 022
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
#alias lsnrctl="rlwrap lsnrctl"
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac1
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
# Node 2
[root@rac2 /]# su - grid
Last login: Tue Dec 21 10:11:42 CST 2021 on pts/0
[grid@rac2 ~]$ vi .bash_profile
PS1="[`whoami`@`hostname`:"'$PWD]$'
export PS1
umask 022
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
#alias lsnrctl="rlwrap lsnrctl"
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac2
ORACLE_SID=+ASM2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
十、Set the oracle user's environment (run on both nodes)
# Node 1
[root@rac1 /]# su - oracle
Last login: Tue Dec 21 10:13:32 CST 2021 on pts/0
[oracle@rac1 ~]$ vi .bash_profile
PATH=$PATH:$HOME/.local/bin:$HOME/bin
PS1="[`whoami`@`hostname`:"'$PWD]$'
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
export PS1
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac1
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=wms; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;export NLS_LANG
PATH=.:$PATH:$HOME/bin:$ORACLE_BASE/product/11.2.0/db_1/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
[oracle@rac1 ~]$ source /home/oracle/.bash_profile
# Node 2
[root@rac2 /]# su - oracle
Last login: Tue Dec 21 10:14:32 CST 2021 on pts/0
[oracle@rac2 ~]$ vi .bash_profile
PATH=$PATH:$HOME/.local/bin:$HOME/bin
PS1="[`whoami`@`hostname`:"'$PWD]$'
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
export PS1
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac2
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=wms; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;export NLS_LANG
PATH=.:$PATH:$HOME/bin:$ORACLE_BASE/product/11.2.0/db_1/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
[oracle@rac2 ~]$ source /home/oracle/.bash_profile
十一、Install ASMLib (run on both nodes)
[root@rac1 app]# cd rac.zip/
[root@rac1 rac.zip]# ls
kmod-20-21.0.1.el7.x86_64.rpm               oracleasmlib-2.0.12-1.el7.x86_64.rpm      p13390677_112040_Linux-x86-64_2of7.zip
kmod-libs-20-21.0.1.el7.x86_64.rpm          oracleasm-support-2.1.8-3.el7.x86_64.rpm  p13390677_112040_Linux-x86-64_3of7.zip
kmod-oracleasm-2.0.8-21.0.1.el7.x86_64.rpm  p13390677_112040_Linux-x86-64_1of7.zip    pdksh-5.2.14-30.x86_64.rpm
[root@rac1 rac.zip]# rpm -ivh kmod-oracleasm-2.0.8-21.0.1.el7.x86_64.rpm
warning: kmod-oracleasm-2.0.8-21.0.1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kmod-oracleasm-2.0.8-21.0.1.el7   ################################# [100%]
[root@rac1 rac.zip]#
[root@rac1 rac.zip]#
[root@rac1 rac.zip]# rpm -ivh oracleasmlib-2.0.12-1.el7.x86_64.rpm
warning: oracleasmlib-2.0.12-1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracleasmlib-2.0.12-1.el7         ################################# [100%]
[root@rac1 rac.zip]#
[root@rac1 rac.zip]#
[root@rac1 rac.zip]# rpm -ivh oracleasm-support-2.1.8-3.el7.x86_64.rpm
warning: oracleasm-support-2.1.8-3.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracleasm-support-2.1.8-3.el7     ################################# [100%]
Note: Forwarding request to 'systemctl enable oracleasm.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/oracleasm.service to /usr/lib/systemd/system/oracleasm.service.
[root@rac1 rac.zip]# systemctl status oracleasm.service
● oracleasm.service - Load oracleasm Modules
Loaded: loaded (/usr/lib/systemd/system/oracleasm.service; enabled; vendor preset: disabled)
Active: inactive (dead)
[root@rac1 rac.zip]# systemctl start oracleasm.service
[root@rac1 rac.zip]# systemctl status oracleasm.service
● oracleasm.service - Load oracleasm Modules
Loaded: loaded (/usr/lib/systemd/system/oracleasm.service; enabled; vendor preset: disabled)
Active: active (exited) since Thu 2022-03-24 10:48:41 CST; 4s ago
Process: 423072 ExecStart=/usr/sbin/service oracleasm start_sysctl (code=exited, status=0/SUCCESS)
Main PID: 423072 (code=exited, status=0/SUCCESS)
Mar 24 10:48:41 rac1 systemd[1]: Starting Load oracleasm Modules...
Mar 24 10:48:41 rac1 service[423072]: Initializing the Oracle ASMLib d...]
Mar 24 10:48:41 rac1 service[423072]: Scanning the system for Oracle A...]
Mar 24 10:48:41 rac1 systemd[1]: Started Load oracleasm Modules.
Hint: Some lines were ellipsized, use -l to show in full.
十二、Configure ASM (run on both nodes)
[root@rac1 tmp]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac1 rac.zip]#
十三、Create the ASM disks (run on one node only)
[root@rac1 app]# /etc/init.d/oracleasm createdisk OCR1 /dev/sdb1
Marking disk "OCR1" as an ASM disk: [ OK ]
[root@rac1 app]# /etc/init.d/oracleasm createdisk OCR2 /dev/sdc1
Marking disk "OCR2" as an ASM disk: [ OK ]
[root@rac1 app]# /etc/init.d/oracleasm createdisk OCR3 /dev/sdd1
Marking disk "OCR3" as an ASM disk: [ OK ]
[root@rac1 app]# /etc/init.d/oracleasm createdisk ARCH /dev/sde1
Marking disk "ARCH" as an ASM disk: [ OK ]
[root@rac1 app]# /etc/init.d/oracleasm createdisk DATA /dev/sdf1
Marking disk "DATA" as an ASM disk: [ OK ]
# Rescan and list the disks on both nodes:
[root@rac1 u01]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac1 u01]# /etc/init.d/oracleasm listdisks
ARCH
DATA
OCR1
OCR2
OCR3
[root@rac2 app]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac2 app]# /etc/init.d/oracleasm listdisks
ARCH
DATA
OCR1
OCR2
OCR3
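If a label ever needs to be traced back to its block device, `oracleasm querydisk -p` (part of oracleasm-support) prints the mapping. A small sketch; `query_asm_disks` is a hypothetical helper:

```shell
# print the device behind each ASMLib label (run as root on either node)
query_asm_disks() {
  for label in "$@"; do
    /etc/init.d/oracleasm querydisk -p "$label"
  done
}
# query_asm_disks OCR1 OCR2 OCR3 ARCH DATA
```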
十四、Set kernel parameters (run on both nodes)
# Node 1
vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152 # shared memory pages available (SGA / PAGE_SIZE; get the page size with getconf PAGE_SIZE)
kernel.shmmax = 536870912 # maximum size of a single shared memory segment, in bytes (~70% of physical RAM, GB x 1024^3)
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
# Apply the settings
sysctl -p
# Node 2
vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152 # shared memory pages available (SGA / PAGE_SIZE; get the page size with getconf PAGE_SIZE)
kernel.shmmax = 536870912 # maximum size of a single shared memory segment, in bytes (~70% of physical RAM, GB x 1024^3)
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
# Apply the settings
sysctl -p
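A sizing sketch for the two shared-memory parameters: the 536870912 above is only 512 MB, which cluvfy will flag on 16 GB nodes. Halving RAM is a common rule of thumb, not an Oracle mandate:

```shell
# derive candidate values from this machine's RAM and page size
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null)
mem_kb=${mem_kb:-16234152}                 # fall back to the 16 GB figure these nodes report
page=$(getconf PAGE_SIZE)                  # usually 4096
shmmax=$(( mem_kb * 1024 / 2 ))            # half of RAM, in bytes
shmall=$(( shmmax / page ))                # the same amount, in pages
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```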
### Install the Grid cluster ###
一、Unpack the software, install cvuqdisk, and configure SSH equivalence (both nodes)
[root@rac1 rpm]# su - grid
[grid@rac1:/home/grid]$ unzip p13390677_112040_Linux-x86-64_3of7.zip -d /u01/app
[grid@rac1:/home/grid]$ cd /u01/app/grid/rpm;rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing... ################################# [100%]
Using default group oinstall to install package
Updating / installing...
1:cvuqdisk-1.0.9-1 ################################# [100%]
# scp the cvuqdisk rpm to the other node (and install it there the same way)
[root@rac1 rpm]# scp cvuqdisk-1.0.9-1.rpm root@10.10.161.68:/u01/app
# Configure SSH user equivalence
[root@rac1 ~]# chmod +x /u01/app/grid/sshsetup/sshUserSetup.sh
[root@rac1 ~]# su - grid
Last login: Fri Mar 25 09:50:28 CST 2022 on pts/0
[grid@rac1:/home/grid]$cd /u01/app/grid/sshsetup/
[grid@rac1:/u01/app/grid/sshsetup]$./sshUserSetup.sh -user grid -hosts rac2 -advanced -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2022-03-25-10-12-00.log
Hosts are rac2
user is grid
Platform:- Linux
Checking if the remote hosts are reachable
PING rac2 (10.10.161.68) 56(84) bytes of data.
64 bytes from rac2 (10.10.161.68): icmp_seq=1 ttl=64 time=0.187 ms
yes
(enter the grid password for rac2 when prompted; it is asked twice)
# Do the same from rac2
[root@rac2 ~]# su - grid
Last login: Fri Mar 25 09:50:28 CST 2022 on pts/0
[grid@rac2:/home/grid]$cd /u01/app/grid/sshsetup/
[grid@rac2:/u01/app/grid/sshsetup]$./sshUserSetup.sh -user grid -hosts rac1 -advanced -noPromptPassphrase
# As the grid user, run the cluster pre-install checks
[grid@rac1:/u01/app/grid]$ cd /u01/app/grid
[grid@rac1:/u01/app/grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable?
------------------------------------ ------------------------
rac2 yes
rac1 yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Verification of the hosts config file successful
Interface information for node "rac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 10.10.161.68 10.10.161.0 0.0.0.0 10.10.161.1 FA:16:3E:1F:60:70 1500
eth1 172.16.10.22 172.16.10.0 0.0.0.0 10.10.161.1 00:50:56:91:20:37 1500
Interface information for node "rac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 10.10.161.60 10.10.161.0 0.0.0.0 10.10.161.1 FA:16:3E:21:97:19 1500
eth1 172.16.10.21 172.16.10.0 0.0.0.0 10.10.161.1 00:50:56:91:FB:8A 1500
Check: Node connectivity of subnet "10.10.161.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac2[10.10.161.68] rac1[10.10.161.60] yes
Result: Node connectivity passed for subnet "10.10.161.0" with node(s) rac2,rac1
Check: TCP connectivity of subnet "10.10.161.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1:10.10.161.60 rac2:10.10.161.68 passed
Result: TCP connectivity check passed for subnet "10.10.161.0"
Check: Node connectivity of subnet "172.16.10.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac2[172.16.10.22] rac1[172.16.10.21] yes
Result: Node connectivity passed for subnet "172.16.10.0" with node(s) rac2,rac1
Check: TCP connectivity of subnet "172.16.10.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1:172.16.10.21 rac2:172.16.10.22 passed
Result: TCP connectivity check passed for subnet "172.16.10.0"
Interfaces found on subnet "10.10.161.0" that are likely candidates for VIP are:
rac2 eth0:10.10.161.68
rac1 eth0:10.10.161.60
Interfaces found on subnet "172.16.10.0" that are likely candidates for a private interconnect are:
rac2 eth1:172.16.10.22
rac1 eth1:172.16.10.21
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.161.0".
Subnet mask consistency check passed for subnet "172.16.10.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "10.10.161.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.161.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "172.16.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.16.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Result: Check for ASMLib configuration passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 15.4821GB (1.6234152E7KB) 1.5GB (1572864.0KB) passed
rac1 15.4821GB (1.6234144E7KB) 1.5GB (1572864.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 14.2745GB (1.4967936E7KB) 50MB (51200.0KB) passed
rac1 14.0779GB (1.4761704E7KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 16GB (1.6777212E7KB) 15.4821GB (1.6234152E7KB) passed
rac1 16GB (1.6777212E7KB) 15.4821GB (1.6234144E7KB) passed
Result: Swap space check passed
Check: Free disk space for "rac2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp rac2 /tmp 9.1709GB 1GB passed
Result: Free disk space check passed for "rac2:/tmp"
Check: Free disk space for "rac1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp rac1 /tmp 6.7121GB 1GB passed
Result: Free disk space check passed for "rac1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists(1002)
rac1 passed exists(1002)
Checking for multiple users with UID value 1002
Result: Check for multiple users with UID value 1002 passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists
rac1 passed exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists
rac1 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
rac2 yes yes yes yes passed
rac1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
rac2 yes yes yes passed
rac1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
rac2 3 3,5 passed
rac1 3 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 hard 524288 65536 passed
rac1 hard 524288 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 soft 524288 1024 passed
rac1 soft 524288 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 hard 524288 16384 passed
rac1 hard 524288 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 soft 524288 2047 passed
rac1 soft 524288 2047 passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 x86_64 x86_64 passed
rac1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 3.10.0-1160.25.1.el7.x86_64 2.6.9 passed
rac1 3.10.0-1160.25.1.el7.x86_64 2.6.9 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 500 500 250 passed
rac1 500 500 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 256000 256000 32000 passed
rac1 256000 256000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 250 250 100 passed
rac1 250 250 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 8192 8192 128 passed
rac1 8192 8192 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 536870912 536870912 4294967295 failed Current value incorrect. Configured value incorrect.
rac1 536870912 536870912 4294967295 failed Current value incorrect. Configured value incorrect.
Result: Kernel parameter check failed for "shmmax"
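This failure matches the 512 MB shmmax set earlier. One way to clear it before rerunning cluvfy (`fix_shmmax` is a hypothetical helper; 8589934592, i.e. 8 GB, is an illustrative value for these 16 GB nodes):

```shell
# raise kernel.shmmax in /etc/sysctl.conf and reload; run as root on both nodes
fix_shmmax() {
  sed -i 's/^kernel.shmmax.*/kernel.shmmax = 8589934592/' /etc/sysctl.conf
  sysctl -p
}
# fix_shmmax
```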
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 4096 unknown 4096 failed (ignorable) Configured value unknown.
rac1 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 2097152 2097152 2097152 passed
rac1 2097152 2097152 2097152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 6815744 6815744 6815744 passed
rac1 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 between 9000.0 & 65000.0 between 9000.0 & 65000.0 between 9000.0 & 65500.0 passed
rac1 between 9000.0 & 65000.0 between 9000.0 & 65000.0 between 9000.0 & 65500.0 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 262144 262144 262144 passed
rac1 262144 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 16777216 16777216 4194304 passed
rac1 16777216 16777216 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 262144 262144 262144 passed
rac1 262144 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 16777216 16777216 1048576 passed
rac1 16777216 16777216 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 1048576 1048576 1048576 passed
rac1 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 make-3.82-24.el7 make-3.80 passed
rac1 make-3.82-24.el7 make-3.80 passed
Result: Package existence check passed for "make"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 binutils-2.27-44.base.el7_9.1 binutils-2.15.92.0.2 passed
rac1 binutils-2.27-44.base.el7_9.1 binutils-2.15.92.0.2 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "gcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 gcc(x86_64)-4.8.5-44.el7 gcc(x86_64)-3.4.6 passed
rac1 gcc(x86_64)-4.8.5-44.el7 gcc(x86_64)-3.4.6 passed
Result: Package existence check passed for "gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libaio(x86_64)-0.3.109-13.el7 libaio(x86_64)-0.3.105 passed
rac1 libaio(x86_64)-0.3.109-13.el7 libaio(x86_64)-0.3.105 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc(x86_64)-2.17-325.el7_9 glibc(x86_64)-2.3.4-2.41 passed
rac1 glibc(x86_64)-2.17-325.el7_9 glibc(x86_64)-2.3.4-2.41 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 compat-libstdc++-33(x86_64)-3.2.3-72.el7 compat-libstdc++-33(x86_64)-3.2.3 passed
rac1 compat-libstdc++-33(x86_64)-3.2.3-72.el7 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "elfutils-libelf(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 elfutils-libelf(x86_64)-0.176-5.el7 elfutils-libelf(x86_64)-0.97 passed
rac1 elfutils-libelf(x86_64)-0.176-5.el7 elfutils-libelf(x86_64)-0.97 passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"
Check: Package existence for "elfutils-libelf-devel"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 elfutils-libelf-devel-0.176-5.el7 elfutils-libelf-devel-0.97 passed
rac1 elfutils-libelf-devel-0.176-5.el7 elfutils-libelf-devel-0.97 passed
Result: Package existence check passed for "elfutils-libelf-devel"
Check: Package existence for "glibc-common"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-common-2.17-325.el7_9 glibc-common-2.3.4 passed
rac1 glibc-common-2.17-325.el7_9 glibc-common-2.3.4 passed
Result: Package existence check passed for "glibc-common"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-devel(x86_64)-2.17-325.el7_9 glibc-devel(x86_64)-2.3.4 passed
rac1 glibc-devel(x86_64)-2.17-325.el7_9 glibc-devel(x86_64)-2.3.4 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "glibc-headers"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-headers-2.17-325.el7_9 glibc-headers-2.3.4 passed
rac1 glibc-headers-2.17-325.el7_9 glibc-headers-2.3.4 passed
Result: Package existence check passed for "glibc-headers"
Check: Package existence for "gcc-c++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 gcc-c++(x86_64)-4.8.5-44.el7 gcc-c++(x86_64)-3.4.6 passed
rac1 gcc-c++(x86_64)-4.8.5-44.el7 gcc-c++(x86_64)-3.4.6 passed
Result: Package existence check passed for "gcc-c++(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libaio-devel(x86_64)-0.3.109-13.el7 libaio-devel(x86_64)-0.3.105 passed
rac1 libaio-devel(x86_64)-0.3.109-13.el7 libaio-devel(x86_64)-0.3.105 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libgcc(x86_64)-4.8.5-44.el7 libgcc(x86_64)-3.4.6 passed
rac1 libgcc(x86_64)-4.8.5-44.el7 libgcc(x86_64)-3.4.6 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libstdc++(x86_64)-4.8.5-44.el7 libstdc++(x86_64)-3.4.6 passed
rac1 libstdc++(x86_64)-4.8.5-44.el7 libstdc++(x86_64)-3.4.6 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libstdc++-devel(x86_64)-4.8.5-44.el7 libstdc++-devel(x86_64)-3.4.6 passed
rac1 libstdc++-devel(x86_64)-4.8.5-44.el7 libstdc++-devel(x86_64)-3.4.6 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 sysstat-10.1.5-19.el7 sysstat-5.0.5 passed
rac1 sysstat-10.1.5-19.el7 sysstat-5.0.5 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "pdksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 missing pdksh-5.2.14 failed
rac1 missing pdksh-5.2.14 failed
Result: Package existence check failed for "pdksh"
Check: Package existence for "expat(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 expat(x86_64)-2.1.0-12.el7 expat(x86_64)-1.95.7 passed
rac1 expat(x86_64)-2.1.0-12.el7 expat(x86_64)-1.95.7 passed
Result: Package existence check passed for "expat(x86_64)"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed does not exist
rac1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 0022 0022 passed
rac1 0022 0022 passed
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "crhd0a.crc.hk" as found on node "rac2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Check: Time zone consistency
Result: Time zone consistency check passed
Fixup information has been generated for following node(s):
rac2,rac1
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.4.0_grid/runfixup.sh'
Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@rac1:/u01/app/grid]$
2. Launch the GUI
#Install VNC (on CentOS 7 the package is tigervnc-server)
yum -y install tigervnc-server
#Start VNC
vncserver
#If VNC is not available, Xmanager works as well
[root@rac1 ~]# su - grid
Point DISPLAY at the VNC/Xmanager session, then launch the installer:
[grid@rac1:/home/grid]$ export DISPLAY=10.10.161.60:1.0
[grid@rac1:/home/grid]$ cd /u01/app/soft/grid
[grid@rac1:/home/grid]$ ./runInstaller
3. Grid GUI installation
#Skip software updates
#Choose "Install and Configure Grid Infrastructure for a Cluster"
#Choose Advanced Installation
#Configure languages; add Simplified Chinese
#Set both the Cluster Name and SCAN Name to rac-scan and the SCAN port to 1526; untick "Configure GNS"
#Add node 2's public IP, set up and verify SSH connectivity, then click Next once it succeeds
#Confirm the network interfaces
#Choose ASM for shared storage
#Pick the disk redundancy level
Redundancy levels:
High: 3 copies
Normal: 2 copies
External: 1 copy (no ASM mirroring)
Note: select the right disks and double-check their permissions and paths.
#Set the management password: Tyysdwms0713
#Confirm the ASM management groups
#Choose the inventory location
#Choose the installation directories
#Run the prerequisite check
The check flags two kernel-parameter problems and one missing package (pdksh), which has to be installed by hand:
[root@rac1 rac.zip]# rpm -ivh --force pdksh-5.2.14-30.x86_64.rpm
warning: pdksh-5.2.14-30.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 73307de6: NOKEY
error: Failed dependencies:
pdksh conflicts with (installed) ksh-20120801-143.el7_9.x86_64
#The prerequisite check reported pdksh as missing, but rpm refuses because the installed ksh-20120801-143.el7_9.x86_64 conflicts with it. Force the install past the conflict with --force and --nodeps, then re-run the check (on both nodes):
[root@rac1 rac.zip]# rpm -ivh --force pdksh-5.2.14-30.x86_64.rpm --nodeps
warning: pdksh-5.2.14-30.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 73307de6: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:pdksh-5.2.14-30 ################################# [100%]
#Installation complete
For the kernel-parameter findings, any check marked fixable ("yes") can be repaired by choosing the second option and running the generated fixup script as root, then re-checking (on both nodes).
#Start the installation
#When the dialog pops up, run the scripts it shows as root: finish on node 1 first, then run them on the other node.
Running the script throws an error:
CRS-2101:The OLR was formatted using version 3.
Open another terminal and run:
/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null &
The script then continues and finishes with "successfully".
Note: this step creates the OCR and the voting disk. OCR holds the cluster configuration; the voting disk arbitrates node membership. In 10g they lived on raw devices; in 11g both can be stored in an ASM disk group.
#Once node 1 finishes, run the script on node 2
Other errors may appear too: 11g was really built for CentOS 6, while CentOS 7 is the usual home for 12c databases, so keep going.
Node 2 reports the same error: [client(345280)]CRS-2101:The OLR was formatted using version 3.
Apply the same workaround and continue:
/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null &
It then fails with "Disk Group OCR creation failed with the following message" - OCR disk group creation failed - so scan for the disks:
find / -name "*OCR*"    # locate the disk paths
--------------------------------------------------------------------------------
Oddly, the scan finds the disks even though the script claims it cannot. If the scan finds nothing but the disks definitely exist, reset the ownership on the disk devices: chown them to root first, then back to grid:asmadmin.
[grid@rac2:/home/grid]$ kfod disks=all status=true asm_diskstring='/dev/oracleasm/disks/*'
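The ownership bounce described above can be scripted. This is a hedged sketch against stand-in files under /tmp; on the real nodes the targets are the ASM disk devices matched by the asm_diskstring (e.g. /dev/oracleasm/disks/*), chown requires root, and the grid user and asmadmin group are the ones created earlier.

```shell
# Stand-in demo of the ownership reset; replace /tmp/asm_demo/disks/*
# with the actual devices (e.g. /dev/oracleasm/disks/*) on the cluster.
mkdir -p /tmp/asm_demo/disks
touch /tmp/asm_demo/disks/OCR1 /tmp/asm_demo/disks/OCR2 /tmp/asm_demo/disks/OCR3
# flip ownership to root, then back to grid:asmadmin
# (chown needs root; failures are ignored in this unprivileged demo)
chown root:root     /tmp/asm_demo/disks/* 2>/dev/null || true
chown grid:asmadmin /tmp/asm_demo/disks/* 2>/dev/null || true
# ASM disks should be readable/writable by owner and group only
chmod 660 /tmp/asm_demo/disks/*
ls -l /tmp/asm_demo/disks/
```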
If that still fails, reboot the server. After the reboot, start the cluster stack first (/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null &) and only then re-run root.sh; otherwise it complains that the cluster CSS is down when checking cluster status.
After the reboot, watch the log with less /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_rac2.log - it shows node 2 registering successfully.
#Click OK, then Next; the [INS-20802] error can be skipped and ignored (it is harmless)
Check the grid installation with: crs_stat -t
Check the ASM processes:
ps -ef|grep ASM
### Install the Database Software ###
1. Copy the Oracle software over, chmod it to 777 as root, then unzip it as the oracle user.
chmod 777 p13390677_112040_Linux-x86-64_*
2. Log in as the oracle user and set the DISPLAY variable
[oracle@rac1:/home/oracle]$export DISPLAY=10.10.161.60:1.0
[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
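Before launching the installer it is worth confirming the X connection actually works; a small hedged check (it assumes xdpyinfo is installed on the server and that an X/VNC server is listening at the exported address):

```shell
# Probe the X display before starting runInstaller; prints a verdict either way.
export DISPLAY=10.10.161.60:1.0   # assumed VNC/Xmanager address from the steps above
if xdpyinfo >/dev/null 2>&1; then
    echo "DISPLAY ok"
else
    echo "DISPLAY unreachable"
fi
```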
3. Unzip the two p13390677_112040_Linux-x86-64 zip files
4. Run the installer from the unpacked directory. The first attempt fails the prerequisite check:
Some requirement checks failed. You must fulfill these requirements before
continuing with the installation,
Continue? (y/n) [n]
[root@rac1 ~]# su - oracle
Last login: Mon Mar 28 11:02:25 CST 2022 on pts/0
[oracle@rac1:/home/oracle]$cd /u01/app/soft/database/
[oracle@rac1:/u01/app/soft/database]$./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 5888 MB Passed
Checking swap space: must be greater than 150 MB. Actual 16383 MB Passed
Checking monitor: must be configured to display at least 256 colors
>>> Could not execute auto check for display colors using command /usr/bin/xdpyinfo. Check if the DISPLAY variable is set. Failed <<<<
Some requirement checks failed. You must fulfill these requirements before
continuing with the installation,
Continue? (y/n) [n] ^C
##The installer says the DISPLAY variable is set incorrectly, so re-export it and retry:
[oracle@rac1:/u01/app/soft/database]$export DISPLAY=10.10.161.60:1.0
[oracle@rac1:/u01/app/soft/database]$./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 5888 MB Passed
Checking swap space: must be greater than 150 MB. Actual 16383 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2022-03-28_11-05-48AM. Please wait ...[oracle@rac1:/u01/app/soft/database]$
5. GUI installation
#Decline software updates
#Skip software updates
#Install database software only
#Click "SSH Connectivity", enter the password, then click "Setup" (once user equivalence is set up, test SSH both ways between node 1 and node 2)
#Choose the languages
#Keep the default Enterprise Edition
#Keep the default Oracle base and Oracle home directories
#Set the OSOPER group to oinstall; OSDBA defaults to dba
#With no DNS, the Task and SCAN checks both fail; both errors can be ignored - click Next.
#Start the installation
#"Error in invoking target 'agent nmhs' of makefile" appears; it usually hits around 86% (mine hit at 56%), but the message is the same
Fix:
Add the libnnz11 link flag to the makefile.
Edit $ORACLE_HOME/sysman/lib/ins_emagent.mk and change
$(MK_EMAGENT_NMECTL) to: $(MK_EMAGENT_NMECTL) -lnnz11
Back up the original file before editing:
[oracle@ysserver ~]$ cd $ORACLE_HOME/sysman/lib
[oracle@ysserver lib]$ cp ins_emagent.mk ins_emagent.mk.bak
[oracle@ysserver lib]$ vi ins_emagent.mk
In vi, search with /NMECTL and append -lnnz11 after the match
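The same edit can be done non-interactively with sed. A sketch against a stand-in copy of the makefile (the real file is $ORACLE_HOME/sysman/lib/ins_emagent.mk; the one-line stand-in here only illustrates the substitution):

```shell
# Work on a throwaway copy so the demo is self-contained.
mkdir -p /tmp/emagent_demo && cd /tmp/emagent_demo
printf '\t$(MK_EMAGENT_NMECTL)\n' > ins_emagent.mk   # stand-in for the real NMECTL line
cp ins_emagent.mk ins_emagent.mk.bak                 # always back up before editing
# append -lnnz11 to every $(MK_EMAGENT_NMECTL) invocation
sed -i 's/\$(MK_EMAGENT_NMECTL)/$(MK_EMAGENT_NMECTL) -lnnz11/g' ins_emagent.mk
grep 'NMECTL' ins_emagent.mk
```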
#Then click Retry
#Run the prompted script as root, then click OK
#When it finishes, check the listener status (on both nodes, as both oracle and grid)
### Create Disk Groups with ASMCA ###
1. Create the disk groups
#Grid is freshly installed, so the GUI can be launched directly:
asmca
#ASM already contains the OCR disk group; now create the DATA disk group for the database: open the [Disk Groups] tab and click [Create] in the lower left
2. When asmca launched its GUI, the window rendered misaligned
Fix:
#Find the java binary first
[root@rac1 bin]# which java
/usr/bin/java
#Follow the symlink
[root@rac1 bin]# ls -l /usr/bin/java
lrwxrwxrwx 1 root root 22 Jun  3  2021 /usr/bin/java -> /etc/alternatives/java
#Follow it one more level
[root@rac1 bin]# ls -l /etc/alternatives/java
lrwxrwxrwx 1 root root 73 Jun  3  2021 /etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64/jre/bin/java
[root@rac1 bin]# ls -l /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64/jre
#Edit the asmca script (back it up first)
[root@rac1 bin]# cp /u01/app/11.2.0/grid/bin/asmca /u01/app/11.2.0/grid/bin/asmca.bak
[root@rac1 bin]# vim /u01/app/11.2.0/grid/bin/asmca
#Find the JRE_DIR keyword and replace the path:
JRE_DIR=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64/jre
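If you prefer not to edit the wrapper by hand, the JRE_DIR line can be rewritten with sed. A sketch against a stand-in copy (the real script is /u01/app/11.2.0/grid/bin/asmca, the stand-in JRE_DIR value below is illustrative, and the OpenJDK path must match what the symlink chain above revealed on your node):

```shell
# Self-contained demo on a copy; point the script at the system OpenJDK JRE.
mkdir -p /tmp/asmca_demo && cd /tmp/asmca_demo
printf 'JRE_DIR=/u01/app/11.2.0/grid/jdk/jre\n' > asmca   # stand-in for the real wrapper line
cp asmca asmca.bak                                        # back up first
sed -i 's|^JRE_DIR=.*|JRE_DIR=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64/jre|' asmca
cat asmca
```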
3. Disk group options
Disk group names: DATA, ARCH
Redundancy: External
Disks selected: ORCL:DATA
Click [OK]
#Done
#Check with asmcmd; if the disk groups show up, you're good
### Create the Listener with netca ###
#Launch the GUI
#Listener configuration
#Name the listener
#Click Next
#Listener port 1526
#Choose not to configure another listener
#Next
#Finish
#Enter lsnrctl
#Check the listeners on both nodes
[grid@rac1:/home/grid]$lsnrctl
LSNRCTL> status RAC_LISTENER
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=RAC_LISTENER)))
STATUS of the LISTENER
------------------------
Alias RAC_LISTENER
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 29-MAR-2022 17:34:30
Uptime 0 days 0 hr. 2 min. 24 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/rac2/rac_listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=RAC_LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.161.68)(PORT=1526)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.161.168)(PORT=1526)))
The listener supports no services
The command completed successfully
LSNRCTL> exit
### Create the Database with DBCA ###
#With the listener and cluster healthy, start building the database
[root@rac1 ~]# su - oracle
Last login: Tue Mar 29 17:15:10 CST 2022 on pts/0
[oracle@rac1:/home/oracle]$export DISPLAY=10.10.161.60:1.0
[oracle@rac1:/home/oracle]$xhost +
access control disabled, clients can connect from any host
#Launch the dbca GUI
[oracle@rac1:/home/oracle]$dbca
#Choose Oracle RAC database
#Choose Create a Database
#Choose Custom Database
#Review the SID
#Choose admin-managed, set the instance name to orcl, and select all nodes
#Keep the defaults
#Set the oracle account passwords (all set to oracle here by default)
#Use ASM storage and pick the disk group (+DATA)
#Enter the ASMSNMP password (set to 123456 here)
#Skip the fast recovery area
#Keep the defaults, Next
#Configuration: set memory to 1G (1024 MB; size it according to the server's RAM), tick "Use Automatic Memory Management", leave processes at the default, and choose the ZHS16GBK character set.
#Max sessions: 1500
#The three tablespaces below are ones I created myself; if you are not migrating, just keep the defaults and click Next
#Create the database
#Confirm the disk locations, click OK, and the install starts
#After the install, check the cluster status
[grid@rac1:/home/grid]$crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ORC.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.RAC_LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.orcl.db
1 ONLINE ONLINE rac1 Open #db resource showing Open means it's healthy
2 ONLINE ONLINE rac2 Open #db resource showing Open means it's healthy
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
Installation complete. Verify by running a few statements in sqlplus. If one node errors but its db process is up, a shutdown abort followed by startup usually clears it.
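A quick hedged helper for eyeballing that status output: it prints any resource line whose STATE column differs from its TARGET, so empty output means healthy. The awk patterns assume the plain-text layout shown above; it is fed a captured sample here, and on the cluster you would pipe `crsctl stat res -t` into it.

```shell
# Print crsctl resource lines where STATE != TARGET; empty output means healthy.
crs_not_online() {
  awk '$1 ~ /^(ONLINE|OFFLINE)$/ && $1 != $2 { print }
       $1 ~ /^[0-9]+$/ && $2 ~ /^(ONLINE|OFFLINE)$/ && $2 != $3 { print }'
}
# demo against a captured sample; live use: crsctl stat res -t | crs_not_online
crs_not_online <<'EOF'
               ONLINE  ONLINE       rac1
      1        ONLINE  ONLINE       rac1
      1        ONLINE  OFFLINE      rac2
EOF
```

Note that an intentionally offline resource such as ora.gsd (TARGET and STATE both OFFLINE) is not flagged, which matches its expected state in 11.2.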