Highly Available nginx Reverse Proxy
An introduction to nginx reverse proxying
A proxy server is an intermediate server that sits between the client and the origin server. To fetch content from the origin server, the client sends a request to the proxy, which forwards the request to the origin server and then returns the origin server's response to the client. A forward proxy acts on behalf of the client, so the client is normally aware of its existence; circumventing network restrictions is a classic use of a forward proxy.
A reverse proxy server sits on the origin-server side. It accepts requests from the Internet, forwards them to servers on the internal network, and returns the results fetched from those internal servers to the clients on the Internet. A reverse proxy acts on behalf of the server side, so clients are unaware of it; only the back-end servers know it exists.
What a forward proxy does
- Reach resources that were otherwise inaccessible
- Act as a cache to speed up access
- Authorize client access, authenticating users before they go online
- Log user activity (usage auditing) while hiding user details from the outside
What a reverse proxy does
- Protect the internal network
- Balance load across servers
- Cache content, reducing the load on the back-end servers
What nginx does here
1. Reverse proxying: presents multiple servers to clients as a single server
2. Load balancing: spreads requests evenly across several servers, lowering the load on each and raising overall throughput
3. Static/dynamic separation: nginx can act as a cache server for static files, speeding up access
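A minimal sketch of the static/dynamic separation mentioned in point 3 (the `backend` upstream name and the asset extensions are illustrative assumptions, not part of the setup below):

```nginx
# Hypothetical sketch: serve static assets directly, proxy everything else
upstream backend {
    server 192.168.171.142;
}
server {
    listen 80;
    # Static assets served (and client-cached for 7 days) straight from disk
    location ~* \.(css|js|png|jpg)$ {
        root /usr/share/nginx/html;
        expires 7d;
    }
    # Dynamic requests handed to the back end
    location / {
        proxy_pass http://backend;
    }
}
```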
Configuring the nginx reverse proxy
Environment:

OS | IP | Service | Hostname
---|---|---|---
centos8 | 192.168.171.133 | nginx (load-balancing director) | localhost
centos8 | 192.168.171.142 | nginx (web service) | RS1
centos8 | 192.168.171.141 | apache (web service) | RS2
RS1 configuration
//Disable the firewall and SELinux
[root@RS1 ~]
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@RS1 ~]
[root@RS1 ~]
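On CentOS 8 this step is presumably `systemctl disable --now firewalld` (which produces the `Removed …` lines above), plus `setenforce 0` and a `sed` edit of `/etc/selinux/config` to make the change persistent. The `sed` edit can be sketched harmlessly against a temp copy:

```shell
# Sketch of the persistent SELinux change, applied to a temp stand-in
# for /etc/selinux/config rather than the real file
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$cfg"
result=$(grep '^SELINUX=' "$cfg")
echo "$result"   # SELINUX=disabled
rm -f "$cfg"
```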
//Install nginx as the web service
[root@RS1 ~]
//Set up the test site
[root@RS1 ~]
[root@RS1 html]
404.html 50x.html index.html nginx-logo.png poweredby.png
[root@RS1 html]
[root@RS1 html]
[root@RS1 html]
[root@RS1 html]
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
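The test-site step presumably writes a distinguishing string into nginx's web root so the two back ends can be told apart in the curl tests later. A sketch using a temp directory standing in for `/usr/share/nginx/html`:

```shell
# Stand-in for the real web root /usr/share/nginx/html
webroot=$(mktemp -d)
echo nginx > "$webroot/index.html"
page=$(cat "$webroot/index.html")
echo "$page"   # nginx
rm -rf "$webroot"
```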
RS2 configuration
//Disable the firewall and SELinux
[root@RS2 ~]
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@RS2 ~]
[root@RS2 ~]
//Install apache as the web service
[root@RS2 ~]
//Set up the test page
[root@RS2 ~]
[root@RS2 ~]
[root@RS2 ~]
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@RS2 ~]
LISTEN 0 128 *:80 *:*
Load-balancing director configuration
//Disable the firewall and SELinux
[root@localhost ~]
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]
[root@localhost ~]
//Install nginx to act as the reverse proxy.
[root@localhost ~]
[root@localhost ~]
[root@localhost nginx]
//Configure the reverse proxy
upstream webserver {
server 192.168.171.141;
server 192.168.171.142;
}
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
location / {
proxy_pass http://webserver;
}
}
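By default an `upstream` block does round-robin with equal weights, which is why the access test alternates between apache and nginx. A hedged sketch of common variations (the `192.168.171.143` backup host is hypothetical, not part of this environment):

```nginx
# Hypothetical variation: weighted round-robin plus a hot spare
upstream webserver {
    server 192.168.171.141 weight=2;   # receives roughly 2 of every 3 requests
    server 192.168.171.142 weight=1;
    server 192.168.171.143 backup;     # only used when the others are down
}
```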
[root@localhost ~]
Access test
[root@localhost ~]
apache
[root@localhost ~]
nginx
[root@localhost ~]
apache
[root@localhost ~]
nginx
[root@localhost ~]
apache
Highly available nginx reverse proxy
Environment:

OS | IP | Service | Hostname
---|---|---|---
centos8 | 192.168.171.133 | nginx (load-balancing director, keepalived) | KD1
centos8 | 192.168.171.142 | nginx (web service) | RS1
centos8 | 192.168.171.141 | apache (web service) | RS2
centos8 | 192.168.171.150 | nginx (load-balancing director, keepalived) | KD2

RS1 and RS2 keep the same configuration as above.
Virtual IP (VIP): 192.168.171.250
Configure KD1
//Install the high-availability service.
[root@KD1 ~]
//Edit the keepalived configuration file
[root@KD1 ~]
[root@KD1 keepalived]
keepalived.conf
[root@KD1 keepalived]
[root@KD1 keepalived]
keepalived.conf-bek
[root@KD1 keepalived]
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass nuanchun
}
virtual_ipaddress {
192.168.171.250
}
}
virtual_server 192.168.171.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.171.133 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.171.150 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
//Restart the service and check the IP addresses
[root@KD1 keepalived]
[root@KD1 keepalived]
[root@KD1 keepalived]
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 192.168.171.133/24 brd 192.168.171.255 scope global noprefixroute ens33
inet 192.168.171.250/32 scope global ens33
Configure KD2
//Disable the firewall and SELinux
[root@KD2 ~]
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@KD2 ~]
[root@KD2 ~]
//KD2's nginx configuration must be identical to KD1's for high availability to work
[root@KD2 ~]
//Copy nginx.conf from KD1 to KD2
[root@KD1 nginx]
The authenticity of host '192.168.171.150 (192.168.171.150)' can't be established.
ECDSA key fingerprint is SHA256:b2+ErORHLlANCY23XTlkC8uzQ6KKscSXnc5aIAK80dI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.171.150' (ECDSA) to the list of known hosts.
root@192.168.171.150's password:
nginx.conf 100% 2529 1.5MB/s 00:00
[root@KD1 nginx]
//Check the configuration file
[root@KD2 ~]
upstream webserver {
server 192.168.171.141;
server 192.168.171.142;
}
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
location / {
proxy_pass http://webserver;
}
}
//Install the keepalived high-availability service
[root@KD2 ~]
//Back up the keepalived configuration file as well
[root@KD2 ~]
[root@KD2 keepalived]
keepalived.conf
[root@KD2 keepalived]
[root@KD2 keepalived]
keepalived.conf-bek
//Then scp the keepalived configuration file over from KD1
[root@KD1 keepalived]
root@192.168.171.150's password:
keepalived.conf 100% 870 556.4KB/s 00:00
[root@KD2 keepalived]
! Configuration File for keepalived
global_defs {
router_id lb02
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass nuanchun
}
virtual_ipaddress {
192.168.171.250
}
}
virtual_server 192.168.171.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.171.133 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.171.150 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@KD2 keepalived]
[root@KD2 keepalived]
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2022-10-17 04:54:18 EDT; 1min 26s ago
Process: 79776 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 79777 (keepalived)
Tasks: 3 (limit: 23460)
Memory: 2.0M
CGroup: /system.slice/keepalived.service
Access test
//Access via the virtual IP
[root@KD1 nginx]
[root@KD1 keepalived]
[root@KD1 keepalived]
nginx
[root@KD1 keepalived]
apache
[root@KD1 keepalived]
nginx
[root@KD1 keepalived]
apache
//Simulate the KD1 host dying and check whether the backup director takes over as master
[root@KD1 keepalived]
[root@KD1 keepalived]
[root@KD1 keepalived]
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@KD1 keepalived]
//Check on the backup host whether the VIP has moved over; it is now up there.
[root@KD2 keepalived]
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 192.168.171.150/24 brd 192.168.171.255 scope global noprefixroute ens33
inet 192.168.171.250/32 scope global ens33
[root@KD2 keepalived]
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
//Try accessing from KD2
[root@KD2 keepalived]
apache
[root@KD2 keepalived]
nginx
[root@KD2 keepalived]
apache
[root@KD2 keepalived]
nginx
Automatic master/backup failover
keepalived monitors the state of the nginx load balancer through a script
Write the scripts on KD1
//Create a directory to hold the monitoring scripts
[root@KD1 ~]
[root@KD1 ~]
anaconda-ks.cfg scripts
[root@KD1 ~]
[root@KD1 scripts]
[root@KD1 scripts]
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ "$nginx_status" -lt 1 ];then
systemctl stop keepalived
fi
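The pipeline counts running nginx processes while filtering out the grep itself; if the count drops below 1, the script stops keepalived, which releases the VIP to the backup. Fed a captured `ps -ef` sample, the counting step behaves like this (the sample lines are made up for illustration):

```shell
# Three sample ps -ef lines: an nginx master, an nginx worker, and the grep itself
sample='root 100 1 0 10:00 ? 00:00:00 nginx: master process
nginx 101 100 0 10:00 ? 00:00:00 nginx: worker process
root 102 99 0 10:01 pts/0 00:00:00 grep nginx'
count=$(printf '%s\n' "$sample" | grep -Ev 'grep' | grep -c '\bnginx\b')
echo "$count"   # 2: the grep line is filtered out before counting
```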
[root@KD1 scripts]
[root@KD1 scripts]
[root@KD1 scripts]
#!/bin/bash
VIP=$2
case "$1" in
master)
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
systemctl start nginx
fi
sendmail
;;
backup)
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -gt 0 ];then
systemctl stop nginx
fi
;;
*)
echo "Usage:$0 master|backup VIP"
;;
esac
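keepalived invokes this script as `notify.sh master 192.168.171.250` or `notify.sh backup 192.168.171.250` on state transitions. The case dispatch can be exercised without touching systemd by stubbing the side effects (a sketch, with echoes in place of the `systemctl` calls):

```shell
# Stub of the notify dispatch: echoes instead of systemctl start/stop
notify() {
  case "$1" in
    master) echo "became MASTER: ensure nginx is running, VIP $2" ;;
    backup) echo "became BACKUP: stop nginx, VIP $2" ;;
    *)      echo "Usage: notify master|backup VIP" ;;
  esac
}
m=$(notify master 192.168.171.250)
b=$(notify backup 192.168.171.250)
echo "$m"
echo "$b"
```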
[root@KD1 scripts]
//Reference the scripts in the configuration file
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_script nginx_check {
script "/scripts/check_n.sh"
interval 1
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass nuanchun
}
virtual_ipaddress {
192.168.171.250
}
track_script {
nginx_check
}
notify_master "/scripts/notify.sh master 192.168.171.250"
notify_backup "/scripts/notify.sh backup 192.168.171.250"
}
virtual_server 192.168.171.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.171.133 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.171.150 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@KD1 scripts]
[root@KD1 scripts]
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 192.168.171.133/24 brd 192.168.171.255 scope global noprefixroute ens33
inet 192.168.171.250/32 scope global ens33
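With `weight -20`, a non-zero exit from the tracked script lowers KD1's effective VRRP priority from 100 to 80, below KD2's 90, so the VIP can move even without keepalived itself being stopped. The arithmetic (values taken from the two configurations):

```shell
# Effective VRRP priority of the MASTER when the track script fails
master_prio=100; backup_prio=90; weight=-20
effective=$((master_prio + weight))
echo "$effective"   # 80
if [ "$effective" -lt "$backup_prio" ]; then echo "VIP fails over to BACKUP"; fi
```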
Configure KD2
//First create a directory to hold the scripts
[root@KD2 ~]
[root@KD2 ~]
//scp the scripts over from KD1
[root@KD1 scripts]
root@192.168.171.150's password:
notify.sh 100% 451 247.4KB/s 00:00
[root@KD2 srcipts]
notify.sh
[root@KD2 srcipts]
//On the backup KD2, configure the configuration file that references the script.
[root@KD2 srcipts]
! Configuration File for keepalived
global_defs {
router_id lb02
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass nuanchun
}
virtual_ipaddress {
192.168.171.250
}
notify_master "/scripts/notify.sh master 192.168.171.250"
notify_backup "/scripts/notify.sh backup 192.168.171.250"
}
virtual_server 192.168.171.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.171.133 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.171.150 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@KD2 srcipts]
Access test
//At this point the VIP and port 80 are both up on KD1; stop nginx on KD1 to simulate a failure
[root@KD1 ~]
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 192.168.171.133/24 brd 192.168.171.255 scope global noprefixroute ens33
inet 192.168.171.250/32 scope global ens33
[root@KD1 ~]
[root@KD1 ~]
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
[root@KD1 ~]
[root@KD1 ~]
[root@KD1 ~]
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Tue 2022-10-18 05:11:21 EDT; 943ms ago
Process: 112308 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
Process: 238678 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 238682 (code=exited, status=0/SUCCESS)
//Check on KD2 whether the VIP and port 80 have come up
[root@KD2 srcipts]
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 192.168.171.150/24 brd 192.168.171.255 scope global noprefixroute ens33
inet 192.168.171.250/32 scope global ens33
[root@KD2 srcipts]
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
[root@KD2 srcipts]
[root@KD2 srcipts]
apache
[root@KD2 srcipts]
nginx
[root@KD2 srcipts]
apache
[root@KD2 srcipts]
nginx
[root@KD2 srcipts]