Installation
yum -y install epel-release
The EPEL repo files now appear:
[root@localhost yum.repos.d]# ls
CentOS-Base.repo CentOS-Media.repo epel.repo
CentOS-CR.repo CentOS-Sources.repo epel-testing.repo
CentOS-Debuginfo.repo CentOS-Vault.repo
CentOS-fasttrack.repo CentOS-x86_64-kernel.repo
yum -y install nginx
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

Modify the yum source:

vim /etc/yum.repos.d/nginx.repo

[nginx]
name=nginx.repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
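To confirm that the new repo is enabled and to see which nginx version it provides, the standard yum commands can be used:

yum repolist enabled | grep nginx
yum info nginx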
Start Nginx and enable it at boot
systemctl start nginx.service
systemctl enable nginx.service

Check the installed version:

nginx -v

Restart (nginx -s only accepts stop/quit/reload/reopen, so start again by running the binary, or use systemctl restart):

/usr/sbin/nginx -s stop
/usr/sbin/nginx
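To verify that nginx is actually running and serving requests, a quick check (assuming the default site still listens on port 80 and curl is available):

systemctl status nginx.service
curl -I http://127.0.0.1/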
Configuration files
Configuration file location:
/etc/nginx
[root@localhost yum.repos.d]# ls /etc/nginx
conf.d koi-utf scgi_params
default.d koi-win scgi_params.default
fastcgi.conf mime.types uwsgi_params
fastcgi.conf.default mime.types.default uwsgi_params.default
fastcgi_params nginx.conf win-utf
fastcgi_params.default nginx.conf.default
- View the main configuration file nginx.conf
- Find the default installation locations:
find / -name "*nginx*"
[root@localhost /]# find / -name "*nginx*"
/run/nginx.pid
/sys/fs/cgroup/systemd/system.slice/nginx.service
/etc/systemd/system/multi-user.target.wants/nginx.service
/etc/systemd/system/nginx.service.d
/etc/logrotate.d/nginx
/etc/nginx
/etc/nginx/nginx.conf
/etc/nginx/nginx.conf.default
/var/tmp/systemd-private-9fbd51e7c0714a37bc28f8f7aee49468-nginx.service-CmQIIT
/var/lib/yum/yumdb/n/f8a4655b1aa3ca9c0275be2d64ea287e99fc7217-nginx-filesystem-1.20.1-9.el7-noarch
/var/lib/yum/yumdb/n/85502cf4b35f5368569b0b62e550755496b72249-nginx-1.20.1-9.el7-x86_64
/var/lib/nginx
/var/log/nginx
/tmp/systemd-private-9fbd51e7c0714a37bc28f8f7aee49468-nginx.service-szPRoy
/usr/bin/nginx-upgrade
/usr/sbin/nginx
/usr/lib/systemd/system/nginx.service
/usr/lib/systemd/system/nginx.service.d
/usr/lib64/nginx
/usr/share/doc/nginx-1.20.1
/usr/share/licenses/nginx-1.20.1
/usr/share/man/man3/nginx.3pm.gz
/usr/share/man/man8/nginx-upgrade.8.gz
/usr/share/man/man8/nginx.8.gz
/usr/share/nginx
/usr/share/nginx/html/nginx-logo.png
/usr/share/vim/vimfiles/ftdetect/nginx.vim
/usr/share/vim/vimfiles/ftplugin/nginx.vim
/usr/share/vim/vimfiles/indent/nginx.vim
/usr/share/vim/vimfiles/syntax/nginx.vim
[root@localhost /]# cat /usr/lib/systemd/system/nginx.service
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/bin/rm -f /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
KillSignal=SIGQUIT
TimeoutStopSec=5
KillMode=process
PrivateTmp=true
[Install]
WantedBy=multi-user.target
We can see that the binary is installed at /usr/sbin/nginx.
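The installed binary can also report its build-time paths and validate the configuration; -V and -t are standard nginx options:

/usr/sbin/nginx -V
/usr/sbin/nginx -t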
Configuration file walkthrough
[root@localhost /]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
use epoll;
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
client_header_buffer_size 1k;
large_client_header_buffers 4 4k;
include /etc/nginx/conf.d/*.conf;
upstream myserver {
server 192.168.31.10:8080 weight=3;
server 192.168.31.11:8080 weight=3;
server 192.168.31.12:8080 weight=3;
}
server {
listen 80;
listen [::]:80;
server_name _;
root /usr/share/nginx/html;
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
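After editing nginx.conf, the usual workflow is to validate the file and then reload without dropping connections (the reload goes through the ExecReload command defined in the unit file above):

nginx -t
systemctl reload nginx.service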
Variable descriptions
| Variable | Meaning |
|---|---|
| $remote_addr | Records the client's IP address |
| $http_x_forwarded_for | Records the client's IP address as passed along by proxies in front (the X-Forwarded-For header) |
| $remote_user | Records the client user name |
| $time_local | Records the access time and time zone |
| $request | Records the requested URL and the request protocol (http/https) |
| $status | Records the response status, e.g. 200 on success |
| $body_bytes_sent | Records the size of the response body sent to the client |
| $http_referer | Records the page the request was linked from |
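To illustrate how these variables are combined, here is a sketch of an additional log format attached to one server; the format name vhost_log, the server_name and the log path are made up for this example:

# both directives live inside the http {} block; log_format must be declared at http level
log_format vhost_log '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer"';

server {
    listen 80;
    server_name log.example.local;
    root /usr/share/nginx/html;
    access_log /var/log/nginx/vhost.access.log vhost_log;
}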
Name-based virtual hosts in nginx
server {
listen 80;
listen [::]:80;
server_name myserver my.server;
location / {
root /nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
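Without DNS records for myserver / my.server, name-based routing can still be tested by sending the Host header explicitly; a minimal check, assuming nginx runs on the local machine:

curl -H "Host: myserver" http://127.0.0.1/
curl -H "Host: my.server" http://127.0.0.1/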
Reverse proxy
A forward proxy is a proxy for the client: for example, to reach www.baidu.com the client sends its request to our proxy (say nginx), nginx forwards the request to www.baidu.com, and then returns the result to the client. A reverse proxy is a proxy for the server: the client accesses the nginx server at 192.168.31.10/search, but the nginx server actually requests www.baidu.com/search and returns the result to the client.
| Directive | Description |
|---|---|
| proxy_pass | Address of the real web/app server |
| proxy_redirect | If the real server returns its own IP and a non-default port in redirects, rewrite them to the externally visible address and default port |
| proxy_set_header | Redefine or add request headers sent to the backend server |
| proxy_set_header X-Real-IP | Pass the client's real address (if not set, the backend logs only show the proxy's address, not the original client) |
| proxy_set_header X-Forwarded-For | Record the chain of proxy addresses |
| proxy_connect_timeout | Timeout for establishing the connection (TCP three-way handshake) to the backend |
| proxy_send_timeout | Timeout for transmitting the request to the backend; the transfer must make progress within this time |
| proxy_read_timeout | Timeout for reading the upstream (real) server's response, default 60s; if no byte is received within 60 consecutive seconds, the connection is closed |
| proxy_buffering on | Enable buffering of backend responses |
| proxy_buffer_size | Buffer size for the response headers only |
| proxy_buffers 4 128k | Buffers for the response body, usually set fairly large; a single buffer is normally one memory page (often 4k); the directive takes a number and a size, total = number * size |
| proxy_busy_buffers_size 256k | Part of proxy_buffers reserved for sending data to the client while the rest of the response is still being read |
| proxy_max_temp_file_size 256k | Maximum size of the temporary file used when a response does not fit into the buffers |
server {
listen 80;
server_name www.mytest.com;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
location / {
proxy_pass http://192.168.31.11:80;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-Proto $scheme;
proxy_set_header X-Nginx-Proxy true;
proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffering on;
proxy_buffer_size 32k;
proxy_buffers 4 128k;
proxy_busy_buffers_size 256k;
proxy_max_temp_file_size 256k;
}
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
Nginx load balancing
Building on the reverse proxy: when we need to relieve the load on a server, we reverse-proxy to multiple servers so that the pressure on any single server is reduced.
upstream webapp {
server 192.168.31.11:80;
server 192.168.31.12:80;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://webapp;
}
}
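With no extra parameters, nginx distributes requests round-robin across the servers listed in the upstream block; the variants below adjust that behavior.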
upstream webapp {
server 192.168.31.11:80;
server 192.168.31.12:80 backup; # hot standby, only used when the other servers are unavailable
}
upstream webapp {
server 192.168.31.11:80 weight=1;
server 192.168.31.12:80 weight=2;
}
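With these weights, roughly one request in three goes to 192.168.31.11 and two in three go to 192.168.31.12.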
- ip_hash: requests are routed by hashing the client IP, so requests from the same IP always reach the same server
upstream webapp {
ip_hash;
server 192.168.31.11:80 ;
server 192.168.31.12:80 ;
}
Nginx session persistence (the same IP keeps reaching the same server)
upstream webapp {
ip_hash;
server 192.168.31.11:80;
server 192.168.31.12:80;
}
ip_hash uses a source-address hashing algorithm, so requests from the same client are always sent to the same server. Its limitations:
- when that backend server goes down, it becomes unavailable to the clients hashed to it
- it is not suitable when there is another proxy in front of nginx; only a single proxy layer works, because with two layers nginx only sees the front proxy's address
- clients on the same LAN (behind the same public address) are all sent to the same server
- sticky_cookie_insert
sticky_cookie_insert enables session affinity, which causes requests from the same client to be passed to the same server within the group. Unlike ip_hash, it identifies the client by a cookie rather than by IP address, which avoids the situation above where clients from the same LAN lose load balancing under ip_hash.
upstream webapp {
server 192.168.31.11:80;
server 192.168.31.12:80;
sticky_cookie_insert srv_id expires=1h domain=3evip.cn path=/;
}
Notes: expires: how long the browser keeps the cookie; domain: the domain the cookie is defined for; path: the path defined for the cookie.