Configuring a Server Development Environment in a Virtual Machine
Updated continuously as I learn.
Configuring the Virtual Machine Server
Configuring the VM Network
Configure the VM network as shown in the screenshots; look up the VM network modes online if they are unfamiliar.
Configure the VM network mode along with its subnet IP, subnet mask, and DHCP settings.
Configure NAT with a gateway IP, or set up port mappings to the installed system's IP.
CentOS Network Configuration in the VM
-
Enter the directory: cd /etc/sysconfig/network-scripts/
-rw-r--r--. 1 root root 381 2月 7 21:30 ifcfg-ens33 # files starting with ifcfg-ens are usually NIC configuration files; how many you see depends on your network devices
-rw-r--r--. 1 root root 254 5月 22 2020 ifcfg-lo
lrwxrwxrwx. 1 root root 24 2月 7 19:28 ifdown -> ../../../usr/sbin/ifdown
-rwxr-xr-x. 1 root root 654 5月 22 2020 ifdown-bnep
-rwxr-xr-x. 1 root root 6532 5月 22 2020 ifdown-eth
-rwxr-xr-x. 1 root root 781 5月 22 2020 ifdown-ippp
-rwxr-xr-x. 1 root root 4540 5月 22 2020 ifdown-ipv6
lrwxrwxrwx. 1 root root 11 2月 7 19:28 ifdown-isdn -> ifdown-ippp
-rwxr-xr-x. 1 root root 2130 5月 22 2020 ifdown-post
-rwxr-xr-x. 1 root root 1068 5月 22 2020 ifdown-ppp
-rwxr-xr-x. 1 root root 870 5月 22 2020 ifdown-routes
-rwxr-xr-x. 1 root root 1456 5月 22 2020 ifdown-sit
-rwxr-xr-x. 1 root root 1621 12月 9 2018 ifdown-Team
-rwxr-xr-x. 1 root root 1556 12月 9 2018 ifdown-TeamPort
-rwxr-xr-x. 1 root root 1462 5月 22 2020 ifdown-tunnel
lrwxrwxrwx. 1 root root 22 2月 7 19:28 ifup -> ../../../usr/sbin/ifup
-rwxr-xr-x. 1 root root 12415 5月 22 2020 ifup-aliases
-rwxr-xr-x. 1 root root 910 5月 22 2020 ifup-bnep
-rwxr-xr-x. 1 root root 13758 5月 22 2020 ifup-eth
-rwxr-xr-x. 1 root root 12075 5月 22 2020 ifup-ippp
-rwxr-xr-x. 1 root root 11893 5月 22 2020 ifup-ipv6
lrwxrwxrwx. 1 root root 9 2月 7 19:28 ifup-isdn -> ifup-ippp
-rwxr-xr-x. 1 root root 650 5月 22 2020 ifup-plip
-rwxr-xr-x. 1 root root 1064 5月 22 2020 ifup-plusb
-rwxr-xr-x. 1 root root 4997 5月 22 2020 ifup-post
-rwxr-xr-x. 1 root root 4154 5月 22 2020 ifup-ppp
-rwxr-xr-x. 1 root root 2001 5月 22 2020 ifup-routes
-rwxr-xr-x. 1 root root 3303 5月 22 2020 ifup-sit
-rwxr-xr-x. 1 root root 1755 12月 9 2018 ifup-Team
-rwxr-xr-x. 1 root root 1876 12月 9 2018 ifup-TeamPort
-rwxr-xr-x. 1 root root 2780 5月 22 2020 ifup-tunnel
-rwxr-xr-x. 1 root root 1836 5月 22 2020 ifup-wireless
-rwxr-xr-x. 1 root root 5419 5月 22 2020 init.ipv6-global
-rw-r--r--. 1 root root 20678 5月 22 2020 network-functions
-rw-r--r--. 1 root root 30988 5月 22 2020 network-functions-ipv6
-
Edit the network configuration file: vi ifcfg-ens33
Match the settings to the VM network configuration above:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=d163b238-24cb-4c77-a59e-6ef020159add
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.56.10
NETMASK=255.255.255.0
GATEWAY=192.168.56.100
DNS1=192.168.56.100
DNS2=8.8.8.8
-
Apply the configuration: systemctl restart network
service network restart
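The static settings above can also be generated from a couple of variables, which avoids typos when repeating the setup on a new VM. A minimal sketch; it writes to the current directory for safety, and only the essential keys are emitted:

```shell
# Sketch: generate a minimal static-IP config from variables.
# OUT targets the current directory here for safety; on the VM point it at
# /etc/sysconfig/network-scripts/ifcfg-ens33.
OUT="./ifcfg-ens33"
IPADDR="192.168.56.10"
GATEWAY="192.168.56.100"

cat > "$OUT" <<EOF
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=$IPADDR
NETMASK=255.255.255.0
GATEWAY=$GATEWAY
DNS1=$GATEWAY
DNS2=8.8.8.8
EOF

echo "wrote $OUT"
```

After copying the file into place, apply it with `systemctl restart network` as above.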
-
Test connectivity: ping www.baidu.com
[root@localhost network-scripts]# ping www.baidu.com
PING www.a.shifen.com (110.242.68.3) 56(84) bytes of data.
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=1 ttl=128 time=32.8 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=2 ttl=128 time=30.8 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=3 ttl=128 time=31.3 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=4 ttl=128 time=29.1 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=5 ttl=128 time=31.2 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=6 ttl=128 time=27.4 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=7 ttl=128 time=33.0 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=8 ttl=128 time=35.7 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=9 ttl=128 time=29.1 ms
64 bytes from 110.242.68.3 (110.242.68.3): icmp_seq=10 ttl=128 time=28.9 ms
^C
--- www.a.shifen.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9016ms
rtt min/avg/max/mdev = 27.492/30.978/35.795/2.329 ms
CentOS Development Environment in the VM
Installing and Configuring SSH
-
Check whether SSH is installed: yum list installed | grep openssh-server
rpm -qa | grep ssh
-
Install SSH (if not already installed): yum install openssh-server
-
Edit the SSH configuration file: vi /etc/ssh/sshd_config
# Port SSH listens on
Port 22
# Addresses SSH listens on
ListenAddress 0.0.0.0
ListenAddress ::
# Allow public key authentication
PubkeyAuthentication yes
# File holding the authorized keys
AuthorizedKeysFile .ssh/authorized_keys
# Allow root login
PermitRootLogin yes
# Allow password authentication
PasswordAuthentication yes
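After editing, the directives above can be sanity-checked with a small helper before restarting sshd. A sketch (the function name is mine, not an OpenSSH tool); the path defaults to /etc/ssh/sshd_config but can be overridden:

```shell
# Sketch: verify that the sshd directives above are present in a config file.
check_sshd_cfg() {
    cfg="${1:-/etc/ssh/sshd_config}"
    missing=0
    for want in "Port 22" "PubkeyAuthentication yes" \
                "PermitRootLogin yes" "PasswordAuthentication yes"; do
        # uncommented directives start at column 0
        grep -q "^$want" "$cfg" 2>/dev/null || { echo "missing: $want"; missing=1; }
    done
    [ "$missing" -eq 0 ] && echo "all directives present"
    return "$missing"
}

# Usage on the VM: check_sshd_cfg && systemctl restart sshd
```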
-
Check whether SSH is running: ps -e | grep sshd
netstat -an | grep 22
systemctl status sshd.service
-
Start or restart the SSH service: service sshd restart
service sshd start
-
Enable SSH at boot: systemctl enable sshd
systemctl list-unit-files |grep ssh
systemctl stop sshd
systemctl disable sshd
-
Open port 22 in the firewall: firewall-cmd --zone=public --add-port=22/tcp --permanent
--zone=public: the zone the rule belongs to
--add-port=22/tcp: the port number and protocol to open
--permanent: keep the rule across reloads and reboots
firewall-cmd --reload
firewall-cmd --zone=public --query-port=22/tcp
firewall-cmd --list-ports
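The add/reload/query pattern above repeats for every port, so it can be wrapped in a tiny helper. A sketch; the FIREWALL_CMD override is my addition so the helper can be pointed at another binary for testing:

```shell
# Sketch: open a TCP port, reload the firewall, then verify, in one call.
# FIREWALL_CMD is an assumption (defaults to firewall-cmd) added so the
# helper can be redirected when firewalld is not available.
open_tcp_port() {
    fw="${FIREWALL_CMD:-firewall-cmd}"
    "$fw" --zone=public --add-port="$1/tcp" --permanent \
        && "$fw" --reload \
        && "$fw" --zone=public --query-port="$1/tcp"
}

# Usage on the VM: open_tcp_port 22
```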
Switching to Faster Yum Mirrors
-
Install wget: yum install wget -y
-
Back up the original repo file: mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
-
Download the Aliyun repo files: wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
-
Clean and rebuild the yum cache: yum clean all && yum makecache
已加载插件:fastestmirror
正在清理软件源: base epel extras updates
Cleaning up list of fastest mirrors
已加载插件:fastestmirror
Determining fastest mirrors
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
base | 3.6 kB 00:00:00
epel | 4.7 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
base/7/x86_64/primary_db FAILED
http://mirrors.cloud.aliyuncs.com/centos/7/os/x86_64/repodata/6d0c3a488c282fe537794b5946b01e28c7f44db79097bb06826e1c0c88bad5ef-primary.sqlite.bz2: [Errno 14] curl#6 - "Could not resolve host: mirrors.cloud.aliyuncs.com; Unknown error"
正在尝试其它镜像。
(1/16): epel/x86_64/group_gz | 96 kB 00:00:00
(2/16): base/7/x86_64/group_gz | 153 kB 00:00:00
(3/16): epel/x86_64/updateinfo | 1.1 MB 00:00:02
(4/16): epel/x86_64/prestodelta | 862 B 00:00:00
(5/16): base/7/x86_64/other_db | 2.6 MB 00:00:06
(6/16): epel/x86_64/primary_db | 7.0 MB 00:00:14
base/7/x86_64/filelists_db FAILED ] 928 kB/s | 21 MB 00:00:50 ETA
http://mirrors.aliyuncs.com/centos/7/os/x86_64/repodata/d6d94c7d406fe7ad4902a97104b39a0d8299451832a97f31d71653ba982c955b-filelists.sqlite.bz2: [Errno 14] curl#7 - "Failed connect to mirrors.aliyuncs.com:80; Connection refused"
正在尝试其它镜像。
(7/16): extras/7/x86_64/other_db | 148 kB 00:00:00
(8/16): epel/x86_64/other_db | 3.4 MB 00:00:07
(9/16): epel/x86_64/filelists_db | 12 MB 00:00:28
extras/7/x86_64/primary_db FAILED ============= ] 995 kB/s | 40 MB 00:00:26 ETA
http://mirrors.aliyuncs.com/centos/7/extras/x86_64/repodata/68cf05df72aa885646387a4bd332a8ad72d4c97ea16d988a83418c04e2382060-primary.sqlite.bz2: [Errno 14] curl#7 - "Failed connect to mirrors.aliyuncs.com:80; Connection refused"
正在尝试其它镜像。
extras/7/x86_64/filelists_db FAILED
http://mirrors.aliyuncs.com/centos/7/extras/x86_64/repodata/ceff3d07ce71906c0f0372ad5b4e82ba2220030949b032d7e63b7afd39d6258e-filelists.sqlite.bz2: [Errno 14] curl#7 - "Failed connect to mirrors.aliyuncs.com:80; Connection refused"
正在尝试其它镜像。
(10/16): updates/7/x86_64/filelists_db | 9.1 MB 00:00:19
(11/16): extras/7/x86_64/primary_db | 247 kB 00:00:00
(12/16): extras/7/x86_64/filelists_db | 277 kB 00:00:00
(13/16): updates/7/x86_64/other_db | 1.1 MB 00:00:02
(14/16): base/7/x86_64/primary_db | 6.1 MB 00:00:13
(15/16): base/7/x86_64/filelists_db | 7.2 MB 00:00:15
(16/16): updates/7/x86_64/primary_db | 16 MB 00:00:35
元数据缓存已建立
Installing Docker and Deploying Containers
1. Installing Docker
-
Install via the official convenience script: curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh
-
Enable and start Docker: sudo systemctl enable docker.service && sudo systemctl enable containerd.service
sudo systemctl start docker
-
Check that Docker is running: docker ps
2. Installing Docker Compose
-
Download docker-compose: sudo curl -L "https://github.com/docker/compose/releases/download/v2.6.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
-
Make docker-compose executable: sudo chmod +x /usr/local/bin/docker-compose
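For reference, the release URL is just a pinned version tag plus the kernel and architecture names from uname (note that the GitHub release tags carry a leading v). A sketch:

```shell
# Sketch: assemble the docker-compose download URL for this machine.
# COMPOSE_VERSION is pinned here; check the releases page for newer tags.
COMPOSE_VERSION="v2.6.1"
COMPOSE_URL="https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
echo "$COMPOSE_URL"
```

On a typical x86-64 Linux VM this expands to a .../docker-compose-Linux-x86_64 URL.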
3. Installing Docker Applications
Shared section of docker-compose.yml:
version: '2.4'
networks:
  app_start:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
          ip_range: 172.18.0.0/24
          gateway: 172.18.0.1
services:
Run the docker-compose commands below from /home/app_start.
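The bind-mount directories created by hand in the sections below can also be pre-created in one pass. A sketch; BASE defaults to ./app_start here for safety, so set BASE=/home/app_start on the VM:

```shell
#!/bin/sh
# Sketch: pre-create the bind-mount directories used by the services below.
# BASE defaults to ./app_start for safety; use /home/app_start on the VM.
BASE="${BASE:-./app_start}"

for d in portainer/data portainer/public nginx/log nginx/www \
         redis/data redis/conf redis/logs mysql/data \
         elasticsearch/config elasticsearch/data elasticsearch/plugins; do
    mkdir -p "$BASE/build/$d"
done

ls "$BASE/build"
```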
3.1 Installing Portainer
-
Create the directories: mkdir -p ../build/portainer/data && mkdir -p ../build/portainer/public
-
Add the service to docker-compose.yml:
  portainer:
    restart: always
    container_name: portainer
    image: portainer/portainer
    ports:
      - "8000:8000"
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/home/app_start/build/portainer/data:/data"
    networks:
      app_start:
        ipv4_address: 172.18.0.2
-
Pull and start Portainer: docker-compose up -d portainer
-
Visit Portainer and set a password
On first access the page asks you to set a password; configure Portainer as needed.
http://<deployment host IP>:9000
-
Localizing Portainer into Chinese
-
Download the Chinese language pack: curl -o /home/app_start/build/portainer/public.tar -L "https://data.smallblog.cn/blog-images/back/public.tar"
sudo tar -vxf /home/app_start/build/portainer/public.tar -C /home/app_start/build/portainer/
-
Edit docker-compose.yml and uncomment the volume line: - "/home/app_start/build/portainer/public:/public"
-
Redeploy: docker-compose up -d portainer
-
Reload the page in the browser and log in.
3.2 Installing Nginx
-
Create the directories: mkdir -p ./build/nginx && mkdir -p ./build/nginx/log && mkdir -p ./build/nginx/www
-
Add the service to docker-compose.yml:
  nginx:
    image: nginx
    restart: always
    container_name: nginx
    environment:
      - TZ=Asia/Shanghai
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/app_start/build/nginx/conf.d:/etc/nginx/conf.d
      - /home/app_start/build/nginx/log:/var/log/nginx
      - /home/app_start/build/nginx/www:/etc/nginx/html
      - /etc/letsencrypt:/etc/letsencrypt
    networks:
      app_start:
        ipv4_address: 172.18.0.6
-
Start a temporary Nginx container: docker run -p 80:80 --name nginx_test -d nginx
-
Copy the configuration out of the temporary container into the build directory: docker container cp nginx_test:/etc/nginx /home/app_start/build
List what was copied: ls /home/app_start/build/nginx
conf.d fastcgi_params log mime.types modules nginx.conf scgi_params uwsgi_params www
-
Stop the temporary container and bring up the properly built one: docker stop nginx_test
docker-compose up -d nginx
-
Open a browser to check that it started: http://<deployment host IP>:80
3.3 Installing Redis
-
Create the directories: mkdir -p ./build/redis/data && mkdir -p ./build/redis/conf && mkdir -p ./build/redis/logs
-
Download a Redis configuration file: curl -o /home/app_start/build/redis/conf/redis.conf -L "https://gitee.com/grocerie/centos-deployment/raw/master/redis.conf"
-
Add the service to docker-compose.yml:
  redis:
    restart: always
    image: redis
    container_name: redis
    volumes:
      - "/home/app_start/build/redis/data:/data"
      - "/home/app_start/build/redis/conf:/usr/local/etc/redis"
      - "/home/app_start/build/redis/logs:/logs"
    command:
      redis-server --requirepass 123456 --appendonly yes
    ports:
      - 6379:6379
    environment:
      - TZ=Asia/Shanghai
    networks:
      app_start:
        ipv4_address: 172.18.0.5
-
Build and start the container: docker-compose up -d redis
-
After startup, test the Redis connection.
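A quick way to script this kind of connection check is to probe the TCP port before reaching for a client. A sketch relying on bash's /dev/tcp feature (the function name is mine):

```shell
# Sketch: wait until host:port accepts TCP connections (uses bash /dev/tcp).
wait_for_port() {
    host="$1"; port="$2"; tries="${3:-10}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        # opening fd 3 on /dev/tcp/<host>/<port> attempts a TCP connect
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            echo "$host:$port is up"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "$host:$port not reachable" >&2
    return 1
}

# Usage: wait_for_port 127.0.0.1 6379 30 && redis-cli -a 123456 ping
```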
3.4 Installing MySQL
-
Create the directory: mkdir -p ./build/mysql/data
-
Add the service to docker-compose.yml:
  mysql:
    image: mysql
    restart: always
    container_name: "mysql"
    ports:
      - 3306:3306
    volumes:
      - "/home/app_start/build/mysql/data:/var/lib/mysql"
    command:
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
      --default-time-zone=+8:00
    environment:
      MYSQL_ROOT_PASSWORD: "12345678"
    networks:
      app_start:
        ipv4_address: 172.18.0.4
-
After startup, test the MySQL connection.
3.5 Installing Elasticsearch
-
Create the directories: mkdir -p ./build/elasticsearch/config && mkdir -p ./build/elasticsearch/data && mkdir -p ./build/elasticsearch/plugins
-
Write the configuration file:
echo "http.host: 0.0.0.0" >> /home/app_start/build/elasticsearch/config/elasticsearch.yml
-
Open up permissions on the directory: chmod -R 777 /home/app_start/build/elasticsearch
-
Add the service to docker-compose.yml:
  elasticsearch:
    container_name: elasticsearch
    image: docker.io/elasticsearch:7.4.2
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms64m -Xmx512m
    volumes:
      - /home/app_start/build/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/app_start/build/elasticsearch/data:/usr/share/elasticsearch/data
      - /home/app_start/build/elasticsearch/plugins:/usr/share/elasticsearch/plugins
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      app_start:
        ipv4_address: 172.18.0.20
    restart: always
-
After startup, open http://<deployment host IP>:9200/ in a browser.
-
Installing the IK Analyzer into Elasticsearch
-
Enter the Elasticsearch mount directory: cd /home/app_start/build/elasticsearch
-
Download and install the IK analyzer:
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.4.2/elasticsearch-analysis-ik-7.4.2.zip
yum install unzip -y
unzip elasticsearch-analysis-ik-7.4.2.zip -d ik
mv ik plugins/
chmod -R 777 plugins/ik
-
Confirm the analyzer is installed:
-
Enter the container and go to /usr/share/elasticsearch/bin
Run: elasticsearch-plugin
A tool for managing installed elasticsearch plugins
Non-option arguments:
command
Option Description
------ -----------
-h, --help show help
-s, --silent show minimal output
-v, --verbose show verbose output
-
List all installed plugins:
[root@36a2ac9a6bd7 bin]# elasticsearch-plugin list
ik
-
Restart Elasticsearch: docker restart elasticsearch
-
Test the IK analyzer
Coarse-grained segmentation (ik_smart keeps the largest word combinations):
POST _analyze
{
  "analyzer": "ik_smart",
  "text": "我爱我的祖国!"
}
Response:
{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "爱我",
      "start_offset" : 1,
      "end_offset" : 3,
      "type" : "CN_WORD",
      "position" : 1
    },
    {
      "token" : "的",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "CN_CHAR",
      "position" : 2
    },
    {
      "token" : "祖国",
      "start_offset" : 4,
      "end_offset" : 6,
      "type" : "CN_WORD",
      "position" : 3
    }
  ]
}
3.6 Installing Kibana
-
Add the service to docker-compose.yml:
  kibana:
    container_name: kibana
    image: kibana:7.4.2
    environment:
      - ELASTICSEARCH_HOSTS=http://172.18.0.20:9200
      - I18N_LOCALE=zh-CN
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    restart: on-failure
    networks:
      app_start:
        ipv4_address: 172.18.0.21
-
After startup, open http://<deployment host IP>:5601/ in a browser.
If the page shows "Kibana server is not ready yet",
Kibana may simply not have finished starting.
Check the Kibana container logs; the following entry means it is ready:
{
  "type": "log",
  "@timestamp": "2022-07-18T09:03:16Z",
  "tags": [
    "status",
    "plugin:spaces@7.4.2",
    "info"
  ],
  "pid": 7,
  "state": "green",
  "message": "Status changed from yellow to green - Ready",
  "prevState": "yellow",
  "prevMsg": "Waiting for Elasticsearch"
}
3.7 Installing Logstash
-
Start a temporary container: docker run -d --name=logstash logstash:7.4.2
-
Copy the files out of the temporary container: docker cp logstash:/usr/share/logstash/config /home/app_start/build/logstash/
docker cp logstash:/usr/share/logstash/data /home/app_start/build/logstash/
docker cp logstash:/usr/share/logstash/pipeline /home/app_start/build/logstash/
-
Grant permissions on the files
This step is required; without it Logstash later fails to start with obscure errors.
chmod -R 777 /home/app_start/build/logstash/
-
Edit logstash.yml under logstash/config:
http.host: 0.0.0.0
path.logs: /usr/share/logstash/logs
pipeline.batch.size: 10
xpack.monitoring.elasticsearch.hosts:
  - http://172.18.0.20:9200
-
Edit logstash.conf under logstash/pipeline:
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"     # accept logs from any host
    port => 5045
    codec => json_lines   # data format
  }
}
output {
  elasticsearch {
    hosts => ["http://172.18.0.20:9200"]  # Elasticsearch address and port
    index => "elk"                        # index name
    codec => "json"
  }
  stdout {
    codec => rubydebug
  }
}
-
Add the service to docker-compose.yml:
  logstash:
    image: logstash:7.4.2
    container_name: logstash
    restart: always
    ports:
      - "5044:5044"
      - "9600:9600"
      - "5045:5045"
    volumes:
      - ./build/logstash/config:/usr/share/logstash/config
      - ./build/logstash/data:/usr/share/logstash/data
      - ./build/logstash/pipeline:/usr/share/logstash/pipeline
    networks:
      app_start:
        ipv4_address: 172.18.0.23
-
After startup, watch the logs
Check that the startup goes smoothly so problems are caught early:
docker logs -f logstash
-
Common errors: [2022-07-18T13:51:50,582][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2022-07-18T13:51:54,827][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2022-07-18T13:51:55,897][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2022-07-18T13:51:56,823][ERROR][logstash.javapipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"1db05a8093f0a1ba531f41f086b7b6aae3f181480601fd93614e3bf59e5a6608", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_aadd2f75-f190-4183-a533-51389850e1b1", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>4>
Error: Address already in use
Note: Address already in use means port 5044 is taken; check the configuration files.
[2022-07-18T13:51:55,924][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.4.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 25.161-b14 on 1.8.0_161-b14 +indy +jit [linux-x86_64]"}
[2022-07-18T13:51:55,262][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-07-18T13:51:55,286][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"8fa4bf61-3190-4e9d-ac9a-eccf68220b9f", :path=>"/usr/share/logstash/data/uuid"}
[2022-07-18T13:51:55,618][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/usr/share/logstash/config/conf.d/*.conf"}
[2022-07-18T13:51:55,674][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2022-07-18T13:51:55,820][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2022-07-18T13:51:55,896][INFO ][logstash.runner ] Logstash shut down.
Note: No config files found in path {:path=>"/usr/share/logstash/config/conf.d/*.conf"} means the pipeline config cannot be found; commenting that path setting out in logstash.yml fixes it.
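Both failure modes above can be spotted mechanically when scanning container logs. A hedged sketch (scan_logstash_log is my name, not a Logstash tool); pipe docker logs output into it:

```shell
# Sketch: flag the common fatal patterns discussed above in log input.
scan_logstash_log() {
    status=0
    while IFS= read -r line; do
        case "$line" in
            *"Address already in use"*)
                echo "port conflict: check the port mappings"; status=1 ;;
            *"No config files found"*)
                echo "missing pipeline config: check logstash.yml"; status=1 ;;
        esac
    done
    return "$status"
}

# Usage: docker logs logstash 2>&1 | scan_logstash_log
```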
3.8 Installing elasticsearch-head
-
Start a temporary container: docker pull mobz/elasticsearch-head:5
docker run -d --name es_admin -p 9100:9100 mobz/elasticsearch-head:5
-
Copy the files out of the temporary container: docker cp es_admin:/usr/src/app/ /home/app_start/build/elasticsearch-head/
-
Edit Gruntfile.js under elasticsearch-head/app:
connect: {
  server: {
    options: {
      // add hostname: '0.0.0.0',
      hostname: '0.0.0.0',
      port: 9100,
      base: '.',
      keepalive: true
    }
  }
}
-
Add the service to docker-compose.yml:
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    restart: always
    ports:
      - "9100:9100"
    volumes:
      - ./build/elasticsearch-head/app/:/usr/src/app/
    networks:
      app_start:
        ipv4_address: 172.18.0.24
-
After startup, open http://<deployment host IP>:9100/ in a browser.
-
If you run into cross-origin (CORS) problems,
edit elasticsearch.yml in the Elasticsearch mount directory:
http.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
3.9 Installing RabbitMQ
-
Add the service to docker-compose.yml:
  rabbitmq:
    restart: always
    image: rabbitmq:management
    container_name: rabbitmq
    ports:
      - 5671:5671
      - 5672:5672
      - 4369:4369
      - 25672:25672
      - 15671:15671
      - 15672:15672
    networks:
      app_start:
        ipv4_address: 172.18.0.22
Port overview
4369, 25672 (Erlang discovery & clustering ports)
5672, 5671 (AMQP ports)
15672 (web management UI port)
61613, 61614 (STOMP protocol ports)
1883, 8883 (MQTT protocol ports)
-
After startup, open http://<deployment host IP>:15672/ in a browser.
Default credentials: guest / guest
Installing the JDK Locally
-
Install JDK 1.8; the recommended path is /usr/local/src, so create the directory first: mkdir -p /usr/local/src/jdk
-
Check whether a JDK is already installed: rpm -qa | grep -i jdk -- check
rpm -e --nodeps <package name> -- uninstall
-
Download the JDK package: wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz
-
Extract the archive: tar -zxvf jdk-8u131-linux-x64.tar.gz
-
Configure environment variables: vi /etc/profile
Append at the end of the file: export JAVA_HOME=/usr/local/src/jdk/jdk1.8.0_131
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
-
Apply the changes: source /etc/profile
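An alternative to editing /etc/profile by hand is dropping the same exports into a snippet under /etc/profile.d, which login shells source automatically. A sketch; OUT points at the current directory here so nothing system-wide is touched:

```shell
#!/bin/sh
# Sketch: write the JDK exports into a profile snippet.
# OUT is ./jdk.sh here for safety; on the VM use /etc/profile.d/jdk.sh.
OUT="./jdk.sh"

# quoted heredoc so $JAVA_HOME etc. stay literal in the snippet
cat > "$OUT" <<'EOF'
export JAVA_HOME=/usr/local/src/jdk/jdk1.8.0_131
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
EOF

echo "wrote $OUT"
```

New login shells then pick the variables up without re-sourcing /etc/profile.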
-
Verify the installation: java -version
[root@localhost jdk]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
systemctl and firewalld Commands
1. Enabling, disabling, starting, and stopping the firewall
(1) Enable the firewall at boot: systemctl enable firewalld.service
(2) Disable the firewall at boot: systemctl disable firewalld.service
(3) Start the firewall: systemctl start firewalld
(4) Stop the firewall: systemctl stop firewalld
(5) Check firewall status: systemctl status firewalld
2. Configuring ports with firewall-cmd
(1) Check firewall state: firewall-cmd --state
(2) Reload the configuration: firewall-cmd --reload
(3) List open ports: firewall-cmd --list-ports
(4) Open a firewall port: firewall-cmd --zone=public --add-port=9200/tcp --permanent
Flag meanings:
--zone=public: the zone the rule belongs to
--add-port=9200/tcp: the port number and protocol to open
--permanent: keep the rule across reloads and reboots
Note: after adding a port you must run firewall-cmd --reload for it to take effect.
(5) Close a firewall port: firewall-cmd --zone=public --remove-port=9200/tcp --permanent