1: Preface
This post walks through a simple Elasticsearch installation on Linux. Download es here. Download JDK 8 here. Download ik here.
2: Installation and configuration
2.1: Create the installation directory
$ mkdir -p /work/programs/elasticsearch
$ cd /work/programs/elasticsearch
Put the downloaded archive into this directory and extract it with tar -zxf.
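For example, assuming the 6.7.2 tarball used throughout this post is named elasticsearch-6.7.2.tar.gz (adjust the path and version to whatever you actually downloaded):
$ cp /path/to/elasticsearch-6.7.2.tar.gz /work/programs/elasticsearch
$ tar -zxf elasticsearch-6.7.2.tar.gz   # produces the elasticsearch-6.7.2/ directory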
2.2: Configuration
- Edit elasticsearch.yml
Add the following:
network.host: 0.0.0.0
bootstrap.system_call_filter: false
- Edit /etc/security/limits.conf
Raise the user's maximum number of threads and maximum number of open file handles. ES refuses to start with fewer than 4096 threads, while the default is typically 1024 or 3795. nofile controls the file-handle limit, nproc controls the thread limit.
* soft nofile 65535
* hard nofile 65535
* soft nproc 4097
* hard nproc 4097
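Note that limits.conf changes only take effect for new login sessions. After logging back in as the target user, a quick sanity check with ulimit should show the new values:
$ ulimit -n   # max open file handles, expected 65535
$ ulimit -u   # max user processes (threads), expected 4097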
2.3: Start
Note: switch to a non-root user first; ES will not start as root.
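If a dedicated non-root user does not exist yet, a minimal sketch of creating one (the username es here is just an example) is:
$ useradd es
$ chown -R es:es /work/programs/elasticsearch
$ su - es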
/work/programs/elasticsearch/elasticsearch-6.7.2/bin/elasticsearch -d
After a successful start, check the log:
[root@localhost ~]# tail -10 /work/programs/elasticsearch/elasticsearch-6.7.2/logs/elasticsearch.log
[2021-12-04T04:54:23,420][INFO ][o.e.g.GatewayService ] [mYT6YO6] recovered [0] indices into cluster_state
[2021-12-04T04:54:24,115][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mYT6YO6] adding template [.watches] for index patterns [.watches*]
[2021-12-04T04:54:24,206][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mYT6YO6] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2021-12-04T04:54:24,617][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mYT6YO6] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2021-12-04T04:54:24,700][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mYT6YO6] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2021-12-04T04:54:24,821][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mYT6YO6] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2021-12-04T04:54:24,916][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mYT6YO6] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2021-12-04T04:54:24,981][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mYT6YO6] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2021-12-04T04:54:25,059][INFO ][o.e.c.m.MetaDataIndexTemplateService] [mYT6YO6] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2021-12-04T04:54:25,326][INFO ][o.e.l.LicenseService ] [mYT6YO6] license [7c34b397-4b3b-4b8f-90a8-2874f9bc8379] mode [basic] - valid
Send a test request; output like the following indicates success:
[dongyunqi@localhost elasticsearch]$ curl http://192.168.2.107:9200
{
"name" : "mYT6YO6",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "9D9vjYwvRny-kX8_s08p8A",
"version" : {
"number" : "6.7.2",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "56c6e48",
"build_date" : "2019-04-29T09:05:50.290371Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
3: Install the ik plugin
Run the following:
$ mkdir /work/programs/elasticsearch/elasticsearch-6.7.2/plugins/ik
$ cd /work/programs/elasticsearch/elasticsearch-6.7.2/plugins/ik
$ cp /path/to/elasticsearch-analysis-ik-6.7.2.zip /work/programs/elasticsearch/elasticsearch-6.7.2/plugins/ik
$ unzip /work/programs/elasticsearch/elasticsearch-6.7.2/plugins/ik/elasticsearch-analysis-ik-6.7.2.zip
$ ps -ef | grep elastic
$ kill 2382 # assuming the ES process ID found above is 2382
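After killing the old process, start ES again as the non-root user so the new plugin is picked up, then check the log. The exact grep pattern below is an assumption about the plugin-loading log line, but the log should mention analysis-ik if the plugin was loaded:
$ /work/programs/elasticsearch/elasticsearch-6.7.2/bin/elasticsearch -d
$ grep 'analysis-ik' /work/programs/elasticsearch/elasticsearch-6.7.2/logs/elasticsearch.log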
ik provides two analysis modes: ik_max_word, which splits the text at the finest possible granularity, and ik_smart, which splits at the coarsest granularity. Usage examples:
[dongyunqi@localhost ik]$ curl -X POST \
> http://localhost:9200/_analyze \
> -H 'content-type: application/json' \
> -d '{"analyzer": "ik_max_word","text": "百事可乐"}' && echo
{"tokens":[{"token":"百事可乐","start_offset":0,"end_offset":4,"type":"CN_WORD","position":0},{"token":"百事","start_offset":0,"end_offset":2,"type":"CN_WORD","position":1},{"token":"百","start_offset":0,"end_offset":1,"type":"TYPE_CNUM","position":2},{"token":"事","start_offset":1,"end_offset":2,"type":"CN_CHAR","position":3},{"token":"可乐","start_offset":2,"end_offset":4,"type":"CN_WORD","position":4}]}
[dongyunqi@localhost ik]$ curl -X POST \
> http://localhost:9200/_analyze \
> -H 'content-type: application/json' \
> -d '{ "analyzer": "ik_smart", "text": "百事可乐" }' && echo
{"tokens":[{"token":"百事可乐","start_offset":0,"end_offset":4,"type":"CN_WORD","position":0}]}
As you can see, ik_max_word produces more tokens, while ik_smart produces fewer.
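As a rough sketch of how these analyzers might be combined in practice (the index name test_index and field content are hypothetical), one common pattern is to index with ik_max_word and search with ik_smart:
$ curl -X PUT http://localhost:9200/test_index \
  -H 'content-type: application/json' \
  -d '{"mappings":{"_doc":{"properties":{"content":{"type":"text","analyzer":"ik_max_word","search_analyzer":"ik_smart"}}}}}'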