
[Big Data] Lesson 50 Hands-On: Index CRUD and a Custom Analyzer

Indices

So far the index was always created automatically, with dynamic mapping inferring the field mappings.

Now let's see how to create an index by hand and perform create, read, update, and delete operations on it.

1. Creating an index

The syntax for creating an index:

PUT /my_index
{
    "settings": { ... any settings ... },
    "mappings": {
        "type_one": { ... any mappings ... },
        "type_two": { ... any mappings ... },
        ...
    }
}

An example:

PUT /my_index
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "my_type" : {
      "properties": {
        "my_filed" : {
          "type": "text"
        }
      }
    }
  }
}

2. Viewing an index

GET /my_index

Response:

{
  "my_index": {
    "aliases": {},
    "mappings": {
      "my_type": {
        "properties": {
          "my_filed": {
            "type": "text"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1636764010606",
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "uuid": "8HAFX_BZRy2LMXUnzcDVVw",
        "version": {
          "created": "5060099"
        },
        "provided_name": "my_index"
      }
    }
  }
}

3. Updating an index

For example, change number_of_replicas in the settings:

PUT /my_index/_settings
{
    "number_of_replicas": 1
}

Response:

{
  "acknowledged": true
}

View the index again:

GET /my_index

Response:

{
  "my_index": {
    "aliases": {},
    "mappings": {
      "my_type": {
        "properties": {
          "my_filed": {
            "type": "text"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1636764010606",
        "number_of_shards": "1",
        "number_of_replicas": "1",
        "uuid": "8HAFX_BZRy2LMXUnzcDVVw",
        "version": {
          "created": "5060099"
        },
        "provided_name": "my_index"
      }
    }
  }
}

4. Deleting an index

Indices can be deleted in several ways, for example:

DELETE /my_index
DELETE /index_one,index_two
DELETE /index_*
DELETE /_all

Delete the index:

DELETE /my_index

Response:

{
  "acknowledged": true
}

View the index again:

GET /my_index

Response:

{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "resource.type": "index_or_alias",
        "resource.id": "my_index",
        "index_uuid": "_na_",
        "index": "my_index"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index",
    "resource.type": "index_or_alias",
    "resource.id": "my_index",
    "index_uuid": "_na_",
    "index": "my_index"
  },
  "status": 404
}

The Elasticsearch config file elasticsearch.yml has a setting:
action.destructive_requires_name: true

When enabled, deleting an index requires an explicit index name, so the DELETE /_all form is no longer allowed.
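The comma and wildcard delete forms target several indices at once. As a rough analogy (not the Elasticsearch implementation), shell-style globbing with Python's fnmatch shows which names a pattern like index_* would cover; the index names below are hypothetical:

```python
from fnmatch import fnmatch

# Hypothetical index names, for illustration only.
indices = ["index_one", "index_two", "my_index"]

# DELETE /index_* would target every index whose name matches the glob.
matched = [name for name in indices if fnmatch(name, "index_*")]
print(matched)  # → ['index_one', 'index_two']
```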

Analyzers

1. The default analyzer

standard

standard tokenizer: splits on word boundaries
standard token filter: does nothing
lowercase token filter: converts all letters to lowercase
stop token filter (disabled by default): removes stop words such as a, the, it
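As a rough sketch (plain Python, not the actual Lucene implementation), the chain above amounts to word-boundary tokenization followed by lowercasing, with an optional stop-word pass:

```python
import re

def standard_analyze(text, stopwords=None):
    # Simplified: \w+ stands in for the tokenizer's Unicode
    # word-boundary segmentation (UAX #29).
    tokens = [t.lower() for t in re.findall(r"\w+", text)]
    if stopwords:  # stop token filter, disabled by default
        tokens = [t for t in tokens if t not in stopwords]
    return tokens

print(standard_analyze("The QUICK Brown-Foxes"))
# → ['the', 'quick', 'brown', 'foxes']
```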

2. Modifying analyzer settings

Enable the English stop-words token filter:

PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "es_std": {
          "type": "standard",
          "stopwords": "_english_"
        }
      }
    }
  }
}

Response:

{
  "acknowledged": true,
  "shards_acknowledged": true,
  "index": "my_index"
}

Analyze with the standard analyzer:

GET /my_index/_analyze
{
  "analyzer": "standard", 
  "text": "a dog is in the house"
}

Response:

{
  "tokens": [
    {
      "token": "a",
      "start_offset": 0,
      "end_offset": 1,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "dog",
      "start_offset": 2,
      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "is",
      "start_offset": 6,
      "end_offset": 8,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "in",
      "start_offset": 9,
      "end_offset": 11,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "the",
      "start_offset": 12,
      "end_offset": 15,
      "type": "<ALPHANUM>",
      "position": 4
    },
    {
      "token": "house",
      "start_offset": 16,
      "end_offset": 21,
      "type": "<ALPHANUM>",
      "position": 5
    }
  ]
}

Now analyze with the es_std analyzer derived from standard:

GET /my_index/_analyze
{
  "analyzer": "es_std",
  "text":"a dog is in the house"
}

Response:

{
  "tokens": [
    {
      "token": "dog",
      "start_offset": 2,
      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "house",
      "start_offset": 16,
      "end_offset": 21,
      "type": "<ALPHANUM>",
      "position": 5
    }
  ]
}

All tokens that are English stop words have been filtered out.

3. Building a custom analyzer

The goal: a char filter &_to_and that rewrites & to and, plus a token filter my_stopwords that treats the and a as stop words.

The analyzer definition:

"my_analyzer": {
  "type": "custom",
  "char_filter": ["html_strip", "&_to_and"],
  "tokenizer": "standard",
  "filter": ["lowercase", "my_stopwords"]
}

First delete the existing index:

DELETE /my_index

Recreate the index with the custom analyzer:

PUT /my_index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "&_to_and": {
          "type": "mapping",
          "mappings": ["&=> and"]
        }
      },
      "filter": {
        "my_stopwords": {
          "type": "stop",
          "stopwords": ["the", "a"]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip", "&_to_and"],
          "tokenizer": "standard",
          "filter": ["lowercase", "my_stopwords"]
        }
      }
    }
  }
}

Response:

{
  "acknowledged": true,
  "shards_acknowledged": true,
  "index": "my_index"
}

Analyze with the custom analyzer:

GET /my_index/_analyze
{
  "text": "tom&jerry are a friend in the house, <a>, HAHA!!",
  "analyzer": "my_analyzer"
}

Response:

{
  "tokens": [
    {
      "token": "tomandjerry",
      "start_offset": 0,
      "end_offset": 9,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "are",
      "start_offset": 10,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "friend",
      "start_offset": 16,
      "end_offset": 22,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "in",
      "start_offset": 23,
      "end_offset": 25,
      "type": "<ALPHANUM>",
      "position": 4
    },
    {
      "token": "house",
      "start_offset": 30,
      "end_offset": 35,
      "type": "<ALPHANUM>",
      "position": 6
    },
    {
      "token": "haha",
      "start_offset": 42,
      "end_offset": 46,
      "type": "<ALPHANUM>",
      "position": 7
    }
  ]
}
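The whole chain can be sketched in plain Python. This is a simplified illustration of the char_filter → tokenizer → filter order, not the real character-stream implementation (which also preserves offsets):

```python
import re

def my_analyzer(text):
    # char_filter: html_strip drops tags, then the &_to_and
    # mapping rewrites "&" to "and".
    text = re.sub(r"<[^>]*>", "", text)
    text = text.replace("&", "and")
    # tokenizer: standard ~ split on word boundaries.
    tokens = re.findall(r"\w+", text)
    # token filters: lowercase, then my_stopwords.
    return [t.lower() for t in tokens if t.lower() not in {"the", "a"}]

print(my_analyzer("tom&jerry are a friend in the house, <a>, HAHA!!"))
# → ['tomandjerry', 'are', 'friend', 'in', 'house', 'haha']
```

The output matches the tokens returned by the _analyze call above.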

Configure a field of a given type to use the custom analyzer:

PUT /my_index/_mapping/my_type
{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}

Response:

{
  "acknowledged": true
}

View the index:

GET /my_index

Response:

{
  "my_index": {
    "aliases": {},
    "mappings": {
      "my_type": {
        "properties": {
          "content": {
            "type": "text",
            "analyzer": "my_analyzer"
          }
        }
      }
    },
    "settings": {
      "index": {
        "number_of_shards": "5",
        "provided_name": "my_index",
        "creation_date": "1636768940575",
        "analysis": {
          "filter": {
            "my_stopwords": {
              "type": "stop",
              "stopwords": [
                "the",
                "a"
              ]
            }
          },
          "analyzer": {
            "my_analyzer": {
              "filter": [
                "lowercase",
                "my_stopwords"
              ],
              "char_filter": [
                "html_strip",
                "&_to_and"
              ],
              "type": "custom",
              "tokenizer": "standard"
            }
          },
          "char_filter": {
            "&_to_and": {
              "type": "mapping",
              "mappings": [
                "&=> and"
              ]
            }
          }
        },
        "number_of_replicas": "1",
        "uuid": "YyPcT1OmS_O7xW5Xpq7fgw",
        "version": {
          "created": "5060099"
        }
      }
    }
  }
}

The content field of /my_index/my_type now uses the custom my_analyzer analyzer.

Added: 2021-11-15 15:56:05  Updated: 2021-11-15 15:56:44
 