This article uses a Linux cluster; if you don't have one yet, see the cluster installation guide on my blog.
First comes Redis. We use it to further speed up the home page by keeping the channel list, data that almost never changes, in Redis. Step one is the Spring configuration file for Redis:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
xmlns:jee="http://www.springframework.org/schema/jee" xmlns:lang="http://www.springframework.org/schema/lang"
xmlns:jms="http://www.springframework.org/schema/jms" xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context"
xmlns:jdbc="http://www.springframework.org/schema/jdbc" xmlns:cache="http://www.springframework.org/schema/cache"
xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:oxm="http://www.springframework.org/schema/oxm"
xmlns:task="http://www.springframework.org/schema/task" xmlns:tool="http://www.springframework.org/schema/tool"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd
http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd
http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd
http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd
http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm.xsd
http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd
http://www.springframework.org/schema/tool http://www.springframework.org/schema/tool/spring-tool.xsd">
<bean id="jdkSerializationRedisSerializer" class="org.springframework.data.redis.serializer.JdkSerializationRedisSerializer"></bean>
<bean id="stringRedisSerializer" class="org.springframework.data.redis.serializer.StringRedisSerializer"></bean>
<bean id="jackson2JsonRedisSerializer" class="org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer">
<constructor-arg value="java.lang.Object"></constructor-arg>
</bean>
<bean id="redisTemplate" class="org.springframework.data.redis.core.RedisTemplate">
<property name="connectionFactory" ref="connectionFactory"></property>
<property name="keySerializer" ref="stringRedisSerializer"></property>
<property name="valueSerializer" ref="jdkSerializationRedisSerializer"></property>
<property name="hashKeySerializer" ref="stringRedisSerializer"></property>
<property name="hashValueSerializer" ref="jdkSerializationRedisSerializer"></property>
</bean>
<bean id="connectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
<property name="hostName" value="192.168.88.188"></property>
<property name="port" value="6379"></property>
</bean>
</beans>
Then import it from the main Spring configuration file:
<!-- Redis configuration file -->
<import resource="redis.xml"/>
Finally, modify the home-page controller's logic so that the channel list and the latest five articles are looked up in Redis first:
@Autowired
private RedisTemplate redisTemplate;
@RequestMapping("index.do")
public String index(Model model,Article article,@RequestParam(defaultValue="1")Integer pageNum) {
model.addAttribute("article", article);
Thread t1;
Thread t2;
Thread t3;
Thread t4;
t1=new Thread(new Runnable() {
@Override
public void run() {
List<Channel> channels = (List<Channel>) redisTemplate.opsForValue().get("channels");
if(channels==null){
channels = channelService.selects();
redisTemplate.opsForValue().set("channels", channels);
}
model.addAttribute("channels", channels);
}
});
t2=new Thread(new Runnable() {
@Override
public void run() {
if(article.getChannelId()!=null){
List<Category> categorys = channelService.selectsByChannelId(article.getChannelId());
model.addAttribute("categorys", categorys);
}else{
List<Slide> slides = slideService.getAll();
model.addAttribute("slides", slides);
article.setHot(1);
}
}
});
t3=new Thread(new Runnable() {
@Override
public void run() {
try {
t2.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
article.setDeleted(0);
article.setStatus(1);
List<Article> selectArticle = articleService.selectArticle(article, pageNum, 6);
PageInfo info=new PageInfo<>(selectArticle);
model.addAttribute("info", info);
}
});
t4=new Thread(new Runnable() {
@Override
public void run() {
List<Article> newArticles=(List<Article>) redisTemplate.opsForValue().get("newArticles");
if(newArticles==null){
Article latest = new Article();
latest.setDeleted(0);
latest.setStatus(1);
newArticles = articleService.selectArticle(latest, 1, 5);
redisTemplate.opsForValue().set("newArticles", newArticles);
}
PageInfo lastArticles=new PageInfo<>(newArticles);
model.addAttribute("lastArticles", lastArticles);
}
});
t1.start();
t2.start();
t3.start();
t4.start();
try {
t1.join();
t3.join();
t4.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
return "index/index";
}
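One optional refinement, sketched below: instead of caching the channel list forever, give the key an expiry so stale data eventually refreshes. The snippet is a drop-in replacement for the body of the t1 runnable above; the 30-minute value is only an assumption, and java.util.concurrent.TimeUnit has to be imported.
// inside t1's run(): same logic as above, but the cached value expires
List<Channel> channels = (List<Channel>) redisTemplate.opsForValue().get("channels");
if (channels == null) {
    channels = channelService.selects();
    // 30 minutes is illustrative; pick whatever matches how often channels change
    redisTemplate.opsForValue().set("channels", channels, 30, TimeUnit.MINUTES);
}
model.addAttribute("channels", channels);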
Run the project and look at the result. The first request takes a little over two seconds, which is normal since the back end has to populate Redis on that first query; the second request is back down to milliseconds. But now a bug shows up: the latest articles are gone. It turns out the child category bean does not implement the Serializable interface, so serializing the five latest articles, which embed it, fails. After fixing that (a sketch of the change follows below) the page displays correctly. If you check Redis on the server, you will see the keys pushed at runtime. That covers integrating Redis with SSM; before moving on to the next step, temporarily comment out the Redis-related code.
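The fix itself is tiny; a minimal sketch of the change (the field list is omitted here, the real Category bean in the project keeps all of its existing members):
package com.wy.bean;

import java.io.Serializable;

// Beans embedded in Article must be serializable so the cached article list
// can be written to Redis by the JDK serializer configured above.
public class Category implements Serializable {
    private static final long serialVersionUID = 1L;
    // ... existing fields, getters and setters stay unchanged
}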
Next we integrate Kafka and use it to track how often articles are clicked. First we need the SSM configuration for Kafka, starting with the producer:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context.xsd">
<bean id="producerProperties" class="java.util.HashMap">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value="192.168.88.186:9092,192.168.88.187:9092,192.168.88.188:9092" />
<entry key="retries" value="0" />
<entry key="batch.size" value="1638" />
<entry key="linger.ms" value="1" />
<entry key="buffer.memory" value="33554432 " />
<entry key="key.serializer"
value="org.apache.kafka.common.serialization.StringSerializer" />
<entry key="value.serializer"
value="org.apache.kafka.common.serialization.StringSerializer" />
</map>
</constructor-arg>
</bean>
<bean id="producerFactory" class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
<constructor-arg>
<ref bean="producerProperties" />
</constructor-arg>
</bean>
<bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg ref="producerFactory" />
<property name="defaultTopic" value="wy" />
</bean>
</beans>
Then import this configuration in the main Spring file:
<!-- Kafka producer -->
<import resource="producer.xml"/>
Inject the producer bean into the home-page controller and add code to the article detail handler so that every click sends a record to the Kafka cluster. We do no counting here; in practice click statistics are usually not the back-end developer's job anyway: big-data engineers process the records with Spark or similar tools and write the results back to the database.
@Autowired
private KafkaTemplate kafkaTemplate;
@RequestMapping("detail.do")
public String detail(Model model, Integer id, HttpSession session, @RequestParam(defaultValue="1")Integer page) {
Article article = articleService.select(id);
model.addAttribute("article", article);
User user=(User) session.getAttribute("user");
if (null != user) {
int isCollect = collectService.selectCount(article.getTitle(), user.getId());
model.addAttribute("isCollect", isCollect);
}
PageInfo<Comment> info = commentService.selects(id, page, 5);
model.addAttribute("info", info);
kafkaTemplate.send("wy",id+","+1);
return "index/article";
}
Have a consumer ready on the cluster, then run the project, click an article on the home page, and check the output.
Now back to the consumer side. Consumers are rarely used inside an SSM web application, and when they are used they normally don't live in the same project as the web business module. In short, integrating a Kafka consumer into SSM is an uncommon requirement, but it is still worth knowing how. First prepare the consumer configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context.xsd">
<bean id="consumerProperties" class="java.util.HashMap">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value="192.168.88.186:9092,192.168.88.187:9092,192.168.88.188:9092" />
<entry key="group.id" value="wy" />
<entry key="enable.auto.commit" value="true" />
<entry key="session.timeout.ms" value="15000 " />
<entry key="key.deserializer"
value="org.apache.kafka.common.serialization.StringDeserializer" />
<entry key="value.deserializer"
value="org.apache.kafka.common.serialization.StringDeserializer" />
</map>
</constructor-arg>
</bean>
<bean id="consumerFactory"
class="org.springframework.kafka.core.DefaultKafkaConsumerFactory">
<constructor-arg>
<ref bean="consumerProperties" />
</constructor-arg>
</bean>
<bean id="messageListenerContainer"
class="org.springframework.kafka.listener.KafkaMessageListenerContainer"
init-method="doStart">
<constructor-arg ref="consumerFactory" />
<constructor-arg ref="containerProperties" />
</bean>
<bean id="containerProperties" class="org.springframework.kafka.listener.ContainerProperties">
<constructor-arg value="wy" />
<property name="messageListener" ref="messageListernerConsumerService" />
</bean>
<bean id="messageListernerConsumerService" class="com.wy.kafka.MesLis" />
</beans>
Then, one crucial point: as with the producer, this file has to be imported in the main Spring configuration. But remember: if the producer configuration is present, comment it out temporarily. Avoid having both in the same project at the same time, otherwise one of them may well stop working.
<!-- Kafka consumer configuration -->
<import resource="consumer.xml"/>
Finally, write the consumer class, i.e. the listener class the configuration points to:
package com.wy.kafka;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;
public class MesLis implements MessageListener<String, String>{
@Override
public void onMessage(ConsumerRecord<String, String> data) {
String d = data.value();
System.out.println("Received data: "+d);
}
}
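The listener above only prints the payload. As a rough illustration of what a consumer could do with the "articleId,increment" records sent by detail.do, here is a variant that parses the message and keeps a running count in memory. The class name and the in-memory map are my own additions, not part of the project; if you try it, point the messageListernerConsumerService bean at this class instead.
package com.wy.kafka;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;

public class HitCountingListener implements MessageListener<String, String> {
    // articleId -> clicks accumulated since startup (illustration only; real
    // counting is normally done by a separate big-data pipeline, as noted above)
    private final Map<Integer, Long> hits = new ConcurrentHashMap<>();

    @Override
    public void onMessage(ConsumerRecord<String, String> data) {
        String[] parts = data.value().split(",");
        if (parts.length == 2) {
            Integer articleId = Integer.valueOf(parts[0].trim());
            long increment = Long.parseLong(parts[1].trim());
            hits.merge(articleId, increment, Long::sum);
            System.out.println("article " + articleId + " total clicks so far: " + hits.get(articleId));
        }
    }
}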
Then run the project, send a message to the topic from the server side, and check the result. Avoid Chinese characters in the message: the SSM stack gets little official maintenance these days, and you would have to sort out the character-set issue yourself.
Finally, let's look at integrating Elasticsearch (ES) with SSM. The steps follow the same pattern, though ES has a few more of them. First prepare the configuration file for the integration:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:elasticsearch="http://www.springframework.org/schema/data/elasticsearch"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/elasticsearch http://www.springframework.org/schema/data/elasticsearch/spring-elasticsearch.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
<elasticsearch:repositories base-package="com.wy.es" />
<elasticsearch:transport-client id="client" cluster-nodes="192.168.88.188:9300" />
<bean id="elasticsearchTemplate"
class="org.springframework.data.elasticsearch.core.ElasticsearchTemplate">
<constructor-arg name="client" ref="client"></constructor-arg>
</bean>
</beans>
Then import this configuration in Spring and create the package for the ES repository interfaces that it references:
<!-- Elasticsearch -->
<import resource="es.xml"/>
Create the package that <elasticsearch:repositories base-package="com.wy.es" /> points to.
In that package, define the ES data-access interface:
package com.wy.es;
import java.util.List;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import com.wy.bean.Article;
public interface ArticleElasticsearch extends ElasticsearchRepository<Article, Integer>{
List<Article> findByTitleOrContent(String title,String content);
}
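ElasticsearchRepository already gives us save, findById and delete, and findByTitleOrContent is a derived query that searches either field. A minimal usage sketch, assuming the repository is injected the same way as in the test class further below:
// search articles whose title or content matches the keyword
List<Article> matches = articleElasticsearch.findByTitleOrContent("spring", "spring");
for (Article a : matches) {
    System.out.println(a.getTitle());
}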
Next, annotate the entity bean with the document annotations provided by Spring Data Elasticsearch to map it onto ES:
//indexName specifies the index name (comparable to a database; it must be all lowercase with no special characters, otherwise an error is thrown), type specifies the type name (comparable to a table)
@Document(indexName="test_user",type="user")
//marks the ID field
@Id
//1. whether the field value is indexed; 2. whether it is stored; 3. the analyzer used when indexing the value; 4. the analyzer applied to the search keywords; 5. the data type used to store the value
@Field(index=true,store=true,analyzer="ik_smart",searchAnalyzer="ik_smart",type=FieldType.Text)
The annotated entity looks like this:
package com.wy.bean;
import java.io.Serializable;
import java.util.Date;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
@Document(indexName="articles",type="article")
public class Article implements Serializable {
private static final long serialVersionUID = 1L;
@Id
private Integer id;
@Field(index=true,store=true,analyzer="ik_max_word",searchAnalyzer="ik_max_word",type=FieldType.text)
private String title;
private String summary;
@Field(index=true,store=true,analyzer="ik_max_word",searchAnalyzer="ik_max_word",type=FieldType.text)
private String content;
private String picture;
private Integer channelId;
private Integer categoryId;
private Integer userId;
private Integer hits;
private Integer hot;
private Integer status;
private Integer deleted;
private Date created;
private Date updated;
private String contentType ;
private Channel channel;
private Category category;
private User user;
private String keywords;
private String original;
public String getKeywords() {
return keywords;
}
public void setKeywords(String keywords) {
this.keywords = keywords;
}
public String getOriginal() {
return original;
}
public void setOriginal(String original) {
this.original = original;
}
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public String getTitle() {
return title;
}
public void setTitle(String title) {
this.title = title;
}
public String getSummary() {
return summary;
}
public void setSummary(String summary) {
this.summary = summary;
}
public String getContent() {
return content;
}
public void setContent(String content) {
this.content = content;
}
public String getPicture() {
return picture;
}
public void setPicture(String picture) {
this.picture = picture;
}
public Integer getChannelId() {
return channelId;
}
public void setChannelId(Integer channelId) {
this.channelId = channelId;
}
public Integer getCategoryId() {
return categoryId;
}
public void setCategoryId(Integer categoryId) {
this.categoryId = categoryId;
}
public Integer getUserId() {
return userId;
}
public void setUserId(Integer userId) {
this.userId = userId;
}
public Integer getHits() {
return hits;
}
public void setHits(Integer hits) {
this.hits = hits;
}
public Integer getHot() {
return hot;
}
public void setHot(Integer hot) {
this.hot = hot;
}
public Integer getStatus() {
return status;
}
public void setStatus(Integer status) {
this.status = status;
}
public Integer getDeleted() {
return deleted;
}
public void setDeleted(Integer deleted) {
this.deleted = deleted;
}
public Date getCreated() {
return created;
}
public void setCreated(Date created) {
this.created = created;
}
public Date getUpdated() {
return updated;
}
public void setUpdated(Date updated) {
this.updated = updated;
}
public Channel getChannel() {
return channel;
}
public void setChannel(Channel channel) {
this.channel = channel;
}
public Category getCategory() {
return category;
}
public void setCategory(Category category) {
this.category = category;
}
public User getUser() {
return user;
}
public void setUser(User user) {
this.user = user;
}
public String getContentType() {
return contentType;
}
public void setContentType(String contentType) {
this.contentType = contentType;
}
@Override
public String toString() {
return "Article [id=" + id + ", title=" + title + ", summary=" + summary + ", content=" + content + ", picture="
+ picture + ", channelId=" + channelId + ", categoryId=" + categoryId + ", userId=" + userId + ", hits="
+ hits + ", hot=" + hot + ", status=" + status + ", deleted=" + deleted + ", created=" + created
+ ", updated=" + updated + ", contentType=" + contentType + ", channel=" + channel + ", category="
+ category + ", user=" + user + ", keywords=" + keywords + ", original=" + original + "]";
}
}
Now use a test class together with our ES repository interface to load some initial data into ES:
import com.wy.bean.Article;
import com.wy.es.ArticleElasticsearch;
import com.wy.service.ArticleService;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import java.util.List;
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:spring.xml" })
public class ESTest {
@Autowired
private ArticleElasticsearch articleElasticsearch;
@Autowired
private ArticleService articleService;
@Test
public void pushESData(){
List<Article> articles = articleService.selectArticle(new Article(), 1, 30);
for (Article a : articles){
articleElasticsearch.save(a);
}
}
}
Once the test class finishes, you can see the data in the head plugin. That completes the ES integration setup. Rather than stopping at a plain integration, this article tackles something fancier, ES highlighting, so you learn highlighting and see how the integration is used at the same time.
To implement ES highlighting, you need a utility class. I have already written one for you, so just copy it. A word of warning: it is not a general-purpose utility, it is written for this CMS project only; adapt it yourself for any other use.
package com.wy.utils;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.SearchResultMapper;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.aggregation.impl.AggregatedPageImpl;
import org.springframework.data.elasticsearch.core.query.GetQuery;
import org.springframework.data.elasticsearch.core.query.IndexQuery;
import org.springframework.data.elasticsearch.core.query.IndexQueryBuilder;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.data.elasticsearch.core.query.SearchQuery;
import com.github.pagehelper.PageInfo;
public class HLUtils {
public static void saveObject(ElasticsearchTemplate elasticsearchTemplate, String id, Object object) {
IndexQuery query = new IndexQueryBuilder().withId(id).withObject(object).build();
elasticsearchTemplate.index(query);
}
public static void deleteObject(ElasticsearchTemplate elasticsearchTemplate, Class<?> clazz, Integer ids[]) {
for (Integer id : ids) {
elasticsearchTemplate.delete(clazz, id + "");
}
}
public static Object selectById(ElasticsearchTemplate elasticsearchTemplate, Class<?> clazz, Integer id) {
GetQuery query = new GetQuery();
query.setId(id + "");
return elasticsearchTemplate.queryForObject(query, clazz);
}
public static PageInfo<?> findByHighLight(ElasticsearchTemplate elasticsearchTemplate, Class<?> clazz, Integer page,
Integer rows, String fieldNames[],String sortField, String value) {
AggregatedPage<?> pageInfo = null;
PageInfo<?> pi = new PageInfo<>();
final Pageable pageable = PageRequest.of(page - 1, rows, Sort.by(Sort.Direction.ASC, sortField));
SearchQuery query = null;
QueryBuilder queryBuilder = null;
if (value != null && !"".equals(value)) {
String preTags = "<font color=\"red\">";
String postTags = "</font>";
HighlightBuilder.Field highlightFields[] = new HighlightBuilder.Field[fieldNames.length];
for (int i = 0; i < fieldNames.length; i++) {
highlightFields[i] = new HighlightBuilder.Field(fieldNames[i]).preTags(preTags).postTags(postTags);
}
queryBuilder = QueryBuilders.multiMatchQuery(value, fieldNames);
query = new NativeSearchQueryBuilder().withQuery(queryBuilder).withHighlightFields(highlightFields)
.withPageable(pageable).build();
pageInfo = elasticsearchTemplate.queryForPage(query, clazz, new SearchResultMapper() {
public <T> AggregatedPage<T> mapResults(SearchResponse response, Class<T> clazz, Pageable pageable1) {
List<T> content = new ArrayList<T>();
long total = 0l;
try {
SearchHits hits = response.getHits();
if (hits != null) {
total = hits.getTotalHits();
SearchHit[] searchHits = hits.getHits();
if (searchHits != null && searchHits.length > 0) {
for (int i = 0; i < searchHits.length; i++) {
T entity = clazz.newInstance();
SearchHit searchHit = searchHits[i];
Field[] fields = clazz.getDeclaredFields();
for (int k = 0; k < fields.length; k++) {
Field field = fields[k];
field.setAccessible(true);
String fieldName = field.getName();
if (!fieldName.equals("serialVersionUID")&&!fieldName.equals("user")&&!fieldName.equals("channel")&&!fieldName.equals("category")&&!fieldName.equals("articleType")&&!fieldName.equals("imgList")) {
HighlightField highlightField = searchHit.getHighlightFields()
.get(fieldName);
if (highlightField != null) {
String value = highlightField.getFragments()[0].toString();
field.set(entity, value);
} else {
Object value = searchHit.getSourceAsMap().get(fieldName);
Class<?> type = field.getType();
if (type == Date.class) {
if(value!=null) {
field.set(entity, new Date(Long.valueOf(value + "")));
}
} else {
field.set(entity, value);
}
}
}
}
content.add(entity);
}
}
}
} catch (Exception e) {
e.printStackTrace();
}
return new AggregatedPageImpl<T>(content, pageable, total);
}
});
} else {
query = new NativeSearchQueryBuilder().withPageable(pageable).build();
pageInfo = elasticsearchTemplate.queryForPage(query, clazz);
}
int totalCount = (int) pageInfo.getTotalElements();
int pages = totalCount%rows==0?totalCount/rows:totalCount/rows+1;
pi.setTotal(pageInfo.getTotalElements());
pi.setPageNum(page);
pi.setPageSize(rows);
pi.setPrePage(page-1);
pi.setLastPage(page+1);
pi.setPages(pages);
List content = pageInfo.getContent();
pi.setList(content);
return pi;
}
}
Now let's use the utility class. First put a search box on the home page for the highlighted search, placed above the block that shows the five latest articles:
<form action="/es.do" method="get">
<div class="input-group mb-3">
<input type="text" name="key" value="${key}" class="form-control"
placeholder="请输入要搜索的内容" aria-label="Recipient's username"
aria-describedby="button-addon2">
<div class="input-group-append">
<button class="btn btn-outline-secondary" id="button-addon2">搜索</button>
</div>
</div>
</form>
With the search box on the front end, the back end needs a controller to receive it; the ElasticsearchTemplate is injected the same way as the RedisTemplate and KafkaTemplate earlier:
@RequestMapping("es.do")
public String es(Model model,String key,@RequestParam(defaultValue="1")Integer pageNum){
PageInfo<?> info = HLUtils.findByHighLight(elasticsearchTemplate, Article.class, pageNum, 5, new String[]{"title"}, "id", key);
model.addAttribute("info", info);
model.addAttribute("key", key);
return "index/index";
}
Now run the project and check the result. The highlighting works, but the rest of the page data is gone, simply because those queries are no longer executed. You can extend this yourself by merging the highlight search with the original queries; a rough sketch of one possible merge follows.
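This sketch assumes the same channelService, slideService and articleService beans used by index.do are also available in this controller; it simply reruns the non-search queries so the rest of the page still has data next to the highlighted results:
@RequestMapping("es.do")
public String es(Model model, String key, @RequestParam(defaultValue = "1") Integer pageNum) {
    // highlighted search results, same call as before
    PageInfo<?> info = HLUtils.findByHighLight(elasticsearchTemplate, Article.class, pageNum, 5,
            new String[]{"title"}, "id", key);
    model.addAttribute("info", info);
    model.addAttribute("key", key);
    // rerun the queries index.do performs so channels, slides and the latest
    // articles still render on the page
    model.addAttribute("channels", channelService.selects());
    model.addAttribute("slides", slideService.getAll());
    Article latest = new Article();
    latest.setDeleted(0);
    latest.setStatus(1);
    model.addAttribute("lastArticles", new PageInfo<>(articleService.selectArticle(latest, 1, 5)));
    return "index/index";
}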
With that, CMSDemo version 2.0 is complete.
The project has been uploaded to GitHub: https://github.com/wangyang159/cmsdemo