1、Prerequisite: the server must have a JDK installed.
You can verify this with java -version or rpm -qa | grep jdk.
The JDK was installed earlier, so that step is skipped here.
2、Upload elasticsearch-7.2.0-linux-x86_64.tar.gz, kibana-7.2.0-linux-x86_64.tar.gz, node-v10.15.0-linux-x64.tar.xz and elasticsearch-head-master.zip to the /opt/ directory.
3、Extract the packages:
tar -xvf elasticsearch-7.2.0-linux-x86_64.tar.gz
tar -xvf kibana-7.2.0-linux-x86_64.tar.gz
tar -xvf node-v10.15.0-linux-x64.tar.xz
unzip elasticsearch-head-master.zip
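After extraction you should see the four directories under /opt/; a quick check, assuming everything was unpacked in /opt/ as above:
## List the extracted directories
ls -l /opt/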
4、Install the elasticsearch-head-master plugin
①、Enter the plugin directory and list its contents: cd elasticsearch-head-master
[root@hadoop2 elasticsearch-head-master]# ll
②、Check the Node.js environment with: node -v
Rename the extracted Node.js directory:
mv node-v10.15.0-linux-x64 node
③、Configure the Node.js environment variables:
vi /etc/profile
export NODE_HOME=/opt/node      ## path where Node.js was extracted and renamed above
export PATH=$PATH:$NODE_HOME/bin
export NODE_PATH=$NODE_HOME/lib/node_modules
## Reload the profile so the new variables take effect
source /etc/profile
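A quick sanity check that the new environment variables are picked up; both commands should print a version number matching the bundled Node.js release:
node -v
npm -v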
④、Install grunt
grunt is a Node.js-based build tool that handles packaging, minification, testing, task running and so on; the head plugin is started through grunt.
## Change into the plugin directory
cd /opt/elasticsearch-head-master/
## Install grunt-cli globally with npm
npm install -g grunt-cli
## Verify the installation; if the command prints a version number, it succeeded
grunt --version
## Install the head plugin's npm dependencies
npm install
To use the Taobao npm mirror instead (helpful when the default registry is slow):
npm install -g cnpm --registry=https://registry.npm.taobao.org
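If cnpm was installed as above, the plugin's dependencies can also be pulled through the mirror; this is optional and assumes the Taobao registry is reachable:
## Install the head plugin's dependencies via the mirror
cd /opt/elasticsearch-head-master/
cnpm install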
⑤、Start elasticsearch-head-master
[root@hadoop2 ~]# cd /opt/elasticsearch-head-master/
Start head with either of the following commands:
[root@hadoop2 elasticsearch-head-master]# npm run start
[root@hadoop2 elasticsearch-head-master]# grunt server
Command to start it in the background:
[root@hadoop2 elasticsearch-head-master]# grunt server &
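The head plugin listens on port 9100 by default; a quick check that it is running (replace localhost with the server's IP when testing from another machine):
## Should return the head plugin's start page
curl http://localhost:9100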
5、Install elasticsearch
5.1 elasticsearch cannot be started as the root user, so create a dedicated es user first.
a. Create a new group
groupadd es
b. Create the es user and add it to the group
useradd es -g es
c. Set the es user's password (for example 123456)
passwd es
d. Make the es user the owner of the elasticsearch directory
chown -R es:es /opt/elasticsearch-7.2.0/
e. Grant file permissions
chmod -R 777 /opt/elasticsearch-7.2.0/
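Before moving on you can confirm the ownership change took effect; this is only a sanity check:
## The directory should now be owned by es:es
ls -ld /opt/elasticsearch-7.2.0/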
5.2 Configure the cluster (config files live in /opt/elasticsearch-7.2.0/config)
Edit the elasticsearch.yml file.
5.2.1、The cluster name (cluster.name) must be identical on every node.
5.2.2、Each node's name should match its server's hostname:
node.name: es1
node.master: true
node.data: true
node.max_local_storage_nodes: 3
5.2.3、Configure the data and logs paths; the directories must exist on the server (see the commands after the two settings below):
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
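The directories referenced by path.data and path.logs must exist and be writable by the es user; a minimal sketch, assuming the /opt/elasticsearch prefix used above:
## Create the data and log directories and hand them over to the es user
mkdir -p /opt/elasticsearch/data /opt/elasticsearch/logs
chown -R es:es /opt/elasticsearch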
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
cluster.initial_master_nodes: ["es1","es2"]
discovery.zen.minimum_master_nodes: 1
discovery.seed_hosts: ["192.168.1.141:9300","192.168.1.142:9300","192.168.1.143:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
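Once elasticsearch.yml is saved, elasticsearch can be started as the es user and checked over HTTP; a minimal sketch (the -d flag runs it as a daemon, and the port follows http.port above):
## Switch to the es user and start elasticsearch in the background
su es
cd /opt/elasticsearch-7.2.0/
./bin/elasticsearch -d
## Should return a JSON document with the node name, cluster name and version
curl http://localhost:9200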
Common errors and how to fix them:
1、The per-process limit on open files is too low; the current limits can be checked with the ulimit commands shown after the config below.
2、The maximum number of threads is too low; this is fixed in the same file as problem 1.
Add the following to /etc/security/limits.conf (the nofile lines raise the open-file limit, the nproc lines raise the thread limit):
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
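The new limits only apply to fresh login sessions, so log in again as the es user before checking; a quick verification:
## Current soft / hard open-file limits
ulimit -Sn
ulimit -Hn
## Current max user processes (threads)
ulimit -u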
3、max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[root@hadoop4 ~]# vim /etc/sysctl.conf
Add vm.max_map_count=262144 inside the file, save it, then reload the kernel parameters:
[root@hadoop4 ~]# sysctl -p
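To confirm the new value is active:
## Should print vm.max_map_count = 262144
sysctl vm.max_map_count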