Basic environment
Software covered by this deployment script package: zookeeper, kafka, hbase, apache-storm, opentsdb, fastdfs, mysql, mongodb, emqtt, redis, nodejs, hazelcast, nginx, solr
1. Environment information
IP address | Hostname | Software |
---|---|---|
192.168.2.71 | node01 | zookeeper, kafka, hadoop(namenode), hbase(master), apache-storm(master) |
192.168.2.72 | node02 | zookeeper, kafka, hadoop(datanode), hbase(slave), apache-storm(slave), opentsdb |
192.168.2.73 | node03 | zookeeper, kafka, hadoop(datanode), hbase(slave), apache-storm(slave), opentsdb |
192.168.2.74 | node04 | fastdfs(tracker,storage), mysql(master), emqtt, solr, redis |
192.168.2.75 | node05 | fastdfs(tracker,storage), mysql(slave), mongodb, nodejs, hazelcast |
192.168.2.76 | node06 | nginx, microservice applications |
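For reference, the IP-to-hostname mapping above corresponds to /etc/hosts entries like the following (the deployment scripts are expected to handle name resolution; add these manually only if they are missing):
192.168.2.71 node01
192.168.2.72 node02
192.168.2.73 node03
192.168.2.74 node04
192.168.2.75 node05
192.168.2.76 node06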
2. Download and extract the deployment scripts
Download the deployment scripts
curl -sLO https://dl.hc-yun.com/script/mango_service_install.tar.gz
Extract
tar xvf mango_service_install.tar.gz
3. Create the configuration file
Enter the script directory
cd mango_service_install
Copy the configuration template
cp host-6.cfg conf.cfg
Edit the configuration file with vi conf.cfg (modify the settings below)
# IP addresses and corresponding hostnames
SERVERS=(192.168.2.71 192.168.2.72 192.168.2.73 192.168.2.74 192.168.2.75 192.168.2.76)
HOSTS=(node01 node02 node03 node04 node05 node06)
# VIP for highly available fastdfs image access
FDFS_STORAGE_KEEP_VIP=192.168.2.70
# Network interface (with multiple NICs, set this to the name of the NIC that carries the deployment IP)
NET_NAME=
# hazelcast cache settings
HAZELCAST_GROUP=mango-prod
HAZELCAST_NETWORK_JOIN=tcp-ip
# Account and password for passwordless SSH login
SSH_PORT=22
SSH_USER=root
SSH_PASS=<system login password>
4. Download the software and push the install scripts to all nodes (run on node 192.168.2.71)
./download.sh
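To confirm the software and install scripts landed on every node, a quick check (a sketch, assuming download.sh has already configured passwordless SSH with the account from conf.cfg):
for h in node01 node02 node03 node04 node05 node06; do
    echo "== $h =="
    ssh root@$h 'ls /home/software'
done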
5. Install the software (run on every node)
cd /home/software && sh install.sh
6. Initialize the clusters
Hadoop cluster initialization; after it finishes, check that Hadoop is healthy (see the check after this block) before continuing
# Initialize the cluster
sh /home/software/init-hadoop.sh
# After confirming initialization completed, delete the init script
rm -f /home/software/init-hadoop.sh
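One way to confirm HDFS is healthy before deleting the init script (assuming the Hadoop client commands are on PATH on node01):
# the report should list the expected number of live DataNodes (node02, node03)
hdfs dfsadmin -report | grep 'Live datanodes'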
HBase and OpenTSDB initialization
# Initialize the cluster
sh /home/software/init-opentsdb.sh
# After confirming initialization completed, delete the init script
rm -f /home/software/init-opentsdb.sh
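To confirm the OpenTSDB schema was created in HBase, a quick check from the HBase shell (assuming the hbase command is on PATH; OpenTSDB typically creates the tables tsdb, tsdb-uid, tsdb-tree and tsdb-meta):
echo "list" | hbase shell 2>/dev/null | grep tsdb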
MySQL master-slave configuration (when MySQL is deployed on two nodes)
sh /home/software/init-mysql-cluster.sh
7. Verify the services
hadoop service
jps -l | egrep 'hdfs|yarn'
mango status hadoop
Open the following pages in a browser to verify
hadoop namenode http://192.168.2.71:50070
hadoop datanode http://192.168.2.72:50075 http://192.168.2.73:50075
zookeeper service
jps -l | grep zookeeper
mango status zookeeper
hbase service
jps -l | grep hbase
mango status hbase
Open the following pages in a browser to verify
hbase master: http://192.168.2.71:16010
hbase slave: http://192.168.2.72:16030 http://192.168.2.73:16030
opentsdb service [nodes node02, node03]
jps -l | grep opentsdb
mango status opentsdb
Open the following pages in a browser to verify
http://192.168.2.72:14242 http://192.168.2.73:14242
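The HTTP API can also be probed directly, which is useful on headless machines (OpenTSDB 2.x exposes /api/version; ports as deployed above):
curl -s http://192.168.2.72:14242/api/version
curl -s http://192.168.2.73:14242/api/version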
kafka service
jps -l | grep kafka
mango status kafka
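A simple functional check is to list the topics (a sketch, assuming the Kafka bin directory is on PATH and ZooKeeper listens on its default port 2181; newer Kafka releases take --bootstrap-server instead of --zookeeper):
kafka-topics.sh --list --zookeeper node01:2181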
apache-storm service
jps -l | grep storm
mango status storm
Open the Storm UI in a browser to verify
fastdfs service [nodes node04, node05]
# fastdfs service status
systemctl status fdfs_trackerd
systemctl status fdfs_storaged
# nginx service status
systemctl status nginx
# keepalived service status
systemctl status keepalived
# Check cluster node status
fdfs_monitor /etc/fdfs/client.conf | egrep 'ip_addr|tracker|Storage'
# Test uploading an image
fdfs_test /etc/fdfs/client.conf upload /usr/local/src/fastdfs-5.11/conf/anti-steal.jpg
Open the file URL returned by fdfs_test in a browser, via the VIP 192.168.2.70, to verify (or use the check below)
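A headless alternative to the browser check (the file path below is a made-up example; substitute the path printed by fdfs_test and access it through the VIP 192.168.2.70):
# example path only: replace group1/M00/... with the path returned by fdfs_test
curl -I http://192.168.2.70/group1/M00/00/00/example.jpg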
mysql service [nodes node04, node05]
# Check service status
systemctl status mysqld
# Check replication status on the standby node (run on the MySQL standby node, node05)
mysql -uroot -p -e "show slave status\G;"
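In the output, Slave_IO_Running and Slave_SQL_Running should both be Yes; a quick filter:
mysql -uroot -p -e "show slave status\G" | egrep 'Slave_IO_Running|Slave_SQL_Running'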
mongodb service [node node05]
systemctl status mongod
redis service [node node04]
systemctl status redis
emqtt service [node node04]
systemctl status emqtt
Open the following page in a browser to verify
http://192.168.2.74:18083 (login: admin/public)
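A headless check of the dashboard (it should return HTTP 200 once the dashboard is up):
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.2.74:18083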
solr service [node node04]
systemctl status solr
Open the following page in a browser to verify
http://192.168.2.74:8983/solr
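The standard Solr admin API can also be queried directly:
curl -s 'http://192.168.2.74:8983/solr/admin/info/system?wt=json'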
hazelcast service [node05]
systemctl status hazelcast
Check service status with the script (run on node 192.168.2.71)
# Check the status of all services
mango status all
# Check the status of a specific service
mango status kafka
8. Service management
Big-data services
cat /etc/rc.local
[root@localhost] mango
--------- start ---------
Usage:
start all
start zookeeper
start hadoop
start hbase
start kafka
start storm
start opentsdb
--------- stop ---------
stop all
stop opentsdb
stop storm
stop hbase
stop kafka
stop hadoop
stop zookeeper
--------- status ---------
status all
status opentsdb
status storm
status hbase
status kafka
status hadoop
status zookeeper
status mysql
status mongod
fastdfs service
# Tracker node
systemctl stop fdfs_trackerd # stop the service
systemctl start fdfs_trackerd # start the service
systemctl status fdfs_trackerd # check service status
# Storage node
systemctl stop fdfs_storaged # stop the service
systemctl start fdfs_storaged # start the service
systemctl status fdfs_storaged # check service status
# Storage node nginx
systemctl stop nginx # stop the service
systemctl start nginx # start the service
systemctl status nginx # check service status
# Storage node keepalived
systemctl stop keepalived # stop the service
systemctl start keepalived # start the service
systemctl status keepalived # check service status
mysql service
systemctl stop mysqld # stop the service
systemctl start mysqld # start the service
systemctl status mysqld # check service status
mongodb service
systemctl stop mongod # stop the service
systemctl start mongod # start the service
systemctl status mongod # check service status
redis service
systemctl stop redis # stop the service
systemctl start redis # start the service
systemctl status redis # check service status
hazelcast service
systemctl stop hazelcast # stop the service
systemctl start hazelcast # start the service
systemctl status hazelcast # check service status
emqtt service
systemctl stop emqtt # stop the service
systemctl start emqtt # start the service
systemctl status emqtt # check service status
solr service
systemctl stop solr # stop the service
systemctl start solr # start the service
systemctl status solr # check service status
nginx service
systemctl stop nginx # stop the service
systemctl start nginx # start the service
systemctl status nginx # check service status
9. Ports to be mapped
Software | IP address | Port |
---|---|---|
fastdfs resources | 192.168.2.70 | 80 |
HT | 192.168.2.75 | 5889 |
Web frontend / backend / gateway | 192.168.2.76 | 801/901/8080 |
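Port mapping is normally done on the site router or firewall; if a Linux gateway is used instead, a minimal iptables DNAT sketch for the first entry could look like this (eth0 as the external interface is an assumption, adjust to your environment):
# forward external port 80 to the fastdfs VIP
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.2.70:80
iptables -t nat -A POSTROUTING -p tcp -d 192.168.2.70 --dport 80 -j MASQUERADE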
10. Monitoring server
Add the data source
http://192.168.2.71:3000
Import the dashboard (9276)
http://192.168.2.71:3000/dashboard/import
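The data source can also be created through Grafana's HTTP API instead of the UI (a sketch only; the admin:admin credentials and the data source name, type and url are placeholders, set them to match the monitoring backend actually deployed):
curl -s -u admin:admin -H 'Content-Type: application/json' \
     -X POST http://192.168.2.71:3000/api/datasources \
     -d '{"name":"monitoring","type":"prometheus","url":"http://192.168.2.71:9090","access":"proxy"}'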
11. MySQL data backup
Run the backup command on the MySQL master node
/script/innobackup.sh --mode 1 --remoute-bak true --remoute-server root@node05 --remoute-dir /home/hadoop/backupmysql
Check the backup on the MySQL slave node
ls /home/hadoop/backupmysql
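To run the backup on a schedule, the same command can be placed in the master node's crontab (a sketch; the nightly 02:30 schedule is only an example):
# crontab -e on the MySQL master node
30 2 * * * /script/innobackup.sh --mode 1 --remoute-bak true --remoute-server root@node05 --remoute-dir /home/hadoop/backupmysql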