Setting Up a MongoDB Sharded Cluster

1. Environment Preparation

1.1. Server environment

Three ECS cloud servers with the following IPs:

  • 118.31.244.127
  • 118.31.244.177
  • 118.31.244.21

All three run CentOS 7.3:
cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)

1.2. Replace the yum repository

See my earlier blog post for this step.

1.3. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
systemctl daemon-reload

1.4. Disable SELinux

setenforce 0
sed -i 's/enforcing/disabled/g' /etc/selinux/config
sed -i 's/enforcing/disabled/g' /etc/sysconfig/selinux
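
A quick way to verify both changes (SELinux only shows Permissive until the next reboot, after which the config change makes it Disabled):

systemctl is-active firewalld   # expect "inactive"
getenforce                      # expect "Permissive" now, "Disabled" after a reboot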

1.5. Deploy Docker and docker-compose

See my other blog post for the installation itself.
Then add a domestic Docker registry mirror:
cat dockeradd.sh

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://j1a2u2sa.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload
systemctl restart docker
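
To confirm the mirror is in effect, check docker info (the exact output formatting varies slightly between Docker versions):

docker info 2>/dev/null | grep -A1 'Registry Mirrors'   # should list https://j1a2u2sa.mirror.aliyuncs.com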

1.6. Install file-transfer tools

Install scp:
yum -y install openssh-clients

2. Building the Cluster

2.1. Architecture diagram

(architecture diagram)

2.2. Port plan

Each host runs the same five services with the same host-to-container port mapping:

  • config: 27117 -> 27017
  • shard1: 27118 -> 27017
  • shard2: 27119 -> 27017
  • shard3: 27120 -> 27017
  • mongos: 27121 -> 27017

Note: config and shard1-3 are each replica sets; every mongos is a standalone instance.
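
Before building anything, it is worth confirming that the planned host ports are free; a quick check with ss (shipped with CentOS 7):

ss -tlnp | grep -E ':(2711[7-9]|2712[01])\b' || echo "ports 27117-27121 are free"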

2.3. Approach

  • Write Dockerfile_mongo to build the image used by the shards and the config servers
  • Write Dockerfile_mongos to build the mongos image
  • Write a docker-compose.yml without authentication
  • Start all the containers
  • Create database users on the primary of every shard and of the config replica set
  • Generate a keyfile
  • Bring all the containers down and write a docker-compose.yml with authentication
  • Start all the containers with authentication enabled
  • Test

2.4. Build the mongo image

cat Dockerfile_mongo

# assumed base image (a FROM instruction is required here)
FROM centos:7
MAINTAINER linuxwt <tengwanginit@gmail.com>

RUN yum -y update

RUN  echo '[mongodb-org-3.6]' > /etc/yum.repos.d/mongodb-org-3.6.repo  
RUN  echo 'name=MongoDB Repository' >> /etc/yum.repos.d/mongodb-org-3.6.repo  
RUN  echo 'baseurl=http://repo.mongodb.org/yum/redhat/7/mongodb-org/3.6/x86_64/' >> /etc/yum.repos.d/mongodb-org-3.6.repo  
RUN  echo 'enabled=1' >> /etc/yum.repos.d/mongodb-org-3.6.repo  
RUN  echo 'gpgcheck=0' >> /etc/yum.repos.d/mongodb-org-3.6.repo

RUN yum -y install make  
RUN yum -y install mongodb-org  
RUN mkdir -p /data/db

EXPOSE 27017  
ENTRYPOINT ["/usr/bin/mongod"]

docker build -t centos7/mongo:3.6 - < Dockerfile_mongo

2.5. Build the mongos image

cat Dockerfile_mongos

FROM centos7/mongo:3.6  
MAINTAINER linuxwt <tengwanginit@gmail.com>

ENTRYPOINT ["/usr/bin/mongos"]

docker build -t centos7/mongos:3.6 - < Dockerfile_mongos
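
Since both images set the MongoDB binary as their ENTRYPOINT, a quick sanity check is to ask each one for its version (no volumes or configuration needed):

docker run --rm centos7/mongo:3.6 --version    # runs "mongod --version" through the ENTRYPOINT
docker run --rm centos7/mongos:3.6 --version   # runs "mongos --version" through the ENTRYPOINT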

2.6. Start the replica sets without authentication

The steps are identical on all three machines: bring up all the containers on one machine, then repeat on the other two. The example below uses 118.31.244.127.
cat docker-compose.yml

config_tengwang1:  
  restart: always
  image: centos7/mongo:3.6
  container_name: config_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
 #    - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27117:27017
  command:  --bind_ip_all --dbpath /data/db/data/config/data --logpath /data/db/data/config/log/config.log --logappend --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=300M" --replSet crs --configsvr --port 27017 --slowms=500
shard1_tengwang1:  
  restart: always
  image: centos7/mongo:3.6
  container_name: shard1_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
  #   - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27118:27017
  command: --bind_ip_all --dbpath /data/db/data/shard1/data --logpath /data/db/data/shard1/log/shard1.log --logappend --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=300M" --replSet shard1 --oplogSize 2000 --shardsvr --port 27017 --slowms=500
 

shard2_tengwang1:  
  restart: always
  image: centos7/mongo:3.6
  container_name: shard2_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
  #   - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27119:27017
  command: --bind_ip_all --dbpath /data/db/data/shard2/data --logpath /data/db/data/shard2/log/shard2.log --logappend --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=300M" --replSet shard2 --oplogSize 2000 --shardsvr --port 27017 --slowms=500


shard3_tengwang1:  
  restart: always
  image: centos7/mongo:3.6
  container_name: shard3_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
 #    - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27120:27017
  command: --bind_ip_all --dbpath /data/db/data/shard3/data --logpath /data/db/data/shard3/log/shard3.log --logappend --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=300M" --replSet shard3 --oplogSize 2000 --shardsvr --port 27017 --slowms=500


mongos_tengwang1:  
  restart: always
  image: centos7/mongos:3.6
  container_name: mongos_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
  #   - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27121:27017
  command:   --bind_ip_all --configdb crs/118.31.244.127:27117,118.31.244.177:27117,118.31.244.21:27117 --logpath /data/db/data/mongos/log/mongos.log --logappend --port 27017

Before starting the containers, create the data and log directories. Run everything below from the directory that holds docker-compose.yml (here /data/tengwang/mongo, which is bind-mounted into the containers as /data/db):
mkdir -p data/shard{1..3}/{data,log}
mkdir -p data/config/{data,log}
mkdir -p data/mongos/log
MongoDB 3.x warns about transparent huge pages at startup, so also create the two override files that the compose file bind-mounts from $PWD into /sys/kernel/mm/transparent_hugepage:
echo "always madvise [never]" > enabled
echo "always madvise [never]" > defrag
docker-compose up -d then starts all the containers.
Repeat the same steps on the other two machines.
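
To confirm the stack came up cleanly on each host (a minimal check, assuming docker-compose.yml and the data directories above are in the current directory):

docker-compose ps                      # all five services should show State "Up"
tail -n 20 data/config/log/config.log  # mongod should report "waiting for connections on port 27017"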

2.6.1. Create the shard1 replica set

Run this on 118.31.244.127:

docker exec -it shard1_tengwang1 bash
mongo
use admin
config = {_id:"shard1", members:[ {_id:0,host:"118.31.244.127:27118",priority:1}, {_id:1,host:"118.31.244.177:27118",priority:2}, {_id:2,host:"118.31.244.21:27118",arbiterOnly:true} ] }
rs.initiate(config)
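
To confirm the replica set actually initialized, the member states can be checked from inside the container (a minimal check; once the election settles there should be one PRIMARY, one SECONDARY and one ARBITER):

docker exec shard1_tengwang1 mongo --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name + " " + m.stateStr) })'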

2.6.2. Create the shard2 replica set

Run this on 118.31.244.21:

docker exec -it shard2_tengwang2 bash
mongo
use admin
config = {_id:"shard2", members:[ {_id:0,host:"118.31.244.127:27119",arbiterOnly:true}, {_id:1,host:"118.31.244.177:27119",priority:1}, {_id:2,host:"118.31.244.21:27119",priority:2} ] }
rs.initiate(config)

2.6.3. Create the shard3 replica set

Run this on 118.31.244.127:

docker exec -it shard3_tengwang1 bash
mongo
use admin
config = {_id:"shard3", members:[ {_id:0,host:"118.31.244.127:27120",priority:2}, {_id:1,host:"118.31.244.177:27120",arbiterOnly:true}, {_id:2,host:"118.31.244.21:27120",priority:1} ] }
rs.initiate(config)

2.6.4. Create the config replica set

Pick any one of the three servers:

docker exec -it config_tengwang1 bash
mongo
use admin
config = {_id:"crs", configsvr:true, members:[ {_id:0,host:"118.31.244.127:27117"}, {_id:1,host:"118.31.244.177:27117"}, {_id:2,host:"118.31.244.21:27117"} ] }
rs.initiate(config)

2.6.5. Add the shards to the cluster

Log in to mongos:

docker exec -it mongos_tengwang1 bash
mongo
use admin
db.runCommand({addshard:"shard1/118.31.244.127:27118,118.31.244.177:27118,118.31.244.21:27118"}) // no spaces are allowed between the addresses
db.runCommand({addshard:"shard2/118.31.244.127:27119,118.31.244.177:27119,118.31.244.21:27119"})
db.runCommand({addshard:"shard3/118.31.244.127:27120,118.31.244.177:27120,118.31.244.21:27120"})
Check that the shards were added successfully:
db.runCommand({listshards:1})
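
The same three addshard calls can also be run non-interactively from the host, which is handy when scripting the setup (a sketch using the container name and port plan above):

for s in 1 2 3; do
  port=$((27117 + s))   # shard1 -> 27118, shard2 -> 27119, shard3 -> 27120
  docker exec mongos_tengwang1 mongo admin --quiet --eval \
    "db.runCommand({addshard: 'shard${s}/118.31.244.127:${port},118.31.244.177:${port},118.31.244.21:${port}'})"
done
docker exec mongos_tengwang1 mongo admin --quiet --eval 'db.runCommand({listshards: 1})'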

2.7. Create database users

Take shard1 as an example; its primary is 118.31.244.177.

docker exec -it shard1_tengwang1 bash
mongo
// first create the admin database user
use admin
db.createUser(
{
  user: "shanwang",
  pwd: "123456",
  roles:
  [
    {role: "root", db: "admin"},
    {role: "clusterAdmin", db: "admin"}
  ]
}
)

// then create the linuxwt database user
use linuxwt
db.createUser(
{
  user: "tengwang",
  pwd: "123456",
  roles:
  [
    {role: "dbOwner", db: "linuxwt"},
    {role: "clusterAdmin", db: "admin"}
  ]
}
)

Repeat the same steps on the primaries of shard2, shard3 and the config replica set.
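
The two users can also be created with one non-interactive call per primary; a minimal sketch (PRIMARY_CONTAINER is a placeholder for whichever container holds that replica set's primary on its host):

PRIMARY_CONTAINER=shard1_tengwang1   # placeholder; repeat with the shard2, shard3 and config primaries
docker exec $PRIMARY_CONTAINER mongo admin --quiet --eval '
  db.createUser({user: "shanwang", pwd: "123456",
                 roles: [{role: "root", db: "admin"}, {role: "clusterAdmin", db: "admin"}]});
  db.getSiblingDB("linuxwt").createUser({user: "tengwang", pwd: "123456",
                 roles: [{role: "dbOwner", db: "linuxwt"}, {role: "clusterAdmin", db: "admin"}]});
'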

2.8. Start the replica sets with authentication

2.8.1. Generate the keyfile

Run this on 118.31.244.127:
openssl rand -base64 745 > keyfile.dat
chmod 600 keyfile.dat
cp keyfile.dat data/shard1
cp keyfile.dat data/shard2
cp keyfile.dat data/shard3
scp keyfile.dat 118.31.244.177:/data/tengwang/mongo/
scp keyfile.dat 118.31.244.21:/data/tengwang/mongo/
Repeat the three cp commands on the other two servers.
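
The copy to the other two hosts can be scripted as well; a sketch assuming SSH access to both machines and the same /data/tengwang/mongo layout everywhere:

for host in 118.31.244.177 118.31.244.21; do
  scp keyfile.dat ${host}:/data/tengwang/mongo/
  ssh ${host} 'cd /data/tengwang/mongo && chmod 600 keyfile.dat && for d in shard1 shard2 shard3; do cp keyfile.dat data/$d/; done'
done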

2.8.2. Write the authenticated docker-compose.yml

Back up the non-authenticated docker-compose.yml first:
mv docker-compose.yml docker-compose.yml.bak
cat docker-compose.yml

config_tengwang1:  
  restart: always
  image: centos7/mongo:3.6
  container_name: config_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
     - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27117:27017
  command:  --bind_ip_all --dbpath /data/db/data/config/data --logpath /data/db/data/config/log/config.log --logappend --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=300M" --replSet crs --configsvr --port 27017 --slowms=500 --keyFile /data/db/keyfile.dat --auth
shard1_tengwang1:  
  restart: always
  image: centos7/mongo:3.6
  container_name: shard1_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
     - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27118:27017
  command: --bind_ip_all --dbpath /data/db/data/shard1/data --logpath /data/db/data/shard1/log/shard1.log --logappend --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=300M" --replSet shard1 --oplogSize 2000 --shardsvr --port 27017 --slowms=500 --keyFile /data/db/keyfile.dat --auth
 

shard2_tengwang1:  
  restart: always
  image: centos7/mongo:3.6
  container_name: shard2_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
     - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27119:27017
  command: --bind_ip_all --dbpath /data/db/data/shard2/data --logpath /data/db/data/shard2/log/shard2.log --logappend --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=300M" --replSet shard2 --oplogSize 2000 --shardsvr --port 27017 --slowms=500 --keyFile /data/db/keyfile.dat --auth


shard3_tengwang1:  
  restart: always
  image: centos7/mongo:3.6
  container_name: shard3_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
     - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27120:27017
  command: --bind_ip_all --dbpath /data/db/data/shard3/data --logpath /data/db/data/shard3/log/shard3.log --logappend --storageEngine wiredTiger --wiredTigerEngineConfigString="cache_size=300M" --replSet shard3 --oplogSize 2000 --shardsvr --port 27017 --slowms=500 --keyFile /data/db/keyfile.dat --auth


mongos_tengwang1:  
  restart: always
  image: centos7/mongos:3.6
  container_name: mongos_tengwang1
  volumes:
     - /etc/localtime:/etc/localtime
     - /etc/timezone:/etc/timezone
     - /data/tengwang/mongo:/data/db
     - $PWD/enabled:/sys/kernel/mm/transparent_hugepage/enabled
     - $PWD/defrag:/sys/kernel/mm/transparent_hugepage/defrag
     - $PWD/keyfile.dat:/data/db/keyfile.dat
  ulimits:
     nofile:
       soft: 300000
       hard: 300000
  ports:
     - 27121:27017
  command:   --bind_ip_all --configdb crs/118.31.244.127:27117,118.31.244.177:27117,118.31.244.21:27117 --logpath /data/db/data/mongos/log/mongos.log --logappend --port 27017 --keyFile /data/db/keyfile.dat

PS: every size in the configuration above is in MB; for example, the oplog size is set to 2000 MB.
Bring the old containers down, then start everything with authentication enabled:
docker-compose down
docker-compose up -d
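
After the restart it is worth confirming that authentication is actually enforced (a minimal check; the second command should now be rejected with a "not authorized" error):

docker-compose ps   # all five services should be "Up" again
docker exec shard1_tengwang1 mongo --quiet --eval 'db.adminCommand({listDatabases: 1})'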

2.9. Testing

If everything went as expected, all the authentication-enabled containers are now up. Two things are tested below:

  • whether the created accounts can log in to every shard, mongos and config through a MongoDB client
  • whether inserted data is actually distributed across the shards

2.9.1. Log in with the created users

Install the MongoDB client (mongo shell) on the host machine so the instances can be reached remotely; a scripted check of all the logins follows the list below.

  • Log in to mongos
    With the admin database user:
    mongo 118.31.244.127:27121/admin -u shanwang -p 123456 --authenticationDatabase admin
    With the linuxwt database user:
    mongo 118.31.244.127:27121/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt
    mongos is a standalone instance on each of the three servers, so any of the three IPs works.
  • Log in to shard1
    Primary: mongo 118.31.244.177:27118/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt
    Secondary: mongo 118.31.244.21:27118/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt
  • Log in to shard2
    Primary: mongo 118.31.244.21:27119/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt
    Secondary: mongo 118.31.244.177:27119/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt
  • Log in to shard3
    Primary: mongo 118.31.244.127:27120/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt
    Secondary: mongo 118.31.244.21:27120/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt
  • Log in to config
    The config replica set members all have the same priority, so its primary changes from time to time; assume 118.31.244.127 is the primary at this moment:
    mongo 118.31.244.127:27117/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt
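
All of these logins can be exercised in one pass from the host; a small sketch over one mongos and the shard primaries (the list mirrors the host:port pairs above):

for target in 118.31.244.127:27121 118.31.244.177:27118 118.31.244.21:27119 118.31.244.127:27120; do
  echo "== ${target}"
  mongo ${target}/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt --quiet \
    --eval 'db.runCommand({connectionStatus: 1}).authInfo'
done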

2.9.2. Sharding test

  1. Log in to one of the mongos instances
    mongo 118.31.244.127:27121/admin -u tengwang -p 123456 --authenticationDatabase linuxwt
  2. Check the cluster status
    sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5b97bf3a46d83924b1b0818c")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/118.31.244.127:27118,118.31.244.177:27118",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/118.31.244.177:27119,118.31.244.21:27119",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/118.31.244.127:27120,118.31.244.21:27120",  "state" : 1 }
  active mongoses:
        "3.6.7" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  no // the balancer is not enabled yet
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "linuxwt",  "primary" : "shard3",  "partitioned" : false } // the primary shard is shard3; sharding is not yet enabled for the linuxwt database

Insert a document:
use linuxwt
db.students.insert({uid:1,name:'lisi',age:20})
  3. Enable the balancer and sharding
    sh.startBalancer() // turns the balancer on
    sh.enableSharding('linuxwt') // enables sharding for the linuxwt database; before a collection can be sharded, its shard key needs an index
    db.students.ensureIndex({uid:1})
    sh.shardCollection('linuxwt.students',{uid:1}) // shards the students collection, with uid as the shard key
  4. Check the cluster status again
    The output should now look like this:

--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5b97bf3a46d83924b1b0818c")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/118.31.244.127:27118,118.31.244.177:27118",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/118.31.244.177:27119,118.31.244.21:27119",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/118.31.244.127:27120,118.31.244.21:27120",  "state" : 1 }
  active mongoses:
        "3.6.7" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "linuxwt",  "primary" : "shard3",  "partitioned" : true } // sharding is now enabled for the database
                linuxwt.students
                        shard key: { "uid" : 1 } // the shard key
                        unique: false
                        balancing: true
                        chunks:
                                shard3  1
                        { "uid" : { "$minKey" : 1 } } -->> { "uid" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 0) // the collection is sharded successfully
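
To actually see documents spread across the shards, it helps to insert a larger batch through mongos and then look at the per-shard distribution (a sketch; the document shape and count are only illustrative, and with the default 64 MB chunk size a small data set may still sit on a single shard):

mongo 118.31.244.127:27121/linuxwt -u tengwang -p 123456 --authenticationDatabase linuxwt --eval '
  var bulk = db.students.initializeUnorderedBulkOp();
  for (var i = 2; i <= 100000; i++) {
    bulk.insert({uid: i, name: "user" + i, age: i % 50});
  }
  bulk.execute();
  db.students.getShardDistribution();   // prints document and chunk counts per shard
'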