DevOps-Based Deployment of Third-Party Infrastructure Components for Microservices

1. Deploying Docker Swarm

The environment consists of four virtual machines running a minimal CentOS 7.5 install, configured as follows:

hostname  ip             role        cpu  ram
node150   192.168.0.150  manager     2    4G
node151   192.168.0.151  worker      2    4G
node152   192.168.0.152  worker      2    4G
node153   192.168.0.153  dns server  2    4G

In production these hosts will have no Internet access, so nodes 150-152 are kept offline here to simulate that environment. For convenience, node153 can reach the Internet; it is used to build the local yum repository and to run the NTP and DNS servers.

1.1. Deploying a private yum repository

Upload the packages used to build the local repository to node153:

[root@node153 ~]# ls -lh ./yum  | grep -v total | awk '{print $9}'
compose.sh
createrepodir
createrepo.tar.gz
docker-compose-Linux-x86_64
local.sh
nginxdir
nginx.tar.gz
README.md
replenish
rpmdir
rpm.tar.gz
yumdownload.tar.gz

Run the script to deploy the local private yum repository:
bash local.sh
The repository is served by nginx, so open port 80:
firewall-cmd --permanent --add-port=80/tcp
Enable nginx as a system service:
systemctl enable nginx && systemctl daemon-reload
Visit http://192.168.0.153/myshare to check that the repository is set up correctly.
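A quick way to confirm the repository is reachable from the offline nodes is to request its metadata over HTTP; a minimal check, assuming curl is present on the minimal install and that local.sh generated the repodata directory under myshare:

# run from any node that can reach node153, e.g. node150
curl -sI http://192.168.0.153/myshare/repodata/repomd.xml | head -n 1   # expect HTTP/1.1 200 OK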

1.2. Configuring the yum source on the remote nodes

Upload the script to nodes 150-152:
cat yumconfig.sh

#!/bin/bash

mkdir -p /etc/yum.repos.d/bak   
mv /etc/yum.repos.d/* /etc/yum.repos.d/bak 
cat <<EOF>> /etc/yum.repos.d/local.repo  
[local]
name=local
baseurl=http://192.168.0.153/myshare
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF
yum makecache

Run the script to point yum at the local private repository.
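After the script runs it is worth confirming that only the local repository is active and that installs resolve against node153; a minimal check (the test package is an arbitrary example and assumes it was mirrored into the repository):

yum clean all
yum repolist enabled          # should list only the "local" repository
yum -y install tree           # any small package proves that installation works offline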

1.3. Installing Docker

Install Docker on nodes 150-152:
yum -y install docker-ce*
Start Docker and check the version:
systemctl start docker && docker version

Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:20:16 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:23:58 2018
  OS/Arch:      linux/amd64
  Experimental: false

Enable Docker as a system service:
systemctl enable docker && systemctl daemon-reload

1.4. Installing docker-compose

Upload the docker-compose binary and the setup script to nodes 150-152:

ls -lh /root  | grep -v total | awk '{print $9}'
compose.sh
docker-compose-Linux-x86_64

cat compose.sh

#!/bin/bash

mv docker-compose-Linux-x86_64 /usr/local/bin
chmod +x /usr/local/bin/docker-compose-Linux-x86_64
ln -s /usr/local/bin/docker-compose-Linux-x86_64 /usr/local/bin/docker-compose

Run the script, then verify the installation:
docker-compose version

docker-compose version 1.24.1, build 4667896b
docker-py version: 3.7.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018

1.5. Opening the Swarm-related ports

firewall-cmd --permanent --add-port=2377/tcp   # cluster management port
firewall-cmd --permanent --add-port=7946/tcp   # node-to-node communication
firewall-cmd --permanent --add-port=7946/udp   # node-to-node communication
firewall-cmd --permanent --add-port=4789/udp   # overlay network traffic
firewall-cmd --reload                          # must be run for the rules to take effect
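To confirm the rules are active on each node, list the open ports; a simple sanity check:

firewall-cmd --list-ports     # on nodes 150-152, expect 2377/tcp 7946/tcp 7946/udp 4789/udp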

1.6. Creating the Swarm cluster

Initialize the manager node:
[root@node150 ~]# docker swarm init --advertise-addr 192.168.0.150

Get the token for joining the swarm as a manager:

[root@node150 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4i5dymcrraowmol7h4xajlg9j7k3g8zfzrlc52j2d9hiu0dnuh-a7a70wk06auya8t0f76sovt4g 192.168.0.150:2377

Get the token for joining the swarm as a worker:

[root@node150 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4i5dymcrraowmol7h4xajlg9j7k3g8zfzrlc52j2d9hiu0dnuh-7ttku5tjvtsupzokuv67qm3oe 192.168.0.150:2377

View the swarm cluster:

[root@node150 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
otggb5pry381slnfq42idaf2u *   node150             Ready               Active              Leader              18.03.1-ce
nk8my4s86pqgba0sv5b949zuy     node151             Ready               Active                                  18.03.1-ce
ftt2iu5jopcn21atcnppr5t0v     node152             Ready               Active                                  18.03.1-ce
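Routine node management is done from the manager. The commands below are an illustration rather than a required step: the drain/activate cycle temporarily removes a node from scheduling, and labels can later be used in placement constraints.

docker node inspect node151 --pretty                 # detailed view of a single node
docker node update --availability drain node152      # stop scheduling new tasks on the node
docker node update --availability active node152     # put it back into rotation
docker node update --label-add tier=worker node151   # label usable in deploy placement constraints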

2. Deploying network services

The application middleware deployed later will be accessed by domain name, so a DNS server is needed; an NTP server is also deployed to better simulate the production environment.

2.1. Deploying a local NTP server

Upload the NTP setup script to node153:
cat ntp.sh

#!/bin/bash

# Install the NTP packages and do an initial sync
yum -y install ntp ntpdate
ntpdate ntp1.aliyun.com
ntpdate ntp2.aliyun.com

# Determine the host's network segment; a /24 network (netmask 255.255.255.0) is assumed
netcard=$(ls /etc/sysconfig/network-scripts/ | grep ifcfg | grep -v lo)
card=${netcard//ifcfg-/}
ip_net=$(ip addr | grep $card | grep inet | awk '{print $2}')
ip=${ip_net//\/24/}
a=$(echo $ip | awk -F '.' '{print $1}')
b=$(echo $ip | awk -F '.' '{print $2}')
c=$(echo $ip | awk -F '.' '{print $3}')
net="$a.$b.$c.0"

# Back up the existing ntp configuration
[ -f "/etc/ntp.conf" ] && mv /etc/ntp.conf /etc/ntp.confbak

# Write ntp.conf
cat <<EOF>> /etc/ntp.conf
restrict default nomodify notrap noquery
 
restrict 127.0.0.1
restrict $net mask 255.255.255.0 nomodify    
#Only allow clients in the $net segment to synchronize time. To allow clients from any IP, change this to "restrict default nomodify"
 
server ntp1.aliyun.com
server ntp2.aliyun.com
server time1.aliyun.com
server time2.aliyun.com
server time-a.nist.gov
server time-b.nist.gov
 
server  127.127.1.0     
# local clock
fudge   127.127.1.0 stratum 10
 
driftfile /var/lib/ntp/drift
broadcastdelay  0.008
keys            /etc/ntp/keys
EOF
# Start and enable the service
systemctl restart ntpd
systemctl enable ntpd
systemctl daemon-reload
# Add a cron job
cat <<EOF>> /etc/crontab
0 0,6,12,18 * * * root /usr/sbin/ntpdate ntp1.aliyun.com; /sbin/hwclock -w
EOF
systemctl restart crond

Check that the time is correct with date. Since this host also acts as the local NTP server, open port 123/udp:
firewall-cmd --permanent --add-port=123/udp
firewall-cmd --reload

2.2. Time synchronization on the remote nodes

Install ntpdate on nodes 150-152 and sync against node153:
ntpdate 192.168.0.153
Add a cron job:
echo "*/5 * * * * root /usr/sbin/ntpdate 192.168.0.153" >> /etc/crontab
systemctl restart crond
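To verify that the clients can actually reach the NTP server, a dry-run query is enough; ntpq is only available on node153, where the full ntp package is installed:

ntpdate -q 192.168.0.153      # on nodes 150-152: query only, does not set the clock
ntpq -p                       # on node153: list upstream servers and the local clock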

2.3. Deploying a local DNS server

Open the ports required by the DNS service:
firewall-cmd --permanent --add-port=53/tcp && firewall-cmd --permanent --add-port=53/udp && firewall-cmd --reload
Disable SELinux:

sed -i 's/enforcing/disabled/g' /etc/selinux/config   
sed -i 's/enforcing/disabled/g' /etc/sysconfig/selinux
setenforce 0   

Install bind, bind-utils, and bind-devel:
yum -y install bind-utils bind bind-devel

DNS server configuration file:
cat /etc/named.conf

// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// See the BIND Administrator's Reference Manual (ARM) for details about the
// configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html

options {
        listen-on port 53 { 192.168.0.153;127.0.0.1; };
//      listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };

        /*
         - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
         - If you are building a RECURSIVE (caching) DNS server, you need to enable
           recursion.
         - If your recursive DNS server has a public IP address, you MUST enable access
           control to limit queries to your legitimate users. Failing to do so will
           cause your server to become part of large scale DNS amplification
           attacks. Implementing BCP38 within your network would greatly
           reduce such attack surface
        */
        recursion yes;

        dnssec-enable yes;
//      dnssec-validation yes;

        /* Path to ISC DLV key */
//      bindkeys-file "/etc/named.root.key";

//      managed-keys-directory "/var/named/dynamic";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

Script to add the forward zone:
cat zone_mz.sh

cat <<EOF>> /etc/named.rfc1912.zones
zone "cmdi.chinamobile.com" IN {
        type master;
        file "cmdi.chinamobile.com.zone";
        allow-update { none; };
};
EOF

Script to add the reverse zone:
cat zone_mf.sh

cat <<EOF>> /etc/named.rfc1912.zones
zone "0.168.192.in-addr.arpa" IN {
        type master;
        file "192.168.0.zone";
        allow-update { none; };
};
EOF

Forward zone file:
cat /var/named/cmdi.chinamobile.com.zone

$TTL 1D
@       IN SOA  dns.cmdi.chinamobile.com. admin.cmdi.chinamobile.com. (
                                        0
                                        1D
                                        1H
                                        1W
                                        3H )
        NS      dns.cmdi.chinamobile.com.
        MX 10 mail.cmdi.chinamobile.com.
dns                     IN  A 192.168.0.153
ntp.cmdi.chinamobile.com.       IN  A 192.168.0.153
gitlab.cmdi.chinamobile.com.    IN  A 192.168.0.150
web                     IN  CNAME www

Reverse zone file:
cat /var/named/192.168.0.zone

$TTL 1D
@       IN SOA  dns.cmdi.chinamobile.com. admin.cmdi.chinamobile.com (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      dns.cmdi.chinamobile.com.
        MX 10 mail.cmdi.chinamobile.com.
153              IN  PTR  dns.cmdi.chinamobile.com.
153             IN  PTR  ntp.cmdi.chinamobile.com.
150              IN PTR  gitlab.cmdi.chinamobile.com.

Restart named:
systemctl restart named
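With named running, resolution can be tested directly against node153 (dig is provided by the bind-utils package installed above); a minimal check:

dig @192.168.0.153 gitlab.cmdi.chinamobile.com +short     # expect 192.168.0.150
dig @192.168.0.153 ntp.cmdi.chinamobile.com +short        # expect 192.168.0.153
dig @192.168.0.153 -x 192.168.0.150 +short                # reverse lookup, expect gitlab.cmdi.chinamobile.com.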

3. Deploying the infrastructure components

3.1. Environment variables

Environment variable setup.
Note: the variables below only take effect in the current shell, so load the script with source path.sh rather than running it in a subshell.
On node150:
cat path.sh

#!/bin/bash

# Working directory
export WORK_HOME=/root/docker
echo $WORK_HOME
mkdir -p $WORK_HOME

# LDAP directories
export WORK_HOME_LDAP=$WORK_HOME/ldap
echo $WORK_HOME_LDAP
mkdir -p $WORK_HOME_LDAP/config
mkdir -p $WORK_HOME_LDAP/data

# GitLab directories
export WORK_HOME_GITLAB=$WORK_HOME/gitlab
echo $WORK_HOME_GITLAB
mkdir -p $WORK_HOME_GITLAB/config
mkdir -p $WORK_HOME_GITLAB/data
mkdir -p $WORK_HOME_GITLAB/logs

On node151:
cat path.sh

#!/bin/bash

# Working directory
export WORK_HOME=/root/docker
echo $WORK_HOME
mkdir -p $WORK_HOME

# Jaeger directory
export WORK_HOME_JAEGER=$WORK_HOME/jaeger
echo $WORK_HOME_JAEGER
mkdir -p $WORK_HOME_JAEGER

# Prometheus directories
export WORK_HOME_PROMETHEUS=$WORK_HOME/prometheus
echo $WORK_HOME_PROMETHEUS
mkdir -p $WORK_HOME_PROMETHEUS/data
mkdir -p $WORK_HOME_PROMETHEUS/config

On node152:
cat path.sh

export WORK_HOME=/root/docker
echo $WORK_HOME
mkdir -p $WORK_HOME

export WORK_HOME_NEXUS=$WORK_HOME/nexus
echo $WORK_HOME_NEXUS
mkdir -p $WORK_HOME_NEXUS/data

3.2. Network deployment

Create the following overlay networks for communication between containers in the swarm cluster.
On node150:
cat net.sh

docker network create -d overlay ldap_overlay
docker network create -d overlay deploy_overlay
docker network create -d overlay monitor_overlay
docker network create -d overlay manager_overlay
docker network create -d overlay service_overlay
docker network create -d overlay elk_overlay
docker network create -d overlay database_overlay

The services carried by each network:

Network            Services
ldap_overlay       openldap gitlab nexus jenkins
deploy_overlay     gitlab nexus jenkins
monitor_overlay    nacos prometheus grafana jaeger
manager_overlay    portainer agent
service_overlay    gateway nacos
elk_overlay        elasticsearch logstash kibana
database_overlay   MySQL Redis PostgreSQL
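The networks only need to be created once on the manager; listing them there is a quick sanity check (a worker only shows an overlay network after a task attached to it is scheduled on that node):

docker network ls --filter driver=overlay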

3.3. Deploying OpenLDAP

Pull the image (on node150):
docker pull osixia/openldap:1.3.0

Create docker-compose.yml:

version: '3'
services:
  openldap:
    image: osixia/openldap:1.3.0
    networks: 
      - ldap_overlay
    volumes:
      - $WORK_HOME_LDAP/data:/var/lib/ldap
      - $WORK_HOME_LDAP/config:/etc/ldap/slapd.d
    environment:
      - LDAP_ORGANISATION=CMDI
      - LDAP_DOMAIN=cmdi.chinamobile.com
      - LDAP_ADMIN_PASSWORD=!QAZ1qaz
      - LDAP_READONLY_USER=true
    deploy:
      placement:
        constraints: [node.hostname==node150]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  ldap_overlay:
    external: true

Start the stack:
docker stack deploy -c docker-compose.yml 150

Create the user data:
cat waka9999.ldif

dn: uid=waka9999,dc=cmdi,dc=chinamobile,dc=com
uid: waka9999
cn: waka9999
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: !QAZ1qaz
shadowLastChange: 17779
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/waka9999

cat waka2020.ldif

dn: uid=waka2020,dc=cmdi,dc=chinamobile,dc=com
uid: waka2020
cn: waka2020
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: !QAZ1qaz
shadowLastChange: 17779
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 10005
gidNumber: 10005
homeDirectory: /home/waka2020
memberOf: cn=users,dc=cmdi,dc=chinamobile,dc=com

Create the group data:
cat group.ldif

dn: cn=users,dc=cmdi,dc=chinamobile,dc=com
objectClass: groupOfNames
cn: users
member: uid=waka9999,dc=cmdi,dc=chinamobile,dc=com

Import the data:

docker cp waka9999.ldif  150_openldap.1.bvufim9rtfmuaolsir5ol62d0:/tmp
docker cp waka2020.ldif  150_openldap.1.bvufim9rtfmuaolsir5ol62d0:/tmp
docker cp group.ldif  150_openldap.1.bvufim9rtfmuaolsir5ol62d0:/tmp
docker exec -it  150_openldap.1.bvufim9rtfmuaolsir5ol62d0 bash
cd /tmp
ldapadd -x -H ldap://localhost -D "cn=admin,dc=cmdi,dc=chinamobile,dc=com" -W -f /tmp/waka9999.ldif
ldapadd -x -H ldap://localhost -D "cn=admin,dc=cmdi,dc=chinamobile,dc=com" -W -f /tmp/waka2020.ldif
ldapadd -x -H ldap://localhost -D "cn=admin,dc=cmdi,dc=chinamobile,dc=com" -W -f /tmp/group.ldif

Test:
ldapsearch -LLL -W -x -D "cn=admin,dc=cmdi,dc=chinamobile,dc=com" -H ldap://localhost -b "dc=cmdi,dc=chinamobile,dc=com"
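The same search can be narrowed to a single entry to confirm that the imported data is present, for example (run inside the OpenLDAP container as above):

ldapsearch -LLL -W -x -D "cn=admin,dc=cmdi,dc=chinamobile,dc=com" -H ldap://localhost -b "dc=cmdi,dc=chinamobile,dc=com" "(uid=waka9999)"
ldapsearch -LLL -W -x -D "cn=admin,dc=cmdi,dc=chinamobile,dc=com" -H ldap://localhost -b "dc=cmdi,dc=chinamobile,dc=com" "(cn=users)"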

3.4. Deploying GitLab

Pull the image (on node150):
docker pull gitlab/gitlab-ce

Create gitlab.rb:
cat $WORK_HOME_GITLAB/gitlab.rb

external_url 'http://gitlab.cmdi.chinamobile.com'
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS'
main: # 'main' is the GitLab 'provider ID' of this LDAP server
 label: 'LDAP'
 host: '150_openldap'
 port: 389
 uid: 'uid'
 encryption: 'plain' # "start_tls" or "simple_tls" or "plain"
 bind_dn: 'cn=readonly,dc=cmdi,dc=chinamobile,dc=com'
 password: 'readonly'
 verify_certificates: true
 active_directory: true
 allow_username_or_email_login: true
 block_auto_created_users: false
 base: 'DC=cmdi,DC=chinamobile,DC=com'
 user_filter: ''
EOS
prometheus['enable'] = false
node_exporter['enable'] = false

Configure email and the GitLab worker settings:
cat $WORK_HOME_GITLAB/config/gitlab.rb

unicorn['worker_timeout'] = 60
unicorn['worker_processes'] = 2
postgresql['shared_buffers'] = "200MB"
postgresql['max_worker_processes']
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.qq.com"
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = "439757183@qq.com"
gitlab_rails['smtp_password'] = "ptelmyyzpnvpcaig"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = true
gitlab_rails['gitlab_email_from'] = "439757183@qq.com"

Create docker-compose.yml:

version: "3"
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    ports:
      - 80:80
    networks:
      - ldap_overlay
      - deploy_overlay
    volumes:
      - "$WORK_HOME_GITLAB/config:/etc/gitlab"
      - "$WORK_HOME_GITLAB/logs:/var/log/gitlab"
      - "$WORK_HOME_GITLAB/data:/var/opt/gitlab"
      - "$WORK_HOME_GITLAB/gitlab.rb:/omnibus_config.rb"
    environment:
      GITLAB_OMNIBUS_CONFIG: "from_file('/omnibus_config.rb')"
    deploy:
      placement:
        constraints: [node.hostname==node150]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  ldap_overlay:
    external: true
  deploy_overlay:
    external: true

Start the service:
docker stack deploy -c docker-compose.yml 150

Note: to use the two LDAP accounts, a mail service must be configured so that emails can be sent to the mailboxes bound to those accounts; see an earlier article for deploying a mail service. One more remark: for SMTP on self-hosted virtual machines you normally do not need to open port 465, but in the cloud, especially on Alibaba Cloud, the port must also be opened in the provider's web firewall console.

Access the service:
http://gitlab.cmdi.chinamobile.com
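For this hostname to resolve, nodes 150-152 and the workstation running the browser must use node153 as their DNS server. A minimal sketch on CentOS 7 (ifcfg-eth0 is an example name; adjust to the actual interface):

echo "DNS1=192.168.0.153" >> /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network     # regenerates /etc/resolv.conf with "nameserver 192.168.0.153"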

On first login, set the administrator password.
Log in as root with that password.
After logging in as the administrator, disable self-registration.
Click through the places shown below.
06wu-biao-ti
06go111
Once disabled, the Register tab no longer appears on the login page.

Log in with the LDAP accounts waka9999 and waka2020.

To serve GitLab on a port other than 80:
cat $WORK_HOME_GITLAB/gitlab.rb

external_url 'http://gitlab.cmdi.chinamobile.com:8000'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS'
main: # 'main' is the GitLab 'provider ID' of this LDAP server
 label: 'LDAP'
 host: '150_openldap'
 port: 389
 uid: 'uid'
 encryption: 'plain' # "start_tls" or "simple_tls" or "plain"
 bind_dn: 'cn=readonly,dc=cmdi,dc=chinamobile,dc=com'
 password: 'readonly'
 verify_certificates: true
 active_directory: true
 allow_username_or_email_login: true
 block_auto_created_users: false
 base: 'DC=cmdi,DC=chinamobile,DC=com'
 user_filter: ''
EOS
prometheus['enable'] = false
node_exporter['enable'] = false

Change the port mapping in docker-compose.yml to the following:

 ports:
      - 8000:8000
      - 2224:22
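With these settings the web UI is served on port 8000 and Git over SSH uses port 2224, so both ports must be opened on node150 like the other published ports; the clone URLs below are an illustration with a hypothetical group/project path:

firewall-cmd --permanent --add-port=8000/tcp && firewall-cmd --permanent --add-port=2224/tcp && firewall-cmd --reload
git clone http://gitlab.cmdi.chinamobile.com:8000/mygroup/myproject.git
git clone ssh://git@gitlab.cmdi.chinamobile.com:2224/mygroup/myproject.git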

3.5. Deploying Nexus

Pull the image (on node152):
docker pull sonatype/nexus3
Create docker-compose.yml:

version: "3"
services:
  nexus:
    image: sonatype/nexus3:latest
    ports:
      - 8081:8081
      - 8082:8082
    networks:
      - ldap_overlay
      - deploy_overlay
    volumes:
      - "$WORK_HOME_NEXUS/data:/nexus-data"
    deploy:
      placement:
        constraints: [node.hostname==node152]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  ldap_overlay:
    external: true
  deploy_overlay:
    external: true

Open the mapped ports before starting the service.
Only the corresponding ports on node152 need to be opened; the other nodes do not.
firewall-cmd --permanent --add-port=8081/tcp --zone=public
firewall-cmd --permanent --add-port=8082/tcp --zone=public && firewall-cmd --reload

Start the service:
docker stack deploy -c docker-compose.yml 152
One point needs special attention: run chmod 777 $WORK_HOME_NEXUS/data first, otherwise the service will not start properly.

Access the service:
http://192.168.0.152:8081 or http://192.168.0.150:8081 or http://192.168.0.151:8081

Log in with the default credentials the first time, then change the administrator password and disable anonymous access.
Under Security - LDAP, create a new LDAP connection and test it.

06git8

The fields marked with * above can be filled in using the read-only bind account and password from GitLab; see the LDAP settings in gitlab.rb.

Once the connection test succeeds, assign roles and groups to the users.
06git9
06git10

Even after an LDAP account can log in, it cannot be used right away; an administrator must first grant it privileges.
06git11
06git12

Create the local Maven repository and Docker registry

  • Create a Blob store
    Settings - Repository - Blob Stores
  • Create a hosted Maven repository
    Repositories - Create repository - choose the hosted maven type; when configuring it, select the Blob store created above as storage
  • Create a hosted Docker registry
    Similar to the hosted Maven repository: choose the hosted docker type and select the Blob store as storage when configuring it
    06png11
    06png112

Import local library files into maven-local:
cat import.sh

#!/bin/bash

while getopts ":r:u:" opt
do
    case $opt in 
    r) REPO_URL=$OPTARG;;
    u) USER_NAME=$OPTARG;;
    esac
done

# run from the root of the local repository directory; find lists the files to import
find . -type f | sed 's/^\.\///g' | xargs -I {} curl -u "$USER_NAME:!QAZqaz" -X PUT -v -T {} ${REPO_URL}/{}

Run the script:
bash import.sh -u waka9999 -r http://192.168.0.152:8081/maven-local

Configure the local Docker registry (on nodes 150-152):
cat /etc/docker/daemon.json

{"registry-mirrors":["https://nr630v1c.mirror.aliyuncs.com"],"insecure-registries":["192.168.0.152:8082"]}

systemctl restart docker

docker login 192.168.0.152:8082
Log in with an LDAP account.
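Once the login succeeds, pushing works by tagging an image with the registry prefix; as an illustration (the test/nexus3 repository path is arbitrary and the image must already exist locally, for example the nexus3 image pulled on node152):

docker tag sonatype/nexus3:latest 192.168.0.152:8082/test/nexus3:latest
docker push 192.168.0.152:8082/test/nexus3:latest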

3.6. Deploying Jaeger

Pull the image:
docker pull jaegertracing/all-in-one
Create docker-compose.yml:

version: "3"
services:
    jaeger:
      image: jaegertracing/all-in-one:latest
      ports:
        - 16686:16686
      networks:
        - monitor_overlay
      volumes:
        - "/etc/localtime:/etc/localtime:ro"
      deploy:
        placement:
          constraints: [node.hostname==node151]
        restart_policy:
          condition: any
          delay: 5s
          max_attempts: 3
networks:
  monitor_overlay:
    external: true

Open the port:
firewall-cmd --add-port=16686/tcp --permanent && firewall-cmd --reload

Start the service:
docker stack deploy -c docker-compose.yml 151

Access the service:
http://192.168.0.151:16686

3.7. Deploying Prometheus

Pull the image:
docker pull prom/prometheus:latest

Create the docker-compose.yml file:
cat docker-compose.yml

version: "3.6"
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - 9090:9090
    networks:
      - monitor_overlay
    volumes:
      - "$WORK_HOME_PROMETHEUS/data:/data/lib/prometheus"
      - "$WORK_HOME_PROMETHEUS/config/prometheus.yaml:/etc/prometheus/prometheus.yaml:ro"
      - "/etc/localtime:/etc/localtime:ro"
    command:
      - --config.file=/etc/prometheus/prometheus.yaml
      - --storage.tsdb.path=/data/lib/prometheus
      - --storage.tsdb.retention=90d
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
  cnode150:
    image: google/cadvisor:latest
    networks:
     - monitor_overlay
    volumes:
     - /:/rootfs:ro
     - /var/run:/var/run:rw
     - /sys:/sys:ro
     - /var/lib/docker/:/var/lib/docker:ro
     - "/etc/localtime:/etc/localtime:ro"
    deploy:
      placement:
        constraints: [node.hostname==node150]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
  cnode151:
    image: google/cadvisor:latest
    networks:
      - monitor_overlay
    volumes:
     - /:/rootfs:ro
     - /var/run:/var/run:rw
     - /sys:/sys:ro
     - /var/lib/docker/:/var/lib/docker:ro
     - "/etc/localtime:/etc/localtime:ro"
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
  cnode152:
    image: google/cadvisor:latest
    networks:
      - monitor_overlay
    volumes:
     - /:/rootfs:ro
     - /var/run:/var/run:rw
     - /sys:/sys:ro
     - /var/lib/docker/:/var/lib/docker:ro
     - "/etc/localtime:/etc/localtime:ro"
    deploy:
      placement:
        constraints: [node.hostname==node152]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
  enode150:       
    image: prom/node-exporter:latest
    networks:
      - monitor_overlay
    volumes:       
      - /proc:/host/proc:ro       
      - /sys:/host/sys:ro       
      - /:/rootfs:ro       
      - "/etc/localtime:/etc/localtime:ro"
    command:       
      - '--path.procfs=/host/proc'       
      - '--path.sysfs=/host/sys'       
      - '--path.rootfs=/host'       
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
      - --collector.filesystem.ignored-fs-types
      - "^/(sys|proc|auto|cgroup|devpts|ns|au|fuse.lxc|mqueue)(fs|)$$"
    deploy:
      placement:
        constraints: [node.hostname==node150]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
  enode151:       
    image: prom/node-exporter:latest
    networks:
      - monitor_overlay
    volumes:       
      - /proc:/host/proc:ro       
      - /sys:/host/sys:ro       
      - /:/rootfs:ro       
      - "/etc/localtime:/etc/localtime:ro"
    command:       
      - '--path.procfs=/host/proc'       
      - '--path.sysfs=/host/sys'       
      - '--path.rootfs=/host'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
      - --collector.filesystem.ignored-fs-types
      - "^/(sys|proc|auto|cgroup|devpts|ns|au|fuse.lxc|mqueue)(fs|)$$"
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
  enode152:       
    image: prom/node-exporter:latest
    networks:
      - monitor_overlay
    volumes:       
      - /proc:/host/proc:ro       
      - /sys:/host/sys:ro       
      - /:/rootfs:ro       
      - "/etc/localtime:/etc/localtime:ro"
    command:       
      - '--path.procfs=/host/proc'       
      - '--path.sysfs=/host/sys'       
      - '--path.rootfs=/host'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
      - --collector.filesystem.ignored-fs-types
      - "^/(sys|proc|auto|cgroup|devpts|ns|au|fuse.lxc|mqueue)(fs|)$$"
    deploy:
      placement:
        constraints: [node.hostname==node152]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  monitor_overlay:
    external: true

Create the prometheus.yaml file:
cat prometheus.yaml

global:
  scrape_interval: 15s
  external_labels:
    monitor: 'microservice-monitor'
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090','cnode150:8080','cnode151:8080','cnode152:8080','enode150:9100','enode151:9100','enode152:9100']
  - job_name: 'nacos'
    scrape_interval: 5s
    metrics_path: '/nacos/actuator/prometheus'
    static_configs:
      - targets: ['nacos:8848']

Open the port:
firewall-cmd --add-port=9090/tcp --permanent && firewall-cmd --reload
Start the service:
docker stack deploy -c docker-compose.yml monitor
Access the service:
http://192.168.0.151:9090/targets
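The scrape targets can also be checked from the command line through Prometheus's HTTP API; a minimal check:

curl -s http://192.168.0.151:9090/api/v1/targets              # lists the cadvisor/node-exporter targets and their health
curl -s 'http://192.168.0.151:9090/api/v1/query?query=up'     # the "up" metric should be 1 for every target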

3.8. Deploying Grafana

Pull the image (node151):
docker pull grafana/grafana
Create the docker-compose.yml file:
cat docker-compose.yml

version: "3"
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - 3000:3000
    networks:
      - monitor_overlay
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - $WORK_HOME_GRAFANA/data:/var/lib/grafana
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  monitor_overlay:
    external: true

Start the service:
docker stack deploy -c docker-compose.yml 151
Note: the mounted data directory must be writable by the container user; the Grafana image runs as the unprivileged grafana user by default, so give the data directory write permission for "other" users (or change its owner).
Access the service:
http://192.168.0.151:3000
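Note that $WORK_HOME_GRAFANA is not created by the path.sh shown earlier, so it has to be defined and prepared before the stack starts; a sketch under that assumption (the official Grafana image runs as an unprivileged user, uid 472 in recent versions, hence the permission change), plus opening the published port as for the other services:

export WORK_HOME_GRAFANA=$WORK_HOME/grafana
mkdir -p $WORK_HOME_GRAFANA/data
chmod o+w $WORK_HOME_GRAFANA/data         # or: chown -R 472:472 $WORK_HOME_GRAFANA/data
firewall-cmd --add-port=3000/tcp --permanent && firewall-cmd --reload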

3.9. Deploying Nacos

Pull the image (node151):
docker pull nacos/nacos-server
Create the configuration file:
cat $WORK_HOME_NACOS/config/custom.properties

management.endpoints.web.exposure.include=*

Create the docker-compose.yml file:
cat docker-compose.yml

version: "3"
services:
  nacos:
    image: nacos/nacos-server:latest
    ports:
      - 8848:8848
    networks:
      - monitor_overlay
    volumes:
      - "$WORK_HOME_NACOS/logs:/home/nacos/logs"
      - "$WORK_HOME_NACOS/config/custom.properties:/home/nacos/init.d/custom.properties"
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - MODE=standalone
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  monitor_overlay:
    external: true

Start the service:
docker stack deploy -c docker-compose.yml 151
Access the service:
http://192.168.0.151:8848/nacos

Add monitoring:
cat $WORK_HOME_NACOS/config/custom.properties

management.endpoints.web.exposure.include=*

cat prometheus.yaml

global:
  scrape_interval: 15s
  external_labels:
    monitor: 'microservice-monitor'
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090','cnode150:8080','cnode151:8080','cnode152:8080','enode150:9100','enode151:9100','enode152:9100']
  - job_name: 'nacos'
    scrape_interval: 5s
    metrics_path: '/nacos/actuator/prometheus'
    static_configs:
      - targets: ['nacos:8848']

Note that the YAML above includes the earlier configuration for monitoring Prometheus itself; it is a complete file. If more components need to be monitored later, add them to this same file in the same way.

3.10. Deploying Portainer

Portainer can manage the swarm cluster.
Pull the portainer image (node150):
docker pull portainer/portainer
Pull the agent image (nodes 150-152):
docker pull portainer/agent

Create the docker-compose.yml file:
cat docker-compose.yml

version: '3'
services:
  agent:
    image: portainer/agent
    networks:
      - manager_overlay
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]
  portainer:
    image: portainer/portainer
    ports:
      - 9000:9000
    networks:
      - manager_overlay
    volumes:
      - "$WORK_HOME_PORTAINER/data:/data"
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  manager_overlay:
    external: true

Start the service:
docker stack deploy -c docker-compose.yml manager
Access the service:
http://192.168.0.150:9000

3.11. Deploying Elasticsearch

Pull the image (node152):
docker pull elasticsearch:7.7.0
Create the docker-compose.yml file:
cat docker-compose.yml

version: "3"
services:
  elasticsearch:
    image: elasticsearch:7.7.0
    networks:
      - elk_overlay
    ports:
      - "9200:9200"
    volumes:
      - "$WORK_HOME_ELASTICSEARCH/data:/usr/share/elasticsearch/data"
      - "$WORK_HOME_ELASTICSEARCH/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - discovery.type=single-node
    deploy:
      placement:
        constraints: [node.hostname==node152]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  elk_overlay:
    external: true

Create the configuration file:
cat $WORK_HOME_ELASTICSEARCH/config/elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0

Start the service:
docker stack deploy -c docker-compose.yml 152
Check the installation:
docker exec containerID curl 127.0.0.1:9200 prints:

{
  "name" : "1a124abad1e5",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "6hRSGjx0S2yPlKjXNFT61g",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

The service can also be reached at:
http://192.168.0.152:9200
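Basic index operations work the same way against the published port; for example (test-index is an arbitrary name used only for this check):

curl -X PUT http://192.168.0.152:9200/test-index       # create an index
curl http://192.168.0.152:9200/_cat/indices?v          # list indices and their health
curl -X DELETE http://192.168.0.152:9200/test-index    # clean up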

3.12. Deploying Logstash

Pull the image (node152):
docker pull logstash:7.7.0
Create the docker-compose.yml file:
cat docker-compose.yml

version: "3"
services:
  logstash:
    image: logstash:7.7.0
    networks:
      - elk_overlay
    volumes:
      - "$WORK_HOME_LOGSTASH/pipeline:/usr/share/logstash/pipeline"
      - "/etc/localtime:/etc/localtime:ro"
    deploy:
      placement:
        constraints: [node.hostname==node152]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  elk_overlay:
    external: true

Create the configuration file:
cat $WORK_HOME_LOGSTASH/pipeline/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
        index => "test-logstash"
  }
}

Start the service:
docker stack deploy -c docker-compose.yml 152

3.13. Deploying Kibana

Pull the image (node151):
docker pull kibana:7.7.0
Create the docker-compose.yml file:
cat docker-compose.yml

version: "3"
services:
  kibana:
    image: kibana:7.7.0
    ports:
      - 5601:5601
    networks:
      - elk_overlay
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "$WORK_HOME_KIBANA/config/kibana.yml:/usr/share/kibana/config/kibana.yml"
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  elk_overlay:
    external: true

Create the configuration file:
cat $WORK_HOME_KIBANA/config/kibana.yml

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]

Start the service:
docker stack deploy -c docker-compose.yml 151
Access the service:
http://192.168.0.151:5601

PS: an important point. Services on the same overlay network are bound to a VIP through the swarm's internal DNS: resolving a service name returns its virtual IP, and that VIP reaches the backing containers via the routing mesh (the default) or DNS round-robin. In swarm a service is normally addressed by its name, because the VIP can change after the service is restarted or re-created. Services on the same network can reach each other by name without publishing any ports: for example, Kibana reaches Elasticsearch by the name elasticsearch, so Elasticsearch does not need to publish 9200 on the host for that, and the same applies to Logstash forwarding data to Elasticsearch. Publishing a port to the host is only required when a service must be reachable from outside the cluster, as with Kibana here.
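This name-based discovery can be spot-checked from inside any container attached to the same network, for instance from the Logstash task (assuming curl is available in that image and that the stack was deployed under the name 152 as above):

docker exec -it $(docker ps -q -f name=152_logstash) curl -s http://elasticsearch:9200
# "elasticsearch" is the same name used in kibana.yml and logstash.conf; no published port is needed for this in-network access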

3.14. Deploying Redis

Pull the image (node151):
docker pull redis
Create the docker-compose.yml file:
cat docker-compose.yml

version: "3"
services:
  redis:
    image: redis:latest
    networks:
      - database_overlay
    volumes:
      - "$WORK_HOME_REDIS/data:/data"
      - "/etc/localtime:/etc/localtime:ro"
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  database_overlay:
    external: true

Start the service:
docker stack deploy -c docker-compose.yml 151

3.15. Deploying MySQL

Pull the image (node151):
docker pull mysql:5.7
Create docker-compose.yml:
cat docker-compose.yml

version: "3"
services:
  mysql:
    image: mysql:5.7
    networks:
      - database_overlay
    volumes:
      - "$WORK_HOME_MYSQL/data:/var/lib/mysql"
      - "$WORK_HOME_MYSQL/logs:/var/log/mysql"
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - MYSQL_ROOT_PASSWORD=!QAZ1qaz
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  database_overlay:
    external: true

Start the service:
docker stack deploy -c docker-compose.yml 151

3.16. Deploying PostgreSQL

Pull the image (node151):
docker pull postgres
Create the docker-compose.yml file:
cat docker-compose.yml

version: "3"
services:
  postgres:
    image: postgres:latest
    networks:
      - database_overlay
    volumes:
      - "$WORK_HOME_POSTGRES/data:/var/lib/postgresql/data"
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - POSTGRES_PASSWORD=!QAZ1qaz
    deploy:
      placement:
        constraints: [node.hostname==node151]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  database_overlay:
    external: true

Start the service:
docker stack deploy -c docker-compose.yml 151

3.17. Deploying Jenkins

Pull the image (node150):
docker pull jenkinsci/blueocean
Create the docker-compose.yml file:
cat docker-compose.yml

version: "3"
services:
  jenkins:
    image: jenkinsci/blueocean:latest
    ports:
      - 8088:8080
    networks:
      - ldap_overlay
      - deploy_overlay
    volumes:
      - "$WORK_HOME_JENKINS/data:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/etc/localtime:/etc/localtime:ro"
    deploy:
      placement:
        constraints: [node.hostname==node150]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  ldap_overlay:
    external: true
  deploy_overlay:
    external: true

Permissions on the host mount directory need to be adjusted, because the image runs as the unprivileged jenkins user.
Create the jenkins user and group on the host:

groupadd jenkins
useradd -g jenkins jenkins   
chown -R jenkins.jenkins $WORK_HOME_JENKINS/data    
chmod 666 /var/run/docker.sock

You might wonder why the ownership is changed for the data directory but only the mode for the docker.sock socket file: the Jenkins data can only be written by the jenkins user, so its owner and group must be changed, whereas docker.sock has no such strict ownership requirement; it only needs to be readable and writable by users other than root, so changing the "other" permission bits is enough.

Start the service:
docker stack deploy -c docker-compose.yml 150

Access the service:
http://192.168.0.150:8088

Install plugins:
gitlab-plugin
Pipeline Utility Steps
Publish Over SSH

LDAP integration
Install the LDAP plugin:
ldap-plugin
Configure LDAP
07jenkins41

07jenkins42
Note: fill in the Manager Password above according to the LDAP settings used for the GitLab integration.
07jenkins43

3.18. Deploying nginx

Pull the image (node150):
docker pull nginx
Create the docker-compose.yml file:
cat docker-compose.yml

version: "3.4"
services:
  nginx:
    image: nginx:latest
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
    networks:
      - service_overlay
    volumes:
      - "$WORK_HOME_NGINX/config/mime.types:/etc/nginx/mime.types"
      - "$WORK_HOME_NGINX/config/conf.d:/etc/nginx/conf.d"
      - "$WORK_HOME_NGINX/config/nginx.conf:/etc/nginx/nginx.conf:ro"
      - "$WORK_HOME_NGINX/logs:/var/log/nginx"
      - "/etc/localtime:/etc/localtime:ro"
    deploy:
      placement:
        constraints: [node.hostname==node150]
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  service_overlay:
    external: true

Create the configuration file:
cat $WORK_HOME_NGINX/config/conf.d/default.conf

server {
   listen       80;
   server_name  localhost;

   #charset koi8-r;
   #access_log  /var/log/nginx/host.access.log  main;

    location / { 
       root   /usr/share/nginx/html;
       index  index.html index.htm;
}
}

#upstream nacos-gateway{
#server nacos-gateway:18847;
#}
#server {
#listen 80;
#server_name nginx;
#client_max_body_size 20m;
#client_body_buffer_size 128k;
#location / {
#proxy_pass http://nacos-gateway;
#proxy_set_header Host $host;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Real-IP $remote_addr;
#}
#location ~ .*\.(js|css)$
#{
#root /usr/share/nginx/html;
#expires 1d;
#}
#error_page 500 502 503 504 /50x.html;
#location = /50x.html {
#root /usr/share/nginx/html;
#}
#}
#}

Production uses the commented-out server block; a basic server block is used here just to verify that the service comes up.

Start the service:
docker stack deploy -c docker-compose.yml 150

3.19. Deploying the local microservice

The application is started through a wrapper; the wrapper-hello application is used here to demonstrate service registration via the wrapper.
First, a look at the wrapper's file layout:

total 17000
drwxr-xr-x. 2 root root       24 Jul 29 22:39 config
-rw-r--r--. 1 root root      410 Jul 29 14:12 Dockerfile
-rw-r--r--. 1 root root      571 Jun 20 11:29 launcher.sh
drwxr-xr-x. 3 root root       33 Jul 11 11:42 monolith
-rw-r--r--. 1 root root 17399675 Jul 26 16:36 wrapper

A brief description of the files above:
wrapper is the wrapper executable
monolith is the directory holding the application
launcher.sh is the application launch script
config holds the wrapper's configuration files

Create docker-compose.yml:
cat docker-compose.yml

version: "3.4"
services:
  hello:
    image: 192.168.0.152:8082/waka2020/wrapper-hello:latest
    networks:
      - service_overlay
      - monitor_overlay
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
    environment:
      - CENTER_HOSTNAME=nacos
      - SERVICE_ETHERNET_NAME=eth1
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
networks:
  service_overlay:
    external: true
  monitor_overlay:
    external: true

The NIC registered with Nacos in the compose file is the interface attached to the service_overlay network inside the application container; adjust it to your actual environment.

View the wrapper configuration file:
cat config/config.yml

# Wrapper configuration
wrapper:
# Working mode, default "proxy"
# "launcher": launcher mode, registers and deregisters the monolith with the registry and controls it through the launch script
# "proxy": proxy mode, includes the launcher functionality and also runs an HTTP service that forwards requests to the monolith and can log and analyse them
# mode: "proxy"

# Hostname of the HTTP proxy service, default "0.0.0.0", i.e. accept everything
# Applies to the HTTP service when the wrapper runs in proxy mode, i.e. http://{hostname}:{port}
# hostname: "0.0.0.0"

# HTTP proxy service port, default 58080
# port: 58080

# Retry count for proxied requests, default 3
# If forwarding a request to the monolith fails it is retried; if it still fails after the retry limit the monolith is restarted
# request_retry: 3

# Whether to filter requests against the monolith's API list; default false, i.e. transparent mode
# If true, requests are filtered against the API map and access to unlisted APIs is blocked; the current version is transparent and only supports false
# Filtered APIs must return an error to the caller, e.g. 405 method not allowed; the exact format is to be confirmed
# filter_api: false
# API URLs to be filtered; the exact list must be agreed with the monolith
# api:
# data: "/data"

# Health-check URL of the monolith, used to confirm it is running; the current version supports GET and checks for HTTP status 200. If the status is wrong the monolith is restarted.
# http://{monolith_ip}:{port}/{check_url}
    check_url: "http://127.0.0.1:8080/data"

# Health-check interval, default 5 seconds
# check_interval: 5

# Restart interval, default 5 seconds
# Forwarding is concurrent, so several threads may trigger a restart at once; the interval limits this. If the monolith was just restarted and cannot serve yet, 503 Service Unavailable is returned.
# restart_interval: 5

# Monolith control script, default "./launcher.sh"
# script_path: "./launcher.sh"

# monolith configuration
# monolith:
# IP on which the monolith receives forwarded requests, default "127.0.0.1".
# This is the monolith host the wrapper proxy forwards to, i.e. http://{IP}:{port}.
# "127.0.0.1" prevents other microservices from reaching the monolith directly over the internal overlay network.
# If the wrapper is not in proxy mode this can be set to 0.0.0.0
# ip: "127.0.0.1"

# Port on which the monolith receives forwarded requests, default 8080
# port: 8080

# center configuration
center:
    # Registry hostname, default "nacos".
    hostname: "nacos"

    # Registry port, default 8848
    # port: 8848

# service configuration
service:
    # Namespace id; must be created manually in nacos
    namespace_id: "d06f858f-bae6-4be4-9d4d-9b0f506cbd10"

    # Name under which the service is registered
    service_name: "wrapper-hello"

    # Metadata
    metadata:
        service_type: "TYPE"
        company_name: "NAME"

    # Service names to query
    query_name: ["wrapper-hello"]

    # Cluster name, default "DEFAULT"
    # cluster_name: "DEFAULT"

    # Group name, default "DEFAULT_GROUP"
    # group_name: "DEFAULT_GROUP"

    # Username, default "nacos"
    # username

    # Password, default "nacos"
    # password

    # Network interface name, default "eth0"
    ethernet_name: "eth1"

# prometheus configuration
# prometheus:
# Whether to enable prometheus, default true
# enable: true

# jaeger configuration
jaeger:
    # Whether to enable tracing, default true
    # enable: true

    # Tracer name, default "jaeger-service"
    name: "wrapper-hello-tracer"

    # Sampling rate, default 0.1
    sampler: 1
    # Agent endpoint, default ":6831"
    endpoint: "jaeger:6831"
# Pool settings, the Job-Worker thread pool
# pool:
# Dispatcher name, default "task"
# dispatcher_name: "task"

# Queue length, default 200
# queue_capacity: 200

# Job capacity, default 50
# job_capacity: 50

# Worker capacity, default 20
# worker_capacity: 20

# breaker settings
# breaker:
# Threshold, default 20;
# the breaker opens after 20 errors
# threshold: 20

# Refresh period, default 10000 milliseconds
# tick: 10000

# timing wheel settings
# timing_wheel:
# Capacity of one wheel revolution, default 60
# capacity: 60

# Minimum tick interval, default 100 milliseconds
# tick: 100

Two settings in this file must be adjusted for the actual environment:

  • the NIC used to register the service with Nacos is the one on the service_overlay network; the default is eth0, but in this test it was eth1
  • the Nacos namespace must be created manually in Nacos and its ID filled in here

Create the Nacos namespace manually
07jenkins44
Replace the Nacos namespace ID in config/config.yml above with the namespace_id obtained here.

Build the wrapper-hello image:
cat Dockerfile

FROM alpine:latest
LABEL maintainer="waka2020"
EXPOSE 58080/tcp
EXPOSE 8000/tcp
RUN mkdir -p /service/config
RUN mkdir -p /service/monolith
WORKDIR /service
ADD wrapper /service
ADD launcher.sh /service
ADD config/ /service/config
ADD monolith /service/monolith
RUN apk add --update curl && rm -rf /var/cache/apk/*
RUN chmod +x wrapper
RUN chmod +x monolith/hello
RUN chmod +x launcher.sh
CMD /service/wrapper

docker build -t waka2020/wrapper-hello:0.1.0 .
Push to the private registry:
docker tag waka2020/wrapper-hello:0.1.0 192.168.0.152:8082/waka2020/wrapper-hello:0.1.0
docker push 192.168.0.152:8082/waka2020/wrapper-hello:0.1.0
docker tag waka2020/wrapper-hello:0.1.0 192.168.0.152:8082/waka2020/wrapper-hello:latest
docker push 192.168.0.152:8082/waka2020/wrapper-hello:latest
Remove the registry-tagged local images:
docker rmi -f 192.168.0.152:8082/waka2020/wrapper-hello:0.1.0
docker rmi -f 192.168.0.152:8082/waka2020/wrapper-hello:latest

Write an application deployment script:
cat start.sh

#!/bin/bash

echo "Lanbroad202" | docker login 192.168.0.152:8082 -u admin --password-stdin 
docker stack deploy -c docker-compose.yml --with-registry-auth --prune --resolve-image always wrapper

This script automatically pulls the newest image from the registry; after the pull the previously cached image is left tagged <none>, note, not latest, because a newer image in the registry replaces it.
This is why each version, 0.1.0 here for example, is pushed to the registry under both its version tag and latest: once a 0.2.0 exists, tagging it latest and pushing it overwrites the latest that pointed at 0.1.0, while 0.2.0 is also kept under its own tag. That way the version in docker-compose.yml never has to change when updating the service; it always refers to the newest release.
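The release flow this implies, sketched for a hypothetical next version 0.2.0:

docker build -t waka2020/wrapper-hello:0.2.0 .
docker tag waka2020/wrapper-hello:0.2.0 192.168.0.152:8082/waka2020/wrapper-hello:0.2.0
docker tag waka2020/wrapper-hello:0.2.0 192.168.0.152:8082/waka2020/wrapper-hello:latest
docker push 192.168.0.152:8082/waka2020/wrapper-hello:0.2.0
docker push 192.168.0.152:8082/waka2020/wrapper-hello:latest   # latest now points at 0.2.0
bash start.sh                                                  # redeploys; --resolve-image always picks up the new latest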

Run the script to deploy the application.

nginx serves the front-end code; the configuration:

server {
        listen       80;
        server_name  nginx;
        client_max_body_size 20m;
        client_body_buffer_size 128K;

        location / {
          root /usr/share/nginx/html-hello;
          index index.html;
          try_files $uri $uri/ /index.html;
          proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header  X-Real-IP $remote_addr;
        }

        location /api/hello/data {
           proxy_set_header  Host  $host;
          proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header  X-Real-IP $remote_addr;
          proxy_pass http://hello:58080/data;
        }


       location ~ .*\.(js|css)$
        {
          root /usr/share/nginx/html-hello;
          expires 1d;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
          root   /usr/share/nginx/html;
        }
    }

Restart nginx, then visit:
http://192.168.0.150
08api2
08api5
08api4

A rough description of the request flow:

  • the browser opens the nginx front end at http://192.168.0.150/
  • the page issues a GET for the data at http://10.254.222.150/api/hello/data, and nginx maps that URL to http://hello:58080/data
  • the wrapper, running in proxy mode, accepts the request and forwards it to http://localhost:8080/data
  • the hello application simulates a monolith and serves the data at http://localhost:8080/data

According to the compose file, the wrapper-hello service schedules three replica containers (as shown above).
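The deployment can be checked from the manager, and the proxied API exercised through nginx; the service name below assumes start.sh deployed the stack under the name wrapper:

docker service ls                          # wrapper_hello should show 3/3 replicas
docker service ps wrapper_hello            # shows which nodes the three tasks landed on
curl http://192.168.0.150/api/hello/data   # end-to-end test: nginx -> wrapper proxy -> hello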

The key to registering the application is the wrapper; it works as follows:

  • the image runs the wrapper executable and exposes port 58080
  • on container start the wrapper first registers with Nacos and only continues if that succeeds
  • it then starts the hello application through launcher.sh, checks it according to the configuration, and on success starts sending heartbeats to Nacos
  • external requests hitting the wrapper proxy address are forwarded to local port 8080
  • currently the wrapper only validates Nacos access at startup and exits if registration fails; it does not restart itself after heartbeat or service-discovery failures

Nacos registration status
08api6

Jaeger tracing status
Proxy trace
08api7
08api8
In the timings above, start time is the time the wrapper proxy takes to forward the request to local port 8080, and duration time is the time from when the local application receives the forwarded request to when it returns the response to the proxy.

Check trace
08api9
08api10

Prometheus monitoring status
08api11