ip            | Memory (GB) | CPU (cores)
192.168.1.166 | 16          | 4
192.168.1.167 | 16          | 4
192.168.1.168 | 16          | 4
Make sure Docker is installed on all machines before proceeding.
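A quick sanity check on each host (optional, standard Docker commands) verifies that the Docker Engine is running and whether Swarm mode is already active:

docker version
docker info | grep -i swarm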
2.2 Initialize the Swarm cluster
Run the following on 192.168.1.166:
$ docker swarm init --advertise-addr 192.168.1.166
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    192.168.1.166:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
2.3 Add worker nodes
Using the join command printed by the init output, add the Swarm worker nodes by running the following on 192.168.1.167 and 192.168.1.168 respectively:
docker swarm join \
--token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
192.168.1.166:2377
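If the join command from the init output is no longer at hand, the worker join token can be printed again at any time on the manager node (a standard Docker command):

docker swarm join-token worker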
2.4 Verify the cluster
docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY    MANAGER STATUS    ENGINE VERSION
h0az2wzqetpwhl9ybu76yxaen *   KF2-DATA-166   Ready     Active          Reachable         18.06.1-ce
q6jripaolxsl7xqv3cmv5pxji     KF2-DATA-167   Ready     Active          Leader            18.06.1-ce
h1iql1uvm7123h3gon9so69dy     KF2-DATA-168   Ready     Active                            18.06.1-ce
2.5 Configure the Docker stack
vi docker-stack.yml
Add the following content:
version: '3.6'
services:
  metad0:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad0:/data/meta
      - logs-metad0:/logs
    networks:
      - nebula-net

  metad1:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad1:/data/meta
      - logs-metad1:/logs
    networks:
      - nebula-net

  metad2:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad2:/data/meta
      - logs-metad2:/logs
    networks:
      - nebula-net

  storaged0:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12002
        protocol: tcp
        mode: host
    volumes:
      - data-storaged0:/data/storage
      - logs-storaged0:/logs
    networks:
      - nebula-net

  storaged1:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12004
        protocol: tcp
        mode: host
    volumes:
      - data-storaged1:/data/storage
      - logs-storaged1:/logs
    networks:
      - nebula-net

  storaged2:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12006
        protocol: tcp
        mode: host
    volumes:
      - data-storaged2:/data/storage
      - logs-storaged2:/logs
    networks:
      - nebula-net

  graphd1:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.166
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:13000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3699
        protocol: tcp
        mode: host
      - target: 13000
        published: 13000
        protocol: tcp
#        mode: host
      - target: 13002
        published: 13002
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd:/logs
    networks:
      - nebula-net

  graphd2:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.167
      - --log_dir=/logs
      - --v=2
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:13001/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3640
        protocol: tcp
        mode: host
      - target: 13000
        published: 13001
        protocol: tcp
        mode: host
      - target: 13002
        published: 13003
        protocol: tcp
#        mode: host
    volumes:
      - logs-graphd2:/logs
    networks:
      - nebula-net

  graphd3:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.168
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:13002/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3641
        protocol: tcp
        mode: host
      - target: 13000
        published: 13002
        protocol: tcp
#        mode: host
      - target: 13002
        published: 13004
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd3:/logs
    networks:
      - nebula-net

networks:
  nebula-net:
    external: true
    attachable: true
    name: host

volumes:
  data-metad0:
  logs-metad0:
  data-metad1:
  logs-metad1:
  data-metad2:
  logs-metad2:
  data-storaged0:
  logs-storaged0:
  data-storaged1:
  logs-storaged1:
  data-storaged2:
  logs-storaged2:
  logs-graphd:
  logs-graphd2:
  logs-graphd3:
Edit nebula.env
Add the following content:
TZ=UTC
USER=root
2.6 Start the Nebula cluster
docker stack deploy nebula -c docker-stack.yml
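Once the stack has been deployed, it is worth checking that every service has started and shows 1/1 replicas (standard Docker commands; the service names are prefixed with the stack name nebula used above):

docker service ls
docker stack ps nebula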
3. Cluster load balancing and high availability configuration
The Nebula Graph client (as of 1.x) does not provide load balancing; it simply picks a graphd at random to connect to. For production use you therefore need to set up load balancing and high availability yourself.
(Figure 3.1)
The deployment is split into three layers, as shown in Figure 3.1: a data service layer, a load balancing layer, and a high availability layer.
Load balancing layer: balances client requests and distributes them to the data service layer below.
High availability layer: provides high availability for HAProxy itself, so that the load balancing layer, and therefore the whole cluster, stays available.
3.1 Load balancing configuration
HAProxy is configured with docker-compose. Edit the following three files.
Add the following to the Dockerfile:
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EXPOSE 3640
Add the following to docker-compose.yml:
version: "3.2" services: haproxy: container_name: haproxy build: . volumes: - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg ports: - 3640:3640 restart: always networks: - app_net networks: app_net: external: true
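Note that app_net is declared as an external network, so it must exist before the compose project is started. If it has not been created yet, something like the following will do (a minimal sketch using the default bridge driver):

docker network create app_net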
Add the following to haproxy.cfg:
global
    daemon
    maxconn 30000
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning

defaults
    log-format %hr\ %ST\ %B\ %Ts
    log global
    mode http
    option http-keep-alive
    timeout connect 5000ms
    timeout client 10000ms
    timeout server 50000ms
    timeout http-request 20000ms

# custom your own frontends && backends && listen conf
#CUSTOM

listen graphd-cluster
    bind *:3640
    mode tcp
    maxconn 300
    balance roundrobin
    server server1 192.168.1.166:3699 maxconn 300 check
    server server2 192.168.1.167:3699 maxconn 300 check
    server server3 192.168.1.168:3699 maxconn 300 check

listen stats
    bind *:1080
    stats refresh 30s
    stats uri /stats
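Before building the image, it can be useful to validate the configuration with HAProxy's check mode. The sketch below runs the check through the same base image and assumes haproxy.cfg sits in the current directory:

docker run --rm -v $(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg haproxy:1.7 haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg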
3.2 Start HAProxy
docker-compose up -d
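To confirm the container came up and HAProxy accepted the configuration, the usual commands are enough:

docker-compose ps
docker logs haproxy

Note that the stats page bound to port 1080 in haproxy.cfg is not published by the docker-compose.yml above, so it is only reachable from inside the container network unless you also publish port 1080.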
3.3 High availability configuration
Note: keepalived requires a VIP (virtual IP) to be prepared in advance; in the configuration below, 192.168.1.99 is the virtual IP.
Apply the following configuration on 192.168.1.166, 192.168.1.167, and 192.168.1.168.
Install keepalived:
apt-get update && apt-get upgrade && apt-get install keepalived -y
Edit the keepalived configuration file /etc/keepalived/keepalived.conf (use the following configuration on the three machines; priority must be set to a different value on each to determine precedence).
Configuration on 192.168.1.166:
global_defs {
    router_id lb01    # identifier, just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 52
    priority 999
    # interval, in seconds, between synchronization checks of the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # authentication password; within the same vrrp_instance, MASTER and BACKUP must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; identical on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}
Configuration on 192.168.1.167:
global_defs {
    router_id lb01    # identifier, just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 888
    # interval, in seconds, between synchronization checks of the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # authentication password; within the same vrrp_instance, MASTER and BACKUP must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; identical on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}
Configuration on 192.168.1.168:
global_defs {
    router_id lb01    # identifier, just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 777
    # interval, in seconds, between synchronization checks of the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # authentication password; within the same vrrp_instance, MASTER and BACKUP must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; identical on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}
keepalived commands:
# start keepalived
systemctl start keepalived
# enable keepalived at boot
systemctl enable keepalived
# restart keepalived
systemctl restart keepalived
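After keepalived has been started on all three machines, the VIP should be bound on the current MASTER (192.168.1.166 with the priorities above). A quick check, plus a simple failover test, assuming HAProxy runs as the container named haproxy from section 3.1:

# on the MASTER: the ens160:1 alias should carry 192.168.1.99
ip addr show ens160

# simple failover test: stop haproxy on the MASTER; chk_haproxy fails and the VIP
# should move to the node with the next-highest priority (192.168.1.167)
docker stop haproxy
ip addr show ens160    # run on 192.168.1.167 after a few seconds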
4. Miscellaneous
How do you deploy offline? Simply switch the images to a private registry. Feel free to reach out if you have questions.
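For reference, mirroring the images into a private registry looks roughly like this (a sketch; registry.example.com is a placeholder for your own registry address, and the image references in docker-stack.yml must be updated to match):

# on a machine with internet access: pull, retag, and push the three Nebula images
for img in nebula-metad nebula-storaged nebula-graphd; do
  docker pull vesoft/${img}:nightly
  docker tag vesoft/${img}:nightly registry.example.com/vesoft/${img}:nightly
  docker push registry.example.com/vesoft/${img}:nightly
done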
This concludes the walkthrough of quickly deploying a Nebula Graph cluster with Docker Swarm.