Ultra-Detailed Hands-On Tutorial: Migrating Rancher Server Across Multiple Scenarios - Rancher Labs...
Source: 51CTO Tech Blog (author: RancherLabs) | Published: 2021-03-25
Change the server-url setting to the address of the new Rancher Server.

\"超详细实战教程丨多场景解析如何迁移Rancher
Save.
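Besides the UI, the server-url setting can also be changed through the Rancher API. The following is a minimal sketch, assuming an admin API token stored in RANCHER_TOKEN and a placeholder new address https://new-rancher.example.com (neither value comes from the original article):

# Hypothetical values: replace the token and address with your own
curl -k -s \
  -H "Authorization: Bearer $RANCHER_TOKEN" \
  -H "Content-Type: application/json" \
  -X PUT \
  -d '{"value":"https://new-rancher.example.com"}' \
  "https://new-rancher.example.com/v3/settings/server-url"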

7. Update the agent configuration for the local cluster and the downstream (business) clusters

Log in to Rancher Server using the new domain name or IP;

Find the cluster ID in the browser address bar: the field beginning with c after c/ is the cluster ID; in this example it is c-hftcn;
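If you prefer the command line, a minimal sketch for reading the cluster IDs from the local cluster's management resources (the kubeconfig file name is a placeholder for the local cluster kubeconfig prepared earlier):

# Cluster IDs (c-xxxxx) appear in the NAME column; "local" is the local cluster itself
kubectl --kubeconfig local-kubeconfig.yaml get clusters.management.cattle.io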

\"超详细实战教程丨多场景解析如何迁移Rancher

Visit https://<new server_url>/v3/clusters/<cluster ID>/clusterregistrationtokens;

After the clusterRegistrationTokens page opens, locate the data field, find the insecureCommand field, and copy the YAML link for later use;

\"超详细实战教程丨多场景解析如何迁移Rancher

There may be multiple entries with "baseType": "clusterRegistrationToken" in the response. In that case, use the entry with the largest createdTS (the most recent timestamp), which is usually the last one.
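For orientation, a clusterRegistrationToken entry in the API response looks roughly like the abridged excerpt below (the token and timestamp are placeholders); insecureCommand holds the YAML link used in the next step, and createdTS is the timestamp used to pick the newest entry:

{
  "baseType": "clusterRegistrationToken",
  "clusterId": "c-hftcn",
  "createdTS": 1616630400000,
  "insecureCommand": "curl --insecure -sfL https://<new server_url>/v3/import/<token>.yaml | kubectl apply -f -"
}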


Using kubectl, together with the direct-connection kubeconfig file prepared earlier and the YAML file obtained in the step above, run the following command to update the agent configuration.

Note:

The kubeconfig used to update the local cluster is different from the one used for the downstream clusters; select the appropriate kubeconfig for each cluster.

For an explanation of --context=xxx, see Authenticating Directly with a Downstream Cluster.

curl --insecure -sfL <YAML link obtained in the step above> | kubectl --context=xxx apply -f -


After the downstream cluster's agents have been updated successfully, use the same method to update the local cluster's agent configuration.
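As a concrete illustration, the two updates might look like the sketch below; the YAML links and context names are placeholders rather than values from the original environment:

# 1. Update the downstream (business) cluster agents with its kubeconfig context
curl --insecure -sfL https://<new server_url>/v3/import/<demo-cluster-token>.yaml | kubectl --context=demo apply -f -

# 2. Then update the local cluster agents the same way, with the local cluster context
curl --insecure -sfL https://<new server_url>/v3/import/<local-cluster-token>.yaml | kubectl --context=local apply -f -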

9. Verification

After a short while, both the local and demo clusters change to the Active state.

\"超详细实战教程丨多场景解析如何迁移Rancher

The local cluster's cluster-agent and node-agent have started successfully.

\"超详细实战教程丨多场景解析如何迁移Rancher

The demo cluster's cluster-agent and node-agent have started successfully.
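The same check can be done from the command line; a minimal sketch, assuming the direct-connection kubeconfig files for each cluster (the file names are placeholders):

# The agents run in the cattle-system namespace of each cluster
kubectl --kubeconfig local-kubeconfig.yaml -n cattle-system get pods
kubectl --kubeconfig demo-kubeconfig.yaml -n cattle-system get pods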

\"超详细实战教程丨多场景解析如何迁移Rancher

Finally, verify that the application we deployed earlier is still available.

\"超详细实战教程丨多场景解析如何迁移Rancher

Scenario 3: Migrating a Rancher High-Availability Installation to Another Local Cluster


Migrating a Rancher high-availability installation to another local cluster can be accomplished with RKE's update feature. Using RKE, the original 3-node local cluster is expanded to 6 nodes, at which point the etcd data is automatically synchronized across all 6 nodes. RKE is then used to remove the 3 original nodes and update the cluster again. In this way, Rancher Server is migrated smoothly to the new Rancher local cluster.
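Condensed to its essentials, the migration is two rke up runs against the same cluster.yml plus a load balancer update; the detailed steps follow below:

# 1. Add the three new nodes to cluster.yml and expand the local cluster to 6 nodes
rke up --config cluster.yml   # etcd data is replicated to the new members

# 2. Comment out the three original nodes in cluster.yml and update again
rke up --config cluster.yml   # Rancher Server now runs only on the new nodes

# 3. Point the NGINX load balancer (or DNS) at the new nodes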

1. Deploy the local Kubernetes cluster with RKE

Create the RKE configuration file cluster.yml based on the RKE sample configuration:

nodes:
- address: 3.96.52.186
  internal_address: 172.31.11.95
  user: ubuntu
  role: [controlplane, worker, etcd]
- address: 35.183.186.213
  internal_address: 172.31.0.201
  user: ubuntu
  role: [controlplane, worker, etcd]
- address: 35.183.130.12
  internal_address: 172.31.15.236
  user: ubuntu
  role: [controlplane, worker, etcd]

Run the rke command to create the local Kubernetes cluster:

rke up --config cluster.yml


Check the Kubernetes cluster status

Use kubectl to check the node status and confirm that the nodes are Ready:

kubectl get nodes
NAME             STATUS   ROLES                      AGE   VERSION
3.96.52.186      Ready    controlplane,etcd,worker   71s   v1.17.6
35.183.130.12    Ready    controlplane,etcd,worker   72s   v1.17.6
35.183.186.213   Ready    controlplane,etcd,worker   72s   v1.17.6


Check that all required Pods and containers are healthy before proceeding:

kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-67cf578fc4-gnt5c     1/1     Running     0          72s
ingress-nginx   nginx-ingress-controller-47p4b            1/1     Running     0          72s
ingress-nginx   nginx-ingress-controller-85284            1/1     Running     0          72s
ingress-nginx   nginx-ingress-controller-9qbdz            1/1     Running     0          72s
kube-system     canal-9bx8k                               2/2     Running     0          97s
kube-system     canal-l2fjb                               2/2     Running     0          97s
kube-system     canal-v7fzs                               2/2     Running     0          97s
kube-system     coredns-7c5566588d-7kv7b                  1/1     Running     0          67s
kube-system     coredns-7c5566588d-t4jfm                  1/1     Running     0          90s
kube-system     coredns-autoscaler-65bfc8d47d-vnrzc       1/1     Running     0          90s
kube-system     metrics-server-6b55c64f86-r4p8w           1/1     Running     0          79s
kube-system     rke-coredns-addon-deploy-job-lx667        0/1     Completed   0          94s
kube-system     rke-ingress-controller-deploy-job-r2nw5   0/1     Completed   0          74s
kube-system     rke-metrics-addon-deploy-job-4bq76        0/1     Completed   0          84s
kube-system     rke-network-plugin-deploy-job-gjpm8       0/1     Completed   0          99s


2. Install Rancher HA

Install Rancher HA by following the installation documentation.
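For completeness, a minimal sketch of a Helm-based Rancher HA installation, assuming cert-manager is already installed and using the placeholder hostname rancher.example.com; follow the official installation documentation for version-specific details:

# Add the Rancher chart repository and create the namespace
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system

# Install Rancher with Rancher-generated certificates
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com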


3. Configure an NGINX load balancer for Rancher HA

Configure load balancing for Rancher HA by following the NGINX configuration example.

NGINX configuration:

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 172.31.11.95:80 max_fails=3 fail_timeout=5s;
        server 172.31.0.201:80 max_fails=3 fail_timeout=5s;
        server 172.31.15.236:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 172.31.11.95:443 max_fails=3 fail_timeout=5s;
        server 172.31.0.201:443 max_fails=3 fail_timeout=5s;
        server 172.31.15.236:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}


After NGINX starts, we can access the Rancher UI through the configured domain name or IP. Navigating to local > Nodes shows the status of the three nodes in the local cluster.

\"超详细实战教程丨多场景解析如何迁移Rancher

4. Deploy a test cluster and application


Add a test cluster, selecting etcd, Control Plane, and Worker for the Node Role.

\"超详细实战教程丨多场景解析如何迁移Rancher

After the test cluster has been added successfully, deploy an nginx workload, and then deploy a test application from the app catalog.
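The nginx workload was created through the Rancher UI in this walkthrough; an equivalent command-line sketch (the deployment name is arbitrary) would be:

# Create a simple nginx deployment in the test cluster and expose it internally
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80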

\"超详细实战教程丨多场景解析如何迁移Rancher

5. Add the new cluster's nodes to the local cluster

Modify the RKE configuration file used to create the local cluster, adding the configuration for the new nodes.

cluster.yml:

nodes:
- address: 3.96.52.186
  internal_address: 172.31.11.95
  user: ubuntu
  role: [controlplane, worker, etcd]
- address: 35.183.186.213
  internal_address: 172.31.0.201
  user: ubuntu
  role: [controlplane, worker, etcd]
- address: 35.183.130.12
  internal_address: 172.31.15.236
  user: ubuntu
  role: [controlplane, worker, etcd]
# The following entries are the newly added node configuration
- address: 52.60.116.56
  internal_address: 172.31.14.146
  user: ubuntu
  role: [controlplane, worker, etcd]
- address: 99.79.9.244
  internal_address: 172.31.15.215
  user: ubuntu
  role: [controlplane, worker, etcd]
- address: 15.223.77.84
  internal_address: 172.31.8.64
  user: ubuntu
  role: [controlplane, worker, etcd]


Update the cluster to expand the local cluster to 6 nodes:

rke up --config cluster.yml


Check the Kubernetes cluster status

Use kubectl to test your connectivity and confirm that both the original nodes (3.96.52.186, 35.183.186.213, 35.183.130.12) and the new nodes (52.60.116.56, 99.79.9.244, 15.223.77.84) are in the Ready state:

kubectl get nodes
NAME             STATUS   ROLES                      AGE    VERSION
15.223.77.84     Ready    controlplane,etcd,worker   33s    v1.17.6
3.96.52.186      Ready    controlplane,etcd,worker   88m    v1.17.6
35.183.130.12    Ready    controlplane,etcd,worker   89m    v1.17.6
35.183.186.213   Ready    controlplane,etcd,worker   89m    v1.17.6
52.60.116.56     Ready    controlplane,etcd,worker   101s   v1.17.6
99.79.9.244      Ready    controlplane,etcd,worker   67s    v1.17.6

Check that all required Pods and containers are healthy before proceeding:

kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
cattle-system   cattle-cluster-agent-68898b5c4d-lkz5m     1/1     Running     0          46m
cattle-system   cattle-node-agent-9xrbs                   1/1     Running     0          109s
cattle-system   cattle-node-agent-lvdlf                   1/1     Running     0          46m
cattle-system   cattle-node-agent-mnk76                   1/1     Running     0          46m
cattle-system   cattle-node-agent-qfwcm                   1/1     Running     0          75s
cattle-system   cattle-node-agent-tk66h                   1/1     Running     0          2m23s
cattle-system   cattle-node-agent-v2vpf                   1/1     Running     0          46m
cattle-system   rancher-749fd64664-8cg4w                  1/1     Running     1          58m
cattle-system   rancher-749fd64664-fms8x                  1/1     Running     1          58m
cattle-system   rancher-749fd64664-rb5pt                  1/1     Running     1          58m
ingress-nginx   default-http-backend-67cf578fc4-gnt5c     1/1     Running     0          89m
ingress-nginx   nginx-ingress-controller-44c5z            1/1     Running     0          61s
ingress-nginx   nginx-ingress-controller-47p4b            1/1     Running     0          89m
ingress-nginx   nginx-ingress-controller-85284            1/1     Running     0          89m
ingress-nginx   nginx-ingress-controller-9qbdz            1/1     Running     0          89m
ingress-nginx   nginx-ingress-controller-kp7p6            1/1     Running     0          61s
ingress-nginx   nginx-ingress-controller-tfjrw            1/1     Running     0          61s
kube-system     canal-9bx8k                               2/2     Running     0          89m
kube-system     canal-fqrqv                               2/2     Running     0          109s
kube-system     canal-kkj7q                               2/2     Running     0          75s
kube-system     canal-l2fjb                               2/2     Running     0          89m
kube-system     canal-v7fzs                               2/2     Running     0          89m
kube-system     canal-w7t58                               2/2     Running     0          2m23s
kube-system     coredns-7c5566588d-7kv7b                  1/1     Running     0          89m
kube-system     coredns-7c5566588d-t4jfm                  1/1     Running     0          89m
kube-system     coredns-autoscaler-65bfc8d47d-vnrzc       1/1     Running     0          89m
kube-system     metrics-server-6b55c64f86-r4p8w           1/1     Running     0          89m
kube-system     rke-coredns-addon-deploy-job-lx667        0/1     Completed   0          89m
kube-system     rke-ingress-controller-deploy-job-r2nw5   0/1     Completed   0          89m
kube-system     rke-metrics-addon-deploy-job-4bq76        0/1     Completed   0          89m
kube-system     rke-network-plugin-deploy-job-gjpm8       0/1     Completed   0          89m


From the output above, we can confirm that the local cluster has been expanded to 6 nodes and that all workloads are running normally.

6. Update the cluster again to remove the original local cluster nodes

Modify the RKE configuration file for the local cluster again, commenting out the original local cluster nodes.


cluster.yml:

nodes:
# - address: 3.96.52.186
#   internal_address: 172.31.11.95
#   user: ubuntu
#   role: [controlplane, worker, etcd]
# - address: 35.183.186.213
#   internal_address: 172.31.0.201
#   user: ubuntu
#   role: [controlplane, worker, etcd]
# - address: 35.183.130.12
#   internal_address: 172.31.15.236
#   user: ubuntu
#   role: [controlplane, worker, etcd]
# The following entries are the newly added nodes
- address: 52.60.116.56
  internal_address: 172.31.14.146
  user: ubuntu
  role: [controlplane, worker, etcd]
- address: 99.79.9.244
  internal_address: 172.31.15.215
  user: ubuntu
  role: [controlplane, worker, etcd]
- address: 15.223.77.84
  internal_address: 172.31.8.64
  user: ubuntu
  role: [controlplane, worker, etcd]


Update the cluster to complete the migration:

rke up --config cluster.yml


Check the Kubernetes cluster status

Use kubectl to check that the nodes are Ready; the local cluster's nodes have now been replaced with the following 3:

kubectl get nodes
NAME           STATUS   ROLES                      AGE   VERSION
15.223.77.84   Ready    controlplane,etcd,worker   11m   v1.17.6
52.60.116.56   Ready    controlplane,etcd,worker   13m   v1.17.6
99.79.9.244    Ready    controlplane,etcd,worker   12m   v1.17.6


Check that all required Pods and containers are healthy before proceeding:

kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
cattle-system   cattle-cluster-agent-68898b5c4d-tm6db     1/1     Running     3          3m14s
cattle-system   cattle-node-agent-9xrbs                   1/1     Running     0          14m
cattle-system   cattle-node-agent-qfwcm                   1/1     Running     0          14m
cattle-system   cattle-node-agent-tk66h                   1/1     Running     0          15m
cattle-system   rancher-749fd64664-47jw2                  1/1     Running     0          3m14s
cattle-system   rancher-749fd64664-jpqdd                  1/1     Running     0          3m14s
cattle-system   rancher-749fd64664-xn6js                  1/1     Running     0          3m14s
ingress-nginx   default-http-backend-67cf578fc4-4668g     1/1     Running     0          3m14s
ingress-nginx   nginx-ingress-controller-44c5z            1/1     Running     0          13m
ingress-nginx   nginx-ingress-controller-kp7p6            1/1     Running     0          13m
ingress-nginx   nginx-ingress-controller-tfjrw            1/1     Running     0          13m
kube-system     canal-fqrqv                               2/2     Running     0          14m
kube-system     canal-kkj7q                               2/2     Running     0          14m
kube-system     canal-w7t58                               2/2     Running     0          15m
kube-system     coredns-7c5566588d-nmtrn                  1/1     Running     0          3m13s
kube-system     coredns-7c5566588d-q6hlb                  1/1     Running     0          3m13s
kube-system     coredns-autoscaler-65bfc8d47d-rx7fm       1/1     Running     0          3m14s
kube-system     metrics-server-6b55c64f86-mcx9z           1/1     Running     0          3m14s


From the output above, we can confirm that the local cluster has been migrated successfully and that all workloads are running normally.

Modify the NGINX load balancer configuration, updating the nginx configuration file with the new nodes' addresses:

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 172.31.14.146:80 max_fails=3 fail_timeout=5s;
        server 172.31.8.64:80 max_fails=3 fail_timeout=5s;
        server 172.31.15.215:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 172.31.14.146:443 max_fails=3 fail_timeout=5s;
        server 172.31.8.64:443 max_fails=3 fail_timeout=5s;
        server 172.31.15.215:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}


7. Verification

Confirm that the local cluster and the downstream clusters are in the Active state.

\"超详细实战教程丨多场景解析如何迁移Rancher

Confirm that the local cluster nodes have been replaced.

The original cluster node IPs were 3.96.52.186, 35.183.186.213, and 35.183.130.12.
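The replacement can also be confirmed from the command line against the new local cluster, using the kubeconfig generated by rke up (kube_config_cluster.yml by default):

# Only the three new node IPs (52.60.116.56, 99.79.9.244, 15.223.77.84) should be listed
kubectl --kubeconfig kube_config_cluster.yml get nodes -o wide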


\"超详细实战教程丨多场景解析如何迁移Rancher

Then verify that the application we deployed earlier is still available.

\"超详细实战教程丨多场景解析如何迁移Rancher



Open source has always been at the core of Rancher's product philosophy, and we place great value on communication with the open source community, which is why we created 20 WeChat discussion groups. This article grew out of many conversations with community users, where we found that many Rancher users face similar questions. I therefore summarized these three scenarios and tested them repeatedly before completing this tutorial. We warmly welcome all Rancher users to share their own experience in whatever form they like, so that together we can build a friendly open source community.
