Deploying a Kubernetes Cluster with kubeeasy



1. Environment Preparation

  • Host information (two NICs per node are required)

Hostname   NAT NIC        Local NIC       Role
master     10.107.24.80   192.168.50.80   master
node       10.107.24.81   192.168.50.81   worker
  • Required files

chinaskills_cloud_paas_v2.0.2.iso

[root@master ~]# mount -o loop /dev/cdrom /mnt/
[root@master ~]# cp -rf /mnt/* /opt/
[root@master ~]# umount /mnt/
[root@master ~]# ls /opt/
centos        extended-images        helm-v3.7.1-linux-amd64.tar.gz  kubeeasy           kubevirt.tar.gz
dependencies  harbor-offline.tar.gz  istio.tar.gz                    kubernetes.tar.gz
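
Before running any kubeeasy commands, it can help to confirm that both NICs are up on each node and that the master can reach the worker over SSH. The commands below are a minimal sanity check, assuming the addresses from the table above; adjust them to your own addressing.

# Confirm both network interfaces carry the expected addresses (run on each node)
[root@master ~]# ip -4 addr show | grep -E '10\.107\.24\.|192\.168\.50\.'
# Confirm the master can reach the worker on the NAT network used by kubeeasy
[root@master ~]# ping -c 2 10.107.24.81
[root@master ~]# ssh root@10.107.24.81 hostname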

2. Offline Installation Steps

2.1 Install kubeeasy

[root@master ~]# cp -rf /opt/kubeeasy /usr/bin/
[root@master ~]# chmod +x /usr/bin/kubeeasy
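
A quick follow-up check is to confirm the binary resolves on PATH and is executable. The exact help/version flags of kubeeasy may vary, so this sketch sticks to plain shell commands.

# Verify the kubeeasy binary is on PATH and executable
[root@master ~]# command -v kubeeasy
/usr/bin/kubeeasy
[root@master ~]# ls -l /usr/bin/kubeeasy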

2.2 Install the Cluster Dependency Packages

[root@master ~]# kubeeasy install dependencies \
  --host 10.107.24.80,10.107.24.81 \
  --user root \
  --password 000000 \
  --offline-file /opt/dependencies/base-rpms.tar.gz
[2023-03-20 01:20:52] INFO:    [start] bash kubeeasy install dependencies --host 10.107.24.80,10.107.24.81 --user root --password ****** --offline-file /opt/dependencies/base-rpms.tar.gz
[2023-03-20 01:20:52] INFO:    [offline] unzip offline dependencies package on local.
[2023-03-20 01:20:54] INFO:    [offline] unzip offline dependencies package succeeded.
[2023-03-20 01:20:54] INFO:    [install] install dependencies packages on local.
[2023-03-20 01:22:36] INFO:    [install] install dependencies packages succeeded.
[2023-03-20 01:22:37] INFO:    [offline] 10.107.24.80: load offline dependencies file
[2023-03-20 01:22:39] INFO:    [offline] load offline dependencies file to 10.107.24.80 succeeded.
[2023-03-20 01:22:39] INFO:    [install] 10.107.24.80: install dependencies packages
[2023-03-20 01:22:40] INFO:    [install] 10.107.24.80: install dependencies packages succeeded.
[2023-03-20 01:22:41] INFO:    [offline] 10.107.24.81: load offline dependencies file
[2023-03-20 01:22:46] INFO:    [offline] load offline dependencies file to 10.107.24.81 succeeded.
[2023-03-20 01:22:46] INFO:    [install] 10.107.24.81: install dependencies packages
[2023-03-20 01:24:26] INFO:    [install] 10.107.24.81: install dependencies packages succeeded.

  See detailed log >> /var/log/kubeinstall.log 
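
The dependency step installs, among other things, the sshpass and rsync tools that the next kubeeasy stage checks for (see the [check] lines in section 2.3). If you want to confirm the packages landed on every host before moving on, a small loop like the following works; the host list and password mirror the command above.

# Confirm key dependencies exist on both hosts (password 000000 as used above)
[root@master ~]# for h in 10.107.24.80 10.107.24.81; do
>   sshpass -p 000000 ssh -o StrictHostKeyChecking=no root@$h 'hostname; command -v sshpass rsync'
> done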

2.3 Install the Kubernetes Cluster

[root@master ~]# kubeeasy install kubernetes \
  --master 10.107.24.80 \
  --worker 10.107.24.81 \
  --user root \
  --password 000000 \
  --version 1.22.1 \
  --offline-file /opt/kubernetes.tar.gz
[2023-03-20 01:24:58] INFO:    [start] bash kubeeasy install kubernetes --master 10.107.24.80 --worker 10.107.24.81 --user root --password ****** --version 1.22.1 --offline-file /opt/kubernetes.tar.gz
[2023-03-20 01:24:58] INFO:    [check] sshpass command exists.
[2023-03-20 01:24:58] INFO:    [check] rsync command exists.
[2023-03-20 01:24:59] INFO:    [check] ssh 10.107.24.80 connection succeeded.
[2023-03-20 01:24:59] INFO:    [check] ssh 10.107.24.81 connection succeeded.
[2023-03-20 01:24:59] INFO:    [offline] unzip offline package on local.
[2023-03-20 01:25:09] INFO:    [offline] unzip offline package succeeded.
[2023-03-20 01:25:09] INFO:    [offline] master 10.107.24.80: load offline file
[2023-03-20 01:25:10] INFO:    [offline] load offline file to 10.107.24.80 succeeded.
[2023-03-20 01:25:10] INFO:    [offline] master 10.107.24.80: disable the firewall
[2023-03-20 01:25:11] INFO:    [offline] 10.107.24.80: disable the firewall succeeded.
[2023-03-20 01:25:11] INFO:    [offline] worker 10.107.24.81: load offline file
[2023-03-20 01:26:05] INFO:    [offline] load offline file to 10.107.24.81 succeeded.
[2023-03-20 01:26:05] INFO:    [offline] worker 10.107.24.81: disable the firewall
[2023-03-20 01:26:06] INFO:    [offline] 10.107.24.81: disable the firewall succeeded.
[2023-03-20 01:26:06] INFO:    [get] Get 10.107.24.80 InternalIP.
[2023-03-20 01:26:07] INFO:    [result] get MGMT_NODE_IP value succeeded.
[2023-03-20 01:26:07] INFO:    [result] MGMT_NODE_IP is 10.107.24.80
[2023-03-20 01:26:07] INFO:    [init] master: 10.107.24.80
[2023-03-20 13:26:09] INFO:    [init] init master 10.107.24.80 succeeded.
[2023-03-20 13:26:09] INFO:    [init] master: 10.107.24.80 set hostname and hosts
[2023-03-20 13:26:09] INFO:    [init] 10.107.24.80 set hostname and hosts succeeded.
[2023-03-20 13:26:09] INFO:    [init] worker: 10.107.24.81
[2023-03-20 13:26:12] INFO:    [init] init worker 10.107.24.81 succeeded.
[2023-03-20 13:26:12] INFO:    [init] master: 10.107.24.81 set hostname and hosts
[2023-03-20 13:26:12] INFO:    [init] 10.107.24.81 set hostname and hosts succeeded.
[2023-03-20 13:26:12] INFO:    [install] install docker on 10.107.24.80.
[2023-03-20 13:27:59] INFO:    [install] install docker on 10.107.24.80 succeeded.
[2023-03-20 13:27:59] INFO:    [install] install kube on 10.107.24.80
[2023-03-20 13:28:01] INFO:    [install] install kube on 10.107.24.80 succeeded.
[2023-03-20 13:28:01] INFO:    [install] install docker on 10.107.24.81.
[2023-03-20 13:29:45] INFO:    [install] install docker on 10.107.24.81 succeeded.
[2023-03-20 13:29:45] INFO:    [install] install kube on 10.107.24.81
[2023-03-20 13:29:47] INFO:    [install] install kube on 10.107.24.81 succeeded.
[2023-03-20 13:29:47] INFO:    [kubeadm init] kubeadm init on 10.107.24.80
[2023-03-20 13:29:47] INFO:    [kubeadm init] 10.107.24.80: set kubeadm-config.yaml
[2023-03-20 13:29:48] INFO:    [kubeadm init] 10.107.24.80: set kubeadm-config.yaml succeeded.
[2023-03-20 13:29:48] INFO:    [kubeadm init] 10.107.24.80: kubeadm init start.
[2023-03-20 13:30:03] INFO:    [kubeadm init] 10.107.24.80: kubeadm init succeeded.
[2023-03-20 13:30:06] INFO:    [kubeadm init] 10.107.24.80: set kube config.
[2023-03-20 13:30:06] INFO:    [kubeadm init] 10.107.24.80: set kube config succeeded.
[2023-03-20 13:30:06] INFO:    [kubeadm init] 10.107.24.80: delete master taint
[2023-03-20 13:30:07] INFO:    [kubeadm init] 10.107.24.80: delete master taint succeeded.
[2023-03-20 13:30:07] INFO:    [kubeadm init] Auto-Approve kubelet cert csr succeeded.
[2023-03-20 13:30:07] INFO:    [kubeadm join] master: get join token and cert info
[2023-03-20 13:30:08] INFO:    [result] get CACRT_HASH value succeeded.
[2023-03-20 13:30:08] INFO:    [result] get INTI_CERTKEY value succeeded.
[2023-03-20 13:30:09] INFO:    [result] get INIT_TOKEN value succeeded.
[2023-03-20 13:30:09] INFO:    [kubeadm join] worker 10.107.24.81 join cluster.
[2023-03-20 13:30:28] INFO:    [kubeadm join] worker 10.107.24.81 join cluster succeeded.
[2023-03-20 13:30:28] INFO:    [kubeadm join] set 10.107.24.81 worker node role.
[2023-03-20 13:30:29] INFO:    [kubeadm join] set 10.107.24.81 worker node role succeeded.
[2023-03-20 13:30:29] INFO:    [network] add flannel network
[2023-03-20 13:30:29] INFO:    [calico] change flannel pod subnet succeeded.
[2023-03-20 13:30:29] INFO:    [apply] apply kube-flannel.yaml file
[2023-03-20 13:30:30] INFO:    [apply] apply kube-flannel.yaml file succeeded.
[2023-03-20 13:30:33] INFO:    [waiting] waiting kube-flannel-ds
[2023-03-20 13:30:34] INFO:    [waiting] kube-flannel-ds pods ready succeeded.
[2023-03-20 13:30:34] INFO:    [apply] apply coredns-cm.yaml file
[2023-03-20 13:30:34] INFO:    [apply] apply coredns-cm.yaml file succeeded.
[2023-03-20 13:30:35] INFO:    [apply] apply metrics-server.yaml file
[2023-03-20 13:30:35] INFO:    [apply] apply metrics-server.yaml file succeeded.
[2023-03-20 13:30:38] INFO:    [waiting] waiting metrics-server
[2023-03-20 13:30:39] INFO:    [waiting] metrics-server pods ready succeeded.
[2023-03-20 13:30:39] INFO:    [apply] apply dashboard.yaml file
[2023-03-20 13:30:39] INFO:    [apply] apply dashboard.yaml file succeeded.
[2023-03-20 13:30:42] INFO:    [waiting] waiting dashboard-agent
[2023-03-20 13:30:43] INFO:    [waiting] dashboard-agent pods ready succeeded.
[2023-03-20 13:30:46] INFO:    [waiting] waiting dashboard-en
[2023-03-20 13:30:46] INFO:    [waiting] dashboard-en pods ready succeeded.
[2023-03-20 13:31:01] INFO:    [cluster] kubernetes cluster status
+ kubectl get node -o wide
NAME               STATUS   ROLES                         AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master-node1   Ready    control-plane,master,worker   61s   v1.22.1   10.107.24.80   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
k8s-worker-node1   Ready    worker                        37s   v1.22.1   10.107.24.81   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://20.10.8
+ kubectl get pods -A -o wide
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
dashboard-cn   dashboard-agent-cd88cf454-q48b4            1/1     Running   0          22s   10.244.1.2     k8s-worker-node1   <none>           <none>
dashboard-cn   dashboard-cn-64bd46887f-jvm4t              1/1     Running   0          22s   10.244.1.3     k8s-worker-node1   <none>           <none>
dashboard-en   dashboard-en-55596d469-lrnp9               1/1     Running   0          22s   10.244.1.4     k8s-worker-node1   <none>           <none>
kube-system    coredns-78fcd69978-49kgg                   1/1     Running   0          44s   10.244.0.2     k8s-master-node1   <none>           <none>
kube-system    coredns-78fcd69978-hq69g                   1/1     Running   0          44s   10.244.0.3     k8s-master-node1   <none>           <none>
kube-system    etcd-k8s-master-node1                      1/1     Running   0          58s   10.107.24.80   k8s-master-node1   <none>           <none>
kube-system    kube-apiserver-k8s-master-node1            1/1     Running   0          58s   10.107.24.80   k8s-master-node1   <none>           <none>
kube-system    kube-controller-manager-k8s-master-node1   1/1     Running   0          58s   10.107.24.80   k8s-master-node1   <none>           <none>
kube-system    kube-flannel-ds-9lnld                      1/1     Running   0          31s   10.107.24.81   k8s-worker-node1   <none>           <none>
kube-system    kube-flannel-ds-xf86g                      1/1     Running   0          31s   10.107.24.80   k8s-master-node1   <none>           <none>
kube-system    kube-proxy-2f2vj                           1/1     Running   0          37s   10.107.24.81   k8s-worker-node1   <none>           <none>
kube-system    kube-proxy-d6xjq                           1/1     Running   0          44s   10.107.24.80   k8s-master-node1   <none>           <none>
kube-system    kube-scheduler-k8s-master-node1            1/1     Running   0          58s   10.107.24.80   k8s-master-node1   <none>           <none>
kube-system    metrics-server-77564bc84d-j54hk            1/1     Running   0          26s   10.107.24.81   k8s-worker-node1   <none>           <none> 

  See detailed log >> /var/log/kubeinstall.log 
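
Because kubeeasy already deploys metrics-server and writes the kubeconfig during kubeadm init, the cluster can be inspected from the master right away. A few optional verification commands follow; the dashboard namespaces are taken from the pod list above, and their NodePorts are not fixed here, so query them rather than assuming a port.

# Basic cluster health checks
[root@master ~]# kubectl cluster-info
[root@master ~]# kubectl get nodes
[root@master ~]# kubectl top nodes
# Find the NodePorts exposed by the bundled dashboards
[root@master ~]# kubectl get svc -n dashboard-cn
[root@master ~]# kubectl get svc -n dashboard-en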

2.4 Verify the Docker Containers Are Running

[root@master ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS     NAMES
992e282b87bc   8d147537fb7d           "/coredns -conf /etc…"   4 minutes ago   Up 4 minutes             k8s_coredns_coredns-78fcd69978-hq69g_kube-system_826a525e-72ce-4cbd-90e5-fdca4b2fab32_0
5a85366c9f9b   8d147537fb7d           "/coredns -conf /etc…"   4 minutes ago   Up 4 minutes             k8s_coredns_coredns-78fcd69978-49kgg_kube-system_3fccb231-ee10-4976-b81f-24d63aacd08a_0
7dc7e4542e87   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_coredns-78fcd69978-hq69g_kube-system_826a525e-72ce-4cbd-90e5-fdca4b2fab32_3
fe088504486c   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_coredns-78fcd69978-49kgg_kube-system_3fccb231-ee10-4976-b81f-24d63aacd08a_3
d6c22b0b9f01   404fc3ab6749           "/opt/bin/flanneld -…"   4 minutes ago   Up 4 minutes             k8s_kube-flannel_kube-flannel-ds-xf86g_kube-system_27967aa7-4921-4117-b512-98b39c39faf5_0
9b3495f4b962   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_kube-flannel-ds-xf86g_kube-system_27967aa7-4921-4117-b512-98b39c39faf5_0
9859df5625ba   36c4ebbc9d97           "/usr/local/bin/kube…"   4 minutes ago   Up 4 minutes             k8s_kube-proxy_kube-proxy-d6xjq_kube-system_4f120047-946e-4cc4-9af0-a3d5502393eb_0
e3e8672494f2   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_kube-proxy-d6xjq_kube-system_4f120047-946e-4cc4-9af0-a3d5502393eb_0
01b91ddb6ad2   6e002eb89a88           "kube-controller-man…"   5 minutes ago   Up 5 minutes             k8s_kube-controller-manager_kube-controller-manager-k8s-master-node1_kube-system_aa5c3b81774bd4cc98215b1c8732d87c_0
3fe245146e59   004811815584           "etcd --advertise-cl…"   5 minutes ago   Up 5 minutes             k8s_etcd_etcd-k8s-master-node1_kube-system_8d8c7b9310b33393732d993c41c7d450_0
10baaf8a7ab9   f30469a2491a           "kube-apiserver --ad…"   5 minutes ago   Up 5 minutes             k8s_kube-apiserver_kube-apiserver-k8s-master-node1_kube-system_710a8249ad30bff927c89228973db8ac_0
a06f62ed2331   aca5ededae9c           "kube-scheduler --au…"   5 minutes ago   Up 5 minutes             k8s_kube-scheduler_kube-scheduler-k8s-master-node1_kube-system_6ab3eb82cd0b41c3d3b546b333a12933_0
553f4038bea5   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago   Up 5 minutes             k8s_POD_kube-scheduler-k8s-master-node1_kube-system_6ab3eb82cd0b41c3d3b546b333a12933_0
9fd833e89902   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago   Up 5 minutes             k8s_POD_kube-controller-manager-k8s-master-node1_kube-system_aa5c3b81774bd4cc98215b1c8732d87c_0
f6ebe5b6fb57   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago   Up 5 minutes             k8s_POD_kube-apiserver-k8s-master-node1_kube-system_710a8249ad30bff927c89228973db8ac_0
7f0f2b465ec9   k8s.gcr.io/pause:3.5   "/pause"                 5 minutes ago   Up 5 minutes             k8s_POD_etcd-k8s-master-node1_kube-system_8d8c7b9310b33393732d993c41c7d450_0

[root@node ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS     NAMES
d619333b2d61   d3dc57185ba2           "/portainer --tunnel…"   4 minutes ago   Up 4 minutes             k8s_dashboard-en_dashboard-en-55596d469-lrnp9_dashboard-en_f8a4e4fc-c6c2-4dc3-95e3-bd581ea81e76_0
7d7c0eaf86f8   c0d510ae5b6e           "./agent"                4 minutes ago   Up 4 minutes             k8s_dashboard-agent_dashboard-agent-cd88cf454-q48b4_dashboard-cn_921e034a-83ba-4a22-8029-d74ebb82c9bb_0
f69883f86cbc   ff950b2c8963           "/portainer"             4 minutes ago   Up 4 minutes             k8s_dashboard-cn_dashboard-cn-64bd46887f-jvm4t_dashboard-cn_cf7654e0-4b77-4604-9bef-7ad37e2fff0d_0
dcfb61df7245   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_dashboard-en-55596d469-lrnp9_dashboard-en_f8a4e4fc-c6c2-4dc3-95e3-bd581ea81e76_0
ecf3389d5a7e   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_dashboard-agent-cd88cf454-q48b4_dashboard-cn_921e034a-83ba-4a22-8029-d74ebb82c9bb_0
76d14ee95f0f   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_dashboard-cn-64bd46887f-jvm4t_dashboard-cn_cf7654e0-4b77-4604-9bef-7ad37e2fff0d_0
b2b2ce7cebdc   17c225a562d9           "/metrics-server --c…"   4 minutes ago   Up 4 minutes             k8s_metrics-server_metrics-server-77564bc84d-j54hk_kube-system_662e9598-c936-4cbc-ad1c-5a671133da4b_0
b99a8d500109   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_metrics-server-77564bc84d-j54hk_kube-system_662e9598-c936-4cbc-ad1c-5a671133da4b_0
8b9548e04848   404fc3ab6749           "/opt/bin/flanneld -…"   4 minutes ago   Up 4 minutes             k8s_kube-flannel_kube-flannel-ds-9lnld_kube-system_b285135a-5a60-4060-9aa3-ace5c83222cf_0
0dc6fc7a3554   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_kube-flannel-ds-9lnld_kube-system_b285135a-5a60-4060-9aa3-ace5c83222cf_0
f9a9b3929d68   36c4ebbc9d97           "/usr/local/bin/kube…"   4 minutes ago   Up 4 minutes             k8s_kube-proxy_kube-proxy-2f2vj_kube-system_efd5573d-bcfc-4977-b59d-5b618f76e481_0
124eb526549d   k8s.gcr.io/pause:3.5   "/pause"                 4 minutes ago   Up 4 minutes             k8s_POD_kube-proxy-2f2vj_kube-system_efd5573d-bcfc-4977-b59d-5b618f76e481_0
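
If a container is missing from either listing, checking the node-level services is usually the fastest diagnosis. These are standard systemd and kubectl commands, not kubeeasy-specific ones.

# Check the container runtime and kubelet on the node in question
[root@node ~]# systemctl status docker kubelet --no-pager
[root@node ~]# journalctl -u kubelet --no-pager | tail -n 20
# From the master, list any pods that are not yet Running
[root@master ~]# kubectl get pods -A --field-selector=status.phase!=Running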

3. Additional Deployments

3.1 Install KubeVirt in the Kubernetes Cluster

[root@master ~]# kubeeasy add --virt kubevirt
[2023-03-20 13:40:18] INFO:    [start] bash kubeeasy add --virt kubevirt
[2023-03-20 13:40:18] INFO:    [check] sshpass command exists.
[2023-03-20 13:40:18] INFO:    [check] wget command exists.
[2023-03-20 13:40:19] INFO:    [check] conn apiserver succeeded.
[2023-03-20 13:40:19] INFO:    [virt] add kubevirt
[2023-03-20 13:40:19] INFO:    [apply] apply kubevirt-operator.yaml file
[2023-03-20 13:40:20] INFO:    [apply] apply kubevirt-operator.yaml file succeeded.
[2023-03-20 13:40:23] INFO:    [waiting] waiting kubevirt
[2023-03-20 13:40:30] INFO:    [waiting] kubevirt pods ready succeeded.
[2023-03-20 13:40:30] INFO:    [apply] apply kubevirt-cr.yaml file
[2023-03-20 13:40:30] INFO:    [apply] apply kubevirt-cr.yaml file succeeded.
[2023-03-20 13:41:03] INFO:    [waiting] waiting kubevirt
[2023-03-20 13:41:09] INFO:    [waiting] kubevirt pods ready succeeded.
[2023-03-20 13:41:12] INFO:    [waiting] waiting kubevirt
[2023-03-20 13:41:34] INFO:    [waiting] kubevirt pods ready succeeded.
[2023-03-20 13:41:37] INFO:    [waiting] waiting kubevirt
[2023-03-20 13:41:37] INFO:    [waiting] kubevirt pods ready succeeded.
[2023-03-20 13:41:37] INFO:    [apply] apply multus-daemonset.yaml file
[2023-03-20 13:41:38] INFO:    [apply] apply multus-daemonset.yaml file succeeded.
[2023-03-20 13:41:41] INFO:    [waiting] waiting kube-multus
[2023-03-20 13:41:41] INFO:    [waiting] kube-multus pods ready succeeded.
[2023-03-20 13:41:41] INFO:    [apply] apply multus-cni-macvlan.yaml file
[2023-03-20 13:41:41] INFO:    [apply] apply multus-cni-macvlan.yaml file succeeded.
[2023-03-20 13:41:41] INFO:    [cluster] kubernetes kubevirt status
+ kubectl get pod -n kubevirt -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE               NOMINATED NODE   READINESS GATES
virt-api-86f9d6d4f-dh9pv          1/1     Running   0          52s   10.244.0.6   k8s-master-node1   <none>           <none>
virt-api-86f9d6d4f-twn7b          1/1     Running   0          52s   10.244.1.7   k8s-worker-node1   <none>           <none>
virt-controller-54b79f5db-vb42b   1/1     Running   0          27s   10.244.1.9   k8s-worker-node1   <none>           <none>
virt-controller-54b79f5db-wtfcp   1/1     Running   0          27s   10.244.0.8   k8s-master-node1   <none>           <none>
virt-handler-gch9f                1/1     Running   0          27s   10.244.1.8   k8s-worker-node1   <none>           <none>
virt-handler-lkp2l                1/1     Running   0          27s   10.244.0.7   k8s-master-node1   <none>           <none>
virt-operator-6fbd74566c-756kp    1/1     Running   0          81s   10.244.0.4   k8s-master-node1   <none>           <none>
virt-operator-6fbd74566c-ns75v    1/1     Running   0          81s   10.244.1.6   k8s-worker-node1   <none>           <none> 
  See detailed log >> /var/log/kubeinstall.log 
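
Beyond the pod listing printed above, the KubeVirt installation can also be checked through its CRDs and the kubevirt custom resource. The virtctl client is only available if kubeeasy placed it on PATH, so that last check is optional.

# Confirm the KubeVirt CRDs and custom resource are deployed
[root@master ~]# kubectl get crd | grep -i kubevirt
[root@master ~]# kubectl get kubevirt -n kubevirt
# Optional: the virtctl client, if it was installed alongside KubeVirt
[root@master ~]# virtctl version || echo "virtctl not installed"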

3.2 Install the Istio Service Mesh in the Kubernetes Cluster

[root@master ~]# kubeeasy add --istio istio

# Task: install the Istio service mesh on the Kubernetes cluster, then create a namespace named exam and enable automatic Sidecar injection for it.
# Create the exam namespace
[root@master ~]# kubectl create ns exam
# Enable automatic Sidecar injection by labeling the namespace
[root@master ~]# kubectl label ns exam istio-injection=enabled

# Output of the kubeeasy add --istio command above:
[2023-03-20 13:43:39] INFO:    [start] bash kubeeasy add --istio istio
[2023-03-20 13:43:39] INFO:    [check] sshpass command exists.
[2023-03-20 13:43:39] INFO:    [check] wget command exists.
[2023-03-20 13:43:39] INFO:    [check] conn apiserver succeeded.
[2023-03-20 13:43:40] INFO:    [istio] add istio
✔ Istio core installed                                                                                                                                                                   
✔ Istiod installed                                                                                                                                                                       
✔ Egress gateways installed                                                                                                                                                              
✔ Ingress gateways installed                                                                                                                                                             
✔ Installation complete                                                                                                                                                                  Making this installation the default for injection and validation.

Thank you for installing Istio 1.12.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/FegQbc9UvePd4Z9z7
[2023-03-20 13:43:55] INFO:    [waiting] waiting istio-egressgateway
[2023-03-20 13:43:55] INFO:    [waiting] istio-egressgateway pods ready succeeded.
[2023-03-20 13:43:58] INFO:    [waiting] waiting istio-ingressgateway
[2023-03-20 13:43:58] INFO:    [waiting] istio-ingressgateway pods ready succeeded.
[2023-03-20 13:44:01] INFO:    [waiting] waiting istiod
[2023-03-20 13:44:01] INFO:    [waiting] istiod pods ready succeeded.
[2023-03-20 13:44:05] INFO:    [waiting] waiting grafana
[2023-03-20 13:44:05] INFO:    [waiting] grafana pods ready succeeded.
[2023-03-20 13:44:08] INFO:    [waiting] waiting jaeger
[2023-03-20 13:44:08] INFO:    [waiting] jaeger pods ready succeeded.
[2023-03-20 13:44:11] INFO:    [waiting] waiting kiali
[2023-03-20 13:44:32] INFO:    [waiting] kiali pods ready succeeded.
[2023-03-20 13:44:35] INFO:    [waiting] waiting prometheus
[2023-03-20 13:44:35] INFO:    [waiting] prometheus pods ready succeeded.
[2023-03-20 13:44:35] INFO:    [cluster] kubernetes istio status
+ kubectl get pod -n istio-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
grafana-6ccd56f4b6-7sfwt               1/1     Running   0          34s   10.244.0.11   k8s-master-node1   <none>           <none>
istio-egressgateway-7f4864f59c-qg22p   1/1     Running   0          47s   10.244.0.10   k8s-master-node1   <none>           <none>
istio-ingressgateway-55d9fb9f-2rp9q    1/1     Running   0          47s   10.244.1.10   k8s-worker-node1   <none>           <none>
istiod-555d47cb65-sj56t                1/1     Running   0          53s   10.244.0.9    k8s-master-node1   <none>           <none>
jaeger-5d44bc5c5d-q9vj8                1/1     Running   0          34s   10.244.1.11   k8s-worker-node1   <none>           <none>
kiali-9f9596d69-rv2fv                  1/1     Running   0          33s   10.244.0.12   k8s-master-node1   <none>           <none>
prometheus-64fd8ccd65-t62tj            2/2     Running   0          33s   10.244.1.12   k8s-worker-node1   <none>           <none> 
  See detailed log >> /var/log/kubeinstall.log 
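
To confirm the exam namespace created above is set up for automatic injection, check its labels. istioctl can also report the control-plane version if the client was installed to PATH by kubeeasy; otherwise skip that line.

# Verify the injection label on the exam namespace
[root@master ~]# kubectl get ns exam --show-labels
# Optional: check the control plane with istioctl, if available
[root@master ~]# istioctl version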

3.3 Platform Deployment: Deploy the Harbor Registry and the Helm Package Manager

[root@master ~]# kubeeasy add --registry harbor
[2023-03-20 13:47:23] INFO:    [start] bash kubeeasy add --registry harbor
[2023-03-20 13:47:23] INFO:    [check] sshpass command exists.
[2023-03-20 13:47:23] INFO:    [check] wget command exists.
[2023-03-20 13:47:23] INFO:    [check] conn apiserver succeeded.
[2023-03-20 13:47:23] INFO:    [offline] unzip offline harbor package on local.
[2023-03-20 13:47:29] INFO:    [offline] installing docker-compose on local.
[2023-03-20 13:47:29] INFO:    [offline] Installing harbor on local.

[Step 0]: checking if docker is installed ...

Note: docker version: 20.10.14

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 2.2.1

[Step 2]: loading Harbor images ...


[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /opt/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir



[Step 5]: starting Harbor ...
[+] Running 10/10
 ⠿ Network harbor_harbor        Created                                                                                                                                             0.1s
 ⠿ Container harbor-log         Started                                                                                                                                             0.7s
 ⠿ Container harbor-db          Started                                                                                                                                             1.8s
 ⠿ Container registryctl        Started                                                                                                                                             2.1s
 ⠿ Container harbor-portal      Started                                                                                                                                             2.1s
 ⠿ Container redis              Started                                                                                                                                             1.9s
 ⠿ Container registry           Started                                                                                                                                             2.2s
 ⠿ Container harbor-core        Started                                                                                                                                             3.1s
 ⠿ Container harbor-jobservice  Started                                                                                                                                             4.0s
 ⠿ Container nginx              Started                                                                                                                                             4.1s
✔ ----Harbor has been installed and started successfully.----
[2023-03-20 13:49:07] INFO:    [cluster] kubernetes Harbor status
+ docker-compose -f /opt/harbor/docker-compose.yml ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
harbor-core         "/harbor/entrypoint.…"   core                running (healthy)   
harbor-db           "/docker-entrypoint.…"   postgresql          running (healthy)   
harbor-jobservice   "/harbor/entrypoint.…"   jobservice          running (healthy)   
harbor-log          "/bin/sh -c /usr/loc…"   log                 running (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       "nginx -g 'daemon of…"   portal              running (healthy)   
nginx               "nginx -g 'daemon of…"   proxy               running (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp
redis               "redis-server /etc/r…"   redis               running (healthy)   
registry            "/home/harbor/entryp…"   registry            running (healthy)   
registryctl         "/home/harbor/start.…"   registryctl         running (healthy)    

  See detailed log >> /var/log/kubeinstall.log 
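
Once Harbor is up, it listens on port 80 of the master (see the nginx port mapping above). The sketch below logs in and pushes a test image; the admin password shown is Harbor's common default (Harbor12345), which may differ in the kubeeasy bundle, and because the registry is plain HTTP the Docker daemon must trust it as an insecure registry. The Helm client can be unpacked from the tarball in /opt if kubeeasy did not already install it.

# Log in to Harbor over HTTP (credentials are an assumption: admin / Harbor12345)
[root@master ~]# docker login 10.107.24.80 -u admin -p Harbor12345
# Push a test image into the default "library" project
[root@master ~]# docker tag k8s.gcr.io/pause:3.5 10.107.24.80/library/pause:3.5
[root@master ~]# docker push 10.107.24.80/library/pause:3.5
# Install the Helm client from the offline tarball, if not already present
[root@master ~]# tar -zxf /opt/helm-v3.7.1-linux-amd64.tar.gz -C /tmp
[root@master ~]# cp /tmp/linux-amd64/helm /usr/local/bin/helm && helm version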
