Found 17 posts in the 运维 (Operations) category.
2023-03-23
Deploying a Kubernetes Cluster with kubeeasy
1. Environment preparation

Host information (each node must have two NICs):

Hostname   NAT NIC         Local NIC        Node role
master     10.107.24.80    192.168.50.80    master
node       10.107.24.81    192.168.50.81    worker

Required file: chinaskills_cloud_paas_v2.0.2.iso

[root@master ~]# mount -o loop /dev/cdrom /mnt/
[root@master ~]# cp -rf /mnt/* /opt/
[root@master ~]# umount /mnt/
[root@master ~]# ls /opt/
centos  extended-images  helm-v3.7.1-linux-amd64.tar.gz  kubeeasy  kubevirt.tar.gz
dependencies  harbor-offline.tar.gz  istio.tar.gz  kubernetes.tar.gz

2. Offline installation steps

2.1 Install kubeeasy

[root@master ~]# cp -rf /opt/kubeeasy /usr/bin/
[root@master ~]# chmod +x /usr/bin/kubeeasy

2.2 Install the cluster dependency packages

[root@master ~]# kubeeasy install dependencies \
  --host 10.107.24.80,10.107.24.81 \
  --user root \
  --password 000000 \
  --offline-file /opt/dependencies/base-rpms.tar.gz

[2023-03-20 01:20:52] INFO: [start] bash kubeeasy install dependencies --host 10.107.24.80,10.107.24.81 --user root --password ****** --offline-file /opt/dependencies/base-rpms.tar.gz
[2023-03-20 01:20:52] INFO: [offline] unzip offline dependencies package on local.
[2023-03-20 01:20:54] INFO: [offline] unzip offline dependencies package succeeded.
[2023-03-20 01:20:54] INFO: [install] install dependencies packages on local.
[2023-03-20 01:22:36] INFO: [install] install dependencies packages succeeded.
[2023-03-20 01:22:37] INFO: [offline] 10.107.24.80: load offline dependencies file
[2023-03-20 01:22:39] INFO: [offline] load offline dependencies file to 10.107.24.80 succeeded.
[2023-03-20 01:22:39] INFO: [install] 10.107.24.80: install dependencies packages
[2023-03-20 01:22:40] INFO: [install] 10.107.24.80: install dependencies packages succeeded.
[2023-03-20 01:22:41] INFO: [offline] 10.107.24.81: load offline dependencies file
[2023-03-20 01:22:46] INFO: [offline] load offline dependencies file to 10.107.24.81 succeeded.
[2023-03-20 01:22:46] INFO: [install] 10.107.24.81: install dependencies packages
[2023-03-20 01:24:26] INFO: [install] 10.107.24.81: install dependencies packages succeeded.
See detailed log >> /var/log/kubeinstall.log

2.3 Install the Kubernetes cluster

[root@master ~]# kubeeasy install kubernetes \
  --master 10.107.24.80 \
  --worker 10.107.24.81 \
  --user root \
  --password 000000 \
  --version 1.22.1 \
  --offline-file /opt/kubernetes.tar.gz

[2023-03-20 01:24:58] INFO: [start] bash kubeeasy install kubernetes --master 10.107.24.80 --worker 10.107.24.81 --user root --password ****** --version 1.22.1 --offline-file /opt/kubernetes.tar.gz
[2023-03-20 01:24:58] INFO: [check] sshpass command exists.
[2023-03-20 01:24:58] INFO: [check] rsync command exists.
[2023-03-20 01:24:59] INFO: [check] ssh 10.107.24.80 connection succeeded.
[2023-03-20 01:24:59] INFO: [check] ssh 10.107.24.81 connection succeeded.
[2023-03-20 01:24:59] INFO: [offline] unzip offline package on local.
[2023-03-20 01:25:09] INFO: [offline] unzip offline package succeeded.
[2023-03-20 01:25:09] INFO: [offline] master 10.107.24.80: load offline file
[2023-03-20 01:25:10] INFO: [offline] load offline file to 10.107.24.80 succeeded.
[2023-03-20 01:25:10] INFO: [offline] master 10.107.24.80: disable the firewall
[2023-03-20 01:25:11] INFO: [offline] 10.107.24.80: disable the firewall succeeded.
[2023-03-20 01:25:11] INFO: [offline] worker 10.107.24.81: load offline file
[2023-03-20 01:26:05] INFO: [offline] load offline file to 10.107.24.81 succeeded.
[2023-03-20 01:26:05] INFO: [offline] worker 10.107.24.81: disable the firewall
[2023-03-20 01:26:06] INFO: [offline] 10.107.24.81: disable the firewall succeeded.
[2023-03-20 01:26:06] INFO: [get] Get 10.107.24.80 InternalIP.
[2023-03-20 01:26:07] INFO: [result] get MGMT_NODE_IP value succeeded. [2023-03-20 01:26:07] INFO: [result] MGMT_NODE_IP is 10.107.24.80 [2023-03-20 01:26:07] INFO: [init] master: 10.107.24.80 [2023-03-20 13:26:09] INFO: [init] init master 10.107.24.80 succeeded. [2023-03-20 13:26:09] INFO: [init] master: 10.107.24.80 set hostname and hosts [2023-03-20 13:26:09] INFO: [init] 10.107.24.80 set hostname and hosts succeeded. [2023-03-20 13:26:09] INFO: [init] worker: 10.107.24.81 [2023-03-20 13:26:12] INFO: [init] init worker 10.107.24.81 succeeded. [2023-03-20 13:26:12] INFO: [init] master: 10.107.24.81 set hostname and hosts [2023-03-20 13:26:12] INFO: [init] 10.107.24.81 set hostname and hosts succeeded. [2023-03-20 13:26:12] INFO: [install] install docker on 10.107.24.80. [2023-03-20 13:27:59] INFO: [install] install docker on 10.107.24.80 succeeded. [2023-03-20 13:27:59] INFO: [install] install kube on 10.107.24.80 [2023-03-20 13:28:01] INFO: [install] install kube on 10.107.24.80 succeeded. [2023-03-20 13:28:01] INFO: [install] install docker on 10.107.24.81. [2023-03-20 13:29:45] INFO: [install] install docker on 10.107.24.81 succeeded. [2023-03-20 13:29:45] INFO: [install] install kube on 10.107.24.81 [2023-03-20 13:29:47] INFO: [install] install kube on 10.107.24.81 succeeded. [2023-03-20 13:29:47] INFO: [kubeadm init] kubeadm init on 10.107.24.80 [2023-03-20 13:29:47] INFO: [kubeadm init] 10.107.24.80: set kubeadm-config.yaml [2023-03-20 13:29:48] INFO: [kubeadm init] 10.107.24.80: set kubeadm-config.yaml succeeded. [2023-03-20 13:29:48] INFO: [kubeadm init] 10.107.24.80: kubeadm init start. [2023-03-20 13:30:03] INFO: [kubeadm init] 10.107.24.80: kubeadm init succeeded. [2023-03-20 13:30:06] INFO: [kubeadm init] 10.107.24.80: set kube config. [2023-03-20 13:30:06] INFO: [kubeadm init] 10.107.24.80: set kube config succeeded. [2023-03-20 13:30:06] INFO: [kubeadm init] 10.107.24.80: delete master taint [2023-03-20 13:30:07] INFO: [kubeadm init] 10.107.24.80: delete master taint succeeded. [2023-03-20 13:30:07] INFO: [kubeadm init] Auto-Approve kubelet cert csr succeeded. [2023-03-20 13:30:07] INFO: [kubeadm join] master: get join token and cert info [2023-03-20 13:30:08] INFO: [result] get CACRT_HASH value succeeded. [2023-03-20 13:30:08] INFO: [result] get INTI_CERTKEY value succeeded. [2023-03-20 13:30:09] INFO: [result] get INIT_TOKEN value succeeded. [2023-03-20 13:30:09] INFO: [kubeadm join] worker 10.107.24.81 join cluster. [2023-03-20 13:30:28] INFO: [kubeadm join] worker 10.107.24.81 join cluster succeeded. [2023-03-20 13:30:28] INFO: [kubeadm join] set 10.107.24.81 worker node role. [2023-03-20 13:30:29] INFO: [kubeadm join] set 10.107.24.81 worker node role succeeded. [2023-03-20 13:30:29] INFO: [network] add flannel network [2023-03-20 13:30:29] INFO: [calico] change flannel pod subnet succeeded. [2023-03-20 13:30:29] INFO: [apply] apply kube-flannel.yaml file [2023-03-20 13:30:30] INFO: [apply] apply kube-flannel.yaml file succeeded. [2023-03-20 13:30:33] INFO: [waiting] waiting kube-flannel-ds [2023-03-20 13:30:34] INFO: [waiting] kube-flannel-ds pods ready succeeded. [2023-03-20 13:30:34] INFO: [apply] apply coredns-cm.yaml file [2023-03-20 13:30:34] INFO: [apply] apply coredns-cm.yaml file succeeded. [2023-03-20 13:30:35] INFO: [apply] apply metrics-server.yaml file [2023-03-20 13:30:35] INFO: [apply] apply metrics-server.yaml file succeeded. 
[2023-03-20 13:30:38] INFO: [waiting] waiting metrics-server [2023-03-20 13:30:39] INFO: [waiting] metrics-server pods ready succeeded. [2023-03-20 13:30:39] INFO: [apply] apply dashboard.yaml file [2023-03-20 13:30:39] INFO: [apply] apply dashboard.yaml file succeeded. [2023-03-20 13:30:42] INFO: [waiting] waiting dashboard-agent [2023-03-20 13:30:43] INFO: [waiting] dashboard-agent pods ready succeeded. [2023-03-20 13:30:46] INFO: [waiting] waiting dashboard-en [2023-03-20 13:30:46] INFO: [waiting] dashboard-en pods ready succeeded. [2023-03-20 13:31:01] INFO: [cluster] kubernetes cluster status + kubectl get node -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME k8s-master-node1 Ready control-plane,master,worker 61s v1.22.1 10.107.24.80 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.8 k8s-worker-node1 Ready worker 37s v1.22.1 10.107.24.81 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.8 + kubectl get pods -A -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dashboard-cn dashboard-agent-cd88cf454-q48b4 1/1 Running 0 22s 10.244.1.2 k8s-worker-node1 <none> <none> dashboard-cn dashboard-cn-64bd46887f-jvm4t 1/1 Running 0 22s 10.244.1.3 k8s-worker-node1 <none> <none> dashboard-en dashboard-en-55596d469-lrnp9 1/1 Running 0 22s 10.244.1.4 k8s-worker-node1 <none> <none> kube-system coredns-78fcd69978-49kgg 1/1 Running 0 44s 10.244.0.2 k8s-master-node1 <none> <none> kube-system coredns-78fcd69978-hq69g 1/1 Running 0 44s 10.244.0.3 k8s-master-node1 <none> <none> kube-system etcd-k8s-master-node1 1/1 Running 0 58s 10.107.24.80 k8s-master-node1 <none> <none> kube-system kube-apiserver-k8s-master-node1 1/1 Running 0 58s 10.107.24.80 k8s-master-node1 <none> <none> kube-system kube-controller-manager-k8s-master-node1 1/1 Running 0 58s 10.107.24.80 k8s-master-node1 <none> <none> kube-system kube-flannel-ds-9lnld 1/1 Running 0 31s 10.107.24.81 k8s-worker-node1 <none> <none> kube-system kube-flannel-ds-xf86g 1/1 Running 0 31s 10.107.24.80 k8s-master-node1 <none> <none> kube-system kube-proxy-2f2vj 1/1 Running 0 37s 10.107.24.81 k8s-worker-node1 <none> <none> kube-system kube-proxy-d6xjq 1/1 Running 0 44s 10.107.24.80 k8s-master-node1 <none> <none> kube-system kube-scheduler-k8s-master-node1 1/1 Running 0 58s 10.107.24.80 k8s-master-node1 <none> <none> kube-system metrics-server-77564bc84d-j54hk 1/1 Running 0 26s 10.107.24.81 k8s-worker-node1 <none> <none> See detailed log >> /var/log/kubeinstall.log 2.4 验证Docker容器启动状态[root@master ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 992e282b87bc 8d147537fb7d "/coredns -conf /etc…" 4 minutes ago Up 4 minutes k8s_coredns_coredns-78fcd69978-hq69g_kube-system_826a525e-72ce-4cbd-90e5-fdca4b2fab32_0 5a85366c9f9b 8d147537fb7d "/coredns -conf /etc…" 4 minutes ago Up 4 minutes k8s_coredns_coredns-78fcd69978-49kgg_kube-system_3fccb231-ee10-4976-b81f-24d63aacd08a_0 7dc7e4542e87 k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_coredns-78fcd69978-hq69g_kube-system_826a525e-72ce-4cbd-90e5-fdca4b2fab32_3 fe088504486c k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_coredns-78fcd69978-49kgg_kube-system_3fccb231-ee10-4976-b81f-24d63aacd08a_3 d6c22b0b9f01 404fc3ab6749 "/opt/bin/flanneld -…" 4 minutes ago Up 4 minutes k8s_kube-flannel_kube-flannel-ds-xf86g_kube-system_27967aa7-4921-4117-b512-98b39c39faf5_0 9b3495f4b962 k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes 
k8s_POD_kube-flannel-ds-xf86g_kube-system_27967aa7-4921-4117-b512-98b39c39faf5_0 9859df5625ba 36c4ebbc9d97 "/usr/local/bin/kube…" 4 minutes ago Up 4 minutes k8s_kube-proxy_kube-proxy-d6xjq_kube-system_4f120047-946e-4cc4-9af0-a3d5502393eb_0 e3e8672494f2 k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-d6xjq_kube-system_4f120047-946e-4cc4-9af0-a3d5502393eb_0 01b91ddb6ad2 6e002eb89a88 "kube-controller-man…" 5 minutes ago Up 5 minutes k8s_kube-controller-manager_kube-controller-manager-k8s-master-node1_kube-system_aa5c3b81774bd4cc98215b1c8732d87c_0 3fe245146e59 004811815584 "etcd --advertise-cl…" 5 minutes ago Up 5 minutes k8s_etcd_etcd-k8s-master-node1_kube-system_8d8c7b9310b33393732d993c41c7d450_0 10baaf8a7ab9 f30469a2491a "kube-apiserver --ad…" 5 minutes ago Up 5 minutes k8s_kube-apiserver_kube-apiserver-k8s-master-node1_kube-system_710a8249ad30bff927c89228973db8ac_0 a06f62ed2331 aca5ededae9c "kube-scheduler --au…" 5 minutes ago Up 5 minutes k8s_kube-scheduler_kube-scheduler-k8s-master-node1_kube-system_6ab3eb82cd0b41c3d3b546b333a12933_0 553f4038bea5 k8s.gcr.io/pause:3.5 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-scheduler-k8s-master-node1_kube-system_6ab3eb82cd0b41c3d3b546b333a12933_0 9fd833e89902 k8s.gcr.io/pause:3.5 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-controller-manager-k8s-master-node1_kube-system_aa5c3b81774bd4cc98215b1c8732d87c_0 f6ebe5b6fb57 k8s.gcr.io/pause:3.5 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-apiserver-k8s-master-node1_kube-system_710a8249ad30bff927c89228973db8ac_0 7f0f2b465ec9 k8s.gcr.io/pause:3.5 "/pause" 5 minutes ago Up 5 minutes k8s_POD_etcd-k8s-master-node1_kube-system_8d8c7b9310b33393732d993c41c7d450_0 [root@node ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d619333b2d61 d3dc57185ba2 "/portainer --tunnel…" 4 minutes ago Up 4 minutes k8s_dashboard-en_dashboard-en-55596d469-lrnp9_dashboard-en_f8a4e4fc-c6c2-4dc3-95e3-bd581ea81e76_0 7d7c0eaf86f8 c0d510ae5b6e "./agent" 4 minutes ago Up 4 minutes k8s_dashboard-agent_dashboard-agent-cd88cf454-q48b4_dashboard-cn_921e034a-83ba-4a22-8029-d74ebb82c9bb_0 f69883f86cbc ff950b2c8963 "/portainer" 4 minutes ago Up 4 minutes k8s_dashboard-cn_dashboard-cn-64bd46887f-jvm4t_dashboard-cn_cf7654e0-4b77-4604-9bef-7ad37e2fff0d_0 dcfb61df7245 k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_dashboard-en-55596d469-lrnp9_dashboard-en_f8a4e4fc-c6c2-4dc3-95e3-bd581ea81e76_0 ecf3389d5a7e k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_dashboard-agent-cd88cf454-q48b4_dashboard-cn_921e034a-83ba-4a22-8029-d74ebb82c9bb_0 76d14ee95f0f k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_dashboard-cn-64bd46887f-jvm4t_dashboard-cn_cf7654e0-4b77-4604-9bef-7ad37e2fff0d_0 b2b2ce7cebdc 17c225a562d9 "/metrics-server --c…" 4 minutes ago Up 4 minutes k8s_metrics-server_metrics-server-77564bc84d-j54hk_kube-system_662e9598-c936-4cbc-ad1c-5a671133da4b_0 b99a8d500109 k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_metrics-server-77564bc84d-j54hk_kube-system_662e9598-c936-4cbc-ad1c-5a671133da4b_0 8b9548e04848 404fc3ab6749 "/opt/bin/flanneld -…" 4 minutes ago Up 4 minutes k8s_kube-flannel_kube-flannel-ds-9lnld_kube-system_b285135a-5a60-4060-9aa3-ace5c83222cf_0 0dc6fc7a3554 k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-flannel-ds-9lnld_kube-system_b285135a-5a60-4060-9aa3-ace5c83222cf_0 f9a9b3929d68 36c4ebbc9d97 "/usr/local/bin/kube…" 4 minutes ago Up 4 minutes 
k8s_kube-proxy_kube-proxy-2f2vj_kube-system_efd5573d-bcfc-4977-b59d-5b618f76e481_0 124eb526549d k8s.gcr.io/pause:3.5 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-2f2vj_kube-system_efd5573d-bcfc-4977-b59d-5b618f76e481_03.其他部署3.1 在 Kubernetes 集群中完成 KubeVirt 环境的安装[root@master ~]# kubeeasy add --virt kubevirt[2023-03-20 13:40:18] INFO: [start] bash kubeeasy add --virt kubevirt [2023-03-20 13:40:18] INFO: [check] sshpass command exists. [2023-03-20 13:40:18] INFO: [check] wget command exists. [2023-03-20 13:40:19] INFO: [check] conn apiserver succeeded. [2023-03-20 13:40:19] INFO: [virt] add kubevirt [2023-03-20 13:40:19] INFO: [apply] apply kubevirt-operator.yaml file [2023-03-20 13:40:20] INFO: [apply] apply kubevirt-operator.yaml file succeeded. [2023-03-20 13:40:23] INFO: [waiting] waiting kubevirt [2023-03-20 13:40:30] INFO: [waiting] kubevirt pods ready succeeded. [2023-03-20 13:40:30] INFO: [apply] apply kubevirt-cr.yaml file [2023-03-20 13:40:30] INFO: [apply] apply kubevirt-cr.yaml file succeeded. [2023-03-20 13:41:03] INFO: [waiting] waiting kubevirt [2023-03-20 13:41:09] INFO: [waiting] kubevirt pods ready succeeded. [2023-03-20 13:41:12] INFO: [waiting] waiting kubevirt [2023-03-20 13:41:34] INFO: [waiting] kubevirt pods ready succeeded. [2023-03-20 13:41:37] INFO: [waiting] waiting kubevirt [2023-03-20 13:41:37] INFO: [waiting] kubevirt pods ready succeeded. [2023-03-20 13:41:37] INFO: [apply] apply multus-daemonset.yaml file [2023-03-20 13:41:38] INFO: [apply] apply multus-daemonset.yaml file succeeded. [2023-03-20 13:41:41] INFO: [waiting] waiting kube-multus [2023-03-20 13:41:41] INFO: [waiting] kube-multus pods ready succeeded. [2023-03-20 13:41:41] INFO: [apply] apply multus-cni-macvlan.yaml file [2023-03-20 13:41:41] INFO: [apply] apply multus-cni-macvlan.yaml file succeeded. [2023-03-20 13:41:41] INFO: [cluster] kubernetes kubevirt status + kubectl get pod -n kubevirt -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES virt-api-86f9d6d4f-dh9pv 1/1 Running 0 52s 10.244.0.6 k8s-master-node1 <none> <none> virt-api-86f9d6d4f-twn7b 1/1 Running 0 52s 10.244.1.7 k8s-worker-node1 <none> <none> virt-controller-54b79f5db-vb42b 1/1 Running 0 27s 10.244.1.9 k8s-worker-node1 <none> <none> virt-controller-54b79f5db-wtfcp 1/1 Running 0 27s 10.244.0.8 k8s-master-node1 <none> <none> virt-handler-gch9f 1/1 Running 0 27s 10.244.1.8 k8s-worker-node1 <none> <none> virt-handler-lkp2l 1/1 Running 0 27s 10.244.0.7 k8s-master-node1 <none> <none> virt-operator-6fbd74566c-756kp 1/1 Running 0 81s 10.244.0.4 k8s-master-node1 <none> <none> virt-operator-6fbd74566c-ns75v 1/1 Running 0 81s 10.244.1.6 k8s-worker-node1 <none> <none> See detailed log >> /var/log/kubeinstall.log 3.2 在 Kubernetes 集群中完成服务网格(ServiceMesh)项目 Istio 环境的安装[root@master ~]# kubeeasy add --istio istio #在 Kubernetes 集群上完成 Istio 服务网格环境的安装,然后新建命名空间 exam,为该命名空间开启自动注入 Sidecar。 #创建exam命名空间 [root@master ~]# kubectl create ns exam #通过为命名空间打标签来实现自动注入 [root@master ~]# kubectl label ns exam istio-injection=enabled[2023-03-20 13:43:39] INFO: [start] bash kubeeasy add --istio istio [2023-03-20 13:43:39] INFO: [check] sshpass command exists. [2023-03-20 13:43:39] INFO: [check] wget command exists. [2023-03-20 13:43:39] INFO: [check] conn apiserver succeeded. [2023-03-20 13:43:40] INFO: [istio] add istio ✔ Istio core installed ✔ Istiod installed ✔ Egress gateways installed ✔ Ingress gateways installed ✔ Installation complete Making this installation the default for injection and validation. 
Thank you for installing Istio 1.12. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/FegQbc9UvePd4Z9z7 [2023-03-20 13:43:55] INFO: [waiting] waiting istio-egressgateway [2023-03-20 13:43:55] INFO: [waiting] istio-egressgateway pods ready succeeded. [2023-03-20 13:43:58] INFO: [waiting] waiting istio-ingressgateway [2023-03-20 13:43:58] INFO: [waiting] istio-ingressgateway pods ready succeeded. [2023-03-20 13:44:01] INFO: [waiting] waiting istiod [2023-03-20 13:44:01] INFO: [waiting] istiod pods ready succeeded. [2023-03-20 13:44:05] INFO: [waiting] waiting grafana [2023-03-20 13:44:05] INFO: [waiting] grafana pods ready succeeded. [2023-03-20 13:44:08] INFO: [waiting] waiting jaeger [2023-03-20 13:44:08] INFO: [waiting] jaeger pods ready succeeded. [2023-03-20 13:44:11] INFO: [waiting] waiting kiali [2023-03-20 13:44:32] INFO: [waiting] kiali pods ready succeeded. [2023-03-20 13:44:35] INFO: [waiting] waiting prometheus [2023-03-20 13:44:35] INFO: [waiting] prometheus pods ready succeeded. [2023-03-20 13:44:35] INFO: [cluster] kubernetes istio status + kubectl get pod -n istio-system -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES grafana-6ccd56f4b6-7sfwt 1/1 Running 0 34s 10.244.0.11 k8s-master-node1 <none> <none> istio-egressgateway-7f4864f59c-qg22p 1/1 Running 0 47s 10.244.0.10 k8s-master-node1 <none> <none> istio-ingressgateway-55d9fb9f-2rp9q 1/1 Running 0 47s 10.244.1.10 k8s-worker-node1 <none> <none> istiod-555d47cb65-sj56t 1/1 Running 0 53s 10.244.0.9 k8s-master-node1 <none> <none> jaeger-5d44bc5c5d-q9vj8 1/1 Running 0 34s 10.244.1.11 k8s-worker-node1 <none> <none> kiali-9f9596d69-rv2fv 1/1 Running 0 33s 10.244.0.12 k8s-master-node1 <none> <none> prometheus-64fd8ccd65-t62tj 2/2 Running 0 33s 10.244.1.12 k8s-worker-node1 <none> <none> See detailed log >> /var/log/kubeinstall.log 3.3 平台部署–部署 Harbor 仓库及 Helm 包管理工具kubeeasy add --registry harbor[2023-03-20 13:47:23] INFO: [start] bash kubeeasy add --registry harbor [2023-03-20 13:47:23] INFO: [check] sshpass command exists. [2023-03-20 13:47:23] INFO: [check] wget command exists. [2023-03-20 13:47:23] INFO: [check] conn apiserver succeeded. [2023-03-20 13:47:23] INFO: [offline] unzip offline harbor package on local. [2023-03-20 13:47:29] INFO: [offline] installing docker-compose on local. [2023-03-20 13:47:29] INFO: [offline] Installing harbor on local. [Step 0]: checking if docker is installed ... Note: docker version: 20.10.14 [Step 1]: checking docker-compose is installed ... Note: docker-compose version: 2.2.1 [Step 2]: loading Harbor images ... [Step 3]: preparing environment ... [Step 4]: preparing harbor configs ... prepare base dir is set to /opt/harbor WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. 
Please make sure to upgrade to https Generated configuration file: /config/portal/nginx.conf Generated configuration file: /config/log/logrotate.conf Generated configuration file: /config/log/rsyslog_docker.conf Generated configuration file: /config/nginx/nginx.conf Generated configuration file: /config/core/env Generated configuration file: /config/core/app.conf Generated configuration file: /config/registry/config.yml Generated configuration file: /config/registryctl/env Generated configuration file: /config/registryctl/config.yml Generated configuration file: /config/db/env Generated configuration file: /config/jobservice/env Generated configuration file: /config/jobservice/config.yml Generated and saved secret to file: /data/secret/keys/secretkey Successfully called func: create_root_cert Generated configuration file: /compose_location/docker-compose.yml Clean up the input dir [Step 5]: starting Harbor ... [+] Running 10/10 ⠿ Network harbor_harbor Created 0.1s ⠿ Container harbor-log Started 0.7s ⠿ Container harbor-db Started 1.8s ⠿ Container registryctl Started 2.1s ⠿ Container harbor-portal Started 2.1s ⠿ Container redis Started 1.9s ⠿ Container registry Started 2.2s ⠿ Container harbor-core Started 3.1s ⠿ Container harbor-jobservice Started 4.0s ⠿ Container nginx Started 4.1s ✔ ----Harbor has been installed and started successfully.---- [2023-03-20 13:49:07] INFO: [cluster] kubernetes Harbor status + docker-compose -f /opt/harbor/docker-compose.yml ps NAME COMMAND SERVICE STATUS PORTS harbor-core "/harbor/entrypoint.…" core running (healthy) harbor-db "/docker-entrypoint.…" postgresql running (healthy) harbor-jobservice "/harbor/entrypoint.…" jobservice running (healthy) harbor-log "/bin/sh -c /usr/loc…" log running (healthy) 127.0.0.1:1514->10514/tcp harbor-portal "nginx -g 'daemon of…" portal running (healthy) nginx "nginx -g 'daemon of…" proxy running (healthy) 0.0.0.0:80->8080/tcp, :::80->8080/tcp redis "redis-server /etc/r…" redis running (healthy) registry "/home/harbor/entryp…" registry running (healthy) registryctl "/home/harbor/start.…" registryctl running (healthy) See detailed log >> /var/log/kubeinstall.log
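The steps above install Istio and enable automatic sidecar injection on the exam namespace, but never verify the injection. A minimal check, not part of the original post: deploy a throwaway Pod into exam and confirm it comes up with two containers (the application plus istio-proxy). The Pod name and the nginx image below are placeholders invented for this example, and the image is assumed to be reachable (or preloaded) in this offline environment.

apiVersion: v1
kind: Pod
metadata:
  name: injection-test        # hypothetical name, not from the original post
  namespace: exam
spec:
  containers:
  - name: web
    image: nginx:1.21         # assumed to be available to the offline cluster
    ports:
    - containerPort: 80

[root@master ~]# kubectl apply -f injection-test.yaml
[root@master ~]# kubectl get pod -n exam injection-test
# READY should report 2/2 once the istio-proxy sidecar has been injected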
2023-03-23
Common kubectl Commands for Kubernetes Pods
View Pods in the default namespace (the default namespace is "default"):

kubectl get pod    # the resource name can also be written as pods or po

NAME   READY   STATUS    RESTARTS   AGE
exam   1/1     Running   0          2d1h

View Pods in a specific namespace:

kubectl get pod -n kube-system

NAME                       READY   STATUS    RESTARTS   AGE
coredns-78fcd69978-49kgg   1/1     Running   0          2d1h
coredns-78fcd69978-hq69g   1/1     Running   0          2d1h
......

View Pods in all namespaces:

kubectl get pod -A

NAMESPACE      NAME                           READY   STATUS    RESTARTS   AGE
dashboard-en   dashboard-en-55596d469-lrnp9   1/1     Running   0          2d1h
default        exam                           1/1     Running   0          2d1h
istio-system   grafana-6ccd56f4b6-7sfwt       1/1     Running   0          2d1h
......

View detailed Pod information:

kubectl get pod -o wide

NAME   READY   STATUS    RESTARTS   AGE    IP           NODE               NOMINATED NODE   READINESS GATES
exam   1/1     Running   0          2d1h   10.244.1.5   k8s-worker-node1   <none>           <none>

Watch Pods in real time:

kubectl get pod -w

NAME   READY   STATUS    RESTARTS   AGE
exam   1/1     Running   0          2d1h
^C

Create a Pod directly from the command line:

kubectl run mariadb --image=hyperf-mariadb:v1.0

Create a Pod from a manifest:

apiVersion: v1
kind: Pod
metadata:
  name: mariadb
spec:
  containers:
  - name: mariadb
    image: hyperf-mariadb:v1.0
    ports:
    - containerPort: 3307

create only works when the resource does not exist yet and errors out if it already exists; apply creates the resource when it is missing and updates its configuration when it already exists.

kubectl create -f k8s.yml
kubectl apply -f k8s.yml

Delete a Pod:

kubectl delete pod mariadb
kubectl delete -f k8s.yml

Enter a container in a Pod:

# by default exec enters the first container in the Pod
kubectl exec -it <pod-name> -- <command>
# enter a specific container in the Pod
kubectl exec -it <pod-name> -c <container-name> -- <command>

View Pod logs:

# view the logs of the Pod's container (-f is optional and streams in real time)
kubectl logs -f <pod-name>
# view the logs of a specific container in the Pod
kubectl logs -f <pod-name> -c <container-name>

View a Pod's description:

kubectl describe pod <pod-name>
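The -c flag in the exec and logs commands above only matters when a Pod runs more than one container. A small two-container manifest makes that concrete; it is a sketch for illustration only, and the names and images are invented for this example rather than taken from the post.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar            # example name only
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80
  - name: log-tail                  # second container, so -c has something to select
    image: busybox:1.35
    command: ["sh", "-c", "tail -f /dev/null"]

# open a shell in the sidecar instead of the first container
kubectl exec -it web-with-sidecar -c log-tail -- sh
# read logs from the web container only
kubectl logs web-with-sidecar -c web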
2023-03-19
One-Click Kubernetes Cluster Deployment with the ezdown Script
1. Environment preparation

Host information:

Hostname   IP address      Cluster role   Services
node1      10.107.24.81    node           etcd, api-server, controller-manager, scheduler
node2      10.107.24.82    node           kubelet, kube-proxy, docker
master     10.107.24.83    master         kubelet, kube-proxy, docker

Configure hosts resolution:

10.107.24.81 node1
10.107.24.82 node2
10.107.24.83 master

Generate an SSH key pair (press Enter through all prompts):

ssh-keygen

Set up passwordless login:

ssh-copy-id -i node1
ssh-copy-id -i node2
ssh-copy-id -i master

Test passwordless login:

ssh master
ssh node1
ssh node2

Disable the swap partition:

swapoff -a
sed -i.bak '/swap/s/^/#/' /etc/fstab

Stop and disable the firewall:

systemctl stop firewalld && systemctl disable firewalld

Install git and wget:

yum install -y git wget

2. Installing Kubernetes with the script

Download ezdown:

[root@master ~]# wget https://github.com/easzlab/kubeasz/releases/download/3.5.2/ezdown
[root@master ~]# chmod +x ezdown

Download the required files:

# inside mainland China
[root@master ~]# ./ezdown -D
# outside mainland China
[root@master ~]# ./ezdown -D -m standard

2023-03-19 14:19:49 INFO Action successed: download_all

Create the cluster configuration:

[root@master ~]# ./ezdown -S

2023-03-19 14:20:13 INFO Action begin: start_kubeasz_docker
2023-03-19 14:20:13 INFO try to run kubeasz in a container
2023-03-19 14:20:13 DEBUG get host IP: 10.107.24.83
fc0377122adc8726397e25b6f0285e04072e6b76b9f6b9bd1a0dca3b17d6e1a2
2023-03-19 14:20:14 INFO Action successed: start_kubeasz_docker

Create a new cluster:

[root@master ~]# docker exec -it kubeasz ezctl new k8s

2023-03-19 06:21:22 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s
2023-03-19 06:21:22 DEBUG set versions
2023-03-19 06:21:22 DEBUG cluster k8s: files successfully created.
2023-03-19 06:21:22 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s/hosts'
2023-03-19 06:21:22 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s/config.yml'

Edit the new cluster's configuration file:

[root@master ~]# vim /etc/kubeasz/clusters/k8s/hosts

# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
10.107.24.83

# master node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_master]
10.107.24.83 k8s_nodename='master'

# work node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_node]
10.107.24.81 k8s_nodename='node1'
10.107.24.82 k8s_nodename='node2'

One-click installation:

docker exec -it kubeasz ezctl setup k8s all

PLAY RECAP *****************************************************************************************************
10.107.24.81               : ok=76   changed=73   unreachable=0   failed=0   skipped=129   rescued=0   ignored=1
10.107.24.82               : ok=76   changed=73   unreachable=0   failed=0   skipped=129   rescued=0   ignored=1
10.107.24.83               : ok=113  changed=105  unreachable=0   failed=0   skipped=152   rescued=0   ignored=1
localhost                  : ok=43   changed=40   unreachable=0   failed=0   skipped=42    rescued=0   ignored=0
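The per-node preparation in section 1 (swap off, firewall off, base packages) is performed by hand on each machine. Once the hosts entries and passwordless SSH are in place, the same steps can be pushed from the master in one loop. This is only a convenience sketch assembled from the commands already shown above; it is not part of the original post.

#!/bin/bash
# run the section-1 preparation steps on every node over SSH
for host in master node1 node2; do
  ssh "$host" 'swapoff -a && sed -i.bak "/swap/s/^/#/" /etc/fstab'      # disable swap now and in fstab
  ssh "$host" 'systemctl stop firewalld && systemctl disable firewalld' # stop and disable the firewall
  ssh "$host" 'yum install -y git wget'                                 # install the base packages
done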
2023-03-18
Kubernetes Basics
1. Kubernetes core components
2. Kubernetes architecture diagram
3. Kubernetes extension components
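This post is an outline of the Kubernetes control plane. On a running kubeadm-style cluster (such as the kubeeasy cluster above), the core components it names can be listed directly, since the control plane runs as static Pods in kube-system. A quick check, not from the original post:

[root@master ~]# kubectl get pods -n kube-system
# expect kube-apiserver-*, kube-controller-manager-*, kube-scheduler-* and etcd-* on the master,
# plus kube-proxy and the CNI (flannel) DaemonSet Pods on every node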
2023-02-09
Deploying the 国基北盛 OpenStack Platform with Ansible Scripts
1.openstack搭建基础信息主机名外网IP内网IPcontroller172.16.1.12110.10.10.121compute172.16.1.12210.10.10.122ansible172.16.1.123无搭建方式一使用提供的用户名密码,登录提供的OpenStack私有云平台,自行使用CentOS7.5镜像创建两台云主机,flavor使用4v_8G_100G_50G的配置,第一张网卡使用提供的网络,第二张网卡使用的网络自行创建(网段为10.10.X.0/24,X为工位号)。创建完云主机后确保网络正常通信,然后按以下要求配置服务器:设置控制节点主机名为controller,设置计算节点主机名为compute;controller[root@localhost ~]# hostnamectl set-hostname controller [root@localhost ~]# bash [root@controller ~]#- compute [root@localhost ~]# hostnamectl set-hostname compute [root@localhost ~]# bash [root@compute ~]# 修改hosts文件将IP地址映射为主机名controller[root@controller ~]# echo 172.16.1.121 controller >> /etc/hosts [root@controller ~]# echo 172.16.1.122 compute >> /etc/hosts [root@controller ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute- compute [root@compute ~]# echo 172.16.1.121 controller >> /etc/hosts [root@compute ~]# echo 172.16.1.122 compute >> /etc/hosts [root@compute ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute使用提供的用户名密码,登录提供的OpenStack私有云平台,自行使用CentOS7.5镜像创建一台云主机,flavor使用2v_4G_50G的配置,使用单网卡。启动后使用提供的ansible.tar.gz软件包在这个节点上安装ansible服务并配置ansible节点与controller、compute节点的hosts主机名映射。修改主机名ansible[root@localhost ~]# hostnamectl set-hostname ansible [root@localhost ~]# bash [root@ansible ~]#配置hosts主机名映射ansible[root@ansible ~]# echo 172.16.1.121 controller >> /etc/hosts [root@ansible ~]# echo 172.16.1.122 compute >> /etc/hosts [root@ansible ~]# echo 172.16.1.123 ansible >> /etc/hosts [root@ansible ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute 172.16.1.123 ansible- controller [root@controller ~]# echo 172.16.1.123 ansible >> /etc/hosts [root@controller ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute 172.16.1.123 ansible- compute [root@compute ~]# echo 172.16.1.123 ansible >> /etc/hosts [root@compute ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute 172.16.1.123 ansible使用ansible.tar.gz软件包安装ansibleansible[root@ansible opt]# ls -al | grep ansible.tar.gz -rw-r--r--. 
1 root root 20569762 Dec 1 08:41 ansible.tar.gz [root@ansible opt]# tar -xzvf ansible.tar.gz [root@ansible opt]# cd ansible [root@ansible ansible]# ls packages repodata #文件内容为yum内容,所以配置yum源进行安装 #如果为tar包安装,则解压后,用python setup.py install安装 [root@ansible ansible]# mv /etc/yum.repos.d/CentOS-* /home/ [root@ansible ansible]# cat << EOF >> /etc/yum.repos.d/http.repo > [ansible] > name=ansible > baseurl=file:///opt/ansible > gpgcheck=0 > enable=1 > EOF [root@ansible ansible]# cat /etc/yum.repos.d/http.repo [ansible] name=ansible baseurl=file:///opt/ansible gpgcheck=0 enable=1 [root@ansible ansible]# yum clean all Loaded plugins: fastestmirror Cleaning repos: ansible Cleaning up everything Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos Cleaning up list of fastest mirrors [root@ansible ansible]# yum repolist Loaded plugins: fastestmirror Determining fastest mirrors ansible | 2.9 kB 00:00:00 ansible/primary_db | 13 kB 00:00:00 …… repolist: 22 [root@ansible ansible]# yum install -y ansible [root@ansible ~]# ansible --version ansible 2.9.10 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]配置ansible节点无秘钥连接controller节点和compute节点,配置完成后并完成ssh连接两个节点的hostname进行测试。配置ansible密钥ansible[root@ansible ~]# ssh-keygen #一路回车 Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Created directory '/root/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: SHA256:tdFAPC6wy10HEKzH5ObUPgVEkPrqjdFXkc/s1Pf+dSw root@ansible The key's randomart image is: +---[RSA 2048]----+ | .+X= | | . + =o . | | O oo++ | | + B.+oo= . | | . OS+.o. = o| | o.+ o. o .o| | ... .. E =| | .+ . oo| | .o . +| +----[SHA256]-----+配置无密钥连接ansible[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub controller /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host 'controller (172.16.1.121)' can't be established. ECDSA key fingerprint is SHA256:AeSm2G5M7LRpROfAHLBKE3tgheRyzXnppsEZ9MmnYNc. ECDSA key fingerprint is MD5:05:54:c3:4d:f7:67:19:44:3d:13:49:90:e4:7d:0d:e1. Are you sure you want to continue connecting (yes/no)? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@controller's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'controller'" and check to make sure that only the key(s) you wanted were added. [root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub compute /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host 'compute (172.16.1.122)' can't be established. ECDSA key fingerprint is SHA256:SpaLUh/Px8EEyBULW0ts3jNP87XfAFIjn2ehzbUxUvk. ECDSA key fingerprint is MD5:23:9a:c7:71:53:25:bc:41:07:25:b5:d7:ee:78:40:40. Are you sure you want to continue connecting (yes/no)? 
yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@compute's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'compute'" and check to make sure that only the key(s) you wanted were added. #测试连接controller [root@ansible ~]# ssh controller Last login: Mon Dec 6 16:48:15 2021 from 172.16.1.101 [root@controller ~]# #测试连接compute [root@ansible ~]# ssh compute Last login: Mon Dec 6 16:32:03 2021 from 172.16.1.101 [root@compute ~]# 在ansible节点配置ansible的hosts文件,要求创建两个组分别为controller和compute,controller组下主机节点为controller节点;compute组下主机节点为compute。ansible#备份hosts文件 [root@ansible ansible]# ls ansible.cfg hosts roles [root@ansible ansible]# cp hosts hosts.backup [root@ansible ansible]# ls ansible.cfg hosts hosts.backup roles #修改hosts文件 [root@ansible ansible]# echo [controller] >> /etc/ansible/hosts [root@ansible ansible]# echo controller >> /etc/ansible/hosts [root@ansible ansible]# echo [compute] >> /etc/ansible/hosts [root@ansible ansible]# echo compute >> /etc/ansible/hosts [root@ansible ansible]# ansible all -m ping -o [WARNING]: Found both group and host with same name: controller [WARNING]: Found both group and host with same name: compute compute | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "pong"} controller | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "pong"}在compute节点上利用空白分区划分2个20G分区compute[root@compute ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sr0 11:0 1 4.2G 0 rom vda 252:0 0 100G 0 disk ├─vda1 252:1 0 1G 0 part /boot └─vda2 252:2 0 99G 0 part ├─centos-root 253:0 0 93G 0 lvm / ├─centos-swap 253:1 0 1G 0 lvm [SWAP] └─centos-home 253:2 0 5G 0 lvm /home vdb 252:16 0 200G 0 disk [root@compute ~]# parted /dev/vdb GNU Parted 3.1 Using /dev/vdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) mklabel gpt (parted) mkpart swift File system type? [ext2]? Start? 0Gib End? 100Gib Warning: You requested a partition from 0.00B to 107GB (sectors 0..209715199). The closest location we can manage is 17.4kB to 107GB (sectors 34..209715199). Is this still acceptable to you? Yes/No? yes Warning: The resulting partition is not properly aligned for best performance. Ignore/Cancel? i (parted) mkpart cinder File system type? [ext2]? Start? 100Gib End? 199Gib (parted) p Model: Virtio Block Device (virtblk) Disk /dev/vdb: 215GB Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 1 17.4kB 107GB 107GB swift 2 107GB 214GB 106GB cinder (parted) q Information: You may need to update /etc/fstab. 
[root@compute ~]# mkfs.xfs /dev/vdb1 meta-data=/dev/vdb1 isize=512 agcount=4, agsize=6553599 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0, sparse=0 data = bsize=4096 blocks=26214395, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=1 log =internal log bsize=4096 blocks=12799, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 [root@compute ~]# mkfs.xfs /dev/vdb2 meta-data=/dev/vdb2 isize=512 agcount=4, agsize=6488064 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0, sparse=0 data = bsize=4096 blocks=25952256, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=1 log =internal log bsize=4096 blocks=12672, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0使用提供的openstack_ansible.tar.gz项目包解压至ansible节点的/opt目录下,然后编辑roles目录下init/tasks中的main.yaml;编辑group_vars目录下的all文件(openstack中的密码都设置为000000);编辑install_openstack.yaml文件,要求执行install_openstack.yaml文件可以在controller节点和compute节点执行init这个role来安装iaas-pre-host。(考试系统会进入你的ansible节点来执行install_openstack.yaml,请确保你的环境处于正确的可执行状态)。ansible#新建并配置ansible的yum源文件 [root@ansible ansible]# vi /opt/http.repo [centos] name=centos baseurl=ftp://172.16.1.101/centos/ gpgcheck=0 enable=1 [iaas] name=iaas baseurl=ftp://172.16.1.101/iaas/iaas-repo/ gpgcheck=0 enable=1 [paas] name=paas baseurl=ftp://172.16.1.101/paas/kubernetes-repo/ gpgcheck=0 enable=1 #删除所有被控节点的yum源文件 [root@ansible ansible]# ansible all -m shell -a "rm -rf /etc/yum.repos.d/*" [WARNING]: Consider using the file module with state=absent rather than running 'rm'. If you need to use command because file is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message. 172.16.1.122 | CHANGED | rc=0 >> 172.16.1.121 | CHANGED | rc=0 >> #将ansible的yum源文件使用copy模块拷贝到各节点 #使用ansible-doc查看模块参数 [root@ansible ansible]# ansible-doc -s copy [root@ansible ansible]# ansible all -m copy -a "src=/opt/http.repo dest=/etc/yum.repos.d/http.repo" 172.16.1.121 | CHANGED => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python" }, "changed": true, "checksum": "2d511284516642e4246fba1aadb183cdb9c32034", "dest": "/etc/yum.repos.d/http.repo", "gid": 0, "group": "root", "md5sum": "1e525cb10b2c07b82415fd11aaba9636", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:system_conf_t:s0", "size": 244, "src": "/root/.ansible/tmp/ansible-tmp-1638788844.33-1860-220661655967063/source", "state": "file", "uid": 0 } 172.16.1.122 | CHANGED => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python" }, "changed": true, "checksum": "2d511284516642e4246fba1aadb183cdb9c32034", "dest": "/etc/yum.repos.d/http.repo", "gid": 0, "group": "root", "md5sum": "1e525cb10b2c07b82415fd11aaba9636", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:system_conf_t:s0", "size": 244, "src": "/root/.ansible/tmp/ansible-tmp-1638788844.32-1858-252113756740654/source", "state": "file", "uid": 0 } # 清除yum源缓存,查看是否配置成功 [root@ansible ansible]# ansible all -m shell -a "yum clean all && yum repolist" # 编写
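The requirement above is that running install_openstack.yaml applies an init role to the controller and compute groups so that iaas-pre-host gets installed on both nodes. A rough sketch of what such a playbook and role task could look like follows; the file contents are assumptions, and the package and script names (iaas-xiandian, iaas-pre-host.sh) are guesses about the provided IaaS repository rather than anything taken from the post.

# install_openstack.yaml — hypothetical sketch, not the original file
- hosts: controller:compute
  remote_user: root
  roles:
    - init

# roles/init/tasks/main.yaml — hypothetical sketch
- name: install the IaaS base package (package name assumed)
  yum:
    name: iaas-xiandian
    state: present

- name: run the pre-host setup script (script name assumed)
  shell: iaas-pre-host.sh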