首页
留言
Search
1
在Centos7下搭建Socks5代理服务器
1,035 阅读
2
在windows11通过Zip安装Mysql5.7
574 阅读
3
Mysql5.7开放远程登录
482 阅读
4
数据库
469 阅读
5
mysql5.7基本命令
377 阅读
综合
正则表达式
git
系统
centos7
ubuntu
kali
Debian
网络
socks5
wireguard
运维
docker
hadoop
kubernetes
hive
openstack
ElasticSearch
ansible
前端
三剑客
Python
Python3
selenium
Flask
PHP
PHP基础
ThinkPHP
游戏
我的世界
算法
递归
排序
查找
软件
ide
Xshell
vim
PicGo
Typora
云盘
安全
靶场
reverse
Java
JavaSE
Spring
MyBatis
C++
QT
数据库
mysql
登录
Search
标签搜索
java
centos7
linux
centos
html5
JavaScript
php
css3
mysql
spring
mysql5.7
linux全栈
ubuntu
BeanFactory
SpringBean
python
python3
ApplicationContext
kali
mysql8.0
我亏一点
累计撰写
139
篇文章
累计收到
8
条评论
首页
栏目
综合
正则表达式
git
系统
centos7
ubuntu
kali
Debian
网络
socks5
wireguard
运维
docker
hadoop
kubernetes
hive
openstack
ElasticSearch
ansible
前端
三剑客
Python
Python3
selenium
Flask
PHP
PHP基础
ThinkPHP
游戏
我的世界
算法
递归
排序
查找
软件
ide
Xshell
vim
PicGo
Typora
云盘
安全
靶场
reverse
Java
JavaSE
Spring
MyBatis
C++
QT
数据库
mysql
页面
留言
搜索到
5
篇与
openstack
的结果
2023-02-09
ansible 脚本搭建国基北盛openstack
1.openstack搭建基础信息主机名外网IP内网IPcontroller172.16.1.12110.10.10.121compute172.16.1.12210.10.10.122ansible172.16.1.123无搭建方式一使用提供的用户名密码,登录提供的OpenStack私有云平台,自行使用CentOS7.5镜像创建两台云主机,flavor使用4v_8G_100G_50G的配置,第一张网卡使用提供的网络,第二张网卡使用的网络自行创建(网段为10.10.X.0/24,X为工位号)。创建完云主机后确保网络正常通信,然后按以下要求配置服务器:设置控制节点主机名为controller,设置计算节点主机名为compute;controller[root@localhost ~]# hostnamectl set-hostname controller [root@localhost ~]# bash [root@controller ~]#- compute [root@localhost ~]# hostnamectl set-hostname compute [root@localhost ~]# bash [root@compute ~]# 修改hosts文件将IP地址映射为主机名controller[root@controller ~]# echo 172.16.1.121 controller >> /etc/hosts [root@controller ~]# echo 172.16.1.122 compute >> /etc/hosts [root@controller ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute- compute [root@compute ~]# echo 172.16.1.121 controller >> /etc/hosts [root@compute ~]# echo 172.16.1.122 compute >> /etc/hosts [root@compute ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute使用提供的用户名密码,登录提供的OpenStack私有云平台,自行使用CentOS7.5镜像创建一台云主机,flavor使用2v_4G_50G的配置,使用单网卡。启动后使用提供的ansible.tar.gz软件包在这个节点上安装ansible服务并配置ansible节点与controller、compute节点的hosts主机名映射。修改主机名ansible[root@localhost ~]# hostnamectl set-hostname ansible [root@localhost ~]# bash [root@ansible ~]#配置hosts主机名映射ansible[root@ansible ~]# echo 172.16.1.121 controller >> /etc/hosts [root@ansible ~]# echo 172.16.1.122 compute >> /etc/hosts [root@ansible ~]# echo 172.16.1.123 ansible >> /etc/hosts [root@ansible ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute 172.16.1.123 ansible- controller [root@controller ~]# echo 172.16.1.123 ansible >> /etc/hosts [root@controller ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute 172.16.1.123 ansible- compute [root@compute ~]# echo 172.16.1.123 ansible >> /etc/hosts [root@compute ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 172.16.1.121 controller 172.16.1.122 compute 172.16.1.123 ansible使用ansible.tar.gz软件包安装ansibleansible[root@ansible opt]# ls -al | grep ansible.tar.gz -rw-r--r--. 1 root root 20569762 Dec 1 08:41 ansible.tar.gz [root@ansible opt]# tar -xzvf ansible.tar.gz [root@ansible opt]# cd ansible [root@ansible ansible]# ls packages repodata #文件内容为yum内容,所以配置yum源进行安装 #如果为tar包安装,则解压后,用python setup.py install安装 [root@ansible ansible]# mv /etc/yum.repos.d/CentOS-* /home/ [root@ansible ansible]# cat << EOF >> /etc/yum.repos.d/http.repo > [ansible] > name=ansible > baseurl=file:///opt/ansible > gpgcheck=0 > enable=1 > EOF [root@ansible ansible]# cat /etc/yum.repos.d/http.repo [ansible] name=ansible baseurl=file:///opt/ansible gpgcheck=0 enable=1 [root@ansible ansible]# yum clean all Loaded plugins: fastestmirror Cleaning repos: ansible Cleaning up everything Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos Cleaning up list of fastest mirrors [root@ansible ansible]# yum repolist Loaded plugins: fastestmirror Determining fastest mirrors ansible | 2.9 kB 00:00:00 ansible/primary_db | 13 kB 00:00:00 …… repolist: 22 [root@ansible ansible]# yum install -y ansible [root@ansible ~]# ansible --version ansible 2.9.10 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]配置ansible节点无秘钥连接controller节点和compute节点,配置完成后并完成ssh连接两个节点的hostname进行测试。配置ansible密钥ansible[root@ansible ~]# ssh-keygen #一路回车 Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Created directory '/root/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: SHA256:tdFAPC6wy10HEKzH5ObUPgVEkPrqjdFXkc/s1Pf+dSw root@ansible The key's randomart image is: +---[RSA 2048]----+ | .+X= | | . + =o . | | O oo++ | | + B.+oo= . | | . OS+.o. = o| | o.+ o. o .o| | ... .. E =| | .+ . oo| | .o . +| +----[SHA256]-----+配置无密钥连接ansible[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub controller /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host 'controller (172.16.1.121)' can't be established. ECDSA key fingerprint is SHA256:AeSm2G5M7LRpROfAHLBKE3tgheRyzXnppsEZ9MmnYNc. ECDSA key fingerprint is MD5:05:54:c3:4d:f7:67:19:44:3d:13:49:90:e4:7d:0d:e1. Are you sure you want to continue connecting (yes/no)? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@controller's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'controller'" and check to make sure that only the key(s) you wanted were added. [root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub compute /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host 'compute (172.16.1.122)' can't be established. ECDSA key fingerprint is SHA256:SpaLUh/Px8EEyBULW0ts3jNP87XfAFIjn2ehzbUxUvk. ECDSA key fingerprint is MD5:23:9a:c7:71:53:25:bc:41:07:25:b5:d7:ee:78:40:40. Are you sure you want to continue connecting (yes/no)? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@compute's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'compute'" and check to make sure that only the key(s) you wanted were added. #测试连接controller [root@ansible ~]# ssh controller Last login: Mon Dec 6 16:48:15 2021 from 172.16.1.101 [root@controller ~]# #测试连接compute [root@ansible ~]# ssh compute Last login: Mon Dec 6 16:32:03 2021 from 172.16.1.101 [root@compute ~]# 在ansible节点配置ansible的hosts文件,要求创建两个组分别为controller和compute,controller组下主机节点为controller节点;compute组下主机节点为compute。ansible#备份hosts文件 [root@ansible ansible]# ls ansible.cfg hosts roles [root@ansible ansible]# cp hosts hosts.backup [root@ansible ansible]# ls ansible.cfg hosts hosts.backup roles #修改hosts文件 [root@ansible ansible]# echo [controller] >> /etc/ansible/hosts [root@ansible ansible]# echo controller >> /etc/ansible/hosts [root@ansible ansible]# echo [compute] >> /etc/ansible/hosts [root@ansible ansible]# echo compute >> /etc/ansible/hosts [root@ansible ansible]# ansible all -m ping -o [WARNING]: Found both group and host with same name: controller [WARNING]: Found both group and host with same name: compute compute | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "pong"} controller | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "pong"}在compute节点上利用空白分区划分2个20G分区compute[root@compute ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sr0 11:0 1 4.2G 0 rom vda 252:0 0 100G 0 disk ├─vda1 252:1 0 1G 0 part /boot └─vda2 252:2 0 99G 0 part ├─centos-root 253:0 0 93G 0 lvm / ├─centos-swap 253:1 0 1G 0 lvm [SWAP] └─centos-home 253:2 0 5G 0 lvm /home vdb 252:16 0 200G 0 disk [root@compute ~]# parted /dev/vdb GNU Parted 3.1 Using /dev/vdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) mklabel gpt (parted) mkpart swift File system type? [ext2]? Start? 0Gib End? 100Gib Warning: You requested a partition from 0.00B to 107GB (sectors 0..209715199). The closest location we can manage is 17.4kB to 107GB (sectors 34..209715199). Is this still acceptable to you? Yes/No? yes Warning: The resulting partition is not properly aligned for best performance. Ignore/Cancel? i (parted) mkpart cinder File system type? [ext2]? Start? 100Gib End? 199Gib (parted) p Model: Virtio Block Device (virtblk) Disk /dev/vdb: 215GB Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 1 17.4kB 107GB 107GB swift 2 107GB 214GB 106GB cinder (parted) q Information: You may need to update /etc/fstab. [root@compute ~]# mkfs.xfs /dev/vdb1 meta-data=/dev/vdb1 isize=512 agcount=4, agsize=6553599 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0, sparse=0 data = bsize=4096 blocks=26214395, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=1 log =internal log bsize=4096 blocks=12799, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 [root@compute ~]# mkfs.xfs /dev/vdb2 meta-data=/dev/vdb2 isize=512 agcount=4, agsize=6488064 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0, sparse=0 data = bsize=4096 blocks=25952256, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=1 log =internal log bsize=4096 blocks=12672, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0使用提供的openstack_ansible.tar.gz项目包解压至ansible节点的/opt目录下,然后编辑roles目录下init/tasks中的main.yaml;编辑group_vars目录下的all文件(openstack中的密码都设置为000000);编辑install_openstack.yaml文件,要求执行install_openstack.yaml文件可以在controller节点和compute节点执行init这个role来安装iaas-pre-host。(考试系统会进入你的ansible节点来执行install_openstack.yaml,请确保你的环境处于正确的可执行状态)。ansible#新建并配置ansible的yum源文件 [root@ansible ansible]# vi /opt/http.repo [centos] name=centos baseurl=ftp://172.16.1.101/centos/ gpgcheck=0 enable=1 [iaas] name=iaas baseurl=ftp://172.16.1.101/iaas/iaas-repo/ gpgcheck=0 enable=1 [paas] name=paas baseurl=ftp://172.16.1.101/paas/kubernetes-repo/ gpgcheck=0 enable=1 #删除所有被控节点的yum源文件 [root@ansible ansible]# ansible all -m shell -a "rm -rf /etc/yum.repos.d/*" [WARNING]: Consider using the file module with state=absent rather than running 'rm'. If you need to use command because file is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message. 172.16.1.122 | CHANGED | rc=0 >> 172.16.1.121 | CHANGED | rc=0 >> #将ansible的yum源文件使用copy模块拷贝到各节点 #使用ansible-doc查看模块参数 [root@ansible ansible]# ansible-doc -s copy [root@ansible ansible]# ansible all -m copy -a "src=/opt/http.repo dest=/etc/yum.repos.d/http.repo" 172.16.1.121 | CHANGED => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python" }, "changed": true, "checksum": "2d511284516642e4246fba1aadb183cdb9c32034", "dest": "/etc/yum.repos.d/http.repo", "gid": 0, "group": "root", "md5sum": "1e525cb10b2c07b82415fd11aaba9636", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:system_conf_t:s0", "size": 244, "src": "/root/.ansible/tmp/ansible-tmp-1638788844.33-1860-220661655967063/source", "state": "file", "uid": 0 } 172.16.1.122 | CHANGED => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python" }, "changed": true, "checksum": "2d511284516642e4246fba1aadb183cdb9c32034", "dest": "/etc/yum.repos.d/http.repo", "gid": 0, "group": "root", "md5sum": "1e525cb10b2c07b82415fd11aaba9636", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:system_conf_t:s0", "size": 244, "src": "/root/.ansible/tmp/ansible-tmp-1638788844.32-1858-252113756740654/source", "state": "file", "uid": 0 } # 清除yum源缓存,查看是否配置成功 [root@ansible ansible]# ansible all -m shell -a "yum clean all && yum repolist" # 编写
2023年02月09日
179 阅读
0 评论
0 点赞
2023-02-09
openstack 题目学习记录
在自行搭建的OpenStack平台上,使用命令创建一个名为Fmin,ID为1,内存为1024MB,磁盘为10GB,vcpu数量为1的云主机类型[root@controller ~]# openstack flavor create --vcpus 1 --disk 10 --ram 1024 --id 1 Fmin +----------------------------+-------+ | Field | Value | +----------------------------+-------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 10 | | id | 1 | | name | Fmin | | os-flavor-access:is_public | True | | properties | | | ram | 1024 | | rxtx_factor | 1.0 | | swap | | | vcpus | 1 | +----------------------------+-------+在自行搭建的OpenStack平台上,创建云主机网络extnet,子网extsubnet,虚拟机网段为192.168.100.0/24,网关为192.168.100.1,段ID默认写100,网络使用vlan模式。[root@controller ~]# openstack network create extnet --external --provider-network-type vlan --provider-physical-network provider --provider-segment 100 +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | | | created_at | 2021-12-14T07:02:14Z | | description | | | dns_domain | None | | id | 4a8a40a5-e628-4149-b2c3-6b7edfcd96a2 | | ipv4_address_scope | None | | ipv6_address_scope | None | | is_default | False | | is_vlan_transparent | None | | mtu | 1500 | | name | extnet | | port_security_enabled | True | | project_id | d33ead0cc8224ee9ad0d3b65f56c0ba5 | | provider:network_type | vlan | | provider:physical_network | provider | | provider:segmentation_id | 100 | | qos_policy_id | None | | revision_number | 5 | | router:external | External | | segments | None | | shared | False | | status | ACTIVE | | subnets | | | tags | | | updated_at | 2021-12-14T07:02:14Z | +---------------------------+--------------------------------------+ [root@controller ~]# openstack subnet create --network extnet --gateway 192.168.100.1 --dhcp --subnet-range 192.168.100.0/24 extsubnet +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 192.168.100.2-192.168.100.254 | | cidr | 192.168.100.0/24 | | created_at | 2021-12-14T07:04:00Z | | description | | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | 192.168.100.1 | | host_routes | | | id | e51d4693-cd0f-45f0-83cf-2176c4fa850a | | ip_version | 4 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | extsubnet | | network_id | 4a8a40a5-e628-4149-b2c3-6b7edfcd96a2 | | project_id | d33ead0cc8224ee9ad0d3b65f56c0ba5 | | revision_number | 0 | | segment_id | None | | service_types | | | subnetpool_id | None | | tags | | | updated_at | 2021-12-14T07:04:00Z | +-------------------+--------------------------------------+在自行搭建的OpenStack平台上,基于“cirros”镜像、1vCPU/1G /10G的flavor、extsubnet的网络,创建一台虚拟机VM1,启动VM1[root@controller images]# ls CentOS_6.5_x86_64_XD.qcow2 CentOS7_1804.tar CentOS_7.2_x86_64_XD.qcow2 CentOS_7.5_x86_64.qcow2 CentOS_7.5_x86_64_XD.qcow2 cirros-0.3.4-x86_64-disk.img [root@controller images]# openstack image create --file cirros-0.3.4-x86_64-disk.img --disk-format raw cirros +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | ee1eca47dc88f4879d8a229cc70a07c6 | | container_format | bare | | created_at | 2021-12-14T07:09:00Z | | disk_format | raw | | file | /v2/images/42b3009d-79a8-4b13-a35a-875479900b40/file | | id | 42b3009d-79a8-4b13-a35a-875479900b40 | | min_disk | 0 | | min_ram | 0 | | name | cirros | | owner | d33ead0cc8224ee9ad0d3b65f56c0ba5 | | protected | False | | schema | /v2/schemas/image | | size | 13287936 | | status | active | | tags | | | updated_at | 2021-12-14T07:09:00Z | | virtual_size | None | | visibility | shared | +------------------+------------------------------------------------------+ [root@controller images]# openstack server create --image cirros --flavor Fmin --network extnet VM1 +-------------------------------------+-----------------------------------------------+ | Field | Value | +-------------------------------------+-----------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-STS:power_state | NOSTATE | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | None | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | adminPass | 7uEFyyfowamF | | config_drive | | | created | 2021-12-14T07:12:04Z | | flavor | Fmin (1) | | hostId | | | id | b4fb14e5-a55a-43ed-894e-87ce8d8fd250 | | image | cirros (42b3009d-79a8-4b13-a35a-875479900b40) | | key_name | None | | name | VM1 | | progress | 0 | | project_id | d33ead0cc8224ee9ad0d3b65f56c0ba5 | | properties | | | security_groups | name='default' | | status | BUILD | | updated | 2021-12-14T07:12:04Z | | user_id | feed7e464fb446188de23b147498ebcf | | volumes_attached | | +-------------------------------------+-----------------------------------------------+在 openstack 私有云平台上,基于 CentOS7_1804.tar 的docker镜像,使用命令创建一个名为 centos7.5-docker 的镜像,并且通过docker镜像启动容器。[root@controller images]# ls | grep CentOS7_1804.tar CentOS7_1804.tar [root@controller images]# openstack image create --file CentOS7_1804.tar --disk-format raw --container-format docker centos7.5-docker +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | 438e76cdb677a3ab1156e284f58aa366 | | container_format | docker | | created_at | 2021-12-02T03:17:05Z | | disk_format | raw | | file | /v2/images/c776ae1f-90b9-4a7f-b3fa-67f2cf2b5b00/file | | id | c776ae1f-90b9-4a7f-b3fa-67f2cf2b5b00 | | min_disk | 0 | | min_ram | 0 | | name | centos7.5-docker | | owner | db2a714c481643e5ad18a30967c243aa | | protected | False | | schema | /v2/schemas/image | | size | 381696512 | | status | active | | tags | | | updated_at | 2021-12-02T03:17:06Z | | virtual_size | None | | visibility | shared | +------------------+------------------------------------------------------+ [root@controller images]# openstack image list +--------------------------------------+------------------+--------+ | ID | Name | Status | +--------------------------------------+------------------+--------+ | 522259d3-20de-4f58-87ec-1422c87e6fe6 | centos_docker | active | +--------------------------------------+------------------+--------+ [root@controller images]# zun run --image-driver glance centos_docker [root@controller images]# zun list +--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+ | uuid | name | image | status | task_state | addresses | ports | +--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+ | ed1334ce-448b-4645-9d27-05e24259c171 | sigma-23-container | centos_docker | Running | None | 192.168.100.22 | [22] | +--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+在自己搭建的OpenStack平台上,将云主机VM1保存为qcow2格式的快照并保存到controller节点/root/cloudsave目录下,保存名字为csccvm.qcow2[root@controller ~]# openstack server list +--------------------------------------+------+--------+----------------------+--------+--------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+------+--------+----------------------+--------+--------+ | b4fb14e5-a55a-43ed-894e-87ce8d8fd250 | VM1 | ACTIVE | extnet=192.168.100.4 | cirros | Fmin | +--------------------------------------+------+--------+----------------------+--------+--------+ [root@controller ~]# openstack server stop VM1 [root@controller ~]# openstack server image create --name csccvm.qcow2 VM1 [root@controller ~]# openstack image list +--------------------------------------+--------------+--------+ | ID | Name | Status | +--------------------------------------+--------------+--------+ | aca7ee52-51b6-4f09-b6ab-993eba815149 | Gmirror1 | active | | 42b3009d-79a8-4b13-a35a-875479900b40 | cirros | active | | 06e10537-4af8-49fa-bda0-6635012bdeb2 | csccvm.qcow2 | active | +--------------------------------------+--------------+--------+ [root@controller ~]# mkdir /root/cloudsave [root@controller ~]# openstack image save --file /root/cloudsave/csccvm.qcow2 csccvm.qcow2 [root@controller ~]# ls /root/cloudsave/ csccvm.qcow2在自己搭建的OpenStack平台上,使用cinder服务,创建一个名为“lvm”的卷类型,创建1块卷类型为lvm的1G云盘,并附加到虚拟机VM1上[root@controller ~]# openstack volume type create lvm +-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | description | None | | id | 97504eed-9fd5-4fc0-bd2c-5e2101c320c2 | | is_public | True | | name | lvm | +-------------+--------------------------------------+ [root@controller ~]# openstack volume create --type lvm --size 1 lvm +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2021-12-14T07:32:59.000000 | | description | None | | encrypted | False | | id | 5dbfb3da-4799-4bf9-9d70-fce503f51e44 | | migration_status | None | | multiattach | False | | name | lvm | | properties | | | replication_status | None | | size | 1 | | snapshot_id | None | | source_volid | None | | status | creating | | type | lvm | | updated_at | None | | user_id | feed7e464fb446188de23b147498ebcf | +---------------------+--------------------------------------+ [root@controller ~]# openstack server add volume VM1 lvm登录提供的私有云平台,创建一台centos7.5的云主机,flavor使用带有附加硬盘的类型。连接到该云主机,使用附加的硬盘,划分4个10G的分区,使用这4个分区创建一个raid5级别的磁盘阵列,其中1个分区作为热备盘[root@controller api]# openstack volume create --size 40 raid +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2021-12-18T06:13:48.000000 | | description | None | | encrypted | False | | id | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | | migration_status | None | | multiattach | False | | name | raid | | properties | | | replication_status | None | | size | 40 | | snapshot_id | None | | source_volid | None | | status | creating | | type | None | | updated_at | None | | user_id | 32a5404a9ca14a09ba0f12ae34c7a079 | +---------------------+--------------------------------------+ [root@controller api]# openstack volume list +--------------------------------------+------+-----------+------+-------------+ | ID | Name | Status | Size | Attached to | +--------------------------------------+------+-----------+------+-------------+ | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | available | 40 | | +--------------------------------------+------+-----------+------+-------------+ [root@controller api]# openstack server add volume chinaskill raid #分区 #使用parted工具分4个10g的硬盘,格式为xfs #配置centos的yum源 #安装mdadm工具 [root@chinaskill ~]# yum install -y mdadm [root@chinaskill dev]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 253:0 0 20G 0 disk └─vda1 253:1 0 20G 0 part / vdb 253:16 0 40G 0 disk ├─vdb1 253:17 0 10G 0 part ├─vdb2 253:18 0 10G 0 part ├─vdb3 253:19 0 10G 0 part └─vdb4 253:20 0 9G 0 part # -C 创建磁盘阵列 # -v 细节 # -l raid数 # -n 磁盘数量 # -x 热备数量 [root@chinaskill dev]# mdadm -C -v /dev/md0 -l5 -n3 /dev/vdb[123] -x1 /dev/vdb4 mdadm: layout defaults to left-symmetric mdadm: layout defaults to left-symmetric mdadm: chunk size defaults to 512K mdadm: size set to 9427968K mdadm: largest drive (/dev/vdb2) exceeds size (9427968K) by more than 1% Continue creating array? y mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started. [root@chinaskill dev]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 253:0 0 20G 0 disk └─vda1 253:1 0 20G 0 part / vdb 253:16 0 40G 0 disk ├─vdb1 253:17 0 10G 0 part │ └─md0 9:0 0 18G 0 raid5 ├─vdb2 253:18 0 10G 0 part │ └─md0 9:0 0 18G 0 raid5 ├─vdb3 253:19 0 10G 0 part │ └─md0 9:0 0 18G 0 raid5 └─vdb4 253:20 0 9G 0 part └─md0 9:0 0 18G 0 raid5 [root@chinaskill dev]# mkfs.xfs /dev/md0 meta-data=/dev/md0 isize=512 agcount=16, agsize=294528 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0, sparse=0 data = bsize=4096 blocks=4712448, imaxpct=25 = sunit=128 swidth=256 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=1 log =internal log bsize=4096 blocks=2560, version=2 = sectsz=512 sunit=8 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0在自行搭建的OpenStack平台上,对cinder存储空间进行扩容操作,要求将cinder存储空间扩容10G[root@controller api]# openstack volume list +--------------------------------------+------+-----------+------+-------------------------------------+ | ID | Name | Status | Size | Attached to | +--------------------------------------+------+-----------+------+-------------------------------------+ | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | in-use | 40 | Attached to chinaskill on /dev/vdb | | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available | 2 | | +--------------------------------------+------+-----------+------+-------------------------------------+ #未使用(available)盘扩容 [root@controller api]# openstack volume set --size 10 disk [root@controller api]# openstack volume list +--------------------------------------+------+-----------+------+-------------------------------------+ | ID | Name | Status | Size | Attached to | +--------------------------------------+------+-----------+------+-------------------------------------+ | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | in-use | 40 | Attached to chinaskill on /dev/vdb | | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available | 10 | | +--------------------------------------+------+-----------+------+-------------------------------------+ #使用(in-use)盘扩容 [root@controller api]# openstack volume list +--------------------------------------+------+-----------+------+-------------------------------------+ | ID | Name | Status | Size | Attached to | +--------------------------------------+------+-----------+------+-------------------------------------+ | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | in-use | 40 | Attached to chinaskill on /dev/vdb | | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available | 10 | | +--------------------------------------+------+-----------+------+-------------------------------------+ [root@controller api]# openstack server remove volume chinaskill raid [root@controller api]# openstack volume list +--------------------------------------+------+-----------+------+-------------+ | ID | Name | Status | Size | Attached to | +--------------------------------------+------+-----------+------+-------------+ | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | available | 40 | | | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available | 10 | | +--------------------------------------+------+-----------+------+-------------+ [root@controller api]# openstack volume set --size 45 raid [root@controller api]# openstack volume list +--------------------------------------+------+-----------+------+-------------+ | ID | Name | Status | Size | Attached to | +--------------------------------------+------+-----------+------+-------------+ | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | available | 45 | | | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available | 10 | | +--------------------------------------+------+-----------+------+-------------+ [root@controller api]# openstack server add volume chinaskill raid使用提供的云安全框架组件,将自行搭建的OpenStack云平台的安全策略从http优化至https。[root@controller /]# yum list | grep mod mod_wsgi.x86_64 3.4-18.el7 @iaas mod_ssl.x86_64 1:2.4.6-89.el7.centos iaas [root@controller /]# yum install -y httpd mod_ssl mod_wsgi [root@controller ~]# vim /etc/httpd/conf.d/ssl.conf #########修改前######### SSLProtocol all -SSLv2 -SSLv3 ####################### #########修改后######### SSLProtocol all -SSLv2 ####################### [root@controller ~]# vim /etc/openstack-dashboard/local_settings #########添加内容######### USE_SSL=True #加这一句后,安全策略已经从http优化至https了。 CSRF_COOKIE_SECURE = True #将该行的注释取消,做不做都无所谓 SESSION_COOKIE_SECURE = True #将该行的注释取消,做不做都无所谓 SESSION_COOKIE_HTTPONLY = True #添加该行,做不做都无所谓 ######################## [root@controller ~]# systemctl restart httpd [root@controller ~]# systemctl restart mecached在自行搭建的OpenStack平台上,使用glance相关命令上传镜像,镜像源为CentOS_7.5_x86_64.qcow2,名为Gmirror1,min _ram为2048M,min_disk为20G。[root@controller images]# ls | grep CentOS_7.5_x86_64.qcow2 CentOS_7.5_x86_64.qcow2 [root@controller images]# source /etc/keystone/admin-openrc.sh [root@controller images]# openstack image create --min-ram 2048 --min-disk 20 --file CentOS_7.5_x86_64.qcow2 --disk-format qcow2 Gmirror1 +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | 3d3e9c954351a4b6953fd156f0c29f5c | | container_format | bare | | created_at | 2021-12-13T07:29:59Z | | disk_format | qcow2 | | file | /v2/images/5afddf53-d0d1-476a-8aa7-d800656a19e7/file | | id | 5afddf53-d0d1-476a-8aa7-d800656a19e7 | | min_disk | 20 | | min_ram | 2048 | | name | Gmirror1 | | owner | f36eeb24e1304f90b65e189a2c3f42b5 | | protected | False | | schema | /v2/schemas/image | | size | 510459904 | | status | active | | tags | | | updated_at | 2021-12-13T07:30:00Z | | virtual_size | None | | visibility | shared | +------------------+------------------------------------------------------+ [root@controller images]# openstack image list +--------------------------------------+----------+--------+ | ID | Name | Status | +--------------------------------------+----------+--------+ | 5afddf53-d0d1-476a-8aa7-d800656a19e7 | Gmirror1 | active | +--------------------------------------+----------+--------+使用qemu-img相关命令,查询Gmirror1镜像的compat版本,然后将Gmirror1镜像的compat版本修改为0.10(该操作是为了适配某些低版本的云平台)。#openstack的默认镜像目录在/var/lib/glance/images/ [root@controller images]# cd /var/lib/glance/images/ [root@controller images]# ls aca7ee52-51b6-4f09-b6ab-993eba815149 [root@controller images]# qemu-img info aca7ee52-51b6-4f09-b6ab-993eba815149 image: aca7ee52-51b6-4f09-b6ab-993eba815149 file format: qcow2 virtual size: 20G (21474836480 bytes) disk size: 487M cluster_size: 65536 Format specific information: compat: 1.1 lazy refcounts: false refcount bits: 16 corrupt: false #通过帮助命令查询修改命令 [root@controller images]# qemu-img --h amend [--object objectdef] [--image-opts] [-p] [-q] [-f fmt] [-t cache] -o options filename [root@controller images]# qemu-img amend aca7ee52-51b6-4f09-b6ab-993eba815149 -o compat=0.10 [root@controller images]# qemu-img info aca7ee52-51b6-4f09-b6ab-993eba815149 image: aca7ee52-51b6-4f09-b6ab-993eba815149 file format: qcow2 virtual size: 20G (21474836480 bytes) disk size: 487M cluster_size: 65536 Format specific information: compat: 0.10 refcount bits: 16在自行搭建的OpenStack平台上,通过修改相关参数对openstack平台进行调优操作,相应的调优操作有:# nova调优配置文件/etc/nova/nova.conf [DEFAULT] vcpu_pin_set = 4-12,^8,15 #建议值是预留前几个物理 CPU,把后面的所有 CPU 分配给虚拟机使用,所有实例只能跑在CPUs 4,5,6,7,9,10,11,12,15上。 allow_resize_to_same_host=true #允许虚拟机后期的资源调整;允许openstack创建的虚拟机,当发现后期CPU、内存、磁盘空间不足时,对虚拟机进行动态调整 resume_guests_state_on_host_boot=true #配置虚拟机自启动;当宿主机启动后,把虚拟机恢复到之前的状态,如果虚拟机之前是关机,则宿主机启动后,虚拟机也是关机状态;如果虚拟机之前是开机状态,则宿主机启动后,虚拟机还是开机状态 cpu_allocation_ratio=8 #CPU超分;把宿主机 1 核CPU,当做 8 核去分;即宿主机的 1 核CPU在openstack看来,就是 8 核CPU;CPU不能超分太多 ram_allocation_ratio=1.0 #内存一般不超分,一般都是 1 比 1,如果想要想要超分的话,最多超分1.2倍或1.5倍,不能太多;超分有好处也有坏处,好处是可以让openstack创建更多的虚拟机;坏处是,假设一个宿主机有3个虚拟机,宿主机内存为100G,超分为1.2倍,在openstack看来就是120G内存,但如果前两台虚拟机一共已经使用了80G内存,第三台虚拟机使用了20G内存,还有20G内存可用,但是会报内存不足,无法分配内存,因为三台机器已经把宿主机的100G内存全部试用完了,虽然openstack显示还有20G内存可用,但是宿主机已经没有内存可以分配了,当宿主机内存用完后,宿主机内核会把占用内存最多的虚拟机kill掉,所以一般内存不进行超分 disk_allocation_ratio=1.0 #磁盘一般也不进行超分,原理与内存超分一致 reserved_host_disk_mb=20480 #配置磁盘保留空间;即预留指定大小的空间给宿主机使用,一般用于宿主机记录日志;预留10G或者20G磁盘空间即可 reserved_host_memory_mb=4096 #配置内存保留;即预留指定大小的内存给宿主机使用;一般预留4G service_down_time=120 #服务下线时间阈值,默认60,如果一个节点上的 nova 服务超过这个时间没有上报心跳到数据库,api 服务会认为该服务已经下线,如果配置过短或过长,都会导致误判。 rpc_response_timeout=300 #RPC 调用超时时间,由于 Python 的单进程不能真正的并发,所以 RPC 请求可能不能及时响应,尤其是目标节点在执行耗时较长的定时任务时,所以需要综合考虑超时时间和等待容忍时间。设置内存超售比例为1.5倍[root@controller images]# vim /etc/nova/nova.conf ######修改前##### #ram_allocation_ratio=1.0 ################ ######修改后###### ram_allocation_ratio=1.5 ################# #wq保存 #重启整个openstack或者nova服务 [root@controller images]# openstack-service restart设置cpu超售比例为4倍[root@controller images]# vim /etc/nova/nova.conf ######修改前##### #ram_allocation_ratio=1.0 ################ ######修改后###### ram_allocation_ratio=4.0 ################# #wq保存 #重启整个openstack或者nova服务 [root@controller images]# openstack-service restart设置nova服务心跳检查时间为120秒[root@controller images]# vim /etc/nova/nova.conf ######修改前##### #service_down_time=60 ################ ######修改后###### service_down_time=120 ################# #wq保存 #重启整个openstack或者nova服务 [root@controller images]# openstack-service restart预留前2个物理CPU,把后面的所有CPU分配给虚拟机使用(假设vcpu为16个)[root@controller images]# vim /etc/nova/nova.conf ######修改前##### #vcpu_pin_set=<None> ################ ######修改后###### vcpu_pin_set=3-16 ################# #wq保存 #重启整个openstack或者nova服务 [root@controller images]# openstack-service restart预留2048mb内存,这部分内存不能被虚拟机使用[root@controller images]# vim /etc/nova/nova.conf ######修改前##### #reserved_host_memory_mb=512 ################ ######修改后###### reserved_host_memory_mb=2048 ################# #wq保存 #重启整个openstack或者nova服务 [root@controller images]# openstack-service restart预留10240mb磁盘,这部分磁盘不能被虚拟机使用[root@controller images]# vim /etc/nova/nova.conf ######修改前##### #reserved_host_disk_mb=0 ################ ######修改后###### reserved_host_disk_mb=10240 ################# #wq保存 #重启整个openstack或者nova服务 [root@controller images]# openstack-service restart在自行搭建的OpenStack平台上,使用Swift对象存储服务,修改相应的配置文件,使对象存储Swift作为glance镜像服务的后端存储#glance配置文件/etc/glance/glance-api.conf [glance_store] ...... stores=glance.store.filesystem.Store,glance.store.swift.Store,glance.store.http.Store default_store=swift swift_store_auth_address=http://192.168.1.76:5000/v2.0/ swift_store_user=services:glance swift_store_key=000000 //glance用户的keystone认证密码 swift_store_container=glance swift_store_create_container_on_put=True swift_store_large_object_size=5120 swift_store_large_object_chunk_size=200 os_region_name=RegionOne ...... ----------------------------------- openstack-kilo,glance使用swift 作为后端存储在openstack私有云平台上,在/root目录下编写模板server.yaml,创建名为“m1.flavor”、 ID 为 1234、内存为1024MB、硬盘为20GB、vcpu数量为 1的云主机类型。[root@controller ~]# cd /root/ [root@controller ~]# vim server.yaml ############文 件 内 容############ heat_template_version: 2018-03-02 resources: flavor: type: OS::Nova::Flavor properties: disk: 20 flavorid: 1234 name: m1.flavor ram: 1024 vcpus: 1 ################################# 测试编配文件 [root@controller ~]# source /etc/keystone/admin-openrc.sh [root@controller ~]# openstack stack create -t fla.yaml flavor +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | id | 46e54381-baa2-47c4-89ef-cd775f589ce8 | | stack_name | flavor | | description | No description | | creation_time | 2021-12-01T03:06:57Z | | updated_time | None | | stack_status | CREATE_IN_PROGRESS | | stack_status_reason | Stack CREATE started | +---------------------+--------------------------------------+ [root@controller ~]# openstack flavor list +------+-----------+------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | +------+-----------+------+------+-----------+-------+-----------+ | 1234 | m1.flavor | 1024 | 20 | 0 | 1 | True | +------+-----------+------+------+-----------+-------+-----------+在自行搭建的 OpenStack 私有云平台或赛项提供的 all-in-one 平台上,在/root 目录下编 写 Heat 模板 create_user.yaml,创建名为 heat-user 的用户,属于 admin 项目,并赋予 heat-user 用户 admin 的权限,配置用户密码为 123456。[root@controller ~]# cd /root/ [root@controller ~]# vim create_user.yaml 可在dashboard的编排中使用模板创建者创建生成代码 ############文 件 内 容############ heat_template_version: 2018-03-02 resources: user: type: OS::Keystone::User properties: name: heat-user domain: demo password: "123456" roles: [{"role": admin,"project": admin}] ################################# 测试编配文件 [root@controller ~]# source /etc/keystone/admin-openrc.sh [root@controller ~]# openstack stack create -t ./create_user.yaml user +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | id | c018a596-142e-45a8-a22f-3131317b5b31 | | stack_name | user | | description | No description | | creation_time | 2021-12-01T03:33:17Z | | updated_time | None | | stack_status | CREATE_IN_PROGRESS | | stack_status_reason | Stack CREATE started | +---------------------+--------------------------------------+ [root@controller ~]# openstack user list +----------------------------------+-------------------+ | ID | Name | +----------------------------------+-------------------+ | 0944f34be146406faebe9d9f0804f336 | neutron | | 159dd53f9105414985480b996ed28067 | glance | | 362104447741490888fce648863e203f | placement | | 4bb46e4bd83f4a4e8f81874b92b515a2 | kuryr | | 4de70f9596a64f6d9a29f7a70b8ee0d4 | heat-user | ##########这条 | 5f0a256639ab4491a9c1346cca3db42c | gnocchi | | 6f8df1b85e2140d58fc80693720f6e95 | admin | | 71e2175c45314dfeb0165019b61d08df | heat | | 821824ed38794510ad80494c47b803bb | heat_domain_admin | | a1e8e72403b04335b64ca2c4f160ef9f | aodh | | b02e95f997384b35b4fcb68c18cc1abd | cinder | | b193a113e69040f8be2199e487157cbd | demo | | c0bcbf260b6a4541b7bd2d0c5e38926b | nova | | faa22fa382c4469aa907f6481a573618 | swift | | fc9663cfb3a74b88be15dd801229a18e | zun | | fe23dec6db0d485cbad04798c97f778c | ceilometer | +----------------------------------+-------------------+在自行搭建的OpenStack平台上,编写heat模板createvm.yml文件,模板作用为按照要求创建一个云主机[root@controller ~]# heat --help | grep template resource-template DEPRECATED! resource-type-template Generate a template based on a resource type. template-function-list template-show Get the template for the specified stack. template-validate Validate a template with parameters. template-version-list List the available template versions. [root@controller ~]# heat resource-type-list | grep Nova WARNING (shell) "heat resource-type-list" is deprecated, please use "openstack orchestration resource type list" instead | OS::Nova::Flavor | | OS::Nova::FloatingIP | | OS::Nova::FloatingIPAssociation | | OS::Nova::HostAggregate | | OS::Nova::KeyPair | | OS::Nova::Quota | | OS::Nova::Server | | OS::Nova::ServerGroup | [root@controller ~]# heat resource-type-template OS::Nova::Server ##直接去openstack的dashboard平台上,使用模板创建者生成模板 [root@controller ~]# vim /root/createvm.yml ######################## heat_template_version: 2018-03-02 resources: Server_1: type: OS::Nova::Server properties: networks: - network: c6ed53d0-fa4d-431f-b91f-edee82008a4e name: test flavor: test image: 63cbe619-9854-4efe-9f7e-79313471171a availability_zone: nova ######################## [root@controller ~]# openstack stack create -t /root/createvm.yml server +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | id | a9d5c3a2-8ca3-4ff5-9f13-9d79566f110e | | stack_name | server | | description | No description | | creation_time | 2021-12-18T07:26:51Z | | updated_time | None | | stack_status | CREATE_IN_PROGRESS | | stack_status_reason | Stack CREATE started | +---------------------+--------------------------------------+在controller节点上,编写脚本mysqlbak.sh,要求执行该脚本可以备份数据库,并存放在/opt/mysqlbak目录下#数据库备份命令 mysqldump -u账号 -p密码 (数据库)或(--all-databases) > 文件路径+名字登录提供的私有云平台,创建一台centos7.5的云主机,flavor使用带有附加硬盘的类型。连接到该云主机,使用附加的硬盘,要求分出两个大小为5G的分区。使用两个分区,创建名为chinaskill-vg的卷组[root@controller ~]# openstack volume list +--------------------------------------+------+-----------+------+-------------+ | ID | Name | Status | Size | Attached to | +--------------------------------------+------+-----------+------+-------------+ | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available | 10 | | +--------------------------------------+------+-----------+------+-------------+ [root@controller ~]# openstack server list +--------------------------------------+------+--------+--------------------------------+-----------+--------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+------+--------+--------------------------------+-----------+--------+ | 1113c8b8-7072-4ad0-a8ef-05b66a5f162f | test | ACTIVE | intnet=192.168.1.5, 172.16.1.9 | centos7.5 | test | +--------------------------------------+------+--------+--------------------------------+-----------+--------+ [root@controller ~]# openstack server add volume test disk [root@test ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 253:0 0 20G 0 disk └─vda1 253:1 0 20G 0 part / vdb 253:16 0 10G 0 disk #使用parted工具,将vdb分成两块5G大小磁盘,并格式化 [root@test ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 253:0 0 20G 0 disk └─vda1 253:1 0 20G 0 part / vdb 253:16 0 10G 0 disk ├─vdb1 253:17 0 5G 0 part └─vdb2 253:18 0 5G 0 part #配置yum源,安装lvm2 [root@test ~]# yum install -y lvm2 #使用pvcreate 初始化硬盘 [root@test ~]# pvcreate -f /dev/vdb1 WARNING: lvmetad connection failed, cannot reconnect. lvmetad cannot be used due to error: Connection reset by peer WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0). WARNING: Not using lvmetad because cache update failed. Physical volume "/dev/vdb1" successfully created. [root@test ~]# pvcreate -f /dev/vdb2 WARNING: lvmetad connection failed, cannot reconnect. lvmetad cannot be used due to error: Connection reset by peer WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0). WARNING: Not using lvmetad because cache update failed. Physical volume "/dev/vdb2" successfully created. #使用vgcreate创建卷组 [root@test ~]# vgcreate chinaskill-vg /dev/vdb1 /dev/vdb2 WARNING: lvmetad connection failed, cannot reconnect. lvmetad cannot be used due to error: Connection reset by peer WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0). WARNING: Not using lvmetad because cache update failed. Volume group "chinaskill-vg" successfully created #查看卷组 [root@test ~]# vgdisplay WARNING: lvmetad connection failed, cannot reconnect. lvmetad cannot be used due to error: Connection reset by peer WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0). WARNING: Not using lvmetad because cache update failed. --- Volume group --- VG Name chinaskill-vg System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 1 VG Access read/write VG Status resizable MAX LV 0 Cur LV 0 Open LV 0 Max PV 0 Cur PV 2 Act PV 2 VG Size 9.99 GiB PE Size 4.00 MiB Total PE 2558 Alloc PE / Size 0 / 0 Free PE / Size 2558 / 9.99 GiB VG UUID xe5E7f-fjU3-G1Dm-k2hb-MARI-8IfZ-wdgg8f在controller节点上创建名为chinaskill的容器,并获取该容器的存放路径;将cirros-0.3.4-x86_64-disk.img镜像上传到chinaskill容器中,并设置分段存放,每一段大小为10M[root@controller ~]# openstack container create chinaskill +---------------------------------------+------------+------------------------------------+ | account | container | x-trans-id | +---------------------------------------+------------+------------------------------------+ | AUTH_54bea643f53e4f2b96970ddfc14d3138 | chinaskill | tx322c769c6efc49008e32b-0061bd9998 | +---------------------------------------+------------+------------------------------------+ [root@controller images]# ls cirros-0.3.4-x86_64-disk.img [root@controller images]# swift upload -S 10m chinaskill cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.img segment 1 cirros-0.3.4-x86_64-disk.img segment 0 cirros-0.3.4-x86_64-disk.img [root@controller images]# swift list chinaskill cirros-0.3.4-x86_64-disk.img [root@controller images]# openstack object show chinaskill cirros-0.3.4-x86_64-disk.img +-------------------+---------------------------------------------------------------------------------------+ | Field | Value | +-------------------+---------------------------------------------------------------------------------------+ | account | AUTH_54bea643f53e4f2b96970ddfc14d3138 | | container | chinaskill | | content-length | 13287936 | | content-type | application/octet-stream | | etag | "5cde37512919eda28a822e472bb0a2dd" | | last-modified | Sat, 18 Dec 2021 08:24:41 GMT | | object | cirros-0.3.4-x86_64-disk.img | | properties | Mtime='1639099260.000000' | | x-object-manifest | chinaskill_segments/cirros-0.3.4-x86_64-disk.img/1639099260.000000/13287936/10485760/ | +-------------------+---------------------------------------------------------------------------------------+在自行搭建的OpenStack平台上,使用cirros镜像创建云主机,flavor使用1vcpu/512M内存/1G硬盘,创建云主机cscc_vm,假设在使用过程中,发现该云主机配置太低,需要调整,请修改相应配置,将dashboard界面上的云主机调整实例大小可以使用,将该云主机实例大小调整为1vcpu/1G内存/2G硬盘在自行搭建的OpenStack平台上,使用cirros镜像创建云主机vm1,然后将该云主机进行手动迁移,若原来创建在compute节点上的,则迁移至controller节点上;若原来创建在controller节点上的,则迁移至compute节点上在controller控制节点上,安装libguestfs-tools工具的时候,会发生依赖包的冲突,请解决依赖关系的报错,完成libguestfs-tools工具的安装登录提供的私有云平台,创建一台centos7.5的云主机,使用提供的软件包,在这台云主机上安装zabbix监控服务,然后配置该服务监控controller节点。yum install -y zabbix-server-mysql zabbix-web-mysql yum install -y mariadb-server 用mysql xxxx --user=root配置 gzip -d解压sql文件 mysql -uroot -p000000 -e "use zabbix;source /usr/share/doc/zabbix-server-mysql-3.4.15/create.sql;" 登录提供的私有云平台,创建一台centos7.5的云主机,使用提供的软件包,在这台云主机上安装数据库、redis、zookeeper和kafka等服务,然后将商城应用部署在该云主机上,实现网站的访问#云主机ip:172.16.1.9 #准备tar包 gpmall-single.tar.gz #准备centos源 #查看jar包得到相关信息 #maridb:地址mysql.mall 端口3306 账号root 密码123456 数据库gpmall #redis:地址redis.mall 端口6379 #zookeeper: 地址zookeeper.mall 端口2181 #kafka:地址kafka.mall 端口9092 [root@gpmall ~]# ls gpmall-single.tar.gz [root@gpmall ~]# tar -xzvf gpmall-single.tar.gz [root@gpmall ~]# cd gpmall-single [root@gpmall gpmall-single]# ls dist gpmall-shopping-0.0.1-SNAPSHOT.jar gpmall-user-0.0.1-SNAPSHOT.jar shopping-provider-0.0.1-SNAPSHOT.jar zookeeper-3.4.14.tar.gz gpmall-repo gpmall.sql kafka_2.11-1.1.1.tgz user-provider-0.0.1-SNAPSHOT.jar #配置yum源 [root@gpmall gpmall-single]# cd gpmall-repo/ [root@gpmall gpmall-repo]# pwd /root/gpmall-single/gpmall-repo [root@gpmall gpmall-repo]# rm -rf /etc/yum.repos.d/CentOS-* [root@gpmall gpmall-repo]# vi /etc/yum.repos.d/http.repo [root@gpmall gpmall-repo]# cat /etc/yum.repos.d/http.repo [centos] name=centos baseurl=ftp://172.16.1.101/centos gpgcheck=0 enable=1 [gpmall] name=gpmall baseurl=file:///root/gpmall-single/gpmall-repo gpgcheck=0 enable=1 [root@gpmall gpmall-repo]# yum clean all #添加主机映射 [root@gpmall gpmall-single]# vi /etc/hosts [root@gpmall gpmall-single]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 127.0.0.1 redis.mall mysql.mall zookeeper.mall kafka.mall #安装数据库 [root@gpmall gpmall-repo]# yum install -y mariadb mariadb-server [root@gpmall gpmall-repo]# mysqld_safe & [1] 1576 [root@gpmall gpmall-repo]# 211218 12:16:08 mysqld_safe Logging to '/var/lib/mysql/gpmall.novalocal.err'. 211218 12:16:09 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql [root@gpmall gpmall-repo]# mysqladmin -uroot password "123456" [root@gpmall gpmall-repo]# cd .. [root@gpmall gpmall-single]# mysql -uroot -p123456 -e "set names utf8;grant all privileges on *.* to 'root'@'%' identified by '123456';" [root@gpmall gpmall-single]# mysql -uroot -p123456 -e "create database gpmall;use gpmall;source gpmall.sql;" [root@gpmall gpmall-single]# mysql -uroot -p123456 -e "use gpmall;show tables" +--------------------+ | Tables_in_gpmall | +--------------------+ | tb_address | | tb_base | | tb_comment | | tb_comment_picture | | tb_comment_reply | | tb_dict | | tb_express | | tb_item | | tb_item_cat | | tb_item_desc | | tb_log | | tb_member | | tb_order | | tb_order_item | | tb_order_shipping | | tb_panel | | tb_panel_content | | tb_payment | | tb_refund | | tb_stock | | tb_user_verify | +--------------------+ #安装redis [root@gpmall gpmall-single]# yum install -y redis [root@gpmall gpmall-single]# sed -i "s/bind 127.0.0.1/bind 0.0.0.0/g" /etc/redis.conf [root@gpmall gpmall-single]# sed -i "s/protected-mode yes/protected-mode no/g" /etc/redis.conf [root@gpmall gpmall-single]# sed -i "s/daemonize no/daemonize yes/g" /etc/redis.conf [root@gpmall gpmall-single]# redis-server /etc/redis.conf [root@gpmall gpmall-single]# redis-cli 127.0.0.1:6379> exit #安装jdk [root@gpmall gpmall-single]# yum install -y java-1.8.0-openjdk [root@gpmall gpmall-single]# java -version openjdk version "1.8.0_262" OpenJDK Runtime Environment (build 1.8.0_262-b10) OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode) #安装zookeeper [root@gpmall gpmall-single]# tar -xzvf zookeeper-3.4.14.tar.gz [root@gpmall gpmall-single]# cd zookeeper-3.4.14 [root@gpmall zookeeper-3.4.14]# cd conf/ [root@gpmall conf]# mv zoo_sample.cfg zoo.cfg [root@gpmall ~]# cd /root/gpmall-single [root@gpmall gpmall-single]# zookeeper-3.4.14/bin/zkServer.sh start #安装kafka [root@gpmall gpmall-single]# tar -xzvf kafka_2.11-1.1.1.tgz [root@gpmall gpmall-single]# nohup kafka_2.11-1.1.1/bin/kafka-server-start.sh kafka_2.11-1.1.1/config/server.properties & #安装nginx [root@gpmall gpmall-single]# yum install -y nginx [root@gpmall gpmall-single]# rm -rf /usr/share/nginx/html/* [root@gpmall gpmall-single]# cp -rvf dist/* /usr/share/nginx/html/ [root@gpmall gpmall-single]# sed -i "1a location /user { proxy_pass http://localhost:8082; }" /etc/nginx/conf.d/default.conf [root@gpmall gpmall-single]# sed -i "1a location /shopping { proxy_pass http://localhost:8081; }" /etc/nginx/conf.d/default.conf [root@gpmall gpmall-single]# sed -i "1a location /cashier { proxy_pass http://localhost:8083; }" /etc/nginx/conf.d/default.conf #启动jar包 [root@gpmall gpmall-single]# nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar & [root@gpmall gpmall-single]# nohup java -jar user-provider-0.0.1-SNAPSHOT.jar & [root@gpmall gpmall-single]# nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar & [root@gpmall gpmall-single]# nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar & [root@gpmall gpmall-single]# netstat -ntlp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 11819/redis-server tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 559/rpcbind tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 12666/nginx: master tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1208/sshd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 996/master tcp6 0 0 :::9092 :::* LISTEN 12246/java tcp6 0 0 :::2181 :::* LISTEN 12000/java tcp6 0 0 :::3306 :::* LISTEN 1638/mysqld tcp6 0 0 :::43948 :::* LISTEN 12000/java tcp6 0 0 :::111 :::* LISTEN 559/rpcbind tcp6 0 0 :::39318 :::* LISTEN 12246/java tcp6 0 0 :::22 :::* LISTEN 1208/sshd tcp6 0 0 ::1:25 :::* LISTEN 996/master #重启nginx [root@gpmall gpmall-single]# nginx -t登录提供的私有云平台,使用centos7.5镜像创建两台云主机,使用提供的软件包。在这两台云主机上安装Redis服务,并配置成Redis主从架构#server1 内网ip 192.168.1.3 浮动ip 172.16.1.13 主节点 #server2 内网ip 192.168.1.18 浮动ip 172.16.1.9 从节点 #配置yum源 #两台主机同时安装redis [root@server1 ~]# yum install -y redis [root@server2 ~]# yum install -y redis #修改配置文件 [root@server1 ~]# vim /etc/redis.conf ##########修改前########## bind 127.0.0.1 protected-mode yes daemonize no ######################### ##########修改后########## bind 0.0.0.0 protected-mode no daemonize yes ######################### [root@server2 ~]# vim /etc/redis.conf ##########修改前########## bind 127.0.0.1 protected-mode yes daemonize no # slaveof <masterip> <masterport> ######################### ##########修改后########## bind 0.0.0.0 protected-mode no daemonize yes slaveof 192.168.1.3 6379 ######################### [root@server1 ~]# redis-server /etc/redis.conf [root@server2 ~]# redis-server /etc/redis.conf [root@server1 ~]# redis-cli 127.0.0.1:6379> role 1) "master" 2) (integer) 127 3) 1) 1) "192.168.1.18" 2) "6379" 3) "127" 127.0.0.1:6379> set a '1' OK [root@server2 ~]# redis-cli 127.0.0.1:6379> ROLE 1) "slave" 2) "192.168.1.3" 3) (integer) 6379 4) "connected" 5) (integer) 155 127.0.0.1:6379> get a "1"登录提供的私有云平台,创建三台centos7.5的云主机,使用提供的软件包,在这三台云主机安装上安装Redis服务,并配置成Redis哨兵模式。#server1 内网ip 192.168.1.3 浮动ip 172.16.1.13 主节点 #server2 内网ip 192.168.1.18 浮动ip 172.16.1.9 从节点 #server3 内网ip 192.168.1.16 浮动ip 172.16.1.17 从节点 #配置yum源 #三台主机同时安装redis [root@server1 ~]# yum install -y redis [root@server2 ~]# yum install -y redis [root@server3 ~]# yum install -y redis #修改配置文件 [root@server1 ~]# vim /etc/redis.conf ##########修改前########## bind 127.0.0.1 protected-mode yes daemonize no ######################### ##########修改后########## bind 0.0.0.0 protected-mode no daemonize yes ######################### [root@server2 ~]# vim /etc/redis.conf ##########修改前########## bind 127.0.0.1 protected-mode yes daemonize no # slaveof <masterip> <masterport> ######################### ##########修改后########## bind 0.0.0.0 protected-mode no daemonize yes slaveof 192.168.1.3 6379 ######################### [root@server3 ~]# vim /etc/redis.conf ##########修改前########## bind 127.0.0.1 protected-mode yes daemonize no # slaveof <masterip> <masterport> ######################### ##########修改后########## bind 0.0.0.0 protected-mode no daemonize yes slaveof 192.168.1.3 6379 ######################### [root@server1 ~]# redis-server /etc/redis.conf [root@server2 ~]# redis-server /etc/redis.conf [root@server3 ~]# redis-server /etc/redis.conf [root@server1 ~]# redis-cli 127.0.0.1:6379> role 1) "master" 2) (integer) 127 3) 1) 1) "192.168.1.18" 2) "6379" 3) "127" 2) 1) "192.168.1.16" 2) "6379" 3) "127" [root@server2 ~]# redis-cli 127.0.0.1:6379> ROLE 1) "slave" 2) "192.168.1.3" 3) (integer) 6379 4) "connected" 5) (integer) 155 [root@server3 ~]# redis-cli 127.0.0.1:6379> role 1) "slave" 2) "192.168.1.3" 3) (integer) 6379 4) "connected" 5) (integer) 197 #配置哨兵服务 [root@server1 ~]# vim /etc/redis-sentinel.conf ##########修改前########## # protected-mode no sentinel monitor mymaster 127.0.0.1 6379 2 ######################### ##########修改后########## protected-mode no sentinel monitor mymaster 192.168.1.3 6379 2 daemonize yes ######################### [root@server2 ~]# vim /etc/redis-sentinel.conf ##########修改前########## # protected-mode no sentinel monitor mymaster 127.0.0.1 6379 2 ######################### ##########修改后########## protected-mode no sentinel monitor mymaster 192.168.1.3 6379 2 daemonize yes ######################### [root@server3 ~]# vim /etc/redis-sentinel.conf ##########修改前########## # protected-mode no sentinel monitor mymaster 127.0.0.1 6379 2 ######################### ##########修改后########## protected-mode no sentinel monitor mymaster 192.168.1.3 6379 2 daemonize yes ######################### #启动哨兵 [root@server1 ~]# redis-sentinel /etc/redis-sentinel.conf [root@server2 ~]# redis-sentinel /etc/redis-sentinel.conf [root@server3 ~]# redis-sentinel /etc/redis-sentinel.conf [root@server1 ~]# redis-cli -p 26379 127.0.0.1:26379> info sentinel # Sentinel sentinel_masters:1 sentinel_tilt:0 sentinel_running_scripts:0 sentinel_scripts_queue_length:0 sentinel_simulate_failure_flags:0 master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=2,sentinels=4登录提供的私有云平台,创建三台centos7.5的云主机,使用提供的软件包,将这三台云主机构建为zookeeper集群#server1 内网ip 192.168.1.3 浮动ip 172.16.1.13 #server2 内网ip 192.168.1.18 浮动ip 172.16.1.9 #server3 内网ip 192.168.1.16 浮动ip 172.16.1.17 #三台机配置yum源,安装jdk [root@server1 ~]# # yum install -y java-1.8.0-openjdk [root@server1 ~]# # java -version openjdk version "1.8.0_161" OpenJDK Runtime Environment (build 1.8.0_161-b14) OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode) [root@server2 ~]# # yum install -y java-1.8.0-openjdk [root@server2 ~]# # java -version openjdk version "1.8.0_161" OpenJDK Runtime Environment (build 1.8.0_161-b14) OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode) [root@server3 ~]# # yum install -y java-1.8.0-openjdk [root@server3 ~]# # java -version openjdk version "1.8.0_161" OpenJDK Runtime Environment (build 1.8.0_161-b14) OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode) #三台机解压zookeeper [root@server1 ~]# cd /opt/ [root@server1 opt]# ls kafka_2.11-1.1.1.tgz zookeeper-3.4.14.tar.gz [root@server1 opt]# tar -xzvf zookeeper-3.4.14.tar.gz [root@server2 ~]# cd /opt/ [root@server2 opt]# ls kafka_2.11-1.1.1.tgz zookeeper-3.4.14.tar.gz [root@server2 opt]# tar -xzvf zookeeper-3.4.14.tar.gz [root@server3 ~]# cd /opt/ [root@server3 opt]# ls kafka_2.11-1.1.1.tgz zookeeper-3.4.14.tar.gz [root@server3 opt]# tar -xzvf zookeeper-3.4.14.tar.gz #修改配置文件 [root@server1 opt]# cd zookeeper-3.4.14/conf/ [root@server1 conf]# cp zoo_sample.cfg zoo.cfg [root@server1 conf]# vim zoo.cfg #########修改前########## dataDir=/tmp/zookeeper #这个路径是到时候创建myid的路径 ######################## #########修改后########## dataDir=/tmp/zookeeper server.1=192.168.1.3:2888:3888 server.2=192.168.1.18:2888:3888 server.3=192.168.1.16:2888:3888 ######################## #scp配置文件到另外两台机 [root@server1 conf]# scp zoo.cfg 192.168.1.18:/opt/zookeeper-3.4.14/conf/ zoo.cfg 100% 1017 92.3KB/s 00:00 [root@server1 conf]# scp zoo.cfg 192.168.1.16:/opt/zookeeper-3.4.14/conf/ zoo.cfg 100% 1017 95.2KB/s 00:00 #创建dataDir路径,写入myid文件,数值为server.的数值 [root@server1 conf]# mkdir /tmp/zookeeper [root@server1 conf]# echo "1" > /tmp/zookeeper/myid [root@server1 conf]# cat /tmp/zookeeper/myid 1 [root@server2 opt]# mkdir /tmp/zookeeper [root@server2 opt]# echo "2" > /tmp/zookeeper/myid [root@server2 opt]# cat /tmp/zookeeper/myid 2 [root@server3 opt]# mkdir /tmp/zookeeper [root@server3 opt]# echo "3" > /tmp/zookeeper/myid [root@server3 opt]# cat /tmp/zookeeper/myid 3 #启动zookeeper,一台一台启动 [root@server1 conf]# /opt/zookeeper-3.4.14/bin/zkServer.sh start ZooKeeper JMX enabled by default Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg Starting zookeeper ... STARTED [root@server1 conf]# /opt/zookeeper-3.4.14/bin/zkServer.sh status ZooKeeper JMX enabled by default Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg Mode: leader [root@server2 opt]# /opt/zookeeper-3.4.14/bin/zkServer.sh start ZooKeeper JMX enabled by default Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg Starting zookeeper ... STARTED [root@server2 opt]# /opt/zookeeper-3.4.14/bin/zkServer.sh status ZooKeeper JMX enabled by default Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg Mode: follower [root@server3 opt]# /opt/zookeeper-3.4.14/bin/zkServer.sh start ZooKeeper JMX enabled by default Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg Starting zookeeper ... STARTED [root@server3 opt]# /opt/zookeeper-3.4.14/bin/zkServer.sh status ZooKeeper JMX enabled by default Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg Mode: follower登录提供的私有云平台,创建三台centos7.5的云主机,使用提供的软件包,将这三台云主机构建为kafka集群#server1 内网ip 192.168.1.3 浮动ip 172.16.1.13 #server2 内网ip 192.168.1.18 浮动ip 172.16.1.9 #server3 内网ip 192.168.1.16 浮动ip 172.16.1.17 #kafka需要依赖zookeeper和jdk,上一题已经搭好了zookeeper和jdk,zookeeper不用集群也可以 #三台机tar包解压 [root@server1 conf]# cd /opt/ [root@server1 opt]# tar -xzvf kafka_2.11-1.1.1.tgz [root@server2 opt]# tar -xzvf kafka_2.11-1.1.1.tgz [root@server3 opt]# tar -xzvf kafka_2.11-1.1.1.tgz #修改配置文件 [root@server1 opt]# vim kafka_2.11-1.1.1/config/server.properties #########修改前########## broker.id=0 zookeeper.connect=localhost:2181 ######################## #########修改后########## broker.id=1 zookeeper.connect=192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181 listeners=PLAINTEXT://192.168.1.3:9092 ######################## [root@server2 opt]# vim kafka_2.11-1.1.1/config/server.properties #########修改前########## broker.id=0 zookeeper.connect=localhost:2181 ######################## #########修改后########## broker.id=2 zookeeper.connect=192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181 listeners=PLAINTEXT://192.168.1.18:9092 ######################## [root@server3 opt]# vim kafka_2.11-1.1.1/config/server.properties #########修改前########## broker.id=0 zookeeper.connect=localhost:2181 #########修改后########## broker.id=3 zookeeper.connect=192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181 listeners=PLAINTEXT://192.168.1.16:9092 ######################## #启动kafka [root@server1 opt]# nohup /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh /opt/kafka_2.11-1.1.1/config/server.properties >> log.log & [2] 1513 [root@server2 opt]# nohup /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh /opt/kafka_2.11-1.1.1/config/server.properties >> log.log & [1] 1545 [root@server3 opt]# nohup /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh /opt/kafka_2.11-1.1.1/config/server.properties >> log.log & [1] 1556 #测试kafka集群 [root@server1 opt]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --create --zookeeper 192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181 --replication-factor 2 --partitions 3 --topic demo_topics WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both. Created topic "demo_topics". [root@server1 opt]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper 192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181 demo_topics demo_topics [root@server2 opt]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper 192.168.1.18:2181 demo_topics [root@server3 opt]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper 192.168.1.16:2181 demo_topics登录提供的私有云平台,使用centos7.5镜像创建三台云主机来搭建rabbitmq集群。使用普通集群模式,其中一台做磁盘节点,另外两台做内存节点,配置完毕后启动rabbitmq服务#server1 内网ip 192.168.1.3 浮动ip 172.16.1.13 #server2 内网ip 192.168.1.18 浮动ip 172.16.1.9 #server3 内网ip 192.168.1.16 浮动ip 172.16.1.17 #配置主机映射 192.168.1.3 server1 192.168.1.18 server2 192.168.1.16 server3 #配置yum源为rabbitmq源,启动并查看运行状态 [root@server1 opt]# yum install -y rabbitmq-server [root@server1 opt]# systemctl start rabbitmq-server [root@server1 opt]# systemctl status rabbitmq-server [root@server2 opt]# yum install -y rabbitmq-server [root@server2 opt]# systemctl start rabbitmq-server [root@server2 opt]# systemctl status rabbitmq-server [root@server3 opt]# yum install -y rabbitmq-server [root@server3 opt]# systemctl start rabbitmq-server [root@server3 opt]# systemctl status rabbitmq-server #主节点开启rabbitmq_management插件,并且重启 [root@server1 opt]# rabbitmq-plugins list [ ] rabbitmq_management 3.3.5 [root@server1 opt]# rabbitmq-plugins enable rabbitmq_management The following plugins have been enabled: mochiweb webmachine rabbitmq_web_dispatch amqp_client rabbitmq_management_agent rabbitmq_management Plugin configuration has changed. Restart RabbitMQ for changes to take effect. [root@server1 opt]# systemctl restart rabbitmq-server [上面不行就用这条([root@server1 opt]# service rabbitmq-server restart)] #在openstack的安全组开放tcp协议,打开端口15672,rabbitmq1节点的IP+端口15672(http://ip:15672)访问RabbitMQ监控界面,使用默认的用户名和密码登录(用户名和密码都为guest) #登入后,将cookie文件scp到另外两台机 [root@server1 opt]# scp /var/lib/rabbitmq/.erlang.cookie 192.168.1.18:/var/lib/rabbitmq/ .erlang.cookie 100% 20 2.1KB/s 00:00 [root@server1 opt]# scp /var/lib/rabbitmq/.erlang.cookie 192.168.1.16:/var/lib/rabbitmq/ .erlang.cookie 100% 20 2.2KB/s 00:00 #在另外两台机更改文件用户组 [root@server2 opt]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie [root@server2 opt]# systemctl restart rabbitmq-server [root@server3 opt]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie [root@server3 opt]# systemctl restart rabbitmq-server #另外两台机加入集群 这里得等很久 [root@server2 opt]# rabbitmqctl stop_app [root@server2 opt]# rabbitmq-plugins enable rabbitmq_management [root@server2 opt]# rabbitmqctl join_cluster --ram rabbit@server1 [root@server2 opt]# rabbitmqctl start_app [root@server2 opt]# systemctl restart rabbitmq-server [root@server3 opt]# rabbitmqctl stop_app [root@server3 opt]# rabbitmq-plugins enable rabbitmq_management [root@server3 opt]# rabbitmqctl join_cluster --ram rabbit@server1 [root@server3 opt]# rabbitmqctl start_app [root@server3 opt]# systemctl restart rabbitmq-server #默认rabbitmq启动后是磁盘节点,在这个cluster命令下,rabbitmq2和rabbitmq3是内存节点,rabbitmq1是磁盘节点。 如果要使rabbitmq2、rabbitmq3都是磁盘节点,去掉--ram参数即可。 如果想要更改节点类型,可以使用命令rabbitmqctl change_cluster_node_type disc(ram),前提是必须停掉Rabbit应用。 rabbitmq-plugins enable rabbitmq_management登录提供的私有云平台,创建一台centos7.5的云主机,使用提供的软件包,在这台云主机上安装LNMP环境,并将提供的WordPress案例包部署上去登录提供的私有云平台,创建三台centos7.5的云主机,使用提供的软件包,将这三台云主机都安装上MariaDB数据库服务,并配置为数据库集群,即MariaDB_galera_cluster数据库集群#server1 内网ip 192.168.1.3 浮动ip 172.16.1.13 #server2 内网ip 192.168.1.18 浮动ip 172.16.1.9 #server3 内网ip 192.168.1.16 浮动ip 172.16.1.17 #配置映射 192.168.1.3 server1 192.168.1.18 server2 192.168.1.16 server3 #配置yum源 #安装mariadb [root@server1 ~]# yum install -y mariadb mariadb-server [root@server2 ~]# yum install -y mariadb mariadb-server [root@server3 ~]# yum install -y mariadb mariadb-server [root@server1 ~]# mysqld_safe & [root@server1 ~]# mysqladmin -uroot password "000000" [root@server1 ~]# systemctl stop mariadb [root@server2 ~]# mysqld_safe & [root@server2 ~]# mysqladmin -uroot password "000000" [root@server2 ~]# systemctl stop mariadb [root@server3 ~]# mysqld_safe & [root@server3 ~]# mysqladmin -uroot password "000000" [root@server3 ~]# systemctl stop mariadb #修改配置文件 [root@server1 ~]# vim /etc/my.cnf.d/server.cnf ##########修改前########## [galera] # Mandatory settings #wsrep_on=ON #wsrep_provider= #wsrep_cluster_address= #binlog_format=row #default_storage_engine=InnoDB #innodb_autoinc_lock_mode=2 # # Allow server to accept connections on all interfaces. # #bind-address=0.0.0.0 ######################### ##########修改后########## [galera] # Mandatory settings wsrep_on=ON wsrep_provider=/usr/lib64/galera/libgalera_smm.so wsrep_cluster_address="gcomm://192.168.1.3,192.168.1.16,192.168.1.18" binlog_format=row default_storage_engine=InnoDB innodb_autoinc_lock_mode=2 # # Allow server to accept connections on all interfaces. # bind-address=0.0.0.0 ######################### #scp到另外两台机 [root@server1 ~]# scp /etc/my.cnf.d/server.cnf 192.168.1.18:/etc/my.cnf.d/server.cnf server.cnf 100% 1155 93.4KB/s 00:00 [root@server1 ~]# scp /etc/my.cnf.d/server.cnf 192.168.1.16:/etc/my.cnf.d/server.cnf server.cnf 100% 1155 126.0KB/s 00:00 #启动 [root@server1 ~]# service mysql start --wsrep-new-cluster Starting mysql (via systemctl): [ OK ] [root@server2 ~]# service mysql start Starting mysql (via systemctl): [ OK ] [root@server3 ~]# service mysql start Starting mysql (via systemctl): [ OK ] 登录提供的私有云平台,再创建一台centos7.5的云主机,使用提供的软件包,安装HAproxy负载均衡服务,与上一题搭建完成的高可用数据库进行关联,完成数据库集群+负载均衡的架构登录提供的私有云平台,创建两台centos7.5的云主机,使用提供的软件包,将这两台云主机上安装MariaDB数据库服务,并配置为主从数据库。#在openstack上创建两台云主机,处于同一网卡下 #mariadb1 ip:192.168.100.10 mariadb1 ip:192.168.100.6 #配置yum源,使用提供的mariadb镜像源 ###基础配置 ##主节点 [root@mariadb1 ~]# rm -rf /etc/yum.repos.d/CentOS-* [root@mariadb1 ~]# vi /etc/yum.repos.d/http.repo [root@mariadb1 ~]# cat /etc/yum.repos.d/http.repo [centos] name=centos baseurl=ftp://172.16.1.101/centos gpgcheck=0 enable=1 [mysql] name=mysql baseurl=ftp://172.16.1.101/iaas/mariadb-repo/ gpgcheck=0 enbale=1 [root@mariadb1 ~]# yum clean all Loaded plugins: fastestmirror Cleaning repos: centos mysql Cleaning up everything Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [root@mariadb1 ~]# yum install -y mariadb mariadb-server [root@mariadb1 ~]# vi /etc/my.cnf #在[client-server]下加入 [mysqld] log-bin=mysql-bin server-id=10 [root@mariadb1 ~]# systemctl restart mysql [root@mariadb1 ~]# mysqladmin -uroot password "000000" ##从节点 [root@mariadb2 ~]# rm -rf /etc/yum.repos.d/CentOS-* [root@mariadb2 ~]# vi /etc/yum.repos.d/http.repo [root@mariadb2 ~]# cat /etc/yum.repos.d/http.repo [centos] name=centos baseurl=ftp://172.16.1.101/centos gpgcheck=0 enable=1 [mysql] name=mysql baseurl=ftp://172.16.1.101/iaas/mariadb-repo/ gpgcheck=0 enbale=1 [root@mariadb2 ~]# yum clean all Loaded plugins: fastestmirror Cleaning repos: centos mysql Cleaning up everything Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos [root@mariadb2 ~]# yum install -y mariadb mariadb-server [root@mariadb2 ~]# vi /etc/my.cnf #在[client-server]下加入 [mysqld] log-bin=mysql-bin server-id=6 [root@mariadb2 ~]# systemctl restart mysql [root@mariadb2 ~]# mysqladmin -uroot password "000000" ###主从配置 #主节点 MariaDB [(none)]> grant all on *.* to 'root'@'%' identified by '000000'; Query OK, 0 rows affected (0.010 sec) MariaDB [(none)]> show master status; +------------------+----------+--------------+------------------+ | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | +------------------+----------+--------------+------------------+ | mysql-bin.000001 | 702 | | | +------------------+----------+--------------+------------------+ 1 row in set (0.016 sec) #从节点 MariaDB [(none)]> change master to master_host='192.168.100.10',master_user='root',master_password='000000',master_log_file='mysql-bin.000001',master_log_pos=702; Query OK, 0 rows affected (0.045 sec) MariaDB [(none)]> start slave; Query OK, 0 rows affected (0.019 sec) MariaDB [(none)]> show slave status\G; *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: 192.168.100.10 Master_User: root Master_Port: 3306 Connect_Retry: 60 Master_Log_File: mysql-bin.000001 Read_Master_Log_Pos: 702 Relay_Log_File: mariadb2-relay-bin.000002 Relay_Log_Pos: 555 Relay_Master_Log_File: mysql-bin.000001 Slave_IO_Running: Yes Slave_SQL_Running: Yes ###验证 #主节点 MariaDB [(none)]> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | | test | +--------------------+ 4 rows in set (0.012 sec) MariaDB [(none)]> create database data; Query OK, 1 row affected (0.018 sec) MariaDB [(none)]> show databases; +--------------------+ | Database | +--------------------+ | data | | information_schema | | mysql | | performance_schema | | test | +--------------------+ 5 rows in set (0.004 sec) #从节点 MariaDB [(none)]> show databases; +--------------------+ | Database | +--------------------+ | data | | information_schema | | mysql | | performance_schema | | test | +--------------------+ 5 rows in set (0.007 sec)登录提供的私有云平台,再创建一台centos7.5的云主机,使用提供的软件包与上一题配置完成的主从数据库,将这三台云主机配置为数据库读写分离架构。#创建一台云主机,与前两台mariadb处于同一网卡下 #mycat 192.168.100.13 #配置yum源,安装vim,将Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz和schema.xml文件scp到主机 [root@mycat /]# ls bin boot dev etc home lib lib64 media mnt Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz opt proc root run sbin schema.xml srv sys tmp usr var [root@mycat /]# tar -xzvf Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz [root@mycat /]# chown -R 777 mycat/ #开放环境变量 [root@mycat /]# echo "export MYCAT_HOME=/mycat" >> /etc/profile [root@mycat /]# source /etc/profile #mycat需要依赖jdk [root@mycat /]# yum install -y java-1.8.0-openjdk [root@mycat /]# java -version openjdk version "1.8.0_161" OpenJDK Runtime Environment (build 1.8.0_161-b14) OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode) #将schema.xml替换 [root@mycat /]# cp -f /schema.xml /mycat/conf/ cp: overwrite ‘/mycat/conf/schema.xml’? y [root@mycat /]# vim /mycat/conf/schema.xml ########修改前######## <?xml version="1.0"?> <!DOCTYPE mycat:schema SYSTEM "schema.dtd"> <mycat:schema xmlns:mycat="http://io.mycat/"> <schema name="USERDB" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn1"></schema> <dataNode name="dn1" dataHost="localhost1" database="test" /> <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" dbType="mysql" dbDriver="native" writeType="0" switchType="1" slaveThreshold="100"> <heartbeat>select user()</heartbeat> <writeHost host="hostM1" url="172.16.51.18:3306" user="root" password="123456"> <readHost host="hostS1" url="172.16.51.30:3306" user="root" password="123456" /> </writeHost> </dataHost> </mycat:schema> ##################### ########修改后######## <?xml version="1.0"?> <!DOCTYPE mycat:schema SYSTEM "schema.dtd"> <mycat:schema xmlns:mycat="http://io.mycat/"> <schema name="mariadb" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn1"></schema> <dataNode name="dn1" dataHost="localhost1" database="data" /> <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" dbType="mysql" dbDriver="native" writeType="0" switchType="1" slaveThreshold="100"> <heartbeat>select user()</heartbeat> <writeHost host="hostM1" url="192.168.100.10:3306" user="root" password="000000"> <readHost host="hostS1" url="192.168.100.6:3306" user="root" password="000000" /> </writeHost> </dataHost> </mycat:schema> ##################### [root@mycat /]# vim /mycat/conf/server.xml ########修改前######## <user name="root"> <property name="password">123456</property> <property name="schemas">TESTDB</property> <!-- 表级 DML 权限设置 --> <!-- <privileges check="false"> <schema name="TESTDB" dml="0110" > <table name="tb01" dml="0000"></table> <table name="tb02" dml="1111"></table> </schema> </privileges> --> </user> <user name="user"> <property name="password">user</property> <property name="schemas">TESTDB</property> <property name="readOnly">true</property> </user> ##################### ########修改后######## <user name="root"> <property name="password">000000</property> <property name="schemas">mariadb</property> <!-- 表级 DML 权限设置 --> <!-- <privileges check="false"> <schema name="TESTDB" dml="0110" > <table name="tb01" dml="0000"></table> <table name="tb02" dml="1111"></table> </schema> </privileges> --> </user> ##################### #启动mycat [root@mycat /]# mycat/bin/mycat start Starting Mycat-server... #验证 [root@mycat /]# yum install -y MariaDB-client [root@mycat /]# mysql -h 127.0.0.1 -P9066 -uroot -p000000 Welcome to the MariaDB monitor. Commands end with ; or \g. Your MySQL connection id is 2 Server version: 5.6.29-mycat-1.6-RELEASE-20161028204710 MyCat Server (monitor) Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MySQL [(none)]> show @@datasource; +----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+ | DATANODE | NAME | TYPE | HOST | PORT | W/R | ACTIVE | IDLE | SIZE | EXECUTE | READ_LOAD | WRITE_LOAD | +----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+ | dn1 | hostM1 | mysql | 192.168.100.10 | 3306 | W | 0 | 10 | 1000 | 51 | 5 | 0 | | dn1 | hostS1 | mysql | 192.168.100.6 | 3306 | R | 0 | 0 | 1000 | 0 | 0 | 0 | +----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+ 2 rows in set (0.081 sec)在controller节点的/root目录下按要求编写Python程序create_sec.py文件,对接 openstack api,要求在云平台上创建一个安全组pvm_sec,开放20、21、22、80、3306端口(如果存在同名安全组,代码中需先进行删除操作)。输出安全组名称、id和详细信息api根据 http 服务中提供的 Python-api.tar.gz 软件包,完成 python3.6 软件和依赖库的安装[root@controller opt]# cd Python-api/ [root@controller Python-api]# ls certifi-2019.11.28-py2.py3-none-any.whl chardet-3.0.4-py2.py3-none-any.whl idna-2.8-py2.py3-none-any.whl python-3.6.8.tar.gz requests-2.24.0-py2.py3-none-any.whl urllib3-1.25.11-py3-none-any.whl 安装指南.txt [root@controller Python-api]# cat 安装指南.txt 1、正常安装python3.6 2、pip3 install certifi-2019.11.28-py2.py3-none-any.whl 3、pip3 install urllib3-1.25.11-py3-none-any.whl 4、pip3 install idna-2.8-py2.py3-none-any.whl 5、pip3 install chardet-3.0.4-py2.py3-none-any.whl 6、pip3 install requests-2.24.0-py2.py3-none-any.whl [root@controller Python-api]# tar -xzf python-3.6.8.tar.gz -C / [root@controller Python-api]# cd /python-3.6.8/ [root@controller python-3.6.8]# ls packages repodata [root@controller python-3.6.8]# pwd /python-3.6.8 [root@controller python-3.6.8]# echo "[python]" >> /etc/yum.repos.d/local.repo [root@controller python-3.6.8]# echo "name=python" >> /etc/yum.repos.d/local.repo [root@controller python-3.6.8]# echo "baseurl=file:///python-3.6.8" >> /etc/yum.repos.d/local.repo [root@controller python-3.6.8]# echo "gpgcheck=0" >> /etc/yum.repos.d/local.repo [root@controller python-3.6.8]# echo "enable=1" >> /etc/yum.repos.d/local.repo [root@controller python-3.6.8]# yum clean all Loaded plugins: fastestmirror Cleaning repos: centos iaas python Cleaning up everything Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos Cleaning up list of fastest mirrors [root@controller python-3.6.8]# yum list | grep python3 python3.x86_64 3.6.8-13.el7 python python3-libs.x86_64 3.6.8-13.el7 python python3-pip.noarch 9.0.3-7.el7_7 python python3-setuptools.noarch 39.2.0-10.el7 python [root@controller python-3.6.8]# yum install -y python3 #安装模块 [root@controller python-3.6.8]# cd /opt/Python-api/ [root@controller Python-api]# pip3 install certifi-2019.11.28-py2.py3-none-any.whl WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead. Processing ./certifi-2019.11.28-py2.py3-none-any.whl Installing collected packages: certifi Successfully installed certifi-2019.11.28 [root@controller Python-api]# pip3 install urllib3-1.25.11-py3-none-any.whl WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead. Processing ./urllib3-1.25.11-py3-none-any.whl Installing collected packages: urllib3 Successfully installed urllib3-1.25.11 [root@controller Python-api]# pip3 install idna-2.8-py2.py3-none-any.whl WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead. Processing ./idna-2.8-py2.py3-none-any.whl Installing collected packages: idna Successfully installed idna-2.8 [root@controller Python-api]# pip3 install chardet-3.0.4-py2.py3-none-any.whl WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead. Processing ./chardet-3.0.4-py2.py3-none-any.whl Installing collected packages: chardet Successfully installed chardet-3.0.4 [root@controller Python-api]# pip3 install requests-2.24.0-py2.py3-none-any.whl WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead. Processing ./requests-2.24.0-py2.py3-none-any.whl Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/site-packages (from requests==2.24.0) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/site-packages (from requests==2.24.0) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/site-packages (from requests==2.24.0) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/site-packages (from requests==2.24.0) Installing collected packages: requests Successfully installed requests-2.24.0 #验证 [root@controller Python-api]# python3 Python 3.6.8 (default, Apr 2 2020, 13:34:55) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> 获取token| 参数 | 类型 | 描述 | | :----------------------------------------------------------- | :----- | :----------------------------------------------------------- || *用户域*(必需有) | 字符串 | 用户的域 | | 用户名 (必需有) | 字符串 | 用户名。如果您不提供用户名和密码,那么必须提供一个令牌。 | | 密码 (必需有) | 字符串 | 该用户的密码。 | | *项目域*(可选) | 字符串 | 该项目的域是scope对象的必需部分。 | | *项目名(可选) | 字符串 | 项目名。*项目ID和项目名都是可选的。 | | *项目ID(可选) | 字符串 | 项目ID。*项目ID和项目名都是可选的。但是伴随着项目域这两个属性其中之一是必须有的。这两个属性包含在scope对象下。如果你不知道项目的名称或者ID,发送一个不包含任何scope对象的请求。 |生成环境变量source /etc/keystone/admin-openrc.sh [root@controller api]# cat /etc/keystone/admin-openrc.sh export OS_PROJECT_DOMAIN_NAME=demo export OS_USER_DOMAIN_NAME=demo export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=000000 export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2地址:$OS_AUTH_URL/auth/tokens?nocatalog类型:post请求头Content-Type: application/json请求参数:json格式{ "auth": { "identity": { "methods": [ "password" ], "password": { "user": { "domain": { "name": "$OS_USER_DOMAIN_NAME" }, "name": "$OS_USERNAME", "password": "$OS_PASSWORD" } } }, "scope": { "project": { "domain": { "name": "$OS_PROJECT_DOMAIN_NAME" }, "name": "$OS_PROJECT_NAME" } } } }curlcurl -v -s -X POST $OS_AUTH_URL/auth/tokens?nocatalog -H "Content-Type: application/json" -d '{ "auth": { "identity": { "methods": ["password"],"password": {"user": {"domain": {"name": "'"$OS_USER_DOMAIN_NAME"'"},"name": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"} } }, "scope": { "project": { "domain": { "name": "'"$OS_PROJECT_DOMAIN_NAME"'" }, "name": "'"$OS_PROJECT_NAME"'" } } }}' [root@controller api]# curl -v -s -X POST $OS_AUTH_URL/auth/tokens?nocatalog -H "Content-Type: application/json" -d '{ "auth": { "identity": { "methods": ["password"],"password": {"user": {"domain": {"name": "'"$OS_USER_DOMAIN_NAME"'"},"name": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"} } }, "scope": { "project": { "domain": { "name": "'"$OS_PROJECT_DOMAIN_NAME"'" }, "name": "'"$OS_PROJECT_NAME"'" } } }}' * About to connect() to controller port 5000 (#0) * Trying 172.16.1.151... * Connected to controller (172.16.1.151) port 5000 (#0) > POST /v3/auth/tokens?nocatalog HTTP/1.1 > User-Agent: curl/7.29.0 > Host: controller:5000 > Accept: */* > Content-Type: application/json > Content-Length: 220 > * upload completely sent off: 220 out of 220 bytes < HTTP/1.1 201 Created < Date: Sun, 19 Dec 2021 14:13:52 GMT < Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips mod_wsgi/3.4 Python/2.7.5 < X-Subject-Token: gAAAAABhvz4gNvySQnPWjGxlKlea-gzvtY80v_NgLAnDP9z9Qkp2R1NAMJUEaBzydbmTjftUxRTa-TBQqBwM4XrUk396XJQz6W0tIQl8TmjdlZ9z4iOw2MM4w6XWKfhEGo8VqSS4CuH7ZoJgdvmc0wofuFXX2cZ7y4b0d4eV7c8axoTuyBVMdZI < Vary: X-Auth-Token < x-openstack-request-id: req-1e762a5f-7ee5-43f1-9bd8-d84422e295e0 < Content-Length: 568 < Content-Type: application/json < * Connection #0 to host controller left intact {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "c04096674c744030bf313cf107614f8d", "name": "admin"}], "expires_at": "2021-12-19T15:13:52.000000Z", "project": {"domain": {"id": "6897ff73286446cda4016bb748d2fd4d", "name": "demo"}, "id": "54bea643f53e4f2b96970ddfc14d3138", "name": "admin"}, "user": {"password_expires_at": null, "domain": {"id": "6897ff73286446cda4016bb748d2fd4d", "name": "demo"}, "id": "32a5404a9ca14a09ba0f12ae34c7a079", "name": "admin"}, "audit_ids": ["UxIdxDNCRk-JoeImrrG1yg"], "issued_at": "2021-12-19T14:13:52.000000Z"}}pythonimport json, requests, time url = "http://172.16.1.151:5000/v3/auth/tokens" body = { "auth": { "identity": { "methods": [ "password" ], "password": { "user": { "domain": { "name": "demo" }, "name": "admin", "password": "000000" } } }, "scope": { "project": { "domain": { "name": "demo" }, "name": "admin" } } } } headers = { "Content-Type": "application/json", } token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token'] print(token)1.云主机类型取云主机类型(flavor)列表地址:http://controller:8774/v2.1/flavors列出详细信息地址:http://controller:8774/v2.1/flavors/detail类型:get请求头:X-Auth-Token : tokenflavors = requests.get('http://controller:8774/v2.1/flavors', headers={'X-Auth-Token': token}) json_flavors = json.loads(flavors.text) for i in json_flavors['flavors']: print(i)[root@controller api]# python3 get_flavors.py {'id': '051c9373-f51f-46cc-80bd-56ded52f4678', 'links': [{'href': 'http://controller:8774/v2.1/flavors/051c9373-f51f-46cc-80bd-56ded52f4678', 'rel': 'self'}, {'href': 'http://controller:8774/flavors/051c9373-f51f-46cc-80bd-56ded52f4678', 'rel': 'bookmark'}], 'name': 'gpmall'} {'id': '1b17a049-e504-4dc1-9fb4-da95fabf06ca', 'links': [{'href': 'http://controller:8774/v2.1/flavors/1b17a049-e504-4dc1-9fb4-da95fabf06ca', 'rel': 'self'}, {'href': 'http://controller:8774/flavors/1b17a049-e504-4dc1-9fb4-da95fabf06ca', 'rel': 'bookmark'}], 'name': 'chinaskill'} {'id': '4f9a6045-2968-457f-9111-1a9968dd2b69', 'links': [{'href': 'http://controller:8774/v2.1/flavors/4f9a6045-2968-457f-9111-1a9968dd2b69', 'rel': 'self'}, {'href': 'http://controller:8774/flavors/4f9a6045-2968-457f-9111-1a9968dd2b69', 'rel': 'bookmark'}], 'name': 'test'}创建云主机类型地址:http://172.16.1.151:8774/v2.1/flavors类型:post请求头:Content-Type: application/jsonX-Auth-Token: token请求参数:json格式{ "flavor": { "name": name, "ram": ram, "vcpus": vcpus, "disk": disk, "id": id } }# 创建云主机类型 def create_flavor(id, vcpus, ram, disk, name): headers = { "Content-Type": "application/json", "X-Auth-Token": token } body = { "flavor": { "name": name, "ram": ram, "vcpus": vcpus, "disk": disk, "id": id } } return requests.post('http://172.16.1.151:8774/v2.1/flavors', data=json.dumps(body), headers=headers) flavor = create_flavor(id=100, vcpus=1, ram=1024, disk=10, name='api-create') print(flavor.text)[root@controller api]# python3 create_flavors.py {"flavor": {"name": "api-create", "links": [{"href": "http://172.16.1.151:8774/v2.1/flavors/100", "rel": "self"}, {-"href": "http://172.16.1.151:8774/flavors/100", "rel": "bookmark"}], "ram": 1024, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 10, "id": "100"}} [root@controller api]# python get_flavors.py {u'id': u'100', u'links': [{u'href': u'http://controller:8774/v2.1/flavors/100', u'rel': u'self'}, {u'href': u'http://controller:8774/flavors/100', u'rel': u'bookmark'}], u'name': u'api-create'}删除云主机类型地址:http://172.16.1.151:8774/v2.1/flavors/ + (flavor_id)类型:delete请求头:Content-Type: application/jsonX-Auth-Token: tokenheaders = { "context-type": "application/json", "x-auth-token": token } #删除云主机类型 flavor = requests.delete('http://172.16.1.151:8774/v2.1/flavors/' + '100',headers=headers) print(flavor.text)[root@controller api]# openstack flavor list +--------------------------------------+------------+------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | +--------------------------------------+------------+------+------+-----------+-------+-----------+ | 051c9373-f51f-46cc-80bd-56ded52f4678 | gpmall | 4096 | 20 | 0 | 4 | True | | 100 | api-create | 1024 | 10 | 0 | 1 | True | | 1b17a049-e504-4dc1-9fb4-da95fabf06ca | chinaskill | 512 | 10 | 0 | 1 | True | +--------------------------------------+------------+------+------+-----------+-------+-----------+ [root@controller api]# python3 delete_flavor.py [root@controller api]# openstack flavor list +--------------------------------------+------------+------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | +--------------------------------------+------------+------+------+-----------+-------+-----------+ | 051c9373-f51f-46cc-80bd-56ded52f4678 | gpmall | 4096 | 20 | 0 | 4 | True | | 1b17a049-e504-4dc1-9fb4-da95fabf06ca | chinaskill | 512 | 10 | 0 | 1 | True | +--------------------------------------+------------+------+------+-----------+-------+-----------+更新云主机类型地址:http://172.16.1.151:8774/v2.1/flavors/ + (flavor_id)类型:PUT请求头:Content-Type: application/jsonX-Auth-Token: tokenjson格式{ "flavor": { "description": "更新描述" } }2.镜像http://controller:8774/v2.1/images 这个镜像api已被弃用列出镜像列表地址:http://controller:8774/v2.1/images列出详细地址:http://controller:8774/v2.1/images/detail类型:get请求头:Content-Type: application/jsonX-Auth-Token: token创建镜像地址:http://controller:9292/v2/images类型:post请求头:Content-Type: application/jsonX-Auth-Token: tokenLocation:/opt/images/CentOS_7.5_x86_64_XD.qcow2请求参数:json{ "disk_format": "qcow2", "name": "api", }NameInTypeDescriptioncontainer_format (Optional)bodyenumFormat of the image container.Values may vary based on the configuration available in a particular OpenStack cloud. See the Image Schema response from the cloud itself for the valid values available.Example formats are: ami, ari, aki, bare, ovf, ova, or docker.The value might be null (JSON null data type).disk_format (Optional)bodyenumThe format of the disk.Values may vary based on the configuration available in a particular OpenStack cloud. See the Image Schema response from the cloud itself for the valid values available.Example formats are: ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, ploop or iso.The value might be null (JSON null data type).Newton changes: The vhdx disk format is a supported value. Ocata changes: The ploop disk format is a supported value.id (Optional)bodystringA unique, user-defined image UUID, in the format:nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn Where n is a hexadecimal digit from 0 to f, or F.For example:b2173dd3-7ad6-4362-baa6-a68bce3565cb If you omit this value, the API generates a UUID for the image. If you specify a value that has already been assigned, the request fails with a 409 response code.min_disk (Optional)bodyintegerAmount of disk space in GB that is required to boot the image.min_ram (Optional)bodyintegerAmount of RAM in MB that is required to boot the image.name (Optional)bodystringThe name of the image.protected (Optional)bodybooleanImage protection for deletion. Valid value is true or false. Default is false.tags (Optional)bodyarrayList of tags for this image. Each tag is a string of at most 255 chars. The maximum number of tags allowed on an image is set by the operator.visibility (Optional)bodystringVisibility for this image. Valid value is one of: public, private, shared, or community. At most sites, only an administrator can make an image public. Some sites may restrict what users can make an image community. Some sites may restrict what users can perform member operations on a shared image. Since the Image API v2.5, the default value is shared.[root@controller images]# openstack image list +--------------------------------------+-----------+--------+ | ID | Name | Status | +--------------------------------------+-----------+--------+ | 845a178a-367b-45a8-a9f5-a75a6e987e2f | api | queued | +--------------------------------------+-----------+--------+删除镜像地址:http://controller:8774/v2.1/images/ + (镜像id)类型:delete请求头:X-Auth-Token: token
2023年02月09日
170 阅读
0 评论
0 点赞
2023-02-09
openstack 国基北盛脚本安装学习记录
html {overflow-x: initial !important;}:root { --bg-color:#ffffff; --text-color:#333333; --select-text-bg-color:#B5D6FC; --select-text-font-color:auto; --monospace:"Lucida Console",Consolas,"Courier",monospace; --title-bar-height:20px; } .mac-os-11 { --title-bar-height:28px; } html { font-size: 14px; background-color: var(--bg-color); color: var(--text-color); font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; } body { margin: 0px; padding: 0px; height: auto; inset: 0px; font-size: 1rem; line-height: 1.42857; overflow-x: hidden; background: inherit; tab-size: 4; } iframe { margin: auto; } a.url { word-break: break-all; } a:active, a:hover { outline: 0px; } .in-text-selection, ::selection { text-shadow: none; background: var(--select-text-bg-color); color: var(--select-text-font-color); } #write { margin: 0px auto; height: auto; width: inherit; word-break: normal; overflow-wrap: break-word; position: relative; white-space: normal; overflow-x: visible; padding-top: 36px; } #write.first-line-indent p { text-indent: 2em; } #write.first-line-indent li p, #write.first-line-indent p * { text-indent: 0px; } #write.first-line-indent li { margin-left: 2em; } .for-image #write { padding-left: 8px; padding-right: 8px; } body.typora-export { padding-left: 30px; padding-right: 30px; } .typora-export .footnote-line, .typora-export li, .typora-export p { white-space: pre-wrap; } .typora-export .task-list-item input { pointer-events: none; } @media screen and (max-width: 500px) { body.typora-export { padding-left: 0px; padding-right: 0px; } #write { padding-left: 20px; padding-right: 20px; } } #write li > figure:last-child { margin-bottom: 0.5rem; } #write ol, #write ul { position: relative; } img { max-width: 100%; vertical-align: middle; image-orientation: from-image; } button, input, select, textarea { color: inherit; font: inherit; } input[type="checkbox"], input[type="radio"] { line-height: normal; padding: 0px; } *, ::after, ::before { box-sizing: border-box; } #write h1, #write h2, #write h3, #write h4, #write h5, #write h6, #write p, #write pre { width: inherit; } #write h1, #write h2, #write h3, #write h4, #write h5, #write h6, #write p { position: relative; } p { line-height: inherit; } h1, h2, h3, h4, h5, h6 { break-after: avoid-page; break-inside: avoid; orphans: 4; } p { orphans: 4; } h1 { font-size: 2rem; } h2 { font-size: 1.8rem; } h3 { font-size: 1.6rem; } h4 { font-size: 1.4rem; } h5 { font-size: 1.2rem; } h6 { font-size: 1rem; } .md-math-block, .md-rawblock, h1, h2, h3, h4, h5, h6, p { margin-top: 1rem; margin-bottom: 1rem; } .hidden { display: none; } .md-blockmeta { color: rgb(204, 204, 204); font-weight: 700; font-style: italic; } a { cursor: pointer; } sup.md-footnote { padding: 2px 4px; background-color: rgba(238, 238, 238, 0.7); color: rgb(85, 85, 85); border-radius: 4px; cursor: pointer; } sup.md-footnote a, sup.md-footnote a:hover { color: inherit; text-transform: inherit; text-decoration: inherit; } #write input[type="checkbox"] { cursor: pointer; width: inherit; height: inherit; } figure { overflow-x: auto; margin: 1.2em 0px; max-width: calc(100% + 16px); padding: 0px; } figure > table { margin: 0px; } thead, tr { break-inside: avoid; break-after: auto; } thead { display: table-header-group; } table { border-collapse: collapse; border-spacing: 0px; width: 100%; overflow: auto; break-inside: auto; text-align: left; } table.md-table td { min-width: 32px; } .CodeMirror-gutters { border-right: 0px; background-color: inherit; } .CodeMirror-linenumber { user-select: none; } .CodeMirror { text-align: left; } .CodeMirror-placeholder { opacity: 0.3; } .CodeMirror pre { padding: 0px 4px; } .CodeMirror-lines { padding: 0px; } div.hr:focus { cursor: none; } #write pre { white-space: pre-wrap; } #write.fences-no-line-wrapping pre { white-space: pre; } #write pre.ty-contain-cm { white-space: normal; } .CodeMirror-gutters { margin-right: 4px; } .md-fences { font-size: 0.9rem; display: block; break-inside: avoid; text-align: left; overflow: visible; white-space: pre; background: inherit; position: relative !important; } .md-fences-adv-panel { width: 100%; margin-top: 10px; text-align: center; padding-top: 0px; padding-bottom: 8px; overflow-x: auto; } #write .md-fences.mock-cm { white-space: pre-wrap; } .md-fences.md-fences-with-lineno { padding-left: 0px; } #write.fences-no-line-wrapping .md-fences.mock-cm { white-space: pre; overflow-x: auto; } .md-fences.mock-cm.md-fences-with-lineno { padding-left: 8px; } .CodeMirror-line, twitterwidget { break-inside: avoid; } svg { break-inside: avoid; } .footnotes { opacity: 0.8; font-size: 0.9rem; margin-top: 1em; margin-bottom: 1em; } .footnotes + .footnotes { margin-top: 0px; } .md-reset { margin: 0px; padding: 0px; border: 0px; outline: 0px; vertical-align: top; background: 0px 0px; text-decoration: none; text-shadow: none; float: none; position: static; width: auto; height: auto; white-space: nowrap; cursor: inherit; -webkit-tap-highlight-color: transparent; line-height: normal; font-weight: 400; text-align: left; box-sizing: content-box; direction: ltr; } li div { padding-top: 0px; } blockquote { margin: 1rem 0px; } li .mathjax-block, li p { margin: 0.5rem 0px; } li blockquote { margin: 1rem 0px; } li { margin: 0px; position: relative; } blockquote > :last-child { margin-bottom: 0px; } blockquote > :first-child, li > :first-child { margin-top: 0px; } .footnotes-area { color: rgb(136, 136, 136); margin-top: 0.714rem; padding-bottom: 0.143rem; white-space: normal; } #write .footnote-line { white-space: pre-wrap; } @media print { body, html { border: 1px solid transparent; height: 99%; break-after: avoid; break-before: avoid; font-variant-ligatures: no-common-ligatures; } #write { margin-top: 0px; padding-top: 0px; border-color: transparent !important; padding-bottom: 0px !important; } .typora-export * { -webkit-print-color-adjust: exact; } .typora-export #write { break-after: avoid; } .typora-export #write::after { height: 0px; } .is-mac table { break-inside: avoid; } .typora-export-show-outline .typora-export-sidebar { display: none; } } .footnote-line { margin-top: 0.714em; font-size: 0.7em; } a img, img a { cursor: pointer; } pre.md-meta-block { font-size: 0.8rem; min-height: 0.8rem; white-space: pre-wrap; background: rgb(204, 204, 204); display: block; overflow-x: hidden; } p > .md-image:only-child:not(.md-img-error) img, p > img:only-child { display: block; margin: auto; } #write.first-line-indent p > .md-image:only-child:not(.md-img-error) img { left: -2em; position: relative; } p > .md-image:only-child { display: inline-block; width: 100%; } #write .MathJax_Display { margin: 0.8em 0px 0px; } .md-math-block { width: 100%; } .md-math-block:not(:empty)::after { display: none; } .MathJax_ref { fill: currentcolor; } [contenteditable="true"]:active, [contenteditable="true"]:focus, [contenteditable="false"]:active, [contenteditable="false"]:focus { outline: 0px; box-shadow: none; } .md-task-list-item { position: relative; list-style-type: none; } .task-list-item.md-task-list-item { padding-left: 0px; } .md-task-list-item > input { position: absolute; top: 0px; left: 0px; margin-left: -1.2em; margin-top: calc(1em - 10px); border: none; } .math { font-size: 1rem; } .md-toc { min-height: 3.58rem; position: relative; font-size: 0.9rem; border-radius: 10px; } .md-toc-content { position: relative; margin-left: 0px; } .md-toc-content::after, .md-toc::after { display: none; } .md-toc-item { display: block; color: rgb(65, 131, 196); } .md-toc-item a { text-decoration: none; } .md-toc-inner:hover { text-decoration: underline; } .md-toc-inner { display: inline-block; cursor: pointer; } .md-toc-h1 .md-toc-inner { margin-left: 0px; font-weight: 700; } .md-toc-h2 .md-toc-inner { margin-left: 2em; } .md-toc-h3 .md-toc-inner { margin-left: 4em; } .md-toc-h4 .md-toc-inner { margin-left: 6em; } .md-toc-h5 .md-toc-inner { margin-left: 8em; } .md-toc-h6 .md-toc-inner { margin-left: 10em; } @media screen and (max-width: 48em) { .md-toc-h3 .md-toc-inner { margin-left: 3.5em; } .md-toc-h4 .md-toc-inner { margin-left: 5em; } .md-toc-h5 .md-toc-inner { margin-left: 6.5em; } .md-toc-h6 .md-toc-inner { margin-left: 8em; } } a.md-toc-inner { font-size: inherit; font-style: inherit; font-weight: inherit; line-height: inherit; } .footnote-line a:not(.reversefootnote) { color: inherit; } .reversefootnote { font-family: ui-monospace, sans-serif; } .md-attr { display: none; } .md-fn-count::after { content: "."; } code, pre, samp, tt { font-family: var(--monospace); } kbd { margin: 0px 0.1em; padding: 0.1em 0.6em; font-size: 0.8em; color: rgb(36, 39, 41); background: rgb(255, 255, 255); border: 1px solid rgb(173, 179, 185); border-radius: 3px; box-shadow: rgba(12, 13, 14, 0.2) 0px 1px 0px, rgb(255, 255, 255) 0px 0px 0px 2px inset; white-space: nowrap; vertical-align: middle; } .md-comment { color: rgb(162, 127, 3); opacity: 0.6; font-family: var(--monospace); } code { text-align: left; vertical-align: initial; } a.md-print-anchor { white-space: pre !important; border-width: initial !important; border-style: none !important; border-color: initial !important; display: inline-block !important; position: absolute !important; width: 1px !important; right: 0px !important; outline: 0px !important; background: 0px 0px !important; text-decoration: initial !important; text-shadow: initial !important; } .os-windows.monocolor-emoji .md-emoji { font-family: "Segoe UI Symbol", sans-serif; } .md-diagram-panel > svg { max-width: 100%; } [lang="flow"] svg, [lang="mermaid"] svg { max-width: 100%; height: auto; } [lang="mermaid"] .node text { font-size: 1rem; } table tr th { border-bottom: 0px; } video { max-width: 100%; display: block; margin: 0px auto; } iframe { max-width: 100%; width: 100%; border: none; } .highlight td, .highlight tr { border: 0px; } mark { background: rgb(255, 255, 0); color: rgb(0, 0, 0); } .md-html-inline .md-plain, .md-html-inline strong, mark .md-inline-math, mark strong { color: inherit; } .md-expand mark .md-meta { opacity: 0.3 !important; } mark .md-meta { color: rgb(0, 0, 0); } @media print { .typora-export h1, .typora-export h2, .typora-export h3, .typora-export h4, .typora-export h5, .typora-export h6 { break-inside: avoid; } } .md-diagram-panel .messageText { stroke: none !important; } .md-diagram-panel .start-state { fill: var(--node-fill); } .md-diagram-panel .edgeLabel rect { opacity: 1 !important; } .md-fences.md-fences-math { font-size: 1em; } .md-fences-advanced:not(.md-focus) { padding: 0px; white-space: nowrap; border: 0px; } .md-fences-advanced:not(.md-focus) { background: inherit; } .typora-export-show-outline .typora-export-content { max-width: 1440px; margin: auto; display: flex; flex-direction: row; } .typora-export-sidebar { width: 300px; font-size: 0.8rem; margin-top: 80px; margin-right: 18px; } .typora-export-show-outline #write { --webkit-flex:2; flex: 2 1 0%; } .typora-export-sidebar .outline-content { position: fixed; top: 0px; max-height: 100%; overflow: hidden auto; padding-bottom: 30px; padding-top: 60px; width: 300px; } @media screen and (max-width: 1024px) { .typora-export-sidebar, .typora-export-sidebar .outline-content { width: 240px; } } @media screen and (max-width: 800px) { .typora-export-sidebar { display: none; } } .outline-content li, .outline-content ul { margin-left: 0px; margin-right: 0px; padding-left: 0px; padding-right: 0px; list-style: none; } .outline-content ul { margin-top: 0px; margin-bottom: 0px; } .outline-content strong { font-weight: 400; } .outline-expander { width: 1rem; height: 1.42857rem; position: relative; display: table-cell; vertical-align: middle; cursor: pointer; padding-left: 4px; } .outline-expander::before { content: ""; position: relative; font-family: Ionicons; display: inline-block; font-size: 8px; vertical-align: middle; } .outline-item { padding-top: 3px; padding-bottom: 3px; cursor: pointer; } .outline-expander:hover::before { content: ""; } .outline-h1 > .outline-item { padding-left: 0px; } .outline-h2 > .outline-item { padding-left: 1em; } .outline-h3 > .outline-item { padding-left: 2em; } .outline-h4 > .outline-item { padding-left: 3em; } .outline-h5 > .outline-item { padding-left: 4em; } .outline-h6 > .outline-item { padding-left: 5em; } .outline-label { cursor: pointer; display: table-cell; vertical-align: middle; text-decoration: none; color: inherit; } .outline-label:hover { text-decoration: underline; } .outline-item:hover { border-color: rgb(245, 245, 245); background-color: var(--item-hover-bg-color); } .outline-item:hover { margin-left: -28px; margin-right: -28px; border-left: 28px solid transparent; border-right: 28px solid transparent; } .outline-item-single .outline-expander::before, .outline-item-single .outline-expander:hover::before { display: none; } .outline-item-open > .outline-item > .outline-expander::before { content: ""; } .outline-children { display: none; } .info-panel-tab-wrapper { display: none; } .outline-item-open > .outline-children { display: block; } .typora-export .outline-item { padding-top: 1px; padding-bottom: 1px; } .typora-export .outline-item:hover { margin-right: -8px; border-right: 8px solid transparent; } .typora-export .outline-expander::before { content: "+"; font-family: inherit; top: -1px; } .typora-export .outline-expander:hover::before, .typora-export .outline-item-open > .outline-item > .outline-expander::before { content: "−"; } .typora-export-collapse-outline .outline-children { display: none; } .typora-export-collapse-outline .outline-item-open > .outline-children, .typora-export-no-collapse-outline .outline-children { display: block; } .typora-export-no-collapse-outline .outline-expander::before { content: "" !important; } .typora-export-show-outline .outline-item-active > .outline-item .outline-label { font-weight: 700; } .md-inline-math-container mjx-container { zoom: 0.95; } .CodeMirror { height: auto; } .CodeMirror.cm-s-inner { background: inherit; } .CodeMirror-scroll { overflow: auto hidden; z-index: 3; } .CodeMirror-gutter-filler, .CodeMirror-scrollbar-filler { background-color: rgb(255, 255, 255); } .CodeMirror-gutters { border-right: 1px solid rgb(221, 221, 221); background: inherit; white-space: nowrap; } .CodeMirror-linenumber { padding: 0px 3px 0px 5px; text-align: right; color: rgb(153, 153, 153); } .cm-s-inner .cm-keyword { color: rgb(119, 0, 136); } .cm-s-inner .cm-atom, .cm-s-inner.cm-atom { color: rgb(34, 17, 153); } .cm-s-inner .cm-number { color: rgb(17, 102, 68); } .cm-s-inner .cm-def { color: rgb(0, 0, 255); } .cm-s-inner .cm-variable { color: rgb(0, 0, 0); } .cm-s-inner .cm-variable-2 { color: rgb(0, 85, 170); } .cm-s-inner .cm-variable-3 { color: rgb(0, 136, 85); } .cm-s-inner .cm-string { color: rgb(170, 17, 17); } .cm-s-inner .cm-property { color: rgb(0, 0, 0); } .cm-s-inner .cm-operator { color: rgb(152, 26, 26); } .cm-s-inner .cm-comment, .cm-s-inner.cm-comment { color: rgb(170, 85, 0); } .cm-s-inner .cm-string-2 { color: rgb(255, 85, 0); } .cm-s-inner .cm-meta { color: rgb(85, 85, 85); } .cm-s-inner .cm-qualifier { color: rgb(85, 85, 85); } .cm-s-inner .cm-builtin { color: rgb(51, 0, 170); } .cm-s-inner .cm-bracket { color: rgb(153, 153, 119); } .cm-s-inner .cm-tag { color: rgb(17, 119, 0); } .cm-s-inner .cm-attribute { color: rgb(0, 0, 204); } .cm-s-inner .cm-header, .cm-s-inner.cm-header { color: rgb(0, 0, 255); } .cm-s-inner .cm-quote, .cm-s-inner.cm-quote { color: rgb(0, 153, 0); } .cm-s-inner .cm-hr, .cm-s-inner.cm-hr { color: rgb(153, 153, 153); } .cm-s-inner .cm-link, .cm-s-inner.cm-link { color: rgb(0, 0, 204); } .cm-negative { color: rgb(221, 68, 68); } .cm-positive { color: rgb(34, 153, 34); } .cm-header, .cm-strong { font-weight: 700; } .cm-del { text-decoration: line-through; } .cm-em { font-style: italic; } .cm-link { text-decoration: underline; } .cm-error { color: red; } .cm-invalidchar { color: red; } .cm-constant { color: rgb(38, 139, 210); } .cm-defined { color: rgb(181, 137, 0); } div.CodeMirror span.CodeMirror-matchingbracket { color: rgb(0, 255, 0); } div.CodeMirror span.CodeMirror-nonmatchingbracket { color: rgb(255, 34, 34); } .cm-s-inner .CodeMirror-activeline-background { background: inherit; } .CodeMirror { position: relative; overflow: hidden; } .CodeMirror-scroll { height: 100%; outline: 0px; position: relative; box-sizing: content-box; background: inherit; } .CodeMirror-sizer { position: relative; } .CodeMirror-gutter-filler, .CodeMirror-hscrollbar, .CodeMirror-scrollbar-filler, .CodeMirror-vscrollbar { position: absolute; z-index: 6; display: none; outline: 0px; } .CodeMirror-vscrollbar { right: 0px; top: 0px; overflow: hidden; } .CodeMirror-hscrollbar { bottom: 0px; left: 0px; overflow: auto hidden; } .CodeMirror-scrollbar-filler { right: 0px; bottom: 0px; } .CodeMirror-gutter-filler { left: 0px; bottom: 0px; } .CodeMirror-gutters { position: absolute; left: 0px; top: 0px; padding-bottom: 10px; z-index: 3; overflow-y: hidden; } .CodeMirror-gutter { white-space: normal; height: 100%; box-sizing: content-box; padding-bottom: 30px; margin-bottom: -32px; display: inline-block; } .CodeMirror-gutter-wrapper { position: absolute; z-index: 4; background: 0px 0px !important; border: none !important; } .CodeMirror-gutter-background { position: absolute; top: 0px; bottom: 0px; z-index: 4; } .CodeMirror-gutter-elt { position: absolute; cursor: default; z-index: 4; } .CodeMirror-lines { cursor: text; } .CodeMirror pre { border-radius: 0px; border-width: 0px; background: 0px 0px; font-family: inherit; font-size: inherit; margin: 0px; white-space: pre; overflow-wrap: normal; color: inherit; z-index: 2; position: relative; overflow: visible; } .CodeMirror-wrap pre { overflow-wrap: break-word; white-space: pre-wrap; word-break: normal; } .CodeMirror-code pre { border-right: 30px solid transparent; width: fit-content; } .CodeMirror-wrap .CodeMirror-code pre { border-right: none; width: auto; } .CodeMirror-linebackground { position: absolute; inset: 0px; z-index: 0; } .CodeMirror-linewidget { position: relative; z-index: 2; overflow: auto; } .CodeMirror-wrap .CodeMirror-scroll { overflow-x: hidden; } .CodeMirror-measure { position: absolute; width: 100%; height: 0px; overflow: hidden; visibility: hidden; } .CodeMirror-measure pre { position: static; } .CodeMirror div.CodeMirror-cursor { position: absolute; visibility: hidden; border-right: none; width: 0px; } .CodeMirror div.CodeMirror-cursor { visibility: hidden; } .CodeMirror-focused div.CodeMirror-cursor { visibility: inherit; } .cm-searching { background: rgba(255, 255, 0, 0.4); } span.cm-underlined { text-decoration: underline; } span.cm-strikethrough { text-decoration: line-through; } .cm-tw-syntaxerror { color: rgb(255, 255, 255); background-color: rgb(153, 0, 0); } .cm-tw-deleted { text-decoration: line-through; } .cm-tw-header5 { font-weight: 700; } .cm-tw-listitem:first-child { padding-left: 10px; } .cm-tw-box { border-style: solid; border-right-width: 1px; border-bottom-width: 1px; border-left-width: 1px; border-color: inherit; border-top-width: 0px !important; } .cm-tw-underline { text-decoration: underline; } @media print { .CodeMirror div.CodeMirror-cursor { visibility: hidden; } } :root { --side-bar-bg-color: #fafafa; --control-text-color: #777; } @include-when-export url(https://fonts.loli.net/css?family=Open+Sans:400italic,700italic,700,400&subset=latin,latin-ext); /* open-sans-regular - latin-ext_latin */ /* open-sans-italic - latin-ext_latin */ /* open-sans-700 - latin-ext_latin */ /* open-sans-700italic - latin-ext_latin */ html { font-size: 16px; -webkit-font-smoothing: antialiased; } body { font-family: "Open Sans","Clear Sans", "Helvetica Neue", Helvetica, Arial, 'Segoe UI Emoji', sans-serif; color: rgb(51, 51, 51); line-height: 1.6; } #write { max-width: 860px; margin: 0 auto; padding: 30px; padding-bottom: 100px; } @media only screen and (min-width: 1400px) { #write { max-width: 1024px; } } @media only screen and (min-width: 1800px) { #write { max-width: 1200px; } } #write > ul:first-child, #write > ol:first-child{ margin-top: 30px; } a { color: #4183C4; } h1, h2, h3, h4, h5, h6 { position: relative; margin-top: 1rem; margin-bottom: 1rem; font-weight: bold; line-height: 1.4; cursor: text; } h1:hover a.anchor, h2:hover a.anchor, h3:hover a.anchor, h4:hover a.anchor, h5:hover a.anchor, h6:hover a.anchor { text-decoration: none; } h1 tt, h1 code { font-size: inherit; } h2 tt, h2 code { font-size: inherit; } h3 tt, h3 code { font-size: inherit; } h4 tt, h4 code { font-size: inherit; } h5 tt, h5 code { font-size: inherit; } h6 tt, h6 code { font-size: inherit; } h1 { font-size: 2.25em; line-height: 1.2; border-bottom: 1px solid #eee; } h2 { font-size: 1.75em; line-height: 1.225; border-bottom: 1px solid #eee; } /*@media print { .typora-export h1, .typora-export h2 { border-bottom: none; padding-bottom: initial; } .typora-export h1::after, .typora-export h2::after { content: ""; display: block; height: 100px; margin-top: -96px; border-top: 1px solid #eee; } }*/ h3 { font-size: 1.5em; line-height: 1.43; } h4 { font-size: 1.25em; } h5 { font-size: 1em; } h6 { font-size: 1em; color: #777; } p, blockquote, ul, ol, dl, table{ margin: 0.8em 0; } li>ol, li>ul { margin: 0 0; } hr { height: 2px; padding: 0; margin: 16px 0; background-color: #e7e7e7; border: 0 none; overflow: hidden; box-sizing: content-box; } li p.first { display: inline-block; } ul, ol { padding-left: 30px; } ul:first-child, ol:first-child { margin-top: 0; } ul:last-child, ol:last-child { margin-bottom: 0; } blockquote { border-left: 4px solid #dfe2e5; padding: 0 15px; color: #777777; } blockquote blockquote { padding-right: 0; } table { padding: 0; word-break: initial; } table tr { border: 1px solid #dfe2e5; margin: 0; padding: 0; } table tr:nth-child(2n), thead { background-color: #f8f8f8; } table th { font-weight: bold; border: 1px solid #dfe2e5; border-bottom: 0; margin: 0; padding: 6px 13px; } table td { border: 1px solid #dfe2e5; margin: 0; padding: 6px 13px; } table th:first-child, table td:first-child { margin-top: 0; } table th:last-child, table td:last-child { margin-bottom: 0; } .CodeMirror-lines { padding-left: 4px; } .code-tooltip { box-shadow: 0 1px 1px 0 rgba(0,28,36,.3); border-top: 1px solid #eef2f2; } .md-fences, code, tt { border: 1px solid #e7eaed; background-color: #f8f8f8; border-radius: 3px; padding: 0; padding: 2px 4px 0px 4px; font-size: 0.9em; } code { background-color: #f3f4f4; padding: 0 2px 0 2px; } .md-fences { margin-bottom: 15px; margin-top: 15px; padding-top: 8px; padding-bottom: 6px; } .md-task-list-item > input { margin-left: -1.3em; } @media print { html { font-size: 13px; } pre { page-break-inside: avoid; word-wrap: break-word; } } .md-fences { background-color: #f8f8f8; } #write pre.md-meta-block { padding: 1rem; font-size: 85%; line-height: 1.45; background-color: #f7f7f7; border: 0; border-radius: 3px; color: #777777; margin-top: 0 !important; } .mathjax-block>.code-tooltip { bottom: .375rem; } .md-mathjax-midline { background: #fafafa; } #write>h3.md-focus:before{ left: -1.5625rem; top: .375rem; } #write>h4.md-focus:before{ left: -1.5625rem; top: .285714286rem; } #write>h5.md-focus:before{ left: -1.5625rem; top: .285714286rem; } #write>h6.md-focus:before{ left: -1.5625rem; top: .285714286rem; } .md-image>.md-meta { /*border: 1px solid #ddd;*/ border-radius: 3px; padding: 2px 0px 0px 4px; font-size: 0.9em; color: inherit; } .md-tag { color: #a7a7a7; opacity: 1; } .md-toc { margin-top:20px; padding-bottom:20px; } .sidebar-tabs { border-bottom: none; } #typora-quick-open { border: 1px solid #ddd; background-color: #f8f8f8; } #typora-quick-open-item { background-color: #FAFAFA; border-color: #FEFEFE #e5e5e5 #e5e5e5 #eee; border-style: solid; border-width: 1px; } /** focus mode */ .on-focus-mode blockquote { border-left-color: rgba(85, 85, 85, 0.12); } header, .context-menu, .megamenu-content, footer{ font-family: "Segoe UI", "Arial", sans-serif; } .file-node-content:hover .file-node-icon, .file-node-content:hover .file-node-open-state{ visibility: visible; } .mac-seamless-mode #typora-sidebar { background-color: #fafafa; background-color: var(--side-bar-bg-color); } .md-lang { color: #b4654d; } /*.html-for-mac { --item-hover-bg-color: #E6F0FE; }*/ #md-notification .btn { border: 0; } .dropdown-menu .divider { border-color: #e5e5e5; opacity: 0.4; } .ty-preferences .window-content { background-color: #fafafa; } .ty-preferences .nav-group-item.active { color: white; background: #999; } .menu-item-container a.menu-style-btn { background-color: #f5f8fa; background-image: linear-gradient( 180deg , hsla(0, 0%, 100%, 0.8), hsla(0, 0%, 100%, 0)); } 新建 文本文档 1.基础配置除非硬盘够自信,不要分var分区创建4h-12g-100G的controller节点修改外部网卡ONBOOT="yes"BOOTPROTO="static"IPADDR="172.16.1.198"NETMASK="255.255.255.0"GATEWAY="172.16.1.1"DNS1="114.114.114.114"修改仅主机网卡xxxxxxxxxxONBOOT="yes"BOOTPROTO="static"IPADDR=10.10.42.198NETMASK=255.255.255.0DNS1=114.114.114.114修改主机名xxxxxxxxxxhostnamectl set-hostname controllerbash关闭控制节点的防火墙,设置开机不启动xxxxxxxxxxsystemctl stop firewalld && systemctl disable firewalld设置SELinux为Permissive 模式xxxxxxxxxxsed -i 's/enforcing/disabled/g' /etc/selinux/configsetenforce 0getenforce写入/etc/hostsxxxxxxxxxxecho 172.16.1.198 controller >> /etc/hostsecho 10.10.42.198 controller >> /etc/hostsecho 172.16.1.199 compute >> /etc/hostsecho 10.10.42.199 compute >> /etc/hostscat /etc/hosts配置yum源xmv /etc/yum.repos.d/* /var [root@controller ~]# cat > /etc/yum.repos.d/http.repo << EOF> [centos]> name=centos> baseurl=ftp://172.16.1.252/centos/> gpgcheck=0> enable=1> > [iaas]> name=iaas> baseurl=ftp://172.16.1.252/iaas/iaas-repo/> gpgcheck=0> enable=1> > EOF cat /etc/yum.repos.d/http.repo yum clean all && yum repolist && yum list配置时间同步xxxxxxxxxxyum install -y chronyvi /etc/chrony.conf #server 0.centos.pool.ntp.org iburst#server 1.centos.pool.ntp.org iburst#server 2.centos.pool.ntp.org iburst#server 3.centos.pool.ntp.org iburstserver controller iburstallow 10.10.42.0/24local stratum 10 systemctl restart chronyd && systemctl enable chronyd创建4h-8g-100g-50g的compute节点修改外部网卡xxxxxxxxxxONBOOT="yes"BOOTPROTO="static"IPADDR="172.16.1.199"NETMASK="255.255.255.0"GATEWAY="172.16.1.1"DNS1="114.114.114.114"修改仅主机网卡xxxxxxxxxxONBOOT="yes"BOOTPROTO="static"IPADDR=10.10.42.199NETMASK=255.255.255.0DNS1=114.114.114.114修改主机名xxxxxxxxxxhostnamectl set-hostname computebash关闭控制节点的防火墙,设置开机不启动xxxxxxxxxxsystemctl stop firewalld && systemctl disable firewalld设置SELinux为Permissive 模式xxxxxxxxxxsed -i 's/enforcing/disabled/g' /etc/selinux/configsetenforce 0getenforce写入/etc/hostsxxxxxxxxxxecho 172.16.1.198 controller >> /etc/hostsecho 10.10.42.198 controller >> /etc/hostsecho 172.16.1.199 compute >> /etc/hostsecho 10.10.42.199 compute >> /etc/hostscat /etc/hosts配置yum源xxxxxxxxxxmv /etc/yum.repos.d/* /var [root@compute ~]# cat > /etc/yum.repos.d/http.repo << EOF> [centos]> name=centos> baseurl=ftp://172.16.1.252/centos/> gpgcheck=0> enable=1> > [iaas]> name=iaas> baseurl=ftp://172.16.1.252/iaas/iaas-repo/> gpgcheck=0> enable=1> > EOF cat /etc/yum.repos.d/http.repo yum clean all && yum repolist && yum list配置时间同步xxxxxxxxxxyum install -y chronyvi /etc/chrony.conf #server 0.centos.pool.ntp.org iburst#server 1.centos.pool.ntp.org iburst#server 2.centos.pool.ntp.org iburst#server 3.centos.pool.ntp.org iburstserver controller iburst systemctl restart chronyd && systemctl enable chronyd利用空白硬盘分区xxxxxxxxxx[root@compute ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTfd0 2:0 1 4K 0 disk sr0 11:0 1 4.2G 0 rom vda 252:0 0 100G 0 disk ├─vda1 252:1 0 1G 0 part /boot└─vda2 252:2 0 99G 0 part ├─centos-root 253:0 0 92G 0 lvm / ├─centos-swap 253:1 0 1G 0 lvm [SWAP] ├─centos-var 253:2 0 5G 0 lvm /var └─centos-home 253:3 0 1G 0 lvm /homevdb 252:16 0 50G 0 disk [root@compute ~]# parted /dev/vdbGNU Parted 3.1Using /dev/vdbWelcome to GNU Parted! Type 'help' to view a list of commands.(parted) mklabel gptWarning: The existing disk label on /dev/vdb will be destroyed and all data on this disk will be lost.Do you want to continue?Yes/No? yes (parted) mkpart swiftFile system type? [ext2]? Start? 0 End? 20Gib Warning: The resulting partition is not properly aligned for best performance.Ignore/Cancel? I (parted) mkpart swift1File system type? [ext2]? Start? 20Gib End? 40Gib (parted) q Information: You may need to update /etc/fstab. [root@compute ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTfd0 2:0 1 4K 0 disk sr0 11:0 1 4.2G 0 rom vda 252:0 0 100G 0 disk ├─vda1 252:1 0 1G 0 part /boot└─vda2 252:2 0 99G 0 part ├─centos-root 253:0 0 92G 0 lvm / ├─centos-swap 253:1 0 1G 0 lvm [SWAP] ├─centos-var 253:2 0 5G 0 lvm /var └─centos-home 253:3 0 1G 0 lvm /homevdb 252:16 0 50G 0 disk ├─vdb1 252:17 0 20G 0 part └─vdb2 252:18 0 20G 0 part mkfs.xfs /dev/vdb1mkfs.xfs /dev/vdb2 2.openstack搭建1.安装iaas软件包并且修改配置xxxxxxxxxxyum install -y iaas-xiandianvim /etc/xiandian/openrc.shcontrollerxxxxxxxxxx[root@controller ~]# cat /etc/xiandian/openrc.sh | egrep -v '(^#|^$)'HOST_IP=172.16.1.198HOST_PASS=000000HOST_NAME=controllerHOST_IP_NODE=172.16.1.199HOST_PASS_NODE=000000HOST_NAME_NODE=computenetwork_segment_IP=172.16.1.0/24RABBIT_USER=openstackRABBIT_PASS=000000DB_PASS=000000DOMAIN_NAME=demoADMIN_PASS=000000DEMO_PASS=000000KEYSTONE_DBPASS=000000GLANCE_DBPASS=000000GLANCE_PASS=000000NOVA_DBPASS=000000NOVA_PASS=000000NEUTRON_DBPASS=000000NEUTRON_PASS=000000METADATA_SECRET=000000INTERFACE_IP=172.16.1.198INTERFACE_NAME=eth0Physical_NAME=providerminvlan=101maxvlan=200CINDER_DBPASS=000000CINDER_PASS=000000BLOCK_DISK=vdb1SWIFT_PASS=000000OBJECT_DISK=vdb2STORAGE_LOCAL_NET_IP=172.16.1.199HEAT_DBPASS=000000HEAT_PASS=000000ZUN_DBPASS=000000ZUN_PASS=000000KURYR_DBPASS=000000KURYR_PASS=000000CEILOMETER_DBPASS=000000CEILOMETER_PASS=000000AODH_DBPASS=000000AODH_PASS=000000BARBICAN_DBPASS=000000BARBICAN_PASS=000000computexxxxxxxxxx[root@compute ~]# cat /etc/xiandian/openrc.sh | egrep -v '(^#|^$)'HOST_IP=172.16.1.198HOST_PASS=000000HOST_NAME=controllerHOST_IP_NODE=172.16.1.199HOST_PASS_NODE=000000HOST_NAME_NODE=computenetwork_segment_IP=172.16.1.0/24RABBIT_USER=openstackRABBIT_PASS=000000DB_PASS=000000DOMAIN_NAME=demoADMIN_PASS=000000DEMO_PASS=000000KEYSTONE_DBPASS=000000GLANCE_DBPASS=000000GLANCE_PASS=000000NOVA_DBPASS=000000NOVA_PASS=000000NEUTRON_DBPASS=000000NEUTRON_PASS=000000METADATA_SECRET=000000INTERFACE_IP=172.16.1.199INTERFACE_NAME=eth0Physical_NAME=providerminvlan=101maxvlan=200CINDER_DBPASS=000000CINDER_PASS=000000BLOCK_DISK=vdb1SWIFT_PASS=000000OBJECT_DISK=vdb2STORAGE_LOCAL_NET_IP=172.16.1.199HEAT_DBPASS=000000HEAT_PASS=000000ZUN_DBPASS=000000ZUN_PASS=000000KURYR_DBPASS=000000KURYR_PASS=000000CEILOMETER_DBPASS=000000CEILOMETER_PASS=000000AODH_DBPASS=000000AODH_PASS=000000BARBICAN_DBPASS=000000BARBICAN_PASS=000000controller和compute安装重启xxxxxxxxxxiaas-pre-host.shreboot2.数据库安装controller在运行脚本xxxxxxxxxxiaas-install-mysql.sh 安装完后登入数据库中创建chinaskilldb库,在chinaskilldb库中创建表testable (id int not null primary key,Teamname varchar(50), remarks varchar(255)),在表中插入记录(1,“cloud”,“chinaskill”)xxxxxxxxxxmysql -u root -p000000 create database chinaskilldb; use chinaskilldb;create table testable (id int not null primary key,Teamname varchar(50), remarks varchar(255)); insert into testable values(1,'cloud','chinaskill')将memcached的缓存大小从64Mib改成256Mibxxxxxxxxxxsed -i 's/64/256/g' /etc/sysconfig/memcached 使用rabbitmq命令 创建用户,并设置Administrators限权xxxxxxxxxxrabbitmqctl set_permissions chinaskill ".*" ".*" ".*"3.安装keystonecontroller运行脚本安装xxxxxxxxxxiaas-install-keystone.sh 创建一个用户xxxxxxxxxx[root@controller images]# source /etc/keystone/admin-openrc.sh[root@controller images]# openstack user create --domain demo --password 000000 china+---------------------+----------------------------------+| Field | Value |+---------------------+----------------------------------+| domain_id | 226029b5aac74ce795fca3dd48e8e10c || enabled | True || id | e2db6597ae2c463185035d4a4bb2ab29 || name | china || options | {} || password_expires_at | None |+---------------------+----------------------------------+4.安装Glancecontroller运行脚本安装xxxxxxxxxxiaas-install-glance.sh上传镜像xxxxxxxxxx[root@controller images]# source /etc/keystone/admin-openrc.sh [root@controller images]# glance image-create --name cirros --disk-format qcow2 --container bare --progress < CentOS_7.5_x86_64_XD.qcow2 [=============================>] 100%+------------------+--------------------------------------+| Property | Value |+------------------+--------------------------------------+| checksum | 3d3e9c954351a4b6953fd156f0c29f5c || container_format | bare || created_at | 2021-11-29T11:02:21Z || disk_format | qcow2 || id | 2c659c04-6463-4888-ab21-e052f48d90e1 || min_disk | 0 || min_ram | 0 || name | cirros || owner | 6ed09fd59a174bb5a019b2434b7b3fc2 || protected | False || size | 510459904 || status | active || tags | [] || updated_at | 2021-11-29T11:02:23Z || virtual_size | None || visibility | shared |+------------------+--------------------------------------+ 5.安装novacontrollerxxxxxxxxxxiaas-install-nova-controller.shcomputexxxxxxxxxxiaas-install-nova-compute.sh创建一个实例xxxxxxxxxx[root@controller ~]# source /etc/keystone/admin-openrc.sh [root@controller ~]# openstack flavor create --id 1 --disk 20 --ram 1024 test+----------------------------+-------+| Field | Value |+----------------------------+-------+| OS-FLV-DISABLED:disabled | False || OS-FLV-EXT-DATA:ephemeral | 0 || disk | 20 || id | 1 || name | test || os-flavor-access:is_public | True || properties | || ram | 1024 || rxtx_factor | 1.0 || swap | || vcpus | 1 |+----------------------------+-------+ 6.安装Neutroncontrollerxxxxxxxxxxiaas-install-neutron-controller.shcomputexxxxxxxxxxiaas-install-neutron-compute.sh创建云主机网络extnet,子网extsubnet,虚拟机网段为192.168.y.0/24(其中y是vlan号), 网关为192.168.y.1。xxxxxxxxxx[root@controller ~]# source /etc/keystone/admin-openrc.sh [root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type vlan extnet+---------------------------+--------------------------------------+| Field | Value |+---------------------------+--------------------------------------+| admin_state_up | UP || availability_zone_hints | || availability_zones | || created_at | 2021-11-29T11:17:41Z || description | || dns_domain | None || id | 97806bba-23b3-420f-ad98-05e800bd3e54 || ipv4_address_scope | None || ipv6_address_scope | None || is_default | False || is_vlan_transparent | None || mtu | 1500 || name | extnet || port_security_enabled | True || project_id | 6ed09fd59a174bb5a019b2434b7b3fc2 || provider:network_type | vlan || provider:physical_network | provider || provider:segmentation_id | 116 || qos_policy_id | None || revision_number | 5 || router:external | External || segments | None || shared | True || status | ACTIVE || subnets | || tags | || updated_at | 2021-11-29T11:17:41Z |+---------------------------+--------------------------------------+[root@controller ~]# openstack subnet create --network extnet --gateway=10.10.42.1 --subnet-range 10.10.42.0/24 extsubnet+-------------------+--------------------------------------+| Field | Value |+-------------------+--------------------------------------+| allocation_pools | 10.10.42.2-10.10.42.254 || cidr | 10.10.42.0/24 || created_at | 2021-11-29T11:20:18Z || description | || dns_nameservers | || enable_dhcp | True || gateway_ip | 10.10.42.1 || host_routes | || id | 03e202b2-f35b-4db5-aac8-06e27107ff65 || ip_version | 4 || ipv6_address_mode | None || ipv6_ra_mode | None || name | extsubnet || network_id | 97806bba-23b3-420f-ad98-05e800bd3e54 || project_id | 6ed09fd59a174bb5a019b2434b7b3fc2 || revision_number | 0 || segment_id | None || service_types | || subnetpool_id | None || tags | || updated_at | 2021-11-29T11:20:18Z |+-------------------+--------------------------------------+7.安装Dashboardcontrollerxxxxxxxxxxiaas-install-dashboard.sh打开网页访问http://controller/dashboard/Domain:demo用户名:admin密码:000000创建云主机xxxxxxxxxx[root@controller ~]# source /etc/keystone/admin-openrc.sh [root@controller ~]# openstack image create --disk-format qcow2 --file /opt/images/CentOS_7.5_x86_64_XD.qcow2 centos7+------------------+------------------------------------------------------+| Field | Value |+------------------+------------------------------------------------------+| checksum | 3d3e9c954351a4b6953fd156f0c29f5c || container_format | bare || created_at | 2021-11-29T11:41:29Z || disk_format | qcow2 || file | /v2/images/89e6e0b5-b4a3-4307-ad39-2a3db3a486c7/file || id | 89e6e0b5-b4a3-4307-ad39-2a3db3a486c7 || min_disk | 0 || min_ram | 0 || name | centos7 || owner | 6ed09fd59a174bb5a019b2434b7b3fc2 || protected | False || schema | /v2/schemas/image || size | 510459904 || status | active || tags | || updated_at | 2021-11-29T11:41:31Z || virtual_size | None || visibility | shared |+------------------+------------------------------------------------------+[root@controller ~]# openstack flavor create --disk 20 --ram 1024 --vcpus 1 centos+----------------------------+--------------------------------------+| Field | Value |+----------------------------+--------------------------------------+| OS-FLV-DISABLED:disabled | False || OS-FLV-EXT-DATA:ephemeral | 0 || disk | 20 || id | 4d8699e5-69ce-4469-803c-caf1d6331d96 || name | centos || os-flavor-access:is_public | True || properties | || ram | 1024 || rxtx_factor | 1.0 || swap | || vcpus | 1 |+----------------------------+--------------------------------------+ 8.安装Cindercontrollerxxxxxxxxxxiaas-install-cinder-controller.shcomputexxxxxxxxxxiaas-install-cinder-compute.sh创建一个新的卷xxxxxxxxxx[root@controller nova]# source /etc/keystone/admin-openrc.sh [root@controller nova]# cinder create --display-name myVolume 1+--------------------------------+--------------------------------------+| Property | Value |+--------------------------------+--------------------------------------+| attachments | [] || availability_zone | nova || bootable | false || consistencygroup_id | None || created_at | 2021-11-30T06:34:47.000000 || description | None || encrypted | False || id | f9f53265-6359-4cd6-91c2-83fb3f37c3ac || metadata | {} || migration_status | None || multiattach | False || name | myVolume || os-vol-host-attr:host | None || os-vol-mig-status-attr:migstat | None || os-vol-mig-status-attr:name_id | None || os-vol-tenant-attr:tenant_id | db2a714c481643e5ad18a30967c243aa || replication_status | None || size | 1 || snapshot_id | None || source_volid | None || status | creating || updated_at | None || user_id | 6f8df1b85e2140d58fc80693720f6e95 || volume_type | None |+--------------------------------+--------------------------------------+[root@controller nova]# cinder list+--------------------------------------+-----------+----------+------+-------------+----------+-------------+| ID | Status | Name | Size | Volume Type | Bootable | Attached to |+--------------------------------------+-----------+----------+------+-------------+----------+-------------+| f9f53265-6359-4cd6-91c2-83fb3f37c3ac | available | myVolume | 1 | - | false | |+--------------------------------------+-----------+----------+------+-------------+----------+-------------+ 9.安装swiftcontrollerxxxxxxxxxxiaas-install-swift-controller.shcomputexxxxxxxxxxiaas-install-swift-compute.sh 10.安装Heat编配服务controllerxxxxxxxxxxiaas-install-heat.sh 11.安装Zun服务controllerxxxxxxxxxxiaas-install-zun-controller.shcomputexxxxxxxxxxiaas-install-zun-compute.sh上传docker镜像CentOS7_1804.tar到glance,并且通过docker镜像启动容器xxxxxxxxxx[root@controller images]# lsCentOS_6.5_x86_64_XD.qcow2 CentOS7_1804.tar CentOS_7.2_x86_64_XD.qcow2 CentOS_7.5_x86_64_XD.qcow2[root@controller images]# source /etc/keystone/admin-openrc.sh[root@controller images]# openstack image create --file ./CentOS7_1804.tar --disk-format raw --public --container-format docker "centos_docker"+------------------+------------------------------------------------------+| Field | Value |+------------------+------------------------------------------------------+| checksum | 438e76cdb677a3ab1156e284f58aa366 || container_format | docker || created_at | 2021-11-30T07:02:14Z || disk_format | raw || file | /v2/images/522259d3-20de-4f58-87ec-1422c87e6fe6/file || id | 522259d3-20de-4f58-87ec-1422c87e6fe6 || min_disk | 0 || min_ram | 0 || name | centos_docker || owner | db2a714c481643e5ad18a30967c243aa || protected | False || schema | /v2/schemas/image || size | 381696512 || status | active || tags | || updated_at | 2021-11-30T07:02:16Z || virtual_size | None || visibility | public |+------------------+------------------------------------------------------+[root@controller images]# zun run --image-driver glance centos_docker[root@controller images]# zun list+--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+| uuid | name | image | status | task_state | addresses | ports |+--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+| ed1334ce-448b-4645-9d27-05e24259c171 | sigma-23-container | centos_docker | Running | None | 192.168.100.22 | [22] |+--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+ 12.安装Ceilometer 监控服务controllerxxxxxxxxxxiaas-install-ceilometer-controller.shcomputexxxxxxxxxxiaas-install-ceilometer-compute.sh 13.安装Aodh监控服务controllerxxxxxxxxxxiaas-install-aodh.sh 14.添加控制节点资源到云平台controller修改openrc.shxxxxxxxxxx旧HOST_IP_NODE=172.16.1.199HOST_NAME_NODE=compute 新HOST_IP_NODE=172.16.1.198HOST_NAME_NODE=controllerxxxxxxxxxxiaas-install-nova-compute.sh执行过程中需要确认登录controller节点和输入controller节点root用户密码
2023年02月09日
144 阅读
0 评论
0 点赞
2023-02-09
openstack api 操作 flavor
通过 openstack api 的方式操作 flavor获取flavor列表import json, requests url = "http://172.16.1.151:5000/v3/auth/tokens" body = { "auth": { "identity": { "methods": [ "password" ], "password": { "user": { "domain": { "name": "demo" }, "name": "admin", "password": "000000" } } }, "scope": { "project": { "domain": { "name": "demo" }, "name": "admin" } } } } headers = { "Content-Type": "application/json", } token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token'] flavors = requests.get('http://controller:8774/v2.1/flavors', headers={'X-Auth-Token': token}) json_flavors = json.loads(flavors.text) for i in json_flavors['flavors']: print(i) 创建新的flavorimport json import requests url = "http://172.16.1.151:5000/v3/auth/tokens" body = { "auth": { "identity": { "methods": [ "password" ], "password": { "user": { "domain": { "name": "demo" }, "name": "admin", "password": "000000" } } }, "scope": { "project": { "domain": { "name": "demo" }, "name": "admin" } } } } headers = { "Content-Type": "application/json", } token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token'] # 创建云主机类型 def create_flavor(id, vcpus, ram, disk, name): headers = { "Content-Type": "application/json", "X-Auth-Token": token } body = { "flavor": { "name": name, "ram": ram, "vcpus": vcpus, "disk": disk, "id": id } } return requests.post('http://172.16.1.151:8774/v2.1/flavors', data=json.dumps(body), headers=headers) flavor = create_flavor(id=100, vcpus=1, ram=1024, disk=10, name='api-create') print(flavor.text)删除flavorimport requests,json url = r'http://172.16.1.151:5000/v3/auth/tokens' headers = { "context-type": "application/json" } body = { "auth": { "identity": { "methods": [ "password" ], "password": { "user": { "domain": { "name": "demo" }, "name": "admin", "password": "000000" } } }, "scope": { "project": { "domain": { "name": "demo" }, "name": "admin" } } } } token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token'] headers = { "context-type": "application/json", "x-auth-token": token } #删除云主机类型 flavor = requests.delete('http://172.16.1.151:8774/v2.1/flavors/' + '100',headers=headers) print(flavor.text)
2023年02月09日
163 阅读
0 评论
0 点赞
2023-02-09
openstack api 获取token
通过openstack api 的方式获取身份令牌tokenimport time, requests, json headers = { "Context-Type": "json" } url = r'http://172.16.1.121:5000/v3/auth/tokens' body = { "auth": { "identity": { "methods": [ "password" ], "password": { "user": { "domain": { "name": "demo" }, "name": "admin", "password": "000000" } } }, "scope": { "project": { "domain": { "name": "demo" }, "name": "admin" } } } } token = requests.post(url=url, data=json.dumps(body), headers=headers) print(token.headers)
2023年02月09日
171 阅读
0 评论
0 点赞