OpenStack exercise study notes

  • On a self-built OpenStack platform, use the command line to create a flavor named Fmin, with ID 1, 1024 MB of RAM, a 10 GB disk, and 1 vCPU

    [root@controller ~]# openstack flavor create --vcpus 1 --disk 10 --ram 1024 --id 1 Fmin
    +----------------------------+-------+
    | Field                      | Value |
    +----------------------------+-------+
    | OS-FLV-DISABLED:disabled   | False |
    | OS-FLV-EXT-DATA:ephemeral  | 0     |
    | disk                       | 10    |
    | id                         | 1     |
    | name                       | Fmin  |
    | os-flavor-access:is_public | True  |
    | properties                 |       |
    | ram                        | 1024  |
    | rxtx_factor                | 1.0   |
    | swap                       |       |
    | vcpus                      | 1     |
    +----------------------------+-------+
  • On a self-built OpenStack platform, create a cloud host network extnet with subnet extsubnet, VM address range 192.168.100.0/24, gateway 192.168.100.1, segment ID 100 (the default), using VLAN mode.

    [root@controller ~]# openstack network create extnet --external --provider-network-type vlan --provider-physical-network provider --provider-segment 100
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | UP                                   |
    | availability_zone_hints   |                                      |
    | availability_zones        |                                      |
    | created_at                | 2021-12-14T07:02:14Z                 |
    | description               |                                      |
    | dns_domain                | None                                 |
    | id                        | 4a8a40a5-e628-4149-b2c3-6b7edfcd96a2 |
    | ipv4_address_scope        | None                                 |
    | ipv6_address_scope        | None                                 |
    | is_default                | False                                |
    | is_vlan_transparent       | None                                 |
    | mtu                       | 1500                                 |
    | name                      | extnet                               |
    | port_security_enabled     | True                                 |
    | project_id                | d33ead0cc8224ee9ad0d3b65f56c0ba5     |
    | provider:network_type     | vlan                                 |
    | provider:physical_network | provider                             |
    | provider:segmentation_id  | 100                                  |
    | qos_policy_id             | None                                 |
    | revision_number           | 5                                    |
    | router:external           | External                             |
    | segments                  | None                                 |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tags                      |                                      |
    | updated_at                | 2021-12-14T07:02:14Z                 |
    +---------------------------+--------------------------------------+
    [root@controller ~]# openstack subnet create --network extnet --gateway 192.168.100.1 --dhcp --subnet-range 192.168.100.0/24 extsubnet
    +-------------------+--------------------------------------+
    | Field             | Value                                |
    +-------------------+--------------------------------------+
    | allocation_pools  | 192.168.100.2-192.168.100.254        |
    | cidr              | 192.168.100.0/24                     |
    | created_at        | 2021-12-14T07:04:00Z                 |
    | description       |                                      |
    | dns_nameservers   |                                      |
    | enable_dhcp       | True                                 |
    | gateway_ip        | 192.168.100.1                        |
    | host_routes       |                                      |
    | id                | e51d4693-cd0f-45f0-83cf-2176c4fa850a |
    | ip_version        | 4                                    |
    | ipv6_address_mode | None                                 |
    | ipv6_ra_mode      | None                                 |
    | name              | extsubnet                            |
    | network_id        | 4a8a40a5-e628-4149-b2c3-6b7edfcd96a2 |
    | project_id        | d33ead0cc8224ee9ad0d3b65f56c0ba5     |
    | revision_number   | 0                                    |
    | segment_id        | None                                 |
    | service_types     |                                      |
    | subnetpool_id     | None                                 |
    | tags              |                                      |
    | updated_at        | 2021-12-14T07:04:00Z                 |
    +-------------------+--------------------------------------+
  • On a self-built OpenStack platform, create and boot a virtual machine VM1 based on the "cirros" image, a 1 vCPU / 1 GB / 10 GB flavor, and the extsubnet network

    [root@controller images]# ls
    CentOS_6.5_x86_64_XD.qcow2  CentOS7_1804.tar  CentOS_7.2_x86_64_XD.qcow2  CentOS_7.5_x86_64.qcow2  CentOS_7.5_x86_64_XD.qcow2  cirros-0.3.4-x86_64-disk.img
    [root@controller images]# openstack image create --file cirros-0.3.4-x86_64-disk.img --disk-format raw cirros
    +------------------+------------------------------------------------------+
    | Field            | Value                                                |
    +------------------+------------------------------------------------------+
    | checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
    | container_format | bare                                                 |
    | created_at       | 2021-12-14T07:09:00Z                                 |
    | disk_format      | raw                                                  |
    | file             | /v2/images/42b3009d-79a8-4b13-a35a-875479900b40/file |
    | id               | 42b3009d-79a8-4b13-a35a-875479900b40                 |
    | min_disk         | 0                                                    |
    | min_ram          | 0                                                    |
    | name             | cirros                                               |
    | owner            | d33ead0cc8224ee9ad0d3b65f56c0ba5                     |
    | protected        | False                                                |
    | schema           | /v2/schemas/image                                    |
    | size             | 13287936                                             |
    | status           | active                                               |
    | tags             |                                                      |
    | updated_at       | 2021-12-14T07:09:00Z                                 |
    | virtual_size     | None                                                 |
    | visibility       | shared                                               |
    +------------------+------------------------------------------------------+
    [root@controller images]# openstack server create --image cirros --flavor Fmin --network extnet VM1
    +-------------------------------------+-----------------------------------------------+
    | Field                               | Value                                         |
    +-------------------------------------+-----------------------------------------------+
    | OS-DCF:diskConfig                   | MANUAL                                        |
    | OS-EXT-AZ:availability_zone         |                                               |
    | OS-EXT-SRV-ATTR:host                | None                                          |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
    | OS-EXT-SRV-ATTR:instance_name       |                                               |
    | OS-EXT-STS:power_state              | NOSTATE                                       |
    | OS-EXT-STS:task_state               | scheduling                                    |
    | OS-EXT-STS:vm_state                 | building                                      |
    | OS-SRV-USG:launched_at              | None                                          |
    | OS-SRV-USG:terminated_at            | None                                          |
    | accessIPv4                          |                                               |
    | accessIPv6                          |                                               |
    | addresses                           |                                               |
    | adminPass                           | 7uEFyyfowamF                                  |
    | config_drive                        |                                               |
    | created                             | 2021-12-14T07:12:04Z                          |
    | flavor                              | Fmin (1)                                      |
    | hostId                              |                                               |
    | id                                  | b4fb14e5-a55a-43ed-894e-87ce8d8fd250          |
    | image                               | cirros (42b3009d-79a8-4b13-a35a-875479900b40) |
    | key_name                            | None                                          |
    | name                                | VM1                                           |
    | progress                            | 0                                             |
    | project_id                          | d33ead0cc8224ee9ad0d3b65f56c0ba5              |
    | properties                          |                                               |
    | security_groups                     | name='default'                                |
    | status                              | BUILD                                         |
    | updated                             | 2021-12-14T07:12:04Z                          |
    | user_id                             | feed7e464fb446188de23b147498ebcf              |
    | volumes_attached                    |                                               |
    +-------------------------------------+-----------------------------------------------+
  • On the openstack private cloud platform, use the command line to create an image named centos7.5-docker from the CentOS7_1804.tar docker image, and start a container from that docker image.

    [root@controller images]# ls | grep CentOS7_1804.tar
    CentOS7_1804.tar
    [root@controller images]# openstack image create --file CentOS7_1804.tar --disk-format raw --container-format docker centos7.5-docker
    +------------------+------------------------------------------------------+
    | Field            | Value                                                |
    +------------------+------------------------------------------------------+
    | checksum         | 438e76cdb677a3ab1156e284f58aa366                     |
    | container_format | docker                                               |
    | created_at       | 2021-12-02T03:17:05Z                                 |
    | disk_format      | raw                                                  |
    | file             | /v2/images/c776ae1f-90b9-4a7f-b3fa-67f2cf2b5b00/file |
    | id               | c776ae1f-90b9-4a7f-b3fa-67f2cf2b5b00                 |
    | min_disk         | 0                                                    |
    | min_ram          | 0                                                    |
    | name             | centos7.5-docker                                     |
    | owner            | db2a714c481643e5ad18a30967c243aa                     |
    | protected        | False                                                |
    | schema           | /v2/schemas/image                                    |
    | size             | 381696512                                            |
    | status           | active                                               |
    | tags             |                                                      |
    | updated_at       | 2021-12-02T03:17:06Z                                 |
    | virtual_size     | None                                                 |
    | visibility       | shared                                               |
    +------------------+------------------------------------------------------+
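    #(the image list and zun commands below are from a run where the image was named centos_docker; with the image created above, substitute centos7.5-docker)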
    [root@controller images]# openstack image list
    +--------------------------------------+------------------+--------+
    | ID                                   | Name             | Status |
    +--------------------------------------+------------------+--------+
    | 522259d3-20de-4f58-87ec-1422c87e6fe6 | centos_docker    | active |
    +--------------------------------------+------------------+--------+
    [root@controller images]# zun run --image-driver glance centos_docker
    [root@controller images]# zun list
    +--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+
    | uuid                                 | name               | image         | status  | task_state | addresses      | ports |
    +--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+
    | ed1334ce-448b-4645-9d27-05e24259c171 | sigma-23-container | centos_docker | Running | None       | 192.168.100.22 | [22]  |
    +--------------------------------------+--------------------+---------------+---------+------------+----------------+-------+
  • On a self-built OpenStack platform, save cloud host VM1 as a qcow2-format snapshot named csccvm.qcow2 in the /root/cloudsave directory on the controller node

    [root@controller ~]# openstack server list
    +--------------------------------------+------+--------+----------------------+--------+--------+
    | ID                                   | Name | Status | Networks             | Image  | Flavor |
    +--------------------------------------+------+--------+----------------------+--------+--------+
    | b4fb14e5-a55a-43ed-894e-87ce8d8fd250 | VM1  | ACTIVE | extnet=192.168.100.4 | cirros | Fmin   |
    +--------------------------------------+------+--------+----------------------+--------+--------+
    [root@controller ~]# openstack server stop VM1
    [root@controller ~]# openstack server image create --name csccvm.qcow2 VM1
    [root@controller ~]# openstack image list
    +--------------------------------------+--------------+--------+
    | ID                                   | Name         | Status |
    +--------------------------------------+--------------+--------+
    | aca7ee52-51b6-4f09-b6ab-993eba815149 | Gmirror1     | active |
    | 42b3009d-79a8-4b13-a35a-875479900b40 | cirros       | active |
    | 06e10537-4af8-49fa-bda0-6635012bdeb2 | csccvm.qcow2 | active |
    +--------------------------------------+--------------+--------+
    [root@controller ~]# mkdir /root/cloudsave
    [root@controller ~]# openstack image save --file /root/cloudsave/csccvm.qcow2 csccvm.qcow2
    [root@controller ~]# ls /root/cloudsave/
    csccvm.qcow2
  • On a self-built OpenStack platform, use the cinder service to create a volume type named "lvm", create one 1 GB volume of type lvm, and attach it to virtual machine VM1

    [root@controller ~]# openstack volume type create lvm
    +-------------+--------------------------------------+
    | Field       | Value                                |
    +-------------+--------------------------------------+
    | description | None                                 |
    | id          | 97504eed-9fd5-4fc0-bd2c-5e2101c320c2 |
    | is_public   | True                                 |
    | name        | lvm                                  |
    +-------------+--------------------------------------+
    [root@controller ~]# openstack volume create --type lvm --size 1 lvm
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | attachments         | []                                   |
    | availability_zone   | nova                                 |
    | bootable            | false                                |
    | consistencygroup_id | None                                 |
    | created_at          | 2021-12-14T07:32:59.000000           |
    | description         | None                                 |
    | encrypted           | False                                |
    | id                  | 5dbfb3da-4799-4bf9-9d70-fce503f51e44 |
    | migration_status    | None                                 |
    | multiattach         | False                                |
    | name                | lvm                                  |
    | properties          |                                      |
    | replication_status  | None                                 |
    | size                | 1                                    |
    | snapshot_id         | None                                 |
    | source_volid        | None                                 |
    | status              | creating                             |
    | type                | lvm                                  |
    | updated_at          | None                                 |
    | user_id             | feed7e464fb446188de23b147498ebcf     |
    +---------------------+--------------------------------------+
    [root@controller ~]# openstack server add volume VM1 lvm
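    
    #Verify the attachment (a sketch; after a moment the volume should show as in-use on VM1):
    openstack volume list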
  • Log in to the provided private cloud platform and create a centos7.5 cloud host, using a flavor that has an attached disk. Connect to the host and, on the attached disk, create four 10 GB partitions; use the four partitions to build a RAID 5 array, with one partition serving as a hot spare

    [root@controller api]# openstack volume create --size 40 raid
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | attachments         | []                                   |
    | availability_zone   | nova                                 |
    | bootable            | false                                |
    | consistencygroup_id | None                                 |
    | created_at          | 2021-12-18T06:13:48.000000           |
    | description         | None                                 |
    | encrypted           | False                                |
    | id                  | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab |
    | migration_status    | None                                 |
    | multiattach         | False                                |
    | name                | raid                                 |
    | properties          |                                      |
    | replication_status  | None                                 |
    | size                | 40                                   |
    | snapshot_id         | None                                 |
    | source_volid        | None                                 |
    | status              | creating                             |
    | type                | None                                 |
    | updated_at          | None                                 |
    | user_id             | 32a5404a9ca14a09ba0f12ae34c7a079     |
    +---------------------+--------------------------------------+
    [root@controller api]# openstack volume list
    +--------------------------------------+------+-----------+------+-------------+
    | ID                                   | Name | Status    | Size | Attached to |
    +--------------------------------------+------+-----------+------+-------------+
    | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | available |   40 |             |
    +--------------------------------------+------+-----------+------+-------------+
    [root@controller api]# openstack server add volume chinaskill raid
    
    #Partitioning
    #Use parted to create four 10G partitions on the attached disk (see the sketch below)
    #Configure the CentOS yum repo
    #Install the mdadm tool
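    #A possible non-interactive parted sequence (a sketch; the GPT label and device name /dev/vdb are assumptions):
    parted -s /dev/vdb mklabel gpt
    parted -s /dev/vdb mkpart p1 0% 25%
    parted -s /dev/vdb mkpart p2 25% 50%
    parted -s /dev/vdb mkpart p3 50% 75%
    parted -s /dev/vdb mkpart p4 75% 100%
    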
    [root@chinaskill ~]# yum install -y mdadm
    [root@chinaskill dev]# lsblk 
    NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    vda    253:0    0  20G  0 disk 
    └─vda1 253:1    0  20G  0 part /
    vdb    253:16   0  40G  0 disk 
    ├─vdb1 253:17   0  10G  0 part 
    ├─vdb2 253:18   0  10G  0 part 
    ├─vdb3 253:19   0  10G  0 part 
    └─vdb4 253:20   0   9G  0 part 
    
    # -C create an array
    # -v verbose output
    # -l RAID level
    # -n number of active disks
    # -x number of hot spares
    [root@chinaskill dev]# mdadm -C -v /dev/md0 -l5 -n3 /dev/vdb[123] -x1 /dev/vdb4
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: chunk size defaults to 512K
    mdadm: size set to 9427968K
    mdadm: largest drive (/dev/vdb2) exceeds size (9427968K) by more than 1%
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    [root@chinaskill dev]# lsblk 
    NAME    MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
    vda     253:0    0  20G  0 disk  
    └─vda1  253:1    0  20G  0 part  /
    vdb     253:16   0  40G  0 disk  
    ├─vdb1  253:17   0  10G  0 part  
    │ └─md0   9:0    0  18G  0 raid5 
    ├─vdb2  253:18   0  10G  0 part  
    │ └─md0   9:0    0  18G  0 raid5 
    ├─vdb3  253:19   0  10G  0 part  
    │ └─md0   9:0    0  18G  0 raid5 
    └─vdb4  253:20   0   9G  0 part  
      └─md0   9:0    0  18G  0 raid5 
    [root@chinaskill dev]# mkfs.xfs /dev/md0 
    meta-data=/dev/md0               isize=512    agcount=16, agsize=294528 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0, sparse=0
    data     =                       bsize=4096   blocks=4712448, imaxpct=25
             =                       sunit=128    swidth=256 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal log           bsize=4096   blocks=2560, version=2
             =                       sectsz=512   sunit=8 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
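    
    #Optionally persist the array so it survives reboots (a sketch; /etc/mdadm.conf is the usual location on CentOS 7):
    mdadm --detail --scan >> /etc/mdadm.conf
    cat /proc/mdstat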
  • On a self-built OpenStack platform, perform a capacity expansion on cinder storage: expand the cinder storage space by 10 GB

    [root@controller api]# openstack volume list
    +--------------------------------------+------+-----------+------+-------------------------------------+
    | ID                                   | Name | Status    | Size | Attached to                         |
    +--------------------------------------+------+-----------+------+-------------------------------------+
    | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | in-use    |   40 | Attached to chinaskill on /dev/vdb  |
    | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available |    2 |                                     |
    +--------------------------------------+------+-----------+------+-------------------------------------+
    
    #Expanding an unused (available) volume
    [root@controller api]# openstack volume set --size 10 disk
    [root@controller api]# openstack volume list
    +--------------------------------------+------+-----------+------+-------------------------------------+
    | ID                                   | Name | Status    | Size | Attached to                         |
    +--------------------------------------+------+-----------+------+-------------------------------------+
    | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | in-use    |   40 | Attached to chinaskill on /dev/vdb  |
    | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available |   10 |                                     |
    +--------------------------------------+------+-----------+------+-------------------------------------+
    
    #Expanding an attached (in-use) volume: detach it first, resize, then re-attach
    [root@controller api]# openstack volume list
    +--------------------------------------+------+-----------+------+-------------------------------------+
    | ID                                   | Name | Status    | Size | Attached to                         |
    +--------------------------------------+------+-----------+------+-------------------------------------+
    | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | in-use    |   40 | Attached to chinaskill on /dev/vdb  |
    | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available |   10 |                                     |
    +--------------------------------------+------+-----------+------+-------------------------------------+
    [root@controller api]# openstack server remove volume chinaskill raid
    [root@controller api]# openstack volume list
    +--------------------------------------+------+-----------+------+-------------+
    | ID                                   | Name | Status    | Size | Attached to |
    +--------------------------------------+------+-----------+------+-------------+
    | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | available |   40 |             |
    | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available |   10 |             |
    +--------------------------------------+------+-----------+------+-------------+
    [root@controller api]# openstack volume set --size 45 raid
    [root@controller api]# openstack volume list
    +--------------------------------------+------+-----------+------+-------------+
    | ID                                   | Name | Status    | Size | Attached to |
    +--------------------------------------+------+-----------+------+-------------+
    | fc2b4a57-5e2d-4c8f-bc04-857ad92a54ab | raid | available |   45 |             |
    | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available |   10 |             |
    +--------------------------------------+------+-----------+------+-------------+
    [root@controller api]# openstack server add volume chinaskill raid
  • Using the provided cloud security framework components, harden the self-built OpenStack cloud platform's security policy from http to https.

    [root@controller /]# yum list | grep mod
    mod_wsgi.x86_64                            3.4-18.el7                  @iaas
    mod_ssl.x86_64                             1:2.4.6-89.el7.centos       iaas
    [root@controller /]# yum install -y httpd mod_ssl mod_wsgi
    [root@controller ~]# vim /etc/httpd/conf.d/ssl.conf
    
    #########before#########
    SSLProtocol all -SSLv2 -SSLv3
    #######################
    
    #########after#########
    SSLProtocol all -SSLv2
    #######################
    
    [root@controller ~]# vim /etc/openstack-dashboard/local_settings
    #########add the following#########
    USE_SSL=True #with this line added, the security policy is switched from http to https
    
    CSRF_COOKIE_SECURE = True             #uncomment this line (optional)
    SESSION_COOKIE_SECURE = True          #uncomment this line (optional)
    SESSION_COOKIE_HTTPONLY = True        #add this line (optional)
    ########################
    
    [root@controller ~]# systemctl restart httpd
    [root@controller ~]# systemctl restart memcached
  • On a self-built OpenStack platform, use glance commands to upload an image: source file CentOS_7.5_x86_64.qcow2, name Gmirror1, min_ram 2048 MB, min_disk 20 GB.

    [root@controller images]# ls | grep CentOS_7.5_x86_64.qcow2
    CentOS_7.5_x86_64.qcow2
    [root@controller images]# source /etc/keystone/admin-openrc.sh
    [root@controller images]# openstack image create --min-ram 2048 --min-disk 20 --file CentOS_7.5_x86_64.qcow2 --disk-format qcow2 Gmirror1
    +------------------+------------------------------------------------------+
    | Field            | Value                                                |
    +------------------+------------------------------------------------------+
    | checksum         | 3d3e9c954351a4b6953fd156f0c29f5c                     |
    | container_format | bare                                                 |
    | created_at       | 2021-12-13T07:29:59Z                                 |
    | disk_format      | qcow2                                                |
    | file             | /v2/images/5afddf53-d0d1-476a-8aa7-d800656a19e7/file |
    | id               | 5afddf53-d0d1-476a-8aa7-d800656a19e7                 |
    | min_disk         | 20                                                   |
    | min_ram          | 2048                                                 |
    | name             | Gmirror1                                             |
    | owner            | f36eeb24e1304f90b65e189a2c3f42b5                     |
    | protected        | False                                                |
    | schema           | /v2/schemas/image                                    |
    | size             | 510459904                                            |
    | status           | active                                               |
    | tags             |                                                      |
    | updated_at       | 2021-12-13T07:30:00Z                                 |
    | virtual_size     | None                                                 |
    | visibility       | shared                                               |
    +------------------+------------------------------------------------------+
    [root@controller images]# openstack image list
    +--------------------------------------+----------+--------+
    | ID                                   | Name     | Status |
    +--------------------------------------+----------+--------+
    | 5afddf53-d0d1-476a-8aa7-d800656a19e7 | Gmirror1 | active |
    +--------------------------------------+----------+--------+
  • Using qemu-img commands, query the compat version of the Gmirror1 image, then change the image's compat version to 0.10 (this is done to stay compatible with some older cloud platforms).

    #glance stores images under /var/lib/glance/images/ by default
    [root@controller images]# cd /var/lib/glance/images/
    [root@controller images]# ls
    aca7ee52-51b6-4f09-b6ab-993eba815149
    [root@controller images]# qemu-img info aca7ee52-51b6-4f09-b6ab-993eba815149
    image: aca7ee52-51b6-4f09-b6ab-993eba815149
    file format: qcow2
    virtual size: 20G (21474836480 bytes)
    disk size: 487M
    cluster_size: 65536
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    
    #Find the amend sub-command via the help output
    [root@controller images]# qemu-img --help | grep amend
    amend [--object objectdef] [--image-opts] [-p] [-q] [-f fmt] [-t cache] -o options filename
    [root@controller images]# qemu-img amend aca7ee52-51b6-4f09-b6ab-993eba815149 -o compat=0.10
    [root@controller images]# qemu-img info aca7ee52-51b6-4f09-b6ab-993eba815149
    image: aca7ee52-51b6-4f09-b6ab-993eba815149
    file format: qcow2
    virtual size: 20G (21474836480 bytes)
    disk size: 487M
    cluster_size: 65536
    Format specific information:
        compat: 0.10
        refcount bits: 16
  • On a self-built OpenStack platform, tune the platform by modifying the relevant parameters. The corresponding tuning operations are:

    # nova tuning config file: /etc/nova/nova.conf
    [DEFAULT]
    vcpu_pin_set = 4-12,^8,15
    #Recommended: reserve the first few physical CPUs and give all the remaining CPUs to the VMs; with this value, all instances may run only on CPUs 4,5,6,7,9,10,11,12,15.
    allow_resize_to_same_host=true
    #Allow resizing a VM after creation: when an OpenStack-created VM later runs short of CPU, memory or disk, its resources can be adjusted dynamically.
    resume_guests_state_on_host_boot=true
    #Restore VM state on host boot: when the hypervisor starts, VMs return to their previous state; a VM that was shut off stays off, and a VM that was running is started again.
    cpu_allocation_ratio=8
    #CPU overcommit: one physical host core is presented as 8 cores, i.e. OpenStack treats 1 host core as 8 vCPUs; do not overcommit CPU too aggressively.
    ram_allocation_ratio=1.0
    #Memory is normally not overcommitted (1:1); if you must, use at most 1.2x or 1.5x. Overcommit lets OpenStack create more VMs, but it is risky. Example: a 100G host with a 1.2 ratio appears as 120G to OpenStack; if two VMs have already used 80G and a third uses 20G, OpenStack still shows 20G free even though the host has nothing left to allocate, and once physical memory is exhausted the host kernel kills the VM using the most memory. So memory is generally not overcommitted.
    disk_allocation_ratio=1.0
    #Disk is generally not overcommitted either; the reasoning is the same as for memory.
    reserved_host_disk_mb=20480
    #Reserved disk space: a fixed amount kept for the host itself, typically for logging; reserving 10G or 20G is enough.
    reserved_host_memory_mb=4096
    #Reserved memory: a fixed amount kept for the host itself; 4G is typical.
    service_down_time=120
    #Service-down threshold, default 60: if a node's nova services have not reported a heartbeat to the database within this time, the API considers them down; values that are too short or too long both cause misjudgments.
    rpc_response_timeout=300
    #RPC call timeout: a single Python process cannot run truly concurrently, so RPC requests may not be answered promptly, especially when the target node is running a long periodic task; balance the timeout against acceptable waiting time.
    • Set the memory overcommit ratio to 1.5x

      [root@controller images]# vim /etc/nova/nova.conf
      ######before#####
      #ram_allocation_ratio=1.0
      ################
      
      ######after######
      ram_allocation_ratio=1.5
      #################
      #save with :wq
      #restart all OpenStack services, or just the nova services
      [root@controller images]# openstack-service restart
    • Set the CPU overcommit ratio to 4x

      [root@controller images]# vim /etc/nova/nova.conf
      ######before#####
      #cpu_allocation_ratio=16.0
      ################
      
      ######after######
      cpu_allocation_ratio=4.0
      #################
      #save with :wq
      #restart all OpenStack services, or just the nova services
      [root@controller images]# openstack-service restart
    • Set the nova service heartbeat check time to 120 seconds

      [root@controller images]# vim /etc/nova/nova.conf
      ######before#####
      #service_down_time=60
      ################
      
      ######after######
      service_down_time=120
      #################
      #save with :wq
      #restart all OpenStack services, or just the nova services
      [root@controller images]# openstack-service restart
    • Reserve the first 2 physical CPUs and give all the remaining CPUs to the VMs (assuming 16 vCPUs)

      [root@controller images]# vim /etc/nova/nova.conf
      ######before#####
      #vcpu_pin_set=<None>
      ################
      
      ######after######
      vcpu_pin_set=2-15
      #CPUs are numbered from 0, so reserving the first two (0 and 1) leaves 2-15 for instances
      #################
      #save with :wq
      #restart all OpenStack services, or just the nova services
      [root@controller images]# openstack-service restart
    • Reserve 2048 MB of memory that cannot be used by VMs

      [root@controller images]# vim /etc/nova/nova.conf
      ######before#####
      #reserved_host_memory_mb=512
      ################
      
      ######after######
      reserved_host_memory_mb=2048
      #################
      #save with :wq
      #restart all OpenStack services, or just the nova services
      [root@controller images]# openstack-service restart
    • Reserve 10240 MB of disk that cannot be used by VMs

      [root@controller images]# vim /etc/nova/nova.conf
      ######before#####
      #reserved_host_disk_mb=0
      ################
      
      ######after######
      reserved_host_disk_mb=10240
      #################
      #save with :wq
      #restart all OpenStack services, or just the nova services
      [root@controller images]# openstack-service restart
  • On a self-built OpenStack platform, use the Swift object storage service: modify the relevant configuration files so that Swift serves as the backend store for the glance image service

    #glance配置文件/etc/glance/glance-api.conf
    [glance_store]
    ......
    stores=glance.store.filesystem.Store,glance.store.swift.Store,glance.store.http.Store
    default_store=swift
    swift_store_auth_address=http://192.168.1.76:5000/v2.0/
    swift_store_user=services:glance
    swift_store_key=000000      //keystone password of the glance user
    swift_store_container=glance
    swift_store_create_container_on_put=True
    swift_store_large_object_size=5120
    swift_store_large_object_chunk_size=200
    os_region_name=RegionOne
    ......
    #Note: this example is from openstack-kilo, with glance using swift as the backend store.
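    
    #After editing, the glance API service must be restarted for the new store to take effect (a sketch; the systemd unit name on an RDO/CentOS install is assumed):
    systemctl restart openstack-glance-api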
  • On the openstack private cloud platform, write a template server.yaml in the /root directory that creates a flavor named "m1.flavor" with ID 1234, 1024 MB of RAM, a 20 GB disk, and 1 vCPU.

    [root@controller ~]# cd /root/
    [root@controller ~]# vim server.yaml
    
    ############file contents############
    heat_template_version: 2018-03-02
    resources:
      flavor:
        type: OS::Nova::Flavor
        properties:
          disk: 20
          flavorid: 1234
          name: m1.flavor
          ram: 1024
          vcpus: 1
    #################################
    
    Test the template:
    [root@controller ~]# source /etc/keystone/admin-openrc.sh
    [root@controller ~]# openstack stack create -t server.yaml flavor
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | id                  | 46e54381-baa2-47c4-89ef-cd775f589ce8 |
    | stack_name          | flavor                               |
    | description         | No description                       |
    | creation_time       | 2021-12-01T03:06:57Z                 |
    | updated_time        | None                                 |
    | stack_status        | CREATE_IN_PROGRESS                   |
    | stack_status_reason | Stack CREATE started                 |
    +---------------------+--------------------------------------+
    [root@controller ~]# openstack flavor list
    +------+-----------+------+------+-----------+-------+-----------+
    | ID   | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
    +------+-----------+------+------+-----------+-------+-----------+
    | 1234 | m1.flavor | 1024 |   20 |         0 |     1 | True      |
    +------+-----------+------+------+-----------+-------+-----------+
  • On a self-built OpenStack private cloud platform or the all-in-one platform provided by the contest, write a Heat template create_user.yaml in the /root directory that creates a user named heat-user belonging to the admin project, grants heat-user the admin role, and sets the user's password to 123456.

    [root@controller ~]# cd /root/
    [root@controller ~]# vim create_user.yaml
    
    The code can be generated with the template generator under Orchestration in the dashboard.
    ############file contents############
    heat_template_version: 2018-03-02
    resources:
      user:
        type: OS::Keystone::User
        properties:
          name: heat-user
          domain: demo
          password: "123456"
          roles: [{"role": admin, "project": admin}]
    #################################
    
    Test the template:
    [root@controller ~]# source /etc/keystone/admin-openrc.sh
    [root@controller ~]# openstack stack create -t ./create_user.yaml user
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | id                  | c018a596-142e-45a8-a22f-3131317b5b31 |
    | stack_name          | user                                 |
    | description         | No description                       |
    | creation_time       | 2021-12-01T03:33:17Z                 |
    | updated_time        | None                                 |
    | stack_status        | CREATE_IN_PROGRESS                   |
    | stack_status_reason | Stack CREATE started                 |
    +---------------------+--------------------------------------+
    [root@controller ~]# openstack user list
    +----------------------------------+-------------------+
    | ID                               | Name              |
    +----------------------------------+-------------------+
    | 0944f34be146406faebe9d9f0804f336 | neutron           |
    | 159dd53f9105414985480b996ed28067 | glance            |
    | 362104447741490888fce648863e203f | placement         |
    | 4bb46e4bd83f4a4e8f81874b92b515a2 | kuryr             |
    | 4de70f9596a64f6d9a29f7a70b8ee0d4 | heat-user         |    ##########this one
    | 5f0a256639ab4491a9c1346cca3db42c | gnocchi           |
    | 6f8df1b85e2140d58fc80693720f6e95 | admin             |
    | 71e2175c45314dfeb0165019b61d08df | heat              |
    | 821824ed38794510ad80494c47b803bb | heat_domain_admin |
    | a1e8e72403b04335b64ca2c4f160ef9f | aodh              |
    | b02e95f997384b35b4fcb68c18cc1abd | cinder            |
    | b193a113e69040f8be2199e487157cbd | demo              |
    | c0bcbf260b6a4541b7bd2d0c5e38926b | nova              |
    | faa22fa382c4469aa907f6481a573618 | swift             |
    | fc9663cfb3a74b88be15dd801229a18e | zun               |
    | fe23dec6db0d485cbad04798c97f778c | ceilometer        |
    +----------------------------------+-------------------+
  • On a self-built OpenStack platform, write a heat template file createvm.yml whose purpose is to create a cloud host according to the requirements

    [root@controller ~]# heat --help | grep template
        resource-template   DEPRECATED!
        resource-type-template
                  Generate a template based on a resource type.
        template-function-list
        template-show       Get the template for the specified stack.
        template-validate   Validate a template with parameters.
        template-version-list
                  List the available template versions.
    [root@controller ~]# heat resource-type-list | grep Nova
    WARNING (shell) "heat resource-type-list" is deprecated, please use "openstack orchestration resource type list" instead
    | OS::Nova::Flavor                         |
    | OS::Nova::FloatingIP                     |
    | OS::Nova::FloatingIPAssociation          |
    | OS::Nova::HostAggregate                  |
    | OS::Nova::KeyPair                        |
    | OS::Nova::Quota                          |
    | OS::Nova::Server                         |
    | OS::Nova::ServerGroup                    |
    [root@controller ~]# heat resource-type-template OS::Nova::Server
    
    ##Alternatively, go to the openstack dashboard and generate the template with the template generator
    [root@controller ~]# vim /root/createvm.yml
    ########################
    heat_template_version: 2018-03-02
    resources:
      Server_1:
        type: OS::Nova::Server
        properties:
          networks:
            - network: c6ed53d0-fa4d-431f-b91f-edee82008a4e
          name: test
          flavor: test
          image: 63cbe619-9854-4efe-9f7e-79313471171a
          availability_zone: nova
    ########################
    [root@controller ~]# openstack stack create -t /root/createvm.yml server
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | id                  | a9d5c3a2-8ca3-4ff5-9f13-9d79566f110e |
    | stack_name          | server                               |
    | description         | No description                       |
    | creation_time       | 2021-12-18T07:26:51Z                 |
    | updated_time        | None                                 |
    | stack_status        | CREATE_IN_PROGRESS                   |
    | stack_status_reason | Stack CREATE started                 |
    +---------------------+--------------------------------------+
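    
    #Verify (a sketch): the stack should reach CREATE_COMPLETE and the server should be listed
    openstack stack list
    openstack server list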
  • On the controller node, write a script mysqlbak.sh that, when executed, backs up the database into the /opt/mysqlbak directory

    #Database backup command
    mysqldump -u<user> -p<password> <database>|--all-databases > /path/to/backup.sql
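    
    #A minimal mysqlbak.sh sketch (the root password 000000 and the use of --all-databases are assumptions):
    #!/bin/bash
    BAKDIR=/opt/mysqlbak
    mkdir -p "$BAKDIR"
    mysqldump -uroot -p000000 --all-databases > "$BAKDIR/mysql_$(date +%Y%m%d_%H%M%S).sql"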
  • Log in to the provided private cloud platform and create a centos7.5 cloud host, using a flavor that has an attached disk. Connect to the host and split the attached disk into two 5 GB partitions, then use the two partitions to create a volume group named chinaskill-vg

    [root@controller ~]# openstack volume list
    +--------------------------------------+------+-----------+------+-------------+
    | ID                                   | Name | Status    | Size | Attached to |
    +--------------------------------------+------+-----------+------+-------------+
    | e5448eae-4aa9-421d-bb34-7c5015b0459e | disk | available |   10 |             |
    +--------------------------------------+------+-----------+------+-------------+
    [root@controller ~]# openstack server list
    +--------------------------------------+------+--------+--------------------------------+-----------+--------+
    | ID                                   | Name | Status | Networks                       | Image     | Flavor |
    +--------------------------------------+------+--------+--------------------------------+-----------+--------+
    | 1113c8b8-7072-4ad0-a8ef-05b66a5f162f | test | ACTIVE | intnet=192.168.1.5, 172.16.1.9 | centos7.5 | test   |
    +--------------------------------------+------+--------+--------------------------------+-----------+--------+
    [root@controller ~]# openstack server add volume test disk
    [root@test ~]# lsblk 
    NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    vda    253:0    0  20G  0 disk 
    └─vda1 253:1    0  20G  0 part /
    vdb    253:16   0  10G  0 disk 
    #Use parted to split vdb into two 5G partitions
    [root@test ~]# lsblk                                                      
    NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    vda    253:0    0  20G  0 disk 
    └─vda1 253:1    0  20G  0 part /
    vdb    253:16   0  10G  0 disk 
    ├─vdb1 253:17   0   5G  0 part 
    └─vdb2 253:18   0   5G  0 part 
    
    #Configure the yum repo and install lvm2
    [root@test ~]# yum install -y lvm2
    
    #Initialize the partitions as physical volumes with pvcreate
    [root@test ~]# pvcreate -f /dev/vdb1
      WARNING: lvmetad connection failed, cannot reconnect.
      lvmetad cannot be used due to error: Connection reset by peer
      WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0).
      WARNING: Not using lvmetad because cache update failed.
      Physical volume "/dev/vdb1" successfully created.
    [root@test ~]# pvcreate -f /dev/vdb2
      WARNING: lvmetad connection failed, cannot reconnect.
      lvmetad cannot be used due to error: Connection reset by peer
      WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0).
      WARNING: Not using lvmetad because cache update failed.
      Physical volume "/dev/vdb2" successfully created.
     
    #Create the volume group with vgcreate
    [root@test ~]# vgcreate chinaskill-vg /dev/vdb1 /dev/vdb2 
      WARNING: lvmetad connection failed, cannot reconnect.
      lvmetad cannot be used due to error: Connection reset by peer
      WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0).
      WARNING: Not using lvmetad because cache update failed.
      Volume group "chinaskill-vg" successfully created
      
     #Inspect the volume group
    [root@test ~]# vgdisplay 
      WARNING: lvmetad connection failed, cannot reconnect.
      lvmetad cannot be used due to error: Connection reset by peer
      WARNING: To avoid corruption, restart lvmetad (or disable with use_lvmetad=0).
      WARNING: Not using lvmetad because cache update failed.
      --- Volume group ---
      VG Name               chinaskill-vg
      System ID             
      Format                lvm2
      Metadata Areas        2
      Metadata Sequence No  1
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                0
      Open LV               0
      Max PV                0
      Cur PV                2
      Act PV                2
      VG Size               9.99 GiB
      PE Size               4.00 MiB
      Total PE              2558
      Alloc PE / Size       0 / 0   
      Free  PE / Size       2558 / 9.99 GiB
      VG UUID               xe5E7f-fjU3-G1Dm-k2hb-MARI-8IfZ-wdgg8f
  • On the controller node, create a container named chinaskill and find the container's storage path; upload the cirros-0.3.4-x86_64-disk.img image into the chinaskill container with segmented storage, each segment 10M in size

    [root@controller ~]# openstack container create chinaskill
    +---------------------------------------+------------+------------------------------------+
    | account                               | container  | x-trans-id                         |
    +---------------------------------------+------------+------------------------------------+
    | AUTH_54bea643f53e4f2b96970ddfc14d3138 | chinaskill | tx322c769c6efc49008e32b-0061bd9998 |
    +---------------------------------------+------------+------------------------------------+
    [root@controller images]# ls
    cirros-0.3.4-x86_64-disk.img
    [root@controller images]# swift upload -S 10m chinaskill cirros-0.3.4-x86_64-disk.img 
    cirros-0.3.4-x86_64-disk.img segment 1
    cirros-0.3.4-x86_64-disk.img segment 0
    cirros-0.3.4-x86_64-disk.img
    [root@controller images]# swift list chinaskill
    cirros-0.3.4-x86_64-disk.img
    [root@controller images]# openstack object show chinaskill cirros-0.3.4-x86_64-disk.img
    +-------------------+---------------------------------------------------------------------------------------+
    | Field             | Value                                                                                 |
    +-------------------+---------------------------------------------------------------------------------------+
    | account           | AUTH_54bea643f53e4f2b96970ddfc14d3138                                                 |
    | container         | chinaskill                                                                            |
    | content-length    | 13287936                                                                              |
    | content-type      | application/octet-stream                                                              |
    | etag              | "5cde37512919eda28a822e472bb0a2dd"                                                    |
    | last-modified     | Sat, 18 Dec 2021 08:24:41 GMT                                                         |
    | object            | cirros-0.3.4-x86_64-disk.img                                                          |
    | properties        | Mtime='1639099260.000000'                                                             |
    | x-object-manifest | chinaskill_segments/cirros-0.3.4-x86_64-disk.img/1639099260.000000/13287936/10485760/ |
    +-------------------+---------------------------------------------------------------------------------------+
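    
    #The 10M segments land in a separate <container>_segments container (a sketch to inspect them):
    swift list chinaskill_segments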
  • On a self-built OpenStack platform, create a cloud host cscc_vm from the cirros image with a 1 vCPU / 512 MB RAM / 1 GB disk flavor. Suppose the host's configuration later turns out to be too low and needs adjusting: modify the relevant configuration so that "resize instance" works in the dashboard, then resize the instance to 1 vCPU / 1 GB RAM / 2 GB disk (a sketch follows)
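
    #A sketch (the flavor names Fsmall/Fbig are assumptions; allow_resize_to_same_host is described in the tuning section above):
    openstack flavor create --vcpus 1 --ram 512 --disk 1 Fsmall
    openstack server create --image cirros --flavor Fsmall --network extnet cscc_vm
    #set allow_resize_to_same_host=true in /etc/nova/nova.conf, then: openstack-service restart
    openstack flavor create --vcpus 1 --ram 1024 --disk 2 Fbig
    openstack server resize --flavor Fbig cscc_vm
    openstack server resize --confirm cscc_vm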

  • On a self-built OpenStack platform, create a cloud host vm1 from the cirros image, then migrate it manually: if it was created on the compute node, migrate it to the controller node; if it was created on the controller node, migrate it to the compute node (a sketch follows)
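
    #A sketch of a manual cold migration (the --confirm form matches the client era used above):
    openstack server create --image cirros --flavor Fmin --network extnet vm1
    openstack server show vm1 | grep host   #note the current OS-EXT-SRV-ATTR:host
    openstack server migrate vm1
    openstack server resize --confirm vm1   #confirm once the status is VERIFY_RESIZE
    openstack server show vm1 | grep host   #the host should now be the other node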

  • On the controller node, installing the libguestfs-tools package triggers a dependency conflict; resolve the dependency errors and complete the installation of libguestfs-tools

  • Log in to the provided private cloud platform, create a centos7.5 cloud host and, using the provided packages, install the zabbix monitoring service on it, then configure the service to monitor the controller node.

    yum install -y zabbix-server-mysql zabbix-web-mysql
    yum install -y  mariadb-server
    #configure with mysql xxxx --user=root
    #unpack the sql file with gzip -d
    mysql -uroot -p000000 -e "use zabbix;source /usr/share/doc/zabbix-server-mysql-3.4.15/create.sql;"
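    #Remaining steps, as a sketch (standard zabbix 3.4 layout assumed): set DBHost/DBName/DBUser/DBPassword in /etc/zabbix/zabbix_server.conf, set the php date.timezone in /etc/httpd/conf.d/zabbix.conf, install zabbix-agent on the controller with Server= pointing at this host, then:
    systemctl restart zabbix-server httpd
    systemctl restart zabbix-agent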
    
  • Log in to the provided private cloud platform, create a centos7.5 cloud host and, using the provided packages, install the database, redis, zookeeper, kafka and related services on it, then deploy the mall application on that host so the site can be reached

    #cloud host ip: 172.16.1.9
    #prepare the tarball gpmall-single.tar.gz
    #prepare the centos repo
    #inspect the jar packages for the connection details they expect:
    #mariadb:   host mysql.mall, port 3306, user root, password 123456, database gpmall
    #redis:     host redis.mall, port 6379
    #zookeeper: host zookeeper.mall, port 2181
    #kafka:     host kafka.mall, port 9092
    [root@gpmall ~]# ls
    gpmall-single.tar.gz
    [root@gpmall ~]# tar -xzvf gpmall-single.tar.gz 
    [root@gpmall ~]# cd gpmall-single
    [root@gpmall gpmall-single]# ls
    dist         gpmall-shopping-0.0.1-SNAPSHOT.jar  gpmall-user-0.0.1-SNAPSHOT.jar  shopping-provider-0.0.1-SNAPSHOT.jar  zookeeper-3.4.14.tar.gz
    gpmall-repo  gpmall.sql                          kafka_2.11-1.1.1.tgz            user-provider-0.0.1-SNAPSHOT.jar
    
    #Configure the yum repo
    [root@gpmall gpmall-single]# cd gpmall-repo/
    [root@gpmall gpmall-repo]# pwd
    /root/gpmall-single/gpmall-repo
    [root@gpmall gpmall-repo]# rm -rf /etc/yum.repos.d/CentOS-*
    [root@gpmall gpmall-repo]# vi /etc/yum.repos.d/http.repo
    [root@gpmall gpmall-repo]# cat /etc/yum.repos.d/http.repo 
    [centos]
    name=centos
    baseurl=ftp://172.16.1.101/centos
    gpgcheck=0
    enabled=1
    [gpmall]
    name=gpmall
    baseurl=file:///root/gpmall-single/gpmall-repo
    gpgcheck=0
    enabled=1
    [root@gpmall gpmall-repo]# yum clean all
    
    #Add host mappings
    [root@gpmall gpmall-single]# vi /etc/hosts
    [root@gpmall gpmall-single]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    127.0.0.1 redis.mall mysql.mall zookeeper.mall kafka.mall
    
    #Install the database
    [root@gpmall gpmall-repo]# yum install -y mariadb mariadb-server
    [root@gpmall gpmall-repo]# mysqld_safe &
    [1] 1576
    [root@gpmall gpmall-repo]# 211218 12:16:08 mysqld_safe Logging to '/var/lib/mysql/gpmall.novalocal.err'.
    211218 12:16:09 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
    [root@gpmall gpmall-repo]# mysqladmin -uroot password "123456"
    [root@gpmall gpmall-repo]# cd ..
    [root@gpmall gpmall-single]# mysql -uroot -p123456 -e "set names utf8;grant all privileges on *.* to 'root'@'%' identified by '123456';"
    [root@gpmall gpmall-single]# mysql -uroot -p123456 -e "create database gpmall;use gpmall;source gpmall.sql;"
    [root@gpmall gpmall-single]# mysql -uroot -p123456 -e "use gpmall;show tables"
    +--------------------+
    | Tables_in_gpmall   |
    +--------------------+
    | tb_address         |
    | tb_base            |
    | tb_comment         |
    | tb_comment_picture |
    | tb_comment_reply   |
    | tb_dict            |
    | tb_express         |
    | tb_item            |
    | tb_item_cat        |
    | tb_item_desc       |
    | tb_log             |
    | tb_member          |
    | tb_order           |
    | tb_order_item      |
    | tb_order_shipping  |
    | tb_panel           |
    | tb_panel_content   |
    | tb_payment         |
    | tb_refund          |
    | tb_stock           |
    | tb_user_verify     |
    +--------------------+
    
    #Install redis
    [root@gpmall gpmall-single]# yum install -y redis
    [root@gpmall gpmall-single]# sed -i "s/bind 127.0.0.1/bind 0.0.0.0/g" /etc/redis.conf
    [root@gpmall gpmall-single]# sed -i "s/protected-mode yes/protected-mode no/g" /etc/redis.conf
    [root@gpmall gpmall-single]# sed -i "s/daemonize no/daemonize yes/g" /etc/redis.conf
    [root@gpmall gpmall-single]# redis-server /etc/redis.conf
    [root@gpmall gpmall-single]# redis-cli 
    127.0.0.1:6379> exit
    
    #Install the jdk
    [root@gpmall gpmall-single]# yum install -y java-1.8.0-openjdk
    [root@gpmall gpmall-single]# java -version
    openjdk version "1.8.0_262"
    OpenJDK Runtime Environment (build 1.8.0_262-b10)
    OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)
    
    #Install zookeeper
    [root@gpmall gpmall-single]# tar -xzvf zookeeper-3.4.14.tar.gz
    [root@gpmall gpmall-single]# cd zookeeper-3.4.14
    [root@gpmall zookeeper-3.4.14]# cd conf/
    [root@gpmall conf]# mv zoo_sample.cfg zoo.cfg 
    [root@gpmall ~]# cd /root/gpmall-single
    [root@gpmall gpmall-single]# zookeeper-3.4.14/bin/zkServer.sh start
    
    #Install kafka
    [root@gpmall gpmall-single]# tar -xzvf kafka_2.11-1.1.1.tgz
    [root@gpmall gpmall-single]# nohup kafka_2.11-1.1.1/bin/kafka-server-start.sh kafka_2.11-1.1.1/config/server.properties &
    
    #Install nginx
    [root@gpmall gpmall-single]# yum install -y nginx
    [root@gpmall gpmall-single]# rm -rf /usr/share/nginx/html/*
    [root@gpmall gpmall-single]# cp -rvf dist/* /usr/share/nginx/html/
    [root@gpmall gpmall-single]# sed -i "1a location /user { proxy_pass http://localhost:8082; }" /etc/nginx/conf.d/default.conf 
    [root@gpmall gpmall-single]# sed -i "1a location /shopping  { proxy_pass http://localhost:8081; }" /etc/nginx/conf.d/default.conf 
    [root@gpmall gpmall-single]# sed -i "1a location /cashier  { proxy_pass http://localhost:8083; }" /etc/nginx/conf.d/default.conf
    
    #Start the jar services
    [root@gpmall gpmall-single]# nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
    [root@gpmall gpmall-single]# nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
    [root@gpmall gpmall-single]# nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
    [root@gpmall gpmall-single]# nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
    [root@gpmall gpmall-single]# netstat -ntlp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      11819/redis-server  
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      559/rpcbind         
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      12666/nginx: master 
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1208/sshd           
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      996/master          
    tcp6       0      0 :::9092                 :::*                    LISTEN      12246/java          
    tcp6       0      0 :::2181                 :::*                    LISTEN      12000/java          
    tcp6       0      0 :::3306                 :::*                    LISTEN      1638/mysqld         
    tcp6       0      0 :::43948                :::*                    LISTEN      12000/java          
    tcp6       0      0 :::111                  :::*                    LISTEN      559/rpcbind         
    tcp6       0      0 :::39318                :::*                    LISTEN      12246/java          
    tcp6       0      0 :::22                   :::*                    LISTEN      1208/sshd           
    tcp6       0      0 ::1:25                  :::*                    LISTEN      996/master
    
    #Test the nginx config and restart nginx
    [root@gpmall gpmall-single]# nginx -t
    [root@gpmall gpmall-single]# systemctl restart nginx
  • Log in to the provided private cloud platform, create two cloud hosts from the centos7.5 image and, using the provided packages, install the Redis service on both hosts and configure them as a Redis master/slave architecture

    #server1 internal ip 192.168.1.3  floating ip 172.16.1.13  master
    #server2 internal ip 192.168.1.18 floating ip 172.16.1.9   slave
    #configure the yum repo
    #install redis on both hosts
    [root@server1 ~]# yum install -y redis
    [root@server2 ~]# yum install -y redis
    
    #Edit the config files
    [root@server1 ~]# vim /etc/redis.conf
    ##########before##########
    bind 127.0.0.1
    protected-mode yes
    daemonize no
    #########################
    ##########after##########
    bind 0.0.0.0
    protected-mode no
    daemonize yes
    #########################
    
    [root@server2 ~]# vim /etc/redis.conf
    ##########Before##########
    bind 127.0.0.1
    protected-mode yes
    daemonize no
    # slaveof <masterip> <masterport>
    #########################
    ##########After##########
    bind 0.0.0.0
    protected-mode no
    daemonize yes
    slaveof 192.168.1.3 6379
    #########################
    
    [root@server1 ~]# redis-server /etc/redis.conf
    [root@server2 ~]# redis-server /etc/redis.conf
    
    [root@server1 ~]# redis-cli 
    127.0.0.1:6379> role
    1) "master"
    2) (integer) 127
    3) 1) 1) "192.168.1.18"
          2) "6379"
          3) "127"
          
    127.0.0.1:6379> set a '1'
    OK
    
    
    [root@server2 ~]# redis-cli 
    127.0.0.1:6379> ROLE
    1) "slave"
    2) "192.168.1.3"
    3) (integer) 6379
    4) "connected"
    5) (integer) 155
    
    127.0.0.1:6379> get a
    "1"
  • Log in to the provided private cloud platform, create three centos7.5 cloud instances, and use the provided software packages to install the Redis service on all three instances and configure them in Redis sentinel mode.

    #server1 internal IP 192.168.1.3, floating IP 172.16.1.13, master node
    #server2 internal IP 192.168.1.18, floating IP 172.16.1.9, slave node
    #server3 internal IP 192.168.1.16, floating IP 172.16.1.17, slave node
    #Configure the yum repository
    #Install Redis on all three hosts
    [root@server1 ~]# yum install -y redis
    [root@server2 ~]# yum install -y redis
    [root@server3 ~]# yum install -y redis
    
    #Edit the configuration files
    [root@server1 ~]# vim /etc/redis.conf
    ##########Before##########
    bind 127.0.0.1
    protected-mode yes
    daemonize no
    #########################
    ##########After##########
    bind 0.0.0.0
    protected-mode no
    daemonize yes
    #########################
    
    [root@server2 ~]# vim /etc/redis.conf
    ##########Before##########
    bind 127.0.0.1
    protected-mode yes
    daemonize no
    # slaveof <masterip> <masterport>
    #########################
    ##########After##########
    bind 0.0.0.0
    protected-mode no
    daemonize yes
    slaveof 192.168.1.3 6379
    #########################
    
    [root@server3 ~]# vim /etc/redis.conf
    ##########Before##########
    bind 127.0.0.1
    protected-mode yes
    daemonize no
    # slaveof <masterip> <masterport>
    #########################
    ##########After##########
    bind 0.0.0.0
    protected-mode no
    daemonize yes
    slaveof 192.168.1.3 6379
    #########################
    
    [root@server1 ~]# redis-server /etc/redis.conf
    [root@server2 ~]# redis-server /etc/redis.conf
    [root@server3 ~]# redis-server /etc/redis.conf
    
    [root@server1 ~]# redis-cli 
    127.0.0.1:6379> role
    1) "master"
    2) (integer) 127
    3) 1) 1) "192.168.1.18"
          2) "6379"
          3) "127"
       2) 1) "192.168.1.16"
          2) "6379"
          3) "127"
    
    [root@server2 ~]# redis-cli 
    127.0.0.1:6379> ROLE
    1) "slave"
    2) "192.168.1.3"
    3) (integer) 6379
    4) "connected"
    5) (integer) 155
    
    [root@server3 ~]# redis-cli 
    127.0.0.1:6379> role
    1) "slave"
    2) "192.168.1.3"
    3) (integer) 6379
    4) "connected"
    5) (integer) 197
    
    
    #Configure the sentinel service
    [root@server1 ~]# vim /etc/redis-sentinel.conf
    ##########Before##########
    # protected-mode no
    sentinel monitor mymaster 127.0.0.1 6379 2
    #########################
    ##########After##########
    protected-mode no
    sentinel monitor mymaster 192.168.1.3 6379 2
    daemonize yes
    #########################
    
    [root@server2 ~]# vim /etc/redis-sentinel.conf
    ##########Before##########
    # protected-mode no
    sentinel monitor mymaster 127.0.0.1 6379 2
    #########################
    ##########After##########
    protected-mode no
    sentinel monitor mymaster 192.168.1.3 6379 2
    daemonize yes
    #########################
    
    [root@server3 ~]# vim /etc/redis-sentinel.conf
    ##########Before##########
    # protected-mode no
    sentinel monitor mymaster 127.0.0.1 6379 2
    #########################
    ##########After##########
    protected-mode no
    sentinel monitor mymaster 192.168.1.3 6379 2
    daemonize yes
    #########################
    
    #Start the sentinels
    [root@server1 ~]# redis-sentinel /etc/redis-sentinel.conf
    [root@server2 ~]# redis-sentinel /etc/redis-sentinel.conf
    [root@server3 ~]# redis-sentinel /etc/redis-sentinel.conf
    
    [root@server1 ~]# redis-cli -p 26379
    127.0.0.1:26379> info sentinel
    # Sentinel
    sentinel_masters:1
    sentinel_tilt:0
    sentinel_running_scripts:0
    sentinel_scripts_queue_length:0
    sentinel_simulate_failure_flags:0
    master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=2,sentinels=4
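    
    To confirm that the sentinels agree on the current master (and to route clients through them so a failover is transparent), here is a minimal sketch assuming the third-party redis-py package:
    
    from redis.sentinel import Sentinel  # third-party package: pip install redis
    
    sentinels = Sentinel([('192.168.1.3', 26379),
                          ('192.168.1.18', 26379),
                          ('192.168.1.16', 26379)], socket_timeout=0.5)
    
    # Ask the sentinels which node currently holds the "mymaster" role.
    print(sentinels.discover_master('mymaster'))   # e.g. ('192.168.1.3', 6379)
    print(sentinels.discover_slaves('mymaster'))   # the two replicas
    
    # Connections obtained this way follow the master across failovers.
    master = sentinels.master_for('mymaster', socket_timeout=0.5)
    master.set('failover-test', 'ok')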
  • Log in to the provided private cloud platform, create three centos7.5 cloud instances, and use the provided software packages to build the three instances into a ZooKeeper cluster

    #server1 internal IP 192.168.1.3, floating IP 172.16.1.13
    #server2 internal IP 192.168.1.18, floating IP 172.16.1.9
    #server3 internal IP 192.168.1.16, floating IP 172.16.1.17
    #Configure the yum repository on all three hosts and install the JDK
    [root@server1 ~]# yum install -y java-1.8.0-openjdk
    [root@server1 ~]# java -version
    openjdk version "1.8.0_161"
    OpenJDK Runtime Environment (build 1.8.0_161-b14)
    OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
    
    [root@server2 ~]# yum install -y java-1.8.0-openjdk
    [root@server2 ~]# java -version
    openjdk version "1.8.0_161"
    OpenJDK Runtime Environment (build 1.8.0_161-b14)
    OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
    
    [root@server3 ~]# yum install -y java-1.8.0-openjdk
    [root@server3 ~]# java -version
    openjdk version "1.8.0_161"
    OpenJDK Runtime Environment (build 1.8.0_161-b14)
    OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
    
    #Extract ZooKeeper on all three hosts
    [root@server1 ~]# cd /opt/
    [root@server1 opt]# ls
    kafka_2.11-1.1.1.tgz  zookeeper-3.4.14.tar.gz
    [root@server1 opt]# tar -xzvf zookeeper-3.4.14.tar.gz
    
    [root@server2 ~]# cd /opt/
    [root@server2 opt]# ls
    kafka_2.11-1.1.1.tgz  zookeeper-3.4.14.tar.gz
    [root@server2 opt]# tar -xzvf zookeeper-3.4.14.tar.gz
    
    [root@server3 ~]# cd /opt/
    [root@server3 opt]# ls
    kafka_2.11-1.1.1.tgz  zookeeper-3.4.14.tar.gz
    [root@server3 opt]# tar -xzvf zookeeper-3.4.14.tar.gz
    
    #Edit the configuration file
    [root@server1 opt]# cd zookeeper-3.4.14/conf/
    [root@server1 conf]# cp zoo_sample.cfg zoo.cfg
    [root@server1 conf]# vim zoo.cfg
    #########Before##########
    dataDir=/tmp/zookeeper        #this path is where the myid file will be created later
    ########################
    #########After##########
    dataDir=/tmp/zookeeper
    server.1=192.168.1.3:2888:3888
    server.2=192.168.1.18:2888:3888
    server.3=192.168.1.16:2888:3888
    ########################
    
    #scp the configuration file to the other two hosts
    [root@server1 conf]# scp zoo.cfg 192.168.1.18:/opt/zookeeper-3.4.14/conf/
    zoo.cfg                                       100% 1017    92.3KB/s   00:00  
    [root@server1 conf]# scp zoo.cfg 192.168.1.16:/opt/zookeeper-3.4.14/conf/
    zoo.cfg                                       100% 1017    95.2KB/s   00:00
    
    #Create the dataDir path and write the myid file; the value must match the host's server.N number
    [root@server1 conf]# mkdir /tmp/zookeeper
    [root@server1 conf]# echo "1" > /tmp/zookeeper/myid
    [root@server1 conf]# cat /tmp/zookeeper/myid 
    1
    
    [root@server2 opt]# mkdir /tmp/zookeeper
    [root@server2 opt]# echo "2" > /tmp/zookeeper/myid
    [root@server2 opt]# cat /tmp/zookeeper/myid 
    2
    
    [root@server3 opt]# mkdir /tmp/zookeeper
    [root@server3 opt]# echo "3" > /tmp/zookeeper/myid
    [root@server3 opt]# cat /tmp/zookeeper/myid 
    3
    
    #Start ZooKeeper, one node at a time
    [root@server1 conf]# /opt/zookeeper-3.4.14/bin/zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@server1 conf]# /opt/zookeeper-3.4.14/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
    Mode: leader
    
    [root@server2 opt]# /opt/zookeeper-3.4.14/bin/zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@server2 opt]# /opt/zookeeper-3.4.14/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
    Mode: follower
    
    [root@server3 opt]# /opt/zookeeper-3.4.14/bin/zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@server3 opt]# /opt/zookeeper-3.4.14/bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
    Mode: follower
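    
    Beyond zkServer.sh status, the ensemble can be exercised from a client. A minimal sketch assuming the third-party kazoo package (not among the provided packages):
    
    from kazoo.client import KazooClient  # third-party package: pip install kazoo
    
    # List all three members; any live node will serve the session.
    zk = KazooClient(hosts='192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181')
    zk.start()
    
    # A znode created through one member is visible cluster-wide.
    if not zk.exists('/demo'):
        zk.create('/demo', b'hello')
    data, stat = zk.get('/demo')
    print(data)   # b'hello'
    zk.stop()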
  • Log in to the provided private cloud platform, create three centos7.5 cloud instances, and use the provided software packages to build the three instances into a Kafka cluster

    #server1 internal IP 192.168.1.3, floating IP 172.16.1.13
    #server2 internal IP 192.168.1.18, floating IP 172.16.1.9
    #server3 internal IP 192.168.1.16, floating IP 172.16.1.17
    #Kafka depends on ZooKeeper and the JDK; both were set up in the previous task (a single ZooKeeper node would also work)
    #Extract the tarball on all three hosts
    [root@server1 conf]# cd /opt/
    [root@server1 opt]# tar -xzvf kafka_2.11-1.1.1.tgz
    [root@server2 opt]# tar -xzvf kafka_2.11-1.1.1.tgz
    [root@server3 opt]# tar -xzvf kafka_2.11-1.1.1.tgz
    
    #Edit the configuration files
    [root@server1 opt]# vim kafka_2.11-1.1.1/config/server.properties
    #########Before##########
    broker.id=0
    zookeeper.connect=localhost:2181
    ########################
    #########After##########
    broker.id=1
    zookeeper.connect=192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181
    listeners=PLAINTEXT://192.168.1.3:9092
    ########################
    
    [root@server2 opt]# vim kafka_2.11-1.1.1/config/server.properties
    #########Before##########
    broker.id=0
    zookeeper.connect=localhost:2181
    ########################
    #########After##########
    broker.id=2
    zookeeper.connect=192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181
    listeners=PLAINTEXT://192.168.1.18:9092
    ########################
    
    [root@server3 opt]# vim kafka_2.11-1.1.1/config/server.properties
    #########Before##########
    broker.id=0
    zookeeper.connect=localhost:2181
    ########################
    #########After##########
    broker.id=3
    zookeeper.connect=192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181
    listeners=PLAINTEXT://192.168.1.16:9092
    ########################
    
    #Start Kafka
    [root@server1 opt]# nohup /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh /opt/kafka_2.11-1.1.1/config/server.properties >> log.log &
    [2] 1513
    
    [root@server2 opt]# nohup /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh /opt/kafka_2.11-1.1.1/config/server.properties >> log.log &
    [1] 1545
    
    [root@server3 opt]# nohup /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh /opt/kafka_2.11-1.1.1/config/server.properties >> log.log &
    [1] 1556
    
    #Test the Kafka cluster
    [root@server1 opt]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --create --zookeeper 192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181 --replication-factor 2 --partitions 3 --topic demo_topics
    WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
    Created topic "demo_topics".
    [root@server1 opt]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper 192.168.1.3:2181,192.168.1.18:2181,192.168.1.16:2181 demo_topics
    demo_topics
    [root@server2 opt]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper 192.168.1.18:2181
    demo_topics 
    [root@server3 opt]# /opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper 192.168.1.16:2181
    demo_topics
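    
    The cluster can also be exercised end to end by producing and consuming a message. A minimal sketch assuming the third-party kafka-python package:
    
    from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python
    
    brokers = ['192.168.1.3:9092', '192.168.1.18:9092', '192.168.1.16:9092']
    
    # Produce one message to the replicated topic created above.
    producer = KafkaProducer(bootstrap_servers=brokers)
    producer.send('demo_topics', b'hello from python')
    producer.flush()
    
    # Read it back from the beginning of the topic.
    consumer = KafkaConsumer('demo_topics',
                             bootstrap_servers=brokers,
                             auto_offset_reset='earliest',
                             consumer_timeout_ms=5000)
    for msg in consumer:
        print(msg.topic, msg.partition, msg.offset, msg.value)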
  • Log in to the provided private cloud platform and create three cloud instances from the centos7.5 image to build a RabbitMQ cluster. Use the classic cluster mode with one disk node and two RAM nodes, and start the rabbitmq service once configuration is complete

    #server1 internal IP 192.168.1.3, floating IP 172.16.1.13
    #server2 internal IP 192.168.1.18, floating IP 172.16.1.9
    #server3 internal IP 192.168.1.16, floating IP 172.16.1.17
    #Configure host name mappings (/etc/hosts)
    192.168.1.3 server1
    192.168.1.18 server2
    192.168.1.16 server3
    
    #Point the yum repository at the RabbitMQ repo, install the service, then start it and check its status
    [root@server1 opt]# yum install -y rabbitmq-server
    [root@server1 opt]# systemctl start rabbitmq-server
    [root@server1 opt]# systemctl status rabbitmq-server
    
    [root@server2 opt]# yum install -y rabbitmq-server
    [root@server2 opt]# systemctl start rabbitmq-server
    [root@server2 opt]# systemctl status rabbitmq-server
    
    [root@server3 opt]# yum install -y rabbitmq-server
    [root@server3 opt]# systemctl start rabbitmq-server
    [root@server3 opt]# systemctl status rabbitmq-server
    
    #Enable the rabbitmq_management plugin on the master node, then restart
    [root@server1 opt]# rabbitmq-plugins list
    [ ] rabbitmq_management               3.3.5
    [root@server1 opt]# rabbitmq-plugins enable rabbitmq_management
    The following plugins have been enabled:
      mochiweb
      webmachine
      rabbitmq_web_dispatch
      amqp_client
      rabbitmq_management_agent
      rabbitmq_management
    Plugin configuration has changed. Restart RabbitMQ for changes to take effect.
    [root@server1 opt]# systemctl restart rabbitmq-server
    [If the command above fails, use this instead: [root@server1 opt]# service rabbitmq-server restart]
    #Open TCP port 15672 in the OpenStack security group, then visit the RabbitMQ monitoring UI at the rabbitmq1 node's IP on port 15672 (http://ip:15672) and log in with the default credentials (username and password are both guest)
    
    #After logging in, scp the Erlang cookie file to the other two hosts
    [root@server1 opt]# scp /var/lib/rabbitmq/.erlang.cookie 192.168.1.18:/var/lib/rabbitmq/
    .erlang.cookie                                  100%   20     2.1KB/s   00:00    
    [root@server1 opt]# scp /var/lib/rabbitmq/.erlang.cookie 192.168.1.16:/var/lib/rabbitmq/
    .erlang.cookie                                  100%   20     2.2KB/s   00:00
    
    #Change the file owner and group on the other two hosts
    [root@server2 opt]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
    [root@server2 opt]# systemctl restart rabbitmq-server
    [root@server3 opt]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
    [root@server3 opt]# systemctl restart rabbitmq-server
    
    #Join the other two hosts to the cluster (this can take quite a while)
    [root@server2 opt]# rabbitmqctl stop_app
    [root@server2 opt]# rabbitmq-plugins enable rabbitmq_management
    [root@server2 opt]# rabbitmqctl join_cluster --ram rabbit@server1
    [root@server2 opt]# rabbitmqctl start_app
    [root@server2 opt]# systemctl restart rabbitmq-server
    
    [root@server3 opt]# rabbitmqctl stop_app
    [root@server3 opt]# rabbitmq-plugins enable rabbitmq_management
    [root@server3 opt]# rabbitmqctl join_cluster --ram rabbit@server1
    [root@server3 opt]# rabbitmqctl start_app
    [root@server3 opt]# systemctl restart rabbitmq-server
    
    #By default a RabbitMQ node starts as a disk node. With the cluster commands above, rabbitmq2 and rabbitmq3 are RAM nodes and rabbitmq1 is the disk node.
    #To make rabbitmq2 and rabbitmq3 disk nodes as well, simply drop the --ram flag.
    #To change a node's type later, use rabbitmqctl change_cluster_node_type disc (or ram); the Rabbit application must be stopped first.
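    
    To check that the cluster actually accepts traffic, publish and fetch one message. A minimal sketch assuming the third-party pika package; note that since RabbitMQ 3.3 the guest account may only connect from localhost, so run this on one of the nodes:
    
    import pika  # third-party package: pip install pika
    
    conn = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = conn.channel()
    
    # Queue metadata is replicated to every node of a classic cluster.
    channel.queue_declare(queue='demo')
    channel.basic_publish(exchange='', routing_key='demo', body='hello cluster')
    
    method, properties, body = channel.basic_get(queue='demo', auto_ack=True)
    print(body)   # b'hello cluster'
    conn.close()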
  • Log in to the provided private cloud platform, create one centos7.5 cloud instance, use the provided software packages to install an LNMP environment on it, and deploy the provided WordPress application package

  • Log in to the provided private cloud platform, create three centos7.5 cloud instances, and use the provided software packages to install the MariaDB database service on all three and configure them as a database cluster, i.e. a MariaDB_galera_cluster database cluster

    #server1 internal IP 192.168.1.3, floating IP 172.16.1.13
    #server2 internal IP 192.168.1.18, floating IP 172.16.1.9
    #server3 internal IP 192.168.1.16, floating IP 172.16.1.17
    #Configure host name mappings
    192.168.1.3 server1
    192.168.1.18 server2
    192.168.1.16 server3
    
    #Configure the yum repository
    #Install MariaDB
    [root@server1 ~]# yum install -y mariadb mariadb-server
    [root@server2 ~]# yum install -y mariadb mariadb-server
    [root@server3 ~]# yum install -y mariadb mariadb-server
    
    [root@server1 ~]# mysqld_safe &
    [root@server1 ~]# mysqladmin -uroot password "000000"
    [root@server1 ~]# systemctl stop mariadb
    
    [root@server2 ~]# mysqld_safe &
    [root@server2 ~]# mysqladmin -uroot password "000000"
    [root@server2 ~]# systemctl stop mariadb
    
    [root@server3 ~]# mysqld_safe &
    [root@server3 ~]# mysqladmin -uroot password "000000"
    [root@server3 ~]# systemctl stop mariadb
    
    
    #Edit the configuration file
    [root@server1 ~]# vim /etc/my.cnf.d/server.cnf
    ##########Before##########
    [galera]
    # Mandatory settings
    #wsrep_on=ON
    #wsrep_provider=
    #wsrep_cluster_address=
    #binlog_format=row
    #default_storage_engine=InnoDB
    #innodb_autoinc_lock_mode=2
    #
    # Allow server to accept connections on all interfaces.
    #
    #bind-address=0.0.0.0
    #########################
    ##########After##########
    [galera]
    # Mandatory settings
    wsrep_on=ON
    wsrep_provider=/usr/lib64/galera/libgalera_smm.so
    wsrep_cluster_address="gcomm://192.168.1.3,192.168.1.16,192.168.1.18"
    binlog_format=row
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2
    #
    # Allow server to accept connections on all interfaces.
    #
    bind-address=0.0.0.0
    #########################
    
    #scp the file to the other two hosts
    [root@server1 ~]# scp /etc/my.cnf.d/server.cnf 192.168.1.18:/etc/my.cnf.d/server.cnf 
    server.cnf                                                                                                                                                                     100% 1155    93.4KB/s   00:00    
    [root@server1 ~]# scp /etc/my.cnf.d/server.cnf 192.168.1.16:/etc/my.cnf.d/server.cnf 
    server.cnf                                                                                                                                                                     100% 1155   126.0KB/s   00:00 
    
    #Start the cluster (bootstrap the first node with --wsrep-new-cluster)
    [root@server1 ~]# service mysql start --wsrep-new-cluster
    Starting mysql (via systemctl):                            [  OK  ]
    
    [root@server2 ~]# service mysql start
    Starting mysql (via systemctl):                            [  OK  ]
    
    [root@server3 ~]# service mysql start
    Starting mysql (via systemctl):                            [  OK  ]
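    
    Whether all three nodes actually joined can be read from the wsrep status variables. A minimal sketch assuming the third-party PyMySQL package, run on any of the nodes:
    
    import pymysql  # third-party package: pip install PyMySQL
    
    conn = pymysql.connect(host='localhost', user='root', password='000000')
    cur = conn.cursor()
    cur.execute("SHOW STATUS LIKE 'wsrep_cluster_size'")
    print(cur.fetchone())   # ('wsrep_cluster_size', '3') when all nodes joined
    cur.execute("SHOW STATUS LIKE 'wsrep_cluster_status'")
    print(cur.fetchone())   # ('wsrep_cluster_status', 'Primary')
    conn.close()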
    
  • Log in to the provided private cloud platform, create one more centos7.5 cloud instance, use the provided software packages to install the HAProxy load-balancing service, and hook it up to the highly available database built in the previous task, completing a database cluster + load balancer architecture

  • Log in to the provided private cloud platform, create two centos7.5 cloud instances, and use the provided software packages to install the MariaDB database service on both and configure them as a master/slave database.

    #Create two cloud instances on OpenStack, attached to the same network
    #mariadb1 ip: 192.168.100.10  mariadb2 ip: 192.168.100.6
    #Configure the yum repository using the provided mariadb mirror
    
    ###Basic configuration
    ##Master node
    [root@mariadb1 ~]# rm -rf /etc/yum.repos.d/CentOS-*
    [root@mariadb1 ~]# vi /etc/yum.repos.d/http.repo
    [root@mariadb1 ~]# cat /etc/yum.repos.d/http.repo 
    [centos]
    name=centos
    baseurl=ftp://172.16.1.101/centos
    gpgcheck=0
    enable=1
    [mysql]
    name=mysql
    baseurl=ftp://172.16.1.101/iaas/mariadb-repo/
    gpgcheck=0
    enable=1
    [root@mariadb1 ~]# yum clean all
    Loaded plugins: fastestmirror
    Cleaning repos: centos mysql
    Cleaning up everything
    Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
    [root@mariadb1 ~]# yum install -y mariadb mariadb-server
    [root@mariadb1 ~]# vi /etc/my.cnf
    #Add the following under the [client-server] section
    [mysqld]
    log-bin=mysql-bin
    server-id=10
    [root@mariadb1 ~]# systemctl restart mysql
    [root@mariadb1 ~]# mysqladmin -uroot password "000000"
    
    
    ##Slave node
    [root@mariadb2 ~]# rm -rf /etc/yum.repos.d/CentOS-*
    [root@mariadb2 ~]# vi /etc/yum.repos.d/http.repo
    [root@mariadb2 ~]# cat /etc/yum.repos.d/http.repo 
    [centos]
    name=centos
    baseurl=ftp://172.16.1.101/centos
    gpgcheck=0
    enable=1
    [mysql]
    name=mysql
    baseurl=ftp://172.16.1.101/iaas/mariadb-repo/
    gpgcheck=0
    enable=1
    [root@mariadb2 ~]# yum clean all
    Loaded plugins: fastestmirror
    Cleaning repos: centos mysql
    Cleaning up everything
    Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
    [root@mariadb2 ~]# yum install -y mariadb mariadb-server
    [root@mariadb2 ~]# vi /etc/my.cnf
    #Add the following under the [client-server] section
    [mysqld]
    log-bin=mysql-bin
    server-id=6
    [root@mariadb2 ~]# systemctl restart mysql
    [root@mariadb2 ~]# mysqladmin -uroot password "000000"
    
    ###Master/slave configuration
    #Master node
    MariaDB [(none)]> grant all on *.* to 'root'@'%' identified by '000000';
    Query OK, 0 rows affected (0.010 sec)
    MariaDB [(none)]> show master status;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000001 |      702 |              |                  |
    +------------------+----------+--------------+------------------+
    1 row in set (0.016 sec)
    
    #Slave node
    MariaDB [(none)]> change master to master_host='192.168.100.10',master_user='root',master_password='000000',master_log_file='mysql-bin.000001',master_log_pos=702;
    Query OK, 0 rows affected (0.045 sec)
    MariaDB [(none)]> start slave;
    Query OK, 0 rows affected (0.019 sec)
    MariaDB [(none)]> show slave status\G
    *************************** 1. row ***************************
                    Slave_IO_State: Waiting for master to send event
                       Master_Host: 192.168.100.10
                       Master_User: root
                       Master_Port: 3306
                     Connect_Retry: 60
                   Master_Log_File: mysql-bin.000001
               Read_Master_Log_Pos: 702
                    Relay_Log_File: mariadb2-relay-bin.000002
                     Relay_Log_Pos: 555
             Relay_Master_Log_File: mysql-bin.000001
                  Slave_IO_Running: Yes
                 Slave_SQL_Running: Yes
    
    ###Verification
    #Master node
    MariaDB [(none)]> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    4 rows in set (0.012 sec)
    MariaDB [(none)]> create database data;
    Query OK, 1 row affected (0.018 sec)
    MariaDB [(none)]> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | data               |
    | information_schema |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    5 rows in set (0.004 sec)
    
    #Slave node
    MariaDB [(none)]> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | data               |
    | information_schema |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    5 rows in set (0.007 sec)
  • Log in to the provided private cloud platform, create one more centos7.5 cloud instance, and, using the provided software packages together with the master/slave database configured in the previous task, set these three instances up as a read/write-splitting database architecture.

    #Create one cloud instance attached to the same network as the two mariadb hosts
    #mycat 192.168.100.13
    #Configure the yum repository, install vim, and scp Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz and the schema.xml file to the host
    [root@mycat /]# ls
    bin  boot  dev  etc  home  lib  lib64  media  mnt  Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz  opt  proc  root  run  sbin  schema.xml  srv  sys  tmp  usr  var
    [root@mycat /]# tar -xzvf Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz 
    [root@mycat /]# chmod -R 777 mycat/
    #Export the environment variable
    [root@mycat /]# echo "export MYCAT_HOME=/mycat" >> /etc/profile 
    [root@mycat /]# source /etc/profile
    
    
    #Mycat depends on the JDK
    [root@mycat /]# yum install -y java-1.8.0-openjdk
    [root@mycat /]# java -version
    openjdk version "1.8.0_161"
    OpenJDK Runtime Environment (build 1.8.0_161-b14)
    OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
    
    #Replace schema.xml
    [root@mycat /]# cp -f /schema.xml /mycat/conf/
    cp: overwrite ‘/mycat/conf/schema.xml’? y
    
    [root@mycat /]# vim /mycat/conf/schema.xml
    ########Before########
    <?xml version="1.0"?>
    <!DOCTYPE mycat:schema SYSTEM "schema.dtd">
    <mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="USERDB" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn1"></schema>
    <dataNode name="dn1" dataHost="localhost1" database="test" />
    <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" dbType="mysql" dbDriver="native" writeType="0" switchType="1"  slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="hostM1" url="172.16.51.18:3306" user="root" password="123456">
            <readHost host="hostS1" url="172.16.51.30:3306" user="root" password="123456" />
        </writeHost>
    </dataHost>
    </mycat:schema>
    #####################
    
    ########After########
    <?xml version="1.0"?>
    <!DOCTYPE mycat:schema SYSTEM "schema.dtd">
    <mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="mariadb" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn1"></schema>
    <dataNode name="dn1" dataHost="localhost1" database="data" />
    <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" dbType="mysql" dbDriver="native" writeType="0" switchType="1"  slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="hostM1" url="192.168.100.10:3306" user="root" password="000000">
            <readHost host="hostS1" url="192.168.100.6:3306" user="root" password="000000" />
        </writeHost>
    </dataHost>
    </mycat:schema>
    #####################
    
    [root@mycat /]# vim /mycat/conf/server.xml
    ########Before########
            <user name="root">
                    <property name="password">123456</property>
                    <property name="schemas">TESTDB</property>
    
                    <!-- table-level DML privilege settings -->
                    <!--            
                    <privileges check="false">
                            <schema name="TESTDB" dml="0110" >
                                    <table name="tb01" dml="0000"></table>
                                    <table name="tb02" dml="1111"></table>
                            </schema>
                    </privileges>           
                     -->
            </user>
    
            <user name="user">
                    <property name="password">user</property>
                    <property name="schemas">TESTDB</property>
                    <property name="readOnly">true</property>
            </user>
    #####################
    
    ########After########
            <user name="root">
                    <property name="password">000000</property>
                    <property name="schemas">mariadb</property>
    
                    <!-- table-level DML privilege settings -->
                    <!--            
                    <privileges check="false">
                            <schema name="TESTDB" dml="0110" >
                                    <table name="tb01" dml="0000"></table>
                                    <table name="tb02" dml="1111"></table>
                            </schema>
                    </privileges>           
                     -->
            </user>
    #####################
    
    #Start Mycat
    [root@mycat /]# mycat/bin/mycat start
    Starting Mycat-server...
    #Verify
    [root@mycat /]# yum install -y MariaDB-client
    [root@mycat /]# mysql -h 127.0.0.1 -P9066 -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MySQL connection id is 2
    Server version: 5.6.29-mycat-1.6-RELEASE-20161028204710 MyCat Server (monitor)
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MySQL [(none)]> show @@datasource;
    +----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+
    | DATANODE | NAME   | TYPE  | HOST           | PORT | W/R  | ACTIVE | IDLE | SIZE | EXECUTE | READ_LOAD | WRITE_LOAD |
    +----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+
    | dn1      | hostM1 | mysql | 192.168.100.10 | 3306 | W    |      0 |   10 | 1000 |      51 |         5 |          0 |
    | dn1      | hostS1 | mysql | 192.168.100.6  | 3306 | R    |      0 |    0 | 1000 |       0 |         0 |          0 |
    +----------+--------+-------+----------------+------+------+--------+------+------+---------+-----------+------------+
    2 rows in set (0.081 sec)
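    
    Application traffic goes through Mycat's data port 8066 (9066 above is the management port). A minimal sketch of a read/write round trip, assuming the third-party PyMySQL package and run on the mycat host:
    
    import pymysql  # third-party package: pip install PyMySQL
    
    # "mariadb" is the logical schema from schema.xml, backed by database "data".
    conn = pymysql.connect(host='127.0.0.1', port=8066,
                           user='root', password='000000', database='mariadb')
    cur = conn.cursor()
    
    # Writes are sent to hostM1 (the master); with balance="3" the
    # SELECT below is served by hostS1 (the slave).
    cur.execute("CREATE TABLE IF NOT EXISTS t1 (id INT PRIMARY KEY, v VARCHAR(20))")
    cur.execute("REPLACE INTO t1 VALUES (1, 'via-mycat')")
    conn.commit()
    cur.execute("SELECT * FROM t1")
    print(cur.fetchall())   # ((1, 'via-mycat'),)
    conn.close()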
  • On the controller node, write a Python program /root/create_sec.py in the /root directory against the OpenStack API that creates a security group named pvm_sec with ports 20, 21, 22, 80 and 3306 open (if a security group with the same name already exists, the code must delete it first), and prints the security group's name, id and full details (a sketch follows below)
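
    A minimal sketch of create_sec.py, reusing the token pattern from the api notes below; the Neutron endpoint (controller:9696) and the demo/admin/000000 credentials are assumptions taken from this environment:

    import json
    import requests

    # Obtain a token (same request as in the token example below).
    auth_url = 'http://controller:5000/v3/auth/tokens'
    auth = {"auth": {
        "identity": {"methods": ["password"],
                     "password": {"user": {"domain": {"name": "demo"},
                                           "name": "admin",
                                           "password": "000000"}}},
        "scope": {"project": {"domain": {"name": "demo"}, "name": "admin"}}}}
    token = requests.post(auth_url, data=json.dumps(auth),
                          headers={"Content-Type": "application/json"}
                          ).headers['X-Subject-Token']
    headers = {"Content-Type": "application/json", "X-Auth-Token": token}

    neutron = 'http://controller:9696/v2.0'

    # Delete any existing security group with the same name first.
    existing = requests.get(neutron + '/security-groups?name=pvm_sec',
                            headers=headers).json()['security_groups']
    for sg in existing:
        requests.delete(neutron + '/security-groups/' + sg['id'], headers=headers)

    # Create the security group.
    sg_body = {"security_group": {"name": "pvm_sec",
                                  "description": "created via api"}}
    sg = requests.post(neutron + '/security-groups', data=json.dumps(sg_body),
                       headers=headers).json()['security_group']

    # Open the required ingress TCP ports.
    for port in (20, 21, 22, 80, 3306):
        rule = {"security_group_rule": {"security_group_id": sg['id'],
                                        "direction": "ingress",
                                        "ethertype": "IPv4",
                                        "protocol": "tcp",
                                        "port_range_min": port,
                                        "port_range_max": port}}
        requests.post(neutron + '/security-group-rules',
                      data=json.dumps(rule), headers=headers)

    # Print name, id and the full details.
    detail = requests.get(neutron + '/security-groups/' + sg['id'],
                          headers=headers).json()
    print(sg['name'], sg['id'])
    print(json.dumps(detail, indent=2))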

API

  • Using the Python-api.tar.gz package provided by the http service, install the python3.6 software and its dependency libraries

    [root@controller opt]# cd Python-api/
    [root@controller Python-api]# ls
    certifi-2019.11.28-py2.py3-none-any.whl  chardet-3.0.4-py2.py3-none-any.whl  idna-2.8-py2.py3-none-any.whl  python-3.6.8.tar.gz  requests-2.24.0-py2.py3-none-any.whl  urllib3-1.25.11-py3-none-any.whl  安装指南.txt
    [root@controller Python-api]# cat 安装指南.txt 
    1. Install python3.6 normally
    2. pip3 install certifi-2019.11.28-py2.py3-none-any.whl
    3. pip3 install urllib3-1.25.11-py3-none-any.whl
    4. pip3 install idna-2.8-py2.py3-none-any.whl
    5. pip3 install chardet-3.0.4-py2.py3-none-any.whl
    6. pip3 install requests-2.24.0-py2.py3-none-any.whl
    [root@controller Python-api]# tar -xzf python-3.6.8.tar.gz -C /
    [root@controller Python-api]# cd /python-3.6.8/
    [root@controller python-3.6.8]# ls
    packages  repodata
    [root@controller python-3.6.8]# pwd
    /python-3.6.8
    [root@controller python-3.6.8]# echo "[python]" >> /etc/yum.repos.d/local.repo 
    [root@controller python-3.6.8]# echo "name=python" >> /etc/yum.repos.d/local.repo 
    [root@controller python-3.6.8]# echo "baseurl=file:///python-3.6.8" >> /etc/yum.repos.d/local.repo 
    [root@controller python-3.6.8]# echo "gpgcheck=0" >> /etc/yum.repos.d/local.repo 
    [root@controller python-3.6.8]# echo "enable=1" >> /etc/yum.repos.d/local.repo 
    [root@controller python-3.6.8]# yum clean all
    Loaded plugins: fastestmirror
    Cleaning repos: centos iaas python
    Cleaning up everything
    Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
    Cleaning up list of fastest mirrors
    [root@controller python-3.6.8]# yum list | grep python3
    python3.x86_64                             3.6.8-13.el7                python   
    python3-libs.x86_64                        3.6.8-13.el7                python   
    python3-pip.noarch                         9.0.3-7.el7_7               python   
    python3-setuptools.noarch                  39.2.0-10.el7               python   
    [root@controller python-3.6.8]# yum install -y python3
    
    #Install the modules
    [root@controller python-3.6.8]# cd /opt/Python-api/
    [root@controller Python-api]# pip3 install certifi-2019.11.28-py2.py3-none-any.whl
    WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
    Processing ./certifi-2019.11.28-py2.py3-none-any.whl
    Installing collected packages: certifi
    Successfully installed certifi-2019.11.28
    [root@controller Python-api]# pip3 install urllib3-1.25.11-py3-none-any.whl
    WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
    Processing ./urllib3-1.25.11-py3-none-any.whl
    Installing collected packages: urllib3
    Successfully installed urllib3-1.25.11
    [root@controller Python-api]# pip3 install idna-2.8-py2.py3-none-any.whl
    WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
    Processing ./idna-2.8-py2.py3-none-any.whl
    Installing collected packages: idna
    Successfully installed idna-2.8
    [root@controller Python-api]# pip3 install chardet-3.0.4-py2.py3-none-any.whl
    WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
    Processing ./chardet-3.0.4-py2.py3-none-any.whl
    Installing collected packages: chardet
    Successfully installed chardet-3.0.4
    [root@controller Python-api]# pip3 install requests-2.24.0-py2.py3-none-any.whl 
    WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
    Processing ./requests-2.24.0-py2.py3-none-any.whl
    Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/site-packages (from requests==2.24.0)
    Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/site-packages (from requests==2.24.0)
    Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/site-packages (from requests==2.24.0)
    Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/site-packages (from requests==2.24.0)
    Installing collected packages: requests
    Successfully installed requests-2.24.0
    
    #Verify
    [root@controller Python-api]# python3
    Python 3.6.8 (default, Apr  2 2020, 13:34:55) 
    [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> 
    
  • Obtain a token

    • | Parameter | Type | Description |
      | :--- | :--- | :--- |
      | user domain (required) | string | The domain of the user. |
      | user name (required) | string | The user name. If you do not provide a user name and password, you must provide a token. |
      | password (required) | string | The password of the user. |
      | project domain (optional) | string | The domain of the project; a required part of the scope object. |
      | project name (optional) | string | The project name. Project ID and project name are both optional. |
      | project ID (optional) | string | The project ID. Either the project ID or the project name is required alongside the project domain; both live under the scope object. If you do not know the project name or ID, send a request without any scope object. |

    • Load the environment variables

      source /etc/keystone/admin-openrc.sh
      
      [root@controller api]# cat /etc/keystone/admin-openrc.sh 
      export OS_PROJECT_DOMAIN_NAME=demo
      export OS_USER_DOMAIN_NAME=demo
      export OS_PROJECT_NAME=admin
      export OS_USERNAME=admin
      export OS_PASSWORD=000000
      export OS_AUTH_URL=http://controller:5000/v3
      export OS_IDENTITY_API_VERSION=3
      export OS_IMAGE_API_VERSION=2
    • URL: $OS_AUTH_URL/auth/tokens?nocatalog
    • Method: POST
    • Request headers

      • Content-Type: application/json
    • Request body: JSON

      {
          "auth": {
              "identity": {
                  "methods": [
                      "password"
                  ],
                  "password": {
                      "user": {
                          "domain": {
                              "name": "$OS_USER_DOMAIN_NAME"
                          },
                          "name": "$OS_USERNAME",
                          "password": "$OS_PASSWORD"
                      }
                  }
              },
              "scope": {
                  "project": {
                      "domain": {
                          "name": "$OS_PROJECT_DOMAIN_NAME"
                      },
                      "name": "$OS_PROJECT_NAME"
                  }
              }
          }
      }
    • curl

      curl -v -s -X POST $OS_AUTH_URL/auth/tokens?nocatalog   -H "Content-Type: application/json"   -d '{ "auth": { "identity": { "methods": ["password"],"password": {"user": {"domain": {"name": "'"$OS_USER_DOMAIN_NAME"'"},"name": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"} } }, "scope": { "project": { "domain": { "name": "'"$OS_PROJECT_DOMAIN_NAME"'" }, "name":  "'"$OS_PROJECT_NAME"'" } } }}'
      
      [root@controller api]# curl -v -s -X POST $OS_AUTH_URL/auth/tokens?nocatalog   -H "Content-Type: application/json"   -d '{ "auth": { "identity": { "methods": ["password"],"password": {"user": {"domain": {"name": "'"$OS_USER_DOMAIN_NAME"'"},"name": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"} } }, "scope": { "project": { "domain": { "name": "'"$OS_PROJECT_DOMAIN_NAME"'" }, "name":  "'"$OS_PROJECT_NAME"'" } } }}'
      * About to connect() to controller port 5000 (#0)
      *   Trying 172.16.1.151...
      * Connected to controller (172.16.1.151) port 5000 (#0)
      > POST /v3/auth/tokens?nocatalog HTTP/1.1
      > User-Agent: curl/7.29.0
      > Host: controller:5000
      > Accept: */*
      > Content-Type: application/json
      > Content-Length: 220
      > 
      * upload completely sent off: 220 out of 220 bytes
      < HTTP/1.1 201 Created
      < Date: Sun, 19 Dec 2021 14:13:52 GMT
      < Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips mod_wsgi/3.4 Python/2.7.5
      < X-Subject-Token: gAAAAABhvz4gNvySQnPWjGxlKlea-gzvtY80v_NgLAnDP9z9Qkp2R1NAMJUEaBzydbmTjftUxRTa-TBQqBwM4XrUk396XJQz6W0tIQl8TmjdlZ9z4iOw2MM4w6XWKfhEGo8VqSS4CuH7ZoJgdvmc0wofuFXX2cZ7y4b0d4eV7c8axoTuyBVMdZI
      < Vary: X-Auth-Token
      < x-openstack-request-id: req-1e762a5f-7ee5-43f1-9bd8-d84422e295e0
      < Content-Length: 568
      < Content-Type: application/json
      < 
      * Connection #0 to host controller left intact
      {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "c04096674c744030bf313cf107614f8d", "name": "admin"}], "expires_at": "2021-12-19T15:13:52.000000Z", "project": {"domain": {"id": "6897ff73286446cda4016bb748d2fd4d", "name": "demo"}, "id": "54bea643f53e4f2b96970ddfc14d3138", "name": "admin"}, "user": {"password_expires_at": null, "domain": {"id": "6897ff73286446cda4016bb748d2fd4d", "name": "demo"}, "id": "32a5404a9ca14a09ba0f12ae34c7a079", "name": "admin"}, "audit_ids": ["UxIdxDNCRk-JoeImrrG1yg"], "issued_at": "2021-12-19T14:13:52.000000Z"}}
    • python

      import json, requests
      
      url = "http://172.16.1.151:5000/v3/auth/tokens"
      body = {
        "auth": {
          "identity": {
            "methods": [
              "password"
            ],
            "password": {
              "user": {
                "domain": {
                  "name": "demo"
                },
                "name": "admin",
                "password": "000000"
              }
            }
          },
          "scope": {
            "project": {
              "domain": {
                "name": "demo"
              },
              "name": "admin"
            }
          }
        }
      }
      headers = {
          "Content-Type": "application/json",
      }
      
      token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token']
      
      print(token)

1. Flavors

  • Get the list of flavors

    flavors = requests.get('http://controller:8774/v2.1/flavors', headers={'X-Auth-Token': token})
    json_flavors = json.loads(flavors.text)
    for i in json_flavors['flavors']:
        print(i)
[root@controller api]# python3 get_flavors.py 
{'id': '051c9373-f51f-46cc-80bd-56ded52f4678', 'links': [{'href': 'http://controller:8774/v2.1/flavors/051c9373-f51f-46cc-80bd-56ded52f4678', 'rel': 'self'}, {'href': 'http://controller:8774/flavors/051c9373-f51f-46cc-80bd-56ded52f4678', 'rel': 'bookmark'}], 'name': 'gpmall'}
{'id': '1b17a049-e504-4dc1-9fb4-da95fabf06ca', 'links': [{'href': 'http://controller:8774/v2.1/flavors/1b17a049-e504-4dc1-9fb4-da95fabf06ca', 'rel': 'self'}, {'href': 'http://controller:8774/flavors/1b17a049-e504-4dc1-9fb4-da95fabf06ca', 'rel': 'bookmark'}], 'name': 'chinaskill'}
{'id': '4f9a6045-2968-457f-9111-1a9968dd2b69', 'links': [{'href': 'http://controller:8774/v2.1/flavors/4f9a6045-2968-457f-9111-1a9968dd2b69', 'rel': 'self'}, {'href': 'http://controller:8774/flavors/4f9a6045-2968-457f-9111-1a9968dd2b69', 'rel': 'bookmark'}], 'name': 'test'}
  • Create a flavor

    • URL: http://172.16.1.151:8774/v2.1/flavors
    • Method: POST
    • Request headers:

      • Content-Type: application/json
      • X-Auth-Token: token
    • Request body: JSON

      {
        "flavor": {
          "name": name,
          "ram": ram,
          "vcpus": vcpus,
          "disk": disk,
          "id": id
        }
      }
# Create a flavor
def create_flavor(id, vcpus, ram, disk, name):
    headers = {
        "Content-Type": "application/json",
        "X-Auth-Token": token
    }
    body = {
        "flavor": {
            "name": name,
            "ram": ram,
            "vcpus": vcpus,
            "disk": disk,
            "id": id
        }
    }
    return requests.post('http://172.16.1.151:8774/v2.1/flavors', data=json.dumps(body), headers=headers)


flavor = create_flavor(id=100, vcpus=1, ram=1024, disk=10, name='api-create')
print(flavor.text)
[root@controller api]# python3 create_flavors.py 
{"flavor": {"name": "api-create", "links": [{"href": "http://172.16.1.151:8774/v2.1/flavors/100", "rel": "self"}, {-"href": "http://172.16.1.151:8774/flavors/100", "rel": "bookmark"}], "ram": 1024, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 10, "id": "100"}}

[root@controller api]# python get_flavors.py 
{u'id': u'100', u'links': [{u'href': u'http://controller:8774/v2.1/flavors/100', u'rel': u'self'}, {u'href': u'http://controller:8774/flavors/100', u'rel': u'bookmark'}], u'name': u'api-create'}
  • Delete a flavor

    • URL: http://172.16.1.151:8774/v2.1/flavors/ + (flavor_id)
    • Method: DELETE
    • Request headers:

      • Content-Type: application/json
      • X-Auth-Token: token
    headers = {
        "Content-Type": "application/json",
        "X-Auth-Token": token
    }
    #Delete the flavor
    flavor = requests.delete('http://172.16.1.151:8774/v2.1/flavors/' + '100',headers=headers)
    print(flavor.text)
[root@controller api]# openstack flavor list
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| ID                                   | Name       |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| 051c9373-f51f-46cc-80bd-56ded52f4678 | gpmall     | 4096 |   20 |         0 |     4 | True      |
| 100                                  | api-create | 1024 |   10 |         0 |     1 | True      |
| 1b17a049-e504-4dc1-9fb4-da95fabf06ca | chinaskill |  512 |   10 |         0 |     1 | True      |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
[root@controller api]# python3 delete_flavor.py 

[root@controller api]# openstack flavor list
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| ID                                   | Name       |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
| 051c9373-f51f-46cc-80bd-56ded52f4678 | gpmall     | 4096 |   20 |         0 |     4 | True      |
| 1b17a049-e504-4dc1-9fb4-da95fabf06ca | chinaskill |  512 |   10 |         0 |     1 | True      |
+--------------------------------------+------------+------+------+-----------+-------+-----------+
  • Update a flavor

    • URL: http://172.16.1.151:8774/v2.1/flavors/ + (flavor_id)
    • Method: PUT
    • Request headers:

      • Content-Type: application/json
      • X-Auth-Token: token
    • Request body: JSON

      {
          "flavor": {
              "description": "更新描述"
          }
      }
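    • A sketch of the corresponding call, following the earlier snippets (token, requests and json already in scope). Note that the flavor description field only exists from compute microversion 2.55 onward, so the request opts in via the microversion header; whether this deployment's Nova is new enough is an assumption:

      headers = {
          "Content-Type": "application/json",
          "X-Auth-Token": token,
          # descriptions require compute API microversion >= 2.55
          "X-OpenStack-Nova-API-Version": "2.55"
      }
      body = {"flavor": {"description": "updated description"}}
      flavor = requests.put('http://172.16.1.151:8774/v2.1/flavors/' + '100',
                            data=json.dumps(body), headers=headers)
      print(flavor.text)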

2. Images

  • http://controller:8774/v2.1/images (this image API has been deprecated)

  • List images

    • URL: http://controller:8774/v2.1/images
    • Detail URL: http://controller:8774/v2.1/images/detail
    • Method: GET
    • 请求头:

      • Content-Type: application/json
      • X-Auth-Token: token
  • Create an image

    • URL: http://controller:9292/v2/images
    • Method: POST
    • Request headers:

      • Content-Type: application/json
      • X-Auth-Token: token
      • Location:/opt/images/CentOS_7.5_x86_64_XD.qcow2
    • Request body: JSON
    • {
          "disk_format": "qcow2",
          "name": "api"
      }
    • | Name | In | Type | Description |
      | :--- | :--- | :--- | :--- |
      | container_format (Optional) | body | enum | Format of the image container. Values may vary based on the configuration available in a particular OpenStack cloud; see the Image Schema response from the cloud itself for the valid values. Example formats are: ami, ari, aki, bare, ovf, ova, or docker. The value might be null (JSON null data type). |
      | disk_format (Optional) | body | enum | The format of the disk. Values may vary based on the configuration available in a particular OpenStack cloud; see the Image Schema response from the cloud itself for the valid values. Example formats are: ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, ploop or iso. The value might be null (JSON null data type). Newton changes: the vhdx disk format is a supported value. Ocata changes: the ploop disk format is a supported value. |
      | id (Optional) | body | string | A unique, user-defined image UUID, in the format nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn, where n is a hexadecimal digit from 0 to f, or F. For example: b2173dd3-7ad6-4362-baa6-a68bce3565cb. If you omit this value, the API generates a UUID for the image. If you specify a value that has already been assigned, the request fails with a 409 response code. |
      | min_disk (Optional) | body | integer | Amount of disk space in GB that is required to boot the image. |
      | min_ram (Optional) | body | integer | Amount of RAM in MB that is required to boot the image. |
      | name (Optional) | body | string | The name of the image. |
      | protected (Optional) | body | boolean | Image protection for deletion. Valid value is true or false. Default is false. |
      | tags (Optional) | body | array | List of tags for this image. Each tag is a string of at most 255 chars. The maximum number of tags allowed on an image is set by the operator. |
      | visibility (Optional) | body | string | Visibility for this image. Valid value is one of: public, private, shared, or community. At most sites, only an administrator can make an image public. Since Image API v2.5, the default value is shared. |
    • [root@controller images]# openstack image list
      +--------------------------------------+-----------+--------+
      | ID                                   | Name      | Status |
      +--------------------------------------+-----------+--------+
      | 845a178a-367b-45a8-a9f5-a75a6e987e2f | api       | queued |
      +--------------------------------------+-----------+--------+
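    • The "queued" status above means only the metadata record exists. A sketch of creating the record and then uploading the image data through the Glance v2 API (token, requests and json as in the earlier snippets; the file path is the one noted in the Location line above):

      # Create the image record; its status starts as "queued".
      headers = {"Content-Type": "application/json", "X-Auth-Token": token}
      body = {"name": "api", "disk_format": "qcow2", "container_format": "bare"}
      image = requests.post('http://controller:9292/v2/images',
                            data=json.dumps(body), headers=headers).json()
      print(image['id'], image['status'])

      # Upload the image data; the status then changes to "active".
      up_headers = {"Content-Type": "application/octet-stream",
                    "X-Auth-Token": token}
      with open('/opt/images/CentOS_7.5_x86_64_XD.qcow2', 'rb') as f:
          requests.put('http://controller:9292/v2/images/' + image['id'] + '/file',
                       data=f, headers=up_headers)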
  • Delete an image

    • URL: http://controller:9292/v2/images/ + (image id) (the nova 8774 image proxy is deprecated, as noted above)
    • Method: DELETE
    • Request headers:

      • X-Auth-Token: token
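    • A matching sketch (image_id is a placeholder for the UUID from the image list):

      headers = {"X-Auth-Token": token}
      resp = requests.delete('http://controller:9292/v2/images/' + image_id,
                             headers=headers)
      print(resp.status_code)   # 204 on success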