Cloning VMs that use Ceph RBD

After deploying a KVM virtual machine in a Libvirt-integrated Ceph RBD environment and getting it running, the natural next step is to clone VMs in bulk, much like with a libvirt LVM volume-managed storage pool via virsh vol-clone, in order to build a large cluster.

However, attempting:

virt-clone --original z-ubuntu20-rbd --name z-k8s-m-1 --auto-clone

produces the error:

ERROR    [Errno 2] No such file or directory: 'rbd://192.168.6.204/libvirt-pool'

This problem is described in virt-clone can’t handle domains with rbd volumes #177: when the virt-install code inspects the VM XML for pool + vol storage, it does not consider the storage to be managed, so it cannot clone it.

Solution approach

If the only obstacle is that virt-clone cannot handle the VM image in Ceph RBD storage, the clone can be split into separate steps:

  • Remove the Ceph RBD disk section from the template VM's XML configuration and use the remainder as the template

  • Use rbd cp libvirt-pool/z-ubuntu20 libvirt-pool/${VM} to clone the RBD disk image independently

  • Save the removed Ceph RBD disk XML as a separate file and attach it after the VM clone completes (see the PCIe device attach method in Open Virtual Machine Firmware(OVMF))

Cloning a Ceph RBD VM in practice

  • Back up the z-ubuntu20-rbd configuration:

    virsh dumpxml z-ubuntu20-rbd > z-ubuntu20-rbd.xml
    
  • Extract the disk definition into rbd.xml:

RBD disk device XML
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='3f203352-fcfc-4329-b870-34783e13493a'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/RBD_DISK'>
    <host name='192.168.6.204' port='6789'/>
    <host name='192.168.6.205' port='6789'/>
    <host name='192.168.6.206' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>

Note that in this rbd.xml I use the placeholder RBD_DISK; each subsequent clone substitutes this placeholder to attach a different device.

  • Edit the z-ubuntu20-rbd VM configuration and remove the RBD disk section:

    virsh edit z-ubuntu20-rbd
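
As an alternative to hand-editing, the same removal can be done non-interactively with virsh detach-device, reusing the rbd.xml extracted above (libvirt matches a disk primarily by its target dev='vda', so the RBD_DISK placeholder in the source name should not prevent the match; this is the same approach used again later in this article):

    virsh detach-device z-ubuntu20-rbd rbd.xml --config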
    
  • Clone the new VM disk independently:

    rbd cp libvirt-pool/z-ubuntu20 libvirt-pool/z-k8s-m-1
    

After the copy completes, verify:

sudo rbd ls -p libvirt-pool

which shows:

z-k8s-m-1
z-ubuntu20
  • Refresh the libvirt storage pool images_rbd (otherwise libvirt cannot see the newly created RBD disk image):

    virsh pool-refresh images_rbd
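
To confirm libvirt can now see the new volume, list the pool contents (a quick check, using the pool name images_rbd from above):

    virsh vol-list images_rbd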
    
  • Use virt-clone to clone the now diskless VM:

    virt-clone --original z-ubuntu20-rbd --name z-k8s-m-1 --auto-clone
    

Output:

Allocating 'z-k8s-m-1_VARS.fd'                          | 128 kB  00:00:00
Clone 'z-k8s-m-1' created successfully.
  • Then generate the XML for the disk to be attached:

    VM=z-k8s-m-1
    cat rbd.xml | sed "s/RBD_DISK/$VM/g" > ${VM}-disk.xml
    
  • Attach the disk configuration to the new VM:

    virsh attach-device $VM ${VM}-disk.xml --config
    

Output:

Device attached successfully
  • Now we can start the new VM z-k8s-m-1:

    virsh start z-k8s-m-1
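
The serial console can then be attached to watch it boot (or pass --console at start time, as done below):

    virsh console z-k8s-m-1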
    
  • Fix the hostname inside the VM (later replaced by image customization, see Using libguestfs to modify KVM images)

Troubleshooting the cloned VM's boot failure

A problem appeared here: virsh console z-k8s-m-1 produced no output. The log /var/log/libvirt/qemu/z-k8s-m-1.log contains the libvirt message "Domain id=X is tainted: host-cpu" (this taint merely records the use of CPU host-passthrough and is informational, not the failure itself).

Comparing z-ubuntu20-rbd with the cloned z-k8s-m-1, the main addition in the new VM is the guest_agent channel:

<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-36-z-k8s-m-1/org.qemu.guest_agent.0'/>
  <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
  <alias name='channel0'/>
  <address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>

Nothing else stands out.
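
For reference, this kind of side-by-side comparison can be produced with a simple diff over the two domain definitions (a sketch using bash process substitution):

    diff <(virsh dumpxml z-ubuntu20-rbd) <(virsh dumpxml z-k8s-m-1)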

  • Switching to virsh start z-k8s-m-1 --console to attach the console directly reveals an error during boot:

    !!!! X64 Exception Type - 0D(#GP - General Protection)  CPU Apic ID - 00000000 !!!!
    ExceptionData - 0000000000000000
    RIP  - 000000007EAB8BCA, CS  - 0000000000000038, RFLAGS - 0000000000010002
    RAX  - 49C7C83113FA7698, RCX - 0000000000000000, RDX - 000000007FBD3898
    RBX  - 0000000000000020, RSP - 000000007FF9C9B0, RBP - 00000000746E7684
    RSI  - 00000000746E7668, RDI - 000000007FBD3898
    R8   - 8000000000000001, R9  - 000000007FF9CAC0, R10 - 0000000000000000
    R11  - 0000000000001000, R12 - 00000000746E7665, R13 - 0000000000000000
    R14  - 0000000000000000, R15 - 000000007F2B5E18
    DS   - 0000000000000030, ES  - 0000000000000030, FS  - 0000000000000030
    GS   - 0000000000000030, SS  - 0000000000000030
    CR0  - 0000000080010033, CR2 - 0000000000000000, CR3 - 000000007FC01000
    CR4  - 0000000000000668, CR8 - 0000000000000000
    DR0  - 0000000000000000, DR1 - 0000000000000000, DR2 - 0000000000000000
    DR3  - 0000000000000000, DR6 - 00000000FFFF0FF0, DR7 - 0000000000000400
    GDTR - 000000007FBEE698 0000000000000047, LDTR - 0000000000000000
    IDTR - 000000007F2D0018 0000000000000FFF,   TR - 0000000000000000
    FXSAVE_STATE - 000000007FF9C610
    !!!! Find image based on IP(0x7EAB8BCA) /build/edk2-xUnmxG/edk2-0~20191122.bd85bf54/Build/OvmfX64/RELEASE_GCC5/X64/MdeModulePkg/Universal/Variable/RuntimeDxe/VariableRuntimeDxe/DEBUG/VariableRuntimeDxe.dll (ImageBase=000000007EAB1000, EntryPoint=000000007EABD0EC) !!!!
    

This looks like an EFI firmware error: the faulting image is OVMF's VariableRuntimeDxe, the driver that manages the EFI variable store, which suggests a problem with the NVRAM variables file (the z-k8s-m-1_VARS.fd that virt-clone allocated).

Ideas for fixing it:

  • Attach the newly cloned disk to the known-good z-ubuntu20-rbd (an Open Virtual Machine Firmware(OVMF) VM running on Ceph RBD)

  • Hand-edit a z-k8s-m-1 XML based on z-ubuntu20-rbd and define the VM from it

  • Upgrade the entire host operating system and try again

  • Install Ubuntu Linux 20.04.3 LTS from scratch directly onto Ceph RBD (the cleanest approach)

Booting z-ubuntu20-rbd with the cloned RBD disk attached

  • Insert the z-k8s-m-1-disk.xml RBD disk definition into z-ubuntu20-rbd for testing:

z-ubuntu20-rbd using z-k8s-m-1-disk.xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='3f203352-fcfc-4329-b870-34783e13493a'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/z-k8s-m-1'>
    <host name='192.168.6.204' port='6789'/>
    <host name='192.168.6.205' port='6789'/>
    <host name='192.168.6.206' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
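
Since the RBD disk was already removed from z-ubuntu20-rbd earlier, the cloned disk can be attached the same way as before (one plausible way; virsh edit works too):

    virsh attach-device z-ubuntu20-rbd z-k8s-m-1-disk.xml --config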

After attaching the disk, start z-ubuntu20-rbd:

virsh start z-ubuntu20-rbd --console

Output:

z-ubuntu20-rbd boot console output
/usr/sbin/fsck.xfs: XFS file system.
[    2.247332] pcieport 0000:00:01.5: pciehp: Failed to check link status
[    2.287301] pcieport 0000:00:01.2: pciehp: Failed to check link status
[  OK  ] Started Show Plymouth Boot Screen.
plymouth-start.service
[  OK  ] Started Forward Password R…s to Plymouth Directory Watch.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Started Network Service.
systemd-networkd.service
[  OK  ] Found device /dev/ttyS0.
[  OK  ] Found device /dev/disk/by-uuid/CC06-E7F6.
         Starting File System Check…/dev/disk/by-uuid/CC06-E7F6...
[  OK  ] Started File System Check Daemon to report status.
systemd-fsckd.service
[  OK  ] Listening on Load/Save RF …itch Status /dev/rfkill Watch.
[  OK  ] Finished File System Check…n /dev/disk/by-uuid/CC06-E7F6.
         Mounting /boot/efi...
systemd-fsck@dev-disk-by\x2duuid-CC06\x2dE7F6.service
[  OK  ] Mounted /boot/efi.
boot-efi.mount
[  OK  ] Reached target Local File Systems.
         Starting Load AppArmor profiles...
         Starting Set console font and keymap...
         Starting Tell Plymouth To Write Out Runtime Data...
         Starting Create Volatile Files and Directories...
[  OK  ] Finished Tell Plymouth To Write Out Runtime Data.
[  OK  ] Finished Set console font and keymap.
console-setup.service
[  OK  ] Finished Create Volatile Files and Directories.
systemd-tmpfiles-setup.service
         Starting Network Name Resolution...
         Starting Network Time Synchronization...
         Starting Update UTMP about System Boot/Shutdown...
systemd-update-utmp.service
[  OK  ] Finished Update UTMP about System Boot/Shutdown.
[  OK  ] Started Network Time Synchronization.
systemd-timesyncd.service
[  OK  ] Reached target System Time Set.
[  OK  ] Reached target System Time Synchronized.
[  OK  ] Finished Load AppArmor profiles.
apparmor.service
[  OK  ] Reached target System Initialization.
[  OK  ] Started Trigger to poll fo…y enabled on GCP LTS non-pro).
[  OK  ] Started Daily apt download activities.
[  OK  ] Started Daily apt upgrade and clean activities.
[  OK  ] Started Periodic ext4 Onli…ata Check for All Filesystems.
[  OK  ] Started Discard unused blocks once a week.
[  OK  ] Started Daily rotation of log files.
[  OK  ] Started Daily man-db regeneration.
[  OK  ] Started Message of the Day.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Started Ubuntu Advantage Timer for running repeated jobs.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Timers.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Listening on UUID daemon activation socket.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Basic System.
         Starting Accounts Service...
[  OK  ] Started Regular background program processing daemon.
cron.service
[  OK  ] Started D-Bus System Message Bus.
dbus.service
[  OK  ] Started Save initial kernel messages after boot.
dmesg.service
         Starting Remove Stale Onli…t4 Metadata Check Snapshots...
         Starting Record successful boot for GRUB...
[  OK  ] Started irqbalance daemon.
irqbalance.service
         Starting Dispatcher daemon for systemd-networkd...
         Starting System Logging Service...
         Starting Login Service...
[  OK  ] Started Network Name Resolution.
systemd-resolved.service
[  OK  ] Reached target Network.
[  OK  ] Reached target Host and Network Name Lookups.
         Starting OpenBSD Secure Shell server...
         Starting Permit User Sessions...
         Starting Rotate log files...
         Starting Daily man-db regeneration...
[  OK  ] Finished Permit User Sessions.
systemd-user-sessions.service
         Starting Hold until boot process finishes up...
         Starting Terminate Plymouth Boot Screen...
[  OK  ] Started System Logging Service.
rsyslog.service

Ubuntu 20.04.3 LTS z-ubuntu-rbd ttyS0

z-ubuntu-rbd login:

The VM boots completely normally with the original z-ubuntu20-rbd configuration, which shows that the configuration produced by the earlier virt-clone is what carries the incompatibility:

virt-clone --original z-ubuntu20-rbd --name z-k8s-m-1 --auto-clone

Fresh install of a Ceph RBD based VM template

Following the Libvirt integration with Ceph RBD practice, I reinstalled z-ubuntu20-rbd from scratch and inspected it after installation:

virsh dumpxml z-ubuntu20-rbd > z-ubuntu20-rbd.xml
dumpxml of the freshly installed z-ubuntu20-rbd
<domain type='kvm'>
  <name>z-ubuntu20-rbd</name>
  <uuid>e14d43a0-2887-43ca-b568-cd3ee51ca31d</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://ubuntu.com/ubuntu/20.04"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/z-ubuntu20-rbd_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='3f203352-fcfc-4329-b870-34783e13493a'/>
      </auth>
      <source protocol='rbd' name='libvirt-pool/z-ubuntu20-rbd'>
        <host name='192.168.6.204' port='6789'/>
        <host name='192.168.6.205' port='6789'/>
        <host name='192.168.6.206' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:d9:18:dd'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </rng>
  </devices>
</domain>
  • Clone the freshly installed Ceph RBD based VM:

    virt-clone --original z-ubuntu20-rbd --name z-k8s-m-1 --auto-clone
    

It still fails:

ERROR    [Errno 2] No such file or directory: 'rbd://192.168.6.204:6789/libvirt-pool'

So, try again: remove the RBD disk first, then attempt the clone.

  • Copy out a new new-rbd.xml from the freshly created z-ubuntu20-rbd.xml:

RBD disk device XML of the freshly installed VM
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='3f203352-fcfc-4329-b870-34783e13493a'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/RBD_DISK'>
    <host name='192.168.6.204' port='6789'/>
    <host name='192.168.6.205' port='6789'/>
    <host name='192.168.6.206' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</disk>
  • Detach the disk device from z-ubuntu20-rbd:

    virsh detach-device z-ubuntu20-rbd new-rbd.xml --config
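
A quick way to confirm the domain no longer has any block devices:

    virsh domblklist z-ubuntu20-rbd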
    
  • Now that z-ubuntu20-rbd has no disk, start the clone:

    virt-clone --original z-ubuntu20-rbd --name z-k8s-m-1 --auto-clone
    

This time it succeeds:

Allocating 'z-k8s-m-1_VARS.fd'    | 128 kB  00:00:00
Clone 'z-k8s-m-1' created successfully.
  • The cloned z-k8s-m-1 has no disk yet, so copy one out first:

    sudo rbd cp libvirt-pool/z-ubuntu20-rbd libvirt-pool/z-k8s-m-1
    
  • Then attach the disk:

    VM=z-k8s-m-1
    cat new-rbd.xml | sed "s/RBD_DISK/$VM/g" > ${VM}-disk.xml
    virsh attach-device $VM ${VM}-disk.xml --config
    
  • Start the VM:

    virsh start $VM
    

Cloning from a VM freshly installed on Libvirt-integrated Ceph RBD does successfully replicate the VM. Earlier, when I imported the virtual disk of a libvirt LVM volume-managed storage pool VM into Ceph Block Device(RBD), the original VM XML that I modified by hand let that first imported VM run normally, but further copies made from it with virt-clone hit the incompatibility described above.

For now the workaround is to reinstall each template VM once on Libvirt-integrated Ceph RBD, then clone from that template.

  • Inside the VM, the following need to be fixed:

    # hostname
    sudo hostnamectl set-hostname z-k8s-m-1
    # IP address
    sudo sed -i 's/192.168.6.247/192.168.6.101/g' /etc/netplan/01-netcfg.yaml
    # /etc/hosts
    sudo sed -i '/z-ubuntu-rbd/d' /etc/hosts
    echo "192.168.6.101  z-k8s-m-1.huatai.me  z-k8s-m-1" | sudo tee -a /etc/hosts
    

libguestfs

To make bulk VM creation scriptable, we can modify the VM images with libguestfs (see Using libguestfs to modify KVM images), which removes the manual steps.

Here -d z-k8s-m-2 means accessing the libvirt domain z-k8s-m-2; guestfish boots a small appliance kernel and mounts the VM's disks directly (it can handle Ceph RBD disks too, which shows this works through libvirt).
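
For example, to open an interactive guestfish session against that domain (-i inspects the guest OS and mounts its filesystems automatically):

    sudo guestfish -d z-k8s-m-2 -i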

Note

In interactive guestfish sessions, ><fs> is the prompt, so do not type ><fs> in the examples that follow.

  • Change the VM's hostname:

    ><fs> write /etc/hostname "z-k8s-m-2"
    

guestfish provides a very minimal busybox system without the full set of admin tools, so editing with sed takes a small detour:

><fs> download /etc/netplan/01-netcfg.yaml /tmp/01-netcfg.yaml
><fs> ! sed -i 's/192.168.6.247/192.168.6.102/g' /tmp/01-netcfg.yaml
><fs> upload /tmp/01-netcfg.yaml /etc/netplan/01-netcfg.yaml

Note

guestfish does not include admin tools such as sed, but you can download a file to the local physical host (note: the physical host that runs the VMs), and the ! prefix then runs the local host's tools on it.

Here you can see that I first download /etc/netplan/01-netcfg.yaml into the /tmp directory of the physical host zcloud (i.e. /tmp/01-netcfg.yaml on the host), fix it there, and then upload it back.

The interactive commands above can be rewritten as a short script:

A simple script to change a VM's IP
#!/bin/bash -
set -e
vm="$1"

guestfish -d "$vm" -i <<'EOF'
  download /etc/netplan/01-netcfg.yaml /tmp/01-netcfg.yaml
  ! sed -i 's/192.168.6.247/192.168.6.102/g' /tmp/01-netcfg.yaml
  upload /tmp/01-netcfg.yaml /etc/netplan/01-netcfg.yaml
EOF
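
Saved as, say, change_vm_ip.sh (the filename is my own choice), it would be run as ./change_vm_ip.sh z-k8s-m-2. Note that the target IP 192.168.6.102 is still hardcoded here; the complete script below parameterizes it via hosts.csv.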

The complete clone script

  • Chaining together the VM copy steps above with the libguestfs-based customization gives a simple clone_vm.sh script, used like this:

    ./clone_vm.sh z-k8s-m-3
    
A simple VM clone script
#!/usr/bin/env bash

. /etc/profile

origin_vm="z-ubuntu20-rbd"
vm=$1
rbd_pool="libvirt-pool"
hosts_csv=/home/huatai/github.com/cloud-atlas/source/real/private_cloud/priv_cloud_infra/hosts.csv


function init {

    if [ ! -f new-rbd.xml ]; then

cat > new-rbd.xml <<__XML__
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='3f203352-fcfc-4329-b870-34783e13493a'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/RBD_DISK'>
    <host name='192.168.6.204' port='6789'/>
    <host name='192.168.6.205' port='6789'/>
    <host name='192.168.6.206' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</disk>
__XML__

    fi

    # look up this VM's IP address from the hosts CSV (second field)
    ip=$(grep ",${vm}," $hosts_csv | awk -F, '{print $2}')

    if [ -z "$ip" ]; then
        echo "The VM name you input is invalid, can't find the VM IP"
        echo "Please check $hosts_csv"
        exit 1
    fi
}

function clone_vm {
    virt-clone --original $origin_vm --name $vm --auto-clone
    sudo rbd cp ${rbd_pool}/${origin_vm} ${rbd_pool}/${vm}
    sed "s/RBD_DISK/${vm}/g" new-rbd.xml > ${vm}-disk.xml
    virsh attach-device $vm ${vm}-disk.xml --config
}

function customize_vm {

sudo guestfish -d "$vm" -i <<EOF
  write /etc/hostname "${vm}"

  download /etc/netplan/01-netcfg.yaml /tmp/01-netcfg.yaml
  ! sed -i "s/192.168.6.247/${ip}/g" /tmp/01-netcfg.yaml
  upload /tmp/01-netcfg.yaml /etc/netplan/01-netcfg.yaml

  download /etc/hosts /tmp/hosts
  ! sed -i '/z-ubuntu-rbd/d' /tmp/hosts
  ! echo "${ip}  ${vm}.huatai.me  ${vm}" >> /tmp/hosts
  upload /tmp/hosts /etc/hosts
EOF

}

init
clone_vm
customize_vm
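
With hosts.csv populated for all planned VMs, a whole batch can then be cloned in one loop (a sketch; the VM names follow this article's cluster plan):

    for vm in z-k8s-m-1 z-k8s-m-2 z-k8s-m-3; do
        ./clone_vm.sh $vm
    done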

Note

Adjust a few things for your own environment before using it:

  • In my setup the template VM z-ubuntu20-rbd was installed with the IP address 192.168.6.247
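
  • The script's IP lookup greps hosts.csv for ",${vm}," and prints the second comma-separated field, so each record needs the IP in field 2 and the VM name in a later field, e.g. (an assumed layout for illustration, not from the original environment):

    36,192.168.6.101,z-k8s-m-1,k8s master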

References