The rest, below, is about changes specific to XCP-ng.

# Fully Open Source UEFI implementation

A complete reimplementation of the UEFI support in XCP-ng was written, because Citrix's one was closed source until recently. It was also very interesting to work on, and we learned tons of things. This project will also be pushed upstream into Xen itself! This will additionally allow us to offer Secure Boot support for VMs in the near future.

# Openflow controller access

We automated the configuration needed by the user to allow communication with the Openflow controller in Xen Orchestra. Learn more about VIF network traffic control in Xen Orchestra in the dedicated devblog. We also backported this feature to XCP-ng 8.1, as that older XCP-ng version already supported this improvement.

# Core scheduling (experimental)

As you probably know, Hyper-Threading defeats all mitigations of CPU vulnerabilities related to side-channel attacks (such as Spectre, Meltdown, Fallout…). That's why it was required to disable it as part of the mitigations. The reason is that with Hyper-Threading enabled, you can't protect a VM's vCPUs from attacks originating from other VMs that have workloads scheduled on the same physical core.

With Core Scheduling, you now have another solution: you can choose to leave Hyper-Threading enabled and ask the scheduler to always group the vCPUs of a given VM together on the same physical core(s). This removes the vulnerability to a class of attacks from other VMs, but leaves the VM's processes vulnerable to attacks from malevolent processes within the VM itself. It is to be used only with entirely trusted workloads.

A new XAPI method was written that allows you to choose the granularity of the core scheduler. You will have the option to select a different granularity: CPU, core or socket, depending on the performance/security ratio you are looking for.

# New experimental storage drivers

We added three new experimental storage drivers: zfs, glusterfs and cephfs. We also decided to include all SR drivers by default in XCP-ng now, including experimental ones. We do not, however, install all of their dependencies in dom0 by default: xfsprogs, gluster-server, ceph-common and zfs need to be installed using yum for you to use the related SR drivers. Check the documentation for each storage driver.

# ZFS SR

We already provided zfs packages in our repositories before, but there was no dedicated SR driver: users would use the file driver, which has a major drawback. If the zpool is not active, that driver may believe that the SR suddenly became empty, and drop all VDI metadata. So we developed a dedicated zfs SR driver that checks whether zfs is present before drawing such conclusions. See Transition to the new ZFS SR driver if you were already using ZFS in XCP-ng before the 8.2 release.

# GlusterFS SR

Use this driver to connect to an existing Gluster storage as a shared SR.

# CephFS SR

Use this driver to connect to an existing Ceph storage through the CephFS storage interface.

See the CephFS SR documentation for details.

# Guest tools ISO

Not really a change from XCP-ng 8.1, but rather a change from Citrix Hypervisor 8.2: they dropped the guest tools ISO and replaced it with downloads from their website. We chose to retain the feature and still provide a guest tools ISO that you can mount to your VMs.

We also replaced Citrix's gpumon package, which was not built by us, with a mock build of the gpumon sources that does not require the proprietary NVIDIA developer kit. Many thanks go to the XAPI developers, who accepted to keep the related source code in the XAPI project for us to keep using, rather than deleting it.
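For context on the core scheduling feature described above: in upstream Xen, this behaviour is controlled by the `sched-gran` boot option (with `cpu`, `core` and `socket` values matching the granularities mentioned in the post); the new XAPI method exposes the same choice. Below is a hedged sketch of setting it directly at the Xen command line on an XCP-ng host using the `xen-cmdline` helper — confirm that your Xen version supports the option before relying on it:

```shell
# Ask the Xen scheduler to operate at core granularity: sibling hyperthreads
# of a physical core are then only ever used by vCPUs of the same VM.
/opt/xensource/libexec/xen-cmdline --set-xen sched-gran=core

# Verify what will be passed to Xen at next boot.
/opt/xensource/libexec/xen-cmdline --get-xen sched-gran

# A host reboot is required for the new scheduling granularity to take effect.
```

Use `sched-gran=socket` for coarser grouping, or remove the option to return to the default per-CPU scheduling.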
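To make the storage driver section above concrete, here is a hedged sketch of installing a driver's dependencies and creating an SR with `xe sr-create`. The `device-config` keys and package names follow the XCP-ng documentation, but the device paths, IP addresses, names and the `<host-uuid>` placeholder are illustrative only — check the documentation for each driver before use:

```shell
# Dependencies are not installed in dom0 by default; install only what you need
# (package names as listed in the release notes).
yum install zfs ceph-common

# zfs: create a pool first, then a dedicated zfs SR on top of it.
# /dev/sdb and the mountpoint are examples — adjust to your hardware.
zpool create -m /mnt/zfs tank /dev/sdb
xe sr-create host-uuid=<host-uuid> type=zfs content-type=user \
   name-label=LocalZFS device-config:location=/mnt/zfs

# cephfs: connect to an existing Ceph cluster through the CephFS interface.
xe sr-create type=cephfs shared=true name-label=CephFS-SR \
   device-config:server=192.0.2.10 device-config:serverpath=/xcpsr

# glusterfs: use an existing Gluster volume as a shared SR.
xe sr-create type=glusterfs shared=true name-label=Gluster-SR \
   device-config:server=192.0.2.20:/xcpsr
```

Since these drivers are experimental, prefer testing them on non-production pools first.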