ProxmoxVE - VM Performance Tuning
Proxmox VE, being based on KVM/QEMU, offers numerous avenues for tuning Virtual Machine (VM) performance. Optimal settings often depend on the specific workload of the VM (e.g., I/O intensive, CPU intensive, network intensive). This note covers general best practices and specific configurations.
1. CPU Configuration
- CPU Type:
- host: Generally recommended for best performance. It passes the host CPU's model and feature flags directly to the VM, avoiding emulation overhead and exposing all instruction sets (e.g., AVX, AES-NI).
- Avoid older emulated CPU types unless required for specific compatibility (e.g., live migration to a host with a different CPU).
- Cores & Sockets:
- Start with what the VM workload reasonably needs. Over-allocating CPU cores can sometimes lead to scheduler contention on the host, potentially degrading performance for all VMs.
- For most VMs, configuring sockets=1 and adjusting the number of cores is straightforward. NUMA considerations might influence socket/core layout for very large VMs on multi-socket hosts.
- NUMA Pinning: For VMs with significant RAM and CPU allocated on NUMA-aware host hardware, enabling the "NUMA" option in Proxmox and ensuring the VM's vCPUs and memory are pinned to the same NUMA node can improve memory access latency and performance. This is an advanced setting.
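As a rough sketch (not from the original note), the CPU settings above can be applied from the Proxmox host shell with qm set; the VM ID 100 and the 1-socket/4-core layout are placeholder values:

```bash
# CPU type "host" passes the physical CPU model and flags through to the guest
qm set 100 --cpu host

# Single socket, four cores; size this to the actual workload
qm set 100 --sockets 1 --cores 4

# Enable NUMA awareness for large VMs on multi-socket, NUMA-capable hosts
qm set 100 --numa 1
```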
2. Memory Configuration
- Static Allocation: For performance-sensitive VMs, especially those running databases or memory-intensive applications (like TrueNAS with ZFS), assign static memory.
- Disable Ballooning: As detailed in ProxmoxVE - TrueNAS VM - Disable Memory Ballooning, memory ballooning can be detrimental for workloads that manage their own memory carefully (e.g., ZFS ARC, database buffer pools). Uncheck the "Ballooning Device" for such VMs.
- Hugepages: For certain applications (e.g., databases, Java applications), transparent hugepages (THP) or explicitly configured hugepages can improve memory performance by reducing TLB misses. This is configured at the host level and sometimes within the guest OS.
- Proxmox allows enabling hugepages per VM via the hugepages option in the VM configuration (2 MiB or 1 GiB pages), provided enough hugepages are reserved on the host; see the sketch after this list.
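A minimal sketch of the memory settings above via qm set, assuming a placeholder VM ID 100 and a 16 GiB allocation; the hugepages step only works once the host has hugepages reserved:

```bash
# Fixed 16 GiB allocation; balloon=0 removes the ballooning device entirely
qm set 100 --memory 16384 --balloon 0

# Back guest RAM with 1 GiB hugepages (use 2 for 2 MiB pages, or "any")
qm set 100 --hugepages 1024

# Host-side reservation example: append to GRUB_CMDLINE_LINUX_DEFAULT in
# /etc/default/grub, then run update-grub and reboot:
#   default_hugepagesz=1G hugepagesz=1G hugepages=16
```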
3. Storage Performance
- Disk Image Format:
- raw: Generally offers the best performance, as it has minimal overhead.
- qcow2: Offers features like snapshots and thin provisioning but can carry a slight performance overhead compared to raw. If using qcow2, preallocation can improve performance.
- Storage Type on Host:
- Local SSDs (NVMe or SATA) will provide the best I/O performance for VM disks.
- Ceph or ZFS (if Proxmox is installed on ZFS) can provide robust and performant shared storage, but their performance depends heavily on their own configuration (e.g., number of OSDs, SSD journaling, network).
- VirtIO Drivers:
- VirtIO SCSI / VirtIO Block (virtio-blk): Both are paravirtualized drivers offering good performance. VirtIO SCSI (often with the VirtIO SCSI single controller type) is generally recommended for its flexibility and features like TRIM/UNMAP support.
- Enable IO Thread for VirtIO SCSI disks. This dedicates a separate I/O thread per disk, which can significantly improve I/O throughput and reduce latency, especially for multi-core VMs and fast storage.
- Ensure VirtIO drivers are installed in the guest OS (most modern Linux distros include them; Windows requires separate driver installation).
- Cache Mode:
- none (or writethrough for certain data integrity needs): Often recommended for VMs whose applications manage their own caching (e.g., databases) or when data integrity is paramount. This bypasses the host page cache.
- writeback: Can offer higher performance by using the host page cache, but carries a risk of data loss if the host crashes before data is flushed to disk (unless using features like Ceph's RBD caching with appropriate safety measures).
- Carefully consider the implications of each cache mode based on the workload and storage backend.
- Discard/TRIM: Enable the "Discard" option for virtual disks on SSD-backed storage (and ensure the guest OS also issues TRIM commands). This helps SSDs maintain performance by allowing them to reclaim unused blocks.
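Pulling the disk-related options together, a hedged example of attaching a new SSD-backed disk with IO Thread, no host page cache, and discard enabled; the VM ID 100, the storage name local-lvm, and the 64 GiB size are placeholders:

```bash
# VirtIO SCSI single controller so each disk gets its own I/O thread
qm set 100 --scsihw virtio-scsi-single

# New 64 GiB disk on SSD-backed storage; block storages such as LVM-thin or ZFS
# store it as raw. iothread, cache=none and discard=on match the notes above.
qm set 100 --scsi0 local-lvm:64,iothread=1,cache=none,discard=on,ssd=1

# Inside a Linux guest, verify TRIM actually reaches the storage
fstrim -v /
```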
4. Network Performance
- VirtIO NIC: The VirtIO (paravirtualized) network device type offers the best performance for most VMs. Ensure guest OS drivers are installed.
- Multiqueue VirtIO: Enable the "Multiqueue" option for VirtIO NICs (and ensure the guest OS supports it, e.g., Linux kernel 3.8+). This allows network packet processing to be distributed across multiple vCPUs, significantly improving network throughput for multi-core VMs. Set the number of queues appropriately (e.g., to match the number of vCPUs, up to a reasonable limit).
- vhost-net (Host-side processing): While not a direct Proxmox setting, VirtIO networking can leverage vhost-net in the host kernel to offload some packet processing from QEMU, improving efficiency.
- SR-IOV / PCI Passthrough: For extreme network performance needs, passing through a physical NIC or a Virtual Function (VF) via SR-IOV to the VM can provide near bare-metal network performance by bypassing the hypervisor's network stack.
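A sketch of the network settings above; the bridge name vmbr0, the queue count of 4, and the PCI address are placeholders, and SR-IOV/passthrough additionally requires IOMMU to be enabled on the host:

```bash
# Paravirtualized NIC with multiqueue; queues are usually matched to the vCPU count
qm set 100 --net0 virtio,bridge=vmbr0,queues=4

# Alternative for extreme throughput: pass through a NIC or a Virtual Function
qm set 100 --hostpci0 0000:01:00.1
```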
5. Guest OS Optimizations
- Install Guest Utilities/Drivers:
- qemu-guest-agent: Essential for proper communication with the hypervisor (e.g., clean shutdowns, IP address reporting, freezing filesystems for snapshots).
- Latest VirtIO drivers for Windows guests.
- Kernel Tuning (Linux Guests): Adjust kernel parameters within the guest (e.g., I/O schedulers, network buffer sizes, swappiness) based on the workload.
- Disable Unused Services: Minimize resource consumption within the guest.
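Illustrative guest-side commands for a Debian/Ubuntu Linux VM; the package manager, the VM ID, and the tuning values are examples rather than recommendations for every workload:

```bash
# Install and start the guest agent inside the VM
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# Enable the corresponding agent device on the Proxmox side
qm set 100 --agent enabled=1

# Example kernel tuning: reduce swapping pressure and enable periodic TRIM
sysctl -w vm.swappiness=10
systemctl enable --now fstrim.timer
```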
6. General Proxmox Host Considerations
- Host Kernel: Keep Proxmox VE updated to benefit from the latest KVM/QEMU features, bug fixes, and performance improvements.
- Host Resource Monitoring: Regularly monitor host CPU, memory, disk I/O, and network utilization to identify bottlenecks.
- Limit KSM (Kernel Samepage Merging): While KSM can save memory by deduplicating identical memory pages between VMs, it consumes CPU resources. If CPU is a bottleneck or for latency-sensitive VMs, consider disabling KSM or tuning its parameters. KSM is a host-wide mechanism (managed by the ksmtuned service on Proxmox VE) rather than a per-VM setting; see the sketch below for one way to disable it.
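One way to check and disable KSM on the Proxmox host, as a sketch; it can be re-enabled later with systemctl enable --now ksmtuned if memory pressure becomes a concern:

```bash
# How many pages KSM is currently sharing on this host
cat /sys/kernel/mm/ksm/pages_sharing

# Stop the KSM tuning service that Proxmox VE ships
systemctl disable --now ksmtuned

# Stop KSM and unmerge already-shared pages (0 = stop, 1 = run, 2 = stop and unmerge)
echo 2 > /sys/kernel/mm/ksm/run
```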