ESXi to Proxmox Migration: A practical, though not complete, guide

The acquisition of VMware by Broadcom has fundamentally changed the virtualization landscape. While the new management abandons the lower market segment to focus on large enterprise customers, small and medium-sized companies face the challenge of finding alternative solutions.

Proxmox VE has established itself as a promising open-source alternative that is not only cost-effective but also offers professional features for production use.

However, moving from an established platform like ESXi to a new environment is not a trivial undertaking. This comprehensive guide shows you three proven migration strategies and helps you avoid typical stumbling blocks. We cover both the direct transfer of virtual machines and alternative approaches for particularly challenging scenarios.

The three main strategies at a glance

Direct VM migration with the Import wizard

The most direct and efficient method is the integrated import wizard that Proxmox has offered since version 8.1.8. This wizard lets you import virtual machines directly from ESXi hosts or vCenter servers without going through complex manual conversion processes.

Nested ESXi installation

For transition scenarios or particularly problematic legacy VMs, ESXi can be installed as a virtual machine under Proxmox. This solution makes it possible to initially continue operating existing VMware environments unchanged, while gradually migrating individual VMs to native Proxmox configurations.

Hybrid migration approaches

In practice, a combined approach has often proven itself: modern, uncomplicated VMs are migrated directly, while problematic or critical systems initially remain in a nested ESXi environment.

Preparation of the Proxmox environment

Updating to the required version

Before you can start the migration, you need to make sure that Proxmox VE is available in at least version 8.1.8, released at the end of 2023. (As of 12 August, version 9 is live, and the latest 8.x release was 8.4.10.) This version introduced the crucial import wizard, which greatly simplifies the migration process. If your installation is older, you must update it first.

The first step is to adjust the repository configuration. By default, the enterprise repositories are active in Proxmox, but they require a paid subscription. For most smaller homelab environments, the free repositories are quite sufficient. You can find this setting in the web console under ‘Updates’ and then ‘Repositories’.

Here you disable the two enterprise repositories and add the ‘No-Subscription’ and ‘Test’ repositories instead. After this change, you can fetch the available updates via ‘Refresh’ and then install them via ‘Upgrade’. A restart of the server completes the update process.
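
For those who prefer the shell, the same change can be sketched roughly as follows (a minimal sketch, assuming Proxmox VE 8.x on Debian Bookworm; file names and suite names differ on other releases):

```bash
# Comment out the enterprise repositories (they require a subscription)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list

# Add the free no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

# Refresh the package lists, install the updates, then reboot
apt update && apt dist-upgrade -y
```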

A look at the tips for preparing the migration can't hurt either. Actually, the complete official instructions are mandatory reading in advance. I have summarized the points that matter most:


Best practices for Proxmox VE configuration

Manual or automatic import of the entire VM is possible. It is recommended to practice the migration with test VMs first. The settings below translate into the CLI sketch that follows this list.

  • CPU:
    • Use the CPU type host if all cluster nodes have the same CPU.
    • If the CPUs differ, use a generic x86-64-v<X> type.
  • Network:
    • Prefer VirtIO drivers, which have the least overhead.
    • Use other NIC models only on older operating systems without VirtIO drivers.
  • RAM:
    • Activate the ballooning device to obtain detailed memory usage information.
  • Disks:
    • Choose SCSI as the bus type with the VirtIO SCSI single controller.
    • Activate Discard (for thin provisioning) and IO thread.
  • QEMU:
    • Install the QEMU guest agent in the VMs to improve communication between host and guest.
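
As a rough illustration, the recommendations above map to qm calls like these (a sketch only; VM ID 100, bridge vmbr0 and the storage name local-lvm are placeholders):

```bash
qm set 100 --cpu host                    # all cluster nodes share the same CPU
# qm set 100 --cpu x86-64-v2-AES         # alternative for mixed-CPU clusters
qm set 100 --net0 virtio,bridge=vmbr0    # VirtIO NIC with the least overhead
qm set 100 --balloon 2048                # ballooning target in MiB (0 disables it)
qm set 100 --scsihw virtio-scsi-single   # VirtIO SCSI single controller
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,iothread=1
qm set 100 --agent enabled=1             # expects the QEMU guest agent in the VM
```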

VirtIO guest drivers

  • Preparation: Make sure the VirtIO drivers are installed in the guest system and loaded in the initramfs before you perform the migration (a sketch follows after this list).
  • Troubleshooting boot failures:
    • If a VM does not start after the migration because the VirtIO driver is missing, you can use rescue mode or temporarily change the hard disk bus type to IDE or SATA.
    • For Windows VMs, additional steps are required to change the boot disk driver.
    • For Linux VMs, it may be necessary to manually integrate the drivers into the initramfs.
  • Reinstallation: For new Windows VMs, the VirtIO drivers can be installed directly during setup via an additional ISO drive.
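
For a Debian or Ubuntu guest, integrating the drivers into the initramfs could be sketched like this (run inside the guest before the migration; RHEL-family guests use dracut instead):

```bash
# Make sure the VirtIO modules end up in the initramfs
cat >> /etc/initramfs-tools/modules <<'EOF'
virtio_pci
virtio_scsi
virtio_blk
virtio_net
EOF

# Rebuild the initramfs for all installed kernels
update-initramfs -u -k all
```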

BIOS / UEFI settings

  • BIOS mode:
    • Choose SeaBIOS for legacy BIOS-based VMs.
    • Choose OVMF (UEFI) for UEFI-based VMs.
  • UEFI Boot Entry: If the VM does not start in UEFI mode, a custom boot path may be missing. You need to add it manually in the UEFI BIOS and configure an EFI disk so that the setting persists (see the sketch below).
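
Switching a VM to UEFI with a persistent EFI disk can be sketched as follows (VM ID and storage name are placeholders):

```bash
qm set 100 --bios ovmf                        # UEFI firmware instead of SeaBIOS
qm set 100 --efidisk0 local-lvm:1,efitype=4m  # EFI disk that stores boot entries
```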

Preparing for Migration

Shutdown: Turn off the source VM before migration.

Remove old tools: Uninstall all guest tools specific to the old hypervisor.

Network configuration:

Write down the network configuration.

Remove static IP addresses on Windows, since the new network card could otherwise trigger a warning.

For DHCP reservations, either adjust the reservation or manually set the original MAC address on the new VM (see the sketch below).
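
Carrying over the original MAC address could look roughly like this (VM ID, bridge and the MAC shown are placeholders):

```bash
# Re-use the MAC address noted on the ESXi side so DHCP reservations still match
qm set 100 --net0 virtio=00:50:56:AA:BB:CC,bridge=vmbr0
```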

Encryption:

Disable disk encryption if the keys are stored in a vTPM device, because the vTPM state cannot be migrated. Be sure to have the recovery keys at hand.

Configuration of the ESXi connection

Setting up the data source

The next step is to connect to your ESXi host or vCenter server. Via "Datacenter", "Storage", "Add" and then "ESXi", you open the configuration dialog for the new data source.

For the ID, choose a meaningful name for the connection, such as ‘esxi-migration’. This name is later used in the Proxmox interface to identify the source. As the server, you can enter either the IP address or the fully qualified domain name (FQDN) of your ESXi host.

An important note concerns the use of vCenter servers as a source. While this is possible in principle, the Proxmox documentation warns of ‘dramatic performance losses’ in this configuration. In practice, this means that migrations via vCenter can take much longer. For most scenarios, connecting directly to the ESXi host is therefore the better choice.

The credentials must, of course, belong to a user who has sufficient rights on the ESXi system. In many test environments or smaller installations, this is the root user, but in productive environments, you should use a dedicated service account with minimal required privileges.

Certificate verification and security aspects

If your ESXi host uses self-signed certificates, you can enable the Skip Certificate Verification option. This suppresses warnings about invalid certificates, but should only be used in trusted network environments. In productive environments, it is recommended to use proper certificates or at least check the certificate fingerprints.

After successful configuration, the new datastore will appear in the left sidebar under ‘Datacenter’ and ‘Storage’. Clicking on this entry displays all available virtual machines of the ESXi host in the middle of the window.
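
The same data source can also be created from the shell; a sketch assuming the ‘esxi’ storage type that ships with the import wizard (check pvesm help for the exact option names of your version):

```bash
pvesm add esxi esxi-migration \
    --server esxi01.example.com \
    --username root \
    --skip-cert-verification 1   # only in trusted networks with self-signed certs
# The password can be supplied via --password or entered interactively.
```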

The migration process in detail

Preparation of virtual machines

A critical point of the current implementation is that Proxmox still does not support real live migration from ESXi. This means that all virtual machines to be migrated must be shut down before the transfer. Therefore, schedule appropriate maintenance windows and inform all affected users of the planned downtime in good time.

The migration process neither changes nor deletes the original VMs on the ESXi host. This provides an additional safety net, as you can fall back on the original systems at any time if something goes wrong.

Using the Import Wizard

The actual migration process starts with a click on the desired VM in the list, followed by the ‘Import’ button. The wizard that opens guides you through all the necessary configuration steps.

In the first dialog you assign a new VM ID for the Proxmox system. This ID must be unique and is used for internal identification. Here you will also find the option ‘Live Import’, which, however, does not mean what the name suggests. This option only ensures that the VM is started automatically after successful import. As mentioned above, this does not mean a real live migration in which the VM could continue to run during the transfer.

Advanced configuration options

The Advanced page provides detailed control over the components to be migrated. All disks, CD/DVD drives and network interfaces of the original VM are listed here. You can exclude individual components from the migration, which can be especially useful for temporary disks or drives that are no longer needed.

Particularly important is the selection of the target storage for the migrated disks. Proxmox offers a variety of storage types, from local disks to NFS shares to high-performance ZFS pools. Which solution is optimal depends on your performance requirements and the available infrastructure. Since version 9, snapshots can also be used on arbitrary block storage systems, including iSCSI storage and Fibre Channel SANs.

Monitoring the migration process

The final page, ‘Resulting Config’, shows a summary of all selected settings. After clicking on ‘Import’, the actual transfer process begins. The progress is displayed in a separate window.

You can also close this window without interrupting the process.

Proxmox displays all running tasks at the bottom of the interface. Double-clicking on a task reopens its detail view. This functionality makes it possible to perform multiple migrations in parallel without losing track.

The duration of the migration depends on several factors: the size of the data to be transferred, the network speed between the ESXi host and the Proxmox server, and the performance of the storage systems involved. For a typical Windows VM with ~50 GB of disk space, you can expect about 30-60 minutes on a gigabit network.

Rework and optimization

First steps after migration

After successful completion of the migration, the new VM will appear in the Proxmox interface. Although it could theoretically be started immediately, it is strongly recommended to first check and adjust the configuration.

A frequently overlooked point is the boot order, which is not carried over automatically. Therefore, first open the VM settings by clicking on the VM and then on ‘Options’. Here you can set the correct order of the boot devices under ‘Boot Order’ (see the sketch below).
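
From the shell, the same setting can be sketched like this (VM ID and device names are placeholders):

```bash
# Boot from the migrated system disk first, then fall back to the NIC
qm set 100 --boot 'order=scsi0;net0'
```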

Hardware adjustments for optimal performance

Have you already experimented yourself, and the performance so far is rather unsatisfactory? Then take a closer look at this section:

The migrated VMs often initially still use the original VMware-specific drivers and hardware emulations. For optimal performance, you should gradually switch to the native KVM/QEMU equivalents.

For network adapters, we recommend switching from VMware's vmxnet3 to VirtIO network cards, which usually offer better performance. However, this requires the corresponding VirtIO drivers to be installed in the guest operating system (see above). Make such changes step by step and test them extensively.

The same goes for storage controllers. While the original VMware PVSCSI controllers often continue to work, VirtIO SCSI controllers usually provide better integration with the KVM environment. Again, however, caution should be exercised as changes to the storage controller can cause boot problems.
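
With the VM shut down, the switch to the VirtIO equivalents could be sketched like this (VM ID and bridge are placeholders; the guest must already have VirtIO drivers installed):

```bash
qm set 100 --net0 virtio,bridge=vmbr0    # replace vmxnet3 with a VirtIO NIC
qm set 100 --scsihw virtio-scsi-single   # replace PVSCSI with VirtIO SCSI
```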

Particular challenges in legacy systems

Modern operating systems such as current Windows or Linux versions can usually be migrated without any problems. The situation is different with older systems, which often have very specific hardware expectations.

A typical example from practice (discussed in the Proxmox Forum) shows the migration of a Windows 2000 VM that was originally virtualized from physical hardware using VMware vCenter Converter. After the transfer to Proxmox, the system got stuck during the boot process and showed 100% CPU usage without noticeable progress.

The solution was to adapt the hardware emulation. Such legacy systems often require a very conservative configuration: SeaBIOS instead of UEFI, an IDE controller for the boot disks, the generic ‘qemu32’ CPU type, and the deactivation of modern virtualization features. It is also worth mentioning that in this example the VM could only be persuaded to cooperate once it was configured with a single vCPU.
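
The conservative profile from this example translates roughly into the following settings (a sketch; VM ID and disk name are placeholders):

```bash
qm set 100 --bios seabios                  # legacy BIOS instead of UEFI
qm set 100 --cpu qemu32                    # generic 32-bit CPU type
qm set 100 --sockets 1 --cores 1           # a single vCPU only
qm set 100 --ide0 local-lvm:vm-100-disk-0  # boot disk on the IDE bus
qm set 100 --boot order=ide0
```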

Driver management and guest tools

After the successful migration, the VMware Tools should be uninstalled and replaced with the corresponding QEMU guest tools. The reason is relatively simple: the VMware Tools can cause conflicts by accessing hardware features that are simply not available in the KVM environment.

The QEMU guest tools provide similar functionality to the VMware Tools: better mouse integration, automatic screen resolution, and more efficient memory ballooning. On older operating systems, however, the installation can be problematic, as the Windows 2000 example above shows, where it failed with a DLL error.

Alternative: Nested ESXi on Proxmox

When does the nested approach make sense?

Installing ESXi as a virtual machine under Proxmox may seem counterintuitive at first, but it offers significant benefits in various scenarios. Especially in transition periods, this approach allows a gradual migration in which critical or problematic VMs can initially remain in their familiar environment.

Test environments also benefit from this approach, as it allows VMware environments to be built without dedicated hardware. Training and demonstration purposes are other typical use cases, as is support for legacy applications that simply cannot be migrated to modern virtualization platforms.

Technical requirements and setup

Nested virtualization requires special CPU features that modern Intel and AMD processors provide by default, but which must be explicitly enabled. On the Proxmox host, you have to adjust the corresponding kernel module configuration.

For Intel-based systems, the file /etc/modprobe.d/kvm-intel.conf is created with the appropriate options. This configuration enables both basic nested virtualization and advanced features such as EPT (Extended Page Tables) for better performance.

After a restart of the Proxmox host, the successful activation can be checked via the /sys file system. The corresponding parameter should show the value ‘Y’, which confirms readiness for nested virtualization.
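
On an Intel host, the whole procedure can be sketched as follows (AMD systems use the kvm-amd module with nested=1 instead):

```bash
# Enable nested virtualization including EPT for the kvm-intel module
echo "options kvm-intel nested=Y ept=Y" > /etc/modprobe.d/kvm-intel.conf

# Reload the module (or simply reboot the host) ...
modprobe -r kvm_intel && modprobe kvm_intel

# ... and verify the activation; this should print "Y"
cat /sys/module/kvm_intel/parameters/nested
```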

Optimal VM configuration for ESXi

The configuration of the virtual machine for ESXi requires careful planning. As the operating system type you select, somewhat paradoxically, ‘Linux 6.x – 2.6 Kernel’, the profile that fits ESXi best in practice. UEFI boot with an EFI disk is practically indispensable for modern ESXi versions.

When it comes to storage, you should configure a SATA controller with SSD emulation enabled. SSD emulation is important because modern ESXi versions increasingly expect SSD characteristics, and performance issues can occur with traditional hard drive emulation.

The CPU configuration is particularly critical. The CPU type ‘host’ provides the best performance, as all features of the physical processor are passed through to the VM. At least two CPU cores are required for a functional ESXi; for productive use, you should dimension more generously.

Installation and configuration of nested ESXi

Installing ESXi in the virtual environment can cause various problems, especially with older CPU architectures or special configuration requirements. Boot parameters such as allowLegacyCPU=true can remedy this.

Adjusting the OSData partition via the parameter autoPartitionOSDataSize=8192 provides sufficient storage space for ESXi-specific data. These parameters are added during installation via Shift+O in the boot menu.
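
At the installer's boot prompt, this looks roughly like the following line (press Shift+O and append the options to the existing boot command; the exact prefix varies by ESXi version):

```
runweasel allowLegacyCPU=true autoPartitionOSDataSize=8192
```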

After a successful installation, the nested ESXi behaves largely like a physical installation. Virtual machines can be created and operated normally, although in fairness you have to account for the performance losses caused by the additional virtualization layer. We'll take a look at that in the next section.

Performance optimization and monitoring

Nested virtualization entails inherent performance losses because each operation must be routed through two levels of virtualization. However, these losses can be minimized through optimal configuration.

Memory ballooning should be disabled at both the Proxmox and the ESXi level, as the complex interactions between the layers can lead to unpredictable performance issues. Instead, use fixed RAM assignments and dimension them rather generously.

Monitoring resource usage becomes more complex in nested environments because you need to keep an eye on both the Proxmox host metrics and the values within the nested ESXi environment. However, modern monitoring tools can monitor and correlate both levels.

At this point, one could also think outside the box and fundamentally consider whether, for example, inventory monitoring should become part of an expanded tech stack.

Strategic considerations and best practices

Migration Order and Risk Management

A well-thought-out migration order minimizes risks and makes it possible to learn from experience with less critical systems. Always start with development or test VMs that have low availability requirements. These systems are great for optimizing the migration workflow and identifying unexpected issues.

Productive services should not be migrated until the process has been successfully tested on less critical systems. Critical infrastructure services such as domain controllers, database servers, or central application servers come last, once all experience has been gathered and the processes have been optimized.

Backup strategies and rollback scenarios

One of the most important prerequisites for a successful migration is a comprehensive backup strategy. Since the original VMs on the ESXi host remain unchanged, you first have a natural fallback option. However, this should not be considered a substitute for proper backups.

With vzdump, Proxmox offers an integrated and powerful solution for VM backups. These backups can be stored locally, on external storage systems, or on the purpose-built Proxmox Backup Server. Test the restore functionality extensively before permanently deleting the original ESXi VMs.
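
A manual backup run could be sketched like this (VM ID and the storage name backup-store are placeholders):

```bash
# Snapshot-mode backup of VM 100, zstd-compressed, to a dedicated backup storage
vzdump 100 --storage backup-store --mode snapshot --compress zstd
```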

Network Migration and VLAN Configuration

The network configuration requires special attention, as the network architectures of VMware and Proxmox differ significantly in places. VMware works with port groups and vSwitches, while Proxmox relies on Linux bridges and VLANs.

Be sure to document all ESXi VM network settings, including VLAN IDs, IP addresses, and gateway configurations, before migrating. In many cases, you can apply these settings directly, but different VLAN implementations can cause connectivity issues.
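
A VLAN-aware Linux bridge that takes over the role of an ESXi vSwitch with tagged port groups could be sketched like this in /etc/network/interfaces (interface names and addresses are placeholders):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

The VLAN tag is then set per virtual NIC instead of per port group, for example with qm set 100 --net0 virtio,bridge=vmbr0,tag=20.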

Licensing and compliance aspects

An often overlooked aspect of VM migrations is the impact on software licenses. Many commercial software products are tied to hardware identifiers that may change as a result of migration. This applies to both Windows activations and specialized application software.

As for the current licensing model: after the migration you should come out cheaper in pretty much any use case, because what Broadcom is currently doing is not only no fun, some of you are probably not even receiving offers anymore... That said, repurchasing dozens of additional licenses within the VMs should not be necessary in the vast majority of cases:

Document all relevant hardware IDs and MAC addresses before the migration. Proxmox makes it possible to set these values manually if necessary to avoid licensing problems. For critical applications, talk proactively with the software vendors about the planned migration.

Performance optimization after migration

Migration is only the first step. The subsequent optimization of performance can make the difference between a successful and a problematic migration. As mentioned several times above, modern VMs benefit greatly from VirtIO drivers, but they are not installed automatically.

Plan a phase of performance optimization after the basic migration. This includes gradually switching to VirtIO drivers for network and storage, optimizing CPU configuration, and adjusting memory settings.

For example, ZFS as a storage backend offers extensive tuning options, from configuring SSD caches to optimizing record sizes for specific workloads. Invest time in analyzing your workloads and adjust the storage configuration accordingly.
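
A few typical ZFS knobs, as a sketch (dataset name and values are placeholders; zvol-backed VM disks use volblocksize at creation time instead of recordsize):

```bash
zfs set compression=lz4 rpool/data   # cheap inline compression
zfs set atime=off rpool/data         # skip access-time updates on reads
zfs set recordsize=16k rpool/data    # e.g. for file-based database workloads
```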

Monitoring and long-term maintenance

Monitoring concepts for hybrid environments

In the transition phase between ESXi and Proxmox, hybrid environments often arise that place special demands on monitoring. Traditional VMware monitoring tools no longer work for the migrated VMs, while Proxmox-specific tools are not yet established.

A uniform monitoring strategy is crucial for operational success. Tools such as Zabbix, Nagios, or solutions such as Prometheus can monitor both VMware and Proxmox environments. Invest early in planning and establishing a comprehensive monitoring system; it avoids headaches later!

Capacity planning and scaling

Proxmox offers different scaling capabilities than VMware. While VMware traditionally relies on expensive, high-performance hardware, Proxmox also enables the use of commodity hardware in scale-out architectures.

The cluster functionality of Proxmox allows you to connect multiple nodes to a logical cluster. This not only offers better reliability, but also flexible scaling options. Plan early on how your infrastructure should develop in the long term.

Update strategies and maintenance windows

Proxmox follows a different update cycle than VMware products. The more frequent updates require an adapted maintenance strategy. Take advantage of the opportunity to evaluate updates in a test environment before using them productively.

Live migration of VMs between Proxmox nodes enables maintenance-friendly update cycles. You can update the nodes one after the other without all VMs going down at the same time. This is a significant advantage over single-host ESXi installations.
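
Evacuating a node before maintenance can be sketched like this (VM ID and node name are placeholders):

```bash
# Move VM 100 to pve-node2 while it keeps running
qm migrate 100 pve-node2 --online
```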

Conclusion and outlook

The migration from ESXi to Proxmox is a complex undertaking that requires careful planning and methodological approach. However, investing in thorough preparation and gradual implementation pays off in the long term. Modern virtualization workloads can usually be easily transferred, while legacy systems require special attention.

The integrated import wizard in Proxmox 8.1.8 and higher significantly simplifies the migration process and makes it manageable for smaller IT teams. The ability to run ESXi as a nested solution provides additional flexibility for transition scenarios and problematic workloads.

The long-term benefits of an open-source virtualization platform, from reduced licensing costs and greater flexibility to the avoidance of vendor lock-in, justify the effort of the migration. With the right strategy and sufficient preparation, the changeover can be managed successfully and a future-proof virtualization infrastructure can be built.

The Proxmox community is active and helpful, which is a valuable resource when problems arise. Make sure to use this community support and share your own experiences to help others with similar projects. This creates an ecosystem that benefits all stakeholders and further advances open source virtualization.

Sources: Proxmox forum | WindowsPro.net | Bachmann-Lan.de | heise.de | Computerweekly.com