What is Storage in Proxmox?
Storage in Proxmox refers to the different locations where your data ends up. In a typical setup there are several of them (a quick way to check this on your own host follows after the list):
- VM hard drives (Images): The virtual HDDs/SSDs of your VMs
- Container file systems (Rootdir): The file system of the LXC containers
- ISO images: Installation CDs/DVDs for VMs
- Templates: Pre-built VM or container images
- Backups: Secured VM/container data
- Snippets: Cloud-init configurations, hooks, etc.
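Which content types a given storage accepts is part of its configuration. As the quick check promised above, something like the following should work on your own host (the exact output depends on your setup):

# List all storages that may hold a given content type
pvesm status --content iso
pvesm status --content images

# The full mapping lives in /etc/pve/storage.cfg
cat /etc/pve/storage.cfg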
Let's start classically with LVM-Thin: Your Swiss Army Knife for Storage Management
What is LVM-Thin? Think of it as 'smart storage allocation': instead of reserving the entire capacity up front, only the space that is actually used gets occupied.
Practical example: you create a 100GB VM, but it initially only uses 10GB. With classic storage, 100GB would be occupied immediately; with LVM-Thin, only the 10GB actually in use.
Setting up the LVM-Thin Pool
# Create volume group (if not already available)
pvcreate /dev/sdb
vgcreate pve-data /dev/sdb

# Create thin pool
lvcreate -L 100G -n data pve-data
lvconvert --type thin-pool pve-data/data

# Configure in Proxmox
pvesm add lvmthin local-lvm --thinpool data --vgname pve-data --content images,rootdir
Storage configuration in practice
For beginners: Proxmox sets up local storage by default:
- local: For ISO images, templates, backups
- local-lvm: For VM hard drives
Both correspond to entries in /etc/pve/storage.cfg, sketched below.
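On a stock installation the defaults look roughly like this (pool and path names may differ depending on how you installed Proxmox):

dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images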
For advanced: Combine different storage types:
# NFS for shared templates
pvesm add nfs shared-templates --server 192.168.1.100 --export /exports/templates --content iso,vztmpl

# Ceph RBD for highly available storage (cluster)
pvesm add rbd ceph-storage --pool vm-storage --content images

# ZFS for local high performance
zpool create -f tank mirror /dev/sdc /dev/sdd
pvesm add zfspool zfs-local --pool tank --content images,rootdir
Let's also take a look at this in detail, because this is also one of the most critical areas for performance and reliability!
Which storage types do I currently have in my Proxmox setup?
# View all configured storage
pvesm status

# Example output:
# Name       Type     Status  Total    Used     Available  %
# local      dir      active  50.0GB   20.0GB   30.0GB     40.00%
# local-lvm  lvmthin  active  500.0GB  100.0GB  400.0GB    20.00%
Understanding LVM-Thin in depth
What makes LVM-Thin special?
Traditional storage (thick provisioning):
# VM gets a 100GB disk
qm set 100 --scsi0 local-lvm:100

# Problem: 100GB is immediately occupied on the storage
# Even if the VM only uses 5GB!
The better variant, LVM-Thin (thin provisioning):
# VM gets a 100GB disk
qm set 100 --scsi0 local-lvm:100

# Advantage: only data actually written occupies space
# VM uses 5GB  → only 5GB occupied
# VM uses 50GB → only 50GB occupied
Setting up the LVM-Thin Pool – step by step
Step 1: Prepare Volume Group
# Partition the hard disk (ATTENTION: data will be deleted!)
fdisk /dev/sdb
# Partition type: 8e (Linux LVM)

# Create a physical volume
pvcreate /dev/sdb1

# Create a volume group (or extend an existing one)
vgcreate pve-data /dev/sdb1
# or add to an existing VG:
# vgextend pve /dev/sdb1
Step 2: Creating a Thin Pool
# Create the thin pool (80% of the available size)
VGSIZE=$(vgs --noheadings -o vg_size --units g pve-data | tr -d ' <Gg')   # strip spaces and unit suffix
POOLSIZE=$(echo "$VGSIZE * 0.8" | bc | cut -d. -f1)
lvcreate -L ${POOLSIZE}G -n data pve-data
lvconvert --type thin-pool pve-data/data

# Adjust the metadata pool size (if necessary)
lvextend --poolmetadatasize +1G pve-data/data
Step 3: Integrate with Proxmox
# Add the storage in Proxmox
pvesm add lvmthin local-lvm-thin \
    --thinpool data \
    --vgname pve-data \
    --content images,rootdir
Optimize LVM-Thin configuration
Configure Auto-Extend
In this example, your LVM-Thin pool automatically grows by an additional 20% as soon as usage reaches the 80% mark. That is basically a fine thing, but you still have to keep an eye on it, and the volume group also needs enough free space for the pool to grow into.
# /etc/lvm/lvm.conf
cat >> /etc/lvm/lvm.conf << 'EOF'
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}
EOF

# Meaning:
# At 80% fill level, automatically extend by 20%
Optimize metadata pool
This metadata also has to be stored somewhere.
# Check metadata pool size
lvs -a | grep metadata

# Increase if necessary
lvextend --poolmetadatasize +1G pve-data/data

# Why is this important? The metadata stores:
# - Which thin volumes exist
# - Which blocks are occupied
# - Snapshot information
Let's go into more detail about what the commands mean:
The first command, lvs -a | grep metadata, is used to check the current status of the metadata:
lvs -a lists all logical volumes, including internal ones such as the metadata volume, and | grep metadata filters the output so that only rows containing the word 'metadata' are displayed.
The second command, lvextend --poolmetadatasize +1G pve-data/data, grows the metadata volume by 1 gigabyte if it is getting too full. lvextend is the command for expanding a logical volume; --poolmetadatasize +1G specifically targets the metadata volume of the thin pool and increases it by 1 GB; and pve-data/data is the path to the thin pool being expanded, where pve-data is the volume group and data is the thin pool.
Why metadata is so important
Metadata is, so to speak, the table of contents of your thin pool. It stores all the important information Proxmox needs to know where which data is located. When the metadata space is full, you can no longer create new VMs, containers, or snapshots, and existing VMs may no longer be able to write new data.
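To see at a glance how full both the data and the metadata portion of your pool are, a single lvs call is enough (pool name pve-data/data as in the examples above):

# Data and metadata fill level of the thin pool in one view
lvs -o lv_name,lv_size,data_percent,metadata_percent pve-data/data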
Understand over-provisioning
Basically, you can allocate far more storage to your VMs and LXC containers than is physically available; as long as it is not actually used, that is completely unproblematic.
# 500GB thin pool
# 10 x 100GB VMs = 1000GB virtual
# But only data actually used takes up space

# Problem at 100% capacity utilisation:
# All VMs get I/O errors!
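If you want to know how far you are over-provisioned, compare the sum of all virtual disk sizes in the pool with the physical pool size. A rough sketch, assuming the pool is called pve-data/data as above:

# Physical size and current usage of the thin pool
lvs --units g -o lv_name,lv_size,data_percent pve-data/data

# Sum of all virtual (provisioned) sizes of thin volumes in this pool
lvs --noheadings --units g -o lv_size -S 'pool_lv=data' pve-data \
    | tr -d ' g' | awk '{sum+=$1} END {print sum " GiB provisioned"}'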
Monitoring and alerts
However, you do have to keep an eye on it, because as soon as the storage 'overflows', your VMs and LXC containers will do nothing but throw I/O errors.
# Script for thin pool monitoring
cat > /usr/local/bin/thin-pool-monitor.sh << 'EOF'
#!/bin/bash
# Strip spaces/percent signs and drop decimals so the integer comparison works
USAGE=$(lvs --noheadings -o data_percent pve-data/data | tr -d ' %' | cut -d. -f1)
METADATA=$(lvs --noheadings -o metadata_percent pve-data/data | tr -d ' %' | cut -d. -f1)

if [ "$USAGE" -gt 90 ]; then
    logger "WARNING: Thin pool data usage: ${USAGE}%"
    echo "Thin pool is full: ${USAGE}%" | \
        mail -s "Proxmox Storage Alert" admin@company.com
fi

if [ "$METADATA" -gt 90 ]; then
    logger "WARNING: Thin pool metadata usage: ${METADATA}%"
fi
EOF
chmod +x /usr/local/bin/thin-pool-monitor.sh

# Cronjob every 5 minutes
echo "*/5 * * * * root /usr/local/bin/thin-pool-monitor.sh" >> /etc/crontab
In this example, the bash script checks whether pool usage has climbed above the 90% mark; if so, the warning 'Thin pool is full' is logged with the current percentage and sent by e-mail to admin@company.com. The same check is performed for the metadata usage.
The cronjob in the last line ensures that the monitoring script is executed every 5 minutes.
Expanding the Thin Pool
# Add a new hard drive
pvcreate /dev/sdc1
vgextend pve-data /dev/sdc1

# Enlarge the thin pool
lvextend -L +200G pve-data/data
Practical use cases
Template-based VM creation
# Create a template
qm create 9000 --memory 2048 --scsi0 local-lvm:20
# Configure template...
qm template 9000

# Create linked clones (super fast!)
qm clone 9000 101 --name web-server-1
qm clone 9000 102 --name web-server-2

# A linked clone only uses additional space for changes
lvs
# vm-9000-disk-0 pve-data Vwi---tz-- 20.00g data                 # Template
# vm-101-disk-0  pve-data Vwi-aotz-- 20.00g data vm-9000-disk-0  # Clone
# vm-102-disk-0  pve-data Vwi-aotz-- 20.00g data vm-9000-disk-0  # Clone
Snapshot management
# Snapshot before important changes
qm snapshot 100 before-update

# List snapshots
qm listsnapshot 100

# Roll back to a snapshot
qm rollback 100 before-update

# Delete a snapshot
qm delsnapshot 100 before-update
NFS Storage: Shared memory for the Proxmox cluster
NFS (Network File System) is one of the classic network storage solutions and is well suited for Proxmox environments in which you want to share storage between several nodes. The special feature: NFS is based on the directory backend, but has the advantage that Proxmox can mount the NFS shares automatically.
What makes NFS special in Proxmox?
The big plus of NFS in Proxmox: you don't have to fumble around in /etc/fstab manually! Proxmox takes over the complete mount management for you. The backend can even test whether the NFS server is online and show you all available exports.
This is especially useful if you:
- need shared storage for live migration
- want to manage templates and ISOs centrally
- want to implement a simple backup solution
- need a cost-effective storage extension
Configuring NFS Storage - Step by Step
The basic configuration
# Add NFS storage in Proxmox
pvesm add nfs iso-templates \
    --server 10.0.0.10 \
    --export /space/iso-templates \
    --content iso,vztmpl \
    --options vers=3,soft
What happens here in detail?
--server 10.0.0.10: This is your NFS server. Pro tip: use IP addresses instead of DNS names to avoid DNS lookup delays. If you do want to use DNS, add the server to /etc/hosts:
echo "10.0.0.10 nfs-server.local" >> /etc/hosts
--export /space/iso-templates: The NFS export path on the server. You can scan for available exports beforehand:
# Show available NFS exports
pvesm scan nfs 10.0.0.10

# Example output:
# /space/iso-templates
# /space/vm-storage
# /space/backups
--content iso,vztmpl: Specifies what can be stored on this storage:
- iso: ISO images for VM installations
- vztmpl: LXC Container Templates
--options vers=3,soft: This is where it gets interesting for performance:
- vers=3: Uses NFSv3 (usually more stable than v4 for virtualization)
- soft: Important! Limits retry attempts (to 3) and prevents VMs from hanging when there are NFS issues
The storage configuration in /etc/pve/storage.cfg
After the command, the following will automatically appear in the file:
nfs: iso-templates
    path /mnt/pve/iso-templates
    server 10.0.0.10
    export /space/iso-templates
    options vers=3,soft
    content iso,vztmpl
path /mnt/pve/iso-templates: This is the local mount point on each Proxmox node. Proxmox automatically creates the directory and mounts the NFS share there.
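You can verify on any node that Proxmox has actually mounted the share; for the example above this could look like:

# Check whether the NFS share is mounted on this node
mount | grep /mnt/pve/iso-templates
df -h /mnt/pve/iso-templates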
Advanced NFS configurations
Performance-optimized settings
# For VM images (higher performance required)
pvesm add nfs vm-storage \
    --server 10.0.0.10 \
    --export /space/vm-storage \
    --content images,rootdir \
    --options vers=3,hard,intr,rsize=32768,wsize=32768,tcp
The mount options explained:
- hard: NFS requests are retried indefinitely (for critical data)
- intr: Processes can be interrupted with Ctrl+C
- rsize/wsize=32768: 32KB blocks for better performance
- tcp: TCP instead of UDP (more reliable for VMs)
Configure backup storage
# Dedicated backup storage
pvesm add nfs backup-nfs \
    --server backup.internal.lan \
    --export /backup/proxmox \
    --content backup \
    --options vers=4,soft,bg \
    --maxfiles 3
Backup-specific options:
- vers=4: NFSv4 for better security and performance
- bg: Background mount if server is not available
- maxfiles 3: Maximum of 3 backup files per VM (deprecated but functional; see the note below)
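As noted, maxfiles is deprecated; current Proxmox releases use retention settings instead. A hedged sketch of the rough equivalent with prune-backups (adjust the storage name to your setup):

# Keep at most 3 backups per guest instead of using maxfiles
pvesm set backup-nfs --prune-backups keep-last=3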
Understanding NFS Storage Features
Snapshots and Clones with qcow2
Since NFS itself does not support hardware snapshots, Proxmox uses the qcow2 format for these features:
# Create a VM disk as qcow2 on NFS
qm set 100 --scsi0 nfs-storage:vm-100-disk-0.qcow2

# Create a snapshot (internal qcow2 snapshot)
qm snapshot 100 before-update

# Create a clone (qcow2-backed)
qm clone 100 101 --name cloned-vm
The difference to LVM-Thin:
- LVM-Thin: Hardware-level snapshots (very fast)
- NFS + qcow2: Software-level snapshots (more flexible, but slower; see the example below)
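As referenced above: if you want snapshots on NFS, the disk must be in qcow2 format. A minimal sketch, assuming the storage is called nfs-storage and VM 100 should get an additional 32G disk (size and slot are just examples):

# Allocate a new 32G disk in qcow2 format on the NFS storage
qm set 100 --scsi1 nfs-storage:32,format=qcow2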
Migration and live migration
This is the main advantage of NFS in clusters:
# Live migration between nodes (without storage transfer!)
qm migrate 100 node2 --online

# Why is it so fast?
# - VM data is on NFS (accessible to all nodes)
# - Only RAM content is transferred
# - No disk copying required
Practical application scenarios
Scenario 1: Homelab with Synology NAS
# Enable NFS on the Synology and create the export
# In DSM: Control Panel → File Services → NFS → Enable

# Configure in Proxmox
pvesm add nfs synology-storage \
    --server 192.168.1.200 \
    --export /volume1/proxmox \
    --content images,backup,iso \
    --options vers=3,hard,intr
Scenario 2: Dedicated NFS server (Ubuntu/Debian)
NFS server setup:
# On the NFS server
apt install nfs-kernel-server

# Configure the exports
cat >> /etc/exports << 'EOF'
/exports/proxmox-vms    192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/exports/proxmox-backup 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/exports/proxmox-iso    192.168.1.0/24(ro,sync,no_subtree_check)
EOF

exportfs -ra
systemctl enable nfs-server
Use in Proxmox:
# VM storage
pvesm add nfs nfs-vms \
    --server 192.168.1.100 \
    --export /exports/proxmox-vms \
    --content images,rootdir

# Backup storage
pvesm add nfs nfs-backup \
    --server 192.168.1.100 \
    --export /exports/proxmox-backup \
    --content backup

# ISO storage (read-only)
pvesm add nfs nfs-iso \
    --server 192.168.1.100 \
    --export /exports/proxmox-iso \
    --content iso,vztmpl
Troubleshooting and monitoring
Frequent NFS problems
Problem: ‘Connection refused’ or ‘No route to host’
# 1. Test network connectivity
ping 10.0.0.10

# 2. Check the NFS service
rpcinfo -p 10.0.0.10

# 3. Check the firewall (server-side)
# NFS requires several ports:
# - 111 (rpcbind)
# - 2049 (nfs)
# - dynamic ports for rpc.statd, rpc.mountd
Problem: ‘Stale file handle’
# Renew the mount
umount /mnt/pve/nfs-storage
pvesm set nfs-storage --disable 1
pvesm set nfs-storage --disable 0

# Or re-add the storage completely
pvesm remove nfs-storage
pvesm add nfs nfs-storage --server ... --export ...
Problem: Poor performance
# Optimize NFS mount options
pvesm set nfs-storage --options vers=3,hard,intr,rsize=65536,wsize=65536,tcp

# Test network performance
iperf3 -c nfs-server

# Test I/O performance
dd if=/dev/zero of=/mnt/pve/nfs-storage/test bs=1M count=1000
Setting up NFS monitoring
# Monitor NFS status
cat > /usr/local/bin/nfs-health-check.sh << 'EOF'
#!/bin/bash
for storage in $(pvesm status | grep nfs | awk '{print $1}'); do
    if ! pvesm status --storage $storage >/dev/null 2>&1; then
        echo "NFS storage $storage offline!" | \
            logger -t nfs-monitor
        # Send mail/alert here
    fi
done
EOF
chmod +x /usr/local/bin/nfs-health-check.sh

# Check every 2 minutes
echo "*/2 * * * * root /usr/local/bin/nfs-health-check.sh" >> /etc/crontab
Best Practices for NFS in Proxmox
Network design
# Use a dedicated storage network
# Management: 192.168.1.x
# Storage:    10.0.0.x (Gigabit or better)

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.11/24
    bridge-ports eth1
    # No gateway - storage traffic only
Mount options depending on the application
# For read-only content (ISOs, templates)
--options vers=3,ro,soft,intr

# For VM images (critical)
--options vers=3,hard,intr,tcp,rsize=32768,wsize=32768

# For backups (can be interrupted)
--options vers=3,soft,bg,intr
Redundancy and high availability
# NFS server with failover
# Primary:   10.0.0.10
# Secondary: 10.0.0.11

# Heartbeat script for automatic failover
cat > /usr/local/bin/nfs-failover.sh << 'EOF'
#!/bin/bash
PRIMARY="10.0.0.10"
SECONDARY="10.0.0.11"

if ! ping -c 3 $PRIMARY >/dev/null 2>&1; then
    # Primary offline - switch to secondary
    pvesm set nfs-storage --server $SECONDARY
    logger "NFS failover to secondary server"
fi
EOF
chmod +x /usr/local/bin/nfs-failover.sh
NFS is particularly suitable if you are looking for a simple yet professional shared storage solution for your Proxmox cluster. The configuration is uncomplicated, the performance is completely sufficient for most use cases and the maintenance is minimal!
For the sake of completeness, I would also like to look at the Windows counterpart of NFS. It goes by the name CIFS and behaves largely the same under Proxmox. The main difference between NFS and CIFS (today usually referred to as SMB) lies in their development history and target platform: NFS was designed for Unix-based systems such as Linux, while CIFS/SMB was originally designed for Windows systems.
NFS (Network File System)
- origins: Developed by Sun Microsystems for Unix systems.
- functioning: Allows clients to access files and directories stored on a remote server as if they were local. Access is via a ‘mount’ process.
- performance: Often considered more performant for Unix/Linux environments because it is natively integrated there and has less overhead.
- restriction: Can be resource-intensive when used in non-Linux environments, as additional software is often needed.
- Current status: NFSv4 is the latest version and offers improved security and performance features. It is actively being developed.
CIFS (Common Internet File System) / SMB (Server Message Block)
- Current status: CIFS is technically outdated. The current protocol is called SMB (Server Message Block), which is under constant development and serves as the standard for file sharing in modern Windows systems. With Samba, it can also be used in Unix/Linux environments to establish compatibility with Windows systems.
- origins: CIFS is a particular version of the SMB protocol from Microsoft. In the modern context, the terms are often used interchangeably, or CIFS denotes the outdated SMB version 1.0.
- functioning: Clients can access a server's files, printers, and other resources over the network. CIFS/SMB is a stateful protocol, which means that the server tracks the connections and the state of the opened files.
- performance: May be slower than NFS (especially in older versions) over WAN connections, but modern SMB versions (SMB2, SMB3) bring significant performance improvements.
- restriction: CIFS/SMB1 is considered insecure and is disabled or no longer used by default in modern systems.
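In Proxmox, a CIFS/SMB share is added much like an NFS share, just with the cifs storage type. A minimal sketch with placeholder server, share and credentials; adjust everything to your environment:

# Add a CIFS/SMB share as Proxmox storage
pvesm add cifs smb-backup \
    --server 192.168.1.50 \
    --share proxmox-backup \
    --username backupuser \
    --password 'secret' \
    --content backup,iso

# Scan which shares a server offers
pvesm scan cifs 192.168.1.50 --username backupuser --password 'secret'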
While we're at it, let's take a closer look at the rest. The official documentation describes how Proxmox uses ZFS, BTRFS and Ceph as storage backends, and there are clear recommendations for various use cases.
Proxmox and ZFS
Proxmox sees ZFS as a powerful and reliable solution for single host storage or small, replicated setups.
Implementation in Proxmox: ZFS is used in Proxmox as an integrated storage plugin. You can create a ZFS pool on local hard drives directly in the web interface. Proxmox uses the copy-on-write capability of ZFS to create very fast snapshots and clones of VMs.
Data integrity: ZFS checksums protect your VM data from silent data corruption, which is essential for critical workloads.
Efficient snapshots: Snapshots are very fast and consume little space, which is extremely useful for backup strategies + testing/staging.
RAID-Z: Proxmox supports the creation of ZFS RAID configurations (RAID-Z1, RAID-Z2, RAID-Z3) via the web interface, which increases data security.
When to use: ZFS is the preferred choice for a single server that requires high reliability, data integrity and simple snapshot functions. The official documentation also recommends it for clusters when ZFS is synchronized via Proxmox's own replication engine (see the sketch below).
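That replication is driven by pvesr. A hedged sketch, assuming VM 100 should be replicated to a node called node2 every 15 minutes (job ID, node name and schedule are examples):

# Replicate VM 100 to node2 every 15 minutes (requires ZFS on both nodes)
pvesr create-local-job 100-0 node2 --schedule "*/15"

# Show replication status
pvesr status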
Proxmox and BTRFS
BTRFS is described in the Proxmox documentation as a modern, flexible alternative to ZFS that also provides copy-on-write functionality. It is likewise intended for local storage on a host.
Implementation in Proxmox: Similar to ZFS, BTRFS can be configured directly in the Proxmox web interface as a file system and storage type. Proxmox uses the subvolume and snapshot capabilities of BTRFS.
simplicity: BTRFS is often considered easier to handle, especially when managing subvolumes.
Integrated RAID functions: BTRFS offers its own RAID levels (RAID 0, 1, 10). However, the documentation mentions that RAID 5 and RAID 6 are still considered experimental, so caution is definitely warranted!
balancing: BTRFS offers a built-in feature called 'Balance', which redistributes data across disks and optimizes metadata.
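A balance run can also be triggered manually. A minimal sketch, assuming the BTRFS file system is mounted at a hypothetical /mnt/btrfs-pool; the usage filters restrict the run to mostly empty chunks so it finishes quickly:

# Rebalance data and metadata chunks that are less than 50% full
btrfs balance start -dusage=50 -musage=50 /mnt/btrfs-pool

# Check the progress of a running balance
btrfs balance status /mnt/btrfs-pool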
When to use: BTRFS is a good option if you want the flexibility and features of a modern file system on a single host but want to avoid the resource requirements of ZFS. It is a solid choice for smaller environments.
Proxmox and Ceph
Ceph is the recommended solution in Proxmox for cluster storage. It is deeply integrated into the Proxmox infrastructure and lets you create a highly available, distributed storage pool across multiple hosts.
When to use: Ceph is the ideal solution for larger clusters (three or more nodes). It allows you to create a central, high-availability storage pool for all VMs in the cluster that has no single point of failure. The Proxmox documentation also clearly highlights Ceph as the best choice for shared storage in a Proxmox HA cluster.
Implementation in Proxmox: Proxmox offers native Ceph integration via the web interface, which allows you to set up and manage a Ceph cluster on your Proxmox hosts. Each host can serve as a Ceph node (OSD, Monitor, Manager).
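On current Proxmox versions this native integration is also exposed on the command line via pveceph. A hedged sketch of the basic steps on one node (network, device and pool names are placeholders for your environment):

# Install the Ceph packages and initialise the cluster network
pveceph install
pveceph init --network 10.0.0.0/24

# Create monitor, manager and an OSD on a spare disk
pveceph mon create
pveceph mgr create
pveceph osd create /dev/sdb

# Create a pool for VM disks and add it as RBD storage
pveceph pool create vm-storage
pvesm add rbd ceph-vms --pool vm-storage --content images,rootdir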
Benefits:
High scalability: Additional hosts can easily be added to the cluster to increase capacity and performance.
High availability: Ceph replicates data across the cluster nodes. If a host fails, the storage remains available.
Unified storage: You can use Ceph to deploy block devices (RBDs) for VMs, object storage (RADOS Gateway) and file systems (CephFS).
The different storage backends in detail
1. Directory Storage (simple but flexible)
# Local directory as storage
pvesm add dir backup-local \
    --path /backup \
    --content backup,iso,vztmpl \
    --shared 0

# NFS share as storage
pvesm add nfs shared-storage \
    --server 192.168.1.100 \
    --export /exports/proxmox \
    --content images,vztmpl,backup \
    --options vers=3
Benefits:
- Easy to understand and manage
- Flexible for different content
- Snapshots via file system (when using ZFS/BTRFS)
Disadvantages:
- Slower snapshots for large images
- Less efficient storage usage
2. ZFS Storage (Enterprise Features)
# Create a ZFS pool
zpool create -f tank \
    raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    cache /dev/nvme0n1p1 \
    log /dev/nvme0n1p2

# ZFS optimizations
zfs set compression=lz4 tank
zfs set atime=off tank
zfs set xattr=sa tank
zfs set relatime=on tank

# Add as Proxmox storage
pvesm add zfspool zfs-storage \
    --pool tank \
    --content images,rootdir \
    --sparse 1
ZFS benefits:
- Built-in snapshots and replication
- Compression and Deduplication
- Very robust due to checksums
- Cache and L2ARC for performance
3. Ceph Storage (for clusters)
# Create Ceph OSDs
ceph-deploy osd create --data /dev/sdb proxmox1
ceph-deploy osd create --data /dev/sdc proxmox2
ceph-deploy osd create --data /dev/sdd proxmox3

# Create a pool for VMs
ceph osd pool create vm-storage 128 128
ceph osd pool application enable vm-storage rbd

# Integrate with Proxmox
pvesm add rbd ceph-storage \
    --pool vm-storage \
    --content images \
    --krbd 0
ZFS and BTRFS: Scale-up approaches (single server)
ZFS (Zettabyte File System) is a mature file system and volume manager that is known above all for its strong data integrity. It is designed for use on a single, high-performance server.
- advantages: Superior data integrity thanks to copy-on-write and checksums. Flexible RAID variants (RAID-Z). Very reliable and stable.
- disadvantages: Can be resource hungry (RAM). Scales primarily vertically only. Sometimes complex in handling.
- scope: Single host systems, workstations, small to medium servers, NAS systems.
BTRFS (B-Tree File System) is a modern copy-on-write file system for Linux that replicates many features of ZFS but is often considered more flexible and easier to manage.
- advantages: Built-in volume management features. Easy handling of subvolumes and snapshots. Incremental backups. Built-in RAID.
- disadvantages: The RAID5/6 implementation is still considered experimental and is not as robust as ZFS.
- scope: Linux systems where you want to take advantage of snapshots and data integrity without the resource overhead of ZFS. Ideal for home servers and Proxmox hosts that store data locally.
Ceph: The scale-out approach (cluster)
Ceph is not an alternative to ZFS or BTRFS on a single server. It is a software-defined storage solution for large, distributed clusters. Its main goal is to provide a central storage pool across many servers.
- advantages: Extremely high scalability (horizontal). High reliability through self-healing and distributed data. Provides block, object, and file storage.
- disadvantages: Very complex in set-up and administration. High infrastructure requirements (at least 3 nodes recommended).
- scope: Large cloud environments, virtualization clusters, very large data archives.
Summary and recommendation
| Trait | ZFS | BTRFS | Ceph |
| --- | --- | --- | --- |
| Concept | Scale-up (single server) | Scale-up (single server) | Scale-out (cluster) |
| Audience | Reliability, data integrity | Flexibility, simplicity (Linux) | Scalability, high availability |
| Scaling | Vertical (more disks in one server) | Vertical (more disks in one server) | Horizontal (more servers in the cluster) |
| Application size | Small to medium | Small to medium | Medium to large (from 3+ nodes) |
| Main advantage | Industry standard for data integrity | Flexible and native in Linux | Maximum reliability & scalability |
| Main disadvantage | High RAM requirements | RAID5/6 not yet fully mature | High complexity and infrastructure requirements |
In summary, one can therefore say the following:
Choosing a single Proxmox host: ZFS or BTRFS are the right choice. Both provide snapshots and good data integrity. ZFS is the gold standard for reliability, but BTRFS is often simpler and definitely more resource-efficient.
Choosing a Proxmox Cluster: Ceph is the best choice if you have multiple hosts and want to build a central, high-availability storage pool that can grow with your cluster in the long run.
Performance optimization in detail
SSD optimizations
Activate TRIM/Discard
# For individual VMs
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1

# Globally for all new VMs (in the storage configuration)
pvesm set local-lvm --content images --discard-support 1

# System-level TRIM (weekly)
systemctl enable fstrim.timer
SSD-specific schedulers
# Ideal for SSDs
echo mq-deadline > /sys/block/sda/queue/scheduler

# Make it permanent via udev
cat > /etc/udev/rules.d/60-ssd-scheduler.rules << 'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]|nvme[0-9]n[0-9]", \
    ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
EOF
I/O thread and cache optimization
# I/O thread for better parallelization
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
qm set 100 --scsihw virtio-scsi-single

# Cache modes depending on the application
# writethrough: safe, but slower
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writethrough

# writeback: faster, but risk of data loss on power failure
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback

# none: for shared storage (cluster)
qm set 100 --scsi0 ceph-storage:vm-100-disk-0,cache=none
Multi-Path I/O for Enterprise Storage
# Install and configure multipath
apt install multipath-tools

cat > /etc/multipath.conf << 'EOF'
defaults {
    user_friendly_names yes
    find_multipaths yes
}

multipaths {
    multipath {
        wwid 36001405d27e5d898dd34a9f98a9a8f55
        alias shared-storage-lun1
    }
}
EOF

systemctl enable multipathd
systemctl start multipathd
Storage layout best practices (TL;DR)
Recommended layout for different scenarios
Homelab (1 or 2 servers)
# SSD 1: System + local VMs
/dev/sda1: EFI boot (512MB)
/dev/sda2: root (50GB)
/dev/sda3: LVM-Thin pool (rest)

# HDD 1: Backups + ISO storage
/dev/sdb1: /backup (complete disk)

# Configuration:
# local-lvm:    images,rootdir (SSD)
# backup-local: backup,iso (HDD)
Small production environment (3+ servers)
# Per server:

# NVMe 1: System (RAID1 mirror)
/dev/nvme0n1: Proxmox system

# NVMe 2: Local VMs (hot data)
/dev/nvme1n1: Local LVM-Thin pool

# SAS/SATA: Shared storage via Ceph
/dev/sd[a-c]: Ceph OSDs

# External NAS: Backups
nfs://backup.internal.lan/proxmox
Enterprise (many servers, cluster operation)
# Dedicated storage:
# - SAN (iSCSI/FC) for VM images
# - NFS for templates/ISOs
# - Dedicated backup system
# - Separate Ceph clusters

# Per Proxmox node only a system SSD
/dev/sda: Proxmox system (RAID1)
# Everything else over the network
Implement storage tiering
# Tier 1: NVMe for critical VMs
pvesm add lvmthin nvme-tier1 \
    --vgname nvme-vg \
    --thinpool nvme-pool \
    --content images

# Tier 2: SATA SSD for standard VMs
pvesm add lvmthin ssd-tier2 \
    --vgname ssd-vg \
    --thinpool ssd-pool \
    --content images,rootdir

# Tier 3: HDD for archive/backup
pvesm add dir hdd-tier3 \
    --path /archive \
    --content backup,vztmpl
Best practices from the Proxmox documentation
Assigning Content Types Correctly
Different storage types support different content types:
# Specialized storage for various purposes
pvesm add lvmthin vm-storage --vgname pve-fast --thinpool fast --content images
pvesm add lvmthin ct-storage --vgname pve-bulk --thinpool bulk --content rootdir
pvesm add dir iso-storage --path /var/lib/vz/template --content iso,vztmpl
Avoid storage aliasing
It is problematic to point multiple storage configurations at the same underlying storage:
# WRONG - both point to the same thin pool
pvesm add lvmthin storage1 --vgname pve --thinpool data --content images
pvesm add lvmthin storage2 --vgname pve --thinpool data --content rootdir

# RIGHT - one storage for both content types
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content images,rootdir
Understanding Volume Ownership
Each volume belongs to a VM or container:
# Understanding the volume ID format:
# local-lvm:vm-100-disk-0
# ^            ^       ^
# |            |       └── Disk number
# |            └────────── VM ID
# └─────────────────────── Storage ID

# Determine the volume path
pvesm path local-lvm:vm-100-disk-0
# /dev/pve-data/vm-100-disk-0
Performance tuning according to Proxmox standards
Optimize I/O Scheduler for LVM Thin
# For SSDs under LVM-Thin
echo mq-deadline > /sys/block/sda/queue/scheduler

# Adjust the queue depth
echo 32 > /sys/block/sda/queue/nr_requests

# Read-ahead for sequential workloads
blockdev --setra 4096 /dev/sda
VM configuration for optimal performance
# Best performance settings for LVM-Thin
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1,discard=on,ssd=1

# Parameters explained:
# iothread=1: Separate I/O threads for better parallelization
# discard=on: TRIM support for SSD optimization
# ssd=1:      Tells the VM that this is an SSD
Maintenance and monitoring
Thin pool health check
# Detailed pool information
dmsetup status | grep thin

# Thin pool repair (when corrupted)
lvconvert --repair pve-data/data

# Thin pool chunk usage
thin_dump /dev/pve-data/data_tmeta | less
Regular maintenance tasks
# Weekly maintenance
cat > /etc/cron.weekly/lvm-maintenance << 'EOF'
#!/bin/bash
# Trim mounted file systems (discard unused blocks)
fstrim -av

# Back up LVM metadata
vgcfgbackup

# Clean up unused logical volumes
lvremove $(lvs --noheadings -o lv_path,lv_attr | \
    awk '$2 ~ /^V.*a.*z/ {print $1}' | \
    head -5)
EOF
chmod +x /etc/cron.weekly/lvm-maintenance
These building blocks are the foundation for high-performance, flexible storage in Proxmox.
Storage Migration and Maintenance
Migrate VMs between Storages
# Offline migration (VM turned off)
qm migrate 100 node2 --targetstorage new-storage

# Online migration (VM keeps running)
qm migrate 100 node2 --online --targetstorage new-storage

# Change storage only (same node)
qm move-disk 100 scsi0 new-storage --delete
Storage maintenance without downtime
# 1. Migrate VMs away from the storage
for vm in $(qm list | grep running | awk '{print $1}'); do
    qm migrate $vm node2 --targetstorage backup-storage --online
done

# 2. Perform storage maintenance
#    - Exchange hard disks
#    - Rebuild RAID
#    - etc.

# 3. Migrate VMs back
for vm in $(qm list | grep running | awk '{print $1}'); do
    qm migrate $vm node1 --targetstorage main-storage --online
done
Monitoring and troubleshooting
Monitor storage performance
# Live I/O statistics
iostat -x 1

# Per-VM I/O monitoring
iotop -ao

# Measure storage latency
ioping /var/lib/vz/
Resolving Common Storage Problems
Problem: "No space left on device"
# 1. Analyze storage consumption
df -h
lvs --all
du -sh /var/lib/vz/*

# 2. Expand the thin pool
lvextend -L +100G /dev/pve/data

# 3. Release unused blocks
fstrim -av
Problem: Poor I/O performance
# 1. Check the scheduler
cat /sys/block/sda/queue/scheduler

# 2. Optimize the I/O queue depth
echo 32 > /sys/block/sda/queue/nr_requests

# 3. Check the VM configuration
qm config 100 | grep scsi
# iothread=1, cache=none/writeback depending on setup
Problem: Storage not available
# 1. Check the storage status
pvesm status

# 2. Check the mount points
mount | grep /var/lib/vz

# 3. Test network storage
ping storage-server
showmount -e storage-server

# 4. Re-enable the storage
pvesm set storage-name --disable 0
Storage is the foundation of your Proxmox installation, so spend most of your planning and setup time here! This will definitely pay off later in reduced coffee consumption and/or fewer headache pills.
Wrapping up and further resources
Proxmox is a powerful tool, but with great power comes great responsibility. (wink) The best practices shown here are the result of practical experience. Start with the basics and gradually work your way up to the advanced features.
Your next steps:
- Build a testing/staging environment: Test all configurations in a separate environment
- Implement monitoring: Monitor your system from the beginning
- Test backup strategy: Perform regular restore tests
- Join the Community: The Proxmox forum is very helpful
So remember: take your time and understand the basics before you move on to more complex setups. The Proxmox admin guide, which I have linked several times in this article as a reference, is also worth its weight in gold. Take a look around the forum if you have a question, and there is also a YouTube channel as an entry point.
I have also linked the remaining parts of this article series here again for you: Part 1: network | Part 2: storage | Part 3: backup | Part 4: security | Part 5: performance