How to set up ZFS ARC size on Ubuntu/Debian Linux
Proxmox ZFS Performance Tuning
Determine needed ARC size:
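To gauge a sensible limit, it helps to look at the current ARC target and size first; on Linux, OpenZFS exposes these in /proc/spl/kstat/zfs/arcstats (a quick sketch):
# Print the ARC target (c) and current size in GiB
awk '/^(c|size) / {printf "%-6s %.2f GiB\n", $1, $3/2^30}' /proc/spl/kstat/zfs/arcstats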
Create a new zfs.conf file:
sudo nano /etc/modprobe.d/zfs.conf
# Setting up ZFS ARC size on Ubuntu as per our needs
# Set Max ARC size => 2GB == 2147483648 Bytes
options zfs zfs_arc_max=2147483648
# Set Min ARC size => 1GB == 1073741824 Bytes
options zfs zfs_arc_min=1073741824
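The limits can usually also be applied at runtime, without waiting for a reboot, by writing to the module parameters (assuming the zfs module is already loaded):
echo 2147483648 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
echo 1073741824 | sudo tee /sys/module/zfs/parameters/zfs_arc_min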
Update the existing initramfs for all installed kernels:
sudo update-initramfs -u -k all
Reboot:
sudo reboot
Verify that the correct ZFS ARC size is set on Linux:
cat /sys/module/zfs/parameters/zfs_arc_min
cat /sys/module/zfs/parameters/zfs_arc_max
View the ARC stats on Linux:
arcstat
arc_summary
Create a zvol for swap (page-sized blocks, cheap compression, and metadata-only ARC caching so swap pages aren't cached twice):
zfs create -V 32G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=standard \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
Format the swap device:
mkswap -f /dev/zvol/rpool/swap
Update /etc/fstab:
echo /dev/zvol/rpool/swap none swap defaults 0 0 >> /etc/fstab
Enable the swap device:
swapon -av
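Confirm the zvol is now in use as swap:
swapon --show
free -h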
These steps need to be run from the node containing the drives; they do not work when logged in on another node in the cluster.
I wanted to run Docker inside an LXC container, which might sound weird, but alas, I wanted it. However, running Docker in LXC with a ZFS mount was really slow, so I needed to change the setup a bit. Docker does officially support ZFS as a storage driver: https://docs.docker.com/storage/storagedriver/zfs-driver/.
In practice, though, I couldn't get Docker to perform well on ZFS directly. It can be done with a lot of manual work, but my experience was really poor.
LVM on top of linux zfs to use Openstack with nova-volume
How to enable rc.local shell script on systemd while booting Linux system
Create a sparse zvol:
zfs create -s -V 32G rpool/lvm-docker
lvm-docker is the zvol name, can be anything, and 32G is a 32GB size (arbitrary tbh, depends on how many images you'll have and how you manage other container data). To increase this later, grow the zvol and then the LVM layers on top of it (not resize2fs, since here the zvol holds an LVM PV rather than a filesystem; see the sketch below):
zfs set volsize=64G rpool/lvm-docker
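A sketch of growing the LVM layers afterwards, assuming the loop device and names set up in the steps below:
losetup -c /dev/loop2                      # make the loop device pick up the new zvol size
pvresize /dev/loop2                        # grow the physical volume to match
lvextend -l +100%FREE vg-docker/lv-docker  # grow the thin pool into the new space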
Check that it's actually sparse: volsize should be 32GB (that's the max it can take), referenced is how much is actually used (should be very little when it's just created):
zfs get volsize,referenced rpool/lvm-docker
Find the file location (the first command will return e.g. /dev/zd256, which might change after reboot!!) and create a device (if you get "failed to set up loop device: Device or resource busy", try incrementing to loop3, loop4, etc.):
file /dev/zvol/rpool/lvm-docker
losetup /dev/loop2 /dev/zd256
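Alternatively, losetup can pick the first free loop device by itself, and the stable /dev/zvol path avoids depending on the zdN number:
losetup -f --show /dev/zvol/rpool/lvm-docker
# prints the device it allocated, e.g. /dev/loop2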
Format the device:
fdisk /dev/loop2
and hit: n, p, 1, ENTER, ENTER, t, 8e, w (new primary partition 1 spanning the device, type 8e = Linux LVM, then write).
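The same partitioning can also be scripted non-interactively by piping the keystrokes into fdisk (a sketch, assuming the loop device from the previous step):
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/loop2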
Create a physical volume and a volume group vg-docker:
pvcreate /dev/loop2
vgcreate vg-docker /dev/loop2
Create a logical volume lv-docker of type thin-pool in the volume group just created:
lvcreate -n lv-docker --type thin-pool -l 100%FREE vg-docker
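Sanity-check the result:
vgs vg-docker
lvs vg-docker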
Make the device mapping permanent to survive reboots:
nano /etc/rc.local
Paste:
#!/bin/sh
losetup /dev/loop2 /dev/zd256
exit 0
chmod +x /etc/rc.local
systemctl enable rc-local.service --now
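After the next reboot, check that the unit ran and the loop device came back:
systemctl status rc-local.service
losetup -l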
In Proxmox --> Datacenter --> Storage add LVM-Thin storage:
ID: proxmox-lvm-docker
Volume group: vg-docker
Thin Pool: lv-docker
Content: Container
Add mountpoint into lxc:
Through the config file, by adding this into /etc/pve/lxc/<vmid>.conf:
nano /etc/pve/lxc/<vmid>.conf
mpX: /mnt/docker,mp=/var/lib/docker,backup=0
where X is the number for your mountpoint (in case there are others already present)
Or using the GUI: select the LXC --> Add --> Mount Point:
Storage: proxmox-lvm-docker
Disk size (GiB): 8
Path: /var/lib/docker
Create a sparse zvol:
zfs create -s -V 32G rpool/docker
docker is the zvol name, can be anything, and 32G is a 32GB size (arbitrary tbh, depends on how many images you'll have and how you manage other container data). To increase this later:
zfs set volsize=64G rpool/docker
resize2fs /dev/zvol/rpool/docker
Check that it's actually sparse: volsize should be 32GB (that's the max it can take), referenced is how much is actually used (should be very little when it's just created):
zfs get volsize,referenced rpool/docker
Format it as ext4:
mkfs.ext4 /dev/zvol/rpool/docker
Mount it into a temp location to change permissions (UID/GID 100000 maps to root inside an unprivileged container):
mkdir /tmp/zvol_tmp
mount /dev/zvol/rpool/docker /tmp/zvol_tmp
chown -R 100000:100000 /tmp/zvol_tmp
umount /tmp/zvol_tmp
Add a mount point:
Create /mnt/docker:
mkdir /mnt/docker
Edit /etc/fstab:
nano /etc/fstab
# docker volume
/dev/zvol/rpool/docker /mnt/docker ext4 defaults 0 1
mount -a
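Check that it mounted where expected:
df -h /mnt/docker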
In Proxmox --> Datacenter --> Storage add Directory storage:
ID: proxmox-rpool-docker
Directory: /mnt/docker
Content: Container
Add mountpoint into lxc:
Through the config file, by adding this into /etc/pve/lxc/<vmid>.conf:
nano /etc/pve/lxc/<vmid>.conf
mpX: /mnt/docker,mp=/var/lib/docker,backup=0
where X is the number for your mountpoint (in case there are others already present)
Or using the GUI: select the LXC --> Add --> Mount Point:
Storage: proxmox-rpool-docker
Disk size (GiB): 8
Path: /var/lib/docker
SSH into the container.
sudo zdb
sudo zfs get compression <POOL>
sudo zfs set compression=lz4 <POOL>
sudo zfs get atime <POOL>
sudo zfs set atime=off <POOL>
sudo zfs get sync <POOL>
sudo zfs get sync <POOL>/<DATASET>
Disabling sync speeds up writes but can lose the last few seconds of writes on a crash or power failure, so only do this for data you can afford to lose:
sudo zfs set sync=disabled <POOL>
sudo zfs set sync=disabled <POOL>/<DATASET>
https://forum.netgate.com/topic/112490/how-to-2-4-0-zfs-install-ram-disk-hot-spare-snapshot-resilver-root-drive/2
https://www.reddit.com/r/PFSENSE/comments/gceeci/zfs_mirror_recovery_process/
zpool status
Change <x> to the actual drive number in all commands below.
sudo zpool offline pfSense da<x>p4
The first command shows the partition labels in column "GPT".
gpart show -l
=> 40 585937424 da1 GPT (279G)
40 409600 1 efiboot1 (200M)
409640 1024 2 gptboot1 (512K)
410664 984 - free - (492K)
411648 8388608 3 swap1 (4.0G)
8800256 577136640 4 zfs1 (275G)
585936896 568 - free - (284K)
The second command shows the partition type in column "GPT".
gpart show
=> 40 585937424 da1 GPT (279G)
40 409600 1 efi (200M)
409640 1024 2 freebsd-boot (512K)
410664 984 - free - (492K)
411648 8388608 3 freebsd-swap (4.0G)
8800256 577136640 4 freebsd-zfs (275G)
585936896 568 - free - (284K)
sudo gpart create -s gpt da<x>
sudo gpart add -a 4k -s 200M -t efi -l efiboot<x> da<x>
sudo gpart add -b 409640 -s 512k -t freebsd-boot -l gptboot<x> da<x>
sudo gpart add -b 411648 -s 4G -t freebsd-swap -l swap<x> da<x>
sudo gpart add -b 8800256 -s 275G -t freebsd-zfs -l zfs<x> da<x>
sudo gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da<x>
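Note that the new efiboot<x> partition is still empty at this point; on a mirrored install the usual approach is to clone it from the surviving disk's EFI partition (a sketch, assuming da0 is the healthy drive; verify with gpart show before dd'ing anything):
sudo dd if=/dev/da0p1 of=/dev/da<x>p1 bs=1m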
Find the GUID of the offlined device in the pool config:
zdb
sudo zpool replace pfSense <guid> da<x>p4
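Then watch the resilver until the mirror shows healthy again:
zpool status pfSense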