VM types
Lima supports several VM drivers for running guest machines:
The vmType can be specified only when creating an instance; the vmType of existing instances cannot be changed.
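To check the vmType of an existing instance, inspect it from the host, e.g. (a sketch; the VMType template field name is assumed from the instance JSON, and recent limactl list output also shows a VMTYPE column):
limactl list --format '{{.Name}}: {{.VMType}}'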
💡 For developers: See Virtual Machine Drivers for technical details about driver architecture and creating custom drivers.
See the following flowchart to choose the best vmType for you:
flowchart
host{"Host OS"} -- "Windows" --> wsl2["WSL2"]
host -- "Linux" --> qemu["QEMU"]
host -- "macOS" --> intel_on_arm{"Need to run <br> Intel binaries <br> on ARM?"}
intel_on_arm -- "Yes" --> just_elf{"Just need to <br> run Intel userspace (fast), <br> or entire Intel VM (slow)?"}
just_elf -- "Userspace (fast)" --> vz
just_elf -- "VM (slow)" --> qemu
intel_on_arm -- "No" --> vz["VZ"]The default vmType is QEMU in Lima prior to v1.0.
Starting with Lima v1.0, Lima uses VZ by default on macOS (>= 13.5) for new instances,
unless the config is incompatible with VZ (e.g., when legacyBIOS or 9p is enabled).
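For example, since 9p mounts are incompatible with VZ, a command like the following sketch would create a QEMU instance even on macOS >= 13.5, unless --vm-type is given explicitly:
limactl start --mount-type=9p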
1 - QEMU
The “qemu” option makes use of QEMU to run the guest operating system.
“qemu” is the default driver for Linux hosts.
Recommended QEMU version:
- v8.2.1 or later (macOS)
- v6.2.0 or later (Linux)
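To check the QEMU version installed on the host, query the system emulator binary for your guest architecture (standard QEMU binary names):
qemu-system-x86_64 --version    # x86_64 guests
qemu-system-aarch64 --version   # ARM guests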
An example configuration:
limactl start --vm-type=qemu
vmType: "qemu"
base:
- template://_images/ubuntu
- template://_default/mounts
2 - VZ
| ⚡ Requirement | Lima >= 0.14, macOS >= 13.0 |
|---|---|
The “vz” option makes use of the native virtualization support provided by macOS Virtualization.framework.
“vz” has been the default driver for macOS hosts since Lima v1.0.
An example configuration (since “vz” is the default on macOS, it does not need to be specified manually):
limactl start --vm-type=vz
vmType: "vz"
base:
- template://_images/ubuntu
- template://_default/mounts
Caveats
- “vz” option is only supported on macOS 13 or above
- Virtualization.framework doesn’t support running “intel guest on arm” and vice versa
Known Issues
- “vz” doesn’t support the legacyBIOS: true option, so guest machines like centos-stream and oraclelinux-8 will not work on Intel Mac.
- When running lima using “vz”, ${LIMA_HOME}/<INSTANCE>/serial.log will not contain kernel boot logs.
- On Intel Mac with macOS prior to 13.5, Linux kernel v6.2 (used by Ubuntu 23.04, Fedora 38, etc.) is known to be unbootable on vz. Kernel v6.3 and later should boot, as long as it is booted via GRUB. See https://github.com/lima-vm/lima/issues/1577#issuecomment-1565625668. The issue is fixed in macOS 13.5.
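To check whether your host already has the fix, print the macOS version:
sw_vers -productVersion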
3 - WSL2
Warning
“wsl2” mode is experimental
| ⚡ Requirement | Lima >= 0.18 + (Windows >= 10 Build 19041 OR Windows 11) |
|---|---|
The “wsl2” option makes use of the native virtualization support provided by Windows’ wsl.exe (more info).
An example configuration:
limactl start --vm-type=wsl2 --mount-type=wsl2 --containerd=system
# Example to run Fedora using vmType: wsl2
vmType: wsl2
images:
  # Source: https://github.com/runfinch/finch-core/blob/main/Dockerfile
  - location: "https://deps.runfinch.com/common/x86-64/finch-rootfs-production-amd64-1690920103.tar.zst"
    arch: "x86_64"
    digest: "sha256:53f2e329b8da0f6a25e025d1f6cc262ae228402ba615ad095739b2f0ec6babc9"
mountType: wsl2
containerd:
  system: true
  user: false
Caveats
- “wsl2” option is only supported on newer versions of Windows (roughly anything since 2019)
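To check your environment, print the Windows and WSL versions from a terminal (the --version flag is available on recent WSL releases; older ones support wsl.exe --status):
wsl.exe --version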
Known Issues
- “wsl2” currently doesn’t support many of Lima’s options. See this file for the latest supported options.
- When running lima using “wsl2”, ${LIMA_HOME}/<INSTANCE>/serial.log will not contain kernel boot logs.
- WSL2 requires a tar formatted rootfs archive instead of a VM image.
- Windows doesn’t ship with ssh.exe, gzip.exe, etc., which are used by Lima at various points. The easiest way around this is to run winget install -e --id Git.MinGit (winget is now built in to Windows as well), and add the resulting C:\Program Files\Git\usr\bin\ directory to your PATH, as shown below.
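For example (the install command is taken from the note above; the PATH line is a per-session sketch for PowerShell, assuming the default MinGit location):
winget install -e --id Git.MinGit
# PowerShell, current session only:
$env:Path += ';C:\Program Files\Git\usr\bin'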
4 - Krunkit
Warning
“krunkit” is experimental
| ⚡ Requirement | Lima >= 2.0, macOS >= 14 (Sonoma+), Apple Silicon (arm64) |
|---|---|
Krunkit runs super‑light VMs on macOS/ARM64 with a focus on GPU access. It builds on libkrun, a library that embeds a VMM so apps can launch processes in a hardware‑isolated VM (HVF on macOS, KVM on Linux). The standout feature is GPU support in the guest via Mesa’s Venus Vulkan driver (venus), enabling Vulkan workloads inside the VM. See the project: containers/krunkit.
Install krunkit (host)
brew tap slp/krunkit
brew install krunkit
For reference: https://github.com/slp/homebrew-krun
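After installation, confirm the binary is on your PATH:
command -v krunkit   # should print the installed path, e.g. under the Homebrew prefix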
Using the driver with Lima
Build the driver binary and point Lima to it. See also Virtual Machine Drivers.
git clone https://github.com/lima-vm/lima && cd lima
# From the Lima source tree
# <PREFIX> is your installation prefix. With Homebrew, use: $(brew --prefix)
go build -o <PREFIX>/libexec/lima/lima-driver-krunkit ./cmd/lima-driver-krunkit/main_darwin_arm64.go
limactl info # "vmTypes" should include "krunkit"
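If jq is installed, you can filter the driver list directly (the vmTypes field is part of the limactl info JSON output mentioned above):
limactl info | jq .vmTypes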
Quick start
You can run AI models either:
- With containers (fast to get started; any distro works), or
- Without containers (choose Fedora; build llama.cpp from source).
Before running, download a small model to the host so the examples can run quickly. We’ll use Qwen3‑1.7B GGUF:
mkdir -p models
curl -LO --output-dir models 'https://huggingface.co/Qwen/Qwen3-1.7B-GGUF/resolve/main/Qwen3-1.7B-Q8_0.gguf'
1) Run models using containers (easiest)
Start a krunkit VM with the default Lima template:
limactl start --vm-type=krunkit
limactl shell default
Then inside the VM:
nerdctl run --rm -ti \
--device /dev/dri \
-v $(pwd)/models:/models \
quay.io/slopezpa/fedora-vgpu-llama
For reference: https://sinrega.org/2024-03-06-enabling-containers-gpu-macos/
Once inside the container:
llama-cli -m /models/Qwen3-1.7B-Q8_0.gguf -b 512 -ngl 99 -p "Introduce yourself"
You can now chat with the model.
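To confirm that the container actually sees the virtual GPU, you can also run vulkaninfo (assuming the image ships vulkan-tools; see also “Notes and caveats” below). The Venus driver should appear in the summary:
vulkaninfo --summary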
2) Run models without containers (hard way)
This path builds and installs dependencies from source, which can take some time. For faster builds, allocate more CPUs and memory to the VM (see options). Use Fedora as the image.
limactl start --vm-type=krunkit template://fedora
limactl shell fedora
vmType: krunkit
base:
- template://_images/fedora
- template://_default/mounts
mountType: virtiofs
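Once the VM is running, a quick sanity check that the virtual GPU device is present (the same /dev/dri device the container example above maps):
limactl shell fedora -- ls -l /dev/dri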
Once inside the VM, install GPU/Vulkan support:
#!/bin/bash
# SPDX-FileCopyrightText: Copyright The Lima Authors
# SPDX-License-Identifier: Apache-2.0
set -eu -o pipefail
# Install required packages
dnf install -y dnf-plugins-core dnf-plugin-versionlock llvm18-libs
# Install Vulkan and Mesa base packages
dnf install -y \
mesa-vulkan-drivers \
vulkan-loader-devel \
vulkan-headers \
vulkan-tools \
vulkan-loader \
glslc
# Enable COPR repo with patched Mesa for Venus support
dnf copr enable -y slp/mesa-krunkit fedora-40-aarch64
# Downgrade to patched Mesa version from COPR
dnf downgrade -y mesa-vulkan-drivers.aarch64 \
--repo=copr:copr.fedorainfracloud.org:slp:mesa-krunkit
# Lock Mesa version to prevent automatic upgrades
dnf versionlock add mesa-vulkan-drivers
# Clean up
dnf clean all
echo "Installing llama.cpp with Vulkan support..."
# Build and install llama.cpp with Vulkan support
dnf install -y git cmake clang curl-devel glslc vulkan-devel virglrenderer
(
cd ~
git clone https://github.com/ggml-org/llama.cpp
(
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON -DGGML_CCACHE=OFF -DGGML_NATIVE=OFF -DCMAKE_INSTALL_PREFIX=/usr
cmake --build build --config Release -j8
cmake --install build
)
rm -fr llama.cpp
)
echo "Successfully installed llama.cpp with Vulkan support. Use 'llama-cli' app with .gguf models."
The script builds and installs llama.cpp with Venus support from source.
After installation, run:
llama-cli -m models/Qwen3-1.7B-Q8_0.gguf -b 512 -ngl 99 -p "Introduce yourself"
and enjoy chatting with the AI model.
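The script pins the patched Mesa packages against upgrades; you can review the pin later with the dnf versionlock plugin it installs:
dnf versionlock list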
Notes and caveats
- macOS Sonoma (14) or later on Apple Silicon is required.
- To verify GPU/Vulkan in the guest container or VM, use tools like vulkaninfo --summary.
- AI models in containers can run on any Linux distribution, but without containers Fedora is required.
- For more information about the usage of llama-cli, see the llama.cpp README.md.
- Driver architecture details: see Virtual Machine Drivers.