NAME
    qemu-block-drivers - QEMU block drivers reference

SYNOPSIS
    QEMU block driver reference manual

DESCRIPTION

Disk image file formats

QEMU supports many image file formats that can be used with VMs as well as with any of the tools (like qemu-img). This includes the preferred formats raw and qcow2 as well as formats that are supported for compatibility with older QEMU versions or other hypervisors.

Depending on the image format, different options can be passed to qemu-img create and qemu-img convert using the -o option. This section describes each format and the options that are supported for it.
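As a minimal sketch of how -o creation options are combined, the snippet below composes a qemu-img create command line. cluster_size and lazy_refcounts are real qcow2 creation options, but the image name and size here are illustrative assumptions; the command is printed rather than executed.

```shell
# Compose a qemu-img create invocation with format-specific -o options.
# The option values, image name, and size are illustrative assumptions.
fmt=qcow2
opts="cluster_size=64k,lazy_refcounts=on"
cmd="qemu-img create -f $fmt -o $opts disk.qcow2 10G"
echo "$cmd"   # print instead of running, so the sketch works without qemu-img
```

Multiple options for one format are joined with commas in a single -o argument, as shown.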
The use of this is no longer supported in system emulators. Support only remains in the command line utilities, for the purposes of data liberation and interoperability with old versions of QEMU. The luks format should be used instead.
Note: this option is only valid for new or empty files. If there is an existing file which is COW and already has data blocks, it cannot be changed to NOCOW by setting nocow=on. One can issue lsattr filename to check whether the NOCOW flag is set (a capital 'C' in the attribute list means NOCOW).
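The lsattr check above can be scripted; a minimal sketch, assuming lsattr's usual output shape of an attribute-flags field followed by the filename. has_nocow is a hypothetical helper, and the sample lines are fabricated for illustration, not captured from a real filesystem:

```shell
# Report whether the NOCOW flag (capital 'C') appears in an
# lsattr-style output line. The sample lines below are illustrative
# assumptions, not output from a real system.
has_nocow() {
  flags=${1%% *}            # first field: the attribute flags
  case $flags in
    *C*) echo yes ;;
    *)   echo no ;;
  esac
}
has_nocow "---------------C------ disk.img"   # yes
has_nocow "---------------------- disk.img"   # no
```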
Read-only formats

More disk image file formats are supported in a read-only mode.
Using host drives

In addition to disk image files, QEMU can directly access host devices. We describe here the usage for QEMU version >= 0.8.3.

Linux

On Linux, you can directly use the host device filename instead of a disk image filename, provided you have enough privileges to access it. For example, use /dev/cdrom to access the CDROM.
Windows
Mac OS X

/dev/cdrom is an alias to the first CDROM.

Currently there is no specific code to handle removable media, so it is better to use the change or eject monitor commands to change or eject media.

Virtual FAT disk images

QEMU can automatically create a virtual FAT disk image from a directory tree. In order to use it, just type:

    qemu-system-x86_64 linux.img -hdb fat:/my_directory

Then you can access all the files in the /my_directory directory without having to copy them into a disk image or to export them via SAMBA or NFS. The default access is read-only.

Floppies can be emulated with the :floppy: option:

    qemu-system-x86_64 linux.img -fda fat:floppy:/my_directory

Read/write support is available for testing (beta stage) with the :rw: option:

    qemu-system-x86_64 linux.img -fda fat:floppy:rw:/my_directory

What you should never do:

- use non-growable formats unless you know what you are doing
- write to the FAT directory on the host system while accessing it with the guest system
NBD access

QEMU can directly access a block device exported using the Network Block Device protocol.

    qemu-system-x86_64 linux.img -hdb nbd://my_nbd_server.mydomain.org:1024/

If the NBD server is located on the same host, you can use a Unix socket instead of an inet socket:

    qemu-system-x86_64 linux.img -hdb nbd+unix://?socket=/tmp/my_socket

In this case, the block device must be exported using qemu-nbd:

    qemu-nbd --socket=/tmp/my_socket my_disk.qcow2

The use of qemu-nbd allows sharing of a disk between several guests:

    qemu-nbd --socket=/tmp/my_socket --share=2 my_disk.qcow2

and then you can use it with two guests:

    qemu-system-x86_64 linux1.img -hdb nbd+unix://?socket=/tmp/my_socket
    qemu-system-x86_64 linux2.img -hdb nbd+unix://?socket=/tmp/my_socket

If the nbd-server uses named exports (supported since NBD 2.9.18, or with QEMU's own embedded NBD server), you must specify an export name in the URI:

    qemu-system-x86_64 -cdrom nbd://localhost/debian-500-ppc-netinst
    qemu-system-x86_64 -cdrom nbd://localhost/openSUSE-11.1-ppc-netinst

The URI syntax for NBD is supported since QEMU 1.3. An alternative syntax is also available. Here are some examples of the older syntax:

    qemu-system-x86_64 linux.img -hdb nbd:my_nbd_server.mydomain.org:1024
    qemu-system-x86_64 linux2.img -hdb nbd:unix:/tmp/my_socket
    qemu-system-x86_64 -cdrom nbd:localhost:10809:exportname=debian-500-ppc-netinst

Sheepdog disk images

Sheepdog is a distributed storage system for QEMU. It provides highly available block level storage volumes that can be attached to QEMU-based virtual machines.

You can create a Sheepdog disk image with the command:

    qemu-img create sheepdog:///IMAGE SIZE

where IMAGE is the Sheepdog image name and SIZE is its size.

To import the existing FILENAME to Sheepdog, you can use a convert command.
    qemu-img convert FILENAME sheepdog:///IMAGE

You can boot from the Sheepdog disk image with the command:

    qemu-system-x86_64 sheepdog:///IMAGE

You can also create a snapshot of the Sheepdog image like qcow2:

    qemu-img snapshot -c TAG sheepdog:///IMAGE

where TAG is a tag name of the newly created snapshot.

To boot from the Sheepdog snapshot, specify the tag name of the snapshot:

    qemu-system-x86_64 sheepdog:///IMAGE#TAG

You can create a cloned image from the existing snapshot:

    qemu-img create -b sheepdog:///BASE#TAG sheepdog:///IMAGE

where BASE is an image name of the source snapshot and TAG is its tag name.

You can use a Unix socket instead of an inet socket:

    qemu-system-x86_64 sheepdog+unix:///IMAGE?socket=PATH

If the Sheepdog daemon doesn't run on the local host, you need to specify one of the Sheepdog servers to connect to:

    qemu-img create sheepdog://HOSTNAME:PORT/IMAGE SIZE
    qemu-system-x86_64 sheepdog://HOSTNAME:PORT/IMAGE

iSCSI LUNs

iSCSI is a popular protocol used to access SCSI devices across a computer network.

There are two different ways iSCSI devices can be used by QEMU. The first method is to mount the iSCSI LUN on the host, make it appear as any other ordinary SCSI device on the host, and then access this device as a /dev/sd device from QEMU. How to do this differs between host OSes. The second method involves using the iSCSI initiator that is built into QEMU. This provides a mechanism that works the same way regardless of which host OS you are running QEMU on. This section describes this second method of using iSCSI together with QEMU.

In QEMU, iSCSI devices are described using special iSCSI URLs. URL syntax:

    iscsi://[<username>[%<password>]@]<host>[:<port>]/<target-iqn-name>/<lun>

Username and password are optional and only used if your target is set up using CHAP authentication for access control.
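The URL syntax above can be assembled mechanically; the sketch below builds it from its parts. iscsi_url is a hypothetical helper for illustration, and the host, IQN, and credentials are the example values used elsewhere in this section:

```shell
# Build an iSCSI URL of the form
#   iscsi://[<username>[%<password>]@]<host>[:<port>]/<target-iqn-name>/<lun>
# iscsi_url is a hypothetical helper, not a QEMU interface.
iscsi_url() {
  host=$1; iqn=$2; lun=$3; user=${4:-}; pass=${5:-}
  auth=""
  if [ -n "$user" ]; then auth="$user${pass:+%$pass}@"; fi
  printf 'iscsi://%s%s/%s/%s\n' "$auth" "$host" "$iqn" "$lun"
}
iscsi_url 127.0.0.1 iqn.qemu.test 1             # iscsi://127.0.0.1/iqn.qemu.test/1
iscsi_url 127.0.0.1 iqn.qemu.test 1 me secret   # iscsi://me%secret@127.0.0.1/iqn.qemu.test/1
```

Note that credentials placed in the URL are visible in the process list, which is why the environment-variable alternative below exists.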
Alternatively the username and password can also be set via environment variables so that they do not show up in the process list:

    export LIBISCSI_CHAP_USERNAME=<username>
    export LIBISCSI_CHAP_PASSWORD=<password>
    iscsi://<host>/<target-iqn-name>/<lun>

Various session related parameters can be set via special options, either in a configuration file provided via '-readconfig' or directly on the command line.

If the initiator-name is not specified qemu will use a default name of 'iqn.2008-11.org.linux-kvm[:<uuid>]' where <uuid> is the UUID of the virtual machine. If the UUID is not specified qemu will use 'iqn.2008-11.org.linux-kvm[:<name>]' where <name> is the name of the virtual machine.

Setting a specific initiator name to use when logging in to the target:

    -iscsi initiator-name=iqn.qemu.test:my-initiator

Controlling which type of header digest to negotiate with the target:

    -iscsi header-digest=CRC32C|CRC32C-NONE|NONE-CRC32C|NONE

These can also be set via a configuration file:

    [iscsi]
      user = "CHAP username"
      password = "CHAP password"
      initiator-name = "iqn.qemu.test:my-initiator"
      # header digest is one of CRC32C|CRC32C-NONE|NONE-CRC32C|NONE
      header-digest = "CRC32C"

Setting the target name allows different options for different targets:

    [iscsi "iqn.target.name"]
      user = "CHAP username"
      password = "CHAP password"
      initiator-name = "iqn.qemu.test:my-initiator"
      # header digest is one of CRC32C|CRC32C-NONE|NONE-CRC32C|NONE
      header-digest = "CRC32C"

How to use a configuration file to set iSCSI configuration options:

    cat >iscsi.conf <<EOF
    [iscsi]
      user = "me"
      password = "my password"
      initiator-name = "iqn.qemu.test:my-initiator"
      header-digest = "CRC32C"
    EOF

    qemu-system-x86_64 -drive file=iscsi://127.0.0.1/iqn.qemu.test/1 \
      -readconfig iscsi.conf

How to set up a simple iSCSI target on loopback and access it via QEMU: this example shows how to set up an iSCSI target with one CDROM and one DISK using the Linux STGT software target.
This target is available on Red Hat based systems as the package 'scsi-target-utils'.

    tgtd --iscsi portal=127.0.0.1:3260
    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.qemu.test
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
      -b /IMAGES/disk.img --device-type=disk
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 2 \
      -b /IMAGES/cd.iso --device-type=cd
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

    qemu-system-x86_64 -iscsi initiator-name=iqn.qemu.test:my-initiator \
      -boot d -drive file=iscsi://127.0.0.1/iqn.qemu.test/1 \
      -cdrom iscsi://127.0.0.1/iqn.qemu.test/2

GlusterFS disk images

GlusterFS is a user space distributed file system.

You can boot from the GlusterFS disk image with the command:

URI:

    qemu-system-x86_64 -drive file=gluster[+TYPE]://[HOST[:PORT]]/VOLUME/PATH
      [?socket=...][,file.debug=9][,file.logfile=...]

JSON:

    qemu-system-x86_64 'json:{"driver":"qcow2",
      "file":{"driver":"gluster",
        "volume":"testvol","path":"a.img","debug":9,"logfile":"...",
        "server":[{"type":"tcp","host":"...","port":"..."},
                  {"type":"unix","socket":"..."}]}}'

gluster is the protocol.

TYPE specifies the transport type used to connect to the gluster management daemon (glusterd). Valid transport types are tcp and unix. In the URI form, if a transport type isn't specified, then tcp type is assumed.

HOST specifies the server where the volume file specification for the given volume resides. This can be either a hostname or an ipv4 address. If the transport type is unix, then the HOST field should not be specified. Instead the socket field needs to be populated with the path to the unix domain socket.

PORT is the port number on which glusterd is listening. This is optional and if not specified, it defaults to port 24007. If the transport type is unix, then PORT should not be specified.

VOLUME is the name of the gluster volume which contains the disk image.

PATH is the path to the actual disk image that resides on the gluster volume.
debug is the logging level of the gluster protocol driver. Debug levels are 0-9, with 9 being the most verbose and 0 representing no debugging output. The default level is 4. The current logging levels defined in the gluster source are 0 - None, 1 - Emergency, 2 - Alert, 3 - Critical, 4 - Error, 5 - Warning, 6 - Notice, 7 - Info, 8 - Debug, 9 - Trace.

logfile specifies the path of a file to which the gfapi log messages are written, which also makes those logs persistent. The default is stderr.

You can create a GlusterFS disk image with the command:

    qemu-img create gluster://HOST/VOLUME/PATH SIZE

Examples:

    qemu-system-x86_64 -drive file=gluster://1.2.3.4/testvol/a.img
    qemu-system-x86_64 -drive file=gluster+tcp://1.2.3.4/testvol/a.img
    qemu-system-x86_64 -drive file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
    qemu-system-x86_64 -drive file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
    qemu-system-x86_64 -drive file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
    qemu-system-x86_64 -drive file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
    qemu-system-x86_64 -drive file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
    qemu-system-x86_64 -drive file=gluster+rdma://1.2.3.4:24007/testvol/a.img
    qemu-system-x86_64 -drive file=gluster://1.2.3.4/testvol/a.img,file.debug=9,file.logfile=/var/log/qemu-gluster.log

    qemu-system-x86_64 'json:{"driver":"qcow2",
      "file":{"driver":"gluster",
        "volume":"testvol","path":"a.img",
        "debug":9,"logfile":"/var/log/qemu-gluster.log",
        "server":[{"type":"tcp","host":"1.2.3.4","port":24007},
                  {"type":"unix","socket":"/var/run/glusterd.socket"}]}}'

    qemu-system-x86_64 -drive driver=qcow2,file.driver=gluster,file.volume=testvol,file.path=/path/a.img,
      file.debug=9,file.logfile=/var/log/qemu-gluster.log,
      file.server.0.type=tcp,file.server.0.host=1.2.3.4,file.server.0.port=24007,
      file.server.1.type=unix,file.server.1.socket=/var/run/glusterd.socket

Secure Shell (ssh) disk images

You can access disk images located on a remote ssh server by using the ssh protocol:

    qemu-system-x86_64 -drive file=ssh://[USER@]SERVER[:PORT]/PATH[?host_key_check=HOST_KEY_CHECK]

Alternative syntax using properties:

    qemu-system-x86_64 -drive file.driver=ssh[,file.user=USER],file.host=SERVER[,file.port=PORT],file.path=PATH[,file.host_key_check=HOST_KEY_CHECK]

ssh is the protocol.

USER is the remote user. If not specified, then the local username is tried.

SERVER specifies the remote ssh server. Any ssh server can be used, but it must implement the sftp-server protocol. Most Unix/Linux systems should work without requiring any extra configuration.

PORT is the port number on which sshd is listening. By default the standard ssh port (22) is used.

PATH is the path to the disk image.

The optional HOST_KEY_CHECK parameter controls how the remote host's key is checked. The default is yes, which means to use the local .ssh/known_hosts file. Setting this to no turns off known-hosts checking. Or you can check that the host key matches a specific fingerprint:

    host_key_check=md5:78:45:8e:14:57:4f:d5:45:83:0a:0e:f3:49:82:c9:c8

(sha1: can also be used as a prefix, but note that OpenSSH tools only use MD5 to print fingerprints.)

Currently authentication must be done using ssh-agent. Other authentication methods may be supported in future.

Note: Many ssh servers do not support an fsync-style operation. The ssh driver cannot guarantee that disk flush requests are obeyed, and this causes a risk of disk corruption if the remote server or network goes down during writes. The driver will print a warning when fsync is not supported:

    warning: ssh server ssh.example.com:22 does not support fsync

With sufficiently new versions of libssh and OpenSSH, fsync is supported.

NVMe disk images

NVM Express (NVMe) storage controllers can be accessed directly by a userspace driver in QEMU.
This bypasses the host kernel file system and block layers while retaining QEMU block layer functionalities, such as block jobs, I/O throttling, image formats, etc. Disk I/O performance is typically higher than with -drive file=/dev/sda using either thread pool or linux-aio.

The controller will be exclusively used by the QEMU process once started. To be able to share storage between multiple VMs and other applications on the host, please use the file based protocols.

Before starting QEMU, bind the host NVMe controller to the host vfio-pci driver. For example:

    # modprobe vfio-pci
    # lspci -n -s 0000:06:0d.0
    06:0d.0 0401: 1102:0002 (rev 08)
    # echo 0000:06:0d.0 > /sys/bus/pci/devices/0000:06:0d.0/driver/unbind
    # echo 1102 0002 > /sys/bus/pci/drivers/vfio-pci/new_id
    # qemu-system-x86_64 -drive file=nvme://HOST:BUS:SLOT.FUNC/NAMESPACE

Alternative syntax using properties:

    qemu-system-x86_64 -drive file.driver=nvme,file.device=HOST:BUS:SLOT.FUNC,file.namespace=NAMESPACE

HOST:BUS:SLOT.FUNC is the NVMe controller's PCI device address on the host.

NAMESPACE is the NVMe namespace number, starting from 1.

Disk image file locking

By default, QEMU tries to protect image files from unexpected concurrent access, as long as it's supported by the block protocol driver and host operating system. If multiple QEMU processes (including QEMU emulators and utilities) try to open the same image with conflicting access modes, all but the first one will get an error.

This feature is currently supported by the file protocol on Linux with the Open File Descriptor (OFD) locking API, and can be configured to fall back to POSIX locking if the POSIX host doesn't support Linux OFD locking.

To explicitly enable image locking, specify "locking=on" in the file protocol driver options. If OFD locking is not possible, a warning will be printed and the POSIX locking API will be used.
In this case there is a risk that the lock will get silently lost when doing hot plugging and block jobs, due to the shortcomings of the POSIX locking API.

QEMU transparently handles lock handover during shared storage migration. For shared virtual disk images between multiple VMs, the "share-rw" device option should be used.

By default, the guest has exclusive write access to its disk image. If the guest can safely share the disk image with other writers, the -device ...,share-rw=on parameter can be used. This is only safe if the guest is running software, such as a cluster file system, that coordinates disk accesses to avoid corruption.

Note that share-rw=on only declares the guest's ability to share the disk. Some QEMU features, such as image file formats, require exclusive write access to the disk image, and this is unaffected by the share-rw=on option.

Alternatively, locking can be fully disabled with the "locking=off" block device option. In the command line, the option is usually in the form of "file.locking=off", as the protocol driver is normally placed as a "file" child under a format driver. For example:

    -blockdev driver=qcow2,file.filename=/path/to/image,file.locking=off,file.driver=file

To check if image locking is active, check the output of the "lslocks" command on the host and see if there are locks held by the QEMU process on the image file. More than one byte could be locked by the QEMU instance, each byte of which reflects a particular permission that is acquired or protected by the running block driver.

SEE ALSO
    The HTML documentation of QEMU for more precise information and Linux user mode emulator invocation.

AUTHOR
    Fabrice Bellard and the QEMU Project developers

COPYRIGHT
    2020, The QEMU Project Developers