NUMA(4) FreeBSD Kernel Interfaces Manual NUMA(4)

NAME
NUMA - Non-Uniform Memory Access

SYNOPSIS
options MAXMEMDOM
options NUMA

DESCRIPTION
Non-Uniform Memory Access is a computer architecture design which involves unequal access costs between processors, memory, and I/O devices in a given system.

In a NUMA architecture, the latency to access specific memory or I/O devices depends upon which processor the memory or device is attached to. Accessing memory local to a processor is faster than accessing memory that is attached to one of the other processors. FreeBSD implements NUMA-aware memory allocation policies. By default, it attempts to keep allocations balanced across all domains. Users may override the default domain selection policy using cpuset(1).

NUMA support is enabled when the NUMA option is specified in the kernel configuration file. Each platform defines the MAXMEMDOM constant, which specifies the maximum number of supported NUMA domains. This constant may be specified in the kernel configuration file. NUMA support can be disabled at boot time by setting the vm.numa.disabled tunable to 1. Other values for this tunable are currently ignored.
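As an illustration, a custom kernel configuration and a loader.conf(5) entry might look like the following sketch; the domain count of 8 is an arbitrary example, not a recommended value:

     # Kernel configuration file: enable NUMA, allow up to 8 domains.
     options NUMA
     options MAXMEMDOM=8

     # /boot/loader.conf: disable NUMA handling at boot.
     vm.numa.disabled="1"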

Thread and process NUMA policies are controlled with the cpuset_getdomain(2) and cpuset_setdomain(2) syscalls. The cpuset(1) tool is available for starting processes with a non-default policy and for changing the policy of an existing thread or process. See SMP(4) for information about CPU to domain mapping.
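The following minimal userland sketch applies a "prefer" policy to the current process, directing the kernel to favor allocations from domain 0; it assumes a kernel built with options NUMA and at least one memory domain, and the error handling is simplified:

     #include <sys/param.h>
     #include <sys/cpuset.h>
     #include <sys/domainset.h>
     #include <err.h>
     #include <stdio.h>

     int
     main(void)
     {
             domainset_t mask;

             /* Build a domain set containing only domain 0. */
             DOMAINSET_ZERO(&mask);
             DOMAINSET_SET(0, &mask);

             /* Apply a "prefer domain 0" policy to this process (-1). */
             if (cpuset_setdomain(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
                 sizeof(mask), &mask, DOMAINSET_POLICY_PREFER) != 0)
                     err(1, "cpuset_setdomain");

             printf("allocations now prefer NUMA domain 0\n");
             return (0);
     }

A similar effect is available without modifying the program by starting it under cpuset(1) with its domain policy option (-n in recent FreeBSD releases).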

Systems with non-uniform access to I/O devices may mark those devices with the local VM domain identifier. Drivers can find out their local domain information by calling bus_get_domain(9).
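As a sketch, a driver's attach routine might query its locality as follows; mydrv_attach is a hypothetical fragment, not a complete driver:

     /* Kernel-side fragment; assumes the usual driver boilerplate. */
     #include <sys/param.h>
     #include <sys/bus.h>

     static int
     mydrv_attach(device_t dev)
     {
             int domain;

             /* Ask the bus which VM domain this device is attached to. */
             if (bus_get_domain(dev, &domain) == 0)
                     device_printf(dev, "local NUMA domain: %d\n", domain);
             else
                     device_printf(dev, "no NUMA locality information\n");

             /* ...the rest of the attach work would follow here... */
             return (0);
     }

bus_get_domain(9) returns 0 on success; a driver can then use the domain identifier to request memory that is local to the device.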

The operation of NUMA is controlled, and information about it is exposed, through these sysctl(8) MIB variables (a small example of reading them programmatically follows the list):

vm.ndomains
The number of VM domains which have been detected.

vm.phys_locality
A table indicating the relative cost of access between each pair of VM domains. A value of 10 indicates equal cost. A value of -1 means that no locality information is available.

vm.phys_segs
The map of physical memory, grouped by VM domain.
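These variables can be read with the sysctl(8) utility, or programmatically via sysctlbyname(3), as in this minimal sketch:

     #include <sys/types.h>
     #include <sys/sysctl.h>
     #include <err.h>
     #include <stdio.h>

     int
     main(void)
     {
             int ndomains;
             size_t len = sizeof(ndomains);

             /* Query the number of detected VM domains. */
             if (sysctlbyname("vm.ndomains", &ndomains, &len, NULL, 0) != 0)
                     err(1, "sysctlbyname(vm.ndomains)");

             printf("%d VM domain(s) detected\n", ndomains);
             return (0);
     }

From the shell, "sysctl vm.ndomains vm.phys_locality" prints the same information.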

IMPLEMENTATION NOTES

The current NUMA implementation is VM-focused. The hardware NUMA domains are mapped into a contiguous, non-sparse VM domain space starting from 0. Thus, VM domain information (for example, the domain identifier) does not necessarily match the identifiers found in the hardware-specific information. Policy information is available in both struct thread and struct proc.

SEE ALSO
cpuset(1), cpuset_getaffinity(2), cpuset_setaffinity(2), SMP(4), bus_get_domain(9)

HISTORY

NUMA first appeared in FreeBSD 9.0 as a non-configurable first-touch allocation policy with a fail-over to round-robin allocation. It was modified in FreeBSD 10.0 to implement a round-robin allocation policy, again not configurable.

The numa_getaffinity(2) and numa_setaffinity(2) syscalls and the numactl(1) tool first appeared in FreeBSD 11.0 and were removed in FreeBSD 12.0. The current implementation appeared in FreeBSD 12.0.

AUTHORS

This manual page was written by Adrian Chadd <adrian@FreeBSD.org>.

CAVEATS

No statistics are kept to indicate how often NUMA allocation policies succeed or fail.
October 22, 2018 FreeBSD 13.1-RELEASE
