Product Name: Standard PC (Q35 + ICH9, 2009)</screen>
   for more information.
  </para>
 </sect1>
 <sect1 xml:id="sec-libvirt-config-boot-menu-virsh">
  <title>Changing boot options</title>

  <para>
   The boot menu of the &vmguest; can be found in the <tag>os</tag> element
   and looks similar to this example:
  </para>

<screen>&lt;os&gt;
  &lt;type&gt;hvm&lt;/type&gt;
  &lt;loader readonly='yes' secure='no' type='rom'&gt;/usr/lib/xen/boot/hvmloader&lt;/loader&gt;
  &lt;nvram template='/usr/share/OVMF/OVMF_VARS.fd'&gt;/var/lib/libvirt/nvram/guest_VARS.fd&lt;/nvram&gt;
  &lt;boot dev='hd'/&gt;
  &lt;boot dev='cdrom'/&gt;
  &lt;bootmenu enable='yes' timeout='3000'/&gt;
  &lt;smbios mode='sysinfo'/&gt;
  &lt;bios useserial='yes' rebootTimeout='0'/&gt;
&lt;/os&gt;</screen>

  <para>
   In this example, two boot devices are available: <tag class="attvalue">hd</tag>
   and <tag class="attvalue">cdrom</tag>. The configuration also reflects the
   actual boot order, so the <tag class="attvalue">hd</tag> comes before the
   <tag class="attvalue">cdrom</tag>.
  </para>

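  <para>
   To inspect the current <tag>os</tag> element of a &vmguest; without opening
   an editor, you can dump its configuration. The domain name
   <literal>sles15</literal> is an example, and the <command>xmllint</command>
   filter is optional:
  </para>

<screen>&prompt.sudo; <command>virsh dumpxml sles15 | xmllint --xpath '/domain/os' -</command></screen>
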
  <sect2 xml:id="sec-libvirt-config-bootorder-virsh">
   <title>Changing boot order</title>
   <para>
    The &vmguest;'s boot order is represented by the order of devices in the
    XML configuration file. As the devices are interchangeable, it is
    possible to change the boot order of the &vmguest;.
   </para>
   <procedure>
    <step>
     <para>
      Open the &vmguest;'s XML configuration:
     </para>
<screen>&prompt.sudo; <command>virsh edit sles15</command></screen>
    </step>
    <step>
     <para>
      Change the sequence of the bootable devices:
     </para>
<screen>...
&lt;boot dev='cdrom'/&gt;
&lt;boot dev='hd'/&gt;
...</screen>
    </step>
    <step>
     <para>
      Check whether the boot order was changed successfully by looking at the
      boot menu in the BIOS of the &vmguest;.
     </para>
    </step>
   </procedure>
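   <para>
    As an alternative to the global <tag>boot</tag> elements shown above,
    &libvirt; also supports per-device boot ordering via a <tag>boot</tag>
    sub-element on each device. The disk sketch below is only an example; it
    gives the CD-ROM priority over the hard disk. Note that the two styles
    cannot be mixed in one configuration:
   </para>

<screen>&lt;disk type='file' device='cdrom'&gt;
  ...
  &lt;boot order='1'/&gt;
&lt;/disk&gt;
&lt;disk type='file' device='disk'&gt;
  ...
  &lt;boot order='2'/&gt;
&lt;/disk&gt;</screen>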
  </sect2>

  <sect2 xml:id="sec-libvirt-config-directkernel-virsh">
   <title>Using direct kernel boot</title>
   <para>
    Direct kernel boot allows you to boot from a kernel and initrd stored on
    the host. Set the path to both files in the <tag>kernel</tag> and
    <tag>initrd</tag> elements:
   </para>
<screen>&lt;os&gt;
...
  &lt;kernel&gt;/root/f8-i386-vmlinuz&lt;/kernel&gt;
  &lt;initrd&gt;/root/f8-i386-initrd&lt;/initrd&gt;
...
&lt;/os&gt;</screen>
   <para>
    To enable direct kernel boot:
   </para>
   <procedure>
    <step>
     <para>
      Open the &vmguest;'s XML configuration:
     </para>
<screen>&prompt.sudo; <command>virsh edit sles15</command></screen>
    </step>
    <step>
     <para>
      Inside the <tag>os</tag> element, add a <tag>kernel</tag> element with
      the path to the kernel file on the host:
     </para>
<screen>...
&lt;kernel&gt;/root/f8-i386-vmlinuz&lt;/kernel&gt;
...</screen>
    </step>
    <step>
     <para>
      Add an <tag>initrd</tag> element with the path to the initrd file on
      the host:
     </para>
<screen>...
&lt;initrd&gt;/root/f8-i386-initrd&lt;/initrd&gt;
...</screen>
    </step>
    <step>
     <para>
      Start your VM to boot from the new kernel:
     </para>
<screen>&prompt.sudo; <command>virsh start sles15</command></screen>
    </step>
   </procedure>
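   <para>
    To pass arguments to the directly booted kernel, add a
    <tag>cmdline</tag> element next to <tag>kernel</tag> and
    <tag>initrd</tag>. The console and root device settings below are only
    an example:
   </para>

<screen>&lt;cmdline&gt;console=ttyS0 root=/dev/vda1&lt;/cmdline&gt;</screen>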
  </sect2>
 </sect1>
 <sect1 xml:id="libvirt-cpu-virsh">
  <title>Configuring CPU</title>

current live 4
   </para>
  </sect2>
 </sect1>
 <sect1 xml:id="sec-libvirt-config-memory-virsh">
  <title>Configuring memory allocation</title>

Used memory: 8388608 KiB
   </para>
  </important>
 </sect1>
 <sect1 xml:id="sec-libvirt-config-cpu-memory-virsh">
  <title>CPU and memory configuration and placement on large NUMA systems</title>

  <para>
   When deploying &vmguest;s with large CPU and memory allocations on large
   &vmhost;s with multiple NUMA nodes, it may be necessary, for both
   performance and resource planning, to tune the &vmguest;'s allocations.
  </para>

  <sect2 xml:id="sec-libvirt-config-hugepages-virsh">
   <title>Huge pages</title>
   <para>
    Using huge pages from the &vmhost; to back memory allocations for the
    &vmguest; can improve performance for many workloads. The following
    procedure shows how to back &vmguest; memory with huge pages. The
    &vmguest; should be shut off or restarted after changing the
    configuration.
   </para>
   <procedure>
    <step>
     <para>
      Edit the &vmguest; configuration:
     </para>
<screen>&prompt.sudo; <command>virsh edit sles15</command></screen>
    </step>
    <step>
     <para>
      Add the <tag>memoryBacking</tag> element under the top-level
      <tag>domain</tag> element:
     </para>
<screen>&lt;memoryBacking&gt;
  &lt;hugepages/&gt;
&lt;/memoryBacking&gt;</screen>
    </step>
    <step>
     <para>
      Start the &vmguest; if it is not running, otherwise restart it.
     </para>
    </step>
   </procedure>
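   <para>
    Huge pages must be available on the &vmhost; before the &vmguest; can use
    them. The commands below are only an example; they reserve 1024
    default-sized huge pages and then verify the reservation:
   </para>

<screen>&prompt.sudo; <command>sysctl vm.nr_hugepages=1024</command>
&prompt.sudo; <command>grep Huge /proc/meminfo</command></screen>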
   <para>
    For more information on advanced <tag>memoryBacking</tag> settings, see
    the <citetitle>Memory Backing</citetitle> documentation at
    <link xlink:href="https://libvirt.org/formatdomain.html#memory-backing"/>.
   </para>
  </sect2>
  <sect2>
   <title>Automatic NUMA placement of &vmguest;</title>
   <para>
    When deploying large VMs on large physical NUMA systems, one challenge
    is ensuring that the &vmguest;'s CPU and memory resources are not
    randomly spread across the &vmhost;'s NUMA nodes, otherwise performance
    may suffer. All resources should be close to each other with respect to
    NUMA distance. For best performance, the &vmguest;'s CPU and memory
    allocations should fit on a single &vmhost; NUMA node.
   </para>

   <para>
    &libvirt; can automatically place &vmguest; CPU and memory resources
    using the external <command>numad</command> program. To activate
    automatic placement, ensure that the <tag class="attribute">placement</tag>
    attribute of the <tag>vcpu</tag> element is set to <literal>auto</literal>.
    Memory placement should also be set to <literal>auto</literal> using the
    <tag>numatune</tag> element. The &vmguest; should be shut off or
    restarted after changing the configuration.
   </para>
   <procedure>
    <step>
     <para>
      Edit the &vmguest; configuration:
     </para>
<screen>&prompt.sudo; <command>virsh edit sles15</command></screen>
    </step>
    <step>
     <para>
      Add the <tag>numatune</tag> element under the top-level
      <tag>domain</tag> element, and set the memory placement to
      <literal>auto</literal>:
     </para>
<screen>&lt;numatune&gt;
  &lt;memory placement='auto'/&gt;
&lt;/numatune&gt;</screen>
    </step>
    <step>
     <para>
      Start the &vmguest; if it is not running, otherwise restart it.
     </para>
    </step>
   </procedure>
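   <para>
    After the &vmguest; has started, the effective memory placement can be
    queried with <command>virsh</command>. The domain name is again only an
    example:
   </para>

<screen>&prompt.sudo; <command>virsh numatune sles15</command></screen>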
   <important>
    <title>Automatic NUMA placement</title>
    <para>
     Automatic NUMA placement is only available if the
     <package>numad</package> package is installed. Automatic NUMA
     placement with <command>numad</command> works with both normal memory
     and huge pages.
    </para>
   </important>
   <para>
    For more information on advanced <tag>numatune</tag> settings, see the
    <citetitle>NUMA Node Tuning</citetitle> documentation at
    <link xlink:href="https://libvirt.org/formatdomain.html#numa-node-tuning"/>.
   </para>
  </sect2>
 </sect1>
 <sect1 xml:id="sec-libvirt-config-pci-virsh">
  <title>Adding a PCI device</title>
