Running the Certification Tests
===============================

You can initiate a testing session on a server as follows:

#. Connect to the SUT via SSH or log in at the console. A standard MAAS
   installation creates a user called ``ubuntu``, as noted earlier. You can
   test using either a direct console login or SSH, but be aware that an
   SSH login may be disconnected by the network tests or for other reasons.

#. If the SUT provides suitable ports and drives, plug in a USB 2 stick and
   a USB 3 stick, insert an SD card, and insert a suitable data CD in the
   optical drive. (Note that USB testing is not required for
   blade/cartridge style systems *unless* the blade or cartridge has
   dedicated USB ports that are not shared via the chassis.) These media
   must remain inserted *throughout the test run*, because the media tests
   will be kicked off partway through the run.

#. Run the ``canonical-certification-precheck`` script, which tests
   critical configuration details and fixes some common problems:

   - The script completes APT configuration, which is sometimes incomplete
     at system installation.

   - If the script detects that the
     ``/etc/xdg/canonical-certification.conf`` file is missing information,
     it gives you the opportunity to fill it in. This information includes
     the SUT's Secure ID (SID) number and pointers to KVM and LXD image
     files.

   - Information on some critical configuration details is displayed,
     followed by a summary, such as the following:

     .. figure:: ../images/cert-pretest.png
        :alt: The certification pre-test script helps you identify simple
              problems that might make you go d'oh!
        :width: 100%

   - Summary results are color-coded, with white for information, green for
     passed results, yellow for warnings, and red for problems that should
     be corrected. In the preceding output, the Installed RAM value is
     displayed in yellow because the system's RAM is a bit shy of 4 GiB;
     the ``USB_Disks`` line is red because no USB flash drive was inserted
     in the SUT; and the ``UVT_KVM_Image_Check`` line is red because the
     KVM image was not configured. If your terminal supports the feature,
     you can scroll up to see details of any warnings or failures.

   - If the script identifies any problems, be sure to correct them. Some
     common sources of problems include the following:

     - If the precheck script fails the ``NICs_enabled`` test, you must
       correct the problem before testing: ensure that all network ports
       are cabled to a working LAN and configured as described earlier, in
       :ref:`final-pre-testing-sut`.

     - If the ``IPERF`` test failed, you may need to launch the ``iperf3``
       server on the Target system, as described earlier. In addition to,
       or instead of, launching the server, you may need to update the
       SUT's configuration: edit the
       ``/etc/xdg/canonical-certification.conf`` file to specify your
       ``iperf3`` server(s). For example::

         TEST_TARGET_IPERF = 192.168.0.2,172.24.124.7

       If your environment includes multiple ``iperf3`` servers, you can
       identify them all, separated by commas. The test suite will attempt
       to use each server in sequence until one results in a passed test or
       until a timeout period of one hour has passed. You can use this
       feature if your environment includes separate networks with
       different speeds, or simply to identify all of your ``iperf3``
       servers. (Note that ``iperf3`` refuses a connection if a test is
       ongoing, so you can list multiple ``iperf3`` servers and let the
       test suite try them all until it finds a free one.)
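       For reference, one minimal way to start the server on the Target
       system is shown below; ``-s`` puts ``iperf3`` into server mode, and
       ``-D`` keeps it running in the background as a daemon::

         $ iperf3 -s -D

       The test suite runs the client side itself, so nothing further is
       needed on the Target once the server is listening.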
     - If the ``Hard_Disks`` or ``USB_Disks`` checks failed, you may need
       to attend to them. USB flash drives need only be prepared with FAT
       filesystems and inserted into the SUT, as described earlier. Most
       disks have device filenames of ``/dev/sda``, ``/dev/sdb``, and so
       on; but some disk devices may appear under other device names, such
       as ``/dev/nvme*``. If ``ls /dev/sd*`` shows a disk with no
       partitions, you should partition the disk (one big disk-spanning
       partition is best), create an ext4 filesystem on it, and mount it
       (subdirectories of ``/mnt`` work well). Repeat this process for
       each unmounted disk, as sketched below.
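       As a rough sketch, assuming the unpartitioned disk appeared as
       ``/dev/sdb`` (a placeholder device name; substitute your own, and be
       certain the disk holds no data you need), the preparation might look
       like this::

         $ sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
         $ sudo mkfs.ext4 /dev/sdb1
         $ sudo mkdir -p /mnt/sdb1
         $ sudo mount /dev/sdb1 /mnt/sdb1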
     - If the ``UVT_KVM_Image_Check`` or ``LXD_Image_Check`` tests failed
       and your Internet access is slow, you should download the relevant
       virtualization images on the SUT:

       #. On a computer with better Internet access, download KVM and LXD
          cloud image files from
          http://cloud-images.ubuntu.com/focal/current/. In particular,
          obtain the ``focal-server-cloudimg-amd64.img``,
          ``focal-server-cloudimg-amd64.squashfs``, and
          ``focal-server-cloudimg-amd64-lxd.tar.xz`` files, or the
          equivalents for your CPU architecture.

       #. Copy those images to any convenient directory on the SUT.

       #. Supply the full paths under the section labeled "environment" in
          ``/etc/xdg/canonical-certification.conf``. For example::

            [environment]
            KVM_TIMEOUT = 300
            KVM_IMAGE = /home/ubuntu/focal-server-cloudimg-amd64.img
            LXD_ROOTFS = /home/ubuntu/focal-server-cloudimg-amd64.squashfs
            LXD_TEMPLATE = /home/ubuntu/focal-server-cloudimg-amd64-lxd.tar.xz

          Note that the KVM and LXD configurations are separated by several
          lines of comments in the configuration file.

       A failure of the virtualization image precheck need not be a problem
       if your outside network access is good; the test script attempts to
       obtain the virtualization images from public mirrors if they are not
       present locally.

#. If you're running the test via SSH, type ``screen`` on the SUT to ensure
   that you can reconnect to your session should your link to the SUT go
   down, as may happen when running the network tests. If you're
   disconnected, you can reconnect to your session by logging in and typing
   ``screen -r``. This step is not important if you're running the Server
   Test Suite at the console.

#. Run the certification tests by typing an appropriate command, such as::

     $ certify-ubuntu-server

   In some cases, though, another command may be necessary:

   - If you're testing another Ubuntu version, you must change the version
     number in the command accordingly.

   - More exotic options, including running a limited set of tests, are
     described in :doc:`appendix-b-updated-test`.

#. The full test suite can take several hours, or in extreme cases over a
   day, to complete, depending on the hardware configuration (amount of
   RAM, disk space, and so on). During this time the computer may be
   unresponsive, because the suite includes stress test cases that are
   deliberately intensive and produce high load on the system's resources.

#. If at any time during the run you are *sure* the computer has crashed
   (or it reboots spontaneously), then after the system comes back up, run
   the ``certify-ubuntu-server`` command again and respond ``y`` when asked
   whether you want to resume the previous session.

#. If any tests fail or do not run, a screen appears that summarizes them.
   The summary screen separates failures into two categories:

   * **Failed Jobs** -- These failures *might* be serious, or they might
     not be. (This issue is addressed in more detail shortly.)

   * **Jobs with Failed Dependencies** -- Failures in this category are
     *not* serious. A failed dependency means that a precondition for even
     running the test did not exist. For instance, in the screen shot
     below, a test intended for IBM Power-architecture (ppc64el) computers
     was not run because the SUT used an x86-64 CPU.

   .. figure:: ../images/cert-failures.png
      :alt: You can sometimes correct problems and re-run tests before
            submitting results.
      :width: 100%

   You can use this opportunity to re-run a test if you believe it failed
   for a transient reason, such as if your ``iperf3`` server crashed or was
   unavailable, or if you forgot to insert a USB drive. To re-run tests,
   use the arrow keys to highlight each test you want to re-run, press
   Spacebar to select it, and then press the **R** key to re-run the
   selected tests. If you don't want to re-run any tests, press **F** to
   finish.

#. When the test run is complete, you should see a summary of tests run, a
   note about where the ``submission*`` files have been stored, and a
   prompt to submit the results to C3. If you're connected to the Internet,
   typing ``y`` at this query should cause the results to be submitted. You
   will need either a Secure ID value or to have already entered this value
   in the ``/etc/xdg/canonical-certification.conf`` file. (The
   ``canonical-certification-precheck`` script edits this file
   appropriately if you provided the SID when you ran that script.) The
   script will also prompt you for a description of the test run. This
   description is not shared publicly; it's intended to help both you and
   the Server Certification Team identify the purpose of a test run.

#. Copying the results files off the SUT is advisable. Doing so is most
   important if the automatic submission of results fails, but a local copy
   is also useful as a backup, because it enables you to review the results
   off-line and guards against submission problems that aren't immediately
   obvious. The results are stored in the ``~/.local/share/checkbox-ng``
   directory; one way to copy them is sketched at the end of this step. The
   upcoming section, :doc:`manually-upload-test-results`, describes how to
   upload results manually to C3.

   You can review your results locally by loading ``submission_.html`` in a
   web browser. This enables you to quickly spot failed tests, because
   they're highlighted in red with a "failed" notation in the Result
   column, whereas passed tests acquire a green color, with the word
   "passed." Note, however, that *a failed test does not necessarily denote
   a failed certification*. Reasons a test might fail but still enable a
   certification to pass include the following:

   - A test may be a non-blocking test, as described in the `Ubuntu Server
     Hardware Certification Coverage` document, available from |c3_link|.
     In the preceding screen shot, the Test That System Booted with Secure
     Boot Active is such a test.

   - Some tests are known to produce occasional false positives -- that
     is, they claim that problems exist when in fact they don't. In the
     preceding screen shot, the Run FWTS Server Cert Selected Test failure
     is an example of this condition.

   - Some test environments are sub-optimal, necessitating that specific
     tests be re-run. This can happen with network tests or if the tester
     forgot to insert a removable medium. In such cases, the specific test
     can be re-run rather than the entire test suite. In the preceding
     screen shot, the failed USB tests are examples; the tests failed
     because no USB devices were inserted, which is an easily corrected
     oversight.
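   As a sketch of the copy step mentioned above, assuming a workstation
   that is reachable over SSH (the host name ``workstation`` is a
   placeholder), you could run the following on the SUT::

     $ scp -r ~/.local/share/checkbox-ng ubuntu@workstation:~/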
Consult your account manager if you have questions about specific test results.