Red Hat Enterprise Linux 8
A guide to configuring basic system settings in Red Hat Enterprise Linux 8
Abstract
This document describes the basics of system administration on Red Hat Enterprise Linux 8. The title focuses on basic tasks that a system administrator needs to do just after the operating system has been successfully installed: installing software with yum, using systemd for service management, managing users, groups, and file permissions, using chrony to configure NTP, working with Python 3, and others.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting comments on specific passages
Submitting feedback through Bugzilla (account required)
Chapter 1. Preparing a control node and managed nodes to use RHEL System Roles
Before you can use individual RHEL System Roles to manage services and settings, prepare the involved hosts.
1.1. Introduction to RHEL System Roles
RHEL System Roles is a collection of Ansible roles and modules. RHEL System Roles provide a configuration interface to remotely manage multiple RHEL systems. The interface enables managing system configurations across multiple versions of RHEL, as well as adopting new major releases. On Red Hat Enterprise Linux 8, the interface currently consists of the following roles:
All these roles are provided by the rhel-system-roles package.
1.2. RHEL System Roles terminology
You can find the following terms across this documentation:
Ansible playbook
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.
Control node
Any machine with Ansible installed. You can run commands and playbooks, invoking /usr/bin/ansible or /usr/bin/ansible-playbook, from any control node. You can use any computer that has Python installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
Inventory
A list of managed nodes. An inventory file is also sometimes called a “hostfile”. Your inventory can specify information like the IP address for each managed node. An inventory can also organize managed nodes, creating and nesting groups for easier scaling. To learn more about inventory, see the Working with Inventory section.
Managed nodes
The network devices, servers, or both that you manage with Ansible. Managed nodes are also sometimes called “hosts”. Ansible is not installed on managed nodes.
1.3. Preparing a control node
RHEL includes Ansible Core in the AppStream repository with a limited scope of support. On RHEL 8.6 and later, use the ansible-core package.
Prerequisites
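As a sketch of the inventory concept described above, a minimal INI-style inventory file might look like the following; the host names and groups are hypothetical examples, not values from this document:

```ini
# inventory - example Ansible inventory (hypothetical hosts)
[webservers]
web1.example.com
web2.example.com

[databases]
# An alias can point at an explicit address
db1 ansible_host=192.0.2.10
```

Hosts can then be addressed individually, by group name, or with the built-in `all` group.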
Procedure
Additional resources
1.4. Preparing a managed node
Ansible does not use an agent on managed hosts. The only requirements are Python, which is installed by default on RHEL, and SSH access to the managed host. However, direct SSH access as the root user is a security risk. Therefore, when preparing a managed node, create a local user on this node and configure a sudo policy for it.
Prerequisites
Procedure
1.5. Verifying access from the control node to managed nodes
After you configured the control node and prepared managed nodes, test that Ansible can connect to the managed nodes. Perform this procedure on the control node.
Prerequisites
Procedure
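The connectivity test can be sketched with an ad hoc Ansible command; the inventory path is a placeholder, and the command requires Ansible and reachable managed nodes:

```shell
$ ansible all -m ping -i /path/to/inventory
```

A successful connection returns "ping": "pong" for each managed node.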
Chapter 2. Changing basic environment settings
Configuration of basic environment settings is a part of the installation process. The following sections guide you when you change them later. The basic configuration of the environment includes:
2.1. Configuring the date and time
Accurate timekeeping is important for a number of reasons. In Red Hat Enterprise Linux, timekeeping is ensured by the NTP protocol, which is implemented by a daemon running in user space. Red Hat Enterprise Linux 8 uses the chronyd daemon, available from the chrony package.
2.1.1. Displaying the current date and time
To display the current date and time, use either of these steps.
Procedure
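For example, the standard date utility prints the current date and time; the last command additionally converts a fixed epoch timestamp to illustrate the format strings:

```shell
# Print the current local date and time
date '+%Y-%m-%d %H:%M:%S %Z'

# Print the current time in UTC
date --utc '+%Y-%m-%d %H:%M:%S'

# The same format strings apply when converting an epoch timestamp;
# 1000000000 seconds after the epoch is 2001-09-09 01:46:40 UTC.
date -u -d @1000000000 '+%Y-%m-%d %H:%M:%S'
```

On systems with systemd, timedatectl provides an equivalent overview together with the time zone and NTP status.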
2.2. Configuring the system locale
System-wide locale settings are stored in the /etc/locale.conf file, which is read at early boot by the systemd daemon. Every service or user inherits the locale settings configured in /etc/locale.conf, unless individual programs or individual users override them.
This section describes how to manage the system locale.
Procedure
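For illustration, a minimal /etc/locale.conf might contain nothing more than the following; the locale value is an example:

```
LANG=en_US.UTF-8
```

Rather than editing the file by hand, you can set the value with localectl set-locale LANG=en_US.UTF-8, which writes the file for you.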
Additional resources
2.3. Configuring the keyboard layout
The keyboard layout settings control the layout used on the text console and graphical user interfaces.
Procedure
Additional resources
2.4. Changing the language using the desktop GUI
This section describes how to change the system language using the desktop GUI.
Prerequisites
Procedure
Some applications do not support certain languages. The text of an application that cannot be translated into the selected language remains in US English.
2.5. Additional resources
Chapter 3. Configuring and managing network access
This section describes different options on how to add Ethernet connections in Red Hat Enterprise Linux.
3.1. Configuring the network and host name in the graphical installation mode
Follow the steps in this procedure to configure your network and host name.
Procedure
3.2. Configuring a static Ethernet connection using nmcli
This procedure describes adding an Ethernet connection with the following settings using the nmcli utility:
Procedure
Verification steps
Troubleshooting steps
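A sketch of the nmcli commands such a procedure typically uses; the connection name, device name, and addresses below are illustrative examples, not values from this document:

```shell
# Add an Ethernet profile (example name and device)
# nmcli connection add con-name Example-Connection type ethernet ifname enp7s0

# Assign a static IPv4 address, gateway, and DNS server (example values)
# nmcli connection modify Example-Connection ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200

# Activate the profile
# nmcli connection up Example-Connection
```

The `ipv4.method manual` setting is what makes the connection static rather than DHCP-based.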
3.3. Configuring a dynamic Ethernet connection using nmtui
The nmtui application provides a text user interface for NetworkManager. You can use nmtui to configure an Ethernet connection with dynamic IP addressing. In nmtui:
Procedure
Verification
3.4. Configuring a static Ethernet connection using nmtui
The nmtui application provides a text user interface for NetworkManager. You can use nmtui to configure an Ethernet connection with a static IP address. In nmtui:
Procedure
Verification
3.5. Managing networking in the RHEL web console
In the web console, the Networking menu enables you to:
Figure 3.2. Managing Networking in the RHEL web console
3.6. Managing networking using RHEL System Roles
You can configure the networking connections on multiple target machines using the network RHEL System Role.
The required networking connections for each host are provided as a list within the network_connections variable. The following example shows how to apply the network role to set up an Ethernet connection with the required parameters:
An example playbook applying the network role to set up an Ethernet connection with the required parameters
# SPDX-License-Identifier: BSD-3-Clause
---
- hosts: network-test
  vars:
    network_connections:
      # Create one ethernet profile and activate it.
      # The profile uses automatic IP addressing
      # and is tied to the interface by MAC address.
      - name: prod1
        state: up
        type: ethernet
        autoconnect: yes
        mac: "00:00:5e:00:53:00"
        mtu: 1450
  roles:
    - rhel-system-roles.network
3.7. Additional resources
Chapter 4. Registering the system and managing subscriptions
Subscriptions cover products installed on Red Hat Enterprise Linux, including the operating system itself. You can use a subscription to Red Hat Content Delivery Network to track:
4.1. Registering the system after the installation
Use the following procedure to register your system if you did not register it during the installation process.
Prerequisites
Procedure
4.2. Registering subscriptions with credentials in the web console
Use the following steps to register a newly installed Red Hat Enterprise Linux with account credentials using the RHEL web console.
Prerequisites
Procedure
At this point, your Red Hat Enterprise Linux system has been successfully registered.
4.3. Registering a system using a Red Hat account on GNOME
Follow the steps in this procedure to enroll your system with your Red Hat account.
Prerequisites
Procedure
4.4. Registering a system using an activation key on GNOME
Follow the steps in this procedure to register your system with an activation key. You can get the activation key from your organization administrator.
Prerequisites
Procedure
4.5. Registering RHEL 8 using the installer GUI
Use the following steps to register a newly installed Red Hat Enterprise Linux 8 using the RHEL installer GUI.
Prerequisites
Procedure
Chapter 5. Making systemd services start at boot time
systemd is a system and service manager for Linux operating systems that introduces the concept of systemd units. This section provides information on how to ensure that a service is enabled or disabled at boot time. It also explains how to manage the services through the web console.
5.1. Enabling or disabling the services
You can determine which services are enabled or disabled at boot time already during the installation process. You can also enable or disable a service on an installed operating system. This section describes the steps for enabling or disabling those services on an already installed operating system:
Prerequisites
Procedure
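The enable and disable operations described above follow this general shape; sshd.service stands in for any service unit name:

```shell
# Configure a service to start at boot time
# systemctl enable sshd.service

# Prevent a service from starting at boot time
# systemctl disable sshd.service

# Check whether a service is currently enabled
# systemctl is-enabled sshd.service
```

Enabling or disabling a unit does not start or stop it in the current session; combine with systemctl start or stop if an immediate state change is also needed.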
You cannot enable a service that has been previously masked. You have to unmask it first:
# systemctl unmask service_name
5.2. Managing services in the RHEL web console
This section describes how you can also enable or disable a service using the web console. You can manage systemd targets, services, sockets, timers, and paths. You can also check the service status, start or stop services, and enable or disable them.
Prerequisites
Procedure
Chapter 6. Configuring system security
Computer security is the protection of computer systems and their hardware, software, information, and services from theft, damage, disruption, and misdirection. Ensuring computer security is an essential task, in particular in enterprises that process sensitive data and handle business transactions. This section covers only the basic security features that you can configure after installation of the operating system.
6.1. Enabling the firewalld service
A firewall is a network security system that monitors and controls incoming and outgoing network traffic according to configured security rules. A firewall typically establishes a barrier between a trusted secure internal network and another outside network. The firewalld service, which provides a firewall in Red Hat Enterprise Linux, is automatically enabled during installation.
To enable the firewalld service, follow this procedure.
Procedure
Verification steps
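The enabling and verification steps typically look like this:

```shell
# Enable and start the firewalld service
# systemctl enable firewalld
# systemctl start firewalld

# Verify that the firewall is running
# firewall-cmd --state
running
```

The firewall-cmd utility is the command-line client for firewalld.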
6.2. Managing firewall in the RHEL 8 web console
To configure the firewalld service, use the Firewall section in the Networking settings of the web console.
By default, the firewalld service is enabled.
Procedure
Additionally, you can define more fine-grained access through the firewall to a service using the Add services… button.
6.3. Managing basic SELinux settings
Security-Enhanced Linux (SELinux) is an additional layer of system security that determines which processes can access which files, directories, and ports. These permissions are defined in SELinux policies. A policy is a set of rules that guide the SELinux security engine. SELinux has two possible states:
When SELinux is enabled, it runs in one of the following modes:
In enforcing mode, SELinux enforces the loaded policies. SELinux denies access based on SELinux policy rules and enables only the interactions that are explicitly allowed. Enforcing mode is the safest SELinux mode and is the default mode after installation.
In permissive mode, SELinux does not enforce the loaded policies. SELinux does not deny access, but reports actions that break the rules to the /var/log/audit/audit.log file.
6.4. Ensuring the required state of SELinux
By default, SELinux operates in enforcing mode. However, in specific scenarios, you can set SELinux to permissive mode or even disable it. Red Hat recommends keeping your system in enforcing mode. For debugging purposes, you can set SELinux to permissive mode. Follow this procedure to change the state and mode of SELinux on your system.
Procedure
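A change of SELinux mode typically involves commands like the following; note that setenforce changes the mode only until the next boot, while permanent changes are made in /etc/selinux/config:

```shell
# Report the current SELinux mode
$ getenforce
Enforcing

# Temporarily switch to permissive mode (until the next boot)
# setenforce 0

# Switch back to enforcing mode
# setenforce 1
```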
6.5. Switching SELinux modes in the RHEL 8 web console
You can set the SELinux mode through the RHEL 8 web console in the SELinux menu item. By default, SELinux enforcing policy in the web console is on, and SELinux operates in enforcing mode. By turning it off, you switch SELinux to permissive mode. Note that this selection is automatically reverted on the next boot to the configuration defined in the /etc/selinux/config file.
Procedure
6.6. Additional resources
Chapter 7. Getting started with managing user accounts
Red Hat Enterprise Linux is a multi-user operating system, which enables multiple users on different computers to access a single system installed on one machine. Every user operates under their own account, and managing user accounts thus represents a core element of Red Hat Enterprise Linux system administration. The following are the different types of user accounts:
7.1. Managing accounts and groups using command line tools
This section describes basic command-line tools to manage user accounts and groups.
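A sketch of the most common account-management commands; the user and group names are examples:

```shell
# Create a new user account
# useradd example-user

# Set the user's password interactively
# passwd example-user

# Create a new group and add the user to it as a supplementary group
# groupadd example-group
# usermod -aG example-group example-user
```

The -a option of usermod appends the group instead of replacing the user's existing supplementary groups.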
Additional resources
7.2. System user accounts managed in the web console
With user accounts displayed in the RHEL web console you can:
The RHEL web console displays all user accounts located in the system. Therefore, you can see at least one user account just after the first login to the web console. After logging into the RHEL web console, you can perform the following operations:
7.3. Adding new accounts using the web console
Use the following steps for adding user accounts to the system and setting administration rights to the accounts through the RHEL web console.
Procedure
Chapter 8. Dumping a crashed kernel for later analysis
To analyze why a system crashed, you can use the kdump service to save the contents of the system’s memory for later analysis.
8.1. What is kdump
kdump is a service that provides a crash dumping mechanism and enables you to save the contents of the system memory for analysis. A kernel crash dump can be the only information available in the event of a system failure (a critical bug). Therefore, operational kdump is important in mission-critical environments.
You can enable kdump for all installed kernels on a machine or only for specified kernels. When kdump is enabled, a small portion of system memory is reserved for the capture kernel, which takes over when the main kernel crashes and saves the memory image.
8.2. Configuring kdump memory usage and target location in web console
The procedure below shows you how to use the Kernel Dump tab in the RHEL web console interface to configure the amount of memory reserved for the kdump kernel, and the target location of the crash dump file.
Procedure
8.3. kdump using RHEL System Roles
RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems. The kdump role enables you to set basic kernel dump parameters on multiple systems.
The following example playbook shows how to apply the kdump role to set the path to which crash dump files are saved:
---
- hosts: kdump-test
  vars:
    kdump_path: /var/crash
  roles:
    - rhel-system-roles.kdump
For a detailed reference on kdump role variables, install the rhel-system-roles package, and see the README.md or README.html files in the /usr/share/doc/rhel-system-roles/kdump directory.
8.4. Additional resources
Chapter 9. Recovering and restoring a system
To recover and restore a system using an existing backup, Red Hat Enterprise Linux provides the Relax-and-Recover (ReaR) utility. You can use the utility as a disaster recovery solution and also for system migration. The utility enables you to perform the following tasks:
Additionally, for disaster recovery, you can also integrate certain backup software with ReaR. Setting up ReaR involves the following high-level steps:
9.1. Setting up ReaR
Use the following steps to install the package for using the Relax-and-Recover (ReaR) utility, create a rescue system, and configure and generate a backup.
Prerequisites
Procedure
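At a high level, the setup steps map onto commands like the following; the exact backup method and output location depend on the configuration in /etc/rear/local.conf:

```shell
# Install ReaR
# yum install rear

# Create the bootable rescue system
# rear mkrescue

# Create the rescue system and the backup in one step
# rear mkbackup
```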
9.2. Using a ReaR rescue image on the 64-bit IBM Z architecture
Basic Relax and Recover (ReaR) functionality is now available on the 64-bit IBM Z architecture as a Technology Preview. You can create a ReaR rescue image on IBM Z only in the z/VM environment. Backing up and recovering logical partitions (LPARs) has not been tested.
ReaR on the 64-bit IBM Z architecture is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
The only output method currently available is Initial Program Load (IPL). IPL produces a kernel and an initial ramdisk (initrd) that can be used with the zipl boot loader.
Prerequisites
Procedure
Add the following variables to the /etc/rear/local.conf file:
Currently, the rescue process reformats all the DASDs (Direct Attached Storage Devices) connected to the system. Do not attempt a system recovery if there is any valuable data present on the system storage devices. This also includes the device prepared with the zipl boot loader, ReaR kernel, and initrd that were used to boot into the rescue environment. Ensure you keep a copy.
Chapter 10. Troubleshooting problems using log files
Log files contain messages about the system, including the kernel, services, and applications running on it. These contain information that helps troubleshoot issues or monitor system functions. The logging system in Red Hat Enterprise Linux is based on the built-in syslog protocol.
10.1. Services handling syslog messages
The following two services handle syslog messages:
The systemd-journald daemon collects messages from the kernel, the early stages of the boot process, the standard output and error of daemons as they start up and run, and syslog, and forwards the messages to the rsyslog service for further processing.
The rsyslog service sorts the syslog messages by type and priority, and writes them to the files in the /var/log directory, where the logs are persistently stored.
10.2. Subdirectories storing syslog messages
The following subdirectories under the /var/log directory store syslog messages:
10.3. Inspecting log files using the web console
Follow the steps in this procedure to inspect the log files using the RHEL web console.
Figure 10.1. Inspecting the log files in the RHEL 8 web console
10.4. Viewing logs using the command line
The Journal is a component of systemd that helps to view and manage log files. It addresses problems connected with traditional logging, is closely integrated with the rest of the system, and supports various logging technologies and access management for the log files.
You can use the journalctl utility to read log messages in the systemd journal, for example:
$ journalctl -b | grep kvm
May 15 11:31:41 localhost.localdomain kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 11:31:41 localhost.localdomain kernel: kvm-clock: cpu 0, msr 76401001, primary cpu clock
...
Table 10.1. Viewing system information
Table 10.2. Viewing information on specific services
Table 10.3. Viewing logs related to specific boots
10.5. Additional resources
Chapter 11. Accessing Red Hat support
This section describes how to effectively troubleshoot your problems using Red Hat support and the sosreport utility.
To obtain support from Red Hat, use the Red Hat Customer Portal, which provides access to everything available with your subscription.
11.1. Obtaining Red Hat support through the Red Hat Customer Portal
The following section describes how to use the Red Hat Customer Portal to get help.
Prerequisites
Procedure
11.2. Troubleshooting problems using sosreport
The sosreport command collects configuration details, system information, and diagnostic information from a Red Hat Enterprise Linux system.
The following section describes how to use the sosreport command to produce reports for your support cases.
Prerequisites
Procedure
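A minimal sketch of generating a report; the resulting archive can be attached to a support case:

```shell
# Install the sos package, which provides the sosreport command
# yum install sos

# Generate the report archive
# sosreport
```

The command prints the path of the generated archive and its checksum when it finishes.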
Chapter 12. Managing software packages
12.1. Software management tools in RHEL 8
In RHEL 8, software installation is enabled by the new version of the YUM tool (YUM v4), which is based on the DNF technology. Upstream documentation identifies the technology as DNF, and the tool is referred to as DNF upstream. As a result, some output returned by the new YUM tool in RHEL 8 mentions DNF.
Although YUM v4 used in RHEL 8 is based on DNF, it is compatible with YUM v3 used in RHEL 7. For software installation, the yum command and most of its command-line options work the same way as in RHEL 7.
Selected yum plug-ins and utilities have been ported to the new DNF back end, and can be installed under the same names as in RHEL 7. Packages also provide compatibility symlinks, so the binaries, configuration files, and directories can be found in their usual locations.
Note that the legacy Python API provided by YUM v3 is no longer available. You can migrate your plug-ins and scripts to the new API provided by YUM v4 (DNF Python API), which is stable and fully supported. See DNF API Reference for more information.
12.2. Application streams
RHEL 8 introduces the concept of Application Streams. Multiple versions of user-space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments.
Components made available as Application Streams can be packaged as modules or RPM packages, and are delivered through the AppStream repository in RHEL 8. Each Application Stream has a given life cycle, either the same as RHEL 8 or shorter, more suitable to the particular application. Application Streams with a shorter life cycle are listed on the Red Hat Enterprise Linux 8 Application Streams Life Cycle page.
Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together.
Module streams represent versions of the Application Stream components. For example, two streams (versions) of the PostgreSQL database server are available in the postgresql module: PostgreSQL 10 (the default stream) and PostgreSQL 9.6. Only one module stream can be installed on the system. Different versions can be used in separate containers.
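For example, the PostgreSQL streams mentioned above can be inspected and installed with the yum module subcommands:

```shell
# List available streams of the postgresql module
# yum module list postgresql

# Install a specific stream, here PostgreSQL 10
# yum module install postgresql:10
```

Installing a stream implicitly enables it, so subsequent package operations resolve against that version.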
Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest.
12.3. Searching for software packages
yum allows you to perform a complete set of operations with software packages. The following section describes how to use yum to:
12.3.1. Searching packages with YUM
Use the following procedure to find a package providing a particular application or other content.
Procedure
12.3.2. Listing packages with YUM
Use the following procedure to list installed and available packages.
Procedure
Note that you can filter the results by appending glob expressions as arguments. See Specifying glob expressions in yum input for more details.
12.3.3. Listing repositories with YUM
Use the following procedure to list enabled and disabled repositories.
Procedure
Note that you can filter the results by passing the ID or name of repositories as arguments or by appending glob expressions. See Specifying glob expressions in yum input for more details.
12.3.4. Displaying package information with YUM
You can display various types of information about a package using YUM, for example the version, release, size, loaded plug-ins, and more.
Procedure
Note that you can filter the results by appending glob expressions as arguments. See Specifying glob expressions in yum input for more details.
12.3.5. Listing package groups with YUM
Use the following procedure to list installed and available package groups.
Procedure
Note that you can filter the results by appending glob expressions as arguments. See Specifying glob expressions in yum input for more details.
12.3.6. Specifying glob expressions in YUM input
Procedure
To ensure glob expressions are passed to yum as intended, prevent the shell from expanding them, either by escaping the wildcard characters or by enclosing the entire glob expression in single or double quotes.
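The shell-expansion pitfall can be demonstrated with printf alone; quoting or escaping keeps the pattern literal, so a command such as yum would receive it unchanged:

```shell
# Quoted: the glob reaches the command as a literal string
pattern="kernel*"
printf '%s\n' "$pattern"

# Escaped: the backslash prevents shell expansion the same way
printf '%s\n' kernel\*
```

Both commands print the literal string kernel*. Unquoted, the shell would first try to expand kernel* against file names in the current directory.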
12.4. Installing software packages
The following section describes how to use yum to:
12.4.1. Installing packages with YUM
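The basic installation commands look like this; the package names are examples:

```shell
# Install a single package
# yum install httpd

# Install multiple packages at once
# yum install httpd mod_ssl
```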
Note that you can optimize the package search by explicitly defining how to parse the argument. See Section 12.4.3, “Specifying a package name in YUM input” for more details.
12.4.2. Installing a package group with YUM
The following procedure describes how to install a package group by a group name or by a groupID using yum.
Procedure
12.4.3. Specifying a package name in YUM input
To optimize the installation and removal process, you can append -n, -na, or -nevra suffixes to yum install and yum remove commands to explicitly define how to parse an argument:
12.5. Updating software packages
yum allows you to check if your system has any pending updates. You can list packages that need updating and choose to update a single package, multiple packages, or all packages at once. If any of the packages you choose to update have dependencies, they are updated as well.
The following section describes how to use yum to:
12.5.1. Checking for updates with YUM
The following procedure describes how to check the available updates for packages installed on your system using yum.
Procedure
12.5.2. Updating a single package with YUM
Use the following procedure to update a single package and its dependencies using yum.
When applying updates to the kernel, yum always installs a new kernel regardless of whether you are using the yum update or yum install command.
12.5.3. Updating a package group with YUM
Use the following procedure to update a group of packages and their dependencies using yum.
Procedure
12.5.4. Updating all packages and their dependencies with YUM
Use the following procedure to update all packages and their dependencies using yum.
Procedure
12.5.6. Automating software updates
To check and download package updates automatically and regularly, you can use the DNF Automatic tool that is provided by the dnf-automatic package.
DNF Automatic is an alternative command-line interface to yum that is suited for automatic and regular execution using systemd timers, cron jobs, and other such tools.
DNF Automatic synchronizes package metadata as needed and then checks for available updates. After that, the tool can perform one of the following actions depending on how you configure it:
The outcome of the operation is then reported by a selected mechanism, such as the standard output or email.
12.5.6.1. Installing DNF Automatic
The following procedure describes how to install the DNF Automatic tool.
Procedure
Verification steps
12.5.6.2. DNF Automatic configuration file
By default, DNF Automatic uses /etc/dnf/automatic.conf as its configuration file to define its behavior.
The configuration file is separated into the following topical sections:
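For illustration, a fragment of /etc/dnf/automatic.conf showing two of those sections; the values below are examples, not the shipped defaults:

```
[commands]
# What to do with available updates: download and apply them
download_updates = yes
apply_updates = yes

[emitters]
# Report the results on standard output
emit_via = stdio
```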
With the default settings of the emitters section, notifications are shown on the standard output.
Settings of the operation mode from the commands section are overridden by the settings of the systemd timer units for all timer units except dnf-automatic.timer.
Additional resources
12.5.6.3. Enabling DNF Automatic
To run DNF Automatic, you always need to enable and start a specific systemd timer unit. You can use one of the timer units provided in the dnf-automatic package.
The following section describes how to enable DNF Automatic.
Prerequisites
For more information on the DNF Automatic configuration file, see Section 12.5.6.2, “DNF Automatic configuration file”.
Procedure
For downloading available updates, use:
# systemctl enable dnf-automatic-download.timer
# systemctl start dnf-automatic-download.timer
For downloading and installing available updates, use:
# systemctl enable dnf-automatic-install.timer
# systemctl start dnf-automatic-install.timer
For reporting about available updates, use:
# systemctl enable dnf-automatic-notifyonly.timer
# systemctl start dnf-automatic-notifyonly.timer
Optionally, you can use:
# systemctl enable dnf-automatic.timer
# systemctl start dnf-automatic.timer
In terms of downloading and applying updates, this timer unit behaves according to the settings in the /etc/dnf/automatic.conf configuration file.
Alternatively, you can also run DNF Automatic by executing the /usr/bin/dnf-automatic file directly.
Verification steps
Additional resources
12.5.6.4. Overview of the systemd timer units included in the dnf-automatic package
The systemd timer units take precedence and override the settings in the /etc/dnf/automatic.conf configuration file for downloading and applying updates. For example, if you set the following option in the /etc/dnf/automatic.conf file:
download_updates = yes
but you activate the dnf-automatic-notifyonly.timer unit, the packages will not be downloaded. The dnf-automatic package includes the following systemd timer units:
Additional resources
12.6. Uninstalling software packages
The following section describes how to use yum to:
12.6.1. Removing packages with YUM
Use the following procedure to remove a single package or multiple packages.
Procedure
yum is not able to remove a package without also removing packages that depend on it.
Note that you can optimize the package search by explicitly defining how to parse the argument. See Specifying a package name in yum input for more details.
12.6.2. Removing a package group with YUM
Use the following procedure to remove a package group either by the group name or by the groupID.
Procedure
12.6.3. Specifying a package name in YUM input
To optimize the installation and removal process, you can append -n, -na, or -nevra suffixes to yum install and yum remove commands to explicitly define how to parse an argument:
12.7. Managing software package groups
A package group is a collection of packages that serve a common purpose (System Tools, Sound and Video). Installing a package group pulls a set of dependent packages, which saves time considerably. The following section describes how to use yum to:
12.7.1. Listing package groups with YUM
Use the following procedure to view the number of installed and available package groups.
Procedure
Note that you can filter the results by appending glob expressions as arguments. See Specifying glob expressions in yum input for more details.
12.7.2. Installing a package group with YUM
The following procedure describes how to install a package group by a group name or by a groupID using yum.
Procedure
12.7.3. Removing a package group with YUM
Use the following procedure to remove a package group either by the group name or by the groupID.
Procedure
12.7.4. Specifying glob expressions in YUM input
Procedure
To ensure glob expressions are passed to yum as intended, prevent the shell from expanding them, either by escaping the wildcard characters or by quoting the entire glob expression.
12.8. Handling package management history
The yum history command enables you to review information about the timeline of yum transactions, the dates and times they occurred, the number of packages affected, and whether these transactions succeeded or were aborted. The command can also be used to undo or redo transactions.
The following section describes how to use yum to:
12.8.1. Listing transactions with YUM
Use the following procedure to list the latest transactions, the latest operations for a selected package, and details of a particular transaction.
Procedure
12.8.2. Reverting transactions with YUM
The following procedure describes how to revert a selected transaction or the last transaction using yum.
Procedure
Note that the yum history undo command reverts only the steps that were performed during the selected transaction.
12.8.3. Repeating transactions with YUM
Use the following procedure to repeat a selected transaction or the last transaction using yum.
Procedure
Note that the yum history redo command repeats only the selected transaction.
12.8.4. Specifying glob expressions in YUM input
Procedure
To ensure glob expressions are passed to yum as intended, prevent the shell from expanding them, either by escaping the wildcard characters or by quoting the entire glob expression.
12.9. Managing software repositories
The configuration information for yum and related utilities is stored in the /etc/yum.conf file. This file contains one mandatory [main] section, which enables you to set yum options that have global effect. It is recommended to define individual repositories in new or existing .repo files in the /etc/yum.repos.d/ directory.
Note that the values you define in individual [repository] sections of the /etc/yum.conf file override the values set in the [main] section.
The following section describes how to:
12.9.1. Setting YUM repository options
The /etc/yum.conf configuration file contains the [repository] sections, where repository is a unique repository ID. The [repository] sections enable you to define individual yum repositories.
Do not give custom repositories names used by the Red Hat repositories to avoid conflicts.
For a complete list of available [repository] options, see the [repository] OPTIONS section of the yum.conf(5) manual page.
12.9.2. Adding a YUM repository
Procedure
To define a new repository, you can:
yum repositories commonly provide their own .repo file.
It is recommended to define your repositories in a .repo file instead of in /etc/yum.conf, because all files with the .repo file extension in the /etc/yum.repos.d/ directory are read by yum.
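For illustration, a minimal .repo file; the repository ID, name, and URLs are placeholders:

```
# /etc/yum.repos.d/example.repo
[example-repo]
name=Example Repository
baseurl=https://repo.example.com/rhel8/
enabled=1
gpgcheck=1
gpgkey=https://repo.example.com/RPM-GPG-KEY-example
```

Keeping gpgcheck enabled ensures that package signatures are verified against the configured GPG key.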
Obtaining and installing software packages from unverified or untrusted sources other than Red Hat certificate-based Content Delivery Network constitutes a potential security risk, and could lead to security, stability, compatibility, and maintainability issues.
12.9.3. Enabling a YUM repository
Once you added a yum repository to your system, enable it to ensure installation and updates.
Procedure
12.9.4. Disabling a YUM repository
Disable a specific YUM repository to prevent particular packages from being installed or updated.
Procedure
12.10. Configuring YUM
The configuration information for yum and related utilities is stored in the /etc/yum.conf file. This file contains one mandatory [main] section, which enables you to set yum options that have global effect.
The following section describes how to:
12.10.1. Viewing the current YUM configurations
Use the following procedure to view the current yum configurations.
Procedure
12.10.2. Setting YUM main options
The /etc/yum.conf configuration file contains one mandatory [main] section, which enables you to set yum options that have global effect.
You can add additional options under the [main] section heading in /etc/yum.conf.
For a complete list of available [main] options, see the [main] OPTIONS section of the yum.conf(5) manual page.
12.10.3. Using YUM plug-ins
yum provides plug-ins that extend and enhance its operations. Certain plug-ins are installed by default. The following section describes how to enable, configure, and disable yum plug-ins.
12.10.3.1. Managing YUM plug-ins
Procedure
The plug-in configuration files always contain a [main] section, where an enabled option controls whether the plug-in is enabled when you run yum commands.
Every installed plug-in has its own configuration file in the /etc/dnf/plugins/ directory. You can enable or disable plug-ins in these files.
12.10.3.2. Enabling YUM plug-ins
The following procedure describes how to disable or enable all YUM plug-ins, disable all plug-ins for a particular command, or disable certain YUM plug-ins for a single command.
Procedure
12.10.3.3. Disabling YUM plug-ins
Chapter 13. Introduction to systemd
systemd is a system and service manager for Linux operating systems. It is designed to be backwards compatible with SysV init scripts, and provides a number of features such as parallel startup of system services at boot time, on-demand activation of daemons, or dependency-based service control logic. Starting with Red Hat Enterprise Linux 7, systemd replaced Upstart as the default init system.
systemd introduces the concept of systemd units. These units are represented by unit configuration files located in one of the directories listed in the following table:
Table 13.1. systemd unit files locations
The units encapsulate information about:
The default configuration of systemd is defined during compilation, and it can be found in the systemd configuration file at /etc/systemd/system.conf. Use this file if you want to deviate from the compiled-in defaults and override selected default values for systemd units globally.
For example, to override the default value of the timeout limit, which is set to 90 seconds, use the DefaultTimeoutStartSec parameter and enter the required value in seconds:
DefaultTimeoutStartSec=required value
13.1. systemd unit types
For a complete list of available systemd unit types, see the following table:
Table 13.2. Available systemd unit types
13.2. systemd main featuresThe systemd system and service manager provides the following main features:
13.3. Compatibility changesThe systemd system and service manager is designed to be mostly compatible with SysV init and Upstart. The following are the most notable compatibility changes with regards to Red Hat Enterprise Linux 6 system that used SysV init:
13.4. Additional resources
Chapter 14. Managing system services with systemctl
The systemctl utility enables you to manage system services: you can list, start, stop, restart, enable, and disable services, and view their status.
This section describes how to manage system services with the systemctl utility.
14.1. Service unit management with systemctl
The service units help to control the state of services and daemons in your system. Service unit names end with the .service file extension.
Additionally, some service units have alias names. Aliases can be shorter than unit names, and you can use them instead of the actual unit names. To find all aliases that can be used for a particular unit, use:
# systemctl show service_name.service -p Names
Additional resources
14.2. Comparison of a service utility with systemctl
This section shows a comparison between the service utility and the usage of the systemctl utility.
Table 14.1. Comparison of the service utility with systemctl
14.3. Listing system services You can list all currently loaded service units and the status of all available service units. Procedure
14.4. Displaying system service status You can inspect any service unit to get its detailed information and verify whether the service is enabled or running. You can also view services that are ordered to start after or before a particular service unit. Procedure
14.5. Positive and negative service dependencies In systemd, positive and negative dependencies exist between services. Starting a particular service may require starting one or more other services (a positive dependency) or stopping one or more services (a negative dependency). When you attempt to start a new service, systemd resolves all dependencies automatically, without explicit notification to the user. For example, if you are running the postfix service and you try to start a service that conflicts with it, systemd first automatically stops postfix. 14.6. Starting a system service You can start a system service in the current session using the systemctl start command. Procedure
14.7. Stopping a system service You can stop a system service in the current session using the systemctl stop command. Procedure
14.8. Restarting a system service
You can restart a system service in the current session using the systemctl restart command. This procedure describes how to:
Procedure
14.9. Enabling a system service You can configure a service to start automatically at boot time using the systemctl enable command. Procedure
14.10. Disabling a system service You can prevent a service unit from starting automatically at boot time using the systemctl disable command. Procedure
Chapter 15. Working with systemd targets systemd targets are represented by target units. Target unit files end with the .target file extension. This section includes procedures to implement while working with systemd targets. 15.1. Difference between SysV runlevels and systemd targets The previous versions of Red Hat Enterprise Linux were distributed with SysV init or Upstart, and implemented a predefined set of runlevels that represented specific modes of operation. These runlevels were numbered from 0 to 6 and were defined by a selection of system services to be run when a particular runlevel was enabled by the system administrator. Starting with Red Hat Enterprise Linux 7, the concept of runlevels has been replaced with systemd targets. Red Hat Enterprise Linux 7 was distributed with a number of predefined targets that are more or less similar to the standard set of runlevels from the previous releases. For compatibility reasons, it also provides aliases for these targets that directly map to the SysV runlevels. The following table provides a complete list of SysV runlevels and their corresponding systemd targets: Table 15.1. Comparison of SysV runlevels with systemd targets
The following table compares the SysV init commands with systemctl. Use the systemctl utility to view, change, or configure systemd targets: Table 15.2. Comparison of SysV init commands with systemctl
Additional resources
15.2. Viewing the default target The default target unit is represented by the default.target file. Procedure
By default, the Procedure
15.2.1. Changing the default target The default target unit is represented by the default.target file. Procedure
Additional resources
15.2.2. Changing the default target using a symbolic link The following procedure describes how to change the default target by creating a symbolic link to the target. Procedure
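The symbolic-link approach can be sketched as follows. On a real system the link is /etc/systemd/system/default.target pointing at a unit under /usr/lib/systemd/system/, and creating it requires root; this sketch imitates the mechanism in a temporary directory so it can run unprivileged:

```shell
#!/bin/sh
# Sketch of changing the default target via a symbolic link.
# On a real system you would run (as root):
#   ln -sf /usr/lib/systemd/system/multi-user.target /etc/systemd/system/default.target
tmp=$(mktemp -d)
touch "$tmp/multi-user.target"        # stand-in for the real unit file
ln -sf "$tmp/multi-user.target" "$tmp/default.target"
readlink "$tmp/default.target"        # shows which target is now the default
rm -rf "$tmp"
```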
15.2.3. Changing the current target This procedure explains how to change the target unit in the current session using the systemctl command. Procedure
Replace multi-user with the name of the target unit you want to use by default. Verification steps
Rescue mode provides a convenient single-user environment and allows you to repair your system in situations when it is unable to complete a regular booting process. In rescue mode, the system attempts to mount all local file systems and start some important system services, but it does not activate network interfaces or allow more users to be logged into the system at the same time. Procedure
15.2.3.1. Booting to emergency mode Emergency mode provides the most minimal environment possible and allows you to repair your system even in situations when the system is unable to enter rescue mode. In emergency mode, the system mounts the root file system only for reading, does not attempt to mount any other local file systems, does not activate network interfaces, and only starts a few essential services. Procedure
Chapter 16. Shutting down, suspending, and hibernating the system This section contains instructions about shutting down, suspending, or hibernating your operating system. 16.1. System shutdown To shut down the system, you can either use the systemctl utility directly, or call this utility through the shutdown command. The advantage of using the shutdown command is that it supports a time argument, so you can schedule a shutdown and cancel it before it takes effect.
16.2. Shutting down the system using the shutdown command By following this procedure, you can use the shutdown command to shut down the system. Prerequisites
Procedure
16.3. Shutting down the system using the systemctl command By following this procedure, you can use the systemctl command to shut down the system. Prerequisites
Procedure
By default, running either of these commands causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run the selected command with the --no-wall option. 16.4. Restarting the system You can restart the system by following this procedure. Prerequisites
Procedure
By default, this command causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run this command with the --no-wall option. 16.5. Suspending the system You can suspend the system by following this procedure. Prerequisites
Procedure
16.6. Hibernating the system By following this procedure, you can either hibernate the system, or hibernate and suspend the system. Prerequisites
Procedure
16.7. Overview of the power management commands with systemctl You can use the following list of systemctl commands to control the power states of the system: Table 16.1. Overview of the systemctl power management commands
Chapter 17. Working with systemd unit files This chapter includes the description of systemd unit files. The following sections show you how to:
17.1. Introduction to unit files A unit file contains configuration directives that describe the unit and define its behavior. Several systemctl commands work with unit files in the background. Unit file names take the following form: unit_name.type_extension Here, unit_name stands for the name of the unit and type_extension identifies the unit type. For a complete list of unit types, see the systemd unit types table. For example, there usually is an sshd.service as well as an sshd.socket unit present on your system. Unit files can be supplemented with a directory for additional configuration files. For example, to add custom configuration options to sshd.service, create the sshd.service.d/custom.conf file and insert additional directives there. Many unit file options can be set using the so-called unit specifiers – wildcard strings that are dynamically replaced with unit parameters when the unit file is loaded. This enables creation of generic unit files that serve as templates for generating instantiated units. See Working with instantiated units. 17.2. Unit file structure Unit files typically consist of three sections:
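As an illustration of this three-section layout, a minimal service unit might look like the following sketch; the unit name, description, and executable path are hypothetical:

```ini
# /etc/systemd/system/example-app.service  (hypothetical unit)
[Unit]
Description=Example application daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/example-app --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating or editing a unit file, run `systemctl daemon-reload` so that systemd picks up the change.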
17.3. Important [Unit] section options The following table lists important options of the [Unit] section. Table 17.1. Important [Unit] section options
17.4. Important [Service] section options The following table lists important options of the [Service] section. Table 17.2. Important [Service] section options
17.5. Important [Install] section options The following table lists important options of the [Install] section. Table 17.3. Important [Install] section options
17.6. Creating custom unit files There are several use cases for creating unit files from scratch: you could run a custom daemon, or create a second instance of some existing service, as described in Creating a custom unit file by using the second instance of the sshd service. On the other hand, if you intend just to modify or extend the behavior of an existing unit, use the instructions from Modifying existing unit files. Procedure The following procedure describes the general process of creating a custom service:
17.7. Creating a custom unit file by using the second instance of the sshd service System administrators often need to configure and run multiple instances of a service. This is done by creating copies of the original service configuration files and modifying certain parameters to avoid conflicts with the primary instance of the service. The following procedure shows how to create a second instance of the sshd service: Procedure
17.8. Converting SysV init scripts to unit files Before taking time to convert a SysV init script to a unit file, make sure that the conversion was not already done elsewhere. All core services installed on Red Hat Enterprise Linux come with default unit files, and the same applies for many third-party software packages. Converting an init script to a unit file requires analyzing the script and extracting the necessary information from it. Based on this data you can create a unit file. As init scripts can vary greatly depending on the type of the service, you might need to employ more configuration options for translation than outlined in this chapter. Note that some levels of customization that were available with init scripts are no longer supported by systemd units. The majority of information needed for conversion is provided in the script's header. The following example shows the opening section of the init script used to start the postfix service:

#!/bin/bash
# postfix      Postfix Mail Transfer Agent
# chkconfig: 2345 80 30
# description: Postfix is a Mail Transport Agent, which is the program that moves mail from one machine to another.
# processname: master
# pidfile: /var/spool/postfix/pid/master.pid
# config: /etc/postfix/main.cf
# config: /etc/postfix/master.cf
### BEGIN INIT INFO
# Provides: postfix MTA
# Required-Start: $local_fs $network $remote_fs
# Required-Stop: $local_fs $network $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop postfix
# Description: Postfix is a Mail Transport Agent, which is the program that moves mail from one machine to another.
### END INIT INFO

In the above example, only lines starting with # chkconfig and # description are mandatory, so you might not find the rest in different init files. The text enclosed between the BEGIN INIT INFO and END INIT INFO lines is called the Linux Standard Base (LSB) header.
If specified, LSB headers contain directives defining the service description, dependencies, and default runlevels. What follows is an overview of analytic tasks aiming to collect the data needed for a new unit file. The postfix init script is used as an example. 17.9. Finding the systemd service description You can find descriptive information about the script on the line starting with
#description. Use this description together with the service name in the Description option in the [Unit] section of the unit file. 17.10. Finding the systemd service dependencies The LSB header might contain several directives that form dependencies between services. Most of them are translatable to systemd unit options, see the following table: Table 17.4. Dependency options from the LSB header
17.11. Finding default targets of the service The line starting with #chkconfig contains three numerical values. The most important is the first number that represents the default runlevels in which the service is started. Map these runlevels to equivalent systemd targets. Then
list these targets in the WantedBy option in the [Install] section of the unit file. The other two values specified on the #chkconfig line represent startup and shutdown priorities of the init script. These values are interpreted by systemd if it loads the init script, but there is no unit file equivalent. 17.12. Finding files used by the service
Init scripts require loading a function library from a dedicated directory and allow importing configuration, environment, and PID files. Environment variables are specified on the line starting with #config in the init script header, which translates to the EnvironmentFile unit file option. The PID file specified on the #pidfile line is imported using the PIDFile option. The key information that is not included in the init script header is the path to the service executable, and potentially some other files required by the service. In previous versions of Red Hat Enterprise Linux, init scripts used a Bash case statement to define the behavior of the service on default actions, such as start, stop, or restart, as well as custom-defined actions. The following excerpt from the postfix init script shows the functions to be executed on service start:

conf_check() {
    [ -x /usr/sbin/postfix ] || exit 5
    [ -d /etc/postfix ] || exit 6
    [ -d /var/spool/postfix ] || exit 5
}

make_aliasesdb() {
    if [ "$(/usr/sbin/postconf -h alias_database)" == "hash:/etc/aliases" ]
    then
        # /etc/aliases.db might be used by other MTA, make sure nothing
        # has touched it since our last newaliases call
        [ /etc/aliases -nt /etc/aliases.db ] ||
            [ "$ALIASESDB_STAMP" -nt /etc/aliases.db ] ||
            [ "$ALIASESDB_STAMP" -ot /etc/aliases.db ] || return
        /usr/bin/newaliases
        touch -r /etc/aliases.db "$ALIASESDB_STAMP"
    else
        /usr/bin/newaliases
    fi
}

start() {
    [ "$EUID" != "0" ] && exit 4
    # Check that networking is up.
    [ ${NETWORKING} = "no" ] && exit 1
    conf_check
    # Start daemons.
    echo -n $"Starting postfix: "
    make_aliasesdb >/dev/null 2>&1
    [ -x $CHROOT_UPDATE ] && $CHROOT_UPDATE
    /usr/sbin/postfix start 2>/dev/null 1>&2 && success || failure $"$prog start"
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $lockfile
    echo
    return $RETVAL
}

The extensibility of the init script allowed specifying two custom functions, conf_check() and make_aliasesdb(). systemd supports only the
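Pulling the header analysis together, a unit file derived from this init script could be sketched as follows; the option values mirror the header fields shown above, but treat this as an illustrative translation rather than the definitive Red Hat version:

```ini
# postfix.service — hedged sketch derived from the init script header:
# description -> Description, pidfile -> PIDFile, runlevels 2345 -> multi-user.
[Unit]
Description=Postfix Mail Transfer Agent
After=network.target

[Service]
Type=forking
PIDFile=/var/spool/postfix/pid/master.pid
ExecStart=/usr/sbin/postfix start
ExecStop=/usr/sbin/postfix stop
ExecReload=/usr/sbin/postfix reload

[Install]
WantedBy=multi-user.target
```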
predefined actions, but enables executing custom executables with the ExecStartPre, ExecStartPost, and ExecStopPost options. 17.13. Modifying existing unit files Services installed on the system come with default unit files that are stored in the /usr/lib/systemd/system/ directory. Procedure
To modify properties, such as dependencies or timeouts, of a service that is handled by a SysV initscript, do not modify the initscript itself. Instead, create a systemd drop-in configuration file for the service. Then manage this service in the same way as a normal systemd service. For example, to extend the configuration of the network service, do not modify the /etc/rc.d/init.d/network initscript file. Instead, create a drop-in file in the /etc/systemd/system/network.service.d/ directory. 17.14. Extending the default unit configuration This section describes how to extend the default unit file with additional configuration options. Procedure
Example 17.1. Extending the httpd.service configuration To modify the httpd.service unit so that a custom shell script is automatically executed when starting the Apache service, perform the following steps.
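A hedged sketch of such a drop-in file; the directory follows the <unit>.d convention and the script path is a hypothetical example:

```ini
# /etc/systemd/system/httpd.service.d/custom_script.conf
# Drop-in sketch: run a custom script after Apache starts.
# The script path below is a hypothetical example.
[Service]
ExecStartPost=/usr/local/bin/custom-notification.sh
```

After creating the drop-in, run `systemctl daemon-reload` and restart the service for the change to take effect.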
The configuration files from drop-in configuration directories in /etc/systemd/system/ take precedence over the unit files in /usr/lib/systemd/system/. 17.15. Overriding the default unit configuration This section describes how to override the default unit configuration. Procedure
17.16. Changing the timeout limit You can specify a timeout value per service to prevent a malfunctioning service from freezing the system. Otherwise, the timeout is set by default to 90 seconds for normal services and to 300 seconds for SysV-compatible services. For example, to extend the timeout limit for the httpd service: Procedure
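A minimal sketch of a timeout drop-in, assuming the httpd service and a 10-minute limit as illustrative choices:

```ini
# /etc/systemd/system/httpd.service.d/extend_timeout.conf
# Drop-in sketch: raise the start timeout for this one service only.
# The 10-minute value is an illustrative choice.
[Service]
TimeoutStartSec=10min
```

Reload the systemd configuration with `systemctl daemon-reload` for the new limit to apply.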
17.17. Monitoring overridden units This section describes how to display an overview of overridden or modified unit files. Procedure
17.18. Working with instantiated units It is possible to instantiate multiple units from a single template configuration file at runtime. The "@" character is used to mark the template and to associate units with it. Instantiated units can be started from another unit file (using template_name@instance_name.service Where template_name stands for the name of the template configuration file. Replace instance_name with the name for the unit instance. Several instances can point to the same template file with configuration options common for all instances of the unit. Template unit name has the form of: unit_name@.service For example, the following Wants= first makes systemd search for given service units. If no such units are found, the part between "@" and the type suffix is ignored and
systemd searches for the template unit file. For example, the getty@.service template contains the following directives:

[Unit]
Description=Getty on %I
…

[Service]
ExecStart=-/sbin/agetty --noclear %I $TERM
…

When getty@ttyA.service and getty@ttyB.service are instantiated from the above template, Description= is resolved as Getty on ttyA and Getty on ttyB. 17.19. Important unit specifiers Wildcard characters, called unit specifiers, can be used in any unit configuration file. Unit specifiers substitute certain unit parameters and are interpreted at runtime. The following table lists unit specifiers that are particularly useful for template units. Table 17.5. Important unit specifiers
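The template mechanism can also be sketched with a custom template unit; the unit name, queue argument, and executable here are entirely hypothetical:

```ini
# /etc/systemd/system/worker@.service — hypothetical template unit
[Unit]
Description=Worker instance %i

[Service]
# %i expands to the text between "@" and the suffix of the instance name,
# so "worker@alpha.service" runs: /usr/local/bin/worker --queue alpha
ExecStart=/usr/local/bin/worker --queue %i

[Install]
WantedBy=multi-user.target
```

You would then start instances such as `systemctl start worker@alpha.service` and `systemctl start worker@beta.service` from the single template.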
For a complete list of unit specifiers, see the systemd.unit(5) man page. 17.20. Additional resources
Chapter 18. Optimizing systemd to shorten the boot time There is a list of systemd unit files that are enabled by default. System services that are defined by these unit files are automatically run at boot, which influences the boot time. This section describes:
18.1. Examining system boot performance To examine system boot performance, you can use the systemd-analyze command. For a complete list and detailed description of all options, see the systemd-analyze man page. Prerequisites
Procedure $ systemctl list-unit-files --state=enabled Analyzing overall boot time Procedure
$ systemd-analyze Analyzing unit initialization timeProcedure
$ systemd-analyze blame The output lists the units in descending order according to the time they took to initialize during the last successful boot. Identifying critical unitsProcedure
$ systemd-analyze critical-chain The output highlights in red the units that critically slow down the boot. Figure 18.1. The output of the systemd-analyze critical-chain command 18.2. A guide to selecting services that can be safely disabled If the boot time of your system is long, you can shorten it by disabling some of the services that are enabled on boot by default. To list such services, run: $ systemctl list-unit-files --state=enabled To disable a service, run: # systemctl disable service_name However, certain services must stay enabled so that your operating system remains secure and functions the way you need. You can use the table below as a guide to selecting the services that you can safely disable. The table lists all services enabled by default on a minimal installation of Red Hat Enterprise Linux, and for each service it states whether this service can be safely disabled. The table also provides more information about the circumstances under which the service can be disabled, or the reason why you should not disable the service. Table 18.1. Services enabled by default on a minimal installation of RHEL
To find more information about a service, you can run one of the following commands: $ systemctl cat <service_name> $ systemctl help <service_name> The systemctl cat command provides the content of the service file and all applicable drop-in files. For more information on drop-in files, see the chapter on working with systemd unit files. The systemctl help command shows the man page of the respective service. 18.3. Additional resources
Chapter 19. Introduction to managing user and group accountsThe control of users and groups is a core element of Red Hat Enterprise Linux (RHEL) system administration. Each RHEL user has distinct login credentials and can be assigned to various groups to customize their system privileges. 19.1. Introduction to users and groups A user who creates a file is the owner of that file and the group owner of that file. The file is assigned separate read, write, and execute permissions for the owner, the group, and those outside that group. The file owner can be
changed only by the root user, and access permissions can be changed by both the root user and the file owner. Each user is associated with a unique numerical identification number called user ID (UID). Each group is associated with a group ID (GID). Users within a group share the same permissions to read, write, and execute files owned by that group. 19.2. Configuring reserved user and group IDs RHEL reserves user and group IDs below 1000 for system users and groups. You can find the reserved user and group IDs in the uidgid file provided by the setup package: cat /usr/share/doc/setup*/uidgid It is recommended to assign IDs to new users and groups starting at 5000, as the reserved range can increase in the future. To make the IDs assigned to new users start at 5000 by default, modify the UID_MIN and GID_MIN parameters in the /etc/login.defs file. Procedure To modify and make the IDs assigned to new users start at 5000 by default:
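The change can be sketched as an excerpt of /etc/login.defs (assuming that file, which is standard on RHEL; existing accounts keep the IDs they already have):

```ini
# /etc/login.defs (excerpt) — raise the starting IDs for new users and groups
# so that future expansions of the reserved range do not collide.
UID_MIN 5000
GID_MIN 5000
```

Users and groups created after this change receive IDs of 5000 or higher by default.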
19.3. User private groups RHEL uses the user private group (UPG) system configuration, which makes UNIX groups easier to manage. A user private group is created whenever a new user is added to the system. The user private group has the same name as the user for which it was created and that user is the only member of the user private group. UPGs simplify the collaboration on a project between multiple users. In addition, UPG system configuration makes it safe to set default permissions for a newly created file or directory, as it allows both the user, and the group this user is a part of, to make modifications to the file or directory. A list of all groups is stored in the /etc/group configuration file. Chapter 20. Managing user accounts in the web console The RHEL web console offers a graphical interface that enables you to execute a wide range of administrative tasks without accessing your terminal directly. For example, you can add, edit or remove system user accounts. After reading this section, you will know:
Prerequisites
20.1. System user accounts managed in the web console With user accounts displayed in the RHEL web console you can:
The RHEL web console displays all user accounts located in the system. Therefore, you can see at least one user account just after the first login to the web console. After logging into the RHEL web console, you can perform the following operations:
20.2. Adding new accounts using the web console Use the following steps for adding user accounts to the system and setting administration rights to the accounts through the RHEL web console. Procedure
20.3. Enforcing password expiration in the web console By default, user account passwords are set to never expire. You can set system passwords to expire after a defined number of days. When the password expires, the next login attempt will prompt for a password change. Procedure
Verification steps
20.4. Terminating user sessions in the web console A user creates user sessions when logging into the system. Terminating user sessions means logging the user out from the system. It can be helpful if you need to perform administrative tasks sensitive to configuration changes, for example, system upgrades. In each user account in the RHEL 8 web console, you can terminate all sessions for the account except for the web console session you are currently using. This prevents you from losing access to your system. Procedure
Chapter 21. Managing users from the command line You can manage users and groups using the command-line interface (CLI). This enables you to add, remove, and modify users and user groups in a Red Hat Enterprise Linux environment. 21.1. Adding a new user from the command line This section describes how to use the useradd utility to add a new user. Prerequisites
Procedure
Verification steps
Additional resources
21.2. Adding a new group from the command line This section describes how to use the groupadd utility to add a new group. Prerequisites
Procedure
Verification steps
Additional resources
21.3. Adding a user to a supplementary group from the command line You can add a user to a supplementary group to manage permissions or enable access to certain files or devices. Prerequisites
Procedure
Verification steps
21.4. Creating a group directory Under the UPG system configuration, you can apply the set-group identification permission (setgid bit) to a directory. The setgid bit makes managing group projects that share a directory simpler, because any files created within the directory are owned by the group that owns the directory. The following section describes how to create group directories. Prerequisites
Procedure
Verification steps
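The group-directory mechanism above can be sketched as follows; the sketch runs against a temporary directory so no root privileges or dedicated group are needed, and new files simply inherit the current user's primary group:

```shell
#!/bin/sh
# Sketch: create a shared directory with the setgid bit so that files
# created inside inherit the directory's group owner.
dir=$(mktemp -d)/shared
mkdir "$dir"
chmod 2775 "$dir"          # 2 = setgid bit, 775 = rwxrwxr-x
stat -c '%a' "$dir"        # prints 2775
touch "$dir/report.txt"    # new file inherits the directory's group
```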
Chapter 22. Editing user groups using the command line A user belongs to a certain set of groups that allow a logical collection of users with similar access to files and folders. You can edit the primary and supplementary user groups from the command line to change the user's permissions. 22.1. Primary and supplementary user groups A group is an entity which ties together multiple user accounts for a common purpose, such as granting access to particular files. On Linux, user groups can act as primary or supplementary. Primary and supplementary groups have the following properties: Primary group
22.2. Listing the primary and supplementary groups of a user You can list the groups of users to see which primary and supplementary groups they belong to. Procedure
22.3. Changing the primary group of a user You can change the primary group of an existing user to a new group. Prerequisites:
Procedure
Verification steps
22.4. Adding a user to a supplementary group from the command line You can add a user to a supplementary group to manage permissions or enable access to certain files or devices. Prerequisites
Procedure
Verification steps
22.5. Removing a user from a supplementary group You can remove an existing user from a supplementary group to limit their permissions or access to files and devices. Prerequisites
Procedure
Verification steps
22.6. Changing all of the supplementary groups of a user You can overwrite the list of supplementary groups that you want the user to remain a member of. Prerequisites
Procedure
Verification steps
Chapter 23. Managing sudo access System administrators can grant sudo access to allow non-root users to execute administrative commands. 23.1. User authorizations in sudoers The /etc/sudoers file specifies which users can run which commands using the sudo command. When a user tries to use sudo, the system checks this file for a matching rule. The default /etc/sudoers file introduces its rule section as follows: ## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems). You can use the
following format to create new username hostname=path/to/command Where:
You can replace any of these variables with the ALL keyword. Be careful with overly permissive rules, such as rules that allow every user to run every command as root, because they effectively grant full administrative access. You can specify arguments negatively using the ! operator. Avoid using negative rules for commands because users can overcome such rules by renaming the commands.
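A hedged example of a rule in this format; the user name, host name, and command are hypothetical:

```text
# <username> <hostname>=<path/to/command>
# Allows user "john" to restart the httpd service on host "server.example.com"
# without being prompted for the root password.
john server.example.com = /usr/bin/systemctl restart httpd.service
```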
The system reads the /etc/sudoers file and the files in the /etc/sudoers.d directory to determine sudo privileges. The preferred way of adding new rules is to create new files in the /etc/sudoers.d directory, which the /etc/sudoers file includes with the following line: #includedir /etc/sudoers.d Note that the number sign (#) at the beginning of this line is part of the syntax and does not mean the line is a comment. 23.2. Granting sudo access to a user System administrators can grant sudo access to allow users to perform administrative commands. When users need to perform an administrative command, they can precede that command with sudo. The command is then executed as if they were the root user. Be aware of the following limitations:
Prerequisites
Procedure
23.3. Enabling unprivileged users to run certain commands You can configure a policy that allows an unprivileged user to run certain commands on a specific workstation. To configure this policy, you need to create and edit a file in the /etc/sudoers.d directory. Prerequisites
Procedure
23.4. Additional resources
Chapter 24. Changing and resetting the root password If the existing root password is no longer satisfactory or is forgotten, you can change or reset it both as the root user and as a non-root user. 24.1. Changing the root password as the root user This section describes how to use the passwd command to change the root password as the root user. Prerequisites
Procedure
24.2. Changing or resetting the forgotten root password as a non-root user This section describes how to use the passwd command to change or reset the forgotten root password as a non-root user. Prerequisites
Procedure
24.3. Resetting the root password on boot If you are unable to log in as a non-root user or do not belong to the administrative wheel group, you can reset the root password on boot. Procedure
Verification steps
Chapter 25. Managing file permissions File permissions control the ability of user and group accounts to view, modify, access, and execute the contents of the files and directories. Every file or directory has three levels of ownership:
Each level of ownership can be assigned the following permissions:
Note that the execute permission for a file allows you to execute that file. The execute permission for a directory allows you to access the contents of the directory, but not execute it. When a new file or directory is created, the default set of permissions is automatically assigned to it. The default permissions for a file or directory are based on two factors:
25.1. Base file permissionsWhenever a new file or directory is created, a base permission is automatically assigned to it. Base permissions for a file or directory can be expressed in symbolic or octal values.
The base permission for a directory is 777 (drwxrwxrwx), which grants everyone the permissions to read, write, and execute. Note that individual files within a directory can have their own permissions that might prevent you from editing them, despite having unrestricted access to the directory. The base permission for a file is 666 (-rw-rw-rw-), which grants everyone the permissions to read and write. Example 25.1. Permissions for a file If a file has the following permissions: $ ls -l
-rwxrw----. 1 sysadmins sysadmins 2 Mar 2 08:43 file
Example 25.2. Permissions for a directory If a directory has the following permissions: $ ls -dl directory drwxr-----. 1 sysadmins sysadmins 2 Mar 2 08:43 directory
The base permission that is automatically assigned to a file or directory is not the default permission the file or directory ends up with. When you create a file or directory, the base permission is altered by the umask. The combination of the base permission and the umask creates the default permission for files and directories. 25.2. User file-creation mode mask The user file-creation mode mask (umask) is a variable that controls how file permissions are set for newly created files and directories. The umask automatically removes permissions from the base permission value to increase the overall security of a Linux system. The umask can be expressed in symbolic or octal values.
The default umask for a standard user is 0002. The default umask for the root user is 0022. The first digit of the umask represents special permissions (setuid, setgid, and the sticky bit). The last three digits of the umask represent the permissions that are removed from the user owner (u), group owner (g), and others (o) respectively. Example 25.3. Applying the umask when creating a file The following example illustrates how the umask is applied when a file is created. 25.3. Default file permissions The default permissions are set automatically for all newly created files and directories. The value of the default permissions is determined by applying the umask to the base permission. Example 25.4. Default permissions for a directory created by a standard user When a standard user creates a new directory, the umask is set to 0002.
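The interaction of base permission and umask can be sketched in a temporary directory; the 0022 umask here is an illustrative choice:

```shell
#!/bin/sh
# Sketch: base permission minus umask = default permission.
tmp=$(mktemp -d)
cd "$tmp"
umask 0022                 # remove write permission for group and others
touch file                 # file base 666 -> default 644 (rw-r--r--)
mkdir dir                  # directory base 777 -> default 755 (rwxr-xr-x)
stat -c '%a %n' file dir   # prints "644 file" and "755 dir"
```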
This means that the directory owner and the group can list the contents of the directory, create, delete, and edit items within the directory, and descend into it. Other users can only list the contents of the directory and descend into it. Example 25.5. Default permissions for a file created by a standard user When a standard user creates a new
file, the umask is set to 0002.
This means that the file owner and the group can read and edit the file, while other users can only read the file. Example 25.6. Default permissions for a directory created by the root user When a root user creates a new directory, the umask is set to 0022.
This means that the directory owner can list the contents of the directory, create, delete, and edit items within the directory, and descend into it. The group and others can only list the contents of the directory and descend into it. Example 25.7. Default permissions for a file created by the root user When a root user creates a new
file, the umask is set to 0022.
This means that the file owner can read and edit the file, while the group and others can only read the file. For security reasons, regular files cannot have execute permissions by default, even if the umask allows it. 25.4. Changing file permissions using symbolic values You can use the chmod utility with symbolic values to change file or directory permissions. You can assign the following permissions:
Permissions can be assigned to the following levels of ownership:
To add or remove permissions you can use the following signs:
Procedure
Verification steps
Example 25.8. Changing permissions for files and directories
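Symbolic chmod usage can be sketched on a throwaway file; the permission combinations chosen here are illustrative:

```shell
#!/bin/sh
# Sketch: symbolic chmod — set and adjust permissions per ownership level.
f=$(mktemp)
chmod u=rw,g=r,o= "$f"     # owner read/write, group read, others nothing
stat -c '%a' "$f"          # prints 640
chmod g+w "$f"             # add write permission for the group owner
stat -c '%a' "$f"          # prints 660
rm -f "$f"
```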
25.5. Changing file permissions using octal values You can use the chmod utility with octal values to change file or directory permissions. Procedure
Chapter 26. Managing the umask You can use the umask utility to manage the default permissions for newly created files and directories. 26.1. Displaying the current value of the umask You can use the umask command to display the current value of the umask. Procedure
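Displaying the umask can be sketched as follows; the sketch first sets a known value so the printed output is predictable:

```shell
#!/bin/sh
# Sketch: display the current umask in octal and symbolic form.
umask 0022     # set a known value for the demonstration
umask          # prints 0022
umask -S       # prints u=rwx,g=rx,o=rx (the permissions that are KEPT)
```

Note that the symbolic form shows the permissions that are retained, while the octal form shows the permissions that are removed.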
26.2. Displaying the default bash umask There are a number of shells you can use, such as bash, ksh, zsh, and tcsh. Those shells can behave as login or non-login shells. To determine whether you are executing a command in a login or a non-login shell, use the echo $0 command. Example 26.1. Determining if you are working in a login or a non-login bash shell
Procedure
26.3. Setting the umask using symbolic values You can use the umask utility with symbolic values to set the umask for the current shell session. You can assign the following permissions:
Permissions can be assigned to the following levels of ownership:
To add or remove permissions you can use the following signs:
Procedure
26.4. Setting the umask using octal values You can use the umask utility with octal values to set the umask for the current shell session. Procedure
26.5. Changing the default umask for the non-login shell You can change the default umask for standard users by modifying the /etc/bashrc file. Prerequisites
Procedure
26.6. Changing the default umask for the login shell You can change the default umask for the root user by modifying the /etc/profile file. Prerequisites
Procedure
26.7. Changing the default umask for a specific user You can change the default umask for a specific user by modifying the .bashrc file for that user. Procedure
26.8. Setting default permissions for newly created home directories You can change the permission modes for home directories of newly created users by modifying the /etc/login.defs file. Procedure
Chapter 27. Using dnstap in RHEL The dnstap utility provides an advanced way to monitor and log details of incoming DNS queries. 27.1. Recording DNS queries using dnstap in RHEL Network administrators can record DNS queries to collect website or IP address information along with the domain health. Prerequisites
If you already have a Procedure Following are the steps to record DNS queries:
Chapter 28. Managing the Access Control List Each file and directory can only have one user owner and one group owner at a time. If you want to grant a user permissions to access specific files or directories that belong to a different user or group while keeping other files and directories private, you can utilize Linux Access Control Lists (ACLs). 28.1. Displaying the current Access Control List You can use the getfacl utility to display the current ACL of a file or directory. Procedure
28.2. Setting the Access Control List You can use the setfacl utility to set the ACL for a file or directory. Prerequisites
Procedure
# setfacl -m u:username:symbolic_value file-name
Replace username with the name of the user, symbolic_value with a symbolic value, and file-name with the name of the file or directory. For more information, see the setfacl man page. Example 28.1. Modifying permissions for a group project The following example describes how to modify permissions for the
Procedure
# setfacl -m u:andrew:rw- group-project
# setfacl -m u:susan:--- group-project
Verification steps
Chapter 29. Using the Chrony suite to configure NTP Accurate timekeeping is important for a number of reasons in IT. In networking, for example, accurate time stamps in packets and logs are required. In Linux systems, the NTP protocol is implemented by a daemon running in user space. The user space daemon updates the system clock running in the kernel. The system clock can keep time by using various clock sources. Usually, the Time Stamp Counter (TSC) is used. The TSC is a CPU register which counts the number of cycles since it was last reset. It is very fast, has a high resolution, and there are no interruptions. Starting with Red Hat Enterprise Linux 8, the NTP protocol is implemented by the chronyd daemon, available from the repositories in the chrony package. The following sections describe how to use the chrony suite to configure NTP. 29.1. Introduction to chrony suite chrony is an implementation of the Network Time Protocol (NTP).
chrony performs well in a wide range of conditions, including intermittent network connections, heavily congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature), and systems that do not run continuously or run on a virtual machine. Typical accuracy between two machines synchronized over the Internet is within a few milliseconds, and for machines on a LAN, within tens of microseconds. Hardware timestamping or a hardware reference clock may improve the accuracy between two machines synchronized to a sub-microsecond level. chrony consists of chronyd, a daemon that runs in user space, and chronyc, a command line program for monitoring and controlling chronyd. 29.2. Using chronyc to control chronyd This section describes how to control the chronyd daemon by using the chronyc command line utility. Procedure
Changes made using chronyc are not permanent; they will be lost after a chronyd restart. 29.3. Migrating to chrony In Red Hat Enterprise Linux 7, users could choose between ntp and
chrony to ensure accurate timekeeping. For differences between ntp and chrony, Starting with Red Hat Enterprise Linux 8, ntp is no longer supported. chrony is enabled by default. For this reason, you might need to migrate from ntp to chrony. Migrating from ntp to chrony is straightforward in most cases. The corresponding names of the programs, configuration files and services are: Table 29.1. Corresponding names of the programs, configuration files and services when migrating from ntp to chrony
The ntpdate and sntp utilities, which are included in the ntp distribution, can be replaced with chronyd using the -q option:
# chronyd -q 'server ntp.example.com iburst'
2018-05-18T12:37:43Z chronyd version 3.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG)
2018-05-18T12:37:43Z Initial frequency -2.630 ppm
2018-05-18T12:37:48Z System clock wrong by 0.003159 seconds (step)
2018-05-18T12:37:48Z chronyd exiting
The
ntpstat utility, which was previously included in the ntp package, is now provided by the chrony package. 29.3.1. Migration script A Python script called ntp2chrony.py can convert an existing ntp configuration to a chrony configuration. By default, the script does not overwrite any files. If An example of an invocation of the script with the default settings:
# python3 /usr/share/doc/chrony/ntp2chrony.py -b -v
Reading /etc/ntp.conf
Reading /etc/ntp/crypto/pw
Reading /etc/ntp/keys
Writing /etc/chrony.conf
Writing /etc/chrony.keys
The only directive ignored in this case is The generated Chapter 30. Using Chrony The following sections describe how to install, start, and stop chronyd. 30.1. Managing chrony The following procedure describes how to install, start, stop, and check the status of chronyd. Procedure
30.2. Checking if chrony is synchronized The following procedure describes how to check if chrony is synchronized with the use of the Procedure
Additional resources
30.3. Manually adjusting the System Clock The following procedure describes how to manually adjust the System Clock. Procedure
If the 30.4. Setting up chrony for a system in an isolated network For a network that is never
connected to the Internet, one computer is selected to be the master timeserver. The other computers are either direct clients of the master, or clients of clients. On the master, the drift file must be manually set with the average rate of drift of the system clock. If the master is rebooted, it will obtain the time from surrounding systems and calculate an average to set its system clock. Thereafter it resumes applying adjustments based on the drift file. The drift file will be updated
automatically when the The following procedure describes how to set up chrony for a system in an isolated network. Procedure
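A minimal /etc/chrony.conf for the master in such a network might look like this (a sketch; the subnet is an example and the stratum value is arbitrary):

```
driftfile /var/lib/chrony/drift   # stores the measured clock drift
local stratum 8                   # serve time even when unsynchronized
manual                            # allow manual time input with chronyc settime
allow 192.0.2.0/24                # permit NTP clients from this example subnet
```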
On the client systems which are not to be direct clients of the master, the In an isolated network, you can also use the To allow multiple servers in the network to use the same local configuration and to be synchronized to one another, without confusing clients that poll more than one server, use the
30.5. Configuring remote monitoring access chronyc can access
By default, chronyc connects to the Unix domain socket. The default path is Only the following monitoring commands, which do not affect the behavior of
The set of hosts from which All other commands are allowed only through the Unix domain socket. When sent over the network, The following procedure describes how to access chronyd remotely with chronyc. Procedure
Additional resources
30.6. Managing time synchronization using RHEL System Roles You can manage time synchronization on multiple target machines using the timesync role. Note that using the timesync role also replaces the configuration of the given or detected provider service on the managed host. The following example shows how to apply the timesync role in a situation with just one pool of servers. Example 30.1. An example playbook applying the timesync role for a single pool of servers
---
- hosts: timesync-test
  vars:
    timesync_ntp_servers:
      - hostname: 2.rhel.pool.ntp.org
        pool: yes
        iburst: yes
  roles:
    - rhel-system-roles.timesync
For a detailed reference on 30.7. Additional resources
Chapter 31. Chrony with HW timestamping Hardware timestamping is a feature supported in some Network Interface Controllers (NICs) which provides accurate timestamping of incoming and outgoing packets. Another protocol for time synchronization that uses hardware timestamping is the Precision Time Protocol (PTP). Unlike The following sections describe how to:
31.1. Verifying support for hardware timestamping To verify that hardware timestamping is supported by an interface, use the ethtool -T command. Example 31.1. Verifying support for hardware timestamping on a specific interface
# ethtool -T eth0
Output:
Timestamping parameters for eth0:
Capabilities:
        hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
        software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
        hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
        software-receive      (SOF_TIMESTAMPING_RX_SOFTWARE)
        software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
        hardware-raw-clock    (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
        off                   (HWTSTAMP_TX_OFF)
        on                    (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
        none                  (HWTSTAMP_FILTER_NONE)
        all                   (HWTSTAMP_FILTER_ALL)
        ptpv1-l4-sync         (HWTSTAMP_FILTER_PTP_V1_L4_SYNC)
        ptpv1-l4-delay-req    (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ)
        ptpv2-l4-sync         (HWTSTAMP_FILTER_PTP_V2_L4_SYNC)
        ptpv2-l4-delay-req    (HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ)
        ptpv2-l2-sync         (HWTSTAMP_FILTER_PTP_V2_L2_SYNC)
        ptpv2-l2-delay-req    (HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ)
        ptpv2-event           (HWTSTAMP_FILTER_PTP_V2_EVENT)
        ptpv2-sync            (HWTSTAMP_FILTER_PTP_V2_SYNC)
        ptpv2-delay-req       (HWTSTAMP_FILTER_PTP_V2_DELAY_REQ)
31.2. Enabling hardware timestamping To enable hardware timestamping, use the hwtimestamp directive in the /etc/chrony.conf file. Example 31.2. Enabling hardware timestamping by using the hwtimestamp directive
hwtimestamp eth0
hwtimestamp eth2
hwtimestamp *
31.3. Configuring client polling interval The default range of a polling interval (64-1024 seconds) is recommended for servers on the Internet. For local servers and hardware timestamping, a shorter polling interval needs to be configured in order to minimize the offset of the system clock. The following directive in /etc/chrony.conf specifies a local NTP server using a polling interval of one second:
server ntp.local minpoll 0 maxpoll 0
31.4. Enabling interleaved mode
server ntp.local minpoll 0 maxpoll 0 xleave
31.5. Configuring server for large number of clients The default server configuration allows at most a few thousand clients to use the interleaved mode concurrently. To
configure the server for a larger number of clients, increase the clientloglimit directive in /etc/chrony.conf. This directive specifies the maximum size of memory allocated for logging of clients' access on the server:
clientloglimit 100000000
31.6. Verifying hardware timestamping To verify that the interface has successfully enabled hardware timestamping, check the system log. The log should
contain a message from chronyd for each interface with successfully enabled hardware timestamping. Example 31.3. Log messages for interfaces with enabled hardware timestamping
chronyd[4081]: Enabled HW timestamping on eth0
chronyd[4081]: Enabled HW timestamping on eth2
When Example 31.4. Reporting the transmit, receive timestamping and interleaved mode for each NTP source
# chronyc ntpdata
Output:
Remote address  : 203.0.113.15 (CB00710F)
Remote port     : 123
Local address   : 203.0.113.74 (CB00714A)
Leap status     : Normal
Version         : 4
Mode            : Server
Stratum         : 1
Poll interval   : 0 (1 seconds)
Precision       : -24 (0.000000060 seconds)
Root delay      : 0.000015 seconds
Root dispersion : 0.000015 seconds
Reference ID    : 47505300 (GPS)
Reference time  : Wed May 03 13:47:45 2017
Offset          : -0.000000134 seconds
Peer delay      : 0.000005396 seconds
Peer dispersion : 0.000002329 seconds
Response time   : 0.000152073 seconds
Jitter asymmetry: +0.00
NTP tests       : 111 111 1111
Interleaved     : Yes
Authenticated   : No
TX timestamping : Hardware
RX timestamping : Hardware
Total TX        : 27
Total RX        : 27
Total valid RX  : 27
Example 31.5. Reporting the stability of NTP measurements
# chronyc sourcestats
With hardware timestamping enabled, the stability of the NTP measurements is reported in the Std Dev column of the output:
210 Number of sources = 1
Name/IP Address  NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
ntp.local        12   7    11     +0.000      0.019    +0ns     49ns
31.7. Configuring PTP-NTP bridge If a highly accurate Precision Time Protocol (PTP) grandmaster is available in the network, a computer may be dedicated to operate as a PTP slave and a stratum-1 NTP server. Configure the ptp4l and phc2sys programs from the linuxptp package. Configure chronyd to provide the system time using the other interface. Example 31.6. Configuring chronyd to provide the system time using the other interface
bindaddress 203.0.113.74
hwtimestamp eth2
local stratum 1
Chapter 32. Achieving some settings previously supported by NTP in chrony Some settings that were supported by ntp in the previous major version of Red Hat Enterprise Linux are not supported by chrony. The following sections list such settings, and describe ways to achieve them on a system with chrony. 32.1. Monitoring by ntpq and ntpdc To monitor the status of the system clock synchronized by
Example 32.1. Using the tracking command
$ chronyc -n tracking
Reference ID    : 0A051B0A (10.5.27.10)
Stratum         : 2
Ref time (UTC)  : Thu Mar 08 15:46:20 2018
System time     : 0.000000338 seconds slow of NTP time
Last offset     : +0.000339408 seconds
RMS offset      : 0.000339408 seconds
Frequency       : 2.968 ppm slow
Residual freq   : +0.001 ppm
Skew            : 3.336 ppm
Root delay      : 0.157559142 seconds
Root dispersion : 0.001339232 seconds
Update interval : 64.5 seconds
Leap status     : Normal
Example 32.2. Using the ntpstat utility
$ ntpstat
synchronised to NTP server (10.5.27.10) at stratum 2
   time correct to within 80 ms
   polling server every 64 s
32.2. Using authentication mechanism based on public key cryptography In Red Hat Enterprise Linux 7, ntp supported Autokey, which is an authentication mechanism based on public key cryptography. In Red Hat Enterprise Linux 8,
32.3. Using ephemeral symmetric associations In Red Hat Enterprise Linux 7, Note that using the client/server mode enabled by the 32.4. Multicast/broadcast client Red Hat Enterprise Linux 7 supported the broadcast/multicast In Red Hat Enterprise Linux 8, There are several options for migrating from an
Chapter 33. Overview of Network Time Security (NTS) in chrony
Network Time Security (NTS) is an authentication mechanism for the Network Time Protocol (NTP), designed to scale to a substantial number of clients. It verifies that the packets received from the server machines are unaltered while moving to the client machine. Network Time Security (NTS) includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. 33.1. Enabling Network Time Security (NTS) in the client configuration file By default, Network Time Security (NTS) is not enabled. You can enable NTS in the /etc/chrony.conf file. Prerequisites
Procedure In the client configuration file:
Verification
Additional resources
33.2. Enabling Network Time Security (NTS) on the server If you run your own Network Time Protocol (NTP) server, you can enable the server Network Time Security (NTS) support to enable its clients to synchronize securely. If the NTP server is a client of other servers, that is, it is not a Stratum 1 server, it should use NTS or symmetric key for its synchronization. Prerequisites
Procedure
Verification
Chapter 34. Using secure communications between two systems with OpenSSH SSH (Secure Shell) is a protocol which provides secure communications between two systems using a client-server architecture and allows users to log in to server host systems remotely. Unlike other remote communication protocols, such as FTP or Telnet, SSH encrypts the login session, which prevents intruders from collecting unencrypted passwords from the connection. Red Hat Enterprise Linux includes the basic 34.1. SSH and OpenSSH SSH (Secure Shell) is a program for logging into a remote machine and executing commands on that machine. The SSH protocol provides secure encrypted communications between two untrusted hosts over an insecure network. You can also forward X11 connections and arbitrary TCP/IP ports over the secure channel. The SSH protocol mitigates security threats, such as interception of communication between two systems and impersonation of a particular host, when you use it for remote shell login or file copying. This is because the SSH client and server use digital signatures to verify their identities. Additionally, all communication between the client and server systems is encrypted. A host key authenticates hosts in the SSH protocol. Host keys are cryptographic keys that are generated automatically when OpenSSH is first installed, or when the host boots for the first time. OpenSSH is an implementation of the SSH protocol supported by Linux, UNIX, and similar operating systems. It includes the core files necessary for both the OpenSSH client and server. The OpenSSH suite consists of the following user-space tools:
Two versions of SSH currently exist: version 1, and the newer version 2. The OpenSSH suite in RHEL supports only SSH version 2. It has an enhanced key-exchange algorithm that is not vulnerable to exploits known in version 1. OpenSSH, as one of the core cryptographic subsystems of RHEL, uses system-wide crypto policies. This ensures that weak cipher suites and
cryptographic algorithms are disabled in the default configuration. To modify the policy, the administrator must either use the The OpenSSH suite uses two sets of configuration files: one for client programs (that is, System-wide SSH configuration information is stored in the 34.2. Configuring and starting an OpenSSH server Use the following procedure for a basic configuration that might be
required for your environment and for starting an OpenSSH server. Note that after the default RHEL installation, the Prerequisites
Procedure
Verification
Additional resources
34.3. Setting an OpenSSH server for key-based authentication To improve system security, enforce key-based authentication by disabling password authentication on your OpenSSH server. Prerequisites
Procedure
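The directives involved can be sketched as follows in /etc/ssh/sshd_config (verify that key-based login works for your account before applying, then reload the sshd service):

```
# /etc/ssh/sshd_config - disable password logins once
# key-based authentication is confirmed to work.
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```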
Additional resources
34.4. Generating SSH key pairs Use this procedure to generate an SSH key pair on a local system and to copy the generated public key to an OpenSSH server. If the server is configured accordingly, you can log in to the OpenSSH server without providing any password. If you complete the following steps as root, only root is able to use the keys. Procedure
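The key-generation step can be sketched as follows (non-interactive and in a scratch directory for illustration only; in practice, keep the key under ~/.ssh and set a passphrase instead of passing -N ''):

```shell
keydir=$(mktemp -d)
# Generate an ED25519 key pair; -q suppresses output, -N '' sets an
# empty passphrase (illustration only), -C adds a comment.
ssh-keygen -q -t ed25519 -N '' -f "$keydir/id_ed25519" -C "user@example.com"
ls "$keydir"
```

The public key (id_ed25519.pub here) is the file you copy to the server, typically with ssh-copy-id.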
If you reinstall your system and want to keep previously generated key pairs, back up the Verification
Additional resources
34.5. Using SSH keys stored on a smart card Red Hat Enterprise Linux enables you to use RSA and ECDSA keys stored on a smart card on OpenSSH clients. Use this procedure to enable authentication using a smart card instead of using a password. Prerequisites
Procedure
If you skip the $ ssh -i pkcs11: example.com
Enter PIN for 'SSH key':
[example.com] $ 34.6. Making OpenSSH more secure The following tips help you to increase security when using OpenSSH. Note that changes in the # systemctl reload sshd The majority of security hardening configuration changes reduce compatibility with clients that do not support up-to-date algorithms or cipher suites. Disabling insecure connection protocols
Enabling key-based authentication and disabling password-based authentication
Key types
Non-default port
No root login
Using the X Security extension
Restricting access to specific users, groups, or domains
Changing system-wide cryptographic policies
Additional resources
34.7. Connecting to a remote server using an SSH jump host Use this procedure for connecting your local system to a remote server through an intermediary server, also called a jump host. Prerequisites
Procedure
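Instead of passing -J on every invocation, the jump can be recorded in ~/.ssh/config, for example (the host names and user name are placeholders):

```
Host jumphost
    HostName jump1.example.com
    User johndoe

Host remote1
    HostName remote1.example.com
    User johndoe
    ProxyJump jumphost
```

After that, `ssh remote1` transparently connects through the jump host.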
You can specify more jump servers and you can also skip adding host definitions to the configuration file when you provide their complete host names, for example:
$ ssh -J jump1.example.com,jump2.example.com,jump3.example.com remote1.example.com
Change the host name-only notation in the previous command if the user names or SSH ports on the jump servers differ from the names and ports on the remote server, for example:
$ ssh -J johndoe@jump1.example.com:75,johndoe@jump2.example.com:75,:75 :220
Additional resources
34.8. Connecting to remote machines with SSH keys using ssh-agent To avoid entering a passphrase each time you initiate an SSH connection, you can use the ssh-agent utility to cache the private SSH key. Prerequisites
For more information, see Generating SSH key pairs. Procedure
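The agent workflow can be sketched like this (a throwaway key is generated for illustration; normally you would add an existing key such as ~/.ssh/id_ed25519 and enter its passphrase once):

```shell
eval "$(ssh-agent -s)" > /dev/null   # start the agent for this shell
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$keydir/key"
ssh-add "$keydir/key"                # cache the key in the agent
ssh-add -l                           # list fingerprints of cached keys
ssh-agent -k > /dev/null             # stop the agent when finished
rm -rf "$keydir"
```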
Verification
34.9. Additional resources
Chapter 35. Configuring a remote logging solution To ensure that logs from various machines in your environment are recorded centrally on a logging server, you can configure the Rsyslog application to record logs that fit specific criteria from the client system to the server. 35.1. The Rsyslog logging service The Rsyslog application, in combination
with the The In In Additional resources
35.2. Installing Rsyslog documentation The Rsyslog application has extensive online documentation that is available at https://www.rsyslog.com/doc/, but you can also install the Prerequisites
Procedure
Verification
35.3. Configuring a server for remote logging over TCP The Rsyslog application enables you to both run a logging server and configure individual systems to send their log files to the logging server. To use remote logging through TCP, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems. With the Rsyslog application, you can maintain a centralized logging system where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, you can configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues cannot be configured for connections using the UDP protocol. The By default, Prerequisites
Procedure
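A minimal server-side configuration can be sketched as follows (a fragment for a drop-in file such as /etc/rsyslog.d/remotelog.conf; port 514 is the conventional syslog port and is used here as an example):

```
# Load the module for TCP reception and listen on port 514.
module(load="imtcp")
input(type="imtcp" port="514")
```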
Your log server is now configured to receive and store log files from the other systems in your environment. Additional resources
35.4. Configuring remote logging to a server over TCP Follow this procedure to configure a system for forwarding log messages to a server over the TCP protocol. The Prerequisites
Procedure
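The forwarding action with a disk-assisted queue can be sketched like this (a fragment; the target host, queue file name, and values are examples):

```
*.* action(type="omfwd"
      queue.type="linkedlist"
      queue.filename="example_fwd"    # queue files on disk while the server is down
      queue.saveOnShutdown="on"       # preserve queued messages across restarts
      action.resumeRetryCount="-1"    # retry indefinitely
      target="server.example.com" port="514" protocol="tcp")
```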
Verification To verify that the client system sends messages to the server, follow these steps:
Additional resources
35.5. Configuring TLS-encrypted remote logging By default, Rsyslog sends remote-logging communication in the plain text format. If your scenario requires securing this communication channel, you can encrypt it using TLS. To use encrypted transport through TLS, configure both the server and the client. The server collects and analyzes the logs sent by one or more client systems. You can use either the If you have a separate system with higher security, for example, a system that is not connected to any network or has stricter authorizations, use the separate system as the certifying authority (CA). Prerequisites
Procedure
Verification To verify that the client system sends messages to the server, follow these steps:
Additional resources
35.6. Configuring a server for receiving remote logging information over UDP The Rsyslog application enables you to configure a system to receive logging information from remote systems. To use remote logging through UDP, configure both the server and the client. The receiving server collects and analyzes the logs sent by one or more client systems. By default, Follow this procedure to configure a server for collecting and analyzing logs sent by one or more client systems over the UDP protocol. Prerequisites
Procedure
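The UDP counterpart of the TCP setup is a short fragment (a sketch; 514 is the conventional syslog port):

```
# Load the module for UDP reception and listen on port 514.
module(load="imudp")
input(type="imudp" port="514")
```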
Additional resources
35.7. Configuring remote logging to a server over UDP Follow
this procedure to configure a system for forwarding log messages to a server over the UDP protocol. The Prerequisites
Procedure
Verification To verify that the client system sends messages to the server, follow these steps:
Additional resources
35.8. Load balancing helper in Rsyslog The The
action(type="omfwd" protocol="tcp" RebindInterval="250" target="example.com" port="514" …)
action(type="omfwd" protocol="udp" RebindInterval="250" target="example.com" port="514" …)
action(type="omrelp" RebindInterval="250" target="example.com" port="6514" …)
35.9. Configuring reliable remote logging With the Reliable Event Logging Protocol (RELP), you can send and receive syslog messages with a considerably reduced risk of message loss. Prerequisites
Procedure
Verification To verify that the client system sends messages to the server, follow these steps:
Additional resources
35.10. Supported Rsyslog modules To expand the functionality of the Rsyslog application, you can use specific modules. Modules provide additional inputs (Input Modules), outputs (Output Modules), and other functionalities. A module can also provide additional configuration directives that become available after you load the module. You can list the input and output modules installed on your system by entering the following command:
# ls /usr/lib64/rsyslog/{i,o}m*
You can view the list of all available 35.11. Additional resources
Chapter 36. Using the logging System Role As a system administrator, you can use the 36.1. The logging System Role With the To apply a The set of systems that you want to configure according to the playbook is defined in an inventory file. For more information on creating and using inventories, see How to build your inventory in Ansible documentation. Logging solutions provide multiple ways of reading logs and multiple logging outputs. For example, a logging system can receive the following inputs:
In addition, a logging system can have the following outputs:
With the 36.2. logging System Role parameters In a Currently, the only available logging system in the
Additional resources
36.3. Applying a local logging System RoleFollow these steps to prepare and apply an Ansible playbook to configure a logging solution on a set of separate machines. Each machine will record logs locally. Prerequisites
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for
automation based on Ansible. Ansible Engine contains command-line utilities such as RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the
You do not have to have the Procedure
Verification
36.4. Filtering logs in a local logging System Role You can deploy a logging solution which filters the logs based on the
Prerequisites
You do not have to have the Procedure
Verification
Additional resources
36.5. Applying a remote logging solution using the logging System Role Follow these steps to prepare and apply a Red Hat Ansible Core playbook to configure a remote logging solution. In this playbook, one or more clients take logs from Prerequisites
You do not have to have the Procedure
Verification
Additional resources
36.6. Using the logging System Role with TLS Transport Layer Security (TLS) is a cryptographic protocol designed to securely communicate over the computer network. As an administrator, you can use the 36.6.1. Configuring client logging with TLS You can use the This procedure configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network. Prerequisites
Procedure
36.6.2. Configuring server logging with TLS You can use the This procedure configures TLS on all hosts in the server group in the Ansible inventory. Prerequisites
Procedure
36.7. Using the logging System Roles with RELP Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss. The RELP sender transfers log entries in the form of commands and the receiver acknowledges them once they are processed. To ensure consistency, RELP stores the transaction number in each transferred command for any kind of message recovery. You can consider a remote logging system in between the RELP Client and RELP Server. The RELP Client transfers the logs to the remote logging system and the RELP Server receives all the logs sent by the remote logging system. Administrators can use the 36.7.1. Configuring client logging with RELP You can use the This procedure configures RELP on all hosts in the clients group in the Ansible inventory. Prerequisites
Procedure
36.7.2. Configuring server logging with RELP You can use the This procedure configures RELP on all hosts in the server group in the Ansible inventory. Prerequisites
Procedure
36.8. Additional resources
Chapter 37. Introduction to Python Python is a high-level programming language that supports multiple programming paradigms, such as object-oriented, imperative, functional, and procedural paradigms. Python has dynamic semantics and can be used for general-purpose programming. With Red Hat Enterprise Linux, many packages that are installed on the system, such as packages providing system tools, tools for data analysis, or web applications, are written in Python. To use these packages, you must have the 37.1. Python versions Two incompatible versions of Python are widely used, Python 2.x and Python 3.x. RHEL 8 provides the following versions of Python. Table 37.1. Python versions in RHEL 8
For details about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux 8 Application Streams Life Cycle. Each of the Python versions is distributed in a separate module and by design you can install multiple modules in parallel on the same system. The Always specify the version of Python when installing it, invoking it, or otherwise interacting with it. For example, use The unversioned As a system administrator, use Python 3 for the following reasons:
For developers, Python 3 has the following advantages over Python 2:
However, legacy software might require Python 2. System tools in Red Hat Enterprise Linux 8 use Python version 3.6 provided by the
internal platform-python package. Chapter 38. Installing and using Python In Red Hat Enterprise Linux 8, Python 3 is distributed in versions 3.6, 3.8, and 3.9, provided by the python36, python38, and python39 modules. Using the
unversioned 38.1. Installing Python 3 By design, you can install RHEL 8 modules in parallel, including the You can install Python 3.8 and Python 3.9, including packages built for either version, in parallel with Python 3.6 on the same system, with the exception of the Procedure
Verification steps
38.2. Installing additional Python 3 packages Packages
with add-on modules for Python 3.6 generally use the Procedure
38.3. Installing additional Python 3 tools for developers Additional Python tools for developers are distributed through the CodeReady Linux Builder repository in the respective The The The CodeReady Linux Builder repository and its content are unsupported by Red Hat. To
install packages from the Procedure
To install packages from the 38.4. Installing Python 2 Some applications and scripts have not yet been fully ported to Python 3 and require Python 2 to run. Red Hat Enterprise Linux 8 allows parallel installation of Python 3 and Python 2. If you need the Python 2 functionality, install the Note that Python 3 is the main development
direction of the Python project. Support for Python 2 is being phased out. The Procedure
Packages with add-on modules for Python 2 generally use the
Verification steps
By design, you can install RHEL 8 modules in parallel, including the 38.5. Migrating from Python 2 to Python 3 As a developer, you may want to migrate your former code that is written in Python 2 to Python 3. For more information on how to migrate large code bases to Python 3, see The Conservative Python 3 Porting Guide. Note that after this migration, the original Python 2 code becomes interpretable by the Python 3 interpreter and stays interpretable for the Python 2 interpreter as well. 38.6. Using Python When running the Python interpreter or Python-related commands, always specify the version. Prerequisites
Procedure
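For example, invoking the interpreter always with an explicit version (assuming the python3 packages are installed):

```shell
python3 --version              # reports the installed Python 3 version
python3 -c 'print(2 ** 10)'    # prints: 1024
```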
Chapter 39. Configuring the unversioned Python System administrators can configure the unversioned The Additional Python-related commands, such as 39.1. Configuring the unversioned python command directly You can configure the unversioned Prerequisites
Procedure
39.2. Configuring the unversioned python command to the required Python version interactively You can configure the unversioned Prerequisites
Procedure
39.3. Additional resources
Chapter 40. Packaging Python 3 RPMs Most Python projects use Setuptools for packaging, and define package information in the setup.py file. You can also package your Python project into an RPM package, which provides the following advantages compared to Setuptools packaging:
40.1. SPEC file description for a Python package A SPEC file contains instructions that the rpmbuild utility uses to build an RPM.
An RPM SPEC file for Python projects has some specifics compared to non-Python RPM SPEC files. Most notably, a name of any RPM package of a Python library must always include the prefix
determining the version, for example, python3. Other specifics are shown in the following SPEC file example for the python3-detox package:
%global modname detox 1

Name:           python3-detox 2
Version:        0.12
Release:        4%{?dist}
Summary:        Distributing activities of the tox tool

License:        MIT
URL:            https://pypi.io/project/detox
Source0:        https://pypi.io/packages/source/d/%{modname}/%{modname}-%{version}.tar.gz

BuildArch:      noarch

BuildRequires:  python36-devel 3
BuildRequires:  python3-setuptools
BuildRequires:  python36-rpm-macros
BuildRequires:  python3-six
BuildRequires:  python3-tox
BuildRequires:  python3-py
BuildRequires:  python3-eventlet

%?python_enable_dependency_generator 4

%description
Detox is the distributed version of the tox python testing tool. It makes efficient use of multiple CPUs by running all possible activities in parallel. Detox has the same options and configuration that tox has, so after installation you can run it in the same way and with the same options that you use for tox.

$ detox

%prep
%autosetup -n %{modname}-%{version}

%build
%py3_build 5

%install
%py3_install

%check
%{__python3} setup.py test 6

%files -n python3-%{modname}
%doc CHANGELOG
%license LICENSE
%{_bindir}/detox
%{python3_sitelib}/%{modname}/
%{python3_sitelib}/%{modname}-%{version}*

%changelog
...
1 The modname macro contains the name of the Python project. In this example it is detox. When packaging a Python project into RPM, the BuildRequires specifies what
packages are required to build and test this package. In BuildRequires, always include items providing tools necessary for building Python packages:
Every Python package requires some other packages to work correctly. Such packages need to be specified in the SPEC file as well. To specify the dependencies, you can use the %python_enable_dependency_generator macro to automatically use dependencies defined in the The %py3_build and %py3_install macros run the The check section provides a macro that runs the correct version of Python. The %{__python3} macro contains a path for the Python 3 interpreter, for example 40.2. Common macros for Python 3 RPMsIn a SPEC file, always use the macros that are described in the following Macros for Python 3 RPMs table rather than hardcoding their values. In macro names, always use Table 40.1. Macros for Python 3 RPMs
40.3. Automatic provides for Python RPMs
When packaging a Python project, make sure that the following metadata directories, if present, are included in the resulting RPM: the .dist-info, .egg-info, and .egg-link files or directories.
From these directories, the RPM build process automatically generates virtual pythonX.Ydist provides, for example python3.6dist(detox).
Chapter 41. Handling interpreter directives in Python scripts
In Red Hat Enterprise Linux 8, executable Python scripts are expected to use interpreter directives (also known as hashbangs or shebangs) that explicitly specify at a minimum the major Python version. For example:
#!/usr/bin/python3
#!/usr/bin/python3.6
#!/usr/bin/python2
A buildroot policy (BRP) script is run automatically when building any RPM package and checks the interpreter directives of executable files. The BRP script generates errors when encountering a Python script with an ambiguous interpreter directive, such as:
#!/usr/bin/python
or
#!/usr/bin/env python
41.1. Modifying interpreter directives in Python scripts
Modify the interpreter directives in the Python scripts that cause build errors at RPM build time.
Prerequisites
Procedure To modify interpreter directives, complete one of the following tasks:
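The individual tasks are not preserved in this extract. On RHEL 8 the documented tool is the pathfix.py script from the platform-python-devel package; the sed command below is only an illustrative equivalent for a single file, and the file name example.py is hypothetical:

```shell
# Documented approach (RHEL 8): rewrite shebangs in place with pathfix.py:
#   pathfix.py -pn -i %{__python3} PATH
# Illustrative sed equivalent for one file with an ambiguous shebang:
printf '#!/usr/bin/env python\nprint("ok")\n' > example.py
sed -i '1s|^#!.*python$|#!/usr/bin/python3.6|' example.py
head -n 1 example.py   # -> #!/usr/bin/python3.6
```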
If the packaged Python scripts require a version other than Python 3.6, adjust the preceding commands to include the required version.
41.2. Changing /usr/bin/python3 interpreter directives in your custom packages
By default, interpreter directives in the form of /usr/bin/python3 are replaced at RPM build time with interpreter directives pointing to Python from the platform-python package, which is used for system tools in Red Hat Enterprise Linux.
Procedure
To prevent the BRP script from checking and modifying interpreter directives, use the following RPM directive:
%undefine __brp_mangle_shebangs
Chapter 42. Using the PHP scripting language
Hypertext Preprocessor (PHP) is a general-purpose scripting language mainly used for server-side scripting, which enables you to run PHP code using a web server. In RHEL 8, the PHP scripting language is provided by the php module. Depending on your use case, you can install a specific profile of the selected module stream, for example the default common profile, the minimal profile, or the devel profile.
42.1. Installing the PHP scripting language
This section describes how to install a selected version of the php module.
Procedure
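The procedure's commands are not preserved in this extract. A typical installation, assuming the php:7.4 module stream is available on your system (available stream names vary by RHEL 8 minor release), might look like:

```shell
# List the available php module streams and profiles
yum module list php

# Install the default profile of a selected stream
yum module install php:7.4
```

To install a different profile, append it after a slash, for example php:7.4/minimal.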
Additional resources
42.2. Using the PHP scripting language with a web server
42.2.1. Using PHP with the Apache HTTP Server
In Red Hat Enterprise Linux 8, the Apache HTTP Server enables you to run PHP as a FastCGI process server. FastCGI Process Manager (PHP-FPM) is an alternative PHP FastCGI implementation. This section describes how to run PHP code using the FastCGI process server.
Procedure
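The procedure's steps are not preserved in this extract. Assuming the module stream names below are available on your system, the setup can be sketched as:

```shell
# Install the Apache HTTP Server and PHP (stream names are examples)
yum module install httpd:2.4 php:7.4

# Start and enable the web server; PHP code runs via the php-fpm service
systemctl start httpd php-fpm
systemctl enable httpd php-fpm
```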
Example 42.1. Running a "Hello, World!" PHP script using the Apache HTTP Server
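The example's content is not preserved in this extract. A minimal script, assuming the default /var/www/html document root (the hello.php file name is illustrative), might be:

```php
<?php
  echo 'Hello, World!';
?>
```

Place the file at /var/www/html/hello.php and point a browser at http://localhost/hello.php to see the output.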
42.2.2. Using PHP with the nginx web server
This section describes how to run PHP code through the nginx web server.
Procedure
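The procedure's steps are not preserved in this extract. Assuming the module stream names below are available, a typical setup is:

```shell
# Install nginx and PHP (stream names are examples)
yum module install nginx:1.14 php:7.4

# Start and enable both services; nginx passes PHP requests to php-fpm
systemctl start nginx php-fpm
systemctl enable nginx php-fpm
```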
Example 42.2. Running a "Hello, World!" PHP script using the nginx server
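The example's content is not preserved in this extract. The following nginx server fragment sketches how PHP requests reach the FastCGI process server; note that the php-fpm package on RHEL 8 ships a ready-made FastCGI configuration, so manual edits like this are often unnecessary:

```
location ~ \.php$ {
    fastcgi_pass   unix:/run/php-fpm/www.sock;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;
}
```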
42.3. Running a PHP script using the command-line interface
A PHP script is usually run using a web server, but it can also be run using the command-line interface. If you want to run PHP scripts from the command line, the php package must be installed. See Installing the PHP scripting language.
Procedure
Example 42.3. Running a "Hello, World!" PHP script using the command-line interface
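The example's content is not preserved in this extract. A minimal command-line run, assuming the php binary is installed and the hello.php file name is illustrative, looks like:

```shell
# Run PHP code directly from the command line
php -r 'echo "Hello, World!\n";'

# Or execute a script file
php hello.php
```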
42.4. Additional resources
Chapter 43. Using langpacks
Langpacks are meta-packages which install extra add-on packages containing translations, dictionaries, and locales for every package installed on the system. On a Red Hat Enterprise Linux 8 system, langpacks installation is based on the langpacks-<langcode> language meta-packages and RPM weak dependencies (the Supplements tag). There are two prerequisites to be able to use langpacks for a selected language. If these prerequisites are fulfilled, the language meta-packages pull in their langpacks for the selected language automatically in the transaction set.
Prerequisites
43.1. Checking languages that provide langpacks
Follow this procedure to check which languages provide langpacks.
Procedure
43.2. Working with RPM weak dependency-based langpacks
This section describes multiple actions that you may want to perform when querying RPM weak dependency-based langpacks, or when installing or removing language support.
43.2.1. Listing already installed language support
To list the already installed language support, use this procedure.
Procedure
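The procedure's command is not preserved in this extract; a typical query, assuming the RPM weak dependency-based langpacks described above, is:

```shell
# List the langpack meta-packages already installed on the system
yum list installed langpacks*
```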
43.2.2. Checking the availability of language support
To check if language support is available for any language, use the following procedure.
Procedure
# yum list available langpacks*
43.2.3. Listing packages installed for a language
To list which packages get installed for any language, use the following procedure:
Procedure
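The procedure's command is not preserved in this extract. A typical query, with the Spanish locale code es used as an example, is:

```shell
# Show which packages the language meta-package would pull in for Spanish
yum repoquery --whatsupplements langpacks-es
```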
43.2.4. Installing language support
To add new language support, use the following procedure.
Procedure
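The procedure's command is not preserved in this extract; installing the language meta-package pulls in the langpacks automatically, as described above. The locale code es is an example:

```shell
# Install language support for Spanish
yum install langpacks-es
```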
43.2.5. Removing language support
To remove any installed language support, use the following procedure.
Procedure
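The procedure's command is not preserved in this extract. The locale code es is an example:

```shell
# Remove language support for Spanish
yum remove langpacks-es
```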
43.3. Saving disk space by using glibc-langpack-<locale_code>
Currently, all locales are stored in the /usr/lib/locale/locale-archive file, which requires a lot of disk space. On systems where disk space is a critical issue, such as containers and cloud images, or where only a few locales are needed, you can use the glibc
locale langpack packages (glibc-langpack-<locale_code>). To install locales individually, and thus gain a smaller package installation footprint, use the following procedure.
Procedure
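The procedure's command is not preserved in this extract. The English locale is used as an example:

```shell
# Install only the English locale data instead of all locales
yum install glibc-langpack-en
```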
When installing the operating system with Anaconda, the langpack packages for the language you selected during the installation are installed automatically. Note that installing only selected locale langpacks limits the set of locales available on the system. If disk space is not an issue, keep all locales installed by using the glibc-all-langpacks package.
Chapter 44. Getting started with Tcl/Tk
44.1. Introduction to Tcl/Tk
Tool command language (Tcl) is a dynamic programming language. The interpreter for this language, together with the C library, is provided by the tcl package. Using Tcl paired with Tk (Tcl/Tk) enables creating cross-platform GUI applications. Tk is provided by the tk package. Note that Tk can refer to any of the following:
For more information about Tcl/Tk, see the Tcl/Tk manual or Tcl/Tk documentation web page.
44.2. Notable changes in Tcl/Tk 8.6
Red Hat Enterprise Linux 7 used Tcl/Tk 8.5. With Red Hat Enterprise Linux 8, Tcl/Tk version 8.6 is provided in the Base OS repository. Major changes in Tcl/Tk 8.6 compared to Tcl/Tk 8.5 are:
Major changes in Tk include:
For the detailed list of changes between Tcl 8.5 and Tcl 8.6, see Changes in Tcl/Tk 8.6.
44.3. Migrating to Tcl/Tk 8.6
Red Hat Enterprise Linux 7 used Tcl/Tk 8.5. With Red Hat Enterprise Linux 8, Tcl/Tk version 8.6 is provided in the Base OS repository. This section describes the migration path to Tcl/Tk 8.6 for:
44.3.1. Migration path for developers of Tcl extensions
To make your code compatible with Tcl 8.6, use the following procedure.
Procedure
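The procedure's steps are not preserved in this extract. One well-known incompatibility, sketched below under the assumption of a C extension that accessed the interpreter result directly (the snippet is illustrative and requires tcl.h):

```
/* Tcl 8.6 makes the Tcl_Interp structure opaque, so code that reads or
 * writes interp->result directly no longer compiles.
 *
 * Before (Tcl 8.5 and earlier):
 *     char *msg = interp->result;
 *
 * After (Tcl 8.6), use the accessor functions instead:
 *     const char *msg = Tcl_GetStringResult(interp);
 *     Tcl_SetObjResult(interp, Tcl_NewStringObj("done", -1));
 */
```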
44.3.2. Migration path for users scripting their tasks with Tcl/Tk
In Tcl 8.6, most scripts work the same way as with the previous version of Tcl. To migrate your code into Tcl 8.6, use this procedure.
Procedure
Legal Notice
Copyright © 2022 Red Hat, Inc. The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project. The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community. All other trademarks are the property of their respective owners.