Red Hat Enterprise Linux 8
Securing Red Hat Enterprise Linux 8

Abstract

This title assists users and administrators in learning the processes and practices of securing workstations and servers against local and remote intrusion, exploitation, and malicious activity. Focused on Red Hat Enterprise Linux but detailing concepts and techniques valid for all Linux systems, this guide details the planning and the tools involved in creating a secured computing environment for the data center, workplace, and home. With proper administrative knowledge, vigilance, and tools, systems running Linux can be both fully functional and secured from most common intrusion and exploit methods.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation. Let us know how we can improve it. You can submit comments on specific passages or submit feedback through Bugzilla (account required).
Chapter 1. Overview of security hardening in RHEL

Due to the increased reliance on powerful, networked computers to help run businesses and keep track of our personal information, entire industries have formed around the practice of network and computer security. Enterprises have solicited the knowledge and skills of security experts to properly audit systems and tailor solutions to fit the operating requirements of their organization. Because most organizations are increasingly dynamic in nature, with workers accessing critical company IT resources locally and remotely, the need for secure computing environments has become more pronounced. Unfortunately, many organizations, as well as individual users, regard security as an afterthought, a process that is overlooked in favor of increased power, productivity, convenience, ease of use, and budgetary concerns. Proper security implementation is often enacted postmortem, after an unauthorized intrusion has already occurred. Taking the correct measures prior to connecting a site to an untrusted network, such as the Internet, is an effective means of thwarting many attempts at intrusion.

1.1. What is computer security?

Computer security is a general term that covers a wide area of computing and information processing. Industries that depend on computer systems and networks to conduct daily business transactions and access critical information regard their data as an important part of their overall assets. Several terms and metrics have entered our daily business vocabulary, such as total cost of ownership (TCO), return on investment (ROI), and quality of service (QoS). Using these metrics, industries can calculate aspects such as data integrity and high availability (HA) as part of their planning and process management costs. In some industries, such as electronic commerce, the availability and trustworthiness of data can mean the difference between success and failure.

1.2. Standardizing security

Enterprises in every industry rely on regulations and rules that are set by standards-making bodies such as the American Medical Association (AMA) or the Institute of Electrical and Electronics Engineers (IEEE). The same concepts hold true for information security. Many security consultants and vendors agree upon the standard security model known as CIA, or Confidentiality, Integrity, and Availability. This three-tiered model is a generally accepted component of assessing risks to sensitive information and establishing security policy. The CIA model can be summarized as follows:

Confidentiality: sensitive information must be available only to a set of predefined individuals.
Integrity: information must not be altered in ways that render it incomplete or incorrect.
Availability: information must be accessible to authorized users whenever they need it.
1.3. Cryptographic software and certifications

Red Hat Enterprise Linux undergoes several security certifications, such as FIPS 140-2 or Common Criteria (CC), to ensure that industry best practices are followed. The RHEL 8 core crypto components Knowledgebase article provides an overview of the Red Hat Enterprise Linux 8 core crypto components, documenting which they are, how they are selected, how they are integrated into the operating system, how they support hardware security modules and smart cards, and how crypto certifications apply to them.

1.4. Security controls

Computer security is often divided into three distinct main categories, commonly referred to as physical, technical, and administrative controls.
These three broad categories define the main objectives of proper security implementation. Within these controls are subcategories that further detail the controls and how to implement them.

1.4.1. Physical controls

Physical control is the implementation of security measures in a defined structure used to deter or prevent unauthorized access to sensitive material. Examples of physical controls are:

1.4.2. Technical controls

Technical controls use technology as a basis for controlling the access and usage of sensitive data throughout a physical structure and over a network. Technical controls are far-reaching in scope and encompass such technologies as:

1.4.3. Administrative controls

Administrative controls define the human factors of security. They involve all levels of personnel within an organization and determine which users have access to what resources and information by such means as:
1.5. Vulnerability assessment

Given time, resources, and motivation, an attacker can break into nearly any system. All of the security procedures and technologies currently available cannot guarantee that any system is completely safe from intrusion. Routers help secure gateways to the Internet. Firewalls help secure the edge of the network. Virtual private networks safely pass data in an encrypted stream. Intrusion detection systems warn you of malicious activity. However, the success of each of these technologies is dependent upon a number of variables, including:

Given the dynamic state of data systems and technologies, securing corporate resources can be quite complex. Due to this complexity, it is often difficult to find expert resources for all of your systems. While it is possible to have personnel knowledgeable in many areas of information security at a high level, it is difficult to retain staff who are experts in more than a few subject areas. This is mainly because each subject area of information security requires constant attention and focus. Information security does not stand still. A vulnerability assessment is an internal audit of your network and system security, the results of which indicate the confidentiality, integrity, and availability of your network. Typically, vulnerability assessment starts with a reconnaissance phase, during which important data regarding the target systems and resources is gathered. This phase leads to the system readiness phase, whereby the target is essentially checked for all known vulnerabilities. The readiness phase culminates in the reporting phase, where the findings are classified into categories of high, medium, and low risk, and methods for improving the security of the target (or mitigating the risk of vulnerability) are discussed. If you were to perform a vulnerability assessment of your home, you would likely check each door to your home to see if it is closed and locked. You would also check every window, making sure that it closes completely and latches correctly. This same concept applies to systems, networks, and electronic data. Malicious users are the thieves and vandals of your data. Focus on their tools, mentality, and motivations, and you can then react swiftly to their actions.

1.5.1. Defining assessment and testing

Vulnerability assessments may be broken down into one of two types: outside looking in and inside looking around. When performing an outside-looking-in vulnerability assessment, you are attempting to compromise your systems from the outside.
Being external to your company provides you with the cracker’s point of view. You see what a cracker sees — publicly-routable IP addresses, systems on your DMZ, external interfaces of your firewall, and more. DMZ stands for "demilitarized zone", which corresponds to a computer or small subnetwork that sits between a trusted internal network, such as a corporate private LAN, and an untrusted external network, such as the public Internet. Typically, the DMZ contains devices accessible to Internet traffic, such as web (HTTP) servers, FTP servers, SMTP (e-mail) servers and DNS servers. When you perform an inside-looking-around vulnerability assessment, you are at an advantage since you are internal and your status is elevated to trusted. This is the point of view you and your co-workers have once logged on to your systems. You see print servers, file servers, databases, and other resources. There are striking distinctions between the two types of vulnerability assessments. Being internal to your company gives you more privileges than an outsider. In most organizations, security is configured to keep intruders out. Very little is done to secure the internals of the organization (such as departmental firewalls, user-level access controls, and authentication procedures for internal resources). Typically, there are many more resources when looking around inside as most systems are internal to a company. Once you are outside the company, your status is untrusted. The systems and resources available to you externally are usually very limited. Consider the difference between vulnerability assessments and penetration tests. Think of a vulnerability assessment as the first step to a penetration test. The information gleaned from the assessment is used for testing. Whereas the assessment is undertaken to check for holes and potential vulnerabilities, the penetration testing actually attempts to exploit the findings. Assessing network infrastructure is a dynamic process. 
Security, both information and physical, is dynamic. Performing an assessment provides an overview, which can turn up false positives and false negatives. A false positive is a result where the tool finds vulnerabilities that in reality do not exist. A false negative is when the tool omits actual vulnerabilities. Security administrators are only as good as the tools they use and the knowledge they retain. Take any of the assessment tools currently available, run them against your system, and it is almost a guarantee that there are some false positives. Whether by program fault or user error, the result is the same. The tool may find false positives or, even worse, false negatives. Now that the difference between a vulnerability assessment and a penetration test is defined, take the findings of the assessment and review them carefully before conducting a penetration test as part of your new best-practices approach. Do not attempt to exploit vulnerabilities on production systems. Doing so can have adverse effects on the productivity and efficiency of your systems and network. The following list examines some of the benefits of performing vulnerability assessments.
1.5.2. Establishing a methodology for vulnerability assessment

To aid in the selection of tools for a vulnerability assessment, it is helpful to establish a vulnerability assessment methodology. Unfortunately, there is no predefined or industry-approved methodology at this time; however, common sense and best practices can act as a sufficient guide. What is the target? Are we looking at one server, or are we looking at our entire network and everything within the network? Are we external or internal to the company? The answers to these questions are important because they help determine not only which tools to select but also the manner in which they are used. To learn more about establishing methodologies, see the following website:
1.5.3. Vulnerability assessment tools

An assessment can start by using some form of information-gathering tool. When assessing the entire network, map the layout first to find the hosts that are running. Once located, examine each host individually. Focusing on these hosts requires another set of tools. Knowing which tools to use may be the most crucial step in finding vulnerabilities. The following tools are just a small sampling of the available tools:
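One long-standing example of such an information-gathering tool is Nmap. The commands below are only a sketch, assuming the nmap package is installed and that you are authorized to scan the target range; the addresses are placeholders, not recommendations from this guide:

```shell
# Host discovery across a subnet (ping scan, no port probing),
# to map the layout and find the hosts that are running.
nmap -sn 192.168.122.0/24

# Examine a located host individually: probe for open ports
# and identify the service versions listening on them.
nmap -sV 192.168.122.10
```

The service-version output from the second command is what feeds the later phases of an assessment, since known vulnerabilities are typically indexed by product and version.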
1.6. Security threats

1.6.1. Threats to network security

Bad practices when configuring the following aspects of a network can increase the risk of an attack.

Insecure architectures
A misconfigured network is a primary entry point for unauthorized users. Leaving a trust-based, open local network vulnerable to the highly insecure Internet is much like leaving a door ajar in a crime-ridden neighborhood: nothing may happen for an arbitrary amount of time, but eventually someone exploits the opportunity.

Broadcast networks
System administrators often fail to realize the importance of networking hardware in their security schemes. Simple hardware, such as hubs and routers, relies on the broadcast or non-switched principle; that is, whenever a node transmits data across the network to a recipient node, the hub or router sends a broadcast of the data packets until the recipient node receives and processes the data. This method is the most vulnerable to address resolution protocol (ARP) or media access control (MAC) address spoofing by both outside intruders and unauthorized users on local hosts.

Centralized servers
Another potential networking pitfall is the use of centralized computing. A common cost-cutting measure for many businesses is to consolidate all services on a single powerful machine. This can be convenient because it is easier to manage and costs considerably less than multiple-server configurations. However, a centralized server introduces a single point of failure on the network. If the central server is compromised, it may render the network completely useless, or worse, prone to data manipulation or theft. In these situations, a central server becomes an open door that allows access to the entire network.

1.6.2. Threats to server security

Server security is as important as network security because servers often hold a great deal of an organization’s vital information.
If a server is compromised, all of its contents may become available for the cracker to steal or manipulate at will. The following sections detail some of the main issues.

Unused services and open ports
A full installation of Red Hat Enterprise Linux 8 contains more than 1000 application and library packages. However, most server administrators do not opt to install every single package in the distribution, preferring instead to install a base installation of packages, including several server applications. A common occurrence among system administrators is to install the operating system without paying attention to what programs are actually being installed. This can be problematic because unneeded services may be installed, configured with the default settings, and possibly turned on. This can cause unwanted services, such as Telnet, DHCP, or DNS, to run on a server or workstation without the administrator realizing it, which in turn can cause unwanted traffic to the server or even a potential pathway into the system for crackers.

Unpatched services
Most server applications that are included in a default installation are solid, thoroughly tested pieces of software. Having been in use in production environments for many years, their code has been thoroughly refined and many of the bugs have been found and fixed. However, there is no such thing as perfect software, and there is always room for further refinement. Moreover, newer software is often not as rigorously tested as one might expect because of its recent arrival to production environments or because it may not be as popular as other server software. Developers and system administrators often find exploitable bugs in server applications and publish the information on bug tracking and security-related websites such as the Bugtraq mailing list (http://www.securityfocus.com) or the Computer Emergency Response Team (CERT) website (http://www.cert.org).
Although these mechanisms are an effective way of alerting the community to security vulnerabilities, it is up to system administrators to patch their systems promptly. This is particularly true because crackers have access to these same vulnerability tracking services and will use the information to crack unpatched systems whenever they can. Good system administration requires vigilance, constant bug tracking, and proper system maintenance to ensure a more secure computing environment.

Inattentive administration
Administrators who fail to patch their systems are one of the greatest threats to server security. This applies as much to inexperienced administrators as it does to overconfident or unmotivated administrators. Some administrators fail to patch their servers and workstations, while others fail to watch log messages from the system kernel or network traffic. Another common error is leaving default passwords or keys to services unchanged. For example, some databases have default administration passwords because the database developers assume that the system administrator changes these passwords immediately after installation. If a database administrator fails to change this password, even an inexperienced cracker can use a widely known default password to gain administrative privileges to the database. These are only a few examples of how inattentive administration can lead to compromised servers.

Inherently insecure services
Even the most vigilant organization can fall victim to vulnerabilities if the network services they choose are inherently insecure. For instance, there are many services developed under the assumption that they are used over trusted networks; however, this assumption fails as soon as the service becomes available over the Internet, which is itself inherently untrusted. One category of insecure network services is those that require unencrypted user names and passwords for authentication. Telnet and FTP are two such services.
If packet sniffing software is monitoring traffic between the remote user and such a service, user names and passwords can be easily intercepted. Inherently, such services can also more easily fall prey to what the security industry terms the man-in-the-middle attack. In this type of attack, a cracker redirects network traffic by tricking a cracked name server on the network into pointing to his machine instead of the intended server. Once someone opens a remote session to the server, the attacker’s machine acts as an invisible conduit, sitting quietly between the remote service and the unsuspecting user, capturing information. In this way, a cracker can gather administrative passwords and raw data without the server or the user realizing it. Another category of insecure services includes network file systems and information services such as NFS or NIS, which are developed explicitly for LAN usage but are, unfortunately, extended to include WANs (for remote users). NFS does not, by default, have any authentication or security mechanisms configured to prevent a cracker from mounting the NFS share and accessing anything contained therein. NIS, as well, stores vital information that must be known by every computer on a network, including passwords and file permissions, in a plain-text ASCII or DBM (ASCII-derived) database. A cracker who gains access to this database can then access every user account on a network, including the administrator’s account. By default, Red Hat Enterprise Linux 8 is released with all such services turned off. However, since administrators often find themselves forced to use these services, careful configuration is critical.

1.6.3. Threats to workstation and home PC security

Workstations and home PCs may not be as prone to attack as networks or servers, but because they often contain sensitive data, such as credit card information, they are targeted by system crackers.
Workstations can also be co-opted without the user’s knowledge and used by attackers as "bot" machines in coordinated attacks. For these reasons, knowing the vulnerabilities of a workstation can save users the headache of reinstalling the operating system, or worse, recovering from data theft.

Bad passwords
Bad passwords are one of the easiest ways for an attacker to gain access to a system.

Vulnerable client applications
Although an administrator may have a fully secure and patched server, that does not mean remote users are secure when accessing it. For instance, if the server offers Telnet or FTP services over a public network, an attacker can capture the plain-text user names and passwords as they pass over the network, and then use the account information to access the remote user’s workstation. Even when using secure protocols, such as SSH, a remote user may be vulnerable to certain attacks if they do not keep their client applications updated. For instance, SSH protocol version 1 clients are vulnerable to an X-forwarding attack from malicious SSH servers. Once connected to the server, the attacker can quietly capture any keystrokes and mouse clicks made by the client over the network. This problem was fixed in the SSH version 2 protocol, but it is up to the user to keep track of which applications have such vulnerabilities and update them as necessary.

1.7. Common exploits and attacks

The following table details some of the most common exploits and entry points used by intruders to access organizational network resources. Key to these common exploits are the explanations of how they are performed and how administrators can properly safeguard their network against such attacks.

Table 1.1. Common exploits
Chapter 2. Securing RHEL during installation

Security begins even before you start the installation of Red Hat Enterprise Linux. Configuring your system securely from the beginning makes it easier to implement additional security settings later.

2.1. BIOS and UEFI security

Password protection for the BIOS (or BIOS equivalent) and the boot loader can prevent unauthorized users who have physical access to systems from booting using removable media or obtaining root privileges through single-user mode. The security measures you should take to protect against such attacks depend both on the sensitivity of the information on the workstation and the location of the machine. For example, if a machine is used in a trade show and contains no sensitive information, then it may not be critical to prevent such attacks. However, if an employee’s laptop with private, unencrypted SSH keys for the corporate network is left unattended at that same trade show, it could lead to a major security breach with ramifications for the entire company. If the workstation is located in a place where only authorized or trusted people have access, however, then securing the BIOS or the boot loader may not be necessary.

2.1.1. BIOS passwords

The two primary reasons for password protecting the BIOS of a computer are[1]:
Because the methods for setting a BIOS password vary between computer manufacturers, consult the computer’s manual for specific instructions. If you forget the BIOS password, it can be reset either with jumpers on the motherboard or by disconnecting the CMOS battery. For this reason, it is good practice to lock the computer case if possible. However, consult the manual for the computer or motherboard before attempting to disconnect the CMOS battery.

2.1.2. Non-BIOS-based systems security

Other systems and architectures use different programs to perform low-level tasks roughly equivalent to those of the BIOS on x86 systems, for example, the Unified Extensible Firmware Interface (UEFI) shell. For instructions on password protecting BIOS-like programs, see the manufacturer’s instructions.

2.2. Disk partitioning

Red Hat recommends creating separate partitions for the
During the installation process, you have an option to encrypt partitions. You must supply a passphrase. This passphrase serves as a key to unlock the bulk encryption key, which is used to secure the partition’s data.

2.3. Restricting network connectivity during the installation process

When installing Red Hat Enterprise Linux 8, the installation medium represents a snapshot of the system at a particular time. Because of this, it may not be up-to-date with the latest security fixes and may be vulnerable to certain issues that were fixed only after the system provided by the installation medium was released. When installing a potentially vulnerable operating system, always limit exposure only to the closest necessary network zone. The safest choice is the “no network” zone, which means leaving your machine disconnected during the installation process. In some cases, a LAN or intranet connection is sufficient, while the Internet connection is the riskiest. To follow the best security practices, choose the closest zone with your repository while installing Red Hat Enterprise Linux 8 from a network.

2.4. Installing the minimum amount of packages required

It is best practice to install only the packages you will use because each piece of software on your computer could possibly contain a vulnerability. If you are installing from the DVD media, take the opportunity to select exactly what packages you want to install during the installation. If you find you need another package, you can always add it to the system later.

2.5. Post-installation procedures

The following steps are the security-related procedures that should be performed immediately after installation of Red Hat Enterprise Linux 8.
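The passphrase-and-bulk-key mechanism described in Section 2.2 can also be applied manually after installation with the cryptsetup utility. The following is a minimal sketch, assuming a spare block device; /dev/sdb1 is a placeholder, and the luksFormat step destroys any data already on the device:

```shell
# Format the device as a LUKS container; the passphrase you supply
# unlocks the bulk encryption key that protects the partition's data.
cryptsetup luksFormat /dev/sdb1

# Unlock the container under a chosen name and put a filesystem on
# the resulting mapped device.
cryptsetup open /dev/sdb1 secret-data
mkfs.xfs /dev/mapper/secret-data

# Lock the container again when finished.
cryptsetup close secret-data
```

Both commands that touch the device require root privileges; the passphrase is prompted for interactively.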
Chapter 3. Installing a RHEL 8 system with FIPS mode enabled

To enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140-2, you have to operate RHEL 8 in FIPS mode. You can achieve this by:
To avoid regenerating cryptographic key material and re-evaluating the compliance of the resulting system when converting already deployed systems, Red Hat recommends starting the installation in FIPS mode. If you are using non-default values in the

3.1. Federal Information Processing Standard (FIPS)

The Federal Information Processing Standard (FIPS) Publication 140-2 is a computer security standard developed by a U.S. government and industry working group to validate the quality of cryptographic modules. See the official FIPS publications at the NIST Computer Security Resource Center. The FIPS 140-2 standard ensures that cryptographic tools implement their algorithms correctly. One of the mechanisms for that is runtime self-checks. See the full FIPS 140-2 standard at FIPS PUB 140-2 for further details and other specifications of the FIPS standard. To learn about compliance requirements, see the Red Hat Government Standards page.

3.2. Installing the system with FIPS mode enabled

To enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) Publication 140-2, enable FIPS mode during the system installation. Red Hat recommends installing RHEL with FIPS mode enabled, as opposed to enabling FIPS mode later. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and has continuous monitoring tests in place.

Procedure
After the installation, the system starts in FIPS mode automatically.

Verification
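A minimal verification sketch follows. It assumes the kernel exposes the /proc/sys/crypto/fips_enabled flag (1 when FIPS mode is active) and that the fips-mode-setup tool is installed; both are the conventional mechanisms on RHEL 8, but check your release documentation:

```shell
# Report the kernel FIPS flag, falling back gracefully if it is absent.
fips=$(cat /proc/sys/crypto/fips_enabled 2>/dev/null || echo unknown)
echo "FIPS mode flag: ${fips}"

# The fips-mode-setup tool, where available, gives a fuller report.
fips-mode-setup --check 2>/dev/null || true
```

On a system installed with FIPS mode enabled, the flag reads 1; on other systems it reads 0 or is absent.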
3.3. Additional resources
Chapter 4. Using system-wide cryptographic policies

The system-wide cryptographic policies form a system component that configures the core cryptographic subsystems, covering the TLS, IPsec, SSH, DNSSEC, and Kerberos protocols. It provides a small set of policies, which the administrator can select.

4.1. System-wide cryptographic policies

When a system-wide policy is set up, applications in RHEL follow it and refuse to use algorithms and protocols that do not meet the policy, unless you explicitly request the application to do so. That is, the policy applies to the default behavior of applications when running with the system-provided configuration, but you can override it if required. RHEL 8 contains the following predefined policies:

DEFAULT: the default system-wide cryptographic policy level, offering secure settings for current threat models.
LEGACY: ensures maximum compatibility with older systems; this policy level is less secure.
FUTURE: a conservative, stricter level that anticipates future security requirements.
FIPS: conforms with the FIPS 140-2 requirements.
Red Hat continuously adjusts all policy levels so that all libraries, except when using the LEGACY policy, provide secure defaults. Even though the LEGACY profile does not provide secure defaults, it does not include any algorithms that are easily exploitable. As such, the set of enabled algorithms or acceptable key sizes in any provided policy may change during the lifetime of Red Hat Enterprise Linux. Such changes reflect new security standards and new security research. If you must ensure interoperability with a specific system for the whole lifetime of Red Hat Enterprise Linux, you should opt out of the system-wide cryptographic policies for components that interact with that system or re-enable specific algorithms using custom policies.

Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements of the FUTURE system-wide cryptographic policy, utilities that connect to the Customer Portal API do not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API.

The specific algorithms and ciphers described as allowed in the policy levels are available only if an application supports them.

Tool for managing crypto policies

To view or change the current system-wide cryptographic policy, use the update-crypto-policies tool, for example:

$ update-crypto-policies --show
DEFAULT
# update-crypto-policies --set FUTURE
Setting system policy to FUTURE

To ensure that the change of the cryptographic policy is applied, restart the system.

Strong crypto defaults by removing insecure cipher suites and protocols

The following list contains cipher suites and protocols removed from the core cryptographic libraries in Red Hat Enterprise Linux 8. They are not present in the sources, or their support is disabled during the build, so applications cannot use them.
Cipher suites and protocols disabled in all policy levelsThe following cipher suites and protocols are disabled in all crypto policy levels. They can be enabled only by an explicit configuration of individual applications.
Cipher suites and protocols enabled in the crypto-policies levelsThe following table shows the enabled cipher suites and protocols in all four crypto-policies levels.
Additional resources
4.2. Switching the system-wide cryptographic policy to a mode compatible with earlier releases

The default system-wide cryptographic policy in Red Hat Enterprise Linux 8 does not allow communication using older, insecure protocols. For environments that require compatibility with Red Hat Enterprise Linux 6 and in some cases also with earlier releases, the less secure LEGACY policy level is available. Switching to the LEGACY policy level results in a less secure system and applications.

Procedure
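The switch described above can be sketched as follows. Run the commands as root, and restart the system afterwards so the change fully applies:

```shell
# Set the less secure LEGACY system-wide cryptographic policy for
# compatibility with earlier releases.
update-crypto-policies --set LEGACY

# Confirm which policy is now active.
update-crypto-policies --show
```

Switch back to the DEFAULT level in the same way once the compatibility requirement no longer applies.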
Additional resources
4.3. Setting up system-wide cryptographic policies in the web console

You can choose from predefined system-wide cryptographic policy levels and switch between them directly in the Red Hat Enterprise Linux web console interface. If you set a custom policy on your system, the web console displays the policy on the Overview page as well as in the Change crypto policy dialog window.

Prerequisites
Procedure
Verification
4.4. Switching the system to FIPS mode

The system-wide cryptographic policies contain a policy level that enables cryptographic module self-checks in accordance with the requirements of the Federal Information Processing Standard (FIPS) Publication 140-2. Red Hat recommends installing Red Hat Enterprise Linux 8 with FIPS mode enabled, as opposed to enabling FIPS mode later. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and has continuous monitoring tests in place.

Procedure
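On an already-installed system, the switch can be sketched with the fips-mode-setup tool (run as root); treat this as an outline rather than a substitute for the full procedure:

```shell
# Enable FIPS mode; this also configures the FIPS system-wide
# cryptographic policy level.
fips-mode-setup --enable

# Restart the system, then verify the resulting state:
fips-mode-setup --check
```

As noted above, keys generated before the switch are not regenerated, which is why installing with FIPS mode enabled is the recommended path.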
Verification
Additional resources
4.5. Enabling FIPS mode in a container

On systems with FIPS mode enabled, the

Prerequisites
Procedure
On a RHEL 8.1 system, you can enable FIPS mode in a container by performing the following steps:
4.6. List of RHEL applications using cryptography that is not compliant with FIPS 140-2 Red Hat recommends using libraries from the core crypto components set, as they are guaranteed to pass all relevant crypto certifications, such as FIPS 140-2, and also follow the RHEL system-wide crypto policies. See the RHEL 8 core crypto components article for an overview of the RHEL 8 core crypto components, including information on how they are selected, how they are integrated into the operating system, how they support hardware security modules and smart cards, and how crypto certifications apply to them. In addition to the following table, in some RHEL 8 Z-stream releases (for example, 8.1.1), the Firefox browser packages have been updated, and they contain a separate copy of the NSS cryptography library. This way, Red Hat wants to avoid the disruption of rebasing such a low-level component in a patch release. As a result, these Firefox packages do not use a FIPS 140-2-validated module. Table 4.1. List of RHEL 8 applications using cryptography that is not compliant with FIPS 140-2
4.7. Excluding an application from following system-wide crypto policies You can customize cryptographic settings used by your application, preferably by configuring supported cipher suites and protocols directly in the application. You can also remove a symlink related to your application from the /etc/crypto-policies/back-ends directory and replace it with your customized cryptographic settings. This configuration prevents the use of system-wide cryptographic policies for applications that use the excluded back end. 4.7.1. Examples of opting out of system-wide crypto policies
wget
To customize cryptographic settings used by the wget network downloader, use the --secure-protocol and --ciphers options, for example:
$ wget --secure-protocol=TLSv1_1 --ciphers="SECURE128" https://example.com
See the HTTPS (SSL/TLS) Options section of the wget(1) man page for more information.
curl
To specify ciphers used by the curl tool, use the --ciphers option and provide a colon-separated list of ciphers as a value, for example:
$ curl https://example.com --ciphers '@SECLEVEL=0:DES-CBC3-SHA:RSA-DES-CBC3-SHA'
See the curl(1) man page for more information.
Firefox
Even though you cannot opt out of system-wide cryptographic policies in the Firefox web browser, you can further restrict supported ciphers and TLS versions in Firefox's Configuration Editor.
OpenSSH
To opt out of the system-wide crypto policies for your OpenSSH server, uncomment the line with the CRYPTO_POLICY= variable in the /etc/sysconfig/sshd file. After this change, the values that you specify in the Ciphers, MACs, KexAlgorithms, and GSSAPIKexAlgorithms sections in the /etc/ssh/sshd_config file are not overridden. To opt out of system-wide crypto policies for your OpenSSH client, perform one of the following tasks:
See the ssh_config(5) man page for more information. Additional resources
4.8. Customizing system-wide cryptographic policies with subpolicies Use this procedure to adjust the set of enabled cryptographic algorithms or protocols. You can either apply custom subpolicies on top of an existing system-wide cryptographic policy or define such a policy from scratch. The concept of scoped policies allows enabling different sets of algorithms for different back ends. You can limit each configuration directive to specific protocols, libraries, or services. Furthermore, directives can use asterisks for specifying multiple values using wildcards. Customization of system-wide cryptographic policies is available from RHEL 8.2. You can use the concept of scoped policies and the option of using wildcards in RHEL 8.5 and newer. Procedure
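A subpolicy is a plain-text .pmod file applied on top of an existing policy; a minimal sketch (the module name MYCRYPTO-1 and its directive are illustrative, run as root):

```shell
# Create a policy module in the policies/modules/ directory.
cat > /etc/crypto-policies/policies/modules/MYCRYPTO-1.pmod <<'EOF'
# Illustrative directive: require at least TLS 1.2 for all back ends.
min_tls_version = TLS1.2
EOF

# Apply the subpolicy on top of the DEFAULT policy, then restart the system.
update-crypto-policies --set DEFAULT:MYCRYPTO-1
```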
Verification
Additional resources
4.9. Disabling SHA-1 by customizing a system-wide cryptographic policy Because the SHA-1 hash function has an inherently weak design, and advancing cryptanalysis has made it vulnerable to attacks, RHEL 8 does not use SHA-1 by default. Nevertheless, some third-party applications, for example,
public signatures, still use SHA-1. To disable the use of SHA-1 in signature algorithms on your system, you can use the NO-SHA1 custom policy module. The NO-SHA1 policy module disables the SHA-1 hash function only in signatures and not elsewhere. If your scenario requires disabling a specific key exchange (KEX) algorithm combination, for example, diffie-hellman-group-exchange-sha1, you can do so through a customized subpolicy for the relevant back end. The module for disabling SHA-1 is available from RHEL 8.3. Customization of system-wide cryptographic policies is available from RHEL 8.2. Procedure
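The procedure reduces to applying the NO-SHA1 module on top of the current policy (run as root):

```shell
# Apply the NO-SHA1 subpolicy module on top of the DEFAULT policy
# to disable SHA-1 in signature algorithms (available from RHEL 8.3).
update-crypto-policies --set DEFAULT:NO-SHA1

# Restart the system to make the change effective for all services.
reboot
```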
Additional resources
4.10. Creating and setting a custom system-wide cryptographic policy The following steps demonstrate customizing the system-wide cryptographic policies by creating and applying a complete policy file. Customization of system-wide cryptographic policies is available from RHEL 8.2. Procedure
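A complete policy is a .pol file in the policies directory; a minimal sketch (the policy name MYPOLICY and the directives shown are illustrative, run as root):

```shell
# Create a complete custom policy file.
cat > /etc/crypto-policies/policies/MYPOLICY.pol <<'EOF'
# Illustrative directives: a full policy defines its own complete set.
min_tls_version = TLS1.2
hash = SHA2-256 SHA2-384 SHA2-512
EOF

# Switch the system-wide cryptographic policy to the custom level,
# then restart the system.
update-crypto-policies --set MYPOLICY
```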
Additional resources
Chapter 5. Setting a custom cryptographic policy across systems As an administrator, you can use the crypto_policies RHEL System Role to quickly and consistently configure custom cryptographic policies across many different systems. 5.1. crypto_policies System Role variables and facts In a crypto_policies System Role playbook, you can define the parameters for the crypto policies configuration according to your preferences and limitations. If you do not configure any variables, the System Role does not configure the system and only reports the facts. Selected variables for the crypto_policies System Role: crypto_policies_policy Determines the cryptographic policy the System Role applies to the managed nodes.
For details about the different crypto policies, see System-wide cryptographic policies. crypto_policies_reload If set to yes, the affected services, currently the ipsec, bind, and sshd services, reload after applying a crypto policy. Defaults to yes. crypto_policies_reboot_ok
If set to yes, and a reboot is necessary after the System Role changes the crypto policy, it sets crypto_policies_reboot_required to yes. Defaults to no. Facts set by the crypto_policies System Role: crypto_policies_active Lists the currently selected policy. crypto_policies_available_policies Lists all policies available on the
system. crypto_policies_available_subpolicies Lists all subpolicies available on the system. 5.2. Setting a custom cryptographic policy using the crypto_policies System Role You can use the crypto_policies System Role to configure a large number of managed nodes consistently from a single control node. Prerequisites
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains
Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible and ansible-playbook; connectors such as docker and podman; and many plugins and modules. RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins.
Procedure
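The procedure can be sketched as writing a short playbook and running it from the control node (the host group managed-nodes and the inventory file name are illustrative; the role name follows the rhel-system-roles package):

```shell
# Write a minimal playbook that applies the FUTURE policy through the
# crypto_policies System Role.
cat > crypto-policy.yml <<'EOF'
- hosts: managed-nodes
  tasks:
    - name: Apply the FUTURE cryptographic policy
      include_role:
        name: rhel-system-roles.crypto_policies
      vars:
        crypto_policies_policy: FUTURE
        crypto_policies_reboot_ok: true
EOF

# Run the playbook from the control node.
ansible-playbook -i inventory.yml crypto-policy.yml
```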
Verification
5.3. Additional resources
Chapter 6. Configuring applications to use cryptographic hardware through PKCS #11 Separating parts of your secret information on dedicated cryptographic devices, such as smart cards and cryptographic tokens for end-user authentication and hardware security modules (HSM) for server applications, provides an additional layer of security. In RHEL, support for cryptographic hardware through the PKCS #11 API is consistent across different applications, and the isolation of secrets on cryptographic hardware is not a complicated task. 6.1. Cryptographic hardware support through PKCS #11 PKCS #11 (Public-Key Cryptography Standard) defines an application programming interface (API) to cryptographic devices that hold cryptographic information and perform cryptographic functions. These devices are called tokens, and they can be implemented in a hardware or software form. A PKCS #11 token can store various object types including a certificate; a data object; and a public, private, or secret key. These objects are uniquely identifiable through the PKCS #11 URI scheme. A PKCS #11 URI is a standard way to identify a specific object in a PKCS #11 module according to the object attributes. This enables you to configure all libraries and applications with the same configuration string in the form of a URI. RHEL provides the OpenSC PKCS #11 driver for smart cards by default. However, hardware tokens and HSMs can have their own PKCS #11 modules that do not have their counterpart in the system. You can register such PKCS #11 modules with the p11-kit tool, which acts as a wrapper for registered smart-card drivers in the system. To make your own PKCS #11 module work on the system, add a new text file to the /etc/pkcs11/modules/ directory. For example, the OpenSC configuration file in p11-kit looks as follows:
$ cat /usr/share/p11-kit/modules/opensc.module
module: opensc-pkcs11.so
6.2. Using SSH keys stored on a smart card Red Hat Enterprise Linux enables you to use RSA and ECDSA keys stored on a smart card on OpenSSH clients. Use this procedure to enable authentication using a smart card instead of using a password. Prerequisites
Procedure
To connect using a key stored on the smart card, provide the PKCS #11 URI to the ssh command: $ ssh -i pkcs11: example.com
Enter PIN for 'SSH key':
[example.com] $ 6.3. Configuring applications to authenticate using certificates from smart cardsAuthentication using smart cards in applications may increase security and simplify automation.
Using PKCS #11 URIs in custom applications If your application uses the GnuTLS or NSS library, support for PKCS #11 URIs is guaranteed by their built-in PKCS #11 support. Also, applications relying on the OpenSSL library can access cryptographic hardware modules through the openssl-pkcs11 engine. With applications that require working with private keys on smart cards and that do not use NSS, GnuTLS, or OpenSSL, use p11-kit to implement registering PKCS #11 modules. Additional resources
6.4. Using HSMs protecting private keys in Apache The Apache HTTP server can work with private keys stored on hardware security modules (HSMs), which helps to prevent the keys' disclosure and man-in-the-middle attacks. Note that this usually requires high-performance HSMs for busy servers. For secure communication in the form of the HTTPS protocol, the Apache HTTP server (httpd) uses the OpenSSL library. OpenSSL does not support PKCS #11 natively. To use HSMs, you have to install the openssl-pkcs11 package, which provides access to PKCS #11 modules through the engine interface. You can use a PKCS #11 URI instead of a regular file name to specify a server key and a certificate in the /etc/httpd/conf.d/ssl.conf configuration file, for example:
SSLCertificateFile "pkcs11:id=%01;token=softhsm;type=cert" SSLCertificateKeyFile "pkcs11:id=%01;token=softhsm;type=private?pin-value=111111"
Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server, including TLS configuration. 6.5. Using HSMs protecting private keys in Nginx The Nginx web server can also use private keys stored on HSMs. Because Nginx also uses the OpenSSL library for cryptographic operations, support for PKCS #11 must go through the openssl-pkcs11 engine. Nginx currently supports only loading private keys from an HSM, and a certificate must be provided separately as a regular file, for example:
ssl_certificate /path/to/cert.pem ssl_certificate_key "engine:pkcs11:pkcs11:token=softhsm;id=%01;type=private?pin-value=111111";
Note that the engine:pkcs11: prefix is needed for the PKCS #11 URI in the ssl_certificate_key directive, because the other pkcs11 prefix refers to the engine name. Chapter 7. Controlling access to smart cards using polkit To cover possible threats that cannot be prevented by mechanisms built into smart cards, such as PINs, PIN pads, and biometrics, and for more fine-grained control, RHEL uses the polkit framework for controlling access to smart cards.
System administrators can configure polkit to fit specific scenarios, such as smart-card access for non-privileged or non-local users or services. 7.1. Smart-card access control through polkit The Personal Computer/Smart Card (PC/SC) protocol specifies a standard for integrating smart cards and their readers into computing systems. In
RHEL, the pcsc-lite package provides middleware to access smart cards that use the PC/SC protocol. A part of this package, the pcscd (PC/SC Smart Card) daemon, ensures that the system can access a smart card using the PC/SC protocol. Because access-control mechanisms built into smart cards, such as PINs, PIN pads, and biometrics, do not cover all possible threats, RHEL uses the polkit framework for more robust access control. The polkit authorization manager can grant access to privileged operations, and you can use it to specify policies for securing smart cards, for example, to define which users can perform which operations with a smart card. After installing the pcsc-lite package and starting the pcscd daemon, the system enforces the policies defined in the /usr/share/polkit-1/actions/ directory. The default system-provided policy is in the /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy file. Additional resources
7.2. Troubleshooting problems related to PC/SC and polkit Polkit policies that are automatically enforced after you install the pcsc-lite package and start the pcscd daemon may ask for authentication in the user's session even if the user does not directly interact with a smart card, for example: Authentication is required to access the PC/SC daemon. Note that the system can install the pcsc-lite package as a dependency when you install other packages related to smart cards. If your scenario does not require any interaction with smart cards and you want to prevent displaying authorization requests for the PC/SC daemon, you can remove the pcsc-lite package. Keeping the minimum of necessary packages is a good security practice anyway. If you use smart cards, start troubleshooting by checking the rules in the system-provided policy file at /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy. You can add your custom rule files to the policy in the /etc/polkit-1/rules.d/ directory. To understand what authorization requests the system displays, check the Journal log, for example: $ journalctl -b | grep pcsc
...
Process 3087 (user: 1001) is NOT authorized for action: access_pcsc
...
The previous log entry means that the
user is not authorized to perform an action by the policy. You can solve this denial by adding a corresponding rule to the /etc/polkit-1/rules.d/ directory. You can also search for log entries related to the polkitd unit, for example: $ journalctl -u polkit
...
polkitd[NNN]: Error compiling script /etc/polkit-1/rules.d/00-debug-pcscd.rules
...
polkitd[NNN]: Operator of unix-session:c2 FAILED to authenticate to gain authorization for action org.debian.pcsc-lite.access_pcsc for unix-process:4800:14441 [/usr/libexec/gsd-smartcard] (owned by unix-user:group)
...
In the previous output, the first entry means that the rule file contains a syntax error. The second entry means that the user failed to gain access to the PC/SC daemon through polkit. You can also list all applications that use the PC/SC protocol by a short script. Create an
executable file, for example, pcsc-apps.sh, and insert the following code:
#!/bin/bash
cd /proc
for p in [0-9]*
do
    if grep libpcsclite.so.1.0.0 $p/maps &> /dev/null
    then
        echo -n "process: "
        cat $p/cmdline
        echo " ($p)"
    fi
done
Run the script as root:
# ./pcsc-apps.sh
process: /usr/libexec/gsd-smartcard (3048)
enable-sync --auto-ssl-client-auth --enable-crashpad (4828)
... Additional resources
7.3. Displaying more detailed information about polkit authorization to PC/SC In the default configuration, the Prerequisites
Procedure
Verification
Additional resources
7.4. Additional resources
Chapter 8. Using shared system certificates The shared system certificates storage enables NSS, GnuTLS, OpenSSL, and Java to share a default source for retrieving system certificate anchors and block-list information. By default, the trust store contains the Mozilla CA list, including positive and negative trust. The system allows updating the core Mozilla CA list or choosing another certificate list. 8.1. The system-wide trust store In Red Hat Enterprise Linux, the consolidated system-wide trust store is located in the /etc/pki/ca-trust/ and /usr/share/pki/ca-trust-source/ directories. The trust settings in /usr/share/pki/ca-trust-source/ are processed with lower priority than settings in /etc/pki/ca-trust/. Certificate files are treated depending on the subdirectory they are installed to. For example, trust anchors belong to the /usr/share/pki/ca-trust-source/anchors/ or /etc/pki/ca-trust/source/anchors/ directory.
In a hierarchical cryptographic system, a trust anchor is an authoritative entity that other parties consider trustworthy. In the X.509 architecture, a root certificate is a trust anchor from which a chain of trust is derived. To enable chain validation, the trusting party must have access to the trust anchor first. 8.2. Adding new certificates To make applications on your system trust a new source of trust, add the corresponding certificate to the system-wide store, and apply the change with the update-ca-trust command. Prerequisites
Procedure
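The procedure reduces to two commands (run as root; the certificate path is illustrative):

```shell
# Copy the CA certificate (PEM or DER format) into the system-wide
# trust anchors directory.
cp ca.crt /etc/pki/ca-trust/source/anchors/

# Rebuild the consolidated trust store so all applications see the new CA.
update-ca-trust
```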
While the Firefox browser is able to use an added certificate without executing update-ca-trust, Red Hat recommends running the update-ca-trust command after a CA change. 8.3. Managing trusted system certificates The trust command provides a convenient way for managing certificates in the shared system-wide trust store.
Additional resources
8.4. Additional resources
Chapter 9. Scanning the system for configuration compliance and vulnerabilitiesA compliance audit is a process of determining whether a given object follows all the rules specified in a compliance policy. The compliance policy is defined by security professionals who specify the required settings, often in the form of a checklist, that a computing environment should use. Compliance policies can vary substantially across organizations and even across different systems within the same organization. Differences among these policies are based on the purpose of each system and its importance for the organization. Custom software settings and deployment characteristics also raise a need for custom policy checklists. 9.1. Configuration compliance tools in RHELRed Hat Enterprise Linux provides tools that enable you to perform a fully automated compliance audit. These tools are based on the Security Content Automation Protocol (SCAP) standard and are designed for automated tailoring of compliance policies.
To perform automated compliance audits on multiple systems remotely, you can use the OpenSCAP solution for Red Hat Satellite. Additional resources
9.2. Vulnerability scanning 9.2.1. Red Hat Security Advisories OVAL feed Red Hat Enterprise Linux security auditing capabilities are based on the Security Content Automation Protocol (SCAP) standard. SCAP is a multi-purpose framework of specifications that supports automated configuration, vulnerability and patch checking, technical control compliance activities, and security measurement. SCAP specifications create an ecosystem where the format of security content is well-known and standardized, although the implementation of the scanner or policy editor is not mandated. This enables organizations to build their security policy (SCAP content) once, no matter how many security vendors they employ. The Open Vulnerability Assessment Language (OVAL) is the essential and oldest component of SCAP. Unlike other tools and custom scripts, OVAL describes a required state of resources in a declarative manner. OVAL code is never executed directly but by means of an OVAL interpreter tool called a scanner. The declarative nature of OVAL ensures that the state of the assessed system is not accidentally modified. Like all other SCAP components, OVAL is based on XML. The SCAP standard defines several document formats. Each of them includes a different kind of information and serves a different purpose. Red Hat Product Security helps customers evaluate and manage risk by tracking and investigating all security issues affecting Red Hat customers. It provides timely and concise patches and security advisories on the Red Hat Customer Portal. Red Hat creates and supports OVAL patch definitions, providing machine-readable versions of our security advisories. Because of differences between platforms, versions, and other factors, Red Hat Product Security qualitative severity ratings of vulnerabilities do not directly align with the Common Vulnerability Scoring System (CVSS) baseline ratings provided by third parties.
Therefore, we recommend that you use the RHSA OVAL definitions instead of those provided by third parties. The RHSA OVAL definitions are available individually and as a complete package, and are updated within an hour of a new security advisory being made available on the Red Hat Customer Portal. Each OVAL patch definition maps one-to-one to a Red Hat Security Advisory (RHSA). Because an RHSA can contain fixes for multiple vulnerabilities, each vulnerability is listed separately by its Common Vulnerabilities and Exposures (CVE) name and has a link to its entry in our public bug database. The RHSA OVAL definitions are designed to check for vulnerable versions of RPM packages installed on a system. It is possible to extend these definitions to include further checks, for example, to find out if the packages are being used in a vulnerable configuration. These definitions are designed to cover software and updates shipped by Red Hat. Additional definitions are required to detect the patch status of third-party software. The Red Hat Insights for Red Hat Enterprise Linux compliance service helps IT security and compliance administrators to assess, monitor, and report on the security policy compliance of Red Hat Enterprise Linux systems. You can also create and manage your SCAP security policies entirely within the compliance service UI. 9.2.2. Scanning the system for vulnerabilities The Prerequisites
Procedure
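The procedure can be sketched as downloading the RHSA OVAL definitions and running the scanner (the report file name is illustrative):

```shell
# Download and decompress the RHEL 8 OVAL definitions from Red Hat.
wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 \
  | bzip2 --decompress > rhel-8.oval.xml

# Scan the local system for vulnerabilities and write an HTML report.
oscap oval eval --report vulnerability-report.html rhel-8.oval.xml
```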
Verification
9.2.3. Scanning remote systems for vulnerabilities You can also check remote systems for vulnerabilities with the OpenSCAP scanner by using the oscap-ssh tool over the SSH protocol. Prerequisites
Procedure
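A sketch of the remote scan (the user name, host name, and report file name are illustrative):

```shell
# Scan a remote machine over SSH on port 22 using the OVAL definitions,
# collecting the results into a local HTML report.
oscap-ssh joesec@machine1 22 oval eval \
  --report remote-vulnerability-report.html rhel-8.oval.xml
```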
9.3. Configuration compliance scanning 9.3.1. Configuration compliance in RHEL You can use configuration compliance scanning to conform to a baseline defined by a specific organization. For example, if you work with the US government, you might have to align your systems with the Operating System Protection Profile (OSPP), and if you are a payment processor, you might have to align your systems with the Payment Card Industry Data Security Standard (PCI-DSS). You can also perform configuration compliance scanning to harden your system security. Red Hat recommends you follow the Security Content Automation Protocol (SCAP) content provided in the SCAP Security Guide package because it is in line with Red Hat best practices for affected components. The SCAP Security Guide package provides content which conforms to the SCAP 1.2 and SCAP 1.3 standards. Note that performing a configuration compliance scan does not guarantee the system is compliant. The SCAP Security Guide suite provides profiles for several platforms in the form of data stream documents. A data stream is a file that contains definitions, benchmarks, profiles, and individual rules. Each rule specifies the applicability and requirements for compliance. RHEL provides several profiles for compliance with security policies. In addition to the industry standard, Red Hat data streams also contain information for remediation of failed rules. Structure of compliance scanning resources:
Data stream
├── xccdf
|   ├── benchmark
|   ├── profile
|   |   ├── rule reference
|   |   └── variable
|   ├── rule
|   |   ├── human readable data
|   |   ├── oval reference
├── oval
|   ├── ocil reference
├── ocil
|   ├── cpe reference
└── cpe
    └── remediation
A profile is a set of rules based on a security policy, such as OSPP, PCI-DSS, and Health Insurance Portability and Accountability Act (HIPAA). This enables you to audit the system in an automated way for compliance with security standards.
You can modify (tailor) a profile to customize certain rules, for example, password length. For more information on profile tailoring, see Customizing a security profile with SCAP Workbench. 9.3.2. Possible results of an OpenSCAP scanDepending on various properties of your system and the data stream and profile applied to an OpenSCAP scan, each rule may produce a specific result. This is a list of possible results with brief explanations of what they mean. Table 9.1. Possible results of an OpenSCAP scan
9.3.3. Viewing profiles for configuration compliance Before you decide to use profiles for scanning or remediation, you can list them and check their detailed descriptions using the oscap info subcommand. Prerequisites
Procedure
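The listing can be sketched with the oscap info subcommand against the RHEL 8 data stream:

```shell
# List all components of the data stream, including available profiles.
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

# Display the detailed description of a single profile, for example HIPAA.
oscap info --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
```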
Additional resources
9.3.4. Assessing configuration compliance with a specific baselineTo determine whether your system conforms to a specific baseline, follow these steps. Prerequisites
Procedure
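A sketch of the evaluation step (the HIPAA profile and report file name are illustrative, run as root):

```shell
# Evaluate the system against the selected baseline profile and
# save the results as a human-readable HTML report.
oscap xccdf eval --report report.html --profile hipaa \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
```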
Additional resources
9.4. Remediating the system to align with a specific baseline Use this procedure to remediate the RHEL system to align with a specific baseline. This example uses the Health Insurance Portability and Accountability Act (HIPAA) profile. If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Prerequisites
Procedure
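The remediation step can be sketched as follows (run as root; this changes system configuration, so test it on non-production systems first):

```shell
# Evaluate the system against the HIPAA profile and immediately
# remediate the failed rules.
oscap xccdf eval --profile hipaa --remediate \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

# Restart the system so all remediated settings take effect.
reboot
```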
Verification
Additional resources
9.5. Remediating the system to align with a specific baseline using an SSG Ansible playbook Use this procedure to remediate your system with a specific baseline using an Ansible playbook file from the SCAP Security Guide project. This example uses the Health Insurance Portability and Accountability Act (HIPAA) profile. If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Prerequisites
In RHEL 8.6 and later versions,
Ansible Engine is replaced by the ansible-core package, which contains only built-in modules. Note that many Ansible remediations use modules from the community and Portable Operating System Interface (POSIX) collections, which are not included in the built-in modules. In this case, you can use Bash remediations as a substitute for Ansible remediations. Procedure
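The procedure can be sketched by running the HIPAA playbook shipped with the scap-security-guide package against the local machine (run as root):

```shell
# Run the HIPAA remediation playbook from the SCAP Security Guide
# against the local system.
ansible-playbook -i "localhost," -c local \
  /usr/share/scap-security-guide/ansible/rhel8-playbook-hipaa.yml
```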
Verification
9.6. Creating a remediation Ansible playbook to align the system with a specific baseline You can create an Ansible playbook containing only the remediations that are required to align your system with a specific baseline. This example uses the Health Insurance Portability and Accountability Act (HIPAA) profile. With this procedure, you create a smaller playbook that does not cover already satisfied requirements. By following these steps, you do not modify your system in any way, you only prepare a file for later application. In RHEL 8.6, Ansible Engine is replaced by the ansible-core package, which contains only built-in modules. Note that many Ansible remediations use modules from the community and Portable Operating System Interface (POSIX) collections, which are not included in the built-in modules. In this case, you can use Bash remediations as a substitute for Ansible remediations. Prerequisites
Procedure
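A sketch of the two-step procedure (result and output file names are illustrative; --result-id "" selects the only result in the file, run as root):

```shell
# Scan the system against the HIPAA profile and store the results.
oscap xccdf eval --profile hipaa --results hipaa-results.xml \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

# Generate an Ansible playbook containing only remediations for the
# rules that failed during the scan.
oscap xccdf generate fix --fix-type ansible --result-id "" \
  --output hipaa-remediations.yml hipaa-results.xml
```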
Verification
9.7. Creating a remediation Bash script for a later application Use this procedure to create a Bash script containing remediations that align your system with a security profile such as HIPAA. Using the following steps, you do not make any modifications to your system, you only prepare a file for later application. Prerequisites
Procedure
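A sketch of the procedure, analogous to generating an Ansible playbook but producing a Bash script (file names are illustrative, run as root):

```shell
# Scan the system against the HIPAA profile and store the results.
oscap xccdf eval --profile hipaa --results hipaa-results.xml \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

# Generate a Bash remediation script from the stored results.
oscap xccdf generate fix --fix-type bash --result-id "" \
  --output hipaa-remediations.sh hipaa-results.xml
```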
Verification
Additional resources
9.8. Scanning the system with a customized profile using SCAP Workbench 9.8.1. Using SCAP Workbench to scan and remediate the systemTo evaluate your system against the selected security policy, use the following procedure. Prerequisites
Procedure
9.8.2. Customizing a security profile with SCAP WorkbenchYou can customize a security profile by changing parameters in certain rules (for example, minimum password length), removing rules that you cover in a different way, and selecting additional rules, to implement internal policies. You cannot define new rules by customizing a profile. The following procedure demonstrates the use of Prerequisites
Procedure
9.9. Deploying systems that are compliant with a security profile immediately after an installation You can use the OpenSCAP suite to deploy RHEL systems that are compliant with a security profile, such as OSPP, PCI-DSS, and HIPAA profile, immediately after the installation process. Using this deployment method, you can apply specific rules that cannot be applied later using remediation scripts, for example, a rule for password strength and partitioning. 9.9.1. Profiles not compatible with Server with GUI Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. Therefore, do not select Server with GUI when installing systems compliant with one of the following profiles: Table 9.2. Profiles not compatible with Server with GUI
9.9.2. Deploying baseline-compliant RHEL systems using the graphical installationUse this procedure to deploy a RHEL system that is aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP). Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. For additional details, see Profiles not compatible with a GUI server . Prerequisites
Procedure
Verification
9.9.3. Deploying baseline-compliant RHEL systems using KickstartUse this procedure to deploy RHEL systems that are aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP). Prerequisites
Procedure
Passwords in Kickstart files are not checked for OSPP requirements. Verification
9.10. Scanning container and container images for vulnerabilitiesUse this procedure to find security vulnerabilities in a container or a container image. Prerequisites
Procedure
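The scan can be sketched with the oscap-podman tool (IMAGE-ID is a placeholder for the ID reported by podman, run as root):

```shell
# Find the ID of the container image to scan.
podman images

# Scan the image for vulnerabilities using the RHEL 8 OVAL definitions
# and write an HTML report.
oscap-podman IMAGE-ID oval eval \
  --report container-vulnerability.html rhel-8.oval.xml
```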
Verification
Additional resources
9.11. Assessing security compliance of a container or a container image with a specific baselineFollow these steps to assess compliance of your container or a container image with a specific security baseline, such as Operating System Protection Profile (OSPP), Payment Card Industry Data Security Standard (PCI-DSS), and Health Insurance Portability and Accountability Act (HIPAA). Prerequisites
Procedure
Verification
The rules marked as notapplicable are rules that do not apply to containerized systems. These rules apply only to bare-metal and virtualized systems. Additional resources
9.12. SCAP Security Guide profiles supported in RHEL 8Use only the SCAP content provided in the particular minor release of RHEL. This is because components that participate in hardening are sometimes updated with new capabilities. SCAP content changes to reflect these updates, but it is not always backward compatible. In the following tables, you can find the profiles provided in each minor version of RHEL, together with the version of the policy with which the profile aligns. Table 9.3. SCAP Security Guide profiles supported in RHEL 8.7
Table 9.4. SCAP Security Guide profiles supported in RHEL 8.6
Table 9.5. SCAP Security Guide profiles supported in RHEL 8.5
Table 9.6. SCAP Security Guide profiles supported in RHEL 8.4
Table 9.7. SCAP Security Guide profiles supported in RHEL 8.3
Table 9.8. SCAP Security Guide profiles supported in RHEL 8.2
Table 9.9. SCAP Security Guide profiles supported in RHEL 8.1
Table 9.10. SCAP Security Guide profiles supported in RHEL 8.0
9.13. Additional resources
Chapter 10. Checking integrity with AIDE Advanced Intrusion Detection Environment (AIDE) is a utility that creates a database of files on the system, and then uses that database to ensure file integrity and detect system intrusions. 10.1. Installing AIDE The following steps are necessary to install AIDE and to initiate its database. Prerequisites
Procedure
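The installation and initialization can be sketched as follows (run as root; database paths follow the default /etc/aide.conf):

```shell
# Install the aide package.
yum install aide

# Generate the initial database.
aide --init

# Start using the database: drop the ".new" substring from its file name.
mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
```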
10.2. Performing integrity checks with AIDEPrerequisites
Procedure
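The check itself is a single command (run as root):

```shell
# Compare the current state of the system against the baseline database
# and report any changed files.
aide --check
```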
10.3. Updating an AIDE database After verifying the changes of your system, such as package updates or configuration file adjustments, Red Hat recommends updating your baseline AIDE database. Prerequisites
Procedure
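The update can be sketched as follows (run as root; paths follow the default /etc/aide.conf):

```shell
# Build a new database reflecting the current, verified system state.
aide --update

# Replace the baseline with the newly generated database.
mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
```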
10.4. File-integrity tools: AIDE and IMARed Hat Enterprise Linux provides several tools for checking and preserving the integrity of files and directories on your system. The following table helps you decide which tool better fits your scenario. Table 10.1. Comparison between AIDE and IMA
Chapter 11. Enhancing security with the kernel integrity subsystemYou can increase the protection of your system by utilizing components of the kernel integrity subsystem. The following sections introduce the relevant components and provide guidance on their configuration. You can use the features with cryptographic signatures only for Red Hat products because the kernel keyring system includes only the certificates for Red Hat signature keys. Using other hash features results in incomplete tamperproofing. 11.1. The kernel integrity subsystemThe integrity subsystem is a part of the kernel that is responsible for maintaining the overall system data integrity. This subsystem helps to keep the state of a certain system the same from the time it was built and thus prevents undesired modification on specific system files. The kernel integrity subsystem consists of two major components: Integrity Measurement Architecture (IMA)
The kernel integrity subsystem can harness the Trusted Platform Module (TPM) to harden the system security even more. TPM is a specification by the Trusted Computing Group (TCG) for important cryptographic functions. TPMs are usually built as dedicated hardware that is attached to the platform’s motherboard and prevents software-based attacks by providing cryptographic functions from a protected and tamper-proof area of the hardware chip. Some of the TPM features are:
11.2. Integrity measurement architecture Integrity Measurement Architecture (IMA) is a component of the kernel integrity subsystem. IMA aims to maintain the integrity of the contents of local files. Specifically, IMA measures, stores, and appraises file hashes before the files are accessed, which prevents the reading and execution of unreliable data. Thereby, using IMA with digital signatures enhances the security of the system and prevents measurement-data forgery. This scenario assumes the attacker does not possess the original signature key. 11.3. Extended verification module Extended Verification Module (EVM) is a component of the kernel integrity subsystem that monitors changes in file extended attributes (xattr). Many security-oriented technologies store sensitive file information, such as content hashes and signatures, in extended attributes of files. EVM goes one step further and hashes or signs those extended attributes. The resulting data is validated every time the extended attribute is used, for example, when IMA appraises the file. 11.4. Trusted and encrypted keys The following section introduces trusted and encrypted keys as an important part of enhancing system security. Trusted and encrypted keys are variable-length symmetric keys generated by the kernel that utilize the kernel keyring service. The fact that keys of this type never appear in user space in an unencrypted form means that their integrity can be verified, which in turn means that they can be used, for example, by the extended verification module (EVM) to verify and confirm the integrity of a running system. User-level programs can only access the keys in the form of encrypted blobs. Trusted keys need a hardware component: the Trusted Platform Module (TPM) chip, which is used to both create and encrypt (seal) the keys. The TPM seals the keys using a 2048-bit RSA key called the storage root key (SRK).
To use a TPM 1.2 specification, enable and activate it through a setting in the machine firmware or by using the tpm_setactive command from the tpm-tools package of utilities. In addition, the TrouSerS software stack needs to be installed, and the tcsd daemon needs to be running to communicate with the TPM. In addition to that, the user can seal the trusted keys with a specific set of the TPM’s platform configuration register (PCR) values. PCR contains a set of integrity-management values that reflect the firmware, boot loader, and operating system. This means that PCR-sealed keys can only be decrypted by the TPM on the same system on which they were encrypted. However, once a PCR-sealed trusted key is loaded (added to a keyring), and thus its associated PCR values are verified, it can be updated with new (or future) PCR values, so that a new kernel, for example, can be booted. A single key can also be saved as multiple blobs, each with different PCR values. Encrypted keys do not require a TPM, as they use the kernel Advanced Encryption Standard (AES), which makes them faster than trusted keys. Encrypted keys are created using kernel-generated random numbers and encrypted by a master key when they are exported into user-space blobs. The master key is either a trusted key or a user key. If the master key is not trusted, the encrypted key is only as secure as the user key used to encrypt it. 11.5. Working with trusted keys The following section describes how to create, export, load, or update trusted keys with the keyctl utility to improve the system security. Prerequisites
Procedure
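The key lifecycle can be sketched with keyctl (run as root on a system with a working TPM; the key name kmk and the blob file name are illustrative):

```shell
# Create a 32-byte trusted key named "kmk" sealed by the TPM and
# place it in the user keyring.
keyctl add trusted kmk "new 32" @u

# Export the sealed blob so the key can be reloaded after a reboot.
keyctl pipe "$(keyctl search @u trusted kmk)" > kmk.blob

# Load the key again from the exported blob.
keyctl add trusted kmk "load $(cat kmk.blob)" @u
```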
11.6. Working with encrypted keysThe following section describes managing encrypted keys to improve the system security on systems where a Trusted Platform Module (TPM) is not available. Prerequisites
Procedure
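Without a TPM, the procedure can be sketched as creating a user key as the primary key and deriving an encrypted key from it (key names are illustrative, run as root):

```shell
# Create a random 32-byte user key to act as the primary (master) key.
keyctl add user kmk-user "$(dd if=/dev/urandom bs=1 count=32 2>/dev/null)" @u

# Create a 64-byte encrypted key protected by the primary key.
keyctl add encrypted encr-key "new user:kmk-user 64" @u
```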
Keep in mind that encrypted keys that are not sealed by a trusted primary key are only as secure as the user primary key (random-number key) that was used to encrypt them. Therefore, the primary user key should be loaded as securely as possible and preferably early during the boot process. 11.7. Enabling integrity measurement architecture and extended verification moduleIntegrity measurement architecture (IMA) and extended verification module (EVM) belong to the kernel integrity subsystem and enhance the system security in various ways. The following section describes how to enable and configure IMA and EVM to improve the security of the operating system. This procedure works only when using certificates and keys issued by Red Hat. Prerequisites
Procedure
Verification step
If the system is rebooted, the keys are removed from the keyring. In such a case, you can import the already exported keys instead of creating new ones. Procedure
11.8. Collecting file hashes with integrity measurement architecture The first level of operation of integrity measurement architecture (IMA) is the measurement phase, which allows you to create file hashes and store them as extended attributes (xattrs) of those files. The following section describes how to create and inspect file hashes. Prerequisites
Procedure
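A sketch of the measurement commands, assuming the ima-evm-utils and attr packages are installed and test_file is an existing file (the file name is a placeholder):

```shell
# Create an IMA hash of the file and store it in the security.ima xattr.
evmctl ima_hash test_file
# Inspect the extended attributes that now hold the hash.
getfattr -m . -d test_file
```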
Chapter 12. Encrypting block devices using LUKS Disk encryption protects the data on a block device by encrypting it. To access the device’s decrypted contents, a user must provide a passphrase or key as authentication. This is particularly important when it comes to mobile computers and removable media: it helps to protect the device’s contents even if it has been physically removed from the system. The LUKS format is the default implementation of block device encryption in RHEL. 12.1. LUKS disk encryption The Linux Unified Key Setup-on-disk-format (LUKS) enables you to encrypt block devices, and it provides a set of tools that simplifies managing the encrypted devices. LUKS allows multiple user keys to decrypt a master key, which is used for the bulk encryption of the partition. RHEL uses LUKS to perform block device encryption. By default, the option to encrypt the block device is unchecked during the installation. If you select the option to encrypt your disk, the system prompts you for a passphrase every time you boot the computer. This passphrase “unlocks” the bulk encryption key that decrypts your partition. If you choose to modify the default partition table, you can choose which partitions you want to encrypt. This is set in the partition table settings. What LUKS does
What LUKS does not do
Ciphers The default cipher used for LUKS is aes-xts-plain64.
12.2. LUKS versions in RHEL In RHEL, the default format for LUKS encryption is LUKS2. The legacy LUKS1 format remains fully supported, and it is provided as a format compatible with earlier RHEL releases. The LUKS2 format is designed to enable future updates of various parts without a need to modify binary structures. LUKS2 internally uses the JSON text format for metadata, provides redundancy of metadata, detects metadata corruption, and allows automatic repairs from a metadata copy. Do not use LUKS2 in systems that must be compatible with legacy systems that support only LUKS1. Note that RHEL 7 supports the LUKS2 format since version 7.6. LUKS2 and LUKS1 use different commands to encrypt the disk. Using the wrong command for a LUKS version might cause data loss.
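Because the two versions use different commands, a brief sketch of the version-specific invocations (the device /dev/sdb1 is a placeholder; luksFormat destroys any existing data on the device):

```shell
# Format a device with the default LUKS2 format.
cryptsetup luksFormat /dev/sdb1
# Format a device with the legacy LUKS1 format instead.
cryptsetup luksFormat --type luks1 /dev/sdb1
# Convert an existing LUKS1 header to LUKS2 where conversion is possible.
cryptsetup convert --type luks2 /dev/sdb1
```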
Online re-encryption The LUKS2 format supports re-encrypting encrypted devices while the devices are in use. For example, you do not have to unmount the file system on the device to perform the following tasks:
When encrypting a non-encrypted device, you must still unmount the file system. You can remount the file system after a short initialization of the encryption. The LUKS1 format does not support online re-encryption. Conversion The LUKS2 format is inspired by LUKS1. In certain situations, you can convert LUKS1 to LUKS2. The conversion is not possible specifically in the following scenarios:
12.3. Options for data protection during LUKS2 re-encryption LUKS2 provides several options that prioritize performance or data protection during the re-encryption process:
checksum This is the default mode. It balances data protection and performance. This mode stores individual checksums of the sectors in the re-encryption area, so the recovery process can detect which sectors LUKS2 already re-encrypted. The mode requires that the block device sector write is atomic. journal This is the safest mode but also the slowest. This mode journals the re-encryption
area in the binary area, so LUKS2 writes the data twice. none This mode prioritizes performance and provides no data protection. It protects the data only against safe process termination, such as the SIGTERM signal or the user pressing Ctrl+C. Any unexpected system crash or application crash might result in data corruption. You can select the mode using the --resilience option of the cryptsetup tool. If a LUKS2 re-encryption process terminates unexpectedly by force, LUKS2 can perform the recovery in one of the following ways:
12.4. Encrypting existing data on a block device using LUKS2 This procedure encrypts existing data on a device that is not yet encrypted by using the LUKS2 format. A new LUKS header is stored in the head of the device. Prerequisites
Procedure
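A sketch of the in-place encryption steps, assuming /dev/sdb1 holds the existing data and has been backed up (the mount point and the 32M header reservation are example values; the commands require root):

```shell
# Unmount the file system on the device first.
umount /mnt/data
# Encrypt the existing data in place; --reduce-device-size makes room
# for the new LUKS2 header at the head of the device.
cryptsetup reencrypt --encrypt --reduce-device-size 32M /dev/sdb1
# Open the now-encrypted device and remount the file system.
cryptsetup open /dev/sdb1 encrypted-data
mount /dev/mapper/encrypted-data /mnt/data
```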
Additional resources
12.6. Encrypting a blank block device using LUKS2 This procedure provides information about encrypting a blank block device by using the LUKS2 format. Prerequisites
Procedure
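A sketch of the steps, assuming /dev/sdc1 is a blank partition (the device name and mapping name are placeholders):

```shell
# Initialize the partition with LUKS2 encryption; this prompts for a passphrase.
cryptsetup luksFormat /dev/sdc1
# Open the encrypted device under a chosen mapping name.
cryptsetup open /dev/sdc1 encrypted-blank
# Create a file system on the mapped device.
mkfs.xfs /dev/mapper/encrypted-blank
```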
Additional resources
12.7. Creating a LUKS encrypted volume using the storage RHEL System Role You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. Prerequisites
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible and ansible-playbook, connectors such as docker and podman, and many plugins and modules. RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands such as ansible-playbook and ansible, and a small set of built-in Ansible plugins.
Procedure
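A minimal playbook sketch for the storage role (the disk name, mount point, and password are illustrative; store real passwords in Ansible Vault):

```yaml
- hosts: all
  vars:
    storage_volumes:
      - name: barefs            # example volume name
        type: disk
        disks:
          - sdb                 # example disk
        fs_type: xfs
        mount_point: /mnt/data
        encryption: true
        encryption_password: yourpassword   # example only
  roles:
    - rhel-system-roles.storage
```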
Chapter 13. Configuring automated unlocking of encrypted volumes using policy-based decryption Policy-Based Decryption (PBD) is a collection of technologies that enable unlocking encrypted root and secondary volumes of hard drives on physical and virtual machines. PBD uses a variety of unlocking methods, such as user passwords, a Trusted Platform Module (TPM) device, a PKCS #11 device connected to a system, for example, a smart card, or a special network server. PBD allows combining different unlocking methods into a policy, which makes it possible to unlock the same volume in different ways. The current implementation of PBD in RHEL consists of the Clevis framework and plug-ins called pins. Each pin provides a separate unlocking capability. Currently, the following pins are available:
Network-Bound Disk Encryption (NBDE) is a subcategory of PBD that allows binding encrypted volumes to a special network server. The current implementation of NBDE includes a Clevis pin for the Tang server and the Tang server itself. 13.1. Network-bound disk encryption In Red Hat Enterprise Linux, NBDE is implemented through the following components and technologies: Figure 13.1. NBDE scheme when using a LUKS1-encrypted volume. The luksmeta package is not used for LUKS2 volumes. Tang is a server for binding data to network presence. It makes a system containing your data available when the system is bound to a certain secure network. Tang is stateless and does not require TLS or authentication. Unlike escrow-based solutions, where the server stores all encryption keys and has knowledge of every key ever used, Tang never interacts with any client keys, so it never gains any identifying information from the client. Clevis is a pluggable framework for automated decryption. In NBDE, Clevis provides automated unlocking of LUKS volumes. The clevis package provides the client side of the feature. A Clevis pin is a plug-in into the Clevis framework. One such pin is the plug-in that implements interactions with the NBDE server, Tang. Clevis and Tang are generic client and server components that provide network-bound encryption. In Red Hat Enterprise Linux, they are used in conjunction with LUKS to encrypt and decrypt root and non-root storage volumes to accomplish Network-Bound Disk Encryption. Both client- and server-side components use the José library to perform encryption and decryption operations. When you begin provisioning NBDE, the Clevis pin for the Tang server gets a list of the Tang server’s advertised asymmetric keys. Alternatively, since the keys are asymmetric, a list of Tang’s public keys can be distributed out of band so that clients can operate without access to the Tang server. This mode is called offline provisioning.
The Clevis pin for Tang uses one of the public keys to generate a unique, cryptographically-strong encryption key. Once the data is encrypted using this key, the key is discarded. The Clevis client should store the state produced by this provisioning operation in a convenient location. This process of encrypting data is the provisioning step. LUKS version 2 (LUKS2) is the default disk-encryption format in RHEL, hence the provisioning state for NBDE is stored as a token in a LUKS2 header. The luksmeta package is used to store the provisioning state for NBDE only for volumes encrypted with LUKS1. The Clevis pin for Tang supports both LUKS1 and LUKS2 without the need to specify the version. Clevis can encrypt plain-text files, but you have to use the cryptsetup tool for encrypting block devices. When the client is ready to access its data, it loads the metadata produced in the provisioning step and it responds to recover the encryption key. This process is the recovery step. In NBDE, Clevis binds a LUKS volume using a pin so that it can be automatically unlocked. After successful completion of the binding process, the disk can be unlocked using the provided Dracut unlocker. 13.2. Installing an encryption client - Clevis Use this procedure to deploy and start using the Clevis pluggable framework on your system. Procedure
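The installation itself is a single step (the clevis-luks and clevis-dracut packages add the LUKS and early-boot integration):

```shell
# Install the Clevis framework with LUKS and initramfs integration.
yum install clevis clevis-luks clevis-dracut
```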
Additional resources
13.3. Deploying a Tang server with SELinux in enforcing mode Use this procedure to deploy a Tang server running on a custom port as a confined service in SELinux enforcing mode. Prerequisites
Procedure
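A sketch of the deployment, assuming 7500/tcp as the custom port (all commands require root; the SELinux tangd_port_t port type ships with the selinux-policy packages):

```shell
# Install the Tang server and the SELinux management tools.
yum install tang policycoreutils-python-utils
# Label the custom port so the confined tangd service can bind to it.
semanage port -a -t tangd_port_t -p tcp 7500
# Open the port in the firewall.
firewall-cmd --add-port=7500/tcp --permanent && firewall-cmd --reload
# Override the socket unit to listen on the custom port.
mkdir -p /etc/systemd/system/tangd.socket.d
cat > /etc/systemd/system/tangd.socket.d/port.conf <<'EOF'
[Socket]
ListenStream=
ListenStream=7500
EOF
systemctl daemon-reload
systemctl enable tangd.socket --now
```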
Additional resources
13.4. Rotating Tang server keys and updating bindings on clients Use the following steps to rotate your Tang server keys and update existing bindings on clients. The precise interval at which you should rotate them depends on your application, key sizes, and institutional policy. Alternatively, you can rotate Tang keys by using the nbde_server RHEL System Role. Prerequisites
Procedure
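A sketch of the rotation on the server and the rebinding on a client (the key file names, device, and slot are placeholders):

```shell
# On the Tang server: generate a new key pair in the key database.
/usr/libexec/tangd-keygen /var/db/tang
# Hide the old keys (rename them to dot-files) so they are no longer
# advertised but can still decrypt existing bindings during the transition.
cd /var/db/tang
mv OLD_KEY.jwk .OLD_KEY.jwk        # placeholder file name
# On each client: regenerate the binding against the new keys.
clevis luks regenerate -d /dev/sda2 -s 1
```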
Removing the old keys
while clients are still using them can result in data loss. If you accidentally remove such keys, use the clevis luks regenerate command on the clients. Additional resources
13.5. Configuring automated unlocking using a Tang key in the web console Configure automated unlocking of a LUKS-encrypted storage device using a key provided by a Tang server. Prerequisites
Procedure
Verification
13.6. Basic NBDE and TPM2 encryption-client operations The Clevis framework can encrypt plain-text files and decrypt both ciphertexts in the JSON Web Encryption (JWE) format and LUKS-encrypted block devices. Clevis clients can use either Tang network servers or Trusted Platform Module 2.0 (TPM 2.0) chips for cryptographic operations. The following commands demonstrate the basic functionality provided by Clevis on examples containing plain-text files. You can also use them for troubleshooting your NBDE or Clevis+TPM deployments. Encryption client bound to a Tang server
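For example (the Tang URL and file names are placeholders):

```shell
# Encrypt a file; Clevis fetches and uses the Tang server's advertised key.
clevis encrypt tang '{"url":"http://tang.srv"}' < input-plain.txt > secret.jwe
# Decrypt it again; the Tang server must be reachable.
clevis decrypt < secret.jwe > output-plain.txt
```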
Encryption client using TPM 2.0
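For example, using the default TPM 2.0 configuration (file names are placeholders):

```shell
# Seal a file to the local TPM 2.0 chip.
clevis encrypt tpm2 '{}' < input-plain.txt > secret.jwe
# Unseal it; this works only on the system whose TPM sealed the data.
clevis decrypt < secret.jwe > output-plain.txt
```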
The pin also supports sealing data to a Platform Configuration Registers (PCR) state. That way, the data can only be unsealed if the PCR hash values match the policy used when sealing. For example, to seal the data to the PCR with index 0 and 7 for the SHA-256 bank: $ clevis encrypt tpm2 '{"pcr_bank":"sha256","pcr_ids":"0,7"}' < input-plain.txt > secret.jwe Hashes in PCRs can be rewritten, after which you can no longer unlock your encrypted volume. For this reason, add a strong passphrase that enables you to unlock the encrypted volume manually even when a value in a PCR changes. If the system cannot automatically unlock your encrypted volume after an upgrade, see the Additional resources. Additional resources
13.7. Configuring manual enrollment of LUKS-encrypted volumes Use the following steps to configure unlocking of LUKS-encrypted volumes with NBDE. Prerequisites
Procedure
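A sketch of the enrollment, with a placeholder device and Tang URL (requires root):

```shell
# Bind an existing LUKS-encrypted volume to a Tang server; this adds a new
# key slot and stores the Clevis metadata in the LUKS header.
clevis luks bind -d /dev/sda2 tang '{"url":"http://tang.srv"}'
# Enable unlocking during early boot and regenerate the initramfs.
yum install clevis-dracut
dracut -fv --regenerate-all
```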
Verification
To use NBDE for clients with static IP configuration (without DHCP), pass your
network configuration to the dracut tool, for example:
# dracut -fv --regenerate-all --kernel-cmdline "ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none"
Alternatively, create a .conf file in the /etc/dracut.conf.d/ directory with the static network information, for example:
# cat /etc/dracut.conf.d/static_ip.conf
kernel_cmdline="ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none"
Regenerate the initial RAM disk image:
# dracut -fv --regenerate-all
13.8. Configuring manual enrollment of LUKS-encrypted volumes using a TPM 2.0 policy Use the following steps to configure unlocking of LUKS-encrypted volumes by using a Trusted Platform Module 2.0 (TPM 2.0) policy. Prerequisites
Procedure
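A sketch, with a placeholder device (requires root and a TPM 2.0 chip):

```shell
# Bind the volume to the local TPM 2.0 chip with the default configuration.
clevis luks bind -d /dev/sda2 tpm2 '{}'
# Alternatively, seal to specific PCR values, for example PCRs 0 and 7
# in the SHA-256 bank:
clevis luks bind -d /dev/sda2 tpm2 '{"pcr_bank":"sha256","pcr_ids":"0,7"}'
```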
Verification
Additional resources
13.9. Removing a Clevis pin from a LUKS-encrypted volume manually Use the following procedure for manually removing the metadata created by the clevis luks bind command and for wiping a key slot that contains a passphrase added by Clevis. The recommended way to remove a Clevis pin from a LUKS-encrypted volume is through the clevis luks unbind command, for example: # clevis luks unbind -d /dev/sda2 -s 1
Prerequisites
Procedure
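A sketch of the manual removal on a LUKS2 volume (the device, token ID, and slot number are placeholders; verify them in the luksDump output first):

```shell
# Inspect the header to find the Clevis token and the key slot it uses.
cryptsetup luksDump /dev/sda2
# Remove the token that holds the Clevis metadata.
cryptsetup token remove --token-id 0 /dev/sda2
# Wipe the key slot that Clevis added.
cryptsetup luksKillSlot /dev/sda2 1
```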
Additional resources
13.10. Configuring automated enrollment of LUKS-encrypted volumes using Kickstart Follow the steps in this procedure to configure an automated installation process that uses Clevis for the enrollment of LUKS-encrypted volumes. Procedure
You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server. 13.11. Configuring automated unlocking of a LUKS-encrypted removable storage device Use this procedure to set up an automated unlocking process of a LUKS-encrypted USB storage device. Procedure
You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server. Additional resources
13.12. Deploying high-availability NBDE systems Tang provides two methods for building a high-availability deployment: Client redundancy (recommended) Clients should be configured with the ability to bind to multiple Tang servers. In this setup, each Tang server has its own keys and
clients can decrypt by contacting a subset of these servers. Clevis already supports this workflow through its sss plug-in. 13.12.1. High-available NBDE using Shamir’s Secret Sharing Shamir’s Secret Sharing (SSS) is a cryptographic scheme that divides a secret into several unique parts. To reconstruct the secret, a number of parts is required. The number is called the threshold, and SSS is also referred to as a thresholding scheme. Clevis provides an implementation of SSS. It creates a key and divides it into a number of pieces. Each piece is encrypted using another pin, including even SSS recursively. Additionally, you define the threshold t. If an NBDE deployment decrypts at least t pieces, then it recovers the encryption key and the decryption process succeeds. 13.12.1.1. Example 1: Redundancy with two Tang servers The following command decrypts a LUKS-encrypted device when at least one of two Tang servers is available: # clevis luks bind -d /dev/sda1 sss '{"t":1,"pins":{"tang":[{"url":"http://tang1.srv"},{"url":"http://tang2.srv"}]}}' The previous command used the following configuration scheme: { "t":1, "pins":{ "tang":[ { "url":"http://tang1.srv" }, { "url":"http://tang2.srv" } ] } } In this configuration, the SSS threshold t is set to 1, and the clevis luks bind command successfully reconstructs the secret if at least one of the two listed Tang servers is available. 13.12.1.2. Example 2: Shared secret on a Tang server and a TPM device The following command successfully decrypts a LUKS-encrypted device when both the Tang server and the TPM 2.0 device are available: # clevis luks bind -d /dev/sda1 sss '{"t":2,"pins":{"tang":[{"url":"http://tang1.srv"}], "tpm2": {"pcr_ids":"0,7"}}}' The configuration scheme with the SSS threshold 't' set to '2' is now: { "t":2, "pins":{ "tang":[ { "url":"http://tang1.srv" } ], "tpm2":{ "pcr_ids":"0,7" } } } Additional resources
13.13. Deployment of virtual machines in a NBDE network The clevis luks bind command does not change the LUKS master key. This implies that if you create a LUKS-encrypted image for use in a virtual machine or cloud environment, all the instances that run this image share a master key. This is extremely insecure and should be avoided at all times. This is not a limitation of Clevis but a design principle of LUKS. If your scenario requires having encrypted root volumes in a cloud, perform the installation process (usually using Kickstart) for each instance of Red Hat Enterprise Linux in the cloud as well. The images cannot be shared without also sharing a LUKS master key. To deploy automated unlocking in a virtualized environment, use systems such as lorax or virt-install together with a Kickstart file. Additional resources
13.14. Building automatically-enrollable VM images for cloud environments using NBDEDeploying automatically-enrollable encrypted images in a cloud environment can provide a unique set of challenges. Like other virtualization environments, it is recommended to reduce the number of instances started from a single image to avoid sharing the LUKS master key. Therefore, the best practice is to create customized images that are not shared in any public repository and that provide a base for the deployment of a limited amount of instances. The exact number of instances to create should be defined by deployment’s security policies and based on the risk tolerance associated with the LUKS master key attack vector. To build LUKS-enabled automated deployments, systems such as Lorax or virt-install together with a Kickstart file should be used to ensure master key uniqueness during the image building process. Cloud environments enable two Tang server deployment options which we consider here. First, the Tang server can be deployed within the cloud environment itself. Second, the Tang server can be deployed outside of the cloud on independent infrastructure with a VPN link between the two infrastructures. Deploying Tang natively in the cloud does allow for easy deployment. However, given that it shares infrastructure with the data persistence layer of ciphertext of other systems, it may be possible for both the Tang server’s private key and the Clevis metadata to be stored on the same physical disk. Access to this physical disk permits a full compromise of the ciphertext data. For this reason, Red Hat strongly recommends maintaining a physical separation between the location where the data is stored and the system where Tang is running. This separation between the cloud and the Tang server ensures that the Tang server’s private key cannot be accidentally combined with the Clevis metadata. It also provides local control of the Tang server if the cloud infrastructure is at risk. 
13.15. Deploying Tang as a container The tang container image provides Tang-server decryption capabilities for Clevis clients that run either in OpenShift Container Platform (OCP) clusters or in separate virtual machines. Prerequisites
Procedure
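A sketch of running the container with podman (the published port and volume name are examples; access to registry.redhat.io requires authentication):

```shell
# Run the Tang container, publishing its port and persisting keys in a volume.
podman run -d -p 7500:7500 -v tang-keys:/var/db/tang --name tang \
    registry.redhat.io/rhel8/tang
```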
Verification
Additional resources
13.16. Introduction to the nbde_client and nbde_server System Roles (Clevis and Tang) RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems. RHEL 8.3 introduced Ansible roles for automated deployments of Policy-Based Decryption (PBD) solutions using Clevis and Tang. The rhel-system-roles package contains these system roles, the related examples, and also the reference documentation. The nbde_server System Role enables you to deploy and manage a Tang server as part of an automated disk-encryption solution. The nbde_client System Role enables you to deploy multiple Clevis clients in an automated way. If you provide both a passphrase and a key file, the role uses what you have provided first. If it does not find any of these valid, it attempts to retrieve a passphrase from an existing binding. PBD defines a binding as a mapping of a device to a slot. This means that you can have multiple bindings for the same device. The default slot is slot 1. Note that the nbde_client role currently supports only Tang bindings; you cannot use it for TPM 2.0 bindings.
Additional resources
13.17. Using the nbde_server System Role for setting up multiple Tang servers Follow the steps to prepare and apply an Ansible playbook containing your Tang server settings. Prerequisites
RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible and ansible-playbook, connectors such as docker and podman, and many plugins and modules. RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands such as ansible-playbook and ansible, and a small set of built-in Ansible plugins.
Procedure
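A minimal playbook sketch for the nbde_server role (the variable value is illustrative):

```yaml
- hosts: all
  vars:
    nbde_server_rotate_keys: yes
  roles:
    - rhel-system-roles.nbde_server
```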
To ensure that networking for a Tang pin is available during early boot, use the grubby tool: # grubby --update-kernel=ALL --args="rd.neednet=1" Additional resources
13.18. Using the nbde_client System Role for setting up multiple Clevis clients Follow the steps to prepare and apply an Ansible playbook containing your Clevis client settings. The nbde_client System Role supports only Tang bindings. Prerequisites
Procedure
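A minimal playbook sketch for the nbde_client role (the device, password, and server URLs are illustrative; use Ansible Vault for real passwords):

```yaml
- hosts: all
  vars:
    nbde_client_bindings:
      - device: /dev/rhel/root        # example encrypted device
        encryption_password: password # example only
        servers:
          - http://server1.example.com
          - http://server2.example.com
  roles:
    - rhel-system-roles.nbde_client
```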
To ensure that networking for a Tang pin is available during early boot, use the grubby tool: # grubby --update-kernel=ALL --args="rd.neednet=1" Additional resources
13.19. Additional resources
Chapter 14. Auditing the system Audit does not provide additional security to your system; rather, it can be used to discover violations of security policies used on your system. These violations can further be prevented by additional security measures such as SELinux. 14.1. Linux Audit The Linux Audit system provides a way to track security-relevant information on your system. Based on pre-configured rules, Audit generates log entries to record as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine the violator of the security policy and the actions they performed. The following list summarizes some of the information that Audit is capable of recording in its log files:
The use of the Audit system is also a requirement for a number of security-related certifications. Audit is designed to meet or exceed the requirements of the following certifications or compliance guides:
Audit has also been:
Use Cases Watching file access Audit can track whether a file or a directory has been accessed, modified, executed, or the file’s attributes have been changed. This is useful, for example, to detect access to important files and have an Audit trail available in case one of these files is corrupted. Monitoring system calls Audit can be configured to generate a log entry every time a particular system call is used. This can be used, for example, to track changes to the system time by monitoring the settimeofday, clock_adjtime, and other time-related system
calls. Recording commands run by a user Audit can track whether a file has been executed, so rules can be defined to record every execution of a particular command. For example, a rule can be defined for every executable in the /bin directory. The resulting log entries can then be searched by user ID to generate an audit trail of executed commands per user. Recording execution of system pathnames Aside from watching file access
which translates a path to an inode at rule invocation, Audit can now watch the execution of a path even if it does not exist at rule invocation, or if the file is replaced after rule invocation. This allows rules to continue to work after upgrading a program executable or before it is even installed. Recording security events The pam_faillock authentication module is capable of recording failed login attempts. Audit can be set up to record failed login attempts as
well and provides additional information about the user who attempted to log in. Searching for events Audit provides the ausearch utility, which can be used to filter the log entries and provide a complete audit trail based on several conditions. Running summary reports The aureport utility can be used to generate, among other things, daily reports of recorded events. A system administrator can then analyze these reports and
investigate suspicious activity further. Monitoring network access The nftables, iptables, and ebtables utilities can be configured to trigger Audit events, allowing system administrators to monitor network access. System performance may be affected depending on the amount of information that is collected by Audit. 14.2. Audit system architecture The Audit system consists of two main parts: the user-space applications and utilities, and the kernel-side system call processing. The kernel component receives system calls from user-space applications and filters them through one of the following filters: user, task, fstype, or exit. Once a system call passes the exclude filter, it is sent through one of the aforementioned filters, which, based on the Audit rule configuration, sends it to the Audit daemon for further processing. The user-space Audit daemon collects the information from the kernel and creates entries in a log file. Other Audit user-space utilities interact with the Audit daemon, the kernel Audit component, or the Audit log files:
In RHEL 8, the Audit dispatcher daemon (audisp) functionality is integrated in the Audit daemon (auditd). 14.3. Configuring auditd for a secure environment The default auditd configuration should be suitable for most environments. However, if your environment must meet strict security policies, you can adjust the settings in the /etc/audit/auditd.conf file. log_file The directory that holds the Audit log files (usually /var/log/audit/) should reside on a separate mount point. This prevents other processes from consuming space in this directory and provides accurate detection of the remaining space for the Audit daemon. The remaining configuration options should be set according to your local security policy. 14.4. Starting and controlling auditd After auditd is configured, start the service to collect Audit information and store it in the log files: # service auditd start To configure auditd to start at boot time: # systemctl enable auditd You can temporarily disable auditd with the auditctl -e 0 command and re-enable it with auditctl -e 1. A number of other actions can be performed on auditd by using the service auditd action command.
The service command is the only way to correctly interact with the auditd daemon; use it so that the auid value is properly recorded. 14.5. Understanding Audit log files By default, the Audit system stores log entries in the /var/log/audit/audit.log file. Add the following Audit rule to log every attempt to read or modify the /etc/ssh/sshd_config file: # auditctl -w /etc/ssh/sshd_config -p warx -k sshd_config
If the auditd daemon is running, running the following command creates a new event in the Audit log file: $ cat /etc/ssh/sshd_config This event in the audit.log file looks as follows: type=SYSCALL msg=audit(1364481363.243:24287): arch=c000003e syscall=2 success=no exit=-13 a0=7fffd19c5592 a1=0 a2=7fffd19c4b50 a3=a items=1 ppid=2686 pid=3538 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm="cat" exe="/bin/cat" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="sshd_config" type=CWD msg=audit(1364481363.243:24287): cwd="/home/shadowman" type=PATH msg=audit(1364481363.243:24287): item=0 name="/etc/ssh/sshd_config" inode=409248 dev=fd:00 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 type=PROCTITLE msg=audit(1364481363.243:24287): proctitle=636174002F6574632F7373682F737368645F636F6E666967 The above event consists of four records, which share the same time stamp and serial number. Records always start with the type= keyword. First Record type=SYSCALL The type field contains the type of the record. In this example, the SYSCALL value specifies that this record was triggered by a system call to the kernel.
msg=audit(1364481363.243:24287): The msg field records a time stamp and a unique ID of the record in the form audit(time_stamp:ID). Multiple records can share the same time stamp and ID if they were generated as part of the same Audit event.
arch=c000003e The arch field contains information about the CPU architecture of the system. The value, c000003e , is encoded in hexadecimal notation. When searching Audit records with the ausearch command, use the -i or --interpret option to
automatically convert hexadecimal values into their human-readable equivalents. The c000003e value is interpreted as x86_64 . syscall=2 The syscall field records the type of the system call that was sent to the kernel. The value, 2 , can be matched with its human-readable equivalent in the /usr/include/asm/unistd_64.h file. In this case, 2 is the open system call. Note that the ausyscall utility allows you to convert system call numbers to their human-readable
equivalents. Use the ausyscall --dump command to display a listing of all system calls along with their numbers. For more information, see the ausyscall(8) man page. success=no The success field records whether the system call recorded in that particular event succeeded or failed. In this case, the call did not succeed. exit=-13 The exit field contains a value that specifies the exit code returned by the system call. You can interpret the value to its human-readable equivalent with the following command: # ausearch --interpret --exit -13 Note that the previous example assumes that your Audit log contains an event that failed with exit code -13. a0=7fffd19c5592, a1=0, a2=7fffd19c4b50, a3=a The a0 to a3 fields record the first four arguments, encoded in hexadecimal notation, of the system call in this event. These arguments depend on the system call that
is used; they can be interpreted by the ausearch utility. items=1 The items field contains the number of PATH auxiliary records that follow the syscall record. ppid=2686 The ppid field records the Parent Process ID (PPID). In this case, 2686 was the PPID of the parent process such as bash . pid=3538 The pid field records the Process ID (PID). In this case, 3538 was the PID of the
cat process. auid=1000 The auid field records the Audit user ID, that is the loginuid. This ID is assigned to a user upon login and is inherited by every process even when the user’s identity changes, for example, by switching user accounts with the su - john command. uid=1000 The uid field records the user ID of the user who started the analyzed process. The user ID can be interpreted into user names with the following
command: ausearch -i --uid UID . gid=1000 The gid field records the group ID of the user who started the analyzed process. euid=1000 The euid field records the effective user ID of the user who started the analyzed process. suid=1000 The suid field records the set user ID of the user who started the analyzed process. fsuid=1000 The fsuid field records the file system user ID
of the user who started the analyzed process. egid=1000 The egid field records the effective group ID of the user who started the analyzed process. sgid=1000 The sgid field records the set group ID of the user who started the analyzed process. fsgid=1000 The fsgid field records the file system group ID of the user who started the analyzed process. tty=pts0 The
tty field records the terminal from which the analyzed process was invoked. ses=1 The ses field records the session ID of the session from which the analyzed process was invoked. comm="cat" The comm field records the command-line name of the command that was used to invoke the analyzed process. In this case, the cat command was used to trigger this Audit event. exe="/bin/cat" The
exe field records the path to the executable that was used to invoke the analyzed process. subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 The subj field records the SELinux context with which the analyzed process was labeled at the time of execution. key="sshd_config" The key field records the administrator-defined string associated with the rule that generated this event in the Audit log. Second Record type=CWD
In the second record, the type field value is CWD (current working directory). This type is used to record the working directory from which the process that invoked the system call specified in the first record was executed. The purpose of this record is to record the current process’s location in case a relative path winds up being captured in the associated PATH record. This way the absolute path can be reconstructed. msg=audit(1364481363.243:24287) The
msg field holds the same time stamp and ID value as the value in the first record. The time stamp is using the Unix time format - seconds since 00:00:00 UTC on 1 January 1970. cwd="/home/user_name" The cwd field contains the path to the directory in which the system call was invoked. Third Record type=PATH In the third record, the type field value is PATH . An Audit event contains a
PATH -type record for every path that is passed to the system call as an argument. In this Audit event, only one path (/etc/ssh/sshd_config ) was used as an argument. msg=audit(1364481363.243:24287): The msg field holds the same time stamp and ID value as the value in the first and second record. item=0 The item field indicates which item, of the total number of items referenced in the SYSCALL type record, the current record is. This number is
zero-based; a value of 0 means it is the first item. name="/etc/ssh/sshd_config" The name field records the path of the file or directory that was passed to the system call as an argument. In this case, it was the /etc/ssh/sshd_config file. inode=409248 The inode field contains the inode number associated with the file or directory recorded in this event. The following command displays the file or directory that is associated with the inode number 409248: # find / -inum 409248 -print
/etc/ssh/sshd_config dev=fd:00 The dev field specifies the minor and major ID of the device that contains the file or directory recorded in this event. In this case, the value represents the /dev/fd/0 device. mode=0100600 The mode field records the file or directory permissions, encoded in numerical notation as returned by the stat command in the st_mode field. See the stat(2) man page for more information. In this case,
0100600 can be interpreted as -rw------- , meaning that only the root user has read and write permissions to the /etc/ssh/sshd_config file. ouid=0 The ouid field records the object owner’s user ID. ogid=0 The ogid field records the object owner’s group ID. rdev=00:00 The rdev field contains a recorded device identifier for special files only. In this case, it is not used as the recorded file
is a regular file. obj=system_u:object_r:etc_t:s0 The obj field records the SELinux context with which the recorded file or directory was labeled at the time of execution. nametype=NORMAL The nametype field records the intent of each path record’s operation in the context of a given syscall. cap_fp=none The cap_fp field records data related to the setting of a permitted file system-based capability of the file or directory
object. cap_fi=none The cap_fi field records data related to the setting of an inherited file system-based capability of the file or directory object. cap_fe=0 The cap_fe field records the setting of the effective bit of the file system-based capability of the file or directory object. cap_fver=0 The cap_fver field records the version of the file system-based capability of the file or directory object.
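The relationship between the numeric mode and its symbolic form described above can be cross-checked with the stat utility on any file, for example:

```shell
# Create a temporary file with the same permissions as in the example event
# and print the octal and symbolic representations side by side.
tmpfile=$(mktemp)
chmod 600 "$tmpfile"
stat -c '%a %A' "$tmpfile"    # prints: 600 -rw-------
rm -f "$tmpfile"
```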
Fourth Record
type=PROCTITLE The type field contains the type of the record. In this example, the PROCTITLE value specifies that this record gives the full command line that triggered this Audit event by way of a system call to the kernel.
proctitle=636174002F6574632F7373682F737368645F636F6E666967 The proctitle field records the full command line of the command that was used to invoke the analyzed process. The field is encoded in hexadecimal notation so that the user cannot influence the Audit log parser. The text decodes to the command that triggered this Audit event. When searching Audit records with the ausearch command, use the -i or --interpret option to automatically convert hexadecimal values into their human-readable equivalents. The 636174002F6574632F7373682F737368645F636F6E666967 value is interpreted as cat /etc/ssh/sshd_config.
14.6. Using auditctl for defining and executing Audit rules The Audit system operates on a set of rules that define what is captured in the log files. Audit rules can be set either on the command line using the auditctl utility or in the /etc/audit/rules.d/ directory. The following sections provide examples of:
File-system rules examples
System-call rules examples
Executable-file rules To define a rule that logs all executions of the /bin/id program, use: # auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id Additional resources
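The hex-encoded proctitle field shown in the previous section is normally interpreted with ausearch -i, but the decoding can also be reproduced by hand. A minimal sketch, assuming the xxd utility is available on the system:

```shell
# Decode the hex-encoded proctitle value from the PROCTITLE record.
# NUL bytes separate the command-line arguments, so translate them
# to spaces for display.
echo 636174002F6574632F7373682F737368645F636F6E666967 | xxd -r -p | tr '\0' ' '
```

This prints cat /etc/ssh/sshd_config, matching the interpretation that ausearch -i produces.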
14.7. Defining persistent Audit rules To define Audit rules that are persistent across reboots, you must either directly include them in the /etc/audit/audit.rules file or use the augenrules program, which reads rules located in the /etc/audit/rules.d/ directory. Note that the /etc/audit/audit.rules file is regenerated whenever the auditd service starts. Furthermore, you can use the auditctl command with the -R option to read rules from a specified file, for example: # auditctl -R /usr/share/audit/sample-rules/30-stig.rules 14.8. Using pre-configured rules files In the /usr/share/audit/sample-rules directory, the audit package provides a set of pre-configured rules files according to various certification standards:
30-nispom.rules Audit rule configuration that meets the requirements specified in the Information System Security chapter of the National Industrial Security Program Operating Manual.
30-ospp-v42*.rules Audit rule configuration that meets the requirements defined in the OSPP (Protection Profile for General Purpose Operating Systems) profile version 4.2.
30-pci-dss-v31.rules Audit rule configuration that meets the requirements set by the Payment Card Industry Data Security Standard (PCI DSS) v3.1.
30-stig.rules Audit rule configuration that meets the requirements set by Security Technical Implementation Guides (STIG).
To use these configuration files, copy them to the /etc/audit/rules.d/ directory and use the augenrules --load command, for example: # cd /usr/share/audit/sample-rules/ # cp 10-base-config.rules 30-stig.rules 31-privileged.rules 99-finalize.rules /etc/audit/rules.d/ # augenrules --load You can order Audit rules
using a numbering scheme. See the Additional resources section for more information.
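As a brief sketch of such a persistent rules file, a small file kept under /etc/audit/rules.d/ uses the same syntax as the auditctl command line. The file name and key below are illustrative assumptions, not taken from this guide:

```
# /etc/audit/rules.d/40-sshd-config.rules (hypothetical file name)
## Watch the sshd configuration file for writes and attribute changes
-w /etc/ssh/sshd_config -p wa -k sshd_config_change
```

Running augenrules --load then compiles all such files into the /etc/audit/audit.rules file.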
14.9. Using augenrules to define persistent rules The augenrules script reads rules located in the /etc/audit/rules.d/ directory and compiles them into an audit.rules file.
The pre-configured rules files are not meant to be used all at once. They are pieces of a policy that should be thought out, with the relevant individual files copied to the /etc/audit/rules.d/ directory. Once you have the rules in the /etc/audit/rules.d/ directory, load them by running the augenrules script with the --load directive: # augenrules --load
/sbin/augenrules: No change
No rules
enabled 1
failure 1
pid 742
rate_limit 0
... Additional resources
14.10. Disabling augenrules Use the following steps to disable the augenrules utility. After you disable it, Audit uses rules defined in the /etc/audit/audit.rules file. Procedure
14.11. Setting up Audit to monitor software updates In RHEL 8.6 and later versions, you can use a pre-configured rules file to configure Audit to monitor utilities that install software. By default, rpm already provides Audit SOFTWARE_UPDATE records when it installs or updates a package. In RHEL 8.5 and earlier versions, you can manually add rules to monitor utilities that install software into a .rules file within the /etc/audit/rules.d/ directory. Note that the pre-configured rules files cannot be used on systems with certain architectures. Procedure
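As an illustrative sketch of such a manually added rule (the monitored path and the key name are assumptions for illustration, not taken from this guide), a rule placed in a file under /etc/audit/rules.d/ could record every execution of a package-manager binary:

```
## Hypothetical example: log every execution of the dnf-3 binary
-a always,exit -F perm=x -F path=/usr/bin/dnf-3 -F key=software-installer
```

After adding the rule, load it with augenrules --load and review matching events with ausearch -k software-installer.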
Verification
14.12. Monitoring user login times with Audit To monitor which users logged in at specific times, you do not need to configure Audit in any special way. You can use the ausearch or aureport utilities, which provide different ways of presenting the same information. Procedure To display user login times, use either the ausearch or the aureport utility.
Additional resources
Chapter 15. Blocking and allowing applications using fapolicyd Setting and enforcing a policy that either allows or denies application execution based on a rule set efficiently prevents the execution of unknown and potentially malicious software. 15.1. Introduction to fapolicyd The fapolicyd software framework controls the execution of applications based on a user-defined policy. This is one of the most efficient ways of preventing the execution of untrusted and possibly malicious applications on the system.
The fapolicyd framework introduces the concept of trust. An application is trusted when it is properly installed by the system package manager, and therefore it is registered in the system RPM database. The administrator can define allow and deny execution rules for any application, with the possibility of auditing, based on a path, hash, MIME type, or trust.
Rules in the fapolicyd policy are processed in order, and the first matching rule determines whether execution is allowed or denied.
You can use one of several ways to mark additional files as trusted, for example, by adding entries to an additional source of trust.
By default, fapolicyd trusts only files contained in the RPM database. Additional resources
15.2. Deploying fapolicyd To deploy the fapolicyd framework, install the fapolicyd package, then enable and start the fapolicyd service. Procedure
Verification
15.3. Marking files as trusted using an additional source of trust The fapolicyd framework trusts files contained in the RPM database by default. You can mark additional files as trusted by adding the corresponding entries to an additional source of trust, such as the /etc/fapolicyd/fapolicyd.trust file or the /etc/fapolicyd/trust.d/ directory. Marking files as trusted using fapolicyd.trust or trust.d/ is better than writing custom fapolicyd rules for performance reasons. Prerequisites
Procedure
Changing the content of a trusted file or directory changes its checksum, and therefore fapolicyd no longer considers it trusted. To make the new content trusted again, refresh the file trust database by using the fapolicyd-cli --file update command.
Verification
Additional resources
15.4. Adding custom allow and deny rules for fapolicyd The default set of rules in the fapolicyd package does not affect system functions. For custom scenarios, such as storing binaries and scripts in a non-standard directory or adding applications without the dnf or rpm installers, you must either mark additional files as trusted or add new custom rules. For basic scenarios, prefer Marking files as trusted using an additional source of trust. In more advanced scenarios, such as allowing the execution of a custom binary only for specific user and group identifiers, add new custom rules to the /etc/fapolicyd/rules.d/ directory. The following steps demonstrate adding a new rule to allow a custom binary. Prerequisites
Procedure
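As a sketch of the fapolicyd rule language (the binary path and the rules file name below are illustrative assumptions, not taken from this guide), a custom allow rule placed in a file such as /etc/fapolicyd/rules.d/80-myapp.rules could look like:

```
# Hypothetical rule: allow bash to execute an otherwise untrusted
# binary at a fixed path
allow perm=execute exe=/usr/bin/bash trust=1 : path=/opt/myapp/bin/myapp ftype=application/x-executable trust=0
```

A rule has the form "decision perm= [subject attributes] : [object attributes]"; after changing rules, regenerate the compiled rule set and restart the fapolicyd service.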
Verification
Additional resources
15.5. Enabling fapolicyd integrity checks By default, fapolicyd does not perform integrity checking. You can enable integrity checking based on file sizes or on SHA-256 hashes, which provides better protection against, for example, replacing the content of a trusted file. Prerequisites
Procedure
Verification
15.6. Troubleshooting problems related to fapolicyd The following section provides tips for basic troubleshooting of the fapolicyd application framework:
Installing applications using rpm
Service status
Debug mode
Removing the fapolicyd database
Dumping the fapolicyd database
Application pipe
Additional resources
15.7. Additional resources
Chapter 16. Protecting systems against intrusive USB devicesUSB devices can be loaded with spyware, malware, or trojans, which can steal your data or damage your system. As a Red Hat Enterprise Linux administrator, you can prevent such USB attacks with USBGuard. 16.1. USBGuardWith the USBGuard software framework, you can protect your systems against intrusive USB devices by using basic lists of permitted and forbidden devices based on the USB device authorization feature in the kernel. The USBGuard framework provides the following components:
The system service component with an inter-process communication (IPC) interface for dynamic interaction and policy enforcement
The command-line interface to interact with a running usbguard system service
The rule language for writing USB device authorization policies
The C++ API for interacting with the system service component, implemented in a shared library
The usbguard-daemon system service provides the USBGuard public IPC interface. In Red Hat Enterprise Linux, access to this interface is limited to the root user only by default. Consider setting either the IPCAccessControlFiles option (recommended) or the IPCAllowedUsers and IPCAllowedGroups options to limit access to the IPC interface to a specific user or group. Ensure that you do not leave the Access Control List (ACL) unconfigured, as this exposes the IPC interface to all local users and allows them to manipulate the authorization state of USB devices and modify the USBGuard policy. 16.2. Installing USBGuard Use this procedure to install and initiate the usbguard framework. Procedure
Verification
Additional resources
16.3. Blocking and authorizing a USB device using CLI This procedure outlines how to authorize and block a USB device by using the usbguard command. Prerequisites
Procedure
Additional resources
16.4. Permanently blocking and authorizing a USB device You can permanently block and authorize a USB device by using the usbguard command with the -p option, which creates a permanent device-specific rule in the policy. Prerequisites
Procedure
Verification
Additional resources
16.5. Creating a custom policy for USB devices The following procedure contains steps for creating a rule set for USB devices that reflects the requirements of your scenario. Prerequisites
Procedure
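As a sketch of the USBGuard rule language (the vendor and product IDs below are illustrative assumptions, not taken from this guide), a custom policy in /etc/usbguard/rules.conf could contain:

```
# Allow one specific device by its vendor:product ID (hypothetical IDs)
allow id 046d:c31c
# Block all USB mass-storage devices (interface class 08)
block with-interface equals { 08:*:* }
```

Rules are evaluated in order, so device-specific allow rules should precede broader block rules.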
Verification
Additional resources
16.6. Creating a structured custom policy for USB devices You can organize your custom USBGuard policy in several .conf files within the /etc/usbguard/rules.d/ directory. The usbguard-daemon then combines the main rules.conf file with the .conf files within the directory in alphabetical order. Prerequisites
Procedure
Verification
Additional resources
16.7. Authorizing users and groups to use the USBGuard IPC interfaceUse this procedure to authorize a specific user or a group to use the USBGuard public IPC interface. By default, only the root user can use this interface. Prerequisites
Procedure
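As a sketch of the relevant settings in /etc/usbguard/usbguard-daemon.conf (the user name and group below are illustrative assumptions, not taken from this guide):

```
# Allow root and the hypothetical user 'joesec' to use the IPC interface
IPCAllowedUsers=root joesec
# Additionally allow members of the wheel group
IPCAllowedGroups=wheel
```

Restart the usbguard service after changing the configuration so that the new ACL takes effect.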
Additional resources
16.8. Logging USBGuard authorization events to the Linux Audit log Use the following steps to integrate logging of USBGuard authorization events into the standard Linux Audit log. By default, the usbguard daemon logs events to the /var/log/usbguard/usbguard-audit.log file. Prerequisites
Procedure
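A sketch of the setting involved, in /etc/usbguard/usbguard-daemon.conf:

```
# Send USBGuard audit events to the Linux Audit log
# instead of the default file backend
AuditBackend=LinuxAudit
```

Restart the usbguard service afterwards; authorization events then appear in the Audit log alongside other Audit records.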
Verification
Additional resources
16.9. Additional resources
Legal Notice Copyright © 2022 Red Hat, Inc. The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project. The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community. All other trademarks are the property of their respective owners.