Edition 20.0.1

Abstract
Covers the configuration of the sshd service, as well as basic usage of the ssh, scp, and sftp client utilities. Read this chapter if you need remote access to a machine.
Covers the Network Time Protocol (NTP). Read this chapter if you need to configure the system to synchronize the clock with a remote NTP server, or set up an NTP server on this system.
Covers the NTP daemon, ntpd, for the Network Time Protocol (NTP). Read this chapter if you need to configure the system to synchronize the clock with a remote NTP server, or set up an NTP server on this system, and you prefer not to use the chrony application.
Covers the Precision Time Protocol (PTP). Read this chapter if you need to configure the system to synchronize the system clock with a master PTP clock.
Describes the rsyslog daemon and explains how to locate, view, and monitor log files. Read this chapter to learn how to work with log files.
Covers the cron, at, and batch utilities. Read this chapter to learn how to use these utilities to perform automated tasks.
Explains how to update the kernel with the rpm command instead of yum. Read this chapter if you cannot update a kernel package with the Yum package manager.
Explains how to configure, test, and use the kdump service in Fedora, and provides a brief overview of how to analyze the resulting core dump using the crash debugging utility. Read this chapter to learn how to enable kdump on your system.
Describes the rpm utility. Read this appendix if you need to use rpm instead of yum.
Mono-spaced Bold
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to a virtual terminal.
mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Publican is a DocBook publishing system.
mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[])
       throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object ref = iniCtx.lookup("EchoBean");
      EchoHome home = (EchoHome) ref;
      Echo echo = home.create();
      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
This chapter covers launching Graphical User Interface, or GUI, applications in various environments.
[fedorauser@localhost]$ firefox
File names vs Application names
/usr/bin/gnome-disks.
Output sent to standard error, or STDERR, is sent to the terminal window. This can be especially useful when troubleshooting.
Example 1.1. Viewing errors by launching graphical applications from the command line
[fedorauser@localhost]$ astromenace-wrapper
AstroMenace 1.3.1 121212
Open XML file: /home/fedorauser/.config/astromenace/amconfig.xml
VFS file was opened /usr/share/astromenace/gamedata.vfs
Vendor : OpenAL Community
Renderer : OpenAL Soft
Version : 1.1 ALSOFT 1.15.1
ALut ver : 1.1
Font initialized: DATA/FONT/LiberationMono-Bold.ttf
Current Video Mode: 3200x1080 32bit
Xinerama/TwinView detected.
Screen count: 2
Screen #0: (0, 0) x (1920, 1080)
Screen #1: (1920, 0) x (1280, 1024)
Supported resolutions list:
640x480 16bit
640x480 32bit
640x480 0bit
768x480 16bit
<output truncated>
job control
feature.
[fedorauser@localhost]$ emacs foo.txt &
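The job control feature used above can be sketched as follows; this is a generic illustration (using sleep as a stand-in for any long-running program), not specific to any one application:

```shell
# Start a command as a background job with '&' (assumes bash):
sleep 30 &     # the shell prints the job number and process ID
jobs           # list the shell's current background jobs
kill %1        # terminate background job number 1
```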
Ending a session
An application can be started on one TTY and displayed on another by specifying the DISPLAY variable. This can be useful when running multiple graphical sessions, or for troubleshooting problems with a desktop session.
The DISPLAY variable is always an integer preceded by a colon, and will be :0 in most cases. Check the arguments of the currently running X process to verify the value. The command below shows both the DISPLAY variable and the TTY that X is running on, tty1.
[fedorauser@localhost]$ ps aux | grep /usr/bin/X
root      1498  7.1  1.0 521396 353984 tty1   Ss+  00:04  66:34 /usr/bin/X :0 vt1 -background none -nolisten tcp -auth /var/run/kdm/A:0-22Degc
root     23874  0.0  0.0 109184    900 pts/21 S+   15:35   0:00 grep --color=auto /usr/bin/X
Set the DISPLAY variable when executing the program.
[fedorauser@localhost]$ DISPLAY=:0 gnome-shell --replace &
Because X is running on vt1, pressing Ctrl+Alt+F1 will return to the desktop environment.
hot corner
, or by pressing the Super (Windows) key. The overview presents documents in addition to applications.
Only root is allowed to set the system date and time. To unlock the configuration tool for changes, click the button in the top-right corner of the window, and provide the correct password when prompted.
NTP
protocol.
To change the current date, run the following command as root, where YYYY is a four-digit year, MM is a two-digit month, and DD is a two-digit day of the month:
date +%D -s YYYY-MM-DD
For example:
~]# date +%D -s 2010-06-02
You can verify the current setting by running date without any additional argument.
To change the current time, run the following command as root, where HH stands for an hour, MM is a minute, and SS is a second:
date +%T -s HH:MM:SS
If your system clock is set to use UTC (Coordinated Universal Time), add the -u option:
date +%T -s HH:MM:SS -u
For example:
~]# date +%T -s 23:26:00 -u
You can verify the current setting by running date without any additional argument. You should not use this command to set the time if the system clock is being maintained by chrony, ntpd, or any other similar automated process.
The ntpd daemon can adjust the system clock using the Network Time Protocol (NTP). See Chapter 14, Configuring NTP Using ntpd, for information on configuring ntpd.
root
, and access permissions can be changed by both the root
user and file owner.
/etc/bashrc
file. Traditionally on UNIX systems, the umask
is set to 022
, which allows only the user who created the file or directory to make modifications. Under this scheme, all other users, including members of the creator's group, are not allowed to make any modifications. However, under the UPG scheme, this “group protection” is not necessary since every user has their own private group.
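The effect described above can be sketched with a quick shell experiment; the file names below are arbitrary:

```shell
# With the traditional umask of 022, new files are not group-writable:
umask 022
touch novel022.txt
ls -l novel022.txt     # -rw-r--r--  (mode 644)

# Under the UPG scheme, a umask of 002 can safely grant group write,
# because the owning group is the creator's own private group:
umask 002
touch novel002.txt
ls -l novel002.txt     # -rw-rw-r--  (mode 664)
```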
/etc/passwd
file to /etc/shadow
, which is readable only by the root
user.
/etc/login.defs
file to enforce security policies.
/etc/shadow
file, any commands which create or modify password aging information do not work. The following is a list of utilities and commands that do not work without first enabling shadow passwords:
chage
utility.
gpasswd
utility.
usermod
command with the -e
or -f
option.
useradd
command with the -e
or -f
option.
Only the root user is allowed to configure users and groups. To unlock the configuration tool for all kinds of changes, click the button in the top-right corner of the window, and provide the correct password when prompted.
Password security advice
Administrator
and Standard
(the default option).
/etc/skel/
directory into the new home directory.
system-config-users
at a shell prompt. Note that unless you have superuser privileges, the application will prompt you to authenticate as root
.
Password security advice
/home/username/
. You can choose not to create the home directory by clearing the Create home directory check box, or change this directory by editing the content of the Home Directory text box. Note that when the home directory is created, default configuration files are copied into it from the /etc/skel/
directory.
Table 4.1. Command line utilities for managing users and groups
Utilities | Description |
---|---|
useradd , usermod , userdel | Standard utilities for adding, modifying, and deleting user accounts. |
groupadd , groupmod , groupdel | Standard utilities for adding, modifying, and deleting groups. |
gpasswd | Standard utility for administering the /etc/group configuration file. |
pwck , grpck | Utilities that can be used for verification of the password, group, and associated shadow files. |
pwconv , pwunconv | Utilities that can be used for the conversion of passwords to shadow passwords, or back from shadow passwords to standard passwords. |
To add a new user to the system, run the following command as root:
useradd [options] username
By default, the useradd command creates a locked user account. To unlock the account, run the following command as root to assign a password:
passwd username
Table 4.2. useradd command line options
Option | Description |
---|---|
-c 'comment' | comment can be replaced with any string. This option is generally used to specify the full name of a user. |
-d home_directory | Home directory to be used instead of default /home/username/ . |
-e date | Date for the account to be disabled in the format YYYY-MM-DD. |
-f days | Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled immediately after the password expires. If -1 is specified, the account is not disabled after the password expires. |
-g group_name | Group name or group number for the user's default group. The group must exist prior to being specified here. |
-G group_list | List of additional (other than default) group names or group numbers, separated by commas, of which the user is a member. The groups must exist prior to being specified here. |
-m | Create the home directory if it does not exist. |
-M | Do not create the home directory. |
-N | Do not create a user private group for the user. |
-p password | The password encrypted with crypt . |
-r | Create a system account with a UID less than 1000 and without a home directory. |
-s | User's login shell, which defaults to /bin/bash . |
-u uid | User ID for the user, which must be unique and greater than 999. |
The following steps illustrate what happens if the command useradd juan is issued on a system that has shadow passwords enabled:
A new line for juan is created in /etc/passwd:
juan:x:501:501::/home/juan:/bin/bash
The line begins with the username juan.
There is an x for the password field, indicating that the system is using shadow passwords.
The home directory for juan is set to /home/juan/.
The default shell is set to /bin/bash.
A new line for juan is created in /etc/shadow:
juan:!!:14798:0:99999:7:::
The line begins with the username juan.
Two exclamation marks (!!) appear in the password field of the /etc/shadow file, which locks the account.
Note
If an encrypted password is passed using the -p flag, it is placed in the /etc/shadow file on the new line for the user.
A new line for a group named juan is created in /etc/group:
juan:x:501:
The line created in /etc/group has the following characteristics:
It begins with the group name juan.
An x appears in the password field, indicating that the system is using shadow group passwords.
The GID matches the one listed for user juan in /etc/passwd.
A new line for a group named juan is created in /etc/gshadow:
juan:!::
The line begins with the group name juan.
An exclamation mark (!) appears in the password field of the /etc/gshadow file, which locks the group.
A directory for user juan is created in the /home/ directory:
~]# ls -l /home
total 4
drwx------. 4 juan juan 4096 Mar 3 18:23 juan
This directory is owned by user juan and group juan. It has read, write, and execute privileges only for the user juan. All other permissions are denied.
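As a side note, the drwx------ listing above corresponds to numeric mode 700; a quick sketch (the directory name is arbitrary):

```shell
mkdir -p demo_home
chmod 700 demo_home     # rwx for the owner, nothing for group or others
ls -ld demo_home        # drwx------ ...
stat -c %a demo_home    # 700
```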
The files within the /etc/skel/ directory (which contain default user settings) are copied into the new /home/juan/ directory. The contents of /etc/skel/ may vary depending on installed applications.
~]# ls -la /home/juan
total 28
drwx------. 4 juan juan 4096 Mar 3 18:23 .
drwxr-xr-x. 5 root root 4096 Mar 3 18:23 ..
-rw-r--r--. 1 juan juan 18 Jul 09 08:43 .bash_logout
-rw-r--r--. 1 juan juan 176 Jul 09 08:43 .bash_profile
-rw-r--r--. 1 juan juan 124 Jul 09 08:43 .bashrc
drwxr-xr-x. 4 juan juan 4096 Jul 09 08:43 .mozilla
-rw-r--r--. 1 juan juan 658 Jul 09 08:43 .zshrc
At this point, a locked account called juan exists on the system. To activate it, the administrator must next assign a password to the account using the passwd command and, optionally, set password aging guidelines.
To add a new group to the system, run the following command as root:
groupadd [options] group_name
Table 4.3. groupadd command line options
Option | Description |
---|---|
-f , --force | When used with -g gid and gid already exists, groupadd will choose another unique gid for the group. |
-g gid | Group ID for the group, which must be unique and greater than 999. |
-K , --key key=value | Override /etc/login.defs defaults. |
-o , --non-unique | Allows creating a group with a duplicate (non-unique) GID. |
-p , --password password | Use this encrypted password for the new group. |
-r | Create a system group with a GID less than 1000. |
To configure password expiration for a user from a shell prompt, use the chage command.
Shadow passwords must be enabled to use chage
Shadow passwords must be enabled to use the chage command. For more information, see Section 4.1.2, “Shadow Passwords”.
Run the following command as root:
chage [options] username
When the chage command is followed directly by a username (that is, when no command line options are specified), it displays the current password aging values and allows you to change them interactively.
Table 4.4. chage command line options
Option | Description |
---|---|
-d days | Specifies the number of days since January 1, 1970 the password was changed. |
-E date | Specifies the date on which the account is locked, in the format YYYY-MM-DD. Instead of the date, the number of days since January 1, 1970 can also be used. |
-I days | Specifies the number of inactive days after the password expiration before locking the account. If the value is 0 , the account is not locked after the password expires. |
-l | Lists current account aging settings. |
-m days | Specifies the minimum number of days after which the user must change passwords. If the value is 0 , the password does not expire. |
-M days | Specifies the maximum number of days for which the password is valid. When the number of days specified by this option plus the number of days specified with the -d option is less than the current day, the user must change passwords before using the account. |
-W days | Specifies the number of days before the password expiration date to warn the user. |
To set an initial password, run the following command as root:
passwd username
Alternatively, you can assign a null password instead of an initial password:
passwd -d username
Avoid using null passwords whenever possible
To force immediate password expiration, run the following command as root:
chage -d 0 username
When a user is logged in as root, an unattended login session may pose a significant security risk. To reduce this risk, you can configure the system to automatically log out idle users after a fixed period of time:
Make sure that the screen package is installed. You can do so by running the following command as root:
yum install screen
As root, add the following line at the beginning of the /etc/profile file to make sure the processing of this file cannot be interrupted:
trap "" 1 2 3 15
Add the following lines at the end of the /etc/profile file to start a screen session each time a user logs in to a virtual console or remotely:

SCREENEXEC="screen"
if [ -w $(tty) ]; then
  trap "exec $SCREENEXEC" 1 2 3 15
  echo -n 'Starting session in 10 seconds'
  sleep 10
  exec $SCREENEXEC
fi
The session starts after the ten-second delay introduced by the sleep command.
In addition, add the following lines to the /etc/screenrc configuration file to close the screen session after a given period of inactivity:

idle 120 quit
autodetach off
The session closes after two minutes of inactivity, as specified by the idle directive. To lock the session instead of closing it, use lockscreen:

idle 120 lockscreen
autodetach off
Imagine that a group of people need to work on files in the /opt/myproject/ directory. Some people are trusted to modify the contents of this directory, but not everyone.
As root, create the /opt/myproject/ directory by typing the following at a shell prompt:
mkdir /opt/myproject
Add the myproject group to the system:
groupadd myproject
Associate the contents of the /opt/myproject/ directory with the myproject group:
chown root:myproject /opt/myproject
chmod 2775 /opt/myproject
At this point, all members of the myproject group can create and edit files in the /opt/myproject/ directory without the administrator having to change file permissions every time users write new files. To verify that the permissions have been set correctly, run the following command:
~]# ls -l /opt
total 4
drwxrwsr-x. 3 root myproject 4096 Mar 3 18:31 myproject
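The setgid behavior relied on above can be sketched without root privileges; the directory name below is arbitrary, and the owning group is simply the current user's group rather than myproject:

```shell
mkdir -p demo_project
chmod 2775 demo_project        # the leading 2 is the setgid bit
stat -c %a demo_project        # 2775
touch demo_project/report.txt  # new files inherit the directory's group
ls -ld demo_project            # drwxrwsr-x ... ('s' marks setgid)
```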
/etc/group
file.
/etc/group
file.
/etc/passwd
and /etc/shadow
files.
Secure package management with GPG-signed packages
Yum and superuser privileges
You must have superuser privileges in order to use yum to install, update or remove packages on your system. All examples in this chapter assume that you have already obtained superuser privileges by using either the su or sudo command.
To see which installed packages on your system have updates available, use the following command:
yum check-update
~]# yum check-update
Loaded plugins: langpacks, presto, refresh-packagekit
PackageKit.x86_64 0.6.14-2.fc15 fedora
PackageKit-command-not-found.x86_64 0.6.14-2.fc15 fedora
PackageKit-device-rebind.x86_64 0.6.14-2.fc15 fedora
PackageKit-glib.x86_64 0.6.14-2.fc15 fedora
PackageKit-gstreamer-plugin.x86_64 0.6.14-2.fc15 fedora
PackageKit-gtk-module.x86_64 0.6.14-2.fc15 fedora
PackageKit-gtk3-module.x86_64 0.6.14-2.fc15 fedora
PackageKit-yum.x86_64 0.6.14-2.fc15 fedora
PackageKit-yum-plugin.x86_64 0.6.14-2.fc15 fedora
gdb.x86_64 7.2.90.20110429-36.fc15 fedora
kernel.x86_64 2.6.38.6-26.fc15 fedora
rpm.x86_64 4.9.0-6.fc15 fedora
rpm-libs.x86_64 4.9.0-6.fc15 fedora
rpm-python.x86_64 4.9.0-6.fc15 fedora
yum.noarch 3.2.29-5.fc15 fedora
PackageKit
— the name of the package
x86_64
— the CPU architecture the package was built for
0.6.14
— the version of the updated package to be installed
fedora
— the repository in which the updated package is located
yum
and rpm
packages), as well as their dependencies (such as the kernel-firmware, rpm-libs, and rpm-python packages), all using yum
.
To update a single package, run the following command as root:
yum update package_name
~]# yum update udev
Loaded plugins: langpacks, presto, refresh-packagekit
Updating Red Hat repositories.
INFO:rhsm-app.repolib:repos updated: 0
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package gdb.x86_64 0:7.2.90.20110411-34.fc15 will be updated
---> Package gdb.x86_64 0:7.2.90.20110429-36.fc15 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Updating:
gdb x86_64 7.2.90.20110429-36.fc15 fedora 1.9 M
Transaction Summary
================================================================================
Upgrade 1 Package(s)
Total download size: 1.9 M
Is this ok [y/N]:
Loaded plugins: — yum always informs you which Yum plug-ins are installed and enabled. Here, yum is using the langpacks, presto, and refresh-packagekit plug-ins. See Section 5.4, “Yum Plug-ins” for general information on Yum plug-ins, or Section 5.4.3, “Plug-in Descriptions” for descriptions of specific plug-ins.
gdb.x86_64 — you can download and install a new gdb package.
yum
presents the update information and then prompts you as to whether you want it to perform the update; yum
runs interactively by default. If you already know which transactions yum
plans to perform, you can use the -y
option to automatically answer yes
to any questions yum
may ask (in which case it runs non-interactively). However, you should always examine which changes yum
plans to make to the system so that you can easily troubleshoot any problems that might arise.
yum history
command as described in Section 5.2.6, “Working with Transaction History”.
Updating and installing kernels with Yum
yum
always installs a new kernel in the same sense that RPM installs a new kernel when you use the command rpm -i kernel
. Therefore, you do not need to worry about the distinction between installing and upgrading a kernel package when you use yum
: it will do the right thing, regardless of whether you are using the yum update
or yum install
command.
rpm -i kernel command (which installs a new kernel) instead of rpm -U kernel (which replaces the current kernel). See Section A.2.2, “Installing and Upgrading” for more information on installing and upgrading kernels with RPM.
To update all packages and their dependencies, simply enter yum update (without any arguments):
yum update
yum
command with a set of highly-useful security-centric commands, subcommands and options. See Section 5.4.3, “Plug-in Descriptions” for specific information.
You can search all RPM package names, descriptions and summaries by using the following command:
yum search term…
~]# yum search meld kompare
Loaded plugins: langpacks, presto, refresh-packagekit
============================== N/S Matched: meld ===============================
meld.noarch : Visual diff and merge tool
python-meld3.x86_64 : HTML/XML templating system for Python
============================= N/S Matched: kompare =============================
komparator.x86_64 : Kompare and merge two folders
Name and summary matches only, use "search all" for everything.
The yum search command is useful for searching for packages you do not know the name of, but for which you know a related term.
The yum list and related commands provide information about packages, package groups, and repositories. All of Yum's list commands allow you to filter the results by appending one or more glob expressions as arguments. Glob expressions are normal strings of characters which contain one or more of the wildcard characters * (which expands to match any character multiple times) and ? (which expands to match any one character).
Filtering results with glob expressions
Be careful to escape the glob expressions when passing them as arguments to a yum command, otherwise the Bash shell will interpret these expressions as pathname expansions, and potentially pass all files in the current directory that match the globs to yum. To make sure the glob expressions are passed to yum as intended, either escape the wildcard characters by preceding them with a backslash character, or double-quote or single-quote the entire glob expression.
yum list glob_expression…
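The reason for quoting can be sketched with plain echo; the file name below is arbitrary and merely happens to match the pattern:

```shell
touch abrt-addon-ccpp.txt   # a local file that matches the glob
echo abrt-addon-*           # unquoted: Bash expands it to matching file names
echo "abrt-addon-*"         # double-quoted: the literal pattern is passed on
echo abrt-addon-\*          # escaped: same effect as quoting
```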
Example 5.1. Listing all ABRT addons and plug-ins using glob expressions
~]# yum list abrt-addon\* abrt-plugin\*
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
abrt-addon-ccpp.x86_64 2.0.2-5.fc15 @fedora
abrt-addon-kerneloops.x86_64 2.0.2-5.fc15 @fedora
abrt-addon-python.x86_64 2.0.2-5.fc15 @fedora
abrt-plugin-bugzilla.x86_64 2.0.2-5.fc15 @fedora
abrt-plugin-logger.x86_64 2.0.2-5.fc15 @fedora
Available Packages
abrt-plugin-mailx.x86_64 2.0.2-5.fc15 updates
abrt-plugin-reportuploader.x86_64 2.0.2-5.fc15 updates
abrt-plugin-rhtsupport.x86_64 2.0.2-5.fc15 updates
yum list all — lists all installed and available packages.
Example 5.2. Listing all installed and available packages
~]# yum list all
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
ConsoleKit.x86_64 0.4.4-1.fc15 @fedora
ConsoleKit-libs.x86_64 0.4.4-1.fc15 @fedora
ConsoleKit-x11.x86_64 0.4.4-1.fc15 @fedora
GConf2.x86_64 2.32.3-1.fc15 @fedora
GConf2-gtk.x86_64 2.32.3-1.fc15 @fedora
ModemManager.x86_64 0.4-7.git20110201.fc15 @fedora
NetworkManager.x86_64 1:0.8.998-4.git20110427.fc15 @fedora
NetworkManager-glib.x86_64 1:0.8.998-4.git20110427.fc15 @fedora
NetworkManager-gnome.x86_64 1:0.8.998-4.git20110427.fc15 @fedora
NetworkManager-openconnect.x86_64 0.8.1-9.git20110419.fc15 @fedora
[output truncated]
yum list installed — lists all packages installed on your system.
Example 5.3. Listing installed packages using a double-quoted glob expression
~]# yum list installed "krb?-*"
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
krb5-libs.x86_64 1.9-7.fc15 @fedora
yum list available — lists all available packages in all enabled repositories.
Example 5.4. Listing available packages using a single glob expression with escaped wildcard characters
~]# yum list available gstreamer\*plugin\*
Loaded plugins: langpacks, presto, refresh-packagekit
Available Packages
gstreamer-plugin-crystalhd.x86_64 3.5.1-1.fc14 fedora
gstreamer-plugins-bad-free.x86_64 0.10.22-1.fc15 updates
gstreamer-plugins-bad-free-devel.x86_64 0.10.22-1.fc15 updates
gstreamer-plugins-bad-free-devel-docs.x86_64 0.10.22-1.fc15 updates
gstreamer-plugins-bad-free-extras.x86_64 0.10.22-1.fc15 updates
gstreamer-plugins-base.x86_64 0.10.33-1.fc15 updates
gstreamer-plugins-base-devel.x86_64 0.10.33-1.fc15 updates
gstreamer-plugins-base-devel-docs.noarch 0.10.33-1.fc15 updates
gstreamer-plugins-base-tools.x86_64 0.10.33-1.fc15 updates
gstreamer-plugins-espeak.x86_64 0.3.3-3.fc15 fedora
gstreamer-plugins-fc.x86_64 0.2-2.fc15 fedora
gstreamer-plugins-good.x86_64 0.10.29-1.fc15 updates
gstreamer-plugins-good-devel-docs.noarch 0.10.29-1.fc15 updates
yum grouplist — lists all package groups.
Example 5.5. Listing all package groups
~]# yum grouplist
Loaded plugins: langpacks, presto, refresh-packagekit
Setting up Group Process
Installed Groups:
Administration Tools
Design Suite
Dial-up Networking Support
Fonts
GNOME Desktop Environment
[output truncated]
yum repolist — lists the repository ID, name, and number of packages it provides for each enabled repository.
Example 5.6. Listing enabled repositories
~]# yum repolist
Loaded plugins: langpacks, presto, refresh-packagekit
repo id repo name status
fedora Fedora 15 - i386 19,365
updates Fedora 15 - i386 - Updates 3,848
repolist: 23,213
To display information about one or more packages, use the following command (glob expressions are valid here as well):
yum info package_name…
~]# yum info abrt
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
Name : abrt
Arch : x86_64
Version : 2.0.1
Release : 2.fc15
Size : 806 k
Repo : installed
From repo : fedora
Summary : Automatic bug detection and reporting tool
URL : https://fedorahosted.org/abrt/
License : GPLv2+
Description : abrt is a tool to help users to detect defects in applications and
: to create a bug report with all informations needed by maintainer
: to fix it. It uses plugin system to extend its functionality.
The yum info package_name command is similar to the rpm -q --info package_name command, but provides as additional information the ID of the Yum repository the RPM package is found in (look for the From repo: line in the output).
You can also query the Yum database for alternative and useful information about a package by using the following command:
yumdb info package_name
This command provides additional information about a package, including the reason the package is installed on the system (where user indicates it was installed by the user, and dep means it was brought in as a dependency). For example, to display additional information about the yum package, type:
~]# yumdb info yum
Loaded plugins: langpacks, presto, refresh-packagekit
yum-3.2.29-4.fc15.noarch
checksum_data = 249f21fb43c41381c8c9b0cd98d2ea5fa0aa165e81ed2009cfda74c05af67246
checksum_type = sha256
from_repo = fedora
from_repo_revision = 1304429533
from_repo_timestamp = 1304442346
installed_by = 0
reason = user
releasever = $releasever
For more information on the yumdb command, refer to the yumdb(8) manual page.
To install a single package and all of its non-installed dependencies, enter a command in the following form:
yum install package_name
You can also install multiple packages simultaneously by appending their names as arguments:
yum install package_name package_name…
If you are installing packages on a multilib system, you can specify the architecture of the package by appending it to the package name. For example, to install the sqlite2 package for i586, type:
~]# yum install sqlite2.i586
You can use glob expressions to quickly install multiple similarly-named packages:
~]# yum install audacious-plugins-\*
In addition to package names and glob expressions, you can also provide file names to yum install. If you know the name of the binary you want to install, but not its package name, you can give yum install the path name:
~]# yum install /usr/sbin/named
yum then searches through its package lists, finds the package which provides /usr/sbin/named, if any, and prompts you as to whether you want to install it.
Finding which package owns a file
If you know you want to install the package that contains the named binary, but you do not know in which bin or sbin directory the file is installed, use the yum provides command with a glob expression:
~]# yum provides "*bin/named"
Loaded plugins: langpacks, presto, refresh-packagekit
32:bind-9.8.0-3.P1.fc15.i686 : The Berkeley Internet Name Domain (BIND) DNS
: (Domain Name System) server
Repo : fedora
Matched from:
Filename : /usr/sbin/named
yum provides "*/file_name"
is a common and useful trick to find the packages that contain file_name.
The yum grouplist -v command lists the names of all package groups, and, next to each of them, their groupid in parentheses. The groupid is always the term in the last pair of parentheses, such as kde-desktop in the following example:
~]# yum -v grouplist kde\*
Not loading "blacklist" plugin, as it is disabled
Loading "langpacks" plugin
Loading "presto" plugin
Loading "refresh-packagekit" plugin
Not loading "whiteout" plugin, as it is disabled
Adding en_US to language list
Config time: 0.900
Yum Version: 3.2.29
Setting up Group Process
rpmdb time: 0.002
group time: 0.995
Available Groups:
KDE Software Compilation (kde-desktop)
KDE Software Development (kde-software-development)
Done
You can install a package group by passing its full group name (without the groupid part) to the groupinstall command:
yum groupinstall group_name
You can also install by groupid:
yum groupinstall groupid
You can even pass the groupid (or quoted group name) to the install command if you prepend it with an @-symbol (which tells yum that you want to perform a groupinstall):
yum install @group
For example, the following are alternative but equivalent ways of installing the KDE Desktop group:
~]# yum groupinstall "KDE Desktop"
~]# yum groupinstall kde-desktop
~]# yum install @kde-desktop
To uninstall a particular package, as well as any packages that depend on it, run the following command as root:
yum remove package_name…
~]# yum remove totem rhythmbox sound-juicer
Similar to install, remove can take these arguments:
Removing a package when other packages depend on it
You can remove a package group using syntax congruent with the install syntax:
yum groupremove group
yum remove @group
The following are alternative but equivalent ways of removing the KDE Desktop group:
~]# yum groupremove "KDE Desktop"
~]# yum groupremove kde-desktop
~]# yum remove @kde-desktop
Intelligent package group removal
You can instruct yum to remove only those packages which are not required by any other packages or groups by adding the groupremove_leaf_only=1 directive to the [main] section of the /etc/yum.conf configuration file. For more information on this directive, refer to Section 5.3.1, “Setting [main] Options”.
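As a sketch, the [main] section of /etc/yum.conf with this directive enabled might look as follows; only the groupremove_leaf_only line is the point here, and the other settings are typical defaults shown for context:

```
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
gpgcheck=1
groupremove_leaf_only=1
```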
The yum history command allows users to review information about a timeline of Yum transactions, the dates and times they occurred, the number of packages affected, whether transactions succeeded or were aborted, and if the RPM database was changed between transactions. Additionally, this command can be used to undo or redo certain transactions.
To display a list of twenty most recent transactions, as root, either run yum history with no additional arguments, or type the following at a shell prompt:
yum history list
To display all transactions, add the all keyword:
yum history list all
To display only transactions in a given range, use the command in the following form:
yum history list start_id..end_id
You can also list only transactions regarding a particular package or packages. To do so, use the command with a package name or a glob expression:
yum history list glob_expression…
~]# yum history list 1..5
Loaded plugins: langpacks, presto, refresh-packagekit
ID | Login user | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
5 | Jaromir ... <jhradilek> | 2011-07-29 15:33 | Install | 1
4 | Jaromir ... <jhradilek> | 2011-07-21 15:10 | Install | 1
3 | Jaromir ... <jhradilek> | 2011-07-16 15:27 | I, U | 73
2 | System <unset> | 2011-07-16 15:19 | Update | 1
1 | System <unset> | 2011-07-16 14:38 | Install | 1106
All forms of the yum history list command produce tabular output with each row consisting of the following columns:
ID
— an integer value that identifies a particular transaction.
Login user
— the name of the user whose login session was used to initiate a transaction. This information is typically presented in the Full Name <username>
form. For transactions that were not issued by a user (such as an automatic system update), System <unset>
is used instead.
Date and time
— the date and time when a transaction was issued.
Action(s)
— a list of actions that were performed during a transaction as described in Table 5.1, “Possible values of the Action(s) field”.
Altered
— the number of packages that were affected by a transaction, possibly followed by additional information as described in Table 5.2, “Possible values of the Altered field”.
Table 5.1. Possible values of the Action(s) field
Action | Abbreviation | Description |
---|---|---|
Downgrade | D | At least one package has been downgraded to an older version. |
Erase | E | At least one package has been removed. |
Install | I | At least one new package has been installed. |
Obsoleting | O | At least one package has been marked as obsolete. |
Reinstall | R | At least one package has been reinstalled. |
Update | U | At least one package has been updated to a newer version. |
Table 5.2. Possible values of the Altered field
Symbol | Description |
---|---|
< | Before the transaction finished, the rpmdb database was changed outside Yum. |
> | After the transaction finished, the rpmdb database was changed outside Yum. |
* | The transaction failed to finish. |
# | The transaction finished successfully, but yum returned a non-zero exit code. |
E | The transaction finished successfully, but an error or a warning was displayed. |
P | The transaction finished successfully, but problems already existed in the rpmdb database. |
s | The transaction finished successfully, but the --skip-broken command line option was used and certain packages were skipped. |
To display a summary of transactions, run the following command as root:
yum history summary
To display only transactions in a given range, type:
yum history summary start_id..end_id
Similarly to the yum history list command, you can also display a summary of transactions regarding a certain package or packages by supplying a package name or a glob expression:
yum history summary glob_expression…
~]# yum history summary 1..5
Loaded plugins: langpacks, presto, refresh-packagekit
Login user | Time | Action(s) | Altered
-------------------------------------------------------------------------------
Jaromir ... <jhradilek> | Last day | Install | 1
Jaromir ... <jhradilek> | Last week | Install | 1
Jaromir ... <jhradilek> | Last 2 weeks | I, U | 73
System <unset> | Last 2 weeks | I, U | 1107
All forms of the yum history summary command produce simplified tabular output similar to the output of yum history list.
Both yum history list and yum history summary are oriented towards transactions, and although they allow you to display only transactions related to a given package or packages, they lack important details, such as package versions. To list transactions from the perspective of a package, run the following command as root:
yum history package-list glob_expression…
~]# yum history package-list subscription-manager\*
Loaded plugins: langpacks, presto, refresh-packagekit
ID | Action(s) | Package
-------------------------------------------------------------------------------
3 | Updated | subscription-manager-0.95.11-1.el6.x86_64
3 | Update | 0.95.17-1.el6_1.x86_64
3 | Updated | subscription-manager-firstboot-0.95.11-1.el6.x86_64
3 | Update | 0.95.17-1.el6_1.x86_64
3 | Updated | subscription-manager-gnome-0.95.11-1.el6.x86_64
3 | Update | 0.95.17-1.el6_1.x86_64
1 | Install | subscription-manager-0.95.11-1.el6.x86_64
1 | Install | subscription-manager-firstboot-0.95.11-1.el6.x86_64
1 | Install | subscription-manager-gnome-0.95.11-1.el6.x86_64
To display the summary of a single transaction, as root, use the yum history summary command in the following form:

yum history summary id
To examine a particular transaction or transactions in more detail, run the following command as root:

yum history info id…

The id argument is optional; when you omit it, yum automatically uses the last transaction. Note that when specifying more than one transaction, you can also use a range:

yum history info start_id..end_id
~]# yum history info 4..5
Loaded plugins: langpacks, presto, refresh-packagekit
Transaction ID : 4..5
Begin time : Thu Jul 21 15:10:46 2011
Begin rpmdb : 1107:0c67c32219c199f92ed8da7572b4c6df64eacd3a
End time : 15:33:15 2011 (22 minutes)
End rpmdb : 1109:1171025bd9b6b5f8db30d063598f590f1c1f3242
User : Jaromir Hradilek <jhradilek>
Return-Code : Success
Command Line : install screen
Command Line : install yum-plugin-fs-snapshot
Transaction performed with:
Installed rpm-4.8.0-16.el6.x86_64
Installed yum-3.2.29-17.el6.noarch
Installed yum-metadata-parser-1.1.2-16.el6.x86_64
Packages Altered:
Install screen-4.0.3-16.el6.x86_64
Install yum-plugin-fs-snapshot-1.1.30-6.el6.noarch
To display additional information recorded for a transaction, run the following command as root:

yum history addon-info id
Similarly to yum history info, when no id is provided, yum automatically uses the latest transaction. Another way to refer to the latest transaction is to use the last keyword:

yum history addon-info last

For the fourth transaction in the history, the yum history addon-info command would provide the following output:
~]# yum history addon-info 4
Loaded plugins: langpacks, presto, refresh-packagekit
Transaction ID: 4
Available additional history information:
config-main
config-repos
saved_tx
The yum history addon-info command provides the following types of additional information:

config-main — global Yum options that were in use during the transaction. See Section 5.3.1, “Setting [main] Options” for information on how to change global options.

config-repos — options for individual Yum repositories. See Section 5.3.2, “Setting [repository] Options” for information on how to change options for individual repositories.

saved_tx — the data that can be used by the yum load-transaction command in order to repeat the transaction on another machine (see below).
To display a selected type of additional information, run the following command as root:

yum history addon-info id information
Apart from reviewing the transaction history, the yum history command provides means to revert or repeat a selected transaction. To revert a transaction, type the following at a shell prompt as root:

yum history undo id

To repeat a particular transaction, as root, run the following command:

yum history redo id

Both commands also accept the last keyword to undo or repeat the latest transaction.
Note that both the yum history undo and yum history redo commands merely revert or repeat the steps that were performed during a transaction: if the transaction installed a new package, the yum history undo command will uninstall it, and vice versa. If possible, these commands will also attempt to downgrade all updated packages to their previous versions, but these older packages may no longer be available. If you need to be able to restore the system to the state before an update, consider using the fs-snapshot plug-in described in Section 5.4.3, “Plug-in Descriptions”.
To store the transaction details of a particular transaction to a file, type the following at a shell prompt as root:

yum -q history addon-info id saved_tx > file_name

Once you copy this file to the target system, you can repeat the transaction by using the following command as root:

yum load-transaction file_name

Note, however, that the rpmdb version stored in the file must be identical to the version on the target system. You can verify the rpmdb version by using the yum version nogroups command.
To start a new transaction history, run the following command as root:

yum history new

This will create a new, empty database file in the /var/lib/yum/history/ directory. The old transaction history will be kept, but will not be accessible as long as a newer database file is present in the directory.
The configuration file for yum and related utilities is located at /etc/yum.conf. This file contains one mandatory [main] section, which allows you to set Yum options that have global effect, and may also contain one or more [repository] sections, which allow you to set repository-specific options. However, best practice is to define individual repositories in new or existing .repo files in the /etc/yum.repos.d/ directory. The values you define in individual [repository] sections override values set in the [main] section of the /etc/yum.conf file.
This section shows you how to:

set global Yum options by editing the [main] section of the /etc/yum.conf configuration file;

set options for individual repositories by editing the [repository] sections in /etc/yum.conf and .repo files in the /etc/yum.repos.d/ directory;

use Yum variables in /etc/yum.conf and files in the /etc/yum.repos.d/ directory so that dynamic version and architecture values are handled correctly;

add, enable, and disable Yum repositories on the command line; and

set up your own custom Yum repository.
The /etc/yum.conf configuration file contains exactly one [main] section, and while some of the key-value pairs in this section affect how yum operates, others affect how Yum treats repositories. You can add many additional options under the [main] section heading in /etc/yum.conf.

A sample /etc/yum.conf configuration file can look like this:
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
[comments abridged]
# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d
The following are the most commonly used options in the [main] section:

assumeyes=value — where value is one of:

0 — yum should prompt for confirmation of critical actions it performs. This is the default.

1 — Do not prompt for confirmation of critical yum actions. If assumeyes=1 is set, yum behaves in the same way that the command line option -y does.
cachedir=directory — where directory is an absolute path to the directory where Yum should store its cache and database files. By default, Yum's cache directory is /var/cache/yum/$basearch/$releasever. See Section 5.3.3, “Using Yum Variables” for descriptions of the $basearch and $releasever Yum variables.
debuglevel=value — where value is an integer between 1 and 10. Setting a higher debuglevel value causes yum to display more detailed debugging output. debuglevel=0 disables debugging output, while debuglevel=2 is the default.
exactarch=value — where value is one of:

0 — Do not take into account the exact architecture when updating packages.

1 — Consider the exact architecture when updating packages. With this setting, yum will not install an i686 package to update an i386 package already installed on the system. This is the default.
exclude=package_name [more_package_names] — This option allows you to exclude packages by keyword during installation and updates. Listing multiple packages for exclusion can be accomplished by quoting a space-delimited list of package names. Shell glob expressions using wildcards (for example, * and ?) are allowed.
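As an illustration, a [main] section using exclude with glob expressions might look like the following sketch; the package patterns are hypothetical, not recommendations:

```ini
[main]
# Skip all kernel packages and anything matching *-devel during installs
# and updates. These patterns are only an example.
exclude=kernel* *-devel
```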
gpgcheck=value — where value is one of:

0 — Disable GPG signature-checking on packages in all repositories, including local package installation.

1 — Enable GPG signature-checking on all packages in all repositories, including local package installation. gpgcheck=1 is the default, and thus all packages' signatures are checked.

If this option is set in the [main] section of the /etc/yum.conf file, it sets the GPG-checking rule for all repositories. However, you can also set gpgcheck=value for individual repositories instead; that is, you can enable GPG-checking on one repository while disabling it on another. Setting gpgcheck=value for an individual repository in its corresponding .repo file overrides the default if it is present in /etc/yum.conf.
groupremove_leaf_only=value — where value is one of:

0 — yum should not check the dependencies of each package when removing a package group. With this setting, yum removes all packages in a package group, regardless of whether those packages are required by other packages or groups. groupremove_leaf_only=0 is the default.

1 — yum should check the dependencies of each package when removing a package group, and remove only those packages which are not required by any other package or group.
installonlypkgs=space separated list of packages — Here you can provide a space-separated list of packages which yum can install, but will never update. See the yum.conf(5) manual page for the list of packages which are install-only by default.

If you add the installonlypkgs directive to /etc/yum.conf, you should ensure that you list all of the packages that should be install-only, including any of those listed under the installonlypkgs section of yum.conf(5). In particular, kernel packages should always be listed in installonlypkgs (as they are by default), and installonly_limit should always be set to a value greater than 2 so that a backup kernel is always available in case the default one fails to boot.
installonly_limit=value — where value is an integer representing the maximum number of versions that can be installed simultaneously for any single package listed in the installonlypkgs directive.

The defaults for the installonlypkgs directive include several different kernel packages, so be aware that changing the value of installonly_limit will also affect the maximum number of installed versions of any single kernel package. The default value listed in /etc/yum.conf is installonly_limit=3, and it is not recommended to decrease this value, particularly below 2.
keepcache=value — where value is one of:

0 — Do not retain the cache of headers and packages after a successful installation. This is the default.

1 — Retain the cache after a successful installation.
logfile=file_name — where file_name is an absolute path to the file in which yum should write its logging output. By default, yum logs to /var/log/yum.log.
multilib_policy=value — where value is one of:

best — Install the best-choice architecture for this system. For example, setting multilib_policy=best on an AMD64 system causes yum to install 64-bit versions of all packages.

all — Always install every possible architecture for every package. For example, with multilib_policy set to all on an AMD64 system, yum would install both the i586 and AMD64 versions of a package, if both were available.
obsoletes=value — where value is one of:

0 — Disable yum's obsoletes processing logic when performing updates.

1 — Enable yum's obsoletes processing logic when performing updates. When one package declares in its spec file that it obsoletes another package, the latter package will be replaced by the former package when the former package is installed. Obsoletes are declared, for example, when a package is renamed. obsoletes=1 is the default.
plugins=value — where value is one of:

0 — Disable all Yum plug-ins globally.

Disabling all plug-ins is not advised
Disabling all plug-ins is not advised because certain plug-ins provide important Yum services. Disabling plug-ins globally is provided as a convenience option, and is generally only recommended when diagnosing a potential problem with Yum.

1 — Enable all Yum plug-ins globally. With plugins=1, you can still disable a specific Yum plug-in by setting enabled=0 in that plug-in's configuration file.
reposdir=directory — where directory is an absolute path to the directory where .repo files are located. All .repo files contain repository information (similar to the [repository] sections of /etc/yum.conf). yum collects all repository information from .repo files and the [repository] section of the /etc/yum.conf file to create a master list of repositories to use for transactions. If reposdir is not set, yum uses the default directory /etc/yum.repos.d/.
retries=value — where value is an integer 0 or greater. This value sets the number of times yum should attempt to retrieve a file before returning an error. Setting this to 0 makes yum retry forever. The default value is 10.
For a complete list of available [main] options, refer to the [main] OPTIONS section of the yum.conf(5) manual page.
The [repository] sections, where repository is a unique repository ID such as my_personal_repo (spaces are not permitted), allow you to define individual Yum repositories. The following is a bare minimum example of the form a [repository] section takes:

[repository]
name=repository_name
baseurl=repository_url
Every [repository] section must contain the following directives:

name=repository_name — where repository_name is a human-readable string describing the repository.

baseurl=repository_url — where repository_url is a URL to the directory where the repodata directory of the repository is located:

If the repository is available over HTTP, use: http://path/to/repo

If the repository is available over FTP, use: ftp://path/to/repo

If the repository is local to the machine, use: file:///path/to/local/repo

If a specific online repository requires basic HTTP authentication, you can specify your username and password by prepending them to the URL as username:password@link. For example, if a repository on http://www.example.com/repo/ requires a username of “user” and a password of “password”, then the baseurl link could be specified as http://user:password@www.example.com/repo/.
Usually this URL is an HTTP link, such as:

baseurl=http://path/to/repo/releases/$releasever/server/$basearch/os/

Note that Yum always expands the $releasever, $arch, and $basearch variables in URLs. For more information about Yum variables, refer to Section 5.3.3, “Using Yum Variables”.
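Putting these directives together, a complete .repo file for a single repository might look like the following sketch; the repository ID, name, and URL are illustrative, not real:

```ini
# /etc/yum.repos.d/my_personal_repo.repo — illustrative example only
[my_personal_repo]
name=My personal repository for $basearch packages
baseurl=http://www.example.com/repo/releases/$releasever/server/$basearch/os/
enabled=1
gpgcheck=1
```

The enabled and gpgcheck directives are optional here; their meaning is described below and in Section 5.3.1.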
Another useful [repository] directive is the following:

enabled=value — where value is one of:

0 — Do not include this repository as a package source when performing updates and installs. This is an easy way of quickly turning repositories on and off, which is useful when you desire a single package from a repository that you do not want to enable for updates or installs.

1 — Include this repository as a package source.
Turning repositories on and off can also be performed by passing either the --enablerepo=repo_name or --disablerepo=repo_name option to yum, or through the Add/Remove Software window of the PackageKit utility.
Many more [repository] options exist. For a complete list, refer to the [repository] OPTIONS section of the yum.conf(5) manual page.
You can use and reference the following built-in variables in yum commands and in all Yum configuration files (that is, /etc/yum.conf and all .repo files in the /etc/yum.repos.d/ directory):
$releasever — You can use this variable to reference the release version of Fedora. Yum obtains the value of $releasever from the distroverpkg=value line in the /etc/yum.conf configuration file. If there is no such line in /etc/yum.conf, then yum infers the correct value by deriving the version number from the redhat-release package.
$arch — You can use this variable to refer to the system's CPU architecture as returned when calling Python's os.uname() function. Valid values for $arch include i586, i686, and x86_64.
$basearch — You can use $basearch to reference the base architecture of the system. For example, i686 and i586 machines both have a base architecture of i386, and AMD64 and Intel64 machines have a base architecture of x86_64.
$YUM0-9 — These ten variables are each replaced with the value of any shell environment variable with the same name. If one of these variables is referenced (in /etc/yum.conf, for example) and a shell environment variable with the same name does not exist, then the configuration file variable is not replaced.
To define a custom variable or to override the value of an existing one, create a file with the same name as the variable (without the “$” sign) in the /etc/yum/vars/ directory, and add the desired value on its first line.

For example, to define a new variable called $osname, create a new file with “Fedora” on the first line and save it as /etc/yum/vars/osname:
~]# echo "Fedora" > /etc/yum/vars/osname
Instead of “Fedora”, you can now use the following in the .repo files:

name=$osname $releasever
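The substitution Yum performs can be illustrated with a small shell sketch. This is not how Yum is implemented; it merely mimics the variable replacement locally, using a temporary directory in place of /etc/yum/vars and an assumed release version of 20:

```shell
# Simulate Yum's variable substitution (illustration only).
vars_dir=$(mktemp -d)            # stand-in for /etc/yum/vars
echo "Fedora" > "$vars_dir/osname"

line='name=$osname $releasever'  # a line as it appears in a .repo file
osname=$(head -n1 "$vars_dir/osname")
releasever=20                    # assumed release version for this sketch

# Replace the variables the way Yum would:
echo "$line" | sed -e "s/\$osname/$osname/" -e "s/\$releasever/$releasever/"
```

With the variable file in place, the line expands to name=Fedora 20.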
To display the current values of global Yum options (that is, the options specified in the [main] section of the /etc/yum.conf file), run the yum-config-manager command with no command line options:

yum-config-manager

To list the content of a different configuration section or sections, use the command in the following form:

yum-config-manager section…

You can also use a glob expression to display the configuration of all matching sections:

yum-config-manager glob_expression…
~]$ yum-config-manager main \*
Loaded plugins: langpacks, presto, refresh-packagekit
================================== main ===================================
[main]
alwaysprompt = True
assumeyes = False
bandwidth = 0
bugtracker_url = https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%206&component=yum
cache = 0
[output truncated]
To add, enable, or disable a Yum repository, use the yum-config-manager command.
To define a new repository, you can either add a [repository] section to the /etc/yum.conf file, or to a .repo file in the /etc/yum.repos.d/ directory. All files with the .repo file extension in this directory are read by yum, and best practice is to define your repositories here instead of in /etc/yum.conf.
Be careful when using untrusted software sources
Obtaining and installing software packages from unverified or untrusted sources constitutes a potential security risk, and could lead to security, stability, compatibility, or maintainability issues.

Yum repositories commonly provide their own .repo file. To add such a repository to your system and enable it, run the following command as root:

yum-config-manager --add-repo repository_url
…where repository_url is a link to the .repo file. For example, to add a repository located at http://www.example.com/example.repo, type the following at a shell prompt:
~]# yum-config-manager --add-repo http://www.example.com/example.repo
Loaded plugins: langpacks, presto, refresh-packagekit
adding repo from: http://www.example.com/example.repo
grabbing file http://www.example.com/example.repo to /etc/yum.repos.d/example.repo
example.repo | 413 B 00:00
repo saved to /etc/yum.repos.d/example.repo
To enable a particular repository or repositories, type the following at a shell prompt as root:

yum-config-manager --enable repository…

…where repository is the unique repository ID (use yum repolist all to list available repository IDs). Alternatively, you can use a glob expression to enable all matching repositories:

yum-config-manager --enable glob_expression…
For example, to enable repositories defined in the [example], [example-debuginfo], and [example-source] sections, type:
~]# yum-config-manager --enable example\*
Loaded plugins: langpacks, presto, refresh-packagekit
============================== repo: example ==============================
[example]
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/6Server
baseurl = http://www.example.com/repo/6Server/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/6Server/example
[output truncated]
When successful, the yum-config-manager --enable command displays the current repository configuration.
To disable a Yum repository, run the following command as root:

yum-config-manager --disable repository…

…where repository is the unique repository ID (use yum repolist all to list available repository IDs). Similarly to yum-config-manager --enable, you can use a glob expression to disable all matching repositories at the same time:

yum-config-manager --disable glob_expression…
When successful, the yum-config-manager --disable command displays the current configuration.
To set up a Yum repository, follow these steps:

Install the createrepo package. To do so, type the following at a shell prompt as root:

~]# yum install createrepo

Copy all packages that you want to have in your repository into one directory, such as /mnt/local_repo/.

Change to this directory and run the createrepo --database command on it:

~]# createrepo --database /mnt/local_repo

This creates the necessary metadata for your Yum repository, as well as the sqlite database for speeding up yum operations.
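To have yum actually use the new repository, you can then describe it in a .repo file. The following sketch assumes the repository was created in /mnt/local_repo; the repository ID and name are arbitrary:

```ini
# /etc/yum.repos.d/local.repo — illustrative example for a local repository
[local_repo]
name=Local repository
baseurl=file:///mnt/local_repo
enabled=1
gpgcheck=0
```

Because the packages in this sketch are local and unsigned, gpgcheck is disabled; for any repository whose packages are signed, keeping gpgcheck=1 is the safer default.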
Yum always informs you which plug-ins, if any, are loaded and active whenever you call any yum command. For example:

~]# yum info yum
Loaded plugins: langpacks, presto, refresh-packagekit
[output truncated]

Note that the plug-in names which follow Loaded plugins are the names you can provide to the --disableplugin=plugin_name option.
To enable Yum plug-ins, ensure that a line beginning with plugins= is present in the [main] section of /etc/yum.conf, and that its value is set to 1:

plugins=1

You can disable all plug-ins by changing this line to plugins=0.
Disabling all plug-ins is not advised
Disabling all plug-ins is not advised because certain plug-ins provide important Yum services. Disabling plug-ins globally is provided as a convenience option, and is generally only recommended when diagnosing a potential problem with Yum.
Every installed plug-in has its own configuration file in the /etc/yum/pluginconf.d/ directory. You can set plug-in specific options in these files. For example, here is the refresh-packagekit plug-in's refresh-packagekit.conf configuration file:

[main]
enabled=1

Plug-in configuration files always contain a [main] section (similar to Yum's /etc/yum.conf file) in which there is (or you can place if it is missing) an enabled= option that controls whether the plug-in is enabled when you run yum commands.
If you disable all plug-ins by setting plugins=0 in /etc/yum.conf, then all plug-ins are disabled regardless of whether they are enabled in their individual configuration files.
If you merely want to disable all Yum plug-ins for a single yum command, use the --noplugins option.
If you want to disable one or more Yum plug-ins for a single yum command, add the --disableplugin=plugin_name option to the command. For example, to disable the presto plug-in while updating a system, type:
~]# yum update --disableplugin=presto
The plug-in names you provide to the --disableplugin= option are the same names listed after the Loaded plugins line in the output of any yum command. You can disable multiple plug-ins by separating their names with commas. In addition, you can match multiple plug-in names or shorten long ones by using glob expressions:
~]# yum update --disableplugin=presto,refresh-pack*
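The glob matching applied to plug-in names is ordinary shell-style wildcard matching. As a plain-shell illustration, using the plug-in names from the example output above:

```shell
# Show which of the loaded plug-in names the glob "refresh-pack*" matches.
for plugin in langpacks presto refresh-packagekit; do
  case "$plugin" in
    refresh-pack*) echo "$plugin: matched" ;;
    *)             echo "$plugin: not matched" ;;
  esac
done
```

Only refresh-packagekit matches, so --disableplugin=refresh-pack* would disable just that plug-in.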
Yum plug-ins usually adhere to the yum-plugin-plugin_name package-naming convention, but not always: the package which provides the presto plug-in is named yum-presto, for example. You can install a Yum plug-in in the same way you install other packages. For instance, to install the security plug-in, type the following at a shell prompt:
~]# yum install yum-plugin-security
To use this plug-in, the root file system (that is, /) must be on an LVM (Logical Volume Manager) or Btrfs volume. To use the fs-snapshot plug-in on an LVM volume, take the following steps:
Make sure the volume group with the root file system has enough free extents. To display detailed information about a particular volume group, run the vgdisplay command in the following form as root:

vgdisplay volume_group

The number of free extents is listed on the Free PE / Size line.
If the volume group does not have enough free extents, add a new physical volume. As root, run the pvcreate command in the following form to initialize a physical volume for use with the Logical Volume Manager:

pvcreate device

Then use the vgextend command in the following form as root to add the physical volume to the volume group:

vgextend volume_group physical_volume
Edit the configuration file located in /etc/yum/pluginconf.d/fs-snapshot.conf, and make the following changes to the [lvm] section:

Change the value of the enabled option to 1:

enabled = 1

Remove the hash sign (#) from the beginning of the lvcreate_size_args line, and adjust the number of logical extents to be allocated for a snapshot. For example, to allocate 80 % of the size of the original logical volume, use:

lvcreate_size_args = -l 80%ORIGIN

Refer to Table 5.3, “Supported fs-snapshot.conf directives” for a complete list of available configuration options.
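After these edits, the relevant parts of a hypothetical /etc/yum/pluginconf.d/fs-snapshot.conf could look like this; the 80 % figure is simply the example value used above:

```ini
[main]
enabled = 1

[lvm]
enabled = 1
lvcreate_size_args = -l 80%ORIGIN
```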
Run the desired yum command, and make sure fs-snapshot is included in the list of loaded plug-ins (the Loaded plugins line) before you confirm the changes and proceed with the transaction. The fs-snapshot plug-in displays a line in the following form for each affected logical volume:

fs-snapshot: snapshotting file_system (/dev/volume_group/logical_volume): logical_volume_yum_timestamp
Verify that the system is working as expected. If you decide to keep the changes, remove the snapshot by running the lvremove command as root:

lvremove /dev/volume_group/logical_volume_yum_timestamp
If you decide to revert the changes and restore the file system to a state that was saved in a snapshot, as root, run the command in the following form to merge a snapshot into its original logical volume:

lvconvert --merge /dev/volume_group/logical_volume_yum_timestamp

The lvconvert command will inform you that a restart is required in order for the changes to take effect.
Then restart the system as instructed by typing the following at a shell prompt as root:

reboot
To use the fs-snapshot plug-in on a Btrfs file system, run the desired yum command, and make sure fs-snapshot is included in the list of loaded plug-ins (the Loaded plugins line) before you confirm the changes and proceed with the transaction. The fs-snapshot plug-in displays a line in the following form for each affected file system:

fs-snapshot: snapshotting file_system: file_system/yum_timestamp
Verify that the system is working as expected. If you decide to keep the changes, you can optionally remove an unwanted snapshot by running the following command as root:

btrfs subvolume delete file_system/yum_timestamp
If you decide to revert the changes and restore a file system to a state that was saved in a snapshot, first determine the identifier of the snapshot by running the following command as root:

btrfs subvolume list file_system
Then, as root, configure the system to mount this snapshot by default:

btrfs subvolume set-default id file_system
Finally, restart the system by typing the following at a shell prompt as root:

reboot
Table 5.3. Supported fs-snapshot.conf directives

Section | Directive | Description |
---|---|---|
[main] | enabled=value | Allows you to enable or disable the plug-in. The value must be either 1 (enabled), or 0 (disabled). When installed, the plug-in is enabled by default. |
[main] | exclude=list | Allows you to exclude certain file systems. The value must be a space-separated list of mount points you do not want to snapshot (for example, /srv /mnt/backup). This option is not included in the configuration file by default. |
[lvm] | enabled=value | Allows you to enable or disable the use of the plug-in on LVM volumes. The value must be either 1 (enabled), or 0 (disabled). This option is disabled by default. |
[lvm] | lvcreate_size_args=value | Allows you to specify the size of a logical volume snapshot. The value must be the -l or -L command line option for the lvcreate utility followed by a valid argument (for example, -l 80%ORIGIN). |
This plug-in updates metadata for PackageKit whenever yum is run. The refresh-packagekit plug-in is installed by default.
This plug-in provides support for connecting to RHN Classic. This allows systems registered with RHN Classic to update and install packages from this system.
This plug-in extends yum with a set of highly useful security-related commands, subcommands, and options.
~]# yum check-update --security
Loaded plugins: langpacks, presto, refresh-packagekit, security
Limiting package lists to security relevant ones
updates-testing/updateinfo | 329 kB 00:00
9 package(s) needed for security, out of 270 available
ConsoleKit.x86_64 0.4.5-1.fc15 updates
ConsoleKit-libs.x86_64 0.4.5-1.fc15 updates
ConsoleKit-x11.x86_64 0.4.5-1.fc15 updates
NetworkManager.x86_64 1:0.8.999-2.git20110509.fc15 updates
NetworkManager-glib.x86_64 1:0.8.999-2.git20110509.fc15 updates
[output truncated]
You can then use either yum update --security or yum update-minimal --security to update those packages which are affected by security advisories. Both of these commands update all packages on the system for which a security advisory has been issued. yum update-minimal --security updates them to the latest packages which were released as part of a security advisory, while yum update --security will update all packages affected by a security advisory to the latest version of that package available.
For example, in a situation where kernel-2.6.38.6-22 is the latest kernel released as part of a security advisory and kernel-2.6.38.6-26 is the latest available kernel version overall, yum update-minimal --security will update you to kernel-2.6.38.6-22, and yum update --security will update you to kernel-2.6.38.6-26. Conservative system administrators may want to use update-minimal to reduce the risk incurred by updating packages as much as possible.
For more information on how to manage software packages with yum, see the yum(8) manual page. The Yum Guides section of the Yum wiki contains more documentation.
Table of Contents
httpd
if you are running a web server). However, if you do not need to provide a service, you should turn it off to minimize your exposure to possible bug exploits.
Keep the system secure
Do not use the ntsysv and chkconfig utilities
While it is still possible to use the ntsysv and chkconfig utilities to manage services that have init scripts installed in the /etc/rc.d/init.d/ directory, it is advised that you use the systemctl utility instead.
Enabling the irqbalance service
To ensure optimal performance, it is recommended that the irqbalance service is enabled. In most cases, this service is installed and configured to run during the Fedora 20 installation. To verify that irqbalance is running, type the following at a shell prompt:

systemctl status irqbalance.service
To configure a service to be automatically started at boot time, use the systemctl command in the following form:

systemctl enable service_name.service
Example 6.1. Enabling the httpd service
Imagine you want to run the Apache HTTP Server on your system. Provided that you have the httpd package installed, you can enable the httpd service by typing the following at a shell prompt as root:
~]# systemctl enable httpd.service
To disable starting a service at boot time, use the systemctl command in the following form:

systemctl disable service_name.service
Example 6.2. Disabling the telnet service
In order to secure the system, users are advised to disable insecure connection protocols such as Telnet. You can make sure that the telnet service is disabled by running the following command as root:
~]# systemctl disable telnet.service
Do not use the service utility
While it is still possible to use the service utility to manage services that have init scripts installed in the /etc/rc.d/init.d/ directory, it is advised that you use the systemctl utility instead.
To determine the status of a particular service, use the systemctl command in the following form:

systemctl status service_name.service

This command provides detailed information on the service's status. However, if you merely need to verify that a service is running, you can use the systemctl command in the following form instead:

systemctl is-active service_name.service
Example 6.3. Checking the status of the httpd service
In Example 6.1, “Enabling the httpd service”, we have enabled the httpd service at boot time. Imagine that the system has been restarted and you need to verify that the service is really running. You can do so by typing the following at a shell prompt:
~]$ systemctl is-active httpd.service
active
~]$ systemctl status httpd.service
httpd.service - LSB: start and stop Apache HTTP Server
Loaded: loaded (/etc/rc.d/init.d/httpd)
Active: active (running) since Mon, 23 May 2011 21:38:57 +0200; 27s ago
Process: 2997 ExecStart=/etc/rc.d/init.d/httpd start (code=exited, status=0/SUCCESS)
Main PID: 3002 (httpd)
CGroup: name=systemd:/system/httpd.service
├ 3002 /usr/sbin/httpd
├ 3004 /usr/sbin/httpd
├ 3005 /usr/sbin/httpd
├ 3006 /usr/sbin/httpd
├ 3007 /usr/sbin/httpd
├ 3008 /usr/sbin/httpd
├ 3009 /usr/sbin/httpd
├ 3010 /usr/sbin/httpd
└ 3011 /usr/sbin/httpd
To display a list of all active system services, use the following command:

systemctl list-units --type=service

For each service unit, this command displays the following information:

UNIT — A systemd unit name. In this case, a service name.
LOAD — Information whether the systemd unit was properly loaded.
ACTIVE — A high-level unit activation state.
SUB — A low-level unit activation state.
JOB — A pending job for the unit.
DESCRIPTION — A brief description of the unit.
Example 6.4. Listing all active services
~]$ systemctl list-units --type=service
UNIT LOAD ACTIVE SUB JOB DESCRIPTION
abrt-ccpp.service loaded active exited LSB: Installs coredump handler which saves segfault data
abrt-oops.service loaded active running LSB: Watches system log for oops messages, creates ABRT dump directories for each oops
abrtd.service loaded active running ABRT Automated Bug Reporting Tool
accounts-daemon.service loaded active running Accounts Service
atd.service loaded active running Job spooling tools
[output truncated]
In the example above, the abrtd service is loaded, active, and running, and it does not have any pending jobs.
To run a service, use the systemctl command in the following form:

systemctl start service_name.service
Example 6.5. Running the httpd service
In Example 6.1, “Enabling the httpd service”, we have enabled the httpd service at boot time. You can start the service immediately by typing the following at a shell prompt as root:
~]# systemctl start httpd.service
To stop a service, use the systemctl command in the following form:

systemctl stop service_name.service
Example 6.6. Stopping the telnet service
In Example 6.2, “Disabling the telnet service”, we have disabled the telnet service. You can stop the service immediately by running the following command as root:
~]# systemctl stop telnet.service
To restart a service, use the systemctl command in the following form:

systemctl restart service_name.service
Example 6.7. Restarting the sshd service
In order for any changes in the /etc/ssh/sshd_config configuration file to take effect, it is required that you restart the sshd service. You can do so by typing the following at a shell prompt as root:
:
~]# systemctl restart sshd.service
Important
The Authentication Configuration Tool can also be started by running the system-config-authentication command.
Important
Either the openldap-clients package or the sssd package is used to configure an LDAP server for the user database. Both packages are installed by default.
This value is retrieved from the namingContexts and defaultNamingContext attributes in the LDAP server's configuration entry.
Specifying a secure protocol in the URL, ldaps://, enables the Download CA Certificate button.

Important
Do not select the TLS option if the server URL already uses secure LDAP (ldaps). This option uses Start TLS, which initiates a secure connection over a standard port; if a secure port is specified, then a protocol like SSL must be used instead of Start TLS.

Using a certificate requires either a secure (ldaps://) URL or the TLS option to connect to the LDAP server.
NIS configuration requires the ypbind package. This is required for NIS services, but is not installed by default:

[root@server ~]# yum install ypbind

When the ypbind service is installed, the portmap and ypbind services are started and enabled to start at boot time.
If the NIS server is not specified, the authconfig daemon scans for the NIS server.
Winbind authentication requires the samba-winbind package, which is installed by default.
Specify the name of the Windows domain to use, such as DOMAIN.

authconfig supports four types of security models:
To use this security model, the krb5-server package must be installed and Kerberos must be configured properly. The default option is user mode.
Windows domain users are specified in the form DOMAIN\username, such as EXAMPLE\jsmith.
Note
[root@server ~]# getent passwd domain\\user
DOMAIN\user:*:16777216:16777216:Name Surname:/home/DOMAIN/user:/bin/bash
For more information about the winbindd service, refer to Section 12.1.2, “Samba Daemons and Related Services”.
Kerberos authentication requires the krb5-libs and krb5-workstation packages.
The Admin Servers field specifies the administration server running the kadmind process in the realm.
This option uses the /etc/security/access.conf file to check for local user authorization rules.
Warning
Smart card authentication requires the pam_pkcs11 package.
Important
The authconfig command-line tool updates all of the configuration files and services required for system authentication, according to the settings passed to the script. Along with allowing all of the identity and authentication configuration options that can be set through the UI, the authconfig tool can also be used to create backup and kickstart files.
For a complete list of authconfig options, check the help output and the man page.
Each time the authconfig tool is run, the command must include either the --update or --test option. One of those options is required for the command to run successfully. Using --update writes the configuration changes. --test prints the changes to stdout but does not apply the changes to the configuration.
To use an LDAP identity store, use --enableldap. To use LDAP as the authentication source, use --enableldapauth and then the requisite connection information, like the LDAP server name, base DN for the user suffix, and (optionally) whether to use TLS. The authconfig command also has options to enable or disable RFC 2307bis schema for user entries, which is not possible through the Authentication Configuration UI.
Be sure to use the full LDAP URL, including the protocol (ldap or ldaps) and the port number. Do not use a secure LDAP URL (ldaps) with the --enableldaptls option.
authconfig --enableldap --enableldapauth --ldapserver=ldap://ldap.example.com:389,ldap://ldap2.example.com:389 --ldapbasedn="ou=people,dc=example,dc=com" --enableldaptls --ldaploadcacert=https://ca.server.example.com/caCert.crt --update
Instead of using --ldapauth for LDAP password authentication, it is possible to use Kerberos with the LDAP user store. These options are described in Section 7.1.5.5, “Configuring Kerberos Authentication”.
To use a NIS identity store, use --enablenis. This automatically uses NIS authentication, unless the Kerberos parameters are explicitly set, in which case Kerberos authentication is used (Section 7.1.5.5, “Configuring Kerberos Authentication”). The only parameters are to identify the NIS server and NIS domain; if these are not used, then the authconfig service scans the network for NIS servers.
authconfig --enablenis --nisdomain=EXAMPLE --nisserver=nis.example.com --update
authconfig --enablewinbind --enablewinbindauth --smbsecurity=user|server --enablewinbindoffline --smbservers=ad.example.com --smbworkgroup=EXAMPLE --update
Note
Windows domain users are specified in the form DOMAIN\username, such as EXAMPLE\jsmith.
[root@server ~]# getent passwd domain\\user
DOMAIN\user:*:16777216:16777216:Name Surname:/home/DOMAIN/user:/bin/bash
authconfig --enablewinbind --enablewinbindauth --smbsecurity ads --enablewinbindoffline --smbservers=ad.example.com --smbworkgroup=EXAMPLE --smbrealm EXAMPLE.COM --winbindtemplateshell=/bin/sh --update
These options are described in the authconfig help.
authconfig NIS or LDAP options --enablekrb5 --krb5realm EXAMPLE --krb5kdc kdc.example.com:88,server.example.com:88 --krb5adminserver server.example.com:749 --enablekrb5kdcdns --enablekrb5realmdns --update
authconfig --enablemkhomedir --update
authconfig --passalgo=sha512 --update
These options can be used in conjunction with other authconfig settings, like LDAP user stores.
authconfig --enablefingerprint --update
To enable smart card authentication, use the --enablesmartcard option:
authconfig --enablesmartcard --update
authconfig --enablesmartcard --smartcardaction=0 --update
authconfig --enablerequiresmartcard --update
Warning
Do not use the --enablerequiresmartcard option until you have successfully authenticated to the system using a smart card. Otherwise, users may be unable to log into the system.
The --update option updates all of the configuration files with the configuration changes. There are a couple of alternative options with slightly different behavior:

--kickstart writes the updated configuration to a kickstart file.

--test prints the full configuration, with changes, to stdout but does not edit any configuration files.
The authconfig tool can be used to back up and restore previous configurations. All archives are saved to a unique subdirectory in the /var/lib/authconfig/ directory. For example, the --savebackup option gives the backup directory as 2011-07-01:

authconfig --savebackup=2011-07-01

This backs up all of the authentication configuration files beneath the /var/lib/authconfig/backup-2011-07-01 directory.
--restorebackup
option, giving the name of the manually-saved configuration:
authconfig --restorebackup=2011-07-01
authconfig
automatically makes a backup of the configuration before it applies any changes (with the --update
option). The configuration can be restored from the most recent automatic backup, without having to specify the exact backup, using the --restorelastbackup
option.
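For example, to roll back to the automatic backup created by the most recent --update run:

```
authconfig --restorelastbackup
```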
/home
and the system is configured to create home directories the first time users log in, then these directories are created with the wrong permissions.
/home
directory to the home directory that is created on the local system. For example:
# semanage fcontext -a -e /home /home/locale
oddjob-mkhomedir
package on the system.
pam_oddjob_mkhomedir.so
library, which the Authentication Configuration Tool uses to create home directories. The pam_oddjob_mkhomedir.so
library, unlike the default pam_mkhomedir.so
library, can create SELinux labels.
pam_oddjob_mkhomedir.so
library if it is available. Otherwise, it will default to using pam_mkhomedir.so
.
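In the PAM configuration, this appears as a session entry; a minimal sketch (the umask value shown is an illustrative assumption, not taken from this guide):

```
session     optional      pam_oddjob_mkhomedir.so umask=0077
```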
oddjobd
service is running.
# semanage fcontext -a -e /home /home/locale
# restorecon -R -v /home/locale
.conf
file. The default file is /etc/sssd/sssd.conf
, although alternative files can be passed to SSSD by using the -c
option with the sssd
command:
# sssd -c /etc/sssd/customfile.conf
[domain/LDAP]
. The configuration file uses simple key = value lines to set the configuration. Comment lines begin with either a hash sign (#) or a semicolon (;). For example:
[section]
# Comment line
key1 = val1
key10 = val1,val2
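Because the file uses ordinary INI-style sections and key = value pairs, it can be inspected with standard tooling; a minimal sketch in Python (the section and key names below are illustrative, not taken from a real deployment):

```python
import configparser

# A small sssd.conf-style fragment; both '#' and ';' begin comment lines.
sample = """
# full-line comment
[sssd]
config_file_version = 2
services = nss, pam
; semicolon comments work too

[domain/LDAP]
id_provider = ldap
"""

# configparser treats '#' and ';' as comment prefixes by default,
# matching the comment conventions described above.
parser = configparser.ConfigParser()
parser.read_string(sample)

print(parser["sssd"]["services"])            # nss, pam
print(parser["domain/LDAP"]["id_provider"])  # ldap
```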
Note
service
command or the /etc/init.d/sssd
script can start SSSD. For example:
# service sssd start
authconfig
command:
[root@server ~]# authconfig --enablesssd --enablesssdauth --update
chkconfig
command:
[root@server ~]# chkconfig sssd on
sssd.conf
file. The [sssd]
section also lists the services that are active and should be started when sssd
starts within the services
directive.
sssd_nss
module. This is configured in the [nss]
section of the SSSD configuration.
sssd_pam
module. This is configured in the [pam]
section of the configuration.
monitor
, a special service that monitors and starts or restarts all other SSSD services. Its options are specified in the [sssd]
section of the /etc/sssd/sssd.conf
configuration file.
Note
lookup family order
option in the sssd.conf
configuration file.
sssd_nss
, which instructs the system to use SSSD to retrieve user information. The NSS configuration must include a reference to the SSSD module, and then the SSSD configuration sets how SSSD interacts with NSS.
passwd
)
shadow
)
groups
)
netgroups
)
services
)
nss_sss
module has to be included for the desired service type.
nsswitch.conf
file to use SSSD as a provider.
[root@server ~]# authconfig --enablesssd --update
passwd:     files sss
shadow:     files sss
group:      files sss
netgroup:   files sss
authconfig
. To include that map, open the nsswitch.conf
file and add the sss
module to the services
map:
[root@server ~]# vim /etc/nsswitch.conf
...
services: files sss
...
[nss]
services section.
sssd.conf
file.
[root@server ~]# vim /etc/sssd/sssd.conf
[sssd]
config_file_version = 2
reconnection_retries = 3
sbus_timeout = 30
services = nss, pam
[nss]
section, change any of the NSS parameters. These are listed in Table 7.1, “SSSD [nss] Configuration Parameters”.
[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3
entry_cache_timeout = 300
entry_cache_nowait_percentage = 75
[root@server ~]# service sssd restart
Table 7.1. SSSD [nss] Configuration Parameters
Parameter | Value Format | Description |
---|---|---|
enum_cache_timeout | integer | Specifies how long, in seconds, sssd_nss should cache requests for information about all users (enumerations). |
entry_cache_nowait_percentage | integer | Specifies how long sssd_nss should return cached entries before refreshing the cache. Setting this to zero (0 ) disables the entry cache refresh.
This configures the entry cache to update entries automatically in the background when they are requested after a certain percentage of the refresh interval has elapsed. For example, if the interval is 300 seconds and the cache percentage is 75, then the entry cache begins refreshing when a request comes in at 225 seconds or later (75% of the interval).
The allowed values for this option are 0 to 99, which sets the percentage based on the
entry_cache_timeout value. The default value is 50%.
|
entry_negative_timeout | integer | Specifies how long, in seconds, sssd_nss should cache negative cache hits. A negative cache hit is a query for an invalid database entry, such as a non-existent entry. |
filter_users, filter_groups | string | Tells SSSD to exclude certain users from being fetched from the NSS database. This is particularly useful for system accounts such as root . |
filter_users_in_groups | Boolean | Sets whether users listed in the filter_users list appear in group memberships when performing group lookups. If set to FALSE , group lookups return all users that are members of that group. If not specified, this value defaults to true , which filters the group member lists. |
debug_level | integer, 0 - 9 | Sets a debug logging level. |
Warning
sssd_pam
, which instructs the system to use SSSD to retrieve user information. The PAM configuration must include a reference to the SSSD module, and then the SSSD configuration sets how SSSD interacts with PAM.
authconfig
to enable SSSD for system authentication.
# authconfig --update --enablesssd --enablesssdauth
This automatically updates the PAM configuration to reference all of the SSSD modules:
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_sss.so use_first_pass
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     [default=bad success=ok user_unknown=ignore] pam_sss.so
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    sufficient    pam_sss.so use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     sufficient    pam_sss.so
session     required      pam_unix.so
include
statements, as necessary.
sssd.conf
file.
# vim /etc/sssd/sssd.conf
[sssd]
config_file_version = 2
reconnection_retries = 3
sbus_timeout = 30
services = nss, pam
[pam]
section, change any of the PAM parameters. These are listed in Table 7.2, “SSSD [pam] Configuration Parameters”.
[pam]
reconnection_retries = 3
offline_credentials_expiration = 2
offline_failed_login_attempts = 3
offline_failed_login_delay = 5
[root@server ~]# service sssd restart
Table 7.2. SSSD [pam] Configuration Parameters
Parameter | Value Format | Description |
---|---|---|
offline_credentials_expiration | integer | Sets how long, in days, to allow cached logins if the authentication provider is offline. This value is measured from the last successful online login. If not specified, this defaults to zero (0 ), which is unlimited. |
offline_failed_login_attempts | integer | Sets how many failed login attempts are allowed if the authentication provider is offline. If not specified, this defaults to zero (0 ), which is unlimited. |
offline_failed_login_delay | integer | Sets how long to prevent login attempts if a user hits the failed login attempt limit. If set to zero (0 ), the user cannot authenticate while the provider is offline once he hits the failed attempt limit. Only a successful online authentication can re-enable offline authentication. If not specified, this defaults to five (5 ). |
jsmith
in the ldap.example.com
domain and jsmith
in the ldap.otherexample.com
domain. SSSD allows requests using fully-qualified domain names, so requesting information for jsmith@ldap.example.com
returns the proper user account. Specifying only the username returns the user for whichever domain comes first in the lookup order.
Tip
filter_users
option, which excludes the specified users from being returned in a search.
Table 7.3. Identity Store and Authentication Type Combinations
Identification Provider | Authentication Provider |
---|---|
LDAP | LDAP |
LDAP | Kerberos |
proxy | LDAP |
proxy | Kerberos |
proxy | proxy |
domains = LOCAL,Name

[domain/Name]
id_provider = type
auth_provider = type
provider_specific = value
global = value
Table 7.4. General [domain] Configuration Parameters
Parameter | Value Format | Description |
---|---|---|
id_provider | string | Specifies the data provider identity backend to use for this domain. The supported identity backends are:
|
auth_provider | string | Sets the authentication provider used for the domain. The default value for this option is the value of id_provider . The supported authentication providers are ldap, ipa, krb5 (Kerberos), proxy, and none. |
min_id,max_id | integer | Optional. Specifies the UID and GID range for the domain. If a domain contains entries that are outside that range, they are ignored. The default value for min_id is 1 ; the default value for max_id is 0 , which is unlimited.
Important
The default min_id value is the same for all types of identity provider. If LDAP directories are using UID numbers that start at one, it could cause conflicts with users in the local /etc/passwd file. To avoid these conflicts, set min_id to 1000 or higher if possible.
|
enumerate | Boolean | Optional. Specifies whether to list the users and groups of a domain. Enumeration means that the entire set of available users and groups on the remote source is cached on the local machine. When enumeration is disabled, users and groups are only cached as they are requested.
Warning
When enumeration is enabled, reinitializing a client results in a complete refresh of the entire set of available users and groups from the remote source. Similarly, when SSSD is connected to a new server, the entire set of available users and groups from the remote source is pulled and cached on the local machine. In a domain with a large number of clients connected to a remote source, this refresh process can harm the network performance because of frequent queries from the clients. If the set of available users and groups is large enough, it degrades client performance as well.
The default value for this parameter is false , which disables enumeration. |
cache_credentials | Boolean | Optional. Specifies whether to store user credentials in the local SSSD domain database cache. The default value for this parameter is false . Set this value to true for domains other than the LOCAL domain to enable offline authentication. |
entry_cache_timeout | integer | Optional. Specifies how long, in seconds, SSSD should cache positive cache hits. A positive cache hit is a successful query. |
use_fully_qualified_names | Boolean | Optional. Specifies whether requests to this domain require fully-qualified domain names. If set to true , all requests to this domain must use fully-qualified domain names. It also means that the output from the request displays the fully-qualified name. Restricting requests to fully-qualified user names allows SSSD to differentiate between domains with users with conflicting usernames.
If
use_fully_qualified_names is set to false , it is possible to use the fully-qualified name in the requests, but only the simplified version is displayed in the output.
SSSD can only parse names based on the domain name, not the realm name. The same name can be used for both domains and realms, however.
|
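Putting several of the parameters above together, a hedged sketch of a domain section (all names and values are illustrative, not taken from a real deployment):

```
[domain/EXAMPLE]
id_provider = ldap
auth_provider = krb5
min_id = 1000
enumerate = false
cache_credentials = true
entry_cache_timeout = 600
use_fully_qualified_names = true
```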
Note
Tip
sssd-ldap(5)
.
Table 7.5. LDAP Domain Configuration Parameters
Parameter | Description |
---|---|
ldap_uri | Gives a comma-separated list of the URIs of the LDAP servers to which SSSD will connect. The list is given in order of preference, so the first server in the list is tried first. Listing additional servers provides failover protection. This can be detected from the DNS SRV records if it is not given. |
ldap_search_base | Gives the base DN to use for performing LDAP user operations. |
ldap_tls_reqcert | Specifies how to check for SSL server certificates in a TLS session. There are four options:
|
ldap_tls_cacert | Gives the full path and file name to the file that contains the CA certificates for all of the CAs that SSSD recognizes. SSSD will accept any certificate issued by these CAs.
This uses the OpenLDAP system defaults if it is not given explicitly.
|
ldap_referrals | Sets whether SSSD will use LDAP referrals, meaning forwarding queries from one LDAP database to another. SSSD supports database-level and subtree referrals. For referrals within the same LDAP server, SSSD will adjust the DN of the entry being queried. For referrals that go to different LDAP servers, SSSD does an exact match on the DN. Setting this value to true enables referrals; this is the default. |
ldap_schema | Sets what version of schema to use when searching for user entries. This can be either rfc2307 or rfc2307bis . The default is rfc2307 .
In RFC 2307, group objects use a multi-valued attribute,
memberuid , which lists the names of the users that belong to that group. In RFC 2307bis, group objects use the member attribute, which contains the full distinguished name (DN) of a user or group entry. RFC 2307bis allows nested groups using the member attribute. Because these different schema use different definitions for group membership, using the wrong LDAP schema with SSSD can affect both viewing and managing network resources, even if the appropriate permissions are in place.
For example, with RFC 2307bis, all groups are returned when using nested groups or primary/secondary groups.
$ id
uid=500(myserver) gid=500(myserver) groups=500(myserver),510(myothergroup)
If SSSD is using RFC 2307 schema, only the primary group is returned.
This setting only affects how SSSD determines the group members. It does not change the actual user data.
|
ldap_search_timeout | Sets the time, in seconds, that LDAP searches are allowed to run before they are canceled and cached results are returned. This defaults to five when the enumerate value is false and defaults to 30 when enumerate is true.
When an LDAP search times out, SSSD automatically switches to offline mode.
|
ldap_network_timeout | Sets the time, in seconds, SSSD attempts to poll an LDAP server after a connection attempt fails. The default is six seconds. |
ldap_opt_timeout | Sets the time, in seconds, to wait before aborting synchronous LDAP operations if no response is received from the server. This option also controls the timeout when communicating with the KDC in case of a SASL bind. The default is five seconds. |
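The schema difference matters because group membership is stored differently in the directory; hedged LDIF sketches of the same group under each schema (the entry names are illustrative):

```
# RFC 2307: members are listed by name in the multi-valued memberUid attribute
dn: cn=engineers,ou=groups,dc=example,dc=com
objectClass: posixGroup
memberUid: jsmith
memberUid: bjensen

# RFC 2307bis: members are listed by full DN in the member attribute
dn: cn=engineers,ou=groups,dc=example,dc=com
objectClass: groupOfNames
member: uid=jsmith,ou=people,dc=example,dc=com
member: uid=bjensen,ou=people,dc=example,dc=com
```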
Note
sssd.conf
file. For example:
domains = LOCAL,LDAP1,AD,PROXYNIS
Example 7.1. A Basic LDAP Domain Configuration
ldap_uri
option:
# An LDAP domain
[domain/LDAP]
enumerate = false
cache_credentials = true

id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.example.com:636
ldap_search_base = dc=example,dc=com
ldap_id_use_start_tls
option to use Start TLS and then ldap_tls_cacert
to identify the CA certificate which issued the SSL server certificates.
# An LDAP domain
[domain/LDAP]
enumerate = false
cache_credentials = true

id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
ldap_id_use_start_tls = true
ldap_tls_reqcert = demand
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt
Note
authconfig
, set the Linux client to use Active Directory as its LDAP identity provider. For example:
authconfig --enableldap --enableldapauth --ldapserver=ldap://ad.example.com:389 --enablekrb5 --krb5realm AD-REALM.EXAMPLE.COM --krb5kdc ad-kdc.example.com:88 --krb5adminserver ad-kdc.example.com:749 --update
authconfig
command is described in Section 7.1, “Configuring System Authentication”.
ad.example.com
.
rhel-server
, and click .
rhel-server
object, and select .
C:\> setspn -A host/rhel-server.example.com@AD-REALM.EXAMPLE.COM rhel-server
Registering ServicePrincipalNames for CN=rhel server,CN=Computers,DC=ad,DC=example,DC=com
        host/rhel server.example.com@AD-REALM.EXAMPLE.COM
Updated object

C:\> setspn -L rhel-server
Registered ServicePrincipalNames for CN=rhel server,CN=Computers,DC=ad,DC=example,DC=com:
        host/rhel server.example.com@AD-REALM.EXAMPLE.COM

C:\> ktpass /princ host/rhel-server.example.com@AD-REALM.EXAMPLE.COM /out rhel-server.keytab /crypto all /ptype KRB5_NT_PRINCIPAL -desonly /mapuser AD\rhel-server$ +rndPass
Targeting domain controller: ad.example.com
Using legacy password setting method
Successfully mapped host/rhel server.redhat.com
... 8< ...
/etc/krb5.keytab
.
[root@rhel-server ~]# chown root:root /etc/krb5.keytab
[root@rhel-server ~]# chmod 0600 /etc/krb5.keytab
[root@rhel-server ~]# restorecon /etc/krb5.keytab
[root@rhel-server ~]# kinit -k -t /etc/krb5.keytab host/rhel-server.example.com@AD-REALM.EXAMPLE.COM
/bin/bash
/home/aduser
unixusers
Example 7.2. An Active Directory 2008 Domain
[root@rhel-server ~]# vim /etc/sssd/sssd.conf

[sssd]
config_file_version = 2
domains = ad.example.com
services = nss, pam

[nss]

[pam]

[domain/ad.example.com]
cache_credentials = true
enumerate = false

id_provider = ldap
auth_provider = krb5
chpass_provider = krb5
access_provider = ldap

ldap_sasl_mech = GSSAPI
ldap_sasl_authid = host/rhel-server.example.com@AD-REALM.EXAMPLE.COM

ldap_schema = rfc2307bis
ldap_user_search_base = ou=user accounts,dc=ad,dc=example,dc=com
ldap_user_object_class = user
ldap_user_home_directory = unixHomeDirectory
ldap_user_principal = userPrincipalName
ldap_user_name = sAMAccountName
ldap_group_search_base = ou=groups,dc=ad,dc=example,dc=com
ldap_group_object_class = group

ldap_access_order = expire
ldap_account_expire_policy = ad
ldap_force_upper_case_realm = true
ldap_disable_referrals = true

#krb5_server = server.ad.example.com
krb5_realm = AD-REALM.EXAMPLE.COM
sssd-ldap(5)
.
[root@rhel-server ~]# service sssd restart
ldap_uri
option instead of the server name may cause the TLS/SSL connection to fail. TLS/SSL certificates contain the server name, not the IP address. However, the subject alternative name field in the certificate can be used to include the IP address of the server, which allows a successful secure connection using an IP address.
-signkey
) is the key of the issuer of whatever CA originally issued the certificate. If this is done by an external CA, it requires a separate PEM file; if the certificate is self-signed, then this is the certificate itself. For example:
openssl x509 -x509toreq -in old_cert.pem -out req.pem -signkey key.pem
openssl x509 -x509toreq -in old_cert.pem -out req.pem -signkey old_cert.pem
/etc/pki/tls/openssl.cnf
configuration file to include the server's IP address under the [ v3_ca ]
section:
subjectAltName = IP:10.0.0.10
openssl x509 -req -in req.pem -out new_cert.pem -extfile ./openssl.cnf -extensions v3_ca -signkey old_cert.pem
-extensions
option sets which extensions to use with the certificate. For this, it should be v3_ca to load the appropriate section.
old_cert.pem
file into the new_cert.pem
file to keep all relevant information in one file.
nss-tools
package, note that certutil supports DNS subject alternative names for certificate creation only.
Note
krb5_kpasswd
option to specify where the password changing service is running or if it is running on a non-default port. If the krb5_kpasswd
option is not defined, SSSD tries to use the Kerberos KDC to change the password.
sssd-krb5(5)
man page has more information about Kerberos configuration options.
Example 7.3. Basic Kerberos Authentication
# A domain with identities provided by LDAP and authentication by Kerberos
[domain/KRBDOMAIN]
enumerate = false

id_provider = ldap
chpass_provider = krb5
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
ldap_tls_reqcert = demand
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt

auth_provider = krb5
krb5_server = 192.168.1.1, kerberos.example.com
krb5_realm = EXAMPLE.COM
krb5_kpasswd = kerberos.admin.example.com
krb5_auth_timeout = 15
Table 7.6. Kerberos Authentication Configuration Parameters
Parameter | Description |
---|---|
chpass_provider | Specifies which service to use for password change operations. This is assumed to be the same as the authentication provider. To use Kerberos, set this to krb5. |
krb5_server | Gives a comma-separated list of IP addresses or hostnames of Kerberos servers to which SSSD will connect. The list is given in order of preference, so the first server in the list is tried first. Listing additional servers provides failover protection.
When using service discovery for KDC or kpasswd servers, SSSD first searches for DNS entries that specify UDP as the connection protocol, and then falls back to TCP.
|
krb5_realm | Identifies the Kerberos realm served by the KDC. |
krb5_lifetime | Requests a Kerberos ticket with the specified lifetime in seconds (s), minutes (m), hours (h) or days (d). |
krb5_renewable_lifetime | Requests a renewable Kerberos ticket with a total lifetime that is specified in seconds (s), minutes (m), hours (h) or days (d). |
krb5_renew_interval | Sets the time, in seconds, for SSSD to check if tickets should be renewed. Tickets are renewed automatically once they exceed half their lifetime. If this option is missing or set to zero, then automatic ticket renewal is disabled. |
krb5_store_password_if_offline | Sets whether to store user passwords if the Kerberos authentication provider is offline, and then to use that cache to request tickets when the provider is back online. The default is false , which does not store passwords. |
krb5_kpasswd | Lists alternate Kerberos kadmin servers to use if the change password service is not running on the KDC. |
krb5_ccname_template | Gives the directory to use to store the user's credential cache. This can be templatized, and the following tokens are supported:
krb5_ccname_template = FILE:%d/krb5cc_%U_XXXXXX |
krb5_ccachedir | Specifies the directory to store credential caches. This can be templatized, using the same tokens as krb5_ccname_template , except for %d and %P . If %u , %U , %p , or %h are used, then SSSD creates a private directory for each user; otherwise, it creates a public directory. |
krb5_auth_timeout | Gives the time, in seconds, before an online authentication or change password request is aborted. If possible, the authentication request is continued offline. The default is 15 seconds. |
Table 7.7. Proxy Domain Configuration Parameters
Parameter | Description |
---|---|
proxy_pam_target | Specifies the target to which PAM must proxy as an authentication provider. The PAM target is a file containing PAM stack information in the default PAM directory, /etc/pam.d/ .
This is used to proxy an authentication provider.
Important
Ensure that the proxy PAM stack does not recursively include pam_sss.so .
|
proxy_lib_name | Specifies which existing NSS library to proxy identity requests through.
This is used to proxy an identity provider.
|
Example 7.4. Proxy Identity and Kerberos Authentication
proxy_lib_name
parameter. This library can be anything as long as it is compatible with the given authentication service. For a Kerberos authentication provider, it must be a Kerberos-compatible library, like NIS.
[domain/PROXY_KRB5]
auth_provider = krb5
krb5_server = 192.168.1.1
krb5_realm = EXAMPLE.COM

id_provider = proxy
proxy_lib_name = nis
enumerate = true
cache_credentials = true
Example 7.5. LDAP Identity and Proxy Authentication
proxy_pam_target
parameter. This library must be a PAM module that is compatible with the given identity provider. For example, this uses a PAM fingerprint module with LDAP:
[domain/LDAP_PROXY]
id_provider = ldap
ldap_uri = ldap://example.com
ldap_search_base = dc=example,dc=com

auth_provider = proxy
proxy_pam_target = sssdpamproxy
enumerate = true
cache_credentials = true
sssdpamproxy
, so create a /etc/pam.d/sssdpamproxy
file and load the PAM/LDAP modules:
auth required pam_fprintd.so
account required pam_fprintd.so
password required pam_fprintd.so
session required pam_fprintd.so
Example 7.6. Proxy Identity and Authentication
proxy_pam_target
for the authentication PAM module and proxy_lib_name
for the service, like NIS or LDAP.
[domain/PROXY_PROXY]
auth_provider = proxy
id_provider = proxy
proxy_lib_name = ldap
proxy_pam_target = sssdproxyldap
enumerate = true
cache_credentials = true
/etc/pam.d/sssdproxyldap
file which requires the pam_ldap.so
module:
auth required pam_ldap.so
account required pam_ldap.so
password required pam_ldap.so
session required pam_ldap.so
nss-pam-ldapd
package is installed.
[root@server ~]# yum install nss-pam-ldapd
/etc/nslcd.conf
file, the configuration file for the LDAP name service daemon, to contain the information for the LDAP directory:
uid nslcd
gid ldap
uri ldaps://ldap.example.com:636
base dc=example,dc=com
ssl on
tls_cacertdir /etc/openldap/cacerts
simple_allow_users
and simple_allow_groups
, which grant access explicitly to specific users (either the given users or group members) and deny access to everyone else. It is also possible to create deny lists, which deny access only to the explicitly listed users and implicitly allow everyone else.
[domain/example.com]
access_provider = simple
simple_allow_users = jsmith,bjensen
simple_allow_groups = itgroup
Note
simple
as an access provider.
sssd-simple
man page, but these are rarely used.
ldap_access_filter
) specifies which users are granted access to the specified host. The user filter must be used or all users are denied access.
[domain/example.com]
access_provider = ldap
ldap_access_filter = memberOf=cn=allowedusers,ou=Groups,dc=example,dc=com
Note
authorizedService
attribute.
/etc/sssd/sssd.conf
file. The servers are listed in order of preference. This list can contain any number of servers.
ldap_uri = ldap://ldap0.example.com, ldap://ldap1.example.com, ldap://ldap2.example.com
ldap://ldap0.example.com
, is the primary server. If this server fails, SSSD first attempts to connect to ldap1.example.com
and then ldap2.example.com
.
Important
_service._protocol._domain TTL priority weight port hostname
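For example, an SRV record advertising an LDAP server in standard zone-file syntax (all names and numbers are illustrative; the IN SRV fields are the record class and type):

```
_ldap._tcp.example.com. 86400 IN SRV 10 50 389 ldap0.example.com.
```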
/var/lib/sss/db/
directory.
sss_cache
, invalidates records in the SSSD cache for a user, a domain, or a group. Invalidating the current records forces the cache to retrieve the updated records from the identity provider, so changes can be realized quickly.
sss_cache
can purge the records for that specific account, and leave the rest of the cache intact.
Table 7.8. sss_cache Options
Short Argument | Long Argument | Description |
---|---|---|
-d name | --domain name | Invalidates cache entries for users, groups, and other entries only within the specified domain. |
-G | --groups | Invalidates all group records. If -g is also used, -G takes precedence and -g is ignored. |
-g name | --group name | Invalidates the cache entry for the specified group. |
-N | --netgroups | Invalidates cache entries for all netgroup cache records. If -n is also used, -N takes precedence and -n is ignored. |
-n name | --netgroup name | Invalidates the cache entry for the specified netgroup. |
-U | --users | Invalidates cache entries for all user records. If the -u option is also used, -U takes precedence and -u is ignored. |
-u name | --user name | Invalidates the cache entry for the specified user. |
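For example, to expire the cached record of a single user within a single domain (the domain and user names are illustrative):

```
[root@server ~]# sss_cache -d exampleldap -u jsmith
```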
exampleldap
, the cache file is named cache_exampleldap.ldb
.
known_hosts
file or for the remote user in authorized_keys
. Whenever that remote machine or user attempts to authenticate again, the local system simply checks the known_hosts
or authorized_keys
file first to see if that remote entity is recognized and trusted. If it is, then access is granted.
known_hosts
file is a triplet of the machine name, its IP address, and its public key:
server.example.com,255.255.255.255 ssh-rsa AbcdEfg1234ZYX098776/AbcdEfg1234ZYX098776/AbcdEfg1234ZYX098776=
known_hosts
file can quickly become outdated for a number of different reasons: systems using DHCP cycle through IP addresses, new keys can be re-issued periodically, or virtual machines or services can be brought online and removed. This changes the hostname, IP address, and key triplet.
known_hosts
file to maintain security. (Or system users get in the habit of simply accepting any machine and key presented, which negates the security benefits of key-based security.)
known_hosts
file has not been updated uniformly.
NOTE
known_hosts
file.
~/.ssh/config
) or a system-wide configuration file (/etc/ssh/ssh_config
). The user file takes precedence over the system settings, and the first obtained value for a parameter is used. The formatting and conventions for this file are covered in Chapter 8, OpenSSH.
sss_ssh_knownhostsproxy
, which performs three operations:
.ssh/sss_known_hosts
.
sss_ssh_knownhostsproxy [-d sssd_domain] [-p ssh_port] HOST [PROXY_COMMAND]
Table 7.9. sss_ssh_knownhostsproxy Options
Short Argument | Long Argument | Description |
---|---|---|
HOSTNAME | Gives the hostname of the host to check and connect to. In the OpenSSH configuration file, this can be a token, %h . | |
PROXY_COMMAND | Passes a proxy command to use to connect to the SSH client. This is similar to running ssh -o ProxyCommand= value. This option is used when running sss_ssh_knownhostsproxy from the command line or through another script, but is not necessary in the OpenSSH configuration file. | |
-d sssd_domain | --domain sssd_domain | Only searches for public keys in entries in the specified domain. If not given, SSSD searches for keys in all configured domains. |
-p port | --port port | Uses this port to connect to the SSH client. By default, this is port 22. |
ssh_config
or ~/.ssh/config
file:
ProxyCommand
). This is the sss_ssh_knownhostsproxy
, with the desired arguments and hostname.
known_hosts
file (UserKnownHostsFile
). The SSSD hosts file is .ssh/sss_known_hosts
.
IPA1
SSSD domain and connects over whatever port and host are supplied:
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p -d IPA1 %h
UserKnownHostsFile .ssh/sss_known_hosts
authorized_keys
file for OpenSSH. As with hosts, SSSD can maintain and automatically update a separate cache of user public keys for OpenSSH to refer to. This is kept in the .ssh/sss_authorized_keys
file.
~/.ssh/config
) or a system-wide configuration file (/etc/ssh/ssh_config
). The user file takes precedence over the system settings, and the first obtained value for a parameter is used. The formatting and conventions for this file are covered in Chapter 8, OpenSSH.
sss_ssh_authorizedkeys
, which performs two operations:
.ssh/sss_authorized_keys
, in the standard authorized keys format.
sss_ssh_authorizedkeys [-d sssd_domain] USER
Table 7.10. sss_ssh_authorizedkeys Options
Short Argument | Long Argument | Description |
---|---|---|
USER | Gives the username or account name for which to obtain the public key. In the OpenSSH configuration file, this can be represented by a token, %u . | |
-d sssd_domain | --domain sssd_domain | Only searches for public keys in entries in the specified domain. If not given, SSSD searches for keys in all configured domains. |
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
PubKeyAgent /usr/bin/sss_ssh_authorizedkeys %u
resolv.conf
file. This file is typically only read once, and so any changes made to this file are not automatically applied. This can cause NFS locking to fail on the machine where the NSCD service is running, unless that service is manually restarted.
/etc/nscd.conf
file and rely on the SSSD cache for the passwd
, group
, and netgroup
entries.
/etc/nscd.conf
file:
enable-cache hosts yes
enable-cache passwd no
enable-cache group no
enable-cache netgroup no
debug_level
parameter for each section in the sssd.conf
file for which to produce extra logs. For example:
[domain/LDAP]
enumerate = false
cache_credentials = true
debug_level = 9
Table 7.11. Debug Log Levels
Level | Description |
---|---|
0 | Fatal failures. Anything that prevents SSSD from starting up or causes it to cease running. |
1 | Critical failures. An error that does not kill SSSD, but indicates that at least one major feature will not work properly. |
2 | Serious failures. An error announcing that a particular request or operation has failed. |
3 | Minor failures. These are the errors that would percolate down to cause the operation failure described in level 2. |
4 | Configuration settings. |
5 | Function data. |
6 | Trace messages for operation functions. |
7 | Trace messages for internal control functions. |
8 | Contents of function-internal variables that may be interesting. |
9 | Extremely low-level tracing information. |
NOTE
[sssd]
section. Now, each domain and service must configure its own debug log level.
sssd_update_debug_levels.py
script.
python /usr/lib/python2.6/site-packages/sssd_update_debug_levels.py
/var/log/sssd/
directory. SSSD produces a log file for each domain, as well as an sssd_pam.log
and an sssd_nss.log
file.
/var/log/secure
file logs authentication failures and the reason for the failure.
# sssd -d4
[sssd] [ldb] (3): server_sort:Unable to register control with rootdse!
[sssd] [confdb_get_domains] (0): No domains configured, fatal error!
[sssd] [get_monitor_config] (0): No domains configured.
/etc/sssd/sssd.conf
file and create at least one domain.
[sssd] [get_monitor_config] (0): No services configured!
/etc/sssd/sssd.conf
file and configure at least one service provider.
Important
services
entry in the /etc/sssd/sssd.conf
file. If services are listed in multiple entries, only the last entry is recognized by SSSD.
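For illustration, a consolidated [sssd] section with a single services entry might look like this (the values shown are examples only):

```ini
[sssd]
config_file_version = 2
# List every service once, in a single entry:
services = nss, pam
domains = LDAP
```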
ldap_schema
setting in the [domain/DOMAINNAME]
section of sssd.conf
.
memberuid
attribute, which contains the name of the users that are members. In an RFC2307bis server, group members are stored as the multi-valued member
or uniqueMember
attribute which contains the DN of the user or group that is a member of this group. RFC2307bis allows nested groups to be maintained as well.
ldap_schema
to rfc2307bis
.
/var/lib/sss/db/cache_DOMAINNAME.ldb
.
sssd.conf
:
ldap_group_member = uniqueMember
sssd.conf
is configured to connect over a standard protocol (ldap://
), it attempts to encrypt the communication channel with Start TLS. If sssd.conf
is configured to connect over a secure protocol (ldaps://
), then SSSD uses SSL.
syslog
message is written, indicating that TLS encryption could not be started. The certificate configuration can be tested by checking if the LDAP server is accessible apart from SSSD. For example, this tests an anonymous bind over a TLS connection to test.example.com
:
$ ldapsearch -x -ZZ -h test.example.com -b dc=example,dc=com
ldap_start_tls: Connect error (-11) additional info: TLS error -8179:Unknown code ___f 13
sssd.conf
file that points to the CA certificate on the filesystem.
ldap_tls_cacert = /path/to/cacert
ldap_tls_reqcert
line from the sssd.conf
file.
# semanage port -a -t ldap_port_t -p tcp 1389
# service sssd status
[nss]
section of the /etc/sssd/sssd.conf
file. Especially check the filter_users
and filter_groups
attributes.
/etc/nsswitch.conf
file.
use_fully_qualified_domains
attribute to true
in the /etc/sssd/sssd.conf
file. This differentiates between different users in different domains with the same name.
[root@clientF11 tmp]# passwd user1000
Changing password for user user1000.
New password:
Retype new password:
New Password:
Reenter new Password:
passwd: all authentication tokens updated successfully.
use_authtok
option is correctly configured in your /etc/pam.d/system-auth
file.
SSH
(Secure Shell) is a protocol which facilitates secure communications between two systems using a client/server architecture and allows users to log into server host systems remotely. Unlike other remote communication protocols, such as FTP
or Telnet
, SSH encrypts the login session, making it difficult for intruders to collect unencrypted passwords.
telnet
or rsh
. A related program called scp
replaces older programs designed to copy files between hosts, such as rcp
. Because these older applications do not encrypt passwords transmitted between the client and the server, avoid them whenever possible. Using secure methods to log into remote systems decreases the risks for both the client system and the remote host.
Avoid using SSH version 1
Always verify the integrity of a new SSH server
root
by typing:
su -
ssh
, scp
, and sftp
), and those for the server (the sshd
daemon).
/etc/ssh/
directory. See Table 8.1, “System-wide configuration files” for a description of its content.
Table 8.1. System-wide configuration files
Configuration File | Description |
---|---|
/etc/ssh/moduli | Contains Diffie-Hellman groups used for the Diffie-Hellman key exchange which is critical for constructing a secure transport layer. When keys are exchanged at the beginning of an SSH session, a shared, secret value is created which cannot be determined by either party alone. This value is then used to provide host authentication. |
/etc/ssh/ssh_config | The default SSH client configuration file. Note that it is overridden by ~/.ssh/config if it exists. |
/etc/ssh/sshd_config | The configuration file for the sshd daemon. |
/etc/ssh/ssh_host_dsa_key | The DSA private key used by the sshd daemon. |
/etc/ssh/ssh_host_dsa_key.pub | The DSA public key used by the sshd daemon. |
/etc/ssh/ssh_host_key | The RSA private key used by the sshd daemon for version 1 of the SSH protocol. |
/etc/ssh/ssh_host_key.pub | The RSA public key used by the sshd daemon for version 1 of the SSH protocol. |
/etc/ssh/ssh_host_rsa_key | The RSA private key used by the sshd daemon for version 2 of the SSH protocol. |
/etc/ssh/ssh_host_rsa_key.pub | The RSA public key used by the sshd daemon for version 2 of the SSH protocol. |
~/.ssh/
directory. See Table 8.2, “User-specific configuration files” for a description of its content.
Table 8.2. User-specific configuration files
Configuration File | Description |
---|---|
~/.ssh/authorized_keys | Holds a list of authorized public keys for servers. When the client connects to a server, the server authenticates the client by checking its signed public key stored within this file. |
~/.ssh/id_dsa | Contains the DSA private key of the user. |
~/.ssh/id_dsa.pub | The DSA public key of the user. |
~/.ssh/id_rsa | The RSA private key used by ssh for version 2 of the SSH protocol. |
~/.ssh/id_rsa.pub | The RSA public key used by ssh for version 2 of the SSH protocol. |
~/.ssh/identity | The RSA private key used by ssh for version 1 of the SSH protocol. |
~/.ssh/identity.pub | The RSA public key used by ssh for version 1 of the SSH protocol. |
~/.ssh/known_hosts | Contains DSA host keys of SSH servers accessed by the user. This file is very important for ensuring that the SSH client is connecting to the correct SSH server. |
ssh_config
and sshd_config
man pages for information concerning the various directives available in the SSH configuration files.
Make sure you have relevant packages installed
sshd
daemon, type the following at a shell prompt:
systemctl start sshd.service
sshd
daemon, use the following command:
systemctl stop sshd.service
systemctl enable sshd.service
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
/etc/ssh/
directory (see Table 8.1, “System-wide configuration files” for a complete list), and restore them whenever you reinstall the system.
telnet
, rsh
, rlogin
, and vsftpd
.
systemctl stop telnet.service
systemctl stop rsh.service
systemctl stop rlogin.service
systemctl stop vsftpd.service
systemctl disable telnet.service
systemctl disable rsh.service
systemctl disable rlogin.service
systemctl disable vsftpd.service
/etc/ssh/sshd_config
configuration file in a text editor, and change the PasswordAuthentication
option as follows:
PasswordAuthentication no
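A minimal sketch of the relevant sshd_config fragment (PubkeyAuthentication defaults to yes and is shown here only for clarity):

```text
# /etc/ssh/sshd_config (fragment)
PubkeyAuthentication yes
PasswordAuthentication no
```

Reload the sshd service afterwards, for example with systemctl reload sshd.service, for the change to take effect.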
ssh
, scp
, or sftp
to connect to the server from a client machine, generate an authorization key pair by following the steps below. Note that keys must be generated for each user separately.
Do not generate key pairs as root
root
, only root
will be able to use the keys.
Backup your ~/.ssh/ directory
~/.ssh/
directory. After reinstalling, copy it back to your home directory. This process can be done for all users on your system, including root
.
~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/john/.ssh/id_rsa):
~/.ssh/id_rsa
) for the newly created key.
Your identification has been saved in /home/john/.ssh/id_rsa.
Your public key has been saved in /home/john/.ssh/id_rsa.pub.
The key fingerprint is:
e7:97:c7:e2:0e:f9:0e:fc:c4:d7:cb:e5:31:11:92:14 john@penguin.example.com
The key's randomart image is:
+--[ RSA 2048]----+
| E. |
| . . |
| o . |
| . .|
| S . . |
| + o o ..|
| * * +oo|
| O +..=|
| o* o.|
+-----------------+
~/.ssh/
directory:
~]$ chmod 755 ~/.ssh
~/.ssh/id_rsa.pub
into the ~/.ssh/authorized_keys
on the machine to which you want to connect, appending it to its end if the file already exists.
~/.ssh/authorized_keys
file using the following command:
~]$ chmod 644 ~/.ssh/authorized_keys
~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/john/.ssh/id_dsa):
~/.ssh/id_dsa
) for the newly created key.
Your identification has been saved in /home/john/.ssh/id_dsa.
Your public key has been saved in /home/john/.ssh/id_dsa.pub.
The key fingerprint is:
81:a1:91:a8:9f:e8:c5:66:0d:54:f5:90:cc:bc:cc:27 john@penguin.example.com
The key's randomart image is:
+--[ DSA 1024]----+
| .oo*o. |
| ...o Bo |
| .. . + o. |
|. . E o |
| o..o S |
|. o= . |
|. + |
| . |
| |
+-----------------+
~/.ssh/
directory:
~]$ chmod 755 ~/.ssh
~/.ssh/id_dsa.pub
into the ~/.ssh/authorized_keys
on the machine to which you want to connect, appending it to its end if the file already exists.
~/.ssh/authorized_keys
file using the following command:
~]$ chmod 644 ~/.ssh/authorized_keys
~]$ ssh-keygen -t rsa1
Generating public/private rsa1 key pair.
Enter file in which to save the key (/home/john/.ssh/identity):
~/.ssh/identity
) for the newly created key.
Your identification has been saved in /home/john/.ssh/identity.
Your public key has been saved in /home/john/.ssh/identity.pub.
The key fingerprint is:
cb:f6:d5:cb:6e:5f:2b:28:ac:17:0c:e4:62:e4:6f:59 john@penguin.example.com
The key's randomart image is:
+--[RSA1 2048]----+
| |
| . . |
| o o |
| + o E |
| . o S |
| = + . |
| . = . o . .|
| . = o o..o|
| .o o o=o.|
+-----------------+
~/.ssh/
directory:
~]$ chmod 755 ~/.ssh
~/.ssh/identity.pub
into the ~/.ssh/authorized_keys
on the machine to which you want to connect, appending it to its end if the file already exists.
~/.ssh/authorized_keys
file using the following command:
~]$ chmod 644 ~/.ssh/authorized_keys
Never share your private key
ssh-agent
authentication agent. To save your passphrase for a certain shell prompt, use the following command:
~]$ ssh-add
Enter passphrase for /home/john/.ssh/id_rsa:
Make sure you have relevant packages installed
ssh
allows you to log in to a remote machine and execute commands there. It is a secure replacement for the rlogin
, rsh
, and telnet
programs.
telnet
, to log in to a remote machine named penguin.example.com
, type the following command at a shell prompt:
~]$ ssh penguin.example.com
ssh username@hostname
form. For example, to log in as john
, type:
~]$ ssh john@penguin.example.com
The authenticity of host 'penguin.example.com' can't be established. RSA key fingerprint is 94:68:3a:3a:bc:f3:9a:9b:01:5d:b3:07:38:e2:11:0c. Are you sure you want to continue connecting (yes/no)?
yes
to confirm. You will see a notice that the server has been added to the list of known hosts, and a prompt asking for your password:
Warning: Permanently added 'penguin.example.com' (RSA) to the list of known hosts. john@penguin.example.com's password:
Updating the host key of an SSH server
~/.ssh/known_hosts
file. To do so, open the file in a text editor and remove the line that begins with the remote machine's name. Alternatively, run ssh-keygen -R hostname to remove all keys belonging to hostname from the file. Before doing this, however, contact the system administrator of the SSH server to verify that the server is not compromised.
ssh
program can be used to execute a command on the remote machine without logging in to a shell prompt. The syntax for that is ssh [username@]hostname command
. For example, if you want to execute the whoami
command on penguin.example.com
, type:
~]$ ssh john@penguin.example.com whoami
john@penguin.example.com's password:
john
scp
Utilityscp
can be used to transfer files between machines over a secure, encrypted connection. In its design, it is very similar to rcp
.
scp localfile username@hostname:remotefile
taglist.vim
to a remote machine named penguin.example.com
, type the following at a shell prompt:
~]$ scp taglist.vim john@penguin.example.com:.vim/plugin/taglist.vim
john@penguin.example.com's password:
taglist.vim 100% 144KB 144.5KB/s 00:00
.vim/plugin/
to the same directory on the remote machine penguin.example.com
, type the following command:
~]$ scp .vim/plugin/* john@penguin.example.com:.vim/plugin/
john@penguin.example.com's password:
closetag.vim 100% 13KB 12.6KB/s 00:00
snippetsEmu.vim 100% 33KB 33.1KB/s 00:00
taglist.vim 100% 144KB 144.5KB/s 00:00
scp username@hostname:remotefile localfile
.vimrc
configuration file from the remote machine, type:
~]$ scp john@penguin.example.com:.vimrc .vimrc
john@penguin.example.com's password:
.vimrc 100% 2233 2.2KB/s 00:00
sftp
Utilitysftp
utility can be used to open a secure, interactive FTP session. In its design, it is similar to ftp
except that it uses a secure, encrypted connection.
sftp username@hostname
penguin.example.com
with john
as a username, type:
~]$ sftp john@penguin.example.com
john@penguin.example.com's password:
Connected to penguin.example.com.
sftp>
sftp
utility accepts a set of commands similar to those used by ftp
(see Table 8.3, “A selection of available sftp commands”).
Table 8.3. A selection of available sftp commands
Command | Description |
---|---|
ls [directory] | List the content of a remote directory. If none is supplied, the current working directory is used by default. |
cd directory | Change the remote working directory to directory. |
mkdir directory | Create a remote directory. |
rmdir path | Remove a remote directory. |
put localfile [remotefile] | Transfer localfile to a remote machine. |
get remotefile [localfile] | Transfer remotefile from a remote machine. |
sftp
man page.
ssh -Y username@hostname
penguin.example.com
with john
as a username, type:
~]$ ssh -Y john@penguin.example.com
john@penguin.example.com's password:
~]$ system-config-printer &
TCP/IP
protocols via port forwarding. When using this technique, the SSH server becomes an encrypted conduit to the SSH client.
Using reserved port numbers
localhost
, use a command in the following form:
ssh -L local-port:remote-hostname:remote-port username@hostname
mail.example.com
using POP3
through an encrypted connection, use the following command:
~]$ ssh -L 1100:mail.example.com:110 mail.example.com
1100
on the localhost
to check for new email. Any requests sent to port 1100
on the client system will be directed securely to the mail.example.com
server.
mail.example.com
is not running an SSH server, but another machine on the same network is, SSH can still be used to secure part of the connection. However, a slightly different command is necessary:
~]$ ssh -L 1100:mail.example.com:110 other.example.com
1100
on the client machine are forwarded through the SSH connection on port 22
to the SSH server, other.example.com
. Then, other.example.com
connects to port 110
on mail.example.com
to check for new email. Note that when using this technique, only the connection between the client system and other.example.com
SSH server is secure.
A connection is only as secure as a client system
No
parameter for the AllowTcpForwarding
line in /etc/ssh/sshd_config
and restarting the sshd
service.
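For reference, the change amounts to a single line in /etc/ssh/sshd_config (a sketch):

```text
AllowTcpForwarding no
```

Restart the sshd service afterwards for the setting to take effect.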
man ssh
man scp
man sftp
man sshd
man ssh-keygen
man ssh_config
man sshd_config
Table of Contents
smb.conf
FileHTTP
(Hypertext Transfer Protocol) server, or a web server, is a network service that serves content to a client over the web. This typically means web pages, but any other documents can be served as well.
httpd
, an open source web server developed by the Apache Software Foundation. In Fedora 20 the Apache server has been updated to Apache HTTP Server 2.4. This section describes the basic configuration of the httpd
service, and covers some advanced topics such as adding server modules, setting up virtual hosts, or configuring the secure HTTP server.
httpd
service configuration accordingly. This section reviews some of the newly added features, outlines important changes, and guides you through the update of older configuration files.
apachectl
and systemctl
commands to control the service, in place of the service
command. The following examples are specific to the httpd
service. The command:
service httpd graceful
is replaced by:
apachectl graceful
The command:
service httpd configtest
is replaced by:
apachectl configtest
The
systemd
unit file for httpd
has different behavior from the init script as follows:
systemd
unit file runs the httpd
daemon using a private /tmp
directory, separate from the system /tmp
directory.
/etc/httpd/conf.modules.d
directory. Packages, such as php, which provide additional loadable modules for httpd
will place a file in this directory. Any configuration files in the conf.modules.d directory
are processed before the main body of httpd.conf
. Configuration files in the /etc/httpd/conf.d
directory are now processed after the main body of httpd.conf
.
/etc/httpd/conf.d/autoindex.conf
This configures mod_autoindex directory indexing.
/etc/httpd/conf.d/userdir.conf
This configures access to user directories, for example,
http://example.com/~username/
; such access is disabled by default for security reasons.
/etc/httpd/conf.d/welcome.conf
As in previous releases, this configures the welcome page displayed for
http://localhost/
when no content is present.
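For example, to re-enable per-user directories, /etc/httpd/conf.d/userdir.conf can be edited along these lines (a sketch modeled on the stock file; verify the directive names against your installed copy):

```apache
<IfModule mod_userdir.c>
    # The stock file ships with "UserDir disabled"; to enable, use:
    UserDir public_html
</IfModule>
```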
httpd.conf
is now provided by default. Many common configuration settings, such as Timeout
or KeepAlive
are no longer explicitly configured in the default configuration; hard-coded settings will be used instead, by default. The hard-coded default settings for all configuration directives are specified in the manual. See Section 9.1.8.1, “Installed Documentation” for more information.
httpd
configuration syntax were made that require changes if migrating an existing configuration from httpd 2.2 to httpd 2.4. See the following Apache document for more information on upgrading: http://httpd.apache.org/docs/2.4/upgrading.html
httpd
binaries: the forked model, “prefork”, as /usr/sbin/httpd
, and the thread-based model “worker” as /usr/sbin/httpd.worker
.
httpd
binary is used, and three MPMs are available as loadable modules: worker, prefork (default), and event. The configuration file /etc/httpd/conf.modules.d/00-mpm.conf
can be changed to select which of the three MPM modules is loaded.
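For example, to switch from the default prefork MPM to the event MPM, adjust 00-mpm.conf so that exactly one LoadModule line is active (the module file names follow the usual Fedora layout; verify them against your installed file):

```apache
# /etc/httpd/conf.modules.d/00-mpm.conf (sketch)
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
#LoadModule mpm_worker_module modules/mod_mpm_worker.so
LoadModule mpm_event_module modules/mod_mpm_event.so
```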
/var/cache/mod_proxy
directory is no longer provided; instead, the /var/cache/httpd/
directory is packaged with a proxy
and ssl
subdirectory.
httpd
has been moved from /var/www/
to /usr/share/httpd/
:
/usr/share/httpd/icons/
The
/var/www/icons/
has moved to /usr/share/httpd/icons
. This directory contains a set of icons used with directory indices. Available at http://localhost/icons/
in the default configuration, via /etc/httpd/conf.d/autoindex.conf
.
/usr/share/httpd/manual/
The
/var/www/manual/
has moved to /usr/share/httpd/manual/
. This directory, contained in the httpd-manual package, contains the HTML version of the manual for httpd
. Available at http://localhost/manual/
if the package is installed, via /etc/httpd/conf.d/manual.conf
.
/usr/share/httpd/error/
The
/var/www/error/
has moved to /usr/share/httpd/error/
. This directory holds custom multi-language HTTP error pages. They are not configured by default; an example configuration file is provided at /usr/share/doc/httpd-VERSION/httpd-multilang-errordoc.conf
.
Order
, Deny
and Allow
directives should be adapted to use the new Require
syntax. See the following Apache document for more information: http://httpd.apache.org/docs/2.4/howto/auth.html
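As a minimal illustration of the migration, the common 2.2 idiom and its 2.4 equivalent:

```apache
# httpd 2.2 access control:
Order allow,deny
Allow from all

# httpd 2.4 equivalent:
Require all granted
```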
setuid root
; instead, it has file system capability bits set which allow a more restrictive set of permissions. In conjunction with this change, the suexec binary no longer uses the /var/log/httpd/suexec.log
logfile. Instead, log messages are sent to syslog; by default these will appear in the /var/log/secure
log file.
httpd
module interface, httpd 2.4 is not compatible with third-party binary modules built against httpd 2.2. Such modules will need to be adjusted as necessary for the httpd 2.4 module interface, and then rebuilt. A detailed list of the API changes in version 2.4
is available here: http://httpd.apache.org/docs/2.4/developer/new_api_2_4.html.
/usr/sbin/apxs
to /usr/bin/apxs
.
httpd
modules removed in Fedora 20:
LoadModule
directive for each module that has been renamed.
/etc/httpd/conf.d/ssl.conf
to enable the Secure Sockets Layer (SSL) protocol.
~]# apachectl configtest
Syntax OK
httpd
service, make sure you have the httpd package installed. You can do so by using the following command:
~]# yum install httpd
httpd
service, type the following at a shell prompt as root
:
~]# systemctl start httpd.service
~]# systemctl enable httpd.service
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
Using the secure server
httpd
service, type the following at a shell prompt as root
:
~]# systemctl stop httpd.service
~]# systemctl disable httpd.service
rm '/etc/systemd/system/multi-user.target.wants/httpd.service'
httpd
service:
root
:
~]# systemctl restart httpd.service
httpd
service and immediately starts it again. Use this command after installing or removing a dynamically loaded module such as PHP.
root
, type:
~]# systemctl reload httpd.service
httpd
service to reload its configuration file. Any requests being currently processed will be interrupted, which may cause a client browser to display an error message or render a partial page.
root
:
~]# apachectl graceful
httpd
service to reload its configuration file. Any requests being currently processed will use the old configuration.
httpd
service is running, type the following at a shell prompt:
~]# systemctl is-active httpd.service
active
httpd
service is started, by default, it reads the configuration from locations that are listed in Table 9.1, “The httpd service configuration files”.
Table 9.1. The httpd service configuration files
httpd
service.
~]# apachectl configtest
Syntax OK
/etc/httpd/conf/httpd.conf
configuration file:
<Directory>
<Directory>
directive allows you to apply certain directives to a particular directory only. It takes the following form:
<Directory directory>
  directive
  …
</Directory>
cgi-bin
directories for server-side scripts located outside the directory that is specified by ScriptAlias
. In this case, the ExecCGI
and AddHandler
directives must be supplied, and the permissions on the target directory must be set correctly (that is, 0755
).
Example 9.1. Using the <Directory> directive
<Directory /var/www/html>
  Options Indexes FollowSymLinks
  AllowOverride None
  Order allow,deny
  Allow from all
</Directory>
<IfDefine>
IfDefine
directive allows you to use certain directives only when a particular parameter is supplied on the command line. It takes the following form:
<IfDefine [!]parameter>
directive
…
</IfDefine>
-D
parameter command line option (for example, httpd -DEnableHome
). If the optional exclamation mark (that is, !
) is present, the enclosed directives are used only when the parameter is not specified.
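For instance, assuming a hypothetical EnableHome parameter, directives can be guarded as follows (mod_userdir must be loaded for UserDir to take effect):

```apache
<IfDefine EnableHome>
    UserDir public_html
</IfDefine>
```

Starting the server with httpd -DEnableHome then activates the enclosed directive; without the parameter it is ignored.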
<IfModule>
<IfModule>
directive allows you to use certain directives only when a particular module is loaded. It takes the following form:
<IfModule [!]module>
directive
…
</IfModule>
!
) is present, the enclosed directives are used only when the module is not loaded.
Example 9.3. Using the <IfModule> directive
<IfModule mod_disk_cache.c>
  CacheEnable disk /
  CacheRoot /var/cache/mod_proxy
</IfModule>
<Location>
<Location>
directive allows you to apply certain directives to a particular URL only. It takes the following form:
<Location url>
  directive
  …
</Location>
DocumentRoot
directive (for example, /server-info
), or an external URL such as http://example.com/server-info
.
Example 9.4. Using the <Location> directive
<Location /server-info>
  SetHandler server-info
  Order deny,allow
  Deny from all
  Allow from .example.com
</Location>
<Proxy>
<Proxy>
directive allows you to apply certain directives to the proxy server only. It takes the following form:
<Proxy pattern>
  directive
  …
</Proxy>
http://example.com/*
).
Example 9.5. Using the <Proxy> directive
<Proxy *>
  Order deny,allow
  Deny from all
  Allow from .example.com
</Proxy>
<VirtualHost>
<VirtualHost>
directive allows you to apply certain directives to particular virtual hosts only. It takes the following form:
<VirtualHost address[:port]…>
directive
…
</VirtualHost>
Table 9.2. Available <VirtualHost> options
Option | Description |
---|---|
* | Represents all IP addresses. |
_default_ | Represents unmatched IP addresses. |
Example 9.6. Using the <VirtualHost> directive
<VirtualHost *:80>
  ServerAdmin webmaster@penguin.example.com
  DocumentRoot /www/docs/penguin.example.com
  ServerName penguin.example.com
  ErrorLog logs/penguin.example.com-error_log
  CustomLog logs/penguin.example.com-access_log common
</VirtualHost>
AccessFileName
AccessFileName
directive allows you to specify the file to be used to customize access control information for each directory. It takes the following form:
AccessFileName filename…
.htaccess
.
Files
tag to prevent the files beginning with .ht
from being accessed by web clients. This includes the .htaccess
and .htpasswd
files.
Example 9.7. Using the AccessFileName directive
AccessFileName .htaccess
<Files ~ "^\.ht">
  Order allow,deny
  Deny from all
  Satisfy All
</Files>
Action
Action
directive allows you to specify a CGI script to be executed when a certain media type is requested. It takes the following form:
Action content-type path
text/html
, image/png
, or application/pdf
. The path refers to an existing CGI script, and must be relative to the directory specified by the DocumentRoot
directive (for example, /cgi-bin/process-image.cgi
).
AddDescription
AddDescription
directive allows you to specify a short description to be displayed in server-generated directory listings for a given file. It takes the following form:
AddDescription "description" filename…
"
). The filename can be a full file name, a file extension, or a wildcard expression.
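A brief illustration (the descriptions and file patterns are examples only):

```apache
AddDescription "Compressed tar archive" .tar.gz .tgz
AddDescription "Plain-text documentation" README
```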
AddEncoding
AddEncoding
directive allows you to specify an encoding type for a particular file extension. It takes the following form:
AddEncoding encoding extension…
x-compress
, x-gzip
, etc. The extension is a case sensitive file extension, and is conventionally written with a leading dot (for example, .gz
).
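For example, to mark gzip- and compress-encoded files (a common stock setting):

```apache
AddEncoding x-gzip .gz .tgz
AddEncoding x-compress .Z
```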
AddHandler
AddHandler
directive allows you to map certain file extensions to a selected handler. It takes the following form:
AddHandler handler extension…
.cgi
).
.cgi
extension as CGI scripts regardless of the directory they are in. Additionally, it is also commonly used to process server-parsed HTML and image-map files.
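The typical use described above looks like this:

```apache
AddHandler cgi-script .cgi
```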
AddIcon
AddIcon
directive allows you to specify an icon to be displayed for a particular file in server-generated directory listings. It takes the following form:
AddIcon path pattern…
DocumentRoot
directive (for example, /icons/folder.png
). The pattern can be a file name, a file extension, a wildcard expression, or a special form as described in the following table:
Table 9.3. Available AddIcon options
Option | Description |
---|---|
^^DIRECTORY^^ | Represents a directory. |
^^BLANKICON^^ | Represents a blank line. |
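A short illustration using the special forms from Table 9.3 (the icon paths are examples):

```apache
AddIcon /icons/folder.png ^^DIRECTORY^^
AddIcon /icons/blank.png ^^BLANKICON^^
AddIcon /icons/text.png .txt .html
```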
AddIconByEncoding
AddIconByEncoding
directive allows you to specify an icon to be displayed for a particular encoding type in server-generated directory listings. It takes the following form:
AddIconByEncoding path encoding…
DocumentRoot
directive (for example, /icons/compressed.png
). The encoding has to be a valid MIME encoding such as x-compress
, x-gzip
, etc.
Example 9.13. Using the AddIconByEncoding directive
AddIconByEncoding /icons/compressed.png x-compress x-gzip
AddIconByType
AddIconByType
directive allows you to specify an icon to be displayed for a particular media type in server-generated directory listings. It takes the following form:
AddIconByType path content-type…
DocumentRoot
directive (for example, /icons/text.png
). The content-type has to be either a valid MIME type (for example, text/html
or image/png
), or a wildcard expression such as text/*
, image/*
, etc.
AddLanguage
AddLanguage
directive allows you to associate a file extension with a specific language. It takes the following form:
AddLanguage language extension…
cs
, en
, or fr
. The extension is a case sensitive file extension, and is conventionally written with a leading dot (for example, .cs
).
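For example:

```apache
AddLanguage cs .cs
AddLanguage en .en
AddLanguage fr .fr
```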
AddType
AddType
directive allows you to define or override the media type for a particular file extension. It takes the following form:
AddType content-type extension…
text/html
, image/png
, etc. The extension is a case sensitive file extension, and is conventionally written with a leading dot (for example, .cs
).
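For example, to serve .gz and .tgz files with an explicit media type (an illustrative choice):

```apache
AddType application/x-gzip .gz .tgz
```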
Alias
Alias
directive allows you to refer to files and directories outside the default directory specified by the DocumentRoot
directive. It takes the following form:
Alias url-path real-path
DocumentRoot
directive (for example, /images/
). The real-path is a full path to a file or directory in the local file system.
Directory
tag with additional permissions to access the target directory. By default, the /icons/
alias is created so that the icons from /var/www/icons/
are displayed in server-generated directory listings.
Example 9.17. Using the Alias directive
Alias /icons/ /var/www/icons/
<Directory "/var/www/icons">
  Options Indexes MultiViews FollowSymLinks
  AllowOverride None
  Order allow,deny
  Allow from all
</Directory>
Allow
Allow
directive allows you to specify which clients have permission to access a given directory. It takes the following form:
Allow from client…
all
for all clients.
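A few illustrative forms (the host names and addresses are examples):

```apache
Allow from all
Allow from .example.com
Allow from 192.168.1.0/24
```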
AllowOverride
AllowOverride
directive allows you to specify which directives in a .htaccess
file can override the default configuration. It takes the following form:
AllowOverride type…
Table 9.4. Available AllowOverride options
Option | Description |
---|---|
All | All directives in .htaccess are allowed to override earlier configuration settings. |
None | No directive in .htaccess is allowed to override earlier configuration settings. |
AuthConfig | Allows the use of authorization directives such as AuthName , AuthType , or Require . |
FileInfo | Allows the use of file type, metadata, and mod_rewrite directives such as DefaultType , RequestHeader , or RewriteEngine , as well as the Action directive. |
Indexes | Allows the use of directory indexing directives such as AddDescription , AddIcon , or FancyIndexing . |
Limit | Allows the use of host access directives, that is, Allow , Deny , and Order . |
Options [=option,…] | Allows the use of the Options directive. Additionally, you can provide a comma-separated list of options to customize which options can be set using this directive. |
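For example, to let .htaccess files override only authentication and indexing directives (an illustrative combination):

```apache
AllowOverride AuthConfig Indexes
```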
BrowserMatch
BrowserMatch
directive allows you to modify the server behavior based on the client's web browser type. It takes the following form:
BrowserMatch pattern variable…
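As an illustration, the following line (present in the stock configuration) disables persistent connections for clients whose User-Agent header matches an old browser:

```apache
BrowserMatch "Mozilla/2" nokeepalive
```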
CacheDefaultExpire
CacheDefaultExpire
option allows you to set how long to cache a document that does not have any expiration date or the date of its last modification specified. It takes the following form:
CacheDefaultExpire time
3600
(that is, one hour).
CacheDisable
CacheDisable
directive allows you to disable caching of certain URLs. It takes the following form:
CacheDisable path
DocumentRoot
directive (for example, /files/
).
CacheEnable
CacheEnable
directive allows you to specify a cache type to be used for certain URLs. It takes the following form:
CacheEnable type url
DocumentRoot
directive (for example, /images/
), a protocol (for example, ftp://
), or an external URL such as http://example.com/
.
Table 9.5. Available cache types
Type | Description |
---|---|
mem | The memory-based storage manager. |
disk | The disk-based storage manager. |
fd | The file descriptor cache. |
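For example, to enable the disk-based storage manager for all content served by this host, a line such as the following could be used:

```apache
CacheEnable disk /
```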
CacheLastModifiedFactor
CacheLastModifiedFactor
directive allows you to customize how long to cache a document that does not have any expiration date specified, but that provides information about the date of its last modification. It takes the following form:
CacheLastModifiedFactor number
0.1
(that is, one tenth).
CacheMaxExpire
CacheMaxExpire
directive allows you to specify the maximum amount of time to cache a document. It takes the following form:
CacheMaxExpire time
86400
(that is, one day).
CacheNegotiatedDocs
CacheNegotiatedDocs
directive allows you to enable caching of the documents that were negotiated on the basis of content. It takes the following form:
CacheNegotiatedDocs option
Off
.
Table 9.6. Available CacheNegotiatedDocs options
Option | Description |
---|---|
On | Enables caching the content-negotiated documents. |
Off | Disables caching the content-negotiated documents. |
CacheRoot
CacheRoot
directive allows you to specify the directory to store cache files in. It takes the following form:
CacheRoot directory
/var/cache/mod_proxy/
.
CustomLog
CustomLog
directive allows you to specify the log file name and the log file format. It takes the following form:
CustomLog path format
ServerRoot
directive (that is, /etc/httpd/
by default). The format has to be either an explicit format string, or a format name that was previously defined using the LogFormat
directive.
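For instance, assuming a format named common has already been defined with the LogFormat directive, the access log could be configured as follows:

```apache
CustomLog logs/access_log common
```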
DefaultIcon
DefaultIcon
directive allows you to specify an icon to be displayed for a file in server-generated directory listings when no other icon is associated with it. It takes the following form:
DefaultIcon path
DocumentRoot
directive (for example, /icons/unknown.png
).
DefaultType
DefaultType
directive allows you to specify a media type to be used in case the proper MIME type cannot be determined by the server. It takes the following form:
DefaultType content-type
text/html
, image/png
, application/pdf
, etc.
Deny
Deny
directive allows you to specify which clients are denied access to a given directory. It takes the following form:
Deny from client…
all
for all clients.
DirectoryIndex
DirectoryIndex
directive allows you to specify a document to be served to a client when a directory is requested (that is, when the URL ends with the /
character). It takes the following form:
DirectoryIndex filename…
index.html
, and index.html.var
.
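For example, to serve index.html when it is present and fall back to a hypothetical index.php otherwise:

```apache
DirectoryIndex index.html index.php
```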
DocumentRoot
DocumentRoot
directive allows you to specify the main directory from which the content is served. It takes the following form:
DocumentRoot directory
/var/www/html/
.
ErrorDocument
ErrorDocument
directive allows you to specify a document or a message to be displayed as a response to a particular error. It takes the following form:
ErrorDocument error-code action
403
(Forbidden), 404
(Not Found), or 500
(Internal Server Error). The action can be either a URL (both local and external), or a message string enclosed in double quotes (that is, "
).
Example 9.34. Using the ErrorDocument directive
ErrorDocument 403 "Access Denied" ErrorDocument 404 /404-not_found.html
ErrorLog
ErrorLog
directive allows you to specify a file to which the server errors are logged. It takes the following form:
ErrorLog path
ServerRoot
directive (that is, /etc/httpd/
by default). The default option is logs/error_log.
ExtendedStatus
ExtendedStatus
directive allows you to enable detailed server status information. It takes the following form:
ExtendedStatus option
Off
.
Table 9.7. Available ExtendedStatus options
Option | Description |
---|---|
On | Enables generating the detailed server status. |
Off | Disables generating the detailed server status. |
Group
Group
directive allows you to specify the group under which the httpd
service will run. It takes the following form:
Group group
apache
.
Group
is no longer supported inside <VirtualHost>
, and has been replaced by the SuexecUserGroup
directive.
HeaderName
HeaderName
directive allows you to specify a file to be prepended to the beginning of the server-generated directory listing. It takes the following form:
HeaderName filename
HEADER.html
.
HostnameLookups
HostnameLookups
directive allows you to enable automatic resolving of IP addresses. It takes the following form:
HostnameLookups option
Off
.
Table 9.8. Available HostnameLookups options
Option | Description |
---|---|
On | Enables resolving the IP address for each connection so that the hostname can be logged. However, this also adds a significant processing overhead. |
Double | Enables performing the double-reverse DNS lookup. In comparison to the above option, this adds even more processing overhead. |
Off | Disables resolving the IP address for each connection. |
Include
Include
directive allows you to include other configuration files. It takes the following form:
Include filename
filename
can be an absolute path, a path relative to the directory specified by the ServerRoot
directive, or a wildcard expression. All configuration files from the /etc/httpd/conf.d/
directory are loaded by default.
IndexIgnore
IndexIgnore
directive allows you to specify a list of file names to be omitted from the server-generated directory listings. It takes the following form:
IndexIgnore filename…
Example 9.41. Using the IndexIgnore directive
IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t
IndexOptions
IndexOptions
directive allows you to customize the behavior of server-generated directory listings. It takes the following form:
IndexOptions option…
Charset=UTF-8
, FancyIndexing
, HTMLTable
, NameWidth=*
, and VersionSort
.
Table 9.9. Available directory listing options
Option | Description |
---|---|
Charset =encoding | Specifies the character set of a generated web page. The encoding has to be a valid character set such as UTF-8 or ISO-8859-2 . |
Type =content-type | Specifies the media type of a generated web page. The content-type has to be a valid MIME type such as text/html or text/plain . |
DescriptionWidth =value | Specifies the width of the description column. The value can be either a number of characters, or an asterisk (that is, * ) to adjust the width automatically. |
FancyIndexing | Enables advanced features such as different icons for certain files or the possibility to re-sort a directory listing by clicking on a column header. |
FolderFirst | Enables listing directories first, always placing them above files. |
HTMLTable | Enables the use of HTML tables for directory listings. |
IconsAreLinks | Enables using the icons as links. |
IconHeight =value | Specifies an icon height. The value is a number of pixels. |
IconWidth =value | Specifies an icon width. The value is a number of pixels. |
IgnoreCase | Enables sorting files and directories in a case-insensitive manner. |
IgnoreClient | Disables accepting query variables from a client. |
NameWidth =value | Specifies the width of the file name column. The value can be either a number of characters, or an asterisk (that is, * ) to adjust the width automatically. |
ScanHTMLTitles | Enables parsing the file for a description (that is, the title element) in case it is not provided by the AddDescription directive. |
ShowForbidden | Enables listing the files with otherwise restricted access. |
SuppressColumnSorting | Disables re-sorting a directory listing by clicking on a column header. |
SuppressDescription | Disables reserving a space for file descriptions. |
SuppressHTMLPreamble | Disables the use of standard HTML preamble when a file specified by the HeaderName directive is present. |
SuppressIcon | Disables the use of icons in directory listings. |
SuppressLastModified | Disables displaying the date of the last modification field in directory listings. |
SuppressRules | Disables the use of horizontal lines in directory listings. |
SuppressSize | Disables displaying the file size field in directory listings. |
TrackModified | Enables returning the Last-Modified and ETag values in the HTTP header. |
VersionSort | Enables sorting files that contain a version number in the expected manner. |
XHTML | Enables the use of XHTML 1.0 instead of the default HTML 3.2. |
Example 9.42. Using the IndexOptions directive
IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8
KeepAlive
KeepAlive
directive allows you to enable persistent connections. It takes the following form:
KeepAlive option
Off
.
Table 9.10. Available KeepAlive options
Option | Description |
---|---|
On | Enables the persistent connections. In this case, the server will accept more than one request per connection. |
Off | Disables the keep-alive connections. |
KeepAliveTimeout
to a low number, and monitor the /var/log/httpd/error_log
log file carefully.
KeepAliveTimeout
KeepAliveTimeout
directive allows you to specify the amount of time to wait for another request before closing the connection. It takes the following form:
KeepAliveTimeout time
15
.
LanguagePriority
LanguagePriority
directive allows you to customize the precedence of languages. It takes the following form:
LanguagePriority language…
cs
, en
, or fr
.
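For instance, to prefer English, then Czech, then French when negotiating the language of the content:

```apache
LanguagePriority en cs fr
```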
Listen
Listen
directive allows you to specify the IP addresses and ports on which the server accepts incoming requests. It takes the following form:
Listen [ip-address:]port [protocol]
80
.
httpd
service.
LoadModule
LoadModule
directive allows you to load a Dynamic Shared Object (DSO) module. It takes the following form:
LoadModule name path
/usr/lib/httpd/
on 32-bit and /usr/lib64/httpd/
on 64-bit systems by default).
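For example, the stock configuration loads the SSL module (provided by the mod_ssl package) along these lines:

```apache
LoadModule ssl_module modules/mod_ssl.so
```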
LogFormat
LogFormat
directive allows you to specify a log file format. It takes the following form:
LogFormat format name
The format is a string of options as listed in Table 9.11, “Common LogFormat options”; the name can then be used in place of an explicit format string in the CustomLog
directive.
Table 9.11. Common LogFormat options
Option | Description |
---|---|
%b | Represents the size of the response in bytes. |
%h | Represents the IP address or hostname of a remote client. |
%l | Represents the remote log name if supplied. If not, a hyphen (that is, - ) is used instead. |
%r | Represents the first line of the request string as it came from the browser or client. |
%s | Represents the status code. |
%t | Represents the date and time of the request. |
%u | If the authentication is required, it represents the remote user. If not, a hyphen (that is, - ) is used instead. |
%{field} | Represents the content of the HTTP header field. The common options include %{Referer} (the URL of the web page that referred the client to the server) and %{User-Agent} (the type of the web browser making the request). |
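Combining the options above, the widely used Common Log Format can be defined and then referenced from the CustomLog directive:

```apache
LogFormat "%h %l %u %t \"%r\" %s %b" common
CustomLog logs/access_log common
```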
LogLevel
LogLevel
directive allows you to customize the verbosity level of the error log. It takes the following form:
LogLevel option
warn
.
Table 9.12. Available LogLevel options
Option | Description |
---|---|
emerg | Only the emergency situations when the server cannot perform its work are logged. |
alert | All situations when an immediate action is required are logged. |
crit | All critical conditions are logged. |
error | All error messages are logged. |
warn | All warning messages are logged. |
notice | Even normal, but still significant situations are logged. |
info | Various informational messages are logged. |
debug | Various debugging messages are logged. |
MaxKeepAliveRequests
MaxKeepAliveRequests
directive allows you to specify the maximum number of requests for a persistent connection. It takes the following form:
MaxKeepAliveRequests number
0
allows an unlimited number of requests. The default option is 100
.
NameVirtualHost
NameVirtualHost
directive allows you to specify the IP address and port number for a name-based virtual host. It takes the following form:
NameVirtualHost ip-address[:port]
*
) representing all interfaces. Note that IPv6 addresses have to be enclosed in square brackets (that is, [
and ]
). The port is optional.
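For example, to enable name-based virtual hosting on all interfaces on port 80:

```apache
NameVirtualHost *:80
```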
Using secure HTTP connections
Options
Options
directive allows you to specify which server features are available in a particular directory. It takes the following form:
Options option…
Table 9.13. Available server features
Option | Description |
---|---|
ExecCGI | Enables the execution of CGI scripts. |
FollowSymLinks | Enables following symbolic links in the directory. |
Includes | Enables server-side includes. |
IncludesNOEXEC | Enables server-side includes, but does not allow the execution of commands. |
Indexes | Enables server-generated directory listings. |
MultiViews | Enables content-negotiated “MultiViews”. |
SymLinksIfOwnerMatch | Enables following symbolic links in the directory when both the link and the target file have the same owner. |
All | Enables all of the features above with the exception of MultiViews . |
None | Disables all of the features above. |
Order
Order
directive allows you to specify the order in which the Allow
and Deny
directives are evaluated. It takes the following form:
Order option
allow,deny
.
Table 9.14. Available Order options
Option | Description |
---|---|
allow,deny | Allow directives are evaluated first. |
deny,allow | Deny directives are evaluated first. |
PidFile
PidFile
directive allows you to specify a file to which the process ID (PID) of the server is stored. It takes the following form:
PidFile path
ServerRoot
directive (that is, /etc/httpd/
by default). The default option is run/httpd.pid
.
ProxyRequests
ProxyRequests
directive allows you to enable forward proxy requests. It takes the following form:
ProxyRequests option
Off
.
Table 9.15. Available ProxyRequests options
Option | Description |
---|---|
On | Enables forward proxy requests. |
Off | Disables forward proxy requests. |
ReadmeName
ReadmeName
directive allows you to specify a file to be appended to the end of the server-generated directory listing. It takes the following form:
ReadmeName filename
README.html
.
Redirect
Redirect
directive allows you to redirect a client to another URL. It takes the following form:
Redirect [status] path url
DocumentRoot
directive (for example, /docs
). The url refers to the current location of the content (for example, http://docs.example.com
).
Table 9.16. Available status options
Status | Description |
---|---|
permanent | Indicates that the requested resource has been moved permanently. The 301 (Moved Permanently) status code is returned to a client. |
temp | Indicates that the requested resource has been moved only temporarily. The 302 (Found) status code is returned to a client. |
seeother | Indicates that the requested resource has been replaced. The 303 (See Other) status code is returned to a client. |
gone | Indicates that the requested resource has been removed permanently. The 410 (Gone) status is returned to a client. |
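Using the example paths from above, a permanent redirect could be configured as follows:

```apache
Redirect permanent /docs http://docs.example.com/
```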
mod_rewrite
module that is part of the Apache HTTP Server installation.
ScriptAlias
ScriptAlias
directive allows you to specify the location of CGI scripts. It takes the following form:
ScriptAlias url-path real-path
DocumentRoot
directive (for example, /cgi-bin/
). The real-path is a full path to a file or directory in the local file system.
Directory
tag with additional permissions to access the target directory. By default, the /cgi-bin/
alias is created so that the scripts located in the /var/www/cgi-bin/
directory are accessible.
ScriptAlias
directive is used for security reasons to prevent CGI scripts from being viewed as ordinary text documents.
Example 9.58. Using the ScriptAlias directive
ScriptAlias /cgi-bin/ /var/www/cgi-bin/
<Directory "/var/www/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>
ServerAdmin
ServerAdmin
directive allows you to specify the email address of the server administrator to be displayed in server-generated web pages. It takes the following form:
ServerAdmin email
root@localhost
.
webmaster@hostname
, where hostname is the address of the server. Once set, alias webmaster
to the person responsible for the web server in /etc/aliases
, and as superuser, run the newaliases
command.
ServerName
ServerName
directive allows you to specify the hostname and the port number of a web server. It takes the following form:
ServerName hostname[:port]
Listen
directive.
/etc/hosts
file.
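For instance, using the example host from elsewhere in this chapter:

```apache
ServerName penguin.example.com:80
```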
ServerRoot
ServerRoot
directive allows you to specify the directory in which the server operates. It takes the following form:
ServerRoot directory
/etc/httpd/
.
ServerSignature
ServerSignature
directive allows you to enable displaying information about the server on server-generated documents. It takes the following form:
ServerSignature option
On
.
Table 9.17. Available ServerSignature options
Option | Description |
---|---|
On | Enables appending the server name and version to server-generated pages. |
Off | Disables appending the server name and version to server-generated pages. |
EMail | Enables appending the server name, version, and the email address of the system administrator as specified by the ServerAdmin directive to server-generated pages. |
ServerTokens
ServerTokens
directive allows you to customize what information is included in the Server response header. It takes the following form:
ServerTokens option
OS
.
Table 9.18. Available ServerTokens options
Option | Description |
---|---|
Prod | Includes the product name only (that is, Apache ). |
Major | Includes the product name and the major version of the server (for example, 2 ). |
Minor | Includes the product name and the minor version of the server (for example, 2.2 ). |
Min | Includes the product name and the minimal version of the server (for example, 2.2.15 ). |
OS | Includes the product name, the minimal version of the server, and the type of the operating system it is running on (for example, Red Hat ). |
Full | Includes all the information above along with the list of loaded modules. |
SuexecUserGroup
SuexecUserGroup
directive allows you to specify the user and group under which the CGI scripts will be run. It takes the following form:
SuexecUserGroup user group
root
privileges. Note that in <VirtualHost>
, SuexecUserGroup
replaces the User
and Group
directives.
Timeout
Timeout
directive allows you to specify the amount of time to wait for an event before closing a connection. It takes the following form:
Timeout time
60
.
TypesConfig
TypesConfig
directive allows you to specify the location of the MIME types configuration file. It takes the following form:
TypesConfig path
ServerRoot
directive (that is, /etc/httpd/
by default). The default option is /etc/mime.types
.
/etc/mime.types
, the recommended way to add MIME type mapping to the Apache HTTP Server is to use the AddType
directive.
UseCanonicalName
UseCanonicalName
directive allows you to specify the way the server refers to itself. It takes the following form:
UseCanonicalName option
Off
.
Table 9.19. Available UseCanonicalName options
Option | Description |
---|---|
On | Enables the use of the name that is specified by the ServerName directive. |
Off | Disables the use of the name that is specified by the ServerName directive. The hostname and port number provided by the requesting client are used instead. |
DNS | Disables the use of the name that is specified by the ServerName directive. The hostname determined by a reverse DNS lookup is used instead. |
User
User
directive allows you to specify the user under which the httpd
service will run. It takes the following form:
User user
apache
.
httpd
service should not be run with root
privileges. Note that User
is no longer supported inside <VirtualHost>
, and has been replaced by the SuexecUserGroup
directive.
UserDir
UserDir
directive allows you to enable serving content from users' home directories. It takes the following form:
UserDir option
public_html
), or a valid keyword as described in Table 9.20, “Available UserDir options”. The default option is disabled
.
Table 9.20. Available UserDir options
Option | Description |
---|---|
enabled user… | Enables serving content from home directories of given users. |
disabled [user…] | Disables serving content from home directories, either for all users, or, if a space separated list of users is supplied, for given users only. |
Set the correct permissions
UserDir
directive. For example, to allow access to public_html/
in the home directory of user joe
, type the following at a shell prompt as root
:
~]# chmod a+x /home/joe/
~]# chmod a+rx /home/joe/public_html/
/etc/httpd/conf.d/ssl.conf
:
SetEnvIf
SetEnvIf
directive allows you to set environment variables based on the headers of incoming connections. It takes the following form:
SetEnvIf option pattern [!]variable[=value]…
!
) is present, the variable is removed instead of being set.
Table 9.21. Available SetEnvIf options
Option | Description |
---|---|
Remote_Host | Refers to the client's hostname. |
Remote_Addr | Refers to the client's IP address. |
Server_Addr | Refers to the server's IP address. |
Request_Method | Refers to the request method (for example, GET ). |
Request_Protocol | Refers to the protocol name and version (for example, HTTP/1.1 ). |
Request_URI | Refers to the requested resource. |
SetEnvIf
directive is used to disable HTTP keepalives, and to allow SSL to close the connection without a closing notification from the client browser. This is necessary for certain web browsers that do not reliably shut down the SSL connection.
Example 9.70. Using the SetEnvIf directive
SetEnvIf User-Agent ".*MSIE.*" \
    nokeepalive ssl-unclean-shutdown \
    downgrade-1.0 force-response-1.0
/etc/httpd/conf.d/ssl.conf
file to be present, the mod_ssl needs to be installed. See Section 9.1.7, “Setting Up an SSL Server” for more information on how to install and configure an SSL server.
IfModule
. By default, the server-pool is defined for both the prefork
and worker
MPMs.
/etc/httpd/conf/httpd.conf
:
MaxClients
MaxClients
directive allows you to specify the maximum number of simultaneously connected clients to process at one time. It takes the following form:
MaxClients number
256
when using the prefork
MPM.
MaxRequestsPerChild
MaxRequestsPerChild
directive allows you to specify the maximum number of requests a child process can serve before it dies. It takes the following form:
MaxRequestsPerChild number
0
allows an unlimited number of requests.
MaxRequestsPerChild
directive is used to prevent long-lived processes from causing memory leaks.
MaxSpareServers
MaxSpareServers
directive allows you to specify the maximum number of spare child processes. It takes the following form:
MaxSpareServers number
prefork
MPM only.
MaxSpareThreads
MaxSpareThreads
directive allows you to specify the maximum number of spare server threads. It takes the following form:
MaxSpareThreads number
MinSpareThreads
and ThreadsPerChild
. This directive is used by the worker
MPM only.
MinSpareServers
MinSpareServers
directive allows you to specify the minimum number of spare child processes. It takes the following form:
MinSpareServers number
prefork
MPM only.
MinSpareThreads
MinSpareThreads
directive allows you to specify the minimum number of spare server threads. It takes the following form:
MinSpareThreads number
worker
MPM only.
StartServers
StartServers
directive allows you to specify the number of child processes to create when the service is started. It takes the following form:
StartServers number
ThreadsPerChild
ThreadsPerChild
directive allows you to specify the number of threads a child process can create. It takes the following form:
ThreadsPerChild number
worker
MPM only.
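Taken together, the server-pool directives above are typically grouped in an IfModule container. The following is an illustrative configuration for the worker MPM; the figures are example defaults, not tuning advice:

```apache
<IfModule worker.c>
    StartServers         4
    MaxClients         300
    MinSpareThreads     25
    MaxSpareThreads     75
    ThreadsPerChild     25
    MaxRequestsPerChild  0
</IfModule>
```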
httpd
service is distributed along with a number of Dynamic Shared Objects (DSOs), which can be dynamically loaded or unloaded at runtime as necessary. By default, these modules are located in /usr/lib/httpd/modules/
on 32-bit and in /usr/lib64/httpd/modules/
on 64-bit systems.
LoadModule
directive as described in Section 9.1.4.1, “Common httpd.conf Directives”. Note that modules provided by a separate package often have their own configuration file in the /etc/httpd/conf.d/
directory.
httpd
service.
root
:
~]# yum install httpd-devel
apxs
) utility required to compile a module.
~]# apxs -i -a -c module_name.c
/usr/share/doc/httpd-VERSION/httpd-vhosts.conf
into the /etc/httpd/conf.d/
directory, and replace the @@Port@@
and @@ServerRoot@@
placeholder values. Customize the options according to your requirements as shown in Example 9.80, “Example virtual host configuration”.
Example 9.80. Example virtual host configuration
<VirtualHost *:80>
    ServerAdmin webmaster@penguin.example.com
    DocumentRoot "/www/docs/penguin.example.com"
    ServerName penguin.example.com
    ServerAlias www.penguin.example.com
    ErrorLog "/var/log/httpd/dummy-host.example.com-error_log"
    CustomLog "/var/log/httpd/dummy-host.example.com-access_log" common
</VirtualHost>
ServerName
must be a valid DNS name assigned to the machine. The <VirtualHost>
container is highly customizable, and accepts most of the directives available within the main server configuration. Directives that are not supported within this container include User
and Group
, which were replaced by SuexecUserGroup
.
Changing the port number
Listen
directive in the global settings section of the /etc/httpd/conf/httpd.conf
file accordingly.
httpd
service.
mod_ssl
, a module that uses the OpenSSL toolkit to provide SSL/TLS support, is commonly referred to as the SSL server.
mod_ssl
prevents any inspection or modification of the transmitted content. This section provides basic information on how to enable this module in the Apache HTTP Server configuration, and guides you through the process of generating private keys and self-signed certificates.
Table 9.22. CA lists for most common web browsers
Web Browser | Link |
---|---|
Mozilla Firefox | Mozilla root CA list. |
Opera | Root certificates used by Opera. |
Internet Explorer | Windows root certificate program members. |
mod_ssl
module) and openssl (the OpenSSL toolkit) packages installed. To do so, type the following at a shell prompt as root
:
~]# yum install mod_ssl openssl
mod_ssl
configuration file at /etc/httpd/conf.d/ssl.conf
, which is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd
service as described in Section 9.1.3.3, “Restarting the Service”.
/etc/pki/tls/private/
and /etc/pki/tls/certs/
directories respectively. You can do so by running the following commands as root
:
~]# mv key_file.key /etc/pki/tls/private/hostname.key
~]# mv certificate.crt /etc/pki/tls/certs/hostname.crt
/etc/httpd/conf.d/ssl.conf
configuration file:
SSLCertificateFile /etc/pki/tls/certs/hostname.crt
SSLCertificateKeyFile /etc/pki/tls/private/hostname.key
httpd
service as described in Section 9.1.3.3, “Restarting the Service”.
Example 9.81. Using a key and certificate from the Red Hat Secure Web Server
~]# mv /etc/httpd/conf/httpsd.key /etc/pki/tls/private/penguin.example.com.key
~]# mv /etc/httpd/conf/httpsd.crt /etc/pki/tls/certs/penguin.example.com.crt
root
, you can install it by typing the following at a shell prompt:
~]# yum install crypto-utils
Replacing an existing certificate
root
, use the following command instead of genkey:
~]# openssl req -x509 -new -set_serial number -key hostname.key -out hostname.crt
Remove a previously created key
root
:
~]# rm /etc/pki/tls/private/hostname.key
root
, run the genkey
command followed by the appropriate host name (for example, penguin.example.com
):
~]# genkey
hostname
2048 bits
. See NIST Special Publication 800-131A.
[*]
) or disable ([ ]
) the encryption of the private key.
Do not forget the passphrase
/etc/httpd/conf.d/ssl.conf
configuration file:
SSLCertificateFile /etc/pki/tls/certs/hostname.crt
SSLCertificateKeyFile /etc/pki/tls/private/hostname.key
httpd
service as described in Section 9.1.3.3, “Restarting the Service”, so that the updated configuration is loaded.
/usr/share/doc/httpd/
man httpd
httpd
service containing the complete list of its command line options.
man apachectl
man genkey
genkey
containing the full documentation on its usage.
Installing the dovecot package
dovecot
package is installed on your system by running, as root
:
yum install dovecot
POP
server, email messages are downloaded by email client applications. By default, most POP
email clients are automatically configured to delete the message on the email server after it has been successfully transferred; however, this setting can usually be changed.
POP
is fully compatible with important Internet messaging standards, such as Multipurpose Internet Mail Extensions (MIME), which allow for email attachments.
POP
works best for users who have one system on which to read email. It also works well for users who do not have a persistent connection to the Internet or the network containing the mail server. Unfortunately for those with slow network connections, POP
requires client programs, upon authentication, to download the entire content of each message. This can take a long time if any messages have large attachments.
POP
protocol is POP3
.
POP
protocol variants:
POP3
with MD5
authentication. An encoded hash of the user's password is sent from the email client to the server rather than sending an unencrypted password.
POP3
with Kerberos authentication.
POP3
with RPOP
authentication. This uses a per-user ID, similar to a password, to authenticate POP requests. However, this ID is not encrypted, so RPOP
is no more secure than standard POP
.
pop3s
service, or by using the /usr/sbin/stunnel
application. For more information on securing email communication, refer to Section 10.5.1, “Securing Communication”.
IMAP
server under Fedora is Dovecot and is provided by the dovecot package. See Section 10.1.2.1, “POP” for information on how to install Dovecot.
IMAP
mail server, email messages remain on the server where users can read or delete them. IMAP
also allows client applications to create, rename, or delete mail directories on the server to organize and store email.
IMAP
is particularly useful for users who access their email using multiple machines. The protocol is also convenient for users connecting to the mail server via a slow connection, because only the email header information is downloaded for messages until opened, saving bandwidth. The user also has the ability to delete messages without viewing or downloading them.
IMAP
client applications are capable of caching copies of messages locally, so the user can browse previously read messages when not directly connected to the IMAP
server.
IMAP
, like POP
, is fully compatible with important Internet messaging standards, such as MIME, which allow for email attachments.
SSL
encryption for client authentication and data transfer sessions. This can be enabled by using the imaps
service, or by using the /usr/sbin/stunnel
program. For more information on securing email communication, refer to Section 10.5.1, “Securing Communication”.
imap-login
and pop3-login
processes which implement the IMAP
and POP3
protocols are spawned by the master dovecot
daemon included in the dovecot package. The use of IMAP
and POP
is configured through the /etc/dovecot/dovecot.conf
configuration file; by default dovecot
runs IMAP
and POP3
together with their secure versions using SSL
. To configure dovecot
to use POP
, complete the following steps:
/etc/dovecot/dovecot.conf
configuration file to make sure the protocols
variable is uncommented (remove the hash sign (#
) at the beginning of the line) and contains the pop3
argument. For example:
protocols = imap imaps pop3 pop3s
protocols
variable is left commented out, dovecot
will use the default values specified for this variable.
root
:
systemctl restart dovecot.service
systemctl enable dovecot.service
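Before restarting, you can confirm that the protocols line is active. A minimal sketch: running grep -E against /etc/dovecot/dovecot.conf shows only uncommented protocols lines. The pipeline below inlines sample file content so the check can be seen working anywhere; against a real system you would grep the configuration file itself.

```shell
# Sanity-check the "protocols" setting. On a real host:
#   grep -E '^protocols[[:space:]]*=' /etc/dovecot/dovecot.conf
# Sample input is inlined here for illustration:
printf '# protocols = imap\nprotocols = imap imaps pop3 pop3s\n' \
  | grep -E '^protocols[[:space:]]*='
# prints: protocols = imap imaps pop3 pop3s
```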
The dovecot service starts the POP3 server
dovecot
only reports that it started the IMAP
server, but also starts the POP3
server.
SMTP
, both IMAP
and POP3
require connecting clients to authenticate using a username and password. By default, passwords for both protocols are passed over the network unencrypted.
SSL
on dovecot
:
/etc/pki/dovecot/dovecot-openssl.conf
configuration file as you prefer. However, in a typical installation, this file does not require modification.
/etc/pki/dovecot/certs/dovecot.pem
and /etc/pki/dovecot/private/dovecot.pem
.
/usr/libexec/dovecot/mkcert.sh
script which creates the dovecot
self-signed certificates. These certificates are copied into the /etc/pki/dovecot/certs
and /etc/pki/dovecot/private
directories. To implement the changes, restart dovecot
by typing the following at a shell prompt as root
:
systemctl restart dovecot.service
dovecot
can be found online at http://www.dovecot.org.
SMTP
. A message may involve several MTAs as it moves to its intended destination.
mail
or Procmail.
POP
or IMAP
protocols, setting up mailboxes to store messages, and sending outbound messages to an MTA.
root
, you can either uninstall Postfix or use the following command to switch to Sendmail:
alternatives --config mta
systemctl enable|disable service.service
/usr/sbin/postfix
. This daemon launches all related processes needed to handle mail delivery.
/etc/postfix/
directory. The following is a list of the more commonly used files:
access
— Used for access control, this file specifies which hosts are allowed to connect to Postfix.
main.cf
— The global Postfix configuration file. The majority of configuration options are specified in this file.
master.cf
— Specifies how Postfix interacts with various processes to accomplish mail delivery.
transport
— Maps email addresses to relay hosts.
aliases
file can be found in the /etc/
directory. This file is shared between Postfix and Sendmail. It is a configurable list required by the mail protocol that describes user ID aliases.
Configuring Postfix as a server for other clients
/etc/postfix/main.cf
file does not allow Postfix to accept network connections from a host other than the local computer. For instructions on configuring Postfix as a server for other clients, refer to Section 10.3.1.2, “Basic Postfix Configuration”.
postfix
service after changing any options in the configuration files under the /etc/postfix
directory in order for those changes to take effect. To do so, run the following command as root
:
systemctl restart postfix.service
root
to enable mail delivery for other hosts on the network:
/etc/postfix/main.cf
file with a text editor, such as vi
.
mydomain
line by removing the hash sign (#
), and replace domain.tld with the domain the mail server is servicing, such as example.com
.
myorigin = $mydomain
line.
myhostname
line, and replace host.domain.tld with the hostname for the machine.
mydestination = $myhostname, localhost.$mydomain
line.
mynetworks
line, and replace 168.100.189.0/28 with a valid network setting for hosts that can connect to the server.
inet_interfaces = all
line.
inet_interfaces = localhost
line.
postfix
service.
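Taken together, the edits above produce a main.cf fragment along these lines (example.com, mail.example.com, and the network range are placeholder values taken from the steps; substitute your own domain, hostname, and network):

```
mydomain = example.com
myorigin = $mydomain
myhostname = mail.example.com
mydestination = $myhostname, localhost.$mydomain
mynetworks = 168.100.189.0/28
inet_interfaces = all
#inet_interfaces = localhost
```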
/etc/postfix/main.cf
configuration file. Additional resources including information about Postfix configuration, SpamAssassin integration, or detailed descriptions of the /etc/postfix/main.cf
parameters are available online at http://www.postfix.org/.
LDAP
directory as a source for various lookup tables (e.g.: aliases
, virtual
, canonical
, etc.). This allows LDAP
to store hierarchical user information and Postfix to only be given the result of LDAP
queries when needed. Because the information is not stored locally, administrators can maintain it in one central place.
LDAP
to look up the /etc/aliases
file. Make sure your /etc/postfix/main.cf
contains the following:
alias_maps = hash:/etc/aliases, ldap:/etc/postfix/ldap-aliases.cf
/etc/postfix/ldap-aliases.cf
file if you do not have one created already and make sure it contains the following:
server_host = ldap.example.com
search_base = dc=example, dc=com
ldap.example.com
, example
, and com
are parameters that need to be replaced with specification of an existing available LDAP
server.
The /etc/postfix/ldap-aliases.cf file
/etc/postfix/ldap-aliases.cf
file can specify various parameters, including parameters that enable LDAP
SSL
and STARTTLS
. For more information, refer to the ldap_table(5)
man page.
LDAP
, refer to Section 11.1, “OpenLDAP”.
SMTP
protocol. However, Sendmail is highly configurable, allowing control over almost every aspect of how email is handled, including the protocol used. Many system administrators elect to use Sendmail as their MTA due to its power and scalability.
POP
or IMAP
, to download their messages to their local machine. Or, they may prefer a Web interface to gain access to their mailbox. These other applications can work in conjunction with Sendmail, but they actually exist for different reasons and can operate separately from one another.
root
:
yum install sendmail
root
:
yum install sendmail-cf
/usr/sbin/sendmail
.
/etc/mail/sendmail.cf
. Avoid editing the sendmail.cf
file directly. To make configuration changes to Sendmail, edit the /etc/mail/sendmail.mc
file, back up the original /etc/mail/sendmail.cf
, and use the following alternatives to generate a new configuration file:
/etc/mail/
(~]# make all -C /etc/mail/
) to create a new /etc/mail/sendmail.cf
configuration file. All other generated files in /etc/mail
(db files) will be regenerated if needed. The old makemap commands are still usable. The make command will automatically be used by systemctl start|restart|reload sendmail.service
.
m4
macro processor to create a new /etc/mail/sendmail.cf
. The m4
macro processor is not installed by default. Before using it to create /etc/mail/sendmail.cf
, install the m4 package as root
:
yum install m4
/etc/mail/
directory including:
access
— Specifies which systems can use Sendmail for outbound email.
domaintable
— Specifies domain name mapping.
local-host-names
— Specifies aliases for the host.
mailertable
— Specifies instructions that override routing for particular domains.
virtusertable
— Specifies a domain-specific form of aliasing, allowing multiple virtual domains to be hosted on one machine.
/etc/mail/
, such as access
, domaintable
, mailertable
and virtusertable
, must actually store their information in database files before Sendmail can use any configuration changes. To include any changes made to these configurations in their database files, run the following command, as root
:
makemap hash /etc/mail/name < /etc/mail/name
sendmail
service for the changes to take effect by running:
systemctl restart sendmail.service
example.com
domain delivered to bob@other-example.com
, add the following line to the virtusertable
file:
@example.com bob@other-example.com
virtusertable.db
file must be updated:
makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
virtusertable.db
file containing the new configuration.
/etc/mail/sendmail.cf
file.
Back up the sendmail.cf file before changing its content
sendmail.cf
file, it is a good idea to create a backup copy.
/etc/mail/sendmail.mc
file as root
. Once you are finished, restart the sendmail
service and, if the m4 package is installed, the m4
macro processor will automatically generate a new sendmail.cf
configuration file:
systemctl restart sendmail.service
Configuring Sendmail as a server for other clients
sendmail.cf
file does not allow Sendmail to accept network connections from any host other than the local computer. To configure Sendmail as a server for other clients, edit the /etc/mail/sendmail.mc
file, and either change the address specified in the Addr=
option of the DAEMON_OPTIONS
directive from 127.0.0.1
to the IP address of an active network device or comment out the DAEMON_OPTIONS
directive altogether by placing dnl
at the beginning of the line. When finished, regenerate /etc/mail/sendmail.cf
by restarting the service:
systemctl restart sendmail.service
SMTP
-only sites. However, it does not work for UUCP (UNIX-to-UNIX Copy Protocol) sites. If using UUCP mail transfers, the /etc/mail/sendmail.mc
file must be reconfigured and a new /etc/mail/sendmail.cf
file must be generated.
/usr/share/sendmail-cf/README
file before editing any files in the directories under the /usr/share/sendmail-cf
directory, as they can affect the future configuration of the /etc/mail/sendmail.cf
file.
mail.example.com
that handles all of their email and assigns a consistent return address to all outgoing mail.
user@example.com
instead of user@host.example.com
.
/etc/mail/sendmail.mc
:
FEATURE(always_add_domain)dnl
FEATURE(`masquerade_entire_domain')dnl
FEATURE(`masquerade_envelope')dnl
FEATURE(`allmasquerade')dnl
MASQUERADE_AS(`bigcorp.com.')dnl
MASQUERADE_DOMAIN(`bigcorp.com.')dnl
MASQUERADE_AS(bigcorp.com)dnl
sendmail.cf
using the m4
macro processor, this configuration makes all mail from inside the network appear as if it were sent from bigcorp.com
.
SMTP
messages, also called relaying, has been disabled by default since Sendmail version 8.9. Before this change occurred, Sendmail directed the mail host (x.edu
) to accept messages from one party (y.com
) and sent them to a different party (z.net
). Now, however, Sendmail must be configured to permit any domain to relay mail through the server. To configure relay domains, edit the /etc/mail/relay-domains
file and restart Sendmail:
systemctl restart sendmail.service
/etc/mail/access
file can be used to prevent connections from unwanted hosts. The following example illustrates how this file can be used to both block and specifically allow access to the Sendmail server:
badspammer.com ERROR:550 "Go away and do not spam us"
tux.badspammer.com OK
10.0 RELAY
badspammer.com
is blocked with a 550 RFC-821 compliant error code, with a message sent back to the spammer. Email sent from the tux.badspammer.com
sub-domain is accepted. The last line shows that any email sent from the 10.0.*.* network can be relayed through the mail server.
/etc/mail/access.db
file is a database, use the makemap
command to update any changes. Do this using the following command as root
:
makemap hash /etc/mail/access < /etc/mail/access
SMTP
servers store information about an email's journey in the message header. As the message travels from one MTA to another, each puts in a Received
header above all the other Received
headers. It is important to note that this information may be altered by spammers.
/usr/share/sendmail-cf/README
for more information and examples.
LDAP
is a very quick and powerful way to find specific information about a particular user from a much larger group. For example, an LDAP
server can be used to look up a particular email address from a common corporate directory by the user's last name. In this kind of implementation, LDAP
is largely separate from Sendmail, with LDAP
storing the hierarchical user information and Sendmail only being given the result of LDAP
queries in pre-addressed email messages.
LDAP
, where it uses LDAP
to replace separately maintained files, such as /etc/aliases
and /etc/mail/virtusertable
, on different mail servers that work together to support a medium- to enterprise-level organization. In short, LDAP
abstracts the mail routing level from Sendmail and its separate configuration files to a powerful LDAP
cluster that can be leveraged by many different applications.
LDAP
. To extend the Sendmail server using LDAP
, first get an LDAP
server, such as OpenLDAP, running and properly configured. Then edit the /etc/mail/sendmail.mc
to include the following:
LDAPROUTE_DOMAIN(`yourdomain.com')dnl
FEATURE(`ldap_routing')dnl
Advanced configuration
LDAP
. The configuration can differ greatly from this depending on the implementation of LDAP
, especially when configuring several Sendmail machines to use a common LDAP
server.
/usr/share/sendmail-cf/README
for detailed LDAP
routing configuration instructions and examples.
/etc/mail/sendmail.cf
file by running the m4
macro processor and again restarting Sendmail. See Section 10.3.2.3, “Common Sendmail Configuration Changes” for instructions.
LDAP
, refer to Section 11.1, “OpenLDAP”.
POP3
and IMAP
. It can even forward email messages to an SMTP
server, if necessary.
Installing the fetchmail package
root
:
yum install fetchmail
.fetchmailrc
file in the user's home directory. If it does not already exist, create the .fetchmailrc
file in your home directory
.fetchmailrc
file, Fetchmail checks for email on a remote server and downloads it. It then delivers it to port 25
on the local machine, using the local MTA to place the email in the correct user's spool file. If Procmail is available, it is launched to filter the email and place it in a mailbox so that it can be read by an MUA.
.fetchmailrc
file is much easier. Place any desired configuration options in the .fetchmailrc
file for those options to be used each time the fetchmail
command is issued. It is possible to override these at the time Fetchmail is run by specifying that option on the command line.
.fetchmailrc
file contains three classes of configuration options:
.fetchmailrc
file, followed by one or more server options, each of which designate a different email server that Fetchmail should check. User options follow server options for each user account checking that email server. Like server options, multiple user options may be specified for use with a particular server as well as to check multiple email accounts on the same server.
.fetchmailrc
file by the use of a special option verb, poll
or skip
, that precedes any of the server information. The poll
action tells Fetchmail to use this server option when it is run, which checks for email using the specified user options. Any server options after a skip
action, however, are not checked unless this server's hostname is specified when Fetchmail is invoked. The skip
option is useful when testing configurations in the .fetchmailrc
file because it only checks skipped servers when specifically invoked, and does not affect any currently working configurations.
.fetchmailrc
file:
set postmaster "user1"
set bouncemail
poll pop.domain.com proto pop3
    user 'user1' there with password 'secret' is user1 here
poll mail.domain2.com
    user 'user5' there with password 'secret2' is user1 here
    user 'user7' there with password 'secret3' is user1 here
postmaster
option) and all email errors are sent to the postmaster instead of the sender (bouncemail
option). The set
action tells Fetchmail that this line contains a global option. Then, two email servers are specified, one set to check using POP3
, the other for trying various protocols to find one that works. Two users are checked using the second server option, but all email found for any user is sent to user1
's mail spool. This allows multiple mailboxes to be checked on multiple servers, while appearing in a single MUA inbox. Each user's specific information begins with the user
action.
Omitting the password from the configuration
.fetchmailrc
file. Omitting the with password 'password'
section causes Fetchmail to ask for a password when it is launched.
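For example, the same server entry as above written without the with password clause (the hostnames are the illustrative ones used earlier); Fetchmail then prompts for the password when launched:

```
poll pop.domain.com proto pop3
    user 'user1' there is user1 here
```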
fetchmail
man page explains each option in detail, but the most common ones are listed in the following three sections.
set
action.
daemon seconds
— Specifies daemon-mode, where Fetchmail stays in the background. Replace seconds with the number of seconds Fetchmail is to wait before polling the server.
postmaster
— Specifies a local user to send mail to in case of delivery problems.
syslog
— Specifies the log file for errors and status messages. By default, this is /var/log/maillog
.
.fetchmailrc
after a poll
or skip
action.
auth auth-type
— Replace auth-type with the type of authentication to be used. By default, password
authentication is used, but some protocols support other types of authentication, including kerberos_v5
, kerberos_v4
, and ssh
. If the any
authentication type is used, Fetchmail first tries methods that do not require a password, then methods that mask the password, and finally attempts to send the password unencrypted to authenticate to the server.
interval number
— Polls the specified server every number
of times that it checks for email on all configured servers. This option is generally used for email servers where the user rarely receives messages.
port port-number
— Replace port-number with the port number. This value overrides the default port number for the specified protocol.
proto protocol
— Replace protocol with the protocol, such as pop3
or imap
, to use when checking for messages on the server.
timeout seconds
— Replace seconds with the number of seconds of server inactivity after which Fetchmail gives up on a connection attempt. If this value is not set, a default of 300
seconds is assumed.
user
option (defined below).
fetchall
— Orders Fetchmail to download all messages in the queue, including messages that have already been viewed. By default, Fetchmail only pulls down new messages.
fetchlimit number
— Replace number with the number of messages to be retrieved before stopping.
flush
— Deletes all previously viewed messages in the queue before retrieving new messages.
limit max-number-bytes
— Replace max-number-bytes with the maximum size in bytes that messages are allowed to be when retrieved by Fetchmail. This option is useful with slow network links, when a large message takes too long to download.
password 'password'
— Replace password with the user's password.
preconnect "command"
— Replace command with a command to be executed before retrieving messages for the user.
postconnect "command"
— Replace command with a command to be executed after retrieving messages for the user.
ssl
— Activates SSL encryption.
user "username"
— Replace username with the username used by Fetchmail to retrieve messages. This option must precede all other user options.
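Several of these options can appear together in one entry. A sketch, with hypothetical host and user names, that polls an IMAP server with a two-minute timeout, retrieves everything including previously seen mail, and skips messages over roughly one megabyte:

```
poll mail.example.com proto imap timeout 120
    user 'jane' there with password 'secret' is jane here fetchall limit 1000000
```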
fetchmail
command mirror the .fetchmailrc
configuration options. In this way, Fetchmail may be used with or without a configuration file. These options are not used on the command line by most users because it is easier to leave them in the .fetchmailrc
file.
fetchmail
command with other options for a particular purpose. It is possible to issue command options to temporarily override a .fetchmailrc
setting that is causing an error, as any options specified at the command line override configuration file options.
fetchmail
command can supply important information.
--configdump
— Displays every possible option based on information from .fetchmailrc
and Fetchmail defaults. No email is retrieved for any users when using this option.
-s
— Executes Fetchmail in silent mode, preventing any messages, other than errors, from appearing after the fetchmail
command.
-v
— Executes Fetchmail in verbose mode, displaying every communication between Fetchmail and remote email servers.
-V
— Displays detailed version information, lists its global options, and shows settings to be used with each user, including the email protocol and authentication method. No email is retrieved for any users when using this option.
.fetchmailrc
file.
-a
— Fetchmail downloads all messages from the remote email server, whether new or previously viewed. By default, Fetchmail only downloads new messages.
-k
— Fetchmail leaves the messages on the remote email server after downloading them. This option overrides the default behavior of deleting messages after downloading them.
-l max-number-bytes
— Fetchmail does not download any messages over a particular size and leaves them on the remote email server.
--quit
— Quits the Fetchmail daemon process.
.fetchmailrc
options can be found in the fetchmail
man page.
/bin/mail
command to send email containing log messages to the root
user of the local system.
mail
. Both of the applications are considered LDAs and both move email from the MTA's spool file into the user's mailbox. However, Procmail provides a robust filtering system.
mail
command, consult its man page (man mail
).
/etc/procmailrc
or of a ~/.procmailrc
file (also called an rc file) in the user's home directory invokes Procmail whenever an MTA receives a new message.
rc
files exist in the /etc/
directory and no .procmailrc
files exist in any user's home directory. Therefore, to use Procmail, each user must construct a .procmailrc
file with specific environment variables and rules.
rc
file. If a message matches a recipe, then the email is placed in a specified file, is deleted, or is otherwise processed.
/etc/procmailrc
file and rc
files in the /etc/procmailrcs
directory for default, system-wide, Procmail environmental variables and recipes. Procmail then searches for a .procmailrc
file in the user's home directory. Many users also create additional rc
files for Procmail that are referred to within the .procmailrc
file in their home directory.
~/.procmailrc
file in the following format:
env-variable="value"
env-variable
is the name of the variable and value
defines the variable.
DEFAULT
— Sets the default mailbox where messages that do not match any recipes are placed.
DEFAULT
value is the same as $ORGMAIL
.
INCLUDERC
— Specifies additional rc
files containing more recipes for messages to be checked against. This breaks up the Procmail recipe lists into individual files that fulfill different roles, such as blocking spam and managing email lists, that can then be turned off or on by using comment characters in the user's ~/.procmailrc
file.
.procmailrc
file may look like this:
MAILDIR=$HOME/Msgs
INCLUDERC=$MAILDIR/lists.rc
INCLUDERC=$MAILDIR/spam.rc
INCLUDERC
line with a hash sign (#
).
LOCKSLEEP
— Sets the amount of time, in seconds, between attempts by Procmail to use a particular lockfile. The default is 8
seconds.
LOCKTIMEOUT
— Sets the amount of time, in seconds, that must pass after a lockfile was last modified before Procmail assumes that the lockfile is old and can be deleted. The default is 1024
seconds.
LOGFILE
— The file to which any Procmail information or error messages are written.
MAILDIR
— Sets the current working directory for Procmail. If set, all other Procmail paths are relative to this directory.
ORGMAIL
— Specifies the original mailbox, or another place to put the messages if they cannot be placed in the default or recipe-required location.
/var/spool/mail/$LOGNAME
is used.
SUSPEND
— Sets the amount of time, in seconds, that Procmail pauses if a necessary resource, such as swap space, is not available.
SWITCHRC
— Allows a user to specify an external file containing additional Procmail recipes, much like the INCLUDERC
option, except that recipe checking is actually stopped on the referring configuration file and only the recipes on the SWITCHRC
-specified file are used.
VERBOSE
— Causes Procmail to log more information. This option is useful for debugging.
LOGNAME
, which is the login name; HOME
, which is the location of the home directory; and SHELL
, which is the default shell.
procmailrc
man page.
:0flags: lockfile-name
* special-condition-character condition-1
* special-condition-character condition-2
* special-condition-character condition-N
special-action-character action-to-perform
flags
section specifies that a lockfile is created for this message. If a lockfile is created, the name can be specified by replacing lockfile-name
.
*
) can further control the condition.
action-to-perform
argument specifies the action taken when the message matches one of the conditions. There can only be one action per recipe. In many cases, the name of a mailbox is used here to direct matching messages into that file, effectively sorting the email. Special action characters may also be used before the action is specified. See Section 10.4.2.4, “Special Conditions and Actions” for more information.
{
}
, that are performed on messages which match the recipe's conditions. Blocks can be nested inside one another, providing greater control for identifying and performing actions on messages.
A
— Specifies that this recipe is only used if the previous recipe without an A
or a
flag also matched this message.
a
— Specifies that this recipe is only used if the previous recipe with an A
or a
flag also matched this message and was successfully completed.
B
— Parses the body of the message and looks for matching conditions.
b
— Uses the body in any resulting action, such as writing the message to a file or forwarding it. This is the default behavior.
c
— Generates a carbon copy of the email. This is useful with delivering recipes, since the required action can be performed on the message and a copy of the message can continue being processed in the rc
files.
D
— Makes the egrep
comparison case-sensitive. By default, the comparison process is not case-sensitive.
E
— While similar to the A
flag, the conditions in the recipe are only compared to the message if the immediately preceding recipe without an E
flag did not match. This is comparable to an else action.
e
— The recipe is compared to the message only if the action specified in the immediately preceding recipe fails.
f
— Uses the pipe as a filter.
H
— Parses the header of the message and looks for matching conditions. This is the default behavior.
h
— Uses the header in a resulting action. This is the default behavior.
w
— Tells Procmail to wait for the specified filter or program to finish, and reports whether or not it was successful before considering the message filtered.
W
— Is identical to w
except that "Program failure" messages are suppressed.
procmailrc
man page.
:
) after any flags on a recipe's first line. This creates a local lockfile based on the destination file name plus whatever has been set in the LOCKEXT
global environment variable.
*
) at the beginning of a recipe's condition line:
!
— In the condition line, this character inverts the condition, causing a match to occur only if the condition does not match the message.
<
— Checks if the message is under a specified number of bytes.
>
— Checks if the message is over a specified number of bytes.
!
— In the action line, this character tells Procmail to forward the message to the specified email addresses.
$
— Refers to a variable set earlier in the rc
file. This is often used to set a common mailbox that is referred to by various recipes.
|
— Starts a specified program to process the message.
{
and }
— Constructs a nesting block, used to contain additional recipes to apply to matching messages.
grep
man page.
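A short recipe combining several of the characters above (the addresses are hypothetical): the c flag keeps a copy of the message in normal processing, while the ! action forwards the matching message to another address.

```
:0 c
* ^From:.*boss@example\.com
! assistant@example.com
```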
:0:
new-mail.spool
LOCKEXT
environment variable. No condition is specified, so every message matches this recipe and is placed in the single spool file called new-mail.spool
, located within the directory specified by the MAILDIR
environment variable. An MUA can then view messages in this file.
rc
files to direct messages to a default location.
:0
* ^From: spammer@domain.com
/dev/null
spammer@domain.com
are sent to the /dev/null
device, deleting them.
Sending messages to /dev/null
/dev/null
for permanent deletion. If a recipe inadvertently catches unintended messages, and those messages disappear, it becomes difficult to troubleshoot the rule.
/dev/null
.
:0:
* ^(From|Cc|To).*tux-lug
tuxlug
tux-lug@domain.com
mailing list are placed in the tuxlug
mailbox automatically for the MUA. Note that the condition in this example matches the message if it has the mailing list's email address on the From
, Cc
, or To
lines.
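Procmail conditions are egrep-style regular expressions, so a condition can be sanity-checked against a sample header with grep -E before it goes into ~/.procmailrc. The header below is made up for illustration:

```shell
# Test the condition from the recipe above against a sample header;
# grep -E prints the line the pattern matches:
printf 'From: tux-lug@domain.com\nSubject: meeting\n' \
  | grep -E '^(From|Cc|To).*tux-lug'
# prints: From: tux-lug@domain.com
```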
Installing the spamassassin package
root
:
yum install spamassassin
~/.procmailrc
file:
INCLUDERC=/etc/mail/spamassassin/spamassassin-default.rc
/etc/mail/spamassassin/spamassassin-default.rc
contains a simple Procmail rule that activates SpamAssassin for all incoming email. If an email is determined to be spam, it is tagged in the header as such and the title is prepended with the following pattern:
*****SPAM*****
:0 Hw
* ^X-Spam-Status: Yes
spam
spam
.
spamd
) and the client application (spamc). Configuring SpamAssassin this way, however, requires root
access to the host.
spamd
daemon, type the following command:
systemctl start spamassassin.service
systemctl enable spamassassin.service
~/.procmailrc
file. For a system-wide configuration, place it in /etc/procmailrc
:
INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc
mutt
.
mutt
offer SSL-encrypted email sessions.
POP
and IMAP
protocols pass authentication information unencrypted, it is possible for an attacker to gain access to user accounts by collecting usernames and passwords as they are passed over the network.
IMAP
and POP
have known port numbers (993
and 995
, respectively) that the MUA uses to authenticate and download messages.
IMAP
and POP
users on the email server is a simple matter.
Avoid using self-signed certificates
IMAP
or POP
, change to the /etc/pki/dovecot/
directory, edit the certificate parameters in the /etc/pki/dovecot/dovecot-openssl.conf
configuration file as you prefer, and type the following commands, as root
:
dovecot]# rm -f certs/dovecot.pem private/dovecot.pem
dovecot]# /usr/libexec/dovecot/mkcert.sh
/etc/dovecot/conf.d/10-ssl.conf
file:
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem
systemctl restart dovecot.service
command to restart the dovecot
daemon.
stunnel
command can be used as an SSL encryption wrapper around the standard, non-secure connections to IMAP
or POP
services.
stunnel
utility uses external OpenSSL libraries included with Fedora to provide strong cryptography and to protect the network connections. It is recommended to apply to a CA to obtain an SSL certificate, but it is also possible to create a self-signed certificate.
Installing the stunnel package
stunnel
, first ensure the stunnel package is installed on your system by running, as root
:
yum install stunnel
/etc/pki/tls/certs/
directory, and type the following command:
certs]# make stunnel.pem
stunnel
configuration file, for example /etc/stunnel/mail.conf
, with the following content:
cert = /etc/pki/tls/certs/stunnel.pem
[pop3s]
accept = 995
connect = 110
[imaps]
accept = 993
connect = 143
stunnel
with the created configuration file using the /usr/bin/stunnel /etc/stunnel/mail.conf
command, it will be possible to use an IMAP
or a POP
email client and connect to the email server using SSL encryption.
stunnel
, refer to the stunnel
man page or the documents in the /usr/share/doc/stunnel/
directory.
sendmail
and sendmail-cf
packages.
/usr/share/sendmail-cf/README
— Contains information on the m4
macro processor, file locations for Sendmail, supported mailers, how to access enhanced features, and more.
sendmail
and aliases
man pages contain helpful information covering various Sendmail options and the proper configuration of the Sendmail /etc/mail/aliases
file.
/usr/share/doc/postfix/
— Contains a large amount of information about ways to configure Postfix.
/usr/share/doc/fetchmail/
— Contains a full list of Fetchmail features in the FEATURES
file and an introductory FAQ
document.
/usr/share/doc/procmail/
— Contains a README
file that provides an overview of Procmail, a FEATURES
file that explores every program feature, and an FAQ
file with answers to many common configuration questions.
procmail
— Provides an overview of how Procmail works and the steps involved with filtering email.
procmailrc
— Explains the rc
file format used to construct recipes.
procmailex
— Gives a number of useful, real-world examples of Procmail recipes.
procmailsc
— Explains the weighted scoring technique used by Procmail to match a particular recipe to a message.
/usr/share/doc/spamassassin/
— Contains a large amount of information pertaining to SpamAssassin.
.procmailrc
files and use Procmail scoring to decide if a particular action should be taken.
LDAP
(Lightweight Directory Access Protocol) is a set of open protocols used to access centrally stored information over a network. It is based on the X.500
standard for directory sharing, but is less complex and resource-intensive. For this reason, LDAP is sometimes referred to as “X.500 Lite”.
Using Mozilla NSS
objectClass
definition, and can be found in schema files located in the /etc/openldap/slapd.d/cn=config/cn=schema/
directory.
[id] dn: distinguished_name
attribute_type: attribute_value…
attribute_type: attribute_value…
…
slapd
service as described in Section 11.1.4, “Running an OpenLDAP Server”.
ldapadd
utility to add entries to the LDAP directory.
ldapsearch
utility to verify that the slapd
service is accessing the information correctly.
Table 11.1. List of OpenLDAP packages
Package | Description |
---|---|
openldap | A package containing the libraries necessary to run the OpenLDAP server and client applications. |
openldap-clients | A package containing the command line utilities for viewing and modifying directories on an LDAP server. |
openldap-servers | A package containing both the services and utilities to configure and run an LDAP server. This includes the Standalone LDAP Daemon, slapd . |
openldap-servers-sql | A package containing the SQL support module. |
Table 11.2. List of commonly installed additional LDAP packages
Package | Description |
---|---|
nss-pam-ldapd | A package containing nslcd , a local LDAP name service that allows a user to perform local LDAP queries. |
mod_authz_ldap |
A package containing
mod_authz_ldap , the LDAP authorization module for the Apache HTTP Server. This module uses the short form of the distinguished name for a subject and the issuer of the client SSL certificate to determine the distinguished name of the user within an LDAP directory. It is also capable of authorizing users based on attributes of that user's LDAP directory entry, determining access to assets based on the user and group privileges of the asset, and denying access for users with expired passwords. Note that the mod_ssl module is required when using the mod_authz_ldap module.
|
yum
command in the following form:
yum
install
package…
root
:
yum install openldap openldap-clients openldap-servers
root
) to run this command. For more information on how to install new packages in Fedora, refer to Section 5.2.4, “Installing Packages”.
slapd
service:
Table 11.3. List of OpenLDAP server utilities
Command | Description |
---|---|
slapacl | Allows you to check the access to a list of attributes. |
slapadd | Allows you to add entries from an LDIF file to an LDAP directory. |
slapauth | Allows you to check a list of IDs for authentication and authorization permissions. |
slapcat | Allows you to pull entries from an LDAP directory in the default format and save them in an LDIF file. |
slapdn | Allows you to check a list of Distinguished Names (DNs) based on available schema syntax. |
slapindex | Allows you to re-index the slapd directory based on the current content. Run this utility whenever you change indexing options in the configuration file. |
slappasswd | Allows you to create an encrypted user password to be used with the ldapmodify utility, or in the slapd configuration file. |
slapschema | Allows you to check the compliance of a database with the corresponding schema. |
slaptest | Allows you to check the LDAP server configuration. |
Make sure the files have the correct owner
root
can run slapadd
, the slapd
service runs as the ldap
user. Because of this, the directory server is unable to modify any files created by slapadd
. To correct this issue, after running the slapadd
utility, type the following at a shell prompt:
chown -R ldap:ldap /var/lib/ldap
Stop slapd before using these utilities
slapd
service before using slapadd
, slapcat
, or slapindex
. You can do so by typing the following at a shell prompt as root
:
systemctl stop slapd.service
slapd
service, refer to Section 11.1.4, “Running an OpenLDAP Server”.
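Putting the steps above together, a typical offline import might look as follows; the backup.ldif file name is only an example:

~]# systemctl stop slapd.service
~]# slapadd -l backup.ldif
~]# chown -R ldap:ldap /var/lib/ldap
~]# systemctl start slapd.service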
Table 11.4. List of OpenLDAP client utilities
Command | Description |
---|---|
ldapadd | Allows you to add entries to an LDAP directory, either from a file, or from standard input. It is a symbolic link to ldapmodify -a . |
ldapcompare | Allows you to compare given attribute with an LDAP directory entry. |
ldapdelete | Allows you to delete entries from an LDAP directory. |
ldapexop | Allows you to perform extended LDAP operations. |
ldapmodify | Allows you to modify entries in an LDAP directory, either from a file, or from standard input. |
ldapmodrdn | Allows you to modify the RDN value of an LDAP directory entry. |
ldappasswd | Allows you to set or change the password for an LDAP user. |
ldapsearch | Allows you to search LDAP directory entries. |
ldapurl | Allows you to compose or decompose LDAP URLs. |
ldapwhoami | Allows you to perform a whoami operation on an LDAP server. |
ldapsearch
, each of these utilities is more easily used by referencing a file containing the changes to be made rather than typing a command for each entry to be changed within an LDAP directory. The format of such a file is outlined in the man page for each utility.
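For example, a file passed to the ldapadd utility might contain an entry such as the following; the organizational unit and the dc=example,dc=com suffix are illustrative only:

dn: ou=People,dc=example,dc=com
objectClass: organizationalUnit
ou: People

Assuming the entry is saved as people.ldif, it could be added by typing:

~]$ ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f people.ldif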
/etc/openldap/
directory. The following table highlights the most important directories and files within this directory:
Table 11.5. List of OpenLDAP configuration files and directories
/etc/openldap/slapd.conf
file. Instead, it uses a configuration database located in the /etc/openldap/slapd.d/
directory. If you have an existing slapd.conf
file from a previous installation, you can convert it to the new format by running the following command as root
:
slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/
slapd
configuration consists of LDIF entries organized in a hierarchical directory structure, and the recommended way to edit these entries is to use the server utilities described in Section 11.1.2.1, “Overview of OpenLDAP Server Utilities”.
Do not edit LDIF files directly
slapd
service unable to start. Because of this, it is strongly advised that you avoid editing the LDIF files within the /etc/openldap/slapd.d/
directory directly.
/etc/openldap/slapd.d/cn=config.ldif
file. The following directives are commonly used:
olcAllows
olcAllows
directive allows you to specify which features to enable. It takes the following form:
olcAllows
: feature…
bind_v2
.
Table 11.6. Available olcAllows options
Option | Description |
---|---|
bind_v2 | Enables the acceptance of LDAP version 2 bind requests. |
bind_anon_cred | Enables an anonymous bind when the Distinguished Name (DN) is empty. |
bind_anon_dn | Enables an anonymous bind when the Distinguished Name (DN) is not empty. |
update_anon | Enables processing of anonymous update operations. |
proxy_authz_anon | Enables processing of anonymous proxy authorization control. |
olcConnMaxPending
olcConnMaxPending
directive allows you to specify the maximum number of pending requests for an anonymous session. It takes the following form:
olcConnMaxPending
: number
100
.
olcConnMaxPendingAuth
olcConnMaxPendingAuth
directive allows you to specify the maximum number of pending requests for an authenticated session. It takes the following form:
olcConnMaxPendingAuth
: number
1000
.
olcDisallows
olcDisallows
directive allows you to specify which features to disable. It takes the following form:
olcDisallows
: feature…
Table 11.7. Available olcDisallows options
Option | Description |
---|---|
bind_anon | Disables the acceptance of anonymous bind requests. |
bind_simple | Disables the simple bind authentication mechanism. |
tls_2_anon | Disables the enforcing of an anonymous session when the STARTTLS command is received. |
tls_authc | Disallows the STARTTLS command when authenticated. |
olcIdleTimeout
olcIdleTimeout
directive allows you to specify how many seconds to wait before closing an idle connection. It takes the following form:
olcIdleTimeout
: number
0
).
olcLogFile
olcLogFile
directive allows you to specify a file in which to write log messages. It takes the following form:
olcLogFile
: file_name
olcReferral
olcReferral
option allows you to specify a URL of a server to process the request in case the server is not able to handle it. It takes the following form:
olcReferral
: URL
olcWriteTimeout
olcWriteTimeout
option allows you to specify how many seconds to wait before closing a connection with an outstanding write request. It takes the following form:
olcWriteTimeout
: number
0
).
/etc/openldap/slapd.d/cn=config/olcDatabase={1}bdb.ldif
file. The following directives are commonly used in a database-specific configuration:
olcReadOnly
olcReadOnly
directive allows you to use the database in a read-only mode. It takes the following form:
olcReadOnly
: boolean
TRUE
(enable the read-only mode), or FALSE
(enable modifications of the database). The default option is FALSE
.
olcRootDN
olcRootDN
directive allows you to specify the user that is unrestricted by access controls or administrative limit parameters set for operations on the LDAP directory. It takes the following form:
olcRootDN
: distinguished_name
cn=Manager,dc=my-domain,dc=com
.
olcRootPW
olcRootPW
directive allows you to set a password for the user that is specified using the olcRootDN
directive. It takes the following form:
olcRootPW
: password
slappasswd
utility, for example:
~]$ slappasswd
New password:
Re-enter new password:
{SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD
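The generated hash can then be stored with the ldapmodify utility; the following LDIF sketch assumes the olcDatabase={1}bdb database mentioned above:

dn: olcDatabase={1}bdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD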
olcSuffix
olcSuffix
directive allows you to specify the domain for which to provide information. It takes the following form:
olcSuffix
: domain_name
dc=my-domain,dc=com
.
/etc/openldap/slapd.d/
directory also contains LDAP definitions that were previously located in /etc/openldap/schema/
. It is possible to extend the schema used by OpenLDAP to support additional attribute types and object classes using the default schema files as a guide. However, this task is beyond the scope of this chapter. For more information on this topic, refer to http://www.openldap.org/doc/admin/schema.html.
slapd
service, type the following at a shell prompt as root
:
systemctl start slapd.service
systemctl enable slapd.service
slapd
service, type the following at a shell prompt as root
:
systemctl stop slapd.service
systemctl disable slapd.service
slapd
service, type the following at a shell prompt as root
:
systemctl restart slapd.service
systemctl is-active slapd.service
root
:
yum install openldap openldap-clients nss-pam-ldapd
root
:
yum install migrationtools
/usr/share/migrationtools/
directory. Once installed, edit the /usr/share/migrationtools/migrate_common.ph
file and change the following lines to reflect the correct domain, for example:
# Default DNS domain
$DEFAULT_MAIL_DOMAIN = "example.com";

# Default base
$DEFAULT_BASE = "dc=example,dc=com";
migrate_all_online.sh
script with the default base set to dc=example,dc=com
, type:
export DEFAULT_BASE="dc=example,dc=com"
/usr/share/migrationtools/migrate_all_online.sh
Table 11.8. Commonly used LDAP migration scripts
Existing Name Service | Is LDAP Running? | Script to Use |
---|---|---|
/etc flat files | yes | migrate_all_online.sh |
/etc flat files | no | migrate_all_offline.sh |
NetInfo | yes | migrate_all_netinfo_online.sh |
NetInfo | no | migrate_all_netinfo_offline.sh |
NIS (YP) | yes | migrate_all_nis_online.sh |
NIS (YP) | no | migrate_all_nis_offline.sh |
README
and the migration-tools.txt
files in the /usr/share/doc/migrationtools/
directory.
/usr/share/doc/openldap-servers/guide.html
/usr/share/doc/openldap-servers/README.schema
man ldapadd
— Describes how to add entries to an LDAP directory.
man ldapdelete
— Describes how to delete entries within an LDAP directory.
man ldapmodify
— Describes how to modify entries within an LDAP directory.
man ldapsearch
— Describes how to search for entries within an LDAP directory.
man ldappasswd
— Describes how to set or change the password of an LDAP user.
man ldapcompare
— Describes how to use the ldapcompare
tool.
man ldapwhoami
— Describes how to use the ldapwhoami
tool.
man ldapmodrdn
— Describes how to modify the RDNs of entries.
man slapd
— Describes command line options for the LDAP server.
man slapadd
— Describes command line options used to add entries to a slapd
database.
man slapcat
— Describes command line options used to generate an LDIF file from a slapd
database.
man slapindex
— Describes command line options used to regenerate an index based upon the contents of a slapd
database.
man slappasswd
— Describes command line options used to generate user passwords for LDAP directories.
man ldap.conf
— Describes the format and options available within the configuration file for LDAP clients.
man slapd-config
— Describes the format and options available within the configuration directory.
smb.conf
FileCIFS
) protocol, and vsftpd, the primary FTP server shipped with Fedora. Additionally, it explains how to use the Printer Configuration tool to configure printers.
SMB
) protocol. Modern versions of this protocol are also known as the common Internet file system (CIFS
) protocol. It allows the networking of Microsoft Windows®, Linux, UNIX, and other operating systems together, enabling access to Windows-based file and printer shares. Samba's use of SMB
allows it to appear as a Windows server to Windows clients.
Installing the samba package
root
:
~]# yum install samba
4.1
:
WINS
) name server resolution
smbd
, nmbd
, and winbindd
). Three services (smb
, nmb
, and winbind
) control how the daemons are started, stopped, and other service-related features. These services act as different init scripts. Each daemon is listed in detail below, as well as which specific service has control over it.
smbd
smbd
server daemon provides file sharing and printing services to Windows clients. In addition, it is responsible for user authentication, resource locking, and data sharing through the SMB
protocol. The default ports on which the server listens for SMB
traffic are TCP
ports 139
and 445
.
smbd
daemon is controlled by the smb
service.
nmbd
nmbd
server daemon understands and replies to NetBIOS name service requests such as those produced by SMB/CIFS in Windows-based systems. These systems include Windows 95/98/ME, Windows NT, Windows 2000, Windows XP, and LanManager clients. It also participates in the browsing protocols that make up the Windows Network Neighborhood view. The default port that the server listens to for NMB
traffic is UDP
port 137
.
nmbd
daemon is controlled by the nmb
service.
winbindd
winbind
service resolves user and group information received from a server running Windows NT, 2000, 2003, Windows Server 2008, or Windows Server 2012. This makes Windows user and group information understandable by UNIX platforms. This is achieved by using Microsoft RPC calls, Pluggable Authentication Modules (PAM), and the Name Service Switch (NSS). This allows Windows NT domain users to appear and operate as UNIX users on a UNIX machine. Though bundled with the Samba distribution, the winbind
service is controlled separately from the smb
service.
winbindd
daemon is controlled by the winbind
service and does not require the smb
service to be started in order to operate. winbindd
is also used when Samba is an Active Directory member, and may also be used on a Samba domain controller (to implement nested groups and interdomain trust). Because winbind
is a client-side service used to connect to Windows NT-based servers, further discussion of winbind
is beyond the scope of this chapter.
Obtaining a list of utilities that are shipped with Samba
SMB
workgroup or domain on the network. Double-click one of the workgroup/domain icons to view a list of computers within the workgroup/domain.
smb://servername/sharename
findsmb
command. For each server found, it displays its IP
address, NetBIOS name, workgroup name, operating system, and SMB
server version.
~]$ smbclient //hostname/sharename -U username
Replace hostname with the hostname or IP
address of the Samba server you want to connect to, sharename with the name of the shared directory you want to browse, and username with the Samba username for the system. Enter the correct password or press Enter if no password is required for the user.
smb:\>
prompt, you have successfully logged in. Once you are logged in, type help
for a list of commands. If you wish to browse the contents of your home directory, replace sharename with your username. If the -U
switch is not used, the username of the current user is passed to the Samba server.
smbclient
, type exit
at the smb:\>
prompt.
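A short smbclient session might look like the following; the server name, share name, and file name are placeholders:

~]$ smbclient //server/share -U username
smb:\> ls
smb:\> get file.txt
smb:\> exit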
root
:
mount -t cifs //servername/sharename /mnt/point/ -o username=username,password=password
Installing cifs-utils package
root
:
~]# yum install cifs-utils
man cifs.upcall
.
man mount.cifs
.
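To mount a share persistently without placing the password on the command line, mount.cifs supports a credentials file. The following /etc/fstab entry and credentials file are a sketch; the paths are examples only:

//servername/sharename  /mnt/point  cifs  credentials=/root/.smbcredentials  0 0

The /root/.smbcredentials file contains the account details and should be readable only by root:

username=username
password=password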
CIFS servers that require plain text passwords
root
:
~]# echo 0x37 > /proc/fs/cifs/SecurityFlags
/etc/samba/smb.conf
) allows users to view their home directories as a Samba share. It also shares all printers configured for the system as Samba shared printers. You can attach a printer to the system and print to it from the Windows machines on your network.
/etc/samba/smb.conf
as its configuration file. If you change this configuration file, the changes do not take effect until you restart the Samba daemon with the following command, as root
:
~]# systemctl restart smb.service
/etc/samba/smb.conf
file:
workgroup = WORKGROUPNAME
server string = BRIEF COMMENT ABOUT SERVER
/etc/samba/smb.conf
file (after modifying it to reflect your needs and your system):
[sharename]
comment = Insert a comment here
path = /home/share/
valid users = tfox carole
public = no
writable = yes
printable = no
create mask = 0765
tfox
and carole
to read and write to the directory /home/share
, on the Samba server, from a Samba client.
smbpasswd -a username
.
root
:
~]# systemctl start smb.service
Setting up a domain member server
net join
command before starting the smb
service. Also, it is recommended to run winbind
before smbd
.
root
:
~]# systemctl stop smb.service
restart
option is a quick way of stopping and then starting Samba. This is the most reliable way to make configuration changes take effect after editing the configuration file for Samba. Note that the restart option starts the daemon even if it was not running originally.
root
:
~]# systemctl restart smb.service
condrestart
(conditional restart) option only starts smb
on the condition that it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running.
Applying the changes to the configuration
/etc/samba/smb.conf
file is changed, Samba automatically reloads it after a few minutes. Issuing a manual restart
or reload
is just as effective.
root
:
~]# systemctl condrestart smb.service
/etc/samba/smb.conf
file can be useful in case of a failed automatic reload by the smb
service. To ensure that the Samba server configuration file is reloaded without restarting the service, type the following command, as root
:
systemctl reload smb.service
smb
service does not start automatically at boot time. To configure Samba to start at boot time, type the following at a shell prompt as root
:
~]# systemctl enable smb.service
smb.conf
File/etc/samba/smb.conf
configuration file. Although the default smb.conf
file is well documented, it does not address complex topics such as LDAP, Active Directory, and the numerous domain controller implementations.
/etc/samba/smb.conf
file for a successful configuration.
/etc/samba/smb.conf
file shows a sample configuration needed to implement anonymous read-only file sharing. The security = share
parameter makes a share anonymous. Note that security levels for a single Samba server cannot be mixed. The security
directive is a global Samba parameter located in the [global]
configuration section of the /etc/samba/smb.conf
file.
[global]
workgroup = DOCS
netbios name = DOCS_SRV
security = share

[data]
comment = Documentation Samba Server
path = /export
read only = Yes
guest only = Yes
/etc/samba/smb.conf
file shows a sample configuration needed to implement anonymous read/write file sharing. To enable anonymous read/write file sharing, set the read only
directive to no
. The force user
and force group
directives are also added to enforce the ownership of any newly placed files specified in the share.
Do not use anonymous read/write servers
force user
) and group (force group
) in the /etc/samba/smb.conf
file.
[global]
workgroup = DOCS
netbios name = DOCS_SRV
security = share

[data]
comment = Data
path = /export
force user = docsbot
force group = users
read only = No
guest ok = Yes
/etc/samba/smb.conf
file shows a sample configuration needed to implement an anonymous print server. Setting browseable
to no
as shown does not list the printer in Windows Network Neighborhood. Although hidden from browsing, configuring the printer explicitly is possible. By connecting to DOCS_SRV
using NetBIOS, the client can have access to the printer if the client is also part of the DOCS
workgroup. It is also assumed that the client has the correct local printer driver installed, as the use client driver
directive is set to Yes
. In this case, the Samba server has no responsibility for sharing printer drivers to the client.
[global]
workgroup = DOCS
netbios name = DOCS_SRV
security = share
printcap name = cups
disable spoolss = Yes
show add printer wizard = No
printing = cups

[printers]
comment = All Printers
path = /var/spool/samba
guest ok = Yes
printable = Yes
use client driver = Yes
browseable = No
/etc/samba/smb.conf
file shows a sample configuration needed to implement a secure read/write print server. Setting the security
directive to user
forces Samba to authenticate client connections. Notice the [homes]
share does not have a force user
or force group
directive as the [public]
share does. The [homes]
share uses the authenticated user details for any files created as opposed to the force user
and force group
in [public]
.
[global]
workgroup = DOCS
netbios name = DOCS_SRV
security = user
printcap name = cups
disable spoolss = Yes
show add printer wizard = No
printing = cups

[homes]
comment = Home Directories
valid users = %S
read only = No
browseable = No

[public]
comment = Data
path = /export
force user = docsbot
force group = users
guest ok = Yes

[printers]
comment = All Printers
path = /var/spool/samba
printer admin = john, ed, @admins
create mask = 0600
guest ok = Yes
printable = Yes
use client driver = Yes
browseable = Yes
/etc/samba/smb.conf
file shows a sample configuration needed to implement an Active Directory domain member server. In this example, Samba authenticates users for services being run locally but is also a client of the Active Directory. Ensure that your kerberos realm
parameter is shown in all caps (for example realm = EXAMPLE.COM
). Since Windows 2000/2003/2008 requires Kerberos for Active Directory authentication, the realm
directive is required. If Active Directory and Kerberos are running on different servers, the password server
directive may be required to help the distinction.
[global]
realm = EXAMPLE.COM
security = ADS
encrypt passwords = yes
# Optional. Use only if Samba cannot determine the Kerberos server automatically.
password server = kerberos.example.com
/etc/samba/smb.conf
file on the member server
/etc/krb5.conf
file, on the member server
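A minimal /etc/krb5.conf for this setup might resemble the following sketch, reusing the realm and Kerberos server from the example above:

[libdefaults]
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM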
root
on the member server:
kinit administrator@EXAMPLE.COM
kinit
command is a Kerberos initialization script that references the Active Directory administrator account and Kerberos realm. Since Active Directory requires Kerberos tickets, kinit
obtains and caches a Kerberos ticket-granting ticket for client/server authentication.
root
on the member server:
net ads join -S windows1.example.com -U administrator%password
windows1
was automatically found in the corresponding Kerberos realm (the kinit
command succeeded), the net
command connects to the Active Directory server using its required administrator account and password. This creates the appropriate machine account on the Active Directory and grants permissions to the Samba domain member server to join the domain.
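After the join completes, the membership can be verified as root with the net utility:

~]# net ads testjoin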
The security option
security = ads
and not security = user
is used, a local password back end such as smbpasswd
is not needed. Older clients that do not support security = ads
are authenticated as if security = domain
had been set. This change does not affect functionality, and it allows local users who were not previously in the domain.
/etc/samba/smb.conf
file shows a sample configuration needed to implement a Windows NT4-based domain member server. Becoming a member server of an NT4-based domain is similar to connecting to an Active Directory. The main difference is NT4-based domains do not use Kerberos in their authentication method, making the /etc/samba/smb.conf
file simpler. In this instance, the Samba member server functions as a pass through to the NT4-based domain server.
[global]
workgroup = DOCS
netbios name = DOCS_SRV
security = domain

[homes]
comment = Home Directories
valid users = %S
read only = No
browseable = No

[public]
comment = Data
path = /export
force user = docsbot
force group = users
guest ok = Yes
/etc/samba/smb.conf
file to convert the server to a Samba-based PDC. If Windows NT-based servers are upgraded to Windows 2000/2003/2008, the /etc/samba/smb.conf
file is easily modifiable to incorporate the infrastructure change to Active Directory if needed.
Make sure you join the domain before starting Samba
/etc/samba/smb.conf
file, join the domain before starting Samba by typing the following command as root
:
net rpc join -U administrator%password
-S
option, which specifies the domain server hostname, does not need to be stated in the net rpc join
command. Samba uses the hostname specified by the workgroup
directive in the /etc/samba/smb.conf
file instead of it being stated explicitly.
A mixed Samba/Windows domain controller environment
tdbsam
tdbsam
password database back end. Replacing the aging smbpasswd
back end, tdbsam
has numerous improvements that are explained in more detail in Section 12.1.8, “Samba Account Information Databases”. The passdb backend
directive controls which back end is to be used for the PDC.
/etc/samba/smb.conf
file shows a sample configuration needed to implement a tdbsam
password database back end.
[global]
workgroup = DOCS
netbios name = DOCS_SRV
passdb backend = tdbsam
security = user
add user script = /usr/sbin/useradd -m "%u"
delete user script = /usr/sbin/userdel -r "%u"
add group script = /usr/sbin/groupadd "%g"
delete group script = /usr/sbin/groupdel "%g"
add user to group script = /usr/sbin/usermod -G "%g" "%u"
add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null -g machines "%u"
# The following specifies the default logon script.
# Per user logon scripts can be specified in the user
# account using pdbedit
logon script = logon.bat
# This sets the default profile path.
# Set per user paths with pdbedit
logon drive = H:
domain logons = Yes
os level = 35
preferred master = Yes
domain master = Yes
[homes]
comment = Home Directories
valid users = %S
read only = No
[netlogon]
comment = Network Logon Service
path = /var/lib/samba/netlogon/scripts
browseable = No
read only = No
# For profiles to work, create a user directory under the
# path shown, for example:
# mkdir -p /var/lib/samba/profiles/john
[Profiles]
comment = Roaming Profile Share
path = /var/lib/samba/profiles
read only = No
browseable = No
guest ok = Yes
profile acls = Yes
# Other resource shares ...
tdbsam
follow these steps:
smb.conf
file as shown in the example above.
root
user to the Samba password database:
smbpasswd -a root
smb
service.
groupadd -f users
groupadd -f nobody
groupadd -f ntadmins
net groupmap add ntgroup="Domain Users" unixgroup=users
net groupmap add ntgroup="Domain Guests" unixgroup=nobody
net groupmap add ntgroup="Domain Admins" unixgroup=ntadmins
net rpc rights grant 'DOCS\Domain Admins' SetMachineAccountPrivilege -S PDC -U root
Limitations of the tdbsam authentication back end
tdbsam
authentication back end. LDAP is recommended in these cases.
security = user
directive is not listed in the /etc/samba/smb.conf
file, it is used by Samba. If the server accepts the client's username/password, the client can then mount multiple shares without specifying a password for each instance. Samba can also accept session-based username/password requests. The client maintains multiple authentication contexts by using a unique UID for each logon.
/etc/samba/smb.conf
file, the security = user
directive that sets user-level security is:
[GLOBAL]
...
security = user
...
/etc/samba/smb.conf
file:
[GLOBAL]
...
security = domain
workgroup = MARKETING
...
/etc/samba/smb.conf
file, the following directives make Samba an Active Directory member server:
[GLOBAL]
...
security = ADS
realm = EXAMPLE.COM
password server = kerberos.example.com
...
Avoid using the server security mode
/etc/samba/smb.conf
, the following directives enable Samba to operate in server security mode:
[GLOBAL]
...
encrypt passwords = Yes
security = server
password server = "NetBIOS_of_Domain_Controller"
...
/etc/samba/smb.conf
file, the security = share
directive that sets share-level security is:
[GLOBAL]
...
security = share
...
/etc/passwd
type back ends. With a plain text back end, all usernames and passwords are sent unencrypted between the client and the Samba server. This method is highly insecure and its use is not recommended under any circumstances. In addition, some Windows clients may not support plain text password authentication at all.
smbpasswd
smbpasswd
back end utilizes a plain ASCII text layout that includes the MS Windows LanMan and NT account, and encrypted password information. The smbpasswd
back end lacks the storage of the Windows NT/2000/2003 SAM extended controls. The smbpasswd
back end is not recommended because it does not scale well or hold any Windows information, such as RIDs for NT-based groups. The tdbsam
back end solves these issues for use in a smaller database (250 users), but is still not an enterprise-class solution.
ldapsam_compat
ldapsam_compat
back end allows continued OpenLDAP support for use with upgraded versions of Samba. This option is normally used when migrating to Samba 3.0.
tdbsam
tdbsam
password back end provides an ideal database back end for local servers, servers that do not need built-in database replication, and servers that do not require the scalability or complexity of LDAP. The tdbsam
back end includes all of the smbpasswd
database information as well as the previously-excluded SAM information. The inclusion of the extended SAM data allows Samba to implement the same account and system access controls as seen with Windows NT/2000/2003/2008-based systems.
tdbsam
back end is recommended for 250 users at most. Larger organizations should use Active Directory or LDAP integration due to scalability and possible network infrastructure concerns.
ldapsam
ldapsam
back end provides an optimal distributed account installation method for Samba. LDAP is optimal because of its ability to replicate its database to any number of servers such as an OpenLDAP Server. LDAP databases are light-weight and scalable, and as such are preferred by large enterprises. For more information on LDAP, refer to Section 11.1, “OpenLDAP”.
/usr/share/doc/samba/LDAP/samba.schema
) has changed. These files contain the attribute syntax definitions and objectclass definitions that the ldapsam
back end needs in order to function properly.
ldapsam
back end for your Samba server, you will need to configure slapd
to include one of these schema file. See Section 11.1.3.3, “Extending Schema” for directions on how to do this.
Make sure the openldap-server package is installed
openldap-server
package installed if you want to use the ldapsam
back end.
TCP
/IP
. NetBIOS-based networking uses broadcast (UDP
) messaging to accomplish browse list management. Without NetBIOS and WINS as the primary method for TCP
/IP
hostname resolution, other methods such as static files (/etc/hosts
) or DNS
, must be used.
/etc/samba/smb.conf
file for a local master browser (or no browsing at all) in a domain controller environment is the same as workgroup configuration (see Section 12.1.4, “Configuring a Samba Server”).
/etc/samba/smb.conf
file in which the Samba server is serving as a WINS server:
[global]
	wins support = Yes
Using WINS
smb.conf
Settings/etc/samba/smb.conf
configuration for CUPS support:
[global]
	load printers = Yes
	printing = cups
	printcap name = cups

[printers]
	comment = All Printers
	path = /var/spool/samba
	browseable = No
	public = Yes
	guest ok = Yes
	writable = No
	printable = Yes
	printer admin = @ntadmins

[print$]
	comment = Printer Drivers Share
	path = /var/lib/samba/drivers
	write list = ed, john
	printer admin = ed, john
print$
directive contains printer drivers for clients to access if not available locally. The print$
directive is optional and may not be required depending on the organization.
browseable
to Yes
enables the printer to be viewed in the Windows Network Neighborhood, provided the Samba server is set up correctly in the domain/workgroup.
findsmb
findsmb subnet_broadcast_address
findsmb
program is a Perl script which reports information about SMB
-aware systems on a specific subnet. If no subnet is specified the local subnet is used. Items displayed include IP
address, NetBIOS name, workgroup or domain name, operating system, and version.
findsmb
as any valid user on a system:
~]$ findsmb
IP ADDR NETBIOS NAME WORKGROUP/OS/VERSION
------------------------------------------------------------------
10.1.59.25 VERVE [MYGROUP] [Unix] [Samba 3.0.0-15]
10.1.59.26 STATION22 [MYGROUP] [Unix] [Samba 3.0.2-7.FC1]
10.1.56.45 TREK +[WORKGROUP] [Windows 5.0] [Windows 2000 LAN Manager]
10.1.57.94 PIXEL [MYGROUP] [Unix] [Samba 3.0.0-15]
10.1.57.137 MOBILE001 [WORKGROUP] [Windows 5.0] [Windows 2000 LAN Manager]
10.1.57.141 JAWS +[KWIKIMART] [Unix] [Samba 2.2.7a-security-rollup-fix]
10.1.56.159 FRED +[MYGROUP] [Unix] [Samba 3.0.0-14.3E]
10.1.59.192 LEGION *[MYGROUP] [Unix] [Samba 2.2.7-security-rollup-fix]
10.1.56.205 NANCYN +[MYGROUP] [Unix] [Samba 2.2.7a-security-rollup-fix]
net
net protocol function misc_options target_options
net
utility is similar to the net
utility used for Windows and MS-DOS. The first argument is used to specify the protocol to use when executing a command. The protocol
option can be ads
, rap
, or rpc
for specifying the type of server connection. Active Directory uses ads
, Win9x/NT3 uses rap
, and Windows NT4/2000/2003/2008 uses rpc
. If the protocol is omitted, net
automatically tries to determine it.
wakko
:
~]$ net -l share -S wakko
Password:
Enumerating shared resources (exports) on remote server:
Share name Type Description
---------- ---- -----------
data Disk Wakko data share
tmp Disk Wakko tmp share
IPC$ IPC IPC Service (Samba Server)
ADMIN$ IPC IPC Service (Samba Server)
wakko
:
~]$ net -l user -S wakko
root password:
User name Comment
-----------------------------
andriusb Documentation
joe Marketing
lisa Sales
nmblookup
nmblookup options netbios_name
nmblookup
program resolves NetBIOS names into IP
addresses. The program broadcasts its query on the local subnet until the target machine replies.
IP
address of the NetBIOS name trek
:
~]$ nmblookup trek
querying trek on 10.1.59.255
10.1.56.45 trek<00>
pdbedit
pdbedit options
pdbedit
program manages accounts located in the SAM database. All back ends are supported including smbpasswd
, LDAP, and the tdb
database library.
~]$ pdbedit -a kristin
new password:
retype new password:
Unix username:        kristin
NT username:
Account Flags:        [U          ]
User SID:             S-1-5-21-1210235352-3804200048-1474496110-2012
Primary Group SID:    S-1-5-21-1210235352-3804200048-1474496110-2077
Full Name:
Home Directory:       \\wakko\kristin
HomeDir Drive:
Logon Script:
Profile Path:         \\wakko\kristin\profile
Domain:               WAKKO
Account desc:
Workstations:
Munged dial:
Logon time:           0
Logoff time:          Mon, 18 Jan 2038 22:14:07 GMT
Kickoff time:         Mon, 18 Jan 2038 22:14:07 GMT
Password last set:    Thu, 29 Jan 2004 08:29:28 GMT
Password can change:  Thu, 29 Jan 2004 08:29:28 GMT
Password must change: Mon, 18 Jan 2038 22:14:07 GMT
~]$ pdbedit -v -L kristin
Unix username:        kristin
NT username:
Account Flags:        [U          ]
User SID:             S-1-5-21-1210235352-3804200048-1474496110-2012
Primary Group SID:    S-1-5-21-1210235352-3804200048-1474496110-2077
Full Name:
Home Directory:       \\wakko\kristin
HomeDir Drive:
Logon Script:
Profile Path:         \\wakko\kristin\profile
Domain:               WAKKO
Account desc:
Workstations:
Munged dial:
Logon time:           0
Logoff time:          Mon, 18 Jan 2038 22:14:07 GMT
Kickoff time:         Mon, 18 Jan 2038 22:14:07 GMT
Password last set:    Thu, 29 Jan 2004 08:29:28 GMT
Password can change:  Thu, 29 Jan 2004 08:29:28 GMT
Password must change: Mon, 18 Jan 2038 22:14:07 GMT
~]$ pdbedit -L
andriusb:505:
joe:503:
lisa:504:
kristin:506:
~]$ pdbedit -x joe
~]$ pdbedit -L
andriusb:505:
lisa:504:
kristin:506:
rpcclient
rpcclient server options
rpcclient
program issues administrative commands using Microsoft RPCs, which provide access to the Windows administration graphical user interfaces (GUIs) for systems management. This is most often used by advanced users who understand the full complexity of Microsoft RPCs.
smbcacls
smbcacls //server/share filename options
smbcacls
program modifies Windows ACLs on files and directories shared by a Samba server or a Windows server.
smbclient
smbclient //server/share password options
smbclient
program is a versatile UNIX client which provides functionality similar to ftp
.
smbcontrol
smbcontrol -i options
smbcontrol options destination messagetype parameters
smbcontrol
program sends control messages to running smbd
, nmbd
, or winbindd
daemons. Executing smbcontrol -i
runs commands interactively until a blank line or a 'q'
is entered.
smbpasswd
smbpasswd options username password
smbpasswd
program manages encrypted passwords. This program can be run by a superuser to change any user's password and also by an ordinary user to change their own Samba password.
smbspool
smbspool job user title copies options filename
smbspool
program is a CUPS-compatible printing interface to Samba. Although designed for use with CUPS printers, smbspool
can work with non-CUPS printers as well.
smbstatus
smbstatus options
smbstatus
program displays the status of current connections to a Samba server.
smbtar
smbtar options
smbtar
program backs up and restores Windows-based share files and directories to a local tape archive. Though similar to the tar
command, the two are not compatible.
testparm
testparm options filename hostname IP_address
testparm
program checks the syntax of the /etc/samba/smb.conf
file. If your smb.conf
file is in the default location (/etc/samba/smb.conf
) you do not need to specify the location. Specifying the host name and IP
address to the testparm
program verifies that the hosts.allow
and hosts.deny
files are configured correctly. The testparm
program also displays a summary of your /etc/samba/smb.conf
file and the server's role (stand-alone, domain, etc.) after testing. This is convenient when debugging as it excludes comments and concisely presents information for experienced administrators to read.
~]$ testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[homes]"
Processing section "[printers]"
Processing section "[tmp]"
Processing section "[html]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions
<enter>
# Global parameters
[global]
	workgroup = MYGROUP
	server string = Samba Server
	security = SHARE
	log file = /var/log/samba/%m.log
	max log size = 50
	socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
	dns proxy = No

[homes]
	comment = Home Directories
	read only = No
	browseable = No

[printers]
	comment = All Printers
	path = /var/spool/samba
	printable = Yes
	browseable = No

[tmp]
	comment = Wakko tmp
	path = /tmp
	guest only = Yes

[html]
	comment = Wakko www
	path = /var/www/html
	force user = andriusb
	force group = users
	read only = No
	guest only = Yes
wbinfo
wbinfo options
wbinfo
program displays information from the winbindd
daemon. The winbindd
daemon must be running for wbinfo
to work.
/usr/share/doc/samba/
— All additional files included with the Samba distribution. This includes all helper scripts, sample configuration files, and documentation.
smb.conf
samba
smbd
nmbd
winbind
NNTP
protocol are also available. This is an alternative to receiving mailing list emails.
FTP
) is one of the oldest and most commonly used protocols found on the Internet today. Its purpose is to reliably transfer files between computer hosts on a network without requiring the user to log directly into the remote host or have knowledge of how to use the remote system. It allows users to access files on remote systems using a standard set of simple commands.
FTP
protocol, as well as configuration options for the primary FTP
server shipped with Fedora, vsftpd
.
FTP
is so prevalent on the Internet, it is often required to share files to the public. System administrators, therefore, should be aware of the FTP
protocol's unique characteristics.
FTP
requires multiple network ports to work properly. When an FTP
client application initiates a connection to an FTP
server, it opens port 21
on the server — known as the command port. This port is used to issue all commands to the server. Any data requested from the server is returned to the client via a data port. The port number for data connections, and the way in which data connections are initialized, vary depending upon whether the client requests the data in active or passive mode.
FTP
protocol for transferring data to the client application. When an active mode data transfer is initiated by the FTP
client, the server opens a connection from port 20
on the server to the IP
address and a random, unprivileged port (greater than 1024
) specified by the client. This arrangement means that the client machine must be allowed to accept connections over any port above 1024
. With the growth of insecure networks, such as the Internet, the use of firewalls to protect client machines is now prevalent. Because these client-side firewalls often deny incoming connections from active mode FTP
servers, passive mode was devised.
FTP
client application. When requesting data from the server, the FTP
client indicates it wants to access the data in passive mode and the server provides the IP
address and a random, unprivileged port (greater than 1024
) on the server. The client then connects to that port on the server to download the requested information.
FTP
server. This also simplifies the process of configuring firewall rules for the server. See Section 12.2.5.8, “Network Options” for more information about limiting passive ports.
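For example, a pair of vsftpd.conf directives (described in Section 12.2.5.8) can pin passive data connections to a fixed range; the port numbers below are arbitrary examples:

```ini
pasv_enable=YES
# Arbitrary example range; open the same range in the firewall
pasv_min_port=50000
pasv_max_port=50100
```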
FTP
servers:
proftpd
— A fast, stable, and highly configurable FTP server.
vsftpd
— A fast, secure FTP
daemon which is the preferred FTP
server for Fedora. The remainder of this section focuses on vsftpd
.
vsftpd
vsftpd
) is designed from the ground up to be fast, stable, and, most importantly, secure. vsftpd
is the only stand-alone FTP
server distributed with Fedora, due to its ability to handle large numbers of connections efficiently and securely.
vsftpd
has three primary aspects:
libcap
library, tasks that usually require full root
privileges can be executed more safely from a less privileged process.
chroot
jail — Whenever possible, processes are change-rooted to the directory being shared; this directory is then considered a chroot
jail. For example, if the directory /var/ftp/
is the primary shared directory, vsftpd
reassigns /var/ftp/
to the new root directory, known as /
. This prevents potentially malicious activity in any directory outside the new root directory.
vsftpd
deals with requests:
FTP
clients and run with as close to no privileges as possible.
HTTP
Server, vsftpd
launches unprivileged child processes to handle incoming connections. This allows the privileged parent process to be as small as possible and handle relatively few tasks.
FTP
clients is handled by unprivileged child processes in a chroot
jail — Because these child processes are unprivileged and only have access to the directory being shared, a crashed process only allows the attacker access to the shared files.
vsftpd
vsftpd
RPM installs the daemon (/usr/sbin/vsftpd
), its configuration and related files, as well as FTP
directories onto the system. The following lists the files and directories related to vsftpd
configuration:
/etc/rc.d/init.d/vsftpd
— The initialization script (initscript) used by the systemctl
command to start, stop, or reload vsftpd
. See Section 12.2.4, “Starting and Stopping vsftpd
” for more information about using this script.
/etc/pam.d/vsftpd
— The Pluggable Authentication Modules (PAM) configuration file for vsftpd
. This file specifies the requirements a user must meet to log in to the FTP
server. For more information on PAM, refer to the Using Pluggable Authentication Modules (PAM) chapter of the Fedora 20 Managing Single Sign-On and Smart Cards guide.
/etc/vsftpd/vsftpd.conf
— The configuration file for vsftpd
. See Section 12.2.5, “ vsftpd
Configuration Options” for a list of important options contained within this file.
/etc/vsftpd/ftpusers
— A list of users not allowed to log into vsftpd
. By default, this list includes the root
, bin
, and daemon
users, among others.
/etc/vsftpd/user_list
— This file can be configured to either deny or allow access to the users listed, depending on whether the userlist_deny
directive is set to YES
(default) or NO
in /etc/vsftpd/vsftpd.conf
. If /etc/vsftpd/user_list
is used to grant access to users, the usernames listed must not appear in /etc/vsftpd/ftpusers
.
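For instance, to invert the default behavior and treat /etc/vsftpd/user_list as a list of the only users allowed to log in, a vsftpd.conf sketch would be:

```ini
userlist_enable=YES
# NO inverts the list: only the listed users may log in
userlist_deny=NO
userlist_file=/etc/vsftpd/user_list
```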
/var/ftp/
— The directory containing files served by vsftpd
. It also contains the /var/ftp/pub/
directory for anonymous users. Both directories are world-readable, but writable only by the root
user.
vsftpd
vsftpd
RPM installs the /etc/rc.d/init.d/vsftpd
script, which can be accessed using the systemctl
command.
root
type:
systemctl start vsftpd.service
root
type:
systemctl stop vsftpd.service
restart
option is a shorthand way of stopping and then starting vsftpd
. This is the most efficient way to make configuration changes take effect after editing the configuration file for vsftpd
.
root
type:
systemctl restart vsftpd.service
condrestart
(conditional restart) option only starts vsftpd
if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running.
root
type:
systemctl condrestart vsftpd.service
vsftpd
service does not start automatically at boot time. To configure the vsftpd
service to start at boot time, use a service manager such as systemctl
. See Chapter 6, Services and Daemons for more information on how to configure services in Fedora.
vsftpd
FTP
domains. This is a technique called multihoming. One way to multihome using vsftpd
is by running multiple copies of the daemon, each with its own configuration file.
IP
addresses to network devices or alias network devices on the system. For more information about configuring network devices, device aliases, and additional information about network configuration scripts, refer to the Red Hat Enterprise Linux 7 Networking Guide.
FTP
domains must be configured to reference the correct machine. For information about BIND and its configuration files, refer to the Red Hat Enterprise Linux 7 Networking Guide.
/etc/vsftpd
directory, calling systemctl start vsftpd.service
results in the /etc/rc.d/init.d/vsftpd
initscript starting the same number of processes as the number of configuration files. Each configuration file must have a unique name in the /etc/vsftpd/
directory and must be readable and writable only by root
.
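As a sketch of such a multihomed setup (addresses and file names are hypothetical), two files in /etc/vsftpd/ might each bind one address:

```ini
# /etc/vsftpd/site-a.conf
listen=YES
listen_address=192.0.2.10

# /etc/vsftpd/site-b.conf
listen=YES
listen_address=192.0.2.11
```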
vsftpd
Configuration Optionsvsftpd
may not offer the level of customization other widely available FTP
servers have, it offers enough options to fill most administrators' needs. The fact that it is not overly feature-laden limits configuration and programmatic errors.
vsftpd
is handled by its configuration file, /etc/vsftpd/vsftpd.conf
. Each directive is on its own line within the file and takes the following format:
directive=value
Do not use spaces
#
) and are ignored by the daemon.
vsftpd.conf
.
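Putting the format rules together, a valid fragment of vsftpd.conf might read as follows; note there are no spaces around the equals sign, and comment lines begin with a hash sign (#):

```ini
# Disable anonymous logins (a hash sign starts a comment)
anonymous_enable=NO
local_enable=YES
```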
Securing the vsftpd service
vsftpd
, refer to the Fedora 20 Security Guide.
/etc/vsftpd/vsftpd.conf
. All directives not explicitly found or commented out within vsftpd
's configuration file are set to their default value.
vsftpd
daemon.
listen
— When enabled, vsftpd
runs in stand-alone mode. Fedora sets this value to YES
. This directive cannot be used in conjunction with the listen_ipv6
directive.
NO
.
listen_ipv6
— When enabled, vsftpd
runs in stand-alone mode, but listens only to IPv6
sockets. This directive cannot be used in conjunction with the listen
directive.
NO
.
session_support
— When enabled, vsftpd
attempts to maintain login sessions for each user through Pluggable Authentication Modules (PAM). For more information, refer to the Using Pluggable Authentication Modules (PAM) chapter of the Red Hat Enterprise Linux 6 Managing Single Sign-On and Smart Cards guide and the PAM man pages. If session logging is not necessary, disabling this option allows vsftpd
to run with fewer processes and lower privileges.
YES
.
anonymous_enable
— When enabled, anonymous users are allowed to log in. The usernames anonymous
and ftp
are accepted.
YES
.
banned_email_file
— If the deny_email_enable
directive is set to YES
, this directive specifies the file containing a list of anonymous email passwords which are not permitted access to the server.
/etc/vsftpd/banned_emails
.
banner_file
— Specifies the file containing text displayed when a connection is established to the server. This option overrides any text specified in the ftpd_banner
directive.
cmds_allowed
— Specifies a comma-delimited list of FTP
commands allowed by the server. All other commands are rejected.
deny_email_enable
— When enabled, any anonymous user utilizing email passwords specified in the /etc/vsftpd/banned_emails
are denied access to the server. The name of the file referenced by this directive can be specified using the banned_email_file
directive.
NO
.
ftpd_banner
— When enabled, the string specified within this directive is displayed when a connection is established to the server. This option can be overridden by the banner_file
directive.
vsftpd
displays its standard banner.
local_enable
— When enabled, local users are allowed to log into the system.
YES
.
pam_service_name
— Specifies the PAM service name for vsftpd
.
ftp
. Note, in Fedora, the value is set to vsftpd
.
NO
. Note, in Fedora, the value is set to YES
.
userlist_deny
— When used in conjunction with the userlist_enable
directive and set to NO
, all local users are denied access unless the username is listed in the file specified by the userlist_file
directive. Because access is denied before the client is asked for a password, setting this directive to NO
prevents local users from submitting unencrypted passwords over the network.
YES
.
userlist_enable
— When enabled, the users listed in the file specified by the userlist_file
directive are denied access. Because access is denied before the client is asked for a password, users are prevented from submitting unencrypted passwords over the network.
NO
, however under Fedora the value is set to YES
.
userlist_file
— Specifies the file referenced by vsftpd
when the userlist_enable
directive is enabled.
/etc/vsftpd/user_list
and is created during installation.
anonymous_enable
directive must be set to YES
.
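As one possible combination of the anonymous-user directives described below, a download-only anonymous server could be sketched as:

```ini
anonymous_enable=YES
# Download-only: no uploads, no directory creation, world-readable files only
anon_world_readable_only=YES
anon_upload_enable=NO
anon_mkdir_write_enable=NO
```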
anon_mkdir_write_enable
— When enabled in conjunction with the write_enable
directive, anonymous users are allowed to create new directories within a parent directory which has write permissions.
NO
.
anon_root
— Specifies the directory vsftpd
changes to after an anonymous user logs in.
anon_upload_enable
— When enabled in conjunction with the write_enable
directive, anonymous users are allowed to upload files within a parent directory which has write permissions.
NO
.
anon_world_readable_only
— When enabled, anonymous users are only allowed to download world-readable files.
YES
.
ftp_username
— Specifies the local user account (listed in /etc/passwd
) used for the anonymous FTP
user. The home directory specified in /etc/passwd
for the user is the root directory of the anonymous FTP
user.
ftp
.
no_anon_password
— When enabled, the anonymous user is not asked for a password.
NO
.
secure_email_list_enable
— When enabled, only a specified list of email passwords for anonymous logins are accepted. This is a convenient way to offer limited security to public content without the need for virtual users.
/etc/vsftpd/email_passwords
. The file format is one password per line, with no trailing white spaces.
NO
.
local_enable
directive must be set to YES
.
chmod_enable
— When enabled, the FTP
command SITE CHMOD
is allowed for local users. This command allows the users to change the permissions on files.
YES
.
chroot_list_enable
— When enabled, the local users listed in the file specified in the chroot_list_file
directive are placed in a chroot
jail upon log in.
chroot_local_user
directive, the local users listed in the file specified in the chroot_list_file
directive are not placed in a chroot
jail upon log in.
NO
.
chroot_list_file
— Specifies the file containing a list of local users referenced when the chroot_list_enable
directive is set to YES
.
/etc/vsftpd/chroot_list
.
chroot_local_user
— When enabled, local users are change-rooted to their home directories after logging in.
NO
.
Avoid enabling the chroot_local_user option
chroot_local_user
opens up a number of security issues, especially for users with upload privileges. For this reason, it is not recommended.
guest_enable
— When enabled, all non-anonymous users are logged in as the user guest
, which is the local user specified in the guest_username
directive.
NO
.
guest_username
— Specifies the username the guest
user is mapped to.
ftp
.
local_root
— Specifies the directory vsftpd
changes to after a local user logs in.
local_umask
— Specifies the umask value for file creation. Note that the default value is in octal form (a numerical system with a base of eight), which includes a "0" prefix. Otherwise the value is treated as a base-10 integer.
022
.
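The umask arithmetic is the same as the shell's: bits set in the umask are cleared from the default creation mode (666 for regular files). A quick demonstration using the shell's own umask rather than vsftpd itself:

```shell
# With a umask of 022, a new file receives mode 666 & ~022 = 644
umask 022
f=/tmp/umask_demo_$$
touch "$f"
stat -c %a "$f"    # prints 644 (GNU stat)
rm -f "$f"
```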
passwd_chroot_enable
— When enabled in conjunction with the chroot_local_user
directive, vsftpd
change-roots local users based on the occurrence of the /./
in the home directory field within /etc/passwd
.
NO
.
user_config_dir
— Specifies the path to a directory containing configuration files bearing the names of local system users that contain specific settings for that user. Any directive in the user's configuration file overrides those found in /etc/vsftpd/vsftpd.conf
.
dirlist_enable
— When enabled, users are allowed to view directory lists.
YES
.
dirmessage_enable
— When enabled, a message is displayed whenever a user enters a directory with a message file. This message resides within the current directory. The name of this file is specified in the message_file
directive and is .message
by default.
NO
. Note, in Fedora, the value is set to YES
.
force_dot_files
— When enabled, files beginning with a dot (.
) are listed in directory listings, with the exception of the .
and ..
files.
NO
.
hide_ids
— When enabled, all directory listings show ftp
as the user and group for each file.
NO
.
message_file
— Specifies the name of the message file when using the dirmessage_enable
directive.
.message
.
text_userdb_names
— When enabled, text usernames and group names are used in place of UID and GID entries. Enabling this option may slow performance of the server.
NO
.
use_localtime
— When enabled, directory listings reveal the local time for the computer instead of GMT.
NO
.
download_enable
— When enabled, file downloads are permitted.
YES
.
chown_uploads
— When enabled, all files uploaded by anonymous users are owned by the user specified in the chown_username
directive.
NO
.
chown_username
— Specifies the ownership of anonymously uploaded files if the chown_uploads
directive is enabled.
root
.
write_enable
— When enabled, FTP
commands which can change the file system are allowed, such as DELE
, RNFR
, and STOR
.
YES
.
vsftpd
's logging behavior.
dual_log_enable
— When enabled in conjunction with xferlog_enable
, vsftpd
writes two files simultaneously: a wu-ftpd
-compatible log to the file specified in the xferlog_file
directive (/var/log/xferlog
by default) and a standard vsftpd
log file specified in the vsftpd_log_file
directive (/var/log/vsftpd.log
by default).
NO
.
log_ftp_protocol
— When enabled in conjunction with xferlog_enable
and with xferlog_std_format
set to NO
, all FTP
commands and responses are logged. This directive is useful for debugging.
NO
.
syslog_enable
— When enabled in conjunction with xferlog_enable
, all logging normally written to the standard vsftpd
log file specified in the vsftpd_log_file
directive (/var/log/vsftpd.log
by default) is sent to the system logger instead under the FTPD
facility.
NO
.
vsftpd_log_file
— Specifies the vsftpd
log file. For this file to be used, xferlog_enable
must be enabled and xferlog_std_format
must either be set to NO
or, if xferlog_std_format
is set to YES
, dual_log_enable
must be enabled. It is important to note that if syslog_enable
is set to YES
, the system log is used instead of the file specified in this directive.
/var/log/vsftpd.log
.
xferlog_enable
— When enabled, vsftpd
logs connections (vsftpd
format only) and file transfer information to the log file specified in the vsftpd_log_file
directive (/var/log/vsftpd.log
by default). If xferlog_std_format
is set to YES
, file transfer information is logged but connections are not, and the log file specified in xferlog_file
(/var/log/xferlog
by default) is used instead. It is important to note that both log files and log formats are used if dual_log_enable
is set to YES
.
NO
. Note, in Fedora, the value is set to YES
.
xferlog_file
— Specifies the wu-ftpd
-compatible log file. For this file to be used, xferlog_enable
must be enabled and xferlog_std_format
must be set to YES
. It is also used if dual_log_enable
is set to YES
.
/var/log/xferlog
.
xferlog_std_format
— When enabled in conjunction with xferlog_enable
, only a wu-ftpd
-compatible file transfer log is written to the file specified in the xferlog_file
directive (/var/log/xferlog
by default). It is important to note that this file only logs file transfers and does not log connections to the server.
NO
. Note, in Fedora, the value is set to YES
.
Maintaining compatibility with older log file formats
wu-ftpd
FTP
server, the xferlog_std_format
directive is set to YES
under Fedora. However, this setting means that connections to the server are not logged.
vsftpd
format and maintain a wu-ftpd
-compatible file transfer log, set dual_log_enable
to YES
.
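For instance, a logging stanza that keeps both the wu-ftpd-compatible transfer log and the standard vsftpd log could be sketched as:

```ini
xferlog_enable=YES
xferlog_std_format=YES
# Write both log formats simultaneously
dual_log_enable=YES
xferlog_file=/var/log/xferlog
vsftpd_log_file=/var/log/vsftpd.log
```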
wu-ftpd
-compatible file transfer log is not important, either set xferlog_std_format
to NO
, comment the line with a hash sign (#
), or delete the line entirely.
vsftpd
interacts with the network.
accept_timeout
— Specifies the amount of time, in seconds, for a client using passive mode to establish a connection.
60
.
anon_max_rate
— Specifies the maximum data transfer rate for anonymous users in bytes per second.
0
, which does not limit the transfer rate.
connect_from_port_20
— When enabled, vsftpd
runs with enough privileges to open port 20 on the server during active mode data transfers. Disabling this option allows vsftpd
to run with fewer privileges, but may be incompatible with some FTP
clients.
NO
. Note, in Fedora, the value is set to YES
.
connect_timeout
— Specifies the maximum amount of time a client using active mode has to respond to a data connection, in seconds.
60
.
data_connection_timeout
— Specifies the maximum amount of time data transfers are allowed to stall, in seconds. Once triggered, the connection to the remote client is closed.
300
.
ftp_data_port
— Specifies the port used for active data connections when connect_from_port_20
is set to YES
.
20
.
idle_session_timeout
— Specifies the maximum amount of time between commands from a remote client. Once triggered, the connection to the remote client is closed.
300
.
listen_address
— Specifies the IP
address on which vsftpd
listens for network connections.
Running multiple copies of vsftpd
vsftpd
serving different IP
addresses, the configuration file for each copy of the vsftpd
daemon must have a different value for this directive. See Section 12.2.4.1, “Starting Multiple Copies of vsftpd
” for more information about multihomed FTP
servers.
listen_address6
— Specifies the IPv6
address on which vsftpd
listens for network connections when listen_ipv6
is set to YES
.
Running multiple copies of vsftpd
vsftpd
serving different IP
addresses, the configuration file for each copy of the vsftpd
daemon must have a different value for this directive. See Section 12.2.4.1, “Starting Multiple Copies of vsftpd
” for more information about multihomed FTP
servers.
listen_port
— Specifies the port on which vsftpd
listens for network connections.
21
.
local_max_rate
— Specifies the maximum rate data is transferred for local users logged into the server in bytes per second.
0
, which does not limit the transfer rate.
max_clients
— Specifies the maximum number of simultaneous clients allowed to connect to the server when it is running in standalone mode. Any additional client connections would result in an error message.
0
, which does not limit connections.
max_per_ip
— Specifies the maximum number of clients allowed to connect from the same source IP
address.
0
, which does not limit connections.
pasv_address
— Specifies the IP
address for the public facing IP
address of the server for servers behind Network Address Translation (NAT) firewalls. This enables vsftpd
to hand out the correct return address for passive mode connections.
pasv_enable
— When enabled, passive mode connections are allowed.
YES
.
pasv_max_port
— Specifies the highest possible port sent to the FTP
clients for passive mode connections. This setting is used to limit the port range so that firewall rules are easier to create.
0
, which does not limit the highest passive port range. The value must not exceed 65535
.
pasv_min_port
— Specifies the lowest possible port sent to the FTP
clients for passive mode connections. This setting is used to limit the port range so that firewall rules are easier to create.
0
, which does not limit the lowest passive port range. The value must not be lower than 1024
.
pasv_promiscuous
— When enabled, data connections are not checked to make sure they are originating from the same IP
address. This setting is only useful for certain types of tunneling.
Avoid enabling the pasv_promiscuous option
Enabling this directive disables the security check that ensures passive data connections originate from the same IP
address as the control connection that initiates the data transfer.
NO
.
port_enable
— When enabled, active mode connections are allowed.
YES
.
vsftpd
, refer to the following resources.
/usr/share/doc/vsftpd/
directory — This directory contains a README
with basic information about the software. The TUNING
file contains basic performance tuning tips and the SECURITY/
directory contains information about the security model employed by vsftpd
.
vsftpd
related man pages — There are a number of man pages for the daemon and configuration files. The following lists some of the more important man pages.
man vsftpd
— Describes available command line options for vsftpd
.
man vsftpd.conf
— Contains a detailed list of options available within the configuration file for vsftpd
.
man 5 hosts_access
— Describes the format and options available within the TCP wrappers configuration files: hosts.allow
and hosts.deny
.
vsftpd
project page is a great place to locate the latest documentation and to contact the author of the software.
FTP
.
FTP
protocol from the IETF.
Using the CUPS web application or command line tools
system-config-printer
command from the command line to start the tool.
New Printer
dialog (refer to Section 12.3.2, “Starting Printer Setup”).
New Printer
dialog (refer to Section 12.3.1, “Starting the Printer Configuration Tool”).
9100
by default)
New Printer
dialog (refer to Section 12.3.2, “Starting Printer Setup”).
New Printer
dialog (refer to Section 12.3.2, “Starting Printer Setup”).
Installing the samba-client package
root
:
yum install samba-client
New Printer
dialog (refer to Section 12.3.2, “Starting Printer Setup”).
dellbox
and the printer share is r2
.
guest
for Windows servers, or nobody
for Samba servers.
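Assuming the example share above, the SMB URI could take one of the following general forms (the username, password, and workgroup are illustrative placeholders):

```
smb://dellbox/r2
smb://username:password@dellbox/r2
smb://WORKGROUP/dellbox/r2
```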
Be careful when choosing a password
Selecting a printer driver
Describe Printer
enter a unique name for the printer in the Printer Name field. The printer name can contain letters, numbers, dashes (-), and underscores (_); it must not contain any spaces. You can also use the Description and Location fields to add further printer information. Both fields are optional, and may contain spaces.
lpstat -o
. The last few lines look similar to the following:
Example 12.1. Example of lpstat -o
output
$ lpstat -o
Charlie-60 twaugh 1024 Tue 08 Feb 2011 16:42:11 GMT
Aaron-61 twaugh 1024 Tue 08 Feb 2011 16:42:44 GMT
Ben-62 root 1024 Tue 08 Feb 2011 16:45:42 GMT
lpstat -o
and then use the command cancel job number
. For example, cancel 60
would cancel the print job in Example 12.1, “Example of lpstat -o
output”. You cannot cancel print jobs that were started by other users with the cancel
command. However, you can enforce deletion of such a job by issuing the cancel -U root job_number
command. To prevent such canceling, change the printer operation policy to Authenticated
to force root authentication.
lp sample.txt
prints the text file sample.txt
. The print filter determines what type of file it is and converts it into a format the printer can understand.
man lp
lpr
command that allows you to print files from the command line.
man cancel
man mpage
man cupsd
man cupsd.conf
man classes.conf
man lpstat
lpstat
command, which displays status information about classes, jobs, and printers.
NTP
protocol is implemented by a daemon running in user space.
ntpd
and chronyd
, which are available from the repositories in the ntp and chrony packages respectively. This section describes the use of the chrony suite of utilities to update the system clock on systems that do not fit into the conventional permanently networked, always on, dedicated server category.
chronyd
, a daemon that runs in user space, and chronyc, a command line program for making adjustments to chronyd
. Systems which are not permanently connected, or not permanently powered up, take a relatively long time to adjust their system clocks with ntpd
. This is because many small corrections are made based on observations of the clock's drift and offset. Temperature changes, which may be significant when powering up a system, affect the stability of hardware clocks. Although adjustments begin within a few milliseconds of booting a system, acceptable accuracy may take anything from ten seconds from a warm restart to a number of hours depending on your requirements, operating environment and hardware. Because chrony is a different implementation of the NTP
protocol than ntpd
, it can adjust the system clock more rapidly.
ntpd
and chronyd
is in the algorithms used to control the computer's clock. Things chronyd
can do better than ntpd
are:
chronyd
can work well when external time references are only intermittently accessible, whereas ntpd
needs regular polling of time reference to work well.
chronyd
can perform well even when the network is congested for longer periods of time.
chronyd
can usually synchronize the clock faster and with better time accuracy.
chronyd
quickly adapts to sudden changes in the rate of the clock, for example, due to changes in the temperature of the crystal oscillator, whereas ntpd
may need a long time to settle down again.
chronyd
never steps the time after the clock has been synchronized at system start, in order not to upset other running programs. ntpd
can be configured to never step the time too, but it has to use a different means of adjusting the clock, which has some disadvantages.
chronyd
can adjust the rate of the clock on a Linux system over a larger range, which allows it to operate even on machines with a broken or unstable clock, for example, some virtual machines.
chronyd
can do that ntpd
cannot do:
chronyd
provides support for isolated networks where the only method of time correction is manual entry, for example, by an administrator looking at a clock. chronyd
can examine the errors corrected at different updates to estimate the rate at which the computer gains or loses time, and use this estimate to trim the computer clock subsequently.
chronyd
provides support to work out the rate of gain or loss of the real-time clock, the hardware clock, that maintains the time when the computer is turned off. It can use this data when the system boots to set the system time using an adjusted value of the time taken from the real-time clock. This is, at time of writing, only available in Linux.
ntpd
can do that chronyd
cannot do:
ntpd
fully supports NTP
version 4 (RFC 5905), including broadcast, multicast, manycast clients and servers, and the orphan mode. It also supports extra authentication schemes based on public-key cryptography (RFC 5906). chronyd
uses NTP
version 3 (RFC 1305), which is compatible with version 4.
ntpd
includes drivers for many reference clocks whereas chronyd
relies on other programs, for example gpsd, to access the data from the reference clocks.
NTP
daemon (ntpd
) should be considered for systems which are normally kept permanently on. Systems which are required to use broadcast or multicast IP
, or to perform authentication of packets with the Autokey
protocol, should consider using ntpd
. Chrony only supports symmetric key authentication using a message authentication code (MAC) with MD5, SHA1 or stronger hash functions, whereas ntpd
also supports the Autokey
authentication protocol which can make use of the PKI system. Autokey
is described in RFC 5906.
chronyd
, running in user space, makes adjustments to the system clock which is running in the kernel. It does this by consulting external time sources, using the NTP
protocol, whenever network access allows it to do so. When external references are not available, chronyd
will use the last calculated drift stored in the drift file. It can also be commanded manually to make corrections, by chronyc.
chronyd
, can be controlled by the command line utility chronyc. This utility provides a command prompt which allows entering of a number of commands to make changes to chronyd
. The default configuration is for chronyd
to only accept commands from a local instance of chronyc, but chronyc can be used to alter the configuration so that chronyd
will allow external control. chronyc can be run remotely after first configuring chronyd
to accept remote connections. The IP
addresses allowed to connect to chronyd
should be tightly controlled.
chronyd
is /etc/chrony.conf
. The -f
option can be used to specify an alternate configuration file path. See the chronyd
man page for further options. For a complete list of the directives that can be used see http://chrony.tuxfamily.org/manual.html#Configuration-file. Below is a selection of configuration options:
NTP
connections to a machine acting as NTP
server. The default is not to allow connections.
Examples:
allow server1.example.com
allow 192.0.2.0/24
allow 2001:db8::/32
IPv6
address to be allowed access.
allow
directive (see section allow
), except that it allows control access (rather than NTP
client access) to a particular subnet or host. (“Control access” means that chronyc can be run on those hosts and successfully connect to chronyd
on this computer.) The syntax is identical. There is also a cmddeny all
directive with similar behavior to the cmdallow all
directive.
chronyd
(assuming no changes are made to the system clock behavior whilst it is not running). If this capability is to be used (via the dumponexit
command in the configuration file, or the dump
command in chronyc), the dumpdir
command should be used to define the directory where the measurement histories are saved.
chronyd
should save the measurement history for each of its time sources recorded whenever the program exits. (See the dumpdir
command above).
local
keyword is used to allow chronyd
to appear synchronized to real time (from the viewpoint of clients polling it), even if it has no current synchronization source. This option is normally used on computers in an isolated network, where several computers are required to synchronize to one another, this being the “master” which is kept vaguely in line with real time by manual input.
local stratum 10
A large value of 10 indicates that the clock is so many hops away from a reference clock that its time is unreliable. If the computer ever has access to another computer which is ultimately synchronized to a reference clock, it will almost certainly be at a stratum less than 10. Therefore, the choice of a high value like 10 for the
local
command prevents the machine’s own time from ever being confused with real time, were it ever to leak out to clients that have visibility of real servers.
log
command indicates that certain information is to be logged. It accepts the following options:
NTP
measurements and related information to a file called measurements.log
.
statistics.log
.
tracking.log
.
refclocks.log
.
tempcomp.log
.
logdir
command. An example of the command is:
log measurements statistics tracking
logdir /var/log/chrony
chronyd
will cause the system to gradually correct any time offset, by slowing down or speeding up the clock as required. In certain situations, the system clock may be so far adrift that this slewing process would take a very long time to correct the system clock. This directive forces chronyd
to step the system clock if the adjustment is larger than a threshold value, but only if there were no more clock updates since chronyd
was started than a specified limit (a negative value can be used to disable the limit). This is particularly useful when using reference clocks, because the initstepslew
directive only works with NTP
sources.
makestep 1000 10
This would step the system clock if the adjustment is larger than 1000 seconds, but only in the first ten clock updates.
chronyd
will give up and exit (a negative value can be used to never exit). In both cases a message is sent to syslog.
maxchange 1000 1 2
After the first clock update, chronyd will check the offset on every clock update; it will ignore two adjustments larger than 1000 seconds and exit on a subsequent one.
chronyd
's tasks is to work out how fast or slow the computer’s clock runs relative to its reference sources. In addition, it computes an estimate of the error bounds around the estimated value. If the range of error is too large, it indicates that the measurements have not settled down yet, and that the estimated gain or loss rate is not very reliable. The maxupdateskew
parameter is the threshold for determining whether an estimate is too unreliable to be used. By default, the threshold is 1000 ppm. The format of the syntax is:
maxupdateskew skew-in-ppm
Typical values for skew-in-ppm might be 100 for a dial-up connection to servers over a telephone line, and 5 or 10 for a computer on a LAN. It should be noted that this is not the only means of protection against using unreliable estimates. At all times,
chronyd
keeps track of both the estimated gain or loss rate, and the error bound on the estimate. When a new estimate is generated following another measurement from one of the sources, a weighted combination algorithm is used to update the master estimate. So if chronyd
has an existing highly-reliable master estimate and a new estimate is generated which has large error bounds, the existing master estimate will dominate in the new master estimate.
chronyd
selects synchronization source from available sources, it will prefer the one with minimum synchronization distance. However, to avoid frequent reselecting when there are sources with similar distance, a fixed distance is added to the distance for sources that are currently not selected. This can be set with the reselectdist
option. By default, the distance is 100 microseconds.
reselectdist dist-in-seconds
stratumweight
directive sets how much distance should be added per stratum to the synchronization distance when chronyd
selects the synchronization source from available sources.
stratumweight dist-in-seconds
By default, dist-in-seconds is 1 second. This means that sources with lower stratum are usually preferred to sources with higher stratum even when their distance is significantly worse. Setting
stratumweight
to 0 makes chronyd
ignore stratum when selecting the source.
rtcfile
directive defines the name of the file in which chronyd
can save parameters associated with tracking the accuracy of the system’s real-time clock (RTC). The format of the syntax is:
rtcfile /var/lib/chrony/rtc
chronyd
saves information in this file when it exits and when the writertc
command is issued in chronyc. The information saved is the RTC’s error at some epoch, that epoch (in seconds since January 1 1970), and the rate at which the RTC gains or loses time. Not all real-time clocks are supported as their code is system-specific. Note that if this directive is used then the real-time clock should not be manually adjusted as this would interfere with chrony's need to measure the rate at which the real-time clock drifts if it was adjusted at random intervals.
rtcsync
directive is present in the /etc/chrony.conf
file by default. This directive tells the kernel that the system clock is kept synchronized, and the kernel will update the real-time clock every 11 minutes.
chronyd
just as editing the configuration files would, access to chronyc should be limited. Passwords can be specified in the key file, written in ASCII or HEX, to restrict the use of chronyc. One of the entries is used to restrict the use of operational commands and is referred to as the command key. In the default configuration, a random command key is generated automatically on start. It should not be necessary to specify or alter it manually.
NTP
keys to authenticate packets received from remote NTP
servers or peers. The two sides need to share a key with identical ID, hash type and password in their key file. This requires manually creating the keys and copying them over a secure medium, such as SSH
. If the key ID was, for example, 10 then the systems that act as clients must have a line in their configuration files in the following format:
server w.x.y.z key 10
peer w.x.y.z key 10
/etc/chrony.conf
file. The default entry in the configuration file is:
keyfile
/etc/chrony.keys
/etc/chrony.conf
using the commandkey
directive, it is the key chronyd
will use for authentication of user commands. The directive in the configuration file takes the following form:
commandkey 1
/etc/chrony.keys
, for the command key is:
1 SHA1 HEX:A6CFC50C9C93AB6E5A19754C246242FC5471BCDF
Where
1
is the key ID, SHA1 is the hash function to use, HEX
is the format of the key, and A6CFC50C9C93AB6E5A19754C246242FC5471BCDF
is the key randomly generated when chronyd was started for the first time. The key can be given in hexadecimal or ASCII format (the default).
NTP
servers or peers, can be as simple as the following:
20 foobar
Where
20
is the key ID and foobar
is the secret authentication key. The default hash is MD5, and ASCII is the default format for the key.
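Putting both entry types together, an /etc/chrony.keys file might therefore contain entries such as the following (the hexadecimal string is only an example of a generated key, not one to reuse):

```
# Command key, referenced by the commandkey directive in /etc/chrony.conf
1 SHA1 HEX:A6CFC50C9C93AB6E5A19754C246242FC5471BCDF
# Symmetric NTP authentication key shared with a server or peer
20 foobar
```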
chronyd
is configured to listen for commands only from localhost
(127.0.0.1
and ::1
) on port 323
. To access chronyd
remotely with chronyc, any bindcmdaddress
directives in the /etc/chrony.conf
file should be removed to enable listening on all interfaces and the cmdallow
directive should be used to allow commands from the remote IP
address, network, or subnet. In addition, port 323
has to be opened in the firewall in order to connect from a remote system. Note that the allow
directive is for NTP
access whereas the cmdallow
directive is to enable the receiving of remote commands. It is possible to make these changes temporarily using chronyc running locally. Edit the configuration file to make persistent changes.
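As a sketch of the persistent change described above, the relevant /etc/chrony.conf line on the machine to be administered might be (192.0.2.0/24 is an illustrative management subnet; any bindcmdaddress lines are assumed to have been removed, and UDP port 323 opened in the firewall):

```
# Allow chronyc command connections from a management subnet
cmdallow 192.0.2.0/24
```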
UDP
, so it needs to be authorized before issuing operational commands. To authorize, use the authhash
and password
commands as follows:
chronyc> authhash SHA1
chronyc> password HEX:A6CFC50C9C93AB6E5A19754C246242FC5471BCDF
200 OK
-a
option will run the authhash
and password
commands automatically.
activity
, authhash
, dns
, exit
, help
, password
, quit
, rtcdata
, sources
, sourcestats
, tracking
, waitsync
.
root
:
~]# yum install chrony
The default location for the chrony daemon is /usr/sbin/chronyd
. The command line utility will be installed to /usr/bin/chronyc
.
chronyd
, issue the following command:
~]$ systemctl status chronyd
chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
Active: active (running) since Wed 2013-06-12 22:23:16 CEST; 11h ago
chronyd
, issue the following command as root
:
~]# systemctl start chronyd
chronyd
starts automatically at system start, issue the following command as root
:
~]# systemctl enable chronyd
chronyd
, issue the following command as root
:
~]# systemctl stop chronyd
chronyd
from starting automatically at system start, issue the following command as root
:
~]# systemctl disable chronyd
tracking
, sources
, and sourcestats
commands.
~]$ chronyc tracking
Reference ID : 1.2.3.4 (a.b.c)
Stratum : 3
Ref time (UTC) : Fri Feb 3 15:00:29 2012
System time : 0.000001501 seconds slow of NTP time
Last offset : -0.000001632 seconds
RMS offset : 0.000002360 seconds
Frequency : 331.898 ppm fast
Residual freq : 0.004 ppm
Skew : 0.154 ppm
Root delay : 0.373169 seconds
Root dispersion : 0.024780 seconds
Update interval : 64.2 seconds
Leap status : Normal
The fields are as follows:
IP
address) if available, of the server to which the computer is currently synchronized. If this is 127.127.1.1
it means the computer is not synchronized to any external source and that you have the “local” mode operating (via the local command in chronyc, or the local
directive in the /etc/chrony.conf
file (see section local
)).
chronyd
never steps the system clock, because any jump in the timescale can have adverse consequences for certain application programs. Instead, any error in the system clock is corrected by slightly speeding up or slowing down the system clock until the error has been removed, and then returning to the system clock’s normal speed. A consequence of this is that there will be a period when the system clock (as read by other programs using the gettimeofday()
system call, or by the date command in the shell) will be different from chronyd
's estimate of the current true time (which it reports to NTP
clients when it is operating in server mode). The value reported on this line is the difference due to this effect.
chronyd
was not correcting it. It is expressed in ppm (parts per million). For example, a value of 1ppm would mean that when the system’s clock thinks it has advanced 1 second, it has actually advanced by 1.000001 seconds relative to true time.
skew
next) of the existing frequency value. A weighted average is computed for the new frequency, with weights depending on these accuracies. If the measurements from the reference source follow a consistent trend, the residual will be driven to zero over time.
chronyd
is accessing. The optional argument -v can be specified, meaning verbose. In this case, extra caption lines are shown as a reminder of the meanings of the columns.
~]$ chronyc sources
210 Number of sources = 3
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
#* GPS0 0 4 377 11 -479ns[ -621ns] +/- 134ns
^? a.b.c 2 6 377 23 -923us[ -924us] +/- 43ms
^+ d.e.f 1 6 377 21 -2629us[-2619us] +/- 86ms
The columns are as follows:
^
means a server, =
means a peer and #
indicates a locally connected reference clock.
chronyd
is currently synchronized. “+” indicates acceptable sources which are combined with the selected source. “-” indicates acceptable sources which are excluded by the combining algorithm. “?” indicates sources to which connectivity has been lost or whose packets do not pass all tests. “x” indicates a clock which chronyd
thinks is a falseticker (its time is inconsistent with a majority of other sources). “~” indicates a source whose time appears to have too much variability. The “?” condition is also shown at start-up, until at least 3 samples have been gathered from it.
IP
address of the source, or reference ID for reference clocks.
chronyd
automatically varies the polling rate in response to prevailing conditions.
m
, h
, d
or y
indicate minutes, hours, days or years. A value of 10 years indicates there were no samples received from this source yet.
ns
(indicating nanoseconds), us
(indicating microseconds), ms
(indicating milliseconds), or s
(indicating seconds). The number to the left of the square brackets shows the original measurement, adjusted to allow for any slews applied to the local clock since. The number following the +/-
indicator shows the margin of error in the measurement. Positive offsets indicate that the local clock is fast of the source.
sourcestats
command displays information about the drift rate and offset estimation process for each of the sources currently being examined by chronyd
. The optional argument -v
can be specified, meaning verbose. In this case, extra caption lines are shown as a reminder of the meanings of the columns.
~]$ chronyc sourcestats
210 Number of sources = 1
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
===============================================================================
abc.def.ghi
The columns are as follows:
IP
address of the NTP
server (or peer) or reference ID of the reference clock to which the rest of the line relates.
chronyd
discards older samples and re-runs the regression until the number of runs becomes acceptable.
root
:
~]# chronyc
chronyc> password commandkey-password
200 OK
chronyc> makestep
200 OK
Where commandkey-password is the command key or password stored in the key file.
rtcfile
directive is used, as manual adjustments at random intervals would interfere with chrony's measurement of the rate at which the real-time clock drifts.
-a
will run the authhash
and password
commands automatically. This means that the interactive session illustrated above can be replaced by: chronyc -a makestep
/etc/chrony.conf
are similar to the following:
driftfile /var/lib/chrony/drift
commandkey 1
keyfile /etc/chrony.keys
The command key ID is generated at install time and should correspond with the
commandkey
value in the key file, /etc/chrony.keys
.
root
, add the addresses of four NTP
servers as follows:
server 0.pool.ntp.org offline
server 1.pool.ntp.org offline
server 2.pool.ntp.org offline
server 3.pool.ntp.org offline
The
offline
option can be useful in preventing systems from trying to activate connections. The chrony daemon will wait for chronyc to inform it that the system is connected to the network or Internet.
settime
command is used.
root
, edit the /etc/chrony.conf
as follows:
driftfile /var/lib/chrony/drift
commandkey 1
keyfile /etc/chrony.keys
initstepslew 10 client1 client3 client6
local stratum 8
manual
allow 192.0.2.0
Where
192.0.2.0
is the network or subnet address from which the clients are allowed to connect.
root
, edit the /etc/chrony.conf
as follows:
server master
driftfile /var/lib/chrony/drift
logdir /var/log/chrony
log measurements statistics tracking
keyfile /etc/chrony.keys
commandkey 24
local stratum 10
initstepslew 20 master
allow 192.0.2.123
Where
192.0.2.123
is the address of the master, and master
is the host name of the master. Clients with this configuration will resynchronize with the master if it restarts.
/etc/chrony.conf
file should be the same except that the local
and allow
directives should be omitted.
root
:
~]# chronyc
chronyc must run as root
if some of the restricted commands are to be used.
chronyc>
help
to list all of the commands.
~]# chronyc command
chronyd
, issue a command as root
in the following format:
~]# chronyc -h
hostname
Where hostname is the hostname of a system running chronyd to which you want to connect for remote administration. The default is to connect to the daemon on the localhost.
chronyd
on a non-default port, issue a command as root
in the following format:
~]# chronyc -h
hostname -p
port
Where port is the port in use for controlling and monitoring by the instance of chronyd
to be connected to.
password
command, preceded by the authhash
command if the key used a hash different from MD5, at the chronyc command prompt as follows:
chronyc> password secretpasswordwithnospaces
200 OK
SSH
. An SSH
connection should be established to the remote machine and the ID of the command key from /etc/chrony.conf
and the command key in /etc/chrony.keys
memorized or stored securely for the duration of the session.
chrony(1)
man page — Introduces the chrony daemon and the command-line interface tool.
chronyc(1)
man page — Describes the chronyc command-line interface tool including commands and command options.
chronyd(1)
man page — Describes the chronyd daemon including commands and command options.
chrony.conf(5)
man page — Describes the chrony configuration file.
/usr/share/doc/chrony/chrony.txt
— User guide for the chrony suite.
NTP
servers provide “Coordinated Universal Time” (UTC). Information about these time servers can be found at www.pool.ntp.org.
NTP
is implemented by a daemon running in user space. The default NTP
user space daemon in Fedora 20 is chronyd
. It must be disabled if you want to use the ntpd
daemon. See Chapter 13, Configuring NTP Using the chrony Suite for information on chrony.
rtc(4)
and hwclock(8)
man pages for information on hardware clocks. The system clock can keep time by using various clock sources. Usually, the Time Stamp Counter (TSC) is used. The TSC is a CPU register which counts the number of cycles since it was last reset. It is very fast, has a high resolution, and there are no interrupts. On system start, the system clock reads the time and date from the RTC. The time kept by the RTC will drift away from actual time by up to 5 minutes per month due to temperature variations. Hence the need for the system clock to be constantly synchronized with external time references. When the system clock is being synchronized by ntpd
, the kernel will in turn update the RTC every 11 minutes automatically.
NTP
servers are classified according to their synchronization distance from the atomic clocks which are the source of the time signals. The servers are thought of as being arranged in layers, or strata, from 1 at the top down to 15. Hence the word stratum is used when referring to a specific layer. Atomic clocks are referred to as Stratum 0 as this is the source, but no Stratum 0 packet is sent on the Internet; all stratum 0 atomic clocks are attached to a server which is referred to as stratum 1. These servers send out packets marked as Stratum 1. A server which is synchronized by means of packets marked stratum n
belongs to the next, lower, stratum and will mark its packets as stratum n+1
. Servers of the same stratum can exchange packets with each other but are still designated as belonging to just the one stratum, the stratum one below the best reference they are synchronized to. The designation Stratum 16 is used to indicate that the server is not currently synchronized to a reliable time source.
NTP
clients act as servers for those systems in the stratum below them.
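The numbering rule described above can be sketched as a small, hypothetical helper (not part of any NTP implementation):

```python
UNSYNCHRONIZED = 16  # stratum 16 means "not synchronized to a reliable source"

def advertised_stratum(upstream_packet_stratum):
    """Stratum a server marks on its own packets when it is synchronized
    by packets of the given stratum: one below, i.e. n becomes n + 1."""
    # Stratum 15 is the lowest usable layer; beyond that the server
    # can only advertise itself as unsynchronized.
    if upstream_packet_stratum >= 15:
        return UNSYNCHRONIZED
    return upstream_packet_stratum + 1

print(advertised_stratum(1))  # a client of a stratum-1 server sends stratum 2
```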
NTP
Strata:
NTP
used by Fedora is as described in RFC 1305 Network Time Protocol (Version 3) Specification, Implementation and Analysis and RFC 5905 Network Time Protocol Version 4: Protocol and Algorithms Specification
NTP
enables sub-second accuracy to be achieved. Over the Internet, accuracy to tens of milliseconds is normal. On a Local Area Network (LAN), 1 ms accuracy is possible under ideal conditions. This is because clock drift is now accounted for and corrected, which was not done in earlier, simpler, time protocol systems. A resolution of 233 picoseconds is provided by using 64-bit timestamps: 32 bits for seconds, 32 bits for fractional seconds.
NTP
represents the time as a count of the number of seconds since 00:00 (midnight) 1 January, 1900 GMT. As 32 bits are used to count the seconds, the time will “roll over” in 2036. However, NTP
works on the difference between timestamps so this does not present the same level of problem as other implementations of time protocols have done. If a hardware clock accurate to better than 68 years is available at boot time then NTP
will correctly interpret the current date. The NTP4
specification provides for an “Era Number” and an “Era Offset” which can be used to make software more robust when dealing with time lengths of more than 68 years. Note, please do not confuse this with the Unix Year 2038 problem.
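The 2036 rollover quoted above follows directly from the 32-bit seconds field; a quick sketch in Python (not part of any NTP implementation) shows when era 0 ends:

```python
from datetime import datetime, timedelta

# NTP counts seconds from the prime epoch, 00:00 on 1 January 1900.
ntp_epoch = datetime(1900, 1, 1)

# A 32-bit unsigned seconds field wraps after 2**32 seconds.
era0_end = ntp_epoch + timedelta(seconds=2**32)
print(era0_end)  # early February 2036
```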
NTP
protocol provides additional information to improve accuracy. Four timestamps are used to allow the calculation of round-trip time and server response time. In order for a system in its role as NTP
client to synchronize with a reference time server, a packet is sent with an “originate timestamp”. When the packet arrives, the time server adds a “receive timestamp”. After processing the request for time and date information and just before returning the packet, it adds a “transmit timestamp”. When the returning packet arrives at the NTP
client, a “receive timestamp” is generated. The client can now calculate the total round trip time and by subtracting the processing time derive the actual traveling time. By assuming the outgoing and return trips take equal time, the single-trip delay in receiving the NTP
data is calculated. The full NTP
algorithm is much more complex than is presented here.
ntpd
has determined the time should be. The system clock is adjusted slowly, at most at a rate of 0.5ms per second, to reduce this offset by changing the frequency of the counter being used. It will take at least 2000 seconds to adjust the clock by 1 second using this method. This slow change is referred to as slewing and cannot go backwards. If the time offset of the clock is more than 128ms (the default setting), ntpd
can “step” the clock forwards or backwards. If the time offset at system start is greater than 1000 seconds then the user, or an installation script, should make a manual adjustment. See Chapter 3, Configuring the Date and Time. With the -g
option to the ntpd
command (used by default), any offset at system start will be corrected, but during normal operation only offsets of up to 1000 seconds will be corrected.
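The 2000-second figure above is simple arithmetic on the maximum slew rate; as an illustration (using the numbers quoted in the text):

```python
# ntpd slews the clock at most 0.5 ms (500 microseconds) per second.
max_slew_us_per_s = 500
offset_us = 1_000_000  # a 1-second offset, expressed in microseconds

# Minimum time needed to remove the offset purely by slewing.
print(offset_us // max_slew_us_per_s, "seconds")  # 2000 seconds
```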
-x
option (unrelated to the -g
option). Using the -x
option to increase the stepping limit from 0.128s to 600s has a drawback because a different method of controlling the clock has to be used. It disables the kernel clock discipline and may have a negative impact on the clock accuracy. The -x
option can be added to the /etc/sysconfig/ntpd
configuration file.
ntpd
. The drift file is replaced, rather than just updated, and for this reason the drift file must be in a directory for which ntpd
has write permissions.
NTP
is entirely in UTC (Universal Time, Coordinated); time zones and DST (Daylight Saving Time) are applied locally by the system. The file /etc/localtime
is a copy of, or symlink to, a zone information file from /usr/share/zoneinfo
. The RTC may be in local time or in UTC, as specified by the third line of /etc/adjtime
, which will be one of LOCAL or UTC to indicate how the RTC clock has been set. Users can easily change this setting using the checkbox System Clock Uses UTC in the Date and Time graphical configuration tool. See Chapter 3, Configuring the Date and Time for information on how to use that tool. Running the RTC in UTC is recommended to avoid various problems when daylight saving time is changed.
ntpd
is explained in more detail in the man page ntpd(8)
. The resources section lists useful sources of information. See Section 14.20, “Additional Resources”.
NTPv4
added support for the Autokey Security Architecture, which is based on public asymmetric cryptography while retaining support for symmetric key cryptography. The Autokey Security Architecture is described in RFC 5906 Network Time Protocol Version 4: Autokey Specification. The man page ntp_auth(5)
describes the authentication options and commands for ntpd
.
NTP
packets with incorrect time information. On systems using the public pool of NTP
servers, this risk is mitigated by having more than three NTP
servers in the list of public NTP
servers in /etc/ntp.conf
. If only one time source is compromised or spoofed, ntpd
will ignore that source. You should conduct a risk assessment and consider the impact of incorrect time on your applications and organization. If you have internal time sources you should consider steps to protect the network over which the NTP
packets are distributed. If you conduct a risk assessment and conclude that the risk is acceptable, and the impact to your applications minimal, then you can choose not to use authentication.
disable auth
directive in the ntp.conf
file. Alternatively, authentication needs to be configured by using SHA1 or MD5 symmetric keys, or by public (asymmetric) key cryptography using the Autokey scheme. The Autokey scheme for asymmetric cryptography is explained in the ntp_auth(8)
man page and the generation of keys is explained in ntp-keygen(8
). To implement symmetric key cryptography, see Section 14.17.12, “Configuring Symmetric Authentication Using a Key” for an explanation of the key
option.
kvm-clock
. See the KVM guest timing management chapter of the Virtualization Host Configuration and Guest Installation Guide.
NTP
transmits information about pending leap seconds and applies them automatically.
ntpd
, reads the configuration file at system start or when the service is restarted. The default location for the file is /etc/ntp.conf
and you can view the file by entering the following command:
~]$ less /etc/ntp.conf
The configuration commands are explained briefly later in this chapter, see Section 14.17, “Configure NTP”, and more verbosely in the ntp.conf(5)
man page.
driftfile /var/lib/ntp/drift
If you change this, be certain that the directory is writable by
ntpd
. The file contains one value used to adjust the system clock frequency after every system or service start. See Understanding the Drift File for more information.
restrict default kod nomodify notrap nopeer noquery
The
kod
option means a “Kiss-o'-death” packet is to be sent to reduce unwanted queries. The nomodify
option prevents any changes to the configuration. The notrap
option prevents ntpdc
control message protocol traps. The nopeer
option prevents a peer association being formed. The noquery
option prevents ntpq
and ntpdc
queries, but not time queries, from being answered. The ntpq
and ntpdc
queries can be used in amplification attacks (see CVE-2013-5211 for more details); do not remove the noquery
option from the restrict default
command on publicly accessible systems.
127.0.0.0/8
range are sometimes required by various processes or applications. As the "restrict default" line above prevents access to everything not explicitly allowed, access to the standard loopback address for IPv4
and IPv6
is permitted by means of the following lines:
# the administrative functions.
restrict 127.0.0.1
restrict ::1
Addresses can be added underneath if specifically required by another application.
192.0.2.0/24
network to query the time and statistics but nothing more, a line in the following format is required:
restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap nopeer
To allow unrestricted access from a specific host, for example
192.0.2.250
, a line in the following format is required:
restrict 192.0.2.250
A mask of
255.255.255.255
is applied if none is specified.
ntp_acc(5)
man page.
ntp.conf
file contains four public server entries:
server 0.fedora.pool.ntp.org iburst
server 1.fedora.pool.ntp.org iburst
server 2.fedora.pool.ntp.org iburst
server 3.fedora.pool.ntp.org iburst
ntp.conf
file contains some commented-out examples. These are largely self-explanatory. See the explanation of the specific commands in Section 14.17, “Configure NTP”. If required, add your commands just below the examples.
Note
DHCP
client program, dhclient, receives a list of NTP
servers from the DHCP
server, it adds them to ntp.conf
and restarts the service. To disable that feature, add PEERNTP=no
to /etc/sysconfig/network
.
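A minimal fragment of /etc/sysconfig/network with this feature disabled would contain the following line:

```
PEERNTP=no
```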
ntpd
init script on service start. The default contents are as follows:
# Command line options for ntpd
OPTIONS="-g"
-g
option enables ntpd
to ignore the offset limit of 1000s and attempt to synchronize the time even if the offset is larger than 1000s, but only on system start. Without that option ntpd will exit if the time offset is greater than 1000s. It will also exit after system start if the service is restarted and the offset is greater than 1000s even with the -g
option.
ntpd
the default user space daemon, chronyd
, must be stopped and disabled. Issue the following command as root
:
~]# systemctl stop chronyd
To prevent it restarting at system start, issue the following command as root
:
~]# systemctl disable chronyd
To check the status of chronyd
, issue the following command:
~]$ systemctl status chronyd
ntpd
is installed, enter the following command as root
:
~]# yum install ntp
NTP
is implemented by means of the daemon or service ntpd
, which is contained within the ntp package.
ntpd
, enter the following command as root
:
~]# yum install ntp
ntpd
at system start, enter the following command as root
:
~]# systemctl enable ntpd
ntpd
is running and configured to run at system start, issue the following command:
~]$ systemctl status ntpd
ntpd
, issue the following command:
~]$ ntpstat
unsynchronised
time server re-starting
polling server every 64 s
~]$ ntpstat
synchronised to NTP server (10.5.26.10) at stratum 2
time correct to within 52 ms
polling server every 1024 s
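The synchronization accuracy can be extracted from this output in a script. The following is a sketch run against the sample output above; the exact output format of ntpstat is not a stable interface, so treat the pattern as an assumption:

```shell
# Sample ntpstat output embedded here; on a live system, pipe the
# output of `ntpstat` instead.
sample='synchronised to NTP server (10.5.26.10) at stratum 2
   time correct to within 52 ms
   polling server every 1024 s'
# Extract the reported accuracy in milliseconds.
printf '%s\n' "$sample" | sed -n 's/.*within \([0-9]*\) ms.*/\1/p'
# prints: 52
```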
NTP
traffic consists of UDP
packets on port 123
and needs to be permitted through network and host-based firewalls in order for NTP
to function.
NTP
traffic for clients using the graphical Firewall Configuration tool.
firewall
and then press Enter. The firewall-config tool appears. You will be prompted for your user password.
root
user:
~]# firewall-config
The Firewall Configuration window opens. Note, this command can be run as normal user but you will then be prompted for the root
password from time to time.
firewalld
.
Note
123
and select udp from the drop-down list.
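The same port can also be opened from the command line with the firewall-cmd utility; a sketch, assuming the firewalld service is running:

```
~]# firewall-cmd --permanent --add-port=123/udp
~]# firewall-cmd --reload
```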
ntpdate
service is to set the clock during system boot. This was used previously to ensure that the services started after ntpdate
would have the correct time and not observe a jump in the clock. The use of ntpdate
and the list of step-tickers is considered deprecated and so Fedora uses the -g
option to the ntpd
command and not ntpdate
by default.
ntpdate
service in Fedora is mostly useful only when used alone without ntpd
. With systemd, which starts services in parallel, enabling the ntpdate
service will not ensure that other services started after it will have correct time unless they specify an ordering dependency on time-sync.target
, which is provided by the ntpdate
service. The ntp-wait
service (in the ntp-perl subpackage) provides the time-sync
target for the ntpd
service. In order to ensure a service starts with correct time, add After=time-sync.target
to the service and enable one of the services which provide the target (ntpdate
or sntp, or ntp-wait if ntpd
is enabled). Some services on Fedora have the dependency included by default (for example, dhcpd
, dhcpd6
, and crond
).
ntpdate
service is enabled to run at system start, issue the following command:
~]$ systemctl status ntpdate
root
:
~]# systemctl enable ntpdate
/etc/ntp/step-tickers
file contains 0.fedora.pool.ntp.org
. To configure additional ntpdate
servers, using a text editor running as root
, edit /etc/ntp/step-tickers
. The number of servers listed is not very important as ntpdate
will only use this to obtain the date information once when the system is starting. If you have an internal time server then use that host name for the first line. An additional host on the second line as a backup is sensible. The selection of backup servers and whether the second host is internal or external depends on your risk assessment. For example, what is the chance of any problem affecting the first server also affecting the second server? Would connectivity to an external server be more likely to be available than connectivity to internal servers in the event of a network failure disrupting access to the first server?
NTP
service, use a text editor running as root
user to edit the /etc/ntp.conf
file. This file is installed together with ntpd
and is configured to use time servers from the Fedora pool by default. The man page ntp.conf(5)
describes the command options that can be used in the configuration file apart from the access and rate limiting commands which are explained in the ntp_acc(5)
man page.
NTP
service running on a system, make use of the restrict
command in the ntp.conf
file. See the commented out example:
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
restrict
command takes the following form:
restrict
option
ignore
— all packets will be ignored, including ntpq
and ntpdc
queries.
kod
— a “Kiss-o'-death” packet is to be sent to reduce unwanted queries.
limited
— do not respond to time service requests if the packet violates the rate limit default values or those specified by the discard
command. ntpq
and ntpdc
queries are not affected. For more information on the discard
command and the default values, see Section 14.17.2, “Configure Rate Limiting Access to an NTP Service”.
lowpriotrap
— declares traps set by matching hosts to be low priority.
nomodify
— prevents any changes to the configuration.
noquery
— prevents ntpq
and ntpdc
queries, but not time queries, from being answered.
nopeer
— prevents a peer association being formed.
noserve
— deny all packets except ntpq
and ntpdc
queries.
notrap
— prevents ntpdc
control message protocol traps.
notrust
— deny packets that are not cryptographically authenticated.
ntpport
— modify the match algorithm to only apply the restriction if the source port is the standard NTP
UDP
port 123
.
version
— deny packets that do not match the current NTP
version.
restrict
command has to have the limited
option. If ntpd
should reply with a KoD
packet, the restrict
command needs to have both limited
and kod
options.
ntpq
and ntpdc
queries can be used in amplification attacks (see CVE-2013-5211 for more details); do not remove the noquery
option from the restrict default
command on publicly accessible systems.
NTP
service running on a system, add the limited
option to the restrict
command as explained in Section 14.17.1, “Configure Access Control to an NTP Service”. If you do not want to use the default discard parameters, then also use the discard
command as explained here.
discard
command takes the following form:
discard
[average
value] [minimum
value] [monitor
value]
average
— specifies the minimum average packet spacing to be permitted; it accepts an argument in log2 seconds. The default value is 3 (2³ equates to 8 seconds).
minimum
— specifies the minimum packet spacing to be permitted; it accepts an argument in log2 seconds. The default value is 1 (2¹ equates to 2 seconds).
monitor
— specifies the discard probability for packets once the permitted rate limits have been exceeded. The default value is 3000 seconds. This option is intended for servers that receive 1000 or more requests per second.
discard
command are as follows:
discard average 4
discard average 4 minimum 2
NTP
service of the same stratum, make use of the peer
command in the ntp.conf
file.
peer
command takes the following form:
peer
address
IP
unicast address or a DNS
resolvable name. The address must only be that of a system known to be a member of the same stratum. Each peer should have at least one time source that the other does not share. Peers are normally systems under the same administrative control.
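For example, to form a peer association with a system under the same administrative control (ntp1.example.com is a placeholder host name), add a line such as the following to ntp.conf:

```
peer ntp1.example.com
```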
NTP
service of a higher stratum, make use of the server
command in the ntp.conf
file.
server
command takes the following form:
server
address
IP
unicast address or a DNS
resolvable name. The address of a remote reference server or local reference clock from which packets are to be received.
NTP
packets to, make use of the broadcast
command in the ntp.conf
file.
broadcast
command takes the following form:
broadcast
address
IP
broadcast or multicast address to which packets are sent.
NTP
broadcast server. The address used must be a broadcast or a multicast address. Broadcast address implies the IPv4
address 255.255.255.255
. By default, routers do not pass broadcast messages. The multicast address can be an IPv4
Class D address, or an IPv6
address. The IANA has assigned IPv4
multicast address 224.0.1.1
and IPv6
address FF05::101
(site local) to NTP
. Administratively scoped IPv4
multicast addresses can also be used, as described in RFC 2365 Administratively Scoped IP Multicast.
NTP
server discovery, make use of the manycastclient
command in the ntp.conf
file.
manycastclient
command takes the following form:
manycastclient
address
IP
multicast address from which packets are to be received. The client will send a request to the address and select the best servers from the responses and ignore other servers. NTP
communication then uses unicast associations, as if the discovered NTP
servers were listed in ntp.conf
.
NTP
client. Systems can be both client and server at the same time.
NTP
packets, make use of the broadcastclient
command in the ntp.conf
file.
broadcastclient
command takes the following form:
broadcastclient
NTP
client. Systems can be both client and server at the same time.
NTP
packets, make use of the manycastserver
command in the ntp.conf
file.
manycastserver
command takes the following form:
manycastserver
address
NTP
server. Systems can be both client and server at the same time.
NTP
packets, make use of the multicastclient
command in the ntp.conf
file.
multicastclient
command takes the following form:
multicastclient
address
NTP
client. Systems can be both client and server at the same time.
burst
option against a public server is considered abuse. Do not use this option with public NTP
servers. Use it only for applications within your own organization.
burst
server
command to improve the average quality of the time offset calculations.
iburst
server
command to improve the time taken for initial synchronization. This is now a default option in the configuration file.
key
number
1
to 65534
inclusive. This option enables the use of a message authentication code (MAC) in packets. This option is for use with the peer
, server
, broadcast
, and manycastclient
commands.
/etc/ntp.conf
file as follows:
server 192.168.1.1 key 10
broadcast 192.168.1.255 key 20
manycastclient 239.255.254.254 key 30
minpoll
value and maxpoll
value
minpoll
value is 6 (2⁶ equates to 64 s). The default value for maxpoll
is 10, which equates to 1024s. Allowed values are in the range 3 to 17 inclusive, which equates to 8s to 36.4h respectively. These options are for use with the peer
or server
. Setting a shorter maxpoll
may improve clock accuracy.
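For example, to poll a server at intervals between 16 seconds (2⁴) and 256 seconds (2⁸), add a line such as the following to ntp.conf (clock.example.com is a placeholder host name):

```
server clock.example.com iburst minpoll 4 maxpoll 8
```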
prefer
peer
or server
commands.
ttl
value
NTP
servers. Specify the maximum time-to-live value to use for the “expanding ring search” by a manycast client. The default value is 127
.
NTP
should be used in place of the default, add the following option to the end of a server or peer command:
version
value
NTP
set in created NTP
packets. The value can be in the range 1
to 4
. The default is 4
.
/etc/sysconfig/ntpdate
:
SYNC_HWCLOCK=yes
root
:
~]# hwclock --systohc
ntpd
, the kernel will in turn update the RTC every 11 minutes automatically.
~]$ cd /sys/devices/system/clocksource/clocksource0/
clocksource0]$ cat available_clocksource
kvm-clock tsc hpet acpi_pm
clocksource0]$ cat current_clocksource
kvm-clock
In the above example, the kernel is using kvm-clock. This was selected at boot time as this is a virtual machine.
grub.conf
:
clocksource=tsc
The available clock source is architecture dependent.
NTP
and ntpd
.
ntpd(8)
man page — Describes ntpd
in detail, including the command line options.
ntp.conf(5)
man page — Contains information on how to configure associations with servers and peers.
ntpq(8)
man page — Describes the NTP
query utility for monitoring and querying an NTP
server.
ntpdc(8)
man page — Describes the ntpdc
utility for querying and changing the state of ntpd
.
ntp_auth(5)
man page — Describes authentication options, commands, and key management for ntpd
.
ntp_keygen(8)
man page — Describes generating public and private keys for ntpd
.
ntp_acc(5)
man page — Describes access control options using the restrict
command.
ntp_mon(5)
man page — Describes monitoring options for the gathering of statistics.
ntp_clock(5)
man page — Describes commands for configuring reference clocks.
ntp_misc(5)
man page — Describes miscellaneous options.
ntp_decode(5)
man page — Lists the status words, event messages and error codes used for ntpd
reporting and monitoring.
ntpstat(8)
man page — Describes a utility for reporting the synchronization state of the NTP
daemon running on the local machine.
ntptime(8)
man page — Describes a utility for reading and setting kernel time variables.
tickadj(8)
man page — Describes a utility for reading, and optionally setting, the length of the tick.
NTPv4
.
PTP
is capable of sub-microsecond accuracy, which is far better than is normally obtainable with NTP
. PTP
support is divided between the kernel and user space. The kernel in Fedora includes support for PTP
clocks, which are provided by network drivers. The actual implementation of the protocol is known as linuxptp, a PTPv2
implementation according to the IEEE standard 1588 for Linux.
PTP
boundary clock and ordinary clock. With hardware time stamping, it is used to synchronize the PTP
hardware clock to the master clock, and with software time stamping it synchronizes the system clock to the master clock. The phc2sys program is needed only with hardware time stamping, for synchronizing the system clock to the PTP
hardware clock on the network interface card (NIC).
PTP
are organized in a master-slave hierarchy. The slaves are synchronized to their masters, which may themselves be slaves to their own masters. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. When a clock has only one port, it can be master or slave; such a clock is called an ordinary clock (OC). A clock with multiple ports can be master on one port and slave on another; such a clock is called a boundary clock (BC). The top-level master is called the grandmaster clock, which can be synchronized by using a Global Positioning System (GPS) time source. By using a GPS-based time source, disparate networks can be synchronized with a high degree of accuracy.
PTP
has over the Network Time Protocol (NTP) is hardware support present in various network interface controllers (NIC) and network switches. This specialized hardware allows PTP
to account for delays in message transfer, and greatly improves the accuracy of time synchronization. While it is possible to use non-PTP enabled hardware components within the network, this will often cause an increase in jitter or introduce an asymmetry in the delay resulting in synchronization inaccuracies, which add up with multiple non-PTP aware components used in the communication path. To achieve the best possible accuracy, it is recommended that all networking components between PTP
clocks are PTP
hardware enabled. Time synchronization in larger networks where not all of the networking hardware supports PTP
might be better suited for NTP
.
PTP
support, the NIC has its own on-board clock, which is used to time stamp the received and transmitted PTP
messages. It is this on-board clock that is synchronized to the PTP
master, and the computer's system clock is synchronized to the PTP
hardware clock on the NIC. With software PTP
support, the system clock is used to time stamp the PTP
messages and it is synchronized to the PTP
master directly. Hardware PTP
support provides better accuracy since the NIC can time stamp the PTP
packets at the exact moment they are sent and received while software PTP
support requires additional processing of the PTP
packets by the operating system.
PTP
, the kernel network driver for the intended interface has to support either software or hardware time stamping capabilities.
~]# ethtool -T em3
Time stamping parameters for em3:
Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
off (HWTSTAMP_TX_OFF)
on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
none (HWTSTAMP_FILTER_NONE)
all (HWTSTAMP_FILTER_ALL)
Where em3 is the interface you wish to check.
SOF_TIMESTAMPING_SOFTWARE
SOF_TIMESTAMPING_TX_SOFTWARE
SOF_TIMESTAMPING_RX_SOFTWARE
SOF_TIMESTAMPING_RAW_HARDWARE
SOF_TIMESTAMPING_TX_HARDWARE
SOF_TIMESTAMPING_RX_HARDWARE
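These capability flags can be checked in a script by searching the ethtool output. The following sketch runs against a sample of the output shown above so that it does not require a live interface; on a real system, substitute the output of `ethtool -T em3`:

```shell
# Sample `ethtool -T` capability lines; on a live system use:
#   sample=$(ethtool -T em3)
sample='software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)'

# Verify that all three software time stamping flags are present.
missing=0
for flag in SOF_TIMESTAMPING_TX_SOFTWARE \
            SOF_TIMESTAMPING_RX_SOFTWARE \
            SOF_TIMESTAMPING_SOFTWARE; do
    printf '%s\n' "$sample" | grep -q "$flag" || missing=1
done
[ "$missing" -eq 0 ] && echo "software time stamping supported"
```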
PTP
. User space support is provided by the tools in the linuxptp package. To install linuxptp, issue the following command as root
:
~]# yum install linuxptp
This will install ptp4l and phc2sys.
PTP
time using NTP
, see Section 15.7, “Serving PTP Time with NTP”.
-i
option. Enter the following command as root
:
~]# ptp4l -i em3 -m
Where em3 is the interface you wish to configure. Below is example output from ptp4l when the PTP
clock on the NIC is synchronized to a master:
~]# ptp4l -i em3 -m
selected em3 as PTP clock
port 1: INITIALIZING to LISTENING on INITIALIZE
port 0: INITIALIZING to LISTENING on INITIALIZE
port 1: new foreign master 00a069.fffe.0b552d-1
selected best master clock 00a069.fffe.0b552d
port 1: LISTENING to UNCALIBRATED on RS_SLAVE
master offset -23947 s0 freq +0 path delay 11350
master offset -28867 s0 freq +0 path delay 11236
master offset -32801 s0 freq +0 path delay 10841
master offset -37203 s1 freq +0 path delay 10583
master offset -7275 s2 freq -30575 path delay 10583
port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
master offset -4552 s2 freq -30035 path delay 10385
The master offset value is the measured offset from the master in nanoseconds. The s0
, s1
, s2
strings indicate the different clock servo states: s0
is unlocked, s1
is clock step and s2
is locked. Once the servo is in the locked state (s2
), the clock will not be stepped (only slowly adjusted) unless the pi_offset_const
option is set to a positive value in the configuration file (described in the ptp4l(8)
man page). The freq
value is the frequency adjustment of the clock in parts per billion (ppb). The path delay value is the estimated delay of the synchronization messages sent from the master in nanoseconds. Port 0 is a Unix domain socket used for local PTP
management. Port 1 is the em3
interface (based on the example above). INITIALIZING, LISTENING, UNCALIBRATED, and SLAVE are some of the possible port states, which change on the INITIALIZE, RS_SLAVE, and MASTER_CLOCK_SELECTED events. In the last state change message, the port state changed from UNCALIBRATED to SLAVE, indicating successful synchronization with a PTP
master clock.
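The servo state field can be pulled out of this output to watch for the transition to the locked (s2) state; a sketch run against a few of the sample lines above:

```shell
# Count the sample messages in which the servo is locked (state s2).
# The fourth whitespace-separated field of an offset message holds
# the servo state (s0, s1, or s2).
sample='master offset -37203 s1 freq +0 path delay 10583
master offset -7275 s2 freq -30575 path delay 10583
master offset -4552 s2 freq -30035 path delay 10385'
printf '%s\n' "$sample" | awk '$4 == "s2"' | wc -l | tr -d ' '
# prints: 2
```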
~]# systemctl start ptp4l
When running as a service, options are specified in the /etc/sysconfig/ptp4l
file. More information on the different ptp4l options and the configuration file settings can be found in the ptp4l(8)
man page.
/var/log/messages
. However, specifying the -m
option enables logging to standard output which can be useful for debugging purposes.
-S
option needs to be used as follows:
~]# ptp4l -i em3 -m -S
ptp4l
command as follows:
-P
-P
selects the peer-to-peer (P2P) delay measurement mechanism.
-E
-E
selects the end-to-end (E2E) delay measurement mechanism. This is the default.
-A
-A
enables automatic selection of the delay measurement mechanism.
Note
PTP
communication path must use the same mechanism to measure the delay. A warning will be printed when a peer delay request is received on a port using the E2E mechanism. A warning will be printed when a E2E delay request is received on a port using the P2P mechanism.
-f
option. For example:
~]# ptp4l -f /etc/ptp4l.conf
-i em3 -m -S
options shown above would look as follows:
~]# cat /etc/ptp4l.conf
[global]
verbose 1
time_stamping software
[em3]
PTP
management client, pmc, can be used to obtain additional information from ptp4l as follows:
~]# pmc -u -b 0 'GET CURRENT_DATA_SET'
sending: GET CURRENT_DATA_SET
90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT CURRENT_DATA_SET
stepsRemoved 1
offsetFromMaster -142.0
meanPathDelay 9310.0
~]# pmc -u -b 0 'GET TIME_STATUS_NP'
sending: GET TIME_STATUS_NP
90e2ba.fffe.20c7f8-0 seq 0 RESPONSE MANAGMENT TIME_STATUS_NP
master_offset 310
ingress_time 1361545089345029441
cumulativeScaledRateOffset +1.000000000
scaledLastGmPhaseChange 0
gmTimeBaseIndicator 0
lastGmPhaseChange 0x0000'0000000000000000.0000
gmPresent true
gmIdentity 00a069.fffe.0b552d
-b
option to zero
limits the boundary to the locally running ptp4l instance. A larger boundary value will retrieve the information also from PTP
nodes further from the local clock. The retrievable information includes:
stepsRemoved
is the number of communication paths to the grandmaster clock.
offsetFromMaster
and master_offset is the last measured offset of the clock from the master in nanoseconds.
meanPathDelay
is the estimated delay of the synchronization messages sent from the master in nanoseconds.
gmPresent
is true, the PTP
clock is synchronized to a master, the local clock is not the grandmaster clock.
gmIdentity
is the grandmaster's identity.
root
:
~]# pmc help
Additional information is available in the pmc(8)
man page.
PTP
hardware clock (PHC) on the NIC. To start phc2sys, where em3 is the interface with the PTP
hardware clock, enter the following command as root
:
~]# phc2sys -s em3 -w
The -w
option waits for the running ptp4l application to synchronize the PTP
clock and then retrieves the TAI to UTC offset from ptp4l.
PTP
operates in the International Atomic Time (TAI) timescale, while the system clock is kept in Coordinated Universal Time (UTC). The current offset between the TAI and UTC timescales is 35 seconds. The offset changes when leap seconds are inserted or deleted, which typically happens every few years. The -O
option needs to be used to set this offset manually when the -w
is not used, as follows:
~]# phc2sys -s em3 -O -35
-S
option is used. This means that the phc2sys program should be started after the ptp4l program has synchronized the PTP
hardware clock. However, with -w
, it is not necessary to start phc2sys after ptp4l as it will wait for it to synchronize the clock.
~]# systemctl start phc2sys
When running as a service, options are specified in the /etc/sysconfig/phc2sys
file. More information on the different phc2sys options can be found in the phc2sys(8)
man page.
PTP
time synchronization is working properly, new messages with offsets and frequency adjustments will be printed periodically to the ptp4l and phc2sys (if hardware time stamping is used) outputs. These values will eventually converge after a short period of time. These messages can be seen in /var/log/messages
file. An example of the output follows:
ptp4l[352.359]: selected /dev/ptp0 as PTP clock
ptp4l[352.361]: port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l[352.361]: port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l[353.210]: port 1: new foreign master 00a069.fffe.0b552d-1
ptp4l[357.214]: selected best master clock 00a069.fffe.0b552d
ptp4l[357.214]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[359.224]: master offset 3304 s0 freq +0 path delay 9202
ptp4l[360.224]: master offset 3708 s1 freq -29492 path delay 9202
ptp4l[361.224]: master offset -3145 s2 freq -32637 path delay 9202
ptp4l[361.224]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[362.223]: master offset -145 s2 freq -30580 path delay 9202
ptp4l[363.223]: master offset 1043 s2 freq -29436 path delay 8972
ptp4l[364.223]: master offset 266 s2 freq -29900 path delay 9153
ptp4l[365.223]: master offset 430 s2 freq -29656 path delay 9153
ptp4l[366.223]: master offset 615 s2 freq -29342 path delay 9169
ptp4l[367.222]: master offset -191 s2 freq -29964 path delay 9169
ptp4l[368.223]: master offset 466 s2 freq -29364 path delay 9170
ptp4l[369.235]: master offset 24 s2 freq -29666 path delay 9196
ptp4l[370.235]: master offset -375 s2 freq -30058 path delay 9238
ptp4l[371.235]: master offset 285 s2 freq -29511 path delay 9199
ptp4l[372.235]: master offset -78 s2 freq -29788 path delay 9204
phc2sys[526.527]: Waiting for ptp4l...
phc2sys[527.528]: Waiting for ptp4l...
phc2sys[528.528]: phc offset 55341 s0 freq +0 delay 2729
phc2sys[529.528]: phc offset 54658 s1 freq -37690 delay 2725
phc2sys[530.528]: phc offset 888 s2 freq -36802 delay 2756
phc2sys[531.528]: phc offset 1156 s2 freq -36268 delay 2766
phc2sys[532.528]: phc offset 411 s2 freq -36666 delay 2738
phc2sys[533.528]: phc offset -73 s2 freq -37026 delay 2764
phc2sys[534.528]: phc offset 39 s2 freq -36936 delay 2746
phc2sys[535.529]: phc offset 95 s2 freq -36869 delay 2733
phc2sys[536.529]: phc offset -359 s2 freq -37294 delay 2738
phc2sys[537.529]: phc offset -257 s2 freq -37300 delay 2753
phc2sys[538.529]: phc offset 119 s2 freq -37001 delay 2745
phc2sys[539.529]: phc offset 288 s2 freq -36796 delay 2766
phc2sys[540.529]: phc offset -149 s2 freq -37147 delay 2760
phc2sys[541.529]: phc offset -352 s2 freq -37395 delay 2771
phc2sys[542.529]: phc offset 166 s2 freq -36982 delay 2748
phc2sys[543.529]: phc offset 50 s2 freq -37048 delay 2756
phc2sys[544.530]: phc offset -31 s2 freq -37114 delay 2748
phc2sys[545.530]: phc offset -333 s2 freq -37426 delay 2747
phc2sys[546.530]: phc offset 194 s2 freq -36999 delay 2749
summary_interval
, to reduce the output and print only statistics, as normally it will print a message every second or so. For example, to reduce the output to every 1024
seconds, add the following line to the /etc/ptp4l.conf
file:
summary_interval 10
An example of the ptp4l output, with
summary_interval 6
, follows:
ptp4l: [615.253] selected /dev/ptp0 as PTP clock
ptp4l: [615.255] port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l: [615.255] port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l: [615.564] port 1: new foreign master 00a069.fffe.0b552d-1
ptp4l: [619.574] selected best master clock 00a069.fffe.0b552d
ptp4l: [619.574] port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l: [623.573] port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l: [684.649] rms 669 max 3691 freq -29383 ± 3735 delay 9232 ± 122
ptp4l: [748.724] rms 253 max 588 freq -29787 ± 221 delay 9219 ± 158
ptp4l: [812.793] rms 287 max 673 freq -29802 ± 248 delay 9211 ± 183
ptp4l: [876.853] rms 226 max 534 freq -29795 ± 197 delay 9221 ± 138
ptp4l: [940.925] rms 250 max 562 freq -29801 ± 218 delay 9199 ± 148
ptp4l: [1004.988] rms 226 max 525 freq -29802 ± 196 delay 9228 ± 143
ptp4l: [1069.065] rms 300 max 646 freq -29802 ± 259 delay 9214 ± 176
ptp4l: [1133.125] rms 226 max 505 freq -29792 ± 197 delay 9225 ± 159
ptp4l: [1197.185] rms 244 max 688 freq -29790 ± 211 delay 9201 ± 162
To reduce the output from phc2sys, it can be called with the
-u
option as follows:
~]# phc2sys -u summary-updates
Where summary-updates is the number of clock updates to include in summary statistics. An example follows:
~]# phc2sys -s em3 -w -m -u 60
phc2sys[700.948]: rms 1837 max 10123 freq -36474 ± 4752 delay 2752 ± 16
phc2sys[760.954]: rms 194 max 457 freq -37084 ± 174 delay 2753 ± 12
phc2sys[820.963]: rms 211 max 487 freq -37085 ± 185 delay 2750 ± 19
phc2sys[880.968]: rms 183 max 440 freq -37102 ± 164 delay 2734 ± 91
phc2sys[940.973]: rms 244 max 584 freq -37095 ± 216 delay 2748 ± 16
phc2sys[1000.979]: rms 220 max 573 freq -36666 ± 182 delay 2747 ± 43
phc2sys[1060.984]: rms 266 max 675 freq -36759 ± 234 delay 2753 ± 17
ntpd
daemon can be configured to distribute the time from the system clock synchronized by ptp4l or phc2sys by using the LOCAL reference clock driver. To prevent ntpd
from adjusting the system clock, the ntp.conf
file must not specify any NTP
servers. The following is a minimal example of ntp.conf
:
~]# cat /etc/ntp.conf
server 127.127.1.0
fudge 127.127.1.0 stratum 0
Note
DHCP
client program, dhclient, receives a list of NTP
servers from the DHCP
server, it adds them to ntp.conf
and restarts the service. To disable that feature, add PEERNTP=no
to /etc/sysconfig/network
.
NTP
to PTP
synchronization in the opposite direction is also possible. When ntpd
is used to synchronize the system clock, ptp4l can be configured with the priority1
option (or other clock options included in the best master clock algorithm) to be the grandmaster clock and distribute the time from the system clock via PTP
:
~]# cat /etc/ptp4l.conf
[global]
priority1 127
[em3]
~]# ptp4l -f /etc/ptp4l.conf
PTP
hardware clock to the system clock:
~]# phc2sys -c em3 -s CLOCK_REALTIME -w
PTP
clock's frequency, the synchronization to the system clock can be loosened by using smaller P
(proportional) and I
(integral) constants of the PI servo:
~]# phc2sys -c em3 -s CLOCK_REALTIME -w -P 0.01 -I 0.0001
PTP
synchronization accuracy (at the cost of increased power consumption). The kernel tickless mode can be disabled by adding nohz=off
to the kernel boot option parameters.
PTP
and the ptp4l tools.
ptp4l(8)
man page — Describes ptp4l options including the format of the configuration file.
pmc(8)
man page — Describes the PTP
management client and its command options.
phc2sys(8)
man page — Describes a tool for synchronizing the system clock to a PTP
hardware clock (PHC).
Table of Contents
ps
command allows you to display information about running processes. It produces a static list, that is, a snapshot of what is running when you execute the command. If you want a constantly updated list of running processes, use the top
command or the System Monitor application instead.
ps
ax
ps ax
command displays the process ID (PID
), the terminal that is associated with it (TTY
), the current status (STAT
), the cumulated CPU time (TIME
), and the name of the executable file (COMMAND
). For example:
~]$ ps ax
PID TTY STAT TIME COMMAND
1 ? Ss 0:02 /usr/lib/systemd/systemd --system --deserialize 20
2 ? S 0:00 [kthreadd]
3 ? S 0:00 [ksoftirqd/0]
5 ? S 0:00 [kworker/u:0]
6 ? S 0:00 [migration/0]
[output truncated]
ps
aux
In addition to the information provided by the ps ax
command, ps aux
displays the effective username of the process owner (USER
), the percentage of the CPU (%CPU
) and memory (%MEM
) usage, the virtual memory size in kilobytes (VSZ
), the non-swapped physical memory size in kilobytes (RSS
), and the time or date the process was started. For instance:
~]$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.3 53128 2988 ? Ss 13:28 0:02 /usr/lib/systemd/systemd --system --deserialize 20
root 2 0.0 0.0 0 0 ? S 13:28 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 13:28 0:00 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S 13:28 0:00 [kworker/u:0]
root 6 0.0 0.0 0 0 ? S 13:28 0:00 [migration/0]
[output truncated]
ps
command in a combination with grep
to see if a particular process is running. For example, to determine if Emacs is running, type:
~]$ ps ax | grep emacs
2625 ? Sl 0:00 emacs
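Note that ps ax | grep emacs can also match the grep process itself, because the string “emacs” appears on its own command line. The pgrep utility avoids this by matching against the process table directly. A minimal sketch, using a hypothetical background sleep process as the search target:

```shell
# Start a hypothetical long-running process to search for.
sleep 300 &
target=$!

# pgrep matches against the process table, so it never reports itself.
found=$(pgrep -f 'sleep 300')

kill "$target"
echo "$found"
```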
top
command displays a real-time list of processes that are running on the system. It also displays additional information about the system uptime, current CPU and memory usage, and the total number of running processes, and allows you to perform actions such as sorting the list or killing a process.
top
command, type the following at a shell prompt:
top
top
command displays the process ID (PID
), the effective username of the process owner (USER
), the priority (PR
), the nice value (NI
), the amount of virtual memory the process uses (VIRT
), the amount of non-swapped physical memory the process uses (RES
), the amount of shared memory the process uses (SHR
), the percentage of the CPU (%CPU
) and memory (%MEM
) usage, the cumulated CPU time (TIME+
), and the name of the executable file (COMMAND
). For example:
~]$ top
top - 19:22:08 up 5:53, 3 users, load average: 1.08, 1.03, 0.82
Tasks: 117 total, 2 running, 115 sleeping, 0 stopped, 0 zombie
Cpu(s): 9.3%us, 1.3%sy, 0.0%ni, 85.1%id, 0.0%wa, 1.7%hi, 0.0%si, 2.6%st
Mem: 761956k total, 617256k used, 144700k free, 24356k buffers
Swap: 1540092k total, 55780k used, 1484312k free, 256408k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
510 john 20 0 1435m 99m 18m S 9.0 13.3 3:30.52 gnome-shell
32686 root 20 0 156m 27m 3628 R 2.0 3.7 0:48.69 Xorg
2625 john 20 0 488m 27m 14m S 0.3 3.7 0:00.70 emacs
1 root 20 0 53128 2640 1152 S 0.0 0.3 0:02.83 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.18 ksoftirqd/0
5 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kworker/u:0
6 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
7 root RT 0 0 0 0 S 0.0 0.0 0:00.30 watchdog/0
8 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 cpuset
9 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
11 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 netns
12 root 20 0 0 0 0 S 0.0 0.0 0:00.11 sync_supers
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 bdi-default
14 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kintegrityd
15 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kblockd
top
. For more information, refer to the top(1) manual page.
Table 16.1. Interactive top commands
Command | Description |
---|---|
Enter, Space | Immediately refreshes the display. |
h, ? | Displays a help screen. |
k | Kills a process. You are prompted for the process ID and the signal to send to it. |
n | Changes the number of displayed processes. You are prompted to enter the number. |
u | Sorts the list by user. |
M | Sorts the list by memory usage. |
P | Sorts the list by CPU usage. |
q | Terminates the utility and returns to the shell prompt. |
gnome-system-monitor
at a shell prompt. Then click the Processes tab to view the list of running processes.
free
command allows you to display the amount of free and used memory on the system. To do so, type the following at a shell prompt:
free
free
command provides information about both the physical memory (Mem
) and swap space (Swap
). It displays the total amount of memory (total
), as well as the amount of memory that is in use (used
), free (free
), shared (shared
), in kernel buffers (buffers
), and cached (cached
). For example:
~]$ free
total used free shared buffers cached
Mem: 761956 607500 154456 0 37404 156176
-/+ buffers/cache: 413920 348036
Swap: 1540092 84408 1455684
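The -/+ buffers/cache row reports memory usage with buffers and cache excluded, since the kernel can reclaim that memory for applications when needed. As a sanity check (not part of the free output itself), the row can be recomputed from the sample figures above:

```shell
# Figures from the sample `free` output above, in kilobytes.
used=607500; free_kb=154456; buffers=37404; cached=156176

# Subtract reclaimable memory from "used" and add it to "free".
used_minus=$(( used - buffers - cached ))
free_plus=$(( free_kb + buffers + cached ))
echo "$used_minus $free_plus"   # matches the -/+ buffers/cache row: 413920 348036
```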
free
displays the values in kilobytes. To display the values in megabytes, supply the -m
command line option:
free
-m
~]$ free -m
total used free shared buffers cached
Mem: 744 593 150 0 36 152
-/+ buffers/cache: 404 339
Swap: 1503 82 1421
gnome-system-monitor
at a shell prompt. Then click the Resources tab to view the system's memory usage.
gnome-system-monitor
at a shell prompt. Then click the Resources tab to view the system's CPU usage.
lsblk
command allows you to display a list of available block devices. To do so, type the following at a shell prompt:
lsblk
lsblk
command displays the device name (NAME
), major and minor device number (MAJ:MIN
), whether the device is removable (RM
), its size (SIZE
), whether it is read-only (RO
), its type (TYPE
), and where the device is mounted (MOUNTPOINT
). For example:
~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
|-vda1 252:1 0 500M 0 part /boot
`-vda2 252:2 0 19.5G 0 part
|-vg_fedora-lv_swap (dm-0) 253:0 0 1.5G 0 lvm [SWAP]
`-vg_fedora-lv_root (dm-1) 253:1 0 18G 0 lvm /
lsblk
lists block devices in a tree-like format. To display the information as an ordinary list, add the -l
command line option:
lsblk
-l
~]$ lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
vda1 252:1 0 500M 0 part /boot
vda2 252:2 0 19.5G 0 part
vg_fedora-lv_swap (dm-0) 253:0 0 1.5G 0 lvm [SWAP]
vg_fedora-lv_root (dm-1) 253:1 0 18G 0 lvm /
blkid
command allows you to display information about available block devices. To do so, type the following at a shell prompt as root
:
blkid
blkid
command displays attributes of each device, such as the universally unique identifier (UUID
), file system type (TYPE
), or volume label (LABEL
). For example:
~]# blkid
/dev/vda1: UUID="4ea24c68-ab10-47d4-8a6b-b8d3a002acba" TYPE="ext4"
/dev/vda2: UUID="iJ9YwJ-leFf-A1zb-VVaK-H9t1-raLW-HoqlUG" TYPE="LVM2_member"
/dev/mapper/vg_fedora-lv_swap: UUID="d6d755bc-3e3e-4e8f-9bb5-a5e7f4d86ffd" TYPE="swap"
/dev/mapper/vg_fedora-lv_root: LABEL="_Fedora-17-x86_6" UUID="77ba9149-751a-48e0-974f-ad94911734b9" TYPE="ext4"
blkid
command lists all available block devices. To display information about a particular device only, specify the device name on the command line:
blkid
device_name
/dev/vda1
, type:
~]# blkid /dev/vda1
/dev/vda1: UUID="4ea24c68-ab10-47d4-8a6b-b8d3a002acba" TYPE="ext4"
-p
and -o udev
command line options to obtain more detailed information. Note that root
privileges are required to run this command:
blkid
-po
udev
device_name
~]# blkid -po udev /dev/vda1
ID_FS_UUID=4ea24c68-ab10-47d4-8a6b-b8d3a002acba
ID_FS_UUID_ENC=4ea24c68-ab10-47d4-8a6b-b8d3a002acba
ID_FS_VERSION=1.0
ID_FS_TYPE=ext4
ID_FS_USAGE=filesystem
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_TYPE=0x83
ID_PART_ENTRY_FLAGS=0x80
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SIZE=1024000
ID_PART_ENTRY_DISK=252:0
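The ID_PART_ENTRY_OFFSET and ID_PART_ENTRY_SIZE values are expressed in 512-byte sectors. Converting the size reported for /dev/vda1 above confirms the 500M figure that lsblk shows for the same partition:

```shell
# ID_FS/ID_PART_ENTRY_SIZE from the blkid output above, in 512-byte sectors.
sectors=1024000
mib=$(( sectors * 512 / 1024 / 1024 ))
echo "${mib}M"   # 500M
```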
partx
command allows you to display a list of disk partitions. To list the partition table of a particular disk, as root
, run this command with the -s
option followed by the device name:
partx
-s
device_name
/dev/vda
, type:
~]# partx -s /dev/vda
NR START END SECTORS SIZE NAME UUID
1 2048 1026047 1024000 500M
2 1026048 41943039 40916992 19.5G
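The START and END columns are inclusive sector numbers, so SECTORS is simply END − START + 1. Checking the first partition from the output above:

```shell
# Boundaries of partition 1 from the sample partx output, in sectors.
start=2048; end=1026047
sectors=$(( end - start + 1 ))
echo "$sectors"   # 1024000, matching the SECTORS column
```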
findmnt
command allows you to display a list of currently mounted file systems. To do so, type the following at a shell prompt:
findmnt
findmnt
command displays the target mount point (TARGET
), source device (SOURCE
), file system type (FSTYPE
), and relevant mount options (OPTIONS
). For example:
~]$ findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/vg_fedora-lv_root
ext4 rw,relatime,seclabel,data=o
|-/proc proc proc rw,nosuid,nodev,noexec,rela
| `-/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=23,pgrp=1,ti
|-/sys sysfs sysfs rw,nosuid,nodev,noexec,rela
| |-/sys/kernel/security securityfs security rw,nosuid,nodev,noexec,rela
| |-/sys/fs/selinux selinuxfs selinuxf rw,relatime
| |-/sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,secl
| | |-/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,rela
| | |-/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,rela
| | |-/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,rela
| | |-/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,rela
| | |-/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,rela
| | |-/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,rela
| | |-/sys/fs/cgroup/net_cls cgroup cgroup rw,nosuid,nodev,noexec,rela
| | |-/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,rela
| | `-/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,rela
| |-/sys/kernel/debug debugfs debugfs rw,relatime
| `-/sys/kernel/config configfs configfs rw,relatime
[output truncated]
findmnt
lists file systems in a tree-like format. To display the information as an ordinary list, add the -l
command line option:
findmnt
-l
~]$ findmnt -l
TARGET SOURCE FSTYPE OPTIONS
/proc proc proc rw,nosuid,nodev,noexec,relatime
/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,s
/dev devtmpfs devtmpfs rw,nosuid,seclabel,size=370080k,n
/dev/pts devpts devpts rw,nosuid,noexec,relatime,seclabe
/dev/shm tmpfs tmpfs rw,nosuid,nodev,seclabel
/run tmpfs tmpfs rw,nosuid,nodev,seclabel,mode=755
/ /dev/mapper/vg_fedora-lv_root
ext4 rw,relatime,seclabel,data=ordered
/sys/kernel/security securityfs security rw,nosuid,nodev,noexec,relatime
/sys/fs/selinux selinuxfs selinuxf rw,relatime
/sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,seclabel,m
/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,r
[output truncated]
-t
command line option followed by a file system type:
findmnt
-t
type
ext4
file systems, type:
~]$ findmnt -t ext4
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/vg_fedora-lv_root ext4 rw,relatime,seclabel,data=ordered
/boot /dev/vda1 ext4 rw,relatime,seclabel,data=ordered
df
command allows you to display a detailed report on the system's disk space usage. To do so, type the following at a shell prompt:
df
df
command displays its name (Filesystem
), size (1K-blocks
or Size
), how much space is used (Used
), how much space is still available (Available
), the percentage of space usage (Use%
), and where the file system is mounted (Mounted on
). For example:
~]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 18877356 4605476 14082844 25% /
devtmpfs 370080 0 370080 0% /dev
tmpfs 380976 256 380720 1% /dev/shm
tmpfs 380976 3048 377928 1% /run
/dev/mapper/vg_fedora-lv_root 18877356 4605476 14082844 25% /
tmpfs 380976 0 380976 0% /sys/fs/cgroup
tmpfs 380976 0 380976 0% /media
/dev/vda1 508745 85018 398127 18% /boot
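The Use% column is the used space divided by the total allocated space (used plus available), rounded up; reserved blocks are the reason Used and Available do not sum to 1K-blocks. Recomputing the value for /dev/vda1 from the figures above:

```shell
# Used and Available figures for /dev/vda1 from the sample `df` output.
used=85018; avail=398127

# Integer ceiling of used * 100 / (used + avail).
pct=$(( (used * 100 + used + avail - 1) / (used + avail) ))
echo "${pct}%"   # 18%, matching the Use% column
```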
df
command shows the partition size in 1 kilobyte blocks and the amount of used and available disk space in kilobytes. To view the information in megabytes and gigabytes, supply the -h
command line option, which causes df
to display the values in a human-readable format:
df
-h
~]$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 19G 4.4G 14G 25% /
devtmpfs 362M 0 362M 0% /dev
tmpfs 373M 256K 372M 1% /dev/shm
tmpfs 373M 3.0M 370M 1% /run
/dev/mapper/vg_fedora-lv_root 19G 4.4G 14G 25% /
tmpfs 373M 0 373M 0% /sys/fs/cgroup
tmpfs 373M 0 373M 0% /media
/dev/vda1 497M 84M 389M 18% /boot
/dev/shm
entry represents the system's virtual memory file system, /sys/fs/cgroup
is a cgroup file system, and /run
contains information about the running system.
du
command allows you to display the amount of space that is being used by files in a directory. To display the disk usage for each of the subdirectories in the current working directory, run the command with no additional command line options:
du
~]$ du
8 ./.gconf/apps/gnome-terminal/profiles/Default
12 ./.gconf/apps/gnome-terminal/profiles
16 ./.gconf/apps/gnome-terminal
[output truncated]
460 ./.gimp-2.6
68828 .
du
command displays the disk usage in kilobytes. To view the information in megabytes and gigabytes, supply the -h
command line option, which causes the utility to display the values in a human-readable format:
du
-h
~]$ du -h
8.0K ./.gconf/apps/gnome-terminal/profiles/Default
12K ./.gconf/apps/gnome-terminal/profiles
16K ./.gconf/apps/gnome-terminal
[output truncated]
460K ./.gimp-2.6
68M .
du
command always shows the grand total for the current directory. To display only this information, supply the -s
command line option:
du
-sh
~]$ du -sh
68M .
gnome-system-monitor
at a shell prompt. Then click the File Systems tab to view a list of file systems.
lspci
command lists all PCI devices that are present in the system:
lspci
~]$ lspci
00:00.0 Host bridge: Intel Corporation 82X38/X48 Express DRAM Controller
00:01.0 PCI bridge: Intel Corporation 82X38/X48 Express Host-Primary PCI Express Bridge
00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
[output truncated]
-v
command line option to display more verbose output, or -vv
for very verbose output:
lspci
-v
|-vv
~]$ lspci -v
[output truncated]
01:00.0 VGA compatible controller: nVidia Corporation G84 [Quadro FX 370] (rev a1) (prog-if 00 [VGA controller])
Subsystem: nVidia Corporation Device 0491
Physical Slot: 2
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at f2000000 (32-bit, non-prefetchable) [size=16M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f0000000 (64-bit, non-prefetchable) [size=32M]
I/O ports at 1100 [size=128]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: nouveau
Kernel modules: nouveau, nvidiafb
[output truncated]
lsusb
command allows you to display information about USB buses and devices that are attached to them. To list all USB devices that are in the system, type the following at a shell prompt:
lsusb
~]$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
[output truncated]
Bus 001 Device 002: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader)
Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse
Bus 008 Device 003: ID 04b3:3025 IBM Corp.
-v
command line option to display more verbose output:
lsusb
-v
~]$ lsusb -v
[output truncated]
Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x03f0 Hewlett-Packard
idProduct 0x2c24 Logitech M-UAL-96 Mouse
bcdDevice 31.00
iManufacturer 1
iProduct 2
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
[output truncated]
lspcmcia
command allows you to list all PCMCIA devices that are present in the system. To do so, type the following at a shell prompt:
lspcmcia
~]$ lspcmcia
Socket 0 Bridge: [yenta_cardbus] (bus ID: 0000:15:00.0)
-v
command line option to display more verbose information, or -vv
to increase the verbosity level even further:
lspcmcia
-v
|-vv
~]$ lspcmcia -v
Socket 0 Bridge: [yenta_cardbus] (bus ID: 0000:15:00.0)
Configuration: state: on ready: unknown
lscpu
command allows you to list information about CPUs that are present in the system, including the number of CPUs, their architecture, vendor, family, model, CPU caches, etc. To do so, type the following at a shell prompt:
lscpu
~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Stepping: 7
CPU MHz: 1998.000
BogoMIPS: 4999.98
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 3072K
NUMA node0 CPU(s): 0-3
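The CPU(s) figure is the product of the socket, core, and thread counts. Verifying against the sample output above:

```shell
# Topology figures from the sample lscpu output.
sockets=1; cores_per_socket=4; threads_per_core=1
cpus=$(( sockets * cores_per_socket * threads_per_core ))
echo "$cpus"   # 4, matching CPU(s)
```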
Table 16.2. Available Net-SNMP packages
Package | Provides |
---|---|
net-snmp | The SNMP Agent Daemon and documentation. This package is required for exporting performance data. |
net-snmp-libs | The netsnmp library and the bundled management information bases (MIBs). This package is required for exporting performance data. |
net-snmp-utils | SNMP clients such as snmpget and snmpwalk. This package is required to query a system's performance data over SNMP. |
net-snmp-perl | The mib2c utility and the NetSNMP Perl module. |
net-snmp-python | An SNMP client library for Python. |
yum
command in the following form:
yum
install
package…
~]# yum install net-snmp net-snmp-libs net-snmp-utils
root
) to run this command. For more information on how to install new packages in Fedora, refer to Section 5.2.4, “Installing Packages”.
snmpd
, the SNMP Agent Daemon. This section provides information on how to start, stop, and restart the snmpd
service, and shows how to enable or disable it in the multi-user
target unit. For more information on the concept of target units and how to manage system services in Fedora in general, refer to Chapter 6, Services and Daemons.
snmpd
service in the current session, type the following at a shell prompt as root
:
systemctl
start
snmpd.service
systemctl
enable
snmpd.service
multi-user
target unit.
snmpd
service, type the following at a shell prompt as root
:
systemctl
stop
snmpd.service
systemctl
disable
snmpd.service
multi-user
target unit.
snmpd
service, type the following at a shell prompt:
systemctl
restart
snmpd.service
systemctl
reload
snmpd.service
snmpd
service to reload the configuration.
/etc/snmp/snmpd.conf
configuration file. The default snmpd.conf
file shipped with Fedora 20 is heavily commented and serves as a good starting point for agent configuration.
snmpconf
which can be used to interactively generate a valid agent configuration.
snmpwalk
utility described in this section.
Applying the changes
snmpd
service to re-read the configuration by running the following command as root
:
systemctl
reload
snmpd.service
system
tree. For example, the following snmpwalk
command shows the system
tree with a default agent configuration.
~]# snmpwalk -v2c -c public localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64 #1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (99554) 0:16:35.54
SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure /etc/snmp/snmp.local.conf)
SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain
SNMPv2-MIB::sysLocation.0 = STRING: Unknown (edit /etc/snmp/snmpd.conf)
sysName
object is set to the hostname. The sysLocation
and sysContact
objects can be configured in the /etc/snmp/snmpd.conf
file by changing the value of the syslocation
and syscontact
directives, for example:
syslocation Datacenter, Row 3, Rack 2
syscontact UNIX Admin <admin@example.com>
snmpwalk
command again:
~]# systemctl reload snmpd.service
~]# snmpwalk -v2c -c public localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64 #1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (158357) 0:26:23.57
SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <admin@example.com>
SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain
SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 3, Rack 2
rocommunity
or rwcommunity
directive in the /etc/snmp/snmpd.conf
configuration file. The format of the directives is the following:
directive community [source [OID]]
system
tree to a client using the community string “redhat” on the local machine:
rocommunity redhat 127.0.0.1 .1.3.6.1.2.1.1
snmpwalk
command with the -v
and -c
options.
~]# snmpwalk -v2c -c redhat localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64 #1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (158357) 0:26:23.57
SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <admin@example.com>
SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain
SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 3, Rack 2
net-snmp-create-v3-user
command. This command adds entries to the /var/lib/net-snmp/snmpd.conf
and /etc/snmp/snmpd.conf
files which create the user and grant access to the user. Note that the net-snmp-create-v3-user
command may only be run when the agent is not running. The following example creates the “admin” user with the password “redhatsnmp”:
~]# systemctl stop snmpd.service
~]# net-snmp-create-v3-user
Enter a SNMPv3 user name to create: admin
Enter authentication pass-phrase: redhatsnmp
Enter encryption pass-phrase: [press return to reuse the authentication pass-phrase]
adding the following line to /var/lib/net-snmp/snmpd.conf:
   createUser admin MD5 "redhatsnmp" DES
adding the following line to /etc/snmp/snmpd.conf:
   rwuser admin
~]# systemctl start snmpd.service
rwuser
directive (or rouser
when the -ro
command line option is supplied) that net-snmp-create-v3-user
adds to /etc/snmp/snmpd.conf
has a similar format to the rwcommunity
and rocommunity
directives:
directive user [noauth
|auth
|priv
] [OID]
auth
option). The noauth
option allows you to permit unauthenticated requests, and the priv
option enforces the use of encryption. The authpriv
option specifies that requests must be authenticated and replies should be encrypted.
rwuser admin authpriv .1
.snmp
directory in your user's home directory and a configuration file named snmp.conf
in that directory (~/.snmp/snmp.conf
) with the following lines:
defVersion 3
defSecurityLevel authPriv
defSecurityName admin
defPassphrase redhatsnmp
snmpwalk
command will now use these authentication settings when querying the agent:
~]$ snmpwalk -v3 localhost system
SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 2.6.32-122.el6.x86_64 #1 SMP Wed Mar 9 23:54:34 EST 2011 x86_64
[output truncated]
Host Resources MIB
included with Net-SNMP presents information about the current hardware and software configuration of a host to a client utility. Table 16.3, “Available OIDs” summarizes the different OIDs available under that MIB.
Table 16.3. Available OIDs
OID | Description |
---|---|
HOST-RESOURCES-MIB::hrSystem | Contains general system information such as uptime, number of users, and number of running processes. |
HOST-RESOURCES-MIB::hrStorage | Contains data on memory and file system usage. |
HOST-RESOURCES-MIB::hrDevices | Contains a listing of all processors, network devices, and file systems. |
HOST-RESOURCES-MIB::hrSWRun | Contains a listing of all running processes. |
HOST-RESOURCES-MIB::hrSWRunPerf | Contains memory and CPU statistics on the process table from HOST-RESOURCES-MIB::hrSWRun. |
HOST-RESOURCES-MIB::hrSWInstalled | Contains a listing of the RPM database. |
HOST-RESOURCES-MIB::hrFSTable
:
~]$ snmptable -Cb localhost HOST-RESOURCES-MIB::hrFSTable
SNMP table: HOST-RESOURCES-MIB::hrFSTable
Index MountPoint RemoteMountPoint Type
Access Bootable StorageIndex LastFullBackupDate LastPartialBackupDate
1 "/" "" HOST-RESOURCES-TYPES::hrFSLinuxExt2
readWrite true 31 0-1-1,0:0:0.0 0-1-1,0:0:0.0
5 "/dev/shm" "" HOST-RESOURCES-TYPES::hrFSOther
readWrite false 35 0-1-1,0:0:0.0 0-1-1,0:0:0.0
6 "/boot" "" HOST-RESOURCES-TYPES::hrFSLinuxExt2
readWrite false 36 0-1-1,0:0:0.0 0-1-1,0:0:0.0
HOST-RESOURCES-MIB
, see the /usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt
file.
UCD SNMP MIB
. The systemStats
OID provides a number of counters around processor usage:
~]$ snmpwalk localhost UCD-SNMP-MIB::systemStats
UCD-SNMP-MIB::ssIndex.0 = INTEGER: 1
UCD-SNMP-MIB::ssErrorName.0 = STRING: systemStats
UCD-SNMP-MIB::ssSwapIn.0 = INTEGER: 0 kB
UCD-SNMP-MIB::ssSwapOut.0 = INTEGER: 0 kB
UCD-SNMP-MIB::ssIOSent.0 = INTEGER: 0 blocks/s
UCD-SNMP-MIB::ssIOReceive.0 = INTEGER: 0 blocks/s
UCD-SNMP-MIB::ssSysInterrupts.0 = INTEGER: 29 interrupts/s
UCD-SNMP-MIB::ssSysContext.0 = INTEGER: 18 switches/s
UCD-SNMP-MIB::ssCpuUser.0 = INTEGER: 0
UCD-SNMP-MIB::ssCpuSystem.0 = INTEGER: 0
UCD-SNMP-MIB::ssCpuIdle.0 = INTEGER: 99
UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 2278
UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 1395
UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 6826
UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 3383736
UCD-SNMP-MIB::ssCpuRawWait.0 = Counter32: 7629
UCD-SNMP-MIB::ssCpuRawKernel.0 = Counter32: 0
UCD-SNMP-MIB::ssCpuRawInterrupt.0 = Counter32: 434
UCD-SNMP-MIB::ssIORawSent.0 = Counter32: 266770
UCD-SNMP-MIB::ssIORawReceived.0 = Counter32: 427302
UCD-SNMP-MIB::ssRawInterrupts.0 = Counter32: 743442
UCD-SNMP-MIB::ssRawContexts.0 = Counter32: 718557
UCD-SNMP-MIB::ssCpuRawSoftIRQ.0 = Counter32: 128
UCD-SNMP-MIB::ssRawSwapIn.0 = Counter32: 0
UCD-SNMP-MIB::ssRawSwapOut.0 = Counter32: 0
ssCpuRawUser
, ssCpuRawSystem
, ssCpuRawWait
, and ssCpuRawIdle
OIDs provide counters which are helpful when determining whether a system is spending most of its processor time in kernel space, user space, or I/O. ssRawSwapIn
and ssRawSwapOut
can be helpful when determining whether a system is suffering from memory exhaustion.
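The Raw counters are cumulative since boot, so a single reading is not meaningful on its own; usage over an interval comes from the difference between two samples. A sketch with hypothetical counter values sampled some time apart:

```shell
# Two hypothetical samples of ssCpuRawUser/System/Idle/Wait.
user1=2278;  sys1=6826;  idle1=3383736; wait1=7629
user2=2878;  sys2=7426;  idle2=3388536; wait2=7829

# Share of the interval spent in user space plus kernel space.
busy=$(( (user2 - user1) + (sys2 - sys1) ))
total=$(( busy + (idle2 - idle1) + (wait2 - wait1) ))
pct=$(( busy * 100 / total ))
echo "${pct}%"   # 19%
```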
UCD-SNMP-MIB::memory
OID, which provides similar data to the free
command:
~]$ snmpwalk localhost UCD-SNMP-MIB::memory
UCD-SNMP-MIB::memIndex.0 = INTEGER: 0
UCD-SNMP-MIB::memErrorName.0 = STRING: swap
UCD-SNMP-MIB::memTotalSwap.0 = INTEGER: 1023992 kB
UCD-SNMP-MIB::memAvailSwap.0 = INTEGER: 1023992 kB
UCD-SNMP-MIB::memTotalReal.0 = INTEGER: 1021588 kB
UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 634260 kB
UCD-SNMP-MIB::memTotalFree.0 = INTEGER: 1658252 kB
UCD-SNMP-MIB::memMinimumSwap.0 = INTEGER: 16000 kB
UCD-SNMP-MIB::memBuffer.0 = INTEGER: 30760 kB
UCD-SNMP-MIB::memCached.0 = INTEGER: 216200 kB
UCD-SNMP-MIB::memSwapError.0 = INTEGER: noError(0)
UCD-SNMP-MIB::memSwapErrorMsg.0 = STRING:
UCD SNMP MIB
. The SNMP table UCD-SNMP-MIB::laTable
has a listing of the 1, 5, and 15 minute load averages:
~]$ snmptable localhost UCD-SNMP-MIB::laTable
SNMP table: UCD-SNMP-MIB::laTable
laIndex laNames laLoad laConfig laLoadInt laLoadFloat laErrorFlag laErrMessage
1 Load-1 0.00 12.00 0 0.000000 noError
2 Load-5 0.00 12.00 0 0.000000 noError
3 Load-15 0.00 12.00 0 0.000000 noError
Host Resources MIB
provides information on file system size and usage. Each file system (and also each memory pool) has an entry in the HOST-RESOURCES-MIB::hrStorageTable
table:
~]$ snmptable -Cb localhost HOST-RESOURCES-MIB::hrStorageTable
SNMP table: HOST-RESOURCES-MIB::hrStorageTable
Index Type Descr
AllocationUnits Size Used AllocationFailures
1 HOST-RESOURCES-TYPES::hrStorageRam Physical memory
1024 Bytes 1021588 388064 ?
3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Virtual memory
1024 Bytes 2045580 388064 ?
6 HOST-RESOURCES-TYPES::hrStorageOther Memory buffers
1024 Bytes 1021588 31048 ?
7 HOST-RESOURCES-TYPES::hrStorageOther Cached memory
1024 Bytes 216604 216604 ?
10 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Swap space
1024 Bytes 1023992 0 ?
31 HOST-RESOURCES-TYPES::hrStorageFixedDisk /
4096 Bytes 2277614 250391 ?
35 HOST-RESOURCES-TYPES::hrStorageFixedDisk /dev/shm
4096 Bytes 127698 0 ?
36 HOST-RESOURCES-TYPES::hrStorageFixedDisk /boot
1024 Bytes 198337 26694 ?
HOST-RESOURCES-MIB::hrStorageSize
and HOST-RESOURCES-MIB::hrStorageUsed
can be used to calculate the remaining capacity of each mounted file system.
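Both columns are expressed in AllocationUnits, so the free space in bytes is (hrStorageSize − hrStorageUsed) × AllocationUnits. Applying this to the root file system row from the table above:

```shell
# Row for "/" in the sample hrStorageTable: 4096-byte allocation units.
alloc_units=4096
size=2277614; used=250391

free_kib=$(( (size - used) * alloc_units / 1024 ))
echo "${free_kib} KiB free"   # 8108892 KiB
```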
UCD-SNMP-MIB::systemStats
(ssIORawSent.0
and ssIORawReceived.0
) and in UCD-DISKIO-MIB::diskIOTable
. The latter provides much more granular data. Under this table are OIDs for diskIONReadX
and diskIONWrittenX
, which provide counters for the number of bytes read from and written to the block device in question since the system boot:
~]$ snmptable -Cb localhost UCD-DISKIO-MIB::diskIOTable
SNMP table: UCD-DISKIO-MIB::diskIOTable
Index Device NRead NWritten Reads Writes LA1 LA5 LA15 NReadX NWrittenX
...
25 sda 216886272 139109376 16409 4894 ? ? ? 216886272 139109376
26 sda1 2455552 5120 613 2 ? ? ? 2455552 5120
27 sda2 1486848 0 332 0 ? ? ? 1486848 0
28 sda3 212321280 139104256 15312 4871 ? ? ? 212321280 139104256
IF-MIB::ifTable
provides an SNMP table with an entry for each interface on the system, the configuration of the interface, and various packet counters for the interface. The following example shows the first few columns of ifTable
on a system with two physical network interfaces:
~]$ snmptable -Cb localhost IF-MIB::ifTable
SNMP table: IF-MIB::ifTable
Index Descr Type Mtu Speed PhysAddress AdminStatus
1 lo softwareLoopback 16436 10000000 up
2 eth0 ethernetCsmacd 1500 0 52:54:0:c7:69:58 up
3 eth1 ethernetCsmacd 1500 0 52:54:0:a7:a3:24 down
IF-MIB::ifOutOctets
and IF-MIB::ifInOctets
. The following SNMP queries will retrieve network traffic for each of the interfaces on this system:
~]$ snmpwalk localhost IF-MIB::ifDescr
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: eth0
IF-MIB::ifDescr.3 = STRING: eth1
~]$ snmpwalk localhost IF-MIB::ifOutOctets
IF-MIB::ifOutOctets.1 = Counter32: 10060699
IF-MIB::ifOutOctets.2 = Counter32: 650
IF-MIB::ifOutOctets.3 = Counter32: 0
~]$ snmpwalk localhost IF-MIB::ifInOctets
IF-MIB::ifInOctets.1 = Counter32: 10060699
IF-MIB::ifInOctets.2 = Counter32: 78650
IF-MIB::ifInOctets.3 = Counter32: 0
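As a rough sketch (not part of the original output), the counter values from such a walk can be totaled with a few lines of shell. The sample data below is copied from the ifOutOctets output above; in practice you would pipe the output of snmpwalk directly into awk:

```shell
# Sample snmpwalk output, copied from above for illustration; in practice,
# replace the variable with: snmpwalk localhost IF-MIB::ifOutOctets
sample='IF-MIB::ifOutOctets.1 = Counter32: 10060699
IF-MIB::ifOutOctets.2 = Counter32: 650
IF-MIB::ifOutOctets.3 = Counter32: 0'

# Sum the Counter32 values across all interfaces; the value is the last
# field when splitting each line on ": "
total=$(printf '%s\n' "$sample" | awk -F': ' '{sum += $NF} END {print sum}')
echo "total outbound octets: $total"
```

Sampling the same counters twice and subtracting gives the number of octets transferred in the interval, which is how most SNMP-based traffic graphs are produced.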
NET-SNMP-EXTEND-MIB
) that can be used to query arbitrary shell scripts. To specify the shell script to run, use the extend
directive in the /etc/snmp/snmpd.conf
file. Once defined, the Agent will provide the exit code and any output of the command over SNMP. The example below demonstrates this mechanism with a script which determines the number of httpd
processes in the process table.
Using the proc directive
proc
directive. See the snmpd.conf(5) manual page for more information.
httpd
processes running on the system at a given point in time:
#!/bin/sh
NUMPIDS=`pgrep httpd | wc -l`
exit $NUMPIDS
extend
directive to the /etc/snmp/snmpd.conf
file. The format of the extend
directive is the following:
extend
name prog args
/usr/local/bin/check_apache.sh
, the following directive will add the script to the SNMP tree:
extend httpd_pids /bin/sh /usr/local/bin/check_apache.sh
NET-SNMP-EXTEND-MIB::nsExtendObjects
:
~]$ snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects
NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendCommand."httpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendArgs."httpd_pids" = STRING: /usr/local/bin/check_apache.sh
NET-SNMP-EXTEND-MIB::nsExtendInput."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendCacheTime."httpd_pids" = INTEGER: 5
NET-SNMP-EXTEND-MIB::nsExtendExecType."httpd_pids" = INTEGER: exec(1)
NET-SNMP-EXTEND-MIB::nsExtendRunType."httpd_pids" = INTEGER: run-on-read(1)
NET-SNMP-EXTEND-MIB::nsExtendStorage."httpd_pids" = INTEGER: permanent(4)
NET-SNMP-EXTEND-MIB::nsExtendStatus."httpd_pids" = INTEGER: active(1)
NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendOutputFull."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendOutNumLines."httpd_pids" = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
NET-SNMP-EXTEND-MIB::nsExtendOutLine."httpd_pids".1 = STRING:
extend
directive. For example, the following shell script can be used to determine the number of processes matching an arbitrary string, and will also output a text string giving the number of processes:
#!/bin/sh
PATTERN=$1
NUMPIDS=`pgrep $PATTERN | wc -l`
echo "There are $NUMPIDS $PATTERN processes."
exit $NUMPIDS
/etc/snmp/snmpd.conf
directives will give both the number of httpd
PIDs as well as the number of snmpd
PIDs when the above script is copied to /usr/local/bin/check_proc.sh
:
extend httpd_pids /bin/sh /usr/local/bin/check_proc.sh httpd
extend snmpd_pids /bin/sh /usr/local/bin/check_proc.sh snmpd
snmpwalk
of the nsExtendObjects
OID:
~]$ snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects
NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 2
NET-SNMP-EXTEND-MIB::nsExtendCommand."httpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendCommand."snmpd_pids" = STRING: /bin/sh
NET-SNMP-EXTEND-MIB::nsExtendArgs."httpd_pids" = STRING: /usr/local/bin/check_proc.sh httpd
NET-SNMP-EXTEND-MIB::nsExtendArgs."snmpd_pids" = STRING: /usr/local/bin/check_proc.sh snmpd
NET-SNMP-EXTEND-MIB::nsExtendInput."httpd_pids" = STRING:
NET-SNMP-EXTEND-MIB::nsExtendInput."snmpd_pids" = STRING:
...
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
NET-SNMP-EXTEND-MIB::nsExtendResult."snmpd_pids" = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendOutLine."httpd_pids".1 = STRING: There are 8 httpd processes.
NET-SNMP-EXTEND-MIB::nsExtendOutLine."snmpd_pids".1 = STRING: There are 1 snmpd processes.
Integer exit codes are limited
httpd
processes. This query could be used during a performance test to determine the impact of the number of processes on memory pressure:
~]$snmpget localhost \
'NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids"' \
UCD-SNMP-MIB::memAvailReal.0
NET-SNMP-EXTEND-MIB::nsExtendResult."httpd_pids" = INTEGER: 8
UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 799664 kB
extend
directive is a fairly limited method for exposing custom application metrics over SNMP. The Net-SNMP Agent also provides an embedded Perl interface for exposing custom objects. The net-snmp-perl package provides the NetSNMP::agent
Perl module that is used to write embedded Perl plug-ins on Fedora.
NetSNMP::agent
Perl module provides an agent
object which is used to handle requests for a part of the agent's OID tree. The agent
object's constructor has options for running the agent as a sub-agent of snmpd
or a standalone agent. No arguments are necessary to create an embedded agent:
use NetSNMP::agent (':all');

my $agent = new NetSNMP::agent();
agent
object has a register
method which is used to register a callback function with a particular OID. The register
function takes a name, OID, and pointer to the callback function. The following example will register a callback function named hello_handler
with the SNMP Agent which will handle requests under the OID .1.3.6.1.4.1.8072.9999.9999
:
$agent->register("hello_world", ".1.3.6.1.4.1.8072.9999.9999", \&hello_handler);
Obtaining a root OID
.1.3.6.1.4.1.8072.9999.9999
(NET-SNMP-MIB::netSnmpPlaypen
) is typically used for demonstration purposes only. If your organization does not already have a root OID, you can obtain one by contacting your Name Registration Authority (ANSI in the United States).
HANDLER
, REGISTRATION_INFO
, REQUEST_INFO
, and REQUESTS
. The REQUESTS
parameter contains a list of requests in the current call and should be iterated over and populated with data. The request
objects in the list have get and set methods which allow for manipulating the OID and value of the request. For example, the following call will set the value of a request object to the string “hello world”:
$request->setValue(ASN_OCTET_STR, "hello world");
getMode
method on the request_info
object passed as the third parameter to the handler function. If the request is a GET request, the caller will expect the handler to set the value of the request
object, depending on the OID of the request. If the request is a GETNEXT request, the caller will also expect the handler to set the OID of the request to the next available OID in the tree. This is illustrated in the following code example:
my $request;
my $string_value = "hello world";
my $integer_value = "8675309";

for($request = $requests; $request; $request = $request->next()) {
  my $oid = $request->getOID();
  if ($request_info->getMode() == MODE_GET) {
    if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
      $request->setValue(ASN_OCTET_STR, $string_value);
    }
    elsif ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.1")) {
      $request->setValue(ASN_INTEGER, $integer_value);
    }
  } elsif ($request_info->getMode() == MODE_GETNEXT) {
    if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
      $request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.1");
      $request->setValue(ASN_INTEGER, $integer_value);
    }
    elsif ($oid < new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
      $request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.0");
      $request->setValue(ASN_OCTET_STR, $string_value);
    }
  }
}
getMode
returns MODE_GET
, the handler analyzes the value of the getOID
call on the request
object. The value of the request
is set to either string_value
if the OID ends in “.1.0”, or set to integer_value
if the OID ends in “.1.1”. If the getMode
returns MODE_GETNEXT
, the handler determines whether the OID of the request is “.1.0”, and then sets the OID and value for “.1.1”. If the request is higher on the tree than “.1.0”, the OID and value for “.1.0” is set. This in effect returns the “next” value in the tree so that a program like snmpwalk
can traverse the tree without prior knowledge of the structure.
NetSNMP::ASN
. See the perldoc
for NetSNMP::ASN
for a full list of available constants.
#!/usr/bin/perl

use NetSNMP::agent (':all');
use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER);

sub hello_handler {
  my ($handler, $registration_info, $request_info, $requests) = @_;
  my $request;
  my $string_value = "hello world";
  my $integer_value = "8675309";

  for($request = $requests; $request; $request = $request->next()) {
    my $oid = $request->getOID();
    if ($request_info->getMode() == MODE_GET) {
      if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
        $request->setValue(ASN_OCTET_STR, $string_value);
      }
      elsif ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.1")) {
        $request->setValue(ASN_INTEGER, $integer_value);
      }
    } elsif ($request_info->getMode() == MODE_GETNEXT) {
      if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
        $request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.1");
        $request->setValue(ASN_INTEGER, $integer_value);
      }
      elsif ($oid < new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) {
        $request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.0");
        $request->setValue(ASN_OCTET_STR, $string_value);
      }
    }
  }
}

my $agent = new NetSNMP::agent();
$agent->register("hello_world", ".1.3.6.1.4.1.8072.9999.9999", \&hello_handler);
/usr/share/snmp/hello_world.pl
and add the following line to the /etc/snmp/snmpd.conf
configuration file:
perl do "/usr/share/snmp/hello_world.pl"
snmpwalk
should return the new data:
~]$ snmpwalk localhost NET-SNMP-MIB::netSnmpPlaypen
NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: "hello world"
NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309
snmpget
should also be used to exercise the other mode of the handler:
~]$snmpget localhost \
NET-SNMP-MIB::netSnmpPlaypen.1.0 \
NET-SNMP-MIB::netSnmpPlaypen.1.1
NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: "hello world"
NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309
ps
command.
top
command.
free
command.
df
command.
du
command.
lspci
command.
snmpd
service.
/etc/snmp/snmpd.conf
file containing full documentation of available configuration directives.
rsyslogd
. A list of log files maintained by rsyslogd
can be found in the /etc/rsyslog.conf
configuration file.
sysklogd
daemon. rsyslog supports the same functionality as sysklogd and extends it with enhanced filtering, encryption-protected relaying of messages, additional configuration options, and support for transport via the TCP
or UDP
protocols. Note that rsyslog is compatible with sysklogd.
/etc/rsyslog.conf
. It consists of global directives, rules, and comments (any text following a hash sign (#
)); empty lines are ignored. Both global directives and rules are described in detail in the sections below.
rsyslogd
daemon. They usually specify a value for a specific pre-defined variable that affects the behavior of the rsyslogd
daemon or a rule that follows. All of the global directives must start with a dollar sign ($
). Only one directive can be specified per line. The following is an example of a global directive that specifies the maximum size of the syslog message queue:
$MainMsgQueueSize 50000
10,000
messages) can be overridden by specifying a different value (as shown in the example above).
/etc/rsyslog.conf
configuration file. A directive affects the behavior of all configuration options until another occurrence of that same directive is detected.
/usr/share/doc/rsyslog/rsyslog_conf_global.html
.
$ModLoad <MODULE>
$ModLoad
is the global directive that loads the specified module and <MODULE> represents your desired module. For example, if you want to load the Text File Input Module
(imfile
— enables rsyslog to convert any standard text files into syslog messages), specify the following line in your /etc/rsyslog.conf
configuration file:
$ModLoad imfile
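As an illustrative sketch, a minimal imfile setup that feeds a plain text log into syslog might look like the following. The file path and tag are hypothetical placeholders, and the directive names follow the legacy rsyslog syntax; consult the imfile documentation for your rsyslog version:

```
$ModLoad imfile
$InputFileName /var/log/myapp/app.log    # hypothetical application log
$InputFileTag myapp:
$InputFileStateFile stat-myapp
$InputRunFileMonitor
```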
im
prefix, such as imfile
, imrelp
, etc.
om
prefix, such as omsnmp
, omrelp
, etc.
fm
prefix.
pm
prefix, such as pmrfc5424
, pmrfc3164
, etc.
sm
prefix, such as smfile
, smtradfile
, etc.
Make sure you use trustworthy modules only
/etc/rsyslog.conf
configuration file, define both a filter and an action on one line, separated by one or more spaces or tabs. For more information on filters, refer to Section 17.1.3.1, “Filter Conditions” and for information on actions, refer to Section 17.1.3.2, “Actions”.
<FACILITY>.<PRIORITY>
mail
subsystem handles all mail related syslog messages. <FACILITY> can be represented by one of these keywords: auth
, authpriv
, cron
, daemon
, kern
, lpr
, mail
, news
, syslog
, user
, uucp
, and local0
through local7
.
debug
, info
, notice
, warning
, err
, crit
, alert
, and emerg
.
=
), you specify that only syslog messages with that priority will be selected. All other priorities will be ignored. Conversely, preceding a priority with an exclamation mark (!
) selects all syslog messages but those with the defined priority. By not using either of these two extensions, you specify a selection of syslog messages with the defined or higher priority.
*
) to define all facilities or priorities (depending on where you place the asterisk, before or after the dot). Specifying the keyword none
indicates that no priority of the given facility is selected.
,
). To define multiple filters on one line, separate them with a semi-colon (;
).
kern.*                 # Selects all kernel syslog messages with any priority
mail.crit              # Selects all mail syslog messages with priority crit and higher
cron.!info,!debug      # Selects all cron syslog messages except those with the info or debug priority
timegenerated
or syslogtag
. For more information on properties, refer to Section 17.1.3.3.2, “Properties”. Each of the properties specified in the filters lets you compare it to a specific value using one of the compare-operations listed in Table 17.1, “Property-based compare-operations”.
Table 17.1. Property-based compare-operations
Compare-operation | Description |
---|---|
contains | Checks whether the provided string matches any part of the text provided by the property. |
isequal | Compares the provided string against all of the text provided by the property. |
startswith | Checks whether the provided string matches a prefix of the text provided by the property. |
regex | Compares the provided POSIX BRE (Basic Regular Expression) regular expression against the text provided by the property. |
ereregex | Compares the provided POSIX ERE (Extended Regular Expression) regular expression against the text provided by the property. |
:<PROPERTY>, [!]<COMPARE_OPERATION>, "<STRING>"
timegenerated
, hostname
, etc.).
!
) negates the output of the compare-operation (if prefixing the compare-operation).
"
)), use the backslash character (\
).
error
in their message text:
:msg, contains, "error"
host1
:
:hostname, isequal, "host1"
fatal
and error
with any or no text between them (for example, fatal lib error
):
:msg, !regex, "fatal .* error"
/usr/share/doc/rsyslog/rscript_abnf.html
along with examples of various expression-based filters.
if <EXPRESSION> then <ACTION>
$msg startswith 'DEVNAME'
or $syslogfacility-text == 'local0'
.
true
.
Define an expression-based filter on a single line
Do not use regular expressions
/etc/rsyslog.conf
configuration file. Each block consists of rules which are preceded by a program or hostname label. Use the '!<PROGRAM>' or '-<PROGRAM>' labels to include or exclude programs, respectively. Use the '+<HOSTNAME>' or '-<HOSTNAME>' labels to include or exclude hostnames, respectively.
/var/log/cron.log
log file:
cron.* /var/log/cron.log
-
) as a prefix of the file path you specified if you want to omit syncing the desired log file after every syslog message is generated.
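For example, the rule below writes mail messages without syncing the file after each message; the dash before the path is the only difference from an ordinary file action:

```
mail.* -/var/log/maillog
```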
?
) prefix. For more information on templates, refer to Section 17.1.3.3.1, “Generating dynamic file names”.
/dev/console
device, syslog messages are sent to standard output (using special tty-handling) or your console (using special /dev/console
-handling) when using the X Window System, respectively.
@[(<OPTION>)]<HOST>:[<PORT>]
@
) indicates that the syslog messages are forwarded to a host using the UDP
protocol. To use the TCP
protocol, use two at signs with no space between them (@@
).
z<NUMBER>
. This option enables zlib compression for syslog messages; the <NUMBER> attribute specifies the level of compression. To define multiple options, simply separate each one of them with a comma (,
).
IPv6
address as the host, enclose the address in square brackets ([
, ]
).
*.* @192.168.0.1        # Forwards messages to 192.168.0.1 via the UDP protocol
*.* @@example.com:18    # Forwards messages to "example.com" using port 18 and the TCP protocol
*.* @(z9)[2001::1]      # Compresses messages with zlib (level 9 compression)
                        # and forwards them to 2001::1 using the UDP protocol
$outchannel <NAME>, <FILE_NAME>, <MAX_SIZE>, <ACTION>
$outchannel
directive and then used in a rule which selects every syslog message with any priority and executes the previously-defined output channel on the acquired syslog messages. Once the limit (in the example 100 MB
) is hit, the /home/joe/log_rotation_script
is executed. This script can do anything from moving the file into a different directory to editing specific content out of it or simply removing it.
Example 17.2. Output channel log rotation
$outchannel log_rotation, /var/log/test_log.log, 104857600, /home/joe/log_rotation_script
*.* $log_rotation
Support for output channels is to be removed in the future
,
). To send messages to every user that is currently logged on, use an asterisk (*
).
system()
call to execute the program in shell. To specify a program to be executed, prefix it with a caret character (^
). Consequently, specify a template that formats the received message and passes it to the specified executable as a one line parameter (for more information on templates, refer to Section 17.1.3.3, “Templates”). In the following example, any syslog message with any priority is selected, formatted with the template
template and passed as a parameter to the test-program program, which is then executed with the provided parameter:
*.* ^test-program;template
Be careful when using the shell execute action
:<PLUGIN>:<DB_HOST>,<DB_NAME>,<DB_USER>,<DB_PASSWORD>;[<TEMPLATE>]
ommysql
plug-in).
Using MySQL and PostgreSQL
MySQL
(for more information, refer to /usr/share/doc/rsyslog/rsyslog_mysql.html
) and PostgreSQL
databases only. In order to use the MySQL
and PostgreSQL
database writer functionality, install the rsyslog-mysql and rsyslog-pgsql packages, respectively. Also, make sure you load the appropriate modules in your /etc/rsyslog.conf
configuration file:
$ModLoad ommysql    # Output module for MySQL support
$ModLoad ompgsql    # Output module for PostgreSQL support
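A complete database action could then look like the following hypothetical example, built from the format shown above; the host, database name, and credentials are placeholders:

```
$ModLoad ommysql
*.* :ommysql:192.168.0.1,Syslog,rsysuser,secret
```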
omlibdb
module. However, this module is currently not compiled.
~
). The following rule discards any cron syslog messages:
cron.* ~
kern.=crit joe & ^test-program;temp & @192.168.0.1
crit
) are sent to user joe
, processed by the template temp
and passed on to the test-program
executable, and forwarded to 192.168.0.1
via the UDP
protocol.
;
) and specify the name of the template.
Using templates
$template <TEMPLATE_NAME>,"text %<PROPERTY>% more text", [<OPTION>]
$template
is the template directive that indicates that the text following it, defines a template.
<TEMPLATE_NAME>
is the name of the template. Use this name to refer to the template.
"
…"
) is the actual template text. Within this text, you are allowed to escape characters in order to use their functionality, such as \n
for new line or \r
for carriage return. Other characters, such as %
or "
, have to be escaped if you want to use those characters literally.
%
) specifies a property that is consequently replaced with the property's actual value. For more information on properties, refer to Section 17.1.3.3.2, “Properties”
<OPTION>
attribute specifies any options that modify the template functionality. Do not mistake these for property options, which are defined inside the template text (between "
…"
). The currently supported template options are sql
and stdsql
used for formatting the text as an SQL query.
The sql and stdsql options
sql
and stdsql
options are specified in the template. If they are not, the database writer does not perform any action. This is to prevent any possible security threats, such as SQL injection.
timegenerated
property to generate a unique file name for each syslog message:
$template DynamicFile,"/var/log/test_logs/%timegenerated%-test.log"
$template
directive only specifies the template. You must use it inside a rule for it to take effect:
*.* ?DynamicFile
%
)) allow you to access various contents of a syslog message through the use of a property replacer. To define a property inside a template (between the two quotation marks ("
…"
)), use the following syntax:
%<PROPERTY_NAME>[:<FROM_CHAR>:<TO_CHAR>:<OPTION>]%
/usr/share/doc/rsyslog/property_replacer.html
under the section Available Properties.
R
as the <FROM_CHAR> attribute and specify your desired regular expression as the <TO_CHAR> attribute.
/usr/share/doc/rsyslog/property_replacer.html
under the section Property Options.
%msg% — the whole message text
%msg:1:2% — the first two characters of the message
%msg:::drop-last-lf% — the whole message with the trailing line feed dropped
%timegenerated:1:10:date-rfc3339% — the first 10 characters of the timestamp, formatted according to RFC 3339
Example 17.3. A verbose syslog message template
$template verbose,"%syslogseverity%,%syslogfacility%,%timegenerated%,%HOSTNAME%,%syslogtag%,%msg%\n"
mesg(1)
permission set to yes
). This template outputs the message text, along with a hostname, message tag and a timestamp, on a new line (using \r
and \n
) and rings the bell (using \7
).
Example 17.4. A wall message template
$template wallmsg,"\r\n\7Message from syslogd@%HOSTNAME% at %timegenerated% ...\r\n %syslogtag% %msg%\n\r"
sql
option at the end of the template specified as the template option. It tells the database writer to format the message as a MySQL SQL
query.
Example 17.5. A database formatted message template
$template dbFormat,"insert into SystemEvents (Message, Facility,FromHost, Priority, DeviceReportedTime, ReceivedAt, InfoUnitID, SysLogTag) values ('%msg%', %syslogfacility%, '%HOSTNAME%',%syslogpriority%, '%timereported:::date-mysql%', '%timegenerated:::date-mysql%', %iut%, '%syslogtag%')",sql
RSYSLOG_
prefix. To avoid conflicts, do not create templates using this prefix. The following list shows these predefined templates along with their definitions.
RSYSLOG_DebugFormat
"Debug line with all properties:\nFROMHOST: '%FROMHOST%', fromhost-ip: '%fromhost-ip%', HOSTNAME: '%HOSTNAME%', PRI: %PRI%,\nsyslogtag '%syslogtag%', programname: '%programname%', APP-NAME: '%APP-NAME%', PROCID: '%PROCID%', MSGID: '%MSGID%',\nTIMESTAMP: '%TIMESTAMP%', STRUCTURED-DATA: '%STRUCTURED-DATA%',\nmsg: '%msg%'\nescaped msg: '%msg:::drop-cc%'\nrawmsg: '%rawmsg%'\n\n"
RSYSLOG_SyslogProtocol23Format
"<%PRI%>1 %TIMESTAMP:::date-rfc3339% %HOSTNAME% %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %msg%\n"
RSYSLOG_FileFormat
"%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n"
RSYSLOG_TraditionalFileFormat
"%TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n"
RSYSLOG_ForwardFormat
"<%PRI%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
RSYSLOG_TraditionalForwardFormat
"<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
-c
option. When no parameter is specified, rsyslog tries to be compatible with sysklogd. This is partially achieved by activating configuration directives that modify your configuration accordingly. Therefore, it is advisable to supply this option with a number that matches the major version of rsyslog that is in use and update your /etc/rsyslog.conf
configuration file accordingly. If you want to, for example, use sysklogd options (which were deprecated in version 3 of rsyslog), you can do so by executing the following command:
~]# rsyslogd -c 2
rsyslogd
daemon, including the backward compatibility mode, can be specified in the /etc/sysconfig/rsyslog
configuration file.
rsyslogd
options, refer to man rsyslogd
.
/var/log/
directory. Some applications such as httpd
and samba
have a directory within /var/log/
for their log files.
/var/log/
directory with numbers after them (for example, cron-20100906
). These numbers represent a timestamp that has been added to a rotated log file. Log files are rotated so their file sizes do not become too large. The logrotate
package contains a cron task that automatically rotates log files according to the /etc/logrotate.conf
configuration file and the configuration files in the /etc/logrotate.d/
directory.
/etc/logrotate.conf
configuration file:
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# uncomment this if you want your log files compressed
compress
.gz
format. Any lines that begin with a hash sign (#) are comments and are not processed.
/etc/logrotate.d/
directory and define any configuration options there.
/etc/logrotate.d/
directory:
/var/log/messages {
    rotate 5
    weekly
    postrotate
        /usr/bin/killall -HUP syslogd
    endscript
}
/var/log/messages
log file only. The settings specified here override the global settings where possible. Thus the rotated /var/log/messages
log file will be kept for five weeks instead of four weeks as was defined in the global options.
weekly
— Specifies the rotation of log files on a weekly basis. Similar directives include:
daily
monthly
yearly
compress
— Enables compression of rotated log files. Similar directives include:
nocompress
compresscmd
— Specifies the command to be used for compressing.
uncompresscmd
— Specifies the command to be used for uncompressing.
compressext
— Specifies the extension to be used for compressed log files.
compressoptions
— Lets you specify any options that may be passed to the used compression program.
delaycompress
— Postpones the compression of log files to the next rotation of log files.
rotate <INTEGER>
— Specifies the number of rotations a log file undergoes before it is removed or mailed to a specific address. If the value 0
is specified, old log files are removed instead of rotated.
mail <ADDRESS>
— This option enables mailing of log files that have been rotated as many times as is defined by the rotate
directive to the specified address. Similar directives include:
nomail
mailfirst
— Specifies that the just-rotated log files are to be mailed, instead of the about-to-expire log files.
maillast
— Specifies that the about-to-expire log files are to be mailed, instead of the just-rotated log files. This is the default option when mail
is enabled.
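The directives above can be combined in a single block. The following hypothetical snippet rotates an application log daily, keeps seven rotations, and delays compression by one rotation cycle; the log path is a placeholder:

```
/var/log/myapp/app.log {
    daily
    rotate 7
    compress
    delaycompress
}
```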
logrotate
man page (man logrotate
).
Vi
or Emacs. Some log files are readable by all users on the system; however, root
privileges are required to read most log files.
Installing the gnome-system-log package
root
:
yum install gnome-system-log
gnome-system-log
Reading zipped log files
.gz
format.
rsyslogd
manual page — Type man rsyslogd
to learn more about rsyslogd
and its many options.
rsyslog.conf
manual page — Type man rsyslog.conf
to learn more about the /etc/rsyslog.conf
configuration file and its many options.
/usr/share/doc/rsyslog/
— After installing the rsyslog package, this directory contains extensive documentation in the html
format.
logrotate
manual page — Type man logrotate
to learn more about logrotate
and its many options.
/etc/rsyslog.conf
configuration examples.
locate
command's database is updated daily. A system administrator can use automated tasks to perform periodic backups, monitor the system, run custom scripts, and more.
cron
, at
, and batch
.
cronie
RPM package must be installed and the crond
service must be running. anacron
is a sub-package of cronie
. To determine if these packages are installed, use the rpm -q cronie cronie-anacron
command.
systemctl is-active crond.service
root
:
systemctl start crond.service
root
:
systemctl stop crond.service
root
:
systemctl enable crond.service
/etc/anacrontab
(only root
is allowed to modify this file), which contains the following lines:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22

#period in days   delay in minutes   job-identifier   command
1         5     cron.daily      nice run-parts /etc/cron.daily
7         25    cron.weekly     nice run-parts /etc/cron.weekly
@monthly  45    cron.monthly    nice run-parts /etc/cron.monthly
SHELL
variable tells the system which shell environment to use (in this example the bash shell). The PATH
variable defines the path used to execute commands. The output of the anacron jobs is emailed to the username defined with the MAILTO
variable. If the MAILTO
variable is not defined (that is, it is empty: MAILTO=
), email is not sent.
RANDOM_DELAY
variable denotes the maximum number of minutes that will be added to the delay in minutes
variable which is specified for each job. The minimum delay value is set, by default, to 6 minutes. A RANDOM_DELAY
set to 12 would therefore add, randomly, between 6 and 12 minutes to the delay in minutes
for each job in that particular anacrontab. RANDOM_DELAY
can also be set to a value below 6, or even 0. When set to 0, no random delay is added. This proves to be useful when, for example, several computers that share one network connection need to download the same data every day. The START_HOURS_RANGE
variable defines an interval (in hours) when scheduled jobs can be run. In case this time interval is missed, for example, due to a power down, then scheduled jobs are not executed that day.
/etc/anacrontab
file represent scheduled jobs and have the following format:
period in days delay in minutes job-identifier command
period in days
— specifies the frequency of execution of a job in days. This variable can be represented by an integer or a macro (@daily
, @weekly
, @monthly
), where @daily
denotes the same value as the integer 1, @weekly
the same as 7, and @monthly
specifies that the job is run once a month, regardless of the length of the month.
delay in minutes
— specifies the number of minutes anacron waits, if necessary, before executing a job. This variable is represented by an integer where 0 means no delay.
job-identifier
— specifies a unique name of a job which is used in the log files.
command
— specifies the command to execute. The command can either be a command such as ls /proc >> /tmp/proc
or a command to execute a custom script.
/etc/anacrontab
file:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=30
# the jobs will be started during the following hours only
START_HOURS_RANGE=16-20

#period in days   delay in minutes   job-identifier   command
1         20    dailyjob      nice run-parts /etc/cron.daily
7         25    weeklyjob     /etc/weeklyjob.bash
@monthly  45    monthlyjob    ls /proc >> /tmp/proc
anacrontab
file are randomly delayed by 6-30 minutes and can be executed between 16:00 and 20:00. Thus, the first defined job will run anywhere between 16:26 and 16:50 every day. The command specified for this job will execute all present programs in the /etc/cron.daily
directory (using the run-parts
script which takes a directory as a command-line argument and sequentially executes every program within that directory). The second specified job will be executed once a week and will execute the weeklyjob.bash
script in the /etc
directory. The third job is executed once a month and runs a command to write the contents of the /proc
to the /tmp/proc
file (e.g. ls /proc >> /tmp/proc
).
cronie-anacron
package. Thus, you will be able to define jobs using crontabs only.
/etc/crontab
(only root
is allowed to modify this file), contains the following lines:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user command to be executed
anacrontab
file, SHELL
, PATH
and MAILTO
. For more information about these variables, refer to Section 18.1.2, “Configuring Anacron Jobs”. The fourth line contains the HOME
variable. The HOME
variable can be used to set the home directory to use when executing commands or scripts.
/etc/crontab
file represent scheduled jobs and have the following format:
minute hour day month day of week user command
minute
— any integer from 0 to 59
hour
— any integer from 0 to 23
day
— any integer from 1 to 31 (must be a valid day if a month is specified)
month
— any integer from 1 to 12 (or the short name of the month such as jan or feb)
day of week
— any integer from 0 to 7, where 0 or 7 represents Sunday (or the short name of the day such as sun or mon)
user
— specifies the user under which the jobs are run
command
— the command to execute (the command can either be a command such as ls /proc >> /tmp/proc
or the command to execute a custom script)
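Putting the fields together, a hypothetical /etc/crontab job line (the script path is illustrative) that runs a log-rotation script as root at 03:15 on the first day of every month would look like this:

```
15 3 1 * * root /usr/local/sbin/rotate-logs.sh
```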
A hyphen (-) between integers specifies a range of integers. For example, 1-4
means the integers 1, 2, 3, and 4.
A list of values separated by commas specifies a list. For example, 3, 4, 6, 8
indicates those four specific integers.
A forward slash (/) can be used to specify step values; the value of an integer will be skipped within a range when the range is followed by /integer
. For example, 0-59/2
can be used to define every other minute in the minute field. Step values can also be used with an asterisk. For instance, the value */3
can be used in the month field to run the task every third month.
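As a quick sketch of what a step value expands to, the minute specification 0-59/2 matches exactly the minutes that seq lists here:

```shell
# Every other minute of the hour, as matched by 0-59/2 in the minute field:
seq -s, 0 2 58
```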
root
can configure cron tasks by using the crontab
utility. All user-defined crontabs are stored in the /var/spool/cron/
directory and are executed using the usernames of the users that created them. To create a crontab as a user, log in as that user and type the command crontab -e
to edit the user's crontab using the editor specified by the VISUAL
or EDITOR
environment variable. The file uses the same format as /etc/crontab
. When the changes to the crontab are saved, the crontab is stored according to username and written to the file /var/spool/cron/username
. To list the contents of your own personal crontab file, use the crontab -l
command.
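A user crontab edited with crontab -e uses the same five time fields but omits the user column, since jobs always run as the crontab's owner. A hypothetical example (the script and cache paths are illustrative):

```
# m h dom mon dow command
30 2 * * * $HOME/bin/backup.sh
0  4 * * 0 find $HOME/.cache/tmp -type f -delete
```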
Do not specify a user
crontab
utility, there is no need to specify a user when defining a job.
/etc/cron.d/
directory contains files that have the same syntax as the /etc/crontab
file. Only root
is allowed to create and modify files in this directory.
Do not restart the daemon to apply the changes
/etc/anacrontab
file, the /etc/crontab
file, the /etc/cron.d/
directory, and the /var/spool/cron/
directory every minute for any changes. If any changes are found, they are loaded into memory. Thus, the daemon does not need to be restarted if an anacrontab or a crontab file is changed.
/etc/cron.allow
and /etc/cron.deny
files are used to restrict access to cron. The format of both access control files is one username on each line. Whitespace is not permitted in either file. The cron daemon (crond
) does not have to be restarted if the access control files are modified. The access control files are checked each time a user tries to add or delete a cron job.
root
user can always use cron, regardless of the usernames listed in the access control files.
cron.allow
exists, only users listed in it are allowed to use cron, and the cron.deny
file is ignored.
cron.allow
does not exist, users listed in cron.deny
are not allowed to use cron.
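The decision rules above can be sketched as a small shell function. This is an illustrative helper, not the actual crond code: root always passes, an existing allow file is authoritative, and the deny file is only consulted as a fallback.

```shell
# Sketch of cron's access-control decision (hypothetical helper):
#   cron_allowed user allow_file deny_file
cron_allowed() {
  user=$1 allow=$2 deny=$3
  # root can always use cron
  [ "$user" = root ] && return 0
  # if the allow file exists, only listed users pass; deny is ignored
  if [ -f "$allow" ]; then
    grep -qx "$user" "$allow"
    return $?
  fi
  # otherwise, users listed in the deny file are blocked
  if [ -f "$deny" ]; then
    grep -qx "$user" "$deny" && return 1
  fi
  return 0
}
```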
/etc/security/access.conf
. For example, adding the following line in this file forbids creating crontabs for all users except the root
user:
-:ALL EXCEPT root :cron
access.conf.5
(i.e. man 5 access.conf
).
run-parts
script on a cron folder, such as /etc/cron.daily
, we can define which of the programs in this folder will not be executed by run-parts
.
jobs.deny
file in the directory that run-parts
will execute from. For example, to omit a particular program from /etc/cron.daily/, create a file /etc/cron.daily/jobs.deny
. In this file, specify the names of the programs from the same directory to be omitted; these will not be executed when a command such as run-parts /etc/cron.daily
is run by a specific job.
jobs.allow
file.
jobs.deny
and jobs.allow
are the same as those of cron.deny
and cron.allow
described in Section 18.1.4, “Controlling Access to Cron”.
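The skipping behavior can be sketched as a short loop. This is an illustrative sketch only, not the actual run-parts script; the job names and temporary directory stand in for a real cron folder:

```shell
# Build a throwaway cron-style folder with two jobs and a jobs.deny file:
dir=$(mktemp -d)
printf '#!/bin/sh\necho allowed-job\n' > "$dir/allowed-job"
printf '#!/bin/sh\necho denied-job\n'  > "$dir/denied-job"
chmod +x "$dir/allowed-job" "$dir/denied-job"
echo denied-job > "$dir/jobs.deny"

# Execute every program in the folder except those listed in jobs.deny:
for prog in "$dir"/*; do
  name=$(basename "$prog")
  [ "$name" = jobs.deny ] && continue            # skip the control file itself
  grep -qx "$name" "$dir/jobs.deny" && continue  # honor jobs.deny
  "$prog"
done
```

Only allowed-job is executed; denied-job is silently skipped.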
at
command is used to schedule a one-time task at a specific time and the batch
command is used to schedule a one-time task to be executed when the system's load average drops below 0.8.
at
or batch
, the at
RPM package must be installed, and the atd
service must be running. To determine if the package is installed, use the rpm -q at
command. To determine if the service is running, use the following command:
systemctl is-active atd.service
at time
, where time
is the time to execute the command.
/usr/share/doc/at/timespec
text file.
at
command with the time argument, the at>
prompt is displayed. Type the command to execute, press Enter, and press Ctrl+D . Multiple commands can be specified by typing each command followed by the Enter key. After typing all the commands, press Enter to go to a blank line and press Ctrl+D . Alternatively, a shell script can be entered at the prompt, pressing Enter after each line in the script, and pressing Ctrl+D on a blank line to exit. If a script is entered, the shell used is the shell set in the user's SHELL
environment, the user's login shell, or /bin/sh
(whichever is found first).
atq
to view pending jobs. See Section 18.2.3, “Viewing Pending Jobs” for more information.
at
command can be restricted. For more information, refer to Section 18.2.5, “Controlling Access to At and Batch”.
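To illustrate the workflow described above, here is a hypothetical at session (the job number and timestamp are illustrative; <EOT> is what at prints when you press Ctrl+D):

```
~]$ at 15:30 tomorrow
at> tar -czf /tmp/docs-backup.tar.gz ~/docs
at> <EOT>
job 7 at Thu Oct 13 15:30:00 2011
```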
batch
command.
batch
command, the at>
prompt is displayed. Type the command to execute, press Enter, and press Ctrl+D . Multiple commands can be specified by typing each command followed by the Enter key. After typing all the commands, press Enter to go to a blank line and press Ctrl+D . Alternatively, a shell script can be entered at the prompt, pressing Enter after each line in the script, and pressing Ctrl+D on a blank line to exit. If a script is entered, the shell used is the shell set in the user's SHELL
environment, the user's login shell, or /bin/sh
(whichever is found first). As soon as the load average is below 0.8, the set of commands or script is executed.
atq
to view pending jobs. See Section 18.2.3, “Viewing Pending Jobs” for more information.
batch
command can be restricted. For more information, refer to Section 18.2.5, “Controlling Access to At and Batch”.
at
and batch
jobs, use the atq
command. The atq
command displays a list of pending jobs, with each job on a line. Each line shows the job number, date, hour, job class, and username. Users can only view their own jobs. If the root
user executes the atq
command, all jobs for all users are displayed.
at
and batch
include:
Table 18.1. at
and batch
Command Line Options
Option | Description |
---|---|
-f | Read the commands or shell script from a file instead of specifying them at the prompt. |
-m | Send email to the user when the job has been completed. |
-v | Display the time that the job is executed. |
/etc/at.allow
and /etc/at.deny
files can be used to restrict access to the at
and batch
commands. The format of both access control files is one username on each line. Whitespace is not permitted in either file. The at
daemon (atd
) does not have to be restarted if the access control files are modified. The access control files are read each time a user tries to execute the at
or batch
commands.
root
user can always execute at
and batch
commands, regardless of the access control files.
at.allow
exists, only users listed in it are allowed to use at
or batch
, and the at.deny
file is ignored.
at.allow
does not exist, users listed in at.deny
are not allowed to use at
or batch
.
at
service, use the following command as root
:
systemctl start atd.service
root
, type the following at a shell prompt:
systemctl stop atd.service
root
:
systemctl enable atd.service
cron
man page — contains an overview of cron.
crontab
man pages in sections 1 and 5 — The man page in section 1 contains an overview of the crontab
file. The man page in section 5 contains the format for the file and some example entries.
anacron
man page — contains an overview of anacron.
anacrontab
man page — contains an overview of the anacrontab
file.
/usr/share/doc/at/timespec
contains more detailed information about the times that can be specified for cron jobs.
at
man page — description of at
and batch
and their command line options.
oprofile
package must be installed to use this tool.
--separate=library
option is used.
opreport
does not associate samples for inline functions properly — opreport
uses a simple address range mechanism to determine which function an address is in. Inline function samples are not attributed to the inline function but rather to the function the inline function was inserted into.
opcontrol --reset
to clear out the samples from previous runs.
timer
mode. Run the command opcontrol --deinit
, and then execute modprobe oprofile timer=1
to enable the timer
mode.
oprofile
package.
Table 19.1. OProfile Commands
Command | Description |
---|---|
ophelp |
Displays available events for the system's processor along with a brief description of each.
|
opimport |
Converts sample database files from a foreign binary format to the native format for the system. Only use this option when analyzing a sample database from a different architecture.
|
opannotate | Creates annotated source for an executable if the application was compiled with debugging symbols. See Section 19.5.4, “Using opannotate ” for details. |
opcontrol |
Configures what data is collected. See Section 19.2, “Configuring OProfile” for details.
|
opreport |
Retrieves profile data. See Section 19.5.1, “Using
opreport ” for details.
|
oprofiled |
Runs as a daemon to periodically write sample data to disk.
|
opcontrol
utility to configure OProfile. As the opcontrol
commands are executed, the setup options are saved to the /root/.oprofile/daemonrc
file.
~]# opcontrol --setup --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux
Install the debuginfo package
~]# opcontrol --setup --no-vmlinux
oprofile
kernel module, if it is not already loaded, and creates the /dev/oprofile/
directory, if it does not already exist. See Section 19.6, “Understanding /dev/oprofile/
” for details about this directory.
Table 19.2. OProfile Processors and Counters
Processor | cpu_type | Number of Counters |
---|---|---|
AMD64 | x86-64/hammer | 4 |
AMD Athlon | i386/athlon | 4 |
AMD Family 10h | x86-64/family10 | 4 |
AMD Family 11h | x86-64/family11 | 4 |
AMD Family 12h | x86-64/family12 | 4 |
AMD Family 14h | x86-64/family14 | 4 |
AMD Family 15h | x86-64/family15 | 6 |
IBM eServer System i and IBM eServer System p | timer | 1 |
IBM POWER4 | ppc64/power4 | 8 |
IBM POWER5 | ppc64/power5 | 6 |
IBM PowerPC 970 | ppc64/970 | 8 |
IBM S/390 and IBM System z | timer | 1 |
Intel Core i7 | i386/core_i7 | 4 |
Intel Nehalem microarchitecture | i386/nehalem | 4 |
Intel Pentium 4 (non-hyper-threaded) | i386/p4 | 8 |
Intel Pentium 4 (hyper-threaded) | i386/p4-ht | 4 |
Intel Westmere microarchitecture | i386/westmere | 4 |
TIMER_INT | timer | 1 |
timer
is used as the processor type if the processor does not have supported performance monitoring hardware.
timer
is used, events cannot be set for any processor because the hardware does not have support for hardware performance counters. Instead, the timer interrupt is used for profiling.
timer
is not used as the processor type, the events monitored can be changed, and counter 0 for the processor is set to a time-based event by default. If more than one counter exists on the processor, the counters other than counter 0 are not set to an event by default. The default events monitored are shown in Table 19.3, “Default Events”.
Table 19.3. Default Events
Processor | Default Event for Counter | Description |
---|---|---|
AMD Athlon and AMD64 | CPU_CLK_UNHALTED | The processor's clock is not halted |
AMD Family 10h, AMD Family 11h, AMD Family 12h | CPU_CLK_UNHALTED | The processor's clock is not halted |
AMD Family 14h, AMD Family 15h | CPU_CLK_UNHALTED | The processor's clock is not halted |
IBM POWER4 | CYCLES | Processor Cycles |
IBM POWER5 | CYCLES | Processor Cycles |
IBM PowerPC 970 | CYCLES | Processor Cycles |
Intel Core i7 | CPU_CLK_UNHALTED | The processor's clock is not halted |
Intel Nehalem microarchitecture | CPU_CLK_UNHALTED | The processor's clock is not halted |
Intel Pentium 4 (hyper-threaded and non-hyper-threaded) | GLOBAL_POWER_EVENTS | The time during which the processor is not stopped |
Intel Westmere microarchitecture | CPU_CLK_UNHALTED | The processor's clock is not halted |
TIMER_INT | (none) | Sample for each timer interrupt |
~]# ls -d /dev/oprofile/[0-9]*
~]# ophelp
opcontrol
:
~]# opcontrol --event=event-name:sample-rate
ophelp
, and replace sample-rate with the number of events between samples.
cpu_type
is not timer
, each event can have a sampling rate set for it. The sampling rate is the number of events between each sample snapshot.
~]# opcontrol --event=event-name:sample-rate
Sampling too frequently can overload the system
ophelp
command. The values for each unit mask are listed in hexadecimal format. To specify more than one unit mask, the hexadecimal values must be combined using a bitwise or operation.
~]# opcontrol --event=event-name:sample-rate:unit-mask
~]# opcontrol --event=event-name:sample-rate:unit-mask:0
~]# opcontrol --event=event-name:sample-rate:unit-mask:1
~]# opcontrol --event=event-name:sample-rate:unit-mask:kernel:0
~]# opcontrol --event=event-name:sample-rate:unit-mask:kernel:1
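As a concrete sketch of the unit-mask arithmetic above: to pass two hypothetical unit-mask values 0x01 and 0x04 in a single event specification, combine them with a bitwise OR, which the shell can compute directly:

```shell
# Combine two hypothetical unit masks (0x01 and 0x04) with a bitwise OR;
# the result is the single hexadecimal value passed in the unit-mask field.
printf '0x%02x\n' $(( 0x01 | 0x04 ))
```

The resulting 0x05 would then appear as the unit-mask field, for example opcontrol --event=event-name:sample-rate:0x05:1:1 (the event name and sample rate are illustrative; verify valid values with ophelp for your processor).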
~]# opcontrol --separate=choice
none
— do not separate the profiles (default)
library
— generate per-application profiles for libraries
kernel
— generate per-application profiles for the kernel and kernel modules
all
— generate per-application profiles for libraries and per-application profiles for the kernel and kernel modules
--separate=library
is used, the sample file name includes the name of the executable as well as the name of the library.
Restart the OProfile profiler
~]# opcontrol --start
Using log file /var/lib/oprofile/oprofiled.log
Daemon started.
Profiler running.
/root/.oprofile/daemonrc
are used.
oprofiled
, is started; it periodically writes the sample data to the /var/lib/oprofile/samples/
directory. The log file for the daemon is located at /var/lib/oprofile/oprofiled.log
.
Disable the nmi_watchdog registers
nmi_watchdog
registers with the perf
subsystem. Due to this, the perf
subsystem grabs control of the performance counter registers at boot time, blocking OProfile from working.
nmi_watchdog=0
kernel parameter set, or run the following command to disable nmi_watchdog
at run time:
~]# echo 0 > /proc/sys/kernel/nmi_watchdog
nmi_watchdog
, use the following command:
~]# echo 1 > /proc/sys/kernel/nmi_watchdog
~]# opcontrol --shutdown
~]# opcontrol --save=name
/var/lib/oprofile/samples/name/
is created and the current sample files are copied to it.
oprofiled
, collects the samples and writes them to the /var/lib/oprofile/samples/
directory. Before reading the data, make sure all data has been written to this directory by executing the following command as root:
~]# opcontrol --dump
/bin/bash
becomes:
{root}/bin/bash/{dep}/{root}/bin/bash/CPU_CLK_UNHALTED.100000
opreport
opannotate
Back up the executable and the sample files
oparchive
can be used to address this problem.
opreport
opreport
tool provides an overview of all the executables being profiled.
Profiling through timer interrupt
TIMER:0|
  samples|      %|
------------------
    25926 97.5212 no-vmlinux
      359  1.3504 pi
       65  0.2445 Xorg
       62  0.2332 libvte.so.4.4.0
       56  0.2106 libc-2.3.4.so
       34  0.1279 libglib-2.0.so.0.400.7
       19  0.0715 libXft.so.2.1.2
       17  0.0639 bash
        8  0.0301 ld-2.3.4.so
        8  0.0301 libgdk-x11-2.0.so.0.400.13
        6  0.0226 libgobject-2.0.so.0.400.7
        5  0.0188 oprofiled
        4  0.0150 libpthread-2.3.4.so
        4  0.0150 libgtk-x11-2.0.so.0.400.13
        3  0.0113 libXrender.so.1.2.2
        3  0.0113 du
        1  0.0038 libcrypto.so.0.9.7a
        1  0.0038 libpam.so.0.77
        1  0.0038 libtermcap.so.2.0.8
        1  0.0038 libX11.so.6.2
        1  0.0038 libgthread-2.0.so.0.400.7
        1  0.0038 libwnck-1.so.4.9.0
opreport
man page for a list of available command line options, such as the -r
option used to sort the output from the executable with the smallest number of samples to the one with the largest number of samples.
opreport
:
~]# opreport mode executable
-l
opreport -l /lib/tls/libc-version.so
:
samples  %        symbol name
12       21.4286  __gconv_transform_utf8_internal
 5        8.9286  _int_malloc
 4        7.1429  malloc
 3        5.3571  __i686.get_pc_thunk.bx
 3        5.3571  _dl_mcount_wrapper_check
 3        5.3571  mbrtowc
 3        5.3571  memcpy
 2        3.5714  _int_realloc
 2        3.5714  _nl_intern_locale_data
 2        3.5714  free
 2        3.5714  strcmp
 1        1.7857  __ctype_get_mb_cur_max
 1        1.7857  __unregister_atfork
 1        1.7857  __write_nocancel
 1        1.7857  _dl_addr
 1        1.7857  _int_free
 1        1.7857  _itoa_word
 1        1.7857  calc_eclosure_iter
 1        1.7857  fopen@@GLIBC_2.1
 1        1.7857  getpid
 1        1.7857  memmove
 1        1.7857  msort_with_tmp
 1        1.7857  strcpy
 1        1.7857  strlen
 1        1.7857  vfprintf
 1        1.7857  write
-r
in conjunction with the -l
option.
-i symbol-name
opreport -l -i __gconv_transform_utf8_internal /lib/tls/libc-version.so
:
samples % symbol name 12 100.000 __gconv_transform_utf8_internal
-d
-l
. For example, the following output is from the command opreport -l -d __gconv_transform_utf8_internal /lib/tls/libc-version.so
:
vma      samples  %        symbol name
00a98640 12       100.000  __gconv_transform_utf8_internal
00a98640  1         8.3333
00a9868c  2        16.6667
00a9869a  1         8.3333
00a986c1  1         8.3333
00a98720  1         8.3333
00a98749  1         8.3333
00a98753  1         8.3333
00a98789  1         8.3333
00a98864  1         8.3333
00a98869  1         8.3333
00a98b08  1         8.3333
-l
option except that for each symbol, each virtual memory address used is shown. For each virtual memory address, the number of samples and percentage of samples relative to the number of samples for the symbol is displayed.
-x symbol-name
— exclude the comma-separated list of symbols from the output
session:name
— specify the full path to the session, or a directory relative to the /var/lib/oprofile/samples/
directory.
initrd
file on boot up, the directory with the various kernel modules, or a locally created kernel module. As a result, when OProfile records samples for a module, it lists the samples for the module against an executable in the root directory, but this is unlikely to be the location of the actual code for the module. You will need to take some steps to make sure that analysis tools can find the executable.
uname -a
command, obtain the appropriate debuginfo package and install it on the machine.
~]# opcontrol --reset
~]#opcontrol --setup --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux \
--event=CPU_CLK_UNHALTED:500000
~]# opreport /ext4 -l --image-path /lib/modules/`uname -r`/kernel
CPU: Intel Westmere microarchitecture, speed 2.667e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 500000
warning: could not check that the binary file /lib/modules/2.6.32-191.el6.x86_64/kernel/fs/ext4/ext4.ko has not been modified since the profile was taken. Results may be inaccurate.
samples % symbol name
1622 9.8381 ext4_iget
1591 9.6500 ext4_find_entry
1231 7.4665 __ext4_get_inode_loc
783 4.7492 ext4_ext_get_blocks
752 4.5612 ext4_check_dir_entry
644 3.9061 ext4_mark_iloc_dirty
583 3.5361 ext4_get_blocks
583 3.5361 ext4_xattr_get
479 2.9053 ext4_htree_store_dirent
469 2.8447 ext4_get_group_desc
414 2.5111 ext4_dx_find_entry
opannotate
opannotate
tool tries to match the samples for particular instructions to the corresponding lines in the source code. The resulting files generated should have the samples for the lines at the left. It also puts in a comment at the beginning of each function listing the total samples for the function.
opannotate
is as follows:
~]# opannotate --search-dirs src-dir --source executable
opannotate
man page for a list of additional command line options.
/dev/oprofile/
/dev/oprofile/
directory contains the file system for OProfile. Use the cat
command to display the values of the virtual files in this file system. For example, the following command displays the type of processor OProfile detected:
~]# cat /dev/oprofile/cpu_type
/dev/oprofile/
for each counter. For example, if there are 2 counters, the directories /dev/oprofile/0/
and dev/oprofile/1/
exist.
count
— The interval between samples.
enabled
— If 0, the counter is off and no samples are collected for it; if 1, the counter is on and samples are being collected for it.
event
— The event to monitor.
extra
— Used on machines with Nehalem processors to further specify the event to monitor.
kernel
— If 0, samples are not collected for this counter event when the processor is in kernel-space; if 1, samples are collected even if the processor is in kernel-space.
unit_mask
— Defines which unit masks are enabled for the counter.
user
— If 0, samples are not collected for the counter event when the processor is in user-space; if 1, samples are collected even if the processor is in user-space.
cat
command. For example:
~]# cat /dev/oprofile/0/count
opreport
can be used to determine how much processor time an application or service uses. If the system is used for multiple services but is under performing, the services consuming the most processor time can be moved to dedicated systems.
CPU_CLK_UNHALTED
event can be monitored to determine the processor load over a given period of time. This data can then be used to determine if additional processors or a faster processor might improve system performance.
-agentlib:jvmti_oprofile
Install the oprofile-jit package
oprof_start
command as root at a shell prompt. To use the graphical interface, you will need to have the oprofile-gui
package installed.
/root/.oprofile/daemonrc
, and the application exits. Exiting the application does not stop OProfile from sampling.
vmlinux
file for the kernel to monitor in the Kernel image file text field. To configure OProfile not to monitor the kernel, select No kernel image.
oprofiled
daemon log includes more information.
opcontrol --separate=library
command. If Per-application profiles, including kernel is selected, OProfile generates per-application profiles for the kernel and kernel modules as discussed in Section 19.2.3, “Separating Kernel and User-space Profiles”. This is equivalent to the opcontrol --separate=kernel
command.
opcontrol --dump
command.
netstat
, ps
, top
, and iostat
; however, SystemTap is designed to provide more filtering and analysis options for collected information.
/usr/share/doc/oprofile/oprofile.html
— OProfile Manual
oprofile
man page — Discusses opcontrol
, opreport
, opannotate
, and ophelp
Table of Contents
rpm
command instead of yum
.
Use Yum to install kernels whenever possible
/usr/share/doc/kernel-doc-kernel_version/
directory.
VFAT
file system. You can create bootable USB media on media formatted as ext2
, ext3
, ext4
, or VFAT
.
4 GB
is required for a distribution DVD image, around 700 MB
for a distribution CD image, or around 10 MB
for a minimal boot media image.
boot.iso
file from a Fedora installation DVD or installation CD-ROM #1, and you need a USB storage device formatted with the VFAT
file system and around 16 MB
of free space. The following procedure will not affect existing files on the USB storage device unless they have the same path names as the files that you copy onto it. To create USB boot media, perform the following commands as the root
user:
syslinux /dev/sdX1
Create mount points for boot.iso
and the USB storage device:
mkdir /mnt/isoboot /mnt/diskboot
Mount boot.iso
:
mount -o loop boot.iso /mnt/isoboot
Mount the USB storage device:
mount /dev/sdX1 /mnt/diskboot
Copy the ISOLINUX files from boot.iso
to the USB storage device:
cp /mnt/isoboot/isolinux/* /mnt/diskboot
isolinux.cfg
file from boot.iso
as the syslinux.cfg
file for the USB device:
grep -v local /mnt/isoboot/isolinux/isolinux.cfg > /mnt/diskboot/syslinux.cfg
Unmount boot.iso
and the USB storage device:
umount /mnt/isoboot /mnt/diskboot
mkbootdisk
command as root
. See the mkbootdisk
man page after installing the package for usage information.
yum list installed "kernel-*"
at a shell prompt. The output will comprise some or all of the following packages, depending on the system's architecture, and the version numbers may differ:
~]# yum list installed "kernel-*"
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
kernel.x86_64 3.1.0-0.rc6.git0.3.fc16 @updates-testing
kernel.x86_64 3.1.0-0.rc9.git0.0.fc16 @updates-testing
kernel-doc.x86_64 3.1.0-0.rc6.git0.3.fc16 @updates-testing
kernel-doc.x86_64 3.1.0-0.rc9.git0.0.fc16 @updates-testing
kernel-headers.x86_64 3.1.0-0.rc6.git0.3.fc16 @updates-testing
kernel-headers.x86_64 3.1.0-0.rc9.git0.0.fc16 @updates-testing
Keep the old kernel when performing the upgrade
-i
argument with the rpm
command to keep the old kernel. Do not use the -U
option, since it overwrites the currently installed kernel, which creates boot loader problems. For example:
rpm -ivh kernel-kernel_version.arch.rpm
initramfs
by running the dracut
command. However, you usually don't need to create an initramfs
manually: this step is automatically performed if the kernel and its associated packages are installed or upgraded from RPM packages distributed by The Fedora Project.
initramfs
corresponding to your current kernel version exists and is specified correctly in the /boot/grub2/grub.cfg
configuration file by following this procedure:
Procedure 20.1. Verifying the Initial RAM Disk Image
root
, list the contents in the /boot
directory and find the kernel (vmlinuz-kernel_version
) and initramfs-kernel_version
with the latest (most recent) version number:
~]# ls /boot
config-3.1.0-0.rc6.git0.3.fc16.x86_64
config-3.1.0-0.rc9.git0.0.fc16.x86_64
elf-memtest86+-4.20
grub
grub2
initramfs-3.1.0-0.rc6.git0.3.fc16.x86_64.img
initramfs-3.1.0-0.rc9.git0.0.fc16.x86_64.img
initrd-plymouth.img
memtest86+-4.20
System.map-3.1.0-0.rc6.git0.3.fc16.x86_64
System.map-3.1.0-0.rc9.git0.0.fc16.x86_64
vmlinuz-3.1.0-0.rc6.git0.3.fc16.x86_64
vmlinuz-3.1.0-0.rc9.git0.0.fc16.x86_64
/boot
directory),
vmlinuz-3.1.0-0.rc9.git0.0.fc16.x86_64
, and
initramfs
file matching our kernel version, initramfs-3.1.0-0.rc9.git0.0.fc16.x86_64.img
, also exists.
initrd files in the /boot directory are not the same as initramfs files
/boot
directory you may find several initrd-<kernel_version>kdump.img
files. These are special files created by the kdump
mechanism for kernel debugging purposes, are not used to boot the system, and can safely be ignored. For more information on kdump
, refer to Chapter 22, The kdump Crash Recovery Service.
initramfs-kernel_version
file does not match the version of the latest kernel in /boot
, or, in certain other situations, you may need to generate an initramfs
file with the Dracut utility. Simply invoking dracut
as root
without options causes it to generate an initramfs
file in the /boot
directory for the latest kernel present in that directory:
~]# dracut
--force
option if you want dracut
to overwrite an existing initramfs
(for example, if your initramfs
has become corrupt). Otherwise dracut
will refuse to overwrite the existing initramfs
file:
~]# dracut
F: Will not override existing initramfs (/boot/initramfs-3.1.0-0.rc9.git0.0.fc16.x86_64.img) without --force
dracut initramfs_name kernel_version
, for example:
~]# dracut "initramfs-$(uname -r).img" $(uname -r)
.ko
) inside the parentheses of the add_dracutmodules="module [more_modules]"
directive of the /etc/dracut.conf
configuration file. You can list the file contents of an initramfs
image file created by dracut by using the lsinitrd initramfs_file
command:
~]# lsinitrd /boot/initramfs-3.1.0-0.rc9.git0.0.fc16.x86_64.img
/boot/initramfs-3.1.0-0.rc9.git0.0.fc16.x86_64.img: 16M
========================================================================
dracut-013-15.fc16
========================================================================
drwxr-xr-x 8 root root 0 Oct 11 20:36 .
lrwxrwxrwx 1 root root 17 Oct 11 20:36 lib -> run/initramfs/lib
drwxr-xr-x 2 root root 0 Oct 11 20:36 sys
drwxr-xr-x 2 root root 0 Oct 11 20:36 proc
lrwxrwxrwx 1 root root 17 Oct 11 20:36 etc -> run/initramfs/etc
[output truncated]
man dracut
and man dracut.conf
for more information on options and usage.
/boot/grub2/grub.cfg
configuration file to ensure that an initrd /path/initramfs-kernel_version.img
exists for the kernel version you are booting. For example:
~]# grep initrd /boot/grub2/grub.cfg
initrd /initramfs-3.1.0-0.rc6.git0.3.fc16.x86_64.img
initrd /initramfs-3.1.0-0.rc9.git0.0.fc16.x86_64.img
/boot/grub2/grub.cfg
file.
addRamDisk
command. This step is performed automatically if the kernel and its associated packages are installed or upgraded from the RPM packages distributed by The Fedora Project; thus, it does not need to be executed manually. To verify that it was created, run the following command as root
to make sure the /boot/vmlinitrd-kernel_version
file already exists:
ls -l /boot
rpm
, the kernel package creates an entry in the boot loader configuration file for that new kernel. However, rpm
does not configure the new kernel to boot as the default kernel. You must do this manually when installing a new kernel with rpm
.
rpm
to ensure that the configuration is correct. Otherwise, the system may not be able to boot into Fedora properly. If this happens, boot the system with the boot media created earlier and re-configure the boot loader.
Table 20.1. Boot loaders by architecture
Architecture | Boot Loader | See |
---|---|---|
x86 | GRUB 2 | Section 20.6.1, “Configuring the GRUB 2 Boot Loader” |
AMD64 or Intel 64 | GRUB 2 | Section 20.6.1, “Configuring the GRUB 2 Boot Loader” |
IBM eServer System i | OS/400 | Section 20.6.2, “Configuring the OS/400 Boot Loader” |
IBM eServer System p | YABOOT | Section 20.6.3, “Configuring the YABOOT Boot Loader” |
IBM System z | z/IPL | — |
/boot/grub2/grub.cfg
file. This file is generated by the grub2-mkconfig utility based on Linux kernels located in the /boot
directory, template files located in /etc/grub.d/
, and custom settings in the /etc/default/grub
file and is automatically updated each time you install a new kernel from an RPM package. To update this configuration file manually, type the following at a shell prompt as root
:
grub2-mkconfig
-o
/boot/grub2/grub.cfg
/boot/grub2/grub.cfg
configuration file contains one or more menuentry
blocks, each representing a single GRUB 2 boot menu entry. These blocks always start with the menuentry
keyword followed by a title, list of options, and opening curly bracket, and end with a closing curly bracket. Anything between the opening and closing bracket should be indented. For example, the following is a sample menuentry
block for Fedora 17 with Linux kernel 3.4.0-1.fc17.x86_64:
menuentry 'Fedora (3.4.0-1.fc17.x86_64)' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-77ba9149-751a-48e0-974f-ad94911734b9' {
	load_video
	set gfxpayload=keep
	insmod gzio
	insmod part_msdos
	insmod ext2
	set root='hd0,msdos1'
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1' 4ea24c68-ab10-47d4-8a6b-b8d3a002acba
	else
	  search --no-floppy --fs-uuid --set=root 4ea24c68-ab10-47d4-8a6b-b8d3a002acba
	fi
	echo 'Loading Fedora (3.4.0-1.fc17.x86_64)'
	linux /vmlinuz-3.4.0-1.fc17.x86_64 root=/dev/mapper/vg_fedora-lv_root ro rd.md=0 rd.dm=0 SYSFONT=True rd.lvm.lv=vg_fedora/lv_swap KEYTABLE=us rd.lvm.lv=vg_fedora/lv_root rd.luks=0 LANG=en_US.UTF-8 rhgb quiet
	echo 'Loading initial ramdisk ...'
	initrd /initramfs-3.4.0-1.fc17.x86_64.img
}
menuentry
block that represents an installed Linux kernel contains linux
and initrd
directives followed by the path to the kernel and the initramfs
image respectively. If a separate /boot
partition was created, the paths to the kernel and the initramfs
image are relative to /boot
. In the example above, the initrd /initramfs-3.4.0-1.fc17.x86_64.img
line means that the initramfs
image is actually located at /boot/initramfs-3.4.0-1.fc17.x86_64.img
when the root file system is mounted, and likewise for the kernel path.
linux /vmlinuz-kernel_version
line must match the version number of the initramfs
image given on the initrd /initramfs-kernel_version.img
line of each menuentry
block. For more information on how to verify the initial RAM disk image, refer to Procedure 20.1, “Verifying the Initial RAM Disk Image”.
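One quick way to check that the kernel and initramfs versions agree is to extract both version strings with sed and compare them. The following sketch runs against a sample menuentry fragment rather than the live configuration file; on an actual system, point the commands at /boot/grub2/grub.cfg instead.

```shell
# Pull the version strings from the linux and initrd lines of a
# menuentry block and compare them. The here-document stands in for
# the relevant lines of /boot/grub2/grub.cfg.
cfg=$(cat <<'EOF'
linux /vmlinuz-3.4.0-1.fc17.x86_64 root=/dev/mapper/vg_fedora-lv_root ro rhgb quiet
initrd /initramfs-3.4.0-1.fc17.x86_64.img
EOF
)
kver=$(printf '%s\n' "$cfg" | sed -n 's#.*linux /vmlinuz-\([^ ]*\).*#\1#p')
iver=$(printf '%s\n' "$cfg" | sed -n 's#.*initrd /initramfs-\(.*\)\.img#\1#p')
if [ "$kver" = "$iver" ]; then
    echo "versions match: $kver"
else
    echo "MISMATCH: kernel=$kver initramfs=$iver" >&2
fi
```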
The initrd directive in grub.cfg refers to an initramfs image
menuentry
blocks, the initrd
directive must point to the location (relative to the /boot
directory if it is on a separate partition) of the initramfs
file corresponding to the same kernel version. This directive is called initrd
because the previous tool which created initial RAM disk images, mkinitrd
, created what were known as initrd
files. The grub.cfg
directive remains initrd
to maintain compatibility with other tools. The file-naming convention of systems using the dracut
utility to create the initial RAM disk image is initramfs-kernel_version.img
.
rpm
, verify that /boot/grub2/grub.cfg
is correct and reboot the computer into the new kernel. Ensure your hardware is detected by watching the boot process output. If GRUB 2 presents an error and is unable to boot into the new kernel, it is often easiest to try to boot into an alternative or older kernel so that you can fix the problem. Alternatively, use the boot media you created earlier to boot the system.
Causing the GRUB 2 boot menu to display
GRUB_TIMEOUT
option in the /etc/default/grub
file to 0
, GRUB 2 will not display its list of bootable kernels when the system starts up. In order to display this list when booting, press and hold any alphanumeric key while and immediately after BIOS information is displayed, and GRUB 2 will present you with the GRUB menu.
/boot/vmlinitrd-kernel-version
file is installed when you upgrade the kernel. However, you must use the dd
command to configure the system to boot the new kernel.
root
, issue the command cat /proc/iSeries/mf/side
to determine the default side (either A, B, or C).
root
, issue the following command, where kernel-version is the version of the new kernel and side is the side from the previous command:
dd if=/boot/vmlinitrd-kernel-version of=/proc/iSeries/mf/side/vmlinux bs=8k
/etc/aboot.conf
as its configuration file. Confirm that the file contains an image
section with the same version as the kernel package just installed, and likewise for the initramfs
image:
boot=/dev/sda1
init-message=Welcome to Fedora! Hit <TAB> for boot options
partition=2
timeout=30
install=/usr/lib/yaboot/yaboot
delay=10
nonvram
image=/vmlinuz-2.6.32-17.EL
	label=old
	read-only
	initrd=/initramfs-2.6.32-17.EL.img
	append="root=LABEL=/"
image=/vmlinuz-2.6.32-19.EL
	label=linux
	read-only
	initrd=/initramfs-2.6.32-19.EL.img
	append="root=LABEL=/"
default
and set it to the label
of the image stanza that contains the new kernel.
btrfs
or NFS
.
Installing the kmod package
~]# yum install kmod
lsmod
command, for example:
~]$ lsmod
Module Size Used by
xfs 803635 1
exportfs 3424 1 xfs
vfat 8216 1
fat 43410 1 vfat
tun 13014 2
fuse 54749 2
ip6table_filter 2743 0
ip6_tables 16558 1 ip6table_filter
ebtable_nat 1895 0
ebtables 15186 1 ebtable_nat
ipt_MASQUERADE 2208 6
iptable_nat 5420 1
nf_nat 19059 2 ipt_MASQUERADE,iptable_nat
rfcomm 65122 4
ipv6 267017 33
sco 16204 2
bridge 45753 0
stp 1887 1 bridge
llc 4557 2 bridge,stp
bnep 15121 2
l2cap 45185 16 rfcomm,bnep
cpufreq_ondemand 8420 2
acpi_cpufreq 7493 1
freq_table 3851 2 cpufreq_ondemand,acpi_cpufreq
usb_storage 44536 1
sha256_generic 10023 2
aes_x86_64 7654 5
aes_generic 27012 1 aes_x86_64
cbc 2793 1
dm_crypt 10930 1
kvm_intel 40311 0
kvm 253162 1 kvm_intel
[output truncated]
lsmod
output specifies:
lsmod
output is less verbose and considerably easier to read than the content of the /proc/modules
pseudo-file.
modinfo module_name
command.
Module names do not end in .ko
.ko
extension to the end of the name. Kernel module names do not have extensions; their corresponding files do.
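As a small sketch of the relationship between the two, the module name can be derived from the module file name by dropping the .ko extension; the kernel also reports dashes in module names as underscores, which is why lsmod shows crc_itu_t for the file crc-itu-t.ko:

```shell
# Derive the module name from a module file name: drop the .ko
# extension and normalize dashes to underscores, as lsmod reports them.
modfile=crc-itu-t.ko
modname=$(basename "$modfile" .ko | tr '-' '_')
echo "$modname"
```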
Example 21.1. Listing information about a kernel module with lsmod
e1000e
module, which is the Intel PRO/1000 network driver, run:
~]# modinfo e1000e
filename:       /lib/modules/3.15.6-200.fc20.x86_64/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko
version:        2.3.2-k
license:        GPL
description:    Intel(R) PRO/1000 Network Driver
author:         Intel Corporation, <linux.nics@intel.com>
srcversion:     AB1D5F954DC03B1296E61BD
alias:          pci:v00008086d00001503sv*sd*bc*sc*i*
alias:          pci:v00008086d00001502sv*sd*bc*sc*i*
[some alias lines omitted]
alias:          pci:v00008086d0000105Esv*sd*bc*sc*i*
depends:        ptp
intree:         Y
vermagic:       3.15.6-200.fc20.x86_64 SMP mod_unload
signer:         Fedora kernel signing key
sig_key:        5B:F5:46:43:B9:B1:61:72:B2:43:6D:40:A5:6F:75:0A:D1:58:1D:80
sig_hashalgo:   sha256
parm:           debug:Debug level (0=none,...,16=all) (int)
parm:           copybreak:Maximum size of packet that is copied to a new buffer on receive (uint)
parm:           TxIntDelay:Transmit Interrupt Delay (array of int)
parm:           TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
parm:           RxIntDelay:Receive Interrupt Delay (array of int)
parm:           RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
parm:           InterruptThrottleRate:Interrupt Throttling Rate (array of int)
parm:           IntMode:Interrupt Mode (array of int)
parm:           SmartPowerDownEnable:Enable PHY smart power down (array of int)
parm:           KumeranLockLoss:Enable Kumeran lock loss workaround (array of int)
parm:           WriteProtectNVM:Write-protect NVM [WARNING: disabling this can lead to corrupted NVM] (array of int)
parm:           CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int)
modinfo
output:
.ko
kernel object file. You can use modinfo -n
as a shortcut command for printing only the filename
field.
modinfo -d
as a shortcut command for printing only the description field.
alias
field appears as many times as there are aliases for a module, or is omitted entirely if there are none.
Omitting the depends field
depends
field may be omitted from the output.
parm
field presents one module parameter in the form parameter_name:description
, where:
.conf
file in the /etc/modprobe.d/
directory; and,
Example 21.2. Listing module parameters
-p
option. However, because useful value type information is omitted from modinfo -p
output, it is more useful to run:
~]# modinfo e1000e | grep "^parm" | sort
parm: copybreak:Maximum size of packet that is copied to a new buffer on receive (uint)
parm: CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int)
parm: EEE:Enable/disable on parts that support the feature (array of int)
parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int)
parm: IntMode:Interrupt Mode (array of int)
parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of int)
parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
parm: RxIntDelay:Receive Interrupt Delay (array of int)
parm: SmartPowerDownEnable:Enable PHY smart power down (array of int)
parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
parm: TxIntDelay:Transmit Interrupt Delay (array of int)
parm: WriteProtectNVM:Write-protect NVM [WARNING: disabling this can lead to corrupted NVM] (array of int)
modprobe module_name
as root
. For example, to load the wacom
module, run:
~]# modprobe wacom
modprobe
attempts to load the module from /lib/modules/kernel_version/kernel/drivers/
. In this directory, each type of module has its own subdirectory, such as net/
and scsi/
, for network and SCSI interface drivers respectively.
modprobe
command always takes dependencies into account when performing operations. When you ask modprobe
to load a specific kernel module, it first examines the dependencies of that module, if there are any, and loads them if they are not already loaded into the kernel. modprobe
resolves dependencies recursively: it will load all dependencies of dependencies, and so on, if necessary, thus ensuring that all dependencies are always met.
-v
(or --verbose
) option to cause modprobe
to display detailed information about what it is doing, which can include loading module dependencies.
Example 21.3. modprobe -v shows module dependencies as they are loaded
Fibre Channel over Ethernet
module verbosely by typing the following at a shell prompt:
~]# modprobe -v fcoe
insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/scsi_tgt.ko
insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/scsi_transport_fc.ko
insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/libfc/libfc.ko
insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/fcoe/libfcoe.ko
insmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/scsi/fcoe/fcoe.ko
modprobe
loaded the scsi_tgt
, scsi_transport_fc
, libfc
and libfcoe
modules as dependencies before finally loading fcoe
. Also note that modprobe
used the more “primitive” insmod
command to insert the modules into the running kernel.
Always use modprobe instead of insmod!
insmod
command can also be used to load kernel modules, it does not resolve dependencies. Because of this, you should always load modules using modprobe
instead.
modprobe -r module_name
as root
. For example, assuming that the wacom
module is already loaded into the kernel, you can unload it by running:
~]# modprobe -r wacom
wacom
module;
wacom
directly depends on, or;
wacom
, through the dependency tree, depends on indirectly.
lsmod
to obtain the names of the modules which are preventing you from unloading a certain module.
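The blocking modules appear in the "Used by" column of the lsmod output. As a sketch of how to read that column programmatically (the sample data below is made up to mirror the firewire example; on a live system you would pipe the real lsmod output instead):

```shell
# Print the modules that hold a reference to the named module, i.e. the
# contents of its "Used by" column. Sample text stands in for lsmod.
sample='Module                  Size  Used by
firewire_ohci          40000  0
firewire_core          60000  1 firewire_ohci
crc_itu_t               2000  1 firewire_core'
users_of() {
    printf '%s\n' "$sample" | awk -v m="$1" '$1 == m { print $4 }'
}
users_of firewire_core   # firewire_ohci must be unloaded first
```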
Example 21.4. Unloading a kernel module
firewire_ohci
module (because you believe there is a bug in it that is affecting system stability, for example), your terminal session might look similar to this:
~]# modinfo -F depends firewire_ohci
depends:        firewire-core
~]# modinfo -F depends firewire_core
depends:        crc-itu-t
~]# modinfo -F depends crc-itu-t
depends:
firewire_ohci
depends on firewire_core
, which itself depends on crc-itu-t
.
firewire_ohci
using the modprobe -v -r module_name
command, where -r
is short for --remove
and -v
for --verbose
:
~]# modprobe -r -v firewire_ohci
rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/firewire/firewire-ohci.ko
rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/drivers/firewire/firewire-core.ko
rmmod /lib/modules/2.6.32-71.el6.x86_64/kernel/lib/crc-itu-t.ko
Do not use rmmod directly!
rmmod
command can be used to unload kernel modules, it is recommended to use modprobe -r
instead.
modprobe -r
, and then load it with modprobe
along with a list of customized parameters. This method is often used when the module does not have many dependencies, or to test different combinations of parameters without making them persistent, and is the method covered in this section.
/etc/modprobe.d/
directory. This method makes the module parameters persistent by ensuring that they are set each time the module is loaded, such as after every reboot or modprobe
command. This method is covered in Section 21.6, “Persistent Module Loading”, though the following information is a prerequisite.
modprobe
to load a kernel module with custom parameters using the following command line format:
modprobe module_name [parameter=value]
modprobe
will incorrectly interpret the values following spaces as additional parameters.
modprobe
command silently succeeds with an exit status of 0
if:
modprobe
command does not automatically reload the module, or alert you that it is already loaded.
e1000e
module, which is the network driver for Intel PRO/1000 network adapters, as an example:
Procedure 21.1. Loading a Kernel Module with Custom Parameters
~]# lsmod | grep e1000e
~]#
root
:
~]# modprobe e1000e InterruptThrottleRate=3000,3000,3000 EEE=1
file_name.modules
file in the /etc/sysconfig/modules/
directory, where file_name is any descriptive name of your choice. Your file_name.modules
files are treated by the system startup scripts as shell scripts, and as such should begin with an interpreter directive (also known as a “shebang” or “bang line”) as their first line:
#!/bin/sh
file_name.modules
file should be executable. You can make it executable by running:
modules]# chmod +x file_name.modules
Example 21.5. /etc/sysconfig/modules/bluez-uinput.modules
bluez-uinput.modules
script loads the uinput
module:
#!/bin/sh
if [ ! -c /dev/input/uinput ] ; then
    exec /sbin/modprobe uinput >/dev/null 2>&1
fi
if
-conditional statement ensures that the /dev/input/uinput
file does not already exist (the !
symbol negates the condition), and, if that is the case, loads the uinput
module by calling exec /sbin/modprobe uinput
. Note that the uinput
module creates the /dev/input/uinput
file, so testing to see if that file exists serves as verification of whether the uinput
module is loaded into the kernel.
>/dev/null 2>&1
clause at the end of that line redirects any output to /dev/null
so that the modprobe
command remains quiet.
man lsmod
lsmod
command.
man modinfo
modinfo
command.
man modprobe
modprobe
command.
man rmmod
rmmod
command.
man ethtool
ethtool
command.
man mii-tool
mii-tool
command.
/usr/share/doc/kernel-doc-kernel_version/Documentation/
root
:
yum install kernel-doc
e1000e
driver.
kdump
crash dumping mechanism is enabled, the system is booted from the context of another kernel. This second kernel reserves a small amount of memory and its only purpose is to capture the core dump image in case the system crashes.
kdump
service in Fedora, and provides a brief overview of how to analyze the resulting core dump using the crash debugging utility.
kdump
service on your system, make sure you have the kexec-tools package installed. To do so, type the following at a shell prompt as root
:
yum install kexec-tools
kdump
service: at the first boot, using the Kernel Dump Configuration graphical utility, and doing so manually on the command line.
Disable IOMMU on Intel chipsets
Intel IOMMU
driver can occasionally prevent the kdump
service from capturing the core dump image. To use kdump
on Intel architectures reliably, it is advised to disable IOMMU support.
kdump
, navigate to the Kdump section and follow the instructions below.
Make sure the system has enough memory
kdump
crash recovery is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory).
kdump
daemon to start at boot time, select the Enable kdump? checkbox. This will enable the service and start it for the current session. Similarly, unselecting the checkbox will disable it and stop the service immediately.
kdump
kernel, click the up and down arrow buttons next to the Kdump Memory field to increase or decrease the value. Notice that the Usable System Memory field changes accordingly showing you the remaining memory that will be available to the system.
system-config-kdump
at a shell prompt. You will be presented with a window as shown in Figure 22.1, “Basic Settings”.
kdump
as well as to enable or disable starting the service at boot time. When you are done, click to save the changes. The system reboot will be requested, and unless you are already authenticated, you will be prompted to enter the superuser password.
Make sure the system has enough memory
kdump
crash recovery is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory).
kdump
daemon at boot time, click the button on the toolbar. This will enable the service and start it for the current session. Similarly, clicking the button will disable it and stop the service immediately.
kdump
kernel. To do so, select the Manual kdump memory settings radio button, and click the up and down arrow buttons next to the New kdump Memory field to increase or decrease the value. Notice that the Usable Memory field changes accordingly showing you the remaining memory that will be available to the system.
vmcore
dump. It can be either stored as a file in a local file system, written directly to a device, or sent over a network using the NFS (Network File System) or SSH (Secure Shell) protocol.
Table 22.1. Supported kdump targets
Type | Supported Targets | Unsupported Targets |
---|---|---|
Raw device | All locally attached raw disks and partitions. | — |
Local file system | ext2, ext3, ext4, and minix file systems on directly attached disk drives, hardware RAID logical drives, LVM devices, and mdraid arrays. | The eCryptfs file system. |
Remote directory | Remote directories accessed using the NFS or SSH protocol over IPv4. | Remote directories on the rootfs file system accessed using the NFS protocol. |
Remote directory | Remote directories accessed using the iSCSI protocol over hardware initiators. | Remote directories accessed using the iSCSI protocol over software initiators. |
Remote directory | — | Remote directories accessed over IPv6. |
Remote directory | — | Remote directories accessed using the SMB/CIFS protocol. |
Remote directory | — | Remote directories accessed using the FCoE (Fibre Channel over Ethernet) protocol. |
Remote directory | — | Remote directories accessed using wireless network interfaces. |
Remote directory | — | Multipath-based storages. |
vmcore
dump.
kdump
fails to create a core dump, select an appropriate option from the Default action pulldown list. Available options are (the default action), (to reboot the system), (to present a user with an interactive shell prompt), (to halt the system), and (to power the system off).
makedumpfile
core collector, edit the Core collector text field; see Section 22.2.3.3, “Configuring the Core Collector” for more information.
kdump
kernel, as root
, edit the /etc/default/grub
file and add the crashkernel=<size>M
(or crashkernel=auto
) parameter to the list of kernel options (the GRUB_CMDLINE_LINUX
line). For example, to reserve 128 MB of memory, use:
GRUB_CMDLINE_LINUX="crashkernel=128M
quiet rhgb"
root
:
grub2-mkconfig -o /boot/grub2/grub.cfg
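The edit to /etc/default/grub can also be scripted. The following sketch applies the change to a temporary copy so you can see the effect without touching the system; on a real machine, edit the actual file as root and rerun grub2-mkconfig afterwards.

```shell
# Prepend crashkernel=128M to the GRUB_CMDLINE_LINUX value.
# "&" in the sed replacement re-inserts the matched text.
f=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="quiet rhgb"' > "$f"
sed -i 's/^GRUB_CMDLINE_LINUX="/&crashkernel=128M /' "$f"
cat "$f"
```

After the substitution, the line reads GRUB_CMDLINE_LINUX="crashkernel=128M quiet rhgb".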
Make sure the system has enough memory
kdump
crash recovery is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory).
Using the crashkernel=auto parameter
crashkernel=auto
only reserves memory if the system has 4 GB of physical memory or more.
vmcore
file in the /var/crash/
directory of the local file system. To change this, as root
, open the /etc/kdump.conf
configuration file in a text editor and edit the options as described below.
#path /var/crash
line, and replace the value with a desired directory path. Optionally, if you wish to write the file to a different partition, follow the same procedure with the #ext4 /dev/sda3
line as well, and change both the file system type and the device (a device name, a file system label, and UUID are all supported) accordingly. For example:
ext3 /dev/sda4 path /usr/local/cores
#raw /dev/sda5
line, and replace the value with a desired device name. For example:
raw /dev/sdb1
#net my.server.com:/export/tmp
line, and replace the value with a valid hostname and directory path. For example:
net penguin.example.com:/export/cores
#net user@my.server.com
line, and replace the value with a valid username and hostname. For example:
net john@penguin.example.com
vmcore
dump file, kdump
allows you to specify an external application (that is, a core collector) to compress the data, and optionally leave out all irrelevant information. Currently, the only fully supported core collector is makedumpfile
.
root
, open the /etc/kdump.conf
configuration file in a text editor, remove the hash sign (“#”) from the beginning of the #core_collector makedumpfile -c --message-level 1 -d 31
line, and edit the command line options as described below.
-c
parameter. For example:
core_collector makedumpfile -c
-d value
parameter, where value is a sum of values of pages you want to omit as described in Table 22.2, “Supported filtering levels”. For example, to remove both zero and free pages, use the following:
core_collector makedumpfile -d 17 -c
makedumpfile
for a complete list of available options.
Table 22.2. Supported filtering levels
Option | Description |
---|---|
1 | Zero pages |
2 | Cache pages |
4 | Cache private |
8 | User pages |
16 | Free pages |
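Because the filtering levels are bit flags, the -d value is simply their sum, which shell arithmetic makes easy to check:

```shell
# Dump level is the sum of the page-type flags from the filtering table.
zero=1; cache=2; cache_private=4; user=8; free=16
echo $((zero + free))                                 # zero + free pages = 17
echo $((zero + cache + cache_private + user + free))  # all five types = 31
```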
kdump
fails to create a core dump, the root file system is mounted and /sbin/init
is run. To change this behavior, as root
, open the /etc/kdump.conf
configuration file in a text editor, remove the hash sign (“#”) from the beginning of the #default shell
line, and replace the value with a desired action as described in Table 22.3, “Supported actions”.
Table 22.3. Supported actions
Option | Description |
---|---|
reboot | Reboot the system, losing the core in the process. |
halt | Halt the system. |
poweroff | Power off the system. |
shell | Run the msh session from within the initramfs, allowing a user to record the core manually. |
default halt
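Putting the preceding options together, a minimal /etc/kdump.conf might look like the following sketch. All values are illustrative rather than recommendations, and the device name in particular is an assumption:

```
ext4 /dev/sda3
path /var/crash
core_collector makedumpfile -c -d 31
default reboot
```

Each uncommented line overrides the corresponding default described in the sections above.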
kdump
daemon at boot time, type the following at a shell prompt as root
:
systemctl enable kdump.service
systemctl disable kdump.service
will disable it. To start the service in the current session, use the following command as root
:
systemctl start kdump.service
Be careful when using these commands
kdump
enabled, and make sure that the service is running (refer to Section 6.2, “Running Services” for more information on how to run a service in Fedora):
systemctl is-active kdump.service
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
address-YYYY-MM-DD-HH:MM:SS/vmcore
file will be copied to the location you have selected in the configuration (that is, to /var/crash/
by default).
netdump
, diskdump
, xendump
, or kdump
.
Make sure you have relevant packages installed
vmcore
dump file, you must have the crash and kernel-debuginfo packages installed. To install these packages, type the following at a shell prompt as root
:
yum install crash
debuginfo-install kernel
crash
/usr/lib/debug/lib/modules/kernel/vmlinux
/var/crash/timestamp/vmcore
kdump
. To find out which kernel you are currently running, use the uname -r
command.
Example 22.1. Running the crash utility
~]# crash /usr/lib/debug/lib/modules/2.6.32-69.el6.i686/vmlinux \
/var/crash/127.0.0.1-2010-08-25-08:45:02/vmcore
crash 5.0.0-23.el6
Copyright (C) 2002-2010  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.0
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu"...

      KERNEL: /usr/lib/debug/lib/modules/2.6.32-69.el6.i686/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2010-08-25-08:45:02/vmcore  [PARTIAL DUMP]
        CPUS: 4
        DATE: Wed Aug 25 08:44:47 2010
      UPTIME: 00:09:02
LOAD AVERAGE: 0.00, 0.01, 0.00
       TASKS: 140
    NODENAME: hp-dl320g5-02.lab.bos.redhat.com
     RELEASE: 2.6.32-69.el6.i686
     VERSION: #1 SMP Tue Aug 24 10:31:45 EDT 2010
     MACHINE: i686  (2394 Mhz)
      MEMORY: 8 GB
       PANIC: "Oops: 0002 [#1] SMP " (check log for details)
         PID: 5591
     COMMAND: "bash"
        TASK: f196d560  [THREAD_INFO: ef4da000]
         CPU: 2
       STATE: TASK_RUNNING (PANIC)

crash>
log
command at the interactive prompt.
Example 22.2. Displaying the kernel message buffer
crash> log
... several lines omitted ...
EIP: 0060:[<c068124f>] EFLAGS: 00010096 CPU: 2
EIP is at sysrq_handle_crash+0xf/0x20
EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000
ESI: c0a09ca0 EDI: 00000286 EBP: 00000000 ESP: ef4dbf24
DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
Process bash (pid: 5591, ti=ef4da000 task=f196d560 task.ti=ef4da000)
Stack:
c068146b c0960891 c0968653 00000003 00000000 00000002 efade5c0 c06814d0
<0> fffffffb c068150f b7776000 f2600c40 c0569ec4 ef4dbf9c 00000002 b7776000
<0> efade5c0 00000002 b7776000 c0569e60 c051de50 ef4dbf9c f196d560 ef4dbfb4
Call Trace:
[<c068146b>] ? __handle_sysrq+0xfb/0x160
[<c06814d0>] ? write_sysrq_trigger+0x0/0x50
[<c068150f>] ? write_sysrq_trigger+0x3f/0x50
[<c0569ec4>] ? proc_reg_write+0x64/0xa0
[<c0569e60>] ? proc_reg_write+0x0/0xa0
[<c051de50>] ? vfs_write+0xa0/0x190
[<c051e8d1>] ? sys_write+0x41/0x70
[<c0409adc>] ? syscall_call+0x7/0xb
Code: a0 c0 01 0f b6 41 03 19 d2 f7 d2 83 e2 03 83 e0 cf c1 e2 04 09 d0 88 41 03 f3 c3 90 c7 05 c8 1b 9e c0 01 00 00 00 0f ae f8 89 f6 <c6> 05 00 00 00 00 01 c3 89 f6 8d bc 27 00 00 00 00 8d 50 d0 83
EIP: [<c068124f>] sysrq_handle_crash+0xf/0x20 SS:ESP 0068:ef4dbf24
CR2: 0000000000000000
help log
for more information on the command usage.
bt
command at the interactive prompt. You can use bt pid
to display the backtrace of the selected process.
Example 22.3. Displaying the kernel stack trace
crash> bt
PID: 5591 TASK: f196d560 CPU: 2 COMMAND: "bash"
#0 [ef4dbdcc] crash_kexec at c0494922
#1 [ef4dbe20] oops_end at c080e402
#2 [ef4dbe34] no_context at c043089d
#3 [ef4dbe58] bad_area at c0430b26
#4 [ef4dbe6c] do_page_fault at c080fb9b
#5 [ef4dbee4] error_code (via page_fault) at c080d809
EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 EBP: 00000000
DS: 007b ESI: c0a09ca0 ES: 007b EDI: 00000286 GS: 00e0
CS: 0060 EIP: c068124f ERR: ffffffff EFLAGS: 00010096
#6 [ef4dbf18] sysrq_handle_crash at c068124f
#7 [ef4dbf24] __handle_sysrq at c0681469
#8 [ef4dbf48] write_sysrq_trigger at c068150a
#9 [ef4dbf54] proc_reg_write at c0569ec2
#10 [ef4dbf74] vfs_write at c051de4e
#11 [ef4dbf94] sys_write at c051e8cc
#12 [ef4dbfb0] system_call at c0409ad5
EAX: ffffffda EBX: 00000001 ECX: b7776000 EDX: 00000002
DS: 007b ESI: 00000002 ES: 007b EDI: b7776000
SS: 007b ESP: bfcb2088 EBP: bfcb20b4 GS: 0033
CS: 0073 EIP: 00edc416 ERR: 00000004 EFLAGS: 00000246
help bt
for more information on the command usage.
ps
command at the interactive prompt. You can use ps pid
to display the status of the selected process.
Example 22.4. Displaying status of processes in the system
crash> ps
PID PPID CPU TASK ST %MEM VSZ RSS COMM
> 0 0 0 c09dc560 RU 0.0 0 0 [swapper]
> 0 0 1 f7072030 RU 0.0 0 0 [swapper]
0 0 2 f70a3a90 RU 0.0 0 0 [swapper]
> 0 0 3 f70ac560 RU 0.0 0 0 [swapper]
1 0 1 f705ba90 IN 0.0 2828 1424 init
... several lines omitted ...
5566 1 1 f2592560 IN 0.0 12876 784 auditd
5567 1 2 ef427560 IN 0.0 12876 784 auditd
5587 5132 0 f196d030 IN 0.0 11064 3184 sshd
> 5591 5587 2 f196d560 RU 0.0 5084 1648 bash
help ps
for more information on the command usage.
vm
command at the interactive prompt. You can use vm pid
to display information on the selected process.
Example 22.5. Displaying virtual memory information of the current context
crash> vm
PID: 5591 TASK: f196d560 CPU: 2 COMMAND: "bash"
MM PGD RSS TOTAL_VM
f19b5900 ef9c6000 1648k 5084k
VMA START END FLAGS FILE
f1bb0310 242000 260000 8000875 /lib/ld-2.12.so
f26af0b8 260000 261000 8100871 /lib/ld-2.12.so
efbc275c 261000 262000 8100873 /lib/ld-2.12.so
efbc2a18 268000 3ed000 8000075 /lib/libc-2.12.so
efbc23d8 3ed000 3ee000 8000070 /lib/libc-2.12.so
efbc2888 3ee000 3f0000 8100071 /lib/libc-2.12.so
efbc2cd4 3f0000 3f1000 8100073 /lib/libc-2.12.so
efbc243c 3f1000 3f4000 100073
efbc28ec 3f6000 3f9000 8000075 /lib/libdl-2.12.so
efbc2568 3f9000 3fa000 8100071 /lib/libdl-2.12.so
efbc2f2c 3fa000 3fb000 8100073 /lib/libdl-2.12.so
f26af888 7e6000 7fc000 8000075 /lib/libtinfo.so.5.7
f26aff2c 7fc000 7ff000 8100073 /lib/libtinfo.so.5.7
efbc211c d83000 d8f000 8000075 /lib/libnss_files-2.12.so
efbc2504 d8f000 d90000 8100071 /lib/libnss_files-2.12.so
efbc2950 d90000 d91000 8100073 /lib/libnss_files-2.12.so
f26afe00 edc000 edd000 4040075
f1bb0a18 8047000 8118000 8001875 /bin/bash
f1bb01e4 8118000 811d000 8101873 /bin/bash
f1bb0c70 811d000 8122000 100073
f26afae0 9fd9000 9ffa000 100073
... several lines omitted ...
help vm
for more information on the command usage.
files
command at the interactive prompt. You can use files pid
to display files opened by the selected process.
Example 22.6. Displaying information about open files of the current context
crash> files
PID: 5591 TASK: f196d560 CPU: 2 COMMAND: "bash"
ROOT: / CWD: /root
FD FILE DENTRY INODE TYPE PATH
0 f734f640 eedc2c6c eecd6048 CHR /pts/0
1 efade5c0 eee14090 f00431d4 REG /proc/sysrq-trigger
2 f734f640 eedc2c6c eecd6048 CHR /pts/0
10 f734f640 eedc2c6c eecd6048 CHR /pts/0
255 f734f640 eedc2c6c eecd6048 CHR /pts/0
help files
for more information on the command usage.
/etc/kdump.conf
configuration file containing the full documentation of available options.
makedumpfile
core collector.
/usr/share/doc/kexec-tools/kexec-kdump-howto.txt
— an overview of the kdump
and kexec installation and usage.
Use Yum Instead of RPM Whenever Possible
Install RPM packages with the correct architecture!
x86_64.rpm
.
.tar.gz
files.
Running rpm commands must be performed as root
rpm --help
or man rpm
. You can also refer to Section A.5, “Additional Resources” for more information on RPM.
Third-party repositories and package compatibility
tree-1.5.3-2.fc20.x86_64.rpm
. The file name includes the package name (tree
), version (1.5.3
), release (2
), operating system major version (fc20
) and CPU architecture (x86_64
).
rpm
's -U
option to:
rpm -U <rpm_file>
is able to perform the function of either upgrading or installing as is appropriate for the package.
tree-1.5.3-2.fc20.x86_64.rpm
package is in the current directory, log in as root and type the following command at a shell prompt to either upgrade or install the tree package as determined by rpm
:
rpm -Uvh tree-1.5.3-2.fc20.x86_64.rpm
Use -Uvh for nicely-formatted RPM installs
-v
and -h
options (which are combined with -U
) cause rpm to print more verbose output and display a progress meter using hash signs.
Preparing...                ########################################### [100%]
   1:tree                   ########################################### [100%]
Always use the -i (install) option to install new kernel packages!
rpm
provides two different options for installing packages: the aforementioned -U
option (which historically stands for upgrade), and the -i
option, historically standing for install. Because the -U
option subsumes both install and upgrade functions, we recommend using rpm -Uvh
with all packages except kernel packages.
-i
option to simply install a new kernel package instead of upgrading it. This is because using the -U
option to upgrade a kernel package removes the previous (older) kernel package, which could render the system unable to boot if there is a problem with the new kernel. Therefore, use the rpm -i <kernel_package>
command to install a new kernel without replacing any older kernel packages. For more information on installing kernel packages, refer to Chapter 20, Manually Upgrading the Kernel.
error: tree-1.5.2.2-4.fc20.x86_64.rpm: Header V3 RSA/SHA256 signature: BAD, key ID d22e77f2
NOKEY
:
warning: tree-1.5.2.2-4.fc20.x86_64.rpm: Header V3 RSA/SHA1 signature: NOKEY, key ID 57bbccba
Preparing...                ########################################### [100%]
package tree-1.5.3-2.fc20.x86_64 is already installed
--replacepkgs
option, which tells RPM to ignore the error:
rpm -Uvh --replacepkgs tree-1.5.3-2.fc20.x86_64.rpm
Preparing...                ##################################################
file /usr/bin/foobar from install of foo-1.0-1.fc20.x86_64 conflicts with file from package bar-3.1.1.fc20.x86_64
--replacefiles
option:
rpm -Uvh --replacefiles foo-1.0-1.fc20.x86_64.rpm
error: Failed dependencies: bar.so.3()(64bit) is needed by foo-1.0-1.fc20.x86_64
rpm -Uvh foo-1.0-1.fc20.x86_64.rpm bar-3.1.1.fc20.x86_64.rpm
Preparing...                ########################################### [100%]
   1:foo                    ########################################### [ 50%]
   2:bar                    ########################################### [100%]
--whatprovides
option to determine which package contains the required file.
rpm -q --whatprovides "bar.so.3"
bar.so.3
is in the RPM database, the name of the package is displayed:
bar-3.1.1.fc20.i586.rpm
Warning: Forcing Package Installation
rpm
to install a package that gives us a Failed dependencies
error (using the --nodeps
option), this is not recommended, and will usually result in the installed package failing to run. Installing or removing packages with rpm --nodeps
can cause applications to misbehave and/or crash, and can cause serious package management problems or, possibly, system failure. For these reasons, it is best to heed such warnings; the package manager—whether RPM, Yum or PackageKit—shows us these warnings and suggests possible fixes because accounting for dependencies is critical. The Yum package manager can perform dependency resolution and fetch dependencies from online repositories, making it safer, easier and smarter than forcing rpm
to carry out actions without regard to resolving dependencies.
saving /etc/foo.conf as /etc/foo.conf.rpmsave
foo.conf.rpmnew
, and leave the configuration file you modified untouched. You should still resolve any conflicts between your modified configuration file and the new one, usually by merging changes from the old one to the new one with a diff
program.
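A minimal sketch of that review step, using throwaway files in a temporary directory to stand in for a real configuration file and its .rpmnew counterpart:

```shell
# Illustrative files only; in practice these would be e.g.
# /etc/foo.conf and /etc/foo.conf.rpmnew.
demo=$(mktemp -d)
printf 'option = old\nlocal_tweak = yes\n' > "$demo/foo.conf"
printf 'option = new\n'                    > "$demo/foo.conf.rpmnew"

# Review what the packaged default changes relative to the edited file;
# diff exits nonzero when the files differ, so mask that for the demo.
diff -u "$demo/foo.conf" "$demo/foo.conf.rpmnew" || true
```

Lines prefixed with `-` are local changes absent from the packaged default; those are the ones to merge forward by hand.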
package foo-2.0-1.fc20.x86_64.rpm (which is newer than foo-1.0-1) is already installed
--oldpackage
option:
rpm -Uvh --oldpackage foo-1.0-1.fc20.x86_64.rpm
rpm -e foo
rpm -e and package name errors
foo
, not the name of the original package file, foo-1.0-1.fc20.x86_64
. If you attempt to uninstall a package using the rpm -e
command and the original full file name, you will receive a package name error.
rpm -e ghostscript
error: Failed dependencies:
libgs.so.8()(64bit) is needed by (installed) libspectre-0.2.2-3.fc20.x86_64
libgs.so.8()(64bit) is needed by (installed) foomatic-4.0.3-1.fc20.x86_64
libijs-0.35.so()(64bit) is needed by (installed) gutenprint-5.2.4-5.fc20.x86_64
ghostscript is needed by (installed) printer-filters-1.1-4.fc20.noarch
<library_name>.so.<number>
file) in Section A.2.2.3, “Unresolved Dependency”, we can search for a 64-bit shared object library using this exact syntax (and making sure to quote the file name):
~]# rpm -q --whatprovides "libgs.so.8()(64bit)"
ghostscript-8.70-1.fc20.x86_64
Warning: Forcing Package Removal
rpm
to remove a package that gives us a Failed dependencies
error (using the --nodeps
option), this is not recommended, and may cause harm to other installed applications. Installing or removing packages with rpm --nodeps
can cause applications to misbehave and/or crash, and can cause serious package management problems or, possibly, system failure. For these reasons, it is best to heed such warnings; the package manager—whether RPM, Yum or PackageKit—shows us these warnings and suggests possible fixes because accounting for dependencies is critical. The Yum package manager can perform dependency resolution and fetch dependencies from online repositories, making it safer, easier and smarter than forcing rpm
to carry out actions without regard to resolving dependencies.
rpm -Fvh foo-2.0-1.fc20.x86_64.rpm
*.rpm
glob:
rpm -Fvh *.rpm
/var/lib/rpm/
, and is used to query which packages are installed, what version each package is, and to calculate any changes to the files in a package since installation, among other use cases.
-q
option. The rpm -q <package_name>
command displays the package name, version, and release number of the installed package. For example, using rpm -q tree
to query installed package tree
might generate the following output:
tree-1.5.2.2-4.fc20.x86_64
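The name-version-release.architecture string that rpm -q prints can be split with plain shell parameter expansion; this sketch assumes the usual N-V-R.A layout:

```shell
# Split an N-V-R.A string, as printed by rpm -q, into its four fields.
nvra="tree-1.5.2.2-4.fc20.x86_64"
arch=${nvra##*.}        # text after the last dot
rest=${nvra%.*}         # tree-1.5.2.2-4.fc20
release=${rest##*-}     # text after the last hyphen
rest=${rest%-*}         # tree-1.5.2.2
version=${rest##*-}
name=${rest%-*}
echo "$name $version $release $arch"
```

Splitting from the right is what makes this work, since package names such as tree may themselves contain hyphens before the version.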
man rpm
for details) to further refine or qualify your query:
-a
— queries all currently installed packages.
-f <file_name>
— queries the RPM database for which package owns <file_name>
. Specify the absolute path of the file (for example, rpm -qf /bin/ls
instead of rpm -qf ls
).
-p <package_file>
— queries the uninstalled package <package_file>
.
-i
displays package information including name, description, release, size, build date, install date, vendor, and other miscellaneous information.
-l
displays the list of files that the package contains.
-s
displays the state of all the files in the package.
-d
displays a list of files marked as documentation (man pages, info pages, READMEs, etc.) in the package.
-c
displays a list of files marked as configuration files. These are the files you edit after installation to adapt and customize the package to your system (for example, sendmail.cf
, passwd
, inittab
, etc.).
-v
to the command to display the lists in a familiar ls -l
format.
rpm -V
verifies a package. You can use any of the Verify Options listed for querying to specify the packages you wish to verify. A simple use of verifying is rpm -V tree
, which verifies that all the files in the tree
package are as they were when they were originally installed. For example:
rpm -Vf /usr/bin/tree
/usr/bin/tree
is the absolute path to the file used to query a package.
rpm -Va
rpm -Vp tree-1.5.2.2-4.fc20.x86_64.rpm
c
" denotes a configuration file) and then the file name. Each of the eight characters denotes the result of a comparison of one attribute of the file to the value of that attribute recorded in the RPM database. A single period (.
) means the test passed. The following characters denote specific discrepancies:
5
— MD5 checksum
S
— file size
L
— symbolic link
T
— file modification time
D
— device
U
— user
G
— group
M
— mode (includes permissions and file type)
?
— unreadable file (file permission errors, for example)
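The flag column can also be inspected mechanically; this sketch decodes a sample rpm -V line (the sample line and the report helper are illustrative):

```shell
# Decode the attribute-flag field of an rpm -V output line.
line="S.5....T  c /etc/foo.conf"
flags=${line%% *}                 # first whitespace-separated field

# Print a message if the given flag character appears in the field.
report() {
  case $flags in *"$1"*) echo "$2" ;; esac
}
report S "file size differs"
report 5 "MD5 checksum differs"
report T "modification time differs"
report M "mode differs"
```

Only the first three messages are printed for this sample, because M does not appear in the flag field.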
rpm -K --nosignature <rpm_file>
<rpm_file>: sha1 md5 OK
(specifically the OK
part of it) is displayed, the file was not corrupted during download. To see a more verbose message, replace -K
with -Kvv
in the command.
/etc/pki/rpm-gpg/
directory. To verify a Fedora Project package, first import the correct key based on your processor architecture:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-x86_64
rpm -qa gpg-pubkey*
gpg-pubkey-57bbccba-4a6f97af
rpm -qi
followed by the output from the previous command:
rpm -qi gpg-pubkey-57bbccba-4a6f97af
rpm -K <rpm_file>
rsa sha1 (md5) pgp md5 OK
. This means that the signature of the package has been verified, that it is not corrupt, and is therefore safe to install and use.
rpm -Va
rpm -qf /usr/bin/ghostscript
ghostscript-8.70-1.fc20.x86_64
/usr/bin/paste
. You would like to verify the package that owns that program, but you do not know which package owns paste
. Enter the following command:
rpm -Vf /usr/bin/paste
rpm -qdf /usr/bin/free
/usr/share/doc/procps-ng/BUGS
/usr/share/doc/procps-ng/FAQ
/usr/share/doc/procps-ng/NEWS
/usr/share/doc/procps-ng/TODO
/usr/share/man/man1/free.1.gz
/usr/share/man/man1/pgrep.1.gz
/usr/share/man/man1/pkill.1.gz
/usr/share/man/man1/pmap.1.gz
/usr/share/man/man1/ps.1.gz
/usr/share/man/man1/pwdx.1.gz
/usr/share/man/man1/skill.1.gz
/usr/share/man/man1/slabtop.1.gz
/usr/share/man/man1/snice.1.gz
/usr/share/man/man1/tload.1.gz
/usr/share/man/man1/top.1.gz
/usr/share/man/man1/uptime.1.gz
/usr/share/man/man1/w.1.gz
/usr/share/man/man1/watch.1.gz
/usr/share/man/man5/sysctl.conf.5.gz
/usr/share/man/man8/sysctl.8.gz
/usr/share/man/man8/vmstat.8.gz
rpm -qip crontabs-1.10-31.fc20.noarch.rpm
Name        : crontabs
Relocations : (not relocatable)
Size        : 2486
License     : Public Domain and GPLv2
Signature   : RSA/SHA1, Tue 11 Aug 2009 01:11:19 PM CEST, Key ID 9d1cc34857bbccba
Packager    : Fedora Project
Summary     : Root crontab files used to schedule the execution of programs
Description :
The crontabs package contains root crontab files and directories.
You will need to install cron daemon to run the jobs from the crontabs.
The cron daemon such as cronie or fcron checks the crontab files to
see when particular commands are scheduled to be executed. If commands
are scheduled, it executes them.
Crontabs handles a basic system function, so it should be installed on
your system.
crontabs
RPM package installs. You would enter the following:
rpm -qlp crontabs-1.10-31.fc20.noarch.rpm
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly
/etc/crontab
/usr/bin/run-parts
/usr/share/man/man4/crontabs.4.gz
rpm --help
— This command displays a quick reference of RPM parameters.
man rpm
— The RPM man page gives more detail about RPM parameters than the rpm --help
command.
Xorg
binary) listens for connections from X client applications via a network or local loopback interface. The server communicates with the hardware, such as the video card, monitor, keyboard, and mouse. X client applications exist in the user space, creating a graphical user interface (GUI) for the user and passing user requests to the X server.
evdev
, that supports all input devices that the kernel knows about, including most mice and keyboards.
/usr/
directory. The /etc/X11/
directory contains configuration files for X client and server applications. This includes configuration files for the X server itself, the X display managers, and many other base components.
/etc/fonts/fonts.conf
. For more information on configuring and adding fonts, refer to Section B.4, “Fonts”.
metacity
kwin
compiz
compiz
package.
mwm
mwm
) is a basic, stand-alone window manager. Since it is designed to be stand-alone, it should not be used in conjunction with GNOME or KDE. To run this window manager, you need to install the openmotif
package.
twm
twm
), which provides the most basic tool set among the available window managers, can be used either as a stand-alone or with a desktop environment. To run this window manager, you need to install the xorg-x11-twm
package.
/usr/bin/Xorg
; a symbolic link X
pointing to this file is also provided. Associated configuration files are stored in the /etc/X11/
and /usr/share/X11/
directories.
xorg.conf.d
directory contain preconfigured settings from vendors and from the distribution, and these files should not be edited by hand. Configuration in the xorg.conf
file, on the other hand, is done completely by hand but is not necessary in most scenarios.
When do you need the xorg.conf file?
/etc/X11/xorg.conf
, that was necessary in previous releases, is not supplied with the current release of the X Window System. It can still be useful to create the file manually to configure new hardware, to set up an environment with multiple video cards, or for debugging purposes.
/usr/lib/xorg/modules/
(or /usr/lib64/xorg/modules/
) directory contains X server modules that can be loaded dynamically at runtime. By default, only some modules in /usr/lib/xorg/modules/
are automatically loaded by the X server.
mouse
, kbd
, or vmmouse
driver configured in the xorg.conf
file are, by default, ignored by the X server. See Section B.3.3.3, “The ServerFlags
section” for further details. Additional configuration is provided in the /etc/X11/xorg.conf.d/
directory and it can override or augment any configuration that has been obtained through HAL.
Section "section-name"
line, where "section-name" is the title for the section, and ends with an EndSection
line. Each section contains lines that include option names and one or more option values; some values are enclosed in double quotes ("
).
/etc/X11/xorg.conf
file accept a boolean switch which turns the feature on or off. The acceptable values are:
1
, on
, true
, or yes
— Turns the option on.
0
, off
, false
, or no
— Turns the option off.
#
) are not read by the X server and are used for human-readable comments.
# This file is autogenerated by system-setup-keyboard. Any
# modifications will be lost.

Section "InputClass"
        Identifier      "system-setup-keyboard"
        MatchIsKeyboard "on"
        Option          "XkbModel"   "pc105"
        Option          "XkbLayout"  "cz,us"
#       Option          "XkbVariant" "(null)"
        Option          "XkbOptions" "terminate:ctrl_alt_bksp,grp:shifts_toggle,grp_led:scroll"
EndSection
xorg.conf.d
Directory/usr/share/X11/xorg.conf.d/
provides separate configuration files from vendors or third-party packages; changes to files in this directory may be overwritten by settings specified in the /etc/X11/xorg.conf
file. The /etc/X11/xorg.conf.d/
directory stores user-specific configuration.
.conf
in configuration directories are parsed by the X server upon startup and are treated like part of the traditional xorg.conf
configuration file. These files may contain one or more sections; for a description of the options in a section and the general layout of the configuration file, refer to Section B.3.3, “The xorg.conf
File” or to the xorg.conf(5)
man page. The X server essentially treats the collection of configuration files as one big file with entries from xorg.conf
at the end. Users are encouraged to put custom configuration into /etc/X11/xorg.conf
and leave the directory for configuration snippets provided by the distribution.
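The parse order can be previewed from the shell, since a glob expands in the same alphanumeric order the server uses; the directory below is a throwaway stand-in, not a real configuration path:

```shell
# Stand-in directory; the server actually reads /usr/share/X11/xorg.conf.d/
# and /etc/X11/xorg.conf.d/.
dir=$(mktemp -d)
touch "$dir/10-evdev.conf" "$dir/00-keyboard.conf" "$dir/README"

# Only files ending in .conf are parsed, lowest-sorting names first.
for f in "$dir"/*.conf; do
  echo "parse: ${f##*/}"
done
```

The README is skipped, and 00-keyboard.conf is parsed before 10-evdev.conf, so later snippets can override earlier ones.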
xorg.conf
File/etc/X11/xorg.conf
file was used to store initial setup for X. When a change occurred with the monitor, video card or other device managed by the X server, the file needed to be edited manually. In Fedora, there is rarely a need to manually create and edit the /etc/X11/xorg.conf
file. Nevertheless, it is still useful to understand various sections and optional parameters available, especially when troubleshooting or setting up unusual hardware configuration.
/etc/X11/xorg.conf
file. More detailed information about the X server configuration file can be found in the xorg.conf(5)
man page. This section is mostly intended for advanced users as most configuration options described below are not needed in typical configuration scenarios.
InputClass
sectionInputClass
is a new type of configuration section that does not apply to a single device but rather to a class of devices, including hot-plugged devices. An InputClass
section's scope is limited by the matches specified; in order to apply to an input device, all matches must apply to the device as seen in the example below:
Section "InputClass"
        Identifier      "touchpad catchall"
        MatchIsTouchpad "on"
        Driver          "synaptics"
EndSection
xorg.conf
file or an xorg.conf.d
directory, any touchpad present in the system is assigned the synaptics
driver.
Alphanumeric sorting in xorg.conf.d
Because files in the xorg.conf.d
directory are parsed in alphanumeric order, the Driver
setting in the example above overwrites previously set driver options. The more generic the class, the earlier it should be listed.
InputClass
section:
MatchIsPointer
, MatchIsKeyboard
, MatchIsTouchpad
, MatchIsTouchscreen
, MatchIsJoystick
— boolean options to specify a type of a device.
MatchProduct "product_name"
— this option matches if the product_name substring occurs in the product name of the device.
MatchVendor "vendor_name"
— this option matches if the vendor_name substring occurs in the vendor name of the device.
MatchDevicePath "/path/to/device"
— this option matches any device if its device path corresponds to the patterns given in the "/path/to/device" template, for example /dev/input/event*
. See the fnmatch(3)
man page for further details.
MatchTag "tag_pattern"
— this option matches if at least one tag assigned by the HAL configuration back end matches the tag_pattern pattern.
InputClass
sections. These sections are optional and are used to configure a class of input devices as they are automatically added. An input device can match more than one InputClass
section. When arranging these sections, it is recommended to put generic matches above specific ones because each input class can override settings from a previous one if an overlap occurs.
InputDevice
sectionInputDevice
section configures one input device for the X server. Previously, systems typically had at least one InputDevice
section for the keyboard, and most mouse settings were automatically detected.
InputDevice
configuration is needed for most setups, and the xorg-x11-drv-* input driver packages provide the automatic configuration through HAL. The default driver for both keyboards and mice is evdev
.
InputDevice
section for a keyboard:
Section "InputDevice"
        Identifier "Keyboard0"
        Driver     "kbd"
        Option     "XkbModel"  "pc105"
        Option     "XkbLayout" "us"
EndSection
InputDevice
section:
Identifier
— Specifies a unique name for this InputDevice
section. This is a required entry.
Driver
— Specifies the name of the device driver X must load for the device. If the AutoAddDevices
option is enabled (which is the default setting), any input device section with Driver "mouse"
or Driver "kbd"
will be ignored. This is necessary due to conflicts between the legacy mouse and keyboard drivers and the new evdev
generic driver. Instead, the server will use the information from the back end for any input devices. Any custom input device configuration in the xorg.conf
should be moved to the back end. In most cases, the back end will be HAL and the configuration location will be the /etc/X11/xorg.conf.d
directory.
Option
— Specifies necessary options pertaining to the device.
xorg.conf
file:
Protocol
— Specifies the protocol used by the mouse, such as IMPS/2
.
Device
— Specifies the location of the physical device.
Emulate3Buttons
— Specifies whether to allow a two-button mouse to act like a three-button mouse when both mouse buttons are pressed simultaneously.
xorg.conf(5)
man page for a complete list of valid options for this section.
ServerFlags
sectionServerFlags
section contains miscellaneous global X server settings. Any settings in this section may be overridden by options placed in the ServerLayout
section (refer to Section B.3.3.4, “ServerLayout
” for details).
ServerFlags
section occupies a single line and begins with the term Option
followed by an option enclosed in double quotation marks ("
).
ServerFlags
section:
Section "ServerFlags"
        Option "DontZap" "true"
EndSection
"DontZap" "boolean"
— When the value of <boolean> is set to true
, this setting prevents the use of the Ctrl+Alt+Backspace key combination to immediately terminate the X server.
X keyboard extension
setxkbmap -option "terminate:ctrl_alt_bksp"
"DontZoom" "boolean"
— When the value of <boolean> is set to true
, this setting prevents cycling through configured video resolutions using the Ctrl+Alt+Keypad-Plus and Ctrl+Alt+Keypad-Minus key combinations.
"AutoAddDevices" "boolean"
— When the value of <boolean> is set to false
, the server will not hot plug input devices and instead rely solely on devices configured in the xorg.conf
file. See Section B.3.3.2, “The InputDevice
section” for more information concerning input devices. This option is enabled by default and HAL (hardware abstraction layer) is used as a back end for device discovery.
ServerLayout
ServerLayout
section binds together the input and output devices controlled by the X server. At a minimum, this section must specify one input device and one output device. By default, a monitor (output device) and a keyboard (input device) are specified.
ServerLayout
section:
Section "ServerLayout"
        Identifier  "Default Layout"
        Screen    0 "Screen0" 0 0
        InputDevice "Mouse0" "CorePointer"
        InputDevice "Keyboard0" "CoreKeyboard"
EndSection
ServerLayout
section:
Identifier
— Specifies a unique name for this ServerLayout
section.
Screen
— Specifies the name of a Screen
section to be used with the X server. More than one Screen
option may be present.
Screen
entry:
Screen 0 "Screen0" 0 0
Screen
entry (0
) indicates that the first monitor connector, or head on the video card, uses the configuration specified in the Screen
section with the identifier "Screen0"
.
Screen
section with the identifier "Screen0"
can be found in Section B.3.3.8, “The Screen
section”.
Screen
entry with a different number and a different Screen
section identifier is necessary.
"Screen0"
give the absolute X and Y coordinates for the upper left corner of the screen (0 0
by default).
InputDevice
— Specifies the name of an InputDevice
section to be used with the X server.
InputDevice
entries: one for the default mouse and one for the default keyboard. The options CorePointer
and CoreKeyboard
indicate that these are the primary mouse and keyboard. If the AutoAddDevices
option is enabled, this entry need not be specified in the ServerLayout
section. If the AutoAddDevices
option is disabled, both mouse and keyboard are auto-detected with the default values.
Option "option-name"
— An optional entry which specifies extra parameters for the section. Any options listed here override those listed in the ServerFlags
section.
xorg.conf(5)
man page.
ServerLayout
section in the /etc/X11/xorg.conf
file. However, by default the server reads only the first one it encounters. If there is an alternative ServerLayout
section, it can be specified as a command line argument when starting an X session; as in the Xorg -layout <layoutname>
command.
Files
sectionFiles
section sets paths for services vital to the X server, such as the font path. This is an optional section, as these paths are normally detected automatically. This section can be used to override automatically detected values.
Files
section:
Section "Files"
        RgbPath  "/usr/share/X11/rgb.txt"
        FontPath "unix/:7100"
EndSection
Files
section:
ModulePath
— An optional parameter which specifies alternate directories which store X server modules.
Monitor
sectionMonitor
section configures one type of monitor used by the system. This is an optional entry as most monitors are now detected automatically.
Monitor
section for a monitor:
Section "Monitor"
        Identifier  "Monitor0"
        VendorName  "Monitor Vendor"
        ModelName   "DDC Probed Monitor - ViewSonic G773-2"
        DisplaySize 320 240
        HorizSync   30.0 - 70.0
        VertRefresh 50.0 - 180.0
EndSection
Monitor
section:
Identifier
— Specifies a unique name for this Monitor
section. This is a required entry.
VendorName
— An optional parameter which specifies the vendor of the monitor.
ModelName
— An optional parameter which specifies the monitor's model name.
DisplaySize
— An optional parameter which specifies, in millimeters, the physical size of the monitor's picture area.
HorizSync
— Specifies the range of horizontal sync frequencies compatible with the monitor, in kHz. These values help the X server determine the validity of built-in or specified Modeline
entries for the monitor.
VertRefresh
— Specifies the range of vertical refresh frequencies supported by the monitor, in Hz. These values help the X server determine the validity of built-in or specified Modeline
entries for the monitor.
Modeline
— An optional parameter which specifies additional video modes for the monitor at particular resolutions, with certain horizontal sync and vertical refresh resolutions. See the xorg.conf(5)
man page for a more detailed explanation of Modeline
entries.
Option "option-name"
— An optional entry which specifies extra parameters for the section. Replace <option-name> with a valid option listed for this section in the xorg.conf(5)
man page.
Device
sectionDevice
section configures one video card on the system. While one Device
section is the minimum, additional instances may occur for each video card installed on the machine.
Device
section for a video card:
Section "Device"
        Identifier "Videocard0"
        Driver     "mga"
        VendorName "Videocard vendor"
        BoardName  "Matrox Millennium G200"
        VideoRam   8192
        Option     "dpms"
EndSection
Device
section:
Identifier
— Specifies a unique name for this Device
section. This is a required entry.
Driver
— Specifies which driver the X server must load to utilize the video card. A list of drivers can be found in /usr/share/hwdata/videodrivers
, which is installed with the hwdata package.
VendorName
— An optional parameter which specifies the vendor of the video card.
BoardName
— An optional parameter which specifies the name of the video card.
VideoRam
— An optional parameter which specifies the amount of RAM available on the video card, in kilobytes. This setting is only necessary for video cards the X server cannot probe to detect the amount of video RAM.
BusID
— An entry which specifies the bus location of the video card. On systems with only one video card a BusID
entry is optional and may not even be present in the default /etc/X11/xorg.conf
file. On systems with more than one video card, however, a BusID
entry is required.
Screen
— An optional entry which specifies which monitor connector or head on the video card the Device
section configures. This option is only useful for video cards with multiple heads.
Device
sections must exist and each of these sections must have a different Screen
value.
Screen
entry must be an integer. The first head on the video card has a value of 0
. The value for each additional head increments this value by one.
Option "option-name"
— An optional entry which specifies extra parameters for the section. Replace <option-name> with a valid option listed for this section in the xorg.conf(5)
man page.
"dpms"
(for Display Power Management Signaling, a VESA standard), which activates the Energy Star energy compliance setting for the monitor.
Screen
sectionScreen
section binds one video card (or video card head) to one monitor by referencing the Device
section and the Monitor
section for each. While one Screen
section is the minimum, additional instances may occur for each video card and monitor combination present on the machine.
Screen
section:
Section "Screen"
        Identifier   "Screen0"
        Device       "Videocard0"
        Monitor      "Monitor0"
        DefaultDepth 16
        SubSection "Display"
                Depth 24
                Modes "1280x1024" "1280x960" "1152x864" "1024x768" "800x600" "640x480"
        EndSubSection
        SubSection "Display"
                Depth 16
                Modes "1152x864" "1024x768" "800x600" "640x480"
        EndSubSection
EndSection
Screen
section:
Identifier
— Specifies a unique name for this Screen
section. This is a required entry.
Device
— Specifies the unique name of a Device
section. This is a required entry.
Monitor
— Specifies the unique name of a Monitor
section. This is only required if a specific Monitor
section is defined in the xorg.conf
file. Normally, monitors are detected automatically.
DefaultDepth
— Specifies the default color depth in bits. In the previous example, 16
(which provides thousands of colors) is the default. Only one DefaultDepth
entry is permitted, although this can be overridden with the Xorg command line option -depth <n>
, where <n>
is any additional depth specified.
SubSection "Display"
— Specifies the screen modes available at a particular color depth. The Screen
section can have multiple Display
subsections, which are entirely optional since screen modes are detected automatically.
Option "option-name"
— An optional entry which specifies extra parameters for the section. Replace <option-name> with a valid option listed for this section in the xorg.conf(5)
man page.
DRI
sectionDRI
section specifies parameters for the Direct Rendering Infrastructure (DRI). DRI is an interface which allows 3D software applications to take advantage of 3D hardware acceleration capabilities built into most modern video hardware. In addition, DRI can improve 2D performance via hardware acceleration, if supported by the video card driver.
xorg.conf
file will override the default values.
DRI
section:
Section "DRI"
        Group 0
        Mode  0666
EndSection
Qt 3
or GTK+ 2
graphical toolkits, or their newer versions.
Font configuration
/etc/fonts/fonts.conf
configuration file, which should not be edited by hand.
Fonts group
fonts
group installed. This can be done either by selecting the group in the installer or by running the yum groupinstall fonts
command after installation.
.fonts/
directory in the user's home directory.
/usr/share/fonts/
directory. It is a good idea to create a new subdirectory, such as local/
or similar, to help distinguish between user-installed and default fonts.
fc-cache
command as root to update the font information cache:
fc-cache <path-to-font-directory>
/usr/share/fonts/local/
or /home/<user>/.fonts/
).
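The steps above can be sketched as a script; the local/ subdirectory matches the suggestion above, a temporary directory stands in for /usr/share/fonts/, the font file is an empty placeholder, and fc-cache is only invoked if fontconfig is installed:

```shell
# Demonstration only: a temp dir stands in for /usr/share/fonts/local/.
fontdir=$(mktemp -d)/local
mkdir -p "$fontdir"
: > "$fontdir/Example-Regular.ttf"    # placeholder for a real font file

# Refresh the font information cache for that directory, if available.
if command -v fc-cache >/dev/null 2>&1; then
  fc-cache "$fontdir"
fi
```

For a system-wide install, the same steps would be run as root against /usr/share/fonts/local/.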
Interactive font installation
fonts:///
into the Nautilus address bar, and dragging the new font files there.
startx
. The startx
command is a front-end to the xinit
command, which launches the X server (Xorg
) and connects X client applications to it. Because the user is already logged into the system at runlevel 3, startx
does not launch a display manager or authenticate users. See Section B.5.2, “Runlevel 5” for more information about display managers.
startx
command is executed, it searches for the .xinitrc
file in the user's home directory to define the desktop environment and possibly other X client applications to run. If no .xinitrc
file is present, it uses the system default /etc/X11/xinit/xinitrc
file instead.
xinitrc
script then searches for user-defined files and default system files, including .Xresources
, .Xmodmap
, and .Xkbmap
in the user's home directory, and Xresources
, Xmodmap
, and Xkbmap
in the /etc/X11/
directory. The Xmodmap
and Xkbmap
files, if they exist, are used by the xmodmap
utility to configure the keyboard. The Xresources
file is read to assign specific preference values to applications.
xinitrc
script executes all scripts located in the /etc/X11/xinit/xinitrc.d/
directory. One important script in this directory is xinput.sh
, which configures settings such as the default language.
xinitrc
script attempts to execute .Xclients
in the user's home directory and turns to /etc/X11/xinit/Xclients
if it cannot be found. The purpose of the Xclients
file is to start the desktop environment or, possibly, just a basic window manager. The .Xclients
script in the user's home directory starts the user-specified desktop environment in the .Xclients-default
file. If .Xclients
does not exist in the user's home directory, the standard /etc/X11/xinit/Xclients
script attempts to start another desktop environment, trying GNOME first, then KDE, followed by twm
.
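That fallback order can be sketched as a search down a list of session commands; pick_session is an illustrative helper, not part of the real Xclients script:

```shell
# Return the first command from the argument list found on PATH,
# or "none" if no candidate is available.
pick_session() {
  for session in "$@"; do
    if command -v "$session" >/dev/null 2>&1; then
      echo "$session"
      return 0
    fi
  done
  echo none
}

# Mirrors the documented order: GNOME first, then KDE, then twm.
pick_session gnome-session startkde twm
```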
GDM
(GNOME Display Manager) — The default display manager for Fedora. GDM
allows the user to configure language settings, and to shut down, restart, or log in to the system.
KDM
— KDE's display manager, which allows the user to shut down, restart, or log in to the system.
xdm
(X Window Display Manager) — A very basic display manager which only lets the user log in to the system.
/etc/X11/prefdm
script determines the preferred display manager by referencing the /etc/sysconfig/desktop
file. A list of options for this file is available in the following file:
/usr/share/doc/initscripts/sysconfig.txt
/etc/X11/xdm/Xsetup_0
file to set up the login screen. Once the user logs into the system, the /etc/X11/xdm/GiveConsole
script runs to assign ownership of the console to the user. Then, the /etc/X11/xdm/Xsession
script runs to accomplish many of the tasks normally performed by the xinitrc
script when starting X from runlevel 3, including setting system and user resources, as well as running the scripts in the /etc/X11/xinit/xinitrc.d/
directory.
GNOME
or KDE
display managers by selecting it from the menu item accessed by selecting → → → . If the desktop environment is not specified in the display manager, the /etc/X11/xdm/Xsession
script checks the .xsession
and .Xclients
files in the user's home directory to decide which desktop environment to load. As a last resort, the /etc/X11/xinit/Xclients
file is used to select a desktop environment or window manager to use in the same way as runlevel 3.
:0
) and logs out, the /etc/X11/xdm/TakeConsole
script runs and reassigns ownership of the console to the root user. The original display manager, which continues running after the user logs in, takes control by spawning a new display manager. This restarts the X server, displays a new login window, and starts the entire process over again.
/usr/share/doc/gdm/README
, or the xdm
man page.
/usr/share/X11/doc/
— contains detailed documentation on the X Window System architecture, as well as how to get additional information about the Xorg project as a new user.
/usr/share/doc/gdm/README
— contains information on how display managers control user authentication.
man xorg.conf
— Contains information about the xorg.conf
configuration files, including the meaning and syntax for the different sections within the files.
man Xorg
— Describes the Xorg
display server.
Revision History
Revision 1-1    Thu Aug 9 2012
Revision 1-0    Tue May 29 2012