
3. Changes in Fedora for System Administrators

3.1. Kernel

Fedora 21 features the 3.17.4 kernel.

3.1.1. Modular Kernel Packaging

The kernel package is now a meta package that pulls in kernel-core and kernel-modules. The kernel-core package is smaller than the full package and is well suited for virtualized environments. Optionally uninstalling kernel-modules reduces cloud image size.
The kernel-modules package should be included when Fedora is installed on real hardware.

Initramfs Changes

Note that a new initramfs is generated automatically only by the kernel-core package, not by the kernel-modules package. If you initially installed only kernel-core and add kernel-modules at a later point in time, and any of the newly installed modules is required for your system to boot, you need to create a new initramfs manually using dracut.
The dracut utility is used to create the initramfs on Fedora. To regenerate an initramfs for all installed kernels, use the following command:
        # dracut --regenerate-all
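If kernel-modules is added after the initial installation, the initramfs for the running kernel can also be rebuilt directly; a sketch (the kernel version is substituted by uname -r and will differ per system):

```shell
# Add the full module set to a system deployed with kernel-core only
dnf install kernel-modules

# Rebuild the initramfs for the currently running kernel only
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
```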

3.2. Installation

3.2.1. Built-in Help in the Graphical Installer

Each screen in the installer's graphical interface and in the Initial Setup utility now has a Help button in the top right corner. Clicking this button opens the section of the Fedora Installation Guide relevant to the current screen using the Yelp help browser.
The help is currently available only in English.

3.2.2. zRAM Swap Support

The Anaconda installer now supports swap on zRAM during the installation.
zRAM is a standard block device with compressed contents. Placing swap on such a device during the installation allows the installer to store more data in RAM instead of on the hard drive. This is especially helpful on low-memory systems, where the installation can be performed much faster with this feature enabled.
This feature is automatically enabled if Anaconda detects 2 GB or less memory, and disabled on systems with more memory. To force zRAM swap on or off, use the inst.zram=on or inst.zram=off boot option within the boot menu.
Specific limits, thresholds, and implementation details may change in future releases.

3.2.3. Changes in Boot Options

A boot option is used to modify the installer's behavior using the boot command line. The following boot options have been added in Fedora 21:
  • inst.zram=: Use this option to force zRAM swap on (inst.zram=on) or off (inst.zram=off).
  • inst.dnf: Use the experimental DNF backend for package installation instead of YUM.
  • inst.memcheck: Perform a check at the beginning of the installation to determine if there is enough available RAM. If there is not enough memory detected, the installation will stop with an error message. This option is enabled by default; use inst.memcheck=0 to disable it.
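Put together, a modified installer boot line might look like the following (a hypothetical example; the kernel and initrd arguments come from your installation media):

```
vmlinuz initrd=initrd.img inst.zram=off inst.dnf inst.memcheck=0
```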

3.2.4. Changes in Anaconda Command Line Options

Anaconda command line options are used when running the installer from a terminal within an already installed system, for example when installing into a disk image.
  • The built-in help available through the anaconda -h command now provides descriptions for all available commands.
  • --memcheck: Check if the system has sufficient RAM to complete the installation and abort the installation if it does not. This check is approximate. Memory usage during installation depends on the package selection, user interface (graphical/text) and other parameters.
  • --nomemcheck: Do not check if the system has enough memory to complete the installation.
  • --leavebootorder: Boot drives in their existing order - used to override the default of booting into the newly installed drive on IBM Power Systems servers and EFI systems. This is useful for systems that, for example, should network boot first before falling back to a local boot.
  • --extlinux: Use extlinux as the boot loader. Note that there is no attempt to check whether this will work for your platform, which means your system may be unable to boot after completing the installation if you use this option.
  • --dnf: Use the experimental DNF package management backend to replace the default YUM package manager. See the DNF project documentation for more information.
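As an illustration, several of these options might be combined when installing into a disk image from a running system (a sketch; the image path is illustrative):

```shell
# Install into a disk image, checking available memory first and
# leaving the firmware boot order unchanged (path is illustrative)
anaconda --image=/var/tmp/fedora.img --memcheck --leavebootorder
```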

3.2.5. Changes in Kickstart Syntax

This section provides a list of changes to Kickstart commands and options. A list of these changes can also be viewed using the following command on a Fedora system:
$ ksverdiff -f F20 -t F21
This command will only work on Fedora 21 with the pykickstart package installed.

New Commands and Options
  • fcoe --autovlan: Enable automatic discovery of VLANs.
  • bootloader --disabled: Do not attempt to install a boot loader. This option overrides all other boot loader configuration; all other boot loader options will be ignored and no boot loader packages will be installed.
  • network --interfacename=: Specify a custom interface name for a VLAN device. This option should be used when the default name generated by the --vlanid= option is not desired, and it must always be used together with --vlanid=.
  • ostreesetup: New optional command. Used for OSTree installations. Available options are:
    • --osname= (required): Management root for OS installation.
    • --remote= (optional): Name of the remote repository.
    • --url= (required): Repository URL.
    • --ref= (required): Name of branch inside the repository.
    • --nogpgcheck (optional): Disable GPG key verification.
    See the OSTree project documentation for more information.
  • clearpart --disklabel=: Create a custom disk label when relabeling disks.
  • autopart --fstype=: Specify a file system type (such as ext4 or xfs) to replace the default when doing automatic partitioning.
  • repo --install: Writes the repository information into the /etc/yum.repos.d/ directory. This makes the repository configured in Kickstart available on the installed system as well.
  • Changes in the %packages section:
    • You can now specify an environment to be installed in the %packages section by adding an environment name prefixed by @^. For example:
      @^Infrastructure Server
    • The %packages --nocore option can now be used to disable installing of the Core package group.
    • You can now exclude the kernel from installing. This is done the same way as excluding any other package - by prefixing the package name with -:
      %packages
      -kernel
      %end

Changes in Existing Commands and Options
  • volgroup --pesize=: This option now does not have a default value in Kickstart. The default size of a new volume group's physical extents is now determined by the installer during both manual and Kickstart installation. This means that the behavior of Kickstart and manual installations is now the same. The previous default value for Kickstart installations was 32768.
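Several of the new commands and options described above could appear together in one Kickstart file; a minimal sketch (the repository name and URL are placeholders):

```
# Relabel all disks with a GPT disk label, then auto-partition with ext4
clearpart --all --disklabel=gpt
autopart --fstype=ext4

# Make this repository available on the installed system as well
repo --name=extras --baseurl=http://repo.example.com/extras --install

# Install an environment, excluding the kernel package
%packages
@^Infrastructure Server
-kernel
%end
```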

3.2.6. Additional Changes

  • Software RAID configuration in the graphical user interface has been tweaked.
  • You can now use the + and - keys as shortcuts in the manual partitioning screen in the graphical user interface.
  • The ksverdiff utility (part of the pykickstart package) has a new option: --listversions. Use this option to list all available operating system versions which can be used as arguments for the --from= and --to= options.
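For example, the new option can be used to discover which version identifiers ksverdiff accepts (requires the pykickstart package):

```shell
# List all OS versions usable with the --from= and --to= options
ksverdiff --listversions
```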

3.3. Security

3.3.1. SSSD GPO-Based Access Control

SSSD now supports centrally managed, host-based access control in an Active Directory (AD) environment, using Group Policy Objects (GPOs).
GPO policy settings are commonly used to manage host-based access control in an AD environment. SSSD supports local logons, remote logons, service logons and more. Each of these standard GPO security options can be mapped to any PAM service, allowing administrators to comprehensively configure their systems.
This enhancement to SSSD is related only to the retrieval and enforcement of AD policy settings. Administrators can continue to use the existing AD tool set to specify policy settings.
The new functionality only affects SSSD's AD provider and has no effect on any other SSSD providers (e.g. IPA provider). By default, SSSD's AD provider will be installed in "permissive" mode, so that it won't break upgrades. Administrators will need to set "enforcing" mode manually (see sssd-ad(5)).
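A minimal sssd.conf sketch for switching the AD provider into enforcing mode (the domain name is illustrative; see sssd-ad(5) for the ad_gpo_access_control option):

```
[domain/example.com]
id_provider = ad
access_provider = ad
ad_gpo_access_control = enforcing
```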
More information about this change can be found at:

3.3.2. MD5 signed certificates are rejected

OpenSSL was patched to disallow verification of certificates signed with the MD5 algorithm. The MD5 hash algorithm is now considered insecure for certificate signatures, and all the main crypto libraries in Fedora were therefore patched to reject such certificates.
Certificates signed with MD5 are no longer present on public HTTPS web sites, but they may still be in use on private networks or for authentication on OpenVPN-based VPNs. It is highly recommended to replace such certificates with new ones signed with SHA-256, or at least SHA-1. As a temporary measure, the OPENSSL_ENABLE_MD5_VERIFY environment variable can be set to allow verification of certificates signed with MD5.
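To check which algorithm signed a certificate before deciding whether it needs replacing, openssl x509 can print the signature algorithm; the sketch below generates a throwaway self-signed certificate to inspect (file names and the CN are illustrative):

```shell
# Generate a throwaway self-signed certificate to inspect
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/md5check.key \
    -out /tmp/md5check.crt -days 1 -subj "/CN=md5check.example.com"

# Print the algorithm that signed the certificate; anything reporting
# md5WithRSAEncryption here should be replaced
openssl x509 -in /tmp/md5check.crt -noout -text | grep 'Signature Algorithm'

# Temporary workaround: permit MD5-signed certificates for one invocation
OPENSSL_ENABLE_MD5_VERIFY=1 openssl verify -CAfile /tmp/md5check.crt /tmp/md5check.crt
```

Any certificate reporting md5WithRSAEncryption in the output should be reissued with SHA-256.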

3.4. File Systems

3.4.1. Autofs learns amd maps

The autofs package has gained support for amd format automount maps. In the past, amd maps have been handled by the am-utils package, which has seen declining upstream development. Those that use amd format automount maps are encouraged to test the autofs functionality, and report any issues or feature requests at
For usage information, refer to /usr/share/doc/autofs/README.amd-maps.

3.5. Database Servers

3.5.1. Apache Accumulo

The Apache Accumulo sorted, distributed key/value store is a robust, scalable, high performance data storage and retrieval system. Apache Accumulo is based on Google's BigTable design and is built on top of Apache Hadoop, Zookeeper, and Thrift. Apache Accumulo features a few novel improvements on the BigTable design in the form of cell-based access control and a server-side programming mechanism that can modify key/value pairs at various points in the data management process.
Please note that Accumulo's optional monitor service is not provided in the initial F21 release. It will be made available as soon as all its dependencies are in place.
For more information see

3.5.2. Apache HBase

Apache HBase is used when you need random, real-time read/write access to your Big Data. Apache HBase hosts very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware. Apache HBase is a distributed, versioned, non-relational database modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
For more information see

3.5.3. Apache Hive

The Apache Hive data warehouse software facilitates querying and managing large data sets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
For more information see

3.5.4. MariaDB 10.0

In Fedora 21, MariaDB has been updated to upstream version 10.0, which provides various bug fixes and enhancements. Among other changes, support for parallel and multi-source replication has been added, as well as support for global transaction IDs. In addition, several new storage engines have been implemented.
For the list of all changes, visit the MariaDB Knowledge Base at

3.6. Samba

3.7. Systemd

3.7.1. Journald

  • Journal messages can be forwarded to remote systems, without using a syslog daemon. The systemd-journal-remote and systemd-journal-upload packages provide receiver and sender daemons. Communication is done over HTTPS.
  • The cupsd service now logs output into the journal. See Section 4.5.1, “CUPS Journal Logging” for details.
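Wiring the two daemons together is a matter of enabling the units shipped in those packages; a sketch (the receiver's URL is configured in /etc/systemd/journal-upload.conf on the sending host):

```shell
# On the receiving host: accept journal entries pushed over HTTPS
systemctl enable --now systemd-journal-remote.socket

# On the sending host: start uploading local journal entries
systemctl enable --now systemd-journal-upload.service
```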

3.7.2. systemd 215

systemd in Fedora 21 has been updated to version 215. This release includes substantial enhancements, including improved resource management, service isolation and other security improvements, and network management from systemd-networkd.
Many of these improvements enhance management of services running inside containers, and management of the containers themselves. systemd-nspawn creates securely isolated containers, and tools such as machinectl are available to manage them. systemd-networkd provides network services for the containers, and systemd itself manages resource allocations.
To learn more about enhancements to systemd, read:

3.7.3. Systemd PrivateDevices and PrivateNetwork

Two new security-related options are now being used by systemd for long-running services which do not require access to physical devices or the network:
  • The PrivateDevices setting, when set to "yes", provides a private, minimal /dev that does not include physical devices. This allows long-running services to have limited access, increasing security.
  • The PrivateNetwork setting, when set to "yes", provides a private network with only a loopback interface. This allows long-running services that do not require network access to be cut off from the network.
For details about this change, see the PrivateDevices and PrivateNetwork Wiki page.
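Both settings are ordinary unit file options and can also be applied to an existing service through a drop-in; a sketch (the service name is illustrative):

```
# /etc/systemd/system/mydaemon.service.d/hardening.conf
[Service]
PrivateDevices=yes
PrivateNetwork=yes
```

After adding the drop-in, run systemctl daemon-reload and restart the service for the settings to take effect.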

3.8. Server Configuration Tools

3.8.1. Cockpit Management Console

The Cockpit Management Console is now available in Fedora Server. See Section 2.2.2, “Cockpit Management Console” for more information.

3.9. Monitoring and Management Solutions

3.9.1. Monitorix

The lightweight Monitorix system monitoring tool has been updated to version 3.6, adding improved support for many things, including libvirt, apcupsd, process statistics, and more.
Review the project's changelog at

3.9.2. SystemTap

Version 2.6 of the systemtap data collection suite, included in Fedora 21, has many new features, described in /usr/share/doc/systemtap-runtime/NEWS. Documentation for systemtap can be found at

3.9.3. Zabbix

The Zabbix network and infrastructure monitoring utility has been updated to version 2.2.x in Fedora 21. With each release, the Zabbix team improves and expands this powerful tool.
For a complete overview of changes in Zabbix, visit

3.10. Cluster

3.10.1. Apache Ambari

The Apache Ambari project is aimed at making Hadoop management simpler by developing software for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides an intuitive, easy-to-use Hadoop management web UI backed by its RESTful APIs.
For more information see

3.10.2. Apache Mesos

Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively. Apache Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elastic Search) with APIs for resource management and scheduling across entire data center and cloud environments.
For more information see

3.10.3. Apache Oozie

Apache Oozie is a workflow scheduler to manage Hadoop jobs. It is integrated with the rest of the Hadoop stack and supports several types of Hadoop jobs out of the box (such as Java map-reduce, Streaming map-reduce, Pig, Hive, Sqoop and Distcp) as well as system specific jobs (such as Java programs and shell scripts).
For more information see

3.10.4. Apache Pig

Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which, in turn, enables them to handle very large data sets. At the present time, Pig's infrastructure layer consists of a compiler that produces sequences of Map-Reduce programs, for which large-scale parallel implementations already exist (e.g., the Hadoop sub-project).
For more information see:

3.10.5. Apache Spark

Apache Spark is a fast and general engine for large-scale data processing. It supports developing custom analytic processing applications over large data sets or streaming data. Because it has the capability to cache intermediate results in cluster memory and schedule DAGs of computations, Spark programs can run up to 100x faster than equivalent Hadoop MapReduce jobs. Spark applications are easy to develop, parallel, fast, and resilient to failure, and they can operate on data from in-memory collections, local files, a Hadoop-compatible filesystem, or from a variety of streaming sources. Spark also includes libraries for distributed machine learning and graph algorithms.
For more information see: