How to restart docker daemon

Control Docker with systemd

Many Linux distributions use systemd to start the Docker daemon. This document shows a few examples of how to customize Docker’s settings.

Start the Docker daemon

Start manually

Once Docker is installed, you need to start the Docker daemon. Most Linux distributions use systemctl to start services.
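On a systemd-based distribution, this typically looks like:

```shell
# Start the Docker service and check that it is running
sudo systemctl start docker
sudo systemctl status docker
```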

Start automatically at system boot

If you want Docker to start at boot, see Configure Docker to start on boot.

Custom Docker daemon options

There are a number of ways to configure the daemon flags and environment variables for your Docker daemon. The recommended way is to use the platform-independent daemon.json file, which is located in /etc/docker/ on Linux by default. See Daemon configuration file.

Runtime directory and storage driver

You may want to control the disk space used for Docker images, containers, and volumes by moving it to a separate partition.

To accomplish this, set the following flags in the daemon.json file:
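A minimal sketch of such a daemon.json, assuming the separate partition is mounted at the hypothetical path /mnt/docker-data:

```json
{
  "data-root": "/mnt/docker-data",
  "storage-driver": "overlay2"
}
```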

HTTP/HTTPS proxy

This example overrides the default docker.service file.

If you are behind an HTTP or HTTPS proxy server, for example in corporate settings, you need to add this configuration in the Docker systemd service file.

Note for rootless mode

The location of systemd configuration files is different when running Docker in rootless mode. When running in rootless mode, Docker is started as a user-mode systemd service, and uses files stored in each user's home directory, under ~/.config/systemd/user/.

Create a systemd drop-in directory for the docker service:

Create a file named /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
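For example (proxy.example.com:80 is a placeholder for your proxy address):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
```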

If you are behind an HTTPS proxy server, set the HTTPS_PROXY environment variable:

Multiple environment variables can be set; to set both an HTTP and an HTTPS proxy:

If you have internal Docker registries that you need to contact without proxying, you can specify them via the NO_PROXY environment variable.

The NO_PROXY variable specifies a string that contains comma-separated values for hosts that should be excluded from proxying. These are the options you can specify to exclude hosts:
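Putting it together, a drop-in that sets both proxies and excludes internal hosts might look like this (all host names below are placeholders):

```ini
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
Environment="NO_PROXY=localhost,127.0.0.1,docker-registry.example.com,.corp"
```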

Flush changes and restart Docker

Verify that the configuration has been loaded and matches the changes you made, for example:
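The flush-and-verify steps above can be sketched as:

```shell
# Reload systemd units and restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker

# Confirm that the environment variables were picked up
sudo systemctl show --property=Environment docker
```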

Create a systemd drop-in directory for the docker service:

Create a file named ~/.config/systemd/user/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:

If you are behind an HTTPS proxy server, set the HTTPS_PROXY environment variable:

Multiple environment variables can be set; to set both an HTTP and an HTTPS proxy:

If you have internal Docker registries that you need to contact without proxying, you can specify them via the NO_PROXY environment variable.

The NO_PROXY variable specifies a string that contains comma-separated values for hosts that should be excluded from proxying. These are the options you can specify to exclude hosts:

Flush changes and restart Docker

Verify that the configuration has been loaded and matches the changes you made, for example:

Configure where the Docker daemon listens for connections

Manually create the systemd unit files

Configure and troubleshoot the Docker daemon

After successfully installing and starting Docker, the dockerd daemon runs with its default configuration. This topic shows how to customize the configuration, start the daemon manually, and troubleshoot and debug the daemon if you run into issues.

Start the daemon using operating system utilities

On a typical installation the Docker daemon is started by a system utility, not manually by a user. This makes it easier to automatically start Docker when the machine reboots.

The command to start Docker depends on your operating system. Check the correct page under Install Docker. To configure Docker to start automatically at system boot, see Configure Docker to start on boot.

Start the daemon manually

When you start Docker this way, it runs in the foreground and sends its logs directly to your terminal.

To stop Docker when you have started it manually, issue a Ctrl+C in your terminal.

Configure the Docker daemon

There are two ways to configure the Docker daemon:

You can use both of these options together as long as you don’t specify the same option both as a flag and in the JSON file. If that happens, the Docker daemon won’t start and prints an error message.

To configure the Docker daemon using a JSON file, create a file at /etc/docker/daemon.json on Linux systems, or C:\ProgramData\docker\config\daemon.json on Windows. On macOS, go to the whale in the menu bar > Preferences > Daemon > Advanced.

Here’s what the configuration file looks like:

You can also start the Docker daemon manually and configure it using flags. This can be useful for troubleshooting problems.

Here’s an example of how to manually start the Docker daemon, using the same configurations as above:
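For instance, a sketch of starting the daemon manually with debug logging and a TLS-protected TCP socket (the certificate paths and address are illustrative):

```shell
sudo dockerd --debug \
  --tls=true \
  --tlscert=/var/docker/server.pem \
  --tlskey=/var/docker/serverkey.pem \
  --host tcp://192.168.59.3:2376
```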

You can learn what configuration options are available in the dockerd reference docs, or by running:

Many specific configuration options are discussed throughout the Docker documentation. Some places to go next include:

Docker daemon directory

The Docker daemon persists all data in a single directory. This tracks everything related to Docker, including containers, images, volumes, service definitions, and secrets.

By default this directory is:

You can configure the Docker daemon to use a different directory, using the data-root configuration option.

Since the state of a Docker daemon is kept in this directory, make sure you use a dedicated directory for each daemon. If two daemons share the same directory, for example an NFS share, you are going to experience errors that are difficult to troubleshoot.

Troubleshoot the daemon

You can enable debugging on the daemon to learn about its runtime activity and to aid in troubleshooting. If the daemon is completely non-responsive, you can also force a full stack trace of all threads to be added to the daemon log by sending the SIGUSR1 signal to the Docker daemon.

Troubleshoot conflicts between the daemon.json and startup scripts

If you use a daemon.json file and also pass options to the dockerd command manually or using start-up scripts, and these options conflict, Docker fails to start with an error such as:

If you see an error similar to this one and you are starting the daemon manually with flags, you may need to adjust your flags or the daemon.json to remove the conflict.

Note: If you see this specific error, continue to the next section for a workaround.

If you are starting Docker using your operating system’s init scripts, you may need to override the defaults in these scripts in ways that are specific to the operating system.

Use the hosts key in daemon.json with systemd

There are other times when you might need to configure systemd with Docker, such as configuring a HTTP or HTTPS proxy.
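The hosts key cannot be combined with a -H flag in the systemd unit file; a common workaround (sketched here, with illustrative paths and addresses) is to clear ExecStart in a drop-in and move the listen addresses into daemon.json:

```ini
# /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
```

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2376"]
}
```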

Run sudo systemctl daemon-reload before attempting to start Docker. If Docker starts successfully, it is now listening on the IP address specified in the hosts key of the daemon.json instead of a socket.

Important: Setting hosts in the daemon.json is not supported on Docker Desktop for Windows or Docker Desktop for Mac.

Out Of Memory Exceptions (OOME)

If your containers attempt to use more memory than the system has available, you may experience an Out Of Memory Exception (OOME) and a container, or the Docker daemon, might be killed by the kernel OOM killer. To prevent this from happening, ensure that your application runs on hosts with adequate memory and see Understand the risks of running out of memory.

Read the logs

The daemon logs may help you diagnose problems. The logs may be saved in one of a few locations, depending on the operating system configuration and the logging subsystem used:

macOS (dockerd logs): /Library/Containers/com.docker.docker/Data/log/vm/dockerd.log
macOS (containerd logs): /Library/Containers/com.docker.docker/Data/log/vm/containerd.log
Windows (WSL2) (dockerd logs): AppData\Roaming\Docker\log\vm\dockerd.log
Windows (WSL2) (containerd logs): AppData\Roaming\Docker\log\vm\containerd.log
Windows (Windows containers): Logs are in the Windows Event Log

Enable debugging

There are two ways to enable debugging. The recommended approach is to set the debug key to true in the daemon.json file. This method works for every Docker platform.

If the file is empty, add the following:
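For example:

```json
{
  "debug": true
}
```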

Send a HUP signal to the daemon to cause it to reload its configuration. On Linux hosts, use the following command.
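For example:

```shell
# Ask dockerd to reload its configuration without restarting
sudo kill -SIGHUP $(pidof dockerd)
```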

On Windows hosts, restart Docker.

Force a stack trace to be logged

If the daemon is unresponsive, you can force a full stack trace to be logged by sending a SIGUSR1 signal to the daemon.

Linux:
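For example:

```shell
# Dump stack traces of all goroutines to the daemon log
sudo kill -SIGUSR1 $(pidof dockerd)
```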

Windows Server:

This forces a stack trace to be logged but does not stop the daemon. Daemon logs show the stack trace or the path to a file containing the stack trace if it was logged to a file.

The daemon continues operating after handling the SIGUSR1 signal and dumping the stack traces to the log. The stack traces can be used to determine the state of all goroutines and threads within the daemon.

View stack traces

The Docker daemon log can be viewed by using one of the following methods:

It is not possible to manually generate a stack trace on Docker Desktop for Mac or Docker Desktop for Windows. However, you can click the Docker taskbar icon and choose Troubleshoot to send information to Docker if you run into issues.

Look in the Docker logs for a message like the following:

The locations where Docker saves these stack traces and dumps depends on your operating system and configuration. You can sometimes get useful diagnostic information straight from the stack traces and dumps. Otherwise, you can provide this information to Docker for help diagnosing the problem.

Check whether Docker is running

The operating-system independent way to check whether Docker is running is to ask Docker, using the docker info command.
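For example:

```shell
docker info

# Or a terse, scriptable check of just the server version
docker info --format '{{.ServerVersion}}'
```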

dockerd

daemon

Options with [] may be specified multiple times.

Description

Enabling experimental features

Environment variables

For easy reference, the following list of environment variables are supported by the dockerd command line:

Examples

Daemon socket option

If you’re using an HTTPS encrypted socket, keep in mind that only TLS1.0 and greater are supported. Protocols SSLv3 and under are not supported anymore for security reasons.

The example below runs the daemon listening on the default Unix socket, and on two specific IP addresses on this host:
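A sketch of that invocation (the IP addresses are placeholders):

```shell
sudo dockerd -H unix:///var/run/docker.sock \
  -H tcp://192.168.59.106 \
  -H tcp://10.10.10.2
```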

The Docker client supports connecting to a remote daemon via SSH:
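For example (user and host are placeholders):

```shell
docker -H ssh://me@example.com:22 ps
```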

Bind Docker to another host/port or a Unix socket

-H accepts host and port assignment in the following format:

-H also accepts short form for TCP bindings: host: or host:port or :port

Run Docker in daemon mode:

Download an ubuntu image:
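Sketches of both steps, assuming the daemon is bound to TCP port 5555:

```shell
# Run Docker in daemon mode, listening on all interfaces, port 5555
sudo dockerd -H tcp://0.0.0.0:5555 &

# Point the client at the same port and pull an image
docker -H tcp://:5555 pull ubuntu
```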

Daemon storage-driver

The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. It is also known to cause some serious kernel crashes. However, aufs allows containers to share executable and shared library memory, so it is a useful choice when running thousands of containers with the same program or libraries.

The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location, typically /var/lib/docker/devicemapper, a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Devicemapper options below for how to customize this setup.

jpetazzo/Resizing Docker containers with the Device Mapper plugin article explains how to tune your existing setup without the use of options.

The overlay storage driver can cause excessive inode consumption (especially as the number of images grows). We recommend using the overlay2 storage driver instead.

Both overlay and overlay2 are currently unsupported on btrfs or any Copy on Write filesystem and should only be used over ext4 partitions.

The fuse-overlayfs driver is similar to overlay2 but works in userspace. The fuse-overlayfs driver is expected to be used for Rootless mode.

On Windows, the Docker daemon supports a single image layer storage driver depending on the image platform: windowsfilter for Windows images, and lcow for Linux containers on Windows.

Options per storage driver

Devicemapper options

This is an example of the configuration file for devicemapper on Linux:

dm.thinpooldev

Specifies a custom block storage device to use for the thin pool.

If using a block device for device mapper storage, it is best to use lvm to create and manage the thin-pool volume. This volume is then handed to Docker to exclusively create snapshot volumes needed for images and containers.

Managing the thin-pool outside of Engine makes for the most feature-rich method of having Docker utilize device mapper thin provisioning as the backing storage for Docker containers. The highlights of the lvm-based thin-pool management feature include: automatic or interactive thin-pool resize support, dynamically changing thin-pool features, automatic thinp metadata checking when lvm activates the thin-pool, etc.
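A sketch of setting up such an lvm-managed thin pool, assuming a dedicated block device /dev/xvdf (a hypothetical device name):

```shell
# Create a volume group and a thin pool on the dedicated device
sudo pvcreate /dev/xvdf
sudo vgcreate docker /dev/xvdf
sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG
sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
sudo lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

# The resulting pool is then handed to Docker via the storage option,
# e.g. dm.thinpooldev=/dev/mapper/docker-thinpool
```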

Example:
dm.directlvm_device

As an alternative to providing a thin pool as above, Docker can set up a block device for you.

Example:
dm.thinp_percent

Sets the percentage of the passed-in block device to use for storage.

Example:
dm.thinp_metapercent

Sets the percentage of the passed-in block device to use for metadata storage.

Example:
dm.thinp_autoextend_threshold

Sets the value of the percentage of space used before lvm attempts to autoextend the available space [100 = disabled]

Example:
dm.thinp_autoextend_percent

Sets the percentage value by which to increase the thin pool when lvm attempts to autoextend the available space [100 = disabled].

Example:
dm.basesize

Specifies the size to use when creating the base device, which limits the size of images and containers. The default value is 10G. Note that thin devices are inherently "sparse", so a 10G device which is mostly empty doesn't use 10 GB of space on the pool. However, the larger the device is, the more space the filesystem uses even when the device is mostly empty.

The base device size can be increased at daemon restart, which allows all future images and containers (based on those new images) to be of the new base device size.

Examples

This will increase the base device size to 50G. The Docker daemon throws an error if the existing base device size is larger than 50G. A user can use this option to expand the base device size; however, shrinking is not permitted.

This value affects the system-wide “base” empty filesystem that may already be initialized and inherited by pulled images. Typically, a change to this value requires additional steps to take effect:

dm.loopdatasize

This option configures devicemapper loopback, which should not be used in production.

Specifies the size to use when creating the loopback file for the “data” device which is used for the thin pool. The default size is 100G. The file is sparse, so it will not initially take up this much space.

Example
dm.loopmetadatasize

This option configures devicemapper loopback, which should not be used in production.

Specifies the size to use when creating the loopback file for the “metadata” device which is used for the thin pool. The default size is 2G. The file is sparse, so it will not initially take up this much space.

Example
dm.fs

Specifies the filesystem type to use for the base device. The supported options are ext4 and xfs. The default is xfs.

Example
dm.mkfsarg

Specifies extra mkfs arguments to be used when creating the base device.

Example
dm.mountopt

Specifies extra mount options used when mounting the thin devices.

Example
dm.datadev

(Deprecated, use dm.thinpooldev )

Specifies a custom blockdevice to use for data for the thin pool.

If using a block device for device mapper storage, ideally both datadev and metadatadev should be specified to completely avoid using the loopback device.

Example
dm.metadatadev

(Deprecated, use dm.thinpooldev )

Specifies a custom blockdevice to use for metadata for the thin pool.

For best performance the metadata should be on a different spindle than the data, or even better on an SSD.

If setting up a new metadata pool it is required to be valid. This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
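For example (metadata_dev is a placeholder for your metadata block device):

```shell
# Zero the first 4k of the metadata device to mark it as empty
dd if=/dev/zero of=$metadata_dev bs=4096 count=1
```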

Example
dm.blocksize

Specifies a custom blocksize to use for the thin pool. The default blocksize is 64K.

Example
dm.blkdiscard

Enables or disables the use of blkdiscard when removing devicemapper devices. This is enabled by default (only) if using loopback devices and is required to resparsify the loopback file on image/container removal.

Disabling this on loopback can lead to much faster container removal times, but the space used in the /var/lib/docker directory is not returned to the system for other use when containers are removed.

Examples
dm.override_udev_sync_check

To view the udev sync support of a Docker daemon that is using the devicemapper driver, run:

To allow the docker daemon to start, regardless of udev sync not being supported, set dm.override_udev_sync_check to true:

dm.use_deferred_removal

Enables use of deferred device removal if libdm and the kernel driver support the mechanism.

Deferred device removal means that if a device is busy when it is being removed or deactivated, a deferred removal is scheduled for it, and the device goes away automatically when its last user exits.

For example, when a container exits, its associated thin device is removed. If that device has leaked into some other mount namespace and can’t be removed, the container exit still succeeds and this option causes the system to schedule the device for deferred removal. It does not wait in a loop trying to remove a busy device.

Example
dm.use_deferred_deletion

Enables use of deferred device deletion for thin pool devices. By default, thin pool device deletion is synchronous. Before a container is deleted, the Docker daemon removes any associated devices. If the storage driver cannot remove a device, the container deletion fails and the daemon returns an error.

To avoid this failure, enable both deferred device deletion and deferred device removal on the daemon.

With these two options enabled, if a device is busy when the driver is deleting a container, the driver marks the device as deleted. Later, when the device isn’t in use, the driver deletes it.

In general, it should be safe to enable this option by default. It helps when mount points unintentionally leak across multiple mount namespaces.

dm.min_free_space

Whenever a new thin pool device is created (during docker pull or during container creation), the Engine checks if the minimum free space is available. If sufficient space is unavailable, then device creation fails and any relevant docker operation fails.

To recover from this error, create more free space in the thin pool, either by deleting some images and containers from it or by adding more storage to it.

To add more space to an LVM (logical volume management) thin pool, just add more storage to the volume group containing the thin pool; this should automatically resolve any errors. If your configuration uses loop devices, stop the Engine daemon, grow the size of the loop files, and restart the daemon to resolve the issue.

Example
dm.xfs_nospace_max_retries

Specifies the maximum number of retries XFS should attempt to complete IO when an ENOSPC (no space) error is returned by the underlying storage device.

By default, XFS retries infinitely for IO to finish, which can result in an unkillable process. To change this behavior, set xfs_nospace_max_retries to, for example, 0: XFS will then not retry IO after getting ENOSPC and will shut down the filesystem.

Example
dm.libdm_log_level
libdm Level   Value   --log-level
_LOG_FATAL    2       error
_LOG_ERR      3       error
_LOG_WARN     4       warn
_LOG_NOTICE   5       info
_LOG_INFO     6       info
_LOG_DEBUG    7       debug
Example

ZFS options

zfs.fsname

Sets the ZFS filesystem under which Docker creates its own datasets. By default, Docker picks up the ZFS filesystem where the Docker graph (/var/lib/docker) is located.

Example

Btrfs options

btrfs.min_space

Specifies the minimum size to use when creating the subvolume which is used for containers. If the user uses disk quota for btrfs when creating or running a container with the --storage-opt size option, Docker should ensure the size cannot be smaller than btrfs.min_space.

Example

Overlay2 options

overlay2.override_kernel_check

Overrides the Linux kernel version check allowing overlay2. Support for specifying multiple lower directories needed by overlay2 was added to the Linux kernel in 4.0.0. However, some older kernel versions may be patched to add multiple lower directory support for OverlayFS. This option should only be used after verifying this support exists in the kernel. Applying this option on a kernel without this support will cause failures on mount.

overlay2.size

Sets the default max size of the container. It is supported only when the backing fs is xfs and mounted with the pquota mount option. Under these conditions the user can pass any size less than the backing fs size.

Example

Windowsfilter options

size

Specifies the size to use when creating the sandbox which is used for containers. Defaults to 20G.

Example

LCOW (Linux Containers on Windows) options

lcow.globalmode

Specifies whether the daemon instantiates utility VM instances as required (recommended, and the default if omitted), or uses a single global utility VM (better performance, but with security implications; not recommended for production deployments).

Example
lcow.kirdpath
Example
lcow.kernel
Example
lcow.initrd
Example
lcow.bootparameters

Specifies additional boot parameters for booting utility VMs when in kernel/ initrd mode. Ignored if the utility VM is booting from VHD. These settings are kernel specific.

Example
lcow.vhdx
Example
lcow.timeout

Specifies the timeout for utility VM operations in seconds. Defaults to 300.

Example
lcow.sandboxsize

Specifies the size in GB to use when creating the sandbox which is used for containers. Defaults to 20. Cannot be less than 20.

Example

Docker runtime execution options

The following is an example adding 2 runtimes via the configuration:
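A sketch of that configuration (the runtime name "custom" and its path are illustrative):

```json
{
  "runtimes": {
    "custom": {
      "path": "/usr/local/bin/my-runc-replacement",
      "runtimeArgs": ["--debug"]
    },
    "runc": {
      "path": "runc"
    }
  }
}
```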

This is the same example via the command line:
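A sketch of that command (runtime names and paths are illustrative):

```shell
sudo dockerd --add-runtime runc=runc \
  --add-runtime custom=/usr/local/bin/my-runc-replacement
```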

Defining runtime arguments via the command line is not supported.

Options for the runtime

This example sets the cgroupdriver to systemd :
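For example:

```shell
sudo dockerd --exec-opt native.cgroupdriver=systemd
```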

Setting this option applies to all containers the daemon launches.

Daemon DNS options

To set the DNS server for all Docker containers, use:

To set the DNS search domain for all Docker containers, use:
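Sketches of both flags (the server address and search domain are placeholders):

```shell
sudo dockerd --dns 8.8.8.8
sudo dockerd --dns-search example.com
```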

Allow push of nondistributable artifacts

Some images (e.g., Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included.

This option can be used multiple times.

This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server.
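For example, in daemon.json (myregistry:5000 as the placeholder registry):

```json
{
  "allow-nondistributable-artifacts": ["myregistry:5000"]
}
```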

Warning: Nondistributable artifacts typically have restrictions on how and where they can be distributed and shared. Only use this feature to push artifacts to private registries and ensure that you are in compliance with any terms that cover redistributing nondistributable artifacts.

Insecure registries

Docker considers a private registry either secure or insecure. In the rest of this section, registry is used for private registry, and myregistry:5000 is a placeholder example for a private registry.

The flag can be used multiple times to allow multiple registries to be marked as insecure.
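For example, in daemon.json:

```json
{
  "insecure-registries": ["myregistry:5000"]
}
```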

Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure as of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future.

Legacy Registries

Running a Docker daemon behind an HTTPS_PROXY

When running inside a LAN that uses an HTTPS proxy, the Docker Hub certificates will be replaced by the proxy’s certificates. These certificates need to be added to your Docker host’s configuration:

Default ulimit settings

Be careful setting nproc with the ulimit flag as nproc is designed by Linux to set the maximum number of processes available to a user, not to a container. For details please check the run reference.
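For example, a default open-files limit for all containers (the values are illustrative):

```shell
sudo dockerd --default-ulimit nofile=65536:65536
```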

Node discovery

The currently supported cluster store options are:

discovery.heartbeat: Specifies the heartbeat timer in seconds which is used by the daemon as a keepalive mechanism to make sure the discovery module treats the node as alive in the cluster. If not configured, the default value is 20 seconds.
discovery.ttl: Specifies the TTL (time-to-live) in seconds which is used by the discovery module to time out a node if a valid heartbeat is not received within the configured ttl value. If not configured, the default value is 60 seconds.
kv.cacertfile: Specifies the path to a local file with PEM-encoded CA certificates to trust.
kv.certfile: Specifies the path to a local file with a PEM-encoded certificate. This certificate is used as the client cert for communication with the Key/Value store.
kv.keyfile: Specifies the path to a local file with a PEM-encoded private key. This private key is used as the client key for communication with the Key/Value store.
kv.path: Specifies the path in the Key/Value store. If not configured, the default value is 'docker/nodes'.

Access authorization

The PLUGIN_ID value is either the plugin’s name or a path to its specification file. The plugin’s implementation determines whether you can specify a name or path. Consult with your Docker administrator to get information about the plugins available to you.

Once a plugin is installed, requests made to the daemon through the command line or Docker’s Engine API are allowed or denied by the plugin. If you have multiple plugins installed, each plugin, in order, must allow the request for it to complete.

For information about how to create an authorization plugin, refer to the authorization plugin section.

Daemon user namespace options

The Linux kernel user namespace support provides additional security by enabling a process, and therefore a container, to have a unique range of user and group IDs which are outside the traditional user and group range utilized by the host system. Potentially the most important security improvement is that, by default, container processes running as the root user will have expected administrative privilege (with some restrictions) inside the container but will effectively be mapped to an unprivileged uid on the host.

For details about how to use this feature, as well as limitations, see Isolate containers with a user namespace.

Miscellaneous options

Default cgroup parent

If the cgroup has a leading forward slash ( / ), the cgroup is created under the root cgroup, otherwise the cgroup is created under the daemon cgroup.
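For example:

```shell
# Create all container cgroups under /docker in the root cgroup
sudo dockerd --cgroup-parent /docker
```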

Daemon metrics

Port 9323 is the default port associated with Docker metrics, chosen to avoid collisions with other Prometheus exporters and services.

If you are running a Prometheus server, you can add this address to your scrape configs to have Prometheus collect metrics on Docker. For more information on Prometheus, refer to the Prometheus website.
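For example, in daemon.json:

```json
{
  "metrics-addr": "0.0.0.0:9323",
  "experimental": true
}
```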

Note that this feature is still marked as experimental: metrics and metric names could change while it remains experimental. Please provide feedback on what you would like to see collected in the API.

Node Generic Resources

The current expected use case is to advertise NVIDIA GPUs so that services requesting NVIDIA-GPU=13 can land on a node that has enough GPUs for the task to run.

Example of usage:
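A sketch, in daemon.json (the GPU UUIDs are placeholders):

```json
{
  "node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"]
}
```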

Daemon configuration file

On Linux

This is a full example of the allowed configuration options on Linux:

On Windows

This is a full example of the allowed configuration options on Windows:

Feature options

The optional field features in daemon.json allows users to enable or disable specific daemon features. For example, {"features": {"buildkit": true}} enables buildkit as the default docker image builder.

The list of currently supported feature options:

Configuration reload behavior

The list of currently supported options that can be reconfigured is this:

Run multiple daemons

Running multiple daemons on a single host is considered experimental, and you should be aware of unsolved problems; this setup may not work properly in some cases. Solutions are currently under development and will be delivered in the near future.

This section describes how to run multiple Docker daemons on a single host. To run multiple daemons, you must configure each daemon so that it does not conflict with other daemons on the same host. You can set these options either by providing them as flags, or by using a daemon configuration file.

The following daemon options must be configured for each daemon:

When your daemons use different values for these flags, you can run them on the same host without any problems. It is very important to properly understand the meaning of those options and to use them correctly.

Example script for a separate “bootstrap” instance of the Docker daemon without network:
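A sketch of such a script (the socket, pid, and directory paths are illustrative):

```shell
sudo dockerd \
  -H unix:///var/run/docker-bootstrap.sock \
  -p /var/run/docker-bootstrap.pid \
  --iptables=false \
  --ip-masq=false \
  --bridge=none \
  --data-root=/var/lib/docker-bootstrap \
  --exec-root=/var/run/docker-bootstrap
```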

How to restart docker daemon

update your bashrc

information about docker itself

how to skip typing "sudo" each time (run docker without sudo)

logout and login again
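The step being described is adding your user to the docker group (the logout/login above makes it take effect):

```shell
# Add the current user to the docker group
sudo usermod -aG docker $USER
```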

it can also be helpful to check that the entrypoint uses UNIX (LF) newlines instead of Windows CR LF

proxy for daemon

proxy for docker client to pass proxy to containers

check login without access to config

restart Docker service

search image into registry, find image, catalog search

inspect image in repository inspect layers

export layers tag layers

export layers to filesystem

pull image from repository

Images can be found at https://hub.docker.com/. Example of a command: docker pull mysql

push image to local repo

copy images between registries

show all local images

no: Do not automatically restart the container. (the default)
on-failure: Restart the container if it exits due to an error, which manifests as a non-zero exit code.
always: Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted. (See the second bullet listed in restart policy details.)
unless-stopped: Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts.
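For example:

```shell
# Keep a redis container running across daemon restarts
# unless it is explicitly stopped
docker run -d --restart unless-stopped redis
```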

map volume ( map folder )

map multiply ports to current host

run container in detached ( background ) mode, without console attachment to running process

run image with specific name

run image with specific user ( when you have issue with rights for mounting external folders )

run container with empty entrypoint, without entrypoint

start stopped previously container

connecting containers via host port, host connection

connecting containers directly via link

connecting containers via network

share network for docker-compose

connecting containers via host network: shares the host network stack and has access to /etc/hosts for network communication (host as network, share host network, share localhost network)

assign static hostname to container (map hostname)

mount folder, map folder, mount directory, map directory multiple directories

inspect volume, check volume, read data from volume, inspect data locally

list of all volumes

show all containers that are running

show all containers ( running, stopped, paused )

show container with filter, show container with format

join to executed container, connect to container, rsh, sh on container

with detached sequence

with translation of all signals ( detaching: ctrl-p & ctrl-q )

docker log of container, console output

show processes from container

run program inside container and attach to process

show difference with original image

show all layers command+size, reverse engineering of container, print dockerfile

docker running image information

or for file /etc/docker/daemon.json

docker save changed container commit changes fix container changes

make changes and keep the container running; select another terminal window

Be aware: if the entrypoint was skipped, the committed image will have no entrypoint either.

container new name, rename container, container new tag

Load/Import/Read from file to image

wait until the container stops

stop executing container

stop restarted container, compose stop, stop autostart, stop restarting

pause/unpause executing container

kill executing container

leave executing container

Remove and Clean, docker cleanup

remove all containers

remove volumes ( unused )

delete docker, remove docker, uninstall docker

docker events real time

disk usage information

remove unused data, remove stopped containers

possible issue with ‘pull’

need to add proxy into Dockerfile

build from file

build with parameters, build with proxy settings

build with parameters inside dockerfile

build useful commands

push your container

read labels from container, read container labels, LABEL commands from container


Options with [] may be specified multiple times.

Enabling experimental features

For easy reference, the following environment variables are supported by the dockerd command line:

Daemon socket option

If you’re using an HTTPS encrypted socket, keep in mind that only TLS1.0 and greater are supported. Protocols SSLv3 and under are not supported anymore for security reasons.

The example below runs the daemon listening on the default Unix socket, and on two specific IP addresses on this host:
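A sketch of that invocation, wrapped in a function (the two IP addresses are illustrative):

```shell
# Listen on the default Unix socket plus two specific host addresses.
run_dockerd_multi() {
  dockerd -H unix:///var/run/docker.sock \
          -H tcp://192.168.59.106 \
          -H tcp://10.10.10.2
}
```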

The Docker client supports connecting to a remote daemon via SSH:

Bind Docker to another host/port or a Unix socket

-H accepts host and port assignment in the following format:

-H also accepts short form for TCP bindings: host: or host:port or :port

Run Docker in daemon mode:

Download an ubuntu image:

The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. It is also known to cause some serious kernel crashes. However, aufs allows containers to share executable and shared library memory, so it is a useful choice when running thousands of containers with the same program or libraries.

The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location (typically /var/lib/docker/devicemapper) a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Devicemapper options below for how to customize this setup.

jpetazzo/Resizing Docker containers with the Device Mapper plugin article explains how to tune your existing setup without the use of options.

The overlay storage driver can cause excessive inode consumption (especially as the number of images grows). We recommend using the overlay2 storage driver instead.

Both overlay and overlay2 are currently unsupported on btrfs or any Copy on Write filesystem and should only be used over ext4 partitions.

The fuse-overlayfs driver is similar to overlay2 but works in userspace. The fuse-overlayfs driver is expected to be used for Rootless mode.

On Windows, the Docker daemon only supports the windowsfilter storage driver.

Options per storage driver

This is an example of the configuration file for devicemapper on Linux:
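A sketch of such a daemon.json, assuming an lvm thin pool named docker-thinpool:

```json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
  ]
}
```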

Specifies a custom block storage device to use for the thin pool.

If using a block device for device mapper storage, it is best to use lvm to create and manage the thin-pool volume. This volume is then handed to Docker to exclusively create snapshot volumes needed for images and containers.

Managing the thin-pool outside of Engine makes for the most feature-rich method of having Docker utilize device mapper thin provisioning as the backing storage for Docker containers. The highlights of the lvm-based thin-pool management feature include: automatic or interactive thin-pool resize support, dynamically changing thin-pool features, automatic thinp metadata checking when lvm activates the thin-pool, etc.

As an alternative to providing a thin pool as above, Docker can set up a block device for you.

Sets the percentage of passed in block device to use for storage.

Sets the percentage of the passed in block device to use for metadata storage.

Sets the percentage of space used before lvm attempts to autoextend the available space [100 = disabled].

Sets the percentage by which to increase the thin pool when lvm attempts to autoextend the available space [100 = disabled].

Specifies the size to use when creating the base device, which limits the size of images and containers. The default value is 10G. Note, thin devices are inherently "sparse", so a 10G device which is mostly empty doesn't use 10 GB of space on the pool. However, the filesystem will use more space for the empty case the larger the device is.

The base device size can be increased at daemon restart which will allow all future images and containers (based on those new images) to be of the new base device size.

This will increase the base device size to 50G. The Docker daemon will throw an error if the existing base device size is larger than 50G. A user can use this option to expand the base device size; however, shrinking is not permitted.

This value affects the system-wide «base» empty filesystem that may already be initialized and inherited by pulled images. Typically, a change to this value requires additional steps to take effect:

This option configures devicemapper loopback, which should not be used in production.

Specifies the size to use when creating the loopback file for the "data" device which is used for the thin pool. The default size is 100G. The file is sparse, so it will not initially take up this much space.

This option configures devicemapper loopback, which should not be used in production.

Specifies the size to use when creating the loopback file for the "metadata" device which is used for the thin pool. The default size is 2G. The file is sparse, so it will not initially take up this much space.

Specifies the filesystem type to use for the base device. The supported options are "ext4" and "xfs". The default is "xfs".

Specifies extra mkfs arguments to be used when creating the base device.

Specifies extra mount options used when mounting the thin devices.

(Deprecated, use dm.thinpooldev )

Specifies a custom blockdevice to use for data for the thin pool.

If using a block device for device mapper storage, ideally both datadev and metadatadev should be specified to completely avoid using the loopback device.

(Deprecated, use dm.thinpooldev )

Specifies a custom blockdevice to use for metadata for the thin pool.

For best performance the metadata should be on a different spindle than the data, or even better on an SSD.

If setting up a new metadata pool it is required to be valid. This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
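For example, wrapped in a function here; the device path you pass in is yours to verify, since the write is destructive:

```shell
# Zero the first 4 KiB of a metadata device to mark it as empty.
# DESTRUCTIVE: double-check the device path before calling.
zero_metadata() {
  dd if=/dev/zero of="$1" bs=4096 count=1
}
```

Usage would look like `zero_metadata /dev/vg-docker/metadata` (a placeholder path).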

Specifies a custom blocksize to use for the thin pool. The default blocksize is 64K.

Enables or disables the use of blkdiscard when removing devicemapper devices. This is enabled by default (only) if using loopback devices and is required to resparsify the loopback file on image/container removal.

Disabling this on loopback can lead to much faster container removal times, but the space used in the /var/lib/docker directory will not be returned to the system for other use when containers are removed.

To view the udev sync support of a Docker daemon that is using the devicemapper driver, run:

To allow the docker daemon to start, regardless of udev sync not being supported, set dm.override_udev_sync_check to true:

Enables use of deferred device removal if libdm and the kernel driver support the mechanism.

Deferred device removal means that if a device is busy while devices are being removed/deactivated, a deferred removal is scheduled for that device, and the device goes away automatically when its last user exits.

For example, when a container exits, its associated thin device is removed. If that device has leaked into some other mount namespace and can’t be removed, the container exit still succeeds and this option causes the system to schedule the device for deferred removal. It does not wait in a loop trying to remove a busy device.

Enables use of deferred device deletion for thin pool devices. By default, thin pool device deletion is synchronous. Before a container is deleted, the Docker daemon removes any associated devices. If the storage driver cannot remove a device, the container deletion fails and the daemon returns an error.

To avoid this failure, enable both deferred device deletion and deferred device removal on the daemon.

With these two options enabled, if a device is busy when the driver is deleting a container, the driver marks the device as deleted. Later, when the device isn’t in use, the driver deletes it.

In general it should be safe to enable this option by default. It will help when unintentional leaking of mount points happens across multiple mount namespaces.

Whenever a new thin pool device is created (during docker pull or during container creation), the Engine checks if the minimum free space is available. If sufficient space is unavailable, then device creation fails and any relevant docker operation fails.

To recover from this error, you must create more free space in the thin pool. You can free space by deleting some images and containers from the thin pool, or you can add more storage to the thin pool.

To add more space to an LVM (logical volume management) thin pool, just add more storage to the volume group containing the thin pool; this should automatically resolve any errors. If your configuration uses loop devices, stop the Engine daemon, grow the size of the loop files, and restart the daemon to resolve the issue.

Specifies the maximum number of retries XFS should attempt to complete IO when ENOSPC (no space) error is returned by underlying storage device.

By default, XFS retries infinitely for the IO to finish, which can result in an unkillable process. To change this behavior, set xfs_nospace_max_retries to, say, 0: XFS will then not retry IO after getting ENOSPC and will shut down the filesystem.

libdm level | value | --log-level
_LOG_FATAL | 2 | error
_LOG_ERR | 3 | error
_LOG_WARN | 4 | warn
_LOG_NOTICE | 5 | info
_LOG_INFO | 6 | info
_LOG_DEBUG | 7 | debug

Set zfs filesystem under which docker will create its own datasets. By default docker will pick up the zfs filesystem where docker graph ( /var/lib/docker ) is located.

Specifies the minimum size to use when creating the subvolume which is used for containers. If the user uses btrfs disk quota when creating or running a container with the --storage-opt size option, docker should ensure the size cannot be smaller than btrfs.min_space.

Overrides the Linux kernel version check allowing overlay2. Support for specifying multiple lower directories needed by overlay2 was added to the Linux kernel in 4.0.0. However, some older kernel versions may be patched to add multiple lower directory support for OverlayFS. This option should only be used after verifying this support exists in the kernel. Applying this option on a kernel without this support will cause failures on mount.

Sets the default max size of the container. It is supported only when the backing fs is xfs and mounted with the pquota mount option. Under these conditions the user can pass any size less than the backing fs size.

Specifies the size to use when creating the sandbox which is used for containers. Defaults to 20G.

Docker runtime execution options

The following is an example adding 2 runtimes via the configuration:
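A sketch of the two-runtime configuration in daemon.json (the name and path of the custom runtime are placeholders):

```json
{
  "runtimes": {
    "custom": {
      "path": "/usr/local/bin/my-runc-replacement",
      "runtimeArgs": ["--debug"]
    },
    "runc": {
      "path": "runc"
    }
  }
}
```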

This is the same example via the command line:
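Wrapped in a function so the sketch is safe to paste (the custom runtime path is a placeholder):

```shell
# Register the same two runtimes via dockerd flags. Note there is no
# runtimeArgs equivalent here; arguments cannot be passed on the CLI.
run_dockerd_with_runtimes() {
  dockerd --add-runtime runc=runc \
          --add-runtime custom=/usr/local/bin/my-runc-replacement
}
```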

Defining runtime arguments via the command line is not supported.

Options for the runtime

This example sets the cgroupdriver to systemd :
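In daemon.json this is presumably a fragment like:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```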

Setting this option applies to all containers the daemon launches.

Daemon DNS options

To set the DNS server for all Docker containers, use:

To set the DNS search domain for all Docker containers, use:
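Both options can be set in daemon.json (the addresses and search domain are placeholders):

```json
{
  "dns": ["8.8.8.8", "8.8.4.4"],
  "dns-search": ["example.com"]
}
```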

Allow push of nondistributable artifacts

Some images (e.g., Windows base images) contain artifacts whose distribution is restricted by license. When these images are pushed to a registry, restricted artifacts are not included.

This option can be used multiple times.

This option is useful when pushing images containing nondistributable artifacts to a registry on an air-gapped network so hosts on that network can pull the images without connecting to another server.

Warning: Nondistributable artifacts typically have restrictions on how and where they can be distributed and shared. Only use this feature to push artifacts to private registries and ensure that you are in compliance with any terms that cover redistributing nondistributable artifacts.

Docker considers a private registry either secure or insecure. In the rest of this section, registry is used for private registry, and myregistry:5000 is a placeholder example for a private registry.

The flag can be used multiple times to allow multiple registries to be marked as insecure.

Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure as of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future.

Running a Docker daemon behind an HTTPS_PROXY

When running inside a LAN that uses an HTTPS proxy, the Docker Hub certificates will be replaced by the proxy’s certificates. These certificates need to be added to your Docker host’s configuration:

Default ulimit settings

Be careful setting nproc with the ulimit flag as nproc is designed by Linux to set the maximum number of processes available to a user, not to a container. For details please check the run reference.

The PLUGIN_ID value is either the plugin’s name or a path to its specification file. The plugin’s implementation determines whether you can specify a name or path. Consult with your Docker administrator to get information about the plugins available to you.

Once a plugin is installed, requests made to the daemon through the command line or Docker’s Engine API are allowed or denied by the plugin. If you have multiple plugins installed, each plugin, in order, must allow the request for it to complete.

For information about how to create an authorization plugin, refer to the authorization plugin section.

Daemon user namespace options

The Linux kernel user namespace support provides additional security by enabling a process, and therefore a container, to have a unique range of user and group IDs which are outside the traditional user and group range utilized by the host system. Potentially the most important security improvement is that, by default, container processes running as the root user will have expected administrative privilege (with some restrictions) inside the container but will effectively be mapped to an unprivileged uid on the host.

For details about how to use this feature, as well as limitations, see Isolate containers with a user namespace.

Default cgroup parent

If the cgroup has a leading forward slash ( / ), the cgroup is created under the root cgroup, otherwise the cgroup is created under the daemon cgroup.

Port 9323 is the default port associated with Docker metrics to avoid collisions with other prometheus exporters and services.

If you are running a prometheus server you can add this address to your scrape configs to have prometheus collect metrics on Docker. For more information on prometheus refer to the prometheus website.

Please note that this feature is still marked as experimental: metrics and metric names could change while the feature remains experimental. Please provide feedback on what you would like to see collected in the API.

Node Generic Resources

The current expected use case is to advertise NVIDIA GPUs so that services requesting NVIDIA-GPU=13 can land on a node that has enough GPUs for the task to run.

Example of usage:
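In daemon.json (the UUID values are placeholders):

```json
{
  "node-generic-resources": [
    "NVIDIA-GPU=UUID1",
    "NVIDIA-GPU=UUID2"
  ]
}
```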

Daemon configuration file

This is a full example of the allowed configuration options on Linux:

This is a full example of the allowed configuration options on Windows:

The default-runtime option is unset by default, in which case dockerd will auto-detect the runtime. This detection is currently based on whether the containerd flag is set.

The optional field features in daemon.json allows users to enable or disable specific daemon features. For example, {"features": {"buildkit": true}} enables buildkit as the default docker image builder.

The list of currently supported feature options:

Configuration reload behavior

The list of currently supported options that can be reconfigured is this:

Run multiple daemons

Running multiple daemons on a single host is considered "experimental". The user should be aware of unsolved problems. This solution may not work properly in some cases. Solutions are currently under development and will be delivered in the near future.

This section describes how to run multiple Docker daemons on a single host. To run multiple daemons, you must configure each daemon so that it does not conflict with other daemons on the same host. You can set these options either by providing them as flags, or by using a daemon configuration file.

The following daemon options must be configured for each daemon:

When your daemons use different values for these flags, you can run them on the same host without any problems. It is very important to properly understand the meaning of those options and to use them correctly.

Example script for a separate “bootstrap” instance of the Docker daemon without network:
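A sketch of such a script, wrapped in a function (socket name, pid file, and directories are illustrative):

```shell
# Second "bootstrap" daemon: own socket, pid file, and data/exec roots,
# with all networking features disabled.
start_bootstrap_daemon() {
  dockerd \
    -H unix:///var/run/docker-bootstrap.sock \
    -p /var/run/docker-bootstrap.pid \
    --iptables=false \
    --ip-masq=false \
    --bridge=none \
    --data-root=/var/lib/docker-bootstrap \
    --exec-root=/var/run/docker-bootstrap
}
```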
