How to start docker daemon

Control and configure Docker with systemd

Many Linux distributions use systemd to start the Docker daemon. This document shows a few examples of how to customize Docker’s settings.

Starting the Docker daemon

Once Docker is installed, you will need to start the Docker daemon.
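
On a systemd-based distribution this is typically done with systemctl, for example:

```
sudo systemctl start docker
```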

If you want Docker to start at boot, you should also:
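
For example, enable the service on a systemd host:

```
sudo systemctl enable docker
```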

Custom Docker daemon options

There are a number of ways to configure the daemon flags and environment variables for your Docker daemon.

However, if you had previously used a package that provided an EnvironmentFile (often pointing to /etc/sysconfig/docker), then for backwards compatibility you can drop a file into the /etc/systemd/system/docker.service.d directory containing the following:
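
A sketch of such a drop-in (the EnvironmentFile path and the $OPTIONS variable are illustrative and depend on the original package):

```
[Service]
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/dockerd $OPTIONS
```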

To check if the docker.service uses an EnvironmentFile :
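
For example:

```
systemctl show docker | grep EnvironmentFile
```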

Alternatively, find out where the service file is located:
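
One way to do this:

```
systemctl show --property=FragmentPath docker
```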

You can customize the Docker daemon options using override files as explained in the HTTP Proxy example below. The files located in /usr/lib/systemd/system or /lib/systemd/system contain the default options and should not be edited.

Runtime directory and storage driver

You may want to control the disk space used for Docker images, containers and volumes by moving it to a separate partition.

In this example, we’ll assume that your docker.service file looks something like:
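
A minimal sketch of what the [Service] section might contain (the binary path and the $OPTIONS variable are illustrative):

```
[Service]
ExecStart=/usr/bin/dockerd -H fd:// $OPTIONS
```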

This will allow us to add extra flags via a drop-in file (mentioned above) by placing a file containing the following in the /etc/systemd/system/docker.service.d directory:
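
For example, a drop-in along these lines (the file name, the /mnt/docker-data path and the overlay2 driver are assumptions for illustration; --data-root was called --graph in older releases):

```
# /etc/systemd/system/docker.service.d/docker.conf
[Service]
Environment="OPTIONS=--data-root /mnt/docker-data --storage-driver overlay2"
```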

You can also set other environment variables in this file, for example, the HTTP_PROXY environment variables described below.

To modify the ExecStart configuration, specify an empty configuration followed by a new configuration as follows:
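
A sketch of such an override (flags after dockerd are illustrative):

```
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --data-root /mnt/docker-data
```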

If you fail to specify an empty configuration, Docker reports an error such as:
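
The message is typically along these lines (exact wording may differ between systemd versions):

```
docker.service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.
```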

This example overrides the default docker.service file.

If you are behind an HTTP proxy server, for example in corporate settings, you need to add this configuration in the Docker systemd service file.

First, create a systemd drop-in directory for the docker service:
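
For example:

```
sudo mkdir -p /etc/systemd/system/docker.service.d
```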

Now create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
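
For example (proxy.example.com:80 is a placeholder for your proxy):

```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
```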

If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable:
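
For example (the registry host name is a placeholder):

```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.example.com"
```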

Verify that the configuration has been loaded:
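
After reloading systemd and restarting Docker, the environment can be inspected, for example:

```
sudo systemctl daemon-reload
sudo systemctl restart docker
systemctl show --property=Environment docker
```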

Manually creating the systemd unit files


Captain of a cargo ship, or How to start using Docker in your projects


Docker is an open-source tool that automates the deployment of an application inside a software container. We have translated this beginner's guide to working with Docker for you.

The simplest way to understand the idea behind Docker is to compare it to an ordinary shipping container. Once upon a time, shipping companies faced the following problems:

Once containers appeared, it became possible to put bricks on top of glass and to store chemicals next to food. Cargo of various sizes can be placed into a standard container that is loaded and unloaded by the same equipment.

But let's get back to containers in software development.


You sometimes have to share the code you have written with other people, for example other developers. On top of that, you need to ship all of its dependencies as well: libraries, a web server, databases, and so on. You may run into a situation where the application works on your computer but refuses to start on the test server or on a developer's or tester's machine.

How is this different from virtualization?

Traditionally, virtual machines have been used to avoid such unexpected behavior. Their main problem is that an "extra OS" on top of the host OS adds gigabytes of disk space to the project. Most of the time your server will host several virtual machines, which takes up even more space. Another significant drawback is slow startup.

Docker eliminates these problems by sharing the kernel across all containers, which run as separate processes of the host OS.

Keep in mind that Docker is neither the first nor the only container-based platform. However, it is currently the largest and most powerful tool on the market.


Why do we need Docker?

The list of advantages is as follows:

Supported platforms

Linux is Docker's native platform, since Docker is built on features provided by the Linux kernel. Nevertheless, you can also run it on macOS or Windows. The only difference is that on those systems Docker is encapsulated in a small virtual machine. By now, Docker for these operating systems has reached a significant level of usability and feels very much like a native application.

Moreover, there are many additional tools, such as Kitematic or Docker Machine, that help you install and manage Docker on platforms other than Linux.

Installation

You can find installation instructions on the official website. If you run Docker on Linux, you should execute all of the following commands as root, or add your user to the docker group and log in again:
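
For example (log out and back in afterwards for the group change to take effect):

```
sudo usermod -aG docker $(whoami)
```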

Terminology


Example 1: Hello world

It is time to run your first container:
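
A classic first run might look like this (the image and the message are illustrative):

```
docker run ubuntu /bin/echo 'Hello world'
```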

Let's try to create an interactive shell inside a Docker container:
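
For example:

```
docker run -i -t --rm ubuntu /bin/bash
```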

If you want the container to keep running after the session ends, turn it into a daemon process:
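
A sketch of a long-running detached container (the name and command are illustrative):

```
docker run --name daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
```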

Let's take a look at which containers we have at the moment:
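
```
docker ps -a
```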

The ps command shows that we have exactly two containers:

Let's check the logs and see what the daemonized container is doing right now:
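
Assuming the container was named daemon as in the sketch above:

```
docker logs -f daemon
```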

Now let's stop the daemonized container:
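
```
docker stop daemon
```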

Let's make sure the container has stopped:

The container has stopped. We can start it again:
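
```
docker start daemon
```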

Let's make sure it is running:

Now let's stop it again and remove all containers manually:

To remove all containers at once, we can use the following command:
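
One common way (this forcibly removes every container on the host, so use it with care):

```
docker rm -f $(docker ps -aq)
```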

Example 2: Nginx

Starting with this example, you will need several additional files, which you can find in the GitHub repository. You can download them by following this link.

It is time to create and run a more useful container, such as Nginx.

Change into the examples/nginx directory:
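
A sketch of running Nginx with the current directory mounted as the web root and port 80 published (the container name and paths are illustrative):

```
cd examples/nginx
docker run -d --name test-nginx -p 80:80 -v $(pwd):/usr/share/nginx/html:ro nginx:latest
```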

Now you can open localhost in your browser.

Alternatively, you can modify /example/nginx/index.html (which is mounted as a volume to /usr/share/nginx/html inside the container) and refresh the page.

Let's get information about the test-nginx container:
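
```
docker inspect test-nginx
```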

This command displays extensive system information about the Docker installation. This information includes the kernel version, the number of containers and images, exposed ports, mounted volumes, and so on.

Example 3: writing a Dockerfile

To create a Docker image, you need to write a Dockerfile. It is a plain text file with instructions and arguments. Here is a description of the instructions we are going to use in the next example:

You can consult the Dockerfile reference to learn more.
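
As a sketch, a minimal Dockerfile for the curl example discussed next might look like this (the base image, the URL default and the /data output path are assumptions; /data is expected to be a mounted volume):

```
FROM ubuntu
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
ENV URL=https://google.com
# Shell form so the URL environment variable is expanded at run time
CMD curl -L $URL -o /data/result
```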

The Dockerfile is ready. Now it is time to build the image itself.

Go to examples/curl and run the following command to build the image:
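
For example (the tag is illustrative):

```
docker build . -t test-curl
```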

Now we have a new image, and we can list the existing ones:
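
```
docker images
```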

We can create and run a container from the image. Let's try it with the default parameters:
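
Assuming the sketch Dockerfile above, mounting a local vol directory for the output (the paths are illustrative):

```
docker run --rm -v $(pwd)/vol:/data test-curl
```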

To see the results saved to the file, run:

Let's try it with facebook.com:

And check the results again:

Example 4: connecting containers, Python + Redis

Docker Compose is the only correct way to connect containers with each other. In this example we will connect Python and Redis containers:
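
A minimal docker-compose.yml sketch for such a pair of services (the image names, port mapping and file layout are assumptions; the Python app is assumed to be built from a local Dockerfile and to listen on port 5000):

```
version: "3"
services:
  app:
    build: .
    ports:
      - "80:5000"
    depends_on:
      - redis
  redis:
    image: redis
```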

Go to examples/compose and run the following command:
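
```
docker-compose up
```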

This example increments a page-view counter in Redis. Open localhost in your browser and check it.

Now you can play with various images from Docker Hub or, if you like, build your own images following the best practices described below. One thing worth adding about using docker-compose: always give explicit names to your volumes in docker-compose.yml (if the image has volumes). This simple rule will save you trouble later when you inspect your volumes.

To list the volumes:
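
```
docker volume ls
```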

Without an explicit volume name, you will see a UUID there instead. Here is an example from a local machine:

Docker style guides

Docker has certain limitations and requirements that depend on the architecture of your system (the applications you package into containers). You can ignore these requirements or find workarounds, but in that case you will not get all the benefits of using Docker. It is strongly recommended to follow these tips:

How to start docker daemon

Options with [] may be specified multiple times.

Daemon socket option

Note: If you’re using an HTTPS encrypted socket, keep in mind that only TLS1.0 and greater are supported. Protocols SSLv3 and under are not supported anymore for security reasons.

Daemon storage-driver option

The aufs driver is the oldest, but is based on a Linux kernel patch-set that is unlikely to be merged into the main kernel. It is also known to cause some serious kernel crashes. However, aufs is the only storage driver that allows containers to share executable and shared library memory, so it is a useful choice when running thousands of containers with the same program or libraries.

The devicemapper driver uses thin provisioning and Copy on Write (CoW) snapshots. For each devicemapper graph location – typically /var/lib/docker/devicemapper – a thin pool is created based on two block devices, one for data and one for metadata. By default, these block devices are created automatically by using loopback mounts of automatically created sparse files. Refer to Storage driver options below for how to customize this setup.

jpetazzo/Resizing Docker containers with the Device Mapper plugin article explains how to tune your existing setup without the use of options.

Note: As promising as overlay is, the feature is still quite young and should not be used in production. Most notably, using overlay can cause excessive inode consumption (especially as the number of images grows), as well as being incompatible with the use of RPMs.

Note: It is currently unsupported on btrfs or any Copy on Write filesystem and should only be used over ext4 partitions.

Storage driver options

Specifies a custom block storage device to use for the thin pool.

If using a block device for device mapper storage, it is best to use lvm to create and manage the thin-pool volume. This volume is then handed to Docker to exclusively create snapshot volumes needed for images and containers.

Managing the thin-pool outside of Engine makes for the most feature-rich method of having Docker utilize device mapper thin provisioning as the backing storage for Docker containers. The highlights of the lvm-based thin-pool management feature include: automatic or interactive thin-pool resize support, dynamically changing thin-pool features, automatic thinp metadata checking when lvm activates the thin-pool, etc.

Specifies the size to use when creating the base device, which limits the size of images and containers. The default value is 10G. Note, thin devices are inherently "sparse", so a 10G device which is mostly empty doesn't use 10 GB of space on the pool. However, the filesystem will use more space for the empty case the larger the device is.

The base device size can be increased at daemon restart which will allow all future images and containers (based on those new images) to be of the new base device size.
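
For example (assuming the daemon is restarted with this storage option):

```
dockerd --storage-opt dm.basesize=50G
```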

This will increase the base device size to 50G. The Docker daemon will throw an error if the existing base device size is larger than 50G. A user can use this option to expand the base device size; however, shrinking is not permitted.

This value affects the system-wide "base" empty filesystem that may already be initialized and inherited by pulled images. Typically, a change to this value requires additional steps to take effect:

Note : This option configures devicemapper loopback, which should not be used in production.

Specifies the size to use when creating the loopback file for the "data" device which is used for the thin pool. The default size is 100G. The file is sparse, so it will not initially take up this much space.

Note : This option configures devicemapper loopback, which should not be used in production.

Specifies the size to use when creating the loopback file for the "metadata" device which is used for the thin pool. The default size is 2G. The file is sparse, so it will not initially take up this much space.

Specifies the filesystem type to use for the base device. The supported options are "ext4" and "xfs". The default is "xfs".

Specifies extra mkfs arguments to be used when creating the base device.

Specifies extra mount options used when mounting the thin devices.

(Deprecated, use dm.thinpooldev )

Specifies a custom blockdevice to use for data for the thin pool.

If using a block device for device mapper storage, ideally both datadev and metadatadev should be specified to completely avoid using the loopback device.

(Deprecated, use dm.thinpooldev )

Specifies a custom blockdevice to use for metadata for the thin pool.

For best performance the metadata should be on a different spindle than the data, or even better on an SSD.

If setting up a new metadata pool it is required to be valid. This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
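
For example, where $metadata_dev is a placeholder for your metadata block device:

```
dd if=/dev/zero of=$metadata_dev bs=4096 count=1
```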

Specifies a custom blocksize to use for the thin pool. The default blocksize is 64K.

Enables or disables the use of blkdiscard when removing devicemapper devices. This is enabled by default (only) if using loopback devices and is required to resparsify the loopback file on image/container removal.

Disabling this on loopback can lead to much faster container removal times, but will make the space used in /var/lib/docker directory not be returned to the system for other use when containers are removed.

To view the udev sync support of a Docker daemon that is using the devicemapper driver, run:
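
For example (the field appears in the storage driver details of the docker info output):

```
docker info | grep -i "udev sync"
```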

To allow the docker daemon to start, regardless of udev sync not being supported, set dm.override_udev_sync_check to true:
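
For example:

```
dockerd --storage-opt dm.override_udev_sync_check=true
```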

Enables use of deferred device removal if libdm and the kernel driver support the mechanism.

Deferred device removal means that if a device is busy when devices are being removed or deactivated, a deferred removal is scheduled for that device. The device is removed automatically once its last user exits.

For example, when a container exits, its associated thin device is removed. If that device has leaked into some other mount namespace and can’t be removed, the container exit still succeeds and this option causes the system to schedule the device for deferred removal. It does not wait in a loop trying to remove a busy device.

Enables use of deferred device deletion for thin pool devices. By default, thin pool device deletion is synchronous. Before a container is deleted, the Docker daemon removes any associated devices. If the storage driver cannot remove a device, the container deletion fails and the daemon returns an error.

To avoid this failure, enable both deferred device deletion and deferred device removal on the daemon.

With these two options enabled, if a device is busy when the driver is deleting a container, the driver marks the device as deleted. Later, when the device isn’t in use, the driver deletes it.

In general it should be safe to enable this option by default. It helps when mount points are unintentionally leaked across multiple mount namespaces.

Whenever a new thin pool device is created (during docker pull or during container creation), the Engine checks if the minimum free space is available. If sufficient space is unavailable, then device creation fails and any relevant docker operation fails.

To recover from this error, you must create more free space in the thin pool. You can free space by deleting some images and containers from the thin pool, or you can add more storage to the thin pool.

To add more space to an LVM (logical volume management) thin pool, just add more storage to the volume group containing the thin pool; this should automatically resolve any errors. If your configuration uses loop devices, then stop the Engine daemon, grow the size of the loop files, and restart the daemon to resolve the issue.

Currently supported options of zfs :

Sets the zfs filesystem under which Docker will create its own datasets. By default, Docker picks up the zfs filesystem where the Docker graph ( /var/lib/docker ) is located.

Docker runtime execution options

Options for the runtime

This example sets the cgroupdriver to systemd :
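
```
dockerd --exec-opt native.cgroupdriver=systemd
```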

Setting this option applies to all containers the daemon launches.

Daemon DNS options

Docker considers a private registry either secure or insecure. In the rest of this section, registry is used for private registry, and myregistry:5000 is a placeholder example for a private registry.

The flag can be used multiple times to allow multiple registries to be marked as insecure.
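
For example (myregistry:5000 is the placeholder used above):

```
dockerd --insecure-registry myregistry:5000
```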

Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure as of Docker 1.3.2. It is not recommended to rely on this, as it may change in the future.

Running a Docker daemon behind an HTTPS_PROXY

When running inside a LAN that uses an HTTPS proxy, the Docker Hub certificates will be replaced by the proxy's certificates. These certificates need to be added to your Docker host's configuration:

Be careful setting nproc with the ulimit flag as nproc is designed by Linux to set the maximum number of processes available to a user, not to a container. For details please check the run reference.

The currently supported cluster store options are:

Specifies the heartbeat timer in seconds, which is used by the daemon as a keepalive mechanism to make sure the discovery module treats the node as alive in the cluster. If not configured, the default value is 20 seconds.

Specifies the ttl (time-to-live) in seconds which is used by the discovery module to timeout a node if a valid heartbeat is not received within the configured ttl value. If not configured, the default value is 60 seconds.

Specifies the path to a local file with PEM encoded CA certificates to trust

Specifies the path to a local file with a PEM encoded certificate. This certificate is used as the client cert for communication with the Key/Value store.

Specifies the path to a local file with a PEM encoded private key. This private key is used as the client key for communication with the Key/Value store.

Specifies the path in the Key/Value store. If not configured, the default value is ‘docker/nodes’.

The PLUGIN_ID value is either the plugin’s name or a path to its specification file. The plugin’s implementation determines whether you can specify a name or path. Consult with your Docker administrator to get information about the plugins available to you.

Once a plugin is installed, requests made to the daemon through the command line or Docker’s remote API are allowed or denied by the plugin. If you have multiple plugins installed, at least one must allow the request for it to complete.

For information about how to create an authorization plugin, see authorization plugin section in the Docker extend section of this documentation.

Daemon user namespace options

The Linux kernel user namespace support provides additional security by enabling a process, and therefore a container, to have a unique range of user and group IDs which are outside the traditional user and group range utilized by the host system. Potentially the most important security improvement is that, by default, container processes running as the root user will have expected administrative privilege (with some restrictions) inside the container but will effectively be mapped to an unprivileged uid on the host.

Starting the daemon with user namespaces enabled

Example: starting with default Docker user management:
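
```
dockerd --userns-remap=default
```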

Detailed information on subuid / subgid ranges

Given potential advanced use of the subordinate ID ranges by power users, the following paragraphs define how the Docker daemon currently uses the range entries found within the subordinate range files.

The simplest case is that only one contiguous range is defined for the provided user or group. In this case, Docker will use that entire contiguous range for the mapping of host uids and gids to the container process. This means that the first ID in the range will be the remapped root user, and the IDs above that initial ID will map host ID 1 through the end of the range.
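
As an illustration (the dockremap user name is hypothetical; the start of the range matches the value referenced below), /etc/subuid might contain a single entry such as:

```
dockremap:165536:65536
```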

From the example /etc/subuid content shown above, the remapped root user would be uid 165536.

If the system administrator has set up multiple ranges for a single user or group, the Docker daemon will read all the available ranges and use the following algorithm to create the mapping ranges:

Disable user namespace for a container

User namespace known restrictions

The following standard Docker features are currently incompatible when running a Docker daemon with user namespaces enabled:

In general, user namespaces are an advanced feature and will require coordination with other capabilities. For example, if volumes are mounted from the host, file ownership will have to be pre-arranged if the user or administrator wishes the containers to have expected access to the volume contents.

Default cgroup parent

If the cgroup has a leading forward slash ( / ), the cgroup is created under the root cgroup, otherwise the cgroup is created under the daemon cgroup.

Daemon configuration file

Options that are not present in the file are ignored when the daemon starts. This is a full example of the allowed configuration options in the file:
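
The full listing is not reproduced here; as an abbreviated sketch, /etc/docker/daemon.json might contain options such as:

```
{
  "debug": true,
  "log-level": "info",
  "insecure-registries": ["myregistry:5000"],
  "storage-driver": "overlay2"
}
```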

The list of currently supported options that can be reconfigured is this:

Docker

Platform for distributed applications.

Getting started with Docker

Installation

Install the docker-ce package using the Docker repository:

To install the dnf-plugins-core package (which provides the commands to manage your DNF repositories) and set up the stable repository.
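
```
sudo dnf -y install dnf-plugins-core
```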

To add the docker-ce repository
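
For example:

```
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
```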

To install the Docker engine. The Docker daemon relies on an OCI-compliant runtime (invoked via the containerd daemon) as its interface to the Linux kernel namespaces, cgroups, and SELinux.
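
For example:

```
sudo dnf install docker-ce docker-ce-cli containerd.io
```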

To start the Docker service use:
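
```
sudo systemctl start docker
```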

Now you can verify that Docker was correctly installed and is running by running the Docker hello-world image.
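
```
sudo docker run hello-world
```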

Start the Docker daemon at boot

To make Docker start when you boot your system, use the command:
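
```
sudo systemctl enable docker
```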

For additional systemd configuration options for Docker, like adding an HTTP Proxy, refer to the Docker documentation Systemd article.

Why can't I use the docker command as a non-root user by default?

You can either set up sudo to give docker access to non-root users.

Or you can create a Unix group called docker and add users to it. When the Docker daemon starts, it makes the Unix socket readable and writable by members of the docker group.

Warning: The docker group is equivalent to the root user. For details on how this impacts security on your system, see Docker Daemon Attack Surface.

To create the docker group and add your user:
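
For example (log out and back in afterwards):

```
sudo groupadd docker
sudo usermod -aG docker $USER
```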

Share your knowledge

Fedora Developer Portal is a community effort to share guides and information about open-source development. And we need your help!

Configuring and running Docker on various distributions

After successfully installing Docker, the docker daemon runs with its default configuration.

Running the docker daemon directly

The Docker daemon can be run directly using the dockerd command. By default it listens on the Unix socket unix:///var/run/docker.sock

Configuring the docker daemon directly

If you're running the Docker daemon directly by running dockerd instead of using a process manager, you can append the configuration options to the dockerd command directly. Other options can be passed to the Docker daemon to configure it.

Some of the daemon’s options are:

Here is an example of running the Docker daemon with configuration options:
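
A sketch of such an invocation (the certificate paths, address and flags are illustrative):

```
dockerd -D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376
```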

The command line reference has the complete list of daemon flags with explanations.

Daemon debugging

Note: The log level of the daemon must be set to at least "info" for the stack trace to be saved to the log file. By default the daemon's log level is set to "info".

The daemon will continue operating after handling the SIGUSR1 signal and dumping the stack traces to the log. The stack traces can be used to determine the state of all goroutines and threads within the daemon.
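
For example, to ask a running daemon to dump its stack traces (assuming the daemon process is named dockerd):

```
sudo kill -USR1 $(pidof dockerd)
```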

Ubuntu

After successfully installing Docker for Ubuntu, you can check the running status using Upstart in this way:
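
With Upstart, for example:

```
sudo status docker
```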

Running Docker

You can start/stop/restart the docker daemon using
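
For example, with Upstart:

```
sudo start docker
sudo stop docker
sudo restart docker
```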

Configuring Docker

The instructions below depict configuring Docker on a system that uses upstart as the process manager. As of Ubuntu 15.04, Ubuntu uses systemd as its process manager. For Ubuntu 15.04 and higher, refer to control and configure Docker with systemd.

You configure the docker daemon in the /etc/default/docker file on your system. You do this by specifying values in a DOCKER_OPTS variable.

To configure Docker options:

Log into your host as a user with sudo or root privileges.

If you don’t have one, create the /etc/default/docker file on your host. Depending on how you installed Docker, you may already have this file.

Open the file with your favorite editor.

Add a DOCKER_OPTS variable with the following options. These options are appended to the docker daemon’s run command.
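
A sketch of such an entry in /etc/default/docker (the flags are illustrative):

```
DOCKER_OPTS="-D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376"
```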

The command line reference has the complete list of daemon flags with explanations.

Save and close the file.

Restart the docker daemon.
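
For example, with Upstart:

```
sudo restart docker
```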

Verify that the docker daemon is running as specified with the ps command.

By default, logs for Upstart jobs are located in /var/log/upstart, and the logs for the docker daemon can be found at /var/log/upstart/docker.log.

CentOS / Red Hat Enterprise Linux / Fedora

After successfully installing Docker for CentOS/Red Hat Enterprise Linux/Fedora, you can check the running status in this way:
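
On a systemd-based install, for example:

```
sudo systemctl status docker
```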

Running Docker

You can start/stop/restart the docker daemon using
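
For example:

```
sudo systemctl start docker
sudo systemctl stop docker
sudo systemctl restart docker
```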

If you want Docker to start at boot, you should also:

Configuring Docker

Previously, for CentOS 6.x and RHEL 6.x you would configure the docker daemon in the /etc/sysconfig/docker file on your system. You would do this by specifying values in an other_args variable. For a short time in CentOS 7.x and RHEL 7.x you would specify values in an OPTIONS variable. This is no longer recommended in favor of using systemd directly.

For this section, we will use CentOS 7.x as an example to configure the docker daemon.

To configure Docker options:

Log into your host as a user with sudo or root privileges.

Create the /etc/systemd/system/docker.service.d directory.
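
```
sudo mkdir -p /etc/systemd/system/docker.service.d
```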

Create a /etc/systemd/system/docker.service.d/docker.conf file.

Open the file with your favorite editor.

Override the ExecStart configuration from your docker.service file to customize the docker daemon. To modify the ExecStart configuration you have to specify an empty configuration followed by a new one as follows:
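
A sketch of such a docker.conf drop-in (the flags after dockerd are illustrative):

```
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -D
```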

The command line reference has the complete list of daemon flags with explanations.

Save and close the file.

Restart the docker daemon.
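
For example:

```
sudo systemctl daemon-reload
sudo systemctl restart docker
```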

Verify that the docker daemon is running as specified with the ps command.

Note: Using and configuring journal is an advanced topic and is beyond the scope of this article.
