Docker and IPtables

By default, the Docker daemon appends iptables rules to handle forwarding for containers. For this, it uses a filter chain named DOCKER.


Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ...
Chain DOCKER (1 references)
target prot opt source destination

Moreover, when you tell Docker to expose a port of a container, it exposes it to the entire world, potentially bypassing your existing iptables rules.

So, if you are running Docker on a host that already has an iptables-based firewall, you should probably set --iptables=false.

What are you talking about?

Let’s take an example. You want to start nginx and bind containerPort 80 to hostPort 9090:

docker run --name some-nginx -d -p 9090:80 nginx

What it does behind the scenes is add an iptables rule to the DOCKER filter chain:


Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ...
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:80 <-- added when the container started (traffic to host port 9090 is DNATed to container port 80 before it reaches this chain)

Now port 9090 is available to the entire world. Why? Because the port is bound to all host IP addresses (*), and because of the forwarding rules that are dynamically added to the DOCKER filter chain. Note that Docker's forward rules permit all external source IPs by default.

You probably don’t want that.
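Where does that reachability actually come from? Besides the filter rule above, Docker also manages DNAT and MASQUERADE rules in the nat table. Here is a sketch of what iptables -t nat -S typically shows for this container (the container IP 172.17.0.2 is illustrative and will differ on your host):

```
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 9090 -j DNAT --to-destination 172.17.0.2:80
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```

The PREROUTING rule sends every packet addressed to a local IP into the DOCKER chain, and the DNAT rule rewrites host port 9090 to the container's port 80 without looking at the source address; that is why the published port is reachable from anywhere.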

Exposing ports locally

You might want to publish ports only locally, not to *, for internal use. Let's read the documentation of docker run:

-p=[]      : Publish a container's port or a range of ports to the host
               format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
               Both hostPort and containerPort can be specified as a range of ports.
               When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`)
               (use 'docker port' to see the actual mapping)

As you can see, you can bind the hostPort to an IP.


docker run --name some-nginx -d -p 127.0.0.1:9090:80 nginx

# BEFORE (bound to all addresses)
# netstat -an | grep 9090
tcp6       0      0 :::9090                 :::*                    LISTEN

# AFTER (bound to 127.0.0.1 only)
# netstat -an | grep 9090
tcp        0      0 127.0.0.1:9090          0.0.0.0:*               LISTEN

Better.

Docker, stop messing with my iptables rules!

Let’s say you are using Docker on a server reachable from the Internet, and you already have an iptables-based firewall configured. Personally, I’m using uif, which is a very powerful Perl script packaged in Debian. Have a look at a sample configuration:

## Debian GNU Linux Firewall Package
## This file has been automatically generated by debconf. It will be overwritten
## the next time you configure firewall without choosing "don't touch".

## Sysconfig definitions
# These entries define the global behaviour of the firewall package. Normally
# they are preset in /etc/default/uif and may be overwritten by this
# section.
#
# syntax: LogLevel : set the kernel loglevel for iptables rules
# LogPrefix: prepend this string to all iptables logs
# LogLimit: set packet log limit per time interval (times/interval)
# LogBurst: set packet log burst
# Limit: set packet limit per time interval (times/interval)
# Burst: set packet burst
# example:
# sysconfig {
# LogLevel debug
# LogPrefix FW
# LogLimit 20/minute
# LogBurst 5
# Limit 20/minute
# Burst 5
# AccountPrefix ACC_
# }

## Include predefined services
# The include section takes a bunch of files and includes them into this
# configuration file.
#
# syntax: "filename"
#include {
# "/etc/uif/services"
#}

## Services needed for workstation setup
# The service section provides the protocol definitions you’re
# using in the rules. You’re forced to declare everything you
# need for your setup.
#
# syntax: service_name [tcp([source:range]/[dest:range])] [udp([source:range]/[dest:range])]
# [protocol_name([source:range][/][dest:range])] [service_name] …
# examples: http tcp(/80)
# dns tcp(/53) udp(/53)
# group http dns tcp(/443)
# ipsec esp(/) udp(/500)
service {
traceroute udp(32769:65535/33434:33523) icmp(11)
ping icmp(8)
}

## Network definitions needed for simple workstation setup
# In the network section you’re asked to provide informations on all
# hosts and/or networks running in your setup.
#
# syntax: net_name [ip-address[:mac-address]] [network] [net_name]
# examples: webserver 192.168.1.5
# intranet 10.1.0.0/16
# dmz 10.5.0.0/255.255.0.0
# some intranet dmz 10.2.1.1
# router 10.1.0.1=0A:32:F2:C7:1A:31
network {
localhost 127.0.0.1
all 0.0.0.0/0
trusted4 192.168.1.0/24
trusted6 fd00:1:2:3::/64
}

## Interface definitions
# Since all definitions used in the filter section are symbolic,
# you’ve to specify symbolic names for all your interfaces you’re
# going to use.
#
# syntax: interface_name [unix network interface] [interface_name]
# examples: internal eth0
# external ippp0 ipsec0
# allppp ppp+
# group external allppp eth3
interface {
loop lo
}

## Filter definitions
# The filter section defines the rules for in, out, forward, masquerading
# and nat. All rules make use of the symbolic names defined above. This
# section can be used multiple times in one config file. This makes more
# sense when using one of these alias names:
# filter, nat, input, output, forward, masquerade
#
# syntax: in[-/+] [i=interface] [s=source_net] [d=dest_net] [p=protocol] [f=flag_1,..,flag_n]
# out[-/+] [o=interface] [s=source_net] [d=dest_net] [p=protocol] [f=flag_1,..,flag_n]
# fw[>/-/+] [i/o=interface][s=source_net] [d=dest_net] [p=protocol] [f=flag_1,..,flag_n]
# masq[-/+][i/o=interface][s=source_net] [d=dest_net] [p=protocol] [f=flag_1,..,flag_n]
# nat[-/+] additionally allows [S=from source] [D=to destination] [P=to port:[range]]
# additional:
# All keys mentioned in the syntax section (in/out/…) can be prefixed with “sl”, which
# causes the creation of a stateless rule.
# flags: limit([count/time[,burst]])
# reject([reject type])
# log([name])
# account(name)
# examples:
# masq+ o=extern s=intranet
# nat+ s=intranet p=http D=relayintern P=squid
# in+ s=trusted p=ssh,ping,traceroute,http
# out- s=intranet p=smb f=reject
# fw- d=microsoft f=reject,log(ms-alert)
# slin+ s=testnet
# slout- d=testnet
# fw> o=extern
# fw+ p=myhttp f=account(HTTP)
# Pay attention to the protocol of your accounting rules. If you
# want to count user http traffic, you may need a "myhttp tcp(80/)".
filter {
in+ i=loop s=localhost
out+ o=loop d=localhost

# allow incoming pings for IPv4
in+ s=all(4) p=ping
# these IPv6-ICMP types are a MUST for IPv6
in+ s=all(6) p=ping,pong,noroute,packet-too-big,time-exceeded,parameter-problem,neighbor-advertisement,neighbor-solicitation

in+ p=traceroute

in+ s=trusted4(4)
in+ s=trusted6(6)

out+ d=all

in- f=log(input),reject
out- f=log(output),reject
fw- f=log(forward),reject
}

To tell docker to never make changes to your system iptables rules, you have to set --iptables=false when the daemon starts.

For sysvinit- and upstart-based systems, you can edit /etc/default/docker. For systemd, create a drop-in unit:

mkdir /etc/systemd/system/docker.service.d
cat << EOF > /etc/systemd/system/docker.service.d/noiptables.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --iptables=false
EOF
systemctl daemon-reload

Now reload your firewall and restart the Docker daemon. You will see that the DOCKER chain and the references to it in the FORWARD chain (policy DROP) have disappeared.
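As a quick sanity check after the restart (run as root; a sketch, assuming iptables is in your path):

```
# iptables -L -n | grep -i docker
```

With --iptables=false active, this should print nothing.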

Configure iptables to work with docker

If you’re still using the Ethernet bridge created by Docker, named docker0, you can set the following forwarding rules:

# just an example. It implies that your host Ethernet NIC is eth0
-A FORWARD -i docker0 -o eth0 -j ACCEPT
-A FORWARD -i eth0 -o docker0 -j ACCEPT

Now, if you want to expose TCP port 10000 of a running container to the world, the container's port must be published on all host IPs (*):

docker run --name some-nginx -d -p 10000:80 nginx

netstat -an | grep 10000
tcp6       0      0 :::10000                :::*                    LISTEN

Then you can add this firewall rule to allow the world to access your container through the forwarding rules:

-A INPUT -p tcp -m tcp --dport 10000 -s 0.0.0.0/0 -j ACCEPT

How to Stop the Docker Swarm Manager from Acting as a Worker


Does the Docker Swarm manager act as a worker too?

Yes, by default all managers act as worker nodes too. The main reason is that in a single-manager-node cluster, you can run commands like docker service create and the scheduler will place all tasks on the local Engine.

 

How to stop the Docker Swarm manager from acting as a worker?

To prevent the scheduler from placing tasks on a manager node in a multi-node swarm, set the availability for the manager node to Drain. The scheduler gracefully stops tasks on nodes in Drain mode and schedules the tasks on an Active node. The scheduler does not assign new tasks to nodes with Drain availability.

# docker node update --availability drain <ManagerNode>


 
            

Create and Manage Swarm Services


Swarm Service

Service is the definition of the tasks to execute on the worker nodes. It is the central structure of the swarm system and the primary root of user interaction with the swarm. When you create a service, you specify which container image to use and which commands to execute inside running containers.

Running Services in the Docker Swarm

We have the swarm cluster up and we are ready to deploy services. In this demo we will deploy a service named “webserver” which uses the “nginx” Docker image.

# docker service create -p 8080:80 --name webserver nginx

In the above example, we’re mapping port 80 in the Nginx container to port 8080 on the cluster so that we can access the default nginx page from anywhere.


Swarm Service modes

Swarm services support two modes – replicated and global (replicated mode is the default).

Replicated mode – you specify the number of replicas of the service, and swarm maintains that count.

# docker service create --name replicated_service --replicas 3 nginx

Global mode – to start a global service on each available node, pass --mode global to docker service create. Every time a new node becomes available, the scheduler places a task for the global service on the new node.

# docker service create --name global_service --mode global nginx


To view services on a cluster

# docker service ls

# docker service inspect --pretty <ServiceNAME|ServiceID>


To determine which nodes a service is running on, use docker service ps followed by the service name:

# docker service ps <ServiceNAME|ServiceID>

By default Docker uses mesh networking (the routing mesh): a service published on one node can be accessed through any other node of the cluster.
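For example, with the webserver service above published on port 8080, any node in the cluster answers on that port, even a node that is not running a task (the node IP below is illustrative):

```
# curl http://192.168.11.101:8080
```

The routing mesh forwards the request over the ingress overlay network to a node that is running a webserver task.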


Scale Up/Down the Service

# docker service scale <ServiceNAME>=<#ofReplicas>

Remove a Service

# docker service rm <ServiceNAME|ServiceID>


 
            

Create Docker Swarm


Docker Swarm

Swarm is native clustering for Docker. When the Docker Engine runs in swarm mode, manager nodes implement the Raft Consensus Algorithm to manage the global cluster state. Docker swarm mode uses a consensus algorithm to make sure that all the manager nodes in charge of managing and scheduling tasks in the cluster store the same consistent state.


LAB Setup

In this LAB we are going to create a swarm cluster with a single manager and 2 worker nodes.

Operating System   CentOS 7.4 x86_64
Platform           Vagrant machines
Manager Node       manager   192.168.11.100/24
Worker Node 1      node-1    192.168.11.101/24
Worker Node 2      node-2    192.168.11.102/24

Prerequisites

  • Docker Engine 1.12 or later installed. We are going to install “ce” (community edition)

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# yum install docker-ce -y

# systemctl start docker.service

# systemctl enable docker.service

  • Static IP address of the manager machine, preferably for all machines
  • Network connectivity between all nodes and manager
  • The following network ports open

TCP port 2377 for cluster management communications

TCP and UDP port 7946 for communication among swarm nodes

UDP port 4789 for overlay network traffic


Create a Swarm

After the installation of the Docker engine, the next step is to enable swarm mode; by default it is disabled.


Step-1: Initialize the Swarm

To create a new swarm, run the command below on the manager node.

# docker swarm init --advertise-addr 192.168.11.100

This command switches the current node into swarm mode and creates a new swarm. The node where swarm init is run is designated as a manager node, and it starts listening on the advertised IP address on port 2377.

swarm init also generates join tokens for worker and manager nodes by default; you can regenerate the tokens later if you missed noting them down.


Step-2: Adding worker nodes on the swarm cluster

Log in to each worker node (node-1 and node-2) and run the following command:

# docker swarm join --token <TOKEN> <Manager IP>:2377

Step-3: Check the Status of the Swarm Cluster

Run the following commands to check the status and health of the swarm cluster.

# docker info

# docker node ls

# docker node inspect <node> --pretty

Please Note – By default manager also acts as worker node.


To see the Token

Display the token for manager to join

# docker swarm join-token manager

Display the token for worker to join

# docker swarm join-token worker


Swarm Cluster Management

AVAILABILITY column shows whether or not the scheduler can assign tasks to the node:

active: scheduler can assign tasks to the node.

pause: scheduler doesn’t assign new tasks to the node, but existing tasks remain running.

drain: scheduler doesn’t assign new tasks to the node, existing services will move to other nodes.

MANAGER STATUS column shows node participation in the Raft consensus:

No value: indicates a worker node that does not participate in swarm management.

leader: node is the primary manager that makes all swarm management and orchestration decisions.

reachable: node is a manager node participating in the Raft consensus quorum.

unavailable: node is a manager that is not able to communicate with other managers.


Management Commands

Update the availability of a manager/worker node

# docker node update --availability drain node-1.1it.click

Promote the node as manager

# docker node promote node-1.1it.click

Demote the node from manager role

# docker node demote node-2.1it.click

Add labels to the Node’s metadata

# docker node update --label-add Env=Dev node-2.1it.click

Node leaves the cluster

# docker swarm leave

Removes the node from cluster

# docker node rm node-2.1it.click

Docker Swarm


Swarm is native clustering for Docker. In the context of swarm, a cluster is a pool of Docker hosts that acts a bit like a single large Docker host. You can also run swarm services and standalone containers on the same Docker instances.


Features of Swarm

  • Swarm setup is very quick and easy, with no separate infrastructure requirements; Swarm ships as a standard Docker image.
  • Swarm implements most of the Docker API endpoints, which means tools built on top of them can work out of the box.
  • Swarm supports affinity definitions/configuration, which lets you control which Docker hosts a container is scheduled on (for example, avoiding a host that is already running the same container).
  • Swarm supports high availability: we can join multiple manager nodes to the cluster, so that if one manager node fails, another can automatically take its place without impact to the cluster.
  • Swarm supports scaling: for each service you can declare the number of tasks you want to run. When you scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
  • Swarm handles desired-state reconciliation very well: the manager node constantly monitors the cluster state and reconciles any differences between the actual state and your expressed desired state.
  • Swarm supports overlay networks. The swarm manager automatically assigns addresses to the containers on the overlay network when it initializes or updates the application.
  • Swarm is secure by default. Each node in the swarm enforces TLS mutual authentication and encryption to secure communications between itself and all other nodes.
  • Rolling updates: at rollout time you can apply service updates to nodes incrementally.


Swarm Mode Key Concepts

The manager node manages application deployment requests. The tasks a manager node performs are:

  • Dispatches units of work, called tasks, to worker nodes.
  • Maintains the desired state of the swarm.
  • Elects (together with the other managers) a single leader to conduct orchestration tasks.
  • Keeps track of resource utilization on the worker nodes.

Worker nodes receive and execute tasks dispatched from manager nodes. By default manager nodes also run services as worker nodes, but you can configure them to run manager tasks exclusively and be manager-only nodes. An agent runs on each worker node and reports on the tasks assigned to it. The worker node notifies the manager node of the current state of its assigned tasks so that the manager can maintain the desired states.

Service is the definition of the tasks to execute on the worker nodes. It is the central structure of the swarm system and the primary root of user interaction with the swarm. When you create a service, you specify which container image to use and which commands to execute inside running containers.

Task carries a Docker container and the commands to run inside the container. It is the atomic scheduling unit of swarm. Manager nodes assign tasks to worker nodes according to the number of replicas set in the service scale. Once a task is assigned to a node, it cannot move to another node; it can only run on the assigned node or fail.

Load balancing – the swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a published port in the 30000-32767 range, or you can choose a free port yourself.

DNS component automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based on the DNS name of the service.

Docker Image Creation


Docker Image

A Docker image can be described as a template with all required configurations, whereas a container is a running instance of a Docker image. Unlike containers, images are stateless, i.e. an image does not change once built.

There are different images available from OS/application vendors, along with custom images from the community.

When working with containers, DevOps/application engineers generally create their own Docker images with all their customizations; this enables them to launch a container quickly.


Methods for custom image creation

Interactive Method:

In this method, you download the base Docker OS image -> create a container -> manually launch a shell -> perform the customization -> commit the changes.

This process saves your container as a Docker image, and that image can be stored/distributed.
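A minimal sketch of the interactive flow, assuming a CentOS 7 base image and a hypothetical image name my-custom-httpd (the package installed is just an example):

```
# docker run -it centos:centos7 /bin/bash     # start a container and get a shell
# (inside the container) yum install -y httpd ; exit
# docker ps -l                                # note the ID of the exited container
# docker commit [container ID] my-custom-httpd:v1
# docker images                               # the new image appears in the local list
```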

Automated method using a Dockerfile:

A Dockerfile is a text file with directives/instructions for image creation. The “docker build” command is used to build the image; it creates/configures the image automatically by reading the Dockerfile. A Dockerfile accepts instructions in the following format:

DIRECTIVE               arguments

In the last session on “Docker”, we have covered the “interactive way” of image creation and in this session we are going to create a Docker image using the Dockerfile method.


LAB

Using a Dockerfile, we are going to create an Apache HTTPD web server image based on CentOS 7. At a high level, the configuration below will be performed/applied to the Docker image.

  • Download the official CentOS 7 image.
  • Perform a package update on the image.
  • Install the Apache HTTP server.
  • Add a directive to copy index.html from the Docker management server to the document root (i.e. /var/www/html) of the image.
  • Expose port 80 automatically whenever a container is created from this image.
  • Configure auto start-up of the Apache HTTPD service.

Below are the directives we are going to use in the Dockerfile.

FROM : this directive tells which base image to use to create the custom image, e.g. centos, ubuntu, etc.

RUN : this directive defines a command to be executed during the image build.

ADD : this directive defines files/directories to be copied from the source (local server) into the image during the image build.

ENTRYPOINT : this directive defines the container as an executable.

CMD : this directive defines the default arguments for the ENTRYPOINT command.

EXPOSE : this directive defines the network ports on which the container will listen.


Sample Dockerfile

#  use latest centos7 image

FROM centos:latest

#  add the image maintainer name and email id

MAINTAINER Aghassi email: [email protected]

# update the centos image with latest available updates

RUN yum update -y

RUN yum clean all

# install network utilities, such as ( ifconfig, netstat, etc)

RUN yum install net-tools -y

# install apache httpd web server

RUN yum install httpd -y

RUN yum clean all

# copy the index.html file from current directory to image's document root

ADD index.html /var/www/html/

# define image to allow listen on port 80 (whenever a container created)

EXPOSE 80

# define the commands to be executed when container boots (created from this image)

ENTRYPOINT [ "/usr/sbin/httpd" ]

CMD [ "-D",  "FOREGROUND" ]


Build the image

# docker build -t [repository/image_name]:[tag] .


Test the newly created image by creating a container

# docker run -it -d -P [image id]

# curl [container IP]:80


 
            

Customize The Docker Networking

 

Why use a custom network subnet for Docker networking?

Docker containers use the default subnet "172.17.0.0/16" for networking. There may be scenarios where we can’t use the default network due to restrictions, or because the subnet is already used elsewhere in the network.

 

Lab Tasks

In this quick session, we will change the network from the default subnet "172.17.0.0/16" to "10.10.10.10/24". The bridge interface remains docker0, i.e. the default.

 

Configure the Custom Network

Stop The Docker Service

# systemctl stop docker.service

Bring down the Docker bridge docker0

# ip link set dev docker0 down

Verify that IP forwarding is enabled; if it is not, enable it in sysctl.conf:

# sysctl net.ipv4.conf.all.forwarding
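If the command above prints 0, forwarding is off. Below is a small sketch that reads the value straight from /proc and prints it in sysctl.conf syntax (net.ipv4.ip_forward is the key normally persisted; it controls the same setting):

```shell
# Read the current IPv4 forwarding setting from /proc; 1 means enabled.
# (Falls back to 0 if the file is unavailable, e.g. on non-Linux systems.)
fwd=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo 0)
echo "net.ipv4.ip_forward = ${fwd}"
```

To enable forwarding persistently, put net.ipv4.ip_forward = 1 in /etc/sysctl.conf and apply it with sysctl -p.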

Add the new subnet to DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network:

"--bip=<YOUR-CIDR-ADDRESS>/24"

Example

DOCKER_NETWORK_OPTIONS="--bip=10.10.10.10/24"

Remove the default subnet’s MASQUERADE rules from the POSTROUTING chain and flush the DOCKER chain in the nat table:

# iptables -t nat -F POSTROUTING

# iptables -t nat -F DOCKER

Start Docker service:

# systemctl start docker.service

Verify that the MASQUERADE rules in the POSTROUTING chain now reference the new subnet:

# iptables -t nat -L -n

 

Validation

Check that the new subnet is now on the bridge:

# docker network inspect bridge

Run a container and check that it gets an address from the new subnet:

# docker run -it [Image Name] /bin/bash

Check the IP address of the container:

# docker inspect -f '{{ .NetworkSettings.IPAddress }}' [Container ID]

 

Docker


What is Docker?

Docker is an open platform for developers and system engineers to build, ship, and run distributed applications, whether on bare-metal systems (physical), VMs, or the cloud. Docker is not a machine virtualization technology like Xen/KVM, etc.

Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Linux.


Advantages of using Docker

Portability – with Docker, an application and its prerequisites/dependencies can be bundled into a single container/image, which can easily be ported to a different system running Docker.

Quick Application Deployment – as an application and its dependencies can be bundled into a single image, it is easy to deploy apps quickly.

Sharing – you can share your Docker images with others using remote repositories.

Lightweight – Docker images are very small; they need very little compute capacity and storage.

Easy Maintenance – Maintenance is very quick and easy.

Cost Saving – Open Source technology and don’t need heavy compute.


Docker Containers vs. Virtual Machines

  • Docker containers can be created/destroyed very quickly compared to virtual machines.
  • Docker containers are lightweight compared to virtual machines. Being lightweight, more containers can run at the same time on a host.
  • Docker containers use resources very efficiently. In the case of virtual machines, capacity (compute + storage) needs to be reserved, whereas this is not needed for Docker containers.
  • Virtual machines can be migrated across servers while they are running, but Docker containers need to be stopped before migration, as there is no hypervisor layer.

 

*Images taken from the Docker documentation


Docker Terminologies

  • Images – Images are templates for the docker containers.
  • Containers – created from Docker images and run the actual application.
  • Docker Daemon – the background service running on the host that manages building and running the containers.

Prerequisites Docker Installation

  1. CentOS 7 64-bit / kernel 3.10.x is the minimum required.
  2. Disable the SELinux and FirewallD services:  # systemctl stop firewalld
  3. Install the EPEL repository:    # yum install -y epel-release

 Install Docker via yum provided by CentOS (method 1)

# yum install -y docker


Install Docker CE (community edition)  Software (method 2)

First remove older version of docker (if any):

# yum remove docker docker-client docker-common docker-selinux docker-engine-selinux docker-engine docker-ce

Next install needed packages:

# yum install yum-utils device-mapper-persistent-data lvm2 -y

Configure the docker-ce repo:

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Finally install docker-ce:

# yum install docker-ce -y


Enable and Start Docker service

# systemctl enable docker
# systemctl start docker


How to find out info about Docker network bridge and IP addresses

The default network bridge is named docker0 and is assigned an IP address. To find this info run the following:

# ip a

# ip a s docker0


How to run docker commands

The syntax is:

# docker command
# docker command arg
# docker [options] command arg
# docker help | more


Getting help

# docker help | more

Run ‘docker COMMAND --help‘ for more information on a command:

# docker ps --help
# docker cp --help


Check the Docker version

# docker version


Check Detailed Docker Information

# docker info


How to test your docker installation

Docker images are pulled from a Docker registry such as docker.io (Docker Hub) or registry.access.redhat.com and so on. Type the following command to verify that your installation is working:

# docker run hello-world


Search Docker Images on Internet

Now you have a working Docker setup; it is time to find images. We can find images for all sorts of open source projects and Linux distributions. To search the Docker Hub/cloud for a centos or nginx image, run:

# docker search centos


# docker search nginx


Download Docker Images

To pull an image named centos or nginx from a registry, run:

# docker pull centos:centos7

# docker pull nginx


To Display the list of locally available images

# docker images


TAG − This is used to logically tag images.
Image ID − This is used to uniquely identify the image.
Created − The number of days since the image was created.
Virtual Size − The size of the image. 


Remove Docker Image

When a Docker image is obsolete or you no longer need it, you can remove that image using the following command.

# docker rmi [IMAGEname]


To test your new image

The concept is a little tricky: whenever a command is executed against a Docker image, a container is created. When the command execution finishes, the container stops (a non-running or exited container state). It means that for every command execution against the same image, a new container is created and then exits.

# docker run centos:centos7 /bin/ping 1it.click -c 5


List Docker Containers

Whenever a command is executed against a Docker image, a container is created; it stops after execution but remains in an exited (non-running) state. The following command displays the most recently created container, whether running or stopped:

# docker ps -l

In a production environment there are many running containers, and we have a command to list them. This command shows the currently running containers:

# docker ps

It can also be used with the -a argument, which lists all of the containers on the system:

# docker ps -a


Checking Docker Networking

# docker network ls

# docker network inspect [network name]


Checking Resource Consumption by Running Container

# docker stats


Set resource limits for a docker container (-c sets CPU shares, -m sets the memory limit)

# docker run -it -c 256 -m 300M centos:centos7 /bin/bash


Stop/Start/Restart operation

# docker start [container ID]                ## to start a docker container

# docker stop [container ID]                 ## to stop a docker container

# docker restart [container ID]              ## to restart a docker container


Committing the Docker container updates (this command turns your container into an image) and adding a repository/tag value to an image

# docker commit [container ID]

# docker tag [image ID] <repo : tags>

Removing/Deleting a container

# docker rm [container ID]


Checking the docker container Logs

# docker logs [container ID]


Let's create a container and quickly host a demo website using Python's SimpleHTTPServer module, listening on port 8080:

# mkdir -p /var/www/html

# echo "This is my Aghassi's test Docker Website" > /var/www/html/demowebpage.txt

# docker run -d -p 8080:8080 --name="python_web" -v /usr/sbin:/usr/sbin -v /usr/bin:/usr/bin -v /usr/lib64:/usr/lib64 -w /var/www/html -v /var/www/html:/var/www/html centos:centos7 /bin/python -m SimpleHTTPServer 8080

-d, --detach                     Run container in background and print container ID

-p, --publish list               Publish a container's port(s) to the host (default [])

-v, --volume list                Bind mount a volume (default [])

-w, --workdir string             Working directory inside the container

Check the network ports allocation:

# ss -tupln |grep 8080

Lets test the website:

# curl localhost:8080/demowebpage.txt


How to run Docker nginx image

Now that you have pulled the nginx image, it is time to run it:

# docker run --name my-nginx-i --detach nginx

Say you want to serve a simple static file from /var/www/html/ using the nginx container:

# docker run --name my-nginx-ii -p 80:80 -v /var/www/html/:/usr/share/nginx/html:ro -d nginx

Where,

--name my-nginx-ii : Assign a name to the container
-d (--detach) : Run container in background and print container ID
-v /var/www/html/:/usr/share/nginx/html:ro : Bind mount a volume (read-only)
-p 80:80 : Publish a container's port(s) to the host, i.e. redirect all traffic coming to host port 80 to the container

Go ahead and create a file named index.html in /var/www/html/:

# echo 'Welcome. I am Nginx server locked inside Docker' > /var/www/html/index.html

Test it:

# curl http://your-host-ip-address/
# curl 192.168.1.7

Sample outputs:

Welcome. I am Nginx server locked inside Docker

How to run a command in a running container

Run the ls /etc/nginx command in the my-nginx-i container:

# docker exec e535e4c08c07 ls /etc/nginx

OR

# docker exec my-nginx-i ls /etc/nginx

Want a bash shell in a running container so you can make changes to the nginx image?

# docker exec -i -t e535e4c08c07 bash

OR

# docker exec -i -t my-nginx-i bash