Learn Docker in 1 Hour

This article covers 90% of Docker; the remaining 10% you can pick up on your own. I spent three weeks learning Docker and have written it up here, so you do not have to purchase any course. I also created this article so that I can revise from it when I have very little time.

  • What is Docker: Docker is an open-source platform for creating isolated environments. These environments behave like virtual machines, but they do not take as much memory as VMs
  • Docker images: A Docker image is a template for creating a Docker container; think of an image like an ISO or exe file. A library can have its own image with a specific operating system
  • What is a container: When we run an image, it creates an isolated environment; these isolated environments in Docker are called containers

What is Docker Hub:

Docker Hub is a repository where all the public images are stored. You can also have private images, but that comes under the paid model. Most of the time, private images are stored in a company's own registry rather than on Docker Hub.

Whenever we pull an image, by default it is pulled from Docker Hub.
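For example (this assumes Docker is installed and the daemon is running; hello-world is the official test image on Docker Hub):

```shell
# Pull the official hello-world image from Docker Hub
docker pull hello-world

# Run it; it prints a greeting confirming that pulling and running images works
docker run hello-world
```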

Dockerfile

The Dockerfile contains all the instructions that are followed while creating a Docker image. When we run the build command, Docker creates the image by following these instructions. If any of the instructions fails, the image is not created. The file name itself is Dockerfile.

Image layers

INSTRUCTION: each command present in the Dockerfile

Every INSTRUCTION written in the Dockerfile is treated as a layer; if you have 5 INSTRUCTIONs, the image has 5 layers.

Every layer is cached while the image is built, so if we build the image again without changing any INSTRUCTION, the cached layers are reused rather than created anew.

If we change something in the Dockerfile, then from that INSTRUCTION onward, every layer is rebuilt until the end of the Dockerfile.

So, to avoid a lot of re-execution and make good use of the cache, we have to order the INSTRUCTIONs:

  • The least frequently changing INSTRUCTIONs go at the top of the Dockerfile
  • The most frequently changing INSTRUCTIONs go at the bottom of the Dockerfile
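As a sketch of this ordering (assuming a Python project with a requirements.txt; adapt the file names to your stack), the dependency install goes in its own layer before the frequently changing source code, so a code edit only invalidates the last layers:

```dockerfile
# Rarely changes: base image
FROM python:3.9-slim

WORKDIR /usr/src/app

# Changes occasionally: dependency list, installed in its own cached layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Changes often: application source, kept last so the layers above stay cached
COPY . .

CMD ["python", "./app.py"]
```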

Below are a few commands that we use in the Dockerfile

  • FROM – which image should be used as the base image
  • WORKDIR – sets a folder as the working directory inside the container
  • ENV – sets an environment variable, which will be available inside the container
  • COPY – copies files from the local machine into the image
  • RUN – runs a command while building the image, like installing packages or setting permissions
  • EXPOSE – documents the ports the container listens on
  • ENTRYPOINT – becomes the prefix for any command passed from the command line when the container is created
  • VOLUME – creates a volume and attaches it to the container when the container is created
  • CMD – the command that gets executed when a container is created from this image. It should be written as array elements (node server.js becomes CMD ["node", "server.js"])
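A small sketch of how ENTRYPOINT and CMD combine (the echo command here is only an illustration):

```dockerfile
FROM alpine

# ENTRYPOINT is the fixed prefix; CMD supplies the default arguments
ENTRYPOINT ["echo"]
CMD ["hello"]

# docker run <image>        -> runs: echo hello
# docker run <image> world  -> runs: echo world (CMD is overridden)
```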

Sample Python file

# app.py
print("Hello, World!")

Dockerfile

# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the current directory contents into the container at /usr/src/app
COPY . .

# Run the Python script when the container launches
CMD ["python", "./app.py"]

Build the image: building an image may take a few minutes initially; later builds will reuse the cached layers

docker build -t image_name .
docker build -t python-hello-world-ubuntu .
# . denotes (current dir), the folder where the Dockerfile is present

#If you like to build without cache, use this
docker build --no-cache -t image_name .

Now you can list the images available on the machine and check that the image you created is present

docker images

Managing Containers

To stop a container:

docker stop container_id

To start a stopped container (you cannot start a removed container):

docker start container_id

The start command starts the container in detached mode. If you want to attach, use the -a argument with the start command:

docker start -a container_id

Attach to an already running detached container:

docker attach container_id

Inspect the image

docker inspect image_id

Delete a container. You cannot delete a running container; you have to stop it first, and only then can you delete it.

docker rm container_id1 container_id2 # and so on

Delete an image. You cannot delete an image if it is being used by a container, even if that container is stopped. So remove the container first, then delete the image.

docker rmi image_1 image_2 # and so on

Delete all unused images

docker image prune

If you want the container to be removed automatically when it stops, start it with the --rm argument:

docker run --rm image_name

Copy file local to a container (to be executed from local machine)

docker cp local_src container_id:/folder_path

Copy from docker container to local machine (to be executed from local machine)

docker cp container_id:/folder target

Naming images: you can name your image using the -t argument; the tag part is not mandatory

docker build -t name:tag

# name - name of the image
# tag - identifies the version; in other words, just the version

Rename an image

docker tag old_name:tag new_name:tag

Naming container

docker run --name container_name image_name

# container_name is given to container rather than random name

Stop Container

docker stop container_id1 container_id2 # so on

Delete containers

docker rm container_id1 container_id2 # and so on; stop the containers before removing them

Push an image to the docker hub

Create an account at hub.docker.com and remember the password. The images you create are public by default; you can go for a private repository if you are doing this for your organization.

Create a repository with a name; don't worry too much about the name because it lives under your account.

Log into docker from the terminal.

docker login

# Enter username & password.

Rename the Image tag to match the repository.

docker tag old_name:tag new_name:tag

Before you push the image, verify that the rename succeeded

docker images

Push the image

docker push new_name:tag

# new_name should be the same as the repository name

Done with pushing images to Docker Hub

Volumes

When data is generated inside a Docker container, that data is deleted when the container is removed. To retain such data, we use something called volumes.

There are two kinds of volumes

  • Named volume: retains data even when the container is removed
  • Anonymous Volume: deleted when a container is removed

Note: Editing volume data directly is not practical, as we would not know where the volume data is stored on our local machine.
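A quick sketch of the difference between the two kinds (requires a Docker daemon; mydata, /app/data, and image_name are placeholder names):

```shell
# Named volume: survives container removal and can be reused by new containers
docker run -d -v mydata:/app/data image_name

# Anonymous volume: only a container path is given, so Docker generates a random
# name; it is removed along with the container when the container uses --rm
docker run -d --rm -v /app/data image_name

# List volumes; the named one appears as "mydata"
docker volume ls
```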

Check volume:

docker volume ls

Named Volume:

The named volume data is retained even after the container is removed so we can reuse the data again when a new container is created.

docker run -d -v name:path_in_container image_name

# -v denotes a volume
# name is the volume name stored on the local machine
# path_in_container is the path whose data needs to be retained

Bind mount

In development, we make a lot of changes. In such cases, we have to rebuild the docker image again and again to reflect the changes. This is time-consuming.

Bind mount helps to sync the local folder and files with the docker container (not the image). So that all the changes we make can reflect on the containers.

Bind mount is also a command-line argument, mentioned with -v just like volumes; instead of a name, we provide an absolute path

# provide the below as a single command; the first -v is a named volume,
# the second -v is a bind mount

docker run -d -p 3000:80 \
  -v name:/container_path \
  -v /user/pavan/project/:/container_path \
  image_name

Note: The bind mount -v will replace all the files present in the container path with the local files. That means node modules or installed packages are also replaced.

So we have to use anonymous volumes to retain the folder we want.

Steps of what is happening:

  • We run the build command
  • The image is created, including the COPY and install INSTRUCTIONs
  • We create a container from this image
  • The container's contents include the installed packages
  • The anonymous volume backs up the container path, including the installations
  • Since we passed a -v bind mount, it replaces all folders and files, including the installations
  • The bind mount does not affect the anonymous volume
  • Next, we bring the anonymous volume back into the container, so that it replaces the bind mount completely or partly

We will be using one more -v (an anonymous volume) so that it overrides the bind mount for the installation folder only.

# all as a single command; the third -v is the anonymous volume
docker run -d -p 3000:80 \
  -v name:/container_path \
  -v "local_path/:/container_path" \
  -v /anonymous_volume_container_path \
  image_name

Before you run, delete the old container and image (optional, but without doing it, this did not work for me).

Then create an image and create a container.

Reflecting container changes back to local

With Bind mount, we learned how to reflect the local item into the container.

If you create a file or folder inside the container, it will reflect on our local machine without any extra commands. Log into the container and create a file, and it will appear locally.

docker exec -it container_id bash

cd /app

touch abc.txt # create file

Readonly bind mount

We saw that changing something locally reflects in the container and vice versa. In some cases, though, we want the container side to be read-only: we edit files only on the local machine and reflect them into the container, while no changes are allowed to those files from inside the container.

We have to append :ro after the bind mount's container path to make it read-only

docker run -d \
  -v named:/container_path \
  -v "local_abs_path:/container_path:ro" \
  -v /app/temp \
  image_name

Copy vs Bind mount

  • COPY: it is useful in production where there are no code changes
  • Bind Mount: it is good in the development environment where most of the changes happen

.dockerignore

When you copy files and folders into the image, everything is copied. If there are files you do not want copied, add an entry for them in a .dockerignore file; consider the below as the file content

Dockerfile
.git
# and so on

Networking:

Connecting Docker with different things is called networking. There are three types of connections available for Docker:

  • Connecting to the Internet from Docker
  • Connecting to a local machine from Docker
  • Connecting to a Docker from Docker

Connecting to the Internet from Docker:

You do not need to do anything special to connect to the Internet; by default, a Docker container can reach the Internet.

# run a container
docker run -d image_name

# get into the container
docker exec -it container_id bash

# check the internet connection (install ping if it is not present)
ping google.com

Connecting to Local Machine:

If you want to connect from Docker to a DB or web server running on the local machine, replace localhost or 127.0.0.1 with host.docker.internal in your code.

# before
mongodb://localhost:27017/sw

# after
mongodb://host.docker.internal:27017/sw

Docker to docker connection:

Run a docker image and inspect the container

docker container inspect container_id

Then look for "IPAddress" and use that value in place of localhost or 127.0.0.1. The IP address may look like "172.17.0.12"; it will be different on your machine.

# before
mongodb://localhost:27017/sw

# after
mongodb://ipaddress_value_from_inspect:27017/sw

Better docker-to-docker connection

In the above solution, whenever the IP address changes, we have to keep updating our files with the new IP address, and that is not a good way to develop.

To avoid this, we can create a network and attach containers to it; containers on the same network can talk to each other just by using the container name.

docker network create network_name # let's say abc as the network name

docker run -d --name mongodb --network abc mongo
# mongodb://mongodb:27017/sw # in your node code or wherever you have used it
docker run -d --name node --network abc node

Author:

Pavankumar Nagaraj is an automation testing expert with over 12 years of experience, specializing in tools like Selenium, Protractor, Puppeteer, and Playwright, and skilled in Java, Python, and JavaScript
