Parallel Testing with Docker Swarm – Pytest or Python

In this article, we will learn how to run our pytest scripts from a Docker image. After that, we will learn how to run that image on a Docker Swarm, so that our scripts execute across multiple machines.

Steps to achieve parallel testing with Docker Swarm:

  • Have your framework ready, meaning your scripts run fine from the command line
  • Install Docker on your machine (Swarm is installed along with Docker)
  • Create a Docker image that runs your tests whenever a container is created from it
  • Create/have the AWS EC2 (or similar) machines needed
  • Start Docker Swarm and connect the worker machines
  • Start the service on the swarm

Have your framework/scripts ready

Make sure your framework is ready: when you run it from the command line, it should work. For this Docker Swarm example, I am going to use the below pytest script.

# filename test_math.py
def test_addition():
    print(f"addition of 10+20 is {10+20}")
    # output -> addition of 10+20 is 30
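
For example, before involving Docker you can verify the script runs locally (assuming pytest is already installed on your machine); the -s flag makes pytest show the print output:

# run the script locally; -s prints the output of print() statements
pytest -s test_math.py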

Your pytest code runs fine on your system, but to run it in an empty container or machine, all the required Python packages must be installed there. We usually keep these packages in a requirements.txt file. For our example, only pytest is required.

# requirements.txt
pytest
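
Locally, these would be installed with pip (assuming pip3 is available); inside the container, the Dockerfile we write below takes care of this step:

pip3 install -r requirements.txt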

Install Docker on your machine

Before we proceed to create the Docker Swarm, we have to install Docker on the machine and create an image. This image should run your framework whenever a container is created from it (if the last two lines do not make sense yet, that is totally fine, just keep reading). The machine can be your local machine or an EC2 instance. I am going to use an EC2 instance for this example.

So install Docker according to your operating system. I am using an Ubuntu EC2 instance of type t2.xlarge.


If you are creating the key pair for the very first time, set the key permissions in the terminal (make sure you navigate to the folder that contains the key). Then SSH into the instance; you can get the SSH command from the Connect button in the AWS EC2 console.

chmod 400 "docker_swarm.pem". # setting key permission
ssh -i "docker_swarm.pem" [email protected]
# i have changed the ip address in above like abc, xyz

Become root on the machine using sudo su -

ubuntu@testercoder:~$ sudo su -
root@testercoder:~#

Open the browser, go to https://docs.docker.com/engine/install/ubuntu/, and follow the instructions to install Docker on Ubuntu (the steps differ for other operating systems).

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

To check whether Docker is installed properly, use the below command to list Docker containers; as of now, no container will be available.

docker ps
# output
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

If Docker did not install properly, use the below command to install it another way.

apt install docker.io
# for me, only this worked
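
Either way, you can confirm the installation by printing the Docker version:

docker --version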

Create a Docker image

Create a Dockerfile in a folder called “docker_example“. The Dockerfile will contain all the instructions on how the Docker image should be created.

root@testercoder:~# mkdir docker_example
root@testercoder:~# cd docker_example/
root@testercoder:~/docker_example# vi Dockerfile

Dockerfile Content for Simple Pytest

FROM specifies the base image from which our image is built; base images are ready-made images that can be pulled from Docker Hub. For our example, we will be using the Ubuntu image.

# Use an official Ubuntu as a base image
FROM ubuntu:20.04

We need to update the Ubuntu package lists and install the required packages; we need python3 and pip for this.

# Update the package repository and install necessary packages
RUN apt-get update && \
    apt-get install -y python3 python3-pip python3-venv && \
    apt-get clean

Set the working directory inside the container. We can specify any folder; if the folder is not present, it will be created.

# Set the working directory in the container
WORKDIR /src

For our example, we created the script file test_math.py at the beginning; this file needs to be copied into the container. Before that, make sure test_math.py is inside the test directory and the requirements.txt file sits outside it, next to the Dockerfile:

.
├── Dockerfile
├── requirements.txt
└── test
    └── test_math.py

Now add the COPY instruction to copy the test files.

# Copy the test directory contents into the container at /test
COPY test/. /src/test

Now, install whatever packages your pytest code needs. If you use a requirements.txt, copy it into the working directory first so pip can find it.

# Copy requirements.txt into the container and install the packages it lists
COPY requirements.txt /src/requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt

# if you want to install each package individually, use
RUN pip3 install pytest
RUN pip3 install paramiko
# ... and so on

Now, at the end of the Dockerfile, we define what should happen the moment a container is created from this image. Use the CMD instruction for this. (If you also want pytest’s print output and verbose results to appear in the logs later, you can add options here, e.g. CMD ["pytest", "-s", "-v"].)

CMD ["pytest"]

The complete Dockerfile:

FROM ubuntu:20.04
RUN apt-get update && \
    apt-get install -y python3 python3-pip python3-venv && \
    apt-get clean
WORKDIR /src
# copies the whole build context (including the test folder) into /src/test,
# which is why pytest later reports the path as test/test/test_math.py
COPY . /src/test
RUN pip3 install pytest
CMD ["pytest"]

Now let’s create an image from this.

root@testercoder:~/docker_example# pwd
/root/docker_example
root@testercoder:~/docker_example# ls
Dockerfile  requirements.txt  test

docker build -t image_name .   # the trailing . tells docker the Dockerfile is in this directory

# actual one
docker build -t basic_pytest_image .

The output of the command is shown below; each instruction is executed one after the other. The first time you try it, it may take a few minutes (~5 to 20 mins) depending on the number of instructions in the Dockerfile; from the second time onwards, Docker reuses layers from the cache, so it will be faster:

root@testercoder:~/docker_example# docker build -t basic_pytest_image .
Sending build context to Docker daemon  11.26kB
Step 1/6 : FROM ubuntu:20.04
 ---> 9df6d6105df2
Step 2/6 : RUN apt-get update &&     apt-get install -y python3 python3-pip python3-venv &&     apt-get clean
 ---> Running in e69eb4df8881
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
Get:2 http://security.ubuntu.com/ubuntu focal-security InRelease [128 kB]
Get:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease [128 kB]

Finally, list the images with docker images to see the result:

root@testercoder:~/docker_example# docker images
REPOSITORY           TAG       IMAGE ID       CREATED          SIZE
basic_pytest_image   latest    69744572146e   28 seconds ago   451MB    <--- this is our image
ubuntu               20.04     9df6d6105df2   3 weeks ago      72.8MB

Create a container from the image

Now, when we create a container from the image, the pytest code should run automatically because we have given the pytest command as part of CMD. Run the container in attached mode with docker run basic_pytest_image; you should see output like the below. Only if you see the pytest output is your image ready.

root@testercoder:~/docker_example# docker run basic_pytest_image
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-8.3.2, pluggy-1.5.0
rootdir: /src
collected 1 item

test/test/test_math.py .                                                 [100%]
============================== 1 passed in 0.01s ===============================
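
Optionally, you can add the --rm flag so the exited container is removed automatically once the test run finishes:

docker run --rm basic_pytest_image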

Create/Have the EC2 (or similar) machines needed

We will be using 2 more EC2 instances to make the Docker Swarm. I would recommend creating a launch template from the current EC2 instance and creating the further machines from it, so that we do not have to install Docker again and again when creating multiple EC2 instances for the swarm.


Before creating further machines, let's rename our existing machine to Manager (the one that manages all the workers). If you like, you can use the existing machine as a worker and any other machine as the manager.

Now let's create 2 more EC2 instances from the launch template, so we have 1 manager and 2 workers.


Right-click on the template, choose “Launch Instance from this Template”, and select the Launch Instance button.


Finally, you have the EC2 instances as shown below. It does not matter yet which one is a node and which is the manager, as we have not defined that.

[Screenshot: the three EC2 instances in the AWS console]

You need to make sure all the machines can ping each other; otherwise, it is not possible to set up the Docker Swarm.

root@ip-172-31-1-39:~# ping 13.127.140.177
PING 13.127.140.177 (13.127.140.177) 56(84) bytes of data.
64 bytes from 13.127.140.177: icmp_seq=1 ttl=63 time=2.05 ms
64 bytes from 13.127.140.177: icmp_seq=2 ttl=63 time=1.23 ms
64 bytes from 13.127.140.177: icmp_seq=3 ttl=63 time=1.18 ms

If the ping is not working, make sure all the instances use the same security group and that the security group allows the traffic below (it is also fine if they use different security groups, as long as each one allows it). For Docker Swarm, the machines need to reach each other on TCP port 2377 (cluster management), TCP/UDP port 7946 (node communication), and UDP port 4789 (overlay network traffic), plus ICMP if you want ping to work:

[Screenshot: security group inbound rules allowing traffic between the swarm machines]

Start docker swarm and connect worker machines

Let's decide the manager and the nodes:

Now we can decide which machine is the manager: rename 1 machine as Manager and the other 2 as nodes, then SSH into the Manager.

Become root, make sure Docker is installed, and run the below command to make the machine a manager.

docker swarm init --advertise-addr <MANAGER-IP>

# for our machine
docker swarm init --advertise-addr 13.abc.67.bbc
# abc and bbc are just for masking ip address

The output shows a join token for adding workers; it also says the current machine has become the manager.

root@testercoder:~# docker swarm init --advertise-addr 13.abc.67.bbc
Swarm initialized: current node (urr4fux1ulsso7b9nemp5jxyi) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0znn9jh1yo1b5y3ijpn44e981eaeqvmlzb26bsjnxlni3 13.abc.67.bbc:2377

Now SSH into both node machines and execute the below command (which we saw in the above output).

root@node1:~# docker swarm join --token SWMTKN-1-1i1bqtp7p1fo657xpbs4oqo62z4u-cpsd2oq3f6gbdzga6c 52.abc.237.bbc:2377
This node joined a swarm as a worker.
root@node1:~#

We can verify the Docker Swarm setup by logging in (via SSH) to the Manager and executing the command docker node ls:

root@manager:~# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
z3mxqq3o071el7mm09uz0vbqq *   manager    Ready     Active         Leader (Manager) 27.2.0
o9bywlh04e5orljagy8d7g6jw     node1      Ready     Active                          27.2.0
4efewx70bcqonsnkrc1anro56     node2      Ready     Active                          27.2.0
root@manager:~#

Start the service on the swarm

Before we can start executing, we need to make sure an image with the same name is present on the manager and on all the nodes. This doesn't mean your Python scripts must have the same content on every machine, but the image name should match. Copy everything, including the Dockerfile and the test files, to each machine.

[Screenshot: the same project files present on the manager and both nodes]

We are then going to edit the script on the nodes, so that different content gets executed on different machines.

First, copy the code into all three machines either using SSH copy (scp) or using git clone.
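
For example, from the manager you could push the project folder to a node with scp (the key path, user, and IP below are placeholders; replace them with your own):

# copy the project folder from the manager to node1
scp -i docker_swarm.pem -r ~/docker_example ubuntu@<node1-ip>:~/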

Manager -> test_math.py
def test_addition():
    print(f"addition of 10+20 is {10+20}")
    # output -> addition of 10+20 is 30

Node1 -> test_math.py
def test_multiplication():
    print(f"multiplication of 10*20 is {10*20}")
    # output -> multiplication of 10*20 is 200

Node2 -> test_subtraction.py    (note the file name is different)
def test_sub():
    print(f"subtraction of 10-20 is {10-20}")
    # output -> subtraction of 10-20 is -10

Build the image on each machine using the docker build command: docker build -t basic_pytest_image .

[Screenshot: docker images showing the same image name on the manager and both nodes]

Setting up the docker swarm service

Now SSH into the manager and execute the below command to start running the Python script across nodes.

docker service create --name service_name --replicas 3 image_name

# actual one
docker service create --name pytest_tester_coder_example --replicas 3 basic_pytest_image

The output looks something like this:

root@manager:~/docker_example# docker service create --name pytest_tester_coder_example --replicas 3 basic_pytest_image

uu9y57iunl893q514fzg5opxz
overall progress: 1 out of 3 tasks
1/3: ready     [======================================>            ]
2/3: running   [==================================================>]
3/3: ready     [======================================>            ]
verify: Detected task failure

(The "Detected task failure" message is expected here: the container exits as soon as pytest finishes, and by default a swarm service restarts tasks that exit, so the verification step reports those exits as failures even though the tests did run.)

You can see the list of services running using

docker service ls
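
To see which node each task (replica) was scheduled on, you can also run:

docker service ps pytest_tester_coder_example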

Get the logs of the service

docker service logs name_of_service

# actual one
docker service logs pytest_tester_coder_example

Output of the service logs

pytest_tester_coder_example.3.5rscaixkgbxw@node2    | ============================== 1 passed in 0.01s ===============================
pytest_tester_coder_example.3.pjq30awmn2o5@node2    |
pytest_tester_coder_example.3.pjq30awmn2o5@node2    | ============================== 1 passed in 0.01s ===============================
pytest_tester_coder_example.1.wcl5nv4bfczu@manager    | test/test/test_math.py .                                                 [100%]
pytest_tester_coder_example.1.wcl5nv4bfczu@manager    |
pytest_tester_coder_example.1.wcl5nv4bfczu@manager    | ============================== 1 passed in 0.01s ===============================
pytest_tester_coder_example.1.luqgly5ubb9m@manager    | test/test/test_math.py .                                                 [100%]
pytest_tester_coder_example.1.luqgly5ubb9m@manager    |
pytest_tester_coder_example.1.luqgly5ubb9m@manager    | ============================== 1 passed in 0.01s ===============================
pytest_tester_coder_example.2.7j7fxuge1ox7@node1    | ============================= test session starts ==============================
pytest_tester_coder_example.2.7j7fxuge1ox7@node1    | platform linux -- Python 3.8.10, pytest-8.3.2, pluggy-1.5.0
pytest_tester_coder_example.2.7j7fxuge1ox7@node1    | ============================== 1 passed in 0.00s ===============================

Author:

Pavankumar Nagaraj

Pavankumar Nagaraj is an automation testing expert with over 12 years of experience, specializing in tools like Selenium, Protractor, Puppeteer, and Playwright, and skilled in Java, Python, and JavaScript.
