Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels.
When we build an application stack, different kinds of components need to be added, as shown in the diagram below. We have Gunicorn as the application server, PostgreSQL as the database, and Redis for messaging. This approach brings several problems:
- We must deal with OS compatibility for each component.
- We must set up the same infrastructure for every newly onboarded team member.
- It takes a long time to set up.
What is the “matrix from hell”? Put simply, it is the challenge of packaging any application, regardless of language, framework, or dependencies, so that it can run on any cloud, regardless of operating system, hardware, or infrastructure.
Docker helps us modify or change these components without affecting the others. Docker can run each component in a separate container with its own dependencies and its own libraries. We only have to configure it once; anyone else can simply run the container using a single `docker run` command.
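As a sketch of that idea, each component of the stack above can be started as its own container. The `postgres` and `redis` images are the official ones on Docker Hub; the app image `myapp` is hypothetical, and the docker lines are commented out because they need a running Docker daemon:

```shell
# Each component in its own container, with its own dependencies:
# docker run -d --name db    postgres   # database
# docker run -d --name cache redis      # messaging
# docker run -d --name web   myapp      # Gunicorn app (hypothetical image)
components="db cache web"
echo "stack containers: $components"
```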
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
A container only lives as long as the process inside it is alive; if the service inside the container stops or crashes, the container stops.
Containers are completely isolated environments. Each container has its own processes, its own network interfaces, and its own mounts, yet all containers share the same OS kernel.
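One way to see this shared kernel in practice (a sketch: the container command assumes Docker and the `ubuntu` image are available, so it is left commented):

```shell
# The host and every container on it report the same kernel version,
# because containers share the host's OS kernel:
host_kernel=$(uname -r)
echo "host kernel: $host_kernel"
# docker run --rm ubuntu uname -r   # would print the same version string
```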
The container concept existed for roughly ten years before Docker, in the form of LXC, LXD, and LXCFS. Early versions of Docker were built on top of LXC.
LXC and LXD are two important acronyms to know if you’re into containers. Unfortunately, they’re also acronyms that are hard to keep straight from one another. They sound alike. They refer to similar platforms, which were created in large part by the same company. And they’re deeply intertwined with one another at a technical level.
If this sounds confusing, it’s because it is, at least at first. Fortunately, with a little bit of explaining, it’s easy enough to understand LXC, LXD and what they mean for admins and developers who want to use containers.
This post explains what LXC and LXD are, what’s different between them, and why developers or admins would want to use them—or, alternatively, why they might prefer to stick with Docker or CoreOS.
- Community edition: a set of free products.
- Enterprise edition: a certified and supported container platform with add-ons:
  - Image management
  - Image security
There are lots of containerized applications readily available; you can browse all available images on Docker Hub. Today most organizations have their products containerized and published in the public repository. (Each time the Docker host runs a container, it assigns a unique name to that container.)
> docker run mongo

This command runs a MongoDB instance on the Docker host (the official image is named `mongo`). You can run as many instances as you want; just repeat the same command:

> docker run mongo
> docker run mongo
An image is a package or template, much like a VM template; it is used to create one or more containers.
Containers are running instances of images; each one is isolated and has its own environment.
Step 1: Go to the Docker documentation.
Step 2: You can download and install Docker on multiple platforms. Choose your OS.
Step 3: Follow the instructions.
Step 4: Once the installation is complete, run a simple hello-world image to test it.
Step 5: In your terminal, run:
> docker run hello-world
The first time we run an image, Docker checks for it on the local machine; if it cannot find the image locally, it pulls it from Docker Hub and stores it locally.
We can pull a specific version of an image by passing the image name with a tag. The default tag is always `latest`.
> docker pull mysql (this pulls the latest image)
> docker pull mysql:5.6 (this pulls the 5.6 image)
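The `name:tag` convention itself can be illustrated with plain shell string handling, no Docker required (the `mysql` reference mirrors the commands above):

```shell
ref="mysql:5.6"
name="${ref%%:*}"    # part before the colon -> image name
tag="${ref##*:}"     # part after the colon  -> tag
echo "image '$name', tag '$tag'"

# When no tag is given, Docker assumes "latest":
ref2="mysql"
case "$ref2" in
  *:*) tag2="${ref2##*:}" ;;
  *)   tag2="latest" ;;
esac
echo "image '$ref2' resolves to tag '$tag2'"
```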
You can find all available tags on the image's details page on Docker Hub.
`docker ps` is the essential command for listing Docker containers in the running state. "ps" stands for "process status": the `ps` command reports process status on Unix-like operating systems, and Docker borrowed the naming convention from there.
> docker ps
This command will print all running containers with basic details such as image name, container id etc.
> docker ps -a
This command prints both currently running and previously stopped containers, with basic details such as image name and container ID.
> docker images
This command lists all images pulled and stored on your machine, with basic details such as repository name, image tag, image ID, and image size.
> docker stop <container-id>

This command stops the running container with the given container ID. The stopped container still consumes disk space until it is removed.
> docker rm <container-id>

This command removes a container that is not running and frees the disk space it was using.
> docker rmi <image>

This command removes the image from the Docker host. Make sure you have stopped and removed all containers based on this image before removing it.
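The stop/rm/rmi commands chain naturally into a cleanup sequence. A hedged sketch, assuming a hypothetical image named `myimg:v1`; it is guarded so it degrades to a message on machines without Docker:

```shell
IMAGE="myimg:v1"   # hypothetical image name
if command -v docker >/dev/null 2>&1; then
  # every container (running or stopped) created from this image
  ids=$(docker ps -a -q --filter "ancestor=$IMAGE")
  if [ -n "$ids" ]; then
    docker stop $ids          # stop them; they still use disk space here
    docker rm $ids            # remove them and reclaim that space
  fi
  docker rmi "$IMAGE" 2>/dev/null || true   # now the image can go too
  cleaned="attempted"
else
  cleaned="skipped (docker not installed)"
fi
echo "cleanup: $cleaned"
```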
> docker exec <container-id> <command>

The exec command lets you execute a command inside a running Docker container.
When we run a container, it runs in attached mode by default, which means your prompt is held by that container for as long as it is running.

If you pass `-d` with the run command, the container runs in detached mode: the container process runs in the background. To stop it, list the containers, find its container ID, and pass that ID to `docker stop`. To get the container's prompt back, use:

> docker attach <container-id>
A Docker container does not wait at a prompt; it simply prints whatever the application writes to the terminal. That is because, by default, a Docker container does not listen to standard input: even though you are attached to its console, it cannot read any input from the user (it does not run in interactive mode). If you want to provide input, you must map the standard input of your host to the Docker container using the following command:

> docker run -i <image>
In the example below, I have created a small shell script that takes input from the user and prints a simple message as output, and then created a Docker image for this shell script.
Creating the shell script, myscript.sh:

echo "What is your name?"
read PERSON
echo "Hello, $PERSON"
Creating the Dockerfile:

FROM ubuntu
COPY myscript.sh /myscript.sh
CMD ["bash","./myscript.sh"]
As you can see, I have created a Docker image based on ubuntu. To learn more about Dockerfile commands, please see the Dockerfile reference.
Build the Docker image:

> docker build -t myimg:v1 .
> docker build [-t <name>:<tag>] <path>
Suppose we have a Docker image containing a simple web application, and it is running on the Docker host with the application listening on port 8000. How will users reach this web page? Every Docker container gets an IP assigned by default, but this IP is only accessible within the Docker host: if you open a browser inside the Docker host you can reach the app at this internal IP, but users outside the host cannot. For external access we use the IP of the Docker host instead, say 192.168.1.64, and for that to work we must map the port inside the Docker container to an available port on the Docker host:
> docker run -p 80:8000 < image >
Using the `-p` parameter with the run command as above, users can access the web app from outside the Docker host at 192.168.1.64:80; all traffic to that port is routed to port 8000 inside the Docker container.
We can run the same web app on multiple ports using this port mapping:
> docker run -p 81:8000 < image >
> docker run -p 82:8000 < image >
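Since the three run commands differ only in the host port, they can be generated in a loop. A sketch: the image name `webapp` is hypothetical, and the docker line is commented because it needs a running daemon and that image:

```shell
mapped=""
for host_port in 80 81 82; do
  # docker run -d -p "$host_port":8000 webapp
  mapped="$mapped$host_port:8000 "
  echo "host port $host_port -> container port 8000"
done
echo "mappings: $mapped"
```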
Full Stack Developer - NeenOpal Analytics @Anshuman Sharma