How to create a package using Docker

When we talk about packages, we generally mean reusable, self-contained blocks of software or libraries. A package can be used by multiple applications, which import or extend the package itself.

Docker is a tool that will make your life as a developer easier. It is an open-source project that automates the deployment of applications inside software containers; in essence, a platform that enables you to create, test and distribute applications via containers.

Docker Package: the concept of “image”

In Docker, the concept of a package corresponds to the Docker image. A Docker image contains a set of instructions for creating containers, which are running instances of that software. Bear in mind that you can create multiple containers from the same image.

For example, if I wanted to create multiple instances of nginx to handle a large number of HTTP requests, I could start them all from the same nginx image.

Following the same example, you may be wondering how – starting from the nginx public image – you can apply a custom configuration to your web server.

It’s very simple: you create a custom image that starts from the nginx base image and adds your custom nginx configuration.

To do this, we create a new project which we will call myapp. Inside the project, we create a file called Dockerfile, and a config folder with the default.conf file.
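The layout can be created directly from the terminal; a minimal sketch using the names above:

```shell
# Create the project skeleton described above
mkdir -p myapp/config
touch myapp/Dockerfile
touch myapp/config/default.conf

# Verify the resulting layout
ls -R myapp
```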


Let’s create our configuration within the config/default.conf file. The example contains a dummy configuration in which requests are forwarded to a PHP server.
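A minimal sketch of such a configuration, written from inside the project folder; the PHP-FPM hostname `php` and port 9000 are placeholder values:

```shell
# Write a dummy nginx configuration that forwards PHP requests
# to a PHP-FPM backend ("php:9000" is a placeholder upstream)
mkdir -p config
cat > config/default.conf <<'EOF'
server {
    listen 80;
    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
EOF
```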


Now let’s create the new image creation steps inside the Dockerfile.
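A sketch of such a Dockerfile; the base image tag is our choice, and the final RUN step is purely illustrative:

```dockerfile
# Step 1: start from the official nginx base image
FROM nginx:latest

# Step 2: copy the custom configuration into nginx's config directory
COPY config/default.conf /etc/nginx/conf.d/default.conf

# Step 3: optional, included only to demonstrate RUN
RUN echo "custom nginx image"
```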


In this Dockerfile, we perform three steps, the last of which is optional and serves only to demonstrate the RUN command. Starting from the top, the FROM command indicates which image to use as the base for the new image. COPY takes two arguments: the file to be copied and its final destination within the image; this is how we copy our custom nginx configuration to the appropriate folder inside the image.
In addition to copying files, you can execute commands with the RUN command. This is useful for automating the installation of additional software, among other purposes.

There are many other commands available in a Dockerfile; the complete reference can be found in the official Dockerfile documentation, a most valuable resource if you intend to work with Docker seriously.

Launching the Docker Build Command

Once the files have been created, from the terminal you can simply launch the docker build command to create the new custom image.
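Assuming the terminal is open in the project folder, the command might look like this (the tag `myapp` is our choice; note the trailing dot, which indicates the build context):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t myapp .
```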


During the build, the three steps described in the Dockerfile are executed in sequence. The -t argument of the docker build command enables you to name the image. At this point, our image will be available for use:
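You can verify this by listing the local images:

```shell
docker image ls
```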


With this mini-guide you should now be able to create an image with Docker yourself. See the reference we mentioned in the previous paragraph for more details and start experimenting!

How to deploy with Docker

Docker is a platform that provides a virtualisation system on which you can run programs in packages called containers.

Containers are isolated from each other and include all the software resources the application needs to run, including the operating system's user-space components.

Deploy Docker: Deploy an application with Docker using Docker Image

To deploy an application with Docker, you must first have an image available.

The image, or Docker image, is a read-only artefact that contains a set of instructions for creating a container that can run on the Docker platform. The image provides a particularly convenient method for creating preconfigured application packages and server environments.

The Docker image can be created using a Dockerfile and with the docker build command.
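As a sketch, assuming a Dockerfile in the current directory and `myapp` as the image name of our choice:

```shell
docker build -t myapp .
```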


We can find the newly created image in the image list with the docker image ls command.

To deploy the application, you will need to have an Ubuntu Server instance with Docker installed. This instance can be on a local or remote virtual machine, or it can be on a dedicated physical machine.

Note that the virtual machine where the deployment will be performed must have access to the created image. To do this, you must copy the image to a docker registry accessible via the internet. There are many services available for this, including Docker Hub, Amazon ECR, and Google Container Registry.

Furthermore, we will need to create an account to obtain the registry URL and be able to upload Docker images. At this point, you can upload the image using the docker push command:
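A sketch using Docker Hub, where `myuser` is a hypothetical account name:

```shell
# Tag the local image with the registry namespace, authenticate, then push
docker tag myapp myuser/myapp:latest
docker login
docker push myuser/myapp:latest
```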


Once this has been done, we access the Ubuntu Docker machine to get the image from the registry:
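Assuming the image was pushed as `myuser/myapp` (a hypothetical name):

```shell
docker pull myuser/myapp:latest
```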


As soon as we have the image, we can launch a container with the docker run command:
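For example, running in the background and mapping container port 80 to host port 80 (the container name and image name are hypothetical):

```shell
docker run -d --name myapp -p 80:80 myuser/myapp:latest
```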


Deploy Docker: running commands from the container

The -p option maps container port 80 to a port on the host, making it accessible from the outside. Now, if you run the docker ps command, you should see the newly launched container in the list.

To execute commands from the container that is running, you can run the docker exec command. By running the bash or sh command, you may use the shell inside the container as follows:
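For example, assuming a container named `myapp`:

```shell
# Open an interactive shell inside the running container
docker exec -it myapp bash

# If bash is not available in the image, fall back to sh
docker exec -it myapp sh
```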

Scaling multiple containers dynamically: container orchestration tools

This is the simplest way to deploy a Docker container. What if we want to run multiple containers and make them interact with each other? What if we want to scale the number of containers dynamically?

To do this, we need a container orchestration tool. There are many, such as Docker Swarm, Kubernetes or Amazon ECS, topics that we will cover in depth in future articles.

We hope that this mini-guide has been of help to you and, above all, that it has given you an understanding of how to create an image and deploy a container.

Docker is now an essential tool for anyone involved in web development and server-side programming.

Docker makes it possible to distribute code more quickly. It also standardises the way applications run and streamlines the transfer of code between environments.

How to install Docker: a mini practical guide

Installing Docker on your machine to create a development environment is really a very simple operation. Let’s see how to install it on Windows, Mac and Ubuntu.

How to install Docker: Windows, Mac and Ubuntu

In this mini-guide, we shall see how to install Docker on Windows, Mac and Ubuntu.

Installing Docker on Windows

Before starting the installation, make sure that your version of Windows is either Pro, Enterprise or Education.

If you have Windows Home, you’ll need to install WSL 2, a subsystem for Windows that enables you to create a Linux environment on which Docker can run.

WSL 2 Installation Guide:

At this point, download the installer: the Docker Desktop for Windows installer is available on Docker Hub.
Double-click the .exe executable and follow these steps:

  • When prompted, make sure you enable the Hyper-V option.
  • Follow the installer instructions and authorise it to proceed.
  • Complete the process and close the installer.

To launch Docker, search for “Docker” and choose “Docker Desktop” from the search results.


Installing Docker on Mac

As with Windows, to install Docker on Mac, you need to download the installer from Docker Hub.

As a minimum requirement, your computer must not be a pre-2011 model, and you must have a version of macOS equal to or later than 10.14.

Note that if your model is very recent and features a new generation M1 processor, no stable version of Docker is available as of the date of writing this article. You may opt to download a version still under development called Apple M1 Tech Preview. Carefully read the “Known issues” section, where the limitations and known problems are listed.

Once you have downloaded the .dmg installer, double-click it. A window will open showing the Docker icon and the Applications folder; drag the Docker icon into the Applications folder.


Now search for Docker from Spotlight (cmd + space) and launch it.

You should be able to see the Docker icon on the top right status bar.


When the icon stops moving, the startup process has completed.

Installing Docker on Ubuntu

To install Docker on Ubuntu, you need to follow some command line steps.

  1. Install a number of packages needed to use a repository over HTTPS:

    $ sudo apt-get update
    $ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

  2. Add the official Docker GPG key:

    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

  3. Add the repository:

    $ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

    Note that you can set a different value for "arch" if your processor has an architecture other than amd64, such as armhf or arm64.

  4. Install Docker Engine:

    $ sudo apt-get update
    $ sudo apt-get install docker-ce docker-ce-cli containerd.io

  5. Docker installation is now complete.

What is Docker and how this valuable tool works

Docker is the platform that has revolutionised the way we build, test, and deploy our applications. Why? Because thanks to Docker – the open-source software for containerization in Linux environments – we can distribute and fine-tune the resources for our applications in any environment.

In this article, we review the main features of Docker: when to use it, how Docker works, and the difference between Docker and a virtual machine.
Join us!

Let’s see Docker in detail

Its configuration language, the Dockerfile syntax, is simple and intuitive, which enables us to keep our resources under control at all times.
The main feature of Docker is that it enables you to distribute code more quickly, standardise the execution of applications and streamline code migration. Understandably, this saves us a great deal of time, energy and money.
And it doesn’t stop there; thanks to Docker:

  • Docker users ship software roughly seven times more frequently than non-users;
  • using containers simplifies deployment, problem identification and rollback for system recovery;
  • moving applications from local development machines to production deployments on AWS is also easier.

How does Docker work?

In this section, we shall look in greater detail at how Docker works, without getting too involved in technicalities.

As we have seen in the previous paragraphs, Docker is the open-source software platform that provides a standard method for creating, running and deploying applications through the use of containers.
A container is an isolated environment in which the application runs with its own view of the operating system, while sharing dynamically allocated hardware resources (CPU and RAM) with the host.
This functionality is supported and simplified by services such as Docker Compose or Kubernetes, which facilitate the execution and management of containers.

The other notable advantage is that Docker containers simplify the execution of server-side code and improve resource utilisation.

What is the difference between Docker and a virtual machine?

Virtual machines and containers differ in several ways, but the main difference is that containers virtualise the operating system, so that multiple workloads can run on a single OS instance, whereas with VMs the hardware is virtualised to run multiple instances of the operating system. The speed, agility and portability of containers have secured their place as tools that simplify software development.