Building a Highly Available NGINX Container with Docker

Ashish Dwivedi
4 min read · Jul 7, 2023

Introduction: In today’s fast-paced digital world, high availability is a critical requirement for web applications. NGINX, a powerful web server and reverse proxy, can be easily containerized using Docker to ensure reliability and scalability. In this article, we will guide you through the process of running a highly available NGINX container using Docker, with a focus on the following command:

docker run -d --name nginx --restart unless-stopped -p 80:80 --memory 1G --memory-reservation 250M nginx:1.18.0

Understanding the Command: Let’s examine the components of the command and delve into their functionalities:

  1. -d: This flag instructs Docker to run the container in detached mode, so it operates in the background independently of the command line interface. Running NGINX as a daemon keeps it continuously available and allows maintenance tasks to be performed without tying up the terminal.
  2. --name nginx: The --name flag assigns a custom name to the NGINX container. Here the name “nginx” is used, making the container easier to manage and identify within the Docker environment.
  3. --restart unless-stopped: This flag ensures that the NGINX container restarts automatically after a failure, unless it has been explicitly stopped. It also instructs Docker to start the container automatically when the Docker daemon itself starts, so NGINX comes back up after system reboots. Alternative values for the --restart flag are “no”, “on-failure”, and “always”, each providing different restart behavior based on specific conditions.
  4. -p 80:80: The -p flag forwards traffic from the host’s port 80 to the container’s port 80. This port mapping exposes the NGINX container on the host network, enabling external access to the web services provided by NGINX.
  5. --memory 1G: This flag limits the memory consumption of the NGINX container to 1 gigabyte. If the container exceeds this limit, the kernel’s OOM killer terminates it, and Docker then applies the configured --restart policy, so the container keeps operating within the defined memory constraints.
  6. --memory-reservation 250M: This flag sets a soft memory limit of 250 MB for the NGINX container. When the host runs low on memory, Docker attempts to reclaim memory from the container down toward this reservation, guaranteeing it a minimum allocation and preventing it from being starved. This is particularly useful in resource-constrained environments where multiple containers compete for memory.
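Once the container is started, the settings from the command above can be verified with `docker ps` and `docker inspect` (a sketch; it requires a running Docker daemon and uses only the flags already shown):

```shell
# Start the container exactly as described above
docker run -d --name nginx --restart unless-stopped \
  -p 80:80 --memory 1G --memory-reservation 250M nginx:1.18.0

# Confirm the container is up and the 80:80 port mapping is in place
docker ps --filter name=nginx

# Inspect the restart policy and memory limits Docker recorded (values in bytes)
docker inspect nginx --format \
  'Restart={{.HostConfig.RestartPolicy.Name}} Memory={{.HostConfig.Memory}} Reservation={{.HostConfig.MemoryReservation}}'
```

The `--format` template reads the same `HostConfig` fields Docker populates from the run flags, which makes it easy to script checks against a fleet of hosts.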

Docker provides a wide range of options and flags that can be used to customize the behavior and configuration of containers. Here is a comprehensive list of some commonly used options and flags:

Container Management:

$ -d, --detach: Run the container in detached mode, allowing it to run in the background.
$ --name: Assign a custom name to the container.
$ --restart: Specify the restart policy for the container (e.g., "no", "on-failure", "always", "unless-stopped").
$ --rm: Automatically remove the container when it exits.
$ -e, --env: Set environment variables inside the container.
$ -v, --volume: Mount a directory or file from the host into the container.
$ --network: Connect the container to a specific network.
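Several of these management flags are commonly combined in a single `docker run`. The sketch below is illustrative: the network name, container name, and mounted config path are hypothetical, and the local `nginx.conf` is assumed to exist:

```shell
# Hypothetical names throughout: "webnet", "nginx-demo", "./nginx.conf"
docker network create webnet

docker run --rm --name nginx-demo \
  -e TZ=UTC \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  --network webnet \
  nginx:1.18.0
```

With `--rm`, the container is cleaned up automatically on exit, which suits throwaway test runs; omit it for long-lived services that should survive restarts.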

Resource Management:

$ --cpu-shares: Allocate CPU shares to the container.
$ --cpu-quota: Set a CPU quota for the container.
$ --memory: Limit the container's memory usage.
$ --memory-swap: Set the container's total memory limit (including swap).
$ --memory-reservation: Set a soft limit for the container's memory.
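As a sketch of the resource flags working together (the container name and the exact limits are illustrative), the following caps the container at roughly half a CPU and 512 MB of RAM with no extra swap:

```shell
# --cpu-quota 50000 with the default 100000µs period = 0.5 CPU
# --memory-swap equal to --memory disables additional swap usage
docker run -d --name nginx-limited \
  --cpu-shares 512 \
  --cpu-quota 50000 \
  --memory 512M \
  --memory-swap 512M \
  --memory-reservation 256M \
  nginx:1.18.0
```

Note that `--cpu-shares` is a relative weight that only matters under CPU contention, while `--cpu-quota` is a hard ceiling.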

Port Mapping and Networking:

$ -p, --publish: Map a port from the container to the host.
$ -P, --publish-all: Publish all exposed ports to random ports on the host.
$ --expose: Document a port as available from the container without publishing it to the host.
$ --link: Link the container to another container (a legacy feature; user-defined networks are preferred).
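The difference between `-p` and `-P` is easiest to see side by side (illustrative container names; requires a Docker daemon):

```shell
# Fixed mapping: host port 8080 forwards to container port 80
docker run -d --name nginx-fixed -p 8080:80 nginx:1.18.0

# Publish every EXPOSEd port to a random high host port
docker run -d --name nginx-random -P nginx:1.18.0

# Show which host port Docker actually assigned
docker port nginx-random
```

`-P` is handy in test environments where port collisions must be avoided; production setups usually prefer the predictability of `-p`.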


Security:

$ --cap-add, --cap-drop: Add or drop Linux capabilities.
$ --privileged: Give the container full access to the host's devices and kernel capabilities (use with caution).
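A common hardening pattern is the inverse of `--privileged`: drop everything and add back only what the workload needs. The capability set below is an assumption for the official NGINX image (which starts as root, then drops privileges) and may need adjusting for other images:

```shell
# Drop all capabilities, then restore only those NGINX typically needs:
# binding port 80 plus chown/setuid/setgid for its privilege drop
docker run -d --name nginx-hardened \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --cap-add CHOWN --cap-add SETUID --cap-add SETGID \
  -p 80:80 nginx:1.18.0
```

If the container fails to start, `docker logs` usually reveals which capability is still missing.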

Logging and Monitoring:

$ -t, --tty: Allocate a pseudo-TTY for the container.
$ --log-driver: Specify a logging driver for the container.
$ --log-opt: Set logging options for the container.
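For example, the default json-file driver can be combined with `--log-opt` to enable log rotation, preventing a long-running NGINX container from filling the host disk (container name illustrative):

```shell
# Rotate logs at 10 MB, keeping at most 3 files per container
docker run -d --name nginx-logged \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx:1.18.0
```

Without `max-size`, the json-file driver grows unbounded, which is a common cause of full disks on busy hosts.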

Container Interaction:

$ docker exec: Run a command in a running container.
$ docker logs: Fetch the logs of a container.
$ docker attach: Attach to a running container.
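Against the "nginx" container started earlier, these subcommands look like this (a sketch; requires the container to be running):

```shell
# Validate the NGINX configuration inside the running container
docker exec -it nginx nginx -t

# Show the last 20 log lines, then follow new output
docker logs --tail 20 -f nginx

# Reload NGINX gracefully without restarting the container
docker exec nginx nginx -s reload
```

Prefer `docker logs` over `docker attach` for observation: detaching from `attach` incorrectly (without the detach key sequence) can stop the container.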

These are just a few examples of the many options and flags available in Docker. Each option provides specific functionality to enhance container management, resource allocation, networking, security, logging, and container interaction. Exploring and utilizing these options will allow you to tailor the behavior of your Docker containers to meet your specific requirements.

Additional Considerations for High Availability: To further enhance the high availability of the container, several additional considerations can be taken into account:

  1. Container Orchestration: Utilize container orchestration platforms like Kubernetes or Docker Swarm to manage multiple containers across a cluster of machines. These platforms provide automatic scaling, load balancing, and self-healing capabilities, ensuring continuous availability even in the face of failures.
  2. Health Checks: Implement health checks within the container to monitor its status and detect potential issues. Docker provides health check capabilities that allow you to define custom checks, ensuring that the container is in a healthy state and ready to handle incoming requests.
  3. Load Balancing: Deploy multiple instances of the NGINX container behind a load balancer to distribute traffic evenly and handle increased load. This approach not only enhances availability but also improves performance by leveraging the scaling capabilities of container orchestration platforms.
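A health check can be attached at run time with the `--health-*` flags. This sketch assumes `curl` is available inside the image, which the official NGINX image may not provide; in that case, bake a suitable probe into a derived image instead:

```shell
# Mark the container unhealthy after 3 consecutive failed HTTP probes
docker run -d --name nginx-health \
  --restart unless-stopped -p 80:80 \
  --health-cmd "curl -fs http://localhost/ || exit 1" \
  --health-interval 30s \
  --health-timeout 3s \
  --health-retries 3 \
  nginx:1.18.0

# Report "starting", "healthy", or "unhealthy"
docker inspect nginx-health --format '{{.State.Health.Status}}'
```

Orchestrators such as Swarm act on this status automatically, replacing unhealthy tasks, which ties the health check directly into the high-availability story above.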

Conclusion: Through the use of options and flags such as running the container in detached mode, setting restart policies, port forwarding, and memory constraints, you can create a robust and resilient container.

To achieve even greater availability, consider incorporating container orchestration platforms, implementing health checks, and utilizing load balancing techniques.

Keep Exploring…