
In Part 1 we described what container monitoring is and why you need it.

Monitoring Docker environments is challenging. Why? Because each container typically runs a single process, has its own environment, utilizes virtual networks, and has various methods of managing storage. Traditional monitoring solutions take metrics from each server and the applications they run. These servers, and the applications running on them, are typically very static, with very long uptimes. Container deployments are different: a set of containers may run many applications, all sharing the resources of one or more underlying hosts. It's not uncommon for Docker servers to run thousands of short-term containers (e.g. for batch jobs) while a set of permanent services runs in parallel. Traditional monitoring tools built for static environments are not suited for such dynamic deployments. On the other hand, some modern monitoring solutions were built with such dynamic systems in mind and even have out-of-the-box reporting for Docker monitoring.

Moreover, container resource sharing calls for stricter enforcement of resource usage limits, an additional issue you must watch carefully. To make appropriate adjustments for resource quotas you need good visibility into any limits containers have reached and any errors they have encountered or caused. We recommend setting up monitoring alerts according to the defined limits; this way you can adjust limits or resource usage even before errors start happening.

Note: All screenshots in this post are from Sematext Cloud and its Docker monitoring integration.

Watch Resources of Your Docker Hosts

Host CPU

Understanding the CPU utilization of hosts and containers helps one optimize the resource usage of Docker hosts. Container CPU usage can be throttled in order to avoid a single busy container slowing down other containers by using up all available CPU resources. Throttling CPU time is a good way to ensure the minimum of processing power needed by essential services – it's like the good old nice levels in Unix/Linux. When resource usage is optimized, high CPU utilization might actually be expected and even desired, and alerts might make sense only when CPU utilization drops (think service outages) or exceeds some maximum limit for a longer period.

Host Memory

The total memory used on each Docker host is important to know for current operations and for capacity planning. Dynamic cluster managers like Docker Swarm use the total memory available on the host and the requested memory for containers to decide on which host a new container should ideally be launched. Deployments might fail if a cluster manager is unable to find a host with sufficient resources for the container. That's why it is important to know the host memory usage and the memory limits of containers. Adjusting the capacity of new cluster nodes according to the footprint of Docker applications helps optimize resource usage.

Host Disk Space

Docker images and containers consume additional disk space. For example, an application image might include a Linux operating system and might have a size of 150–700 MB, depending on the size of the base image and the tools installed in the container. Persistent Docker volumes consume disk space on the host as well. Because disk space is critical, it makes sense to define alerts for disk space utilization to serve as early warnings and provide enough time to clean up disks or add additional volumes. For example, Sematext Monitoring automatically sets alert rules for disk space usage for you, so you don't have to remember to do it. A good practice is to frequently run tasks that clean up the disk by removing unused containers and images. In our experience, watching disk space and using cleanup tools are essential for the continuous operation of Docker hosts.

Container Counts

The current and historical number of containers is an interesting metric for many reasons. For example, it is very handy during deployments and updates to check that everything is running like before. When cluster managers like Docker Swarm, Mesos, or Kubernetes automatically schedule containers on different hosts using different scheduling policies, the number of containers running on each host can help one verify the activated scheduling policies.

Container counts per Docker host over time

A stacked bar chart displaying the number of containers on each host, along with the total number of containers, provides a quick visualization of how the cluster manager distributed the containers across the available hosts. This metric can have different "patterns" depending on the use case. For example, batch jobs running in containers vs. long-running services commonly result in different container count patterns.
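The CPU throttling described above maps directly to `docker run` flags. A minimal sketch, assuming a hypothetical image and container names (`--cpus` and `--cpu-shares` are standard Docker CLI options):

```shell
# Guard on a reachable Docker daemon so the sketch is a no-op elsewhere.
if docker info >/dev/null 2>&1; then
  # Hard cap: this batch worker may use at most 1.5 CPUs in total.
  docker run -d --name batch-worker --cpus=1.5 my-batch-image
  # Relative weight: the default is 1024, so this container gets half the
  # CPU share of an unthrottled neighbor when the host CPU is contended.
  docker run -d --name low-prio-worker --cpu-shares=512 my-batch-image
else
  echo "no Docker daemon available; skipping demo"
fi
```

Note the difference: `--cpus` is an absolute ceiling that applies even on an idle host, while `--cpu-shares` only matters under contention, much like nice levels.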

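The memory limits that schedulers use for placement decisions are likewise set per container. A sketch with made-up names (`--memory` and `--memory-reservation` are standard Docker CLI options):

```shell
if docker info >/dev/null 2>&1; then
  # Hard limit: the kernel OOM-kills the container's processes above 512 MB.
  # Soft reservation: 256 MB is the amount the scheduler should account for.
  docker run -d --name api-service \
    --memory=512m --memory-reservation=256m my-api-image
  # Inspect current usage against the limit for all running containers:
  docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}'
else
  echo "no Docker daemon available; skipping demo"
fi
```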
Application Performance Monitoring Guide.
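The disk cleanup tasks recommended above usually boil down to a small script, run from cron, around the standard `docker ... prune` commands. A minimal sketch:

```shell
cleanup_docker_disk() {
  if ! docker info >/dev/null 2>&1; then
    echo "no Docker daemon available; skipping cleanup"
    return 0
  fi
  # Remove stopped containers, unused networks, dangling images, build cache.
  docker system prune -f
  # More aggressive: also drop images not referenced by any container.
  docker image prune -a -f
  # Unused volumes keep consuming host disk space; prune them explicitly.
  docker volume prune -f
}
cleanup_docker_disk
```

The `-f` flag skips the interactive confirmation prompt, which is what makes these commands safe to schedule unattended.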

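The container-count metric discussed above can be sampled per host with the Docker CLI and shipped to whatever monitoring system you use. A minimal sketch:

```shell
count_running_containers() {
  if docker info >/dev/null 2>&1; then
    docker ps -q | wc -l    # one ID per line = running containers on this host
  else
    echo 0                  # no daemon reachable; report zero
  fi
}
echo "containers on $(hostname): $(count_running_containers)"
```

Adding `-a` to `docker ps` would count stopped containers as well, which is useful for spotting leftovers from finished batch jobs.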