First of all, there is no difference in the requirements between the virtual and the hardware appliance.
The needed resources depend on the number of services, active checks and the types of hosts. If you have a lot of SNMP hosts, you will need more CPU cores for executing the SNMP walks on those hosts.
The only overview of required resources we can offer, and it is just a rough approximation, is this one: https://checkmk.com/product/appliances
We always recommend that customers use the specifications of the hardware appliance as a guideline.
When you import the virtual appliance, some default values are already preconfigured. Please check out this page: https://docs.checkmk.com/latest/en/introduction_virt1.html#_import_the_appliance
As this is a virtual machine, you can adjust these values at any time.
Configuration of Fetcher/Checker settings
Hands-On
Required services to monitor
To configure the right resources, we recommend checking the following graphs:
- PDF report with graphs of
  - CPU
  - Memory
  - OMD <SITENAME> Performance
    - activate the "Core statistics" snap-in
  - Check_MK
  - Disk I/O Summary
- The local structure
  - find -L ~/local > local.txt (as site user)
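If you are working as root, a small sketch for collecting that listing is to switch to the site user first; the site name mysite below is just a placeholder for your own site:

su - mysite
find -L ~/local > local.txt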
Let's give you an example:
With this snap-in you can check the load of the fetcher and checker helpers. At around 70% usage we recommend increasing these values in the global settings. Keep in mind that while you increase these values, the CPU load and memory consumption will grow as well.
That is why we also recommend keeping an eye on the CPU and memory graphs mentioned above.
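If you prefer the command line over the snap-in, you can also query the current helper usage directly from the core via Livestatus. This is just a sketch and assumes that your core version provides the helper_usage_fetcher, helper_usage_checker and helper_usage_generic columns in the status table (run as the site user):

lq "GET status\nColumns: helper_usage_fetcher helper_usage_checker helper_usage_generic"

Depending on the version, the values are reported as fractions between 0 and 1, so 0.7 corresponds to the 70% mentioned above.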
You will find more information about the fetcher and checker architecture here:
https://blog.checkmk.com/checkmk-2.0-cmc
https://checkmk.com/werk/11500
Important information about the checkers: the number of checkers should not be higher than your CPU core count!
Adjust the helper settings
If you decide to adjust the helper settings, please be aware of the following options:
Setup → General → Global Settings → Monitoring Core →
Maximum concurrent active checks
The usage should stay under 80% on average.
Maximum concurrent Checkmk fetchers
With an increasing number of fetchers, your RAM usage will rise, so make sure to adjust this setting carefully and keep an eye on the memory consumption of your server.
The usage should stay under 80% on average.
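To keep an eye on the memory consumption while you raise the fetcher count, a simple approach (run as root; mysite is a placeholder for your site name) is to sum up the resident memory of all site processes:

ps -u mysite -o rss= | awk '{ sum += $1 } END { printf "%.0f MiB\n", sum / 1024 }'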
Maximum concurrent Checkmk checkers
The number of checkers should not be higher than your CPU core count! If you have more than two cores, the rule of thumb is:
Maximum checkers = number of cores - 1
The usage should stay under 80% on average.
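To find out how many CPU cores are available on your Checkmk server, you can simply run:

nproc

With 8 cores, for example, the rule of thumb above results in a maximum of 7 checkers.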
Maximum concurrent Livestatus connections
In a distributed monitoring setup it may be useful to set different values for the remote sites. You will find guidance on how to do that here!
Check the Livestatus Performance
If you face Livestatus performance issues, please see this manual on how to check the Livestatus performance.
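As a quick command-line check, you can query the Livestatus-related columns of the status table; this is a sketch and assumes the listed columns are available in your core version (run as the site user):

lq "GET status\nColumns: livestatus_usage livestatus_active_connections livestatus_threads connections_rate requests_rate"

If the usage stays permanently high, increasing the maximum number of concurrent Livestatus connections may help.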
Required log files
Please see this manual on how to enable the debug log of the helpers. The required settings are:
- Core
  - Debugging of Checkmk helpers
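Once the debug logging is enabled, you can follow the core log as the site user; in current versions it is usually written to ~/var/log/cmc.log:

tail -f ~/var/log/cmc.log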
High fetcher usage although the fetcher helper count is already high
If you face the following problems:
- fetcher helper usage is permanently above 96% and the fetcher count is already high (e.g., 50, 100 or more), and
- the service "Check_MK" constantly runs into CRIT states with fetcher timeouts
You can also use this command to narrow down the problem and find slow-running active checks:
lq "GET services\nColumns: execution_time host_name display_name" | awk -F';' '{ printf("%.2f %s %s\n", $1, $2, $3)}' | sort -rn | head
This can mainly have two reasons:
- This is a strong indicator that you have plugins/local checks (primarily on Windows) with quite a long runtime. Increasing the number of fetchers further is not constructive here. Instead, you have to identify the long-running plugins/local checks and set them to asynchronous execution and/or define (generous) cache settings or even timeouts specifically for them.
- You might have too many DOWN hosts which are still being checked. Checkmk keeps trying to query those hosts, and the fetchers have to wait for a timeout every time. This can block a lot of fetcher helpers for that period. Remove hosts that have been in a DOWN state for some time (e.g., because they were scrapped or decommissioned) from your monitoring; the query below helps to find them.
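To list hosts that are currently DOWN and see when they last changed state, you can use a query like this (host state 1 means DOWN; last_state_change is a Unix timestamp):

lq "GET hosts\nColumns: name last_state_change\nFilter: state = 1"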
Related articles