...

Info

The following article explains how to monitor and adjust performance in Checkmk.

Status
colourGreen
titleLAST TESTED ON CHECKMK 2.3.0P1


Panel
borderColorblack
bgColor#f8f8f8
titleTable of Contents

Table of Contents


First, there is no difference in the requirements for the virtual and hardware appliances.

...

The only overview of the required resources we have, and it is only a rough approximation, is the Checkmk Appliance page: https://checkmk.com/product/appliances

We always recommend that customers use the specifications of the hardware appliance as a guideline.

When importing the virtual appliance, some default values are preconfigured. Please check out the installation guide for the virtual appliance: https://docs.checkmk.com/latest/en/introduction_virt1.html#_import_the_

As this is a virtual machine, you can adjust these values anytime.

Configuration of Fetcher/Checker settings

Hands-On 

Required services to monitor

To configure the right resources, we recommend checking the following graphs:

...


Let's give you an example:

Screenshot of the Core statistics snap-in with the fetcher and checker helper usage highlighted.

With the Core statistics snap-in, you can check the usage of the fetcher and checker helpers. If the usage reaches about 70%, we recommend increasing the corresponding values in the Global settings. Keep in mind that CPU load and memory consumption will grow as you increase these values.
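
If you prefer the command line, the same usage values can be read directly from Livestatus. This is a minimal sketch as the site user; it assumes that the helper_usage_fetcher and helper_usage_checker columns of the status table are available in your CMC version:

Code Block
languagebash
themeRDark
# Read the current fetcher and checker helper usage (values between 0 and 1).
# Assumption: helper_usage_fetcher and helper_usage_checker exist in this CMC version.
lq "GET status\nColumns: helper_usage_fetcher helper_usage_checker"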

That's why we also recommend checking these graphs:

Screenshot of a service search that includes CPU, load, and memory.


You will find more information about the fetcher and checker architecture here: https://blog.checkmk.com/checkmk-

...

Note

Important information about the checkers: the number of checkers should not exceed your CPU core count!


Adjust the helper settings

If you decide to adjust the helper settings, please be aware of these settings:

...

  • Maximum concurrent Checkmk fetchers

    • Increasing the number of fetchers raises the RAM usage, so adjust this setting carefully and keep an eye on the memory consumption of your server.
    • The usage should stay under 80% on average.

  • Maximum concurrent Checkmk checkers

    • The number of checkers should not be higher than your CPU core count! If you have more than two cores, the general rule of thumb is: Maximum checkers = number of cores - 1 (see the sketch after this list).
    • The usage should stay under 80% on average.

  • Maximum concurrent Livestatus connections
    • In a distributed monitoring setup, it may be helpful to use different values for the remote sites. You will find guidance on how to do that here.
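
The rule of thumb for the checkers can be turned into a quick sanity check on the monitoring server. This is only a sketch, assuming the standard nproc utility is available and that one core is left free for the rest of the system:

Code Block
languagebash
themeRDark
# Suggest a maximum checker count based on the CPU core count.
# Rule of thumb from above: with more than two cores, use (cores - 1).
# Assumption: with two cores or fewer, simply use the core count.
CORES=$(nproc)
if [ "$CORES" -gt 2 ]; then
    echo "Suggested maximum checkers: $((CORES - 1))"
else
    echo "Suggested maximum checkers: $CORES"
fi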

Check the Livestatus Performance

If you face issues like this:

Screenshot of a Livestatus error: Unhandled exception 400, Timeout while waiting for free Livestatus channel.


Please see this manual to check the Livestatus performance.
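
As a first quick check, the current Livestatus utilization can also be queried directly. This is a minimal sketch as the site user; it assumes that the listed columns of the status table are available in your CMC version:

Code Block
languagebash
themeRDark
# Show how busy the Livestatus interface currently is.
# Assumption: these status table columns exist in this CMC version.
lq "GET status\nColumns: livestatus_usage livestatus_active_connections livestatus_threads"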

Required log files

Please see this manual to enable the debug log of the helpers. The required settings are:

  • Core
  • Debugging of Checkmk helpers
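
Once debug logging is enabled, the helper messages are written to the log of the Checkmk micro core. A quick way to follow them as the site user, assuming the default log location var/log/cmc.log inside the site directory:

Code Block
languagebash
themeRDark
# Follow helper-related debug messages of the micro core.
# Assumption: the CMC log is at its default location ~/var/log/cmc.log.
tail -f ~/var/log/cmc.log | grep -i helper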

High fetcher usage although the fetcher helper count is already high


Tip

Also, please check out our article on Troubleshooting high CPU usage of the Checkmk micro core (cmc).

If you face the following problems: 

  • Fetcher helper usage is permanently above 96%, and the fetcher count is already high (e.g., 50, 100, or more), and

  • the service "Check_MK" runs into constant CRIT states with fetcher timeouts.
    • You can also use the following command as the site user to narrow down and find slow-running active checks.

      Code Block
      languagebash
      themeRDark
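      # List the ten services with the longest execution time (in seconds), slowest first.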
      lq "GET services\nColumns: execution_time host_name display_name" | awk -F';' '{ printf("%.2f %s %s\n", $1, $2, $3)}' | sort -rn | head


This can have several reasons:

  • Firewalls are dropping traffic from Checkmk to the monitored systems. If the packets are silently dropped rather than rejected, Checkmk must wait for a timeout instead of terminating the fetching process immediately.

  • You might have too many DOWN hosts that are still being checked. Checkmk keeps trying to query those hosts, and the fetchers have to wait for a timeout every time, which can bind a lot of fetcher helpers for that period. Remove hosts that are in a DOWN state from your monitoring, either permanently or by setting their Criticality to "Do not monitor this host" (a query to list them is sketched below this list).

  • For classical operating systems (Linux/Windows/etc.), this indicates that you might have agent plugins or local checks with a long runtime. Increasing the number of fetchers further is not constructive here. Instead, identify the long-running plugins or local checks and switch them to asynchronous execution and/or define (generous) cache settings or even timeouts for them.

  • For SNMP devices, poor performance of the devices themselves might be the cause. To troubleshoot those,

...
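
To see which hosts are currently DOWN and therefore still binding fetchers, here is a minimal sketch of a Livestatus query as the site user (in the hosts table, state 1 means DOWN):

Code Block
languagebash
themeRDark
# List all hosts that are currently in state DOWN (state = 1).
lq "GET hosts\nColumns: name\nFilter: state = 1"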


...