Mem buf overlimit warning

During production hours, when log volume increases, we see "mem buf overlimit" warnings in our Fluent Bit logs. Will these warnings affect the log count, or is there any chance of losing logs?
I am sending Kubernetes logs to Graylog using Fluent Bit.

Mem_Buf_Limit 100MB

Currently, I am using Fluent Bit v1.8.

Here is my config file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: eks-logging
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters, and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-elasticsearch.conf

  input-kubernetes.conf: |
    [INPUT]
        Name               tail
        Tag                kube.*
        Path               /var/log/containers/*.log
        Exclude_Path       /var/log/containers/*_ingress-dms_*.log,/var/log/containers/*_cattle-system_*.log
        Parser             docker
        DB                 /var/log/flb_kube.db
        Mem_Buf_Limit      100MB
        Docker_Mode        On
        Docker_Mode_Parser multi_line
        Skip_Long_Lines    On
        Refresh_Interval   10

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Merge_Log           On
        Merge_Log_Key       log
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

  output-elasticsearch.conf: |
    [OUTPUT]
        Name                    gelf
        Match                   *
        Port                    12202
        Mode                    tcp
        Logstash_Format         On
        Gelf_Short_Message_Key  log

  parsers.conf: |
    [PARSER]
        Name   apache
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   apache2
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   apache_error
        Format regex
        Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$

    [PARSER]
        Name   nginx
        Format regex
        Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   json
        Format json
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On

    [PARSER]
        Name cri
        Format regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z

    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S

    [PARSER]
        Name                     multi_line
        Format                   regex
        Regex                    (?<log>^{"log":"\d{4}-\d{2}-\d{2}.*)

Do I need to change anything in my config to remove the mem buf overlimit warning?

Note: the host IP in the config above is the HAProxy IP.

Please help me with this. A quick response will be appreciated.


This is down to the data rates for your logs through Fluent Bit: they exceed your buffer limits.
I’d have a look at Buffering & Storage - Fluent Bit: Official Manual

You need to tune appropriately for what you want to happen; I'm afraid there's no magic solution here. If you specify limits, they will be respected, so if your data rates cannot stay within them for any of a myriad of reasons (input rate is too high, output rate is too low, Fluent Bit is not scheduled for enough time to process, etc.), you'll hit that warning.
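One common mitigation described on that Buffering & Storage page is to switch the input to filesystem buffering, so chunks over the memory limit spill to disk instead of the input being paused (which is where log loss can creep in for tailed files that rotate away). A rough sketch against your config — the `storage.path` and the 5M backlog limit are placeholder values, not recommendations:

```
[SERVICE]
    # ... your existing SERVICE settings ...
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.backlog.mem_limit 5M

[INPUT]
    Name            tail
    Tag             kube.*
    # ... your existing tail settings ...
    Mem_Buf_Limit   100MB
    storage.type    filesystem
```

With `storage.type filesystem`, new data beyond the in-memory allowance is buffered to disk rather than causing the input to be throttled, at the cost of disk I/O and space, so size the storage path accordingly.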

So you can increase the limit — what a good value is, only you can answer.
You can allocate more resources to Fluent Bit either for I/O or processing if one of those is the limiting factor.
You could re-architect slightly so that fewer logs are handled, reducing the overall rate if input is too high, or drop some irrelevant data before output if output is too high.
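For the "drop irrelevant data" route, a grep filter placed after your kubernetes filter is one option. The key (`log`) and pattern here are only examples — point it at whatever noise dominates your volume:

```
[FILTER]
    Name     grep
    Match    kube.*
    # Drop any record whose log field matches this (example) pattern
    Exclude  log /healthz
```

Every record dropped here never reaches the output buffer, which directly lowers the rate that is tripping your Mem_Buf_Limit.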