Made by: Kong Inc.
Supported Gateway Topologies: hybrid, db-less, traditional
Supported Konnect Deployments: hybrid, cloud-gateways, serverless
Compatible Protocols: grpc, grpcs, http, https, tcp, tls, tls_passthrough, udp, ws, wss

The StatsD plugin logs metrics for a Gateway Service or Route to a StatsD server. It can also be used to log metrics on the Collectd daemon by enabling its StatsD plugin.

By default, the plugin sends a separate UDP packet for each metric it observes. The config.udp_packet_size option sets the maximum datagram size the plugin can use to combine multiple metrics into a single packet. The value must be less than 65507 bytes, the maximum payload size allowed by the UDP protocol. Also consider the MTU of your network when setting this parameter.
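As a minimal sketch, the following declarative configuration enables the plugin on a Service and lets it combine metrics into datagrams of up to 1024 bytes. The Service name, upstream URL, and the StatsD host and port are illustrative assumptions; only config.udp_packet_size comes from the description above.

```yaml
_format_version: "3.0"

services:
  - name: example-service              # hypothetical Service
    url: http://upstream.example.com   # hypothetical upstream
    plugins:
      - name: statsd
        config:
          host: 127.0.0.1              # assumed StatsD server address
          port: 8125                   # conventional StatsD UDP port
          udp_packet_size: 1024        # combine metrics into datagrams of up to 1024 bytes
```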

Metrics

The StatsD plugin can log the following metrics:

| Metric | Description | Namespace syntax |
|--------|-------------|------------------|
| request_count | The number of requests. | kong.service.<service_identifier>.request.count |
| request_size | The request’s body size in bytes. | kong.service.<service_identifier>.request.size |
| response_size | The response’s body size in bytes. | kong.service.<service_identifier>.response.size |
| latency | The time interval in milliseconds between the request and response. | kong.service.<service_identifier>.latency |
| status_count | Tracks each status code returned in a response. | kong.service.<service_identifier>.status.<status> |
| unique_users | Tracks unique users who made requests to the underlying Service or Route. | kong.service.<service_identifier>.user.uniques |
| request_per_user | Tracks the request count per Consumer. | kong.service.<service_identifier>.user.<consumer_identifier>.request.count |
| upstream_latency | Tracks the time in milliseconds it took for the final Service to process the request. | kong.service.<service_identifier>.upstream_latency |
| kong_latency | Tracks the internal Kong Gateway latency in milliseconds that it took to run all the plugins. | kong.service.<service_identifier>.kong_latency |
| status_count_per_user | Tracks the status code per Consumer per Service. | kong.service.<service_identifier>.user.<consumer_identifier>.status.<status> |
| status_count_per_workspace | The status code per Workspace. | kong.service.<service_identifier>.workspace.<workspace_identifier>.status.<status> |
| status_count_per_user_per_route | The status code per Consumer per Route. | kong.route.<route_id>.user.<consumer_identifier>.status.<status> |
| shdict_usage | The usage of a shared dict, sent once every minute. Monitors any lua_shared_dict used by Kong Gateway; you can find all the shared dicts Kong Gateway has configured using the Status API. For example, the metric might report on shdict.kong_locks or shdict.kong_counters. | kong.node.<node_hostname>.shdict.<lua_shared_dict>.free_space and kong.node.<node_hostname>.shdict.<lua_shared_dict>.capacity |
| cache_datastore_hits_total | The total number of cache hits. (Kong Gateway Enterprise only) | kong.service.<service_identifier>.cache_datastore_hits_total |
| cache_datastore_misses_total | The total number of cache misses. (Kong Gateway Enterprise only) | kong.service.<service_identifier>.cache_datastore_misses_total |
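For example, with the default kong prefix, service_identifier set to service_name, and a hypothetical Service named example-service, the request_count, latency, and status_count metrics would reach the StatsD server as lines similar to the following (assuming their default stat types: counter for request_count and status_count, timer for latency):

kong.service.example-service.request.count:1|c

kong.service.example-service.latency:42|ms

kong.service.example-service.status.200:1|c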

If a request URI doesn’t match any Routes, the following metrics are sent instead:

| Metric | Description | Namespace |
|--------|-------------|-----------|
| request_count | The request count. | kong.global.unmatched.request.count |
| request_size | The request’s body size in bytes. | kong.global.unmatched.request.size |
| response_size | The response’s body size in bytes. | kong.global.unmatched.response.size |
| latency | The time interval between when the request started and when the response was received from the upstream server. | kong.global.unmatched.latency |
| status_count | The status count. | kong.global.unmatched.status.<status>.count |
| kong_latency | The internal Kong Gateway latency in milliseconds that it took to run all the plugins. | kong.global.unmatched.kong_latency |

If you enable the tag_style configuration for the StatsD plugin, the following metrics are sent instead:

| Metric | Description | Namespace |
|--------|-------------|-----------|
| request_count | The number of requests. | kong.request.count |
| request_size | The request’s body size in bytes. | kong.request.size |
| response_size | The response’s body size in bytes. | kong.response.size |
| latency | The time interval in milliseconds between the request and response. | kong.latency |
| request_per_user | Tracks the request count per Consumer. | kong.request.count |
| upstream_latency | Tracks the time in milliseconds it took for the final Service to process the request. | kong.upstream_latency |
| shdict_usage | The usage of a shared dict, sent once every minute. | kong.shdict.free_space and kong.shdict.capacity |
| cache_datastore_hits_total | The total number of cache hits. (Kong Gateway Enterprise only) | kong.cache_datastore_hits_total |
| cache_datastore_misses_total | The total number of cache misses. (Kong Gateway Enterprise only) | kong.cache_datastore_misses_total |

The StatsD plugin supports Librato-, InfluxDB-, DogStatsD-, and SignalFX-style tags, which work like Prometheus labels.

  • Librato-style tags: Must be appended to the metric name with a delimiting #, for example: metric.name#tagName=val,tag2Name=val2:0|c. See the Librato StatsD documentation for more information.

  • InfluxDB-style tags: Must be appended to the metric name with a delimiting comma, for example: metric.name,tagName=val,tag2Name=val2:0|c. See the InfluxDB StatsD documentation for more information.

  • DogStatsD-style tags: Appended as a |#-delimited section at the end of the metric, for example: metric.name:0|c|#tagName:val,tag2Name:val2. See the Datadog StatsD Tags documentation for more information about the tag concept and the datagram format. AWS CloudWatch also uses the DogStatsD protocol.

  • SignalFX dimensions: Added to the metric name in square brackets, for example: metric.name[tagName=val,tag2Name=val2]:0|c. See the SignalFX StatsD documentation for more information.

When config.tag_style is enabled, Kong Gateway checks the filter labels service, route, workspace, consumer, node, and status, and adds any values it finds as tags on each metric. For shdict_usage metrics, only the node and shdict tags are added.

For example:

kong.request.size,workspace=default,route=d02485d7-8a28-4ec2-bc0b-caabed82b499,status=200,consumer=d24d866a-020a-4605-bc3c-124f8e1d5e3f,service=bdabce05-e936-4673-8651-29d2e9eca382,node=c80a9c5845bd:120|c
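A minimal plugin configuration sketch that produces tagged datagrams like the one above might look like the following. It assumes the tag_style values match the style names listed earlier (here, influxdb), and the host and port values are illustrative.

```yaml
plugins:
  - name: statsd
    config:
      host: 127.0.0.1       # assumed StatsD server address
      port: 8125
      tag_style: influxdb   # emit InfluxDB-style (comma-delimited) tags, as in the example above
```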

Metric fields

The StatsD plugin can be configured with any combination of metrics, with each entry containing the following fields:

| Field | Description | Datatype | Allowed values |
|-------|-------------|----------|----------------|
| name (required) | The StatsD metric’s name. | String | One of the metrics listed in the Metrics section |
| stat_type (required) | Determines what sort of event a metric represents. | String | gauge, timer, counter, histogram, meter, and set |
| sample_rate (conditional) | Sampling rate. | Number | number |
| consumer_identifier (conditional) | Authenticated user detail. | String | One of the following options: consumer_id, custom_id, username, null |
| service_identifier (conditional) | Service detail. | String | One of the following options: service_id, service_name, service_host, service_name_or_host, null |
| workspace_identifier (conditional) | Workspace detail. | String | One of the following options: workspace_id, workspace_name, null |

Metric behavior

  • All metrics are logged by default.
  • Metrics with a stat_type of counter or gauge require the sample_rate field.
  • The unique_users metric only supports the set stat type.
  • The following metrics only support the counter stat type:
    • status_count
    • status_count_per_user
    • status_count_per_user_per_route
    • request_per_user
  • The shdict_usage metric only supports the gauge stat type.
  • The following metrics require a consumer_identifier:
    • status_count_per_user
    • request_per_user
    • unique_users
    • status_count_per_user_per_route
  • The service_identifier field is optional for all metrics. If not set, it defaults to service_name_or_host.
  • The status_count_per_workspace metric requires a workspace_identifier.
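As an illustrative sketch that follows these rules, a config.metrics list might look like the following. The specific metric combination and values are assumptions, not a required or recommended configuration.

```yaml
plugins:
  - name: statsd
    config:
      host: 127.0.0.1                        # assumed StatsD server address
      port: 8125
      metrics:
        # service_identifier is omitted, so it defaults to service_name_or_host
        - name: request_count                # counter stat type, so sample_rate is required
          stat_type: counter
          sample_rate: 1
        - name: status_count_per_user        # only supports counter; requires consumer_identifier
          stat_type: counter
          sample_rate: 1
          consumer_identifier: consumer_id
        - name: unique_users                 # only supports the set stat type; requires consumer_identifier
          stat_type: set
          consumer_identifier: consumer_id
        - name: status_count_per_workspace   # requires workspace_identifier; counter stat type assumed here
          stat_type: counter
          sample_rate: 1
          workspace_identifier: workspace_name
```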

Kong Gateway process errors

This logging plugin logs HTTP request and response data, and also supports stream data (TCP, TLS, and UDP). The Kong Gateway process error file is the Nginx error file. You can find it at the following path: {prefix}/logs/error.log

Queuing

The StatsD plugin uses internal queues to decouple the production of log entries from their transmission to the upstream log server.

With queuing, request information is put in a configurable queue before being sent in batches to the upstream server. This has the following benefits:

  • Reduces any possible concurrency on the upstream server
  • Helps deal with temporary outages of the upstream server due to network or administrative changes
  • Can reduce resource usage both in Kong Gateway and on the upstream server by collecting multiple entries from the queue in one request

Note: Because queues are structural elements for components in Kong Gateway, they only live in the main memory of each worker process and are not shared between workers. Therefore, queued content isn’t preserved under abnormal operational situations, like power loss or unexpected worker process shutdown due to memory shortage or program errors.

You can configure several parameters for queuing:

Queue capacity limits: config.queue.max_entries, config.queue.max_bytes, config.queue.max_batch_size

These parameters configure the maximum number of entries, the maximum queue size in bytes, and the batch size.

When a queue reaches its maximum number of entries and another entry is enqueued, the oldest entry in the queue is deleted to make space for the new entry. The queue code writes warning log entries when it reaches a capacity threshold of 80% and when it starts to delete entries from the queue, and it writes log entries again when the situation normalizes.

Timer usage: config.queue.concurrency_limit

By default, only one timer is used to start queue processing in the background; you can allow more if needed. Once the queue is empty, the timer handler terminates, and a new timer is created as soon as a new entry is pushed onto the queue.

Retry logic: config.queue.initial_retry_delay, config.queue.max_coalescing_delay, config.queue.max_retry_delay, config.queue.max_retry_time

If a queue fails to process, the queue library can automatically retry processing it if the failure is temporary (for example, network problems or upstream unavailability).

Before retrying, the library waits for the amount of time specified by the initial_retry_delay parameter. This wait time is doubled every time a retry fails, until it reaches the maximum wait time specified by the max_retry_delay parameter; retrying is abandoned once the total retry time exceeds max_retry_time.
When a Kong Gateway shutdown is initiated, the queue is flushed. This allows Kong Gateway to shut down even if it was waiting for new entries to be batched, ensuring upstream servers can be contacted.

Queues are not shared between workers and queuing parameters are scoped to one worker. For whole-system capacity planning, the number of workers needs to be considered when setting queue parameters.
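As a sketch, queue tuning for this plugin might look like the following. The values shown are illustrative assumptions, not recommendations; see the parameter descriptions above for their meaning.

```yaml
plugins:
  - name: statsd
    config:
      host: 127.0.0.1              # assumed StatsD server address
      port: 8125
      queue:
        max_batch_size: 100        # entries sent per batch
        max_entries: 10000         # oldest entries are dropped beyond this count
        max_bytes: 1048576         # assumed 1 MiB cap on queued data
        concurrency_limit: 1       # a single background timer processes the queue
        initial_retry_delay: 0.01  # first retry delay in seconds, doubled on each failure
        max_coalescing_delay: 1    # maximum seconds to wait while coalescing entries into a batch
        max_retry_delay: 60        # cap on the delay between retries, in seconds
        max_retry_time: 60         # give up retrying a batch after this many seconds
```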
