Use Azure AI Content Safety to check and audit AI Proxy plugin messages before proxying them to an upstream LLM
Prepend or append an array of llm/v1/chat messages to a user’s chat history
Check llm/v1/chat or llm/v1/completions requests against a list of allowed or denied expressions
Provide fill-in-the-blank AI prompts to users
The AI Proxy plugin lets you transform and proxy requests to a number of AI providers and models.
The AI Proxy Advanced plugin lets you transform and proxy requests to multiple AI providers and models at the same time. This lets you set up load balancing between targets.
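As a sketch of how an AI plugin like this is typically enabled, here is a hedged declarative-config fragment for the AI Proxy plugin. Field names follow the AI Proxy plugin schema as commonly documented; the provider, model name, route, and credential are all placeholders, not recommendations:

```yaml
# kong.yml (declarative config) — illustrative sketch only.
_format_version: "3.0"
services:
  - name: llm-service
    url: http://localhost:8000   # upstream is overridden by the plugin
    routes:
      - name: chat-route
        paths: ["/chat"]
plugins:
  - name: ai-proxy
    route: chat-route
    config:
      route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: Bearer <API_KEY>   # placeholder; use a vault reference
      model:
        provider: openai               # placeholder provider
        name: gpt-4o                   # placeholder model
```

With this in place, requests to `/chat` in the `llm/v1/chat` format are translated and proxied to the configured provider instead of the nominal upstream.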
Create RAG pipelines by automatically injecting content from a vector database
Provides rate limiting for the providers used by any AI plugins.
Use an LLM service to transform a client request body prior to proxying the request to the upstream server
Use an LLM service to transform the upstream HTTP(S) response prior to forwarding it to the client
Protect sensitive information in client request bodies before they reach upstream services
Enhance performance for AI providers by caching LLM responses semantically
Semantically and intelligently create allow and deny lists of topics that can be requested across every LLM.
Integrate Kong Gateway with the AppDynamics APM Platform
Visualize metrics on Datadog
Propagate spans and report trace data to a backend server through the OTLP protocol
Expose metrics related to Kong Gateway and proxied upstream services in Prometheus exposition format
Send metrics to StatsD
Propagate Zipkin spans and report tracing data to a Zipkin server
Secure Services and Routes with Basic Authentication
Add HMAC Authentication to your Gateway Services
Authenticate clients with mTLS certificates passed in headers by a WAF or load balancer
Decrypt a JWE token in a request
Verify and authenticate JSON Web Tokens
Verify and sign one or two tokens in a request
Secure Services and Routes with key authentication
Add key authentication to your services
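To illustrate the key authentication pattern, a minimal declarative-config sketch (the consumer name and key are placeholders; field names follow the key-auth plugin schema):

```yaml
# kong.yml — minimal key-auth sketch; values are placeholders.
_format_version: "3.0"
plugins:
  - name: key-auth
    config:
      key_names: ["apikey"]   # header or query parameter carrying the key
consumers:
  - username: alice
    keyauth_credentials:
      - key: my-secret-key    # placeholder; store real keys securely
```

A client would then send `apikey: my-secret-key` as a header (or query parameter) to authenticate as the `alice` consumer.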
Integrate Kong with an LDAP server
Secure Kong Gateway with username and password protection, LDAP search, and service directory mapping
Secure routes and services with client certificate and mutual TLS authentication
Add OAuth 2.0 authentication to your Services and Routes
Integrate Kong Gateway with a third-party OAuth 2.0 Authorization Server
Integrate Kong Gateway with a third-party OpenID Connect provider
Provides SAML v2.0 authentication and authorization between a service provider (Kong) and an identity provider (IdP)
Support sessions for Kong authentication plugins.
Configure Kong Gateway to obtain an OAuth2 token to consume an upstream API
Add Vault authentication to your Services or Routes
Append request and response data to a log file
Send request and response logs to an HTTP server
Publish logs to a Kafka topic
Send request and response logs to Loggly
Send request and response logs to Syslog
Send request and response logs to a TCP server
Send request and response logs to a UDP server
Let’s Encrypt and ACMEv2 integration with Kong Gateway
Detect and block bots or custom clients
The CORS plugin lets you add Cross-Origin Resource Sharing (CORS) to a Service or a Route.
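As a sketch, the CORS plugin is typically configured with an allow-list of origins, methods, and headers; the origin below is a placeholder:

```yaml
# kong.yml — CORS plugin sketch; the origin is a placeholder.
plugins:
  - name: cors
    config:
      origins: ["https://example.com"]
      methods: ["GET", "POST"]
      headers: ["Authorization", "Content-Type"]
      credentials: true    # allow cookies/credentials from the origin
      max_age: 3600        # cache preflight responses for one hour
```

Browsers calling the protected Route from `https://example.com` would then pass the preflight check, while other origins are rejected.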
Allow or deny IPs that can make requests to your services
Detect and block injection attacks using regular expressions
Apply size checks on JSON payload and minimize risk of content-level attacks
Authorize requests against Open Policy Agent
Requests a client to present its client certificate
Proxies TLS client certificate metadata to upstream services via HTTP headers
Invoke and manage AWS Lambda functions from Kong Gateway
Invoke and manage Azure functions from Kong Gateway
Add and manage custom Lua functions to execute after other plugins
Add and manage custom Lua functions to run before other plugins
Invoke and manage OpenWhisk actions from Kong Gateway
Control which Consumers can access Services and Routes
Slowly roll out software changes to a subset of users
Consume messages from Confluent Cloud Kafka topics and make them available through HTTP endpoints
Allows Kong Gateway to connect to intermediary transparent HTTP proxies
Cache and serve commonly requested responses in Kong Gateway
Provides rate limiting for GraphQL queries
Consume messages from Kafka topics and make them available through HTTP endpoints
Provide mock endpoints to test your APIs against your Services
Validate HTTP requests and responses based on an OpenAPI 3.0 or Swagger API Specification
Cache and serve commonly requested responses in Kong
Cache and serve commonly requested responses in Kong, in-memory or using Redis
Limit how many HTTP requests can be made in a given period of seconds, minutes, hours, days, months, or years
Enhanced rate limiting capabilities such as sliding window support, Redis Sentinel support, and increased performance
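For illustration, a hedged declarative-config sketch of the Rate Limiting plugin limiting clients to five requests per minute, with counters kept locally on each node:

```yaml
# kong.yml — Rate Limiting sketch: 5 requests per minute per client.
plugins:
  - name: rate-limiting
    config:
      minute: 5
      policy: local   # counters per node; redis/cluster policies also exist
```

Once the limit is exceeded, Kong Gateway answers with an HTTP 429 until the window resets.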
Redirect incoming requests to a new URL
Block requests with bodies greater than a specified size
Terminates all requests with a specific response
Validates requests before they reach the upstream service
Rate limit based on a custom response header value.
Route requests based on specified request headers
Prevent abuse and protect services with absolute limits on the number of requests reaching the service
Validate that incoming webhooks adhere to the Standard Webhooks specification
Set custom timeouts on connections to upstream services to override Gateway Service-level timeouts.
Block incoming WebSocket messages greater than a specified size
Validate WebSocket messages before they are proxied
Apply structural and size checks on XML payloads
Transform requests into Kafka messages in a Confluent Kafka topic.
Correlate requests and responses using a unique ID
Transform a GraphQL upstream into a REST API
Customize Kong exit responses sent downstream
Transform requests into Kafka messages in a Kafka topic.
Insert arbitrary API calls before proxying a request to the upstream service.
Use regular expressions, variables, and templates to transform requests
Use powerful regular expressions, variables, and templates to transform API requests
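As a sketch of the request transformation pattern, a hedged declarative-config fragment for the Request Transformer plugin (header and query-string names are placeholders):

```yaml
# kong.yml — Request Transformer sketch; names are placeholders.
plugins:
  - name: request-transformer
    config:
      add:
        headers: ["x-request-source:kong"]   # appended before proxying
      remove:
        querystring: ["debug"]               # stripped before proxying
```

Each matching request reaches the upstream with the extra header added and the `debug` query parameter removed.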
Modify the upstream response before returning it to the client
Modify the upstream response before returning it to the client, with greater customization capabilities
Transform routing by changing the upstream server, port, or path
Send requests to third-party APIs and use the response data to seed information for subsequent calls
Access gRPC services through HTTP REST
Allow browser clients to call gRPC services
Transform JSON objects included in API requests or responses using jq programs
API usage metering and usage-based billing
AppSentinels plugin for API security
Sign requests with AWS SigV4 and temporary credentials for secure use of AWS Lambda functions in Kong
Detect and mitigate attacks on mobile apps, websites, and APIs with DataDome bot and online fraud protection
Integrate Kong Gateway with Imperva API Security to discover, monitor, and protect APIs
Integrate Impart Security’s WAF and API security protection platform with Kong Gateway.
Integrate Kong API Gateway with Inigo GraphQL Observability and Security
Block responses with bodies greater than a specified size
Mock virtual API request and response pairs through Kong Gateway
Expose OAS/Swagger specifications of auth-protected APIs proxied by Kong
Log API transactions to Splunk using the Splunk HTTP collector
Add a signed JWT into the header of proxied requests
Powerful API analytics and usage-based billing to monetize APIs
Noname Security machine learning-based discovery and prevention blocking for Kong Gateway
Integrate Kong API Gateway with Salt Security Discovery & Prevention for API-based apps
API security with inline request blocking and data capture
Wallarm is an AI-powered security platform for protecting microservices and APIs