Deploy custom plugins
Store the plugin contents in a ConfigMap and mount the ConfigMap as a volume on your Pods.
Prerequisites
Kong Konnect
If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.
- The following Konnect items are required to complete this tutorial:
- Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
- Set the personal access token as an environment variable:
export KONNECT_TOKEN='YOUR KONNECT TOKEN'
Enable the Gateway API
- Install the Gateway API CRDs before installing Kong Ingress Controller.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
- Create a Gateway and GatewayClass instance to use.
echo "
apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: kong
annotations:
konghq.com/gatewayclass-unmanaged: 'true'
spec:
controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: kong
spec:
gatewayClassName: kong
listeners:
- name: proxy
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: All
" | kubectl apply -n kong -f -
Create a KIC Control Plane
Use the Konnect API to create a new CLUSTER_TYPE_K8S_INGRESS_CONTROLLER Control Plane:
CONTROL_PLANE_DETAILS=$(curl -X POST "https://us.api.konghq.com/v2/control-planes" \
  -H "Authorization: Bearer $KONNECT_TOKEN" \
  --json '{
    "name": "My KIC CP",
    "cluster_type": "CLUSTER_TYPE_K8S_INGRESS_CONTROLLER"
  }')
We’ll need the id and telemetry_endpoint for the values.yaml file later. Save them as environment variables:
CONTROL_PLANE_ID=$(echo $CONTROL_PLANE_DETAILS | jq -r .id)
CONTROL_PLANE_TELEMETRY=$(echo $CONTROL_PLANE_DETAILS | jq -r '.config.telemetry_endpoint | sub("https://";"")')
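As an optional sanity check, confirm that both variables are populated before continuing; an empty value usually means the control plane creation request failed or jq is not installed.
echo "CONTROL_PLANE_ID: $CONTROL_PLANE_ID"
echo "CONTROL_PLANE_TELEMETRY: $CONTROL_PLANE_TELEMETRY"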
Create mTLS certificates
Kong Ingress Controller talks to Konnect over a connection secured with TLS certificates.
Generate a new certificate using openssl:
openssl req -new -x509 -nodes -newkey rsa:2048 -subj "/CN=kongdp/C=US" -keyout ./tls.key -out ./tls.crt
The certificate needs to be a single-line string so that it can be sent to the Konnect API with curl. Use awk to format the certificate:
export CERT=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' tls.crt);
Next, upload the certificate to Konnect:
curl -X POST "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"cert": "'$CERT'"
}'
Finally, store the certificate in a Kubernetes secret so that Kong Ingress Controller can read it:
kubectl create namespace kong -o yaml --dry-run=client | kubectl apply -f -
kubectl create secret tls konnect-client-tls -n kong --cert=./tls.crt --key=./tls.key
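You can verify that the Secret was created with both the certificate and the key. This is an optional check; the exact output format depends on your kubectl version.
kubectl get secret konnect-client-tls -n kong
kubectl describe secret konnect-client-tls -n kong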
Kong Ingress Controller running (attached to Konnect)
- Add the Kong Helm charts:
helm repo add kong https://charts.konghq.com
helm repo update
- Create a values.yaml file:
cat <<EOF > values.yaml
controller:
  ingressController:
    image:
      tag: "3.4"
    env:
      feature_gates: "FillIDs=true"
    konnect:
      license:
        enabled: true
      enabled: true
      controlPlaneID: "$CONTROL_PLANE_ID"
      tlsClientCertSecretName: konnect-client-tls
      apiHostname: "us.kic.api.konghq.com"
gateway:
  image:
    repository: kong/kong-gateway
  env:
    konnect_mode: 'on'
    vitals: "off"
    cluster_mtls: pki
    cluster_telemetry_endpoint: "$CONTROL_PLANE_TELEMETRY:443"
    cluster_telemetry_server_name: "$CONTROL_PLANE_TELEMETRY"
    cluster_cert: /etc/secrets/konnect-client-tls/tls.crt
    cluster_cert_key: /etc/secrets/konnect-client-tls/tls.key
    lua_ssl_trusted_certificate: system
    proxy_access_log: "off"
    dns_stale_ttl: "3600"
  secretVolumes:
    - konnect-client-tls
EOF
- Install Kong Ingress Controller using Helm:
helm install kong kong/ingress -n kong --create-namespace --values ./values.yaml
- Set $PROXY_IP as an environment variable for future commands:
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
Kong Ingress Controller running
- Add the Kong Helm charts:
helm repo add kong https://charts.konghq.com
helm repo update
- Install Kong Ingress Controller using Helm:
helm install kong kong/ingress -n kong --create-namespace
- Set $PROXY_IP as an environment variable for future commands:
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
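Whichever installation path you followed, it's worth confirming that the controller and Kong Gateway Pods are running before you continue. This is an optional check; Pod and Service names are derived from the Helm release name (kong in the commands above) and will differ if you changed it.
kubectl get pods -n kong
kubectl get svc -n kong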
Required Kubernetes resources
This how-to requires some Kubernetes services to be available in your cluster. The resources created later in this guide depend on them.
kubectl apply -f https://developer.konghq.com/manifests/kic/echo-service.yaml -n kong
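To confirm that the echo workload is ready, you can list the Service and Pods in the kong namespace. This is an optional check; the Service name echo comes from the manifest above and is used again later in this guide.
kubectl get service echo -n kong
kubectl get pods -n kong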
Custom plugins
Custom Lua plugins can be stored in a Kubernetes ConfigMap or Secret and mounted in your Kong Gateway Pod.
The examples in this guide use a ConfigMap, but you can replace any references to configmap with secret to use a Secret instead.
If you want to install a plugin that is available as a rock from LuaRocks, download it, unzip it, and create a ConfigMap from all of the plugin's Lua files.
Create a custom plugin
If you already have a real plugin, you can skip this step.
mkdir myheader
echo 'local MyHeader = {}

MyHeader.PRIORITY = 1000
MyHeader.VERSION = "1.0.0"

function MyHeader:header_filter(conf)
  -- do custom logic here
  kong.response.set_header("myheader", conf.header_value)
end

return MyHeader
' > myheader/handler.lua
echo 'return {
  name = "myheader",
  fields = {
    { config = {
        type = "record",
        fields = {
          { header_value = { type = "string", default = "roar", }, },
        },
    }, },
  }
}
' > myheader/schema.lua
The directory should now look like this:
myheader
├── handler.lua
└── schema.lua
0 directories, 2 files
Create a ConfigMap
Create a ConfigMap from your plugin directory; it will be mounted into your Kong Gateway Pod:
kubectl create configmap kong-plugin-myheader --from-file=myheader -n kong
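You can inspect the ConfigMap to confirm that both handler.lua and schema.lua were captured as keys. This is an optional check, not part of the original steps.
kubectl describe configmap kong-plugin-myheader -n kong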
If your custom plugin includes new entities, you need to create a daos.lua file in your directory and a migrations sub-directory containing the scripts that create any database tables and migrate data between versions (if your entities’ schemas changed between versions). In this case, the directory should look like this:
myheader
├── daos.lua
├── handler.lua
├── migrations
│ ├── 000_base_my_header.lua
│ ├── 001_100_to_110.lua
│ └── init.lua
└── schema.lua
1 directory, 6 files
As a ConfigMap does not support nested directories, you need to create another ConfigMap containing the migrations directory:
kubectl create configmap kong-plugin-myheader-migrations --from-file=myheader/migrations -n kong
Deploy your custom plugin
Kong provides a way to deploy custom plugins using both Kong Gateway Operator and the Kong Ingress Controller Helm chart. This guide shows how to use the Helm chart, but we recommend using Kong Gateway Operator if possible. See Kong custom plugin distribution with KongPluginInstallation for more information.
The Kong Ingress Controller Helm chart automatically configures all the environment variables required based on the plugins you inject.
- Create a values.yaml file in your current directory with the following contents. Ensure that you add in other configuration values you might need for your installation to be successful.
gateway:
  plugins:
    configMaps:
      - name: kong-plugin-myheader
        pluginName: myheader
If you need to include the migration scripts with the plugin, configure userDefinedVolumes and userDefinedVolumeMounts in values.yaml to mount the migration scripts into the Kong Gateway Pod:
gateway:
  plugins:
    configMaps:
      - name: kong-plugin-myheader
        pluginName: myheader
  deployment:
    userDefinedVolumes:
      - name: "kong-plugin-myheader-migrations"
        configMap:
          name: "kong-plugin-myheader-migrations"
    userDefinedVolumeMounts:
      - name: "kong-plugin-myheader-migrations"
        mountPath: "/opt/kong/plugins/myheader/migrations" # Should be the path /opt/kong/plugins/<plugin-name>/migrations
- Upgrade Kong Ingress Controller with the new values:
helm upgrade --install kong kong/ingress -n kong --create-namespace --values values.yaml
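After the upgrade, you can confirm that the new Pods rolled out and that the plugin files are mounted inside the proxy container. This is an optional check and assumes the defaults used in this guide: a Helm release named kong, a gateway Deployment named kong-gateway, and a container named proxy; adjust the names if your installation differs.
kubectl rollout status deployment/kong-gateway -n kong
kubectl exec -n kong deploy/kong-gateway -c proxy -- ls /opt/kong/plugins/myheader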
Using custom plugins
If you get a “plugin failed schema validation” error, wait until your Kong Gateway Pods have cycled before trying to create a KongPlugin instance.
After you have set up Kong Gateway with the custom plugin installed, you can use it like any other plugin by adding the konghq.com/plugins annotation.
- Create a KongPlugin custom resource:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: my-custom-plugin
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: myheader
config:
  header_value: my first plugin
" | kubectl apply -f -
Next, apply the KongPlugin resource by annotating the echo Service:
kubectl annotate -n kong service echo konghq.com/plugins=my-custom-plugin
- Create a Route to the echo service to test your custom plugin:
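The Route manifest itself is not shown here, so the following is a minimal HTTPRoute sketch that matches the validation step below. It assumes the Gateway named kong created in the prerequisites, and that the echo Service exposes an HTTP port numbered 1027 (the port used in Kong's documentation examples for this echo Service); check kubectl get service echo -n kong and adjust the port if yours differs.
echo "
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  namespace: kong
  annotations:
    konghq.com/strip-path: 'true'
spec:
  parentRefs:
    - name: kong
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      backendRefs:
        - name: echo
          port: 1027  # assumed HTTP port of the echo Service; verify against your Service
" | kubectl apply -f -
The konghq.com/strip-path annotation removes the /echo prefix before the request is forwarded to the echo service.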
Validate your configuration
Once the resource has been reconciled, you’ll be able to call the /echo endpoint and Kong Gateway will route the request to the echo service.
The -i flag returns response headers from the server, and you will see myheader: my first plugin in the output:
curl -i "$PROXY_IP/echo"
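To check just the custom header, you can filter the response headers. This is a small optional convenience; if the plugin is working, it prints the myheader value configured in the KongPlugin resource.
curl -si "$PROXY_IP/echo" | grep -i myheader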