Introduction
An API gateway sits at the boundary between clients and backend services, handling cross-cutting concerns like authentication, rate limiting, routing, and observability. Choosing the right gateway and deployment pattern is critical for microservice architectures. This guide compares Kong, Tyk, and Apache APISIX across the dimensions that matter in production.
Gateway Comparison
Kong Gateway
Kong is built on OpenResty (NGINX + Lua) and offers enterprise features through a plugin ecosystem:
# Kong declarative config (kong.yml)
_format_version: "3.0"
services:
- name: user-service
  url: http://user-svc:8080
  routes:
  - name: user-routes
    paths:
    - /api/v1/users
    methods: [GET, POST, PUT, DELETE]
    strip_path: false
  plugins:
  - name: rate-limiting
    config:
      minute: 100
      hour: 1000
      policy: local
  - name: key-auth
    config:
      key_names: ["X-API-Key"]
  - name: cors
    config:
      origins: ["*"]
      methods: ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
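The key-auth plugin rejects every request until at least one consumer holds a key, and consumers can be declared in the same file. A minimal sketch (the username and key value are placeholders):

```yaml
consumers:
- username: mobile-app
  keyauth_credentials:
  - key: replace-with-a-real-key
```

Clients then pass the key in the configured header, e.g. `X-API-Key: replace-with-a-real-key`.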
Apache APISIX
APISIX provides sub-millisecond route matching via a radix tree and supports hot-reload of plugins:
# APISIX Admin API (an admin key is required by default; the value here is a placeholder)
curl http://apisix:9180/apisix/admin/routes/1 -X PUT \
  -H "X-API-KEY: ${APISIX_ADMIN_KEY}" -d '
{
  "uri": "/api/v1/orders/*",
  "methods": ["GET", "POST"],
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "order-svc:8080": 1
    }
  },
  "plugins": {
    "limit-req": {
      "rate": 10,
      "burst": 20,
      "rejected_code": 429
    },
    "jwt-auth": {
      "header": "Authorization"
    },
    "prometheus": {}
  }
}'
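The jwt-auth plugin only accepts tokens that map to a consumer, which is created through the same Admin API (`PUT /apisix/admin/consumers`). A sketch, with placeholder key and secret:

```json
{
  "username": "order-client",
  "plugins": {
    "jwt-auth": {
      "key": "order-client-key",
      "secret": "replace-with-a-real-secret"
    }
  }
}
```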
Tyk
Tyk offers a dashboard-centric approach: API definitions are JSON documents loaded from disk or the Dashboard, while Redis backs token storage, rate limiting, and analytics:
{
  "name": "Payment API",
  "api_id": "payment-api-v1",
  "org_id": "default-org",
  "proxy": {
    "target_url": "http://payment-svc:8080",
    "listen_path": "/api/v1/payments/",
    "strip_listen_path": true
  },
  "version_data": {
    "not_versioned": true,
    "versions": {
      "Default": {
        "name": "Default"
      }
    }
  },
  "auth": {
    "auth_header_name": "Authorization"
  },
  "global_rate_limit": {
    "rate": 100,
    "per": 60
  },
  "enable_coprocess_auth": false
}
Routing Strategies
Gateways support multiple routing strategies critical for microservice decomposition:
-- Kong: complex route matching with regex (Kong 3.x requires the "~" prefix for regex paths)
{
  name = "complex-route",
  paths = { "~/api/v2/(users|orders|products)/?.*" },
  hosts = { "api.example.com" },
  methods = { "GET", "POST" },
  protocols = { "https" },
  regex_priority = 100 -- routes with higher regex_priority are evaluated first
}
APISIX supports weight-based routing for canary deployments:
upstream:
  type: roundrobin            # node weights split traffic roughly 90/10
  nodes:
    "user-svc-v1:8080": 90
    "user-svc-v2:8080": 10
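Weighted upstream nodes split all traffic for the route. For rule-based canaries, APISIX also ships a traffic-split plugin; a sketch sending roughly 10% of requests to a v2 upstream while the rest fall through to the route's default upstream (service names are placeholders):

```json
{
  "traffic-split": {
    "rules": [
      {
        "weighted_upstreams": [
          {
            "upstream": {
              "type": "roundrobin",
              "nodes": { "user-svc-v2:8080": 1 }
            },
            "weight": 10
          },
          { "weight": 90 }
        ]
      }
    ]
  }
}
```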
Rate Limiting and Throttling
Implement multi-layered rate limiting to protect backend services:
-- Kong: rate-limiting plugin with a Redis-backed counter
-- (this schema belongs to the open-source rate-limiting plugin;
-- the Enterprise rate-limiting-advanced plugin uses a different config)
{
  name = "rate-limiting",
  config = {
    limit_by = "consumer",      -- consumer, credential, ip, service
    policy = "redis",           -- local, cluster, redis
    minute = 60,
    hour = 1000,
    fault_tolerant = true,      -- keep proxying if Redis is unreachable
    hide_client_headers = false,
    redis_host = "redis-cluster",
    redis_port = 6379,
    redis_timeout = 2000        -- milliseconds
  }
}
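Conceptually, these plugins keep a counter per identity and per time window. A minimal Python sketch of a fixed-window limiter (illustrative only, not Kong's actual implementation):

```python
import time

class FixedWindowLimiter:
    """Fixed-window request counter, similar in spirit to a gateway's local rate-limit policy."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # window index -> requests seen in that window

    def allow(self, now=None):
        now = time.time() if now is None else now
        window = int(now // self.window)
        count = self.counts.get(window, 0)
        if count >= self.limit:
            return False  # the gateway would answer 429 here
        self.counts[window] = count + 1
        return True

limiter = FixedWindowLimiter(limit=3, window_seconds=60)
results = [limiter.allow(now=100.0) for _ in range(4)]
print(results)  # the fourth call in the same window is rejected
```

A distributed policy (`redis`) moves this counter into shared storage so that every gateway node sees the same totals.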
Authentication Plugin Integration
Multiple auth plugins can be layered on one route; by default Kong requires every configured method to pass, unless each plugin is given an `anonymous` consumer as a fallback:
plugins:
- name: key-auth
  config:
    key_names: ["X-API-Key"]
    key_in_header: true
    key_in_query: false
    hide_credentials: true
    run_on_preflight: true
- name: oauth2
  config:
    scopes: ["read", "write", "admin"]
    mandatory_scope: true
    provision_key: "${OAUTH_PROVISION_KEY}"
    token_expiration: 3600
    enable_authorization_code: true
    enable_client_credentials: true
Request/Response Transformation
Transform payloads between client and service boundaries:
-- Kong: response-transformer plugin (header values are static;
-- the open-source plugin does not support dynamic templating)
{
  name = "response-transformer",
  config = {
    remove = {
      json = { "password", "credit_card", "ssn" }
    },
    add = {
      headers = {
        "X-Gateway:kong"
      }
    }
  }
}
APISIX supports serverless functions for custom transformations:
-- APISIX: serverless plugin for custom logic
{
  "serverless-pre-function": {
    "phase": "rewrite",
    "functions": ["return function(conf, ctx)
      local core = require(\"apisix.core\")
      local token = core.request.header(ctx, \"Authorization\")
      if token then
        -- drop the \"Bearer \" prefix (7 characters)
        core.request.set_header(ctx, \"X-Internal-Token\", token:sub(8))
      end
    end"]
  }
}
Analytics and Observability
All three gateways export metrics for monitoring and billing:
# APISIX: Prometheus and logging
plugins:
- name: prometheus
  config:
    prefer_name: true
- name: http-logger
  config:
    uri: http://log-collector:5000/logs
    batch_max_size: 100
    inactive_timeout: 5
- name: skywalking
  config:
    sample_ratio: 0.1
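A roughly equivalent Kong setup uses its prometheus and http-log plugins. A sketch (the collector endpoint is a placeholder, and exact queueing options vary across Kong versions):

```yaml
plugins:
- name: prometheus
  config:
    per_consumer: true        # adds a consumer label, useful for billing
- name: http-log
  config:
    http_endpoint: http://log-collector:5000/logs
    method: POST
```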
Deployment Patterns
Sidecar Pattern
Deploy the gateway as a sidecar alongside each service, suitable for service mesh architectures:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-app
        image: user-service:latest
      - name: kong-sidecar
        image: kong:3.6
        env:
        - name: KONG_ROLE
          value: data_plane
        - name: KONG_CLUSTER_CONTROL_PLANE
          value: cp:8005
Centralized Pattern
A shared gateway cluster handles all ingress traffic:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: apisix
spec:
  controller: apisix.apache.org/apisix-ingress-controller
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: main-route
spec:
  http:
  - name: root
    match:
      hosts: ["api.example.com"]
      paths: ["/*"]
    backends:
    - serviceName: aggregator-svc
      servicePort: 80
Select the centralized pattern for simpler operations and the sidecar pattern for strict traffic isolation in multi-tenant environments. Whichever gateway you choose, invest in declarative configuration management and CI/CD integration from day one to avoid configuration drift at scale.
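For Kong, declarative state can be enforced from CI with the decK tool. A sketch of a pipeline step (GitHub Actions syntax; the subcommands shown follow decK 1.x and may differ between versions, and `$KONG_ADMIN_URL` is a placeholder):

```yaml
- name: Validate and sync gateway config
  run: |
    deck gateway validate kong.yml --kong-addr "$KONG_ADMIN_URL"
    deck gateway diff kong.yml --kong-addr "$KONG_ADMIN_URL"
    deck gateway sync kong.yml --kong-addr "$KONG_ADMIN_URL"
```

Running diff before sync surfaces drift between the repository and the live gateway before it is overwritten.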