Hands-On Series: Developing and Deploying an APISIX Plugin
HTTP Service
In real-world work, most backend services that implement business scenarios are HTTP servers, so let's first build an HTTP server in the cluster for the experiments that follow. The service is named echo; its job is to return the contents of the HTTP request in the HTTP response body.
Create a directory echo and add the files index.js, Dockerfile, and k8s.yaml:
$ mkdir echo && cd echo
$ touch index.js Dockerfile k8s.yaml
$ tree .
.
├── Dockerfile
├── index.js
└── k8s.yaml
0 directories, 3 files
index.js
const http = require('http');
const server = http.createServer((req, res) => {
  const { method, url, headers } = req;
  res.setHeader('content-type', 'application/json');
  res.end(JSON.stringify({ method, url, headers }), err => {
    console.log(`${Date.now()} ${method} ${url} ${err ? 'fail' : 'success'}`);
  });
});
server.listen(8080);
Dockerfile
FROM node:lts-alpine
WORKDIR /usr/src/app
COPY . .
EXPOSE 8080
CMD node index.js
Build:
$ docker build --pull -t lab.com:release.echo.001 .
Sending build context to Docker daemon 3.072kB
Step 1/5 : FROM node:lts-alpine
lts-alpine: Pulling from library/node
Digest: sha256:1a9a71ea86aad332aa7740316d4111ee1bd4e890df47d3b5eff3e5bded3b3d10
Status: Image is up to date for node:lts-alpine
---> e5065cc78074
Step 2/5 : WORKDIR /usr/src/app
---> Using cache
---> 66657994466c
Step 3/5 : COPY . .
---> f75f52b8f915
Step 4/5 : EXPOSE 8080
---> Running in cf96d9cf79de
Removing intermediate container cf96d9cf79de
---> 6cbfe441d19d
Step 5/5 : CMD node index.js
---> Running in 2350d6d6b60e
Removing intermediate container 2350d6d6b60e
---> a78bde0cb89b
Successfully built a78bde0cb89b
Successfully tagged lab.com:release.echo.001
k8s.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: lab
  labels:
    app: echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      restartPolicy: Always
      containers:
        - image: "lab.com:release.echo.001"
          imagePullPolicy: IfNotPresent
          name: echo
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            tcpSocket:
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: lab
  labels:
    app: echo
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: echo
Deploy the service to the k8s cluster:
$ kubectl -n lab apply -f k8s.yaml
deployment.apps/echo created
service/echo created
$ kubectl -n lab get svc echo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo ClusterIP 10.98.69.239 <none> 8080/TCP 46s
$ kubectl -n lab get pods |grep echo
echo-5f98ffcf77-6dth4 1/1 Running 0 80s
echo-5f98ffcf77-7w5qr 1/1 Running 0 80s
$ curl http://10.98.69.239:8080
{"method":"GET","url":"/","headers":{"host":"10.98.69.239:8080","user-agent":"curl/7.68.0","accept":"*/*"}}#
Custom Plugin
With the HTTP server done, the next step is developing a plugin. It is designed for request tracing: if the incoming request carries a traceId, pass it along to the next service; otherwise generate one. Either way, the traceId is also returned in the response header.
apisix/extra/lua/apisix/plugins/trace-id.lua
In the extra directory reserved in the earlier post on deploying APISIX to K8S, write the following code:
local ngx = ngx
local core = require('apisix.core')
local uuid = require('resty.jit-uuid')

local schema = {
    type = 'object',
    properties = {},
    required = {},
}

local _M = {
    version = 1.0,
    name = 'trace-id',
    priority = 0,
    schema = schema,
}

function _M.check_schema(conf)
    return core.schema.check(schema, conf)
end

function _M.rewrite(conf, ctx)
    local id = core.request.header(ctx, 'x-trace-id')
    if not id then
        id = uuid.generate_v4()
    end
    core.request.set_header(ctx, 'x-trace-id', id)
    ctx['x-trace-id'] = id
end

function _M.header_filter(conf, ctx)
    core.response.set_header('x-trace-id', ctx['x-trace-id'])
end

function _M.log(conf, ctx)
    ngx.var.trace_id = ctx['x-trace-id']
end

return _M
apisix/Dockerfile
Modify the apisix Dockerfile so the plugin is copied into the image at build time:
FROM apache/apisix:2.13.0-alpine
COPY conf/apisix.yaml /usr/local/apisix/conf/config.yaml
COPY extra /usr/local/apisix/extra
EXPOSE 9080 9090 9091 9443
Compared with the earlier version, this also exposes the control API port, 9090.
apisix/conf/apisix.yaml
Modify the apisix config file to support the custom plugin:
etcd:
  host:
    - http://etcd-client.lab.svc.cluster.local:2379
apisix:
  allow_admin:
    - 0.0.0.0/0
  admin_key:
    - name: admin
      key: your-secret
      role: admin
  extra_lua_path: /usr/local/apisix/extra/lua/?.lua
  enable_control: true
  control:
    ip: 0.0.0.0
    port: 9090
nginx_config:
  http_server_configuration_snippet: |
    set $trace_id "";
  http:
    access_log_format: '{"remote":"$remote_addr","time":"$time_local","method":"$request_method","uri":"$request_uri","status":$status,"request_time":$request_time,"agent":"$http_user_agent","upstream_status":$upstream_status,"upstream_time":$upstream_response_time,"upstream":"$upstream_addr","trace_id":"$trace_id"}'
    access_log_format_escape: json
plugins:
  - proxy-rewrite
  - trace-id
The nginx access log is switched to JSON output, paving the way for shipping logs into ElasticSearch later.
Build the image and roll it out to the k8s cluster
Build:
$ docker build --pull -t lab.com:release.apisix.001 apisix
Sending build context to Docker daemon 7.68kB
Step 1/4 : FROM apache/apisix:2.13.0-alpine
2.13.0-alpine: Pulling from apache/apisix
Digest: sha256:a39aa5e8d75f188111e9e64c2c8ecd728ad08a3522c0b4bf867531e32459031c
Status: Image is up to date for apache/apisix:2.13.0-alpine
---> 890821d363b4
Step 2/4 : COPY conf/apisix.yaml /usr/local/apisix/conf/config.yaml
---> Using cache
---> 9e4e81400847
Step 3/4 : COPY extra /usr/local/apisix/extra
---> Using cache
---> 17f9db61cfba
Step 4/4 : EXPOSE 9080 9090 9091 9443
---> Using cache
---> fc36158522bd
Successfully built fc36158522bd
Successfully tagged lab.com:release.apisix.001
Modify the image in the k8s resource file apisix.yaml to lab.com:release.apisix.001, then apply it:
$ kubectl -n lab apply -f k8s-yaml/apisix.yaml
deployment.apps/apisix configured
configmap/apisix-dashboard-config unchanged
pod/apisix-dashboard configured
service/apisix unchanged
service/apisix-dashboard unchanged
Routing
With the HTTP service and the plugin developed, it is time to configure the first route and enable the trace-id plugin. There are two different ways to configure routes, Etcd vs stand-alone:
- Etcd: route rules are stored in Etcd, and all apisix nodes use that shared data
- Stand-alone: route rules are defined in the apisix config file, i.e. the rules travel with each apisix node
After some thought I chose Etcd: of the two control-plane approaches it is the more flexible one, while stand-alone comes with more limitations.
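For reference, stand-alone mode would look roughly like the fragment below (a sketch only, not used in this lab): the admin API is disabled, the config center is switched from etcd to a local yaml file, and apisix.yaml then carries the route rules itself, terminated by the mandatory #END marker:

```yaml
# config.yaml -- read rules from the local yaml file instead of etcd
apisix:
  enable_admin: false
  config_center: yaml

# apisix.yaml -- the route rules travel with this node
routes:
  - uri: /*
    upstream:
      nodes:
        "echo.lab.svc.cluster.local:8080": 1
      type: roundrobin
#END
```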
Once Etcd is settled on for managing the route rules, there are two approaches:
- Manage them through apisix-dashboard: the Windows philosophy
- Manage them with scripts calling the admin API: the Linux philosophy
In this experiment I use a combination of both, with the end goal of moving entirely to the latter.
The Windows way
Open http://admin.lab.com (set up in the previous post) and add a route through the UI. However, since syncing plugins into apisix-dashboard requires exporting schema.json via the apisix control API and syncing it into place, and I will ultimately converge on the Linux way, I skipped that adaptation; as a result, the trace-id plugin cannot be configured in the UI.
Once the route is configured and active, its JSON description can be viewed in the dashboard:
{
  "uri": "/*",
  "name": "echo",
  "methods": [
    "GET",
    "POST",
    "PUT",
    "DELETE",
    "HEAD",
    "OPTIONS"
  ],
  "plugins": {
    "proxy-rewrite": {
      "scheme": "http"
    }
  },
  "upstream": {
    "nodes": [
      {
        "host": "echo.lab.svc.cluster.local",
        "port": 8080,
        "weight": 1
      }
    ],
    "timeout": {
      "connect": 3,
      "send": 3,
      "read": 6
    },
    "type": "roundrobin",
    "hash_on": "vars",
    "scheme": "http",
    "pass_host": "pass",
    "keepalive_pool": {
      "idle_timeout": 60,
      "requests": 1024,
      "size": 16
    }
  },
  "status": 1
}
The Linux way
After configuring the route in the dashboard, http://lab.com/hello-world is already reachable from the browser, but the trace-id plugin is not yet in effect. Add it to the route via the apisix admin API:
$ curl http://10.109.76.252:9080/apisix/admin/routes/408684119430529881 -H 'X-API-KEY: your-secret' -X PATCH -d '
{
  "plugins": {
    "trace-id": {}
  }
}'
...
Confirm that trace-id is in effect, either in the dashboard or with real requests:
$ curl http://lab.com/hello-world
{"method":"GET","url":"/hello-world","headers":{"host":"lab.com","x-real-ip":"10.244.0.1","x-forwarded-for":"192.168.168.168, 10.244.0.1","x-forwarded-proto":"http","x-forwarded-host":"lab.com","x-forwarded-port":"80","x-forwarded-ssl":"off","user-agent":"curl/7.68.0","accept":"*/*","x-trace-id":"a17399de-4742-40fb-a809-7d364a0f7be3"}}#
$ curl -H 'x-trace-id: a17399de-4742-40fb-a809-7d364a0f7be3' http://lab.com/thank-you
{"method":"GET","url":"/thank-you","headers":{"host":"lab.com","x-real-ip":"10.244.0.1","x-forwarded-for":"192.168.168.168, 10.244.0.1","x-forwarded-proto":"http","x-forwarded-host":"lab.com","x-forwarded-port":"80","x-forwarded-ssl":"off","user-agent":"curl/7.68.0","accept":"*/*","x-trace-id":"a17399de-4742-40fb-a809-7d364a0f7be3"}}#
When x-trace-id is not passed, the plugin generates one; when it is passed, it is forwarded transparently. Check the gateway access log to confirm the trace_id is recorded.
Conclusion
This hands-on session accomplished the following:
- Deployed the echo HTTP service to k8s
- Developed the first apisix plugin, trace-id
- Got familiar with routing and configured the first route
When using apisix in a real project, maintaining the route rules may take some extra effort. With few rules, syncing the dashboard schema.json into place and managing them in the UI is enough; with many rules, I personally recommend describing them in a unified config file and syncing them automatically via scripts or similar tooling.
In the hands-on series my time goes mostly into the hands-on work itself; the article only tries to put the final result into words. Predictably, if you merely read without actually trying it, the article will be of little use, so I strongly encourage readers who can to get hands-on!