[date: 2022-06-11 15:28]

Hands-On Series: Log Collection

After the previous two exercises, the application and APISIX are deployed to the k8s cluster and reachable, which gives us basic capability for business development, but a few key pieces are still missing before the setup is complete from an engineering point of view. This time I set up and configure the EFK logging stack in k8s, with the goal of being able to search and query APISIX log data through the logging platform.

Helm

Helm is a package manager for k8s resources, roughly comparable to apt on Ubuntu, yum on CentOS, or brew on macOS. Without Helm, deploying a service to k8s generally means describing the desired resources in yaml files and applying them to the cluster with kubectl. For common open-source components, the configuration that has to be written this way is extremely verbose and complex, whereas managing these shared components with Helm makes deployment very convenient and saves a lot of time.
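
For example, the Bitnami chart repository used for the installs later in this post is added like any other Helm repo (the URL below is the public Bitnami repository):

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm search repo bitnami/elasticsearch   # inspect available chart versions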

In my experience, when deploying shared components locally with Helm you need to handle persistence yourself. In short, we create PersistentVolumes (PV) in k8s ahead of time, and Helm then claims the idle PVs through PersistentVolumeClaims (PVC). An example yaml for creating a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: name
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: '/mnt/d/k8s-pv/name'

With the PV resources in place, set storageClass to manual when installing with Helm; during installation Helm will create PVCs that claim these PVs. Note that uninstalling the release does not delete the PVCs, so they have to be removed manually before the PVs can be claimed by other PVCs.
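
A minimal sketch of this lifecycle, assuming the manifest above is saved as pv.yaml and the hostPath directory already exists (the release and PVC names below are placeholders):

$ kubectl apply -f pv.yaml                 # create the PV before the Helm install
$ kubectl get pv                           # the PV should show up as Available
$ helm uninstall <release> -n lab          # later: uninstalling leaves the PVCs behind
$ kubectl -n lab get pvc                   # list the leftover claims
$ kubectl -n lab delete pvc <pvc-name>     # delete them manually to free the PV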

ElasticSearch

Following the PV section above, deploying ElasticSearch requires creating at least 6 PVs up front (5 × 8Gi and 1 × 16Gi), and then providing a values file for the Helm install, es.helm.values.yaml:

global:
  storageClass: manual
  kibanaEnabled: true

master:
  heapSize: 512m
coordinating:
  heapSize: 512m
data:
  heapSize: 4096m

Since my machine has plenty of memory, I raised the default heapSize values fourfold here so that ES stays stable later on. With Helm already installed, ElasticSearch is then installed with the following command:

$ helm install elasticsearch bitnami/elasticsearch -f es.helm.values.yaml -n lab
...

After a while, check how the ES deployment is doing:

$ kubectl -n lab get pods |grep elasticsearch
elasticsearch-coordinating-only-0       1/1     Running   0               53m
elasticsearch-coordinating-only-1       1/1     Running   0               53m
elasticsearch-data-0                    1/1     Running   0               53m
elasticsearch-data-1                    1/1     Running   0               53m
elasticsearch-kibana-759576b848-wwk8p   1/1     Running   0               53m
elasticsearch-master-0                  1/1     Running   0               53m
elasticsearch-master-1                  1/1     Running   0               53m
elasticsearch-master-2                  1/1     Running   0               53m

$ kubectl -n lab get pvc
NAME                          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-elasticsearch-data-0     Bound    c8       8Gi        RWO            manual         55m
data-elasticsearch-data-1     Bound    f8       8Gi        RWO            manual         55m
data-elasticsearch-master-0   Bound    a8       8Gi        RWO            manual         55m
data-elasticsearch-master-1   Bound    b8       8Gi        RWO            manual         55m
data-elasticsearch-master-2   Bound    e8       8Gi        RWO            manual         55m
elasticsearch-kibana          Bound    c16      16Gi       RWO            manual         55m

$ kubectl -n lab get svc |grep elasticsearch
elasticsearch-coordinating-only   ClusterIP   10.105.43.71    <none>        9200/TCP,9300/TCP                     56m
elasticsearch-data                ClusterIP   10.96.249.247   <none>        9200/TCP,9300/TCP                     56m
elasticsearch-kibana              ClusterIP   10.104.79.27    <none>        5601/TCP                              56m
elasticsearch-master              ClusterIP   10.105.192.95   <none>        9200/TCP,9300/TCP                     56m

$ curl http://10.105.43.71:9200
{
  "name" : "elasticsearch-coordinating-only-1",
  "cluster_name" : "elastic",
  "cluster_uuid" : "hKP-xGoPQfiIiYTOMEw4Dg",
  "version" : {
    "number" : "7.14.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "66b55ebfa59c92c15db3f69a335d500018b3331e",
    "build_date" : "2021-08-26T09:01:05.390870785Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
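
As a further sanity check against the same coordinating service, the cluster health and node list can also be queried (standard ES APIs; the IP is the coordinating ClusterIP shown above):

$ curl -s 'http://10.105.43.71:9200/_cluster/health?pretty'
$ curl -s 'http://10.105.43.71:9200/_cat/nodes?v'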

Fluentd

Next, install the log collection component Fluentd, again providing Helm with a values file, fluentd.helm.values.yaml:

global:
  storageClass: manual

forwarder:
  configMap: fluentd-forwarder
aggregator:
  configMap: fluentd-aggregator
  extraEnv:
  - name: ELASTICSEARCH_HOST
    value: elasticsearch-coordinating-only.lab.svc.cluster.local
  - name: ELASTICSEARCH_PORT
    value: "9200"

The Helm values file above references ConfigMaps by name; on a first install, the chart's default ConfigMaps can be captured as a starting point through the steps below:
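
One possible way to do this (a sketch; the generated ConfigMap names and layout vary by chart version, so verify them before relying on them):

$ helm template fluentd bitnami/fluentd -n lab > fluentd-default.yaml   # render manifests without installing
# copy the two default ConfigMap manifests out of fluentd-default.yaml,
# rename them to fluentd-forwarder / fluentd-aggregator, adjust the aggregator rules, then:
$ kubectl -n lab apply -f fluentd-forwarder.yaml -f fluentd-aggregator.yaml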

These are the ConfigMaps specified for this Fluentd install: fluentd-forwarder is left entirely at the default values, while fluentd-aggregator adds customized outputs for collecting APISIX and echo application logs:

...
# Route APISIX container logs: JSON access-log lines (starting with {"remote":) are
# retagged as apisix.access, everything else is retagged as apisix
<match kubernetes.var.log.containers.apisix**>
  @type rewrite_tag_filter
  <rule>
    key log
    pattern /^\{"remote":/
    tag apisix.access
  </rule>
  <rule>
    key log
    pattern /^\{"remote":/
    tag apisix
    invert true
  </rule>
</match>

<filter apisix.access>
  @type parser
  format json
  key_name log
  reserve_time true
  reserve_data true
</filter>

<filter apisix.access>
  @type record_transformer
  renew_record true
  keep_keys @timestamp,log,remote,time,method,uri,status,request_time,agent,upstream_status,upstream_time,upstream,trace_id
</filter>

# Ship access logs to ES with one index per day, e.g. apisix.access.20220611
<match apisix.access>
  @type elasticsearch
  host "#{ENV['ELASTICSEARCH_HOST']}"
  port "#{ENV['ELASTICSEARCH_PORT']}"

  logstash_format true
  logstash_prefix apisix.access
  logstash_prefix_separator .
  logstash_dateformat %Y%m%d

  <buffer>
    @type memory
    flush_thread_count 2
    flush_interval 3s
  </buffer>
</match>

<match apisix>
  @type elasticsearch
  host "#{ENV['ELASTICSEARCH_HOST']}"
  port "#{ENV['ELASTICSEARCH_PORT']}"

  logstash_format true
  logstash_prefix apisix
  logstash_prefix_separator .
  logstash_dateformat %Y%m%d

  <buffer>
    @type memory
    flush_thread_count 2
    flush_interval 3s
  </buffer>
</match>

# echo application container logs go to daily app.echo.* indices
<match kubernetes.var.log.containers.echo**>
  @type elasticsearch
  host "#{ENV['ELASTICSEARCH_HOST']}"
  port "#{ENV['ELASTICSEARCH_PORT']}"

  logstash_format true
  logstash_prefix app.echo
  logstash_prefix_separator .
  logstash_dateformat %Y%m%d

  <buffer>
    @type memory
    flush_thread_count 2
    flush_interval 3s
  </buffer>
</match>

# Anything not matched above is printed to the aggregator's stdout
<match **>
  @type stdout
</match>
...

When deploying the APISIX plugin in the previous exercise, I deliberately configured the APISIX access log to be emitted as JSON, precisely so that Fluentd here can collect it into ES in structured form. Collecting it as JSON, in turn, makes it cheap to build some analytics once Grafana is hooked up later, such as request volume, response time, and error rate.
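
For reference, with that plugin configuration an access-log record is a single JSON line roughly like the one below (field names follow the keep_keys list above; the values are purely illustrative). The leading {"remote": prefix is also what the rewrite_tag_filter rule above keys on.

{"remote":"10.1.2.3","time":"11/Jun/2022:15:20:01 +0800","method":"GET","uri":"/echo","status":200,"request_time":0.012,"agent":"curl/7.79.1","upstream_status":200,"upstream_time":0.010,"upstream":"10.1.2.10:8080","trace_id":"3f2a9c"}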

Run the following command to deploy:

$ helm install fluentd bitnami/fluentd -f fluentd.helm.values.yaml -n lab
...

Check again after a while:

$ kubectl -n lab get pods |grep fluentd
NAME                  READY   STATUS    RESTARTS        AGE
fluentd-0             1/1     Running   0               36m
fluentd-r6ggd         1/1     Running   0               34m

$ kubectl -n lab get svc |grep fluentd
fluentd-aggregator   ClusterIP   10.111.254.11    <none>        9880/TCP,24224/TCP                    72m
fluentd-forwarder    ClusterIP   10.104.155.180   <none>        9880/TCP                              72m
fluentd-headless     ClusterIP   None             <none>        9880/TCP,24224/TCP                    72m
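
Since the aggregator configuration ends with a catch-all match ** stdout block, a quick way to confirm records are flowing end to end is to tail the aggregator pod's own log output:

$ kubectl -n lab logs fluentd-0 --tail=20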

Kibana

When installing ES via Helm I enabled Kibana in the values file (kibanaEnabled: true), so Kibana was deployed into the cluster as well, and the page opens fine at http://10.104.79.27:5601:

$ curl -i http://10.104.79.27:5601/app/home
HTTP/1.1 200 OK
content-security-policy: script-src 'unsafe-eval' 'self'; worker-src blob: 'self'; style-src 'unsafe-inline' 'self'
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
kbn-name: elasticsearch-kibana-759576b848-wwk8p
kbn-license-sig: 825a51075482232ad114ee0a8dde1a6716f78c078c9416321971d1a29f9f5f5e
content-type: text/html; charset=utf-8
cache-control: private, no-cache, no-store, must-revalidate
content-length: 136330
vary: accept-encoding
accept-ranges: bytes
Date: Sat, 04 Jun 2022 07:13:59 GMT
Connection: keep-alive
Keep-Alive: timeout=120

<!DOCTYPE html>...

Having confirmed that all components are working, access the echo service a few times via lab.com (see the previous article in this series), then open Kibana in a browser; under Stack Management -> Index Management three indices are visible.
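
The same indices can also be confirmed from the command line against ES (the IP is the coordinating service ClusterIP from earlier; index names follow the logstash_prefix and logstash_dateformat settings, e.g. apisix.access.20220611):

$ curl -s 'http://10.105.43.71:9200/_cat/indices/apisix*,app.echo*?v'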

Next, create an index pattern under Stack Management -> Index Patterns; once that is done, go back to the Discover menu, select the index pattern, and query the logs.
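
For a quick check outside Kibana, the same data can also be queried directly through ES's query-string search (the field comes from the keep_keys list above; the query is illustrative):

$ curl -s 'http://10.105.43.71:9200/apisix.access.*/_search?q=status:200&size=1&pretty'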

Conclusion

In this exercise I used Helm to deploy ElasticSearch, Fluentd, and Kibana into the k8s cluster, and by configuring Fluentd's collection rules shipped the APISIX and application logs into ES, which meets the goal set at the start.

That said, for an enterprise-grade software engineering setup I would lean toward relying on managed cloud services rather than building and maintaining every component myself. The reason not to build from scratch is engineering complexity: in reality it is hard for an individual, or even an average team, to keep this whole chain of systems stable and reliable through their own operations. This is the era of cloud computing, the era of SaaS.
