This article shares how to use EFK (Elasticsearch, Fluentd, Kibana) in Kubernetes. It is quite practical, so it is shared here for reference; follow along and take a look.
Part 1: Preface
1. When installing the Kubernetes cluster we downloaded the archive https://dl.k8s.io/v1.8.5/kubernetes-client-linux-amd64.tar.gz. After unpacking it, the cluster/addons directory contains the YAML files for each add-on; in most cases they can be used with only minor changes.
2. Building a Kubernetes cluster involves downloading a large number of images. One convenient option is to buy an Alibaba Cloud ECS instance located in Hong Kong, pull the images there, export them with docker save -o, and then either import them on the cluster nodes with docker load or push them to a private image registry (see the sketch after this list).
3. Starting with version 1.8, the Kubernetes EFK add-on deploys elasticsearch-logging as a StatefulSet, but a bug leaves the elasticsearch-logging-0 pod unable to be created successfully, so it is recommended to follow the pre-1.8 approach and use a ReplicationController instead.
4. For the EFK installation to succeed, kube-dns must already be installed; this was covered in an earlier article (the sketch after this list includes a quick check).
5. The elasticsearch and kibana versions used in the EFK installation must be compatible. The images used here are:
gcr.io/google_containers/elasticsearch:v2.4.1-2
gcr.io/google_containers/fluentd-elasticsearch:1.22
gcr.io/google_containers/kibana:v4.6.1-1
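As referenced in points 2 and 4 above, here is a minimal sketch of the image-transfer workflow and the kube-dns check. The tar file name is an example, not a value from the original setup:
# On the Hong Kong ECS instance: pull the three images, then export them to a tar file
docker pull gcr.io/google_containers/elasticsearch:v2.4.1-2
docker pull gcr.io/google_containers/fluentd-elasticsearch:1.22
docker pull gcr.io/google_containers/kibana:v4.6.1-1
docker save -o efk-images.tar \
  gcr.io/google_containers/elasticsearch:v2.4.1-2 \
  gcr.io/google_containers/fluentd-elasticsearch:1.22 \
  gcr.io/google_containers/kibana:v4.6.1-1
# On each cluster node: import the images
docker load -i efk-images.tar
# Before installing EFK, confirm kube-dns is running
kubectl get pods -n kube-system -l k8s-app=kube-dns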
Part 2: YAML files
efk-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
- kind: ServiceAccount
  name: efk
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
es-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: efk
      containers:
      - image: gcr.io/google_containers/elasticsearch:v2.4.1-2
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
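Once es-controller.yaml is applied, a quick way to confirm that both replicas come up (a sketch, not part of the original article):
kubectl get pods -n kube-system -l k8s-app=elasticsearch-logging -o wide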
es-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
fluentd-es-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: efk
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.22
        command:
        - '/bin/sh'
        - '-c'
        - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: "node.alpha.kubernetes.io/ismaster"
        effect: "NoSchedule"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
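Note the nodeSelector in the manifest above: fluentd pods are scheduled only onto nodes that carry the beta.kubernetes.io/fluentd-ds-ready=true label, so every node that should ship logs must be labeled first. A sketch, where <node-name> is a placeholder:
kubectl label node <node-name> beta.kubernetes.io/fluentd-ds-ready=true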
kibana-controller.yaml — one point deserves special mention here: the value of the KIBANA_BASE_URL environment variable in the manifest below must be set to an empty string; the default value causes problems when accessing Kibana.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      serviceAccountName: efk
      containers:
      - name: kibana-logging
        image: gcr.io/google_containers/kibana:v4.6.1-1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
        - name: "KIBANA_BASE_URL"
          value: ""
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
Part 3: Startup and verification
1. Create the resources:
kubectl create -f .
2. Use kubectl logs -f on the relevant pods to confirm that they started correctly; note that the kibana-logging-* pod takes some time to start. For example:
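A minimal sketch of those checks; the pod name suffix is a placeholder, substitute the names reported by kubectl get pods:
kubectl get pods -n kube-system | grep -E 'elasticsearch-logging|fluentd-es|kibana-logging'
kubectl logs -f -n kube-system kibana-logging-<pod-suffix>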
3. Elasticsearch verification (a proxy can be created with kubectl proxy; see the sketch after the sample output below):
http://IP:PORT/_cat/nodes?v
host ip heap.percent ram.percent load node.role master name
10.1.88.4 10.1.88.4 9 87 0.45 d m elasticsearch-logging-v1-hnfv2
10.1.67.4 10.1.67.4 6 91 0.03 d * elasticsearch-logging-v1-zmtdl
http://IP:PORT/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open logstash-2018.04.07 5 1 515 0 1.1mb 584.4kb
green open .kibana 1 1 2 0 22.2kb 9.7kb
green open logstash-2018.04.06 5 1 15364 0 7.3mb 3.6mb
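A minimal sketch of querying both endpoints through kubectl proxy; the local port 8001 is an example:
kubectl proxy --port=8001
# In another terminal:
curl "http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/nodes?v"
curl "http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/indices?v"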
4. Kibana verification:
http://IP:PORT/app/kibana#/discover?_g
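Since KIBANA_BASE_URL is set to an empty string, one simple way to get an IP:PORT for the browser is to port-forward the Kibana pod (a sketch; substitute the real pod name):
kubectl port-forward -n kube-system kibana-logging-<pod-suffix> 5601:5601
# Then open http://localhost:5601/app/kibana in a browser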
Part 4: Notes
To set up EFK successfully, pay attention to the following points:
1. Make sure kube-dns has been installed successfully.
2. With this version, deploy elasticsearch-logging as a ReplicationController.
3. The elasticsearch and kibana versions must be compatible.
4. Set the value of KIBANA_BASE_URL to an empty string ("").
Thanks for reading! That wraps up "How to use EFK in Kubernetes". Hopefully the content above is helpful; if you found the article useful, feel free to share it so more people can see it.