
Deploying ELK 7.6 on Kubernetes

Published: 2020-10-21


Prerequisites:

1. A working Kubernetes cluster
2. The container images
3. A private Harbor registry
4. Image build files (Dockerfiles)
5. A self-signed certificate

Pull the images

docker pull elasticsearch:7.6.2
docker pull filebeat:7.6.2
docker pull kibana:7.6.2

Because we need Elasticsearch's security (authentication) module, the elasticsearch and kibana images need some modification.

Edit elasticsearch.yml

vim elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
xpack.security.enabled: "true"
xpack.security.transport.ssl.enabled: "true"
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.certificate_authorities: /usr/share/elasticsearch/config/certs/ca.crt
xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/certs/tls.crt
xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/certs/tls.key
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/certs/tls.crt
#xpack.security.http.ssl.key: /usr/share/elasticsearch/config/certs/tls.key

Build the elasticsearch image

#vim Dockerfile-es
FROM elasticsearch:7.6.2
ADD elasticsearch.yml  /usr/share/elasticsearch/config/

Build the elasticsearch image and push it to the private Harbor registry

docker build -f Dockerfile-es -t harbor.xxx.com/elk/elasticsearch_sa:7.6.2 .
docker login harbor.xxx.com -u admin -p 123qwe
docker push harbor.xxx.com/elk/elasticsearch_sa:7.6.2

Edit kibana.yml

vim kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
i18n.locale: zh-CN
elasticsearch.username: ${ELASTICSEARCH_USERNAME}
elasticsearch.password: ${ELASTICSEARCH_PASSWORD}
xpack.security.enabled: true
xpack.monitoring.enabled: true
xpack.monitoring.ui.enabled: true
# vim Dockerfile-kibana 
FROM kibana:7.6.2
ADD kibana.yml  /usr/share/kibana/config/kibana.yml

Build the kibana image and push it to the private Harbor registry

docker build -f Dockerfile-kibana -t harbor.xxx.com/elk/kibana_zh:7.6.2 .
docker login harbor.xxx.com -u admin -p 123qwe
docker push harbor.xxx.com/elk/kibana_zh:7.6.2

The filebeat image is used as-is, but it also needs to be pushed to Harbor

docker tag filebeat:7.6.2 harbor.xxx.com/elk/filebeat:7.6.2
docker push harbor.xxx.com/elk/filebeat:7.6.2

Create a self-signed certificate and the corresponding Kubernetes Secret

mkdir crt && cd crt
openssl req -x509 -sha256 -nodes -newkey rsa:4096 -days 365 -subj "/CN=quickstart-es-http" -addext "subjectAltName=DNS:quickstart-es-http.log-prod.svc" -keyout tls.key -out tls.crt
kubectl create secret -n log-prod generic quickstart-es-cert --from-file=ca.crt=tls.crt --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key
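
Before loading the files into the Secret, it can be worth sanity-checking the certificate. A minimal sketch (the same openssl command as above; `-addext` and `-ext` need OpenSSL 1.1.1+): confirm the subject and SAN match the in-cluster service DNS name. Because the certificate is self-signed, tls.crt doubles as ca.crt in the Secret.

```shell
# Generate the self-signed certificate (same command as above; OpenSSL 1.1.1+).
openssl req -x509 -sha256 -nodes -newkey rsa:4096 -days 365 \
  -subj "/CN=quickstart-es-http" \
  -addext "subjectAltName=DNS:quickstart-es-http.log-prod.svc" \
  -keyout tls.key -out tls.crt
# Print the subject and SAN to confirm they match the in-cluster service name.
openssl x509 -in tls.crt -noout -subject -ext subjectAltName
```

With verification_mode set to certificate, only the CA chain is checked (not the hostname), but keeping the SAN correct avoids surprises if you tighten verification later.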

Create a namespace (skip if it already exists)
# kubectl create namespace log-prod

Storage

I reuse an NFS-backed StorageClass that was created earlier.
The note below describes how to set it up:
Doc: dynamic NFS provisioning on k8s (.note)
Link: http://note.youdao.com/noteshare?id=0fbae4be6f952170cdee986a76da710f&sub=9F37314BF64C4B35B03783FD16FE6349

Deploy the ES cluster

Create the Service and StatefulSet

vim es.yaml

---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: log-prod
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  type: ClusterIP
  ports:
    - port: 9200
      targetPort: 9200
      name: rest
    - port: 9300
      targetPort: 9300
      name: inter-node
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: log-prod
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels: 
        app: elasticsearch
    spec:
      imagePullSecrets:
        - name: harbor
      schedulerName: default-scheduler
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: harbor.xxx.com/elk/elasticsearch_sa:7.6.2
        imagePullPolicy: Always
        ports:
        - name: rest
          containerPort: 9200
        - name: inter
          containerPort: 9300
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        - name: ca
          mountPath: /usr/share/elasticsearch/config/certs
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "es-0,es-1,es-2"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: discovery.seed_hosts
          value: "elasticsearch"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: network.host
          value: "0.0.0.0"
      volumes:
      - name: ca
        secret:
          secretName: quickstart-es-cert   

  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: grafana-nfs
      resources:
        requests:
          storage: 50Gi 

Apply it

kubectl apply -f es.yaml

Check the PVCs

kubectl  get pvc -n log-prod
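
Assuming kubectl points at the right cluster, a quick sketch for watching the StatefulSet come up before moving on (resource names and namespace per the manifest above):

```shell
# Block until all 3 replicas are ready; the first start can take a while
# because each pod waits for its PVC to be provisioned by the StorageClass.
kubectl rollout status statefulset/es -n log-prod --timeout=600s
# Pods should be es-0, es-1, es-2, all Running and Ready.
kubectl get pods -n log-prod -l app=elasticsearch -o wide
kubectl get pvc -n log-prod
```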

Accessing Elasticsearch

Create an Ingress

vim es-ing.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: elasticsearch
  namespace: log-prod
  labels:
    app: elasticsearch
  annotations: {}
spec:
  rules:
    - host: elastic.xxx.com
      http:
        paths:
          - path: /
            backend:
              serviceName: elasticsearch
              servicePort: 9200

Apply

kubectl apply -f es-ing.yaml

Elasticsearch is now up and requires username/password authentication, but the accounts have not been initialized yet.

Initialize the built-in accounts
Exec into a pod and run the interactive setup below.
Here every built-in account's password is set to 123456; use strong passwords in production.
We won't rely on these built-in accounts below anyway; we'll create our own.

#kubectl exec -it -n log-prod  es-0  bash
[root@es-0 elasticsearch]#bin/elasticsearch-setup-passwords interactive
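
Once the passwords are set, a quick check from inside the cluster (e.g. from the es-0 shell you are already in; service name and the 123456 password follow the setup above) confirms that security is actually enforced:

```shell
# Without credentials the request should be rejected with HTTP 401.
curl -s -o /dev/null -w "%{http_code}\n" http://elasticsearch:9200/
# With the elastic superuser, cluster health should come back
# (status turns green once all shards are allocated across es-0..es-2).
curl -s -u elastic:123456 http://elasticsearch:9200/_cluster/health?pretty
```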

Note: creating a user via the API

curl -X POST -u elastic:123456 "localhost:9200/_security/user/huangrunda?pretty" -H 'Content-Type: application/json' -d'
{
  "password" : "123456",
  "roles" : [ "yjhl" ],
  "full_name" : "黄闰达",
  "email" : "huangrunda@xxx.com",
  "metadata" : {
    "intelligence" : 7
  }
}
'
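
Note that the user above references a role named yjhl; if it does not exist yet, the user will authenticate but have no privileges. A hedged sketch of creating it through the role API — the index pattern and privilege list here are illustrative assumptions, not from the original post:

```shell
# Illustrative role: read-only access to the filebeat-* indices.
# Adjust "names" and "privileges" to your actual needs.
curl -X POST -u elastic:123456 "localhost:9200/_security/role/yjhl?pretty" -H 'Content-Type: application/json' -d'
{
  "indices" : [
    {
      "names" : [ "filebeat-*" ],
      "privileges" : [ "read", "view_index_metadata" ]
    }
  ]
}
'
```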

Changing a password via the API

curl -u elastic:$password -X POST "localhost:9200/_security/user/elastic/_password?pretty" -H 'Content-Type: application/json' -d'
{
  "password" : "1234567"
}
'

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api.html#security-user-apis

Access:

http://elastic.xxx.com
Log in as elastic:123456

For a quick look at the cluster you can use the elasticsearch-head plugin:
install the elasticsearch-head extension from the Chrome Web Store,
then enter http://elastic.xxx.com and click Connect to browse the index information.

Create the Filebeat manifests

ConfigMap
DaemonSet
ClusterRoleBinding
ClusterRole
ServiceAccount

#vim  filebeat-kubernetes.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: log-prod
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    setup.template.name: "filebeat"
    setup.template.pattern: "filebeat-*"
    output.elasticsearch:
      username: "elastic"
      password: "123456"
      hosts: ['http://elasticsearch:9200']
      #setup.template.enabled: false
      index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
    setup.dashboards.index: "filebeat-*"
    setup.kibana:
      host: "kibana:5601"
      protocol: "http"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: log-prod
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      imagePullSecrets:
        - name: harbor
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        #image: docker.elastic.co/beats/filebeat:7.7.0
        image: harbor.xxx.com/elk/filebeat:7.6.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: "123456"
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /data/var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          #  readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /data/var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: log-prod
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: log-prod
  labels:
    k8s-app: filebeat
---

One thing to note in the manifest above:
on my cluster nodes the Docker data root is /data/var/lib/docker/, hence the hostPath volumes; adjust them to your environment.
The default is /var/lib/docker/.

Apply
#kubectl apply -f filebeat-kubernetes.yaml

After a while, use the elasticsearch-head plugin to confirm that filebeat-* indices have appeared.
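
The same check can be done from the command line via the _cat API (here going through the ingress host set up earlier; the index names follow the index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}" pattern from the ConfigMap):

```shell
# List the daily Filebeat indices with doc counts and sizes.
curl -s -u elastic:123456 "http://elastic.xxx.com/_cat/indices/filebeat-*?v"
```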

Deploy Kibana

root@i-tuzgrn70:/data/elk/7.6# cat kibana.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: log-prod
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: log-prod
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      imagePullSecrets:
        - name: harbor
      containers:
      - name: kibana
        image: harbor.xxx.com/elk/kibana_zh:7.6.2
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: "123456" 
        ports:
        - containerPort: 5601

kubectl apply -f kibana.yaml
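
A quick sketch for verifying the Deployment before wiring up the ingress (resource names per the manifest above):

```shell
# Wait for the kibana pod to become ready, then find the NodePort the
# Service was assigned (the ingress created next is the nicer entry point).
kubectl rollout status deployment/kibana -n log-prod
kubectl get svc kibana -n log-prod
# Tail the logs to confirm kibana connected to Elasticsearch and is serving.
kubectl logs -n log-prod deploy/kibana --tail=20
```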

To access it, create an Ingress

vim kibana-ing.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: log-prod
spec:
  rules:
  - host: kn.xxx.com
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601
        path: /

Access

http://kn.xxx.com
Log in as elastic:123456

The deployment actually involved stepping on quite a few pitfalls.

Related official docs:
https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#hashing-settings
https://discuss.elastic.co/t/es-7-3-2-sslhandshakeexception-no-available-authentication-scheme/200463/3
https://discuss.elastic.co/t/ssl-errors-remaining-after-upgrade-to-7-5-0/212520
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-put-role.html

Note: replace the image references that appear throughout this post with the ones from your own private Harbor registry.


Original post: https://blog.51cto.com/zxlwz/2542605
