
Kubernetes (7) Binary Installation: Worker Node Setup



Configuring kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.

On startup, kubelet automatically registers the node with kube-apiserver; the built-in cAdvisor collects and monitors the node's resource usage.

For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes every request, rejecting unauthorized access (e.g., requests from apiserver or heapster).

  1. Create the kubelet bootstrap kubeconfig file

    
    cd /opt/k8s/work
    
    export KUBE_APISERVER=https://192.168.0.107:6443
    export node_name=slave
    
    export BOOTSTRAP_TOKEN=$(kubeadm token create   --description kubelet-bootstrap-token   --groups system:bootstrappers:${node_name}   --kubeconfig ~/.kube/config)
    
    # Set cluster parameters
    kubectl config set-cluster kubernetes   --certificate-authority=/etc/kubernetes/cert/ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}   --kubeconfig=kubelet-bootstrap.kubeconfig
    
    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap   --token=${BOOTSTRAP_TOKEN}   --kubeconfig=kubelet-bootstrap.kubeconfig
    
    # Set context parameters
    kubectl config set-context default   --cluster=kubernetes   --user=kubelet-bootstrap   --kubeconfig=kubelet-bootstrap.kubeconfig
    
    # Use the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
    
    
    • Only the token is written into the kubeconfig; after bootstrapping finishes, kube-controller-manager creates the client and server certificates for the kubelet
    • After kube-apiserver accepts the kubelet's bootstrap token, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers; a ClusterRoleBinding will be created for this group later (step 8)
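
    To confirm the token was actually created, you can list the bootstrap tokens on the master (a quick check, using the same kubeconfig as above):

    kubeadm token list --kubeconfig ~/.kube/config
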
  2. Distribute the bootstrap kubeconfig file to all worker nodes

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kubelet-bootstrap.kubeconfig root@${node_ip}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
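
    This walkthrough targets a single worker (192.168.0.114). With more workers, the same copy can run in a loop; the second IP below is hypothetical:

    for node_ip in 192.168.0.114 192.168.0.115; do
      scp kubelet-bootstrap.kubeconfig root@${node_ip}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
    done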
    
  3. Create and distribute the kubelet configuration file

    Starting with v1.10, some kubelet parameters must be set in a configuration file; kubelet --help indicates which ones.

    cd /opt/k8s/work
    
    export CLUSTER_CIDR="172.30.0.0/16"
    export NODE_IP=192.168.0.114
    export CLUSTER_DNS_SVC_IP="10.254.0.2"
    
    
    cat > kubelet-config.yaml <<EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: ${NODE_IP}
    staticPodPath: "/etc/kubernetes/manifests"
    syncFrequency: 1m
    fileCheckFrequency: 20s
    httpCheckFrequency: 20s
    staticPodURL: ""
    port: 10250
    readOnlyPort: 0
    rotateCertificates: true
    serverTLSBootstrap: true
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: "/etc/kubernetes/cert/ca.pem"
    authorization:
      mode: Webhook
    registryPullQPS: 0
    registryBurst: 20
    eventRecordQPS: 0
    eventBurst: 20
    enableDebuggingHandlers: true
    enableContentionProfiling: true
    healthzPort: 10248
    healthzBindAddress: ${NODE_IP}
    clusterDomain: "cluster.local"
    clusterDNS:
      - "${CLUSTER_DNS_SVC_IP}"
    nodeStatusUpdateFrequency: 10s
    nodeStatusReportFrequency: 1m
    imageMinimumGCAge: 2m
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    volumeStatsAggPeriod: 1m
    kubeletCgroups: ""
    systemCgroups: ""
    cgroupRoot: ""
    cgroupsPerQOS: true
    cgroupDriver: cgroupfs
    runtimeRequestTimeout: 10m
    hairpinMode: promiscuous-bridge
    maxPods: 220
    podCIDR: "${CLUSTER_CIDR}"
    podPidsLimit: -1
    resolvConf: /run/systemd/resolve/resolv.conf
    maxOpenFiles: 1000000
    kubeAPIQPS: 1000
    kubeAPIBurst: 2000
    serializeImagePulls: false
    evictionHard:
      memory.available:  "100Mi"
      nodefs.available:  "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
    evictionSoft: {}
    enableControllerAttachDetach: true
    failSwapOn: true
    containerLogMaxSize: 20Mi
    containerLogMaxFiles: 10
    systemReserved: {}
    kubeReserved: {}
    systemReservedCgroup: ""
    kubeReservedCgroup: ""
    enforceNodeAllocatable: ["pods"]
    EOF
    
    • address: the address the kubelet's secure port (HTTPS, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call the kubelet API;
    • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
    • authentication.anonymous.enabled: set to false to forbid anonymous access to port 10250;
    • authentication.x509.clientCAFile: the CA certificate that signs client certificates, enabling HTTPS client-certificate authentication;
    • authentication.webhook.enabled=true: enables HTTPS bearer-token authentication;
      Requests that pass neither x509 certificate nor webhook authentication (whether from kube-apiserver or other clients) are rejected with Unauthorized;
    • authorization.mode=Webhook: the kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group is allowed to operate on a resource (RBAC);
    • featureGates.RotateKubeletClientCertificate, featureGates.RotateKubeletServerCertificate: rotate certificates automatically; certificate lifetime is governed by kube-controller-manager's --experimental-cluster-signing-duration flag
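
    One caveat: resolvConf above assumes the node runs systemd-resolved. A quick sanity check on the worker (if the path is missing, point resolvConf at /etc/resolv.conf instead):

    export node_ip=192.168.0.114
    ssh root@${node_ip} "ls -l /run/systemd/resolve/resolv.conf"
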
  4. Create and distribute the kubelet configuration file to each node

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kubelet-config.yaml root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
    
    
  5. Create and distribute the kubelet service unit file

    cd /opt/k8s/work
    export K8S_DIR=/data/k8s/k8s
    export NODE_NAME=slave
    cat > kubelet.service <<EOF
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=${K8S_DIR}/kubelet
    ExecStart=/opt/k8s/bin/kubelet \
      --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
      --cert-dir=/etc/kubernetes/cert \
      --root-dir=${K8S_DIR}/kubelet \
      --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
      --config=/etc/kubernetes/kubelet-config.yaml \
      --hostname-override=${NODE_NAME} \
      --image-pull-progress-deadline=15m \
      --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \
      --logtostderr=true \
      --v=2
    Restart=always
    RestartSec=5
    StartLimitInterval=0
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    
    • If --hostname-override is set, kube-proxy must set it as well, otherwise the Node may not be found;
    • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; the kubelet uses the username and token in this file to send a TLS Bootstrapping request to kube-apiserver;
    • After K8S approves the kubelet's CSR, the certificate and private key are created in the --cert-dir directory, and then the --kubeconfig file is written
  6. Distribute the kubelet service file

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kubelet.service root@${node_ip}:/etc/systemd/system/kubelet.service
    
  7. Grant kube-apiserver permission to access the kubelet API

    When kubectl exec, run, logs, etc. are executed, apiserver forwards the request to the kubelet's HTTPS port. The RBAC rule below authorizes the user corresponding to apiserver's certificate (kubernetes.pem, CN: kubernetes-api) to access the kubelet API; see kubelet-auth for details.

    kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-api
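
    To see what this authorizes, you can inspect the predefined cluster role:

    kubectl describe clusterrole system:kubelet-api-admin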
    
    
  8. Bootstrap Token Auth and granting permissions

    On startup, the kubelet checks whether the file specified by --kubeconfig exists; if it does not, the kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

    When kube-apiserver receives the CSR, it authenticates the embedded token; on success it sets the request's user to system:bootstrap:<token-id> and group to system:bootstrappers. This process is called Bootstrap Token Auth.

    By default, this user and group have no permission to create CSRs, so create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

    kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
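
    As a quick check that the group is bound as intended:

    kubectl describe clusterrolebinding kubelet-bootstrap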
    
    
  9. Start the kubelet service

    export K8S_DIR=/data/k8s/k8s
    
    export node_ip=192.168.0.114
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
    
    
    • After starting, the kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the TLS client certificate, the private key, and the --kubeconfig file for the kubelet.

    • Note: kube-controller-manager must be configured with --cluster-signing-cert-file and --cluster-signing-key-file, or it will not create certificates and private keys for TLS Bootstrap.
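
    Once the node registers (after the CSR approvals in step 12), the healthz endpoint configured in kubelet-config.yaml (healthzBindAddress, port 10248) should respond; a quick probe from the master:

    curl http://192.168.0.114:10248/healthz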

  10. Troubleshooting

    1. After starting kubelet, kubectl get csr shows nothing, and the kubelet log contains this error

      journalctl -u kubelet -a |grep -A 2 'certificate_manager.go'
      
      Failed while requesting a signed certificate from the master: cannot create certificate signing request: Unauthorized 
       
      

      Check the kube-apiserver service log:

      root@master:/opt/k8s/work# journalctl -eu kube-apiserver
      
      Unable to authenticate the request due to an error: invalid bearer token
      

      Cause: the following flag was missing from the kube-apiserver service startup file

      --enable-bootstrap-token-auth \

      Appending it and restarting kube-apiserver resolved the issue.

    2. After startup, the kubelet generates CSRs continuously; new ones keep appearing even after manual approval
      Cause: the kube-controller-manager service had stopped; restarting it resolved the issue

      • If the kubelet service gets into a bad state, delete that node's /etc/kubernetes/kubelet.kubeconfig and /etc/kubernetes/cert/kubelet-client-current*.pem, then restart kubelet, as sketched below
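
      A sketch of that reset, using the paths from this deployment:

      export node_ip=192.168.0.114
      ssh root@${node_ip} "rm -f /etc/kubernetes/kubelet.kubeconfig /etc/kubernetes/cert/kubelet-client-current*.pem"
      ssh root@${node_ip} "systemctl restart kubelet"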
  11. Check kubelet status

    root@master:/opt/k8s/work# kubectl get csr
    NAME        AGE   REQUESTOR                 CONDITION
    csr-kl5mg   49s   system:bootstrap:5t989l   Pending
    csr-mrmkf   2m1s  system:bootstrap:5t989l   Pending
    csr-ql68g   13s   system:bootstrap:5t989l   Pending
    csr-rvl2v   84s   system:bootstrap:5t989l   Pending
    
    
    • While this runs, new CSRs keep being appended until they are manually approved
  12. Manually approve the CSRs

    root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
    certificatesigningrequest.certificates.k8s.io/csr-kl5mg approved
    certificatesigningrequest.certificates.k8s.io/csr-mrmkf approved
    certificatesigningrequest.certificates.k8s.io/csr-ql68g approved
    certificatesigningrequest.certificates.k8s.io/csr-rvl2v approved
    
    root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
    certificatesigningrequest.certificates.k8s.io/csr-f4smx approved
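
    • The extra Pending CSR that shows up after the first batch is approved is most likely the kubelet's serving-certificate request (serverTLSBootstrap: true in kubelet-config.yaml); it is not auto-approved either. To inspect a request's requestor and status before approving it:

    kubectl describe csr csr-f4smx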
    
    
  13. Check node information

    root@master:/opt/k8s/work# kubectl get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    slave   Ready    <none>   10m   v1.17.2
    
    
  14. Check the kubelet service status

    export node_ip=192.168.0.114
    root@master:/opt/k8s/work# ssh root@${node_ip} "systemctl status kubelet.service"
    ● kubelet.service - Kubernetes Kubelet
       Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2020-02-10 22:48:41 CST; 12min ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 15529 (kubelet)
        Tasks: 19 (limit: 4541)
       CGroup: /system.slice/kubelet.service
               └─15529 /opt/k8s/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig --cert-dir=/etc/kubernetes/cert --root-dir=/data/k8s/k8s/kubelet --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet-config.yaml --hostname-override=slave --image-pull-progress-deadline=15m --volume-plugin-dir=/data/k8s/k8s/kubelet/kubelet-plugins/volume/exec/ --logtostderr=true --v=2
    
    Feb 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.846285   15529 kubelet_node_status.go:73] Successfully registered node slave
    Feb 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.930745   15529 certificate_manager.go:402] Rotating certificates
    Feb 10 22:49:14 slave kubelet[15529]: I0210 22:49:14.966351   15529 kubelet_node_status.go:486] Recording NodeReady event message for node slave
    Feb 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580410   15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2029-01-21 13:08:18.850930128 +0000 UTC
    Feb 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580484   15529 certificate_manager.go:281] Waiting 78430h18m49.270459727s for next certificate rotation
    Feb 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.580981   15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2027-07-14 16:09:26.990162158 +0000 UTC
    Feb 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.581096   15529 certificate_manager.go:281] Waiting 65065h19m56.409078053s for next certificate rotation
    Feb 10 22:53:44 slave kubelet[15529]: I0210 22:53:44.911705   15529 kubelet.go:1312] Image garbage collection succeeded
    Feb 10 22:53:45 slave kubelet[15529]: I0210 22:53:45.053792   15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
    Feb 10 22:58:45 slave kubelet[15529]: I0210 22:58:45.054225   15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
    

Configuring the kube-proxy component

  1. Create the kube-proxy certificate and private key

    1. Create the certificate signing request file

      cd /opt/k8s/work
      cat > kube-proxy-csr.json <<EOF
      {
          "CN": "system:kube-proxy",
          "key": {
              "algo": "rsa",
              "size": 2048
          },
          "names": [
            {
              "C": "CN",
              "ST": "NanJing",
              "L": "NanJing",
              "O": "system:kube-proxy",
              "OU": "system"
            }
          ]
      }
      EOF
      
      
      • CN: sets the certificate's User to system:kube-proxy;
      • The predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs.
    2. Generate the certificate and private key

      cd /opt/k8s/work
      cfssl gencert -ca=/opt/k8s/work/ca.pem   -ca-key=/opt/k8s/work/ca-key.pem   -config=/opt/k8s/work/ca-config.json   -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
      
      ls kube-proxy*pem
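
      To double-check the subject (CN/O) of the generated certificate, you can decode it with cfssl-certinfo:

      cfssl-certinfo -cert kube-proxy.pem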
      
      
    3. Install the certificate

      cd /opt/k8s/work
      export node_ip=192.168.0.114
      scp kube-proxy*.pem root@${node_ip}:/etc/kubernetes/cert/
      
      
  2. Create the kubeconfig file

    • kube-proxy uses this file to reach the apiserver; it carries the apiserver address, the embedded CA certificate, the kube-proxy client certificate, and so on
    cd /opt/k8s/work
    
    export KUBE_APISERVER=https://192.168.0.107:6443
    
    kubectl config set-cluster kubernetes   --certificate-authority=/opt/k8s/work/ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}    --kubeconfig=kube-proxy.kubeconfig
      
    kubectl config set-credentials kube-proxy   --client-certificate=kube-proxy.pem   --client-key=kube-proxy-key.pem   --embed-certs=true   --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-context default   --cluster=kubernetes   --user=kube-proxy   --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
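
    To confirm the result (embedded certificate data is elided in the output):

    kubectl config view --kubeconfig=kube-proxy.kubeconfig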
    
    
  3. Distribute the kubeconfig

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kube-proxy.kubeconfig root@${node_ip}:/etc/kubernetes/kube-proxy.kubeconfig
    
    
  4. Create the kube-proxy configuration file

    cd /opt/k8s/work
    
    export CLUSTER_CIDR="172.30.0.0/16"
    
    export NODE_IP=192.168.0.114
    
    export NODE_NAME=slave
    
    cat > kube-proxy-config.yaml <<EOF
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      burst: 200
      kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
      qps: 100
    bindAddress: ${NODE_IP}
    healthzBindAddress: ${NODE_IP}:10256
    metricsBindAddress: ${NODE_IP}:10249
    enableProfiling: true
    clusterCIDR: ${CLUSTER_CIDR}
    hostnameOverride: ${NODE_NAME}
    mode: "ipvs"
    portRange: ""
    iptables:
      masqueradeAll: false
    ipvs:
      scheduler: rr
      excludeCIDRs: []
    EOF
    
    
    • bindAddress: the listen address;
    • clientConnection.kubeconfig: the kubeconfig file for connecting to the apiserver;
    • clusterCIDR: kube-proxy uses this to distinguish traffic inside vs. outside the cluster; kube-proxy performs SNAT on requests to Service IPs only when --cluster-cidr or --masquerade-all is specified;
    • hostnameOverride: must match the kubelet's value, otherwise kube-proxy cannot find the Node after startup and will not create any ipvs rules;
    • mode: use ipvs mode;
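
    ipvs mode depends on the ip_vs kernel modules (step 8 below loads ip_vs_rr for the rr scheduler). A quick check on the worker:

    export node_ip=192.168.0.114
    ssh root@${node_ip} "lsmod | grep ip_vs"
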
  5. Distribute the kube-proxy configuration file

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kube-proxy-config.yaml root@${node_ip}:/etc/kubernetes/kube-proxy-config.yaml
    
    
  6. Create the kube-proxy service unit file

    cd /opt/k8s/work
    export K8S_DIR=/data/k8s/k8s
    
    cat > kube-proxy.service <<EOF
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    WorkingDirectory=${K8S_DIR}/kube-proxy
    ExecStart=/opt/k8s/bin/kube-proxy \
      --config=/etc/kubernetes/kube-proxy-config.yaml \
      --logtostderr=true \
      --v=2
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    
  7. Distribute the kube-proxy service unit file:

    export node_ip=192.168.0.114
    scp kube-proxy.service root@${node_ip}:/etc/systemd/system/
    
    
  8. Start the kube-proxy service

    export node_ip=192.168.0.114
    export K8S_DIR=/data/k8s/k8s
    
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_ip} "modprobe ip_vs_rr"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
    
    
  9. Check the startup result

    export node_ip=192.168.0.114
    ssh root@${node_ip} "systemctl status kube-proxy  |grep Active"
    
    • Make sure the status is active (running); if not, check the logs to find the cause

    • If anything is wrong, inspect the logs with:

      journalctl -u kube-proxy
      
      
  10. Check status

    
    root@slave:~# netstat -lnpt|grep kube-prox
    tcp        0      0 192.168.0.114:10256     0.0.0.0:*               LISTEN      23078/kube-proxy
    tcp        0      0 192.168.0.114:10249     0.0.0.0:*               LISTEN      23078/kube-proxy
    root@slave:~# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr
      -> 192.168.0.107:6443           Masq    1      0          0         
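
    As a final sanity check, the Service VIP should now forward to the apiserver: from the worker, a request to https://10.254.0.1 should reach kube-apiserver (expect a TLS/authorization error in the response rather than a timeout):

    curl -k https://10.254.0.1/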
    
    

Original article: https://www.cnblogs.com/gaofeng-henu/p/12594633.html