k8s Containers: Node Deployment

I. k8s Node Deployment

1. Environment Planning

  • System environment overview

Item                 Value
Operating system     Ubuntu 16.04 or CentOS 7 (CentOS 7 is used here)
Kubernetes version   v1.14.3
Docker version       19.03.1 (installed via yum)
  • Component-to-TLS-certificate mapping (cluster deployment with self-signed TLS certificates)

Component                 Certificates used
etcd                      ca.pem, server.pem, server-key.pem
kube-apiserver            ca.pem, server.pem, server-key.pem
flanneld                  ca.pem, server.pem, server-key.pem
kube-controller-manager   ca.pem, ca-key.pem
kubelet                   ca.pem, ca-key.pem
kube-proxy                ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl                   ca.pem, admin.pem, admin-key.pem
  • Server-to-role mapping

Role        IP              Components
k8s-master  192.168.10.21   kube-apiserver, kube-controller-manager, kube-scheduler, docker
k8s-node01  192.168.10.22   etcd, kubelet, kube-proxy, docker
k8s-node02  192.168.10.23   etcd, kubelet, kube-proxy, docker
k8s-node03  192.168.10.24   etcd, kubelet, kube-proxy, docker

2. Deploy the etcd Cluster

2.1 Set Host Aliases and Configure SSH Trust

cat >> /etc/hosts  << EOF
192.168.10.21  k8s-master
192.168.10.22  k8s-node01
192.168.10.23  k8s-node02
192.168.10.24  k8s-node03
EOF

Using the master node as an example (repeat on the other nodes):
ssh-keygen
ssh-copy-id  k8s-node01
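With more nodes, the same key distribution can be done in a loop; a minimal sketch, assuming the /etc/hosts aliases above are in place and ssh-keygen has already been run:

# Push the master's public key to every node in one pass
for host in k8s-node01 k8s-node02 k8s-node03; do
  ssh-copy-id "$host"
done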

2.2 Allow Traffic Between the Nodes

Enable the firewall and add accept rules for the cluster hosts:
systemctl    start   firewalld
firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address=192.168.10.21 accept"  
firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address=192.168.10.22 accept"  
firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address=192.168.10.23 accept"
firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address=192.168.10.24 accept" 
firewall-cmd --permanent --add-rich-rule="rule family=ipv4 source address=192.168.1.106 accept"

# Reload to apply the permanent rules
firewall-cmd --reload
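To confirm the rules actually took effect, the active rich rules can be listed (a quick sanity check, not part of the original flow):

# Should print one accept rule per source address added above
firewall-cmd --list-rich-rules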

2.3 Generate Certificates

  • Copy the binaries needed on the k8s-master node
#Copy the binaries needed on the k8s-master node
mkdir -p /app/kubernetes/{bin,cfg,ssl};
\cp  ./server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl} /app/kubernetes/bin;

#Add the binaries to PATH (single quotes so \$PATH expands at login, not now)
echo 'export PATH=$PATH:/app/kubernetes/bin' >> /etc/profile;
source /etc/profile;

#Download the cfssl binaries used to generate the TLS certificates
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64;
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64;
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64;
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64;
mv cfssl_linux-amd64          /usr/local/bin/cfssl;
mv cfssljson_linux-amd64      /usr/local/bin/cfssljson;
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo;
  • Script that creates all certificates in one pass
  • cat certificate.sh
#!/bin/bash
#
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json  << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.10.21",
      "192.168.10.22",
      "192.168.10.23",
      "192.168.10.24",
      "10.10.10.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
        "O": "k8s",
        "OU": "System"
        }
    ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json     | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json      | cfssljson -bare admin
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
\cp  *pem /app/kubernetes/ssl/
  • Run the certificate script
sh certificate.sh
  • On the other nodes (k8s-node01, k8s-node02 and k8s-node03), create the directories
mkdir -p /app/kubernetes/{bin,cfg,ssl}
  • From the master node, copy the TLS certificates to the nodes
for i in 1 2 3; do scp *pem k8s-node0$i:/app/kubernetes/ssl/; done
//SSH trust was configured earlier
//The certificates cover everything the master and the nodes need; a quick sanity check follows below
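The generated server certificate can be inspected with the cfssl-certinfo tool installed earlier; the sans list in its output should contain every cluster IP from server-csr.json:

# Dump the certificate as JSON; verify "sans" and the expiry ("not_after")
cfssl-certinfo -cert /app/kubernetes/ssl/server.pem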

2.4 Configure etcd

https://github.com/coreos/etcd/releases/tag/v3.3.12

  • Copy the etcd binaries to all etcd nodes
tar -xf etcd-v3.3.12-linux-amd64.tar.gz
for i in 1 2 3 ;do scp etcd-v3.3.12-linux-amd64/{etcd,etcdctl} k8s-node0$i:/app/kubernetes/bin/; done
  • Run the configuration script
  • cat etcd.sh (adjust the etcd node name and IPs per node before running)
#!/bin/bash
#
#Example below deploys etcd01 on k8s-node01; change --name and the bind IP on each node
k8s_node01=192.168.10.22
k8s_node02=192.168.10.23
k8s_node03=192.168.10.24

cat > /app/kubernetes/cfg/etcd << EOF
KUBE_ETCD_OPTS="--name=etcd01 \
  --data-dir=/var/lib/etcd/default.etcd \
  --listen-peer-urls=https://${k8s_node01}:2380 \
  --listen-client-urls=https://${k8s_node01}:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://${k8s_node01}:2379 \
  --initial-advertise-peer-urls=https://${k8s_node01}:2380 \
  --initial-cluster=etcd01=https://${k8s_node01}:2380,etcd02=https://${k8s_node02}:2380,etcd03=https://${k8s_node03}:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster-state=new \
  --cert-file=/app/kubernetes/ssl/server.pem \
  --key-file=/app/kubernetes/ssl/server-key.pem \
  --peer-cert-file=/app/kubernetes/ssl/server.pem \
  --peer-key-file=/app/kubernetes/ssl/server-key.pem \
  --trusted-ca-file=/app/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/app/kubernetes/ssl/ca.pem"
EOF

#systemd unit for the etcd service
cat  > /usr/lib/systemd/system/etcd.service   << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/app/kubernetes/cfg/etcd
ExecStart=/app/kubernetes/bin/etcd   \$KUBE_ETCD_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start  etcd
systemctl enable etcd
systemctl status etcd

The other etcd nodes are deployed the same way; just change the etcd node name and IP in the script and run it.
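Note that the first member will appear to hang (or log connection errors) until enough peers join to form a quorum, so start all three before judging health. Once at least two members are up, membership can be checked:

# List the registered members and their peer/client URLs
/app/kubernetes/bin/etcdctl \
  --ca-file=/app/kubernetes/ssl/ca.pem \
  --cert-file=/app/kubernetes/ssl/server.pem \
  --key-file=/app/kubernetes/ssl/server-key.pem \
  --endpoints="https://192.168.10.22:2379" member list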

  • Check the etcd cluster health
/app/kubernetes/bin/etcdctl \
  --ca-file=/app/kubernetes/ssl/ca.pem \
  --cert-file=/app/kubernetes/ssl/server.pem \
  --key-file=/app/kubernetes/ssl/server-key.pem \
  --endpoints="https://192.168.10.22:2379,https://192.168.10.23:2379,https://192.168.10.24:2379" cluster-health

member 445a7d567d5cea7f is healthy: got healthy result from https://192.168.10.22:2379
member a04dd241344fb42a is healthy: got healthy result from https://192.168.10.23:2379
member e5160a05dd6cb2ed is healthy: got healthy result from https://192.168.10.24:2379
cluster is healthy

//"cluster is healthy" means the cluster is working normally

3. Deploy Docker on All Nodes

  • Install Docker by running the setup script
  • cat docker_install.sh
cat > docker_install.sh << EOF
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

#Install bash tab completion for docker
yum install -y bash-completion
source /usr/share/bash-completion/completions/docker
source /usr/share/bash-completion/bash_completion

systemctl start  docker
systemctl enable docker
systemctl status docker
EOF
  • Copy the script to the other nodes and run it there. Installing Docker on the master is optional and does not affect anything; with it installed, the master can talk to containers on the nodes.
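Optionally, a registry mirror can speed up image pulls; a minimal sketch (the mirror URL below is only an example, substitute one you trust):

# Configure a Docker registry mirror and restart the daemon
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker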

4. Deploy the flannel Container Network

https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

  • On the master node:
  • Copy the flannel binaries to all nodes
tar xf flannel-v0.11.0-linux-amd64.tar.gz
for i in 1 2 3;do scp flanneld  mk-docker-opts.sh k8s-node0$i:/app/kubernetes/bin/;done
  • Run the configuration script
  • cat flanneld.sh
#!/bin/bash 
#
k8s_node01=192.168.10.22
k8s_node02=192.168.10.23
k8s_node03=192.168.10.24

#Write the allocated subnet into etcd for flanneld to consume; this must run before the flannel service can start
/app/kubernetes/bin/etcdctl \
  --ca-file=/app/kubernetes/ssl/ca.pem \
  --cert-file=/app/kubernetes/ssl/server.pem \
  --key-file=/app/kubernetes/ssl/server-key.pem \
  --endpoints="https://${k8s_node01}:2379,https://${k8s_node02}:2379,https://${k8s_node03}:2379" \
  set /coreos.com/network/config '{"Network": "172.50.0.0/16", "Backend": {"Type": "vxlan"}}'

#Configure flannel (the variable name must match the FLANNEL_OPTIONS reference in the unit file below)
cat > /app/kubernetes/cfg/flanneld << EOF 
FLANNEL_OPTIONS="--etcd-endpoints=https://${k8s_node01}:2379,https://${k8s_node02}:2379,https://${k8s_node03}:2379 \
  -etcd-cafile=/app/kubernetes/ssl/ca.pem \
  -etcd-certfile=/app/kubernetes/ssl/server.pem \
  -etcd-keyfile=/app/kubernetes/ssl/server-key.pem"
EOF

#Manage flannel with systemd:
cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/app/kubernetes/cfg/flanneld
ExecStart=/app/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/app/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

#Configure Docker to start on the flannel-assigned subnet:
cat > /usr/lib/systemd/system/docker.service <<  EOF    
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd  \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start   flanneld
systemctl enable  flanneld
systemctl restart docker  
systemctl status  flanneld
  • Retrieve the stored network config (for verification)
/app/kubernetes/bin/etcdctl \
  --ca-file=/app/kubernetes/ssl/ca.pem \
  --cert-file=/app/kubernetes/ssl/server.pem \
  --key-file=/app/kubernetes/ssl/server-key.pem \
  --endpoints="https://192.168.10.22:2379,https://192.168.10.23:2379,https://192.168.10.24:2379" \
  get /coreos.com/network/config

The other nodes install flanneld the same way.
To test cross-node connectivity, ping another node's docker0 IP from the current node.

  • Notes:
1) Make sure etcd is reachable and the cluster is healthy.
2) Write the subnet into etcd first, otherwise flannel fails to start.
3) Make sure docker0 and flannel.1 are on the same subnet; if they are not, reload and restart the Docker service. The docker0 subnet comes from the environment variables in /run/flannel/subnet.env, which is generated automatically by the first step ("write the subnet into etcd") and can be edited. A quick check is shown below.
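A quick way to verify point 3 on any node; docker0 should receive an address inside the flannel.1 subnet:

# The DOCKER_NETWORK_OPTIONS written here feeds the docker.service unit above
cat /run/flannel/subnet.env
ip addr show flannel.1 | grep inet
ip addr show docker0   | grep inet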

5. Generate the Token and kubeconfig Files

1) Create the TLS Bootstrapping token
2) Create the kubelet kubeconfig
3) Create the kube-proxy kubeconfig

Run these on the master node, then copy the generated files to the nodes.

  • Run the configuration script
  • cat kubeconfig.sh
#!/bin/bash
#
# Create the TLS Bootstrapping token; the token string can be generated randomly
export  BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Create the kubelet bootstrapping kubeconfig
KUBE_APISERVER="https://192.168.10.21:8080"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/app/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#--------------------------------
# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/app/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • Generate the config files and token file the nodes need
chmod +x kubeconfig.sh;
sh kubeconfig.sh;
//This produces three files: token.csv, bootstrap.kubeconfig and kube-proxy.kubeconfig. They are used later; a quick way to inspect them follows.
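Before copying them out, it is worth confirming the generated files point at the right cluster; kubectl can render a kubeconfig (certificate data shows as REDACTED):

# Inspect the server address and user entries of each file
kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig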

6. Deploy the Master Components

The Kubernetes master runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

For now, these three components must be deployed on the same machine.

  • kube-scheduler, kube-controller-manager and kube-apiserver are tightly coupled;
  • Only one kube-scheduler and one kube-controller-manager process can be active at a time; running multiple copies requires electing a leader.

Steps:

  • Copy the binaries used to start the services;
  • Copy the token file and the required certificate files;
  • Configure and run the apiserver.sh script;
  • Verify the kube-apiserver service.

6.1 Configure and Start kube-apiserver

  • The binary release package contains all the needed components

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md

  • Run the configuration script
  • cat apiserver.sh
#!/bin/bash
#
k8s_master=192.168.10.21
k8s_node01=192.168.10.22
k8s_node02=192.168.10.23
k8s_node03=192.168.10.24

ETCD_SERVER="https://${k8s_node01}:2379,https://${k8s_node02}:2379,https://${k8s_node03}:2379"

\cp  token.csv  /app/kubernetes/cfg

cat > /app/kubernetes/cfg/kube-apiserver << EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
  --v=4 \
  --etcd-servers=$ETCD_SERVER \
  --bind-address=$k8s_master \
  --secure-port=8080 \
  --advertise-address=$k8s_master \
  --allow-privileged=true \
  --service-cluster-ip-range=10.10.10.0/24 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
  --authorization-mode=RBAC,Node \
  --enable-bootstrap-token-auth \
  --token-auth-file=/app/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/app/kubernetes/ssl/server.pem \
  --tls-private-key-file=/app/kubernetes/ssl/server-key.pem \
  --client-ca-file=/app/kubernetes/ssl/ca.pem \
  --service-account-key-file=/app/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/app/kubernetes/ssl/ca.pem \
  --etcd-certfile=/app/kubernetes/ssl/server.pem \
  --etcd-keyfile=/app/kubernetes/ssl/server-key.pem"
EOF

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/app/kubernetes/cfg/kube-apiserver
ExecStart=/app/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start  kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver
  • With everything healthy the cluster status looks like this; kubectl get cs can only be run on the master node
[root@localhost kubernetes]# kubectl  get cs 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

6.2 Configure and Start kube-controller-manager

  • Run the configuration script
  • cat controller-manager.sh
#!/bin/bash
cat  > /app/kubernetes/cfg/kube-controller-manager << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
  --v=4 \
  --master=127.0.0.1:8080 \
  --leader-elect=true \
  --address=127.0.0.1 \
  --service-cluster-ip-range=10.10.10.0/24 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/app/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/app/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/app/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/app/kubernetes/ssl/ca-key.pem"
EOF

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/app/kubernetes/cfg/kube-controller-manager
ExecStart=/app/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl  daemon-reload
systemctl  start   kube-controller-manager
systemctl  enable  kube-controller-manager
systemctl  status  kube-controller-manager

6.3 Configure and Start kube-scheduler

  • Run the configuration script
  • cat scheduler.sh
#!/bin/bash
#
cat > /app/kubernetes/cfg/kube-scheduler << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true \
  --v=4 \
  --master=127.0.0.1:8080 \
  --leader-elect"
EOF

cat > /usr/lib/systemd/system/kube-scheduler.service  << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/app/kubernetes/cfg/kube-scheduler
ExecStart=/app/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start  kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler

7. Deploy the Node Components

A Kubernetes node runs the following components:

  • flanneld: see the flannel network installation covered above.
  • kubelet: installed directly from the binary
  • kube-proxy: installed directly from the binary

Steps:

  • Confirm that the flannel network plugin installed earlier is up and running normally
  • Install and configure Docker, then start it
  • Install and configure kubelet and kube-proxy, then start them
  • Verify

On the master node:
Copy the config files required by the kubelet and kube-proxy services to the nodes

for i in 1 2 3 ; do scp bootstrap.kubeconfig kube-proxy.kubeconfig  k8s-node0$i:/app/kubernetes/cfg/; done
for i in 1 2 3 ; do scp ./server/bin/{kubelet,kube-proxy}           k8s-node0$i:/app/kubernetes/bin/; done  
The certificates were already copied over at the beginning.

Bind the kubelet-bootstrap user to the system cluster role:

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
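The binding can be confirmed before moving on:

# Should show system:node-bootstrapper granted to user kubelet-bootstrap
kubectl describe clusterrolebinding kubelet-bootstrap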

7.1 Configure and Start kubelet

  • On the k8s-node01 node:
  • Run the configuration script
  • cat kubelet.sh
#!/bin/bash
#node01 is used as the example below
k8s_node01=192.168.10.22
k8s_node02=192.168.10.23
k8s_node03=192.168.10.24

#Configure the kubelet
cat > /app/kubernetes/cfg/kubelet << EOF
KUBELET_OPTS="--logtostderr=true \
  --v=4 \
  --hostname-override=${k8s_node01} \
  --address=${k8s_node01} \
  --kubeconfig=/app/kubernetes/cfg/kubelet.kubeconfig \
  --experimental-bootstrap-kubeconfig=/app/kubernetes/cfg/bootstrap.kubeconfig \
  --allow-privileged=true \
  --cert-dir=/app/kubernetes/ssl \
  --cluster-dns=10.10.10.2 \
  --cluster-domain=cluster.local \
  --fail-swap-on=false \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

#systemd unit for the kubelet service
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/app/kubernetes/cfg/kubelet
ExecStart=/app/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start  kubelet
systemctl enable kubelet
systemctl status kubelet

The other nodes (k8s-node02, k8s-node03) are deployed the same way; only the node IP changes, for example:
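A minimal sketch of that per-node change, assuming the node01 config file was copied over unchanged; only the two address fields differ between nodes:

# Run on k8s-node02 (hypothetical helper; use the matching IP on each node)
NEW_IP=192.168.10.23
sed -i "s/192\.168\.10\.22/${NEW_IP}/g" /app/kubernetes/cfg/kubelet
systemctl daemon-reload && systemctl restart kubelet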

  • Appendix: the /app/kubernetes/cfg/kubelet.kubeconfig file looks like this:
//No manual configuration needed, shown for reference only. It is generated and loaded automatically after a successful start:

cat /app/kubernetes/cfg/kubelet.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: …(long base64 string omitted)…
    server: https://192.168.10.21:8080
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /app/kubernetes/ssl/kubelet-client-current.pem
    client-key: /app/kubernetes/ssl/kubelet-client-current.pem
  • View the pending (not yet approved) CSR requests
$ kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-Cm3XIZb_R6fEV1bbT9N2ufxuDAXkf05-8mnUjWbh6eo   67s   kubelet-bootstrap   Pending
node-csr-Jp_oHiFFO4ZTRKcKaIzKXiyKIIAZ2c4e09ne8I-VU90   65s   kubelet-bootstrap   Pending
node-csr-bTrFC53MHuzspJQUlyYTsESLpQe4TlFnlUtmyiMASjY   67s   kubelet-bootstrap   Pending
$ kubectl get nodes
No resources found.
  • Approve the CSR requests and verify
# kubectl certificate approve NAME1 NAME2 NAME3                 //several node CSRs can be approved at once; a one-liner for approving all pending requests follows below
certificatesigningrequest "node-csr-Cm3XIZb_R6fEV1bbT9N2ufxuDAXkf05-8mnUjWbh6eo" approved

# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-Cm3XIZb_R6fEV1bbT9N2ufxuDAXkf05-8mnUjWbh6eo   92s   kubelet-bootstrap   Approved,Issued
node-csr-Jp_oHiFFO4ZTRKcKaIzKXiyKIIAZ2c4e09ne8I-VU90   90s   kubelet-bootstrap   Approved,Issued
node-csr-bTrFC53MHuzspJQUlyYTsESLpQe4TlFnlUtmyiMASjY   92s   kubelet-bootstrap   Approved,Issued
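When several nodes bootstrap at once, all pending requests can also be approved in one shot; a convenience one-liner, not from the original walkthrough:

# Approve every CSR currently known to the cluster
kubectl get csr -o name | xargs -n 1 kubectl certificate approve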

7.2 Configure and Start kube-proxy

  • On the k8s-node01 node:
  • Run the configuration script
  • cat kube-proxy.sh
#!/bin/bash
#node01 is used as the example below
k8s_node01=192.168.10.22
k8s_node02=192.168.10.23
k8s_node03=192.168.10.24

#Configure kube-proxy
cat > /app/kubernetes/cfg/kube-proxy << EOF
KUBE_PROXY_OPTS="--logtostderr=true \
  --v=4 \
  --hostname-override=${k8s_node01} \
  --cluster-cidr=10.10.10.0/24 \
  --kubeconfig=/app/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

#Manage kube-proxy with systemd
cat  > /usr/lib/systemd/system/kube-proxy.service << EOF 
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/app/kubernetes/cfg/kube-proxy
ExecStart=/app/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target  
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start  kube-proxy
systemctl status kube-proxy
  • If the kube-proxy service logs errors like the following:
Sep 20 09:35:16 k8s-node01 kube-proxy[25072]: E0920 09:35:16.077775   25072 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:anonymous
" cannot list resource "services" in API group "" at the cluster scope
  • Fix: bind the anonymous user to the cluster-admin role
  • Otherwise the mapped ports cannot be reached from outside; the permissions are insufficient
kubectl create clusterrolebinding system:anonymous \
  --clusterrole=cluster-admin \
  --user=system:anonymous
  • The other nodes (k8s-node02 and so on) are deployed the same way; just change the corresponding node IP

8. Check the k8s Cluster Status

# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.10.22   Ready    <none>   15s   v1.14.3
192.168.10.23   Ready    <none>   17s   v1.14.3
192.168.10.24   Ready    <none>   17s   v1.14.3

Note: the STATUS value changes from 'NotReady' to 'Ready', which means the nodes were added to the cluster successfully

[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   

9. Run an nginx Test Container

  • Create an nginx web service to test whether the cluster works
1) Run the test container from the command line:
kubectl run nginx --image=nginx --replicas=3

2) Or launch the test container from a YAML file:
cat > nginx.yaml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

Run the container:
kubectl create -f nginx.yaml

Expose an external port for access:
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
  • Check the Pods and Service:
[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7db9fccd9b-5njsv   1/1     Running   0          45m
nginx-7db9fccd9b-tjz6z   1/1     Running   0          45m
nginx-7db9fccd9b-xtkdx   1/1     Running   0          45m
[root@k8s-master ~]# 
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.10.10.1    <none>        443/TCP        47m
nginx        NodePort    10.10.10.73   <none>        88:49406/TCP   3m38s   
[root@k8s-master ~]# 
  • Access through the node addresses:
192.168.10.22:49406
192.168.10.23:49406
192.168.10.24:49406

The result:
[root@k8s-master ~]# curl -I  192.168.10.23:49406
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Fri, 20 Sep 2019 01:58:34 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
Connection: keep-alive
ETag: "5d5279b8-264"
Accept-Ranges: bytes

Note: the proxied NodePort can be seen listening on each node, for example:
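# The assigned NodePort (49406 in the output above) should show up as a
# listening socket; kube-proxy holds the port open even in iptables mode
ss -lntp | grep 49406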

10. Deploy the kube-dns Add-on

  • On the management node, configure and edit the kube-dns.yaml file
  • Its clusterIP must match the kubelet --cluster-dns flag; reserve one address in the service CIDR as the DNS address

  • The modified configuration

cat  > kube-dns.yaml <<  EOF
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.10.10.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: netonline/k8s-dns-kube-dns-amd64:1.14.8
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: netonline/k8s-dns-dnsmasq-nanny-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: netonline/k8s-dns-sidecar-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns  
EOF
  • Configuration notes
The kube-dns ServiceAccount needs no changes: the cluster's predefined ClusterRoleBinding system:kube-dns already binds the ServiceAccount kube-dns in the kube-system namespace (where system services are usually deployed) to the predefined ClusterRole system:kube-dns, which has permission to access the kube-apiserver DNS APIs.
  • Inspect the key objects
# kubectl get clusterrolebinding system:kube-dns -o yaml
…(output truncated)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-dns
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system

# kubectl get clusterrole system:kube-dns -o yaml
…(output truncated)
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  verbs:
  - list
  - watch
  • Start kube-dns
kubectl create -f kube-dns.yaml
  • If kube-dns misbehaves and you want to delete it and reinstall:
kubectl get deployment --all-namespaces   //inspect
kubectl delete -f kube-dns.yaml           //clean up

11. Verify the kube-dns Service

  • kube-dns Deployment & Service & Pod
  • All 3 containers of the kube-dns pod should be "Ready", and the Service, Deployment and so on should also have started normally
kubectl get pod        -n kube-system -o wide
kubectl get service    -n kube-system -o wide
kubectl get deployment -n kube-system -o wide
  • kube-dns lookup test
  • The following operations check whether DNS resolution succeeds
  • The stock busybox image has DNS problems; pick a known-good tag (busybox:1.28 is used below)
# cat  > busybox.yaml  << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    #image: zhangguanzhang/centos
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Run the pod that carries the tools:
# kubectl create  -f busybox.yaml

Get the kube-dns pod IP:
# kubectl  get pod -n kube-system -o wide
NAME                                    READY   STATUS   AGE    IP            NODE            NOMINATED NODE   READINESS GATES
kube-dns-67fb7c784c-998xh               3/3     Running  174m   172.50.36.2   192.168.10.23   <none>           <none>
kubernetes-dashboard-8646f64494-5nzvs   1/1     Running  32d    172.50.32.3   192.168.10.24   <none>           <none>

$IP is the DNS pod IP found above.
Note: the lookup should print results; if there is no output at all, the problem lies elsewhere.
# kubectl exec -ti busybox -- nslookup kubernetes.default
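Service names should resolve the same way; for example, against the nginx Service created earlier (assuming it still exists in the default namespace):

# Expect the ClusterIP of the nginx service in the answer section
kubectl exec -ti busybox -- nslookup nginx.default.svc.cluster.local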
  • kube-dns has three log streams:
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns

kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c dnsmasq

kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c sidecar

12. Deploy the Dashboard

  • Run on the management node
  • Images used:
netonline/kubernetes-dashboard-amd64:v1.8.3    //the configuration below uses this image
registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.7.1
  • Configuration file templates
# ConfigMap
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-configmap.yaml

# Secret
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-secret.yaml

# RBAC
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-rbac.yaml

# dashboard-controller
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-controller.yaml

# Service
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-service.yaml
  • This walkthrough uses the (modified) YAML files from:

https://github.com/Netonline2016/kubernetes/tree/master/addons/dashboard

12.1 Modify dashboard-configmap.yaml

No changes for this verification; dashboard-controller does not use the ConfigMap here anyway.

12.2 Modify dashboard-rbac.yaml

  • The stock dashboard-rbac.yaml defines a Role named "kubernetes-dashboard-minimal" and a RoleBinding of the same name that grants it to the "kubernetes-dashboard" ServiceAccount;
  • The permissions of that default Role are too limited to be convenient for verification;
  • Redefine the RBAC instead: create a new ClusterRoleBinding kubernetes-dashboard that grants Kubernetes' built-in, all-powerful ClusterRole cluster-admin. Use this kind of grant with caution in production.

  • The modified configuration

cat > dashboard-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding        ## modified/added
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole             ## modified/added
  name: cluster-admin           ## modified/added
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard  ## modified/added
    namespace: kube-system
EOF

12.3 Modify dashboard-secret.yaml

dashboard-secret.yaml needs no changes

12.4 Modify dashboard-controller.yaml

  • dashboard-controller.yaml defines the ServiceAccount (authorization) and the Deployment (the service pods)
  • Swap the image it references:
sed -i 's|k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3|netonline/kubernetes-dashboard-amd64:v1.8.3|g' dashboard-controller.yaml
  • The modified configuration file
cat > dashboard-controller.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: netonline/kubernetes-dashboard-amd64:v1.8.3   ## modified/added
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
EOF

12.5 Modify dashboard-service.yaml

  • The Service type is set to NodePort so the dashboard can be reached directly through a node for verification (not recommended in production); "nodePort: 38443" pins the port, otherwise a random port from the service NodePort range is assigned

  • The modified configuration file

cat  > dashboard-service.yaml << EOF 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 38443
EOF

12.6 Start kubernetes-dashboard and Verify the Service

  • Start the services defined by the four YAML files: rbac, secret, controller and service;
  • or simply run kubectl create -f .
kubectl create -f dashboard-rbac.yaml 
kubectl create -f dashboard-secret.yaml 
kubectl create -f dashboard-controller.yaml 
kubectl create -f dashboard-service.yaml
  • Check the resulting objects
  • Inspect the service, deployment and pod
# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.10.10.2    <none>        53/UDP,53/TCP   23h
kubernetes-dashboard   NodePort    10.10.10.30   <none>        443:38443/TCP   92m

# kubectl get deployment -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kube-dns               1/1     1            1           23h
kubernetes-dashboard   1/1     1            1           95m

# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kube-dns-5995c87955-dt76f               3/3     Running   0          23h
kubernetes-dashboard-8646f64494-6ttr4   1/1     Running   0          96m

List the cluster services:
kubectl cluster-info

12.7 Accessing the Dashboard

  • Look up the NodePort used to reach the UI
# kubectl   get svc -n kube-system
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.10.10.2    <none>        53/UDP,53/TCP   23h
kubernetes-dashboard   NodePort    10.10.10.97   <none>        443:38443/TCP   22h
  • Find the node the dashboard pod is running on
#  kubectl get pod -n kube-system -o wide
NAME                                  READY STATUS  RESTARTS AGE IP          NODE          NOMINATED NODE READINESS GATES
kube-dns-5995c87955-j5z7n             3/3   Running 6        26h 172.50.94.2 192.168.10.23 <none>         <none>
kubernetes-dashboard-8646f64494-5nzvs 1/1   Running 2        25h 172.50.29.2 192.168.10.24 <none>         <none>
  • UI address
//Chrome and IE are strict about the self-signed certificate and fail to connect; use Firefox and add a security exception for the URL

https://192.168.10.24:38443  
  • Create an admin user
This creates a ServiceAccount named admin-user in the kube-system namespace and binds the cluster-admin role to it, giving the account administrator privileges.
The cluster-admin ClusterRole already exists in a bootstrapped cluster, so only the binding needs to be created.

# cat > dashboard-adminuser.yaml << EOF 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

# kubectl create -f dashboard-adminuser.yaml
  • Retrieve the admin token
# kubectl get secret -n kube-system  
NAME                               TYPE                                  DATA   AGE
admin-user-token-g9tfb             kubernetes.io/service-account-token   3      164m
default-token-wgwk7                kubernetes.io/service-account-token   3      27h
kube-dns-token-x7skk               kubernetes.io/service-account-token   3      26h
kubernetes-dashboard-certs         Opaque                                0      25h
kubernetes-dashboard-key-holder    Opaque                                2      25h
kubernetes-dashboard-token-qrhhr   kubernetes.io/service-account-token   3      25h

# kubectl describe secret  -n kube-system admin-user-token-g9tfb
Name:         admin-user-token-g9tfb
…(output truncated)
ca.crt:     1359 bytes
namespace:  11 bytes
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWc5dGZiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkYjE0Y2UxNi1kZTk4LTExZTktOGM3OC0wMDBjMjk2MGY2MWMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.QcLgpvc_gX8ZID7EpIcw5wwJtHb6S2e8DvdB5j-69uAWDFe46KJvRBYdDVCkAEHm0GcZDO1oQbNb-bQi0FdVdgG9G_bVFxo_1-LygBb5Uudqa6antjISmd9Gx675raw-Lwa2BLt4Y4_zEPKGR3cu9Ri6MYJG6ecGp5Q4ev5Ne8adK711dSWne_WLO22nFkdT-yqhWYecppnGSqrUNsBsDGI83IuZzxMrAH-nm7qAdnWDY7SOBzpeEpn9NDiIlh6kIz1c6n7pvQDILb4we9RF2IB5g-vi3lklk4lJnKo2WSmGEeRn7dQ-vYmCQ82OSUTCWtWTNKAgVJeQfGvSXOkbOA

Note: this token is the login credential for the UI; paste it in, taking care not to pick up stray whitespace. A shortcut for extracting it follows.
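A one-liner that prints just the token line, looked up by the secret's admin-user prefix so the random suffix does not matter:

# Print the token of the admin-user service account secret
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep '^token'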

====

Source: https://www.cnblogs.com/yangsirs/p/11872861.html