kubernetes installation -- rancher / mesos


[kube-proxy]http://www.cnblogs.com/xuxinkun/p/5799986.html


[flannel]

  • Install Flannel
  1. [root@master ~]# cd ~/k8s
  2. [root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  3. [root@master ~]# kubectl apply -f kube-flannel.yml
  4. clusterrole "flannel" created
  5. clusterrolebinding "flannel" created
  6. serviceaccount "flannel" created
  7. configmap "kube-flannel-cfg" created
  8. daemonset "kube-flannel-ds" created
  • Specifying the network interface
    If the host has more than one NIC, use the --iface parameter in kube-flannel.yml to name the interface on the cluster's internal network; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<interface> to the flanneld startup arguments, for example:
  ......
  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: kube-flannel-ds
  ......
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.9.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth1" ]
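After editing the manifest, re-apply it and check that flanneld came up on the right interface. A minimal sketch (the app=flannel label assumes the stock kube-flannel.yml):

kubectl apply -f kube-flannel.yml
# one flannel pod should be running per node
kubectl -n kube-system get pods -l app=flannel -o wide
# on each host, flanneld creates the VXLAN interface used for the overlay
ip -d link show flannel.1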



From the official k8s docs:


  • kubectl, as everyone probably knows, is the command-line tool for interacting with the k8s API.
  • kubeadm is the command-line tool for standing up a k8s test environment.
  • kubelet is the important one: without kubelet, kubeadm can do nothing. kubelet is roughly the counterpart of the nova-compute process in Nova (which manages VMs); it is responsible for managing containers. Once kubelet is installed, a kubelet service appears on the system (a quick check is sketched below).
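A minimal sketch for confirming that all three pieces are installed and that the kubelet service is up:

kubectl version
kubeadm version
kubelet --version
systemctl status kubelet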

What kubeadm init does (its output can be inspected afterwards, as sketched after this list):

  • checks the system state
  • generates a bootstrap token
  • generates self-signed certificates
  • generates kubeconfig files for talking to the API server
  • writes manifests for the control-plane containers into /etc/kubernetes/manifests
  • configures RBAC and taints the master node so that it only runs control-plane containers
  • creates add-on services such as kube-proxy and kube-dns
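A sketch for inspecting those artifacts (paths are the kubeadm defaults):

ls /etc/kubernetes/manifests    # static pod manifests for the control plane
ls /etc/kubernetes/pki          # self-signed certificates
ls /etc/kubernetes/*.conf       # kubeconfig files (admin.conf, kubelet.conf, ...)
kubeadm token list              # the bootstrap token generated by kubeadm init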


After a successful install you can look at the containers and pods that were created:

docker ps | grep -v '/pause'

kubectl get pods --all-namespaces
Anyone familiar with k8s already knows what those pause containers are for. When you create a pod containing a single container, k8s quietly creates an extra container in the pod, the so-called infra container, which initialises the pod's network and namespaces; the other containers in the pod then share that network and those namespaces. Once network initialisation is finished, the infra container just sleeps forever, until it receives SIGINT or SIGTERM.
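This sharing is visible from docker itself: an application container's NetworkMode points at its pod's pause container. A sketch (<app-container-id> is a placeholder):

docker ps | grep pause
docker inspect --format '{{.HostConfig.NetworkMode}}' <app-container-id>
# expected output: container:<id-of-the-pod's-pause-container>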


Here kube-dns is stuck in Pending: it can only start successfully once a pod network add-on has been installed.
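A quick way to check it (a sketch; re-run, or add -w to watch, after the network add-on is applied):

kubectl -n kube-system get pods -o wide | grep dns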

Revert k8s master and nodes

When I moved on to installing the network add-on, I found that calico requires extra arguments to kubeadm init, so I rolled back the kubeadm installation:

kubectl get nodes
 kubectl drain lingxian-XXXX-kubeadm --delete-local-data --force --ignore-daemonsets
 kubectl delete node lingxian-XXXX-kubeadm
kubeadm reset
kubeadm init --pod-network-cidr=192.168.0.0/16


  • Operations on the node
  1. kubeadm reset
  2. ifconfig cni0 down
  3. ip link delete cni0
  4. ifconfig flannel.1 down
  5. ip link delete flannel.1
  6. rm -rf /var/lib/cni/
[Kubeadm reset]
 find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
  rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
   kubeadm reset

Allow Pod schedule to Master

I only have one node, so I use the master directly as a worker:

root@lingxian-test-kubeadm:~# kubectl taint nodes --all node-role.kubernetes.io/master-
node "lingxian-test-kubeadm" untainted



Letting the master node take workloads

In a cluster initialised with kubeadm, pods are not scheduled onto the master node for safety reasons. The following command lets the master node take workloads:

  1. kubectl taint nodes node1 node-role.kubernetes.io/master-
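To restore the default behaviour later, the taint can be re-added. A sketch (<node-name> is a placeholder):

kubectl taint nodes <node-name> node-role.kubernetes.io/master=:NoSchedule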
Run 
(XX)kubectl run mynginx --image=nginx --expose --port 8088
kubectl delete deployment mynginx


kubectl delete svc mynginx



kubectl run unginx --image=nginx --expose --port 80

kubectl run centos --image=cu.eshore.cn/library/java:jdk8 --command -- vi 
kubectl scale --replicas=4 deployment/centos
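In this kubectl version, kubectl run creates a Deployment whose pods carry the label run=<name>, so the results of the commands above can be checked with (a sketch):

kubectl get deployments,svc
kubectl get pods -l run=centos -o wide   # the 4 replicas scaled above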


Adding a node to the Kubernetes cluster

  • View the master's token (see the sketch after this list for assembling the full join command)
  1. kubeadm token list | grep authentication,signing | awk '{print $1}'
  • View the discovery-token-ca-cert-hash
  1. openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  • Join the node to the Kubernetes cluster
  1. kubeadm join --token=a20844.654ef6410d60d465 --discovery-token-ca-cert-hash sha256:0c2dbe69a2721870a59171c6b5158bd1c04bc27665535ebf295c918a96de0bb1 master.k8s.samwong.im:6443

     If the token has expired and been removed, listing tokens on the master returns nothing:

  1. kubeadm token list
  • Fix
    Regenerate the token. By default a token is valid for 24 hours; pass --ttl 0 when generating it to make it valid forever.
  1. [root@master ~]# kubeadm token create --ttl 0
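The two lookups above can also be combined into a ready-to-paste join command. A sketch (<master-address> is a placeholder for your API server):

TOKEN=$(kubeadm token list | grep 'authentication,signing' | awk '{print $1}')
HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "kubeadm join --token=${TOKEN} --discovery-token-ca-cert-hash sha256:${HASH} <master-address>:6443"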
 
 https://lingxiankong.github.io/2018-01-20-install-k8s.html[step by step]


Time synchronisation, hostnames, /etc/hosts, the firewall, selinux, passwordless SSH login, and the installation of docker-1.12.6 had all been set up beforehand.

https://anthonychu.ca/post/api-versioning-kubernetes-nginx-ingress/


[dashboard]

https://www.zybuluo.com/ncepuwanghui/note/953929

Grant the Dashboard account cluster-admin rights
Create a ServiceAccount named kubernetes-dashboard-admin, grant it the cluster admin role, and put both into kubernetes-dashboard-admin.rbac.yaml (a sketch of the file follows).
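The manifest itself is not reproduced in these notes; a minimal sketch of what kubernetes-dashboard-admin.rbac.yaml typically contains (a ServiceAccount plus a ClusterRoleBinding to the built-in cluster-admin role; adjust names as needed):

cat > kubernetes-dashboard-admin.rbac.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
EOF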


kubectl create -f kubernetes-dashboard-admin.rbac.yaml

  • View the kubernetes-dashboard-admin token (a scripted version is sketched after this list)
  1. kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
  2. kubectl describe -n kube-system secret/xxxxxxx
  3. View the Dashboard service port
    1. kubectl get svc -n kube-system
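The two lookups can also be scripted. A sketch (the kubernetes-dashboard service name assumes the stock dashboard manifest):

SECRET=$(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-admin/{print $1; exit}')
kubectl -n kube-system describe secret "$SECRET" | awk '/^token:/{print $2}'
kubectl -n kube-system get svc kubernetes-dashboard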

[heapster]

[heapster]http://www.mamicode.com/info-detail-1715935.html

https://www.slahser.com/2016/11/18/k8s%E5%90%8E%E6%97%A5%E8%B0%88-Heaspter%E7%9B%91%E6%8E%A7/

Install Heapster to add usage statistics and monitoring to the cluster and charts to the Dashboard (a verification sketch follows the commands below).

  1. mkdir -p ~/k8s/heapster
  2. cd ~/k8s/heapster
  3. wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
  4. wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
  5. wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
  6. wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
  7. kubectl create -f ./
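Once the pods are up, heapster feeds both the Dashboard graphs and kubectl top. A quick check (assuming the stock manifests downloaded above):

kubectl -n kube-system get pods | grep -E 'heapster|influxdb|grafana'
kubectl top node    # served by heapster in this k8s version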

 

docker pull mirrorgooglecontainers/heapster-grafana-amd64:v4.4.3

docker tag  mirrorgooglecontainers/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3

docker pull mirrorgooglecontainers/heapster-amd64:v1.4.2

docker tag mirrorgooglecontainers/heapster-amd64:v1.4.2  k8s.gcr.io/heapster-amd64:v1.4.2


docker pull mirrorgooglecontainers/heapster-influxdb-amd64:v1.3.3

docker tag mirrorgooglecontainers/heapster-influxdb-amd64:v1.3.3  k8s.gcr.io/heapster-influxdb-amd64:v1.3.3

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++=

 


 docker pull ist0ne/kubernetes-dashboard-amd64:v1.8.0

docker tag  ist0ne/kubernetes-dashboard-amd64:v1.8.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/


v1.8.4

kubectl replace --force -f


Fixing the error reported by sysctl -p
Symptom
After editing the kernel parameters with vi /etc/sysctl.conf, running sysctl -p reports:
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key

Fix:
modprobe bridge
lsmod | grep bridge
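On newer kernels the bridge netfilter hooks live in a separate br_netfilter module, so if the keys are still unknown after loading bridge, this variant may help (an assumption; check your kernel):

modprobe br_netfilter 2>/dev/null || modprobe bridge
lsmod | grep -E 'br_netfilter|bridge'
sysctl -p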


 

Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"

https://www.zybuluo.com/ncepuwanghui/note/953929 [needs a VPN/proxy to access]

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 

I just checked, and the two really are different: docker's cgroup driver is cgroupfs. After changing the corresponding setting in that conf file to cgroupfs and restarting, kubelet runs fine, but then it gets stuck again at ...

Skipped because of dependency problems:
  docker-engine.x86_64 0:1.12.6-1.el7.centos                                   
  docker-engine-selinux.noarch 0:17.05.0.ce-1.el7.centos                       
  libtool-ltdl.x86_64 0:2.4.2-22.el7_3 



images=(kube-apiserver-amd64:v1.8.4 kube-controller-manager-amd64:v1.8.4 kube-scheduler-amd64:v1.8.4 kube-proxy-amd64:v1.8.4 etcd-amd64:3.0.17 pause-amd64:3.0 k8s-dns-sidecar-amd64:1.14.5 k8s-dns-kube-dns-amd64:1.14.5 k8s-dns-dnsmasq-nanny-amd64:1.14.5 flannel:v0.9.1-amd64)

for imageName in ${images[@]} ; do
  docker pull mritd/$imageName
  docker tag mritd/$imageName gcr.io/google_containers/$imageName
  docker rmi mritd/$imageName
done

Problem 2

Open a new window and check /var/log/messages; it shows the following error:

Aug 12 23:40:10 cu3 kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

docker and kubelet use different cgroup drivers, so change the kubelet configuration; while at it, change docker's IP-masquerade startup parameter as well.

[root@cu3 ~]# sed -i 's/KUBELET_CGROUP_ARGS=--cgroup-driver=systemd/KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[root@cu3 ~]# sed -i 's#/usr/bin/dockerd.*#/usr/bin/dockerd --ip-masq=false#' /usr/lib/systemd/system/docker.service

[root@cu3 ~]# systemctl daemon-reload; systemctl restart docker kubelet 
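To confirm the two cgroup drivers now agree (a sketch, using the file path edited above):

docker info 2>/dev/null | grep -i 'cgroup driver'
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl is-active docker kubelet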

 

sed -i 's#/usr/bin/dockerd.*#/usr/bin/dockerd --ip-masq=false#' /usr/lib/systemd/system/docker.service

Note: once --ip-masq=false is added, docker0 can no longer reach external networks, i.e. containers started directly with docker lose outbound internet access!

ExecStart=/usr/bin/dockerd --ip-masq=false

Node firewall (these are cloud hosts, so add firewall rules):

firewall-cmd --zone=trusted --add-source=192.168.0.0/16 --permanent 
firewall-cmd --zone=trusted --add-source=10.0.0.0/8 --permanent 
firewall-cmd --complete-reload


[download k8s]

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#v181

[k8s metrics]

http://www.cnblogs.com/iiiiher/p/7999761.html

[k8s cookbook]

https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook

[infra esta]

http://blog.csdn.net/qq_32971807/article/details/54693254


[rancher]

https://www.cnrancher.com/rancher-k8s-accelerate-installation-document/


docker run -d --restart always --name rancher_server -p 6080:8080 rancher/server && docker logs -f rancher_server


[k8s 1.6.1 install]

https://www.jianshu.com/p/8ce11f947410

[CNI]

https://segmentfault.com/a/1190000008803805


http://www.cnblogs.com/whtydn/p/4353695.html

(+++)https://www.jianshu.com/p/a2039a8855ec

(!!!!!)https://www.cnblogs.com/liangDream/p/7358847.html

http://ju.outofmemory.cn/entry/231591

https://segmentfault.com/a/1190000007074726

http://www.infoq.com/cn/articles/centos7-practical-kubernetes-deployment

http://www.cnblogs.com/zhenyuyaodidiao/p/6500720.html (kernel concept)

https://www.jianshu.com/p/8d3204b96cf9
http://blog.csdn.net/hackstoic/article/details/50574886 (mesos)


Original post: https://www.cnblogs.com/SZLLQ2000/p/9771080.html
