
docker/k8s: a local shared registry

Posted: 2019-11-22 12:06:03


Common docker/k8s add-on components

Component               Description
kube-dns                Provides DNS for the whole cluster
Ingress Controller      Provides an external entry point for services
Heapster                Provides resource monitoring
Dashboard               Provides a GUI
Federation              Provides clusters spanning availability zones
Fluentd-elasticsearch   Provides cluster log collection, storage, and querying

Configuration

Host        Role
10.0.0.202  master
10.0.0.203  node
10.0.0.204  node

All three hosts need the same name resolution; on each of them run:

cat > /etc/hosts <<EOF
10.0.0.202 purple
10.0.0.203 yellow
10.0.0.204 blue
EOF

(Note that `cat >` replaces the file; use `>>` instead if you need to keep the existing localhost entries.)
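The identical heredocs for all three machines can be generated from a single list (a minimal sketch; the name/IP pairs are the ones from the hosts table, and it writes to a temp file rather than /etc/hosts for illustration):

```shell
# Sketch: build the shared hosts block once from the table above,
# instead of repeating the heredoc on every machine.
HOSTS_BLOCK="10.0.0.202 purple
10.0.0.203 yellow
10.0.0.204 blue"

# Writing to a temp file here; on a real host you would append this
# block to /etc/hosts (appending preserves the localhost entries).
OUT=$(mktemp)
printf '%s\n' "$HOSTS_BLOCK" >> "$OUT"
grep -c . "$OUT"   # number of host entries written
```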

Setting up the k8s cluster

202: install etcd on the master node

yum install etcd -y
vim /etc/etcd/etcd.conf
Line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
Line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.202:2379"

systemctl start etcd.service
systemctl enable etcd.service

etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0

etcdctl -C http://10.0.0.202:2379 cluster-health
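That health check is easy to script; the sketch below parses a canned sample of `etcdctl cluster-health` output so the logic can be exercised without a live cluster (the exact sample wording is an assumption about the etcd v2 output format):

```shell
# Sketch: decide pass/fail from `etcdctl cluster-health` output.
# On the real master you would capture it with:
#   OUTPUT=$(etcdctl -C http://10.0.0.202:2379 cluster-health)
# Canned sample standing in for a live cluster:
OUTPUT="member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.202:2379
cluster is healthy"

# etcdctl (v2 API) ends its report with "cluster is healthy" on success.
if echo "$OUTPUT" | grep -q '^cluster is healthy$'; then
  RESULT=healthy
else
  RESULT=unhealthy
fi
echo "$RESULT"
```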

202: install kubernetes on the master node

yum install kubernetes-master.x86_64 -y

vim /etc/kubernetes/apiserver 
Line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
Line 11: KUBE_API_PORT="--port=8080"
Line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.202:2379"
Line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

vim /etc/kubernetes/config
Line 22: KUBE_MASTER="--master=http://10.0.0.202:8080"

systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service

Check that the services came up correctly

[root@k8s-master ~]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 
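That status listing can also be checked mechanically. The sketch below counts Healthy lines; the sample is the `kubectl get componentstatus` output captured above (on a live master you would use `kubectl get componentstatus --no-headers` instead of the literal):

```shell
# Sketch: verify that all control-plane components report Healthy.
# Live form: STATUS=$(kubectl get componentstatus --no-headers)
STATUS='scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}'

HEALTHY=$(echo "$STATUS" | grep -c 'Healthy')   # lines reporting Healthy
TOTAL=$(echo "$STATUS" | grep -c .)             # non-empty lines overall
echo "$HEALTHY/$TOTAL components healthy"
```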

203/204: install kubernetes on the nodes

yum install kubernetes-node.x86_64 -y

vim /etc/kubernetes/config 
Line 22: KUBE_MASTER="--master=http://10.0.0.202:8080"

vim /etc/kubernetes/kubelet
Line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
Line 8:  KUBELET_PORT="--port=10250"
Line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.203"  ## on node 204, use 10.0.0.204 here
Line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.202:8080"

systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service

202: verify on the master node

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS    AGE
10.0.0.203   Ready     6m
10.0.0.204   Ready     3s
  • If you hit this error:
[root@purple ~]#  kubectl get nodes
No resources found.
Check that line 23 of /etc/kubernetes/apiserver was changed to the format shown above,
check the hosts file to make sure the names resolve,
or restart all of the services above on every node and check the nodes again.
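That "restart everything" step is less error-prone as a loop. A sketch that only builds and prints the commands (the service lists are taken from the install sections above; nothing is executed here):

```shell
# Sketch: regenerate the restart commands for each role instead of
# typing them one by one. Printed, not executed.
MASTER_SVCS="etcd kube-apiserver kube-controller-manager kube-scheduler"
NODE_SVCS="kubelet kube-proxy"

CMDS=""
for svc in $MASTER_SVCS $NODE_SVCS; do
  CMDS="${CMDS}systemctl restart ${svc}.service
"
done
printf '%s' "$CMDS"
```

On the real hosts you would run the master list on 202 and the node list on 203/204.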

All nodes: configure the flannel network

yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.202:2379#g' /etc/sysconfig/flanneld

## master node:
etcdctl mk /atomic.io/network/config   '{ "Network": "172.16.0.0/16" }'
yum install docker -y
systemctl enable flanneld.service 
systemctl restart flanneld.service 
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

## node nodes:
systemctl enable flanneld.service 
systemctl restart flanneld.service 
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service
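Once flanneld is running, it records the subnet leased to each host in /run/flannel/subnet.env, which docker is expected to pick up. The parsing below runs against a sample of that file (the key names follow flannel's documented format; the sample values are illustrative, not from this cluster):

```shell
# Sketch: read the per-host subnet that flannel assigned.
# On a live node: SUBNET_ENV=$(cat /run/flannel/subnet.env)
# Sample file content (values illustrative):
SUBNET_ENV='FLANNEL_NETWORK=172.16.0.0/16
FLANNEL_SUBNET=172.16.44.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false'

NETWORK=$(echo "$SUBNET_ENV" | sed -n 's/^FLANNEL_NETWORK=//p')
SUBNET=$(echo "$SUBNET_ENV" | sed -n 's/^FLANNEL_SUBNET=//p')
echo "overlay network $NETWORK, this host got $SUBNET"
```

The FLANNEL_NETWORK value should match the `{ "Network": "172.16.0.0/16" }` config written to etcd above.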

202: configure the master as an image registry

# all nodes
vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.0.202:5000'

systemctl restart docker

# create the registry on the master node
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry
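The registry image speaks the Docker Registry v2 HTTP API, so its catalog endpoint is the quickest liveness check. A sketch that builds the URL and prints the check (`/v2/_catalog` is the standard endpoint; the curl itself needs the live registry, so it is only echoed here):

```shell
# Sketch: build the catalog URL for the private registry.
REGISTRY="10.0.0.202:5000"
CATALOG_URL="http://${REGISTRY}/v2/_catalog"
echo "curl -s $CATALOG_URL"
# After the busybox push below, the reply should look roughly like:
#   {"repositories":["docker.io/busybox"]}
```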

Verify that the registry is usable

# on 202, push a tagged image to the registry
[root@purple ~]# docker tag docker.io/busybox:latest 10.0.0.202:5000/docker.io/busybox:latest
[root@purple ~]# docker push 10.0.0.202:5000/docker.io/busybox:latest
The push refers to a repository [10.0.0.202:5000/docker.io/busybox]
1da8e4c8d307: Pushed 
latest: digest: sha256:679b1c1058c1f2dc59a3ee70eed986a88811c0205c8ceea57cec5f22d2c3fbb1 size: 527
# on 203/204, try pulling the image from the registry
[root@yellow ~]#  docker pull 10.0.0.202:5000/docker.io/busybox:latest
Trying to pull repository 10.0.0.202:5000/docker.io/busybox ... 
latest: Pulling from 10.0.0.202:5000/docker.io/busybox
0f8c40e1270f: Pull complete 
Digest: sha256:679b1c1058c1f2dc59a3ee70eed986a88811c0205c8ceea57cec5f22d2c3fbb1
Status: Downloaded newer image for 10.0.0.202:5000/docker.io/busybox:latest
## the pull succeeded, so the local registry can be shared by all three hosts

Docker cross-host container communication with macvlan

## create a macvlan network on 203
docker network create --driver macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.254 -o parent=eth0 macvlan_1
## put eth0 into promiscuous mode
ip link set eth0 promisc on
## start a container attached to the macvlan network
[root@yellow ~]# docker run -it --network macvlan_1 --ip=10.0.0.5 10.0.0.202:5000/docker.io/busybox:latest 
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.5/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe00:5/64 scope link 
       valid_lft forever preferred_lft forever
/ # 
The new container got exactly the IP we specified.
From 202 or 204, try pinging it to see whether it is reachable:
[root@purple ~]# ping -c 2 10.0.0.5
PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=0.394 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.919 ms
--- 10.0.0.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.394/0.656/0.919/0.263 ms
On 204, create another container with a fixed IP and see whether it can reach the container at 10.0.0.5:
[root@blue ~]# docker network create --driver macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.254 -o parent=eth0 macvlan_1
ed9af47d206c7790959ad6f9a560f45fd2e42144ff36763750c129d0ea52a335
[root@blue ~]# ip link set eth0 promisc on
[root@blue ~]# docker run -it --network macvlan_1 --ip=10.0.0.6 10.0.0.202:5000/docker.io/busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:0a:00:00:06 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.6/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe00:6/64 scope link 
       valid_lft forever preferred_lft forever
/ # ping -c 3 10.0.0.5
PING 10.0.0.5 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: seq=0 ttl=64 time=2.725 ms
64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.435 ms
64 bytes from 10.0.0.5: seq=2 ttl=64 time=0.352 ms

--- 10.0.0.5 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.352/1.170/2.725 ms
/ # 
As shown, containers on two different hosts can communicate with each other on the same subnet.
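Because macvlan hands containers addresses straight from the physical subnet, the static IPs must stay inside the /24 and must not collide with the hosts themselves. A small sanity-check sketch using the addresses from this walkthrough (pure string checks; the /24 prefix handling is an assumption for this specific subnet):

```shell
# Sketch: sanity-check the static macvlan IPs against the /24 subnet.
SUBNET_PREFIX="10.0.0."            # from --subnet 10.0.0.0/24
CONTAINER_IPS="10.0.0.5 10.0.0.6"  # the IPs given to the two containers
HOST_IPS="10.0.0.202 10.0.0.203 10.0.0.204"

OK=yes
for ip in $CONTAINER_IPS; do
  case "$ip" in
    "$SUBNET_PREFIX"*) ;;          # inside the subnet: fine
    *) OK=no ;;                    # outside the subnet
  esac
  case " $HOST_IPS " in
    *" $ip "*) OK=no ;;            # collides with a host's own IP
  esac
done
echo "$OK"
```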


Original article: https://www.cnblogs.com/jiangyatao/p/11910521.html
