Setting Up a Kubernetes Cluster and a Private Docker Registry (CentOS 7)
Environment Planning
The environment at hand is four hosts running CentOS 7, planned as follows:
Kubernetes master node: 192.168.20.145
Kubernetes nodes: 192.168.20.146, 192.168.20.147
Private Docker registry node: 192.168.20.148
On each host, the following commands were run to disable the firewall and enable NTP:
# systemctl stop firewalld
# systemctl disable firewalld
# yum -y install ntp
# systemctl start ntpd
# systemctl enable ntpd
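Optionally, you can verify on each host that the firewall is really off and NTP is syncing, using standard systemd and ntp tooling:
# systemctl is-active firewalld
# systemctl is-enabled ntpd
# ntpq -p
systemctl is-active should report inactive for firewalld, and ntpq -p lists the NTP peers the host is synchronizing against.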
Installing and Configuring the Kubernetes Master Node
On the Kubernetes master node, install etcd, Docker, Kubernetes, and flannel:
# yum -y install etcd docker kubernetes flannel
Configure etcd by editing /etc/etcd/etcd.conf with the following content:
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
Here ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" makes etcd listen for clients on port 2379 on all network interfaces.
Next, configure Kubernetes on the master node by editing /etc/kubernetes/config:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.20.145:8080"
KUBE_MASTER="--master=http://192.168.20.145:8080" tells the Kubernetes controller-manager, scheduler, and proxy processes the address of the apiserver.
Edit the configuration file /etc/kubernetes/apiserver:
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
The packaged default for KUBE_ADMISSION_CONTROL also includes ServiceAccount; it must be removed here, otherwise creating Pods will fail.
These settings make the apiserver listen on port 8080 on all network interfaces and tell it the address of the etcd service.
Configure flannel by editing /etc/sysconfig/flanneld with the following content:
FLANNEL_ETCD="http://192.168.20.145:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
Now start the etcd, docker, kube-apiserver, kube-controller-manager, kube-scheduler, and flanneld services on the master node and check their status:
# for SERVICES in etcd docker kube-apiserver kube-controller-manager kube-scheduler flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
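If everything came up cleanly, the master components should pass some simple health checks (optional verification):
# curl http://127.0.0.1:8080/healthz
# etcdctl cluster-health
# kubectl get componentstatuses
The healthz endpoint should return ok, etcdctl should report the single-member etcd cluster as healthy, and kubectl get componentstatuses should list the scheduler, controller-manager, and etcd as Healthy.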
Define the flannel network configuration in etcd:
# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
As we will see when setting up and configuring the Kubernetes nodes, the /atomic.io/network/config key in etcd is read by flannel on each node to set up the container network (subnets and iptables rules).
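You can read the key back to confirm it was written, and, once the nodes are up, list the subnets flannel has allocated beneath it:
# etcdctl get /atomic.io/network/config
# etcdctl ls /atomic.io/network/subnets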
We can now check with kubectl get nodes; of course, no nodes have joined the cluster yet, so the output is empty:
# kubectl get nodes
NAME STATUS AGE
Installing and Configuring the Kubernetes Nodes
On each Kubernetes node, install flannel, Docker, and Kubernetes:
# yum -y install flannel docker kubernetes
Configure flannel by editing /etc/sysconfig/flanneld with the following content:
FLANNEL_ETCD="http://192.168.20.145:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
This tells the flannel daemon where the etcd service is and which etcd key holds the network configuration.
Next, configure Kubernetes on the nodes. The /etc/kubernetes/config file on both nodes (192.168.20.146 and 192.168.20.147) is identical to the one on the master (192.168.20.145):
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.20.145:8080"
As on the master, KUBE_MASTER="--master=http://192.168.20.145:8080" points the Kubernetes services at the apiserver.
The /etc/kubernetes/kubelet files on the two nodes, however, differ slightly.
/etc/kubernetes/kubelet on 192.168.20.146:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.20.146"
KUBELET_API_SERVER="--api-servers=http://192.168.20.145:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
/etc/kubernetes/kubelet on 192.168.20.147:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.20.147"
KUBELET_API_SERVER="--api-servers=http://192.168.20.145:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
On each of the two Kubernetes nodes, start the kube-proxy, kubelet, docker, and flanneld services and check their status:
# for SERVICES in kube-proxy kubelet docker flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
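On each node you can also check that flanneld obtained a subnet from etcd and that Docker is using it (the exact subnet, and whether the interface is flannel0 or flannel.1, depend on the allocation and the flannel backend):
# cat /run/flannel/subnet.env
# ip addr show flannel0
# ip addr show docker0
FLANNEL_SUBNET in subnet.env should be a /24 inside 172.17.0.0/16, and docker0 should sit inside that same subnet.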
Now run kubectl get nodes on the master again; the two nodes have joined the cluster:
# kubectl get nodes
NAME STATUS AGE
192.168.20.146 Ready 2d
192.168.20.147 Ready 2d
At this point the Kubernetes cluster is up, but my story does not end here.
Setting Up the Private Docker Registry
With the cluster in place, I happily went off to create Pods, and failed. Using kubectl describe and kubectl logs to track down the cause, I found that my cluster could not pull images from gcr.io (Google Container Registry), although pulling from Docker Hub worked fine. So I decided to set up a private Docker registry. After reading up on it, the process is described below.
To make the private registry more secure, I generated a self-signed certificate for TLS. First, edit /etc/pki/tls/openssl.cnf and add one line under the [ v3_ca ] section:
[ v3_ca ]
subjectAltName = IP:192.168.20.148
Then use openssl to create a self-signed certificate in a certs directory under the current path:
# mkdir -p certs && openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
During certificate creation you are prompted for the country, province, city, organization, department, and common name; for the common name I entered the host's IP, 192.168.20.148. Once the certificate is created, the certs directory contains two files: the certificate domain.crt and the private key domain.key.
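To double-check that the subjectAltName actually made it into the certificate, you can inspect it with openssl:
# openssl x509 -in certs/domain.crt -noout -text | grep -A1 "Subject Alternative Name"
The output should include IP Address:192.168.20.148.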
Install Docker on 192.168.20.148:
# yum -y install docker
Copy the domain.crt generated above into /etc/docker/certs.d/192.168.20.148:5000, then restart Docker:
# mkdir -p /etc/docker/certs.d/192.168.20.148:5000
# cp certs/domain.crt /etc/docker/certs.d/192.168.20.148:5000/ca.crt
# systemctl restart docker
On the registry node 192.168.20.148, run the registry container and expose its port 5000:
# docker run -d -p 5000:5000 --restart=always --name registry -v `pwd`/certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key registry:2
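At this point the registry should answer over HTTPS; a quick curl against the v2 API, using the same certificate, confirms it:
# curl --cacert certs/domain.crt https://192.168.20.148:5000/v2/
An empty JSON body {} with an HTTP 200 status means the registry API is up.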
Finally, copy domain.crt into /etc/docker/certs.d/192.168.20.148:5000 on every node of the Kubernetes cluster and restart Docker on each of them. For example, on node 192.168.20.146:
# mkdir -p /etc/docker/certs.d/192.168.20.148:5000
# scp root@192.168.20.148:~/certs/domain.crt /etc/docker/certs.d/192.168.20.148:5000/ca.crt
# systemctl restart docker
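As an optional end-to-end test, any cluster node should now be able to push a small image (hello-world here is just an arbitrary example) through the new registry:
# docker pull hello-world
# docker tag hello-world 192.168.20.148:5000/hello-world
# docker push 192.168.20.148:5000/hello-world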
With that, the private Docker registry is ready.
Setting Up the Kubernetes Web UI
In this section I deploy the Kubernetes Web UI (kubernetes-dashboard) to briefly demonstrate how to use the private registry.
Because gcr.io is blocked, my Kubernetes cluster cannot pull the kubernetes-dashboard image from it directly, so I pulled docker.io/mritd/kubernetes-dashboard-amd64 from Docker Hub in advance:
# docker pull mritd/kubernetes-dashboard-amd64
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry 2 c6c14b3960bd 3 days ago 33.28 MB
ubuntu latest 42118e3df429 9 days ago 124.8 MB
hello-world latest c54a2cc56cbb 4 weeks ago 1.848 kB
docker.io/mritd/kubernetes-dashboard-amd64 v1.1.0 20b7531358be 5 weeks ago 58.52 MB
registry 2 8ff6a4aae657 7 weeks ago 171.5 MB
Tag the loaded kubernetes-dashboard image for the private registry and push it there:
# docker tag 20b7531358be 192.168.20.148:5000/kubernetes-dashboard-amd64
# docker push 192.168.20.148:5000/kubernetes-dashboard-amd64
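The registry's HTTP API can confirm that the image and its tag arrived (optional check):
# curl --cacert /etc/docker/certs.d/192.168.20.148:5000/ca.crt https://192.168.20.148:5000/v2/_catalog
# curl --cacert /etc/docker/certs.d/192.168.20.148:5000/ca.crt https://192.168.20.148:5000/v2/kubernetes-dashboard-amd64/tags/list
The catalog should list kubernetes-dashboard-amd64, and the tag list should show latest, since no explicit tag was given when tagging.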
The kubernetes-dashboard.yaml manifest is shown below. The latest dashboard release (1.7) changes a great deal, so the 1.5-era manifest is used here instead, edited as follows:
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.20.148:5000/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=192.168.20.145:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
Note in particular: (1) the image the Pods pull is 192.168.20.148:5000/kubernetes-dashboard-amd64 from the private registry; (2) the apiserver-host argument is 192.168.20.145:8080, i.e. the apiserver address of the Kubernetes master node.
After editing kubernetes-dashboard.yaml, save it on the Kubernetes master node 192.168.20.145 and create kubernetes-dashboard there with kubectl create:
# kubectl create -f kubernetes-dashboard.yaml
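Optionally, you can watch the Deployment come up before inspecting the details:
# kubectl get deployment kubernetes-dashboard --namespace=kube-system
# kubectl get pods --namespace=kube-system -o wide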
Once creation finishes, view the details of the Pods and the Service:
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx 1/1 Running 0 3h
kube-system kubernetes-dashboard-4164430742-lqhcg 1/1 Running 0 2h
# kubectl describe pods/kubernetes-dashboard-4164430742-lqhcg --namespace="kube-system"
Name: kubernetes-dashboard-4164430742-lqhcg
Namespace: kube-system
Node: 192.168.20.146/192.168.20.146
Start Time: Mon, 01 Aug 2016 16:12:02 +0800
Labels: app=kubernetes-dashboard,pod-template-hash=4164430742
Status: Running
IP: 172.17.17.3
Controllers: ReplicaSet/kubernetes-dashboard-4164430742
Containers:
kubernetes-dashboard:
Container ID: docker://40ab377c5b8a333487f251547e5de51af63570c31f9ba05fe3030a02cbb3660c
Image: 192.168.20.148:5000/kubernetes-dashboard-amd64
Image ID: docker://sha256:20b7531358be693a34eafdedee2954f381a95db469457667afd4ceeb7146cd1f
Port: 9090/TCP
Args:
--apiserver-host=192.168.20.145:8080
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Running
Started: Mon, 01 Aug 2016 16:12:03 +0800
Ready: True
Restart Count: 0
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment Variables:
Conditions:
Type Status
Ready True
No volumes.
No events.
# kubectl describe service/kubernetes-dashboard --namespace="kube-system"
Name: kubernetes-dashboard
Namespace: kube-system
Labels: app=kubernetes-dashboard
Selector: app=kubernetes-dashboard
Type: NodePort
IP: 10.254.213.209
Port: 80/TCP
NodePort: 31482/TCP
Endpoints: 172.17.17.3:9090
Session Affinity: None
No events.
The service details show that kubernetes-dashboard is bound to NodePort 31482 on the nodes. Accessing that port in a browser brings up the Kubernetes Web UI; the address 192.168.20.145:8080/ui also redirects there automatically.
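As a quick command-line check against one of the nodes (the NodePort value 31482 comes from the Service output above and will differ between deployments):
# curl -I http://192.168.20.146:31482/
An HTTP 200 response means the dashboard is being served; in a browser, http://192.168.20.146:31482/ shows the Web UI directly.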
Author: 负二贷
Link: https://www.jianshu.com/p/f00006928a2d