Preparation
Operating system details
Three hosts are required, each a minimal install of CentOS 7.1 updated to the latest packages. Details are in the table below:
Role    | Hostname | IP
Master  | master   | 192.168.0.79
Minion1 | minion-1 | 192.168.0.80
Minion2 | minion-2 | 192.168.0.81
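As an optional aside (the original steps address hosts by IP throughout): if you also want the hostnames in the table to resolve on every machine, one way is to add them to /etc/hosts on each host. This is an assumption for convenience, not part of the original procedure:

```shell
# Optional: map the hostnames from the table above to their IPs on every host
# (only needed if DNS does not already resolve them)
cat >> /etc/hosts <<'EOF'
192.168.0.79 master
192.168.0.80 minion-1
192.168.0.81 minion-2
EOF
```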
Disable firewalld and switch to iptables on every host
Run the following commands to stop firewalld:
# systemctl stop firewalld.service      # stop firewalld now
# systemctl disable firewalld.service   # keep firewalld from starting at boot
Then install and enable iptables:
# yum install -y iptables-services      # install the iptables service
# systemctl start iptables.service      # start iptables so the rules take effect
# systemctl enable iptables.service     # start iptables at boot
Install the NTP service
# yum install -y ntp
# systemctl start ntpd
# systemctl enable ntpd
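A quick way to confirm the hosts are actually syncing time (a sketch; `ntpq` ships with the ntp package installed above):

```shell
# Show the peers ntpd is using; a leading '*' marks the selected time source
ntpq -p
# systemd's view of synchronization status
timedatectl | grep -i ntp
```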
Installation and configuration
Note: kubernetes, etcd, and related packages have already landed in the CentOS EPEL repository and can be installed directly with yum (epel-release must be installed first).
Install the Kubernetes Master
1. Install kubernetes and etcd with the following command:
# yum install -y kubernetes etcd
2. Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses. Make sure the following lines are uncommented and set to the values below:
# vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
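Once etcd has been (re)started with this configuration, a quick sanity check (a sketch; assumes curl is installed) is to hit its version endpoint, both locally and over the LAN address, to prove that listening on 0.0.0.0 took effect:

```shell
# Query etcd's version endpoint over the client port on the master itself
curl -s http://127.0.0.1:2379/version
# ...and over the master's LAN address
curl -s http://192.168.0.79:2379/version
```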
3. Edit the Kubernetes API server configuration file
Make sure the following lines are uncommented and set to the values below:
# vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
4. Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them at boot:
# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
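After the loop completes, a quick smoke test (a sketch) is to hit the API server's version endpoint on the insecure port 8080 configured above:

```shell
# The API server answers unauthenticated on the insecure port set in
# /etc/kubernetes/apiserver (KUBE_API_PORT="--port=8080")
curl -s http://127.0.0.1:8080/version
```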
5. Define the flannel network configuration in etcd. The flannel service pushes this configuration down to the minions:
# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
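To double-check that the key was written, and that its value is well-formed JSON, something like the following works; the python3 validation step is just an illustration, not part of the original procedure:

```shell
# Read the key back from etcd
etcdctl get /coreos.com/network/config
# Validate the JSON payload and extract the Network field
echo '{"Network":"172.17.0.0/16"}' \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["Network"])'
# prints 172.17.0.0/16
```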
6. Add iptables rules to open the required ports:
iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
service iptables save    # persist the rules (iptables-save on its own only prints them to stdout)
7. List the nodes (no nodes have been configured yet, so the list should be empty):
# kubectl get nodes
NAME LABELS STATUS
Install the Kubernetes Minions (Nodes)
Note: run the following steps on both minion-1 and minion-2 (more minions can be added the same way).
1. Install kubernetes and flannel with yum:
# yum install -y flannel kubernetes
2. Point the flannel service at the etcd server
Edit the following line in /etc/sysconfig/flanneld so it connects to the master:
# vim /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.0.79:2379"   # set to the etcd server's IP
3. Edit the Kubernetes configuration file
In the default configuration file /etc/kubernetes/config, make sure KUBE_MASTER points at the Kubernetes master's API server:
# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.0.79:8080"
4. Edit the following lines in /etc/kubernetes/kubelet:
minion-1:
# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.0.80"
KUBELET_API_SERVER="--api_servers=http://192.168.0.79:8080"
KUBELET_ARGS=""
minion-2:
# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.0.81"
KUBELET_API_SERVER="--api_servers=http://192.168.0.79:8080"
KUBELET_ARGS=""
5. Start the kube-proxy, kubelet, docker, and flanneld services, and enable them at boot:
# for SERVICES in kube-proxy kubelet docker flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
6. On each minion node you should now see two new interfaces, docker0 and flannel0, and each minion should be assigned a different IP range on flannel0, like this:
minion-1:
# ip a | grep flannel | grep inet
inet 172.17.29.0/16 scope global flannel0
minion-2:
# ip a | grep flannel | grep inet
inet 172.17.37.0/16 scope global flannel0
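Each minion's lease can also be read from the subnet file flanneld writes at startup (a sketch; the subnet values shown in the comments are illustrative):

```shell
# flanneld records its assigned network and per-host subnet in this file;
# docker's networking options are derived from it
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=172.17.0.0/16
# FLANNEL_SUBNET=172.17.29.1/24
. /run/flannel/subnet.env && echo "$FLANNEL_SUBNET"
```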
7. Add the iptables rules:
iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
8. Now log in to the Kubernetes master node and verify the minions' status:
# kubectl get nodes
NAME LABELS STATUS
192.168.0.80 kubernetes.io/hostname=192.168.0.80 Ready
192.168.0.81 kubernetes.io/hostname=192.168.0.81 Ready
At this point the Kubernetes cluster is configured and running, and we can move on to the steps below.
Create Pods (Containers)
To create a pod, we define a YAML or JSON manifest on the Kubernetes master, then create the pod with kubectl:
# mkdir -p k8s/pods
# cd k8s/pods/
# vim nginx.yaml
Add the following to nginx.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Create the pod:
# kubectl create -f nginx.yaml
This fails with the following error:
Error from server: error when creating "nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
The fix is to edit /etc/kubernetes/apiserver, remove SecurityContextDeny and ServiceAccount from KUBE_ADMISSION_CONTROL, and restart the kube-apiserver.service service:
# vim /etc/kubernetes/apiserver
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# systemctl restart kube-apiserver.service
Then recreate the pod:
# kubectl create -f nginx.yaml
pods/nginx
Check the pod:
# kubectl get pod nginx
NAME READY STATUS RESTARTS AGE
nginx 0/1 Image: nginx is not ready on the node 0 34s
The STATUS stays stuck like this and the pod never starts, so let's troubleshoot. Describing the pod reveals the following errors:
# kubectl describe pod nginx
Wed, 28 Oct 2015 10:25:30 +0800 Wed, 28 Oct 2015 10:25:30 +0800 1 {kubelet 192.168.0.81} implicitly required container POD pulled Successfully pulled Pod container image "gcr.io/google_containers/pause:0.8.0"
Wed, 28 Oct 2015 10:25:30 +0800 Wed, 28 Oct 2015 10:25:30 +0800 1 {kubelet 192.168.0.81} implicitly required container POD failed Failed to create docker container with error: no such image
Wed, 28 Oct 2015 10:25:30 +0800 Wed, 28 Oct 2015 10:25:30 +0800 1 {kubelet 192.168.0.81} failedSync Error syncing pod, skipping: no such image
Wed, 28 Oct 2015 10:27:30 +0800 Wed, 28 Oct 2015 10:29:30 +0800 2 {kubelet 192.168.0.81} implicitly required container POD failed Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (API error (500): invalid registry endpoint "http://gcr.io/v0/". HTTPS attempt: unable to ping registry endpoint https://gcr.io/v0/
v2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 173.194.72.82:443: i/o timeout
A manual ping of gcr.io fails (it is probably blocked), so we found a pause:0.8.0 image online and loaded it on each minion:
# docker load --input pause-0.8.0.tar
Download: pause-0.8.0.tar
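Where the tarball comes from is up to you; one way to produce it yourself (a sketch, assuming access to some machine that can reach gcr.io, and illustrative hosts/paths) is docker save/load:

```shell
# On a machine that CAN reach gcr.io: pull the image and export it to a tarball
docker pull gcr.io/google_containers/pause:0.8.0
docker save -o pause-0.8.0.tar gcr.io/google_containers/pause:0.8.0
# Copy it to each minion (host and path are illustrative)
scp pause-0.8.0.tar root@192.168.0.80:/root/
# Then on the minion: load it and confirm it is present
docker load --input pause-0.8.0.tar
docker images | grep pause
```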
Running the create command again now succeeds:
# kubectl create -f nginx.yaml
pods/nginx
Check the pod:
# kubectl get pod nginx
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 2min