I recently found some time to revisit k8s. Installing k8s has become much simpler than it used to be; this article walks through deploying Kubernetes 1.13.1 with kubeadm.
Preparation
Environment overview
Three machines are prepared: one master and two nodes. Hostnames and IP addresses are as follows:
Hostname     IP address
k8s-master   172.20.6.116
k8s-node1    172.20.6.117
k8s-node2    172.20.6.118
System settings
1. Set the hostname on each of the three machines
# hostnamectl set-hostname <hostname>
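For example, on the first machine (repeat on the other two with the hostnames from the table above):
# hostnamectl set-hostname k8s-master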
2. Configure local name resolution
Edit the hosts file on all three machines and add the following:
# vim /etc/hosts
172.20.6.116 k8s-master
172.20.6.117 k8s-node1
172.20.6.118 k8s-node2
3. Disable the firewall
# systemctl disable firewalld
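Optionally, stop the running firewalld service right away as well (disable alone only takes effect on the next boot):
# systemctl stop firewalld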
4. Disable SELinux
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
5. Disable NetworkManager
# systemctl disable NetworkManager
6. Set up time synchronization
Install chrony on all machines.
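On CentOS 7, chrony is available from the base repo:
# yum install -y chrony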
Configure time synchronization (172.50.10.16 is my local NTP server; you can also use Alibaba Cloud's public NTP server, time1.aliyun.com):
# vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 172.50.10.16 iburst
Start the service and synchronize the time:
# systemctl enable chronyd && systemctl restart chronyd
# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 172.50.10.16                  3    6    17    13   +103us[  +12us] +/-   24ms
The asterisk indicates that synchronization succeeded.
7. Reboot all hosts
Deploying Kubernetes
Install docker-ce (all hosts)
1. Download the docker-ce repo file
# wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
2. Point docker-ce at a domestic mirror
# sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
3. Install docker-ce
Kubernetes 1.13.1 has only been validated against docker-ce 18.06 and earlier, so install a pinned docker-ce version.
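If you want to check which versions the repo offers before pinning one, standard yum usage works:
# yum list docker-ce --showduplicates | sort -r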
# yum install docker-ce-18.06.1.ce-3.el7
4. Start docker and enable it at boot
# systemctl enable docker.service && systemctl start docker.service
Install kubeadm, kubelet and kubectl (all hosts)
1. Configure the kubeadm repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubeadm, kubelet and kubectl
# yum install -y kubelet kubeadm kubectl
3. Start kubelet and enable it at boot
# systemctl enable kubelet && systemctl start kubelet
4. Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
Initializing k8s (master node)
1. Import the image archive
For reasons best left undescribed, the k8s images cannot be pulled directly, so I prepared an offline archive that needs to be loaded on every node. Download: kube.tar
# docker load -i kube.tar
Check the imported images:
# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.13.1         fdb321fd30a0   2 weeks ago     80.2 MB
k8s.gcr.io/kube-controller-manager   v1.13.1         26e6f1db2a52   2 weeks ago     146 MB
k8s.gcr.io/kube-apiserver            v1.13.1         40a63db91ef8   2 weeks ago     181 MB
k8s.gcr.io/kube-scheduler            v1.13.1         ab81d7360408   2 weeks ago     79.6 MB
k8s.gcr.io/coredns                   1.2.6           f59dcacceff4   7 weeks ago     40 MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   3 months ago    220 MB
quay.io/coreos/flannel               v0.10.0-amd64   f0fad859c909   11 months ago   44.6 MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   12 months ago   742 kB
2. Initialize the master node
**********
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.20.6.116:6443
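A typical kubeadm init invocation for this version with Flannel as the network addon looks like the following; the --pod-network-cidr value is an assumption (10.244.0.0/16 is Flannel's default), so adjust it for your environment:
# kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16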
The master node has been initialized successfully. Be sure to save the kubeadm join command printed at the end.
3. Load the k8s environment variable
# export KUBECONFIG=/etc/kubernetes/admin.conf
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
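If you administer the cluster as a regular (non-root) user, the kubeadm documentation recommends copying the admin config instead of exporting KUBECONFIG:
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config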
4. Install a network addon
Some extra configuration is needed so that containers on different hosts can reach each other; Flannel is used here.
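The Flannel manifest is applied on the master with kubectl; the URL below points at the upstream kube-flannel.yml and is an assumption (a locally saved copy of the same manifest works just as well, and you may want to pin a revision matching flannel v0.10.0):
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml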
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
5. Confirm the cluster status
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-lhb7w             1/1     Running   0          95m
kube-system   coredns-86c58d9df4-zprwr             1/1     Running   0          95m
kube-system   etcd-k8s-master                      1/1     Running   0          100m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          100m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          100m
kube-system   kube-flannel-ds-amd64-jjdmz          1/1     Running   0          91m
kube-system   kube-proxy-lfhbs                     1/1     Running   0          101m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          100m
Confirm that the CoreDNS pods are in the Running state.
Joining the cluster (node nodes)
1. Join the nodes to the cluster
On k8s-node1 and k8s-node2, run the kubeadm join command saved from the init output.
******
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
2. Check the cluster
# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2m23s   v1.13.1
k8s-node1    Ready    <none>   39s     v1.13.1
k8s-node2    Ready    <none>   16s     v1.13.1
Both nodes have joined and are in the Ready state. The cluster setup is now complete and ready to use.
Configuring the dashboard
Service configuration
There is no web UI by default; the dashboard can be deployed with the following steps.
1. Import the dashboard-ui image (all nodes)
Download: dashboard-ui.tar
# docker load -i dashboard-ui.tar
2. Download the configuration file
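The manifest can be fetched with wget; the path below is the recommended deployment manifest for the dashboard v1.10.x era and is my assumption, so verify it matches the version contained in dashboard-ui.tar:
# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml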
Edit the kubernetes-dashboard.yaml file and add type: NodePort to expose the Dashboard service so it can be reached from outside the cluster. Note that only the line type: NodePort needs to be added; leave the rest of the configuration unchanged.
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
3. Deploy the Dashboard UI
# kubectl create -f kubernetes-dashboard.yaml
4. Check the dashboard service status
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8zhr5               1/1     Running   0          2d22h
kube-system   coredns-86c58d9df4-jqn7r               1/1     Running   0          2d22h
kube-system   etcd-k8s-master                        1/1     Running   0          2d22h
kube-system   kube-apiserver-k8s-master              1/1     Running   0          2d22h
kube-system   kube-controller-manager-k8s-master     1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-krf6t            1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-tkftg            1/1     Running   0          2d22h
kube-system   kube-flannel-ds-amd64-zxzld            1/1     Running   0          2d22h
kube-system   kube-proxy-5znt7                       1/1     Running   0          2d22h
kube-system   kube-proxy-gl9sl                       1/1     Running   0          2d22h
kube-system   kube-proxy-q7j7m                       1/1     Running   0          2d22h
kube-system   kube-scheduler-k8s-master              1/1     Running   0          2d22h
kube-system   kubernetes-dashboard-57df4db6b-pghk8   1/1     Running   0          19h
kubernetes-dashboard is in the Running state.
Creating a simple user
1. Create the service account and cluster role binding manifest
Create a dashboard-adminuser.yaml file with the following content:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard-admin
    namespace: kube-system
2. Create the user and role binding
# kubectl apply -f dashboard-adminuser.yaml
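As an optional sanity check, confirm that the ServiceAccount and ClusterRoleBinding now exist:
# kubectl -n kube-system get serviceaccount kubernetes-dashboard-admin
# kubectl get clusterrolebinding kubernetes-dashboard-admin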
3. Retrieve the token
# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')
Name:         kubernetes-dashboard-admin-token-xdrs6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: 14082a92-0e3c-11e9-ac3f-fa163e25b09e

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi14ZHJzNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjE0MDgyYTkyLTBlM2MtMTFlOS1hYzNmLWZhMTYzZTI1YjA5ZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.K9Z5NOY2MusdXhiFx6NdA42Jpo1cCChN16CKsdsw-9eh76p1O4kd4u22_ZgWzhRwarnllURXieDxEGpRmCJaBOmMo_xFmlCX6fxFQ-7bWcXuWWpi3ay5qSOPsv_7EyvCvkFSFVfgMnppu3dvEhD5NoeSjnrkHshHxFFnhZc7ePIUVlY9KvMVWv7UDkhinJKy5HjLu_ejwy2jxmSNwZ-g9wnLVzw3-XObmUUL8nTRdE8KehKtpdo6Kd-BJlmfTNUPiSGxrcU1sW1hzwJLsEfTix4oQOhdCh2-z37Gr_1J7-bnf8F5_U90okH2nf1it2brmIM3JbzuQ8sWERx66gEkKQ
ca.crt:     1025 bytes
namespace:  11 bytes
Save the token value.
Deploying the Metrics Server
Heapster is removed as of Kubernetes 1.13 (https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md); metrics-server and Prometheus are recommended instead.
1. Import the metrics-server image
Download: metrics-server.tar
# docker load -i metrics-server.tar
2. Save the configuration files
# mkdir metrics-server
# cd metrics-server
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/aggregated-metrics-reader.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/auth-delegator.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/auth-reader.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-apiservice.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-server-deployment.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/metrics-server-service.yaml
# wget https://github.com/kubernetes-incubator/metrics-server/raw/master/deploy/1.8%2B/resource-reader.yaml
Edit metrics-server-deployment.yaml and change the default image pull policy to IfNotPresent:
containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-amd64:v0.3.1
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: tmp-dir
        mountPath: /tmp
Also configure metrics-server to connect to the kubelets by IP and to skip certificate verification:
containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-amd64:v0.3.1
    imagePullPolicy: IfNotPresent
    command:
      - /metrics-server
      - --kubelet-insecure-tls
      - --kubelet-preferred-address-types=InternalIP
    volumeMounts:
      - name: tmp-dir
        mountPath: /tmp
3. Run the deployment
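Assuming you are still inside the metrics-server directory created in step 2, all of the downloaded manifests can be applied in one go:
# kubectl apply -f .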
4. View the metrics
# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   196m         4%     1101Mi          14%
k8s-node1    44m          1%     2426Mi          31%
k8s-node2    38m          0%     2198Mi          28%

# kubectl top pods --all-namespaces
NAMESPACE     NAME                                   CPU(cores)   MEMORY(bytes)
kube-system   coredns-86c58d9df4-8zhr5               3m           13Mi
kube-system   coredns-86c58d9df4-jqn7r               2m           13Mi
kube-system   etcd-k8s-master                        17m          76Mi
kube-system   kube-apiserver-k8s-master              30m          402Mi
kube-system   kube-controller-manager-k8s-master     36m          63Mi
kube-system   kube-flannel-ds-amd64-krf6t            2m           13Mi
kube-system   kube-flannel-ds-amd64-tkftg            3m           15Mi
kube-system   kube-flannel-ds-amd64-zxzld            2m           12Mi
kube-system   kube-proxy-5znt7                       2m           14Mi
kube-system   kube-proxy-gl9sl                       2m           18Mi
kube-system   kube-proxy-q7j7m                       2m           16Mi
kube-system   kube-scheduler-k8s-master              9m           16Mi
kube-system   kubernetes-dashboard-57df4db6b-wtmkt   1m           16Mi
kube-system   metrics-server-879f5ff6d-9q5xw         1m           13Mi
Accessing the Dashboard
1. Find the dashboard service port
# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   3d19h
kubernetes-dashboard   NodePort    10.109.196.92   <none>        443:30678/TCP   17h
metrics-server         ClusterIP   10.109.23.19    <none>        443/TCP         6m16s
The NodePort is 30678.
2. Access the dashboard
Open https://172.20.6.116:30678, choose the Token option, and enter the token saved earlier to log in.
References
kubeadm 部署 kube1.10
Creating a single master cluster with kubeadm
使用 Kubeadm 安装部署 Kubernetes 1.12.1 集群
kubeadm快速部署Kubernetes(1.13.1,HA)