openstack nova-compute: using different storage backends on different hypervisors

Lab environment

controller1 192.168.2.240
controller2 192.168.2.241
compute1 192.168.2.242
compute2 192.168.2.243
compute3 192.168.2.248
compute4 192.168.2.249

Using different storage backends on different compute nodes

Compute node configuration

1. Scheduler

To make the nova scheduler support the filtering used below, it must be configured with AggregateInstanceExtraSpecsFilter. Edit /etc/nova/nova.conf on the controller nodes, add or modify the following option, then restart the nova-scheduler service

# vim /etc/nova/nova.conf
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service
2. Local storage configuration

Nova supports local storage by default, so no extra configuration is needed. To support live migration, shared storage (NFS, etc.) can be configured; a rough sketch follows.
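A minimal sketch, assuming a hypothetical NFS server at 192.168.2.240 exporting /export/nova-instances (both the server and the export path are assumptions, adjust to your environment):

# On each local-storage compute node (compute1, compute2), mount the shared
# instances directory so migrated instances can find their disk files
# yum install -y nfs-utils
# echo "192.168.2.240:/export/nova-instances /var/lib/nova/instances nfs defaults 0 0" >> /etc/fstab
# mount -a
# chown nova:nova /var/lib/nova/instances
# systemctl restart openstack-nova-compute.service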

3. Ceph storage configuration

Edit /etc/nova/nova.conf on the compute nodes, add or modify the following options, then restart the nova-compute service (steps such as importing the secret-uuid into libvirt are not covered in detail here; a sketch of that step is given after the restart command below)

# vim /etc/nova/nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 20c3fd98-2bab-457a-b1e2-12e50dc6c98e
disk_cachemodes="network=writeback"
inject_partition=-2
inject_key=False
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST

# systemctl restart openstack-nova-compute.service
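For reference, a minimal sketch of the omitted secret import, assuming the client.cinder key is available via ceph auth (run on each ceph compute node; the secret.xml file name is arbitrary):

# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>20c3fd98-2bab-457a-b1e2-12e50dc6c98e</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret 20c3fd98-2bab-457a-b1e2-12e50dc6c98e --base64 $(ceph auth get-key client.cinder)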

OpenStack configuration

Create host aggregates for the ceph compute nodes and the local storage compute nodes

# nova aggregate-create ephemeral-compute-storage
+----+---------------------------+-------------------+-------+----------+
| Id | Name                      | Availability Zone | Hosts | Metadata |
+----+---------------------------+-------------------+-------+----------+
| 8  | ephemeral-compute-storage | -                 |       |          |
+----+---------------------------+-------------------+-------+----------+

# nova aggregate-create ceph-compute-storage
+----+----------------------+-------------------+-------+----------+
| Id | Name                 | Availability Zone | Hosts | Metadata |
+----+----------------------+-------------------+-------+----------+
| 9  | ceph-compute-storage | -                 |       |          |
+----+----------------------+-------------------+-------+----------+

You can use the nova hypervisor-list command to check your hypervisor names

# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | compute1            | up    | enabled |
| 2  | compute2            | up    | enabled |
| 4  | compute4            | up    | enabled |
| 7  | compute3            | up    | enabled |
+----+---------------------+-------+---------+

In this example, the hosts are grouped as follows:
Local storage: compute1, compute2
Ceph storage: compute3, compute4

Add the hosts to the host aggregates

# nova aggregate-add-host ephemeral-compute-storage compute1
# nova aggregate-add-host ephemeral-compute-storage compute2
# nova aggregate-add-host ceph-compute-storage compute3
# nova aggregate-add-host ceph-compute-storage compute4

Create new metadata for the host aggregates

# nova aggregate-set-metadata ephemeral-compute-storage ephemeralcomputestorage=true
# nova aggregate-set-metadata ceph-compute-storage cephcomputestorage=true

Create flavors for instances that will use local storage and ceph storage (nova flavor-create arguments: name, ID, RAM in MB, root disk in GB, vCPUs)

# nova flavor-create m1.ephemeral-compute-storage 8 128 1 1
# nova flavor-create m1.ceph-compute-storage 9 128 1 1

Bind the aggregate metadata keys to the flavors as extra specs

# nova flavor-key m1.ceph-compute-storage set aggregate_instance_extra_specs:cephcomputestorage=true
# nova flavor-key m1.ephemeral-compute-storage set aggregate_instance_extra_specs:ephemeralcomputestorage=true
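One way to double-check that the extra specs were attached is to inspect the flavors:

# nova flavor-show m1.ceph-compute-storage
# nova flavor-show m1.ephemeral-compute-storage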

Verifying the result

Launch four instances with the m1.ceph-compute-storage flavor; all of the instance disk files end up in the ceph vms pool

[root@controller1 ~]# nova list
+--------------------------------------+--------+--------+------------+-------------+---------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks            |
+--------------------------------------+--------+--------+------------+-------------+---------------------+
| 5d6bd85e-9b75-4035-876c-30e997ea0a98 | ceph-1 | BUILD  | spawning   | NOSTATE     | private=172.16.1.49 |
| aa666bd9-e370-4c53-8af3-f1bf7ba77900 | ceph-2 | BUILD  | spawning   | NOSTATE     | private=172.16.1.48 |
| 56d6a3a8-e6c4-4860-bd72-2e0aa0fa55f2 | ceph-3 | BUILD  | spawning   | NOSTATE     | private=172.16.1.47 |
| 2b9577d8-2448-4d8a-ba98-253b0f597b12 | ceph-4 | BUILD  | spawning   | NOSTATE     | private=172.16.1.50 |
+--------------------------------------+--------+--------+------------+-------------+---------------------+

[root@node1 ~]# rbd ls vms
2b9577d8-2448-4d8a-ba98-253b0f597b12_disk
56d6a3a8-e6c4-4860-bd72-2e0aa0fa55f2_disk
5d6bd85e-9b75-4035-876c-30e997ea0a98_disk
aa666bd9-e370-4c53-8af3-f1bf7ba77900_disk

Delete all the instances (to make verification easier), then launch four instances with the m1.ephemeral-compute-storage flavor; the instance disk files are spread across the local storage of compute1 and compute2 (no NFS or other shared storage is configured)

[root@controller1 ~]# nova list
+--------------------------------------+---------+--------+------------+-------------+---------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks            |
+--------------------------------------+---------+--------+------------+-------------+---------------------+
| 1c1ce5f3-b5aa-47dd-806c-e2eba60b9eb0 | local-1 | ACTIVE | -          | Running     | private=172.16.1.51 |
| 5a3e4074-619e-423a-a649-e24771f9fbd1 | local-2 | ACTIVE | -          | Running     | private=172.16.1.54 |
| 5b838406-b9cf-4943-89f3-79866f8e6e19 | local-3 | ACTIVE | -          | Running     | private=172.16.1.52 |
| 30e7289f-bc80-4374-aabb-906897b8141c | local-4 | ACTIVE | -          | Running     | private=172.16.1.53 |
+--------------------------------------+---------+--------+------------+-------------+---------------------+

[root@compute1 ~]# ll /var/lib/nova/instances/
total 4
drwxr-xr-x 2 nova nova  69 Jul 27 10:40 1c1ce5f3-b5aa-47dd-806c-e2eba60b9eb0
drwxr-xr-x 2 nova nova  69 Jul 27 10:40 5b838406-b9cf-4943-89f3-79866f8e6e19
drwxr-xr-x 2 nova nova  53 Jul 25 16:01 _base
-rw-r--r-- 1 nova nova  31 Jul 27 10:33 compute_nodes
drwxr-xr-x 2 nova nova 143 Jul 25 16:01 locks
drwxr-xr-x 2 nova nova   6 Jul  6 15:51 snapshots

[root@compute2 ~]# ll /var/lib/nova/instances/
total 4
drwxr-xr-x 2 nova nova  69 Jul 27 10:40 30e7289f-bc80-4374-aabb-906897b8141c
drwxr-xr-x 2 nova nova  69 Jul 27 10:40 5a3e4074-619e-423a-a649-e24771f9fbd1
drwxr-xr-x 2 nova nova  53 Jul 25 16:02 _base
-rw-r--r-- 1 nova nova  62 Jul 27 10:33 compute_nodes
drwxr-xr-x 2 nova nova 143 Jul 25 16:01 locks

Additional notes

When live-migrating an instance, hosts outside the instance's host aggregate can still be selected, but the migration will fail; a check that restricts migration to hosts within the same aggregate still needs to be added (a manual workaround is sketched below).
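As a manual workaround until such a check exists, verify that the target host belongs to the same aggregate before issuing the migration; a sketch using this lab's names:

# List the hosts in the instance's aggregate, then migrate only to one of them
# nova aggregate-details ceph-compute-storage
# nova live-migration ceph-1 compute4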

 

Reference: https://www.sebastien-han.fr/blog/2014/09/01/openstack-use-ephemeral-and-persistent-root-storage-for-different-hypervisors/

Configuring and using a RabbitMQ cluster with OpenStack

In production you should run at least three RabbitMQ servers; in a test environment two are enough. Here two nodes are configured: controller1 and controller2.

 

Configuring RabbitMQ for HA queues

  1. On controller1, start rabbitmq with the following command
    # systemctl start rabbitmq-server
  2. Copy the Erlang cookie from controller1 to the other nodes
    # scp root@NODE:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie

    Replace NODE with controller1 or its IP address
  3. On each target node, verify the owner, group, and permissions of the .erlang.cookie file
    # chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
    # chmod 400 /var/lib/rabbitmq/.erlang.cookie
  4. Enable rabbitmq-server to start at boot and start it on the other nodes
    # systemctl enable rabbitmq-server
    # systemctl start rabbitmq-server
  5. Run the following command on each node to confirm that rabbitmq-server is running correctly
    # rabbitmqctl cluster_status
    Cluster status of node rabbit@controller1...
    [{nodes,[{disc,[rabbit@controller1]}]},
    {running_nodes,[rabbit@controller1]},
    {partitions,[]}]
    ...done.
  6. On every node except the first one (controller1), run the following commands to join the cluster
    # rabbitmqctl stop_app
    Stopping node rabbit@controller2...
    ...done.
    # rabbitmqctl join_cluster --ram rabbit@controller1
    # rabbitmqctl start_app
    Starting node rabbit@controller2...
    ...done.
  7. Verify the cluster status
    # rabbitmqctl cluster_status
    Cluster status of node rabbit@controller1...
    [{nodes,[{disc,[rabbit@controller1]},{ram,[rabbit@controller2]}]}, \
        {running_nodes,[rabbit@controller2,rabbit@controller1]}]
  8. To ensure that all queues, except those with auto-generated names, are mirrored across all running nodes, set the ha-mode policy by running the following on any node
    # rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'
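The mirrored-queue policy can then be checked on any node:

# rabbitmqctl list_policies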

 

Configuring OpenStack services to use RabbitMQ HA queues

  1. Point the services at both cluster nodes
    rabbit_hosts=controller1:5672,controller2:5672
  2. How frequently to retry connecting with RabbitMQ, in seconds
    rabbit_retry_interval=1
  3. How long to back off between retries when connecting to RabbitMQ, also in seconds
    rabbit_retry_backoff=2
  4. Maximum number of RabbitMQ connection retries (0 means retry forever, which is the default)
    rabbit_max_retries=0
  5. Use durable queues in RabbitMQ
    rabbit_durable_queues=true
  6. Use HA queues in RabbitMQ (a consolidated example is shown after the note below)
    rabbit_ha_queues=true
NOTE: If you are switching an old configuration that did not use HA queues over to HA queues, you need to restart the service

# rabbitmqctl stop_app
# rabbitmqctl reset
# rabbitmqctl start_app
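Putting the options above together, a sketch of what this might look like in a service configuration file such as /etc/nova/nova.conf (depending on the release, these options live under [DEFAULT] or [oslo_messaging_rabbit]):

# vim /etc/nova/nova.conf
[DEFAULT]
rabbit_hosts=controller1:5672,controller2:5672
rabbit_retry_interval=1
rabbit_retry_backoff=2
rabbit_max_retries=0
rabbit_durable_queues=true
rabbit_ha_queues=true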

 

 

Installing OpenStack Liberty on CentOS 7 with devstack

  1. Environment
    OS: CentOS Linux release 7.2.1511 (Core)
    Host IP: 172.16.33.201
    Gateway: 172.16.33.254
  2. Download devstack and prepare
    This differs a little from other write-ups: when cloning devstack you need to specify the branch, otherwise the installation will later complain that a script does not exist
    # cd /opt
    # git clone https://git.openstack.org/openstack-dev/devstack -b stable/liberty

    Create the stack user and change the owner of the devstack directory
    # cd /opt/devstack/tools/
    # ./create-stack-user.sh
    # chown -R stack:stack /opt/devstack
  3. Create a local.conf file; an example is shown below, adjust as needed
    # vim /opt/devstack/local.conf
    
    [[local|localrc]]
    # Define images to be automatically downloaded during the DevStack built process.
    IMAGE_URLS="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
    # Credentials
    DATABASE_PASSWORD=123456
    ADMIN_PASSWORD=123456
    SERVICE_PASSWORD=123456
    SERVICE_TOKEN=pass
    RABBIT_PASSWORD=123456
    #FLAT_INTERFACE=eth0
    
    HOST_IP=172.16.33.201
    SERVICE_HOST=172.16.33.201
    MYSQL_HOST=172.16.33.201
    RABBIT_HOST=172.16.33.201
    GLANCE_HOSTPORT=172.16.33.201:9292
    
    
    ## Neutron options
    Q_USE_SECGROUP=True
    FLOATING_RANGE=172.16.33.0/24
    FIXED_RANGE=10.0.0.0/24
    Q_FLOATING_ALLOCATION_POOL=start=172.16.33.202,end=172.16.33.210
    PUBLIC_NETWORK_GATEWAY=172.16.33.254
    Q_L3_ENABLED=True
    PUBLIC_INTERFACE=eth0
    Q_USE_PROVIDERNET_FOR_PUBLIC=True
    OVS_PHYSICAL_BRIDGE=br-ex
    PUBLIC_BRIDGE=br-ex
    OVS_BRIDGE_MAPPINGS=public:br-ex
    
    
    # Work offline
    #OFFLINE=True
    # Reclone each time
    RECLONE=False
    
    
    # Logging
    # -------
    # By default ``stack.sh`` output only goes to the terminal where it runs. It can
    # be configured to additionally log to a file by setting ``LOGFILE`` to the full
    # path of the destination log file. A timestamp will be appended to the given name.
    LOGFILE=/opt/stack/logs/stack.sh.log
    VERBOSE=True
    LOG_COLOR=True
    SCREEN_LOGDIR=/opt/stack/logs
    
    # the number of days by setting ``LOGDAYS``.
    LOGDAYS=1
    # Database Backend MySQL
    enable_service mysql
    # RPC Backend RabbitMQ
    enable_service rabbit
    
    
    # Enable Keystone - OpenStack Identity Service
    enable_service key
    # Horizon - OpenStack Dashboard Service
    enable_service horizon
    # Enable Swift - Object Storage Service without replication.
    enable_service s-proxy s-object s-container s-account
    SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
    SWIFT_REPLICAS=1
    # Enable Glance - OpenStack Image service
    enable_service g-api g-reg
    
    # Enable Cinder - Block Storage service for OpenStack
    VOLUME_GROUP="cinder-volumes"
    enable_service cinder c-api c-vol c-sch c-bak
    # Enable Heat (orchestration) Service
    enable_service heat h-api h-api-cfn h-api-cw h-eng
    # Enable Trove (database) Service
    enable_service trove tr-api tr-tmgr tr-cond
    # Enable Sahara (data_processing) Service
    enable_service sahara
    
    # Enable Tempest - The OpenStack Integration Test Suite
    enable_service tempest
    
    # Enabling Neutron (network) Service
    disable_service n-net
    enable_service q-svc
    enable_service q-agt
    enable_service q-dhcp
    enable_service q-l3
    enable_service q-meta
    enable_service q-metering
    enable_service neutron
    
    
    ## Neutron - Load Balancing
    enable_service q-lbaas
    ## Neutron - Firewall as a Service
    enable_service q-fwaas
    ## Neutron - VPN as a Service
    enable_service q-vpn
    # VLAN configuration.
    #Q_PLUGIN=ml2
    #ENABLE_TENANT_VLANS=True
    
    
    # GRE tunnel configuration
    #Q_PLUGIN=ml2
    #ENABLE_TENANT_TUNNELS=True
    # VXLAN tunnel configuration
    Q_PLUGIN=ml2
    Q_ML2_TENANT_NETWORK_TYPE=vxlan
    
    # Enable Ceilometer - Metering Service (metering + alarming)
    enable_service ceilometer-acompute ceilometer-acentral ceilometer-collector ceilometer-api
    enable_service ceilometer-alarm-notify ceilometer-alarm-eval
    enable_service ceilometer-anotification
    ## Enable NoVNC
    enable_service n-novnc n-cauth
    
    # Enable the Ceilometer devstack plugin
    enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git
    
    # Branches
    KEYSTONE_BRANCH=stable/liberty
    NOVA_BRANCH=stable/liberty
    NEUTRON_BRANCH=stable/liberty
    SWIFT_BRANCH=stable/liberty
    GLANCE_BRANCH=stable/liberty
    CINDER_BRANCH=stable/liberty
    HEAT_BRANCH=stable/liberty
    TROVE_BRANCH=stable/liberty
    HORIZON_BRANCH=stable/liberty
    SAHARA_BRANCH=stable/liberty
    CEILOMETER_BRANCH=stable/liberty
    TROVE_BRANCH=stable/liberty
    
    # Select Keystone's token format
    # Choose from 'UUID', 'PKI', or 'PKIZ'
    # INSERT THIS LINE...
    KEYSTONE_TOKEN_FORMAT=${KEYSTONE_TOKEN_FORMAT:-UUID}
    KEYSTONE_TOKEN_FORMAT=$(echo ${KEYSTONE_TOKEN_FORMAT} | tr '[:upper:]' '[:lower:]')
    
    
    [[post-config|$NOVA_CONF]]
    [DEFAULT]
    # Ceilometer notification driver
    instance_usage_audit=True
    instance_usage_audit_period=hour
    notify_on_state_change=vm_and_task_state
    notification_driver=nova.openstack.common.notifier.rpc_notifier
    notification_driver=ceilometer.compute.nova_notifier
    
  4. Install OpenStack
    # cd /opt/devstack
    # su stack
    # ./stack.sh

    The installation completes as shown in the screenshot (omitted here).

    Access the dashboard (screenshot omitted here).
  5. Command-line usage (see the sketch below for a few example commands)
    admin user
    # source /opt/devstack/openrc admin admin # load the environment variables first
    demo user
    # source /opt/devstack/openrc demo demo # load the environment variables first
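    For example, after loading the admin credentials, a few quick checks might look like this (which clients are available depends on the services enabled in local.conf):
    # source /opt/devstack/openrc admin admin
    # nova list              # instances
    # glance image-list      # images downloaded by DevStack
    # neutron net-list       # networks created by the Neutron setup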
    

     

Installing and configuring kube-ui on Kubernetes

Continuing from the previous article: installing and configuring Kubernetes on CentOS 7

  1. Download and import the kube-ui image
    Google's registry is blocked, so the image cannot be pulled directly and has to be downloaded by hand (download: kube-ui_v3.tar). Import the image on each minion:
    docker load < kube-ui_v3.tar

     
  2. Create the kube-system namespace
    Create kube-system.json with the following content:
    {
      "kind": "Namespace",
      "apiVersion": "v1",
      "metadata": {
        "name": "kube-system"
      }
    }

    Run the following commands to create the namespace and verify it
    # kubectl create -f kube-system.json
    # kubectl get namespace
    NAME          LABELS    STATUS
    default       <none>    Active
    kube-system   <none>    Active
    

     
  3. Create the replication controller
    Create the kube-ui-rc.yaml file with the following content
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: kube-ui-v3
      namespace: kube-system
      labels:
        k8s-app: kube-ui
        version: v3
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: 3
      selector:
        k8s-app: kube-ui
        version: v3
      template:
        metadata:
          labels:
            k8s-app: kube-ui
            version: v3
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: kube-ui
            image: gcr.io/google_containers/kube-ui:v3
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
            ports:
            - containerPort: 8080
            livenessProbe:
              httpGet:
                path: /
                port: 8080
              initialDelaySeconds: 30
              timeoutSeconds: 5
    

    Run the following commands to create the RC and check it
    # kubectl create -f kube-ui-rc.yaml
    
    #kubectl get rc --all-namespaces
    NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS
    kube-system   kube-ui-v3   kube-ui        gcr.io/google_containers/kube-ui:v3   k8s-app=kube-ui,version=v3   3
    

     
  4. Create the service
    Create the kube-ui-svc.yaml file with the following content
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-ui
      namespace: kube-system
      labels:
        k8s-app: kube-ui
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "KubeUI"
    spec:
      selector:
        k8s-app: kube-ui
      ports:
      - port: 80
        targetPort: 8080

    Run the following commands to create the service, then check the service and pods
    # kubectl create -f kube-ui-svc.yaml
    # kubectl get rc,pods --all-namespaces
    NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS
    kube-system   kube-ui-v3   kube-ui        gcr.io/google_containers/kube-ui:v3   k8s-app=kube-ui,version=v3   3
    NAMESPACE     NAME               READY     STATUS    RESTARTS   AGE
    kube-system   kube-ui-v3-0zyjp   1/1       Running   0          21h
    kube-system   kube-ui-v3-6s1d0   1/1       Running   0          21h
    kube-system   kube-ui-v3-i0uqs   1/1       Running   0          21h
    

    You can see that the kube-ui service has been created successfully and is running 3 replicas
  5. Configure the flannel network on the master so it can reach the minions
    Install flannel on the master and start it
    # yum install flannel -y
    # systemctl enable flanneld
    # systemctl start flanneld
  6. Access kube-ui
    Visiting http://master_ip:8080/ui/ automatically redirects to http://kube-ui:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/ and opens the kube-ui dashboard page (screenshot omitted).

    From here you can view the minions' system information, pods, RCs, services, and so on

Problems encountered while installing Kubernetes

1. Error from server: namespaces "kube-system" not found

Error from server: namespaces "kube-system" not found

Solution:

# vim kube-system.json
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "kube-system"
  }
}
# kubectl create -f kube-system.json

2. Unable to generate self signed cert: mkdir /var/run/kubernetes: permission denied

Aug 12 11:07:05 master kube-apiserver[5336]: E0812 11:07:05.063837    5336 genericapiserver.go:702] Unable to generate self signed cert: mkdir /var/run/kubernetes: permission denied
Aug 12 11:07:05 master kube-apiserver[5336]: I0812 11:07:05.063915    5336 genericapiserver.go:734] Serving insecurely on 0.0.0.0:8080
Aug 12 11:07:05 master systemd[1]: Started Kubernetes API Server.
Aug 12 11:07:05 master kube-apiserver[5336]: E0812 11:07:05.064151    5336 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.

Solution:

# mkdir -p /var/run/kubernetes/
# chown -R kube.kube /var/run/kubernetes/
# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do 
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

3. Downloading google_containers images (pull on the minions)

Add the following entries to the hosts file

# vim /etc/hosts
220.255.2.153 www.gcr.io
220.255.2.153 gcr.io
# docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1

4. no API token found for service account kube-system/default

Error creating: pods "kubernetes-dashboard-1881024876-" is forbidden: no API token found for service account kube-system/default,

Solution: in /etc/kubernetes/apiserver, remove SecurityContextDeny and ServiceAccount from KUBE_ADMISSION_CONTROL, then restart the kube-apiserver.service service

#vim /etc/kubernetes/apiserver
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#systemctl restart kube-apiserver.service

5. Get http://localhost:8080/version: dial tcp 202.102.110.203:8080: getsockopt: connection refused

# docker logs b7cff1accc06
Starting HTTP server on port 9090
Creating API server client for http://localhost:8080
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get http://localhost:8080/version: dial tcp 202.102.110.203:8080: getsockopt: connection refused

Delete the existing, failed kubernetes-dashboard

# kubectl delete -f kubernetes-dashboard.yaml

Edit the kubernetes-dashboard.yaml file and add the following line

# vim kubernetes-dashboard.yaml
        ports:
        - containerPort: 9090
          protocol: TCP 
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
          - --apiserver-host=http://192.168.2.247:8080    ## add this line to specify the apiserver address

Recreate the kubernetes-dashboard

# kubectl create -f kubernetes-dashboard.yaml

6. Cannot access kubernetes-dashboard from a browser

Error: 'dial tcp 172.17.97.3:9090: i/o timeout'
Trying to reach: 'http://172.17.97.3:9090/'

Install flannel on the master

# yum install -y flannel

Edit the flannel configuration file and start flanneld

# vim /etc/sysconfig/flanneld

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://192.168.2.247:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

# systemctl enable flanneld.service ; systemctl start flanneld.service
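If flanneld fails to start because the etcd key above has not been populated yet, the network configuration can be seeded on the etcd host first; a sketch, assuming the 172.17.0.0/16 range (use whatever subnet your cluster was planned with):

# etcdctl set /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
# systemctl restart flanneld.service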