Installing and Configuring a Kubernetes Cluster on CentOS 7 to Manage Pods and Services

I. Preparation

1. Operating system details

Three hosts are required, each minimally installed with CentOS 7.1 and updated to the latest packages. Details are in the table below:

Role     Hostname  IP
Master   master    192.168.0.79
Minion1  minion-1  192.168.0.80
Minion2  minion-2  192.168.0.81
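If the three machines can resolve one another by hostname, the later steps are easier to follow. A sketch of matching /etc/hosts entries, written to a scratch file here so it can be tried anywhere (on the real hosts the target would be /etc/hosts):

```shell
# Hosts entries matching the table above; using a temporary file so
# this sketch does not touch the real /etc/hosts.
hosts_file=$(mktemp)
cat >> "$hosts_file" <<'EOF'
192.168.0.79 master
192.168.0.80 minion-1
192.168.0.81 minion-2
EOF
cat "$hosts_file"
```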

2. On every host, disable firewalld and switch to iptables

Run the following commands to stop firewalld:

# systemctl stop firewalld.service    # stop firewalld now
# systemctl disable firewalld.service # keep it from starting at boot

Then install and enable iptables:

# yum install -y iptables-services    # install
# systemctl start iptables.service    # start the iptables service
# systemctl enable iptables.service   # enable it at boot

3. Install the NTP service

# yum install -y ntp
# systemctl start ntpd
# systemctl enable ntpd

II. Installation and configuration

Note: kubernetes, etcd, and friends are already in the CentOS EPEL repository and can be installed directly with yum (install epel-release first).

1. Install the Kubernetes master

•  Install kubernetes and etcd with the following command:

# yum install -y kubernetes etcd

•  Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses. Make sure the following lines are uncommented and set to the values below:

# vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

•  Edit the Kubernetes API server configuration file /etc/kubernetes/apiserver. Make sure the following lines are uncommented and set to the values below:

#  vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

•  Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them at boot:

# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do 
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

•  Define the flannel network configuration in etcd; the flannel service will push it down to the minions:

# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
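etcd will store any string, valid JSON or not, so it is worth sanity-checking the value before writing it. A quick local check (python3's json.tool is used here; on a stock CentOS 7 box the interpreter would be python):

```shell
# Validate the flannel network config before handing it to etcdctl;
# json.tool exits non-zero if the string is not valid JSON.
config='{"Network":"172.17.0.0/16"}'
echo "$config" | python3 -m json.tool
```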

•  Add iptables rules to open the required ports:

iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
service iptables save    # iptables-save alone only prints the rules; this persists them

•  List the nodes (none are configured yet, so the output should be empty):

# kubectl get nodes
NAME             LABELS              STATUS

2. Install the Kubernetes minions (nodes)

Note: the following steps should be performed on both minion-1 and minion-2 (more minions can be added the same way).

•  Install kubernetes and flannel with yum:

# yum install -y flannel kubernetes

•  Point the flannel service at the etcd server. Edit the following line in /etc/sysconfig/flanneld so it connects to the master:

# vim /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.0.79:2379"        # change to the etcd server's IP

•  Edit the Kubernetes defaults in /etc/kubernetes/config and make sure KUBE_MASTER points at the Kubernetes master API server:

# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.0.79:8080"

•  Edit the following lines in /etc/kubernetes/kubelet:

On minion-1:
# vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.0.80"
KUBELET_API_SERVER="--api_servers=http://192.168.0.79:8080"
KUBELET_ARGS=""

On minion-2:
# vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.0.81"
KUBELET_API_SERVER="--api_servers=http://192.168.0.79:8080"
KUBELET_ARGS=""
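The two kubelet files differ only in the hostname_override value, so they can be generated rather than edited by hand. A small sketch (the IPs are the ones from the table above; it writes to a scratch file instead of /etc/kubernetes/kubelet so it is safe to try anywhere):

```shell
# Generate a minion's kubelet config; only NODE_IP changes per node.
MASTER_IP=192.168.0.79
NODE_IP=192.168.0.80      # use 192.168.0.81 on minion-2
out=$(mktemp)
cat > "$out" <<EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=${NODE_IP}"
KUBELET_API_SERVER="--api_servers=http://${MASTER_IP}:8080"
KUBELET_ARGS=""
EOF
cat "$out"
```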

•  Start the kube-proxy, kubelet, docker, and flanneld services, and enable them at boot:

# for SERVICES in kube-proxy kubelet docker flanneld; do 
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

•  On each minion you should now see two new interfaces, docker0 and flannel0, and each minion should get a different IP range on flannel0, like this:

On minion-1:
# ip a | grep flannel | grep inet
    inet 172.17.29.0/16 scope global flannel0
On minion-2:
# ip a | grep flannel | grep inet
    inet 172.17.37.0/16 scope global flannel0

•  Add the iptables rules:

iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT

•  Now log in to the Kubernetes master and verify the minions' status:

# kubectl get nodes
NAME           LABELS                                STATUS
192.168.0.80   kubernetes.io/hostname=192.168.0.80   Ready
192.168.0.81   kubernetes.io/hostname=192.168.0.81   Ready

At this point the Kubernetes cluster is configured and running, and we can move on to the next steps.

III. Creating Pods (Containers)

To create a pod, we define a YAML or JSON configuration file on the Kubernetes master, then create the pod with the kubectl command:

# mkdir -p k8s/pods
# cd k8s/pods/
# vim nginx.yaml

Add the following content to nginx.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Create the pod:

# kubectl create -f nginx.yaml

This fails with the following error:

Error from server: error when creating "nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account

The fix is to edit /etc/kubernetes/apiserver, remove SecurityContextDeny and ServiceAccount from KUBE_ADMISSION_CONTROL, and restart kube-apiserver.service:

# vim /etc/kubernetes/apiserver
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# systemctl restart kube-apiserver.service

Then create the pod again:

# kubectl create -f nginx.yaml
pods/nginx

Inspect the pod:

# kubectl get pod nginx
NAME      READY     STATUS                                            RESTARTS   AGE
nginx     0/1       Image: nginx is not ready on the node   0          34s

The STATUS stays stuck like this and the pod never comes up, so let's debug. kubectl describe reveals the following errors:

# kubectl describe pod nginx 
Wed, 28 Oct 2015 10:25:30 +0800       Wed, 28 Oct 2015 10:25:30 +0800 1       {kubelet 192.168.0.81}  implicitly required container POD       pulled          Successfully pulled Pod container image "gcr.io/google_containers/pause:0.8.0"
  Wed, 28 Oct 2015 10:25:30 +0800       Wed, 28 Oct 2015 10:25:30 +0800 1       {kubelet 192.168.0.81}  implicitly required container POD       failed          Failed to create docker container with error: no such image
  Wed, 28 Oct 2015 10:25:30 +0800       Wed, 28 Oct 2015 10:25:30 +0800 1       {kubelet 192.168.0.81}                                          failedSync      Error syncing pod, skipping: no such image
  Wed, 28 Oct 2015 10:27:30 +0800       Wed, 28 Oct 2015 10:29:30 +0800 2       {kubelet 192.168.0.81}  implicitly required container POD       failed          Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (API error (500): invalid registry endpoint "http://gcr.io/v0/". HTTPS attempt: unable to ping registry endpoint https://gcr.io/v0/
v2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 173.194.72.82:443: i/o timeout

A manual ping of gcr.io fails (it is probably blocked).

Find the pause:0.8.0 image elsewhere, then load it on each minion:

# docker load --input pause-0.8.0.tar

Attachment: pause-0.8.0.tar

Running the following command again now creates the pod successfully:

# kubectl create -f nginx.yaml
pods/nginx

Inspect the pod:

# kubectl get pod nginx
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          2min
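The title promises services as well as pods. With the pod running, a minimal Service that selects it by its app: nginx label could look like the following (the name nginx-service is my choice; this is a sketch for this cluster, not part of the original walkthrough):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # matches the label set in nginx.yaml
  ports:
  - port: 80          # service port
    targetPort: 80    # container port
```

Created the same way with kubectl create -f, it would receive a cluster IP from the 10.254.0.0/16 range configured in the apiserver earlier.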

 

Using MySQL as the Backing Database for OFBiz

        OFBiz is a well-known open-source e-commerce platform: a framework for building large and mid-sized, enterprise-grade, cross-platform, cross-database, cross-application-server, multi-tier, distributed e-commerce web applications based on current J2EE/XML specifications and standards. Its main strength is a complete set of components and tools for developing Java-based web applications, including an entity engine, service engine, message engine, workflow engine, rule engine, and more. OFBiz is now an Apache top-level project: Apache OFBiz.
        OFBiz ships with Derby, a small database fine for test systems but unsuitable for production, so the OFBiz database usually needs to be migrated to something else. The steps below cover migrating to MySQL; other databases are similar.
  1. Install MySQL and create the OFBiz databases
    Use the following commands to create the ofbiz user (password ofbiz) and the three databases ofbiz, ofbizolap, and ofbiztenant:
    mysql -u root 
    >create user 'ofbiz'@'localhost' identified by 'ofbiz';   
    >create database ofbiz DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;  
    >create database ofbizolap DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;  
    >create database ofbiztenant DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;  
    >grant all on *.* to 'ofbiz'@'localhost';
    >flush privileges;
    >quit;
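The same statements can be kept in a SQL script so the database setup is reproducible; a sketch that writes them to a scratch file (on a real setup it would then be fed to mysql with mysql -u root < the file):

```shell
# Save the bootstrap statements shown above into a script file
# instead of typing them at the mysql prompt.
sql_file=$(mktemp)
cat > "$sql_file" <<'EOF'
create user 'ofbiz'@'localhost' identified by 'ofbiz';
create database ofbiz DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;
create database ofbizolap DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;
create database ofbiztenant DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_general_ci;
grant all on *.* to 'ofbiz'@'localhost';
flush privileges;
EOF
cat "$sql_file"
```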
    
  2. Edit the OFBiz configuration
    Edit entityengine.xml to change the default database engine and the username, password, and other connection details:
    vim ofbiz_HOME/framework/entity/config/entityengine.xml

    Change the delegator definitions to the following (i.e., comment out Derby and enable MySQL):
    <delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="false">
        <!-- <group-map group-name="org.ofbiz" datasource-name="localderby"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/> -->
        <group-map group-name="org.ofbiz" datasource-name="localmysql"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>
        <!-- <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localpostolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localposttenant"/> -->
    </delegator>
    <delegator name="default-no-eca" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" entity-eca-enabled="false" distributed-cache-clear-enabled="false">
        <!-- <group-map group-name="org.ofbiz" datasource-name="localderby"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/> -->
        <group-map group-name="org.ofbiz" datasource-name="localmysql"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>
        <!-- <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localpostolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localposttenant"/> -->
    </delegator>
    <!-- be sure that your default delegator (or the one you use) uses the same datasource for test. You must run "ant load-demo" before running "ant run-tests" -->
    <delegator name="test" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main">
        <!-- <group-map group-name="org.ofbiz" datasource-name="localderby"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localderbytenant"/> -->
        <group-map group-name="org.ofbiz" datasource-name="localmysql"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localmysqlolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localmysqltenant"/>
        <!-- <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
        <group-map group-name="org.ofbiz.olap" datasource-name="localpostolap"/>
        <group-map group-name="org.ofbiz.tenant" datasource-name="localposttenant"/> -->
    </delegator>

    Change the datasource definitions as well, paying attention to the database login details and the character set and collation:
    <datasource name="localmysql"
                helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
                field-type-name="mysql"
                check-on-start="true"
                add-missing-on-start="true"
                check-pks-on-start="false"
                use-foreign-keys="true"
                join-style="ansi-no-parenthesis"
                alias-view-columns="false"
                drop-fk-use-foreign-key-keyword="true"
                table-type="InnoDB"
                character-set="utf8"
                collate="utf8_general_ci">
        <read-data reader-name="tenant"/>
        <read-data reader-name="seed"/>
        <read-data reader-name="seed-initial"/>
        <read-data reader-name="demo"/>
        <read-data reader-name="ext"/>
        <read-data reader-name="ext-test"/>
        <read-data reader-name="ext-demo"/>
        <inline-jdbc jdbc-driver="com.mysql.jdbc.Driver"
                     jdbc-uri="jdbc:mysql://127.0.0.1:3306/ofbiz?autoReconnect=true"
                     jdbc-username="ofbiz"
                     jdbc-password="ofbiz"
                     isolation-level="ReadCommitted"
                     pool-minsize="2"
                     pool-maxsize="250"
                     time-between-eviction-runs-millis="600000"/>
        <!-- Please note that at least one person has experienced a problem with this value with MySQL and had to set it to -1 in order to avoid this issue. For more look at http://markmail.org/thread/5sivpykv7xkl66px and http://commons.apache.org/dbcp/configuration.html -->
        <!-- <jndi-jdbc jndi-server-name="localjndi" jndi-name="java:/MySqlDataSource" isolation-level="Serializable"/> -->
    </datasource>

    The localmysqlolap and localmysqltenant datasources are identical except for the datasource name and the database name in jdbc-uri (ofbizolap and ofbiztenant respectively).


  3. Copy the MySQL JDBC driver into place
    Download mysql.jar from http://dev.mysql.com/downloads/connector/j/ (mysql-connector-java-5.1.36-bin is used here) and copy it into the lib directory:

    cp mysql-connector-java-5.1.36-bin.jar ofbiz_HOME/framework/base/lib/
  4. Load the data and start OFBiz
    cd ofbiz_HOME
    ./ant load-demo           # load the demo data
    ./ant start               # start ofbiz

    This completes the MySQL configuration for OFBiz. For everything else, see the README in the OFBiz directory.

Problems Encountered Configuring a Local Docker Registry

The local-registry setup follows: http://dockerpool.com/static/books/docker_practice/repository/local_repo.html

A few problems came up while running the following command; they are recorded below:

pip install docker-registry
  •  ERROR 1
    Searching for M2Crypto==0.22.3
    Reading https://pypi.python.org/simple/M2Crypto/
    Best match: M2Crypto 0.22.3
    Downloading https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz#md5=573f21aaac7d5c9549798e72ffcefedd
    Processing M2Crypto-0.22.3.tar.gz
    Writing /tmp/easy_install-vVPR1Z/M2Crypto-0.22.3/setup.cfg
    Running M2Crypto-0.22.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-vVPR1Z/M2Crypto-0.22.3/egg-dist-tmp-3c7TJ3
    SWIG/_m2crypto.i:30: Error: Unable to find 'openssl/opensslv.h'
    SWIG/_m2crypto.i:33: Error: Unable to find 'openssl/safestack.h'
    SWIG/_evp.i:12: Error: Unable to find 'openssl/opensslconf.h'
    SWIG/_ec.i:7: Error: Unable to find 'openssl/opensslconf.h'
    error: Setup script exited with error: command 'swig' failed with exit status 1

    The fix is to install openssl-devel:
    yum install -y openssl-devel.x86_64

    Rerunning pip install docker-registry produces another error:
  • ERROR 2
    Searching for M2Crypto==0.22.3
    Reading https://pypi.python.org/simple/M2Crypto/
    Best match: M2Crypto 0.22.3
    Downloading https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz#md5=573f21aaac7d5c9549798e72ffcefedd
    Processing M2Crypto-0.22.3.tar.gz
    Writing /tmp/easy_install-5hkA4l/M2Crypto-0.22.3/setup.cfg
    Running M2Crypto-0.22.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-5hkA4l/M2Crypto-0.22.3/egg-dist-tmp-pZ_OGN
    /usr/include/openssl/opensslconf.h:36: Error: CPP #error ""This openssl-devel package does not work your architecture?"". Use the -cpperraswarn option to continue swig processing.
    error: Setup script exited with error: command 'swig' failed with exit status 1

    The fix is to install M2Crypto 0.22.3 manually (on CentOS 7 the build needs a helper script):
    wget https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz   # download the source
    tar zxvf M2Crypto-0.22.3.tar.gz                                                 # extract
    cd M2Crypto-0.22.3

    Then create the build script with the following content:
    vim fedora_setup.sh
    #!/bin/sh
    # This script is meant to work around the differences on Fedora Core-based
    # distributions (Redhat, CentOS, ...) compared to other common Linux
    # distributions.
    #
    # Usage: ./fedora_setup.sh [setup.py options]
    #
    
    arch=`uname -m`
    for i in SWIG/_{ec,evp}.i; do
      sed -i -e "s/opensslconf\./opensslconf-${arch}\./" "$i"
    done
    
    SWIG_FEATURES=-cpperraswarn python setup.py $*

    Make the script executable, run it, then install M2Crypto 0.22.3:
    chmod +x fedora_setup.sh
    ./fedora_setup.sh build
    python setup.py install
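What fedora_setup.sh actually does is rewrite the opensslconf includes in the SWIG interface files to the architecture-specific header names shipped by openssl-devel. The sed expression can be tried in isolation on a throwaway file (the sample line and the fixed arch value are mine; the real script derives arch from `uname -m`):

```shell
# Demonstrate the rewrite performed by fedora_setup.sh.
arch=x86_64                # the script uses `uname -m` instead
f=$(mktemp)
echo '%include "openssl/opensslconf.h"' > "$f"
sed -i -e "s/opensslconf\./opensslconf-${arch}\./" "$f"
cat "$f"   # -> %include "openssl/opensslconf-x86_64.h"
```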

    The installation now completes. Note that the private registry's sample configuration file, config_sample.yml, is located at:
    /usr/lib/python2.7/site-packages/docker_registry-1.0.0_dev-py2.7.egg/config

    After configuring the registry and starting the service, the next error shows up when pulling/pushing images:
  • ERROR 3
    docker pull 172.16.18.159:5000/ubuntu:12.04
    Error: Invalid registry endpoint https://172.16.18.159:5000/v1/: Get https://172.16.18.159:5000/v1/_ping: EOF. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry http://172.16.18.159:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/http://172.16.18.159:5000/ca.crt

    The fix is to add the --insecure-registry 172.16.18.159:5000 option to OPTIONS in docker's configuration file:
    # /etc/sysconfig/docker
    
    # Modify these options if you want to change the way the docker daemon runs
    OPTIONS='--selinux-enabled --insecure-registry 172.16.18.159:5000'
    DOCKER_CERT_PATH=/etc/docker

    Then restart the docker service:
    systemctl restart docker

    With that, all the errors are resolved and the local registry works.

OpenStack: Booting a VM with --availability-zone Fails for Non-admin Tenants

Under the admin tenant, booting a VM on a specific node with nova boot --availability-zone works fine.

But under a non-admin tenant, specifying --availability-zone fails:

# nova boot --flavor m1.tiny --image cirros --nic net-id=65758d11-4027-4b33-9a8f-a5a215bb89c0 --availability-zone nova:vgw test-vgw
ERROR: Policy doesn't allow compute:create:forced_host to be performed. (HTTP 403) (Request-ID: req-42f48090-e0eb-4ed0-8493-99b06d1ce02d)

Adding the --debug option shows the following detail:

INFO (connectionpool:203) Starting new HTTP connection (1): 172.16.85.129
DEBUG (connectionpool:295) "POST /v1.1/bdd28cc0c15245adae5455a67118bb17/servers HTTP/1.1" 403 107
RESP: [403] {'date': 'Fri, 19 Jun 2015 04:45:54 GMT', 'content-length': '107', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-ed4a06fc-512e-4c5a-9f99-0b7304f817d0'}
RESP BODY: {"forbidden": {"message": "Policy doesn't allow compute:create:forced_host to be performed.", "code": 403}}

DEBUG (shell:783) Policy doesn't allow compute:create:forced_host to be performed. (HTTP 403) (Request-ID: req-ed4a06fc-512e-4c5a-9f99-0b7304f817d0)
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/novaclient/shell.py", line 780, in main
    OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
  File "/usr/lib/python2.6/site-packages/novaclient/shell.py", line 716, in main
    args.func(self.cs, args)
  File "/usr/lib/python2.6/site-packages/novaclient/v1_1/shell.py", line 433, in do_boot
    server = cs.servers.create(*boot_args, **boot_kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/v1_1/servers.py", line 871, in create
    **boot_kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/v1_1/servers.py", line 534, in _boot
    return_raw=return_raw, **kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/base.py", line 152, in _create
    _resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 312, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 286, in _cs_request
    **kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 268, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 262, in request
    raise exceptions.from_response(resp, body, url, method)
Forbidden: Policy doesn't allow compute:create:forced_host to be performed. (HTTP 403) (Request-ID: req-ed4a06fc-512e-4c5a-9f99-0b7304f817d0)
ERROR: Policy doesn't allow compute:create:forced_host to be performed. (HTTP 403) (Request-ID: req-ed4a06fc-512e-4c5a-9f99-0b7304f817d0)

 

The fix is as follows:

vim /etc/nova/policy.json

# change
"compute:create:forced_host": "is_admin:True",
# to
"compute:create:forced_host": "",
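The same edit can be scripted; a sketch against a throwaway copy of the file (on a real controller the target would be /etc/nova/policy.json, worth backing up first):

```shell
# Flip compute:create:forced_host from admin-only to unrestricted
# in a scratch copy of policy.json.
policy=$(mktemp)
echo '{"compute:create:forced_host": "is_admin:True"}' > "$policy"
sed -i 's/"compute:create:forced_host": "is_admin:True"/"compute:create:forced_host": ""/' "$policy"
cat "$policy"   # -> {"compute:create:forced_host": ""}
```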

Restart the nova-compute service and the boot succeeds.

nova boot vm with '--nic net-id=xxxx, v4-fixed-ip=xxx' failed

On Juno, booting a VM with a specified IP fails. The log /var/log/nova/nova-compute.log
contains the following error (scroll right to read the end of each long line):

2015-06-09 05:53:41.966 19951 ERROR nova.compute.manager [-] [instance: d9058791-9971-4962-8c18-5fb3188355ab] Instance failed to spawn
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab] Traceback (most recent call last):
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2246, in _build_resources
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     yield resources
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     block_device_info=block_device_info)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2616, in spawn
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     admin_pass=admin_password)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3097, in _create_image
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     instance, network_info, admin_pass, files, suffix)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2894, in _inject_data
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     network_info, libvirt_virt_type=CONF.libvirt.virt_type)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/virt/netutils.py", line 87, in get_injected_network_template
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     if not (network_info and template):
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 463, in __len__
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return self._sync_wrapper(fn, *args, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 450, in _sync_wrapper
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     self.wait()
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 482, in wait
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     self[:] = self._gt.wait()
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return self._exit_event.wait()
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 125, in wait
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     current.throw(*self._exc)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     result = function(*args, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1646, in _allocate_network_async
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     dhcp_options=dhcp_options)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 443, in allocate_for_instance
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     self._delete_ports(neutron, instance, created_port_ids)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     six.reraise(self.type_, self.value, self.tb)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 423, in allocate_for_instance
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     security_group_ids, available_macs, dhcp_opts)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 214, in _create_port
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     port_id = port_client.create_port(port_req_body)['port']['id']
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/__init__.py", line 84, in wrapper
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     ret = obj(*args, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 98, in with_params
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     ret = self.function(instance, *args, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 322, in create_port
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return self.post(self.ports_path, body=body)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/__init__.py", line 84, in wrapper
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     ret = obj(*args, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1325, in post
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     headers=headers, params=params)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/__init__.py", line 84, in wrapper
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     ret = obj(*args, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1236, in do_request
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     body = self.serialize(body)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/__init__.py", line 84, in wrapper
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     ret = obj(*args, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 1266, in serialize
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     self.get_attr_metadata()).serialize(data, self.content_type())
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/common/serializer.py", line 390, in serialize
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return self._get_serialize_handler(content_type).serialize(data)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/common/serializer.py", line 54, in serialize
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return self.dispatch(data, action=action)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/common/serializer.py", line 44, in dispatch
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return action_method(*args, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/common/serializer.py", line 66, in default
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return jsonutils.dumps(data, default=sanitizer)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/openstack/common/jsonutils.py", line 168, in dumps
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return json.dumps(value, default=default, **kwargs)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib64/python2.7/json/__init__.py", line 250, in dumps
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     sort_keys=sort_keys, **kw).encode(obj)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib64/python2.7/json/encoder.py", line 207, in encode
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     chunks = self.iterencode(o, _one_shot=True)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib64/python2.7/json/encoder.py", line 270, in iterencode
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return _iterencode(o, 0)
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]   File "/usr/lib/python2.7/site-packages/neutronclient/common/serializer.py", line 65, in sanitizer
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab]     return six.text_type(obj, 'utf8')
2015-06-09 05:53:41.966 19951 TRACE nova.compute.manager [instance: d9058791-9971-4962-8c18-5fb3188355ab] TypeError: coercing to Unicode: need string or buffer, IPAddress found

Searching on the keyword "TypeError: coercing to Unicode: need string or buffer, IPAddress found" turns up this bug report: https://bugs.launchpad.net/nova/+bug/1408529
The report explains the cause: If ip address is provided when running nova boot, nova compute will invoke neutron client to create a port. However, the ip address parameter is an IPAddress object so neutron client will fail to send the request to neutron server. Transform IPAddress object to string to address this issue.
In other words, the fix is simply to convert the parameter passed to neutronclient to a str.
Bug-fix commit: https://git.openstack.org/cgit/openstack/nova/commit/?id=aae858a246e20b1bf55004517b5d9ab28968190a
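The failure is easy to reproduce in miniature. The sketch below uses a stand-in `IPAddress` class (not netaddr's real one) and a sanitizer that mirrors the `six.text_type(obj, 'utf8')` call in neutronclient's serializer: passing the object raises a TypeError, while converting it to `str` first serializes cleanly.

```python
import json

# Stand-in for netaddr.IPAddress (an assumption for illustration, not the real class)
class IPAddress:
    def __init__(self, addr):
        self.addr = addr
    def __str__(self):
        return self.addr

# Mirrors neutronclient's sanitizer, which coerces unknown objects with
# text_type(obj, 'utf8') -- that only works for byte strings, so an
# IPAddress object raises TypeError.
def sanitizer(obj):
    return str(obj, 'utf8')

fixed_ip = IPAddress('192.168.0.100')

# Broken: the request body carries the IPAddress object itself
body = {'port': {'fixed_ips': [{'ip_address': fixed_ip}]}}
try:
    json.dumps(body, default=sanitizer)
except TypeError as e:
    print('serialization failed:', e)

# Fixed: convert the address to str before building the request body
body = {'port': {'fixed_ips': [{'ip_address': str(fixed_ip)}]}}
print(json.dumps(body))
```

This is exactly the shape of the upstream patch: nothing changes on the neutron side; nova just hands the client a plain string instead of an object the serializer cannot coerce.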

Edit /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:

 198   try:
 199       if fixed_ip:
 200           port_req_body['port']['fixed_ips'] = [{'ip_address': fixed_ip}]
 201           port_req_body['port']['network_id'] = network_id

Change line 200 to:

200            port_req_body['port']['fixed_ips'] = [{'ip_address': str(fixed_ip)}]

After restarting nova-compute, the instance boots successfully with the specified fixed IP.