
Running OpenStack Tempest Tests with Rally

Introduction to Rally

Rally is an open-source testing tool from the OpenStack community that can be used to performance-test the individual OpenStack components. With Rally, users can install and deploy an OpenStack cloud, verify its functionality, run large-scale load (performance) tests, and produce test reports.
Rally's overview and architecture are shown in the figure below:
(image: Rally overview and architecture)
Rally consists of three main parts:

  • Deploy engine: not a real deployment tool itself, but a plugin layer that works with existing deployers (such as DevStack, Fuel, and Anvil) to simplify and unify the deployment workflow.
  • Verification: uses Tempest to verify the functionality of an already deployed OpenStack cloud.
  • Benchmark engine: runs the performance tests.
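As an illustration of what the Benchmark engine consumes, a task file describes which scenarios to run and how. Below is a minimal sketch modeled on the upstream sample tasks; the flavor and image names are assumptions and must match your cloud:

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "flavor": {"name": "m1.tiny"},
        "image": {"name": "cirros"}
      },
      "runner": {"type": "constant", "times": 10, "concurrency": 2},
      "context": {"users": {"tenants": 2, "users_per_tenant": 1}}
    }
  ]
}
```

Once a deployment is registered (see below), a task file like this would be run with `rally task start boot-and-delete.json`.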

Introduction to Tempest

Tempest is an open-source project that provides integration testing for the OpenStack cloud platform. It contains API test cases and scenarios for the core components (nova, keystone, glance, neutron, cinder, and so on). Built on unittest2 and nose, it is flexible and easy to extend and maintain, which greatly improves the efficiency of OpenStack testing.

Installing Rally

  1. Install the dependency packages
# yum install python-pip lsb_release gcc gmp-devel libffi-devel libxml2-devel libxslt-devel openssl-devel postgresql-devel python-devel redhat-rpm-config
  2. The simplest way to install Rally is to run the install script below
wget -q -O- https://raw.githubusercontent.com/openstack/rally/master/install_rally.sh | bash
# or using curl:
curl https://raw.githubusercontent.com/openstack/rally/master/install_rally.sh | bash

If you run the script as a regular user, Rally creates a new virtual environment under ~/rally/ and installs itself there, using SQLite as the database backend. If you run the script as root, Rally is installed system-wide. For more installation options, see the installation page.

Configuring Rally

  1. Create an OpenStack credentials file with the following content; adjust the username, password, auth URL, region name, and so on for your environment.
# vim admin-openrc

unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.3.222:5000/v3
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
  2. Load the environment variables
# . admin-openrc
  3. Register an OpenStack deployment. After a successful registration, Rally uses this deployment by default, and a new directory, .rally, appears in your home directory.
# rally deployment create --fromenv --name=openstack

2017-07-31 15:44:12.509 20293 INFO rally.deployment.engines.existing [-] Save deployment 'openstack' (uuid=3403b234-76ae-4afb-9d96-49ef2d872069) with 'openstack' platform.
+--------------------------------------+---------------------+-----------+------------------+--------+
| uuid                                 | created_at          | name      | status           | active |
+--------------------------------------+---------------------+-----------+------------------+--------+
| 3403b234-76ae-4afb-9d96-49ef2d872069 | 2017-07-31T07:44:12 | openstack | deploy->finished |        |
+--------------------------------------+---------------------+-----------+------------------+--------+
Using deployment: 3403b234-76ae-4afb-9d96-49ef2d872069
~/.rally/openrc was updated

HINTS:

* To use standard OpenStack clients, set up your env by running:
    source ~/.rally/openrc
  OpenStack clients are now configured, e.g run:
    openstack image list
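Besides --fromenv, a deployment can also be registered from a JSON file with `rally deployment create --file`. A sketch of the equivalent ExistingCloud definition for the credentials above (field names follow the Rally documentation of this era; double-check them against your Rally version):

```json
{
  "type": "ExistingCloud",
  "auth_url": "http://192.168.3.222:5000/v3",
  "region_name": "RegionOne",
  "admin": {
    "username": "admin",
    "password": "admin",
    "project_name": "admin",
    "user_domain_name": "default",
    "project_domain_name": "Default"
  }
}
```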
  4. Check that the deployment you just registered exists.
# rally deployment list
+--------------------------------------+---------------------+-----------+------------------+--------+
| uuid                                 | created_at          | name      | status           | active |
+--------------------------------------+---------------------+-----------+------------------+--------+
| 3403b234-76ae-4afb-9d96-49ef2d872069 | 2017-07-31T07:44:12 | openstack | deploy->finished | *      |
+--------------------------------------+---------------------+-----------+------------------+--------+
  5. Check that the deployment is usable
# rally deployment check

--------------------------------------------------------------------------------
Platform openstack:
--------------------------------------------------------------------------------

Available services:
+-------------+----------------+-----------+
| Service     | Service Type   | Status    |
+-------------+----------------+-----------+
| __unknown__ | alarming       | Available |
| __unknown__ | compute_legacy | Available |
| __unknown__ | event          | Available |
| __unknown__ | placement      | Available |
| __unknown__ | volumev2       | Available |
| __unknown__ | volumev3       | Available |
| cinder      | volume         | Available |
| glance      | image          | Available |
| gnocchi     | metric         | Available |
| keystone    | identity       | Available |
| neutron     | network        | Available |
| nova        | compute        | Available |
+-------------+----------------+-----------+

For the services shown as __unknown__, see the following articles:
rally deployment check is giving unknown under services
OpenStack Rally Performance Testing

Verifying the Cloud Environment with a Tempest Verifier

  1. Create a Tempest verifier
#  rally verify create-verifier --type tempest --name tempest-verifier
  2. Confirm that the installation completed
# rally verify list-verifiers
+--------------------------------------+------------------+---------+-----------+---------------------+---------------------+-----------+---------+-------------+--------+
| UUID                                 | Name             | Type    | Namespace | Created at          | Updated at          | Status    | Version | System-wide | Active |
+--------------------------------------+------------------+---------+-----------+---------------------+---------------------+-----------+---------+-------------+--------+
| 4f4db99c-3930-442e-b592-bed5f428814e | tempest-verifier | tempest | openstack | 2017-07-31T05:24:09 | 2017-07-31T05:25:28 | installed | master  | False       | ✔      |
+--------------------------------------+------------------+---------+-----------+---------------------+---------------------+-----------+---------+-------------+--------+
  3. Configure the Tempest verifier
    Run the following command to configure the Tempest verifier for the current deployment
# rally verify configure-verifier

2017-07-31 15:56:33.940 20338 INFO rally.api [-] Configuring verifier 'tempest-verifier' (UUID=4f4db99c-3930-442e-b592-bed5f428814e) for deployment 'openstack' (UUID=3403b234-76ae-4afb-9d96-49ef2d872069).
2017-07-31 15:56:35.945 20338 INFO rally.api [-] Verifier 'tempest-verifier' (UUID=4f4db99c-3930-442e-b592-bed5f428814e) has been successfully configured for deployment 'openstack' (UUID=3403b234-76ae-4afb-9d96-49ef2d872069)!

To view the generated configuration:

# rally verify configure-verifier --show

[DEFAULT]
debug = True
use_stderr = False
log_file = 

[auth]
use_dynamic_credentials = True
admin_username = admin
admin_password = admin
admin_project_name = admin
admin_domain_name = default
…………
[service_available]
cinder = True
glance = True
heat = False
ironic = False
neutron = True
nova = True
sahara = False
swift = False

[validation]
run_validation = True
image_ssh_user = cirros
connect_method = floating

[volume-feature-enabled]
bootable = True

Running the Verification

  1. Run the following command to start a verification
# rally verify start
2017-07-31 16:02:14.679 20417 INFO rally.api [-] Starting verification (UUID=ddca5b4b-03a9-49e4-8c91-1d53943ad10b) for deployment 'openstack' (UUID=3403b234-76ae-4afb-9d96-49ef2d872069) by verifier 'tempest-verifier' (UUID=4f4db99c-3930-442e-b592-bed5f428814e).
2017-07-31 16:02:25.381 20417 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_create_agent ... success [0.752s]
2017-07-31 16:02:25.972 20417 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_delete_agent ... success [0.588s]
2017-07-31 16:02:26.458 20417 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_list_agents ... success [0.486s]
2017-07-31 16:02:27.335 20417 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_list_agents_with_filter ... success [0.877s]
2017-07-31 16:02:27.975 20417 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_update_agent ... success [0.639s]
2017-07-31 16:02:37.491 20417 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_availability_zone.AZAdminV2TestJSON.test_get_availability_zone_list ... success [0.498s]
2017-07-31 16:02:38.042 20417 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_availability_zone.AZAdminV2TestJSON.test_get_availability_zone_list_detail ... success [0.551s]

By default, the command above runs the full Tempest suite against the current deployment.

  2. Use the --pattern option to run only a subset of the Tempest tests
# rally verify start --pattern set=compute
2017-07-31 16:07:12.163 20459 INFO rally.api [-] Starting verification (UUID=4e36a2fb-5780-4db0-86bf-fe2b0ab92bf2) for deployment 'openstack' (UUID=3403b234-76ae-4afb-9d96-49ef2d872069) by verifier 'tempest-verifier' (UUID=4f4db99c-3930-442e-b592-bed5f428814e).
2017-07-31 16:07:17.189 20459 INFO tempest-verifier [-] {1} tempest.api.compute.admin.test_auto_allocate_network.AutoAllocateNetworkTest ... skip: The microversion range[2.37 - latest] of this test is out of the configuration range[None - None].
2017-07-31 16:07:21.786 20459 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_create_agent ... success [0.836s]
2017-07-31 16:07:23.170 20459 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_delete_agent ... success [1.382s]
2017-07-31 16:07:25.590 20459 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_list_agents ... success [2.422s]
2017-07-31 16:07:27.447 20459 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_list_agents_with_filter ... success [1.856s]
2017-07-31 16:07:28.135 20459 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_update_agent ... success [0.686s]

With --pattern set=compute, only the compute-related tests are executed. The available test sets are full, smoke, compute, identity, image, network, object_storage, orchestration, volume, and scenario.
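A related option is --skip-list, which reads a file mapping test IDs to a skip reason, so known-problematic tests can be excluded without changing the pattern. A sketch, using a test name from the runs above (the file format follows the Rally verification documentation):

```yaml
# skip-list.yaml, used as: rally verify start --skip-list skip-list.yaml
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_create_agent:
  Skipped in this environment pending agent API rework
```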

  3. You can also use a regular expression to run a particular group of tests
# rally verify start --pattern tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON
2017-07-31 16:25:55.659 20502 INFO rally.api [-] Starting verification (UUID=84fce1ca-304b-4663-bba5-185f24d013a1) for deployment 'openstack' (UUID=3403b234-76ae-4afb-9d96-49ef2d872069) by verifier 'tempest-verifier' (UUID=4f4db99c-3930-442e-b592-bed5f428814e).
2017-07-31 16:26:03.792 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_flavor_using_string_ram ... success [0.683s]
2017-07-31 16:26:04.703 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_flavor_verify_entry_in_list_details ... success [0.910s]
2017-07-31 16:26:05.478 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_flavor_with_int_id ... success [0.774s]
2017-07-31 16:26:06.230 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_flavor_with_none_id ... success [0.750s]
2017-07-31 16:26:06.906 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_flavor_with_uuid_id ... success [0.677s]
2017-07-31 16:26:08.224 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_list_flavor_without_extra_data ... success [1.317s]
2017-07-31 16:26:09.264 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_server_with_non_public_flavor ... success [1.038s]
2017-07-31 16:26:13.144 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_is_public_string_variations ... success [3.873s]
2017-07-31 16:26:14.477 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_list_non_public_flavor ... success [1.336s]
2017-07-31 16:26:15.548 20502 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_list_public_flavor_with_other_user ... success [1.067s]
2017-07-31 16:26:22.767 20502 INFO rally.api [-] Verification (UUID=84fce1ca-304b-4663-bba5-185f24d013a1) has been successfully finished for deployment 'openstack' (UUID=3403b234-76ae-4afb-9d96-49ef2d872069)!

======
Totals
======

Ran: 10 tests in 14.768 sec.
 - Success: 10
 - Skipped: 0
 - Expected failures: 0
 - Unexpected success: 0
 - Failures: 0

Using verification (UUID=84fce1ca-304b-4663-bba5-185f24d013a1) as the default verification for the future operations.

This runs only the flavor-related tests under compute.

  4. In the same way you can run the tests from a particular directory or class, or even a single test
# rally verify start --pattern tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_flavor_using_string_ram
2017-07-31 16:30:28.586 20533 INFO rally.api [-] Starting verification (UUID=181d37bd-d9a7-46fa-9311-ffe09d81e84c) for deployment 'openstack' (UUID=3403b234-76ae-4afb-9d96-49ef2d872069) by verifier 'tempest-verifier' (UUID=4f4db99c-3930-442e-b592-bed5f428814e).
2017-07-31 16:30:36.781 20533 INFO tempest-verifier [-] {0} tempest.api.compute.admin.test_flavors.FlavorsAdminTestJSON.test_create_flavor_using_string_ram ... success [0.772s]
2017-07-31 16:30:42.064 20533 INFO rally.api [-] Verification (UUID=181d37bd-d9a7-46fa-9311-ffe09d81e84c) has been successfully finished for deployment 'openstack' (UUID=3403b234-76ae-4afb-9d96-49ef2d872069)!

======
Totals
======

Ran: 1 tests in 2.734 sec.
 - Success: 1
 - Skipped: 0
 - Expected failures: 0
 - Unexpected success: 0
 - Failures: 0

Using verification (UUID=181d37bd-d9a7-46fa-9311-ffe09d81e84c) as the default verification for the future operations.

Viewing the Results

Results can be exported to formats such as HTML and JSON. HTML is usually the most convenient, since the report can be viewed directly in a browser.

  1. Get the verification UUID
# rally verify list
+--------------------------------------+------+------------------+-----------------+---------------------+---------------------+----------+----------+
| UUID                                 | Tags | Verifier name    | Deployment name | Started at          | Finished at         | Duration | Status   |
+--------------------------------------+------+------------------+-----------------+---------------------+---------------------+----------+----------+
| db55c49c-9316-4353-94db-e0c777831157 | -    | tempest-verifier | openstack       | 2017-07-31T08:37:28 | 2017-07-31T14:40:46 | 6:03:18  | failed   |
+--------------------------------------+------+------------------+-----------------+---------------------+---------------------+----------+----------+

If you have run several verifications there will be multiple records; tell them apart by their timestamps. Each run also prints its UUID when it finishes.

  2. Export the report as an HTML file
# rally verify report --uuid db55c49c-9316-4353-94db-e0c777831157 --type html --to export-name.html
  3. View it in a browser
    As shown below, the report lists every test case, and failing cases include detailed error information.
    (image: HTML verification report)
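Besides html and json, the Rally versions of this era also accept --type junit-xml, which is handy for CI post-processing. A minimal sketch of counting failures in such a file with grep; the two-case report written here is made up for illustration, standing in for a real `rally verify report --uuid <UUID> --type junit-xml --to report.xml` export:

```shell
# Write a made-up two-case JUnit report standing in for the real export.
cat > report.xml <<'EOF'
<testsuite tests="2" failures="1">
  <testcase classname="tempest.api.compute" name="test_ok" time="0.5"/>
  <testcase classname="tempest.api.compute" name="test_bad" time="1.2">
    <failure>AssertionError</failure>
  </testcase>
</testsuite>
EOF

# Each failing case carries a <failure> element; count them.
failed=$(grep -c '<failure' report.xml)
echo "failed=$failed"   # → failed=1
rm -f report.xml
```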

Enabling the Developer Panel and OpenStack Profiler in the DevStack Dashboard

The Ocata release introduced a new "OpenStack Profiler" panel. With the profiler enabled it is easy to see the API calls issued while a Horizon page loads, as shown below:
(image: OpenStack Profiler panel)
The following describes how to enable the profiler. It assumes a working DevStack environment.

Installing MongoDB

Horizon stores the data collected for each API call in MongoDB. MongoDB can be installed locally or on any host reachable from this machine.

  1. Install the packages

    # yum install mongodb-server mongodb -y
  2. Edit the /etc/mongod.conf file and complete the following:
    • Set bind_ip to the local IP address or to 0.0.0.0.
      bind_ip = 192.168.3.222
    • By default, MongoDB creates several 1 GB journal files under /var/lib/mongodb/journal. To shrink each journal file to 128 MB and cap the total journal space at 512 MB, enable smallfiles:
      smallfiles = true
  3. Start MongoDB and configure it to start with the system
    # systemctl enable mongod.service
    # systemctl start mongod.service

Configuring Horizon

  1. Copy the files
    $ cd /opt/stack/horizon
    $ cp openstack_dashboard/contrib/developer/enabled/_9001_developer.py openstack_dashboard/local/enabled/
    $ cp openstack_dashboard/contrib/developer/enabled/_9030_profiler.py openstack_dashboard/local/enabled/
    $ cp openstack_dashboard/contrib/developer/enabled/_9010_preview.py openstack_dashboard/local/enabled/
    $ cp openstack_dashboard/local/local_settings.d/_9030_profiler_settings.py.example openstack_dashboard/local/local_settings.d/_9030_profiler_settings.py
  2. Edit the _9030_profiler_settings.py file and update the MongoDB settings
    Replace the host in the connection strings with the address of your MongoDB server

    $ vim openstack_dashboard/local/local_settings.d/_9030_profiler_settings.py
    
    OPENSTACK_PROFILER.update({
      'enabled': True,
      'keys': ['SECRET_KEY'],
      'notifier_connection_string': 'mongodb://192.168.3.222:27017',
      'receiver_connection_string': 'mongodb://192.168.3.222:27017'
    })
  3. Restart Horizon and log in to the dashboard again. A Profile drop-down menu now appears in the upper-right corner, as shown below:
    (image: Profile drop-down menu)
    To capture the API calls for the current page, click Profile Current Page; the page reloads, and once loading finishes, the OpenStack Profiler page under Developer shows detailed data for the page load.

References:
孔令贤-OpenStack Horizon Profiling
OpenStack Installation Guide for Red Hat Enterprise Linux and CentOS

Passwordless SSH Access Problems

Passwordless SSH login fails

Resizing an instance requires passwordless SSH between compute nodes for the nova user, but one node consistently refused key-based login. Comparing the debug output of a working login against the failing one:

# successful login
debug2: we did not send a packet, disable method
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /var/lib/nova/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 279
# failing login
debug2: we did not send a packet, disable method
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /var/lib/nova/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Trying private key: /var/lib/nova/.ssh/id_dsa
debug3: no such identity: /var/lib/nova/.ssh/id_dsa: No such file or directory
debug1: Trying private key: /var/lib/nova/.ssh/id_ecdsa
debug3: no such identity: /var/lib/nova/.ssh/id_ecdsa: No such file or directory
debug1: Trying private key: /var/lib/nova/.ssh/id_ed25519
debug3: no such identity: /var/lib/nova/.ssh/id_ed25519: No such file or directory
debug2: we did not send a packet, disable method
debug3: authmethod_lookup password
debug3: remaining preferred: ,password
debug3: authmethod_is_enabled password
debug1: Next authentication method: password

Troubleshooting

  1. I found a similar report, CentOS SSH公钥登录问题, where the cause turned out to be SELinux. SELinux was already disabled on my host, so that did not apply here.

  2. Checking the logs with journalctl _COMM=sshd revealed a permissions problem
May 10 17:11:11 compute01 sshd[26498]: pam_systemd(sshd:session): Failed to release session: Interrupted system call
May 10 17:11:11 compute01 sshd[26498]: pam_unix(sshd:session): session closed for user root
May 10 17:12:28 compute01 sshd[2297]: Authentication refused: bad ownership or modes for directory /var/lib/nova
May 10 17:13:09 compute01 sshd[2297]: Connection closed by 192.168.101.105 [preauth]
May 10 17:13:33 compute01 sshd[4103]: Authentication refused: bad ownership or modes for directory /var/lib/nova
May 10 17:25:21 compute01 sshd[23157]: Authentication refused: bad ownership or modes for directory /var/lib/nova
May 10 17:25:25 compute01 sshd[23157]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=compute02  user=nova
  3. Compare the /var/lib/nova permissions with a healthy host
# healthy host
drwxr-xr-x   8 nova    nova     118 May 10 16:59 nova
# failing host
drwxrwxrwx. 11 nova           nova            4096 May 10 17:07 nova
  4. Fix
    After changing the /var/lib/nova directory mode to 755, passwordless login works again
# chmod -R 755 /var/lib/nova/
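The "bad ownership or modes" refusal comes from sshd's StrictModes check: the user's home directory must not be group- or world-writable. A minimal sketch of the before/after permission states, using a scratch directory instead of the real /var/lib/nova:

```shell
# Create a scratch directory standing in for /var/lib/nova.
demo_home=$(mktemp -d)

chmod 777 "$demo_home"              # world-writable: sshd refuses key auth
before=$(stat -c '%a' "$demo_home")

chmod 755 "$demo_home"              # writable by owner only: accepted
after=$(stat -c '%a' "$demo_home")

echo "before=$before after=$after"  # → before=777 after=755
rmdir "$demo_home"
```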

Console Unreachable in an OpenStack HA Deployment

The console could not be reached reliably and only loaded after several refreshes, with nova logging the following errors:

2017-02-09 17:09:51.311 57467 INFO nova.console.websocketproxy [-] 192.168.170.41 - - [09/Feb/2017 17:09:51] "GET /websockify HTTP/1.1" 101 -
2017-02-09 17:09:51.312 57467 INFO nova.console.websocketproxy [-] 192.168.170.41 - - [09/Feb/2017 17:09:51] 192.168.170.41: Plain non-SSL (ws://) WebSocket connection
2017-02-09 17:09:51.313 57467 INFO nova.console.websocketproxy [-] 192.168.170.41 - - [09/Feb/2017 17:09:51] 192.168.170.41: Version hybi-13, base64: 'False'
2017-02-09 17:09:51.313 57467 INFO nova.console.websocketproxy [-] 192.168.170.41 - - [09/Feb/2017 17:09:51] 192.168.170.41: Path: '/websockify'
2017-02-09 17:09:51.382 57467 INFO nova.console.websocketproxy [req-f51929d9-8c9b-4df0-abeb-247ce6ef5d65 - - - - -] handler exception: The token '1dfc9af9-8a49-44b3-a955-5196197bc8f7' is invalid or has expired

Root Cause

When running a multi-node environment with HA between two or more controller nodes (or control-plane service nodes), the nova consoleauth service must be configured with memcached.
If not, no more than one consoleauth service can be running in an active state, since it needs to save the state of the sessions. When memcached is not used, you will notice that refreshing the page only reaches the VNC console some of the time: a connection succeeds only when it happens to be handled by the consoleauth service that issued the session token.
To solve the issue, configure memcached as the backend for the nova-consoleauth service by adding this line to nova.conf:
memcached_servers = 192.168.100.2:11211,192.168.100.3:11211

Fix

On Mitaka, add the memcached_servers option:

# vim /etc/nova/nova.conf

[DEFAULT]
# "memcached_servers" opt is deprecated in Mitaka. In Newton release oslo.cache
# config options should be used as this option will be removed. Please add a
# [cache] group in your nova.conf file and add "enable" and "memcache_servers"
# option in this section. (list value)
memcached_servers=controller01:11211,controller02:11211,controller03:11211

On Newton, memcached_servers is deprecated; configure the [cache] section instead:

[cache]
enabled=true
backend=oslo_cache.memcache_pool
memcache_servers=controller01:11211,controller02:11211,controller03:11211

Quickly Deploying an All-in-One OpenStack with Kolla

The Kolla project containerizes OpenStack, aiming to make deployments of up to 100 nodes work out of the box, with HA for every component. Kolla is a disruptive project: it obsoletes much of the manual-installation experience we had accumulated. With Kolla you can quickly deploy a scalable, reliable, production-ready OpenStack environment.

Environment

OS: CentOS Linux release 7.2.1511 (Core)
Kernel: 3.10.0-327.28.3.el7.x86_64
Docker: Docker version 1.12.1, build 23cf638

Installing Kolla

  1. Install the dependencies

    yum install epel-release python-pip
    yum install -y python-devel libffi-devel openssl-devel gcc
    pip install -U pip
  2. Modify the Docker service unit

    # Create the drop-in unit directory for docker.service
    mkdir -p /etc/systemd/system/docker.service.d
    
    # Create the drop-in unit file
    tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
    [Service]
    MountFlags=shared
    EOF
  3. Restart Docker

    systemctl daemon-reload
    systemctl restart docker
  4. Install the Docker Python bindings

    yum install python-docker-py
    # or
    pip install -U docker-py
  5. Configure time synchronization (omitted here)
  6. Disable libvirt on the host

    systemctl stop libvirtd.service
    systemctl disable libvirtd.service
  7. Install Ansible
    Note that the stable branch of Kolla requires Ansible < 2.0, while the master branch requires Ansible > 2.0. yum installs a version > 2.0 by default, and since I am installing stable/mitaka, I pin the version explicitly.

    pip install -U ansible==1.9.4
  8. Install the stable branch of Kolla
  • Download the source

    git clone https://git.openstack.org/openstack/kolla -b stable/mitaka
  • Install its dependencies

    pip install -r kolla/requirements.txt -r kolla/test-requirements.txt
  • Install from source

    pip install kolla/
  9. Install tox and generate the configuration files

    pip install -U tox
    cd kolla/
    tox -e genconfig
    cp -rv etc/kolla /etc/
  10. Install the Python clients

    yum install python-openstackclient python-neutronclient
  11. Local Docker registry
    A local registry is not required for an all-in-one environment, so none is configured here.

Building the Images

kolla-build

More build options are described in Building Container Images.
If individual images fail to build, simply rerun the command; thanks to Docker's layer cache, the rebuild is fast.
A successful build produces the following images:

# docker images
REPOSITORY                                      TAG                 IMAGE ID            CREATED             SIZE
kolla/centos-binary-heat-engine                 2.0.3               28956cc878d3        20 hours ago        571.4 MB
kolla/centos-binary-heat-api-cfn                2.0.3               d69858fd13fa        20 hours ago        571.4 MB
kolla/centos-binary-heat-api                    2.0.3               90a92ca6b71a        20 hours ago        571.4 MB
kolla/centos-binary-heat-base                   2.0.3               8f1cf8a1f536        21 hours ago        551.6 MB
kolla/centos-binary-neutron-openvswitch-agent   2.0.3               e7d0233ca541        21 hours ago        822.3 MB
kolla/centos-binary-neutron-base                2.0.3               8767569ca9b3        21 hours ago        796.7 MB
kolla/centos-binary-openvswitch-vswitchd        2.0.3               6867586ae335        21 hours ago        330.6 MB
kolla/centos-binary-openvswitch-db-server       2.0.3               3c692f316662        21 hours ago        330.6 MB
kolla/centos-binary-openvswitch-base            2.0.3               c3a263463f8f        21 hours ago        330.6 MB
kolla/centos-binary-cron                        2.0.3               d16d53e85ed9        26 hours ago        317.5 MB
kolla/centos-binary-kolla-toolbox               2.0.3               1fd9634b88ee        26 hours ago        568.4 MB
kolla/centos-binary-heka                        2.0.3               627a3de5e91c        26 hours ago        371.1 MB
kolla/centos-binary-neutron-metadata-agent      2.0.3               aad43ed7a5a1        42 hours ago        796.7 MB
kolla/centos-binary-neutron-server              2.0.3               bc1a7c0ec402        42 hours ago        796.7 MB
kolla/centos-binary-nova-compute                2.0.3               619344ac721b        42 hours ago        1.055 GB
kolla/centos-binary-nova-libvirt                2.0.3               6144729fff5f        42 hours ago        1.106 GB
kolla/centos-binary-neutron-linuxbridge-agent   2.0.3               720c9c5fa63d        42 hours ago        822 MB
kolla/centos-binary-neutron-l3-agent            2.0.3               3a82df7cb9c2        42 hours ago        796.7 MB
kolla/centos-binary-glance-api                  2.0.3               fb67115357d5        42 hours ago        673.8 MB
kolla/centos-binary-neutron-dhcp-agent          2.0.3               8c6fa56497ca        42 hours ago        796.7 MB
kolla/centos-binary-nova-compute-ironic         2.0.3               6f235dc430e5        43 hours ago        1.019 GB
kolla/centos-binary-glance-registry             2.0.3               f4cf7bc1536f        43 hours ago        673.8 MB
kolla/centos-binary-cinder-volume               2.0.3               0197cc13468d        43 hours ago        788.4 MB
kolla/centos-binary-cinder-api                  2.0.3               ed7c623e7364        43 hours ago        800.4 MB
kolla/centos-binary-cinder-rpcbind              2.0.3               75466dc5a3ba        43 hours ago        790.2 MB
kolla/centos-binary-horizon                     2.0.3               92c7ea9fc493        43 hours ago        703.1 MB
kolla/centos-binary-cinder-backup               2.0.3               e3ee19440831        43 hours ago        761.3 MB
kolla/centos-binary-cinder-scheduler            2.0.3               e3ee19440831        43 hours ago        761.3 MB
kolla/centos-binary-nova-consoleauth            2.0.3               96a9638801cd        43 hours ago        609.6 MB
kolla/centos-binary-nova-api                    2.0.3               eff73f704a90        43 hours ago        609.4 MB
kolla/centos-binary-nova-conductor              2.0.3               6016ae01a60d        43 hours ago        609.4 MB
kolla/centos-binary-nova-scheduler              2.0.3               726f100a5533        43 hours ago        609.4 MB
kolla/centos-binary-nova-spicehtml5proxy        2.0.3               c6a1a49e4226        43 hours ago        609.9 MB
kolla/centos-binary-glance-base                 2.0.3               1e4efa0f6701        43 hours ago        673.8 MB
kolla/centos-binary-nova-network                2.0.3               87f6389dd11a        43 hours ago        610.4 MB
kolla/centos-binary-ironic-pxe                  2.0.3               82f25f73c28f        43 hours ago        574.2 MB
kolla/centos-binary-nova-novncproxy             2.0.3               4726875ed228        43 hours ago        610.1 MB
kolla/centos-binary-nova-ssh                    2.0.3               51c70b9e9c47        43 hours ago        610.4 MB
kolla/centos-binary-cinder-base                 2.0.3               7c2d031be713        43 hours ago        761.3 MB
kolla/centos-binary-keystone                    2.0.3               c51a93cc9e2e        43 hours ago        585.2 MB
kolla/centos-binary-ironic-api                  2.0.3               b1771f5cc27f        43 hours ago        570.6 MB
kolla/centos-binary-ironic-inspector            2.0.3               32f4e33e1037        43 hours ago        576.2 MB
kolla/centos-binary-ironic-conductor            2.0.3               d552c64f3a08        43 hours ago        599 MB
kolla/centos-binary-nova-base                   2.0.3               8f077fafc5d8        43 hours ago        588.7 MB
kolla/centos-binary-rabbitmq                    2.0.3               d9e543e4f179        43 hours ago        370.3 MB
kolla/centos-binary-ironic-base                 2.0.3               6c4c453ddbce        43 hours ago        550.8 MB
kolla/centos-binary-openstack-base              2.0.3               cf48d5b3f3ee        43 hours ago        518.2 MB
kolla/centos-binary-mariadb                     2.0.3               cd9b363fe034        43 hours ago        630.5 MB
kolla/centos-binary-memcached                   2.0.3               49c536466427        43 hours ago        354.6 MB
kolla/centos-binary-base                        2.0.3               d04ac1ecd01a        43 hours ago        300 MB
centos                                          latest              980e0e4c79ec        2 days ago          196.7 MB

Deploying the Containers

  1. Generate passwords
    Passwords and other variables for the OpenStack environment are set in /etc/kolla/passwords.yml. The kolla-genpwd tool conveniently generates strong passwords for all of them.

    kolla-genpwd

    For convenience, change the admin login password:

    vim /etc/kolla/passwords.yml
    keystone_admin_password: admin
  2. Edit the deployment configuration
    Set the deployment details in /etc/kolla/globals.yml

    vim /etc/kolla/globals.yml
    
    kolla_base_distro: "centos"
    kolla_install_type: "binary"
    enable_haproxy: "no"
    #kolla_internal_vip_address: "10.10.10.254"
    kolla_internal_address: "192.168.2.120"
    network_interface: "ens160"
    neutron_external_interface: "ens192"
    neutron_plugin_agent: "openvswitch"
    openstack_logging_debug: "True"
  3. Run the prechecks

    kolla-ansible prechecks
  4. Deploy

    kolla-ansible deploy

    After the deployment succeeds, list the running containers

    # docker ps
    CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
    3938136934cf        kolla/centos-binary-horizon:2.0.3                     "kolla_start"            17 hours ago        Up 17 hours                             horizon
    cc68cb8d96e4        kolla/centos-binary-heat-engine:2.0.3                 "kolla_start"            17 hours ago        Up 17 hours                             heat_engine
    96c94995ef7c        kolla/centos-binary-heat-api-cfn:2.0.3                "kolla_start"            17 hours ago        Up 17 hours                             heat_api_cfn
    cb8ae3afb767        kolla/centos-binary-heat-api:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             heat_api
    e8f98659e03f        kolla/centos-binary-neutron-metadata-agent:2.0.3      "kolla_start"            17 hours ago        Up 17 hours                             neutron_metadata_agent
    d326fa732c2b        kolla/centos-binary-neutron-l3-agent:2.0.3            "kolla_start"            17 hours ago        Up 17 hours                             neutron_l3_agent
    4b1bbbe4fe5b        kolla/centos-binary-neutron-dhcp-agent:2.0.3          "kolla_start"            17 hours ago        Up 17 hours                             neutron_dhcp_agent
    88b2afbba5d9        kolla/centos-binary-neutron-openvswitch-agent:2.0.3   "kolla_start"            17 hours ago        Up 17 hours                             neutron_openvswitch_agent
    b73d52de75b2        kolla/centos-binary-neutron-server:2.0.3              "kolla_start"            17 hours ago        Up 17 hours                             neutron_server
    1c716402d95f        kolla/centos-binary-openvswitch-vswitchd:2.0.3        "kolla_start"            17 hours ago        Up 17 hours                             openvswitch_vswitchd
    176e7ee659f1        kolla/centos-binary-openvswitch-db-server:2.0.3       "kolla_start"            17 hours ago        Up 17 hours                             openvswitch_db
    457e0921c61a        kolla/centos-binary-nova-ssh:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             nova_ssh
    b02acebb3dc3        kolla/centos-binary-nova-compute:2.0.3                "kolla_start"            17 hours ago        Up 17 hours                             nova_compute
    59be78a597d8        kolla/centos-binary-nova-libvirt:2.0.3                "kolla_start"            17 hours ago        Up 17 hours                             nova_libvirt
    668ad8f91920        kolla/centos-binary-nova-conductor:2.0.3              "kolla_start"            17 hours ago        Up 17 hours                             nova_conductor
    34f81b4bc18b        kolla/centos-binary-nova-scheduler:2.0.3              "kolla_start"            17 hours ago        Up 17 hours                             nova_scheduler
    eb47844e6547        kolla/centos-binary-nova-novncproxy:2.0.3             "kolla_start"            17 hours ago        Up 17 hours                             nova_novncproxy
    93563016cf21        kolla/centos-binary-nova-consoleauth:2.0.3            "kolla_start"            17 hours ago        Up 17 hours                             nova_consoleauth
    cc8a1cca2e98        kolla/centos-binary-nova-api:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             nova_api
    40db89e89758        kolla/centos-binary-glance-api:2.0.3                  "kolla_start"            17 hours ago        Up 17 hours                             glance_api
    4fa5f0f38f0d        kolla/centos-binary-glance-registry:2.0.3             "kolla_start"            17 hours ago        Up 17 hours                             glance_registry
    f05120c95a9f        kolla/centos-binary-keystone:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             keystone
    149a49d57aa6        kolla/centos-binary-rabbitmq:2.0.3                    "kolla_start"            17 hours ago        Up 17 hours                             rabbitmq
    5f4298c3821e        kolla/centos-binary-mariadb:2.0.3                     "kolla_start"            17 hours ago        Up 17 hours                             mariadb
    64f6fbb19892        kolla/centos-binary-cron:2.0.3                        "kolla_start"            17 hours ago        Up 17 hours                             cron
    4cab0e756b61        kolla/centos-binary-kolla-toolbox:2.0.3               "/usr/local/bin/dumb-"   17 hours ago        Up 17 hours                             kolla_toolbox
    293a7ccaab52        kolla/centos-binary-heka:2.0.3                        "kolla_start"            17 hours ago        Up 17 hours                             heka
    6dcf3a2c12cc        kolla/centos-binary-memcached:2.0.3                   "kolla_start"            17 hours ago        Up 17 hours                             memcached
  5. Modify the virtualization type
    Because the installation runs inside a virtual machine, KVM is not supported, so the virtualization type must be changed to qemu:

    vim /etc/kolla/nova-compute/nova.conf
    
    [libvirt]
    ...
    virt_type=qemu
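A quick way to decide whether this change is needed is to check the host CPU for hardware-virtualization flags. This is a generic Linux check, not a Kolla command:

```shell
# Check for Intel VT-x (vmx) or AMD-V (svm) CPU flags.
# If neither is present (common inside a VM without nested
# virtualization), nova-compute must run with virt_type=qemu.
if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
    echo "hardware virtualization available: virt_type=kvm"
else
    echo "no hardware virtualization: virt_type=qemu"
fi
```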

You can then access the OpenStack environment via the kolla_internal_address.

Useful tools

  1. After the deployment finishes, run the following command to generate an openrc file (the environment variables needed to run the OpenStack CLI):

    kolla-ansible post-deploy
  2. Once the openrc file has been generated, the following commands perform some initial setup for you, including uploading a Glance image and creating a few virtual networks:

    source /etc/kolla/admin-openrc.sh
    kolla/tools/init-runonce
  3. Errors may force you to deploy several times, and some errors are not corrected by simply redeploying, so the whole environment needs to be cleaned up first:

    tools/cleanup-containers                # remove the deployed containers from the system
    tools/cleanup-host                      # remove residual host-side state (e.g. network changes left by the Docker-launched neutron agents)
    tools/cleanup-images                    # remove all Docker images from the local cache
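Putting the pieces together, a clean-redeploy cycle after a failed run might look like the following sketch. It assumes the kolla source tree is checked out at `~/kolla`; adjust paths to your setup (this fragment only makes sense against an existing Kolla host, so it is not independently runnable):

```shell
# Hypothetical clean-redeploy cycle after a failed deployment.
cd ~/kolla
tools/cleanup-containers    # stop and remove all kolla containers
tools/cleanup-host          # undo residual neutron-agent changes on the host
kolla-ansible prechecks     # re-validate the environment before retrying
kolla-ansible deploy        # redeploy from scratch
```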

Viewing logs

Kolla collects the logs of all containers via the heka container:

docker exec -it heka bash

Inside the heka container, the service logs of every container can be found under /var/log/kolla/SERVICE_NAME.
To print the stdout log of a container, run:

docker logs <container_name>

Note that most containers do not write to stdout, so the command above will show nothing for them.
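For example, to follow a specific service log without opening a shell in the heka container, you can exec `tail` directly. The exact path is an assumption for illustration (the real path depends on the service layout under /var/log/kolla/), and the command only works on a deployed Kolla host:

```shell
# Hypothetical example: follow the nova-api log collected by heka.
docker exec -it heka tail -f /var/log/kolla/nova/nova-api.log
```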

Troubleshooting

The following error was encountered during deploy:

TASK: [rabbitmq | fail msg="Hostname has to resolve to IP address of api_interface"] ***
failed: [localhost] => (item={'cmd': ['getent', 'ahostsv4', 'localhost'], 'end': '2016-06-24 04:51:39.738725', 'stderr': u'', 'stdout': '127.0.0.1       STREAM localhost\n127.0.0.1       DGRAM  \n127.0.0.1       RAW    \n127.0.0.1       STREAM \n127.0.0.1       DGRAM  \n127.0.0.1       RAW    ', 'changed': False, 'rc': 0, 'item': 'localhost', 'warnings': [], 'delta': '0:00:00.033351', 'invocation': {'module_name': u'command', 'module_complex_args': {}, 'module_args': u'getent ahostsv4 localhost'}, 'stdout_lines': ['127.0.0.1       STREAM localhost', '127.0.0.1       DGRAM  ', '127.0.0.1       RAW    ', '127.0.0.1       STREAM ', '127.0.0.1       DGRAM  ', '127.0.0.1       RAW    '], 'start': '2016-06-24 04:51:39.705374'}) => {"failed": true, "item": {"changed": false, "cmd": ["getent", "ahostsv4", "localhost"], "delta": "0:00:00.033351", "end": "2016-06-24 04:51:39.738725", "invocation": {"module_args": "getent ahostsv4 localhost", "module_complex_args": {}, "module_name": "command"}, "item": "localhost", "rc": 0, "start": "2016-06-24 04:51:39.705374", "stderr": "", "stdout": "127.0.0.1       STREAM localhost\n127.0.0.1       DGRAM  \n127.0.0.1       RAW    \n127.0.0.1       STREAM \n127.0.0.1       DGRAM  \n127.0.0.1       RAW    ", "stdout_lines": ["127.0.0.1       STREAM localhost", "127.0.0.1       DGRAM  ", "127.0.0.1       RAW    ", "127.0.0.1       STREAM ", "127.0.0.1       DGRAM  ", "127.0.0.1       RAW    "], "warnings": []}}
msg: Hostname has to resolve to IP address of api_interface

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
to retry, use: --limit @/root/site.retry

localhost                  : ok=87   changed=24   unreachable=0    failed=1

Solution: add an /etc/hosts entry so that the hostname resolves to the api_interface IP:

vim /etc/hosts
127.0.0.1     localhost
192.168.2.120 localhost
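After editing /etc/hosts, you can re-run the same lookup the Ansible precheck performs to confirm the hostname now resolves to the api_interface address (192.168.2.120 in this example) and not only to 127.0.0.1:

```shell
# Re-run the resolution check used by the rabbitmq precheck and
# print the unique addresses the hostname resolves to.
getent ahostsv4 localhost | awk '{ print $1 }' | sort -u
```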