
Quickly deploying Ceph with Docker

System environment

  • At least three virtual or physical machines are needed; VMs are used here
  • Each VM needs at least two disks (one system disk, one for OSDs); in this example each VM has three disks
  1. Deployment workflow (the blog's Markdown parser does not support flowcharts, so an image is used instead)
    liuchengtu.png

  2. Host plan
    biaoge.png

Installing Docker

Log in to https://cr.console.aliyun.com/#/accelerator to get your own Aliyun Docker accelerator (registry mirror) URL

  1. Install or upgrade the Docker engine
# curl -sSL http://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/docker-engine/internet | sh -
  2. Configure the Docker accelerator
    Enable the accelerator by editing the daemon config file /etc/docker/daemon.json; be sure to substitute your own accelerator URL
# mkdir -p /etc/docker
# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://******.mirror.aliyuncs.com"]
}
EOF
# systemctl daemon-reload
# systemctl restart docker
# systemctl enable docker
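Before restarting the daemon, it is worth checking that the JSON you wrote is well-formed, since a malformed /etc/docker/daemon.json will stop Docker from starting. A minimal sketch (it writes to a temp directory for illustration and assumes python3 is available for the JSON check):

```shell
#!/bin/sh
# Sketch: write the mirror config and sanity-check the JSON before
# restarting Docker. A temp dir is used here for illustration; on a
# real host the target is /etc/docker/daemon.json.
set -e
DIR=$(mktemp -d)
tee "$DIR/daemon.json" <<-'EOF'
{
  "registry-mirrors": ["https://******.mirror.aliyuncs.com"]
}
EOF
# python -m json.tool exits non-zero on malformed JSON
python3 -m json.tool < "$DIR/daemon.json" > /dev/null && echo "daemon.json OK"
```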

Starting the MONs

  1. Pull the ceph/daemon image
# docker pull ceph/daemon
  2. Start the first mon
    Start the first mon on node01; adjust MON_IP to your environment
# docker run -d \
        --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph/:/var/lib/ceph/ \
        -e MON_IP=192.168.3.123 \
        -e CEPH_PUBLIC_NETWORK=192.168.3.0/24 \
        ceph/daemon mon

Check the container

# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS               NAMES
b79a02c40296        ceph/daemon         "/entrypoint.sh mon"   About a minute ago   Up About a minute                       sad_shannon

Check the cluster status

# docker exec b79a02 ceph -s
    cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
     health HEALTH_ERR
            no osds
     monmap e2: 1 mons at {node01=192.168.3.123:6789/0}
            election epoch 4, quorum 0 node01
        mgr no daemons active 
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
  3. Copy the configuration files
    Copy the configuration from node01 to node02 and node03; the copied paths include everything under /etc/ceph and /var/lib/ceph/bootstrap-*.
# ssh root@node2 mkdir -p /var/lib/ceph
# scp -r /etc/ceph root@node2:/etc
# scp -r /var/lib/ceph/bootstrap* root@node2:/var/lib/ceph

# ssh root@node3 mkdir -p /var/lib/ceph
# scp -r /etc/ceph root@node3:/etc
# scp -r /var/lib/ceph/bootstrap* root@node3:/var/lib/ceph
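The per-node commands above can be condensed into a loop. A dry-run sketch that only prints the commands (remove the `echo` to actually execute; the node names follow the commands above):

```shell
#!/bin/sh
# Sketch: distribute the Ceph config from node01 to the remaining nodes.
# Dry run: each command is echoed instead of executed; drop "echo" to
# run them for real.
for node in node2 node3; do
    echo ssh root@"$node" mkdir -p /var/lib/ceph
    echo scp -r /etc/ceph root@"$node":/etc
    echo scp -r /var/lib/ceph/bootstrap* root@"$node":/var/lib/ceph
done
```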
  4. Start the second and third mons
    Run the following on node02 to start a mon; adjust MON_IP
# docker run -d \
        --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph/:/var/lib/ceph/ \
        -e MON_IP=192.168.3.124 \
        -e CEPH_PUBLIC_NETWORK=192.168.3.0/24 \
        ceph/daemon mon

Run the following on node03 to start a mon; adjust MON_IP

# docker run -d \
        --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph/:/var/lib/ceph/ \
        -e MON_IP=192.168.3.125 \
        -e CEPH_PUBLIC_NETWORK=192.168.3.0/24 \
        ceph/daemon mon

Check the cluster status on node01

# docker exec b79a02 ceph -s
    cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e4: 3 mons at {node01=192.168.3.123:6789/0,node02=192.168.3.124:6789/0,node03=192.168.3.125:6789/0}
            election epoch 12, quorum 0,1,2 node01,node02,node03
        mgr no daemons active 
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

All three mons have started correctly

Starting the OSDs

Each VM has two disks prepared as OSDs; add each one to the cluster, adjusting the device name as needed

# docker run -d \
        --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph/:/var/lib/ceph/ \
        -v /dev/:/dev/ \
        --privileged=true \
        -e OSD_FORCE_ZAP=1 \
        -e OSD_DEVICE=/dev/sdb \
        ceph/daemon osd_ceph_disk
# docker run -d \
        --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph/:/var/lib/ceph/ \
        -v /dev/:/dev/ \
        --privileged=true \
        -e OSD_FORCE_ZAP=1 \
        -e OSD_DEVICE=/dev/sdc \
        ceph/daemon osd_ceph_disk

Add sdb and sdc on node02 and node03 to the cluster in the same way
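Since the two container invocations differ only in OSD_DEVICE, the disks on each node can be looped over. A dry-run sketch that prints the docker command for each disk (remove the `echo` to execute):

```shell
#!/bin/sh
# Sketch: start one OSD container per data disk on the current node.
# Dry run: the docker command is echoed rather than executed.
for dev in /dev/sdb /dev/sdc; do
    echo docker run -d \
        --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph/:/var/lib/ceph/ \
        -v /dev/:/dev/ \
        --privileged=true \
        -e OSD_FORCE_ZAP=1 \
        -e OSD_DEVICE="$dev" \
        ceph/daemon osd_ceph_disk
done
```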

Check the cluster status

# docker exec b79a ceph -s
    cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
     health HEALTH_OK
     monmap e4: 3 mons at {node01=192.168.3.123:6789/0,node02=192.168.3.124:6789/0,node03=192.168.3.125:6789/0}
            election epoch 12, quorum 0,1,2 node01,node02,node03
        mgr no daemons active 
     osdmap e63: 6 osds: 6 up, 6 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v157: 64 pgs, 1 pools, 0 bytes data, 0 objects
            212 MB used, 598 GB / 599 GB avail
                  64 active+clean

The mons and OSDs are now configured correctly, and the cluster status is HEALTH_OK
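The same check can be scripted by parsing the osdmap line of `ceph -s`. A sketch using awk, fed here with the sample output above; on a live cluster, pipe `docker exec <mon-container> ceph -s` into the same awk program:

```shell
#!/bin/sh
# Sketch: verify all OSDs are up and in by parsing the "osdmap" line
# of `ceph -s` output. Fed from a heredoc here; on a real cluster use:
#   docker exec <mon-container> ceph -s | awk '...'
awk '/osdmap/ {
    total = $3 + 0   # field 3: total osd count ("6 osds:")
    up    = $5 + 0   # field 5: osds up         ("6 up,")
    in_   = $7 + 0   # field 7: osds in         ("6 in")
    if (total > 0 && up == total && in_ == total)
        print "all " total " osds up and in"
    else
        print "osd problem: " up "/" total " up, " in_ "/" total " in"
}' <<'EOF'
     osdmap e63: 6 osds: 6 up, 6 in
EOF
```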

Creating the MDS

Start the mds on node01 with the following command

# docker run -d \
        --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph/:/var/lib/ceph/ \
        -e CEPHFS_CREATE=1 \
        ceph/daemon mds

Starting the RGW and mapping port 80

Start the rgw on node01 with the following command, binding port 80

# docker run -d \
        -p 80:80 \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph/:/var/lib/ceph/ \
        ceph/daemon rgw

Final cluster state

# docker exec b79a02 ceph -s
    cluster 96ae62d2-2249-4173-9dee-3a7215cba51c
     health HEALTH_OK
     monmap e4: 3 mons at {node01=192.168.3.123:6789/0,node02=192.168.3.124:6789/0,node03=192.168.3.125:6789/0}
            election epoch 12, quorum 0,1,2 node01,node02,node03
      fsmap e5: 1/1/1 up {0=mds-node01=up:active}
        mgr no daemons active 
     osdmap e136: 6 osds: 6 up, 6 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v1460: 136 pgs, 10 pools, 3829 bytes data, 223 objects
            254 MB used, 598 GB / 599 GB avail
                 136 active+clean

References:
使用Docker部署Ceph
Demo: running Ceph in Docker containers

Using the nova-docker driver with OpenStack

I. Install Docker and configure the Aliyun registry mirror

Perform the following on both the controller and compute nodes (Docker is installed on the controller to make it easy to pull images and import them directly into Glance)
1. Create the yum repo file (using the Aliyun mirror)

# tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/docker-engine/yum/gpg
EOF

2. Install Docker

# yum install docker-engine

3. Configure Docker to use the Aliyun registry mirror

# cp -n /lib/systemd/system/docker.service /etc/systemd/system/docker.service
# sed -i "s|ExecStart=/usr/bin/dockerd|ExecStart=/usr/bin/dockerd --registry-mirror=https://dhxb****.mirror.aliyuncs.com|g" /etc/systemd/system/docker.service
# systemctl daemon-reload

The URL https://dhxb****.mirror.aliyuncs.com above is my accelerator address; to get your own, see Aliyun: https://cr.console.aliyun.com/#/accelerator
4. Start Docker and enable it at boot

# systemctl enable docker
# systemctl start docker

II. Install and configure nova-docker on the compute node

1. Install nova-docker

# usermod -aG docker nova
# yum -y install git python-pip
# pip install -e git+https://github.com/openstack/nova-docker#egg=novadocker
# cd src/novadocker/
# python setup.py install

2. Configure /etc/nova/nova.conf to use the Docker driver

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

[docker]
# Commented out. Uncomment these if you'd like to customize:
## vif_driver=novadocker.virt.docker.vifs.DockerGenericVIFDriver
## snapshots_directory=/var/tmp/my-snapshot-tempdir

Copy the src/novadocker/etc/nova/rootwrap.d/docker.filters file to /etc/nova/rootwrap.d/docker.filters, adjust the ownership of rootwrap.d, and then restart the nova-compute service

# cp -R /src/novadocker/etc/nova/rootwrap.d /etc/nova/
# chown -R root:nova /etc/nova/rootwrap.d
# systemctl restart openstack-nova-compute

III. Upload an image to Glance

1. Enable the docker container format in the Glance config file

# vim /etc/glance/glance-api.conf
[image_format]
container_formats = ami,ari,aki,bare,ovf,docker

2. Restart the glance-api service

# openstack-service restart glance

3. Pull a Docker image and upload it to Glance

# docker pull cirros
# docker save cirros | glance image-create --container-format=docker --disk-format=raw --name cirros
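If more than one image needs importing, the pull/save/upload pipeline can be looped. A dry-run sketch that only prints the commands (remove the `echo` to execute; the image names here are just examples):

```shell
#!/bin/sh
# Sketch: import several Docker images into Glance, keeping the Glance
# image name identical to the Docker image name (required, see the
# troubleshooting notes below). Dry run: commands are echoed.
for img in cirros busybox; do
    echo docker pull "$img"
    echo "docker save $img | glance image-create --container-format=docker --disk-format=raw --name $img"
done
```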

IV. Create a Docker instance

Create the instance

# nova boot --image cirros --flavor m1.tiny --nic net-id=59cc6a1d-0cc1-44c7-8b0a-9dc071fde397 cirros-docker

Check the container with the docker command

# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
dc6e1c21887d        cirros              "/sbin/init"           47 minutes ago      Up 47 minutes                           nova-bfeeb788-7fdf-476f-904a-8cc8ee3eb81c

Note: the instance console in the dashboard does not work

Problems encountered

After switching to the Docker driver, the nova-compute log can be viewed in /var/log/messages
1. Restarting the nova-compute service fails

……
Aug 08 12:14:51 compute2 nova-compute[21233]: 2016-08-08 12:14:51.388 21233 ERROR nova.virt.driver File "/usr/lib/python2.7/site-packages/oslo_config
Aug 08 12:14:51 compute2 nova-compute[21233]: 2016-08-08 12:14:51.388 21233 ERROR nova.virt.driver __import__(module_str)
Aug 08 12:14:51 compute2 nova-compute[21233]: 2016-08-08 12:14:51.388 21233 ERROR nova.virt.driver ImportError: No module named conf.netconf

Solution:

# cd src/novadocker/
# git checkout -b stable/liberty origin/stable/liberty
# python setup.py install

After this, the nova-compute service starts normally

2. Error reported when creating an instance

404 Client Error: Not Found ("No such image: cirros-docker")]

Solution: the image name used when uploading to Glance must match the Docker image name exactly; otherwise creating an instance fails with the error above

3. Network namespace permission error when starting an instance

Aug 8 14:12:59 compute2 nova-compute: 2016-08-08 14:12:59.200 12444 ERROR nova.compute.manager [instance: 3608b187-fe0c-4554-aa96-d5ed630042bc] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip netns exec ee27f11ab9dc265ad864dbcb8b9a800693fd9517f0bcfa166e3ccae66c300843 ip link set lo up
Aug 8 14:12:59 compute2 nova-compute: 2016-08-08 14:12:59.200 12444 ERROR nova.compute.manager [instance: 3608b187-fe0c-4554-aa96-d5ed630042bc] Exit code: 1
Aug 8 14:12:59 compute2 nova-compute: 2016-08-08 14:12:59.200 12444 ERROR nova.compute.manager [instance: 3608b187-fe0c-4554-aa96-d5ed630042bc] Stdout: u''
Aug 8 14:12:59 compute2 nova-compute: 2016-08-08 14:12:59.200 12444 ERROR nova.compute.manager [instance: 3608b187-fe0c-4554-aa96-d5ed630042bc] Stderr: u'Cannot open network namespace "ee27f11ab9dc265ad864dbcb8b9a800693fd9517f0bcfa166e3ccae66c300843": Permission denied\n'

Solution: disable SELinux

# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# reboot

References:
http://blog.csdn.net/zhangli_perdue/article/details/50155705
https://github.com/openstack/nova-docker
http://heavenkong.blogspot.com/2016/07/resolved-mitaka-novadocker-error.html

Installing and configuring kube-ui on Kubernetes

Continuing from the previous post: installing and configuring Kubernetes on CentOS 7

  1. Download and import the kube-ui image
    Google's registry is blocked, so the image cannot be pulled; download it manually instead (download: kube-ui_v3.tar) and import it on each minion:
    docker load < kube-ui_v3.tar

     
  2. Create the kube-system namespace
    Create kube-system.json with the following content:
    {
      "kind": "Namespace",
      "apiVersion": "v1",
      "metadata": {
        "name": "kube-system"
      }
    }

    Run the following command to create the namespace
    # kubectl create -f kube-system.json
    # kubectl get namespace
    NAME          LABELS    STATUS
    default       <none>    Active
    kube-system   <none>    Active
    

     
  3. Create the RC
    Create the kube-ui-rc.yaml file with the following content
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: kube-ui-v3
      namespace: kube-system
      labels:
        k8s-app: kube-ui
        version: v3
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: 3
      selector:
        k8s-app: kube-ui
        version: v3
      template:
        metadata:
          labels:
            k8s-app: kube-ui
            version: v3
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: kube-ui
            image: gcr.io/google_containers/kube-ui:v3
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
            ports:
            - containerPort: 8080
            livenessProbe:
              httpGet:
                path: /
                port: 8080
              initialDelaySeconds: 30
              timeoutSeconds: 5
    

    Run the following command to create the RC, then check it
    # kubectl create -f kube-ui-rc.yaml
    
    # kubectl get rc --all-namespaces
    NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS
    kube-system   kube-ui-v3   kube-ui        gcr.io/google_containers/kube-ui:v3   k8s-app=kube-ui,version=v3   3
    

     
  4. Create the service
    Create the kube-ui-svc.yaml file with the following content
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-ui
      namespace: kube-system
      labels:
        k8s-app: kube-ui
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "KubeUI"
    spec:
      selector:
        k8s-app: kube-ui
      ports:
      - port: 80
        targetPort: 8080

    Run the following commands to create the service, then check the services and pods
    # kubectl create -f kube-ui-svc.yaml
    # kubectl get rc,pods --all-namespaces
    NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS
    kube-system   kube-ui-v3   kube-ui        gcr.io/google_containers/kube-ui:v3   k8s-app=kube-ui,version=v3   3
    NAMESPACE     NAME               READY     STATUS    RESTARTS   AGE
    kube-system   kube-ui-v3-0zyjp   1/1       Running   0          21h
    kube-system   kube-ui-v3-6s1d0   1/1       Running   0          21h
    kube-system   kube-ui-v3-i0uqs   1/1       Running   0          21h
    

    The kube-ui service has been created successfully and is running 3 replicas
  5. Configure the flannel network on the master so it can reach the minions
    Install flannel on the master and start it
    # yum install flannel -y
    # systemctl enable flanneld
    # systemctl start flanneld
  6. Access kube-ui
    Visiting http://master_ip:8080/ui/ redirects automatically to http://kube-ui:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/, the kube-ui dashboard page, shown in the screenshot below:
    kube-ui

    It shows minion system information, pods, RCs, services, and more

Problems encountered installing Kubernetes

1. Error from server: namespaces "kube-system" not found

Error from server: namespaces "kube-system" not found

Solution:

# vim kube-system.json
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "kube-system"
  }
}
# kubectl create -f kube-system.json

2. Unable to generate self signed cert: mkdir /var/run/kubernetes: permission denied

Aug 12 11:07:05 master kube-apiserver[5336]: E0812 11:07:05.063837    5336 genericapiserver.go:702] Unable to generate self signed cert: mkdir /var/run/kubernetes: permission denied
Aug 12 11:07:05 master kube-apiserver[5336]: I0812 11:07:05.063915    5336 genericapiserver.go:734] Serving insecurely on 0.0.0.0:8080
Aug 12 11:07:05 master systemd[1]: Started Kubernetes API Server.
Aug 12 11:07:05 master kube-apiserver[5336]: E0812 11:07:05.064151    5336 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.

Solution:

# mkdir -p /var/run/kubernetes/
# chown -R kube.kube /var/run/kubernetes/
# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do 
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

3. Download google_containers images (download on the minions)

Add the following entries to the hosts file

# vim /etc/hosts
220.255.2.153 www.gcr.io
220.255.2.153 gcr.io
# docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1

4. no API token found for service account kube-system/default

Error creating: pods "kubernetes-dashboard-1881024876-" is forbidden: no API token found for service account kube-system/default,

Solution: remove SecurityContextDeny and ServiceAccount from KUBE_ADMISSION_CONTROL in /etc/kubernetes/apiserver, then restart the kube-apiserver.service service

# vim /etc/kubernetes/apiserver
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# systemctl restart kube-apiserver.service
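The manual edit can also be scripted with sed. A sketch that strips the two entries (run on a scratch copy here; point CONF at /etc/kubernetes/apiserver on a real master, and back the file up first):

```shell
#!/bin/sh
# Sketch: remove SecurityContextDeny and ServiceAccount from the
# KUBE_ADMISSION_CONTROL list. A scratch copy is used here; on a real
# master set CONF=/etc/kubernetes/apiserver.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
EOF
# delete the two entries (each together with its trailing comma)
sed -i -e 's/SecurityContextDeny,//; s/ServiceAccount,//' "$CONF"
grep KUBE_ADMISSION_CONTROL "$CONF"
```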

5. Get http://localhost:8080/version: dial tcp 202.102.110.203:8080: getsockopt: connection refused

# docker logs b7cff1accc06
Starting HTTP server on port 9090
Creating API server client for http://localhost:8080
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get http://localhost:8080/version: dial tcp 202.102.110.203:8080: getsockopt: connection refused

Delete the old, failed kubernetes-dashboard

# kubectl delete -f kubernetes-dashboard.yaml

Edit the kubernetes-dashboard.yaml file to add the following line

# vim kubernetes-dashboard.yaml
        ports:
        - containerPort: 9090
          protocol: TCP 
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
          - --apiserver-host=http://192.168.2.247:8080    ## add this line to specify the apiserver address

Recreate the kubernetes-dashboard

# kubectl create -f kubernetes-dashboard.yaml

6. kubernetes-dashboard cannot be reached from a browser

Error: 'dial tcp 172.17.97.3:9090: i/o timeout'
Trying to reach: 'http://172.17.97.3:9090/'

Install flannel on the master

# yum install -y flannel

Edit the flannel config file and start the service

# vim /etc/sysconfig/flanneld

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://192.168.2.247:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"                                                                                                                                                                                       

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

# systemctl enable flanneld.service ; systemctl start flanneld.service

Problems configuring a local Docker registry

For the setup procedure itself, see: http://dockerpool.com/static/books/docker_practice/repository/local_repo.html

Some problems came up while running the following command; they are recorded below:

pip install docker-registry
  • ERROR 1
    Searching for M2Crypto==0.22.3
    Reading https://pypi.python.org/simple/M2Crypto/
    Best match: M2Crypto 0.22.3
    Downloading https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz#md5=573f21aaac7d5c9549798e72ffcefedd
    Processing M2Crypto-0.22.3.tar.gz
    Writing /tmp/easy_install-vVPR1Z/M2Crypto-0.22.3/setup.cfg
    Running M2Crypto-0.22.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-vVPR1Z/M2Crypto-0.22.3/egg-dist-tmp-3c7TJ3
    SWIG/_m2crypto.i:30: Error: Unable to find 'openssl/opensslv.h'
    SWIG/_m2crypto.i:33: Error: Unable to find 'openssl/safestack.h'
    SWIG/_evp.i:12: Error: Unable to find 'openssl/opensslconf.h'
    SWIG/_ec.i:7: Error: Unable to find 'openssl/opensslconf.h'
    error: Setup script exited with error: command 'swig' failed with exit status 1

    The fix is to install openssl-devel:
    yum install -y openssl-devel.x86_64

    Re-running pip install docker-registry then fails with the following error:
  • ERROR 2
    Searching for M2Crypto==0.22.3
    Reading https://pypi.python.org/simple/M2Crypto/
    Best match: M2Crypto 0.22.3
    Downloading https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz#md5=573f21aaac7d5c9549798e72ffcefedd
    Processing M2Crypto-0.22.3.tar.gz
    Writing /tmp/easy_install-5hkA4l/M2Crypto-0.22.3/setup.cfg
    Running M2Crypto-0.22.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-5hkA4l/M2Crypto-0.22.3/egg-dist-tmp-pZ_OGN
    /usr/include/openssl/opensslconf.h:36: Error: CPP #error ""This openssl-devel package does not work your architecture?"". Use the -cpperraswarn option to continue swig processing.
    error: Setup script exited with error: command 'swig' failed with exit status 1

    The fix is to install M2Crypto 0.22.3 manually (installing M2Crypto 0.22.3 on CentOS 7 has some problems and needs a helper script)
    wget https://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.22.3.tar.gz   # download the source
    tar zxvf M2Crypto-0.22.3.tar.gz                                                  # extract
    cd M2Crypto-0.22.3

    Then create the install script with the following content:
    vim fedora_setup.sh
    #!/bin/sh
    # This script is meant to work around the differences on Fedora Core-based
    # distributions (Redhat, CentOS, ...) compared to other common Linux
    # distributions.
    #
    # Usage: ./fedora_setup.sh [setup.py options]
    #
    
    arch=`uname -m`
    for i in SWIG/_{ec,evp}.i; do
      sed -i -e "s/opensslconf\./opensslconf-${arch}\./" "$i"
    done
    
    SWIG_FEATURES=-cpperraswarn python setup.py $*

    Make the script executable, run it, and install M2Crypto 0.22.3
    chmod +x fedora_setup.sh
    ./fedora_setup.sh build
    python setup.py install

    This completes the installation. Note that the private registry's config file config_sample.yml is located at the following path
    /usr/lib/python2.7/site-packages/docker_registry-1.0.0_dev-py2.7.egg/config

    After configuring and starting the service, pushing an image produced the following error:
  • ERROR 3
    docker pull 172.16.18.159:5000/ubuntu:12.04
    Error: Invalid registry endpoint https://172.16.18.159:5000/v1/: Get https://172.16.18.159:5000/v1/_ping: EOF. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry http://172.16.18.159:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/http://172.16.18.159:5000/ca.crt

    The fix is to add the --insecure-registry 172.16.18.159:5000 option to OPTIONS in the Docker config file
    # /etc/sysconfig/docker
    
    # Modify these options if you want to change the way the docker daemon runs
    OPTIONS='--selinux-enabled --insecure-registry 172.16.18.159:5000'
    DOCKER_CERT_PATH=/etc/docker

    Then restart the Docker service:
    systemctl restart docker

    With that, all errors are resolved and the local registry is configured successfully