OpenStack Ocata: Configuring the Swift Object Storage Service

The environment is as follows:
IP:192.168.0.111 controller
IP:192.168.0.112 compute
IP:192.168.0.113 object1
IP:192.168.0.117 object2
IP:192.168.0.118 cinder
1. Install the Swift service on the controller node
[root@controller ~]# source admin-openrc
Create the swift user:
[root@controller ~]# openstack user create --domain default --password-prompt swift

[root@controller ~]# openstack role add --project service --user swift admin
Create the swift service entity:
[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store


Create the Object Storage service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne \
object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne \
object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne \
object-store admin http://controller:8080/v1
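To confirm that the three endpoints were registered, an optional check (not part of the original write-up) is:
[root@controller ~]# openstack endpoint list --service object-store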

Install the service packages:
[root@controller ~]# yum install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
Copy a proxy-server.conf configuration file into the /etc/swift/ directory (one way to obtain it is sketched below), then edit it; the password in the authtoken section must match the password you set for the swift user above.
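If you do not have a proxy-server.conf at hand, one possible way to obtain it (an assumption, mirroring the stable/newton sample URLs this guide uses later for the storage-node configs; adjust the branch to your release if needed) is to pull the sample from the Swift repository:
[root@controller ~]# curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/newton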
[root@controller swift]# vim /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
swift_dir = /etc/swift
user = swift
[pipeline:main] (remove tempurl and tempauth; add the authtoken and keystoneauth modules)
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = devops
delay_auth_decision = True
[filter:cache]
use = egg:swift#memcache
memcache_servers = controller:11211
2. Install and configure the storage nodes, object1 and object2. Perform the following operations on both nodes; each storage node has two additional disks, sdb and sdc.
Install the xfsprogs and rsync packages:
[root@object1 ~]# ifconfig | head -2
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.113 netmask 255.255.255.0 broadcast 192.168.0.255
[root@object1 ~]# yum -y install xfsprogs rsync
Format the sdb and sdc devices:
[root@object1 ~]# mkfs.xfs /dev/sdb
[root@object1 ~]# mkfs.xfs /dev/sdc
Create the mount point directory structure:
[root@object1 ~]# mkdir -p /srv/node/sdb
[root@object1 ~]# mkdir -p /srv/node/sdc
Edit the /etc/fstab file and append the following entries:
[root@object1 ~]# cat /etc/fstab | tail -2
/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
Mount the devices:
[root@object1 ~]# mount /srv/node/sdb
[root@object1 ~]# mount /srv/node/sdc
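A quick sanity check (optional, not part of the original steps) that both filesystems are mounted:
[root@object1 ~]# df -h | grep /srv/node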
[root@object1 ~]# vim /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.0.113
[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
Enable and start the rsyncd service:
[root@object1 ~]# systemctl enable rsyncd.service
[root@object1 ~]# systemctl start rsyncd.service
[root@object1 ~]# systemctl status rsyncd.service
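To confirm rsyncd is answering and exporting the three modules defined above (an optional check; it assumes the default rsync port 873 is reachable):
[root@object1 ~]# rsync rsync://192.168.0.113/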
Install and configure the components:
[root@object1 ~]# yum install openstack-swift-account openstack-swift-container openstack-swift-object
[root@object1 ~]# curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/newton
[root@object1 ~]# curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/newton
[root@object1 ~]# curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/newton
[root@object1 ~]# vim /etc/swift/account-server.conf
[DEFAULT]
bind_ip = 192.168.0.113
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]
pipeline = healthcheck recon account-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]# vim /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 192.168.0.113
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]
pipeline = healthcheck recon container-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]# vim /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 192.168.0.113
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]
pipeline = healthcheck recon object-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lockpath = /var/lock
[root@object1 ~]# chown -R swift:swift /srv/node
[root@object1 ~]# chown -R root:swift /var/cache/swift/
[root@object1 ~]# chmod -R 755 /var/cache/swift/
Repeat the same steps on object2; just change the IP addresses in the configuration files to object2's address (192.168.0.117).
[root@object2 ~]# systemctl enable rsyncd.service
[root@object2 ~]# systemctl start rsyncd.service
[root@object2 ~]# systemctl status rsyncd.service
3. Create and distribute the initial rings. Perform the following steps on the controller node.
Create the account ring (the arguments 10 3 1 mean 2^10 = 1024 partitions, 3 replicas, and a minimum of 1 hour between partition reassignments).
Change to the /etc/swift directory:
[root@controller ~]# cd /etc/swift/
[root@controller swift]# swift-ring-builder account.builder create 10 3 1
Add the storage nodes to the ring:
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.0.113 --port 6202 --device sdb --weight 100
Device d0r1z1-192.168.0.113:6202R192.168.0.113:6202/sdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.0.113 --port 6202 --device sdc --weight 100
Device d1r1z1-192.168.0.113:6202R192.168.0.113:6202/sdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 2 --ip 192.168.0.117 --port 6202 --device sdb --weight 100
Device d2r1z2-192.168.0.117:6202R192.168.0.117:6202/sdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 2 --ip 192.168.0.117 --port 6202 --device sdc --weight 100
Device d3r1z2-192.168.0.117:6202R192.168.0.117:6202/sdc_"" with 100.0 weight got id 3
[root@controller swift]# swift-ring-builder account.builder
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file account.ring.gz not found, probably it hasn't been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 192.168.0.113:6202 192.168.0.113:6202 sdb 100.00 0 -100.00
1 1 1 192.168.0.113:6202 192.168.0.113:6202 sdc 100.00 0 -100.00
2 1 2 192.168.0.117:6202 192.168.0.117:6202 sdb 100.00 0 -100.00
3 1 2 192.168.0.117:6202 192.168.0.117:6202 sdc 100.00 0 -100.00
[root@controller swift]# swift-ring-builder account.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00

[root@controller swift]# swift-ring-builder container.builder create 10 3 1
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.0.113 --port 6201 --device sdb --weight 100
Device d0r1z1-192.168.0.113:6201R192.168.0.113:6201/sdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.0.113 --port 6201 --device sdc --weight 100
Device d1r1z1-192.168.0.113:6201R192.168.0.113:6201/sdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 2 --ip 192.168.0.117 --port 6201 --device sdb --weight 100
Device d2r1z2-192.168.0.117:6201R192.168.0.117:6201/sdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 2 --ip 192.168.0.117 --port 6201 --device sdc --weight 100
Device d3r1z2-192.168.0.117:6201R192.168.0.117:6201/sdc_"" with 100.0 weight got id 3
[root@controller swift]# swift-ring-builder container.builder
container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file container.ring.gz not found, probably it hasn't been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 192.168.0.113:6201 192.168.0.113:6201 sdb 100.00 0 -100.00
1 1 1 192.168.0.113:6201 192.168.0.113:6201 sdc 100.00 0 -100.00
2 1 2 192.168.0.117:6201 192.168.0.117:6201 sdb 100.00 0 -100.00
3 1 2 192.168.0.117:6201 192.168.0.117:6201 sdc 100.00 0 -100.00
[root@controller swift]# swift-ring-builder container.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00

[root@controller swift]# swift-ring-builder object.builder create 10 3 1
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.0.113 --port 6200 --device sdb --weight 100
Device d0r1z1-192.168.0.113:6200R192.168.0.113:6200/sdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.0.113 --port 6200 --device sdc --weight 100
Device d1r1z1-192.168.0.113:6200R192.168.0.113:6200/sdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 2 --ip 192.168.0.117 --port 6200 --device sdb --weight 100
Device d2r1z2-192.168.0.117:6200R192.168.0.117:6200/sdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 2 --ip 192.168.0.117 --port 6200 --device sdc --weight 100
Device d3r1z2-192.168.0.117:6200R192.168.0.117:6200/sdc_"" with 100.0 weight got id 3
[root@controller swift]# swift-ring-builder object.builder
object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file object.ring.gz not found, probably it hasn't been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 192.168.0.113:6200 192.168.0.113:6200 sdb 100.00 0 -100.00
1 1 1 192.168.0.113:6200 192.168.0.113:6200 sdc 100.00 0 -100.00
2 1 2 192.168.0.117:6200 192.168.0.117:6200 sdb 100.00 0 -100.00
3 1 2 192.168.0.117:6200 192.168.0.117:6200 sdc 100.00 0 -100.00
[root@controller swift]# swift-ring-builder object.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
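At this point all three builder files and compressed ring files should exist under /etc/swift (a quick check, assuming the rebalances above succeeded):
[root@controller swift]# ls /etc/swift/*.builder /etc/swift/*.ring.gz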

Copy account.ring.gz, container.ring.gz, and object.ring.gz from the /etc/swift directory on the controller to the /etc/swift/ directory on each storage node, as sketched below.
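For example, assuming root SSH access from the controller to both storage nodes (adjust addresses and credentials to your environment):
[root@controller swift]# scp /etc/swift/*.ring.gz root@192.168.0.113:/etc/swift/
[root@controller swift]# scp /etc/swift/*.ring.gz root@192.168.0.117:/etc/swift/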
4. Download and configure the swift.conf file on the controller node
[root@controller swift]# curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/newton
[root@controller swift]# vim /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = openstack
swift_hash_path_prefix = openstack
[storage-policy:0]
name = Policy-0
default = yes
Copy the swift.conf file to each storage node; swift.conf (including the hash path prefix and suffix) must be identical on every node.
The process is omitted here; one way to do it is sketched below.
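A sketch of one way to distribute it, again assuming root SSH access from the controller to the storage nodes:
[root@controller swift]# scp /etc/swift/swift.conf root@192.168.0.113:/etc/swift/
[root@controller swift]# scp /etc/swift/swift.conf root@192.168.0.117:/etc/swift/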
[root@controller swift]# chown -R root:swift /etc/swift/
[root@controller swift]# systemctl enable openstack-swift-proxy.service memcached.service
[root@controller swift]# systemctl start openstack-swift-proxy.service memcached.service
[root@controller swift]# systemctl status openstack-swift-proxy.service memcached.service
5. Start the Swift services on each storage node (object1 and object2)
[root@object1 swift]# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 swift]# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 swift]# systemctl status openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 swift]# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 swift]# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 swift]# systemctl status openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 swift]# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
[root@object1 swift]# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
[root@object1 swift]# systemctl status openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
6. Verification
Perform the following operations on the controller node.
[root@controller ~]# cat demo-openrc
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=demo
export OS_PROJECT_NAME=demo
export OS_PASSWORD=devops
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=http://controller:5000/v3
[root@controller ~]# source /root/demo-openrc
Show the service status:
[root@controller ~]# swift stat

Create a container named container1:
[root@controller ~]# openstack container create container1

Upload a test file to the container1 container:
[root@controller ~]# openstack object create container1 admin-openrc

List the objects in container1:
[root@controller ~]# openstack object list container1

Download the test file from container1:
[root@controller ~]# openstack object save container1 admin-openrc
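Because admin-openrc already exists in the working directory, a more telling check (an optional suggestion; it assumes the --file option of openstack object save) is to save the object under a different path and compare it with the original:
[root@controller ~]# openstack object save --file /tmp/admin-openrc.check container1 admin-openrc
[root@controller ~]# diff admin-openrc /tmp/admin-openrc.check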

All of the configuration files are available on Baidu Cloud Disk:
Link: https://pan.baidu.com/s/1CnmKkFMTemv199ctgb5Oig
Extraction code: 27om

Original article: https://blog.51cto.com/343614597/2419487
