Kubernetes 1.5.4: mounting a GlusterFS volume

Collection of volume examples:

https://github.com/kubernetes/kubernetes/tree/master/examples/volumes

http://www.dockerinfo.net/2926.html

http://dockone.io/article/2087

https://www.kubernetes.org.cn/1146.html

https://kubernetes.io/docs/user-guide/volumes/

Kubernetes cluster installation and deployment

http://jerrymin.blog.51cto.com/3002256/1898243

Kubernetes cluster RC, SVC, and Pod deployment

http://jerrymin.blog.51cto.com/3002256/1900260

Deploying the kubernetes-dashboard and kube-dns cluster components

http://jerrymin.blog.51cto.com/3002256/1900508

Deploying the heapster monitoring component

http://jerrymin.blog.51cto.com/3002256/1904460

Deploying the reverse-proxy load-balancing components

http://jerrymin.blog.51cto.com/3002256/1904463

Mounting an NFS volume in the Kubernetes cluster

http://jerrymin.blog.51cto.com/3002256/1906778

GlusterFS reference documentation

https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/glusterfs

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/

http://blog.gluster.org/2016/03/persistent-volume-and-claim-in-openshift-and-kubernetes-using-glusterfs-volume-plugin/

https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html

The example assumes that you have already set up a Glusterfs server cluster and the Glusterfs client package is installed on all Kubernetes nodes.

You first need to install and deploy a GlusterFS cluster on the nodes; see http://www.linuxidc.com/Linux/2017-02/140517.htm for reference.

1. GlusterFS server and client deployment

Installing GlusterFS on CentOS is very simple.

Install GlusterFS on all three nodes:

yum install centos-release-gluster

yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

Configure the GlusterFS cluster:

Start the glusterd service:

systemctl start glusterd.service

systemctl enable glusterd.service

Configuration:

[root@k8s-master glusterfs]# cd /usr/local/kubernetes/examples/volumes/glusterfs

[root@k8s-master glusterfs]# ls

glusterfs-endpoints.json  glusterfs-pod.json  glusterfs-service.json  README.md

[root@k8s-master glusterfs]# gluster peer probe k8s-master

peer probe: success. Probe on localhost not needed

[root@k8s-master glusterfs]# gluster peer probe k8s-node1

peer probe: success.

[root@k8s-master glusterfs]# gluster peer probe k8s-node2

peer probe: success.

Check the cluster status:

[root@k8s-master glusterfs]# gluster peer status

Number of Peers: 2

Hostname: k8s-node1

Uuid: 4853baab-e8fb-41ad-9a93-bfb5f0d55692

State: Peer in Cluster (Connected)

Hostname: k8s-node2

Uuid: 2c9dea85-2305-4989-a74a-970f7eb08093

State: Peer in Cluster (Connected)

Create the data storage directory on each node:

[root@k8s-master glusterfs]# mkdir -p /data/gluster/data

[root@k8s-node1 ~]# mkdir -p /data/gluster/data

[root@k8s-node2 ~]# mkdir -p /data/gluster/data

Create the volume. GlusterFS's default volume type is distributed (DHT), which hashes each file onto a single brick; the command below instead creates a 3-way replicated volume (replica 3), which the volume info further down confirms. A sketch of the plain distributed form follows the create output.

[root@k8s-master glusterfs]# gluster volume create glusterfsdata replica 3 k8s-master:/data/gluster/data k8s-node1:/data/gluster/data k8s-node2:/data/gluster/data force

volume create: glusterfsdata: success: please start the volume to access data
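For comparison only: a plain distributed (DHT) volume is created by simply omitting the replica count. The volume name and brick paths below are hypothetical and are not used elsewhere in this article:

gluster volume create glusterfsdist k8s-master:/data/gluster/dist k8s-node1:/data/gluster/dist k8s-node2:/data/gluster/dist force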

Check the volume info:

[root@k8s-master glusterfs]# gluster volume info

Volume Name: glusterfsdata

Type: Replicate

Volume ID: 100d1f33-fb0d-48c3-9a93-d08c2e2dabb3

Status: Created

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: k8s-master:/data/gluster/data

Brick2: k8s-node1:/data/gluster/data

Brick3: k8s-node2:/data/gluster/data

Options Reconfigured:

transport.address-family: inet

nfs.disable: on

Start the volume:

[root@k8s-master glusterfs]# gluster volume start glusterfsdata

volume start: glusterfsdata: success
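Optionally, confirm that all bricks are online before mounting; gluster volume status is a standard GlusterFS command (output omitted here):

gluster volume status glusterfsdata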

Test the client mount:

[root@k8s-master glusterfs]# mount -t glusterfs k8s-master:glusterfsdata /mnt

[root@k8s-master glusterfs]# df -h|grep gluster

k8s-master:glusterfsdata  422G  934M  421G   1% /mnt

[root@k8s-master mnt]# echo glusterfs > glusterfs

[root@k8s-master mnt]# cat glusterfs

glusterfs
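If the client mount should persist across reboots, a typical /etc/fstab entry would look roughly like this (a sketch; /mnt is just the test mount point used above):

k8s-master:/glusterfsdata  /mnt  glusterfs  defaults,_netdev  0 0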

2. Mounting GlusterFS in the Kubernetes cluster

Note: change the endpoint addresses to IPs of nodes in the GlusterFS cluster; the sample file lists two IPs, so two node IPs are filled in here (a sketch of the edited file is shown below).
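A sketch of what the edited glusterfs-endpoints.json might contain, assuming the two GlusterFS node IPs that appear in the kubectl get ep output below; the port field just has to be a legal port number (the example uses 1):

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [{ "ip": "172.17.3.7" }],
      "ports": [{ "port": 1 }]
    },
    {
      "addresses": [{ "ip": "172.17.3.8" }],
      "ports": [{ "port": 1 }]
    }
  ]
}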

[root@k8s-master glusterfs]# vim glusterfs-endpoints.json

[root@k8s-master glusterfs]# kubectl create -f glusterfs-endpoints.json

endpoints "glusterfs-cluster" created

[root@k8s-master glusterfs]# kubectl get ep |grep glusterfs

glusterfs-cluster   172.17.3.7:1,172.17.3.8:1                1m

[root@k8s-master glusterfs]# kubectl create -f glusterfs-service.json

service "glusterfs-cluster" created

Note: in glusterfs-pod.json, set the path of the glusterfs volume to the GlusterFS volume created above (glusterfsdata). The relevant volumes section looks like this:

"volumes": [

{

"name": "glusterfsvol",

"glusterfs": {

"endpoints": "glusterfs-cluster",

"path": "glusterfsdata",

"readOnly": true

}

}

]

[root@k8s-master glusterfs]# kubectl create -f glusterfs-pod.json

pod "glusterfs" created

[root@k8s-master glusterfs]# kubectl get pod -o wide |grep glus

glusterfs            1/1       Running   0          4m        10.1.39.8    k8s-node1

[root@k8s-node1 ~]# mount | grep gluster

172.17.3.7:glusterfsdata on /var/lib/kubelet/pods/61cd4cec-0955-11e7-a8c3-c81f66d97bc3/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

As you can see, the GitHub example is not very intuitive and targets an older version, but the procedure is clear. As with the NFS example, you can also mount GlusterFS into an nginx site, which makes the test more intuitive. The following is that extended test.

Create a PV and PVC

[root@k8s-master glusterfs]# cat glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "glusterfsdata"
    readOnly: false

[root@k8s-master glusterfs]# cat glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi

[root@k8s-master glusterfs]# kubectl create -f glusterfs-pv.yaml

persistentvolume "gluster-default-volume" created

[root@k8s-master glusterfs]# kubectl create -f glusterfs-pvc.yaml

persistentvolumeclaim "glusterfs-claim" created

[root@k8s-master glusterfs]# kubectl get pvc

NAME              STATUS    VOLUME                   CAPACITY   ACCESSMODES   AGE

glusterfs-claim   Bound     gluster-default-volume   8Gi        RWX           2m
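You can also check the binding from the PV side; the command below is standard kubectl (output omitted):

kubectl get pv gluster-default-volume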

Create an nginx site that mounts glusterfs-claim

[root@k8s-master glusterfs]# kubectl create -f glusterfs-web-rc.yaml

replicationcontroller "glusterfs-web" created

[root@k8s-master glusterfs]# kubectl create -f glusterfs-web-service.yaml

service "glusterfs-web" created

The configuration files are as follows:

[root@k8s-master glusterfs]# cat glusterfs-web-rc.yaml
# This pod mounts the glusterfs volume claim into /usr/share/nginx/html and
# serves a simple web page.
apiVersion: v1
kind: ReplicationController
metadata:
  name: glusterfs-web
spec:
  replicas: 2
  selector:
    role: glusterfs-web-frontend
  template:
    metadata:
      labels:
        role: glusterfs-web-frontend
    spec:
      containers:
      - name: glusterfsweb
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: glusterfsweb
          containerPort: 80
        volumeMounts:
          # name must match the volume name below
          - name: gluster-default-volume
            mountPath: "/usr/share/nginx/html"
      volumes:
      - name: gluster-default-volume
        persistentVolumeClaim:
          claimName: glusterfs-claim

[root@k8s-master glusterfs]# cat glusterfs-web-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: glusterfs-web
spec:
  ports:
    - port: 80
  selector:
    role: glusterfs-web-frontend

Verification

[root@k8s-master glusterfs]# kubectl get pods -o wide |grep glusterfs-web

glusterfs-web-280mz   1/1       Running   0          1m        10.1.39.12   k8s-node1

glusterfs-web-f952d   1/1       Running   0          1m        10.1.15.10   k8s-node2

[root@k8s-master glusterfs]# kubectl exec -ti glusterfs-web-280mz -- bash

root@glusterfs-web-280mz:/# df -h |grep glusterfs

172.17.3.7:glusterfsdata                                                                            422G  954M  421G   1% /usr/share/nginx/html

root@glusterfs-web-280mz:/# cd /usr/share/nginx/html/

root@glusterfs-web-280mz:/usr/share/nginx/html# ls

glusterfs

root@glusterfs-web-280mz:/usr/share/nginx/html# cat glusterfs

glusterfs
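As a final check from outside the pod, fetching the test file through nginx should return its contents. Using the pod IP from the kubectl get pods output above (assuming the pod network is reachable from where you run the command):

curl http://10.1.39.12/glusterfs

This should print "glusterfs", matching the file written into the GlusterFS volume earlier.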
