1. Introduction
This article documents installing a Harbor private image registry on Kubernetes 1.11, along with configuring nginx and traefik as proxies and enabling HTTPS.
Here Harbor uses shared storage (GlusterFS) as the storage backend for images.
The overall logical architecture is: user <---> nginx (https) <---> traefik (https) <---> harbor (https) <---> GlusterFS (volume)
This section first describes how to install a three-node GlusterFS cluster on CentOS 7.
2. Installing the Shared Storage (GlusterFS)
- Node preparation
Node IP | Hostname | Role
---|---|---
192.168.1.11 | gfs-manager | manager
192.168.1.12 | gfs-node1 | node1
192.168.1.13 | gfs-node2 | node2
- Configure /etc/hosts on all nodes (see the example below)
192.168.1.11 gfs-manager
192.168.1.12 gfs-node1
192.168.1.13 gfs-node2
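The entries above can be appended to /etc/hosts on each node, for example as follows (a minimal sketch; adjust the IPs and hostnames to your own environment):
cat >> /etc/hosts <<'EOF'
192.168.1.11 gfs-manager
192.168.1.12 gfs-node1
192.168.1.13 gfs-node2
EOF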
- Install GlusterFS on all nodes via yum
yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
# Start the GlusterFS service
systemctl start glusterd.service
systemctl enable glusterd.service
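To confirm that GlusterFS installed correctly and the service is running, a quick check can be run on each node (a minimal sketch):
# Show the installed GlusterFS version
gluster --version
# Verify the glusterd service is active
systemctl status glusterd.service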
- On the manager node, add the other nodes to the trusted storage pool
[root@gfs-manager ~]# gluster peer probe gfs-manager
peer probe: success. Probe on localhost not needed
[root@gfs-manager ~]# gluster peer probe gfs-node1
peer probe: success.
[root@gfs-manager ~]# gluster peer probe gfs-node2
peer probe: success.
- Check the cluster status
[root@gfs-manager ~]# gluster peer status
Number of Peers: 2
Hostname: gfs-node1
Uuid: 25f7804c-2b48-4f88-8658-3b9302d06a19
State: Peer in Cluster (Connected)
Hostname: gfs-node2
Uuid: 0c6196d3-318b-46f9-ac40-29a8212d4900
State: Peer in Cluster (Connected)
- Check the volume status
[root@gfs-manager ~]# gluster volume info
No volumes present
3. Creating the Data Storage Directory
Assume the directory to be created is /data/gluster/harbordata, which will later be mounted as Harbor's storage directory.
- Create the directory on all nodes
mkdir -p /data/gluster/harbordata
- Create the GlusterFS volume on the manager node (force is required here because the bricks live on the root partition)
[root@gfs-manager ~]# gluster volume create harbordata replica 3 gfs-manager:/data/gluster/harbordata gfs-node1:/data/gluster/harbordata gfs-node2:/data/gluster/harbordata force
volume create: harbordata: success: please start the volume to access data
- Start the harbordata volume on the manager node
[root@gfs-manager ~]# gluster volume start harbordata
volume start: harbordata: success
- Check the volume status
[root@gfs-manager ~]# gluster volume info
Volume Name: harbordata
Type: Replicate
Volume ID: c4fb0a43-c9e5-4a4e-ba98-cf14a7591ecd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gfs-manager:/data/gluster/harbordata
Brick2: gfs-node1:/data/gluster/harbordata
Brick3: gfs-node2:/data/gluster/harbordata
Options Reconfigured:
performance.write-behind: on
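With the volume started, a client host (for example, the node that will run Harbor) can mount it via the GlusterFS FUSE client. This is a minimal sketch: the mount point /mnt/harbordata is only an example, and it assumes the glusterfs-fuse package is installed on the client.
mkdir -p /mnt/harbordata
mount -t glusterfs gfs-manager:/harbordata /mnt/harbordata
# Optional: make the mount persistent across reboots with an /etc/fstab entry such as
# gfs-manager:/harbordata /mnt/harbordata glusterfs defaults,_netdev 0 0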
4. GlusterFS Parameter Tuning (for reference)
- Configure the tuning parameters:
# Enable quota on the specified volume (harbordata is the volume name)
gluster volume quota harbordata enable
# Limit the root directory of harbordata to at most 100GB
gluster volume quota harbordata limit-usage / 100GB
# Set the cache size
gluster volume set harbordata performance.cache-size 2GB
# Enable flush-behind (asynchronous flush operations)
gluster volume set harbordata performance.flush-behind on
# Set the number of IO threads
gluster volume set harbordata performance.io-thread-count 16
# Enable write-behind (writes go to the cache first, then to disk)
gluster volume set harbordata performance.write-behind on
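After applying the settings, the quota and individual options can be verified (a sketch; gluster volume get is available in recent GlusterFS releases):
# Show the configured quota limits and current usage
gluster volume quota harbordata list
# Show the value of a single option
gluster volume get harbordata performance.cache-size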
- Check the current volume status
[root@gfs-manager ~]# gluster volume info
Volume Name: harbordata
Type: Replicate
Volume ID: c4fb0a43-c9e5-4a4e-ba98-cf14a7591ecd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gfs-manager:/data/gluster/harbordata
Brick2: gfs-node1:/data/gluster/harbordata
Brick3: gfs-node2:/data/gluster/harbordata
Options Reconfigured:
performance.write-behind: on
performance.io-thread-count: 16
performance.flush-behind: on
performance.cache-size: 2GB
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
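As a quick sanity check of the replica 3 setup, a file written through a mounted client should appear in the brick directory on every node (a sketch, assuming the volume is mounted at /mnt/harbordata as in the earlier example):
echo replication-test > /mnt/harbordata/test.txt
# Run on each of the three nodes; the file should exist in every brick
ls /data/gluster/harbordata/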
The installation of Harbor itself will be covered in a follow-up article.