Add documentation for datastore role
parent 01c0e21f94
commit 450d8fcb7a

roles/datastore/meta/argument_specs.yaml (new file, 67 lines)
@@ -0,0 +1,67 @@
---
argument_specs:
main:
description: >-
This role makes several assumptions about the local storage configuration of the server:
1. There is one block device on the server that will be used for data storage
2. That block device will be joined to a glusterfs volume
3. The block device is encrypted with LUKS
This role mostly serves to perform housekeeping tasks and validation of expected configs.
Automating disk configuration seems like a really good way to lose all my data, so I decided
to leave that to the much more reliable manual configuration for the time being.
To that end, here is a quick cheatsheet of commands that might be useful in setting up
storage device(s) for this role (replace `DEVICE` with the block device for storage):
```bash
# Encrypt a block device, provide encryption key when prompted
cryptsetup luksFormat --type luks2 /dev/DEVICE
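# (Optional check, not part of the original cheatsheet) confirm the LUKS
# header was actually written before continuing
cryptsetup isLuks /dev/DEVICE && echo "LUKS container detected"
cryptsetup luksDump /dev/DEVICE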
# Unlock the encrypted block device and map it under /dev/mapper/LABEL
cryptsetup luksOpen /dev/DEVICE LABEL
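# (Optional check) the unlocked device should now show up as an active mapping
cryptsetup status LABEL
ls -l /dev/mapper/LABEL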
# Lock an encrypted block device
cryptsetup luksClose LABEL
# Format the unlocked device with an XFS filesystem
mkfs.xfs /dev/mapper/LABEL -L LABEL
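# (Sketch, assuming the brick will live at /EXPORT) mount the new filesystem
mkdir -p /EXPORT
mount /dev/mapper/LABEL /EXPORT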
# Run from an existing server already in the gluster pool
# Add server to the gluster pool
gluster peer probe HOSTNAME
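# (Optional check) confirm the new peer shows as connected
gluster peer status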
# To replace a brick from an already offlined node, the old brick first needs to be force
# removed, replication reduced, and (if arbiter is enabled) any arbiter nodes removed
#
# Remove arbiter brick
gluster volume remove-brick VOLUME replica 2 HOSTNAME:/EXPORT force
# Remove dead data brick
gluster volume remove-brick VOLUME replica 1 HOSTNAME:/EXPORT force
# Remove dead node
gluster peer detach HOSTNAME
# Add new data brick
gluster volume add-brick VOLUME replica 2 HOSTNAME:/EXPORT force
#
# To re-add the arbiter you might need to clean up the `.glusterfs` directory and remove
# extended attributes from the old brick. These next commands need to be run on the host
# with the arbiter brick physically attached
#
rm -rf /EXPORT/.glusterfs
setfattr -x trusted.gfid /EXPORT
setfattr -x trusted.glusterfs.volume-id /EXPORT
# Re-add arbiter brick
gluster volume add-brick VOLUME replica 3 arbiter 1 HOSTNAME:/EXPORT
# Trigger a resync
gluster volume heal VOLUME
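# (Optional check) watch heal progress until pending entries reach zero
gluster volume heal VOLUME info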
# General gluster debug info
gluster volume info VOLUME
gluster volume status VOLUME
```
options:
skylab_datastore_device:
description: The block device under `/dev/` that should be configured as datastore storage
type: str
required: true
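A spec like the one above would be satisfied by supplying the variable when applying the role; a minimal sketch (the host group here is an assumption, not taken from this repo):

```yaml
# Example only: the "datastore" host group is hypothetical
- hosts: datastore
  roles:
    - role: datastore
      vars:
        skylab_datastore_device: sdb  # block device under /dev/, per the spec above
```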