# skylab-ansible/roles/datastore/meta/argument_specs.yaml
---
argument_specs:
  main:
    description: >-
      This role makes several assumptions about the local storage configuration of the server:
        1. There is one block device on the server that will be used for data storage
        2. That block device will be joined to a GlusterFS volume
        3. The block device is encrypted with LUKS
      This role mostly serves to perform housekeeping tasks and validation of expected configs.
      Automating disk configuration seems like a really good way to lose all my data, so I decided
      to leave that to the much more reliable manual configuration for the time being.
      To that end, here is a quick cheatsheet of commands that might be useful in setting up
      storage device(s) for this role (replace `DEVICE` with the block device for storage, and
      the other capitalized tokens with values for your setup):
        ```bash
        # Encrypt a block device, providing the encryption key when prompted
        cryptsetup luksFormat --type luks2 /dev/DEVICE
        # Unlock the encrypted block device and map it under /dev/mapper/LABEL
        cryptsetup luksOpen /dev/DEVICE LABEL
        # Lock the encrypted block device again
        cryptsetup luksClose LABEL
        # Create an XFS filesystem on the unlocked device
        mkfs.xfs /dev/mapper/LABEL -L LABEL
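        # (Illustrative addition, not from the original cheatsheet: the filesystem still needs
        #  to be mounted before it can back a gluster brick; /MOUNTPOINT is a placeholder, the
        #  actual mount point is not defined by this spec)
        mount /dev/mapper/LABEL /MOUNTPOINT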
        # Run from an existing server already in the gluster pool
        # Add a server to the gluster pool
        gluster peer probe HOSTNAME
        # To replace a brick from a node that is already offline, the old brick first needs to be
        # force removed, the replica count reduced, and (if arbiter is enabled) any arbiter
        # bricks removed
        #
        # Remove the arbiter brick
        gluster volume remove-brick VOLUME replica 2 HOSTNAME:/EXPORT force
        # Remove the dead data brick
        gluster volume remove-brick VOLUME replica 1 HOSTNAME:/EXPORT force
        # Remove the dead node
        gluster peer detach HOSTNAME
        # Add the new data brick
        gluster volume add-brick VOLUME replica 2 HOSTNAME:/EXPORT
        #
        # To re-add the arbiter you might need to clean up the `.glusterfs` directory and remove
        # the gluster extended attributes from the old brick. These next commands need to be run
        # on the host with the arbiter brick physically attached
        #
        rm -rf /EXPORT/.glusterfs
        setfattr -x trusted.gfid /EXPORT
        setfattr -x trusted.glusterfs.volume-id /EXPORT
        # Re-add the arbiter brick
        gluster volume add-brick VOLUME replica 3 arbiter 1 HOSTNAME:/EXPORT
        # Trigger a resync
        gluster volume heal VOLUME
        # General gluster debug info
        gluster volume info VOLUME
        gluster volume status VOLUME
        ```
    options:
      skylab_datastore_device:
        description: The block device under `/dev/` that should be configured as datastore storage
        type: str
        required: true
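
# For illustration only (not part of the argument spec): a minimal play consuming this role
# might look like the following. The host group and the `sdb` device name are assumptions,
# not values defined by this file.
#
#   - hosts: datastore
#     roles:
#       - role: datastore
#         vars:
#           skylab_datastore_device: sdb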