Garage S3 compatible storage

An open-source distributed object storage service tailored for self-hosting.

Single binary, no dependencies, simple install and configuration, S3 API compatible.
Works with restic and rclone.

Does not expose a filesystem itself, but rclone can provide one (see the rclone example at the end).

Default location for the garage configuration is /etc/garage.toml; use -c /path/to/garage.toml to override.
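For example, to run any garage command against a non-default configuration path:

garage -c /path/to/garage.toml status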

Install

wget https://garagehq.deuxfleurs.fr/_releases/v1.1.0/x86_64-unknown-linux-musl/garage
chmod 755 garage
mv garage /usr/local/bin
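 
# sanity check: the binary should print its usage
garage --help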
 
# create directory for garage data and metadata
mkdir -p /backup/garage
 
cat > /etc/garage.toml <<EOF
metadata_dir = "/backup/garage/meta"
data_dir = "/backup/garage/data"
db_engine = "sqlite"
 
replication_factor = 1
 
rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
# rpc_secret must be the same on all nodes
rpc_secret = "82d62c120834e5aa879583796f9b292aa803df3cc797236b648234394257f9f9"
 
[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"
 
[s3_web]
bind_addr = "[::]:3902"
# Optional, can be empty
root_domain = ".web.garage.localhost"
index = "index.html"
 
[k2v_api]
api_bind_addr = "[::]:3904"
 
[admin]
api_bind_addr = "[::]:3903"
admin_token = "gr846Ogi0b+C/2n0YhIxVNMi40zcUW5UzhTMlHVKWsI="
metrics_token = "E//rxKV+G2p4AbaYOxLWgJBfmqwH4tkLY2wihV/6x0Y="
EOF
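 
The rpc_secret must be a 32-byte hex value and the admin tokens are arbitrary strings; generate fresh values with openssl rather than reusing the examples above:

openssl rand -hex 32     # rpc_secret
openssl rand -base64 32  # admin_token / metrics_token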
 
## OpenRC service file
 
root@stor01:~$ cat /etc/init.d/garage 
#!/sbin/openrc-run
 
GARAGE_LOGFILE="${GARAGE_LOGFILE:-/var/log/${RC_SVCNAME}.log}"
 
supervisor=supervise-daemon
 
name="garage"
command="/usr/local/bin/garage"
command_args="server >>${GARAGE_LOGFILE} 2>&1"
 
output_log=${GARAGE_LOGFILE}
error_log=${GARAGE_LOGFILE}
 
pidfile="/run/${RC_SVCNAME}.pid"
respawn_delay=5
respawn_max=0
 
depend() {
        need net
        after firewall
        use logger
}
 
start_pre() {
        checkpath -f -m 0644 -o root:root "${GARAGE_LOGFILE}"
}
 
service garage start
rc-update add garage default
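 
Check that the service came up and is writing to its log:

rc-service garage status
tail -f /var/log/garage.log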
 
## systemd service file
 
root@stor02:~$ cat /etc/systemd/system/garage.service 
[Unit]
Description=Garage Data Store
After=network-online.target
Wants=network-online.target
 
[Service]
User=root
Group=root
Environment='RUST_LOG=garage=info' 'RUST_BACKTRACE=1'
ExecStart=/usr/local/bin/garage server
StateDirectory=garage
# DynamicUser=true
# ProtectHome=true
NoNewPrivileges=true
 
[Install]
WantedBy=multi-user.target
 
systemctl daemon-reload
systemctl start garage
systemctl enable garage
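 
Confirm the unit is running and follow its logs:

systemctl status garage
journalctl -u garage -f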
 

Add nodes to cluster

stor01

Substitute the host's IP address for the localhost IP in the following connect commands,
and repeat the commands in both directions so each node connects to the other.

$ garage node id
2f0814a07b32e85b1b9a02dc0aaded31477157fa3a3b2596b66d3ae5bbffbbd1@127.0.0.1:3901
 
To instruct a node to connect to this node, run the following command on that node:
    garage [-c <config file path>] node connect 2f0814xxxxx@127.0.0.1:3901
...

stor02

$ garage node connect 2f0814xxxxx@192.168.1.40:3901
Success

Layout

The layout describes the storage assigned to every cluster node.

This example expands the capacity on each node. The capacity value simply tells
garage how much space it may use on the underlying filesystem.

Layout changes can be made from any cluster node.

$ garage status
==== HEALTHY NODES ====
ID                Hostname           Address            Tags  Zone    Capacity  DataAvail
2f0814xxxxxxxxxx  stor01             127.0.0.1:3901     []    stor01  500.0 GB  879.6 GB (52.5%)
296435xxxxxxxxxx  stor02             192.168.1.46:3901  []    stor02  500.0 GB  531.3 GB (55.5%)
 
 
$ garage layout assign -z stor02 -c 750G 296435
Role changes are staged but not yet committed.
Use `garage layout show` to view staged role changes,
and `garage layout apply` to enact staged changes.
 
$ garage layout assign -z stor01 -c 750G 2f0814
Role changes are staged but not yet committed.
Use `garage layout show` to view staged role changes,
and `garage layout apply` to enact staged changes.
 
# the version number must be incremented on every layout change
$ garage layout apply --version 3
==== COMPUTATION OF A NEW PARTITION ASSIGNATION ====
 
Partitions are replicated 1 times on at least 1 distinct zones.
 
Optimal partition size:                     5.9 GB (3.9 GB in previous layout)
Usable capacity / total cluster capacity:   1.5 TB / 1.5 TB (100.0 %)
Effective capacity (replication factor 1):  1.5 TB
 
A total of 0 new copies of partitions need to be transferred.
 
stor02              Tags  Partitions        Capacity  Usable capacity
  296435xxxxxxxxxx        128 (0 new)       750.0 GB  750.0 GB (100.0%)
  TOTAL                   128 (128 unique)  750.0 GB  750.0 GB (100.0%)
 
stor01              Tags  Partitions        Capacity  Usable capacity
  2f0814xxxxxxxxxx        128 (0 new)       750.0 GB  750.0 GB (100.0%)
  TOTAL                   128 (128 unique)  750.0 GB  750.0 GB (100.0%)
 
 
New cluster layout with updated role assignment has been applied in cluster.
Data will now be moved around between nodes accordingly.
 
$ garage layout show
==== CURRENT CLUSTER LAYOUT ====
ID                Tags  Zone    Capacity  Usable capacity
296435xxxxxxxxxx        stor02  750.0 GB  750.0 GB (100.0%)
2f0814xxxxxxxxxx        stor01  750.0 GB  750.0 GB (100.0%)
 
Zone redundancy: maximum
 
Current cluster layout version: 3

Bucket

# Create bucket named 'restic'
$ garage bucket create restic
Bucket restic was created.
 
# Create a key - name can be any text
$ garage key create restic-app-key
Key name: restic-app-key
Key ID: GK1fe88xxx
Secret key: f51abaxx
Can create buckets: false
 
Key-specific bucket aliases:
 
Authorized buckets:
 
# Grant permissions on bucket to key
$ garage bucket allow --read --write --owner restic --key restic-app-key
New permissions for GK1fe88xxx on restic: read true, write true, owner true.
 
# List buckets
$ garage bucket list
List of buckets:
  restic  ff3a5c1xxxx
 
# Show bucket information
$ garage bucket info restic
Bucket: ff3a5cxxxx
 
Size: 0 B (0 B)
Objects: 0
Unfinished uploads (multipart and non-multipart): 0
Unfinished multipart uploads: 0
Size of unfinished multipart uploads: 0 B (0 B)
 
Website access: false
 
Global aliases:
  restic
 
Key-specific aliases:
 
Authorized keys:
  RWO  GK1fe88xxx  restic-app-key

Usage

Restic

Define AWS and Restic environment variables.

export AWS_ACCESS_KEY_ID=GK1fe88xxx
export AWS_SECRET_ACCESS_KEY=f51abaxx
export RESTIC_PASSWORD=xxxxxx
export RESTIC_REPOSITORY="s3:http://localhost:3900/restic"

Initialise a repository

restic init

Back up to the repository.
This example uses sudo to back up all home directories.
It can be run on any cluster node.

# run on both nodes
sudo -E restic backup --host $(hostname -s) --tag home /home

List snapshots.

sudo -E restic snapshots
 
repository 5050f699 opened (version 2, compression level auto)
ID        Time                 Host        Tags          Paths          Size
----------------------------------------------------------------------------------
aee2c023  2025-03-30 21:54:58  stor02      home          /home          1.968 GiB
9c8ddc56  2025-03-30 21:55:31  stor01      home          /home          8.027 GiB
----------------------------------------------------------------------------------
2 snapshots
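
Rclone

The same bucket is reachable with rclone. A minimal sketch of a remote definition in ~/.config/rclone/rclone.conf, assuming the endpoint and the key created above (the remote name "garage" and the mount point are arbitrary choices):

[garage]
type = s3
provider = Other
access_key_id = GK1fe88xxx
secret_access_key = f51abaxx
endpoint = http://localhost:3900
region = garage

List the bucket, or mount it to get a filesystem view:

rclone ls garage:restic
# mount point must already exist
rclone mount garage:restic /mnt/restic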