Salt part three
Fri, Jan 18, 2019 · 3 minute read · linux · salt
Salt grains
From the documentation,
Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties.
Grains are automatically determined by Salt but can also be assigned in several ways (sketched briefly below):
- Statically on the minion in the configuration file, /etc/salt/minion
- Statically on the minion in /etc/salt/grains
- From the master, using the grains module to assign values
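As a brief illustration (the values here are made up, not from our inventory), the static grains file is plain YAML, and the master can push a single value with grains.setval:

# /etc/salt/grains - plain YAML, one grain per key
datacenter: local
roles:
  - webserver

# from the master, assign a grain value on the minion
# salt container20.lxd grains.setval environment base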
While the automatically determined grains are generally useful, we have found one issue: Salt seems a little too comprehensive at reporting the hostname. For example, this is an extract from grains.items:
# salt container20.lxd grains.items
container20.lxd:
    ----------
    ...
    domain:
        lxd
    ...
    fqdn:
        container20.lxd
    ...
    host:
        container20
    ...
    localhost:
        container20
    ...
    nodename:
        container20
Changes to the hostname can also take some time to appear as a change to one or more of these grains, and in our experience may not appear at all.
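For completeness, a minion's grains can be refreshed on demand with the standard saltutil module, although this only re-runs the same detection:

# re-run grain detection on the minion without a restart
# salt container20.lxd saltutil.refresh_grains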
Particularly for the hostname, but also to keep other fixed grains under control, we prefer to use an inventory of hosts that defines a specific set of grains per host for use in state files.
This is similar to the concept of host variables in Ansible.
An example host file looks like the one below. It is loaded as YAML (load_yaml) into a variable bits.
{%- load_yaml as bits %}
##
# bits host definition
# container20
##
bits:
  # hostname
  # fqdn, required as the default 'host' and 'localhost' grains are unreliable; should be equivalent to the `fqdn` grain.
  hostname: container20.lxd
  # environment
  # base, prod
  environment: base # prod
  # provider
  # who provides the host
  # local - on-site
  # ovh, azure, aws, etc
  provider: local # ovh, azure, aws, etc
  # datacenter
  # where the host is located
  # local - on-site
  # provider region, eg ovh-uk1, aws-us-east1, etc
  datacenter: local # ovh-uk1, ovh-gra1, aws-us-east1, etc
  # vm_type
  # none - e.g. hardware
  # lxc, vmware, virtualbox, etc
  vm_type: lxc
  # roles
  roles:
    - webserver
    - database
{%- endload %}
This configuration is applied to a minion by executing a simple state file:
- Set a variable named inventory pointing at the inventory file.
- From inventory, import into a variable bits.
- Use bits to refer to the YAML data (via the unfortunate bits.bits naming convention).
# cat grains.sls
{% set inventory = 'inventory/hosts/' + grains['id'] %}
{% from inventory import bits %}
bits:
  grains.present:
    - value: {{ bits.bits }}
    - force: True
The state file is applied using state.apply, e.g.
# salt container20.lxd state.apply grains
container20.lxd:
----------
          ID: bits
    Function: grains.present
      Result: True
     Comment: Grain is already set
     Started: 09:18:37.558682
    Duration: 3.044 ms
     Changes:

Summary for container20.lxd
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:   3.044 ms
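As usual with Salt, the run can be previewed first with test=True, which reports what would change without applying anything:

# preview the state without changing the minion
# salt container20.lxd state.apply grains test=True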
Check the assigned values using grains.get, e.g.
# salt container20.lxd grains.get bits
container20.lxd:
    ----------
    datacenter:
        local
    environment:
        base
    hostname:
        container20.lxd
    provider:
        local
    roles:
        - webserver
        - database
    vm_type:
        lxc
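grains.get also accepts colon-delimited paths, so a single nested value can be fetched directly (the output follows from the values above):

# salt container20.lxd grains.get bits:datacenter
container20.lxd:
    local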
These grains can then be referenced quite simply, e.g.
# extract from top.sls
# target all minions with the caddy role set
'bits:roles:caddy':
  - match: grain
  - caddy
# extract from ssh/config state
{% set vault = grains['bits'] %}
...
/etc/ssh/host-keys:
  file.recurse:
    - name: /etc/ssh
    - source: salt://ssh/files/keys/hosts
    - include_pat: {{ vault.hostname }}*
    - replace: true

/etc/ssh/host:
  file.managed:
    - name: /etc/ssh/{{ vault.hostname }}
    - mode: '600'
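One caveat: grains['bits'] fails to render on a minion where the grain has not been applied yet. A defensive variant (a sketch, not from our actual states) falls back to an empty mapping instead:

{# guarded lookup: default to an empty dict if the 'bits' grain is missing #}
{% set vault = salt['grains.get']('bits', {}) %}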
# on the command line
# salt -G "bits:vm_type:lxc" test.ping
container20.lxd:
    True
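The -G grain targeting also composes with Salt's compound matcher (-C), where G@ marks a grain expression; for example, to ping only LXC containers that also carry the webserver role:

# salt -C "G@bits:vm_type:lxc and G@bits:roles:webserver" test.ping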