updated: 7th of April 2019
published: 7th of January 2018
Salt is open source software for infrastructure automation. Salt was initially an agent-based system, but the team has since implemented an SSH-based connector for devices that cannot run an agent. This agent/agentless flexibility lets you manage servers as well as network devices, which makes Salt a great tool for network automation.
There is also an enterprise offering known as SaltStack Enterprise that adds features such as a GUI, RBAC and 24/7 support.
There are a number of logical topologies that can be created with a Salt infrastructure.
A detailed rundown of the logical topologies can be found in the documentation.
The recommended method to install Salt is via the Salt-maintained repositories. Installing Salt this way requires a couple of steps.
As the root user or a user with sudo privileges, add the Salt repository:
# RHEL/CentOS
sudo yum install https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm
sudo yum clean expire-cache
# Debian/Ubuntu (example uses the Ubuntu 16.04 Xenial repository)
wget -O - https://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
# Add the repository to a sources list file
echo 'deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest xenial main' | sudo tee /etc/apt/sources.list.d/saltstack.list
sudo apt-get update
Install the Salt daemons as required.
# RHEL/CentOS
# Masters optionally have a salt-minion daemon
yum install -y salt-master
# Minions
yum install -y salt-minion
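On Debian/Ubuntu systems the equivalent packages are installed with apt, using the repository added earlier:
# Debian/Ubuntu
# Masters
sudo apt-get install -y salt-master
# Minions
sudo apt-get install -y salt-minion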
See the official documentation for information on how to install Salt on other platforms.
Salt's configuration files are defined in YAML and located in the /etc/salt/ directory. Each daemon has its own configuration file; for example, /etc/salt/master is the salt-master daemon's configuration file.
Master
# /etc/salt/master
# File configs
file_roots:
  base:
    - /srv/salt/
    - /srv/salt/pillar
    - /srv/salt/states
    - /srv/salt/reactors
    - /srv/salt/templates
    - /srv/salt/files
# Pillar configs
pillar_roots:
  base:
    - /srv/salt/pillar
Minion
# /etc/salt/minion
# Master IP or DNS name
master: master01
# ID of minion
id: minion01
A complete list of configuration options can be found in the documentation.
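After editing the configuration files, restart the daemons so the changes take effect (assuming a systemd-based distribution):
# On the master
sudo systemctl restart salt-master
# On the minions
sudo systemctl restart salt-minion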
The Salt master and minions communicate over an AES-encrypted ZeroMQ connection on TCP ports 4505 and 4506. These ports only need to be opened on the Salt master. Suggested methods for opening the firewall ports can be found in the official documentation.
Salt uses RSA keys to authenticate the master and the minions. By default, minions are not automatically allowed to communicate with the master. When a minion tries to connect, it sends its RSA public key, and the master must accept that key before the minion is granted access. A minion's key can exist in one of a number of states on the master: unaccepted (pending), accepted, rejected or denied.
The salt-key command is used on the master to manage minion keys. More on the salt-key command can be found in the docs.
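For example, to list and accept keys on the master (minion01 is an example minion ID):
# List all keys and their current state
sudo salt-key -L
# Accept a specific minion's key
sudo salt-key -a minion01
# Accept all pending keys
sudo salt-key -A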
The inventory in Salt is defined by the accepted minion keys. Groups of minions can be targeted in the states top.sls file. There are many options for targeting minions; the example below uses a wildcard (*).
# /srv/salt/states/top.sls
base:
  '*':
    - vim    # states to be applied
    - motd
  'master*':
    - curl
  'minion*':
    - python
  'proxy-minion*':
    - python
    - napalm
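The same matching expressions work ad hoc on the command line, where list and grain based targeting are also available (the minion IDs below are examples):
# Glob targeting
sudo salt 'minion*' test.ping
# List targeting
sudo salt -L 'minion01,minion02' test.ping
# Grain targeting
sudo salt -G 'os_family:RedHat' test.ping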
Modules are the engine room of Salt and the mechanism by which state is applied to minions. Salt has a huge list of modules that likely cover most of what you will ever need.
Module documentation can also be read from the command line with the sys.doc command.
sudo salt '*' sys.doc network
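Execution modules are called ad hoc from the master with the salt command, for example:
# Install a package on all minions
sudo salt '*' pkg.install vim
# Show network interfaces
sudo salt '*' network.interfaces
# List the execution modules available on a minion
sudo salt '*' sys.list_modules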
It is also possible to write custom Salt modules. If you need something beyond what Salt provides out of the box, the process is documented here.
States define how a minion should be configured. The state system is made up of multiple layers, ranging from the high-level description of a minion's complete configuration down to the individual function calls that are executed.
More details on the state layers can be found here.
The main states to be aware of are the high and low states.
The high state defines the complete configuration state that should be applied to a minion. The high state is defined in the top.sls file and is applied with the salt '<target>' state.apply command.
The low state is a list of function calls that will be executed.
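Both can be inspected and applied from the master (minion01 is an example minion ID):
# Apply the high state to a minion
sudo salt 'minion01' state.apply
# Show the compiled high state data without applying it
sudo salt 'minion01' state.show_highstate
# Show the low state: the ordered list of function calls
sudo salt 'minion01' state.show_lowstate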
The state tree is a collection of .sls files and directories under the states directory. The states directory is specified under the file_roots configuration parameter in the master configuration file. This is usually set to /srv/salt/states.
# /srv/salt/states
states/
├── development-tools.sls
├── epel-release.sls
├── motd
│   ├── init.sls
│   └── motd.j2
├── napalm
│   ├── init.sls
│   └── map.jinja
├── python
│   ├── init.sls
│   ├── map.jinja
│   ├── pip.sls
│   └── python.sls
├── top.sls
└── vim
    ├── init.sls
    └── map.jinja
SLS stands for SaLt State, hence the .sls extension. Salt state files are defined in YAML by default and describe the 'state' to apply to a minion. State files are passed through the Jinja and YAML renderers by default, but it is possible to use different renderers to build the .sls files.
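For example, the motd state from the tree above could be as simple as this (a minimal sketch; the actual file contents are assumed for illustration):
# states/motd/init.sls
/etc/motd:
  file.managed:
    - source: salt://motd/motd.j2
    - template: jinja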
The following sets the states directory for the base environment.
# /etc/salt/master
file_roots:
  base:
    - /srv/salt/states
More on Salt states can be found in the documentation.
Map files are dictionaries that can be used to define variables across different systems. The map file below sets the vim package name based on the os_family grain.
Map files are generally named map.jinja
{% set vim = salt.grains.filter_by({
    'Debian': {
        'pkg': 'vim',
    },
    'RedHat': {
        'pkg': 'vim-enhanced',
    },
}) %}
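The map can then be imported into the vim state. A minimal sketch, assuming the file layout shown in the state tree above (the state ID is illustrative):
# states/vim/init.sls
{% from 'vim/map.jinja' import vim with context %}

install-vim:
  pkg.installed:
    - name: {{ vim.pkg }}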
Grains are facts about a device that are collected by Salt. Grains can be used to target minions and as variables in configuration templates.
Display a list of the available grains with the grains.ls command.
sudo salt '<target>' grains.ls
Grain data can be displayed with the grains.items command.
sudo salt '<target>' grains.items
Custom grains can be set on the minions in a couple of locations: statically in the /etc/salt/grains file, or under a grains: section in the /etc/salt/minion configuration file.
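For example, static custom grains could be defined like this (the role and datacenter values are purely illustrative):
# /etc/salt/grains
roles:
  - web
datacenter: dc1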
The documentation on grains can be found here.
Pillar is minion-specific data that can be anything and is defined by the administrator. Like states, pillar uses a top.sls file to match minions to pillar data. Also like states, pillar data is stored in .sls files using YAML syntax.
The source pillar folder is defined in the /etc/salt/master config under the pillar_roots section.
# /etc/salt/master
pillar_roots:
  base:
    - /srv/salt/pillar
Pillar data can be accessed in states by calling the pillar dictionary with the variable name.
vim installed:
  pkg.installed:
    - name: {{ pillar['data_centre'] }}
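The data_centre value referenced above lives in pillar; a minimal sketch, assuming a pillar top.sls and a data.sls file (the file name and value are illustrative):
# /srv/salt/pillar/top.sls
base:
  '*':
    - data

# /srv/salt/pillar/data.sls
data_centre: dc1
The pillar data assigned to a minion can be verified with sudo salt '<target>' pillar.items.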
See the docs for more on Salt pillar.
Templates allow configuration to be reused in a DRY fashion. Salt can use many different templating languages, including Jinja2 and Mako.
The default templating engine in Salt is Jinja2. With Salt, templating is not isolated to configuration files; in fact, the .sls files are a combination of YAML and Jinja2. Below is an example of using Jinja2 in an .sls file.
# states/python/python.sls
{% from 'python/map.jinja' import required_packages with context %}
{% from 'python/map.jinja' import python_versions with context %}
python-install-required-packages:
  pkg:
    - installed
    - pkgs:
      {{ required_packages.pkgs }}

{% for python in python_versions %}
Python-{{ python.release }}.tgz:
  file.managed:
    - name: /tmp/Python-{{ python.release }}.tgz
    - source: salt://files/Python-{{ python.release }}.tgz
    - source_hash: {{ python.hash }}

extract-python-{{ python.release }}:
  archive.extracted:
    - name: /tmp
    - source: /tmp/Python-{{ python.release }}.tgz

python-build-{{ python.release }}:
  cmd.run:
    - cwd: /tmp/Python-{{ python.release }}
    - user: root
    - name: |
        ./configure --prefix=/usr/local
        make
        make altinstall
    - unless: stat /usr/local/bin/python{{ python.version }}

/usr/bin/python{{ python.release }}:
  file.symlink:
    - target: /usr/local/bin/python{{ python.version }}
{% endfor %}
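The state above could then be applied to the relevant minions with state.apply, for example (the target is illustrative):
sudo salt 'minion*' state.apply python.python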
Salt formulas are pre-written Salt states that accomplish a specific task. For example, there is a Salt formula to install Napalm, the network automation library. Salt has a GitHub organization for formulas which can be found here.
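One way to consume a formula is to add its Git repository as a gitfs remote on the master; a sketch, assuming the formula's repository name and that a gitfs provider (pygit2 or GitPython) is installed:
# /etc/salt/master
fileserver_backend:
  - roots
  - gitfs

gitfs_remotes:
  - https://github.com/saltstack-formulas/napalm-install-formula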
https://saltstack.com/salt-open-source/
https://github.com/saltstack/salt
https://docs.saltstack.com/en/develop/topics/network_automation/index.html