Installation
First, I updated and upgraded the system.
sudo apt update && sudo apt upgrade -y
Then I installed essential build tools.
sudo apt install -y software-properties-common
sudo apt install -y curl wget
sudo apt install -y build-essential
software-properties-common
: Lets you manage and add repositories (needed for PPAs).
curl, wget
: Tools to fetch files and data over the internet.
build-essential
: Compilers and tools to build software from source (handy for dependencies).
Next, I made sure Python 3 is installed.
python3 --version
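If that command shows Python 3 is missing, it can be installed from the same repositories (a minimal sketch; the exact package set may vary by release):
sudo apt install -y python3 python3-pip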
I then installed Ansible.
sudo apt install -y ansible
This installs:
- ansible-core (the main engine)
- ansible (community collections)
- All necessary dependencies
Verification
I then verified the Ansible installation.
ansible --version
I checked ansible-playbook.
ansible-playbook --version
and ansible-galaxy.
ansible-galaxy --version
I then checked if Ansible can import properly.
python3 -c "import ansible; print('Ansible import successful')"
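As an extra smoke test, an ad-hoc ping against the implicit localhost (Ansible connects to it locally by default, so no inventory is needed yet) exercises a real module end to end:
ansible localhost -m ping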
Additional Dependencies
Next I installed Git.
sudo apt install -y git
Then configured my name and email.
git config --global user.name "Nesto"
git config --global user.email "[email protected]"
These commands configure your Git identity – the information that Git attaches to every commit you make.
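To confirm the identity was stored, the global configuration can be listed:
git config --global --list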
Next I installed rsync, which is used by Ansible for file transfers.
sudo apt install -y rsync
Then installed tree, to better see directory structure.
sudo apt install -y tree
Then installed jq, which is useful for debugging.
sudo apt install -y jq
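As one illustration of jq in this workflow (assuming the /opt/ansible project and inventory created later in the Configuration section), it can filter the JSON that ansible-inventory prints:
ansible-inventory --list | jq '.all.children'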
Configuration
First, I made the main directory for Ansible.
sudo mkdir -p /opt/ansible
Next I changed ownership from root to my current user.
sudo chown $USER:$USER /opt/ansible
I then made the directory structure.
cd /opt/ansible
mkdir -p {inventories/{production,staging,development},playbooks,roles,group_vars,host_vars,files,templates,vars,vault,logs}
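Since tree was installed earlier, the resulting layout is easy to review:
tree -d /opt/ansible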
Then I made the ansible.cfg file.
sudo nano /opt/ansible/ansible.cfg
[defaults]
# Inventory location
inventory = inventories/hosts.yml
# SSH settings
host_key_checking = False
remote_user = nesto
private_key_file = ~/.ssh/id_rsa
# Performance settings
forks = 20
gathering = smart
fact_caching = memory
stdout_callback = yaml
# Logging
log_path = logs/ansible.log
# Role and collection paths
roles_path = roles:~/.ansible/roles
collections_path = ~/.ansible/collections
[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
pipelining = True
inventory = inventories/hosts.yml
: Points Ansible to your default inventory file.
host_key_checking = False
: Disables SSH host key verification.
remote_user = nesto
: Default SSH user Ansible will use when connecting to hosts.
private_key_file
: Default SSH private key for authentication.
forks = 20
: Allows Ansible to run tasks on up to 20 hosts in parallel.
gathering = smart
: Only gathers facts when needed.
fact_caching = memory
: Stores host facts in memory to avoid re-gathering within the same run.
stdout_callback = yaml
: Makes output cleaner, in YAML format instead of the default JSON-like text.
log_path
: Logs all playbook runs to this location.
become = True
: Enables privilege escalation by default.
become_method = sudo
: Uses sudo to escalate.
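To confirm Ansible actually picks these settings up, the non-default values can be dumped from inside the project directory:
cd /opt/ansible
ansible-config dump --only-changed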
Then made a test inventory file.
sudo nano /opt/ansible/inventories/hosts.yml
all:
  hosts:
    localhost:
      ansible_connection: local
  children:
    webservers:
      hosts:
        # Add your web servers here
        # web1.example.com:
        # web2.example.com:
    databases:
      hosts:
        # Add your database servers here
        # db1.example.com:
        # db2.example.com:
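To check that the inventory parses (run from /opt/ansible), Ansible can list the groups and hosts it sees:
ansible-inventory --graph
ansible all --list-hosts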
I then needed to create an SSH key.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/ansible_rsa -C "ansible@$(hostname)"
ssh-keygen
: Tool used to generate SSH key pairs.
-t rsa
: Specifies the key type.
-b 4096
: Sets the key size.
-f ~/.ssh/ansible_rsa
: Defines the output file for the private key.
-C "ansible@$(hostname)"
: Adds a comment to the key, making it easier to identify later.
I then set permissions.
chmod 600 ~/.ssh/ansible_rsa
chmod 644 ~/.ssh/ansible_rsa.pub
To see the public key I use the following command:
cat ~/.ssh/ansible_rsa.pub
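For Ansible to log into a managed node with this key, the public half has to end up in that node's authorized_keys. The host below is just a placeholder matching the commented inventory entries:
ssh-copy-id -i ~/.ssh/ansible_rsa.pub [email protected]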
I then updated the ansible.cfg file to use the SSH key I just created.
sed -i 's|private_key_file = ~/.ssh/id_rsa|private_key_file = ~/.ssh/ansible_rsa|' /opt/ansible/ansible.cfg
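A quick grep confirms the substitution took effect:
grep private_key_file /opt/ansible/ansible.cfg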
I created a playbook for testing.
sudo nano /opt/ansible/playbooks/test.yml
---
- name: Test Ansible Installation
  hosts: localhost
  gather_facts: yes
  tasks:
    - name: Display system information
      debug:
        msg: "Ansible is running on {{ ansible_distribution }} {{ ansible_distribution_version }}"

    - name: Check disk space
      shell: df -h /
      register: disk_space

    - name: Display disk space
      debug:
        var: disk_space.stdout_lines
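Before running it, a syntax check (from /opt/ansible) catches YAML or module-name mistakes early:
ansible-playbook playbooks/test.yml --syntax-check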
I then ran the playbook to verify everything works.
ansible-playbook playbooks/test.yml
I then installed common collections.
ansible-galaxy collection install community.general
ansible-galaxy collection install ansible.posix
ansible-galaxy collection list
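For repeatable setups, the same collections can be pinned in a requirements file and installed in one command. The path collections/requirements.yml below is only a suggested location, not one created earlier. The file would contain:
collections:
  - name: community.general
  - name: ansible.posix
and is installed with:
ansible-galaxy collection install -r collections/requirements.yml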
Lastly, I created a vault password file.
echo "HardPassword123" > ~/.ansible_vault_pass
chmod 600 ~/.ansible_vault_pass
What it does:
- Creates a hidden file named .ansible_vault_pass in your home directory.
- Writes the text HardPassword123 into that file.
- This password will be used by Ansible to encrypt and decrypt sensitive data with Ansible Vault.
Normally, when you run a playbook with Vault-protected files, Ansible prompts you for the password. By creating this file, you allow Ansible to automatically read the password without prompting (if configured in ansible.cfg). This is useful for automation (CI/CD pipelines, cron jobs, etc.).
I then needed to update the ansible.cfg file.
echo "vault_password_file = ~/.ansible_vault_pass" >> /opt/ansible/ansible.cfg
Optimization
First, I increased file descriptor limits.
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf
echo
: Prints the line of text.
tee -a
: Appends (-a) the text to the file, using sudo so it has root permission.
/etc/security/limits.conf
: Config file that defines per-user or global limits for processes.
Why This Matters for Ansible (and servers in general)
- By default, Linux often sets fairly low limits (like 1024 open files).
- Ansible and the services it manages may need to handle many files, sockets, or connections at once.
- Raising limits to 65536 ensures you don't hit "Too many open files" errors under load.
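The new limit only applies to fresh login sessions; after logging out and back in, it can be verified with:
ulimit -n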
Then configured sysctl for better networking.
echo "net.core.rmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.core.wmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
net.core.rmem_max
: This sets the maximum receive buffer size for network sockets.
net.core.wmem_max
: This sets the maximum send buffer size for network sockets.
Ansible itself doesn't strictly require this, but:
- When you're running automation on servers with many connections (e.g., provisioning databases, message queues, or apps with high network demand), bigger socket buffers reduce latency and packet loss.
- It's especially useful for systems that:
- Handle large payloads (logs, metrics, file transfers).
- Operate over high-latency or high-bandwidth networks (e.g., cloud, WAN).
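Once remote servers are managed through Ansible, the same tuning can be applied idempotently with the ansible.posix collection installed earlier. A minimal task sketch (the name and value mirror the manual commands above):
- name: Raise receive buffer limit
  ansible.posix.sysctl:
    name: net.core.rmem_max
    value: "134217728"
    state: present
    reload: yes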
I then updated ansible.cfg for better performance.
sudo nano /opt/ansible/ansible.cfg
# Performance optimizations (in the [defaults] section)
forks = 50
strategy = free
host_key_checking = False
fact_caching_timeout = 3600
# Performance optimizations (in the [ssh_connection] section)
ssh_args = -o ControlMaster=auto -o ControlPersist=300s -o PreferredAuthentications=publickey
pipelining = True
I then set restrictive permissions on the Ansible directory.
sudo chmod 750 /opt/ansible
sudo chmod 600 /opt/ansible/ansible.cfg