How to use more than one inventory file with a playbook in a single command

If you want to run a playbook against more than one inventory file in a single command, just put every inventory file you want inside a directory and run:

ansible-playbook -i <inventory_file_directory> <playbook>

For example:

ansible-playbook -i inventory/rackspace_prod/ update_config.yml

This way you can easily run a playbook against all the servers you need, no matter which inventory file they live in.

Let’s go a little deeper into the example above, because you might be thinking: why not create one bigger inventory file with everything inside it?

Imagine that you have one static inventory file (static) and one dynamic inventory script, for example the Rackspace one (rax.py). Both define a group called webservers, because you use Rackspace Cloud Servers to scale your static webservers up and down.

If you want to operate on all your webservers (static and dynamic), you could run the command twice:

ansible webservers -i inventory/static -m ping
ansible webservers -i inventory/rax.py -m ping

But you can also create a directory called rackspace_prod, put both inventory files in it, and run:

ansible webservers -i inventory/rackspace_prod/ -m ping
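
For reference, the inventory directory from the example would simply contain the two files side by side (file names from the example above; the group contents are up to you):

```
inventory/rackspace_prod/
├── static    # static inventory file, e.g. with a [webservers] group
└── rax.py    # Rackspace dynamic inventory script
```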

I normally use this feature to update configuration files, for example for Apache, load balancers, or /etc/hosts.

For example, you can update your HAProxy configuration with both your static and dynamic webservers by using a template file:

[...]
backend my_backend
    option httpchk
    cookie JSESSIONID prefix nocache
    balance roundrobin
{% for host in groups['webservers'] %}
    server {{ hostvars[host]['ansible_hostname'] }} {{ hostvars[host]['ansible_eth1']['ipv4']['address'] }}:80 check cookie {{ hostvars[host]['ansible_hostname'] }}
{% endfor %}
[...]

Role: Android-SDK for Linux, Win and Mac

This role allows you to install Android-SDK for Linux, Windows or Mac:

https://github.com/rcastells/ansible/tree/master/roles/androidsdk

Ubuntu OS

In order to run the playbook for Ubuntu OS you must have a user with SSH access to the server and sudo privileges to become root.

Edit your inventory file and add an alias for your Ubuntu server if necessary:

ubuntu ansible_host=<ubuntu_host>

Create your playbook file (for example androidsdk.yml) and add the ubuntu hosts:

---
# Playbook that installs AndroidSDK
- hosts: ubuntu
  roles:
    - androidsdk

Then run the playbook as follows:

ansible-playbook -i <your_inventory> androidsdk.yml -u <user> -b -K -k

Notes:
‘-k’ will ask you for your SSH password. You can omit it if you use SSH keys.
‘-u <user>’ is the user used to connect to the remote server. You can omit it to connect with your current username.
‘-b’ becomes root on the remote server. Mandatory.
‘-K’ will ask you for the sudo password. Mandatory if you need a password to become root.

As you will see, the playbook shows the list of available packages and by default it installs packages 1, 2, 3, 5, and 37.

You can change which packages you want to install by editing this variable in roles/androidsdk/defaults/main.yml:

android_tools_filter: "1,2,3,5,37"

You can also set this variable per host, so you can specify different packages for each host or group of hosts.
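
For example, a per-host override could look like this (hypothetical file; the alias matches the inventory entry above):

```yaml
# host_vars/ubuntu.yml (hypothetical per-host override)
android_tools_filter: "1,2,3"
```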

Windows OS

In order to run the playbook for Windows you must have credentials for an Admin account.

Edit your inventory file and add an alias for your Windows server if necessary:

windows ansible_host=<windows_host>

Edit or create the playbook file androidsdk.yml and put the windows host alias in the hosts option as follows:

---
# Playbook that installs AndroidSDK + JAVA
- hosts: windows
  roles:
    - androidsdk

 

Ansible will use a WinRM connection. In order to run Ansible against the remote host you must first run the ConfigureRemotingForAnsible.ps1 script on the Windows host, so connect to your Windows host and run that PowerShell script. You must also be able to connect to the WinRM secure port (5986 by default).
Create or edit the host_vars file for this Windows host (not included in GitHub) and put your Admin account credentials in host_vars/windows.yml as follows (here you can also change the WinRM port if necessary):

ansible_user: Administrator
ansible_password: "password_example"
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore

Remember to run the PowerShell script ConfigureRemotingForAnsible.ps1 on the remote host before running the playbook.

Then run the playbook as follows:

ansible-playbook -i <your_inventory> androidsdk.yml

macOS

In order to run the playbook for macOS you must have a user with SSH access to the server and sudo privileges to become root.

Edit your inventory file and add an alias for your Mac server if necessary:

mac ansible_host=<mac_host>

Edit or create the androidsdk.yml file and put the mac host alias in the hosts option as follows:

---
# Playbook that installs AndroidSDK + JAVA
- hosts: mac
  roles:
    - androidsdk

Then run the playbook as follows:

ansible-playbook -i inventory/production androidsdk.yml -u <user> -b -K -k

Notes:
‘-k’ will ask you for your SSH password. You can omit it if you use SSH keys.
‘-u <user>’ is the user used to connect to the remote server. You can omit it to connect with your current username.
‘-b’ becomes root on the remote server. Mandatory.
‘-K’ will ask you for the sudo password. Mandatory if you need a password to become root.

You can change which packages you want to install by editing this variable in roles/androidsdk/defaults/main.yml:

android_tools_filter: "1,2,3,5,37"

You can also set this variable per host, so you can specify different packages for each host or group of hosts.

How to forward your SSH key to hosts in Ansible

I prefer to work with SSH keys to connect to all my hosts; it makes the daily work easier. Now imagine that you need to use your SSH key from a host. For example, I needed that in order to check out an SVN repo: you can’t specify an SSH key with the subversion module.

Just edit your /etc/ansible/ansible.cfg file and look for [ssh_connection]. There you can set all the options you want for your SSH connection, a really useful feature. Just add ForwardAgent=yes to ssh_args:

[ssh_connection]
ssh_args = -o ForwardAgent=yes

Now Ansible will forward your key on every SSH connection.
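
As a sketch, with agent forwarding enabled the SVN checkout mentioned above can authenticate with your local key (the repository URL and destination path here are made up):

```yaml
- name: "Checkout repo over svn+ssh (hypothetical URL)"
  subversion:
    repo: svn+ssh://svn.example.com/repo/trunk
    dest: /opt/app
```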

Create a log entry for every Ansible execution

Besides sending an email when you perform a task with Ansible, I also recommend using log files.

- file: path=/var/log/ansible state=touch
- name: "Write into log"
  shell: echo "ANSIBLE | {{ ansible_date_time.iso8601 }} | <USER> | <ACTION> | <WHERE>" >> /var/log/ansible

Then you can easily review older executions on that server in that log file, and even ship it to a log manager.
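
As a sketch, the same entry could be written with the lineinfile module instead of shell redirection (each run appends a new line because the timestamp makes it unique):

```yaml
- name: "Write into log"
  lineinfile:
    path: /var/log/ansible
    line: "ANSIBLE | {{ ansible_date_time.iso8601 }} | <USER> | <ACTION> | <WHERE>"
    create: yes
```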

How to know how much data has been transferred by the copy module

Ansible doesn’t have a progress bar, and sometimes when you use the copy module on a poor network or transfer a big file, the playbook gets stuck here:

TASK [Copy war into directory /deploy] ***************

And we don’t know what’s going on: whether it’s working, going fast, slow,…

How can we know the status of our copy?

You should connect to the server where you’re copying your file and go into the temporary directory that Ansible creates in the home of the user it connects with. So, if your playbook looks like:

- hosts: appserver
  user: rcastells
  become: yes

You should connect to appserver and go into the temporary directory created by Ansible:

/home/rcastells/.ansible/tmp/ansible-tmp-1462436406.52-187369121476281

Take into consideration that inside tmp you will find many different temporary directories, one created by Ansible for every playbook execution. You’re only interested in the latest one.

Inside this temporary directory you will find a file called ‘source’ that keeps growing in size: this temporary file is the one the copy module is using.

56M -rw-rw-r-- 1 rcastells rcastells 56M May 5 16:34 source

It’s not a really nice solution, but at least you can see what’s going on and how fast your copy is going.
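
The check above can be scripted. Here is a self-contained sketch that fakes the layout under /tmp (on a real host you would point at ~/.ansible/tmp and skip the setup lines):

```shell
# Fake the Ansible temp layout under /tmp so this sketch is self-contained;
# on a real host, look under ~/.ansible/tmp instead of $TMP.
TMP=/tmp/ansible-demo/.ansible/tmp
d="$TMP/ansible-tmp-1462436406.52-187369121476281"
mkdir -p "$d"
head -c 1048576 /dev/zero > "$d/source"   # stand-in for the growing upload

# The newest execution directory is the one the running copy is using.
latest=$(ls -td "$TMP"/ansible-tmp-* | head -1)
ls -lh "$latest/source"
```

Re-running the last two lines every few seconds shows how fast ‘source’ is growing.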

Usage of serial mode for certain playbooks (like an application deployment)

Ansible runs in parallel mode, which means that every task you define in your playbook will by default run on all hosts before proceeding to the next task.

How to avoid that? By using serial mode.

In a real scenario you could have two application servers running your app and need to update it. Probably you will have a playbook that does all the job for you, but if your playbook is as follows:

- hosts: appservers
  become: yes
  tasks:
    - name: "Deploy your new app"
      shell: <whatever to deploy>

    - name: "Restart of your service"
      shell: <whatever to restart>

And your inventory looks like:

[appservers]
app01
app02

You will restart your service on both servers at the same time. You could use the wait_for module, for example to wait for a port to come up, but as a good practice it is better to add serial mode. So, if you define your playbook as follows:

- hosts: appservers
  serial: 1
  become: yes
  tasks:
    - name: "Deploy your new app"
      shell: <whatever to deploy>

    - name: "Restart of your service"
      shell: <whatever to restart>

Your tasks will be performed one host at a time. So, if you do things properly, you can avoid downtime for your application.
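
As a sketch, the wait_for safety net mentioned above could sit between the deploy and the next host (port 8080 and the timings are assumptions about your app):

```yaml
- name: "Wait for the app to come back"
  wait_for:
    port: 8080
    delay: 5
    timeout: 300
```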

That was a simple example, but you can have more than two servers:

[appservers]
app[01:06]

And then specify the serial value that you want:

  • serial: 2: tasks will be performed on 2 hosts at a time.
  • serial: 3: tasks will be performed on 3 hosts at a time.
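
serial also accepts a percentage of the group, which scales with the inventory size:

```yaml
- hosts: appservers
  serial: "50%"   # 3 hosts at a time with app[01:06]
```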

In another entry we will talk about combining max_fail_percentage with serial in order to manage our deployments in case we want to do a rollback.

Usage of inventory file as conditional

I’ve used the name of the inventory file as a conditional for some playbooks. This is a weird example where you use the same server to build the WAR for two different environments, using two different scripts. So imagine that you want to use the same playbook to deploy your application.

Two (not recommended) solutions:

  1. Write two different playbooks: if you make a change in one playbook you must remember to make it in the other too. Try to avoid that.
  2. Use vars_prompt: of course you can prompt the user for where the task will be performed, but you already have this information in your inventory file, so there is no need to ask.

Imagine that you have this task to be performed on server A:

- name: "War for Europe"
  shell: /opt/scripts/buildWarEurope.sh

And for the USA, a different script, also on server A:

- name: "War for USA"
  shell: /opt/scripts/buildWarUSA.sh

Those scripts are on the same server, so you can’t rename them and use the same name in your Ansible task.

What I recommend in that scenario is to use the inventory file name in your conditional. You should have a directory called inventory with your environment files:

inventory/europe_prod
inventory/usa_prod

So, if you run your playbooks using inventory files (which you MUST do) as follows:

ansible-playbook -i inventory/europe_prod deploy.yml

This information is available in the inventory_file variable (one of Ansible’s magic variables, not a gathered fact), so you can use it later as follows:

- name: "War for Europe"
  shell: /opt/scripts/buildWarEurope.sh
  when: ( 'europe' in inventory_file )
- name: "War for USA"
  shell: /opt/scripts/buildWarUSA.sh
  when: ( 'usa' in inventory_file )

And that’s it: one playbook and no vars_prompt.
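
If you want to check exactly which string your conditional will match against, a quick debug task shows the variable (its value depends on how you invoked ansible-playbook):

```yaml
- name: "Show inventory file in use"
  debug:
    var: inventory_file
```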

How to integrate your existing roles with vagrant

If you have created an Ansible role and you want to reuse it with Vagrant, there are a couple of things that you must know to run this role in Vagrant.

As you may know, you can use Ansible playbooks with Vagrant. In fact, it’s really easy. In your Vagrantfile you will have something like:

Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end

But if you have a role and you want to apply it to the Vagrant virtual machine you need to:

  1. Create an inventory alias for that server as follows in your inventory file:
    vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
  2. Create a playbook in the same directory as the Vagrantfile and specify there the alias created above. In our case we will call that file my_role.yml. Apply also the role (or roles) that you want to run on your server:
      - hosts: vagrant
        user: vagrant
        become: yes
        roles:
          - my_role
  3. Put the roles directory in the same location as the Vagrantfile, with the roles you need for your Vagrant virtual machine. In our example, just a directory called ‘my_role’ with our templates, files, default vars, and main tasks.
  4. Finally, in the Vagrantfile you must configure the name of the virtual machine to be the same as the alias specified above:
Vagrant.configure(2) do |config|
  config.vm.define "vagrant"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "my_role.yml"
  end
end

Then you can easily run Vagrant with your roles.
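
Putting it all together, the directory containing your Vagrantfile would look roughly like this (only tasks/main.yml is strictly required inside the role):

```
.
├── Vagrantfile
├── my_role.yml
└── roles/
    └── my_role/
        ├── tasks/main.yml
        ├── templates/
        ├── files/
        └── defaults/main.yml
```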

Run Ansible tasks on a remote server using an SSH tunnel

If you want to run an Ansible playbook on a remote server through an SSH tunnel, you can use the following procedure:

Create an entry in your inventory file configuring the host as localhost with the port you want to use for the SSH tunnel. In our example we will use ‘tunnel’ as the server alias:

tunnel ansible_host=127.0.0.1 ansible_port=2222

The playbook should proceed as follows:

  1. Connect to localhost in order to create the tunnel.
  2. Connect to localhost through the tunnel and run the tasks.
  3. Connect to localhost in order to tear down the tunnel.

So first of all, kill any remaining SSH sessions using the port configured above (if any) and create the new connection. Note that we also prompt for the remote server IP (or hostname) and the remote SSH port; you don’t need to do that if you always connect to the same server or already know the remote SSH port, in which case you can hardcode them in your playbook instead of using variables:

- hosts: 127.0.0.1
  connection: local
  vars_prompt:
    - name: "hostname"
      prompt: "Enter remote server hostname or IP"
      private: no
    - name: "ssh_port"
      prompt: "Enter remote ssh port"
      private: no
  tasks:
    - name: "Kill previous sessions on local port"
      shell: ps axuf | grep 2222 | grep ssh | grep -v grep | awk '{print $2}' | xargs -r kill -9

    - name: Create SSH tunnel
      shell: ssh -fN -L 2222:localhost:{{ ssh_port }} {{ hostname }}

Now that the tunnel has been established you can run commands on the remote server using the following code:

- hosts: tunnel
  user: <user with ssh access>
  tasks:
    - name: "Remote task"
      ...

It’s important to remark that you must know which user has SSH access to that server, and you must use either key authentication or the same credentials used for localhost.

To finish your playbook properly, it’s better to kill your SSH tunnel:

- hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: "Killing ssh process"
      shell: ps axuf | grep 2222 | grep ssh | grep -v grep | awk '{print $2}' | xargs -r kill -9