Wednesday, July 18, 2018

Python script to generate Ansible ini inventory file from csv file

An Ansible in-memory inventory created by add_host is often used in AWS EC2 provisioning. It is easy to generate, but it has a drawback: because the inventory lives only in memory, all server post build tasks have to go into one big playbook. That makes it hard to re-run failed tasks after a failure, and existing post build playbooks can't be reused.

I created a Python script to generate a temporary inventory file from the csv file used in EC2 provisioning. The inventory file can then be used by multiple post build playbooks. The file name is static, but it will not be overwritten by a parallel job if you set the concurrent build limit to 1 in the CI/CD server.
Some AWS EC2 instances in my company need a static hostname. The ip field is left empty and is filled in automatically with the EC2 private IP returned right after provisioning, and there is a playbook to create the host record in Infoblox. The group column maps to Ansible groups (and their group vars); multiple groups are separated by semicolons and the order is important: vars in the last group take precedence.
The csv file
name,ip,group,zone,env
awselk1,,elasticsearch;elasticsearch-master,2a,prod
awselk2,,elasticsearch;elasticsearch-data,2a,prod
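
To make the column handling concrete, here is a minimal sketch (not part of the script) of how a single csv row is turned into an FQDN and its group memberships; the hard-coded row below is just the first data line of the sample above.

# Sketch only: how one csv row maps to inventory entries
row = {'name': 'awselk1', 'ip': '', 'group': 'elasticsearch;elasticsearch-master',
       'zone': '2a', 'env': 'prod'}
net_dn = {'prod': 'prod.example.com'}   # env -> DNS domain, same mapping as the script

fqdn = row['name'].strip() + '.' + net_dn[row['env'].strip()]
groups = [g.strip() for g in row['group'].split(';')]

print(fqdn)    # awselk1.prod.example.com
print(groups)  # ['elasticsearch', 'elasticsearch-master']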

The script
#!/usr/bin/python
# Takes a CSV file and writes hosts-aws-tmp.ini with Ansible host inventory data
import csv
import sys
import os

if len(sys.argv) <= 1:
    print("Usage: " + sys.argv[0] + " input-filename")
    sys.exit(1)

# map the env column to a DNS domain suffix
net_dn = {'prod': 'prod.example.com', 'preprod': 'preprod.example.com',
          'test': 'test.example.com', 'dev': 'dev.example.com'}
groups = []      # group names, in the order they first appear in the csv
envs = set()     # environments seen in the csv (must be exactly one)
hosts_ini = {}   # group name -> list of FQDNs

csvname = sys.argv[1]
scriptpath = os.path.dirname(sys.argv[0])

# the generated inventory always has the same name, next to this script
ansible_ini = os.path.join(scriptpath, 'hosts-aws-tmp.ini')

hosts_text = ''
with open(csvname) as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        domain = net_dn[row['env'].strip()]
        line = row['name'].strip() + '.' + domain
        envs.add(row['env'])
        # support multiple groups separated by ;
        for g in row['group'].strip().split(';'):
            g = g.strip()
            if g not in groups:
                groups.append(g)
            hosts_ini.setdefault(g, []).append(line)

if len(envs) != 1:
    print("ERROR: only a single environment is supported!")
    sys.exit(1)
env = list(envs)[0]

# [env:children] section listing every group
env_text = "[" + env + ":children]" + "\n" + "\n".join(groups)
# [env:vars] section with the connection defaults
vars_text = "\n\n[" + env + ":vars]"
vars_text += """
ansible_user=ansible
ansible_ssh_private_key_file=~/.ssh/id_rsa
ansible_become=true
ansible_become_user=root
ansible_become_method=sudo
ansible_gather_facts=no
"""
vars_text += "aws_env=aws-" + env + '\n'

# generate the group sections in the same order as the input
for g in groups:
    hosts_text += '\n[' + g + ']\n'
    hosts_text += '\n'.join(hosts_ini[g])
    hosts_text += '\n'

all_text = env_text + vars_text + hosts_text
print(all_text)
with open(ansible_ini, 'w') as new_ini_file:
    new_ini_file.write(all_text)
print("INFO: Generated Ansible host inventory file: " + ansible_ini)


The generated Ansible inventory file
[prod:children]
elasticsearch
elasticsearch-master
elasticsearch-data

[prod:vars]
ansible_user=ansible
ansible_ssh_private_key_file=~/.ssh/id_rsa
ansible_become=true
ansible_become_user=root
ansible_become_method=sudo
ansible_gather_facts=no
aws_env=aws-prod

[elasticsearch]
awselk1.prod.example.com
awselk2.prod.example.com

[elasticsearch-master]
awselk1.prod.example.com

[elasticsearch-data]
awselk2.prod.example.com
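
Because the file name is fixed, the post build playbooks can all point at the same inventory. Below is a rough sketch of how a CI/CD job might chain them; the script name (csv2ini.py), the csv name (servers.csv) and the playbook names are only assumptions for illustration.

#!/usr/bin/python
# Sketch only: regenerate the inventory, then run existing post build
# playbooks against it. File names below are examples, not from the post.
import subprocess

subprocess.check_call(['./csv2ini.py', 'servers.csv'])
for playbook in ['infoblox-dns.yml', 'elasticsearch.yml']:
    subprocess.check_call(['ansible-playbook', '-i', 'hosts-aws-tmp.ini', playbook])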
