Wednesday, July 11, 2018

Ansible - turn values in a CSV file into a list of dictionaries

Ansible can load YAML values into variables easily. However, I am still a fan of plain CSV files, because they are easy to edit, have no strict format, and avoid repeating variable names.
How to convert CSV into YAML?
I used to use a Python script to achieve this; actually, Ansible can do it natively.

Given this CSV file for AWS EC2 instances:

$ cat vars/aws/ec2data.csv
name,ip,zone,group,env
splunk01,10.1.1.1,2a,splunk,prod
splunk02,10.1.1.2,2b,splunk,prod
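
The playbook below uses awk to skip the header, comments, and blank lines, and to pick columns 1, 2, 4, and 5 (dropping the zone column). As a quick sanity check, that filter can be sketched in plain Python (the sample data is inlined here for illustration):

```python
# Mimic: awk -F',' '!/^#/ && !/^$/ && NR!=1 { print $1,$2,$4,$5 }'
lines = """name,ip,zone,group,env
splunk01,10.1.1.1,2a,splunk,prod
splunk02,10.1.1.2,2b,splunk,prod""".splitlines()

out = []
for nr, line in enumerate(lines, start=1):
    # Skip comment lines, blank lines, and the header row (NR == 1)
    if line.startswith('#') or not line.strip() or nr == 1:
        continue
    f = line.split(',')
    # Keep name, ip, group, env; drop zone (column 3)
    out.append(' '.join([f[0], f[1], f[3], f[4]]))

print(out)  # ['splunk01 10.1.1.1 splunk prod', 'splunk02 10.1.1.2 splunk prod']
```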

The playbook:

---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    ec2data_file: vars/aws/ec2data.csv

  tasks:
  - name: reading {{ ec2data_file }}
    command: /usr/bin/awk -F',' '!/^#/ && !/^$/ && NR!=1 { print $1,$2,$4,$5 }' {{ ec2data_file }}
    register: csvout
  # Turn ec2_host into a list with the default filter and append a dictionary on each loop.
  # split() is a Python string method; the default delimiter is whitespace.
  - name: turn csv output to list of dict
    set_fact:
      ec2_host: "{{ ec2_host|default([]) + [ { \
                      'name': item.split().0,  \
                      'ip':   item.split().1,  \
                      'group':item.split().2,  \
                      'env':  item.split().3 } ] }}"
    with_items: "{{csvout.stdout_lines}}"

  - debug: msg="{{item.name}},{{item.ip}}" verbosity=1
    with_items: "{{ ec2_host }}"
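
The set_fact task above is just this loop expressed in Jinja2: each line of the awk output is split on whitespace and its fields become dictionary values. A minimal Python sketch of the same step (with the awk output hard-coded for illustration):

```python
# Mimic the set_fact task: each line of csvout.stdout_lines becomes a dict
csvout_stdout_lines = [
    "splunk01 10.1.1.1 splunk prod",
    "splunk02 10.1.1.2 splunk prod",
]

ec2_host = []
for item in csvout_stdout_lines:
    fields = item.split()  # same as item.split() in the Jinja expression
    ec2_host.append({
        'name':  fields[0],
        'ip':    fields[1],
        'group': fields[2],
        'env':   fields[3],
    })

print(ec2_host[0]['name'], ec2_host[0]['ip'])  # splunk01 10.1.1.1
```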


The result (run without -v, so the debug task with verbosity=1 is skipped, but the built dictionaries are still visible in each item):
skipping: [localhost] => (item={'ip': u'10.1.1.1', 'group': u'splunk', 'name': u'splunk01', 'env': u'prod'})
skipping: [localhost] => (item={'ip': u'10.1.1.2', 'group': u'splunk', 'name': u'splunk02', 'env': u'prod'})
