Currently in Migration - Jason Edelman's Old Blog

Server Bootstrap & Prep with Ansible

8/3/2014

Over the past few months, I’ve been posting about using Ansible for network automation. Changing things up a bit, this post covers using Ansible for server automation. I’ll share a few Ansible playbooks that I’ve built and have been using to bootstrap servers and prep them for applications such as OpenStack and NSX deployments.
Step 1 - Playbook 1
Creating password-less root account

Since Ansible uses SSH by default to connect to servers, the first thing that needs to be done is to copy the public key of the machine you’ll execute playbooks from onto the “new” server. To do this, I use a playbook called server_one_time_run.yml. You will notice that in this playbook, and only in this playbook, I have remote_user set to jedelman and sudo set to yes.

I’ve been testing against bare-metal and virtual machine installs using an Ubuntu ISO image.  During the OS install process, “jedelman” is the account that was created on all hosts and virtual machines. 

This playbook runs and copies the public key over into the root account on each server. We are essentially creating a password-less login for the root account, so going forward, all other playbooks can be run as root. Realize this may not be best practice!
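The real playbook is in the GitHub repo linked at the end of this post, but as a minimal sketch (the key path and task details here are assumptions based on my setup, not the exact file), it looks something like this:

```yaml
---
# server_one_time_run.yml -- minimal sketch; the real playbook is on GitHub
# Run with: ansible-playbook server_one_time_run.yml -k --ask-sudo-pass
- hosts: all
  remote_user: jedelman   # the account created during the Ubuntu install
  sudo: yes               # only this playbook uses sudo + a non-root user
  tasks:
    # Push the Ansible machine's public key into root's authorized_keys
    - name: install public key for root
      authorized_key: user=root key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
```

The authorized_key module reads the public key off the control machine with the file lookup, so nothing needs to be staged on the target first.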

Three other things to remember and take note of:

  • This first playbook needs to be run with the --ask-sudo-pass and -k options.  These allow you to enter the sudo and SSH passwords for the account you created when installing your Linux OS.  I was debugging SSH for longer than I’d like to admit before figuring out that -k was needed.
  • You also need to install openssh-server (or another of your choosing) on the server before running this playbook.
  • If, like me, you are doing many re-builds in a short amount of time, you may need to update/remove hosts from the local known_hosts file.  The ssh-keygen -R command can be used here, if needed.

After running this playbook, you can SSH to one of your servers as the root user and ensure you aren’t being prompted for a password.

Step 2 - Playbook 2
Baseline Apps Install
 
The second playbook that I’ve been running installs a slew of apps onto the system, but first adds the Ubuntu Cloud Archive (for OpenStack Icehouse) to the host’s source list.  This playbook’s name is base_apps.yml.  Note that all playbooks going forward use “remote_user: root”.

I was hitting a bug with older versions of libvirt and OVS (from the standard repo); using the cloud archive cleared it up right away.  So, net net: don’t install OVS or libvirt until after the Ubuntu Cloud keyring is added and an update and upgrade are performed, especially if you want to test OVS and libvirt with VLANs.  You can actually find this ordering in some of the OpenStack install documentation as well.
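A sketch of that ordering in playbook form (the package list is illustrative, and the repo line is an assumption taken from the Icehouse-on-Trusty install guides, so check it against your release):

```yaml
---
# base_apps.yml -- sketch of the ordering described above, not the real file
- hosts: all
  remote_user: root
  tasks:
    - name: add the Ubuntu Cloud keyring
      apt: name=ubuntu-cloud-keyring state=present

    - name: add the Icehouse cloud archive to the source list
      apt_repository: repo='deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/icehouse main' state=present

    - name: update and upgrade BEFORE installing OVS or libvirt
      apt: update_cache=yes upgrade=dist

    - name: install baseline apps (illustrative list only)
      apt: name={{ item }} state=present
      with_items:
        - qemu-kvm
        - libvirt-bin
```

The key point is that the update/upgrade task sits before any libvirt install, so the cloud archive versions are the ones that get pulled in.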

This playbook finishes by adding a new user account and installing xrdp.  I’m toying with building some hosted labs, and “netops” is the account I was using to log in over RDP, although any account will work.

Step 3 - Playbook 3
OVS Install
 
The next step is to get OVS installed, using the ovs-default.yml playbook.  Because I was testing with NSX, I used a few OVS files specific to NSX.  This playbook copies the NSX files from my machine running Ansible to each hypervisor node and installs each .deb after the initial package is untarred.  Assuming most of you aren’t testing this with NSX, you can use the off-the-shelf OVS packages instead.
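The flow is roughly copy, untar, then dpkg.  A sketch of it (the bundle and directory names are placeholders, since the NSX OVS files aren’t something I can redistribute):

```yaml
---
# ovs-default.yml -- sketch; .tgz and directory names are placeholders
- hosts: all
  remote_user: root
  tasks:
    - name: copy the OVS bundle from the Ansible machine to the hypervisor
      copy: src=files/nsx-ovs.tgz dest=/tmp/nsx-ovs.tgz

    - name: untar the bundle
      command: tar -xzf /tmp/nsx-ovs.tgz chdir=/tmp

    - name: install each .deb from the bundle
      shell: dpkg -i /tmp/nsx-ovs/*.deb
```

If you’re going with stock OVS instead, a single task like apt: name=openvswitch-switch state=present replaces all of the above.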

Step 4 - Playbook 4
OVS & Network Config
 
Once OVS is installed, the next step is to get it configured, this time using the playbook called ovs-config.yml.  Because I was testing a few different variations (not always with a management interface outside of OVS), many of the lines are still commented out, but if you have a dedicated management interface, most of it should run right through.  You can still try it without one, but you’ll get disconnected after making a change to the NIC you are SSH’d in through :).

The physical servers I’m testing with have two physical NICs.  Most often, one was used for management and one for production (virtual machine) traffic.  A single OVS bridge was used for each, carrying multiple VLANs over the links.  Note: I only have br-mgmt’s config in the playbook right now.
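The br-mgmt portion boils down to creating the bridge and attaching the management NIC.  A sketch using Ansible’s OVS modules (the NIC name eth0 is an assumption about your hardware):

```yaml
---
# ovs-config.yml (excerpt) -- sketch of the br-mgmt piece only
- hosts: all
  remote_user: root
  tasks:
    - name: create the management bridge
      openvswitch_bridge: bridge=br-mgmt state=present

    - name: attach the management NIC to the bridge (eth0 is an assumption)
      openvswitch_port: bridge=br-mgmt port=eth0 state=present
```

This is exactly the step that will drop your SSH session if eth0 is the interface you’re connected through, per the warning above.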

Note: there is more background required to understand libvirt with OVS and VLANs.  This is not that post, although you should be able to learn more by reading through the playbooks.

You will also notice at line 31 of ovs-config.yml that a template file is being copied (again, from my machine) onto the target node.  This replaces the /etc/network/interfaces file on the target machine with the updated settings I wanted for each node.
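The task itself is a one-liner with Ansible’s template module; the source path below is an assumption about repo layout:

```yaml
    # sketch of the template task around line 31 of ovs-config.yml
    - name: push the per-node network config
      template: src=templates/interfaces.j2 dest=/etc/network/interfaces
```

Because interfaces.j2 is a Jinja2 template, per-host variables (addresses, bridge names) can be filled in from inventory or host_vars, which is how each node ends up with its own settings from one file.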

While I was going through the libvirt, VLANs, and OVS build out, Scott Lowe (@scott_lowe) was a huge help.  Big shout out to Scott!!  Hopefully, in the near future, I can write a post on that topic on its own, without Ansible, to articulate some of the nuances encountered along the way!

All of the playbooks described can be found on my GitHub page here.

Thanks,
Jason

Twitter: @jedelman8

2 Comments
termlen0
8/4/2014 09:53:27 pm

Jason,
Have you tried https://www.pubkey.in/console for the public key management? Any thoughts on this solution?

Reply
Jason Edelman
9/15/2014 09:16:56 pm

Hi termlen0,

I have not tried that solution...any luck on your side? Curious to hear how it went if you tried.

Reply




    Author

    Jason Edelman, Founder of Network to Code, focused on training and services for emerging network technologies. CCIE 15394.  VCDX-NV 167.

