Skytap Metadata into Facter Facts

Puppet's Facter is a great way to collect data for use with Puppet. With our dev/test workflow living in Skytap, we wanted better access to the Skytap metadata so we could leverage it in Puppet.

Here is our solution to that...

The module 'module_skytap_metadata' queries the Skytap metadata service, lightly re-parses the returned JSON, and adds the values as Facter facts.

It takes all of the metadata values and prepends "skytap_" to their names, grouping them together within the full list of Facter facts.
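As a rough sketch (not the actual module code), the prefixing step might look like the following, assuming the metadata arrives as a flat JSON object:

```ruby
require 'json'

# Hypothetical sketch of the prefixing step: take the metadata service's
# JSON and build fact names by prepending "skytap_" to each key.
def skytap_facts(metadata_json)
  JSON.parse(metadata_json).each_with_object({}) do |(key, value), facts|
    facts["skytap_#{key}"] = value.to_s
  end
end
```

In the real module, each resulting entry would then be registered with `Facter.add` and a `setcode` block so it shows up alongside the built-in facts.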

As an added bonus, it reads the VM's userdata and, if there's YAML there, parses that as well, creating 'skytap_userdata_xxxxx' facts. This allows a tester to add values to the userdata themselves, which flow all the way down to Facter and, ultimately, Puppet.
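A minimal sketch of that userdata handling, assuming the userdata field arrives as a string and ignoring anything that isn't a YAML hash:

```ruby
require 'yaml'

# Hypothetical sketch: parse YAML out of the VM's userdata and build
# "skytap_userdata_"-prefixed fact names from its top-level keys.
# Returns an empty hash when the userdata isn't a YAML hash.
def userdata_facts(userdata)
  parsed = YAML.safe_load(userdata.to_s)
  return {} unless parsed.is_a?(Hash)
  parsed.each_with_object({}) do |(key, value), facts|
    facts["skytap_userdata_#{key}"] = value.to_s
  end
rescue Psych::SyntaxError
  {}
end
```

Falling back to an empty hash keeps a typo in the userdata from breaking every Facter run on the box.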

For us, this lets a tester mark a system for a given customer in Skytap, and Puppet can handle any customer-specific setup needed. Handy for testing, POC, and otherwise giving power to the users themselves in a self-service way.

A system's user data that will end up as Facter facts.

Our module_skytap_metadata is available on GitHub here:

Dynamically create dashboards in Dashing

We use Skytap for many of our VMs and Dashing is a great tool to display data. We wanted to combine these by providing a dynamic way to create a new dashboard for each Skytap environment. The VMs all report their information up to Dashing automatically (I'll cover this in a separate post), allowing us to make some assumptions on what widgets we have.

This case may not work for you out of the box (unless you use Skytap), but the idea of dynamically available dashboards could work just as well for AWS or other services without too many changes.

We'll need two files to create this. The first is a job that runs periodically to see which environments we have. We then create a symlink for each environment number pointing to our main template, which lets us update one file and have it apply to every environment's dashboard. The job also cleans up after itself: if an environment has been destroyed, its symlink is removed.

The second is the template itself. This is the meat of things, and is what is accessed whenever someone pulls up any environment dashboard. Because each dashboard URL includes the name of the environment of interest, we can use that to determine which VMs are in that environment and display things accordingly.

First, the job file:

environments.erb is a script we use to access the Skytap API. This just returns a list of environments.

This job essentially just gets the list of environments and loops through it to create symlinks to the template file. It also creates its own widget giving us the total number of Skytap environments we're using.

This piece is the core of the job: `parsed` is a variable holding the list of environments, and we loop through it adding symlinks.

parsed.each do |value|
  total += 1
  erb = env_path + value.to_s + '.erb'
  next if File.file?(erb)
  puts 'Creating template link: ' + erb.to_s
  File.symlink(template, erb)
end
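The cleanup half of the job isn't shown above; a sketch of it might look like the following, where `env_path` and `parsed` are the same variables as in the loop (the helper is written as a standalone function here so the logic is easy to follow):

```ruby
# Hypothetical sketch of the job's cleanup pass: find symlinked .erb
# files whose environment number is no longer in the current list.
def stale_links(env_path, current_envs)
  names = current_envs.map(&:to_s)
  Dir.glob(File.join(env_path, '*.erb')).select do |erb|
    File.symlink?(erb) && !names.include?(File.basename(erb, '.erb'))
  end
end
```

The job would then `File.unlink` each returned path, removing the dashboard for any destroyed environment.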
The symlinks created all point to this template.

It does a few things I think are interesting. 

require 'json'

env = File.basename(__FILE__, File.extname(__FILE__))
skynet = `/opt/skynet/ -a vms #{env}`
parsed = JSON.parse(skynet)

This gets the environment name from the filename of the symlink. We use this environment name to go to our skynet script to get a list of VMs in that environment.

widgets = Sinatra::Application.settings.history
widget_list = []
widgets.each do |key, value|
  if key.to_s.index(vm.to_s) != nil
    widget_list.push(value)  # keep any widget whose name contains the VM id
  end
end

This section gets the full list of widgets and loops through them all, keeping any that have the VM id in the name of the widget. Because the machines push all of their widget data to widgets named vmid_<name>, we can build this list without knowing in advance what widgets are available for a given VM.

<% widget_list.sort! { |a, b| a.order <=> b.order }
   col = 0
   widget_list.each do |w|
     col += 1 %>
  <li data-row="<%= row %>" data-col="<%= col %>" data-sizex="1" data-sizey="1">
    <div data-id="<%= w.name %>" data-view="<%= w.view %>"></div>
  </li>
<% end %>

This takes the list of widgets from the prior loop and sorts them by order (this allows a VM to order its widgets if there's a logical way to do so, again without us knowing about it here). We then loop through this list and build the actual HTML widget information.

And with that, Dashing doesn't know about our environments, or what widgets are out there, but we can still display great dashboards for each environment. To have a VM give more information about itself, we can just have it update a new widget with the right name and the rest will be automatically done for us, with no changes to the Dashing server.

To make things even easier, we use Puppet to deploy the scripts to the server that are needed to post the widgets, so a new server will be updating the dashboard nearly immediately after creation.
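As a sketch of what those deployed reporter scripts might do (host, port, and token here are placeholders, and the widget name follows the vmid_<name> convention described above):

```ruby
require 'net/http'
require 'json'

# Hypothetical reporter sketch: build the JSON body Dashing expects,
# then POST it to the named widget's endpoint.
def widget_body(payload, auth_token)
  payload.merge('auth_token' => auth_token).to_json
end

def post_widget(host, widget, payload, auth_token)
  uri = URI("http://#{host}:3030/widgets/#{widget}")
  Net::HTTP.post(uri, widget_body(payload, auth_token))
end

# e.g. post_widget('dashing.example.com', 'vm123_cpu', { 'value' => 42 }, 'TOKEN')
```

Any widget a VM posts this way is picked up by the template automatically on the next page load.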

Click for enlarged detail. Any of our engineers or QA folks can pull up this sort of dashboard for any of our server sets with just a click.

The actual files:

Both scripts are below for review, or you can grab them from the gists linked below.