
Why "Culture Fit" is the Wrong Way to Hire Talent

Every workplace has a culture. Many think of that stuffy traditional marketing or accounting firm where everyone must dress sharp and even the water cooler is guarded like Fort Knox. Or perhaps the more modern and laid-back tech start-up where people can dress how they want, collaboration is encouraged, and there's a beer tap next to the jugs of organic coconut water.

But if you're hiring based on that cultural fit alone (namely, who you think is going to last hours at a ping pong table after discussing APIs all day), you're making a mistake.

At the Geekwire tech ping pong tournament. Why is ping pong the tech game of choice? We may never know. If you were different than these folks, would you feel welcome here? Why do we recruit from gatherings of "we're all the same" like this?


A diverse workforce is a powerhouse.

"Hold the phone!" you yell. "The best talent that came my way and fit in the best with my vision for the company happened to be predominantly white and/or male. Why should I force 'diversity' for the sake of it?"

Let me guess: you've sat through the lectures on diverse hiring in tech and thought it was all wrong. Perhaps you've even groaned at one of those brochures for a conference where someone has the job title of "Chief Diversity Officer". But here's why you need to look beyond culture, particularly if yours is looking like a monoculture.

First things first, you're likely getting "diversity" all wrong.

Calm down. No one's saying your staffing problem is too many white men between 25 and 35 on deck. But people tend to have a limited, often myopic view of diversity: it's not just about gender and race. Beyond those, diversity encompasses:

  • Age
  • Sexual orientation
  • Gender identity
  • Marital/family status and lifestyle
  • Size
  • Disabilities
  • Religion
  • Nationality/country of origin
  • Upbringing/background

This list isn't even exhaustive, but all of these characteristics, alongside gender and race, are what make a workforce diverse. There's no way to know all of these things from a job interview or trial, of course. Some may be apparent when you meet a candidate (like race or size), but others may not be, or would be impolite to ask about.

When your employees are diverse, you get multiple perspectives.

You know what really sucks? Expensive focus groups.

User testing that gets the same results over and over.

Floods of reports that your users are facing abusive experiences, dark patterns, and other things your staff just never dealt with pre-launch.

When you have a diverse workforce, perspectives beyond your own, and beyond those of the subordinates you most identify with, are suddenly in-house, without your having to arrange tons of expensive market research and user testing. For instance, if you have more women and racial minorities on staff, they're more likely to point out how someone could be harassed or abused on an online platform, and to come up with features that prevent the issue before it happens. Employees who come from marginalized groups know what your users in the same groups are looking for in their experience.

Got a problem your team keeps hitting a wall on? Chances are that employees who grew up in tough situations, or in cultures that favor teamwork over "rugged individualism", can think faster on their feet than people who grew up in middle-class comfort on the coasts.

Want to do business around the world? Someone who has lived in other countries, or isn't from America, can fill you in on how business is customarily done there.

 

Diversity opens up more doors than you think, and it's not just about gender and race, or hiring on those bases just to make things look diverse. Your users, your other staff members, and your other stakeholders all benefit.

Using Chef to configure Datadog

The Datadog documentation for Chef is, to be generous, minimal. It helpfully tells you how to make a recipe that installs the Chef handler, which starts uploading metrics about Chef runs to Datadog. I've yet to find those particular stats all that helpful, though.

What I want is to have Chef manage the actual configuration of Datadog, so that when we add new systems or change what we're watching, it all happens automatically, as Chef is meant to work.

To begin this process, I created a simple recipe that sets up Datadog. It does what the original docs describe, with one extra bit: if there are attributes for any of the other Datadog monitors, it sees them and includes the appropriate additional recipes. If those attributes aren't set, the additional monitor recipes are skipped. Datadog's recipes don't gracefully handle being included when there are no attributes telling them what to watch (a simple empty default attribute in each of those recipes would go a long way here).

Here's the bulk of the interesting stuff from our default.rb Chef recipe:
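A minimal sketch of the pattern looks like this; the dd-agent and dd-handler recipes come from the Datadog cookbook, but treat the monitor recipe names and attribute keys as assumptions based on that cookbook's conventions, not our exact code:

```ruby
# default.rb -- install the agent and the Chef handler, then pull in
# optional monitor recipes only when matching attributes have been set.
include_recipe 'datadog::dd-agent'
include_recipe 'datadog::dd-handler'

# Each optional check is gated on its attributes being present.
# node['datadog']['iis'] is nil when no attribute file set it up,
# so the include_recipe is simply skipped.
if node['datadog']['iis']
  include_recipe 'datadog::iis'
end

if node['datadog']['windows_service']
  include_recipe 'datadog::windows_service'
end

if node['datadog']['process']
  include_recipe 'datadog::process'
end
```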

This just checks for the various attributes and, if they exist, adds the appropriate include_recipe calls.

To get the right attributes, we've added the following to our attribute file:
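The attribute logic can be sketched roughly like this; the helper names (service_exists?, iis_site_names) and the service names are illustrative stand-ins, not the real ones from our cookbook:

```ruby
# attributes/default.rb -- a sketch of the attribute file's shape.

def service_exists?(name)
  # `sc query` exits 0 when the Windows service exists.
  system("sc query \"#{name}\" >NUL 2>&1")
end

def iis_site_names
  # appcmd prints one configured IIS site name per line.
  `C:\\Windows\\System32\\inetsrv\\appcmd list site /text:name`
    .split("\n").map(&:strip).reject(&:empty?)
end

if platform_family?('windows')
  sites = iis_site_names
  unless sites.empty?
    # One check instance per site; this also triggers the iis recipe.
    default['datadog']['iis']['instances'] = sites.map do |site|
      { 'host' => '.', 'sites' => [site] }
    end
  end

  # Only watch the services that actually exist on this box.
  svcs = %w(OurAppService OurWorkerService).select { |s| service_exists?(s) }
  unless svcs.empty?
    default['datadog']['windows_service']['instances'] =
      [{ 'host' => '.', 'services' => svcs }]
    default['datadog']['process']['instances'] =
      svcs.map { |s| { 'name' => s, 'search_string' => [s] } }
  end
end
```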

This is a little more complicated, but breaks down into just a few parts. There are two helper functions: one to check whether a service exists, and one to return a list of IIS websites, if there are any.

Then, at least on Windows machines, we build a list of the IIS sites. Any that exist are added to the default['datadog']['iis']['instances'] attribute. Going back to the recipe above, this also causes the IIS Datadog recipe to be included.

In addition, we can check for specific services we want to watch. We have a few services, and can easily check whether each exists (through that helper function) and add the service and its process to the appropriate lists, which then get watched as well. The windows_service and process Datadog checks watch different things, and we want both. Obviously, if you only wanted the service status, or wanted to watch a process that isn't a service, this wouldn't need much modification.

 

Skytap Metadata into Facter Facts

Puppet's Facter is a great way to collect data for use with Puppet. With our dev/test workflow living in Skytap, we wanted better access to the Skytap metadata so we could leverage it in Puppet.

Here is our solution to that...

The module 'module_skytap_metadata' queries the Skytap metadata service, lightly re-parses the returned JSON, and adds the values as Facter facts.

It takes all of the metadata values and prepends "skytap_" to their names, grouping them together within the full list of Facter facts.

As an added bonus, it looks in the VM's userdata and, if there's YAML there, parses that as well, creating 'skytap_userdata_xxxxx' values. This allows testers to add values to the userdata themselves, which flow all the way down to Facter and, ultimately, Puppet.
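The heart of the module can be sketched as a small transformation from the metadata hash into fact names; the metadata key name 'user_data' here is an assumption, so check the repo for the real code:

```ruby
require 'yaml'

# A sketch: turn the hash returned by Skytap's metadata service into
# fact names and values, prefixing each key with "skytap_". If the VM's
# userdata contains YAML, each of its keys becomes skytap_userdata_<key>.
def skytap_facts(metadata)
  facts = {}
  metadata.each do |key, value|
    next if key == 'user_data' # handled separately below
    facts["skytap_#{key}"] = value
  end
  if metadata['user_data']
    begin
      userdata = YAML.safe_load(metadata['user_data'])
      if userdata.is_a?(Hash)
        userdata.each { |k, v| facts["skytap_userdata_#{k}"] = v }
      end
    rescue Psych::SyntaxError
      # Userdata wasn't YAML; ignore it.
    end
  end
  facts
end

# In the real module, each pair is then registered with Facter:
#   skytap_facts(metadata).each do |name, value|
#     Facter.add(name) { setcode { value } }
#   end
```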

For us, this lets a tester mark a system for a given customer in Skytap, and Puppet can handle any customer-specific setup needed. Handy for testing, POCs, and otherwise giving power to the users themselves in a self-service way.

A system's user data that will end up as Facter facts.


Our module_skytap_metadata is available on GitHub here: https://github.com/FulcrumIT/module_skytap_metadata

Skytap Python API module

Skytap is a cloud platform designed for traditional enterprise applications. It's a fantastic cloud-based solution for the development, testing, modernization, and training aspects of these applications.

Back in the summer of 2012, shortly after we started using Skytap, we were asked to create a series of training environments that refreshed on a complex schedule, so that our trainers could go into the field with known-good working environments for training 3,500 folks on the use of our software. We needed to stand up an environment for each classroom that both instructors and students could use during the course, then refresh it at the end to make way for the next class. We turned to what was then a reasonably fleshed-out API for Skytap. Using that, and a crazy cron file, we were able to produce what our team needed for a successful summer of training. Four years, three major revisions, and two more sets of eyes have since transformed that little script we called "Skynet" into a full-fledged Python module that we use to manipulate many aspects of our Skytap infrastructure.

With the current version of that script, we can do a variety of tasks automatically: suspending environments, deleting environments, maintaining users and groups, documenting our servers in Confluence, and displaying Skytap VM statistics for users to access easily.

After the summer, we continued using a modified version of Skynet to suspend all of our Skytap systems each evening, in an effort to conserve our Skytap resources. This worked well, and we began to look at other uses for the Skytap API. When we realized the API exposed all the metrics Skytap uses to bill us, we began work on a dashboard for our Network Operations Center that lets us keep an eye on our Skytap usage.

Up to this point, we had been documenting each of our Skytap environments in Atlassian's Confluence Cloud, and, like many of you, this was done by hand, "when we had time..." That was not a workable solution, so we brought in an intern who quickly demonstrated an aptitude for working with APIs. The next round of edits to Skynet gave him all he needed to make our Skytap systems self-document in Confluence. Nearly overnight, we started getting requests from users asking for improvements to these pages; they were already providing significant value, and we were excited about what else they could show users. This matters even more given the dynamic nature of Skytap: as we create and destroy servers throughout the development process, we have confidence that our systems are accurately documented in a central location.

As our intern was completing this work, we began working with Okta for authentication of our users here at Fulcrum. Okta also has an API and we thought it would be exciting to integrate the creation of users in Okta with those same users in Skytap. With this new task we decided to step back and rethink the design of the Skynet script.

The Skynet script, rebranded as the Python module 'skytap', is intended as a full Python wrapper of the Skytap API, allowing more flexible use in future projects. The redesign also met our internal goal of open sourcing this work, to potentially help other companies that work with Skytap.

Now you can get the skytap module either from its GitHub repository, if you want to see the source, or from pip (pip install skytap) if you want to just dive in and use it.

If you want to enhance this to do something new, please send us a pull request so we can use it as well!

Bill Wellington, Michael Knowles, Caleb Hawkins

Keeping the Inbox Clear with Inbox Zero

How many of us in the working world have had that moment when you look at your inbox and it tells you there are a hundred unread emails waiting for you? Most of us look at that number and quickly dismiss it as something we'll deal with later, but by doing so you might miss the one key email in the haystack that you needed to see. To remedy this, we can pick up the Inbox Zero method of email organization, first promoted on 43 Folders.

Merlin Mann's idea of keeping the inbox clear, or almost clear, rests on a few basic actions:

Delete

If an email is unnecessary, say spam or information that isn't critical or a priority, go ahead and delete it to make room for the more important emails and to prevent clutter.

Delegate

If you have peers, subordinates, or even superiors who might be able to answer an email better and more accurately than you can, don't be afraid to send it their way and ask them to take care of it, so that it's A) out of your hands and B) getting done properly.

Respond

If an email in your inbox is one you know you could answer in under three minutes, then simply do it and get it out of the inbox. Send a response, then delete the email so the dead clutter it would leave behind is gone.

Defer

Some emails will take more time and effort than you can spare in a few minutes. If that's the case, then set aside the longer and more difficult emails in a folder that you'll work through at some point in the day when you have the time. Maybe block out an hour to get the big emails done, or tackle them at the end of each hour.

Get it Done

Probably the hardest part, of course, is putting all of this into action. Yes, it will be tedious at first, but the gains are what make it worthwhile. There will be no more surfing through tides of emails to find that one special message you need. It's all a matter of taking the time to get it done. Be sure to set time aside for yourself so that you can answer the big emails and do clutter maintenance often.

By putting all of these pieces into play, you can easily turn an inbox that would be flooded and confusing into an easy and streamlined railway for emails to come in and quickly leave or be stationed for later. It makes life easier and it makes getting the job done easier as well.