Why "Culture Fit" is the Wrong Way to Hire Talent

Every workplace has a culture. Many think of that stuffy traditional marketing or accounting firm where everyone must dress sharp and even the water cooler is guarded like Fort Knox. Or perhaps the more modern and laid-back tech start-up where people can dress how they want, collaboration is encouraged, and there's a beer tap next to the jugs of organic coconut water.

But if you're hiring based on that cultural fit alone (namely, who you think is going to last hours at a ping pong table after discussing APIs all day), you're making a mistake.

At the Geekwire tech ping pong tournament. Why is ping pong the tech game of choice? We may never know. If you were different than these folks, would you feel welcome here? Why do we recruit from gatherings of "we're all the same" like this?


A diverse workforce is a powerhouse.

"Hold the phone!" you yell. "The best talent that came my way and fit in the best with my vision for the company happened to be predominantly white and/or male. Why should I force 'diversity' for the sake of it?"

Let me guess: you've sat through the lectures on diverse hiring in tech and thought it was all wrong. Perhaps you've even groaned at one of those brochures for a conference where someone has the job title of "Chief Diversity Officer". But here's why you need to look beyond culture, particularly if yours is looking like a monoculture.

First things first, you're likely getting "diversity" all wrong.

Calm down. No one's saying your staffing problem is too many white men between 25 and 35 on deck. But people tend to have a limited, often myopic view of diversity: it's not just about gender and race. Beyond gender and skin color, diversity also encompasses:

  • Age
  • Sexual orientation
  • Gender identity
  • Marital/family status and lifestyle
  • Size
  • Disabilities
  • Religion
  • Nationality/country of origin
  • Upbringing/background

This list isn't exhaustive, but all of these characteristics, along with gender and race, are what make a workforce diverse. There's no way to know all of these things in a job interview or trial, of course. Some may be apparent when you meet someone (like race or size), but others won't be, or it would be impolite to ask.

When your employees are diverse, you get multiple perspectives.

You know what really sucks? Expensive focus groups.

User testing that gets the same results over and over.

Floods of reports that your users are facing abusive experiences, dark patterns, and other things your staff just never dealt with pre-launch.

When you have a diverse workforce, perspectives beyond your own, and beyond those of the staff you most identify with, are suddenly in-house, without your having to arrange tons of expensive market research and user testing. For instance, if you have more women and racial minorities on staff, they're more likely to point out how someone could be harassed or abused on an online platform, and to come up with features to prevent it before the issue happens. Employees from marginalized groups know what your users in the same groups are looking for in their experience.

Got a problem your team keeps hitting a wall on? Chances are, if you've got employees who grew up in tough situations, or in cultures that lean toward teamwork over "rugged individualism", they can think faster on their feet than people who grew up in middle-class comfort on the coasts.

Want to do business around the world? Someone who's lived in other countries, or isn't from America, can fill you in on how business is customarily done there.


Diversity opens up more doors than you think. It's also not just about gender and race, or about hiring on those bases just for appearances. Your users, other staff members, and other stakeholders all benefit.

Using Chef to configure Datadog

The Datadog documentation for Chef is, to be generous, minimal. It helpfully tells you how to make a recipe that installs the Chef handler, which begins uploading metrics about Chef runs to Datadog. I've yet to find those particular stats all that helpful, though.

What I want is for Chef to manage the actual configuration of Datadog, so that when we add new systems or change what we're watching, it all happens automatically, like Chef is meant to do.

To begin this process, I created a simple recipe that sets up Datadog. This does what the original docs do, but with an extra bit: if there are attributes describing any of the other Datadog monitors, it will see them and include the appropriate additional recipes. If those attributes aren't set, the additional monitor recipes are skipped. Datadog's recipes don't gracefully handle the situation where a recipe is included but the attributes tell it to watch nothing (a simple empty default attribute in each of those recipes would go a long way here).

Here's the bulk of the interesting stuff from our default.rb Chef recipe:
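(The recipe itself was embedded as a gist; below is a minimal sketch of the idea, using the monitor recipes from Datadog's Chef cookbook. The attribute layout matches what's described below; the list of monitors is just what we happen to check.)

# default.rb (sketch): set up the agent, then pull in optional monitor
# recipes only when their attributes have actually been populated.
include_recipe 'datadog::dd-agent'

%w(iis windows_service process).each do |monitor|
  instances = node['datadog'][monitor] && node['datadog'][monitor]['instances']
  next if instances.nil? || instances.empty?
  include_recipe "datadog::#{monitor}"
end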

This just checks for the various attributes and, where they exist, adds the appropriate include_recipe.

To get the right attributes, we've added the following to our attribute file:
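(Again, the original was a gist; here's a sketch of the shape of it. The service names are placeholders for our own, and appcmd is the standard IIS tool for listing sites.)

# attributes/default.rb (sketch)

# Helper: does a named Windows service exist on this node?
def service_exists?(name)
  `sc query "#{name}" 2>&1`.include?('SERVICE_NAME')
end

# Helper: return the names of any IIS websites, or an empty list.
def iis_sites
  appcmd = 'C:/Windows/System32/inetsrv/appcmd.exe'
  return [] unless File.exist?(appcmd)
  `"#{appcmd}" list site /text:name`.split("\n").map(&:strip)
end

if platform?('windows')
  sites = iis_sites
  default['datadog']['iis']['instances'] = [{ 'host' => '.', 'sites' => sites }] unless sites.empty?

  # Placeholder service names; for each one that exists, watch both the
  # service and its process.
  services = %w(OurAppService OurOtherService).select { |svc| service_exists?(svc) }
  unless services.empty?
    default['datadog']['windows_service']['instances'] = [{ 'services' => services }]
    default['datadog']['process']['instances'] =
      services.map { |svc| { 'name' => svc, 'search_string' => [svc] } }
  end
end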

This is a little more complicated, but can be broken down to just a few parts. There are two helper functions, one to check if a service exists, and one to return a list of IIS websites if there are any.

Then, at least on Windows machines, we make a list of the IIS sites. If any exist, they're added to the default['datadog']['iis']['instances'] attribute. Going back to the recipe above, this also causes the IIS Datadog recipe to be included.

In addition, we check for certain services if we want to watch those as well. We have a few services and can easily check whether they exist (through that helper function) and add the service and process to the appropriate lists, which then get watched too. The windows_service and process Datadog elements watch different things, and we want to check both. If you only wanted to see the service status, or to watch a process that isn't a service, this wouldn't need much modification.


Easy command line password generation

A quickie, but it's quite handy for me.

Often I want a new password for something. I use 1Password to store passwords (and I hope you're using a similar tool), and it's great, but it doesn't have much of a command line tool. Sometimes I just want a quick password, and since the terminal is so handy otherwise, it'd be nice to have this there, too.

Note: there is a 1Password command line tool; it just doesn't serve this purpose well.

Option 1: OpenSSL

OpenSSL is a handy tool that can output some great random strings, almost as if it were made for this purpose (hint: it essentially is):

openssl rand -base64 12

This asks for 12 random bytes, which base64-encode to a nice 16-character string, looking something like:

BMN/fyc/l2hJ0T90

You can also fiddle with the encoding to get different types of strings (-hex instead of -base64, for instance).
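For instance, asking for hex gives 24 characters from the same 12 bytes (your output will differ):

openssl rand -hex 12
3b9f0a1c44e2d58a7c6f1e09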

Option 2: md5

One of the reasons I like having a Mac is that there's usually an easy way to do things. OpenSSL is awesome, and probably better in most technical ways, but if I want something quick, easy, and reasonable, I take a shorter path:

date | md5

Really, anything can be piped into md5, but date changes every second and produces decent results without my needing to think much, like:

a622971557507cd17b0e07fcb7d84e41

Now, MD5 is a bad way to store passwords, but it's still fairly useful for generating them.


Are these great for everything? Not really, but for a quick random string for general use, I've found it quite handy.


Skytap Metadata into Facter Facts

Puppet's Facter is great: a simple way to collect data about a system for use with Puppet. With our dev/test workflow living in Skytap, we wanted better access to the Skytap metadata so we could leverage it in Puppet.

Here is our solution to that...

The module 'module_skytap_metadata' queries the Skytap metadata service, lightly re-parses the returned JSON, and adds the values as Facter facts.

It takes all of the metadata values and prepends "skytap_" to them, keeping them grouped together within the full list of Facter facts.

As an added bonus, it reads the VM's userdata and, if there's YAML there, parses that as well, creating 'skytap_userdata_xxxxx' values. This lets a tester add values to the userdata themselves, which flow all the way down to Facter and, ultimately, Puppet.
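(For flavor, a minimal sketch of that pattern as a custom fact is below. The metadata URL is Skytap's standard in-VM endpoint, but check their docs; see the real module, linked below, for the full version.)

# skytap_metadata.rb (sketch): expose Skytap metadata as Facter facts
require 'facter'
require 'json'
require 'net/http'
require 'yaml'

metadata = JSON.parse(Net::HTTP.get(URI('http://gw/skytap')))

metadata.each do |key, value|
  next if value.is_a?(Hash) || value.is_a?(Array) # flat values only, for the sketch
  Facter.add("skytap_#{key}") { setcode { value.to_s } }
end

# If the VM's user data holds YAML, expose each entry as skytap_userdata_<key>.
userdata = YAML.load(metadata['user_data'].to_s) rescue nil
if userdata.is_a?(Hash)
  userdata.each do |key, value|
    Facter.add("skytap_userdata_#{key}") { setcode { value.to_s } }
  end
end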

For us, this lets a tester mark a system for a given customer in Skytap, and Puppet can handle any customer-specific setup needed. Handy for testing, POCs, and otherwise giving power to the users themselves in a self-service way.

A system's user data that will end up as Facter facts.


Our module_skytap_metadata is available on GitHub here: https://github.com/FulcrumIT/module_skytap_metadata

Skytap Python API module

Skytap is a cloud platform designed for traditional enterprise applications. It's a fantastic cloud-based solution for the development, testing, modernization, and training aspects of those applications.

Back in the summer of 2012, shortly after we started using Skytap, we were asked to create a series of training environments that refreshed on a complex schedule, so that our trainers could go into the field with known good working environments for training 3,500 folks on the use of our software. We needed to stand up an environment for each classroom that both instructors and students could use during the course, then refresh it at the end to make way for the next class. We turned to what was then a reasonably fleshed-out API for Skytap. Using that, and a crazy cron file, we were able to produce what our team needed for a successful summer of training. Four years, three major revisions, and two more sets of eyes have since transformed that little script we called "Skynet" into a full-fledged Python module that we use to manipulate many aspects of our Skytap infrastructure.

With the current version of that script, we can automate a variety of tasks: suspending environments, deleting environments, maintaining users and groups, documenting our servers in Confluence, and displaying Skytap VM statistics for users to easily access.

After the summer, we continued using a modified version of Skynet to suspend all of our Skytap systems each evening, in an effort to conserve our Skytap resources. This worked well, and we began to look at other uses for the Skytap API. When we realized that the API exposed all the metrics Skytap uses to bill us, we began work on a dashboard for our Network Operations Center that lets us keep an eye on our Skytap usage.

Up to this point we had been documenting each of our Skytap environments in Atlassian's Confluence Cloud, and, like many of you, we did it by hand, "when we had time..." That was not a workable solution, so we brought in an intern who quickly demonstrated an aptitude for working with APIs. The next round of edits to Skynet gave him all he needed to make our Skytap systems self-document in Confluence. Nearly overnight we started getting requests from users for improvements to these pages; they were already providing significant value, and we were excited about what else they could show users. This matters even more given the dynamic nature of Skytap: as we create and destroy servers throughout the development process, we have confidence that our systems are accurately documented in a central location.

As our intern was completing this work, we began working with Okta for authentication of our users here at Fulcrum. Okta also has an API and we thought it would be exciting to integrate the creation of users in Okta with those same users in Skytap. With this new task we decided to step back and rethink the design of the Skynet script.

The Skynet script, rebranded as the Python module 'skytap', is intended as a full Python wrapper of the Skytap API, allowing more flexible use in future projects. This redesign also met our internal goal of being able to open source the work, to potentially help other companies that use Skytap.

Now you can get the skytap module either from its GitHub repository, if you want to see the source, or from pip (pip install skytap) if you want to just dive in and use it.

If you want to enhance this to do something new, please send us a pull request so we can use it as well!

Bill Wellington, Michael Knowles, Caleb Hawkins

Using Nagios to check Github MFA users

We still have a few users to work down before we can run full speed with this, but now we know who to bother.


We want all of our users to be using multi-factor authentication to log into our more sensitive things, GitHub high among them. We wanted a way to continually check that our users were using MFA, which would catch both new users and anyone who had turned theirs off.

Our intern, Caleb, wrote up a Nagios check that does just this. Now the number of non-MFA users displays in Nagios and turns our system yellow if someone disables their MFA. We're alerted whenever someone doesn't have it and can track that user down, without having to periodically check the GitHub site.

The below script can be put on a Nagios server to bring some additional attention to security in your org as well.
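(The original was shared as a gist; here's a minimal sketch of the same idea in Python. The org name and token are placeholders; the 2fa_disabled filter is part of GitHub's documented members API. Nagios reads the exit code: 0 is OK, 1 is WARNING.)

#!/usr/bin/env python3
# check_github_mfa.py (sketch): warn when org members have MFA disabled
import json
import sys
import urllib.request

ORG = 'your-org'            # placeholder
TOKEN = 'your-api-token'    # placeholder; needs org owner permissions

req = urllib.request.Request(
    'https://api.github.com/orgs/%s/members?filter=2fa_disabled' % ORG,
    headers={'Authorization': 'token ' + TOKEN,
             'User-Agent': 'nagios-mfa-check'})
members = json.load(urllib.request.urlopen(req))

if members:
    names = ', '.join(m['login'] for m in members)
    print('WARNING - %d users without MFA: %s' % (len(members), names))
    sys.exit(1)

print('OK - all members have MFA enabled')
sys.exit(0)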


Dynamically create dashboards in Dashing.io

We use Skytap for many of our VMs, and Dashing is a great tool to display data. We wanted to combine the two by providing a dynamic way to create a new dashboard for each Skytap environment. The VMs all report their information up to Dashing automatically (I'll cover this in a separate post), which lets us make some assumptions about what widgets we have.

This exact case may not work for you out of the box (unless you use Skytap), but the idea of dynamically available dashboards could work just as well for AWS or other services without too much change.

We'll need two files to create this. The first is a job that runs periodically looking for what environments we have; we then create symlinks from each environment number to our main template. This lets us update one file and have it apply to every environment's dashboard. The job also cleans up after itself, so if an environment has been destroyed, its symlink is removed.

The second is the template itself. This is the meat of things: it's what is accessed whenever someone pulls up any environment dashboard. Because each dashboard URL includes the name of the environment of interest, we can use that to determine what VMs are in that environment and display things accordingly.

First, the job file:

environments.erb

Skynet.py is a script we use to access the Skytap API. This just returns a list of environments.

This job essentially just gets the list of environments and loops through it to create symlinks to the template file. It also creates its own widget giving us the total number of Skytap environments we're using.

This piece does the real work. parsed is a variable holding the list of environments; we loop through it and add symlinks.

# parsed, env_path, template, and total are set earlier in the job
parsed.each do |value|
  total += 1
  erb = env_path + value.to_s + '.erb'
  next if File.file?(erb)              # symlink already exists
  puts 'Creating template link: ' + erb.to_s
  File.symlink(template, erb)
end
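The environment-count widget mentioned above is essentially one more line at the end of the job (the widget name here is my placeholder):

send_event('environment_count', value: total)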


Next, the dashboard template:

environment.erb

The symlinks created all point to this template.

It does a few things I think are interesting. 

env = File.basename(__FILE__, File.extname(__FILE__)) 
skynet = `/opt/skynet/skynet.py -a vms #{env.to_s}`
parsed = JSON.parse(skynet)

This gets the environment name from the filename of the symlink. We use this environment name to go to our skynet script to get a list of VMs in that environment.

widgets = Sinatra::Application.settings.history
widget_list = []
# vm comes from an outer loop over the VMs returned for this environment
widgets.each do |key, value|
  if key.to_s.index(vm.to_s) != nil
    # (elided in the original: the stored event data gets parsed into an
    # object with id/view/order before being pushed onto widget_list)
    widget_list.push(value)
  end
end

This section gets the full list of widgets and loops through them all, using any that have the VM id in the name of the widget. Since all of our widget data from these machines is pushed as vmid_<name>, we can get a list of widgets without knowing in advance which widgets are available for a given VM.

widget_list.sort! { |a,b| a.order <=> b.order }
col = 0
widget_list.each do |w|
    col = col + 1
    %>
      <li data-row="<%=row%>" data-col="<%=col%>" data-sizex="1" data-sizey="1">
      <div data-id="<%=w.id%>" data-view="<%=w.view%>" ></div>
      </li>
    <%

This takes the list of widgets from the prior loop and sorts them by order (this allows a VM to order its widgets if there's a logical way to do so, again without us knowing about it here). We then loop through the list and build the actual HTML widget markup.

And with that, Dashing doesn't know about our environments, or what widgets are out there, but we can still display great dashboards for each environment. To have a VM give more information about itself, we can just have it update a new widget with the right name and the rest will be automatically done for us, with no changes to the Dashing server.

To make things even easier, we use Puppet to deploy the scripts to the server that are needed to post the widgets, so a new server will be updating the dashboard nearly immediately after creation.

Click for enlarged detail. Any of our engineers or QA folks can get this sort of dashboard for any of our server sets with just a click.


The actual files:

Both scripts are below for review, or you can get them from the gists with the links, also below.


Add FontAwesome icons to Übersicht widgets

Übersicht is a tool that can run small scripts ("widgets") and put the results on your desktop. It's a terrific way to keep pieces of information quickly accessible.

Among other things, this gives me information on my triage work as well as some basic network info, right on my desktop:

I like, however, to use FontAwesome icons in these displays. FontAwesome is clean, crisp, and contributes to a good overall look with all of the widgets using the same style.

Übersicht doesn't support FontAwesome by default, though. To enable it, we have to create a fake widget that doesn't actually display anything.

This file can just be put into the widgets directory and it'll enable FontAwesome for all of your widgets:
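(The original file was embedded; a sketch of it looks like this. The CDN URL is an assumption; point it wherever you load FontAwesome from.)

# font-awesome.coffee (sketch): a widget that renders no visible output,
# just the stylesheet link, so every other widget can use the icons
command: ""
refreshFrequency: false
render: -> """
  <link rel="stylesheet"
    href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" />
"""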

With that in there, you can use FontAwesome in your results in any of your scripts. You can check out my worklist or network widget to see how I use them as examples.

Dashing widget to show Internet bandwidth usage

Our network guy likes to regularly check on the company's overall network usage. It's a bit of a tic of his, but those sorts of checks are a good way to see if something has gone sideways and is gobbling up our network even before users complain about a problem.

This widget lets us all get that same information at a glance. No more logging onto the ASA to see how it's doing.

One of the two graphs: our internet usage over the last five minutes.


In all, four widgets are created with this dashing.io job:

  • inbound_bandwidth: simply a number of the current bandwidth usage (good for a maxmeter)
  • outbound_bandwidth: the same as inbound_bandwidth, but for outbound data.
  • bandwidth_short: bandwidth (in and out) over the last five minutes
  • bandwidth_long: bandwidth (in and out) over the last eight hours

The two graphable data sets are best served by the Rickshaw graph (that's what I'm using here), but any other graph widget able to read similar data could work for you.

It's still fairly early in the day here...


In the end, you'll need to change some of the settings in there (the IP address and community names need to be set to legitimate values), and the timing can be changed to what you want, but this can be used fairly directly if you want a simple view of your overall bandwidth.
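(The job itself was shared as a gist; the sketch below shows the approach, using the Ruby snmp gem and the standard interface octet counters. The host, community, interface index, and 10-second cadence are placeholders to adjust, and the two graph widgets would accumulate these same readings into point series, which I've left out to keep the sketch short.)

# jobs/bandwidth.rb (sketch)
require 'snmp'

IF_IN  = '1.3.6.1.2.1.2.2.1.10.1' # ifInOctets, interface index 1 (adjust)
IF_OUT = '1.3.6.1.2.1.2.2.1.16.1' # ifOutOctets

last_in = last_out = nil

SCHEDULER.every '10s' do
  SNMP::Manager.open(host: '192.0.2.1', community: 'public') do |snmp|
    cur_in  = snmp.get_value(IF_IN).to_i
    cur_out = snmp.get_value(IF_OUT).to_i
    if last_in
      # delta octets converted to megabits per second over the 10s window
      in_mbps  = (cur_in  - last_in)  * 8 / 10.0 / 1_000_000
      out_mbps = (cur_out - last_out) * 8 / 10.0 / 1_000_000
      send_event('inbound_bandwidth',  value: in_mbps.round(1))
      send_event('outbound_bandwidth', value: out_mbps.round(1))
    end
    last_in, last_out = cur_in, cur_out
  end
end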

Update: the entries in our dashboard's erb file that display this data look like this:

<li data-row="2" data-col="4" data-sizex="3" data-sizey="1">
<div data-id="bandwidth_long_smooth" data-view="Rickshawgraph" data-suffix="M" data-unstack="true" data-renderer="line" data-legend="true" data-min="0" data-max="350" data-title="Bandwidth (8 hr)"></div>
</li>

<li data-row="3" data-col="4" data-sizex="3" data-sizey="1">
<div data-id="bandwidth_short" data-legend="true" data-view="Rickshawgraph" data-renderer="line" data-suffix="M" data-unstack="true" data-min="0" data-max="350" data-title="Bandwidth (5 min)"></div>
</li>

Reset lost admin password for Raspberry Pi

Raspberry Pis are great, but their ability to keep running quietly in the background can lead to forgotten root passwords. More than once I was sure I knew the root password, only to learn that I had forgotten it.

Luckily, Raspberry Pi has a "feature" that most Linux machines don't: very easily removable primary storage. 

To reset your password:

  • Power down and pull the SD card out from your Pi and put it into your computer.
  • Open the file 'cmdline.txt' and add 'init=/bin/sh' to the end. This will cause the machine to boot to single user mode.
  • Put the SD card back in the Pi and boot.
  • When the prompt comes up, type 'su' to log in as root (no password needed).
  • Type "passwd pi" and then follow the prompts to enter a new password.
  • Shut the machine down, then pull the card again and put the cmdline.txt file back the way it was by removing the 'init=/bin/sh' bit.

The cmdline.txt should look something like this:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait init=/bin/sh

It's worth noting, with this process being as easy as it is, that a malicious person with physical access to your Raspberry Pi can do this just as easily as you can.


Root account prompting for password:

If the root account is prompting for a password (not common), you can, back on your computer, open the /etc/shadow file on the SD card and delete the root password hash (everything between the first and second colons on the root line). This makes the root password blank.


Error when changing the password:

Note: sometimes the password can't be changed because the Pi booted with the filesystem mounted read-only; you'll get an error when you try. To fix this, remount the drive in read-write mode:

mount -o remount,rw /


Lint Your Scripts: Why It Matters

Sample lint checking from the Atom.io Lint package.

Programmers know that debugging code is simply part of the job. While the debugging process tends to get more streamlined as a programmer gains experience, it can still be tedious and result in many hours spent meticulously combing through code looking for that one variable that escaped initialization or the one construct that is non-portable. Not only does the programmer's frustration level rise; debugging can be a costly endeavor.

Lint: An Impartial and Thorough Tool

Utilizing lint, a simple yet powerful tool, enables a programmer to instantly identify areas of code that might contain mistakes, reducing the time spent taking apart each individual line. Much like a compiler, lint looks at each line of code and alerts you to areas that could deliver results inconsistent with the compiler being used. While lint can often produce almost as many warnings as there are lines of code, it's important to remember that these are only possible errors. By targeting specific areas within the code, lint lets programmers focus on that particular piece.

Possible Errors That Lint Targets 

The list below is far from exhaustive, since what lint checks depends on both the version of the tool and its particular implementation. Some examples of items lint will warn about (see the short example after the list):

  • assignments that are suspicious
  • code that is unreachable
  • variable types that are mismatched
  • indexing that goes beyond the bounds of an array
  • constructs that are not portable
  • null pointers that are de-referenced
  • variables that are not being used
  • combinations of data types that could be dangerous
  • uninitialized variables that might be in use
  • unnecessary or duplicate header files
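For instance, here's the sort of thing a shell linter like shellcheck flags; this little script is my own illustration, and SC2086 is shellcheck's real warning code for it:

#!/bin/sh
greeting="Hello, world"
echo $greeting        # unquoted variable: shellcheck warns SC2086, since
                      # word splitting and globbing can change the output
rm -rf $TMPDIR/cache  # the same class of bug, with far worse consequences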

Long-Term Effects of Using Lint

Regardless of the size of the team developing a script, documentation of their efforts and the assurance of a uniform code structure are vital to the longevity and effectiveness of the code itself. Maintenance on scripts tends to be sporadic at best; developing them within a prescribed format helps make maintenance more streamlined and less time consuming, while fostering a sense of continuity that can be far-reaching. New team members, as well as the code's original creator (who has likely forgotten the specifics of the code or why it was written a particular way), can easily pick it back up again. A bit of extra time and effort at the beginning of the coding process using lint ensures that scripts are standardized, which helps maximize time spent with them in the future as they're tweaked to meet present needs.

Automating Evernote

Here's an AppleScript script I wrote that runs periodically for me. It goes through my "triage" notebook in Evernote and looks for items I'm regularly putting in there.
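(The script itself was a gist; here's a minimal sketch of the pattern using Evernote's AppleScript dictionary. The notebook names, tag, and title rule are placeholders, not my actual rules.)

tell application "Evernote"
	set triageNotes to find notes "notebook:triage"
	repeat with aNote in triageNotes
		if (title of aNote) contains "Paystub" then
			-- retitle to a standard, dated format, tag it, and file it
			set title of aNote to "Paystub " & (short date string of (current date))
			assign tag "finance" to aNote
			move aNote to notebook "Filed"
		end if
	end repeat
end tell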

While this particular script won't help you directly, it may serve as a base for seeing how you can go through your default notebook, pick out items you add to Evernote regularly, and reformat them according to whatever rules you'd like.

Note that in here I'm moving notes, adding tags, and renaming notes to a more standard format (particularly handy for things like paystubs that don't have date info in the original title).

Close applications on a Mac

At the end of the day, I like to close all of the apps on my Mac so the next day starts fresh. Before I started doing this, I'd have web pages left open for days, long enough to lose the context of why I was even on that particular page.

Now, each day can start a bit cleaner and without debris from the previous day.

This is the little script I use to do this:
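(Roughly, it's this AppleScript, which quits every visible app except Finder; add anything else you want to keep open to the exclusion list.)

tell application "System Events"
	set appNames to name of every application process whose visible is true
end tell
repeat with appName in appNames
	if (appName as string) is not in {"Finder"} then
		try
			tell application (appName as string) to quit
		end try
	end if
end repeat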


Exposing Dashing widget data

When troubleshooting Dashing widgets, I love being able to see what data Dashing really has about the various widgets. This was a big help in building the check for stale widget data, among other things.

What I did was create a very simple dashboard called widgetdata. It has this code in it:
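(Roughly; the real file also pulls in the two widgets mentioned below. This just walks Dashing's stored widget history and prints it raw:)

<% Sinatra::Application.settings.history.each do |widget_id, data| %>
  <p><b><%= widget_id %></b>: <%= data %></p>
<% end %>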

This creates a dashboard with some global widgets, followed by the raw data of all of the widgets. The list can get long, but I've found it invaluable to search through this data to see what's going on. At the top are two widgets built from prior posts: Marking a widget as stale and Dashing widget to show widget count.

A small clip of some of the raw widget data


Keep your Skills Together with Open Badges

What is one thing we have all had to do at one point or another in our lives? Fill out a resume: make ourselves sound amazing and present ourselves and the skills we can provide. Every resume has a few sections we all loathe filling out, like skills, previous experience, and past education. Sometimes we need to go back and do a little research to get dates and the names of organizations, and overall it proves a bit of a hassle. With Open Badges, that's about to get a lot easier.

Open Badges are an online way to keep track of and store all of your personal achievements, past activities, certifications, education, and any skills you've picked up along the way. It brings all of these together in one online place where you can keep them and post them where you need to, be it a social network or an online job site. Potential employers can see it all put together in one location, saving them the trouble of going back to each source individually.

This free and open software from Mozilla is a perfect tool for any job seeker who wants to stand out from the crowd with a quick, streamlined way to display the skills and background they bring to the table.

Dashing widgets for Active Directory

A list of about-to-expire passwords.


Here's a set of Dashing widgets that give us some visibility into users with expiring passwords. It should run as a scheduled job on a domain controller; it uses PowerShell to query users and their password expirations.

The three widgets created are:

expiring_users

A list of expiring users, defaulting to all users expiring within the next 14 days

expired_users

All expired users

locked_users

All users who have locked out their accounts

For expiring_users and expired_users, the widget doesn't need to update very often, but if you use locked_users, you may want the scheduled job to run more frequently so you can respond quickly when a user locks themselves out.

Additionally, a fourth widget is made that is essentially a set of all three of those in one:

Looks like Brian reset his password in time, but not Patrick.


active_directory_users

This one turns to a yellow/warning status if there's an expired user, and red/critical if there's a locked-out user.


Below is the PowerShell script. Then just add the list widget to your Dashing dashboard as desired.
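(The script itself was a gist; here's a trimmed sketch of the approach. The Dashing URL and auth token are placeholders, expired_users is built the same way as expiring_users with the cutoff reversed, and your thresholds may differ.)

# Requires the ActiveDirectory module; run as a scheduled job on a DC.
Import-Module ActiveDirectory

$dashing = "http://dashing.example.com"   # placeholder
$token   = "YOUR_AUTH_TOKEN"              # placeholder

# Users whose passwords expire within the next 14 days.
$cutoff = (Get-Date).AddDays(14)
$expiring = Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} `
        -Properties "msDS-UserPasswordExpiryTimeComputed" |
    Where-Object {
        $exp = [datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")
        $exp -gt (Get-Date) -and $exp -lt $cutoff
    } | ForEach-Object { @{ label = $_.Name } }

# Accounts that are currently locked out.
$locked = Search-ADAccount -LockedOut | ForEach-Object { @{ label = $_.Name } }

# Push each list to its Dashing widget.
$body = @{ auth_token = $token; items = @($expiring) } | ConvertTo-Json -Depth 4
Invoke-RestMethod -Method Post -Uri "$dashing/widgets/expiring_users" -Body $body

$body = @{ auth_token = $token; items = @($locked) } | ConvertTo-Json -Depth 4
Invoke-RestMethod -Method Post -Uri "$dashing/widgets/locked_users" -Body $body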

Mark Dashing widget data as stale

I wanted an easy way to know when a Dashing widget hadn't been updated in a while, so I'd know its data was stale. There's a little "last updated" line on each widget, but it's only useful if I walk up to our dashboard. I want something that's obvious from a distance, but invisible when things are going well.

The below gist files can help set this up. The stale-widgets.rb file goes in your jobs/ folder and runs through each widget; if one is older than the threshold (set here at two hours), it changes that widget's status to "stale". The application.scss changes (just add those lines into your application.scss file, or change to suit) then create a new status, "status-stale", which determines what a stale widget looks like.
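(The gists aren't reproduced here; a minimal sketch of the job is below, reading the same settings.history store used in the widgetdata dashboard above. The matching SCSS just defines the status-stale class with whatever look you want.)

# jobs/stale-widgets.rb (sketch)
require 'json'

STALE_AFTER = 2 * 60 * 60 # two hours, in seconds

SCHEDULER.every '5m' do
  Sinatra::Application.settings.history.each do |widget_id, event|
    # stored events are strings wrapping JSON; pull the JSON part out
    json = event[/\{.*\}/m] or next
    data = JSON.parse(json) rescue next
    next unless data['updatedAt']
    if Time.now.to_i - data['updatedAt'].to_i > STALE_AFTER
      send_event(widget_id, status: 'stale')
    end
  end
end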

Beware, though, that a status isn't changed until a new status shows up. All of our widgets push "status: normal" when all is good, which clears other status messages. If your widget updates don't push something similar, a widget may stay marked stale even after it gets a future update.


Lego Seattle mosaic for the office

A Lego mosaic of Seattle I put together for my office. It's 60"x30", which made transporting it to the office a bit tricky, since it wouldn't fit in the car. I overlaid drawings of the skyline, the mountain, and the Space Needle, then converted that into Lego dimensions.

To create a little texture (hard to see in pictures), the white level is one plate high, then the mountain is two plates high, and the Needle and skyline are all a full brick (three plates) high. It's not obvious when you look at it, but I like the slight depth it creates. 

Displaying Nagios in a Dashing dashboard

We have a nice Dashing dashboard, but we also have Nagios checking our general network environment, and we wanted a way to combine the two: Dashing displays the general statistics, and Nagios tells us if something is wrong on the network somewhere.

This is fairly easily accomplished using Dashing's iframe widget.
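(Roughly, the dashboard entry looks like this; the widget is the community Iframe widget, and the URL is wherever your Naglite3 lives.)

<li data-row="1" data-col="1" data-sizex="3" data-sizey="2">
  <div data-view="Iframe" data-url="http://nagios.example.com/naglite3/"></div>
</li>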

This makes a widget that takes up three columns (our NOC dashboards are three tiles wide...) and shows our Naglite3 Nagios board. Everything's on one nice, easy-to-glance-at screen.

Nagios status integrated into the Dashing dashboard.



The Filabot Turns Scrap Plastic Into 3D Printing Filament

3D printing has been all over the news for some time now, and it seems like every other day this rapidly developing technology is changing the world. Creating print-to-fit prosthetic limbs for a fraction of the price of previous models? Check. Building pre-fabricated homes in an extremely short period of time, with an affordable price tag? Check. Using 3D printers to recycle the huge amounts of plastic waste we produce?

We might have a check on that, too.

The Filabot: Combining Recycling With 3D Printing

The way 3D printing works is fairly simple. You load a 3D model into your computer, and the computer sends it to the printer. The printer then builds that model one layer at a time, using plastic filament instead of ink. When the printer is finished, you have a physical replica of the model that was in the computer.

The idea of using recycled plastic to make that filament, and thus removing a huge amount of waste from landfills, is appealing. It's why the Filabot was invented. This device lets users take certain types of plastic, chop it up, and turn it into usable filament. Whether you want to use PET plastic, nylon 101, or polypropylene, the Filabot can take that plastic waste and turn it into something useful again.

Save Money, and Save The Planet

As 3D printing becomes more and more prevalent it's likely that the Filabot, and devices like it, will also grow in popularity. The ability to recycle your plastic at home, without any kind of middleman, is an appealing one. If you can turn that plastic into ornaments, toys, or even spare parts, then so much the better!