Bulbous, Not Tapered

Foo-fu and other favorites…

Disable Touchpad While Typing


I have a Lenovo Thinkpad t460p laptop that currently runs Ubuntu 17.04. In general the system is a pleasure to use but one niggle has been mildly infuriating… the touchpad regularly engages when I’m typing and my cursor jumps to an unwanted position mid-word. It doesn’t happen frequently enough to be a serious problem, but it does happen frequently enough to be intensely irritating. The fix was simple, but researching it was not.

The Fix

Stop using the synaptics driver and start using libinput. For me this was as simple as running aptitude remove xserver-xorg-input-synaptics and rebooting.
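As a sketch, on a Debian/Ubuntu system where both driver packages are present (as mine was), the whole fix looks like this:

```shell
# Remove the synaptics X.Org driver; X falls back to the libinput driver on
# next start. Assumes xserver-xorg-input-libinput is already installed
# (it was by default on my system).
sudo aptitude remove xserver-xorg-input-synaptics
sudo reboot
```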

Since your system may not be configured exactly as mine was, details follow so you can gain a better sense of what might be going on with your own system.

The Hardware

The Thinkpad t460p includes both a touchpad and a pointing stick.

Thinkpad Pointing Stick

These show up as separate input devices under X11:

$ xinput
⎡ Virtual core pointer                          id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                id=4    [slave  pointer  (2)]
⎜   ↳ SynPS/2 Synaptics TouchPad                id=12   [slave  pointer  (2)]
⎜   ↳ TPPS/2 IBM TrackPoint                     id=13   [slave  pointer  (2)]
[... more output truncated...]

The Drivers

There are multiple drivers potentially in play here. By default the synaptics driver is installed via the xserver-xorg-input-synaptics package, and the libinput driver is also installed by default via the xserver-xorg-input-libinput package:

$ sudo aptitude search xserver-xorg-input | egrep 'synaptics|libinput'
i  xserver-xorg-input-libinput - X.Org X server -- libinput input driver
p  xserver-xorg-input-libinput:i386 - X.Org X server -- libinput input driver
p  xserver-xorg-input-libinput-dev - X.Org X server -- libinput input driver (development headers)
i  xserver-xorg-input-synaptics - Synaptics TouchPad driver for X.Org server
p  xserver-xorg-input-synaptics:i386 - Synaptics TouchPad driver for X.Org server
p  xserver-xorg-input-synaptics-dev - Synaptics TouchPad driver for X.Org server (development headers)

The synaptics driver takes precedence over the libinput driver for the SynPS/2 Synaptics TouchPad device. This can be confirmed by looking at detailed information for that device using the id 12 that we got from our previous xinput command. We can see that the synaptics driver is in use because each of the properties in the list is prefixed with that driver's name:

$ xinput list-props 12
Device 'SynPS/2 Synaptics TouchPad':
        Device Enabled:         1
        Synaptics Edges:                1632, 5312, 1575, 4281
        Synaptics Finger:               25, 30, 256
[... more output truncated...]

The libinput driver is working, though, and is in use by the pointing stick, as we can see by noting the libinput prefix on all the properties associated with id 13 (the TPPS/2 IBM TrackPoint from our initial xinput command).

$ xinput list-props 13
Device 'TPPS/2 IBM TrackPoint':
        Device Enabled (141):   1
        Coordinate Transformation Matrix (143): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
        libinput Accel Speed (284):     0.000000
        libinput Accel Speed Default (285):     0.000000
[... more output truncated...]

Synaptics and PalmDetect

One of the most commonly suggested approaches to touchpad jumpiness while typing is the palm detection feature of the synaptics driver. This can be configured via synclient as described in the Arch Linux Wiki for Synaptics, via xinput set-prop, or by editing the xorg config in a file like /etc/X11/xorg.conf.d/50-synaptics.conf.
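For reference, palm detection can be toggled at runtime like this (a sketch; the threshold values are illustrative, not recommendations):

```shell
# Via synclient (works only while the synaptics driver is in use)
synclient PalmDetect=1 PalmMinWidth=8 PalmMinZ=100

# Or via xinput, using the device name from the xinput listing above
xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Palm Detection" 1
```

Changes made this way don't survive a restart of X; persistent settings belong in the xorg.conf.d snippet mentioned above.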

Enabling PalmDetect had no noticeable impact for me. I didn’t dig enough to determine if the feature was actually broken on my hardware, or if it just addresses a different problem. My palms don’t actually rest on the pad when I type, but the plastic of the laptop case flexes enough that the touchpad interprets it as input. It may be that PalmDetect is correctly detecting that no palm is resting on the trackpad and so allows the bad input through.

Synaptics and syndaemon

A second commonly suggested approach to erroneous touchpad input while typing is syndaemon. Syndaemon monitors xorg for keyboard activity (either by polling frequently or via the more efficient XRecord interface), and when activity is detected it briefly disables the touchpad by doing something roughly equivalent to xinput set-prop 12 "Device Enabled" 0. The Arch Wiki for Synaptics has advice on configuring syndaemon, or it can be added to your Gnome startup applications to run as your normal user on Gnome login.
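A typical invocation (a sketch; the flag values are illustrative) looks like:

```shell
# Disable touchpad input for 1 second after each keypress (-i 1.0), using
# the efficient XRecord interface (-R), ignoring modifier-only keys (-K),
# disabling only tapping and scrolling rather than all motion (-t), and
# running as a daemon (-d).
syndaemon -i 1.0 -R -K -t -d
```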

Syndaemon also had no effect for me. It appears that on systems with multiple pointers, syndaemon only attempts to disable the first device. This issue is reported and confirmed in Ubuntu bug 1591699. In that bug report, the first pointing device was a “ghost” and could be disabled manually. In my case, there are legitimately two pointing devices present and I use them both; I don’t want to disable either of them. It appears that if you have two pointing devices and your touchpad doesn’t have the lowest xinput id, there is no way to configure syndaemon to suppress input from the correct device.

Libinput and DWT

Libinput is a library that handles input devices (both keyboard and pointer devices) for Wayland, but as we found in our drivers section above, libinput is installed by default and works for xorg systems as well. Libinput has a disable-while-typing feature built in and enabled by default.

I was able to activate libinput for my Synaptics touchpad simply by uninstalling the synaptics driver and rebooting. The libinput DWT feature began working immediately and my pointer became inactive while I was typing. Problem solved!
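You can confirm DWT is active by checking the touchpad's properties for the libinput prefix (a sketch; your device name or id may differ from mine):

```shell
# After removing the synaptics driver, the touchpad's properties should be
# libinput-prefixed, including the disable-while-typing toggle.
xinput list-props "SynPS/2 Synaptics TouchPad" | grep -i "disable while typing"
```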

Libinput and Right-Click

The synaptics driver also has features to divide the touchpad area into sections that trigger different buttons, and by default the right-half triggers a right-click when depressed. Libinput uses the whole touchpad for left-clicking, which is better behavior in my opinion. For right-clicks I use the hardware-button just above the trackpad.

It is allegedly possible to configure the synaptics driver to disable the right-click area but I never tried this myself as libinput has all the behaviors I want.

Note that the Thinkpad t460p’s touchpad acts as a hardware button, physically clicking when the pad is depressed. Both libinput howtos above talk about how to enable the Tapping feature, which is not necessary for that hardware button to function. I suspect you only need the Tapping option if you want light taps to register as clicks, which seems unnecessary and undesirable for this hardware.
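If you do want light taps to register as clicks, tapping is a single libinput property away (a sketch, reusing the device name from earlier):

```shell
# Enable tap-to-click under libinput; set the value to 0 to turn it back off.
xinput set-prop "SynPS/2 Synaptics TouchPad" "libinput Tapping Enabled" 1
```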

From Chef/LXC to Ansible/Docker


I recently changed the way I manage the handful of personal servers that I maintain. I was previously using Chef to provision containers on LXC. I’ve recently switched over to a combination of Ansible and Docker. So far, I’m happy with the switch. Going through the process, I feel like I’ve learned something about what each technology does well. The TLDR is:

  • Chef remains my favorite system for high-volume high-complexity configuration management work. Dependency management, test tooling, and the comfort and power of the language are all exceptional. But Chef itself is not low-maintenance, and the overhead of keeping a development environment functional dwarfs the “real” work when you have just a few hours of infrastructure automation to do each month.
  • The Ansible ecosystem is slightly less capable than Chef’s in almost every way, or at least I like using it less. It’s still really really good, though. It’s also simple to set up and never breaks. If you only do a little infrastructure automation, Ansible’s simplicity is ideal. If you do grow to have a very complex environment, Ansible will scale with you. I might be slightly happier managing many tens or a few hundred cookbooks in Chef, but I could certainly get the same job done in Ansible.
  • Dockerfiles are a massive step backward from both Ansible and Chef in every way. Most of the work done in Dockerfiles is so trivial that sloppy bash and sed for text-replacement is good enough, but it’s not good. I’ve found images on Docker Hub to do everything I want to so far, but when I need to write a nontrivial Dockerfile I’ll probably investigate ansible-container, or just use Ansible in my Dockerfile by installing it, running it in local-mode, and removing it in a single layer.
  • Though I don’t like the Docker tools for building complex images, I do like that it encourages (requires?) you to be much more rigorous about managing persistent state in volumes. For me Docker’s primary draw is that it helps me succeed at separating my persistent state from my software and config.

Read on for the details.

Your Mileage May Vary

I’m not advocating my own workflow or toolset for anyone else. I’m working in a toy environment, but my experiences might help inform your choices even if your needs are fairly different than mine.

My Environment

I’m doing all this to manage a handful of systems:

  1. A single physical machine in my house that runs half a dozen services.
  2. The Linode instance running this webserver.
  3. Whatever physical, virtual, or cloud lab boxes I might be toying with at the minute.

It’s fairly ridiculous overkill for a personal network, but it gives me a chance to try out ideas in a real, if small, environment.

From LXC

When I was using LXC, I used it only on the physical box running multiple services, not on the Linode or lab boxes. Because the physical box ran a bunch of different services, I wanted to isolate them so that an OS or library upgrade for one service couldn’t break a different service. I chose LXC rather than Xen or VirtualBox because I was memory-constrained and LXC containers share memory more efficiently than “real” virtualization. I didn’t have to allocate memory for each service statically up front; each container used only what it needed when it needed it. But each container was a “fat” operating system running multiple processes, with a full-blown init system, SSH, syslog, and all the ancillary services you’d expect to be running on physical hardware or in a VM.

LXC did its job smoothly and caused me no problems, but I found I wasn’t any less nervous to do upgrades than before I had split my services into containers. Although my deployment and configuration process was automated, data backup and restore was as much of a hassle as it had always been. And in many cases, I didn’t even really know where services were storing their data, so I had no idea if I was going to have a problem until I tried the upgrade.

LXC does have a mechanism to mount external volumes, but it was manual in my workflow. And my experience with LXC plugins for Chef Provisioning and Vagrant was that they weren’t terribly mature. I didn’t want to attempt automating volume configuration in LXC, which set me thinking about alternatives.

To Docker

Docker has great volume support and tons of tooling to automate it, so I figured I’d try migrating.
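The basic pattern that drew me in is keeping all persistent state in host-mounted volumes, so the container itself is disposable (a sketch; the image name and paths are hypothetical):

```shell
# All persistent state lives under /srv/wiki on the host; the container can
# be destroyed and recreated on every upgrade without losing anything.
docker run -d --name wiki \
  -v /srv/wiki/data:/var/lib/wiki \
  -v /srv/wiki/config:/etc/wiki \
  -p 8080:80 \
  example/wiki:latest
```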

I was able to find existing images on Docker Hub for all the services I wanted to run. The Dockerfiles used to build these images didn’t leave a great impression compared to the community Chef cookbooks they were replacing. They were much less flexible, exposing many fewer options, and the build processes hardcoded tons of assumptions that seem likely to make maintenance of the Dockerfile flaky and brittle in the face of upstream changes. But they do work and they seem to be actively maintained. When an image failed to set up the environment as I wanted, I was generally able to hack an entrypoint shell script to fix things up on container startup. Where configuration options weren’t exposed, I was generally able to override the config file(s) entirely by mounting them as volumes. It all feels pretty hacky, but each individual hack is simple enough to document in a two- or three-line comment, and the number of them is manageable.

By trading off the elegance of my Chef cookbooks for the tire fire of shell scripts defining each container, I’ve gained confidence that my data as well as my configs will be available in each container after upgrade. I’ve already killed and recreated my containers dozens of times in the process of setting them up, and expect to be able to do upgrades and downgrades for each container independently with the minimum necessary hassle.

From Chef

When I was using Chef, I used it to manage all my systems. I used it to set up LXC on my container host, to manage the services running inside of each LXC container, to set up the web service on my Linode, as well as to manage whatever ephemeral lab boxes I was messing with at the moment.

To launch a new service in an LXC container, I would manually launch a new LXC container running a minimal Ubuntu base image. At the time, the tools I tried for automating LXC generally had missing features, were unreliable, or both… so I stuck to the bundled command-line interface. Each container would have its own IP address and DNS name, which I would add to my Chef Provisioning cookbooks as a managed host so I could deploy my software and configs to the container over SSH. Chef Provisioning would run a wrapper cookbook specific to each node that:

  1. Called out to a base-cookbook to set up users, ssh, and other things that were consistent across all my systems.
  2. Generally called out to some community cookbook to install and configure the service.
  3. Overrode variables to control the behavior of the above cookbooks, and did whatever small tweaks weren’t already covered.

I used Berkshelf to manage cookbook dependencies. It’s a fantastic system modeled closely on Ruby’s bundler gem tool, and both tools have a powerful and flexible approach to dependency management.

The custom-cookbooks that I wrote had extensive testing to let me iterate on them quickly:

  • rubocop ran in milliseconds and ensured that any Ruby glue code in my system was syntactically valid and reasonably well styled.
  • Foodcritic similarly ran in milliseconds and ensured that my cookbooks were syntactically valid Chef code and reasonably well styled.
  • Chefspec unit tests ran in seconds and helped me iterate quickly to catch a large fraction of logic bugs.
  • test-kitchen and serverspec ran my cookbooks on real machines to provide slow feedback about the end-to-end behavior of my cookbooks in a real environment.
  • guard automatically ran the appropriate tests whenever I saved changes to a source file.

When everything was working, I was able to iterate my cookbooks quickly, catch most errors without having to wait for slow runs against real hardware, enjoy writing Chef/ruby code, and have a lot of confidence in my deploys when I pushed changes to my “real” systems. The problem was, everything almost never worked. Every time I upgraded anything in my Chef development environment, something broke that took hours to fix, usually multiple somethings:

  1. Upgrading gems inevitably resulted in an arcane error that required reading the source code of at least 2 gems to debug. I ended up maintaining personal patches for 6 different gems at one point or another.
  2. ChefSpec tests regularly failed when I upgraded Chef, gems, or community cookbooks. Again, the errors were often difficult to interpret and required reading upstream source-code to debug (though the fixes were always mechanically simple once I understood them). I really like the idea of ChefSpec providing fast feedback on logic errors, but on balance, I definitely spent more time debugging synthetic problems that had no real-world implication than I spent catching “real” problems with ChefSpec.
  3. Using LXC targets with test-kitchen was amazingly fast and memory efficient, but also amazingly brittle. The LXC plugin for test-kitchen didn’t reliably work, so I ended up using test-kitchen to drive vagrant to drive LXC. This setup was unreasonably complicated and frequently broke on upgrades. This pain was largely self-inflicted; test-kitchen can be simple and reliable when run with more popular backends, but it was frustrating nonetheless.
  4. It’s idiomatic in Chef to store each cookbook in its own independent git repo (this makes sharing simpler at large scale). Gem versions, cookbook versions, and test configs are stored in various configuration files in the cookbook repository. This meant each upgrade cycle had to be performed separately for each cookbook, testing at each step to see what broke during the upgrade. Even when it went well, the boilerplate for this process was cumbersome, and it rarely went well.
  5. Chef Provisioning was another self-imposed pain-point. Chef provisioning over SSH has been reliable for me, but it’s overkill for my basic use-case. When I started with it, it was very new and I thought I’d be learning an up-and-coming system that would later be useful at work. In fact, it never got a huge user-base and I switched to a job that doesn’t involve Chef at all. It ended up being a bunch of complexity and boilerplate code that could have easily been accomplished with Chef’s built-in knife tool.

ChefDK can help with a lot of these problems, but I always found that I wanted to use something that wasn’t in it, so I either had to maintain two Ruby environments or hack up the SDK install. I tended to avoid it and manage my own environments, which probably caused me more pain than necessary in the long run. When I found something that didn’t work in the ChefDK, I probably should have just decided not to do things that way.

But regardless of whether you use the ChefDK or not, the cost of these problems amortizes well over a large team working on infrastructure automation problems all day long. One person periodically does the work of upgrading, they fix problems, lock versions in source control, and push changes to all your cookbooks. The whole team silently benefits from their work, and also benefits from the ability to iterate quickly on a well-tested library of Cookbooks. When I was working with Chef/Ruby professionally, the overhead of this setup felt tiny and things I learned were relevant to my work. Now that I’m not using Chef/Ruby at work, every problem is mine to solve and it feels like a massive time sink. The iteration speed never pays off because I’m only hacking Chef a few hours a month. It became hugely painful.

To Ansible

Although I migrated many of my services to Docker, I haven’t gone all-in. Minimally, I still need to configure Docker itself on my physical Docker host. And more generally, I’m not yet convinced that I’ll want to use Docker for everything I do. For these problems, I’ve decided to use Ansible.

In most ways, this is a worse-is-better transition.

  • Ansible Galaxy seems less mature than Berkshelf for managing role dependencies, and the best-practice workflow certainly seems less well-documented (do you check in downloaded roles, or download them on demand using requirements.yml? what’s the process for updating roles in each case?).
  • Standard practices around testing Ansible roles seem way less mature compared to what I’m used to in the Chef community, and seem mostly limited to running a handful of asserts in the role and running the role in a VM.
  • The yaml language feels less pleasant to read and write than Ruby to me, though practically they both work for my needs.
  • I won’t speak to Ansible’s extensibility as I haven’t attempted anything other than writing roles that use built-in resources.

Even though I feel like I’m accepting a downgrade in all the dimensions listed above, Ansible is good enough at each of those things that I don’t really miss the things I like better about Chef. And the amount of time I spend fixing or troubleshooting Ansible tooling can be effectively rounded to zero. This simplicity and reliability more than makes up for the other tradeoffs.

I now have each of my handful of physical/cloud hosts defined in my Ansible inventory file and my Docker containers are defined in a role using docker_image and docker_container. Perhaps someday I’ll migrate to using Docker Compose but for now this is working well.
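A container definition in such a role looks roughly like this (a sketch; the image, names, and paths are hypothetical):

```yaml
# tasks/main.yml in a hypothetical "wiki" role
- name: Pull the wiki image
  docker_image:
    name: example/wiki
    tag: latest

- name: Run the wiki container with state on host volumes
  docker_container:
    name: wiki
    image: example/wiki:latest
    state: started
    restart_policy: always
    volumes:
      - /srv/wiki/data:/var/lib/wiki
    published_ports:
      - "8080:80"
```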

Testing is simple and basic: I have a Vagrantfile that launches one or two VirtualBox instances and runs whatever role(s) I’m currently hacking on against them (including the Docker roles if necessary). Testing a change takes a minute or two, but things mostly work the first time, and when there is a problem the fix is usually simple and obvious. Even though my feedback cycle is slower with Ansible, I find that iteration is faster because I’m working on the problem I’m trying to solve instead of yak-shaving six degrees of separation away.


I miss writing Chef cookbooks, it’s still my favorite configuration management system. The overhead of maintaining simple things in Chef is just too high, though, and the power of its ecosystem can’t offset that overhead on small projects. My life with Ansible and Docker feels messier, but it’s also simpler.

I’ve also come to appreciate that while having a really sophisticated configuration management and deployment system is great, it does you precious little good if your management of persistent state across upgrades and node-replacement isn’t similarly sophisticated. Building images with Dockerfiles feels like a huge step backward in terms of configuration management sophistication, but it’s a huge step forward in terms of state management, and that’s a tradeoff well worth making in many situations.

Goodbye Wordpress!


After more than 12 years it’s time to say goodbye to WordPress. It’s been a good run and WordPress is fantastic software, but I spend considerably more time maintaining it than I do writing. A static site can do everything I want and needs way less maintenance when I’m not using it. I’ve switched over to Hugo and am relatively happy… though there were some minor bumps and bruises along the way.

Yearly and Monthly Archives

If you put the year or month in your permalink structure, WordPress automatically creates yearly and monthly archive pages. For example, if your permalink structure is http://example.com/%year%/%monthnum%/%postname%/, you can visit http://example.com/2017/06/ and see a list of postings from that month. I’m probably unusual, but I like to navigate sites this way and I want my site to have reasonable archive pages at those year and month urls.

Hugo can’t yet do this. It’s relatively straightforward to use Hugo’s templating features to create a single archive page that links to every post, but the per-year and per-month urls are important to me.

I wasn’t able to use Hugo to solve this problem, but most webservers do have the ability to automatically display the contents of a directory, which is already what Hugo generates. I configured my Caddy webserver to do this and it works ok. The generated page style is inconsistent with the rest of the site but Caddy does allow styling those pages if I choose to do so later. More likely I’ll live with the default style until the Hugo issue is resolved and then start generating monthly/yearly archives with Hugo.
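In the Caddy I was running (v1), that directory-listing behavior is the browse directive; the Caddyfile looks something like this (a sketch; the site name and root are illustrative):

```
example.com {
    root /var/www/blog/public
    # Show a directory listing for any path without an index file,
    # e.g. the /2017/ and /2017/06/ archive directories Hugo generates.
    browse
}
```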


Hugo is written in the Go programming language, which is a relatively young language that prominently features static and self-contained binaries. I’m a huge fan of Go’s static binaries. A large part of the reason I picked Hugo over Jekyll is ease of installation and upgrade (just download one binary and run it). But one downside of the self-contained nature of Go programs is that plugin systems are tricky to create. Hugo doesn’t have one yet. The lack of a plugin ecosystem does limit what Hugo can do compared to systems like Jekyll, but my needs are relatively simple and it hasn’t been a major issue.

Theme Inconsistency

The Hugo theme ecosystem seems immature compared to what I’m used to from the world of WordPress. WordPress has well-developed conventions for how themes are configured. In contrast, the Hugo theme ecosystem seems to have few broadly adopted conventions. Many Hugo themes don’t support every site layout that Hugo can generate, but instead assume that your site content adopts a specific category or filesystem layout. These limited/opinionated themes combined with my ignorance of Hugo’s site-layout conventions to create several confusing moments when the site rendered with missing content or in other unexpected ways. Only after reading Hugo’s site organization docs closely and poring through theme source code did I come to understand why things weren’t rendering as expected.

WordPress also has easy-to-use mechanisms for extending themes via plugins and widgets. With Hugo, themes themselves are the only mechanism for extending Hugo’s capabilities. Hugo does allow you to add and override things in your theme on a file-by-file basis without editing the upstream theme directly. That’s relatively powerful, but there’s an art to factoring a theme into easily overridden files, and maintenance can be unpleasant if you end up having to override something in a large, core file in the theme. If you’re a front-end developer maintaining your own theme, none of this matters in the least. If you want to do light customization of an existing theme, minimizing maintenance headache so you can update the upstream theme easily is a little finicky.

I chose hugo-theme-bootstrap4-blog for my theme and have been happy. It has clear documentation about the content layout it expects, it provides config.toml options for most things I want to customize, and the maintainer has been responsive to pull requests to add the features I wanted without my having to keep a fork that deviates from upstream.


Thankfully, migrating my data was not terribly difficult. I read this post on migrating data from WordPress to Hugo and was able to use a combination of WordPress’s built-in export-to-xml feature and the ExitWP tool to convert my WordPress database to a skeleton Hugo site. I was able to keep my permalink structure the same, and I was already hosting files and images at static non-WordPressy URLs that didn’t change. The only url change I found was that I had my RSS feed at /feed/ and I added a webserver redirect from there to /index.xml where Hugo puts the feed.
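The feed redirect is a one-liner in most webservers; in a Caddy v1 Caddyfile it looks something like this (a sketch; the site name and root are illustrative):

```
example.com {
    root /var/www/blog/public
    # Old WordPress feed URL -> Hugo's generated feed
    redir /feed/ /index.xml 301
}
```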


It wasn’t the smoothest migration in the world, and Hugo had a non-trivial learning curve for me… but I’m happy with the result. Writing posts is dead-easy and when I’m not writing there’s no maintenance to do.

Filling up the Boot Partition

Ubuntu doesn’t remove old kernels when upgrading to new kernel versions, which is great because sometimes there’s a compatibility problem and you want to roll back. If you don’t pay attention to free disk space, though, it’s really easy to fill up your boot partition, which is only a couple hundred megs by default. When this happens, kernel upgrades start failing and apt may start throwing errors for all package operations, which isn’t fun. It’s relatively straightforward to recover from, but it happens infrequently enough that it always takes me too long to remember the details. Next time I’ll check here:
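Before removing anything, a quick check shows how bad the situation is (assumes a Debian/Ubuntu system):

```shell
# How full is the boot partition, and which kernel packages are installed?
df -h /boot
dpkg --list | grep linux-image
```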

  1. Find the currently running kernel version. I never uninstall the currently running kernel, in case there are compatibility issues with newer kernels:

     uname -r
  2. For each older kernel you want to remove:

     sudo dpkg --purge linux-image-x.x.x-xx-generic

    You can do this with apt-get or aptitude as well, but dpkg is less likely to run into problems with broken dependencies or half-configured kernel installs (as is common if you discover this problem while upgrading your kernel).

  3. I almost always also have kernel headers installed. While they don’t take up space in /boot, they’re not needed once the old kernel is removed either. Might as well clean them up as well:

     sudo dpkg --list | grep linux-headers
     sudo dpkg --purge linux-headers-x.x.x-xx
  4. At this point apt-get can probably finish whatever installs/upgrades were in-flight when this issue started:

     apt-get --fix-broken install

Now to set up free disk space monitoring so this doesn’t happen every few months.

Chefspec 3 and Guard Evaluation


Chefspec is an incredibly useful tool for testing Chef cookbooks. It’s much, much faster than running Chef on a real node, but it can provide much of the testing feedback you’d get from a real Chef run. Verious.com has a nice introduction to chefspec if you’re not already familiar with it.

What makes chefspec so fast is that it doesn’t perform a full chef run. It loads Chef with your cookbooks and modifies them in-memory so that they merely send messages to Chefspec instead of performing real system changes. It does this primarily by stubbing Chef’s Resource class. In Chef, just about every way to manipulate system state is a resource. Most of them have excellent metadata about the actions they will perform (files have filename attributes, packages have package names) and they all share common methods for executing the work, so it’s surprisingly straightforward for Chefspec to stub the “doing work” part so it performs no action, while retaining the ability to effectively test for what would have been done.

Execute Blocks

This process is nothing short of amazing for Chef built-in resources like files, templates, packages, etc. It’s fast, it’s accurate (excepting bugs in core Chef that result in unexpected actions due to the “doing work” code), and it’s simple to use. But it does have limits. A good chunk of Chef’s flexibility comes from the ability to run custom code in execute blocks and ruby blocks:

execute "thing1" do
  command "rm -rf /"
end

execute "thing2" do
  command "find / -print0 | perl -0ne unlink"
end

execute "thing3" do
  command "python -c \"import shutil; shutil.rmtree('/')\""
end

Chefspec is pretty limited in what it can tell you about execute blocks. There’s no way it can know that the 3 execute blocks above all do the same thing (delete the root filesystem), or that it’s not safe to run those commands on your development workstation. Out of the box, it’s largely limited to testing whether or not the execute block is called.


But even reliably determining if an execute block will run is not trivial. The not_if and only_if guards used to determine whether the block runs present similar problems to the execute block itself:

execute "create_database_schema" do
  command "mysql -u user -p password dbname < create_schema.sql"
  not_if "echo 'show tables;' | mysql -u user -p password dbname | grep tablename"
end
The not_if guard above will give unexpected results if the mysql binary is missing from the system where you run chefspec. Chefspec 2.x sidestepped the issue. It didn’t execute guards by default, and simply assumed that the execute block itself would always run… not ideal. Chefspec 3 does inspect the guards, but rather than executing the commands inside of them, it raises an error requiring you to stub them yourself like so:

it "Creates the Database Schema when needed" do
  stub_command("echo 'show tables;' | mysql -u user -p password dbname | grep tablename").and_return(false)
  expect(chef_run).to run_execute('create_database_schema')
end

it "Doesn't create the Database Schema when it already exists" do
  stub_command("echo 'show tables;' | mysql -u user -p password dbname | grep tablename").and_return(true)
  expect(chef_run).to_not run_execute('create_database_schema')
end

This is a pretty clean example. In practice, guards frequently contain wacky stuff. It’s not unusual to leverage a couple shell commands and do some ruby transformations on the resulting complex data type, possibly requiring several stub classes to stub a single guard. If you include several upstream cookbooks, you may have a substantial amount of stubbing ahead of you before chefspec 3 will run cleanly.

Test Coverage

The Chefspec 3 convention of encouraging the stubbing of not_if and only_if guards results in covering more of your Chef code with unit tests, and that’s a great thing. It comes with a non-trivial cost, though. Having to stub the code in included cookbooks in order to test your own code isn’t fun. With chefspec 2.x, I accepted a very low level of code coverage from chefspec, using it only to test “well-behaved” resources that required little to no stubbing. My complete testing workflow looks like this:

  • Syntax and style testing with Rubocop.
  • Chef syntax checking with knife cookbook test
  • Fast low-coverage unit tests with chefspec
  • Slow, high-coverage integration tests with minitest-handler (either via Vagrant provision while I’m hacking, or test-kitchen in Jenkins/Travis)

Because the integration environment that Chef works in is so much more complex than that of most (non-infrastructure-automation) code, I prefer to invest in having a strong integration test suite in minitest-handler rather than spending a lot of time stubbing and mocking in chefspec. I still want to use Chefspec to catch low-hanging fruit because my integration tests are very slow by comparison, but I’m willing to accept a relatively low level of unit-test coverage. If I were doing lots of LWRP development or otherwise had to test very complex logic in Chef, I’d need stronger unit testing, but 90% of my Chef code is straightforward attributes, resources, and includes, so integration testing is where a lot of my bugs are easiest to find.

Skipping Guard Evaluation

That’s a roundabout way of saying that I like the chefspec 2.x behavior of skipping guard evaluation. It results in a less robust unit-test suite, but I make up for it with my integration tests. If you prefer the same tradeoff, you can get the old behavior back by stubbing Chef’s resource class yourself:

require 'chefspec'

describe 'example_recipe' do
  let(:chef_run) do
    ChefSpec::Runner.new(platform: 'ubuntu', version: '12.04')
                    .converge('example_cookbook::example_recipe')
  end

  before(:each) do
    # Stub out guards so that execute blocks always "run". This is one possible
    # approach; the exact method to stub varies with your Chef version.
    allow_any_instance_of(Chef::Resource).to receive(:should_skip?).and_return(false)
  end

  it 'Creates the database schema' do
    expect(chef_run).to run_execute('create_database_schema')
  end
end

Capacity Planning for Snort IDS

Snort is a very capable network intrusion detection system, but planning a first-time hardware purchase can be difficult. It requires fairly deep knowledge of x86 server performance and of network usage patterns at your site, along with some Snort-specific knowledge. Documentation is poor, and current planning guides tend to focus on one or two factors in depth without addressing other broad issues that can cause serious performance problems. This post aims to be a comprehensive but high-level overview of the issues that must be considered when sizing a medium to large Snort deployment.

A Note About Small Sites

Small Snort deployments don’t require much planning. Almost any system or virtual machine will suffice to experiment with Snort on a DSL or cable internet connection with a bandwidth of 5-10Mbits/sec; just jump right in. If you need to monitor 50-100Mbits/sec or even 5-10Gbits/sec of network traffic, then this guide can help you size your sensor hardware.

Know Your Site

It helps to know a few things about your site before you start planning.

The most common way to get started is to monitor your internet link(s). Many organizations also expand to monitor some internal links: data-center routers, site-to-site links, or networks with VIP workstations. Unless you know what you’re doing, I suggest starting with your internet links and expanding once you’ve got that performing well. There are generally far fewer internet links to consider, and they are often much lower bandwidth than internal links which can make your first deployment simpler.

Life is simple if you have a single internet connection at a single site. If your network is more complicated then you’ll need to work with the team that manages your routers. They can help you figure out how many locations will need to get a sensor and how many capture interfaces each of those sensors will need to monitor the links at that site.

How much traffic do you need to monitor?

The single biggest factor when sizing your snort hardware is the amount of traffic that it must monitor. The values to consider are the maximum burst speed of each link and its average daily peak. It’s common to have burst capacity well in excess of actual usage, and when you design your sensors you must decide which traffic level you’re going to plan for. Planning for the burst value ensures that you won’t drop packets even in a worst-case scenario, but may be much more expensive than planning for the average daily peak.

For example, it’s common to contract with an ISP for 100Mbits/sec of bandwidth that is delivered over a 1000Mbits/sec link. The average daily peak for such a link may be 60Mbits/sec, but on rare occasions it may burst up to the full 1000Mbits/sec for short durations. A sensor designed for the relatively small amount of daily peak traffic is inexpensive and simple to manage, but may drop 80% of packets or more during bursts.
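
The tradeoff can be sanity-checked with back-of-envelope arithmetic: any traffic beyond what the sensor can inspect is simply lost. Here’s a sketch in shell, where the 200Mbits/sec capacity figure is an assumption for illustration, not a measured value:

```shell
# Back-of-envelope drop estimate during a burst. Anything beyond the sensor's
# inspection capacity is dropped.
sensor_capacity=200   # Mbits/sec the sensor can inspect without drops (assumed)
burst_rate=1000       # Mbits/sec worst-case burst on the link

dropped=$(( (burst_rate - sensor_capacity) * 100 / burst_rate ))
echo "During a full burst, roughly ${dropped}% of packets are dropped."
```

A sensor sized for 200Mbits/sec facing a full 1000Mbits/sec burst drops roughly 80% of packets, matching the “80% of packets or more” figure above.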

If MRTG or Nagios graphs of router utilization are available, they can be very helpful in capacity planning.

Inline, Tap, Span, or VACL Capture

There are various ways to extract traffic for examination. Inline deployments, where Snort is used as an intrusion prevention system, should be treated with great caution because sizing problems and configuration issues related to Snort can cause network problems and outages for all your users. When running a detection configuration in conjunction with taps, spans, or VACL captures, Snort problems generally don’t cause user-facing network outages and are a much lower risk.

Security teams generally favor taps due to their consistent performance even when a router is overloaded, but there are successful Snort deployments that utilize all of the above methods of obtaining traffic for inspection. Ntop.org has a good document on making the tap vs span decision, and the wikipedia page on network taps provides informative background as well.

Operating System Expertise

Consider what operating systems your technical staff have expertise in. It is common to run high-performance Snort deployments on various Linux distributions or on FreeBSD. At one time, FreeBSD had a considerable performance advantage over equivalent Linux systems, but it is currently possible to build a 10Gbit/sec deployment on Linux or BSD based systems using roughly equivalent hardware.

I recommend against deploying on Windows because not all Snort features are supported on that platform. Notably, shared-object rules do not function on Windows as of this writing. While there are far fewer shared-object rules than normal “GID 1” rules, and they are released less frequently, they can still be a useful source of intelligence.

I also recommend against deploying Snort on *nixes other than Linux or BSD. Although Snort may work well on these platforms, the community employing them is much smaller. It will be much more difficult to find guidance on any platform-specific issues that you encounter.

It’s worth mentioning that my own experience is with high-performance Snort deployments on Linux, and parts of this post reflect that bias.

Single-Threading vs Multiple-CPUs

Snort is essentially single-threaded, which means that out of the box it doesn’t make effective use of multiple CPUs (technically there is more than one thread in a snort process, but the others are used for housekeeping tasks that don’t require much CPU power, not for scaling traffic analysis across multiple CPUs). As of August 2011, Snort on a single CPU can be tuned to examine 200-500Mbits/sec, depending on the size of the ruleset used.

It’s possible to scale to 10Gbits/sec by running multiple copies of snort on the same computer, each using a different CPU. A multi-snort/multi-CPU configuration is quite a lot more complex to manage than a single-CPU deployment. Traffic from high-capacity links must be divided up into 200-500Mbit/sec chunks that can be examined by a single CPU; techniques to perform this load-balancing are discussed in the next section. Additionally, startup scripts often must be customized, and it can be difficult to manage multiple configuration files and log files. In spite of the management complexity, large organizations have successfully managed high performance Snort deployments this way for many years.
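
To illustrate the kind of customization involved, here’s a hypothetical sketch of what a multi-instance startup script might generate: one snort command per CPU core, pinned with taskset. It prints the commands rather than executing them; the instance count, config paths, and log directories are examples only.

```shell
# Hypothetical multi-instance launcher sketch: print one pinned snort command
# per CPU core. Each instance gets its own config file and log directory.
NCPUS=4
for cpu in $(seq 0 $((NCPUS - 1))); do
  echo "taskset -c $cpu snort -c /etc/snort/snort-$cpu.conf -l /var/log/snort/$cpu"
done
```

Note that each instance needs its own log directory and (usually) its own config, which is exactly the management overhead described above.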

Suricata is a relatively new project that is well worth keeping an eye on. It has a multi-threaded architecture that makes effective use of multiple CPUs, but it is not as CPU-efficient as Snort as of Suricata 1.0.0. As such, Suricata on a large multi-core system is much faster than Snort running on a single CPU, but about 4x slower than many Snort instances running on that same multi-core system. As Suricata matures, performance will improve. Additionally, managing a single Suricata instance is simpler than managing many Snort instances. Update 2013-11: Suricata seems to have addressed its performance issues; it can now inspect several hundred Mbits/sec/core, which is on par with Snort.

Traffic Capture Frameworks

Snort is a modular system that supports many frameworks for capturing traffic, but not all of them scale equally well.


Afpacket

The default capture framework on Linux since Snort 2.9, afpacket provides no features to load-balance traffic between multiple instances of snort running on multiple CPUs. As such, it can’t scale beyond 200-500Mbits/sec of throughput without some external technique to balance the load between several network interfaces. Even with this limitation, afpacket is the simplest and best choice for snort deployments with less than 200Mbits/sec of traffic.
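
For such a simple deployment, the invocation is a single command. A sketch follows, where the interface name and the config and log paths are placeholders for your own values:

```shell
# Single-instance passive capture via the afpacket DAQ (Snort 2.9+).
# eth1 and the paths below are example values, not recommendations.
snort --daq afpacket -i eth1 -c /etc/snort/snort.conf -l /var/log/snort
```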

Libpcap 0.9.x

The default capture framework on Linux for Snort 2.8.x and prior, libpcap is very similar to afpacket from a user perspective. It also lacks a built-in load-balancing feature, and can scale to a few hundred Mbits/sec of traffic. Consider upgrading Snort and using afpacket instead.

Libpcap >= 1.x.x

Around 1.0.0, libpcap introduced an mmapped feature designed to improve capture performance. Unfortunately the feature backfired and reduced performance due to a hard-coded buffer-size that is too small for most sites. Use afpacket instead unless you know what you’re doing.


Pfring

Pfring is a Linux kernel module that provides load-balancing through its ring clusters feature. It additionally supports several capture cards through its TNAPI/DNA high-performance drivers, which are available for $200-250 from the ntop store. Pfring, used in conjunction with a TNAPI-compatible network interface, is the least expensive method available to load-balance traffic to several instances of Snort running on several CPUs, and can scale to 10G on appropriate hardware.
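
As a hedged sketch of how the ring clusters feature is typically used: load the kernel module, then start several snort instances that share one cluster id so that pfring load-balances flows between them. The interface name, cluster id, and paths are examples.

```shell
# Hypothetical pfring sketch: instances sharing clusterid=10 split the traffic
# arriving on eth1 between themselves.
modprobe pf_ring
snort --daq pfring -i eth1 --daq-var clusterid=10 -c /etc/snort/snort.conf -l /var/log/snort/0 &
snort --daq pfring -i eth1 --daq-var clusterid=10 -c /etc/snort/snort.conf -l /var/log/snort/1 &
```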

High-Performance Capture Cards

Endace and other companies manufacture high-performance capture cards with integrated drivers that have load-balancing features. Depending on speed and features these cards can cost anywhere from $2,000-$25,000, and at the high end scale to 10Gbits/sec. Most of my high-performance Snort experience is on Endace hardware, which has its niggles but generally works very well.

Sourcefire 3D Hardware

Last, but certainly not least, Sourcefire sells Snort hardware that is throughput-rated and can simplify much of your planning. Managing a multi-Snort deployment is a lot of work, and Sourcefire has designed their systems to provide the power of Snort with an easy-to-manage interface, plus some features like RNA that are only available via Sourcefire. They’re more expensive than similar hardware to run open-source snort, but they may be more cost-effective in the long run unless your organization has a do-it-yourself culture with the time and technical expertise to tackle a complex open-source Snort deployment.

Traffic Management Techniques

The following traffic management techniques can be used in conjunction with the capture frameworks above to provide additional flexibility.

Hardware Load-Balancers

Gigamon, CPacket, and Top-Layer produce specialized network switches that can perform load-balancing to multiple network interfaces. The port-channeling feature of retired Cisco routers can be used to similar effect. These devices can be used to distribute traffic to multiple network interfaces in a single server or even to multiple servers, possibly scaling beyond 10G (I haven’t tested beyond 10G). I’ve worked with both Gigamon and Top-Layer hardware and found that they both do what they claim, although only Gigamon offers many 10Gbit/sec interfaces in one device. CPacket has been used by knowledgeable peers of mine and offers a unique feature that allows you to use any vanilla network switch to expand the port count of their load-balancer by using mac-address rewriting. These systems are fairly expensive, typically carrying 5-figure price tags, but often can be put to many uses in a large organization.

Manual Load-Balancing

Sometimes, traffic can be manually divided simply by configuring routers to send about half of your networks over one port and half over another. This “poor man’s” load-balancing can be cost-effective for links that are just a bit too large for one network interface.

Linux Bonded Interfaces

The opposite of load-balancing: if you have several low-bandwidth interfaces that you would like to inspect without the overhead of managing multiple copies of snort, you can use bonding to aggregate them together, as long as the total throughput isn’t more than a few hundred Mbits/sec.
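
A hypothetical sketch of such a setup on an era-appropriate Linux system follows; the interface names are examples, and your distribution may prefer different bonding tooling:

```shell
# Aggregate two capture interfaces into one bond so a single snort instance
# can watch both. eth1/eth2 are placeholders for your capture interfaces.
modprobe bonding
ifconfig bond0 up
ifenslave bond0 eth1 eth2
snort --daq afpacket -i bond0 -c /etc/snort/snort.conf
```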

Sizing Hardware

Now that you know how many locations you need to place a server at, how many links there are to monitor at each location, and what capture-frameworks can work for you, it’s time to choose your servers.


CPU

A very rough and conservative rule of thumb is that Snort running on a single CPU can examine 200Mbits/sec of traffic without dropping an appreciable number of packets. Snort can examine 500Mbits/sec of traffic or even much more on a single CPU with the right networking hardware and a very small or very well-tuned ruleset, but don’t count on achieving that kind of throughput unless you have tested and measured it in your environment. Martin has posted a more detailed CPU sizing exercise on his blog if you’d like to dig a little deeper.

Remember that snort is single-threaded. Unless you plan to use a load-balanced capture-framework, single-CPU performance is more important than number of cores. Alternately, if you know that you have lots of traffic to monitor, you’ll need a multi-core system paired with a load-balanced capture framework. Snort scales very linearly with the number of cores you throw at it, so don’t worry about diminishing returns as you add cores.


RAM

Each snort process can occupy 2-5Gbytes of RAM. How much depends on:

  • Traffic - The more traffic a sensor handles, the more state it must track. Stream5 can use anywhere from a few Mbytes to 1Gbyte to track TCP state.
  • Pattern Matcher - Some pattern matchers are very CPU-efficient, and others are very memory-efficient. The ac-nq matcher is the most CPU-efficient, reducing CPU usage by up to 30% over ac-split but adding over 1Gbyte of RAM usage per process. The ac-bnfa matcher is quite memory-efficient, reducing RAM usage by several hundred Mbytes per process but increasing CPU usage by up to 20%.
  • Number of rules - The more rules that are active, the more memory the pattern matcher uses.
  • Preprocessor configs - The stream5 memcap is one crucial factor for controlling memory usage, but all preprocessors occupy memory and many can be configured to be conservative or resource-hungry.

A Snort process inspecting 400Mbits/sec of traffic, with 7000 active rules, using the ac-nq pattern matcher (which is memory-hungry), and a stream5 memcap of 1Gbyte uses about 4.5Gbytes of RAM. With a smaller ruleset and the ac-bnfa pattern matcher (which is memory-efficient), I’ve observed snort processes use about 2.5Gbytes of RAM.
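
Those observations suggest a rough per-sensor budget you can compute directly. Here’s a sketch where the process count and OS overhead are assumptions, and the 4.5Gbyte worst-case observation above is rounded up to 5:

```shell
# Rough RAM budget for a multi-snort sensor. All figures are assumptions
# based on the observations above, not measurements for your site.
processes=4         # one snort process per capture CPU (assumed)
ram_per_process=5   # Gbytes per process, worst case rounded up from 4.5
os_overhead=2       # Gbytes for the OS and other applications

total=$(( processes * ram_per_process + os_overhead ))
echo "Plan for roughly ${total}Gbytes of RAM."
```

For this assumed four-process sensor, the budget works out to 22Gbytes.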

Note that the operating system and other applications will need some RAM as well; if you don’t have unusual needs, 2Gbytes is generally plenty. A detailed discussion of RAM sizing for the database is beyond the scope of this post, but generally for a multi-snort deployment it’s worth putting the database on a separate server that has 1-4Gbytes of RAM.

Disk Capacity and I/O

Snort generates very little disk I/O when outputting unified2 logs. Similarly, barnyard2 generates very little I/O when reading them. Any hard-disk configuration, even a single low-rpm disk, will meet snort’s performance needs.

A detailed discussion of the database I/O needs is beyond the scope of this post. Again, most multi-snort sites should consider putting the database on a different server. I/O needs will vary depending on the alert rate, the number of users querying the database, and the front-end used, but in general a 4-disk RAID-10 will suffice even for a large multi-gigabit deployment. Small sites with only a few hundred Mbits/sec of traffic could even use a single disk if it meets their availability requirements.

Administrative Network Interface

Snort doesn’t generate a notable amount of network traffic on the administrative interface unless you’re connecting to the database over a low-bandwidth wan-link. Any network interface that is supported under Linux will suffice for even the largest 10Gbit/sec deployments.

Capture Network Interfaces

Each site has widely varying requirements for capture interfaces, so it’s difficult to make generic recommendations. Consider the following factors:

  • Have enough servers to put one at each site where there is a link to be monitored.
  • Have enough interfaces in each server to monitor the number of links at its site.
  • Ensure that each interface is fast enough to monitor the link assigned to it without dropping packets.
  • If any individual link exceeds about 200Mbits/sec, employ a capture framework that features load-balancing and select a compatible interface.

PCI Bus Speed

At multi-Gbit/sec traffic rates, it is possible to saturate the PCI Express bus. Each PCI Express 16x slot has a bandwidth of 32Gbits/sec (4Gbytes/sec), 8x slots are half that, and 4x slots are half that again. Theoretically, each slot has dedicated bandwidth, such that two PCI Express 16x slots should have a combined bandwidth of 64Gbits/sec, but in practice the uplink between the PCI Express bus and the main memory bus is different in each motherboard chipset and may not be fast enough to provide the full theoretical bandwidth to every slot.

Bus saturation is only a potential issue at very high traffic rates, either involving multiple 10Gbit/sec links or inspection of a single 10Gbit/sec link with multiple sensor applications. Be prepared to split sensor functionality across multiple servers if testing shows unexpected performance bottlenecks that might be related to bus saturation. Hardware load-balancers such as those sold by Gigamon can be useful to duplicate and load-balance very high traffic rates to multiple 10Gbit/sec sensors.

Putting It All Together

Many factors are listed above, but 80% or more of cases fall into a few broad classes that can be summed up briefly:

  1. One or two links, 200Mbits/sec or slower - Almost any server you buy today can handle this. Get 2-4 cores, 8Gbytes of RAM, and 2-4 network interfaces of any type if you want to maximize your options.
  2. One or two links, 200-400Mbits/sec - You should consider multi-snort load-balancing with pfring or another suitable capture framework. If you’re going to try to feed this traffic to a single snort instance in order to avoid the maintenance overhead of multi-snort, get the highest-clocked single CPU that you can find; otherwise any system with sufficient RAM will work well.
  3. One or two links, 500-1000Mbits/sec - You need multi-snort; consider pfring with a TNAPI-compatible network interface listed on ntop.org. You’ll need 2-4 snort processes, which means 10-20Gbytes of RAM and a quad-core system.
  4. One or two links, 1-10Gbit/sec - You definitely need multi-snort with high-performance capture hardware. I’m partial to Endace, but pfring with a 10G TNAPI-compatible card should also work. You need one core and 4Gbytes of RAM for every 250Mbits/sec of traffic that you need to inspect. Alternatively, consider a Sourcefire system. If you’re just getting started with Snort, this is going to be a big project to do on your own.
  5. Many links or greater than 10Gbit/sec traffic - Try to break the problem down into multiple instances of the above cases. A Gigamon box at each site may give you the flexibility that you need to split the problem across multiple servers effectively. You might also need a moderately high-performance database server, properly tuned and sized.
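
The one-core/4Gbytes per 250Mbits/sec rule of thumb from case 4 makes full-link sizing a quick calculation. A sketch for a single 10Gbit/sec link:

```shell
# Size a sensor for a 10Gbit/sec link using the rule of thumb above:
# one core and 4Gbytes of RAM per 250Mbits/sec of inspected traffic.
traffic=10000                       # Mbits/sec to inspect
cores=$(( (traffic + 249) / 250 ))  # round up to whole cores
ram=$(( cores * 4 ))
echo "${cores} cores and ${ram}Gbytes of RAM"
```

A full 10Gbit/sec link works out to 40 cores and 160Gbytes of RAM, which is why case 4 points you toward multiple servers or a Sourcefire appliance rather than a single box.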

Wrapping Up

Good luck with your new Snort server. Now go get some rules:

  • Emerging Threats: Excellent for detecting trojans and malware that have successfully compromised systems on your network and are “phoning home”. The ET rules are available free of charge and anyone can contribute fixes or new rules if you find a gap or problem with the ruleset.
  • VRT Subscriber Feed: Excellent for detection of exploits and attacks before they become compromised systems. The subscriber feed is developed and maintained by the experts on Sourcefire’s Vulnerability Research Team, and they charge $30/yr for a personal subscription or $500/yr for a business subscription.
  • VRT Registered Feed: The registered feed contains the same rules as the subscriber feed, but updates are released 30-days after subscribers receive them. The registered feed is a reasonable alternative for personal use, but if you’re protecting a business I recommend the subscriber feed.
  • ETPro: ETPro aims to supplement the ET community sigs with attack/exploit sigs similar to what the VRT provides. Pricing is $35/yr for personal use or $350/yr for businesses. I haven’t used it, though it’s on my todo list to try.

Once you’ve got things running, consider reading my slides on monitoring Snort performance with Zabbix to see how well you sized your system.

License and Feedback

If you find errors in this guide or know of additions that would improve it, leave a comment below.

Creative Commons License Capacity Planning for Snort IDS by Mike Lococo is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Permissions beyond the scope of this license may be available at http://mikelococo.com/2011/08/snort-capacity-planning/#license.

If you’d like to reuse the contents of this post but the cc-by-sa license doesn’t work for you for some reason, I’m happy to discuss offering the contents of this guide under almost any reasonable terms at no cost to individuals and corporations alike. Whether you work for Sourcefire, the OISF, or are just another community member writing a Snort guide, I’m happy to work something out that lets you use any portion of this post you need. Leave a comment below or contact me using the information on the about page if you’d like to discuss.