MQTT in the Cloud: Firehose

This is the first post in what’s going to be a multi-post series exploring how MQTT can be leveraged in the cloud. The series will look at several different models for deploying applications in the cloud and how you can use MQTT as an event-based protocol for those applications. This first post looks at using MQTT as a unified event bus for heterogeneous services deployed on more traditional, configuration-managed servers, through the lens of the OpenStack community infrastructure’s firehose effort.
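As a taste of what a firehose-style event bus looks like from the consumer side, here is a minimal sketch of subscribing to every topic on an MQTT broker with the paho-mqtt Python client. The firehose.openstack.org hostname, port 1883, and anonymous read access are assumptions on my part for illustration, not details from the post:

```python
# A minimal MQTT subscriber sketch using paho-mqtt 1.x style callbacks.
# The broker endpoint and anonymous access are assumed for illustration.
import paho.mqtt.client as mqtt


def on_connect(client, userdata, flags, rc):
    # Subscribe to every topic on the broker once connected.
    client.subscribe("#")


def on_message(client, userdata, msg):
    # Print each event as it arrives on the bus.
    print("%s: %s" % (msg.topic, msg.payload.decode("utf-8", "replace")))


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("firehose.openstack.org", 1883)  # assumed broker endpoint
client.loop_forever()
```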

Continue reading MQTT in the Cloud: Firehose

Python Test Runners

For the past year and a half I’ve been working on a parallel test runner for Python tests, stestr. I started the project to fill a need in the OpenStack community, and it has since been adopted as the test runner used across the OpenStack project. But it is a generally useful tool, one I think would provide value for anyone writing Python tests. In this post I want to explain two things: first, the testing stack underlying stestr and the history behind the project; and second, how it compares to the other popular Python test runners out there and the niche it fills.
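To make the basic workflow concrete, here is a minimal sketch of the kind of test stestr discovers and runs; the file layout shown in the comments is illustrative, not from the post:

```python
# tests/test_example.py -- a plain unittest-style test case, the kind of
# test stestr discovers and runs. This example file is illustrative.
import unittest


class TestExample(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

    def test_string_prefix(self):
        self.assertTrue("stestr".startswith("st"))

# With a .stestr.conf in the repo root pointing test_path at ./tests,
# "stestr run" partitions the discovered tests across worker processes
# and runs them in parallel.
```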

Continue reading Python Test Runners

Building a Manual PWM fan controller

I recently built a new home server. It’s a multipurpose box that will host most of my infrastructure and also acts as a file server with a lot of hard drives (and room for more in the future). All of those drives meant this ended up being a very large machine, just to have space for them all. I went with a CaseLabs Magnum THW10 for the case, which has room for a ton of hardware. While the machine is working great and doing everything I need it to, there is one small problem with it: the front fans aren’t spinning fast enough.

Continue reading Building a Manual PWM fan controller

Dirty Clouds Done Cheap

I recently gave a presentation at the OpenStack Summit in Boston with the same title. You can find a video of the talk here: https://www.openstack.org/videos/boston-2017/dirty-clouds-done-dirt-cheap. This blog post will cover the same project, but in a lot more detail than I could fit into the presentation.

Just a heads up, this post is long! I try to cover every step of the project with all the details I can remember. It probably would have made sense to split things into multiple posts, but I wrote it in a single sitting and breaking it up afterward felt weird. If you’re just looking for a quicker overview, I recommend watching the video of my talk instead.


The Project Scope

When I was in college I had a part-time job as a sysadmin at an HPC research lab in the aerospace engineering department. I was responsible for all aspects of the IT in the lab, from the workstations and servers to the HPC clusters, and I often had to deploy new software with no prior knowledge of it. I managed to muddle through most of the time by reading the official docs and frantically searching Google when I hit issues.

Since I started working on OpenStack I often think back to my work in college and wonder: if I had been tasked with deploying an OpenStack cloud back then, would I have been able to? As a naive college student with no knowledge of OpenStack, could I have deployed it successfully by myself? Since I knew nothing about configuration management (like Puppet or Chef) at the time, I would have gone about it by installing everything by hand. So the open question is: how hard is it, really, to install OpenStack by hand using just the documentation and Google searches?

Aside from the interesting thought exercise, I have also wanted a small cloud at home for a couple of reasons. I maintain a number of servers at home that run a bunch of critical infrastructure, and for some time I’ve wanted to virtualize that infrastructure, mainly for the increased flexibility and potential reliability improvements. Running things off a residential ISP and residential power isn’t the best way to run a server with decent uptime. Besides virtualizing some of my servers, it would be nice to have the extra resources for my upstream OpenStack development; I often don’t have the resources to run devstack or integration tests locally and have to rely on upstream testing.

So after the Ocata release I decided to combine these two ideas and build myself a small cloud at home. I would do it by hand (i.e., no automation or config management) to test out how hard it would be. I set myself a strict budget of $1500 USD (the rough cost of my first desktop computer in middle school, an IBM Netvista A30p that I bought with my Bar Mitzvah money) to acquire hardware. This was mostly just a fun project for me, so I didn’t want to spend an obscene amount of money. $1500 USD is still a lot of money, but it seemed like a reasonable amount for the project.

However, I decided to take things a step further than I originally planned and build the cloud using the release tarballs from http://tarballs.openstack.org/. My reasoning was to test out how hard it would be to take the raw code we release as a community and turn it into a working cloud. This basically invalidated the project as a test of my thought exercise about deploying a cloud back in college (since I definitely would have just used my Linux distro’s packages back then), but it made the exercise more relevant for me personally as an upstream OpenStack developer. It would give me insight into where there are gaps in our released code and how we could start to fix them.
Continue reading Dirty Clouds Done Cheap

Building a Better Thermostat

I recently came home from a trip that happened to fall in the middle of a heat wave. I arrived to find my apartment far too hot; it was at least 45°C inside. Needless to say, it wasn’t the most comfortable way to come home after 15 days out of town, especially given that it took several hours for my in-wall AC units to cool the apartment down. After this I decided it was time to do something about it.

I had been listening to some of my friends tell me about their home automation setups for a while, but I never really saw the point for myself. Living in a small one-bedroom apartment with no sensors, no networked appliances, and only two in-wall AC units, I figured there was nothing I could really automate. But after thinking about it and talking to a few people, I came to the conclusion that I could use it to solve my apartment’s temperature problem.

I would essentially need to create my own thermostat: in other words, something that can read the current temperature of my apartment and then adjust the AC based on that. That’s the same basic premise behind home automation, so it seemed like a good fit.
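To make that premise concrete, here is a minimal sketch of such a control loop with simple hysteresis; the read_temperature() and set_ac() helpers are hypothetical stand-ins for whatever sensor and AC integration you actually have:

```python
# A minimal thermostat control loop with hysteresis. The sensor and AC
# helpers below are hypothetical placeholders, not a real integration.
import time

TARGET_C = 24.0
HYSTERESIS_C = 1.0  # deadband so the AC doesn't rapidly cycle on and off


def read_temperature():
    """Hypothetical: return the current room temperature in Celsius."""
    raise NotImplementedError("replace with your sensor integration")


def set_ac(on):
    """Hypothetical: turn the AC unit on or off."""
    raise NotImplementedError("replace with your AC integration")


def control_loop():
    ac_on = False
    while True:
        temp = read_temperature()
        if not ac_on and temp > TARGET_C + HYSTERESIS_C:
            set_ac(True)   # too warm: start cooling
            ac_on = True
        elif ac_on and temp < TARGET_C - HYSTERESIS_C:
            set_ac(False)  # cool enough: stop
            ac_on = False
        time.sleep(60)  # poll once a minute
```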

Continue reading Building a Better Thermostat

Exploring the OpenStack-Health dashboard

A few months ago, at the Liberty QA code sprint in Ft. Collins, CO, we started work on the OpenStack-Health dashboard. I also recently announced the dashboard on the openstack-dev ML to bring it to the attention of the broader community and to get more users and feedback on how to improve things. I figured it’d be good to write up a more detailed post on the basics of how the dashboard is constructed, its current capabilities and limitations, and where we’d like to see it go in the future, especially given that the number of contributors to the project is still quite small. For the project to really grow and be useful for everyone in the community, we need more people helping out on it.
Continue reading Exploring the OpenStack-Health dashboard

Using subunit2sql with the gate

A little over a year ago I started writing subunit2sql, a project to collect test results into a SQL database. [1] The theory behind the project is that you can extract a great deal of information about what’s under test by looking at the trend data in test results over a period of time. Over the past year the project has grown and matured quite a bit, so I figured this was a good point to share what you can do with the project today and where I’d like to take it over the next year.
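As an illustration of the kind of trend data I mean, here is a sketch of a failure-rate query over a simplified results schema; the table and column names below are assumptions for illustration, not the project’s exact schema:

```python
# A sketch of a trend query over test results stored in SQL. The schema
# here (a test_runs table with test_id, status, start_time) is a
# simplified assumption, not subunit2sql's actual schema.
import sqlalchemy

engine = sqlalchemy.create_engine("sqlite:///subunit2sql.db")  # example DB
query = sqlalchemy.text("""
    SELECT test_id,
           AVG(CASE WHEN status = 'fail' THEN 1.0 ELSE 0.0 END) AS failure_rate
    FROM test_runs
    WHERE start_time > :since
    GROUP BY test_id
    ORDER BY failure_rate DESC
    LIMIT 10
""")

with engine.connect() as conn:
    # Find the ten flakiest tests over a window of runs.
    for row in conn.execute(query, {"since": "2015-01-01"}):
        print(row.test_id, row.failure_rate)
```

Continue reading Using subunit2sql with the gate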

OpenStack QA Code Sprint in NYC

Last week we had a 3 day code sprint for the QA program in NYC: https://wiki.openstack.org/wiki/QA/CodeSprintKiloNYC HP hosted the event at their office in Chelsea. Overall it was a very productive week where we accomplished a great deal. The goal of the sprint was to make a good push on a list of priority items that had seemed a bit stagnant and to land some code to move them forward. It was also a valuable opportunity for a bunch of us to get together and get to know the people we work with on a daily basis a bit better, something that’s often hard to do over IRC, gerrit, or the ML.

Continue reading OpenStack QA Code Sprint in NYC

Gate Bug Triage Conclusion

A few months ago I made a post about debugging a gate failure. It has been linked around and copied to quite a few places and seems to be a very popular post (definitely the most popular so far on this blog). Since the bug I opened from that post was closed as invalid a while ago, I figured I should write an update on the conclusion of the triage effort for the OOM failures on neutron jobs. It turns out that my suppositions in the earlier post were only partially correct. The cause of the failures was indeed running out of memory, but the problem leading to the OOM failures wasn’t limited to neutron. The neutron jobs simply ran more services, which used more memory, making failures there more common.

Continue reading Gate Bug Triage Conclusion

Triaging and classifying a gate failure

Recently I was helping someone debug a gate failure they had hit on one of their patches. After going through the logs with them and finding the cause of the failures, I was asked to write up how I debug gate failures, to help people understand how to get from a failure to a fix.

I figured I would go through what I did and why on that particular failure, and use it as an example to explain my initial debug process. I do want to preface this by saying that I wasted a great deal of time debugging this failure because I missed certain details at first, but I’m leaving all of those steps in here on purpose.

The log URL for this failure is:

http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-neutron-full/ab17a70/
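To give a flavor of the first step, here is a sketch of pulling the console log from a URL like this one and scanning it for failures; the console.html filename is my assumption about how the log server lays things out:

```python
# A sketch of a first pass over a gate job's logs: fetch the console log
# and print lines that look like failures. The console.html filename is
# an assumed detail of the log server layout.
import requests

LOG_ROOT = ("http://logs.openstack.org/75/116775/2/check/"
            "check-tempest-dsvm-neutron-full/ab17a70/")

resp = requests.get(LOG_ROOT + "console.html")
resp.raise_for_status()
for line in resp.text.splitlines():
    if "ERROR" in line or "FAILED" in line:
        print(line)
```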

Continue reading Triaging and classifying a gate failure