Wednesday, September 5, 2012

Autoscaling with puppet

Puppet is great. A repeatable installation script for a server is the best documentation there is. Configuring an entire server with all kinds of services is easy! When the configuration of one service changes, dependent services can even be reloaded automatically.

On a slightly different scale, where dependencies exist not between services on the same machine but between services on different machines, the options are limited.

Puppet supports eventual consistency by running puppet agent on each machine at a fixed interval, which updates its configuration. For autoscaling, this means the interval determines how long it takes before a new machine becomes visible to the load balancer. To make this work at all, information about every server also has to be kept in the so-called stored configs database. Needless to say, with a large server park, a short interval puts considerable stress on that database. Another drawback is that scaling down requires cleaning removed servers out of the database.
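For reference, both knobs live in puppet.conf; a minimal sketch (the values here are illustrative, not recommendations):

```
# /etc/puppet/puppet.conf
[agent]
# seconds between automatic puppet agent runs; shorter means faster
# convergence but more load on the master and stored configs database
runinterval = 300

[master]
# required for exported resources / the stored configs approach
storeconfigs = true
```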

Meet mcollective

mcollective's philosophy can be summarized as 'the network is the database'. With its message-queue-based architecture, all kinds of information about your server park can be retrieved at mind-boggling speed.

"There is no central asset database to go out of sync."

Queries are executed in parallel on all machines. So let's use mcollective instead of a database to retrieve facts about the servers in the park. And while we're at it, let's use mcollective to update services on separate machines. With that in place, we can autoscale both up and down without stressing a database server or risking being out of sync with reality.
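As an illustration, such a parallel fact query across the whole park looks something like this (fact name and output depend on your setup):

```
# Summarize the values of the 'role' fact across all machines
$ mco facts role

# Discover every machine whose 'role' fact equals 'node'
$ mco find -W role=node
```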

puppet-kicker: cross server puppet notifications

puppet-kicker requires mcollective with the puppetd plugin for querying and notifying servers. Be sure to export all your facter/puppet facts to mcollective (using the FactsFacterYAML plugin, for example).
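With the YAML facts plugin, the relevant mcollective configuration is a fact source pointing at a YAML file, plus something that keeps that file fresh; a sketch (the cron interval is arbitrary):

```
# /etc/mcollective/server.cfg (relevant lines only)
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml

# crontab entry: refresh the facts cache every 10 minutes
*/10 * * * * facter --yaml > /etc/mcollective/facts.yaml
```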

Puppet-kicker triggers runs of puppet agent on dependent servers.

Suppose we've got a load balancer and some nodes that need to be balanced. With puppet-kicker that pattern looks like this:

On each node, you've got to make sure a fact named 'role' is available. After that, all it takes is
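The original snippet is missing from this copy; a hypothetical sketch of what a kick declaration could look like (the define name kicker::kick and its parameters are assumptions for illustration, not puppet-kicker's documented interface):

```
# On the balanced node: after this node is configured, kick every
# server whose 'role' fact is 'haproxy' so it reruns puppet agent.
# NOTE: 'kicker::kick' and its parameters are illustrative only.
kicker::kick { 'notify-loadbalancers':
  fact  => 'role',
  value => 'haproxy',
}
```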

Immediately after a new node is installed, all haproxy servers are kicked and puppet updates their configuration, adding the new server right away.
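Under the hood, such a kick amounts to an mcollective call along these lines (assuming the puppetd agent plugin is installed on the targets):

```
# Trigger a puppet agent run on every machine with role=haproxy
$ mco puppetd runonce -W role=haproxy
```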

On the load balancer, the 'role' fact should also be defined and set to 'haproxy'.
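With the YAML fact source described above, that's a one-line entry in the facts file:

```
# /etc/mcollective/facts.yaml on the load balancer
role: haproxy
```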

The config file would then look like this:
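The actual config file isn't shown in this copy; a minimal haproxy backend, with the server lines generated by puppet from the machines whose 'role' fact is 'node' (names and addresses here are placeholders), might look like:

```
# /etc/haproxy/haproxy.cfg -- balancing section (illustrative)
listen web 0.0.0.0:80
    balance roundrobin
    # one line per discovered node, filled in by puppet
    server node1 10.0.0.11:80 check
    server node2 10.0.0.12:80 check
```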

That should be enough to add new nodes and have the load balancer updated immediately. You can find puppet-kicker on GitHub.
