Turning a Blueprint Puppet recipe into a Puppet deployment

May 24, 2011 2 comments

A few weeks ago I wrote a post about Blueprint, a tool that will map out a running Linux system and generate a Puppet configuration from it.

At the time of the initial post I hadn't actually tried deploying the config; I was given that task last week. When I tried to do so I realized that not only is there pretty much zero documentation on how to do this, but the generated Puppet module is not 100% working. No disrespect to devstructure here, though: they wrote an amazing app that gets you about 90% of the way there, just with a few problems. This is a guide to how I got from the output of the blueprint command to a deployed Ubuntu box using the generated module.

I will be using Ubuntu 10.04, Puppet packages from apt, and a blueprint of my personal VPS. I will also be using /etc/hosts as a stand-in for a proper DNS server.

Throughout this guide both the server and the client should have the following in /etc/hosts (the names can be changed). The hostname of each box should also match its /etc/hosts entry.

IP.Of.Master puppetmaster.test.com
IP.Of.Client puppetclient.test.com
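Before going further it is worth checking that the names actually resolve. A minimal sketch, run here against a canned copy of the file with placeholder documentation addresses so it works anywhere:

```shell
# write a demo hosts file with placeholder IPs
cat > /tmp/hosts_demo <<'EOF'
192.0.2.10 puppetmaster.test.com
192.0.2.20 puppetclient.test.com
EOF

# look up the address recorded for the puppet master
awk '$2 == "puppetmaster.test.com" {print $1}' /tmp/hosts_demo
```

On the real boxes, `getent hosts puppetmaster.test.com` does the same lookup through the system resolver.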

Part 1: Getting Blueprint to make you a puppet recipe
1. Install pip and use it to install Blueprint

apt-get install python-pip

pip install blueprint (you can use --upgrade to get the newest version)

2. Run the blueprint-create command with the -P switch to generate a puppet module of your system.

root@vps:~# blueprint-create -P vps_blueprint
# [blueprint] searching for software built from source
# [blueprint] searching for configuration files
# [blueprint] searching for Ruby gems
# [blueprint] searching for Python packages
# [blueprint] searching for apt packages
# [blueprint] searching for yum packages
# [blueprint] searching for PEAR/PECL packages
Reinitialized existing Git repository in /home/pratik/.blueprints.git

Now you have a blueprint of your system in a folder called vps_blueprint. The question now is: what do you do with it? The Blueprint site doesn't document how to take this config and put it into Puppet (to be fair, I do not think the Puppet part of Blueprint is their main concern).

Part 2: Setup puppet master

1. On the box that will be the puppet master, install the required packages

apt-get install puppet-common puppetmaster

2. Copy the folder you created with Blueprint to your puppet master, then move it into /etc/puppet/modules

root@puppetmaster:~# mv vps_blueprint/ /etc/puppet/modules/

3. Edit /etc/puppet/fileserver.conf to provide a path to the module's files directory and to allow access to it. A fileserver mount needs a [name] header and an allow line in addition to the path (restrict the allow in a real deployment):

[files]
  path /etc/puppet/modules/vps_blueprint/files
  allow *

4. Edit /etc/puppet/manifests/site.pp. This is the default file Puppet looks for and loads first. For now you can define the nodes here; later you may want to move them into nodes.pp or similar (this assumes the client's name is blogtest.test.com):

node vpsblueprint {
  include vps_blueprint
}

node 'blogtest.test.com' inherits vpsblueprint {
}

This tells Puppet that a client connecting with the hostname blogtest.test.com should use the vpsblueprint node, which includes the vps_blueprint module.

5. Start the puppet master daemon in the foreground for testing:

root@ubuntu:/etc/puppet# puppet master --no-daemonize --verbose
notice: Starting Puppet master version 2.6.1

Note: run this way, the puppet master daemon stays alive only as long as the terminal window is open.

Part 3: Setting up the client.

1. On your client box install puppet client

apt-get install puppet puppet-common

2. Make sure your /etc/hosts file is correct (see top), then try to ping puppetmaster.test.com

3. Check to make sure your hostname is correct

4. Start puppet client:

root@ubuntu:~# puppetd --server puppetmaster.test.com --no-daemonize --waitforcert 5 --verbose

This will contact the server, send it a certificate request, and wait for it to be signed.

5. On the puppet master:

root@puppetmaster:/etc/puppet# puppetca --list

root@puppetmaster:/etc/puppet# puppetca --sign blogtest.test.com
notice: Signed certificate request for blogtest.test.com
notice: Removing file Puppet::SSL::CertificateRequest blogtest.test.com at '/var/lib/puppet/ssl/ca/requests/blogtest.test.com.pem'

6. Once it's signed, your client should start applying the module you generated via Blueprint. Success!

Part 4: Problems

So it probably didn't work. That is because the config Blueprint generates in the module's init.pp has several errors. I will go through them here one by one along with how I got around them. I'm sure there are much, much better methods than the ones I used 🙂

1. err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find template ‘vps_blueprint/tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar’ at /etc/puppet/modules/vps_blueprint/manifests/init.pp:798 on node blogtest.test.com

This happens because Blueprint tries to use a template for a binary file instead of using a source (the resource actually has both). Inside vps_blueprint/manifests/init.pp, go all the way to the bottom and find something like this:

file { '/tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar':
  content => template("vps_blueprint/tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar"),
  ...
}

Remove the content => template(...) line.
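If the generated manifest has more than one of these, you can grep for file resources that render tarballs through template(). The init.pp contents below are a canned sample just for the demo:

```shell
# canned sample of the offending pattern from a generated init.pp
cat > /tmp/init_demo.pp <<'EOF'
file { '/tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar':
  content => template("vps_blueprint/tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar"),
}
EOF

# list the line numbers that pass binary tarballs through template()
grep -n 'template(.*\.tar' /tmp/init_demo.pp
```

On a real module you would run the grep against /etc/puppet/modules/vps_blueprint/manifests/init.pp instead.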

2. err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not render to pson: invalid utf8 byte: ‘�’

This happens when one of the files in your templates folder (inside vps_blueprint) has non-ASCII characters. These should be changed to source files as well, but if you are lazy you can just remove their entries from init.pp or blank them out with echo "" > filename

By default, /etc/mail/aliases.db has non-ASCII characters in it.
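A sketch of how to find the offending files, using GNU grep to match any byte outside the ASCII range (the directory and file names here are made up for the demo):

```shell
# set up a demo templates directory: one clean file, one with raw bytes
mkdir -p /tmp/tpl_demo
printf 'plain ascii text\n' > /tmp/tpl_demo/motd.erb
printf 'binary \xff\xfe junk\n' > /tmp/tpl_demo/aliases.db

# list files containing any byte outside the ASCII range
# (LC_ALL=C keeps grep matching raw bytes regardless of locale)
LC_ALL=C grep -rlP '[^\x00-\x7F]' /tmp/tpl_demo
```

Against a real module, point the grep at /etc/puppet/modules/vps_blueprint/templates.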

3. err: Could not run Puppet configuration client: Could not find dependency File[“/tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar”] for Exec[tar xf /tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar] at /etc/puppet/modules/vps_blueprint/manifests/init.pp:79

This happens because the exec command runs before the file is in place; Puppet manifests are not applied in written order unless dependencies are declared.
I fixed it by removing the exec from the sources class and adding this below it:

class sources_exec {
  exec { 'tar xf /tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar':
    cwd     => '/usr/local',
    require => File['/tmp/aa4ea9aefdbfbfe2c9cd8736aa101003b9784d75.tar'],
  }
}

include sources_exec

(Note that require takes a bare resource reference, not a quoted string.)

4. err: Could not run Puppet configuration client: Could not find dependent Class[“apt”] for Exec[apt-get -q update] at /etc/puppet/modules/vps_blueprint/manifests/init.pp:266

On line 266 (in my example) the apt-get update exec declares a dependency on a class that Puppet cannot resolve at that point. I just commented it out (ahh, horrible, I know 😦 )

5. Puppet says all my packages are not found!

You have to tell Puppet to use aptitude instead of apt. I got around this by messing around with the binaries and symlinking aptitude onto apt. I can hear people crying already.

Part 5: Conclusion

So yeah, that should be it. You will need to tweak some more to get this working in a stable and more personalized environment, but this should get the ball rolling. Blueprint is a great tool (with some flaws), and using it can take a large part of the initial Puppet setup process off your hands. Thanks for reading!

Categories: Uncategorized

Deploying Merengue inside Virtualenv via Apache/mod_wsgi

April 30, 2011 2 comments

Merengue is a Django-based CMS web application. It allows you to create a website with pages and several other features, and gives you easy access to an admin panel for making changes. It is similar to WordPress but written in Python.

In this post I am going to talk about how to deploy Merengue with Apache/mod_wsgi and have it all reside in a virtualenv environment. The steps assume you are using Ubuntu.

1. First we need to install some initial dependencies. Since these are Ubuntu packages and not Python packages, you install them outside of virtualenv. This installs the various codecs you need, virtualenv, Apache2, mod_wsgi, and the libraries needed by the Python Imaging Library (PIL):

apt-get install python-setuptools python-virtualenv ffmpeg libavcodec52 libavdevice52 libavformat52 gettext

apt-get install apache2 libapache2-mod-wsgi mysql-server-5.1

apt-get install libfreetype6-dev python-tk tcl8.5-dev tk8.5-dev liblcms1-dev liblcms-utils

2. Pick a directory where you want to setup your virtualenv to be in. For this example I will use /opt

root@vps:/opt# virtualenv merengue --no-site-packages
New python executable in merengue/bin/python
Installing distribute..................................................................................................................................................................................done.

root@vps:/opt# cd merengue/
root@vps:/opt/merengue# source bin/activate

3. Now you are running in a virtualenv with no site packages; any Python packages you install here will be stored inside /opt/merengue/lib/python2.6/site-packages.

4. Install merengue via pip (or easy_install)

(merengue)root@vps:/opt/merengue# pip install merengue
(merengue)root@vps:/opt/merengue# pip install mysql-python
(merengue)root@vps:/opt/merengue# cp -r /opt/merengue/lib/python2.6/site-packages/merengue/apps/* /opt/merengue/lib/python2.6/site-packages/

**NOTE: If the second command fails with a "mysql_config not found" error, run apt-get install libmysqlclient16 libmysqlclient16-dev to fix the problem.

5. Go back up to /opt and use the merengue-admin command to make a new project

(merengue)root@vps:/opt# merengue-admin.py startproject myproject
(merengue)root@vps:/opt# ls myproject
__init__.py manage.py merengue settings.py urls.ini
apps media plugins templates urls.py

6. Log into your MySQL server, create a database for the website, create a user, and grant it permissions.

mysql> create database website;
Query OK, 1 row affected (0.00 sec)

mysql> create user site_user identified by 'P@ssw0rd';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on website.* to site_user;
Query OK, 0 rows affected (0.00 sec)

7. Edit settings.py, changing the following lines:

DATABASE_ENGINE = 'mysql' # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
DATABASE_NAME = 'website' # Or path to database file if using sqlite3.
DATABASE_USER = 'site_user' # Not used with sqlite3.
DATABASE_PASSWORD = 'P@ssw0rd' # Not used with sqlite3.

8. Run syncdb and migrate

(merengue)root@vps:/opt/myproject# python manage.py syncdb
...output omitted
(merengue)root@vps:/opt/myproject# python manage.py migrate

9. Create a WSGI file for your project (e.g. /opt/myproject/website.wsgi); something like this will work:

import os
import sys
import site

# make the virtualenv's packages and the project package importable
# (paths follow the /opt/merengue and /opt/myproject layout used above)
site.addsitedir('/opt/merengue/lib/python2.6/site-packages')
sys.path.append('/opt')

os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

import django.core.handlers.wsgi

application = django.core.handlers.wsgi.WSGIHandler()

10. Finally, create an Apache virtual host pointing at this WSGI file.
Create a file called merengue in /etc/apache2/sites-available/
with the following basic contents (add your own security settings here as well):

<VirtualHost *:80>
    WSGIPassAuthorization On
    WSGIScriptAlias / /opt/myproject/website.wsgi
</VirtualHost>

11. Enable the site and restart Apache

a2ensite merengue
/etc/init.d/apache2 restart

12. Visit your server's IP on port 80 to see the homepage for the CMS! The admin username and password are the ones that were created during syncdb.

Seneca CTY and CNS – My thoughts after 3 years.

April 23, 2011 45 comments

I recently graduated from CNS at Seneca (after finishing 5 semesters of CTY) and I thought I would write up my thoughts on it for people who are interested in applying or are currently enrolled. This is going to be a pretty long post with a lot of personal opinions. I decided to write it because I often get asked for my opinion on the program via the Seneca College thread at Red Flag Deals, which is a great place to go if you are interested in starting at Seneca.

I started CTY a few years ago after transferring from another Seneca program that was not for me. When I started the program I had the following skills:

  • Was above average in computer use
  • Knew nothing about Linux
  • Had no idea how IP’s and networking worked

Pretty normal for someone who spends a lot of time playing video games and surfing sites like Digg and Reddit. I would say that I was fairly competent with IT, but had nowhere near the skills you would need to do a professional job in it. Starting the program 3 years ago I had no idea what to expect, but I had a few friends in it so I enrolled.

Starting in first semester you will take intro classes in hardware, the Windows OS, and Linux. For most people the hardware and Windows classes are basic stuff that anyone who has used a computer in the last few years would be able to do without any problems. The Linux class (ULI101) is one of the more difficult classes for new people. In this class you will learn the very basics of how to use Linux through an SSH client, all on the command line. An intimidating thing, but something you need to pick up quickly.

After first semester the program opens up into classes covering 4 major subject areas:

  • Linux Administration: OPS235 -> OPS335 -> OPS435 -> OPS535
  • Windows Administration: WIN210 -> WIN310 -> WIN700
  • Networking: DCN286 -> DCN386 -> NDD430 -> CIS701
  • Web Server/Scripting: INT213 -> DAT701 -> INT420

The Linux admin classes will teach you bash scripting, sysadmin work like NIS and iptables, and later more advanced material like LDAP. The networking classes, which in my opinion are some of the best, take you through several key networking concepts leading up to CCNA-level material. The server scripting classes are a bit outdated; ASP, MySQL/ASP and Perl are not the easiest languages to learn, but they are OK. The Windows admin classes are among the worst because of the tedium that is Microsoft documentation, but even they are alright.

I've talked enough about what the classes teach, so now I will go into more of my own personal views on the program. Overall I think it is a good one, as it teaches you a lot of hands-on skills that you really will need to do sysadmin work out in the real world. You never really get down to the nitty gritty of how things work, but for a diploma program I think the level of the content is OK.

The teachers are a bit hit and miss. There are some amazing professors, such as Murray Saul, Scott Apted, Brian Gray and, last but not least, the legendary Ian Allison. There are other profs who will probably annoy you a lot more, but at the end of the day, for a college, the skill of the instructors is fairly good.

The facilities and the labs are top notch. I spent most of my time in the Open Lab; using the computers there was a much better experience than using the ones in the library. In most cases you are given the tools you need to do the labs.

To touch on co-op: it is a great experience if you can get a job. One of the problems is that, due to the number of people applying for internships, the chances of getting an interview are much lower; this is a personal responsibility you should address on your own (build up your resume). The jobs offered are pretty good and span a large sector of the IT community, from sysadmin to tech support, at very small and very large companies.

Finally I want to touch on the difference between CTY and CNS. The main difference is the co-op; however, I decided to stay in CTY and finish the extra classes in hopes of learning more. This was a mistake. The last two semesters of CTY are really poorly done and most (not all — CIS701, OPS535 🙂 ) of the classes are horrible. Just a fair warning. If you are not sure which one you want to take, it doesn't matter until 3rd semester, so you can join and make up your mind within that time.

It felt good to get some of that stuff off my chest. I'm going to be editing this post to add more detail later on.

Thanks for reading!

Categories: school Tags: , , ,

Git Post-Receive script on Trac with a shared repo.

April 9, 2011 1 comment

If you use the GitPlugin with Trac and want to use the post-receive script for the CommitTicketUpdater (http://trac-hacks.org/wiki/GitPlugin#post-receivehookscripts), and you have multiple people committing to the repository, you will notice a problem: the person committing needs read and write access to the Trac database. This is fine if the only person writing to the repo is yourself, but in a shared environment giving multiple people write access to the database is pretty dangerous.

The hook script needs to be in /your.gitrepo/hooks/post-receive. It should contain the line:

trac-admin TRAC_ENV changeset added commitid(s)

Trying to run that command normally will result in an error unless you have read/write access to trac.db.

We solved this by creating a small script and allowing people to run it via sudo -u www-data (or whatever your webserver's user is). Thanks go to poisonbit and stew from ServerFault.

Inside /bin we made a file, let's say it's called git_trac.sh.
The file contains:

#!/bin/sh
/usr/local/bin/trac-admin /path/to/trac changeset added "$1" "$2"

In /etc/group make a group containing all your code committers, and with visudo add the line:

%groupname ALL=(www-data) NOPASSWD: /bin/git_trac.sh

The NOPASSWD is needed because git push does not allocate a tty over SSH. This line will allow your users to run ONLY trac-admin changeset added as www-data, not the rest of trac-admin.

Finally, in your post-receive script, instead of calling trac-admin directly, call /bin/git_trac.sh with two arguments:

sudo -u www-data /bin/git_trac.sh "REPONAME" "REVISION"

If you have more or fewer commit IDs, make sure you change the number of positional parameters inside your git_trac.sh script.
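For reference, a post-receive hook receives one "oldrev newrev refname" line on stdin for each updated ref. Here is a dry-run sketch of the loop: it echoes the command instead of invoking sudo, so the stdin handling can be seen, and the repo name is a placeholder:

```shell
# simulate git feeding the hook one updated ref on stdin
printf '%s\n' 'aaa111 bbb222 refs/heads/master' |
while read oldrev newrev refname; do
    # the real hook would run:
    #   sudo -u www-data /bin/git_trac.sh "myrepo" "$newrev"
    echo "would run: git_trac.sh myrepo $newrev"
done
```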

Now when the committing user runs the hook, it runs as your webserver user, which already has write access to the DB. This works without exposing the rest of trac-admin to the group.

Categories: sysadmin Tags: , , , ,

Generate Puppet recipe from running system

March 22, 2011 1 comment

Puppet is a Linux-based config-management system that lets you manage multiple servers' configs at the same time with little overhead.

The process of turning a running server into a Puppet recipe is a total pain: it takes ages to write a recipe and include all the dependencies you might use. I've been experiencing this second hand 🙂. I was recently pointed at an awesome tool called Blueprint, which will map out a Linux system's config and write it up as a Puppet module, a Chef cookbook, or a shell script.

The code is located at https://github.com/devstructure/blueprint, and the setup details are on that page as well.

Categories: sysadmin Tags:

Dynamic iptables rules for NIS server

March 16, 2011 2 comments

When you start the ypserv daemon it binds to different ports every time, which can be a pain to manage if you have a firewall or use a RedHat system that blocks ports by default.

Here is a small bash script I wrote that goes into rc.local (it could also go at the bottom of the init.d script for ypserv). It finds the ports ypserv is using and opens firewall holes for them dynamically. Total hack job, but it does the trick.

# awk picks out the port column, and the extra grep keeps tcp and udp separate
for tcpp in `rpcinfo -p | grep ypserv | grep tcp | awk '{print $4}'`; do iptables -I INPUT -p tcp --dport $tcpp -j ACCEPT; done
for udpp in `rpcinfo -p | grep ypserv | grep udp | awk '{print $4}'`; do iptables -I INPUT -p udp --dport $udpp -j ACCEPT; done
iptables -I INPUT -p tcp --dport 111 -j ACCEPT
iptables -I INPUT -p udp --dport 111 -j ACCEPT

The first two lines open the tcp and udp ports that ypserv is using, based on the output of rpcinfo -p. The last two lines open the portmapper, which always runs on port 111.
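The port extraction can be checked against canned rpcinfo -p output, without a running ypserv (the program numbers and ports below are made up; awk is less sensitive to column spacing than a fixed cut field):

```shell
# two ypserv registrations, one udp and one tcp, in rpcinfo -p layout:
#   program vers proto   port  service
rpcinfo_demo='   100004    2   udp    814  ypserv
   100004    2   tcp    817  ypserv'

# pull out only the tcp port
echo "$rpcinfo_demo" | grep ypserv | grep tcp | awk '{print $4}'
```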

Categories: Uncategorized

Getting to the root cause of problems on Linux

March 15, 2011 Leave a comment

Debugging is a daily part of any sysadmin's job; fixing things becomes second nature after a while. It's amazing how even the smallest of problems can eat up large parts of your day as you try to track down why something isn't working, or why an application that worked fine yesterday is utterly broken today.

The amount of time I have spent helping people in our school's Open Lab work their way through an assignment is a bit staggering. I see people making the same types of mistakes again and again, and when they get an error, instead of following through on it until they get an answer, they try messing with things that have no impact on what they are doing. This isn't so much a guide as a collection of my thoughts on Linux administration troubleshooting.

  1. You root?: Like the infamous xkcd comic, one of the most common reasons a command you think should work doesn't is that you aren't running it as root. A lot of the time the error message will straight up tell you that you don't have permission, but a lot of things will not. Apache, for instance, will just fail with "Unable to open logs [FAIL]".
  2. Have you installed the mind reader package?: One of the most difficult things to teach people who are learning command-line-only shells is that the system can't read your mind. You want to mount /dev/sdb2? Cool... where? Want to add a user? OK... what's her name? When a command or application fails, ask yourself whether it has EVERYTHING it needs to work. Unlike Windows, Linux does not have a magic registry to worry about; everything an application needs can usually be found in /etc, and if the needed information is not in a config file or supplied on the command line, the application likely will not have what it needs.
  3. Logs man, logs!: Linux does an amazing job of logging. By default almost any error message you need will be sent to /var/log/messages, /var/log/syslog, or one of the other logs inside /var/log. Check these! They contain information that never makes it to the stderr shown in the terminal, and a simple glance can reveal a lot. Apache's logs in particular are really useful for debugging 500 errors on web apps.

This ended up being more of a mishmash of the common problems I see when people try to debug Linux. The important thing to remember is to think like the box does; understanding how things work, and not just why, will help you greatly.

Categories: sysadmin Tags: ,
