August 13, 2014

Becoming DevOps

I've been working on creating development environments for a while now, and my current employer wants a quick way of setting up a development environment for the projects that we do.
Since I've worked with Vagrant before, the task of setting it all up fell to me.

Vagrant is a tool that builds on virtualisation software like VirtualBox or VMware. It provides the means to run a VM image (a "box") provisioned specifically for your (development) needs.

I started by making use of Packer. Packer lets me build a Vagrant box in much less time than it would take if I used Vagrant itself for creating boxes.
Packer allows you to select the ISO and the virtualisation software, and it also provides hooks to provision the box with the tools you need to do your job once you start developing within your project.
This provisioning is obviously the hard part and needs a play-rewind-repeat cycle to really get onto the box exactly what you want.

Let me tell you what I did without the repeats, because of course I got it right the first time *cough*.

I found a set of Packer templates on the web that allowed me to jumpstart the creation of the boxes.
Although I might now use PuPHPet with some adjustments, I learned quite a bit about provisioning.
First the basics:
I modified the template I needed and added an extra script for my provisioning needs.

I added this script to my template.json.

Then I added the puppet locations to my template.json within the provisioners section:
{
  "type": "puppet-masterless",
  "manifest_file": "/tools/vagr_build/puppet/manifests/default.pp",
  "manifest_dir": "/tools/vagr_build/puppet/manifests",
  "module_paths": ["/tools/vagr_build/puppet/modules"]
}
Obviously I have my build environment in /tools/vagr_build, as you can see here; you might use other locations.
I also use puppet-masterless because we do not have a Puppet master server within our company and I didn't want to invest the time to set that up as well.

Now comes the big part, the real provisioning. I created my provisioning commands in the default.pp file.
I first install some base packages and MySQL with Apache. Then comes the PHP part, and this part needed some extra configuration.
To configure PHP correctly I needed to create a php.ini within /etc/php5/conf.d/ and update its content using a tool called Augeas. Puppet knows how to use this, but it needs to be installed separately, so I did that with the base packages.
Note that the context within an augeas resource starts with /files/; this is a necessary part of editing files.
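To illustrate, an augeas resource in default.pp might look roughly like this; the ini path and the settings shown are examples, not my actual manifest:

```puppet
# hypothetical example: tweak php.ini settings via Augeas
augeas { 'php-settings':
  # the context must start with /files/, followed by the file to edit
  context => '/files/etc/php5/conf.d/php.ini',
  changes => [
    'set PHP/memory_limit 256M',
    'set PHP/display_errors On',
  ],
}
```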

From here I built my box file:
> packer build -only=virtualbox-iso .\template.json
and once this was done I ran
> vagrant box add ubuntu_12_php53 .\
to register the box with Vagrant.

Later use of this box is just how you would normally use a vagrant box.
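For completeness, "normal use" boils down to the usual Vagrant commands (the project directory is up to you):

```shell
vagrant init ubuntu_12_php53   # writes a Vagrantfile using the new box
vagrant up                     # boot the provisioned machine
vagrant ssh                    # log in and start developing
```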

December 19, 2013

Debugging python with vim in a virtualenv

Yay, I finally got it working: debugging a python script with Vdebug in vim.
Vdebug is a vim plugin that lets you debug all kinds of scripts/languages in vim.
It supports debuggers that speak the DBGp protocol, for languages such as PHP, Python, Ruby, Perl, Tcl and Node.js.

At work I use it to debug Perl code and I wanted to use it to debug my python code for personal projects.

The key is: pip install komodo-python-dbgp within your virtualenv. Once that is done, it creates the pydbgp executable in your virtualenv. So you can start Vdebug in vim and then run in a shell:
pydbgp <script>
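Put together, the round trip looks roughly like this; the port is Vdebug's default DBGp port, and the script name is just an example:

```shell
# inside the project's virtualenv
pip install komodo-python-dbgp      # provides the pydbgp executable

# in vim, start the Vdebug listener (mapped to <F5> by default),
# then launch the script under the debugger from a shell:
pydbgp -d localhost:9000 myscript.py
```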

December 16, 2013

Working on a new web project

Ok, so I started working on a web project for a friend of mine.
I want to work with some new stuff so I decided to implement this application with some neat new technologies (well new for me anyway).

I have been dabbling with this kind of setup over the years, and I came up with a basic installation to get my workflow ready.
For this I make use of a script that I execute to set up my project's environment:
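In outline, the script does something like the following; the use of virtualenv and nodeenv is an assumption based on the description, not the literal script:

```shell
#!/bin/sh
# sketch: bootstrap an isolated python + node.js project environment
PROJECT=$1

virtualenv "$PROJECT"            # isolated python environment
. "$PROJECT/bin/activate"
pip install nodeenv
nodeenv -p                       # isolated node.js inside the virtualenv

# offer a choice of frontend workflow
printf "workflow? [yeoman/brunch] "
read WORKFLOW
case "$WORKFLOW" in
  yeoman) npm install -g yo grunt-cli bower ;;
  brunch) npm install -g brunch ;;
esac
```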

This installs an isolated python environment and an isolated node.js environment for webdevelopment.
The script will also offer the option to work either with a Yeoman workflow or a Brunch based workflow.

When I start my work I just use another script that launches my work environment with the correct settings.
For that I make use of tmux, a terminal multiplexer, started inside the activated virtualenv:
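A minimal version of such a launcher could look like this; the session layout and window names are made up for the example:

```shell
#!/bin/sh
# sketch: open a tmux session inside the project's virtualenv
PROJECT=$1
. "$WORKON_HOME/$PROJECT/bin/activate"

tmux new-session -d -s "$PROJECT" -n editor vim
tmux new-window -t "$PROJECT" -n server     # window for the dev server
tmux new-window -t "$PROJECT" -n shell      # spare shell
tmux attach -t "$PROJECT"
```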

I've decided to build this with AngularJS on the frontend and Flask or Django on the backend; the backend I will decide on later.
For the workflow I will use Brunch, because it is a fairly basic web application and the speed of a Brunch workflow outweighs the flexibility (and complexity) of a Yeoman/Grunt configuration.
I made a skeleton to work with CoffeeScript and AngularJS and made it available on github.

I will provide details of my progress.

October 14, 2011

VLC and AirportExpress

At work we got our hands on an AirPort Express. So the first thing of course is "MUSIC".
We do have some people here who want to use iTunes, but for personal reasons I don't like iTunes.
So I normally use VLC like any sane person would do :) Meanwhile all of my co-workers were laughing at me because I couldn't join in with the music streaming.
Not taken aback, I strolled the internet to find out whether there was a way of streaming to the AirPort. There is a program called Airfoil, but hey, I'm Dutch, so I really don't want to pay for programs unless necessary. At the VLC forums I stumbled on a post by crzyhanko, who posted some great code you can put in the standard stream-output chain field of the VLC player:
#transcode{acodec=alac,channels=2,samplerate=44100}:raop{host=<ip address of airport express>,volume=175}
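The same chain can also be passed to VLC on the command line via --sout; the file name and the IP address are placeholders for your own:

```shell
vlc music.mp3 --sout '#transcode{acodec=alac,channels=2,samplerate=44100}:raop{host=192.168.1.20,volume=175}'
```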
It works :D So who is laughing now?

March 10, 2011

SQL remove of constraints

Note to self:

When doing large imports using an SQL script in Oracle, here's how to disable the constraints and then enable them again after the insert:

This code disables the foreign-key constraints in the database:
set serveroutput on;
begin
  for c in (select constraint_name, table_name from user_constraints where constraint_type='R') loop
    execute immediate('alter table '||c.table_name||' disable constraint '||c.constraint_name);
  end loop;
end;
/
The '/' at the end lets SQL Developer know that this is the end of an inline PL/SQL block.

Then run the normal SQL insert script, and when done include this code to re-enable the constraints and check the result:
begin
  for c in (select constraint_name, table_name from user_constraints where constraint_type='R') loop
    execute immediate('alter table '||c.table_name||' enable constraint '||c.constraint_name);
  end loop;
end;
/
select constraint_name, status from user_constraints where constraint_type='R';
If that last query still shows disabled constraints, the imported data violates them and is corrupt.

Blobs containing string data can be inserted via a workaround (varchar2 in PL/SQL is limited to 32767 characters):
declare
  myBlobVar varchar2(32767) := 'paste string here';
begin
  update tableWithBlob set blobCol = myBlobVar where id = blah;
end;
/

July 20, 2009

Eclipse Templates

Templates are a useful thing when working with code, as we know.
A simple template is a simple thing to create, but one that needs an import is a different beast.

So here is an example that makes sure the import is also included in the java file.

/** Tapestry render phase method. Called before component body is rendered.*/
public void beforeRenderBody(){
}
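The trick in the Eclipse template itself is the ${:import(...)} template variable, which makes the editor add the import when the template is inserted. A sketch of what such a template looks like; the imported class here is an assumption, the exact type depends on what the method body needs:

```
${:import(org.apache.tapestry5.MarkupWriter)}/** Tapestry render phase method. Called before component body is rendered.*/
public void beforeRenderBody(){
    ${cursor}
}
```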

May 21, 2009

Atomikos transaction essentials

Most applications using ORM tooling need a transaction management system.
One of these transaction managers is Atomikos. Atomikos provides several products; one of them, transaction essentials, is an open-source variant.
However, you cannot get it via a maven repository: you'll have to register for a download link.
Transaction essentials is easily embeddable within a jetty container or even within a spring context.

First, here's how to implement transaction essentials within a jetty container configured with maven.

Let's start with the jetty configuration within a maven pom file:

<!-- Run the application using "mvn jetty:run" -->
<!-- Log to the console. This doesn't do anything for Jetty, but is a
     workaround for a Maven bug that prevents the requestLog from being set. -->
<requestLog implementation="org.mortbay.jetty.NCSARequestLog"/>
The jettyConfig tx-jetty contains the configuration for the atomikos UserTransactionManager and will be applied before other settings set in the maven pom:

<Call class="java.lang.System" name="setProperty">
</Call>

<!-- Atomikos -->
<New id="tx" class="">
      <New class="com.atomikos.icatch.jta.UserTransactionImp"/>
</New>

The jetty-env contains the configuration for a specific webapplication in jetty and consists in part of the datasources for the webapp:

<Set name="configurationClasses">
    <Array type="java.lang.String">
    </Array>
</Set>

<!-- Add a mapping from name in web.xml to the environment -->
<New id="map1" class="">
    <Arg><Ref id="rbudisplay"/></Arg>
    <Arg>jdbc/rbuconverter</Arg> <!-- name in web.xml -->
    <Arg>jdbc/rbu</Arg>  <!-- name in environment -->
</New>

<New id="rbuconverter" class="">
    <Arg><Ref id="rbudisplay"/></Arg>
    <Arg>
        <New class="com.atomikos.jdbc.AtomikosDataSourceBean">
            <Set name="minPoolSize">2</Set>
            <Set name="maxPoolSize">50</Set>
            <Set name="xaDataSourceClassName">org.postgresql.xa.PGXADataSource</Set>
            <Set name="UniqueResourceName">rbuconverter</Set>
            <Get name="xaProperties">
                <Call name="setProperty"><Arg>databaseName</Arg><Arg>rbuconverter</Arg></Call>
                <Call name="setProperty"><Arg>serverName</Arg><Arg>localhost</Arg></Call>
                <Call name="setProperty"><Arg>portNumber</Arg><Arg>5432</Arg></Call>
                <Call name="setProperty"><Arg>user</Arg><Arg>postgres</Arg></Call>
                <Call name="setProperty"><Arg>password</Arg><Arg>BlahBlah</Arg></Call>
            </Get>
        </New>
    </Arg>
</New>

The first part sets up the jetty plus environment for JNDI.
The second part sets up the reference for the jetty-env configuration and binds the datasource to JNDI so the web.xml can make a reference to the configured datasource.
The third part sets up the datasource itself. Here an XADataSource is configured for PostgreSQL. Note that com.atomikos.jdbc.AtomikosDataSourceBean is the preferred DataSource since Atomikos 3.4.x.

The web.xml needs to have these lines in place for the above configured datasource:
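For reference, such a resource-ref looks roughly like this; the JNDI name is my assumption, matching the mapping above:

```xml
<resource-ref>
    <res-ref-name>jdbc/rbuconverter</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
```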


It is also possible to configure Atomikos transaction essentials completely in a spring context. In that case, do not use the jettyConfig and jettyEnvXml in the pom file, and omit the resource-ref within the web.xml.
I have a tx-context.xml:

<bean class="com.atomikos.jdbc.AtomikosDataSourceBean" destroy-method="close" id="dataSource" init-method="init">
        <property name="uniqueResourceName" value="rbudatasource"/>
        <property name="xaDataSourceClassName" value="org.postgresql.xa.PGXADataSource"/>
        <property name="xaProperties">
            <props>
              <prop key="databaseName">rbuconverter</prop>
              <prop key="serverName">localhost</prop>
              <prop key="portNumber">5432</prop>
              <prop key="user">postgres</prop>
              <prop key="password">BlahBlah</prop>
            </props>
        </property>
        <property name="minPoolSize" value="5"/>
        <property name="maxPoolSize" value="50"/>
</bean>

    <bean class="com.atomikos.icatch.config.UserTransactionServiceImp" destroy-method="shutdownForce" id="userTransactionService">
        <constructor-arg>
            <props>
                <prop key="com.atomikos.icatch.service"></prop>
                <prop key="com.atomikos.icatch.output_dir">/tmp</prop>
            </props>
        </constructor-arg>
    </bean>
   <!-- Construct Atomikos UserTransactionManager, needed to configure Spring -->
   <bean id="AtomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init" destroy-method="close" depends-on="userTransactionService">
      <!-- when close is called, should we force transactions to terminate or not? -->
      <property name="forceShutdown" value="false"/>
      <property name="transactionTimeout" value="300"/>
   </bean>

   <!-- Also use Atomikos UserTransactionImp, needed to configure Spring -->
   <bean id="AtomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp" />

   <!-- Configure the Spring framework to use JTA transactions from Atomikos -->
   <bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager" depends-on="userTransactionService">
      <property name="transactionManager" ref="AtomikosTransactionManager"/>
      <property name="userTransaction" ref="AtomikosUserTransaction"/>
   </bean>

April 4, 2009

Tapestry activation passivation

Tapestry 5 can use the client to carry information set on one page over to another page.
For instance, page Origin places a message onto page Next during a submit, and through the mechanism of passivation and activation
the client acts as the middle man that brings this information back.
Tapestry issues an HTTP 302 redirect after the post, and thus the passivation of page Next: the client receives the value in the redirect URL and asks tapestry for that page, handing the value back to be acted upon.
Here you see that the value of 'Howdy' is submitted and, via the redirect, is sent to the server again to be used in its activation event.
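A minimal sketch of such a page class; the names are illustrative, not the original code:

```java
// sketch of the "Next" page: onPassivate puts the value into the
// redirect URL, onActivate receives it back on the follow-up request
public class Next {
    private String message;

    public void setMessage(String message) {
        this.message = message;   // set by page Origin before the redirect
    }

    Object onPassivate() {
        return message;           // e.g. 'Howdy' is encoded into the URL
    }

    void onActivate(String message) {
        this.message = message;   // restored when the client comes back
    }
}
```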

February 2, 2009

Apache2 VirtualHost and trac behind a lighttpd proxy

Note to self:
setup of trac for Apache2

Follow the instructions on this site: Robert Basic: Trac on Ubuntu (although I use mercurial as scm).
Make sure that the line containing VirtualHost points to port 81 or another port other than port 80.

<VirtualHost *:81>
and make sure apache listens to this port by changing /etc/apache2/ports.conf
NameVirtualHost *:81
Listen 81

Now install lighttpd:

sudo apt-get install lighttpd

edit /etc/lighttpd/conf-available/10-proxy

$HTTP["host"] == "localhost" {
  proxy.debug = 1
  proxy.server = ( "/trac" =>
                     ( ( "host" => "",
                         "port" => 81 ) ) )
}
and symlink:
sudo ln -s /etc/lighttpd/conf-available/10-proxy /etc/lighttpd/conf-enabled/10-proxy

and start lighttpd
sudo /etc/init.d/lighttpd restart
et voilà
the trac instance works both under localhost/trac as well as localhost:81/trac

January 5, 2009

Setting up my python development environment

Note to self:
my setup for a python development environment
sketch so far:

Use virtualenvwrapper; I changed the mkvirtualenv function in virtualenvwrapper_bashrc to:
function mkvirtualenv () {
    (cd "$WORKON_HOME"; virtualenv $*)
    if [ ! -f $WORKON_HOME/$1/bin/postactivate ]; then
        workon "${@:-1}"
        easy_install ipython
        easy_install pysmell
        easy_install mkvimproject-
        rm mkvimproject-
        postactivate=$WORKON_HOME/$1/bin/postactivate
        touch $postactivate
        echo "cd $WORKON_HOME/$1" >> $postactivate
        echo "export PYTHONPATH=$WORKON_HOME/$1/src:\$PYTHONPATH" >> $postactivate
        echo "if [ ! -d $WORKON_HOME/$1/src ]" >> $postactivate
        echo "then" >> $postactivate
        echo "    mkdir -p $WORKON_HOME/$1/src" >> $postactivate
        echo "fi" >> $postactivate
        echo "if [ ! -f $WORKON_HOME/$1/$1.vpj ]" >> $postactivate
        echo "then" >> $postactivate
        echo "    mkvimproject -o $1.vpj -s python" >> $postactivate
        echo "fi" >> $postactivate
        echo "if [ ! -f $WORKON_HOME/$1/PYSMELLTAGS ]" >> $postactivate
        echo "then" >> $postactivate
        echo "    pysmell src;" >> $postactivate
        echo "    cd lib/python2.5;" >> $postactivate
        echo "    pysmell . -x site-packages -o ../../PYSMELLTAGS.stdlib;" >> $postactivate
        echo "    cd ../..;" >> $postactivate
        echo "fi" >> $postactivate
        echo "pproject -U" >> $postactivate
    fi
    workon "${@:-1}"
}

Then create a new virtualenv with mkvirtualenv projectName --no-site-packages.
This will set up the environment for vim, allow the use of pysmell omnicompletion,
and provide an ipython shell within the environment.