Last week Google submitted its own entry into the JavaScript framework race by open sourcing Closure, the library that powers Google Docs, Maps, Gmail, and more.

I couldn’t resist spending some time playing with the Closure tools, especially the Dojo-like module dependency resolver (a.k.a. goog.provide and goog.require) and the Closure Compiler. It was a bit tricky at first because the example "applications" from the tutorial are pretty contrived, and there is no information on using Closure with existing JavaScript libraries.

Anyway, I thought I would share how I hooked up Closure to an existing JavaScript library (the one that powers http://gvr.carduner.net).

Step 1 - Get the Closure Library

Closure is actually a set of three tools: a JavaScript library, a JavaScript compiler, and a templating language. The dependency resolution tools are part of the library, so you should check that out first. At the moment, the Closure Library can be checked out from the project's Subversion repository:

svn checkout http://closure-library.googlecode.com/svn/trunk closure

If you are just interested in using the dependency resolution tools and not the entire framework, you only need two files: base.js and calcdeps.py. The full checkout can take a while, as it includes all the generated API documentation, which you can also read online.
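If that's all you want, an svn export of each file should do the trick (the paths here assume the repository layout described below):

svn export http://closure-library.googlecode.com/svn/trunk/closure/goog/base.js
svn export http://closure-library.googlecode.com/svn/trunk/closure/bin/calcdeps.py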

Step 2 - Instrument your JS library code

If you already have your JavaScript code split up into "modules" that live in different files, adding the dependency code is pretty easy. In my case, I wanted to use Closure with gvr.carduner.net, which already has a set of interdependent JavaScript files like so:

gvr.core.js		gvr.renderer.js		gvr.web.client.js	jquery.history.js
gvr.js			gvr.robot.js		gvr.web.tests.js	launcher.js
gvr.lang.js		gvr.runner.js		gvr.world.js
gvr.lang.parser.js	gvr.tests.js		gvr.world.parser.js

Open up each JavaScript file, and add declarations to the top about what each file provides and requires. For example, gvr.runner.js depends on gvr.robot.js and gvr.core.js. In turn it provides the gvr.runner namespace. So at the top of gvr.runner.js I added the following:

// gvr.runner.js
goog.require("gvr.core");
goog.require("gvr.robot");
goog.provide("gvr.runner");

Then of course I had to add the provide statements to gvr.core.js and gvr.robot.js, as in:

// gvr.core.js
goog.provide("gvr.core");
// gvr.robot.js
goog.require("gvr.core");
goog.provide("gvr.robot");

The goog.provide statements will actually create the object namespace you pass in, so the following would be valid:

goog.provide("foo.bar.baz");
foo.bar.baz.blah = "blah";
foo.somethingElse = "somethingElse";

In fact, if you try to create the namespaces later, the compiler will throw warnings/errors at you. So if you have any code that looks like the following, you should remove it.

var foo = foo || {};  // DELETE THESE LINES
foo.bar = foo.bar || {};  // AS THEY CONFLICT WITH goog.provide("foo.bar.baz");
foo.bar.baz = foo.bar.baz || {};

Once you have added all the right goog.require and goog.provide statements to your code, you'll need to generate a dependency graph using the calcdeps.py script.

Step 3 - Build the dependency graph with calcdeps.py

The dependency resolution system uses a pre-generated dependency graph to link namespaces like foo.bar.baz to their corresponding JavaScript files and the files they require. This is all stored in a file called deps.js. The one for the Closure Library can be found in trunk/closure/goog/deps.js. Here are the first few lines to give you an idea of what this looks like:

// This file has been auto-generated by GenJsDeps, please do not edit.
goog.addDependency('array/array.js', ['goog.array'], []);
goog.addDependency('asserts/asserts.js', ['goog.asserts'], []);
goog.addDependency('async/conditionaldelay.js', ['goog.async.ConditionalDelay'], ['goog.Disposable', 'goog.async.Delay']);
goog.addDependency('async/delay.js', ['goog.Delay', 'goog.async.Delay'], ['goog.Disposable', 'goog.Timer']);
// ... this goes on for quite a while ...

In order for Closure's dependency mechanism to know about your libraries, you need to create a deps.js file of your own. This can be done with the calcdeps.py script you should have downloaded by now. If you checked out the entire Closure Library source, the calcdeps.py script can be found in trunk/closure/bin/calcdeps.py.

The calcdeps.py script must be run from the directory where your files will be served, as it stores the relative file paths in the generated deps.js file, which is in turn used to build urls to all your JavaScript files. For example, my application's directory structure looks like this:

gvr-online/
  closure/ # this is the trunk checkout of closure
    closure/
      bin/
        calcdeps.py # I'll use this script to generate my own deps.js
      goog/ # this is the closure library source, including deps.js and base.js
  app/
    src/
      ui/  # This directory is exposed to the web as http://localhost:8080/ui/
        lib/ # this is where my javascript library lives
        closure/
          goog/ # this is a symlink to the closure/closure/goog/ directory at the top level

The gvr-online/app/src/ui/ directory is what gets exposed through the web, so the calcdeps.py script should be run from that directory. Here is the command I used to run it:

cd app/src/ui/ && python ../../../closure/closure/bin/calcdeps.py -p lib -o deps > deps.js

The -p lib option tells calcdeps.py to search app/src/ui/lib/ for js files with goog.provide and goog.require statements. The -o deps option tells calcdeps.py to generate a dependency graph file, which gets saved to app/src/ui/deps.js. If you are using the rest of the Closure Library, and not just the dependency stuff, you will need to add an extra -p closure argument.

With that done, we can try this out in a browser.

Step 4 - Instrument your HTML

Next you'll need to add the Closure hook to your HTML files. In my project, there is just one HTML file, index.html. If you just include the base.js file as the Closure tutorial suggests, you will not be able to goog.require your own library modules. You have to tell base.js where to find your library and where to find the deps.js file. I added the following to the <head> section of index.html:

    <script type="text/javascript">
      var CLOSURE_NO_DEPS = true;
      var CLOSURE_BASE_PATH = "/ui/";  //this is the directory where I ran calcdeps.py
    </script>
    <script type="text/javascript" src="/ui/closure/goog/base.js"></script>
    <script type="text/javascript" src="/ui/deps.js"></script>

The CLOSURE_NO_DEPS option tells base.js that it shouldn't load Closure's own deps.js file and that we will handle the dependency graph loading ourselves. The CLOSURE_BASE_PATH setting is a prefix that gets added to the paths specified in the deps.js file. Next we load base.js, which defines the goog.require and goog.provide functions. And finally, we load the deps.js file that was generated in the last step.

With these files loaded, you can now goog.require any of your modules. For example, at the bottom of my index.html file, I can have this:

<script type="text/javascript">
goog.require("gvr.web.client");
</script>
<!-- the required code isn't available until a later script tag,
     because goog.require loads scripts using document.write -->
<script type="text/javascript">
var client = gvr.web.client.newClient();
client.getUser(function(user){ alert("Hi "+user.nickname); });
</script>

Closure - pun intended

So far, I think Closure's dependency resolution tools are my favorite part. They're relatively simple (you really only need two files) and don't require you to structure your code in any particular directory hierarchy (unlike Dojo, last I checked). My only wish at this point is for calcdeps.py and base.js to have a mechanism for registering third party libraries like jQuery without adding goog.provide() to the top of their files. You could add other libraries to the end of the generated deps.js, but that isn't very maintainable and won't work with the Closure Compiler (I think?). I haven't yet gotten to using the Closure Compiler with my code, so more on my experience with that later.
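For reference, a hand-added entry would just mimic the generated ones. Assuming a copy of jQuery living under lib/, it might look like this:

// appended by hand to the end of the generated deps.js
goog.addDependency('lib/jquery.js', ['jquery'], []);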

There has been some talk about using “class-based views” in Django to make view code more reusable. Apparently, there was even a presentation given about it. At Divvyshot, our code base is growing quickly and we are starting to reuse view code a lot. We’ve been refactoring all of our view code into classes, which makes views much easier to customize and mash together. Today I worked on some pretty exciting stuff that makes harnessing class-based views a snap.

Here’s a scenario we run into a lot.

  1. We have a view that displays information about a person with a url like /people/{id}/, where the id is the person object's id field.
  2. We have another view that displays information about an event with a url like /event/{slug}/, where the slug is some small number of alphanumeric characters uniquely identifying the event.
  3. We have a third view that shows information about an event relating to a person with a url like /event/{event_slug}/person/{person_id}/.

The third piece of the above combination is where class-based views really pay off. We already have a bunch of code for working with a person's data and a bunch of code for working with an event's data. Wouldn't it be great if we could just magically combine those two pieces of code and get all the data about both an event and a person and their relationship spit out onto a page? Well, we can, and here is a simplified example of how it would look in our code base.


First there is the code for displaying a page about a person. I'll explain in detail what's going on.

class PersonDetail(Handler):
    template = "myapp/person/detail.html"
    person = fromurl("id").model(Person)
    def update(self):
        # do a bunch of stuff with self.person, for example
        if self.request.user.get_person() == self.person:
            self.context['page_title'] = "This is you"
        else:
            self.context['page_title'] = "%s %s" % (self.person.first_name, self.person.last_name)

In Detail

First you'll notice that PersonDetail is a class and not a function. Django does not require views to be functions; they just have to be callable. PersonDetail subclasses Handler, which provides the __call__ method that's necessary to make an instance of PersonDetail callable. In case you are jumping to conclusions, we do not use an instance of PersonDetail directly as a callable view, for thread safety reasons that I will explain later.

The next thing you'll see is that the template is specified as a class attribute with the path used by a template loader. The actual template rendering with a proper request context and all that jazz is abstracted away for us by a render method defined in the Handler class.

The next cool thing is the line person = fromurl("id").model(Person) which declaratively spells out the mapping from a url parameter to a Person model object. In particular, this says to pull out the id from the keyword arguments passed to the view function (based on the regex in the url conf) and use it to look up a Person object. By default, a 404 response is returned if no such object is found. This is sort of a replacement for person = get_object_or_404(Person, id=some_id) that works better with class-based views.

Next we have an update method, which gets called before the template is rendered. The purpose of the update method is just to prepare the view, not to render a template to a response. That means adding stuff to the template context, adding additional attributes to the view instance, creating and processing forms, handling post data, etc. By putting all this logic in a standalone method, it is easy to modify the view's behavior without having to worry about how the HttpResponse is created.

In this example, we put variables that should be made available to the template into self.context, which is just a dictionary. Alternatively, we could set attributes on the view instance itself, which is made available to the template. For example, having {{view.person.name}} in the template would yield the desired result. The request is also made available as the self.request instance attribute. By setting attributes on the view instance, it becomes much easier to share data between multiple helper methods of the view. For example, you might have a method that processes GET requests and a separate one for POST requests. Subclasses of your view can then selectively override just one of the methods, all the while not having to worry about passing around any required data, like the request object itself.
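To make this concrete, here is a minimal sketch of what a Handler base class along these lines might look like. This is my own reconstruction for illustration, not the actual Divvyshot code, so the details are assumptions:

# A minimal sketch of a Handler base class -- a reconstruction for
# illustration, not the actual Divvyshot implementation.
from django.shortcuts import render_to_response
from django.template import RequestContext

class Handler(object):
    template = None  # subclasses set this to a template loader path

    def __call__(self, request, **kwargs):
        self.request = request
        self.context = {}
        # A full version would also resolve any fromurl(...) declarations
        # into model instances here (see the sketch near the end of this post).
        self.update()         # prepare the view
        return self.render()  # then turn it into an HttpResponse

    def update(self):
        """Subclasses override this to prepare the view."""

    def render(self):
        self.context['view'] = self  # expose the view instance to templates
        return render_to_response(
            self.template, self.context,
            context_instance=RequestContext(self.request))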


Next we have the code for displaying stuff about an event. This is a lot like the PersonDetail class. The only thing to note is that the event attribute has an additional piece of metadata which says that the "slug" url parameter corresponds to the "url_slug" field of the Event model.

class EventDetail(Handler):
    template = "myapp/event/detail.html"
    event = fromurl("slug").model(Event, "url_slug")
    def update(self):
        # do a bunch of stuff with self.event
        self.context['page_title'] = self.event.name

As the final section of the scenario I outlined above, we will combine these two classes using Python's multiple inheritance support. Strictly speaking, it's not necessary to use multiple inheritance to combine the functionality of the previous two classes, and frankly I haven't decided yet whether it is a good idea. But as long as you are careful and know what's going on in the base classes, it should be OK. This is Python, after all, and we don't do hand-holding.

class EventForPerson(EventDetail, PersonDetail):
    template = "myapp/event/person.html"
    def update(self):
        # do a bunch more stuff with self.event and self.person
        EventDetail.update(self)
        PersonDetail.update(self)
        self.context['page_title'] = "%s and %s" % (self.person.first_name,
                                                     self.event.name)

This example is a bit contrived because the only thing any of the update methods do is set the same variable in the template context to something different. But the idea you should take home from this is that the views could have arbitrarily complex business logic that can be easily extended and customized through subclassing, just as can be done with Model objects, admin views, HttpResponse objects, or anything else that is object oriented. With the multiple inheritance setup we have, our template, myapp/event/person.html can access the person object, the event object, and anything else provided by the update methods from EventDetail and PersonDetail. We could even {% include %} the other two templates in myapp/event/person.html and they would just work. In creating the EventForPerson class, we didn't even have to worry about how the Event and Person objects get looked up from the parameterized url. If we refactor the object lookup later (for example, switching from person ids to person slugs), we'll only have to change the code in one place.
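For instance, the combined template could be as simple as this (hypothetical contents, using the template paths from the classes above):

{# myapp/event/person.html #}
<h1>{{ page_title }}</h1>
{% include "myapp/person/detail.html" %}
{% include "myapp/event/detail.html" %}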

Url confs

Now for a quick note about how these get hooked up in a url conf file. You might be tempted to do something like this:

urlpatterns += patterns('',
    url(r'^event/(?P<slug>[\d\w\-]+)/person/(?P<id>\d+)/', EventForPerson()),
)

where the EventForPerson class is instantiated so as to provide the url conf with a callable object. But this means you would have a single instance of EventForPerson handling every request that gets processed. Besides not being thread safe, it's just plain confusing, because the update methods might "dirty up" the instance while processing one request, and that might affect the next request that gets processed. To avoid that, our urlconf looks like this:

urlpatterns += patterns('',
    url(r'^event/(?P<slug>[\d\w\-]+)/person/(?P<id>\d+)/', EventForPerson.view),
)

where EventForPerson.view is just a class method that instantiates and calls a brand new instance of EventForPerson for each request, passing in whatever parameters it receives and returning whatever result it gets. Unfortunately, due to a limitation of Django, you cannot use the handy string notation url(r'^some-regex', "myapp.views.EventForPerson.view") to achieve the same result, so you have to import the view classes into the url conf.
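The class method itself can be tiny. In the hypothetical Handler sketch from earlier, it would look something like this:

class Handler(object):
    # ... rest of the sketch from above ...
    @classmethod
    def view(cls, request, *args, **kwargs):
        # a brand new instance per request keeps things thread safe
        return cls()(request, *args, **kwargs)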

Dealing with conflicting regex groups in a urlconf

The last feature I want to briefly mention is how we deal with conflicting groups in a urlconf. Suppose that both of our base classes, PersonDetail and EventDetail, looked up objects based on a regex group named "id". If we wanted to combine these two view classes into one, the url regex pattern would have to use different group names. The pattern might look like ^event/(?P<event_id>\d+)/person/(?P<person_id>\d+)/. Even though the base classes are looking for the "id" group, we can override their behavior in a subclass. It would look like this:

class EventForPerson(EventDetail, PersonDetail):
    template = "myapp/event/person.html"
    event = EventDetail.event.fromurl("event_id")
    person = PersonDetail.person.fromurl("person_id")
    def update(self):
        # do a bunch more stuff with self.event and self.person
        EventDetail.update(self)
        PersonDetail.update(self)

Without having to know which models are used to look up person and event, I can still reconfigure which parts of the url get used to look them up.
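For completeness, here is a hypothetical sketch of how a fromurl helper supporting both .model() and this kind of rebinding could be put together. Again, this is my guess at the shape of the thing, not Divvyshot's actual implementation:

# A hypothetical sketch of the fromurl helper -- guesswork, not the
# actual Divvyshot implementation.
from django.shortcuts import get_object_or_404

class fromurl(object):
    def __init__(self, param, model_class=None, field='id'):
        self.param = param            # name of the url regex group
        self.model_class = model_class
        self.field = field            # model field matched against the group

    def model(self, model_class, field='id'):
        # declare which model (and field) the url parameter maps to
        return fromurl(self.param, model_class, field)

    def fromurl(self, param):
        # rebind to a different url regex group, keeping the model mapping
        return fromurl(param, self.model_class, self.field)

    def lookup(self, url_kwargs):
        # called by the Handler for each declared attribute; 404s on a miss
        value = url_kwargs[self.param]
        return get_object_or_404(self.model_class, **{self.field: value})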

Conclusion

If you don't need to reuse your view code, you shouldn't bother writing views as classes. If you do need to reuse view code, writing views as classes is the only sane way to do it. The utility classes we use at Divvyshot for all our class-based views are still baked into our code base, but I hope to open source the useful bits soon. If you are interested in using a similar class-based view implementation, let me know and I'll move the open sourcing of these utilities higher up on my to-do list.

If you are familiar with writing Django applications, you have probably run across the problem of extending the built-in User authentication model. Django does not yet have the hooks necessary for modifying the User object in a nice way, so you more or less have to resort to monkey patching.

Here is the basic monkey patching pattern I have seen:

def user_get_name(self):
    # do something with the user object which is self
    return "%s, %s" % (self.last_name, self.first_name)

User.get_name = user_get_name

Or if it is really just a one liner you can use a lambda, which avoids dirtying up the local namespace of wherever you are performing the monkey patching:

User.get_name = lambda self: "%s, %s" % (self.last_name, self.first_name)

The first monkey patching pattern makes reading the code incredibly painful (at least to me), and the lambda pattern isn't much better.

Decorator Pattern

You can perform the same operations in a more readable manner using decorators. Here is what it would look like:

def monkeypatch(cls):
    def decorator(f):
        setattr(cls, f.__name__, f)
        return f  # return f so the decorated name still points at the function
    return decorator

Now to monkey patch the get_name method of a User object, you would do this:

@monkeypatch(User)
def get_name(self):
    return "%s, %s" % (self.last_name, self.first_name)

I personally think this is a bit more readable. The real advantage of using a monkeypatch decorator, though, is that it calls out the fact that you are monkey patching. While reading the above code, it is very clear that some monkey patching business is going on.

Monkey patching is almost never the best way to accomplish what you're trying to do, but it will often get the job done fast. To remind yourself that you should revisit any monkey patching code later and think of a better way to do it, consider renaming the decorator to XXXmonkeypatch.

Class decorators with Python 2.6

If you are using Python 2.6, you can also use monkey patching decorators on entire classes. Here is an example of such a decorator:

def monkeypatch(cls_to_patch):
    def decorator(cls):
        cls_to_patch.__bases__ += (cls,)
        return cls
    return decorator

You would use this decorator like so:

@monkeypatch(User)
class MyUser(object):  # new-style, so it can be added to User.__bases__
    def get_name(self):
        return "%s, %s" % (self.last_name, self.first_name)

    def get_initials(self):
        return self.first_name[0]+self.last_name[0]

The main caveat with this method is that MyUser actually becomes a base class of User, so if User ever gains a method with the same name as one of your monkey patch methods, the User version will take precedence. This might be a feature, depending on what exactly it is you are doing.

With my recent move to Divvyshot, I’ve started learning a lot more than I ever planned about Django, which, to the chagrin of many a Python web framework, has become more or less the de facto standard for developing web applications in Python, to the extent of being among the few third-party libraries included in Google App Engine.

Being a long-time Zope 3 guy, I was in for quite a shock when I encountered all the differences between Zope 3 and Django. ORMs? What are those? And what do you mean I have to save the object explicitly after changing it? You mean I have to implement a get_absolute_url method for every single object I want to publish? How come I can't easy_install Django? Where are all the doctests? How do I generate test coverage reports? Security? What security? And I'm sorry, but my editor just doesn't understand your non-XML templating language. There's more, but I'll leave it at that.

So now begins my long trek to making Django work for me and hopefully for others as well. At the top of my list are deployability (eggs, externalized configuration, buildout?!) and maintainability (more tests and test automation). Part of this trek will involve discovering solutions that have already become standard practice in the Django community. Another part will be writing new tools to fill the gaps. And one last part will be extracting some fantastic solutions from the Zope community and making them work for Django.

As a first step, I'm going to look at just getting a development environment set up. The installation instructions and tutorial on djangoproject.com are easy enough to follow but lack a certain level of repeatability and encapsulation that I'm used to with Zope 3 projects. For those who don't know, I'm talking about buildout. Let's look at an example.

Say I want to start hacking on this project called z3c.formdemo, which is a full-fledged Zope 3 web application. Here's what I do:

  
    $ svn checkout svn://svn.zope.org/repos/main/z3c.formdemo/trunk z3c.formdemo
    $ cd z3c.formdemo
    $ python bootstrap.py
    $ ./bin/buildout

    ... go get a cup of tea if you haven't done this before ...
              ... I never said Zope 3 was small ...

    $ ./bin/demo fg
  

Just like that I've got everything installed including third party packages, test scripts, database configuration, web server, yadda yadda. It didn't even touch my system python.

Now I need a similar setup for my Django project. Enter Paver, a general utility for performing simple tasks using Python. Paver is kind of like Make for Python, but with a few steroids. Buildout would be another very good option, and I wouldn't be surprised if eventually I find myself using Buildout instead of Paver. However, after having used both a fair amount, I get the feeling that Paver is more likely to jibe with the Django community. After all, it uses Python as its configuration language, whereas Buildout strictly uses the INI format. It's also a bit easier to create "paver tasks" than "buildout recipes", as you don't need a python egg to house your code, though you can use one easily. (Someone please correct me if you do not need a python package to house a buildout recipe.)

Paver uses pavement.py files to define "tasks" you want to perform. The first task I want to perform is to "bootstrap" the project. That means installing everything needed to start the server, and preferably without mucking up my system python.

Paver comes with a task for generating a bootstrap.py script that will:

  • Create a virtual environment where you can install things without messing up system python.
  • Install any number of 3rd party packages your project depends on into the virtual environment.
  • Run a function in your pavement.py file that can do anything else that needs to be done.

There is a bit of a catch-22 in that you have to have Paver and virtualenv installed to generate the bootstrap.py script, but thereafter anyone else who uses bootstrap.py will not need Paver to run it.
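Assuming you already have setuptools, that one-time setup on your own machine is just:

  $ easy_install Paver virtualenv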

Here is what the initial pavement.py file will look like:

# /path/to/my/django/project/pavement.py

from paver.easy import *
options(
    virtualenv=dict(
        script_name="bootstrap.py",
        packages_to_install = [
            # Project dependencies
            'flickrapi',
            'BeautifulSoup',
            'Pygments',
            'Markdown',
            'gdata',
            'twitter',
            'Django',
            ],
        paver_command_line="init",
        ))

INSTRUCTIONS = """
Run
   $ source bin/activate
to enter the virtual environment and
   $ deactivate
to exit the environment.
"""

@task
def init():
    """Initializing everything so you can start working"""
    print "virtual environment successfully bootstrapped."
    print INSTRUCTIONS

With the initial pavement.py file created in my project directory, I can now generate the bootstrap.py script by running the command:


$ paver paver.virtual.bootstrap


If you get an error about an unknown task "paver.virtual.bootstrap" it means you don't have virtualenv installed.

Now for the cool part (note the file paths):

  $ python bootstrap.py
  ...
  $ source bin/activate
  $ which python
  /path/to/my/django/project/bin/python
  $ which easy_install
  /path/to/my/django/project/bin/easy_install
  $ python
  Python 2.5.4 (r254:67916, Aug  9 2009, 20:26:50)
  >>> import flickrapi
  >>> print flickrapi.__file__
  /path/to/my/django/project/lib/python2.5/site-packages/flickrapi-1.2-py2.5/flickrapi/__init__.py

virtualenv gives you your very own python executable and easy_install script that will install things directly into your very own site-packages folder. No need for sudo privileges. Having a dedicated site-packages folder for each project makes it really easy to work on multiple projects with different dependencies (different versions of the same dependencies, too!) without installing anything system wide.

So that is the virtualenv step.

With this in place, getting started on an existing django project is easy. Here is what the entire process might look like on the command line:

  $ svn co svn://svn.someproject.com/someproject/trunk someproject
  $ cd someproject
  $ python bootstrap.py
  $ source bin/activate
  (someproject)$ python manage.py runserver

At first I thought I would be annoyed by having to type source bin/activate all the time, but in practice it hasn't been a problem. You could skip the activate step and always run scripts directly from the virtual environment's bin/ directory (e.g. ./bin/python) and get the same effect. You would have to use the bin/* scripts explicitly with buildout anyhow.

I'm now at DjangoCon in Portland and have been hearing some interesting things about pip, an easy_install replacement. Hopefully @ianbicking will be able to explain why it is worth using. Given what appears to be a trend towards using git and other DVCSes both for code development and package distribution, pip's built-in support for installing packages from svn/bzr/git/hg repositories could be extremely useful.

After 18 months at Keas Inc., I have made the very difficult decision to move on to a new (ad)venture.

I am very excited to announce that as of today I will be working at divvyshot.com, a very new Y Combinator backed startup currently based in San Francisco.

At Divvyshot I will continue my self education in all things Web; front to back, top to bottom, inside and out. My primary job will be writing code, and lots of it. There will be more details later, but right now I have to go write some code.

Seven years ago I worked on an open source project called Guido van Robot, a programming language and environment for teaching basic programming concepts to beginners. I recently jumped back onto the project to move Guido van Robot to the web. The original was written in Python with a GTK front-end; I’ve rewritten Guido van Robot in JavaScript with an HTML front-end!

GvR-Online!

I have dubbed the result GvR Online! GvR Online comprises a core JavaScript library that runs GvR programs, a jQuery-powered editing/simulation environment, and a lightweight Python web service with a RESTful JSON API for data storage that runs on Google App Engine.

The code base has just gotten cleaned up enough for other people to start hacking on it and I’m on the lookout for contributors. You might be wondering why GvR Online is a project worth working on, so let me tell you.

The desktop version of GvR has been used all across the world, in and outside of classrooms, to successfully introduce people to computer programming (it even runs on the OLPC XO laptop). I want the online version to be just as successful and more. With the online version, schools with locked down computer labs won’t need to install any additional software to run Guido van Robot in their classrooms. Students and teachers will be able to work on their GvR programs from any computer with an internet connection. Sharing GvR programs with others – programmers or not – will be easier than ever before. Beyond GvR Online itself, I hope to provide a reproducible example of how the web can be harnessed to create easy to use and massively distributable online learning tools. I can imagine an ecosystem of micro web applications that provide interactive learning environments for topics far beyond programming.

But if purely philosophical or philanthropic reasons are not enough to get you to work on GvR, then maybe the technology pitch will do it.

The technology that powers GvR is the future of the open web. GvR lives in the cloud. The code base lives in Mercurial, a distributed revision control system. The brand new HTML5 canvas tag is used to render the GvR world. The primary programming language is JavaScript, arguably the most widely deployed programming language there is. The entire application runs off a RESTful web service that can be used to integrate with any other application. Imagine the possibilities of integration with projects like Bespin. I mean, this is cool stuff, right?

So what are you waiting for?

Check out the site at http://gvr.carduner.net.

Download the code from http://bitbucket.org/pcardune/gvr-online/.

Read the api documentation at http://gvr.carduner.net/ui/docs/index.html.

And start hacking!

I am happy to announce the initial publication of a new tool called ZBoiler.

ZBoiler is a collection of tools for generating boilerplate code for starting Python projects. Three main pieces make up ZBoiler: a web application/service (http://prealpha.zboiler.com), the boil command line program, and a few libraries of code generators.

Brief Architecture

The main problem with boilerplate code generators used in Python today is that they work completely from file templates. If you want to modify and improve upon a template, your only option is to fork it. They are inherently non-pluggable and inflexible.

ZBoiler improves on template-based code generators by providing an abstract and pluggable representation of code snippets, called builders, which handle the actual code generation while providing a clear API for modification. Instead of writing "class Foo:\n pass" to a file, you construct a class builder, modify it however you want, then render it to a file.
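As a toy illustration of the builder idea (this is not the z3c.builder.core API; see the docs linked at the end of this post for the real thing):

# A toy illustration of the builder concept -- NOT the z3c.builder.core API.
class ClassBuilder(object):
    def __init__(self, name):
        self.name = name
        self.methods = []

    def add_method(self, name, body='pass'):
        # record a method to be rendered later; returns self for chaining
        self.methods.append((name, body))
        return self

    def render(self):
        # turn the accumulated structure into python source code
        lines = ['class %s(object):' % self.name]
        if not self.methods:
            lines.append('    pass')
        for name, body in self.methods:
            lines.append('    def %s(self):' % name)
            lines.append('        %s' % body)
        return '\n'.join(lines)

print ClassBuilder('Foo').add_method('bar', 'return 42').render()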

ZBoiler provides an additional layer on top of builders that configures a collection of builders into high-level "features" like documentation, egg-based distribution, unit testing, or anything else you might want your python project to do. Since we're not using templates, multiple "features" can modify the same collection of builders safely, allowing you to mix and match high-level features however you want. Once you have decided on your features, you generate all the boilerplate code in one step.

As a final level of abstraction, we also group features into project templates. Project templates are useful for getting started on a larger project that uses a framework. For example, you might want to start a Zope 3 project, or Grok project, or Django project, or PyGame project. Each of these projects will typically have their own solutions for testing, deployment, etc. that correspond to different features.

zboiler.com

The zboiler.com website.

The zboiler.com website provides a pluggable web interface to all the project templates and individual features that are available. Each feature can be configured through the web, and once you are satisfied, you can download a tarball of the generated code. At the moment we have project templates for creating egg-based Python packages, Python command line programs, and full-fledged Zope 3 applications. You can see an early screencast of how it works here: http://zboiler.com/demo.html. (I plan on doing an updated one soon.)

The boil command

For those who are not interested in clicking around on a website, there is also a relatively intelligent command line interface to the same project templates and features available on the zboiler.com website. The easiest way to use the boil command is with predefined templates. Here is what a typical session might look like:

We can start by listing the available project templates.

$ boil -l
Available Templates:

  zope-project   "Zope 3 Web Application"
                   Includes all the features you would want for a Zope 3 Web Application.
  command-line   "Command Line Program"
                   Includes all the features you would want for a command line program.
  python-package "Python Package"
                   Just a simple python package with few bells and whistles.

ZBoiler is completely pluggable using setuptools entry points, so it is relatively easy to add more templates to this list.
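Contributing a template from your own egg would then be a matter of declaring an entry point in its setup.py, along these lines; note that the entry point group name here is my guess, so check the z3c.boiler source for the real one:

# setup.py for a hypothetical package contributing a ZBoiler template.
# The entry point group name below is a guess.
from setuptools import setup

setup(
    name='my.zboiler.templates',
    version='0.1.0',
    py_modules=['mytemplates'],
    entry_points={
        'zboiler.templates': [
            'my-template = mytemplates:MyTemplate',
        ],
    },
)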

You can then boil a template interactively, which will prompt you for any values missing from the template:

$ boil -t python-package
Enter the name for this project: z3c.foobar

The python-package project template will prompt us for a lot of data used by setuptools.

Options for: z3c.feature.core:meta-data
---------------------------------------
Project Description (? for help): The Foo Bar Project
License [GNU General Public License (GPL)]: 
using default value: GNU General Public License (GPL)
Author(s) (? for help): Paul Carduner
Author Email (? for help): 
URL (? for help): 
Version [0.1.0] (? for help): 0.5.0
Namespace Packages (? for help): ?
A list of namespace packages that should be created, one per line (i.e. zope or zc or z3c or collective)
Namespace Packages (? for help): z3c
Namespace Packages (? for help): 

The interactive editor handles validation and complex data types like list entry automatically.

Keywords (? for help): simple
Keywords (? for help): zboiler
Keywords (? for help): example
Keywords (? for help): 
Install Requires (? for help): 

Finished creating xml definition.

Once you finish going through the interactive wizard, you can see the XML project definition, which is what the features use to configure themselves. Notice that each feature has a type that points to an entry point.

Do you want to see the generated xml definition? (y/[n]): y

<project name="z3c.foobar">
  <feature type="z3c.feature.core:meta-data">
    <author>Paul Carduner</author>
    <author-email></author-email>
    <description>The Foo Bar Project</description>
    <version>0.5.0</version>
    <license>GNU General Public License (GPL)</license>
    <url></url>
    <keywords><item>simple</item><item>zboiler</item><item>example</item></keywords>
    <namespace-packages><item>z3c</item></namespace-packages>
    <install-requires/>
  </feature>
  <feature type="z3c.feature.core:python-interpreter"/>
  <feature type="z3c.feature.core:unit-testing"/>
  <feature type="z3c.feature.core:documentation"/>
</project>

Finally, the complete boilerplate code for a new egg-based Python package is generated.

Does this look right? ([y]/n): y
INFO - Creating directory ./z3c.foobar
INFO - Creating file ./z3c.foobar/bootstrap.py
INFO - Creating file ./z3c.foobar/setup.py
INFO - Creating file ./z3c.foobar/buildout.cfg
INFO - Creating directory ./z3c.foobar/src
INFO - Creating directory ./z3c.foobar/src/z3c
INFO - Creating directory ./z3c.foobar/src/z3c/foobar
INFO - Creating directory ./z3c.foobar/src/z3c/foobar/tests
INFO - Creating file ./z3c.foobar/src/z3c/foobar/tests/test_doc.py
INFO - Creating file ./z3c.foobar/src/z3c/foobar/tests/__init__.py
INFO - Creating file ./z3c.foobar/src/z3c/foobar/README.txt
INFO - Creating file ./z3c.foobar/src/z3c/foobar/index.txt
INFO - Creating file ./z3c.foobar/src/z3c/foobar/__init__.py
INFO - Creating file ./z3c.foobar/src/z3c/__init__.py
INFO - Creating file ./z3c.foobar/ZBOILER.txt
INFO - Build finished

The Python API

Finally, there is also the Python API. Rather than describe it all here, I will let you read the doctests, which are nicely rendered using Sphinx here: http://docs.carduner.net/. Probably one of the more relevant sections is the one on Python code builders: http://carduner.net/docs/z3c.builder.core/python.html. There is also a long example that walks through the configuration of all the builders necessary for a Zope 3 application: http://carduner.net/docs/z3c.builder.core/example.html.

Get the Code

All the code for ZBoiler, including the web application is kept in the Zope subversion repository. Check it out:

$ svn co svn://svn.zope.org/repos/main/z3c.builder.core/trunk z3c.builder.core
$ svn co svn://svn.zope.org/repos/main/z3c.feature.core/trunk z3c.feature.core
$ svn co svn://svn.zope.org/repos/main/z3c.feature.zope/trunk z3c.feature.zope
$ svn co svn://svn.zope.org/repos/main/z3c.boiler/trunk z3c.boiler
$ svn co svn://svn.zope.org/repos/main/z3c.boilerweb/trunk z3c.boilerweb

Initial releases are available on pypi. To get the boil command, you can use easy_install:

$ easy_install z3c.boiler

Get Involved

ZBoiler is very new and so far not tested much in the wild. But the foundation is in place for anyone to start contributing new features and project templates to ZBoiler. Here is a short list of features I'd like to see:

  • Google App Engine / Django
  • Paver - automatically generate the Paver bootstrap.py and pavement.py files
  • Other unit testing harnesses
  • PyGame Projects
  • Other cool stuff!!!