GitHub screencasts

If you’re using Git but haven’t heard of or used GitHub, a Git hosting service with a social networking twist, then here’s your chance! Scott Chacon is doing a screencast series called “Insider Guide to GitHub” for the Pragmatic Programmers. The first episode is free of charge and a great way to get introduced to the features GitHub provides.

 

Posted on December 9, 2008 in git, software engineering

 

collective.buildbot 0.3.0

I just pushed a new version of collective.buildbot to PyPI. Some highlights of the new release are:

  • Support for PyFlakes checks
  • Refactored project and poller recipes to support multiple repositories (replacing the projects and pollers variants, which are now gone)
  • SVN pollers work again
  • Cygwin fixes

If you were using an earlier version you will need to update your buildout configuration to accommodate the changes in the recipe configuration options.

 

Posted on May 28, 2008 in software engineering

 


Building Buildbots

Some time ago Tarek Ziadé started a project to make it easier to configure and set up a Buildbot environment using zc.buildout. During the Paris Plone sprint I helped Jean-François Roche and Gaël Pasgrimaud improve upon this work, and after the sprint the collective.buildbot project was released.

I recently took some time to polish up the package with proper documentation and examples that should make it easier to deploy for your own projects, and released the changes as version 0.2.0.

Setting up a Buildbot environment is pretty easy: you create a buildout for the build master, which is responsible for configuring all the projects, and one or more buildouts for the build slaves. The “Putting it all together” section in the documentation gives you an overall picture of how to accomplish this.

Hopefully this will encourage people to use Buildbot to improve the quality of their software. There are already some public buildbots available; check out buildbot.ingeniweb.com or buildbot.infrae.com, for example. Is your buildbot next?

UPDATE: There was a bug in the “Putting it all together” example, which is fixed in 0.2.1.

 
 


SWF metadata parser

Recently I needed to determine the dimensions of SWF (Flash animation) files so I could embed them properly on a web page, but I couldn’t immediately find anything with Google that would perform the task. I am aware of the Hachoir project, but it seemed a bit overkill for my simple use case, and a quick try with hachoir-metadata failed to parse my particular SWF file.

Luckily the container section of the SWF file format (which contains the metadata) is rather simple, and writing a parser for it turned out to be a nice distraction from my normal duties. The result is hexagonit.swfheader, a minimal package (no dependencies outside the standard library) that provides a single function which parses a SWF file and returns its metadata.
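
Usage is about as simple as it gets. Here is a minimal sketch, assuming the package exposes its single parsing function as parse() taking a file path; the exact metadata key names shown are illustrative:

import hexagonit.swfheader

# Parse the SWF container metadata. The parse() name and the width/height
# keys are assumptions based on the package description, not verified API.
metadata = hexagonit.swfheader.parse('animation.swf')
print('%(width)s x %(height)s' % metadata)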

The package also comes with a console script that you can use on the command line to quickly introspect local SWF files. In a buildout you’ll need to use the zc.recipe.egg:scripts recipe to get the script installed.

 

Posted on April 16, 2008 in software engineering

 


Snowsprint report

Once again the annual Snowsprint hosted by Lovely Systems was a great experience. This was my second time attending the sprint and I enjoyed it very much. The scenery in the Austrian Alps is just amazing. This time I even managed to hold off catching a cold until after the sprint 🙂

Alternative indexing for Plone

This year I wanted to work on subjects that I’m not the most familiar with. On the first night I expressed interest in the alternative indexing topic proposed by Tarek Ziadé, which led us to work on an external indexing solution for Plone based on the Solr project. Enfold Systems had already started working with Solr on a customer project, and Tarek had arranged with Alan Runyan to collaborate on their work. Tom Groß joined us, and our first task was to produce a buildout that would give us a working Solr instance. We ended up creating two recipes to implement the buildout: collective.recipe.ant, a general purpose recipe for building Ant-based projects (kind of like hexagonit.recipe.cmmi for Java-based projects, although you can use Ant for non-Java projects just like make), and the Solr-specific collective.recipe.solrinstance, which creates and configures a working Solr instance for instant use.

Enfold Systems already had a working implementation of a concept where the Plone search template (search.pt) was replaced by their own, which implemented the search using only an external Solr indexing service. However, everything was still indexed in the portal_catalog as usual, so there was no gain in terms of ZODB size or indexing speed compared to a vanilla Plone site. Querying the Solr instance was of course extremely efficient, which we verified later using a JMeter-based benchmark. We wanted to experiment with replacing some portal_catalog indexes with Solr to see if we could gain any benefits in ZODB size or indexing speed.

As anyone who is at least a bit familiar with portal_catalog will know, replacing the whole of it can be difficult because of special purpose indexes such as ExtendedPathIndex, which Plone relies upon heavily. So we decided to see if we could replace the “easier” indexes with Solr and keep the rest in portal_catalog. This meant we needed to merge results from both catalogs before returning them to the user, which we did by replacing the searchResults method in ZCatalog.Catalog.
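
Roughly, the idea looks like this in code. This is a sketch of the approach, not our actual sprint code; SOLR_INDEXES, solr_query() and _original_searchResults are made-up names:

# Indexes assumed to be handled by Solr instead of portal_catalog.
SOLR_INDEXES = frozenset(['SearchableText', 'Description'])

def searchResults(self, REQUEST=None, **kw):
    # REQUEST handling omitted for brevity; assume criteria arrive in kw.
    solr_criteria = dict((k, v) for k, v in kw.items() if k in SOLR_INDEXES)
    zodb_criteria = dict((k, v) for k, v in kw.items() if k not in SOLR_INDEXES)

    uids = None
    if solr_criteria:
        # Ask Solr for the UIDs of matching objects (hypothetical client call).
        uids = set(solr_query(solr_criteria))

    # Let the stock catalog machinery handle the remaining criteria.
    brains = self._original_searchResults(**zodb_criteria)
    if uids is None:
        return brains
    # Naive merge: intersect the two result sets on UID. This is exactly
    # the kind of unoptimized merging that made our hybrid queries slow.
    return [brain for brain in brains if brain.UID in uids]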

To test our implementation we generated 20,000 Document objects in each of two Plone instances, filled them with random content (more on this later), and compared ZODB size, indexing time and query speed. The generated objects resulted in roughly 100 MB worth of data, and the size difference was about 8% in favor of Solr. Since we didn’t test this further with different data sets, I wouldn’t draw any conclusions based on this except to note the (obvious) fact that externalizing the portal_catalog makes it possible to reduce the size of the ZODB to some degree. I know that some people use a separate ZODB mount for their catalogs, so using an external catalog may be a good solution in some cases. Indexing times didn’t differ much, but slightly favored Solr. Querying our hybrid ZCatalog/Solr index turned out to be much slower than either ZCatalog or Solr by itself 🙂 I’m sure this was because of the unoptimized merging code in our searchResults replacement.

In the end, I think the approach Enfold Systems originally took is the correct one for near-term projects. Querying Solr is very fast, and indexing objects in both the portal_catalog and an external Solr instance doesn’t add much overhead. If you need a customized search interface for your project with better performance than portal_catalog can offer, you should check out Solr. The guys at Enfold Systems promised to put their code in the Collective for everybody to use, including our buildout.

zc.buildout improvement

Godefroid Chapelle had a proposal to improve zc.buildout so that you can use buildout itself to get information about the recipes it uses. After discussing the matter with Godefroid and Tarek and a quick IRC consultation with Jim Fulton, we decided to prototype a new buildout command, describe, that returns information about a given recipe. Jim Fulton expressed his desire to keep recipes as simple as possible, so the describe command simply inspects all the entry points in a recipe egg and prints the docstrings of the recipe classes. If the functionality is merged into mainline buildout, recipe authors should consider putting a description of the recipe and its available options in the docstrings (something we currently see on the PyPI pages of well disciplined recipes).

The code is in an svn branch available at http://svn.zope.org/zc.buildout/branches/help-api/. The following examples are shamelessly ripped from Tarek’s blog.


$ bin/buildout describe my.recipes
my.recipes
    The coolest recipe on Earth.
    Ever.

Multiple entry point support


$ bin/buildout describe my.recipes:default my.recipes:second
my.recipes:default
    The coolest recipe on Earth.
    Ever.
my.recipes:second
    No description available
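
The core of the prototype is straightforward. Here is a minimal sketch of the same idea using pkg_resources (my paraphrase, not the branch code verbatim): look up the egg’s zc.buildout entry point group and print each recipe class’s docstring.

import pkg_resources

def describe(egg_name):
    """Print the docstring of each zc.buildout recipe in the given egg."""
    dist = pkg_resources.get_distribution(egg_name)
    for name, entry_point in sorted(dist.get_entry_map('zc.buildout').items()):
        recipe_class = entry_point.load()
        doc = (recipe_class.__doc__ or 'No description available').strip()
        print('%s:%s' % (egg_name, name))
        for line in doc.splitlines():
            print('    %s' % line.strip())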

Random text generation with context-free grammars

The alternative indexing topic required us to generate some random content in our test sites, and both Tarek and I found doing this quite interesting in its own right. After the other work was finished we started playing with the idea of a library for generating random text based on context-free grammars. You can read Tarek’s post on the library for more information. The end result was that we created a project called Gibberisch at http://repo.or.cz/w/gibberis.ch.git, which currently contains some random text modules and a Grok interface called Bullschit 🙂
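
To give an idea of the technique (this toy is mine, not Gibberisch’s actual code): represent the grammar as a table of productions and expand a start symbol recursively, picking a random production at each step.

import random

# A tiny context-free grammar: each symbol maps to a list of productions.
GRAMMAR = {
    'SENTENCE': [['NOUN_PHRASE', 'VERB_PHRASE']],
    'NOUN_PHRASE': [['the', 'NOUN']],
    'VERB_PHRASE': [['VERB', 'NOUN_PHRASE']],
    'NOUN': [['synergy'], ['paradigm'], ['buildout']],
    'VERB': [['leverages'], ['refactors']],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal: emit the word itself
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

print(' '.join(expand('SENTENCE')))  # e.g. "the paradigm leverages the buildout"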

I worked with Ethan Jucovy on the Grok interface, which was great fun. Since this was our last-day project there were really no serious goals. We just wanted to play with Grok and ended up building a RESTful interface for building up a grammar and then generating random content out of it. If you’re working on a RESTful implementation I can recommend the RestTest add-on for Firefox; it’s a real time saver!

Basically, Bullschit models the grammar using Zope containers so that you can have multiple different grammars in one application. Each grammar consists of sections that contain parts of sentences (the building blocks of the context-free grammar) called Schnippets. You can use the basic HTTP verbs (POST, PUT, GET and DELETE) to maintain the grammar and generate the random text.

For our presentation we hooked in the S5 slide show template to produce endless slides of total gibberisch. You can have even more fun by using the OS X speech synthesizer (or any other, for that matter) to read your presentation aloud! Here’s an example of a slide generated with Bullschit and S5.

Presentation with Bullschit & S5

If you’re interested in giving it a go, you can get the code using git.


$ git clone git://repo.or.cz/gibberis.ch.git

For those interested in Git, don’t miss the recent 1.5.4 release!

 

Posted on February 3, 2008 in plone, zope

 

Improved zc.buildout recipes with ZopeSkel

Today I worked with Tarek Ziadé on ZopeSkel. Tarek concentrated on refactoring the ZopeSkel layout to put each template in its own module and wrote doctests for all available templates. Go Tarek! The test runner actually runs tests in two layers: first testing the output of the generated items and then, if the items contain tests themselves, running those as well.

I concentrated on improving the template for creating new zc.buildout recipes. Many useful recipes suffer from lacking documentation and an unappealing front page on PyPI. I refactored the template to include a common set of documentation files, such as CHANGES.txt, README.txt, CONTRIBUTORS.txt etc., and added code that puts all those documents together to produce a serious-looking reST document that looks good on PyPI. So now it’s up to the recipe author to just fill in those files accordingly.
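
The code that stitches the documents together is the familiar long_description trick in setup.py. A sketch of the pattern (the file names and locations here are assumptions; check the generated skeleton for the real layout):

import os

def read(*parts):
    # Read a documentation file relative to setup.py.
    return open(os.path.join(os.path.dirname(__file__), *parts)).read()

# Concatenate the documentation files into one reST document that PyPI
# renders as the package front page.
long_description = '\n\n'.join([
    read('README.txt'),
    read('CHANGES.txt'),
    read('CONTRIBUTORS.txt'),
])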

To help recipe authors, and especially people new to zc.buildout, I also added comments in both the documentation files and the code to help with implementing the recipe and especially with documenting it so that other people are able to use the recipe in their own buildouts. To me, one of the most important parts of a recipe’s documentation is the list of available options and their semantics. Looking at the PyPI pages for zc.buildout and zc.recipe.egg you can easily get this information about the component. I’ve also tried to do the same with my own recipes (hexagonit.recipe.cmmi, hexagonit.recipe.download). The template provides a stub for documenting the options in the README.txt file that authors can fill in.

I also created a minimal doctest for the buildout. While only a skeleton, the test actually runs a buildout using the recipe, so you can run the test case right after ZopeSkel has finished generating it. This should help recipe authors get started with testing the recipe while they implement it.

In addition, I updated the trove classifiers to appropriate values for a buildout recipe and added support for automatically inserting the trove classifier for the chosen license into the setup.py file. So now when paster asks for a license for the recipe and you answer, for example, ZPL, you get ‘License :: OSI Approved :: Zope Public License’ in your setup.py automatically. This code is actually in zopeskel.base and you can easily re-use it in the other ZopeSkel templates. Just take a look at how the recipe template uses it.
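
The mechanism boils down to a lookup table from the license answer to its trove classifier. A minimal sketch of the idea (the real code lives in zopeskel.base; only the ZPL entry is confirmed by this post, the others are just examples of real classifiers):

LICENSE_CLASSIFIERS = {
    'ZPL': 'License :: OSI Approved :: Zope Public License',
    'GPL': 'License :: OSI Approved :: GNU General Public License (GPL)',
    'MIT': 'License :: OSI Approved :: MIT License',
}

def license_classifier(answer):
    # Return the matching classifier, or None for unknown licenses.
    return LICENSE_CLASSIFIERS.get(answer.strip().upper())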

If you haven’t used ZopeSkel before, give it a try!

$ easy_install ZopeSkel
$ paster create --list-templates
$ paster create -t recipe collective.recipe.foobar

If you want to try the recent changes, you need to get ZopeSkel from the collective.


http://svn.plone.org/svn/collective/ZopeSkel/trunk/

There’s been lots of interest in ZopeSkel here at the Snowsprint, so expect to see cool new templates soon!

Update: 25.01.2008

ZopeSkel 1.5 has been released and contains the latest changes.

 

Posted on January 22, 2008 in plone, software engineering, zope

 

PrimaGIS sprint report

The Plone Conference was held in Naples, Italy this year and, as expected, was a great experience. Like last year in Seattle, we had a PrimaGIS-focused sprint where a group of people worked on the next version of PrimaGIS, which is based purely on Plone 3.0.

The sprint topics that people could choose from were listed on the official sprint page. Below is a breakdown of the sprint activities, what was accomplished and what is still left to do.

OpenLayers integration

One of the biggest changes in the upcoming PrimaGIS version is OpenLayers integration. I had initially planned for OpenLayers to be an alternative user interface for PrimaGIS, but after a thorough introduction and discussion with Josh Livni we decided to use OpenLayers as the sole interface to PrimaGIS and retire the old custom one. I believe this decision will serve us well in the future, and the resources of PrimaGIS developers can be put to better use without having to worry about the map UI. The OpenLayers developers seem to be doing an excellent job at that, and we’re grateful to be able to use their work.

Currently the OpenLayers view works fine but there is little user configurability. The next step is to refactor the code so that the UI can be configured using user-friendly forms and to make it easier for power users to customize the relevant parts. We also want to hook in the more advanced OpenLayers features, such as spatial feature editing.

We also implemented TileCache support for PrimaGIS so that people can easily configure their TileCache servers to improve PrimaGIS performance. Each presentation layer is able to produce a TileCache configuration section that can be copied and pasted into your TileCache configuration file.
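
For illustration, such a section might be generated along these lines; the option names follow TileCache’s WMS layer configuration as I remember it, and the helper function and all values are made up for this sketch:

def tilecache_section(name, wms_url, layers, bbox, srs='EPSG:4326'):
    """Render a TileCache layer section for a WMS-backed PrimaGIS layer."""
    return (
        '[%s]\n'
        'type=WMS\n'
        'url=%s\n'
        'layers=%s\n'
        'bbox=%s\n'
        'srs=%s\n'
    ) % (name, wms_url, layers, ','.join('%s' % v for v in bbox), srs)

print(tilecache_section('demo', 'http://example.org/primagis/wms',
                        'demo', (-180, -90, 180, 90)))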

Sean Gillies participated remotely and implemented support in PCL for the Python Geo Interface, which allowed us to pull feature data of all geometry types (point, line, polygon) directly into OpenLayers. Thanks Sean!

Big thanks also to Josh Livni for giving a lightning talk on using OpenLayers maps (including examples on FeatureServer and WPServer), for helping me understand OpenLayers a bit better, and for working on the actual integration during the sprint!

New layer classes

Partly inspired by OpenLayers and partly by the changes made to PCL and ZCO, we decided to refactor the PrimaGIS layer classes. Previously there were two types of layers: PrimaGISLayer and PrimaGISDataLayer. PrimaGISLayer was used to create a map layer from spatial data outside of Zope, and PrimaGISDataLayer from data within the ZODB. In either case it was difficult to share layers between multiple maps.

It had always been a goal to get rid of the artificial difference between layers and data layers, and adding support for sharing layers between maps seemed like a good idea as well.

The new types of layers are Layer and Presentation Layer. A Layer is a combination of what PrimaGISLayer and PrimaGISDataLayer used to be: it selects the data source from which spatial data is acquired and defines the styling of the spatial features. It doesn’t matter whether the data comes from within Zope or not. The difference is that Layers don’t need to be located within Map objects: they can live anywhere on your site and be shared between multiple maps. A map administrator may choose to have a folder where all Layers are stored, but this is a totally arbitrary choice.

A Presentation Layer is an object that is stored within a Map object and refers to one or more Layer objects. It is tied more tightly to OpenLayers concepts such as the layer type, which can be base, overlay or vector. Each presentation layer is available as a Web Map Service (WMS) layer and renderable by OpenLayers. Connecting multiple Layers into a Presentation Layer allows you to create composite layers that can be switched on and off as one. Below is a simple diagram that shows how the different objects are tied together.

PrimaGIS object relations
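
In code, the relationship in the diagram amounts to something like the following toy model (mine, not the actual PrimaGIS classes):

class Layer(object):
    """Binds a spatial data source (inside or outside Zope) to styling."""
    def __init__(self, datasource, style):
        self.datasource = datasource
        self.style = style

class PresentationLayer(object):
    """Lives inside a Map; rendered by OpenLayers, exposed over WMS."""
    LAYER_TYPES = ('base', 'overlay', 'vector')

    def __init__(self, layer_type, layers):
        assert layer_type in self.LAYER_TYPES
        self.layer_type = layer_type
        self.layers = list(layers)  # references to shared Layer objects

class Map(object):
    """Aggregates presentation layers into a renderable map."""
    def __init__(self, presentation_layers=()):
        self.presentation_layers = list(presentation_layers)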

Buildout for Linux distributions

Alessandro Amici took on the daunting task of refactoring primagis.buildout into an alternative buildout configuration that can take advantage of installed system libraries. By default, primagis.buildout builds all the required components from scratch, a process that can take more than an hour even on a modern machine.

The new buildout simply assumes that a given set of packages is installed system wide and compiles the rest of the dependencies against those. Currently the buildout provides a list of Debian packages that users of Debian-based distributions (such as Debian or Ubuntu) should install prior to running the buildout. See README-deb.txt in the buildout source tree for more specific instructions.

Even though the new buildout configuration was put together for Debian-based distributions, it is also usable on other distributions if a similar set of dependency packages is available. Users of other distributions are invited to contribute a list of packages suitable for their distributions.

The implementation of the buildout, which was initially estimated to take less than an hour :), turned out to be a much more complicated task due to packaging bugs and differences in libraries between Debian and Ubuntu. We even created a new buildout recipe to help with working with C libraries, which Alessandro released as bopen.recipe.libinc. Users of 64-bit distributions should give the buildout a try, since there have been reported problems when compiling some of the dependencies by hand on these systems.

Demo product

Sune Woeller and Chris Calloway worked together on a new demo product for PrimaGIS called zgeo.primagisdemo. The idea is to provide an effect similar to the createPrimaGISDemo.py script found in previous versions. However, instead of putting the demo map in PrimaGIS itself, a separate product was created that sets up the map using a GenericSetup profile for content import. This is still a work in progress, so stay tuned for a release announcement soon!

Once it’s ready, you can simply copy the package into your Python path and run the GenericSetup import step to get a PrimaGIS demo in your Plone site.

Remaining tasks

There are still a few items I would like to see in PrimaGIS 0.7.

  • Persistent data stores and symbolizers
  • Better OWS properties management view
  • Spatial index for ZODB data
  • Feature inspection
  • Feature editing

Work has already been started on most items, and some are simply a matter of hooking into the functionality provided by OpenLayers.

Thanks again to all the people who participated in the sprint! It’s always great to meet people you know from IRC in real life.

 

Posted on October 19, 2007 in GIS, plone, primagis