
Category Archives: zope

Using a forward-proxy for direct access to production sites

At our company we use a fairly common production environment setup for our Plone sites. We’ve got Apache for virtual hosting, logging and SSL, followed by Varnish for caching and Perlbal for load balancing, finally backed by a farm of Zope instances using a ZEO storage.

Each of our client sites is served by at least two different Zope instances so that we can perform rolling updates by bringing down, updating and restarting the instances one at a time without causing any downtime for our customers. While a Zope instance is down for maintenance we remove it from the load balancer pool so it won’t receive any requests during the maintenance period. Before returning the updated Zope instance to the load balancer’s pool we need to be able to access it directly, mainly for two reasons:

  • to verify that the instance functions correctly and there are no regressions
  • to warm up various Zope-level caches so that the first requests don’t pay the cold-start penalty

It is also important that we can access the Zope instance without going through Varnish and the load balancer to make sure we are indeed seeing a response from the particular Zope instance instead of a cached copy.

Background

Originally we had set up a custom <VirtualHost> section in Apache that allowed us access to all sites from within a single domain name, something like https://zmi.mycompany.com/customer1 and https://zmi.mycompany.com/customer2. This worked out great in the past and allowed us both to easily verify that an instance was working properly and to warm up the Zope memory caches. However, after warming up the customer sites and putting the instances back into the load balancer pool we started experiencing problems with inconsistent behaviour between the instances in the Zope farm with regard to links and page elements such as images. Links and images were pointing to our custom https://zmi.mycompany.com/ domain instead of the customer-specific one they should have used. Somehow the cache warm-up process had broken the live site.

It didn’t take long to find out that CacheFu, and the in-memory Page Cache in particular, was the culprit. The Page Cache caches the results of rendering a page, thus persisting links with the incorrect domain name. We could have simply purged the Page Cache after the warm-up, but since the goal was to warm up the caches (including the Page Cache) that did not feel right. To get around the issue we would need to access the particular Zope instance using the real customer domain name instead of our custom one. We chose to use an HTTP forward proxy for this.

The main idea is that normal users accessing http://www.customer.com/ would be served through the normal production pipeline, including Varnish and Perlbal, and by the Zope instances active in the load balancer. However, when using the custom HTTP proxy we could use the same http://www.customer.com/ address but be served directly by a particular Zope instance, bypassing the caching and load balancing. Since Apache provides all that we needed out of the box, it was a simple choice to use it.

Configuring Apache to forward-proxy requests

We decided to implement the forward proxy configuration within a <VirtualHost> configuration. Requests coming to the customer domain would need to be routed to a particular Zope instance while other requests would be proxied through to the outside world. The configuration consisted mostly of common mod_rewrite rules, but we still needed a mechanism to target a particular Zope instance in our farm.

A simple solution would have been to create a separate <VirtualHost> section for each Zope instance in the farm. However, scaling linearly with the number of Zope instances was not desirable, as it would result in a large number of virtual host sections that would be roughly 90% identical. Instead, we chose to do the following. Each physical backend machine running a number of Zope instances would be handled by a single <VirtualHost> section, and the virtual host’s ports would be mapped 1:1 to the Zope instances’ ports. In other words, we would use port numbers to separate Zope instances within a single physical backend machine and virtual hosts to separate the physical machines. This seemed like a good compromise.

The <VirtualHost> proxy configuration for a single backend machine running multiple Zope instances would then look like this. We’ve called the proxy vhost backend1.proxy.mycompany.com and use 192.168.0.100 as the backend machine address.

<VirtualHost 1.2.3.4:*>
    ServerName backend1.proxy.mycompany.com
    ProxyRequests On
    <Proxy *>
        Order deny,allow
        Deny from all
        Allow from 4.5.6.7
    </Proxy>

    # Tell Apache to preserve the physical TCP port information so we
    # can map it directly to the Zope backends by reading
    # the %{SERVER_PORT} environment variable.
    UseCanonicalName Off
    UseCanonicalPhysicalPort On

    RewriteEngine On
    # Read the rewrite map from an external file. The file
    # provides a mapping from public host names to ZODB paths leading
    # to the corresponding Plone site roots.
    RewriteMap zope txt:/var/apache/proxy-rewrite-map.txt
    RewriteMap tolower int:tolower

    # Make sure we have a Host: header
    RewriteCond %{HTTP_HOST} !^$
    # Normalize the Host: header
    RewriteCond ${tolower:%{HTTP_HOST}|NONE} ^(.+)$
    # Lookup the hostname in our rewrite map
    RewriteCond ${zope:%1} ^(/.*)$
    # Finally, rewrite and proxy to the Zope backend. We map the proxy
    # ports directly to the Zope backend ports which allows a mechanism for
    # selecting a specific backend by choosing a matching port.
    RewriteRule ^proxy:http://[^/]*/?(.*)$
        http://192.168.0.100:%{SERVER_PORT}/VirtualHostBase/
          http/${tolower:%{HTTP_HOST}}:80/%1/VirtualHostRoot/$1 [P,L]

</VirtualHost>

I have removed parts not relevant to this post, such as log file configuration, from the example above. Also, the last RewriteRule is split across multiple lines here for readability, but it should all be on a single line in the actual configuration. The main points in the config are:

  • the use of “*” as the port in the <VirtualHost> node, which allows a single section to serve multiple ports
  • the use of a <Proxy> section to limit access to the proxy. This is very important so that we do not expose the proxy to the whole world. Only allow yourself access to it.
  • setting UseCanonicalName Off and UseCanonicalPhysicalPort On to make sure we get the actual port used when reading the %{SERVER_PORT} environment variable later in the rewrite rule
  • setting up an external rewrite map file that can be shared among <VirtualHost> sections
  • performing conditional rewrites that proxy requests targeted at a customer domain to the particular Zope instance and let other requests through.

The external rewrite map file is a simple text file in which each line contains two values: first the customer domain name (which will be matched against the normalized contents of the Host: HTTP header) followed by the ZODB path to the root of the corresponding Plone site. For example:

www.customer1.com  /customers/customer1
www.customer2.com  /customers/customer2

Now, assuming that we have two Zope instances running on machine 192.168.0.100 (the address used in the rewrite rule above) on ports 10001 and 10002, we can access them through the proxy by configuring the browser to use an HTTP proxy at backend1.proxy.mycompany.com on port 10001 or 10002 respectively.
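
To make the mapping concrete, here is roughly what happens to a request sent through the proxy on port 10001, assuming the rewrite map above:

Browser request via the proxy:
    http://www.customer1.com/news
Rewritten backend URL:
    http://192.168.0.100:10001/VirtualHostBase/http/www.customer1.com:80/customers/customer1/VirtualHostRoot/news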

We now have a situation where we can access a customer site, e.g. http://www.customer1.com/, and choose the particular Zope instance by configuring the HTTP proxy in our browser. This is all well and good and achieves what we set out to do. However, manually configuring the proxy settings every time is no fun, and there is nothing to differentiate our use of the site in “normal” mode from the “backdoor” proxied mode. It is too easy to leave the proxy configuration on once the update has been finished. Luckily, there is a solution available that makes this a breeze.

Using FoxyProxy to manage proxy configurations

FoxyProxy is a Firefox extension that helps with managing multiple proxy configurations and makes switching between them quick and easy. It also shows the current proxy configuration in the status bar, which makes it easy to see which backend we’re currently talking to (if you name your proxy configurations accordingly).

To continue our example, we would make two separate proxy configurations for accessing each one of our two backend Zope instances. The first one we could call “Backend #1 — Zope instance #1” and use backend1.proxy.mycompany.com:10001 as the address and the other one “Backend #1 — Zope instance #2” using backend1.proxy.mycompany.com:10002 as the address. Having the name of both the physical machine and the Zope instance in the proxy configuration helps to identify the particular Zope instance quickly.

With FoxyProxy configured it is now very easy to switch between accessing a site through the full production pipeline with caching or bypassing that and talking directly to a given Zope instance. Because the solution is generic we can take advantage of it with any HTTP client that is capable of using a proxy. It would now be very easy, for example, to do benchmarking with and without Varnish by simply switching between proxy configurations when using ab or another benchmarking tool.
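
For example, a simple smoke-test or warm-up script can use the same proxy from Python. This is only a minimal sketch; the proxy address and the list of URLs are made up and need to be adjusted to your environment.

import urllib2

# Point the opener at the forward proxy in front of the Zope instance
# we want to talk to (hypothetical host and port).
proxy = urllib2.ProxyHandler({'http': 'backend1.proxy.mycompany.com:10001'})
opener = urllib2.build_opener(proxy)

# Hypothetical list of URLs to verify and warm up.
urls = [
    'http://www.customer1.com/',
    'http://www.customer1.com/news',
]

for url in urls:
    body = opener.open(url).read()
    print url, len(body), 'bytes'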

 

Posted by on January 1, 2009 in zope

 

Snowsprint report

Once again the annual Snowsprint hosted by Lovely Systems was a great experience. This was my second time attending the sprint and I enjoyed it very much. The scenery in the Austrian Alps is just amazing. I even managed to hold off catching a cold until after the sprint this time 🙂

Alternative indexing for Plone

This year I wanted to work on subjects that I’m not the most familiar with. On the first night I expressed interest in the alternative indexing topic proposed by Tarek Ziadé, which led us to work on an external indexing solution for Plone based on the Solr project. Enfold Systems had already started working with Solr on a customer project and Tarek had arranged with Alan Runyan to collaborate on their work. Tom Groß joined us, and our first task was to produce a buildout that would give us a working Solr instance. We ended up creating two recipes to implement the buildout: collective.recipe.ant, a general-purpose recipe for building ant-based projects (roughly what hexagonit.recipe.cmmi is for make-based projects, although, just like make, you can use ant for non-Java projects too), and the Solr-specific collective.recipe.solrinstance, which creates and configures a working Solr instance for immediate use.

Enfold Systems already had a working implementation of a concept where the Plone search template (search.pt) was replaced by their own, which implemented the search using only an external Solr indexing service. However, everything was still indexed in the portal_catalog as usual, so there was no gain in terms of ZODB size or indexing speed compared to a vanilla Plone site. Querying the Solr instance was of course extremely efficient, which we verified using a JMeter-based benchmark later on. We wanted to experiment with replacing some indexes from portal_catalog with Solr to see if we could gain any benefits in ZODB size or indexing speed.

As anyone who is at least a bit familiar with portal_catalog will know, replacing the whole of it can be difficult because of special-purpose indexes such as ExtendedPathIndex, which Plone relies upon heavily. So we decided to see if we could replace the “easier” indexes with Solr and keep the rest in portal_catalog. This would mean that we would need to merge results from both catalogs before returning them to the user. We did this by replacing the searchResults method in ZCatalog.Catalog.
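
The replacement looked roughly like the sketch below. This is a simplification of what we did at the sprint; the Solr query and result merging helpers are hypothetical placeholders for our actual (unoptimized) code.

from Products.ZCatalog.Catalog import Catalog

_zcatalog_searchResults = Catalog.searchResults

def searchResults(self, REQUEST=None, used=None, **kw):
    # Query the indexes that still live in the ZCatalog as usual...
    local_results = _zcatalog_searchResults(self, REQUEST, used, **kw)
    # ...query Solr for the externalized indexes (hypothetical helper)...
    solr_results = query_solr(kw)
    # ...and merge the two result sets before returning them
    # (hypothetical helper).
    return merge_results(local_results, solr_results)

Catalog.searchResults = searchResults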

To test our implementation we generated 20,000 Document objects in each of two Plone instances, filled them with random content (more on this later) and compared ZODB size, indexing time and query speed. The generated objects resulted in roughly 100 MB worth of data and the size difference was about 8% in favor of using Solr. Since we didn’t test this further with different data sets, I wouldn’t draw any conclusions based on this except to note the (obvious) fact that externalizing the portal_catalog makes it possible to reduce the size of the ZODB to some degree. I know that some people use a separate ZODB mount for their catalogs, so using an external catalog may be a good solution in some cases. The indexing times didn’t differ much, but they slightly favored Solr. Querying our hybrid ZCatalog/Solr index turned out to be much slower than either ZCatalog or Solr by themselves 🙂 I’m sure this was because of the unoptimized merging code in our searchResults replacement.

In the end, I think the approach Enfold Systems originally took is the correct one for near-term projects. Querying Solr is very fast, and indexing objects in both the portal_catalog and an external Solr instance doesn’t add much overhead. If you need a customized search interface for your project with better-than-portal_catalog performance, you should check Solr out. The guys at Enfold Systems promised to put their code in the Collective for everybody to use, including our buildout.

zc.buildout improvement

Godefroid Chapelle had a proposal to improve zc.buildout so that you can use buildout to get information about the recipes it uses. After discussing the matter with Godefroid and Tarek, and a quick IRC consultation with Jim Fulton, we decided to prototype a new buildout command — describe — that returns information about a given recipe. Jim Fulton expressed his desire to keep recipes as simple as possible, so the describe command simply inspects all the entry points in a recipe egg and prints the docstrings of the recipe classes. If the functionality is merged into mainline buildout, recipe authors should consider putting a description of the recipe and its available options in the docstrings (something we currently see on the PyPI pages of well-disciplined recipes).

The code is in an svn branch available at http://svn.zope.org/zc.buildout/branches/help-api/. The following examples are shamelessly ripped from Tarek’s blog.


$ bin/buildout describe my.recipes
my.recipes
    The coolest recipe on Earth.
    Ever.

Multiple entry point support


$ bin/buildout describe my.recipes:default my.recipes:second
my.recipes:default
    The coolest recipe on Earth.
    Ever.
my.recipes:second
    No description available

Random text generation with context-free grammars

The alternative indexing topic required us to generate some random content in our test sites, and both Tarek and I found doing this quite interesting in its own right. After the other work was finished we started playing with the idea of creating a library for generating random text based on context-free grammars. You can read Tarek’s post on the library for more information. The end result was that we created a project at http://repo.or.cz/w/gibberis.ch.git called Gibberisch, which currently contains some random text modules and a Grok interface called Bullschit 🙂

I worked with Ethan Jucovy on the Grok interface, which was great fun. Since this was our last-day project there were really no serious goals. We just wanted to play with Grok and ended up building a RESTful interface for building up a grammar and then generating random content out of it. If you’re working on a RESTful implementation I can recommend the RestTest add-on for Firefox, it’s a real time saver!

Basically, Bullschit models the grammar using Zope containers so that you can have multiple different grammars in one application. Each grammar consists of sections that contain parts of sentences (in the context-free grammar) called Schnippets. You use the basic HTTP verbs POST, PUT, GET and DELETE to maintain the grammar and generate the random text.
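
The underlying idea is easy to illustrate with a generic sketch. This is not Bullschit’s actual data model or API, just the context-free-grammar technique it builds on:

import random

# A tiny context-free grammar: each non-terminal maps to a list of
# possible productions, and anything not in the mapping is a terminal.
GRAMMAR = {
    'sentence': [['subject', 'verb', 'object']],
    'subject': [['The architect'], ['Our stakeholder']],
    'verb': [['leverages'], ['synergizes']],
    'object': [['the paradigm'], ['a scalable workflow']],
}

def generate(symbol='sentence'):
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return ' '.join(generate(part) for part in production)

print generate()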

For our presentation we hooked in the S5 slide show template to produce endless slides of total gibberisch. You can have even more fun by using the OS X speech synthesizer (or any other, for that matter) to read your presentation aloud! Here’s an example of a slide generated with Bullschit and S5.

Presentation with Bullschit & S5

If you’re interested in giving it a go, you can get the code using git.


$ git clone git://repo.or.cz/gibberis.ch.git

For those interested in Git, don’t miss the recent 1.5.4 release!

 

Posted by on February 3, 2008 in plone, zope

 

Improved zc.buildout recipes with ZopeSkel

Today I worked with Tarek Ziadé on ZopeSkel. Tarek concentrated on refactoring the ZopeSkel layout to put each template in its own module and wrote doctests for all available templates. Go Tarek! The test runner actually runs tests in two layers: first testing the output of the generated items and then, if the generated items contain tests themselves, running those as well.

I concentrated on improving the template for creating new zc.buildout recipes. Many useful recipes suffer from lacking documentation and an unappealing front page on PyPI. I refactored the template to include a common set of documentation files, such as CHANGES.txt, README.txt, CONTRIBUTORS.txt etc., and added code that puts all those documents nicely together to produce a serious-looking reST document that looks good on PyPI. So now it’s up to the recipe author to just fill in those files accordingly.
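
The pattern the generated setup.py uses is roughly the following (an illustrative sketch, not the template’s exact code):

import os

def read(name):
    # Read a documentation file relative to setup.py.
    return open(os.path.join(os.path.dirname(__file__), name)).read()

# Stitch the documentation files into a single reStructuredText
# long_description that renders nicely on PyPI.
long_description = '\n\n'.join([
    read('README.txt'),
    read('CHANGES.txt'),
    read('CONTRIBUTORS.txt'),
])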

To help recipe authors, and especially people new to zc.buildout, I also added comments to both the documentation files and the code to help with implementing the recipe and especially with documenting it so that other people are able to use the recipe in their own buildouts. To me, one of the most important parts of a recipe’s documentation is the list of available options and their semantics. Looking at the PyPI pages for zc.buildout and zc.recipe.egg you can easily get information about the component. I’ve also tried to do the same with my own recipes (hexagonit.recipe.cmmi, hexagonit.recipe.download). The template provides a stub for documenting the options in the README.txt file that authors can fill in.

I also created a minimal doctest for the buildout. While only a skeleton, the test actually runs a buildout using the recipe, so you can run the test case for the recipe right after ZopeSkel has finished generating it. This should help recipe authors get started with testing the recipe while they implement it.

In addition, I updated the trove classifiers to appropriate values for a buildout recipe and added support for automatically adding the trove classifier for the chosen license to the setup.py file. So now, when paster asks for a license for the recipe and you answer, for example, ZPL, you get ‘License :: OSI Approved :: Zope Public License’ in your setup.py automatically. This code actually lives in zopeskel.base and you can easily re-use it in the other ZopeSkel templates. Just take a look at how the recipe template uses it.
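
Conceptually the mapping is just a lookup from the license answer to the corresponding trove classifier, roughly like this (an illustrative sketch; see zopeskel.base for the real implementation):

# A few example mappings; only the ZPL entry is taken from the text
# above, the rest are standard trove classifier strings.
LICENSE_CLASSIFIERS = {
    'ZPL': 'License :: OSI Approved :: Zope Public License',
    'GPL': 'License :: OSI Approved :: GNU General Public License (GPL)',
    'MIT': 'License :: OSI Approved :: MIT License',
}

def license_classifier(answer):
    # Return the trove classifier for the given license answer, if known.
    return LICENSE_CLASSIFIERS.get(answer.strip().upper())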

If you haven’t used ZopeSkel before, give it a try!

$ easy_install ZopeSkel
$ paster create --list-templates
$ paster create -t recipe collective.recipe.foobar

If you want to try the recent changes, you need to get ZopeSkel from the collective.


http://svn.plone.org/svn/collective/ZopeSkel/trunk/

There’s been lots of interest in ZopeSkel here at the Snowsprint so expect to have cool new templates there soon!

Update: 25.01.2008

ZopeSkel 1.5 has been released and contains the latest changes.

 

Posted by on January 22, 2008 in plone, software engineering, zope

 

Orderable formlib form fields

zope.formlib is the form framework in Zope 3 that makes it easy to generate browser forms from Zope 3 schemas and perform validation on user input. This is of course something we’ve come to expect from existing tools like Archetypes schemas and CMFFormController. The good news about formlib is that you can already use it in Plone (and we do so extensively in the Plone 3 version of PrimaGIS).

One of the advantages of formlib is that you can easily take multiple schemas, throw them into a single form, select the fields that you want to include, and have formlib automatically handle the different schemas by adapting your content object accordingly when saving the data. Although formlib matches up very nicely against Archetypes-generated forms (if you ignore the small number of fields/widgets available for formlib compared to AT), there is one feature in Archetypes that does not exist in formlib: field reordering.
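
For example, with plain zope.formlib you can combine fields from several schemas into one form like this (IContact and IAddress are made-up example interfaces):

from zope.interface import Interface
from zope.schema import TextLine
from zope.formlib import form

class IContact(Interface):
    name = TextLine(title=u"Name")
    email = TextLine(title=u"E-mail")

class IAddress(Interface):
    street = TextLine(title=u"Street")
    city = TextLine(title=u"City")

# Fields from both schemas end up in a single form; when the form is
# saved, formlib adapts the content object to each schema separately.
form_fields = form.Fields(IContact, IAddress).select(
    'name', 'email', 'city')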

In Archetypes, you can take an Archetypes.Schema.Schema instance and reorder fields programmatically using the Schema.moveField() method, e.g.


>>> from Products.Archetypes.atapi import Schema, StringField
>>> schema = Schema((StringField('a'),
...                  StringField('b'),
...                  StringField('c')))
>>> schema.keys()
['a', 'b', 'c']
>>> schema.moveField('c', before='a')
>>> schema.keys()
['c', 'a', 'b']
>>> schema.moveField('a', pos='bottom')
>>> schema.keys()
['c', 'b', 'a']

Moving fields around is usually necessary when you’re (re)using an existing schema defined somewhere else and wish to modify it for your own use. Having to define a new schema (by copying code) simply to get the form to display fields in a different order would feel like a waste of resources, so having the ability (and a nice API) to modify existing schemas is useful.

For this reason I implemented an enhanced version of the zope.formlib.form.Fields class that supports reordering formlib fields using an API almost identical to the one in Archetypes. The package is called hexagonit.form and is available from the Cheeseshop.

To use the enhanced version, you simply use hexagonit.form.orderable.OrderableFields in place of zope.formlib.form.Fields in your code. Below is a dummy example demonstrating its use.

We first need to declare a simple schema for which the form will be generated.


>>> from zope.interface import Interface
>>> from zope.schema import TextLine, Bool, Int

>>> class ISomeSchema(Interface):
...     text = TextLine(title=u"text field")
...     boolean = Bool(title=u"boolean field")
...     integer = Int(title=u"integer field")

Now that we have a schema, we can generate the form fields using hexagonit.form.


>>> from hexagonit.form.orderable import OrderableFields
>>> form_fields = OrderableFields(ISomeSchema)

The form_fields variable now contains your normal formlib fields with the additional moveField method that allows reordering the fields on the fly.


>>> [field.__name__ for field in form_fields]
['text', 'boolean', 'integer']

>>> form_fields.moveField("boolean", direction="up")
>>> [field.__name__ for field in form_fields]
['boolean', 'text', 'integer']

>>> form_fields.moveField("boolean", position=2)
>>> [field.__name__ for field in form_fields]
['integer', 'text', 'boolean']

>>> form_fields.moveField('boolean', before='integer')
>>> [field.__name__ for field in form_fields]
['boolean', 'integer', 'text']

The moveField method allows reordering the form fields in a variety of ways using the different keyword parameters:

  • direction parameter with values “up” and “down” for changing the position of the field relative to its current position
  • position parameter with values “first” and “last” (or alternatively “top” and “bottom” ) or using absolute positions with integer values (first field at position 0) to place the field in a specific position
  • after and before parameters to place the field in a position relative to another field.

The doctests in the package describe the functionality of the moveField method in full detail. An actual form implementation would look something like this in Plone:


from Products.Five.formlib import formbase
from hexagonit.form.orderable import OrderableFields
from somewhere import IMySchema, MyCustomWidget

class MyAddForm(formbase.AddFormBase):
    # Instantiate the form fields
    form_fields = OrderableFields(IMySchema)

    # All normal functionality of zope.formlib.form.Fields is
    # available, such as [field].custom_widget, .omit(), .select() etc.
    form_fields['somefield'].custom_widget = MyCustomWidget

    # After setting up the fields you can reorder them according
    # to your needs
    form_fields.moveField('somefield', position='last')
    form_fields.moveField('otherfield', direction='up')

    # Rest of form implementation follows..


Installation

The easiest way to install and try hexagonit.form is to use easy_install:


$ easy_install hexagonit.form

You can also manually download the egg or the source tarball from the Cheeseshop page.

 

Posted by on February 15, 2007 in plone, zope

 

PrimaGIS in Plone 3.0

Up to (and including) version 0.6 PrimaGIS has been a traditional Archetypes based project. Almost all map components (maps, layers, symbolizers, etc) were modelled and implemented as AT content types.

Having all the components be full-blown AT content types made development easy up to a point, but it also made the objects unnecessarily heavy. Moreover, most map components are not content-like by nature and thus, for example, having them all support Dublin Core and be indexed in the portal catalog is unnecessary.

The next major version of PrimaGIS is based on Zope 3 components.

Using the Component Architecture

The Zope 3 Component Architecture made it relatively easy to remodel the map components as lightweight domain objects. In most cases we are now able to use objects from the Python Cartographic Library (PCL) directly, whereas earlier versions needed to define wrappers around these PCL objects. One reason for this is that PCL uses zope.interface internally, so useful interfaces are already defined and available for use.

In the Plone 3.0 version, map renderers, spatial data stores and feature symbolizers are registered as utilities in the Component Architecture using ZCML. The components are registered using custom ZCML directives defined under the http://namespaces.gispython.org/gis namespace. For details on the custom directive implementation you can refer to the metaconfigure.py and metadirectives.py files in the ZCO codebase.
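
A directive handler in metaconfigure.py typically boils down to registering a utility. The sketch below shows the general shape only; the handler’s name and arguments are illustrative, and the real implementation lives in the ZCO codebase.

from zope.component.zcml import utility
from cartography.styles.interfaces import IMapRenderer

def renderer(_context, factory, incoming, fontset):
    # The ZCML machinery has already resolved 'factory' from a dotted
    # name; instantiate it and register the result as an unnamed
    # utility providing IMapRenderer.
    component = factory(incoming=incoming, fontset=fontset)
    utility(_context, provides=IMapRenderer, component=component)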

Map renderer

A map renderer is a component that is responsible for rendering the final map image given a collection of map layers (and some metadata). The current map renderer in PrimaGIS is based on the MapServer project. It is registered using the following ZCML code:



  
  <!-- The directive's opening tag was stripped by the page markup;
       only its attributes survive. -->
      incoming="/tmp"
      fontset="/tmp/pg2/spatialdata/fonts/fontset.txt"
     />


which registers the MapServer based renderer as an unnamed utility providing cartography.styles.interfaces.IMapRenderer.

In the future we can also support other map renderers and swap them transparently by registering them instead of the MapServer renderer — without having to modify any existing map setups.

To acquire the map renderer in code we can now simply do:


>>> from cartography.styles.interfaces import IMapRenderer
>>> from zope.component import getUtility
>>> renderer = getUtility(IMapRenderer)

Spatial data stores

Data stores are components that provide the spatial data for map layers. Data stores can be divided into two categories: feature stores that provide vector data and raster stores that provide raster data.

The data stores are registered using custom ZCML directives. Each type of data store has its own parameters, but all are registered as named utilities providing cartography.data.interfaces.IDataStore. Below is an example of a Web Map Service (WMS) data store registered in ZCML.



<!-- The directive's opening tag was stripped by the page markup;
     only its attributes survive. -->
    name="NASA Jet Propulsion Laboratory WMS"
    url="http://wms.jpl.nasa.gov/wms.cgi"
    version="1.1.1"
    incoming="/tmp"
    />

For a more comprehensive set of examples see the datastores.zcml.dist file in the ZCO codebase. Registered data stores can now be easily acquired in code:


>>> from cartography.data.interfaces import IDataStore
>>> from zope.component import getUtility
>>> datastore = getUtility(IDataStore, "Name of data store")

Symbolizers

Feature symbolizers determine how selected spatial features are rendered on the map and are similar in concept to CSS rules. The symbolizers are likewise defined and registered using ZCML. The four types of symbolizer (point, line, polygon and text) are each registered using their respective ZCML directives.

The symbolizers get registered as named utilities providing cartography.styles.interfaces.ISLDSymbolizer. For a comprehensive set of examples, including line symbolizers, see the symbolizers.zcml.dist file in the ZCO codebase.

Maps and Layers

Maps are composite objects that contain one or more layers. Each layer draws data from a spatial data source and determines the styling (using rules and symbolizers) applied to its spatial features. For maps and layers there is a benefit in modeling them as content types, for example to be able to apply workflow to maps or individual layers, or to allow the layers to be managed using the default folder management methods.

For this reason, primagis.map.Map and primagis.layer.Layer are Archetypes-derived content types. However, the AT schema mechanism is not used to manage the configuration of the components themselves; it is used only to manage the content-like attributes such as Dublin Core metadata. This gives us a nice separation of concerns between the content-related attributes and the actual mapping attributes.

The map configuration forms are implemented using Zope 3 schemas and formlib. There are some custom schema fields and widgets that make it easier to manage the mapping-specific attributes, such as bounding boxes. KSS has also been used to make the editing screens more user-friendly. In the future KSS might be used in the map view as well.

Web Map Service (WMS)

The Web Map Service support makes it possible to share and re-use the imagery produced by PrimaGIS in other WMS-compliant clients such as uDig, OpenLayers, and PrimaGIS itself. OpenLayers is of special interest to PrimaGIS since it allows us to provide an alternative UI for PrimaGIS itself.

OpenLayers uses a tiled approach where the map image is put together by tiling multiple smaller images requested from the WMS server. It also provides a nice Google Maps-like panning mode previously not available in PrimaGIS.

The WMS support is implemented as a Zope 3 view registered as wms for primagis.interfaces.IMap. This means that you can just point your WMS client to a URL like http://domain.tld/path/to/primagis/@@wms.
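
For example, a standard WMS 1.1.1 GetMap request against that view would look roughly like the following (the layer name and bounding box here are made up):

http://domain.tld/path/to/primagis/@@wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=roads&STYLES=&SRS=EPSG:4326&BBOX=-180,-90,180,90&WIDTH=512&HEIGHT=256&FORMAT=image/png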

For an efficient WMS implementation we will need to be able to cache the rendered tiles, possibly using an existing solution like TileCache.

Future work

Below are some features (in no particular order) that I would like to see in PrimaGIS.

  • Persistent local versions of data stores and symbolizers
  • Management UI for the local data stores and symbolizers
  • Cached WMS requests
  • GeoRSS support
  • Better OpenLayers integration / configuration support
  • Spatial indexing
  • ZODB datastores
 

Posted by on February 12, 2007 in plone, primagis, zope

 

Snowsprint 2007

The fourth (and my first) Snowsprint was held in Bregenzerwald, Austria during 27.1. – 3.2.2007. Altogether 56 developers from 14 different countries attended the sprint, working on a variety of topics including calendaring, multimedia, KSS, GIS, REST, TextMate extensions, and caching.

The Gasthaus Hirschen, where the sprint was held, proved to be everything one could hope for, and the great common room was just the right environment for a week of intensive brainstorming and coding.

My goals for the sprint were to learn more about the upcoming Plone 3.0 and to continue the work on the next major version of PrimaGIS. In earlier sprints in Dublin and Seattle we had already refactored the Cartographic Objects for Zope (ZCO) and PrimaGIS using the Component Architecture of Zope 3, and now it was time to work on the Plone UI for PrimaGIS.

I had decided to use Zope 3 technologies as much as I could, which meant I would be using Zope schemas and formlib for all forms in PrimaGIS. I also wanted to learn how to implement custom schema fields and widgets with formlib. This all turned out to work quite well, although the Zope 2 publisher and security mechanism required some acquisition trickery to make everything play nicely together.

One exciting new feature of Plone 3.0 is the KSS (Kinetic Style Sheets) framework (found in the plone.kss and plone.app.kss packages), which makes it extremely simple to do cool AJAX-style programming without having to touch Javascript at all. I took part in the KSS tutorial given by Godefroid Chapelle and Balázs Reé, which helped me get started with KSS really quickly. This turned out to be especially useful when implementing the custom formlib widgets.

In addition to the UI work I also worked on an experimental Web Map Service (WMS) server implementation for PrimaGIS. In Zope 3 terms, this simply meant that I needed to implement a view for the primagis.map.Map object that implements the WMS specification. The immediate benefit of implementing WMS server support is that PrimaGIS maps can then be reused by any standards-compliant mapping client, totally independent of Zope and Plone.

For PrimaGIS itself the WMS support means that, for example, it’s now possible to chain multiple PrimaGIS maps together so that one PrimaGIS instance is able to use the data provided by another. Another cool thing is that we can now provide alternate UIs for PrimaGIS maps. As an experiment I integrated the OpenLayers Javascript UI into PrimaGIS, which worked right out of the box. All this code is still very experimental and in the future we will need to push most of it into OWSLib and have PrimaGIS use that to implement WMS support.

I will post more detailed entries on how the Plone 3 version of PrimaGIS works and how it differs from the current version. For now you can check out the code at the following branches:

http://svn.gispython.org/svn/zope/ZCO/branches/zco3

http://svn.gispython.org/svn/zope/PrimaGIS/branches/primagis-plone-3.0

or alternatively use the primagis.buildout system to build a development instance (using the --develop switch) which contains the latest code.

Many thanks to Lovely Systems for hosting a great sprint. See you next year!

 

Posted by on February 6, 2007 in plone, primagis, zope