Automating the Generation of Python Bindings in QGIS

If you are a PyQGIS developer, you have probably already stumbled upon a situation where you needed to look up the signature of a QGIS function and had to dive into the C++ documentation or source code. This is not the friendliest thing if you are not a C++ developer…

Fortunately, @timlinux developed a tool for generating documentation for the Python API, and thanks to the great work of @RouzaudDenis and @_mkuhn, it is now possible to generate the sip files automatically from the header files. Before, the sip file had to be created manually by the developer, which meant it was subject to human error (for instance, forgetting to port a function).

To support this automated generation for the entire source code, it is necessary to annotate the headers with the relevant SIP annotations. As an example, I may not want to port the dataItem_t pointer to Python, in which case I would annotate the header with SIP_SKIP:

typedef QgsDataItem *dataItem_t( QString, QgsDataItem * ) SIP_SKIP;

A good place to start, before adding automated SIP generation to headers, is to read the SIP Bindings section of the QGIS coding standards, or even to have a look at qgis.h, where all these annotations (macros) are defined.

The sip files which are currently not generated automatically are listed in autosip_blacklist.sh, so if you want to automate a sip file, the first thing to do is remove it from this list. Then you can run the sipify_all.sh script, which scans all the files that are not blacklisted and generates their sip files, including a new one for the file you just removed from the blacklist. If you compare the new file with the old one, in most cases the function signatures will not have changed, and you do not need to do anything. If you do find differences in the signatures, it is because there are special instructions between slashes, like /Factory/, which you need to support. To do that, add the appropriate annotations to the header file; in that case, do not forget to include "qgis.h", which defines the macros.

#include "qgis.h"

When you finish annotating the file, run the script again and check whether the old and new sip files match. If they do, you have supported the automated generation of that sip file; otherwise, go back to the header and check what is missing.
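
Put together, one iteration of this loop could look roughly like the commands below. This is only a sketch: the qgsdataitem file name and the script and sip file paths are placeholders, so check the actual locations in the repository.

# remove the sip file from the blacklist (file name is only an example)
sed -i '/qgsdataitem.sip/d' autosip_blacklist.sh

# regenerate the sip files for everything that is not blacklisted
./sipify_all.sh

# compare the freshly generated sip file with the committed one
git diff python/core/qgsdataitem.sip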

There are still many sip files left to be automated, so we encourage you to contribute to QGIS with PRs on this matter! 🙂

Easing the Creation of Metadata in QGIS

In a previous blog post, I presented QGIS enhancement #91, which aims at providing the infrastructure in QGIS to author, consume and share standards-based metadata (e.g.: ISO).

In this post I would like to focus on a specific WP (work package), which aims at easing the task of authoring metadata. Let's face it: this is the long face many people put on when they are told they need to create metadata.

minion

We would like to at least reduce this effort by letting users create a metadata template, which would then be reused across the project, enabling the automated population of metadata. With the repetitive bits out of the way, they could focus on the fun parts: creating layer-specific metadata and, of course, working with the data.

More specifically, this WP covers support for two events:

  • Filling of the template, which would then be associated with the project; this could happen in a tab of the project settings.
  • Automated population of metadata for a layer, based on this template; this could be triggered through the layer properties, or when the user loads/creates a layer.

The mockups below illustrate these application scenarios.

qgis_mockup1

Creation of the Metadata Template

qgis_mockup2

Application of the Metadata Template

This template would be based on the QGIS internal schema developed in WP1. The fields presented in these mockups are only examples, based on the Dublin Core schema.

One interesting enhancement would be to support the import/export of this template, so that it could be shared across an organization. A user could also have multiple templates, according to the layers they are working on (see image below). Both scenarios would require detaching the template from the project file and storing it in an external format.

qgis_mockup3

Support to External Templates

We envision this WP to deliver the following:

  • UI and handlers for creating the template.
  • UI and handlers for applying the template.
  • UI and handlers for exporting/importing the template (optional).

I will submit a proposal for these developments to the QGIS Grant Applications Programme and will be looking forward to having the support of the community to ease the creation of metadata in QGIS 🙂

Welcoming the QGIS Metadata Store

Support for standards-based metadata (e.g. ISO) has been greatly missed in QGIS. We would like that to no longer be the case in QGIS 3.0, with this enhancement proposal.

91

This blog post focuses on WP3, "QGIS Metadata Store", which will introduce an external physical format for storing QGIS metadata. The goal is to support portability, enabling users to share their layer metadata, even in offline scenarios. This WP will build directly on the outputs of WP1, which will define an "internal metadata schema", and WP2, the "QGIS metadata API", which will encode/decode between the internal schema and the supported schemas.

The final goal is for QGIS to support two types of metadata stores: remote and local. In this WP we will focus on local stores only.

qgis_diagram1

In the diagram below we depict the inheritance model for metadata stores, where an abstract metadata store has polymorphic behavior according to the particular data format. For instance, in the case of a PostgreSQL DB, the "save" method will create a table in the database, whereas in the case of a Shapefile, it would create an XML file.

stores

Some formats, such as text files, can be more limited than others. As an example, searches in text files can be quite slow. For that reason, we will create a "prime" format, the "QGIS metadata store", which can accompany more restrictive formats. The prime format will be an SQLite database, because it is lightweight and well-known within the QGIS community.

As the goal is to support all these different formats in the future, we will design an infrastructure to accommodate that, but in this first iteration we will focus on the simple use cases of creating an XML file and an SQLite data store.
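
Purely as an illustration of what the SQLite-backed prime store could hold, something along the lines of the snippet below would already be usable. Note that the database name, table name and columns are my own assumptions, not the actual WP3 schema.

# illustrative only: the table and columns are assumptions, not the WP3 schema
sqlite3 qgis_metadata.db "
CREATE TABLE IF NOT EXISTS layer_metadata (
  layer_id  TEXT PRIMARY KEY,
  title     TEXT,
  abstract  TEXT,
  keywords  TEXT
);"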

The metadata contents will be provided by the metadata API. In this WP we will implement format translation, but not schema translation.

Along with these developments we will implement a user interface that allows the user to configure serialization/deserialization behavior, e.g. in which format metadata should be written, and where.

The QGIS metadata store will be kept in sync with any changes we apply to the metadata: the moment we export metadata to XML, those changes will be written to the XML file.

Metadata search will also be polymorphic, according to the data format. In this iteration, as a proof of concept, we will implement a simple text search, which will enable users to query their metadata.
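
In the SQLite case, even a naive text search is straightforward. Again, this uses the hypothetical schema sketched above:

# find layers whose title or abstract mention "roads" (hypothetical schema)
sqlite3 qgis_metadata.db \
  "SELECT layer_id, title FROM layer_metadata
   WHERE title LIKE '%roads%' OR abstract LIKE '%roads%';"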

We envision this WP to deliver the following:

  • An infrastructure to accommodate the external storage of metadata in QGIS, fully implemented for the use case of XML files.
  • Support for searching the metadata store.
  • UI for saving/loading metadata.

I will submit a proposal for these developments to the QGIS Grant Applications Programme and will be looking forward to having the support of the community to welcome the QGIS metadata store 🙂

Go On Board with a GeoNetwork Container

GeoNetwork is a FOSS catalog for geospatial information. It is used around the world by organizations such as FAO, the Dutch Kadaster or Eurostat, just to mention a few.

Like any software service, it may not be trivial to install and configure, which may put people off giving it a try. This could change with Docker.

gn-docker

Docker, which could be defined in a nutshell as infrastructure as code, automates the deployment of Linux applications inside software containers. It relies on operating-system-level virtualization in the Linux kernel (originally through LXC). In less than four years it has seen massive adoption by the software community, and it has already been taken to production in many use cases.

Docker Hub is a massive repository of ready-to-use images. You can find anything from web servers to databases, or even entire operating systems. With a docker pull at the tip of your fingers, you can have them running on your computer in a matter of minutes (depending on your internet connection).

Anyone can upload their Docker images to Docker Hub, but some images are released "officially". The sources of official images live in the Docker repositories, and they are considered good to use (and reuse) because they implement Docker best practices, so their code can be seen as an example. They are also thoroughly documented according to certain standards, and they go through a security audit.

Although there are a couple of GeoNetwork images on Docker Hub, there is no official image yet, so I decided to create one. While the image goes through the approval process, I decided to publish it anyway, so that anyone can benefit from it in the meantime.

These images provide the two latest releases of GeoNetwork (3.0.5 and 3.2.0), as well as the previous release (3.0.4). By default, GeoNetwork runs on a local H2 database, but I created a variant that can use a PostgreSQL database as a backend, running either in a container or on a bare-metal server. This should make it a better fit for production.

You can read more about these and other features, such as setting and persisting the data directory, on the docker hub page.

Once the official images are released I will make an announcement here. But in the meantime, there is no excuse not to start playing with GeoNetwork:

docker pull geocat/geonetwork
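
Once the image is pulled, running it is a one-liner. This is a minimal sketch, assuming the container exposes the web application on port 8080; check the Docker Hub page for the exact ports and environment variables.

docker run --name geonetwork -d -p 8080:8080 geocat/geonetwork
# then point your browser to http://localhost:8080/geonetwork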

gn_shell

gn_container

Have fun with Docker & GeoNetwork! 🙂

Watching a Server through a Container

Lately I have been working a lot with Docker, the new kid on the block in cloud computing, which is winning the hearts of sysadmins as well as developers.

The main idea is to set up a Spatial Data Infrastructure, something that has been at the core of other projects such as Georchestra.

Unfortunately, having something running on a server is normally not a completely smooth experience, and this sets the ground for the need of a monitoring service.

After searching a bit, I found NewRelic, which provides monitoring as a service. I really liked the advanced functionality and the completeness of the dashboards, so it was not hard to convince myself to try it.

NewRelic provides two types of monitoring: application monitoring and server monitoring, which is what I will cover in this post. The server monitor is basically a daemon that runs on the server and collects statistics about various metrics, such as memory usage, CPU usage, bandwidth, etc. But what really caught my eye about this solution was the ability to monitor the Docker daemon and the different containers running within it.

Unfortunately this functionality appears to be broken for docker 1.11 (my current version), but with the help of the NewRelic engineers I was able to apply a workaround.

My next step was to dockerize this solution. After all, wouldn't it be great to spin up another container in my SDI that would monitor the other containers AND the server?

The bad news is that the existing images of NewRelic's server monitor on Docker Hub do not implement the workaround. So I went and implemented my own image.

You can pull this image from the repository, with:

docker pull doublebyte/newrelic_sysmond

Then you can run it with:

docker run -d \
  --privileged=true --name nrsysmond \
  --pid=host \
  --net=host \
  -v /sys:/sys \
  -v /dev:/dev \
  --env="NRSYSMOND_license_key=REPLACE_BY_NEWRELIC_KEY" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/log:/var/log:rw \
  newrelic_sysmond

The privileged flag and the bindings to the host directories are necessary because we need to be able to watch the Docker daemon and collect the Docker metrics.

Note that if you also want to collect memory stats from the containers, it is necessary to enable this in the kernel. The procedure is explained in the Docker documentation, but it really comes down to updating the bootloader and restarting. In the case of GRUB, you would need to add this line to /etc/default/grub:

GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

Then you need to update grub with:

update-grub

After a restart of the server, the docker memory statistics should be present on the server dashboard:
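
Before looking at the NewRelic dashboard, you can also sanity-check locally that per-container memory figures are now being reported, for instance with:

# one-off snapshot of CPU and memory usage for all running containers
docker stats --no-stream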

newrelic

Spatial Data Mining

Social media streams may generate massive clouds of geolocated points, but how can we extract useful information from these, sometimes huge, datasets? I think machine learning and GIS can be helpful here.

My PechaKucha talk at DataBeers : “Visualizing Geolocated Tweets: a Spatial Data Mining Approach”.

JSON to GeoJSON with jq

A lot of people and institutions have already made the jump to providing data in JSON, which is great, since it is an interoperable standard and a semi-structured form of data. However, when it comes to geographic data, standards don't seem to be so common. I have seen many different ways of encoding geospatial information within JSON, normally involving an array of coordinates, with or without named fields for lat and long. Rarely is there any CRS associated with this data (which could be OK, in case it uses WGS84), or any mention of the geometry type.

This information is more or less useless without some pre-processing to convert it into a "GIS-friendly" format that we could use in QGIS, GeoServer, or even R.

Since we are dealing with JSON, the natural thing is to convert it into GeoJSON, a structured format for geographic data. And the perfect tool for doing this is jq, a tool that I mentioned in a previous post. To make it simpler to understand, I will explain what I did with a specific JSON dataset, but with some knowledge of jq (and GeoJSON), you could apply it to literally any JSON dataset with geographic information in it.

My test dataset was the description of a set of roads from the city of Zaragoza:

http://www.zaragoza.es/trafico/estado/tramoswgs84.json

The description of the dataset says that it is in "Google" format, which one could erroneously interpret as Spherical Mercator, but the name of the file suggests WGS84, and a quick look at the coordinates confirms that. This is literally a list of tracks, each one containing a list of coordinates that define the geometry. Let us look at an example:

{
  "points": [                                                                                                                                        
    {                                                                                                                                                
      "lon": -0.8437921499884775,                                                                                                                    
      "lat": 41.6710232246183
    },                                                                                                                                               
    {                                                                                                                                                
      "lon": -0.8439686263507937,                                                                                                                    
      "lat": 41.67098172145761
    },                                                                                                                                               
    {                                                                                                                                                
      "lon": -0.8442926556112658,                                                                                                                    
      "lat": 41.670866465890654
    },                                                                                                                                               
    {                                                                                                                                                
      "lon": -0.8448464412455035,                                                                                                                    
      "lat": 41.67062949885585
    },                                                                                                                                               
    {                                                                                                                                                
      "lon": -0.8453763659750164,                                                                                                                    
      "lat": 41.67040130061031
    },
    {
      "lon": -0.8474617762602581,
      "lat": 41.669528132440355
    },
    {
      "lon": -0.8535340031154578,
      "lat": 41.66696540067222
    }
  ],
  "name": "AVDA. CATALUÑA 301 - RIO MATARRAÑA -> AVDA. CATALUÑA 226",
  "id": 5
}

So the task here would be to convert this into a GeoJSON geometry (a linestring). For instance:

  { "type": "LineString",
    "coordinates": [ [100.0, 0.0], [101.0, 1.0] ]
    }

In jq, we want to loop through the array of roads and parse the lat/lon coordinates of each road object. These coordinates are themselves another array. If we do something like this:

cat tramoswgs84.json | jq  '.tramos[2]| .points[].lon,.points[].lat'

We are asking for the longitude and latitude coordinates of track 2, but since jq evaluates expressions from left to right, it will give us back the array of longitude coordinates and the array of latitude coordinates, not the pairs.

The key thing is to use map, which will run the filter for each element of the array:

cat tramoswgs84.json | jq  '.tramos[2].points| map([.lon,.lat])'

The complete jq syntax for generating one LineString object would be:

cat tramoswgs84.json | jq  -c '.tramos[1]|  {"type": "LineString", "coordinates": .points | map([.lon,.lat])}'

The next step is to create a GeoJSON file containing the entire collection of linestrings. Since we would like to attach attributes to them ("name" and "id"), we should generate a "feature collection" rather than a "geometry collection". The code for generating each feature would be:

cat $1 | jq  -c '.tramos[]| {"type": "Feature","geometry": {"type": "LineString", "coordinates": .points | map([.lon,.lat])},"properties":{"name": .name, "id": .id}}' >> $2

Then we need to do a few text manipulation operations that I could not find a way to perform with jq. Basically, we need to add the opening of the feature collection, commas between the objects, and the closing of the feature collection.

I did these text manipulation tasks with sed, and put everything inside a shell script that transforms the JSON file (in the format I described) directly into a valid GeoJSON. If you are interested, you can get it from github. The resulting file can be fed to QGIS, in order to produce pretty maps like the one below 🙂
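
For reference, the same wrapping can also be done entirely in jq, by slurping the stream of features into an array. This is just an alternative sketch, with the output file name being my own choice:

jq -c '.tramos[] | {"type": "Feature", "geometry": {"type": "LineString", "coordinates": .points | map([.lon,.lat])}, "properties": {"name": .name, "id": .id}}' tramoswgs84.json \
  | jq -s '{"type": "FeatureCollection", "features": .}' > tramos.geojson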

roads_zgz

Data Mining | Machine Learning

Together with a colleague, I have been involved in the "hard" task of drafting a diagram (or "mindmap") that would logically connect some of the "buzz words" around "data science", e.g. artificial intelligence, machine learning, data mining, recommenders. Moreover, we wanted to provide a classification that would organize the different "algorithmic families" into some sort of typology. A hard task, I know, mostly because there are many possible classifications, depending on the approach we take, e.g. by learning method or by task. We ended up not with one diagram, but with two, separating "data mining" and "machine learning" in order to explain them better.

In the "Data mining" diagram, we include a general distinction between "descriptive" and "predictive" data mining, and within these two we follow with subdivisions that end in data mining techniques, which may or may not belong to machine learning (e.g. statistics). At the bottom of the diagram, we represent the generic data mining applications that make use of these techniques. One key difficulty in drafting this diagram is the fact that some techniques can include other techniques, and it is not easy to reflect that in the diagram. For instance, machine learning techniques typically make use of descriptive statistics such as dispersion or central tendency.

Data Mining

In the "Machine learning" diagram we went for a more "scientific" view (less problem oriented), and tried to show how machine learning fits into the broader field of "Artificial Intelligence". Then we took the "learning approach" as a way of classifying ML techniques. At the leaves of this tree, as well as at the leaves of the "Data Mining" tree, there are examples of techniques/algorithms relevant for the specific types; it is not an *exhaustive* list of algorithms, nor does it claim to select the most *important* algorithm (if there is such a thing…); sometimes the criterion for choosing an algorithm is largely *subjective*: because we worked with it or read about it, or even because it was the only example we could find…

Machine Learning

Clearly there is some degree of overlap between the two diagrams. Machine learning is part of data mining, and therefore some algorithmic "families" appear in both diagrams. However, we believe that in this way it becomes easier to describe what "machine learning" is as a scientific discipline, and how it "fits & mixes" within the "wide umbrella" of data mining.

These diagrams were based on a lot of reading (mostly blogs), on our own knowledge and on a lot of discussion. They are not "written in stone", and I don't even know if it is possible to have such a thing for a topic that is so difficult to classify, either because it is evolving so fast or because it is often very "fuzzy". In any case, any (constructive) criticism or comments regarding ways of improving these diagrams, or even just some thoughts, would be greatly appreciated.

Ubuntu 4 Beginners

After installing Ubuntu three times in the past few months, and after having many requests to do it again, I have finally decided to put it all together in a workshop. It is going to be next Saturday, in Barcelona, in my favourite co-working space. And it is "free" as in beer, and as in GNU/Linux 🙂

The “official” announcement will be tomorrow, I think, but you can be the first to read it here 😉

UPDATE: THIS WORKSHOP HAS BEEN POSTPONED!

Ubuntu 4 Beginners

Did you ever think about installing Ubuntu, but never actually had the "courage" to do it alone? Then this workshop is for you.

tux1

In the first part I will introduce the GNU/Linux operating system, explaining some basic concepts and showing some applications.
The second part will focus mainly on the installation process of Ubuntu, and I will install it "live" on a virtual machine.
At the end of the session I can help people who are interested to perform the installation on their own computers. Note that this will be *at their own risk*!

Target Audience:
This workshop targets people with a limited knowledge of *Nix systems, although some proficiency in using computers would be nice.
If you are a proficient *nix user or developer, and are interested in specific parts of the OS (such as the kernel), you may be interested in a more advanced workshop. If you are wondering what a *nix user is, please come: this workshop is for you 🙂

If you have Ubuntu installed on your laptop, or you are planning to install it at the end of the workshop, you may bring it with you. Otherwise, laptops are not required.

ubuntu_banner

Practical Info:
The duration of the workshop is approximately 2 hours (11:30h-13:30h), including a 10-minute break. Note that this is a free workshop, but you do *need to register* in order to attend. Please do so by filling in this form: it should only take 2 minutes.
For practical reasons, I will limit the number of participants to 20, on a "first come, first served" basis.

This workshop is hosted by MOB/Made (Calle Bailen 11, Bajos. 08010 BCN) and all donations collected through the bitcoin wallet below will be given to Made, a non-profit organization.

1GcD6YZLMvV4cNv7WckS22FJKMNdtjQJPE

Bitcoin

Static Linking?

From time to time, I have these moments when I cannot deploy my application properly and decide that I want to link it statically (then I generally give up, because it requires me to link the Qt libraries statically…). But is it really better to prefer static over dynamic linking?

As in so many other cases, it depends on what you want to do. I read that in terms of performance there are trade-offs in both approaches, so in the end it really does not matter that much. From my point of view, the biggest advantage of static linking is the fact that you can ship a single file with your application, removing the risk of "broken" dependencies. That is, in terms of deployment, quite an advantage!

On the other hand, if everybody linked statically, we would literally have "thousands" of libraries "repeated" inside our system, packed inside "huge" binaries. It does not make much sense, does it?

Dynamic libraries are also "cool", because we can (to a certain extent) replace them with newer (improved) versions, without having to recompile our application. That is a huge benefit in terms of "bug fixing" of third-party libraries.
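
A quick way to see which shared libraries an executable will pick up at run time is ldd (the binary name below is just an example). If one of those libraries is missing or incompatible, that is exactly the "broken dependency" scenario that static linking avoids.

# list the shared libraries a dynamically linked binary resolves at run time
ldd ./myapp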

After removing the performance issue, my verdict would be:

  • For myself, I would like to minimize resource consumption by using shared libraries (dynamic linking) as much as possible.
  • For "bullet proof" systems, where users are not experienced in installing software and are likely to "mess up" the system by removing parts of it, I would consider providing statically compiled versions of the software instead. The software will likely be "bigger" (although there are tools to minimize this, such as UPX) and a bit more "hungry" for resources, but it is also the only way to prevent the DLL hell.

Finally, it is important to mention that the type of linking may also be conditioned by licensing issues. For instance, due to the "nature" of the license, GPL libraries would "contaminate" any software statically linked with them.