CSV 2 GeoJSON

Recently I had another challenge, which I believe has the characteristics of a common problem: I have a table with attributes, in CSV format, one of which is geospatial.

CSV is a structured format for storing tabular data (text and numbers), where each row corresponds to a record and each field is separated by a known character (generally a comma). It is probably one of the most common formats for distributing tabular data, not least because it is a standard output from relational databases.

Since people often hand me data in this format, and for a number of reasons it is more convenient for me to use JSON data, I thought it would be handy to have a method for translating CSV into JSON; this was the first milestone of this challenge.

The second milestone of this challenge is that there is some geospatial information within this data, serialized in a non-standard format, and I would like to convert it into a standard JSON format for spatial data; e.g.: GeoJSON. So the second milestone actually has two parts:

  • parse a GeoJSON geometry from the CSV fields
  • pack the geometry and the properties into a GeoJSON feature

To convert CSV (or XML) to JSON, I found this really nice website. It lets you upload a file and save the results into another file, so I could transform this:

TMC,ROADNUMBER,DIR,PROV,CCAA,StartLatitude,StartLongitude,EndLatitude,EndLongitude
E17+02412,A-2,E-90/AP-2/BARCELONA-ZARAGOZA (SOSES),LLEIDA,CATALUNYA,41.5368273,0.4387071,41.5388396,0.4638462

into this:

{
"TMC": "E17+02412",
"ROADNUMBER": "A-2",
"DIR": "E-90/AP-2/BARCELONA-ZARAGOZA (SOSES)",
"PROV": "LLEIDA",
"CCAA": "CATALUNYA",
"StartLatitude": "41.5368273",
"StartLongitude": "0.4387071",
"EndLatitude": "41.5388396",
"EndLongitude": "0.4638462"
}

This gave me a nicely formatted JSON output (the first milestone!), but as you can notice the geometry does not conform to any OGC standard. It is actually a linestring, defined by a start point (StartLongitude, StartLatitude) and an end point (EndLongitude, EndLatitude).

According to the GeoJSON spec, a LineString geometry is defined by an array of coordinate positions.
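The example given in the spec (the same one quoted further down in these posts) looks like this:

{ "type": "LineString",
  "coordinates": [ [100.0, 0.0], [101.0, 1.0] ]
}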

So the goal would be to transform the geometry above into:

"LineString",
"coordinates": [
[0.4387071, 41.5368273], [0.4638462, 41.5388396]
]

Once more, jq comes in really handy for this task.

The JSON can be transformed into an array of features using this syntax:

cat tramos.json | jq -c '[.[] | { type: "Feature", "geometry": {"type": "LineString","coordinates": [ [.StartLongitude, .StartLatitude| tonumber], [ .EndLongitude, .EndLatitude | tonumber] ] }, properties: {tmc: .TMC, roadnumber: .ROADNUMBER, dir: .DIR, prov: .PROV, ccaa: .CCAA}}]' > tramos.geojson

Since the JSON converter parses all the values into strings, it is important to apply a filter (tonumber) to make sure that the coordinates are converted back into numbers.
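If you are curious about what tonumber actually does, a quick one-liner makes the difference visible (the sample value is just one of the longitudes above):

echo '{"StartLongitude":"0.4387071"}' | jq -c '[.StartLongitude, (.StartLongitude | tonumber)]'

This prints ["0.4387071",0.4387071]: the first element is still a string, the second is a proper number. Each element of the resulting tramos.geojson array is then a GeoJSON feature, like this one: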

{
"properties": {
"ccaa": "CATALUNYA",
"prov": "LLEIDA",
"dir": "N-IIA/SOSES/TORRES DE SEGRE/ALCARRàS",
"roadnumber": "A-2",
"tmc": "E17+02413"
},
"geometry": {
"coordinates": [
[
0.4714937,
41.5420936
],
[
0.4891472,
41.5497014
]
],
"type": "LineString"
},
"type": "Feature"
}

Since we are creating an array of features (or “Feature Collection”), to conform with GeoJSON it is important to declare the root element too, by adding this outer element:

{ "type": "FeatureCollection","features": [ ]}

The result should be a valid GeoJSON, that you can view and manipulate in your favourite GIS (for instance QGIS!) 🙂

csv2geojson

JSON to GeoJSON with jq

A lot of people and institutions have already made the jump to providing data in JSON, which is great, since it is an interoperable standard and a semi-structured form of data. However, when it comes to geographic data, standards don’t seem to be so common. I have seen many different ways of encoding geospatial information within JSON, normally involving an array of coordinates, with or without named fields for lat and long. Rarely is there any CRS associated with this data (which could be ok, in the case that it uses WGS84), or any mention of the geometry type.

This information is more or less useless, without some pre-processing to convert it into a “GIS-friendly” format, that we could use in QGIS, GeoServer, or even R.

Since we are dealing with JSON, the natural thing would be to convert it into GeoJSON, a structured format for geographic data. And the perfect tool for doing this is jq, a tool that I mentioned in a previous post. To make it simpler to understand, I will explain what I did with a specific JSON dataset, but with some knowledge of jq (and GeoJSON) you could literally apply it to any JSON dataset with geographic information within it.

My test dataset was the description of a set of roads, from the city of Zaragoza:

http://www.zaragoza.es/trafico/estado/tramoswgs84.json

The description of the dataset says that it is in “Google” format, which one could erroneously interpret as Spherical Mercator, but the name of the file suggests WGS84, and a quick look at the coordinates confirms that too. This is literally a list of tracks, each one containing a list of coordinates that define the geometry. Let us look at an example:

{
  "points": [                                                                                                                                        
    {                                                                                                                                                
      "lon": -0.8437921499884775,                                                                                                                    
      "lat": 41.6710232246183
    },                                                                                                                                               
    {                                                                                                                                                
      "lon": -0.8439686263507937,                                                                                                                    
      "lat": 41.67098172145761
    },                                                                                                                                               
    {                                                                                                                                                
      "lon": -0.8442926556112658,                                                                                                                    
      "lat": 41.670866465890654
    },                                                                                                                                               
    {                                                                                                                                                
      "lon": -0.8448464412455035,                                                                                                                    
      "lat": 41.67062949885585
    },                                                                                                                                               
    {                                                                                                                                                
      "lon": -0.8453763659750164,                                                                                                                    
      "lat": 41.67040130061031
    },
    {
      "lon": -0.8474617762602581,
      "lat": 41.669528132440355
    },
    {
      "lon": -0.8535340031154578,
      "lat": 41.66696540067222
    }
  ],
  "name": "AVDA. CATALUÑA 301 - RIO MATARRAÑA -> AVDA. CATALUÑA 226",
  "id": 5
}
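If you want to poke at the live feed yourself, a single track can be pulled straight out of it with curl and jq (the top-level array is called tramos, as used below):

curl -s 'http://www.zaragoza.es/trafico/estado/tramoswgs84.json' | jq '.tramos[0]'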

So the task here would be to convert this into a GeoJSON geometry (a linestring). For instance:

  { "type": "LineString",
    "coordinates": [ [100.0, 0.0], [101.0, 1.0] ]
    }

In jq, we want to loop through the array of roads and parse the lat/long coordinates of each road object. These coordinates are themselves another array. If we do something like this:

cat tramoswgs84.json | jq  '.tramos[2]| .points[].lon,.points[].lat'

We are asking for the longitude and latitude coordinates of track 2, but since jq evaluates expressions from left to right, it will give us back the array of longitude coordinates and then the array of latitude coordinates, not the pairs.

The key thing is to use map, which will run the filter for each element of the array:

cat tramoswgs84.json | jq  '.tramos[2].points| map([.lon,.lat])'

The complete jq syntax for generating one linestring object, would be:

cat tramoswgs84.json | jq  -c '.tramos[1]|  {"type": "LineString", "coordinates": .points | map([.lon,.lat])}'

The next step would be to create a GeoJSON containing the entire collection of linestrings. Since we would like to attach attributes to them (“name” and “id”), we would rather generate a “feature collection” than a “geometry collection”. The code for generating each feature would be:

cat $1 | jq  -c '.tramos[]| {"type": "Feature","geometry": {"type": "LineString", "coordinates": .points | map([.lon,.lat])},"properties":{"name": .name, "id": .id}}' >> $2

And then we need to do a few text manipulation operations that I could not find a way of performing with jq: basically, we need to add the opening tags for the feature collection, commas between the objects, and then the closing tags for the feature collection.
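A rough sketch of that wrapping step (this is not the actual script from GitHub; it assumes the same $1/$2 arguments as the command above, plus a temporary file of my own choosing):

cat $1 | jq -c '.tramos[] | {"type": "Feature", "geometry": {"type": "LineString", "coordinates": (.points | map([.lon,.lat]))}, "properties": {"name": .name, "id": .id}}' > features.tmp
echo '{ "type": "FeatureCollection", "features": [' > $2
sed '$!s/$/,/' features.tmp >> $2
echo '] }' >> $2
rm features.tmp

The sed expression simply appends a comma to every line except the last one.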

I did these text manipulation tasks with sed, and put everything inside a shell script that transforms a JSON file (in the format that I described) directly into a valid GeoJSON. If you are interested, you can get it from GitHub. The resulting file can be fed to QGIS, in order to produce pretty maps like the one below 🙂

roads_zgz

Piping an API into R: a Data Science Workflow

Inspired by @jeroenhjanssens, author of the Data Science Toolbox, I decided to give a go to one of the most unfriendly data sources: an XML API.
Apart from its rich syntax and query capabilities, I tend to think XML is highly verbose and human-unfriendly, which is quite discouraging if you don’t want to take advantage of all those capabilities. And in my case I didn’t: I just wanted to grab a data stream, in order to be able to build some analysis in R. APIs are generally a pain for data scientists, because they tend to want to have “a look at things” and get a general feeling for the dataset before starting to build code. Normally this is not possible with an API, unless you use one of those high-end drag-and-drop interfaces, which are generally costly. But following this approach I was able to set up a chain of tools that lets me reproduce this agile workflow, where you can get a feel for the dataset in R, without having to write a Python client.

The first step was to pipe the XML output of the query into a file, and that is easy enough to do with curl:

curl -s 'http://someurl.com/Data/Entity.ashx?Action=GetStuff&Par=59&Resolution=250&&token=OxWDsixG6n5sometoken' > out.xml

Now, if you are an XML wiz you can follow a different approach, but I personally feel more comfortable with JSON, so the next step for me was to convert the XML dump into some nice JSON; fortunately there is another free tool for that too: xml2json.

xml2json < out.xml > out.json

Having the JSON, it is possible to query it using jq, a command-line JSON parser that I find really intuitive. With this command, I am able to narrow the dataset down to the fields I am interested in, and pipe the results into another text file. In this case I am skipping all the “headers”, and grabbing an array of elements, which is what I want to analyse.

cat out.json | jq '[.Root.ResultSet.Entity[] | {color: .color, width: .with, average: .average, reference: .reference, Time: .Time}]' > test.json

Now here I could add another step to convert the JSON results into CSV, but R actually has interfaces to JSON, so why not use those to import the data directly. There is more than one package that can do this, and I had some nice results with jsonlite.

library("jsonlite")
data1 <- fromJSON("test.json")

And with these two lines of code, I have a data frame that I can use for running ML algorithms.

df

Data Mining | Machine Learning

Together with a colleague, I have been involved in the “hard” task of drafting a diagram (or a “mindmap”) that would logically connect some of the “buzz words” around “data science”, e.g.: artificial intelligence, machine learning, data mining, recommenders. Moreover, we wanted to provide a classification that would organize the different “algorithmic families” into some sort of typology. A hard task, I know, mostly because there are many possible classifications, depending on the approach we want to take, e.g.: by learning method, by task. We ended up not with one diagram but with two, separating “data mining” and “machine learning”, in order to explain them better.

In the “Data mining” diagram, we include a general distinction between “descriptive” and “predictive” data mining, and within these two we follow with subdivisions that finish in data mining techniques that may or may not belong to machine learning (e.g.: statistics). At the bottom of the diagram, we represent the generic data mining applications that make use of these techniques. One key difficulty in drafting this diagram is the fact that some techniques can include other techniques, and it is not easy to reflect that in the diagram. For instance, machine learning techniques typically make use of descriptive statistics such as dispersion or central tendency.

Data Mining

In the “Machine learning” diagram we went for a more “scientific” view (less problem oriented), and tried to show how machine learning fits into the broader field of “Artificial Intelligence”. Then we took the “learning approach” as a way of classifying ML techniques. At the leaves of this tree, as well as at the leaves of the “Data Mining” tree, there are examples of techniques/algorithms relevant for the specific types; it is not an *extensive* list of algorithms, nor does it claim to select the most *important* algorithm (if there is such a thing…); sometimes the criterion for choosing an algorithm is largely *subjective*: because we worked with or read about it, or even because it was the only example we could find…

Machine Learning

Clearly there is some degree of overlap between the two diagrams. Machine learning is part of Data Mining, and therefore some algorithmic “families” are presented in both diagrams. However we believe that in this way, it becomes easier to describe what “machine learning” is, as a scientific discipline, and how it “fits & mixes”, within the “wide umbrella” of data mining.

These diagrams were based on a lot of reading (mostly blogs), on our own knowledge and on a lot of discussion. They are not “written in stone”, and I don’t even know if it is possible to have such a thing for a topic that is so difficult to classify, either because it is evolving so fast or because it is often very “fuzzy”. In any case, any (constructive) criticism or commentary regarding ways of improving these diagrams, or even just some thoughts, would be greatly appreciated.

Importing Large Spatial Datasets into PostGIS

Is PostgreSQL/PostGIS suitable for Big Data? This is a question I have been asking myself for a while, and I now have the opportunity to test it with a PostgreSQL instance from Amazon Web Services (AWS); after all, if such a service cannot deal with “Big Data”, who can?
I have a dataset with tweets in the city of Barcelona, over a time period. The entire dataset amounts to about 3 million points (82 MB), and I have some subsets: a smaller collection of about 2 million points (55 MB) and another of 300,000 points (8.2 MB).

big_data

We can argue whether or not this dataset is Big Data; but IMHO, a table with more than 1 million records starts to get “heavy” for any relational database. Now, before doing any computation of spatial algorithms (e.g.: convex hull, point in polygon), the very first step is to actually load this dataset into the database. And this is where I started to struggle.

QGIS has a plugin called SPIT, which allows importing a Shapefile into PostGIS. Unfortunately it seems to be extremely slow with any of the three files that I mentioned above (including the smallest one).

spit

Switching to the command line, where I have more control over (and knowledge of) the processes that are running, I tried shp2pgsql, a tool for importing Shapefiles that comes with the PostGIS installation.

On Linux it is possible to convert and import the Shapefile in one go, redirecting the output with a pipe:

shp2pgsql -s <SRID> -c -D -I <path to shapefile> <schema>.<table> | \
psql -d <databasename> -h <hostname> -U <username>

Since this command was really slow (unresponsive?), I decided to do it in two steps, to see where it was hanging (see the sketch after the list):

  • Create an sql file with the insert statements.
  • Load this sql file into the database.
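Something along these lines, with the same placeholders as the one-liner above:

shp2pgsql -s <SRID> -c -D -I <path to shapefile> <schema>.<table> > inserts.sql
psql -d <databasename> -h <hostname> -U <username> -f inserts.sql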

Unsurprisingly, the SQL file with the statements was generated quite quickly, and the processing effort was being spent on the insert queries. This situation was verified with all three Shapefiles, even when I included a spatial index (GiST). After thinking a bit, I came to the conclusion that the spatial index should be created within the “CREATE TABLE” statement, so it should actually speed up the queries a bit (but I did not confirm this guess).

My next attempt was to use ogr2ogr, the GDAL tool for converting between vector formats.

ogr2ogr -f "PostgreSQL" PG:"host=host user=user dbname=db password=pass" data.shp

I have to say that I had read some suggestions to prefer shp2pgsql over this tool, and in fact it was quite slow as well (at this point I was only testing the smallest of the Shapefiles).

My conclusion up to this moment is that it is quite expensive, in terms of time (and processor), to import a large dataset into PostGIS. Some ideas that could be explored:

  • Upgrade the PostGIS server (would scaling vertically make any difference?).
  • Upload the data to the Amazon server and run the import tool from there. I am not sure whether it is possible to do that, but it seems like a good improvement to run this operation on the server side, using the power of the Amazon machine rather than my desktop computer.
  • Split the Shapefile into chunks of data. I am not entirely sure how to do this, and any suggestions would be welcome.

Next steps: once I manage to import a large Shapefile into my PostGIS instance, I will post some feedback about the computation of spatial operations.

Update: apparently I underestimated the fact that I am importing data over a network connection, with non-negligible latency. The best thing would be to do the import locally, an option that I unfortunately don’t have, since Amazon RDS does not give me SSH access (it is a service, not a machine!). I decided to switch to a new approach: import a plain CSV file into a database table using the \copy command, and that was actually really fast (a few seconds).

psql database -U user -h host.eu-west-1.rds.amazonaws.com -c "\copy newt_table from 'data.csv' with DELIMITER ','"

Of course, I then had to run a query to populate the point geometry from the float (x, y) values, and to create and update a spatial index (for the sake of efficiency, it is recommended that indexes are created after, and not during, the import!).

UPDATE new_table SET geom = ST_SetSRID(ST_MakePoint(lon,lat),4326);

CREATE INDEX [indexname] ON [tablename] USING GIST ( [geometryfield] );

VACUUM ANALYZE [table_name] ([column_name]);
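Filled in with the (illustrative) table and column names from the \copy example above, and feeding the statements to psql on stdin so that each one runs on its own, that looks something like this:

psql database -U user -h host.eu-west-1.rds.amazonaws.com <<'SQL'
-- only needed if the table loaded with \copy has no geometry column yet
ALTER TABLE new_table ADD COLUMN geom geometry(Point, 4326);
UPDATE new_table SET geom = ST_SetSRID(ST_MakePoint(lon, lat), 4326);
CREATE INDEX new_table_geom_idx ON new_table USING GIST (geom);
VACUUM ANALYZE new_table;
SQL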

That was still considerably fast (although not as much as the CSV import!), and my guess is that this is because we are now running the queries on the remote machine (and not uploading data over an internet connection).

Following some very useful advice on Stack Overflow, I decided to retry my previous approaches, tuning the configuration parameters.

First I tried the OGR command-line tool, but this time with the config flag “PG_USE_COPY YES”, to force a dump of the data:

ogr2ogr -progress -f PostgreSQL PG:"dbname='database' user='user' host='host.eu-west-1.rds.amazonaws.com' port='5432'" data.shp --config PG_USE_COPY YES -nlt POINT -nln import_ogr

Although the speed improved massively, it still took me 1:03:00 to run this import.

Then I tried the shp2pgsql tool, with the “-D” flag, which also forces a dump of the data:

shp2pgsql -D -s 4326 data.shp import_shp2pgsql | psql

This was remarkably fast: 00:01:04

compare_imports

Comparing the two imports, we can see that shp2pgsql did not create any index (which, as we already commented, is a costly operation), while ogr2ogr created a GiST index on the spatial field. The extra step of creating an index has to be added to the shp2pgsql import, but as we have seen before, that operation is not a problem, since it then runs on the server.

I conclude this review by stating that shp2pgsql performs very well in the task of loading a Big Data spatial table into a remote PostGIS server (e.g.: Amazon RDS). It is recommended that the “-D” flag is passed to the tool, and that no indexes are created during this operation.

Below we can see the PostGIS geometry of the imported table, with more than 3 million points:

layer_shp2pgsql

Post-processing OPTICSxi Clustering

In a previous post, I expressed my concerns regarding the results of OPTICSxi clustering. Namely, I mentioned an “annoying” spike effect that turns up massively in almost every run (so massively that it is almost a “feature”).

A post on Stack Overflow originated a useful exchange of ideas with the author of the ELKI framework. He pointed out a “weakness” of the algorithm, which bases its partition on the reachability distance, and that is not always synonymous with “spatial closeness”. Literally, outliers standing in the middle of clusters could be erroneously interpreted as belonging to one cluster or the other.

Since this is a problem of the partitioning algorithm, the solution could involve improving the partitioning algorithm, using a different partitioning algorithm, or using a different clustering algorithm altogether. As a “quick fix”, I opted for some cluster “post-processing”, in order to remove the outliers.

So my research question was: how to identify a point that is a spatial outlier?

I tried a couple of approaches that I will discuss now, as I think they may be useful for someone or generate a useful discussion.

Image

In the image above, the coloured points are outliers, according to our definition. One simple approach would be to calculate the average path length: the average distance of one point to all other points in the point cloud. Then we could test the distance of each point and say that, if it is greater than a certain value, let us say 5, we consider it an outlier and remove it from the dataset. I found this approach actually yielded good results, and I was able to reduce the “spikes” that we see in this figure:

Image

To these results:

Image

The code that calculates the average distance for each point is below:

// Average distance from 'coord' to every other point in the cluster.
// 'ids' are the ELKI DBIDs of the cluster members; 'dataMap' maps the
// string form of each DBID to its JTS Coordinate.
public static double getAverageDistance(Coordinate coord, ModifiableDBIDs ids,
                                         HashMap<String, Coordinate> dataMap) {
    double sum = 0.0;
    try {
        // accumulate the distance from 'coord' to every cluster member
        for (DBIDIter iter = ids.iter(); iter.valid(); iter.advance()) {
            sum += coord.distance(dataMap.get(DBIDUtil.toString(iter)));
        }
    } catch (Exception exp) {
        System.out.println("Unexpected exception: " + exp.getMessage());
    }
    return sum / ids.size();
}

From the point of view of “correctness”, this algorithm suffers from the drawback of removing the outlier from the cluster altogether (and thus converting it into “noise”). To avoid this, we would have to test whether the point actually belongs to another cluster.

Apart from that, in terms of computation the algorithm is extremely “costly”, the most costly part being the computation of the distances from all points to all points: literally an NxN matrix, which can easily grow to huge numbers with a large cluster.

To avoid that, I tried a few different approaches. One was to calculate these distances for only a part of the dataset. We know that, by the way the algorithm is written, the outliers tend to “appear” either at the beginning or at the end of the cluster. With this in mind, I calculated the average based on the 60% of “middle” values (excluding the first and last 20%) and tested the condition only for the first 20% and the last 20% (this refinement was actually not needed, as the “costly” part of the algorithm is the distance matrix and not the condition testing). I was not able to reach any reasonable results with this approach, either because the points were not ordered (which defeats the whole purpose of my “slicing”) or because outliers were appearing outside these “classes” (i.e.: in the middle of the dataset).

The other approach that I tried was to work with the “final” polygon (the convex hull of the cluster), rather than with the raw points. The polygon border has only a few points, compared to the point cloud used to generate it. It is very easy and quick to identify the “outlier” in the polygon border; however, removing it will understandably result in a “strange” polygon, since the reality is: if that point had not been used to build the polygon, another point (that we don’t have right now!) would have been used, and so the real geometry would not be this one. This is particularly noticeable when we see overlapping clusters (which don’t overlap anymore). After this experience, it became clear that the processing would have to be done on the cluster point dataset, before building the polygon.

I ended up with an algorithm that successfully removes the “spike” effects from OPTICSxi, but it is rather costly in terms of time (more costly than OPTICSxi itself), and unfortunately this cost grows exponentially with the size of the dataset, which limits its applicability to big data.

UPDATE

The approach above was improved by testing the distances against only the two neighbours of each point (the previous and the next point), rather than against the entire matrix. With this hack, the running time of the algorithm was reduced to reasonable values that don’t grow so much with the size of the dataset.

Parametrizing and interpreting OPTICSxi Clustering

For a while now, I have been working on the application of OPTICS clustering to user-generated data in cities.
OPTICS is a density-based algorithm that attempts to overcome some of the “weaknesses” of its most famous counterpart, DBSCAN. The major weaknesses of DBSCAN are:

  • the inability to detect clusters in zones of varying density.
  • the choice of parameter values, to which it is very sensitive.

I am using an implementation from the ELKI Data Mining library, which is one of the few in existence. Unlike DBSCAN, the OPTICS algorithm does not produce a strict cluster partition, but an augmented ordering of the database. To produce the cluster partition, I use OPTICSxi, another algorithm that produces a classification based on the output of OPTICS. There are even fewer libraries capable of extracting a cluster partition from the output of OPTICS, and ELKI’s OPTICSxi implementation is one of the few.

In a nutshell, the parameters needed for the OPTICS algorithm are:

  • epsilon: this parameter is meant as a “maximum” distance to consider, not a specific distance to consider (as in DBSCAN); a whole range of distances, up to epsilon, is considered by the OPTICS algorithm; we can choose an “extremely” large epsilon, although that will increase the time the algorithm takes to converge;
  • minpts: this is the number of points required to form a cluster (the same as in DBSCAN);

The OPTICSxi algorithm adds an “extra” parameter:

  • xi: the contrast parameter, which establishes the relative decrease in density; this parameter directly controls the number of classes we will obtain.

Taking into consideration what I wrote above, if we “replace” DBSCAN by OPTICSxi, we still have to choose a minpts, the epsilon becomes less important (but we still need to set it), and suddenly we have a “new” parameter to set: xi. I am not completely sure this is a gain in terms of easier parametrization…

It is very clear to me how to interpret the results of DBSCAN (although it is not that easy to set “meaningful” global parameters); DBSCAN detects a “prototype” of a cluster, characterized by a density, expressed as a number of points per area (minpts/epsilon). The results of OPTICSxi seem a bit more difficult to interpret.

The clusters generated by OPTICSxi are hierarchical: “outer” clusters (lower in the hierarchical scale) contain inner (or “child”) clusters. Looking at the algorithm, it is clear that the idea of varying densities is implemented by having a range of epsilon values that is actually based on the data. When we set the “contrast” parameter, we actually “decide” which density variation we accept in order to consider a group a cluster. Because the algorithm is focused on “density variation”, rather than on a global value of density (or a “prototype”), it may well happen that areas with a very low density appear as a cluster, just because they have a density variation from their surroundings that is greater than the accepted threshold. Likewise, we may have areas that are extremely dense and are not detected as clusters, because there is only a smooth density variation from their surroundings. In terms of interpretation, this may well turn out to be a bottleneck.

In the image below, we see clusters in parts of the city that are not particularly dense (like the west) and that are not detected as clusters by DBSCAN. On the other hand, the centre of the city (more dense) has only a few clusters.

Image

epsilon=500; xi=0.03; minpts=150;

It is interesting to note that, since OPTICS does not impose a strict “prototype” of density, the clusters in denser areas (in the centre of the city) are smaller in area than the clusters in less dense areas (the surroundings).

There are two phenomena that I sometimes detect in the outputs of OPTICSxi, and that I am not able to explain. One is the appearance of “spike” clusters that link parts of the map. I cannot explain them, because they seem to be made of very few points and I don’t understand how the algorithm decides to group them in the same cluster. Do they really represent a “corridor” of density variation? Looking at the underlying data, it does not seem so. You can see these “spikes” in the image below.

Image
epsilon=1000; xi=0.05; minpts=100;

The other phenomenon that I cannot explain is the fact that sometimes there are overlapping clusters at the same hierarchical level. OPTICSxi is based on the OPTICS ordering of the database (e.g. the dendrogram), and there are no repeated points in that diagram.

Image
http://en.wikipedia.org/wiki/OPTICS_algorithm

Since this is a hierarchical clustering, we consider that clusters at a lower level contain clusters at a higher level, and that idea is enforced when building the convex hulls. However, I don’t see any justification for having clusters that intersect other clusters at the same hierarchical level, which in practice would mean that some points have a double cluster “membership”. In the image below, we can see some intersecting clusters at the same hierarchical level (0).

Image

Finally, the most important thought/question that I want to leave you with is: what do we expect to see in an OPTICSxi cluster classification? This question is closely linked to the task of parametrizing OPTICSxi.
Since I hardly see any studies with runs of OPTICSxi for a particular clustering problem, I struggle to figure out what an optimal clustering classification would be; i.e.: one that can provide some meaningful/useful results, and add some value to the DBSCAN clustering. To help me answer that question, I performed many runs of OPTICSxi with different combinations of parameters, and I selected three that I will discuss below.

Image
epsilon=2000; xi=0.025; minpts=100;

On this run I used a large value of epsilon (2 km); the meaning of that value is that we accept large clusters (up to 2 km); since the algorithm “merges” clusters, we will end up with some very large clusters that will almost certainly have a low density. I like this output, because it exposes the hierarchical structure of the classification, and it actually reminds me of several runs of DBSCAN with different combinations of parameters (for different densities), which is the advertised “strength” of OPTICS. As was mentioned before, smaller clusters correspond to higher levels in the hierarchical scale, and to higher densities.

Image
epsilon=250; xi=0.035; minpts=10;

On this run we see a large number of clusters, even though the “contrast” parameter is the same as in the previous run. That is mostly because I chose a low minpts, which establishes that we accept clusters with a low number of points. Since the epsilon in this case is shorter, we don’t see those large clusters occupying a large part of the map. I find this output less interesting than the previous one, mostly because, even though we have a hierarchical structure, there are many clusters at the same level, and many of them intersect. In terms of interpretation, I can see an overall “shape” that is similar to the previous one, but it is actually discretized into lots of small clusters that are easily overlooked as “noise”.

Image

epsilon=250; xi=0.035; minpts=100;

This run has a parameter choice similar to the previous one, except that minpts is larger; the consequence is not only that we find fewer clusters and they overlap less, but also that they are mostly at the same level.

From the perspective of adding value to DBSCAN, I would opt for the first combination of parameters, since it provides a hierarchical picture of the data, exposing clearly which areas are more dense. IMHO the last combination of parameters fails to provide an idea of the global distribution of density, since it finds similar clusters all over the study area.

Clustering Geospatial Data

Recently I have been looking into different algorithms for clustering geospatial data. The problem of finding “similar” regions in space is a very interesting one, since this type of classification enables a whole range of applications (e.g.: urban development, transport planning, etc.).
When we are working with “Big Data”, such as that resulting from streams of crowd-sourced data (for instance), the amount of information makes it difficult to detect things visually; this is often where artificial intelligence can “give us a hand”.

Image

DBSCAN is a density-based clustering algorithm that can find an a priori unknown number of clusters with arbitrary shapes. It starts with a certain “idea” of what a cluster is, based on a density defined by its parameters: epsilon and minpts. This “idea” is also the main weakness of DBSCAN, as the global parameters don’t allow it to capture different densities in the dataset, if that is the case; and often, in geographical datasets, it is.

Image

In the example above, we can see that a different value of epsilon, and therefore a different threshold of density, yields very different results. With the larger epsilon we detect clusters across the whole AOI, but the centre comes up as a single cluster. With the smaller epsilon, we have more detail in the centre, but we “lose” the less dense clusters in the outskirts. In a way, DBSCAN corresponds to looking at the data with “semi-closed” eyes, and this is why its results make so much sense to the user.

OPTICS is considered a generalization of DBSCAN, tackling exactly this difficulty in configuring the parameters by becoming almost parameterless. Instead of using an epsilon to limit the search for neighbours, OPTICS considers a range of epsilons, up to a maximum that could be a radius including the entire AOI (although for practical purposes you don’t really want to do that, because it increases the complexity of the algorithm). OPTICS is essentially an ordering of the database, such that points that are spatially closest become neighbours in the ordering. The output of OPTICS is a graphic plotting this ordering, together with a special distance that is stored for each point, representing the density that needs to be accepted for a cluster in order to have both points belong to the same cluster; this is called a dendrogram.

Image

source: wikipedia

OPTICS does not produce a strict partitioning of the data, but this can be done using algorithms that try to identify the “peaks” and “valleys” in the graphic, or by selecting a range of x and a threshold of y (xi). Generally speaking, these algorithms produce a hierarchical clustering, which is more difficult to interpret than the “flat” partitioning produced by DBSCAN. Conceptually, this OPTICS output would correspond to many “runs” of DBSCAN.

Image

There are implementations of DBSCAN and OPTICS in the WEKA and ELKI libraries. WEKA’s implementation of DBSCAN has been repeatedly referred to as slow when compared to ELKI’s implementation. This benchmarking illustrates well the difference between the two. Although some improvements were added in the latest versions of WEKA, this difference was confirmed by invoking the two algorithms on some test cases. The OPTICS implementation in WEKA does not produce a classification. In the ELKI library, the OPTICS algorithm is separated from the classification algorithm, following ELKI’s modular philosophy; ELKI’s OPTICSxi produces a clustering classification, using a partitioning algorithm selected (or implemented) by the user.

DBSCAN is very sensitive to its two parameters, which are quite hard to set up. Also, the parameters “influence” each other in the result. They are hard to set up because they depend largely on the particular phenomenon we are studying, and on the type of clusters we want to detect. If we want to identify “meaningful” things, we need to have a pretty good idea of what we are looking for. If good domain knowledge is strongly advised, it is also advisable to inspect the results of the clustering algorithm by plotting them on a map. For all these reasons, I think that DBSCAN is very good as an exploratory tool, for producing meaningful and scientific analysis of a dataset, but I don’t see it as very “realistic” to implement it as part of a “service” that automatically analyses any dataset “on demand”.

OPTICS seems a bit more “obscure” to me. The algorithm is more sophisticated than DBSCAN, in the sense that it has a more “flexible” idea of a cluster, but it produces a more complex result, which is more difficult for the user to interpret. OPTICS itself does not produce any classification, so the quality of the cluster “results” actually depends heavily on the algorithm that we choose to perform the partition.

In a way, I don’t see OPTICS as a “replacement” for DBSCAN, but as a complement. It allows us to detect some “patterns” that are not “visible” to DBSCAN, but the lack of global parameters does not allow us to capture the “global” structure of the dataset.

Encrypting Passwords on the Client Side

I recently implemented an authentication system, supported by a table in the database where I store the usernames and passwords.
The PostgreSQL “pgcrypto” module supports encryption of specific columns, but I wanted to avoid “bulking up” my system with more add-ons; also, I was not sure about trusting the administrator of the database. For all these reasons, I decided to go for client-side encryption.

As I am using the Qt API, my first guess was to look for encryption-related functionality in this framework.

Since Qt 4.3 there is a class called QCryptographicHash, which provides a way to generate cryptographic hashes from binary or text data. It supports a couple of “popular” message-digest algorithms such as MD5, MD4 or SHA1. This is interesting, but not usable in my case, since “hashing” is a one-way process: you can create a hash from data, but not “decrypt” it back to the original value.

For the purpose of encrypting and decrypting data, there is the QCA library, which is based on SSL and therefore supports asymmetric cryptography. I had a quick look at this library and it looks pretty powerful; however, I was on a “quest” to avoid installing libraries, so I decided to look for another alternative. I also have to add that I did not need “bullet-proof” encryption, but solely to keep my data hidden from the “curious eyes” of users who don’t know much about programming or cryptography. *If you want strong data encryption, then QCA is the way to go*.

Finally I stumbled upon SimpleCrypt (Simple Encryption), a class developed by a Qt user to protect against “casual observation”: just what I was looking for.
SimpleCrypt provides symmetric encryption (the same key is used for encoding and decoding the data). This key is a 64-bit quint64, built into the program, in the SimpleCrypt class.

SimpleCrypt crypto(Q_UINT64_C(0x5ebed0982db74774)); // some random 64-bit key

It can be used to encode strings, as well as streams of binary data. More details about the encryption algorithm can be found here.

In my application, I use a QDataWidgetMapper to map different widgets, such as line edits, to fields in the database. In the case of the “password” field, the mapping won’t work as-is, since the user will type and see a password that is different from the ciphertext stored in the database table. To handle this, I need a “proxy” between the database and the widget; this proxy is a QItemDelegate.
By reimplementing this class and applying it to my widgets, I can “force” the application to transform the values that it writes to and reads from the database, by applying the encryption algorithm.
This would be my implementation of the item delegate:

#include "passwddelegate.h"
#include "simplecrypt.h"

SimpleCrypt crypto(Q_UINT64_C(0x5ebed0982db74774)); // the same random 64-bit key as above

PasswdDelegate::PasswdDelegate(QList<int> colsOthers, QList<int> colsText, QList<int> colsPass,QObject *parent):
    NullRelationalDelegate(colsOthers,colsText,parent), m_colsPass(colsPass)
{

}
void PasswdDelegate::setModelData(QWidget *editor, QAbstractItemModel *model, const QModelIndex &index) const
{
    if (m_colsOthers.contains(index.column()) || m_colsText.contains(index.column()) ){
        NullRelationalDelegate::setModelData(editor,model,index);
    }else if (m_colsPass.contains(index.column())){

            model->setData(index, crypto.encryptToString(editor->property("text").toString()));

    }
    else QSqlRelationalDelegate::setModelData(editor,model,index);
}

void PasswdDelegate::setEditorData(QWidget *editor, const QModelIndex &index) const
{
    if (m_colsOthers.contains(index.column()) || m_colsText.contains(index.column())){
        NullRelationalDelegate::setEditorData(editor,index);
    }else if (m_colsPass.contains(index.column())){

           editor->setProperty("text", crypto.decryptToString(index.data().toString()));

    }
    else QSqlRelationalDelegate::setEditorData(editor,index);
}

The method “setModelData” is called each time we want to write values to the encrypted field, and it transforms the plain-text password into ciphertext by applying the compiled-in key. The method “setEditorData”, on the other hand, transforms the ciphertext from the database into plain text, by applying the same key.

This is how the password looks to the user:

cypher1

And this is what is actually stored on the database:

cypher2

SimpleCrypt provides a simple method for “ciphering” user passwords in the database; it is not “unbreakable”, but it is one extra level of security, and it is pretty easy to use, not requiring the installation of any extra libraries (you may download simplecrypt.h and simplecrypt.cpp from here). The QItemDelegate is a class that allows us to control what the user “sees”, while still benefiting from the advantages of using a QDataWidgetMapper. In this case, it works like a charm.

Quick guide to Multi-master Replication in PostgreSQL

A while ago, I wrote a post about multi-master replication using SymmetricDS. My scenario consists of a system with multiple nodes, all of them writing to their own copies of the database. Sometimes the nodes may be offline, but I would like the system to be *eventually consistent*.

SymmetricDS is a Java-based framework that supports a number of RDBMS, including PostgreSQL, the one that I use. I don’t particularly like the fact that it uses Java: the UI, especially, seems slow and unresponsive. However, the fact that the application is cross-platform is quite “handy”, as we can have the databases running on a number of different machines and “talking to each other”.
SymmetricDS itself is free and open source (GPL). However, if you want to use the configuration GUI, there is a commercial product called “SymmetricDS Pro”. I could not find out how much the Pro version costs (they are quite secretive on the website), but since I was in a bit of a rush to set up the synchronization system, I decided to try it out.
Previously, I evaluated the FOSS version and was able to synchronize two databases on Ubuntu systems: what they call a “Standard 2 Tier Configuration”. This time, I went for a slightly more complicated scenario: synchronizing three different databases, all on different hosts, with a mix of Windows and Linux systems. With the help of the “Pro” GUI and the quick-start manual, it took me less than two days to do it, which I think is OK.

DISCLAIMER:
Before you start reading this post, please note that database replication is a complicated issue. Multi-master asynchronous replication is *definitely* complicated, with many things involved, so don’t expect the configuration to be a simple wizard. To be able to use it, you need to understand well a series of concepts, and that won’t take you five minutes. Having said this, “SymmetricDS Pro” does a pretty good job of helping a person who *has these concepts* perform that task.

My case study is a real-world scenario where I have three different hosts running copies of my application and database. However, it may be over-simplified, since I am doing simple operations with the application (inserting/updating data with all the nodes online). Asynchronous multi-master replication “gives space” for conflicts to arise, and although SymmetricDS does provide some support for dealing with conflicts, this is a highly sensitive topic that must be dealt with on a case-by-case basis, by a person with good knowledge of the domain. In my case study, I did not run into any conflicts, so I won’t evaluate how SymmetricDS deals with them. Please keep this issue in mind, if you decide to adopt SymmetricDS.

SymmetricDS Pro is not free, but you may download it and evaluate it for 30 days:

http://www.jumpmind.com/products/symmetricds/download

It is essential to give your email address, to which they will send the key to “unlock” the full functionality. I found it very easy to install, following the instructions in the quick-start guide:

http://www.jumpmind.com/downloads/symmetricds/doc/SymmetricDSPro-QuickStart-3.5.pdf

The only dependency is the Java Runtime Environment (JRE), which you very likely already have running on your system anyway.
In the guide they mention a “single-homed” scenario, where you have a single instance of SymmetricDS running, and a “multi-homed” scenario, where you install a copy of SymmetricDS for each host/database. Since I wanted to approach a “deployment scenario” with remote computers, I went straight to “multi-homed”. However, if you just want to test it, you may try the “single-homed” scenario (which is the one supported in the manual).

Although SymmetricDS enables a distributed system, you need to create a node that acts as a “registration server”. This node has to exist, even if you then make the other nodes “talk” to each other. Although it is OK if this node is offline for a while, I would pick a host that is mostly online (like an actual server).

I started by installing SymmetricDS on my “server” node. The installation is exactly the same on any node; when you finish, you start the daemon (by running something like “/symmetricDS/bin/sym”), and then run the node setup.
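In shell terms that boils down to something like this (the path is the one from my install; adjust it to wherever you unpacked SymmetricDS):

cd /symmetricDS/bin
./sym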

If you have installed SymmetricDS on port 31415 (the default non-secure port), the configuration console can be reached by pointing your browser at this address:

http://localhost:31415

Since I was on the server host, I chose to set up a “server” node. SymmetricDS presents you with two “ready-made” configurations, and an option to create your own, called “I’ll configure things myself”. This is actually a very important step of your configuration, since it will define the architecture of the system (how many nodes you have, how they connect to each other, etc.); later you may refine the configuration options, but the first decision is made here, so it is important to think it through. Since I was a bit intimidated by the “I’ll configure things myself” option, and the “Standard 2 Tier Configuration” is the only one supported in the manual, I decided to go for that one first. If you are looking for a sort of tutorial, I would recommend this one, in order to check that everything is working on your system, etc.

Although they “claim” in the manual that the “client” group may correspond to many nodes connected to one server, I found that I could only make each client talk to the server (and vice versa); I could not make the client nodes talk to each other. It was as if they were subscribing to the “news” from the server, but the “news” arriving at the server via other nodes was not actually considered “news”.

After that, I decided to try the “Multiple Sources to One Target Configuration”, which is also described as “Data Warehousing”. This was not exactly what I was looking for, but I was able to modify the architecture until I arrived at something that suited me (and that I will describe later). The next screens let you define the database connection string, and the URL for communicating with the SymmetricDS instance; in my case:

http://invislaptop:31415/sync/regsvr

(where invislaptop resolves to my server’s IP address)

After this, you are taken to the configuration dashboard, which should be “unlocked” using the key provided by email. The next thing you want to do is go to the “configuration” section. This section is very powerful, if complicated, and it allows you to tune and refine every aspect of the synchronization, with the aid of some tools for “bulk” tasks. It is certainly possible to do all this (in the FOSS version) by editing the configuration files, but I found this GUI very useful, at least for a “newbie”.

The “Data Warehousing” “pre-cooked” configuration generates a series of node groups:

  • regsrvr: registration server
  • target: target data source
  • source1: group of nodes that provide data to the target
  • source2: group of nodes that provide data to the target
  • sourceN: …

In my scenario I “left” only three nodes: the registration server, a target and a source (“source1”), and removed the other ones. The names are not so important, and I could have just called them “regsvr”, “node1” and “node2” (for instance).

sym1

The “group links” section actually establishes the dynamics between all these groups of nodes, whatever names you gave them. In my case, the registration server “waits for pulls” from both node groups (“target” and “source1”). The “target” group pushes changes both to the registration server and to the “source1” group. The “source1” group pushes changes both to the registration server and to the “target” group.

sym2

The system could be described, by something like this:

architecture

On the “routers” tab, you can define the details of these connections between nodes, through triggers (one for each action):

sym4

The triggers for each table, are defined on the “table triggers” tab.

sym3

You may define them individually for the tables you are interested in, or do a “bulk” define by choosing “auto-create”. Then you have the option to connect the routers to the triggers on this tab, or in the “routers” tab.
When this is done, you should have a trigger for each table, for each insert/update/delete action (according to what you have defined).

The server setup is actually the most complex and time-consuming configuration step (and I did not cover it exhaustively!). After this, I went to each of my clients, and ran the installation and setup again.
This time, I chose to add a “client” node instead. The “client” nodes will attempt to register during the setup, by contacting the server at the address you provide; in my case:

http://invislaptop:31415/sync/regsvr

Unless you open the registration on the server for that particular node (by entering its ID and group), the registration will fail. This is OK, and you can go through the entire process of creating the client without registering the node.
When you have finished the client setup, if you go to the server console and open “Manage nodes”, you will see one URL under the server entry. This should be the client node that contacted the server in order to register. If you right-click this entry and choose “allow”, the server should register the node. If you want, you may re-load the data on the client by choosing “Send initial load to” (this should actually not be necessary, as the server should send an initial load when allowing the node).
After registering both nodes, my setup looked like this:

sym5

After successfully registering all clients on the server, the system should be up and running. Note that you need the SymmetricDS daemons running on all three nodes to have a fully functional scenario.
I edited a record on the server, and it got replicated to the Ubuntu and Windows clients.

server

target

source1

Then I tried to edit a record on each of the other nodes (“target” and “source1”), and watched the changes being pushed to the other nodes. It seems that the daemon listens for changes at very short intervals, since the changes were propagated through the system almost immediately. However, I did not test it with more complex changes, including batches of data.

From this experience I would say SymmetricDS performs quite well, and with the aid of the GUI in “SymmetricDS Pro” it is not too hard to set up, once you are clear about what you are looking for and understand where to set things up. This is good, because I did not find much documentation on the web apart from the simpler scenario (“Standard 2 Tier Configuration”), nor did I find forum posts discussing this.

Furthermore, it would be interesting to test this system in a “tougher” scenario: larger and more complex batches of changes, more nodes, and sometimes some (or all) of them offline. This would obviously trigger the “conflict” situation, which is also the one that “scares” me most.