Quick guide to Multi-master Replication in PostgreSQL

A while ago, I wrote a post about multi-master replication using SymmetricDS. My scenario consists of a system with multiple nodes, all of them writing to their own copies of the database. Sometimes the nodes may be offline, but I would like the system to be *eventually consistent*.

SymmetricDS is a Java-based framework that supports a number of RDBMS, including PostgreSQL, the one that I use. I don’t particularly like the fact that it uses Java: especially the UI, which seems slow and unresponsive. However, the fact that the application is cross-platform is quite “handy”, as we can have the databases running on a number of different platforms and “talking to each other”.
SymmetricDS itself is free and Open Source (GPL). However, if you want to use the configuration GUI, there is a commercial product called “SymmetricDS Pro”. I could not find out how much the Pro version costs (they are quite secretive on the website), but since I was in a bit of a rush to set up the synchronization system, I decided to try it out.
Previously, I had evaluated the FOSS version and was able to synchronize two databases on Ubuntu systems: what they call a “Standard 2 Tier Configuration”. This time, I went for a slightly more complicated scenario: synchronizing three different databases, all on different hosts, with a mix of Windows and Linux systems. With the help of the “Pro” GUI and the “Quick-Start” manual, it took me less than two days to do it, which I think is ok.

DISCLAIMER:
Before you start reading this post, please note that database replication is a complicated issue. Multi-master asynchronous replication is *definitely* complicated, with many moving parts, so don’t expect the configuration to be a simple wizard. To use it you need a good understanding of a series of concepts, and that won’t take you five minutes. Having said this, “SymmetricDS Pro” does a pretty good job of helping a person that *has these concepts* perform the task.

My case study is a real-world scenario where I have three different hosts running copies of my application and database. However, it may be over-simplified, since I am doing simple operations with the application (inserting/updating data with all the nodes online). Asynchronous multi-master replication “gives space” for conflicts to arise, and although SymmetricDS does provide some support for dealing with conflicts, this is a highly sensitive topic that must be dealt with on a case-by-case basis, by a person with good knowledge of the domain. In my case study I did not run into any conflicts, so I won’t evaluate how SymmetricDS deals with them. Please keep this issue in mind if you decide to adopt SymmetricDS.

SymmetricDS Pro is not free, but you may download it and evaluate it for 30 days:

http://www.jumpmind.com/products/symmetricds/download

You have to give your email address, to which they will send the key to “unlock” the full functionality. I found it very easy to install, following the instructions in the quick-start guide:

SymmetricDSPro-QuickStart-3.5.pdf

The only dependency is the Java Runtime Environment (JRE), which you will very likely already have on your system anyway.
In the guide they mention a “single-homed” scenario, where you have a single instance of SymmetricDS running, and a “multi-homed” scenario, where you install a copy of SymmetricDS for each host/database. Since I wanted to approach a “deployment scenario” with remote computers, I went straight to “multi-homed”. However, if you just want to test it, you may try the “single-homed” scenario (which is the one covered in the manual).

Although SymmetricDS enables a distributed system, you need to create a node that acts as a “registration server”. This node has to exist, even if you make the other nodes “talk” directly to each other. Although it is ok if this node is offline for a while, I would pick a host that is mostly online (like an actual server).

I started by installing SymmetricDS on my “server” node. The installation is exactly the same on any node; when you finish, you start the daemon (by running something like “/symmetricDS/bin/sym”) and then run the node setup.

If you have installed SymmetricDS on port 31415 (the default non-secure port), the configuration console can be reached by pointing your browser to this address:

http://localhost:31415

Since I was on the server host, I chose to set up a “server” node. SymmetricDS presents you with two “ready-made” configurations, and an option to create your own, called “I’ll configure things myself”. This is actually a very important step of your configuration, since it will define the architecture of the system (how many nodes you have, how they connect to each other, etc.); later you may refine the configuration options, but the first decision is made here, so it is important to think it through. Since I was a bit intimidated by the “I’ll configure things myself” option, and the “Standard 2 Tier Configuration” is the only one covered in the manual, I decided to go for that one first. If you are looking for a sort of tutorial, I would recommend starting with it, in order to check that everything is working on your system.

Although the manual “claims” that the “client” group may correspond to many nodes connected to one server, I found that I could only make each client talk to the server (and vice versa); I could not make the client nodes talk to each other. It was as if they were subscribing to the “news” from the server, but the “news” arriving at the server via other nodes were not actually treated as “news”.

After that, I decided to try the “Multiple Sources to One Target Configuration”, which is also described as “Data Warehousing”. This was not exactly what I was looking for, but I was able to modify the architecture until I arrived at something that suited me (and that I will describe later). The next screens let you define the database connection string and the URL for communicating with the SymmetricDS instance; in my case:

http://invislaptop:31415/sync/regsvr

(where invislaptop resolves to my server’s IP address)

After this, you are taken to the configuration dashboard, which should be “unlocked” using the key provided by email. The next thing you want to do is go to the “configuration” section. This section is as powerful as it is complicated: it allows you to tune and refine every aspect of the synchronization, with the aid of some tools for “bulk” tasks. It is certainly possible to do all this (in the FOSS version) by editing the configuration files or the sym_* configuration tables directly, but I found this GUI very useful, at least for a “newbie”.

The “Data Warehousing” “pre-cooked” configuration generates a series of node groups:

  • regsrvr: registration server
  • target: target data source
  • source1: group of nodes that provide data to the target
  • source2: group of nodes that provide data to the target
  • sourceN: …

In my scenario I kept only three node groups: the registration server, a target and a source (“source1”), and removed the other ones. The names are not that important; I could just as well have called them “regsvr”, “node1” and “node2” (for instance).

[screenshot: sym1 (node groups)]
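
For reference, the FOSS version keeps this configuration in the sym_* tables, so the same node groups could be created with plain SQL; the following is only a minimal sketch, assuming the SymmetricDS 3.x configuration schema and the group names listed above:

-- sketch: the three node groups kept in my scenario (SymmetricDS 3.x schema assumed)
INSERT INTO sym_node_group (node_group_id, description)
VALUES ('regsrvr', 'registration server'),
       ('target',  'target node group'),
       ('source1', 'source node group');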

The “group links” section establishes the dynamics between all these groups of nodes, whatever you called them. In my case, the registration server “waits for pulls” from both node groups (“target” and “source1”). The “target” group pushes changes both to the registration server and to the “source1” group. The “source1” group pushes changes both to the registration server and to the “target” group.

[screenshot: sym2 (group links)]
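
Again for reference, the SQL equivalent lives in the sym_node_group_link table; a hedged sketch, where (per the SymmetricDS documentation) data_event_action 'W' means the source group waits for the target group to pull, and 'P' means the source group pushes to the target:

-- sketch: group links matching the description above (SymmetricDS 3.x schema assumed)
-- the registration server waits for "target" and "source1" to pull from it
INSERT INTO sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action)
VALUES ('regsrvr', 'target',  'W'),
       ('regsrvr', 'source1', 'W');

-- "target" and "source1" push their changes to the registration server and to each other
INSERT INTO sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action)
VALUES ('target',  'regsrvr', 'P'),
       ('target',  'source1', 'P'),
       ('source1', 'regsrvr', 'P'),
       ('source1', 'target',  'P');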

The system could be described by something like this:

[diagram: architecture]

On the “routers” tab, you can define the details of these connections between nodes, through triggers (one for each action):

[screenshot: sym4 (routers)]

The triggers for each table are defined on the “table triggers” tab.

[screenshot: sym3 (table triggers)]

You may define them individually for the tables you are interested in, or do a “bulk” define by choosing “auto-create”. Then you have the option to connect the routers to the triggers on this tab, or on the “routers” tab.
When this is done, you should have a trigger for each table, for each insert/update/delete action (according to what you have defined).
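
In the FOSS version, the SQL equivalent of these two tabs is one row per router in sym_router, one row per table in sym_trigger, and a row in sym_trigger_router linking the two. The sketch below is for a single, hypothetical table called my_table, assuming the SymmetricDS 3.x schema and the built-in 'default' channel and router type:

-- sketch: route changes captured in the "source1" group to the "target" group
INSERT INTO sym_router (router_id, source_node_group_id, target_node_group_id,
                        router_type, create_time, last_update_time)
VALUES ('source1_2_target', 'source1', 'target', 'default',
        current_timestamp, current_timestamp);

-- capture insert/update/delete on a hypothetical table called "my_table"
INSERT INTO sym_trigger (trigger_id, source_table_name, channel_id,
                         last_update_time, create_time)
VALUES ('my_table', 'my_table', 'default', current_timestamp, current_timestamp);

-- connect the trigger to the router
INSERT INTO sym_trigger_router (trigger_id, router_id, initial_load_order,
                                last_update_time, create_time)
VALUES ('my_table', 'source1_2_target', 100, current_timestamp, current_timestamp);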

The server setup is actually the most complex and time-consuming configuration step (and I did not cover it exhaustively!). After this, I went to each of my clients and ran the installation and setup again.
This time, I chose to add a “client” node instead. The “client” nodes will attempt to register during the setup, by contacting the server on the address you provide; in my case:

http://invislaptop:31415/sync/regsvr

Unless you open registration on the server for that particular node (by inputting its ID and group), the registration will fail. This is ok, and you can go through the entire process of creating the client without registering the node.
When you have finished the client setup, if you go to the server console and open “Manage nodes”, you will see one URL under the server entry. This should be the client node that contacted the server in order to register. If you right-click this entry and choose “allow”, the server should register the node. If you want, you may re-load the data on the client by choosing “Send initial load to” (this should not actually be necessary, as the server should send an initial load when allowing the node).
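
If you prefer SQL to the GUI for this step, the registration and initial load flags live, as far as I can tell from the SymmetricDS documentation, in the sym_node_security table on the registration server; a sketch, where the node id '001' is just a made-up example:

-- sketch: allow a node (hypothetical id '001') to register, and queue an initial load for it
-- (column names assumed from the SymmetricDS 3.x documentation)
UPDATE sym_node_security
   SET registration_enabled = 1,
       initial_load_enabled = 1
 WHERE node_id = '001';
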
After registering both nodes, my setup looked like this:

[screenshot: sym5 (registered nodes)]

After successfully registering all clients on the server, the system should be up and running. Note that you need to have the SymmetricDS daemons running on all three nodes to have a fully functional scenario.
I edited a record on the server, and it got replicated to the Ubuntu and Windows clients.

[screenshots: the edited record as seen on the server, target and source1 nodes]

Then I tried to edit a record on each of the other nodes (“target” and “source1”), and watched the changes being pushed to the other nodes. It seems that the daemons check for changes at very short intervals, since the changes were propagated through the system almost immediately. However, I did not test it with more complex changes, including batches of data.
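
If you ever need to tune how often this happens, the push and pull job periods are regular SymmetricDS parameters. The sketch below is only an assumption on my part, using the sym_parameter table and the parameter names as I understand them from the SymmetricDS documentation (values in milliseconds):

-- sketch (assumed sym_parameter layout and parameter names):
-- run the push and pull jobs every 10 seconds on all nodes
INSERT INTO sym_parameter (external_id, node_group_id, param_key, param_value)
VALUES ('ALL', 'ALL', 'job.push.period.time.ms', '10000'),
       ('ALL', 'ALL', 'job.pull.period.time.ms', '10000');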

From this experience I would say SymmetricDS performs quite well, and with the aid of the GUI in “SymmetricDS Pro” it is not too hard to set up, once you are clear about what you are looking for and understand where to configure things. This is good, because I did not find much documentation on the web apart from the simpler scenario (the “Standard 2 Tier Configuration”), nor did I find forum posts discussing this.

Furthermore, it would be interesting to test this system in a “tougher” scenario: larger and more complex batches of changes, more nodes, and some (or all) of them offline at times. This would obviously trigger the “conflict” situation, which is also the one that “scares” me most.

Quick guide to Auditing a (postgreSQL) Database: putting it all together

In my previous post, I suggested how to create a schema, a table and a trigger function in order to audit a PostgreSQL database.
To audit a table, you have to create a trigger for that table, calling the code from the generic trigger function.
In my case, I want to audit every table in the database, and I think most people will likely want to audit every table, or at least most tables, in their database.
To escape the tedious task of writing that code n times, I put together a script that generates an audit trigger for each table in the database. If you want to apply it to a restricted number of tables instead, you could easily change it to read the table names from a list.

CREATE OR REPLACE FUNCTION create_audit_triggers()
  RETURNS void AS
$BODY$
DECLARE
  r RECORD;
  _string varchar(1000);
BEGIN
  -- loop over every table in the "public" schema
  FOR r IN SELECT DISTINCT tablename FROM pg_catalog.pg_tables WHERE schemaname = 'public' LOOP

    -- skip tables that already have an audit trigger
    IF NOT EXISTS (SELECT *
                   FROM information_schema.triggers
                   WHERE event_object_table = r.tablename
                   AND trigger_name = r.tablename || '_audit')
    THEN
      --raise info '%' , r.tablename;
      _string := ' CREATE TRIGGER ' || r.tablename || '_audit ' ||
                 ' AFTER INSERT OR UPDATE OR DELETE ON ' || r.tablename ||
                 ' FOR EACH ROW EXECUTE PROCEDURE audit.if_modified_func();';
      RAISE INFO '%', _string;
      EXECUTE (_string);
    END IF;

  END LOOP;

END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION create_audit_triggers()
  OWNER TO postgres;

This checks whether the trigger already exists (in which case an error would otherwise be raised!), and generates the triggers in the blink of an eye (depending on the size of your database!). You can therefore also use it to update the triggers after you have added a couple of tables to the database.
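
Running it is then a one-liner, which can be repeated at any time since existing triggers are skipped:

-- (re)generate the audit triggers for every table in the public schema
SELECT create_audit_triggers();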

Quick guide to Auditing a (postgreSQL) Database

According to Wikipedia, ‘database auditing’ involves observing a database so as to be aware of the actions of database users. A bit like ‘spying on the user activity’.
This, of course, can be useful if you have a database with multiple users and store some sort of ‘confidential’ information.

Previously I implemented some code to do this in SQL Server, which basically involved:

  • creating a table to store the audit information;
  • creating an ‘encoding scheme’ for this table (i.e.: how to distinguish ‘edits’ from ‘removes’, etc.);
  • creating triggers for every table that I wanted to audit, which do the ‘house-work’ (these were, of course, generated from a script).

Moreover, these ‘encoded changes’ were exported to JSON, as they were the basis for a synchronization system that I implemented.

This was working fine, until I had to port the database to another RDBMS, which gave me the opportunity to rethink the structure that I had in place.
Instead of porting the code directly from T-SQL to PL/pgSQL, I did a little research and (gladly!) found out that PostgreSQL has ‘almost built-in’ support for auditing, in the form of a trigger-based recipe on the PostgreSQL wiki. It took me about 10 minutes to get to grips with it, which is what I describe next.

The first thing is to create a table to store the ‘changes’. The PostgreSQL wiki actually advises using a separate schema for this, which I agree is a good idea.

-- create a schema named "audit"
CREATE schema audit;
REVOKE CREATE ON schema audit FROM public;

CREATE TABLE audit.logged_actions (
schema_name text NOT NULL,
table_name text NOT NULL,
user_name text,
action_tstamp timestamp WITH time zone NOT NULL DEFAULT current_timestamp,
action TEXT NOT NULL CHECK (action IN ('I','D','U')),
original_data text,
new_data text,
query text
) WITH (fillfactor=100);

REVOKE ALL ON audit.logged_actions FROM public;

The stored information is: the relevant schema and table names, the username (so it is important to enforce a user policy here) and a timestamp. Then we have the ‘type’ of action, which is one of the following: insert, delete or update. Selects (and truncates) are not covered, which I guess is ok. Then we have the previous value and the current value. For inserts, we have a change from nothing to something; for deletes, a change from something to nothing; and for updates, a change from something to something. In a way, you could figure out the change type just by looking at these values, so the ‘action’ field is perhaps a bit redundant; but then you would have to represent ‘nothing’ as a sort of special value (or keyword) rather than an empty value (which could be mistaken for an empty string).
Finally we have the ‘query’, which is the exact query that triggered the audit record. Although this is also a bit redundant, since it could be reconstructed from the other values, it is not a bad idea, since it allows you to quickly see/reproduce exactly what happened.

Then we can create some indexes:

CREATE INDEX logged_actions_schema_table_idx
ON audit.logged_actions(((schema_name||'.'||table_name)::TEXT));

CREATE INDEX logged_actions_action_tstamp_idx
ON audit.logged_actions(action_tstamp);

CREATE INDEX logged_actions_action_idx
ON audit.logged_actions(action);

The next step is to create the trigger function that ‘fills’ this table on the relevant actions:

CREATE OR REPLACE FUNCTION audit.if_modified_func() RETURNS TRIGGER AS $body$
DECLARE
    v_old_data TEXT;
    v_new_data TEXT;
BEGIN
    /* If this is actually for real auditing (where you need to log EVERY action),
       then you would need to use something like dblink or plperl that could log outside the transaction,
       regardless of whether the transaction committed or rolled back.
    */

    /* This dance with casting the NEW and OLD values to a ROW is not necessary in pg 9.0+ */

    IF (TG_OP = 'UPDATE') THEN
        v_old_data := ROW(OLD.*);
        v_new_data := ROW(NEW.*);
        INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,original_data,new_data,query)
        VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_old_data,v_new_data, current_query());
        RETURN NEW;
    ELSIF (TG_OP = 'DELETE') THEN
        v_old_data := ROW(OLD.*);
        INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,original_data,query)
        VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_old_data, current_query());
        RETURN OLD;
    ELSIF (TG_OP = 'INSERT') THEN
        v_new_data := ROW(NEW.*);
        INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,new_data,query)
        VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_new_data, current_query());
        RETURN NEW;
    ELSE
        RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - Other action occurred: %, at %',TG_OP,now();
        RETURN NULL;
    END IF;

EXCEPTION
    WHEN data_exception THEN
        RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [DATA EXCEPTION] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
        RETURN NULL;
    WHEN unique_violation THEN
        RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [UNIQUE] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
        RETURN NULL;
    WHEN OTHERS THEN
        RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [OTHER] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
        RETURN NULL;
END;
$body$
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = pg_catalog, audit;

The ‘TG_OP’ variable inside a trigger is a string that tells us which operation fired the trigger (INSERT, UPDATE, or DELETE). The rest is fiddling around with the ‘old’ and ‘new’ values.

Actually, that is all there is to it, as far as having the audit ‘structure’ in place for PostgreSQL. To put it ‘in action’, auditing a table is as simple as this:

CREATE TRIGGER fr_frame_audit
AFTER INSERT OR UPDATE OR DELETE ON fr_frame
FOR EACH ROW EXECUTE PROCEDURE audit.if_modified_func();

In the example above, I created a trigger to audit table ‘fr_frame’ (of course, you can create a stored procedure to generate these statements, if you want to generate a trigger for auditing every table in the database…).
Then I went to table ‘fr_frame’ and deleted a row. It got stored like this:

"public";"fr_frame";"postgres";"2014-02-11 12:36:52.063113+00";"D";"(67,"bin frame",missing,missing,1,1,missing)";"";"DELETE FROM public.fr_frame WHERE id = '67'::integer"

Then I modified and added a row; it got stored like this:

"public";"fr_frame";"postgres";"2014-02-11 12:38:54.793902+00";"U";"(59,"Sampling Frame",missing,"Initial Sampling frame",1,2,missing)";"(59,"Sampling Frame","Sampling Frame","Initial Sampling frame",1,2,missing)";"UPDATE public.fr_frame SET nameeng='Sampling Frame'::text WHERE id = '59'::integer"
"public";"fr_frame";"postgres";"2014-02-11 12:39:14.290898+00";"I";"";"(68,test,test,test,1,2,test)";"INSERT INTO public.fr_frame(name, nameeng, description, id_cloned_previous_frame, id_source, comments) VALUES ('test'::text, 'test'::text, 'test'::text, '1'::integer, '2'::smallint, 'test'::text)"

As I said before, encoding non-existing values (nothing) as an empty string is not the most accurate approach, but since we have more information to complement it, it works. Also, the fact that it is row-based rather than field-based (as in my previous implementation) means the old and new values end up as serialized rows, which is not exactly normalized… on the other hand, I can accept it from the point of view of storage.

Overall, weighing all the pros and cons, I think it is a good solution, and it deserves high fives for being so simple to implement.

You can find this information (and more) in the PostgreSQL wiki.