Slony-I for Debian
==================

To run Slony-I, you need this package (slony1-2-bin) and the
postgresql-x.y-slony1-2 package for the PostgreSQL server version
that you intend to use.

Please read the documentation in the package slony1-2-doc.  Setting
up a robust Slony-I system is not trivial.  An introductory example
can be found in the file /usr/share/doc/slony1-2-doc/examples/SAMPLE.gz.

In the context of a Debian system with the slony1-2 packages
installed, the basic setup procedure is as follows.  Note that most
of these commands need database superuser privileges.

1. Create the master database and create the database schema.  (You
   can optionally load data now or later, or you can use an existing
   database that is already in use.)

   $ createdb -h $MASTERHOST $MASTERDBNAME

2. Load the procedural language PL/pgSQL into the master database.

   $ createlang -h $MASTERHOST plpgsql $MASTERDBNAME

3. Create the slave database(s).

   $ createdb -h $SLAVEHOST $SLAVEDBNAME

4. Copy the database schema from the master to the slave(s).
   (Slony-I does not replicate schema changes.)

   $ pg_dump -s -h $MASTERHOST $MASTERDBNAME | psql -h $SLAVEHOST $SLAVEDBNAME

   This also copies the definition of PL/pgSQL, which you need on the
   slaves as well.  You can also add PL/pgSQL to the slaves manually
   if you forgot it.

Interlude: There are two ways to run and interact with Slony-I.  One
way is to set up a separate configuration file for each node, start a
slon daemon for each node, and write the slonik scripts for managing
the nodes by hand.  Let's call this the "by hand" method.  The other
way is to use the so-called Perl tools: you configure your nodes and
sets in /etc/slony1/slon_tools.conf, start the nodes using the
slon_start program, and manage the nodes using the slonik_* programs.
This is the "Perl tools" method; while it was born as something of a
hack, it seems to have established itself as the more popular method.
The "by hand" method, however, is more flexible for extreme cases.
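For orientation, a minimal slon_tools.conf for a two-node cluster
with one replication set might look like the sketch below.  All
names, hosts, and the table list are purely illustrative; consult the
sample file shipped in /usr/share/doc/slony1-2-bin/examples/ for the
authoritative format and the full list of options.

```perl
# /etc/slony1/slon_tools.conf -- illustrative sketch only
$CLUSTER_NAME = 'acctdb';            # cluster name (example)
$LOGDIR       = '/var/log/slony1';   # where slon logs go
$MASTERNODE   = 1;                   # node ID of the master

# One add_node() call per cluster node (hosts/users are examples)
add_node(node => 1, host => 'masterhost',
         dbname => 'acctdb', port => 5432, user => 'postgres');
add_node(node => 2, host => 'slavehost',
         dbname => 'acctdb', port => 5432, user => 'postgres');

# Replication sets: which tables (with primary keys) to replicate
$SLONY_SETS = {
    "set1" => {
        "set_id"       => 1,
        "table_id"     => 1,
        "pkeyedtables" => [ 'public.accounts' ],   # example table
    },
};

1;   # the file is require'd by the Perl tools, so return true
```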
Depending on which method you choose, the next steps differ.

5. [by hand] Configure the replication setup.  This is done by
   writing elaborate scripts that are fed to the "slonik" tool.
   Details are not covered here; see the aforementioned SAMPLE file.

   [Perl tools] Edit /etc/slony1/slon_tools.conf to reflect your
   setup.  A sample of this file is in
   /usr/share/doc/slony1-2-bin/examples/.  If you put passwords in
   that file, be sure to set up sensible permissions.  Then
   initialize the cluster by running:

   $ slonik_init_cluster | slonik

   Then set up the sets on the master node, e.g.,

   $ slonik_create_set set1 | slonik

   If you want to create more than one cluster, name the files
   /etc/slony1/slon_tools_foo.conf etc. instead.

6. Start the Slony-I daemon process, the program "slon".  You need
   one running daemon for each cluster node (master or slave), and
   you need separate daemons for each replication cluster.

   [by hand] The Debian packages support starting an arbitrary
   number of "slon" daemons via the init script /etc/init.d/slony1.
   To do that, create a subdirectory under /etc/slony1/ for each
   instance and place a slon configuration file with the name
   slon.conf into each of these subdirectories (so it might be
   something like /etc/slony1/acctdb-slave/slon.conf).  You can use
   the file /usr/share/doc/slony1-2-bin/examples/slon.conf-sample as
   an example.  You need to change at least the parameters
   "cluster_name" and "conn_info".

   [Perl tools] Edit the file /etc/default/slony1 and set the
   variable SLON_TOOLS_START_NODES to a space-separated list of node
   numbers to start using the Perl tools method, either like "1 2 3"
   or like "node1 node2 node3".  If you have created more than one
   cluster, prepend the cluster name, like "foo:1 foo:2 bar:node1".

   Then run

   # /etc/init.d/slony1 start

   on each host.  This starts the slon daemons configured in either
   method.

7. Subscribe the slave node(s) to the master node.

   [by hand] This is again done using the "slonik" tool.
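   For illustration, a by-hand subscription script fed to slonik
   might look like the following sketch.  The cluster name,
   connection strings, and node/set numbers are hypothetical; adapt
   them to the preamble you used when initializing your cluster.

```
# subscribe.slonik -- illustrative sketch only
cluster name = acctdb;
node 1 admin conninfo = 'host=masterhost dbname=acctdb user=postgres';
node 2 admin conninfo = 'host=slavehost dbname=acctdb user=postgres';

# subscribe set 1, provided by node 1, to the receiving node 2
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
```

   You would run this as "slonik subscribe.slonik" (or pipe it into
   slonik on standard input).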
   [Perl tools] Use a command like this:

   $ slonik_subscribe_set set1 node2 | slonik

At this point, data changes made on the master should eventually
(after about ten seconds at most) appear on the slaves.

All slon instances configured either way share a name space (for the
purpose of log file and PID file names, for instance), with ties
going to the "by hand" method.  That is, if you have a node
configured both ways, the init script will start it using the "by
hand" method.

Log files for each slon instance can be found in the directory
/var/log/slony1/.

Note that the slon processes need to be able to connect to the
PostgreSQL servers through the ordinary authentication mechanism.
In the normal case of a slon process running locally under the user
"postgres", this is taken care of by the default authentication
configuration of PostgreSQL on Debian.  If you have a different
setup, you need to adjust the authentication configuration.

 -- Peter Eisentraut, January 2010