This document describes a process for creating database replication nodes from a transactional backup of existing replicated databases. Let's assume that your database replication cluster is humming along nicely, but you find that you need to add a slave node to it in order to handle increased load on your application.
Keeping the master and all of the existing slaves up and running during this process is important. Just as important is ensuring that you have a transactional backup of either the master database or one of the slaves. Check out the PostgreSQL documentation for on-line backup and point-in-time recovery.
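One common way to take a transactionally consistent backup of a single database is pg_dump; as a rough sketch (the source database name master_db is just a placeholder here, not something from your cluster):

```shell
# Dump a consistent snapshot of the source database to a plain SQL file.
# pg_dump runs inside a single transaction, so the dump is transactionally
# consistent even while the database stays up and serving the cluster.
pg_dump master_db > database-backup.sql
```

If you need point-in-time recovery rather than a one-shot dump, use the WAL-archiving approach from the PostgreSQL documentation linked above instead.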
For the sake of argument, let's say that your full database backup is stored in the file database-backup.sql. After the backup file has been created, import it into the new slave database. For example:
psql new_slave_db -f database-backup.sql
This will create a replica of the database on your new slave.
Once you have the transactional backup restored on your new slave, you'll need to provision it in the Bruce configuration database. You'll use the admin tool for this. First, you'll need to create an XML file that contains metadata about your new slave. The format is very straightforward.
This XML document contains metadata for a single node. The format is self-explanatory. In this example, we are providing an ID, a name, a JDBC URL, and a regular expression used to determine which tables to replicate. Then we add that new node to the cluster mapping table.
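As a rough illustration only (the element and attribute names below are assumptions, not the actual Bruce schema; compare against the node entries already in your cluster's configuration for the real format), such a metadata file might look something like:

```
<node id="3"
      name="new_slave"
      uri="jdbc:postgresql://localhost:5432/new_slave_db?user=bruce"
      includeTable="public\..*"/>
```

The regular expression (here matching every table in the public schema) controls which tables the new node replicates.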
Now that you have the database configured and the node metadata in an XML file, the only other step is to use the admin tool to provision the new node.
# cd $BRUCE_HOME/bin
# ./admin.sh -data newnode.xml -initsnapshots NONE -operation INSERT -url jdbc:postgresql://localhost:5432/bruce_config?user=bruce
Some notes on the command-line options:
-data newnode.xml - This is the XML file that contains the node metadata. It will be loaded into the bruce configuration database.
-initsnapshots NONE - This tells the admin tool how the slave's snapshot status table should be initialized. IMPORTANT: If the transactional backup is from a slave node, you should specify the SLAVE option. (Yes, the SLAVE option.) This causes the admin tool to assume the new node used to be a master, so the snapshot status is set based on the current state of the data in the new node. Confused yet? Good!
-operation INSERT - Tells the admin tool to insert the new node data (as opposed to update, delete, or a clean insert where all other data is wiped out).
-url jdbc:postgresql://localhost:5432/bruce_config?user=bruce - This URL should point to the configuration database that you used when you set up replication.
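Putting those options together for the slave-backup case described above, the invocation differs only in the -initsnapshots argument:

```shell
# Provision a new node whose data was restored from a backup taken on an
# existing SLAVE (not the master): initialize the snapshot status table
# from the current state of the restored data.
cd $BRUCE_HOME/bin
./admin.sh -data newnode.xml -initsnapshots SLAVE -operation INSERT \
    -url jdbc:postgresql://localhost:5432/bruce_config?user=bruce
```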
Now that you've backed up, restored, and provisioned, the replication daemon needs to be restarted. Don't worry: once it is back up, it will pick up any changes that occurred in the master database while it was down.
# cd $BRUCE_HOME/bin
# ./startup.sh MyCluster
The shutdown.sh command can take a long time to complete. The prompt will return immediately, but you should wait until the log indicates that all threads have been stopped.
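The full restart sequence might look like the following sketch. It assumes shutdown.sh takes the same cluster-name argument as startup.sh, and the log file path is a placeholder; check your installation's logging configuration for the real location:

```shell
# Stop the replication daemon, wait for its log to confirm that all
# threads have exited, then start it again.
cd $BRUCE_HOME/bin
./shutdown.sh MyCluster
# The prompt returns immediately; watch the log (path is an assumption)
# until it reports that all threads have been stopped.
tail -f ../logs/bruce.log
./startup.sh MyCluster
```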