How to set up Firebird asynchronous replication with HQbird Enterprise in a distributed environment: network share or FTP/SSH
Asynchronous replication in HQbird is the easiest way to create a mirror (also called warm-standby) copy of a production Firebird database. Firebird on the master server stores all changes in files called replication segments, and Firebird on the replica server consumes these segments – i.e., it applies the changes to the replica database.
If the master and replica Firebird instances are on different servers, the replication segments must be transferred from master to replica. There are two main ways to do it: a network share or FTP/SSH.
How to transfer HQbird replication segments between master and replica via network share
If the Firebird master and replica run on servers in a fast local network (1 Gbit recommended), a network share is probably the best way to transfer replication segments. Let's consider how to set it up properly.
First, you need to decide where to set up the network share that will store the asynchronous replication segments: it can be 1) on the master, 2) on the replica, or 3) in some third location.
Logs stored on the replica
The replication mechanism on the master writes asynchronous segments to the specified location. If that location becomes unavailable, replication stops, and then write operations on the master stop too.
Logs stored on the master server
A better configuration is when the network share is created on the master and mounted on the replica server. In this case, the replica reads replication segments from the network share; if the share becomes unavailable, the replica server stops updating its database but remains available for read-only requests. When the network share becomes available again, the replica resumes importing the replication segments.
Logs stored in the third location
The third option, when the network share is created on a third server or network storage, combines the properties of the first and second options.
The obvious option is to store replication segments in a local folder on the master and share them with the replica through a network share mounted on the replica. Storing segments on the replica server or in a third location is less convenient because the master then depends on the availability of the network share.
User rights for Firebird to access network shares
Second, you need to make sure that Firebird has enough rights to access the network share where the logs are located. Normally the Firebird service runs as Local System on Windows and as the firebird user on Linux.
On Windows, the easiest way is to run the Firebird service under another account with enough rights to access network shares; something like a domain admin account could be a good choice. Open the Services applet (run services.msc), open the properties of the Firebird service, and assign the new user on the Log On tab. Restart Firebird for these changes to take effect.
If the master and replica servers are in different Windows domains, the setup will require fine-tuning of the security settings for the network share. In that case we simply recommend the FTP approach – see below.
On Linux the situation is a bit more difficult. Even if the master creates replication segment files as the firebird user, and the replica server also runs as firebird, the numeric user IDs (UIDs) of both firebird users must be identical for Firebird to access the replication segments on the network share. If you don't want to align them, choose FTP instead.
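The UID check can be done by comparing the firebird entries from each machine's /etc/passwd. A minimal sketch, assuming a firebird account exists on both hosts (the passwd lines below are illustrative examples, not real output):

```python
# Sketch: verify that the "firebird" account has the same numeric UID on two
# hosts by comparing /etc/passwd entries copied from each machine.

def uid_of(passwd_text: str, user: str) -> int:
    """Extract the numeric UID of `user` from /etc/passwd-style text."""
    for line in passwd_text.splitlines():
        fields = line.split(":")
        if fields[0] == user:
            return int(fields[2])  # passwd format: name:pw:uid:gid:...
    raise KeyError(f"user {user!r} not found")

# Illustrative lines, as if copied from each server's /etc/passwd:
master_passwd = "firebird:x:84:84:Firebird:/var/lib/firebird:/sbin/nologin"
replica_passwd = "firebird:x:84:84:Firebird:/var/lib/firebird:/sbin/nologin"

same = uid_of(master_passwd, "firebird") == uid_of(replica_passwd, "firebird")
print("UIDs match:", same)
```

If the UIDs differ, change one of them (and the ownership of the Firebird files) so they match, or use the FTP approach instead.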
So, choose the option you prefer (logs stored on the replica, on the master, or in a third location) and set it up accordingly.
For logs stored on the replica, open the replication setup dialog on the master server, set the «Log archive directory» parameter to the network share exposed from the replica's local folder, and restart Firebird to apply the new settings.
After that, Firebird on the master will start creating replication segment files on the specified share, and Firebird on the replica server should be configured to read these segments from its local folder.
For the option "Logs on the master" (recommended), set up the master to store replication segments in a local folder, expose that folder as a network share, and configure the replica to read the logs from the share.
Pros and cons of the network share approach:
- Easy to configure in a typical local network
- Requires a fast and stable network connection (1 Gbit or faster)
- Not applicable to distributed environments
- Segments are transferred over the network as is: not compressed, not encrypted
How to transfer HQbird replication segments between master and replica via FTP
Cloud Backup in HQbird FBDataGuard
In a distributed environment – when the network connection between the master and the replica is unstable or has high latency, or when the servers are in different geographical regions – the best way to transfer replication segments is through FTP or SSH. HQbird FBDataGuard can transfer replication segment files from the master server to a remote server.
First, the asynchronous master should be configured to save replication logs into a local folder, for example, into C:\Databases\Replication\LogArch in the example below:
Then we can configure the Cloud Backup job to monitor this folder for new replication segments and upload them to the remote FTP server.
As you can see in the screenshot below, the Cloud backup job checks the folder specified in «Monitor directory» at the interval specified in «Check period, minutes». It looks for files whose names match the mask in «Monitor files». By default, it looks for archived replication segments, which have names like «dbwmaster.fdb.arch-000000001».
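The important detail is that segments must be picked up in numeric order. A minimal sketch of how such a monitoring pass might select and order files by the default mask (the file list is illustrative):

```python
# Sketch: pick up archived replication segments in numeric order.
# Names follow the default pattern mentioned above,
# e.g. "dbwmaster.fdb.arch-000000001".
import re

SEGMENT_RE = re.compile(r"\.arch-(\d+)$")

def pending_segments(names):
    """Return segment file names sorted by their numeric suffix."""
    matched = [(int(m.group(1)), n) for n in names
               if (m := SEGMENT_RE.search(n))]
    return [name for _, name in sorted(matched)]

files = ["dbwmaster.fdb.arch-000000003",
         "dbwmaster.fdb.arch-000000001",
         "notes.txt",                      # ignored: does not match the mask
         "dbwmaster.fdb.arch-000000002"]
print(pending_segments(files))
```

Note how non-matching files are simply ignored, and the numeric suffix (not the lexicographic name) drives the order.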
By default, Cloud backup compresses and encrypts replication segments before sending them: FBDataGuard creates a compressed and encrypted copy of each replication segment and uploads it to the specified target server.
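The principle of compress-before-upload can be illustrated with the standard library. This is only a sketch: HQbird uses its own packed, encrypted format, and no real encryption is performed here (gzip stands in for the whole pack step):

```python
# Sketch of the compress-before-upload idea, using gzip as a stand-in
# for HQbird's internal packed, encrypted format.
import gzip

segment = b"fake replication segment payload " * 100  # placeholder data

packed = gzip.compress(segment)        # what would be uploaded
restored = gzip.decompress(packed)     # what the receiver would recover

print(len(segment), "->", len(packed), "bytes; roundtrip ok:",
      restored == segment)
```

Compression matters most on slow or metered links, which is exactly the distributed scenario this approach targets.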
There are several types of target servers: FTP, FTP over SSL/TLS, and FTP over SSH. When you select the necessary type, the dialog shows the mandatory fields to be completed.
Note: if you don't have an FTP server installed on a Windows target server, install FileZilla Server – a popular, fast, and lightweight FTP server for Windows.
The last group of parameters controls the behavior of the Cloud backup job.
- Delete local, prepared copy – On by default. The Cloud backup job deletes the compressed copy of the replication segment after a successful upload to the target server. If you don't want to keep these copies on the master server, keep this parameter enabled.
- Remove original files after successful backup upload – Off by default, which means that replication segments are not deleted by FBDataGuard after uploading. This can be useful if you want to keep the full history of changes in replication segments; but be careful – with intensive write activity, replication segments can occupy a lot of space (terabytes).
- Send Ok report – Off by default. When enabled, an email is sent to the address specified in Alerts every time a replication segment is uploaded.
- Perform fresh backup – disabled by default. Cloud backup remembers the number of the last replication segment it sent. If you need to start again from scratch, from segment 1 (for example, after re-initialization of replication), enable this parameter. Please note that it automatically becomes disabled again after the counter is reset.
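The last-segment counter with a one-shot reset can be sketched as follows. This is a hypothetical illustration of the idea, not HQbird's actual state storage (the state file name is invented):

```python
# Sketch: remember the last uploaded segment number in a small state file,
# with a "fresh" flag that resets the counter once (like "Perform fresh
# backup"). File name and logic are illustrative, not HQbird internals.
import os
import tempfile

STATE_FILE = os.path.join(tempfile.gettempdir(), "last_segment.txt")

def last_sent() -> int:
    """Read the last uploaded segment number; 0 if no state yet."""
    try:
        with open(STATE_FILE) as f:
            return int(f.read())
    except (FileNotFoundError, ValueError):
        return 0

def record_sent(number: int, fresh: bool = False) -> None:
    """Persist the counter; fresh=True restarts numbering from scratch."""
    value = 0 if fresh else number
    with open(STATE_FILE, "w") as f:
        f.write(str(value))

record_sent(42)
print(last_sent())            # the recorded counter
record_sent(43, fresh=True)
print(last_sent())            # reset: next run starts from segment 1
```

A persisted counter like this is what lets the job survive restarts without re-uploading segments it has already sent.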
As a result, FBDataGuard will upload encrypted and compressed replication segments to the remote server. To decompress and decrypt them back into regular replication segments, another instance of HQbird FBDataGuard should be installed on the replica server. Let's consider how to configure it:
Cloud Backup Receiver
Cloud Backup Receiver checks the files in the folder specified in «Monitor directory» at an interval equal to «Check periods, minutes». It checks only files with the specified mask (*arch* by default) and the specified extension (.replpacked by default); when it encounters such files, it decompresses and decrypts them with the password specified in «Decrypt password» and copies them to the folder specified in «Unpack to directory».
There are the following additional parameters:
- Remove packed files after unpacking – On by default. FBDataGuard deletes the received compressed files after successful unpacking.
- Send Ok report – Off by default. When enabled, FBDataGuard sends an email about each successful unpacking of a segment.
- Perform fresh unpack – disabled by default. Cloud Backup Receiver remembers the number of the last replication segment it unpacked. If you need to start unpacking from scratch, from segment 1 (for example, after re-initialization of replication), enable this parameter. Please note that it automatically becomes disabled again after the counter is reset.
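The receiver's unpack loop can be sketched as follows. This is a minimal illustration only: gzip stands in for HQbird's packed, encrypted format, and the directories are temporary placeholders rather than real «Monitor directory»/«Unpack to directory» settings:

```python
# Sketch of the receiver side: find *arch*.replpacked files in a monitored
# directory, unpack them, write the result to the target directory, and
# remove the packed original (like «Remove packed files after unpacking»).
import glob
import gzip
import os
import tempfile

monitor_dir = tempfile.mkdtemp()   # stands in for «Monitor directory»
unpack_dir = tempfile.mkdtemp()    # stands in for «Unpack to directory»

# Simulate a received, packed segment.
packed_name = os.path.join(monitor_dir,
                           "dbwmaster.fdb.arch-000000001.replpacked")
with open(packed_name, "wb") as f:
    f.write(gzip.compress(b"segment payload"))

for path in glob.glob(os.path.join(monitor_dir, "*arch*.replpacked")):
    # Strip the .replpacked extension to get the segment file name back.
    target = os.path.join(unpack_dir,
                          os.path.basename(path)[:-len(".replpacked")])
    with open(path, "rb") as src, open(target, "wb") as dst:
        dst.write(gzip.decompress(src.read()))
    os.remove(path)

print(sorted(os.listdir(unpack_dir)))  # the restored segment file
```

The restored files then land exactly where the replica's «Log archive directory» is pointed, as described below.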
After setting up Cloud Backup Receiver, configure the replica to look for replication segments: set «Log archive directory» to the same path as «Cloud Backup Receiver» -> «Unpack to directory».
As you can see, it is easy to set up asynchronous replication between master and replica with either a network share or FTP/SSH.
Please feel free to contact us with any questions: [email protected]