2.4. Migrating Boro Solution¶
2.4.1. Introduction¶
Description¶
This section describes how to migrate an existing Boro Solution application to another server or update the operating system with installed Boro Solution, while keeping all settings and data, except task statistics. During migration, you can update Boro Solution to a newer version (requires approval from the technical support team) and select an OS from the list of supported systems.
The migration comprises the following steps:
Creating an archive with current application settings and a partial database dump.
Installing a new operating system on the old or new server.
Installing Boro Solution on the new OS:
application migration — the installation is done with the previously obtained package;
application migration and update — the installation requires a package with the latest application version.
Importing old settings and the database dump into the newly installed Boro Solution.
Licensing the new application (obtaining keys from the technical support team).
Restoring probe settings so they can work with the new Boro server.
Notes¶
If an external PostgreSQL DB is used, some steps should be skipped; they are marked as optional. In that case the database dump is not created, so you don’t need to import it into the new system.
To create an SQL dump (with the configuration described in this section), you will need free disk space of no more than 5% of your current DB size (a sketch for checking the database size in advance is given after these notes).
During most of the migration steps, the monitoring service will be unavailable, i.e. it will experience downtime.
The following data is preserved during migration: admin panel settings (email, Telegram), users, projects with their settings, records about probes and tasks, and archives with generated reports. The following data is not saved:
task statistics if external PostgreSQL DB is not used;
the versions of probes’ binary files (after migration, the server will offer the probe version from the installed package);
license keys and certificates (new certificates should be issued);
the HTTPS certificate and the corresponding web interface key (a certificate for the browser);
your own files (logo files if white label server is used).
Tip: if you plan to use the old server, save the settings from the /etc directory. They may be necessary to restore the configuration after installing a new OS, e.g., to configure network interfaces.

time tar -czf etc.bckp_$(date -Id).tgz /etc
# additionally include certbot home directory:
#time tar -czf etc.bckp_$(date -Id).tgz /etc /home/certbot
If the migration is interrupted for any reason after the DB dump is created, and the old server is restored/restarted, repeat the dump creation before resuming the migration.
This guide describes the slowest DB migration method (creating an SQL dump using pg_dump), which involves a long downtime. However, this method has the fewest pitfalls.
You can create a backup without stopping Boro services (no downtime). It allows you to restore the database, but some data may be lost (e.g., information about new tasks started after the dump creation began). This backup may be useful for estimating the dump file size (see the sketch after these notes).
You can also create a dump in the binary format. Migrating with a binary dump is faster, but there is a chance that the major PostgreSQL version may change when updating Boro Solution. In this case, the full compatibility of dumps is not guaranteed.
An alternative technique is to use streaming replication (potentially reduces downtime), but in this case, you need to factor in the problem of locale data changes. This technique is also more complicated and requires a second PostgreSQL instance on another server.
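As a rough aid to the size-related notes above, the sketch below shows one way to check the current size of the local database and to estimate the compressed dump size without stopping Boro services. It assumes the local database name boro_db used later in this guide; the exclusion list is shortened for brevity, so add the remaining --exclude-table-data options from the dump step for a closer estimate.

# Print the on-disk size of the local Boro database (assumed name: boro_db):
sudo --non-interactive --login --user=postgres \
  psql --no-password -c "SELECT pg_size_pretty(pg_database_size('boro_db'));"

# Estimate the compressed dump size with no downtime: the byte count printed
# by wc -c approximates the size of the future dump file.
sudo --non-interactive --login --user=postgres \
  pg_dump boro_db --no-password --clean --if-exists \
    --exclude-table-data='statistics*' \
    --exclude-table-data='timeline_data*' \
  | gzip | wc -c

# Alternative (not covered by this guide): a binary-format dump restorable with pg_restore.
#sudo --non-interactive --login --user=postgres pg_dump boro_db --no-password --format=custom --file=/var/tmp/boro_db.dump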
2.4.2. Migration¶
Note
The guide below describes how to migrate Boro Solution with a local database. Read the entire guide and the additional notes before starting the migration. If an external PostgreSQL DB is used, contact the technical support team.
On the old server:¶
Run all code snippets as root:
Save the configuration of all running probes on the server:
RAILS_PATH=${RAILS_PATH:-/opt/elecard/boro-rails-server}

_ruby_script='
  _p_a = App.where(id: RedisCache::Alive.get_live_app_ids).map do |a|
    a.auto_save_config(auto_save_descr: "before migrating Solution")
    [a.project_id, [a.id, a.desc]]
  end.group_by(&:first).map{|_p, _p_as| [_p, _p_as.map(&:last)]}.to_h

  puts "Live Probes:"
  Project.where(id: _p_a.keys).order(:id).each do |_p|
    puts "• project id:%4d, \"%s\", probes:" % [_p.id, _p.title]
    _p_a[_p.id].sort.each do |_app_id, _app_descr|
      puts "  • %4d, \"%s\"" % [_app_id, _app_descr]
    end
  end
'

su - boro -c "
  cd \"$RAILS_PATH\";
  source setup_env.sh;
  bin/rails runner '$_ruby_script'
"
RAILS_PATH can be different from the one indicated in the script. Specify a path to the installed Boro application.
Stop Boro Solution (at this stage, the service downtime starts):
systemctl stop \
  boro_puma.{default,web_api} \
  boro_sidekiq.{default,timeline} \
  boro_golang.{server,worker}
Note
Here you may get the following error message:
Failed to stop boro_sidekiq.timeline.service...
You can safely ignore it.
Optional. Skip this step if an external PostgreSQL DB is used. Create a database dump. Note that the data of the tables listed in the script below will NOT be included in the dump:
SQL_DUMP=${SQL_DUMP:-"dump.$(date -Id).common.sql.gz"}

tables_statistics_to_exclude=(
  statistics
  alarm_journals
  alarm_journal_actives
  timeline_data
  events
  thumb
  kpi_availabilities
  records
  # kpi_reports
  # alarm_records
  # alive_timelines
)

time (
  sudo --non-interactive --login --user=postgres \
    pg_dump boro_db \
      --no-password \
      --clean --if-exists \
      $(printf -- "--exclude-table-data=%s* " "${tables_statistics_to_exclude[@]}") \
) | gzip >"$SQL_DUMP"

chmod go-rwx "$SQL_DUMP"
dump.YYYY-MM-DD.common.sql.gz is the dump file necessary for the database migration.
Note
The database dump file contains security-sensitive information.
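If the dump has to be stored or transferred through a location you do not fully control, you may additionally encrypt it. A minimal sketch using symmetric gpg encryption (it assumes gpg is installed; the .gpg file name is only an example):

# Encrypt the dump with a passphrase (you will be prompted to enter it):
gpg --symmetric --cipher-algo AES256 --output "$SQL_DUMP.gpg" "$SQL_DUMP"

# On the new server, decrypt it back before restoring:
#gpg --decrypt --output "$SQL_DUMP" "$SQL_DUMP.gpg"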
Optional. Perform a simple validation of the dump file:
SQL_DUMP=${SQL_DUMP:-"dump.$(date -Id).common.sql.gz"}

_tbls=(users projects apps tasks)

_excerpt=$(gzip -dc "$SQL_DUMP" |
  sed -nE '/^COPY public\.('$(tr ' ' '|' <<<"${_tbls[@]}")') /,/^\\\.$/ { /^COPY public\./{p;=}; /^\\\.$/=}'
)

for _tbl in "${_tbls[@]}"; do
  _m=$(grep -EA2 "^COPY public\\.$_tbl " <<<"$_excerpt")
  printf "%9s: " "$_tbl"
  if [ -z "$_m" ]; then
    echo "ERROR, not found! "
    continue
  fi
  _line_first=$(tail -2 <<<"$_m" | head -1)
  _line_last=$(tail -1 <<<"$_m")
  printf "%8d\n" "$(($_line_last - $_line_first - 1))"
done
There should be no errors for the main tables, and each of them should contain a non-zero number of records, e.g.:
    users:       51
 projects:      141
     apps:     1488
    tasks:   458145
Back up the Rails application settings, secrets, and other data:
RAILS_PATH=${RAILS_PATH:-/opt/elecard/boro-rails-server}
RAILS_DUMP=${RAILS_DUMP:-"dump.$(date -Id).rails_artifacts.tgz"}

tar -C "$RAILS_PATH" -czf "$RAILS_DUMP" \
  .secret_key.file .secretkey.systemd.env \
  config/{.env,settings}.yml \
  tmp/alarms/
dump.YYYY-MM-DD.rails_artifacts.tgz is the Rails artifacts tarball.
Upload the dump file and the Rails artifacts tarball to a location where they will persist during the Solution migration process (especially if you are reinstalling the OS); one possible way is sketched below. The old server is now released, so you can install a fresh OS on it and use it as the new one.
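How you move the files is up to you; one option is to copy them to another host over SSH. A minimal sketch, where backup.example.com and the target directory are placeholders:

SQL_DUMP=${SQL_DUMP:-"dump.$(date -Id).common.sql.gz"}
RAILS_DUMP=${RAILS_DUMP:-"dump.$(date -Id).rails_artifacts.tgz"}

# Copy both files to a host that stays available while the old server is reinstalled:
scp "$SQL_DUMP" "$RAILS_DUMP" backup.example.com:/srv/boro-migration/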
On the new server:¶
Run all code snippets as root:
Install Boro Solution as described in the Installing Boro Solution section. If you install the application on a different server, you can start installation simultaneously with creating the DB dump to reduce the service downtime.
Upload the dump file and the Rails artifacts tarball to the server.
Stop Boro Solution:
systemctl stop \
  boro_puma.{default,web_api} \
  boro_sidekiq.default \
  boro_golang.{server,worker}
Optional. Skip this step if an external PostgreSQL DB is used. Create a new database:
RAILS_PATH=${RAILS_PATH:-/opt/elecard/boro-rails-server}

DB_NAME=$(grep 'database: ' "$RAILS_PATH/config/database.yml" | cut -d: -f2 | tr -d ' ')
DB_USER=$(grep 'username: ' "$RAILS_PATH/config/database.yml" | cut -d: -f2 | tr -d ' ')

su - postgres -c "
  dropdb $DB_NAME
  createdb --encoding=UTF8 --locale=en_US.UTF-8 $DB_NAME
  psql --command='
    GRANT all PRIVILEGES on DATABASE \"$DB_NAME\" to \"$DB_USER\";
    GRANT all PRIVILEGES on SCHEMA public to \"$DB_USER\";
  ' $DB_NAME
"
RAILS_PATH can be different from the one indicated in the script. Specify a path to the installed Boro application.
Optional. Skip this step if an external PostgreSQL DB is used. Restore data from the database dump file:
SQL_DUMP=${SQL_DUMP:-"dump.$(date -Id).common.sql.gz"}

time gzip -dc "$SQL_DUMP" | (sudo -iu postgres psql -tq $DB_NAME)

# Here you can change the user name if the old database used a different one:
#time gzip -dc "$SQL_DUMP" | sed "s/\"SenSay\"/\"$DB_USER\"/" | (sudo -iu postgres psql -tq $DB_NAME)
Optional. Skip this step if an external PostgreSQL DB is used. Run the database migration:
RAILS_PATH=${RAILS_PATH:-/opt/elecard/boro-rails-server}

su - boro -c "
  cd \"$RAILS_PATH\";
  source setup_env.sh;
  bin/rake db:migrate
"
Restore settings, secrets and other data:
RAILS_PATH=${RAILS_PATH:-/opt/elecard/boro-rails-server}
RAILS_DUMP=${RAILS_DUMP:-"dump.$(date -Id).rails_artifacts.tgz"}

gzip -dc "$RAILS_DUMP" | (sudo -u boro tar -C "$RAILS_PATH" -x)
Run Boro Solution services:
systemctl restart \
  boro_puma.{default,web_api} \
  boro_sidekiq.default \
  boro_golang.{server,worker}
Check Boro Solution:
run /opt/elecard/bin/status.sh;
check the configured base URL for probes (see the Changing the Server Name chapter):
grep client_api_base_url ${RAILS_PATH:-/opt/elecard/boro-rails-server}/config/.env.yml
check the web interface manually; there should be project data, all settings profiles, and other information (except running tasks and probes).
Complete the procedure for Installing Certificates.
Server migration is done.
2.4.3. Migrating Probes¶
The previously running probes should automatically connect to the new server if the following conditions are met:
if one or more IP addresses were specified in the HTTPS certificate for the old server, interfaces with such IP addresses must be present on the new server;
if one or more hostnames were specified in the HTTPS certificate for the old server, these DNS records must be updated to resolve to the new server;
a set of hostnames and/or IP addresses must be specified in the HTTPS certificate for the new server (these are what appear in the CN/Common Name attribute and, optionally, in the SAN); a sketch for checking the certificate and DNS resolution is given below.
If DNS records were updated, it is worth restarting the probes manually (via the console of the servers they are running on), since a probe may need up to 20 minutes to refresh DNS records. You also need to take DNS propagation latency into account.
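To verify these conditions before restarting anything, you can inspect the names contained in the certificate and check that the server name already resolves to the new address. A minimal sketch; the certificate path and boro.example.com are placeholders for your HTTPS certificate and server name:

# Show the subject (CN) and the Subject Alternative Name entries of the new server's certificate:
openssl x509 -in /path/to/boro_server.crt -noout -subject
openssl x509 -in /path/to/boro_server.crt -noout -text | grep -A1 'Subject Alternative Name'

# From a probe host: this should print the IP address(es) of the new Boro server:
getent hosts boro.example.com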
It is recommended to restart the probes from the web interface after they appear on the new server to prevent excessive RAM utilization.
If some of the conditions above are not met during migration (e.g., the hostname or IP addresses change), then to restore the probes’ operation you must either install them anew or correct the server address of each probe in the monitor.cfg file.
If some tasks did not start after moving a probe to the new server, you can restore them by applying the saved configurations created before the old server was stopped (these configurations are called auto save).
Update all live probes from the web interface if a new version is available for them.