Back-up And Restoration Of Data On Mediawiki With Docker

From MIT Technology Roadmapping

Launching mediawiki using Docker-compose

For Linux operating system:

On the server side, only Docker and Docker Compose are required. Use

$ [sudo] apt-get install docker.io

and

$ [sudo] pip install docker-compose

to install them.

The file 'docker-compose.yml' is already available at [1]; download it to the working directory and launch the docker containers using

$ [sudo] docker-compose up

You can also append the option -d after the command up to run the containers in the background.
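For reference, the compose file looks roughly like the sketch below. This is only an illustration based on Bitnami's published examples; the service names, image tags, ports, and environment variables here are assumptions, and the actual file from [1] is authoritative.

```yaml
version: '2'
services:
  mariadb:
    image: bitnami/mariadb:latest
    environment:
      # Assumption: empty root password, matching the defaults used later in this guide
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - mariadb_data:/bitnami
  mediawiki:
    image: bitnami/mediawiki:latest
    ports:
      - '80:80'
    depends_on:
      - mariadb
    volumes:
      - mediawiki_data:/bitnami
volumes:
  mariadb_data:
  mediawiki_data:
```

The volumes section at the bottom is the part that matters for back-up, as explained below.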

Comment: The docker images are pulled directly from Bitnami, which keeps them up to date, reliable, and convenient.

With docker-compose, the containers are set up and started automatically; a screenshot of the output once MediaWiki is ready is shown below.

File:DOC1-25-2016011501-1.png

Comment: Because everything runs inside docker containers, it does not matter whether a database such as MariaDB or MySQL is already installed on the server, or what state it is in. You can also change configurations such as environment variables or ports by editing the file docker-compose.yml.

The configuration file LocalSettings.php and the Toyhouse logo are also available on TensorCloud (the GitHub repository); they can be used to overwrite the default settings of the new MediaWiki. As an example, let us change the logo:

First download the picture from GitHub (or upload it from your computer using sftp or another tool), and put it in your working directory, say ./

Use

$ [sudo] docker ps -a

to list all containers (including stopped ones), and find the one running your MediaWiki.

Copy the Container ID of the container, which is 28114d4d0506 in the following screenshot.

File:DOC1-25-2016011501-2.png

Use the command docker cp to copy the picture into the container, for example

$ [sudo] docker cp ./toyhouse.png 28114d4d0506:/opt/bitnami/mediawiki/resources/assets/wiki.png

Then refresh the webpage and you will see the updated logo. Now the index page is as follows:

File:DOC1-25-2016011501-3.png

Exporting data from the database and packaging uploaded files

According to the design of TensorCloud, uploaded files and the data stored in the database (MariaDB) are stored separately but backed up together.

First we should locate the uploaded files.

Open the file docker-compose.yml and find the volumes option, which determines where the uploaded files are stored.

File:DOC1-25-2016011501-4.png

The directory after the colon is the directory inside the container that is mounted to the local directory; in the screenshot it is /bitnami, which stores all the files MediaWiki needs inside the container.

The directory before the colon is the local directory, which can be either an absolute path or a volume label. In the screenshot, mediawiki_data is a volume label. Use the command

$ [sudo] docker volume ls

to view all the volume labels used by docker.

File:DOC1-25-2016011501-5.png

Find the one we need, which is mediawiki_mediawiki_data in the list shown in the screenshot. You can also pipe the output through grep to narrow down the list, for example

$ [sudo] docker volume ls | grep mediawiki_mediawiki_data

Then use the command docker volume inspect to find the absolute path of a volume:

$ [sudo] docker volume inspect mediawiki_mediawiki_data

File:DOC1-25-2016011501-6.png

The key 'Mountpoint' shows the absolute path of the volume. Use

$ [sudo] ls /var/lib/docker/volumes/mediawiki_mediawiki_data/_data

to see what is under this directory.

File:DOC1-25-2016011501-7.png

The uploaded files are actually stored under /directory/to/_data/mediawiki/images, which is

/var/lib/docker/volumes/mediawiki_mediawiki_data/_data/mediawiki/images

in this example.

To package them, use

$ [sudo] tar -czvf ./uploaded_file_on_wiki.tar /var/lib/docker/volumes/mediawiki_mediawiki_data/_data/mediawiki/images

Note that the -z flag already compresses the archive with gzip, so no separate gzip step is needed. (tar will also warn that it removes the leading '/' from member names; that is expected.)

So far, we have collected the files uploaded to the MediaWiki, which make up half of the data we need.

The rest of the data lives in the database, MariaDB in this example, which runs as its own docker container. First find its Container ID using

$ [sudo] docker ps -a

File:DOC1-25-2016011501-8.png

Now we have its Container ID, b5a20d272753, and its name, 'mediawiki_mariadb_1'.

We will use the command mysqldump to export the data to a .sql file. Wrap the command in sh -c so that the output redirection happens inside the container (where docker cp can later find the file), and do not pass the -d flag, which for mysqldump means "no data" and would export the table structure only:

$ [sudo] docker exec -it [Container ID] sh -c 'mysqldump -u [Username] -p bitnami_mediawiki > /directory/to/your/file/xxxx.sql'

Press Enter and input the password (which is empty by default), and the data in the database bitnami_mediawiki will be exported to the file.

In this case, for example, use

$ [sudo] docker exec -it b5a20d272753 sh -c 'mysqldump -u root -p bitnami_mediawiki > /mediawiki_data.sql'

You can also pipe the output through gzip to compress the file.
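As an alternative to dumping inside the container and then copying the file out, the dump can be streamed straight to a compressed file on the host. A sketch, wrapped in a function so it can be reused; the container ID and the empty root password are the assumptions from the example above:

```shell
# Sketch: stream a MariaDB dump out of a container and gzip it on the host.
# Assumptions: root has an empty password (Bitnami default used in this guide),
# and the first argument is the MariaDB container ID from `docker ps -a`.
dump_db() {
  local container="$1" outfile="$2"
  # mysqldump runs inside the container; gzip and the redirect run on the host,
  # so no intermediate file or `docker cp` is needed.
  docker exec "$container" mysqldump -u root bitnami_mediawiki | gzip > "$outfile"
}
```

Usage would be, for example, `dump_db b5a20d272753 ./mediawiki_data.sql.gz`.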

Finally use docker cp to copy the file to your host machine (i.e. move the file outside the container), for example

$ [sudo] docker cp b5a20d272753:/mediawiki_data.sql ./mediawiki_data.sql

Now all the data we need from the MediaWiki has been packaged into files, which we can store, transfer, synchronize, or process however we like.

Restoring data from existing file

First we will deal with the database. Use docker cp to copy the file from your host machine into the container, for example

$ [sudo] docker cp ./mediawiki_data.sql b5a20d272753:/mediawiki_data.sql

If you compressed the file earlier, decompress it first.

Then we just need to restore the database from the file. Again wrap the command in sh -c so that the input redirection happens inside the container, and specify the database name (the dump does not include a USE statement):

$ [sudo] docker exec -it b5a20d272753 sh -c 'mysql -u root -p bitnami_mediawiki < /mediawiki_data.sql'

and the task will be finished.

Note that the database bitnami_mediawiki must exist before you restore it from the file, or the operation may fail. If you deleted the database by accident, first rebuild an empty one by launching docker-compose, which sets up a brand-new container from the docker images.

Finally we just need to unpack the archive containing all the uploaded files. Since the archive was created from an absolute path (with the leading '/' stripped by tar), extract it relative to the root directory:

$ [sudo] tar -xzvf ./uploaded_file_on_wiki.tar -C /

Just overwrite the directory directly, and never delete it before you unpack the .tar file; the database still records those files, and deleting them would leave the wiki with broken file references.

Testing

To test this process, first complete the back-up and keep the resulting files. Then edit the wiki, register new accounts, or make any other changes you like.

After the restore process finishes, all of those changes will be gone: the whole wiki will be back to what it was before, and the new accounts will no longer exist.

Automated Backup and Restore Processes

Because of how docker images and containers work, docker-compose should be launched only once; running it again may recreate the containers from the original images, replacing yours with a brand-new MediaWiki.

As long as the containers are not recreated, the Container IDs and names of the MediaWiki and MariaDB containers remain unchanged.

Write them down, and put all the commands needed into a .sh script, so that the back-up and restore processes can each be run with a single command.
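A sketch of such a script, combining the steps from this guide into two functions. The container name, volume name, database name, and empty root password are the assumptions used throughout the walkthrough above; adjust them for your own deployment.

```shell
#!/usr/bin/env bash
# wiki_backup.sh -- sketch of a one-command back-up/restore helper.
# Assumed names, taken from the examples in this guide:
set -euo pipefail

DB_CONTAINER="mediawiki_mariadb_1"     # MariaDB container name from `docker ps -a`
VOLUME="mediawiki_mediawiki_data"      # docker volume mounted at /bitnami
DB_NAME="bitnami_mediawiki"            # database created by the Bitnami image

backup() {
  # Resolve the volume mount point instead of hard-coding the path
  local mnt
  mnt="$(docker volume inspect --format '{{ .Mountpoint }}' "$VOLUME")"
  # Package the uploaded files (the -z flag gzips the archive)
  tar -czf ./uploaded_file_on_wiki.tar "$mnt/mediawiki/images"
  # Stream the database dump straight to the host (empty root password assumed)
  docker exec "$DB_CONTAINER" mysqldump -u root "$DB_NAME" > ./mediawiki_data.sql
}

restore() {
  # The archive stores paths relative to /, so extract from the root directory
  tar -xzf ./uploaded_file_on_wiki.tar -C /
  # Feed the dump back in over stdin (-i keeps stdin open for the redirect)
  docker exec -i "$DB_CONTAINER" mysql -u root "$DB_NAME" < ./mediawiki_data.sql
}

# Usage: source this file, then call `backup` or `restore`.
```

Note that this variant streams the dump to and from the host, so the intermediate docker cp steps from the manual walkthrough are not needed.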