Developing in Linux containers
In PR #1803 we added a lot of scripts to Searx’s boilerplate. In this blog post I will show you how you can make use of them in distributed and heterogeneous development cycles (TL;DR: jump to the Abstract).
Normally in our development cycle, we edit the sources and run some tests and/or
builds using make before we commit. This cycle is simple and works well, but it
can fail in some aspects we should not overlook.
The environment in which we run all our development processes matters!
The Makefile and the Python environment encapsulate a lot for us, but they do not cover all prerequisites. For example, the software may depend on packages that are installed on the developer’s desktop but are usually not preinstalled on a server or client system. Another example: settings may have been applied to the software on the developer’s host that would never be set on a production system.
Linux Containers (LXC) are isolated environments, and it is always a good choice not to mix up all the prerequisites of all the projects a developer contributes to on the developer’s desktop.
The scripts from PR #1803 can be divided into those that install and maintain software:
and the script utils/lxc.sh, with which we can scale our installation, maintenance, and even development tasks over a stack of containers, which we call: Searx’s lxc suite.
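For example, once the suite is built (see below), a single task can be fanned out over the whole stack of containers. A small sketch, assuming the special container name “--” addresses all containers of the suite, as the script’s usage output describes:

$ sudo -H ./utils/lxc.sh cmd -- ls -la .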
Before you can start with containers, you need to install and initialize LXD once:
$ snap install lxd
$ lxd init --auto
And you need to clone from origin, or, if you have your own fork, clone from your fork:
$ cd ~/Downloads
$ git clone https://github.com/searx/searx.git
$ cd searx
The searx suite consists of several images; see
LXC_SUITE=(... near git://utils/lxc-searx.env#L19. For this blog post
we exercise on an archlinux image. The container of this image is named
searx-archlinux. Let’s build the container, but be sure that this container
does not already exist, so first let’s remove a possible old one:
$ sudo -H ./utils/lxc.sh remove searx-archlinux
$ sudo -H ./utils/lxc.sh build searx-archlinux
In this container we install all services, including searx, morty & filtron, at once:
$ sudo -H ./utils/lxc.sh install suite searx-archlinux
To proxy HTTP from filtron and morty to the outside of the container, install nginx into the container. Once for the bot blocker filtron:
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    ./utils/filtron.sh nginx install
...
INFO:  got 429 from http://10.174.184.156/searx
and once for the content sanitizer (content proxy morty):
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    ./utils/morty.sh nginx install
...
INFO:  got 200 from http://10.174.184.156/morty/
On your system, the IP of your searx-archlinux container will differ from
http://10.174.184.156/searx; just open the URL reported in your installation
log in your web browser from the desktop to test the instance from outside
of the container.
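If you missed the URL in the install messages, you can look up the container’s IP with a plain LXD client command (this is standard LXD, not one of the searx scripts); the IPV4 column shows the address to use:

$ sudo lxc list searx-archlinux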
In such a searx suite, admins can maintain and access the debug log of the different services quite easily.
Usually you open a root bash using sudo -H bash. In the case of LXC containers,
open the root bash in the container using:

$ sudo -H ./utils/lxc.sh cmd searx-archlinux bash
INFO:  [searx-archlinux] bash
[root@searx-archlinux searx]# pwd
/share/searx
The prompt [root@searx-archlinux ...] signals that you are the root user in
the searx container. To debug the running searx instance use:
$ ./utils/searx.sh inspect service
...
use [CTRL-C] to stop monitoring the log
...
Back in the browser on your desktop, open the service http://10.174.184.156/searx
and run your application tests while the debug log is shown in the terminal from
above. You can stop monitoring using CTRL-C; this also disables the “debug
option” in searx’s settings file and restarts the searx uwsgi application. To
debug the filtron and morty services analogously, use:
$ ./utils/filtron.sh inspect service
$ ./utils/morty.sh inspect service
Another point we have to notice is that each service (searx, filtron and morty) runs under a dedicated system user account with the same name (compare Create user). To get a shell for these accounts, simply call one of the scripts:
$ ./utils/searx.sh shell
$ ./utils/filtron.sh shell
$ ./utils/morty.sh shell
To give it a try, open a shell as the service user (searx@searx-archlinux):
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    ./utils/searx.sh shell
// exit with [CTRL-D]
(searx-pyenv) [searx@searx-archlinux ~]$ ...
The prompt [searx@searx-archlinux] signals that you are logged in as the system
user searx in the searx-archlinux container and that the Python virtualenv
(searx-pyenv) is activated.
(searx-pyenv) [searx@searx-archlinux ~]$ pwd
/usr/local/searx
In this section we will see how to change the “fully functional searx suite” in a LXC container (which is quite ready for production) into a developer suite. For this, we have to keep an eye on the Step by step installation:

searx setup in: /etc/searx/settings.yml
searx user’s home: /usr/local/searx
searx software in: /usr/local/searx/searx-src
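These are the paths the installation uses; you can confirm the layout from the desktop with a simple check (the same paths also appear in the uWSGI configuration shown further down):

$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    ls -ld /etc/searx/settings.yml /usr/local/searx/searx-src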
The searx software is a clone of the
git_url (see Global Settings) and
the working tree is checked out from the
git_branch. With the use of
utils/searx.sh, the searx service was installed as a uWSGI application. To maintain this service, we can use systemctl (compare the
service architectures on the distributions):
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    systemctl stop uwsgi@searx
With the command above, we stopped the searx uWSGI-App in the archlinux container.
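Besides stop, any other systemctl subcommand works the same way from the desktop; for example, to check what the unit is currently doing (plain systemctl usage, nothing searx specific):

$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    systemctl status uwsgi@searx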
The uWSGI-App for the archlinux distros is configured in
git://utils/templates/etc/uwsgi/apps-archlinux/searx.ini, of which you should
at least note the following settings:

env = SEARX_SETTINGS_PATH=/etc/searx/settings.yml
http = 127.0.0.1:8888
chdir = /usr/local/searx/searx-src/searx
virtualenv = /usr/local/searx/searx-pyenv
pythonpath = /usr/local/searx/searx-src
If you have read the “Good to know” section, you remember that
each container shares the root folder of the repository and that the command
utils/lxc.sh cmd handles relative path names transparently. To turn the
searx installation into a developer one, we simply have to create a symlink to
the transparently shared repository from the desktop. Now let’s replace the
searx-src in the container with the working tree from outside
of the container:
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    mv /usr/local/searx/searx-src /usr/local/searx/searx-src.old
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    ln -s /share/searx/ /usr/local/searx/searx-src
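If you want to verify the switch, list the link; searx-src should now point into the folder shared from the desktop:

$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    ls -l /usr/local/searx/searx-src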
Now we can develop as usual in the working tree of our desktop system. Every time the software is changed, you have to restart the searx service (in the container):
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    systemctl restart uwsgi@searx
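If you restart often, a small helper function on the desktop saves some typing. This is a hypothetical convenience, not part of the repository, and it assumes you call it from the root folder of your searx clone:

# hypothetical helper, e.g. for your ~/.bashrc; run from the searx clone
searx_restart() {
    # restart the searx uWSGI app inside the searx-archlinux container
    sudo -H ./utils/lxc.sh cmd searx-archlinux \
        systemctl restart uwsgi@searx
}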
Remember: in containers, work as usual. Here are just some examples from my daily usage:
To inspect the searx instance (already described above):
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    ./utils/searx.sh inspect service
To run Makefile targets, e.g. to test inside the container:
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    make test
To install all prerequisites needed for a buildhost (see Buildhosts):
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    ./utils/searx.sh install buildhost
To build the docs on a buildhost (see Buildhosts):
$ sudo -H ./utils/lxc.sh cmd searx-archlinux \
    make docs.html
We built up a fully functional searx suite in an archlinux container:
$ sudo -H ./utils/lxc.sh install suite searx-archlinux
To access HTTP from the desktop, we installed nginx for the services inside the container:
$ ./utils/filtron.sh nginx install
$ ./utils/morty.sh nginx install
To turn the suite into a developer one, we created a symbolic link to the repository, which is transparently shared from the desktop’s file system into the container:
$ mv /usr/local/searx/searx-src /usr/local/searx/searx-src.old
$ ln -s /share/searx/ /usr/local/searx/searx-src
$ systemctl restart uwsgi@searx
To get an overview of the suite running in the archlinux container, we can use:
$ sudo -H ./utils/lxc.sh show suite searx-archlinux
...
[searx-archlinux]  INFO:  (eth0) filtron:   http://10.174.184.156:4004/ http://10.174.184.156/searx
[searx-archlinux]  INFO:  (eth0) morty:     http://10.174.184.156:3000/
[searx-archlinux]  INFO:  (eth0) docs.live: http://10.174.184.156:8080/
[searx-archlinux]  INFO:  (eth0) IPv6:      http://[fd42:573b:e0b3:e97e:216:3eff:fea5:9b65]
...