```bash
# install isabl-cli from GitHub
pip install git+https://github.com/papaemmelab/isabl_cli#egg=isabl-cli

# or clone locally and install as editable
git clone https://github.com/papaemmelab/isabl_cli <your-dir>/isabl_cli
pip install -e <your-dir>/isabl_cli

# let the client know what API should be used
export ISABL_API_URL="<replace with your API URL>"

# set client id, you can create a new client in the admin site
export ISABL_CLIENT_ID="<replace with client primary key>"

# isabl should now be available
```
This is what the admin website looks like when editing Isabl CLI settings:

*Editing Isabl CLI settings from the Admin.*
Isabl CLI can be used by multiple users. By default, any user can import data, and result files are owned by whoever triggered the application. These capabilities can be limited to an `ADMIN_USER`. In this setup, data and results are owned by the `ADMIN_USER`, yet applications can be triggered by any user.
First, you need to assign the right API permissions to your users. To facilitate this, Isabl comes with the following command:
```bash
# from the django project directory run
python manage.py create_default_groups

# if you are using docker compose
docker-compose -f production.yml run --rm django python manage.py create_default_groups
```
This command will create the following three Django groups:
Then you will need to configure the `DEFAULT_LINUX_GROUP` setting in the Isabl CLI client object (you can do so by updating the client matching your `ISABL_CLIENT_ID` from the Django admin website). For example:
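The exact shape of the client settings depends on your deployment, but a sketch of the settings JSON might look like this (both values here are hypothetical placeholders):

```json
{
    "ADMIN_USER": "isabl_admin",
    "DEFAULT_LINUX_GROUP": "isabl_users"
}
```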
Once you follow the writing applications guide, you will understand that Isabl applications can be managed using a Python package. If you have multiple users triggering applications, you may want to have them all pointing to the same package. This can be done either with the `PYTHONPATH` environment variable or by pip installing your apps repo locally:
```bash
# using an environment variable
export PYTHONPATH="/path/to/my/isabl/apps:$PYTHONPATH"

# alternatively you can have other users pip install the repo
pip install --editable /path/to/my/isabl/apps

# you may need to update the .eggs directory permissions
chmod -R g+rwX /path/to/my/isabl/apps/.eggs
```
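If it helps to see the `PYTHONPATH` mechanism in isolation, here is a throwaway demonstration (the module name and contents are made up for illustration):

```shell
# create a scratch directory containing a dummy module
tmp=$(mktemp -d)
echo 'NAME = "my_isabl_apps"' > "$tmp/my_apps.py"

# with the directory on PYTHONPATH, any user can import the same package
PYTHONPATH="$tmp" python3 -c "import my_apps; print(my_apps.NAME)"
```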
Pro tip: use the `Can Download Results` permission to configure which users can download analyses results in your Isabl instance.
```bash
# go to your data lake base directory (see: BASE_STORAGE_DIRECTORY)
BASE="."
DIRS="00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99"

for i in $BASE; do
    for j in $DIRS; do
        for k in $DIRS; do
            DIR="$i/$j/$k"
            mkdir -p "$DIR"
            chmod u+wrX,g+wrX "$DIR"
        done
        chmod g-w "$i/$j/"
    done
done
```
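To sanity-check the layout without touching your real data lake, you can rebuild the same two-level fanout in a scratch directory and count the leaf directories; a 100 × 100 fanout should yield 10000 of them:

```shell
# build the two-level 00-99 fanout in a scratch directory
scratch=$(mktemp -d)
for j in $(seq -w 0 99); do
    for k in $(seq -w 0 99); do
        mkdir -p "$scratch/$j/$k"
    done
done

# count the leaf directories (expect 10000)
find "$scratch" -mindepth 2 -maxdepth 2 -type d | wc -l
```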
```bash
# install cookiecutter
pip install cookiecutter

# then bootstrap the project
```
- For Django 2.0 & Python 3.6
- Renders a Django project with 100% starting test coverage
- Secure by default with SSL.
- Optimized development and production settings
- Media storage using Amazon S3
- Run tests with
- Customizable PostgreSQL version
- Only maintained 3rd party libraries are used.
- Uses PostgreSQL everywhere (9.2+)
- Environment variables for configuration (This won't work with Apache/mod_wsgi except on AWS ELB).
Isabl Cookiecutter is a proud fork of cookiecutter-django; please note that most of their documentation remains relevant! Also see troubleshooting. For reference, we forked at commit 4258ba9. If your preferred setup differs, please fork Isabl Cookiecutter to create your own version. New to Django? Two Scoops of Django is the best dessert-themed Django reference in the universe!
Before you begin, check out the `production.yml` file in the root of this project. Take note of how it provides configuration for the following services:
- `django`: your application running behind Gunicorn;
- `postgres`: PostgreSQL database with the application's relational data;
- `redis`: Redis instance for caching;
- `caddy`: Caddy web server with HTTPS on by default.

Provided you have opted for Celery (by setting `use_celery` to `y`), there are three more services:
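In upstream cookiecutter-django, these extra services are `celeryworker`, `celerybeat`, and `flower`. As a sketch only (the exact keys may differ in your rendered `production.yml`), they typically appear alongside the django service like this:

```yaml
# sketch only -- compare with your rendered production.yml
celeryworker:
    <<: *django
    command: /start-celeryworker

celerybeat:
    <<: *django
    command: /start-celerybeat

flower:
    <<: *django
    command: /start-flower
```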
Check the original `cookiecutter-django` deployment documentation to learn about AWS deployment, Supervisor examples, Sentry configuration, and more. If you are deploying on an intranet, please see the HTTPS is on by default section.
You will probably also need to set up the mail backend, for example by adding a Mailgun API key and a Mailgun sender domain; otherwise, the account creation view will crash with a 500 error when the backend attempts to send an email to the account owner.
The Caddy web server used in the default configuration will get you a valid certificate from Let's Encrypt and renew it automatically. All you need to do to enable this is make sure that your DNS records point to the server Caddy runs on. You can read more about this in Automatic HTTPS in the Caddy docs. Please note:
- If you are not using a subdomain of the domain name set in the project, then remember to put your staging/production IP address in the `DJANGO_ALLOWED_HOSTS` environment variable (see settings) before you deploy your website. Failure to do this will mean you will not have access to your website through the HTTP protocol.
- Access to the Django admin is set up by default to require HTTPS in production or once live.
- ⚠️ Attention! If you are running your application on an intranet, you may want to use the `tls` Caddy setting. Make sure that the `DOMAIN_NAME` configuration has the `https://` scheme prepended in the Caddy environment file `.envs/.production/.caddy` (see this ticket to learn more). Then include the following configuration in `compose/production/caddy/Caddyfile` in order to use a self-signed certificate: `tls self_signed`. Alternatively, if you have a local certificate and key provided by your institution, you will need to copy them into the Caddy image via `compose/production/caddy/Dockerfile` and use: `tls /path/to/cert /path/to/key`
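For orientation, a minimal Caddyfile fragment with the self-signed option might look like the sketch below; the proxy target is an assumption, so keep whatever your rendered Caddyfile already proxies to:

```
{$DOMAIN_NAME} {
    proxy / django:5000
    tls self_signed
}
```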
Optional: Postgres saves its database files to the `production_postgres_data` volume by default. Change that if you want something else, and make sure to take backups, since this is not done automatically.
You will need to build the stack first. To do that, run:

```bash
docker-compose -f production.yml build
```

Once this is ready, you can run it with:

```bash
docker-compose -f production.yml up
```

To run the stack and detach the containers, run:

```bash
docker-compose -f production.yml up -d
```

To run a migration, open up a second terminal and run:

```bash
docker-compose -f production.yml run --rm django python manage.py migrate
```

To create a superuser, run:

```bash
docker-compose -f production.yml run --rm django python manage.py createsuperuser
```

If you need a shell, run:

```bash
docker-compose -f production.yml run --rm django python manage.py shell
```

To check the logs out, run:

```bash
docker-compose -f production.yml logs
```

If you want to scale your application, run:

```bash
docker-compose -f production.yml scale django=4
docker-compose -f production.yml scale celeryworker=2
```

Warning! Don't try to scale `postgres`, `celerybeat`, or `caddy`.

To see how your containers are doing, run:

```bash
docker-compose -f production.yml ps
```
It's likely that the data resides on a different server than the web application. To make results available to the web server, you may want to consider mounting the data directory with `sshfs`:

```bash
sshfs <user>@<data-server>:/remote/path /remote/path \
    -o nonempty \
    -o follow_symlinks \
    -o IdentityFile=/path/to/id_rsa \
    -o allow_other
```

Note that we are mounting at `/remote/path` so that the paths pushed by Isabl CLI match those available in the web server. Also note that you may need to restart the docker compose services after mounting this directory.