Documentation is auto-generated from the code using pdocs. To start a local web server and access the docs, run:
# the following command will spin up a web server on 0.0.0.0:8080
python django/manage.py docs [--http HOST:PORT]
.
├── app/ # Pyrog React App
├── diagrams/ # Nice drawings of the stack
├── django/ # Backend (api, ETL services, etc.)
├── hapi-loader/ # Loader service (based on HAPI fhir)
├── monitoring/ # Configs for monitoring services
├── requirements/ # Python dependencies
├── scripts/ # Shell script utilities
├── tests/ # Python backend tests
├── e2e/ # End-to-end tests
├── pyrog-schema.yml # OpenAPI spec for the pyrog api
└── .template.env # Template env file
The project is structured as a Django project. The codebase is shared between the API and the ETL services (extractor, transformer, loader). The services are deployed as the same arkhn/river docker image, but are run with different arguments (see the docker-compose.yml file).
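For context, the "same image, different arguments" pattern looks roughly like the fragment below. This is a hypothetical sketch for illustration only: the service names and commands are assumptions, and the real definitions live in docker-compose.yml.

```yaml
# Illustrative sketch, NOT the actual compose file.
services:
  river-api:
    image: arkhn/river
    command: python django/manage.py runserver 0.0.0.0:8000
  extractor:
    image: arkhn/river
    command: python django/manage.py extractor  # hypothetical entrypoint
```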
- docker
- docker-compose >= 3.7
This repository contains 3 compose files that can be used for development.
| File | Description |
| --- | --- |
| docker-compose.yml | The minimal functional configuration. fhir-api and pyrog-api URLs must be provided. |
| docker-compose.monitoring.yml | Optional monitoring services. Configuration is stored in the root monitoring directory. |
| docker-compose.test.yml | Optional services for testing (e.g. mimic). |
- Environment variables can be stored in a .env file in the root directory.
- The .env file is shared and loaded by:
  - docker-compose,
  - Django's manage.py script,
  - VS Code's integrated terminal and launch commands.
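As a mental model, dotenv loading amounts to parsing KEY=VALUE lines into the process environment. Here is a minimal standard-library sketch; the real loaders (docker-compose, Django) have more features such as quoting and variable interpolation:

```python
import os
import pathlib

def load_dotenv(path):
    """Naive .env parser: one KEY=VALUE per line, '#' comments and blanks ignored."""
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Variables already set in the real environment take precedence.
        os.environ.setdefault(key.strip(), value.strip())

# Demo on a throwaway file (RIVER_DB_HOST is a made-up variable name):
pathlib.Path("/tmp/demo.env").write_text("# comment\nRIVER_DB_HOST=localhost\n")
load_dotenv("/tmp/demo.env")
print(os.environ["RIVER_DB_HOST"])  # → localhost
```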
# 1. Create a virtual env
python3.8 -m venv --prompt "river" .venv
# 2. Activate the virtual env
source .venv/bin/activate
# 3. Install unixodbc-dev package
sudo apt install unixodbc-dev
# 4. Install dev requirements
pip install -r requirements/dev.txt
This concerns the API only, not the ETL services (which are not web applications).
First, you'll need to provide configuration by creating a dotenv file (.env).
# 1. Copy the dotenv file. The template should be enough to get you started.
cp .template.env .env
# 2. Bring the db up
docker-compose up -d db
# 3. Migrate
python django/manage.py migrate
# 4. Run the development server
python django/manage.py runserver
In order to register a local OIDC application, use the admin interface of the identity provider. See the documentation.
Code quality is enforced with pre-commit hooks: black, isort, flake8.
# Install the hooks
pre-commit install
# Run tests in dedicated virtual env
make unit-tests skip_markers=<skip_markers>
# For instance
make unit-tests skip_markers="not pagai"
The skip_markers argument is here to skip some tests; you can use it as you would use pytest's -m flag.
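For reference, markers are attached to tests with pytest.mark decorators, and a marker expression like "not pagai" then deselects the marked tests. A tiny sketch (the test function name is made up; pagai is the marker from the example above):

```python
import pytest

@pytest.mark.pagai  # tests marked this way are deselected by -m "not pagai"
def test_pagai_feature():
    assert True

# The decorator simply records the marker on the function:
print([m.name for m in test_pagai_feature.pytestmark])  # → ['pagai']
```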
Warning: the django/ and tests/ directories are mounted as volumes into a docker container. If your folder contains Python bytecode (__pycache__ folders), there might be an error when running pytest inside the river container. You can fix this by removing the pre-compiled bytecode files locally:
find . | grep -E "(__pycache__|\.pyc|\.pyo$)" | xargs rm -rf
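The same cleanup can be expressed with pathlib if you prefer staying in Python; here it is demonstrated on a scratch directory so nothing outside it is touched:

```python
import pathlib
import shutil
import tempfile

# Build a scratch tree with fake bytecode next to a real source file:
root = pathlib.Path(tempfile.mkdtemp())
(root / "pkg" / "__pycache__").mkdir(parents=True)
(root / "pkg" / "__pycache__" / "mod.cpython-38.pyc").touch()
(root / "pkg" / "mod.py").touch()

# Remove __pycache__ directories and any stray .pyc/.pyo files:
for cache_dir in list(root.rglob("__pycache__")):
    shutil.rmtree(cache_dir)
for stale in list(root.rglob("*.py[co]")):
    stale.unlink()

print(sorted(p.name for p in root.rglob("*")))  # → ['mod.py', 'pkg']
```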
The following command runs river services (api, extractor, transformer, loader, topicleaner) and dependencies (kafka, zookeeper, postgres, jpaltime, mimic, redis) inside docker with docker-compose (it might be a lot to ask for your poor laptop...). It then runs end-to-end tests inside a dedicated container.
Basically, end-to-end tests consist of importing a mapping (mimic), running a full batch, waiting for it to finish, and finally asserting that the result of the batch conforms to expectations.
# Run end-to-end tests in docker
make e2e-tests
This command executes 2 targets: setup-e2e-tests (which prepares all the containers for the tests) and run-e2e-tests (which actually runs the tests). Note that it may be more reliable to run e2e tests in 2 steps, because some of the services may take a while to start.
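The overall "run a batch, wait for it, then assert" shape of these tests can be sketched generically; the names below are illustrative, not the project's actual API:

```python
import time

def wait_for(predicate, timeout=5.0, interval=0.05):
    """Poll until predicate() is truthy or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Stand-in for a real river batch, whose completion the tests would poll:
state = {"done": False}
state["done"] = True  # imagine the ETL pipeline finishing here

print(wait_for(lambda: state["done"]))  # → True
```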
An alternative to running end-to-end tests locally in docker-compose is to run them against a remote environment (which is usually already deployed, making things much faster). Follow the guide! (This page is only accessible to Arkhn team members.)
Generation only concerns the pyrog application at this point. To generate the schema (no virtual env required):
tox -e openapi
This produces a pyrog-schema.yml file in the root project directory.
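Once generated, the schema can be sanity-checked by loading it. Below is a sketch using PyYAML (assumed available) on an inline stand-in document; the real file is pyrog-schema.yml, and the keys shown are just the standard OpenAPI top level:

```python
import yaml  # PyYAML; an assumption, not pinned by this repo's docs

schema_text = """\
openapi: 3.0.3
info:
  title: pyrog
  version: "1.0"
paths: {}
"""  # inline stand-in for pyrog-schema.yml

schema = yaml.safe_load(schema_text)
assert schema["openapi"].startswith("3.")
print(schema["info"]["title"])  # → pyrog
```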
First, you'll need to provide configuration by creating a dotenv file (.env).
# 1. Copy the dotenv file. The template should be enough to get you started.
cp .template.env .env
# 2. Bring the db up
docker-compose up -d db
# 3. Bring the backend up
docker-compose up river-api
# 4. (Optionally) To quickly create resources, visit the admin panel
# at http://localhost:8000/admin/
# username: admin
# password: admin