Merge pull request #463 from eReuse/dpp_docker
add dockerization of devicehub dpp
This commit is contained in: commit fde966ec13
@ -136,3 +136,6 @@ examples/create-db2.sh
package-lock.json
snapshots/
modules/

# emacs
*~

@ -0,0 +1,34 @@
project := dkr-dsg.ac.upc.edu/ereuse

branch := `git branch --show-current`
commit := `git log -1 --format=%h`
tag := ${branch}__${commit}

# docker images
devicehub_image := ${project}/devicehub:${tag}
postgres_image := ${project}/postgres:${tag}

# 2. Create a virtual environment.
docker_build:
	docker build -f docker/devicehub.Dockerfile -t ${devicehub_image} .
	# DEBUG
	#docker build -f docker/devicehub.Dockerfile -t ${devicehub_image} . --progress=plain --no-cache

	docker build -f docker/postgres.Dockerfile -t ${postgres_image} .
	# DEBUG
	#docker build -f docker/postgres.Dockerfile -t ${postgres_image} . --progress=plain --no-cache
	@printf "\n##########################\n"
	@printf "\ndevicehub image: ${devicehub_image}\n"
	@printf "postgres image: ${postgres_image}\n"
	@printf "\ndocker images built\n"
	@printf "\n##########################\n\n"

docker_publish:
	docker push ${devicehub_image}
	docker push ${postgres_image}

.PHONY: docker
docker:
	$(MAKE) docker_build
	$(MAKE) docker_publish
	@printf "\ndocker images published\n"

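For reference, the targets above can be invoked from the repository root as sketched below; images are tagged `<branch>__<short-commit>`. Pushing assumes you are already logged in to the `dkr-dsg.ac.upc.edu` registry (that login step is not part of this Makefile).
```
# build the devicehub and postgres images
make docker_build

# push them to the registry (assumes a prior `docker login dkr-dsg.ac.upc.edu`)
make docker_publish

# or build and publish in one step
make docker
```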
README.md
@ -7,171 +7,70 @@ This README explains how to install and use Devicehub. [The documentation](http:
Devicehub is built with [Teal](https://github.com/ereuse/teal) and [Flask](http://flask.pocoo.org).

# Installing
The requirements are:
Please see the [Manual Installation](#README_MANUAL_INSTALLATION.md) guide to understand how to install Devicehub locally or deploy it on a server.

0. Required
- python3.9
- [PostgreSQL 11 or higher](https://www.postgresql.org/download/).
- Weasyprint [dependencies](http://weasyprint.readthedocs.io/en/stable/install.html)
# Docker
A docker compose file is provided for automated deployment. The next steps describe how to run and use it.

1. Generate a clone of the repository.
1. Download the sources:
```
git clone git@github.com:eReuse/devicehub-teal.git
cd devicehub-teal
git clone https://github.com/eReuse/devicehub-teal.git
cd devicehub-teal
```

2. Create a virtual environment and install Devicehub with *pip*.
2. Decide on a directory of your system to share documents between your host and the containers.
As an example we use "/tmp/dhub/", which needs to be created:
```
python3.9 -m venv env
source env/bin/activate
pip3 install -U -r requirements.txt -e .
pip3 install Authlib==1.2.1
mkdir /tmp/dhub
```

3. Create a PostgreSQL database called *devicehub* by running [create-db](examples/create-db.sh):

- In Linux, execute the following two commands (adapt them to your distro):

  1. `sudo su - postgres`.
  2. `bash examples/create-db.sh devicehub dhub`, and password `ereuse`.

- In MacOS: `bash examples/create-db.sh devicehub dhub`, and password `ereuse`.

Configure the project using an environment file (you can use the provided example as a quickstart):
```bash
$ cp examples/env.example .env
```
3. Copy your snapshots into this directory. If you don't have any snapshots, copy one from the examples directory.
```
cp examples/snapshot01.json /tmp/dhub
```

4. Run alembic for the project.
4. Set the environment variables in the .env file. You can see an example in examples/env.
If you don't have one, copy the examples/env file and modify the basic variables:
```
alembic -x inventory=dbtest upgrade head
cp examples/env.example .env
```
You can keep the default parameters for a test, but you need to provide values for these three variables:
```
API_DLT
API_DLT_TOKEN
API_RESOLVER
```

5. Run alembic from the oidc module.
5. Run the containers:
```
cd ereuse_devicehub/modules/oidc
alembic -x inventory=dbtest upgrade head
docker compose up
```
To stop the containers you can use Ctrl+C; if you run "compose up" again, the data and infrastructure are kept.

6. If you want to take down the volumes and remove the data, you can use:
```
docker compose down -v
```

6. Run alembic from the dpp module.
7. If you want to enter a shell inside a new container:
```
cd ereuse_devicehub/modules/dpp/
alembic -x inventory=dbtest upgrade head
docker run -it --entrypoint= ${target_docker_image} bash
```

7. Add a suitable app.py file.
If you want to enter a shell in an already running container:
```
cp examples/app.py .
docker exec -it ${target_docker_image} bash
```

8. Generate a minimal data structure.
To find the valid value for ${target_docker_image} you can use:
```
flask initdata
```

9. Add a new server to the 'api resolver' to be able to integrate it into the federation.
The domain name for this new server has to be unique. When installing two instances their domain names must differ: e.g. dpp.mydomain1.cxm, dpp.mydomain2.cxm.
If your domain is dpp.mydomain.cxm:
```
flask dlt_insert_members http://dpp.mydomain.cxm
docker ps
```

Modify the .env file as indicated in point 3.
Add the corresponding 'DH' in ID_FEDERATED.
Example: ID_FEDERATED='DH10'
8. These are the details for using this implementation:

10. Do an rsync of the api resolver.
```
flask dlt_rsync_members
```
*devicehub on port 5000* is the OIDC identity provider and has the user *user5000@example.com*

11. Register a new user in devicehub.
```
flask adduser email@cxm.cxm password
```
*devicehub on port 5001* is the OIDC client and has the user *user5001@example.com*

12. Register a new user in the DLT.
```
flask dlt_register_user email@cxm.cxm password Operator
```

13. Finally, run the app:

```bash
$ flask run --debugger
```

The 'bdist_wheel' error can happen when you work within a *virtual environment*.
To fix it, install the wheel package in the *virtual environment*: `pip3 install wheel`

# Testing

1. `git clone` this project.
2. Create a database for testing by executing `create-db.sh` as in the normal installation, but changing the first parameter from `devicehub` to `dh_test`: `create-db.sh dh_test dhub`, with password `ereuse`.
3. Execute `python3 setup.py test` at the root folder of the project.

# Upgrade a deployment

To upgrade an instance of devicehub:

```bash
$ cd $PATH_TO_DEVIHUBTEAL
$ source venv/bin/activate
$ git pull
$ alembic -x inventory=dbtest upgrade head
```

If all migrations pass successfully, then it is necessary to restart devicehub.
Normally you can use a small script to restart or run it:
```
# systemctl stop gunicorn_devicehub.socket
# systemctl stop gunicorn_devicehub.service
# systemctl start gunicorn_devicehub.service
```

# OpenID Connect

We want to interconnect two devicehub instances that are already installed. One has a set of devices (OIDC client), the other has a set of users (OIDC identity server). Let's assume their domains are dpp.mydomain1.cxm and dpp.mydomain2.cxm.

20. In order to connect the two devicehub instances, it is necessary to:
* 20.1. Register a user in the devicehub instance acting as OIDC identity server.
* 20.2. Fill in the OpenID Connect form.
* 20.3. Add the client_id and client_secret data to the OIDC client inventory.

For 20.1, this can be done from the terminal on the devicehub instance acting as OIDC identity server:
```
flask adduser email@cxm.cxm password
```

* 20.2. This is an example of how to fill in the form.

In the web interface of the OIDC identity service, click on the profile of the newly added user, select "My Profile" and click on "OpenID Connect".
Then we can go to the "OpenID Connect" panel and fill out the form.

The important fields in this form are:
* "Client URL": the URL of the OIDC client instance, as registered in point 12; dpp.mydomain1.cxm in our example.
* "Allowed Scope" has to contain these three words:
```
openid profile rols
```
* "Redirect URIs": the URL entered in "Client URL" plus "/allow_code".
* "Allowed Grant Types" has to be "authorization_code".
* "Allowed Response Types" has to be "code".
* "Token Endpoint Auth Method" has to be "Client Secret Basic".

After clicking on "Submit", the "OpenID Connect" tab of the user profile should now include details for "client_id" and "client_secret".

* 20.3. In the OIDC client inventory run the following (in our example, url_domain is dpp.mydomain2.cxm, with client_id and client_secret as obtained in the previous step):
```
flask add_client_oidc url_domain client_id client_secret
```
After this step, both servers should be connected. Opening a DPP page on dpp.mydomain1.cxm (OIDC client), the user can choose to authenticate using dpp.mydomain2.cxm (OIDC server).

## Generating the docs

1. `git clone` this project.
2. Install plantuml. In Debian 9 it is `# apt install plantuml`.
3. Execute `pip3 install -e .[docs]` in the project root folder.
4. Go to `<project root folder>/docs` and execute `make html`. Repeat this step to generate new docs.

To auto-generate the docs, do `pip3 install -e .[docs-auto]`, then execute, in the root folder of the project, `sphinx-autobuild docs docs/_build/html`.
You can change these values in the *.env* file.

@ -0,0 +1,177 @@
# Devicehub

Devicehub is a distributed IT Asset Management System focused on reusing devices, created under the project [eReuse.org](https://www.ereuse.org).

This README explains how to install and use Devicehub. [The documentation](http://devicehub.ereuse.org) explains the concepts and the API.

Devicehub is built with [Teal](https://github.com/ereuse/teal) and [Flask](http://flask.pocoo.org).

# Installing
The requirements are:

0. Required
- python3.9
- [PostgreSQL 11 or higher](https://www.postgresql.org/download/).
- Weasyprint [dependencies](http://weasyprint.readthedocs.io/en/stable/install.html)

1. Generate a clone of the repository.
```
git clone git@github.com:eReuse/devicehub-teal.git
cd devicehub-teal
```

2. Create a virtual environment and install Devicehub with *pip*.
```
python3.9 -m venv env
source env/bin/activate
pip3 install -U -r requirements.txt -e .
pip3 install Authlib==1.2.1
```

3. Create a PostgreSQL database called *devicehub* by running [create-db](examples/create-db.sh):

- In Linux, execute the following two commands (adapt them to your distro):

  1. `sudo su - postgres`.
  2. `bash examples/create-db.sh devicehub dhub`, and password `ereuse`.

- In MacOS: `bash examples/create-db.sh devicehub dhub`, and password `ereuse`.

Configure the project using an environment file (you can use the provided example as a quickstart):
```bash
$ cp examples/env.example .env
```

4. Run alembic for the project.
```
alembic -x inventory=dbtest upgrade head
```

5. Run alembic from the oidc module.
```
cd ereuse_devicehub/modules/oidc
alembic -x inventory=dbtest upgrade head
```

6. Run alembic from the dpp module.
```
cd ereuse_devicehub/modules/dpp/
alembic -x inventory=dbtest upgrade head
```

7. Add a suitable app.py file.
```
cp examples/app.py .
```

8. Generate a minimal data structure.
```
flask initdata
```

9. Add a new server to the 'api resolver' to be able to integrate it into the federation.
The domain name for this new server has to be unique. When installing two instances their domain names must differ: e.g. dpp.mydomain1.cxm, dpp.mydomain2.cxm.
If your domain is dpp.mydomain.cxm:
```
flask dlt_insert_members http://dpp.mydomain.cxm
```

Modify the .env file as indicated in point 3.
Add the corresponding 'DH' in ID_FEDERATED.
Example: ID_FEDERATED='DH10'

10. Do an rsync of the api resolver.
```
flask dlt_rsync_members
```

11. Register a new user in devicehub.
```
flask adduser email@cxm.cxm password
```

12. Register a new user in the DLT.
```
flask dlt_register_user email@cxm.cxm password Operator
```

13. Finally, run the app:

```bash
$ flask run --debugger
```

The 'bdist_wheel' error can happen when you work within a *virtual environment*.
To fix it, install the wheel package in the *virtual environment*: `pip3 install wheel`

# Testing

1. `git clone` this project.
2. Create a database for testing by executing `create-db.sh` as in the normal installation, but changing the first parameter from `devicehub` to `dh_test`: `create-db.sh dh_test dhub`, with password `ereuse`.
3. Execute `python3 setup.py test` at the root folder of the project.

# Upgrade a deployment

To upgrade an instance of devicehub:

```bash
$ cd $PATH_TO_DEVIHUBTEAL
$ source venv/bin/activate
$ git pull
$ alembic -x inventory=dbtest upgrade head
```

If all migrations pass successfully, then it is necessary to restart devicehub.
Normally you can use a small script to restart or run it:
```
# systemctl stop gunicorn_devicehub.socket
# systemctl stop gunicorn_devicehub.service
# systemctl start gunicorn_devicehub.service
```

# OpenID Connect

We want to interconnect two devicehub instances that are already installed. One has a set of devices (OIDC client), the other has a set of users (OIDC identity server). Let's assume their domains are dpp.mydomain1.cxm and dpp.mydomain2.cxm.

20. In order to connect the two devicehub instances, it is necessary to:
* 20.1. Register a user in the devicehub instance acting as OIDC identity server.
* 20.2. Fill in the OpenID Connect form.
* 20.3. Add the client_id and client_secret data to the OIDC client inventory.

For 20.1, this can be done from the terminal on the devicehub instance acting as OIDC identity server:
```
flask adduser email@cxm.cxm password
```

* 20.2. This is an example of how to fill in the form.

In the web interface of the OIDC identity service, click on the profile of the newly added user, select "My Profile" and click on "OpenID Connect".
Then we can go to the "OpenID Connect" panel and fill out the form.

The important fields in this form are:
* "Client URL": the URL of the OIDC client instance, as registered in point 12; dpp.mydomain1.cxm in our example.
* "Allowed Scope" has to contain these three words:
```
openid profile rols
```
* "Redirect URIs": the URL entered in "Client URL" plus "/allow_code".
* "Allowed Grant Types" has to be "authorization_code".
* "Allowed Response Types" has to be "code".
* "Token Endpoint Auth Method" has to be "Client Secret Basic".

After clicking on "Submit", the "OpenID Connect" tab of the user profile should now include details for "client_id" and "client_secret".

* 20.3. In the OIDC client inventory run the following (in our example, url_domain is dpp.mydomain2.cxm, with client_id and client_secret as obtained in the previous step):
```
flask add_client_oidc url_domain client_id client_secret
```
After this step, both servers should be connected. Opening a DPP page on dpp.mydomain1.cxm (OIDC client), the user can choose to authenticate using dpp.mydomain2.cxm (OIDC server).

## Generating the docs

1. `git clone` this project.
2. Install plantuml. In Debian 9 it is `# apt install plantuml`.
3. Execute `pip3 install -e .[docs]` in the project root folder.
4. Go to `<project root folder>/docs` and execute `make html`. Repeat this step to generate new docs.

To auto-generate the docs, do `pip3 install -e .[docs-auto]`, then execute, in the root folder of the project, `sphinx-autobuild docs docs/_build/html`.

@ -0,0 +1,95 @@
version: "3.9"
services:

  devicehub-id-server:
    init: true
    image: dkr-dsg.ac.upc.edu/ereuse/devicehub:dpp_docker__2c4b0006
    environment:
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_HOST=postgres-id-server
      - DB_DATABASE=${DB_DATABASE}
      - HOST=${HOST}
      - EMAIL_DEMO=user5000@dhub.com
      - PASSWORD_DEMO=${PASSWORD_DEMO}
      - JWT_PASS=${JWT_PASS}
      - SECRET_KEY=${SECRET_KEY}
      - API_DLT=${API_DLT}
      - API_RESOLVER=${API_RESOLVER}
      - API_DLT_TOKEN=${API_DLT_TOKEN}
      - DEVICEHUB_HOST=${SERVER_ID_DEVICEHUB_HOST}
      - ID_FEDERATED=${SERVER_ID_FEDERATED}
      - URL_MANUALS=${URL_MANUALS}
      - ID_SERVICE=${SERVER_ID_SERVICE}
      - AUTHORIZED_CLIENT_URL=${CLIENT_ID_DEVICEHUB_HOST}
    ports:
      - 5000:5000
    volumes:
      - ${SNAPSHOTS_PATH}:/mnt/snapshots:ro
      - shared:/shared:rw

  postgres-id-server:
    image: dkr-dsg.ac.upc.edu/ereuse/postgres:dpp_docker__2c4b0006
    # 4. To create the database.
    # 5. Give permissions to the corresponding users in the database.
    # extra src https://github.com/docker-library/docs/blob/master/postgres/README.md#environment-variables
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_DB=${DB_DATABASE}
    # DEBUG
    #ports:
    #  - 5432:5432
    # TODO persistence
    #volumes:
    #  - pg_data:/var/lib/postgresql/data

  devicehub-id-client:
    init: true
    image: dkr-dsg.ac.upc.edu/ereuse/devicehub:dpp_docker__2c4b0006
    environment:
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_HOST=postgres-id-client
      - DB_DATABASE=${DB_DATABASE}
      - HOST=${HOST}
      - EMAIL_DEMO=user5001@dhub.com
      - PASSWORD_DEMO=${PASSWORD_DEMO}
      - JWT_PASS=${JWT_PASS}
      - SECRET_KEY=${SECRET_KEY}
      - API_DLT=${API_DLT}
      - API_RESOLVER=${API_RESOLVER}
      - API_DLT_TOKEN=${API_DLT_TOKEN}
      - DEVICEHUB_HOST=${CLIENT_ID_DEVICEHUB_HOST}
      - SERVER_ID_HOST=${SERVER_ID_DEVICEHUB_HOST}
      - ID_FEDERATED=${CLIENT_ID_FEDERATED}
      - URL_MANUALS=${URL_MANUALS}
      - ID_SERVICE=${CLIENT_ID_SERVICE}
    ports:
      - 5001:5000
    volumes:
      - ${SNAPSHOTS_PATH}:/mnt/snapshots:ro
      - shared:/shared:ro

  postgres-id-client:
    image: dkr-dsg.ac.upc.edu/ereuse/postgres:dpp_docker__2c4b0006
    # 4. To create the database.
    # 5. Give permissions to the corresponding users in the database.
    # extra src https://github.com/docker-library/docs/blob/master/postgres/README.md#environment-variables
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_DB=${DB_DATABASE}
    # DEBUG
    #ports:
    #  - 5432:5432
    # TODO persistence
    #volumes:
    #  - pg_data:/var/lib/postgresql/data


# TODO https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/
#nginx

volumes:
  shared:

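For context, this compose file is driven by the .env file described in the README (see examples/env.example further down in this diff). A minimal sketch of bringing the two instances up under those defaults:
```
cp examples/env.example .env
docker compose up
# identity-provider instance:  http://localhost:5000  (user5000@dhub.com)
# client instance:             http://localhost:5001  (user5001@dhub.com)
# tear everything down, including volumes:
docker compose down -v
```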
@ -0,0 +1,30 @@
FROM debian:bullseye-slim

RUN apt update && apt-get install --no-install-recommends -y \
    python3-minimal \
    python3-pip \
    python-is-python3 \
    python3-psycopg2 \
    python3-dev \
    libpq-dev \
    build-essential \
    libpangocairo-1.0-0 \
    curl \
    jq \
    time \
    netcat

WORKDIR /opt/devicehub

# this is exactly the same as examples/pip_install.sh except the last command
# to improve the docker layer builds, it has been separated
RUN pip install --upgrade pip
RUN pip install alembic==1.8.1 anytree==2.8.0 apispec==0.39.0 atomicwrites==1.4.0 blinker==1.5 boltons==23.0.0 cairocffi==1.4.0 cairosvg==2.5.2 certifi==2022.9.24 cffi==1.15.1 charset-normalizer==2.0.12 click==6.7 click-spinner==0.1.8 colorama==0.3.9 colour==0.1.5 cssselect2==0.7.0 defusedxml==0.7.1 et-xmlfile==1.1.0 flask==1.0.2 flask-cors==3.0.10 flask-login==0.5.0 flask-sqlalchemy==2.5.1 flask-weasyprint==0.4 flask-wtf==1.0.0 hashids==1.2.0 html5lib==1.1 idna==3.4 inflection==0.5.1 itsdangerous==2.0.1 jinja2==3.0.3 mako==1.2.3 markupsafe==2.1.1 marshmallow==3.0.0b11 marshmallow-enum==1.4.1 more-itertools==8.12.0 numpy==1.22.0 odfpy==1.4.1 openpyxl==3.0.10 pandas==1.3.5 passlib==1.7.1 phonenumbers==8.9.11 pillow==9.2.0 pint==0.9 psycopg2-binary==2.8.3 py-dmidecode==0.1.0 pycparser==2.21 pyjwt==2.4.0 pyphen==0.13.0 python-dateutil==2.7.3 python-decouple==3.3 python-dotenv==0.14.0 python-editor==1.0.4 python-stdnum==1.9 pytz==2022.2.1 pyyaml==5.4 requests==2.27.1 requests-mock==1.5.2 requests-toolbelt==0.9.1 six==1.16.0 sortedcontainers==2.1.0 sqlalchemy==1.3.24 sqlalchemy-citext==1.3.post0 sqlalchemy-utils==0.33.11 tinycss2==1.1.1 tqdm==4.32.2 urllib3==1.26.12 weasyprint==44 webargs==5.5.3 webencodings==0.5.1 werkzeug==2.0.3 wtforms==3.0.1 xlrd==2.0.1 cryptography==39.0.1 Authlib==1.2.1 gunicorn==21.2.0

RUN pip install -i https://test.pypi.org/simple/ ereuseapitest==0.0.8

COPY . .
RUN pip install -e .

COPY docker/devicehub.entrypoint.sh .
ENTRYPOINT sh ./devicehub.entrypoint.sh

@ -0,0 +1,12 @@
.git
.env
# TODO need to comment it to copy the entrypoint
#docker
Makefile

# Emacs backup files
*~
.\#*
# Vim swap files
*.swp
*.swo

@ -0,0 +1,199 @@
#!/bin/sh

set -e
set -u
# DEBUG
set -x

# 3. Generate an environment .env file.
gen_env_vars() {
    # generate config using env vars from docker
    cat > .env <<END
DB_USER='${DB_USER}'
DB_PASSWORD='${DB_PASSWORD}'
DB_HOST='${DB_HOST}'
DB_DATABASE='${DB_DATABASE}'
API_DLT='${API_DLT}'
API_DLT_TOKEN='${API_DLT_TOKEN}'
API_RESOLVER='${API_RESOLVER}'
ID_FEDERATED='${ID_FEDERATED}'
URL_MANUALS='${URL_MANUALS}'

HOST='${HOST}'

SCHEMA='dbtest'
DB_SCHEMA='dbtest'

EMAIL_DEMO='${EMAIL_DEMO}'
PASSWORD_DEMO='${PASSWORD_DEMO}'

JWT_PASS=${JWT_PASS}
SECRET_KEY=${SECRET_KEY}
END
}

wait_for_postgres() {
    # old one was
    #sleep 4

    default_postgres_port=5432
    # thanks https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/
    while ! nc -z ${DB_HOST} ${default_postgres_port}; do
        sleep 0.5
    done
}

init_data() {

    # 7. Run alembic of the project.
    alembic -x inventory=dbtest upgrade head
    # 8. Run alembic from the oidc module.
    cd ereuse_devicehub/modules/oidc
    alembic -x inventory=dbtest upgrade head
    cd -
    # 9. Run alembic from the dpp module.
    cd ereuse_devicehub/modules/dpp/
    alembic -x inventory=dbtest upgrade head
    cd -

    # 11. Generate a minimal data structure.
    # TODO it has some errors (?)
    flask initdata || true
}

big_error() {
    local message="${@}"
    echo "###############################################" >&2
    echo "# ERROR: ${message}" >&2
    echo "###############################################" >&2
    exit 1
}

handle_federated_id() {

    # devicehub host and id federated checker

    EXPECTED_ID_FEDERATED="$(curl -s "${API_RESOLVER}/getAll" \
        | jq -r '.url | to_entries | .[] | select(.value == "'"${DEVICEHUB_HOST}"'") | .key' \
        | head -n 1)"

    # if it is a new DEVICEHUB_HOST, then register it
    if [ -z "${EXPECTED_ID_FEDERATED}" ]; then
        # TODO better docker compose run command
        cmd="docker compose run --entrypoint= devicehub flask dlt_insert_members ${DEVICEHUB_HOST}"
        big_error "No FEDERATED ID maybe you should run \`${cmd}\`"
    fi

    # if not a new DEVICEHUB_HOST, then check consistency

    # if there is already an ID in the DLT, it should match with my internal ID
    if [ ! "${EXPECTED_ID_FEDERATED}" = "${ID_FEDERATED}" ]; then

        big_error "ID_FEDERATED should be ${EXPECTED_ID_FEDERATED} instead of ${ID_FEDERATED}"
    fi

    # not needed, but reserved
    # EXPECTED_DEVICEHUB_HOST="$(curl -s "${API_RESOLVER}/getAll" \
    #     | jq -r '.url | to_entries | .[] | select(.key == "'"${ID_FEDERATED}"'") | .value' \
    #     | head -n 1)"
    # if [ ! "${EXPECTED_DEVICEHUB_HOST}" = "${DEVICEHUB_HOST}" ]; then
    #     big_error "ERROR: DEVICEHUB_HOST should be ${EXPECTED_DEVICEHUB_HOST} instead of ${DEVICEHUB_HOST}"
    # fi

}

config_oidc() {
    # TODO test allowing more than 1 client
    if [ "${ID_SERVICE}" = "server_id" ]; then

        client_description="client identity from docker compose demo"

        # in AUTHORIZED_CLIENT_URL we remove anything before ://
        flask add_contract_oidc \
            "${EMAIL_DEMO}" \
            "${client_description}" \
            "${AUTHORIZED_CLIENT_URL}" \
            > /shared/client_id_${AUTHORIZED_CLIENT_URL#*://}

    elif [ "${ID_SERVICE}" = "client_id" ]; then

        # in DEVICEHUB_HOST we remove anything before ://
        client_id_config="/shared/client_id_${DEVICEHUB_HOST#*://}"
        client_id=
        client_secret=

        # wait until the file generated by the server_id is readable
        while true; do
            if [ -f "${client_id_config}" ]; then
                client_id="$(cat "${client_id_config}" | jq -r '.client_id')"
                client_secret="$(cat "${client_id_config}" | jq -r '.client_secret')"
                if [ "${client_id}" ] && [ "${client_secret}" ]; then
                    break
                fi
            fi
            sleep 1
        done

        flask add_client_oidc \
            "${SERVER_ID_HOST}" \
            "${client_id}" \
            "${client_secret}"

    else
        big_error "Something went wrong ${ID_SERVICE} is not server_id nor client_id"
    fi
}

config_phase() {
    init_flagfile='/already_configured'
    if [ ! -f "${init_flagfile}" ]; then
        # 7, 8, 9, 11
        init_data

        # 12. Add a new server to the 'api resolver'
        handle_federated_id

        # 13. Do a rsync api resolve
        flask dlt_rsync_members

        # 14. Register a new user to the DLT
        flask dlt_register_user "${EMAIL_DEMO}" ${PASSWORD_DEMO} Operator

        # non DLT user (only for the inventory)
        # flask adduser user2@dhub.com ${PASSWORD_DEMO}

        # # 15. Add inventory snapshots for user "${EMAIL_DEMO}".
        cp /mnt/snapshots/snapshot*.json ereuse_devicehub/commands/snapshot_files
        /usr/bin/time flask snapshot "${EMAIL_DEMO}" ${PASSWORD_DEMO}

        # # 16.
        flask check_install "${EMAIL_DEMO}" ${PASSWORD_DEMO}

        # config server or client ID
        config_oidc

        # keep the next command as the last operation of this if block
        touch "${init_flagfile}"
    fi
}

main() {

    gen_env_vars

    wait_for_postgres

    config_phase

    # 17. Use gunicorn
    # thanks https://akira3030.github.io/formacion/articulos/python-flask-gunicorn-docker.html
    # TODO meanwhile there is no nginx (step 19), gunicorn cannot serve static files, so we prefer the development server
    #gunicorn --access-logfile - --error-logfile - --workers 4 -b :5000 app:app
    # alternative: run development server
    flask run --host=0.0.0.0 --port 5000

    # DEBUG
    #sleep infinity
}

main "${@}"

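The `handle_federated_id` check above infers the federated ID from the resolver. The jq filter implies that the `/getAll` endpoint returns an object whose `url` member maps federated IDs to Devicehub URLs; a sketch of that lookup, where the response shape is an assumption read off the filter rather than documented API behaviour:
```
# assumed response shape: {"url": {"DH10": "http://dpp.mydomain1.cxm", ...}}
curl -s "${API_RESOLVER}/getAll" \
    | jq -r '.url | to_entries | .[] | select(.value == "'"${DEVICEHUB_HOST}"'") | .key' \
    | head -n 1
# prints the federated ID registered for DEVICEHUB_HOST (e.g. DH10), or nothing if it is not registered yet
```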
@ -0,0 +1,8 @@
FROM postgres:15.4-bookworm
# this is the latest in 2023-09-14_13-01-38
#FROM postgres:latest

# Add a SQL script that will be executed upon container startup
COPY docker/postgres.setupdb.sql /docker-entrypoint-initdb.d/

EXPOSE 5432

@ -0,0 +1,5 @@
-- 6. Create the necessary extensions.
CREATE EXTENSION pgcrypto SCHEMA public;
CREATE EXTENSION ltree SCHEMA public;
CREATE EXTENSION citext SCHEMA public;
CREATE EXTENSION pg_trgm SCHEMA public;

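To check that container initialization picked up this script, one way (a sketch; the service name comes from the docker-compose.yml above, and DB_USER/DB_DATABASE are assumed to be exported in your shell) is to list the installed extensions inside the running postgres container:
```
docker compose exec postgres-id-server psql -U "${DB_USER}" -d "${DB_DATABASE}" -c '\dx'
# should list pgcrypto, ltree, citext and pg_trgm among the installed extensions
```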
@ -105,3 +105,6 @@ class DevicehubConfig(Config):
    OAUTH2_JWT_ISS = config('OAUTH2_JWT_ISS', '')
    OAUTH2_JWT_KEY = config('OAUTH2_JWT_KEY', None)
    OAUTH2_JWT_ALG = config('OAUTH2_JWT_ALG', 'HS256')

    if API_DLT:
        API_DLT = API_DLT.strip("/")

@ -21,6 +21,9 @@ class InsertMember:
print("Error: you need a entry var API_RESOLVER in .env")
return

api = api.strip("/")
domain = domain.strip("/")

data = {"url": domain}
url = api + '/registerURL'
res = requests.post(url, json=data)

@ -19,6 +19,8 @@ class GetMembers:
print("Error: you need a entry var API_RESOLVER in .env")
return

api = api.strip("/")

url = api + '/getAll'
res = requests.get(url)
if res.status_code != 200:

@ -1,11 +1,38 @@
DB_USER='dhub'
DB_PASSWORD='ereuse'
DB_HOST='localhost'
DB_DATABASE='devicehub'
SECRET_KEY='aaaa'
# Please fill in these three variables
API_DLT='http://$IP_API_DLT'
API_DLT_TOKEN=$TOKEN
API_RESOLVER='http://$IP_API_RESOLVER'
ID_FEDERATED='DH12'
URL_MANUALS='http://$IP_MANUALS'

# Database Variables
DB_USER='dhub'
DB_PASSWORD='ereuse'
DB_HOST='localhost'
DB_DATABASE='dpp'
SCHEMA='dbtest'
DB_SCHEMA='dbtest'

# TODO this should be guessed by DEVICEHUB_HOST, and avoid hardcode of ID_FEDERATED
SERVER_ID_FEDERATED='DH12'
CLIENT_ID_FEDERATED='DH20'
URL_MANUALS='http://localhost:4000'

#SERVER_ID_DEVICEHUB_HOST='http://devicehub-server-id.example.com'
SERVER_ID_DEVICEHUB_HOST='http://localhost:5000'
#CLIENT_ID_DEVICEHUB_HOST='http://devicehub-client-id.example.com'
CLIENT_ID_DEVICEHUB_HOST='http://localhost:5001'
SERVER_ID_SERVICE='server_id'
CLIENT_ID_SERVICE='client_id'
HOST='localhost'

EMAIL_DEMO='user@example.com'
PASSWORD_DEMO='1234'

JWT_PASS='aaaa'
SECRET_KEY='aaaa'

# important to import snapshots (step 15)
# rel path starts with ./
#SNAPSHOTS_PATH='./relpath/to/snapshots'
# full path starts with /
SNAPSHOTS_PATH='/tmp/dhub_docker/snapshots'

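SNAPSHOTS_PATH is the directory that the compose file mounts read-only at /mnt/snapshots, and the entrypoint imports every snapshot*.json found there (step 15). A minimal sketch of preparing it, assuming the example snapshot shipped under examples/ and the default path above:
```
mkdir -p /tmp/dhub_docker/snapshots
cp examples/snapshot01.json /tmp/dhub_docker/snapshots/
```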
@ -0,0 +1 @@
{"closed": true, "components": [{"actions": [], "manufacturer": "Intel Corporation", "model": "82579LM Gigabit Network Connection", "serialNumber": "00:11:11:11:11:00", "speed": 1000.0, "type": "NetworkAdapter", "variant": "04", "wireless": false}, {"actions": [], "manufacturer": "Intel Corporation", "model": "7 Series/C216 Chipset Family High Definition Audio Controller", "serialNumber": null, "type": "SoundCard"}, {"actions": [], "format": "DIMM", "interface": "DDR3", "manufacturer": "Micron", "model": "16KTF51264AZ", "serialNumber": "AAAAAAAA", "size": 4096.0, "speed": 1600.0, "type": "RamModule"}, {"actions": [{"endTime": "2022-10-11T13:45:31.239555+00:00", "severity": "Info", "startTime": "2021-10-11T09:45:19.623967+00:00", "steps": [{"endTime": "2021-10-11T11:05:28.090897+00:00", "severity": "Info", "startTime": "2021-10-11T09:45:19.624163+00:00", "type": "StepZero"}, {"endTime": "2021-10-11T13:45:31.239402+00:00", "severity": "Info", "startTime": "2021-10-11T11:05:28.091255+00:00", "type": "StepRandom"}], "type": "EraseSectors"}, {"assessment": true, "commandTimeout": 30, "currentPendingSectorCount": 0, "elapsed": 60, "length": "Short", "lifetime": 18720, "offlineUncorrectable": 0, "powerCycleCount": 2147, "reallocatedSectorCount": 0, "reportedUncorrectableErrors": 0, "severity": "Info", "status": "Completed without error", "type": "TestDataStorage"}, {"elapsed": 11, "readSpeed": 119.0, "type": "BenchmarkDataStorage", "writeSpeed": 32.7}], "interface": "ATA", "manufacturer": "Seagate", "model": "ST3500418AS", "serialNumber": "AAAAAAAA", "size": 500000.0, "type": "HardDrive", "variant": "CC46"}, {"actions": [{"elapsed": 0, "rate": 25540.36, "type": "BenchmarkProcessor"}, {"elapsed": 8, "rate": 7.6939, "type": "BenchmarkProcessorSysbench"}], "address": 64, "brand": "Core i5", "cores": 4, "generation": 3, "manufacturer": "Intel Corp.", "model": "Intel Core i5-3470 CPU @ 3.20GHz", "serialNumber": null, "speed": 1.6242180000000002, "threads": 4, "type": "Processor"}, {"actions": [], "manufacturer": "Intel Corporation", "memory": null, "model": "Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller", "serialNumber": null, "type": "GraphicCard"}, {"actions": [], "biosDate": "2012-08-07T00:00:00", "firewire": 0, "manufacturer": "LENOVO", "model": "MAHOBAY", "pcmcia": 0, "ramMaxSize": 32, "ramSlots": 4, "serial": 1, "serialNumber": null, "slots": 4, "type": "Motherboard", "usb": 3, "version": "9SKT39AUS"}], "device": {"actions": [{"elapsed": 1, "rate": 0.6507, "type": "BenchmarkRamSysbench"}], "chassis": "Tower", "manufacturer": "LENOVO", "model": "3227A2G", "serialNumber": "AAAAAAAA", "sku": "LENOVO_MT_3227", "type": "Desktop", "version": "ThinkCentre M92P"}, "elapsed": 187302510, "endTime": "2016-11-03T17:17:01.116554+00:00", "software": "Workbench", "type": "Snapshot", "uuid": "ae913de1-e639-476a-ad9b-78eabbe4628b", "version": "11.0b11"}