Installation
Auditor step-by-step deployment guide
Minimum system resources: 4 GB of RAM and 2 CPU cores.
Free disk space for the Auditor's installation and data storage.
Network access to and from the portal and to the location of your asset (the location of the product to be scanned).
Before installing the Auditor, make sure you have the following software installed on your machine:
For a container-based installation:
Docker (version 19.03 or higher)
Docker Compose (version 1.26 or higher)
SSH keys (for GitLab CI installation option)
For installation in a Kubernetes environment:
Helm, configured to work with your Kubernetes cluster
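The minimum versions above can be checked before you start. The sketch below compares a detected version string against a required minimum; the detected value is a placeholder that in practice you would parse from a command such as docker --version:

```shell
# A minimal sketch of a version check against the prerequisites above.
# "detected" is a placeholder; parse it from `docker --version` in practice.
required="19.03"
detected="20.10.7"

# sort -V orders version strings numerically; if the required version
# sorts first (or is equal), the detected version is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$detected" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "version OK"
else
  echo "version too old" >&2
fi
```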
To securely connect to the Linux server, you will need to set up SSH keys.
If you don't have SSH keys already, you can generate them using the following command in your server terminal:
ssh-keygen
After generating the SSH keys, you need to copy the public SSH key to the Linux server. Use this command to copy the public key:
ssh-copy-id <username>@<server-ip-address>
Replace <username> with your Linux server account username and <server-ip-address> with the IP address of the Linux server. You will be prompted to enter your password for authentication.
Open the file on your local machine where the private SSH key is stored. The private key is typically saved with a .pem extension or stored in your ~/.ssh directory.
Select and copy the contents of the private key file. Ensure you copy the key with the correct permissions and line breaks intact.
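Before pasting the private key anywhere, it is worth confirming that the file is restricted to your user, since SSH refuses keys readable by others. A small sketch, in which the file path and key contents are placeholders:

```shell
# Sketch: simulate a private key file and lock down its permissions.
# The path and key contents here are placeholders for illustration.
KEY_FILE=$(mktemp)
printf '%s\n' \
  '-----BEGIN OPENSSH PRIVATE KEY-----' \
  '...' \
  '-----END OPENSSH PRIVATE KEY-----' > "$KEY_FILE"

# 600 = owner read/write only; SSH rejects keys with looser permissions.
chmod 600 "$KEY_FILE"
stat -c '%a' "$KEY_FILE"
```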
Option 1: GitLab CI installation with an Ansible playbook (automated Docker Compose installation)
Option 2: Install using Helm (install in a Kubernetes environment)
Option 3: Docker Compose installation (manual Docker Compose installation)
Step 1. Fork the Auditor Repository
Fork the Auditor repository on GitLab. This creates a copy of the repository under your GitLab account.
Step 2. Set the public SSH key on the host
Establish a secure connection between the host and the repository by setting the public SSH key.
Step 3. Configure GitLab CI/CD Environment Variables
In GitLab, go to "Settings" > "CI / CD" > "Variables" and configure the following environment variables:
SSH_KEY_PRIVATE: the private SSH key for the forked repository, used for authentication.
ACCESS_TOKEN: the Access Token value you will receive after the first run of the CI pipeline (step 9).
Optional environment variables:
IMAGE_VERSION: if not set, the script will automatically use the most recent version.
DB_NAME, DB_USER, DB_PASS, DB_HOST, DB_PORT: required for database configuration.
RABBITMQ_DEFAULT_USER, RABBITMQ_DEFAULT_PASS, AMQP_HOST_STRING: message broker configuration.
The username and password in the RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS variables must match the credentials embedded in AMQP_HOST_STRING.
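For example, a consistent set of values might look like this (the credentials and the rabbitmq host name are illustrative only):

```shell
# Illustrative values only: the user/password pair must be repeated
# verbatim inside AMQP_HOST_STRING.
RABBITMQ_DEFAULT_USER=admin
RABBITMQ_DEFAULT_PASS=mypass
AMQP_HOST_STRING="amqp://${RABBITMQ_DEFAULT_USER}:${RABBITMQ_DEFAULT_PASS}@rabbitmq:5672/"
echo "$AMQP_HOST_STRING"
```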
Step 4. Update the Hosts File
In the repository's hosts file, specify the group name and the IP address of the hosts where Auditor will be installed:
[prod_portal]    # name of the group
206.189.63.52    # IP address
Step 5. Update Variables in prod_portal.yml
Update the variables in the prod_portal.yml file in the group_vars directory:
ansible_user: root
ansible_ssh_private_key: ~/.ssh/id_rsa
work_dir: /opt
ansible_user: the user Ansible should use when connecting to the server.
ansible_ssh_private_key: the path to the private SSH key used for authentication.
work_dir: the working directory on the target server where the application will be installed.
Step 6. Commit Changes
After updating the hosts file and group_vars/prod_portal.yml, commit the changes to your GitLab repository.
Step 7. Run GitLab CI Pipeline
In the GitLab CI/CD > Pipelines section, you should see the pipeline running the deploy job.
Step 8. Monitor the Installation
Once the pipeline is running, click on the deploy job to view the logs. The Ansible playbook will be executed, deploying Auditor on the specified host.
Step 9. Adding an Access Token
Now your application should be accessible on the port specified in the configuration.
After the first run, you will receive an Access Token. Copy its value and add it to the CI/CD variables in GitLab:
ACCESS_TOKEN: <your value>
After adding the variable, you must restart the service from the command line using the following commands:
docker-compose down
docker-compose up -d
Save the token value in a safe place for later use in the Auditor settings.
Before using Helm, make sure that Helm is installed on your computer and that your Kubernetes cluster is configured to work with Helm.
Step 1. Add the Helm package
Add the Auditor package to your server:
helm repo add auditor https://gitlab.com/api/v4/projects/51993931/packages/helm/stable
Step 2. Set environment variables
In the values.yaml file, change the default environment variables to meet your requirements:
In the deploymentSpec section:
global.image.tag=release_v24.11.3
Postgres:
postgresql.auth.database="postgres"
postgresql.auth.username="postgres"
postgresql.auth.password="postgres"
External Postgres:
externalPostgresql.enabled="true"
externalPostgresql.host=""
externalPostgresql.port="5432"
externalPostgresql.database=""
externalPostgresql.username=""
externalPostgresql.password=""
Redis:
redis.auth.password="11110000"
External Redis:
externalRedis.enabled="true"
externalRedis.host=""
externalRedis.password=""
RabbitMQ:
rabbitmq.auth.username="admin"
rabbitmq.auth.password="admin"
External RabbitMQ:
externalRabbitmq.enabled="true"
externalRabbitmq.host=""
externalRabbitmq.port="5672"
externalRabbitmq.username=""
externalRabbitmq.password=""
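Instead of editing values.yaml in place, you can keep your overrides in a separate file and pass it to Helm with -f. A sketch, in which the file name and values are illustrative:

```shell
# Write a minimal override file; only the keys you change need to appear.
cat > my-values.yaml <<'EOF'
global:
  image:
    tag: release_v24.11.3
postgresql:
  auth:
    database: postgres
    username: postgres
    password: postgres
EOF

# Then pass it to Helm, e.g.:
#   helm upgrade --install auditor auditor/appsecauditor -f my-values.yaml ...
cat my-values.yaml
```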
Step 3. Helm install with all resources inside the cluster
In this example we use a pre-installed nginx ingress controller, with Postgres, Redis, and RabbitMQ from the chart:
helm upgrade --install auditor auditor/appsecauditor \
--set rabbitmq.auth.username="admin" \
--set rabbitmq.auth.password="admin" \
--set postgresql.enabled=true \
--set ingress.enabled=true \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io\/scheme"=internet-facing \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io\/target\-type"=ip \
--set ingress.ingressClassName=nginx \
--set ingress.host=localhost \
-n whitespots-auditor --create-namespace
Test with your own ingress controller. If you don't have one, deploy the nginx ingress controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
kubectl get svc -n ingress-nginx
In case the migrations did not succeed, run them manually:
kubectl exec -it $(kubectl get pods -n whitespots-auditor -l app.kubernetes.io/name=appsecauditor-auditor -o jsonpath='{.items[0].metadata.name}') -n whitespots-auditor -- alembic upgrade head
After the first login, you will receive an Access Token. Copy it, set it as the access_token variable, and relaunch the scanner_worker service:
kubectl get deployments -n whitespots-auditor
kubectl delete deployment auditor-appsecauditor-scanner-worker -n whitespots-auditor
helm upgrade --install auditor auditor/appsecauditor \
--set rabbitmq.auth.username="admin" \
--set rabbitmq.auth.password="admin" \
--set postgresql.enabled=true \
--set ingress.enabled=true \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io\/scheme"=internet-facing \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io\/target\-type"=ip \
--set ingress.ingressClassName=nginx \
--set ingress.host=localhost \
--set configs.secret.access_token=access_token \
-n whitespots-auditor --create-namespace
Save the token value in a safe place for later use in the Auditor settings.
Step 1: Clone the repository
Clone the Auditor repository to your server:
git clone https://gitlab.com/whitespots-public/auditor.git auditor
Step 2: Navigate to the root directory
Navigate to the root directory of the Auditor project by executing the following command:
cd auditor
Step 3: Set environment variables
Environment variables are set by default. If changes are needed, create an .env file in the project's root folder.
Example .env file:
IMAGE_VERSION=release_v24.07.2
DB_NAME=postgres
DB_USER=postgres
DB_PASS=postgres
DB_HOST=postgres
DB_PORT=5432
RABBITMQ_DEFAULT_USER=admin
RABBITMQ_DEFAULT_PASS=mypass
AMQP_HOST_STRING=amqp://admin:mypass@rabbitmq:5672/
DOCKER_ENCRYPTION_TOKEN=defaultvaluetobechangedorelse...
ACCESS_TOKEN=<your value>
IMAGE_VERSION: required; specify an exact version, e.g. release_v24.07.2.
DB_NAME, DB_USER, DB_PASS, DB_HOST, DB_PORT: required for database configuration.
If the message broker is hosted on a third-party server, only AMQP_HOST_STRING must be specified. However, if the container runs locally, all three variables, including RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS, must be specified.
The username and password in the RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS variables must match the credentials embedded in AMQP_HOST_STRING.
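A quick way to catch a mismatch before starting the containers is to parse the credentials back out of AMQP_HOST_STRING and compare them. A sketch, using the example values from the .env file above:

```shell
# Sketch: verify that RABBITMQ_DEFAULT_USER/PASS match the credentials
# embedded in AMQP_HOST_STRING. Values mirror the example .env above.
RABBITMQ_DEFAULT_USER=admin
RABBITMQ_DEFAULT_PASS=mypass
AMQP_HOST_STRING="amqp://admin:mypass@rabbitmq:5672/"

creds=${AMQP_HOST_STRING#amqp://}   # drop the scheme
creds=${creds%%@*}                  # keep user:pass before '@'
user=${creds%%:*}
pass=${creds#*:}

if [ "$user" = "$RABBITMQ_DEFAULT_USER" ] && [ "$pass" = "$RABBITMQ_DEFAULT_PASS" ]; then
  echo "credentials consistent"
else
  echo "credential mismatch" >&2
fi
```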
DOCKER_ENCRYPTION_TOKEN: this variable is essential when accessing images from a private registry. If your registry requires authentication, provide the appropriate encryption token here.
ACCESS_TOKEN: after the first run of the Auditor (step 4) you will receive the access token value. Copy it and add this variable with its value to the .env file.
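Updating the .env file can also be scripted. A sketch, in which the token value is a placeholder and a temporary file stands in for your project's .env:

```shell
# Sketch: insert or update ACCESS_TOKEN in a .env file.
# TOKEN is a placeholder; use the value printed on the first run.
# A temporary file stands in for the project's .env here.
TOKEN="example-token"
ENV_FILE=$(mktemp)

if grep -q '^ACCESS_TOKEN=' "$ENV_FILE"; then
  # Replace the existing line in place.
  sed -i "s|^ACCESS_TOKEN=.*|ACCESS_TOKEN=${TOKEN}|" "$ENV_FILE"
else
  echo "ACCESS_TOKEN=${TOKEN}" >> "$ENV_FILE"
fi
grep '^ACCESS_TOKEN=' "$ENV_FILE"
```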
Step 4: Start the Auditor
From the terminal command line, navigate to the directory where the docker-compose.yml file is located.
Run the application by executing the following command:
docker compose up -d
This will start all the services described in the docker-compose.yml file in the background.
After successfully running the docker-compose up -d command, your application should be accessible on the port specified in the configuration.
You will receive an Access Token the first time you start. Copy it and set it in the .env file as the value of the ACCESS_TOKEN variable (step 3).
After adding the variable, you must restart the service from the command line using the following commands:
docker compose down
docker compose up -d
Save the token value in a safe place for later use in the Auditor settings.
When copying keys, make sure you copy them without extra spaces or line breaks.