Installation
Auditor step-by-step deployment guide
System requirements for running the Auditor:
Minimum system resources: 4 GB of RAM and 2 CPU cores.
Free disk space for the Auditor installation and its data storage.
Network access for external users (users must be able to connect to the Auditor over the network).
Prerequisites
Before installing the Auditor, make sure you have the following software installed on your machine:
For container-based installation:
Docker (version 19.03 or higher)
Docker Compose (version 1.26 or higher)
SSH keys (for GitLab CI installation option)
For installation in a Kubernetes environment:
Helm configured to work with your Kubernetes cluster
SSH key generation
To connect securely to the Linux server, you will need to set up SSH keys.
If you don't already have SSH keys, you can generate them with the following command in your local machine's terminal:
ssh-keygen -t rsa -b 4096
Add the SSH key to your server
After generating the SSH keys, you need to copy the public SSH key to the Linux server. Use this command to copy the public key:
ssh-copy-id <username>@<server-ip-address>
Replace <username> with your Linux server account username and <server-ip-address> with the IP address of the Linux server. You will be prompted to enter your password for authentication.
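For example, for the root user on a server with the hypothetical address 203.0.113.10:
ssh-copy-id root@203.0.113.10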
Open the file on your local machine where the private SSH key is stored. The private key is typically saved with a .pem or .ssh file extension.
Select and copy the contents of the private key file. Ensure you copy the key with the correct permissions and line breaks intact.
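Assuming the key pair was generated with the default path from the command above, you can print the private key to the terminal and copy it from there:
cat ~/.ssh/id_rsa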
Installation
Option 1: GitLab CI installation Ansible playbook (automated docker-compose installation)
Option 2: Install using Helm (install in Kubernetes environment)
Option 3: Docker-compose installation (manual docker-compose installation)
GitLab CI installation (Ansible playbook)
Step 1. Fork the Auditor Repository
Fork the Auditor repository on GitLab. This creates a copy of the repository under your GitLab account.
Step 2. Set the public SSH key on the host
Add the public SSH key to the target host so that the CI pipeline can establish a secure connection between the host and the repository.
Step 3. Configure GitLab CI/CD Environment Variables
In GitLab, go to "Settings" > "CI / CD" > "Variables" and configure the following environment variables:
SSH_KEY_PRIVATE: the private SSH key set within the forked repository for authentication.
ACCESS_TOKEN: the Access Token value that you will receive after the first run of the CI pipeline (step 9).
Optional environment variables:
IMAGE_VERSION: if not set, the script will autonomously determine the most recent version.
DB_NAME, DB_USER, DB_PASS, DB_HOST, DB_PORT: required for database configuration.
RABBITMQ_DEFAULT_USER, RABBITMQ_DEFAULT_PASS, AMQP_HOST_STRING: message broker configuration.
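If you prefer to script this step, the same variables can be created through the GitLab API. A minimal sketch, assuming a personal access token with API scope; the token and project ID placeholders must be substituted with your own values:
curl --request POST \
  --header "PRIVATE-TOKEN: <your-gitlab-token>" \
  --form "key=SSH_KEY_PRIVATE" \
  --form "value=$(cat ~/.ssh/id_rsa)" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables"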
Step 4. Update the Hosts File
In the repository's hosts file, specify the group name and IP address of the hosts where Auditor will be installed:
[prod_portal]
206.189.63.52
Here [prod_portal] is the name of the group and 206.189.63.52 is the IP address of the target host.
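Optionally, if Ansible is installed on your local machine, you can verify that the host in the inventory is reachable before running the pipeline (a sketch assuming the inventory file is named hosts):
ansible -i hosts prod_portal -m ping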
Step 5. Update Variables in prod_portal.yml
Update the variables in the prod_portal.yml file in the group_vars directory:
ansible_user: root
ansible_ssh_private_key: ~/.ssh/id_rsa
work_dir: /opt
ansible_user: the user Ansible should use when connecting to the server
ansible_ssh_private_key: the path to the private SSH key used for authentication
work_dir: the working directory on the target server where the application will be installed
Step 6. Commit Changes
After updating the hosts file and group_vars/prod_portal.yml, commit the changes to your GitLab repository.
Step 7. Run GitLab CI Pipeline
In the GitLab CI/CD > Pipelines section, you should see the pipeline running the deploy job.
Step 8. Monitor the Installation
Once the pipeline is running, click on the deploy job to view the logs. The Ansible playbook will be executed, deploying Auditor on the specified host.
Step 9. Adding an Access Token
After the first run, you will receive an Access Token.
Copy the value of the access token and add it to the ACCESS_TOKEN CI/CD variable in GitLab (see step 3).
Save the token value in a safe place for later use in the Auditor settings.
Install using Helm
Step 1. Clone the repository
Clone the Auditor repository to your server:
git clone https://gitlab.com/whitespots-public/auditor.git auditor
Step 2. Navigate to the root directory
Navigate to the Helm chart directory inside the cloned repository:
cd auditor/AuditorHelmChart
Step 3. Set environment variables
In the values.yaml file, change the default environment variables to meet your requirements:
In the deploymentSpec section:
release: release_v24.04.1
In the configMap section:
DB_HOST: "postgres"
DB_PORT: "5432"
DB_NAME: "postgres"
DB_USER: "postgres"
DOMAIN: http://localhost
RABBITMQ_DEFAULT_USER: "admin"
RABBITMQ_DEFAULT_PORT: "5672"
In the secrets section:
AMQP_HOST_STRING: "amqp://admin:mypass@rabbitmq:5672/"
DB_PASS: "postgres"
RABBITMQ_DEFAULT_PASS: "mypass"
REDIS_PASSWORD: "11110000"
release: specify a particular release identifier.
DB_NAME, DB_USER, DB_HOST, DB_PORT, and DB_PASS are required for database configuration.
If the message broker is hosted on a third-party server, only AMQP_HOST_STRING must be specified. However, if the broker container runs locally, all three variables, including RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS, need to be specified.
REDIS_PASSWORD: if the broker is hosted on a third-party server, leave this variable at its default value.
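The AMQP_HOST_STRING follows the standard amqp://<user>:<password>@<host>:<port>/ format. For example, for an external broker at a hypothetical host mq.example.com with hypothetical credentials:
AMQP_HOST_STRING: "amqp://auditor:secret@mq.example.com:5672/"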
In the db section: it is recommended to use an external database. In that case it is enough to set the external_db variable to true; the other variables in this section do not need to be specified.
If you run the database inside the cluster instead, configure its variables:
external_db: false
name: postgres
storageClassName: local-storage
node: minikube
path: /mnt/local-storage
mountPath: /mnt
claimName: postgres-pv-claim
external_db: set to false to run the database inside the cluster
name: the database name
storageClassName: the storage class name for the database
node: the cluster node that will host the database
path: the path to the database storage on the node
mountPath: the location inside the container where the database storage will be mounted
claimName: the name of the PersistentVolumeClaim that is used to request storage allocation
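Once the application is deployed (step 4 below), you can check that the claim was bound, assuming the claimName shown above:
kubectl get pvc postgres-pv-claim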
Step 4. Install the application using Helm
Run the application by executing the following command:
helm install auditor <path-to-helm-directory>
Replace <path-to-helm-directory> with the path to the directory that contains the Helm chart for your application.
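For example, when run from the root of the auditor repository cloned in step 1, using the chart directory from this guide:
helm install auditor ./AuditorHelmChart
You can then watch the pods start with kubectl get pods.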
After the first run you will receive an Access Token.
Copy the value of the access token, add it to the values.yaml file in the secrets section, and restart the scanner-worker pod:
AMQP_HOST_STRING: "amqp://admin:mypass@rabbitmq:5672/"
DB_PASS: "postgres"
RABBITMQ_DEFAULT_PASS: "mypass"
REDIS_PASSWORD: "11110000"
ACCESS_TOKEN: "access_token"
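One way to apply the updated values and restart the worker, assuming the chart path from step 4 and that the worker runs as a Deployment named scanner-worker:
helm upgrade auditor ./AuditorHelmChart
kubectl rollout restart deployment scanner-worker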
Save the token value in a safe place for later use in the Auditor settings.
Docker-compose installation
Step 1. Clone the repository
Clone the Auditor repository to your server:
git clone https://gitlab.com/whitespots-public/auditor.git auditor
Step 2. Navigate to the root directory
Navigate to the root directory of the Auditor project by executing the following command:
cd auditor
Step 3. Set environment variables
Environment variables are set by default. If changes are needed, create a .env file in the project's root folder.
Example .env file:
DB_NAME=postgres
DB_USER=postgres
DB_PASS=postgres
DB_HOST=postgres
DB_PORT=5432
RABBITMQ_DEFAULT_USER=admin
RABBITMQ_DEFAULT_PASS=mypass
AMQP_HOST_STRING=amqp://admin:mypass@rabbitmq:5672/
DOCKER_ENCRYPTION_TOKEN=defaultvaluetobechangedorelse...
DB_NAME, DB_USER, DB_PASS, DB_HOST, DB_PORT: required for database configuration.
If the message broker is hosted on a third-party server, only AMQP_HOST_STRING must be specified. However, if the broker container runs locally, all three variables, including RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS, need to be specified.
DOCKER_ENCRYPTION_TOKEN: this variable is essential when accessing images from a private registry. If your registry requires authentication, provide the appropriate encryption token here.
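The default DOCKER_ENCRYPTION_TOKEN value should be changed. If your setup calls for a random value, one way to generate one (an assumption; any sufficiently long random string will do):
openssl rand -hex 32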
Step 4. Start the Auditor
From the terminal command line, navigate to the directory where the docker-compose.yml file is located.
Run the application by executing the following command:
docker-compose up -d
This will start all the services described in the docker-compose.yml file in the background.
Step 5. Verify that your application is running
After successfully running the docker-compose up -d command, your application should be accessible on the port specified in the configuration.
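To confirm that the containers are up and see their published ports, you can run:
docker-compose ps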