System Agents

ORWELL's main lifecycle depends on the communication between three main layers: the Storage/Visualization layer, the Middleware layer, and the Translators layer. The deployment process revolves around containerizing the components that form each layer, providing a configurable environment for easy integration and adaptation to infrastructure changes.

Storage/Visualization

This layer is composed of two services: a Prometheus instance, which stores the centralized metrics of the monitoring targets, integrated with Grafana for user-facing visualization.

Deployment

In our original deployment, this layer was running on a single machine, with the following docker-compose:

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
    restart: always

  grafana:
    image: grafana/grafana
    depends_on:
      - prometheus
    ports:
      - 3000:3000
    restart: always

Environment

Variable | Used By | Description
MIDDLEWARE_ENDPOINT | Prometheus | The endpoint Prometheus will use to fetch the metrics from the middleware (HTTP)

Middleware

The middleware is the core of our system. We refer to the Middleware as the aggregation of components that make it possible for exported metrics to reach the Prometheus database (see the sketch after this list):

  • REST API developed with the FastAPI framework.
  • Redis cluster to cache exported metrics.
  • Kafka broker to better aggregate and handle the metric streams of the various translators.
  • Postgres database to store information about targets.
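
To make this flow concrete, below is a minimal, hypothetical sketch of how such an API could serve Redis-cached metrics to Prometheus. The /metrics path, the metrics:* key layout, and the use of the redis and fastapi packages are illustrative assumptions, not the actual ORWELL implementation.

# Hypothetical sketch: serve metrics cached in Redis as Prometheus exposition text.
# The endpoint path and key layout are assumptions made for illustration only.
import os

import redis
from fastapi import FastAPI
from fastapi.responses import PlainTextResponse

app = FastAPI()

cache = redis.Redis(
    host=os.getenv("REDIS_HOST", "redis"),
    port=int(os.getenv("REDIS_PORT", "6379")),
    password=os.getenv("REDIS_PASSWORD"),
    decode_responses=True,
)


@app.get("/metrics", response_class=PlainTextResponse)
def metrics() -> str:
    # Assume each translator's latest payload is cached under a "metrics:<topic>" key,
    # already formatted as Prometheus exposition text.
    payloads = [cache.get(key) for key in cache.scan_iter("metrics:*")]
    return "\n".join(p for p in payloads if p)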

Deployment

All middleware components are deployed on the same machine. They could, however, run on different machines, or even on different networks.

services:
  api:
    build: ./middleware
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    env_file: ./.env
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_DB=$POSTGRES_DB
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - REDIS_HOST=redis
      - REDIS_PASSWORD=$REDIS_PASSWORD
    ports:
      - $API_PORT:8000
    healthcheck:
      test: ["CMD-SHELL", "curl -f localhost:8000/"]
      interval: 1s
      timeout: 3s
      retries: 30

  postgres:
    image: postgres:14.1-alpine
    env_file: ./.env
    environment:
      - PGUSER=$POSTGRES_USER
      - POSTGRES_DB=$POSTGRES_DB
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
    ports:
      - $POSTGRES_PORT:5432
    volumes:
      - ./db:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 1s
      timeout: 3s
      retries: 30

  redis:
    image: redis:6.2-alpine
    restart: always
    env_file: ./.env
    ports:
      - $REDIS_PORT:6379
    command: redis-server --save 20 1 --loglevel warning --requirepass $REDIS_PASSWORD
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 1s
      timeout: 3s
      retries: 30

  zookeeper:
    image: bitnami/zookeeper:latest
    ports:
      - "$ZK_PORT:2181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: bitnami/kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "$KAFKA_PORT:9092"
      - "$KAFKA_LISTENER_PORT:9093"
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:$KAFKA_LISTENER_PORT
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://$KAFKA_HOST:$KAFKA_LISTENER_PORT
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
    restart: unless-stopped
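
As a rough illustration of how the broker and cache could fit together, the sketch below consumes translator messages from Kafka and caches the latest payload per topic in Redis. The topic names are taken from the translators compose further down; the key layout and the kafka-python client are assumptions for illustration, not the actual middleware code.

# Hypothetical aggregation sketch: consume translator messages from Kafka and
# cache the latest payload per topic in Redis for the API to serve.
import os

import redis
from kafka import KafkaConsumer

cache = redis.Redis(
    host=os.getenv("REDIS_HOST", "redis"),
    port=int(os.getenv("REDIS_PORT", "6379")),
    password=os.getenv("REDIS_PASSWORD"),
)

consumer = KafkaConsumer(
    "gnocchi", "netdata", "prometheus", "telegraf",  # topics assumed from the translators compose
    bootstrap_servers=f"{os.getenv('KAFKA_HOST', 'kafka')}:{os.getenv('KAFKA_PORT', '9092')}",
)

for message in consumer:
    # Keep only the most recent payload for each translator topic.
    cache.set(f"metrics:{message.topic}", message.value)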

Environment

Variable | Used By | Description
API_PORT | FastAPI | Port the FastAPI app will run on
REDIS_PORT | FastAPI, Redis | Port Redis will run on
REDIS_PASSWORD | FastAPI, Redis | Redis login credential
POSTGRES_DB | FastAPI, Postgres | Postgres database name
POSTGRES_USER | FastAPI, Postgres | Postgres credentials
POSTGRES_PASSWORD | FastAPI, Postgres | Postgres credentials
POSTGRES_PORT | FastAPI, Postgres | Port running the Postgres instance
ZK_PORT | Kafka | Port running the Zookeeper instance
KAFKA_HOST | Kafka | Kafka bootstrap address
KAFKA_PORT | Kafka | Kafka bootstrap port
OSM_HOST | FastAPI | OSM host address
OSM_USER | FastAPI | OSM credentials
OSM_PWD | FastAPI | OSM credentials
OSM_PROJECT | FastAPI | OSM project name
OPENSTACK_HOST | FastAPI | OpenStack host address
OPENSTACK_ID | FastAPI | OpenStack credentials
OPENSTACK_SECRET | FastAPI | OpenStack credentials

Translators

Each translator is containerized in a Python image, making it possible to deploy any number of translators on any node of the network, which contributes to the scalability of the system. For testing purposes, we provide a compose file that deploys one instance of each translator developed.

Image

FROM python:3.8-slim-buster

COPY requirements.txt requirements.txt

RUN pip3 install -r requirements.txt

COPY . .

CMD ["python3", "main.py", "prod"]
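
The image runs main.py as its entrypoint. Below is a minimal, hypothetical sketch of what such an entry point might look like for a node_exporter-style translator, assuming the requests and kafka-python packages; the SOURCE_URL variable, the scrape interval, and the port handling for KAFKA_HOST are illustrative and not part of the actual translators.

# Hypothetical translator sketch (not the actual main.py): periodically scrape a
# metrics endpoint and publish the raw text to the Kafka topic named by KAFKA_TOPIC.
import os
import sys
import time

import requests
from kafka import KafkaProducer


def main(profile: str) -> None:
    # The "prod" argument passed by the Dockerfile could select a config profile (assumption).
    producer = KafkaProducer(
        bootstrap_servers=f"{os.getenv('KAFKA_HOST', 'localhost')}:{os.getenv('KAFKA_PORT', '9092')}",
        value_serializer=lambda text: text.encode("utf-8"),
    )
    topic = os.getenv("KAFKA_TOPIC", "prometheus")
    source = os.getenv("SOURCE_URL", "http://localhost:9100/metrics")  # assumed variable

    while True:
        payload = requests.get(source, timeout=5).text
        producer.send(topic, payload)
        producer.flush()
        time.sleep(15)


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "prod")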

Compose

version: "3.0"
services:
  gnocchi:
    build: "./gnochi"
    environment:
      KAFKA_TOPIC: "gnocchi"
      KAFKA_HOST: "10.0.12.82"
      REDIS_HOST: "10.0.12.82"
  netdata:
    build: "./netdata"
    environment:
      KAFKA_TOPIC: "netdata"
      KAFKA_HOST: "10.0.12.82"
      REDIS_HOST: "10.0.12.82"
  prometheus:
    build: "./node_exporter"
    environment:
      KAFKA_TOPIC: "prometheus"
      KAFKA_HOST: "10.0.12.82"
      REDIS_HOST: "10.0.12.82"
    restart: always
  telegraf:
    build: "./telegraf"
    environment:
      KAFKA_TOPIC: "telegraf"
      KAFKA_HOST: "10.0.12.82"
      REDIS_HOST: "10.0.12.82"