Install AI Server

A step-by-step guide for deploying a Prisma AI Server with Docker Compose.

This chapter covers the installation of a Prisma AI Server using Docker Compose and provides a reference docker-compose.yaml file to use for deployment.

Deploying the AI Server using Docker Compose

1. Prerequisites

To set up a new AI Server, the following prerequisites must be met:

  • Network access to a Prisma Server (gRPC and gRPC-Web endpoints)
  • A Linux host with Docker and Docker Compose installed
  • An AI Server Registration Token from an AI Server configuration in Prisma
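Network access to the Prisma Server can be verified before installation. The sketch below uses Bash's /dev/tcp pseudo-device to test whether a TCP connection can be opened; the host and port values in the usage comment are taken from the example compose file below and must be adjusted to your environment.

```shell
#!/usr/bin/env bash
# Quick TCP reachability check for the Prisma Server endpoints.
# check_port <host> <port> -> exit status 0 if a connection succeeds.
check_port() {
    local host="$1" port="$2"
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "OK: ${host}:${port} is reachable"
    else
        echo "FAIL: ${host}:${port} is not reachable"
        return 1
    fi
}

# Example (host and ports as used in the compose file below; adjust as needed):
#   check_port prisma.intellitrend.de 8090   # gRPC
#   check_port prisma.intellitrend.de 8091   # gRPC-Web
```

Note that this only confirms TCP connectivity; it does not validate TLS settings or credentials.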

You can check whether Docker and Docker Compose are installed by running the following commands:

docker version
docker compose version

2. Docker Compose

The following docker-compose.yaml file defines how the AI Server will start.

Modify the docker-compose.yaml file so that it includes the following information:

  • Connection details to your Prisma server
  • An AI Server registration token from your Prisma server (see AI Servers)

services:
    prisma-ai-server:
        image: images.intellitrend.de/prisma/prisma-ai-server:7.10.12
        restart: always
        environment:
            GOMEMLIMIT: "600MiB"
            PRISMA_AI_SERVER_LOG_LEVEL: "info"
            PRISMA_AI_SERVER_METRICS_IDENTITY: "prisma_ai_server"
            # Prisma Server connection
            PRISMA_AI_SERVER_AISERVER_PRISMA_SERVER_GRPC_ADDRESS: "prisma.intellitrend.de"
            PRISMA_AI_SERVER_AISERVER_PRISMA_SERVER_GRPC_PORT: "8090"
            PRISMA_AI_SERVER_AISERVER_PRISMA_SERVER_GRPC_TLS_ENABLE: "false"
            PRISMA_AI_SERVER_AISERVER_PRISMA_SERVER_GRPC_WEB_ADDRESS: "prisma.intellitrend.de" # Usually the same as the GRPC_ADDRESS, only with a different port
            PRISMA_AI_SERVER_AISERVER_PRISMA_SERVER_GRPC_WEB_PORT: "8091"
            PRISMA_AI_SERVER_AISERVER_PRISMA_SERVER_GRPC_WEB_TLS_ENABLE: "false"
            # AI Server configuration
            PRISMA_AI_SERVER_AISERVER_REGISTRATION_TOKEN: "[REPLACE_ME]" # AI Server registration token from Prisma
            PRISMA_AI_SERVER_AISERVER_CONFIG_POLL_INTERVAL: 15 # Interval in seconds in which the AI Server checks for new deployments
            PRISMA_AI_SERVER_AISERVER_STATUS_INTERVAL: 60 # Interval in seconds between status prints
            PRISMA_AI_SERVER_AISERVER_PORT: 8096 # Port to listen on for incoming data
        volumes:
            - ./prisma-ai-server/data/var/:/usr/local/intellitrend/prisma/var/:rw
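The compose file bind-mounts ./prisma-ai-server/data/var from the host into the container. It can help to create this directory yourself before the first start; if it does not exist, Docker creates it automatically and the directory ends up owned by root.

```shell
# Create the host directory for the bind mount before the first start
# (relative to the directory containing docker-compose.yaml).
mkdir -p ./prisma-ai-server/data/var
```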

Running the AI Server

To start the AI Server, run the following commands, which pull the required Docker image from our registry and start the service in the background.

# Pull image
docker compose pull

# Start in background
docker compose up -d

# Check logs
docker compose logs -f prisma-ai-server

The AI Server should now connect to the Prisma Server, poll for AI deployments, and begin evaluating models.
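As a final note: rather than hard-coding the registration token in docker-compose.yaml, you can supply it via Docker Compose's variable substitution, which reads a .env file placed next to the compose file. The variable name PRISMA_REGISTRATION_TOKEN below is only an example; any name works as long as the compose file references it.

```
# .env (next to docker-compose.yaml; keep it out of version control)
PRISMA_REGISTRATION_TOKEN=[REPLACE_ME]

# In docker-compose.yaml, reference the variable instead of the literal token:
#     PRISMA_AI_SERVER_AISERVER_REGISTRATION_TOKEN: "${PRISMA_REGISTRATION_TOKEN}"
```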