# AI Hardware Requirements
## Supported compute devices by service
The Prisma AI anomaly detection system consists of three components: `prisma-ai-trainer`, `prisma-ai-tester`, and `prisma-ai-server`. The AI Trainer supports both CPU and GPU training, while the AI Tester and AI Server currently support CPU inference only.
| Service | Supported Device Types |
|---|---|
| Prisma AI Trainer | CPU, GPU (NVIDIA CUDA) |
| Prisma AI Tester | CPU |
| Prisma AI Server | CPU |
Ensure that your NVIDIA GPU supports hardware-accelerated bfloat16 operations. All NVIDIA GPUs with compute capability 8.0 or higher support this; compute capability 8.0 is standard starting with the Ampere architecture, which introduced the RTX 30 series of consumer GPUs and the A100 datacenter GPU.
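To check whether a given GPU clears this bar, you can compare its reported compute capability against 8.0. The following is a minimal Python sketch; the capability strings in it are illustrative, not from a real query (on a real system, recent drivers report the value via `nvidia-smi`, e.g. with `--query-gpu=compute_cap --format=csv,noheader`):

```python
# Decide whether a GPU supports hardware-accelerated bfloat16 based on its
# CUDA compute capability (8.0 or higher, i.e. Ampere and newer).
# The sample capability strings below are illustrative.

def supports_bfloat16(compute_cap: str) -> bool:
    """True if the 'major.minor' compute capability is >= 8.0."""
    major, minor = (int(part) for part in compute_cap.strip().split("."))
    return (major, minor) >= (8, 0)

for cap in ["7.5", "8.0", "8.6", "9.0"]:
    status = "yes" if supports_bfloat16(cap) else "no"
    print(f"compute capability {cap}: bfloat16 {status}")
```

The tuple comparison handles hypothetical future minor versions (e.g. "10.0") correctly, which a plain string comparison would not.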
## Supported GPU driver versions
For services that support GPU acceleration, ensure you are using the correct Docker image. The main Docker image does not ship with any GPU support. For GPU support, pull the image with the appropriate suffix (for example, for the Prisma AI Trainer you would pull `images.intellitrend.de/prisma/prisma-ai-trainer:7.14.0-cuda` for CUDA support).
| Vendor | Support Status | CUDA Toolkit | Image Suffix |
|---|---|---|---|
| NVIDIA | Supported | CUDA 12.5 | `-cuda` |
| AMD | Unsupported (work in progress) | ROCm | - |
| Intel | Unsupported (not planned) | oneAPI | - |
CUDA 12.5 requires an NVIDIA GPU driver of at least version 555.x on Linux. You can check the installed driver version with `nvidia-smi`.
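Since only the major version matters for this requirement, the check reduces to comparing the first component of the driver version string. The version strings in this Python sketch are illustrative examples, not real inventory data:

```python
# Check whether an NVIDIA driver version string (as printed in the header
# line of `nvidia-smi` output) meets the minimum major version required by
# CUDA 12.5 on Linux. The sample version strings below are illustrative.

def driver_meets_minimum(version: str, minimum_major: int = 555) -> bool:
    """True if the driver's major version is at least `minimum_major`."""
    return int(version.split(".")[0]) >= minimum_major

for version in ["550.54.14", "555.42.02", "560.28.03"]:
    print(version, "ok" if driver_meets_minimum(version) else "too old")
```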
## Container runtime
All Prisma AI services run in Docker containers, so a working Docker engine is required on the host. For GPU-accelerated images (`-cuda` suffix), the host must additionally have:

- The NVIDIA GPU driver (which provides `nvidia-smi`). See the official NVIDIA driver downloads for your distribution.
- The NVIDIA Container Toolkit, which exposes the host GPU to Docker containers. Follow NVIDIA's official installation guide for your distribution.
Once both are installed, verify the container runtime can see the GPU:

```shell
docker run --rm --gpus all nvidia/cuda:12.5.0-base-ubuntu22.04 nvidia-smi
```
### Docker Compose example
To expose all GPUs on the host to the AI Trainer, add a `deploy.resources.reservations.devices` block to the service:

```yaml
services:
  prisma-ai-trainer:
    image: images.intellitrend.de/prisma/prisma-ai-trainer:7.14.0-cuda
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
To restrict the container to a specific GPU (for example GPU index 0), replace `count: all` with `device_ids: ["0"]`:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: ["0"]
          capabilities: [gpu]
```
## Minimum System Specifications
| Service | Compute Device | CPU Cores | CPU Features | RAM | VRAM | GPU Compute Capability |
|---|---|---|---|---|---|---|
| Prisma AI Trainer | CPU | 4 | AVX | 8GB | - | - |
| Prisma AI Trainer | GPU | 2 | - | 8GB | 8GB | ≥ 8.0 |
| Prisma AI Tester | CPU | 2 | AVX | 2GB | - | - |
| Prisma AI Server | CPU | 2 | AVX | 2GB | - | - |
## Recommended System Specifications
| Service | Compute Device | CPU Cores | CPU Features | RAM | VRAM | GPU Compute Capability |
|---|---|---|---|---|---|---|
| Prisma AI Trainer | CPU | 16 | AVX512 | 32GB | - | - |
| Prisma AI Trainer | GPU | 4 | - | 16GB | 16GB | ≥ 8.0 |
| Prisma AI Tester | CPU | 4 | AVX512 | 8GB | - | - |
| Prisma AI Server | CPU | 4 | AVX512 | 8GB | - | - |
The services themselves only persist model artifacts and intermediate training state locally; datasets and trained models are stored in S3, configured at the deployment level in the Prisma installation overview.
For optimal training and inference performance on CPU, your CPU should ideally support the AVX512 instruction set extension; otherwise, ensure that it supports at least AVX2. Both inference and training on CPU utilize all available cores and run faster with more cores. Providing more RAM will not yield any speedup unless the system is swapping.
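One way to see which of these instruction set extensions a Linux host offers is to inspect the `flags` line of `/proc/cpuinfo`. The following is a minimal Python sketch, assuming the flags string has already been read from that file; the sample string below is illustrative:

```python
# Classify the best supported AVX level from a CPU flags string, such as the
# `flags` line of /proc/cpuinfo on Linux. The sample string is illustrative.

def best_avx_level(flags: str) -> str:
    """Return the highest AVX tier found in a whitespace-separated flag list."""
    flag_set = set(flags.split())
    if any(flag.startswith("avx512") for flag in flag_set):
        return "AVX512"
    if "avx2" in flag_set:
        return "AVX2"
    if "avx" in flag_set:
        return "AVX"
    return "none"

print(best_avx_level("fpu sse sse2 avx avx2 avx512f avx512dq"))
```

Note that the kernel reports AVX512 as a family of flags (`avx512f`, `avx512dq`, and others), which is why the sketch matches on the prefix rather than a single flag name.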
## See also

- AI Server installation: Docker Compose deployment of the AI Server.
- AI Trainer configuration: Command-line parameters and configuration keys for the AI Trainer.
- AI Tester configuration: Command-line parameters and configuration keys for the AI Tester.
- Prisma installation overview: Full deployment context and network requirements.