# Architecture overview
Understand how Loc.ai's components work together to provide secure, scalable infrastructure for deploying AI models at the edge.
## System components
Loc.ai is divided into the following systems:
- Loc.ai:Link – A lightweight Python runtime installed on remote devices (Windows, macOS, or Linux PCs, or local servers). Responsible for local model execution, hardware sensor interfacing, and telemetry reporting.
- Loc.ai:Control
  - Backend – A FastAPI service that acts as the central brain. Manages device lifecycles, orchestrates model deployments, and processes telemetry and inference results.
  - Frontend – A React web interface for monitoring devices, deploying models, and visualizing inference results.
## Architectural layers
- Remote device layer – Where data is processed. Models run locally (TensorFlow Lite or GGUF/llama-cpp), keeping raw sensor data on-site.
- Secure communication layer – RESTful API bridge using TLS encryption.
- Orchestration & service layer – Cloud logic that manages state, queues commands (e.g. start inference), and delivers instructions when devices check in.
- Data & persistence layer – Distributed storage (e.g. Firestore, GCS) for system state, telemetry, and model artifacts.
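The remote device layer's key property is that raw data never leaves the device: the model runs on-site and only a small structured result is transmitted. A minimal sketch of that contract, where `run_local_model` is a stub standing in for a real TensorFlow Lite or llama-cpp runtime (all names here are illustrative, not Loc.ai's API):

```python
# Sketch of the remote-device layer's contract: the raw sensor frame is
# processed locally and only a structured result leaves the device.

def run_local_model(frame: bytes) -> tuple[str, float]:
    """Stand-in for an on-device TFLite/GGUF model (illustrative only)."""
    return ("anomaly" if len(frame) % 2 else "normal", 0.97)

def classify_locally(raw_sensor_frame: bytes) -> dict:
    """Run inference on-device; the raw frame itself is never transmitted."""
    label, confidence = run_local_model(raw_sensor_frame)
    # Only this small structured payload would be sent upstream.
    return {"label": label, "confidence": confidence}
```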
## Data flow
Interaction between Control and Link is designed for reliability in low-bandwidth environments.
### Upstream (device → platform)
- Telemetry – Devices send health data (CPU, RAM, temperature) every 30 seconds.
- Inference results – Structured model results (classification, confidence, timestamp, etc.).
- Logs – Debugging and execution status for remote monitoring.
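A telemetry report of the kind described above can be sketched as a small JSON payload assembled on the device. The field names and 30-second constant below are illustrative, not Loc.ai's actual schema:

```python
import json
import time

# Illustrative cadence matching the 30-second health reports described above.
TELEMETRY_INTERVAL_S = 30

def build_telemetry(device_id: str, cpu_pct: float,
                    ram_pct: float, temp_c: float) -> str:
    """Serialize one health report as the JSON body of a telemetry POST."""
    payload = {
        "device_id": device_id,
        "timestamp": int(time.time()),
        "cpu_percent": cpu_pct,
        "ram_percent": ram_pct,
        "temperature_c": temp_c,
    }
    return json.dumps(payload)
```

A real agent would gather the readings from the hardware (e.g. via a library like psutil) and POST this body on each interval.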
### Downstream (platform → device)
- Command polling – The agent polls `/commands`; dashboard actions are picked up on the next poll.
- Secure artifact delivery – Signed URLs for encrypted model files and runtime configuration from cloud storage.
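The polling loop can be sketched as follows. The endpoint URL, poll interval, and command shape are assumptions for illustration; only the bearer-token header pattern comes from the security model described below:

```python
import json
import time
import urllib.request

API_BASE = "https://control.example.com/api"  # hypothetical endpoint
POLL_INTERVAL_S = 10  # illustrative; the real interval is deployment-specific

def fetch_commands(api_key: str) -> list:
    """One poll of the /commands endpoint; returns any queued commands."""
    req = urllib.request.Request(
        f"{API_BASE}/commands",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def dispatch(command: dict) -> str:
    """Map a queued command to a local action (action names are illustrative)."""
    action = command.get("action")
    if action == "start_inference":
        return "inference started"
    return f"unknown action: {action!r}"

def poll_forever(api_key: str) -> None:
    """Check in on a fixed interval so dashboard actions are picked up."""
    while True:
        for command in fetch_commands(api_key):
            dispatch(command)
        time.sleep(POLL_INTERVAL_S)
```

Because the device always initiates the connection, this pattern works behind NAT and firewalls and degrades gracefully on low-bandwidth links.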
## Security & trust model
- Mutual trust via activation – Devices need a valid owner-generated registration key.
- API key identity – Unique API key in the `Authorization: Bearer` header on every request.
- Traffic security – TLS 1.2+ for data in transit.
- Infrastructure resilience – Rate limiting; isolated execution for inference tasks.
- User authentication – OAuth2/OIDC with JWT for platform access.
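A minimal sketch of the server-side check behind the API-key identity scheme above. The key store and header parsing here are assumptions for illustration, not Loc.ai's code:

```python
import hmac

# Hypothetical key store mapping device IDs to their registered API keys.
REGISTERED_KEYS = {"device-42": "s3cr3t-api-key"}

def authenticate(device_id: str, authorization) -> bool:
    """Return True if the Authorization header carries the device's API key."""
    if not authorization or not authorization.startswith("Bearer "):
        return False
    presented = authorization[len("Bearer "):]
    expected = REGISTERED_KEYS.get(device_id)
    # Constant-time comparison avoids leaking the key via timing differences.
    return expected is not None and hmac.compare_digest(presented, expected)
```

In a FastAPI backend this check would typically live in a dependency applied to every device-facing route.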
## Data privacy
### Your data stays local
Inference data is processed on your edge devices. Prompts, responses, and data processed by your models remain on your infrastructure.
Users can also run Loc.ai:Control and Link off-cloud with the Enterprise offering for a fully offline system.