Welcome to Loc.ai Documentation
Loc.ai is a distributed infrastructure layer designed to make the next generation of AI applications truly scalable. It addresses the critical bottlenecks of cloud cost and compute availability by enabling software vendors to execute AI models directly on end-user hardware.
What's New in V2.1
- VS Code Integration – Full guide for connecting Continue to locally served models via Loc.ai
- Loc.ai:Link v0.1.10 – Analytics telemetry, download tracking, and maintenance updates
- Version switcher – Prior site versions are listed in the documentation changelog
What's New in V2
Documentation V2 brings a complete end-to-end guide covering the full journey:
- Account Registration – Approval-only registration workflow via the Loc.ai dashboard
- Device Registration via UI – Register devices directly from the Control platform with one-click key generation
- Model Deployment Guide – Step-by-step deployment from the Model Library
- Inference & Results – Run inference and view detailed results with filtering and CSV export
- Supported Models – TFLite (image/audio classification) and GGUF (language models)
- Device Management – Delete, unregister, and reset devices
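The Inference & Results guide covers filtering results and exporting them to CSV. As a rough illustration of that kind of tabular workflow, here is a minimal stdlib-only Python sketch; the record fields (`device`, `model`, `label`, `score`) are hypothetical and not the actual Loc.ai export schema.

```python
import csv
import io

# Hypothetical inference records; field names are illustrative only,
# not the actual Loc.ai result schema.
results = [
    {"device": "edge-01", "model": "mobilenet.tflite", "label": "cat", "score": 0.91},
    {"device": "edge-02", "model": "mobilenet.tflite", "label": "dog", "score": 0.42},
]

def export_csv(rows, min_score=0.0):
    """Filter rows by confidence score and render them as CSV text."""
    kept = [r for r in rows if r["score"] >= min_score]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["device", "model", "label", "score"])
    writer.writeheader()
    writer.writerows(kept)
    return buf.getvalue()

# Export only results at or above 0.5 confidence.
print(export_csv(results, min_score=0.5))
```

In the dashboard the filtering happens in the UI before export; the sketch just shows the filter-then-serialize shape of the operation.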
Get Started
The fastest way to get up and running is the Quickstart guide.
Supported Models
Loc.ai currently supports the following model architectures:
- Image Classification – TFLite models
- Audio Classification – TFLite models
- Language Models (LLM) – GGUF format via llama-cpp
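Since each supported format maps to a different runtime (TFLite for image/audio classification, llama-cpp for GGUF language models), a deployment pipeline typically routes a model file by its extension. Here is a minimal sketch of that dispatch; the helper and runtime labels are hypothetical, not part of the Loc.ai API.

```python
from pathlib import Path

# Map file extensions to the runtime that handles that format.
# These labels are descriptive names for this sketch only.
RUNTIMES = {
    ".tflite": "tflite",    # image/audio classification models
    ".gguf": "llama-cpp",   # language models
}

def select_runtime(model_path):
    """Return the runtime label for a model file, or raise if unsupported."""
    suffix = Path(model_path).suffix.lower()
    try:
        return RUNTIMES[suffix]
    except KeyError:
        raise ValueError(f"unsupported model format: {suffix or model_path!r}")

print(select_runtime("classifier.tflite"))  # tflite
print(select_runtime("llama-3-8b.gguf"))    # llama-cpp
```

Rejecting unknown extensions early, rather than attempting to load the file, keeps format errors easy to diagnose at deployment time.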
Libraries
- Loc.ai GitHub
- Loc.ai API (coming soon)