Complete documentation for the Furcate decentralized federated learning platform
All API requests should be made to the base URL https://api.furcate.io with the appropriate endpoint path appended.
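For example, a request against the base URL might look like the sketch below. The /fleets path and the bearer-token header are illustrative assumptions, not endpoints confirmed by this page.

```python
# Minimal sketch of calling the Furcate API.
# The /fleets path and Authorization header format are assumptions.
import requests

BASE_URL = "https://api.furcate.io"

response = requests.get(
    f"{BASE_URL}/fleets",                           # hypothetical endpoint path
    headers={"Authorization": "Bearer <API_KEY>"},  # assumed auth scheme
    timeout=10,
)
response.raise_for_status()
print(response.json())
```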
Furcate is a decentralized federated learning platform that enables collaborative AI training across edge devices without centralizing sensitive data. The system integrates multiple technologies:
Minima blockchain for trustless coordination and proof of training
Maxima protocol for secure node-to-node communication
GCP buckets via Tenzro Network for gradient and model artifact storage
TEE attestation for verifiable computation integrity
A fleet represents a collaborative learning group. Each fleet has an owner, defines its privacy parameters, and specifies the model architecture used for federated training.
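A minimal sketch of creating a fleet is shown below, assuming a POST /fleets endpoint; the field names (privacy budget, model architecture) mirror the concepts above but are not a confirmed schema.

```python
# Hypothetical fleet-creation request; endpoint and field names are
# assumptions for illustration, not a confirmed API schema.
import requests

fleet_spec = {
    "name": "air-quality-sensors",
    "privacy": {"epsilon": 1.0, "delta": 1e-5},   # assumed differential-privacy budget fields
    "model_architecture": {"type": "mlp", "layers": [32, 16, 4]},
}

resp = requests.post(
    "https://api.furcate.io/fleets",
    json=fleet_spec,
    headers={"Authorization": "Bearer <API_KEY>"},
    timeout=10,
)
fleet_id = resp.json().get("fleet_id")            # response shape is an assumption
```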
Rounds coordinate synchronized training cycles. Each round collects contributions from nodes, aggregates updates using FedAvg or other methods, and produces a new global model.
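Starting a round might look like the following sketch, assuming a /fleets/{fleet_id}/rounds endpoint and these parameter names; consult the API reference for the actual schema.

```python
# Hypothetical round-creation call; endpoint path and parameter names
# are assumptions chosen to mirror the round concept described above.
import requests

fleet_id = "<FLEET_ID>"
round_params = {
    "target_participants": 10,       # minimum contributions before aggregation
    "aggregation_method": "fedavg",
    "deadline_seconds": 3600,
}

resp = requests.post(
    f"https://api.furcate.io/fleets/{fleet_id}/rounds",
    json=round_params,
    headers={"Authorization": "Bearer <API_KEY>"},
    timeout=10,
)
round_id = resp.json().get("round_id")   # response shape is an assumption
```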
Individual model updates from edge devices. Each contribution includes gradient updates, privacy metrics, and trust attestations, which are kept in distributed storage.
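A contribution payload might resemble the sketch below; every field name here is an assumption chosen to mirror the concepts above, not a confirmed schema.

```python
# Illustrative shape of a contribution; all field names are assumptions.
import base64
import numpy as np

gradient_update = np.random.randn(100).astype(np.float32)   # stand-in for real gradients

contribution = {
    "round_id": "<ROUND_ID>",
    "node_id": "<NODE_ID>",
    "gradients": base64.b64encode(gradient_update.tobytes()).decode("ascii"),
    "sample_count": 512,                        # local samples used for training
    "privacy_metrics": {"epsilon_spent": 0.1},  # assumed metric name
    "trust_attestation": "<TEE_ATTESTATION_QUOTE>",
}
```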
The process of combining verified contributions into an improved global model using weighted averaging and privacy-preserving techniques.
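The weighted-averaging step can be illustrated with a minimal FedAvg sketch, where each contribution is weighted by the number of local samples it was trained on; the in-memory data layout is an assumption.

```python
# Minimal FedAvg sketch: contributions are weighted by local sample count.
# The (update_vector, sample_count) layout is an assumption for illustration.
import numpy as np

def fedavg(contributions):
    """contributions: list of (update_vector, sample_count) pairs."""
    total_samples = sum(n for _, n in contributions)
    aggregate = np.zeros_like(contributions[0][0], dtype=np.float64)
    for update, n in contributions:
        aggregate += (n / total_samples) * update   # weight by share of samples
    return aggregate

# Example: three nodes with different amounts of local data.
updates = [
    (np.array([0.1, 0.2]), 100),
    (np.array([0.3, 0.1]), 300),
    (np.array([0.2, 0.4]), 600),
]
new_global_delta = fedavg(updates)
```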
Background service that automatically monitors rounds, triggers aggregation when ready, distributes blockchain rewards, and creates next training rounds without manual intervention.
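Conceptually, the coordinator behaves like the polling loop sketched below; the endpoints and status values are illustrative assumptions, since the real service runs server-side without manual intervention.

```python
# Conceptual sketch of what the coordinator automates; endpoints and
# the "ready" status value are assumptions used only to illustrate the loop.
import time
import requests

BASE_URL = "https://api.furcate.io"
HEADERS = {"Authorization": "Bearer <API_KEY>"}

def coordinator_loop(fleet_id: str, poll_seconds: int = 30):
    while True:
        r = requests.get(f"{BASE_URL}/fleets/{fleet_id}/rounds/current",
                         headers=HEADERS, timeout=10)
        round_info = r.json()
        if round_info.get("status") == "ready":            # enough contributions received
            requests.post(f"{BASE_URL}/rounds/{round_info['round_id']}/aggregate",
                          headers=HEADERS, timeout=10)     # triggers aggregation, rewards, next round
        time.sleep(poll_seconds)
```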
Initialize a new federated learning fleet with privacy parameters and model architecture
Start a training round with target participants and aggregation parameters
Edge devices train on local data and compute gradient updates (see the local-training sketch after this list)
Nodes submit encrypted gradients with privacy proofs and trust attestations
Coordinator automatically detects round readiness and triggers FedAvg aggregation
Automatic blockchain payments to contributors based on trust level and sample count
Coordinator automatically initializes the next training round with updated model
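A minimal local-training sketch for step 3 follows, using a plain linear model as a placeholder; the real model architecture is defined by the fleet, and this code only illustrates computing a weight delta to contribute.

```python
# Local-training sketch: fine-tune a placeholder linear model on local data
# and return the weight delta relative to the current global weights.
import numpy as np

def local_update(global_weights, local_x, local_y, lr=0.01, epochs=5):
    """Returns the gradient-style update (weight delta) to contribute."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = local_x @ w
        grad = local_x.T @ (preds - local_y) / len(local_y)
        w -= lr * grad
    return w - global_weights        # the update submitted as a contribution

rng = np.random.default_rng(0)
global_w = np.zeros(3)
x, y = rng.normal(size=(64, 3)), rng.normal(size=64)   # placeholder local data
delta = local_update(global_w, x, y)
```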
100 requests per minute: suitable for development and small-scale deployments.
10,000 requests per minute: for production deployments with multiple fleets.
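Clients that approach these limits can back off when the API returns HTTP 429; the retry policy below is a suggested sketch, not an official client behavior.

```python
# Suggested sketch for handling rate limits: retry with exponential
# backoff on HTTP 429 responses. The retry policy is an assumption.
import time
import requests

def get_with_backoff(url, headers, max_retries=5):
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url, headers=headers, timeout=10)
        if resp.status_code != 429:      # not rate-limited, return immediately
            return resp
        time.sleep(delay)
        delay *= 2                       # exponential backoff between retries
    resp.raise_for_status()              # still rate-limited after all retries
    return resp
```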