API

Backprop's API is used to solve tasks and manage models. Tasks can be solved with either our pre-trained models or your own uploaded models.

See the API schema for in-depth information.
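As a quick illustration, here is a minimal sketch of solving a task over HTTP from Python. The endpoint path, header name, and payload fields below are assumptions made for the example, not the definitive schema; check the API schema for the exact parameters.

```python
# Minimal sketch of calling a task endpoint over HTTP.
# The URL, header, and payload fields are illustrative assumptions --
# consult the API schema for the actual parameter names.
import requests

API_KEY = "your-api-key"  # placeholder credential

response = requests.post(
    "https://api.backprop.co/text-classification",  # hypothetical task endpoint
    headers={"x-api-key": API_KEY},                 # assumed auth header
    json={
        "text": "I really enjoyed the film!",       # input to classify
        "labels": ["positive", "negative"],         # candidate labels
    },
)
response.raise_for_status()
print(response.json())
```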

Performance

Each model runs in an isolated, serverless CPU runtime with 8GB of memory.
This runtime is performant enough for most use cases, including state-of-the-art models. Try our Sandbox for a first-hand experience.

Wherever possible, we have optimised the performance of our pre-trained models through distillation and quantization.

To avoid cold starts, we always keep at least one instance of every model warm.

Both our pre-trained and uploaded models can scale to support thousands of requests per second.

After scaling up, additional instances remain warm until they have received no requests for around 15 minutes. There is no charge for keeping a model warm.

As a result, our billing is entirely usage-based, and the API can handle very high loads without any significant slowdown.