Using the sandbox

In this section, we'll walk through how to use tasks on the dashboard's sandbox.

To begin, let's navigate over to the sandbox. You can consider the sandbox a testing ground of sorts — here, you'll find all of the tasks Backprop offers out of the box, along with the models they support.

[Image: Sandbox emotion base view]

Tour + Emotion Detection

The sandbox is split into three different pieces.

The first is the tasks & models section. This is where you choose what task you want to experiment with, as well as which model to use.

To the right of that, you'll see the Pipeline. Here, you'll find some extra info about the task and model you've selected; click a model's name to expand its description.

Below that, you'll find the task input and output.

The Emotion task is simple — when provided with some input text, the output is a list of emotions found in the text. Let's try it with a bad restaurant review, for example. Hit the Run button to make the API call.

[Image: Emotion example]

Great, that wasn't so hard.
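For reference, the task boils down to a mapping like this (using the same review text we'll send to the API later in this tutorial):

# Input text for the Emotion task
text = "The service was awful, and so was the food."
# Output from the task
# {'emotion': 'disgust'}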

Not all tasks have such simple input formats. Navigate to the Image Classification task on the sandbox.

[Image: Sandbox IC base view]

Image Classification

The input schema for this task is different: not only is there a place to upload an image, but there's also an additional field for text labels.

This task uses OpenAI's CLIP model, which is a zero-shot classifier. This means it wasn't trained for a specific classification task. Rather, when given an image and a set of labels, the model will assign probabilities to those supplied labels.
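To build some intuition for how that works, here's a minimal numpy sketch of the zero-shot idea — a conceptual illustration, not CLIP's actual code. CLIP embeds the image and each candidate label into a shared vector space, and the label probabilities come from a softmax over the image-label similarities:

import numpy as np

# Conceptual sketch only, not CLIP's real implementation.
# CLIP embeds the image and each candidate label into the same
# vector space; similarities between them become probabilities.
def zero_shot_probs(image_embedding, label_embeddings):
	# Cosine similarity between the image and each label
	image = image_embedding / np.linalg.norm(image_embedding)
	labels = label_embeddings / np.linalg.norm(label_embeddings, axis=1, keepdims=True)
	similarities = labels @ image
	# Softmax turns similarities into a probability per label
	exp = np.exp(similarities - similarities.max())
	return exp / exp.sum()

# e.g. with made-up 4-dim embeddings for the image and two labels:
probs = zero_shot_probs(np.array([1.0, 0.0, 0.0, 0.0]),
						np.array([[0.9, 0.1, 0.0, 0.0],    # "pigeon"
								  [0.1, 0.9, 0.0, 0.0]]))  # "crow"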

We'll try it with this picture of a pigeon:

[Image: Pigeon]

As well as some labels, separated by commas:

crow, falcon, eagle, raven, pigeon, cardinal, bluejay, robin, sparrow

Now let's hit Run and see what we get.

[Image: IC pigeon example]

Here we can see some benefits of zero-shot classification: despite not being trained to recognize specific bird species, CLIP is still capable of doing so.

Finally, we'll take a look at a task with even more inputs that can be tweaked. Click on the Text Generation task.

[Image: Sandbox TG base view]

Text Generation

Select the model gpt2-medium.

We've got quite a lot to play with here. These parameters are all passed to the model when inference is performed, and they affect the text generation output.

More detail can be found in our references section; here, I'll give a brief overview of what these parameters do.

Token length is a range, and determines the minimum/maximum length of your outputs.

Temperature is a value that, without delving into the mathematics, introduces some 'wildness' to the generated text.
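If you're curious about the mechanics, here's a rough numpy sketch of how temperature is typically applied (the logit values are invented for illustration):

import numpy as np

# Raw model scores ("logits") for four candidate next tokens;
# the numbers are invented for illustration.
logits = np.array([2.0, 1.0, 0.5, 0.1])

def sample_probs(logits, temperature):
	# Dividing by the temperature before the softmax flattens the
	# distribution when temperature > 1 (wilder picks) and sharpens
	# it when temperature < 1 (safer picks)
	scaled = logits / temperature
	exp = np.exp(scaled - scaled.max())
	return exp / exp.sum()

print(sample_probs(logits, 1.0))  # baseline
print(sample_probs(logits, 2.0))  # flatter: more surprising text
print(sample_probs(logits, 0.5))  # sharper: more predictable text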

The repetition and length penalties are used to encourage or discourage specific behaviors when generating text. With a repetition penalty >1.0 applied, the model is less likely to generate tokens that appeared in the context or earlier in the generated sequence. Similarly, a length penalty >1.0 encourages longer sequences. Setting either value to <1.0 reverses the effect.
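As a sketch, here's one common way a repetition penalty is implemented (this mirrors the CTRL-style approach used by several libraries; it isn't necessarily what Backprop's backend does internally):

import numpy as np

# One common repetition-penalty implementation (CTRL-style);
# not necessarily Backprop's exact approach.
def apply_repetition_penalty(logits, seen_token_ids, penalty):
	logits = logits.copy()
	for tok in set(seen_token_ids):
		# Shrink positive logits and grow negative ones, so a token
		# that has already appeared becomes less likely either way
		if logits[tok] > 0:
			logits[tok] /= penalty
		else:
			logits[tok] *= penalty
	return logits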

Top p is a value — a probability. When top p is set, the model will choose its next word from the smallest set of 'most likely next words' that has a combined probability greater than p.

Top k is a value that limits how many tokens the model will consider when choosing its next word: only the k most likely.
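Here's a small numpy sketch of both filters together, matching the definitions above (the probabilities are toy numbers, for illustration only):

import numpy as np

def top_k_top_p_filter(probs, k=None, p=None):
	# Keep only the tokens allowed by top-k and/or top-p,
	# then renormalize so the kept probabilities sum to 1
	order = np.argsort(probs)[::-1]  # token indices, most likely first
	keep = np.ones_like(probs, dtype=bool)
	if k is not None:
		topk = np.zeros_like(keep)
		topk[order[:k]] = True  # the k most likely tokens
		keep &= topk
	if p is not None:
		cumulative = np.cumsum(probs[order])
		# smallest set of most-likely tokens whose combined
		# probability exceeds p
		idx = np.searchsorted(cumulative, p, side="right")
		topp = np.zeros_like(keep)
		topp[order[:idx + 1]] = True
		keep &= topp
	filtered = np.where(keep, probs, 0.0)
	return filtered / filtered.sum()

# Toy distribution over five tokens
probs = np.array([0.5, 0.3, 0.1, 0.07, 0.03])
print(top_k_top_p_filter(probs, p=0.5))  # keeps the two most likely
print(top_k_top_p_filter(probs, k=3))    # keeps the three most likely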

The number of beams is a value that determines how many 'branches' the generation will run. Out of all branches, the one with the highest overall probability is then returned. This is a way to ensure the generator doesn't miss probable word sequences that may be obscured by an early low-probability word choice.
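To see how beams can rescue a sequence that starts with a lower-probability word, here's a toy beam search over a made-up next-token table (all numbers invented for illustration):

import math

# A made-up next-token table: probabilities depend only on the
# last token. All numbers are invented for illustration.
NEXT = {
	"the": {"cat": 0.4, "dog": 0.35, "mat": 0.25},
	"cat": {"sat": 0.6, "ran": 0.4},
	"dog": {"barked": 0.9, "sat": 0.1},
	"mat": {"sat": 0.5, "ran": 0.5},
}

def beam_search(start, steps, num_beams):
	beams = [([start], 0.0)]  # (tokens, cumulative log-probability)
	for _ in range(steps):
		candidates = []
		for tokens, score in beams:
			for tok, prob in NEXT.get(tokens[-1], {}).items():
				candidates.append((tokens + [tok], score + math.log(prob)))
		if not candidates:
			break
		# Keep only the num_beams highest-scoring branches
		beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:num_beams]
	return beams[0][0]

# Greedy (one beam) picks "cat" first and ends at probability 0.24;
# three beams keep "dog" alive and find "the dog barked" at 0.315.
print(beam_search("the", 2, num_beams=1))  # ['the', 'cat', 'sat']
print(beam_search("the", 2, num_beams=3))  # ['the', 'dog', 'barked']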

And finally, the number of generations is how many times generation will be run (with all texts generated being returned in a list).

With all those in mind, let's try a sample generation.

Set the input to "Once upon a time".

I'll be tweaking just a few parameters. Set the following, and leave the rest at their defaults:

  • Token Length: 50 - 75
  • Temperature: 0.8
  • Top P: 0.5

Hit Run, and generate some text:

[Image: TG example]

And with that, you've reached the end of the sandbox tutorial.

Take some time to explore and test the environment, to get a better understanding of what Backprop supports out of the box.

In the next section, we'll be taking the next step to using these tasks in your projects, by accessing them directly through the API.

Using the API

In this section, we'll be running some inference tasks via the Backprop API.

While the sandbox is a great way to get yourself familiar with tasks, using them in your projects will require you to make calls to our API.

Luckily, this process is simple. To begin, we're first going to return to the sandbox's Emotion task.

[Image: Request view GIF]

Request View

If you click the "< >" button on the task (shown above), your view will change from the input view to the raw request view.

The API accepts POST requests, with a few components:

  • The "x-api-key" header, containing your API key for authorization
  • The "Content-Type" header, specifying that you're sending JSON
  • A JSON body containing your task input and model of choice
  • The task endpoint you're using (in this case, "/emotion")

Switch back to the input view by pressing the pencil icon. Try typing in some text input, and swap back over to the request view.

[Image: Request view with text]

You'll see that the request view is dynamic, and updates to show what request will be sent to the API when you hit the Run button.

All tasks in the sandbox have their own request view. If you're going to use a task in your software, you can use this as a way to familiarize yourself with that task's request schema.

Emotion from the API

Let's try calling the Emotion task outside the sandbox using Python's requests library.

import requests
# Set headers, include API key for authorization
headers = {
	"x-api-key": "your-api-key",
	"content-type": "application/json"
}
# Task input + model selection
body = {
	"text": "The service was awful, and so was the food.",
	"model": "t5-base-qa-summary-emotion"
}
# POST to Backprop API's "emotion" endpoint
res = requests.post("https://api.backprop.co/emotion",
					json=body,
					headers=headers)
print(res.json())
# {'emotion': 'disgust'}

We can now access T5 from anywhere! Let's look at one of the more complex tasks now.

Text Generation from the API

If you navigate to the Text Generation task on the sandbox, you'll be greeted with a set of tunable parameters.

Swap over to the request view for this task. Using the extra input parameters in the API is easy: just add any changed parameters to the request body, and POST it to the new endpoint. Parameters left at their defaults don't need to be included.

import requests
# The headers stay the same
headers = {
	"x-api-key": "your-api-key",
	"content-type": "application/json"
}
# Add your extra parameters, and call GPT-2
# Only changing min/max length, temp, and top_p
body = {
	"text": "Once upon a time",
	"min_length": 50,
	"max_length": 75,
	"temperature": 0.8,
	"top_p": 0.5,
	"model": "gpt2-medium"
}
# Change the endpoint: /text-generation
res = requests.post("https://api.backprop.co/text-generation",
					json=body,
					headers=headers)
print(res.json())
# {"output": "Once upon a time ..."}

Great -- with a few changes, we're ready to use this new task.

One thing to note is that the task response has changed slightly: while the Emotion response had an "emotion" field, generation has an "output" field.

Be sure that you're familiar with what structure a response will have.
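For instance, picking the results out of the two responses above:

# From the Emotion snippet above
res.json()["emotion"]  # 'disgust'
# From the Text Generation snippet above
res.json()["output"]   # 'Once upon a time ...'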

Hopefully you now have a better idea of how you can start to use the Backprop API for your use case.

While the examples here were in Python, you can of course send requests from whatever language or environment your software works in.

In the next tutorial, we'll be going over the basics of uploading a custom model to use with Backprop. In the meantime, I'm going to cycle my API key since it was visible in this tutorial. Make sure to keep yours secure!