The Opentrons OT-2 has a camera built into its chassis, positioned to view the deck. It has a resolution of 640x480px for still images and 320x240px for video.

The camera can be used in two ways: taking a still image through the robot's HTTP API, or saving images and video from the robot's terminal.

Taking Images With The HTTP API

The robot will respond to POST /camera/picture with the picture in the body and a Content-Type: image/jpeg header. First, find your robot's IP address in the Opentrons App:

[Screenshot: the Opentrons App showing the robot's IP address]

You can then use an application like Postman, wget, or curl to make a POST request to that IP on port 31950.
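As a minimal sketch using only Python's standard library (the helper names and the ROBOT_IP placeholder are illustrative, not part of the Opentrons API):

```python
import urllib.request

def picture_url(robot_ip):
    # The OT-2 HTTP API listens on port 31950.
    return "http://{}:31950/camera/picture".format(robot_ip)

def take_picture(robot_ip, out_path="picture.jpg"):
    # POST to /camera/picture; the response body is the JPEG image.
    req = urllib.request.Request(picture_url(robot_ip), method="POST")
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
```

For example, take_picture("ROBOT_IP") saves the current camera view to picture.jpg, where ROBOT_IP is the address shown in the Opentrons App.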

Taking Images And Video From The Robot 

Sometimes, taking a still image is not enough, or you want to handle the images from the robot itself. In this case, you can SSH into the robot and use the ffmpeg utility to interact with the camera directly.

Once you have the robot's IP address (see above), you can ssh into the robot to gain access to its terminal:

ssh root@ROBOT_IP  (replace ROBOT_IP with your robot's IP)

Now, you can invoke the ffmpeg command line interface. Full documentation on this interface can be found online.

To save an image to a file, run the following from the robot's terminal:

ffmpeg -f video4linux2 -s 640x480 -i /dev/video0 -ss 0:0:1 -frames 1 image.jpg 

This will save a single 640x480 (-s 640x480) JPEG frame, captured one second after the stream starts (-ss 0:0:1 -frames 1), from the camera (-i /dev/video0) to a file in the current directory named image.jpg.

To save a video, run the following from the robot's terminal:

ffmpeg -video_size 320x240 -i /dev/video0 -t 00:00:01 video.mp4

This will save 1 second (-t 00:00:01) of 320x240 video (-video_size 320x240) from the camera (-i /dev/video0) to a file in the current directory named video.mp4.

Taking Images And Video In A Protocol

To integrate the camera into your protocol, use the Python subprocess module to invoke the ffmpeg commands above. For instance, to save an image from the camera in a protocol, you might do this:

import subprocess
from opentrons import robot

if not robot.is_simulating():
    subprocess.check_call(['ffmpeg', '-f', 'video4linux2', '-s', '640x480', '-i', '/dev/video0', '-ss', '0:0:1', '-frames', '1', 'image.jpg'])
    with open('image.jpg', 'rb') as f:
        contents = f.read()

The image data would then be available in the variable contents.

Note that the code that captures the image is wrapped in if not robot.is_simulating() to avoid trying to access the camera during protocol simulation.
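The same pattern works for video. As a sketch, using an illustrative helper name (video_command is not part of the Opentrons API), you can build the argument list for the video command shown earlier and run it with subprocess:

```python
import subprocess

def video_command(duration='00:00:01', out_path='video.mp4'):
    # The same ffmpeg invocation as the terminal example, as an argument list.
    return ['ffmpeg', '-video_size', '320x240', '-i', '/dev/video0',
            '-t', duration, out_path]

# Inside a protocol, keep the same simulation guard as above:
# if not robot.is_simulating():
#     subprocess.check_call(video_command())
```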
