Introduction

Welcome to Pupil - the open source head mounted mobile eye tracking platform.

If this is the first time you are hearing about the Pupil project, we recommend you visit the Pupil Labs website.

This wiki is the main source of documentation for Pupil users getting started with their Pupil headset and for developers contributing to the code.

Getting Started

This guide will lead you through a basic workflow using Pupil hardware and software.

Once you have a Pupil Headset all you need to do is install the Pupil apps on a computer running Linux, MacOS, or Windows.

We are always working on new features, fixing bugs, and making improvements. Make sure to visit the release page frequently to download the latest version and follow the Pupil Labs blog for updates.

Capture Workflow

Go through the following steps to get familiar with the Pupil workflow. You can also check out video tutorials at the end of the guide.

1. Put on Pupil

Put on the Pupil headset and plug it into your computer. Headsets are adjustable and shipped with additional parts. For more information head over to the Pupil Hardware guide.

2. Start Pupil Capture

3. Check pupil detection

Take a look at the Eye window. If the pupil is detected you will see a red circle around the edge of your pupil and a red dot at the center of your pupil.

If the algorithm’s detection confidence is high, the red circle will be opaque. If confidence diminishes the circle will become more transparent.

Try moving your head around a bit while looking at your eye to see that the pupil is robustly detected in various orientations.

4. Calibrate

In order to know what someone is looking at, we must establish a mapping between pupil and gaze positions. This is what we call calibration; the calibration process establishes a mapping from pupil to gaze coordinates.

Screen Marker Calibration Method

Click the C button in the world window or press c on the keyboard to start calibration.

Follow the marker on the screen with your eyes and try to keep your head stationary.

There are other calibration methods, and you can find more information on how calibration works in the user guide.

5. Record

Start capturing data!

Pupil Capture will save the world video stream and all corresponding gaze data in a folder in your user directory named recordings.

  • Start recording: Press the r key on your keyboard or press the circular ‘R’ button in the left hand side of the world window.
  • The elapsed recording time will appear next to the ‘R’ button.
  • Stop recording: Press the r key on your keyboard or press the circular ‘R’ button in the left hand side of the world window.

See a video demonstration of how to set recordings path, session name, and start recording – here.

Where is the recording saved?

By default, each recording will live in its own unique data folder contained in the recordings folder.

You can make as many recordings as you like.

The default recordings directory will have the following hierarchy.

  • recordings
    • 2016-04-05
      • 001
      • 002
      • 003
      • ####

How are recordings saved?

Pupil Capture saves the video frames in a fixed frame rate container. This means that the raw output video (world.mp4) does not reflect the true duration or frame rate of the recording. That information can be found in world_timestamps.npy, which tells you exactly where each frame belongs in time.
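If you want to verify this yourself, the snippet below is a minimal sketch (assuming numpy is installed and that you run it from inside a recording folder) that loads world_timestamps.npy and derives the duration and mean frame rate of the recording.

# Minimal sketch: inspect world_timestamps.npy to see where each frame sits in time.
# Assumes you run this from inside a recording folder and have numpy installed.
import numpy as np

timestamps = np.load("world_timestamps.npy")  # one timestamp (in seconds) per world frame
print("number of frames:", len(timestamps))
print("recording duration (s):", timestamps[-1] - timestamps[0])
print("mean frame rate (Hz):", (len(timestamps) - 1) / (timestamps[-1] - timestamps[0]))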

However, if you export using Pupil Player, the video will be made such that the frames appear at exactly the right time. The output video will not miss any frames of the raw video; instead, output frames are spaced out exactly as they were initially captured.

Player Workflow

Use Pupil Player to visualize data recorded with Pupil Capture and export videos of visualizations and datasets for further analysis.

1. Open Pupil Player

Now that you have recorded some data, you can play back the video and visualize gaze data, marker data, and more.

Player Window

Let’s get familiar with the Player window.

The Player window is the main control center for Pupil Player. It displays the video and data recorded with Pupil Capture.

  1. Graphs - This area contains performance graphs. You can monitor CPU, FPS, and pupil detection confidence. These graphs are the same as in the World window.
  2. Settings GUI Menu - This is the main GUI for Pupil Player. You can use this menu primarily to launch plugins and control global settings.
  3. Plugin GUIs - Each Plugin spawns its own GUI window. You can control settings of each Plugin in the GUI window. For details on all plugins see documentation on Pupil Player in the user guide.
  4. Seek Bar and Trim Marks - You can drag the playhead (large circle) to scrub through the video or press the space bar to play/pause. You can use the arrow keys to advance one frame at a time. Drag the small green circles at the end of the seek bar to set trim marks. Trim marks directly inform the section of video/data to export.
  5. Hot keys - This area contains clickable buttons for plugins.

Where are Pupil Player exports saved?

Exports are saved within a dedicated folder named exports within the original recording folder.

Each export is contained within a folder within the exports folder. Each export folder is named after the trim marks (start frame and end frame) used for the export.

Pupil Capture Demo Video

The video below demonstrates how to setup, calibrate, and make a recording with Pupil Capture.

Turn on closed captions CC to read annotations in the video.

Pupil Player Demo Video

The video below demonstrates how to view a dataset recorded with Pupil Capture, make and export visualizations.

Turn on closed captions CC to read annotations in the video.

Pupil Hardware

Pupil Labs is based in Berlin and ships Pupil eye tracking headsets and VR/AR eye tracking add-ons to individuals, universities, and corporate enterprises worldwide!

Go to the Pupil store for prices, versions, and specs.

Pupil Mobile Eye Tracking Headset

You wear Pupil like a pair of glasses. Pupil connects to a computing device via a USB-A or USB-C cable. The headset is designed to be lightweight and adjustable in order to accommodate a wide range of users.

To the right is an illustration of a monocular Pupil headset. Depending on your configuration, your headset might look different, but the working principles are the same.

Pupil ships with a number of additional parts. The below sections provide an overview of their use and a guide to adjusting the Pupil headset.

Additional parts

World Camera

The world camera comes with two lenses: a 60 degree FOV lens (shown on the left) and a wide angle 100 degree FOV lens (shown on the right).

Nose Pads

All Pupil headsets come with 2 sets of nose pads. You can swap the nose pads to customize the fit.

Pupil Headset Adjustments

Adjust the ball joint firmness by adjusting the socket screw.

You can slide the eye camera arm along the track.

You can rotate the eye camera about its ball joint.

You can rotate the world camera to align with your FOV.

Focus Cameras

Focus Eye Camera

Very important - make sure the eye camera is in focus. If you can see details of your iris, then the focus is most likely good. Slide the eye camera arm to adjust focus and/or use the lens adjuster tool.

Focus World Camera

Set the focus for the distance at which you will be calibrating.

HTC Vive Add-On

Add eye tracking powers to your HTC Vive with our 120Hz binocular eye tracking add-on.

HTC Vive Setup

This page will guide you through all steps needed to turn your HTC Vive into an eye tracking HMD using a Pupil Labs eye tracking add-on.

Install the add-on

A detailed look

… at how the eye tracking ring engages with the lens holding geometry. Do not follow these steps as instructions; just have a look to get a feeling for the snap-in process, then follow the guide above.

HTC Vive USB connection options

The HTC Vive has one free USB port hidden under the top cover that also conceals the cable tether connection. This gives us two options for connecting the Pupil eye tracking add-on:

Connect the add-on to the free HTC Vive USB port.

This means the cameras share the Vive's USB tether bandwidth with other USB components inside the Vive. This works, but only if one of the following rules is observed:

  • Disable the HTC-Vive built-in camera in the VR settings pane to free up bandwidth for Pupil’s dual VGA120 video streams.

or

  • Enable the HTC-Vive built-in camera and set it to 30Hz. Then set the Pupil cameras to 320x240 resolution to share the USB bus.

Run a separate USB lane along the tether

If you want full frame rate and resolution for both the Vive's camera and the add-on, you will have to connect the Pupil add-on to a separate USB port on the host PC. We recommend this approach.

Connection and Camera

Once you plug the USB cables into your computer:

  • the right eye camera will show up with the name: Pupil Cam 1 ID0
  • the left eye camera will show up with the name: Pupil Cam 1 ID1

Focus and Resolutions

After assembly and connection, fire up Pupil Capture or Pupil Service and adjust the focus of the eye cameras by rotating each lens by a few degrees (not full revolutions) in the lens housing.

Use 640x480 or 320x240 resolution to get 120fps and a good view of the eye. Other resolutions will crop the eye images.

Interfacing with other software or your own code

Both cameras are fully UVC compliant and will work with OpenCV's video backend, Pupil Capture, and libraries like libuvc and pyuvc.
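As a quick illustration of the UVC compatibility, here is a minimal sketch that grabs a single frame through OpenCV's video backend. The device index (0) is an assumption; it depends on which cameras your operating system enumerates first.

# Minimal sketch: grab one frame from a Pupil add-on camera via OpenCV's UVC-compatible backend.
# The device index 0 is an assumption; it depends on how your OS enumerates the cameras.
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    cv2.imwrite("eye_frame.png", frame)  # save the grabbed frame for inspection
cap.release()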

Oculus Rift DK2 Add-On

Add eye tracking powers to your Oculus Rift DK2 with our 120Hz eye tracking add-ons.

Oculus DK2 Setup

This page will guide you through all steps needed to turn your Oculus DK2 into an eye tracking HMD using the Pupil Oculus DK2 eye tracking add-on cups.

Install lens in cup

Take the lens out of an existing Oculus lens cup.

Remove the LED ring and insert the lens into the Pupil eye tracking cup.

Install the LED ring and connect the LED power supply.

Install cup in DK2

Route cables

Route the USB cables through the vent holes in the top of the Oculus DK2.

Connect cameras

Connect the eye tracking cup to the USB cable. Remove the old cup and insert the eye tracking cup in the DK2.

Connection and Camera

Once you plug the USB cables into your computer:

  • the right eye camera will show up with the name: Pupil Cam 1 ID0
  • the left eye camera will show up with the name: Pupil Cam 1 ID1

Both cameras are fully UVC compliant and will work with OpenCV's video backend, Pupil Capture, and libraries like libuvc and pyuvc.

DIY

If you are an individual planning to use Pupil exclusively for noncommercial purposes, and are not afraid of SMD soldering and hacking, then buy the parts, modify the cameras, and assemble a Pupil DIY headset. We have made a guide and a shopping list to help you.

Getting all the parts

The 3d-printed headset is the centerpiece of the Pupil mobile eye tracker. You can buy it from the Pupil Labs team through the Pupil Shapeways store. The price of the headset is part production cost and part support for the Pupil development team. This enables us to give you support and continue working on the project.

All other parts of the Pupil DIY kit have been specifically selected with availability and affordability in mind. See the Bill of Materials to learn what else you will need to get.

Tools

You will need access to these tools:

  • Solder station, wick, flux (for SMD solder work)
  • Tweezers
  • Small Phillips screwdriver
  • Prying tool to help un-case the webcams

Prepare Webcams

The first step is to modify the cameras so we can use them for eye-tracking.

De-case Cameras

Take both webcams out of their casings. Follow the video guides.

  1. De-case the Logitech C525/C512
  2. De-case the Microsoft HD-6000

Solder Work on Eye Camera PCB

This is by far the trickiest part. You will need some soldering experience, or you should work with someone who can help you with this step. In the video and photo the lens holder is removed, but you will do it with the lens holder attached.

  1. Cut off the microphone
  2. De-solder or break off the push button (Note: some cameras don’t have this button.)
  3. De-solder the blue LEDs
  4. Solder on the IR LEDs. Please take note of LED polarity! video

Replace IR-blocking Filter on the Eye Camera

  1. Unscrew the lens from the mount.
  2. Carefully remove the IR filter. Be very careful! The IR filter is a thin piece of coated glass and right behind it is a lens element that must stay intact and unharmed! It is necessary to remove the IR filter, so that the image sensor will be able to “see” the IR light.
  3. Using a hole punch, cut out 1 round piece of exposed film and put it where the old filter was.
  4. Use plastic glue to fix the piece. Don’t let the glue touch the center!
  5. Put the lens back in. You will have to focus the lens by hand when you run the software for the first time. Later you can use the focus control in software to fine tune.

Video

Assembly of the Pupil DIY Kit

If you are reading this, chances are that you received one or more Pupil headsets – Awesome! If you feel like letting us know something about the headset, print quality, good and bad, please go ahead and post your thoughts on the Pupil Google Group.

Headset 3D print Intro & Unboxing

  1. Get used to the material
  2. Clean out the eye-camera arm
  3. Try it on!

Pupil Headset 3D Print Unboxing Video

Camera Assembly

  1. Attach the world camera onto the mount using the 4 small screws left over from disassembly.
  2. Clip the world camera clip onto the headset
  3. Slide the eye-cam into the mount video guide
  4. Slide the arm onto the headset
  5. Route the cables
  6. Attach USB extension cable(s)

Customization

The camera mounts can be replaced by custom-built parts that suit your specific camera setup or other sensors.

Pupil Hardware Development

This page contains documentation and discussion on open source camera mounts, optics, and cameras.

Camera Mounts

We release the CAD files for the camera mounts for you to download and modify in accordance with our license. CAD files for the frame are not open source; see the explanation.

Interface Documentation

By releasing the mounts as example geometry we automatically document the interface. You can use the CAD files to take measurements and make your own mounts.

Compatibility

The mounts are developed as part of the whole headset and carry the revision number of the headset they were designed for.

Download Camera Mount CAD Files

All files are hosted in the pupil-hardware-diy repo here

You can clone the latest revision

git clone https://github.com/pupil-labs/pupil-hardware-diy.git

Or, if you want an older version, just check out an older revision. In this example we check out rev006, which has the git commit id 6ad49c6066d5:

git clone https://github.com/pupil-labs/pupil-hardware-diy.git
cd pupil-hardware-diy
git checkout 6ad49c6066d5

User Docs

This section of the documentation is targeted towards users of Pupil software and provides deeper explanation of features and methods.

Pupil Capture

Pupil Capture is the software used with the Pupil Headset. The software reads the video streams coming in from the world camera and the eye camera. Pupil Capture uses the video streams to detect your pupil, track your gaze, detect and track markers in your environment, record video and events, and stream data in realtime.

Capture Selection

By default Pupil Capture will use Local USB as the capture source. If you have a Pupil headset connected to your machine, you will see video from your Pupil headset displayed in the World and eye windows. If no headset is connected or Pupil Capture is unable to open capture devices, it will fall back to the Test Image. Other options for the capture source are described below.

  • Test Image - This is the fallback behavior if no capture device is found, or if you do not want to connect to any capture device.
  • Video File Source - select this option to use previously recorded videos for the capture selection.
  • Pupil Mobile - select this option when using Pupil Capture with the Pupil Mobile Android application.
  • Local USB - select this option if your Pupil Headset is connected to the machine running Pupil Capture. This is the default setting.

Calibration

Pupil uses two cameras. One camera records a subject’s eye movements – we call this the eye camera. Another camera records the subject’s field of vision – we call this the world camera. In order to know what someone is looking at, we must find the parameters to a function that correlates these two streams of information.
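To illustrate the idea only (this is not Pupil's actual calibration code), the sketch below fits a simple least-squares polynomial mapping from normalized pupil positions to normalized world (gaze) positions, given matched samples collected during a calibration.

# Illustration only, not Pupil's actual calibration routine.
# Fit a simple least-squares polynomial mapping from normalized pupil positions
# to normalized gaze positions, given matched samples from a calibration.
import numpy as np

pupil_xy = np.random.rand(9, 2)  # placeholder: 9 pupil positions sampled during calibration
gaze_xy = np.random.rand(9, 2)   # placeholder: the corresponding target positions in the world view

# design matrix with a small polynomial feature set: 1, x, y, x*y, x^2, y^2
X = np.column_stack([np.ones(len(pupil_xy)),
                     pupil_xy,
                     pupil_xy[:, :1] * pupil_xy[:, 1:],
                     pupil_xy ** 2])

coeffs, *_ = np.linalg.lstsq(X, gaze_xy, rcond=None)  # one coefficient column per output axis
mapped_gaze = X @ coeffs                              # gaze estimates for the calibration samples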

Before every calibration

Make sure that the user's pupil is properly tracked. Make sure that the world camera is in focus for the distance at which you want to calibrate, and that you can see the entire area you want to calibrate within the world camera's extents (FOV).

Calibration Methods

First select the calibration method you would like to use:

Screen Marker Calibration

This is the default method, and a quick method to get started. It is best suited for close range eye-tracking in a narrow field of view.

  1. Select Screen Marker Calibration
  2. Select your Monitor (if more than 1 monitor)
  3. Toggle Use fullscreen to use the entire extents of your monitor (recommended). You can adjust the scale of the pattern for a larger or smaller calibration target.
  4. Press c on your keyboard or click the blue circular C button in the left hand side of the world window to start calibration.
  5. Follow the marker on the screen with your eyes. Try to keep your head still during calibration.
  6. The calibration window will close when calibration is complete.

In the Advanced sub-menu you can set the sample duration – the number of frames to sample the eye and marker position. You can also set parameters that are used to debug and detect the circular marker on the screen.

Manual Marker Calibration

This method is done with an operator and a subject. It is suited for midrange distances and can accommodate a wide field of view. You need markers made of concentric circles, like the two shown below.

  1. Select Manual Marker Calibration
  2. Press c on your keyboard or click the blue circular C button in the left hand side of the world window to start calibration.
  3. Stand in front of the subject (the person wearing the Pupil headset) at the distance you would like to calibrate. (1.5-2m)
  4. Ask the subject to follow the marker with their eyes and hold their head still.
  5. Show the marker to the subject and hold the marker still. You will hear a “click” sound when data sampling starts, and one second later a “tick” sound when data sampling stops.
  6. Move the marker to the next location and hold the marker still.
  7. Repeat until you have covered the subject's field of view (generally about 9 points should suffice).
  8. Show the ‘stop marker’ or press c on your keyboard or click the blue circular C button in the left hand side of the world window to stop calibration.

You will notice that there are no standard controls, only an Advanced sub-menu to control detection parameters of the marker and to debug by showing edges of the detected marker in the world view.

Download markers to print or display on smartphone/tablet screen.

Natural Features Calibration

This method is for special situations and far distances. Usually not required.

  1. Select Natural Features Calibration
  2. Press c on your keyboard or click the blue circular C button in the left hand side of the world window to start calibration.
  3. Ask the subject (the person wearing the Pupil headset) to look at a point within their field of vision. Note – pick a salient feature in the environment.
  4. Click on that point in the world window.
  5. Data will be sampled.
  6. Repeat until you have covered the subject's field of view (generally about 9 points should suffice).
  7. Press c on your keyboard or click the blue circular C button in the left hand side of the world window to stop calibration.

Notes on calibration accuracy

Using the screen-based 9 point calibration method, you should easily be able to achieve tracking accuracy within the physiological limits (1-2 visual degrees).

  • Any calibration is accurate only at its depth level relative to the eye (parallax error).
  • Any calibration is only accurate inside the field of view (in the world video) you have calibrated. For example: If during your calibration you only looked at markers or natural features (depending on your calibration method) that are in the left half, you will not have good accuracy in the right half.

Recording

Press r on your keyboard or press the blue circular R button in the left hand side of the world window to start recording. You will see red text with the elapsed time of recording next to the R button. To stop recording, press r on your keyboard or press the R button on screen.

You can set the folder or Path to recordings and the Recording session name in the Recorder sub-menu within the GUI. Note - you must specify an existing folder, otherwise the Path to recordings will revert to the default path.

What will be in the session folder?

If you open up a session folder you will see a collection of video(s) and data files. Take a look at Data format to see exactly what you get.

Open a plugin

Click on the selector “Open Plugin” and select your plugin.

Pupil Sync

Pupil Sync can help you collect data from different devices, control an experiment with multiple actors (data generators and sensors), or use more than one Pupil device simultaneously:

  • Load the Pupil Sync plugin from the General sub-menu in the GUI.
  • Once the plugin is active it will show all other Pupil Sync nodes on the local network in the GUI.
  • It will also automatically synchronize time to within 0.1 ms.
  • Furthermore, actions like starting and stopping a recording on one device will be mirrored instantly on all other devices.

For this to work your network needs to allow UDP transport. If the nodes do not find each other, create a local wifi network and use that instead.

Streaming Pupil Data over the network

Pupil Remote is a plugin that is used to broadcast data over the network using the excellent ZeroMQ library.

  • Load the Pupil Remote plugin from the General sub-menu in the GUI (it is loaded by default).
  • It will automatically begin broadcasting at the default Address specified.
  • Change the address and port as desired.
  • If you want to change the address, just type in the address after the tcp://

Receiving Data with your own app

ZeroMQ has bindings for many languages. Reading the stream using Python goes like so:

"""
Receive data from Pupil server broadcast over TCP
test script to see what the stream looks like
and for debugging
"""

import zmq
import json

#network setup
port = "5000"
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://127.0.0.1:"+port)

# recv all messages
socket.setsockopt(zmq.SUBSCRIBE, '')
# recv just pupil postions
# socket.setsockopt(zmq.SUBSCRIBE, 'pupil_positions')
# recv just gaze postions
# socket.setsockopt(zmq.SUBSCRIBE, 'gaze_positions')

while True:
    topic,msg =  socket.recv_multipart()
    msg = json.loads(msg)
    print  "\n\n",topic,":\n",msg

We have written some simple Python scripts that use Pupil Server to let your gaze control a mouse, or simply print out the data streaming from Pupil Server. For more simple scripts, check out the pupil-helpers repository.

Message Format for Pupil Server

Messages from Pupil Server mirror all objects in the events dict that is used internally in Pupil Capture and Pupil Player. The data is sent per topic (pupil_positions, gaze_positions, …) and serialized using JSON. The example above shows the full format.

Marker Tracking

The Marker Tracking plugin allows you to define surfaces within your environment and track surfaces in realtime using a 5x5 square marker. We were greatly inspired by the ArUco marker tracking library.

  • Markers - We use a 5x5 square marker. This is not the same marker that is used by ArUco (they use 7x7).
  • Using a 5x5 marker gives us 64 unique markers.
  • Why the 5x5 grid? The 5x5 grid allows us to make smaller markers that can still be detected. Markers can be printed on paper, stickers, or displayed on the screen.

See the video linked for an introduction and workflow.

Defining Surfaces with Markers

A surface can be defined by one or more markers. Surfaces can be defined with Pupil Capture in real-time, or offline with Pupil Player. Below we provide an outline of steps.

  • Define surfaces within your environment using one or more fiducial markers. Surfaces can be defined with a minimum of one marker. The maximum number of markers per surface is limited by the number of markers we can produce with a 5x5 grid.
  • Use Pupil Capture or Pupil Player to register surfaces, name them, and edit them.
  • Registered surfaces are saved automatically, so that the next time you run Pupil Capture or Pupil Player, your surfaces (if they can be seen) will appear when you start the marker tracking plugin.
  • Surfaces defined with more than 2 markers are detected even if some markers go outside the field of vision or are obscured.
  • We have created a window that shows registered surfaces within the world view and the gaze positions that occur within those surfaces in realtime.
  • Streaming Surfaces with Pupil Capture - Detected surfaces as well as gaze positions relative to the surface can be streamed locally or over the network with pupil server. Check out this video for a demonstration.
  • Surface Metrics with Pupil Player - if you have defined surfaces, you can generate surface visibility reports or gaze count per surface. See our blog post for more information.

  • Generate markers with this script, or download the image.

Pupil Player

Pupil Player is the second tool you will use after Pupil Capture. It is a media and data visualizer at its core. You will use it to look at Pupil Capture recordings, visualize your data, and export it.

Features like surface tracking found in Pupil Capture are also available in Pupil Player.

Starting Pupil Player

Drag the recording directory (the triple digit one) directly onto the app icon, or launch the application and drag + drop the recording directory into the Pupil Player window.

Running from source?

cd "path_to_pupil_dir/pupil_src/player"
python main.py "path/to/recording_directory"

Workflow

Pupil Player is similar to a video player. You can playback recordings and can load plugins to build visualizations.

Here is an example workflow:

  • Start Pupil Player
  • Opening a Plugin - From the Settings GUI menu load the Vis Circle plugin.
  • Playback - press the play button or space bar on your keyboard to view the video playback with visualization overlay, or drag the playhead in the seek bar to scrub through the dataset.
  • Set trim marks - you can drag the small circles on the ends of the seek bar. This will set the start and end frame for the exporter.
  • Export Video & Raw Data - Load the Video Export Launcher plugin and the Raw Data Exporter plugin. Press e on your keyboard or the e button in the left hand side of the window to start the export.
  • Check out exported data in the exports directory within your recording directory

Plugin Overview

Pupil Player uses the same Plugin framework found in Pupil Capture to add functionality.

We implement all visualizations, marker tracking, and the exporter using this structure. Very little work (often no work) needs to be done to make a Capture Plugin work for the Pupil Player and vice versa.

There are two general types of plugins:

  • Unique - You can only launch one instance of this plugin.
  • Not unique - You can launch multiple instances of this type of plugin. For example, you can load one Vis Circle plugin to render the gaze position with a translucent green circle, and another Vis Circle plugin to render the gaze circle with a green stroke of 3 pixels thickness. You can think of these types of plugins as additive.

In the following sections we provide a summary of plugins currently available in Pupil Player.

Visualization Plugins and Utilities

For the sake of clarity, we will call plugins with the Vis prefix visualization plugins. These plugins are simple plugins, are mostly additive (or not unique), and directly operate on the gaze positions to produce visualizations. Other plugins like the Offline Marker Detector also produce visualizations, but are discussed elsewhere due to the extent of their features.

Vis Circle

Visualize the gaze positions with a circle for each gaze position. This plugin is not unique, therefore you can add multiple instances of the plugin to build your visualization.

You can set the following parameters:

  • radius - the radius of the circle around the gaze point.
  • stroke width - the thickness or width of the stroke in pixels.
  • fill - toggle on for a circle with solid fill. Toggle off for a circle with only stroke.
  • color - define the red, green, blue values for color. Alpha defines the opacity of the stroke and fill.

Here we show an example of how you could use 2 instances of the Vis Circle Plugin. The first instance renders the gaze position as a filled yellow circle. The second instance renders the same gaze position as an orange stroke circle.

Vis Cross

Visualize the gaze positions with a cross for each gaze position. This plugin is not unique, therefore you can add multiple instances of the plugin to build your visualization. You can set the following parameters:

  • inner offset length - the distance in pixels to offset the interior cross endpoints from the gaze position. A value of 0 will make the crosshairs intersect the gaze position.
  • outer length - The length of the cross lines in pixels from the gaze position. Note - equal values of inner offset length and outer length will result in a cross with no length, and therefore nothing will be rendered.
  • stroke width - the thickness or width of the stroke in pixels.
  • color - define the red, green, blue values for color.

Here we show an example of how you could use 2 instances of the Vis Cross Plugin. The first instance renders the gaze position as a red cross that extends to the boundaries of the screen. The second instance renders the gaze position as a green cross, with a heavier stroke weight.

Scan Path

This plugin enables past gaze positions to stay visible for the duration of time specified by the user. This plugin is unique, therefore you can only load one instance of this plugin.

On its own, Scan Path does not render anything to the screen. It is designed to be used with other plugins. In some cases, it is even required to be enabled in order for other plugins to properly function. When used with Vis plugins (like Vis Circle, Vis Cross, Vis Polyline, or Vis Light Points) Scan Path will enable you to see both the current gaze positions and the past gaze positions for the specified duration of time.

Here we show an example of Scan Path set with 0.4 seconds duration used with Vis Circle. Each green circle is a gaze position within the last 0.4 seconds of the recording.

Vis Polyline

Visualize the gaze positions with a polyline for each gaze position. This plugin is not unique, therefore you can add multiple instances of the plugin to build your visualization. You can set the following parameters:

  • line thickness - the thickness or width of the polyline stroke in pixels.
  • color - define the red, green, blue values for color.

An example showing Vis Polyline used with Vis Circle and Scan Path. The polyline enables one to visualize the sequence of the gaze positions over the duration specified by Scan Path.

Vis Light Points

Visualize the gaze positions as a point of light for each gaze position. The falloff of the light from the gaze position is specified by the user. This plugin is not unique, therefore you can add multiple instances of the plugin to build your visualization. You can set the following parameters:

  • falloff - The distance (in pixels) at which the light begins to fall off (fade to black). A very low number will result in a very dark visualization with tiny white light points. A very large number will result in a visualization of the world view with little or no emphasis of the gaze positions.

Here is an example demonstrating Vis Light Points with a falloff of 73.

Manual Gaze Correction

This plugin allows one to manually offset the gaze position. The offset values are between -1 and 1. This plugin is unique, therefore you can only load one instance of this plugin. You can set the following parameters:

  • x_offset - the amount to offset the gaze position horizontally
  • y_offset - the amount to offset the gaze position vertically

Eye Video Overlay

Here is an example of the Eye Video Overlay with binocular eye videos.

This plugin can be used to overlay the eye video on top of the world video. Note that the eye video is not recorded by default in Pupil Capture, so if you want to use this plugin, make sure to check record eye video in Pupil Capture. This plugin is unique, therefore you can only load one instance of this plugin.

You can set the following parameters:

  • opacity - the opacity of the overlay eye video image. 1.0 is opaque and 0.0 is transparent.
  • video scale - use the slider to increase or decrease the size of the eye videos.
  • move overlay - toggle on and then click and drag eye video to move around in the player window. Toggle off when done moving the video frames.
  • show - show or hide eye video overlays.
  • horiz. and vert. flip - flip eye videos vertically or horizontally

Export

You can export data and videos by pressing e on your keyboard or the e hot key button in the Pupil Player window.

All open plugins that have export capability will export when you press e. All exports are separated from your raw data and contained in the exports sub-directory. The exports directory lives within your recording directory.

Exports directory

All exports are saved within the exports sub-directory within your recording directory. A new directory will be created within the exports directory named with the start frame and end frame that is specified by the trim marks.

Video Export Launcher

To export a video, load the Video Export Launcher plugin. You can select the frame range to export by setting trim marks in the seek bar or directly in the plugin GUI.

You can specify the name of the export in the GUI. Press the e button in the GUI or press e on your keyboard to start the export.

The exporter will run in the background and you can see the progress bar of the export in the GUI. While exporting you can continue working with Pupil Player and even launch new exports.

Raw Data Exporter

To export .csv files of your data, load the Raw Data Exporter plugin. You can select the frame range to export by setting trim marks in the seek bar or directly in the plugin GUI.

Press the e button in the GUI or press e on your keyboard to start the export.

Offline Surface Tracker

This plugin is an offline version of the Surface Tracking plugin for Pupil Capture. You can use this plugin to detect markers in the recording, define surfaces, edit surfaces, and create and export visualizations of gaze data within the defined surfaces.

Here is an example workflow for using the Offline Surface Detector plugin to generate heatmap visualizations and export surface data reports:

  • Load Offline Surface Detector plugin - if you already have surfaces defined, the load may take a few seconds because the plugin will look through the entire video and cache the detected surfaces.
  • Add surface - if you do not have any defined surfaces, you can click on the Add surface button when the markers you want to use are visible, or just click the circular A button in the left hand side of the screen.
  • Surface name and size - In the Marker Detector GUI window, define the surface name and real world size. Note - defining size is important as it will affect how heatmaps are rendered.
  • Set trim marks - optional, but if you want to export data for a specific range, then you should set the trim marks.
  • Recalculate gaze distributions - click the (Re)calculate gaze distributions button after specifying surface sizes. You should now see heatmaps in the Player window (if gaze positions were within your defined surfaces).
  • Export gaze and surface data - press e and all surface metrics reports will be exported and saved for your trim section within your export folder.

Fixation Detector - Dispersion Duration

This plugin detects fixations based on a dispersion threshold in degrees of visual angle. This plugin is unique, therefore you can only load one instance of this plugin.

Toggle Show fixations to show a visualization of fixations. The blue number is the number of the fixation (0 being the first fixation). You can export fixation reports for your current trim section by pressing e on your keyboard or the e hot key button in the left hand side of the window.

Batch Exporter

You can use this plugin to apply visualizations to an entire directory (folder) of recordings in one batch. You need to specify the following:

  • Recording source directory - a directory (folder) that contains one or more Pupil recording folders.
  • Recording destination directory - an existing directory (folder) where you want to save the visualizations.

Developing your own Plugin

To develop your own plugin see the developer guide.

Pupil Service

Pupil Service is like Pupil Capture except it does not have a world video feed or GUI. It is intended to be used with VR and AR eye tracking setups.

Pupil Service is designed to run in the background and to be controlled via network commands only. The service process has no GUI. The tools introduced in the hmd-eyes project are made to work with Pupil Service and Pupil Capture alike.

Talking to Pupil Service

Code examples below demonstrate how to control Pupil Service over the network.

Starting and stopping Pupil Service:

import zmq, msgpack, time
ctx = zmq.Context()

# create a zmq REQ socket to talk to Pupil Service/Capture
req = ctx.socket(zmq.REQ)
req.connect('tcp://localhost:50020')

# convenience functions
def send_recv_notification(n):
    # REQ/REP requires lock-step communication with a multipart msg (topic, msgpack-encoded dict)
    req.send_multipart((('notify.%s' % n['subject']).encode(), msgpack.dumps(n)))
    return req.recv()

def get_pupil_timestamp():
    req.send_string('t')  # see the Pupil Remote plugin for details
    return float(req.recv())

# start both eye processes
n = {'subject': 'eye_process.should_start.0', 'eye_id': 0, 'args': {}}
print(send_recv_notification(n))
n = {'subject': 'eye_process.should_start.1', 'eye_id': 1, 'args': {}}
print(send_recv_notification(n))
time.sleep(2)

# set calibration method to hmd calibration
n = {'subject': 'start_plugin', 'name': 'HMD_Calibration', 'args': {}}
print(send_recv_notification(n))

time.sleep(2)

# stop the service process
n = {'subject': 'service_process.should_stop'}
print(send_recv_notification(n))

Notifications

The code below demonstrates how you can listen to all notifications from Pupil Service. This requires a little helper script called zmq_tools.py.

from zmq_tools import *

ctx = zmq.Context()
requester = ctx.socket(zmq.REQ)
requester.connect('tcp://localhost:50020')  # change ip if using a remote machine

requester.send_string('SUB_PORT')
ipc_sub_port = requester.recv_string()
monitor = Msg_Receiver(ctx, 'tcp://localhost:%s' % ipc_sub_port, topics=('notify.',))  # change ip if using a remote machine

while True:
    print(monitor.recv())

Clients

An example client written in Python can be found here

An example client for Unity3d can be found here

Data Format

Every time you click record in Pupil’s capture software, a new recording is started and your data is saved into a recording folder. It contains:

  • world.mp4 Video stream of the world view
  • world_timestamps.npy 1d array of timestamps for each world video frame.
  • info.csv a file with meta data
  • pupil_data python pickled pupil data. This is used by Pupil Player.
  • Other files - depending on your hardware setup and plugins loaded in Pupil Capture, additional files are saved in your recording directory. More on this later.

These files are stored in a newly created folder inside your_pupil_recordings_dir/your_recording_name/XXX, where XXX is an incrementing number. It will never overwrite previous recordings!

If you want to view the data, export videos, export raw data as .csv (and more) you can use Pupil Player.

Pupil - Data Format

The data format for Pupil recordings is 100% open. Sub-headings below provide details of each file and its data format.

World Video Stream

When using the setting more CPU, smaller file: an MPEG-4 compressed video stream of the world view in a .mp4 container. The video is compressed using ffmpeg's default settings, which give a good balance between image quality and file size. The frame rate of this file is set to your capture frame rate.

When using the setting less CPU, bigger file: a raw MJPEG stream from the world camera in a .mp4 container. The video is compressed by the camera itself. While the file size is considerably larger than above, this allows ultra low CPU usage while recording. It plays with recent versions of ffmpeg and VLC player. The “frame rate” setting in the Pupil Capture sidebar (Camera Settings > Sensor Settings) controls the frame rate of the videos.

You can compress the videos afterwards using ffmpeg like so:

cd your_recording
# write to new files; ffmpeg cannot overwrite its input file in place
ffmpeg -i world.mp4 -pix_fmt yuv420p world_compressed.mp4
ffmpeg -i eye0.mp4 -pix_fmt yuv420p eye0_compressed.mp4
ffmpeg -i eye1.mp4 -pix_fmt yuv420p eye1_compressed.mp4

OpenCV has a capture module that can be used to extract still frames from the video:

import cv2
capture = cv2.VideoCapture("absolute_path_to_video/world.mp4")
status, img1 = capture.read() # extract the first frame
status, img2 = capture.read() # second frame...

Coordinate Systems

We use a normalized coordinate system with the origin 0,0 at the bottom left and 1,1 at the top right.

  • Normalized Space

Origin 0,0 at the bottom left and 1,1 at the top right. This is the OpenGL convention and what we find to be an intuitive representation. This is the coordinate system we use most in Pupil. Vectors in this coordinate system are specified by a norm prefix or suffix in their variable name.

  • Image Coordinate System

In some rare cases we use the image coordinate system. This is mainly for pixel access of the image arrays. Here a unit is one pixel, the origin is at the top left, and the bottom right is the maximum x,y.
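A minimal sketch of converting between the two conventions, assuming you know the frame size of the video you are working with:

# Minimal sketch: convert a normalized position (origin at the bottom left) to image pixels
# (origin at the top left), and back. frame_width/frame_height are whatever your video uses.
def norm_to_pixel(norm_x, norm_y, frame_width, frame_height):
    px = norm_x * frame_width
    py = (1.0 - norm_y) * frame_height  # flip y: the normalized origin is at the bottom left
    return px, py

def pixel_to_norm(px, py, frame_width, frame_height):
    return px / frame_width, 1.0 - py / frame_height

print(norm_to_pixel(0.5, 0.5, 1280, 720))  # center of a 1280x720 world frame -> (640.0, 360.0)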

Timestamps

All indexed data (for example, still frames from the world camera, still frames from the eye camera, gaze and pupil coordinates, and so on) has timestamps associated with it for synchronization purposes. The timestamp is derived from CLOCK_MONOTONIC on Linux and MacOS.

The time at which the clock starts counting is called the PUPIL EPOCH. In Pupil, the epoch is adjustable through Pupil Remote and Pupil Timesync.

Timestamps are recorded for each sensor separately. Eye and World cameras may be capturing at very different rates (e.g. a 120Hz eye camera and a 30Hz world camera), and correlation of eye and world (and other sensors) can be done after the fact by using the timestamps. For more information on this see Synchronization below.

Observations:

  • Timestamps are in seconds since the PUPIL EPOCH.
  • The PUPIL EPOCH is usually the time of the last boot.
  • On UNIX-like systems, the PUPIL EPOCH is usually not the Unix Epoch (00:00:00 UTC on 1 January 1970).

More information:

  • Unit : Seconds
  • Precision: Full float64 precision with 15 significant digits, i.e. 10 μs.
  • Accuracy:
    • Over WiFi, it is ~1 ms.
    • Over a wired connection or ‘localhost’, it is in the range of μs.
  • Granularity:
    • It is machine specific (it depends on CLOCK_MONOTONIC on Linux) and is constrained by processor cycles and software.
    • On some machines (e.g. a 2 GHz processor), the result comes from the clock_gettime(CLOCK_MONOTONIC, &time_record) function on Linux. This function delivers a record with nanosecond (1 GHz) granularity; Pupil software then does some math and delivers a float64. (A short illustration of reading this clock from Python follows this list.)
  • Maximum Sampling Rate:
    • Depends on the set-up, and it is lower when more cameras are present (120Hz maximum, based on a 5.7ms latency for the cameras and a 3.0ms processing latency).
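To get a feeling for the kind of clock involved, the sketch below reads a monotonic clock from Python. It only illustrates the clock's behavior; it is not how Pupil generates its timestamps internally.

# Illustration of a monotonic clock in Python (not Pupil's internal timestamping code).
# On Linux this reads CLOCK_MONOTONIC; the value is seconds since an arbitrary epoch
# (typically related to boot time), not the Unix Epoch.
import time

t0 = time.monotonic()
time.sleep(0.1)
t1 = time.monotonic()
print("elapsed: %.6f s" % (t1 - t0))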

Pupil Data

We store the gaze positions, pupil positions, and additional information within the pupil_data file. The pupil_data file is a pickled Python file.

Pupil Positions

The coordinates of the pupil center in the eye video are called the pupil position; it has x,y coordinates normalized as described in the coordinate systems above. This is stored within a dictionary structure within the pupil_data file.

Gaze Positions

The pupil position gets mapped into world space and thus becomes the gaze position. This is the current center of the subject's visual attention – or what you're looking at in the world. This is stored within a dictionary structure within the pupil_data file.

Looking at the data

Pupil Player

Head over to Pupil Player to playback Pupil recordings, add visualizations, and export in various formats.

Access to raw data

Use the ‘Raw Data Exporter’ plugin to export .csv files that contain all the data captured with Pupil Capture.

An informational file that explains all fields in the .csv will be exported with the .csv file for documentation. Below is a list of the data exported using v0.7.4 of Pupil Player with a recording made from Pupil Capture v0.7.4.

pupil_positions.csv

  • timestamp - timestamp of the source image frame
  • index - associated_frame: closest world video frame
  • id - 0 or 1 for left/right eye
  • confidence - an assessment by the pupil detector of how sure we can be of this measurement. A value of 0 indicates no confidence; 1 indicates perfect confidence. In our experience, useful data carries a confidence value greater than ~0.6. A confidence of exactly 0 means that we don't know anything, so you should ignore the position data.
  • norm_pos_x - x position in the eye image frame in normalized coordinates
  • norm_pos_y - y position in the eye image frame in normalized coordinates
  • diameter - diameter of the pupil in image pixels as observed in the eye image frame (is not corrected for perspective)
  • method - string that indicates what detector was used to detect the pupil

Optional fields, depending on the detector. In 2d the pupil appears as an ellipse; the following fields are available in the 3d c++ and 2d c++ detectors:

  • 2d_ellipse_center_x - x center of the pupil in image pixels
  • 2d_ellipse_center_y - y center of the pupil in image pixels
  • 2d_ellipse_axis_a - first axis of the pupil ellipse in pixels
  • 2d_ellipse_axis_b - second axis of the pupil ellipse in pixels
  • 2d_ellipse_angle - angle of the ellipse in degrees

Data made available by the 3d c++ detector

  • diameter_3d - diameter of the pupil scaled to mm based on anthropomorphic avg eye ball diameter and corrected for perspective.
  • model_confidence - confidence of the current eye model (0-1)
  • model_id - id of the current eye model. When a slippage is detected the model is replaced and the id changes.
  • sphere_center_x - x position of the eyeball sphere in the eye pinhole camera 3d space; units are scaled to mm.
  • sphere_center_y - y pos of the eye ball sphere
  • sphere_center_z - z pos of the eye ball sphere
  • sphere_radius - radius of the eyeball. This is always 12mm (the anthropomorphic avg.) We need to make this assumption because of the single camera scale ambiguity.
  • circle_3d_center_x - x center of the pupil as a 3d circle in the eye pinhole camera 3d space; units are mm.
  • circle_3d_center_y - y center of the pupil as 3d circle
  • circle_3d_center_z - z center of the pupil as 3d circle
  • circle_3d_normal_x - x normal of the pupil as 3d circle. Indicates the direction that the pupil points at in 3d space.
  • circle_3d_normal_y - y normal of the pupil as 3d circle
  • circle_3d_normal_z - z normal of the pupil as 3d circle
  • circle_3d_radius - radius of the pupil as 3d circle. Same as diameter_3d
  • theta - circle_3d_normal described in spherical coordinates
  • phi - circle_3d_normal described in spherical coordinates
  • projected_sphere_center_x - x center of the 3d sphere projected back onto the eye image frame. Units are in image pixels.
  • projected_sphere_center_y - y center of the 3d sphere projected back onto the eye image frame
  • projected_sphere_axis_a - first axis of the 3d sphere projection.
  • projected_sphere_axis_b - second axis of the 3d sphere projection.
  • projected_sphere_angle - angle of the 3d sphere projection. Units are degrees.

gaze_positions.csv

  • timestamp - timestamp of the source image frame
  • index - associated_frame: closest world video frame
  • confidence - computed confidence between 0 (not confident) and 1 (confident)
  • norm_pos_x - x position in the world image frame in normalized coordinates
  • norm_pos_y - y position in the world image frame in normalized coordinates
  • base_data - “timestamp-id timestamp-id …” of pupil data that this gaze position is computed from

Data made available by the 3d vector gaze mappers

  • gaze_point_3d_x - x position of the 3d gaze point (the point the subject looks at) in the world camera coordinate system
  • gaze_point_3d_y - y position of the 3d gaze point
  • gaze_point_3d_z - z position of the 3d gaze point
  • eye_center0_3d_x - x center of eye-ball 0 in the world camera coordinate system (of camera 0 for binocular systems or any eye camera for monocular system)
  • eye_center0_3d_y - y center of eye-ball 0
  • eye_center0_3d_z - z center of eye-ball 0
  • gaze_normal0_x - x normal of the visual axis for eye 0 in the world camera coordinate system (of eye 0 for binocular systems or any eye for monocular system). The visual axis goes through the eye ball center and the object thats looked at.
  • gaze_normal0_y - y normal of the visual axis for eye 0
  • gaze_normal0_z - z normal of the visual axis for eye 0
  • eye_center1_3d_x - x center of eye-ball 1 in the world camera coordinate system (not available for monocular setups.)
  • eye_center1_3d_y - y center of eye-ball 1
  • eye_center1_3d_z - z center of eye-ball 1
  • gaze_normal1_x - x normal of the visual axis for eye 1 in the world camera coordinate system (not available for monocular setups.). The visual axis goes through the eye ball center and the object thats looked at.
  • gaze_normal1_y - y normal of the visual axis for eye 1
  • gaze_normal1_z - z normal of the visual axis for eye 1
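As a small sketch of working with these exports, the snippet below reads an exported gaze_positions.csv with the standard csv module. The path is a placeholder; substitute the export folder created for your trim section.

# Minimal sketch: load an exported gaze_positions.csv from a Pupil Player export folder.
# "your_export_folder" is a placeholder for the folder named after your trim marks.
import csv

with open("exports/your_export_folder/gaze_positions.csv") as f:
    rows = list(csv.DictReader(f))

print("number of gaze samples:", len(rows))
print("first sample:", rows[0]["timestamp"], rows[0]["norm_pos_x"], rows[0]["norm_pos_y"])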

Raw data with Python

You can read and inspect pupil_data with a couple of lines of Python code.
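A minimal sketch, assuming the recording stores pupil_data as a Python pickle as described above (the key names are assumptions based on the topics listed earlier):

# Minimal sketch: load the pupil_data file from a recording folder and inspect it.
# Assumes pupil_data is a Python pickle; the key names are assumptions based on the
# pupil_positions / gaze_positions topics described above.
import pickle

with open("pupil_data", "rb") as f:
    pupil_data = pickle.load(f)

print(pupil_data.keys())
print("pupil datums:", len(pupil_data["pupil_positions"]))
print("gaze datums:", len(pupil_data["gaze_positions"]))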

Synchronization

Pupil Capture software runs multiple processes. The world video feed and the eye video feeds run and record at the frame rates set by their capture devices (cameras). This allows us to be more flexible. Instead of locking everything into one frame rate, we can capture every feed at specifically set rates. But, this also means that we sometimes record world video frames with multiple gaze positions (higher eye-frame rate) or without any (no pupil detected or lower eye frame rate).

In player_methods.py you can find a function that takes timestamped data and correlates it with timestamps from a different source.

def correlate_data(data,timestamps):
    '''
    data:  list of data :
        each datum is a dict with at least:
            timestamp: float

    timestamps: timestamps list to correlate  data to

    this takes a data list and a timestamps list and makes a new list
    with the length of the number of timestamps.
    Each slot contains a list that will have 0, 1 or more associated data points.

    Finally we add an index field to the datum with the associated index
    '''
    timestamps = list(timestamps)
    data_by_frame = [[] for i in timestamps]

    frame_idx = 0
    data_index = 0

    data.sort(key=lambda d: d['timestamp'])

    while True:
        try:
            datum = data[data_index]
            # we can take the midpoint between two frames in time: more appropriate for SW timestamps
            ts = ( timestamps[frame_idx]+timestamps[frame_idx+1] ) / 2.
            # or the time of the next frame: more appropriate for Start of Exposure timestamps (HW timestamps).
            # ts = timestamps[frame_idx+1]
        except IndexError:
            # we might lose a data point at the end but we don't care
            break

        if datum['timestamp'] <= ts:
            datum['index'] = frame_idx
            data_by_frame[frame_idx].append(datum)
            data_index +=1
        else:
            frame_idx+=1

    return data_by_frame
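A short usage sketch, assuming a recording folder as described in the Data Format section (a pickled pupil_data file plus world_timestamps.npy) and that player_methods.py is on your Python path:

# Usage sketch for correlate_data. Assumes a pickled pupil_data file and
# world_timestamps.npy from a recording, and that player_methods is importable
# (e.g. run from within pupil_src/player). Key names are assumptions as above.
import pickle
import numpy as np
from player_methods import correlate_data

with open("pupil_data", "rb") as f:
    pupil_data = pickle.load(f)

world_timestamps = np.load("world_timestamps.npy")
gaze_by_frame = correlate_data(pupil_data["gaze_positions"], world_timestamps)
print("gaze datums correlated with world frame 0:", len(gaze_by_frame[0]))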

Developer Docs

Development Overview

Overview of language, code structure, and general conventions

Language

Pupil is written in Python, but no “heavy lifting” is done in Python. High performance computer vision, media compression, display libraries, and custom functions are written in external libraries or C/C++ and accessed through Cython. Python plays the role of “glue” that sticks all the pieces together.

We also like writing code in Python because it’s quick and easy to move from initial idea to working proof-of-concept. If proof-of-concept code is slow, optimization and performance enhancement can happen in iterations of code.

Process Structure

When Pupil Capture starts, with default settings two processes are spawned: Eye and World. Both processes grab image frames from a video capture stream, but they have very different tasks.

Eye Process

The eye process only has one purpose - to detect the pupil and broadcast its position. The process breakdown looks like this:

  • Grabs eye camera images from the eye camera video stream
  • Finds the pupil position in the image
  • Broadcasts/streams the detected pupil position.

World Process

This is the workhorse.

  • Grabs the world camera images from the world camera video stream
  • Receives pupil positions from the eye process
  • Performs calibration mapping from pupil positions to gaze positions
  • Loads plugins - to detect markers, broadcast pupil positions over the network, and more…
  • Records video and data. Most, and preferably all, coordination and control happens within the World process.

TBA

Pupil Datum format

The pupil detector, run by the Eye process, is required to return a result in the form of a Python dictionary with at least the following content:

    result = {}
    result['timestamp'] = frame.timestamp
    result['norm_pos'] = (x, y)            # pupil center in normalized coordinates
    result['confidence'] = confidence      # a value between 1 (very certain) and 0 (not certain, nothing found)
    result['whatever_else_you_want'] = 42  # you can add other entries to this dict

    # if no pupil was detected
    result = {}
    result['timestamp'] = frame.timestamp
    result['confidence'] = 0

This dictionary is sent on the IPC and read by gaze mapping plugins in the world process. Mapping from pupil position to gaze position happens here. The mapping plugin is initialized by a calibration plugin.

Control: World > Eye

Happens via notifications on the IPC.
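As an illustration, a notification is just a dict with a subject key, like the ones used in the Pupil Service example earlier in this documentation:

# Illustration only: notifications are plain dicts with a 'subject' key, sent over the IPC.
# This example mirrors the Pupil Service notification shown earlier in this documentation.
notification = {'subject': 'eye_process.should_start.0', 'eye_id': 0, 'args': {}}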

Timing & Data Conventions

Pupil Capture is designed to work with multiple captures that free-run at different frame rates that may not be in sync. World and eye images are timestamped and any resulting artifacts (detected pupil, markers, etc) inherit the source timestamp. Any correlation of these data streams is the responsibility of the functional part that needs the data to be correlated (e.g. calibration, visualization, analyses).

For example: The pupil capture data format records the world video frames with their respective timestamps. Independent of this, the recorder also saves the detected gaze and pupil positions at their frame rate and with their timestamps. For more detail see Data Format.

Git Conventions

We make changes almost daily and sometimes features will be temporarily broken in some development branches. However, we try to keep the master branch as stable as possible and use other branches for feature development and experiments. Here’s a breakdown of conventions we try to follow.

  • tags - We make a tag following the semantic versioning protocol. Check out the releases.
  • master - this branch tries to be as stable as possible - incremental and tested features will be merged into the master. Check out the master branch.
  • branches - branches are named after features that are being developed. These branches are experimental and what could be called ‘bleeding edge.’ This means features in these branches may be incomplete, broken, or really cool… You’re certainly welcome to check them out and improve on the work!

Pull requests

If you’ve done something – even if work-in-progress – make a pull request and write a short update to the Pupil Community.

Developer Setup

Pages in the developer guide are oriented towards developers and contain a high-level overview of the code and organizational structure.

If you want to develop a plugin or to extend Pupil for your project, this is the place to start.

These pages will not contain detailed documentation of the code. We’re working on code documentation, and when it’s done we will put it online at Read the Docs.

If you have questions, encounter any problems, or want to share progress – write a post on the Pupil Google Group. We will try our best to help you out, and answer questions quickly.

Running Pupil from Source

Pupil is a prototype and will continue to be in active development. If you plan to make changes to Pupil, or want to see how it works, make a fork, install all dependencies, and run the Pupil source directly with Python.

Installing Dependencies

  • Linux step-by-step instructions for Ubuntu 16.04 LTS +
  • MacOS step-by-step instructions for MacOS 10.8+
  • Windows step-by-step instructions for Windows 10

Download and Run Pupil Source Code

Once you have all dependencies installed, you’re 99% done. Now, all you have to do is fork the GitHub repository. Or, using the terminal, you can clone the Pupil repository with git:

cd /the_folder_where_Pupil_will_live/
git clone https://github.com/pupil-labs/pupil.git

Run Pupil Capture from Source

You’re in development land now. If you’re running from the source, there will be no icon to click. So fire up the terminal, navigate to the cloned Pupil repository, and start Pupil using Python.

cd /the_folder_where_Pupil_lives/pupil_src/capture
python main.py

Linux Dependencies

These installation instructions are tested using Ubuntu 16.04 or higher running on many machines. Do not run Pupil on a VM unless you know what you are doing.

Install Dependencies

Let’s get started! It’s time for apt! Just copy and paste into the terminal and listen to your machine purr.

sudo apt install -y pkg-config git cmake build-essential nasm wget python3-setuptools libusb-1.0-0-dev  python3-dev python3-pip python3-numpy python3-scipy libglew-dev libglfw3-dev

ffmpeg >= 3.2

sudo add-apt-repository ppa:jonathonf/ffmpeg-3
sudo apt-get update
sudo apt install libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libavresample-dev ffmpeg libav-tools x264 x265

OpenCV

# The requisites for opencv to build python3 cv2.so library are:
# (1) python3 interpreter found
# (2) libpython***.so shared lib found (make sure to install python3-dev)
# (3) numpy for python3 installed.
# If cv2.so was not built, delete the build folder, recheck the requisites and try again.

git clone https://github.com/itseez/opencv
cd opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D BUILD_TBB=ON -D WITH_TBB=ON ..
make -j2
sudo make install
sudo ldconfig

Turbojpeg

wget -O libjpeg-turbo.tar.gz https://sourceforge.net/projects/libjpeg-turbo/files/1.5.1/libjpeg-turbo-1.5.1.tar.gz/download
tar xvzf libjpeg-turbo.tar.gz
cd libjpeg-turbo-1.5.1
./configure --with-pic --prefix=/usr/local
sudo make install
sudo ldconfig

libuvc

git clone https://github.com/pupil-labs/libuvc
cd libuvc
mkdir build
cd build
cmake ..
make && sudo make install

udev rules for running libuvc as normal user

echo 'SUBSYSTEM=="usb",  ENV{DEVTYPE}=="usb_device", GROUP="plugdev", MODE="0664"' | sudo tee /etc/udev/rules.d/10-libuvc.rules > /dev/null 
sudo udevadm trigger

Install packages with pip

sudo pip3 install numexpr
sudo pip3 install cython
sudo pip3 install psutil
sudo pip3 install pyzmq
sudo pip3 install msgpack_python
sudo pip3 install pyopengl
sudo pip3 install git+https://github.com/zeromq/pyre
sudo pip3 install git+https://github.com/pupil-labs/PyAV
sudo pip3 install git+https://github.com/pupil-labs/pyuvc
sudo pip3 install git+https://github.com/pupil-labs/pyndsi
sudo pip3 install git+https://github.com/pupil-labs/pyglui

Finally, we install 3D eye model dependencies

sudo apt-get install libboost-dev
sudo apt-get install libboost-python-dev
sudo apt-get install libgoogle-glog-dev libatlas-base-dev libeigen3-dev
# sudo apt-get install software-properties-common if add-apt-repository is not found
sudo add-apt-repository ppa:bzindovic/suitesparse-bugfix-1319687
sudo apt-get update
sudo apt-get install libsuitesparse-dev
# install ceres-solver
git clone https://ceres-solver.googlesource.com/ceres-solver
cd ceres-solver
mkdir build && cd build
cmake .. -DBUILD_SHARED_LIBS=ON 
make -j3
make test
sudo make install
sudo sh -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/ceres.conf'
sudo ldconfig

MacOS Dependencies

These instructions have been tested for MacOS 10.8, 10.9, 10.10, 10.11, and 10.12. Use the linked websites and Terminal to execute the instructions.

Install Apple Dev Tools

Trigger the install of the Command Line Tools (CLT) by typing this in your terminal and letting MacOS install the tools required:

git

Install Homebrew

Homebrew describes itself as “the missing package manager for OSX.” It makes development on MacOS much easier, plus it’s open source. Install with the ruby script.

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install Homebrew Python >=3.6

brew install python3

Add Homebrew installed executables and Python scripts to your path. Add the following two lines to your ~/.bash_profile. (you can open textedit from the terminal like so: open ~/.bash_profile)

export PATH=/usr/local/bin:/usr/local/sbin:$PATH
export PYTHONPATH=/usr/local/lib/python3.6/site-packages:$PYTHONPATH

Dependencies with brew

Let’s get started! It’s time to put brew to work! Just copy and paste the commands into your terminal and listen to your machine purr.

brew tap homebrew/python
brew install pkg-config
brew install numpy
brew install scipy
brew install libjpeg-turbo
brew install libusb
brew tap homebrew/science
brew install ffmpeg
brew install opencv3 --with-contrib --with-python3 --with-tbb
brew install glew
brew tap homebrew/versions
brew install glfw3
# dependencies for 2d_3d c++ detector
brew install boost
brew install boost-python --with-python3
brew install ceres-solver
echo /usr/local/opt/opencv3/lib/python3.6/site-packages >> /usr/local/lib/python3.6/site-packages/opencv3.pth

Install libuvc

git clone https://github.com/pupil-labs/libuvc
cd libuvc
mkdir build
cd build
cmake ..
make && make install

Python Packages with pip

PyOpenGL, ZMQ, …

pip3 install PyOpenGL
pip3 install pyzmq
pip3 install numexpr
pip3 install cython
pip3 install psutil
pip3 install msgpack_python
pip3 install git+https://github.com/zeromq/pyre
pip3 install git+https://github.com/pupil-labs/PyAV
pip3 install git+https://github.com/pupil-labs/pyuvc
pip3 install git+https://github.com/pupil-labs/pyndsi
pip3 install git+https://github.com/pupil-labs/pyglui

That’s it – you’re done!

Windows Dependencies

System Requirements

We develop the Windows version of Pupil using Windows 10.

Therefore we can only debug and support issues for Windows 10.

Install Dependencies

Running Pupil from source includes the installation of several dependencies. Please follow the instructions below.

For discussion or questions on Windows installation head over to the Pupil Google Group. If you find any problems please raise an issue!

Utils

  • Install 7-zip for extraction purposes.

Visual C++ Runtime

  • Install Visual Studio 2015 Community Update 3

Python (64-bit)

  • Download and install version 3.5.2: Windows Executable installer
  • During installation, select the tick box to add your Python installation path to the PATH environment variable

Python Wheels

Python extensions can be installed via pip. We recommend downloading and installing the pre-built wheel (*.whl) packages by Christoph Gohlke. Thanks for creating and sharing these packages! To install an extension, open a command prompt with admin rights and run python -m pip install [PACKAGE_NAME.whl]

  • SciPy: scipy-0.18.1-cp35-cp35m-win_amd64.whl
  • PyOpenGL: PyOpenGL-3.1.1-cp35-cp35m-win_amd64.whl
  • Numpy: numpy-1.11.2+mkl-cp35-cp35m-win_amd64.whl
  • OpenCV: opencv_python-3.1.0-cp35-cp35m-win_amd64.whl
  • PyZMQ: pyzmq-15.4.0-cp35-cp35m-win_amd64.whl
  • Cython: Cython‑0.24.1‑cp35*.whl
  • psutil: psutil-5.0.0-cp35-cp35m-win_amd64.whl
  • PyAudio: PyAudio-0.2.9-cp35-none-win_amd64.whl
  • boost_python: boost_python-1.59-cp35-none-win_amd64.whl

For networking install:

  • python -m pip install https://github.com/zeromq/pyre/archive/master.zip
  • python -m pip install win_inet_pton

You also need to install Python libraries that are specific to Pupil. Download the .whl file and install with pip.

Setup GLFW

  • Download 64-bit Windows binaries.
  • Unzip and locate the folder vs-2015 or lib-vs2015 containing glfw3.dll.
  • Copy glfw3.dll to pupil\pupil_external\.

Install Git

  • Download and install Git. This enables you to download and update the Pupil source code and further extensions it needs.
  • Add the /bin path of Git to the PATH environment variable, e.g. C:/Program Files (x86)/Git/bin.

Clone Pupil source code

  • Open the Git Bash and navigate to the directory you chose for pupil.
  • Run git clone http://github.com/pupil-labs/pupil (creates a sub-directory for pupil)

Download Eigen 3.2

Install ceres-windows

  • git clone --recursive https://github.com/tbennun/ceres-windows.git
  • Copy the Eigen directory to ceres-windows
  • Copy ceres-windows\ceres-solver\config\ceres\internal\config.h to ceres-windows\ceres-solver\include\ceres\internal
  • Open glog\src\windows\port.cc and comment out L58-64
  • Open the vs2012 sln file using VS2015. Agree to upgrade the compiler and libraries
  • Build the static library versions of libglog and ceres-solver

Install OpenCV for Windows

opencv-3.1.0

  • Copy opencv3.1.0\build\x64\vc14\bin\opencv_world310.dll to the pupil\pupil_external\ directory

Install Boost

  • Download and install Boost-1.59
  • Open boost_1_59_0\boost\python\detail\config.hpp
  • Change the macro definition “#define BOOST_LIB_NAME boost_python” to “#define BOOST_LIB_NAME boost_python3” and save the file

Edit the Pupil detectors and calibration cython setup files

  • Edit pupil\pupil_src\capture\pupil_detectors\setup.py. In the Windows section, update the paths for OpenCV, Eigen, Boost, Ceres, and Glog according to your installation locations.
  • Edit pupil\pupil_src\shared_modules\calibration_routines\optimization_calibration\setup.py in the same manner as above.

Install Drivers

In order to support isochronous USB transfer on Windows, you will need to install drivers for the cameras in your Pupil headset. Follow setup steps in the Windows Driver Setup section below.

Run Pupil!

Capture

cd your_pupil_path\pupil\pupil_src\capture
run_capture.bat

Player

cd your_pupil_path\pupil\pupil_src\player
run_player.bat path_to_recording

Setup PyAV for wheel creation

  • Clone PyAV to your system git clone https://github.com/pupil-labs/PyAV.git
  • Download and extract ffmpeg-3.2-dev
  • Download and extract ffmpeg-3.2-shared
  • Copy the dlls from the ffmpeg-3.2-win64-shared\bin directory to the pupil\pupil_external\ directory
  • Open “Developer command prompt for VS2015” and cd to PyAV directory
  • Run python setup.py clean --all build_ext --inplace --ffmpeg-dir=path\to\ffmpeg-3.2-dev -c msvc
  • pip wheel .
  • pip install .

Windows Driver Setup

In order to support isochronous USB transfer on Windows, you will need to install drivers for the cameras in your Pupil headset.

Download drivers and tools

  1. Download and install 7zip
  2. Download and extract Pupil camera driver installer

Install drivers for your Pupil headset

  1. Navigate to pupil_labs_camera_drivers_windows_x64 directory
  2. Double click InstallDriver.exe - this will install drivers. Follow on screen prompts.
  3. Open Windows Device Manager from System > Device Manager. Verify the drivers are correctly installed in Windows Device Manager. Your Pupil headset cameras should be listed under a new category titled: libusbK Usb Devices. Note: In some cases Pupil Cam1 may show three of the same ID as the camera name. Don’t worry - just make sure that the number of devices are the same as the number of cameras on your Pupil headset.
  4. Download the latest release of Pupil software and launch pupil_capture.exe to verify all cameras are accessible.

Troubleshooting

If you tried to install drivers with previous driver install instructions and failed, or are not able to access cameras in Pupil Capture, please try the following:

  1. In Device Manager (System > Device Manager)
  2. View > Show Hidden Devices
  3. Expand libUSBK Usb Devices
  4. For each device listed (even hidden devices) click Uninstall and check the box agreeing to Delete the driver software for this device and press OK
  5. Repeat for each device in libUSBK Usb Devices
  6. Unplug Pupil headset (if plugged in)
  7. Restart your computer
  8. Install drivers from step 2 in the Install drivers for your Pupil headset section

Interprocess and Network Communication

This page outlines the way Pupil Capture and Pupil Service communicate via a message bus internally and how to read and write to this bus from another application on the same machine or on a remote machine.

The IPC Backbone

Starting with v0.8, Pupil Capture and a new app called Pupil Service use a ZeroMQ PUB-SUB proxy as their messaging bus. We call it the IPC Backbone. The IPC Backbone runs as a thread in the main process.

IPC Backbone used by Pupil Capture and Service

The IPC Backbone has a SUB and a PUB address. Both are bound to a random port on app launch and known to all components of the app. All processes and threads within the app use the IPC Backbone to communicate.

  • Using a ZMQ PUB socket, other actors in the app connect to the pub_port of the Backbone and publish messages to the IPC Backbone. (For important low volume messages a PUSH socket is also supported.)
  • Using a ZMQ SUB socket, other actors connect to the sub_port of the Backbone to subscribe to parts of the message stream.

Example: The eye process sends pupil data onto the IPC Backbone. The gaze mappers in the world process receive this data, generate gaze data and publish it on the IPC Backbone. World, Launcher and Eye exchange control messages on the bus for coordination.

Message Format

Currently all messages on the IPC Backbone are multipart messages containing two message frames:

  • Frame 1 contains a string we call the topic. Examples are: pupil.0, logging.info, notify.recording.has_started

  • Frame 2 contains a msgpack-encoded dictionary with key:value pairs. This is the actual message. We chose msgpack as the serializer due to its efficient format (45% smaller than json, 200% faster than ujson) and because encoders exist for almost every language. A small packing example is sketched below.
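
For example, packing and unpacking a notification payload with msgpack could look like this (a small sketch; the Pupil code itself provides helpers for this in shared_modules/zmq_tools.py):

import msgpack

topic = 'notify.recording.should_start'
payload = msgpack.dumps({'subject': 'recording.should_start', 'session_name': 'my session'})
# the two frames of the multipart ZMQ message are then (topic, payload)
# on the receiving side:
notification = msgpack.loads(payload)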

Message Topics

Messages can have any topic chosen by the user. Below is a list of message types used by Pupil Capture.

Pupil and Gaze Messages

Pupil data is sent from the eye0 and eye1 process with topic pupil.0/1. Gaze mappers receive this data and publish messages with topic gaze. Example pupil message:

# message topic:
'pupil.0'
# message payload, a pupil datum dict:
{'diameter': 92.4450351347, 'confidence': 0.9986412066, 'projected_sphere': {'axes': [400.5235138265, 400.5235138265], 'angle': 90.0, 'center': [240.3164804152, 243.842873636]}, 'model_id': 1, 'timestamp': 123067.177618013, 'model_confidence': 0.8049109973, 'model_birth_timestamp': 123011.36560298, 'id': 0, 'phi': -1.8997389857, 'sphere': {'radius': 12.0, 'center': [-4.7747620402, 0.230271043, 37.1513768514]}, 'diameter_3d': 3.8605282008, 'ellipse': {'axes': [75.475922102, 92.4450351347], 'angle': -21.7620924999, 'center': [115.0446652426, 288.3183483897]}, 'norm_pos': [0.17975728940000002, 0.3993367742], 'theta': 1.7221210994, 'circle_3d': {'radius': 1.9302641004, 'center': [-8.606972898, 2.0392458162, 25.9245442521], 'normal': [-0.3193509048, 0.1507478978, -0.9355693833000001]}, 'method': '3d c++'}

Notification Message

Pupil uses special messages called notifications to coordinate all activities. Notifications are dictionaries with the required field subject. Subjects are grouped by category: category.command_or_statement. Example: recording.should_stop

# message topic:
'notify.recording.should_start'
# message payload, a notification dict
{'subject':'recording.should_start', 'session_name':'my session'}

The message topic construction in python:

topic = 'notify'+'.'+notification['subject']

You should use the notification topic for coordination with the app. All notifications on the IPC Backbone are automatically made available to all plugins in their on_notify callback and used in all Pupil Apps.

In stark contrast to gaze and pupil, the notify topic should not be used at high volume. If you find that you need to write more than 10 messages a second, it’s probably not a notification but another kind of data; make a custom topic instead.

Log Messages

Pupil sends all log messages onto the IPC.

The topic is logging.log_level_name (debug, info, warning, error, …). The message is a dictionary that contains all attributes of the Python logging.LogRecord instance.

# message topic:
'logging.warning' 
# message payload, logging record attributes as dict:
{'levelname': 'WARNING', 'msg': 'Process started.', 'threadName': 'MainThread', 'name': 'eye', 'thread': 140735165432592L, 'created': 1465210820.609704, 'process': 14239, 'processName': 'eye0', 'args': [], 'module': 'eye', 'filename': 'eye.py', 'levelno': 30, 'msecs': 609.7040176392, 'pathname': '/Users/mkassner/Pupil/pupil_code/pupil_src/capture/eye.py', 'lineno': 299, 'exc_text': None, 'exc_info': None, 'funcName': 'eye', 'relativeCreated': 4107.3870658875}

Message Documentation

Read up on how to get documentation for all messages here: Message-Documentation

Connecting to the Backbone via Pupil Remote

If you want to tap into the IPC backbone you will not only need the IP address but also the session unique port. You can get these by talking to ‘Pupil Remote’:

import zmq
ctx = zmq.Context()
# The requester talks to Pupil remote and receives the session unique IPC SUB PORT
requester = ctx.socket(zmq.REQ)
ip = 'localhost' #If you talk to a different machine use its IP.
port = 50020 #The port defaults to 50020 but can be set in the GUI of Pupil Capture.
requester.connect('tcp://%s:%s'%(ip,port)) 
requester.send('SUB_PORT')
sub_port = requester.recv()

Reading from the Backbone

Subscribe to desired topics and receive all relevant messages (meaning messages whose topic prefix matches the subscription). Be aware that the IPC Backbone can carry a lot of data. Do not subscribe to the whole stream unless you know that your code can drink from a firehose. (If it can not, you become the snail, see Delivery Guarantees PUBSUB.)

#...continued from above
subscriber = ctx.socket(zmq.SUB)
subscriber.connect('tcp://%s:%s'%(ip,sub_port)) 
subscriber.set(zmq.SUBSCRIBE, 'notify.') #receive all notification messages
subscriber.set(zmq.SUBSCRIBE, 'logging.error') #receive logging error messages
#subscriber.set(zmq.SUBSCRIBE, '') #receive everything (don't do this)
# you can setup multiple subscriber sockets
# Sockets can be polled or read in different threads.

# we need a serializer 
import msgpack as serializer

while True:
    topic,payload = subscriber.recv_multipart()
    message = serializer.loads(payload)
    print topic,':',message

Writing to the Backbone from outside

You can send notifications to the IPC Backbone for everybody to read as well. Pupil Remote acts as an intermediary for reliable transport:

notification = {'subject':'recording.should_start', 'session_name':'my session'}
topic = 'notify.' + notification['subject']
payload = serializer.dumps(notification)
requester.send_multipart((topic,payload))
print requester.recv()

We say reliable transport because pupil remote will confirm every notification we send with ‘Notification received’. When we get this message we have a guarantee that the notification is on the IPC Backbone.

If we listen to the backbone using our subscriber from above, we will see the message again because we had subscribed to all notifications.

Pupil remote has a few additional commands that are useful:

#get the current Pupil time.
requester.send('t')
current_pupil_time = float(requester.recv())

#set the pupil timebase to 1000.
requester.send('T 1000')
print requester.recv()

Pupil remote will only forward messages of the notify topic. If you need to send other topics see below.

Writing to the Backbone directly

If you want to write messages other than notifications onto the IPC backbone, you can publish to the bus directly. Because this uses a PUB socket, you should read up on Delivery Guarantees PUBSUB below.

requester.send('PUB_PORT')
pub_port = requester.recv()
publisher = ctx.socket(zmq.PUB)
publisher.connect('tcp://%s:%s'%(ip,pub_port))
from time import sleep
sleep(1) # see Async connect in the paragraphs below
notification = {'subject':'calibration.should_start'}
topic = notification['subject']
payload = serializer.dumps(notification)
publisher.send_multipart((topic,payload))

A full example

A full example can be found in shared_modules/zmq_tools.py.

Delivery guarantees ZMQ

ZMQ is a great abstraction for us. It’s super fast, has a multitude of language bindings, and solves a lot of the nitty-gritty networking problems we don’t want to deal with. Our short description does not do ZMQ any justice; we recommend reading the ZMQ guide if you have the time. Below are some insights from the guide that are relevant for our use cases.

  • Messages are guaranteed to be delivered whole or not at all.
  • Unlike bare TCP, it is ok to connect before binding.
  • ZMQ will try to repair broken connections in the background for us.
  • It will deal with a lot of low level tcp handling so we don’t have to.

Delivery Guarantees PUBSUB

ZMQ PUB SUB will make no guarantees for delivery. Reasons for dropped messages are:

  • Async connect: PUB sockets drop messages before a connection has been made (connections happen asynchronously in the background) and topics have been subscribed. *1
  • The Late joiner: SUB Sockets will only receive messages that have been sent after they connect. *2
  • The Snail: If SUB sockets do not consume delivered messages fast enough they start dropping them. *3
  • Fast close: A PUB socket may lose messages if you close it right after sending. *1
  1. In Pupil we prevent this by using a PUSH socket as intermediary for notifications. See shared_modules/zmq_tools.py.

  2. Caching all messages in the sender or proxy is not an option. This is not really considered a problem of the transport.

  3. In Pupil we pay close attention to be fast enough or to subscribe only to low volume topics. Dropping messages in this case is by design. It is better than stalling data producers or running out of memory.

Delivery Guarantees REQREP

When writing to the Backbone via REQREP we will get confirmations/replies for every message sent. REQREP requires lockstep communication that is always initiated by the actor connecting to Pupil Capture/Service, so it does not suffer from the above issues.

Delivery Guarantees in general

We use TCP in ZMQ; it is generally a reliable transport. The app communicates with the IPC Backbone via the localhost loopback interface, which is very reliable. I have not been able to produce a dropped message for network reasons on localhost.

However, unreliable, congested networks (e.g. WiFi with many actors) can cause problems when talking and listening to Pupil Capture/Service from a different machine. If using an unreliable network, we will need to design our scripts and apps so that interfaces are able to deal with dropped messages.

Latency

Latency is bound by the latency of the network. On the same machine we can use the loopback interface (localhost) and do a quick test to understand delay and jitter of Pupil Remote requests…

from time import time, sleep
ts = []
for x in range(100):
    sleep(0.003) #simulate spaced requests as in real world
    t = time()
    requester.send('t')
    requester.recv()
    ts.append(time()-t)
print min(ts), sum(ts)/len(ts), max(ts)
>>>0.000266075134277 0.000597472190857 0.00339102745056

… and when talking directly to the IPC backbone and waiting for the same message to appear to the subscriber:

ts = []
# 'monitor' is assumed to be a SUB socket subscribed to the 'notify.pingback_test' topic
for x in range(100):
    sleep(0.003)  #simulate spaced requests as in real world
    t = time()
    publisher.notify({'subject':'pingback_test'}) #notify is a method of the Msg_Dispatcher class in zmq_tools.py
    monitor.recv()
    ts.append(time()-t)
print min(ts), sum(ts)/len(ts), max(ts)
>>>0.000180959701538 0.000300960540771 0.000565052032471

Throughput

During a test we have run dual 120fps eye tracking with a dummy gaze mapper that turned every pupil datum into a gaze datum. This is effectively 480 messages/sec. The main process running the IPC Backbone proxy showed a CPU load of 3% on a MacBook Air (late 2012).

Artificially increasing the pupil messages by a factor of 100 increases the message load to 24,000 pupil messages/sec. At this rate the gaze mapper cannot keep up, but the IPC Backbone proxy runs at only 38% CPU load.

It appears ZMQ is indeed highly optimized for speed.

Final remarks

You can send a message onto the bus from anywhere in the app. Just be careful not to send anything that makes another actor crash.

Message Documentation

v0.8 of the Pupil software introduces a consistent naming scheme for message topics. They are used to publish and subscribe to the IPC Backbone. Pre-defined message topics are pupil, gaze, notify, delayed_notify, logging. Notifications sent with the notify_all() function of the Plugin class will be published automatically as notify.<notification subject>.
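
For example, a plugin could emit a notification roughly like this (a sketch; the exact signature of notify_all is defined in plugin.py):

# inside a Plugin subclass method (sketch)
self.notify_all({'subject': 'my_plugin.did_something', 'some_key': 'some_value'})
# subscribers on the IPC Backbone will see this as topic 'notify.my_plugin.did_something'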

Message Reactor and Emitter Documentation

From version v0.8 on, every actor who either reacts to or emits messages is supposed to document its behavior. Therefore every actor should react to notify.meta.should_doc by emitting a message with the topic notify.meta.doc. The answer’s payload should be a serialized dictionary with the following format:

{
  'subject':'meta.doc',
  'actor': <actor name>,
  'doc': <string containing documentation>
}

Plugins use notifications as their primary communication channel to the IPC Backbone. This makes plugins natural actors in the Pupil message scheme. To simplify the above mentioned documentation behavior, plugins only have to add a docstring to their on_notify() method. It should include a list of the messages to which the plugin reacts and those which the plugin emits itself. The docstring should follow the Google docstring style. The main process will automatically generate messages in the format above, using the plugin’s class name as actor and the on_notify() docstring as content for the doc key.
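
A hypothetical on_notify() docstring following this convention could look like the sketch below; the subjects used here are made up for illustration:

def on_notify(self, notification):
    """Starts and stops this plugin's processing.

    Reacts to notifications:
        ``my_plugin.should_start``: Starts processing
        ``my_plugin.should_stop``: Stops processing

    Emits notifications:
        ``my_plugin.started``
        ``my_plugin.stopped``
    """
    if notification['subject'] == 'my_plugin.should_start':
        self.notify_all({'subject': 'my_plugin.started'})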

Notification Overview

You can use the following script to get an overview over the notification handling of the currently running actors:

import zmq, msgpack
from zmq_tools import Msg_Receiver
ctx = zmq.Context()
url = 'tcp://localhost'

# open Pupil Remote socket
requester = ctx.socket(zmq.REQ)
requester.connect('%s:%s'%(url,50020))
requester.send('SUB_PORT')
ipc_sub_port = requester.recv()

# setup message receiver
sub_url = '%s:%s'%(url,ipc_sub_port)
receiver = Msg_Receiver(ctx,sub_url,topics=('notify.meta.doc',))

# construct message
topic = 'notify.meta.should_doc'
payload = msgpack.dumps({'subject':'meta.should_doc'})
requester.send_multipart([topic,payload])

# wait and print responses
while True:
    topic, payload = receiver.recv()
    actor = payload.get('actor')
    doc = payload.get('doc')
    print '%s: %s'%(actor,doc)

Example output for v0.8:

launcher: Starts eye processes. Hosts the IPC Backbone and Logging functions.

    Reacts to notifications:
       ``launcher_process.should_stop``: Stops the launcher process
       ``eye_process.should_start``: Starts the eye process
    
eye0: Reads eye video and detects the pupil.

    Creates a window, gl context.
    Grabs images from a capture.
    Streams Pupil coordinates.

    Reacts to notifications:
       ``set_detection_mapping_mode``: Sets detection method
       ``eye_process.should_stop``: Stops the eye process
       ``recording.started``: Starts recording eye video
       ``recording.stopped``: Stops recording eye video

    Emits notifications:
        ``eye_process.started``: Eye process started
        ``eye_process.stopped``: Eye process stopped

    Emits data:
        ``pupil.<eye id>``: Pupil data for eye with id ``<eye id>``
    
capture: Reads world video and runs plugins.

    Creates a window, gl context.
    Grabs images from a capture.
    Maps pupil to gaze data
    Can run various plugins.

    Reacts to notifications:
        ``set_detection_mapping_mode``
        ``eye_process.started``
        ``start_plugin``

    Emits notifications:
        ``eye_process.should_start``
        ``eye_process.should_stop``
        ``set_detection_mapping_mode``
        ``world_process.started``
        ``world_process.stopped``
        ``recording.should_stop``: Emits on camera failure
        ``launcher_process.should_stop``

    Emits data:
        ``gaze``: Gaze data from current gaze mapping plugin.``
        ``*``: any other plugin generated data in the events that it not [dt,pupil,gaze].
    
Pupil_Remote: send simple string messages to control application functions.

        Emits notifications:
            ``recording.should_start``
            ``recording.should_stop``
            ``calibration.should_start``
            ``calibration.should_stop``
            Any other notification received though the reqrepl port.
        
Screen_Marker_Calibration: Handles calibration notifications

        Reacts to notifications:
           ``calibration.should_start``: Starts the calibration procedure
           ``calibration.should_stop``: Stops the calibration procedure

        Emits notifications:
            ``calibration.started``: Calibration procedure started
            ``calibration.stopped``: Calibration procedure stopped
            ``calibration.failed``: Calibration failed
            ``calibration.successful``: Calibration succeeded

        Args:
            notification (dictionary): Notification dictionary
        
Recorder: Handles recorder notifications

        Reacts to notifications:
            ``recording.should_start``: Starts a new recording session
            ``recording.should_stop``: Stops current recording session

        Emits notifications:
            ``recording.started``: New recording session started
            ``recording.stopped``: Current recording session stopped

        Args:
            notification (dictionary): Notification dictionary
        

Plugin Guide

Plugins Basics

World Process Plugins in Pupil Capture

Pupil Capture’s World process can load plugins for easy integration of new features. Plugins have full access to:

  • World image frame
  • Events
    • pupil positions
    • gaze positions
    • surface events
    • note other events can be added to the event queue by other plugins
  • User input
  • Globally declared variables in the g_pool

Plugins can create their own UI elements, and even spawn their own OpenGL windows.

Pupil Player Plugins

Pupil Player uses an identical plugin structure. Little (often no) work needs to be done to use a Player plugin in Capture and vice versa. But it is important to keep in mind that plugins run in Pupil Capture may require more speed for real-time workflows than plugins in Pupil Player.

Make your own plugin

These general steps are required if you want to make your own plugin and use it within Pupil:

  • Fork the pupil repository (if you haven’t done this already) and create a branch for your plugin. Try to make commits granular so that it can be merged easily with the official branch if so desired.
  • Create a new file
    • In /capture if your plugin only interacts with Pupil Capture’s World process.
    • In /player if your plugin only interacts with Pupil Player.
    • In /shared_modules if your plugin is used in both Pupil Capture and Pupil Player
  • Inherit from the Plugin class template. You can find the base class along with docs in plugin.py. (A good example to reference while developing your plugin is display_recent_gaze.py)
  • Write your plugin (a minimal skeleton is sketched below)
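
A minimal plugin skeleton could look like the following sketch. The exact base class interface is defined in plugin.py and may differ between versions; the event keys mentioned here are only examples:

from plugin import Plugin

class My_Custom_Plugin(Plugin):
    """Minimal plugin sketch - see plugin.py for the full base class."""
    def __init__(self, g_pool):
        super(My_Custom_Plugin, self).__init__(g_pool)

    def update(self, frame, events):
        # frame.img is the current world image,
        # events holds data such as pupil and gaze positions
        pass

    def get_init_dict(self):
        # arguments (besides g_pool) needed to re-create this plugin
        return {}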

Load your Plugin automatically

With Pupil v0.6 we introduce a plugin auto-loader. It works when running from either source or application bundle! There is no need to put your plugin into the directories mentioned above. Instead:

  • In ~/pupil_capture_settings or ~/pupil_player_settings (depending on the plugin application) create a folder called plugins
  • Move your plugin source code file into plugins
  • If your plugin is defined by multiple files inside a directory, move this directory into the plugins dir 1
  • On start-up Pupil will search this folder, and import and add all user plugins into the plugin drop-down menu

1 If your plugin is contained in a directory, make sure to include an __init__.py file similar to this:

from my_custom_plugin_code_module import My_Custom_Plugin_Class

Load your Plugin manually

This is the “old” way of loading plugins. This method gives more flexibility, but that’s about it.

  • Pupil Player
    • Import your plugin in player/main.py
    • Add your plugin to the user_launchable_plugins list in player/main.py
  • Pupil Capture - World Process

    • Import your plugin in capture/world.py
    • Add your plugin to the user_launchable_plugins list in capture/world.py
  • Select your plugin from the “Open plugin” drop-down menu in the main window to begin using it

Text below this line is currently being revised. Feel encouraged to contribute.

Example plugin development walkthrough

Inheriting from existing plugin

If you want to add or extend the functionality of an existing plugin, you should be able to apply standard inheritance principles of Python 2.7.

Things to keep in mind:

  • g_pool is shorthand for “global pool”, a system-wide container full of stuff that is passed to all plugins.
  • if the base plugin is a system (always alive) plugin:
    • remember to close the base plugin in the __init__ method of the inheriting plugin with base_plugin.alive = False. You should find the base_plugin inside g_pool.plugins;
    • remember to dereference the base plugin at the end of the file with del base_plugin to avoid repetition in the user plugin list (see the sketch below);
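
A rough sketch of these two steps, using a hypothetical always-alive plugin called Base_Plugin from a hypothetical module:

# sketch: extending a hypothetical system plugin called Base_Plugin
from base_plugin_module import Base_Plugin  # hypothetical module and class names

class My_Extended_Plugin(Base_Plugin):
    def __init__(self, g_pool):
        super(My_Extended_Plugin, self).__init__(g_pool)
        # close the already running base plugin instance, if any
        for p in g_pool.plugins:
            if p.__class__.__name__ == 'Base_Plugin':
                p.alive = False

# avoid the base plugin showing up twice in the user plugin list
del Base_Plugin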

Hacking an existing plugin

Another way to start plugin development is to use an existing plugin as a template. For example, you could copy the vis_circle.py plugin as a starting point and rename it to, say, open_cv_threshold.py.

Now you can give the class a new name:

class Open_Cv_Threshold(Plugin):

Rename its super reference:

super(Open_Cv_Threshold, self).__init__(g_pool)

Describe what your new plugin will do for yourself in the future and for future generations:

class Open_Cv_Threshold(Plugin):
    """
    Apply cv2.threshold filter to the world image.
    """

Rename its reference in the persistence method:

def clone(self):
    return Open_Cv_Threshold(**self.get_init_dict())

It is good to rename its menu caption as well:

self.menu = ui.Scrolling_Menu('Threshold')

Let’s determine its execution order in relation to the other plugins:

self.order = .8

You can allow or disallow multiple instances of the Custom Plugin through the uniqueness attribute:

self.uniqueness = "by_class"

(uniqueness available options) (describe how to safely remove unneeded parameters/attributes)

Finally, let’s implement what our new plugin will do. Here we choose to apply an OpenCV threshold to the world image and give ourselves proper feedback of the results in real time. Good for OpenCV and related studies. This is possible by means of the update method:

(describe the world frame structure; maybe linking to trusted OpenCv docs)

# at the top of the plugin file you will also need: import cv2 and import numpy as np
def update(self,frame,events):
   img = frame.img
   height = img.shape[0] 
   width = img.shape[1] 
   
   blur = cv2.GaussianBlur(img,(5,5),0)

   edges = []
   threshold = 177
   blue, green, red = 0, 1, 2

   # apply the threshold to each channel 
   for channel in (blur[:,:,blue], blur[:,:,green], blur[:,:,red]):
      retval, edg = cv2.threshold(channel, threshold, 255, cv2.THRESH_TOZERO)
      edges.append(edg)
   
   # lets merge the channels again
   edges.append(np.zeros((height, width, 1), np.uint8))
   edges_edt = cv2.max(edges[blue], edges[green])
   edges_edt = cv2.max(edges_edt, edges[red])
   merge = [edges_edt, edges_edt, edges_edt]
   
   # lets check the result
   frame.img = cv2.merge(merge)

(considering the update method, describe stuff inside the events dictionary)

Plugin Integration

(describe PyGlui menu integration, for example, with a slider to the threshold value and illustrate how achieve persistence of the parameter)

(describe how to integrate the Custom Plugin visualization into the Video Exporter)

(describe how to integrate new data produced by the Custom Plugin into Pupil’s data export work-flow)

Fixation Detector

In [1], Salvucci and Goldberg define different categories of fixation detectors. One of them describes dispersion-based algorithms:

I-DT identifies fixations as groups of consecutive points within a particular dispersion, or maximum separation. Because fixations typically have a duration of at least 100 ms, dispersion-based identification techniques often incorporate a minimum duration threshold of 100-200 ms to help alleviate equipment variability.

The fixation detectors in Pupil Capture and Player implement such a dispersion-based algorithm. Player includes two different detector versions.

  1. Gaze Position 2D Fixation Detector Legacy version, uses mean gaze positions as dispersion measure. Does not comply technically to the exact maximal dispersion, only approximates it.

  2. Pupil Angle 3D Fixation Detector Uses the 3D model’s pupil angle as dispersion measure. Therefore, it requires the 3D pupil detection to be active [See below]. Calculates the maximal pairwise angle for all corresponding pupil positions. This version includes a plugin for Pupil Capture that operates in an online fashion since it does not depend on a calibrated gaze mapper.

[1] Salvucci, D. D., & Goldberg, J. H. (2000, November). Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 symposium on Eye tracking research & applications (pp. 71-78). ACM.

Parameters

As described above, Pupil’s fixation detectors implement dispersion-based algorithms. These have two parameters (a simplified detector sketch follows this list):

  1. Dispersion Threshold (spatial, degree): Maximal distance between all gaze locations during a fixation.
  2. Duration Threshold (temporal, seconds): The minimal duration in which the dispersion threshold must not be exceeded.
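
A simplified, dispersion-based (I-DT style) detector could be sketched like this. This is for illustration only and is not the actual Pupil implementation; samples are assumed to be (timestamp, x_deg, y_deg) tuples sorted by time:

def detect_fixations(samples, max_dispersion_deg, min_duration_s):
    # samples: list of (timestamp, x_deg, y_deg) tuples, sorted by time
    start = 0
    for end in range(len(samples)):
        xs = [s[1] for s in samples[start:end + 1]]
        ys = [s[2] for s in samples[start:end + 1]]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion_deg:
            # the window grew too dispersed: report it if it lasted long enough
            if samples[end - 1][0] - samples[start][0] >= min_duration_s:
                yield samples[start][0], samples[end - 1][0]
            start = end
    # handle the trailing window
    if samples and samples[-1][0] - samples[start][0] >= min_duration_s:
        yield samples[start][0], samples[-1][0]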

Usage

Activation

All versions of the 3D fixation detector require 3D pupil data since they rely on the detected pupil angle for their calculations. To activate 3D pupil detection, select 3D in Capture’s General settings under Detection & Mapping Mode. In Player, it is currently not possible to generate the required 3D data from a recording that only includes 2D detected data.

In Capture, the Fixation Detector 3D is loaded by default. In the future, it will be used to improve the calibration procedure.

In Player, the fixation detectors are not loaded by default. They are activated like any other plugin. See area 2 in Getting Started — Player Window. Depending on the length of the recording, the Player window might freeze for a short time. This is due to the detector looking for fixations in the whole recording.

Data access

All fixation detectors augment the events object which is passed to each plugin’s update(frame,events) method (see the Plugin Guide). They add a list of fixation dictionaries under the key 'fixations'. The exact format of these fixations is described below.
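
For example, a plugin’s update method could read the detected fixations like this (a sketch, following the plugin update signature used throughout this guide):

def update(self, frame, events):
    for fixation in events.get('fixations', []):
        # norm_pos is the fixation centroid in normalized world coordinates
        print 'fixation at %s with duration %s' % (fixation['norm_pos'], fixation['duration'])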

In Player, all fixations are known a priori and can be referenced in all their related frames. In the case of a long fixation, the detector will try to generate a single representation instead of multiple ones. In contrast, the detector in Capture will look for the shortest fixation that complies with the parameters above and add it to events once it is found. The plugin will then look for a new fixation. This means that a long fixation might be split into multiple fixation events. This difference in behavior is due to the different data availability in Capture and Player.

Visualisation

Use the Vis Fixation Player plugin to visualise fixations. It will show a red dot during saccades and a green circle during fixations.

Fixation Format

Capture

Fixations are represented as Python dictionaries consisting of the following keys:

  • norm_pos: Normalized position of the fixation’s centroid
  • base_data: Pupil data during the fixation
  • duration: Exact fixation duration
  • dispersion: Dispersion, in degree
  • timestamp: Timestamp of the first related pupil datum
  • pupil_diameter: Average pupil diameter
  • confidence: Average pupil confidence
  • eye_id: Eye id to which the fixation belongs

Player

Player detected fixations also include:

  • start_frame_index: Index of the first related frame
  • mid_frame_index: Index of the median related frame
  • end_frame_index: Index of the last related frame
  • pix_dispersion: Dispersion in pixels

USB Bandwidth And Synchronization

USB Bandwidth limits and ways to make it work regardless

The Pupil headset uses 2-3 cameras that are electrically and firmware-wise identical (except for the name in the USB descriptor). Our Pupil camera can supply frames in various resolutions and rates, uncompressed (YUV) and compressed (MJPEG). When looking at uncompressed data, even a single camera can saturate a high speed USB bus. This is why we always use MJPEG compression: we can squeeze the data of 3 cameras through one USB bus because the image data is compressed by a factor of ~10.

JPEG size estimation and custom video backends

However, the actual size of each image depends on the complexity of the content (JPEGs of images with more features will be bigger) and implementation details of the camera firmware. Because the cameras use isochronous USB transfers, we need to allocate bandwidth during stream initialization. Here we need to make an estimate of how much bandwidth we believe the camera will require. If we are too conservative, we require more bandwidth for 3 cameras than is available and initialization will fail. If we allocate too little, we risk that image transport will fail during capture. According to the UVC specs, the amount of bandwidth that is required must be read from the camera USB descriptor, and usually this estimate is extremely conservative. This is why with the normal drivers you can never run more than one camera at decent resolutions on a single USB bus.

With our version of libuvc and pyuvc we ignore the cameras request and estimate the bandwidth ourselves like this:

//the proper way: ask the camera
config_bytes_per_packet = strmh->cur_ctrl.dwMaxPayloadTransferSize;

// our way: estimate it:
size_t bandwidth = frame_desc->wWidth * frame_desc->wHeight / 8 * bandwidth_factor; //the last factor is bpp (default 4) but we use it for compression; 2 is safe, 1.5 is needed to run 3 high speed cameras on one bus.
bandwidth *= 10000000 / strmh->cur_ctrl.dwFrameInterval + 1;
bandwidth /= 1000; //unit
bandwidth /= 8; // 8 high speed usb microframes per ms
bandwidth += 12; //header size
config_bytes_per_packet = bandwidth;

The scale factor bandwidth_factor is settable through the api.
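
For example, with pyuvc the factor can be set on the capture object before streaming. This is a sketch assuming the pupil-labs/pyuvc API (device_list, Capture, bandwidth_factor, frame_mode):

import uvc

dev = uvc.device_list()[0]          # first attached UVC camera
cap = uvc.Capture(dev['uid'])
cap.bandwidth_factor = 1.5          # lower values allocate less isochronous bandwidth per camera
cap.frame_mode = (640, 480, 120)    # width, height, fps - must be a mode the camera supports
frame = cap.get_frame_robust()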

We have tested these settings and found that we can run 3 Pupil cameras at 720p@60fps + 2x480p@120fps on our Mac and Linux machines. If you play with the resolutions and frame rates in Pupil Capture, you may hit a combination where the total bandwidth requirements cannot be met, thus the crash (I assume).

Use more BUS

If you want to not be limited by the bandwidth of a single usb bus you can use an alternative usb clip that will expose each camera on a separate usb connector. We’d be happy to send you this breakout board if you want. Just make sure that you also have three free USB controllers (not plugs) on your PC.

Multi Camera Synchronization

Each camera we use is a free running capture device. Additionally each camera runs in a separate process. Instead of frame-locking the camera through special hardware we acquire timestamps for each frame. These timestamps are then used to correlate data from each camera in time and match frames based on closest proximity.

Data from each eye camera is sent via IPC to the world process. Since this involves three separate processes, it can happen that data from one camera arrives earlier than from another. However, for each camera the frames will be ordered and timestamps are monotonically increasing. In the main process we match the available data timewise when we need it. In Pupil Player we can do the matching after the fact to work with perfectly sorted data from all three cameras. If you require the data to be matched rather than recent, I would recommend collecting data in the queue for a few more frames in world.py before dispatching it in the events dict. (I’ll actually do some tests on this subject soon.)

A note on synchronization and rolling shutters

While synchronization through hardware is preferable, its implementation would come at added hardware cost. The benefits of that become questionable at 120fps. At this rate the frame interval is about 8ms, which is very close to the exposure time of the eye cameras. Since our cameras use a rolling shutter, the image is actually taken continuously and the time of exposure changes based on the pixel position on the sensor. You can think of the camera image stream as a scanning sensor readout with data packed into frames and timestamped with the time of the first pixel readout. If we then match frames from two or more sensors we can assume that the pixels across two cameras are generally no further apart in time than the first and last pixel of one frame from a single camera.

License

We want Pupil to proliferate! We want you to use Pupil to empower and inspire whatever you do. Be it academic research, commercial work, teaching, art, or personally motivated projects.

We want you to be a member of the Pupil community and contribute as much as possible. The software is open and the hardware is modular and accessible. We encourage the modification of software in accordance to the open source license.

Software

All source code written by us is open source in accordance with the GNU Lesser General Public License (LGPL v3.0). We encourage you to change and improve the code. We require that you share your work with the Pupil community.

Hardware

The camera mounts of the Pupil headset are open source for non-commercial use. We distribute CAD files for camera mounts and document the interface geometry in the Pupil Hardware Development so that you can use different cameras or customize for your specific needs. Again, we encourage this and are excited to see new designs, ideas, and support for other camera models.

The actual frame of the Pupil headset is not open-source and distributed via Shapeways and direct sales. We do this for a few reasons:

  • The printed geometry is not the actual CAD file. It is the result of a Finite Element Analysis (FEA) simulation that is exported as a triangle mesh. The CAD file itself is useless because the headset does not fit well without the FEA step. The FEA output is useless as a source file because the triangle mesh is hard to properly manipulate in a CAD environment.
  • The entire design is based on the material properties of laser sintered nylon. This is what allows the headset to be so light, flexible, and strong. Unless you own an EOS SLS machine, Shapeways will always outperform the field in terms of price. In other words: The headset does not make any sense in another material and the material/manufacturing is expensive when you buy it from somewhere other than Shapeways.

We take a markup fee for every headset to finance the Pupil project. This fee supports open source development, so that you can continue to get software for free!

If you have ideas and suggestions for improvements on the actual frame of the headset we are happy to collaborate closely on improvements. Contact us.

Documentation

All content of the documentation written by us is open source, according to GNU Lesser General Public License (LGPL v3.0) license.

Using Pupil in Your Research and Projects

You can use Pupil in your research, academic work, commercial work, art projects and personal work. We only ask you to credit us appropriately. See Academic Citation for samples.

Pupil is developed and maintained by Pupil Labs. If you make a contribution to open source, we will include your name in our [[Contributors]] page. For more information about the people behind the project, check out Pupil Labs.

Alternate Licensing

If you would like to use Pupil outside of the GNU Lesser General Public License (LGPL v3.0) license, contact us so we can discuss options. Send an email to us at sales [at] pupil-labs [dot] com

Community

Pupil Community

The Pupil community is made up of amazing individuals around the world. It is your effort and exchanges that enable us to discover novel applications for eye tracking and help us to improve the open source repository. The Pupil community is active, growing, and thrives from your contributions.

Connect with the community, share ideas, solve problems and help make Pupil awesome!

Contribute!

Fork the Pupil repository and start coding!

Github

  • Find a bug? Raise an issue in the GitHub project (pupil-labs/pupil).
  • Have you made a new feature, improvement, edit, or something cool that you want to share? Send us a pull request. We check regularly. If we merge your feature(s) you’ll be credited in the Pupil [[Contributors]] page.
  • Want to talk about ideas for a new feature, project, or general questions? Head over to the Pupil Labs Google Group.

Google Group

Pupil Google Group

A great place to discuss ideas for projects, to raise questions, search through old questions, and meet people in the community. Join the group for frequent updates and version release notifications (google login required).

Discord

For quick questions and casual discussion, chat with the community on Discord.

Email

If you want to talk directly to someone at Pupil Labs, email is the easiest way.

Academic Citation

We have been asked a few times about how to cite Pupil in academic research. Please take a look at our papers below for citation options. If you’re using Pupil as a tool in your research please cite the below UbiComp 2014 paper.

Papers that cite Pupil

We have compiled a list of publications that cite Pupil in this spreadsheet

UbiComp 2014 Paper

Title

Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction

Abstract

In this paper we present Pupil – an accessible, affordable, and extensible open source platform for pervasive eye tracking and gaze-based interaction. Pupil comprises 1) a light-weight eye tracking headset, 2) an open source software framework for mobile eye tracking, as well as 3) a graphical user interface to playback and visualize video and gaze data. Pupil features high-resolution scene and eye cameras for monocular and binocular gaze estimation. The software and GUI are platform-independent and include state-of-the-art algorithms for real-time pupil detection and tracking, calibration, and accurate gaze estimation. Results of a performance evaluation show that Pupil can provide an average gaze estimation accuracy of 0.6 degree of visual angle (0.08 degree precision) with a processing pipeline latency of only 0.045 seconds.

Permalink to article

Available on dl.acm.org: http://dl.acm.org/citation.cfm?doid=2638728.2641695

BibTeX Style Citation

@inproceedings{Kassner:2014:POS:2638728.2641695,
 author = {Kassner, Moritz and Patera, William and Bulling, Andreas},
 title = {Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction},
 booktitle = {Adjunct Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing},
 series = {UbiComp '14 Adjunct},
 year = {2014},
 isbn = {978-1-4503-3047-3},
 location = {Seattle, Washington},
 pages = {1151--1160},
 numpages = {10},
 url = {http://doi.acm.org/10.1145/2638728.2641695},
 doi = {10.1145/2638728.2641695},
 acmid = {2641695},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {eye movement, gaze-based interaction, mobile eye tracking, wearable computing},
}

Pupil Technical Report

Title

Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction

Abstract

Commercial head-mounted eye trackers provide useful features to customers in industry and research but are expensive and rely on closed source hardware and software. This limits the application areas and use of mobile eye tracking to expert users and inhibits user-driven development, customization, and extension. In this paper we present Pupil – an accessible, affordable, and extensible open source platform for mobile eye tracking and gaze-based interaction. Pupil comprises 1) a light-weight headset with high-resolution cameras, 2) an open source software framework for mobile eye tracking, as well as 3) a graphical user interface (GUI) to playback and visualize video and gaze data. Pupil features high-resolution scene and eye cameras for monocular and binocular gaze estimation. The software and GUI are platform-independent and include state-of-the-art algorithms for real-time pupil detection and tracking, calibration, and accurate gaze estimation. Results of a performance evaluation show that Pupil can provide an average gaze estimation accuracy of 0.6 degree of visual angle (0.08 degree precision) with a latency of the processing pipeline of only 0.045 seconds.

Permalink to article

Available on arxiv.org: http://arxiv.org/abs/1405.0006

BibTeX Style Citation

@article{KassnerPateraBulling:2014,
  author={Kassner, Moritz and Patera, William and Bulling, Andreas},
  title={Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction},
  keywords={Eye Movement, Mobile Eye Tracking, Wearable Computing, Gaze-based Interaction},
  year={2014},
  month={April},
  archivePrefix = "arXiv",
  eprint        = "1405.0006",
  primaryClass  = "cs-cv",
  url = {http://arxiv.org/abs/1405.0006}
}

MIT Thesis

Abstract

This thesis explores the nature of a human experience in space through a primary inquiry into vision. This inquiry begins by questioning the existing methods and instruments employed to capture and represent a human experience of space. While existing qualitative and quantitative methods and instruments – from “subjective” interviews to “objective” photographic documentation – may lead to insight in the study of a human experience in space, we argue that they are inherently limited with respect to physiological realities. As one moves about the world, one believes to see the world as continuous and fully resolved. However, this is not how human vision is currently understood to function on a physiological level. If we want to understand how humans visually construct a space, then we must examine patterns of visual attention on a physiological level. In order to inquire into patterns of visual attention in three dimensional space, we need to develop new instruments and new methods of representation. The instruments we require, directly address the physiological realities of vision, and the methods of representation seek to situate the human subject within a space of their own construction. In order to achieve this goal we have developed Pupil, a custom set of hardware and software instruments, that capture the subject’s eye movements. Using Pupil, we have conducted a series of trials from proof of concept – demonstrating the capabilities of our instruments – to critical inquiry of the relationship between a human subject and a space. We have developed software to visualize this unique spatial experience, and have posed open questions based on the initial findings of our trials. This thesis aims to contribute to spatial design disciplines, by providing a new way to capture and represent a human experience of space.

(Authors names appear in alphabetical order - equal hierarchy in authorship.)

On MIT DSpace: http://hdl.handle.net/1721.1/72626

BibTeX Style Citation

@mastersthesis{Kassner:Patera:2012,
  title={{PUPIL: Constructing the Space of Visual Attention}},
  author={Kassner, Moritz Philipp and Patera, William Rhoades},
  year={2012},
  school={Massachusetts Institute of Technology},
  url={http://hdl.handle.net/1721.1/72626}
}

Chicago Style Citation

Moritz Kassner, William Patera, Pupil: Constructing the Space of Visual Attention, SMArchS Master Thesis, (Cambridge: Massachusetts Institute of Technology, 2012).

APA Style Citation

Kassner, M., & Patera, W. (2012). Pupil: Constructing the space of visual attention (Unpublished master’s thesis). Massachusetts Institute of Technology, Cambridge, MA. Available from http://hdl.handle.net/1721.1/72626

You are also welcome to link to our code repositories: Pupil Github Repository