
@mikelgg93
Last active October 20, 2022 12:08
Post-Hoc Apply multiple offset corrections.

In this gist, we will learn everything we need to apply different offset corrections to a Pupil Invisible recording.

To do this, you will need to make some manual adjustments and use either a spreadsheet or a programming language to modify the data. By the end of the guide, you will have the corrected gaze positions, but you will still need to create a visualisation yourself.

To help you follow these instructions, we will illustrate them with a driving case. Here, we would like to apply one offset for the road (far distance) and a different one for the cockpit (close distance, where parallax errors may occur).

Get the offset value for a specific distance (near distance)

  1. Ask each participant to fixate on a point at the desired distance. In the car example, we will select an object from the car's cockpit (e.g. the speedometer needle or a number in the tachometer).
  2. Place this target/object in the centre of the scene camera. Use the preview mode in the Companion App to be sure the object is in the centre of the image. Otherwise, ask the participant to rotate the head until it is centred.
  3. With the object correctly placed and the subject gazing at it, we will correct the offset, save it and make a small recording.
  4. Later, we will create a Raw Data Enrichment in the Cloud and download the files. Navigate to the download folder and find the info.json file. In that file, we can find the gaze offset value.

We will refer to this value as the near offset correction value.
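As a sketch of step 4, the offset can be read from the downloaded info.json with a few lines of Python. Note that the key name `gaze_offset` is an assumption here; check your own info.json, as the exact field name may vary between Companion App versions.

```python
import json

def read_gaze_offset(info_path):
    """Return the gaze offset stored in an info.json file.

    NOTE: "gaze_offset" is an assumed key name -- inspect your own
    info.json to confirm it, since the schema may differ by version.
    """
    with open(info_path) as f:
        info = json.load(f)
    return info.get("gaze_offset")  # e.g. [x_offset, y_offset]

# Usage (path is illustrative):
# near_offset = read_gaze_offset("raw-data-export/info.json")
```

If the key is present, the function returns the stored x/y offset pair; otherwise it returns `None`, which is a useful signal that the field name needs checking.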

Back to data collection

After the small video with the near-offset correction, we can adjust the offset to a distant object (e.g. a sign on the road) and make our recording.

When do I correct the gaze?

Using surface coordinates

  1. You can use the Marker Mapper to label the surface you are interested in (e.g. the cockpit).
  2. On downloading this enrichment, you will find a gaze.csv file containing a boolean (true or false) field that indicates whether the gaze was detected on the surface.
  3. Using this field, you can apply the near offset correction value to the gaze coordinates only when the gaze is in your area of interest (e.g. in the cockpit).
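The steps above can be sketched with pandas. The column names used here (`gaze detected on surface`, `gaze x [px]`, `gaze y [px]`) are assumptions based on typical Pupil Cloud exports; match them to the header of your own gaze.csv.

```python
import pandas as pd

def apply_near_offset(gaze_df, offset_x, offset_y,
                      detected_col="gaze detected on surface",
                      x_col="gaze x [px]", y_col="gaze y [px]"):
    """Add the near offset only to samples where gaze is on the surface.

    Column names are assumptions -- adjust them to your gaze.csv header.
    """
    out = gaze_df.copy()
    on_surface = out[detected_col].astype(bool)
    out.loc[on_surface, x_col] += offset_x
    out.loc[on_surface, y_col] += offset_y
    return out

# Usage (file name is illustrative):
# gaze = pd.read_csv("gaze.csv")
# corrected = apply_near_offset(gaze, offset_x=12.0, offset_y=-5.0)
```

Working on a copy keeps the original export untouched, so you can compare corrected and uncorrected positions side by side.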

Using events annotations

  1. You can also achieve the same result by manually creating event annotations to split your recordings, as we describe here.
  2. In our example, you will need to create one event when the participant starts looking at the cockpit and another when they stop. Then, using the timestamps, apply the near offset correction to all gaze points between the start and stop events.

NOTE: To apply the near offset, you may have to remove the far offset correction first; you can find its value using the same steps as before, but on the new recording.

BONUS POINTS: After correcting the offset, you might be interested in undistorting the video and gaze data to remove the fisheye lens distortion. Check out this tutorial.
