Multiple camera calibration

This document describes the calibration of three Microsoft Kinect sensors connected to one computer through several USB controllers. The three-camera setup is shown below:

Figure 1

Intrinsic, extrinsic, and Kinect2Kinect calibration are performed to determine the position of each sensor in space. Our setup is ROS Indigo on Ubuntu 14.04.

ROS Indigo

sudo apt update && sudo apt install software-properties-common python-software-properties usbutils git cmake build-essential -y
sudo add-apt-repository main
sudo add-apt-repository universe
sudo add-apt-repository restricted
sudo add-apt-repository multiverse
sudo apt update
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
sudo apt update
sudo apt install ros-indigo-desktop-full -y
sudo rosdep init
rosdep update
echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
source ~/.bashrc
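
To verify the installation (optional), the following quick check should print the distribution name, assuming the setup.bash sourced above:

# Confirm that the ROS environment is sourced and the distribution is Indigo
rosversion -d        # expected output: indigo
printenv ROS_DISTRO  # expected output: indigo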

The freenect_launch and camera_pose ROS packages are used. The camera_pose package provides the pipeline to calibrate the relative 6D poses between multiple cameras. The freenect_launch package contains launch files for using OpenNI-compliant devices in ROS; it creates a nodelet graph that transforms raw data from the device driver into point clouds, disparity images, and other products suitable for processing and visualization. It is installed with catkin as follows:

# Prepare the catkin workspace for the libfreenect
mkdir -p ~/libfreenect_catkin_ws/src
cd ~/libfreenect_catkin_ws/src
catkin_init_workspace
cd ~/libfreenect_catkin_ws/
catkin_make
source devel/setup.bash

# Get the latest sources from the github repository
cd ~/libfreenect_catkin_ws/src/ 
git clone https://github.com/olzhas/libfreenect.git

# Grab the source of the freenect_stack
git clone https://github.com/ros-drivers/freenect_stack.git tmp
mv tmp/freenect_stack .
mv tmp/freenect_camera .
mv tmp/freenect_launch .
rm -rf tmp/

# catkinize the "vanilla" libfreenect sources
cd ~/libfreenect_catkin_ws/src/libfreenect/
wget -q https://raw.githubusercontent.com/ros-drivers-gbp/libfreenect-release/master/patches/package.xml

Replace :{version} in the package.xml file with 0.4.3, or with whatever version is current at the time of reading (0.4.3 is a known working version with ROS Indigo).

Original file:

<?xml version="1.0"?>
<package>
  <name>libfreenect</name>
  <version>:{version}</version>

Update file:

<?xml version="1.0"?>
<package>
  <name>libfreenect</name>
  <version>0.4.3</version>
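
If you prefer to script this step, a single sed command can substitute the placeholder, assuming the downloaded package.xml contains the literal :{version} token shown above:

# Replace the :{version} placeholder with a concrete version number
sed -i 's/:{version}/0.4.3/' ~/libfreenect_catkin_ws/src/libfreenect/package.xml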

After that, install the required packages using rosdep:

rosdep update
rosdep install libfreenect
sudo apt install ros-indigo-camera-info-manager ros-indigo-diagnostic-msgs ros-indigo-diagnostic-updater ros-indigo-dynamic-reconfigure libfreenect-dev

Finally, packages can be built with catkin_make_isolated:

# use sudo if needed
catkin_make_isolated
catkin_make_isolated --install --install-space=/opt/ros/indigo/
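
As an optional sanity check, with the workspace installed into /opt/ros/indigo as above, the freenect packages should be discoverable and the Kinects should appear on the USB bus (the grep pattern is an assumption; the exact device strings may differ between Kinect revisions):

# Verify that the freenect packages are on the ROS package path
rospack find freenect_launch

# List connected Kinect devices (each sensor exposes camera, motor and audio endpoints)
lsusb | grep -i xbox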

An object with known dimensions is required for calibration; we used a 6x8 checkerboard with 0.1 m squares.

Intrinsic Calibration

Calibration of intrinsic parameters for depth and RGB sensors is performed separately as described here.

RGB: Run freenect_launch and start calibration:

roslaunch freenect_launch freenect.launch
rosrun camera_calibration cameracalibrator.py image:=/camera/rgb/image_raw camera:=/camera/rgb --size 8x6 --square 0.1

This will open the calibration window, which will highlight the checkerboard. To collect the necessary data, move the checkerboard in the camera's field of view as follows:

  • left/right for X bar
  • top/bottom for Y bar
  • toward/away for Size bar
  • tilt

When the CALIBRATE button lights up, you have enough data and can click it to see the results. Do not forget to push the COMMIT button to send the calibration parameters to the camera for permanent storage. By default, the result is saved as $HOME/.ros/camera_info/NAME.yaml, where NAME is the camera serial number.
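
To check which intrinsic calibration files have been stored so far:

# List the stored calibration files (one YAML per camera serial number)
ls ~/.ros/camera_info/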

IR: When calibrating the depth camera, it is necessary to block the IR emitter with paper. In our case, a separate illumination source was provided by the other two Kinect depth sensors; to remove the speckle pattern, those IR emitters were also covered with a transparent plastic bag. Alternatively, sunlight, halogen lamps, or incandescent lamps can be used. Launch the calibration and follow the same procedure as for RGB:

roslaunch freenect_launch freenect.launch
rosrun camera_calibration cameracalibrator.py image:=/camera/ir/image_raw camera:=/camera/ir --size 8x6 --square 0.1

Extrinsic Calibration

Depth to RGB calibration is based on this tutorial. Fix the checkerboard such that it is visible to both the RGB and depth sensors, and make sure that the Kinect and the checkerboard do not move during calibration. First, create a launch file that brings up a Kinect camera and starts the extrinsic tf publisher (used after calibration is performed); change n to the camera id number:

<launch>

  <!-- Bring up Kinect and processing nodelets -->
  <include file="$(find freenect_launch)/launch/freenect.launch">
    <arg name="device_id" value="#n" />
    <arg name="camera" value="camera_n" />
    <arg name="publish_tf" value="false" />
  </include>

  <!-- Extrinsic transform publisher -->
  <include file="$(find camera_pose_calibration)/blocks/calibration_tf_publisher.launch">
    <arg name="cache_file" value="/some/path/kinect_extrinsics_cache.bag" />
  </include>

</launch>

We called this file calibration.launch and put it in the camera_pose_calibration package. To launch it, run:

roslaunch camera_pose_calibration calibration.launch

To record roughly 10 seconds of the RGB stream to rgb.bag (300 messages at the Kinect's ~30 Hz frame rate), run the following in a new terminal:

rosbag record --limit=300 -O rgb.bag /camera/rgb/image_rect_color /camera/rgb/camera_info
rosparam set /use_sim_time true

Now it is necessary to Ctrl-C calibration.launch and launch it again.

For the extrinsic calibration of the RGB to IR camera, the standard program (camera_pose_calibration) does not work correctly (source). It seems that the IR image is provided to the checkerboard detector in 16-bit format: the checkerboard detector uses the 8 most significant bits, while the IR data sits in the 8 least significant bits. Thus, the checkerboard detector does not detect anything. However, the application appears to display the 8 least significant bits, so visually everything looks fine.

To make the checkerboard detector work, a node called contrast_augmenter was created; it takes the IR image and increases its contrast (multiplies every pixel value by 256). This resolves the problem and the checkerboard detector works (although the image shown in the application becomes completely white). The node is not limited to the IR image; it can increase the contrast of any greyscale image (e.g. a depth map). Once freenect_launch is running again, run the following command to increase the contrast of the IR image:

rosrun contrast contrast_augmenter image:=/camera/ir/image_raw

To run the calibration:

roslaunch camera_pose_calibration calibrate_2_camera.launch camera1_ns:=/camera/rgb_bag camera2_ns:=/camera/ir_augmented checker_rows:=6 checker_cols:=8 checker_size:=0.1

And finally, play the recorded RGB stream:

rosbag play rgb.bag --clock /camera/rgb/camera_info:=/camera/rgb_bag/camera_info /camera/rgb/image_rect_color:=/camera/rgb_bag/image_rect_color

Ctrl-C everything and do not forget:

rosparam set /use_sim_time false

Run calibration.launch again. To see the results of the calibration:

rostopic echo /camera_calibration

Here is the tf data we obtained:

for camera 1:

    position: 
      x: -2.17774617411e-08
      y: -4.84326304443e-09
      z: 3.14672124419e-08
    orientation: 
      x: 1.06121692259e-09
      y: 3.14327702883e-09
      z: -1.21954557514e-09
      w: 1.0
  - 
    position: 
      x: 0.0622525826336
      y: 0.0354693550275
      z: -0.0233953998406
    orientation: 
      x: 0.0343410207174
      y: -0.00768412663298
      z: 0.00777143559569
      w: 0.999350415661
camera_id: ['staubli/camera1_depth_optical_frame', 'staubli/camera1_rgb_optical_frame']

for camera 2:

    position: 
      x: -2.83841100571e-09
      y: -1.62525361519e-08
      z: -4.20796779814e-08
    orientation: 
      x: -1.28856368937e-09
      y: -3.29838025296e-09
      z: -8.68888815497e-09
      w: 1.0
  - 
    position: 
      x: -0.0105104469642
      y: 0.0309357871855
      z: -0.0449924461108
    orientation: 
      x: 0.0107373603171
      y: -0.00109165525595
      z: 0.00184034782636
      w: 0.999940063455
camera_id: ['staubli/camera2_depth_optical_frame', 'staubli/camera2_rgb_optical_frame']

and for camera 3:

  - 
    position: 
      x: -2.20456229824e-08
      y: 1.50375021304e-08
      z: -1.37520456933e-08
    orientation: 
      x: -3.01796671591e-09
      y: 3.02103995174e-09
      z: 1.78404419394e-08
      w: 1.0
  - 
    position: 
      x: -0.121509338799
      y: 0.0164168759187
      z: 0.0244907109148
    orientation: 
      x: -0.00248604996093
      y: 0.0133469213922
      z: -0.00229111396518
      w: 0.999905210528
camera_id: ['camera3_rgb_optical_frame', 'camera3_depth_optical_frame']

Start rviz and open a PointCloud2 display for the topic /camera/depth_registered/points. Set the Fixed Frame to /camera_depth_optical_frame. You should see a color point cloud. There is a known problem with the visualization, which is solved as described here.
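
If rviz is not already running, it can be started from the same sourced environment; the display and Fixed Frame settings are then configured in the GUI as described above:

# Start rviz to inspect the registered point cloud
rosrun rviz rviz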

Kinect2Kinect Calibration

As can be observed in Figure 1, only cameras 1 and 2 and cameras 1 and 3 have overlapping regions, so these pairs are calibrated. Fix the checkerboard such that it is visible to both RGB sensors. To use the camera pose calibration, prepare a .launch file that brings up two Kinect cameras and the tf transform publisher (similar to the one used for extrinsic calibration); a minimal sketch is given below.
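
The sketch below assumes the same device_id / camera naming convention as calibration.launch above; the device ids and the cache path are placeholders to adjust for your setup:

<launch>

  <!-- Bring up the first Kinect -->
  <include file="$(find freenect_launch)/launch/freenect.launch">
    <arg name="device_id" value="#1" />
    <arg name="camera" value="camera1" />
    <arg name="publish_tf" value="false" />
  </include>

  <!-- Bring up the second Kinect -->
  <include file="$(find freenect_launch)/launch/freenect.launch">
    <arg name="device_id" value="#2" />
    <arg name="camera" value="camera2" />
    <arg name="publish_tf" value="false" />
  </include>

  <!-- Extrinsic transform publisher -->
  <include file="$(find camera_pose_calibration)/blocks/calibration_tf_publisher.launch">
    <arg name="cache_file" value="/some/path/kinect2kinect_extrinsics_cache.bag" />
  </include>

</launch>

With both cameras and the tf publisher running, start the pair calibration: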

roslaunch camera_pose_calibration calibrate_2_camera.launch camera1_ns:=camera1/rgb camera2_ns:=camera2/rgb checker_rows:=6 checker_cols:=8 checker_size:=0.1

For cameras 1 and 2 we obtained the following result:

camera_pose: 
  - 
    position: 
      x: -1.33316832231e-06
      y: -1.61521949383e-06
      z: -4.3857209996e-07
    orientation: 
      x: -9.65474653691e-07
      y: 8.82939862045e-07
      z: -2.88101142914e-07
      w: 0.999999999999
  - 
    position: 
      x: -0.719024521948
      y: -0.746205606969
      z: 0.595393125452
    orientation: 
      x: 0.201557900888
      y: 0.15092888561
      z: 0.967775346997
      w: 0.00240037960227
camera_id: ['staubli/camera1_rgb_optical_frame', 'staubli/camera2_rgb_optical_frame']

Multi-camera Point Cloud

To obtain a merged point cloud from two cameras, we need to publish the tf tree shown in Figure 2. As a result of the extrinsic calibration we obtained the transformations of the RGB (staubli/cameraN_rgb_optical_frame) and depth (staubli/cameraN_depth_optical_frame) frames with respect to the world. From the Kinect2Kinect calibration we know the transformation of each RGB frame with respect to the fixed world frame. Finally, we need to combine these into a single tf tree. Here, cameraN_link is the fixed frame of an individual camera.

Figure 2.
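
One way to publish the links of this tree is with static_transform_publisher nodes from the tf package, one per edge, using the translations and quaternions reported on /camera_calibration. This is only a sketch: the numeric values are placeholders, and whether a given pose must be inverted should be checked against the camera_id order in the calibration output.

# Publish world -> camera1 RGB optical frame (pose from the Kinect2Kinect calibration)
rosrun tf static_transform_publisher <x> <y> <z> <qx> <qy> <qz> <qw> /world /staubli/camera1_rgb_optical_frame 100

# Publish camera1 RGB -> camera1 depth optical frame (pose from the extrinsic calibration)
rosrun tf static_transform_publisher <x> <y> <z> <qx> <qy> <qz> <qw> /staubli/camera1_rgb_optical_frame /staubli/camera1_depth_optical_frame 100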
