\documentclass[a4paper, 11pt]{report}
\usepackage{graphicx}
\graphicspath{ {figures/} }
\usepackage{subcaption}
\usepackage{titlesec}
\usepackage{float}
\usepackage{hyperref}
\usepackage{fancyvrb}
\usepackage{afterpage}
\usepackage[utf8]{inputenc}
\usepackage{comment}
\usepackage[square, numbers]{natbib}
\bibliographystyle{unsrtnat}
\usepackage[paper=a4paper,layout=a4paper]{geometry}
\usepackage{fancyhdr}
\pagestyle{fancy}
\lhead{B262a - Robot Programming}
\rhead{Page \thepage}
%\newcommand\blankpage{%
% \null
% \thispagestyle{empty}%
% \addtocounter{page}{-1}%
% \newpage}
\newcommand{\namesigdate}[2][5cm]{%
  \begin{tabular}{@{}p{#1}@{}}
    #2 \\[2\normalbaselineskip] \hrule \\[15pt]
  \end{tabular}%
}
\begin{document}
\chapter*{B262a}
\noindent \namesigdate{Martin B. Jensen \\20153142} \hfill \namesigdate[5cm]{Simon N. Callisen \\20155545} \\ \namesigdate{Mathiebhan Mahendran \\20157592} \hfill \namesigdate[5cm]{Ion Sircu \\20136350} \\ \namesigdate{Thomas J. Nielsen \\20153753} \hfill \namesigdate[5cm]{Nick J. Pedersen \\20143126} \\ \namesigdate{Daniil Avdejev \\20155436}
\pagenumbering{gobble}
\newpage
\pagenumbering{arabic}
\chapter*{Introduction}
\chapter*{Description of concept}
From Matthias: "Description of mini project concept. The mini project can be based on parts of your semester project but should be able to run independently of the rest of the project."
\\This robot programming mini-project is partially based on the P1 semester project created by B262a.
In the P1 project, the turtlebot was equipped with the Xtion Pro Live depth sensor and a thermal camera.
\begin{figure}[H]
%\hspace{-0.5em}
\centering
\includegraphics[scale=0.16]{prototype}
\caption{The turtlebot with the Xtion Pro Live and Axis Q1922 cameras}
\label{fig:my_label3}
\end{figure}
\newpage
The functionality of the robot is to find an object of relatively high temperature while avoiding obstacles during the search. The thermal camera is pointed toward the ceiling, so that the fluorescent lamps can be used as the object of interest. The Xtion Pro Live depth sensor is mounted so that it is parallel with the floor, and can be used for avoiding objects while the turtlebot is navigating.
\chapter{Hardware and software components}
From Matthias: "Description of necessary hardware and software components. The project should at least feature 2 ROS nodes that are in communication with each other. Usage of actual robot hardware is optional in this mini project."
\section{Hardware}
\subsection{Turtlebot}
%Put short description of the turtlebot platfrom here
\begin{figure}[H]
\centering
\includegraphics[scale=0.125, angle=270]{Turtlebot_picture01}
\caption{The turtlebot with a mounted laptop}
\label{Turtlebot}
\end{figure}
The robot that will be used in this project is the turtlebot\cite{Turtlebot}. The turtlebot is a ground-based robot. Its maximum speed is 0.65 m/s, and it can rotate 360 degrees.
\\The turtlebot is connected to and controlled by a laptop mounted on top of it. The laptop runs Ubuntu as its operating system, making it possible to program the turtlebot, give it specific movement commands, and handle communication. The sensors are needed so the robot can move around obstacles and maintain a clear view of its surroundings. \vspace{0.5cm}
\subsection{Sensors}
%Short introduction here
During the P1 project, research into sensor technologies was performed, and the sensor technologies necessary to fulfill the project requirements were identified. The acquired sensors are the following:
\subsubsection{Xtion Live Pro depth sensor}
%Short description of what this sensor is and its potential.
The Xtion Pro Live\cite{Xtion1} is a structured-light 3D scanner. It is built with two cameras (an RGB camera and an IR depth sensor) and one IR projector. The IR projector emits infrared light, which is reflected by a surface and captured by the IR sensor; this can then be used to measure distance. This makes the sensor ideal for obstacle avoidance programs.
\subsubsection{Axis Q1922 thermal camera}
The Axis Q1922 is a thermal network camera designed for indoor use. The camera detects thermal radiation and displays relative temperature variations as a video stream. The video stream has a resolution of 640x480 pixels, displayed at a frame rate of 30 FPS. The video stream can be delivered in both H.264 and MJPEG compression formats.
\section{Software}
\subsection{Image processing}
\subsection{Publisher software}
\subsection{ROS}
\chapter*{Implementation}
From Matthias: "Implementation details with walk-through of selected source code. This description should try to make use of the concepts and terminology taught in the course."
\\NOTICE: THE FOLLOWING CHAPTER CONTAINS MATERIAL COPIED FROM THE P1 PROJECT. IT HAS TO BE CONSIDERED WHETHER OR NOT IT SHOULD BE INCLUDED.
\\Even the sections that are relevant to programming may need to be trimmed down, so they only contain information relevant for the programming exam.
\subsection{Thresholding}
\label{Thresholding}
Thresholding is a method in image processing. It is usually applied to grayscale images, where a threshold T is chosen: a pixel value between 0 and 255 (grayscale), where 0 is black and 255 is white. Every pixel with a value above T becomes part of the object, while the pixels with values below T become background. By using this method, the image will contain blobs, and these can be identified as humans or other heat-emitting objects, depending on the size of the blob. Imagine a thermal camera filming in grayscale: the heat-emitting objects in the image will glow bright white, while most of the inanimate things, such as tables, chairs etc., will be almost invisible. When every heat-emitting object is much brighter than everything else, the threshold pixel value can be found quite easily. There are many different ways to threshold; one of the most used methods is called binary thresholding. In binary thresholding, the pixels have only two possible states, 0 or 255, black or white, which essentially creates a binary image containing only 1s and 0s. The method can be further explained through this piece of pseudocode:
\begin{verbatim}
if (pixVal < threshold) {
    pixVal = 0;
}
else {
    pixVal = maxVal;
}
\end{verbatim}
It is a simple method for segmenting heat-emitting objects, such as people, from the surroundings. An example of this can be seen in the two subfigures of figure (\ref{ThermalThresh}):
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.4\linewidth]{martin}
\caption{A thermal picture taken with an Axis Q1922 thermal camera.}
\label{fig:sub1.1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.4\linewidth]{martin1}
\caption{The binary thresholded picture, also known as a binary image.}
\label{fig:sub2.2}
\end{subfigure}
\caption{Demonstration of the method of thresholding}
\label{ThermalThresh}
\end{figure}
In the C++ library OpenCV (Open Source Computer Vision) there is a command for thresholding. It can be used with 5 different types of thresholds\cite{threshold}, and it takes the following parameters (a short usage sketch follows the list):
\begin{itemize}
\item \textbf{src}: The command needs a source image or array.
\item \textbf{dst}: Is the output image or array.
\item \textbf{thresh}: Is the threshold, which can be set to a value between 0 and 255.
\item \textbf{maxVal}: Is the maximum value that a pixel can have in the array and is set to a value between 0 and 255.
\item \textbf{type}: The type of threshold. There are 5 different types; the one used in figure (\ref{ThermalThresh}) is THRESH{\_}BINARY, which converts the image into a binary image.
\end{itemize}
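As a usage sketch, the following minimal C++ program applies the threshold command to a grayscale image. It is not the project's actual source code; the file names and the threshold value of 128 are illustrative assumptions.
\begin{verbatim}
// Minimal sketch of binary thresholding with OpenCV in C++.
// The file names and the threshold value 128 are illustrative.
#include <opencv2/opencv.hpp>

int main() {
    // src: load the thermal image as a single-channel grayscale image
    cv::Mat src = cv::imread("thermal.png", cv::IMREAD_GRAYSCALE);
    if (src.empty()) return 1;

    cv::Mat dst;
    // thresh = 128, maxVal = 255, type = THRESH_BINARY:
    // pixels above 128 become 255 (white), the rest become 0 (black).
    cv::threshold(src, dst, 128, 255, cv::THRESH_BINARY);

    cv::imwrite("thermal_binary.png", dst);
    return 0;
}
\end{verbatim}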
While it is easy for a human to identify objects in a binary image, it is more difficult for a robot to do so. In order for the robot to interpret the blobs as objects, they need an outline or contour. In the next section, the method of finding the contours of a blob is described.
\subsection{Tracing contours algorithm and the findContours command}
\label{Tracing contours algorithm}
In this section, the way of finding the outline of a white blob in a binary image is described. There are many different ways to find the outline of a blob of pixels in a binary image. This example algorithm is based on the contour tracing method called "Moore-neighbor tracing"\cite{Moore}. Before describing the algorithm, the concept of the 'Moore neighborhood' of a pixel has to be explained. The Moore neighborhood of a pixel is shown in figure (\ref{moore}):
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{moore}
\caption{Illustration of the Moore neighborhood of a pixel \textbf{P}.}
\label{moore}
\end{figure}
The Moore neighborhood of a pixel \textbf{P} is the set of all pixels surrounding \textbf{P}; these pixels can be called the 'neighbors' of \textbf{P}. In this example, the algorithm looks for white blobs and therefore for white pixels in the binary image.
The Moore-neighbor algorithm starts by scanning the binary image, which is a matrix, from the leftmost column, scanning it from bottom to top and then moving to the next column, until it finds a white pixel.\vspace{0.5cm}
\\This white pixel becomes the starting point of the contour. The starting pixel, or any other white pixel that is hit, is called \textbf{P}. At the starting pixel, the algorithm begins by visiting the neighbor pixels to check whether any of them are white. It does this by starting at one pixel, for example \textbf{P1} in figure (\ref{moore}), and then going around either clockwise or counter-clockwise. If the algorithm hits a white pixel while going around the 8 neighbors, it terminates the procedure and moves to the newly found white pixel. This new white pixel becomes the new \textbf{P}, whose own neighborhood is then checked in the same way; if a new white pixel is hit, the algorithm moves to it and repeats the procedure. The algorithm continues until it hits the starting pixel again, at which point it terminates. The contour is then the trail of visited white pixels.
%\newpage
This is just one of many algorithms for finding the contours of blobs in a binary image. In the following sections, a function called findContours will appear; it is a predefined command in the C++ library OpenCV (Open Source Computer Vision). The command findContours\cite{findContours} finds the contours of a binary image. It takes the following parameters, some of which are optional (a short usage sketch follows the list):
\begin{itemize}
\item \textbf{Image}: findContours needs a source image; this has to be an 8-bit single-channel image, i.e. a binary image.
\item \textbf{Contours}: The detected contours, stored as vectors of points.
\item \textbf{Hierarchy}: Optional output vector that tells which contours are inside each other and which are outside, i.e. which level of the hierarchy they belong to.
\item \textbf{Mode}: The retrieval mode, which tells the function which contours to retrieve and how they are organized when retrieved.
\item \textbf{Method}: The contour approximation method, i.e. whether all contour points or only a few are put into the contours vector.
\item \textbf{Offset}: Optional parameter that shifts all the contour points by a given offset.
\end{itemize}
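As with threshold, a short usage sketch may help. The following minimal program is not the project's actual source code; the file names are illustrative, and the retrieval mode and approximation method are example choices.
\begin{verbatim}
// Minimal sketch: finding and drawing contours of a binary image.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Image: the 8-bit single-channel (binary) source image
    cv::Mat binary = cv::imread("thermal_binary.png", cv::IMREAD_GRAYSCALE);
    if (binary.empty()) return 1;

    std::vector<std::vector<cv::Point> > contours; // Contours
    std::vector<cv::Vec4i> hierarchy;              // Hierarchy
    // Mode = RETR_EXTERNAL (outer contours only),
    // Method = CHAIN_APPROX_SIMPLE (store only a few points per contour)
    cv::findContours(binary, contours, hierarchy,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Draw all contours (index -1) in red on a colour copy of the input.
    cv::Mat display;
    cv::cvtColor(binary, display, cv::COLOR_GRAY2BGR);
    cv::drawContours(display, contours, -1, cv::Scalar(0, 0, 255), 2);
    cv::imwrite("contours.png", display);
    return 0;
}
\end{verbatim}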
Here is an example image of the findContours command from OpenCV applied to the thresholded picture in figure (\ref{ThermalThresh}):
\begin{figure}[H]
\centering
\includegraphics[scale=0.50]{ContourImage}
\caption{Contours drawn from the thresholded/binary image from figure (\ref{ThermalThresh})}
\label{ContourPic}
\end{figure}
\subsection{Object moment in OpenCV}
\label{Objectmoment}
In this subsection, the OpenCV function 'moments'\cite{moments} is explained. Among the many functions in the C++ OpenCV library is the moments command. It can do several things, such as finding the center of mass of an object or its central moments. The function can only be used if there is a defined object in the image; such a defined object could, for example, be a contour. The command can thus find the different moments of a contour, which can be used for tracking the contour's movement, since the moments yield its (x, y) coordinates.
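To make this concrete, the following is a minimal sketch of how a contour's centroid could be computed from its moments; the helper function is illustrative and not part of the project's source code. The centroid is the ratio of the spatial moments, (m10/m00, m01/m00).
\begin{verbatim}
// Sketch: computing the centroid (center of mass) of a contour
// from its spatial moments m00, m10 and m01.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Point contourCentroid(const std::vector<cv::Point>& contour) {
    cv::Moments m = cv::moments(contour);
    if (m.m00 == 0) return cv::Point(-1, -1); // degenerate contour
    // Centroid: (m10 / m00, m01 / m00)
    return cv::Point(static_cast<int>(m.m10 / m.m00),
                     static_cast<int>(m.m01 / m.m00));
}
\end{verbatim}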
\section{Program overview}
This section gives an overview of the concept of each program. The two programs whose concepts are described are the thermal follower program and the obstacle avoidance capability. \vspace{0.5cm}
\subsection{Thermalfollower}
The thermal follower is a program that is supposed to follow a human once one has been found. For this to be done, the program needs to identify the human as an object in itself. The method for doing so is described in this subsection. First of all, the program needs to be able to extract the thermal characteristics of a human into an object that it can monitor. The set of instructions for doing this is the following: get the thermal characteristics, threshold the image, and create a contour around the blob of white pixels. This can be made into a simple sketch showing the process from raw footage to an object defined by contour lines (\ref{sketchProcessthermalfollower}):
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{concept}
\caption{A sketch over the image processing in the program: Thermal follower}
\label{sketchProcessthermalfollower}
\end{figure}
As seen in figure (\ref{sketchProcessthermalfollower}), the first part of the image processing is to threshold the incoming video from the camera. It does this by using the OpenCV command: \begin{verbatim}threshold(param1, param2, param3, param4, param5);\end{verbatim} The description of the method can be found in subsection (\ref{Thresholding}), along with the description of each of the 5 parameters. When the image has been converted to a binary image, it can be used as the source image for the findContours function, which draws a red line around all the white blobs in the binary image. To decrease the number of objects it sees, a sorting mechanism is needed. There are many ways to do so; one of the simplest is an if-statement requiring the contours to have a certain size before they are drawn and used further in the code. For finding the areas of the contours, another OpenCV command is needed, called 'contourArea'\cite{contourArea}, which takes the contour as input and computes its area. A piece of pseudocode explains this further:
\begin{verbatim}
area = contourArea(contourVector);
if (area > x) {
    drawContours();
}
\end{verbatim}
The threshold area is called 'x' in this pseudocode because it is not known in advance how big this value has to be, and it may vary depending on the distance to the object. Once the contours of the relevant objects have been drawn, they need to be tracked. This can be done by using the OpenCV command 'moments' to find the central moment of each contour. After finding the central moment, an (x, y) coordinate is acquired for each of the drawn contours. These are used in the next part of the program, which is about creating a ROI (Region Of Interest) and keeping the central moments of the contours inside the ROI. The next sketch for the thermal follower, which revolves around the movement of the turtlebot and keeping the central moments inside the ROI, is the following (\ref{sketchProcessthermalfollower2}):
\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{Sketchoffollower}
\caption{A sketch of the part of the program that follows the contours with the use of moments.}
\label{sketchProcessthermalfollower2}
\end{figure}
The ROI can be created with simple if-statements. There has to be an if-statement for each side of the window: left, right, up and down. This can be shown through a simple piece of pseudocode:
\begin{verbatim}
if (540 < x) {
    TurtlebotMove();
}
\end{verbatim}
So if the central moment of a contour exceeds the boundary of 540 pixels on the x-axis, the turtlebot will move in a certain way to get the moment back into the ROI. Note that this has to be done for each side of the screen, as seen in figure (\ref{ROI}):
\begin{figure}[H]
\hspace{-5em}
\includegraphics[scale=0.65]{ROI}
\caption{Illustration of the ROI shown inside the window that displays the live feed of contours. }
\label{ROI}
\end{figure}
In figure (\ref{ROI}), the ROI is marked in red, the window displaying the contour live feed is marked in gray, and the central moment of a contour is marked in green. If the green dot moves out of the red area, also known as the ROI, the turtlebot is supposed to execute a set of movements to place the green dot inside the ROI again. A minimal sketch of such a four-sided check is given below.
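The following is an illustrative sketch rather than the project's actual code: the movement helpers are hypothetical placeholders, and the 640x480 window with 100-pixel margins is an assumption (only the 540-pixel right boundary is given above).
\begin{verbatim}
// Sketch of a four-sided ROI check for an assumed 640x480 window.
// Boundary values and movement helpers are illustrative placeholders.
#include <opencv2/opencv.hpp>

// Hypothetical movement helpers (implemented elsewhere in a real program):
void TurtlebotMoveRight();
void TurtlebotMoveLeft();
void TurtlebotMoveForward();
void TurtlebotMoveBackward();

void keepCentroidInROI(const cv::Point& c) {
    const int left = 100, right = 540, top = 100, bottom = 380;
    if (c.x > right) {
        TurtlebotMoveRight();     // moment left the ROI on the right
    } else if (c.x < left) {
        TurtlebotMoveLeft();      // moment left the ROI on the left
    } else if (c.y < top) {
        TurtlebotMoveForward();   // moment left the ROI at the top
    } else if (c.y > bottom) {
        TurtlebotMoveBackward();  // moment left the ROI at the bottom
    }
}
\end{verbatim}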
This is the concept of the thermal follower program; the real C++ code and its implementation in ROS are described in the chapter Implementation and Testing (\ref{implementation_testing}).
\subsection{Obstacle avoidance}
\label{obstacle_avoidance}
In order to demonstrate the obstacle avoidance ability of the turtlebot, the use of several ROS nodes and packages is required (\ref{Node} \& \ref{Packages}).
In addition to the main operation nodes, a special collection of nodes needs to be launched as well:
\begin{verbatim}
roslaunch turtlebot_navigation amcl_demo.launch map_file:=map_file.yaml
\end{verbatim}
Various nodes will be launched as a result, including nodes for operating the Xtion Pro camera, which is used for making and navigating the map.
This allows the turtlebot to load a previously created map (\ref{map_rviz}) into memory.
The next step is to launch the rviz visualization program, by launching the following collection of nodes:
\begin{verbatim}
roslaunch turtlebot_rviz_launchers view_navigation.launch --screen
\end{verbatim}
The rviz program will display the map and the turtlebot's position (\ref{Rviz}).
For optimal functionality, since the turtlebot cannot accurately know its initial position on the map, an estimate of the position is given using the rviz function "2D Pose Estimate".
After that, by using "2D Nav Goal", a point on the map can be set, and the turtlebot will receive a velocity message consisting of linear and angular velocities. As a result, the turtlebot will navigate to the set destination while plotting its projected path in rviz and avoiding any new obstacles that were not initially on the map. The turtlebot moves a set distance in a direction while it keeps scanning for things in front of it and, if an obstacle is detected, finds a way around it. When the turtlebot reaches its destination, it will display a message in the terminal window running the program responsible for map navigation.
If the turtlebot encounters any errors or obstacles it cannot avoid, it is possible to transfer control of its movement to a computer.
This is done by having a remote host computer launch the following command:
\begin{verbatim}
roslaunch turtlebot_teleop keyboard_teleop.launch
\end{verbatim}
The process of the turtlebot navigating a map while avoiding obstacles can be seen in the recorded video \cite{Testvid}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{BLARGHK}
\caption{Example of scanned map in Rviz\cite{BLARGHK2}}
\label{Rviz}
\end{figure}
Through the use of this map, it is then possible to point out a destination to which a path is calculated, with any known obstacles considered, in order to make the best path to the given point.
\chapter{Implementation and testing}
\label{implementation_testing}
In the following chapter, the acquired hardware will be implemented in a working prototype, which will be used to perform tests of the system. As explained in the discussion and delimitation chapter (\ref{discussion and delimitation}), this chapter is limited to the functions of identifying people in the water and avoiding obstacles. This means that the prototype will have to imitate these functions in order to assess the acquired hardware and evaluate how it fits the requirements stated in (\ref{finalprob}).
\section{Implementation of algorithms in C++ and ROS}
This chapter documents the implementation and testing of the programs for the turtlebot. It contains the following sections: 'C++ code and algorithms', 'Description of ROS and implementation' and 'Construction of the test setup and testing'.
\newpage
\subsection{C++ code and algorithms}
\label{c++ code}
In this subsection, the flowcharts for the thermal follower program are displayed. They can be followed while looking through the commented source code for the thermal follower program, which can be found in the appendix.
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.7]{Flow1}
\caption{Part 1 of the flowchart.}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.7]{Flow2}
\caption{Part 2 of the flowchart.}
\label{fig:sub2}
\end{subfigure}
\caption{The two first parts of the flowchart.}
\label{Flow}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.7]{Flow3}
\caption{Part 3 of the flowchart.}
\label{fig:sub3}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.7]{Flow4}
\caption{Part 4 of the flowchart.}
\label{fig:sub4}
\end{subfigure}
\caption{The two last parts of the flowchart.}
\label{Flow2}
\end{figure}
The implementation of the C++ code is done; now the code needs to be implemented in ROS so that the robot can be controlled with it.
\subsection{Description of ROS and implementation}
\label{Node}
%Thomas
ROS stands for Robot Operating System, and is classified as 'middleware'\cite{ROSstuff}. One of the key features of ROS is handling messages between individual programs, also known as 'nodes' in the ROS environment\cite{ROSTopics}.
Figure \ref{fig:my_label2} is an example of this communication between nodes.
\begin{figure}[H]
\hspace{-1em}
\includegraphics[scale=0.8]{rostopic1}
\caption{Node interaction in ROS, shown by using the ROS tool rqt\_graph\cite{ROSTopics}}
\label{fig:my_label2}
\end{figure}
The figure is created by a ROS tool called rqt\_graph, which shows all running nodes and their interconnections. A node is an executable file within ROS, and nodes communicate with each other over ROS topics. In the example in figure \ref{fig:my_label2}, two nodes are running on ROS, and each is marked with a circle. The arrows indicate the communication between the nodes. The /teleop\_turtle node is publishing information to the /turtle1/command\_velocity topic, while the /turtlesim node is subscribing to that topic to receive data. As a result of this setup, all the data transmitted to the ROS topic is received by the /turtlesim node, which will perform operations based on the data received. Topics have a certain data type. The data type of /turtle1/command\_velocity is geometry\_msgs/Twist, which means that it expects velocity data (linear and angular).
\\Therefore, the /teleop\_turtle node sends velocity commands to the /turtlesim node, and as a result the turtle on the screen moves.
One of the strengths of handling the communication this way is that programs written in different languages can easily be made to communicate. For example, it is possible to have an image processing program written in C++ that communicates with a Python script made to control the motors of a robot. For the purpose of this project, only C++ code is used.
Whenever running a ROS program, a main collection of nodes has to be launched first in order for everything to function properly, as can be seen in the video. For the scope of this project, the minimal.launch collection is used to launch these nodes.
More about ROS in general can be found at \url{http://www.ros.org/core-components/}.
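To make the publisher/subscriber pattern concrete, here is a minimal sketch of a roscpp publisher node. It is not part of the project's source code; the node name, the topic name "direction" and the message contents are illustrative assumptions.
\begin{verbatim}
// Minimal sketch of a roscpp publisher node.
// Node name, topic name and message contents are illustrative.
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "detection_node");
    ros::NodeHandle nh;
    // Advertise a topic called "direction" with a queue size of 10.
    ros::Publisher pub = nh.advertise<std_msgs::String>("direction", 10);

    ros::Rate rate(10); // publish at 10 Hz
    while (ros::ok()) {
        std_msgs::String msg;
        msg.data = "left"; // e.g. the direction the object left the ROI
        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}
\end{verbatim}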
\subsubsection{Interactions between nodes}
Since we expect the finished software product to use a multitude of ROS nodes, we have separated them into collections based on their function, as shown in figure \ref{fig:my_labe2}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{nodeinteraction}
\caption{The interactions of the program nodes}
\label{fig:my_labe2}
\end{figure}
\paragraph{Map}
\label{map_rviz}
The Map collection\cite{Map} handles the mapping of the environment in which the robot operates. This is done by using both the Xtion Pro Live depth sensor and the odometry data from the motor base of the turtlebot as input.
\\The procedure of generating a map requires the program "turtlebot\_navigation gmapping\_demo.launch" (which launches a selection of nodes with special parameters) and a teleoperation node to navigate the turtlebot around the area that needs to be mapped. After this is done, the generated map can be saved locally for future use.
The entire mapping process can be visualized in Rviz, which is a user interface showing the map and its data.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{empty}
\caption{This image shows the empty map in Rviz }
\label{empty_map}
\end{figure}
In figure \ref{empty_map}, the mapping has just started. The only part that has been mapped is right in front of the sensor. As can be seen in the panel to the left of the image, the mapping program uses a variety of sensors to create the map, and each sensor is assigned a unique colour, differentiated on the map. In this case, the blue colour is associated with the planner.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{finished}
\caption{This shows the finished map in Rviz}
\label{finished_map}
\end{figure}
Figure \ref{finished_map} is an example of how a finished map looks in Rviz. It is possible to make out the outer edges of the room that has been mapped, as well as the obstacles that were within the depth sensor's field of view.
After the map is created, it is saved for later use and can be used for navigation.
\paragraph{Detection}
In the detection phase, the thermal camera is used to identify objects above a certain threshold of relative temperature. As explained in (\ref{Tracing contours algorithm}), the outline of the shape is detected and later used to determine whether or not the object is leaving the region of interest (ROI). When the object leaves the ROI, the detection program publishes information indicating in which direction the object left the ROI.
\paragraph{Thermal follower}
The thermal follower subscribes to information regarding the direction in which the object is leaving the ROI.
If it receives information from the detection phase that the object is leaving the ROI toward the top of the image, the follower node will engage the motor base and move the robot in the same direction.
This is used to ensure that the whole target has been identified, and not just a part of it.
\\For example: the thermal camera, mounted on a flying drone, detects a target, but it cannot send the information to the part responsible for identifying whether the target is human, as the image may not be complete and the result would be inaccurate. Therefore the drone moves towards the object until the detection phase reports a valid target. A minimal sketch of the subscribing side is given below.
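The following is an illustrative sketch rather than the project's actual source code; the topic names, the node name and the velocity values are assumptions. It subscribes to the hypothetical "direction" topic (as published by the sketch in the previous section) and turns each message into a geometry\_msgs/Twist velocity command.
\begin{verbatim}
// Sketch of the follower side: subscribe to the direction messages and
// translate them into velocity commands for the motor base.
// Topic names and speeds are illustrative assumptions.
#include <ros/ros.h>
#include <std_msgs/String.h>
#include <geometry_msgs/Twist.h>

ros::Publisher cmd_pub;

void directionCallback(const std_msgs::String::ConstPtr& msg) {
    geometry_msgs::Twist cmd;
    if (msg->data == "up") {
        cmd.linear.x = 0.2;   // drive forward
    } else if (msg->data == "left") {
        cmd.angular.z = 0.5;  // rotate left
    } else if (msg->data == "right") {
        cmd.angular.z = -0.5; // rotate right
    }
    cmd_pub.publish(cmd);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "follower_node");
    ros::NodeHandle nh;
    cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 10);
    ros::Subscriber sub = nh.subscribe("direction", 10, directionCallback);
    ros::spin(); // hand control to ROS; the callback runs per message
    return 0;
}
\end{verbatim}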
\paragraph{Autonomous navigation}
The Autonomous Navigation collection covers the functions of navigating an area (such as the harbour) without a pilot controlling the robot. It loads a previously created map, which the robot uses to estimate its position as well as for route planning.
The envisioned functionality is the robot mapping the area and then navigating it, making decisions based on the input from the thermal camera. For example: a drone flies along the Limfjord, back and forth on a predetermined route, with a thermal camera pointing towards the water. If the "Thermal follower" collection reports that a person has been identified, an alert is sent to the pilot or directly to the emergency services.
\subsubsection{Using the code in ROS}
\label{Packages}
ROS programs, called packages, need a workspace in order to function. A workspace is a special directory which contains the source code of the program (which can be written in a programming language like C++ or Python), as well as instructions on how to make it executable.
ROS is able to work with multiple workspaces at once, but in this project only two are used. In order to start using ROS, a workspace is needed first.
\\Next, an example will be shown of how the package responsible for running the thermal camera was created and later used.
\\In order to use ROS, the operating system needs to know where its packages are located. In Ubuntu Linux this is done by adding the following line to the .bashrc file, located in the user's home directory:
\begin{verbatim}
source /opt/ros/indigo/setup.bash
\end{verbatim}
This is the default installation directory for ROS, and it is also its main workspace. When making other packages it is recommended to create another workspace, which can be done with the following commands passed to the Ubuntu terminal:
\begin{verbatim}
$ mkdir -p ~/thermal/src
$ cd ~/thermal/src
$ catkin_init_workspace
\end{verbatim}
These commands will create a new workspace named "thermal" with a subdirectory called "src". The last command tells the operating system that this folder is a ROS workspace.
(The \$ symbol denotes the shell prompt, so it should not be typed in.)
Following that, the workspace is "built" in order to initialize all the needed files:
\begin{verbatim}
$ cd ~/thermal/
$ catkin_make
\end{verbatim}
The catkin{\_}make command is a tool for building catkin workspaces (which ROS uses). After running the above commands, two more folders will appear, named "build" and "devel".
\\In order to use this workspace efficiently, the .bashrc file needs to be updated with another line:
\begin{verbatim}
source ~/thermal/devel/setup.bash
\end{verbatim}
At this moment the workspace has been created and properly set up. Next, a package will be created to hold the source code for the thermal camera operation, using the following commands:
\begin{verbatim}
$ cd ~/thermal/src
$ catkin_create_pkg vision roscpp geometry_msgs
\end{verbatim}
As a result, the package named "vision" will be created, and the roscpp and geometry\_msgs packages are passed as dependencies (required for the package to function correctly). A "vision" folder will also be created, with the path \textasciitilde/thermal/src/vision. The resulting workspace layout is sketched below.
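For orientation, the resulting directory layout should look roughly as follows (a sketch based on the steps above; only the folders mentioned are shown):
\begin{verbatim}
thermal/                  <- the catkin workspace
    build/                <- generated by catkin_make
    devel/                <- generated by catkin_make (contains setup.bash)
    src/
        CMakeLists.txt    <- created by catkin_init_workspace
        vision/           <- the package created above
            CMakeLists.txt
            package.xml
            src/          <- the C++ source files go here
\end{verbatim}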
In order to compile the source code correctly, the CMakeLists.txt file, located in the vision folder, needs to be modified to locate and use all the necessary libraries. For the following example, the contents of the CMakeLists.txt file are:
\begin{verbatim}
cmake_minimum_required(VERSION 2.8.3)
project(vision)
find_package(catkin REQUIRED COMPONENTS
geometry_msgs
roscpp
)
find_package(OpenCV REQUIRED)
message("---- OpenCV ----")
message(${OpenCV_CONFIG})
include_directories(
${catkin_INCLUDE_DIRS}
${OpenCV_INCLUDE_DIRS}
)
add_executable(vision_node src/thermalfollowerforturtlesim.cpp)
target_link_libraries(vision_node
${catkin_LIBRARIES}
${OpenCV_LIBS}
)
\end{verbatim}
It can be seen that the file contains the project name and the required dependency packages (specified at package creation), as well as the required directories and libraries.
Once this is done, the file containing the C++ source code explained previously (\ref{c++ code}) is copied into the \textasciitilde/thermal/src/vision/src/ folder, and the package is built with the following commands:
\begin{verbatim}
$ cd ~/thermal/
$ catkin_make
\end{verbatim}
In a new terminal (in order to allow all the changes to take effect), once the main nodes have been launched, as can be seen in the video,
%reference video%
the created package can be launched by issuing the following command:
\begin{verbatim}
$ rosrun vision vision_node
\end{verbatim}
"rosrun" is the command used to launch a ROS package
\\"vision" is the name of the package we created above
\\"vision{\_}node" is the name of the node belonging to that package
The command will launch the "vision" package and execute the source code, the functionality of which can be seen in the demonstration video\cite{Testvid}
\section*{Link to ALL source code}
From Matthias: "The full source code must be handed in though a source code management system preferably GitHub."
The source code can be found at the following git repository.
\\Source code:
\href{https://github.com/B262a/TurtleBot.git}{Press here}
\\Or by pasting the following link:
\begin{verbatim}
https://github.com/B262a/TurtleBot.git
\end{verbatim}
\chapter{Appendix}
%As we only have 10 pages (not including appendix) all figures / images / illustrations should be placed in the appendix, and referenced to.
\end{document}