
ROS + Conda (mambaforge) for all-in-one environment

Motivations

Some autodiff libraries such as torch or tensorflow are best used inside a conda environment. Moreover, most modern Python libraries need to be installed on Python 3. This guide introduces how to install all the required ROS libraries and the project-specific Python libraries in the same environment, for real-world robot experiments. Currently, the Panda arm is well supported in RobotStack; hopefully other robot platforms such as Tiago will be supported in the future.

Tested specs: Ubuntu 20.04 + ROS Noetic + Franka Panda


3. Regression with Decision Tree and kNN

Decision Tree Classification vs Regression

Definitions

Decision trees used in data mining are of two main types[1]:

  • Decision Tree Classifier predicts discrete label outcomes associated with data.
  • Decision Tree Regressor predicts real/continuous outcome numbers (e.g. the price of a house, or a patient's length of stay in a hospital).
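Since the section also covers kNN regression, a minimal pure-Python sketch may help illustrate the regression idea: predict a continuous target as the average target of the k nearest training points. The function name and the toy data below are illustrative, not taken from the assignment.

```python
import math

def knn_regress(train, query, k=3):
    """Predict a continuous value for `query` as the mean target of
    the k nearest training points (Euclidean distance)."""
    # train: list of (feature_vector, target) pairs
    by_dist = sorted(train, key=lambda pair: math.dist(pair[0], query))
    neighbors = by_dist[:k]
    return sum(target for _, target in neighbors) / len(neighbors)

# Toy 1-D data: target is roughly 2 * x
data = [([0.0], 0.1), ([1.0], 2.2), ([2.0], 3.9), ([3.0], 6.1), ([4.0], 8.0)]
prediction = knn_regress(data, [2.5], k=2)  # averages the targets of x=2.0 and x=3.0
```

With a Decision Tree Regressor the prediction would instead be the mean target of the training points falling into the same leaf; kNN replaces the learned partition with a distance-based neighborhood.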

Google Season of Docs (GSoD) 2019: RoboComp’s basic components

Description

As quoted on RoboComp GSoD website,

RoboComp is an open-source Robotics framework providing the tools to create and modify software components that communicate through public interfaces. The Components may require, subscribe, implement, or publish interfaces in a seamless way.

The robocomp repo includes a wide range of components (maintained in a smaller repo named robocomp-robolab) for different robotic applications such as motor control, localization and mapping, navigation, recognition, etc. However, most of the components in the robocomp-robolab repo currently lack detailed instructions on how to compile them and how to use them with different parameter configurations. This creates a huge obstacle for new developers who want to use the components in their projects or contribute to the framework. The reason is that many components (e.g. hokuyoComp) are wrappers of external drivers or libraries having…


Keybase proof

I hereby claim:

  • I am anindex on github.
  • I am anindex11 (https://keybase.io/anindex11) on keybase.
  • I have a public key ASCVBjfGN1l76zMNBJTTTnhkpWuEn0WXL6X8LAFXJjLwqgo

To claim this, I am signing this object:

anindex / ros_ws.md (last active Jun 5, 2019)

Setting up ROS workspace with catkin tools and wstool
mkdir -p ~/<name>/src
cd ~/<name>
catkin init
catkin config --extend /opt/ros/<distro>

Flexible perception pipeline manipulation for RoboSherlock (GSOC 2018)

Introduction

This project enables RoboSherlock to handle multiple parallel annotator processing branches while preserving RoboSherlock's original capability of linear execution. It enhances RoboSherlock's flexibility to handle concurrent or sequential processes, thereby improving overall execution performance, scalability, and error handling. The project was done to fulfill the requirements of Google Summer of Code 2018 with IAI, University of Bremen.

The project has been merged into the master branch of RoboSherlock. The API is simple to use: a parallel flag in the ROS launch file indicates whether the pipeline should be executed in parallel. However, it requires the new dependency KnowRob to compile and run correctly.

Pull request: [Link]

Achievements


GSOC2018 First milestone progress report

Flexible perception pipeline manipulation for RoboSherlock

Current status

The project is on track, as required by the scope of RoboSherlock for this year's GSoC.

Two things have been completed:

  • Implemented RSParallelPipelinePlanner, which marks execution orderings of annotators based on their required inputs and outputs.
  • Examined the UIMACPP code repository to gain a deep understanding of how an AnalysisEngine execution calls the annotators' process methods.

The implementation so far does not require any new dependencies.
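The ordering idea can be sketched as a topological grouping of annotators by their required inputs and produced outputs. This is purely illustrative — it is not the actual RSParallelPipelinePlanner code, and the annotator names below are made up:

```python
def plan_stages(annotators):
    """Group annotators into stages: every annotator in a stage depends only
    on outputs produced by earlier stages, so annotators within one stage
    can safely run in parallel.
    annotators: dict name -> (set_of_required_inputs, set_of_outputs)
    """
    produced, remaining, stages = set(), dict(annotators), []
    while remaining:
        # An annotator is ready once all of its required inputs exist
        ready = [name for name, (ins, _) in remaining.items() if ins <= produced]
        if not ready:
            raise ValueError("cyclic or unsatisfiable dependencies")
        stages.append(sorted(ready))
        for name in ready:
            produced |= remaining.pop(name)[1]
    return stages

# Hypothetical pipeline: normals and planes both need the image,
# clustering needs both of their results.
pipeline = {
    "ImagePreprocessor": (set(), {"image"}),
    "NormalEstimator":   ({"image"}, {"normals"}),
    "PlaneAnnotator":    ({"image"}, {"planes"}),
    "ClusterAnnotator":  ({"normals", "planes"}, {"clusters"}),
}
stages = plan_stages(pipeline)  # NormalEstimator and PlaneAnnotator share a stage
```

Annotators within the same stage have no data dependency on each other, which is exactly the property a parallel executor needs.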


Autonomous Mobile Robot (Experimental)

Introduction

The aim of this project is to create a mobile robot that can navigate correctly in an indoor environment. The robot can scan and save its surrounding environment as an occupancy grid; based on that map, it can then localize and navigate itself. With these capabilities, the robot can plan a path from A to B on the global (scanned) map as instructed via Rviz, and it can also detect unknown obstacles during operation and plan a local path to robustly avoid them, ensuring the success of the navigation task.

Link to github: https://github.com/anindex/navigation-processing-base
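As a rough illustration of the global-planning step on an occupancy grid, here is a minimal breadth-first-search sketch — not the project's actual planner, and the map, cells, and function name are all hypothetical:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest path on a 2-D occupancy grid via breadth-first search.
    grid: list of rows, 0 = free cell, 1 = occupied cell.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links backwards to reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable

# Toy map: the middle row is blocked except for the rightmost column
occupancy = [[0, 0, 0],
             [1, 1, 0],
             [0, 0, 0]]
route = plan_path(occupancy, (0, 0), (2, 0))  # detours through the right column
```

A real stack would plan on a costmap with inflated obstacles and replan locally when the sensors detect something new, but the grid-search core is the same idea.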

Building the robot (Hardware part)