Trung Ngo (rongtuech), Tokyo, Japan
import argparse
import torch
from pytorch_i3d import InceptionI3d

if __name__ == '__main__':
    # Read the path to the trained I3D weights and the output file name.
    parser = argparse.ArgumentParser()
    parser.add_argument('-w', '--weight', type=str, required=True)
    parser.add_argument('-o', '--output_name', type=str, required=True)
    args = parser.parse_args()

import argparse
import torch
from tgcn_model import GCN_muti_att

if __name__ == '__main__':
    # Read the path to the trained GCN weights and the output file name.
    parser = argparse.ArgumentParser()
    parser.add_argument('-w', '--weight', type=str, required=True)
    parser.add_argument('-o', '--output_name', type=str, required=True)
    args = parser.parse_args()
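Both snippets share the same command-line interface and stop right after argument parsing. A minimal demonstration of that interface, using an explicit argument list so it can run without a terminal (the file names here are illustrative, not from the gists):

```python
import argparse

# Same two required arguments as the conversion scripts above.
parser = argparse.ArgumentParser()
parser.add_argument('-w', '--weight', type=str, required=True)
parser.add_argument('-o', '--output_name', type=str, required=True)

# parse_args accepts an explicit argv list, which is handy for testing.
args = parser.parse_args(['-w', 'i3d_checkpoint.pt', '-o', 'converted_model'])
print(args.weight)       # i3d_checkpoint.pt
print(args.output_name)  # converted_model
```

In the actual scripts, `args.weight` would point at a saved checkpoint and `args.output_name` at the file to write.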

GSoC 21 RoboComp project: Sign language recognition.

Introduction:

There are many ways a robot can receive information from humans, such as voice, keyboard, or camera. This project focuses on recognizing human communication through visual features extracted from body and hand actions. The main topic is divided into two parts:

  • Body and hand detection: detect body and hand joints in each image/video frame.
  • Gesture recognition: the sequence of detected body/hand joints is used to recognize sign language. In this step, the library should be extended to recognize word-level gestures.
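The two stages above can be sketched as a pipeline: a pose estimator produces per-frame joint coordinates, and a sequence model maps that joint sequence to a word label. The sketch below is illustrative only (class and parameter names are assumptions, not the project's actual API; a GRU stands in for whatever sequence model the project uses):

```python
import torch
import torch.nn as nn

class JointSequenceClassifier(nn.Module):
    """Illustrative stage-2 model: joint sequences -> word scores."""

    def __init__(self, num_joints=55, coords=2, hidden=64, num_words=100):
        super().__init__()
        # Each frame is flattened into a (num_joints * coords) feature vector.
        self.rnn = nn.GRU(num_joints * coords, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_words)

    def forward(self, joints):
        # joints: (batch, frames, num_joints, coords), e.g. from a pose estimator
        b, t, j, c = joints.shape
        feats = joints.view(b, t, j * c)
        _, last = self.rnn(feats)      # final hidden state summarizes the clip
        return self.head(last[-1])     # (batch, num_words) word scores

# Example: one 30-frame clip with 55 (x, y) joints per frame.
clip = torch.randn(1, 30, 55, 2)
scores = JointSequenceClassifier()(clip)
print(scores.shape)  # torch.Size([1, 100])
```

The word with the highest score would be taken as the recognized sign; in practice the real project's models (I3D on raw video, TGCN on joint graphs) replace this toy classifier.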