@bshambaugh
Created February 24, 2023 23:17
Axera Tech ax-pipeline
Translation of: https://github.com/AXERA-TECH/ax-pipeline
Update Log
2023-01-16 Added YoloV8, please refer to ModelZoo for details.
2022-12-29 Added sample_v4l2_user_ivps_joint_vo for direct input of NV12 data, for users who already have NV12 data available. Modified the values of the enum class SAMPLE_RUN_JOINT_MODEL_TYPE to accommodate the growing list of supported models.
2022-12-26 Added sample code for RTSP input: sample_rtsp_ivps_joint_vo
2022-12-26 Added face recognition
2022-12-16 Added a sample for USB camera input. Added license plate detection and license plate recognition; please refer to ModelZoo for details.
2022-12-14 Added an H.264 file input pipeline; added yolov7-face and yolov7-palm-hand.
2022-12-09 Added a simplified pipeline build API that reduces the difficulty of building pipelines; please see new_pipeline for details (a hedged sketch appears after this log).
2022-11-29 Added hand detection plus gesture recognition (thanks to FeiGeChuanShu); please check ModelZoo for more details. Made inline model input formats adaptive, so inline models no longer need to share the same input color-space format.
2022-11-24 Added the open-source versions of the Acuity Genius human detection and gesture models.
2022-11-21
Added pp-human-seg portrait segmentation and yolov5s-seg instance segmentation.
Added the secondary inference model hrnet-pose (human pose estimation on the human regions cropped from yolov5s detections). The configuration file's MODEL_TYPE now supports both string and int values; the value corresponds to the SAMPLE_RUN_JOINT_MODEL_TYPE enumeration or to ModelZoo.
New model details are in ModelZoo.
MODEL_PATH has been added to the configuration file and can be set by the user; refer to hrnet_pose.json (see the example config after this log).
2022-11-17 Added the yolov7-tiny and yolox-s detection models. The configuration file gains an int MODEL_TYPE attribute; this value must be set, otherwise the model will not run. The value to set corresponds to the SAMPLE_RUN_JOINT_MODEL_TYPE enumeration or ModelZoo.
2022-11-15 Decoupled sample_run_joint; models for different tasks can now be loaded at the same time, which is convenient for multi-stage inference tasks such as face recognition, human pose, and license plate recognition.
2022-11-14 Added adaptive NV12/RGB/BGR model input: IVPS now outputs the data format the model needs directly, so models from ax-samples can be reused in ax-pipeline as-is (except for yolov5 and yolov5face, the post-processing of other models still needs to be ported by the user).
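As a concrete illustration of the configuration-file keys mentioned in the 2022-11-21 and 2022-11-17 entries, a minimal config might look like the snippet below. This is a sketch modeled on the hrnet_pose.json referenced above, not a copy of it; the enum name and file path shown are hypothetical placeholder values.

```json
{
    "MODEL_TYPE": "MT_MLM_HUMAN_POSE_HRNET",
    "MODEL_PATH": "/opt/models/hrnet_pose.joint"
}
```

Per those entries, MODEL_TYPE may instead be given as the integer value of the matching SAMPLE_RUN_JOINT_MODEL_TYPE enumerator; ModelZoo lists the valid values.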
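The 2022-12-09 entry refers to a simplified pipeline build API documented in new_pipeline. The C++ sketch below only illustrates the general pattern such an API implies (fill a pipeline descriptor, create the pipeline, wait, destroy it); every identifier in it, including pipeline_t, pi_vin, po_vo, create_pipeline, and destroy_pipeline, is an assumption made for illustration, not the confirmed interface, so consult new_pipeline for the real one.

```cpp
// Hypothetical sketch of the simplified pipeline-build pattern; all identifiers
// below are assumptions for illustration -- see docs/new_pipeline for the actual API.
#include <csignal>
#include <cstdio>

#include "pipeline.h" // assumed header exposing the simplified pipeline API

static volatile std::sig_atomic_t g_exit = 0;
static void on_sigint(int) { g_exit = 1; }

int main()
{
    std::signal(SIGINT, on_sigint);

    pipeline_t pipe = {};        // hypothetical pipeline descriptor
    pipe.enable = 1;
    pipe.m_input_type = pi_vin;  // e.g. on-board camera (VIN) as the source
    pipe.m_output_type = po_vo;  // e.g. screen (VO) as the sink

    if (create_pipeline(&pipe) != 0) // hypothetical: wire up VIN/IVPS/NPU/VO and start
    {
        std::printf("create_pipeline failed\n");
        return -1;
    }

    while (!g_exit)
    {
        // frames flow through the pipeline in the background; application code
        // would consume inference results here
    }

    destroy_pipeline(&pipe); // hypothetical: stop the pipeline and release resources
    return 0;
}
```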
Development boards are now supported
AXera-Pi (AX620A)
Quick start
Documentation
Quick compilation: simple cross-platform compilation based on cmake.
How to replace your own trained yolov5 model
How to deploy your own additional models
How to reorient images
ModelZoo: models that are supported or planned, with descriptions of some of them.
Configuration file descriptions
Simplified pipeline build API
Examples
Translated with www.DeepL.com/Translator (free version)