- SAM: Segment Anything Model
- TAM: Track Anything Model
- 2024/02/09 Transforming Imagery with AI: Exploring Generative Models and the Segment Anything Model (SAM)
- 2024/02/08 TinySAM: Pushing the Boundaries for Segment Anything Model
- 2024/01/30 How to Train an Instance Segmentation Model with No Training Data - All you need is a bit of computing power
- 2024/01/25 Fast SAM: The Easy Way to Segment Anything
- 2024/01/22 SAM and HQ-SAM: A New Generation of Image Segmentation Models
- 2024/01/22 How to Use the Segment Anything Model (SAM)
- 2024/01/20 How to change clothes with AI (Inpaint Anything)
- 2024/01/18 Instance Segmentation in Computer Vision: A Comprehensive Guide
- 2024/01/16 Instance Segmentation for Tree Species with Low-altitude Drone imagery
- 2024/01/09 Unlocking the Power of Segment Anything Model (SAM) from Meta AI
- 2024/01/05 SAM: Segment Anything Model - Quickly customize your product landing page with SAM
- 2024/01/05 Segment Anything in Python (Code Example included)
- 2024/01/03 An accelerated version of Segment Anything
- 2024/01/02 Guide to Deploying Meta's 'Segment Anything' Model on Salad's Shared GPUs to Save Money
- 2023/12/11 On-device interactive segmentation with EdgeSAM
- 2023/11/22 What is Meta's Segment Anything AI Model and why should you care?
- 2023/11/01 Image Segmentation: Meta AI’s Segment Anything
- 2023/10/31 Segment Anything Model (SAM): Intro, Use Cases, V7 Tutorial
- 2023/07/10 Revolutionizing Object Segmentation: Fast Faster SAM - Efficient and Lightweight Models for Real-Time Object Segmentation on Mobile Devices
- 2023/06/29 AnyLabeling Auto-labeling with MobileSAM - the newest and fastest variant of Segment Anything
- 2023/06/24 Point Cloud Segmentation with SAM in Multi-angles - Enhancement and prototype in multi-directions
- 2023/04/13 How To Fine-Tune Segment Anything
- 2023/04/07 What is Segment Anything Model (SAM)? A Breakdown
- 2023/04/05 Segment Anything
- Meta: Segment Anything
- Ultralytics: Segment Anything Model (SAM)
- Materialise SAM - the 3D Scanning App
- AnyLabeling - Effortless data labeling with AI support from Segment Anything and YOLO models.
- Using Meta’s Segment Anything (SAM) model with YOLOv8 to automatically classify masks
- Segment Anything Model (SAM) – The Complete 2024 Guide
- Semantic vs Instance Segmentation (2024 Update)
- Instance Segmentation: Comprehensive Guide for 2024
- Replace Anything - A simple web application that lets you replace any part of an image with an image generated based on your description.
- Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
- Open-Vocabulary SAM - Segment and Recognize Twenty-Thousand Classes Interactively - SAM + CLIP
- ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields
- RAP-SAM: Towards Real-Time All-Purpose Segment Anything
- SAMPro3D: Locating SAM Prompts in 3D for Zero-Shot Scene Segmentation
- EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM
- 2024 EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
- 2024 ClickSAM: Fine-tuning Segment Anything Model using click prompts for ultrasound image segmentation
- 2024 Segment Anything in 3D Gaussians
- 2024 EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
- 2024 Segment Anything Model for Medical Image Segmentation: Current Applications and Future Directions
- 2024 Open-Vocabulary SAM: Segment and Recognize Twenty-thousand Classes Interactively
- 2024 BA-SAM: Scalable Bias-Mode Attention Mask for Segment Anything Model
- 2024 Depth Anything in Medical Images: A Comparative Study
- 2024 Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
- 2024 Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks
- 2024 Hi-SAM: Marrying Segment Anything Model for Hierarchical Text Segmentation
- 2024 Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model's Generalizability in Permafrost Mapping
- 2024 Personalize Segment Anything Model with One Shot - ICLR 2024
- 2024 SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding - ICLR 2024
- 2024 UV-SAM: Adapting Segment Anything Model for Urban Village Identification
- 2024 Learning to Prompt Segment Anything Models
- 2024 Integrated Framework for Unsupervised Building Segmentation with Segment Anything Model-Based Pseudo-Labeling and Weakly Supervised Learning
- 2023 RepViT-SAM: Towards Real-Time Segmenting Anything
- 2023 Segment Anything in 3D with NeRFs
- 2023 SAM3D: Segment Anything in 3D Scenes
- 2023 SAM3D: Zero-Shot 3D Object Detection via Segment Anything Model
- 2023 Faster Segment Anything: Towards Lightweight SAM for Mobile Applications
- 2023 MobileSAMv2: Faster Segment Anything to Everything
- 2023 MobileSAM-Track: Lightweight One-Shot Tracking and Segmentation of Small Objects on Edge Devices
- 2023 Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications
- 2023 Segment Everything Everywhere All at Once
- 2023 Track Anything: Segment Anything Meets Videos
- https://huggingface.co/spaces/HarborYuan/ovsam - Open-Vocabulary SAM
- https://huggingface.co/spaces/Xenova/depth-anything-web - Depth Anything with Transformers.js
- https://huggingface.co/spaces/Xenova/segment-anything-web - Segment Anything with Transformers.js
- https://huggingface.co/dhkim2810/MobileSAM - Faster Segment Anything (MobileSAM)
- https://huggingface.co/spaces/chongzhou/EdgeSAM - EdgeSAM demo
- https://huggingface.co/nielsr/slimsam-77-uniform - Model card for SlimSAM, a compressed version of the Segment Anything Model (SAM)
- Colab:
- https://huggingface.co/spaces/curt-park/yolo-world-with-efficientvit-sam - Fast Text to Segmentation with YOLO-World + EfficientViT SAM
- https://github.com/xenova/transformers.js - State-of-the-art Machine Learning for the web. Run Transformers directly in your browser, with no need for a server!
- https://github.com/Vision-Intelligence-and-Robots-Group/Awesome-Segment-Anything - A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies.
- https://github.com/facebookresearch/segment-anything - Meta AI Research, FAIR
- https://github.com/branislavhesko/segment-anything-ui - Segment Anything UI for annotations
- https://github.com/continue-revolution/sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
- https://github.com/tsinghua-fib-lab/UV-SAM - UV-SAM
- https://github.com/IDEA-Research/Grounded-Segment-Anything - Grounded-SAM: Marrying Grounding-DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
- https://github.com/CASIA-IVA-Lab/FastSAM - Fast Segment Anything
- https://github.com/Pointcept/SegmentAnything3D - Segment Anything 3D
- https://github.com/DYZhang09/SAM3D - SAM3D: Segment Anything in 3D Scenes
- https://github.com/ChaoningZhang/MobileSAM - Official code for the MobileSAM project, which makes SAM lightweight for mobile applications and beyond
- https://github.com/akbartus/MobileSAM-in-the-Browser - Demo of MobileSAM running in the browser
- https://github.com/THU-MIG/RepViT - RepViT: Revisiting Mobile CNN From ViT Perspective
- https://github.com/NVIDIA-AI-IOT/nanosam - A distilled Segment Anything (SAM) model capable of running real-time with NVIDIA TensorRT
- https://github.com/vietanhdev/anylabeling - Effortless AI-assisted data labeling with AI support from YOLO, Segment Anything, MobileSAM!!
- https://github.com/czg1225/SlimSAM - SlimSAM: 0.1% Data Makes Segment Anything Slim
- https://github.com/ymy-k/Hi-SAM - Marrying Segment Anything Model for Hierarchical Text Segmentation
- https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once - SEEM: Segment Everything Everywhere All at Once
- https://github.com/UX-Decoder/Semantic-SAM - Semantic-SAM: Segment and Recognize Anything at Any Granularity
- https://github.com/gaomingqi/Track-Anything - Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
- https://github.com/LiheYoung/Depth-Anything - Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
- https://github.com/HarborYuan/ovsam - The official code of paper "Open-Vocabulary SAM".
- https://github.com/LiuTingWed/SAM-Not-Perfect - Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-World Applications
- https://github.com/KidsWithTokens/Medical-SAM-Adapter - Adapting Segment Anything Model for Medical Image Segmentation
- https://github.com/xinghaochen/TinySAM - Official PyTorch implementation of "TinySAM: Pushing the Envelope for Efficient Segment Anything Model"