Meta AI has unveiled its latest innovation, the Segment Anything Model (SAM), which aims to revolutionize image segmentation. The Segment Anything project introduces a new promptable segmentation task, a new model, and a new dataset to democratize image segmentation. At the core of the project is the Segment Anything 1-Billion mask dataset (SA-1B), the largest segmentation dataset to date, containing over 1 billion masks on 11 million licensed and privacy-respecting images.
The dataset was built by pairing an efficient model with a data collection loop, with the model assisting annotation and the resulting annotations in turn improving the model. SAM is designed to be promptable, enabling zero-shot transfer to new image distributions and tasks. In extensive evaluations, its zero-shot performance is impressive, often competitive with, or even superior to, prior fully supervised results.
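As a concrete illustration of prompting, the sketch below drives SAM with a single foreground click using Meta's open-source segment-anything package; the checkpoint filename, image path, and click coordinates are placeholders rather than values from the announcement.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (filename is illustrative).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

# SAM expects an HxWx3 uint8 RGB array.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single (x, y) click is the prompt; label 1 marks it as foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks, since one click can be ambiguous
)
best_mask = masks[np.argmax(scores)]
```

Because the image embedding is computed once in `set_image`, additional clicks or corrections can be issued interactively at low cost.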
Previously, two categories of methods were available for solving segmentation problems: interactive segmentation, which can segment any class of object but requires a person to guide the method by iteratively refining a mask, and automatic segmentation, which runs without a human in the loop but only for object categories defined ahead of time and only after training on large numbers of manually annotated examples. Neither provided a general, fully automatic solution. SAM is a generalization of these two approaches: a single model that handles both interactive and automatic segmentation effectively.
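Continuing the sketch above, the same pretrained weights can also run fully automatically via the package's automatic mask generator, which prompts SAM with a grid of points and filters the resulting masks (parameter values here are illustrative):

```python
from segment_anything import SamAutomaticMaskGenerator

# Reuses `sam` and `image` from the previous sketch.
mask_generator = SamAutomaticMaskGenerator(sam, points_per_side=32)
masks = mask_generator.generate(image)

# Each entry is a dict with keys such as "segmentation", "area", and "bbox".
print(f"Found {len(masks)} masks")
```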
SAM's promptable interface makes it versatile: by supplying the right prompt (foreground/background points, a rough box or mask, and so on), it can be applied to a wide range of segmentation tasks. Additionally, because SAM was trained on a diverse, high-quality dataset of over 1 billion masks, it generalizes well to new types of objects and images beyond what it saw during training. This ability to generalize greatly reduces the need for practitioners to collect their own segmentation data and fine-tune a model for their specific use case.
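For instance, swapping the click prompt for a bounding box, perhaps produced by an off-the-shelf object detector, turns the same predictor into a detector-to-mask pipeline with no retraining; the coordinates below are placeholders:

```python
# Continuing the earlier sketch: a box prompt in XYXY pixel coordinates.
box = np.array([100, 150, 400, 500])
masks, _, _ = predictor.predict(
    box=box,
    multimask_output=False,  # a box prompt is usually unambiguous, so one mask suffices
)
```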
Meta's goal is to accelerate research in segmentation and in image and video understanding more broadly by sharing its research and dataset. SAM can serve as a powerful component in many domains, including AR/VR, content creation, scientific research, and more general AI systems. This composable design lets a single model be used in extensible ways, potentially accomplishing tasks that were unknown at the time the model was designed.