Here's the real-time video demonstration using the FastSAM model.
There are two ways you can use the model:

- Using Google Colab:
  - You can have a look at the notebook and run it in Google Colab.
  - Not recommended for real-time inference.
  - Modify the `COLAB` and `INITIALIZED` variables accordingly.
  - Menu -> Runtime -> Run All -> Sit back and relax.
- Using a local environment for real-time inference:
  - You can clone the repository and run the Python script for real-time inference.
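The `COLAB` and `INITIALIZED` flags mentioned above might gate the notebook's one-time setup cells roughly like this. This is an illustrative sketch, not the notebook's actual code; only the two variable names come from the source.

```python
# Illustrative only: the notebook defines COLAB and INITIALIZED, but this
# gating logic is an assumption about how such flags are typically used.
COLAB = True         # set True when running inside Google Colab
INITIALIZED = False  # set True once cloning/installs have already run


def needs_setup(colab: bool, initialized: bool) -> bool:
    # One-time setup (cloning the repo, pip installs) is only needed on
    # Colab, and only until the session has been marked as initialized.
    return colab and not initialized


if needs_setup(COLAB, INITIALIZED):
    print("running one-time setup cells")
```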
- Clone the repository:
  ```shell
  git clone https://github.com/mora-bprs/bin-picking-real-time.git
  cd bin-picking-real-time
  ```
- Create and activate a Python virtual environment:
  ```shell
  python -m venv bin-venv
  bin-venv\Scripts\activate      # Windows
  # source bin-venv/bin/activate # macOS / Linux
  ```
- Install the required packages. About 300 MB of data will be downloaded.
  ```shell
  pip install -r requirements.txt
  ```
- Run the Python script for real-time inference:
  ```shell
  python smooth_main_rt.py
  ```
- Press 'q' to exit the real-time inference.
- Run the following command to deactivate the virtual environment:
  ```shell
  deactivate
  ```
- You have to install Python and `venv` first if they are not already installed.
- Tested Python versions: `3.10.7`, `3.10.14`, `3.12.4`.
- If you run into a `ModuleNotFoundError: No module named 'pkg_resources'` error, run `pip install --upgrade setuptools` after activating the virtual environment, then run the script again.
- Install all the requirements in a virtual environment and then run the scripts.
- Change the camera index if there are multiple cameras connected to the system (default is `0`).
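The notes above (tested Python versions, the `pkg_resources` fix, the camera index) can be bundled into a small preflight check. This is a hypothetical helper, not a script from the repository; the `CAMERA_INDEX` environment variable is an assumption for illustration.

```python
"""Hypothetical preflight helper for the setup notes; not part of the repo."""
import importlib.util
import os
import sys

# Versions the README lists as tested.
TESTED_VERSIONS = {(3, 10, 7), (3, 10, 14), (3, 12, 4)}


def preflight_warnings() -> list:
    warnings = []
    if sys.version_info[:3] not in TESTED_VERSIONS:
        warnings.append(
            "Python %d.%d.%d has not been tested (tested: 3.10.7, 3.10.14, 3.12.4)"
            % sys.version_info[:3]
        )
    # 'No module named pkg_resources' means setuptools is missing or broken;
    # the fix is: pip install --upgrade setuptools
    if importlib.util.find_spec("pkg_resources") is None:
        warnings.append("pkg_resources missing: pip install --upgrade setuptools")
    return warnings


def camera_index(default=0) -> int:
    # The script opens camera 0 by default; CAMERA_INDEX is a hypothetical
    # override, useful when several cameras are attached.
    return int(os.environ.get("CAMERA_INDEX", default))


if __name__ == "__main__":
    for w in preflight_warnings():
        print("WARN:", w)
    print("using camera index", camera_index())
```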
Source: https://pypi.org/project/segment-anything-fast/
Click the links below to download the checkpoint for the corresponding model type.

- `default` or `FastSAM`: YOLOv8x-based Segment Anything Model | Baidu Cloud (pwd: 0000).