The reliance on language in Vision-Language-Action (VLA) models introduces ambiguity, cognitive overhead, and difficulties in precise object identification and sequential task execution, particularly in environments with multiple visually similar objects. To address these limitations, we propose Vision-Click-Action (VCA), a framework that replaces verbose textual commands with direct, click-based visual interaction using pretrained segmentation models. By allowing operators to specify target objects clearly through visual selection in the robot's 2D camera view, VCA reduces interpretation errors, lowers cognitive load, and provides a practical and scalable alternative to language-driven interfaces for real-world robotic manipulation. Experimental results validate that the proposed VCA framework achieves effective instance-level manipulation of specified target objects.
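The click-to-mask selection step at the core of VCA can be sketched as follows. This is a minimal illustration, not the paper's implementation: `segment_from_click` is a hypothetical stand-in for a pretrained promptable segmentation model (e.g., SAM), faked here with a fixed-radius disk so the sketch is self-contained, and the mask centroid is one simple way to reduce the selected instance to a target pixel for downstream policy code.

```python
import numpy as np

def segment_from_click(image, click_xy):
    """Hypothetical stand-in for a pretrained promptable segmentation
    model (e.g., SAM): returns a boolean mask for the object under the
    clicked pixel. Here we fake it with a fixed-radius disk so the
    sketch runs without model weights."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = click_xy
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= 20 ** 2

def target_pixel_from_mask(mask):
    """Reduce the instance mask to a single target pixel (its centroid),
    one simple representation a manipulation policy can condition on."""
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())

# Operator clicks the target object in the robot's 2D camera view.
image = np.zeros((480, 640, 3), dtype=np.uint8)
mask = segment_from_click(image, click_xy=(320, 240))
target = target_pixel_from_mask(mask)
```

In a real system the mask itself (not just the centroid) would typically be passed to the policy, since it disambiguates the clicked instance from visually similar neighbors.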
Several excellent concurrent works explore related directions. Visual prompting for robotic manipulation introduces visual prompts to ACT, but specifies targets with bounding boxes rather than clicks.
ClutterDexGrasp is optimized for grasp acquisition, using SAM2 to generate 2D masks and projecting them into point clouds.
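Projecting a 2D mask into a point cloud, as described for ClutterDexGrasp, typically amounts to back-projecting the masked depth pixels through the camera intrinsics. A minimal pinhole-camera sketch (the intrinsics and depth values below are illustrative, not taken from that work):

```python
import numpy as np

def mask_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into 3D camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    v, u = np.nonzero(mask)          # pixel coordinates inside the mask
    z = depth[v, u]                  # metric depth at those pixels
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) array of 3D points

# Illustrative example: flat depth plane at 1 m, small square mask.
depth = np.full((480, 640), 1.0)
mask = np.zeros((480, 640), dtype=bool)
mask[230:250, 310:330] = True
points = mask_to_point_cloud(depth, mask, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

The resulting masked point cloud is what a grasp planner would then consume to propose grasp poses on the selected object.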
DexGraspVLA instead relies on VLMs to generate masks for target objects. The Laser-Guided Interaction Interface and AR Point&Click are also interesting reads, as pointing-based alternatives to language-driven target specification.