VCA: Vision-Click-Action Framework for Precise Manipulation of Segmented Objects in Target Ambiguous Environments

ROBROS Inc.
Overall architecture of VCA

Abstract

The reliance on language in Vision-Language-Action (VLA) models introduces ambiguity, cognitive overhead, and difficulties in precise object identification and sequential task execution, particularly in environments with multiple visually similar objects. To address these limitations, we propose Vision-Click-Action (VCA), a framework that replaces verbose textual commands with direct, click-based visual interaction using pretrained segmentation models. By allowing operators to specify target objects clearly through visual selection in the robot's 2D camera view, VCA reduces interpretation errors, lowers cognitive load, and provides a practical and scalable alternative to language-driven interfaces for real-world robotic manipulation. Experimental results validate that the proposed VCA framework achieves effective instance-level manipulation of specified target objects.
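The pipeline described above (operator clicks a pixel, a pretrained segmentation model turns the click into an instance mask, and the mask disambiguates the target for manipulation) can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation: `toy_predictor` is a hypothetical stand-in for a point-promptable segmenter such as SAM2, and the centroid is used only as a placeholder for the downstream action target.

```python
import numpy as np

def segment_from_click(image, click_uv, predictor):
    """Point-prompted segmentation: return a binary mask for the clicked object."""
    return predictor(image, click_uv)

def mask_centroid(mask):
    """Placeholder action target: pixel centroid of the selected instance."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def toy_predictor(image, click_uv):
    """Hypothetical segmenter: flood-fills the 4-connected region that shares
    the clicked pixel's value. A real system would call a pretrained model."""
    u, v = click_uv
    target = image[v, u]
    mask = np.zeros(image.shape, dtype=bool)
    stack = [(v, u)]
    while stack:
        y, x = stack.pop()
        if (0 <= y < image.shape[0] and 0 <= x < image.shape[1]
                and not mask[y, x] and image[y, x] == target):
            mask[y, x] = True
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return mask

# Two visually identical objects; the click resolves the ambiguity.
scene = np.zeros((8, 8), dtype=int)
scene[1:4, 1:4] = 1   # object A
scene[5:7, 5:7] = 1   # object B, same appearance
mask = segment_from_click(scene, (2, 2), toy_predictor)  # operator clicks on A
cx, cy = mask_centroid(mask)  # -> (2.0, 2.0), inside A only
```

Note how a single click yields an instance-level mask covering object A but not its look-alike B, which is exactly the disambiguation a textual command ("pick up the block") cannot provide.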

Paper Video

Primary Tasks

Block Sorting

Tower of Hanoi

Visual Shift - Unseen Object

Black Block

Square Ring

Orange Ring

Visual Shift - Unseen Environment

Checkered Tablecloth

Plaid Tablecloth

Related Links

There's a lot of excellent work that was introduced around the same time as ours.

Visual prompting for robotic manipulation also adds visual prompting to ACT, but uses bounding boxes rather than clicks.

ClutterDexGrasp is optimized for grasp acquisition, using SAM2 to generate 2D masks and projecting them onto point clouds.

DexGraspVLA uses VLMs to generate masks for target objects.

Laser-guided Interaction Interface and AR Point&Click are also interesting reads.