Operating robots in open-ended scenarios with diverse tasks is a crucial research and application direction in robotics. While recent progress in natural language processing and large multimodal models has enhanced robots' ability to understand complex instructions, robot manipulation in open environments still faces both a procedural skill dilemma and a declarative skill dilemma. Existing methods often sacrifice either cognitive or executive capability. To address these challenges, we propose RoBridge, a hierarchical intelligent architecture for general robotic manipulation. It consists of a high-level cognitive planner (HCP) based on a large-scale pre-trained vision-language model (VLM), an invariant operable representation (IOR) serving as a symbolic bridge, and a generalist embodied agent (GEA). RoBridge preserves the declarative skill of the VLM while unleashing the procedural skill of reinforcement learning, effectively bridging the gap between cognition and execution. RoBridge demonstrates significant performance improvements over existing baselines, achieving a 75% success rate on new tasks and an 83% average success rate in sim-to-real generalization using only five real-world data samples per task. This work represents a significant step towards integrating cognitive reasoning with physical execution in robotic systems, offering a new paradigm for general robotic manipulation.
Declarative skill methods (left) directly generate specific control commands in a formulaic way, for example by determining trajectories through cost minimization. However, lacking interaction experience with the physical world, the generated commands are often incorrect. Procedural skill methods (middle) forcibly convert a vision-language model (VLM) into a robotics model through a data-driven approach, but this is ineffective in unseen situations. Our method, RoBridge (right), enables the VLM to generate physically intuitive representations that serve as a symbolic bridge. This symbolic bridge is characterized by its invariance, allowing it to communicate with the underlying embodied agent in a universal manner. Meanwhile, the embodied agent continuously interacts with the physical world to aggregate skills over time, fully leveraging the strengths of both the VLM and reinforcement learning.
RoBridge adopts a three-layer architecture consisting of a high-level cognitive planner (HCP), an invariant operable representation (IOR), and a generalist embodied agent (GEA). For example, given the instruction ``Put the blocks into the corresponding shaped slots'', the HCP first plans and splits the task into multiple primitive actions. Then, combined with APIs built on foundation models, it produces the IOR, which mainly includes the masked depth map from the first-person perspective, the object mask from the third-person perspective, the action type, and the constraints. The IOR is updated by the HCP at a low frequency, while Track-Anything updates the mask at a high frequency. The IOR serves as the input to the GEA, which executes specific actions until the task is completed.
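The IOR interface described above can be sketched as a small data structure. This is a minimal illustration, not the authors' actual implementation: the field names, the primitive-action set, and the `update_mask` helper are all assumptions based on the components listed in the text.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List
import numpy as np

class ActionType(Enum):
    # Hypothetical primitive-action categories; the exact set is not specified here.
    PICK = auto()
    PLACE = auto()
    PRESS = auto()
    SWEEP = auto()
    OPEN = auto()

@dataclass
class IOR:
    """Invariant operable representation passed from the HCP to the GEA.

    Field names are illustrative assumptions, not the authors' interface.
    """
    ego_masked_depth: np.ndarray   # masked depth map, first-person view (H x W)
    third_person_mask: np.ndarray  # object mask, third-person view (H x W), bool
    action_type: ActionType        # which primitive action to execute
    constraints: List[str]         # task constraints, e.g. as text

    def update_mask(self, new_mask: np.ndarray) -> None:
        # High-frequency mask refresh (e.g. from a video object tracker),
        # while the remaining fields are refreshed by the HCP at low frequency.
        self.third_person_mask = new_mask

# Example: an IOR for "pick up the blue block"
ior = IOR(
    ego_masked_depth=np.zeros((64, 64), dtype=np.float32),
    third_person_mask=np.zeros((64, 64), dtype=bool),
    action_type=ActionType.PICK,
    constraints=["approach from above"],
)
ior.update_mask(np.ones((64, 64), dtype=bool))
print(ior.action_type.name)  # -> PICK
```

The split between low-frequency HCP updates and high-frequency mask updates is what lets the representation stay current without re-invoking the VLM on every control step.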
Pick up the blue block.
Sweep the yellow block into pink stickers.
Press the button.
Open the drawer.
Put the blocks into the corresponding shaped slots.
We introduce RoBridge, a novel hierarchical intelligent architecture designed to enhance robotic manipulation by bridging the gap between high-level cognitive planning and low-level physical execution. The architecture integrates a high-level cognitive planner, an invariant operable representation, and a generalist embodied agent, demonstrating significant advancements in task generalization and execution robustness. Through extensive experiments, RoBridge has shown superior performance and strong zero-shot generalization capabilities in unknown environments and novel tasks.