Hello, I am Peiqi!
I am currently a research intern at NYU Courant working with Prof. Lerrel Pinto.
My research currently focuses on visual grounding, open-vocabulary navigation, and related navigation problems such as SLAM and A* planning. Beyond this, I am also interested in mobile manipulation more broadly. My short-term goal is to turn powerful zero-shot vision and multimodal models into open-vocabulary visual grounding systems that can be used for robot navigation. My long-term goal is to build mobile robots that can complete complex, long-horizon real-world tasks.
Here are a few of my recent research projects. You can find a full list of my work on Google Scholar.
OK-Robot: What Really Matters in Integrating Open-Knowledge Models for Robotics
Peiqi Liu*, Yaswanth Orru*, Jay Vakil, Chris Paxton, Nur Muhammad Mahi Shafiullah*, Lerrel Pinto*
Remarkable progress has been made in recent years in the fields of vision, language, and robotics. We now have vision models capable of recognizing objects based on language queries, navigation systems that can effectively guide mobile robots, and grasping models that can handle a wide range of objects. Despite these advancements, general-purpose robotic applications still lag behind, even though they rely on these fundamental capabilities of recognition, navigation, and grasping. In this paper, we adopt a systems-first approach to develop a new Open Knowledge-based robotics framework called OK-Robot. By combining Vision-Language Models (VLMs) for object detection, navigation primitives for movement, and grasping primitives for object manipulation, OK-Robot offers an integrated solution for pick-and-drop operations without requiring any training. To evaluate its performance, we run OK-Robot in 10 real-world home environments. The results demonstrate that OK-Robot achieves a 58.5% success rate in open-ended pick-and-drop tasks, representing a new state of the art in Open Vocabulary Mobile Manipulation (OVMM) with nearly 1.8x the performance of prior work. In cleaner, uncluttered environments, OK-Robot's performance increases to 82%. However, the most important insight gained from OK-Robot is the critical role of nuanced details when combining Open Knowledge systems like VLMs with robotic modules. Videos of our experiments are available on our website: https://ok-robot.github.io.
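To make the modular design concrete, here is a minimal, hypothetical Python sketch of the pick-and-drop loop the abstract describes: an open-vocabulary detector, a navigation primitive, and a grasping primitive chained together with no task-specific training. All names and signatures below (detect_object, navigate_to, grasp_at) are illustrative placeholders, not the actual OK-Robot API.

```python
# Hypothetical sketch of a modular open-vocabulary pick-and-drop pipeline.
# None of these functions are from the OK-Robot codebase; each stands in
# for an off-the-shelf, zero-shot component.

from dataclasses import dataclass


@dataclass
class Pose:
    """A simplified 2D robot/target pose."""
    x: float
    y: float
    theta: float


def detect_object(query: str) -> Pose:
    """Placeholder for an open-vocabulary VLM detector: given a natural
    language query, return a target pose (e.g., from a semantic map)."""
    return Pose(1.0, 2.0, 0.0)  # dummy result for the sketch


def navigate_to(pose: Pose) -> None:
    """Placeholder for a navigation primitive (mapping plus A*-style
    path planning and low-level control)."""
    print(f"navigating to ({pose.x:.1f}, {pose.y:.1f})")


def grasp_at(pose: Pose) -> bool:
    """Placeholder for a pretrained grasping primitive; returns whether
    the grasp succeeded."""
    print("attempting grasp")
    return True


def pick_and_drop(pick_query: str, drop_query: str) -> bool:
    """Chain the three zero-shot modules: detect and navigate to the
    object, grasp it, then detect and navigate to the drop location."""
    target = detect_object(pick_query)
    navigate_to(target)
    if not grasp_at(target):
        return False
    destination = detect_object(drop_query)
    navigate_to(destination)
    print("releasing object")
    return True


if __name__ == "__main__":
    pick_and_drop("the blue mug", "the kitchen sink")
```

The point of the sketch is the composition: each module is an existing zero-shot component, so the overall system requires no additional training, only careful interfacing between the parts.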