Cognitive Mapping and Planning

Note: This tutorial is tested to work only with Python 2.7 on PyRobot.

This example deploys visual navigation policies trained in the Cognitive Mapping and Planning paper [1] onto LoCoBot using TensorFlow. These policies were trained in simulation on Matterport scans from [2,3] and are run on the real robot as is. They take a pointgoal target and the current RGB image from the on-board camera as input, and output one of four discrete actions: stay, turn left 90 degrees, turn right 90 degrees, or move forward 40 cm. These actions are executed on LoCoBot using the ILQR controllers provided by the PyRobot library. The policies assume a grid world and perfect odometry; for this example deployment, we assume that odometry from the wheel encoders is perfect. This example is based on code released with [1].
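As a rough illustration (not part of run_cmp.py), the four discrete actions could be mapped onto PyRobot base commands along the following lines. The execute_action helper and the controller configuration shown here are assumptions made for this sketch, not the exact code used by the example:

# Hypothetical mapping from CMP's discrete actions to PyRobot base
# commands; execute_action is our own illustrative helper.
import numpy as np
from pyrobot import Robot

STEP = 0.4          # forward step size in meters
TURN = np.pi / 2.0  # 90 degree turn in radians

def execute_action(bot, action):
    # go_to_relative tracks an (x, y, theta) target expressed in the
    # robot's current frame using the configured base controller.
    if action == 'forward':
        bot.base.go_to_relative([STEP, 0.0, 0.0])
    elif action == 'left':
        bot.base.go_to_relative([0.0, 0.0, TURN])
    elif action == 'right':
        bot.base.go_to_relative([0.0, 0.0, -TURN])
    # 'stay' falls through as a no-op

bot = Robot('locobot', base_config={'base_controller': 'ilqr'})
execute_action(bot, 'forward')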

Setup

This section assumes that you have already followed the instructions for installing the PyRobot API.

  1. Launch the robot in a terminal

    # Launch the robot using the following launch command.
    roslaunch locobot_control main.launch use_base:=true use_camera:=true \
        use_arm:=false use_sim:=false base:=kobuki use_rviz:=false
    
  2. Install additional dependencies (open a new terminal)

    source ~/pyenv_pyrobot/bin/activate
    cd ~/low_cost_ws/src/pyrobot/examples/visual_nav_cmp/
    pip install -r requirements.txt
    
  3. Get pre-trained models

    wget https://www.dropbox.com/s/vw7aqmitsm3kas0/model.tgz?dl=0 -O model.tgz
    tar -xf model.tgz
    

Test Setup

Confirm that TensorFlow is set up properly by running the following command.

pytest test_cmp.py
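If the test passes you are done. As an optional extra check (our own snippet, not shipped with the example), you can verify that TensorFlow imports inside the virtualenv and that model.tgz actually extracted checkpoint files; the exact layout inside the tarball is not documented here, so this just walks the current directory:

# Hypothetical sanity check: confirm the TensorFlow import works and
# list any checkpoint files extracted from model.tgz under the current
# directory.
import os
import tensorflow as tf

print('TensorFlow version: ' + tf.__version__)
for root, _, files in os.walk('.'):
    for name in files:
        if '.ckpt' in name or name == 'checkpoint':
            print('found checkpoint file: ' + os.path.join(root, name))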

Running

# Run the CMP policy with the following command; the goal is 1.2 m straight ahead.
python run_cmp.py --goal_x 1.2 --goal_y 0.0 --goal_t 0. --botname locobot
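The goal flags specify a pointgoal (x, y, theta) in the robot's starting frame. Under the perfect-odometry assumption described above, the remaining goal can be re-expressed in the robot's current frame after every action. The following is a minimal sketch of that bookkeeping (our own illustration; in PyRobot the base odometry pose is available via bot.base.get_state('odom')):

# Illustrative pointgoal bookkeeping under the perfect-odometry
# assumption: re-express a world-frame goal in the robot's current frame.
import numpy as np

def goal_in_robot_frame(goal_xyt, pose_xyt):
    """Re-express a world-frame (x, y, theta) goal in the robot frame."""
    gx, gy, gt = goal_xyt
    px, py, pt = pose_xyt
    c, s = np.cos(pt), np.sin(pt)
    dx, dy = gx - px, gy - py
    # Rotate the world-frame offset by -pt to land in the robot frame.
    rel_x = c * dx + s * dy
    rel_y = -s * dx + c * dy
    # Wrap the heading difference to [-pi, pi].
    rel_t = np.arctan2(np.sin(gt - pt), np.cos(gt - pt))
    return rel_x, rel_y, rel_t

# After a 90 degree left turn at the origin, a goal 1.2 m ahead in the
# world frame sits 1.2 m to the robot's right (negative y).
print(goal_in_robot_frame((1.2, 0.0, 0.0), (0.0, 0.0, np.pi / 2)))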

Demo Runs

Videos of some successful demo runs (at 10x speed):

  1. Go forward 4 m: python run_cmp.py --goal_x 4.0 --goal_y 0.0 --goal_t 0. --compensate

  2. Go forward 4 m: python run_cmp.py --goal_x 4.0 --goal_y 0.0 --goal_t 0. --compensate

  3. Go forward 2 m, left 2.4 m: python run_cmp.py --goal_x 2.0 --goal_y 2.4 --goal_t 0. --compensate

  4. Go forward 3.2 m: python run_cmp.py --goal_x 3.2 --goal_y 0.0 --goal_t 0. --compensate

References

  1. Cognitive Mapping and Planning for Visual Navigation. IJCV 2019. Saurabh Gupta, Varun Tolani, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik.
  2. 3D Semantic Parsing of Large-Scale Indoor Spaces. CVPR 2016. Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese.
  3. Matterport3D: Learning from RGB-D Data in Indoor Environments. 3DV 2017. Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang.

Citing

If you find this policy useful, please consider citing the following paper:

@article{gupta2019cognitive,
    author = "Gupta, Saurabh and Tolani, Varun and Davidson, James and Levine, Sergey and Sukthankar, Rahul and Malik, Jitendra",
    title = "Cognitive mapping and planning for visual navigation",
    journal = "International Journal of Computer Vision",
    year = "2019"
}