
imagination_to_real by SmilingRobo

Buy Me A Coffee

We are feeling sleepy... Can you buy us a coffee? 😴


imagination-to-real: Train your robot to do whatever you want using Generative AI

Description

imagination-to-real empowers robotics developers by bridging the gap between generative AI and classical physics simulators. Our library prepares realistic, diverse, and geometrically accurate visual data from generative models. This data enables robots to learn complex and highly dynamic tasks, such as parkour, without requiring depth sensors.

🚀 What It Does:

⚪ Integrates generative models with simulators to create rich, synthetic datasets.
⚪ Ensures temporal consistency with tools like Dreams In Motion (DIM).
⚪ Offers compatibility with MuJoCo environments for seamless data preparation.

🛠️ How to Use:

⚪ Use Image_Maker for text-to-image generation tailored to your simulation needs.
⚪ Combine the generated data with your preferred training framework to develop robust robot learning models.

We are building SmilingRobo Cloud, which will let you train your robot with our libraries through a drag-and-drop interface.



Installing imagination_to_real module

1. Setup Conda Environment

conda create -n imagination_to_real python=3.10
conda activate imagination_to_real
git clone https://github.com/SmilingRobo/imagination-to-real imagination_to_real
cd imagination_to_real
pip install -e .
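
To confirm the editable install succeeded, a quick check (this assumes the package is importable under the module name imagination_to_real):

# sanity check: the import below should complete without errors
python -c "import imagination_to_real"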

Make Images using image_maker

1. Install ComfyUI + Dependencies

For consistency, we recommend using the version of ComfyUI pinned by the commit in the commands below.

# Choose the CUDA version that your GPU supports. We will use CUDA 12.1
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --extra-index-url https://download.pytorch.org/whl/cu121

# Installing ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
git checkout ed2fa105ae29af6621232dd8ef622ff1e3346b3f
pip install -r requirements.txt

2. Setting up Models

We recommend placing your models outside the ComfyUI repo for better housekeeping. For this, you'll need to link your model paths through a config file. Check out the configs folder for a template, where you'll specify locations for checkpoints, controlnets, and VAEs. For the provided three_mask_workflow example, you will need matching checkpoint, controlnet, and VAE models.

After cloning this repository, you'll need to add ComfyUI to your $PYTHONPATH and link your model paths. We recommend managing these in a local .env file. Then, link the config file you just created.

export PYTHONPATH=/path/to/ComfyUI:$PYTHONPATH

# See the `configs` folder for a template
export COMFYUI_CONFIG_PATH=/path/to/extra_model_paths.yaml

Usage

imagination_to_real is organized by workflows. We include our main workflow called three_mask_workflow, which generates an image given a depth map along with three semantic masks, each coming with a different prompt (for example, foreground/background/object).

Running the Workflow

python imagination_to_real/image_maker/scripts/demo_three_mask_workflow.py [--path-to-folder] [--seed] [--save]

where path-to-folder points to the folder containing your conditioning data, and the save flag writes the output to the corresponding examples/three_mask_workflow/[example-name]/samples folder. The script will randomly select one of the provided prompts.

To prepare your own data, use examples/image-maker/three_mask_workflow/ramps as a reference.
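
For instance, a minimal sketch of a run against your own data (the folder path below is a placeholder; point it at your prepared conditioning data):

# hypothetical data folder; replace with your own
python imagination_to_real/image_maker/scripts/demo_three_mask_workflow.py --path-to-folder /path/to/your/conditioning_data --seed 42 --save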

Example

We provide example conditioning images and prompts for three_mask_workflow under the examples/image-maker/three_mask_workflow folder, grouped by scene.

To try it out, use:

python imagination_to_real/image_maker/scripts/demo_three_mask_workflow.py [--example-name] [--seed] [--save]

where example-name corresponds to one of the scenes in the examples/image-maker/three_mask_workflow folder.
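
For instance, assuming the ramps scene mentioned above is available:

# picks a random provided prompt for the `ramps` scene and saves the result
python imagination_to_real/image_maker/scripts/demo_three_mask_workflow.py --example-name ramps --seed 0 --save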

Adding Your Own Workflows

The graphical interface for ComfyUI is very helpful for designing your own workflows. Please see their documentation for how to do this. By using a workflow-to-Python conversion tool, you can script your workflows as we've done with Image_Maker/workflows/three_mask_workflow.py.

Scaling Image Generation

In LucidSim, we use a distributed setup to generate images at scale. Rendering nodes, launched independently on many machines, receive rendering requests containing prompts and conditioning images from the physics engine through a task queue (see Zaku) and fulfill them. We hope to release setup instructions for this in the future, but we have included Image_Maker/render_node.py for your reference.


Create Environment



1. Installing gym_dmc

The last few dependencies require downgraded versions of setuptools and wheel to install. Downgrade them, install the packages, and then revert:

pip install setuptools==65.5.0 wheel==0.38.4 pip==23
pip install gym==0.21.0
pip install gym-dmc==0.2.9
pip install -U setuptools wheel pip
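
To verify the pinned versions installed correctly before moving on, a quick check:

# expect gym 0.21.0 and gym-dmc 0.2.9
python -c "import gym; print(gym.__version__)"
pip show gym-dmc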

Usage

Note: On Linux, make sure to set the environment variable MUJOCO_GL=egl.
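
For example, set it in your shell before running any of the commands below:

export MUJOCO_GL=egl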

LucidSim generates photorealistic images by using a generative model to augment the simulator's rendering, using conditioning images to maintain control over the scene geometry.

Rendering Conditioning Images

We have provided an expert policy checkpoint under checkpoints/expert.pt. This policy was derived from that of Extreme Parkour. You can use this policy to sample an environment and visualize the conditioning images.

If you are following the provided example, just run the script below.

If you are using custom data:

  1. Create a name.py file in imagination_to_real/specs, taking gaps.py as a reference, and change line 5.

  2. Create your name.py and name.xml files in imagination_to_real/lucidsim/tasks, taking gaps.py and gaps.xml as references, and change line 11 of the .py file and line 3 of the .xml file.

If you are using your own robot, you also have to change line 2 of the .xml file, and:

  1. Make your name.xml file and add it under imagination_to_real/lucidsim/tasks/assets/terrains, taking gaps.xml as a reference.

# example env-name: one of ['parkour', 'hurdle', 'gaps', 'stairs_v1', 'stairs_v2']
python imagination_to_real/lucidsim/scripts/play.py --save-path [--env-name] [--num-steps] [--seed]

where save_path specifies the location of the resulting video.
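
For instance, a minimal sketch of sampling the gaps environment (the output path, step count, and seed below are placeholders):

# renders conditioning images for the `gaps` environment to a video file
python imagination_to_real/lucidsim/scripts/play.py --save-path renders/gaps_conditioning.mp4 --env-name gaps --num-steps 600 --seed 1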

Full Rendering Pipeline

To run the full generative augmentation pipeline, please also make sure the environment variables are still set correctly:

COMFYUI_CONFIG_PATH=/path/to/extra_model_paths.yaml
PYTHONPATH=/path/to/ComfyUI:$PYTHONPATH

You can then run the full pipeline with:

python imagination_to_real/lucidsim/scripts/play_three_mask_workflow.py --save-path --prompt-collection [--env-name] [--num-steps] [--seed]

where save_path and env_name are the same as before. prompt_collection should be a path to a .jsonl file with correctly formatted prompts, as in the examples/image-maker/three_mask_workflow folder.
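
For instance, a minimal sketch of a full run (the prompt-collection path, output path, and step count below are placeholders; point --prompt-collection at a prompts .jsonl from the examples folder or your own):

# generates augmented frames for the `gaps` environment using the given prompt collection
python imagination_to_real/lucidsim/scripts/play_three_mask_workflow.py --save-path renders/gaps_generated.mp4 --prompt-collection /path/to/prompts.jsonl --env-name gaps --num-steps 600 --seed 1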


We thank the authors of LucidSim for their open-source code and the authors of Extreme Parkour for their open-source codebase, which we used as starting points for our library.

Citation

If you find our work useful, please consider citing:

@inproceedings{yu2024learning,
  title={Learning Visual Parkour from Generated Images},
  author={Alan Yu and Ge Yang and Ran Choi and Yajvan Ravan and John Leonard and Phillip Isola},
  booktitle={8th Annual Conference on Robot Learning},
  year={2024},
}
