We are feeling sleepy... Can you buy us a coffee?
imagination-to-real
Train your robot to do whatever you want using Generative AI
imagination-to-real empowers robotics developers by bridging the gap between generative AI and classical physics simulators. Our library prepares realistic, diverse, and geometrically accurate visual data from generative models. This data enables robots to learn complex and highly dynamic tasks, such as parkour, without requiring depth sensors.
What It Does:
- Integrates generative models with simulators to create rich, synthetic datasets.
- Ensures temporal consistency with tools like Dreams In Motion (DIM).
- Offers compatibility with MuJoCo environments for seamless data preparation.
How to Use:
- Use Image_Maker for text-to-image generation tailored to your simulation needs.
- Combine the generated data with your preferred training framework to develop robust robot learning models.
We are creating SmilingRobo Cloud, which will allow you to train your robot using our innovative libraries and drag-and-drop facilities.
Installation
conda create -n imagination_to_real python=3.10
conda activate imagination_to_real
git clone https://github.com/SmilingRobo/imagination-to-real imagination_to_real
cd imagination_to_real
pip install -e .
For consistency, we recommend using the ComfyUI commit pinned below.
# Choose the CUDA version that your GPU supports. We will use CUDA 12.1
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --extra-index-url https://download.pytorch.org/whl/cu121
# Installing ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
git checkout ed2fa105ae29af6621232dd8ef622ff1e3346b3f
pip install -r requirements.txt
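Optionally, you can sanity-check the ComfyUI install before wiring it into imagination_to_real by starting the server once from inside the ComfyUI folder and then stopping it:
# Optional check: launch the ComfyUI server, then stop it with Ctrl+C
python main.py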
We recommend placing your models outside the ComfyUI repo for better housekeeping. To do this, you'll need to link your model paths through a config file. Check out the configs folder for a template, where you'll specify locations for checkpoints, controlnets, and VAEs. For the provided three_mask_workflow example, these are the models you'll need:
- SDXL Turbo 1.0: place under checkpoints
- SDXL Depth ControlNet: place under controlnet
- SDXL VAE: place under vae
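The authoritative key names come from the template in the configs folder; purely as an illustration (the section name, base path, and subfolder layout below are placeholders), an extra_model_paths.yaml that points ComfyUI at an external model directory can be written like this:
# Illustrative only -- copy the template from the `configs` folder and adjust paths for your machine
cat > /path/to/extra_model_paths.yaml << 'EOF'
imagination_to_real:
    base_path: /path/to/models/
    checkpoints: checkpoints/
    controlnet: controlnet/
    vae: vae/
EOF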
After cloning this repository, you'll need to add ComfyUI to your $PYTHONPATH and link your model paths. We recommend managing these in a local .env file. Then, link the config file you just created:
export PYTHONPATH=/path/to/ComfyUI:$PYTHONPATH
# See the `configs` folder for a template
export COMFYUI_CONFIG_PATH=/path/to/extra_model_paths.yaml
imagination_to_real is organized by workflows. We include our main workflow, three_mask_workflow, which generates an image given a depth map along with three semantic masks, each with its own prompt (for example, foreground/background/object).
python imagination_to_real/image_maker/scripts/demo_three_mask_workflow.py [--path-to-folder] [--seed] [--save]
where path-to-folder points to the folder containing your conditioning data, and the save flag writes the output to the corresponding examples/three_mask_workflow/[example-name]/samples folder. The script will randomly select one of our provided prompts. To prepare your own data, use examples/image-maker/three_mask_workflow/ramps as a reference.
We provide example conditioning images and prompts for three_mask_workflow under the examples/image-maker/three_mask_workflow folder, grouped by scene.
To try it out, use:
python imagination_to_real/image_maker/scripts/demo_three_mask_workflow.py [--example-name] [--seed] [--save]
where example-name corresponds to one of the scenes in the examples/image-maker/three_mask_workflow folder.
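For instance, to render the provided ramps scene with a fixed seed and save the results (the seed value here is arbitrary):
# Render the 'ramps' example scene and write the outputs to its samples folder
python imagination_to_real/image_maker/scripts/demo_three_mask_workflow.py --example-name ramps --seed 42 --save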
The graphical interface for ComfyUI is very helpful for designing your own workflows; please see their documentation for how to do this. Using this helpful workflow-to-Python conversion tool, you can script your workflows as we've done with Image_Maker/workflows/three_mask_workflow.py.
In LucidSim, we use a distributed setup to generate images at scale. Rendering nodes, launched independently on many machines, receive and fulfill rendering requests (containing prompts and conditioning images) from the physics engine through a task queue (see Zaku). We hope to release setup instructions for this in the future, but we have included Image_Maker/render_node.py for your reference.
The last few dependencies require a downgraded setuptools and wheel to install. Please downgrade, install them, and then revert:
pip install setuptools==65.5.0 wheel==0.38.4 pip==23
pip install gym==0.21.0
pip install gym-dmc==0.2.9
pip install -U setuptools wheel pip
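As a quick optional sanity check that the pinned gym version was installed before moving on:
# Should print 0.21.0
python -c "import gym; print(gym.__version__)"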
Note: On Linux, make sure to set the environment variable MUJOCO_GL=egl.
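For example, add it to your shell profile or the same .env file as before:
# Use EGL for headless MuJoCo rendering on Linux
export MUJOCO_GL=egl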
LucidSim generates photorealistic images by using a generative model to augment the simulator's rendering, with conditioning images maintaining control over the scene geometry.
We have provided an expert policy checkpoint under checkpoints/expert.pt. This policy was derived from that of Extreme Parkour. You can use this policy to sample an environment and visualize the conditioning images.
If you are following one of the provided examples, just run the script below. To set up your own environment instead:
- Create a name.py file in imagination_to_real/specs. Take gaps.py as a reference and change line 5.
- Create your name.py and name.xml files in imagination_to_real/lucidsim/tasks. Take gaps.py and gaps.xml as references, changing line 11 of the .py file and line 3 of the .xml file. If you are using your own robot, you will also need to change line 2 of the .xml file.
- Make your name.xml terrain file and add it to imagination_to_real/lucidsim/tasks/assets/terrains, taking gaps.xml as a reference.
# example env-name: one of ['parkour', 'hurdle', 'gaps', 'stairs_v1', 'stairs_v2']
python imagination_to_real/lucidsim/scripts/play.py --save-path [--env-name] [--num-steps] [--seed]
where save_path is the path where the resulting video will be saved.
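As a concrete, illustrative invocation (the output path, step count, and seed below are placeholders):
# Sample the 'gaps' environment with the expert policy and save a video of the conditioning images
python imagination_to_real/lucidsim/scripts/play.py --save-path renders/gaps_conditioning.mp4 --env-name gaps --num-steps 300 --seed 0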
To run the full generative augmentation pipeline, please also make sure the environment variables are still set correctly:
export COMFYUI_CONFIG_PATH=/path/to/extra_model_paths.yaml
export PYTHONPATH=/path/to/ComfyUI:$PYTHONPATH
You can then run the full pipeline with:
python imagination_to_real/lucidsim/scripts/play_three_mask_workflow.py --save-path --prompt-collection [--env-name] [--num-steps] [--seed]
where save_path and env_name are the same as before. prompt_collection should be a path to a .jsonl file with correctly formatted prompts, as in the examples/image-maker/three_mask_workflow folder.
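For example (the .jsonl file name below is hypothetical; point --prompt-collection at an actual prompt file from the examples folder):
# The prompt-collection path is a placeholder -- substitute a real .jsonl from examples/image-maker/three_mask_workflow
python imagination_to_real/lucidsim/scripts/play_three_mask_workflow.py --save-path renders/gaps_augmented.mp4 --prompt-collection examples/image-maker/three_mask_workflow/gaps/prompts.jsonl --env-name gaps --num-steps 300 --seed 0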
We thank the authors of LucidSim for their open-source code and the authors of Extreme Parkour for their open-source codebase, which we used as a starting point for our library.
If you find our work useful, please consider citing:
@inproceedings{yu2024learning,
title={Learning Visual Parkour from Generated Images},
author={Alan Yu and Ge Yang and Ran Choi and Yajvan Ravan and John Leonard and Phillip Isola},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
}