VLA & VLM Robot Model Directory

Side-by-side comparison of 50+ open-source vision-language-action (VLA) and vision-language (VLM) models for robotics — OpenVLA, Octo, RT-X, Diffusion Policy, LeRobot, and more — with benchmarks, licenses, hardware requirements, and paper links.

Need help picking a model?

Our team has deployed OpenVLA, Octo, and RT-X on OpenArm, Unitree G1, and Mobile ALOHA rigs. Tell us your robot and task and we will recommend a model and a dataset.

Get a Model Recommendation

Datasets & Tools to Pair

Practical Model Selection

Compare architectures by task fit, data needs, and deployment complexity.

Data-Model Alignment

Each model is linked to compatible datasets and data-format stacks.

Experiment Velocity

Direct links to open-source code and implementation-ready pointers reduce setup friction.

Scale to Production

Move from evaluation to deployment with support for fine-tuning and integration.

Need Custom Models or Data?

We provide data collection, fine-tuning support, and deployment for robot learning.