Bridging the Gap: From Labs to Robots
We turn cutting-edge research into straightforward code you can actually use.
How It Works
1. Initialize RoboActions
Connect to the RoboActions SDK using your API key and project ID.
import roboactions

sdk = roboactions.init(
    api_key="YOUR_API_KEY",
    project_id="my_robot_project"
)
2. Easy Data Collection
Define your sensor configuration and start collecting data seamlessly within your robot's control loop.
# Initialize a Dataset object.
dataset = sdk.new_dataset(
    name="my_robot_data",
    sensor_config={
        "camera": {"resolution": (640, 480), "fps": 30},
        "lidar": {"enabled": True},
        # Add more sensors here
    }
)
# Collect data in your robot loop.
for step in range(100):
    observation = robot.get_observation()  # Get sensor data
    action = robot.get_last_action()       # Get last action taken
    # Record each observation-action pair
    dataset.record(observation, action)
Save and upload your collected dataset to RoboActions Cloud.
# Save and optionally upload your dataset to RoboActions Cloud
dataset_id = dataset.save(upload=True)
print("Dataset ID:", dataset_id)
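Under the hood, a recording step like `dataset.record(observation, action)` just pairs what the robot sensed with what it did and appends the pair to a buffer. The sketch below illustrates that idea in plain Python; the `RecordingBuffer` class and its fields are hypothetical, not part of the RoboActions API.

```python
class RecordingBuffer:
    """Toy stand-in for a dataset object: accumulates observation-action pairs."""

    def __init__(self, name):
        self.name = name
        self.steps = []

    def record(self, observation, action):
        # Each step pairs what the robot sensed with what it did.
        self.steps.append({"observation": observation, "action": action})

    def __len__(self):
        return len(self.steps)


# Simulated control loop with dummy sensor readings.
buffer = RecordingBuffer("my_robot_data")
for step in range(3):
    observation = {"camera": f"frame_{step}", "lidar": [0.1, 0.2]}
    action = {"velocity": 0.5}
    buffer.record(observation, action)

print(len(buffer))  # 3
```

The observation-action pairing is what makes the data usable for imitation learning later: the model is trained to map each observation to the action the robot (or demonstrator) took.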
3. Scale Up with MimicGen
Augment your training data with MimicGen to improve model robustness and performance.
# Augment data with MimicGen
augmented_dataset_id = sdk.mimicgen(
    dataset_id=dataset_id,
    num_variants=1000,
    augmentation_options={
        "noise_level": 0.05,
        "domain_shift": True
    }
)
print("Augmented Dataset ID:", augmented_dataset_id)
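MimicGen's core idea is "one demonstration in, many variants out." As a loose illustration of that expansion (not how MimicGen itself works, which replays transformed demonstration segments in new scene configurations), here is a toy noise-based augmenter; the `augment` function and its parameters are hypothetical.

```python
import random


def augment(trajectory, num_variants, noise_level, seed=0):
    """Create noisy copies of an action trajectory (toy 'one demo -> many variants')."""
    rng = random.Random(seed)
    variants = []
    for _ in range(num_variants):
        # Perturb each action value with small Gaussian noise.
        variants.append([a + rng.gauss(0.0, noise_level) for a in trajectory])
    return variants


base = [0.0, 0.5, 1.0]          # One recorded action trajectory
variants = augment(base, num_variants=4, noise_level=0.05)
print(len(variants))  # 4
```

Even this crude expansion hints at why augmentation helps: the model sees many slightly different versions of each situation instead of memorizing a single trajectory.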
4. Fine-tune and Deploy
Fine-tune a pre-trained Vision-Language-Action (VLA) model on your collected and augmented data, then deploy it for inference.
# Fine-tune a base model (e.g., OpenVLA)
model = sdk.finetune(
    base_model="OpenVLA",
    data=dataset_id,
    augmented_data=augmented_dataset_id,
    epochs=5,
    batch_size=16,
    gpu="nvidia-a100",  # or 'auto'
    deploy_for_inference=True
)
print("Fine-tuned Model Accuracy:", model.accuracy)
print("Model ID:", model.model_id)
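If the `epochs` and `batch_size` parameters are new to you: training makes `epochs` full passes over the dataset, and each pass processes the data in chunks of `batch_size` samples. A minimal sketch of that iteration pattern (illustrative only, not the SDK's training loop):

```python
def iter_batches(dataset, batch_size):
    """Yield consecutive chunks of at most batch_size samples."""
    for i in range(0, len(dataset), batch_size):
        yield dataset[i:i + batch_size]


# 100 samples with batch_size=16 -> 7 batches per epoch (the last is partial).
data = list(range(100))
batches_per_epoch = sum(1 for _ in iter_batches(data, 16))
print(batches_per_epoch)  # 7

# With epochs=5, the model would see 5 * 7 = 35 batches in total.
```

Larger batches smooth out gradient noise but need more GPU memory; more epochs mean more training at the risk of overfitting a small dataset.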
5. Hybrid Inference
Run inference on the robot when a local model is available and fall back to the cloud otherwise, combining low on-device latency with cloud-scale compute.
# Run hybrid inference (on-robot if available, otherwise cloud)
prediction = sdk.predict(
    model_id=model.model_id,
    observation=robot.get_observation(),
    mode="hybrid"
)
print("Prediction:", prediction)
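The routing decision behind a "hybrid" mode can be sketched in a few lines. The `hybrid_predict` helper below is hypothetical, included only to show the fallback logic described in the comment above: prefer the on-robot model, otherwise call out to the cloud.

```python
def hybrid_predict(observation, on_robot_model=None, cloud_predict=None):
    """Route inference: prefer the on-robot model, fall back to the cloud.

    Returns the prediction and where it ran ('robot' or 'cloud').
    """
    if on_robot_model is not None:
        return on_robot_model(observation), "robot"
    return cloud_predict(observation), "cloud"


# Stand-in predictors for demonstration.
local_model = lambda obs: {"action": "grasp"}
cloud_model = lambda obs: {"action": "grasp"}

# With a local model loaded, inference stays on the robot.
pred, where = hybrid_predict({"camera": "frame"},
                             on_robot_model=local_model,
                             cloud_predict=cloud_model)
print(where)  # robot

# Without a local model, the same call falls back to the cloud.
pred, where = hybrid_predict({"camera": "frame"}, cloud_predict=cloud_model)
print(where)  # cloud
```

Keeping the routing explicit like this makes the trade-off visible: on-robot inference avoids network round-trips, while the cloud path can serve models too large for the robot's hardware.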
Research-Powered, Developer-Focused
RoboActions is built on a foundation of groundbreaking research in robotics and AI. We translate complex academic concepts into practical, easy-to-use code, empowering you to create advanced robots without needing a PhD.
Breakthroughs in the Lab
Researchers are rapidly advancing robotics with Vision-Language-Action (VLA) models, imitation learning, and reinforcement learning. These innovations show remarkable promise for understanding commands, adapting to new tasks, and accelerating robot training.
The Gap: Developer Adoption
Despite these strides, many robot developers struggle to leverage these breakthroughs. Robotics research remains complex, and the tools can feel out of reach for teams without specialized expertise.
Our Belief: Fine-Tuning Is Essential
We see no “one-model-solves-all” approach to robotics. Instead, fine-tuning models for specific tasks leads to more intelligent robots, fast. New VLA architectures show that even minimal fine-tuning can yield significant performance gains.
Our Vision
We’re committed to bringing these lab-tested advancements directly to developers. RoboActions bridges the gap by delivering practical tools and workflows so every robot can have its own fine-tuned intelligence—empowering you to build smarter, more capable robots for the real world.