Turn your ML infra 🧱 into an ML home 🏡
Runhouse demolishes research↔production and infra silos so ML Engineers, Researchers, and Data Scientists share a living, evolving ML stack.
Runhouse is a unified interface into existing compute and data systems, built to reclaim the 50-75% of ML practitioners' time lost to debugging, adapting, or repackaging code for different environments.
Who is this for?
ML Engineers
Who want to be able to update and improve production services, pipelines, and artifacts with a Pythonic, debuggable devX
ML Researchers & Data Scientists
Who don't want to spend or wait 3-6 months translating and packaging their work for production
OSS Maintainers
Who want to improve the accessibility, reproducibility, and reach of their code, without having to build support or examples for every cloud or compute system
Runhouse OSS is a unified interface into compute and data infra which is unopinionated, ergonomic, and powerful enough for fast experimentation, full-scale deployment, and everything in between.
Save 3-6 months of research-to-production time per project.
See how Hugging Face and Langchain use Runhouse:
Runhouse Den is a sharing, lineage, and governance layer to make your ML stack multiplayer, visible, and manageable. No more rampant duplication, broken lineage, lost artifacts, and ownerless resources.
Uber, Shopify, and Spotify have all migrated to 3rd-gen ML platforms. We built Runhouse so you can too, but incrementally and within your existing infra and tooling.
There’s really no migration.
Runhouse OSS works with your existing stack, inside your own infra, wherever you run Python.
How It Works
Send code and data to any infra, and use them natively, from anywhere.
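The "send code to infra, use it natively" idea above can be sketched in plain Python. This is only an illustration of the dispatch pattern, not Runhouse's actual API; every name here (`FakeCluster`, `RemoteFunction`, `to`) is hypothetical:

```python
import pickle

class FakeCluster:
    """Stand-in for remote compute: receives shipped code and runs it."""
    def submit(self, payload, *args, **kwargs):
        fn = pickle.loads(payload)      # "unpack" the shipped function
        return fn(*args, **kwargs)      # execute on the "remote" side

class RemoteFunction:
    """Wrap a local function so calls dispatch to a cluster transparently."""
    def __init__(self, fn, cluster):
        self.payload = pickle.dumps(fn)  # serialize the code to ship it
        self.cluster = cluster

    def __call__(self, *args, **kwargs):
        # Calling the wrapper feels native, but runs on the cluster
        return self.cluster.submit(self.payload, *args, **kwargs)

def to(fn, cluster):
    """'Send' a function to a cluster; get back a native-feeling callable."""
    return RemoteFunction(fn, cluster)

def add(a, b):
    return a + b

gpu = FakeCluster()
remote_add = to(add, gpu)
print(remote_add(2, 3))  # → 5
```

The point of the pattern: the caller's code doesn't change between local and remote execution, which is what lets the same function serve both experimentation and production.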
The next generation of ML platforms will not have research-to-production silos, and will be automatically traced for lineage and governance.
Keep up with Runhouse features and announcements.