Rapid, cost-efficient AI development on your own infrastructure
Runhouse OSS 🏃‍♀️
Build powerful training and inference pipelines across any compute in Python. Iterable, debuggable, DSL-free, and infrastructure-agnostic.
Runhouse Den 🏠
A home for your growing AI stack. Straightforward observability, collaboration, and management for your compute and services.
Runhouse OSS
Effortlessly program powerful ML systems across arbitrary compute in regular Python. Demolish your research-to-production and infrastructure silos.
Adopt incrementally
No upfront migration. Drops into your existing pipelines, code, and development workflows.
$ pip install runhouse
Loved by research and infra teams alike
Runhouse is rapid enough for local development, like in notebooks or IDEs, but runs as-is inside Kubernetes, CI, or your favorite orchestrator. Debug and iterate directly on your GPUs, with no more push-and-pray debugging.
Runs inside your own infrastructure
Execution stays inside your cloud(s), with total flexibility to cost-optimize or scale to new providers.
Restore the ML flywheel: iterate on your code with powerful compute, then deploy it as-is in production.
import runhouse as rh

# Take any Python function or class
def local_preproc_fn(dataset, filters):
    ...

# Easily allocate the resources you want, whether from
# existing VMs or K8s, or fresh from a cloud
my_cluster = rh.cluster(
    name="my_cluster",
    instance_type="A10G:1",
    provider="cheapest"
)

# Deploy your function to your compute (not pickling!)
remote_preproc_fn = rh.function(local_preproc_fn).to(my_cluster)

# Call it normally, but it's running remotely
processed_data = remote_preproc_fn(dataset, filters)
Instantaneous, eager execution
Iterate and debug live on the real infra
Works with arbitrary Python
so you can focus on AI, not DSLs and packaging
Call from anywhere, in any language
Your remote functions are services with HTTP endpoints
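Because each deployed function is served over HTTP, a caller in any language only needs the cluster's address and the function's route. The sketch below builds such a request; the host, port, and JSON payload shape are illustrative assumptions, not Runhouse's exact wire format.

```python
import json

# Hypothetical cluster address and route (one route per deployed function);
# the real values come from your own cluster and deployment.
base_url = "http://my-cluster.example.com:32300"
endpoint = f"{base_url}/local_preproc_fn"

# Positional arguments serialized as JSON -- the same (dataset, filters)
# pair you would pass when calling the function in Python.
payload = json.dumps({"args": [["raw-record-1", "raw-record-2"], {"min_len": 3}]})

# Any HTTP client can now invoke the function, e.g.
# requests.post(endpoint, data=payload) from Python, or curl from a shell.
print(endpoint)
```

The point is that nothing about the caller is Python-specific: a TypeScript frontend or a Go service can hit the same endpoint.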
Use Cases
Runhouse Den
Share modular ML systems to stop reinventing the wheel, empower developers, and reduce costs.
Before: Tightly Coupled and Brittle
Difficult to update, A/B test, or scale to new infrastructure
With Den: Decoupled and Nimble
Easily update and scale services without breakage
Operationalize your ML as a living stack.
Search & Sharing
Runhouse Den provides you with an interface to explore your AI services. Easily share with your teammates, search through your applications, and view details all in one place.
Observability
Gain insight into how your teammates consume your AI apps and services. Explore changes over time and make better decisions based on usage trends.
Auth & Access Control
Den makes it easy to control access to your services. Grant individual teammates read or write access, or set visibility to "public" to expose a service as a public API endpoint.