Serverless ML Training in Your Own Cloud
Dispatch regular code to arbitrary compute in Python. Iterable, debuggable, DSL-free, and infrastructure-agnostic.
Restore the ML flywheel: iterate on your code with powerful compute, then deploy it as-is to production.
Provision and scale compute
Dispatch execution to any compute in your own cloud. Use a mixture of elastic compute, Kubernetes, and bare metal.
import runhouse as rh

# Take any Python function or class
def local_preproc_fn(dataset, filters):
    ...

# Easily allocate the resources you want, whether from
# existing VMs or K8s, or fresh from a cloud
my_cluster = rh.cluster(
    name="my_cluster",
    instance_type="A10G:1",
    provider="cheapest"
)

# Deploy your function to your compute (not pickling!)
remote_preproc_fn = rh.function(local_preproc_fn).to(my_cluster)

# Call it normally, but it's running remotely
processed_data = remote_preproc_fn(dataset, filters)
Flexible and debuggable ML pipelines
Deploy code updates in under 5 seconds and get streaming logs back for fast, iterative debugging.
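A minimal sketch of that loop, reusing only the rh.cluster and rh.function calls from the example above; the function name and arguments are illustrative:

import runhouse as rh

def train_step(batch_size):
    # Edit this body locally between runs; redeploying syncs the new code
    ...

gpu = rh.cluster(name="my_cluster", instance_type="A10G:1", provider="cheapest")

# Re-running .to() pushes the latest local code to the cluster in seconds
remote_train_step = rh.function(train_step).to(gpu)

# Calls run remotely, with stdout and logs streamed back to your terminal
remote_train_step(batch_size=32)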
Break the barrier between research and production
Manage your ML lifecycle using software development best practices on regular code. Deploy with no extra translation.
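As a hedged sketch of what "no extra translation" can look like: the .save() call and name-based reload below are assumptions about Runhouse's persistence API (check the docs for the exact calls); the cluster and function usage mirrors the example above.

import runhouse as rh

def local_preproc_fn(dataset, filters):
    ...

# Research: deploy the function and persist it under a name
# (.save() is assumed here; see the Runhouse docs for the exact API)
my_cluster = rh.cluster(name="my_cluster", instance_type="A10G:1", provider="cheapest")
rh.function(local_preproc_fn).to(my_cluster).save(name="preproc")

# Production: reload the same function by name and call it unchanged
preproc = rh.function(name="preproc")
processed = preproc(dataset, filters)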
High quality telemetry, out of the box
Automatically persist logs, track GPU/CPU/memory utilization, and audit resource access.
Runhouse OSS
Effortlessly program powerful ML systems across arbitrary compute in regular Python. Demolish your research-to-production and infrastructure silos.
Adopt incrementally
No upfront migration. Drops into your existing pipelines, code, and development workflows.
$ pip install runhouse
Loved by research and infra teams alike
Runhouse is fast enough for local development, whether in notebooks or IDEs, yet runs as-is inside Kubernetes, CI, or your favorite orchestrator. Debug and iterate on your GPUs directly - no more push-and-pray debugging.
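For example, the same dispatch pattern can sit inside an orchestrator task. This is an illustrative sketch using Airflow's task decorator, with the task and argument names invented for the example:

from airflow.decorators import task

import runhouse as rh

def local_preproc_fn(dataset_uri, filters):
    ...

@task
def preprocess(dataset_uri: str):
    # The orchestrator worker only dispatches the call;
    # the heavy compute runs on the GPU cluster
    gpu = rh.cluster(name="my_cluster", instance_type="A10G:1", provider="cheapest")
    remote_fn = rh.function(local_preproc_fn).to(gpu)
    return remote_fn(dataset_uri, filters=None)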
Runs inside your own infrastructure
Execution stays inside your cloud(s), with total flexibility to cost-optimize or scale to new providers.
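In practice that flexibility lives in the rh.cluster call from the examples above. A short sketch, where the specific provider strings are assumptions (check the docs for the supported list):

import runhouse as rh

# Pin to a cloud where you already have quota or committed spend...
aws_gpu = rh.cluster(name="aws_gpu", instance_type="A10G:1", provider="aws")

# ...or let Runhouse pick the cheapest available option, as in the hero example
cheap_gpu = rh.cluster(name="cheap_gpu", instance_type="A10G:1", provider="cheapest")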
Use Cases
Runhouse Den
Share modular ML systems to stop reinventing the wheel, empower developers, and reduce costs.
Before: Tightly Coupled and Brittle
Difficult to update, A/B test, or scale to new infrastructure
With Den: Decoupled and Nimble
Easily update and scale services without breakage
Operationalize your ML as a living stack.
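A hedged sketch of the decoupled flow above: redeploying under the same name is assumed to update the service in place, so consumers that load it by name pick up the new version without code changes (the .save() call and name-based loading are assumptions; see the Den docs).

import runhouse as rh

def local_preproc_fn_v2(dataset, filters):
    ...  # updated implementation

# Assumed flow: republish the updated function under the existing name
my_cluster = rh.cluster(name="my_cluster")
rh.function(local_preproc_fn_v2).to(my_cluster).save(name="preproc")

# Consumers keep loading "preproc" by name and get the new version
preproc = rh.function(name="preproc")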
Search & Sharing
Runhouse Den provides an interface to explore your AI services. Easily share them with teammates, search across your applications, and view details in one place.
Observability
Gain insight into how your teammates consume your AI apps and services. Track usage over time and make better decisions based on trends.
Auth & Access Control
Den makes it easy to control access to your services. Provide individual teammates with read or write access, or choose "public" visibility to expose public API endpoints.
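As an illustrative sketch of granting a teammate read access (the .share() method and its arguments are assumptions based on the access levels described above; consult the Den docs for the exact signature):

import runhouse as rh

# Load a saved service by name and share it with a teammate
# (.share() and its arguments are assumed, not confirmed, API)
preproc = rh.function(name="preproc")
preproc.share(users=["teammate@example.com"], access_level="read")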