Rapid, cost-efficient AI development on your own infrastructure

Runhouse OSS πŸƒβ€β™€οΈ

A Python interface into your ML compute for both research and production. Iterable, debuggable, DSL-free, and infrastructure-agnostic.

Get started

Runhouse Den 🏠

A modular home for ML infra: browse, manage, and grow your ML stack as a living set of shared services and compute.

Book a demo

Runhouse OSS

Effortlessly program powerful ML systems across arbitrary compute in regular Python. Demolish your research-to-production gap and your infrastructure silos.

Adopt incrementally

No upfront migration. Drops into your existing pipelines, code, and development workflows.

$ pip install runhouse

Loved by research and infra teams alike

Runhouse is fast enough for local development in notebooks or IDEs, yet runs as-is inside Kubernetes, CI, or your favorite orchestrator. Debug and iterate directly on your GPUs, with no more push-and-pray debugging.

Graphic showing a laptop with a Python logo connecting to modules and functions running on cloud compute

Runs inside your own infrastructure

Execution stays inside your cloud(s), with total flexibility to cost-optimize or scale to new providers.

List of cloud provider logos: AWS, Google, Azure, Kubernetes, AWS Lambda, and SageMaker
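As a sketch of what that flexibility looks like in practice, the same code can target a different provider just by changing the cluster spec. The provider and instance values below are illustrative; availability depends on the credentials you have configured.

import runhouse as rh

# The same function definition can be sent to clusters on different
# providers; only the cluster spec changes. Provider and instance
# values here are illustrative examples.
aws_cluster = rh.cluster(name="aws-a10g", instance_type="A10G:1", provider="aws")
gcp_cluster = rh.cluster(name="gcp-t4", instance_type="T4:1", provider="gcp")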

Restore the ML flywheel: iterate on your code with powerful compute, then deploy it as-is in production.

import runhouse as rh

# Take any Python function or class
def local_preproc_fn(dataset, filters):
    ...

# Easily allocate the resources you want, whether from
# existing VMs or K8s, or fresh from a cloud
my_cluster = rh.cluster(
    name="my_cluster",
    instance_type="A10G:1",
    provider="cheapest"
)

# Deploy your function to your compute (not pickling!)
remote_preproc_fn = rh.function(local_preproc_fn).to(my_cluster)

# Call it normally, but it's running remotely
processed_data = remote_preproc_fn(dataset, filters)

Instantaneous, eager execution

Iterate and debug live on the real infra

Works with arbitrary Python

So you can focus on AI, not DSLs and packaging

Call from anywhere, in any language

Your remote functions are services with HTTP endpoints
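For illustration, calling such a service over HTTP might look like the sketch below. The endpoint URL and payload shape are assumptions, not the documented interface; check the Runhouse docs for the exact API your cluster exposes.

import requests

# Hypothetical endpoint and payload shape -- this only illustrates the
# idea of a deployed function exposed as an HTTP service; consult the
# Runhouse docs for the real interface.
ENDPOINT = "https://my-cluster.example.com/remote_preproc_fn"

resp = requests.post(ENDPOINT, json={"args": [["sample.txt"], {"min_len": 10}]})
resp.raise_for_status()
processed_data = resp.json()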

Use Cases

Inference

Online Β· Batch Β· LLMs Β· Multi-step

Call Llama3 on AWS EC2

Training

Offline Β· Online Β· Distributed Β· HPO

Fine-tuning with LoRA

Composite Systems

RAG Β· Multi-task Β· Evaluation

LangChain RAG app with OpenAI and Chroma

Data Processing

Batch Β· Online Β· Data Apps

Parallel GPU Batch Embeddings

Runhouse Den

Share modular ML systems to stop reinventing the wheel, empower developers, and reduce costs.

Line illustration showing researchers and engineers connected directly to an AI app's code, before and after a change

Before: Tightly Coupled and Brittle

Difficult to update, A/B test, or scale to new infrastructure

Line illustration showing researchers and engineers connected to a block "/team/bert_preprocessing" that decouples them from changing AI code

With Den: Decoupled and Nimble

Easily update and scale services without breakage

Operationalize your ML as a living stack.

Try it out
Screenshot of a search interface with a search bar and listed resources

Search & Sharing

Runhouse Den provides you with an interface to explore your AI services. Easily share with your teammates, search through your applications, and view details all in one place.

Screenshot of a line graph and list of user requests

Observability

Gain insights by viewing how your teammates consume your AI apps and services. Explore changes over time and make better decisions based on trends.

Screenshot of a resource page and user access list

Auth & Access Control

Den makes it easy to control access to your services. Provide individual teammates with read or write access, or choose "public" visibility to expose public API endpoints.
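As a rough sketch of how sharing could look from the OSS library, using the service name from the illustration above. The method and parameter names here are assumptions for illustration; verify them against the Runhouse docs for your version.

import runhouse as rh

# Load a saved service by name and grant a teammate read access.
# `share`, `users`, and `access_level` are assumed names -- confirm
# them in the Runhouse docs before relying on this.
remote_fn = rh.function(name="/team/bert_preprocessing")
remote_fn.share(users=["teammate@example.com"], access_level="read")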

Start building on a solid ML foundation.

Book a demo