Rapid ML development across any infrastructure in regular Python.

Runhouse OSS πŸƒβ€β™€οΈ

A magical AI/ML DevX: Develop, debug, and run Python, at any scale, directly on your own compute.

Get started

Runhouse Den 🏠

A collaborative home for your AI stack: manage and share your fleet of compute and services.

Book a demo

Runhouse OSS

Effortlessly program the full power of your ML compute from regular Python. Restore fast, interactive iteration to your ML development while keeping it production-ready.

Easy to get started

No-stress installation.
Anywhere you use Python.

$ pip install runhouse

Research-to-production and back, instantly

Runhouse code is just Python, so it can move as-is from local development to running in a production container in Kubernetes or Airflow.
Even better, grab any production code and run it locally.
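Because the function itself stays plain Python, moving between local development and production is just a matter of where it is deployed. A minimal sketch of that pattern follows; the `deploy` stand-in and `RUN_REMOTE` variable are illustrative, not Runhouse API (in real code the stand-in would be `rh.function(fn).to(cluster)`):

```python
import os

def double_rows(batch):
    # Ordinary Python; identical on a laptop and in production
    return [row * 2 for row in batch]

def deploy(fn):
    # Stand-in for Runhouse's rh.function(fn).to(cluster). In real use the
    # returned callable proxies to the remote copy; here it is a no-op so
    # the sketch stays self-contained.
    return fn

# Same call site either way: the code does not change, only the deployment
preproc = deploy(double_rows) if os.getenv("RUN_REMOTE") else double_rows
print(preproc([1, 2, 3]))  # β†’ [2, 4, 6]
```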

Graphic showing a laptop with a Python logo connecting to modules and functions running on cloud compute

Get more out of your own compute

Enjoy total flexibility to scale or cost-optimize, keeping execution inside your cloud and existing infrastructure.

List of cloud provider logos: AWS, Google, Azure, Kubernetes, AWS Lambda, and SageMaker

Iterate with powerful compute locally, QA test instantly, and deploy as-is in production.

import runhouse as rh

# Take any Python function or class
def local_preproc_fn(dataset, filters):
    ...

# Easily allocate the resources you want, whether from
# existing VMs or K8s, or fresh from a cloud
my_cluster = rh.cluster(
    name="my_cluster",
    instance_type="A10G:1",
    provider="cheapest"
)

# Deploy your function to your compute (not pickling!)
remote_preproc_fn = rh.function(local_preproc_fn).to(my_cluster)

# Call it normally, but it's running remotely
processed_data = remote_preproc_fn(dataset, filters)

Sub-second redeployment

so you can iterate live on the real infra

Works with your existing code

so you can focus on AI, not DSLs and packaging

Call it from anywhere, in any language

Your remote function is a real service with an HTTP endpoint
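Since the deployed function is exposed as an HTTP service, any language can reach it with a plain POST. The endpoint path and payload shape in this sketch are hypothetical (see the Runhouse docs for the actual API); it only illustrates the pattern:

```python
import json
from urllib import request

def build_call_url(base_url: str, fn_name: str) -> str:
    # Hypothetical endpoint layout: <server>/call/<function-name>
    return f"{base_url.rstrip('/')}/call/{fn_name}"

def call_remote_fn(base_url: str, fn_name: str, payload: dict):
    # POST JSON arguments to the service and decode the JSON reply
    req = request.Request(
        build_call_url(base_url, fn_name),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# e.g. call_remote_fn("http://my-cluster:32300", "preproc", {"filters": []})
```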

Use Cases

AI Primitives

Model Inference Β· Training Β· Preprocessing

Deploy Llama3 on AWS EC2

Higher-Order Services

Inference pipelines Β· Workflow pipelines

LangChain RAG app with OpenAI and Chroma

Controls & Safety

Content moderation Β· PII obfuscation

Coming soon...

Data Apps

Batch processing Β· Data APIs

Coming soon...

Runhouse Den

Your ML compute and services, all in one place. Collaboration, observability, and management built-in.

Line illustration showing researchers and engineers connected to an AI app or code via lines before and after changes

Before: Tightly Coupled and Brittle

Difficult to update, A/B test, or scale to new infrastructure

Line illustration showing researchers and engineers connected to a block "/team/bert_preprocessing" that decouples callers from the changing AI code behind it

With Den: Decoupled and Nimble

Easily update and scale services without breakage

ML platform excellence out of the box.

Try it out
Screenshot of a search interface with a search bar and listed resources

Search & Sharing

Runhouse Den provides you with an interface to explore your AI services. Easily share with your teammates, search through your applications, and view details all in one place.

Screenshot of a line graph and list of user requests

Observability

Gain insights by viewing how your teammates consume your AI apps and services. Explore changes over time and make better decisions based on trends.

Screenshot of a resource page and user access list

Auth & Access Control

Den makes it easy to control access to your services. Provide individual teammates with read or write access, or choose "public" visibility to expose public API endpoints.

Start building on a solid ML foundation.

Book a demo