Why don't Amazon ML teams use AWS SageMaker?

Runhouse is a lean and flexible ML framework, not a heavy platform that slows you down.

Stop Paying for Inflexible, Expensive Compute

Runhouse makes it extremely easy to deploy research, training, and inference code written in normal Python directly on EC2 machines.

  • Save 40-60% on compute costs out of the gate by cutting out the hefty SageMaker markup.
  • Flexibly run your code on any compute, even multi-cloud, with no overhead or repackaging.
  • Enable intelligent compute reuse, code-driven scale-up and teardown, and job packing, as sketched below.
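
For instance, the compute lifecycle can live in ordinary code. A minimal sketch, assuming the cluster lifecycle methods (up_if_not, teardown) from the Runhouse docs; the cluster name is illustrative:

import runhouse as rh

# Reattach to the named cluster if it already exists, else define it fresh
cluster = rh.cluster(
    name="shared-a10g",
    instance_type="A10G:1",
    provider="aws"
)

# Bring the VM up only if it isn't already running (compute reuse)
cluster.up_if_not()

# ... send functions to it, pack several jobs onto the same box ...

# Tear it down from code when the work is done
cluster.teardown()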

Take Back Control of Your Infrastructure

Sending ML pipeline jobs to SageMaker means throwing them into a black box with no control or debuggability. With Runhouse, you fully manage your own hardware.

  • Apply best-practice software DevOps to your code and pipelines.
  • Manage compute and execution with full visibility and control, as sketched below.
  • Onboard in 1 week with normal Python code, not in 6-9 months on siloed tools.
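
As one example of that visibility, a cluster can be inspected straight from Python. This is a sketch, assuming an already-launched cluster named "my_cluster" and the cluster.run() shell-command API from the Runhouse docs:

import runhouse as rh

# Load a handle to the running cluster by name
cluster = rh.cluster(name="my_cluster")

# Run arbitrary shell commands on the box over SSH and get the output back
cluster.run(["nvidia-smi", "df -h"])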

Runhouse Open Source

GitHub

Runhouse is a flexible framework for building a true ML Platform.

Instant Deployments

Send code to execute on real cloud infrastructure in seconds, letting you iterate and debug live.

import runhouse as rh

# Take any Python function or class
def local_preproc_fn(dataset, filters):
    ...

# Easily allocate the resources you want, whether from
# existing VMs or K8s, or fresh from a cloud
my_cluster = rh.cluster(
    name="my_cluster",
    instance_type="A10G:1",
    provider="cheapest"
)

# Deploy your function to your compute (not pickling!)
remote_preproc_fn = rh.function(local_preproc_fn).to(my_cluster)

# Call it normally, but it's running remotely
processed_data = remote_preproc_fn(dataset, filters)

Works with Normal Python

Runhouse isn't a DSL or a complex SaaS tool; your code stays plain Python.
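
A minimal sketch of that idea: the function below is plain Python, runs locally as-is, and moves to remote compute with the same rh.function(...).to(...) pattern shown above (the add function and CPU:2 instance type are illustrative):

import runhouse as rh

# Any ordinary function, no decorators or DSL required
def add(a, b):
    return a + b

print(add(1, 2))  # runs locally: 3

cluster = rh.cluster(
    name="my_cluster",
    instance_type="CPU:2",
    provider="cheapest"
)

# The same function, now served on the cluster
remote_add = rh.function(add).to(cluster)
print(remote_add(1, 2))  # same call signature, but it executes remotely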

Call from Anywhere, in Any Language

Use Runhouse wherever you like, whether in notebooks, orchestrators, or CI/CD platforms.
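
For example, an orchestrator task can reload and call a previously deployed function by name. This sketch assumes the function was saved earlier with remote_preproc_fn.save("preproc"); the "preproc" name and the orchestrator_task wrapper are hypothetical:

import runhouse as rh

def orchestrator_task(dataset, filters):
    # Reload the saved function; the call executes on its cluster, not here
    preproc = rh.function(name="preproc")
    return preproc(dataset, filters)

Per the Runhouse docs, deployed functions are also served over HTTP on the cluster, which is what lets callers in other languages reach the same endpoint.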

Everything you need to get started with Runhouse today.

See an Example

Learn more about the technical details of Runhouse and try the open-source package in your existing Python code. Here's an example of deploying Llama3 to EC2 in just a few lines.
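
The shape of that example, heavily hedged: the HFLlama class, model ID, and instance details below are illustrative stand-ins for the linked walkthrough, using the rh.module pattern for deploying classes:

import runhouse as rh

class HFLlama:
    # Illustrative wrapper; the real example's class and args may differ
    def __init__(self, model_id="meta-llama/Meta-Llama-3-8B-Instruct"):
        self.model_id = model_id
        self.pipeline = None

    def generate(self, prompt):
        # Lazily load the model on first call, on the remote GPU
        if self.pipeline is None:
            import transformers
            self.pipeline = transformers.pipeline("text-generation", model=self.model_id)
        return self.pipeline(prompt)

gpu = rh.cluster(name="llama3-ec2", instance_type="A10G:1", provider="aws")
RemoteLlama = rh.module(HFLlama).to(gpu)
llama = RemoteLlama()
print(llama.generate("What is Runhouse?"))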

See an Example

Talk to Donny (our founder)

We've been building ML platforms and open-source libraries like PyTorch for over a decade. We'd love to chat and get your feedback!

Book Time

Get in touch 👋

Whether you'd like to learn more about Runhouse or need a little assistance trying out the product, we're here to help.