Getting Started


Runhouse is a Python framework for composing and sharing production-quality backend apps and services ridiculously quickly and on your own infra.

This getting started guide gives a quick walkthrough of Runhouse basics:

  • Create your own Runhouse resource abstractions for a cluster and function

  • Send local code to remote infra to be run instantly

  • Save/reload/share resources through Runhouse Den resource management

This guide also includes sneak peeks of more advanced Runhouse features, with pointers to where you can learn more about them.

Installing Runhouse

To use runhouse to interact with remote clusters, run the following command. This additionally installs SkyPilot, which is used for launching on-demand clusters and interacting with runhouse clusters.
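The install command looks like this (quoting the extra so your shell doesn't expand the brackets; the `sky` extra name reflects Runhouse releases at the time of writing):

```shell
pip install "runhouse[sky]"
```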

Runhouse Basics: Remote Cluster and Function

Let’s start by seeing how simple it is to send arbitrary code, in this case a function, to your remote compute and run it there.

To run this function on remote compute:

  1. Construct an RH cluster, which wraps your remote compute

  2. Create an RH function from run_home, and send it to live and run on the cluster

  3. Call the RH function as you would any other function; it runs on your remote cluster and returns its result locally
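The steps above can be sketched as follows. The cluster name, host, and SSH credentials are placeholders for your own compute, and the Runhouse calls (shown commented out) assume runhouse is installed; parameter names may differ slightly across versions.

```python
# Step 0: a regular local Python function -- nothing Runhouse-specific here.
def run_home(name: str):
    return f"Run home {name}!"

# Steps 1-3, assuming runhouse is installed and you have an SSH-reachable
# machine (host and key paths below are placeholders):
#
#   import runhouse as rh
#
#   cluster = rh.cluster(
#       name="rh-cluster",
#       host="<your-server-ip>",
#       ssh_creds={"ssh_user": "ubuntu", "ssh_private_key": "~/.ssh/sky-key"},
#   )
#   remote_run_home = rh.function(run_home).to(cluster)
#   remote_run_home("Jack")  # runs on the cluster, returns 'Run home Jack!'
```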

Runhouse Cluster

Construct your Runhouse cluster by wrapping an existing cluster you have up. In Runhouse, a “cluster” is a unit of compute, somewhere you can send code, data, or requests to execute.

More advanced cluster types like on-demand (automatically spun up/down for you with your cloud credentials) or Sagemaker clusters are also supported. These require some setup and are discussed in Compute Tutorial.

Runhouse Function

For the function, simply wrap it in rh.function, then send it to the cluster with .to.

Modules (classes) are also supported. For finer control over where the function or module runs, you can also specify the environment (a list of package requirements, a Conda env, or a Runhouse env) it runs in. These are covered in more detail in the Compute Tutorial.

INFO | 2024-01-04 19:16:41.757114 | Writing out function to /Users/caroline/Documents/runhouse/runhouse/docs/notebooks/api/ Please make sure the function does not rely on any local variables, including imports (which should be moved inside the function body).
INFO | 2024-01-04 19:16:41.760892 | Setting up Function on cluster.
INFO | 2024-01-04 19:16:41.763236 | Copying package from file:///Users/caroline/Documents/runhouse/runhouse to: example-cluster
INFO | 2024-01-04 19:16:41.764370 | Running command: ssh -T -i ~/.ssh/sky-key -o Port=22 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 -o ConnectTimeout=30s -o ForwardAgent=yes -o ControlMaster=auto -o ControlPath=/tmp/skypilot_ssh_caroline/41014bb4d3/%C -o ControlPersist=300s ubuntu@example-cluster 'bash --login -c -i '"'"'true && source ~/.bashrc && export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && (mkdir -p ~/runhouse/)'"'"' 2>&1'
INFO | 2024-01-04 19:16:42.395589 | Calling base_env.install
base_env servlet: Calling method install on module base_env
Env already installed, skipping
INFO | 2024-01-04 19:16:42.822556 | Time to call base_env.install: 0.43 seconds
INFO | 2024-01-04 19:16:43.044922 | Function setup complete.
INFO | 2024-01-04 19:16:46.179161 | Calling
base_env servlet: Calling method call on module run_home
INFO | 2024-01-04 19:16:46.426212 | Time to call 0.25 seconds
'Run home Jack!'


That was just a very basic example of taking local code or data, converting it to a Runhouse object, sending it to remote compute, and running it there. Each of these Runhouse abstractions, like a cluster or a function, is referred to as a Resource. Runhouse supports much more functionality on this front, including:

  • Automated clusters: On-demand clusters (through SkyPilot), AWS Sagemaker clusters, and (soon) Kubernetes clusters

  • Env and package management: run functions/modules on dedicated envs on the cluster

  • Modules: setup and run Python classes in addition to functions

  • Additional function flexibility/features: remote or async functions, with logging, streaming, etc.

  • Data resources: send folders, tables, blobs to remote clusters or file storage

Runhouse Den: Saving, Reloading, and Sharing

By creating a Runhouse Den account and logging in, you can save your resources (clusters, functions/modules, data, etc.), reload them from anywhere, or even share them with other users, such as your teammates. Once loaded, these resources are ready to be used without additional setup.

Then, on the web dashboard UI, you can access, visualize, and manage any of your resources, along with their version history.


To log in, simply call rh.login() in Python or runhouse login from the CLI. As part of logging in, Runhouse also optionally offers secrets management: it can automatically detect locally configured provider secrets and give you the option to upload them securely to your account. For more information on secrets management, refer to the Secrets Tutorial.

Saving and Reloading

You can save the resources we created above with:
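A sketch of those save calls, assuming `cluster` and `remote_run_home` are the Runhouse cluster and function objects created earlier (the variable names are illustrative):

```python
# Saves the resources to your Den account under their given names;
# requires being logged in via rh.login() or `runhouse login`.
cluster.save()          # saved as "rh-cluster"
remote_run_home.save()  # saved as "run_home"
```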

<runhouse.resources.functions.function.Function at 0x105bbdd60>

If you check on your dashboard, you’ll see that the cluster “rh-cluster” and function “run_home” have been saved. Clicking into the resource will show you the resource metadata.

Now, you can also jump to another environment (for example, your terminal, if you are running this in a notebook), call runhouse login, and then run the function on the cluster with the following Python script:
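A sketch of such a script, assuming you have saved the resources under the names above and logged in on the new machine (loading saved resources by name this way reflects the Runhouse API at the time of writing):

```python
import runhouse as rh  # requires runhouse to be installed and logged in

# Reload the saved function by name from Den; its cluster association
# was saved along with it.
reloaded_fn = rh.function(name="run_home")

print(reloaded_fn("Jack"))  # runs on "rh-cluster" and prints the result
```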


To share your resources with another user, such as a teammate, simply call the following:
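For example (the email address is a placeholder, and the exact parameter names may vary across Runhouse versions):

```python
# Assuming `remote_run_home` is the saved Runhouse function from above.
# Granting "write" access lets the recipient modify the resource;
# use "read" for view/run-only access.
remote_run_home.share(
    users=["teammate@email.com"],
    access_level="write",
)
```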


To recap, in this guide we covered:

  • Creating basic Runhouse resource types: cluster and function

  • Running a Runhouse function on remote compute

  • Saving, reloading, and sharing resources through Runhouse Den

Dive Deeper

This is just the start, and there’s much more to Runhouse. To learn more, please take a look at these other tutorials, or at the API documentation.