Simple, Secure Secrets Management for AI
You deserve better than pasting API keys into environment variables
Founding Engineer @ 🏃♀️Runhouse🏠
Engineer @ 🏃♀️Runhouse🏠
CTO @ 🏃♀️Runhouse🏠
Secrets are everywhere in AI apps and workflows, whether you’re using cloud credentials to access your blob storage bucket or launch VMs, passing OpenAI or Hugging Face API keys into your LLM app, or syncing your experiment metadata to a store like Weights & Biases. With the rapid proliferation of new applications and providers, the number and variety of secrets each engineer needs to keep track of keeps growing.
Unfortunately, managing and deploying those secrets is painful, flimsy, and insecure. ML developers are constantly re-checking where a secret belongs in a filesystem path or env variable, copying and pasting keys from provider web pages, or hacking them directly into code or deployment scripts. General-purpose secrets managers don’t fix this – they have no awareness of the formats and destinations of credentials expected by different providers. Developers are left to get secrets into their deployment environments however they can, often encoding them directly into code, scripts, and tools (e.g. passing API keys into Dockerfiles), leaving them unprotected. Worse still, jumping into a new environment like a hosted notebook or fresh cluster always starts with wrangling and re-setting up secrets for the nth time.
This isn’t the sexiest problem in AI, but it’s a real one that we’re in a position to fix at Runhouse. Deploying hundreds of apps from many different environments, we ran into it constantly. We asked ourselves: what if you could
- Natively specify which secrets to deploy with your application,
- Load all your secrets down from secrets management into the right places in your environment by running a single “login” CLI command (and logout when you’re done), and
- Load any secret in real time from secrets management in Python?
The solution is obvious: add support for secrets saving, loading, and deployment in Runhouse OSS, include built-in handling for a range of provider secrets to make this all pleasant to use in AI workflows, and connect that to seamless (but strictly optional!) built-in secrets management in Den (which, as a reminder, is still in alpha).
For a detailed API walkthrough of Runhouse Secrets, please refer to the documentation.
Secrets in Runhouse OSS
Runhouse provides a simple interface for constructing, saving, reloading, and syncing secrets across remote clusters, dev environments, or functions. It supports a large and growing list of built-in providers aimed at automatically extracting secrets from locally configured files and environment variables (more on this below).
Secrets API Basics
Creating a secret resource is easy with the `rh.secret` or `rh.provider_secret` factory functions, for generic secrets or provider-specific secrets, respectively.
You can construct `secret` objects for built-in providers from their string name and access the values directly:
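As a quick sketch of what this looks like (the AWS provider is used here as an example; it assumes locally configured AWS credentials):

```python
import runhouse as rh

# Construct a built-in provider secret from its string name.
aws_secret = rh.provider_secret("aws")

# Values are read lazily from the provider's default location,
# here ~/.aws/credentials or AWS_* environment variables.
values = aws_secret.values  # e.g. {"access_key": "...", "secret_key": "..."}
```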
The values are extracted lazily, in real time, from the expected local location (in this case `~/.aws/credentials`), whether that’s a file or environment variables. The expected default locations can be found in the provider-specific documentation.
You can also create a provider secret by explicitly passing in values:
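For example (the values below are placeholders, not real credentials):

```python
import runhouse as rh

# Provide the values explicitly instead of reading local config files.
aws_secret = rh.provider_secret(
    "aws",
    values={"access_key": "my_access_key", "secret_key": "my_secret_key"},
)
```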
For a generic secret, pass in the secret name and a dict of key-value pairs:
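Something along these lines (the name and values here are made up for illustration):

```python
import runhouse as rh

# A generic secret is just a name plus a dict of key-value pairs.
my_secret = rh.secret(
    name="my_api_secret",
    values={"api_key": "my_api_key", "api_host": "example.com"},
)
```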
Secret objects have a slew of API functionality, such as `.save()` and `.delete()` to save or delete the secret locally or from Runhouse Den (if logged in, see below), `.write()` to set files or environment variables locally, `.to()` to send it to a cluster/env, and more.
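Roughly, using secrets like the ones constructed above:

```python
import runhouse as rh

secret = rh.secret(name="my_api_secret", values={"api_key": "my_api_key"})

secret.save()    # persist the secret locally, or to Den if logged in
secret.delete()  # remove it from local configs or Den

aws_secret = rh.provider_secret("aws")
aws_secret.write()  # write values into the provider's default local path
```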
Deploying Secrets into Clusters and Apps
Secrets can be synced over to remote clusters one-off or bundled with an application. By default, if a secret’s contents are stored in the filesystem, then the file will be reconstructed and configured remotely under the same path; likewise, for local env var secrets, sending the secret to an env will set the corresponding environment variables in the environment. Note that each environment on the cluster is in its own process, so the environment variable won’t be accessible in other environments or from an ssh session.
Secrets can be sent individually with the `secret.to(cluster, env)` API, or bulk-synced to a cluster/env with the `cluster.sync_secrets(list_of_secrets, env)` API. They can also be included as part of a Runhouse env object by specifying the `secrets` field; then, when the env is sent to a remote cluster, or a function is sent onto a cluster with that env, the secrets will automatically be synced over as well and be accessible within the function.
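A sketch of the three paths (the cluster name, instance type, and env contents are placeholders):

```python
import runhouse as rh

cluster = rh.cluster("rh-cluster", instance_type="CPU:2+", provider="aws").up_if_not()

# 1. Send a single secret to a cluster (optionally into a specific env).
openai_secret = rh.provider_secret("openai")
openai_secret.to(cluster, env="my_env")

# 2. Bulk-sync a mix of provider names and secret objects.
cluster.sync_secrets(["aws", "huggingface", openai_secret])

# 3. Bundle secrets into an env, so they sync automatically whenever the
#    env (or a function deployed with it) lands on a cluster.
env = rh.env(reqs=["openai"], secrets=["openai"], name="my_env")
```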
Built-in Secrets – Credential, API Key, and SSH Key Management
The following built-in provider secret types exist to enable automatic detection of locally configured secrets, and to make secret construction and access APIs simpler. Note that these also include parent classes that are easily extensible for your own usage or as a contribution to Runhouse OSS. Please contact us on Discord or create a GitHub issue if you’d like to see a particular provider.
- Compute Providers: Amazon Web Services (AWS), Azure, Google Cloud Platform, LambdaLabs
- API Keys and Tokens: Anthropic, Cohere, GitHub, HuggingFace, LangChain, OpenAI, Pinecone, Weights & Biases
- SSH Keys: Generic, SkyPilot
- Planned: Heterogeneous Environment Variable/File combinations, Cluster Secrets (e.g. ssh config)
Runhouse Den Secrets Management Tool
The Runhouse OSS APIs make it easy to construct local secrets and sync them point-to-point across various environments, but having a Runhouse Den account makes them accessible from anywhere and even shareable with others!
Run `runhouse login` from a terminal or `rh.login()` from Python to log in (and create an account if needed), and Runhouse will offer to automatically grab any secrets found in your environment for the built-in providers and sync them up to Den, or load your secrets down from Den straight into the right locations in your environment. Here we’re loading secrets into Den:
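Programmatically, that flow looks roughly like the following (the keyword arguments mirror the interactive prompts and are assumptions; the CLI asks these questions for you):

```python
import runhouse as rh

# Log in to Den and offer to upload locally detected provider secrets.
rh.login(token="your-den-token", upload_secrets=True)
```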
Then we jump into a Google Colab and log in to load the secrets down into the environment.
We keep track of which secrets Runhouse loaded into the environment, so when you call `runhouse logout` you’ll be prompted with options to remove each one.
As long as you’re logged in (i.e. your Runhouse Den token is saved in `~/.rh/config.yaml`), you can add new secrets to Den by calling `my_secret.save()`, or load existing ones simply by calling `rh.secret(name="my_secret_name")`. We think this is a pretty amazing feature: accessing any secret as needed from anywhere in Python means you always have your secrets within arm’s reach. You can even call `.share()` on the secret to allow a friend to load it by name. No more slacking around SSH keys!
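Sketching that end-to-end (the collaborator email is a placeholder, and the sharing parameter names are assumptions; check the Secrets documentation for the exact signature):

```python
import runhouse as rh

# Save a secret to Den, then reload it anywhere by name.
rh.secret(name="my_secret_name", values={"key": "value"}).save()
reloaded = rh.secret(name="my_secret_name")

# Grant a collaborator read access so they can load it by name too.
reloaded.share(users=["friend@example.com"], access_level="read")
```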
You can see all of your secrets saved in Den in the Den dashboard. Secrets Resources (i.e. metadata), including sharing and history, are located in the resources tab, while secrets values can be viewed, copied, edited, and deleted directly from the Secrets Page.
We plan to add support for more providers, group sharing, and more generic and heterogeneous secret types (e.g. contexts out of a kubeconfig).
You might have noticed that by creating a separation between the values and the metadata, we’ve left open the door to support other secrets management providers with Runhouse’s metadata and sharing capabilities. If you’re interested in bringing your own provider, please contact us!
Stay up to speed 🏃♀️📩
Subscribe to our newsletter to receive updates about upcoming Runhouse features and announcements.