Hello World

Test that your system is configured correctly by dispatching a minimal "hello world" script that runs on your remote Kubernetes cluster. Save this file to a local folder inside a git repo or a Python project (Kubetorch syncs your code based on either a git root or the presence of a pyproject.toml, requirements.txt, or similar file).
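To build intuition for the sync behavior, here is a hypothetical sketch of how a sync root might be located: walk up from the script's directory until a marker like .git or pyproject.toml is found. The function name find_sync_root and the exact marker list are illustrative assumptions, not Kubetorch's actual implementation.

```python
from pathlib import Path

# Illustrative markers; Kubetorch's real detection logic may differ.
MARKERS = (".git", "pyproject.toml", "requirements.txt", "setup.py")


def find_sync_root(start):
    """Walk from `start` up to the filesystem root, returning the first
    directory that contains one of the project markers, or None."""
    path = Path(start).resolve()
    for candidate in (path, *path.parents):
        if any((candidate / marker).exists() for marker in MARKERS):
            return candidate
    return None
```

Under this scheme, a script anywhere inside your repo resolves to the repo root, so the whole project gets synced rather than just the single file.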



import kubetorch as kt


def hello_world(num_prints=1):
    import time

    for print_num in range(num_prints):
        print("Hello world ", print_num)
        time.sleep(1)  # sleep for 1 second to simulate a slow operation


if __name__ == "__main__":
    # define compute
    compute = kt.Compute(cpus=1)

    # deploy function as a service
    remote_hello = kt.fn(hello_world).to(compute)

    # call the remote service
    results = remote_hello(5)

You can test this by running the script from the command line. This will dispatch a service to your Kubernetes cluster and call the service as a remote function.

$ python hello_world.py

Run kt list via the Kubetorch CLI to view a list of all the services that have been dispatched to your cluster. You should see a service named helloworld.
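The service name helloworld (rather than hello_world) is consistent with Kubernetes naming rules: service names must be DNS-1123 labels, i.e. lowercase alphanumerics and hyphens. The helper below is a hypothetical illustration of that normalization, not Kubetorch's actual naming code.

```python
import re


def to_service_name(fn_name):
    """Illustrative sketch: normalize a Python function name into a
    DNS-1123-compatible label (lowercase, no underscores, max 63 chars)."""
    name = fn_name.lower()
    name = re.sub(r"[^a-z0-9-]", "", name)  # drop invalid characters such as "_"
    return name.strip("-")[:63]  # DNS labels may not exceed 63 characters
```

Under this scheme, hello_world maps to helloworld, which matches the service name you see in kt list.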

For each subsequent call, Kubetorch will reuse the same service and pods, syncing over any updates to your code as needed when you call the .to() method. Learn more about service lifecycles in Developer Workflow & Concepts.

The service will remain running until you tear it down. When finished, you can run the CLI command kt teardown helloworld. This can also be done inside your Python script with remote_hello.teardown().

Extended Example

To test more thoroughly, try importing your own function or class and using a custom Docker image. Here is an extended demonstration of common Kubetorch features, such as specifying resources explicitly, defining an image for execution, setting up auto-down on inactivity, and running distributed programs with multiple workers.

import kubetorch as kt

from my_module import my_function


def main(args=None):
    img = kt.Image(image_id="my_registry.com/my_image:my_tag").pip_install(["datasets"])
    compute = kt.Compute(
        cpus=4,
        gpus=1,
        memory="12Gi",
        image=img,
        secrets=["wandb"],  # Securely sync secrets into the pod
        inactivity_ttl="1h",  # Teardown after 1 hour of inactivity
    ).distribute("pytorch", workers=4)  # Create 4 replicas and wire up PyTorch distribution

    remote_fn = kt.fn(my_function, name="my_fn").to(compute)
    results = remote_fn(args)
    remote_fn.teardown()  # Free the resources when complete


if __name__ == "__main__":
    main()

For more examples and in-depth guides for real-world use cases, take a look at our Kubetorch Examples directory. For more details on the Kubetorch API, check out the API Reference.

Next Steps

Now that you've tested your setup, you can move on to the next step in our Developer Workflow & Concepts guide.