Hello World

You can test that your system is configured correctly by dispatching a minimal "hello world" script that runs on your remote Kubernetes cluster. Save this file in a folder that is inside a git repository or part of a Python project (Kubetorch syncs your code based on either the git root or on finding a pyproject.toml, requirements.txt, or similar file).

```python
import kubetorch as kt

def hello_world(num_prints=1):
    for print_num in range(num_prints):
        print('Hello world ', print_num)

if __name__ == "__main__":
    # define compute
    compute = kt.Compute(cpus=1)

    # deploy function as a service
    remote_hello = kt.fn(hello_world).to(compute)

    # call the remote service
    results = remote_hello(5)
```

Run kt list via the Kubetorch CLI to view all the services that have been dispatched to your cluster. You should see a service named helloworld.

To test more thoroughly, try importing your own function or class and using a custom Docker image. Here is an extended demonstration of common Kubetorch features you might want to use, such as specifying resources more precisely, defining an image for execution, setting up auto-teardown on inactivity, and running distributed programs with multiple workers.

```python
import kubetorch as kt

from my_module import my_function

def main(args=None):
    img = kt.Image(image_id="my_registry.com/my_image:my_tag").pip_install(['datasets'])
    compute = kt.Compute(
        cpus=4,
        gpus=1,
        memory="12Gi",
        image=img,
        secrets=["wandb"],  # Securely sync secrets into the pod
        inactivity_ttl="1h",  # Teardown after 1 hour of inactivity
    ).distribute("pytorch", workers=4)  # Create 4 replicas and wire up PyTorch distribution

    remote_fn = kt.fn(my_function, name="my_fn").to(compute)
    results = remote_fn(args)
    remote_fn.teardown()  # Free the resources when complete

if __name__ == "__main__":
    main()
```
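The example above assumes my_function is importable from a local my_module. As a minimal sketch (the function name and body here are hypothetical, not part of Kubetorch), a distribution-aware function might read the standard PyTorch environment variables RANK and WORLD_SIZE, which launchers like torchrun set on each worker:

```python
import os

# Hypothetical stand-in for my_module.my_function. Assumes each worker's
# environment carries the standard torch.distributed variables RANK and
# WORLD_SIZE; falls back to single-worker defaults when they are unset.
def my_function(args=None):
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    print(f"Running on worker {rank} of {world_size} with args: {args}")
    return rank
```

When called on four workers via .distribute("pytorch", workers=4), each replica would see its own rank; run locally without those variables set, it reports worker 0 of 1.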