OpenHandsCloudWorkspace uses OpenHands Cloud to provision and manage sandboxed environments for agent execution. It provides a seamless experience with automatic sandbox provisioning, monitoring, and secure execution, without requiring you to manage your own infrastructure.
Instead of providing your own LLM_API_KEY, you can inherit the LLM configuration and secrets from your OpenHands Cloud account. This means you only need OPENHANDS_CLOUD_API_KEY — no separate LLM key required.
Under the hood, get_llm() calls GET /api/v1/users/me?expose_secrets=true, sending your Cloud API key in the Authorization header plus the sandbox’s X-Session-API-Key. That session key is issued by OpenHands Cloud for the running sandbox, so it scopes the request to that sandbox rather than acting like a separately provisioned second credential.
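That request can be pictured with a small standalone sketch. The `Bearer` scheme and the placeholder key values below are illustrative assumptions, not the SDK's exact wire format; in practice `workspace.get_llm()` builds and sends this for you:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Illustrative values: in the SDK these come from your workspace config
# and the sandbox that OpenHands Cloud provisioned.
CLOUD_API_URL = "https://app.all-hands.dev"
CLOUD_API_KEY = "your-cloud-api-key"      # OPENHANDS_CLOUD_API_KEY
SESSION_API_KEY = "sandbox-session-key"   # issued per sandbox by OpenHands Cloud

# Build (but do not send) the request get_llm() issues under the hood.
query = urlencode({"expose_secrets": "true"})
req = Request(
    f"{CLOUD_API_URL}/api/v1/users/me?{query}",
    headers={
        "Authorization": f"Bearer {CLOUD_API_KEY}",
        "X-Session-API-Key": SESSION_API_KEY,
    },
)
```

Because the session key is scoped to one sandbox, a leaked session key cannot be replayed against other workspaces in your account.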
workspace.get_secrets() builds LookupSecret references for your SaaS-configured secrets. Raw values never transit through the SDK client; they are resolved lazily by the agent-server inside the sandbox:
```python
with OpenHandsCloudWorkspace(...) as workspace:
    secrets = workspace.get_secrets()
    conversation.update_secrets(secrets)
```
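The lazy-resolution behavior can be illustrated with a minimal sketch. `LazySecret` and the resolver callback below are hypothetical stand-ins, not SDK classes; the real `LookupSecret` likewise carries only a secret *name* on the client side and defers value retrieval to the agent-server inside the sandbox:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class LazySecret:
    """Hypothetical stand-in for a lazy secret reference."""

    name: str
    resolver: Callable[[str], str]
    _value: Optional[str] = None

    def get(self) -> str:
        # Resolve on first access only; before this point the holder
        # of the reference never sees the raw value.
        if self._value is None:
            self._value = self.resolver(self.name)
        return self._value
```

In the SDK, the resolver role is played by the agent-server inside the sandbox, so even `conversation.update_secrets(secrets)` ships only references, never values.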
```python
"""Example: OpenHandsCloudWorkspace for OpenHands Cloud API.

This example demonstrates using OpenHandsCloudWorkspace to provision a sandbox
via OpenHands Cloud (app.all-hands.dev) and run an agent conversation.

Usage:
    uv run examples/02_remote_agent_server/06_convo_with_cloud_workspace.py

Requirements:
    - LLM_API_KEY: API key for direct LLM provider access (e.g., Anthropic API key)
    - OPENHANDS_CLOUD_API_KEY: API key for OpenHands Cloud access

Note: The LLM configuration is sent to the cloud sandbox, so you need an API key
that works directly with the LLM provider (not a local proxy). If using Anthropic,
set LLM_API_KEY to your Anthropic API key.
"""

import os
import time

from pydantic import SecretStr

from openhands.sdk import (
    LLM,
    Conversation,
    RemoteConversation,
    get_logger,
)
from openhands.tools.preset.default import get_default_agent
from openhands.workspace import OpenHandsCloudWorkspace

logger = get_logger(__name__)

api_key = os.getenv("LLM_API_KEY")
assert api_key, "LLM_API_KEY required"

# Note: Don't use a local proxy URL here - the cloud sandbox needs direct access
# to the LLM provider. Use None for base_url to let LiteLLM use the default
# provider endpoint, or specify the provider's direct URL.
llm = LLM(
    usage_id="agent",
    model=os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929"),
    base_url=os.getenv("LLM_BASE_URL") or None,
    api_key=SecretStr(api_key),
)

cloud_api_key = os.getenv("OPENHANDS_CLOUD_API_KEY")
if not cloud_api_key:
    logger.error("OPENHANDS_CLOUD_API_KEY required")
    exit(1)

cloud_api_url = os.getenv("OPENHANDS_CLOUD_API_URL", "https://app.all-hands.dev")
logger.info(f"Using OpenHands Cloud API: {cloud_api_url}")

with OpenHandsCloudWorkspace(
    cloud_api_url=cloud_api_url,
    cloud_api_key=cloud_api_key,
) as workspace:
    agent = get_default_agent(llm=llm, cli_mode=True)

    received_events: list = []
    last_event_time = {"ts": time.time()}

    def event_callback(event) -> None:
        received_events.append(event)
        last_event_time["ts"] = time.time()

    result = workspace.execute_command(
        "echo 'Hello from OpenHands Cloud sandbox!' && pwd"
    )
    logger.info(f"Command completed: {result.exit_code}, {result.stdout}")

    conversation = Conversation(
        agent=agent, workspace=workspace, callbacks=[event_callback]
    )
    assert isinstance(conversation, RemoteConversation)

    try:
        conversation.send_message(
            "Read the current repo and write 3 facts about the project into FACTS.txt."
        )
        conversation.run()

        while time.time() - last_event_time["ts"] < 2.0:
            time.sleep(0.1)

        conversation.send_message("Great! Now delete that file.")
        conversation.run()

        cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
        print(f"EXAMPLE_COST: {cost}")
    finally:
        conversation.close()

logger.info("✅ Conversation completed successfully.")
logger.info(f"Total {len(received_events)} events received during conversation.")
```
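The example above waits for the event stream to go quiet before sending a follow-up message. That idle-wait can be factored into a small helper; this is a sketch of the same pattern, not an SDK function:

```python
import time


def wait_until_idle(last_event_time: dict, quiet: float = 2.0, poll: float = 0.1) -> None:
    """Block until at least `quiet` seconds have passed since the last event.

    `last_event_time` is a mutable dict like {"ts": <timestamp>} updated by the
    event callback, so updates from the callback are visible to this loop.
    """
    while time.time() - last_event_time["ts"] < quiet:
        time.sleep(poll)
```

A mutable dict is used instead of a bare float because the callback and the waiting code need to share one updatable timestamp.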
Running the Example
```shell
export LLM_API_KEY="your-llm-api-key"
export OPENHANDS_CLOUD_API_KEY="your-cloud-api-key"

# Optional: specify a custom sandbox spec
# export OPENHANDS_SANDBOX_SPEC_ID="your-sandbox-spec-id"

cd agent-sdk
uv run python examples/02_remote_agent_server/07_convo_with_cloud_workspace.py
```
This example demonstrates the simplified flow where your OpenHands Cloud account’s LLM configuration and secrets are inherited automatically — no need to provide LLM_API_KEY separately:
```python
"""Example: Inherit SaaS credentials via OpenHandsCloudWorkspace.

This example shows the simplified flow where your OpenHands Cloud account's
LLM configuration and secrets are inherited automatically — no need to
provide LLM_API_KEY separately.

Compared to 07_convo_with_cloud_workspace.py (which requires a separate
LLM_API_KEY), this approach uses:
    - workspace.get_llm() → fetches LLM config from your SaaS account
    - workspace.get_secrets() → builds lazy LookupSecret references for your secrets

Raw secret values never transit through the SDK client. The agent-server
inside the sandbox resolves them on demand.

Usage:
    uv run examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py

Requirements:
    - OPENHANDS_CLOUD_API_KEY: API key for OpenHands Cloud (the only credential needed)

Optional:
    - OPENHANDS_CLOUD_API_URL: Override the Cloud API URL (default: https://app.all-hands.dev)
    - LLM_MODEL: Override the model from your SaaS settings
"""

import os
import time

from openhands.sdk import (
    Conversation,
    RemoteConversation,
    get_logger,
)
from openhands.tools.preset.default import get_default_agent
from openhands.workspace import OpenHandsCloudWorkspace

logger = get_logger(__name__)

cloud_api_key = os.getenv("OPENHANDS_CLOUD_API_KEY")
if not cloud_api_key:
    logger.error("OPENHANDS_CLOUD_API_KEY required")
    exit(1)

cloud_api_url = os.getenv("OPENHANDS_CLOUD_API_URL", "https://app.all-hands.dev")
logger.info(f"Using OpenHands Cloud API: {cloud_api_url}")

with OpenHandsCloudWorkspace(
    cloud_api_url=cloud_api_url,
    cloud_api_key=cloud_api_key,
) as workspace:
    # --- LLM from SaaS account settings ---
    # get_llm() calls GET /users/me?expose_secrets=true,
    # sending your Cloud API key plus the sandbox session
    # key that OpenHands Cloud issued for this workspace.
    # It returns a fully configured LLM instance.
    # Override any parameter: workspace.get_llm(model="gpt-4o")
    llm = workspace.get_llm()
    logger.info(f"LLM configured: model={llm.model}")

    # --- Secrets from SaaS account ---
    # get_secrets() fetches secret *names* (not values) and builds LookupSecret
    # references. Values are resolved lazily inside the sandbox.
    secrets = workspace.get_secrets()
    logger.info(f"Available secrets: {list(secrets.keys())}")

    # Build agent and conversation
    agent = get_default_agent(llm=llm, cli_mode=True)

    received_events: list = []
    last_event_time = {"ts": time.time()}

    def event_callback(event) -> None:
        received_events.append(event)
        last_event_time["ts"] = time.time()

    conversation = Conversation(
        agent=agent, workspace=workspace, callbacks=[event_callback]
    )
    assert isinstance(conversation, RemoteConversation)

    # Inject SaaS secrets into the conversation
    if secrets:
        conversation.update_secrets(secrets)
        logger.info(f"Injected {len(secrets)} secrets into conversation")

    # Build a prompt that exercises the injected secrets by asking the agent to
    # print the last 50% of each token — proves values resolved without leaking
    # full secrets in logs.
    secret_names = list(secrets.keys()) if secrets else []
    if secret_names:
        names_str = ", ".join(f"${name}" for name in secret_names)
        prompt = (
            f"For each of these environment variables: {names_str} — "
            "print the variable name and the LAST 50% of its value "
            "(i.e. the second half of the string). "
            "Then write a short summary into SECRETS_CHECK.txt."
        )
    else:
        # No secret was configured on OpenHands Cloud
        prompt = "Tell me, is there any secret configured for you?"

    try:
        conversation.send_message(prompt)
        conversation.run()

        while time.time() - last_event_time["ts"] < 2.0:
            time.sleep(0.1)

        cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
        print(f"EXAMPLE_COST: {cost}")
    finally:
        conversation.close()

logger.info("✅ Conversation completed successfully.")
logger.info(f"Total {len(received_events)} events received during conversation.")
```
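The "last 50%" masking rule the prompt asks the agent to apply amounts to a one-line slice. A sketch of the same rule (a local illustration, not an SDK helper):

```python
def second_half(value: str) -> str:
    """Return the last 50% of a string.

    Integer division rounds the split point down, so odd-length strings
    show one character more than half; the first half never appears.
    """
    return value[len(value) // 2:]
```

Printing only `second_half(token)` lets you confirm that a secret resolved inside the sandbox without ever writing the full value to logs.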
Running the SaaS Credentials Example
```shell
export OPENHANDS_CLOUD_API_KEY="your-cloud-api-key"

# Optional: override LLM model from your SaaS settings
# export LLM_MODEL="gpt-4o"

cd agent-sdk
uv run python examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py
```
Use saas_runtime_mode=True when your SDK script is already running inside an OpenHands Cloud sandbox — for example, as part of an automation workflow deployed to the cloud.
To deploy an automation script that uses SaaS runtime mode:
Create a sandbox using normal mode (from your local machine or CI):
```python
with OpenHandsCloudWorkspace(
    cloud_api_url="https://app.all-hands.dev",
    cloud_api_key=api_key,
    keep_alive=True,  # Don't delete after setup
) as workspace:
    workspace.file_upload("my_automation.py", "/workspace/my_automation.py")
```
Inside the sandbox, the script uses saas_runtime_mode=True to connect to the local agent-server.
Optionally, receive a callback when the script completes.
This pattern enables fire-and-forget automation where your orchestrator doesn’t need to maintain a connection for the entire agent session.
SaaS runtime mode is primarily used by the OpenHands automation service. For most SDK users, normal mode with get_llm() and get_secrets() provides a simpler experience.