A ready-to-run example is available here!
OpenHandsCloudWorkspace uses OpenHands Cloud to provision and manage sandboxed environments for agent execution. It provides a seamless experience, with automatic sandbox provisioning, monitoring, and secure execution, and no infrastructure of your own to manage.

Key Concepts

OpenHandsCloudWorkspace

The OpenHandsCloudWorkspace connects to OpenHands Cloud to provision sandboxes:
with OpenHandsCloudWorkspace(
    cloud_api_url="https://app.all-hands.dev",
    cloud_api_key=cloud_api_key,
) as workspace:
This workspace type:
  • Connects to OpenHands Cloud API
  • Automatically provisions sandboxed environments
  • Manages sandbox lifecycle (create, poll status, delete)
  • Handles all infrastructure concerns
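The "poll status" step of that lifecycle amounts to a simple polling loop. The helper below is an illustrative sketch, not the SDK's internal implementation; `get_status` is a hypothetical callable that returns the sandbox's state string:

```python
import time


def wait_for_sandbox(get_status, timeout: float = 300.0, interval: float = 2.0) -> None:
    """Poll a status callable until the sandbox reports RUNNING.

    Illustrative only: `get_status` and the "RUNNING" state string are
    assumptions. The default timeout matches the workspace's documented
    init_timeout of 300 seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "RUNNING":
            return
        time.sleep(interval)
    raise TimeoutError(f"sandbox not ready after {timeout} seconds")
```

When the timeout elapses without the sandbox becoming ready, the loop raises rather than returning a partially initialized workspace.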

Getting Your API Key

To use OpenHands Cloud, you need an API key:
  1. Go to app.all-hands.dev
  2. Sign in to your account
  3. Navigate to Settings → API Keys
  4. Create a new API key
Store this key securely and use it as the OPENHANDS_CLOUD_API_KEY environment variable.

Configuration Options

The OpenHandsCloudWorkspace supports several configuration options:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| cloud_api_url | str | Required | OpenHands Cloud API URL |
| cloud_api_key | str | Required | API key for authentication |
| sandbox_spec_id | str \| None | None | Custom sandbox specification ID |
| init_timeout | float | 300.0 | Timeout for sandbox initialization (seconds) |
| api_timeout | float | 60.0 | Timeout for API requests (seconds) |
| keep_alive | bool | False | Keep sandbox running after cleanup |
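As an example, a workspace using a custom sandbox spec and longer timeouts might be configured as follows. The spec ID is a hypothetical placeholder, and building the dictionary performs no network calls:

```python
import os

# Only cloud_api_url and cloud_api_key are required; the rest are shown
# with illustrative values.
workspace_kwargs = {
    "cloud_api_url": "https://app.all-hands.dev",
    "cloud_api_key": os.environ.get("OPENHANDS_CLOUD_API_KEY", ""),
    "sandbox_spec_id": "my-custom-spec",  # hypothetical custom spec ID
    "init_timeout": 600.0,  # allow extra time for a large sandbox image
    "api_timeout": 120.0,
    "keep_alive": False,
}
# workspace = OpenHandsCloudWorkspace(**workspace_kwargs)
```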

Keep Alive Mode

By default, the sandbox is deleted when the workspace is closed. To keep it running:
workspace = OpenHandsCloudWorkspace(
    cloud_api_url="https://app.all-hands.dev",
    cloud_api_key=cloud_api_key,
    keep_alive=True,
)
This is useful for debugging or when you want to inspect the sandbox state after execution.

Workspace Testing

You can test the workspace before running the agent:
result = workspace.execute_command(
    "echo 'Hello from OpenHands Cloud sandbox!' && pwd"
)
logger.info(f"Command completed: {result.exit_code}, {result.stdout}")
This verifies connectivity to the cloud sandbox and ensures the environment is ready.

Inheriting SaaS Credentials

Instead of providing your own LLM_API_KEY, you can inherit the LLM configuration and secrets from your OpenHands Cloud account. This means you only need OPENHANDS_CLOUD_API_KEY — no separate LLM key required.

get_llm()

Fetches your account’s LLM settings (model, API key, base URL) and returns a ready-to-use LLM instance:
with OpenHandsCloudWorkspace(...) as workspace:
    llm = workspace.get_llm()
    agent = Agent(llm=llm, tools=get_default_tools())
You can override any parameter:
llm = workspace.get_llm(model="gpt-4o", temperature=0.5)
Under the hood, get_llm() calls GET /api/v1/users/me?expose_secrets=true, sending your Cloud API key in the Authorization header plus the sandbox’s X-Session-API-Key. That session key is issued by OpenHands Cloud for the running sandbox, so it scopes the request to that sandbox rather than acting like a separately provisioned second credential.
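As a rough sketch of that request, the URL and headers look like the following. The `Bearer` scheme for the Authorization header is an assumption here; only `X-Session-API-Key` is named explicitly above, and this is not the SDK's actual implementation:

```python
from urllib.parse import urljoin


def build_me_request(cloud_api_url: str, cloud_api_key: str, session_api_key: str):
    """Sketch the /users/me call described above (illustrative only)."""
    url = urljoin(cloud_api_url.rstrip("/") + "/", "api/v1/users/me")
    url += "?expose_secrets=true"
    headers = {
        "Authorization": f"Bearer {cloud_api_key}",  # assumed auth scheme
        "X-Session-API-Key": session_api_key,  # sandbox-scoped session key
    }
    return url, headers
```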

get_secrets()

Builds LookupSecret references for your SaaS-configured secrets. Raw values never transit through the SDK client — they are resolved lazily by the agent-server inside the sandbox:
with OpenHandsCloudWorkspace(...) as workspace:
    secrets = workspace.get_secrets()
    conversation.update_secrets(secrets)
You can also filter to specific secrets:
gh_secrets = workspace.get_secrets(names=["GITHUB_TOKEN"])
See the SaaS Credentials example below for a complete working example.

Comparison with Other Workspace Types

| Feature | OpenHandsCloudWorkspace | APIRemoteWorkspace | DockerWorkspace |
| --- | --- | --- | --- |
| Infrastructure | OpenHands Cloud | Runtime API | Local Docker |
| Authentication | API Key | API Key | None |
| Setup Required | None | Runtime API access | Docker installed |
| Custom Images | Via sandbox specs | Direct image specification | Direct image specification |
| Best For | Production use | Custom runtime environments | Local development |

Ready-to-run Example

This example shows how to connect to OpenHands Cloud for fully managed agent execution:
examples/02_remote_agent_server/07_convo_with_cloud_workspace.py
"""Example: OpenHandsCloudWorkspace for OpenHands Cloud API.

This example demonstrates using OpenHandsCloudWorkspace to provision a sandbox
via OpenHands Cloud (app.all-hands.dev) and run an agent conversation.

Usage:
  uv run examples/02_remote_agent_server/07_convo_with_cloud_workspace.py

Requirements:
  - LLM_API_KEY: API key for direct LLM provider access (e.g., Anthropic API key)
  - OPENHANDS_CLOUD_API_KEY: API key for OpenHands Cloud access

Note:
  The LLM configuration is sent to the cloud sandbox, so you need an API key
  that works directly with the LLM provider (not a local proxy). If using
  Anthropic, set LLM_API_KEY to your Anthropic API key.
"""

import os
import time

from pydantic import SecretStr

from openhands.sdk import (
    LLM,
    Conversation,
    RemoteConversation,
    get_logger,
)
from openhands.tools.preset.default import get_default_agent
from openhands.workspace import OpenHandsCloudWorkspace


logger = get_logger(__name__)


api_key = os.getenv("LLM_API_KEY")
assert api_key, "LLM_API_KEY required"

# Note: Don't use a local proxy URL here - the cloud sandbox needs direct access
# to the LLM provider. Use None for base_url to let LiteLLM use the default
# provider endpoint, or specify the provider's direct URL.
llm = LLM(
    usage_id="agent",
    model=os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929"),
    base_url=os.getenv("LLM_BASE_URL") or None,
    api_key=SecretStr(api_key),
)

cloud_api_key = os.getenv("OPENHANDS_CLOUD_API_KEY")
if not cloud_api_key:
    logger.error("OPENHANDS_CLOUD_API_KEY required")
    exit(1)

cloud_api_url = os.getenv("OPENHANDS_CLOUD_API_URL", "https://app.all-hands.dev")
logger.info(f"Using OpenHands Cloud API: {cloud_api_url}")

with OpenHandsCloudWorkspace(
    cloud_api_url=cloud_api_url,
    cloud_api_key=cloud_api_key,
) as workspace:
    agent = get_default_agent(llm=llm, cli_mode=True)
    received_events: list = []
    last_event_time = {"ts": time.time()}

    def event_callback(event) -> None:
        received_events.append(event)
        last_event_time["ts"] = time.time()

    result = workspace.execute_command(
        "echo 'Hello from OpenHands Cloud sandbox!' && pwd"
    )
    logger.info(f"Command completed: {result.exit_code}, {result.stdout}")

    conversation = Conversation(
        agent=agent, workspace=workspace, callbacks=[event_callback]
    )
    assert isinstance(conversation, RemoteConversation)

    try:
        conversation.send_message(
            "Read the current repo and write 3 facts about the project into FACTS.txt."
        )
        conversation.run()

        while time.time() - last_event_time["ts"] < 2.0:
            time.sleep(0.1)

        conversation.send_message("Great! Now delete that file.")
        conversation.run()
        cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
        print(f"EXAMPLE_COST: {cost}")
    finally:
        conversation.close()

    logger.info("✅ Conversation completed successfully.")
    logger.info(f"Total {len(received_events)} events received during conversation.")
Running the Example
export LLM_API_KEY="your-llm-api-key"
export OPENHANDS_CLOUD_API_KEY="your-cloud-api-key"
# Optional: specify a custom sandbox spec
# export OPENHANDS_SANDBOX_SPEC_ID="your-sandbox-spec-id"
cd agent-sdk
uv run python examples/02_remote_agent_server/07_convo_with_cloud_workspace.py

SaaS Credentials Example

This example demonstrates the simplified flow where your OpenHands Cloud account’s LLM configuration and secrets are inherited automatically — no need to provide LLM_API_KEY separately:
examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py
"""Example: Inherit SaaS credentials via OpenHandsCloudWorkspace.

This example shows the simplified flow where your OpenHands Cloud account's
LLM configuration and secrets are inherited automatically — no need to
provide LLM_API_KEY separately.

Compared to 07_convo_with_cloud_workspace.py (which requires a separate
LLM_API_KEY), this approach uses:
  - workspace.get_llm()     → fetches LLM config from your SaaS account
  - workspace.get_secrets()  → builds lazy LookupSecret references for your secrets

Raw secret values never transit through the SDK client. The agent-server
inside the sandbox resolves them on demand.

Usage:
  uv run examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py

Requirements:
  - OPENHANDS_CLOUD_API_KEY: API key for OpenHands Cloud (the only credential needed)

Optional:
  - OPENHANDS_CLOUD_API_URL: Override the Cloud API URL (default: https://app.all-hands.dev)
  - LLM_MODEL: Override the model from your SaaS settings
"""

import os
import time

from openhands.sdk import (
    Conversation,
    RemoteConversation,
    get_logger,
)
from openhands.tools.preset.default import get_default_agent
from openhands.workspace import OpenHandsCloudWorkspace


logger = get_logger(__name__)


cloud_api_key = os.getenv("OPENHANDS_CLOUD_API_KEY")
if not cloud_api_key:
    logger.error("OPENHANDS_CLOUD_API_KEY required")
    exit(1)

cloud_api_url = os.getenv("OPENHANDS_CLOUD_API_URL", "https://app.all-hands.dev")
logger.info(f"Using OpenHands Cloud API: {cloud_api_url}")

with OpenHandsCloudWorkspace(
    cloud_api_url=cloud_api_url,
    cloud_api_key=cloud_api_key,
) as workspace:
    # --- LLM from SaaS account settings ---
    # get_llm() calls GET /users/me?expose_secrets=true,
    # sending your Cloud API key plus the sandbox session
    # key that OpenHands Cloud issued for this workspace.
    # It returns a fully configured LLM instance.
    # Override any parameter: workspace.get_llm(model="gpt-4o")
    llm = workspace.get_llm()
    logger.info(f"LLM configured: model={llm.model}")

    # --- Secrets from SaaS account ---
    # get_secrets() fetches secret *names* (not values) and builds LookupSecret
    # references. Values are resolved lazily inside the sandbox.
    secrets = workspace.get_secrets()
    logger.info(f"Available secrets: {list(secrets.keys())}")

    # Build agent and conversation
    agent = get_default_agent(llm=llm, cli_mode=True)
    received_events: list = []
    last_event_time = {"ts": time.time()}

    def event_callback(event) -> None:
        received_events.append(event)
        last_event_time["ts"] = time.time()

    conversation = Conversation(
        agent=agent, workspace=workspace, callbacks=[event_callback]
    )
    assert isinstance(conversation, RemoteConversation)

    # Inject SaaS secrets into the conversation
    if secrets:
        conversation.update_secrets(secrets)
        logger.info(f"Injected {len(secrets)} secrets into conversation")

    # Build a prompt that exercises the injected secrets by asking the agent to
    # print the last 50% of each token — proves values resolved without leaking
    # full secrets in logs.
    secret_names = list(secrets.keys()) if secrets else []
    if secret_names:
        names_str = ", ".join(f"${name}" for name in secret_names)
        prompt = (
            f"For each of these environment variables: {names_str} — "
            "print the variable name and the LAST 50% of its value "
            "(i.e. the second half of the string). "
            "Then write a short summary into SECRETS_CHECK.txt."
        )
    else:
        # No secret was configured on OpenHands Cloud
        prompt = "Tell me, is there any secret configured for you?"

    try:
        conversation.send_message(prompt)
        conversation.run()

        while time.time() - last_event_time["ts"] < 2.0:
            time.sleep(0.1)

        cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
        print(f"EXAMPLE_COST: {cost}")
    finally:
        conversation.close()

    logger.info("✅ Conversation completed successfully.")
    logger.info(f"Total {len(received_events)} events received during conversation.")
Running the SaaS Credentials Example
export OPENHANDS_CLOUD_API_KEY="your-cloud-api-key"
# Optional: override LLM model from your SaaS settings
# export LLM_MODEL="gpt-4o"
cd agent-sdk
uv run python examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py

SaaS Runtime Mode

Use saas_runtime_mode=True when your SDK script is already running inside an OpenHands Cloud sandbox — for example, as part of an automation workflow deployed to the cloud.

When to Use This Mode

| Scenario | Normal Mode | SaaS Runtime Mode |
| --- | --- | --- |
| Script runs on your local machine | ✓ | |
| Script runs in CI (GitHub Actions runner) | ✓ | |
| Script deployed to run inside Cloud sandbox | | ✓ |
| Automation service executes your script | | ✓ |

How It Differs from Normal Mode

In normal mode, OpenHandsCloudWorkspace provisions a new sandbox via the Cloud API:
Normal mode: Your machine communicates with OpenHands Cloud via API
In SaaS runtime mode, your script is already inside the sandbox and connects to the local agent-server:
SaaS runtime mode: Script runs inside Cloud sandbox
Key differences:
  • No sandbox provisioning — skips create/wait/delete lifecycle
  • Connects to localhost — talks to the agent-server already running in the sandbox
  • SaaS credentials still work: get_llm() and get_secrets() call the Cloud API

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| saas_runtime_mode | bool | False | Skip sandbox provisioning, connect to localhost |
| agent_server_port | int | 60000 | Port of the local agent-server |
| automation_callback_url | str \| None | None | URL to POST completion status on exit |
| automation_run_id | str \| None | None | ID included in callback payload |

Environment Variables

When running inside a Cloud sandbox, these environment variables are set automatically:
| Variable | Description |
| --- | --- |
| SANDBOX_ID | Sandbox identifier for get_llm() / get_secrets() |
| SESSION_API_KEY | Session auth key (fallback: OH_SESSION_API_KEYS_0) |
| AGENT_SERVER_PORT | Port override (optional) |
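The fallback rules in that table can be expressed as a small resolver. This is a sketch of the documented behavior, not the SDK's code:

```python
def resolve_session_api_key(env: dict) -> "str | None":
    # SESSION_API_KEY wins; OH_SESSION_API_KEYS_0 is the documented fallback.
    return env.get("SESSION_API_KEY") or env.get("OH_SESSION_API_KEYS_0")


def resolve_agent_server_port(env: dict, default: int = 60000) -> int:
    # AGENT_SERVER_PORT overrides the default agent_server_port of 60000.
    return int(env.get("AGENT_SERVER_PORT", default))
```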

Example: Automation Script Inside a Cloud Sandbox

This script is designed to be uploaded and executed inside an OpenHands Cloud sandbox:
# my_automation.py — runs INSIDE a Cloud sandbox
import os
from openhands.workspace import OpenHandsCloudWorkspace
from openhands.sdk import Conversation
from openhands.tools.preset.default import get_default_agent

with OpenHandsCloudWorkspace(
    saas_runtime_mode=True,
    cloud_api_url="https://app.all-hands.dev",
    cloud_api_key=os.environ["OPENHANDS_API_KEY"],
    automation_callback_url=os.environ.get("CALLBACK_URL"),
    automation_run_id=os.environ.get("RUN_ID"),
) as workspace:
    # No sandbox created — connects to local agent-server at localhost:60000
    
    # SaaS credentials still work
    llm = workspace.get_llm()
    secrets = workspace.get_secrets()
    
    agent = get_default_agent(llm=llm, cli_mode=True)
    conversation = Conversation(agent=agent, workspace=workspace)
    
    if secrets:
        conversation.update_secrets(secrets)
    
    conversation.send_message("Perform the automation task")
    conversation.run()
    conversation.close()

# On exit: completion callback sent automatically (if callback URL configured)

Orchestration Pattern

To deploy an automation script that uses SaaS runtime mode:
  1. Create a sandbox using normal mode (from your local machine or CI):
    with OpenHandsCloudWorkspace(
        cloud_api_url="https://app.all-hands.dev",
        cloud_api_key=api_key,
        keep_alive=True,  # Don't delete after setup
    ) as workspace:
        workspace.file_upload("my_automation.py", "/workspace/my_automation.py")
    
  2. Execute the script inside the sandbox:
    workspace.execute_command("python /workspace/my_automation.py")
    
  3. The script uses saas_runtime_mode=True to connect to the local agent-server
  4. Receive callback when the script completes (optional)
This pattern enables fire-and-forget automation where your orchestrator doesn’t need to maintain a connection for the entire agent session.
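Put together, the orchestrator side of steps 1 and 2 reduces to two workspace operations. The helper below is a sketch that takes an already-open workspace (created with keep_alive=True as shown in step 1); the /workspace remote path is illustrative:

```python
def deploy_automation(workspace, local_script: str = "my_automation.py") -> None:
    """Upload the automation script and launch it inside the sandbox.

    `workspace` is assumed to expose the file_upload and execute_command
    methods shown earlier in this document.
    """
    remote_path = f"/workspace/{local_script}"
    workspace.file_upload(local_script, remote_path)
    # The uploaded script runs with saas_runtime_mode=True and reports back
    # via its automation callback URL, so no polling is needed here.
    workspace.execute_command(f"python {remote_path}")
```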
SaaS runtime mode is primarily used by the OpenHands automation service. For most SDK users, normal mode with get_llm() and get_secrets() provides a simpler experience.

Next Steps