The SDK supports multiple workspace types. All share the same API — switching between them requires only changing the workspace argument to Conversation.

Local Scenarios

Use these when you’re developing on your own machine and want the agent to run locally.

Development and Testing

For the fastest iteration cycle, use a simple path string. The agent runs in your Python process with direct filesystem access:
conversation = Conversation(agent=agent, workspace="./my-project")
Best for: Rapid prototyping, debugging agent behavior, learning the SDK.
Trade-off: No isolation — the agent can access your entire filesystem and network.

Local Development with Isolation

When you need isolation but still want to run locally, use DockerWorkspace:
from openhands.workspace import DockerWorkspace

with DockerWorkspace(
    server_image="ghcr.io/openhands/agent-server:latest-python",
) as workspace:
    conversation = Conversation(agent=agent, workspace=workspace)
Best for: Testing agent behavior safely, verifying agents work in a sandboxed environment before deployment.
Requirements: Docker installed locally.
For HPC environments using Singularity/Apptainer instead of Docker, see ApptainerWorkspace.
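A sketch of the Apptainer variant, assuming ApptainerWorkspace mirrors the DockerWorkspace constructor shown above (the server_image parameter is carried over as an assumption; check the ApptainerWorkspace reference for the confirmed signature):

```python
from openhands.workspace import ApptainerWorkspace

# Assumed to mirror DockerWorkspace above; verify parameter names against
# the ApptainerWorkspace reference before relying on this.
with ApptainerWorkspace(
    server_image="ghcr.io/openhands/agent-server:latest-python",
) as workspace:
    conversation = Conversation(agent=agent, workspace=workspace)
```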

Remote & Integration Scenarios

Use these when building applications, integrating with CI/CD, or deploying agents to production.

Building Applications with OpenHands Cloud

When you’re building an application that uses OpenHands agents, OpenHandsCloudWorkspace provides fully managed infrastructure:
import os

from openhands.workspace import OpenHandsCloudWorkspace

with OpenHandsCloudWorkspace(
    cloud_api_url="https://app.all-hands.dev",
    cloud_api_key=os.environ["OPENHANDS_CLOUD_API_KEY"],
) as workspace:
    llm = workspace.get_llm()  # Inherit LLM config from your Cloud account
    secrets = workspace.get_secrets()  # Inject secrets without exposing them
    
    agent = get_default_agent(llm=llm)
    conversation = Conversation(agent=agent, workspace=workspace)
Best for: Production applications, SaaS integrations, teams that don’t want to manage infrastructure.
What you get:
  • Managed sandbox provisioning and lifecycle
  • LLM configuration inherited from your Cloud account (no API keys in your code)
  • Secrets injected securely without transiting through your application
  • No infrastructure to manage

CI/CD Pipeline Integration

For running agents in CI/CD pipelines (GitHub Actions, GitLab CI, etc.), you have two options.
Option A: DockerWorkspace — Run the sandbox on the CI runner itself:
# In your CI script
with DockerWorkspace(...) as workspace:
    conversation = Conversation(agent=agent, workspace=workspace)
Option B: OpenHandsCloudWorkspace — Offload execution to OpenHands Cloud:
# In your CI script
with OpenHandsCloudWorkspace(
    cloud_api_url="https://app.all-hands.dev",
    cloud_api_key=os.environ["OPENHANDS_CLOUD_API_KEY"],
) as workspace:
    conversation = Conversation(agent=agent, workspace=workspace)
| Consideration | DockerWorkspace | OpenHandsCloudWorkspace |
|---|---|---|
| Runner requirements | Docker-in-Docker or privileged | None (API calls only) |
| Resource usage | Consumes runner resources | Offloaded to Cloud |
| Secrets management | You manage | Inherited from Cloud account |
| Setup complexity | Higher | Lower |

Running SDK Scripts Inside Cloud Sandboxes

For advanced orchestration, you may want to run SDK scripts inside a Cloud sandbox rather than from outside. This pattern is useful when:
  • You want fire-and-forget execution — your orchestrator doesn’t maintain a connection for the entire agent session
  • You need nested agent execution — an outer agent spawns inner agents
  • You’re building an automation service that deploys user-provided scripts
This uses saas_runtime_mode=True. See SaaS Runtime Mode for the full pattern.
[Figure: SaaS Runtime Mode orchestration pattern]
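A sketch of where the flag might sit, assuming saas_runtime_mode is passed as a constructor argument to OpenHandsCloudWorkspace (an unverified assumption; the SaaS Runtime Mode guide has the confirmed pattern):

```python
import os

from openhands.workspace import OpenHandsCloudWorkspace

# Hypothetical placement of saas_runtime_mode: the keyword's location is an
# assumption, not confirmed API. See the SaaS Runtime Mode guide.
with OpenHandsCloudWorkspace(
    cloud_api_url="https://app.all-hands.dev",
    cloud_api_key=os.environ["OPENHANDS_CLOUD_API_KEY"],
    saas_runtime_mode=True,
) as workspace:
    conversation = Conversation(agent=agent, workspace=workspace)
```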

Enterprise: Self-Managed Infrastructure

If you’re running OpenHands Enterprise and need low-level control over runtime allocation, use APIRemoteWorkspace:
from openhands.workspace import APIRemoteWorkspace

with APIRemoteWorkspace(
    runtime_api_url="https://runtime.example.com",
    runtime_api_key=os.environ["RUNTIME_API_KEY"],
    server_image="ghcr.io/openhands/agent-server:latest-python",
) as workspace:
    conversation = Conversation(agent=agent, workspace=workspace)
Best for: Organizations that need fine-grained resource management, custom container images, or must run on their own infrastructure.
With APIRemoteWorkspace, you are responsible for:
  • Managing Runtime API credentials and access
  • Container image selection and updates
  • Resource allocation and scaling decisions
  • LLM and secret configuration (no SaaS credential inheritance)
For most use cases, OpenHandsCloudWorkspace provides a simpler experience.

Quick Reference

| Workspace | Best For | Infrastructure | Isolated | SaaS |
|---|---|---|---|---|
| LocalWorkspace | Development, testing | None | ✗ | ✗ |
| DockerWorkspace | Local isolation, CI/CD | Local Docker | ✓ | ✗ |
| ApptainerWorkspace | HPC, shared compute | Singularity | ✓ | ✗ |
| OpenHandsCloudWorkspace | Production, managed | OpenHands Cloud | ✓ | ✓ |
| APIRemoteWorkspace | Enterprise, low-level control | Runtime API | ✓ | ✗ |

How Workspaces Relate to Conversations

The Conversation factory automatically selects the appropriate implementation:
| Workspace Type | Conversation Type | Where Agent Runs |
|---|---|---|
| Path / LocalWorkspace | LocalConversation | Your Python process |
| Any RemoteWorkspace | RemoteConversation | On the agent server |
# LocalConversation (agent runs in your process)
conversation = Conversation(agent=agent, workspace="./project")

# RemoteConversation (agent runs on agent server)
with DockerWorkspace(...) as workspace:
    conversation = Conversation(agent=agent, workspace=workspace)
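The dispatch rule itself can be pictured with stub classes (an illustrative stand-in, not the SDK's actual implementation; every name here is hypothetical except the rule it encodes):

```python
# Illustrative stand-in for the Conversation factory's selection rule:
# a plain path string (or local workspace) yields a local conversation,
# any remote workspace yields a remote one. Stub classes only.
class RemoteWorkspace:  # stands in for Docker/Cloud/API/Apptainer workspaces
    pass

class LocalConversation:
    def __init__(self, workspace):
        self.workspace = workspace

class RemoteConversation:
    def __init__(self, workspace):
        self.workspace = workspace

def make_conversation(workspace):
    # Remote workspaces run the agent on the agent server;
    # everything else runs in your Python process.
    if isinstance(workspace, RemoteWorkspace):
        return RemoteConversation(workspace)
    return LocalConversation(workspace)

print(type(make_conversation("./project")).__name__)        # LocalConversation
print(type(make_conversation(RemoteWorkspace())).__name__)  # RemoteConversation
```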

Feature Comparison

| Feature | Local | Docker | Cloud | API | Apptainer |
|---|---|---|---|---|---|
| Setup | No setup required | Docker needed | Cloud API key | Runtime API access | Apptainer needed |
| File isolation | ✗ | ✓ | ✓ | ✓ | ✓ |
| Network isolation | ✗ | ✓ | ✓ | ✓ | ✓ |
| get_llm() | ✗ | ✗ | ✓ | ✗ | ✗ |
| get_secrets() | ✗ | ✗ | ✓ | ✗ | ✗ |
| Pause/Resume | ✗ | ✓ | ✓ | ✓ | ✗ |
| Custom images | N/A | ✓ | Via specs | ✓ | ✓ |