Local Scenarios
Use these when you’re developing on your own machine and want the agent to run locally.
Development and Testing
For the fastest iteration cycle, use a simple path string. The agent runs in your Python process with direct filesystem access.
Local Development with Isolation
When you need isolation but still want to run locally, use DockerWorkspace.
For HPC environments that use Singularity/Apptainer instead of Docker, see ApptainerWorkspace.
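A minimal DockerWorkspace sketch. The import paths and constructor arguments here are assumptions based on the OpenHands agent SDK and may differ in your installed version; treat this as an outline, not the definitive API:

```python
# Sketch only: import paths and parameters are assumptions; verify
# against the SDK documentation for your version.
from openhands.sdk import Agent, Conversation
from openhands.workspace import DockerWorkspace

agent = Agent(...)  # your LLM and tool configuration

# The sandbox runs in a local Docker container, so the agent's file and
# network access is isolated from your host machine.
with DockerWorkspace(base_image="python:3.12") as workspace:
    conversation = Conversation(agent=agent, workspace=workspace)
    conversation.send_message("Install dependencies and run the test suite")
    conversation.run()
```

Using the workspace as a context manager ensures the container is cleaned up when the session ends.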
Remote & Integration Scenarios
Use these when building applications, integrating with CI/CD, or deploying agents to production.
Building Applications with OpenHands Cloud
When you’re building an application that uses OpenHands agents, OpenHandsCloudWorkspace provides fully managed infrastructure:
- Managed sandbox provisioning and lifecycle
- LLM configuration inherited from your Cloud account (no API keys in your code)
- Secrets injected securely without transiting through your application
- No infrastructure to manage
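The managed setup above might look like the following sketch. Import paths, the `api_key` parameter, and the `OPENHANDS_API_KEY` variable name are assumptions for illustration; only `get_llm()` is taken from this document's feature table:

```python
# Sketch only: assumed import paths and parameter names; check the
# SDK docs for the real API surface.
import os

from openhands.sdk import Agent, Conversation
from openhands.workspace import OpenHandsCloudWorkspace

# Authenticate against your OpenHands Cloud account; sandbox
# provisioning and lifecycle are handled for you.
workspace = OpenHandsCloudWorkspace(api_key=os.environ["OPENHANDS_API_KEY"])

# LLM configuration is inherited from the Cloud account, so no model
# API keys appear in your application code.
agent = Agent(llm=workspace.get_llm(), tools=[])
conversation = Conversation(agent=agent, workspace=workspace)
conversation.send_message("Fix the failing tests in my repository")
conversation.run()
```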
CI/CD Pipeline Integration
For running agents in CI/CD pipelines (GitHub Actions, GitLab CI, etc.), you have two options:
Option A: DockerWorkspace, which runs the sandbox on the CI runner itself.
Option B: OpenHandsCloudWorkspace, which offloads the sandbox to OpenHands Cloud.
| Consideration | DockerWorkspace | OpenHandsCloudWorkspace |
|---|---|---|
| Runner requirements | Docker-in-Docker or privileged | None (API calls only) |
| Resource usage | Consumes runner resources | Offloaded to Cloud |
| Secrets management | You manage | Inherited from Cloud account |
| Setup complexity | Higher | Lower |
Running SDK Scripts Inside Cloud Sandboxes
For advanced orchestration, you may want to run SDK scripts inside a Cloud sandbox rather than from outside. This pattern is useful when:
- You want fire-and-forget execution — your orchestrator doesn’t maintain a connection for the entire agent session
- You need nested agent execution — an outer agent spawns inner agents
- You’re building an automation service that deploys user-provided scripts
To enable this, create the workspace with saas_runtime_mode=True. See SaaS Runtime Mode for the full pattern.
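A hedged sketch of the inside-the-sandbox pattern. Everything here except the `saas_runtime_mode=True` flag and `get_llm()` (both named in this document) is an assumption about the SDK's API and may not match the real signatures:

```python
# Sketch only: assumed import paths; saas_runtime_mode and get_llm()
# come from the surrounding text, the rest is illustrative.
from openhands.sdk import Agent, Conversation
from openhands.workspace import OpenHandsCloudWorkspace

# saas_runtime_mode=True signals that this script already runs inside a
# Cloud sandbox, so no outer connection needs to stay open.
workspace = OpenHandsCloudWorkspace(saas_runtime_mode=True)

# LLM configuration is inherited from the Cloud account.
agent = Agent(llm=workspace.get_llm(), tools=[])
conversation = Conversation(agent=agent, workspace=workspace)
conversation.send_message("Run the nightly maintenance tasks")
conversation.run()
```

Because the script lives inside the sandbox, the orchestrator that launched it can disconnect immediately (fire-and-forget), or the script can itself spawn further nested agents.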
Enterprise: Self-Managed Infrastructure
If you’re running OpenHands Enterprise and need low-level control over runtime allocation, use APIRemoteWorkspace.
Quick Reference
| Workspace | Best For | Infrastructure | Isolated | SaaS |
|---|---|---|---|---|
| LocalWorkspace | Development, testing | None | ❌ | ❌ |
| DockerWorkspace | Local isolation, CI/CD | Local Docker | ✅ | ❌ |
| ApptainerWorkspace | HPC, shared compute | Singularity | ✅ | ❌ |
| OpenHandsCloudWorkspace | Production, managed | OpenHands Cloud | ✅ | ✅ |
| APIRemoteWorkspace | Enterprise, low-level control | Runtime API | ✅ | ❌ |
How Workspaces Relate to Conversations
The Conversation factory automatically selects the appropriate implementation:
| Workspace Type | Conversation Type | Where Agent Runs |
|---|---|---|
| Path / LocalWorkspace | LocalConversation | Your Python process |
| Any RemoteWorkspace | RemoteConversation | On the agent server |
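The dispatch in the table above can be sketched as follows. Import paths and the `base_image` argument are assumptions based on the OpenHands agent SDK; the path string and workspace object forms are the two cases the table describes:

```python
# Sketch only: assumed import paths; illustrates the factory dispatch
# described in the table above.
from openhands.sdk import Agent, Conversation
from openhands.workspace import DockerWorkspace

agent = Agent(...)  # configured elsewhere

# A plain path (or LocalWorkspace) yields a LocalConversation:
# the agent runs in this Python process with direct filesystem access.
local_conv = Conversation(agent=agent, workspace="./my-project")

# Any RemoteWorkspace (Docker, Cloud, API, Apptainer) yields a
# RemoteConversation: the agent runs on the agent server in the sandbox.
with DockerWorkspace(base_image="python:3.12") as ws:
    remote_conv = Conversation(agent=agent, workspace=ws)
```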
Feature Comparison
| Feature | Local | Docker | Cloud | API | Apptainer |
|---|---|---|---|---|---|
| No setup required | ✅ | Docker needed | ✅ | Runtime API access | Apptainer needed |
| File isolation | ❌ | ✅ | ✅ | ✅ | ✅ |
| Network isolation | ❌ | ✅ | ✅ | ✅ | ✅ |
| get_llm() | ❌ | ❌ | ✅ | ❌ | ❌ |
| get_secrets() | ❌ | ❌ | ✅ | ❌ | ❌ |
| Pause/Resume | ❌ | ❌ | ❌ | ✅ | ❌ |
| Custom images | N/A | ✅ | Via specs | ✅ | ✅ |

