Anthropic just rolled out a major update that lets Claude take direct control of your computer via its Cowork and Claude Code tools.
Claude can now: Take screenshots of your screen. Move the mouse cursor, click, drag, and interact with any UI element. Type on the keyboard and use shortcuts. Navigate and control desktop apps, browsers, files, and workflows — essentially acting like a human user sitting at your machine.
It starts by preferring connected apps and integrations like Slack, Calendar, Google Workspace. When those aren’t enough, it asks for permission to directly control the screen and perform actions. This builds on Anthropic’s earlier “computer use” API tool launched in 2024 for developers, but the new version integrates it more deeply into consumer-facing products like Cowork (for general knowledge work) and Claude Code (for development tasks).
It pairs especially well with Dispatch, allowing you to assign tasks from your phone and let Claude handle them on your desktop even when you’re away. Right now: Available to Claude Pro and Max subscribers on macOS via the Claude Desktop app. Windows support is coming soon.
It’s explicitly labeled as an early research preview — expect bugs, rate limits, and the need for user supervision. Safety features include permission prompts before actions, one-click pause, and scanning for prompt injection risks. Anthropic still recommends reviewing their “Use Cowork safely” guidelines.
This is a big step toward practical AI agents that don’t just chat or run in sandboxes — they can operate your actual computer like a remote assistant. It positions Claude as a strong competitor to viral tools like OpenClaw and could accelerate automation in coding, admin work, data entry, and more.
That said, handing any AI full screen, mouse and keyboard access is powerful but comes with obvious security considerations; keep sensitive stuff like crypto wallets air-gapped. Many users are calling it “wild” or “magic,” while others note it’s still early and best used with caution.
This shifts AI from conversational helper to an autonomous desktop agent that can see your screen, move the mouse, click, type, and interact with any app. Claude can handle repetitive or multi-step tasks across apps: organizing files, filling spreadsheets, navigating browsers, drafting reports, managing email/calendar, or even running workflows while you’re away via Dispatch on mobile.
Users describe it as “hypnotic” or “magic” for knowledge work. Non-technical users or busy professionals get a true “AI coworker” that executes rather than just suggests. Early tests show strong potential for admin, data entry, coding support, and research. Pairs especially well with Claude Code for building, testing, and iterating in IDEs or terminals. Broader trend: AI agents are already contributing a notable share of GitHub commits in some workflows.
Anthropic’s own data shows heavy usage in computer/math occupations; deeper computer control could accelerate labor shifts in white-collar roles, moving more tasks from human to API/agent execution. This represents a practical step toward reliable AI agents that operate like a remote human assistant.
Granting mouse/keyboard/screen control means Claude can read anything visible, modify/delete files, send messages, or interact with logged-in services. Prompt injection (malicious instructions hidden in web pages/emails) remains a real vulnerability — the AI could be tricked into harmful actions.
As a research preview, it’s slow (relies on repeated screenshots + interpretation), prone to vision hallucinations, misclicks, or getting stuck on complex UIs. One wrong move could corrupt data or break workflows.
You’re accountable for everything it does. Enterprises are already hesitant due to compliance, data leakage, and “delegation with anxiety” — time saved on execution often gets spent on verification. Best practice for now: sandbox it, start with low-risk tasks, and never leave it unsupervised with important data.
This builds on Anthropic’s earlier “computer use” API and competes with tools from OpenAI, Google, and others. Expect faster iteration toward more reliable, multi-device agents (phone control is reportedly in testing too). Many react with a mix of excitement and fear. It highlights the tension between capability and control — full autonomy sounds great until something goes wrong.
Could amplify productivity for individuals and teams, but also raise questions about job displacement in routine cognitive tasks, accountability, and the need for new oversight processes in companies. Anthropic maintains safety red lines, but deeper real-world control pushes boundaries on alignment, bias in actions, and unintended consequences.
In short, this is a milestone toward practical AI agents that “do” rather than just “talk.” It’s genuinely powerful for boosting output on macOS today (Windows soon), but treat it as experimental: powerful tool, not yet a fully trusted employee. Many users are testing it on throwaway folders first and reporting impressive results with careful scoping.






