Executive Summary
Wes McKinney's recent blog post on "agent ergonomics" makes the case that AI agents need languages optimised for fast iteration, reliability, and efficiency: areas where Python struggles and Go and Rust excel. He doesn't mention Mojo, but it's worth considering how this Python superset from Modular might fit the brief.
Mojo offers Python's familiar syntax with systems-level performance, including static typing, memory safety, and GPU acceleration. Crucially, Modular has already open-sourced over 700,000 lines of Mojo code, creating substantial training data for AI models beyond just Python similarity. For Python developers, this means scalable, performant code without learning Go's channels or Rust's ownership model.
But as agents generate more code, experienced developers' core skills in ensuring correctness become more valuable, not less. Languages like Mojo that provide compile-time safety checks serve both: faster feedback for agents, stronger verification tools for humans.
Wes McKinney's Take on Agent Ergonomics
I recently came across Wes McKinney's thought-provoking post (20 Jan 2026) on agent ergonomics, and it resonated deeply. Wes shifts the focus from human developers to AI agents as primary code authors, arguing for languages that enable quick compile-test cycles, self-contained binaries, predictable runtime, and low overhead. He calls out Python's dynamic nature as a hurdle: slow feedback, dependency hell (though tools like uv are alleviating much of this pain), and unpredictable performance make it tough for agents to iterate reliably.
Key points from his article:
- Fast Iteration: Agents need sub-second compiles to experiment rapidly.
- Distribution Ease: Standalone binaries over interpreter-dependent setups.
- Reliability: Static checks to catch errors early, avoiding runtime surprises.
- Efficiency: Minimal overhead for compute-intensive tasks like data processing or ML.
He praises Go for its speed and simplicity, and Rust for safety and performance, but notes the learning curve if you're coming from Python-heavy ecosystems. Interestingly, Mojo doesn't get a mention, which sparked my thinking about where it might fit.
Where Mojo Might Fit In
This is where Mojo becomes interesting. I've been exploring it through my GPU puzzles series, and whilst I'm still early in my learning journey, I can see how it addresses many of Wes's concerns. Created by Chris Lattner (of LLVM fame) at Modular, Mojo is designed as a Python superset for AI and systems programming, and it inherits Python's readable syntax.
Crucially, Modular has already open-sourced over 700,000 lines of Mojo code, meaning there's substantial training data specifically for Mojo available to future AI models. This isn't just about Python similarity providing context; there's now a growing corpus of idiomatic Mojo code that can be used to train agents directly. As more developers contribute and the standard library continues to open-source, this training dataset will only expand.
Based on what I've learnt so far and Wes's criteria, here's how Mojo appears to align:
- Static Typing and Safety: Optional types with Rust-inspired borrow checking (from what I understand) aim to eliminate common Python pitfalls like type errors or memory leaks. Agents would get compile-time feedback, potentially boosting reliability (see the first sketch after this list).
- Performance Boosts: No GIL, direct hardware control (e.g., unified GPU support across vendors), and zero-overhead abstractions. Published benchmarks suggest impressive performance gains, with compile times that appear competitive with Go.
- Seamless Integration: You can import Python libraries like NumPy directly; agents could start in Python and "mojofy" for speed without ecosystem lock-in (see the second sketch below).
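To make the first point concrete, here's a minimal sketch of the kind of compile-time feedback I mean. The function name is hypothetical and Mojo's syntax is still evolving, so treat this as a sketch against recent releases rather than a definitive example; the point is simply that `fn` declarations require typed arguments, so a mismatched call fails at compile time rather than at runtime:

```mojo
# Minimal sketch: typed `fn` declarations give compile-time feedback.
# The function name is illustrative, not from any real codebase.
fn scale(value: Float64, factor: Float64) -> Float64:
    return value * factor

fn main():
    print(scale(2.0, 3.5))      # OK: prints 7.0
    # print(scale("two", 3.5))  # rejected at compile time: a string
    #                           # literal cannot be passed as a Float64
```

For an agent, the difference is that the second call never makes it into a running binary, so the error surfaces in the compile step of the loop rather than as a runtime surprise several steps later.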
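And to illustrate the integration point, a sketch of calling NumPy from Mojo via the `Python.import_module` API in Mojo's standard library. This assumes NumPy is installed in the active Python environment:

```mojo
from python import Python

fn main() raises:
    # Import NumPy through Mojo's Python interop layer; this runs the
    # CPython interpreter and whatever packages it has installed.
    var np = Python.import_module("numpy")
    var arr = np.arange(10)
    print(arr.sum())  # prints 45, computed by NumPy itself
```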
In Wes's framing, where durable value lies in compute layers, Mojo aims to unify fragmented tools, using MLIR under the hood for portable, optimised code.
Mojo vs. Go and Rust: How They Compare
I haven't used Go or Rust personally, but based on the article and Mojo's documentation, here's how they appear to stack up against Wes's agent ergonomics criteria:
| Aspect | Go | Rust | Mojo |
|---|---|---|---|
| Iteration Speed | Fast compiles, simple syntax | Excellent, but steeper curve | Python-like rapid prototyping with fast static compiles |
| Reliability | Garbage collection, no nulls | Borrow checker for safety | Borrow checker + static types; no JIT, predictable execution |
| Efficiency | Low overhead, concurrency | Zero-cost abstractions | GPU/TPU acceleration, significant Python speedups |
| Agent Friendliness | Standalone binaries | Safe, but syntax shift from Python | Python superset; agents reuse Python knowledge |
| Learning Curve | Moderate from Python | High | Low; builds on Python skills |
For someone coming from Python like me, Mojo's appeal is that you don't need to learn entirely new paradigms like Go's channels or Rust's ownership model: you can evolve your existing Python code.
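A small sketch of what "evolving" existing code can look like in practice: Mojo accepts Python-style `def` functions alongside stricter `fn` declarations, so an agent (or a human) can tighten types incrementally in the same file. The names here are illustrative, and this is a minimal sketch of the idea rather than a migration recipe:

```mojo
# Python-style `def`: permissive, may raise, familiar to agents
# trained largely on Python code.
def greet(name: String) -> String:
    return "Hello, " + name

# Stricter `fn`: fully typed and checked, the "mojofied" end state.
fn greet_strict(name: String) -> String:
    return "Hello, " + name

fn main() raises:
    # `def` functions are treated as raising, hence `raises` on main().
    print(greet("agent"))
    print(greet_strict("agent"))
```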
A Hypothetical Agent Workflow
Based on my experiments with the GPU puzzles, here's how I imagine an agent workflow might play out: an agent starts with Python code for a matrix multiply, hits performance walls, then switches to Mojo to write a GPU kernel. The kernel compiles to a binary in seconds, runs on different hardware, and scales, avoiding the "magic" and unpredictability that Wes warns about with Python JITs.
Whilst I haven't built a full agent system myself, the fast compile-test cycles I've experienced with Mojo suggest it could support the rapid iteration Wes describes.
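I haven't built that agent, and a real GPU kernel is beyond a short sketch, so as a stand-in here's a minimal CPU-side Mojo version of the matrix multiply step, using flat row-major `List[Float64]` buffers. All names and the layout are illustrative assumptions, not part of any real workflow:

```mojo
fn matmul(a: List[Float64], b: List[Float64], n: Int) -> List[Float64]:
    # Naive n x n matrix multiply over row-major flat buffers.
    # A genuinely "mojofied" kernel would tile, vectorise, and target
    # the GPU; this only shows the typed, statically compiled start.
    var c = List[Float64]()
    for _ in range(n * n):
        c.append(0.0)
    for i in range(n):
        for k in range(n):
            var aik = a[i * n + k]
            for j in range(n):
                c[i * n + j] = c[i * n + j] + aik * b[k * n + j]
    return c

fn main():
    # 2 x 2 example: [[1, 2], [3, 4]] multiplied by itself.
    var m = List[Float64](1.0, 2.0, 3.0, 4.0)
    var c = matmul(m, m, 2)
    print(c[0], c[1], c[2], c[3])  # 7.0 10.0 15.0 22.0
```

Even this naive version is statically typed and compiles ahead of time, which is exactly the compile-test loop the workflow above depends on.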
The Human Element
As Josh Bloom notes in his provocatively titled Medium piece "The Last Programming Languages Created by People", we may be entering an era where AI agents increasingly shape language design and usage. This raises fascinating questions about the changing role of human developers. Will we shift from writing code to curating and reviewing agent-generated code?
Yet I suspect experienced developers remain essential, not for raw code generation, but for the rigorous scrutiny they bring to questioning inputs, validating outputs, and catching edge cases that agents might miss. Their core skill has always been "ensuring correctness", not typing speed. What changes is the ratio: fewer keystrokes spent on implementation, more time on design and verification.
Languages like Mojo that provide compile-time safety checks become even more valuable in this world: they give both agents and human reviewers earlier, clearer feedback when something's wrong. Mojo's Python compatibility positions it uniquely for this transition. Agents can leverage existing Python knowledge whilst accessing systems-level capabilities, and human developers can verify agent-generated code in a familiar syntax with stronger safety guarantees.
Final Thoughts
Wes makes a compelling case: as agents take over more coding tasks, we'll need languages that empower them at scale. From my limited but growing experience, Mojo appears to offer an interesting middle path, evolving Python into something more performant whilst maintaining compatibility with the Python ecosystem that both humans and AI agents already know.
If you're exploring similar territory, whether agent ergonomics, Mojo, or high-performance Python, I'd love to hear your thoughts in the comments. 🚀
Related Posts
- GPU Threads Unraveled: Series Introduction — My exploration of Mojo's GPU capabilities
- Mojo 2026: From Emerging to Mainstream — Why Mojo is gaining traction