My setup for running 10+ CLI coding agents at once
Apr 28, 2026 · 8 min read
tmux, Tailscale, Termius on iOS, and a small PWA panel — how I keep ~10 parallel coding agents tractable from desk or phone.
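As a taste of what the post walks through, here is a minimal sketch of the session-per-agent pattern: one detached tmux session per repo, each running an agent process, all reachable later over Tailscale from Termius or the panel. The `my-agent` command and the repo names are illustrative placeholders, not the actual setup.

```python
import subprocess

# Hypothetical agent command; substitute whatever coding-agent CLI you run.
AGENT_CMD = "my-agent --workdir {repo}"

REPOS = ["api-server", "web-client", "infra", "docs"]

def spawn(session: str, command: str) -> None:
    """Start a detached tmux session running `command`.

    `tmux new-session -d -s NAME CMD` creates the session without
    attaching, so ten of these can run side by side; attach later with
    `tmux attach -t NAME` from any SSH client (e.g. Termius over Tailscale).
    """
    subprocess.run(
        ["tmux", "new-session", "-d", "-s", session, command],
        check=True,
    )

if __name__ == "__main__":
    for repo in REPOS:
        spawn(f"agent-{repo}", AGENT_CMD.format(repo=repo))
    # Show what is running: equivalent to `tmux ls`.
    subprocess.run(["tmux", "list-sessions"], check=True)
```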
ML Engineer / Independent Researcher / Vibe Coder · Toronto
Research-oriented ML engineer working on agent systems, multimodal learning workflows, and graph-based representation learning. Interested in post-training, ICL, RL, imitation learning, evolution-based approaches, and world-model-inspired research.
I'm an ML engineer working on large-scale applied ML — across post-training, agent systems, in-context learning, and graph-based representation learning. Earlier in my career I built ML systems for finance and conducted HCI / human-in-the-loop ML research focused on multimodal interaction.
My current interests sit at the intersection of agentic coding, long-horizon task automation, RL, world models, and evolution-based methods. I'm particularly interested in how agents can plan, execute, and self-correct over horizons longer than a single prompt — and in foundational architectures that let them learn from their own trajectories.
When I'm not training models, I write about agentic coding workflows and tooling. Recent notes live below.
Large-scale applied ML across post-training, agent systems, in-context learning, and graph-based representation learning. Core areas: retrieval / search / clustering, representation learning, GNNs, agents, post-training, and RL.
ML systems for large-scale financial time-series, focused on anomaly detection, feedback-driven model improvement, and end-to-end production ML pipelines.
HCI and human-in-the-loop ML research on multimodal interaction, wearable input, and human-drone control — focused on subtle one-handed drone interaction using touch, force, and IMU sensing.
CV research and engineering for visual recognition on edge devices — dataset construction, transfer learning, model compression, and deployment to mobile and embedded platforms. Worked on car model classification with Inception-v3 transfer learning.
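For reference, the standard torchvision recipe for that kind of Inception-v3 transfer learning looks roughly like this. It is a sketch, not the project's actual code, and the 196-class head (the Stanford Cars class count) is an illustrative stand-in for the dataset used.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 196  # illustrative (Stanford Cars size), not the project's dataset

# Load an ImageNet-pretrained Inception-v3 and swap both classifier heads.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

# Freeze the backbone so only the new heads train: classic transfer learning.
for name, param in model.named_parameters():
    if "fc" not in name:
        param.requires_grad = False
```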
Self-directed study and implementation work on world models, LLMs, and agent systems. Reimplemented ideas from World Models (Ha et al.) with VAE-based latent representations, and explored TinyWorld — a minimal Genie-1-inspired replication.
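The "V" in that pipeline is compact enough to sketch: a convolutional VAE compresses each 64x64 frame into a small latent z that the sequence model and controller consume. A minimal PyTorch version of the reparameterized encoder, with layer sizes that roughly follow the ConvVAE in Ha et al.; everything else here is illustrative.

```python
import torch
import torch.nn as nn

class FrameVAE(nn.Module):
    """Minimal vision model ('V') in the World Models sense:
    compress a 3x64x64 RGB frame into a 32-d latent z."""

    def __init__(self, z_dim: int = 32):
        super().__init__()
        self.enc = nn.Sequential(                          # 3x64x64 -> 256x2x2
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2), nn.ReLU(),
            nn.Flatten(),                                  # -> 1024
        )
        self.mu = nn.Linear(1024, z_dim)
        self.logvar = nn.Linear(1024, z_dim)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
```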
Led a small team in the NeurIPS 2024 LLM Merging Competition. Implemented and benchmarked Linear, SLERP, TIES, Git Re-Basin, and MoE-based merging; explored evolutionary merging and FunSearch-inspired search.
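Of those merging schemes, SLERP is the least self-explanatory: rather than averaging two checkpoints linearly, it interpolates along the arc between the flattened weight vectors, which preserves norm better than lerp. A hedged per-tensor numpy sketch (not the competition code):

```python
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical interpolation between two same-shape weight tensors."""
    v0, v1 = w0.ravel(), w1.ravel()
    n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
    cos = np.clip(np.dot(v0 / (n0 + eps), v1 / (n1 + eps)), -1.0, 1.0)
    omega = np.arccos(cos)          # angle between the two weight vectors
    if omega < eps:                 # near-parallel: plain lerp is stable
        return (1 - t) * w0 + t * w1
    s = np.sin(omega)
    out = (np.sin((1 - t) * omega) / s) * v0 + (np.sin(t * omega) / s) * v1
    return out.reshape(w0.shape)

def merge(sd0: dict, sd1: dict, t: float = 0.5) -> dict:
    # Merge two checkpoints tensor-by-tensor; keys are assumed to match.
    return {k: slerp(sd0[k], sd1[k], t) for k in sd0}
```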
Re-implemented DQN, A2C, and PPO from scratch. Built a parallel data-collection and evaluation pipeline; improved sample efficiency with reward / feature engineering and a VAE-based world encoder for sparse-reward environments.
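The heart of a PPO reimplementation is only a few lines; the rollout collection, GAE, and value loss around it are plumbing. The clipped surrogate objective in PyTorch, as a sketch:

```python
import torch

def ppo_clip_loss(logp_new: torch.Tensor,
                  logp_old: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate policy loss from Schulman et al. (2017).

    ratio = pi_new(a|s) / pi_old(a|s), computed in log space for stability.
    Clipping keeps each update inside a trust region around the old policy.
    """
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Negated because optimizers minimize; PPO maximizes the surrogate.
    return -torch.min(unclipped, clipped).mean()
```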
Multimodal assistive vision system for visually impaired users — image classification, object detection, and image-to-text scene description deployed across mobile, cloud, and embedded components.
How subtle can it get? A Trimodal Study of Ring-sized Interfaces for One-handed Drone Control
UbiComp · ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2020
MyoKey: Surface Electromyography and Inertial Motion Sensing-based Text Entry in AR
PerCom · IEEE International Conference on Pervasive Computing and Communications, 2020
One-thumb Text Acquisition on Force-assisted Miniature Interfaces for Mobile Headsets
PerCom · IEEE International Conference on Pervasive Computing and Communications, 2020
HIBEY: Hide the Keyboard in Augmented Reality
PerCom · IEEE International Conference on Pervasive Computing and Communications, 2019