The thesis
Most fitness products assume motivation starts in a dashboard. Swoleby assumes the opposite: people need help in the moment they are about to avoid the workout, ignore the plan, or spiral into all-or-nothing thinking.
That makes SMS a serious product choice, not a fallback interface. It is close to the behavior, light enough to use repeatedly, and direct enough to test whether coaching actually changes what someone does next.
The product loop
The loop is intentionally small: onboard the user, understand their current goal and constraints, send timely reminders, handle replies naturally, suggest the next realistic action, and keep enough state to make follow-up useful. That is a better proof surface than a dashboard screenshot because it tests whether the AI can help inside a real behavior loop.
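The loop above can be sketched as a single turn handler over per-user state. Everything here is illustrative, not the repo's actual code: `UserState`, `handle_reply`, and the canned replies are hypothetical names standing in for the onboarding/reminder/reply flow described in the text.

```python
from dataclasses import dataclass, field

@dataclass
class UserState:
    """Hypothetical per-user state the coach keeps between messages."""
    goal: str = ""
    constraints: list = field(default_factory=list)
    last_action: str = ""

def handle_reply(state: UserState, inbound: str) -> str:
    """One turn of the loop: update state, suggest the next realistic action."""
    if not state.goal:
        # Onboarding: the first substantive reply becomes the stated goal.
        state.goal = inbound.strip()
        return "Got it. What days can you realistically train?"
    if inbound.lower().startswith("skip"):
        # Avoidance moment: offer a smaller action instead of all-or-nothing.
        state.last_action = "skipped"
        return "No problem. Want a 10-minute version instead?"
    state.last_action = "completed"
    return "Nice. I'll check in before your next session."
```

The point of the sketch is the shape, not the content: every inbound message both updates state and produces a next realistic action, which is what makes the follow-up useful.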
What makes it technically interesting
The hard part is not generating a workout. The hard part is the system around the conversation: onboarding, memory, reminders, subscriptions, opt-outs, dashboard auth, user state, tone control, retrieval, and evaluation.
The repo has grown into a real applied AI surface: AI coach agents, workout and goal agents, Twilio SMS flows, Stripe pricing experiments, DynamoDB/S3 persistence, MCP tools, e2e tests, quality checks, and operational scripts.
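As one concrete slice of that surface, an inbound-SMS handler has to route opt-outs before any coaching logic runs, and answer Twilio with TwiML. This is a minimal sketch under assumptions, not the repo's handler; `handle_inbound_sms` and the reply strings are hypothetical, and carriers also enforce STOP keywords upstream of the app.

```python
from xml.sax.saxutils import escape

# Standard SMS opt-out keywords; checked before any coaching logic.
STOP_WORDS = {"stop", "stopall", "unsubscribe", "cancel", "end", "quit"}

def twiml_reply(body: str) -> str:
    """Wrap a coach reply in the TwiML an SMS webhook returns to Twilio."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f"<Response><Message>{escape(body)}</Message></Response>"
    )

def handle_inbound_sms(from_number: str, body: str) -> str:
    """Route one inbound SMS: honor opt-outs first, then hand off to the coach."""
    if body.strip().lower() in STOP_WORDS:
        return twiml_reply("You're unsubscribed. Reply START to resume.")
    # Hypothetical hand-off: the real system would load user state,
    # call the coach agent, and persist the turn.
    return twiml_reply(f"Coach here. You said: {body.strip()}")
```

The interesting property is how little of this is model code: it is routing, compliance, and state plumbing, which is exactly the "system around the conversation" point.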
Evaluation mindset
Swoleby is also a useful eval lab. A coach response has to be short enough to use, specific enough to act on, safe enough for the domain, and grounded enough in the user's state to feel personal without becoming creepy. That pushes evaluation toward behavior quality rather than generic answer quality.
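Those four axes can be turned into a mechanical first-pass check. The thresholds and keyword lists below are illustrative placeholders, not the project's actual eval suite; a real rubric would be far stricter, especially on safety.

```python
def eval_coach_reply(reply: str, user_goal: str) -> dict:
    """Score one coach reply on the four axes; thresholds are illustrative."""
    text = reply.lower()
    checks = {
        # Short enough to use: a rough SMS-length budget.
        "short": len(reply) <= 320,
        # Specific enough to act on: names a concrete next step or timeframe.
        "actionable": any(w in text for w in ("today", "next", "try", "min")),
        # Safe enough for the domain: no medical overreach (toy denylist).
        "safe": not any(w in text for w in ("diagnose", "prescription")),
        # Grounded in the user's state: references the stated goal.
        "grounded": user_goal.lower() in text,
    }
    checks["pass"] = all(checks.values())
    return checks
```

A check like this is deliberately cheap: it filters obvious failures so that human or model-graded evaluation can focus on the harder question of whether the reply would change what the user does next.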
Agent-driven development lab
Swoleby is also where I can push agent-led development practices hard without putting enterprise customers or DataRobot production systems at risk. It is real software with payments, SMS, auth, reminders, dashboards, user state, tests, deployment, and product consequences, but the blast radius is appropriate for experimentation.
That makes it a useful place to test the working system around agents: architect/coder splits, Codex and Claude Code harnesses, PR review agents, babysitting flows, CI gates, browser checks, rollback testing, feature flags, and the operational habit of turning agent output into reviewable product changes.
The interesting part is not that an agent can write code. The interesting part is whether the process produces a steady stream of small, coherent, tested changes that a human can review without becoming the bottleneck for every low-risk improvement.
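One way to keep the human out of the bottleneck is to gate agent PRs mechanically before review. This is a hypothetical sketch of such a gate; the limits and the `diff_stats` shape are invented for illustration, not taken from the project's CI.

```python
def gate_pr(diff_stats: dict) -> list[str]:
    """Hypothetical review gate: keep agent PRs small, tested, single-purpose.

    Returns a list of problems; an empty list means the PR may go to review.
    """
    problems = []
    if diff_stats["lines_changed"] > 400:
        problems.append("too large to review as one change")
    if diff_stats["files_touched"] > 10:
        problems.append("touches too many files for one change")
    if not diff_stats["has_tests"]:
        problems.append("no test changes accompany the code change")
    return problems
```

The gate encodes the claim in the text directly: agent output only becomes a product change once it is small, coherent, and tested enough for a human to review quickly.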
Agent-authored commit bursts
The numbers come from local Swoleby/OpenClaw workspaces, filtered to agent identities such as Zorg, OpenClaw, happyclaw-agent, and Nullius. This is a build-velocity signal, not a public GitHub contribution total.
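The filtering can be sketched as a pass over `git log --format=%an` output, counting only the agent identities named above. The function name is hypothetical; only the author names come from the text.

```python
from collections import Counter

# Agent identities from the text; anything else is treated as human.
AGENT_AUTHORS = {"Zorg", "OpenClaw", "happyclaw-agent", "Nullius"}

def agent_commit_counts(author_lines) -> Counter:
    """Count commits per agent from `git log --format=%an` output lines."""
    names = (line.strip() for line in author_lines)
    return Counter(name for name in names if name in AGENT_AUTHORS)
```

Run against each local workspace (e.g. pipe `git log --format=%an` into it), this yields per-agent commit counts, which is why the result measures build velocity rather than anything visible on a GitHub profile.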