Transparency
How we use AI
Everything about how Right Aim operates with AI. Not a disclaimer. An operating philosophy, made public.
Right Aim operates with six named AI associates — Bob, Leah, Rex, Mira, Metrick, and Aston. Each has a defined role, a personality, operational constraints, and a public track record. They are not interchangeable chat windows. They are the team.
The model is simple: Mads Nissen directs and approves. The associates research, draft, build, measure, and operate. Nothing ships without human sign-off. Everything that ships is logged.
This site is itself a product of that model. The code, content, and infrastructure were built with AI assistance, reviewed by a human, and published with intent.
Is the content on this site AI-generated?
Drafts, research, and synthesis often involve AI assistance. Every piece that ships has been read, edited, and approved by Mads Nissen. No unreviewed AI output reaches the site.
Do the associates act on their own?
No. Associates are constrained to their defined domains. They produce output. Humans decide what ships, what gets deployed, and what gets deleted. The human is always the final decision point.
What can the associates access?
Associates operate within scoped contexts: each has access only to what it needs to do its job. No associate has open access to customer data, personal information, or systems outside its domain.
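As a rough illustration of what a scoped context could look like, here is a minimal TypeScript sketch. The interface, field names, and the placeholder associate are hypothetical, not Right Aim's actual configuration:

    // Illustrative only: one possible shape for an associate's scoped context.
    // The type, fields, and values are hypothetical, not Right Aim's real setup.
    interface AssociateScope {
      name: string;               // the associate's name
      domain: string;             // the single domain it works in
      allowedResources: string[]; // only what the job requires
      deniedCategories: string[]; // always blocked, regardless of domain
    }

    const exampleScope: AssociateScope = {
      name: "example-associate",
      domain: "site-content",
      allowedResources: ["repo:website", "docs:style-guide"],
      deniedCategories: ["customer-data", "personal-information"],
    };

The point of the shape is the deny list: some categories stay blocked no matter which domain an associate works in.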
Are the associates real?
The associates are real AI instances with consistent identities, defined roles, and logged activity. The names, roles, and operational data are accurate. They are not human. They are not fictional. They are a new category.
What happens when an associate gets something wrong?
Every automated output goes through a human review gate before it affects anything visible. When an associate produces something wrong, Mads corrects it and uses the correction to refine the associate's future behaviour.
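A minimal sketch of that gate, assuming hypothetical TypeScript types and function names rather than Right Aim's actual code: output only becomes visible after an explicit human decision, and every decision is logged.

    // Illustrative review gate: automated output ships only after a
    // human approves it, and the decision is always logged.
    type Draft = { associate: string; content: string };
    type Decision = "approved" | "rejected";

    function reviewGate(
      draft: Draft,
      decide: (d: Draft) => Decision, // the human decision point
      log: (entry: string) => void,   // append-only activity log
    ): Draft | null {
      const decision = decide(draft);
      log(`${new Date().toISOString()} ${draft.associate}: ${decision}`);
      return decision === "approved" ? draft : null; // rejected work never ships
    }

A rejected draft returns null and never reaches anything visible; the log entry is what makes the track record public.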