AI Engineer
Million Dollar Sellers • Mexico
Posted: April 21, 2026
Job Description
We are hiring an AI Engineer to build the AI and agent systems that run MDS. This is a pure individual contributor role focused on one thing: using Claude and modern agent tooling to replace manual work that currently depends on operator judgment.
You are joining an established tech team. Our Tech Lead owns our app and the broader automation architecture. Our Automations Specialist keeps the existing Make, Zapier, and GHL workflows running. Your role is to sit alongside them as the AI specialist: identifying where a Claude-powered agent beats a traditional automation, designing and shipping those builds, and upgrading existing workflows with AI when it raises the ceiling.
A representative project: take our event registration review workflow (Luma inbound, Airtable lookups, LinkedIn and web verification, outcome emails, currently about 20 minutes of manual work per registrant) and ship a Claude-powered agent that handles the enrichment and qualification end to end, with a reviewer surface for one-click human approval, a custom MCP connector to Luma, full audit logging in Airtable, a test harness, and a runbook. You own it from whiteboard to production to month-six maintenance.
Key Responsibilities
Agent and System Engineering
- Design and ship AI agent systems end to end using Anthropic’s Claude (API, Agent SDK, Managed Agents platform) to automate complex multi-step workflows that currently depend on manual operator judgment.
- Build MCP (Model Context Protocol) connectors, including custom connectors for platforms that do not have them. Luma is our first target; you should be comfortable building similar integrations against arbitrary APIs.
- Use Claude Code as a core part of your daily engineering workflow to build, test, and maintain production systems, not just as a chat companion.
- Develop prompts and rubrics as engineered artifacts with eval sets, version control, and a feedback loop when the agent gets decisions wrong. You treat prompt quality as a testable property of the system.
- Build reviewer surfaces (email or Slack handlers, Airtable Interfaces, or lightweight Next.js apps on Vercel) appropriate to the use case. You pick the lightest surface that solves the problem and upgrade only when justified.
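To make "prompt quality as a testable property" concrete, a minimal eval harness can score agent decisions against a labeled set and fail loudly when accuracy regresses. This is a hypothetical sketch only: the `EvalCase` shape, the `run_agent` stub, and the 90% threshold are illustrative assumptions, not a description of MDS's actual system.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    """One labeled registrant decision taken from past manual reviews."""
    registrant: dict
    expected: str  # "approve", "reject", or "needs_review"

def run_agent(registrant: dict) -> str:
    """Stand-in for the real Claude call; stubbed for this sketch."""
    return "approve" if registrant.get("verified") else "needs_review"

def run_evals(cases: list[EvalCase], threshold: float = 0.9) -> float:
    """Score the agent against the eval set; raise if accuracy regresses."""
    hits = sum(run_agent(c.registrant) == c.expected for c in cases)
    accuracy = hits / len(cases)
    if accuracy < threshold:
        raise AssertionError(f"eval accuracy {accuracy:.0%} below {threshold:.0%}")
    return accuracy

cases = [
    EvalCase({"verified": True}, "approve"),
    EvalCase({"verified": False}, "needs_review"),
]
print(run_evals(cases))  # 1.0 on this toy set
```

Run in CI on every prompt change, a harness like this turns "the prompt got worse" from a vibe into a failing build.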
Upgrading Existing Automations with AI
- Audit current Make, Zapier, GHL, and Airtable automations for opportunities where an AI layer would make them meaningfully better (smarter routing, better classification, personalization at scale, handling edge cases that currently break workflows).
- Propose, design, and implement AI upgrades to existing workflows in partnership with the Tech Lead and Automations Specialist. You do not rebuild for the sake of rebuilding. You upgrade where AI raises the ceiling.
- Hand off maintenance cleanly. Once a system is stable and documented, our Automations Specialist takes routine monitoring and fixes so you stay focused on the next build.
Software Engineering Discipline
- Write production-grade code in Python and TypeScript / JavaScript, with idiomatic use of modern frameworks. Node.js and Next.js experience expected for web surface work.
- Operate a real engineering environment: Git-based version control, pull requests, code review with the Tech Lead, separate dev / staging / production environments, environment variables and secrets management, and reproducible builds.
- Build testable systems: unit tests where they matter, integration tests for external APIs, and eval harnesses for agent behavior. Every shipped system has a way to verify it still works.
- Set up observability: logging, error tracking, and monitoring so failures surface visibly rather than silently.
- Design idempotent webhook and event-driven pipelines with retry logic, dead-letter handling, and no half-applied state on partial failures.
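The idempotency and dead-letter requirements above can be sketched in a few lines. This is a toy illustration, not our pipeline: the in-memory `processed_ids` set and `dead_letters` list stand in for durable storage (Airtable, a queue), and `apply_event` stands in for the real side effect.

```python
import json

# In-memory stand-ins for durable storage in a real pipeline.
processed_ids: set[str] = set()   # idempotency ledger
dead_letters: list[dict] = []     # events that exhausted retries

def apply_event(event: dict) -> None:
    """The real side effect; here it only validates the payload."""
    if "registrant" not in event:
        raise ValueError("malformed event")

def handle_webhook(raw_body: str, max_retries: int = 3) -> str:
    """Process an event exactly once; retry failures, dead-letter the rest."""
    event = json.loads(raw_body)
    event_id = event["id"]
    if event_id in processed_ids:
        return "duplicate"               # safe to ack a redelivery; nothing reruns
    for _ in range(max_retries):
        try:
            apply_event(event)
            processed_ids.add(event_id)  # mark done only after full success
            return "ok"
        except ValueError:
            continue                     # a real pipeline would back off here
    dead_letters.append(event)           # park for inspection and manual replay
    return "dead-lettered"

print(handle_webhook('{"id": "evt_1", "registrant": "a@example.com"}'))  # ok
print(handle_webhook('{"id": "evt_1", "registrant": "a@example.com"}'))  # duplicate
print(handle_webhook('{"id": "evt_2"}'))                                 # dead-lettered
```

The key property: the event is marked processed only after the side effect succeeds, so a crash mid-handler leaves the event eligible for redelivery rather than half-applied.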
Proposing and Prioritizing AI Work
- Continuously audit MDS workflows for opportunities where an agent would replace meaningful manual effort. Quantify the opportunity and write up concrete build proposals with effort estimates and expected impact.
- Present proposals to the Tech Lead, who prioritizes against the broader technical roadmap.
- Stay current on new Claude models, Agent SDK features, MCP ecosystem developments, and emerging agent patterns. Bring what is worth adopting back to the team.
Working with the Team
- Partner with the Tech Lead on architecture decisions that touch the app, shared infrastructure, or the broader automation layer. You own AI specifically; they own the overall technical picture.
- Coach the Automations Specialist on AI patterns as a peer, not a manager. Help them level up on prompt engineering, agent basics, and how to debug agent-backed workflows. No formal management responsibilities.
- Partner with Operations, Revenue, Community, and Events to understand the workflows behind the automations. You need to deeply understand what the humans currently do before you can replace it with an agent.
Documentation
- Every system you ship has a README, an architecture note, and a runbook. The Automations Specialist should be able to take first-line maintenance from your documentation alone.
- Write clearly. Non-technical stakeholders read your proposals; the tech team reads your runbooks. Both should be able to act on what you write.
Qualifications
Required
- 3+ years of software engineering experience. You can read and write production code confidently, think in systems, and debug your own work.
- Strong Python and TypeScript / JavaScript. Comfortable in both. Node.js experience expected.
- Demonstrable hands-on experience with Claude. You use Claude Code in your daily workflow. You have built something real with the Anthropic API, Agent SDK, or a similar agent framework. You understand tool use, multi-turn orchestration, and structured outputs.
- At least one shipped project involving an LLM agent or MCP. We want to see the repo, the deployed app, the video walkthrough, or the writeup. This matters more than years on your resume.
- Modern web development basics. Comfortable building and deploying small Next.js or similar apps on Vercel, Render, or Railway. You do not need to be a frontend specialist; you need to be able to ship a functional reviewer interface when the use case calls for one.
- API and webhook infrastructure. You have built webhook receivers, handled retries and idempotency, and managed secrets properly.
- Engineering hygiene: Git, code review, testing, environments, observability. Non-negotiable.
- Familiarity with our internal tooling: Airtable, Go High Level, ClickUp, Google Workspace, Slack. If you have not used all of them, you have used close analogs and can ramp fast.
- Automation platform fluency (Make, Zapier, or equivalent). You know when to reach for code, when to reach for a no-code tool, and when to reach for an agent.
- English (written and verbal) at a level where you can write documentation, present proposals, and pair with native speakers on complex technical discussions.
- Proactive and ownership-driven. You spot problems and ship solutions without waiting to be asked.
Preferred
- Experience building custom MCP servers for platforms without official connectors.
- Experience with LLM eval frameworks (Braintrust, Langfuse, Promptfoo, or internal eval harnesses).
- Experience with event or community operations software (Luma, Hubilo, Cvent) or community platforms (Circle, Mighty Networks).
- Experience working in a small tech team where you had to ship across the stack.
- Background in membership, community, or event-driven organizations.
What We Are Not Looking For
We do not need five years of AI experience. That would be dishonest; these tools barely existed five years ago. We need a strong software engineer who has picked up Claude Code, agent frameworks, and MCP recently and has shipped something real with them. Years of classical software engineering matter. Years of AI matter much less than evidence that you know how to use today’s tools well.