Senior Data Engineer
Roo • Remote
Posted: January 10, 2026
Job Description
About the Role
Data is at the core of what we do at Roo. Our growing data team is tight-knit, essential, and recognized for the high-quality, innovative work we deliver. You will be right at the center of this work as we build and maintain the systems that power Roo’s most impactful initiatives. This role will push you to use every part of your data and analytics engineering skill set to develop an extensible data ecosystem that serves humans, machine learning models, and internal AI agents alike.
Your Responsibilities
- Data Pipelines & Integrations: Design, develop, and maintain reliable end-to-end data pipelines (both batch and streaming) that connect internal and external systems in ways that best support marketplace growth, customer experience, and operational efficiency.
- Data Storage, Warehousing & Database Support: Contribute to the performance, scalability, and reliability of our entire data ecosystem. Cultivate our dbt/Snowflake environment, develop and maintain our data-centric AWS assets, and partner with product engineers to support the health and efficiency of our transactional databases.
- Data Transformation & Analytics Support (dbt): Work with analysts and other data stakeholders to engineer data structures and orchestrate workflows that encode core business logic. Produce clean, well-structured datasets that underpin traditional reporting, analyst experimentation, and ML and agentic AI use cases.
- Data Quality, Governance & Metric Trust: Implement observability, testing, monitoring, validation, and documentation to ensure accuracy, stability, and consistency throughout the data stack. Help shape shared definitions, metrics, and data semantics across the company.
- Business Collaboration & Insight Enablement: Join cross-functional squads and tiger teams to rapidly translate evolving data needs into scalable and extensible data models, metrics, and analytical frameworks. You will favor iterative delivery over one-shot solutions to support fast-moving OKRs and drive meaningful incremental progress week to week.
- Technical Expertise & Mentorship: Bring strong expertise in modern code quality, data modeling, and data stack patterns. Mentor data stakeholders throughout the organization, share best practices, and meaningfully contribute to architectural and tooling decisions as the data stack evolves.
Qualifications
- Expert-level SQL and data modeling skills (5+ years of experience)
- Intermediate proficiency with data-centric Python packages and Node.js data interaction frameworks like Kysely, Prisma, and Sequelize
- Deep experience with Snowflake, dbt, MySQL, and AWS data services
About You
- You care deeply about data quality, scalability, and clarity of purpose. You take pride in crafting systems that other engineers and analysts enjoy using and extending.
- You collaborate naturally with product engineers, data analysts, and business stakeholders, and you are comfortable translating ambiguity into clear technical plans.
- You are resilient and adaptable. You don’t lose your footing when priorities shift, you work well with uncertainty and experimentation, and you make thoughtful decisions even when speed matters.
- You thrive in fast-growing environments and value iterative development. You know how to deliver impact quickly while still building toward a healthy, extensible stack.
- You bring experience across multiple business domains, such as product, marketing, sales, finance, and operations.
- You enjoy mentoring teammates, raising the technical bar, and contributing thoughtful perspectives to architectural decisions.
- You handle multiple simultaneous priorities well, communicate clearly, and maintain crisp expectation-setting with partners across the company.