Data Platform Intelligence Manager
Job Description
JOB DETAILS
REQUIREMENTS
- 5+ years of hands-on experience leading Data Engineering teams, with a proven track record of delivering complex data products end-to-end. Candidates new to people management or transitioning into their first leadership role will not be considered.
- Demonstrated success in hiring, developing, and retaining high-performing engineers.
- Experience managing remote or distributed teams preferred.
- 8+ years total experience in Data Engineering or Software Engineering.
- Deep understanding of the modern data stack: Python, SQL, cloud data warehouses (Redshift/Snowflake/BigQuery), and orchestration tools (Airflow/Dagster/Prefect).
- You don’t need to be the best coder in the room anymore, but you must have the technical depth to review code, challenge estimates, and spot architectural risks.
- Strong experience with Agile/Scrum methodologies and project management tools (Jira, Linear, etc.).
- Experience managing SLAs and production incidents in high-stakes environments.
- Track record of establishing operational excellence through monitoring, alerting, and incident response.
RESPONSIBILITIES
1. Business Alignment & Strategic Delivery
Drive Business OKRs
- Move beyond “closing tickets” to delivering tangible business value. Ensure your team understands why they’re building a pipeline, not just how.
- Partner closely with Product Managers to translate business requirements into technical roadmaps, prioritizing initiatives that directly impact company objectives.
- Own the “Universal Translator” role: Articulate technical debt, risks, and architectural trade-offs (e.g., cost vs. latency) in language that Product, Finance, and Executive teams understand.
Delivery Excellence
- Own the agile delivery process (Sprint Planning, Standups, Retrospectives), ensuring predictable, high-quality delivery of data initiatives.
- Collaborate with stakeholders to prioritize backlogs, balancing new features, tech debt, and infrastructure investments.
- Act as the primary point of contact for downstream data consumers (Data Science, Analytics, Product teams).
2. Systems Thinking & Operational Excellence
Build Resilient Systems
- Move the team away from “hero engineering.” When something breaks twice, fix the system that allowed it to break, not just the symptom.
- Enforce coding standards, CI/CD practices, and architectural guidelines that scale.
- Proactively identify and eliminate bad processes, useless meetings, and low-value work dragging down velocity.
Own Platform Health
- Monitor and maintain SLAs, data freshness, pipeline reliability, and incident response processes.
- Lead Root Cause Analysis (RCA) with focus on systemic fixes and preventing recurrence.
- Champion Cloud Cost Optimization (FinOps) practices on AWS.
3. People Leadership & Talent Development
Coaching & Growth
- Manage, mentor, and coach a team of 6-8 Data Engineers, fostering a culture of technical excellence and psychological safety.
- Conduct meaningful 1:1s, performance reviews, and career planning, developing engineers into Senior and Principal roles.
- Your goal: Make yourself redundant in day-to-day operations by growing your direct reports into leaders.
Performance Management
- Set high standards, guided by the principle that “speed with correction beats slowness with perfection.”
- Address performance issues directly and constructively. Have difficult conversations when needed to maintain team excellence.
- Celebrate wins and create opportunities for engineers to showcase their work.
Recruiting & Retention
- Lead hiring efforts as a bar-raiser for talent, ensuring we hire engineers who fit our culture of ownership and autonomy.
- Drive onboarding processes that set new hires up for rapid impact.
- Build diverse, inclusive teams and create an environment where everyone can do their best work.
4. AI & Generative Intelligence Enablement
- Automate core workflows (documentation, data validation, metadata) to multiply team output.
- Build natural-language interfaces for frictionless, enterprise-wide data access.
- Architect scalable feature pipelines to power predictive models and monetization algorithms.
- Slash data cycle times using AI-assisted development and automation.
- Deploy LLM-powered observability and automated RCA to drastically reduce MTTR.
Interested in this position?
Apply by clicking the “Apply Now” button below!