Senior Principal Data Engineer

Date: 14 Apr 2026
Location: SG
Company: StarHub Ltd

Role Mission:
To lead and scale the Data Engineering, DataOps, and Data Stewardship functions within StarHub's Digital Experience Platform (DXP) Data organization. This role ensures end-to-end delivery excellence of the cloud-native data platform, spanning data ingestion, transformation, modeling, and operations, to enable reliable, high-quality, and self-service analytics across business domains.

 

Accountabilities:

  1. Build and lead the Data Engineering & DataOps team (engineers and data stewards) under the DXP Data domain.
  2. Manage and mentor a hybrid team of internal engineers and vendor resources (augmented team) to maintain DevOps speed and cost efficiency while progressively strengthening in-house capability.
  3. Drive engineering standards, observability, and quality across data ingestion, transformation, and orchestration.
  4. Govern day-to-day data operations, ensuring SLA compliance, cost efficiency, and audit readiness.
  5. Implement enterprise-level data quality and stewardship frameworks across business domains.
  6. Partner with business, BI, and platform engineering teams to enable new data use cases and model extensions.
  7. Partner with Platform Engineering, Architecture & Governance, and cross-domain teams to align on data standards, automation, and governance.

 

Responsibilities:

  1. Team Leadership: Recruit, mentor, and lead a hybrid team of data engineers and stewards across Singapore, Malaysia and India, establishing in-house technical leadership and delivery ownership.
  2. Data Engineering Delivery: Oversee design, development, and optimization of ELT/ETL pipelines and data models, ensuring scalable, reusable, and cost-efficient workflows.
  3. Data Quality & Stewardship: Institutionalize stewardship processes by defining ownership models, implementing DQ monitoring, and driving remediation workflows with cross-functional data users.
  4. Operational Excellence: Manage daily pipeline operations, SLA compliance, and production issue resolution with strong root-cause analysis and continuous improvement.
  5. Technical Governance: Set engineering standards for observability, RBAC, cost tagging, and CI/CD practices.
  6. Collaboration & Enablement: Enable self-service analytics by curating trusted datasets and modelled views, working with BI and business teams.
  7. Strategic Contribution: Drive the evolution of the DXP data architecture, supporting StarHub’s broader digital transformation and AI/ML readiness. 
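The DQ monitoring and remediation responsibilities above typically start with simple, codified rules (null-rate limits, duplicate-key detection) that feed remediation workflows. A minimal sketch in plain Python follows; the field name `customer_id` and the thresholds are illustrative assumptions, not specifics of this role.

```python
# Illustrative rule-based data quality checks, of the kind a stewardship
# framework might codify. Field names and thresholds are hypothetical.

def null_rate(rows, field):
    """Fraction of rows where `field` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(field) is None)
    return missing / len(rows)

def duplicate_keys(rows, key):
    """Return the set of key values that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        k = r.get(key)
        if k is not None and k in seen:
            dupes.add(k)
        seen.add(k)
    return dupes

def run_dq_checks(rows, key="customer_id", max_null_rate=0.01):
    """Evaluate the rules above and return a list of violation messages."""
    violations = []
    rate = null_rate(rows, key)
    if rate > max_null_rate:
        violations.append(
            f"null rate for {key} is {rate:.2%} (limit {max_null_rate:.2%})")
    dupes = duplicate_keys(rows, key)
    if dupes:
        violations.append(f"duplicate {key} values: {sorted(dupes)}")
    return violations

if __name__ == "__main__":
    sample = [
        {"customer_id": 1}, {"customer_id": 2},
        {"customer_id": 2}, {"customer_id": None},
    ]
    for v in run_dq_checks(sample):
        print(v)
```

In a production stack these rules would live in the orchestration layer (e.g. as Airflow tasks) and write violations to a stewardship queue rather than printing them.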

 

Team Scope/ Stakeholders:

  1. Scope: StarHub DXP Data Platform (C360; the Datapipe ingestion solution built on Apache Airbyte and Apache Airflow; Snowflake; SageMaker; cloud-native tooling) and the enterprise data quality ecosystem.
  2. Decision Rights: Technical design approval, pipeline engineering standards, operational and DQ prioritization, vendor oversight, and team structure decisions.
  3. Stakeholders: Platform Engineering, Architecture & Governance, BI, Data Science, and Business Data Owners, Infrastructure, Cybersecurity/ISO, Application domain teams.
  4. Resources: Core team of approximately 6–8 (StarHub employees and augmented engineers) across Singapore, Malaysia, and India; expanding to include 2–3 data stewards.

 

Requirements:

  1. 8–12 years of experience in cloud-native data engineering, with strong architecture and delivery experience on AWS.
  2. Proven leadership of cross-functional and hybrid engineering teams, including vendor-augmented resources.
  3. Experience partnering with BI and business teams to design modelled datasets and enable self-service analytics.
  4. Deep hands-on technical expertise, including:
    • Snowflake: schema design, Streams/Tasks, Stored Procedures, UDFs, RBAC, performance tuning, Cortex AI, Streamlit, cost monitoring.
    • Airflow or similar data orchestration tools: orchestration, scheduling, dependency management, and observability.
    • Python and SQL: pipeline scripting, transformation logic, and data validation.
    • ELT/ETL frameworks: Airbyte, Fivetran, and custom connector development.
    • AWS services: S3 (data lake structures and archival), Lambda, KMS, Transfer Family, CloudWatch, SageMaker.
  5. Demonstrated success delivering medallion architecture (Bronze/Silver/Gold) and enabling self-service data use cases.
  6. Experience building data quality frameworks, stewardship policies, and data lineage tracking across enterprise datasets.
  7. Familiarity with machine learning integration using platforms like AWS SageMaker.
  8. Proven ability to troubleshoot complex data issues, lead root-cause analysis, and ensure production stability.
  9. Track record of transitioning delivery ownership from vendors to internal teams while maintaining quality and velocity. 
