Senior Data Engineer

Date: 15 Aug 2025
Location: SG
Company: StarHub Ltd

Job Description

  • The Data Platform Team is responsible for designing, implementing, and managing a modern data platform that embraces the principles of data mesh, empowering teams to create and manage their own data products. Our mission is to deliver high-quality, scalable data solutions that drive business value across the organization.
  • As a key member of this team, you will be responsible for building scalable, stable, and secure data pipelines that support both batch and streaming workloads. Your work ensures reliable data delivery across domains and supports the development of reusable, self-serve data products.
  • In this role, you will collaborate with business owners, engineers, and data stewards to implement ingestion frameworks and transformation jobs that align with the data-as-a-product vision. You will apply best practices in data engineering to enable efficient data integration across cloud and on-prem environments.

Key Responsibilities

  • Design and develop scalable, secure, and efficient data ingestion pipelines for structured and unstructured data from internal and external systems across AWS and on-prem environments.
  • Work closely with architects and business domain teams to translate data requirements into robust data pipelines and process workflows.
  • Design, build, and maintain real-time and batch data pipelines to ingest and process high-frequency data from diverse internal and external sources.
  • Implement data partitioning, compaction, and optimization techniques to improve data processing performance and reduce cloud storage costs.
  • Document data flow designs, ingestion standards, and transformation logic clearly for use by other engineers, data stewards, and auditors.

Qualifications

  • Tertiary education in Data Engineering, Software Engineering, or related fields.
  • 2-5 years of relevant experience.
  • Relevant background in managing or building scalable, enterprise-grade data platforms to support reliable, high-throughput data processing and analytics.
  • Extensive experience in designing and implementing low-latency real-time data pipelines using streaming technologies such as Kafka, Flink, or Spark.
  • Skilled in Python, SQL, and the Linux CLI for data processing, automation, and operational tasks.
  • Proficient in managing data transfer and organization using SFTP and enterprise-grade object storage solutions such as Amazon S3, with a focus on secure and efficient data operations.
  • Strong collaboration skills, with the ability to communicate effectively across teams and maintain clear documentation of data flows and transformation logic.

    Preferred:

    • Good knowledge of infrastructure components such as IAM roles, Security Groups, and VPC networking to support secure data access and movement.
    • Familiarity with container orchestration platforms such as Kubernetes and OpenShift for deploying and managing data applications.
    • Familiarity with data fabric and data mesh concepts, including their implementation and benefits in distributed data environments.
