Ad code: KP8373834715


Digikala
Tehran
Posted on the Iran Estekhdam website (1 day ago)
Job details:
Employment type: Full-time
Working hours: Full-time
Full job description:
We are looking for a Senior Data Platform Engineer to help design and build our Cloud Data Platform at scale.
You will work on the core infrastructure that powers real-time data streaming, batch processing, and data lakehouse analytics.
This role sits at the intersection of Big Data, Backend Engineering, and DevOps.
Responsibilities:
Design and maintain scalable, fault-tolerant data platform architectures
Build and operate real-time and batch data processing systems using Apache Spark and Apache Flink
Develop backend services and APIs using Python or Golang
Implement and operate a Data Lakehouse using Iceberg / Delta Lake / Hudi
Design data ingestion and orchestration pipelines with Apache Airflow
Implement CDC and event-driven architectures using Debezium and Apache Kafka
Own production systems: containerization, deployment, and operations using Docker and Kubernetes
Ensure platform reliability through monitoring, logging, and alerting (Prometheus, Grafana, ELK)
Requirements:
Strong software engineering fundamentals (data structures, algorithms, system design)
Proficiency in Java/Scala and Python (Golang is a plus)
Hands-on experience with Kafka, Spark/Flink
Strong SQL skills and experience with PostgreSQL
Experience with Data Lakehouse architectures and open table formats
Experience building and operating production systems on Kubernetes
Nice to Have:
Experience with cloud platforms (AWS, Databricks, Snowflake)
Familiarity with dbt, MLflow, or Kubeflow
Experience with workflow and ingestion tools like Vector, Benthos, NiFi
Experience working with data science or machine learning teams

This ad was found on the Iran Estekhdam website. By clicking the "Contact employer" button, you will be taken to the Iran Estekhdam site, where you can apply for this position.

Warning
Please note that charging job seekers a fee for hiring, under any title, is illegal. If you encounter anything suspicious, help us pursue violations by clicking "Report ad problem".
Tuesday, 29 Bahman 1404, 14:25