Ad code: KP7993800863


Snapp
Tehran
Posted on the Iran Estekhdam website (6 days ago)
Job details:
Employment type: Full-time
Experience required: at least 5 years
Working hours: full-time
Full ad text:
As a Senior Data Engineer, you will design, build, and maintain scalable data infrastructure and pipelines that handle billions of records each day. You will ensure fast, reliable, and high-quality data flows across our lakehouse platform, supporting both streaming and batch processing. Your work will be essential in enabling dependable data access, powering analytics, and accelerating AI-driven initiatives across the organization.
What You’ll Drive Forward
Design and maintain large-scale ETL/ELT pipelines in Apache Flink, Airflow, and Spark for both streaming and batch workloads
Build and optimize real-time streaming systems using Kafka
Develop scalable ingestion frameworks for Delta Lake, Iceberg, and Hudi
Manage and optimize Ceph-based object storage within our data lakehouse
Oversee ClickHouse operations to ensure high-performance analytical querying
Drive reliability, scalability, and cost efficiency across systems handling billions of daily records
Deliver production-grade code in Python, Go, or Java
Implement data quality, monitoring, and observability frameworks
Collaborate with ML/AI teams to support model training, feature pipelines, and inference workflows
Reduce data pipeline latency by implementing efficient streaming architectures
Optimize storage costs while maintaining query performance across lakehouse layers
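The data-quality and observability responsibility above can be illustrated with a minimal sketch. This is a hypothetical toy example, not Snapp's actual framework: a simple null-rate gate over ride records, with the `Ride` type and the 20% threshold invented purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record type for illustration only.
@dataclass
class Ride:
    ride_id: str
    fare: Optional[float]

def null_rate(records, field):
    """Fraction of records where `field` is None."""
    if not records:
        return 0.0
    missing = sum(1 for r in records if getattr(r, field) is None)
    return missing / len(records)

def check_quality(records, max_null_rate=0.01):
    """Return (passed, observed_rate) for a simple null-rate gate on fares."""
    rate = null_rate(records, "fare")
    return rate <= max_null_rate, rate

rides = [Ride("a", 12.5), Ride("b", None), Ride("c", 8.0), Ride("d", 9.9)]
# 1 of 4 fares missing -> observed rate 0.25, which fails the 0.2 threshold.
ok, rate = check_quality(rides, max_null_rate=0.2)
```

In a production pipeline, checks like this would typically run as a pipeline stage and emit metrics to a monitoring system rather than return a tuple.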
What Powers Your Drive
6+ years of experience in data engineering roles
Strong proficiency in at least two programming languages: Python, Go, or Java
Hands-on experience with Kafka and stream processing (Flink or Spark Streaming)
Solid understanding of Spark and distributed computing
Experience with at least one lakehouse table format (Delta Lake, Iceberg, or Hudi)
Strong SQL skills and experience with analytical databases (ClickHouse or similar columnar databases)
Experience with DataOps practices for managing production environments, including Infrastructure as Code (e.g., Terraform, Ansible) and GitOps-based deployment strategies (e.g., Kubernetes, ArgoCD)
Strong understanding of data modeling, data warehousing concepts, and ETL best practices
Experience with version control (Git) and CI/CD practices
Strong problem-solving abilities and analytical thinking
Excellent collaboration and communication skills
Adaptability to a rapidly evolving technology landscape
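The stream-processing requirement above centers on windowed aggregation semantics of the kind Flink and Spark Streaming provide. As a hedged, self-contained sketch, here is a pure-Python toy of a tumbling-window count — the event names and timestamps are invented for illustration and no Kafka cluster is involved:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Count (timestamp_ms, key) events per fixed-size tumbling window.

    Each event is assigned to exactly one window [window_start,
    window_start + window_ms), mirroring the tumbling-window semantics
    of engines like Flink or Spark Structured Streaming.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical ride-hailing events: (timestamp in ms, event type).
events = [
    (900, "ride_completed"),
    (1000, "ride_requested"),
    (1500, "ride_requested"),
    (2500, "ride_requested"),
]
result = tumbling_window_counts(events, window_ms=1000)
# The two events at 1000 and 1500 share the [1000, 2000) window.
```

A real streaming job would additionally handle late and out-of-order events via watermarks, which this toy omits.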
Ready to Get on Board?
Help us shape the future of ride-hailing and urban mobility. Submit your CV and let’s build smarter cities together.

This ad was found on the Iran Estekhdam website. By clicking the “Contact Employer” button, you will be taken to the Iran Estekhdam website, where you can apply for this job.

Warning
Note that charging job seekers a fee for hiring, under any title, is illegal. If you encounter anything suspicious, help us investigate violations by clicking “Report ad problem”.
Friday, 9 Esfand 1404, 09:23