Ad ID: KP6116308134

Senior Data Engineer

SnappTrip
Tehran - Jordan
On the Jobvision website (5 days ago)
Job details:
Employment type: Full-time
Required skills:
Git
Working hours: Saturday to Wednesday (9:00-18:00)
Full ad text:

Do you have a passion for working with the leading modern open-source Big Data frameworks in an agile team at SnappTrip? 

Who we are: 
Our team is cross-functional, with a mix of data analysts and data engineers. We work with open-source tools to help the business make better decisions, while also learning and developing ourselves. 

Who you are: 
First off, we believe you are a software engineer who understands distributed systems and can harness the complexity and volume of data while conserving the resources of the data stack. 

Secondly, you excelled in your Programming, Databases, Data Structures, Algorithms, and Operating Systems classes. You are a dedicated, resourceful developer with a passion for learning modern, open-source Big Data engines. 

Why you should apply: 
SnappTrip has both mature and expanding products, all of which are data-driven. In the Data team, we work with leading modern open-source technologies to collect product data and extract knowledge from it, so you will have the opportunity to work directly with cutting-edge open-source toolkits and distributed frameworks. 

Responsibilities: 
●     Collaborate with data team members and other stakeholders to understand data requirements and design efficient data solutions. 
●     Develop and maintain scalable data pipelines using Python, Scala, Spark, Trino, and the Hadoop ecosystem. 
●     Implement and optimize ETL/ELT processes for large-scale data processing. 
●     Work with Linux, Git, and CI/CD tools to ensure robust version control and continuous integration of data solutions. 
●     Utilize Hive, PostgreSQL, ClickHouse, dbt, and other database technologies for data storage and retrieval. 
●     Manage data streaming and processing using Kafka. 
●     Capture and propagate database changes using Debezium. 
●     Utilize monitoring tools such as Grafana, Prometheus, and Zabbix to ensure the health and performance of data systems. 
●     Work with container orchestration tools such as Kubernetes and Helm to deploy and manage data applications. 
●     Implement workflow automation using Apache Airflow. 
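As a miniature illustration of the ETL/ELT work described above, a transform-and-load step could look like the following sketch. It uses Python's built-in sqlite3 as a stand-in for a real warehouse such as PostgreSQL or ClickHouse; the table and column names (raw_bookings, bookings_by_city) are hypothetical, not from the posting:

```python
import sqlite3

# Stand-in for a warehouse connection (PostgreSQL/ClickHouse in production).
conn = sqlite3.connect(":memory:")

# "Extract": raw booking events already loaded into a staging table.
conn.execute("CREATE TABLE raw_bookings (id INTEGER, city TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_bookings VALUES (?, ?, ?)",
    [(1, "Tehran", 120.0), (2, "Shiraz", 80.0), (3, "Tehran", 200.0)],
)

# "Transform" + "Load": aggregate raw events into a reporting table,
# the pattern a dbt model or Spark job would follow at scale.
conn.execute(
    """CREATE TABLE bookings_by_city AS
       SELECT city, COUNT(*) AS n, SUM(amount) AS total
       FROM raw_bookings
       GROUP BY city"""
)

rows = {
    city: (n, total)
    for city, n, total in conn.execute("SELECT city, n, total FROM bookings_by_city")
}
print(rows)  # Tehran aggregates two bookings, Shiraz one
```

In production, the same shape appears as a dbt model or a Spark job scheduled by Airflow; only the engine and scale differ.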

Core Qualifications: 
●     Bachelor's or Master's degree in computer engineering, computer science, information technology, or a related field. 
●     Proven experience as a Data Engineer or in a similar role. 
●     At least three years of practical experience programming in Scala, Java, or Python (preferably a JVM-based language). 
●     At least two years of practical experience developing Spark applications. 
●     Experience working with relational databases (preferably PostgreSQL) and with ClickHouse. 
●     Proficiency with Linux, SQL, Git, and HDFS. 
●     Experience with CI/CD tools (e.g., Jenkins). 
●     Experience with workflow automation tools like Apache Airflow. 
●     Strong understanding of ETL/ELT processes and data processing technologies. 
●     Familiarity with monitoring tools such as Grafana, Prometheus, and Zabbix. 
●     Experience with streaming technologies like Kafka and Debezium. 
●     Experience with Kubernetes and PrestoSQL (Trino). 
●     A thorough understanding of parallel and distributed computing (we run Spark applications deployed on a Kubernetes cluster and process data on HDFS). 
●     A self-starter with effective communication skills. 

Nice to Have – Preferred Qualifications 
●     Experience with Power BI, SSIS, SSAS, and SQL Server. 
●     Familiarity with Airbyte and dbt. 
●     Exposure to the ELK stack. 

You will be doing the following: 
●     Developing and deploying applications in Scala, Spark (Scala or Python API), and Python to ingest, consume, and analyze data in batch or streaming fashion. 
●     Proposing solutions for problematic issues and maintaining the whole data stack, including PostgreSQL, HDFS, Kafka, Airflow, Metabase, Spark, Kubernetes, ClickHouse, and PrestoSQL (you will be central to the whole data stack). 
●     Supporting other roles so they can work easily with open-source tools, and resolving their performance issues in SQL and Spark.

This ad was found on the Jobvision website. Click the "Contact Employer" button to go to Jobvision and apply for this job there.

Warning
Note that charging job seekers any fee for hiring, under any title, is illegal. If you encounter anything suspicious, help us pursue violations by clicking "Report ad problem".
Monday, 9 Mehr 1403, 15:31