Role Summary:
Design, build, and optimize systems for ingesting, storing, and analyzing massive volumes of structured and unstructured data, ensuring the reliability, scalability, and performance of Big Data platforms.
Key Responsibilities:
• Develop, test, and maintain Big Data architectures (data lakes, data warehouses, ETL pipelines).
• Design scalable and fault-tolerant data ingestion frameworks (batch and real-time).
• Integrate Big Data tools and frameworks such as Hadoop, Spark, Kafka, Hive, and HBase.
• Optimize data workflows for performance and cost.
• Ensure data security, privacy, and governance compliance.
Skills and Qualifications:
• Experience with distributed systems and Big Data technologies (Hadoop ecosystem, Spark, Kafka).
• Proficiency in programming languages: Python, Java, Scala, SQL.
• Knowledge of data modeling, ETL design, and pipeline management.
• Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.
Why Join Saei?
By joining our team, you’ll have the chance to work on diverse and challenging projects alongside talented professionals who are passionate about technology and problem-solving. We value your growth, encourage creativity, and support a healthy work-life balance. If you’re looking to make a meaningful impact in a fast-growing IT company, Saei is the place for you.