We’re looking for a Data Engineer who excels in Python development and has hands-on experience with modern data systems such as Airflow, Kafka, ClickHouse, and PostgreSQL. The ideal candidate will be responsible for building, optimizing, and maintaining scalable data pipelines and models that enable efficient data flow and analytics across the organization.
Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes using Python.
- Work with Kafka to handle real-time data ingestion and event-driven data flows (a minimal example is sketched after this list).
- Manage and optimize ClickHouse and PostgreSQL databases for analytical and transactional workloads.
- Implement and maintain data models, ensuring performance, consistency, and scalability.
- Monitor data quality and develop automated checks and validation scripts (see the second sketch below).
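To give a concrete flavor of the day-to-day work, here is a minimal sketch of a Kafka-to-ClickHouse ingestion loop. It assumes a local Kafka broker and ClickHouse server and uses the kafka-python and clickhouse-driver packages; the topic, table, and field names are hypothetical.

```python
import json

from kafka import KafkaConsumer          # pip install kafka-python
from clickhouse_driver import Client     # pip install clickhouse-driver

# Hypothetical topic and table names, for illustration only.
TOPIC = "page_views"
TABLE = "analytics.page_views"

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)
clickhouse = Client(host="localhost")

batch = []
for message in consumer:
    event = message.value
    batch.append((event["user_id"], event["url"], event["ts"]))
    # Insert in batches: ClickHouse favors few large inserts over many small ones.
    if len(batch) >= 1000:
        clickhouse.execute(f"INSERT INTO {TABLE} (user_id, url, ts) VALUES", batch)
        batch.clear()
```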
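Likewise, an automated data-quality check can start as simple as the sketch below. It assumes a reachable PostgreSQL instance accessed via psycopg2; the connection settings, table, and column names are made up for the example.

```python
import sys

import psycopg2  # pip install psycopg2-binary

# Hypothetical connection settings and table name, for illustration only.
conn = psycopg2.connect(
    host="localhost", dbname="warehouse", user="etl", password="etl"
)

CHECKS = {
    # Each check returns a count that should be 0 when the data is healthy.
    "null_user_ids": "SELECT COUNT(*) FROM orders WHERE user_id IS NULL",
    "negative_amounts": "SELECT COUNT(*) FROM orders WHERE amount < 0",
    "future_timestamps": "SELECT COUNT(*) FROM orders WHERE created_at > now()",
}

failed = False
with conn, conn.cursor() as cur:
    for name, query in CHECKS.items():
        cur.execute(query)
        bad_rows = cur.fetchone()[0]
        if bad_rows:
            print(f"FAIL {name}: {bad_rows} offending rows")
            failed = True
        else:
            print(f"OK   {name}")

# A non-zero exit code lets a scheduler such as Airflow flag the run as failed.
sys.exit(1 if failed else 0)
```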
Required Skills & Qualifications:
- 1–3 years of experience as a Data Engineer or in a similar role.
- Strong programming skills in Python, including experience with web development frameworks such as Django.
- Solid understanding of data modeling, relational schemas, and normalization principles.
- Hands-on experience with Kafka, ClickHouse, and PostgreSQL.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.