About Us:
At Koocafe, we're dedicated to curating the ultimate café experience for everyone. Our mission is to connect coffee enthusiasts with the best cafés while empowering those cafés with the technical infrastructure to effortlessly provide top-quality online services to their customers. Koocafe currently serves users and cafés in multiple countries worldwide and is actively expanding its business in Iran. If you're passionate about joining us on this journey and becoming part of our team, we look forward to receiving your CV.
Job Description:
We are seeking a skilled Data and Machine Learning Engineer to join our growing technical team. In this role, you will be responsible for designing, building, deploying, and maintaining robust and scalable data pipelines and machine learning systems. You will bridge the gap between data science and software engineering, ensuring our data scientists can effectively develop models and that these models are efficiently integrated into our production environment to enhance Koocafe's platform and user experience.
Responsibilities:
- Design, build, and maintain scalable and reliable ETL/ELT data pipelines to collect, process, and store data from various sources.
- Develop and manage the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies.
- Build and operationalize machine learning models developed by data scientists, ensuring they are performant, scalable, and easily deployable.
- Develop MLOps practices including model versioning, automated training/retraining pipelines, model monitoring, and CI/CD for ML workflows.
- Collaborate closely with data scientists to understand model requirements and provide engineering support throughout the model lifecycle.
- Work with software engineers to integrate ML models and data insights into user-facing applications and internal tools via APIs or other mechanisms.
- Monitor, troubleshoot, and optimize data pipelines and ML systems in production to ensure high availability and performance.
- Implement data quality checks and ensure data governance standards are met.
- Stay updated on emerging technologies and best practices in data engineering, machine learning engineering, and MLOps.
Requirements:
- High level of English, both written and verbal.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Proven experience as a Data Engineer, Machine Learning Engineer, or Software Engineer with a strong focus on data infrastructure and ML systems (at least 3 years of relevant experience).
- Strong proficiency in programming languages, particularly Python, and experience with relevant libraries (e.g., Pandas, Dask, Scikit-learn).
- Solid experience with SQL and working with relational databases, data warehouses (e.g., BigQuery, Redshift, Snowflake), and data lakes.
- Hands-on experience building and optimizing data pipelines using tools like Airflow, Prefect, Spark, or similar technologies.
- Experience with cloud platforms (AWS, Azure, or GCP) and their data/ML services (e.g., S3, EC2, SageMaker, EMR, Dataflow, Vertex AI).
- Familiarity with machine learning concepts and experience deploying ML models into production environments.
- Experience with containerization technologies (Docker) and orchestration systems (Kubernetes) is a plus.
- Knowledge of MLOps principles and tools (e.g., MLflow, Kubeflow) is highly desirable.
- Strong software engineering fundamentals, including code reviews, version control (Git), testing, and CI/CD practices.
- Excellent problem-solving, communication, and collaboration skills.
- Ability to work both independently and as part of a team.
Benefits:
- Competitive salary commensurate with experience.
- Remote work opportunity with flexible hours.
- Comprehensive benefits package.
- Professional development and training opportunities.
- Dynamic and inclusive work culture with opportunities for growth and advancement.
Please send your CV in English.