About Us:
At Koocafe, we're dedicated to curating the ultimate café experience for everyone. Our mission is to connect coffee enthusiasts with the best cafés while empowering those cafés with the technical infrastructure to effortlessly provide top-quality online services to their customers. Koocafe currently serves users and cafés in multiple countries worldwide and is actively expanding its business in Iran. If you're passionate about joining us on this journey and becoming part of our team, we eagerly await your CV.
Job Description:
We are seeking a talented and highly technical Senior AI/ML Data Scientist to join our dynamic team. As a key member of our data science division, you will be responsible for designing, building, and deploying sophisticated machine learning systems into production. This is a hands-on engineering role focused on creating real-time predictive models, robust feature engineering pipelines, and end-to-end ML platforms that drive our product's intelligence. This position is not focused on analytics or reporting.
Responsibilities:
- Design and Implement ML Models: Develop, implement, and fine-tune a wide range of machine learning models, including supervised/unsupervised algorithms, deep learning architectures (CNN, RNN, Transformers), and advanced recommendation engines.
- Build Production-Grade Pipelines: Construct robust, scalable, and automated ML pipelines using Python and core frameworks like TensorFlow, PyTorch, and scikit-learn, ensuring seamless integration with our CI/CD processes.
- Deploy and Scale Solutions: Manage the deployment of machine learning models into production environments using cloud platforms (AWS SageMaker, GCP AI Platform, Azure ML) and containerization technologies like Docker and Kubernetes.
- Advanced Feature Engineering: Create and maintain sophisticated feature engineering frameworks to process structured, unstructured, and real-time streaming data, using technologies such as Kafka, Spark Streaming, or Flink.
- Experiment and Innovate: Lead advanced statistical modeling, design and analyze A/B tests, and conduct experiments with cutting-edge techniques like Bayesian inference and reinforcement learning to drive continuous improvement.
- Champion MLOps Practices: Collaborate closely with engineering teams to establish and enhance our MLOps practices, including automated model monitoring, versioning, and retraining pipelines.
- Optimize System Performance: Proactively identify and resolve performance bottlenecks, optimizing model latency and resource usage for our large-scale production systems.
- Ensure Clear Documentation: Create and maintain comprehensive documentation for system architecture, codebases, and workflows to ensure maintainability and facilitate knowledge sharing across the team.
Requirements:
- Excellent command of English, both written and spoken.
- Please note: this is a hands-on engineering and model-building role. Applications from candidates with a primary background in data analysis or business intelligence will not be considered.
- Advanced degree in Computer Science, Statistics, Mathematics, or a related quantitative field.
- Hands-on experience in a production-focused AI/ML role, with a proven track record of building and deploying models at scale.
- Expert-level proficiency in Python and deep experience with major ML/DL frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Demonstrated experience with cloud-based ML platforms, such as AWS SageMaker, Azure ML, or GCP AI Platform.
- Strong understanding of core computer science concepts, including algorithms, data structures, and distributed systems.
- Experience with streaming data processing technologies (e.g., Kafka, Spark Streaming, Flink).
- Solid knowledge of MLOps principles and experience with CI/CD, Docker, and Kubernetes for ML systems.
- Proven experience in one or more specialized areas: NLP, computer vision, recommendation systems, or reinforcement learning.
- Excellent communication skills, with the ability to explain complex technical concepts to both engineering teams and non-technical stakeholders.
Nice-to-Haves:
- Experience with vector databases (e.g., Pinecone, Weaviate, Chroma) for implementing similarity search.
- Knowledge of Large Language Models (LLMs), foundation models, and prompt engineering.
- Experience building high-throughput, low-latency ML systems for real-time applications.
- Familiarity with various data storage technologies, including SQL, NoSQL (e.g., Redis), and big data platforms for feature management.
- Contributions to open-source projects in the AI/ML space.
Benefits:
- Competitive salary commensurate with experience.
- Remote work opportunity with flexible hours.
- Comprehensive benefits package.
- Professional development and training opportunities.
- Dynamic and inclusive work culture with opportunities for growth and advancement.
Please send your CV in English.