As an AI Engineer, you’ll play a key role in building, deploying, and maintaining scalable AI solutions that power data-driven decision-making and customer experiences. You’ll work closely with Data Scientists and Data Engineers to bring machine learning models and large language models (LLMs) from research to production, ensuring they are reliable, efficient, and impactful.
 
Key Responsibilities
1. Model Development & Integration
- Collaborate with Data Scientists to implement, fine-tune, and optimize ML and AI models for production.
- Support feature engineering, data preprocessing, and dataset management.
- Integrate AI models into business systems and customer-facing applications.
2. Infrastructure & MLOps
- Design and maintain CI/CD pipelines for model training, testing, and deployment.
- Use Docker and Kubernetes to deploy and scale AI services.
- Build and manage end-to-end ML pipelines using tools such as MLflow or Kubeflow (a brief illustrative sketch follows this list).
- Monitor model and system performance, ensuring uptime, scalability, and reliability.
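To make this concrete, here is a minimal sketch of the kind of experiment tracking that sits inside such a pipeline, using MLflow’s Python API. The experiment name, parameters, and model are illustrative placeholders, not a prescribed setup.

```python
# Minimal MLflow tracking sketch: log parameters, metrics, and a model artifact.
# The experiment name and the toy model below are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200)
    model.fit(X_train, y_train)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```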
3. Vector Databases & LLM Applications
- Develop and optimize retrieval-augmented generation (RAG) pipelines using vector databases such as Qdrant, Pinecone, or Weaviate (see the sketch after this list).
- Integrate LLM-based solutions (e.g., chatbots, assistants, summarization tools) into Snapp Market’s products.
- Experiment with embeddings, prompt engineering, and model fine-tuning for business use cases.
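As an illustration of what the retrieval step of a RAG pipeline can look like (not a prescribed stack), here is a minimal sketch assuming a Qdrant collection named `products` that has already been populated with sentence-transformer embeddings.

```python
# Minimal RAG retrieval sketch: embed a query, fetch the nearest chunks from Qdrant,
# and assemble them into a prompt for an LLM. The collection name, embedding model,
# and prompt format are illustrative assumptions.
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer

client = QdrantClient(url="http://localhost:6333")   # assumed local Qdrant instance
encoder = SentenceTransformer("all-MiniLM-L6-v2")    # any embedding model would do

def retrieve_context(question: str, top_k: int = 5) -> str:
    query_vector = encoder.encode(question).tolist()
    hits = client.search(
        collection_name="products",                   # hypothetical collection
        query_vector=query_vector,
        limit=top_k,
    )
    # Each payload is assumed to store the original text chunk under "text".
    return "\n".join(hit.payload["text"] for hit in hits)

def build_prompt(question: str) -> str:
    context = retrieve_context(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```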
4. Collaboration & Documentation
- Partner cross-functionally with Data Science, Engineering, and Product teams.
- Document model architectures, workflows, and deployment procedures for reproducibility.
- Contribute to internal AI best practices, standards, and tooling improvements.
5. Innovation & Continuous Learning
- Stay ahead of emerging AI and MLOps technologies, frameworks, and research.
- Prototype new AI solutions and assess their business value.
- Promote a culture of experimentation, learning, and technical excellence.
 
What You’ll Bring
- Strong programming skills in Python.
- Hands-on experience with PyTorch, TensorFlow, or Hugging Face Transformers.
- Solid understanding of MLOps and CI/CD practices.
- Experience with Docker, Kubernetes, and model orchestration tools.
- Familiarity with vector databases and retrieval-based AI systems.
- Experience working with or deploying LLMs in production.
- Understanding of APIs, data pipelines, and system integration.
- Excellent problem-solving, analytical, and collaboration skills.