Who we are
Mindtickle is the market-leading revenue productivity platform that combines on-the-job learning and deal execution to get more revenue per rep. Mindtickle is recognized as a market leader by top industry analysts and is ranked by G2 as the #1 sales onboarding and training product. We're honoured to be recognized as a Leader in the first-ever Forrester Wave™: Revenue Enablement Platforms, Q3 2024!
Role Overview
As an SDE-2 in CoE-ML, you are an independent contributor who owns modules end-to-end, brings strong engineering judgment to AI/ML problems, and actively raises the technical bar of the team. You have moved beyond task execution: you drive design, anticipate failure modes, and begin to influence the technical direction of your pod.
You will:
• Own, improve, and extend production AI/ML components, deeply understanding what exists before choosing to build new.
• Take end-to-end responsibility for the reliability, performance, and cost-efficiency of the AI modules you own.
• Contribute meaningfully to architecture discussions and challenge designs with data and first-principles thinking.
• Actively leverage AI-assisted development tools and agentic workflows to multiply your own productivity.
• Mentor SDE-1 engineers and interns, sharing technical knowledge and engineering best practices.
• Partner closely with product managers, QA, data engineering, and DevOps to ship cohesive AI-powered features.
Key Responsibilities
AI/ML Development & Productionization
• Design, implement, and continuously improve production-grade AI/ML components, including LLM-powered features, RAG pipelines, agentic workflows, and model inference services. You are expected to deeply understand existing systems, identify opportunities to enhance their quality, reliability, or performance, and own those improvements end-to-end.
• Improve and extend existing AI infrastructure (prompt pipelines, retrieval systems, embedding workflows, and agentic orchestration layers) rather than defaulting to greenfield solutions.
• Write clean, well-tested, maintainable code in Python (primary) and optionally Java or Go, following software engineering best practices.
• Implement unit, integration, and regression tests for AI components, including evaluation harnesses for LLM output quality.
• Contribute to CI/CD pipelines and ensure smooth deployment of AI services on AWS/Kubernetes infrastructure.
• Optimize model inference for latency, throughput, and cost, identifying bottlenecks and proposing concrete solutions.
Model Quality & Evaluation
• Build and maintain evaluation frameworks to assess model performance, output quality, and regressions across releases, using platforms such as Maxim, LangFuse, or Weights & Biases.
• Define and track quality metrics (precision, recall, BLEU, ROUGE, LLM-as-judge scores, or task-specific KPIs) for modules under your ownership.
• Contribute to prompt engineering, few-shot design, and model selection to measurably improve output quality.
• Treat evaluation as an ongoing operational discipline, not a one-time pre-release check, and integrate it into the development and deployment lifecycle.
• Identify data quality issues affecting model performance and work with data engineering to resolve them.
Production Operations & Observability
• Monitor AI services in production using infrastructure observability tooling such as Datadog, Prometheus, and Grafana.
• Use AI gateway platforms (e.g., LiteLLM, Portkey, TrueFoundry) to track LLM traffic, enforce per-project cost attribution, and maintain governance over model access across environments.
• Instrument and observe agentic workflows built on frameworks such as LangGraph or CrewAI, tracing multi-step executions, identifying failure points, and improving reliability.
• Respond to production incidents, conduct root cause analysis, and implement preventive fixes.
• Participate in on-call rotations and contribute to runbooks and post-mortems.
• Proactively surface model drift, latency degradation, and cost anomalies before they escalate.
Architecture & Design
• Lead low-level design (LLD) for features and modules under your ownership, and actively participate in high-level design (HLD) discussions.
• Surface trade-offs around scalability, cost, and reliability, and back your recommendations with data from production systems.
• Document technical designs, API contracts, and component behaviours clearly, and keep them up to date.
• Propose and drive improvements to existing systems based on production learnings.
Collaboration & Communication
• Work closely with SDE-3s and Tech Leads to align on design decisions and delivery plans.
• Communicate progress, blockers, and technical risks clearly to the pod and stakeholders without waiting to be asked.
• Collaborate with product and QA to translate requirements into precise technical acceptance criteria.
• Contribute to design reviews and provide constructive, evidence-based feedback on peers' work.
Mentorship & Knowledge Sharing
• Mentor SDE-1s and interns on technical approaches, code quality, debugging methodology, and AI tooling.
• Document learnings, failure analyses, and best practices in the team's knowledge base.
• Participate in team tech talks, brown-bags, and internal AI community events.
AI-Native Ways of Working
• Use AI-assisted development tools (e.g., Claude Code, Cursor, Copilot) as a standard part of your daily development workflow, not as an afterthought.
• Experiment with agentic workflows to automate repetitive engineering tasks such as test generation, documentation, and code review preparation.
• Develop familiarity with the agentic frameworks the team uses (LangGraph, CrewAI, or similar), both as a builder and as an operator debugging them in production.
• Share productivity patterns, prompt techniques, and tool evaluations with the team, actively helping raise the floor of AI tool adoption across the pod.
Required Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 3–5 years of professional software development experience, with at least 1–2 years working on AI/ML systems in production.
• Solid programming skills in Python; familiarity with Java or Go is a plus.
• Hands-on experience building or operating ML pipelines, LLM-based features, or data-intensive services.
• Working knowledge of cloud platforms (AWS preferred) and containerised deployments (Docker, Kubernetes).
• Understanding of machine learning fundamentals: model training, evaluation, feature engineering, and deployment trade-offs.
• Familiarity with agentic frameworks (LangChain, LangGraph, CrewAI, or similar) and LLM observability/evaluation platforms (Maxim, LangFuse, Weights & Biases, or equivalent).
• Experience writing automated tests and contributing to CI/CD pipelines.
• Clear written and verbal communication skills; able to document and explain technical decisions to peers and stakeholders.
Preferred Experience
• Experience with vector databases and embedding-based retrieval systems (e.g., Pinecone, Weaviate, pgvector).
• Hands-on prompt engineering experience: chain-of-thought, few-shot, structured output, and tool-calling patterns.
• Exposure to AI gateway platforms (LiteLLM, Portkey, TrueFoundry) for LLM cost governance and traffic management.
• Familiarity with open-source LLMs (Hugging Face) and fine-tuning workflows.
• Exposure to ASR/TTS or multimodal systems is a bonus.
• Prior experience using AI coding assistants (Claude Code, Cursor, GitHub Copilot) as part of a daily development workflow.
Our culture & accolades
As an organization, it's our priority to create a highly engaging and rewarding workplace. We offer tons of awesome perks and many opportunities for growth.
Our culture reflects our employees' globally diverse backgrounds, our commitment to our customers and each other, and a passion for excellence. We live our values, DAB: Delight your customers, Act as a Founder, and Better Together.
Mindtickle is proud to be an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work - In compliance with applicable laws, all persons hired will be required to verify identity and eligibility to work in the respective work locations and to complete the required employment eligibility verification form upon hire.