Job Description:
Role Description:
We are looking for a hands-on Data Engineer who can rapidly build AI-enabled applications, automate workflows, and deliver solutions end-to-end using modern SDLC practices.
The role requires a builder mindset—someone comfortable moving from idea to prototype to production quickly, while following good engineering discipline.
Key Responsibilities
6–8+ years of experience in data analytics, data engineering, ETL, and BI
Build lightweight data & AI applications (dashboards, APIs, copilots, automations, PoCs)
Develop data pipelines, feature flows, and AI integrations (batch & near-real-time)
Implement automation and workflows across systems (APIs, event-driven, scheduled jobs)
Apply SDLC best practices: version control, CI/CD, testing, documentation
Collaborate with business and product teams to translate ideas into working solutions
Package solutions for reusability and scale (modular code, templates, configs)
Stay current with new techniques, tools, and frameworks in data science and AI
Functional & Business Competencies
Machine Learning & Model Development
Statistical Analysis & Feature Engineering
Problem Framing & Hypothesis Testing
Business Impact Orientation
Collaboration with Engineering & Product Teams
Data Pipeline Design & Optimization
ETL/ELT Development
Cloud Data Services
Collaboration with Analysts & Scientists
Data Quality & Performance Monitoring
Clear Communication of Analytical Findings
Qualifications & Experience
5–7 years of experience in data science, machine learning, or applied statistics
Hands-on experience in application development and workflow automation
Proficient in Python (e.g., scikit-learn, pandas), SQL, and ML frameworks
Familiarity with cloud services for data (e.g., Databricks, Azure Data Factory)
Experience working with cloud-based ML environments (e.g., AWS SageMaker, Azure ML)
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or related field