
Manager - Data Engineer (AWS, Python, Spark, Databricks for ETL) - Agentic AI
3 weeks ago
Job Description
Inviting applications for the role of Manager - Data Engineer (AWS, Python, Spark, Databricks for ETL) - Agentic AI.
In this role, you'll be part of Genpact's transformation under GenpactNext, as we lead the shift to Agentic AI Solutions: domain-specific, autonomous systems that redefine how we deliver value to clients. You'll help drive the adoption of innovations like the Genpact AP Suite in finance and accounting, with more Agentic AI products set to expand across service lines.
Responsibilities
- Design, develop, and manage scalable ETL pipelines using AWS Glue, Databricks, Apache Spark, and Python to process structured and unstructured data from diverse sources.
- Manage releases and oversee testing and proofs of concept (PoCs), including cost evaluations, of AWS services and related tools.
- Build and orchestrate data workflows integrating with services such as AWS Lambda, Step Functions, S3, and Redshift, ensuring high availability and performance.
- Optimize Spark jobs for performance and cost-efficiency across Databricks and AWS Glue environments using partitioning, job bookmarks, and dynamic frame operations.
- Maintain secure data solutions in AWS, leveraging IAM roles, KMS encryption, and VPC-based security to meet compliance and governance standards.
- Migrate legacy ETL jobs and data from on-prem systems to cloud-native architectures on AWS Glue, Redshift, and DynamoDB.
- Monitor data pipeline performance, debugging and tuning Spark jobs to ensure reliable execution and minimal downtime.
- Contribute to the design and review of technical solutions, translating business requirements and user stories into scalable data engineering architectures.
- Conduct unit testing and data validation to ensure functional correctness of pipelines before deployment.
- Contribute to production deployment and collaborate with release management to ensure seamless delivery of data solutions.
- Recommend cost-effective, secure, and high-performing cloud-based data solutions, reducing manual overhead and operational burden.
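The partitioning and pre-deployment validation duties above can be sketched in plain Python. This is a minimal illustrative sketch, not AWS Glue or Databricks API code; the record layout (an `event_ts` ISO timestamp field) and the Hive-style year/month/day partition scheme are assumptions chosen to mirror a common S3 data-lake layout.

```python
from collections import defaultdict
from datetime import datetime

def partition_key(record: dict) -> str:
    """Derive a Hive-style partition path (year/month/day) from an
    event timestamp, mirroring the S3 prefix layout that Glue and
    Spark rely on for partition pruning. The 'event_ts' field name
    is an assumption for this sketch."""
    ts = datetime.fromisoformat(record["event_ts"])
    return f"year={ts.year}/month={ts.month:02d}/day={ts.day:02d}"

def partition_records(records: list[dict]) -> dict[str, list[dict]]:
    """Group records by partition key before writing, so each output
    file lands under a single partition prefix."""
    buckets: dict[str, list[dict]] = defaultdict(list)
    for rec in records:
        buckets[partition_key(rec)].append(rec)
    return dict(buckets)

def validate(records: list[dict], required: tuple[str, ...]) -> list[dict]:
    """Drop records missing required fields -- a simple data-validation
    gate of the kind run before deployment."""
    return [r for r in records if all(r.get(k) is not None for k in required)]
```

In a real Glue or Databricks job the grouping would be done by Spark's partitioned writes rather than in driver-side Python; the sketch only shows the shape of the logic.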
Qualifications we seek in you
Minimum Qualifications
- Experience designing and implementing data pipelines, building data applications, and performing data migrations on AWS
- Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift
- Experience with Databricks is an added advantage
- Strong experience in Python and SQL
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
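The SQL proficiency listed above can be exercised locally. The sketch below runs a typical aggregate-and-filter transform using Python's built-in sqlite3 module; the `orders` table and its columns are invented for illustration, and the same SQL shape would apply on Redshift or Spark SQL.

```python
import sqlite3

# In-memory database standing in for a warehouse table; the
# 'orders' schema here is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0), ("south", 50.0)],
)

# A common transformation pattern: aggregate per group, then
# filter on the aggregate with HAVING.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY region
    """
).fetchall()
```

Here `rows` contains only the `north` region (total 200.0), since `south` falls below the HAVING threshold.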
Preferred Qualifications / Skills
- Bachelor's degree in business information systems (IS), computer science, or a related field, or equivalent related IT experience.
- AWS Data Engineering & Cloud certifications, Databricks certifications
- Familiar with multiple data integration technologies and cloud platforms
- Knowledge of Change & Incident Management process