Consultant - Data Engineer (AWS, Python, Spark, Databricks for ETL) - Agentic AI

19 hours ago


Gurgaon, Haryana, India · Genpact · Full time
Job Description

Inviting applications for the role of Consultant - Data Engineer (AWS, Python, Spark, Databricks for ETL) - Agentic AI.

In this role, you'll be part of Genpact's transformation under GenpactNext, as we lead the shift to Agentic AI Solutions: domain-specific, autonomous systems that redefine how we deliver value to clients. You'll help drive the adoption of innovations like the Genpact AP Suite in finance and accounting, with more Agentic AI products set to expand across service lines.

Responsibilities

- Design, develop, and manage scalable ETL pipelines using AWS Glue, Databricks, Apache Spark, and Python to process structured and unstructured data from diverse sources.

- Write clean, testable code in Python and SQL across AWS services such as S3, Lambda, Glue, EMR, and Redshift, applying cloud security principles and best practices; engage in code reviews and agile ceremonies.

- Build and orchestrate data workflows integrating with services such as AWS Lambda, Step Functions, S3, and Redshift, ensuring high availability and performance.

- Optimize Spark jobs for performance and cost-efficiency across Databricks and AWS Glue environments using partitioning, job bookmarks, and dynamic frame operations.

- Develop and maintain secure data solutions in AWS, leveraging IAM roles, KMS encryption, and VPC-based security to meet compliance and governance standards.

- Migrate legacy ETL jobs and data from on-prem systems to cloud-native architectures on AWS Glue, Redshift, and DynamoDB.

- Implement and monitor data pipeline performance, performing debugging and tuning of Spark jobs to ensure reliable execution and minimal downtime.

- Collaborate in the design and review of technical solutions, translating business requirements and user stories into scalable data engineering architectures.

- Perform unit testing and data validation to ensure functional correctness of pipelines before deployment.

- Lead production deployment and coordinate with release management to ensure seamless delivery of data solutions.

- Recommend cost-effective, secure, and high-performing cloud-based data solutions, reducing manual overhead and operational burden.

- Contribute to backup, disaster recovery, and business continuity strategies for critical data assets.

- Participate in code reviews, technical design discussions, and DevOps integration for CI/CD of data pipelines using tools like Git, CodePipeline, or Databricks Repos.
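The orchestration work (Step Functions coordinating Glue jobs alongside S3 and Redshift) is typically expressed as an Amazon States Language definition. A minimal sketch; the state names and the Glue job name `example-etl-job` are invented for illustration:

```json
{
  "Comment": "Hypothetical ETL orchestration: run a Glue job with retries",
  "StartAt": "RunGlueJob",
  "States": {
    "RunGlueJob": {
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": { "JobName": "example-etl-job" },
      "Retry": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "IntervalSeconds": 60,
          "MaxAttempts": 2,
          "BackoffRate": 2.0
        }
      ],
      "Next": "Done"
    },
    "Done": { "Type": "Succeed" }
  }
}
```

The `.sync` integration makes the state machine wait for the Glue run to finish, which is what lets retries and failure handling live in the workflow rather than in the job itself.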
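In miniature, the extract-transform-load and data-validation duties above follow a common shape. The sketch below is a hypothetical, plain-Python illustration (a production pipeline would run equivalent logic as PySpark on Glue or Databricks); the field names and the non-negative-amount rule are invented for illustration.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop rows that fail validation."""
    out = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data validation: skip malformed records
        if amount < 0:
            continue  # hypothetical business rule: reject negative amounts
        out.append({"customer_id": row["customer_id"].strip(), "amount": amount})
    return out

def load(rows: list[dict]) -> str:
    """Load: serialize back to CSV (stand-in for an S3/Redshift write)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["customer_id", "amount"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

raw = "customer_id,amount\n c1 ,10.5\nc2,-3\nc3,abc\nc4,7\n"
clean = transform(extract(raw))
```

The unit-testing responsibility then amounts to asserting properties of `transform` on known-bad inputs before deployment.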

Qualifications we seek in you

Minimum Qualifications

- Experience designing and implementing data pipelines, building data applications, and performing data migrations on AWS

- Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift

- Experience with Databricks is an added advantage

- Strong experience in Python and SQL

- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.

- Advanced programming skills in Python for data processing and automation.

- Hands-on experience with Apache Spark for large-scale data processing.

- Proficiency in SQL for data querying and transformation.

- Strong understanding of security principles and best practices for cloud-based environments.

- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.

- Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment.

- Strong communication and collaboration skills to work effectively with cross-functional teams.
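As a small illustration of the SQL-for-transformation requirement above, the sketch below aggregates and filters rows using an in-memory SQLite database; the table and column names are hypothetical, and a real pipeline would run similar SQL on Redshift or Spark SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("c1", 10.5), ("c1", 4.5), ("c2", 7.0)],
)

# Typical transformation step: aggregate per customer, keep large totals.
totals = conn.execute(
    """
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
    HAVING total >= 10
    ORDER BY customer_id
    """
).fetchall()
```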

Preferred Qualifications/ Skills

- Master's degree in Computer Science, Electronics, Electrical Engineering, or an equivalent field

- AWS Data Engineering & Cloud certifications, Databricks certifications

- Experience with multiple data integration technologies and cloud platforms

- Knowledge of Change & Incident Management process

  • Senior Data Engineer

    10 hours ago


    Gurgaon, Haryana, India beBeeDataEngineer Full time ₹ 15,00,000 - ₹ 25,00,000

    Job OverviewA senior data engineer is needed to lead the development of our ETL processes using AWS, Python, Spark, and Databricks. The ideal candidate will have expertise in designing and implementing scalable data pipelines and collaborating with cross-functional teams.

  • Big Data Engineer

    4 weeks ago


    Gurgaon, Haryana, India One Click AI Full time

    Job Title : Big Data Engineer (Hadoop/Spark)Experience : 36 YearsLocation : Noida / Gurugram (Hybrid or On-site, as applicable)Notice Period : 30 DaysEmployment Type : Summary :We are looking for a skilled and passionate Big Data Engineer with strong expertise in the Hadoop ecosystem and Apache Spark to join our growing data engineering team. The ideal...


  • Gurgaon, Haryana, India Mindera Full time ₹ 15,00,000 - ₹ 20,00,000 per year

    We are looking for an experienced Data Engineer to become a valuable member of our energetic team. The perfect candidate will possess extensive knowledge of big data technologies, ETL/ELT workflows, and data modeling techniques. This position will concentrate on designing and enhancing data pipelines, maintaining data integrity, and bolstering our analytics...

  • Data Engineer

    2 weeks ago


    Gurgaon, Haryana, India Michael Page Full time

    About Our ClientOur client is an international professional services brand of firms, operating as partnerships under the brand. It is the second-largest professional services network in the worldJob DescriptionQualifications⎯ Bachelor's degree in Computer Engineering, Computer Science or related discipline, Master's Degree preferred.⎯ 3+ years of ETL...


  • Gurgaon, Haryana, India Genpact Full time

    Job DescriptionInviting applications for the role of Consultant - Multi Agent Developers - Agentic AIIn this role, you%27ll be part of Genpact%27s transformation under GenpactNext, as we lead the shift to Agentic AI Solutions-domain-specific, autonomous systems that redefine how we deliver value to clients. You%27ll help drive the adoption of innovations...


  • Gurgaon, Haryana, India Simpplr Full time US$ 1,20,000 - US$ 2,00,000 per year

    Who We AreSimpplr is the AI-powered platform that unifies the digital workplace – bringing together engagement, enablement, and services to transform the employee experience. It streamlines communication, simplifies interactions, automates workflows, and elevates the everyday experience of work. The platform is intuitive, highly extensible, and built to...

  • Data Engineer

    4 weeks ago


    Gurgaon, Haryana, India Sureminds Solutions Full time

    Job Description : Data Engineer Python + AWS + AIExp Level : 5 to 8 yearsLocation : Gurgaon & RemoteInterview Process : Assessment OR 1st Technical interview , 2 - Client interview , 1 HR Round.Role Overview : Lead design and delivery of LLM-powered, agentic AI solutions with robust RAG pipelines and prompt-engineering best practices.Key Responsibilities :-...

  • Urgent Manager

    2 days ago


    Gurgaon, Haryana, India EXL IT service management Full time

    Job DescriptionMLOpsWe are looking for a highly skilled Analytics & Data Engineering professional with a strong background in Machine Learning, MLOps, and DevOps. The ideal candidate will have experience designing and implementing scalable data and analytics pipelines, enabling production-grade ML systems, and supporting agent-based development leveraging...


  • Gurgaon, Haryana, India Simpplr Full time ₹ 15,00,000 - ₹ 20,00,000 per year

    Who We AreSimpplr is the AI-powered platform that unifies the digital workplace – bringing together engagement, enablement, and services to transform the employee experience. It streamlines communication, simplifies interactions, automates workflows, and elevates the everyday experience of work. The platform is intuitive, highly extensible, and built to...

  • Data Engineer

    4 weeks ago


    Gurgaon, Haryana, India SK HR Consultants Full time

    Job Summary :- Understanding of how to design technological solutions to complex data problems, developing & testing modular, reusable, efficient and scalable code to implement those solutions- Understand, implement, and automate ETL pipelines with better industry standards- Identify, design, and implement internal process improvements : automating manual...