
Consultant-Data Engineer, AWS+Python, Spark, Kafka for ETL
Ready to shape the future of work
At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges.
If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.
Inviting applications for the role of Consultant-Data Engineer, AWS+Python, Spark, Kafka for ETL
Responsibilities
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka (see the PySpark sketch after this list).
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using big data technologies such as Apache Hadoop and Apache Spark on cloud platforms such as AWS.
- Build data pipelines through ETL (extract, transform, load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyse business and functional requirements, including reviewing existing system configurations and operating methodologies and understanding evolving business needs.
- Analyse requirements and user stories in business meetings, assess their impact on different platforms and applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and maintenance while providing high availability and improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected and existing functionality continues to work unchanged.
- Coordinate with release management and other supporting teams to deploy changes to the production environment.
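By way of illustration, here is a minimal PySpark Structured Streaming sketch of the Kafka-to-S3 pipeline pattern described in the first responsibility. The broker address, topic name, event schema, and S3 paths are assumptions made for the example, not details from this role.

```python
# A minimal sketch of a Kafka -> Spark -> S3 ETL pipeline.
# Requires the spark-sql-kafka-0-10 connector package on the classpath.
# Topic name, schema, broker, and bucket paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical schema for the JSON events on the Kafka topic.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # assumed broker address
    .option("subscribe", "orders")                      # assumed topic name
    .load()
    # Kafka delivers the payload as bytes; decode and parse the JSON body.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .filter(col("order_id").isNotNull())
)

# Land the parsed events in S3 as Parquet; a downstream Glue or Redshift
# load could pick them up from there.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-data-lake/orders/")           # assumed bucket
    .option("checkpointLocation", "s3a://example-data-lake/_chk/orders/")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what lets the stream restart exactly where it left off after a failure, which is the usual basis for the fault tolerance this role asks for.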
Qualifications we seek in you
Minimum Qualifications
- Experience designing and implementing data pipelines, building data applications, and performing data migrations on AWS.
- Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks is an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift (see the boto3 sketch after this list).
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
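As a small illustration of the AWS-side automation these qualifications imply, here is a hedged boto3 sketch that starts an AWS Glue job run and polls it to completion. The job name, region, and job argument are hypothetical.

```python
# A minimal boto3 sketch: start a Glue ETL job run and poll until it
# reaches a terminal state. Job name, region, and arguments are assumed.
import time

import boto3

glue = boto3.client("glue", region_name="us-east-1")  # assumed region

def run_glue_job(job_name: str, run_date: str) -> str:
    """Start a Glue job run and block until it reaches a terminal state."""
    run = glue.start_job_run(
        JobName=job_name,
        Arguments={"--run_date": run_date},  # custom Glue job arguments use a -- prefix
    )
    run_id = run["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"):
            return state
        time.sleep(30)  # Glue runs are long-lived; poll sparingly

if __name__ == "__main__":
    final_state = run_glue_job("orders-nightly-etl", "2024-01-01")  # hypothetical job
    print(f"Glue job finished with state: {final_state}")
```

In production this polling loop would typically give way to Step Functions or EventBridge-driven orchestration, but the underlying Glue API calls are the same.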
Preferred Qualifications/Skills
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS data engineering and cloud certifications; Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change & Incident Management processes.
Why join Genpact
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up.
Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.