AWS PySpark Data Engineer
1 week ago
We are seeking a highly skilled and experienced Senior Data Engineer to lead the end-to-end development of complex models for compliance and supervision. The ideal candidate will have deep expertise in cloud-based infrastructure, ETL pipeline development, and the financial domain, with a strong focus on creating robust, scalable, and efficient solutions.

Key Responsibilities:
• Model Development: Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks.
• Cloud Infrastructure: Design, build, and optimize scalable cloud infrastructure solutions (minimum of 5 years of experience).
• ETL Pipeline Development: Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing.
• CI/CD Implementation: Build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications.
• Data Analysis: Perform detailed data analysis and deliver actionable insights to stakeholders.
• Collaboration: Work closely with cross-functional teams to understand requirements, present solutions, and ensure alignment with business goals.
• Agile Methodology: Operate effectively in agile or hybrid agile environments, delivering high-quality results within tight deadlines.
• Framework Development: Enhance and expand existing frameworks and capabilities to support evolving business needs.
• Documentation and Communication: Create clear documentation and present technical solutions to both technical and non-technical audiences.

Requirements

Required Qualifications:
• years of experience with Python programming.
• years of experience in cloud infrastructure, particularly AWS.
• years of experience with PySpark, including usage with EMR or Glue Notebooks.
• years of experience with Apache Airflow for workflow orchestration.
• Solid experience with data analysis in fast-paced environments.
• Strong understanding of capital markets, financial systems, or prior experience in the financial domain is a must.
• Proficiency with cloud-native technologies and frameworks.
• Familiarity with CI/CD practices and tools like Jenkins, GitLab CI/CD, or AWS CodePipeline.
• Experience with notebooks (e.g., Jupyter, Glue Notebooks) for interactive development.
• Excellent problem-solving skills and the ability to handle complex technical challenges.
• Strong communication and interpersonal skills for collaborating across teams and presenting solutions to diverse audiences.
• Ability to thrive in a fast-paced, dynamic environment.

Benefits
Standard Company Benefits
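As a rough illustration of the PySpark ETL work this role describes, the filter-and-aggregate shape of a compliance transform might look like the sketch below. It is written in plain Python so it is self-contained; a real Glue or EMR job would apply the same logic to a Spark DataFrame read from S3 or the Glue Data Catalog. The dataset, field names, and threshold are hypothetical.

```python
from collections import defaultdict

def flag_high_value_trades(trades, threshold=1_000_000):
    """Return per-account totals for trades at or above the threshold.

    Hypothetical stand-in for a PySpark filter + groupBy + agg step in a
    compliance pipeline; runs on a plain list of dicts for illustration.
    """
    totals = defaultdict(float)
    for trade in trades:
        if trade["amount"] >= threshold:           # filter step
            totals[trade["account"]] += trade["amount"]  # aggregate step
    return dict(totals)

trades = [
    {"account": "A-1", "amount": 2_500_000.0},
    {"account": "A-1", "amount": 400_000.0},   # below threshold, excluded
    {"account": "B-7", "amount": 1_000_000.0},
]
print(flag_high_value_trades(trades))  # {'A-1': 2500000.0, 'B-7': 1000000.0}
```

In PySpark the same transform would be roughly `df.filter(F.col("amount") >= threshold).groupBy("account").agg(F.sum("amount"))`, with Spark distributing the work across the cluster.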
-
AWS Data Engineer
2 days ago
Pune City, Maharashtra, India · Aligned Automation Services · Full time · ₹ 12,00,000 - ₹ 24,00,000 per year

About the job: A "Better Together" philosophy towards building a better world. Aligned Automation is a strategic service provider that partners with Fortune 500 leaders to digitize enterprise operations and enable business strategies. We believe we can create positive, lasting change in the way our clients work while advancing the global impact of their business...
-
AWS Data Engineer
3 weeks ago
Pune, India · ACENET CONSULTING PRIVATE LIMITED · Full time

Description: About Us: AceNet Consulting is a fast-growing global business and technology consulting firm specializing in business strategy, digital transformation, technology consulting, product development, start-up advisory, and fund-raising services to our global clients across banking & financial services, healthcare, supply chain & logistics, consumer...
-
AWS Data Engineer + PySpark
1 week ago
Hyderabad, Indore, Pune, India · Tata Consultancy Services · Full time · ₹ 40,00,000 - ₹ 1,20,00,000 per year

Dear Candidate, greetings from Tata Consultancy Services. Job openings at TCS:
Skill: AWS Data Engineer + PySpark
Experience range: 4 to 7 years
Location: Pune/Hyderabad/Indore
Notice period: Immediate
Please find the job description below.
• Hands-on experience in Glue, PySpark
• Experience with EMR, S3, IAM, Lambda, CloudFormation, Python
• AMI rehydration, Python, ELB, and other AWS...
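The Lambda and S3 skills listed above commonly combine in an event-driven pattern: an object landing in S3 triggers a Lambda that kicks off downstream processing (e.g. starting a Glue job). The hypothetical handler below shows the shape of that glue code; the event dict follows the standard S3 notification structure, and no AWS SDK call is made so the sketch stays self-contained (a real handler would call something like `boto3.client("glue").start_job_run(...)`).

```python
def lambda_handler(event, context):
    """Collect object keys from ObjectCreated records in an S3 event.

    Illustrative only: a production handler would pass these keys on to a
    Glue job or another downstream consumer.
    """
    keys = [
        rec["s3"]["object"]["key"]
        for rec in event.get("Records", [])
        if rec.get("eventName", "").startswith("ObjectCreated")
    ]
    return {"objects_to_process": keys}

# Minimal example event, shaped like an S3 ObjectCreated notification:
event = {
    "Records": [
        {"eventName": "ObjectCreated:Put",
         "s3": {"object": {"key": "landing/trades/2024-06-01.parquet"}}}
    ]
}
print(lambda_handler(event, None))
# {'objects_to_process': ['landing/trades/2024-06-01.parquet']}
```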
-
AWS Glue PySpark Developer
1 week ago
India · CIGNEX · Full time · ₹ 15,00,000 - ₹ 25,00,000 per year

We are looking for an experienced AWS Glue PySpark Developer to design, develop, and optimize ETL pipelines and data processing solutions on AWS. The ideal candidate will have deep expertise in PySpark, AWS Glue, and data engineering best practices, along with hands-on experience in building scalable, high-performance data solutions in the cloud. Key...
-
PySpark Data Engineer
1 week ago
Gurugram, Haryana, India · Dataeconomy · Full time · ₹ 20,00,000 - ₹ 25,00,000 per year

Job Title: PySpark Data Engineer
Experience: 5–8 Years
Location: Hyderabad
Employment Type: Full-Time

Job Summary: We are looking for a skilled and experienced PySpark Data Engineer to join our growing data engineering team. The ideal candidate will have 5–8 years of experience in designing and implementing data pipelines using PySpark, AWS Glue, and Apache...
-
Pune, India · Tata Consultancy Services · Full time

Job Title: AWS Senior Data Engineer with PySpark, AWS, Glue_Pune
Location: Pune
Experience: 6 to 10 Years
Notice Period: 30-45 days
Job Description:
Must: PySpark, AWS (ETL concepts, S3, Glue, EMR, Redshift, DMS, AppFlow), Qlik Replicate, data testing
Nice to have: Hadoop, Teradata background, IaC (CloudFormation/Terraform), Git
Kind regards, Priyankha M
-
Data Engineer
1 week ago
Pune, India · RSquareSoft Technologies · Full time

Role: Data Engineer
Location: Pune (On-site)
Job Type: Full-Time
Experience: 4–7 Years

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and ETL workflows.
• Work with large datasets using PySpark, Python, and SQL to ensure efficient data transformation and integration.
• Implement data solutions on AWS, leveraging...