Job - Data Engineer (PySpark + SQL)
1 week ago
General Description Of Role
Professionals in this group design and implement high-performance, scalable and optimized data solutions for large enterprise-wide data mining and processing. They are responsible for design and development of data flows and Big Data Platform deployment and implementation. Incumbents usually require expert knowledge in Databricks, Spark, SQL etc.
JOB RESPONSIBILITIES
Design and propose end-to-end data pipelines (ingestion to storage to consumption) for data projects, as well as databases to support web-based applications.
Design and implement data warehouses and data marts to serve data consumers.
Carry designs from the design stage through implementation, operationalization, and ongoing maintenance.
Perform database design and modelling: organize data at both the macro and micro levels and provide logical data models for consumers.
Database performance tuning and data lifecycle management.
Assist in support and enhancements of existing data pipelines and databases.
SKILLS/COMPETENCIES REQUIRED
6-10 years of total experience working with data integration teams.
3+ years of in-depth experience developing data pipelines within an Apache Spark environment (preferably Databricks).
2+ years of hands-on, in-depth experience with Databricks.
2+ years of in-depth experience working with a data warehouse and knowledge of data warehouse modelling techniques.
Strong knowledge of PySpark, Python, SQL, and distributed computing principles.
Strong knowledge of data modelling, database technologies, and data warehousing.
Experience with ETL/ELT processes, both design and implementation, using SSIS or another ETL/ELT tool.
Knowledge of cloud platforms (AWS or Azure) and big data technologies (Hadoop, Spark, etc.).
Fluent in both complex SQL query performance tuning and database performance tuning.
Understand the importance of performance and be able to implement best practices that ensure performance and maintainability for data-centric projects.
NICE TO HAVES
Some experience developing data solutions using native IaaS and PaaS solutions on AWS (Redshift, RDS, S3) will be an advantage.