Data Engineer (PySpark + SQL)

2 weeks ago


Gurgaon, Haryana, India · Accolite · Full time · ₹15,00,000 – ₹25,00,000 per year

About The Role
We are seeking an experienced Data Engineer to design, implement, and optimize a global data handling and synchronization solution across multiple regions. You will work with cloud-based databases, data lakes, and distributed systems, ensuring compliance with data residency and privacy requirements (e.g., GDPR).

Requirements
6+ years of experience as a Data Engineer in cloud environments (preferably Microsoft Azure).

Strong knowledge of Azure SQL, Data Lake, Data Factory, PySpark, and related services.

Experience with Spark performance optimization.

Experience with distributed data architectures and data synchronization across regions.

Familiarity with data privacy, security, and compliance (GDPR, etc.).

Proficiency in Python, SQL, and ETL tools.

Excellent problem-solving and communication skills.

Hands-on, with the ability to contribute independently.

Preferred
Experience with MS-SQL, Cosmos DB, Databricks, and event-driven architectures.

Knowledge of CI/CD and infrastructure-as-code (Azure DevOps, ARM/Bicep, Terraform).

Key Responsibilities
Design and implement data pipelines for global and regional data synchronization (Azure SQL, Data Lake, Data Factory, PySpark, etc.).

Develop solutions for secure handling of PII and non-PII data, ensuring compliance with GDPR and other regulations.

Build and optimize ETL processes for the anonymization, transformation, global synchronization, and distribution of data.

Collaborate with software architects and DevOps to integrate data flows with application logic and deployment pipelines.

Set up monitoring, alerting, and documentation for data processes within the existing frameworks.

Advise on best practices for data partitioning, replication, and schema evolution.
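The anonymization responsibility above often comes down to deterministic pseudonymization: hashing a PII column with a salt so records remain joinable across regions without exposing raw identifiers. A minimal sketch (illustrative only, not part of the role description; the salt and function names are hypothetical):

```python
import hashlib

# Hypothetical salt; in practice this would come from a secrets store,
# never be hard-coded, and be rotated per compliance policy.
SALT = "example-salt"

def pseudonymize(value: str, salt: str = SALT) -> str:
    """Deterministically hash a PII value. The same input always yields
    the same token, so datasets can still be joined on the hashed key."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

# Example: stable 64-character hex token for a given identifier.
token = pseudonymize("jane.doe@example.com")
```

In PySpark, the built-in `pyspark.sql.functions.sha2` applies the same idea column-wise, e.g. `df.withColumn("email_hash", sha2(concat(lit(SALT), col("email")), 256))`.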
