PySpark Developer



Bengaluru BCIT, India | Synechron Technologies | Full time | ₹ 10,00,000 - ₹ 25,00,000 per year

Lead PySpark Developer

Overall Responsibilities:
  • Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
  • Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
  • Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements (a minimal sketch of this kind of job follows this list).
  • Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
  • Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
  • Automation and Orchestration: Automate data workflows using orchestration tools such as Apache Oozie or Airflow within the Cloudera ecosystem.
  • Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
  • Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
  • Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
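To make the responsibilities above concrete, the sketch below shows what a minimal PySpark ingest-cleanse-publish job of this kind might look like. It is illustrative only: the paths, database, table, and column names (transactions, txn_id, amount, analytics.transactions_curated) are hypothetical, and a real pipeline on CDP would add logging, configuration, and error handling.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical nightly job: ingest raw transactions, cleanse them, publish a curated Hive table.
spark = (
    SparkSession.builder
    .appName("transactions_etl")   # visible in YARN / Cloudera Manager
    .enableHiveSupport()           # read/write Hive tables on the cluster
    .getOrCreate()
)

# Ingestion: raw CSV files landed on HDFS (path is illustrative)
raw = spark.read.option("header", True).csv("hdfs:///data/raw/transactions/")

# Transformation and cleansing: de-duplicate, drop bad rows, normalise types, derive columns
curated = (
    raw.dropDuplicates(["txn_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("txn_date", F.to_date("txn_ts"))
)

# Publish: a partitioned Parquet-backed Hive table, queryable from Hive or Impala
(
    curated.repartition("txn_date")            # limit small files per partition
           .write.mode("overwrite")
           .partitionBy("txn_date")
           .format("parquet")
           .saveAsTable("analytics.transactions_curated")
)

spark.stop()
```

Performance tuning of a job like this typically means right-sizing partitions, caching only DataFrames that are reused, and broadcasting small dimension tables in joins.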
Software Requirements:
  • Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
  • Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
  • Knowledge of data warehousing concepts and ETL best practices, and experience with SQL-based tools such as Hive and Impala (a short Spark SQL example follows this list).
  • Familiarity with Hadoop, Kafka, and other distributed computing tools.
  • Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
  • Strong scripting skills in Linux.
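As a small illustration of the SQL-based tooling mentioned above, the same curated table could be summarised with Spark SQL (or, equivalently, in Hive or Impala). The table and column names are carried over from the hypothetical sketch earlier and are not part of the role description.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily_summary").enableHiveSupport().getOrCreate()

# Aggregate the (hypothetical) curated Hive table into a daily summary.
summary = spark.sql("""
    SELECT txn_date,
           COUNT(*)    AS txn_count,
           SUM(amount) AS total_amount
    FROM analytics.transactions_curated
    GROUP BY txn_date
    ORDER BY txn_date
""")

summary.show(10, truncate=False)
```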
Category-wise Technical Skills:
  • PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
  • Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
  • Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
  • Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
  • Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks (see the Airflow sketch after this list).
  • Scripting and Automation: Strong scripting skills in Linux.
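For the orchestration and scheduling requirement, a minimal Airflow DAG that runs the PySpark job via spark-submit is sketched below. The DAG id, schedule, and script path are hypothetical; on CDP the same job could equally be scheduled with an Oozie coordinator or Airflow's SparkSubmitOperator.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical DAG: submit the nightly PySpark ETL job to YARN.
default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="transactions_etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # 02:00 every day
    catchup=False,
    default_args=default_args,
) as dag:

    run_etl = BashOperator(
        task_id="spark_submit_transactions_etl",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/transactions_etl.py"
        ),
    )
```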
Experience:
  • 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
  • Proven track record of implementing data engineering best practices.
  • Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.
Day-to-Day Activities:
  • Design, develop, and maintain ETL pipelines using PySpark on CDP.
  • Implement and manage data ingestion processes from various sources.
  • Process, cleanse, and transform large datasets using PySpark.
  • Conduct performance tuning and optimization of ETL processes.
  • Implement data quality checks and validation routines (a small example follows this list).
  • Automate data workflows using orchestration tools.
  • Monitor pipeline performance and troubleshoot issues.
  • Collaborate with team members to understand data requirements.
  • Maintain documentation of data engineering processes and configurations.
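The data quality and validation activity can be as simple as asserting row counts, null rates, and key uniqueness before a table is published downstream. Below is a hypothetical check against the curated table used in the earlier sketches; in larger setups these assertions usually live in a dedicated validation framework, but the principle is the same.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").enableHiveSupport().getOrCreate()

df = spark.table("analytics.transactions_curated")   # hypothetical table name

total = df.count()
null_amounts = df.filter(F.col("amount").isNull()).count()
duplicate_ids = total - df.dropDuplicates(["txn_id"]).count()

# Fail the pipeline run loudly if basic expectations are not met.
if total == 0:
    raise ValueError("DQ check failed: curated table is empty")
if null_amounts > 0 or duplicate_ids > 0:
    raise ValueError(
        f"DQ check failed: {null_amounts} null amounts, {duplicate_ids} duplicate txn_ids"
    )

print(f"DQ checks passed: {total} rows validated")
spark.stop()
```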
Qualifications:
  • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • Relevant certifications in PySpark and Cloudera technologies are a plus.
Soft Skills:
  • Strong analytical and problem-solving skills.
  • Excellent verbal and written communication abilities.
  • Ability to work independently and collaboratively in a team environment.
  • Attention to detail and commitment to data quality.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture that promotes equality, diversity, and an environment respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.


Experience Level: Senior Level