Current jobs related to Data Engineer - PySpark - Bengaluru, Karnataka - ValueLabs
-
PySpark Data Engineer
2 weeks ago
Bengaluru, Karnataka, India BLS360 Full time
Role Overview: This is a contract, remote role. We are seeking a highly motivated PySpark Data Engineer with 3–4 years of experience in building, optimizing, and maintaining large-scale data processing solutions. The ideal candidate will bring hands-on expertise in PySpark, Python, SQL, and real-time data streaming and will contribute to the design and...
-
Pyspark
2 weeks ago
Bengaluru, Karnataka, India Virtusa Full time ₹ 1,04,000 - ₹ 1,30,878 per year
Highly skilled Data Engineer with 6+ years of experience in the IT industry, specializing in Data Engineering and Data Migration. The ideal candidate will have strong expertise in Oracle to PostgreSQL migration and be well-versed in AWS DMS (Database Migration Service). The role requires proficiency in Python and PySpark to work on various data migration...
-
Python Pyspark Data Engineer
2 weeks ago
Bengaluru, Karnataka, India DXC Technology Full time
Python PySpark Data Engineer. Job locations: Hyderabad, Bangalore, Chennai, Kolkata, Noida, Gurgaon, Pune, Indore, Mumbai. Strong Python skills: proficient in Python for data manipulation, automation, and building reusable components. Data pipeline development: experience designing and maintaining ETL data pipelines using tools like Airflow or...
-
Pyspark Developer
2 weeks ago
Bengaluru, Karnataka, India Synechron Technologies Pvt. Ltd. Full time ₹ 9,00,000 - ₹ 12,00,000 per year
Day-to-Day Activities: Design, develop, and maintain ETL pipelines using PySpark on CDP. Implement and manage data ingestion processes from various sources. Process, cleanse, and transform large datasets using PySpark. Conduct performance tuning and optimization of ETL processes. Implement data quality checks and validation routines. Automate data workflows...
-
Pyspark Architect
17 hours ago
Bengaluru, Karnataka, India Talent Worx Full time ₹ 1,04,000 - ₹ 1,30,878 per year
Job Description: PySpark Architect. Location: Bangalore, Pune, Kolkata. Experience Required: 7 to 13 years. Role Overview: We are seeking a highly skilled and motivated PySpark Manager to lead our data engineering initiatives. The ideal candidate will have deep expertise in PySpark, Hadoop, AWS, Teradata, Scala, and data visualization tools. You will be responsible...
-
Data Engineer
1 week ago
Bengaluru, Karnataka, India NTT DATA, Inc. Full time ₹ 1,04,000 - ₹ 1,30,878 per year
Req ID: 321800. We are currently seeking a Data Engineer (Talend & PySpark) to join our team in Bangalore, Karnataka (IN-KA), India (IN). Job Duties and Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical...
-
Data Engineer
2 weeks ago
Bengaluru, Karnataka, India Golden Opportunities Full time ₹ 1,04,000 - ₹ 1,30,878 per year
Job title: Data Engineer (Python + PySpark + SQL). Candidate specification: minimum 6 to 8 years of experience as a Data Engineer. Job description: Data Engineer with strong expertise in Python, PySpark, and SQL. Design, develop, and maintain robust data pipelines using PySpark and Python. Strong understanding of SQL and relational databases (e.g.,...
-
Synechron - Data Engineer - Spark/Scala/PySpark
2 weeks ago
Bengaluru, Karnataka, India Hirist Full time
Note: If shortlisted, you will be invited for initial rounds on 13th September '25 (Saturday) in Bengaluru. Job Overview: We are seeking talented and driven Data Engineers with strong expertise in Big Data technologies and hands-on experience in Spark/Scala and PySpark. The ideal candidate will also possess frontend development skills using Angular (v10+) or...
-
Data Engineer
2 weeks ago
Bengaluru, Karnataka, India O2F Info Solutions Pvt. Ltd. Full time
Job Summary: We are seeking a highly skilled Senior Data Engineer with 4 to 8 years of experience in building robust data pipelines and working extensively with PySpark to join our data engineering team. Key Responsibilities: Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark to process large datasets and support...
-
Software Engineer, PySpark
1 week ago
Bengaluru, Karnataka, India RBS Full time ₹ 1,50,000 - ₹ 28,00,000 per year
Join us as a Software Engineer, PySpark. This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at senior analyst level. What you'll...
Data Engineer - PySpark
2 weeks ago
Job Title: PySpark Data Engineer
We're growing our Data Engineering team at ValueLabs and looking for a talented individual to build scalable data pipelines on the Cloudera Data Platform.
Experience: 5 to 9 years.
PySpark Job Description:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy (a minimal example pipeline is sketched after this list).
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using orchestration tools such as Apache Oozie or Airflow within the Cloudera ecosystem (an illustrative Airflow DAG is sketched at the end of this posting).
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with data engineers, analysts, product managers, and other stakeholders to understand data requirements and support data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
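To make the responsibilities above concrete, here is a minimal sketch of the kind of PySpark ETL job this role involves: ingest a table from a relational source over JDBC, cleanse and transform it, apply a simple data quality check, and write the result to a partitioned Hive table on CDP. The connection details, table names, and columns are illustrative assumptions, not details from this posting.

```python
# Minimal PySpark ETL sketch (illustrative; connection details and names are assumptions).
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")
    # Example performance-tuning settings of the kind this role adjusts per workload.
    .config("spark.sql.shuffle.partitions", "200")
    .config("spark.sql.adaptive.enabled", "true")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingestion: read a source table from a relational database over JDBC
# (assumes the JDBC driver is available on the cluster).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://source-db:5432/sales")  # assumed source system
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "etl_password")
    .load()
)

# Transformation and cleansing: drop malformed rows, normalize types, derive columns.
cleaned = (
    orders
    .dropna(subset=["order_id", "order_ts"])
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
)

# Data quality check: fail fast if the primary key is duplicated.
dup_count = cleaned.groupBy("order_id").count().filter(F.col("count") > 1).count()
if dup_count > 0:
    raise ValueError(f"Data quality check failed: {dup_count} duplicate order_id values")

# Load: write the curated dataset to a partitioned Hive table on CDP.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .format("parquet")
    .saveAsTable("curated.orders")
)
```

In a real pipeline the tuning values (shuffle partitions, adaptive execution, executor sizing) and the quality rules would be adjusted per dataset as part of the performance optimization and validation work described above.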
Qualifications
Education and Experience
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
Technical Skills
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
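As an illustration of the orchestration skills listed above, the following is a minimal Airflow DAG that schedules the PySpark job from the earlier sketch via SparkSubmitOperator. The DAG id, schedule, script path, and Spark connection are assumptions made for the example (Airflow 2.4+ with the Apache Spark provider); an equivalent workflow could be expressed in Apache Oozie instead.

```python
# Minimal Airflow orchestration sketch (illustrative; ids, paths, and settings are assumptions).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="orders_etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:

    # Submit the PySpark ETL script to the cluster; the assumed "spark_default"
    # connection points at the CDP/YARN environment.
    run_orders_etl = SparkSubmitOperator(
        task_id="run_orders_etl",
        application="/opt/jobs/orders_etl.py",  # assumed path to the PySpark script
        conn_id="spark_default",
        conf={
            "spark.executor.memory": "4g",
            "spark.executor.cores": "2",
        },
    )
```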