Current jobs related to Data Engineer python, spark, Hadoop, big data - Bengaluru - Prontoex Consulting Services Pvt Ltd
-
Big Data Engineer
2 months ago
Bengaluru, India - MaimsD Technology - Full time
Job Title : Big Data Developer
Experience : 4-7 years
Location : Bengaluru
Job Type : Full-Time
Job Description :
About the Role : As a Big Data Developer, you will be responsible for designing, developing, and maintaining applications that process and analyze large datasets. You will work closely with data analysts, data scientists, and other team members to...
-
Big Data Engineer/Lead
4 weeks ago
Bengaluru, India - TekPillar - Full time
Join our dynamic team in Bangalore as a Big Data Engineer and contribute to our key data engineering initiatives! You will play a crucial role in managing and prioritizing projects while driving strategies and executing data engineering solutions.
Job Title : Big Data Engineer/Lead
Experience : 4 to 9 Years
Location : Bangalore
Key Responsibilities :
- Collaborate...
-
Data Engineer
4 weeks ago
Bengaluru, India - Arting Digital - Full time
Job Title : Data Engineer
Experience : 5-7 yrs
Location : Bangalore
Skills : Data Engineer, Snowflake, Python, DBT and Spark or Hadoop and Machine Learning.
Notice Period : 1 month - Immediate joiner
Roles and Responsibility :
- Migration of machine learning models from Spark to Snowflake.
- Bachelor's/Master's degree in computer science, information...
-
Big Data Engineer
3 weeks ago
Bengaluru, Karnataka, India - Weekday - Full time
Job Title : Big Data Engineer
This role is for a talented Big Data Engineer to join our team at Weekday. As a key member of our engineering team, you will be responsible for designing, developing, and deploying large-scale data processing systems using Big Data technologies such as Hadoop, Spark, and Elasticsearch.
Responsibilities :
Design and develop scalable...
-
Senior Data Consultant
2 weeks ago
Bengaluru, Karnataka, India - NTT DATA - Full time
About the Role : We are seeking a highly skilled Hadoop Big Data Engineer to join our team in Bengaluru, India. As a key member of our team, you will be responsible for designing and implementing big data applications and robust data pipelines running on Hadoop clusters.
Key Responsibilities :
Design and develop big data applications using Spark, Hive, and other...
-
krtrimaIQ - Data Engineer - Python/Spark/Hadoop
2 months ago
Bengaluru, India - KrtrimaIQ Cognitive Solution - Full time
Job Description : We are looking for an experienced Data Engineer (immediate joiners only) with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability...
-
Data Engineer
1 week ago
Bengaluru, India - NTT DATA - Full time
Job Description
Mandatory Skills and Qualifications :
- Snowflake : Proven experience in using Snowflake for data warehousing solutions.
- Databricks : Strong hands-on experience with Databricks for big data processing and analytics using Machine Learning.
- Redshift : Expertise in managing and optimizing Amazon Redshift data warehouses.
- Spark : Advanced knowledge of...
-
Big Data Engineer
2 months ago
Bengaluru, India - IT Source Global - Full time
Job Description :
Role & Responsibilities :
Data Pipeline Development :
- Design and implement scalable, fault-tolerant data pipelines using Hadoop and Spark.
- Process large datasets from various sources into clean, structured formats for analytics.
Hadoop Ecosystem :
- Work with HDFS (Hadoop Distributed File System) for storing and managing massive datasets.
- ...
-
Data Engineer
1 week ago
Bengaluru, Karnataka, India - NTT DATA - Full time
Job Description
Key Skills and Qualifications :
- Cloud Platforms : Proficient in AWS.
- Programming Languages : Expert in SQL and Python.
- ETL Tools : Experience with ETL tools and frameworks.
Requirements :
- Education : Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- Skills : Proven experience in using Snowflake for data...
-
Big Data Engineer
4 weeks ago
Bengaluru, India - TekPillar - Full time
Position : Big Data Engineer
Experience : 4 to 7 Years
Location : Bangalore
Mandatory Skills : Hadoop, Hive, Spark, Python, SQL, AWS/Azure, Airflow
Job Description :
- 4-6 years of relevant experience with a bachelor's degree
- Exposure to Satellite/Telecom industries
- Proven leadership in large data engineering initiatives
- Proficiency in Big Data tools...
-
Lead Data Engineer
2 months ago
Bengaluru, India - Rigel Networks - Full time
Job Title : Lead Data Engineer
Location : Bangalore
Notice Period : Immediate - 15 Days
About the Role : We are seeking a highly skilled Lead Data Engineer with extensive experience in data engineering and a strong background in Big Data technologies. The ideal candidate will have 7+ years of relevant experience, with expertise in Python, SQL, and AWS, and a...
-
Data Analyst
3 months ago
Bengaluru, India - Sadup Softech - Full time
Job Description :
- Minimum of 3 years of hands-on experience.
- Python/ML, Hadoop, Spark : Minimum of 2 years of experience.
- At least 3 years of prior experience as a Data Analyst.
- Detail-oriented with structured thinking and an analytical mindset.
- Proven analytic skills, including data analysis, data validation, and technical writing.
- Strong proficiency...
-
Big Data Engineer
4 weeks ago
Bengaluru, Karnataka, India - TapTalent - Full time
Job Description :
Company : TapTalent
Years of experience : 7-12 years
Location : Bangalore, Chennai
Budget : As per company norms
Notice Period : Immediate/Serving
Work Model : WFO
Job Type : Permanent
Key Responsibilities : We are seeking a highly skilled Data Engineer with hands-on experience in BigQuery, DataProc, Airflow, Cloud Composer, and GCP Hydra Services. The...
-
Data Engineer
2 weeks ago
Bengaluru, Karnataka, India - TapTalent - Full time
Job Title : Big Data Engineer
Company : TapTalent
Role : Data Engineer - Big Data Hadoop GCP
Key Responsibilities :
- Design and develop big data solutions using GCP
- Work with cross-functional teams to implement data pipelines
- Collaborate with data scientists to develop predictive models
Requirements :
- 3+ years of experience in big data engineering
- Hands-on experience with...
-
Hadoop Administrator
4 months ago
Bengaluru, India - Tehno Right - Full time
Job Role : Hadoop Administrator (role open for multiple locations) - WFH and WFO
Job Description :
What is your role?
- You will manage Hadoop clusters, data storage, server resources, and other virtual computing platforms.
- You will perform a variety of functions, including data migration, virtual machine set-up and training, troubleshooting end-user problems,...
-
Data Engineer
3 months ago
Bengaluru, India - CAPCO - Full time
Data Engineer (Hadoop/Hive/Python/Spark/Scala) at Capco India - Bengaluru
Job Title : Senior Data Engineer/Developer
Number of Positions : 2
Job Description : The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in...
-
Big Data Engineer
4 weeks ago
Bengaluru, Karnataka, India - TekPillar - Full time
Job Title : Big Data Engineer
Job Summary : We are seeking a highly skilled Big Data Engineer to join our team at TekPillar. The ideal candidate will have a strong background in big data technologies, including Hadoop, Hive, and Spark, as well as experience with programming languages such as Scala, Python, and SQL.
Key Responsibilities :
Design and develop...
-
Big Data Engineer
4 weeks ago
Bengaluru, Karnataka, India - Hexaware Technologies - Full time
Job Title : Big Data Developer
We are seeking a highly skilled Big Data Developer to join our team at Hexaware Technologies. As a Big Data Developer, you will be responsible for designing, developing, and implementing large-scale data processing systems.
Key Responsibilities :
Design and develop data processing systems using big data technologies such as Hadoop,...
-
Big Data Engineer
2 months ago
Bengaluru, India - TEAMLEASE DIGITAL PRIVATE LIMITED - Full time
Job Title : Data Engineer
Location : Bangalore / Hyderabad
Job Type : Full-time
Work Arrangement : 4 days a week in the office, 1 day remote
Experience Level : 4+ years
Role Overview : We are seeking an experienced Data Engineer with a strong background in Hadoop, MapReduce, and Java. The ideal candidate will have a proven track record of handling large-scale...
Data Engineer python, spark, Hadoop, big data
2 months ago
Client : Capgemini - Direct Payrolls
Location: Bangalore
Max Budget : 3-4 Yrs : 12 LPA / 4-6 Yrs : 18 LPA
Title: Data Engineer with python, spark, Hadoop, big data
Exp: 3 to 6 yrs
Job Description:
As a Data Engineer with expertise in PySpark, Databricks, and Microsoft Azure, you will be
responsible for designing, developing, and maintaining robust and scalable data pipelines and
processing systems. You will work closely with data scientists, analysts, and other stakeholders to
ensure our data solutions are efficient, reliable, and scalable.
Responsibilities:
• Design, develop, and optimize ETL pipelines using PySpark and Databricks to process large-scale
data on the Azure cloud platform.
• Implement data ingestion processes from various data sources into Azure Data Lake and Azure SQL
Data Warehouse.
• Develop and maintain data models, data schemas, and data transformation logic tailored for Azure.
• Collaborate with data scientists and analysts to understand data requirements and deliver high-quality datasets.
• Ensure data quality and integrity through robust testing, validation, and monitoring procedures.
• Optimize and tune PySpark jobs for performance and scalability within the Azure and Databricks
environments.
• Implement data governance and security best practices in Azure.
• Monitor and troubleshoot data pipelines to ensure timely and reliable data delivery.
• Document data engineering processes, workflows, and best practices specific to Azure and
Databricks.
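The responsibilities above amount to an extract-validate-transform-load loop. As a minimal, self-contained sketch of that flow (standard-library Python only; the actual stack in this role is PySpark and Databricks on Azure, and every name below is invented for illustration):

```python
import csv
import io
import sqlite3

# Hypothetical sketch of the extract -> validate -> transform -> load flow.
# A real pipeline here would use PySpark on Azure Databricks; the standard
# library stands in so the example is runnable on its own.

RAW_CSV = """order_id,amount,currency
1,100.50,USD
2,,USD
3,75.00,EUR
"""

def extract(text):
    """Extract: parse raw CSV rows from a source (file, API, lake path...)."""
    return list(csv.DictReader(io.StringIO(text)))

def validate(rows):
    """Data quality gate: reject rows missing a mandatory amount."""
    return [r for r in rows if r["amount"]]

def transform(rows):
    """Transform: cast types and normalize currency codes."""
    return [(int(r["order_id"]), float(r["amount"]), r["currency"].upper())
            for r in rows]

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse-like target table."""
    conn.execute("CREATE TABLE orders (order_id INT, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(validate(extract(RAW_CSV))), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # order 2 was rejected by the validation step
```

The same shape carries over to PySpark: `extract` becomes a `spark.read` call against Azure Data Lake, the validation and transformation become DataFrame operations, and `load` becomes a write to a warehouse table.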
Requirements:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Proven experience as a Data Engineer with a strong focus on PySpark and Databricks.
• Proficiency in Python.
• Strong experience with Azure data services, including Azure Data Lake, Azure Data Factory, Azure
SQL Data Warehouse, and Azure Databricks.
• Strong SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL
databases (e.g., MongoDB, Cassandra).
• Experience with big data technologies such as Hadoop, Spark, Hive, and Kafka.
• Strong understanding of data architecture, data modeling, and data integration techniques.
• Familiarity with Azure DevOps, version control systems (e.g., Git), and CI/CD pipelines.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills.
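The "strong SQL skills" requirement typically means joins plus aggregation. A quick illustration against an in-memory SQLite database (the schema and values are invented for the example; any relational database such as MySQL or PostgreSQL would run essentially the same statement):

```python
import sqlite3

# Invented toy schema: customers and their orders, aggregated by region.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'south'), (2, 'north');
    INSERT INTO orders VALUES (1, 10.0), (1, 15.0), (2, 7.5);
""")

# Join the two tables and sum revenue per region.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS revenue
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('south', 25.0), ('north', 7.5)]
```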
Preferred Qualifications:
• Experience with Delta Lake on Azure Databricks.
• Knowledge of data visualization tools (e.g., Power BI, Tableau).
• Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
• Understanding of machine learning concepts and experience working with data scientists.
Skills:
• Azure Data Factory : Experience in creating and orchestrating data pipelines; understanding of triggers and data flows.
• Databricks : Knowledge of Apache Spark; programming in Python, Scala, or R; experience optimizing data processing and transformation jobs; experience querying databases and tables in SQL.
• Azure Data Lake Storage : Experience working with ADLS Gen 1 and Gen 2; knowledge of hierarchy, file systems, and security aspects.
• Azure DevOps : Experience working with repositories, pipelines, builds, and releases; understanding of CI/CD processes.
• Data integration : Knowledge of various data sources and data formats such as JSON, CSV, XML, Parquet, and Delta, as well as databases such as Azure SQL, MySQL, or PostgreSQL.
Tasks:
• Data extraction : Identifying and extracting data from various sources such as databases, APIs, file systems, and external services.
• Data transformation: Data cleaning, enrichment and normalization according to project
requirements.
• Data loading: Loading the transformed data into target databases, data warehouses or data lakes.
• Data pipeline development : Implementing and automating ETL or ELT processes using Azure Data