
Databricks + Hadoop + Python + SQL + PySpark Professional
10 hours ago
We are looking for a skilled professional with expertise in Databricks, Hadoop, Python, SQL, and PySpark to join our team. The ideal candidate should have 6-9 years of experience.
Roles and Responsibilities
- Design and develop scalable data pipelines using Databricks and Hadoop.
- Collaborate with cross-functional teams to integrate data from various sources.
- Develop and maintain large-scale data warehouses using SQL and PySpark (see the sketch after this list).
- Troubleshoot and resolve complex technical issues related to data processing.
- Optimize data storage and retrieval processes for improved efficiency.
- Ensure data quality and integrity by implementing robust testing and validation procedures.
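A minimal illustration of the kind of pipeline work listed above, written in PySpark on Databricks; the table and column names are hypothetical and not part of this posting:

```python
# Sketch only: ingest a raw table, apply a basic transformation, validate, and load.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingest: read a raw source table (hypothetical name).
raw = spark.read.table("raw.orders")

# Transform: derive a date column and drop records without an order id.
clean = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Validate: fail fast if the cleaned dataset is empty.
if clean.limit(1).count() == 0:
    raise ValueError("Validation failed: no rows after cleaning raw.orders")

# Load: write the curated table for downstream SQL consumers.
clean.write.mode("overwrite").saveAsTable("curated.orders")
```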
Job Requirements
- Strong proficiency in Databricks, Hadoop, Python, SQL, and PySpark.
- Experience working with big data technologies is highly desirable.
- Excellent problem-solving skills and attention to detail are required.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills are necessary.
- Familiarity with agile development methodologies is an added advantage.
- Notice period: 0-30 days.
- Full-time graduate education is required.
- PAN India (Bangalore, Hyderabad, Pune, Chennai, Noida, Kolkata, Gurgaon) locations are available.
Location - Bengaluru, Chennai, Gurugram, Hyderabad, Kolkata, Noida, Pune
-
Databricks + Snowflake Professional
11 hours ago
Bengaluru, Chennai, Coimbatore, India Krazy Mantra HR Solutions Pvt. Ltd Full time ₹ 15,00,000 - ₹ 25,00,000 per year. We are looking for a skilled professional with 6 to 11 years of experience in Databricks and Snowflake to join our team in Pune, Bangalore, Chennai, and Coimbatore. The ideal candidate will have a strong background in data processing and analytics. Roles and Responsibilities: Design and develop scalable data pipelines using Azure Data Factory and Azure Data...
-
ETL Databricks
2 weeks ago
Chennai, Tamil Nadu, India Virtusa Full time. Develop and maintain a metadata-driven generic ETL framework for automating ETL code. Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS. InsureMO rating engine experience required. Ingest data from a variety of structured and unstructured sources (APIs, RDBMS, flat files, streaming). Develop and maintain robust data pipelines...
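As a rough, hypothetical sketch of what a metadata-driven ETL step like the one described above might look like in PySpark (the metadata record, S3 path, and table names are invented for illustration):

```python
# Sketch only: each pipeline is described by a metadata record rather than hand-written code.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-etl").getOrCreate()

pipeline_meta = {
    "source_format": "json",
    "source_path": "s3://example-bucket/landing/policies/",  # hypothetical S3 path
    "transform_sql": "SELECT policy_id, premium, upper(state) AS state FROM src",
    "target_table": "curated.policies",
}

def run_pipeline(meta: dict) -> None:
    """Read, transform, and load a dataset based entirely on its metadata."""
    df = spark.read.format(meta["source_format"]).load(meta["source_path"])
    df.createOrReplaceTempView("src")
    transformed = spark.sql(meta["transform_sql"])
    transformed.write.mode("overwrite").saveAsTable(meta["target_table"])

run_pipeline(pipeline_meta)
```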
-
PySpark Developer
7 days ago
Hyderabad, Bengaluru, Chennai, India Coders Brain Technology Private Limited Full time. Job Description: Role Responsibilities: Data Engineering and Processing: Develop and manage data pipelines using PySpark on Databricks. Implement ETL/ELT processes to process structured and unstructured data at scale. Optimize data pipelines for performance, scalability, and cost-efficiency in Databricks. Databricks Platform Expertise: Experience in...
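For illustration only, a small PySpark snippet showing common performance and cost levers of the sort this role calls for (column pruning, early filtering, and controlled output partitioning); the table and column names are assumed:

```python
# Sketch only: prune columns, filter early, and control output partitioning.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-optimization").getOrCreate()

events = (
    spark.read.table("raw.events")
         .select("event_id", "event_type", "event_date", "payload")  # read only needed columns
         .filter(F.col("event_date") >= "2024-01-01")                # filter before heavy work
)

# Coalesce to limit small-file overhead, and partition output by date for pruning downstream.
(events.coalesce(32)
       .write.mode("overwrite")
       .partitionBy("event_date")
       .saveAsTable("curated.events"))
```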
-
Chennai, Tamil Nadu, India Suzva Software Technologies Full time ₹ 15,000 - ₹ 28,00,000 per year. Experience with Azure Data Factory, Azure, Python, PySpark, Databricks, SQL. Key Responsibilities: Design and build ETL/ELT pipelines to ingest, transform, and load data from clinical, omics, research, and operational sources. Optimize performance and scalability of data flows using tools like Apache Spark, Databricks, or AWS Glue. Collaborate with domain experts...
-
Python, PySpark
2 weeks ago
Bengaluru, India Ziniosedge Full time. Python, PySpark. Technology Lead (Data Engineer), BE, 8+ years of industry experience as Lead Developer. Experience in implementing ETL and ELT data pipelines with PySpark; Spark Structured API, Spark SQL, and Spark performance tuning are highly preferred. Experience in building data pipelines on a data lake or Lakehouse (AWS Databricks) and handling...
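A brief sketch of the Spark Structured API and Spark SQL combination this posting highlights; the data and view name are made up for illustration:

```python
# Sketch only: build a DataFrame with the Structured API, then query it with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-example").getOrCreate()

# Structured API: create a DataFrame from in-memory rows.
sales = spark.createDataFrame(
    [("IN", 120.0), ("IN", 80.0), ("US", 200.0)],
    ["country", "amount"],
)

# Spark SQL: expose the DataFrame as a temporary view and aggregate with plain SQL.
sales.createOrReplaceTempView("sales")
totals = spark.sql("SELECT country, SUM(amount) AS total FROM sales GROUP BY country")
totals.show()
```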
-
Lead Cloud Data Engineer
1 week ago
Gurugram, India upGrad Full time. About the Job: We're hiring a cloud data engineer (preferably Azure) to work on data pipelines and Spark. Work with the Databricks platform using Spark for big data processing and analytics. Write optimized and efficient code using PySpark, Spark SQL, and Python. Develop and maintain ETL processes using Databricks notebooks and workflows. Implement and...
-
Databricks Professional
8 hours ago
Bengaluru, Chennai, Coimbatore, India Krazy Mantra HR Solutions Pvt. Ltd Full time ₹ 15,00,000 - ₹ 25,00,000 per year. We are looking for a skilled professional with 5-10 years of experience to join our team as a Databricks expert in Pune, Chennai, Coimbatore, and Bangalore. Roles and Responsibilities: Design and develop data pipelines using Azure Databricks and Azure Data Factory. Collaborate with cross-functional teams to integrate data from various sources into a unified data...
-
Infobell IT
1 week ago
Bengaluru, India Infobell IT Full time. Job Role: Data Engineer. Work mode: On-site/Hybrid. Location: Bangalore. Experience: 6+ years. We are looking for a self-driven and technically strong Data Engineer with 4-6 years of experience to join our growing team. The ideal candidate will be proficient in SQL, Databricks, Kafka, and PySpark, and capable of managing end-to-end (E2E) data deliverables...
-
Databricks PySpark
5 days ago
Chennai, India Virtusa Full time. Key Responsibilities: Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks. Write efficient and production-ready PySpark or Scala code for data transformation and ETL processes. Integrate data from various structured and unstructured sources into a unified platform. Implement Delta Lake and manage data versioning, updates,...
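A hedged sketch of the Delta Lake upsert and versioning pattern mentioned above, assuming a Databricks environment with Delta Lake available; the table names and join key are hypothetical:

```python
# Sketch only: keep a Delta table current with MERGE, then read an earlier version.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Incoming batch of changed records (hypothetical staging table).
updates = spark.read.table("staging.customer_updates")

# MERGE: update matching rows in the target Delta table and insert new ones.
target = DeltaTable.forName(spark, "curated.customers")
(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())

# Delta versioning: read an earlier snapshot of the same table if needed.
previous = spark.read.option("versionAsOf", 0).table("curated.customers")
```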