Immediate Start: Principal Consultant, Sr. Databricks Developer
3 weeks ago
Hyderabad, India | Genpact | Full time
Job Description
Inviting applications for the role of Principal Consultant, Sr. Databricks Developer. In this role, the Sr. Databricks Developer is responsible for solving real-world, cutting-edge problems and for mentoring a team of one or more junior developers toward that goal.
Responsibilities
- Design and develop end-to-end data pipelines using PySpark, SQL/Spark-SQL, and Delta Lake.
- Translate requirements into scalable, high-performance data solutions.
- Participate in architecture discussions and provide technical inputs.
- Implement ETL/ELT workflows in the Lakehouse (Bronze, Silver, Gold layers); a minimal sketch of such a pipeline appears after the qualifications list.
- Optimize pipelines, clusters, and queries for performance and cost.
- Integrate Databricks with cloud services, APIs, and messaging systems.
- Implement data quality checks, unit tests, and integration tests.
- Develop both batch and streaming pipelines for real-time use cases.
- Contribute to shared assets, frameworks, and accelerators.
- Collaborate with leads, architects, and analysts during delivery.
- Support data migration projects to the Databricks Lakehouse.
- Document technical solutions, coding standards, and best practices.
- Act as a strategic advisor on Databricks adoption, modernization, and data strategy.
- Define enterprise-scale architectures leveraging the Databricks Lakehouse platform.
- Establish data governance frameworks with Unity Catalog (lineage, auditing, security, compliance).
- Oversee multiple delivery streams, ensuring alignment with standards and best practices.
- Drive reusable accelerators, frameworks, and reference architectures.
- Engage with stakeholders on cost optimization, innovation, and roadmap planning.
- Lead thought leadership initiatives (whitepapers, blogs, conferences).
- Mentor and coach developers, senior developers, and leads.
- Support Centers of Excellence (CoE) and competency frameworks.
- Stay ahead of emerging technologies, trends, and Databricks innovations.
Qualifications we seek in you
Minimum qualifications
- Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or an engineering discipline), or equivalent work experience.
- Relevant years in data engineering with hands-on Databricks experience.
- End-to-end implementation of at least 2 Databricks projects (migration/integration).
- Strong background in batch and streaming data pipelines.
- Proficiency in Python (preferred) or Scala for Spark-based development.
- Expertise in SQL and Spark-SQL, data structures, and algorithms.
- Deep knowledge of Databricks components: Delta Lake, DLT, dbConnect, REST API 2.0, and Workflows orchestration.
- Design, develop, and optimize large-scale batch pipelines for ingestion, transformation, and analytics using Databricks.
- Build and manage low-latency streaming pipelines using Spark Structured Streaming, Delta Live Tables, or other Databricks-native frameworks to enable real-time insights and decision-making.
- Strong in performance optimization of pipelines (efficiency, scalability, cost reduction).
- Hands-on experience with Apache Spark, Hive, and Lakehouse architecture.
- Cloud expertise (Azure/AWS), including storage (ADLS/S3), messaging (ASB/SQS), compute (ADF/Lambda), and databases (CosmosDB/DynamoDB/Cloud SQL).
- Experience writing unit tests and integration tests for data pipelines.
- Ability to work with architects and lead engineers to design solutions that meet functional and non-functional requirements.
- Excellent technical skills, enabling the creation of future-proof, complex global solutions.
- Team player with experience leading teams of 5+ engineers.
- Strong communication and client-facing skills.
- Keeps up to date with emerging technologies and industry trends.
- Strong analytical and problem-solving abilities.
- Positive attitude towards continuous learning and upskilling.
- Good to have: Databricks SQL Endpoint understanding.
- Good to have: understanding of Lakeflow Connect and Lakeflow Declarative Pipelines.
- Good to have: CI/CD experience building pipelines for Databricks jobs.
- Good to have: experience on a migration project building a unified data platform.
- Good to have: knowledge of DBT.
- Good to have: knowledge of Docker and Kubernetes.
- Certification at the Databricks Associate or Professional level.
- Any one cloud certification (AWS/Azure) at the Associate/Professional level (Data Engineer or Architect) is an added advantage.
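The responsibilities above reference medallion-style (Bronze/Silver/Gold) pipelines, Delta Lake, and Spark Structured Streaming. The sketch below shows, in minimal form, what such a pipeline can look like in PySpark; it is illustrative only and assumes a Delta-enabled Spark environment such as a Databricks cluster. The paths, business key, and table names (e.g. /mnt/raw/orders, order_id, silver.orders) are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch of a medallion-style (Bronze -> Silver) pipeline with an optional
# streaming ingest. Paths, keys, and table names are hypothetical placeholders;
# assumes a Delta-enabled Spark environment such as a Databricks cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw JSON files as-is, stamping each row with an ingestion timestamp.
bronze_df = (
    spark.read.format("json")
    .load("/mnt/raw/orders/")                      # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze_df.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: cleanse and deduplicate Bronze data into a queryable Delta table.
silver_df = (
    spark.read.format("delta").load("/mnt/bronze/orders")
    .filter(F.col("order_id").isNotNull())         # hypothetical business key
    .dropDuplicates(["order_id"])
)
silver_df.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Streaming variant: incrementally ingest new files with Structured Streaming.
# The "cloudFiles" (Auto Loader) source is Databricks-specific.
stream_df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/orders")
    .load("/mnt/raw/orders/")
)
(
    stream_df.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .outputMode("append")
    .start("/mnt/bronze/orders")
)
```

In practice, the Silver-to-Gold aggregation step, data-quality checks, and the unit/integration tests called out in the responsibilities would sit on top of a skeleton like this.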
Principal Consultant- Databricks Developer
3 days ago
Hyderabad, India | Genpact | Full time
Job Description: Ready to shape the future of work? At Genpact, we don’t just adapt to change - we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work...
Senior Principal Consultant
1 week ago
Hyderabad, India | Genpact | Full time
Ready to shape the future of work? At Genpact, we don’t just adapt to change - we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster,...
Databricks with Python
1 week ago
Hyderabad, India | Tata Consultancy Services | Full time
Role: Databricks with Python. Experience range: 5-8 years. Location: Hyderabad. NOTE: Relevant Databricks experience must be 5 years; apply only if interested in the WALK-IN drive in Hyderabad. Job description - Must Have: Extensive expertise in designing and implementing data load processes using Azure Data Factory, Azure Databricks, Delta Lake, Azure Delta Lake...