
PySpark/Databricks
1 week ago
Bachelor's degree or equivalent in Computer Engineering, Computer Science, or a related field.
5+ years of experience in a data/software engineering role.
3+ years of experience with building AWS or Azure cloud-based data pipelines and AI solutions.
3+ years of experience with Python and Spark.
Strong experience with Databricks, including Spark-based processing, data pipelines, and the platform's tools for analytics and machine learning.
The role holder will possess the blend of design skills needed for Agile data development projects.
Proficiency in, or a demonstrable passion for learning, data engineering techniques and testing methodologies.
Strong written and verbal communication skills.
Desirable Skills:
Databricks certification.
Experience working within a DevOps delivery model.
Industry experience with data in large, complex settings; consultancy or vendor experience.
**About Virtusa**
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 27,000 people that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us.
Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
-
Databricks + Pyspark
1 week ago
Chennai, Tamil Nadu, India | Virtusa | Full time
Data Pipeline Development: Design, implement, and maintain scalable and efficient data pipelines using PySpark and Databricks for ETL processing of large volumes of data. Cloud Integration: Develop solutions leveraging Databricks on cloud platforms (AWS/Azure/GCP) to process and analyze data in a distributed computing environment. Data Modeling: Build robust...
-
Databricks PySpark
2 weeks ago
Chennai, India | Virtusa | Full time
Key Responsibilities: Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks. Write efficient and production-ready PySpark or Scala code for data transformation and ETL processes. Integrate data from various structured and unstructured sources into a unified platform. Implement Delta Lake and manage data versioning, updates,...
-
Databricks PySpark
2 days ago
Chennai, Tamil Nadu, India | Virtusa | Full time | ₹ 5,00,000 - ₹ 15,00,000 per year
Key Responsibilities: Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks. Write efficient and production-ready PySpark or Scala code for data transformation and ETL processes. Integrate data from various structured and unstructured sources into a unified platform. Implement Delta Lake and manage data versioning, updates,...
-
ETL Databricks with AWS
3 days ago
Chennai, Tamil Nadu, India | Virtusa | Full time
Develop and maintain a metadata-driven generic ETL framework for automating ETL code. Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS. Ingest data from a variety of structured and unstructured sources (APIs, RDBMS, flat files, streaming). Develop and maintain robust data pipelines for batch and streaming data using Delta...
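The "metadata-driven generic ETL framework" this listing asks for can be sketched as follows. This is a toy illustration in plain Python (not PySpark, and not any specific employer's framework): the pipeline is described as data, and a small runner dispatches each step to a registered transform. All names here (`TRANSFORMS`, `run_pipeline`, the step names) are invented for the example.

```python
# Sketch of a metadata-driven ETL pattern: steps live in metadata,
# so adding a step requires no changes to the runner code.

TRANSFORMS = {}

def transform(name):
    """Register a transform function under a metadata-friendly name."""
    def register(fn):
        TRANSFORMS[name] = fn
        return fn
    return register

@transform("drop_inactive")
def drop_inactive(rows):
    # Filter step: keep only rows flagged active.
    return [r for r in rows if r.get("active")]

@transform("uppercase_names")
def uppercase_names(rows):
    # Cleansing step: normalize the name field.
    return [{**r, "name": r["name"].upper()} for r in rows]

def run_pipeline(rows, steps):
    """Apply each transform named in the step metadata, in order."""
    for step in steps:
        rows = TRANSFORMS[step["transform"]](rows)
    return rows

# The pipeline is defined entirely by metadata.
pipeline_metadata = [
    {"transform": "drop_inactive"},
    {"transform": "uppercase_names"},
]

source = [
    {"name": "ada", "active": True},
    {"name": "bob", "active": False},
]
print(run_pipeline(source, pipeline_metadata))
# -> [{'name': 'ADA', 'active': True}]
```

In a real Databricks setting the transforms would operate on Spark DataFrames and the metadata would typically live in configuration tables or files, but the dispatch pattern is the same.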
-
Databricks
1 day ago
Chennai, Tamil Nadu, India | Virtusa | Full time
Key Responsibilities: Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks. Write efficient and production-ready PySpark or Scala code for data transformation and ETL processes. Integrate data from various structured and unstructured sources into a unified platform. Implement Delta Lake and manage data versioning, updates,...
-
PySpark & Databricks
6 days ago
Chennai, Tamil Nadu, India | Virtusa | Full time
Bachelor's degree or equivalent in Computer Engineering, Computer Science, or a related field. 7+ years of experience in a data/software engineering role. 3+ years of experience with building AWS or Azure cloud-based data pipelines and AI solutions. 3+ years of experience with Python and Spark. Strong experience with Databricks, including Spark-based...
-
PySpark Developer
2 weeks ago
Hyderabad, Bengaluru, Chennai, India | Coders Brain Technology Private Limited | Full time
Job Description: Role Responsibilities. Data Engineering and Processing: Develop and manage data pipelines using PySpark on Databricks. Implement ETL/ELT processes to process structured and unstructured data at scale. Optimize data pipelines for performance, scalability, and cost-efficiency in Databricks. Databricks Platform Expertise: Experience in...
-
Bengaluru, Chennai, Gurugram, India | Krazy Mantra HR Solutions Pvt. Ltd | Full time | ₹ 15,00,000 - ₹ 25,00,000 per year
We are looking for a skilled professional with expertise in Databricks, Hadoop, Python, SQL, and PySpark to join our team. The ideal candidate should have 6-9 years of experience. Roles and Responsibilities: Design and develop scalable data pipelines using Databricks and Hadoop. Collaborate with cross-functional teams to integrate data from various sources. Develop...
-
PySpark Developer
2 days ago
Bengaluru, Chennai, Pune, India | Quess Corp Limited | Full time | ₹ 8,00,000 - ₹ 24,00,000 per year
Role Summary: We are seeking a highly skilled PySpark Developer with hands-on Databricks experience for the IT Systems Development unit, in an offshore capacity. This role focuses on designing, building, and optimizing large-scale data pipelines and processing solutions on the Databricks Unified Analytics Platform. The ideal candidate will have expertise in big data...
-
PySpark Reltio
6 days ago
Chennai, Tamil Nadu, India | Virtusa | Full time
Key Responsibilities: Develop and maintain data pipelines using PySpark in distributed computing environments (e.g., AWS EMR, Databricks). Integrate and synchronize data between enterprise systems and the Reltio MDM platform. Design and implement data transformation, cleansing, and enrichment processes. Collaborate with data architects, business analysts,...