
PySpark Architect
7 hours ago
Job Description: PySpark Architect
Location: Bangalore, Pune, Kolkata
Experience Required: 7 to 13 Years
Role Overview:
We are seeking a highly skilled and motivated PySpark Architect to lead our data
engineering initiatives. The ideal candidate will have deep expertise in PySpark,
Hadoop, AWS, Teradata, Scala, and data visualization tools. You will be responsible for
designing, building, and managing large-scale data processing solutions while leading a
team of data engineers to deliver high-quality outcomes.
Key Responsibilities:
- Leadership & Team Management:
• Lead and mentor a team of data engineers, ensuring high performance and
professional growth.
• Oversee project timelines, quality, and deliverables for multiple data initiatives.
- Data Engineering & Development:
• Design and implement robust data pipelines and workflows using PySpark and
Hadoop.
• Optimize performance and scalability of big data solutions for complex data sets.
• Develop and integrate solutions with AWS services for cloud-based data processing.
- Technical Expertise:
• Leverage expertise in PySpark, Hadoop, and Scala to solve complex technical
challenges.
• Work with Teradata for large-scale data warehousing solutions.
• Utilize data visualization tools (e.g., Tableau, Power BI) to provide insights and
reporting solutions.
- Stakeholder Collaboration:
• Collaborate with cross-functional teams, including business stakeholders and product
managers, to understand requirements and deliver solutions.
• Provide thought leadership on best practices in big data technologies and architecture.
Required Skills & Qualifications:
- Technical Expertise:
• Proficiency in PySpark, Hadoop, AWS (S3, Glue, EMR, Redshift), and Teradata.
• Hands-on experience with Scala for data engineering tasks.
• Familiarity with data visualization tools such as Tableau, Power BI, or similar platforms.
- Experience:
• 7–13 years of experience in data engineering, with a focus on big data technologies.
• Proven track record of managing end-to-end data engineering projects and leading teams.
- Educational Background:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Soft Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and stakeholder management abilities.
• Ability to work in a fast-paced, dynamic environment.
Preferred Skills:
• Certification in AWS or related big data tools.
• Experience in real-time data streaming platforms like Kafka.
• Knowledge of DevOps practices, including CI/CD pipelines and infrastructure as code.
Talent Worx is a growing services and recruitment consulting firm. We are hiring for our client, a leading global provider of financial intelligence, data analytics, and AI-driven solutions that empowers businesses worldwide with insights for confident decision-making. Join us to work on cutting-edge technologies, drive digital transformation, and shape the future of global markets.
-
AWS Data Engineering
2 days ago
Kolkata, West Bengal, India | Careernet | Full time | ₹ 8,00,000 - ₹ 24,00,000 per year
Key Skills: AWS, Data Engineer
Roles and Responsibilities:
• Architect and develop data pipelines using AWS services.
• Design and optimize relational and non-relational databases.
• Implement ETL/ELT processes using AWS Glue, Lambda, and other tools.
• Manage data lakes and warehouses (e.g., S3, Redshift, Athena).
• Collaborate with data scientists, analysts, and business...
-
Data Science Architect
4 weeks ago
Kolkata, West Bengal, India | UST | Full time
Professional experience in business analysis, requirement gathering, and solution workflow design for AI/ML analytics projects. Experience in LLM/Gen AI, prompt engineering, RAG, and parameter/hyperparameter tuning. Strong programming skills in Python and SQL. Solid understanding of ML libraries and applications such as Time Series Analysis, Neural Networks...
-
Principal Data Engineer
4 weeks ago
Kolkata, West Bengal, India | Xebia | Full time
Position name: Principal Data Engineer
Experience: 10-14 years
Location: Gurugram/Chennai/Jaipur/Pune/Bhopal/Bangalore
Summary of JD: AWS, Spark, PySpark, Python, SQL, data warehouse design and development, Databricks and dbt, architecting and implementing workflow management systems, Airflow, Spark applications, CI/CD processes, experience with GitHub...
-
Databricks-PySpark
2 days ago
Bengaluru, Kolkata, Pune, India | Artech | Full time | ₹ 15,00,000 - ₹ 20,00,000 per year
We are looking for a Senior Data Engineer with 6+ years of experience to design, build, and optimize scalable data solutions. The ideal candidate will have deep expertise in Databricks, Azure, Python, PySpark, Scala, ETL, and data governance. You will lead the development of efficient data pipelines and mentor junior engineers, ensuring operational...
-
Snowflake Architect (Python, PySpark)
7 days ago
Hyderabad, Kolkata, Pune, India | Alike Thoughts | Full time | ₹ 15,00,000 - ₹ 25,00,000 per year
Mandatory Skills: Snowflake Architect, Python, PySpark
Experience: 8 to 15 years
Location: PAN India
Notice Period: 60 days max
Interested candidates, please apply.
-
Azure Data Factory
6 days ago
Kolkata, India | Orcapod | Full time
Roles and Responsibilities:
Mandatory: Strong in Azure, ADF, Data Lake, Databricks, PySpark. Hands-on experience in developing data lake solutions using Azure (Azure Data Factory for ingestion; Data Lake Gen2 and Azure SQL Server for storage; Azure Analysis Services for transformations; Azure Databricks). Implement a robust data pipeline using Microsoft...
-
MS Fabric Data Engineering
2 weeks ago
Chennai, Hyderabad, Kolkata, India | Tata Consultancy Services | Full time | ₹ 15,00,000 - ₹ 25,00,000 per year
Role & responsibilities:
• Hands-on experience in PySpark, Notebooks
• Experience in Azure Data Factory, Synapse Analytics
• Experience in converting SQL Stored Procedures into PySpark
• Experience in Medallion Architecture
• Experience in SQL aggregation functions
Good-to-Have:
• Certification in DP-600: Implementing Analytics Solutions Using Microsoft...
-
SSE - Databricks
1 week ago
Kolkata, India | Mindtree | Full time
Roles & Responsibilities:
• Data Lead (Mind) should have experience on the Azure Data platform, specifically Azure Data Lake, Azure Databricks, and Spark SQL
• Should be familiar with Agile ways of working
• Should have hands-on experience in integrating different source systems, especially eCommerce-related data sources
• Should have hands-on experience in streaming...
-
Kolkata, West Bengal, India | Capgemini | Full time
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of...
-
Consultant-Databricks
24 hours ago
Kolkata, Chennai, Pune, India | Tredence Analytics Solutions Private Limited | Full time
Job Description - Primary Roles and Responsibilities:
• Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
• Ability to provide solutions that are forward-thinking in the data engineering and analytics space
• Collaborate with DW/BI leads to understand new ETL pipeline development requirements
• Triage issues to find gaps in existing...