
Data Pipeline Architect
3 days ago
As a Kafka expert, you will be responsible for designing and implementing scalable data pipelines to optimize message processing efficiency.
You will automate the installation and configuration of production-grade Kafka clusters using tools such as Ansible and Chef, and troubleshoot critical issues to ensure high availability and fault tolerance.
Additionally, you will configure MirrorMaker to replicate data between two data centers, deploy and manage Provectus Kafka UI on a Kubernetes cluster for real-time monitoring and management, and integrate Schema Registry and multiple Kafka Connectors for seamless data flow.
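For illustration, here is a minimal sketch of the replication piece, assuming MirrorMaker 2 runs as a connector on an existing Kafka Connect cluster; the Connect endpoint, cluster aliases, bootstrap servers, and topic pattern are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: registering a MirrorMaker 2 source connector on an existing
# Kafka Connect cluster through its REST API, replicating topics from data
# center "dc1" to "dc2". All names, hosts, and patterns are placeholders.
import requests

CONNECT_URL = "http://connect.example.com:8083/connectors"  # hypothetical endpoint

mm2_connector = {
    "name": "mm2-dc1-to-dc2",
    "config": {
        "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
        "source.cluster.alias": "dc1",
        "target.cluster.alias": "dc2",
        "source.cluster.bootstrap.servers": "kafka-dc1:9092",
        "target.cluster.bootstrap.servers": "kafka-dc2:9092",
        "topics": "orders.*,payments.*",      # regex of topics to mirror
        "replication.factor": "3",
        "sync.topic.configs.enabled": "true",
    },
}

resp = requests.post(CONNECT_URL, json=mm2_connector, timeout=30)
resp.raise_for_status()  # surfaces HTTP errors from the Connect REST API
print(resp.json())
```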
You will also onboard new users by managing ACL permissions, creating topics, and implementing end-to-end security with SSL/TLS; design and manage Kafka infrastructure to improve message-processing efficiency; and conduct performance testing to optimize partitioning, replication, and retention policies.
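The onboarding steps above (topic creation, ACL grants, SSL/TLS) might look roughly like the following with the confluent-kafka Python client; the broker address, certificate paths, topic name, and principal are all illustrative assumptions.

```python
# Minimal sketch of a user-onboarding step with the confluent-kafka AdminClient:
# create a topic over the SSL/TLS listener, then grant the new principal read
# access. Broker address, cert paths, topic, and principal are placeholders.
from confluent_kafka.admin import (
    AclBinding,
    AclOperation,
    AclPermissionType,
    AdminClient,
    NewTopic,
    ResourcePatternType,
    ResourceType,
)

admin = AdminClient({
    "bootstrap.servers": "kafka-dc1:9093",        # hypothetical SSL listener
    "security.protocol": "SSL",
    "ssl.ca.location": "/etc/kafka/certs/ca.pem",
    "ssl.certificate.location": "/etc/kafka/certs/admin.pem",
    "ssl.key.location": "/etc/kafka/certs/admin.key",
})

# 1. Create the topic the new team will use.
topic = NewTopic("orders.raw", num_partitions=6, replication_factor=3)
for name, future in admin.create_topics([topic]).items():
    future.result()  # raises on failure
    print(f"created topic {name}")

# 2. Allow the new user's principal to read that topic from any host.
acl = AclBinding(
    ResourceType.TOPIC,
    "orders.raw",
    ResourcePatternType.LITERAL,
    "User:new-analyst",
    "*",
    AclOperation.READ,
    AclPermissionType.ALLOW,
)
for binding, future in admin.create_acls([acl]).items():
    future.result()
    print(f"granted {binding}")
```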
Furthermore, you will integrate Kafka with the ELK Stack and Solr for scalable data storage and analysis, and ensure that all systems run smoothly and efficiently.
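As one possible shape for the ELK integration, the sketch below registers Confluent's Elasticsearch sink connector through the Kafka Connect REST API; the endpoint, topic, and connector settings are assumptions for illustration only.

```python
# Minimal sketch: landing a Kafka topic in Elasticsearch (the "E" in ELK) by
# registering Confluent's Elasticsearch sink connector over the Kafka Connect
# REST API. Endpoint, topic, and connection settings are placeholders.
import requests

CONNECT_URL = "http://connect.example.com:8083/connectors"  # hypothetical endpoint

es_sink = {
    "name": "elk-sink-orders",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "connection.url": "http://elasticsearch.example.com:9200",
        "topics": "orders.raw",
        "key.ignore": "true",      # let Elasticsearch assign document ids
        "schema.ignore": "true",   # index plain JSON without a registered schema
    },
}

resp = requests.post(CONNECT_URL, json=es_sink, timeout=30)
resp.raise_for_status()
print(resp.json())
```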
- Design and implement scalable data pipelines.
- Automate the installation and configuration of Kafka clusters.
- Troubleshoot critical issues.
- Configure MirrorMaker for data replication.
- Deploy and manage Provectus Kafka UI.
- Integrate Schema Registry and deploy multiple Kafka Connectors.
- Onboard new users.
- Design and manage Kafka infrastructure.
- Conduct performance testing (see the tuning sketch after this list).
- Integrate Kafka with ELK Stack and Solr.
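Below is the tuning sketch referenced in the list: one retention and partitioning adjustment of the kind performance testing might suggest, using the confluent-kafka AdminClient. The broker address, topic name, and values are illustrative assumptions.

```python
# Minimal sketch of one tuning step that performance testing might suggest:
# shorten a topic's retention and raise its partition count with the
# confluent-kafka AdminClient. Broker, topic, and values are placeholders.
from confluent_kafka.admin import AdminClient, ConfigResource, NewPartitions

admin = AdminClient({"bootstrap.servers": "kafka-dc1:9092"})  # hypothetical broker

# Set retention to 3 days. Note: alter_configs replaces a resource's dynamic
# overrides wholesale; newer client versions offer incremental_alter_configs.
resource = ConfigResource(
    ConfigResource.Type.TOPIC,
    "orders.raw",
    set_config={"retention.ms": str(3 * 24 * 60 * 60 * 1000)},
)
for res, future in admin.alter_configs([resource]).items():
    future.result()  # raises on failure
    print(f"updated config for {res}")

# Grow the topic to 12 partitions for more consumer parallelism.
for name, future in admin.create_partitions([NewPartitions("orders.raw", 12)]).items():
    future.result()
    print(f"{name} now has 12 partitions")
```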
BE, BTech, or MCA only.
3+ years of experience in Kafka administration required.
-
Data Pipelines Specialist
3 days ago
Gandhinagar, Gujarat, India beBeeDataEngineer Full time ₹ 20,00,000 - ₹ 25,00,000
Senior Data Engineer
As a seasoned data professional, you will design and implement high-quality, secure, and scalable data pipelines and systems. The ideal candidate has 4–5 years of experience in designing, building, and maintaining complex data flows.
Key Responsibilities:
- Design and build scalable data pipelines using Snowflake, Fivetran, DBT, and...
-
Data Architect
4 days ago
Gandhinagar, Gujarat, India beBeeDataEngineer Full time ₹ 18,00,000 - ₹ 24,00,000
We are seeking a skilled Data Architect to lead the design and development of our data infrastructure.
This is a full-time role that involves creating scalable data architectures and developing real-time data processing pipelines using technologies like Kafka, Flink, and Spark. The ideal candidate will have experience with data modeling, ETL processes, and...
-
Senior Data Architect
3 days ago
Gandhinagar, Gujarat, India beBeeDataEngineer Full time ₹ 1,50,00,000 - ₹ 2,50,00,000
Job Title: Senior Data Engineer
About the Role
We are seeking a highly skilled and experienced data engineer to lead our modern data platform development. This role is ideal for someone with strong technical expertise in ETL pipeline design, data modeling, and data infrastructure who thrives in a fast-paced engineering-driven environment.
You will be...
-
ETL Data Engineer Position
Gandhinagar, Gujarat, India beBeeData Full time ₹ 20,00,000 - ₹ 25,00,000
Job Title: ETL Developer – DataStage, AWS, Snowflake
Experience: 5–7 Years
Location: Remote
Job Type: Full-time
About the Role
We are seeking a highly skilled and motivated ETL Data Engineer to join our data engineering team. The selected candidate will work on building scalable and efficient data pipelines using IBM DataStage (on...
-
Data Architect Developer
3 days ago
Gandhinagar, Gujarat, India beBeeData Full time US$ 1,30,000 - US$ 1,70,000
Job Title: Data Architect Developer
Location: India/Remote
Duration: 3 Years
Job Overview:
We are seeking a highly experienced Data Architect Developer with strong knowledge of data modeling, data quality, real-time integrations, and secure role-based access implementation. The ideal candidate will have experience designing, implementing, and supporting...
-
Cloud Data Architect
5 days ago
Gandhinagar, Gujarat, India beBeeData Full time ₹ 18,00,000 - ₹ 24,00,000
Transformative Solutions
We are crafting a world-class cloud data platform using cutting-edge technologies. As a Cloud Data Architect, you will spearhead the design and development of our data infrastructure, making it scalable, reliable, and cost-efficient.
Key Responsibilities:
- Analyze existing data warehouse implementations to inform migration and...
-
Senior Data Pipeline Engineer
3 days ago
Gandhinagar, Gujarat, India beBeeData Full time ₹ 25,00,000 - ₹ 30,00,000
ETL Tester
We are seeking an ETL Tester to join our team. As a key member of the QA team, you will be responsible for ensuring the quality and accuracy of our data pipelines.
Key Responsibilities:
- Design and execute test cases for ETL processes
- Identify and report defects in data pipelines
- Collaborate with development teams to resolve issues
- Develop and maintain...
-
Data Pipeline Assurance Specialist
3 days ago
Gandhinagar, Gujarat, India beBeeSeniorTestEngineer Full time ₹ 15,00,000 - ₹ 25,00,000
Job Title: Senior Test Engineer
About the Role:
We are seeking an experienced Senior Test Engineer to join our team. As a key member of our engineering department, you will play a critical role in ensuring the accuracy, integrity, and performance of our ETL pipelines deployed into Azure cloud environments.
Responsibilities:
- Evaluate and Validate Data Flows: ...
-
Senior Data Pipeline Specialist
5 days ago
Gandhinagar, Gujarat, India beBeeTalent Full time ₹ 18,00,000 - ₹ 20,00,000
Talend ETL Developer Job Opening
We are looking for a highly skilled and experienced Talend ETL developer to join our team on a contractual basis. The ideal candidate will have expertise in Talend Open Studio, BigQuery, PostgreSQL, Python, and GCP.
The role requires strong analytical skills, attention to detail, and the ability to design, optimize, and deploy...
-
Expert Python Data Pipeline Specialist
2 days ago
Gandhinagar, Gujarat, India beBeeData Full time ₹ 15,00,000 - ₹ 20,00,000
Job Description
As a Freelance Data Engineer with Python expertise, you will design and develop data pipelines using AWS Glue, PySpark, and Step Functions. Your primary responsibility will be to create efficient and scalable data engineering workflows that meet the needs of our organization.
In this role, you will work closely with cross-functional teams to...