
Scalable Data Pipelines Specialist
4 days ago
**Job Title:** Data Engineer
**Job Summary**
We're seeking a skilled data engineer to build and maintain our analytics backbone, creating scalable data pipelines that fuel real-time insights for our global operations.
This role involves turning raw data points into actionable intelligence, driving machine learning models and AI-driven decisions that propel our business forward.
**Key Responsibilities**
- Create and maintain data highways: Develop and manage cloud-based data lakes, warehouses, and ETL/ELT pipelines that ingest, process, and deliver data from various sources.
- Ensure systems reliability: Monitor cloud infrastructure performance, resolve bottlenecks, and ensure scalability for 24/7 operations.
- Secure data: Implement data quality checks, encryption, and compliance standards to protect sensitive information.
- Automate workflows: Use tools like Airflow to streamline maintenance and reduce manual intervention.
- Fuel AI/ML engines: Partner with data scientists to prep datasets for predictive models and troubleshoot pipeline issues impacting their work.
- Solve data mysteries: Diagnose root causes of pipeline failures, data discrepancies, or MLOps hiccups, then implement fixes that prevent repeat issues.
- Map the data terrain: Document source-to-target mappings, conduct data profiling, and clarify dependencies so analysts can self-serve without guesswork.
- Stay curious: Experiment with new tools and techniques to improve data quality, system performance, and pipeline resilience.
- Be the glue: Work closely with developers, IT Operations, and stakeholders across teams to deliver solutions that balance technical rigor with real-world usability.
- Communicate clearly: Break down complex data concepts for non-technical audiences and ask questions to avoid ambiguity.
- Learn by doing: Shadow senior engineers, participate in code reviews, and absorb best practices to level up your craft.
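For illustration only, the data-quality checks named above could look like a minimal row-level validator. The field names (`id`, `amount`) and the rules themselves are hypothetical examples, not this team's actual schema:

```python
def validate_row(row: dict) -> list:
    """Return a list of quality issues found in a single record."""
    issues = []
    # Rule 1 (hypothetical): every record needs a non-empty id.
    if not row.get("id"):
        issues.append("missing id")
    # Rule 2 (hypothetical): monetary amounts must not be negative.
    amount = row.get("amount")
    if amount is not None and amount < 0:
        issues.append("negative amount")
    return issues


# Rows failing any check would be quarantined before loading downstream.
rows = [
    {"id": "a1", "amount": 10.0},
    {"id": "", "amount": -5.0},
]
bad_rows = [r for r in rows if validate_row(r)]
```

In a production pipeline, checks like these would typically run as a validation step inside the orchestrator (e.g. an Airflow task) before data reaches the warehouse.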
**Requirements**
- 3+ years' experience in a data engineering role using SQL, PySpark, and Airflow.
- Strong understanding of data lake and data warehouse design best practices and principles.
- Hands-on experience with cloud-based data services for ETL/ELT, including AWS EC2, S3, and EMR.
- Ability to manage and enhance infrastructure for Spark, Hive, and Presto environments.
- Experience with databases such as Postgres, MySQL, and Oracle.
- Strong work ethic and ability to work independently on agreed goals.
- Clear communication skills in English – both in speaking and writing.
**Desirable**
- Experience deploying and managing MLOps frameworks such as AWS SageMaker and ECR.
- Experience with other cloud platforms and hybrid cloud infrastructure, e.g. GCP or Azure.
- Experience in the maritime industry.
**Competencies**
- Analysis & Problem Solving (Level 2): Uses critical thinking to address problems.
- Listening & Communication (Level 2): Focuses on the individual they are communicating with.
- Collaboration, Inclusion & Teamwork (Level 1): A good team player who is personable, friendly, and polite, and takes the time to get to know people.
- Customer Focus (Level 2): Understands the needs of the customer, clarifying requirements and expectations.
- Planning & Organising (Level 2): Uses the supplied tools for structured project planning for optimal time use.
- Initiative (Level 2): Challenges existing ways of doing things.
- Accountability (Level 2): Takes responsibility for delivering own work without unnecessary supervision.