Data Pipeline Architect

2 weeks ago


Hyderabad, India Logic Pursuits Full time

Job Title: Data Engineer (Snowflake + dbt/Matillion)
Location: Hyderabad, India
Job Type: Full-time

Job Description:
We are looking for an experienced, results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt, and able to work effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client’s organization.

Key Responsibilities:
• Design, configure, and optimize ingestion, transformation, and orchestration workflows using Matillion DPC where applicable.
• Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices.
• Build ingestion pipelines from various sources, including relational databases, APIs, cloud storage, and flat files, into Snowflake.
• Implement data modelling and transformation logic to support a layered architecture (e.g., staging, intermediate, and mart layers, or a Medallion architecture), enabling reliable and reusable data assets.
• Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows.
• Apply dbt best practices: modular SQL development, testing, documentation, and version control.
• Optimize performance in dbt/Snowflake through clustering, query profiling, materialization choices, partitioning, and efficient SQL design.
• Apply CI/CD and Git-based workflows for version-controlled deployments.
• Contribute to a growing internal knowledge base of dbt macros, conventions, and testing frameworks.
• Collaborate with stakeholders such as data analysts, data scientists, and data architects to understand requirements and deliver clean, validated datasets.
• Write well-documented, maintainable code, using Git for version control and CI/CD processes.
• Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives.
• Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.

Required Qualifications:
• 3 to 5 years of experience in data engineering roles, with 2+ years of hands-on experience in Snowflake and dbt.
• Hands-on experience with Matillion Data Productivity Cloud (Matillion DPC) for data ingestion, transformation, or orchestration.
• Experience building and deploying dbt models in a production environment.
• Expert-level SQL, with a strong understanding of ELT patterns and data modelling (Kimball/dimensional preferred).
• Familiarity with data quality and validation techniques: dbt tests, dbt docs, etc.
• Experience with Git, CI/CD, and deployment workflows in a team setting.
• Familiarity with orchestrating workflows using tools like dbt Cloud, Airflow, or Azure Data Factory.

Core Competencies:
Data Engineering and ELT Development:
• Building robust, modular data pipelines using dbt.
• Writing efficient SQL for data transformation and performance tuning in Snowflake.
• Managing environments, sources, and deployment pipelines in dbt.
Cloud Data Platform Expertise:
• Strong proficiency with Snowflake: warehouse sizing, query profiling, data loading, and performance optimization.
• Experience working with cloud storage (Azure Data Lake, AWS S3, or GCS) for ingestion and external stages.
Technical Toolset:
• Python: data transformation, notebook development, and automation.
• SQL: strong grasp of querying and performance tuning.
Best Practices and Standards:
• Knowledge of modern data architecture concepts, including layered architecture (e.g., staging → intermediate → marts, or Medallion architecture).
• Familiarity with data quality, unit testing (dbt tests), and documentation (dbt docs).
Security & Governance:
• Understanding of access control within Snowflake (RBAC), role hierarchies, and secure data handling.
• Familiarity with data privacy policies (GDPR basics) and encryption at rest/in transit.
Deployment & Monitoring:
• Version control using Git and experience with CI/CD practices in a data context.
• Monitoring and logging of pipeline executions, with alerting on failures.
Soft Skills:
• Ability to present solutions and handle client demos/discussions.
• Ability to work closely with onshore and offshore teams of analysts, data scientists, and architects.
• Ability to document pipelines and transformations clearly.
• Basic SSIS and Matillion.
• Comfort with ambiguity, competing priorities, and fast-changing client environments.

Nice to Have:
• Experience in client-facing roles or consulting engagements.
• Exposure to AI/ML data pipelines and feature stores.
• Exposure to MLflow for basic ML model tracking.
• Experience or exposure with data quality tooling.

Education:
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
• Data Engineering certifications such as Snowflake SnowPro or dbt Certified Developer are a plus.

Why Join Us?
• Opportunity to work on diverse and challenging projects in a consulting environment.
• Collaborative work culture that values innovation and curiosity.
• Access to cutting-edge technologies and a focus on professional development.
• Competitive compensation and benefits package.
• Be part of a dynamic team delivering impactful data solutions.

About Us:
Logic Pursuits provides companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations, where fact-based decision-making is embedded into daily operations, leading to better processes and outcomes. Our team combines strategic consulting services with growth-enabling technologies to evaluate risk, manage data, and leverage AI and automated processes more effectively. With deep Big Four consulting experience in business transformation and efficient processes, Logic Pursuits is a game-changer in any operations strategy.
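The posting repeatedly emphasizes two ideas: layered modelling (staging → mart) and dbt's generic data tests (not_null, unique). As a rough, language-neutral illustration of what those tests check, here is a minimal Python sketch; the function names and sample rows are hypothetical, and real dbt tests are declared in YAML and compiled to SQL that runs against the warehouse, not in Python.

```python
# Illustrative only: dbt's generic tests compile to SQL that counts offending
# rows in the warehouse. This sketch mimics the same checks over in-memory
# rows; the table name and sample data are made up for the example.

def not_null(rows, column):
    """Return rows where `column` is missing (dbt fails the test if any exist)."""
    return [r for r in rows if r.get(column) is None]

def unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

def build_mart(stg_rows):
    """Toy 'mart' step: aggregate staged orders per customer, excluding nulls."""
    counts = {}
    for r in stg_rows:
        c = r.get("customer_id")
        if c is not None:
            counts[c] = counts.get(c, 0) + 1
    return counts

# A tiny "staging" table, standing in for a model like stg_orders
stg_orders = [
    {"order_id": 1, "customer_id": "C1"},
    {"order_id": 2, "customer_id": None},   # would fail a not_null test
    {"order_id": 2, "customer_id": "C3"},   # would fail a unique test
]

if __name__ == "__main__":
    print("not_null failures:", not_null(stg_orders, "customer_id"))
    print("unique failures:", unique(stg_orders, "order_id"))
    print("mart:", build_mart(stg_orders))
```

The point of running such checks at the staging layer, as the posting's layered architecture implies, is that downstream mart models can then assume clean keys.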



  • Hyderabad, India Atyeti Inc Full time

    Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science or equivalent experience; 5+ years of hands-on experience as an Application Developer or in similar software engineering roles; advanced skills in Python; strong SQL and cloud-native development experience (AWS); proven expertise in architecting scalable data pipelines (ETL/ELT)...


  • Hyderabad, India Tata Consultancy Services Full time

    Greetings from TCS! TCS has always been in the spotlight for being adept in the next big technologies. What we can offer you is a space to explore varied technologies and quench your techie soul. We are hiring for a Data Engineer role in Hyderabad. Role: Data Engineer. Required skills: Python, Databricks, SQL. Experience: 6-8 years. Location: Hyderabad. Minimum...


  • Hyderabad, India Live Connections Full time

    Role: Data Engineer. Experience: 6 to 11 years only. Locations: Hyderabad, Bangalore, Noida, or Chennai. Required notice period: immediate joiners. Interview process: L1 - Bare Bot Test; L2 - face-to-face at client location. Must-have skills: 6 to 11 years of overall experience; working experience with Python; working experience with...


  • Hyderabad, India Tata Consultancy Services Full time

    - Advanced working knowledge of Scala, SQL, and Python/PySpark, with experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases. - Experience building and optimizing ‘cloud big data’ pipelines, architectures, and data sets. - Experience performing root cause analysis on internal and external...


  • Hyderabad, India beBeeData Full time

    Data Engineer – Python Expert. We are looking for a seasoned Senior Data Engineer to architect, build, and own the data pipelines that power our large language model (LLM) development. This is an opportunity to showcase expertise in designing, developing, and owning robust, scalable, automated ETL/ELT pipelines in Python for ingesting and processing...


  • Hyderabad, India beBeeDataEngineer Full time

    Job Overview: As a Data Engineer, you will be responsible for designing and developing robust data pipelines and platforms that enable efficient storage, processing, and consumption of data across the enterprise. The role requires high-quality, timely, and governed data delivery to data scientists, analysts, and business users. You will play a crucial role in...


  • Hyderabad, India Tata Consultancy Services Full time

    Role: Azure Data Factory E1. Experience: 4 to 6 years. Location: Hyderabad. Role Overview: We are seeking a highly skilled Azure Data Engineer with strong expertise in Azure Synapse Analytics and Microsoft Fabric. The ideal candidate will have hands-on experience in designing, developing, and maintaining scalable data solutions, modern data warehouses, and advanced...


  • Hyderabad, India ValueMomentum Full time

    Role: Lead Data Engineer. Primary skills: Databricks, PySpark, SQL. Secondary skills: advanced SQL, Azure Data Factory, and Azure Data Lake. Experience: 7 to 15 years. About the Job: We are seeking a Tech Lead – Databricks Data Engineer with experience in designing and developing data pipelines using Azure Databricks, Data Factory, and Data Lake. The role involves...

  • AI Architects

    1 week ago


    Hyderabad, Telangana, India NTT DATA Full time

    **Req ID**: 341039. We are currently seeking an AI Architect to join our team in Hyderabad, Telangana (IN-TG), India (IN). **Experience**: 15+ years (5+ in AI/ML architecture). **Role Overview** / **Key Responsibilities**: - Architect and deliver **GenAI, LLM, and agentic AI solutions** - Excellent understanding of agentic frameworks, MLOps,...


  • Hyderabad, India Tech Mahindra Full time

    We are seeking highly skilled and motivated Senior Data Engineers to architect and implement scalable ETL and data storage solutions using Microsoft Fabric and the broader Azure technology stack. This role will be pivotal in building a metadata-driven data lake that ingests data from over 100 structured and semi-structured sources, enabling rich insights...