Data Engineer

3 weeks ago


Lal Bahadur Nagar, India Logic Pursuits Full time

Job Title: Data Engineer (Snowflake + dbt)
Location: Hyderabad, India
Job Type: Full-time

Job Description:
We are looking for an experienced, results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate is proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt, and can work effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client's organization.

Key Responsibilities:
1. Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices.
2. Build ingestion pipelines from various sources, including relational databases, APIs, cloud storage, and flat files, into Snowflake.
3. Implement data modelling and transformation logic to support a layered architecture (e.g., staging, intermediate, and mart layers, or a Medallion architecture) that enables reliable and reusable data assets.
4. Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows.
5. Apply dbt best practices: modular SQL development, testing, documentation, and version control.
6. Optimize performance in dbt/Snowflake through clustering, query profiling, materialization, partitioning, and efficient SQL design.
7. Apply CI/CD and Git-based workflows for version-controlled deployments.
8. Contribute to a growing internal knowledge base of dbt macros, conventions, and testing frameworks.
9. Collaborate with stakeholders such as data analysts, data scientists, and data architects to understand requirements and deliver clean, validated datasets.
10. Write well-documented, maintainable code, using Git for version control and CI/CD processes.
11. Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives.
12. Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.

Required Qualifications:
- 3 to 5 years of experience in data engineering roles, with 2+ years of hands-on experience in Snowflake and dbt.
- Experience building and deploying dbt models in a production environment.
- Expert-level SQL and a strong understanding of ELT patterns and data modelling (Kimball/dimensional modelling preferred).
- Familiarity with data quality and validation techniques: dbt tests, dbt docs, etc.
- Experience with Git, CI/CD, and deployment workflows in a team setting.
- Familiarity with orchestrating workflows using tools such as dbt Cloud, Airflow, or Azure Data Factory.

Core Competencies:

Data Engineering and ELT Development:
- Building robust, modular data pipelines using dbt.
- Writing efficient SQL for data transformation and performance tuning in Snowflake.
- Managing environments, sources, and deployment pipelines in dbt.

Cloud Data Platform Expertise:
- Strong proficiency with Snowflake: warehouse sizing, query profiling, data loading, and performance optimization.
- Experience with cloud storage (Azure Data Lake, AWS S3, or GCS) for ingestion and external stages.

Technical Toolset:
- Python: data transformation, notebook development, and automation.
- SQL: strong grasp of querying and performance tuning.

Best Practices and Standards:
- Knowledge of modern data architecture concepts, including layered architecture (e.g., staging → intermediate → marts, or Medallion architecture).
- Familiarity with data quality, unit testing (dbt tests), and documentation (dbt docs).

Security & Governance:
- Understanding of access control within Snowflake (RBAC), role hierarchies, and secure data handling.
- Familiarity with data privacy policies (GDPR basics) and encryption at rest and in transit.

Deployment & Monitoring:
- Version control using Git and experience with CI/CD practices in a data context.
- Monitoring and logging of pipeline executions, with alerting on failures.

Soft Skills:
- Ability to present solutions and handle client demos and discussions.
- Work closely with onshore and offshore teams of analysts, data scientists, and architects.
- Ability to document pipelines and transformations clearly.
- Basic Agile/Scrum familiarity: working in sprints and logging tasks.
- Comfort with ambiguity, competing priorities, and fast-changing client environments.

Nice to Have:
- Experience in client-facing roles or consulting engagements.
- Exposure to AI/ML data pipelines and feature stores.
- Exposure to MLflow for basic ML model tracking.
- Experience with data quality tooling.

Education:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Certifications such as Snowflake SnowPro or dbt Certified Developer are a plus.

Why Join Us?
- Opportunity to work on diverse and challenging projects in a consulting environment.
- Collaborative work culture that values innovation and curiosity.
- Access to cutting-edge technologies and a focus on professional development.
- Competitive compensation and benefits package.
- Be part of a dynamic team delivering impactful data solutions.

About Us:
Logic Pursuits provides companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations where fact-based decision-making is embedded into daily operations, leading to better processes and outcomes. Our team combines strategic consulting services with growth-enabling technologies to evaluate risk, manage data, and leverage AI and automated processes more effectively.
With deep Big Four consulting experience in business transformation and efficient processes, Logic Pursuits is a game-changer in any operations strategy.


  • Data Engineer

    3 weeks ago


    Lal Bahadur Nagar, India INFEC Services Full time

    Key Responsibilities Design, develop, and optimize data pipelines and ETL processes on GCP or Azure. Work with structured and unstructured data, integrating sources such as databases, APIs, and streaming platforms. Implement and manage data warehouses, data lakes, or lakehouse architectures. Develop clean, efficient, and reusable Python scripts for...

  • Senior Data Engineer

    3 weeks ago


    Lal Bahadur Nagar, India Straive Full time

Job Summary We're looking for a Senior Data Engineer with 5-8 years of experience to build and maintain scalable, production-grade data pipelines. The ideal candidate is a strong software engineer with hands-on experience in Spark (3.x), Scala, SQL, and Python. You'll be responsible for designing and implementing ETL/ELT solutions, collaborating with teams...

  • Data Modeler

    2 weeks ago


    Lal Bahadur Nagar, India Tezo Full time

    Tezo is a new generation Digital & AI solutions provider, with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence. Data Modeler – Azure Data Engineering Location: Hyderabad Experience Level: 8–13 Years - 12+ years of experience in...


  • Lal Bahadur Nagar, India ANSR Full time

    About T-Mobile T-Mobile US, Inc. (NASDAQ: TMUS), headquartered in Bellevue, Washington, is America’s supercharged Un-carrier, connecting millions through its strong nationwide network and flagship brands, T-Mobile and Metro by T-Mobile. Customers benefit from an unmatched combination of value, quality, and exceptional service experience. About TMUS Global...

  • Data Engineer

    2 days ago


    Lal Bahadur Nagar, India Tata Consultancy Services Full time

TCS is Hiring Data Engineer (Spark/Scala)!! Location: Hyderabad, Bangalore, Chennai, Pune, Gurugram Experience: 7 - 10 yrs (Accurate) Mode of Interview: Virtual Date of Interview: 30th October, 2025 Notice Period: 0-30 Days / Immediate Joiners are preferred Required Technical Skill: Scala/Spark, Hadoop, Hive Responsibilities: - Good work experience on Big...


  • Lal Bahadur Nagar, India Bohiyaanam Talent Full time

    Role: Data Engineer Experience Preferred: 4-8 Years Location: Hyderabad Certification Required: Databricks Data Engineer Associate/Professional Job Description: ● 4+ years of experience in data engineering, data architecture, data platforms & analytics ● At least 3+ years of experience with Databricks, PySpark, Python, and SQL. ● Consulting /...


  • Lal Bahadur Nagar, India McDonald's Full time

    About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together...

  • Data Scientist

    3 weeks ago


    Lal Bahadur Nagar, India Anewa Engineering Pvt. Ltd. Full time

    🚀 We're Hiring! Data Scientist 🚀 Are you passionate about connecting great talent with the right opportunities? Join our growing team and play a key role in shaping our workforce! 📍 Location: Hyderabad 🧭 Experience: 5+ years of related work experience in similar industries Work Mode & Relation: Hybrid Mode & Working for NMDC Energy - Abu Dhabi....

  • Data Scientist

    3 weeks ago


    Lal Bahadur Nagar, India Tanla Platforms Limited Full time

    Job Description: - You'll be Responsible for: - Design, develop and implement cutting-edge AI/ML solutions, including Large Language Models (LLMs) and Generative AI applications - Lead projects end-to-end while mentoring team members in AI-ML, including traditional ML and emerging AI technologies - Drive innovation in AI agent development and orchestration...

  • Data Scientist

    3 weeks ago


    Lal Bahadur Nagar, India Xemplar Insights Full time

    Must Have Skills: 1. Machine learning techniques: regression, classification, clustering, deep learning 2. Programming languages: Python, R 3. Machine learning libraries: TensorFlow, scikit-learn, PyTorch 4. Data wrangling, feature engineering 5. Model validation and testing 6. Cloud platforms, distributed computing, big data technologies Nice to Have...