Senior Data Engineer

2 weeks ago


Ahmedabad, Gujarat, India Circle K Full time ₹ 12,00,000 - ₹ 36,00,000 per year

Job Description

Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported, cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. Supported by our strong data pipeline, this position will play a key role in partnering with our Technical Development stakeholders to enable analytics for long-term success.

About the role

We are looking for a Senior Data Engineer with a collaborative, "can-do" attitude who is committed to making their team successful, and who has experience architecting and implementing technical solutions as part of a broader data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse, and will help drive Circle K's next phase of the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot, and support ETL pipelines and the cloud infrastructure involved in the process, and will also support the visualization team.

Roles and Responsibilities

  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals.

  • Demonstrate deep technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options.

  • Determine the solutions best suited to developing a pipeline for a particular data source.

  • Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development.

  • Develop ETL/ELT solutions efficiently using Azure cloud services and Snowflake, including testing and operations/support processes (root-cause analysis of production issues, code/data fix strategy, monitoring, and maintenance).

  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery.

  • Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders.

  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability).

  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions.

  • Build cross-platform data strategy to aggregate multiple sources and process development datasets.

  • Communicate proactively with stakeholders; mentor and guide junior engineers through regular knowledge-transfer sessions, help them identify production bugs and issues as needed, and provide resolution recommendations.

Job Requirements

  • Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred.

  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment.

  • 5+ years of experience setting up and operating data pipelines using Python or SQL.

  • 5+ years of advanced SQL programming: PL/SQL, T-SQL.

  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization.

  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads.

  • 5+ years of strong, extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and big data.

  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions.

  • 5+ years of experience defining and enabling data quality standards for auditing and monitoring.

  • Strong analytical abilities and intellectual curiosity.

  • In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts

  • Understanding of REST and good API design.

  • Experience working with Apache Iceberg, Delta Lake tables, and distributed computing frameworks.

  • Strong collaboration and teamwork skills, and excellent written and verbal communication skills.

  • A motivated self-starter with the ability to work in a fast-paced development environment.

  • Agile experience highly desirable.

  • Proficiency with the development environment, including the IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools.

Knowledge

  • Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management).

  • Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques.

  • Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks.

  • Working knowledge of DevOps processes (CI/CD), Git version control and Jenkins, Master Data Management (MDM), and Data Quality tools.

  • Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring, and maintenance).

  • Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file storage (Blob Storage), and Python/Unix shell scripting.

  • ADF, Databricks, and Azure certifications are a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake



