Data Engineer
2 weeks ago
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day.
It is an exciting time to be part of the growing Data Engineering team at Circle K. We are driving a well-supported, cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.
About the role
We are looking for a Data Engineer with a collaborative, "can-do" attitude who is committed and works with determination and motivation to make their team successful; a Data Engineer who has experience implementing technical solutions as part of a broader data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K's next phase in its digital journey by transforming data to achieve actionable business outcomes.
Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data lakes among other structured and unstructured storage options
Determine solutions that are best suited to develop a pipeline for a particular data source
Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
Develop ELT/ETL pipelines efficiently using Azure cloud services and Snowflake, including testing and operational support (root-cause analysis, monitoring, maintenance)
Work with modern data platforms, including Snowflake, to develop, test, and operationalize data pipelines for scalable analytics delivery (see the example sketch after this list)
Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
Stay current with and adopt new tools and applications to ensure high quality and efficient solutions
Build a cross-platform data strategy to aggregate multiple sources and process development datasets
Communicate proactively with stakeholders; mentor and guide junior team members through regular knowledge transfer (KT) and reverse-KT sessions, and help them identify production bugs and issues and recommend resolutions
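As a rough illustration of the pipeline work described above, the sketch below extracts a daily CSV drop from a business system, applies a light transformation, and bulk-loads the result into Snowflake with the Python connector. The file, column, warehouse, and table names (daily_sales_extract.csv, SALES_RAW, ANALYTICS_WH) are placeholders for illustration only, not part of this posting, and the target table is assumed to already exist.

import os

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas


def extract(path: str) -> pd.DataFrame:
    # Read a daily extract file produced by an upstream business system.
    return pd.read_csv(path, parse_dates=["transaction_ts"])


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Standardize column names and derive a business date for downstream partitioning.
    df = df.rename(columns=str.upper)
    df["BUSINESS_DATE"] = df["TRANSACTION_TS"].dt.date
    return df.dropna(subset=["STORE_ID"])


def load(df: pd.DataFrame, table: str) -> int:
    # Bulk-load the frame into an existing Snowflake table; returns rows written.
    conn = snowflake.connector.connect(
        account=os.environ["SF_ACCOUNT"],   # placeholder credentials via environment
        user=os.environ["SF_USER"],
        password=os.environ["SF_PASSWORD"],
        warehouse="ANALYTICS_WH",           # hypothetical warehouse/database/schema
        database="RAW",
        schema="SALES",
    )
    try:
        _, _, nrows, _ = write_pandas(conn, df, table)
        return nrows
    finally:
        conn.close()


if __name__ == "__main__":
    frame = transform(extract("daily_sales_extract.csv"))
    print(f"Loaded {load(frame, 'SALES_RAW')} rows into SALES_RAW")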
Job Requirements
Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred
3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
3+ years of experience with setting up and operating data pipelines using Python or SQL
3+ years of advanced SQL Programming: PL/SQL, T-SQL
3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
3+ years of strong hands-on experience in Azure, preferably with data-heavy/analytics applications leveraging relational and NoSQL databases, Data Warehouses, and Big Data
3+ years of experience with Azure Data Factory, Azure Synapse Analytics (SQL DW), Azure Analysis Services, Azure Databricks/Spark, Blob Storage, and Azure Functions
3+ years of experience in defining and enabling data quality standards for auditing and monitoring
Strong analytical abilities and intellectual curiosity
In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts
Understanding of REST and good API design
Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks such as Spark (see the Delta merge sketch after this list)
Strong collaboration, teamwork skills, excellent written and verbal communications skills
Self-starter, motivated, with the ability to work in a fast-paced development environment
Agile experience highly desirable
Proficiency with the development environment, including IDEs, database servers, Git, continuous integration, unit-testing tools, and defect management tools
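By way of example for the Delta table and distributed computing experience listed above, here is a minimal, illustrative PySpark sketch of an incremental upsert (merge) from a raw landing path into a curated Delta table. The paths, column names, and member_id key are hypothetical; on Databricks the session is already Delta-enabled and the two config lines are only needed with open-source Spark plus the delta-spark package.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("delta-upsert-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical raw landing zone written by an upstream ingestion job.
updates = (
    spark.read.json("/mnt/raw/loyalty/2024-06-01/")
    .withColumn("ingested_at", F.current_timestamp())
)

# Hypothetical curated Delta table keyed on member_id.
target = DeltaTable.forPath(spark, "/mnt/curated/loyalty_members")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.member_id = s.member_id")
    .whenMatchedUpdateAll()      # refresh existing members
    .whenNotMatchedInsertAll()   # insert new members
    .execute()
)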
Preferred Skills
Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
Working knowledge of DevOps processes (CI/CD), Git version control and Jenkins, Master Data Management (MDM), and Data Quality tools
Strong experience in ETL/ELT development, QA, and operations/support processes (root-cause analysis of production issues, code/data fix strategy, monitoring and maintenance); see the data-quality sketch at the end of this section
Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file storage (Blob Storage), and Python/Unix shell scripting
ADF, Databricks, and Azure certifications are a plus
Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake
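As a small, hypothetical illustration of the data reconciliation and audit work mentioned above, the PySpark sketch below runs a few checks between a staging load and its curated target before publishing; the table and column names (raw.sales_staging, curated.sales_daily, transaction_id, store_id) are placeholders, not references to real Circle K systems.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-reconciliation-sketch").getOrCreate()

staging = spark.table("raw.sales_staging")      # hypothetical staging table
curated = spark.table("curated.sales_daily")    # hypothetical curated table

checks = {
    # Source and target should agree on volume after the load.
    "row_count_matches": staging.count() == curated.count(),
    # The business key should be unique in the curated layer.
    "no_duplicate_keys": curated.groupBy("transaction_id")
                                .count()
                                .filter(F.col("count") > 1)
                                .count() == 0,
    # Mandatory attributes should never be null.
    "no_null_store_ids": curated.filter(F.col("store_id").isNull()).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In a real pipeline this would fail the ADF/Databricks job and raise an alert.
    raise RuntimeError(f"Data quality checks failed: {failed}")
print("All reconciliation checks passed")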