Data Engineer
1 day ago
Architect and develop complex data pipelines, ETL/ELT workflows, and data models on platforms such as Snowflake, Databricks, Azure Synapse, Redshift, or BigQuery.
Build scalable data transformation pipelines using the Medallion Architecture (Bronze → Silver → Gold layers).
Develop, manage, and optimize Airflow DAGs for orchestration and scheduling (a rough orchestration sketch follows this list).
Implement transformation logic and semantic models using DBT, enforcing analytics engineering best practices.
Write, optimize, and maintain advanced SQL queries, stored procedures, and performance-tuned transformations.
Design and maintain reusable data ingestion and transformation frameworks, including support for geospatial data.
Build connectors and integrate streaming/event-driven architectures using Kafka for near real-time data pipelines.
Enable downstream analytics by preparing curated datasets and data models for BI consumption, including Power BI dashboards.
Collaborate with Architects, Senior Engineers, API teams, and Visualization teams to deliver end-to-end data solutions.
Conduct PoCs/PoVs to evaluate cloud data integration tools and modern data engineering technologies.
Ensure strong data quality, lineage, governance, metadata management, and cloud security standards.
Work within Agile/DevOps methodologies to deliver iterative, high-quality solutions.
Troubleshoot and proactively resolve complex pipeline and performance issues.

What are we Looking for?
5+ years of hands-on experience as a Data Engineer on cloud-based data transformation and platform modernization projects.
Strong experience with at least one full lifecycle implementation of a cloud data lake/data warehouse using Snowflake, Databricks, Redshift, Synapse, or BigQuery.
Hands-on experience with Medallion Architecture or other layered data modelling approaches.
Proficient in Airflow for workflow orchestration and DBT for SQL-based transformations and modelling.
Strong skills in advanced SQL, data modelling (star, snowflake), ETL/ELT, and performance tuning.
Experience supporting BI teams by creating curated datasets, semantic models, and optimized schemas for tools like Power BI.
Proficient in Python and PySpark for building scalable data processing pipelines.
Experience with cloud object storage (S3, ADLS, GCS, MinIO) and cloud security (IAM/RBAC, networking, resource monitoring).
Familiarity with relational and NoSQL databases, distributed frameworks (Spark, Hadoop), and modern data integration patterns.
Strong analytical and problem-solving skills with the ability to handle complex data challenges independently.
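As a rough illustration of the Airflow + DBT + Medallion responsibilities above, here is a minimal sketch (Python, Airflow 2.x style) of a DAG that runs layered dbt builds Bronze → Silver → Gold. The project path, schedule, and dbt tag selectors (tag:bronze, tag:silver, tag:gold) are assumptions for illustration only, not part of the posting.

    # Minimal sketch: Airflow DAG orchestrating layered dbt runs
    # (Bronze -> Silver -> Gold). Paths, schedule, and selectors are
    # hypothetical examples.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    DBT_DIR = "/opt/dbt/analytics_project"  # hypothetical dbt project path

    with DAG(
        dag_id="medallion_dbt_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        bronze = BashOperator(
            task_id="dbt_run_bronze",
            bash_command=f"dbt run --project-dir {DBT_DIR} --select tag:bronze",
        )
        silver = BashOperator(
            task_id="dbt_run_silver",
            bash_command=f"dbt run --project-dir {DBT_DIR} --select tag:silver",
        )
        gold = BashOperator(
            task_id="dbt_run_gold",
            bash_command=f"dbt run --project-dir {DBT_DIR} --select tag:gold",
        )
        tests = BashOperator(
            task_id="dbt_test",
            bash_command=f"dbt test --project-dir {DBT_DIR}",
        )

        # Enforce layer ordering, then validate with dbt tests.
        bronze >> silver >> gold >> tests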
Required Skills
Cloud Platforms: AWS / Azure / GCP (any one).
Data Platforms: Snowflake, Databricks, Redshift, Synapse, BigQuery.
Orchestration & Transformation: Airflow, DBT, CI/CD, DevOps.
Streaming & Monitoring: Kafka, ELK, Grafana (see the PySpark sketch after this list).
Distributed Processing: Spark, Hadoop ecosystem.
Programming: Python, PySpark.
BI & Visualization Support: Power BI, DAX basics (optional but beneficial).
Data Modelling: Dimensional modelling, Medallion Architecture, SQL optimization.
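To ground the streaming and distributed-processing skills listed above, a minimal PySpark Structured Streaming sketch that lands a Kafka topic into a Bronze layer table. The broker address, topic name, storage paths, and the use of Delta Lake are assumptions for illustration; the required Kafka and Delta packages are presumed to be on the Spark classpath.

    # Minimal sketch: PySpark Structured Streaming ingest from Kafka into
    # a Bronze table. Broker, topic, and paths are hypothetical examples.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka_bronze_ingest").getOrCreate()

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
        .option("subscribe", "orders_events")                # assumed topic
        .option("startingOffsets", "latest")
        .load()
    )

    # Keep the payload as-is in Bronze; parsing and cleansing happen in Silver.
    bronze = raw.select(
        F.col("key").cast("string"),
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("event_ts"),
    )

    query = (
        bronze.writeStream.format("delta")                   # assumes Delta Lake
        .option("checkpointLocation", "s3://lake/_checkpoints/orders_bronze")
        .outputMode("append")
        .start("s3://lake/bronze/orders")
    )
    query.awaitTermination()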
What do we Offer?
Competitive compensation
Annual performance bonus
5 working days with flexible working hours
Annual trips & team outings
Medical insurance for self & family
Training & skill development programs
Work with a global team and make the most of its diverse knowledge
Several discussions over multiple pizza parties
And a lot more; come and discover us. We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment; final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
-
Gen AI Engineer
3 days ago
Ahmedabad, Gujarat, India Computer Vision and ML Engineer Full time GenAI & LLM Developer (3–5 Years) Brilworks Position Overview: We are looking for a GenAI & LLM Developer with 3–4 years of experience to work on advanced projects using Generative AI and Large Language Models like Llama2 and ChatGPT. You will help build, fine-tune, and deploy AI solutions while collaborating with engineering and product teams. Key...
-
Data Engineer
2 weeks ago
Ahmedabad, Gujarat, India ProductSquads Full time ₹ 8,00,000 - ₹ 12,00,000 per year About the Company Company Profile: ProductSquads was founded with a bold mission: to engineer capital efficiency through autonomous AI agents, exceptional engineering, and real-time decision intelligence. We're building an AI-native platform that redefines how software teams deliver value, whether through code written by humans, agents, or both. Our stack...
-
Systems - L2 Site Reliability Engineer
2 weeks ago
Ahmedabad, Gujarat, India Crest Data Full time Description: Hiring Sr. Site Reliability Engineer (Network) L2 | CCNA Certified. Experience: 5+ years. Location: Ahmedabad. Certifications Required: CCNA / CCNP Certified. Network Engineering. Complex Operations: Can manage and optimize complex network environments, including large-scale deployments and high-availability systems. Advanced Troubleshooting: Proficient in...
-
Data Engineer
2 weeks ago
Ahmedabad, Gujarat, India Circle K Full time ₹ 12,00,000 - ₹ 36,00,000 per year Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy...
-
Data Engineer
5 days ago
Ahmedabad, Gujarat, India NDM HR Solutions Full time Job Description: Experience: 4 to 6 years of experience in Data Engineering or related roles. Required Knowledge: Snowflake (data warehousing), DBT (data transformation), Airflow (workflow orchestration). Responsibilities: Develop and maintain data pipelines using Snowflake, DBT, and Airflow. Design and implement data integration strategies. Optimize ETL...
-
Senior Data Engineer
4 days ago
Ahmedabad, Gujarat, India Maruti Techlabs Full time We are looking for a Lead Data Engineer to drive the development of a modern data platform. This role will focus on building scalable and reliable data pipelines using tools like DBT, Snowflake, and Apache Airflow, and will play a key part in shaping data architecture and strategy. As a technical leader, you'll work closely with cross-functional teams including...
-
Lead Data Engineer/Sr. Data Engineer
2 weeks ago
Ahmedabad, Gujarat, India Innovatics Full time ₹ 12,00,000 - ₹ 36,00,000 per year Job Description: 5+ years of experience in a Data Engineer role. Experience with object-oriented/functional scripting languages: Python, Scala, Golang, Java, etc. Experience with big data tools such as Spark, Hadoop, Kafka, Airflow, Hive. Experience with streaming data: Spark/Kinesis/Kafka/Pub/Sub/Event Hub. Experience with GCP/Azure Data Factory/AWS. Strong in...
-
Data Pipeline Engineer
24 hours ago
Ahmedabad, Gujarat, India TIGI HR Full time Data Engineer. Experience: 4–6 years. Location: Ahmedabad. We're looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and systems. The ideal candidate will have hands-on experience with Snowflake, DBT, and Airflow, with a passion for building efficient, high-performance data workflows. Key Responsibilities: Develop, optimize, and...
-
Azure Data Engineer
2 weeks ago
Ahmedabad, Gujarat, India Macersoft Technologies, a DataPlatformExperts Company Full time ₹ 8,00,000 - ₹ 12,00,000 per year Job Description: We are looking for a skilled and motivated AI Azure Data Engineer with strong expertise in Microsoft Azure data services. The ideal candidate must have hands-on experience with Databricks or Microsoft Fabric and hold a valid certification in either. You will be responsible for designing, developing, and deploying scalable data...
-
Senior Data Engineer
7 days ago
Ahmedabad, Gujarat, India Augmented Ally IT Solutions Pvt Ltd Full time ₹ 15,00,000 - ₹ 25,00,000 per year Seeking a Senior Data Engineer skilled in ETL, Palantir Foundry, Python, PySpark, and AWS. Build scalable data solutions, ensure quality, and collaborate in Agile teams. 5+ years' experience required; utility domain is a plus. Required Candidate Profile: Senior Data Engineer expert in cloud, ETL, and big data tools. Strong in Python, PySpark, and AWS with...