Data Engineer
1 day ago
Job Title: Data Engineer

Before You Apply — A Message from Our Founder

For generations, women have navigated systems never built with them in mind. The gaps in leadership, health, wealth, and opportunity were engineered into the world we inherited—and now, for the first time, we have the tools to build something better. AI gives us the chance to design technology that truly partners with the women and organizations working tirelessly to advance women and girls. Being trusted to do this work is both a privilege and a responsibility, and I am looking for people who feel that responsibility as deeply as I do.

Uplevyl is not a conventional workplace. We have a distinct culture. We exist to create meaningful impact for society, and we hold ourselves to a standard of moving fast while delivering quality that endures. We partner only with those who share our commitment to doing important work, and we build technology that opens doors, expands opportunity, and enables more people to participate and benefit. We hire only A-players and compensate them accordingly. We do not compromise on talent, and we part ways respectfully when performance does not meet the standards required by our mission.

Everyone at Uplevyl wears multiple hats. No task is beneath anyone. Decisions are driven by merit, not hierarchy. We prioritize customers over internal convenience, avoid politics, and maintain zero tolerance for unnecessary bureaucracy. We move with urgency because the mission demands it. When the work calls for it, we stretch beyond traditional hours—not out of obligation, but out of genuine commitment to building something transformative.

We look for people energized by bold problems, people who find meaning in momentum and possibility, and who bring a proactive, high-ownership mindset to everything they do. If you thrive in environments where innovation, purpose, and high standards come together, you will feel at home at Uplevyl.
Key Responsibilities

- Design, develop, and maintain scalable, resilient data pipelines ingesting data from 100+ global sources (World Bank, UN, OECD, national statistics offices, and more).
- Build real-time and near real-time data refresh systems to keep intelligence and insights updated across 50+ countries.
- Create robust ETL/ELT workflows capable of processing diverse data formats including APIs, CSVs, PDFs, semi-structured, and unstructured text.
- Implement enterprise-grade data isolation, multi-tenancy models, and secure data access architectures.
- Optimize vector databases and embedding pipelines for high-performance retrieval of gender-segmented insights.
- Design and maintain scalable orchestration workflows using Airflow, Dagster, Prefect, or similar tools.
- Apply clean data architecture principles such as Medallion Architecture, Data Vault 2.0, and domain-driven modeling.
- Ensure high data quality through validations, profiling, lineage tracking, and automated monitoring.
- Implement observability practices (logging, metrics, tracing) across data pipelines.
- Collaborate with AI/ML, engineering, and product teams to supply clean, reliable, and well-modeled data for analytics and AI workloads.
- Maintain CI/CD automation for data workflows, schema management, and environment consistency.
- Review pipeline designs and contribute to best practices that enhance team efficiency, reliability, and scalability.

Performance Expectations

- Deliver highly reliable, scalable, and secure data pipelines within agreed timelines.
- Maintain exceptional uptime and freshness SLAs across global data pipelines.
- Proactively identify bottlenecks in ingestion, transformation, storage, and retrieval layers—and implement long-term architectural fixes.
- Ensure high data quality, accuracy, and consistency across all datasets.
- Take ownership from design through deployment, monitoring, and continuous improvement.
- Bring best practices that uplift engineering quality, data reliability, and team productivity.
- Communicate effectively with stakeholders, product teams, and AI/ML engineers to ensure smooth delivery of data products.

Qualifications

- 3+ years of experience building production-grade data pipelines at scale.
- Strong skills in Python and SQL, and hands-on experience with modern data stacks:
  - Orchestration: Airflow, Dagster, Prefect
  - Transformation: dbt or equivalent
  - Warehouses/Lakes: Snowflake, BigQuery, Redshift, Databricks, or Lakehouse platforms
- Experience with cloud platforms (AWS/GCP/Azure) and services used for large-scale data processing.
- Strong knowledge of ETL/ELT patterns, workflow orchestration, API integrations, and streaming pipelines.
- Experience handling large-scale datasets, high-ingestion throughput, and diverse pipeline workloads for multiple business domains.
- Exposure to Medallion Architecture, Data Vault 2.0, data modeling best practices, and schema evolution strategies.
- Deep data quality mindset—experience with validation, testing, metadata, lineage, and anomaly detection tools.
- Experience with vector databases (e.g., Pinecone, Qdrant, Weaviate) and embedding pipelines is a strong plus.
- Curiosity and passion for turning "messy" public data into reliable and meaningful intelligence.
- Awareness, exposure, or hands-on experience with AI model integration, embeddings, fine-tuning pipelines, or LLM-driven data processing is a strong added advantage.
- Self-driven, detail-oriented, innovative, and a strong team collaborator with a passion for social impact.

Cultural & Work-Style Skills

1. Low Ego, High Contribution
   You collaborate across disciplines, share knowledge openly, and welcome feedback on architecture, code, and decisions. You care more about building the right system than about being "right."
2. Ownership From Design to Delivery
   You think beyond tickets. You take responsibility for the full lifecycle of what you build, including architecture, implementation, testing, reliability, and iteration. You solve problems end-to-end rather than waiting for perfect specifications.
3. Mission-Driven Urgency With Technical Depth
   You move quickly when needed, but never at the expense of long-term stability. You can distinguish when to build fast and when to build right, and you communicate trade-offs clearly.
4. Agility in Ambiguity
   You are comfortable building in evolving environments with partial information. You prototype, test assumptions early, reduce dependencies, and find creative paths through constraints.
5. Purpose-Fueled Resilience & Curiosity
   You stay steady when debugging complex issues, you learn rapidly, and you adapt as the product and architecture evolve. You draw energy from solving meaningful problems and building systems that open doors for millions of women.
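The posting's emphasis on "validations, profiling, lineage tracking, and automated monitoring" can be illustrated with a minimal record-level check of the kind such pipelines run on ingested public data. This is a hedged sketch only: the field names, required columns, and year bounds below are hypothetical, not taken from the posting.

```python
# Illustrative sketch of a record-level data-quality validation, the kind of
# check the posting describes for ingested public-source data. Field names
# ("country", "year", "value") and the year range are assumed for the example.

def validate_record(record, required=("country", "year", "value")):
    """Return a list of human-readable issues found in one ingested record."""
    issues = []
    # Presence check: every required field must be non-empty.
    for field in required:
        if record.get(field) in (None, ""):
            issues.append(f"missing field: {field}")
    # Range check: flag implausible years (bounds are illustrative).
    year = record.get("year")
    if isinstance(year, int) and not (1900 <= year <= 2100):
        issues.append(f"year out of range: {year}")
    return issues

# A clean record passes; a record with gaps or bad values is flagged.
clean = {"country": "KEN", "year": 2023, "value": 41.5}
dirty = {"country": "", "year": 1850, "value": None}
print(validate_record(clean))  # []
print(validate_record(dirty))
```

In a real pipeline, checks like this would typically run inside an orchestration task (Airflow, Dagster, Prefect) and feed a monitoring or quarantine step rather than printing to stdout.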
-
Data Engineer
2 weeks ago
Meerut, India | Insight Global | Full time
Insight Global is seeking a Senior Data Engineer to design, build, and scale robust data solutions on Microsoft Azure. You’ll own modern data pipelines and models that power analytics and reporting across the business. The ideal candidate is hands‑on with SQL databases, Azure Data Lake Storage, Azure Data Factory (ADF), and Power BI, and is comfortable...
-
Data Engineer
5 days ago
Meerut, India | IntraEdge | Full time
We are seeking a highly skilled Data Engineer with strong experience in Python, PySpark, Snowflake, and AWS Glue to join our growing data team. You will be responsible for building scalable and reliable data pipelines that drive business insights and operational efficiency. This role requires a deep understanding of data modeling, ETL frameworks, and...
-
Data Engineer
2 weeks ago
Meerut, India | Insight Global | Full time
Required Skills & Experience:
- Minimum 6 years of hands-on experience with Azure data services (Data Factory, Databricks, ADLS, SQL DB, etc.).
- 2-3 years of architecture experience: designing and optimizing data sources, application ingestion, and data lake hosting.
- Python and PySpark.
- Bachelor’s degree in Computer Science, Computer Engineering, or a STEM...
-
Data Engineer
3 days ago
Meerut, India | Neoware Technology Solutions | Full time
Data Scientist with strong expertise in Python programming, Machine Learning, and Data Engineering. The ideal candidate will design, develop, and deploy scalable AI/ML solutions while ensuring robust data pipelines and infrastructure to support advanced analytics and predictive modeling. Requirements: Develop predictive and prescriptive models to...
-
Snowflake Data Engineer
2 weeks ago
Meerut, India | MaxMyCloud | Full time
Company Description: MaxMyCloud specializes in assisting companies to optimize and maximize the value of their Snowflake investments while cutting unnecessary cloud costs. Utilizing a powerful AI-driven optimization platform, MaxMyCloud analyzes usage patterns, identifies inefficiencies, and generates immediate savings. On average, customers save over 30% on...
-
Azure Data Engineer
2 weeks ago
Meerut, India | Talentgigs | Full time
Role: Azure Data Engineer
Work Mode: Onsite / Full-time
Location: Sholinganallur, Chennai
Years of Experience: 5-10
Notice Period: Immediate
Good to Have Skills: Good Communication
Must Have Skills: Databricks (Strong), Azure Data Engineering, PySpark, Python, SQL
External Description: Azure Data Engineering + Databricks (Strong) + Python + SQL
-
AWS Data Engineer
2 weeks ago
Meerut, India | Tata Consultancy Services | Full time
Role: AWS Data Engineer
Required Technical Skill Set: AWS Data Engineer
Desired Experience Range: 6-8 yrs
Location of Requirement: Hyderabad, Bangalore, Kolkata, Chennai
Notice Period: Immediate
Virtual Interview Date: 10 Dec 2025 (Wednesday)
Job Description:
- Proficiency in...
-
Data Engineer
2 weeks ago
Meerut, India | LTIMindtree | Full time
Greetings from LTIMindtree! We are thrilled to announce an exciting opportunity for the role of QlikSense Developer! We are on the lookout for exceptional candidates who can join us immediately or within 60 days. If you are passionate about this field and eager to take on new challenges, we would love to hear from you! This is a fantastic chance to...