Current jobs related to Data Engineer - Bangalore district - Recro

  • Lead Data Engineer

    1 week ago


Bangalore, India · Eucloid Data Solutions · Full time

Eucloid is looking for a Lead Data Engineer to join our Data Platform team supporting various business applications. The ideal candidate will support development of data infrastructure on Databricks for our clients by participating in activities ranging from upstream and downstream technology selection to designing and building of...


  • Data Engineer

    7 days ago


Bangalore district, India · Acqueon · Full time

    About the Job We are building a Customer Data Platform (CDP) designed to unlock the full potential of customer experience (CX) across our products and services. This role offers the opportunity to design and scale a platform that unifies customer data from multiple sources, ensures data quality and governance, and provides a single source of truth for...

  • Data Engineer

    7 days ago


Bangalore district, India · Mastek · Full time

    Data Engineer – NiFi / Cloudera / Iceberg / Snowflake / Databricks Overview We are seeking a Data Engineer with strong Apache NiFi expertise to design and implement pipelines that move and transform data from Cloudera (HDFS/Hive/Impala) into Apache Iceberg tables, with downstream integration into Snowflake and Databricks. The ideal candidate will have...

  • Data Engineer

    2 weeks ago


Bangalore district, India · LTIMindtree · Full time

GCP Big Data Engineer. Location: Bangalore & Gurgaon. Leadership role with 8-10 yrs experience. Skill set: GCP, SQL, PySpark; ETL knowledge a must. Mandatory skills: GCP Storage, GCP BigQuery, GCP DataProc, GCP Cloud Composer, GCP DMS, Apache Airflow, Java, Python, Scala, GCP Datastream, Google Analytics Hub, GCP Workflows, GCP Dataform, GCP Datafusion, GCP...


  • Bangalore district, India · Guidewire Software · Full time

    Responsibilities: Design and Development: Architect, design, and develop robust, scalable, and efficient data pipelines. Design and manage platform solutions to support data engineering needs to ensure seamless integration and performance. Write clean, efficient, and maintainable code. Leadership and Collaboration: Lead and mentor a team of Data engineers,...


  • Kanyakumari district, India · Skills Engineer · Full time · ₹ 1,50,000 - ₹ 2,50,000 per year

Company Description: Skills Engineer Academy (SEA) is a premier institute specializing in training for Tekla Structures, aimed at shaping the future of structural engineering professionals. As an Authorized Training Center (ATC), SEA provides top-tier Tekla training, ensuring that students acquire industry-ready skills. Additionally, SEA supports job placement...

  • Data Engineer

    1 week ago


Bangalore district, India · Digitrix Software LLP · Full time

Experience: 5-10 years. Location: Bangalore / Pune / Kolkata / Hyderabad / Gurugram (hybrid). Notice period: immediate joiners only. Must have: PostgreSQL (advanced), Python (intermediate). Concepts: ETL process, data modelling, CI/CD deployment process, GitLab. Cloud experience: AWS S3 / Azure Blob, ADLS. Good to have: orchestration (Airflow) /...

  • AWS Data Engineer

    7 days ago


Bangalore district, India · Coforge · Full time

We are hiring AWS Data Engineers at Coforge Ltd. Job location: Bangalore. Experience required: 5 to 7 years. Availability: immediate joiners preferred. 📧 Send your CV to Gaurav.2.Kumar@coforge.com; 📱 WhatsApp 9667427662 for any queries. Role overview: Coforge Ltd. is seeking a skilled AWS Engineer with 5-7 years of hands-on experience in designing,...


  • Bangalore district, India · Ascendion · Full time

Job title: Senior GCP Data Engineer (7-12 years). Job type: full-time. Work mode: hybrid. Locations: Bengaluru, Hyderabad, Chennai, Pune. Job summary: We are looking for a talented GCP BigQuery Data Engineer with strong SQL skills and basic proficiency in Python to join our data engineering team. The ideal candidate should have hands-on experience working...

  • Data Engineer

    2 weeks ago


    Bangalore district, India · Recro · Full time

    Role: Data Engineer · Experience: 2+ yrs · Location: Bangalore

    Roles and responsibilities:
    - Create and maintain optimal data pipeline architecture.
    - Create and maintain events/streaming-based architecture and design.
    - Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
    - Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
    - Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
    - Work with data scientists to strive for greater functionality in our data systems.

    Mandatory qualifications:
    - Proficiency in either Scala or Python, expertise in Spark, and experience with performance tuning and optimization. dbt is also mandatory.
    - 2+ years of experience in a Data Engineer role.
    - Experience with big data tools: HDFS/S3, Spark/Flink, Hive, HBase, Kafka/Kinesis, etc.
    - Experience with relational SQL and NoSQL databases, including Elasticsearch and Cassandra/MongoDB.
    - Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
    - Experience with AWS/GCP cloud services.
    - Experience with stream-processing systems: Spark Streaming/Flink, etc.
    - Experience with object-oriented/functional scripting languages: Java, Scala, etc.
    - Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
    - Strong analytic skills related to working with structured/unstructured datasets.
    - Build processes supporting data transformation, data structures, dimensional modeling, metadata, dependency, schema registration/evolution, and workload management.
    - Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
    - Experience supporting and working with cross-functional teams in a dynamic environment.
    - A few weekend side projects up on GitHub.
    - Contributions to an open-source project.
    - Experience at a product company.
    - Working knowledge of a backend programming language.

    Skill details:
    - Programming language (mandatory): very strong in either Scala or Python (not necessarily both); used mainly for writing data pipelines and transformations.
    - Big data framework (mandatory): strong hands-on expertise in Apache Spark; must know how to write Spark jobs and also tune/optimize them for better performance (not just basic Spark usage).
    - Performance tuning and optimization (mandatory): able to handle large datasets efficiently; optimize Spark jobs (partitions, shuffles, caching, memory usage, etc.).
    - dbt (mandatory): hands-on experience with dbt (Data Build Tool) for data transformations and data modeling, often used on top of warehouses like BigQuery, Snowflake, or Redshift.