GCP + PySpark

4 days ago


Gurugram, Haryana, India Cognizant Full time

**Role**: GCP + PySpark
**Experience**: 5 to 12 years
**Notice period**: 60 days
**Location**: Gurgaon
**Job description**:
1. Strong knowledge of PySpark
2. Experience working with SQL and NoSQL databases
3. Knowledge of BigQuery and Bigtable
4. Working experience in GCP
5. Knowledge of Hadoop, distributed execution, and big data concepts
6. Hive and shell scripting


  • GCP, DAG Testing

    4 days ago


    Gurugram, Haryana, India Cognizant Full time

    **Location**: Bangalore **Grade**: SA/M **Skills**: GCP, BigQuery, Hive, PySpark, DAG Testing, Airflow

  • Big Data - GCP

    2 weeks ago


    Gurugram, Haryana, India Ikrux Full time

    Hands-on experience with Hadoop, Big Data, PySpark, and SQL is mandatory for this GCP, Python, Java, Spark SQL, and Big Data role. Key Responsibilities - You will be challenged with identifying innovative ideas and proofs of concept to deliver against the existing and future needs of our customers. Software Engineers who join our Loyalty Technology team will be assigned to one of...


  • Bengaluru, Chennai, Gurugram, India BCforward Full time ₹ 15,00,000 - ₹ 20,00,000 per year

    Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED. Contract To Hire (C2H) Role. Location: Bengaluru/Gurgaon/Chennai. Payroll: BCforward. Work Mode: Hybrid. JD Skills: GCP; PySpark; ETL - Big Data / Data Warehousing; SQL; Python; BigQuery; Airflow. Experienced data engineer with hands-on experience on GCP offerings. Experienced in BigQuery/BigTable/PySpark,...

  • Lead Data Engineer

    2 days ago


    Gurugram, India SUPERSOURCING TECHNOLOGIES PRIVATE LIMITED Full time

    About the Role: We are looking for an experienced Lead Data Engineer with deep expertise in Big Data technologies, particularly within the Google Cloud Platform (GCP) ecosystem. The ideal candidate should have a strong command of PySpark/Spark, SQL, and Python, and a proven track record in building, optimizing, and managing large-scale data pipelines and...


  • Bengaluru, Chennai, Gurugram, India BCforward Full time ₹ 1,04,000 - ₹ 1,30,878 per year

    Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED. Contract To Hire (C2H) Role. Location: Delhi/Bengaluru/Gurgaon/Chennai - Hybrid. Payroll: BCforward. Work Mode: Hybrid. Domain: Banking & Finance. JD Skills: GCP; ETL - Big Data / Data Warehousing; SQL; Python; PySpark; Airflow. Experienced data engineer with hands-on experience on GCP offerings. Experienced in...


  • Gurugram, India upGrad Full time

    About the Job: We're hiring for a cloud data engineering role (preferably Azure) focused on data pipelines and Spark. - Work with the Databricks platform using Spark for big data processing and analytics. - Write optimized and efficient code using PySpark, Spark SQL, and Python. - Develop and maintain ETL processes using Databricks notebooks and workflows. - Implement and...

  • Now Hiring

    1 week ago


    Gurugram, Haryana, India Syncwell Infotech Private Limited Full time

    **Job Title**: GCP DevOps Engineer **Location**: Gurgaon **Job Type**: Full-Time **Key Responsibilities**: - Design, build, and manage cloud infrastructure and services in GCP. - Implement and maintain CI/CD pipelines using tools like Cloud Build, Jenkins, GitLab CI/CD, or GitHub Actions. - Automate provisioning using Infrastructure as...

  • GCP Data Lead

    7 days ago


    Haryana, India 9NEXUS Full time ₹ 9,00,000 - ₹ 12,00,000 per year

    Job Title: GCP Data Lead Experience: 5 to 8 Years Location: [Remote] Employment Type: Full-Time Job Summary: We are seeking an experienced GCP Data Lead with a strong background in data engineering and cloud architecture on Google Cloud Platform (GCP). The ideal candidate will be responsible for designing scalable data solutions, leading data teams, and...


  • Gurugram, India Strategic HR Solutions Full time

    Job Summary: We are seeking a highly skilled and hands-on Snowflake Data Engineer to join our data engineering team. This role requires a deep understanding of Snowflake's core components, including Snowpipe, Streams, and Tasks, as well as strong experience with query profiling, data pipeline orchestration, and performance tuning. The ideal candidate will...


  • Gurugram, India Skeps Full time

    Responsibilities: - Develop and maintain scalable data pipelines using PySpark and Delta Lake, ensuring efficient processing of large, structured, and semi-structured datasets. - Analyse complex data sets using SQL and Python (Pandas, NumPy, scikit-learn, Seaborn) to identify trends, patterns, and opportunities for process improvement. - Continuously optimize the...