Current jobs related to PySpark Hive Data Engineer - Pune, Maharashtra - Citi

  • Data Engineering Role

    2 weeks ago


    Pune, Maharashtra, India beBeeDataEngineering Full time ₹ 2,00,00,000 - ₹ 2,50,00,000

    Big Data Engineer. Welcome to this exciting opportunity to leverage your skills in Big Data engineering! Design and implement data ingestion pipelines from diverse sources onto a scalable big data platform. Develop and maintain complex data processing workflows using distributed computing frameworks such as Apache PySpark and Hive. Key Responsibilities: The ideal...

  • Data Engineer

    9 hours ago


    Pune, Maharashtra, India Tata Consultancy Services Full time ₹ 15,00,000 - ₹ 25,00,000 per year

    Job Title: Data Engineer - PySpark. Experience: 5 to 8 years. Location: Pune/Hyderabad. Job Description - Required Skills: 5+ years of experience in Big Data and PySpark. Must-Have: Good work experience on Big Data platforms like Hadoop, Spark, Scala, Hive, Impala, SQL. Good-to-Have: Good Spark, PySpark, Big Data experience; Spark UI/optimization/debugging techniques; Good...

  • Big Data Engineer

    2 weeks ago


    Pune, Maharashtra, India Nice Software Solutions Pvt. Ltd. Full time

    Big Data Engineer (PySpark). Location: Pune/Nagpur (WFO). Experience: 8-12 years. Employment Type: Full-time. Job Overview: We are looking for an experienced Big Data Engineer with strong expertise in PySpark and Big Data ecosystems. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines while ensuring high...

  • Data Engineer

    4 weeks ago


    Pune, Maharashtra, India Servion Global Solutions Full time

    Immediate opening for Data Engineer @ Pune/Chennai location. EXP: 5+ yrs. CTC: ECTC: NP: Immediate to 10 days (currently serving notice period). Work location: Chennai (Taramani)/Pune (Hinjewadi). Work mode: Work from office. Shift: UK shift. JD: Data Engineer, Python, PySpark, Airflow, Hive, SQL, Trino. If interested, kindly share your resume ...


  • Pune, Maharashtra, India NCS Group Full time

    Job Description. Essential for this role: Education and Qualifications: Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics or equivalent. Minimum seven-plus years of experience in the data analytics field. Experience with Azure/AWS Databricks. Experience in building and optimizing data pipelines, architectures and data sets. Excellent...


  • Pune, Maharashtra, India beBeeDataEngineer Full time ₹ 12,00,000 - ₹ 15,00,000

    Job Opportunity. As a highly motivated data engineer with 3–4 years of experience, you will be responsible for designing, developing and optimizing large-scale data processing solutions. This contract remote role requires hands-on expertise in PySpark, Python, SQL and real-time data streaming to contribute to the design and implementation of scalable data...


  • Pune, Maharashtra, India beBeeDataInsights Full time ₹ 1,50,00,000 - ₹ 2,00,00,000

    Job Title: Data Insights Specialist. We are seeking a highly skilled professional to lead our big data initiatives. The ideal candidate will have a strong background in analytics and experience with technologies such as PySpark, Hive, Spark, HBase, and DQ tools. Key responsibilities: Design and implement scalable big data solutions using PySpark, Hive, and...


  • Pune, Maharashtra, India Capco Technologies Pvt Ltd Full time

    Position: Big Data Tester. Experience: 5-9 years. Location: Pune, India. Job Summary: We are seeking an experienced Big Data Tester with 5-9 years of experience in software quality assurance, with a dedicated focus on data. The ideal candidate will have a strong background in Python, PySpark, and SQL, along with a proven track record of implementing best...

  • Big Data Engineer

    2 weeks ago


    Pune, Maharashtra, India Tata Consultancy Services Full time

    Greetings from TCS. TCS is hiring for Big Data. Location: Chennai/Mumbai/Pune. Desired Experience Range: 6 to 12 years. Must-Have: PySpark, Hive. Good-to-Have: Spark, HBase, DQ tools, Agile Scrum experience, exposure to data ingestion from disparate sources onto a Big Data platform. Thanks, Anshika

PySpark Hive Data Engineer

2 weeks ago


Pune, Maharashtra, India Citi Full time US$ 80,000 - US$ 150,000 per year

The Role

The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities

  • Develop and support scalable, extensible, and highly available data solutions
  • Deliver on critical business priorities while ensuring alignment with the wider architectural vision
  • Identify and help address potential risks in the data supply chain
  • Follow and contribute to technical standards
  • Design and develop analytical data models

Required Qualifications & Work Experience

  • First Class Degree in Engineering/Technology (4-year graduate course)
  • 8 to 12 years' experience implementing data-intensive solutions using agile methodologies; hands-on experience with PySpark, Hive, HDFS, and Hadoop
  • Strong understanding of AWS Glue serverless data integration, Terraform, and deploying Apache Spark on AWS using Elastic Kubernetes Service (EKS); experience with deployment tools such as LightSpeed and Tekton
  • Experience of relational databases and using SQL for data querying, transformation and manipulation
  • Experience of modelling data for analytical consumers

  • Ability to automate and streamline the build, test and deployment of data pipelines

  • Experience in cloud native technologies and patterns
  • A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
  • Excellent communication and problem-solving skills
  • An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)

  • ETL: Hands-on experience building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
  • Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
  • Data Warehousing & Database Management: Expertise around Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
  • Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
  • Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
  • DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
  • Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance

Technical Skills (Valuable)

  • Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
  • Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
  • Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
  • Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
  • File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
  • Others: Experience using a job scheduler, e.g., Autosys. Exposure to Business Intelligence tools, e.g., Tableau, Power BI

Certification on any one or more of the above topics would be an advantage.

-

Job Family Group:

Technology

-

Job Family:

Digital Software Engineering

-

Time Type:

Full time

-

Most Relevant Skills

Please see the requirements listed above.

-

Other Relevant Skills

For complementary skills, please see above and/or contact the recruiter.

-

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.

View Citi's EEO Policy Statement and the Know Your Rights poster.