Current jobs related to AWS Pyspark Developer - Chennai Coimbatore - Saama Technologies

  • Pyspark Developer

    7 days ago


    Chennai, Tamil Nadu, India MP DOMINIC AND CO Full time

    Job Summary: Design, develop, and implement scalable data pipelines and streaming use cases using PySpark and Spark on a distributed computing platform. Strong programming skills in Spark Streaming; familiarity with cloud platforms such as GCP; experience with big data technologies such as Hadoop, Hive, and HDFS. Perform ETL operations...

  • PySpark Developer

    2 weeks ago


    Coimbatore, Tamil Nadu, India Corpxcel Consulting Full time

    Location: Chennai/Bangalore/Hyderabad/Coimbatore/Pune. WFO: 3 days mandatory from the above-mentioned locations. Role Summary: We are seeking a highly skilled PySpark Developer with hands-on experience in Databricks to join the company's IT Systems Development unit in an offshore capacity. This role focuses on designing, building, and optimizing large-scale data...

  • AWS Data Engineer

    11 hours ago


    Chennai, India Tata Consultancy Services Full time

    Dear Candidate, greetings from TATA Consultancy Services. Job openings at TCS. Skill: AWS Data Engineer - Redshift, PySpark, Glue. Experience range: 5-8 years. Location: Chennai. Notice period: 30 days. Please find the job description below. Good hands-on experience in Python programming and PySpark. Data engineering experience using AWS core services (Lambda, Glue, EMR and...



  • Chennai, India Cynosure Corporate Solutions Full time

    Build data pipelines for consumption by the data science team. Clear understanding of and experience with Python and PySpark. Experience in writing Python programs and SQL queries, and in SQL query tuning. Build and maintain data pipelines in PySpark with SQL and Python. Knowledge of cloud (Azure/AWS) technologies is an added advantage. Suggest and implement...

  • PySpark Developer

    6 days ago


    Hyderabad, Bengaluru, Chennai, India Coders Brain Technology Private Limited Full time

    Job Description ROLE RESPONSIBILITIES Data Engineering and Processing: Develop and manage data pipelines using PySpark on Databricks. Implement ETL/ELT processes to process structured and unstructured data at scale. Optimize data pipelines for performance, scalability, and cost-efficiency in Databricks. Databricks Platform Expertise: Experience in...

AWS Pyspark Developer

2 weeks ago


Chennai/Coimbatore, India Saama Technologies Full time US$90,000 - US$120,000 per year

We are seeking an experienced AWS PySpark Developer with 2-8 years of experience to design, build, and optimize our data pipelines and analytics architecture. The ideal candidate will have a strong background in data wrangling and analysis, with a deep understanding of AWS data services.

Key Responsibilities:

  • Design, build, and optimize robust data pipelines and data architecture on the AWS cloud platform.
  • Wrangle, explore, and analyze large datasets to identify trends, answer business questions, and pinpoint areas for improvement.
  • Develop and maintain a next-generation analytics environment, providing a self-service, centralized platform for all data-centric activities.
  • Formulate and implement distributed algorithms for effective data processing and trend identification.
  • Configure and manage Identity and Access Management (IAM) on the AWS platform.
  • Collaborate with stakeholders to understand data requirements and deliver effective solutions.

Required Skills & Experience:

  • 2-8 years of experience as a Data Engineer or Developer.
  • Proven experience building and optimizing data pipelines on AWS.
  • Proficiency in scripting with Python.
  • Strong working knowledge of:
    • Big Data Tools: AWS Athena.
    • Relational & NoSQL Databases: AWS Redshift and PostgreSQL.
    • Data Pipeline Tools: AWS Glue, AWS Data Pipeline, or AWS Lake Formation.
    • Container Orchestration: Kubernetes, Docker, Amazon ECR/ECS/EKS.
  • Experience with wrangling, exploring, and analyzing data.
  • Strong organizational and problem-solving skills.

Preferred Skills:

  • Experience with machine learning tools (SageMaker, TensorFlow).
  • Working knowledge of stream processing (Kinesis, Spark Streaming).
  • Experience with analytics and visualization tools (Tableau, Power BI).
  • Knowledge of optimizing AWS Redshift performance.
  • Familiarity with SAP Business Objects (BO).