Data Engineer, Pyspark
1 week ago
Join us as a Data Engineer
- We're looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure
- Day-to-day, you'll develop innovative, data-driven solutions through data pipelines, modelling and ETL design, while aiming to drive commercial success through insights
- If you're ready for a new challenge, and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you
- We're offering this role at associate vice president level
What you'll do
Your daily responsibilities will include you developing a comprehensive knowledge of our data structures and metrics, advocating for change when needed for product development. You'll also provide transformation solutions and carry out complex data extractions.
We'll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You'll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.
You'll Also Be Responsible For
- Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
- Participating in the data engineering community to deliver opportunities to support our strategic direction
- Carrying out complex data engineering tasks to build a scalable data architecture and transform data so that it's usable by analysts and data scientists
- Building advanced automation of data engineering pipelines through the removal of manual stages
- Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required
The skills you'll need
To be successful in this role, you'll have an understanding of data usage and dependencies with wider teams and the end customer. You'll also have experience of extracting value and features from large scale data.
We'll expect you to have at least seven years of experience in ETL technical design, data quality testing, cleansing and monitoring, data sourcing, exploration and analysis, and data warehousing and data modelling capabilities.
You'll Also Need
- Experience of using programming languages alongside knowledge of data and software engineering fundamentals
- Experience in Oracle PL/SQL, PySpark, AWS S3, AWS Glue, and Airflow (see the illustrative sketch after this list)
- Good knowledge of modern code development practices
- Great communication skills with the ability to proactively engage with a range of stakeholders
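As a rough sketch of how the PySpark, S3, Glue and Airflow stack above might fit together, the example below shows a minimal PySpark batch job that reads raw CSV files from S3 and writes partitioned Parquet; the bucket names, paths and column names are hypothetical placeholders, and in practice a job like this would usually be scheduled from Airflow or run as an AWS Glue job.

```python
# Minimal PySpark batch job: read raw CSV from S3, apply a simple cleansing
# step, and write partitioned Parquet back to S3.
# Bucket names, paths, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-s3-batch-job").getOrCreate()

# Read raw customer events from S3 (assumes the S3A connector and credentials
# are already configured on the cluster).
raw = (
    spark.read
    .option("header", "true")
    .csv("s3a://example-raw-bucket/customer_events/")
)

# Basic cleansing: drop rows without an event timestamp and derive a date column.
cleaned = (
    raw.dropna(subset=["event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write curated data partitioned by date for downstream analysts.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/customer_events/")
)

spark.stop()
```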
-
Oracle + PySpark Data Engineer
1 hour ago
Bengaluru, Karnataka, India PradeepIT Consulting Services Full time ₹ 6,00,000 - ₹ 18,00,000 per year. Job Description: Experience: 5 to 7 years. We are seeking a highly skilled and motivated Oracle + PySpark Data Engineer/Analyst to join our team. The ideal candidate will be responsible for leveraging the Oracle database and PySpark to manage, transform, and analyze data to support our business's decision-making processes. This role will play a crucial part...
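As a hedged illustration of the Oracle-plus-PySpark combination this listing describes, the sketch below loads an Oracle table into a Spark DataFrame over JDBC and runs a simple aggregation; the host, credentials, table and column names are hypothetical, and the Oracle JDBC driver jar would need to be on the Spark classpath.

```python
# Sketch: load an Oracle table into Spark over JDBC and aggregate it.
# Host, service name, credentials, and table/column names are placeholders;
# the Oracle JDBC driver (e.g. ojdbc8.jar) must be on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("oracle-pyspark-example").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1")
    .option("dbtable", "SALES.ORDERS")
    .option("user", "report_user")
    .option("password", "change_me")
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

# Example transformation: monthly revenue per region, ready for analysis.
monthly_revenue = (
    orders.withColumn("order_month", F.date_trunc("month", "ORDER_DATE"))
          .groupBy("REGION", "order_month")
          .agg(F.sum("ORDER_AMOUNT").alias("total_revenue"))
)

monthly_revenue.show()
```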
-
Data Engineer
2 weeks ago
Bengaluru, Karnataka, India NTT DATA Full time ₹ 15,00,000 - ₹ 25,00,000 per year. Migrate ETL workflows from SAP BODS to AWS Glue/dbt/Talend. Develop and maintain scalable ETL pipelines in AWS. Write PySpark scripts for large-scale data processing. Optimize SQL queries and transformations for AWS PostgreSQL. Work with Cloud Engineers to ensure smooth deployment and performance tuning. Integrate data pipelines with existing Unix systems...
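As a loose sketch of the Glue-based target state this listing mentions, the snippet below outlines an AWS Glue PySpark job that reads a catalogued table, filters it, and writes the result to PostgreSQL over JDBC; it only runs inside a Glue job environment, and the catalog, connection and table names are hypothetical placeholders.

```python
# Sketch of an AWS Glue PySpark job landing a catalogued table in PostgreSQL.
# Runs only inside a Glue job environment; database, table, and connection
# values are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="legacy_sales", table_name="orders"
).toDF()

# Example transformation: keep only shipped orders.
shipped = source.filter(source["status"] == "SHIPPED")

# Write to PostgreSQL over JDBC.
(
    shipped.write.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/analytics")
    .option("dbtable", "public.orders_shipped")
    .option("user", "etl_user")
    .option("password", "change_me")
    .mode("append")
    .save()
)

job.commit()
```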
-
Senior Data Engineer
6 hours ago
Bengaluru, Karnataka, India NTT DATA Full time ₹ 20,00,000 - ₹ 30,00,000 per year. Build scalable ETL/ELT pipelines with Informatica or Spark/PySpark; orchestrate with AWS Glue or Azure ADF/Databricks. Implement cleansing, enrichment, and modeling (dimensional/Data Vault); optimize partitioning, job performance, and cost. Publish curated datasets/semantic layers for Power BI, Tableau, SSRS/SSAS; enable self-service analytics. Automate data...
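As a small illustration of the partitioning and cost points above, the sketch below publishes a curated fact table partitioned by date, repartitioning on the partition column to keep file counts manageable; the lake paths and column names are hypothetical.

```python
# Sketch: publish a curated fact table with date partitioning while keeping
# output file counts under control. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curated-fact-example").getOrCreate()

transactions = spark.read.parquet("s3a://example-lake/staging/transactions/")

fact_sales = (
    transactions
    .withColumn("sale_date", F.to_date("sale_ts"))
    .select("sale_date", "store_id", "product_id", "quantity", "amount")
)

# Repartition by the partition column so each date lands in a small number of
# files, which helps both query pruning and storage cost.
(
    fact_sales.repartition("sale_date")
    .write.mode("overwrite")
    .partitionBy("sale_date")
    .parquet("s3a://example-lake/curated/fact_sales/")
)
```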
-
Sr. Snowflake Data Engineer Pyspark
1 week ago
Bengaluru, Karnataka, India EduRun Group Full time ₹ 20,00,000 - ₹ 25,00,000 per year. Senior Data Engineer | 10-12 Years Experience | Must Have: Proven expertise in building scalable batch and streaming data pipelines using Databricks (PySpark) and Snowflake. Lead the design, implementation, and optimization of application data stores using PostgreSQL, DynamoDB, and advanced SQL. Strong programming skills in SQL, Python, and PySpark for...
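As a hedged sketch of the Databricks-to-Snowflake streaming pattern this listing names, the snippet below reads a streaming file source and uses foreachBatch to write each micro-batch to Snowflake through the Spark-Snowflake connector; all connection values, paths and table names are placeholders, and the connector must be installed on the cluster.

```python
# Sketch: stream new files from cloud storage and append each micro-batch to
# Snowflake via the Spark-Snowflake connector (assumed to be installed on the
# Databricks cluster). All connection values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-to-snowflake-example").getOrCreate()

snowflake_options = {
    "sfUrl": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "change_me",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

# Streaming file sources require an explicit schema.
events = (
    spark.readStream
    .format("json")
    .schema("event_id STRING, event_ts TIMESTAMP, payload STRING")
    .load("s3a://example-landing/events/")
)

def write_batch(batch_df, batch_id):
    # The connector writes in batch mode, so each micro-batch is written here.
    # "snowflake" is the short name Databricks exposes for the connector.
    (
        batch_df.write.format("snowflake")
        .options(**snowflake_options)
        .option("dbtable", "RAW_EVENTS")
        .mode("append")
        .save()
    )

query = (
    events.writeStream
    .foreachBatch(write_batch)
    .option("checkpointLocation", "s3a://example-landing/_checkpoints/raw_events/")
    .start()
)
query.awaitTermination()
```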
-
Data Engineer
3 days ago
Bengaluru, Karnataka, India NTT DATA Full time ₹ 12,00,000 - ₹ 24,00,000 per year. Req ID: 345819. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer / Developer (Pipelines & Analytics) to join our team in Bangalore, Karnātaka (IN-KA), India (IN). Role:...
-
Data Engineer with SQL/ETL/Pyspark
5 days ago
Bengaluru, Karnataka, India Infosys Finacle Full time ₹ 12,00,000 - ₹ 36,00,000 per year. Mandate Skills - SQL, ETL, Hadoop, PySpark, Apache Kafka. Required Skills: 5+ years of relevant data engineering experience using Python or Java as the tech stack; well versed in Hadoop. Development expertise with 2+ years of experience in SQL, ETL processes and other similar technologies. Skilled in identifying and optimizing database queries. Strong understanding of...
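As a rough sketch of the Kafka-plus-PySpark ingestion this listing asks for, the example below consumes a Kafka topic with Structured Streaming, parses the JSON payload, and lands it as Parquet; broker addresses, the topic, and the schema are hypothetical, and the spark-sql-kafka connector package must be available on the cluster.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming and land parsed
# records as Parquet. Broker addresses, topic, and schema are hypothetical, and
# the spark-sql-kafka connector package must be available on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingestion-example").getOrCreate()

schema = "order_id STRING, customer_id STRING, amount DOUBLE, order_ts TIMESTAMP"

orders = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers bytes; decode the value and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("order"))
    .select("order.*")
)

query = (
    orders.writeStream
    .format("parquet")
    .option("path", "s3a://example-lake/raw/orders/")
    .option("checkpointLocation", "s3a://example-lake/_checkpoints/orders/")
    .start()
)
query.awaitTermination()
```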
-
Pyspark developer
2 days ago
Bengaluru, Karnataka, India Infosys Full time ₹ 15,00,000 - ₹ 25,00,000 per year. Job Description: In the role of Data Engineer, you will interface with key stakeholders and apply your technical proficiency across different stages of the Software Development Life Cycle, including requirements elicitation, application architecture definition and design. You will play an important role in creating the high-level design artifacts. You will also...
-
Data & AI Engineer Lead
2 days ago
Bengaluru, Karnataka, India NTT DATA Full time ₹ 12,00,000 - ₹ 36,00,000 per year. Framework Design & Architecture: Architect a metadata-driven, Python/Spark-based framework for automated data validation across high-volume production datasets. Define DQ rule templates for completeness, integrity, conformity, accuracy, and timeliness. Establish data quality thresholds, escalation protocols, and exception workflows. Automation & Integration...
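As a hedged illustration of a metadata-driven data quality check of the kind described above, the sketch below evaluates completeness rules defined as plain metadata against a Spark DataFrame; the rule format, thresholds, and dataset path are hypothetical and not the framework the listing refers to.

```python
# Sketch: evaluate simple metadata-driven completeness rules against a DataFrame.
# The rule format, thresholds, and dataset path are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-completeness-example").getOrCreate()

# Rule metadata: column name -> minimum acceptable non-null ratio.
completeness_rules = {
    "customer_id": 1.00,
    "email": 0.95,
    "signup_ts": 0.99,
}

customers = spark.read.parquet("s3a://example-lake/curated/customers/")
total_rows = customers.count()

results = []
for column, min_ratio in completeness_rules.items():
    non_null = customers.filter(F.col(column).isNotNull()).count()
    ratio = non_null / total_rows if total_rows else 0.0
    results.append({
        "column": column,
        "non_null_ratio": round(ratio, 4),
        "threshold": min_ratio,
        "passed": ratio >= min_ratio,
    })

# In a real framework the results would feed escalation/exception workflows;
# here they are simply printed.
for result in results:
    print(result)
```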
-
Senior Data Engineer
7 hours ago
Bengaluru, Karnataka, India NTT DATA Full time ₹ 6,00,000 - ₹ 14,00,000 per year. Req ID: 345817. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Senior Data Engineer / Developer (Pipelines & Analytics) to join our team in Bangalore, Karnātaka (IN-KA), India (IN)....
-
Databricks + Pyspark + SQL
58 minutes ago
Bengaluru, Karnataka, India PradeepIT Consulting Services Full time ₹ 12,00,000 - ₹ 36,00,000 per year. Job description: Role: Databricks + Pyspark + SQL. Years of Experience: 4-8 years. Responsibilities: Collaborate with cross-functional teams to understand data requirements and design efficient data processing solutions. Develop and maintain ETL processes using Databricks and PySpark for large-scale data processing. Optimize and tune existing data pipelines to ensure...
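As a small, hedged example of the Databricks/PySpark/SQL mix this listing describes, the sketch below registers DataFrames as temporary views and runs a Spark SQL join with a broadcast hint, a common tuning step for joins against small dimension tables; all paths and table names are hypothetical.

```python
# Sketch: combine the DataFrame API with Spark SQL and a broadcast-join hint,
# a common tuning step for joins against small dimension tables.
# Paths and table names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("databricks-sql-tuning-example").getOrCreate()

sales = spark.read.parquet("s3a://example-lake/curated/fact_sales/")
stores = spark.read.parquet("s3a://example-lake/curated/dim_store/")

sales.createOrReplaceTempView("fact_sales")
stores.createOrReplaceTempView("dim_store")

# Broadcast the small dimension table to avoid shuffling the large fact table.
daily_store_sales = spark.sql("""
    SELECT /*+ BROADCAST(d) */
           d.store_name,
           f.sale_date,
           SUM(f.amount) AS total_amount
    FROM fact_sales f
    JOIN dim_store d ON f.store_id = d.store_id
    GROUP BY d.store_name, f.sale_date
""")

(
    daily_store_sales.write
    .mode("overwrite")
    .parquet("s3a://example-lake/marts/daily_store_sales/")
)
```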