Data Engineer: Applications_AFL
Job description
Key Responsibilities:
Candidates should have experience in the following:
Design, develop, and implement a data lakehouse architecture on AWS, ensuring scalability, flexibility, and performance.
Build ETL/ELT pipelines for ingesting, transforming, and processing structured and unstructured data.
Collaborate with cross-functional teams to gather data requirements and deliver data solutions aligned with business needs.
Develop and manage data models, schemas, and data lakes for analytics, reporting, and BI purposes.
Implement data governance practices, ensuring data quality, security, and compliance.
Perform data integration between on-premises and cloud systems using AWS services.
Monitor and troubleshoot data pipelines and infrastructure for reliability and scalability.
Skills and Qualifications:
7+ years of experience in data engineering, with a focus on cloud data platforms.
Strong experience with AWS services: S3, Glue, Redshift, Athena, Lambda, IAM, RDS, and EC2.
Hands-on experience in building data lakes, data warehouses, and lakehouse architectures.
Experience building ETL/ELT pipelines using tools such as AWS Glue, Apache Spark, or similar.
Expertise in SQL and Python or Java for data processing and transformations.
Familiarity with data modeling and schema design in cloud environments.
Understanding of data security and governance practices, including IAM policies and data encryption.
Experience with big data technologies (e.g., Hadoop, Spark) and data streaming services (e.g., Kinesis, Kafka).
Lending domain knowledge will be an added advantage.
Preferred Skills:
Experience with Databricks or similar platforms for data engineering.
Familiarity with DevOps practices for deploying data solutions on AWS (CI/CD pipelines).
Knowledge of API integration and cloud data migration strategies.