Job Overview
JOB DETAILS
REQUIREMENTS
- Experience with data modeling, data integration, and ETL processes
- Strong knowledge of SQL and database systems
- Familiarity with managing cloud-native databases
- Understanding of security integration in CI/CD pipelines
- Understanding of data warehousing concepts and best practices
- Proficiency in working with large-scale data sets and distributed computing frameworks
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
- Hands-on work experience as a Data Engineer
- Good programming skills in Python and experience with Spark for data processing and analytics
- Experience with Google Cloud Platform services such as GCS, Dataflow, Cloud Functions, Cloud Composer, Cloud Scheduler, Datastream (CDC), Pub/Sub, BigQuery, and Dataproc, and with Apache Beam for batch and streaming data processing (see the sketch after this list)
- Experience with scripting languages such as Shell and Perl
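
As a rough illustration of the Beam-on-GCP experience described above, here is a minimal batch pipeline sketch. The project, bucket, dataset, and field names are hypothetical placeholders, not part of this role's actual stack:

```python
# Minimal Apache Beam batch pipeline sketch: read CSV lines from GCS,
# parse them, and load the rows into BigQuery via Dataflow.
# Project, bucket, dataset, and field names below are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line: str) -> dict:
    """Turn a 'user_id,event,ts' CSV line into a BigQuery-ready row."""
    user_id, event, ts = line.split(",")
    return {"user_id": user_id, "event": event, "ts": ts}


def run():
    options = PipelineOptions(
        runner="DataflowRunner",          # or "DirectRunner" for local tests
        project="my-gcp-project",         # hypothetical project id
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromGCS" >> beam.io.ReadFromText("gs://my-bucket/events/*.csv")
            | "ParseCSV" >> beam.Map(parse_line)
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-gcp-project:analytics.events",
                schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```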
RESPONSIBILITIES
- Develop JSON messaging structures for integration with various applications (illustrated in the sketch after this list)
- Leverage DevOps and CI/CD practices (GitHub, Terraform) to ensure the reliability and scalability of data pipelines
- Contribute to designing, developing, and implementing data pipelines and data integration solutions using Python and Google Cloud Platform services
- Develop, test, and maintain data acquisition pipelines for large volumes of structured and unstructured data, covering both batch and real-time processing
- Develop and maintain data pipelines and ETL processes using Python
- Design, build, and optimize data models and data architecture for efficient data processing and storage
- Implement data integration and data transformation workflows to ensure data quality and consistency
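
To make the JSON-messaging responsibility above concrete, a minimal sketch follows, using Pub/Sub (named in the requirements) as the transport. The topic name, event type, and message fields are hypothetical assumptions, not a prescribed schema:

```python
# Minimal sketch of a JSON message structure published to Pub/Sub for
# downstream application integration. Topic and field names are hypothetical.
import json
from datetime import datetime, timezone

from google.cloud import pubsub_v1


def build_order_message(order_id: str, amount: float) -> bytes:
    """Assemble a versioned JSON envelope so consumers can evolve safely."""
    message = {
        "schema_version": "1.0",          # lets consumers handle schema changes
        "event_type": "order.created",    # hypothetical event name
        "published_at": datetime.now(timezone.utc).isoformat(),
        "payload": {"order_id": order_id, "amount": amount},
    }
    return json.dumps(message).encode("utf-8")


def publish(project_id: str, topic_id: str, data: bytes) -> str:
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    future = publisher.publish(topic_path, data=data)  # returns a Future
    return future.result()  # blocks until Pub/Sub acks with a message id


if __name__ == "__main__":
    msg = build_order_message("ord-123", 42.50)
    print(publish("my-gcp-project", "orders-topic", msg))
```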
Are you interested in this position?
Apply by clicking on the “Apply Now” button below!