GCP Lead or Architect

🌍 Remote, USA πŸ’Ή Full-time πŸ• Posted Recently

Job Description

Client is seeking a Senior Big Data Engineer to join the team implementing and supporting cutting-edge analytical solutions on the Google Cloud ecosystem. You will find a great place to work if you are passionate about designing data ingestion jobs, learning new technologies, and proposing and adopting them.

What you'll do

• Extract, transform, and load data from multiple sources and in multiple formats using big data technologies.
• Develop, enhance, and support data ingestion jobs from various source systems, following existing design patterns and using technologies such as Apache Spark alongside GCP services such as Dataproc, Dataflow, BigQuery, and Airflow.
• Work across teams and with senior engineers to make data more accessible to others within the organization.
• Refactor data extraction pipelines into standardized approaches that are repeatable and reusable, with minimal supervision from senior engineers.
• Automate manual processes, optimize data delivery, and redesign infrastructure for greater scalability.
• Work closely with senior engineers to optimize query and data access techniques.
• Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure as code, etc.).
• Participate in a tight-knit engineering team employing agile software development practices.

What experience you need

• Bachelor's degree in Computer Science, Systems Engineering, or equivalent experience.
• 5+ years of work experience as a big data engineer.
• 3+ years of experience with technologies such as Apache Spark, Hive, HDFS, and (optionally) Beam.
• 3+ years of experience with SQL and with Scala or Python.
• 2+ years of experience with software build management tools such as Maven or Gradle.
• 2+ years of experience working with cloud platforms such as GCP, AWS, or Azure.
β€’ What could set you apart β€’ Data Engineering using GCP Technologies (BigQuery, DataProc, Dataflow, Composer, DataStream, etc) β€’ Experience writing data pipelines. β€’ Self-starter that identifies/responds to priority shifts with minimal supervision β€’ Source code control management systems (e.g. SVN/Git, Github) and build tools like Maven & Gradle. β€’ Agile environments (e.g. Scrum, XP) β€’ Relational databases (e.g. SQL Server, Oracle, MySQL) β€’ Atlassian tooling (e.g. JIRA, Confluence, and Github Apply tot his job
