Sr Data Platform Engineer with Cloud Expertise

Job Description

If you are a skilled data developer with 7+ years of expertise in building cloud-native data pipelines and would like to join the fight against cancer, we have a great opening that might catch your interest. One of our clients builds one of the largest libraries of clinical data in the world, aiming to beat cancer through precision medicine.
Great software doesn't happen on its own. It takes great people. That happens to be our forte. With nearly 20 years of matching top engineering talent with preeminent and innovative brands, we look for people who are inquisitive, resourceful, dedicated to their craft, and driven to help companies build great software. If this sounds like you, read on.
What's crucial here:
· You build and run data processing pipelines, implementing ETL to load an enterprise data warehouse
· You have expertise in building cloud-native data pipelines using AWS, Docker, and DevOps practices
· You are an expert in full-stack Python development and have built microservices using high-level frameworks such as Flask or Django
· You have experience writing specifications, including REST APIs, or moving trained machine learning models into production data pipelines
· Spark/PySpark and healthcare domain knowledge would be a big plus
Responsibilities:
· Architect and implement cloud-native data pipelines and infrastructure to enable analytics and machine learning on the firm's rich clinical, molecular, and imaging datasets
· Connect with communities of engineers, scientists, and operators using the data platform tools, and recommend adoption of existing technology or incremental improvements to eliminate adoption barriers
· Work with product team members and stakeholders to discuss project priorities, scope, and trade-offs to deliver the right solutions at the right time
· Present new technologies and ideas to the broader organization, communicating effectively and without bias while accepting critical feedback
· Guide application teams, providing best practices and constraints for implementing new technology so that it fits successfully into the broader platform objectives
· Embed effectively on engineering teams to ensure the success and practical learning of first movers, and feed that experience back into the technology radar and product roadmap
Tech Stack: AWS Redshift, PySpark, Dask, Airflow, and AWS Batch; moving toward GCP Composer, Dataproc, BigQuery, and GKE
If you feel you could make an impact on this mission, please apply or reach out via the recruiter contacts below.
#newopportunities #data #awscloud #itjobsearch

Work in United States
Skills
  • Data
  • Data Pipelines
  • AWS
  • Cloud
  • Spark
  • PySpark
  • Python
  • ETL
  • Data Migration

Recruiter

Nadiia Vykhrystiuk

Sr IT Researcher

Chicago, Illinois, United States


Recruiter Contacts

Phone
+38 (097) 413-4579