We are looking for a talented AI Engineer to join us in our commitment to improve the quality of care for patients in every community across the nation. In this role, you will join our Artificial Intelligence - Computer Vision group and work on mission-critical software projects that help us improve our medical image data processing pipeline.
As an AI Engineer, your responsibilities will include helping to develop, manage, and deploy models in our analytics environment and supporting the infrastructure that powers our work. You will help ensure that our analytics pipelines leverage the best technology and reliably ETL DICOM data, execute models, track data drift, facilitate active learning systems, and drive monitoring dashboards. You will also have the opportunity to support state-of-the-art computer vision model training.
You will be expected to:
- Troubleshoot, resolve, and support AWS systems across various application domains and platforms
- Recommend architectures for, and implement, high-performance distributed computing on the AWS platform
- Recommend and implement systems that are highly available, scalable, and self-healing on the AWS platform
- Define and deploy systems for metrics, logging, and monitoring on the AWS platform
- Build, maintain, enhance, and deploy model pipelines, contributing to the enhancement and extension of our data analytics
- Design, maintain, and manage tools that automate AI processes
- Help communicate the results of analysis and research projects to internal and external stakeholders
You should have:
- BS or MS degree in computer science or a related field
- 1+ years of work experience as an AI or software engineer (internships included) with an MS, or 3+ years of experience with a BS
- 1+ years of development, product, troubleshooting, and problem-resolution experience in the AWS ecosystem
- AWS DevOps experience strongly preferred
- Applied programming experience in Python, Java, and/or C++
- Experience with libraries and tools such as PyTorch, TensorFlow, and CUDA
- Experience with SQL databases
- Experience with Unix-based environments and shell scripting
- Experience setting up deep learning pipelines
- Experience with containers and orchestration tools such as Kubernetes
- Experience with distributed systems preferred
- Experience with deep learning inference engines is a plus (Intel OpenVINO, NVIDIA TensorRT, TritonRT, or similar frameworks)
- Ability to work well both as an individual contributor and within a multidisciplinary team
- Strong communication and interpersonal skills, with a can-do attitude and the ability to thrive in a fast-paced team environment
Work location: United States