Senior Machine Learning Engineer - Chameleon Technologies
Job #2511: Chameleon Technologies is searching for a senior AWS Machine Learning Engineer. This is a greenfield opportunity to help crystallize a vision for applying machine learning. This person can be located anywhere in the United States.
You will be joining a world-class data engineering team that has completed the migration of all legacy databases to the cloud, allowing you to deploy and scale MLE solutions without limits.
The mission is to connect people and work. This is a publicly traded organization specializing in blue-collar staffing at scale. Now more than ever, people are looking for a flexible and fulfilling work-life experience, and the team wakes up every morning to make this a reality. Optimizing user experiences is at the core of everything they do.
As an AWS Machine Learning Engineer, you will have wide-ranging opportunities to improve the company's performance by combining business strategy, problem-solving, critical thinking, data analysis, data science, and state-of-the-art machine learning engineering. Your innovations will drive the evolution of the organization, as well as the global staffing industry.
If you believe creating business value and helping companies evolve is more important than simply building ML models, then this data science team is perfect for you.
Responsibilities:
- Work collaboratively with data scientists, machine learning engineers, data engineers, product teams, and business stakeholders to develop AI/ML/DS solutions to important business problems.
- Execute data analysis with SQL to uncover business truths and feature spaces in preparation for DS/ML workflows within AWS.
- Convert Jupyter notebooks into scalable training & inference pipelines using AWS.
- Engage in a DevOps/MLOps culture with a bias to automate everything.
- Participate in regular peer code reviews with other team members as well as full-team development showcases to the rest of the organization (other teams are excited to see what you are building!).
- Learn continuously through online courses, AWS certifications, YouTube playlists, and on-the-job exploration, covering anything from statistics to deep reinforcement learning.
Required Skills/Experience:
- Experience developing end-to-end data science workflows within Jupyter notebooks and converting them into scalable training and inference pipelines using AWS
- Experience with Amazon SageMaker, Lambda, and Glue
- Strong PySpark, SQL, and feature engineering skills
- Experience applying software engineering best practices, such as modularization, performance optimization, unit testing, proper documentation, logging, code commits/reviews, and clean code best practices
- Obsession with creating tangible business value through DS/ML solutions, such as increasing revenue and profitability, enhancing user experiences, and optimizing business KPIs
Other Desirable Skills/Experience (Bonus):
- Experience with AWS EMR, Kinesis, Athena, and other analytics services – especially in the context of near real-time streaming inference pipelines
- Ability to apply deep reinforcement learning to solve business problems
- Knowledge of machine learning container orchestration, such as TFX/Kubeflow or ECS/EKS with Fargate
- AWS Machine Learning Specialty Certification and/or AWS Data Analytics Specialty Certification
- Degree in a quantitative field, such as computer science, engineering, physics, statistics, or applied mathematics