Job #2410: Chameleon Technologies is searching for a Data Scientist to join a team responsible for delivering the core infrastructure and foundational technologies for its more than 200 online businesses. As the cloud business continues to grow, the ability to deploy new offerings and hardware infrastructure on time, in high volume, with high quality, and at the lowest cost is of paramount importance. To achieve this goal, our client is looking for a dynamic, highly motivated Data Scientist/Program Manager to develop intelligent quality tracking and control models for new product integration and deployment. We’re looking for a passionate, high-energy individual to help build the quality management programs that deliver best-in-class cloud hardware platforms.
Responsibilities of this role include executing the quality management excellence strategy and supporting quality teams with industry-standard process and tool improvements. The role also requires strong project/program management experience, an understanding of the product development and quality engineering disciplines, and the ability to deliver data analysis dashboards and analytic tools. Strong SQL, Python, and Power BI skills are required to create data pipelines and handle big data across multiple data sources. This is a contract role in Redmond, WA.
- Work with data science engineers to scale our data pipelines, drive the collection of new data and the refinement of existing data sources, and improve our data model and instrumentation as the cloud platform product evolves.
- Build relationships across quality engineering teams, program managers, and data scientists to generalize data needs and guide the usage models for quality tracking and management.
- Help build and scale end-to-end data service models as well as an analytical experimentation system.
- Develop state-of-the-art data visualization tooling and automation programs for data processing across multiple data sources.
- 3+ years of experience in a Data Engineering and/or Data Science role, with a focus on building data pipelines or conducting data-intensive analysis.
- Expertise with Python, SQL, and Power BI.
- Experience writing and debugging data pipelines using a distributed data framework.
- An inquisitive nature and a drive to dig into data inconsistencies to pinpoint issues.
- The ability to derive usage model requirements and architect shared datasets.