Big Data Engineer - Chameleon Technologies
Job #2498: Chameleon Technologies is searching for a Big Data Engineer for a long-term contract role in Portland, OR. The top skills we're looking for: Big Data / Hadoop, Spark / PySpark, Hive, AWS EMR / Athena, data pipelines, and Java with object-oriented (OO) programming skills.
REQUIREMENTS:
- Working experience collaborating and communicating with business stakeholders and architects
- Industry experience developing big data/ETL data warehouse solutions and building cloud-native data pipelines (see the PySpark sketch after this list)
- Experience in Python, PySpark, Scala, Java, and SQL
- Strong object-oriented and functional programming experience in Python
- Experience working with REST- and SOAP-based APIs to extract data for data pipelines
- Extensive experience working with Hadoop and related processing frameworks such as Spark, Hive, and Sqoop
- Experience working in a public cloud environment; AWS experience is mandatory
- Ability to implement solutions with AWS Virtual Private Cloud, EC2, AWS Data Pipeline, AWS CloudFormation, Auto Scaling, AWS Simple Storage Service (S3), EMR, Athena, and other AWS products, as well as Hive
- Experience working with real-time data streams and the Kafka platform
- Working knowledge of workflow orchestration tools such as Apache Airflow, including designing and deploying DAGs (see the Airflow sketch after this list)
- Hands-on experience with performance and scalability tuning
- Professional experience in Agile/Scrum application development using JIRA
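To give candidates a concrete sense of the pipeline work described above, here is a minimal sketch of a cloud-native PySpark batch job of the kind this role builds. It reads raw JSON events from S3, aggregates them, and writes partitioned Parquet that Hive/Athena can query in place. The bucket names, paths, column names, and app name are hypothetical placeholders, not details of Chameleon's or the client's environment.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical S3 locations -- placeholders, not a real environment.
RAW_PATH = "s3://example-raw-bucket/events/"
CURATED_PATH = "s3://example-curated-bucket/events_daily/"

def main():
    # On AWS EMR, the SparkSession picks up cluster configuration automatically.
    spark = (
        SparkSession.builder
        .appName("events-daily-rollup")
        .enableHiveSupport()  # allows tables to be registered for Hive/Athena
        .getOrCreate()
    )

    # Extract: read raw JSON events landed in S3.
    raw = spark.read.json(RAW_PATH)

    # Transform: basic cleansing plus a daily aggregate per event type.
    daily = (
        raw.filter(F.col("event_type").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date", "event_type")
           .agg(F.count("*").alias("event_count"))
    )

    # Load: write date-partitioned Parquet that Athena/Hive can query in place.
    (
        daily.write
             .mode("overwrite")
             .partitionBy("event_date")
             .parquet(CURATED_PATH)
    )

    spark.stop()

if __name__ == "__main__":
    main()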
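Likewise, for the Airflow requirement, here is a minimal sketch of how the job above might be scheduled as a DAG. The DAG id, owner, schedule, retry policy, and artifact path are illustrative assumptions only; a real deployment might use the EMR operators from the Amazon provider package instead of spark-submit.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Illustrative defaults -- a real retry/alerting policy is project-specific.
default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="events_daily_rollup",   # hypothetical DAG id
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # Submit the PySpark job sketched above via spark-submit; the script
    # location is a placeholder.
    run_rollup = BashOperator(
        task_id="run_events_rollup",
        bash_command="spark-submit s3://example-artifacts/jobs/events_daily_rollup.py",
    )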
NICE TO HAVE:
- Teradata and Snowflake experience
- Professional experience with source control, merging strategies, and coding standards, specifically Bitbucket/Git and deployment through Jenkins pipelines
- Demonstrated experience developing in a continuous integration/continuous delivery (CI/CD) environment using tools such as Jenkins and CircleCI
- Demonstrated ability to maintain the build and deployment process using build integration tools
- Experience designing instrumentation into code and integrating with software and log analysis tools such as log4Python, New Relic, SignalFx, and/or Splunk