
Technical Lead – Data Operations

  • Location: Sunnyvale, United States

  • Sector: Data

  • Job type: Contract

  • Salary: Flexible

  • Contact: Matt Hezlep

  • Job ref: 39748

  • Published: 4 months ago

  • Expiry date: 2019-02-14

Technical Lead – Data Operations
Location: San Francisco Bay Area

We’re looking for a senior, hands-on engineering lead with experience in modern data management to support both basic business intelligence and advanced analytics efforts. The position involves implementing and extending our new data lake and data processing capabilities to support client reporting initiatives for some of the world’s most visible advertising brands.

The ideal candidate will build modern data management and reporting solutions, lead a team of data professionals to deliver high-quality, dependable reporting, and support our analytics teams with high-performance, scalable data infrastructure. The role reports directly to an executive sponsor and will involve multiple projects across the entire organization.

Required Experience

Seven years or more in hands-on data and ETL development

Two years or more leading database/data management teams

Two years or more data warehousing experience

Five years or more working in an agile SDLC

Technical skills

Required

Database design with massively parallel processing appliances: Redshift, Vertica, Greenplum, Netezza, or similar

Expert structured query language (SQL) knowledge

RDBMS database design in a data warehousing context (SQL Server, Oracle, Postgres) and query optimization

Java and Scala skills for ETL and data processing

AWS data-centric tooling: Redshift, Redshift Spectrum, Athena, DynamoDB, Lambda, Glue, and S3

Design of CI pipelines and Git DVCS

Desired

Experience with “big data” processing tools on AWS: EMR, Spark, Hive

Python language experience, including Python ETL frameworks

Experience with stream processing tools: Kafka, Kinesis, Flink, Storm, NiFi, Kafka Streams, or Apex

OLAP cube design: SQL Server Analysis Services, Mondrian

ETL frameworks: SQL Server Integration Services or similar

Soft skills

Strong communication and problem-solving skills

Detail oriented and passionate about data quality

Proven track record of delivering on non-functional/performance requirements