Create and maintain an optimal data pipeline architecture
Assemble large, complex data sets that meet functional / non-functional business requirements
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
Work with data and analytics experts to strive for greater functionality in our data systems
Minimum 3 years of experience in a Data Engineer role
A graduate degree in Computer Science/Statistics/Informatics/Information Systems or any other quantitative field
Strong analytical skills related to working with unstructured datasets
Experience with big data tools like Hadoop, Spark and Kafka
Experience with AWS cloud services like EC2, EMR, RDS and Redshift
Experience with stream processing systems like Storm and Spark Streaming
Experience with object-oriented/functional scripting languages: Java, Scala, Python, etc.