  • Posted: Dec 12, 2023
    Deadline: Not specified
  • Duplo is building the platform to power the next generation of financial services. Our mission is to help companies expand financial access for all. Our simple and powerful banking-as-a-service API helps companies quickly launch financial products.



    Data Engineer

    • We are looking to hire a Data Engineer who will be responsible for building and optimizing our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.
    • The Data Engineer will support all data initiatives and ensure that an optimal data delivery architecture is maintained consistently across ongoing projects.

    Responsibilities:

    • Analyzing and organizing raw data 
    • Building and maintaining optimal data pipeline architecture
    • Evaluating business needs and objectives
    • Combining raw information from different sources
    • Building analytics tools that use the data pipeline to provide actionable insights into customers, operational efficiency, and other key business performance metrics.
    • Interpreting trends, patterns and exploring ways to enhance data quality and reliability
    • Identifying opportunities for data acquisition

    Requirements and Skills:

    • Advanced working knowledge of SQL, including query authoring, and experience working with relational and NoSQL databases.
    • We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
    • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
    • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
    • Strong analytic skills related to working with unstructured datasets.
    • Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
    • A successful history of manipulating, processing and extracting value from large disconnected data sets.
    • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
    • Strong project management and organizational skills.
    • Experience supporting and working with cross-functional teams in a dynamic environment.

    The ideal candidate should also have experience using the following software/tools:

    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and MongoDB.
    • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
    • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.

    Method of Application

    Interested and qualified? Go to Duplo on duplo.bamboohr.com to apply
