
  • Posted: Apr 1, 2019
    Deadline: Not specified

    Kobo360 is a global logistics platform from the heart of Africa, leveraging technology and efficient processes to serve its citizens and the rest of the world. It puts transportation in the hands of every individual and facilitates the movement of your goods and packages by guaranteeing safety, speed and affordability.



    Data Engineer

    Job Description

    • We are looking for a Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data.
    • The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
    • You will also be responsible for integrating them with the architecture used across the company.

    Responsibilities

    • Selecting and integrating any data tools and frameworks required to provide the requested capabilities
    • Implementing the ETL process (a minimal sketch follows this list)
    • Monitoring performance and advising on any necessary infrastructure changes
    • Defining data retention policies
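
    For illustration, the ETL responsibility above could look like the minimal sketch below: a plain-Python pipeline that extracts trip records from a hypothetical trips.csv file, cleans them, and loads them into a local SQLite table. The file name, column names, and schema are assumptions made for the example, not details from this posting.

```python
# Minimal ETL sketch (illustrative only): extract from a hypothetical
# trips.csv, transform the rows, load into a local SQLite warehouse table.
import csv
import sqlite3


def extract(path):
    """Extract: read raw trip records from a CSV file."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)


def transform(rows):
    """Transform: normalise text fields and drop records with no distance."""
    for row in rows:
        distance = float(row.get("distance_km") or 0)
        if distance <= 0:
            continue
        yield (
            row["trip_id"],
            row["origin"].strip().title(),
            row["destination"].strip().title(),
            distance,
        )


def load(records, db_path="warehouse.db"):
    """Load: upsert cleaned records into a SQLite table."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS trips (
               trip_id TEXT PRIMARY KEY,
               origin TEXT,
               destination TEXT,
               distance_km REAL
           )"""
    )
    con.executemany("INSERT OR REPLACE INTO trips VALUES (?, ?, ?, ?)", records)
    con.commit()
    con.close()


if __name__ == "__main__":
    load(transform(extract("trips.csv")))
```

    In practice the same extract/transform/load split would more likely be expressed with the tools named under the skills below, for example Spark for the transform step and Flume or Kafka feeding the extract step, but the shape of the pipeline stays the same.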

    Skills and Qualifications

    • Proficient understanding of distributed computing principles
    • Management of a Hadoop cluster, with all included services
    • Ability to solve any ongoing issues with operating the cluster
    • Proficiency with Hadoop v2, MapReduce, HDFS
    • Experience with building stream-processing systems, using solutions such as Storm or Spark Streaming (see the sketch after this list)
    • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
    • Experience with Spark
    • Experience with integration of data from multiple data sources
    • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
    • Knowledge of various ETL techniques and frameworks, such as Flume
    • Experience with various messaging systems, such as Kafka or RabbitMQ
    • Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
    • Good understanding of Lambda Architecture, along with its advantages and drawbacks
    • Experience with Cloudera/MapR/Hortonworks
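
    To illustrate the stream-processing and messaging requirements above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and maintains windowed counts. The broker address, the "shipments" topic name, and the windowed count are assumptions made for the example, not details from this posting, and the job needs the spark-sql-kafka connector package available at submit time.

```python
# Minimal Spark Structured Streaming sketch (illustrative only).
# Run with the Kafka connector on the classpath, e.g.:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 stream_sketch.py
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Read a stream of events from a hypothetical "shipments" Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "shipments")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string and
# count events per 5-minute window as a stand-in for real business logic.
counts = (
    events
    .select(col("value").cast("string").alias("payload"), col("timestamp"))
    .groupBy(window(col("timestamp"), "5 minutes"))
    .count()
)

# Write the running counts to the console; a real job would write to HDFS,
# a warehouse table, or another Kafka topic instead.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)

query.awaitTermination()
```

    The same counting logic could equally be written against Storm or the older Spark Streaming DStream API; Structured Streaming is used here only because it keeps the example short.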

    Method of Application

    Interested and qualified candidates should send their CV to: careers@kobo360.com

    Note: Even though no degree is listed, we appreciate that you may have one; in this business, however, what matters is what you can do. You will certainly like to be part of this, so apply.

