Tek Experts provides the services of a uniquely passionate and expert workforce that takes intense pride in helping companies manage their business operations. We care about the work we do, the companies we partner with and the customers they serve.
By delivering unrivaled levels of business and IT support, we make sure nothing gets in the way of our clients.
We’re seeking a diligent engineer to connect and model complex distributed data sets to build repositories like data warehouses and data lakes.
You’ll leverage your expertise in machine learning, data mining, and information retrieval to design, prototype, and build next-generation engines and services.
This will be a challenging but rewarding role that requires effective communication and collaboration, keen attention to detail, and the ability to learn and adapt to emerging technologies.
Responsibilities
Define the data lifecycle, including data models and data sources for analytics platforms.
Create and maintain an optimal data pipeline and architecture.
Build analytics tools and conduct advanced statistical analysis to provide actionable insights, identify trends, and measure performance.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, and more.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
Perform aggregations on data across various warehousing models.
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep data secure and work closely with database teams on data requirements, cleanliness, accuracy, and related topics.
Qualifications
Bachelor's degree (or equivalent) in Computer Science or a related field.
3-5 years of direct experience in a statistical and/or data science role.
Experience working with large data sets, simulation/optimization, and distributed computing tools.
Hands-on knowledge of data extraction and transformation tools, from traditional ETL tools (Informatica, Alteryx) to the latest big data tools.
Experience working with large datasets, relational databases (SQL), and distributed systems (Hadoop, Hive).
Extensive experience with R and/or SAS.
Knowledge of other statistical software, such as SPSS and RapidMiner, is a plus.
Knowledge of data architecture and experience defining data retention policies, monitoring performance, and advising on any necessary infrastructure changes.
Comfort developing dashboards (Tableau, Power BI, Qlik, etc.) and data analytics models (R, Python, Spark).
Strong analytical, communication, and problem-solving skills.
Professional fluency in English, both written and spoken, is vital.