Welcome To Our Capstone Project Page!
It's Nice To Meet You

Project Description

The aim of this project is to empower the client to automate certain warehouse processing activities, reduce errors, and improve operational efficiency through the use of image recognition technology and robust application design.

The Problem

In the domain of MRO (Maintenance, Repair, and Operations) items, warehouse staff members often lack the technical knowledge to correctly identify certain stock items. This leads to items being placed in the incorrect containers, which then leads to picking errors and the requirement of further housekeeping. Not only is this time-consuming, but it also leads to millions of rands being lost per annum.

The Solution

The solution comes in the form of a user-friendly, cross-platform application that lets the user capture or select an image of an item and, through the use of Computer Vision and Artificial Intelligence, have the application classify the item in the image. Both a desktop web app and a native mobile app are used to provide the solution, which also includes basic stock counting and weight analysis functionality. The solution is abstracted further through the Dashboard API, which allows developers and interested clients to create their own custom image classification models for use in their products, tailored to their specific needs. No specialist AI knowledge is required to produce these custom models.


User Story

1. Bob is part of the warehouse staff. He cannot identify every stock item he needs to.

2. Bob wants to identify what a certain item is so that he can handle it accordingly.

3. Bob uses the Ninshiki app on his phone to identify the class or type of the item.

4. Ninshiki uses Artificial Intelligence and Computer Vision to predict the class of the item.

5. Bob gets the item's class from the app and can now correctly handle the item. Problem solved!

The Process

An Agile development process (the Scrum methodology) was followed, with the implementation of the system broken into sprints. Each sprint lasted two to three weeks and included a sprint planning meeting beforehand, a sprint review meeting afterwards, and regular weekly meetings in between. The process is shown in the flowchart below.

[Flowchart: the Agile process followed for each sprint]

The Project

The project solution was realized through the design and development of several subsystems to meet the requirements of warehouse staff members, customers, and business clients hoping to create their own image classifier. The main features of the system include the following:

Stock Management

Ninshiki allows users to work out the quantity of items in a container based on input weights
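The arithmetic behind weight-based counting is straightforward: subtract the container's empty (tare) weight, then divide by the weight of a single item. The sketch below illustrates that idea only; the function name and the rounding policy are assumptions, not Ninshiki's actual code.

```python
def estimate_quantity(total_weight: float, tare_weight: float, unit_weight: float) -> int:
    """Estimate the number of items in a container from input weights.

    Illustrative only: Ninshiki's exact formula is not published here.
    """
    if unit_weight <= 0:
        raise ValueError("unit weight must be positive")
    net_weight = total_weight - tare_weight  # weight of the items alone
    if net_weight < 0:
        raise ValueError("tare weight cannot exceed total weight")
    return round(net_weight / unit_weight)  # nearest whole item
```

For example, a container weighing 12.0 kg with a 2.0 kg tare, holding items of 0.5 kg each, gives an estimate of 20 items.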

Image Classification

Ninshiki uses the power of artificial intelligence to predict the class of an item in an image
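Concretely, a trained model produces one confidence score per class, and the predicted class is simply the highest-scoring one. The class names and scores below are hypothetical; only the top-1 selection step is shown, not the model itself.

```python
# Hypothetical MRO stock classes -- Ninshiki's real label set is not shown here.
CLASS_NAMES = ["bolt", "washer", "bearing", "gasket"]

def predict_class(scores: list[float]) -> tuple[str, float]:
    """Return the label and confidence of the top-scoring class.

    `scores` stands in for the output of a trained image model
    (e.g. Keras/TensorFlow, as used in this project).
    """
    if len(scores) != len(CLASS_NAMES):
        raise ValueError("expected one score per class")
    best = max(range(len(scores)), key=scores.__getitem__)
    return CLASS_NAMES[best], scores[best]
```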

Dashboard API

The dashboard assists in the creation of custom image classifiers without AI expertise

Implementation Technologies

The following technologies are used to implement the Ninshiki system

Collaboration Technologies

The following technologies are used to improve collaboration, productivity, and workflow

Documentation

This section contains links to all the documentation related to the Ninshiki project

Coding Standards

Requirements and Design

Testing Policy

User Manual

Mobile App Documentation

Web App Documentation

Dashboard App Documentation

Dashboard Backend Documentation

Testing

Various levels of testing were implemented in the system. The purpose of each level is explained below, along with sample test reports or code.

Unit Testing

Small tests written to check the internal structure and flow of a function in an isolated manner
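As an illustration of this level, a unit test exercises one function in isolation, covering both the normal path and an error path. The helper below is a made-up example, not taken from the Ninshiki codebase.

```python
def net_weight(total: float, tare: float) -> float:
    """Hypothetical helper: weight of the items once the container is subtracted."""
    if tare > total:
        raise ValueError("tare cannot exceed total weight")
    return total - tare

def test_net_weight_subtracts_tare():
    assert net_weight(10.0, 2.0) == 8.0

def test_net_weight_rejects_invalid_tare():
    try:
        net_weight(1.0, 2.0)
    except ValueError:
        pass  # expected: invalid tare must raise
    else:
        raise AssertionError("expected ValueError")
```

A test runner such as pytest would discover and run these `test_` functions automatically.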

E2E Testing

End-to-end tests were written to check that components interacted as intended


Integration Testing

A set of tests performed in a remote environment to ensure that the system is valid before deployment

Non-Functional Testing

A wide variety of tests that ensure that the various quality requirements of the system are met

Project History

This section provides a chronological overview of the project in terms of the demonstrations that took place during its development.

  • 16 March

    Demo 1

    Initial web interface for uploading an image. Used Clarifai's general model to classify the uploaded image.

  • 13 April

    Demo 2

    Fleshed out web app features and started development of mobile app.

  • 11 May

    Demo 3

    Developed a program to build image prediction models using Keras and TensorFlow. Completed initial version of mobile app using Android Studio.

  • 20 July

    Demo 4

    Created a custom image prediction model for the specific domain. The model was set up as a locally deployed backend service using a NodeJS server. Set up automated testing and deployment for the web app.

  • 21 September

    Demo 5

    Created a new, more accurate model that could be used by TensorFlowJS for on-device prediction. Developed a brand new mobile app using Ionic and deployed it to the Google Play Store. Deployed the web app and backend services to Firebase.

  • Here We Are

    17 October

    Final Evaluation

    Implemented backend and web interface for Dashboard API. Made changes to mobile app to resolve common functionality and aesthetic issues identified from analysis of user feedback.

  • The Future


Our Amazing Team

Meet the people that brought this project to life

Jonathan Lew

AI and Backend Developer

yonwebs@gmail.com

Mark Coetzer

Team Lead, Web and API Developer

mark.coetzerjnr@gmail.com

Orisha Orrie

Mobile and Frontend Developer

orisha.orrie@gmail.com

Tobias Bester

Web and AI Developer

tbester23@gmail.com

Mukundi Matodzi

Web and Mobile Developer

mukundimatodzi@gmail.com

Contact Us