In the domain of MRO (Maintenance, Repair, and Operations) items, warehouse staff often lack the technical knowledge to correctly identify certain stock items. This results in items being placed in the wrong containers, which in turn causes picking errors and additional housekeeping. Not only is this time-consuming, it also costs millions of rands per annum.
The solution comes in the form of a user-friendly, cross-platform application that lets the user capture or select an image of an item and, through the use of Computer Vision and Artificial Intelligence, have the application classify the item in the image. Both a desktop web app and a native mobile app are used to provide the solution, which also includes basic stock counting and weight analysis functionality. The solution is abstracted further through the Dashboard API, which allows developers and interested clients to create their own custom image classification models for use in their products, in order to meet their specific needs. No specialist AI knowledge is required to produce these custom models.
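To illustrate the kind of workflow the Dashboard API enables, the sketch below shows a hypothetical client registering a custom model, uploading labelled images, and starting training. The host, endpoint paths, and field names are illustrative assumptions only, not the actual API.

```python
# Hypothetical sketch of a Dashboard API client; the host, endpoint
# paths, and field names are illustrative assumptions only.
import requests

BASE_URL = "https://api.ninshiki.example/v1"  # hypothetical host
API_KEY = "your-api-key"

def create_custom_model(name: str, image_paths: dict[str, list[str]]) -> str:
    """Create a custom classifier from labelled images.

    image_paths maps a class label (e.g. "hex-bolt") to local image files.
    """
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Register a new model (hypothetical endpoint).
    resp = requests.post(f"{BASE_URL}/models", json={"name": name}, headers=headers)
    resp.raise_for_status()
    model_id = resp.json()["id"]

    # 2. Upload labelled training images (hypothetical endpoint).
    for label, paths in image_paths.items():
        for path in paths:
            with open(path, "rb") as f:
                requests.post(
                    f"{BASE_URL}/models/{model_id}/images",
                    data={"label": label},
                    files={"image": f},
                    headers=headers,
                ).raise_for_status()

    # 3. Kick off training (hypothetical endpoint).
    requests.post(f"{BASE_URL}/models/{model_id}/train", headers=headers).raise_for_status()
    return model_id
```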
Bob is part of the warehouse staff. He cannot identify every stock item he needs to handle.
Bob wants to identify what a certain item is so that he can handle it accordingly.
Bob uses the Ninshiki app on his phone to identify the class or type of the item.
Ninshiki uses Artificial Intelligence and Computer Vision to predict the class of the item.
Bob gets the item's class from the app, and can now correctly handle the item. Problem solved!
An Agile development process (Scrum methodology) was followed, in which the implementation of the system was broken into sprints. Each sprint lasted two or three weeks and included a sprint planning meeting beforehand and a sprint review meeting afterwards, as well as regular weekly meetings. The process is shown in the flowchart below.
Ninshiki allows users to work out the quantity of items in a container based on input weights (see the sketch after this list)
Ninshiki uses the power of artificial intelligence to predict the class of an item in an image
The dashboard assists in the creation of custom image classifiers without AI expertise
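The stock-counting feature reduces to simple arithmetic: the count is the container's net weight divided by the weight of a single item. Below is a minimal sketch of that calculation; the function and parameter names are illustrative, not taken from the Ninshiki codebase.

```python
def count_items(gross_weight: float, container_weight: float, unit_weight: float) -> int:
    """Estimate the number of items in a container from weights.

    gross_weight:     weight of the container including items
    container_weight: weight of the empty container
    unit_weight:      weight of a single item (all in the same unit, e.g. grams)
    """
    if unit_weight <= 0:
        raise ValueError("unit weight must be positive")
    net_weight = gross_weight - container_weight
    # Round to the nearest whole item to absorb small scale errors.
    return max(0, round(net_weight / unit_weight))

# Example: a bin weighing 5,250 g, an empty bin of 250 g, and bolts of
# 25 g each -> (5250 - 250) / 25 = 200 bolts.
print(count_items(5250, 250, 25))  # 200
```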
The following technologies are used to implement the Ninshiki system
Small, isolated tests that check the internal structure and flow of a single function (a sample follows this list)
End-to-end tests that check that components interact as intended
A set of tests performed in a remote environment to ensure that the system works correctly before deployment
A wide variety of tests that verify the system's quality requirements are met
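As a sample of the unit-testing style described above, here is a small, isolated test for the weight-based counting sketch shown earlier. The `stock_counting` module name is hypothetical, and `unittest` is used for illustration; the project's actual test frameworks may differ.

```python
import unittest

from stock_counting import count_items  # hypothetical module name

class CountItemsTest(unittest.TestCase):
    def test_exact_division(self):
        # 5,000 g net at 25 g per item -> 200 items
        self.assertEqual(count_items(5250, 250, 25), 200)

    def test_rounds_to_nearest_item(self):
        # A small scale error (5,010 g net, 25 g items) still reads as 200
        self.assertEqual(count_items(5260, 250, 25), 200)

    def test_rejects_non_positive_unit_weight(self):
        with self.assertRaises(ValueError):
            count_items(5250, 250, 0)

if __name__ == "__main__":
    unittest.main()
```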
Initial web interface for uploading an image. Used Clarifai's general model to classify the uploaded image.
Fleshed out the web app's features and started development of the mobile app.
Developed a program to build image prediction models using Keras and TensorFlow (a minimal sketch of such a model follows this list). Completed the initial version of the mobile app using Android Studio.
Created a custom image prediction model for the specific domain. The model was set up as a locally deployed backend service using a NodeJS server. Set up automated testing and deployment for the web app.
Created a new, more accurate model that could be used by TensorFlowJS for on-device prediction. Developed a brand-new mobile app using Ionic and deployed it to the Google Play Store. Deployed the web app and backend services to Firebase.
Implemented the backend and web interface for the Dashboard API. Made changes to the mobile app to resolve common functionality and aesthetic issues identified from analysis of user feedback.
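As a rough illustration of the Keras/TensorFlow model-building program mentioned in the milestones, the sketch below trains a classifier on top of a frozen MobileNetV2 base. The directory layout, class count, and hyperparameters are assumptions for illustration, not the project's actual configuration.

```python
# Minimal transfer-learning sketch with Keras; the directory layout,
# class count, and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 10          # e.g. one class per MRO item type (assumed)
IMAGE_SIZE = (224, 224)   # MobileNetV2's expected input size

# Expects images arranged as data/train/<class_name>/*.jpg (assumed layout).
train_ds = keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMAGE_SIZE, batch_size=32
)

# Frozen ImageNet-pretrained base; only the new classification head is trained.
base = keras.applications.MobileNetV2(
    input_shape=IMAGE_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 preprocessing
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=5)

# A saved model like this can be converted with the tensorflowjs_converter
# tool for on-device prediction via TensorFlowJS, as done for the mobile app.
model.save("ninshiki_model.keras")
```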