Creating anonymous image classification and description vectors for privately owned assets and providing anonymised search for object vectors
Normalisation of natural language image descriptions to optimise search indexes by applying image classifiers, word2vec algorithms and predictive recommendation.
Object descriptions provided in natural language yield large amounts of noisy data, because object properties are subject to individual interpretation, language preference and syntax.
When search indexes are created automatically, indexing this kind of data creates overhead and leads to unsatisfactory results when users search for specific objects by entering search criteria.
Creating normalised semantic descriptors for objects improves indexing and search accuracy, and enables automated processing of the resulting vectors.
To achieve normalised semantic content, one could normalise user input with word2vec algorithms, so that clusters of words describing similar properties are stored as unified vectors.
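A minimal sketch of this normalisation step, assuming word vectors are already available (the toy embeddings, canonical tags and threshold below are illustrative placeholders; in practice the vectors would come from a trained word2vec model):

```python
import math

# Hypothetical toy embeddings standing in for vectors from a trained
# word2vec model; real vectors would have hundreds of dimensions.
EMBEDDINGS = {
    "crimson":  [0.90, 0.10, 0.00],
    "scarlet":  [0.88, 0.12, 0.02],
    "red":      [0.92, 0.08, 0.01],
    "round":    [0.10, 0.90, 0.05],
    "circular": [0.12, 0.88, 0.04],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def normalise(word, canonical=("red", "round"), threshold=0.95):
    """Map a descriptive word to the closest canonical tag, if similar enough.

    Words whose embedding is near a canonical tag collapse onto that tag,
    so 'crimson' and 'scarlet' are both stored as the unified tag 'red'.
    """
    if word not in EMBEDDINGS:
        return word  # unknown words pass through unchanged
    best = max(canonical, key=lambda c: cosine(EMBEDDINGS[word], EMBEDDINGS[c]))
    return best if cosine(EMBEDDINGS[word], EMBEDDINGS[best]) >= threshold else word
```

With such a mapping, `normalise("crimson")` and `normalise("scarlet")` both resolve to the unified tag `"red"`, which is what gets indexed.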
Another approach is to remove natural language input altogether by using an image classifier to generate the description, again yielding clusters and tags that describe objects in a unified, semantic fashion.
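The classifier-only path can be sketched as a nearest-prototype assignment over image feature vectors; the prototypes and features below are illustrative assumptions, standing in for the output of a real trained classifier or embedding network:

```python
# Hypothetical class prototypes in a small feature space; a real system
# would derive these from a trained image classifier (e.g. CNN embeddings).
PROTOTYPES = {
    "chair": [1.0, 0.0, 0.2],
    "table": [0.2, 1.0, 0.1],
    "lamp":  [0.1, 0.2, 1.0],
}

def classify(features):
    """Assign the tag whose prototype is nearest (squared Euclidean distance).

    The returned tag replaces free-text description entirely, so every
    object of the same kind receives the same unified, semantic label.
    """
    def dist(tag):
        return sum((f - p) ** 2 for f, p in zip(features, PROTOTYPES[tag]))
    return min(PROTOTYPES, key=dist)
```

For example, a feature vector close to the chair prototype, such as `[0.9, 0.1, 0.3]`, is tagged `"chair"` without any user-supplied text.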
Both approaches in combination make it possible to predict and recommend input text based on the classified (and similar) images, offering reasonable tags that simplify input, lower error rates and markedly improve the user experience:
1. Input image set
2. One-shot learning image classifier
3. Natural language input by user
4. Normalisation and clustering
5. Confirmation of normalised text by user
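The steps above can be sketched end to end as follows; the synonym table and co-occurrence suggestions are hypothetical placeholders for real word2vec clusters and classifier statistics:

```python
# Illustrative combination of classification, normalisation and
# recommendation. All data here is an assumed stand-in: SYNONYMS would
# come from word2vec clustering, TAG_SUGGESTIONS from tags observed on
# previously classified, similar images.
SYNONYMS = {"crimson": "red", "scarlet": "red", "timber": "wooden"}

TAG_SUGGESTIONS = {
    "chair": ["wooden", "red", "antique"],
    "lamp":  ["brass", "art-deco"],
}

def process(image_tag, user_words):
    """Normalise the user's words (steps 3-4) and recommend further tags.

    Returns (normalised, suggested); the user then confirms the
    normalised tags (step 5) before they are indexed.
    """
    normalised = [SYNONYMS.get(w, w) for w in user_words]
    suggested = [t for t in TAG_SUGGESTIONS.get(image_tag, [])
                 if t not in normalised]
    return normalised, suggested
```

Here `process("chair", ["crimson", "big"])` would normalise `"crimson"` to `"red"` and additionally suggest tags such as `"wooden"` and `"antique"` drawn from similar classified images, leaving only the confirmation step to the user.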