Kannada Recognition using Machine Learning

Uday Agarwal
3 min readOct 21, 2019

Deep Learning and Machine Learning are changing the way we see the world. They are helping new companies build products that make people’s lives easier. A machine learning model is like a newborn baby: at first, neither the baby nor the model knows anything, but as we give them data they start to learn and to recognise patterns. The main difference is that a newborn baby needs far less time and far less data to learn than a model does.

In this post, I will tell you how I trained my model on the Kannada dataset. To train a model you must know what your dataset is about, and by this I don’t just mean its name; you should know where the dataset comes from, and whether it is biased (for example, covering only a specific set of words or letters) or not.

Dataset : https://www.kaggle.com/c/Kannada-MNIST/data

Importing Relevant Libraries

The first step in solving any machine learning problem is to import the relevant libraries. For this problem, the imports below are enough to load the data, build the model, and train it.
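A minimal set of imports, assuming TensorFlow/Keras for the model, pandas for reading the competition CSV files, and scikit-learn for splitting off a validation set:

```python
# Core libraries assumed for this problem.
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split
```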

Data Preparation

Whenever you prepare a dataset yourself, or take one from the internet, think of this analogy: when you go out to a party after a tiring day at the office, school, or college, you first take a bath to freshen up, and then you get ready with perfume and whatever else you use to look good. Data preparation is the same. First we gather the dataset (or take it from the internet), then we prepare it for our problem by removing irrelevant things, handling null values, and so on (getting ready for the party), and only after that do we train our model on the prepared dataset (going to the party).
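A sketch of preparing the Kaggle Kannada-MNIST data, assuming the competition’s train.csv layout (a label column followed by 784 pixel columns); file names and the split ratio here are illustrative:

```python
# Load the training CSV from the Kaggle competition data.
train = pd.read_csv("train.csv")

y = train["label"].values               # digit labels 0-9
X = train.drop("label", axis=1).values  # raw pixel values

# Scale pixels to [0, 1] and reshape to 28x28 grayscale images.
X = (X / 255.0).reshape(-1, 28, 28, 1).astype("float32")

# Hold out a small part of the data for validation.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.1, random_state=42)
```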

Training your Data

You can make your own model, or you can use an example model provided by TensorFlow itself. If you have some knowledge of how to design an effective model for your dataset, making your own model usually performs better, but this is not the case every time. Here, I made my own model for training; a sketch of such a network is shown below.
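The following is a small convolutional network as one example of a custom model for 28x28 grayscale digit images; it is a sketch, not necessarily the exact architecture used for the results in this post:

```python
# A simple CNN: two conv/pool stages, then a dense classifier head.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(10, activation="softmax"),  # 10 Kannada digit classes
])
```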

After making your model, you just have to compile it and fit it to your prepared dataset. This can be done in a few lines of code.
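For example, with the sketch model and prepared arrays above (the optimizer, epoch count, and batch size are assumptions, not the post’s exact settings):

```python
# Compile with a standard optimizer/loss for integer-labelled classes.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Fit on the training split, monitoring the held-out validation split.
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=10, batch_size=128)
```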

Model Accuracy

Accuracy is the reason we do all of this: if the model does not reach high accuracy, there is a very high probability that its predictions will be poor. To get the highest accuracy we sometimes train five or six models and then compare them to see which performs better. The model is not the only thing that affects accuracy; the data is responsible for much of it. If we are getting low accuracy, it may mean that our data contains irrelevant data, or that it is biased and covers only some group of examples, among other factors. In this problem we got an accuracy of around 99.5%, which means that whenever our model predicts something, there is roughly a 99.5% chance that it is correct.
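A quick way to check this number on the held-out validation split from the sketch above:

```python
# Evaluate the trained model on the validation set.
val_loss, val_acc = model.evaluate(X_val, y_val, verbose=0)
print(f"Validation accuracy: {val_acc:.4f}")
```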

YouTube Link: https://youtu.be/0hEAWrPn8LE

Thank You !!
