Tiring-Text
Baseline for TIRING TEXT Challenge
Getting-started code for the TIRING TEXT challenge.
Download Necessary Packages¶
import sys
!pip install numpy
!pip install pandas
!pip install scikit-learn
!pip install git+https://gitlab.aicrowd.com/aicrowd/aicrowd-cli.git >/dev/null
%load_ext aicrowd.magic
Download data¶
The first step is to download our training and testing datasets. We will train a classifier on the training data, make predictions on the test data, and submit those predictions for evaluation.
API_KEY = "" # Please enter your API Key [https://www.aicrowd.com/participants/me]
%aicrowd login --api-key $API_KEY
%aicrowd dataset list -c tiring-text
%aicrowd dataset download -c tiring-text -j 3
Import packages¶
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score,log_loss
train_data = pd.read_csv("train.csv")
Visualize the data 👀¶
train_data.head()
train_data.shape
The dataset contains texts along with a label tagging each one as unscrambled or scrambled.
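Before splitting, it is worth glancing at the class balance. A minimal sketch (assuming the label column is named tag, as used in the split below):
# How many scrambled vs. unscrambled examples do we have?
print(train_data['tag'].value_counts())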
X,y = train_data['text'],train_data['tag']
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape)
print(X_val.shape)
TRAINING PHASE 🏋️¶
Preprocessing¶
Text files are really ordered sequences of words. In order to run machine learning algorithms, we need to convert the text files into numerical feature vectors. We will be using the bag-of-words model for our example. Briefly, we segment each text file into words (for English, splitting by space), count the number of times each word occurs in each document, and finally assign each unique word an integer id. Each unique word in our dictionary will correspond to a feature (descriptive feature).
Scikit-learn has a high-level component which will create the feature vectors for us: CountVectorizer. More about it here.
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
Here, by calling count_vect.fit_transform(X_train), we learn the vocabulary dictionary and get back a document-term matrix of shape [n_samples, n_features].
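To make this concrete, here is a minimal sketch on a toy corpus (the two sentences are made up purely for illustration):
from sklearn.feature_extraction.text import CountVectorizer

toy_corpus = ["the cat sat", "the cat sat on the mat"]
toy_vect = CountVectorizer()
toy_counts = toy_vect.fit_transform(toy_corpus)

print(toy_vect.get_feature_names_out())  # learned vocabulary (get_feature_names on older scikit-learn versions)
print(toy_counts.toarray())              # per-document word counts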
TF: Just counting the number of words in each document has one issue: it gives more weight to longer documents than to shorter ones. To avoid this, we can use the frequency (TF, term frequency), i.e. count(word) / total words, in each document.
TF-IDF: Finally, we can even reduce the weight of very common words (the, is, an, etc.) which occur in almost all documents. This is called TF-IDF, i.e. term frequency times inverse document frequency.
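As a minimal sketch of what the transformer does, applied to the same toy corpus as above (the exact values follow scikit-learn's smoothed IDF formula):
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

toy_counts = CountVectorizer().fit_transform(["the cat sat", "the cat sat on the mat"])
toy_tfidf = TfidfTransformer().fit_transform(toy_counts)
print(toy_tfidf.toarray().round(2))  # each row is an L2-normalised TF-IDF vector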
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
Define the Model¶
We have fixed our data and now we are ready to train our model.
There are a ton of classifiers to choose from, some being Naive Bayes, Logistic Regression, SVM, Random Forests, Decision Trees, etc.
Remember that there are no hard and fast rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.
A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insights here.
classifier = DecisionTreeClassifier(max_depth = 2)
- To start you off, we have used a basic Decision Tree classifier here.
- Do keep in mind there exist sophisticated techniques for everything; the key, as noted earlier, is to seek them out and experiment to fit your implementation. A few drop-in alternatives are sketched below.
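For example, any of the classifiers imported at the top can be dropped into the same spot. A minimal sketch (the hyperparameters are illustrative, not tuned):
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

classifier = MultinomialNB()                 # a common baseline for word-count features
# classifier = SVC(kernel='linear')          # linear SVMs often work well on TF-IDF vectors
# classifier = MLPClassifier(max_iter=300)   # a small feed-forward neural network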
Train the Model¶
Building a pipeline: we can write less code and do all of the above by building a pipeline as follows:
text_clf = Pipeline([('vect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('clf', classifier)])
text_clf = text_clf.fit(X_train, y_train)
Tip: To improve your accuracy you can do something called stemming. Stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form. E.g. a stemming algorithm reduces the words "fishing", "fished", and "fisher" to the root word "fish".
You can use NLTK, which can be installed from here. NLTK comes with various stemmers which can help reduce words to their root form.
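Here is a minimal sketch of how stemming could be wired into the vectorizer, assuming nltk is installed (pip install nltk); the custom-analyzer approach is just one of several ways to do it:
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

stemmer = PorterStemmer()
base_analyzer = CountVectorizer(stop_words='english').build_analyzer()

def stemmed_analyzer(doc):
    # Tokenize and lowercase with the default analyzer, then stem each token
    return [stemmer.stem(token) for token in base_analyzer(doc)]

# Drop-in replacement for the CountVectorizer in the pipeline above
stemmed_vect = CountVectorizer(analyzer=stemmed_analyzer)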
Validation Phase 🤔¶
Wondering how well your model learned? Let's check.
Predict on Validation¶
Now we predict with our trained model on the validation set we created, evaluating the model on unseen data.
y_pred = text_clf.predict(X_val)
print(y_pred)
Evaluate the Performance¶
- We have used basic metrics to quantify the performance of our model.
- This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
- Do read up on the meaning and use of different metrics. There are many more metrics and measures; you should learn to use them correctly with respect to the solution, dataset and other factors. A per-class breakdown is sketched after the code below.
- F1 score is the metric for this challenge.
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
Testing Phase 😅¶
We are almost done. We have trained and validated on the training data. Now it's time to predict on the test set and make a submission.
Load Test Set¶
Load the test data on which final submission is to be made.
final_test_path = "test.csv"
final_test = pd.read_csv(final_test_path)
len(final_test)
Predict Test Set¶
Predict on the test set and you are all set to make the submission!
submission = text_clf.predict(final_test['text'])
Save the prediction to csv¶
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv',header=['tag'], index=False)
Note:
- Do take a look at the submission format.
- The submission file should contain a header.
- Follow all submission guidelines strictly to avoid inconvenience.
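A quick sanity check on the file before submitting (a minimal sketch):
# Confirm the header is 'tag' and the row count matches the test set
check = pd.read_csv('submission.csv')
print(check.columns.tolist(), len(check))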
%aicrowd submission create -c tiring-text -f submission.csv # submit the csv