{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "![MLU Logo](../data/MLU_Logo.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Machine Learning Accelerator - Natural Language Processing - Lecture 1\n", "\n", "## Bag of Words Method\n", "\n", "In this notebook, we go over the Bag of Words (BoW) method to convert text data into numerical values, that will be later used for predictions with machine learning algorithms.\n", "\n", "To convert text data to vectors of numbers, a vocabulary of known words (tokens) is extracted from the text, the occurence of words is scored, and the resulting numerical values are saved in vocabulary-long vectors.\n", "\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " \n", "
\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from mluvisuals import *\n", "\n", "BagOfWords()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are a few versions of BoW, corresponding to different words scoring methods. We use the Sklearn library to calculate the BoW numerical values using:\n", "\n", "1. Binary\n", "2. Word Counts\n", "3. Term Frequencies\n", "4. Term Frequency-Inverse Document Frequencies\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install -q -r ../requirements.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Binary\n", "(Go to top)\n", "\n", "Let's calculate the first type of BoW, recording whether the word is in the sentence or not. We will also go over some useful features of Sklearn's vectorizers here.\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "sentences = [\"This document is the first document\",\n", " \"This document is the second document\",\n", " \"and this is the third one\"]\n", "\n", "# Initialize the count vectorizer with the parameter: binary=True\n", "binary_vectorizer = CountVectorizer(binary=True)\n", "\n", "# fit_transform() function fits the text data and gets the binary BoW vectors\n", "x = binary_vectorizer.fit_transform(sentences)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the vocabulary size grows, the BoW vectors also get very large in size. They are usually made of many zeros and very few non-zero values. Sklearn stores these vectors in a compressed form. If we want to use them as Numpy arrays, we call the __toarray()__ function. Here are our binary BoW features. Each row corresponds to a single document." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0, 1, 1, 1, 0, 0, 1, 0, 1],\n", " [0, 1, 0, 1, 0, 1, 1, 0, 1],\n", " [1, 0, 0, 1, 1, 0, 1, 1, 1]])" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.toarray()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check out our vocabulary. We can use the __vocabulary___ attribute. This returns a dictionary with each word as key and index as value. Notice that they are alphabetically ordered." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'this': 8,\n", " 'document': 1,\n", " 'is': 3,\n", " 'the': 6,\n", " 'first': 2,\n", " 'second': 5,\n", " 'and': 0,\n", " 'third': 7,\n", " 'one': 4}" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "binary_vectorizer.vocabulary_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similar information can be reached with the __get_feature_names()__ function. The position of the terms in the .get_feature_names() correspond to the column position of the elements in the BoW matrix." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/ec2-user/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function get_feature_names is deprecated; get_feature_names is deprecated in 1.0 and will be removed in 1.2. Please use get_feature_names_out instead.\n", " warnings.warn(msg, category=FutureWarning)\n" ] } ], "source": [ "print(binary_vectorizer.get_feature_names())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How can we calculate BoW for a new text? We will use the __transform()__ function this time. You can see below this doesn't change the vocabulary. New words are simply skipped in this case." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "new_sentence = [\"This is the new sentence\"]\n", "\n", "new_vectors = binary_vectorizer.transform(new_sentence)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0, 0, 0, 1, 0, 0, 1, 0, 1]])" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "new_vectors.toarray()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Word Counts\n", "(Go to top)\n", "\n", "Word counts can be simply calculated using the same __CountVectorizer()__ function __without__ the __binary__ parameter.\n", "\n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0, 2, 1, 1, 0, 0, 1, 0, 1],\n", " [0, 2, 0, 1, 0, 1, 1, 0, 1],\n", " [1, 0, 0, 1, 1, 0, 1, 1, 1]])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "sentences = [\"This document is the first document\", \"This document is the second document\", \"and this is the third one\"]\n", "\n", "# Initialize the count vectorizer\n", "count_vectorizer = CountVectorizer()\n", "\n", "xc = count_vectorizer.fit_transform(sentences)\n", "\n", "xc.toarray()" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0, 0, 0, 1, 0, 0, 1, 0, 1]])" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "new_sentence = [\"This is the new sentence\"]\n", "new_vectors = count_vectorizer.transform(new_sentence)\n", "new_vectors.toarray()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Term Frequency (TF)\n", "(Go to top)\n", "\n", "Term Frequency (TF) vectors that show how important words are to the documents, are computed using\n", "\n", "$$tf(term, doc) = \\frac{number\\, of\\, times\\, the\\, term\\, occurs\\, in\\, the\\, doc}{total\\, number\\, of\\, terms\\, in\\, the\\, doc}$$\n", "\n", "From sklearn we use the __TfidfVectorizer()__ function with the parameter __use_idf=False__, which additionally *automatically normalizes the term frequencies vectors by their Euclidean ($l2$) norm*. \n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0. , 0.70710678, 0.35355339, 0.35355339, 0. ,\n", " 0. , 0.35355339, 0. , 0.35355339],\n", " [0. , 0.70710678, 0. , 0.35355339, 0. ,\n", " 0.35355339, 0.35355339, 0. , 0.35355339],\n", " [0.40824829, 0. , 0. , 0.40824829, 0.40824829,\n", " 0. , 0.40824829, 0.40824829, 0.40824829]])" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.feature_extraction.text import TfidfVectorizer\n", "\n", "tf_vectorizer = TfidfVectorizer(use_idf=False)\n", "\n", "x = tf_vectorizer.fit_transform(sentences)\n", "\n", "x.toarray()" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0. , 0. , 0. , 0.57735027, 0. ,\n", " 0. , 0.57735027, 0. , 0.57735027]])" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "new_sentence = [\"This is the new sentence\"]\n", "new_vectors = tf_vectorizer.transform(new_sentence)\n", "new_vectors.toarray()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Term Frequency Inverse Document Frequency (TF-IDF)\n", "(Go to top)\n", "\n", "Term Frequency Inverse Document Frequency (TF-IDF) vectors are computed using the __TfidfVectorizer()__ function with the parameter __use_idf=True__. We can also skip this parameter as it is already __True__ by default.\n" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0. , 0.7284449 , 0.47890875, 0.28285122, 0. ,\n", " 0. , 0.28285122, 0. , 0.28285122],\n", " [0. , 0.7284449 , 0. , 0.28285122, 0. ,\n", " 0.47890875, 0.28285122, 0. , 0.28285122],\n", " [0.49711994, 0. , 0. , 0.29360705, 0.49711994,\n", " 0. , 0.29360705, 0.49711994, 0.29360705]])" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.feature_extraction.text import TfidfVectorizer\n", "\n", "tfidf_vectorizer = TfidfVectorizer(use_idf=True)\n", "\n", "sentences = [\"This document is the first document\",\n", " \"This document is the second document\",\n", " \"and this is the third one\"]\n", "\n", "xf = tfidf_vectorizer.fit_transform(sentences)\n", "\n", "xf.toarray()" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0. , 0. , 0. , 0.57735027, 0. ,\n", " 0. , 0.57735027, 0. , 0.57735027]])" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "new_sentence = [\"This is the new sentence\"]\n", "new_vectors = tfidf_vectorizer.transform(new_sentence)\n", "new_vectors.toarray()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Note 1__: In addition to *automatically normalizing the term frequencies vectors by their Euclidean ($l2$) norm*, sklearn also uses a *smoothed version of idf*, computing \n", "\n", "$$idf(term) = \\ln \\Big( \\frac{n_{documents} +1}{n_{documents\\,containing\\,the\\,term}+1}\\Big) + 1$$" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([1.69314718, 1.28768207, 1.69314718, 1. , 1.69314718,\n", " 1.69314718, 1. , 1.69314718, 1. ])" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tfidf_vectorizer.idf_" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0. , 0.7284449 , 0.47890875, 0.28285122, 0. ,\n", " 0. , 0.28285122, 0. , 0.28285122],\n", " [0. , 0.7284449 , 0. , 0.28285122, 0. ,\n", " 0.47890875, 0.28285122, 0. , 0.28285122],\n", " [0.49711994, 0. , 0. , 0.29360705, 0.49711994,\n", " 0. , 0.29360705, 0.49711994, 0.29360705]])" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.feature_extraction.text import TfidfVectorizer\n", "\n", "sentences = [\"This document is the first document\",\n", " \"This document is the second document\",\n", " \"and this is the third one\"]\n", "\n", "tfidf_vectorizer = TfidfVectorizer()\n", "xf = tfidf_vectorizer.fit_transform(sentences)\n", "xf.toarray()" ] } ], "metadata": { "kernelspec": { "display_name": "Python [conda env:conda_pytorch_p39] *", "language": "python", "name": "conda-env-conda_pytorch_p39-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 2 }