From 429c1862546a2cbda044f459865e6cee7d9aa314 Mon Sep 17 00:00:00 2001
From: Navan Chauhan
Date: Sun, 24 May 2020 18:57:49 +0530
Subject: Publish deploy 2020-05-24 18:57

---
 .styles 3.css.icloud | Bin 0 -> 160 bytes
 feed.rss | 2 +-
 .../index.html | 24 ++--
 .../index.html | 64 +++------
 posts/2019-12-08-Splitting-Zips/index.html | 12 +-
 .../index.html | 28 +---
 .../index.html | 150 ++++++---------------
 posts/2019-12-22-Fake-News-Detector/index.html | 68 +++-------
 .../index.html | 12 +-
 .../index.html | 8 +-
 .../index.html | 62 +++------
 .../index.html | 30 ++---
 .../index.html | 10 +-
 .../2020-03-14-generating-vaporwave/index.html | 34 ++---
 sitemap.xml | 2 +-
 15 files changed, 162 insertions(+), 344 deletions(-)
 create mode 100644 .styles 3.css.icloud

diff --git a/.styles 3.css.icloud b/.styles 3.css.icloud
new file mode 100644
index 0000000..625f925
Binary files /dev/null and b/.styles 3.css.icloud differ

diff --git a/feed.rss b/feed.rss
index d691e34..f808cbb 100644
--- a/feed.rss
+++ b/feed.rss
@@ -1,4 +1,4 @@
-Navan ChauhanWelcome to my personal fragment of the internet. Majority of the posts should be complete.https://navanchauhan.github.io/enMon, 18 May 2020 17:33:07 +0530Mon, 18 May 2020 17:33:07 +0530250https://navanchauhan.github.io/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOSFixing X11 Error on macOS Catalina for AmberTools 18/19Fixing Could not find the X11 libraries; you may need to edit config.h, AmberTools macOS Catalinahttps://navanchauhan.github.io/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOSMon, 13 Apr 2020 11:41:00 +0530Fixing X11 Error on macOS Catalina for AmberTools 18/19

I was trying to install AmberTools on my macOS Catalina Installation. Running ./configure -macAccelerate clang gave me an error that it could not find X11 libraries, even though locate libXt showed that my installation was correct.

Error:

Could not find the X11 libraries; you may need to edit config.h
+Navan ChauhanWelcome to my personal fragment of the internet. Majority of the posts should be complete.https://navanchauhan.github.io/enSun, 24 May 2020 18:26:31 +0530Sun, 24 May 2020 18:26:31 +0530250https://navanchauhan.github.io/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOSFixing X11 Error on macOS Catalina for AmberTools 18/19Fixing Could not find the X11 libraries; you may need to edit config.h, AmberTools macOS Catalinahttps://navanchauhan.github.io/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOSMon, 13 Apr 2020 11:41:00 +0530Fixing X11 Error on macOS Catalina for AmberTools 18/19

I was trying to install AmberTools on my macOS Catalina Installation. Running ./configure -macAccelerate clang gave me an error that it could not find X11 libraries, even though locate libXt showed that my installation was correct.

Error:

Could not find the X11 libraries; you may need to edit config.h
   to set the XHOME and XLIBS variables.
Error: The X11 libraries are not in the usual location !
   To search for them try the command: locate libXt

diff --git a/posts/2019-05-05-Custom-Snowboard-Anemone-Theme/index.html b/posts/2019-05-05-Custom-Snowboard-Anemone-Theme/index.html
index 2900cd2..943eb35 100644
--- a/posts/2019-05-05-Custom-Snowboard-Anemone-Theme/index.html
+++ b/posts/2019-05-05-Custom-Snowboard-Anemone-Theme/index.html
@@ -1,16 +1,14 @@
-Creating your own custom theme for Snowboard or Anemone | Navan Chauhan
5 minute read · Created on May 5, 2019 · Last modified on March 9, 2020

Creating your own custom theme for Snowboard or Anemone

Contents

  • Getting Started
  • Theme Configuration
  • Creating Icons
  • Exporting Icons
  • Icon Masks
  • Packaging
  • Building the DEB

Getting Started

Note: Without the proper folder structure, your theme may not show up!

  • Create a new folder called themeName.theme (Replace themeName with your desired theme name)
  • Within themeName.theme folder, create another folder called IconBundles (You cannot change this name)

Theme Configuration

  • Now, inside the themeName.theme folder, create a file called Info.plist and paste the following
<?xml version="1.0" encoding="UTF-8"?> -<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> - <plist version="1.0"> - <dict> - <key>PackageName</key> - <string>ThemeName</string> - <key>ThemeType</key> - <string>Icons</string> - </dict> -</plist> -
- -
  • Replace PackageName with the name of the package and ThemeName with the name of the theme

Now, you might ask: what is the difference between PackageName and ThemeName?

Well, suppose for example that you want to publish two variants of your icons, one dark and one white, but you do not want the user to install them separately. You would then name the package MyTheme and include two themes, Blackie and White, thus creating two entries. More about this at the end.

Creating Icons

  • Open up the Image Editor of your choice and create a new file having a resolution of 512x512

Note: Due to IconBundles, we just need to create the icons in one size and they get resized automatically :ghost:

Want to create rounded icons? Create them as squares only; we will learn how to apply masks!

Exporting Icons

Note: All icons must be saved as *.png (Tip: This means you can even create partially transparent icons!)

  • All icons must be saved in themeName.theme > IconBundles as bundleID-large.png (e.g. com.apple.MobileSMS-large.png for Messages)
Finding BundleIDs

Stock Application BundleIDs

| Name | BundleID |
|-------------|----------------------|
| App Store | com.apple.AppStore |
| Apple Watch | com.apple.Bridge |
| Calculator | com.apple.calculator |
| Calendar | com.apple.mobilecal |
| Camera | com.apple.camera |
| Classroom | com.apple.classroom |
| Clock | com.apple.mobiletimer |
| Compass | com.apple.compass |
| FaceTime | com.apple.facetime |
| Files | com.apple.DocumentsApp |
| Game Center | com.apple.gamecenter |
| Health | com.apple.Health |
| Home | com.apple.Home |
| iBooks | com.apple.iBooks |
| iTunes Store | com.apple.MobileStore |
| Mail | com.apple.mobilemail |
| Maps | com.apple.Maps |
| Measure | com.apple.measure |
| Messages | com.apple.MobileSMS |
| Music | com.apple.Music |
| News | com.apple.news |
| Notes | com.apple.mobilenotes |
| Phone | com.apple.mobilephone |
| Photo Booth | com.apple.Photo-Booth |
| Photos | com.apple.mobileslideshow |
| Playgrounds | com.apple.Playgrounds |
| Podcasts | com.apple.podcasts |
| Reminders | com.apple.reminders |
| Safari | com.apple.mobilesafari |
| Settings | com.apple.Preferences |
| Stocks | com.apple.stocks |
| Tips | com.apple.tips |
| TV | com.apple.tv |
| Videos | com.apple.videos |
| Voice Memos | com.apple.VoiceMemos |
| Wallet | com.apple.Passbook |
| Weather | com.apple.weather |

3rd Party Applications BundleID Click here

Icon Masks

  • Getting the Classic Rounded Rectangle Masks

In your Info.plist file add the following value between <dict> and +Creating your own custom theme for Snowboard or Anemone | Navan Chauhan

5 minute read · Created on May 5, 2019 · Last modified on March 9, 2020

Creating your own custom theme for Snowboard or Anemone

Contents

  • Getting Started
  • Theme Configuration
  • Creating Icons
  • Exporting Icons
  • Icon Masks
  • Packaging
  • Building the DEB

Getting Started

Note: Without the proper folder structure, your theme may not show up!

  • Create a new folder called themeName.theme (Replace themeName with your desired theme name)
  • Within themeName.theme folder, create another folder called IconBundles (You cannot change this name)

Theme Configuration

  • Now, inside the themeName.theme folder, create a file called Info.plist and paste the following
<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> + <plist version="1.0"> + <dict> + <key>PackageName</key> + <string>ThemeName</string> + <key>ThemeType</key> + <string>Icons</string> + </dict> +</plist> +
  • Replace PackageName with the name of the package and ThemeName with the name of the theme

Now, you might ask: what is the difference between PackageName and ThemeName?

Well, suppose for example that you want to publish two variants of your icons, one dark and one white, but you do not want the user to install them separately. You would then name the package MyTheme and include two themes, Blackie and White, thus creating two entries. More about this at the end.
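For instance, a hypothetical MyTheme package shipping both variants might be laid out like this (the theme names are purely illustrative, following the folder structure described above):

MyTheme/
├── Blackie.theme
│   ├── Info.plist
│   └── IconBundles/
└── White.theme
    ├── Info.plist
    └── IconBundles/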

Creating Icons

  • Open up the Image Editor of your choice and create a new file having a resolution of 512x512

Note: Due to IconBundles, we just need to create the icons in one size and they get resized automatically :ghost:

Want to create rounded icons? Create them as squares only; we will learn how to apply masks!

Exporting Icons

Note: All icons must be saved as *.png (Tip: This means you can even create partially transparent icons!)

  • All icons must be saved in themeName.theme > IconBundles as bundleID-large.png (e.g. com.apple.MobileSMS-large.png for Messages)
Finding BundleIDs

Stock Application BundleIDs

| Name | BundleID |
|-------------|----------------------|
| App Store | com.apple.AppStore |
| Apple Watch | com.apple.Bridge |
| Calculator | com.apple.calculator |
| Calendar | com.apple.mobilecal |
| Camera | com.apple.camera |
| Classroom | com.apple.classroom |
| Clock | com.apple.mobiletimer |
| Compass | com.apple.compass |
| FaceTime | com.apple.facetime |
| Files | com.apple.DocumentsApp |
| Game Center | com.apple.gamecenter |
| Health | com.apple.Health |
| Home | com.apple.Home |
| iBooks | com.apple.iBooks |
| iTunes Store | com.apple.MobileStore |
| Mail | com.apple.mobilemail |
| Maps | com.apple.Maps |
| Measure | com.apple.measure |
| Messages | com.apple.MobileSMS |
| Music | com.apple.Music |
| News | com.apple.news |
| Notes | com.apple.mobilenotes |
| Phone | com.apple.mobilephone |
| Photo Booth | com.apple.Photo-Booth |
| Photos | com.apple.mobileslideshow |
| Playgrounds | com.apple.Playgrounds |
| Podcasts | com.apple.podcasts |
| Reminders | com.apple.reminders |
| Safari | com.apple.mobilesafari |
| Settings | com.apple.Preferences |
| Stocks | com.apple.stocks |
| Tips | com.apple.tips |
| TV | com.apple.tv |
| Videos | com.apple.videos |
| Voice Memos | com.apple.VoiceMemos |
| Wallet | com.apple.Passbook |
| Weather | com.apple.weather |

3rd Party Applications BundleID Click here

Icon Masks

  • Getting the Classic Rounded Rectangle Masks

In your Info.plist file add the following value between <dict> and ``` IB-MaskIcons diff --git a/posts/2019-12-08-Image-Classifier-Tensorflow/index.html b/posts/2019-12-08-Image-Classifier-Tensorflow/index.html index 76ac4f8..0822e30 100644 --- a/posts/2019-12-08-Image-Classifier-Tensorflow/index.html +++ b/posts/2019-12-08-Image-Classifier-Tensorflow/index.html @@ -1,20 +1,18 @@ -Creating a Custom Image Classifier using Tensorflow 2.x and Keras for Detecting Malaria | Navan Chauhan

4 minute read · Created on December 8, 2019 · Last modified on January 18, 2020

Creating a Custom Image Classifier using Tensorflow 2.x and Keras for Detecting Malaria

Done during Google Code-In. Org: Tensorflow.

Imports

%tensorflow_version 2.x #This is for telling Colab that you want to use TF 2.0, ignore if running on local machine +Creating a Custom Image Classifier using Tensorflow 2.x and Keras for Detecting Malaria | Navan Chauhan
4 minute read · Created on December 8, 2019 · Last modified on May 24, 2020

Creating a Custom Image Classifier using Tensorflow 2.x and Keras for Detecting Malaria

Done during Google Code-In. Org: Tensorflow.

Imports

%tensorflow_version 2.x #This is for telling Colab that you want to use TF 2.0, ignore if running on local machine from PIL import Image # We use the PIL Library to resize images -import numpy as np +import numpy as np import os import cv2 -import tensorflow as tf +import tensorflow as tf from tensorflow.keras import datasets, layers, models -import pandas as pd -import matplotlib.pyplot as plt +import pandas as pd +import matplotlib.pyplot as plt from keras.models import Sequential from keras.layers import Conv2D,MaxPooling2D,Dense,Flatten,Dropout

Dataset

Fetching the Data

!wget ftp://lhcftp.nlm.nih.gov/Open-Access-Datasets/Malaria/cell_images.zip !unzip cell_images.zip -
- -

Processing the Data

We resize all the images to 50x50 and add the NumPy array of each image, as well as its label (Infected or Not), to common arrays.

data = [] +

Processing the Data

We resize all the images to 50x50 and add the NumPy array of each image, as well as its label (Infected or Not), to common arrays.

data = [] labels = [] Parasitized = os.listdir("./cell_images/Parasitized/") @@ -26,7 +24,7 @@ data.append(np.array(size_image)) labels.append(0) except AttributeError: - print("") + print("") Uninfected = os.listdir("./cell_images/Uninfected/") for uninfect in Uninfected: @@ -37,23 +35,17 @@ data.append(np.array(size_image)) labels.append(1) except AttributeError: - print("") -
- -

Splitting Data

df = np.array(data) + print("") +

Splitting Data

df = np.array(data) labels = np.array(labels) (X_train, X_test) = df[(int)(0.1*len(df)):],df[:(int)(0.1*len(df))] (y_train, y_test) = labels[(int)(0.1*len(labels)):],labels[:(int)(0.1*len(labels))] -
- -
s=np.arange(X_train.shape[0]) -np.random.shuffle(s) -X_train=X_train[s] -y_train=y_train[s] -X_train = X_train/255.0 -
- -

Model

Creating Model

By creating a sequential model, we create a linear stack of layers.

Note: The input shape for the first layer is 50,50, which corresponds to the size of the resized images

model = models.Sequential() +
s=np.arange(X_train.shape[0]) +np.random.shuffle(s) +X_train=X_train[s] +y_train=y_train[s] +X_train = X_train/255.0 +

Model

Creating Model

By creating a sequential model, we create a linear stack of layers.

Note: The input shape for the first layer is 50,50, which corresponds to the size of the resized images

model = models.Sequential() model.add(layers.Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', input_shape=(50,50,3))) model.add(layers.MaxPooling2D(pool_size=2)) model.add(layers.Conv2D(filters=32,kernel_size=2,padding='same',activation='relu')) @@ -66,17 +58,11 @@ model.add(layers.Dropout(0.2)) model.add(layers.Dense(2,activation="softmax"))#2 represent output layer neurons model.summary() -
- -

Compiling Model

We use the Adam optimiser as it is an adaptive learning-rate optimization algorithm designed specifically for training deep neural networks, which means it changes its learning rate automatically to get the best results

model.compile(optimizer="adam", +

Compiling Model

We use the Adam optimiser as it is an adaptive learning-rate optimization algorithm designed specifically for training deep neural networks, which means it changes its learning rate automatically to get the best results

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"]) -
- -

Training Model

We train the model for 10 epochs on the training data and then validate it using the testing data

history = model.fit(X_train,y_train, epochs=10, validation_data=(X_test,y_test)) -
- -
Train on 24803 samples, validate on 2755 samples +

Training Model

We train the model for 10 epochs on the training data and then validate it using the testing data

history = model.fit(X_train,y_train, epochs=10, validation_data=(X_test,y_test)) +
Train on 24803 samples, validate on 2755 samples Epoch 1/10 24803/24803 [==============================] - 57s 2ms/sample - loss: 0.0786 - accuracy: 0.9729 - val_loss: 0.0000e+00 - val_accuracy: 1.0000 Epoch 2/10 @@ -97,25 +83,19 @@ 24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0352 - accuracy: 0.9878 - val_loss: 0.0000e+00 - val_accuracy: 1.0000 Epoch 10/10 24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0373 - accuracy: 0.9865 - val_loss: 0.0000e+00 - val_accuracy: 1.0000 -
- -

Results

accuracy = history.history['accuracy'][-1]*100 +

Results

accuracy = history.history['accuracy'][-1]*100 loss = history.history['loss'][-1]*100 val_accuracy = history.history['val_accuracy'][-1]*100 val_loss = history.history['val_loss'][-1]*100 -print( +print( 'Accuracy:', accuracy, '\nLoss:', loss, '\nValidation Accuracy:', val_accuracy, '\nValidation Loss:', val_loss ) -
- -
Accuracy: 98.64532351493835 +
Accuracy: 98.64532351493835 Loss: 3.732407123270176 Validation Accuracy: 100.0 Validation Loss: 0.0 -
- -

We have achieved 98% Accuracy!

Link to Colab Notebook

Tagged with:
+

We have achieved 98% Accuracy!
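To visualise how the accuracy evolved over the 10 epochs, here is a quick matplotlib sketch (assuming the history object returned by model.fit above is still in scope):

import matplotlib.pyplot as plt

# Plot training vs validation accuracy for each epoch
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()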

Link to Colab Notebook

Tagged with:
\ No newline at end of file diff --git a/posts/2019-12-08-Splitting-Zips/index.html b/posts/2019-12-08-Splitting-Zips/index.html index 74c1f08..17da0d6 100644 --- a/posts/2019-12-08-Splitting-Zips/index.html +++ b/posts/2019-12-08-Splitting-Zips/index.html @@ -1,10 +1,4 @@ Splitting ZIPs into Multiple Parts | Navan Chauhan
1 minute read · Created on December 8, 2019 · Last modified on January 18, 2020

Splitting ZIPs into Multiple Parts

Tested on macOS

Creating the archive:

zip -r -s 5 oodlesofnoodles.zip website/ -
- -

5 stands for the size of each split file in MB (KB and GB can also be specified using the k and g suffixes, e.g. -s 500k or -s 1g)

For encrypting the zip:

zip -er -s 5 oodlesofnoodles.zip website -
- -

Extracting Files

First we need to collect all parts, then

zip -F oodlesofnoodles.zip --out merged.zip -
- -
Tagged with:
\ No newline at end of file +

5 stands for the size of each split file in MB (KB and GB can also be specified using the k and g suffixes, e.g. -s 500k or -s 1g)

For encrypting the zip:

zip -er -s 5 oodlesofnoodles.zip website +

Extracting Files

First we need to collect all parts, then

zip -F oodlesofnoodles.zip --out merged.zip +
Tagged with:
\ No newline at end of file diff --git a/posts/2019-12-10-TensorFlow-Model-Prediction/index.html b/posts/2019-12-10-TensorFlow-Model-Prediction/index.html index ebd6f4a..da7cae5 100644 --- a/posts/2019-12-10-TensorFlow-Model-Prediction/index.html +++ b/posts/2019-12-10-TensorFlow-Model-Prediction/index.html @@ -1,23 +1,9 @@ Making Predictions using Image Classifier (TensorFlow) | Navan Chauhan
1 minute read · Created on December 10, 2019 · Last modified on January 18, 2020

Making Predictions using Image Classifier (TensorFlow)

This was tested on TF 2.x and works as of 2019-12-10

If you want to understand how to make your own custom image classifier, please refer to my previous post.

If you followed my last post, then you created a model which took an image of dimensions 50x50 as an input.

First we import the following if we have not imported these before

import cv2 import os -
- -

Then we read the file using OpenCV.

image=cv2.imread(imagePath) -
- -

The cv2.imread() function returns a NumPy array representing the image. Therefore, we need to convert it before we can use it.

image_from_array = Image.fromarray(image, 'RGB') -
- -

Then we resize the image

size_image = image_from_array.resize((50,50)) -
- -

After this we create a batch consisting of only one image

p = np.expand_dims(size_image, 0) -
- -

We then convert this uint8 datatype to a float32 datatype

img = tf.cast(p, tf.float32) -
- -

Finally we make the prediction

print(['Infected','Uninfected'][np.argmax(model.predict(img))]) -
- -

Infected

Tagged with:
\ No newline at end of file +

Then we read the file using OpenCV.

image=cv2.imread(imagePath) +

The cv2.imread() function returns a NumPy array representing the image. Therefore, we need to convert it before we can use it.

image_from_array = Image.fromarray(image, 'RGB') +

Then we resize the image

size_image = image_from_array.resize((50,50)) +

After this we create a batch consisting of only one image

p = np.expand_dims(size_image, 0) +

We then convert this uint8 datatype to a float32 datatype

img = tf.cast(p, tf.float32) +

Finally we make the prediction

print(['Infected','Uninfected'][np.argmax(model.predict(img))]) +

Infected
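Putting it all together, here is a minimal end-to-end sketch of the same steps as a single helper function (the name predict_image is just for illustration; model is assumed to be the classifier from the previous post):

import cv2
import numpy as np
import tensorflow as tf
from PIL import Image

def predict_image(imagePath, model):
    # Read the file; cv2.imread returns a NumPy array
    image = cv2.imread(imagePath)
    # Convert the NumPy array to a PIL Image
    image_from_array = Image.fromarray(image, 'RGB')
    # Resize to the 50x50 input the model expects
    size_image = image_from_array.resize((50, 50))
    # Create a batch consisting of only one image
    p = np.expand_dims(size_image, 0)
    # Convert the uint8 datatype to float32
    img = tf.cast(p, tf.float32)
    # Return the predicted label
    return ['Infected', 'Uninfected'][np.argmax(model.predict(img))]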

Tagged with:
\ No newline at end of file diff --git a/posts/2019-12-16-TensorFlow-Polynomial-Regression/index.html b/posts/2019-12-16-TensorFlow-Polynomial-Regression/index.html index 835d671..5365d89 100644 --- a/posts/2019-12-16-TensorFlow-Polynomial-Regression/index.html +++ b/posts/2019-12-16-TensorFlow-Polynomial-Regression/index.html @@ -1,28 +1,16 @@ -Polynomial Regression Using TensorFlow | Navan Chauhan
17 minute read · Created on December 16, 2019 · Last modified on January 18, 2020

Polynomial Regression Using TensorFlow

In this tutorial you will learn about polynomial regression and how you can implement it in Tensorflow.

In this post, we will be performing polynomial regression using 5 types of equations:

  • Linear
  • Quadratic
  • Cubic
  • Quartic
  • Quintic

Regression

What is Regression?

Regression is a statistical measurement that is used to try to determine the relationship between a dependent variable (often denoted by Y) and a series of varying variables (called independent variables, often denoted by X).

What is Polynomial Regression

This is a form of regression analysis where the relationship between Y and X is modelled as the nth degree/power of X (i.e. Y = a*X^n + b*X^(n-1) + ... + constant). Polynomial regression can even fit a non-linear relationship (e.g. when the points don't form a straight line).

Imports

import tensorflow.compat.v1 as tf +Polynomial Regression Using TensorFlow | Navan Chauhan
17 minute read · Created on December 16, 2019 · Last modified on January 18, 2020

Polynomial Regression Using TensorFlow

In this tutorial you will learn about polynomial regression and how you can implement it in Tensorflow.

In this post, we will be performing polynomial regression using 5 types of equations:

  • Linear
  • Quadratic
  • Cubic
  • Quartic
  • Quintic

Regression

What is Regression?

Regression is a statistical measurement that is used to try to determine the relationship between a dependent variable (often denoted by Y) and a series of varying variables (called independent variables, often denoted by X).

What is Polynomial Regression

This is a form of regression analysis where the relationship between Y and X is modelled as the nth degree/power of X (i.e. Y = a*X^n + b*X^(n-1) + ... + constant). Polynomial regression can even fit a non-linear relationship (e.g. when the points don't form a straight line).

Imports

import tensorflow.compat.v1 as tf tf.disable_v2_behavior() -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -
- -

Dataset

Creating Random Data

Even though in this tutorial we will use a Position vs Salary dataset, it is important to know how to create synthetic data

To create 50 values spaced evenly between 0 and 50, we use NumPy's linspace function

linspace(lower_limit, upper_limit, no_of_observations)

x = np.linspace(0, 50, 50) +import matplotlib.pyplot as plt +import numpy as np +import pandas as pd +

Dataset

Creating Random Data

Even though in this tutorial we will use a Position vs Salary dataset, it is important to know how to create synthetic data

To create 50 values spaced evenly between 0 and 50, we use NumPy's linspace function

linspace(lower_limit, upper_limit, no_of_observations)

x = np.linspace(0, 50, 50) y = np.linspace(0, 50, 50) -
- -

We use the following function to add noise to the data, so that our values do not all lie on a perfectly straight line

x += np.random.uniform(-4, 4, 50) +

We use the following function to add noise to the data, so that our values do not all lie on a perfectly straight line

x += np.random.uniform(-4, 4, 50) y += np.random.uniform(-4, 4, 50) -
- -

Position vs Salary Dataset

We will be using https://drive.google.com/file/d/1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9/view (Salary vs Position Dataset)

!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9' -O data.csv -
- -
df = pd.read_csv("data.csv") -
- -
df # this gives us a preview of the dataset we are working with -
- -
| Position | Level | Salary | +

Position vs Salary Dataset

We will be using https://drive.google.com/file/d/1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9/view (Salary vs Position Dataset)

!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9' -O data.csv +
df = pd.read_csv("data.csv") +
df # this gives us a preview of the dataset we are working with +
| Position | Level | Salary | |-------------------|-------|---------| | Business Analyst | 1 | 45000 | | Junior Consultant | 2 | 50000 | @@ -34,77 +22,55 @@ | Senior Partner | 8 | 300000 | | C-level | 9 | 500000 | | CEO | 10 | 1000000 | -
- -

We use the salary column as the ordinate (y-coordinate) and the level column as the abscissa (x-coordinate)

abscissa = df["Level"].to_list() # abscissa = [1,2,3,4,5,6,7,8,9,10] +

We use the salary column as the ordinate (y-coordinate) and the level column as the abscissa (x-coordinate)

abscissa = df["Level"].to_list() # abscissa = [1,2,3,4,5,6,7,8,9,10] ordinate = df["Salary"].to_list() # ordinate = [45000,50000,60000,80000,110000,150000,200000,300000,500000,1000000] -
- -
n = len(abscissa) # no of observations +
n = len(abscissa) # no of observations plt.scatter(abscissa, ordinate) plt.ylabel('Salary') plt.xlabel('Position') plt.title("Salary vs Position") plt.show() -
- -

Defining Stuff

X = tf.placeholder("float") +

Defining Stuff

X = tf.placeholder("float") Y = tf.placeholder("float") -
- -

Defining Variables

We first define all the coefficients and the constant as TensorFlow variables, each having a random initial value

a = tf.Variable(np.random.randn(), name = "a") +

Defining Variables

We first define all the coefficients and the constant as TensorFlow variables, each having a random initial value

a = tf.Variable(np.random.randn(), name = "a") b = tf.Variable(np.random.randn(), name = "b") c = tf.Variable(np.random.randn(), name = "c") d = tf.Variable(np.random.randn(), name = "d") e = tf.Variable(np.random.randn(), name = "e") f = tf.Variable(np.random.randn(), name = "f") -
- -

Model Configuration

learning_rate = 0.2 +

Model Configuration

learning_rate = 0.2 no_of_epochs = 25000 -
- -

Equations

deg1 = a*X + b +

Equations

deg1 = a*X + b deg2 = a*tf.pow(X,2) + b*X + c deg3 = a*tf.pow(X,3) + b*tf.pow(X,2) + c*X + d deg4 = a*tf.pow(X,4) + b*tf.pow(X,3) + c*tf.pow(X,2) + d*X + e deg5 = a*tf.pow(X,5) + b*tf.pow(X,4) + c*tf.pow(X,3) + d*tf.pow(X,2) + e*X + f -
- -

Cost Function

We use the Mean Squared Error Function

mse1 = tf.reduce_sum(tf.pow(deg1-Y,2))/(2*n) +

Cost Function

We use the Mean Squared Error Function
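Concretely, for n observations this cost is MSE = Σ(prediction − Y)² / (2n), which is exactly what the code below computes for each of the five candidate polynomials.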

mse1 = tf.reduce_sum(tf.pow(deg1-Y,2))/(2*n) mse2 = tf.reduce_sum(tf.pow(deg2-Y,2))/(2*n) mse3 = tf.reduce_sum(tf.pow(deg3-Y,2))/(2*n) mse4 = tf.reduce_sum(tf.pow(deg4-Y,2))/(2*n) mse5 = tf.reduce_sum(tf.pow(deg5-Y,2))/(2*n) -
- -

Optimizer

We use the AdamOptimizer for the polynomial functions and GradientDescentOptimizer for the linear function

optimizer1 = tf.train.GradientDescentOptimizer(learning_rate).minimize(mse1) +

Optimizer

We use the AdamOptimizer for the polynomial functions and GradientDescentOptimizer for the linear function

optimizer1 = tf.train.GradientDescentOptimizer(learning_rate).minimize(mse1) optimizer2 = tf.train.AdamOptimizer(learning_rate).minimize(mse2) optimizer3 = tf.train.AdamOptimizer(learning_rate).minimize(mse3) optimizer4 = tf.train.AdamOptimizer(learning_rate).minimize(mse4) optimizer5 = tf.train.AdamOptimizer(learning_rate).minimize(mse5) -
- -
init=tf.global_variables_initializer() -
- -

Model Predictions

For each type of equation, we first make the model learn the values of the coefficient(s) and constant; once we get these values, we use them to predict the Y values from the X values. We then plot the result to compare the actual data and the predicted line.

Linear Equation

with tf.Session() as sess: +
init=tf.global_variables_initializer() +

Model Predictions

For each type of equation, we first make the model learn the values of the coefficient(s) and constant; once we get these values, we use them to predict the Y values from the X values. We then plot the result to compare the actual data and the predicted line.

Linear Equation

with tf.Session() as sess: sess.run(init) for epoch in range(no_of_epochs): for (x,y) in zip(abscissa, ordinate): sess.run(optimizer1, feed_dict={X:x, Y:y}) if (epoch+1)%1000==0: cost = sess.run(mse1,feed_dict={X:abscissa,Y:ordinate}) - print("Epoch",(epoch+1), ": Training Cost:", cost," a,b:",sess.run(a),sess.run(b)) + print("Epoch",(epoch+1), ": Training Cost:", cost," a,b:",sess.run(a),sess.run(b)) training_cost = sess.run(mse1,feed_dict={X:abscissa,Y:ordinate}) coefficient1 = sess.run(a) constant = sess.run(b) -print(training_cost, coefficient1, constant) -
- -
Epoch 1000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12 +print(training_cost, coefficient1, constant) +
Epoch 1000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12 Epoch 2000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12 Epoch 3000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12 Epoch 4000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12 @@ -130,9 +96,7 @@ Epoch 24000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12 Epoch 25000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12 88999125000.0 180396.42 -478869.12 -
- -
predictions = [] +
predictions = [] for x in abscissa: predictions.append((coefficient1*x + constant)) plt.plot(abscissa , ordinate, 'ro', label ='Original data') @@ -140,26 +104,22 @@ plt.title('Linear Regression Result') plt.legend() plt.show() -
- -

Quadratic Equation

with tf.Session() as sess: +

Quadratic Equation

with tf.Session() as sess: sess.run(init) for epoch in range(no_of_epochs): for (x,y) in zip(abscissa, ordinate): sess.run(optimizer2, feed_dict={X:x, Y:y}) if (epoch+1)%1000==0: cost = sess.run(mse2,feed_dict={X:abscissa,Y:ordinate}) - print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c:",sess.run(a),sess.run(b),sess.run(c)) + print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c:",sess.run(a),sess.run(b),sess.run(c)) training_cost = sess.run(mse2,feed_dict={X:abscissa,Y:ordinate}) coefficient1 = sess.run(a) coefficient2 = sess.run(b) constant = sess.run(c) -print(training_cost, coefficient1, coefficient2, constant) -
- -
Epoch 1000 : Training Cost: 52571360000.0 a,b,c: 1002.4456 1097.0197 1276.6921 +print(training_cost, coefficient1, coefficient2, constant) +
Epoch 1000 : Training Cost: 52571360000.0 a,b,c: 1002.4456 1097.0197 1276.6921 Epoch 2000 : Training Cost: 37798890000.0 a,b,c: 1952.4263 2130.2825 2469.7756 Epoch 3000 : Training Cost: 26751185000.0 a,b,c: 2839.5825 3081.6118 3554.351 Epoch 4000 : Training Cost: 19020106000.0 a,b,c: 3644.56 3922.9563 4486.3135 @@ -185,9 +145,7 @@ Epoch 24000 : Training Cost: 8088001000.0 a,b,c: 6632.96 3399.878 -79.89219 Epoch 25000 : Training Cost: 8058094600.0 a,b,c: 6659.793 3227.2517 -463.03156 8058094600.0 6659.793 3227.2517 -463.03156 -
- -
predictions = [] +
predictions = [] for x in abscissa: predictions.append((coefficient1*pow(x,2) + coefficient2*x + constant)) plt.plot(abscissa , ordinate, 'ro', label ='Original data') @@ -195,16 +153,14 @@ plt.title('Quadratic Regression Result') plt.legend() plt.show() -
- -

Cubic

with tf.Session() as sess: +

Cubic

with tf.Session() as sess: sess.run(init) for epoch in range(no_of_epochs): for (x,y) in zip(abscissa, ordinate): sess.run(optimizer3, feed_dict={X:x, Y:y}) if (epoch+1)%1000==0: cost = sess.run(mse3,feed_dict={X:abscissa,Y:ordinate}) - print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d:",sess.run(a),sess.run(b),sess.run(c),sess.run(d)) + print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d:",sess.run(a),sess.run(b),sess.run(c),sess.run(d)) training_cost = sess.run(mse3,feed_dict={X:abscissa,Y:ordinate}) coefficient1 = sess.run(a) @@ -212,10 +168,8 @@ coefficient3 = sess.run(c) constant = sess.run(d) -print(training_cost, coefficient1, coefficient2, coefficient3, constant) -
- -
Epoch 1000 : Training Cost: 4279814000.0 a,b,c,d: 670.1527 694.4212 751.4653 903.9527 +print(training_cost, coefficient1, coefficient2, coefficient3, constant) +
Epoch 1000 : Training Cost: 4279814000.0 a,b,c,d: 670.1527 694.4212 751.4653 903.9527 Epoch 2000 : Training Cost: 3770950400.0 a,b,c,d: 742.6414 666.3489 636.94525 859.2088 Epoch 3000 : Training Cost: 3717708300.0 a,b,c,d: 756.2582 569.3339 448.105 748.23956 Epoch 4000 : Training Cost: 3667464000.0 a,b,c,d: 769.4476 474.0318 265.5761 654.75525 @@ -241,9 +195,7 @@ Epoch 24000 : Training Cost: 3070361300.0 a,b,c,d: 975.52875 -1095.4292 -2211.854 1847.4485 Epoch 25000 : Training Cost: 3052791300.0 a,b,c,d: 983.4346 -1159.7922 -2286.9412 2027.4857 3052791300.0 983.4346 -1159.7922 -2286.9412 2027.4857 -
- -
predictions = [] +
predictions = [] for x in abscissa: predictions.append((coefficient1*pow(x,3) + coefficient2*pow(x,2) + coefficient3*x + constant)) plt.plot(abscissa , ordinate, 'ro', label ='Original data') @@ -251,16 +203,14 @@ plt.title('Cubic Regression Result') plt.legend() plt.show() -
- -

Quartic

with tf.Session() as sess: +

Quartic

with tf.Session() as sess: sess.run(init) for epoch in range(no_of_epochs): for (x,y) in zip(abscissa, ordinate): sess.run(optimizer4, feed_dict={X:x, Y:y}) if (epoch+1)%1000==0: cost = sess.run(mse4,feed_dict={X:abscissa,Y:ordinate}) - print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d:",sess.run(a),sess.run(b),sess.run(c),sess.run(d),sess.run(e)) + print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d:",sess.run(a),sess.run(b),sess.run(c),sess.run(d),sess.run(e)) training_cost = sess.run(mse4,feed_dict={X:abscissa,Y:ordinate}) coefficient1 = sess.run(a) @@ -269,10 +219,8 @@ coefficient4 = sess.run(d) constant = sess.run(e) -print(training_cost, coefficient1, coefficient2, coefficient3, coefficient4, constant) -
- -
Epoch 1000 : Training Cost: 1902632600.0 a,b,c,d: 84.48304 52.210594 54.791424 142.51952 512.0343 +print(training_cost, coefficient1, coefficient2, coefficient3, coefficient4, constant) +
Epoch 1000 : Training Cost: 1902632600.0 a,b,c,d: 84.48304 52.210594 54.791424 142.51952 512.0343 Epoch 2000 : Training Cost: 1854316200.0 a,b,c,d: 88.998955 13.073557 14.276088 223.55667 1056.4655 Epoch 3000 : Training Cost: 1812812400.0 a,b,c,d: 92.9462 -22.331177 -15.262934 327.41858 1634.9054 Epoch 4000 : Training Cost: 1775716000.0 a,b,c,d: 96.42522 -54.64535 -35.829437 449.5028 2239.1392 @@ -298,9 +246,7 @@ Epoch 24000 : Training Cost: 1252052600.0 a,b,c,d: 135.9583 -493.38254 90.268616 3764.0078 15010.481 Epoch 25000 : Training Cost: 1231713700.0 a,b,c,d: 137.54753 -512.1876 101.59372 3926.4897 15609.368 1231713700.0 137.54753 -512.1876 101.59372 3926.4897 15609.368 -
- -
predictions = [] +
predictions = [] for x in abscissa: predictions.append((coefficient1*pow(x,4) + coefficient2*pow(x,3) + coefficient3*pow(x,2) + coefficient4*x + constant)) plt.plot(abscissa , ordinate, 'ro', label ='Original data') @@ -308,16 +254,14 @@ plt.title('Quartic Regression Result') plt.legend() plt.show() -
- -

Quintic

with tf.Session() as sess: +

Quintic

with tf.Session() as sess: sess.run(init) for epoch in range(no_of_epochs): for (x,y) in zip(abscissa, ordinate): sess.run(optimizer5, feed_dict={X:x, Y:y}) if (epoch+1)%1000==0: cost = sess.run(mse5,feed_dict={X:abscissa,Y:ordinate}) - print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d,e,f:",sess.run(a),sess.run(b),sess.run(c),sess.run(d),sess.run(e),sess.run(f)) + print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d,e,f:",sess.run(a),sess.run(b),sess.run(c),sess.run(d),sess.run(e),sess.run(f)) training_cost = sess.run(mse5,feed_dict={X:abscissa,Y:ordinate}) coefficient1 = sess.run(a) @@ -326,9 +270,7 @@ coefficient4 = sess.run(d) coefficient5 = sess.run(e) constant = sess.run(f) -
- -
Epoch 1000 : Training Cost: 1409200100.0 a,b,c,d,e,f: 7.949472 7.46219 55.626034 184.29028 484.00223 1024.0083 +
Epoch 1000 : Training Cost: 1409200100.0 a,b,c,d,e,f: 7.949472 7.46219 55.626034 184.29028 484.00223 1024.0083 Epoch 2000 : Training Cost: 1306882400.0 a,b,c,d,e,f: 8.732181 -4.0085897 73.25298 315.90103 904.08887 2004.9749 Epoch 3000 : Training Cost: 1212606000.0 a,b,c,d,e,f: 9.732249 -16.90125 86.28379 437.06552 1305.055 2966.2188 Epoch 4000 : Training Cost: 1123640400.0 a,b,c,d,e,f: 10.74851 -29.82692 98.59997 555.331 1698.4631 3917.9155 @@ -354,9 +296,7 @@ Epoch 24000 : Training Cost: 229660080.0 a,b,c,d,e,f: 27.102589 -238.44817 309.35342 2420.4185 7770.5728 19536.19 Epoch 25000 : Training Cost: 216972400.0 a,b,c,d,e,f: 27.660324 -245.69016 318.10062 2483.3608 7957.354 20027.707 216972400.0 27.660324 -245.69016 318.10062 2483.3608 7957.354 20027.707 -
- -
predictions = [] +
predictions = [] for x in abscissa: predictions.append((coefficient1*pow(x,5) + coefficient2*pow(x,4) + coefficient3*pow(x,3) + coefficient4*pow(x,2) + coefficient5*x + constant)) plt.plot(abscissa , ordinate, 'ro', label ='Original data') @@ -364,6 +304,4 @@ plt.title('Quintic Regression Result') plt.legend() plt.show() -
- -

Results and Conclusion

You just learnt Polynomial Regression using TensorFlow!

Notes

Overfitting

> Overfitting refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the model's ability to generalize.

Source: Machine Learning Mastery

Basically, if you train your machine learning model on a small dataset for a really large number of epochs, the model will learn all the deformities/noise in the data and will start treating it as normal. Then, when it sees some new data, it will discard that new data as noise, which impacts the accuracy of the model in a negative manner

Tagged with:
\ No newline at end of file +

Results and Conclusion

You just learnt Polynomial Regression using TensorFlow!

Notes

Overfitting

> Overfitting refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the model's ability to generalize.

Source: Machine Learning Mastery

Basically, if you train your machine learning model on a small dataset for a really large number of epochs, the model will learn all the deformities/noise in the data and will start treating it as normal. Then, when it sees some new data, it will discard that new data as noise, which impacts the accuracy of the model in a negative manner

Tagged with:
\ No newline at end of file diff --git a/posts/2019-12-22-Fake-News-Detector/index.html b/posts/2019-12-22-Fake-News-Detector/index.html index 7254fc0..96ebd30 100644 --- a/posts/2019-12-22-Fake-News-Detector/index.html +++ b/posts/2019-12-22-Fake-News-Detector/index.html @@ -1,33 +1,19 @@ Building a Fake News Detector with Turicreate | Navan Chauhan
7 minute read · Created on December 22, 2019 · Last modified on January 18, 2020

Building a Fake News Detector with Turicreate

In this tutorial we will build a fake news detecting app from scratch, using Turicreate for the machine learning model and SwiftUI for building the app

Note: These commands are written as if you are running a jupyter notebook.

Building the Machine Learning Model

Data Gathering

To build a classifier, you need a lot of data. George McIntire (GH: @joolsa) has created a wonderful dataset containing the headline, body and whether it is fake or real. Whenever you are looking for a dataset, always try searching on Kaggle and GitHub before you start building your own.

Dependencies

I used a Google Colab instance for training my model. If you also plan on using Google Colab, then I recommend choosing a GPU instance (it is free); this allows you to train the model on the GPU. Turicreate is built on top of Apache's MXNet framework, so for us to use the GPU we need to install a CUDA-compatible MXNet package.

!pip install turicreate !pip uninstall -y mxnet !pip install mxnet-cu100==1.4.0.post0 -
- -

If you do not wish to train on GPU or are running it on your computer, you can ignore the last two lines

Downloading the Dataset

!wget -q "https://github.com/joolsa/fake_real_news_dataset/raw/master/fake_or_real_news.csv.zip" +

If you do not wish to train on GPU or are running it on your computer, you can ignore the last two lines

Downloading the Dataset

!wget -q "https://github.com/joolsa/fake_real_news_dataset/raw/master/fake_or_real_news.csv.zip" !unzip fake_or_real_news.csv.zip -
- -

Model Creation

import turicreate as tc +

Model Creation

import turicreate as tc tc.config.set_num_gpus(-1) # If you do not wish to use GPUs, set it to 0 -
- -
dataSFrame = tc.SFrame('fake_or_real_news.csv') -
- -

The dataset contains a column named "X1", which is of no use to us. Therefore, we simply drop it

dataSFrame.remove_column('X1') -
- -

Splitting Dataset

train, test = dataSFrame.random_split(.9) -
- -

Training

model = tc.text_classifier.create( +
dataSFrame = tc.SFrame('fake_or_real_news.csv') +

The dataset contains a column named "X1", which is of no use to us. Therefore, we simply drop it

dataSFrame.remove_column('X1') +

Splitting Dataset

train, test = dataSFrame.random_split(.9) +

Training

model = tc.text_classifier.create( dataset=train, target='label', features=['title','text'] ) -
- -
+-----------+----------+-----------+--------------+-------------------+---------------------+ +
+-----------+----------+-----------+--------------+-------------------+---------------------+ | Iteration | Passes | Step size | Elapsed Time | Training Accuracy | Validation Accuracy | +-----------+----------+-----------+--------------+-------------------+---------------------+ | 0 | 2 | 1.000000 | 1.156349 | 0.889680 | 0.790036 | @@ -37,35 +23,23 @@ | 4 | 8 | 1.000000 | 1.814194 | 0.999063 | 0.925267 | | 9 | 14 | 1.000000 | 2.507072 | 1.000000 | 0.911032 | +-----------+----------+-----------+--------------+-------------------+---------------------+ -
- -

Testing the Model

est_predictions = model.predict(test) +

Testing the Model

est_predictions = model.predict(test) accuracy = tc.evaluation.accuracy(test['label'], test_predictions) -print(f'Topic classifier model has a testing accuracy of {accuracy*100}% ', flush=True) -
- -
Topic classifier model has a testing accuracy of 92.3076923076923% -
- -

We have just created our own Fake News Detection Model which has an accuracy of 92%!

example_text = {"title": ["Middling ‘Rise Of Skywalker’ Review Leaves Fan On Fence About Whether To Threaten To Kill Critic"], "text": ["Expressing ambivalence toward the relatively balanced appraisal of the film, Star Wars fan Miles Ariely admitted Thursday that an online publication’s middling review of The Rise Of Skywalker had left him on the fence about whether he would still threaten to kill the critic who wrote it. “I’m really of two minds about this, because on the one hand, he said the new movie fails to live up to the original trilogy, which makes me at least want to throw a brick through his window with a note telling him to watch his back,” said Ariely, confirming he had already drafted an eight-page-long death threat to Stan Corimer of the website Screen-On Time, but had not yet decided whether to post it to the reviewer’s Facebook page. “On the other hand, though, he commended J.J. Abrams’ skillful pacing and faithfulness to George Lucas’ vision, which makes me wonder if I should just call the whole thing off. Now, I really don’t feel like camping outside his house for hours. Maybe I could go with a response that’s somewhere in between, like, threatening to kill his dog but not everyone in his whole family? I don’t know. This is a tough one.” At press time, sources reported that Ariely had resolved to wear his Ewok costume while he murdered the critic in his sleep."]} +print(f'Topic classifier model has a testing accuracy of {accuracy*100}% ', flush=True) +
Topic classifier model has a testing accuracy of 92.3076923076923% +

We have just created our own Fake News Detection Model which has an accuracy of 92%!

example_text = {"title": ["Middling ‘Rise Of Skywalker’ Review Leaves Fan On Fence About Whether To Threaten To Kill Critic"], "text": ["Expressing ambivalence toward the relatively balanced appraisal of the film, Star Wars fan Miles Ariely admitted Thursday that an online publication’s middling review of The Rise Of Skywalker had left him on the fence about whether he would still threaten to kill the critic who wrote it. “I’m really of two minds about this, because on the one hand, he said the new movie fails to live up to the original trilogy, which makes me at least want to throw a brick through his window with a note telling him to watch his back,” said Ariely, confirming he had already drafted an eight-page-long death threat to Stan Corimer of the website Screen-On Time, but had not yet decided whether to post it to the reviewer’s Facebook page. “On the other hand, though, he commended J.J. Abrams’ skillful pacing and faithfulness to George Lucas’ vision, which makes me wonder if I should just call the whole thing off. Now, I really don’t feel like camping outside his house for hours. Maybe I could go with a response that’s somewhere in between, like, threatening to kill his dog but not everyone in his whole family? I don’t know. This is a tough one.” At press time, sources reported that Ariely had resolved to wear his Ewok costume while he murdered the critic in his sleep."]} example_prediction = model.classify(tc.SFrame(example_text)) -print(example_prediction, flush=True) -
- -
+-------+--------------------+ +print(example_prediction, flush=True) +
+-------+--------------------+ | class | probability | +-------+--------------------+ | FAKE | 0.9245648658345308 | +-------+--------------------+ [1 rows x 2 columns] -
- -

Exporting the Model

model_name = 'FakeNews' +

Exporting the Model

model_name = 'FakeNews' coreml_model_name = model_name + '.mlmodel' exportedModel = model.export_coreml(coreml_model_name) -
- -

Note: To download files from Google Colab, simply click on the Files section in the sidebar, right-click on the filename and then click on Download

Link to Colab Notebook

Building the App using SwiftUI

Initial Setup

First we create a single view app (make sure you check the use SwiftUI button)

Then we copy our .mlmodel file to our project (Just drag and drop the file in the XCode Files Sidebar)

Our ML model does not take a string directly as an input; rather, it takes a bag of words as an input. The bag-of-words model is a simplifying representation used in NLP, in which text is represented as a bag of its words, without any regard for grammar or order, but noting multiplicity

We define our bag of words function

func bow(text: String) -> [String: Double] { +

Note: To download files from Google Colab, simply click on the Files section in the sidebar, right-click on the filename and then click on Download

Link to Colab Notebook

Building the App using SwiftUI

Initial Setup

First we create a single view app (make sure you check the use SwiftUI button)

Then we copy our .mlmodel file to our project (Just drag and drop the file in the XCode Files Sidebar)

Our ML model does not take a string directly as an input; rather, it takes a bag of words as an input. The bag-of-words model is a simplifying representation used in NLP, in which text is represented as a bag of its words, without any regard for grammar or order, but noting multiplicity
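As a toy illustration of the representation (plain Python, purely to show the idea; the Swift implementation we actually use is defined below):

from collections import Counter

def toy_bow(text):
    # Count word multiplicities, ignoring grammar and order
    return dict(Counter(text.lower().split()))

print(toy_bow("the cat sat on the mat"))
# {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}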

We define our bag of words function

func bow(text: String) -> [String: Double] { var bagOfWords = [String: Double]() let tagger = NSLinguisticTagger(tagSchemes: [.tokenType], options: 0) @@ -84,16 +58,12 @@ return bagOfWords } -
- -

We also declare our variables

@State private var title: String = "" +

We also declare our variables

@State private var title: String = "" @State private var headline: String = "" @State private var alertTitle = "" @State private var alertText = "" @State private var showingAlert = false -
- -

Finally, we implement a simple function which reads the two text fields, creates their bag of words representation and displays an alert with the appropriate result

Complete Code

import SwiftUI +

Finally, we implement a simple function which reads the two text fields, creates their bag of words representation and displays an alert with the appropriate result

Complete Code

import SwiftUI struct ContentView: View { @State private var title: String = "" @@ -168,6 +138,4 @@ ContentView() } } -
- -
Tagged with:
\ No newline at end of file +Tagged with:
\ No newline at end of file diff --git a/posts/2020-01-14-Converting-between-PIL-NumPy/index.html b/posts/2020-01-14-Converting-between-PIL-NumPy/index.html index 13d0379..13cd71d 100644 --- a/posts/2020-01-14-Converting-between-PIL-NumPy/index.html +++ b/posts/2020-01-14-Converting-between-PIL-NumPy/index.html @@ -7,13 +7,9 @@ # Convert array to Image img = PIL.Image.fromarray(arr) - - -

Saving an Image

try: - img.save(destination, "JPEG", quality=80, optimize=True, progressive=True) +

Saving an Image

try: + img.save(destination, "JPEG", quality=80, optimize=True, progressive=True) except IOError: PIL.ImageFile.MAXBLOCK = img.size[0] * img.size[1] - img.save(destination, "JPEG", quality=80, optimize=True, progressive=True) -
- -
Tagged with:
\ No newline at end of file + img.save(destination, "JPEG", quality=80, optimize=True, progressive=True) +Tagged with:
\ No newline at end of file diff --git a/posts/2020-01-15-Setting-up-Kaggle-to-use-with-Colab/index.html b/posts/2020-01-15-Setting-up-Kaggle-to-use-with-Colab/index.html index ea6a41c..229d6dd 100644 --- a/posts/2020-01-15-Setting-up-Kaggle-to-use-with-Colab/index.html +++ b/posts/2020-01-15-Setting-up-Kaggle-to-use-with-Colab/index.html @@ -1,9 +1,5 @@ Setting up Kaggle to use with Google Colab | Navan Chauhan
1 minute read · Created on January 15, 2020 · Last modified on January 19, 2020

Setting up Kaggle to use with Google Colab

In order to be able to access Kaggle Datasets, you will need to have an account on Kaggle (which is Free)

Grabbing Our Tokens

Go to Kaggle

Click on your User Profile and Click on My Account

Scroll down until you see Create New API Token

This will download your token as a JSON file

Copy the File to the root folder of your Google Drive

Setting up Colab

Mounting Google Drive

import os from google.colab import drive drive.mount('/content/drive') -
- -

After this, click on the URL in the output section, log in and then paste the Auth Code

Configuring Kaggle

os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/My Drive/" -
- -

Voila! You can now download Kaggle datasets

Tagged with:
\ No newline at end of file +

After this, click on the URL in the output section, log in and then paste the Auth Code

Configuring Kaggle

os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/My Drive/" +

Voila! You can now download Kaggle datasets
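For example, to fetch and extract a dataset (using the fire-and-smoke dataset from the next post purely as an illustration):

!kaggle datasets download ashutosh69/fire-and-smoke-dataset
!unzip "fire-and-smoke-dataset.zip"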

Tagged with:
\ No newline at end of file diff --git a/posts/2020-01-16-Image-Classifier-Using-Turicreate/index.html b/posts/2020-01-16-Image-Classifier-Using-Turicreate/index.html index 187a8d2..f21e657 100644 --- a/posts/2020-01-16-Image-Classifier-Using-Turicreate/index.html +++ b/posts/2020-01-16-Image-Classifier-Using-Turicreate/index.html @@ -1,20 +1,12 @@ Creating a Custom Image Classifier using Turicreate to detect Smoke and Fire | Navan Chauhan
6 minute read · Created on January 16, 2020 · Last modified on January 19, 2020

Creating a Custom Image Classifier using Turicreate to detect Smoke and Fire

For setting up Kaggle with Google Colab, please refer to my previous post

Dataset

Mounting Google Drive

import os from google.colab import drive drive.mount('/content/drive') -
- -

Downloading Dataset from Kaggle

os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/My Drive/" +

Downloading Dataset from Kaggle

os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/My Drive/" !kaggle datasets download ashutosh69/fire-and-smoke-dataset !unzip "fire-and-smoke-dataset.zip" -
- -

Pre-Processing

!mkdir default smoke fire -
- -


!ls data/data/img_data/train/default/*.jpg -
- -


img_1002.jpg img_20.jpg img_519.jpg img_604.jpg img_80.jpg +

Pre-Processing

!mkdir default smoke fire +


!ls data/data/img_data/train/default/*.jpg +


img_1002.jpg img_20.jpg img_519.jpg img_604.jpg img_80.jpg img_1003.jpg img_21.jpg img_51.jpg img_60.jpg img_8.jpg img_1007.jpg img_22.jpg img_520.jpg img_61.jpg img_900.jpg img_100.jpg img_23.jpg img_521.jpg 'img_62 (2).jpg' img_920.jpg @@ -47,49 +39,39 @@ img_204.jpg img_501.jpg img_601.jpg img_78.jpg img_205.jpg img_502.jpg img_602.jpg img_79.jpg img_206.jpg img_50.jpg img_603.jpg img_7.jpg -
- -

The image files are not actually JPEGs, so we first need to save them in the correct format for Turicreate

from PIL import Image +

The image files are not actually JPEGs, so we first need to save them in the correct format for Turicreate

from PIL import Image import glob folders = ["default","smoke","fire"] for folder in folders: n = 1 - for file in glob.glob("./data/data/img_data/train/" + folder + "/*.jpg"): - im = Image.open(file) + for file in glob.glob("./data/data/img_data/train/" + folder + "/*.jpg"): + im = Image.open(file) rgb_im = im.convert('RGB') rgb_im.save((folder + "/" + str(n) + ".jpg"), quality=100) n +=1 - for file in glob.glob("./data/data/img_data/train/" + folder + "/*.jpg"): - im = Image.open(file) + for file in glob.glob("./data/data/img_data/train/" + folder + "/*.jpg"): + im = Image.open(file) rgb_im = im.convert('RGB') rgb_im.save((folder + "/" + str(n) + ".jpg"), quality=100) n +=1 -
- -


!mkdir train +


!mkdir train !mv default ./train !mv smoke ./train !mv fire ./train -
- -

Making the Image Classifier

Making an SFrame

!pip install turicreate -
- -


import turicreate as tc +

Making the Image Classifier

Making an SFrame

!pip install turicreate +


import turicreate as tc import os -data = tc.image_analysis.load_images("./train", with_path=True) +data = tc.image_analysis.load_images("./train", with_path=True) data["label"] = data["path"].apply(lambda path: os.path.basename(os.path.dirname(path))) -print(data) +print(data) data.save('fire-smoke.sframe') -
- -


+-------------------------+------------------------+ +


+-------------------------+------------------------+ | path | image | +-------------------------+------------------------+ | ./train/default/1.jpg | Height: 224 Width: 224 | @@ -123,9 +105,7 @@ [2028 rows x 3 columns] Note: Only the head of the SFrame is printed. You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns. -
- -

Making the Model

import turicreate as tc +

Making the Model

import turicreate as tc # Load the data data = tc.SFrame('fire-smoke.sframe') @@ -141,16 +121,14 @@ # Evaluate the model and print the results metrics = model.evaluate(test_data) -print(metrics['accuracy']) +print(metrics['accuracy']) # Save the model for later use in Turi Create model.save('fire-smoke.model') # Export for use in Core ML model.export_coreml('fire-smoke.mlmodel') -
- -


Performing feature extraction on resized images... +


Performing feature extraction on resized images... Completed 64/1633 Completed 128/1633 Completed 192/1633 @@ -208,6 +186,4 @@ Completed 384/395 Completed 395/395 0.9316455696202531 -
- -

We just got an accuracy of 94% on Training Data and 97% on Validation Data!

Tagged with:
\ No newline at end of file +

We just got an accuracy of 94% on Training Data and 97% on Validation Data!
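As a follow-up, here is a minimal sketch of loading the saved model later to classify new images (the ./test_images folder is hypothetical):

import turicreate as tc

# Load the model saved above and classify a folder of new images
model = tc.load_model('fire-smoke.model')
new_images = tc.image_analysis.load_images('./test_images', with_path=True)
print(model.predict(new_images))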

Tagged with:
\ No newline at end of file diff --git a/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOS/index.html b/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOS/index.html index 0cffcf0..c9f79b4 100644 --- a/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOS/index.html +++ b/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOS/index.html @@ -1,18 +1,16 @@ -Fixing X11 Error on macOS Catalina for AmberTools 18/19 | Navan Chauhan
2 minute read · Created on April 13, 2020

Fixing X11 Error on macOS Catalina for AmberTools 18/19

I was trying to install AmberTools on my macOS Catalina Installation. Running ./configure -macAccelerate clang gave me an error that it could not find X11 libraries, even though locate libXt showed that my installation was correct.

Error:

Could not find the X11 libraries; you may need to edit config.h - to set the XHOME and XLIBS variables. -Error: The X11 libraries are not in the usual location ! - To search for them try the command: locate libXt - On new Fedora OS's install the libXt-devel libXext-devel - libX11-devel libICE-devel libSM-devel packages. - On old Fedora OS's install the xorg-x11-devel package. - On RedHat OS's install the XFree86-devel package. - On Ubuntu OS's install the xorg-dev and xserver-xorg packages. +Fixing X11 Error on macOS Catalina for AmberTools 18/19 | Navan Chauhan
2 minute read · Created on April 13, 2020 · Last modified on May 18, 2020

Fixing X11 Error on macOS Catalina for AmberTools 18/19

I was trying to install AmberTools on my macOS Catalina Installation. Running ./configure -macAccelerate clang gave me an error that it could not find X11 libraries, even though locate libXt showed that my installation was correct.

Error:

Could not find the X11 libraries; you may need to edit config.h
   to set the XHOME and XLIBS variables.
Error: The X11 libraries are not in the usual location !
       To search for them try the command: locate libXt
   On new Fedora OS's install the libXt-devel libXext-devel
   libX11-devel libICE-devel libSM-devel packages.
   On old Fedora OS's install the xorg-x11-devel package.
   On RedHat OS's install the XFree86-devel package.
   On Ubuntu OS's install the xorg-dev and xserver-xorg packages.

    ...more info for various linuxes at ambermd.org/ubuntu.html

    To build Amber without XLEaP, re-run configure with '-noX11:
    ./configure -noX11 --with-python /usr/local/bin/python3 -macAccelerate clang
Configure failed due to the errors above!

I searched Google for a solution; sadly, there was not a single thread with a fix for this error.

The Fix

Simply reinstalling XQuartz using Homebrew fixed the error: brew cask reinstall xquartz

If you do not have XQuartz installed, run brew cask install xquartz instead.
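To recap, the two commands from above (pick whichever matches your setup):

brew cask reinstall xquartz
brew cask install xquartz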

Tagged with:
\ No newline at end of file diff --git a/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response/index.html b/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response/index.html index 7ec00f4..7b383c5 100644 --- a/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response/index.html +++ b/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response/index.html @@ -1,7 +1,3 @@ -Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response | Navan Chauhan
1 minute readCreated on May 14, 2019Last modified on March 14, 2020

Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response

Based on the project showcased at Toyota Hackathon, IITD - 17/18th December 2018

Edit: It seems like I haven't mentioned Adrian Rosebrock of PyImageSearch anywhere. I apologize for this mistake.

Download paper here

Recommended citation:

APA

Chauhan, N. (2019). "Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response." International Research Journal of Engineering and Technology (IRJET), 6(5).

BibTeX

@article{chauhan_2019,
  title={Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response},
  volume={6},
  url={https://www.irjet.net/archives/V6/i5/IRJET-V6I5318.pdf},
  number={5},
  journal={International Research Journal of Engineering and Technology (IRJET)},
  author={Chauhan, Navan},
  year={2019}
}
Tagged with:
\ No newline at end of file diff --git a/publications/2020-03-14-generating-vaporwave/index.html b/publications/2020-03-14-generating-vaporwave/index.html index 491da29..1fa5e82 100644 --- a/publications/2020-03-14-generating-vaporwave/index.html +++ b/publications/2020-03-14-generating-vaporwave/index.html @@ -1,21 +1,13 @@ -Is it possible to programmatically generate Vaporwave? | Navan Chauhan
1 minute readCreated on March 14, 2020Last modified on March 15, 2020

Is it possible to programmatically generate Vaporwave?

This is still a pre-print.

Download paper here

Recommended citation:

APA

Chauhan, N. (2020, March 15). Is it possible to programmatically generate Vaporwave?. https://doi.org/10.35543/osf.io/9um2r +

MLA

Chauhan, Navan. “Is It Possible to Programmatically Generate Vaporwave?.” IndiaRxiv, 15 Mar. 2020. Web. +

Chicago

Chauhan, Navan. 2020. “Is It Possible to Programmatically Generate Vaporwave?.” IndiaRxiv. March 15. doi:10.35543/osf.io/9um2r. +

BibTeX

@misc{chauhan_2020, + title={Is it possible to programmatically generate Vaporwave?}, + url={indiarxiv.org/9um2r}, + DOI={10.35543/osf.io/9um2r}, + publisher={IndiaRxiv}, + author={Chauhan, Navan}, + year={2020}, + month={Mar} +} +
Tagged with:
\ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index d5615e8..e3245aa 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1 +1 @@ -https://navanchauhan.github.io/aboutdaily1.02020-02-07https://navanchauhan.github.io/postsdaily1.02020-04-13https://navanchauhan.github.io/posts/2010-01-24-experimentsmonthly0.52020-02-04https://navanchauhan.github.io/posts/2019-05-05-Custom-Snowboard-Anemone-Thememonthly0.52020-03-09https://navanchauhan.github.io/posts/2019-12-04-Google-Teachable-Machinesmonthly0.52020-03-09https://navanchauhan.github.io/posts/2019-12-08-Image-Classifier-Tensorflowmonthly0.52020-01-18https://navanchauhan.github.io/posts/2019-12-08-Splitting-Zipsmonthly0.52020-01-18https://navanchauhan.github.io/posts/2019-12-10-TensorFlow-Model-Predictionmonthly0.52020-01-18https://navanchauhan.github.io/posts/2019-12-16-TensorFlow-Polynomial-Regressionmonthly0.52020-01-18https://navanchauhan.github.io/posts/2019-12-22-Fake-News-Detectormonthly0.52020-01-18https://navanchauhan.github.io/posts/2020-01-14-Converting-between-PIL-NumPymonthly0.52020-03-09https://navanchauhan.github.io/posts/2020-01-15-Setting-up-Kaggle-to-use-with-Colabmonthly0.52020-01-19https://navanchauhan.github.io/posts/2020-01-16-Image-Classifier-Using-Turicreatemonthly0.52020-01-19https://navanchauhan.github.io/posts/2020-01-19-Connect-To-Bluetooth-Devices-Linux-Terminalmonthly0.52020-01-20https://navanchauhan.github.io/posts/2020-03-03-Playing-With-Android-TVmonthly0.52020-03-10https://navanchauhan.github.io/posts/2020-03-08-Making-Vaporwave-Trackmonthly0.52020-03-08https://navanchauhan.github.io/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOSmonthly0.52020-04-13https://navanchauhan.github.io/posts/hello-worldmonthly0.52020-01-04https://navanchauhan.github.io/publicationsdaily1.02020-03-17https://navanchauhan.github.io/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Responsemonthly0.52020-03-14https://navanchauhan.github.io/publications/2020-03-14-generating-vaporwavemonthly0.52020-03-15https://navanchauhan.github.io/publications/2020-03-17-Possible-Drug-Candidates-COVID-19monthly0.52020-03-18 \ No newline at end of file 
https://navanchauhan.github.io/about (daily, 1.0, 2020-02-07)
https://navanchauhan.github.io/posts (daily, 1.0, 2020-04-13)
https://navanchauhan.github.io/posts/2010-01-24-experiments (monthly, 0.5, 2020-02-04)
https://navanchauhan.github.io/posts/2019-05-05-Custom-Snowboard-Anemone-Theme (monthly, 0.5, 2020-03-09)
https://navanchauhan.github.io/posts/2019-12-04-Google-Teachable-Machines (monthly, 0.5, 2020-03-09)
https://navanchauhan.github.io/posts/2019-12-08-Image-Classifier-Tensorflow (monthly, 0.5, 2020-05-24)
https://navanchauhan.github.io/posts/2019-12-08-Splitting-Zips (monthly, 0.5, 2020-01-18)
https://navanchauhan.github.io/posts/2019-12-10-TensorFlow-Model-Prediction (monthly, 0.5, 2020-01-18)
https://navanchauhan.github.io/posts/2019-12-16-TensorFlow-Polynomial-Regression (monthly, 0.5, 2020-01-18)
https://navanchauhan.github.io/posts/2019-12-22-Fake-News-Detector (monthly, 0.5, 2020-01-18)
https://navanchauhan.github.io/posts/2020-01-14-Converting-between-PIL-NumPy (monthly, 0.5, 2020-03-09)
https://navanchauhan.github.io/posts/2020-01-15-Setting-up-Kaggle-to-use-with-Colab (monthly, 0.5, 2020-01-19)
https://navanchauhan.github.io/posts/2020-01-16-Image-Classifier-Using-Turicreate (monthly, 0.5, 2020-01-19)
https://navanchauhan.github.io/posts/2020-01-19-Connect-To-Bluetooth-Devices-Linux-Terminal (monthly, 0.5, 2020-01-20)
https://navanchauhan.github.io/posts/2020-03-03-Playing-With-Android-TV (monthly, 0.5, 2020-03-10)
https://navanchauhan.github.io/posts/2020-03-08-Making-Vaporwave-Track (monthly, 0.5, 2020-03-08)
https://navanchauhan.github.io/posts/2020-04-13-Fixing-X11-Error-AmberTools-macOS (monthly, 0.5, 2020-05-18)
https://navanchauhan.github.io/posts/hello-world (monthly, 0.5, 2020-01-04)
https://navanchauhan.github.io/publications (daily, 1.0, 2020-03-17)
https://navanchauhan.github.io/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response (monthly, 0.5, 2020-03-14)
https://navanchauhan.github.io/publications/2020-03-14-generating-vaporwave (monthly, 0.5, 2020-03-15)
https://navanchauhan.github.io/publications/2020-03-17-Possible-Drug-Candidates-COVID-19 (monthly, 0.5, 2020-03-18)
\ No newline at end of file