path: root/feed.rss
author    Navan Chauhan <navanchauhan@gmail.com>    2020-01-18 19:47:54 +0530
committer Navan Chauhan <navanchauhan@gmail.com>    2020-01-18 19:47:54 +0530
commit    ef5a0a9f9f621e0550dc05ebddbae3c3eac8f352 (patch)
tree      7e92e6c0aea5e8f1542f2167a4c5637ae2915ea0 /feed.rss
parent    3307f004b0b41e6d1b1f526f6f9f60204b5fa2fe (diff)
Publish deploy 2020-01-18 19:47
Diffstat (limited to 'feed.rss')
-rw-r--r--   feed.rss   1271
1 file changed, 707 insertions(+), 564 deletions(-)
diff --git a/feed.rss b/feed.rss
index 77ca90b..0da058c 100644
--- a/feed.rss
+++ b/feed.rss
@@ -1,575 +1,718 @@
-<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content"><channel><title>Navan Chauhan</title><description>Welcome to my personal fragment of the internet.</description><link>https://output.navanchauhan.now.sh</link><language>en</language><lastBuildDate>Tue, 14 Jan 2020 22:27:44 +0530</lastBuildDate><pubDate>Tue, 14 Jan 2020 22:27:44 +0530</pubDate><ttl>250</ttl><atom:link href="https://output.navanchauhan.now.sh/feed.rss" rel="self" type="application/rss+xml"/><item><guid isPermaLink="true">https://output.navanchauhan.now.sh/posts/2020-01-14-Converting-between-PIL-NumPy</guid><title>Converting between image and NumPy array</title><description>Short code snippet for converting between PIL image and NumPy arrays.</description><link>https://output.navanchauhan.now.sh/posts/2020-01-14-Converting-between-PIL-NumPy</link><pubDate>Tue, 14 Jan 2020 00:10:00 +0530</pubDate><content:encoded><![CDATA[<h1>Converting between image and NumPy array</h1><pre><code>import numpy
-import PIL
-
-# Convert PIL Image to NumPy array
-img = PIL.Image.open("foo.jpg")
-arr = numpy.array(img)
-
-# Convert array to Image
-img = PIL.Image.fromarray(arr)
-</code></pre><h2>Saving an Image</h2><pre><code>try:
- img.save(destination, "JPEG", quality=80, optimize=True, progressive=True)
-except IOError:
- PIL.ImageFile.MAXBLOCK = img.size[0] * img.size[1]
- img.save(destination, "JPEG", quality=80, optimize=True, progressive=True)
-</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://output.navanchauhan.now.sh/posts/2019-12-22-Fake-News-Detector</guid><title>Building a Fake News Detector with Turicreate</title><description>In this tutorial we will build a fake news detecting app from scratch, using Turicreate for the machine learning model and SwiftUI for building the app</description><link>https://output.navanchauhan.now.sh/posts/2019-12-22-Fake-News-Detector</link><pubDate>Sun, 22 Dec 2019 11:10:00 +0530</pubDate><content:encoded><![CDATA[<h1>Building a Fake News Detector with Turicreate</h1><p><strong>In this tutorial we will build a fake news detecting app from scratch, using Turicreate for the machine learning model and SwiftUI for building the app</strong></p><p>Note: These commands are written as if you are running a Jupyter notebook.</p><h2>Building the Machine Learning Model</h2><h3>Data Gathering</h3><p>To build a classifier, you need a lot of data. George McIntire (GH: @joolsa) has created a wonderful dataset containing the headline, body and whether it is fake or real. Whenever you are looking for a dataset, always try searching on Kaggle and GitHub before you start building your own.</p><h3>Dependencies</h3><p>I used a Google Colab instance for training my model. If you also plan on using Google Colab, then I recommend choosing a GPU instance (it is free); this allows you to train the model on the GPU. Turicreate is built on top of Apache's MXNet framework, so for us to use the GPU we need to install a CUDA-compatible MXNet package.</p><pre><code>!pip install turicreate
-!pip uninstall -y mxnet
-!pip install mxnet-cu100==1.4.0.post0
-</code></pre><p>If you do not wish to train on GPU or are running it on your computer, you can ignore the last two lines</p><h3>Downloading the Dataset</h3><pre><code>!wget -q "https://github.com/joolsa/fake_real_news_dataset/raw/master/fake_or_real_news.csv.zip"
-!unzip fake_or_real_news.csv.zip
-</code></pre><h3>Model Creation</h3><pre><code>import turicreate as tc
-tc.config.set_num_gpus(-1) # If you do not wish to use GPUs, set it to 0
-</code></pre><pre><code>dataSFrame = tc.SFrame('fake_or_real_news.csv')
-</code></pre><p>The dataset contains a column named "X1", which is of no use to us. Therefore, we simply drop it</p><pre><code>dataSFrame.remove_column('X1')
-</code></pre><h4>Splitting Dataset</h4><pre><code>train, test = dataSFrame.random_split(.9)
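-# random_split(.9) keeps roughly 90% of the rows for training and 10% for testing.
-# Assumption: passing a seed, e.g. dataSFrame.random_split(.9, seed=42), should make the split reproducible.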
-</code></pre><h4>Training</h4><pre><code>model = tc.text_classifier.create(
- dataset=train,
- target='label',
- features=['title','text']
-)
-</code></pre><pre><code>+-----------+----------+-----------+--------------+-------------------+---------------------+
-| Iteration | Passes | Step size | Elapsed Time | Training Accuracy | Validation Accuracy |
-+-----------+----------+-----------+--------------+-------------------+---------------------+
-| 0 | 2 | 1.000000 | 1.156349 | 0.889680 | 0.790036 |
-| 1 | 4 | 1.000000 | 1.359196 | 0.985952 | 0.918149 |
-| 2 | 6 | 0.820091 | 1.557205 | 0.990260 | 0.914591 |
-| 3 | 7 | 1.000000 | 1.684872 | 0.998689 | 0.925267 |
-| 4 | 8 | 1.000000 | 1.814194 | 0.999063 | 0.925267 |
-| 9 | 14 | 1.000000 | 2.507072 | 1.000000 | 0.911032 |
-+-----------+----------+-----------+--------------+-------------------+---------------------+
-</code></pre><h3>Testing the Model</h3><pre><code>test_predictions = model.predict(test)
-accuracy = tc.evaluation.accuracy(test['label'], test_predictions)
-print(f'Topic classifier model has a testing accuracy of {accuracy*100}% ', flush=True)
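-# A confusion matrix can also be inspected the same way (assuming the
-# tc.evaluation.confusion_matrix helper), e.g.:
-# conf_matrix = tc.evaluation.confusion_matrix(test['label'], test_predictions)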
-</code></pre><pre><code>Topic classifier model has a testing accuracy of 92.3076923076923%
-</code></pre><p>We have just created our own Fake News Detection Model which has an accuracy of 92%!</p><pre><code>example_text = {"title": ["Middling ‘Rise Of Skywalker’ Review Leaves Fan On Fence About Whether To Threaten To Kill Critic"], "text": ["Expressing ambivalence toward the relatively balanced appraisal of the film, Star Wars fan Miles Ariely admitted Thursday that an online publication’s middling review of The Rise Of Skywalker had left him on the fence about whether he would still threaten to kill the critic who wrote it. “I’m really of two minds about this, because on the one hand, he said the new movie fails to live up to the original trilogy, which makes me at least want to throw a brick through his window with a note telling him to watch his back,” said Ariely, confirming he had already drafted an eight-page-long death threat to Stan Corimer of the website Screen-On Time, but had not yet decided whether to post it to the reviewer’s Facebook page. “On the other hand, though, he commended J.J. Abrams’ skillful pacing and faithfulness to George Lucas’ vision, which makes me wonder if I should just call the whole thing off. Now, I really don’t feel like camping outside his house for hours. Maybe I could go with a response that’s somewhere in between, like, threatening to kill his dog but not everyone in his whole family? I don’t know. This is a tough one.” At press time, sources reported that Ariely had resolved to wear his Ewok costume while he murdered the critic in his sleep."]}
-example_prediction = model.classify(tc.SFrame(example_text))
-print(example_prediction, flush=True)
-</code></pre><pre><code>+-------+--------------------+
-| class | probability |
-+-------+--------------------+
-| FAKE | 0.9245648658345308 |
-+-------+--------------------+
-[1 rows x 2 columns]
-</code></pre><h3>Exporting the Model</h3><pre><code>model_name = 'FakeNews'
-coreml_model_name = model_name + '.mlmodel'
-exportedModel = model.export_coreml(coreml_model_name)
-</code></pre><p><strong>Note: To download files from Google Colab, simply click on the Files section in the sidebar, right click on the filename and then click on Download</strong></p><p><a href="https://colab.research.google.com/drive/1onMXGkhA__X2aOFdsoVL-6HQBsWQhOP4">Link to Colab Notebook</a></p><h2>Building the App using SwiftUI</h2><h3>Initial Setup</h3><p>First we create a single view app (make sure you check the use SwiftUI button)</p><p>Then we copy our .mlmodel file to our project (just drag and drop the file in the Xcode Files sidebar)</p><p>Our ML model does not take a string directly as an input; rather, it takes a bag of words as an input. The bag-of-words model is a simplifying representation used in NLP, in which text is represented as the bag (multiset) of its words, disregarding grammar and word order but keeping multiplicity.</p><p>We define our bag of words function</p><pre><code>func bow(text: String) -&gt; [String: Double] {
- var bagOfWords = [String: Double]()
+<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content"><channel><title>Navan Chauhan</title><description>Welcome to my personal fragment of the internet.</description><link>https://navanchauhan.github.io/</link><language>en</language><lastBuildDate>Sat, 18 Jan 2020 19:19:50 +0530</lastBuildDate><pubDate>Sat, 18 Jan 2020 19:19:50 +0530</pubDate><ttl>250</ttl><atom:link href="https://navanchauhan.github.io/feed.rss" rel="self" type="application/rss+xml"/><item><guid isPermaLink="true">https://navanchauhan.github.io/posts/2020-01-14-Converting-between-PIL-NumPy</guid><title>Converting between image and NumPy array</title><description>Short code snippet for converting between PIL image and NumPy arrays.</description><link>https://navanchauhan.github.io/posts/2020-01-14-Converting-between-PIL-NumPy</link><pubDate>Tue, 14 Jan 2020 00:10:00 +0530</pubDate><content:encoded><![CDATA[<h1>Converting between image and NumPy array</h1><pre><code><div class="highlight"><span></span><span class="kn">import</span> <span class="nn">numpy</span>
+<span class="kn">import</span> <span class="nn">PIL</span>
+
+<span class="c1"># Convert PIL Image to NumPy array</span>
+<span class="n">img</span> <span class="o">=</span> <span class="n">PIL</span><span class="o">.</span><span class="n">Image</span><span class="o">.</span><span class="n">open</span><span class="p">(</span><span class="s2">&quot;foo.jpg&quot;</span><span class="p">)</span>
+<span class="n">arr</span> <span class="o">=</span> <span class="n">numpy</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">img</span><span class="p">)</span>
+
+<span class="c1"># Convert array to Image</span>
+<span class="n">img</span> <span class="o">=</span> <span class="n">PIL</span><span class="o">.</span><span class="n">Image</span><span class="o">.</span><span class="n">fromarray</span><span class="p">(</span><span class="n">arr</span><span class="p">)</span>
+</div>
+
+</code></pre><h2>Saving an Image</h2><pre><code><div class="highlight"><span></span><span class="k">try</span><span class="p">:</span>
+ <span class="n">img</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">destination</span><span class="p">,</span> <span class="s2">&quot;JPEG&quot;</span><span class="p">,</span> <span class="n">quality</span><span class="o">=</span><span class="mi">80</span><span class="p">,</span> <span class="n">optimize</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">progressive</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
+<span class="k">except</span> <span class="ne">IOError</span><span class="p">:</span>
+ <span class="n">PIL</span><span class="o">.</span><span class="n">ImageFile</span><span class="o">.</span><span class="n">MAXBLOCK</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*</span> <span class="n">img</span><span class="o">.</span><span class="n">size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
+ <span class="n">img</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">destination</span><span class="p">,</span> <span class="s2">&quot;JPEG&quot;</span><span class="p">,</span> <span class="n">quality</span><span class="o">=</span><span class="mi">80</span><span class="p">,</span> <span class="n">optimize</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">progressive</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
+</div>
+
+</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://navanchauhan.github.io/posts/2019-12-22-Fake-News-Detector</guid><title>Building a Fake News Detector with Turicreate</title><description>In this tutorial we will build a fake news detecting app from scratch, using Turicreate for the machine learning model and SwiftUI for building the app</description><link>https://navanchauhan.github.io/posts/2019-12-22-Fake-News-Detector</link><pubDate>Sun, 22 Dec 2019 11:10:00 +0530</pubDate><content:encoded><![CDATA[<h1>Building a Fake News Detector with Turicreate</h1><p><strong>In this tutorial we will build a fake news detecting app from scratch, using Turicreate for the machine learning model and SwiftUI for building the app</strong></p><p>Note: These commands are written as if you are running a Jupyter notebook.</p><h2>Building the Machine Learning Model</h2><h3>Data Gathering</h3><p>To build a classifier, you need a lot of data. George McIntire (GH: @joolsa) has created a wonderful dataset containing the headline, body and whether it is fake or real. Whenever you are looking for a dataset, always try searching on Kaggle and GitHub before you start building your own.</p><h3>Dependencies</h3><p>I used a Google Colab instance for training my model. If you also plan on using Google Colab, then I recommend choosing a GPU instance (it is free); this allows you to train the model on the GPU. Turicreate is built on top of Apache's MXNet framework, so for us to use the GPU we need to install a CUDA-compatible MXNet package.</p><pre><code><div class="highlight"><span></span><span class="nt">!pip</span><span class="na"> install turicreate</span>
+<span class="na">!pip uninstall -y mxnet</span>
+<span class="na">!pip install mxnet-cu100==1.4.0.post0</span>
+</div>
+
+</code></pre><p>If you do not wish to train on GPU or are running it on your computer, you can ignore the last two lines</p><h3>Downloading the Dataset</h3><pre><code><div class="highlight"><span></span><span class="nt">!wget</span><span class="na"> -q &quot;https</span><span class="p">:</span><span class="nc">//github.com/joolsa/fake_real_news_dataset/raw/master/fake_or_real_news.csv.zip&quot;</span>
+<span class="nt">!unzip</span><span class="na"> fake_or_real_news.csv.zip</span>
+</div>
+
+</code></pre><h3>Model Creation</h3><pre><code><div class="highlight"><span></span><span class="kn">import</span> <span class="nn">turicreate</span> <span class="kn">as</span> <span class="nn">tc</span>
+<span class="n">tc</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">set_num_gpus</span><span class="p">(</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span> <span class="c1"># If you do not wish to use GPUs, set it to 0</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">dataSFrame</span> <span class="o">=</span> <span class="n">tc</span><span class="o">.</span><span class="n">SFrame</span><span class="p">(</span><span class="s1">&#39;fake_or_real_news.csv&#39;</span><span class="p">)</span>
+</div>
+
+</code></pre><p>The dataset contains a column named "X1", which is of no use to us. Therefore, we simply drop it</p><pre><code><div class="highlight"><span></span><span class="n">dataSFrame</span><span class="o">.</span><span class="n">remove_column</span><span class="p">(</span><span class="s1">&#39;X1&#39;</span><span class="p">)</span>
+</div>
+
+</code></pre><h4>Splitting Dataset</h4><pre><code><div class="highlight"><span></span><span class="n">train</span><span class="p">,</span> <span class="n">test</span> <span class="o">=</span> <span class="n">dataSFrame</span><span class="o">.</span><span class="n">random_split</span><span class="p">(</span><span class="o">.</span><span class="mi">9</span><span class="p">)</span>
+</div>
+
+</code></pre><h4>Training</h4><pre><code><div class="highlight"><span></span><span class="n">model</span> <span class="o">=</span> <span class="n">tc</span><span class="o">.</span><span class="n">text_classifier</span><span class="o">.</span><span class="n">create</span><span class="p">(</span>
+ <span class="n">dataset</span><span class="o">=</span><span class="n">train</span><span class="p">,</span>
+ <span class="n">target</span><span class="o">=</span><span class="s1">&#39;label&#39;</span><span class="p">,</span>
+ <span class="n">features</span><span class="o">=</span><span class="p">[</span><span class="s1">&#39;title&#39;</span><span class="p">,</span><span class="s1">&#39;text&#39;</span><span class="p">]</span>
+<span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="o">+-----------+----------+-----------+--------------+-------------------+---------------------+</span>
+<span class="o">|</span> <span class="n">Iteration</span> <span class="o">|</span> <span class="n">Passes</span> <span class="o">|</span> <span class="n">Step</span> <span class="n">size</span> <span class="o">|</span> <span class="n">Elapsed</span> <span class="n">Time</span> <span class="o">|</span> <span class="n">Training</span> <span class="n">Accuracy</span> <span class="o">|</span> <span class="n">Validation</span> <span class="n">Accuracy</span> <span class="o">|</span>
+<span class="o">+-----------+----------+-----------+--------------+-------------------+---------------------+</span>
+<span class="o">|</span> <span class="mi">0</span> <span class="o">|</span> <span class="mi">2</span> <span class="o">|</span> <span class="mf">1.000000</span> <span class="o">|</span> <span class="mf">1.156349</span> <span class="o">|</span> <span class="mf">0.889680</span> <span class="o">|</span> <span class="mf">0.790036</span> <span class="o">|</span>
+<span class="o">|</span> <span class="mi">1</span> <span class="o">|</span> <span class="mi">4</span> <span class="o">|</span> <span class="mf">1.000000</span> <span class="o">|</span> <span class="mf">1.359196</span> <span class="o">|</span> <span class="mf">0.985952</span> <span class="o">|</span> <span class="mf">0.918149</span> <span class="o">|</span>
+<span class="o">|</span> <span class="mi">2</span> <span class="o">|</span> <span class="mi">6</span> <span class="o">|</span> <span class="mf">0.820091</span> <span class="o">|</span> <span class="mf">1.557205</span> <span class="o">|</span> <span class="mf">0.990260</span> <span class="o">|</span> <span class="mf">0.914591</span> <span class="o">|</span>
+<span class="o">|</span> <span class="mi">3</span> <span class="o">|</span> <span class="mi">7</span> <span class="o">|</span> <span class="mf">1.000000</span> <span class="o">|</span> <span class="mf">1.684872</span> <span class="o">|</span> <span class="mf">0.998689</span> <span class="o">|</span> <span class="mf">0.925267</span> <span class="o">|</span>
+<span class="o">|</span> <span class="mi">4</span> <span class="o">|</span> <span class="mi">8</span> <span class="o">|</span> <span class="mf">1.000000</span> <span class="o">|</span> <span class="mf">1.814194</span> <span class="o">|</span> <span class="mf">0.999063</span> <span class="o">|</span> <span class="mf">0.925267</span> <span class="o">|</span>
+<span class="o">|</span> <span class="mi">9</span> <span class="o">|</span> <span class="mi">14</span> <span class="o">|</span> <span class="mf">1.000000</span> <span class="o">|</span> <span class="mf">2.507072</span> <span class="o">|</span> <span class="mf">1.000000</span> <span class="o">|</span> <span class="mf">0.911032</span> <span class="o">|</span>
+<span class="o">+-----------+----------+-----------+--------------+-------------------+---------------------+</span>
+</div>
+
+</code></pre><h3>Testing the Model</h3><pre><code><div class="highlight"><span></span><span class="n">test_predictions</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">test</span><span class="p">)</span>
+<span class="n">accuracy</span> <span class="o">=</span> <span class="n">tc</span><span class="o">.</span><span class="n">evaluation</span><span class="o">.</span><span class="n">accuracy</span><span class="p">(</span><span class="n">test</span><span class="p">[</span><span class="s1">&#39;label&#39;</span><span class="p">],</span> <span class="n">test_predictions</span><span class="p">)</span>
+<span class="k">print</span><span class="p">(</span><span class="n">f</span><span class="s1">&#39;Topic classifier model has a testing accuracy of {accuracy*100}% &#39;</span><span class="p">,</span> <span class="n">flush</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">Topic</span> <span class="n">classifier</span> <span class="n">model</span> <span class="n">has</span> <span class="n">a</span> <span class="n">testing</span> <span class="n">accuracy</span> <span class="n">of</span> <span class="mf">92.3076923076923</span><span class="o">%</span>
+</div>
+
+</code></pre><p>We have just created our own Fake News Detection Model which has an accuracy of 92%!</p><pre><code><div class="highlight"><span></span><span class="n">example_text</span> <span class="o">=</span> <span class="p">{</span><span class="s2">&quot;title&quot;</span><span class="p">:</span> <span class="p">[</span><span class="s2">&quot;Middling ‘Rise Of Skywalker’ Review Leaves Fan On Fence About Whether To Threaten To Kill Critic&quot;</span><span class="p">],</span> <span class="s2">&quot;text&quot;</span><span class="p">:</span> <span class="p">[</span><span class="s2">&quot;Expressing ambivalence toward the relatively balanced appraisal of the film, Star Wars fan Miles Ariely admitted Thursday that an online publication’s middling review of The Rise Of Skywalker had left him on the fence about whether he would still threaten to kill the critic who wrote it. “I’m really of two minds about this, because on the one hand, he said the new movie fails to live up to the original trilogy, which makes me at least want to throw a brick through his window with a note telling him to watch his back,” said Ariely, confirming he had already drafted an eight-page-long death threat to Stan Corimer of the website Screen-On Time, but had not yet decided whether to post it to the reviewer’s Facebook page. “On the other hand, though, he commended J.J. Abrams’ skillful pacing and faithfulness to George Lucas’ vision, which makes me wonder if I should just call the whole thing off. Now, I really don’t feel like camping outside his house for hours. Maybe I could go with a response that’s somewhere in between, like, threatening to kill his dog but not everyone in his whole family? I don’t know. This is a tough one.” At press time, sources reported that Ariely had resolved to wear his Ewok costume while he murdered the critic in his sleep.&quot;</span><span class="p">]}</span>
+<span class="n">example_prediction</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">classify</span><span class="p">(</span><span class="n">tc</span><span class="o">.</span><span class="n">SFrame</span><span class="p">(</span><span class="n">example_text</span><span class="p">))</span>
+<span class="k">print</span><span class="p">(</span><span class="n">example_prediction</span><span class="p">,</span> <span class="n">flush</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="o">+-------+--------------------+</span>
+<span class="o">|</span> <span class="k">class</span> <span class="err">| </span><span class="nc">probability</span> <span class="o">|</span>
+<span class="o">+-------+--------------------+</span>
+<span class="o">|</span> <span class="n">FAKE</span> <span class="o">|</span> <span class="mf">0.9245648658345308</span> <span class="o">|</span>
+<span class="o">+-------+--------------------+</span>
+<span class="p">[</span><span class="mi">1</span> <span class="n">rows</span> <span class="n">x</span> <span class="mi">2</span> <span class="n">columns</span><span class="p">]</span>
+</div>
+
+</code></pre><h3>Exporting the Model</h3><pre><code><div class="highlight"><span></span><span class="n">model_name</span> <span class="o">=</span> <span class="s1">&#39;FakeNews&#39;</span>
+<span class="n">coreml_model_name</span> <span class="o">=</span> <span class="n">model_name</span> <span class="o">+</span> <span class="s1">&#39;.mlmodel&#39;</span>
+<span class="n">exportedModel</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">export_coreml</span><span class="p">(</span><span class="n">coreml_model_name</span><span class="p">)</span>
+</div>
+
+</code></pre><p><strong>Note: To download files from Google Colab, simply click on the Files section in the sidebar, right click on the filename and then click on Download</strong></p><p><a href="https://colab.research.google.com/drive/1onMXGkhA__X2aOFdsoVL-6HQBsWQhOP4">Link to Colab Notebook</a></p><h2>Building the App using SwiftUI</h2><h3>Initial Setup</h3><p>First we create a single view app (make sure you check the use SwiftUI button)</p><p>Then we copy our .mlmodel file to our project (just drag and drop the file in the Xcode Files sidebar)</p><p>Our ML model does not take a string directly as an input; rather, it takes a bag of words as an input. The bag-of-words model is a simplifying representation used in NLP, in which text is represented as the bag (multiset) of its words, disregarding grammar and word order but keeping multiplicity.</p><p>We define our bag of words function</p><pre><code><div class="highlight"><span></span><span class="kd">func</span> <span class="nf">bow</span><span class="p">(</span><span class="n">text</span><span class="p">:</span> <span class="nb">String</span><span class="p">)</span> <span class="p">-&gt;</span> <span class="p">[</span><span class="nb">String</span><span class="p">:</span> <span class="nb">Double</span><span class="p">]</span> <span class="p">{</span>
+ <span class="kd">var</span> <span class="nv">bagOfWords</span> <span class="p">=</span> <span class="p">[</span><span class="nb">String</span><span class="p">:</span> <span class="nb">Double</span><span class="p">]()</span>
- let tagger = NSLinguisticTagger(tagSchemes: [.tokenType], options: 0)
- let range = NSRange(location: 0, length: text.utf16.count)
- let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace]
- tagger.string = text
+ <span class="kd">let</span> <span class="nv">tagger</span> <span class="p">=</span> <span class="bp">NSLinguisticTagger</span><span class="p">(</span><span class="n">tagSchemes</span><span class="p">:</span> <span class="p">[.</span><span class="n">tokenType</span><span class="p">],</span> <span class="n">options</span><span class="p">:</span> <span class="mi">0</span><span class="p">)</span>
+ <span class="kd">let</span> <span class="nv">range</span> <span class="p">=</span> <span class="n">NSRange</span><span class="p">(</span><span class="n">location</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span> <span class="n">length</span><span class="p">:</span> <span class="n">text</span><span class="p">.</span><span class="n">utf16</span><span class="p">.</span><span class="bp">count</span><span class="p">)</span>
+ <span class="kd">let</span> <span class="nv">options</span><span class="p">:</span> <span class="bp">NSLinguisticTagger</span><span class="p">.</span><span class="n">Options</span> <span class="p">=</span> <span class="p">[.</span><span class="n">omitPunctuation</span><span class="p">,</span> <span class="p">.</span><span class="n">omitWhitespace</span><span class="p">]</span>
+ <span class="n">tagger</span><span class="p">.</span><span class="n">string</span> <span class="p">=</span> <span class="n">text</span>
- tagger.enumerateTags(in: range, unit: .word, scheme: .tokenType, options: options) { _, tokenRange, _ in
- let word = (text as NSString).substring(with: tokenRange)
- if bagOfWords[word] != nil {
- bagOfWords[word]! += 1
- } else {
- bagOfWords[word] = 1
- }
- }
+ <span class="n">tagger</span><span class="p">.</span><span class="n">enumerateTags</span><span class="p">(</span><span class="k">in</span><span class="p">:</span> <span class="n">range</span><span class="p">,</span> <span class="n">unit</span><span class="p">:</span> <span class="p">.</span><span class="n">word</span><span class="p">,</span> <span class="n">scheme</span><span class="p">:</span> <span class="p">.</span><span class="n">tokenType</span><span class="p">,</span> <span class="n">options</span><span class="p">:</span> <span class="n">options</span><span class="p">)</span> <span class="p">{</span> <span class="kc">_</span><span class="p">,</span> <span class="n">tokenRange</span><span class="p">,</span> <span class="kc">_</span> <span class="k">in</span>
+ <span class="kd">let</span> <span class="nv">word</span> <span class="p">=</span> <span class="p">(</span><span class="n">text</span> <span class="k">as</span> <span class="bp">NSString</span><span class="p">).</span><span class="n">substring</span><span class="p">(</span><span class="n">with</span><span class="p">:</span> <span class="n">tokenRange</span><span class="p">)</span>
+ <span class="k">if</span> <span class="n">bagOfWords</span><span class="p">[</span><span class="n">word</span><span class="p">]</span> <span class="o">!=</span> <span class="kc">nil</span> <span class="p">{</span>
+ <span class="n">bagOfWords</span><span class="p">[</span><span class="n">word</span><span class="p">]</span><span class="o">!</span> <span class="o">+=</span> <span class="mi">1</span>
+ <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
+ <span class="n">bagOfWords</span><span class="p">[</span><span class="n">word</span><span class="p">]</span> <span class="p">=</span> <span class="mi">1</span>
+ <span class="p">}</span>
+ <span class="p">}</span>
- return bagOfWords
- }
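-// For example, bow(text: "fake news is fake") yields ["fake": 2, "news": 1, "is": 1].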
-</code></pre><p>We also declare our variables</p><pre><code>@State private var title: String = ""
-@State private var headline: String = ""
-@State private var alertTitle = ""
-@State private var alertText = ""
-@State private var showingAlert = false
-</code></pre><p>Finally, we implement a simple function which reads the two text fields, creates their bag of words representation and displays an alert with the appropriate result</p><p><strong>Complete Code</strong></p><pre><code>import SwiftUI
-
-struct ContentView: View {
- @State private var title: String = ""
- @State private var headline: String = ""
+ <span class="k">return</span> <span class="n">bagOfWords</span>
+ <span class="p">}</span>
+</div>
+
+</code></pre><p>We also declare our variables</p><pre><code><div class="highlight"><span></span><span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">title</span><span class="p">:</span> <span class="nb">String</span> <span class="p">=</span> <span class="s">&quot;&quot;</span>
+<span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">headline</span><span class="p">:</span> <span class="nb">String</span> <span class="p">=</span> <span class="s">&quot;&quot;</span>
+<span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">alertTitle</span> <span class="p">=</span> <span class="s">&quot;&quot;</span>
+<span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">alertText</span> <span class="p">=</span> <span class="s">&quot;&quot;</span>
+<span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">showingAlert</span> <span class="p">=</span> <span class="kc">false</span>
+</div>
+
+</code></pre><p>Finally, we implement a simple function which reads the two text fields, creates their bag of words representation and displays an alert with the appropriate result</p><p><strong>Complete Code</strong></p><pre><code><div class="highlight"><span></span><span class="kd">import</span> <span class="nc">SwiftUI</span>
+
+<span class="kd">struct</span> <span class="nc">ContentView</span><span class="p">:</span> <span class="n">View</span> <span class="p">{</span>
+ <span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">title</span><span class="p">:</span> <span class="nb">String</span> <span class="p">=</span> <span class="s">&quot;&quot;</span>
+ <span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">headline</span><span class="p">:</span> <span class="nb">String</span> <span class="p">=</span> <span class="s">&quot;&quot;</span>
- @State private var alertTitle = ""
- @State private var alertText = ""
- @State private var showingAlert = false
+ <span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">alertTitle</span> <span class="p">=</span> <span class="s">&quot;&quot;</span>
+ <span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">alertText</span> <span class="p">=</span> <span class="s">&quot;&quot;</span>
+ <span class="p">@</span><span class="n">State</span> <span class="kd">private</span> <span class="kd">var</span> <span class="nv">showingAlert</span> <span class="p">=</span> <span class="kc">false</span>
- var body: some View {
- NavigationView {
- VStack(alignment: .leading) {
- Text("Headline").font(.headline)
- TextField("Please Enter Headline", text: $title)
- .lineLimit(nil)
- Text("Body").font(.headline)
- TextField("Please Enter the content", text: $headline)
- .lineLimit(nil)
- }
- .navigationBarTitle("Fake News Checker")
- .navigationBarItems(trailing:
- Button(action: classifyFakeNews) {
- Text("Check")
- })
- .padding()
- .alert(isPresented: $showingAlert){
- Alert(title: Text(alertTitle), message: Text(alertText), dismissButton: .default(Text("OK")))
- }
- }
+ <span class="kd">var</span> <span class="nv">body</span><span class="p">:</span> <span class="n">some</span> <span class="n">View</span> <span class="p">{</span>
+ <span class="n">NavigationView</span> <span class="p">{</span>
+ <span class="n">VStack</span><span class="p">(</span><span class="n">alignment</span><span class="p">:</span> <span class="p">.</span><span class="n">leading</span><span class="p">)</span> <span class="p">{</span>
+ <span class="n">Text</span><span class="p">(</span><span class="s">&quot;Headline&quot;</span><span class="p">).</span><span class="n">font</span><span class="p">(.</span><span class="n">headline</span><span class="p">)</span>
+ <span class="n">TextField</span><span class="p">(</span><span class="s">&quot;Please Enter Headline&quot;</span><span class="p">,</span> <span class="n">text</span><span class="p">:</span> <span class="err">$</span><span class="n">title</span><span class="p">)</span>
+ <span class="p">.</span><span class="n">lineLimit</span><span class="p">(</span><span class="kc">nil</span><span class="p">)</span>
+ <span class="n">Text</span><span class="p">(</span><span class="s">&quot;Body&quot;</span><span class="p">).</span><span class="n">font</span><span class="p">(.</span><span class="n">headline</span><span class="p">)</span>
+ <span class="n">TextField</span><span class="p">(</span><span class="s">&quot;Please Enter the content&quot;</span><span class="p">,</span> <span class="n">text</span><span class="p">:</span> <span class="err">$</span><span class="n">headline</span><span class="p">)</span>
+ <span class="p">.</span><span class="n">lineLimit</span><span class="p">(</span><span class="kc">nil</span><span class="p">)</span>
+ <span class="p">}</span>
+ <span class="p">.</span><span class="n">navigationBarTitle</span><span class="p">(</span><span class="s">&quot;Fake News Checker&quot;</span><span class="p">)</span>
+ <span class="p">.</span><span class="n">navigationBarItems</span><span class="p">(</span><span class="n">trailing</span><span class="p">:</span>
+ <span class="n">Button</span><span class="p">(</span><span class="n">action</span><span class="p">:</span> <span class="n">classifyFakeNews</span><span class="p">)</span> <span class="p">{</span>
+ <span class="n">Text</span><span class="p">(</span><span class="s">&quot;Check&quot;</span><span class="p">)</span>
+ <span class="p">})</span>
+ <span class="p">.</span><span class="n">padding</span><span class="p">()</span>
+ <span class="p">.</span><span class="n">alert</span><span class="p">(</span><span class="n">isPresented</span><span class="p">:</span> <span class="err">$</span><span class="n">showingAlert</span><span class="p">){</span>
+ <span class="n">Alert</span><span class="p">(</span><span class="n">title</span><span class="p">:</span> <span class="n">Text</span><span class="p">(</span><span class="n">alertTitle</span><span class="p">),</span> <span class="n">message</span><span class="p">:</span> <span class="n">Text</span><span class="p">(</span><span class="n">alertText</span><span class="p">),</span> <span class="n">dismissButton</span><span class="p">:</span> <span class="p">.</span><span class="k">default</span><span class="p">(</span><span class="n">Text</span><span class="p">(</span><span class="s">&quot;OK&quot;</span><span class="p">)))</span>
+ <span class="p">}</span>
+ <span class="p">}</span>
- }
+ <span class="p">}</span>
- func classifyFakeNews(){
- let model = FakeNews()
- let myTitle = bow(text: title)
- let myText = bow(text: headline)
- do {
- let prediction = try model.prediction(title: myTitle, text: myText)
- alertTitle = prediction.label
- alertText = "It is likely that this piece of news is \(prediction.label.lowercased())."
- print(alertText)
- } catch {
- alertTitle = "Error"
- alertText = "Sorry, could not classify if the input news was fake or not."
- }
+ <span class="kd">func</span> <span class="nf">classifyFakeNews</span><span class="p">(){</span>
+ <span class="kd">let</span> <span class="nv">model</span> <span class="p">=</span> <span class="n">FakeNews</span><span class="p">()</span>
+ <span class="kd">let</span> <span class="nv">myTitle</span> <span class="p">=</span> <span class="n">bow</span><span class="p">(</span><span class="n">text</span><span class="p">:</span> <span class="n">title</span><span class="p">)</span>
+ <span class="kd">let</span> <span class="nv">myText</span> <span class="p">=</span> <span class="n">bow</span><span class="p">(</span><span class="n">text</span><span class="p">:</span> <span class="n">headline</span><span class="p">)</span>
+ <span class="k">do</span> <span class="p">{</span>
+ <span class="kd">let</span> <span class="nv">prediction</span> <span class="p">=</span> <span class="k">try</span> <span class="n">model</span><span class="p">.</span><span class="n">prediction</span><span class="p">(</span><span class="n">title</span><span class="p">:</span> <span class="n">myTitle</span><span class="p">,</span> <span class="n">text</span><span class="p">:</span> <span class="n">myText</span><span class="p">)</span>
+ <span class="n">alertTitle</span> <span class="p">=</span> <span class="n">prediction</span><span class="p">.</span><span class="n">label</span>
+ <span class="n">alertText</span> <span class="p">=</span> <span class="s">&quot;It is likely that this piece of news is </span><span class="si">\(</span><span class="n">prediction</span><span class="p">.</span><span class="n">label</span><span class="p">.</span><span class="n">lowercased</span><span class="si">())</span><span class="s">.&quot;</span>
+ <span class="bp">print</span><span class="p">(</span><span class="n">alertText</span><span class="p">)</span>
+ <span class="p">}</span> <span class="k">catch</span> <span class="p">{</span>
+ <span class="n">alertTitle</span> <span class="p">=</span> <span class="s">&quot;Error&quot;</span>
+ <span class="n">alertText</span> <span class="p">=</span> <span class="s">&quot;Sorry, could not classify if the input news was fake or not.&quot;</span>
+ <span class="p">}</span>
- showingAlert = true
- }
- func bow(text: String) -&gt; [String: Double] {
- var bagOfWords = [String: Double]()
+ <span class="n">showingAlert</span> <span class="p">=</span> <span class="kc">true</span>
+ <span class="p">}</span>
+ <span class="kd">func</span> <span class="nf">bow</span><span class="p">(</span><span class="n">text</span><span class="p">:</span> <span class="nb">String</span><span class="p">)</span> <span class="p">-&gt;</span> <span class="p">[</span><span class="nb">String</span><span class="p">:</span> <span class="nb">Double</span><span class="p">]</span> <span class="p">{</span>
+ <span class="kd">var</span> <span class="nv">bagOfWords</span> <span class="p">=</span> <span class="p">[</span><span class="nb">String</span><span class="p">:</span> <span class="nb">Double</span><span class="p">]()</span>
- let tagger = NSLinguisticTagger(tagSchemes: [.tokenType], options: 0)
- let range = NSRange(location: 0, length: text.utf16.count)
- let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace]
- tagger.string = text
+ <span class="kd">let</span> <span class="nv">tagger</span> <span class="p">=</span> <span class="bp">NSLinguisticTagger</span><span class="p">(</span><span class="n">tagSchemes</span><span class="p">:</span> <span class="p">[.</span><span class="n">tokenType</span><span class="p">],</span> <span class="n">options</span><span class="p">:</span> <span class="mi">0</span><span class="p">)</span>
+ <span class="kd">let</span> <span class="nv">range</span> <span class="p">=</span> <span class="n">NSRange</span><span class="p">(</span><span class="n">location</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span> <span class="n">length</span><span class="p">:</span> <span class="n">text</span><span class="p">.</span><span class="n">utf16</span><span class="p">.</span><span class="bp">count</span><span class="p">)</span>
+ <span class="kd">let</span> <span class="nv">options</span><span class="p">:</span> <span class="bp">NSLinguisticTagger</span><span class="p">.</span><span class="n">Options</span> <span class="p">=</span> <span class="p">[.</span><span class="n">omitPunctuation</span><span class="p">,</span> <span class="p">.</span><span class="n">omitWhitespace</span><span class="p">]</span>
+ <span class="n">tagger</span><span class="p">.</span><span class="n">string</span> <span class="p">=</span> <span class="n">text</span>
- tagger.enumerateTags(in: range, unit: .word, scheme: .tokenType, options: options) { _, tokenRange, _ in
- let word = (text as NSString).substring(with: tokenRange)
- if bagOfWords[word] != nil {
- bagOfWords[word]! += 1
- } else {
- bagOfWords[word] = 1
- }
- }
+ <span class="n">tagger</span><span class="p">.</span><span class="n">enumerateTags</span><span class="p">(</span><span class="k">in</span><span class="p">:</span> <span class="n">range</span><span class="p">,</span> <span class="n">unit</span><span class="p">:</span> <span class="p">.</span><span class="n">word</span><span class="p">,</span> <span class="n">scheme</span><span class="p">:</span> <span class="p">.</span><span class="n">tokenType</span><span class="p">,</span> <span class="n">options</span><span class="p">:</span> <span class="n">options</span><span class="p">)</span> <span class="p">{</span> <span class="kc">_</span><span class="p">,</span> <span class="n">tokenRange</span><span class="p">,</span> <span class="kc">_</span> <span class="k">in</span>
+ <span class="kd">let</span> <span class="nv">word</span> <span class="p">=</span> <span class="p">(</span><span class="n">text</span> <span class="k">as</span> <span class="bp">NSString</span><span class="p">).</span><span class="n">substring</span><span class="p">(</span><span class="n">with</span><span class="p">:</span> <span class="n">tokenRange</span><span class="p">)</span>
+ <span class="k">if</span> <span class="n">bagOfWords</span><span class="p">[</span><span class="n">word</span><span class="p">]</span> <span class="o">!=</span> <span class="kc">nil</span> <span class="p">{</span>
+ <span class="n">bagOfWords</span><span class="p">[</span><span class="n">word</span><span class="p">]</span><span class="o">!</span> <span class="o">+=</span> <span class="mi">1</span>
+ <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
+ <span class="n">bagOfWords</span><span class="p">[</span><span class="n">word</span><span class="p">]</span> <span class="p">=</span> <span class="mi">1</span>
+ <span class="p">}</span>
+ <span class="p">}</span>
- return bagOfWords
- }
-}
-
-struct ContentView_Previews: PreviewProvider {
- static var previews: some View {
- ContentView()
- }
-}
-
-</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://output.navanchauhan.now.sh/posts/2019-12-16-TensorFlow-Polynomial-Regression</guid><title>Polynomial Regression Using TensorFlow</title><description>Polynomial regression using TensorFlow</description><link>https://output.navanchauhan.now.sh/posts/2019-12-16-TensorFlow-Polynomial-Regression</link><pubDate>Mon, 16 Dec 2019 14:16:00 +0530</pubDate><content:encoded><![CDATA[<h1>Polynomial Regression Using TensorFlow</h1><p><strong>In this tutorial you will learn about polynomial regression and how you can implement it in TensorFlow.</strong></p><p>In this tutorial, we will perform polynomial regression using five types of equations:</p><ul><li>Linear</li><li>Quadratic</li><li>Cubic</li><li>Quartic</li><li>Quintic</li></ul><h2>Regression</h2><h3>What is Regression?</h3><p>Regression is a statistical method that is used to try to determine the relationship between a dependent variable (often denoted by Y) and a series of varying variables (called independent variables, often denoted by X).</p><h3>What is Polynomial Regression</h3><p>This is a form of regression analysis where the relationship between Y and X is modelled as an nth-degree polynomial in X. Polynomial regression can even fit a non-linear relationship (e.g. when the points don't form a straight line).</p><h2>Imports</h2><pre><code>import tensorflow.compat.v1 as tf
-tf.disable_v2_behavior()
-import matplotlib.pyplot as plt
-import numpy as np
-import pandas as pd
-</code></pre><h2>Dataset</h2><h3>Creating Random Data</h3><p>Even though in this tutorial we will use a Position vs Salary dataset, it is important to know how to create synthetic data</p><p>To create 50 values spaced evenly between 0 and 50, we use NumPy's linspace function</p><p><code>linspace(lower_limit, upper_limit, no_of_observations)</code></p><pre><code>x = np.linspace(0, 50, 50)
-y = np.linspace(0, 50, 50)
-</code></pre><p>We use the following to add noise to the data, so that the values do not all lie on a perfectly straight line</p><pre><code>x += np.random.uniform(-4, 4, 50)
-y += np.random.uniform(-4, 4, 50)
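-# np.random.uniform(-4, 4, 50) draws 50 random offsets from the interval [-4, 4),
-# jittering each point away from the perfect y = x line.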
-</code></pre><h3>Position vs Salary Dataset</h3><p>We will be using https://drive.google.com/file/d/1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9/view (Salary vs Position Dataset)</p><pre><code>!wget --no-check-certificate 'https://docs.google.com/uc?export=download&amp;id=1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9' -O data.csv
-</code></pre><pre><code>df = pd.read_csv("data.csv")
-</code></pre><pre><code>df # this gives us a preview of the dataset we are working with
-</code></pre><pre><code>| Position | Level | Salary |
-|-------------------|-------|---------|
-| Business Analyst | 1 | 45000 |
-| Junior Consultant | 2 | 50000 |
-| Senior Consultant | 3 | 60000 |
-| Manager | 4 | 80000 |
-| Country Manager | 5 | 110000 |
-| Region Manager | 6 | 150000 |
-| Partner | 7 | 200000 |
-| Senior Partner | 8 | 300000 |
-| C-level | 9 | 500000 |
-| CEO | 10 | 1000000 |
-</code></pre><p>We use the salary column as the ordinate (y-coordinate) and the level column as the abscissa (x-coordinate)</p><pre><code>abscissa = df["Level"].to_list() # abscissa = [1,2,3,4,5,6,7,8,9,10]
-ordinate = df["Salary"].to_list() # ordinate = [45000,50000,60000,80000,110000,150000,200000,300000,500000,1000000]
-</code></pre><pre><code>n = len(abscissa) # no of observations
-plt.scatter(abscissa, ordinate)
-plt.ylabel('Salary')
-plt.xlabel('Position')
-plt.title("Salary vs Position")
-plt.show()
-</code></pre><img src="https://output.navanchauhan.now.sh/assets/gciTales/03-regression/1.png"/><h2>Defining Stuff</h2><pre><code>X = tf.placeholder("float")
-Y = tf.placeholder("float")
-</code></pre><h3>Defining Variables</h3><p>We first define all the coefficients and the constant as TensorFlow variables having a random initial value</p><pre><code>a = tf.Variable(np.random.randn(), name = "a")
-b = tf.Variable(np.random.randn(), name = "b")
-c = tf.Variable(np.random.randn(), name = "c")
-d = tf.Variable(np.random.randn(), name = "d")
-e = tf.Variable(np.random.randn(), name = "e")
-f = tf.Variable(np.random.randn(), name = "f")
-</code></pre><h3>Model Configuration</h3><pre><code>learning_rate = 0.2
-no_of_epochs = 25000
-</code></pre><h3>Equations</h3><pre><code>deg1 = a*X + b
-deg2 = a*tf.pow(X,2) + b*X + c
-deg3 = a*tf.pow(X,3) + b*tf.pow(X,2) + c*X + d
-deg4 = a*tf.pow(X,4) + b*tf.pow(X,3) + c*tf.pow(X,2) + d*X + e
-deg5 = a*tf.pow(X,5) + b*tf.pow(X,4) + c*tf.pow(X,3) + d*tf.pow(X,2) + e*X + f
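-# Each degN above is an Nth-degree polynomial in X, e.g. deg2 expands to a*X^2 + b*X + c;
-# the same variables a..f are reused across the five models.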
-</code></pre><h3>Cost Function</h3><p>We use the Mean Squared Error Function</p><pre><code>mse1 = tf.reduce_sum(tf.pow(deg1-Y,2))/(2*n)
-mse2 = tf.reduce_sum(tf.pow(deg2-Y,2))/(2*n)
-mse3 = tf.reduce_sum(tf.pow(deg3-Y,2))/(2*n)
-mse4 = tf.reduce_sum(tf.pow(deg4-Y,2))/(2*n)
-mse5 = tf.reduce_sum(tf.pow(deg5-Y,2))/(2*n)
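-# Each cost is sum((prediction - Y)^2) / (2*n): the mean squared error halved,
-# a common convention that slightly simplifies the gradient.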
-</code></pre><h3>Optimizer</h3><p>We use the AdamOptimizer for the polynomial functions and GradientDescentOptimizer for the linear function</p><pre><code>optimizer1 = tf.train.GradientDescentOptimizer(learning_rate).minimize(mse1)
-optimizer2 = tf.train.AdamOptimizer(learning_rate).minimize(mse2)
-optimizer3 = tf.train.AdamOptimizer(learning_rate).minimize(mse3)
-optimizer4 = tf.train.AdamOptimizer(learning_rate).minimize(mse4)
-optimizer5 = tf.train.AdamOptimizer(learning_rate).minimize(mse5)
-</code></pre><pre><code>init=tf.global_variables_initializer()
-</code></pre><h2>Model Predictions</h2><p>For each type of equation, we first have the model learn the values of the coefficient(s) and constant; once we have these values, we use them to predict the Y values from the X values. We then plot the result to compare the actual data and the fitted line.</p><h3>Linear Equation</h3><pre><code>with tf.Session() as sess:
- sess.run(init)
- for epoch in range(no_of_epochs):
- for (x,y) in zip(abscissa, ordinate):
- sess.run(optimizer1, feed_dict={X:x, Y:y})
- if (epoch+1)%1000==0:
- cost = sess.run(mse1,feed_dict={X:abscissa,Y:ordinate})
- print("Epoch",(epoch+1), ": Training Cost:", cost," a,b:",sess.run(a),sess.run(b))
-
- training_cost = sess.run(mse1,feed_dict={X:abscissa,Y:ordinate})
- coefficient1 = sess.run(a)
- constant = sess.run(b)
-
-print(training_cost, coefficient1, constant)
-</code></pre><pre><code>Epoch 1000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 2000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 3000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 4000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 5000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 6000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 7000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 8000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 9000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 10000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 11000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 12000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 13000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 14000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 15000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 16000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 17000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 18000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 19000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 20000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 21000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 22000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 23000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 24000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-Epoch 25000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
-88999125000.0 180396.42 -478869.12
-</code></pre><pre><code>predictions = []
-for x in abscissa:
- predictions.append((coefficient1*x + constant))
-plt.plot(abscissa , ordinate, 'ro', label ='Original data')
-plt.plot(abscissa, predictions, label ='Fitted line')
-plt.title('Linear Regression Result')
-plt.legend()
-plt.show()
-</code></pre><img src="https://output.navanchauhan.now.sh/assets/gciTales/03-regression/2.png"/><h3>Quadratic Equation</h3><pre><code>with tf.Session() as sess:
- sess.run(init)
- for epoch in range(no_of_epochs):
- for (x,y) in zip(abscissa, ordinate):
- sess.run(optimizer2, feed_dict={X:x, Y:y})
- if (epoch+1)%1000==0:
- cost = sess.run(mse2,feed_dict={X:abscissa,Y:ordinate})
- print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c:",sess.run(a),sess.run(b),sess.run(c))
-
- training_cost = sess.run(mse2,feed_dict={X:abscissa,Y:ordinate})
- coefficient1 = sess.run(a)
- coefficient2 = sess.run(b)
- constant = sess.run(c)
-
-print(training_cost, coefficient1, coefficient2, constant)
-</code></pre><pre><code>Epoch 1000 : Training Cost: 52571360000.0 a,b,c: 1002.4456 1097.0197 1276.6921
-Epoch 2000 : Training Cost: 37798890000.0 a,b,c: 1952.4263 2130.2825 2469.7756
-Epoch 3000 : Training Cost: 26751185000.0 a,b,c: 2839.5825 3081.6118 3554.351
-Epoch 4000 : Training Cost: 19020106000.0 a,b,c: 3644.56 3922.9563 4486.3135
-Epoch 5000 : Training Cost: 14060446000.0 a,b,c: 4345.042 4621.4233 5212.693
-Epoch 6000 : Training Cost: 11201084000.0 a,b,c: 4921.1855 5148.1504 5689.0713
-Epoch 7000 : Training Cost: 9732740000.0 a,b,c: 5364.764 5493.0156 5906.754
-Epoch 8000 : Training Cost: 9050918000.0 a,b,c: 5685.4067 5673.182 5902.0728
-Epoch 9000 : Training Cost: 8750394000.0 a,b,c: 5906.9814 5724.8906 5734.746
-Epoch 10000 : Training Cost: 8613128000.0 a,b,c: 6057.3677 5687.3364 5461.167
-Epoch 11000 : Training Cost: 8540034600.0 a,b,c: 6160.547 5592.3022 5122.8633
-Epoch 12000 : Training Cost: 8490983000.0 a,b,c: 6233.9175 5462.025 4747.111
-Epoch 13000 : Training Cost: 8450816500.0 a,b,c: 6289.048 5310.7583 4350.6997
-Epoch 14000 : Training Cost: 8414082000.0 a,b,c: 6333.199 5147.394 3943.9294
-Epoch 15000 : Training Cost: 8378841600.0 a,b,c: 6370.7944 4977.1704 3532.476
-Epoch 16000 : Training Cost: 8344471000.0 a,b,c: 6404.468 4803.542 3120.2087
-Epoch 17000 : Training Cost: 8310785500.0 a,b,c: 6435.365 4628.1523 2709.1445
-Epoch 18000 : Training Cost: 8277482000.0 a,b,c: 6465.5493 4451.833 2300.2783
-Epoch 19000 : Training Cost: 8244650000.0 a,b,c: 6494.609 4274.826 1894.3738
-Epoch 20000 : Training Cost: 8212349000.0 a,b,c: 6522.8247 4098.1733 1491.9915
-Epoch 21000 : Training Cost: 8180598300.0 a,b,c: 6550.6567 3922.7405 1093.3868
-Epoch 22000 : Training Cost: 8149257700.0 a,b,c: 6578.489 3747.8362 698.53357
-Epoch 23000 : Training Cost: 8118325000.0 a,b,c: 6606.1973 3573.2742 307.3541
-Epoch 24000 : Training Cost: 8088001000.0 a,b,c: 6632.96 3399.878 -79.89219
-Epoch 25000 : Training Cost: 8058094600.0 a,b,c: 6659.793 3227.2517 -463.03156
-8058094600.0 6659.793 3227.2517 -463.03156
-</code></pre><pre><code>predictions = []
-for x in abscissa:
- predictions.append((coefficient1*pow(x,2) + coefficient2*x + constant))
-plt.plot(abscissa , ordinate, 'ro', label ='Original data')
-plt.plot(abscissa, predictions, label ='Fitted line')
-plt.title('Quadratic Regression Result')
-plt.legend()
-plt.show()
-</code></pre><img src="https://output.navanchauhan.now.sh/assets/gciTales/03-regression/3.png"/><h3>Cubic</h3><pre><code>with tf.Session() as sess:
- sess.run(init)
- for epoch in range(no_of_epochs):
- for (x,y) in zip(abscissa, ordinate):
- sess.run(optimizer3, feed_dict={X:x, Y:y})
- if (epoch+1)%1000==0:
- cost = sess.run(mse3,feed_dict={X:abscissa,Y:ordinate})
- print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d:",sess.run(a),sess.run(b),sess.run(c),sess.run(d))
-
- training_cost = sess.run(mse3,feed_dict={X:abscissa,Y:ordinate})
- coefficient1 = sess.run(a)
- coefficient2 = sess.run(b)
- coefficient3 = sess.run(c)
- constant = sess.run(d)
-
-print(training_cost, coefficient1, coefficient2, coefficient3, constant)
-</code></pre><pre><code>Epoch 1000 : Training Cost: 4279814000.0 a,b,c,d: 670.1527 694.4212 751.4653 903.9527
-Epoch 2000 : Training Cost: 3770950400.0 a,b,c,d: 742.6414 666.3489 636.94525 859.2088
-Epoch 3000 : Training Cost: 3717708300.0 a,b,c,d: 756.2582 569.3339 448.105 748.23956
-Epoch 4000 : Training Cost: 3667464000.0 a,b,c,d: 769.4476 474.0318 265.5761 654.75525
-Epoch 5000 : Training Cost: 3620040700.0 a,b,c,d: 782.32324 380.54272 89.39888 578.5136
-Epoch 6000 : Training Cost: 3575265800.0 a,b,c,d: 794.8898 288.83356 -80.5215 519.13654
-Epoch 7000 : Training Cost: 3532972000.0 a,b,c,d: 807.1608 198.87044 -244.31102 476.2061
-Epoch 8000 : Training Cost: 3493009200.0 a,b,c,d: 819.13513 110.64169 -402.0677 449.3291
-Epoch 9000 : Training Cost: 3455228400.0 a,b,c,d: 830.80255 24.0964 -553.92804 438.0652
-Epoch 10000 : Training Cost: 3419475500.0 a,b,c,d: 842.21594 -60.797424 -700.0123 441.983
-Epoch 11000 : Training Cost: 3385625300.0 a,b,c,d: 853.3363 -144.08699 -840.467 460.6356
-Epoch 12000 : Training Cost: 3353544700.0 a,b,c,d: 864.19135 -225.8125 -975.4196 493.57703
-Epoch 13000 : Training Cost: 3323125000.0 a,b,c,d: 874.778 -305.98932 -1104.9867 540.39465
-Epoch 14000 : Training Cost: 3294257000.0 a,b,c,d: 885.1007 -384.63474 -1229.277 600.65607
-Epoch 15000 : Training Cost: 3266820000.0 a,b,c,d: 895.18823 -461.819 -1348.4417 673.9051
-Epoch 16000 : Training Cost: 3240736000.0 a,b,c,d: 905.0128 -537.541 -1462.6171 759.7118
-Epoch 17000 : Training Cost: 3215895000.0 a,b,c,d: 914.60065 -611.8676 -1571.9058 857.6638
-Epoch 18000 : Training Cost: 3192216800.0 a,b,c,d: 923.9603 -684.8093 -1676.4642 967.30475
-Epoch 19000 : Training Cost: 3169632300.0 a,b,c,d: 933.08594 -756.3582 -1776.4275 1088.2198
-Epoch 20000 : Training Cost: 3148046300.0 a,b,c,d: 941.9928 -826.6257 -1871.9355 1219.9702
-Epoch 21000 : Training Cost: 3127394800.0 a,b,c,d: 950.67896 -895.6205 -1963.0989 1362.1665
-Epoch 22000 : Training Cost: 3107608600.0 a,b,c,d: 959.1487 -963.38116 -2050.0586 1514.4026
-Epoch 23000 : Training Cost: 3088618200.0 a,b,c,d: 967.4355 -1029.9625 -2132.961 1676.2717
-Epoch 24000 : Training Cost: 3070361300.0 a,b,c,d: 975.52875 -1095.4292 -2211.854 1847.4485
-Epoch 25000 : Training Cost: 3052791300.0 a,b,c,d: 983.4346 -1159.7922 -2286.9412 2027.4857
-3052791300.0 983.4346 -1159.7922 -2286.9412 2027.4857
-</code></pre><pre><code>predictions = []
-for x in abscissa:
- predictions.append((coefficient1*pow(x,3) + coefficient2*pow(x,2) + coefficient3*x + constant))
-plt.plot(abscissa , ordinate, 'ro', label ='Original data')
-plt.plot(abscissa, predictions, label ='Fitted line')
-plt.title('Cubic Regression Result')
-plt.legend()
-plt.show()
-</code></pre><img src="https://output.navanchauhan.now.sh/assets/gciTales/03-regression/4.png"/><h3>Quartic</h3><pre><code>with tf.Session() as sess:
- sess.run(init)
- for epoch in range(no_of_epochs):
- for (x,y) in zip(abscissa, ordinate):
- sess.run(optimizer4, feed_dict={X:x, Y:y})
- if (epoch+1)%1000==0:
- cost = sess.run(mse4,feed_dict={X:abscissa,Y:ordinate})
- print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d:",sess.run(a),sess.run(b),sess.run(c),sess.run(d),sess.run(e))
-
- training_cost = sess.run(mse4,feed_dict={X:abscissa,Y:ordinate})
- coefficient1 = sess.run(a)
- coefficient2 = sess.run(b)
- coefficient3 = sess.run(c)
- coefficient4 = sess.run(d)
- constant = sess.run(e)
-
-print(training_cost, coefficient1, coefficient2, coefficient3, coefficient4, constant)
-</code></pre><pre><code>Epoch 1000 : Training Cost: 1902632600.0 a,b,c,d: 84.48304 52.210594 54.791424 142.51952 512.0343
-Epoch 2000 : Training Cost: 1854316200.0 a,b,c,d: 88.998955 13.073557 14.276088 223.55667 1056.4655
-Epoch 3000 : Training Cost: 1812812400.0 a,b,c,d: 92.9462 -22.331177 -15.262934 327.41858 1634.9054
-Epoch 4000 : Training Cost: 1775716000.0 a,b,c,d: 96.42522 -54.64535 -35.829437 449.5028 2239.1392
-Epoch 5000 : Training Cost: 1741494100.0 a,b,c,d: 99.524734 -84.43976 -49.181057 585.85876 2862.4915
-Epoch 6000 : Training Cost: 1709199600.0 a,b,c,d: 102.31984 -112.19895 -56.808075 733.1876 3499.6199
-Epoch 7000 : Training Cost: 1678261800.0 a,b,c,d: 104.87324 -138.32709 -59.9442 888.79626 4146.2944
-Epoch 8000 : Training Cost: 1648340600.0 a,b,c,d: 107.23536 -163.15173 -59.58964 1050.524 4798.979
-Epoch 9000 : Training Cost: 1619243400.0 a,b,c,d: 109.44742 -186.9409 -56.53944 1216.6432 5454.9463
-Epoch 10000 : Training Cost: 1590821900.0 a,b,c,d: 111.54233 -209.91287 -51.423084 1385.8513 6113.5137
-Epoch 11000 : Training Cost: 1563042200.0 a,b,c,d: 113.54405 -232.21953 -44.73371 1557.1084 6771.7046
-Epoch 12000 : Training Cost: 1535855600.0 a,b,c,d: 115.471565 -253.9838 -36.851135 1729.535 7429.069
-Epoch 13000 : Training Cost: 1509255300.0 a,b,c,d: 117.33939 -275.29697 -28.0714 1902.5308 8083.9634
-Epoch 14000 : Training Cost: 1483227000.0 a,b,c,d: 119.1605 -296.2472 -18.618649 2075.6094 8735.381
-Epoch 15000 : Training Cost: 1457726700.0 a,b,c,d: 120.94584 -316.915 -8.650095 2248.3247 9384.197
-Epoch 16000 : Training Cost: 1432777300.0 a,b,c,d: 122.69806 -337.30704 1.7027153 2420.5771 10028.871
-Epoch 17000 : Training Cost: 1408365000.0 a,b,c,d: 124.42179 -357.45245 12.33499 2592.2983 10669.157
-Epoch 18000 : Training Cost: 1384480000.0 a,b,c,d: 126.12332 -377.39734 23.168756 2763.0933 11305.027
-Epoch 19000 : Training Cost: 1361116800.0 a,b,c,d: 127.80568 -397.16415 34.160156 2933.0452 11935.669
-Epoch 20000 : Training Cost: 1338288100.0 a,b,c,d: 129.4674 -416.72803 45.259155 3101.7727 12561.179
-Epoch 21000 : Training Cost: 1315959700.0 a,b,c,d: 131.11403 -436.14285 56.4436 3269.3142 13182.058
-Epoch 22000 : Training Cost: 1294164700.0 a,b,c,d: 132.74377 -455.3779 67.6757 3435.3833 13796.807
-Epoch 23000 : Training Cost: 1272863600.0 a,b,c,d: 134.35779 -474.45316 78.96117 3600.264 14406.58
-Epoch 24000 : Training Cost: 1252052600.0 a,b,c,d: 135.9583 -493.38254 90.268616 3764.0078 15010.481
-Epoch 25000 : Training Cost: 1231713700.0 a,b,c,d: 137.54753 -512.1876 101.59372 3926.4897 15609.368
-1231713700.0 137.54753 -512.1876 101.59372 3926.4897 15609.368
-</code></pre><pre><code>predictions = []
-for x in abscissa:
- predictions.append((coefficient1*pow(x,4) + coefficient2*pow(x,3) + coefficient3*pow(x,2) + coefficient4*x + constant))
-plt.plot(abscissa , ordinate, 'ro', label ='Original data')
-plt.plot(abscissa, predictions, label ='Fitted line')
-plt.title('Quartic Regression Result')
-plt.legend()
-plt.show()
-</code></pre><img src="https://output.navanchauhan.now.sh/assets/gciTales/03-regression/5.png"/><h3>Quintic</h3><pre><code>with tf.Session() as sess:
- sess.run(init)
- for epoch in range(no_of_epochs):
- for (x,y) in zip(abscissa, ordinate):
- sess.run(optimizer5, feed_dict={X:x, Y:y})
- if (epoch+1)%1000==0:
- cost = sess.run(mse5,feed_dict={X:abscissa,Y:ordinate})
- print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d,e,f:",sess.run(a),sess.run(b),sess.run(c),sess.run(d),sess.run(e),sess.run(f))
-
- training_cost = sess.run(mse5,feed_dict={X:abscissa,Y:ordinate})
- coefficient1 = sess.run(a)
- coefficient2 = sess.run(b)
- coefficient3 = sess.run(c)
- coefficient4 = sess.run(d)
- coefficient5 = sess.run(e)
- constant = sess.run(f)
-</code></pre><pre><code>Epoch 1000 : Training Cost: 1409200100.0 a,b,c,d,e,f: 7.949472 7.46219 55.626034 184.29028 484.00223 1024.0083
-Epoch 2000 : Training Cost: 1306882400.0 a,b,c,d,e,f: 8.732181 -4.0085897 73.25298 315.90103 904.08887 2004.9749
-Epoch 3000 : Training Cost: 1212606000.0 a,b,c,d,e,f: 9.732249 -16.90125 86.28379 437.06552 1305.055 2966.2188
-Epoch 4000 : Training Cost: 1123640400.0 a,b,c,d,e,f: 10.74851 -29.82692 98.59997 555.331 1698.4631 3917.9155
-Epoch 5000 : Training Cost: 1039694300.0 a,b,c,d,e,f: 11.75426 -42.598194 110.698326 671.64355 2085.5513 4860.8535
-Epoch 6000 : Training Cost: 960663550.0 a,b,c,d,e,f: 12.745439 -55.18337 122.644936 786.00214 2466.1638 5794.3735
-Epoch 7000 : Training Cost: 886438340.0 a,b,c,d,e,f: 13.721028 -67.57168 134.43822 898.3691 2839.9958 6717.659
-Epoch 8000 : Training Cost: 816913100.0 a,b,c,d,e,f: 14.679965 -79.75113 146.07385 1008.66895 3206.6692 7629.812
-Epoch 9000 : Training Cost: 751971500.0 a,b,c,d,e,f: 15.62181 -91.71608 157.55713 1116.7715 3565.8323 8529.976
-Epoch 10000 : Training Cost: 691508740.0 a,b,c,d,e,f: 16.545347 -103.4531 168.88321 1222.6348 3916.9785 9416.236
-Epoch 11000 : Training Cost: 635382000.0 a,b,c,d,e,f: 17.450052 -114.954254 180.03932 1326.1565 4259.842 10287.99
-Epoch 12000 : Training Cost: 583477250.0 a,b,c,d,e,f: 18.334944 -126.20821 191.02948 1427.2095 4593.8 11143.449
-Epoch 13000 : Training Cost: 535640400.0 a,b,c,d,e,f: 19.198917 -137.20206 201.84718 1525.6926 4918.5327 11981.633
-Epoch 14000 : Training Cost: 491722240.0 a,b,c,d,e,f: 20.041153 -147.92719 212.49709 1621.5496 5233.627 12800.468
-Epoch 15000 : Training Cost: 451559520.0 a,b,c,d,e,f: 20.860966 -158.37456 222.97133 1714.7141 5538.676 13598.337
-Epoch 16000 : Training Cost: 414988960.0 a,b,c,d,e,f: 21.657421 -168.53406 233.27422 1805.0874 5833.1978 14373.658
-Epoch 17000 : Training Cost: 381837920.0 a,b,c,d,e,f: 22.429693 -178.39536 243.39914 1892.5883 6116.847 15124.394
-Epoch 18000 : Training Cost: 351931300.0 a,b,c,d,e,f: 23.176882 -187.94789 253.3445 1977.137 6389.117 15848.417
-Epoch 19000 : Training Cost: 325074400.0 a,b,c,d,e,f: 23.898485 -197.18741 263.12512 2058.6716 6649.8037 16543.95
-Epoch 20000 : Training Cost: 301073570.0 a,b,c,d,e,f: 24.593851 -206.10497 272.72385 2137.1797 6898.544 17209.367
-Epoch 21000 : Training Cost: 279727000.0 a,b,c,d,e,f: 25.262104 -214.69217 282.14642 2212.6372 7135.217 17842.854
-Epoch 22000 : Training Cost: 260845550.0 a,b,c,d,e,f: 25.903376 -222.94969 291.4003 2284.9844 7359.4644 18442.408
-Epoch 23000 : Training Cost: 244218030.0 a,b,c,d,e,f: 26.517094 -230.8697 300.45532 2354.3003 7571.261 19007.49
-Epoch 24000 : Training Cost: 229660080.0 a,b,c,d,e,f: 27.102589 -238.44817 309.35342 2420.4185 7770.5728 19536.19
-Epoch 25000 : Training Cost: 216972400.0 a,b,c,d,e,f: 27.660324 -245.69016 318.10062 2483.3608 7957.354 20027.707
-216972400.0 27.660324 -245.69016 318.10062 2483.3608 7957.354 20027.707
-</code></pre><pre><code>predictions = []
-for x in abscissa:
- predictions.append((coefficient1*pow(x,5) + coefficient2*pow(x,4) + coefficient3*pow(x,3) + coefficient4*pow(x,2) + coefficient5*x + constant))
-plt.plot(abscissa , ordinate, 'ro', label ='Original data')
-plt.plot(abscissa, predictions, label ='Fitted line')
-plt.title('Quintic Regression Result')
-plt.legend()
-plt.show()
-</code></pre><img src="https://output.navanchauhan.now.sh/assets/gciTales/03-regression/6.png"/><h2>Results and Conclusion</h2><p>You just learnt Polynomial Regression using TensorFlow!</p><h2>Notes</h2><h3>Overfitting</h3><blockquote><p>&gt; Overfitting refers to a model that models the training data too well.Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the models ability to generalize.</p></blockquote><blockquote><p>Source: Machine Learning Mastery</p></blockquote><p>Basically if you train your machine learning model on a small dataset for a really large number of epochs, the model will learn all the deformities/noise in the data and will actually think that it is a normal part. Therefore when it will see some new data, it will discard that new data as noise and will impact the accuracy of the model in a negative manner</p>]]></content:encoded></item><item><guid isPermaLink="true">https://output.navanchauhan.now.sh/posts/2019-12-10-TensorFlow-Model-Prediction</guid><title>Making Predictions using Image Classifier (TensorFlow)</title><description>Making predictions for image classification models built using TensorFlow</description><link>https://output.navanchauhan.now.sh/posts/2019-12-10-TensorFlow-Model-Prediction</link><pubDate>Tue, 10 Dec 2019 11:10:00 +0530</pubDate><content:encoded><![CDATA[<h1>Making Predictions using Image Classifier (TensorFlow)</h1><p><em>This was tested on TF 2.x and works as of 2019-12-10</em></p><p>If you want to understand how to make your own custom image classifier, please refer to my previous post.</p><p>If you followed my last post, then you created a model which took an image of dimensions 50x50 as an input.</p><p>First we import the following if we have not imported these before</p><pre><code>import cv2
-import os
-</code></pre><p>Then we read the file using OpenCV.</p><pre><code>image=cv2.imread(imagePath)
-</code></pre><p>The cv2.imread() function returns a NumPy array representing the image. Therefore, we need to convert it to a PIL Image before we can use it.</p><pre><code>image_from_array = Image.fromarray(image, 'RGB')
-</code></pre><p>Then we resize the image</p><pre><code>size_image = image_from_array.resize((50,50))
-</code></pre><p>After this we create a batch consisting of only one image</p><pre><code>p = np.expand_dims(size_image, 0)
-</code></pre><p>We then convert this uint8 datatype to a float32 datatype</p><pre><code>img = tf.cast(p, tf.float32)
-</code></pre><p>Finally we make the prediction</p><pre><code>print(['Infected','Uninfected'][np.argmax(model.predict(img))])
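-# model.predict returns the softmax probabilities for the two classes, and
-# np.argmax picks the more likely index (0 = Infected, 1 = Uninfected)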
-</code></pre><p><code>Infected</code></p>]]></content:encoded></item><item><guid isPermaLink="true">https://output.navanchauhan.now.sh/posts/2019-12-08-Image-Classifier-Tensorflow</guid><title>Creating a Custom Image Classifier using Tensorflow 2.x and Keras for Detecting Malaria</title><description>Tutorial on creating an image classifier model using TensorFlow which detects malaria</description><link>https://output.navanchauhan.now.sh/posts/2019-12-08-Image-Classifier-Tensorflow</link><pubDate>Sun, 8 Dec 2019 14:16:00 +0530</pubDate><content:encoded><![CDATA[<h1>Creating a Custom Image Classifier using Tensorflow 2.x and Keras for Detecting Malaria</h1><p><strong>Done during Google Code-In. Org: Tensorflow.</strong></p><h2>Imports</h2><pre><code>%tensorflow_version 2.x #This is for telling Colab that you want to use TF 2.0, ignore if running on local machine
-
-from PIL import Image # We use the PIL Library to resize images
-import numpy as np
-import os
-import cv2
-import tensorflow as tf
-from tensorflow.keras import datasets, layers, models
-import pandas as pd
-import matplotlib.pyplot as plt
-from keras.models import Sequential
-from keras.layers import Conv2D,MaxPooling2D,Dense,Flatten,Dropout
-</code></pre><h2>Dataset</h2><h3>Fetching the Data</h3><pre><code>!wget ftp://lhcftp.nlm.nih.gov/Open-Access-Datasets/Malaria/cell_images.zip
-!unzip cell_images.zip
-</code></pre><h3>Processing the Data</h3><p>We resize all the images to 50x50 and append each image's NumPy array, along with its label (Infected or Not), to common arrays.</p><pre><code>data = []
-labels = []
-
-Parasitized = os.listdir("./cell_images/Parasitized/")
-for parasite in Parasitized:
- try:
- image=cv2.imread("./cell_images/Parasitized/"+parasite)
- image_from_array = Image.fromarray(image, 'RGB')
- size_image = image_from_array.resize((50, 50))
- data.append(np.array(size_image))
- labels.append(0)
- except AttributeError:
- print("")
-
-Uninfected = os.listdir("./cell_images/Uninfected/")
-for uninfect in Uninfected:
- try:
- image=cv2.imread("./cell_images/Uninfected/"+uninfect)
- image_from_array = Image.fromarray(image, 'RGB')
- size_image = image_from_array.resize((50, 50))
- data.append(np.array(size_image))
- labels.append(1)
- except AttributeError:
- print("")
-</code></pre><h3>Splitting Data</h3><pre><code>df = np.array(data)
-labels = np.array(labels)
-(X_train, X_test) = df[(int)(0.1*len(df)):],df[:(int)(0.1*len(df))]
-(y_train, y_test) = labels[(int)(0.1*len(labels)):],labels[:(int)(0.1*len(labels))]
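-# the first 10% of the samples form the test set, the remaining 90% the training set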
-</code></pre><pre><code>s=np.arange(X_train.shape[0])
-np.random.shuffle(s)
-X_train=X_train[s]
-y_train=y_train[s]
-X_train = X_train/255.0
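-# the same permutation s keeps images and labels aligned while shuffling,
-# and dividing by 255 scales the training pixel values to the 0-1 range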
-</code></pre><h2>Model</h2><h3>Creating Model</h3><p>By creating a sequential model, we create a linear stack of layers.</p><p><em>Note: The input shape for the first layer is 50,50 which corresponds with the sizes of the resized images</em></p><pre><code>model = models.Sequential()
-model.add(layers.Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', input_shape=(50,50,3)))
-model.add(layers.MaxPooling2D(pool_size=2))
-model.add(layers.Conv2D(filters=32,kernel_size=2,padding='same',activation='relu'))
-model.add(layers.MaxPooling2D(pool_size=2))
-model.add(layers.Conv2D(filters=64,kernel_size=2,padding="same",activation="relu"))
-model.add(layers.MaxPooling2D(pool_size=2))
-model.add(layers.Dropout(0.2))
-model.add(layers.Flatten())
-model.add(layers.Dense(500,activation="relu"))
-model.add(layers.Dropout(0.2))
-model.add(layers.Dense(2,activation="softmax"))#2 represent output layer neurons
-model.summary()
-</code></pre><h3>Compiling Model</h3><p>We use the Adam optimiser as it is an adaptive learning rate optimization algorithm that's been designed specifically for <em>training</em> deep neural networks, which means it adjusts its learning rate automatically to get the best results.</p><pre><code>model.compile(optimizer="adam",
- loss="sparse_categorical_crossentropy",
- metrics=["accuracy"])
-</code></pre><h3>Training Model</h3><p>We train the model for 10 epochs on the training data and then validate it using the testing data</p><pre><code>history = model.fit(X_train,y_train, epochs=10, validation_data=(X_test,y_test))
-</code></pre><pre><code>Train on 24803 samples, validate on 2755 samples
-Epoch 1/10
-24803/24803 [==============================] - 57s 2ms/sample - loss: 0.0786 - accuracy: 0.9729 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
-Epoch 2/10
-24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0746 - accuracy: 0.9731 - val_loss: 0.0290 - val_accuracy: 0.9996
-Epoch 3/10
-24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0672 - accuracy: 0.9764 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
-Epoch 4/10
-24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0601 - accuracy: 0.9789 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
-Epoch 5/10
-24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0558 - accuracy: 0.9804 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
-Epoch 6/10
-24803/24803 [==============================] - 57s 2ms/sample - loss: 0.0513 - accuracy: 0.9819 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
-Epoch 7/10
-24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0452 - accuracy: 0.9849 - val_loss: 0.3190 - val_accuracy: 0.9985
-Epoch 8/10
-24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0404 - accuracy: 0.9858 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
-Epoch 9/10
-24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0352 - accuracy: 0.9878 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
-Epoch 10/10
-24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0373 - accuracy: 0.9865 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
-</code></pre><h3>Results</h3><pre><code>accuracy = history.history['accuracy'][-1]*100
-loss = history.history['loss'][-1]*100
-val_accuracy = history.history['val_accuracy'][-1]*100
-val_loss = history.history['val_loss'][-1]*100
-
-print(
- 'Accuracy:', accuracy,
- '\nLoss:', loss,
- '\nValidation Accuracy:', val_accuracy,
- '\nValidation Loss:', val_loss
-)
-</code></pre><pre><code>Accuracy: 98.64532351493835
-Loss: 3.732407123270176
-Validation Accuracy: 100.0
-Validation Loss: 0.0
-</code></pre><p>We have achieved 98% Accuracy!</p><p><a href="https://colab.research.google.com/drive/1ZswDsxLwYZEnev89MzlL5Lwt6ut7iwp- "Colab Notebook"">Link to Colab Notebook</a></p>]]></content:encoded></item><item><guid isPermaLink="true">https://output.navanchauhan.now.sh/posts/2019-12-08-Splitting-Zips</guid><title>Splitting ZIPs into Multiple Parts</title><description>Short code snippet for splitting zips.</description><link>https://output.navanchauhan.now.sh/posts/2019-12-08-Splitting-Zips</link><pubDate>Sun, 8 Dec 2019 13:27:00 +0530</pubDate><content:encoded><![CDATA[<h1>Splitting ZIPs into Multiple Parts</h1><p><strong>Tested on macOS</strong></p><p>Creating the archive:</p><pre><code>zip -r -s 5 oodlesofnoodles.zip website/
-</code></pre><p>5 stands for each split files' size (in mb, kb and gb can also be specified)</p><p>For encrypting the zip:</p><pre><code>zip -er -s 5 oodlesofnoodles.zip website
-</code></pre><p>Extracting Files</p><p>First we need to collect all parts, then</p><pre><code>zip -F oodlesofnoodles.zip --out merged.zip
-</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://output.navanchauhan.now.sh/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response</guid><title>Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response</title><description>This paper is about Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response.</description><link>https://output.navanchauhan.now.sh/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response</link><pubDate>Tue, 14 May 2019 02:42:00 +0530</pubDate><content:encoded><![CDATA[<h1>Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response</h1><blockquote><p>Based on the project showcased at Toyota Hackathon, IITD - 17/18th December 2018</p></blockquote><p><a href="https://www.irjet.net/archives/V6/i5/IRJET-V6I5318.pdf">Download paper here</a></p><p>Recommended citation:</p><h3>ATP</h3><pre><code>Chauhan, N. (2019). &amp;quot;Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response.&amp;quot; &lt;i&gt;International Research Journal of Engineering and Technology (IRJET), 6(5)&lt;/i&gt;.
-</code></pre><h3>BibTeX</h3><pre><code>@article{chauhan_2019, title={Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response}, volume={6}, url={https://www.irjet.net/archives/V6/i5/IRJET-V6I5318.pdf}, number={5}, journal={International Research Journal of Engineering and Technology (IRJET)}, author={Chauhan, Navan}, year={2019}}
-</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://output.navanchauhan.now.sh/posts/hello-world</guid><title>Hello World</title><description>My first post.</description><link>https://output.navanchauhan.now.sh/posts/hello-world</link><pubDate>Tue, 16 Apr 2019 17:39:00 +0530</pubDate><content:encoded><![CDATA[<h1>Hello World</h1><p><strong>Why a Hello World post?</strong></p><p>Just re-did the entire website using Publish (Publish by John Sundell). So, a new hello world post :)</p>]]></content:encoded></item></channel></rss> \ No newline at end of file
+ <span class="k">return</span> <span class="n">bagOfWords</span>
+ <span class="p">}</span>
+<span class="p">}</span>
+
+<span class="kd">struct</span> <span class="nc">ContentView_Previews</span><span class="p">:</span> <span class="n">PreviewProvider</span> <span class="p">{</span>
+ <span class="kd">static</span> <span class="kd">var</span> <span class="nv">previews</span><span class="p">:</span> <span class="n">some</span> <span class="n">View</span> <span class="p">{</span>
+ <span class="n">ContentView</span><span class="p">()</span>
+ <span class="p">}</span>
+<span class="p">}</span>
+</div>
+
+</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://navanchauhan.github.io/posts/2019-12-16-TensorFlow-Polynomial-Regression</guid><title>Polynomial Regression Using TensorFlow</title><description>Polynomial regression using TensorFlow</description><link>https://navanchauhan.github.io/posts/2019-12-16-TensorFlow-Polynomial-Regression</link><pubDate>Mon, 16 Dec 2019 14:16:00 +0530</pubDate><content:encoded><![CDATA[<h1>Polynomial Regression Using TensorFlow</h1><p><strong>In this tutorial you will learn about polynomial regression and how you can implement it in TensorFlow.</strong></p><p>We will be performing polynomial regression using five types of equations:</p><ul><li>Linear</li><li>Quadratic</li><li>Cubic</li><li>Quartic</li><li>Quintic</li></ul><h2>Regression</h2><h3>What is Regression?</h3><p>Regression is a statistical method that is used to try to determine the relationship between a dependent variable (often denoted by Y) and a series of varying variables (called independent variables, often denoted by X).</p><h3>What is Polynomial Regression?</h3><p>This is a form of regression analysis where the relationship between Y and X is modelled as an nth degree polynomial in X. Polynomial regression can even fit a non-linear relationship (e.g. when the points don't form a straight line).</p><h2>Imports</h2><pre><code><div class="highlight"><span></span><span class="kn">import</span> <span class="nn">tensorflow.compat.v1</span> <span class="kn">as</span> <span class="nn">tf</span>
+<span class="n">tf</span><span class="o">.</span><span class="n">disable_v2_behavior</span><span class="p">()</span>
+<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="kn">as</span> <span class="nn">plt</span>
+<span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span>
+<span class="kn">import</span> <span class="nn">pandas</span> <span class="kn">as</span> <span class="nn">pd</span>
+</div>
+
+</code></pre><h2>Dataset</h2><h3>Creating Random Data</h3><p>Even though in this tutorial we will use a Position vs Salary dataset, it is important to know how to create synthetic data.</p><p>To create 50 values spaced evenly between 0 and 50, we use NumPy's linspace function:</p><p><code>linspace(lower_limit, upper_limit, no_of_observations)</code></p><pre><code><div class="highlight"><span></span><span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">50</span><span class="p">,</span> <span class="mi">50</span><span class="p">)</span>
+<span class="n">y</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">50</span><span class="p">,</span> <span class="mi">50</span><span class="p">)</span>
+</div>
+
+</code></pre><p>We use the following to add noise to the data, so that our values do not all lie on a perfectly straight line:</p><pre><code><div class="highlight"><span></span><span class="n">x</span> <span class="o">+=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">50</span><span class="p">)</span>
+<span class="n">y</span> <span class="o">+=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">50</span><span class="p">)</span>
+</div>
+
+</code></pre><h3>Position vs Salary Dataset</h3><p>We will be using https://drive.google.com/file/d/1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9/view (Salary vs Position Dataset)</p><pre><code><div class="highlight"><span></span><span class="nt">!wget</span><span class="na"> --no-check-certificate &#39;https</span><span class="p">:</span><span class="nc">//docs.google.com/uc?export</span><span class="o">=</span><span class="l">download&amp;id=1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9&#39; -O data.csv</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">df</span> <span class="o">=</span> <span class="n">pd</span><span class="o">.</span><span class="n">read_csv</span><span class="p">(</span><span class="s2">&quot;data.csv&quot;</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">df</span> <span class="c1"># this gives us a preview of the dataset we are working with</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="o">|</span> <span class="n">Position</span> <span class="o">|</span> <span class="n">Level</span> <span class="o">|</span> <span class="n">Salary</span> <span class="o">|</span>
+<span class="o">|-------------------|-------|---------|</span>
+<span class="o">|</span> <span class="n">Business</span> <span class="n">Analyst</span> <span class="o">|</span> <span class="mi">1</span> <span class="o">|</span> <span class="mi">45000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">Junior</span> <span class="n">Consultant</span> <span class="o">|</span> <span class="mi">2</span> <span class="o">|</span> <span class="mi">50000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">Senior</span> <span class="n">Consultant</span> <span class="o">|</span> <span class="mi">3</span> <span class="o">|</span> <span class="mi">60000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">Manager</span> <span class="o">|</span> <span class="mi">4</span> <span class="o">|</span> <span class="mi">80000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">Country</span> <span class="n">Manager</span> <span class="o">|</span> <span class="mi">5</span> <span class="o">|</span> <span class="mi">110000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">Region</span> <span class="n">Manager</span> <span class="o">|</span> <span class="mi">6</span> <span class="o">|</span> <span class="mi">150000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">Partner</span> <span class="o">|</span> <span class="mi">7</span> <span class="o">|</span> <span class="mi">200000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">Senior</span> <span class="n">Partner</span> <span class="o">|</span> <span class="mi">8</span> <span class="o">|</span> <span class="mi">300000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">C</span><span class="o">-</span><span class="n">level</span> <span class="o">|</span> <span class="mi">9</span> <span class="o">|</span> <span class="mi">500000</span> <span class="o">|</span>
+<span class="o">|</span> <span class="n">CEO</span> <span class="o">|</span> <span class="mi">10</span> <span class="o">|</span> <span class="mi">1000000</span> <span class="o">|</span>
+</div>
+
+</code></pre><p>We use the Salary column as the ordinate (y-coordinate) and the Level column as the abscissa (x-coordinate):</p><pre><code><div class="highlight"><span></span><span class="n">abscissa</span> <span class="o">=</span> <span class="n">df</span><span class="p">[</span><span class="s2">&quot;Level&quot;</span><span class="p">]</span><span class="o">.</span><span class="n">to_list</span><span class="p">()</span> <span class="c1"># abscissa = [1,2,3,4,5,6,7,8,9,10]</span>
+<span class="n">ordinate</span> <span class="o">=</span> <span class="n">df</span><span class="p">[</span><span class="s2">&quot;Salary&quot;</span><span class="p">]</span><span class="o">.</span><span class="n">to_list</span><span class="p">()</span> <span class="c1"># ordinate = [45000,50000,60000,80000,110000,150000,200000,300000,500000,1000000]</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">n</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">abscissa</span><span class="p">)</span> <span class="c1"># no of observations</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">scatter</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">ordinate</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">ylabel</span><span class="p">(</span><span class="s1">&#39;Salary&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s1">&#39;Position&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s2">&quot;Salary vs Position&quot;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</div>
+
+</code></pre><img src="https://navanchauhan.github.io//assets/gciTales/03-regression/1.png"/><h2>Defining Stuff</h2><pre><code><div class="highlight"><span></span><span class="n">X</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="s2">&quot;float&quot;</span><span class="p">)</span>
+<span class="n">Y</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="s2">&quot;float&quot;</span><span class="p">)</span>
+</div>
+
+</code></pre><h3>Defining Variables</h3><p>We first define all the coefficients and the constant as TensorFlow variables, each with a random initial value:</p><pre><code><div class="highlight"><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">(),</span> <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;a&quot;</span><span class="p">)</span>
+<span class="n">b</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">(),</span> <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;b&quot;</span><span class="p">)</span>
+<span class="n">c</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">(),</span> <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;c&quot;</span><span class="p">)</span>
+<span class="n">d</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">(),</span> <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;d&quot;</span><span class="p">)</span>
+<span class="n">e</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">(),</span> <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;e&quot;</span><span class="p">)</span>
+<span class="n">f</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">(),</span> <span class="n">name</span> <span class="o">=</span> <span class="s2">&quot;f&quot;</span><span class="p">)</span>
+</div>
+
+</code></pre><h3>Model Configuration</h3><pre><code><div class="highlight"><span></span><span class="n">learning_rate</span> <span class="o">=</span> <span class="mf">0.2</span>
+<span class="n">no_of_epochs</span> <span class="o">=</span> <span class="mi">25000</span>
+</div>
+
+</code></pre><h3>Equations</h3><pre><code><div class="highlight"><span></span><span class="n">deg1</span> <span class="o">=</span> <span class="n">a</span><span class="o">*</span><span class="n">X</span> <span class="o">+</span> <span class="n">b</span>
+<span class="n">deg2</span> <span class="o">=</span> <span class="n">a</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">b</span><span class="o">*</span><span class="n">X</span> <span class="o">+</span> <span class="n">c</span>
+<span class="n">deg3</span> <span class="o">=</span> <span class="n">a</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">3</span><span class="p">)</span> <span class="o">+</span> <span class="n">b</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">c</span><span class="o">*</span><span class="n">X</span> <span class="o">+</span> <span class="n">d</span>
+<span class="n">deg4</span> <span class="o">=</span> <span class="n">a</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span> <span class="o">+</span> <span class="n">b</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">3</span><span class="p">)</span> <span class="o">+</span> <span class="n">c</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">d</span><span class="o">*</span><span class="n">X</span> <span class="o">+</span> <span class="n">e</span>
+<span class="n">deg5</span> <span class="o">=</span> <span class="n">a</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">5</span><span class="p">)</span> <span class="o">+</span> <span class="n">b</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span> <span class="o">+</span> <span class="n">c</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">3</span><span class="p">)</span> <span class="o">+</span> <span class="n">d</span><span class="o">*</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">X</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">e</span><span class="o">*</span><span class="n">X</span> <span class="o">+</span> <span class="n">f</span>
+</div>
+
+</code></pre><h3>Cost Function</h3><p>We use the Mean Squared Error Function</p><pre><code><div class="highlight"><span></span><span class="n">mse1</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">deg1</span><span class="o">-</span><span class="n">Y</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span><span class="o">/</span><span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">n</span><span class="p">)</span>
+<span class="n">mse2</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">deg2</span><span class="o">-</span><span class="n">Y</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span><span class="o">/</span><span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">n</span><span class="p">)</span>
+<span class="n">mse3</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">deg3</span><span class="o">-</span><span class="n">Y</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span><span class="o">/</span><span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">n</span><span class="p">)</span>
+<span class="n">mse4</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">deg4</span><span class="o">-</span><span class="n">Y</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span><span class="o">/</span><span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">n</span><span class="p">)</span>
+<span class="n">mse5</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">deg5</span><span class="o">-</span><span class="n">Y</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span><span class="o">/</span><span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">n</span><span class="p">)</span>
+</div>
+
+</code></pre><h3>Optimizer</h3><p>We use the AdamOptimizer for the polynomial functions and GradientDescentOptimizer for the linear function</p><pre><code><div class="highlight"><span></span><span class="n">optimizer1</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">GradientDescentOptimizer</span><span class="p">(</span><span class="n">learning_rate</span><span class="p">)</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">mse1</span><span class="p">)</span>
+<span class="n">optimizer2</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">AdamOptimizer</span><span class="p">(</span><span class="n">learning_rate</span><span class="p">)</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">mse2</span><span class="p">)</span>
+<span class="n">optimizer3</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">AdamOptimizer</span><span class="p">(</span><span class="n">learning_rate</span><span class="p">)</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">mse3</span><span class="p">)</span>
+<span class="n">optimizer4</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">AdamOptimizer</span><span class="p">(</span><span class="n">learning_rate</span><span class="p">)</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">mse4</span><span class="p">)</span>
+<span class="n">optimizer5</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">AdamOptimizer</span><span class="p">(</span><span class="n">learning_rate</span><span class="p">)</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">mse5</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">init</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">global_variables_initializer</span><span class="p">()</span>
+</div>
+
+</code></pre><h2>Model Predictions</h2><p>For each type of equation, we first train the model to estimate the values of the coefficient(s) and constant. Once we have these values, we use them to predict Y from the X values, and then plot the predictions against the actual data to compare the fitted line with the original points.</p><h3>Linear Equation</h3><pre><code><div class="highlight"><span></span><span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">init</span><span class="p">)</span>
+ <span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">no_of_epochs</span><span class="p">):</span>
+ <span class="k">for</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="n">y</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">ordinate</span><span class="p">):</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">optimizer1</span><span class="p">,</span> <span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">x</span><span class="p">,</span> <span class="n">Y</span><span class="p">:</span><span class="n">y</span><span class="p">})</span>
+ <span class="k">if</span> <span class="p">(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span><span class="o">%</span><span class="mi">1000</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span>
+ <span class="n">cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse1</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="k">print</span><span class="p">(</span><span class="s2">&quot;Epoch&quot;</span><span class="p">,(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">),</span> <span class="s2">&quot;: Training Cost:&quot;</span><span class="p">,</span> <span class="n">cost</span><span class="p">,</span><span class="s2">&quot; a,b:&quot;</span><span class="p">,</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">))</span>
+
+ <span class="n">training_cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse1</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="n">coefficient1</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">)</span>
+ <span class="n">constant</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">)</span>
+
+<span class="k">print</span><span class="p">(</span><span class="n">training_cost</span><span class="p">,</span> <span class="n">coefficient1</span><span class="p">,</span> <span class="n">constant</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="nt">Epoch</span><span class="na"> 1000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 2000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 3000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 4000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 5000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 6000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 7000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 8000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 9000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 10000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 11000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 12000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 13000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 14000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 15000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 16000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 17000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 18000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 19000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 20000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 21000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 22000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 23000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 24000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">Epoch</span><span class="na"> 25000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">88999125000.0</span><span class="err"> </span><span class="nc">a,b</span><span class="p">:</span><span class="err"> </span><span class="nc">180396.42</span><span class="err"> </span><span class="nc">-478869.12</span>
+<span class="nt">88999125000.0</span><span class="na"> 180396.42 -478869.12</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">predictions</span> <span class="o">=</span> <span class="p">[]</span>
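+<span class="c1"># rebuild the fitted straight line y = coefficient1*x + constant from the values learnt during training, then plot it against the original data</span>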
+<span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">abscissa</span><span class="p">:</span>
+ <span class="n">predictions</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">coefficient1</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">constant</span><span class="p">))</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span> <span class="p">,</span> <span class="n">ordinate</span><span class="p">,</span> <span class="s1">&#39;ro&#39;</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Original data&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">predictions</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Fitted line&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s1">&#39;Linear Regression Result&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">legend</span><span class="p">()</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</div>
+
+</code></pre><img src="https://navanchauhan.github.io//assets/gciTales/03-regression/2.png"/><h3>Quadratic Equation</h3><pre><code><div class="highlight"><span></span><span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">init</span><span class="p">)</span>
+ <span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">no_of_epochs</span><span class="p">):</span>
+ <span class="k">for</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="n">y</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">ordinate</span><span class="p">):</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">optimizer2</span><span class="p">,</span> <span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">x</span><span class="p">,</span> <span class="n">Y</span><span class="p">:</span><span class="n">y</span><span class="p">})</span>
+ <span class="k">if</span> <span class="p">(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span><span class="o">%</span><span class="mi">1000</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span>
+ <span class="n">cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse2</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="k">print</span><span class="p">(</span><span class="s2">&quot;Epoch&quot;</span><span class="p">,(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">),</span> <span class="s2">&quot;: Training Cost:&quot;</span><span class="p">,</span> <span class="n">cost</span><span class="p">,</span><span class="s2">&quot; a,b,c:&quot;</span><span class="p">,</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">c</span><span class="p">))</span>
+
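+    <span class="c1"># after training, evaluate the final cost on the full data and read the learnt variables out of the session</span>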
+ <span class="n">training_cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse2</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="n">coefficient1</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">)</span>
+ <span class="n">coefficient2</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">)</span>
+ <span class="n">constant</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">c</span><span class="p">)</span>
+
+<span class="k">print</span><span class="p">(</span><span class="n">training_cost</span><span class="p">,</span> <span class="n">coefficient1</span><span class="p">,</span> <span class="n">coefficient2</span><span class="p">,</span> <span class="n">constant</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="nt">Epoch</span><span class="na"> 1000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">52571360000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">1002.4456</span><span class="err"> </span><span class="nc">1097.0197</span><span class="err"> </span><span class="nc">1276.6921</span>
+<span class="nt">Epoch</span><span class="na"> 2000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">37798890000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">1952.4263</span><span class="err"> </span><span class="nc">2130.2825</span><span class="err"> </span><span class="nc">2469.7756</span>
+<span class="nt">Epoch</span><span class="na"> 3000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">26751185000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">2839.5825</span><span class="err"> </span><span class="nc">3081.6118</span><span class="err"> </span><span class="nc">3554.351</span>
+<span class="nt">Epoch</span><span class="na"> 4000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">19020106000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">3644.56</span><span class="err"> </span><span class="nc">3922.9563</span><span class="err"> </span><span class="nc">4486.3135</span>
+<span class="nt">Epoch</span><span class="na"> 5000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">14060446000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">4345.042</span><span class="err"> </span><span class="nc">4621.4233</span><span class="err"> </span><span class="nc">5212.693</span>
+<span class="nt">Epoch</span><span class="na"> 6000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">11201084000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">4921.1855</span><span class="err"> </span><span class="nc">5148.1504</span><span class="err"> </span><span class="nc">5689.0713</span>
+<span class="nt">Epoch</span><span class="na"> 7000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">9732740000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">5364.764</span><span class="err"> </span><span class="nc">5493.0156</span><span class="err"> </span><span class="nc">5906.754</span>
+<span class="nt">Epoch</span><span class="na"> 8000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">9050918000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">5685.4067</span><span class="err"> </span><span class="nc">5673.182</span><span class="err"> </span><span class="nc">5902.0728</span>
+<span class="nt">Epoch</span><span class="na"> 9000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8750394000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">5906.9814</span><span class="err"> </span><span class="nc">5724.8906</span><span class="err"> </span><span class="nc">5734.746</span>
+<span class="nt">Epoch</span><span class="na"> 10000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8613128000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6057.3677</span><span class="err"> </span><span class="nc">5687.3364</span><span class="err"> </span><span class="nc">5461.167</span>
+<span class="nt">Epoch</span><span class="na"> 11000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8540034600.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6160.547</span><span class="err"> </span><span class="nc">5592.3022</span><span class="err"> </span><span class="nc">5122.8633</span>
+<span class="nt">Epoch</span><span class="na"> 12000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8490983000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6233.9175</span><span class="err"> </span><span class="nc">5462.025</span><span class="err"> </span><span class="nc">4747.111</span>
+<span class="nt">Epoch</span><span class="na"> 13000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8450816500.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6289.048</span><span class="err"> </span><span class="nc">5310.7583</span><span class="err"> </span><span class="nc">4350.6997</span>
+<span class="nt">Epoch</span><span class="na"> 14000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8414082000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6333.199</span><span class="err"> </span><span class="nc">5147.394</span><span class="err"> </span><span class="nc">3943.9294</span>
+<span class="nt">Epoch</span><span class="na"> 15000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8378841600.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6370.7944</span><span class="err"> </span><span class="nc">4977.1704</span><span class="err"> </span><span class="nc">3532.476</span>
+<span class="nt">Epoch</span><span class="na"> 16000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8344471000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6404.468</span><span class="err"> </span><span class="nc">4803.542</span><span class="err"> </span><span class="nc">3120.2087</span>
+<span class="nt">Epoch</span><span class="na"> 17000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8310785500.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6435.365</span><span class="err"> </span><span class="nc">4628.1523</span><span class="err"> </span><span class="nc">2709.1445</span>
+<span class="nt">Epoch</span><span class="na"> 18000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8277482000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6465.5493</span><span class="err"> </span><span class="nc">4451.833</span><span class="err"> </span><span class="nc">2300.2783</span>
+<span class="nt">Epoch</span><span class="na"> 19000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8244650000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6494.609</span><span class="err"> </span><span class="nc">4274.826</span><span class="err"> </span><span class="nc">1894.3738</span>
+<span class="nt">Epoch</span><span class="na"> 20000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8212349000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6522.8247</span><span class="err"> </span><span class="nc">4098.1733</span><span class="err"> </span><span class="nc">1491.9915</span>
+<span class="nt">Epoch</span><span class="na"> 21000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8180598300.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6550.6567</span><span class="err"> </span><span class="nc">3922.7405</span><span class="err"> </span><span class="nc">1093.3868</span>
+<span class="nt">Epoch</span><span class="na"> 22000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8149257700.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6578.489</span><span class="err"> </span><span class="nc">3747.8362</span><span class="err"> </span><span class="nc">698.53357</span>
+<span class="nt">Epoch</span><span class="na"> 23000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8118325000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6606.1973</span><span class="err"> </span><span class="nc">3573.2742</span><span class="err"> </span><span class="nc">307.3541</span>
+<span class="nt">Epoch</span><span class="na"> 24000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8088001000.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6632.96</span><span class="err"> </span><span class="nc">3399.878</span><span class="err"> </span><span class="nc">-79.89219</span>
+<span class="nt">Epoch</span><span class="na"> 25000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">8058094600.0</span><span class="err"> </span><span class="nc">a,b,c</span><span class="p">:</span><span class="err"> </span><span class="nc">6659.793</span><span class="err"> </span><span class="nc">3227.2517</span><span class="err"> </span><span class="nc">-463.03156</span>
+<span class="nt">8058094600.0</span><span class="na"> 6659.793 3227.2517 -463.03156</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">predictions</span> <span class="o">=</span> <span class="p">[]</span>
+<span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">abscissa</span><span class="p">:</span>
+ <span class="n">predictions</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">coefficient1</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient2</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">constant</span><span class="p">))</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span> <span class="p">,</span> <span class="n">ordinate</span><span class="p">,</span> <span class="s1">&#39;ro&#39;</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Original data&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">predictions</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Fitted line&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s1">&#39;Quadratic Regression Result&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">legend</span><span class="p">()</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</div>
+
+</code></pre><img src="https://navanchauhan.github.io//assets/gciTales/03-regression/3.png"/><h3>Cubic</h3><pre><code><div class="highlight"><span></span><span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">init</span><span class="p">)</span>
+ <span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">no_of_epochs</span><span class="p">):</span>
+ <span class="k">for</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="n">y</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">ordinate</span><span class="p">):</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">optimizer3</span><span class="p">,</span> <span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">x</span><span class="p">,</span> <span class="n">Y</span><span class="p">:</span><span class="n">y</span><span class="p">})</span>
+ <span class="k">if</span> <span class="p">(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span><span class="o">%</span><span class="mi">1000</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span>
+ <span class="n">cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse3</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="k">print</span><span class="p">(</span><span class="s2">&quot;Epoch&quot;</span><span class="p">,(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">),</span> <span class="s2">&quot;: Training Cost:&quot;</span><span class="p">,</span> <span class="n">cost</span><span class="p">,</span><span class="s2">&quot; a,b,c,d:&quot;</span><span class="p">,</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">c</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">d</span><span class="p">))</span>
+
+ <span class="n">training_cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse3</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="n">coefficient1</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">)</span>
+ <span class="n">coefficient2</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">)</span>
+ <span class="n">coefficient3</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">c</span><span class="p">)</span>
+ <span class="n">constant</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">d</span><span class="p">)</span>
+
+<span class="k">print</span><span class="p">(</span><span class="n">training_cost</span><span class="p">,</span> <span class="n">coefficient1</span><span class="p">,</span> <span class="n">coefficient2</span><span class="p">,</span> <span class="n">coefficient3</span><span class="p">,</span> <span class="n">constant</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="nt">Epoch</span><span class="na"> 1000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">4279814000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">670.1527</span><span class="err"> </span><span class="nc">694.4212</span><span class="err"> </span><span class="nc">751.4653</span><span class="err"> </span><span class="nc">903.9527</span>
+<span class="nt">Epoch</span><span class="na"> 2000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3770950400.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">742.6414</span><span class="err"> </span><span class="nc">666.3489</span><span class="err"> </span><span class="nc">636.94525</span><span class="err"> </span><span class="nc">859.2088</span>
+<span class="nt">Epoch</span><span class="na"> 3000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3717708300.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">756.2582</span><span class="err"> </span><span class="nc">569.3339</span><span class="err"> </span><span class="nc">448.105</span><span class="err"> </span><span class="nc">748.23956</span>
+<span class="nt">Epoch</span><span class="na"> 4000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3667464000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">769.4476</span><span class="err"> </span><span class="nc">474.0318</span><span class="err"> </span><span class="nc">265.5761</span><span class="err"> </span><span class="nc">654.75525</span>
+<span class="nt">Epoch</span><span class="na"> 5000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3620040700.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">782.32324</span><span class="err"> </span><span class="nc">380.54272</span><span class="err"> </span><span class="nc">89.39888</span><span class="err"> </span><span class="nc">578.5136</span>
+<span class="nt">Epoch</span><span class="na"> 6000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3575265800.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">794.8898</span><span class="err"> </span><span class="nc">288.83356</span><span class="err"> </span><span class="nc">-80.5215</span><span class="err"> </span><span class="nc">519.13654</span>
+<span class="nt">Epoch</span><span class="na"> 7000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3532972000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">807.1608</span><span class="err"> </span><span class="nc">198.87044</span><span class="err"> </span><span class="nc">-244.31102</span><span class="err"> </span><span class="nc">476.2061</span>
+<span class="nt">Epoch</span><span class="na"> 8000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3493009200.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">819.13513</span><span class="err"> </span><span class="nc">110.64169</span><span class="err"> </span><span class="nc">-402.0677</span><span class="err"> </span><span class="nc">449.3291</span>
+<span class="nt">Epoch</span><span class="na"> 9000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3455228400.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">830.80255</span><span class="err"> </span><span class="nc">24.0964</span><span class="err"> </span><span class="nc">-553.92804</span><span class="err"> </span><span class="nc">438.0652</span>
+<span class="nt">Epoch</span><span class="na"> 10000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3419475500.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">842.21594</span><span class="err"> </span><span class="nc">-60.797424</span><span class="err"> </span><span class="nc">-700.0123</span><span class="err"> </span><span class="nc">441.983</span>
+<span class="nt">Epoch</span><span class="na"> 11000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3385625300.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">853.3363</span><span class="err"> </span><span class="nc">-144.08699</span><span class="err"> </span><span class="nc">-840.467</span><span class="err"> </span><span class="nc">460.6356</span>
+<span class="nt">Epoch</span><span class="na"> 12000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3353544700.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">864.19135</span><span class="err"> </span><span class="nc">-225.8125</span><span class="err"> </span><span class="nc">-975.4196</span><span class="err"> </span><span class="nc">493.57703</span>
+<span class="nt">Epoch</span><span class="na"> 13000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3323125000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">874.778</span><span class="err"> </span><span class="nc">-305.98932</span><span class="err"> </span><span class="nc">-1104.9867</span><span class="err"> </span><span class="nc">540.39465</span>
+<span class="nt">Epoch</span><span class="na"> 14000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3294257000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">885.1007</span><span class="err"> </span><span class="nc">-384.63474</span><span class="err"> </span><span class="nc">-1229.277</span><span class="err"> </span><span class="nc">600.65607</span>
+<span class="nt">Epoch</span><span class="na"> 15000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3266820000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">895.18823</span><span class="err"> </span><span class="nc">-461.819</span><span class="err"> </span><span class="nc">-1348.4417</span><span class="err"> </span><span class="nc">673.9051</span>
+<span class="nt">Epoch</span><span class="na"> 16000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3240736000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">905.0128</span><span class="err"> </span><span class="nc">-537.541</span><span class="err"> </span><span class="nc">-1462.6171</span><span class="err"> </span><span class="nc">759.7118</span>
+<span class="nt">Epoch</span><span class="na"> 17000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3215895000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">914.60065</span><span class="err"> </span><span class="nc">-611.8676</span><span class="err"> </span><span class="nc">-1571.9058</span><span class="err"> </span><span class="nc">857.6638</span>
+<span class="nt">Epoch</span><span class="na"> 18000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3192216800.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">923.9603</span><span class="err"> </span><span class="nc">-684.8093</span><span class="err"> </span><span class="nc">-1676.4642</span><span class="err"> </span><span class="nc">967.30475</span>
+<span class="nt">Epoch</span><span class="na"> 19000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3169632300.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">933.08594</span><span class="err"> </span><span class="nc">-756.3582</span><span class="err"> </span><span class="nc">-1776.4275</span><span class="err"> </span><span class="nc">1088.2198</span>
+<span class="nt">Epoch</span><span class="na"> 20000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3148046300.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">941.9928</span><span class="err"> </span><span class="nc">-826.6257</span><span class="err"> </span><span class="nc">-1871.9355</span><span class="err"> </span><span class="nc">1219.9702</span>
+<span class="nt">Epoch</span><span class="na"> 21000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3127394800.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">950.67896</span><span class="err"> </span><span class="nc">-895.6205</span><span class="err"> </span><span class="nc">-1963.0989</span><span class="err"> </span><span class="nc">1362.1665</span>
+<span class="nt">Epoch</span><span class="na"> 22000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3107608600.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">959.1487</span><span class="err"> </span><span class="nc">-963.38116</span><span class="err"> </span><span class="nc">-2050.0586</span><span class="err"> </span><span class="nc">1514.4026</span>
+<span class="nt">Epoch</span><span class="na"> 23000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3088618200.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">967.4355</span><span class="err"> </span><span class="nc">-1029.9625</span><span class="err"> </span><span class="nc">-2132.961</span><span class="err"> </span><span class="nc">1676.2717</span>
+<span class="nt">Epoch</span><span class="na"> 24000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3070361300.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">975.52875</span><span class="err"> </span><span class="nc">-1095.4292</span><span class="err"> </span><span class="nc">-2211.854</span><span class="err"> </span><span class="nc">1847.4485</span>
+<span class="nt">Epoch</span><span class="na"> 25000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">3052791300.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">983.4346</span><span class="err"> </span><span class="nc">-1159.7922</span><span class="err"> </span><span class="nc">-2286.9412</span><span class="err"> </span><span class="nc">2027.4857</span>
+<span class="nt">3052791300.0</span><span class="na"> 983.4346 -1159.7922 -2286.9412 2027.4857</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">predictions</span> <span class="o">=</span> <span class="p">[]</span>
+<span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">abscissa</span><span class="p">:</span>
+ <span class="n">predictions</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">coefficient1</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">3</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient2</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient3</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">constant</span><span class="p">))</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span> <span class="p">,</span> <span class="n">ordinate</span><span class="p">,</span> <span class="s1">&#39;ro&#39;</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Original data&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">predictions</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Fitted line&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s1">&#39;Cubic Regression Result&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">legend</span><span class="p">()</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</div>
+
+</code></pre><img src="https://navanchauhan.github.io//assets/gciTales/03-regression/4.png"/><h3>Quartic</h3><pre><code><div class="highlight"><span></span><span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">init</span><span class="p">)</span>
+ <span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">no_of_epochs</span><span class="p">):</span>
+ <span class="k">for</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="n">y</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">ordinate</span><span class="p">):</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">optimizer4</span><span class="p">,</span> <span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">x</span><span class="p">,</span> <span class="n">Y</span><span class="p">:</span><span class="n">y</span><span class="p">})</span>
+ <span class="k">if</span> <span class="p">(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span><span class="o">%</span><span class="mi">1000</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span>
+ <span class="n">cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse4</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="k">print</span><span class="p">(</span><span class="s2">&quot;Epoch&quot;</span><span class="p">,(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">),</span> <span class="s2">&quot;: Training Cost:&quot;</span><span class="p">,</span> <span class="n">cost</span><span class="p">,</span><span class="s2">&quot; a,b,c,d:&quot;</span><span class="p">,</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">c</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">d</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">e</span><span class="p">))</span>
+
+ <span class="n">training_cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse4</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="n">coefficient1</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">)</span>
+ <span class="n">coefficient2</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">)</span>
+ <span class="n">coefficient3</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">c</span><span class="p">)</span>
+ <span class="n">coefficient4</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">d</span><span class="p">)</span>
+ <span class="n">constant</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">e</span><span class="p">)</span>
+
+<span class="k">print</span><span class="p">(</span><span class="n">training_cost</span><span class="p">,</span> <span class="n">coefficient1</span><span class="p">,</span> <span class="n">coefficient2</span><span class="p">,</span> <span class="n">coefficient3</span><span class="p">,</span> <span class="n">coefficient4</span><span class="p">,</span> <span class="n">constant</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="nt">Epoch</span><span class="na"> 1000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1902632600.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">84.48304</span><span class="err"> </span><span class="nc">52.210594</span><span class="err"> </span><span class="nc">54.791424</span><span class="err"> </span><span class="nc">142.51952</span><span class="err"> </span><span class="nc">512.0343</span>
+<span class="nt">Epoch</span><span class="na"> 2000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1854316200.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">88.998955</span><span class="err"> </span><span class="nc">13.073557</span><span class="err"> </span><span class="nc">14.276088</span><span class="err"> </span><span class="nc">223.55667</span><span class="err"> </span><span class="nc">1056.4655</span>
+<span class="nt">Epoch</span><span class="na"> 3000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1812812400.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">92.9462</span><span class="err"> </span><span class="nc">-22.331177</span><span class="err"> </span><span class="nc">-15.262934</span><span class="err"> </span><span class="nc">327.41858</span><span class="err"> </span><span class="nc">1634.9054</span>
+<span class="nt">Epoch</span><span class="na"> 4000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1775716000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">96.42522</span><span class="err"> </span><span class="nc">-54.64535</span><span class="err"> </span><span class="nc">-35.829437</span><span class="err"> </span><span class="nc">449.5028</span><span class="err"> </span><span class="nc">2239.1392</span>
+<span class="nt">Epoch</span><span class="na"> 5000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1741494100.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">99.524734</span><span class="err"> </span><span class="nc">-84.43976</span><span class="err"> </span><span class="nc">-49.181057</span><span class="err"> </span><span class="nc">585.85876</span><span class="err"> </span><span class="nc">2862.4915</span>
+<span class="nt">Epoch</span><span class="na"> 6000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1709199600.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">102.31984</span><span class="err"> </span><span class="nc">-112.19895</span><span class="err"> </span><span class="nc">-56.808075</span><span class="err"> </span><span class="nc">733.1876</span><span class="err"> </span><span class="nc">3499.6199</span>
+<span class="nt">Epoch</span><span class="na"> 7000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1678261800.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">104.87324</span><span class="err"> </span><span class="nc">-138.32709</span><span class="err"> </span><span class="nc">-59.9442</span><span class="err"> </span><span class="nc">888.79626</span><span class="err"> </span><span class="nc">4146.2944</span>
+<span class="nt">Epoch</span><span class="na"> 8000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1648340600.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">107.23536</span><span class="err"> </span><span class="nc">-163.15173</span><span class="err"> </span><span class="nc">-59.58964</span><span class="err"> </span><span class="nc">1050.524</span><span class="err"> </span><span class="nc">4798.979</span>
+<span class="nt">Epoch</span><span class="na"> 9000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1619243400.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">109.44742</span><span class="err"> </span><span class="nc">-186.9409</span><span class="err"> </span><span class="nc">-56.53944</span><span class="err"> </span><span class="nc">1216.6432</span><span class="err"> </span><span class="nc">5454.9463</span>
+<span class="nt">Epoch</span><span class="na"> 10000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1590821900.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">111.54233</span><span class="err"> </span><span class="nc">-209.91287</span><span class="err"> </span><span class="nc">-51.423084</span><span class="err"> </span><span class="nc">1385.8513</span><span class="err"> </span><span class="nc">6113.5137</span>
+<span class="nt">Epoch</span><span class="na"> 11000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1563042200.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">113.54405</span><span class="err"> </span><span class="nc">-232.21953</span><span class="err"> </span><span class="nc">-44.73371</span><span class="err"> </span><span class="nc">1557.1084</span><span class="err"> </span><span class="nc">6771.7046</span>
+<span class="nt">Epoch</span><span class="na"> 12000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1535855600.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">115.471565</span><span class="err"> </span><span class="nc">-253.9838</span><span class="err"> </span><span class="nc">-36.851135</span><span class="err"> </span><span class="nc">1729.535</span><span class="err"> </span><span class="nc">7429.069</span>
+<span class="nt">Epoch</span><span class="na"> 13000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1509255300.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">117.33939</span><span class="err"> </span><span class="nc">-275.29697</span><span class="err"> </span><span class="nc">-28.0714</span><span class="err"> </span><span class="nc">1902.5308</span><span class="err"> </span><span class="nc">8083.9634</span>
+<span class="nt">Epoch</span><span class="na"> 14000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1483227000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">119.1605</span><span class="err"> </span><span class="nc">-296.2472</span><span class="err"> </span><span class="nc">-18.618649</span><span class="err"> </span><span class="nc">2075.6094</span><span class="err"> </span><span class="nc">8735.381</span>
+<span class="nt">Epoch</span><span class="na"> 15000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1457726700.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">120.94584</span><span class="err"> </span><span class="nc">-316.915</span><span class="err"> </span><span class="nc">-8.650095</span><span class="err"> </span><span class="nc">2248.3247</span><span class="err"> </span><span class="nc">9384.197</span>
+<span class="nt">Epoch</span><span class="na"> 16000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1432777300.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">122.69806</span><span class="err"> </span><span class="nc">-337.30704</span><span class="err"> </span><span class="nc">1.7027153</span><span class="err"> </span><span class="nc">2420.5771</span><span class="err"> </span><span class="nc">10028.871</span>
+<span class="nt">Epoch</span><span class="na"> 17000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1408365000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">124.42179</span><span class="err"> </span><span class="nc">-357.45245</span><span class="err"> </span><span class="nc">12.33499</span><span class="err"> </span><span class="nc">2592.2983</span><span class="err"> </span><span class="nc">10669.157</span>
+<span class="nt">Epoch</span><span class="na"> 18000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1384480000.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">126.12332</span><span class="err"> </span><span class="nc">-377.39734</span><span class="err"> </span><span class="nc">23.168756</span><span class="err"> </span><span class="nc">2763.0933</span><span class="err"> </span><span class="nc">11305.027</span>
+<span class="nt">Epoch</span><span class="na"> 19000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1361116800.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">127.80568</span><span class="err"> </span><span class="nc">-397.16415</span><span class="err"> </span><span class="nc">34.160156</span><span class="err"> </span><span class="nc">2933.0452</span><span class="err"> </span><span class="nc">11935.669</span>
+<span class="nt">Epoch</span><span class="na"> 20000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1338288100.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">129.4674</span><span class="err"> </span><span class="nc">-416.72803</span><span class="err"> </span><span class="nc">45.259155</span><span class="err"> </span><span class="nc">3101.7727</span><span class="err"> </span><span class="nc">12561.179</span>
+<span class="nt">Epoch</span><span class="na"> 21000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1315959700.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">131.11403</span><span class="err"> </span><span class="nc">-436.14285</span><span class="err"> </span><span class="nc">56.4436</span><span class="err"> </span><span class="nc">3269.3142</span><span class="err"> </span><span class="nc">13182.058</span>
+<span class="nt">Epoch</span><span class="na"> 22000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1294164700.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">132.74377</span><span class="err"> </span><span class="nc">-455.3779</span><span class="err"> </span><span class="nc">67.6757</span><span class="err"> </span><span class="nc">3435.3833</span><span class="err"> </span><span class="nc">13796.807</span>
+<span class="nt">Epoch</span><span class="na"> 23000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1272863600.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">134.35779</span><span class="err"> </span><span class="nc">-474.45316</span><span class="err"> </span><span class="nc">78.96117</span><span class="err"> </span><span class="nc">3600.264</span><span class="err"> </span><span class="nc">14406.58</span>
+<span class="nt">Epoch</span><span class="na"> 24000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1252052600.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">135.9583</span><span class="err"> </span><span class="nc">-493.38254</span><span class="err"> </span><span class="nc">90.268616</span><span class="err"> </span><span class="nc">3764.0078</span><span class="err"> </span><span class="nc">15010.481</span>
+<span class="nt">Epoch</span><span class="na"> 25000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1231713700.0</span><span class="err"> </span><span class="nc">a,b,c,d</span><span class="p">:</span><span class="err"> </span><span class="nc">137.54753</span><span class="err"> </span><span class="nc">-512.1876</span><span class="err"> </span><span class="nc">101.59372</span><span class="err"> </span><span class="nc">3926.4897</span><span class="err"> </span><span class="nc">15609.368</span>
+<span class="nt">1231713700.0</span><span class="na"> 137.54753 -512.1876 101.59372 3926.4897 15609.368</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">predictions</span> <span class="o">=</span> <span class="p">[]</span>
+<span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">abscissa</span><span class="p">:</span>
+ <span class="n">predictions</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">coefficient1</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient2</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">3</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient3</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient4</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">constant</span><span class="p">))</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span> <span class="p">,</span> <span class="n">ordinate</span><span class="p">,</span> <span class="s1">&#39;ro&#39;</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Original data&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">predictions</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Fitted line&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s1">&#39;Quartic Regression Result&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">legend</span><span class="p">()</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</div>
+
+</code></pre><img src="https://navanchauhan.github.io//assets/gciTales/03-regression/5.png"/><h3>Quintic</h3><pre><code><div class="highlight"><span></span><span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">init</span><span class="p">)</span>
+ <span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">no_of_epochs</span><span class="p">):</span>
+ <span class="k">for</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="n">y</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">ordinate</span><span class="p">):</span>
+ <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">optimizer5</span><span class="p">,</span> <span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">x</span><span class="p">,</span> <span class="n">Y</span><span class="p">:</span><span class="n">y</span><span class="p">})</span>
+ <span class="k">if</span> <span class="p">(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span><span class="o">%</span><span class="mi">1000</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span>
+ <span class="n">cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse5</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="k">print</span><span class="p">(</span><span class="s2">&quot;Epoch&quot;</span><span class="p">,(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">),</span> <span class="s2">&quot;: Training Cost:&quot;</span><span class="p">,</span> <span class="n">cost</span><span class="p">,</span><span class="s2">&quot; a,b,c,d,e,f:&quot;</span><span class="p">,</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">c</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">d</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">e</span><span class="p">),</span><span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">f</span><span class="p">))</span>
+
+ <span class="n">training_cost</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">mse5</span><span class="p">,</span><span class="n">feed_dict</span><span class="o">=</span><span class="p">{</span><span class="n">X</span><span class="p">:</span><span class="n">abscissa</span><span class="p">,</span><span class="n">Y</span><span class="p">:</span><span class="n">ordinate</span><span class="p">})</span>
+ <span class="n">coefficient1</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">)</span>
+ <span class="n">coefficient2</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">b</span><span class="p">)</span>
+ <span class="n">coefficient3</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">c</span><span class="p">)</span>
+ <span class="n">coefficient4</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">d</span><span class="p">)</span>
+ <span class="n">coefficient5</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">e</span><span class="p">)</span>
+ <span class="n">constant</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="nt">Epoch</span><span class="na"> 1000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1409200100.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">7.949472</span><span class="err"> </span><span class="nc">7.46219</span><span class="err"> </span><span class="nc">55.626034</span><span class="err"> </span><span class="nc">184.29028</span><span class="err"> </span><span class="nc">484.00223</span><span class="err"> </span><span class="nc">1024.0083</span>
+<span class="nt">Epoch</span><span class="na"> 2000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1306882400.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">8.732181</span><span class="err"> </span><span class="nc">-4.0085897</span><span class="err"> </span><span class="nc">73.25298</span><span class="err"> </span><span class="nc">315.90103</span><span class="err"> </span><span class="nc">904.08887</span><span class="err"> </span><span class="nc">2004.9749</span>
+<span class="nt">Epoch</span><span class="na"> 3000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1212606000.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">9.732249</span><span class="err"> </span><span class="nc">-16.90125</span><span class="err"> </span><span class="nc">86.28379</span><span class="err"> </span><span class="nc">437.06552</span><span class="err"> </span><span class="nc">1305.055</span><span class="err"> </span><span class="nc">2966.2188</span>
+<span class="nt">Epoch</span><span class="na"> 4000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1123640400.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">10.74851</span><span class="err"> </span><span class="nc">-29.82692</span><span class="err"> </span><span class="nc">98.59997</span><span class="err"> </span><span class="nc">555.331</span><span class="err"> </span><span class="nc">1698.4631</span><span class="err"> </span><span class="nc">3917.9155</span>
+<span class="nt">Epoch</span><span class="na"> 5000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">1039694300.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">11.75426</span><span class="err"> </span><span class="nc">-42.598194</span><span class="err"> </span><span class="nc">110.698326</span><span class="err"> </span><span class="nc">671.64355</span><span class="err"> </span><span class="nc">2085.5513</span><span class="err"> </span><span class="nc">4860.8535</span>
+<span class="nt">Epoch</span><span class="na"> 6000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">960663550.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">12.745439</span><span class="err"> </span><span class="nc">-55.18337</span><span class="err"> </span><span class="nc">122.644936</span><span class="err"> </span><span class="nc">786.00214</span><span class="err"> </span><span class="nc">2466.1638</span><span class="err"> </span><span class="nc">5794.3735</span>
+<span class="nt">Epoch</span><span class="na"> 7000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">886438340.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">13.721028</span><span class="err"> </span><span class="nc">-67.57168</span><span class="err"> </span><span class="nc">134.43822</span><span class="err"> </span><span class="nc">898.3691</span><span class="err"> </span><span class="nc">2839.9958</span><span class="err"> </span><span class="nc">6717.659</span>
+<span class="nt">Epoch</span><span class="na"> 8000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">816913100.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">14.679965</span><span class="err"> </span><span class="nc">-79.75113</span><span class="err"> </span><span class="nc">146.07385</span><span class="err"> </span><span class="nc">1008.66895</span><span class="err"> </span><span class="nc">3206.6692</span><span class="err"> </span><span class="nc">7629.812</span>
+<span class="nt">Epoch</span><span class="na"> 9000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">751971500.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">15.62181</span><span class="err"> </span><span class="nc">-91.71608</span><span class="err"> </span><span class="nc">157.55713</span><span class="err"> </span><span class="nc">1116.7715</span><span class="err"> </span><span class="nc">3565.8323</span><span class="err"> </span><span class="nc">8529.976</span>
+<span class="nt">Epoch</span><span class="na"> 10000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">691508740.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">16.545347</span><span class="err"> </span><span class="nc">-103.4531</span><span class="err"> </span><span class="nc">168.88321</span><span class="err"> </span><span class="nc">1222.6348</span><span class="err"> </span><span class="nc">3916.9785</span><span class="err"> </span><span class="nc">9416.236</span>
+<span class="nt">Epoch</span><span class="na"> 11000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">635382000.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">17.450052</span><span class="err"> </span><span class="nc">-114.954254</span><span class="err"> </span><span class="nc">180.03932</span><span class="err"> </span><span class="nc">1326.1565</span><span class="err"> </span><span class="nc">4259.842</span><span class="err"> </span><span class="nc">10287.99</span>
+<span class="nt">Epoch</span><span class="na"> 12000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">583477250.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">18.334944</span><span class="err"> </span><span class="nc">-126.20821</span><span class="err"> </span><span class="nc">191.02948</span><span class="err"> </span><span class="nc">1427.2095</span><span class="err"> </span><span class="nc">4593.8</span><span class="err"> </span><span class="nc">11143.449</span>
+<span class="nt">Epoch</span><span class="na"> 13000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">535640400.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">19.198917</span><span class="err"> </span><span class="nc">-137.20206</span><span class="err"> </span><span class="nc">201.84718</span><span class="err"> </span><span class="nc">1525.6926</span><span class="err"> </span><span class="nc">4918.5327</span><span class="err"> </span><span class="nc">11981.633</span>
+<span class="nt">Epoch</span><span class="na"> 14000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">491722240.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">20.041153</span><span class="err"> </span><span class="nc">-147.92719</span><span class="err"> </span><span class="nc">212.49709</span><span class="err"> </span><span class="nc">1621.5496</span><span class="err"> </span><span class="nc">5233.627</span><span class="err"> </span><span class="nc">12800.468</span>
+<span class="nt">Epoch</span><span class="na"> 15000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">451559520.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">20.860966</span><span class="err"> </span><span class="nc">-158.37456</span><span class="err"> </span><span class="nc">222.97133</span><span class="err"> </span><span class="nc">1714.7141</span><span class="err"> </span><span class="nc">5538.676</span><span class="err"> </span><span class="nc">13598.337</span>
+<span class="nt">Epoch</span><span class="na"> 16000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">414988960.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">21.657421</span><span class="err"> </span><span class="nc">-168.53406</span><span class="err"> </span><span class="nc">233.27422</span><span class="err"> </span><span class="nc">1805.0874</span><span class="err"> </span><span class="nc">5833.1978</span><span class="err"> </span><span class="nc">14373.658</span>
+<span class="nt">Epoch</span><span class="na"> 17000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">381837920.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">22.429693</span><span class="err"> </span><span class="nc">-178.39536</span><span class="err"> </span><span class="nc">243.39914</span><span class="err"> </span><span class="nc">1892.5883</span><span class="err"> </span><span class="nc">6116.847</span><span class="err"> </span><span class="nc">15124.394</span>
+<span class="nt">Epoch</span><span class="na"> 18000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">351931300.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">23.176882</span><span class="err"> </span><span class="nc">-187.94789</span><span class="err"> </span><span class="nc">253.3445</span><span class="err"> </span><span class="nc">1977.137</span><span class="err"> </span><span class="nc">6389.117</span><span class="err"> </span><span class="nc">15848.417</span>
+<span class="nt">Epoch</span><span class="na"> 19000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">325074400.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">23.898485</span><span class="err"> </span><span class="nc">-197.18741</span><span class="err"> </span><span class="nc">263.12512</span><span class="err"> </span><span class="nc">2058.6716</span><span class="err"> </span><span class="nc">6649.8037</span><span class="err"> </span><span class="nc">16543.95</span>
+<span class="nt">Epoch</span><span class="na"> 20000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">301073570.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">24.593851</span><span class="err"> </span><span class="nc">-206.10497</span><span class="err"> </span><span class="nc">272.72385</span><span class="err"> </span><span class="nc">2137.1797</span><span class="err"> </span><span class="nc">6898.544</span><span class="err"> </span><span class="nc">17209.367</span>
+<span class="nt">Epoch</span><span class="na"> 21000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">279727000.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">25.262104</span><span class="err"> </span><span class="nc">-214.69217</span><span class="err"> </span><span class="nc">282.14642</span><span class="err"> </span><span class="nc">2212.6372</span><span class="err"> </span><span class="nc">7135.217</span><span class="err"> </span><span class="nc">17842.854</span>
+<span class="nt">Epoch</span><span class="na"> 22000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">260845550.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">25.903376</span><span class="err"> </span><span class="nc">-222.94969</span><span class="err"> </span><span class="nc">291.4003</span><span class="err"> </span><span class="nc">2284.9844</span><span class="err"> </span><span class="nc">7359.4644</span><span class="err"> </span><span class="nc">18442.408</span>
+<span class="nt">Epoch</span><span class="na"> 23000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">244218030.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">26.517094</span><span class="err"> </span><span class="nc">-230.8697</span><span class="err"> </span><span class="nc">300.45532</span><span class="err"> </span><span class="nc">2354.3003</span><span class="err"> </span><span class="nc">7571.261</span><span class="err"> </span><span class="nc">19007.49</span>
+<span class="nt">Epoch</span><span class="na"> 24000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">229660080.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">27.102589</span><span class="err"> </span><span class="nc">-238.44817</span><span class="err"> </span><span class="nc">309.35342</span><span class="err"> </span><span class="nc">2420.4185</span><span class="err"> </span><span class="nc">7770.5728</span><span class="err"> </span><span class="nc">19536.19</span>
+<span class="nt">Epoch</span><span class="na"> 25000 </span><span class="p">:</span><span class="err"> </span><span class="nc">Training</span><span class="err"> </span><span class="nc">Cost</span><span class="p">:</span><span class="err"> </span><span class="nc">216972400.0</span><span class="err"> </span><span class="nc">a,b,c,d,e,f</span><span class="p">:</span><span class="err"> </span><span class="nc">27.660324</span><span class="err"> </span><span class="nc">-245.69016</span><span class="err"> </span><span class="nc">318.10062</span><span class="err"> </span><span class="nc">2483.3608</span><span class="err"> </span><span class="nc">7957.354</span><span class="err"> </span><span class="nc">20027.707</span>
+<span class="nt">216972400.0</span><span class="na"> 27.660324 -245.69016 318.10062 2483.3608 7957.354 20027.707</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">predictions</span> <span class="o">=</span> <span class="p">[]</span>
+<span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">abscissa</span><span class="p">:</span>
+ <span class="n">predictions</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">coefficient1</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">5</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient2</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient3</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">3</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient4</span><span class="o">*</span><span class="nb">pow</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">coefficient5</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">constant</span><span class="p">))</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span> <span class="p">,</span> <span class="n">ordinate</span><span class="p">,</span> <span class="s1">&#39;ro&#39;</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Original data&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">abscissa</span><span class="p">,</span> <span class="n">predictions</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span><span class="s1">&#39;Fitted line&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s1">&#39;Quintic Regression Result&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">legend</span><span class="p">()</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</div>
+
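+# An illustrative sketch, not part of the original post: to spot the
+# overfitting discussed in the Notes section below, you would typically
+# split the data before training and track the cost on the held-out part
+# as well, e.g.
+#   split = int(0.8 * len(abscissa))
+#   train_x, val_x = abscissa[:split], abscissa[split:]
+#   train_y, val_y = ordinate[:split], ordinate[split:]
+# then fit only on train_x/train_y and also print
+# sess.run(mse5, feed_dict={X: val_x, Y: val_y}) every few epochs; a
+# validation cost that stops improving while the training cost keeps
+# falling is the usual sign of overfitting.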
+</code></pre><img src="https://navanchauhan.github.io//assets/gciTales/03-regression/6.png"/><h2>Results and Conclusion</h2><p>You just learnt Polynomial Regression using TensorFlow!</p><h2>Notes</h2><h3>Overfitting</h3><blockquote><p>&gt; Overfitting refers to a model that models the training data too well.Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the models ability to generalize.</p></blockquote><blockquote><p>Source: Machine Learning Mastery</p></blockquote><p>Basically if you train your machine learning model on a small dataset for a really large number of epochs, the model will learn all the deformities/noise in the data and will actually think that it is a normal part. Therefore when it will see some new data, it will discard that new data as noise and will impact the accuracy of the model in a negative manner</p>]]></content:encoded></item><item><guid isPermaLink="true">https://navanchauhan.github.io/posts/2019-12-10-TensorFlow-Model-Prediction</guid><title>Making Predictions using Image Classifier (TensorFlow)</title><description>Making predictions for image classification models built using TensorFlow</description><link>https://navanchauhan.github.io/posts/2019-12-10-TensorFlow-Model-Prediction</link><pubDate>Tue, 10 Dec 2019 11:10:00 +0530</pubDate><content:encoded><![CDATA[<h1>Making Predictions using Image Classifier (TensorFlow)</h1><p><em>This was tested on TF 2.x and works as of 2019-12-10</em></p><p>If you want to understand how to make your own custom image classifier, please refer to my previous post.</p><p>If you followed my last post, then you created a model which took an image of dimensions 50x50 as an input.</p><p>First we import the following if we have not imported these before</p><pre><code><div class="highlight"><span></span><span class="kn">import</span> <span class="nn">cv2</span>
+<span class="kn">import</span> <span class="nn">os</span>
+</div>
+
+</code></pre><p>Then we read the file using OpenCV.</p><pre><code><div class="highlight"><span></span><span class="n">image</span><span class="o">=</span><span class="n">cv2</span><span class="o">.</span><span class="n">imread</span><span class="p">(</span><span class="n">imagePath</span><span class="p">)</span>
+</div>
+
+</code></pre><p>The cv2.imread() function returns a NumPy array representing the image. Therefore, we need to convert it to a PIL Image before we can resize it.</p><pre><code><div class="highlight"><span></span><span class="n">image_from_array</span> <span class="o">=</span> <span class="n">Image</span><span class="o">.</span><span class="n">fromarray</span><span class="p">(</span><span class="n">image</span><span class="p">,</span> <span class="s1">&#39;RGB&#39;</span><span class="p">)</span>
+</div>
+
+</code></pre><p>Then we resize the image to 50x50, the input size the model expects.</p><pre><code><div class="highlight"><span></span><span class="n">size_image</span> <span class="o">=</span> <span class="n">image_from_array</span><span class="o">.</span><span class="n">resize</span><span class="p">((</span><span class="mi">50</span><span class="p">,</span><span class="mi">50</span><span class="p">))</span>
+</div>
+
+</code></pre><p>After this, we create a batch consisting of only one image.</p><pre><code><div class="highlight"><span></span><span class="n">p</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">size_image</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
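+# p now has shape (1, 50, 50, 3): a batch containing a single 50x50 RGB image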
+</div>
+
+</code></pre><p>We then convert the image data from the uint8 datatype to float32.</p><pre><code><div class="highlight"><span></span><span class="n">img</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
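+# Note: the training post scales pixel values with X_train = X_train/255.0,
+# so for consistent results you may want to divide by 255.0 here as well,
+# e.g. img = tf.cast(p, tf.float32) / 255.0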
+</div>
+
+</code></pre><p>Finally, we make the prediction.</p><pre><code><div class="highlight"><span></span><span class="k">print</span><span class="p">([</span><span class="s1">&#39;Infected&#39;</span><span class="p">,</span><span class="s1">&#39;Uninfected&#39;</span><span class="p">][</span><span class="n">np</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">img</span><span class="p">))])</span>
+</div>
+
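+# Putting the steps above together as one helper function (an illustrative
+# sketch, not part of the original post; it assumes the imports and the
+# trained `model` from the previous post are available):
+def predict_image(image_path, model):
+    image = cv2.imread(image_path)                           # read the image as a NumPy array
+    image = Image.fromarray(image, 'RGB').resize((50, 50))   # convert to PIL and resize
+    batch = tf.cast(np.expand_dims(image, 0), tf.float32)    # batch of one, cast to float32
+    return ['Infected', 'Uninfected'][np.argmax(model.predict(batch))]
+
+# e.g. print(predict_image("cell.png", model))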
+</code></pre><p><code>Infected</code></p>]]></content:encoded></item><item><guid isPermaLink="true">https://navanchauhan.github.io/posts/2019-12-08-Image-Classifier-Tensorflow</guid><title>Creating a Custom Image Classifier using Tensorflow 2.x and Keras for Detecting Malaria</title><description>Tutorial on creating an image classifier model using TensorFlow which detects malaria</description><link>https://navanchauhan.github.io/posts/2019-12-08-Image-Classifier-Tensorflow</link><pubDate>Sun, 8 Dec 2019 14:16:00 +0530</pubDate><content:encoded><![CDATA[<h1>Creating a Custom Image Classifier using Tensorflow 2.x and Keras for Detecting Malaria</h1><p><strong>Done during Google Code-In. Org: Tensorflow.</strong></p><h2>Imports</h2><pre><code><div class="highlight"><span></span><span class="o">%</span><span class="n">tensorflow_version</span> <span class="mf">2.</span><span class="n">x</span> <span class="c1">#This is for telling Colab that you want to use TF 2.0, ignore if running on local machine</span>
+
+<span class="kn">from</span> <span class="nn">PIL</span> <span class="kn">import</span> <span class="n">Image</span> <span class="c1"># We use the PIL Library to resize images</span>
+<span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span>
+<span class="kn">import</span> <span class="nn">os</span>
+<span class="kn">import</span> <span class="nn">cv2</span>
+<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="kn">as</span> <span class="nn">tf</span>
+<span class="kn">from</span> <span class="nn">tensorflow.keras</span> <span class="kn">import</span> <span class="n">datasets</span><span class="p">,</span> <span class="n">layers</span><span class="p">,</span> <span class="n">models</span>
+<span class="kn">import</span> <span class="nn">pandas</span> <span class="kn">as</span> <span class="nn">pd</span>
+<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="kn">as</span> <span class="nn">plt</span>
+<span class="kn">from</span> <span class="nn">keras.models</span> <span class="kn">import</span> <span class="n">Sequential</span>
+<span class="kn">from</span> <span class="nn">keras.layers</span> <span class="kn">import</span> <span class="n">Conv2D</span><span class="p">,</span><span class="n">MaxPooling2D</span><span class="p">,</span><span class="n">Dense</span><span class="p">,</span><span class="n">Flatten</span><span class="p">,</span><span class="n">Dropout</span>
+</div>
+
+</code></pre><h2>Dataset</h2><h3>Fetching the Data</h3><pre><code><div class="highlight"><span></span><span class="err">!</span><span class="n">wget</span> <span class="n">ftp</span><span class="p">:</span><span class="o">//</span><span class="n">lhcftp</span><span class="o">.</span><span class="n">nlm</span><span class="o">.</span><span class="n">nih</span><span class="o">.</span><span class="n">gov</span><span class="o">/</span><span class="n">Open</span><span class="o">-</span><span class="n">Access</span><span class="o">-</span><span class="n">Datasets</span><span class="o">/</span><span class="n">Malaria</span><span class="o">/</span><span class="n">cell_images</span><span class="o">.</span><span class="n">zip</span>
+<span class="err">!</span><span class="n">unzip</span> <span class="n">cell_images</span><span class="o">.</span><span class="n">zip</span>
+</div>
+
+</code></pre><h3>Processing the Data</h3><p>We resize all the images to 50x50 and append the NumPy array of each image, along with its label (0 for parasitized, 1 for uninfected), to common arrays.</p><pre><code><div class="highlight"><span></span><span class="n">data</span> <span class="o">=</span> <span class="p">[]</span>
+<span class="n">labels</span> <span class="o">=</span> <span class="p">[]</span>
+
+<span class="n">Parasitized</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">listdir</span><span class="p">(</span><span class="s2">&quot;./cell_images/Parasitized/&quot;</span><span class="p">)</span>
+<span class="k">for</span> <span class="n">parasite</span> <span class="ow">in</span> <span class="n">Parasitized</span><span class="p">:</span>
+ <span class="k">try</span><span class="p">:</span>
+ <span class="n">image</span><span class="o">=</span><span class="n">cv2</span><span class="o">.</span><span class="n">imread</span><span class="p">(</span><span class="s2">&quot;./cell_images/Parasitized/&quot;</span><span class="o">+</span><span class="n">parasite</span><span class="p">)</span>
+ <span class="n">image_from_array</span> <span class="o">=</span> <span class="n">Image</span><span class="o">.</span><span class="n">fromarray</span><span class="p">(</span><span class="n">image</span><span class="p">,</span> <span class="s1">&#39;RGB&#39;</span><span class="p">)</span>
+ <span class="n">size_image</span> <span class="o">=</span> <span class="n">image_from_array</span><span class="o">.</span><span class="n">resize</span><span class="p">((</span><span class="mi">50</span><span class="p">,</span> <span class="mi">50</span><span class="p">))</span>
+ <span class="n">data</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">size_image</span><span class="p">))</span>
+ <span class="n">labels</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span>
+ <span class="k">except</span> <span class="ne">AttributeError</span><span class="p">:</span>
+ <span class="k">print</span><span class="p">(</span><span class="s2">&quot;&quot;</span><span class="p">)</span>
+
+<span class="n">Uninfected</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">listdir</span><span class="p">(</span><span class="s2">&quot;./cell_images/Uninfected/&quot;</span><span class="p">)</span>
+<span class="k">for</span> <span class="n">uninfect</span> <span class="ow">in</span> <span class="n">Uninfected</span><span class="p">:</span>
+ <span class="k">try</span><span class="p">:</span>
+ <span class="n">image</span><span class="o">=</span><span class="n">cv2</span><span class="o">.</span><span class="n">imread</span><span class="p">(</span><span class="s2">&quot;./cell_images/Uninfected/&quot;</span><span class="o">+</span><span class="n">uninfect</span><span class="p">)</span>
+ <span class="n">image_from_array</span> <span class="o">=</span> <span class="n">Image</span><span class="o">.</span><span class="n">fromarray</span><span class="p">(</span><span class="n">image</span><span class="p">,</span> <span class="s1">&#39;RGB&#39;</span><span class="p">)</span>
+ <span class="n">size_image</span> <span class="o">=</span> <span class="n">image_from_array</span><span class="o">.</span><span class="n">resize</span><span class="p">((</span><span class="mi">50</span><span class="p">,</span> <span class="mi">50</span><span class="p">))</span>
+ <span class="n">data</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">size_image</span><span class="p">))</span>
+ <span class="n">labels</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span>
+ <span class="k">except</span> <span class="ne">AttributeError</span><span class="p">:</span>
+ <span class="k">print</span><span class="p">(</span><span class="s2">&quot;&quot;</span><span class="p">)</span>
+</div>
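+# Note on the try/except blocks above: cv2.imread returns None for files it
+# cannot decode as images, and Image.fromarray then raises AttributeError,
+# so any non-image files in the folders are simply skipped. Using pass
+# instead of print("") would avoid printing blank lines for them.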
+
+</code></pre><h3>Splitting Data</h3><pre><code><div class="highlight"><span></span><span class="n">df</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">data</span><span class="p">)</span>
+<span class="n">labels</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">labels</span><span class="p">)</span>
+<span class="p">(</span><span class="n">X_train</span><span class="p">,</span> <span class="n">X_test</span><span class="p">)</span> <span class="o">=</span> <span class="n">df</span><span class="p">[(</span><span class="nb">int</span><span class="p">)(</span><span class="mf">0.1</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">df</span><span class="p">)):],</span><span class="n">df</span><span class="p">[:(</span><span class="nb">int</span><span class="p">)(</span><span class="mf">0.1</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">df</span><span class="p">))]</span>
+<span class="p">(</span><span class="n">y_train</span><span class="p">,</span> <span class="n">y_test</span><span class="p">)</span> <span class="o">=</span> <span class="n">labels</span><span class="p">[(</span><span class="nb">int</span><span class="p">)(</span><span class="mf">0.1</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">labels</span><span class="p">)):],</span><span class="n">labels</span><span class="p">[:(</span><span class="nb">int</span><span class="p">)(</span><span class="mf">0.1</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">labels</span><span class="p">))]</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">s</span><span class="p">=</span><span class="n">np</span><span class="p">.</span><span class="n">arange</span><span class="p">(</span><span class="n">X_train</span><span class="p">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
+<span class="n">np</span><span class="p">.</span><span class="n">random</span><span class="p">.</span><span class="n">shuffle</span><span class="p">(</span><span class="n">s</span><span class="p">)</span>
+<span class="n">X_train</span><span class="p">=</span><span class="n">X_train</span><span class="p">[</span><span class="n">s</span><span class="p">]</span>
+<span class="n">y_train</span><span class="p">=</span><span class="n">y_train</span><span class="p">[</span><span class="n">s</span><span class="p">]</span>
+<span class="n">X_train</span> <span class="p">=</span> <span class="n">X_train</span><span class="o">/</span><span class="mf">255.0</span>
+</div>
+
+</code></pre><h2>Model</h2><h3>Creating the Model</h3><p>A sequential model is a linear stack of layers, with each layer feeding into the next.</p><p><em>Note: The input shape for the first layer is (50, 50, 3), which matches the 50x50 RGB images we resized earlier.</em></p><pre><code><div class="highlight"><span></span><span class="n">model</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">Sequential</span><span class="p">()</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">filters</span><span class="o">=</span><span class="mi">16</span><span class="p">,</span> <span class="n">kernel_size</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="s1">&#39;same&#39;</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">&#39;relu&#39;</span><span class="p">,</span> <span class="n">input_shape</span><span class="o">=</span><span class="p">(</span><span class="mi">50</span><span class="p">,</span><span class="mi">50</span><span class="p">,</span><span class="mi">3</span><span class="p">)))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">MaxPooling2D</span><span class="p">(</span><span class="n">pool_size</span><span class="o">=</span><span class="mi">2</span><span class="p">))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">filters</span><span class="o">=</span><span class="mi">32</span><span class="p">,</span><span class="n">kernel_size</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span><span class="n">padding</span><span class="o">=</span><span class="s1">&#39;same&#39;</span><span class="p">,</span><span class="n">activation</span><span class="o">=</span><span class="s1">&#39;relu&#39;</span><span class="p">))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">MaxPooling2D</span><span class="p">(</span><span class="n">pool_size</span><span class="o">=</span><span class="mi">2</span><span class="p">))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">filters</span><span class="o">=</span><span class="mi">64</span><span class="p">,</span><span class="n">kernel_size</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span><span class="n">padding</span><span class="o">=</span><span class="s2">&quot;same&quot;</span><span class="p">,</span><span class="n">activation</span><span class="o">=</span><span class="s2">&quot;relu&quot;</span><span class="p">))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">MaxPooling2D</span><span class="p">(</span><span class="n">pool_size</span><span class="o">=</span><span class="mi">2</span><span class="p">))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Dropout</span><span class="p">(</span><span class="mf">0.2</span><span class="p">))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Flatten</span><span class="p">())</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">500</span><span class="p">,</span><span class="n">activation</span><span class="o">=</span><span class="s2">&quot;relu&quot;</span><span class="p">))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Dropout</span><span class="p">(</span><span class="mf">0.2</span><span class="p">))</span>
+<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span><span class="n">activation</span><span class="o">=</span><span class="s2">&quot;softmax&quot;</span><span class="p">))</span><span class="c1">#2 represent output layer neurons </span>
+<span class="n">model</span><span class="o">.</span><span class="n">summary</span><span class="p">()</span>
+</div>
+
+</code></pre><h3>Compiling the Model</h3><p>We use the Adam optimizer, an adaptive learning rate optimization algorithm designed specifically for <em>training</em> deep neural networks; it adjusts its learning rate automatically during training to get the best results.</p><pre><code><div class="highlight"><span></span><span class="n">model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">optimizer</span><span class="o">=</span><span class="s2">&quot;adam&quot;</span><span class="p">,</span>
+ <span class="n">loss</span><span class="o">=</span><span class="s2">&quot;sparse_categorical_crossentropy&quot;</span><span class="p">,</span>
+ <span class="n">metrics</span><span class="o">=</span><span class="p">[</span><span class="s2">&quot;accuracy&quot;</span><span class="p">])</span>
+</div>
+
+</code></pre><h3>Training the Model</h3><p>We train the model for 10 epochs on the training data, validating it against the test data at the end of every epoch.</p><pre><code><div class="highlight"><span></span><span class="n">history</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">X_train</span><span class="p">,</span><span class="n">y_train</span><span class="p">,</span> <span class="n">epochs</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span> <span class="n">validation_data</span><span class="o">=</span><span class="p">(</span><span class="n">X_test</span><span class="p">,</span><span class="n">y_test</span><span class="p">))</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">Train</span> <span class="n">on</span> <span class="mi">24803</span> <span class="n">samples</span><span class="p">,</span> <span class="n">validate</span> <span class="n">on</span> <span class="mi">2755</span> <span class="n">samples</span>
+<span class="n">Epoch</span> <span class="mi">1</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">57</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0786</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9729</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0000e+00</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">1.0000</span>
+<span class="n">Epoch</span> <span class="mi">2</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">58</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0746</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9731</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0290</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">0.9996</span>
+<span class="n">Epoch</span> <span class="mi">3</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">58</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0672</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9764</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0000e+00</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">1.0000</span>
+<span class="n">Epoch</span> <span class="mi">4</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">58</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0601</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9789</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0000e+00</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">1.0000</span>
+<span class="n">Epoch</span> <span class="mi">5</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">58</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0558</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9804</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0000e+00</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">1.0000</span>
+<span class="n">Epoch</span> <span class="mi">6</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">57</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0513</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9819</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0000e+00</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">1.0000</span>
+<span class="n">Epoch</span> <span class="mi">7</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">58</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0452</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9849</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.3190</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">0.9985</span>
+<span class="n">Epoch</span> <span class="mi">8</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">58</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0404</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9858</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0000e+00</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">1.0000</span>
+<span class="n">Epoch</span> <span class="mi">9</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">58</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0352</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9878</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0000e+00</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">1.0000</span>
+<span class="n">Epoch</span> <span class="mi">10</span><span class="o">/</span><span class="mi">10</span>
+<span class="mi">24803</span><span class="o">/</span><span class="mi">24803</span> <span class="p">[</span><span class="o">==============================</span><span class="p">]</span> <span class="o">-</span> <span class="mi">58</span><span class="n">s</span> <span class="mi">2</span><span class="n">ms</span><span class="o">/</span><span class="n">sample</span> <span class="o">-</span> <span class="n">loss</span><span class="p">:</span> <span class="mf">0.0373</span> <span class="o">-</span> <span class="n">accuracy</span><span class="p">:</span> <span class="mf">0.9865</span> <span class="o">-</span> <span class="n">val_loss</span><span class="p">:</span> <span class="mf">0.0000e+00</span> <span class="o">-</span> <span class="n">val_accuracy</span><span class="p">:</span> <span class="mf">1.0000</span>
+</div>
+
+</code></pre><h3>Results</h3><pre><code><div class="highlight"><span></span><span class="n">accuracy</span> <span class="o">=</span> <span class="n">history</span><span class="o">.</span><span class="n">history</span><span class="p">[</span><span class="s1">&#39;accuracy&#39;</span><span class="p">][</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="mi">100</span>
+<span class="n">loss</span> <span class="o">=</span> <span class="n">history</span><span class="o">.</span><span class="n">history</span><span class="p">[</span><span class="s1">&#39;loss&#39;</span><span class="p">][</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="mi">100</span>
+<span class="n">val_accuracy</span> <span class="o">=</span> <span class="n">history</span><span class="o">.</span><span class="n">history</span><span class="p">[</span><span class="s1">&#39;val_accuracy&#39;</span><span class="p">][</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="mi">100</span>
+<span class="n">val_loss</span> <span class="o">=</span> <span class="n">history</span><span class="o">.</span><span class="n">history</span><span class="p">[</span><span class="s1">&#39;val_loss&#39;</span><span class="p">][</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="mi">100</span>
+
+<span class="k">print</span><span class="p">(</span>
+ <span class="s1">&#39;Accuracy:&#39;</span><span class="p">,</span> <span class="n">accuracy</span><span class="p">,</span>
+ <span class="s1">&#39;</span><span class="se">\n</span><span class="s1">Loss:&#39;</span><span class="p">,</span> <span class="n">loss</span><span class="p">,</span>
+ <span class="s1">&#39;</span><span class="se">\n</span><span class="s1">Validation Accuracy:&#39;</span><span class="p">,</span> <span class="n">val_accuracy</span><span class="p">,</span>
+ <span class="s1">&#39;</span><span class="se">\n</span><span class="s1">Validation Loss:&#39;</span><span class="p">,</span> <span class="n">val_loss</span>
+<span class="p">)</span>
+</div>
+
+</code></pre><pre><code><div class="highlight"><span></span><span class="n">Accuracy</span><span class="p">:</span> <span class="mf">98.64532351493835</span>
+<span class="n">Loss</span><span class="p">:</span> <span class="mf">3.732407123270176</span>
+<span class="n">Validation</span> <span class="n">Accuracy</span><span class="p">:</span> <span class="mf">100.0</span>
+<span class="n">Validation</span> <span class="n">Loss</span><span class="p">:</span> <span class="mf">0.0</span>
+</div>
+
+</code></pre><p>We have achieved 98% accuracy!</p><p><a href="https://colab.research.google.com/drive/1ZswDsxLwYZEnev89MzlL5Lwt6ut7iwp-" title="Colab Notebook">Link to Colab Notebook</a></p>]]></content:encoded></item><item><guid isPermaLink="true">https://navanchauhan.github.io/posts/2019-12-08-Splitting-Zips</guid><title>Splitting ZIPs into Multiple Parts</title><description>Short code snippet for splitting zips.</description><link>https://navanchauhan.github.io/posts/2019-12-08-Splitting-Zips</link><pubDate>Sun, 8 Dec 2019 13:27:00 +0530</pubDate><content:encoded><![CDATA[<h1>Splitting ZIPs into Multiple Parts</h1><p><strong>Tested on macOS</strong></p><p>Creating the archive:</p><pre><code><div class="highlight"><span></span><span class="nt">zip</span><span class="na"> -r -s 5 oodlesofnoodles.zip website/</span>
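+<span class="c1"># -r recurses into the website/ directory, -s 5 caps each split part at 5 MB</span>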
+</div>
+
+</code></pre><p>The 5 is the maximum size of each split part (in MB by default; KB and GB can also be specified).</p><p>For encrypting the zip:</p><pre><code><div class="highlight"><span></span><span class="nt">zip</span><span class="na"> -er -s 5 oodlesofnoodles.zip website</span>
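+<span class="c1"># -e asks for a password and encrypts the archive with it</span>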
+</div>
+
+</code></pre><p>Extracting the files:</p><p>First we need to collect all the parts in the same folder, then run:</p><pre><code><div class="highlight"><span></span><span class="nt">zip</span><span class="na"> -F oodlesofnoodles.zip --out merged.zip</span>
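+<span class="c1"># this combines the collected parts into a single merged.zip, which can then be extracted normally, e.g. with unzip merged.zip</span>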
+</div>
+
+</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://navanchauhan.github.io/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response</guid><title>Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response</title><description>This paper is about Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response.</description><link>https://navanchauhan.github.io/publications/2019-05-14-Detecting-Driver-Fatigue-Over-Speeding-and-Speeding-up-Post-Accident-Response</link><pubDate>Tue, 14 May 2019 02:42:00 +0530</pubDate><content:encoded><![CDATA[<h1>Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response</h1><blockquote><p>Based on the project showcased at Toyota Hackathon, IITD - 17/18th December 2018</p></blockquote><p><a href="https://www.irjet.net/archives/V6/i5/IRJET-V6I5318.pdf">Download paper here</a></p><p>Recommended citation:</p><h3>APA</h3><pre><code><div class="highlight"><span></span><span class="n">Chauhan</span><span class="p">,</span> <span class="n">N</span><span class="p">.</span> <span class="p">(</span><span class="mi">2019</span><span class="p">).</span> <span class="p">&amp;</span><span class="n">quot</span><span class="p">;</span><span class="n">Detecting</span> <span class="n">Driver</span> <span class="n">Fatigue</span><span class="p">,</span> <span class="n">Over</span><span class="o">-</span><span class="n">Speeding</span><span class="p">,</span> <span class="n">and</span> <span class="n">Speeding</span> <span class="n">up</span> <span class="n">Post</span><span class="o">-</span><span class="n">Accident</span> <span class="n">Response</span><span class="p">.&amp;</span><span class="n">quot</span><span class="p">;</span> <span class="p">&lt;</span><span class="n">i</span><span class="p">&gt;</span><span class="n">International</span> <span class="n">Research</span> <span class="n">Journal</span> <span class="n">of</span> <span class="n">Engineering</span> <span class="n">and</span> <span class="n">Technology</span> <span class="p">(</span><span class="n">IRJET</span><span class="p">),</span> <span class="mi">6</span><span class="p">(</span><span class="mi">5</span><span class="p">)</span><span class="o">&lt;/</span><span class="n">i</span><span class="p">&gt;.</span>
+</div>
+
+</code></pre><h3>BibTeX</h3><pre><code><div class="highlight"><span></span><span class="p">@</span><span class="n">article</span><span class="p">{</span><span class="n">chauhan_2019</span><span class="p">,</span> <span class="n">title</span><span class="p">={</span><span class="n">Detecting</span> <span class="n">Driver</span> <span class="n">Fatigue</span><span class="p">,</span> <span class="n">Over</span><span class="o">-</span><span class="n">Speeding</span><span class="p">,</span> <span class="n">and</span> <span class="n">Speeding</span> <span class="n">up</span> <span class="n">Post</span><span class="o">-</span><span class="n">Accident</span> <span class="n">Response</span><span class="p">},</span> <span class="n">volume</span><span class="p">={</span><span class="mi">6</span><span class="p">},</span> <span class="n">url</span><span class="p">={</span><span class="n">https</span><span class="p">:</span><span class="c1">//www.irjet.net/archives/V6/i5/IRJET-V6I5318.pdf}, number={5}, journal={International Research Journal of Engineering and Technology (IRJET)}, author={Chauhan, Navan}, year={2019}}</span>
+</div>
+
+</code></pre>]]></content:encoded></item><item><guid isPermaLink="true">https://navanchauhan.github.io/posts/hello-world</guid><title>Hello World</title><description>My first post.</description><link>https://navanchauhan.github.io/posts/hello-world</link><pubDate>Tue, 16 Apr 2019 17:39:00 +0530</pubDate><content:encoded><![CDATA[<h1>Hello World</h1><p><strong>Why a Hello World post?</strong></p><p>Just re-did the entire website using Publish (the Swift static site generator by John Sundell). So, a new hello world post :)</p>]]></content:encoded></item></channel></rss> \ No newline at end of file