author    Navan Chauhan <navanchauhan@gmail.com>    2024-03-21 13:54:53 -0600
committer Navan Chauhan <navanchauhan@gmail.com>    2024-03-21 13:54:53 -0600
commit    b484b8a672a907af87e73fe7006497a6ca86c259 (patch)
tree      a479a08a775064a9b1337b26bb3415fb4123bda0
parent    ea7a6ce794dbda1a7eded1d5e663897d46d21fa2 (diff)
add polynomial regression for tf 2.0
-rw-r--r--  Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md            | 241
-rw-r--r--  Resources/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png | bin 0 -> 27498 bytes
-rw-r--r--  docs/feed.rss                                                                 | 271
-rw-r--r--  docs/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png | bin 0 -> 27498 bytes
-rw-r--r--  docs/index.html                                                               | 28
-rw-r--r--  docs/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html              | 311
-rw-r--r--  docs/posts/index.html                                                         | 15
-rw-r--r--  docs/tags/Colab.html                                                          | 15
-rw-r--r--  docs/tags/Tensorflow.html                                                     | 15
9 files changed, 881 insertions, 15 deletions
diff --git a/Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md b/Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md
new file mode 100644
index 0000000..4341f09
--- /dev/null
+++ b/Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md
@@ -0,0 +1,241 @@
+---
+date: 2024-03-21 12:46
+description: Predicting n-th degree polynomials using TensorFlow 2.x
+tags: Tutorial, Tensorflow, Colab
+---
+
+# Polynomial Regression Using TensorFlow 2.x
+
+I have an older post, [Polynomial Regression Using Tensorflow](/posts/2019-12-16-TensorFlow-Polynomial-Regression.html), that used `tensorflow.compat.v1` (which still works as of TF 2.16). I thought it would be nicer to redo it with the newer TF APIs.
+
+I will skip the introduction to polynomial regression and jump straight to the code. Personally, I prefer `scikit-learn` for this task, as in the short sketch below.
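+
+For reference, this is roughly what the scikit-learn version looks like (a minimal sketch, assuming the `data.csv` described in the next section, with its `Level` and `Salary` columns):
+
+```python
+import pandas as pd
+from sklearn.linear_model import LinearRegression
+from sklearn.preprocessing import PolynomialFeatures
+
+df = pd.read_csv("data.csv")
+
+# Expand "Level" into [x, x^2, x^3] and fit an ordinary least-squares model
+X_poly = PolynomialFeatures(degree=3, include_bias=False).fit_transform(df[["Level"]])
+model = LinearRegression().fit(X_poly, df["Salary"])
+print(model.coef_, model.intercept_)
+```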
+
+## Position vs Salary Dataset
+
+As before, we will be using the [Position vs Salary dataset](https://drive.google.com/file/d/1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9/view).
+
+If you are in a Python Notebook environment like Kaggle or Google Colaboratory, you can simply run:
+```bash
+!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9' -O data.csv
+```
+
+## Code
+
+If you just want to copy-paste the code, scroll to the bottom for the entire snippet. Here, I will walk through setting up the code for a 3rd-degree (cubic) polynomial.
+
+### Imports
+
+```python
+import pandas as pd
+import tensorflow as tf
+import matplotlib.pyplot as plt
+import numpy as np
+```
+
+### Reading the Dataset
+
+```python
+df = pd.read_csv("data.csv")
+```
+
+### Variables and Constants
+
+Here, we initialize the X and Y values as constants, since they are not going to change. The coefficients are defined as variables.
+
+```python
+X = tf.constant(df["Level"], dtype=tf.float32)
+Y = tf.constant(df["Salary"], dtype=tf.float32)
+
+coefficients = [tf.Variable(np.random.randn() * 0.01, dtype=tf.float32) for _ in range(4)]
+```
+
+Here, `X` and `Y` are the values from our dataset. We initialize the coefficients for the equations as small random values.
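+
+If you want runs to be repeatable, you can seed NumPy first (a small aside of my own, not required for the walkthrough):
+
+```python
+np.random.seed(42)  # makes the random initial coefficients reproducible
+```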
+
+These coefficients are evaluated by TensorFlow's `tf.math.polyval` function, which evaluates an n-th order polynomial where n is determined by how many coefficients are passed. Note that `tf.math.polyval` treats the *first* element of the list as the highest-order coefficient, so our list of 4 variables will be evaluated as:
+
+```
+y = (x**3)*coefficients[0] + (x**2)*coefficients[1] + (x**1)*coefficients[2] + (x**0)*coefficients[3]
+```
+
+Which is equivalent to the general cubic equation:
+
+<script type="text/javascript"
+ src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
+</script>
+
+$$
+y = ax^3 + bx^2 + cx + d
+$$
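+
+A quick way to convince yourself of this coefficient ordering (a small check of my own, not part of the original walkthrough):
+
+```python
+# For p(x) = 1*x^3 + 2*x^2 + 3*x + 4, p(2) = 8 + 8 + 6 + 4 = 26
+coeffs = [tf.constant(float(c)) for c in (1, 2, 3, 4)]
+print(tf.math.polyval(coeffs, tf.constant(2.0)).numpy())  # 26.0
+```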
+
+### Optimizer Selection & Training
+
+```python
+optimizer = tf.keras.optimizers.Adam(learning_rate=0.3)
+num_epochs = 10_000
+
+for epoch in range(num_epochs):
+    with tf.GradientTape() as tape:
+        y_pred = tf.math.polyval(coefficients, X)
+        loss = tf.reduce_mean(tf.square(Y - y_pred))
+    grads = tape.gradient(loss, coefficients)
+    optimizer.apply_gradients(zip(grads, coefficients))
+    if (epoch+1) % 1000 == 0:
+        print(f"Epoch: {epoch+1}, Loss: {loss.numpy()}")
+```
+
+In TensorFlow 1.x, we would have run this inside a `tf.Session`.
+
+Here, we use `tf.GradientTape` instead: the tape records the operations that compute the loss from our coefficients, and we then ask it for the gradients the optimizer needs to update those coefficients.
+
+Our loss function is the Mean Squared Error (MSE):
+
+$$
+MSE = \frac{1}{n}\sum_{i=1}^{n} (Y_i - \hat{Y_i})^2
+$$
+
+Where $\hat{Y_i}$ is the predicted value and $Y_i$ is the actual value.
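+
+As a quick equivalence check (my aside), the hand-written loss matches Keras's built-in MSE:
+
+```python
+mse = tf.keras.losses.MeanSquaredError()
+loss = mse(Y, y_pred)  # same value as tf.reduce_mean(tf.square(Y - y_pred))
+```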
+
+### Plotting Final Coefficients
+
+```python
+final_coefficients = [c.numpy() for c in coefficients]
+print("Final Coefficients:", final_coefficients)
+
+plt.plot(df["Level"], df["Salary"], label="Original Data")
+plt.plot(df["Level"], [tf.math.polyval(final_coefficients, tf.constant(x, dtype=tf.float32)).numpy() for x in df["Level"]], label="Fitted Polynomial")
+plt.ylabel('Salary')
+plt.xlabel('Position')
+plt.title("Salary vs Position")
+plt.legend()
+plt.show()
+```
+
+
+## Code Snippet for a Polynomial of Degree N
+
+### Using Gradient Tape
+
+This should work regardless of whether your TensorFlow installation ships Keras 2 or Keras 3.
+
+```python
+import tensorflow as tf
+import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+
+df = pd.read_csv("data.csv")
+
+############################
+## Change Parameters Here ##
+############################
+x_column = "Level" #
+y_column = "Salary" #
+degree = 2 #
+learning_rate = 0.3 #
+num_epochs = 25_000 #
+############################
+
+X = tf.constant(df[x_column], dtype=tf.float32)
+Y = tf.constant(df[y_column], dtype=tf.float32)
+
+coefficients = [tf.Variable(np.random.randn() * 0.01, dtype=tf.float32) for _ in range(degree + 1)]
+
+optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
+
+for epoch in range(num_epochs):
+ with tf.GradientTape() as tape:
+ y_pred = tf.math.polyval(coefficients, X)
+ loss = tf.reduce_mean(tf.square(Y - y_pred))
+ grads = tape.gradient(loss, coefficients)
+ optimizer.apply_gradients(zip(grads, coefficients))
+ if (epoch+1) % 1000 == 0:
+ print(f"Epoch: {epoch+1}, Loss: {loss.numpy()}")
+
+final_coefficients = [c.numpy() for c in coefficients]
+print("Final Coefficients:", final_coefficients)
+
+print("Final Equation:", end=" ")
+for i in range(degree+1):
+ print(f"{final_coefficients[i]} * x^{degree-i}", end=" + " if i < degree else "\n")
+
+plt.plot(X, Y, label="Original Data")
+plt.plot(X, [tf.math.polyval(final_coefficients, tf.constant(x, dtype=tf.float32)).numpy() for x in df[x_column]], label="Our Polynomial")
+plt.ylabel(y_column)
+plt.xlabel(x_column)
+plt.title(f"{x_column} vs {y_column}")
+plt.legend()
+plt.show()
+```
+
+### Without Gradient Tape
+
+This relies on the Optimizer's `minimize` function and uses the `var_list` parameter to update the variables.
+
+This will not work with Keras 3 backend in TF 2.16.0 and above unless you switch to the legacy backend.
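+
+One way to switch (an assumption on my part for TF 2.16+: it requires `pip install tf-keras`, and the environment variable must be set before TensorFlow is imported) is the `TF_USE_LEGACY_KERAS` flag:
+
+```python
+import os
+os.environ["TF_USE_LEGACY_KERAS"] = "1"  # opt back into legacy Keras 2
+
+import tensorflow as tf  # tf.keras now resolves to the legacy package
+```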
+
+```python
+import tensorflow as tf
+import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+
+df = pd.read_csv("data.csv")
+
+############################
+## Change Parameters Here ##
+############################
+x_column = "Level" #
+y_column = "Salary" #
+degree = 2 #
+learning_rate = 0.3 #
+num_epochs = 25_000 #
+############################
+
+X = tf.constant(df[x_column], dtype=tf.float32)
+Y = tf.constant(df[y_column], dtype=tf.float32)
+
+coefficients = [tf.Variable(np.random.randn() * 0.01, dtype=tf.float32) for _ in range(degree + 1)]
+
+optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
+
+def loss_function():
+ pred_y = tf.math.polyval(coefficients, X)
+ return tf.reduce_mean(tf.square(pred_y - Y))
+
+for epoch in range(num_epochs):
+ optimizer.minimize(loss_function, var_list=coefficients)
+ if (epoch+1) % 1000 == 0:
+ current_loss = loss_function().numpy()
+ print(f"Epoch {epoch+1}: Training Loss: {current_loss}")
+
+final_coefficients = [c.numpy() for c in coefficients]
+print("Final Coefficients:", final_coefficients)
+
+print("Final Equation:", end=" ")
+for i in range(degree+1):
+ print(f"{final_coefficients[i]} * x^{degree-i}", end=" + " if i < degree else "\n")
+
+plt.plot(X, Y, label="Original Data")
+plt.plot(X,[tf.math.polyval(final_coefficients, tf.constant(x, dtype=tf.float32)).numpy() for x in df[x_column]], label="Our Polynomial")
+plt.ylabel(y_column)
+plt.xlabel(x_column)
+plt.legend()
+plt.title(f"{x_column} vs {y_column}")
+plt.show()
+```
+
+
+As always, remember to tweak the parameters and choose the correct model for the job. A polynomial regression model might not even be the best model for this particular dataset.
+
+## Further Programming
+
+How would you modify this code to use another type of nonlinear regression? Say, $ y = ab^x $
+
+Hint: Your loss calculation would be similar to:
+
+```python
+bx = tf.pow(coefficients[1], X)
+pred_y = tf.math.multiply(coefficients[0], bx)
+loss = tf.reduce_mean(tf.square(pred_y - Y))
+```
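+
+If you want to check your attempt, here is one possible training loop for $ y = ab^x $ (a sketch of my own; the names `a` and `b` and the learning rate are my choices, and I initialize `b` near 1 so `tf.pow` behaves early in training):
+
+```python
+a = tf.Variable(1.0, dtype=tf.float32)
+b = tf.Variable(1.1, dtype=tf.float32)
+optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
+
+for epoch in range(10_000):
+    with tf.GradientTape() as tape:
+        pred_y = a * tf.pow(b, X)  # y = a * b^x
+        loss = tf.reduce_mean(tf.square(pred_y - Y))
+    grads = tape.gradient(loss, [a, b])
+    optimizer.apply_gradients(zip(grads, [a, b]))
+
+print(f"y = {a.numpy():.3f} * {b.numpy():.3f}^x")
+```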
+
+
diff --git a/Resources/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png b/Resources/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png
new file mode 100644
index 0000000..0dbdd08
--- /dev/null
+++ b/Resources/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png
Binary files differ
diff --git a/docs/feed.rss b/docs/feed.rss
index d242684..df334a3 100644
--- a/docs/feed.rss
+++ b/docs/feed.rss
@@ -4,8 +4,8 @@
<title>Navan's Archive</title>
<description>Rare Tips, Tricks and Posts</description>
<link>https://web.navan.dev/</link><language>en</language>
- <lastBuildDate>Fri, 15 Mar 2024 15:00:25 -0000</lastBuildDate>
- <pubDate>Fri, 15 Mar 2024 15:00:25 -0000</pubDate>
+ <lastBuildDate>Thu, 21 Mar 2024 13:54:34 -0000</lastBuildDate>
+ <pubDate>Thu, 21 Mar 2024 13:54:34 -0000</pubDate>
<ttl>250</ttl>
<atom:link href="https://web.navan.dev/feed.rss" rel="self" type="application/rss+xml"/>
@@ -484,6 +484,273 @@ creating<span class="w"> </span>a<span class="w"> </span>DOS<span class="w"> </s
<item>
<guid isPermaLink="true">
+ https://web.navan.dev/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html
+ </guid>
+ <title>
+ Polynomial Regression Using TensorFlow 2.x
+ </title>
+ <description>
+ Predicting n-th degree polynomials using TensorFlow 2.x
+ </description>
+ <link>https://web.navan.dev/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html</link>
+ <pubDate>Thu, 21 Mar 2024 12:46:00 -0000</pubDate>
+      <content:encoded><![CDATA[ ... rendered HTML of the post (duplicate of the Markdown source above) ... ]]></content:encoded>
+ </item>
+
+ <item>
+ <guid isPermaLink="true">
https://web.navan.dev/posts/hello-world.html
</guid>
<title>
diff --git a/docs/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png b/docs/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png
new file mode 100644
index 0000000..0dbdd08
--- /dev/null
+++ b/docs/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png
Binary files differ
diff --git a/docs/index.html b/docs/index.html
index f6a4942..0a3070a 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -50,6 +50,21 @@
<h2>Recent Posts</h2>
<ul>
+ <li><a href="/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html">Polynomial Regression Using TensorFlow 2.x</a></li>
+ <ul>
+ <li>Predicting n-th degree polynomials using TensorFlow 2.x</li>
+ <li>Published On: 2024-03-21 12:46</li>
+ <li>Tags:
+
+ <a href='/tags/Tutorial.html'>Tutorial</a>,
+
+ <a href='/tags/Tensorflow.html'>Tensorflow</a>,
+
+ <a href='/tags/Colab.html'>Colab</a>
+
+ </ul>
+
+
<li><a href="/posts/2024-03-15-setting-up-macos-for-8088-dos-dev.html">Cross-Compiling Hello World for DOS on macOS</a></li>
<ul>
    <li>This goes through compiling Open Watcom 2 and creating simple hello-world examples</li>
@@ -104,19 +119,6 @@
</ul>
- <li><a href="/posts/2023-10-22-search-by-flair-reddit.html">Search / Filter posts by flair on Reddit</a></li>
- <ul>
- <li>Search posts by flair on Reddit Web by using _</li>
- <li>Published On: 2023-10-22 00:37</li>
- <li>Tags:
-
- <a href='/tags/Tech Tip.html'>Tech Tip</a>,
-
- <a href='/tags/Reddit.html'>Reddit</a>
-
- </ul>
-
-
</ul>
<b>For all posts go to <a href="/posts">Posts</a></b>
diff --git a/docs/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html b/docs/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html
new file mode 100644
index 0000000..c1a4ae4
--- /dev/null
+++ b/docs/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html
@@ -0,0 +1,311 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+
+ <link rel="stylesheet" href="https://unpkg.com/latex.css/style.min.css" />
+ <link rel="stylesheet" href="/assets/main.css" />
+ <meta charset="utf-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Polynomial Regression Using TensorFlow 2.x</title>
+ <meta name="og:site_name" content="Navan Chauhan" />
+ <link rel="canonical" href="https://web.navan.dev/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html" />
+    <meta name="twitter:url" content="https://web.navan.dev/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html" />
+ <meta name="og:url" content="https://web.navan.dev/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html" />
+ <meta name="twitter:title" content="Polynomial Regression Using TensorFlow 2.x" />
+ <meta name="og:title" content="Polynomial Regression Using TensorFlow 2.x" />
+ <meta name="description" content="Predicting n-th degree polynomials using TensorFlow 2.x" />
+ <meta name="twitter:description" content="Predicting n-th degree polynomials using TensorFlow 2.x" />
+ <meta name="og:description" content="Predicting n-th degree polynomials using TensorFlow 2.x" />
+ <meta name="twitter:card" content="summary_large_image" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+ <link rel="shortcut icon" href="/images/favicon.png" type="image/png" />
+ <link rel="alternate" href="/feed.rss" type="application/rss+xml" title="Subscribe to Navan Chauhan" />
+ <meta name="twitter:image" content="https://web.navan.dev/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png" />
+ <meta name="og:image" content="https://web.navan.dev/images/opengraph/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.png" />
+ <meta name="google-site-verification" content="LVeSZxz-QskhbEjHxOi7-BM5dDxTg53x2TwrjFxfL0k" />
+ <script data-goatcounter="https://navanchauhan.goatcounter.com/count"
+ async src="//gc.zgo.at/count.js"></script>
+ <script defer data-domain="web.navan.dev" src="https://plausible.io/js/plausible.js"></script>
+ <link rel="manifest" href="/manifest.json" />
+
+</head>
+<body>
+ <center><nav style="display: block;">
+|
+<a href="/">home</a> |
+<a href="/about/">about/links</a> |
+<a href="/posts/">posts</a> |
+<a href="/3D-Designs/">3D designs</a> |
+<!--<a href="/publications/">publications</a> |-->
+<!--<a href="/repo/">iOS repo</a> |-->
+<a href="/feed.rss">RSS Feed</a> |
+</nav>
+</center>
+
+<main>
+
+ <h1>Polynomial Regression Using TensorFlow 2.x</h1>
+
+<p>I have a similar post titled <a rel="noopener" target="_blank" href="/posts/2019-12-16-TensorFlow-Polynomial-Regression.html">Polynomial Regression Using Tensorflow</a> that used <code>tensorflow.compat.v1</code> (Which still works as of TF 2.16). But, I thought it would be nicer to redo it with newer TF versions. </p>
+
+<p>I will be skipping all the introductions about polynomial regression and jumping straight to the code. Personally, I prefer using <code>scikit-learn</code> for this task.</p>
+
+<h2>Position vs Salary Dataset</h2>
+
+<p>Again, we will be using https://drive.google.com/file/d/1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9/view (Salary vs Position Dataset)</p>
+
+<p>If you are in a Python Notebook environment like Kaggle or Google Colaboratory, you can simply run:</p>
+
+<div class="codehilite">
+<pre><span></span><code><span class="nt">!wget</span><span class="na"> --no-check-certificate &#39;https</span><span class="p">:</span><span class="nc">//docs.google.com/uc?export</span><span class="o">=</span><span class="l">download&amp;id=1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9&#39; -O data.csv</span>
+</code></pre>
+</div>
+
+<h2>Code</h2>
+
+<p>If you just want to copy-paste the code, scroll to the bottom for the entire snippet. Here I will try and walk through setting up code for a 3rd-degree (cubic) polynomial</p>
+
+<h3>Imports</h3>
+
+<div class="codehilite">
+<pre><span></span><code><span class="kn">import</span> <span class="nn">pandas</span> <span class="k">as</span> <span class="nn">pd</span>
+<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
+<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
+<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
+</code></pre>
+</div>
+
+<h3>Reading the Dataset</h3>
+
+<div class="codehilite">
+<pre><span></span><code><span class="n">df</span> <span class="o">=</span> <span class="n">pd</span><span class="o">.</span><span class="n">read_csv</span><span class="p">(</span><span class="s2">&quot;data.csv&quot;</span><span class="p">)</span>
+</code></pre>
+</div>
+
+<h3>Variables and Constants</h3>
+
+<p>Here, we initialize the X and Y values as constants, since they are not going to change. The coefficients are defined as variables.</p>
+
+<div class="codehilite">
+<pre><span></span><code><span class="n">X</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">df</span><span class="p">[</span><span class="s2">&quot;Level&quot;</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
+<span class="n">Y</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">df</span><span class="p">[</span><span class="s2">&quot;Salary&quot;</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
+
+<span class="n">coefficients</span> <span class="o">=</span> <span class="p">[</span><span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">()</span> <span class="o">*</span> <span class="mf">0.01</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">4</span><span class="p">)]</span>
+</code></pre>
+</div>
+
+<p>Here, <code>X</code> and <code>Y</code> are the values from our dataset. We initialize the coefficients for the equations as small random values.</p>
+
+<p>These coefficients are evaluated by Tensorflow's <code>tf.math.poyval</code> function which returns the n-th order polynomial based on how many coefficients are passed. Since our list of coefficients contains 4 different variables, it will be evaluated as:</p>
+
+<pre><code>y = (x**3)*coefficients[3] + (x**2)*coefficients[2] + (x**1)*coefficients[1] (x**0)*coefficients[0]
+</code></pre>
+
+<p>Which is equivalent to the general cubic equation:</p>
+
+<p><script type="text/javascript"
+ src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></p>
+
+</script>
+
+$$
+y = ax^3 + bx^2 + cx + d
+$$
+
+### Optimizer Selection & Training
+<div class="codehilite">
+
+<pre><span></span><code><span class="n">optimizer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">optimizers</span><span class="o">.</span><span class="n">Adam</span><span class="p">(</span><span class="n">learning_rate</span><span class="o">=</span><span class="mf">0.3</span><span class="p">)</span>
+<span class="n">num_epochs</span> <span class="o">=</span> <span class="mi">10_000</span>
+
+<span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_epochs</span><span class="p">):</span>
+ <span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">GradientTape</span><span class="p">()</span> <span class="k">as</span> <span class="n">tape</span><span class="p">:</span>
+ <span class="n">y_pred</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">polyval</span><span class="p">(</span><span class="n">coefficients</span><span class="p">,</span> <span class="n">X</span><span class="p">)</span>
+ <span class="n">loss</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">square</span><span class="p">(</span><span class="n">y</span> <span class="o">-</span> <span class="n">y_pred</span><span class="p">))</span>
+ <span class="n">grads</span> <span class="o">=</span> <span class="n">tape</span><span class="o">.</span><span class="n">gradient</span><span class="p">(</span><span class="n">loss</span><span class="p">,</span> <span class="n">coefficients</span><span class="p">)</span>
+ <span class="n">optimizer</span><span class="o">.</span><span class="n">apply_gradients</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">grads</span><span class="p">,</span> <span class="n">coefficients</span><span class="p">))</span>
+ <span class="k">if</span> <span class="p">(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span> <span class="o">%</span> <span class="mi">1000</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
+ <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Epoch: </span><span class="si">{</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="si">}</span><span class="s2">, Loss: </span><span class="si">{</span><span class="n">loss</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span><span class="si">}</span><span class="s2">&quot;</span>
+</code></pre>
+
+</div>
+
+
+In TensorFlow 1, we would have been using `tf.Session` instead.
+
+Here we are using `GradientTape()` instead, to keep track of the loss evaluation and coefficients. This is crucial, as our optimizer needs these gradients to be able to optimize our coefficients.
+
+Our loss function is Mean Squared Error (MSE)
+
+$$
+= \frac{1}{n}\sum_{i=1}^{n} (Y_i - \^{Y_i})
+$$
+
+Where $\^{Y_i}$ is the predicted value and $Y_i$ is the actual value
+
+### Plotting Final Coefficients
+<div class="codehilite">
+
+<pre><span></span><code><span class="n">final_coefficients</span> <span class="o">=</span> <span class="p">[</span><span class="n">c</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span> <span class="k">for</span> <span class="n">c</span> <span class="ow">in</span> <span class="n">coefficients</span><span class="p">]</span>
+<span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Final Coefficients:&quot;</span><span class="p">,</span> <span class="n">final_coefficients</span><span class="p">)</span>
+
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">df</span><span class="p">[</span><span class="s2">&quot;Level&quot;</span><span class="p">],</span> <span class="n">df</span><span class="p">[</span><span class="s2">&quot;Salary&quot;</span><span class="p">],</span> <span class="n">label</span><span class="o">=</span><span class="s2">&quot;Original Data&quot;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">df</span><span class="p">[</span><span class="s2">&quot;Level&quot;</span><span class="p">],[</span><span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">polyval</span><span class="p">(</span><span class="n">final_coefficients</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">))</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">df</span><span class="p">[</span><span class="s2">&quot;Level&quot;</span><span class="p">]])</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">ylabel</span><span class="p">(</span><span class="s1">&#39;Salary&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s1">&#39;Position&#39;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s2">&quot;Salary vs Position&quot;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</code></pre>
+
+</div>
+
+
+
+## Code Snippet for a Polynomial of Degree N
+
+### Using Gradient Tape
+
+This should work regardless of whether your TensorFlow installation is backed by Keras 2 or Keras 3.
+<div class="codehilite">
+
+<pre><span></span><code><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
+<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
+<span class="kn">import</span> <span class="nn">pandas</span> <span class="k">as</span> <span class="nn">pd</span>
+<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
+
+<span class="n">df</span> <span class="o">=</span> <span class="n">pd</span><span class="o">.</span><span class="n">read_csv</span><span class="p">(</span><span class="s2">&quot;data.csv&quot;</span><span class="p">)</span>
+
+<span class="c1">############################</span>
+<span class="c1">## Change Parameters Here ##</span>
+<span class="c1">############################</span>
+<span class="n">x_column</span> <span class="o">=</span> <span class="s2">&quot;Level&quot;</span> <span class="c1">#</span>
+<span class="n">y_column</span> <span class="o">=</span> <span class="s2">&quot;Salary&quot;</span> <span class="c1">#</span>
+<span class="n">degree</span> <span class="o">=</span> <span class="mi">2</span> <span class="c1">#</span>
+<span class="n">learning_rate</span> <span class="o">=</span> <span class="mf">0.3</span> <span class="c1">#</span>
+<span class="n">num_epochs</span> <span class="o">=</span> <span class="mi">25_000</span> <span class="c1">#</span>
+<span class="c1">############################</span>
+
+<span class="n">X</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">df</span><span class="p">[</span><span class="n">x_column</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
+<span class="n">Y</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">df</span><span class="p">[</span><span class="n">y_column</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
+
+<span class="n">coefficients</span> <span class="o">=</span> <span class="p">[</span><span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">()</span> <span class="o">*</span> <span class="mf">0.01</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">degree</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)]</span>
+
+<span class="n">optimizer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">optimizers</span><span class="o">.</span><span class="n">Adam</span><span class="p">(</span><span class="n">learning_rate</span><span class="o">=</span><span class="n">learning_rate</span><span class="p">)</span>
+
+<span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_epochs</span><span class="p">):</span>
+ <span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">GradientTape</span><span class="p">()</span> <span class="k">as</span> <span class="n">tape</span><span class="p">:</span>
+ <span class="n">y_pred</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">polyval</span><span class="p">(</span><span class="n">coefficients</span><span class="p">,</span> <span class="n">X</span><span class="p">)</span>
+ <span class="n">loss</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">square</span><span class="p">(</span><span class="n">Y</span> <span class="o">-</span> <span class="n">y_pred</span><span class="p">))</span>
+ <span class="n">grads</span> <span class="o">=</span> <span class="n">tape</span><span class="o">.</span><span class="n">gradient</span><span class="p">(</span><span class="n">loss</span><span class="p">,</span> <span class="n">coefficients</span><span class="p">)</span>
+ <span class="n">optimizer</span><span class="o">.</span><span class="n">apply_gradients</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">grads</span><span class="p">,</span> <span class="n">coefficients</span><span class="p">))</span>
+ <span class="k">if</span> <span class="p">(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span> <span class="o">%</span> <span class="mi">1000</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
+ <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Epoch: </span><span class="si">{</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="si">}</span><span class="s2">, Loss: </span><span class="si">{</span><span class="n">loss</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span>
+
+<span class="n">final_coefficients</span> <span class="o">=</span> <span class="p">[</span><span class="n">c</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span> <span class="k">for</span> <span class="n">c</span> <span class="ow">in</span> <span class="n">coefficients</span><span class="p">]</span>
+<span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Final Coefficients:&quot;</span><span class="p">,</span> <span class="n">final_coefficients</span><span class="p">)</span>
+
+<span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Final Equation:&quot;</span><span class="p">,</span> <span class="n">end</span><span class="o">=</span><span class="s2">&quot; &quot;</span><span class="p">)</span>
+<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">degree</span><span class="o">+</span><span class="mi">1</span><span class="p">):</span>
+ <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">final_coefficients</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="si">}</span><span class="s2"> * x^</span><span class="si">{</span><span class="n">degree</span><span class="o">-</span><span class="n">i</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">,</span> <span class="n">end</span><span class="o">=</span><span class="s2">&quot; + &quot;</span> <span class="k">if</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">degree</span> <span class="k">else</span> <span class="s2">&quot;</span><span class="se">\n</span><span class="s2">&quot;</span><span class="p">)</span>
+
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">Y</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s2">&quot;Original Data&quot;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">X</span><span class="p">,[</span><span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">polyval</span><span class="p">(</span><span class="n">final_coefficients</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">))</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">df</span><span class="p">[</span><span class="n">x_column</span><span class="p">]]),</span> <span class="n">label</span><span class="o">=</span><span class="s2">&quot;Our Poynomial&quot;</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">ylabel</span><span class="p">(</span><span class="n">y_column</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">xlabel</span><span class="p">(</span><span class="n">x_column</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">x_column</span><span class="si">}</span><span class="s2"> vs </span><span class="si">{</span><span class="n">y_column</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">legend</span><span class="p">()</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</code></pre>
+
+</div>
+
+
+### Without Gradient Tape
+
+This relies on the optimizer's `minimize` method, which computes and applies the gradients itself; the `var_list` parameter tells it which variables to update.
+
+This will not work with the Keras 3 backend in TF 2.16.0 and above unless you switch to the legacy (Keras 2) backend, as sketched below.
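+
+On TF 2.16+ you can opt back into Keras 2 before importing TensorFlow (a sketch, assuming the `tf-keras` package is installed):
+<div class="codehilite">
+
+<pre><span></span><code>import os
+os.environ[&quot;TF_USE_LEGACY_KERAS&quot;] = &quot;1&quot;  # must be set before importing tensorflow
+
+import tensorflow as tf  # tf.keras now resolves to the legacy Keras 2 implementation
+</code></pre>
+
+</div>
+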
+<div class="codehilite">
+
+<pre><span></span><code><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
+<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
+<span class="kn">import</span> <span class="nn">pandas</span> <span class="k">as</span> <span class="nn">pd</span>
+<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
+
+<span class="n">df</span> <span class="o">=</span> <span class="n">pd</span><span class="o">.</span><span class="n">read_csv</span><span class="p">(</span><span class="s2">&quot;data.csv&quot;</span><span class="p">)</span>
+
+<span class="c1">############################</span>
+<span class="c1">## Change Parameters Here ##</span>
+<span class="c1">############################</span>
+<span class="n">x_column</span> <span class="o">=</span> <span class="s2">&quot;Level&quot;</span> <span class="c1">#</span>
+<span class="n">y_column</span> <span class="o">=</span> <span class="s2">&quot;Salary&quot;</span> <span class="c1">#</span>
+<span class="n">degree</span> <span class="o">=</span> <span class="mi">2</span> <span class="c1">#</span>
+<span class="n">learning_rate</span> <span class="o">=</span> <span class="mf">0.3</span> <span class="c1">#</span>
+<span class="n">num_epochs</span> <span class="o">=</span> <span class="mi">25_000</span> <span class="c1">#</span>
+<span class="c1">############################</span>
+
+<span class="n">X</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">df</span><span class="p">[</span><span class="n">x_column</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
+<span class="n">Y</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">df</span><span class="p">[</span><span class="n">y_column</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
+
+<span class="n">coefficients</span> <span class="o">=</span> <span class="p">[</span><span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randn</span><span class="p">()</span> <span class="o">*</span> <span class="mf">0.01</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">degree</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)]</span>
+
+<span class="n">optimizer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">optimizers</span><span class="o">.</span><span class="n">Adam</span><span class="p">(</span><span class="n">learning_rate</span><span class="o">=</span><span class="n">learning_rate</span><span class="p">)</span>
+
+<span class="k">def</span> <span class="nf">loss_function</span><span class="p">():</span>
+ <span class="n">pred_y</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">polyval</span><span class="p">(</span><span class="n">coefficients</span><span class="p">,</span> <span class="n">X</span><span class="p">)</span>
+ <span class="k">return</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">square</span><span class="p">(</span><span class="n">pred_y</span> <span class="o">-</span> <span class="n">Y</span><span class="p">))</span>
+
+<span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_epochs</span><span class="p">):</span>
+ <span class="n">optimizer</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">loss_function</span><span class="p">,</span> <span class="n">var_list</span><span class="o">=</span><span class="n">coefficients</span><span class="p">)</span>
+ <span class="k">if</span> <span class="p">(</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span> <span class="o">%</span> <span class="mi">1000</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
+ <span class="n">current_loss</span> <span class="o">=</span> <span class="n">loss_function</span><span class="p">()</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span>
+ <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Epoch </span><span class="si">{</span><span class="n">epoch</span><span class="o">+</span><span class="mi">1</span><span class="si">}</span><span class="s2">: Training Loss: </span><span class="si">{</span><span class="n">current_loss</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span>
+
+<span class="n">final_coefficients</span> <span class="o">=</span> <span class="n">coefficients</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span>
+<span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Final Coefficients:&quot;</span><span class="p">,</span> <span class="n">final_coefficients</span><span class="p">)</span>
+
+<span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Final Equation:&quot;</span><span class="p">,</span> <span class="n">end</span><span class="o">=</span><span class="s2">&quot; &quot;</span><span class="p">)</span>
+<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">degree</span><span class="o">+</span><span class="mi">1</span><span class="p">):</span>
+ <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">final_coefficients</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="si">}</span><span class="s2"> * x^</span><span class="si">{</span><span class="n">degree</span><span class="o">-</span><span class="n">i</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">,</span> <span class="n">end</span><span class="o">=</span><span class="s2">&quot; + &quot;</span> <span class="k">if</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">degree</span> <span class="k">else</span> <span class="s2">&quot;</span><span class="se">\n</span><span class="s2">&quot;</span><span class="p">)</span>
+
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">Y</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s2">&quot;Original Data&quot;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">X</span><span class="p">,[</span><span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">polyval</span><span class="p">(</span><span class="n">final_coefficients</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">))</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">df</span><span class="p">[</span><span class="n">x_column</span><span class="p">]],</span> <span class="n">label</span><span class="o">=</span><span class="s2">&quot;Our Polynomial&quot;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">ylabel</span><span class="p">(</span><span class="n">y_column</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">xlabel</span><span class="p">(</span><span class="n">x_column</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">legend</span><span class="p">()</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">x_column</span><span class="si">}</span><span class="s2"> vs </span><span class="si">{</span><span class="n">y_column</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span>
+<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+</code></pre>
+
+</div>
+
+
+
+As always, remember to tweak the parameters and choose the correct model for the job. A polynomial regression model might not even be the best model for this particular dataset.
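+
+As a quick sanity check on the gradient-descent result, you can compare it with a closed-form least-squares fit (a sketch reusing `df`, `x_column`, `y_column`, and `degree` from the snippets above):
+<div class="codehilite">
+
+<pre><span></span><code>check = np.polyfit(df[x_column], df[y_column], deg=degree)
+print(&quot;np.polyfit coefficients:&quot;, check)  # highest power first, matching the ordering used by tf.math.polyval
+</code></pre>
+
+</div>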
+
+## Further Programming
+
+How would you modify this code to use another type of nonlinear regression? Say, $ y = ab^x $
+
+Hint: Your loss calculation would be similar to:
+<div class="codehilite">
+
+<pre><span></span><code><span class="n">bx</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">pow</span><span class="p">(</span><span class="n">coefficients</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">X</span><span class="p">)</span>
+<span class="n">pred_y</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">multiply</span><span class="p">(</span><span class="n">coefficients</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">bx</span><span class="p">)</span>
+<span class="n">loss</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">square</span><span class="p">(</span><span class="n">pred_y</span> <span class="o">-</span> <span class="n">Y</span><span class="p">))</span>
+</code></pre>
+
+</div>
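+
+One practical note on this exercise (an assumption about training dynamics, not something verified here): since $b$ is raised to the power of $X$, it helps to initialize $b$ near 1 so `tf.pow` stays well-behaved early in training:
+<div class="codehilite">
+
+<pre><span></span><code># a sketch: a is the scale, b the base of the exponential
+coefficients = [tf.Variable(1.0, dtype=tf.float32),   # a
+                tf.Variable(1.05, dtype=tf.float32)]  # b, initialized near 1
+</code></pre>
+
+</div>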
+
+ <blockquote>If you have scrolled this far, consider subscribing to my mailing list <a href="https://listmonk.navan.dev/subscription/form">here.</a> You can subscribe to either a specific type of post you are interested in, or subscribe to everything with the "Everything" list.</blockquote>
+ <script data-isso="https://comments.navan.dev/"
+ src="https://comments.navan.dev/js/embed.min.js"></script>
+ <section id="isso-thread">
+ <noscript>Javascript needs to be activated to view comments.</noscript>
+ </section>
+</main>
+
+ <script src="assets/manup.min.js"></script>
+ <script src="/pwabuilder-sw-register.js"></script>
+</body>
+</html> \ No newline at end of file
diff --git a/docs/posts/index.html b/docs/posts/index.html
index 2d9d613..d886b19 100644
--- a/docs/posts/index.html
+++ b/docs/posts/index.html
@@ -52,6 +52,21 @@
<ul>
+ <li><a href="/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html">Polynomial Regression Using TensorFlow 2.x</a></li>
+ <ul>
+ <li>Predicting n-th degree polynomials using TensorFlow 2.x</li>
+ <li>Published On: 2024-03-21 12:46</li>
+ <li>Tags:
+
+ <a href='/tags/Tutorial.html'>Tutorial</a>,
+
+ <a href='/tags/Tensorflow.html'>Tensorflow</a>,
+
+ <a href='/tags/Colab.html'>Colab</a>
+
+ </ul>
+
+
<li><a href="/posts/2024-03-15-setting-up-macos-for-8088-dos-dev.html">Cross-Compiling Hello World for DOS on macOS</a></li>
<ul>
      <li>This goes through compiling Open Watcom 2 and creating simple hello-world examples</li>
diff --git a/docs/tags/Colab.html b/docs/tags/Colab.html
index a3721a7..fd8ef08 100644
--- a/docs/tags/Colab.html
+++ b/docs/tags/Colab.html
@@ -49,6 +49,21 @@
<ul>
+ <li><a href="/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html">Polynomial Regression Using TensorFlow 2.x</a></li>
+ <ul>
+ <li>Predicting n-th degree polynomials using TensorFlow 2.x</li>
+ <li>Published On: 2024-03-21 12:46</li>
+ <li>Tags:
+
+ <a href='/tags/Tutorial.html'>Tutorial</a>,
+
+ <a href='/tags/Tensorflow.html'>Tensorflow</a>,
+
+ <a href='/tags/Colab.html'>Colab</a>
+
+ </ul>
+
+
<li><a href="/posts/2020-07-01-Install-rdkit-colab.html">Installing RDKit on Google Colab</a></li>
<ul>
<li>Install RDKit on Google Colab with one code snippet.</li>
diff --git a/docs/tags/Tensorflow.html b/docs/tags/Tensorflow.html
index 3bcb911..04006bb 100644
--- a/docs/tags/Tensorflow.html
+++ b/docs/tags/Tensorflow.html
@@ -49,6 +49,21 @@
<ul>
+ <li><a href="/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.html">Polynomial Regression Using TensorFlow 2.x</a></li>
+ <ul>
+ <li>Predicting n-th degree polynomials using TensorFlow 2.x</li>
+ <li>Published On: 2024-03-21 12:46</li>
+ <li>Tags:
+
+ <a href='/tags/Tutorial.html'>Tutorial</a>,
+
+ <a href='/tags/Tensorflow.html'>Tensorflow</a>,
+
+ <a href='/tags/Colab.html'>Colab</a>
+
+ </ul>
+
+
<li><a href="/posts/2019-12-16-TensorFlow-Polynomial-Regression.html">Polynomial Regression Using TensorFlow</a></li>
<ul>
<li>Polynomial regression using TensorFlow</li>