author    Navan Chauhan <navanchauhan@gmail.com>    2024-03-21 14:29:50 -0600
committer Navan Chauhan <navanchauhan@gmail.com>    2024-03-21 14:29:50 -0600
commit    37661080a111768e565ae53299c4796ebe711a71 (patch)
tree      27376ca608b92dfa53ce22f07e982c9523cb1875 /Content
parent    b484b8a672a907af87e73fe7006497a6ca86c259 (diff)
fix mathjax stuff
Diffstat (limited to 'Content')
-rw-r--r--    Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md    17
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md b/Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md
index 4341f09..6317175 100644
--- a/Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md
+++ b/Content/posts/2024-03-21-Polynomial-Regression-in-TensorFlow-2.md
@@ -59,9 +59,8 @@ y = (x**3)*coefficients[3] + (x**2)*coefficients[2] + (x**1)*coefficients[1] + (x*
Which is equivalent to the general cubic equation:
-<script type="text/javascript"
- src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
-</script>
+<script src="https://cdn.jsdelivr.net/npm/mathjax@4.0.0-beta.4/tex-mml-chtml.js" id="MathJax-script"></script>
+<script src="https://cdn.jsdelivr.net/npm/mathjax@4.0.0-beta.4/input/tex/extensions/noerrors.js" charset="UTF-8"></script>
$$
y = ax^3 + bx^2 + cx + d
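To make the indexing concrete, here is a small illustrative sketch (the placeholder values are assumptions, not from the post) of how the `coefficients` tensor lines up with a, b, c, and d:

```python
import tensorflow as tf

coefficients = tf.Variable([1.0, 2.0, 3.0, 4.0])  # placeholder values
x = 2.0

# Same expression as in the post's cubic model:
y = (x**3)*coefficients[3] + (x**2)*coefficients[2] \
    + (x**1)*coefficients[1] + (x**0)*coefficients[0]
# i.e. a = coefficients[3], b = coefficients[2],
#      c = coefficients[1], d = coefficients[0]
```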
@@ -85,15 +84,15 @@ for epoch in range(num_epochs):
In TensorFlow 1, we would have been using `tf.Session` instead.
-Here we are using `GradientTape()` instead, to keep track of the loss evaluation and coefficients. This is crucial, as our optimizer needs these gradients to be able to optimize our coefficients.
+Here we use `GradientTape()` instead to record how the loss is computed from the coefficients. This is crucial: the optimizer needs the resulting gradients to update our coefficients.
-Our loss function is Mean Squared Error (MSE)
+Our loss function is Mean Squared Error (MSE):
$$
-= \frac{1}{n}\sum_{i=1}^{n} (Y_i - \^{Y_i})
+\text{MSE} = \frac{1}{n} \sum_{i=1}^{n}{(Y\_i - \hat{Y\_i})^2}
$$
-Where $\^{Y_i}$ is the predicted value and $Y_i$ is the actual value
+Where <math xmlns="http://www.w3.org/1998/Math/MathML"><mover><msub><mi>Y</mi><mi>i</mi></msub><mo stretchy="false" style="math-style:normal;math-depth:0;">^</mo></mover></math> is the predicted value and <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>Y</mi><mi>i</mi></msub></math> is the actual value.
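As a minimal, self-contained sketch of how the tape and the MSE loss fit together (the toy data, optimizer choice, and learning rate here are assumptions, not taken from the post):

```python
import tensorflow as tf

# Toy data roughly following a cubic curve (illustrative only)
x_data = tf.linspace(-2.0, 2.0, 50)
y_actual = 2*x_data**3 - x_data + 1

coefficients = tf.Variable(tf.random.normal([4]))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

with tf.GradientTape() as tape:
    # Same cubic model as above: y = a*x^3 + b*x^2 + c*x + d
    y_pred = (x_data**3)*coefficients[3] + (x_data**2)*coefficients[2] \
             + (x_data**1)*coefficients[1] + (x_data**0)*coefficients[0]
    # Mean Squared Error, matching the formula above
    loss = tf.reduce_mean(tf.square(y_actual - y_pred))

# The tape recorded every operation touching `coefficients`,
# so it can return d(loss)/d(coefficients) for the optimizer.
grads = tape.gradient(loss, [coefficients])
optimizer.apply_gradients(zip(grads, [coefficients]))
```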
### Plotting Final Coefficients
@@ -228,7 +227,9 @@ As always, remember to tweak the parameters and choose the correct model for the
## Further Programming
-How would you modify this code to use another type of nonlinear regression? Say, $ y = ab^x $
+How would you modify this code to use another type of nonlinear regression? Say,
+
+$$ y = ab^x $$
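As a rough starting sketch (the variable names `a` and `b` and their initial values are assumptions, not the post's answer):

```python
import tensorflow as tf

# Hypothetical setup for y = a * b^x
a = tf.Variable(1.0)
b = tf.Variable(1.5)
x_data = tf.constant([0.0, 1.0, 2.0, 3.0])

y_pred = a * tf.pow(b, x_data)  # y = a * b^x
```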
Hint: Your loss calculation would be similar to: