Diffstat (limited to 'docs/feed.rss')
-rw-r--r--  docs/feed.rss | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/docs/feed.rss b/docs/feed.rss
index 11e6861..85c0a02 100644
--- a/docs/feed.rss
+++ b/docs/feed.rss
@@ -4,8 +4,8 @@
<title>Navan's Archive</title>
<description>Rare Tips, Tricks and Posts</description>
<link>https://web.navan.dev/</link><language>en</language>
- <lastBuildDate>Sun, 22 May 2022 12:18:20 -0000</lastBuildDate>
- <pubDate>Sun, 22 May 2022 12:18:20 -0000</pubDate>
+ <lastBuildDate>Sun, 22 May 2022 12:30:06 -0000</lastBuildDate>
+ <pubDate>Sun, 22 May 2022 12:30:06 -0000</pubDate>
<ttl>250</ttl>
<atom:link href="https://web.navan.dev/feed.rss" rel="self" type="application/rss+xml"/>
@@ -776,7 +776,9 @@ export BABEL_LIBDIR="/usr/lib/openbabel/3.1.0"
<p>Because of the small size of the database file, I was able to just upload the file.</p>
-<p>For the encoding model, I decided to use the pretrained <code>paraphrase-multilingual-MiniLM-L12-v2</code> model for SentenceTransformers, a Python framework for SOTA sentence, text and image embeddings. I wanted to use a multilingual model as I personally consume content in various languages (natively, no dubs or subs) and some of the sources for their information do not translate to English. As of writing this post, I did not include any other database except Trakt. </p>
+<p>For the encoding model, I decided to use the pretrained <code>paraphrase-multilingual-MiniLM-L12-v2</code> model for SentenceTransformers, a Python framework for SOTA sentence, text and image embeddings.
+I wanted to use a multilingual model as I personally consume content in various languages and some of the sources for their information do not translate to English.
+As of writing this post, I did not include any other database except Trakt. </p>
<p>While deciding how I was going to process the embeddings, I came across multiple solutions:</p>
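A minimal sketch of the encoding step described in the paragraph above, assuming the sentence-transformers package is installed; the sample overview strings and variable names are illustrative, not taken from the post:

# Embed a few overview strings with the pretrained multilingual model.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

overviews = [
    "Una historia de amor y venganza en la Ciudad de Mexico.",   # non-English source text
    "A retired hitman is pulled back in for one last job.",
]
embeddings = model.encode(overviews)  # numpy array: one 384-dimensional vector per overview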
@@ -835,7 +837,8 @@ export BABEL_LIBDIR="/usr/lib/openbabel/3.1.0"
<p>We use the <code>trakt_id</code> for the movie as the ID for the vectors and upsert it into the index. </p>
-<p>To find similar items, we will first have to map the name of the movie to its trakt_id, get the embeddings we have for that id and then perform a similarity search. It is possible that this additional step of mapping could be avoided by storing information as metadata in the index.</p>
+<p>To find similar items, we will first have to map the name of the movie to its trakt_id, get the embeddings we have for that id and then perform a similarity search.
+It is possible that this additional step of mapping could be avoided by storing information as metadata in the index.</p>
<div class="codehilite"><pre><span></span><code><span class="k">def</span> <span class="nf">get_trakt_id</span><span class="p">(</span><span class="n">df</span><span class="p">,</span> <span class="n">title</span><span class="p">:</span> <span class="nb">str</span><span class="p">):</span>
<span class="n">rec</span> <span class="o">=</span> <span class="n">df</span><span class="p">[</span><span class="n">df</span><span class="p">[</span><span class="s2">&quot;title&quot;</span><span class="p">]</span><span class="o">.</span><span class="n">str</span><span class="o">.</span><span class="n">lower</span><span class="p">()</span><span class="o">==</span><span class="n">movie_name</span><span class="o">.</span><span class="n">lower</span><span class="p">()]</span>