# iTeXSnip

Image -> LaTeX

![iTeXSnip App Icon](./iTexSnip/Assets.xcassets/AppIcon.appiconset/icon_256x256.png)

![Demo GIF](./demo.gif)

Works with handwritten formulae as well!

## TODO

### V1

- [x] Rating API
- [x] Preferences
    - Model load preferences
    - Detailed view preferences
    - Rating API server
- [x] Complete Detailed Snippet View

### V2

- [ ] Math Solver
- [ ] TeX Snippet Editor
- [ ] Image Export
- [ ] UI Overhaul
- [ ] Optimizations

## Misc

### Quantization

You can download non-quantized versions of the models from [here](https://www.dropbox.com/scl/fo/0dg2g7vkf9f2lixd8menf/AOWPRd4-2Cywh_YCElLgkgk?rlkey=f3fdqnm2ao64up693ew4g5kil&st=bmw0r8ij&dl=0) and use them in place of the quantized files.

#### Encoder Model

```bash
python -m onnxruntime.quantization.preprocess --input iTexSnip/models/encoder_model.onnx --output encoder-infer.onnx
```

```python
from onnxruntime.quantization import quantize_dynamic

model_fp32 = "encoder-infer.onnx"
model_quant = "encoder-quant.onnx"

# Dynamically quantize the encoder, keeping the patch-embedding
# convolution in full precision; the result is written to model_quant.
quantize_dynamic(
    model_fp32,
    model_quant,
    nodes_to_exclude=["/embeddings/patch_embeddings/projection/Conv"],
)
```
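
A quick way to sanity-check the dynamically quantized encoder is to compare its output against the fp32 model on the same input. This is a minimal sketch: the input shape below is a placeholder, not necessarily the encoder's real one; query `get_inputs()` on the session to see what the exported model expects.

```python
import numpy as np
import onnxruntime as ort

# Stand-in input; replace with a real preprocessed formula image of the
# shape the encoder actually expects (assumed (1, 3, 224, 224) here).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

fp32_sess = ort.InferenceSession("encoder-infer.onnx")
quant_sess = ort.InferenceSession("encoder-quant.onnx")

name = fp32_sess.get_inputs()[0].name
y_fp32 = fp32_sess.run(None, {name: x})[0]
y_quant = quant_sess.run(None, {name: x})[0]
print("max abs diff:", np.abs(y_fp32 - y_quant).max())
```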

Statically quantizing the encoder might give better results than dynamic quantization.
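
For reference, a static pass would look roughly like the sketch below. It calibrates activation ranges with a `CalibrationDataReader` fed representative preprocessed images; the input name `pixel_values`, the shape, and the sample count are placeholders, not values taken from the exported model.

```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, quantize_static

class EncoderCalibrationReader(CalibrationDataReader):
    """Feeds a small set of sample inputs to calibrate activation ranges."""

    def __init__(self, samples, input_name):
        self._feeds = iter([{input_name: s} for s in samples])

    def get_next(self):
        # Return one feed dict per call; None signals the end.
        return next(self._feeds, None)

# Placeholder calibration data; use real preprocessed formula images and
# the encoder's actual input name and shape.
samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(8)]
reader = EncoderCalibrationReader(samples, "pixel_values")

quantize_static("encoder-infer.onnx", "encoder-static-quant.onnx", reader)
```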

#### Decoder Model

```bash
python -m onnxruntime.quantization.preprocess --input iTexSnip/models/decoder_model.onnx --output decoder-infer.onnx
```

```python
from onnxruntime.quantization import quantize_dynamic

model_fp32 = "decoder-infer.onnx"
model_quant = "decoder-quant.onnx"

# Dynamically quantize the decoder; no nodes need to be excluded here.
quantize_dynamic(model_fp32, model_quant)
```
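
Once both models are quantized, a quick size comparison shows what quantization saved. A minimal sketch, assuming the four files sit in the working directory:

```python
import os

# Print on-disk sizes before and after quantization.
for f in ("encoder-infer.onnx", "encoder-quant.onnx",
          "decoder-infer.onnx", "decoder-quant.onnx"):
    print(f"{f}: {os.path.getsize(f) / 1e6:.1f} MB")
```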