# iTeXSnip
Image -> LaTeX

Works with handwritten formulae as well!
## TODO
### V1
- [x] Rating API
- [x] Preferences
  - Model load preferences
  - Detailed view preferences
  - Rating API server
- [x] Complete Detailed Snippet View
### V2
- [ ] Math Solver
- [ ] TeX Snippet Editor
- [ ] Image Export
- [ ] UI Overhaul
- [ ] Optimizations
## Misc
### Quantization
#### Encoder Model
```bash
python -m onnxruntime.quantization.preprocess --input iTexSnip/models/encoder_model.onnx --output encoder-infer.onnx
```
```python
from onnxruntime.quantization import quantize_dynamic

og = "encoder-infer.onnx"
quant = "encoder-quant.onnx"

# Dynamically quantize the encoder weights to int8, excluding the
# patch-embedding Conv node to preserve accuracy on the image input.
quantize_dynamic(
    og,
    quant,
    nodes_to_exclude=["/embeddings/patch_embeddings/projection/Conv"],
)
```
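For intuition, dynamic quantization stores each weight tensor as int8 values plus a float scale and dequantizes on the fly at inference time. A minimal symmetric per-tensor sketch (illustrative only; onnxruntime's actual scheme differs in details such as rounding and zero-point handling):

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map max |v| to 127."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate float values from int8 + scale."""
    return [q * scale for q in quantized]
```

Each float is recovered only up to one quantization step (the scale), which is why excluding sensitive nodes like the patch-embedding Conv can matter.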
Static quantization (which uses a calibration dataset to fix activation ranges ahead of time) may yield better accuracy for the encoder than dynamic quantization.
#### Decoder Model
```bash
python -m onnxruntime.quantization.preprocess --input iTexSnip/models/decoder_model.onnx --output decoder-infer.onnx
```
```python
from onnxruntime.quantization import quantize_dynamic

og = "decoder-infer.onnx"
quant = "decoder-quant.onnx"

# Dynamically quantize all decoder weight tensors to int8.
quantize_dynamic(og, quant)
```