You can try taking a picture of the notes and having a multimodal model read and extract the text. Either use GPT-4 (ChatGPT's paid tier, probably more accurate) or run llama.cpp's LLaVA multimodal support with a LLaVA model locally (free, but more likely to hallucinate).
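For the paid route, here's a minimal sketch of building a transcription request for a vision-capable model via the OpenAI chat API. The model name (`gpt-4o`) and the prompt are assumptions; swap in whatever vision model your account has access to:

```python
import base64

def build_vision_request(image_path, prompt="Transcribe the handwritten text in this image."):
    """Build a chat-completions payload that attaches an image
    (base64-encoded data URL) for a multimodal model to transcribe."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "model": "gpt-4o",  # assumed model name; any vision-capable model works
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

# Then send it with the OpenAI SDK (paid, needs OPENAI_API_KEY):
#   from openai import OpenAI
#   resp = OpenAI().chat.completions.create(**build_vision_request("notes.jpg"))
#   print(resp.choices[0].message.content)
```

The local llama.cpp/LLaVA route takes the same idea (image + transcription prompt), just pointed at its own CLI or server instead of the API.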
Scanning your notes to PDF and trying a RAG approach might yield results too. You can upload the PDF to GPT/Claude, or run a local RAG project like h2oGPT or privateGPT, and see how well they transcribe your notes.