@ttchengab
Last active January 8, 2024 19:14
from string import ascii_uppercase, digits, punctuation

import torch

# Character-level vocabulary used to encode the OCR text (71 characters in total)
VOCAB = ascii_uppercase + digits + punctuation + " \t\n"

# Change to 'cuda' to run on a GPU
device = 'cpu'

def get_test_data(etfo):
    # Encode a string as a column tensor of vocabulary indices
    text = etfo
    text_tensor = torch.zeros(len(text), 1, dtype=torch.long)
    text_tensor[:, 0] = torch.LongTensor([VOCAB.find(c) for c in text])
    return text_tensor.to(device)

# get_info (OCR text extraction) is defined elsewhere in this tutorial
etfo = get_info('invoice.png')
# etfo = get_info('X51005621482.jpeg')
etfo = etfo.upper()
text_tensor = get_test_data(etfo)

# Drop characters that are not in the 71-character vocabulary
# (VOCAB.find returns -1 for unknown characters)
temp = []
for i in range(len(text_tensor)):
    if text_tensor[i] >= 0 and text_tensor[i] <= 70:
        temp.append([int(text_tensor[i])])
text_tensor = torch.LongTensor(temp)

# Model initialization
hidden_size = 256
device = torch.device('cpu')
model = ExtractLSTM(len(VOCAB), 16, hidden_size).to(device)
model.load_state_dict(torch.load('model.pth'))

# test (the inference loop) is defined elsewhere in this tutorial
result = test(model)
print(result)
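
Note that get_info, ExtractLSTM, and test are defined in the other parts of this tutorial, not in this gist. For context only, a minimal sketch of an ExtractLSTM module matching the constructor call above (vocab_size, embedding_dim, hidden_size); the layer layout and the number of output classes are assumptions here, not the author's actual model:

import torch.nn as nn

class ExtractLSTM(nn.Module):
    # Assumed shape: character embedding -> bidirectional LSTM -> per-character class scores
    def __init__(self, vocab_size, embedding_dim, hidden_size, num_classes=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_size, bidirectional=True)
        self.fc = nn.Linear(hidden_size * 2, num_classes)

    def forward(self, text_tensor):
        # text_tensor: (seq_len, batch) of vocabulary indices
        embedded = self.embedding(text_tensor)   # (seq_len, batch, embedding_dim)
        output, _ = self.lstm(embedded)          # (seq_len, batch, 2 * hidden_size)
        return self.fc(output)                   # (seq_len, batch, num_classes)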
@effive001

Do we have any other pretrained model to use in the load method?
