@ApolloZhu, last active May 5, 2019
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun runTextRecognition(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    // on-device text detector (FirebaseVisionTextRecognizer is its newer replacement)
    val detector = FirebaseVision.getInstance().visionTextDetector
    detector.detectInImage(image)
        .addOnSuccessListener { texts ->
            val blocks = texts.blocks
            if (blocks.isEmpty()) { /* nothing found */ return@addOnSuccessListener }
            blocks.forEach { block ->                  // a bunch of text
                block.lines.forEach { line ->          // a line of text
                    line.elements.forEach { element -> // each token
                        element.text        // OCR result
                        element.boundingBox // where it is in the image
                    }
                }
            }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
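/*
Hedged usage sketch (not part of the original gist): decode any Bitmap and hand
it to runTextRecognition. `recognizeFile` and `path` are hypothetical names for
whatever image source the caller actually has.
*/
fun recognizeFile(path: String) {
    android.graphics.BitmapFactory.decodeFile(path)?.let { runTextRecognition(it) }
}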
/*
If using the cloud detector instead, the processing is similar, but because
FirebaseVisionDocumentTextRecognizer is used, the structure of the returned
result is: pages -> blocks -> words -> symbols.
So we need to join the symbols back together into a word:
*/
fun toString(word: FirebaseVisionCloudText.Word): String =
    word.symbols.joinToString("") { it.text }
/*
Note: the @ symbol is recognized as a separate word, which is why the author
joined two adjacent words into a single token for the regex matching function.
That's unnecessary for other use cases, so only zipWithNext if needed.
Why cloud? It should have higher accuracy, but it costs money, so nah.
*/
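/*
Hedged sketch (not part of the original gist) of the zipWithNext idea above:
pair each word with its neighbor so that, e.g., "@" and "username" become one
candidate token for regex matching. `candidateTokens` is a hypothetical helper,
and it assumes the Word objects of one block are already collected into a list.
*/
fun candidateTokens(words: List<FirebaseVisionCloudText.Word>): List<String> =
    words.map { toString(it) }          // rebuild each word with toString above
        .zipWithNext { a, b -> a + b }  // join adjacent words into one token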
ApolloZhu commented May 5, 2019

Final result:

[image]

Composition of ML Kit:

[image]
