Updated 27 April 2023
In this article, we are going to learn text recognition from the camera in Flutter.
We will recognize text from an image or a real-time camera feed. The ML Kit Text Recognition API can recognize text in any Latin-based character set. Please check Google ML Kit for more on text recognition.
Check out more about our Flutter app development.
For text recognition in Flutter, we are going to use these Flutter plugins: google_ml_kit and camera.
google_ml_kit: for text recognition and much more; please check the official pub page here.
camera: for accessing the device cameras and the image stream that we will use later; for more, check here.
Before we get started, read about the platform-specific requirements of the google_ml_kit plugin here and its known issues here.
For the camera plugin, check the platform-specific requirements here and the known issues here.
We start by adding these plugins to our project’s pubspec.yaml file:
dependencies:
  flutter:
    sdk: flutter
  google_ml_kit: ^0.7.3
  camera: ^0.9.4+16
After adding the plugins, import them into your camera_screen.dart file, along with the Flutter imports that the snippets below rely on, like this:
import 'package:flutter/material.dart'; // widgets and Size, used in the snippets below
import 'package:flutter/foundation.dart'; // WriteBuffer, used when building the InputImage
import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:camera/camera.dart';
Now define your camera controller, initialize it inside the initState method, and initialize the TextDetector, like this:
// Create the CameraController
CameraController? _camera;

// Initialize the TextDetector
final textDetector = GoogleMlKit.vision.textDetector();

String recognizedText = "";

@override
void initState() {
  super.initState();
  _initializeCamera(); // camera initialization
}

void _initializeCamera() async {
  // Get the list of cameras available on the device
  List<CameraDescription> cameras = await availableCameras();
  _camera = CameraController(cameras[0], ResolutionPreset.low);

  // Initialize the CameraController
  _camera?.initialize().then((_) async {
    // Start streaming images from the platform camera
    await _camera?.startImageStream(
        (CameraImage image) => _processCameraImage(image)); // image processing and text recognition
  });
}
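Since both the camera controller and the text detector hold native resources, it is good practice to release them when the screen goes away. Here is a minimal sketch, assuming this code lives in the same State class as above and that the detector exposes close() as in the plugin's examples:

@override
void dispose() {
  // Stop the image stream (if running) and release the camera
  if (_camera?.value.isStreamingImages ?? false) {
    _camera?.stopImageStream();
  }
  _camera?.dispose();
  // Release the native resources held by the text detector
  textDetector.close();
  super.dispose();
}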
Once the camera is initialized, we start streaming images from the platform camera using the startImageStream method, passing it the _processCameraImage callback defined below:
void _processCameraImage(CameraImage image) async {
  // Get an InputImage from the CameraImage
  InputImage inputImage = getInputImage(image);

  final RecognisedText recognisedText =
      await textDetector.processImage(inputImage);

  // Use the recognised text: collect the text of every block
  recognizedText = "";
  for (TextBlock block in recognisedText.blocks) {
    recognizedText += block.text + " ";
  }
}
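Note that startImageStream delivers frames much faster than ML Kit can process them, and the UI only refreshes when you call setState. One way to handle both, sketched below with a hypothetical _isBusy flag of our own (not part of either plugin), is to drop frames while a previous one is still being processed:

bool _isBusy = false;

void _processCameraImage(CameraImage image) async {
  // Skip this frame if the previous one is still being processed
  if (_isBusy) return;
  _isBusy = true;

  final InputImage inputImage = getInputImage(image);
  final RecognisedText recognisedText =
      await textDetector.processImage(inputImage);

  // Rebuild the widget so the latest text shows up on screen
  setState(() {
    recognizedText = "";
    for (TextBlock block in recognisedText.blocks) {
      recognizedText += block.text + " ";
    }
  });

  _isBusy = false;
}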
As you can see in the above code, textDetector.processImage(inputImage) is the main method for recognizing text from the image, and it requires an InputImage as its parameter. So first we need to build an InputImage from the CameraImage, which we do with the getInputImage(image) method defined below:
InputImage getInputImage(CameraImage cameraImage) {
  // Concatenate the bytes of all image planes into a single buffer
  final WriteBuffer allBytes = WriteBuffer();
  for (Plane plane in cameraImage.planes) {
    allBytes.putUint8List(plane.bytes);
  }
  final bytes = allBytes.done().buffer.asUint8List();

  final Size imageSize =
      Size(cameraImage.width.toDouble(), cameraImage.height.toDouble());

  // Map the camera sensor orientation to an InputImageRotation
  final InputImageRotation imageRotation =
      InputImageRotationMethods.fromRawValue(
              _camera!.description.sensorOrientation) ??
          InputImageRotation.Rotation_0deg;

  final InputImageFormat inputImageFormat =
      InputImageFormatMethods.fromRawValue(cameraImage.format.raw) ??
          InputImageFormat.NV21;

  // Per-plane metadata required by ML Kit
  final planeData = cameraImage.planes.map(
    (Plane plane) {
      return InputImagePlaneMetadata(
        bytesPerRow: plane.bytesPerRow,
        height: plane.height,
        width: plane.width,
      );
    },
  ).toList();

  final inputImageData = InputImageData(
    size: imageSize,
    imageRotation: imageRotation,
    inputImageFormat: inputImageFormat,
    planeData: planeData,
  );

  return InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
}
Now create a CameraPreview widget, which shows a preview for the given camera controller, and pass it the above-defined CameraController, like this:
// Preview widget for the given camera controller
CameraPreview(
  _camera!,
  child: Text("Show your recognized text: $recognizedText"),
)
Put the above camera preview widget inside your build method like any other widget, and that's it.
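For reference, the build method could look roughly like the sketch below. The check on value.isInitialized simply avoids building the preview before the controller is ready (you may also want to call setState(() {}) at the end of the initialize callback so this check re-runs once the camera is up); the surrounding layout is only an illustration:

@override
Widget build(BuildContext context) {
  // Show a placeholder until the camera controller is ready
  if (_camera == null || !_camera!.value.isInitialized) {
    return const Center(child: CircularProgressIndicator());
  }
  return Scaffold(
    appBar: AppBar(title: const Text("Text Recognition")),
    body: CameraPreview(
      _camera!,
      child: Text("Show your recognized text: $recognizedText"),
    ),
  );
}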
Thanks for reading.
Happy Coding 🙂
For more Flutter tutorials visit Mobikul Blogs.