Updated 27 April 2023
In this article, we are going to learn image recognition from the camera in Flutter using ML Kit's image labeling.
Image labeling gives you insight into the content of images. With ML Kit’s image labeling APIs, we can detect and extract information about entities in an image across a broad group of categories. The default image labeling model can identify general objects, places, activities, animal species, products, and more. Please check Google ML Kit for more on Image Labeling.
You may also check our Flutter app development page.
For text recognition using Google ML Kit check out this blog.
For image labeling in Flutter, we are going to use these Flutter plugins: google_ml_kit and camera.
google_ml_kit: for image labeling, text recognition, and many more ML features; please check the official pub page here.
camera: for accessing the device cameras and the image stream, which we are going to use later; for more details check here.
Before we get started, read about the platform-specific requirements here and the known issues of the google_ml_kit plugin here.
We start by adding these plugins to our project’s pubspec.yaml file:
dependencies:
  flutter:
    sdk: flutter
  google_ml_kit: ^0.7.3
  camera: ^0.9.4+16
After adding the plugins and running flutter pub get, import them into your camera_screen.dart file like this:
import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:camera/camera.dart';
Now define your camera controller, initialize it inside the initState method, and initialize the ImageLabeler like this:
// Create the CameraController
CameraController? _camera;

// Initialize the ImageLabeler
final imageLabeler = GoogleMlKit.vision.imageLabeler();

// Holds the labels we get back from ML Kit (not final, because we update it on every frame).
String labeledData = "";

@override
void initState() {
  super.initState();
  _initializeCamera(); // for camera initialization
}

void _initializeCamera() async {
  // Get the list of cameras of the device.
  List<CameraDescription> cameras = await availableCameras();
  _camera = CameraController(cameras[0], ResolutionPreset.low);

  // Initialize the CameraController.
  _camera?.initialize().then((_) async {
    // Start streaming images from the platform camera.
    await _camera?.startImageStream(
        (CameraImage image) => _processCameraImage(image)); // image processing and labeling
  });
}
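It is also good practice to release the camera and the labeler when the screen is closed. Here is a minimal sketch, assuming the same _camera and imageLabeler fields defined above:

@override
void dispose() {
  // Release the camera controller and close the ML Kit labeler when this screen is disposed.
  _camera?.dispose();
  imageLabeler.close();
  super.dispose();
}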
Once the camera is initialized, we start streaming images from the platform camera using the startImageStream method and pass it the _processCameraImage method, which is defined below:
void _processCameraImage(CameraImage image) async {
  // Getting InputImage from CameraImage.
  final InputImage inputImage = getInputImage(image);

  final List<ImageLabel> labels = await imageLabeler.processImage(inputImage);

  // Using the labeled data: collect every label and rebuild the UI.
  setState(() {
    labeledData = "";
    for (ImageLabel labelData in labels) {
      labeledData += labelData.label + " ";
    }
  });
}
As you can see in the above code, imageLabeler.processImage(inputImage) is the main method for labeling the image. It requires an InputImage as a parameter, so first we need to convert the CameraImage into an InputImage, which we do with the getInputImage(image) method defined below:
InputImage getInputImage(CameraImage cameraImage) {
  // WriteBuffer comes from package:flutter/foundation.dart.
  final WriteBuffer allBytes = WriteBuffer();
  for (Plane plane in cameraImage.planes) {
    allBytes.putUint8List(plane.bytes);
  }
  final bytes = allBytes.done().buffer.asUint8List();

  final Size imageSize =
      Size(cameraImage.width.toDouble(), cameraImage.height.toDouble());

  final InputImageRotation imageRotation =
      InputImageRotationMethods.fromRawValue(
              _camera!.description.sensorOrientation) ??
          InputImageRotation.Rotation_0deg;

  final InputImageFormat inputImageFormat =
      InputImageFormatMethods.fromRawValue(cameraImage.format.raw) ??
          InputImageFormat.NV21;

  final planeData = cameraImage.planes.map(
    (Plane plane) {
      return InputImagePlaneMetadata(
        bytesPerRow: plane.bytesPerRow,
        height: plane.height,
        width: plane.width,
      );
    },
  ).toList();

  final inputImageData = InputImageData(
    size: imageSize,
    imageRotation: imageRotation,
    inputImageFormat: inputImageFormat,
    planeData: planeData,
  );

  return InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
}
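In practice, the camera can deliver frames faster than ML Kit can label them, so you may want to skip frames while a previous one is still being processed. Below is a minimal sketch of such a guard; the _isProcessing flag is a hypothetical field added to the same state class, not part of either plugin's API:

// Hypothetical guard flag, not part of the plugin APIs.
bool _isProcessing = false;

void _processCameraImage(CameraImage image) async {
  // Skip this frame if the previous one is still being labeled.
  if (_isProcessing) return;
  _isProcessing = true;
  try {
    final InputImage inputImage = getInputImage(image);
    final List<ImageLabel> labels = await imageLabeler.processImage(inputImage);
    setState(() {
      labeledData = labels.map((label) => label.label).join(" ");
    });
  } finally {
    _isProcessing = false;
  }
}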
Now create a CameraPreview widget to show a preview for the given camera controller, and pass it the CameraController defined above:
// Preview widget for the given camera controller.
CameraPreview(
  _camera!,
  child: Text("Show our labeled data: $labeledData"),
)
Finally, put the above camera preview widget inside your build method like any other widget, and that's it.
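For reference, here is a minimal sketch of what the build method could look like, assuming the _camera and labeledData fields from the snippets above:

@override
Widget build(BuildContext context) {
  // Show a placeholder until the camera controller is ready.
  if (_camera == null || !_camera!.value.isInitialized) {
    return const Scaffold(body: Center(child: CircularProgressIndicator()));
  }
  return Scaffold(
    body: CameraPreview(
      _camera!,
      child: Text("Show our labeled data: $labeledData"),
    ),
  );
}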
Thanks for reading.
Happy Coding 🙂
For more Flutter tutorials visit Mobikul Blogs.