Learn How To Perform Real-time Object Detection Using TensorFlow in React Native in 3 Simple Steps

Real-time object detection using TensorFlow in React Native


Problem Statement

Object detection, machine learning, and deep learning all sound quite intimidating. In reality, object detection is a computer vision technique used to identify and locate objects in images, videos, and live streams.

I was recently working on a React Native project that required real-time object detection using TensorFlow. In this article, I will walk through how I achieved this and the hurdles I overcame along the way, using TensorFlow.js.

The major problem I faced with real-time object detection was the cameraWithTensors component provided by TensorFlow.js: the camera stream lagged badly, and the recorded video was too choppy to use. After some research, I found a workaround: record a video first using the Expo Camera, then extract images from the video and send them to the model for detection.

Main Task

Record a video, then perform real-time object detection on it using TensorFlow in a React Native mobile application.


TensorFlow.js

TensorFlow.js is a library for machine learning in JavaScript. It lets you develop or execute ML models in JavaScript and use ML directly in the browser (client side), on the server via Node.js, in mobile-native apps via React Native, in desktop-native apps via Electron, and even on IoT devices via Node.js on a Raspberry Pi.

React Native

React Native is an open-source UI software framework. React Native combines the best parts of native development with React, a best-in-class JavaScript library for building user interfaces. It is used to develop applications for Android, Android TV, iOS, macOS, tvOS, Web, Windows, and UWP by enabling developers to use the React framework along with native platform capabilities.

Major packages used

  • expo-camera — recording the video
  • react-native-ffmpeg — extracting frames from the video
  • @tensorflow/tfjs and @tensorflow/tfjs-react-native — running the model
  • expo-file-system — reading the extracted frames from disk

Solution Steps

  • Extract images from the video
  • Convert images to tensor
  • Send tensor to model for detection
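The three steps above can be sketched as a single pipeline. This is a rough shape only — the three helpers are stubs standing in for steps 1–3 below, and all the names (`runDetection`, `extractFrames`, `imageToTensor`, `detect`) are placeholders I am introducing for illustration:

```javascript
async function extractFrames(videoURL, saveFilePath) {
  // Step 1 stub: in the app, react-native-ffmpeg writes out1.png, out2.png, ...
  return [`${saveFilePath}/out1.png`];
}

async function imageToTensor(img) {
  // Step 2 stub: in the app, tfjs decodes the image into a [height, width, 3] tensor
  return { shape: [300, 300, 3] };
}

async function detect(model, imagesTensor) {
  // Step 3 stub: in the app, the custom tfjs model returns its predictions
  return [];
}

async function runDetection(videoURL, saveFilePath, model) {
  const detections = [];
  for (const img of await extractFrames(videoURL, saveFilePath)) {
    const imagesTensor = await imageToTensor(img); // convert each frame
    detections.push(await detect(model, imagesTensor)); // run the model on it
  }
  return detections;
}
```

Each frame is processed independently, so the loop can later be parallelized or throttled without changing the overall structure.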

1. Extract Images From Video

I used the react-native-ffmpeg package, which can extract images from videos. Once the user records a video with the Expo Camera, it is passed to a function that extracts all of the frames using this command:


  • videoURL is the video source URL
  • To output 25 images every second, named out1.png, out2.png, out3.png, etc, we use fps=25.
  • The %01d dictates that the ordinal number of each output image will be formatted using a single digit.
  • saveFilePath is the path at which you want your pictures to be saved
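Putting those pieces together, the FFmpeg command can be sketched as a string handed to react-native-ffmpeg's `execute` call. This is a minimal sketch under the assumptions above; `buildExtractCommand` is a helper name I am introducing for illustration:

```javascript
// Hypothetical helper that assembles the FFmpeg command described above.
function buildExtractCommand(videoURL, saveFilePath, fps = 25) {
  // -i <videoURL> : the recorded video as input
  // -vf fps=25    : emit 25 frames for every second of video
  // out%01d.png   : number frames with single-digit ordinals (out1.png, out2.png, ...)
  return `-i ${videoURL} -vf fps=${fps} ${saveFilePath}/out%01d.png`;
}

// In the app, the string is handed to FFmpeg, e.g.:
// import { RNFFmpeg } from 'react-native-ffmpeg';
// await RNFFmpeg.execute(buildExtractCommand(videoURL, saveFilePath));
```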

2. Convert Images To Tensor

After extraction, these images are converted to tensors and passed to our custom TensorFlow.js (tfjs) model for object detection. expo-file-system and tfjs were used for the conversion.


  • img is the path of the image
  • imagesTensor is the resulting image tensor
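A sketch of the conversion, assuming `decodeJpeg` from @tensorflow/tfjs-react-native and `readAsStringAsync` from expo-file-system. `base64ToBytes` is a helper I am introducing, written with Node's `Buffer` so the byte-level logic is runnable outside React Native; in the app, `tf.util.encodeString(imgB64, 'base64').buffer` produces the same bytes:

```javascript
// Decode a base64 string (as returned by FileSystem.readAsStringAsync)
// into the raw byte array that decodeJpeg expects.
function base64ToBytes(b64) {
  return new Uint8Array(Buffer.from(b64, 'base64'));
}

// In the app (tfjs-react-native + expo-file-system):
// const imgB64 = await FileSystem.readAsStringAsync(img, {
//   encoding: FileSystem.EncodingType.Base64,
// });
// const imagesTensor = decodeJpeg(base64ToBytes(imgB64)); // [height, width, 3]
```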

3. Send Tensors To Model

These tensors are then passed to the model for detection. Since I was using our custom TensorFlow.js model for object detection, each tensor needs to be resized and normalized to the particular input size the model expects.
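As a sketch of that preprocessing — the 300×300 input size and the [-1, 1] scaling here are assumptions; the real values depend on how the custom model was trained:

```javascript
// Map a 0-255 channel value into the [-1, 1] range many detection
// models expect: value / 127.5 - 1.
function normalizePixel(value) {
  return value / 127.5 - 1;
}

// In the app, tfjs applies the same scaling to the whole tensor at once:
// const resized = tf.image.resizeBilinear(imagesTensor, [300, 300]);
// const normalized = resized.div(127.5).sub(1);
// const batched = normalized.expandDims(0); // [1, 300, 300, 3]
// const predictions = await model.executeAsync(batched);
```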


That’s a wrap! I have covered all the steps needed to perform real-time object detection using TensorFlow in a React Native application on videos recorded by the user.

Daniyal Habib

MERN Stack Developer

