DAT602 – uSense Cognitive Functionality Testing with Node-RED – Visual Recognition

In my previous post, I documented my tests with Node-RED and IBM Watson's sentiment and tone analysis services. This post looks at basic face detection using the Visual Recognition service.

Ultimately, I hope to use visual recognition for emotion detection. 

For this testing, I will be using a Raspberry Pi with a Pi Camera module and a local installation of Node-RED on the Pi.

Flow Summary

Visual recognition flow

  1. An inject node sends an empty string to the exec node.
  2. The exec node runs the following command:
    raspistill -o /home/pi/Pictures/image1.jpg -q 25
    This uses the Pi Camera to capture a still image and save it to the specified path with a JPEG quality of 25%.
  3. A template node can optionally be used to output to the debug console.
  4. The file in node reads the previously saved image and outputs it as a buffer object.
  5. The function node receives the image buffer and passes it on to the visual recognition node.
  6. The visual recognition node analyses the image data and passes the result to a debug node, which outputs it to the console.
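The function-node step above can be sketched in plain JavaScript. This is a minimal illustration rather than the actual flow code: a Node-RED function node receives a `msg` object and returns it, and the `detect_faces` parameter name and simulated message below are assumptions for the sake of the example (in the real flow, the buffer arrives from the file in node and the face detection feature is configured on the visual recognition node itself).

```javascript
// Sketch of the function node's body. A Node-RED function node is
// given a msg object and returns it (possibly modified) to the next
// node in the flow.
function prepareImage(msg) {
  // msg.payload holds the JPEG buffer produced by the file in node.
  // The "detect_faces" flag is a hypothetical parameter name used
  // here only to show where per-message options would be attached.
  msg.params = { detect_faces: true };
  return msg;
}

// Stand-in for the message the file in node would emit
// (0xFF 0xD8 are the first two bytes of any JPEG file).
const msg = { payload: Buffer.from([0xff, 0xd8]) };
const out = prepareImage(msg);
console.log(Buffer.isBuffer(out.payload)); // true
```

In the deployed flow, the function node simply forwards the buffer unchanged; the sketch shows where you could attach extra parameters to the message before it reaches the visual recognition node.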

Results

Here are some results of testing on two different images:

Male visual recognition result output

Female visual recognition result output

In both tests, the service's estimates of the age and gender of the individual's face were accurate.

GitHub

The code for the above flow is available here:

https://github.com/mfrench71/DAT602/blob/master/Node%20Red%20Flows/pi_face_detection.json
