DAT602 – uSense Cognitive Functionality Testing with Node-RED – Visual Recognition
In my previous post, I documented my tests with Node-RED and IBM Watson’s sentiment and tone analysis services. This post will look at basic face detection using the visual recognition service.
Ultimately, I hope to use visual recognition for emotion detection.
For this testing, I will be using a Raspberry Pi, a Pi Camera, and a local installation of Node-RED on the Pi.
Flow Summary

- An inject node sends an empty string to the exec node.
- The exec node runs the following command:
raspistill -o /home/pi/Pictures/image1.jpg -q 25
This uses the Pi Camera to take a still image and save it to the specified directory at a JPEG quality of 25%.
- A template node can optionally be used to output to the debug console.
- The 'file in' node reads the previously saved image and outputs it as a buffer object.
- The function node receives the image buffer and passes it on to the visual recognition node (a minimal sketch of this node follows the list).
- The visual recognition node processes the image data and passes the result to a debug node to output to the console.
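
For reference, here is a minimal sketch of what the function node might contain. The Watson visual recognition node reads the image from msg.payload, which the 'file in' node has already populated, so the function only needs to sanity-check the buffer and pass the message through (the check is my own addition, not part of the published flow):

```javascript
// Function node: pass the image buffer through to the
// visual recognition node. The 'file in' node has already
// placed the JPEG data in msg.payload as a Buffer.
if (!Buffer.isBuffer(msg.payload)) {
    node.warn("Expected an image buffer in msg.payload");
    return null; // stop the flow if the file read failed
}
return msg;
```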
Results
Here are some results of testing on two different images:
[Images: visual recognition output for the two test images]
In both tests, the service's estimates of the age and gender of the individual's face were accurate.
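
For anyone reproducing this, here is a hedged sketch of how the detection results could be unpacked in a further function node before the debug output. It assumes the Watson Node-RED node places the raw service response in msg.result, and that the response follows the detect-faces shape with an images[].faces[] array holding age and gender fields; both are assumptions from the Watson documentation of the time, not taken from this flow:

```javascript
// Function node: summarise the face detection response.
// Assumes msg.result is shaped like:
// { images: [{ faces: [{ age, gender, ... }] }] }
var images = (msg.result && msg.result.images) || [];
var faces = (images[0] && images[0].faces) || [];
msg.payload = faces.map(function (face) {
    return {
        ageRange: face.age.min + "-" + face.age.max, // e.g. "23-26"
        gender: face.gender.gender,                  // e.g. "MALE"
        genderScore: face.gender.score               // confidence, 0 to 1
    };
});
return msg;
```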
GitHub
The code for the above flow is available here:
https://github.com/mfrench71/DAT602/blob/master/Node%20Red%20Flows/pi_face_detection.json