As part of the Face Login system, I have access to a number of images of faces that were used to train the face recognition system. It would make sense to be able to display a profile image of the logged-in user using one of these photographs.
The plan is to upload the images of all users to Amazon S3, make each image publicly accessible, and then display the relevant image on the profile page of the Face Login system by linking directly to it.
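A minimal sketch of that upload step, assuming the boto3 package and AWS credentials are configured; the bucket name and object key used here are hypothetical placeholders, not the project's actual values:

```python
def public_url(bucket, key):
    """Build the public URL for an object in an S3 bucket."""
    return "https://{}.s3.amazonaws.com/{}".format(bucket, key)


def upload_profile_image(path, bucket, key):
    """Upload a local image to S3 with a public-read ACL and return its URL.

    boto3 is imported lazily so the URL helper above works without it;
    this assumes boto3 is installed and AWS credentials are available.
    """
    import boto3  # assumption: boto3 installed, credentials configured
    s3 = boto3.client("s3")
    s3.upload_file(
        path, bucket, key,
        ExtraArgs={"ACL": "public-read", "ContentType": "image/jpeg"},
    )
    return public_url(bucket, key)


# The profile page would then link to something like:
# public_url("face-login-profiles", "user42.jpg")
```

The profile page only needs the returned URL in an `<img>` tag, since the public-read ACL lets the browser fetch the image without authentication.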
Continue reading “DAT602 – Face Login – Profile Image/Amazon S3”
I have developed a functioning face login system based on Microsoft’s Azure Face API, Node.js and Express. After users have logged in using the face recognition system, I would like them to be able to access some personalised content from their social media platforms.
The first platform I looked at was Twitter and I made use of Tolga Tezel’s Twit package (Tezel, no date).
Continue reading “DAT602 – Face Login – Twitter Visualisations”
Now that this basic system is functioning as expected, I can use it as the basis for a Node.js/Express user authentication system using the popular Node package, Passport.
Continue reading “DAT602 – Face Login and User Authentication with Node.js Passport”
In my previous post, I developed some Python scripts which used Microsoft’s Azure Face API (Microsoft, no date) to train and recognise faces.
Whilst the scripts functioned in the way I intended, their usability for face recognition is not ideal for a number of reasons.
Having spent many hours tinkering with Amazon’s Rekognition API and making little progress, I decided to investigate the Face API provided as part of Microsoft Azure Cognitive Services (Microsoft, no date).
The API provides functionality for face detection (“detect one or more human faces in an image and get back face rectangles for where in the image the faces are, along with face attributes which contain machine learning-based predictions of facial features. The face attribute features available are: Age, Emotion, Gender, Pose, Smile, and Facial Hair along with 27 landmarks for each face in the image”) and face verification (“check the likelihood that two faces belong to the same person. The API will return a confidence score about how likely it is that the two faces belong to one person”).
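As a rough sketch of how these two operations are called over REST from Python (assuming the `requests` package; the region used here is a placeholder, and a real subscription key and face IDs would come from the Azure portal and from earlier detect calls):

```python
import requests


def face_endpoint(region, operation):
    """Build the Face API URL for an operation such as 'detect' or 'verify'."""
    return "https://{}.api.cognitive.microsoft.com/face/v1.0/{}".format(
        region, operation
    )


def detect_faces(image_url, key, region="westeurope"):
    """Detect faces in an image, returning face rectangles and attributes."""
    resp = requests.post(
        face_endpoint(region, "detect"),
        headers={"Ocp-Apim-Subscription-Key": key},
        params={"returnFaceAttributes": "age,gender,emotion,smile,facialHair"},
        json={"url": image_url},
    )
    resp.raise_for_status()
    return resp.json()  # list of faces: faceId, faceRectangle, faceAttributes


def verify_faces(face_id_1, face_id_2, key, region="westeurope"):
    """Check the likelihood that two detected faces belong to one person."""
    resp = requests.post(
        face_endpoint(region, "verify"),
        headers={"Ocp-Apim-Subscription-Key": key},
        json={"faceId1": face_id_1, "faceId2": face_id_2},
    )
    resp.raise_for_status()
    return resp.json()  # e.g. an isIdentical flag plus a confidence score
```

Verification takes the `faceId` values returned by earlier detect calls, which is what makes a login flow possible: detect a face in the webcam frame, then verify it against the stored face of the claimed user.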
Continue reading “DAT602 – Face Recognition with Azure Face API and Python”
Following our project proposal presentation, we had a meeting with our tutor to discuss our uSense project. These are the notes from the meeting.
Continue reading “DAT602 – Project Feedback Meeting”
In my previous post, I documented my tests with Node-RED and IBM Watson’s sentiment and tone analysis services. This post will look at basic face detection using the visual recognition service.
Ultimately, I hope to use visual recognition for emotion detection. Continue reading “DAT602 – uSense Cognitive Functionality Testing with Node-RED – Visual Recognition”
As part of the uSense project, we intend the device to be able to gauge a person’s mood through the things they say, their facial features, and their social media postings. To do this, we can leverage the Cognitive Services provided with IBM Watson.
The relevant services include tone analysis, sentiment analysis, visual recognition, and speech to text.
Continue reading “DAT602 – uSense Cognitive Functionality Testing with Node-RED – Sentiment and Tone”
Each group will have 20 minutes to present their proposal (10 minutes of presentation, and 10 minutes of Q&A). For each presentation, it is recommended that you use a minimum of 5 to a maximum of 10 slides.
It is highly recommended to do as much preparation for the development of this proposal as possible, so that you will receive useful feedback. Laying the foundations of your project at this stage is quite critical for its later development. Continue reading “DAT602 – Proposal Presentation and Feedback – uSense”
The Project Brief
In this assignment, you are asked in groups to develop a physical “intelligent” object for the Future Home.