The Samsung ARTIK team built a Face/Identity Detection System with the Eagleye 530s in April 2018 (for more details about that demo, check out here). Last month, the team made an upgraded version of the demo for IoT World, held at the Santa Clara Convention Center on May 14-17. They used an Eagleye 530s, an OV5640 5MP auto-focus USB camera, and a touchscreen to build a security system for protecting the family.
When a person comes up to the touchscreen, they can follow the on-screen instructions to take a selfie, and the photo is uploaded to the face recognition cloud, Kairos. The system then checks whether the person matches any of the faces already enrolled in the system.
The system can be connected to door latches so that the door unlocks automatically only when a family member is detected, making it an end-user solution for smart homes and buildings.
Here is the full walkthrough from the Samsung ARTIK team:
We introduced a prototype of the Face/Identity Detection system when the Eagleye 530s was first released in April 2018: https://developer.artik.io/documentation/artik/projects/facial-recog.html
We have since improved that project, adding a touchscreen to the system to make it more appealing.
In this project, we have 3 main components:
We removed the GrovePi+ accessories from the previous project, so no Pi HAT/shield is used. We also replaced the MIPI camera with a USB camera that uses the same type of sensor.
The final assembly looks like this; it is mounted in a specially designed hexagonal plastic box for events.
- Connect the OV5640 USB camera to the Eagleye 530S top USB port. (There are two USB ports on the Eagleye: use the top one for the camera and the bottom one for the touchscreen.)
- Connect the touchscreen's mini USB to the Eagleye 530S bottom USB port.
- Connect the touchscreen's HDMI to the Eagleye 530S HDMI port.
- Connect the power cables of the Eagleye 530S and the touchscreen.
- Connect an Ethernet cable (Cat 5) to the Eagleye 530S if you want to use wired internet.
- Install X Windows. The main application runs in an X Windows environment. Refer to the instructions below.
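The exact packages depend on the OS image running on your Eagleye 530S; on a typical Debian-based image, something like the following is a reasonable starting point (the package names here are assumptions, not taken from the original write-up):

```shell
# Install a minimal X environment plus the LXTerminal used later in this guide
sudo apt-get update
sudo apt-get install xinit openbox lxterminal
```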
- Install Python Qt (pyqt). The application is developed with Qt.
apt-cache search pyqt
sudo apt-get install python-qt4
- Install OpenCV. The video streaming and picture taking are based on OpenCV.
sudo apt-get install libopencv-dev python-opencv
- Install Watchdog. A 'watchdog' program, watch_for_changes.py, watches for changes to jpg files: when the camera takes your picture and saves it to the current folder as a jpg file, watch_for_changes.py calls validate.py to run the picture-validation process.
pip install watchdog
- Create a bash file watch.sh containing the following two lines. (See the attached watch_for_changes.py for that script's content.)
startx &
python watch_for_changes.py
- Create a bash file start.sh which has one line. (See the attached videowindow.py for the content)
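The write-up doesn't quote start.sh itself; since it launches the main program, its single line is presumably along these lines (an assumption based on the description, not the attached file):

```shell
# start.sh — assumed content: launch the main Qt application
python videowindow.py
```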
- Run ./watch.sh & from the console. When the X window comes up on the touchscreen, use the keyboard shortcut Ctrl+Alt+T to open an LXTerminal there.
- From the LXTerminal, run ./start.sh & to start the main program, videowindow.py. You will see the application pop up as shown below.
- Unplug the keyboard's USB cable from the USB port and plug in the touchscreen's USB cable. From this point on, the rest of the steps are done through the touchscreen.
- Press the Start button in the application to begin video streaming. You will see yourself in the left window.
- Press the Take Picture button: your picture is taken and sent to the face recognition cloud, Kairos. The picture appears in the right window, and when the validation result comes back, the right screen shows whether it was "Successful" or "Failed".
- You can press Enroll to add the last taken picture to the cloud for future validations.
This Python code implements the core functionality of the system: setting up the application, streaming video, taking a picture, calling the validation function, and checking the validation result.
1. Import the necessary PyQt4, OpenCV, watchdog, and other libraries.
2. Capture the video from video port 0. (If you use the MIPI camera with the Eagleye 530S, the video port is 6.)
3. Create a Qt application and an instance of Window, which hosts the application.
4. Define the Window. Name the window Control Panel, define its four buttons (Start, Take Picture, Enroll, and Quit), and set up the background image.
Steps 5 to 8 are the definitions of the four buttons.
5. When the Start button is clicked, the watchdog starts observing the txt files defined in the MyHandler function.
6. It then starts the video streaming in the Video window.
7. When the Take Picture button is clicked, a frame of the current video stream is saved as original.jpg; the wording "Validating Face" is put on the bottom left of the image; and the image is shown in the Photo window.
8. When the Enroll button is clicked, enroll.py is called and the previously taken image is enrolled into the gallery on the cloud side. This part is explained in part d.
9. Define the detection pattern for the watchdog. In this case, when the validation result comes back from the cloud, it is saved to a txt file, Detection.txt, so that is the file we watch.
10. When Detection.txt gets modified, it is opened and parsed.
11. Put either the "Successful!" or "Failed!" wording on the previously taken image.
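Steps 10 and 11 can be sketched as follows, assuming validate.py writes a line containing "success" or "failure" into Detection.txt (the overlay helper uses OpenCV's putText, as the application does for its on-image wording; the file names follow the write-up):

```python
def detection_wording(path="Detection.txt"):
    """Step 10: open the parsed validation result and pick the overlay text."""
    with open(path) as f:
        text = f.read().strip().lower()
    return "Successful!" if "success" in text else "Failed!"

def overlay_wording(image_path, wording, out_path="shown.jpg"):
    """Step 11: draw the wording on the bottom left of the taken picture."""
    import cv2  # imported here so detection_wording stays dependency-free
    image = cv2.imread(image_path)
    cv2.putText(image, wording, (10, image.shape[0] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imwrite(out_path, image)
```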
This code checks whether jpg pictures have changed. Specifically, when a jpg picture is taken and saved to the current folder, this Python code calls validate.py.
1. Set up the watchdog pattern for jpg files. Any jpg file creation or modification triggers an action: calling validate.py.
2. Create an observer and start monitoring for changes.
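The two steps above can be sketched as a minimal version of watch_for_changes.py. This assumes the watchdog package installed earlier; the call into validate.py mirrors the description, not the attached script:

```python
import subprocess
import sys

from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler

class JpgHandler(PatternMatchingEventHandler):
    """Step 1: react to any .jpg created or modified in the watched folder."""

    def __init__(self):
        super(JpgHandler, self).__init__(patterns=["*.jpg"],
                                         ignore_directories=True)

    def on_created(self, event):
        self._validate()

    def on_modified(self, event):
        self._validate()

    def _validate(self):
        # Hand the freshly saved picture to the validation script.
        subprocess.call([sys.executable, "validate.py"])

def start_watching(path="."):
    """Step 2: create an observer and start monitoring `path` for changes."""
    observer = Observer()
    observer.schedule(JpgHandler(), path=path, recursive=False)
    observer.start()
    return observer  # the caller keeps it alive, e.g. with observer.join()
```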
This code sends the encoded picture to the face recognition platform, Kairos, for validation. The full returned result is saved to Result.txt and then parsed into Detection.txt.
1. Encode the original.jpg to base64 format and give a gallery name to the “values” field.
2. Create headers – include app_id/app_key pair created at Kairos.com (https://developer.kairos.com/admin). Send request to Kairos web service.
3. Get the validation result and save it to Result.txt.
4. Open Result.txt and parse it.
5. Save the parsed result to Detection.txt. It could be either success or failure.
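The five steps above can be sketched like this, rewritten in Python 3 with only the standard library (the endpoint and response fields follow Kairos's public /recognize API; the gallery name and the single-word Detection.txt format are assumptions):

```python
import base64
import json
import urllib.request

KAIROS_URL = "https://api.kairos.com/recognize"

def build_payload(image_path, gallery_name="MyFamily"):
    """Step 1: base64-encode the picture and attach the gallery name."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {"image": encoded, "gallery_name": gallery_name}

def recognize(image_path, app_id, app_key, gallery_name="MyFamily"):
    """Step 2: send the request with the app_id/app_key headers from Kairos."""
    body = json.dumps(build_payload(image_path, gallery_name)).encode("utf-8")
    request = urllib.request.Request(
        KAIROS_URL, data=body,
        headers={"Content-Type": "application/json",
                 "app_id": app_id, "app_key": app_key})
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")

def parse_result(raw):
    """Steps 3-5: save the raw reply, parse it, and write Detection.txt."""
    with open("Result.txt", "w") as f:
        f.write(raw)
    result = json.loads(raw)
    images = result.get("images", [])
    matched = bool(images) and \
        images[0].get("transaction", {}).get("status") == "success"
    with open("Detection.txt", "w") as f:
        f.write("success" if matched else "failure")
    return matched
```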
Of course, there are initially no images in your gallery, so you need to enroll some.
1. Encode the picture just taken to base64 and give it a subject_id. Also indicate which gallery you want to enroll the image to.
2. Include the app_id/app_key pair created before and send the request to Kairos web service.
3. Read the result from the web service and put it to a txt file for later reference.
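A corresponding sketch of the enroll call, in the same Python 3 stdlib style (the /enroll endpoint and fields follow Kairos's public API; the subject_id, gallery, and output file name are illustrative):

```python
import base64
import json
import urllib.request

ENROLL_URL = "https://api.kairos.com/enroll"

def build_enroll_payload(image_path, subject_id, gallery_name="MyFamily"):
    """Step 1: encode the picture and tag it with a subject_id and gallery."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {"image": encoded,
            "subject_id": subject_id,
            "gallery_name": gallery_name}

def enroll(image_path, subject_id, app_id, app_key,
           gallery_name="MyFamily", out_path="Enroll_result.txt"):
    """Steps 2-3: send the request, then keep the raw reply for reference."""
    body = json.dumps(
        build_enroll_payload(image_path, subject_id, gallery_name)
    ).encode("utf-8")
    request = urllib.request.Request(
        ENROLL_URL, data=body,
        headers={"Content-Type": "application/json",
                 "app_id": app_id, "app_key": app_key})
    with urllib.request.urlopen(request) as response:
        raw = response.read().decode("utf-8")
    with open(out_path, "w") as f:
        f.write(raw)
    return raw
```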