Project Group 60: WalkTexter


Week 10 Update 

 

Team Members

  • Stefan Cao, EECS
  • Andrew Yu, EECS
  • Linda Vang, EECS
  • Tony Nguyen, EECS
 
Mentor
  • Professor Aparna Chandramowlishwaran
 

 
Overview
 
As the twenty-first century progresses, more and more people are glued to their phones, texting or playing games. This becomes a real problem when they are walking and not paying attention to the path ahead of them. We wanted to create a system that warns these users through notifications so they can avoid obstacles in their path. 
 
Using the Raspberry Pi 3, an ultrasonic sensor, and a camera, we can reliably detect objects in front of a user. The Raspberry Pi then sends a notification to the user's device over a Bluetooth connection. 
 
Figure 1 : WalkTexter
 
Materials and Resources
 
Hardware
 
As mentioned, we used the Raspberry Pi 3[1] as the main platform for our device. The Raspberry Pi 3 offers full connectivity in its SoC: it has built-in WiFi and Bluetooth, making it an attractive option since wireless communication is key for our needs. It also provides full-size USB connectivity for our camera. Another appealing aspect of the Raspberry Pi is that it can fully power its peripherals through its USB ports, which lets our project make full use of the camera and its tracking capabilities. This all-in-one solution is perfectly capable of handling the number crunching and hosting the sensors in our project.
 
Figure 2 : Raspberry Pi 3
 
A tiny camera with great imaging capabilities can be difficult to acquire, so for the prototype we went with an off-the-shelf webcam. We used a Logitech C270[2] for our project due to its low price. This webcam offers 720p video and automatic light correction.
 
Figure 3 : Logitech C270
 
The last sensor we used is an ultrasonic distance sensor. The ultrasonic sensor reliably tells us the distance between an object and the user, and is more dependable than using the camera alone. This way, we have two complementary sensors to draw on for reliable detection and notification.
 
Figure 4 : Ultrasonic Distance Sensor
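 
As a rough illustration of the distance-reading step, the sketch below assumes an HC-SR04-style sensor (the exact model is not specified above) wired to two hypothetical GPIO pins; treat it as a sketch rather than our exact wiring.

    import time
    import RPi.GPIO as GPIO

    TRIG = 23  # hypothetical BCM pin numbers; match them to the actual wiring
    ECHO = 24

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    def read_distance_cm():
        # Fire a 10 microsecond trigger pulse.
        GPIO.output(TRIG, True)
        time.sleep(0.00001)
        GPIO.output(TRIG, False)
        # Time the echo pulse; sound travels ~34300 cm/s and the pulse
        # covers the distance twice (out and back), hence the divide by 2.
        pulse_start = pulse_end = time.time()
        while GPIO.input(ECHO) == 0:
            pulse_start = time.time()
        while GPIO.input(ECHO) == 1:
            pulse_end = time.time()
        return (pulse_end - pulse_start) * 34300 / 2

A reading like this can be compared against a warning distance independently of the camera pipeline.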
 
Software
 
To enable machine vision and help us detect objects and signs, we use a computer vision library called OpenCV[3]. OpenCV allows the Raspberry Pi 3 to communicate with the camera and take pictures, and the library has extensive features for analyzing each frame. We analyze every frame to check whether it contains an object of interest, such as a stop sign or a traffic light. The algorithms we used are the Haar Cascade[4], Oriented FAST and Rotated BRIEF[5], the Brute Force Matcher[6], and a check of our own. 
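 
As a minimal sketch of this capture-and-detect step, the snippet below grabs a frame from the webcam and runs a Haar cascade over it with OpenCV's Python bindings. The cascade file name and the detectMultiScale parameters are illustrative assumptions, not our exact values.

    import cv2

    # Hypothetical cascade file; stands in for whichever trained classifier is loaded.
    cascade = cv2.CascadeClassifier("stop_sign.xml")
    cap = cv2.VideoCapture(0)  # the USB webcam

    ret, frame = cap.read()
    if ret:
        # Haar cascades operate on grayscale images.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # scaleFactor and minNeighbors trade detection rate against false positives.
        detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in detections:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cap.release()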
 
These algorithms are used to detect an object and then narrow it down to the signs and lights that warn users of a crossroad or street crossing.

The Haar Cascade is used to detect objects that match its trained profiles. If we want to detect a specific object, we need to train the classifier; based on its positive and negative examples, the algorithm decides whether we detected that object. Using the Haar Cascade by itself gives lots of false positives, depending on how much it has been trained, so we need more algorithms for better accuracy.

Oriented FAST and Rotated BRIEF, or ORB, is the next algorithm we used. The FAST portion finds keypoints and BRIEF handles the description matching. This algorithm lets us weed out false positives whose descriptions do not match the object.

We then use a Brute Force Matcher, which compares two images: the candidate that was just captured and passed the first two algorithms, and a picture we set as the base. The brute force algorithm compares all their features, and if the match is close enough to a threshold we choose, the image is accepted as correct.

Even after all that, a false positive can still slip through, so we also analyze two consecutive frames. If both frames contain the same object (especially while the person is moving), we can be confident it is a real positive. With all these algorithms in our program, we are confident in the accuracy of our project.
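 
A condensed sketch of the ORB and Brute Force Matcher stages might look like the following. The reference image path, the descriptor distance cutoff, and the match-count threshold are all assumptions for illustration, not our actual tuned values.

    import cv2

    orb = cv2.ORB_create()
    # Hamming distance suits ORB's binary descriptors; crossCheck keeps only mutual matches.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Hypothetical base image of the sign being verified.
    reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)

    def verify(candidate_gray, min_matches=20):
        """Return True if a candidate region's ORB descriptors match the base image."""
        kp, des = orb.detectAndCompute(candidate_gray, None)
        if des is None:
            return False
        matches = bf.match(des_ref, des)
        # Keep only close descriptor matches (the cutoff is a tunable guess).
        good = [m for m in matches if m.distance < 40]
        return len(good) >= min_matches

The two-frame check then amounts to requiring verify() to succeed on the same object in two consecutive frames before a notification is sent.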
 
On the user's phone side, we programmed the app for Android, mainly because it is free to develop on Android. Our focus is to let users do anything on their phone and still receive a notification. To do this, we created a background Service, started with an Intent, which connects to the Raspberry Pi 3 over Bluetooth: it creates a socket and communication streams and keeps the connection alive. After the connection phase, the service sits in a while loop listening for messages from the Raspberry Pi 3. When a message arrives, the app decodes it and determines the type of notification to display.
 
Figure 5 : Different WalkTexter notifications
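 
On the Raspberry Pi side of this link, a minimal RFCOMM sender could look like the sketch below. It assumes the PyBluez library and made-up service and message names; the real protocol is simply whatever strings our app's decoder expects.

    import bluetooth  # assumes the PyBluez library

    # Advertise an RFCOMM serial service the phone can connect to.
    server = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    server.bind(("", bluetooth.PORT_ANY))
    server.listen(1)
    bluetooth.advertise_service(server, "WalkTexter",
                                service_classes=[bluetooth.SERIAL_PORT_CLASS],
                                profiles=[bluetooth.SERIAL_PORT_PROFILE])

    client, addr = server.accept()
    try:
        # The app's while loop reads messages like this and maps them
        # to notification types ("STOP_SIGN" is a hypothetical code).
        client.send(b"STOP_SIGN\n")
    finally:
        client.close()
        server.close()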
 
 
Results
Training
 
To increase the accuracy of our object detection, we have to train our classifier. We have to give it a large number of positive examples and a large number of negative examples; negative examples can be anything that is not the object we want. The training process takes 3-4 days even for basic results. Both our walk[7] and don't walk[8] training files are linked and can be used and improved upon.
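 
One small, assumed piece of that workflow: OpenCV's cascade training tools take a plain-text list of negative images, which a few lines of Python can generate. The directory and file names here are hypothetical.

    import os

    # Write one negative-image path per line, the format the cascade
    # training tools expect for the background (negatives) list.
    with open("bg.txt", "w") as f:
        for name in sorted(os.listdir("negatives")):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                f.write(os.path.join("negatives", name) + "\n")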
 

 

    Figure 6 : Positive Examples
 
    Figure 7 : Negative Examples
 
We found a trained XML file for stop signs, so we did not need to spend time training the stop sign detection. However, we were not able to find trained XML files for pedestrian traffic lights, so we had to train those ourselves. Since there are both a stop light and a walk light, we had to do double the amount of training. For positive examples, we used various pictures of the stop light for the stop classifier and of the walk light for the walk classifier, gathered from Google Images. For negatives, we grabbed random pictures for the trainer to analyze. This process took 3-4 days for each classifier.
 
Accuracy
 
In terms of accuracy, the stop sign XML gives the best results. We believe the stop sign XML was trained with more samples and more time, creating a more accurate detector. In our testing, detecting stop signs has an accuracy of 95% or higher. For our own trained XML files, we have a detection accuracy of about 70-80% for the red halt sign and 60-70% for the white walk sign. We attribute this lower accuracy to the short amount of training and the lack of real-world examples. We also accept an increased number of false positives because we do not want to miss a crucial sign; we would rather caution the user, even on a false positive. It is better to be safe than sorry. This means we relaxed the detection threshold, making detection more lax. The combination of all these factors gives our own trained XML files their lower accuracy.
 
Hardships and Outlook
 
Hardships
 
Originally, our idea was to have a small wearable device that accompanies the user. We planned to use the Intel Edison[9] because it is a fully featured IoT device with a very small size. However, we ran into problems quite quickly: the Intel Edison could not supply enough power to reliably start the camera and run our algorithms. The camera would sometimes be detected and let us capture a frame, but in further testing it would not be detected at all. A few weeks into the quarter, we had to swap platforms, which proved costly in terms of time; but with prior experience using the Raspberry Pi, we were able to get back on track with only a few hiccups. 
 
Figure 8 : Intel Edison and its size
 
Android programming has its own hardships. There is a lot of material to learn about the individual pieces and how they all fit together. Example apps can be hard to find, and the Bluetooth connection at first proved painful. After correctly pairing the two devices within the app, we set our sights on making the app run in the background. We did this by using a ServiceIntent and debugging the app every chance we got. Fortunately, it all came together in the end. 
 
Future improvements
 
In terms of the project, we can expand it to cover not just texters but also people with disabilities. We can also increase the accuracy of our detection and the number of signs we can detect. There is a vast number of signs and objects we could warn users about, which may prove beneficial both to people looking at their phone and to the blind, who could receive notifications about the world around them.
 
A further improvement to the device itself is downsizing it. We used the Pi as a last resort, since we had focused on our project being a wearable; not having small components meant our project grew in size. We did, however, make it able to clip onto a backpack for ergonomics. To reiterate, we believe our project can be made much smaller, as long as we resolve the power issues of small IoT devices and find smaller sensors. 
 
We would also like to improve the user experience of the Android app. At the moment it looks a little plain and only provides basic notifications. We wanted to show the object that was detected, but because we want real-time notifications, we sacrificed sending a picture for speed. In the future we want to revisit this so the user knows exactly what to watch out for.
 
Special Thanks
 
We would like to thank our mentor, Professor Aparna Chandramowlishwaran, for her valuable words of wisdom and support. We encountered many problems with our project over the quarter, but she was able to guide us through them. Once again, we thank her for her valuable role during our Senior Design process.
 
Links and Resources
[3] OpenCV - http://opencv.org/

 

Project Name:
   
Walk Texter

Project Group 60: Team SALT 
    Stefan Cao : Computer Engineering
    Andrew Yu : Computer Engineering
    Linda Vang : Computer Engineering
    Tony Nguyen : Computer Engineering

Project Objective:
Our project is going to be a system that can warn walking texters if the user is about to have a collision. This system will include a camera on the texter's clothes. From there, the camera or camera system will detect nearby objects that could collide with the texter. The system will warn the texter with a pop-up message on his or her phone; after all, the texter will be texting.

Team Mentor:
   Professor Aparna Chandramowlishwaran

 

PROJECT UPDATE #1

Mock-up:

In the mock-up image, we have a camera that can detect objects near the user. The camera will be placed on the user's clothing or accessories, such as a cap. If the system detects that the user will collide with an object, it will send a text alert to the user's phone. The mock-up shows roughly how the system will communicate with the smartphone.

 

Basic Behavioral Overview:

 


 

BASIC CRITIQUE/QUESTIONS

  • How are we going to get the camera data to the server?

    • We can try to find a wireless camera that can transmit data to the smartphone. The smartphone, connected to the internet either by WiFi or 3G/4G/LTE, will then pass the data on to the server. The server will detect objects and send a warning text back if it detects one.

    • If there is no small wireless camera available that can fit on our user's head, it is possible to use a mini computer, such as a Raspberry Pi, to send the data.

 

Pros: A server provides much needed power to process the video feed.

Cons: Getting warnings in real time may be a problem. There is a delay if we send data to the smartphone, then to the server, and back to the smartphone. Today's data transfers may be severely limited in this way.

 

IDEA #2

We can place the mini computer on the user's hat as well. The camera will transmit its information to the mini computer, which will process all video data. The mini computer will send the warning text if it detects that the user will collide with an object.

 


 

Pros: Data can be processed in real time since it is processed right as it is captured. There is no need to transfer video data.

Cons: There is more on the user's head. The mini computer will need a power source, meaning a battery, and this, along with the mini computer itself, adds size and weight to the system.

 

As we want this project to process data in real time, we will consider the bulky option. The next section details which mini computers we could use.

 

Mini Computer Options

 

 

Arduino
    Pros: The Arduino is a very popular device, and there may be lots of open-source code we can study and use.
    Cons: We do not have experience with the Arduino. This option may lack Bluetooth and WiFi.

Raspberry Pi 3
    Pros: We have experience programming the Raspberry Pi. It has built-in WiFi and Bluetooth, plus 4 dedicated ARM cores for video processing. We all own an RP3.
    Cons: This option is very bulky and heavy. Because of this, it may use the most power.

Raspberry Pi 0
    Pros: We have experience programming the Raspberry Pi. It is also a lot smaller than the Raspberry Pi 3 and can be bought cheaply at Micro Center.
    Cons: This option does not have built-in WiFi and Bluetooth, which are needed to connect to the smartphone. Adding these components may add to the bulk. This option also has only 1 core to use.

Intel Edison
    Pros: This is one of the smallest solutions. It has built-in WiFi and Bluetooth. Intel created this mini computer for the Internet of Things and wearables, and our project is a wearable. It may also use the least power.
    Cons: We do not have experience programming Intel Atom processors, though there may be code repositories online. The Intel Edison may be pricey.

 

With these options, we are leaning toward the Intel Edison. We will also need to choose a suitable camera and a power system. We will require a battery, but we should look into a solar power source; if flexible solar panels exist, they may ease the battery bulk problem considerably. More research on the software will be needed to determine what counts as "powerful" enough.