After finishing the prototype guidance system earlier, an idea struck me: make it real-time. Since everything for the prototype was implemented in MATLAB, and MATLAB is not well suited to real-time systems, I had to implement everything again (well, sort of) in C++. I love working with OpenCV, so that's where I began, redoing all the work in C++ using the OpenCV library.
After much work, and tuning it with different algorithms, I finally had it up and running. However, now came the main problem of testing this system in a real environment, and since I had implemented everything on my desktop, it was impossible to take the desktop onto a real path. So I took my camera and shot a small clip simulating a blind person walking, deviating slightly in both the left and right directions and then coming back to the straight path.
I opened this video using OpenCV and simply ran my code, and the output was really impressive. This was the first time my system was working in a real-world scenario, without controlled conditions.
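If you want a feel for how the clip is fed in, here is a minimal sketch of the frame loop; the file name and the Canny thresholds are placeholders, not the actual values from my code:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // "walk_clip.mp4" is a placeholder for the clip I shot.
    cv::VideoCapture cap("walk_clip.mp4");
    if (!cap.isOpened()) {
        std::cerr << "Could not open the video file\n";
        return 1;
    }

    cv::Mat frame, gray, edges;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // Canny thresholds here are illustrative, not the tuned values.
        cv::Canny(gray, edges, 50, 150);

        cv::imshow("input", frame);
        cv::imshow("edges", edges);
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}
```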
Here is the output from the actual code:
In the video above, you can see the actual input on the left, the output from the code in the middle, and an edge-detected image showing all the edges detected by the code on the right. The number of lines and intersection points for each frame is shown at the bottom, and finally you can see the guidance output, showing how to correct the blind person's deviation, at the bottom left of the output sequence.
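The per-frame pipeline behind those panels is, roughly, Canny edge detection followed by a probabilistic Hough transform, with intersections computed pairwise between the detected line segments. Here is a rough sketch of that idea; the Hough parameters and thresholds are illustrative, not my tuned values:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <optional>
#include <vector>

// Intersection of two segments extended to infinite lines; nullopt if parallel.
std::optional<cv::Point2f> intersect(const cv::Vec4i& a, const cv::Vec4i& b) {
    cv::Point2f p(a[0], a[1]), r(a[2] - a[0], a[3] - a[1]);
    cv::Point2f q(b[0], b[1]), s(b[2] - b[0], b[3] - b[1]);
    float denom = r.x * s.y - r.y * s.x;
    if (std::fabs(denom) < 1e-6f) return std::nullopt;  // parallel lines
    float t = ((q.x - p.x) * s.y - (q.y - p.y) * s.x) / denom;
    return cv::Point2f(p.x + t * r.x, p.y + t * r.y);
}

void processFrame(const cv::Mat& frame) {
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);

    // Probabilistic Hough transform; parameters are illustrative.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 60, 40, 10);

    std::vector<cv::Point2f> intersections;
    for (size_t i = 0; i < lines.size(); ++i)
        for (size_t j = i + 1; j < lines.size(); ++j)
            if (auto pt = intersect(lines[i], lines[j]))
                intersections.push_back(*pt);

    // lines.size() and intersections.size() are the per-frame counts
    // overlaid at the bottom of the output video.
}
```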
Multiple markers can also be used to make the prediction of deviation more accurate; for example, using four markers yields five decisions, namely: go straight, turn slightly towards left, turn slightly towards right, turn left, and turn right. A sketch of that mapping follows below.
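One simple way to picture the four-marker scheme is as four boundaries splitting the horizontal deviation into five zones. This is a hypothetical sketch, not my actual decision code; the zone boundaries and the idea of using the estimated path centre (e.g. the dominant intersection point) as input are assumptions for illustration:

```cpp
#include <string>

// Hypothetical mapping from horizontal deviation to one of five decisions.
// centreX is the estimated path centre; boundaries are illustrative, not tuned.
std::string decide(float centreX, float frameWidth) {
    float d = (centreX - frameWidth / 2.0f) / (frameWidth / 2.0f); // -1 .. +1
    if (d < -0.5f)  return "turn left";
    if (d < -0.15f) return "turn slightly towards left";
    if (d <= 0.15f) return "go straight";
    if (d <= 0.5f)  return "turn slightly towards right";
    return "turn right";
}
```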
This is just the beginning; there is a lot of room for adding more constraints and making it more and more accurate and adaptable to real-world scenarios. From here, this project can be further developed into an embedded hardware device that gives instructions using voice commands, with those voice commands driven directly by the decisions of the algorithm.
Apart from helping visually impaired people, this simple algorithm has a wide variety of applications, from simple robots to unmanned cars and aerial vehicles. It can be adjusted for any application, and on top of everything, it's REAL-TIME now!!