Part I: Documentation

The documentation has been completed.

Part II: QuickMatch Runtime

With 3 agents running (6 ROS nodes in total, since each agent has a feature node and a QuickMatch node) and each agent processing 6 images of size 1008 x 756, the QuickMatch runtime is as follows:

The total end-to-end runtime (from starting to read the images to finishing QuickMatch) is as follows:
Part III: Dealing with Video Processing

If we are dealing with recorded videos, we can easily convert them to images at a chosen frame rate and then run the existing algorithm on the extracted images. I found a tutorial on how to do this with OpenCV (https://medium.com/@iKhushPatel/convert-video-to-images-images-to-video-using-opencv-python-db27a128a481), so this is definitely possible. If we are dealing with live video, there is a ROS package called video_stream_opencv that enables a camera to publish a stream of ROS images; it has an official ROS wiki page and a GitHub repository.

Question

I have roughly looked over Zack's NetMatch code (QuickMatch with contested points), but I have not studied it deeply. It seems there is a good amount of code I would need to read to understand exactly what is going on. Given the amount of time left this semester, I do not think I would have enough time to finish implementing it in the distributed version and debugging it.
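As a rough sketch of the video-to-frames step (this is not part of the current code; the function names, file paths, and output naming scheme are placeholders I made up, and `extract_frames` assumes OpenCV's `cv2` module is installed), the core idea is to step through the video and keep only enough frames to hit a target frame rate:

```python
def frames_to_keep(src_fps, target_fps, n_frames):
    """Indices of frames to keep when subsampling a src_fps video to target_fps."""
    step = src_fps / float(target_fps)
    kept, nxt = [], 0.0
    for i in range(n_frames):
        if i >= nxt:          # time for the next sampled frame
            kept.append(i)
            nxt += step
    return kept

def extract_frames(video_path, out_dir, target_fps):
    """Save frames from video_path into out_dir at roughly target_fps.
    Placeholder sketch: requires OpenCV (cv2); paths are hypothetical."""
    import cv2, os
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = src_fps / float(target_fps)
    i, nxt, saved = 0, 0.0, 0
    os.makedirs(out_dir, exist_ok=True)
    while True:
        ok, frame = cap.read()
        if not ok:            # end of video (or read error)
            break
        if i >= nxt:
            cv2.imwrite(os.path.join(out_dir, "frame_%06d.png" % saved), frame)
            saved += 1
            nxt += step
        i += 1
    cap.release()
    return saved

# e.g. 90 frames of a 30 fps clip subsampled to 2 fps keeps every 15th frame:
print(frames_to_keep(30, 2, 90))  # → [0, 15, 30, 45, 60, 75]
```

The extracted images could then be fed to the existing feature node exactly like still images.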
I will work until the last day of final exams (May 9), which gives me two more weeks. So the question is: should I work on a version with contested points, or a version that analyzes recorded video frames? I am more confident that I can incorporate recorded video frames into the existing code in the next two weeks, but I will work on the contested points if that is the higher priority.