Part I: New Functionality & Debugging

The previous version extracted features from images stored in a folder. The new version can process any saved video file: it reads one frame from the video each second (the read rate can be specified) and passes the resulting images to the Distributed QuickMatch system for processing. The video-read functionality is built on top of the image-read functionality and lives in a separate function, so the user can switch between the new get_images_from_video function and the old get_images function.

Distributed QuickMatch initially did not work after I added the video-read functionality. After a few hours of debugging, I found that in the calc_density function I was passing the number of features as the last argument when I should have been passing the number of images. After the correction, everything works correctly.

Part II: Testing Results for Video Processing

Since the previous testing data only included images, I recorded a new video for testing. You can find the original 7-second video here. Below are the 7 images output by the algorithm. (Note: these are results from debugging mode, meaning k-means is not actually computed and one node gets all the features.)

Figure 1: Image 1 vs. Image 1
Figure 2: Image 1 vs. Image 2
Figure 3: Image 1 vs. Image 3
Figure 4: Image 1 vs. Image 4
Figure 5: Image 1 vs. Image 5
Figure 6: Image 1 vs. Image 6
Figure 7: Image 1 vs. Image 7

Next Steps

There is still some leftover work to be done. Since next week is the last week, I'll finish the following and conclude the project:
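As an aside, the per-second frame sampling from Part I could be sketched roughly like this. This is a hypothetical sketch only: the post names get_images_from_video and a configurable read rate, but everything else here (the use of OpenCV, the exact signature, the frame_step helper) is my assumption, not the project's actual code.

```python
# Sketch of one-frame-per-second video sampling (assumptions noted above).
try:
    import cv2  # OpenCV is one common choice; the post does not name a library
except ImportError:
    cv2 = None

def frame_step(fps, read_rate):
    """How many source frames to advance between kept frames.

    With read_rate=1.0 this keeps roughly one frame per second of video.
    (Hypothetical helper, not from the original project.)
    """
    return max(1, int(round(fps / read_rate)))

def get_images_from_video(video_path, read_rate=1.0):
    """Read frames from a saved video file at `read_rate` frames per second."""
    if cv2 is None:
        raise RuntimeError("OpenCV (cv2) is required to decode video files")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreported
    step = frame_step(fps, read_rate)
    images = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # keep every `step`-th frame
            images.append(frame)
        idx += 1
    cap.release()
    return images
```

Keeping the sampling arithmetic in a separate helper makes the read-rate behavior easy to check without decoding an actual video.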