We investigate the problem of stitching temporally synchronized video streams captured by freely moving devices. It was recently shown that exploiting frame-to-frame correlation can greatly improve both the efficiency and the effectiveness of video stitching algorithms. In this paper, we address two shortcomings of that prior work: its simple blending approach, which accounts for almost a third of stitching errors, and the fact that the stitching algorithm was evaluated only on a frame-by-frame basis, which does not realistically reflect how users perceive the quality of the output as a complete video. We propose a modified blending technique based on optimal seam selection and experimentally validate its superiority using precision, recall, and F1 measures on a frame-by-frame basis, while maintaining low computational complexity. Furthermore, we verify that the performance gains measured on a frame-by-frame basis persist when the stitched video output is evaluated as a single unit.
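To make the seam-selection idea concrete, the following is a minimal sketch (not the paper's implementation) of the classic dynamic-programming approach: given a per-pixel difference map over the overlap region of two aligned frames, find the vertical seam of minimal accumulated cost, then compose the output by taking one frame's pixels on each side of the seam. The function names and the grayscale, vertical-seam setting are illustrative assumptions.

```python
import numpy as np

def optimal_seam(diff):
    """Minimal-cost vertical seam through a per-pixel difference map
    of the overlap region, found by dynamic programming.
    (Illustrative sketch; assumes a 2-D grayscale cost map.)"""
    h, w = diff.shape
    cost = diff.astype(float).copy()
    # Accumulate the cheapest path cost row by row; each pixel may
    # connect to the three pixels above it (left, center, right).
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack from the cheapest bottom cell to recover the seam.
    seam = [int(cost[-1].argmin())]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(cost[y, lo:hi].argmin()))
    return seam[::-1]  # seam[y] = column index for row y

def blend_with_seam(left, right, seam):
    """Compose the overlap: pixels from `left` before the seam,
    pixels from `right` from the seam onward (no feathering)."""
    out = right.copy()
    for y, x in enumerate(seam):
        out[y, :x] = left[y, :x]
    return out
```

Routing the transition through low-difference pixels, rather than averaging the whole overlap, is what lets seam-based blending avoid the ghosting artifacts that a simple blend produces on misaligned or moving content.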