Recently, several image segmentation methods that welcome and leverage different types of user assistance have been developed. In these methods, user inputs can be provided by drawing bounding boxes over image objects, by drawing scribbles or planting seeds that help to distinguish image regions from one another, or by interactively refining missegmented image regions. Due to the variety in the types and amounts of these inputs, relative assessment of different segmentation methods becomes difficult. As a possible solution, we propose a simple yet effective statistical segmentation method that can handle and utilize different input types and amounts. The proposed method is based on robust hypothesis testing, specifically the DGL test, and can be implemented with time complexity that is linear in the number of pixels and quadratic in the number of image regions. It is therefore suitable as a baseline method for quick benchmarking and for assessing the relative performance improvements of different types of user-assisted segmentation algorithms. We provide a mathematical analysis of the operation of the proposed method, discuss its capabilities and limitations, provide design guidelines, and present simulations that validate its operation.
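
To make the complexity claim concrete, below is a minimal, hedged sketch of how a DGL-style (Devroye-Györfi-Lugosi) pairwise test could assign a patch of pixels to one of K region models: the empirical histogram of the patch is computed once (linear in the number of pixels), and each pair of region densities is compared on its Scheffé set (quadratic in the number of regions). The function names, the histogram-based density models, and the voting scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dgl_label_patch(patch_values, region_hists, bin_edges):
    """Assign a patch of pixel intensities to one of K regions via
    pairwise DGL tests on the Scheffe sets.
    Illustrative sketch only; the paper's exact formulation may differ."""
    K = len(region_hists)
    # Empirical histogram of the patch over shared bins (linear in the pixel count).
    emp, _ = np.histogram(patch_values, bins=bin_edges, density=True)
    bin_w = np.diff(bin_edges)
    emp_mass = emp * bin_w                        # empirical probability per bin
    votes = np.zeros(K, dtype=int)
    for i in range(K):                            # pairwise loop: quadratic in K
        for j in range(i + 1, K):
            f_i, f_j = region_hists[i], region_hists[j]
            scheffe = f_i > f_j                   # Scheffe set A_ij = {x : f_i(x) > f_j(x)}
            mu_emp = emp_mass[scheffe].sum()
            mu_i = (f_i * bin_w)[scheffe].sum()
            mu_j = (f_j * bin_w)[scheffe].sum()
            # The region whose model mass on A_ij is closer to the empirical mass wins.
            if abs(mu_emp - mu_i) <= abs(mu_emp - mu_j):
                votes[i] += 1
            else:
                votes[j] += 1
    return int(np.argmax(votes))

# Toy usage: two hypothetical region models (dark / bright), e.g. estimated from user scribbles.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bin_edges = np.linspace(0.0, 1.0, 33)
    dark = np.histogram(rng.beta(2, 5, 5000), bins=bin_edges, density=True)[0]
    bright = np.histogram(rng.beta(5, 2, 5000), bins=bin_edges, density=True)[0]
    patch = rng.beta(5, 2, 64)                    # intensities from a bright image patch
    print("assigned region:", dgl_label_patch(patch, [dark, bright], bin_edges))
```

In this sketch the per-region density models would come from whatever user assistance is available (scribbles, seeds, or box interiors), which is one way such a test could remain agnostic to the input type; how the paper actually builds its region models is not specified in the abstract.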
               