(EN) The invention relates to a moving target segmentation method and system based on a twin (Siamese) deep neural network. The method comprises the steps of: obtaining multiple groups of historical image information, each group comprising a current frame and a reference frame of the same size taken from the same video, together with a label annotating the motion of the target; training a VGG16 network model on the groups of historical image information; and, using the trained VGG16 network model, performing motion transformation detection and/or relative background detection on an image to be detected to determine the moving targets in that image. By training the VGG16 network model with multiple groups of current frames, reference frames, and labels, the invention exploits temporal information through comparison against template frames; because the twin network allows flexible selection of templates, the method adapts well to moving-camera conditions while still using temporal information, effectively improving the accuracy of moving target segmentation.
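As an illustration of the twin-network structure described above, the following is a minimal sketch, assuming a PyTorch/torchvision environment. The class name SiameseVGG16Segmenter, the decoder head, the input size of 224×224, and the training hyperparameters are all hypothetical choices for illustration and are not specified by the abstract; only the weight-shared VGG16 backbone applied to a current frame and a reference frame, supervised by a moving-target label, reflects the described method.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16


class SiameseVGG16Segmenter(nn.Module):
    """Illustrative twin-network segmenter (hypothetical): a shared VGG16
    backbone encodes the current frame and the reference (template) frame;
    the features are fused and decoded into a per-pixel moving-target mask."""

    def __init__(self):
        super().__init__()
        # Shared (weight-tied) VGG16 convolutional backbone; both branches use it.
        self.backbone = vgg16().features  # 512 channels at 1/32 input resolution
        # Simple decoder (assumption): fuse the two feature maps and upsample.
        self.decoder = nn.Sequential(
            nn.Conv2d(1024, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, current_frame, reference_frame):
        # The two branches share weights, giving the "twin" (Siamese) structure.
        feat_cur = self.backbone(current_frame)
        feat_ref = self.backbone(reference_frame)
        fused = torch.cat([feat_cur, feat_ref], dim=1)
        return torch.sigmoid(self.decoder(fused))  # moving-target probability map


# Minimal training step on one (current frame, reference frame, label) group.
model = SiameseVGG16Segmenter()
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

current = torch.rand(1, 3, 224, 224)     # current frame
reference = torch.rand(1, 3, 224, 224)   # reference frame, same size as current
label = torch.randint(0, 2, (1, 1, 224, 224)).float()  # moving-target mask label

pred = model(current, reference)
loss = criterion(pred, label)
loss.backward()
optimizer.step()
```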