Subsea inspection of oil and gas pipelines is a time-consuming task that may not yield accurate data, as results may be affected by sand agitation, sea life or vegetation. The reliability of information provided by camera-equipped remotely operated vehicles (ROVs) may be improved with deep learning techniques. Efforts are underway in Scotland to apply this technology to the automation of video footage interpretation.
With the support of The Data Lab Innovation Centre, the inspection expertise of subsea service provider N-Sea has been combined with data analytics research at the Institute of Sensors, Signals and Communications in the University of Strathclyde’s Department of Electronic and Electrical Engineering to develop an algorithm that annotates video frames automatically and in real time. The approach is based on ensembles that combine the output of three classifiers operating independently on the three video streams — port, starboard and centre.
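The article does not specify how the three per-camera classifier outputs are fused, so the sketch below assumes a simple averaging rule over the softmax probabilities of each stream; the function name and class layout are illustrative, not N-Sea's actual implementation.

```python
import numpy as np

def ensemble_predict(port_probs, starboard_probs, centre_probs):
    """Fuse three per-camera class-probability vectors by averaging.

    Each argument is a 1-D array of softmax outputs from one camera's
    classifier. Averaging is an assumed fusion rule; the article does
    not state which combination method is used.
    """
    stacked = np.stack([port_probs, starboard_probs, centre_probs])
    mean_probs = stacked.mean(axis=0)
    return int(np.argmax(mean_probs)), mean_probs

# Hypothetical example over three event classes: each camera's
# classifier votes, and the averaged probabilities decide.
port = np.array([0.7, 0.2, 0.1])
starboard = np.array([0.6, 0.3, 0.1])
centre = np.array([0.2, 0.5, 0.3])
label, probs = ensemble_predict(port, starboard, centre)
```

An averaging ensemble like this tends to suppress a single stream's misclassification (here the centre camera disagrees but is outvoted), which matches the article's motivation of coping with murky, unreliable footage.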
David Murray, survey and inspection data centre manager at N-Sea, said, “Recently a number of automatic video annotation approaches have been announced. However, these have been demonstrated in clear waters, using bespoke, vendor-specific camera systems that mitigate motion blur and poor image quality through strobed lighting and high shutter speeds.
“Although these technological advancements in the equipment are beneficial, the vast majority of work-class ROVs are still equipped with standard cameras operating in murky waters.”
A 24-layer convolutional neural network, newly designed to identify features in the video frames, classifies a number of events — such as burial, exposure and field joints — with high accuracy on still images. Combining predictions over a number of consecutive frames further boosts the network’s performance.
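One common way to combine predictions over consecutive frames is a sliding-window majority vote over the per-frame labels. The window size and voting rule below are illustrative assumptions; the article does not describe the exact temporal-smoothing scheme used.

```python
from collections import Counter, deque

def temporal_vote(frame_labels, window=5):
    """Smooth per-frame event labels by majority vote over a sliding window.

    `frame_labels` is a sequence of per-frame class predictions (e.g.
    "burial", "exposure", "field joint"). Window size 5 is an assumed
    parameter, not a detail from the article.
    """
    smoothed = []
    buf = deque(maxlen=window)
    for label in frame_labels:
        buf.append(label)
        # most_common(1) returns the label with the highest count in the window
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed

# A spurious single-frame "exposure" amid "burial" frames is voted away.
raw = ["burial", "burial", "exposure", "burial", "burial"]
smoothed = temporal_vote(raw)
```

Smoothing of this kind illustrates why per-frame accuracy improves when consecutive frames are combined: transient misclassifications caused by blur or debris in a single frame rarely survive the vote.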
The team is continuing research to increase the technology readiness level of the model and to permit easy adoption by the inspection industry.