MACKIEWICZM_UCMP18NEX - Monitoring coastal environments using imaging sonars and machine learning
Imaging sonars can now produce video-like imagery at high frame rates (typically 8–30 frames per second) in the underwater marine environment. Such systems work well in turbid coastal and estuarine environments where low-light video systems do not provide useful imagery. Imaging sonars therefore offer new remote sensing tools for studying previously intractable problems of importance to industry and to marine managers, including the detection of organisms that could clog power station water intakes and the study of fish behaviour around coastal structures.
However, the volume of data that such systems can generate (terabytes per day) is a real barrier to their routine deployment, because of the staff time required to analyse the images and the associated costs and delays. Recent advances in machine vision have made such modules practical components of underwater remote sensing systems, which typically face severe constraints on available power and on the bandwidth of communications links to onshore data-processing facilities. This project aims to develop automated machine learning to detect and classify targets of interest in near real time, thereby dramatically reducing image-analysis costs and opening up the use of such systems in autonomous remote sensing applications.
Traditional image processing techniques for detecting and classifying features in imaging sonar data use pixel-based supervised classification. However, these do not scale well to the large quantities of data now available, increasing costs and delaying data production. Moreover, despite the large data volumes, the sonar footage may contain few occurrences of relevant objects over long periods of time. While imaging sonar footage is expensive to capture and later annotate, the appearance of objects such as fish and jellyfish often resembles that captured by conventional RGB cameras. Consequently, the research will concentrate on developing machine learning algorithms that aid the processing of sonar images (the target domain) by also learning from other imaging domains (source domains), e.g. conventional RGB imagery. The developed algorithms will belong to the family of 'deep learning' methods, which have recently delivered a step change in a number of computer vision applications. This will require a large dataset of annotated imagery for training, together with expert knowledge of image appearance, both of which are available at Cefas. The student will also contribute to the deployment of Cefas sonars as the research progresses.
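As one illustration of how a source domain (e.g. RGB imagery) and a target domain (sonar imagery) might be bridged, the gradient reversal trick of Ganin & Lempitsky (listed in the references below) can be sketched in a few lines: a domain classifier is trained normally, but its gradient is negated before it reaches the shared feature extractor, so the extractor learns features that the classifier cannot use to tell the domains apart. The scalar weights, sample values and learning rate here are purely illustrative assumptions, not part of the project.

```python
def grad_reverse(grad, lam=1.0):
    # Gradient reversal layer: identity in the forward pass,
    # negated (and scaled by lam) gradient in the backward pass.
    return -lam * grad

# Toy shared feature extractor and domain classifier (scalar weights for clarity).
w = 0.5               # feature-extractor weight (shared between tasks)
v = 0.3               # domain-classifier weight
x, y_dom = 2.0, 1.0   # one sample and its domain label (e.g. 1 = RGB, 0 = sonar)
lam, lr = 1.0, 0.1

feat = w * x                              # forward through the feature extractor
d_pred = v * feat                         # forward through the domain classifier
loss_dom = 0.5 * (d_pred - y_dom) ** 2    # squared-error domain loss

g_dpred = d_pred - y_dom                  # dL/d(d_pred)
g_v = g_dpred * feat                      # dL/dv (domain head trains normally)
g_feat = g_dpred * v                      # dL/d(feat)
g_feat_rev = grad_reverse(g_feat, lam)    # reversal before the shared features
g_w = g_feat_rev * x                      # dL/dw after reversal

v -= lr * g_v   # domain head descends its loss
w -= lr * g_w   # extractor *ascends* the domain loss -> domain-invariant features
```

In a full deep network the same reversal sits between the shared feature layers and the domain classifier, while a separate label predictor (trained on annotated source-domain data) pulls the shared features towards task-relevant representations.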
The NEXUSS CDT provides state-of-the-art, highly experiential training in the application and development of cutting-edge Smart and Autonomous Observing Systems for the environmental sciences, alongside comprehensive personal and professional development. There will be extensive opportunities for students to expand their multi-disciplinary outlook through interactions with a wide network of academic, research and industrial / government / policy partners. The student will be registered at the University of East Anglia and hosted in the School of Computing Sciences, in the Graphics, Vision and Speech laboratory. The student will receive training in all areas relevant to the project, including computer vision and machine learning, as well as Matlab and Python programming. The student will spend periods of time at Cefas, Lowestoft and at the University of Southampton in order to familiarise themselves with the images and the ecological aspects of the project.
This project has been shortlisted for funding by the NEXUSS Centre for Doctoral Training. Undertaking a PhD with the NEXUSS CDT will involve attendance at mandatory training events throughout the course of the PhD.
Selected candidates who meet RCUK’s eligibility criteria will be awarded a NERC/EPSRC studentship - in 2017/18, the stipend is £14,553.
In most cases, UK and EU nationals who have been resident in the UK for 3 years are eligible for a stipend. For non-UK EU-resident applicants NERC/EPSRC funding can be used to cover tuition fees, RTSG and training costs, but not any part of the stipend. Individual institutes may, however, elect to provide a stipend from their own resources.
This PhD studentship is expected to begin in September/October 2018. Both full-time and part-time study are possible (those planning to study part-time may wish to discuss this with the supervisor before applying).
Li, M., et al. (2013). Object Detection and Tracking Based on Multi-Beam Sonar Image Processing. In Proceedings of the IEEE International Conference on Robotics and Biomimetics.
LeCun, Y., Bengio, Y. & Hinton, G. (2015). Deep learning. Nature 521, 436–444.
Ganin, Y. & Lempitsky, V. (2015). Unsupervised Domain Adaptation by Backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1180–1189.
Prof. Alastair Grant – School of Environmental Sciences, University of East Anglia
Dr Julian Leyland – University of Southampton
Dr Julie Bremner – Cefas, Lowestoft
Mr Brian Robinson – Cefas, Lowestoft
Mr Mike Challiss – Cefas Technology Limited
- Start date: October 2018
- Studentship length: 3 years 8 months
- Acceptable first degree: any numerate discipline
- Minimum entry standard: 2:1 or equivalent