EPSRC Centre for Doctoral Training in Agri-Food Robotics: AgriFoRwArdS - Mazvydas Gudelis

Mazvydas Gudelis

  • University of East Anglia in collaboration with CEFAS

Research Interests

Computer vision and machine learning, object recognition and autonomous vehicles.

Presentations

  • ICRA Task-Informed Grasping Workshop – III (2021): Perception in Agri-Food Manipulation: A Review.

Other Activities and Outputs

  • Awarded Best Summer School Presentation at the AgriFoRwArdS CDT Annual Conference 2021 for contribution to ‘Computer vision for quality assessment of apples’.
  • Took part in the AgriFoRwArdS Summer School 2021, resulting in a co-authored presentation at the AgriFoRwArdS Annual Conference 2021: Computer vision for quality assessment of apples (in collaboration with David Larby, Vishnu Rajendran, Srikishan Vayakkattil, Amie Owen).

About me

I would like to focus on computer vision and machine learning, particularly object recognition and autonomous vehicles, but am also excited to learn more about the technical side of perception and movement.

MSc Project

Detection and Segmentation of Fish in RGB-D images

PhD Project

Analysing Videos of Fish in the Field

The Colour & Imaging Lab at UEA has been involved in research on the automatic analysis of videos captured on fishing vessels equipped with Catch Quota Monitoring Systems (CQMS). A view of the conveyor belt bringing fish to the sorting and discard areas is critical to any CQMS. Manually reviewing the video footage to classify, count and measure the catch is a tedious and costly task that represents a bottleneck in CQMS. Our past and current research focuses on developing computer vision algorithms to automate this process.

In this project we propose to take this research a step further. In the scenario described above, although fish are imaged in a very challenging environment, the fixed geometry of the belt, the fixed camera position and relatively stable indoor lighting provide constraints that can be exploited in subsequent video processing. Here, we propose to analyse videos captured in significantly less constrained environments. A typical scenario would involve analysing videos of fish outdoors at a port, a market, or a sorting table on board a small fishing vessel. In contrast to the previous scenario, where fixed cameras with known parameters are available, here videos would often need to be captured by handheld devices, typically mobile phones. One of the major challenges to be faced is varying illumination. To succeed in this project, we will build on the most recent research in areas including depth sensing, photogrammetry, image fusion and (deep) machine learning.
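To illustrate the varying-illumination problem, one classical correction that might serve as a preprocessing baseline is gray-world colour constancy, which rescales each colour channel so the image is neutral on average. This sketch is illustrative only and is not the project's method; the toy image and the pure-Python representation (nested lists of RGB tuples) are assumptions for the example.

```python
def gray_world_correct(image):
    """Gray-world colour constancy: scale each channel so its mean
    matches the global mean intensity, reducing a colour cast caused
    by the illuminant. `image` is rows of (R, G, B) tuples in 0-255."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    means = [sum(px[c] for px in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    return [
        [tuple(min(255.0, px[c] * gains[c]) for c in range(3)) for px in row]
        for row in image
    ]

# Toy 2x2 image with a strong blue cast (as might arise under cold light):
img = [[(100, 100, 200), (120, 110, 210)],
       [(90, 95, 190), (110, 105, 205)]]
corrected = gray_world_correct(img)
```

After correction the three channel means are equal, so a downstream classifier sees colours that depend less on the (unknown, changing) illumination.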

Successful deep learning algorithms often require large datasets of images with ground-truth annotations – here, these would be fish species as well as weight and length annotations. These requirements are often so prohibitive that development of such algorithms must be initiated using synthetic data and later improved using captured data annotated by human experts. In this project, we plan to adopt this approach.
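The synthetic-first strategy can be sketched in miniature: pretrain a classifier on plentiful synthetic data, then fine-tune it on a small set of expert-annotated real samples whose distribution differs from the synthetic one. Everything here is a hypothetical illustration – the model (a one-feature logistic regression standing in for a deep network) and the data (a single normalised fish-length feature separating two species) are assumptions, not the project's actual pipeline.

```python
import math
import random

def train(weights, data, lr, epochs):
    """Logistic regression on (feature, label) pairs via stochastic
    gradient descent; stands in for training a deep network."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w -= lr * (p - y) * x                     # gradient step on weight
            b -= lr * (p - y)                         # gradient step on bias
    return w, b

def accuracy(weights, data):
    w, b = weights
    return sum((w * x + b > 0) == (y == 1) for x, y in data) / len(data)

random.seed(0)

# Stage 1: pretrain on a large, cheap synthetic dataset
# (species 1 tends to be longer than species 0).
synthetic = [(random.gauss(-1, 0.5), 0) for _ in range(200)] + \
            [(random.gauss(1, 0.5), 1) for _ in range(200)]
model = train((0.0, 0.0), synthetic, lr=0.1, epochs=5)

# Stage 2: fine-tune on a handful of expert-annotated "real" samples
# whose distribution is shifted relative to the synthetic data.
real = [(random.gauss(-0.5, 0.4), 0) for _ in range(20)] + \
       [(random.gauss(1.5, 0.4), 1) for _ in range(20)]
model = train(model, real, lr=0.05, epochs=10)
```

The pretrained model provides a sensible starting point, so the fine-tuning stage needs only a few annotated real samples to adapt to the shifted distribution – the same economy that motivates initiating development with synthetic data.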