EPSRC Centre for Doctoral Training in Agri-Food Robotics: AgriFoRwArdS - Mazvydas Gudelis

Mazvydas Gudelis

  • University of East Anglia in collaboration with CEFAS

Research Interests

Mazvydas’s research interests include artificial intelligence, deep learning, computer vision, and software engineering for robotics applications.

Publications

Presentations

  • “Perception in Agri-food Manipulation: A Review” (oral) – International Conference on Robotics and Automation (ICRA) Task-Informed Grasping Workshop – III [May 2021] – Online.
  • “Computer vision for quality assessment of apples” (oral) – AgriFoRwArdS CDT Summer School 2021 [June 2021] – Online.
  • “Computer vision for quality assessment of apples” (oral) – AgriFoRwArdS CDT Annual Conference 2021 [July 2021] – Online.
  • “Developing a Computer Vision pipeline for automated analysis of Antarctic Krill” (oral) – AgriFoRwArdS CDT Annual Conference 2022 [June 2022] – Lincoln, UK.
  • “Serving a Full English Breakfast” (oral) – AgriFoRwArdS CDT Summer School 2023 [March 2023] – Lincoln, UK.
  • “Title unknown” (poster) – University of East Anglia Computing Sciences Postgraduate Showcase Day 2023 [May 2023] – Norwich, UK.
  • “Deep Learning for Antarctic Krill staging and morphology analysis from high resolution image pairs” (poster) – Towards Autonomous Robotic Systems (TAROS) 2023 / AgriFoRwArdS CDT Annual Conference 2023 / Joint Robotics CDT Annual Conference 2023 [September 2023] – Cambridge, UK.
  • “Computer Vision Pipeline for Automated Antarctic Krill Analysis” (oral) – British Machine Vision Association Conference 2023 [November 2023] – Aberdeen, UK.

Other Activities and Outputs

  • Awarded Best Summer School Presentation at the AgriFoRwArdS CDT Annual Conference 2021 for the contribution ‘Computer vision for quality assessment of apples’.
  • Took part in the AgriFoRwArdS Summer School 2021, resulting in a co-authored presentation at the AgriFoRwArdS Annual Conference 2021: Computer vision for quality assessment of apples (in collaboration with David Larby, Vishnu Rajendran, Srikishan Vayakkattil and Amie Owen).
  • Part of the winning group in the Poinsettia Hackathon.
  • Best Poster winner at the UEA Postgraduate Showcase Day 2023.
  • 3rd place winner in the PyTorch Docathon 2023 (May 2023).

About me

Hello! My name is Mazvydas and I am a PhD researcher with the AgriFoRwArdS CDT in agri-food robotics, working at the UEA Vision Laboratory on a PhD studentship due to finish around 2025. I often describe myself as a curious programmer with experience in web and mobile application development who’s learning to be a better researcher every day. I have been programming since 2017, when I started my higher education with a BSc in Computer Science at the University of East Anglia. The projects I undertake usually focus on AI applications for real-world perception problems through the use of computer vision and machine learning. My goal is to do work that will make the planet a better place for everyone. My current research focuses on the automation of fisheries and the aquaculture sector, more specifically fish segmentation and recognition from multi-modal image sequences.

MSc Project

Detection and Segmentation of Fish in RGB-D images

Accurate item recognition is essential to the success of systems tasked with stock management and stock quality analysis. Recognition algorithms face a challenge because fish are often seen in close proximity to one another, whether in a physical processing environment or in their natural habitat. Because the input data contains more richly detailed geometry, shape, and scale information, 3D instance segmentation offers a more insightful approach to scene comprehension. In this work, we design several pipelines for data acquisition using a Kinect V2 camera and the iPhone 12 Pro LiDAR sensor. We also review a handful of tools for data labelling, present an open-source “Fish Box RGB-D” data set, and carry out experiments in the colour and depth spaces to determine its usefulness and robustness. The findings from these experiments show that accurate localisation and segmentation of fish in a box can be achieved in both the colour (~67% mAP) and depth (~58% mAP) spaces.
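As a rough illustration of the kind of colour-versus-depth comparison described above (not the actual MSc pipeline), the Python sketch below runs an off-the-shelf Mask R-CNN from torchvision on a colour frame and on a depth map replicated to three channels, so both input spaces can be fed to the same detector. The file names, score threshold and use of a COCO-pretrained model are assumptions; in practice the model would be fine-tuned on the annotated Fish Box RGB-D data.

    # Minimal sketch, assuming illustrative file names; not the author's actual pipeline.
    import numpy as np
    import torch
    from PIL import Image
    from torchvision.models.detection import maskrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

    def segment(image_tensor, score_threshold=0.5):
        """Return instance masks and scores for detections above the threshold."""
        with torch.no_grad():
            out = model([image_tensor])[0]
        keep = out["scores"] > score_threshold
        return out["masks"][keep], out["scores"][keep]

    # Colour branch: a standard RGB frame from the Kinect V2 or iPhone capture.
    rgb = to_tensor(Image.open("fish_box_rgb.png").convert("RGB"))
    rgb_masks, rgb_scores = segment(rgb)

    # Depth branch: normalise the 16-bit depth map to [0, 1] and replicate it to
    # three channels so the same backbone can consume it.
    depth_raw = np.asarray(Image.open("fish_box_depth.png"), dtype=np.float32)
    depth = torch.from_numpy(depth_raw / max(depth_raw.max(), 1.0)).unsqueeze(0)
    depth_masks, depth_scores = segment(depth.repeat(3, 1, 1))

    print(f"colour detections: {len(rgb_scores)}, depth detections: {len(depth_scores)}")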

PhD Project

Analysing Videos of Fish in the Field

The Colour & Imaging Lab at UEA has been involved in research on the automatic analysis of videos captured on fishing vessels equipped with Catch Quota Monitoring Systems (CQMS). A view of the conveyor belt bringing fish to the sorting and discard areas is critical to any CQMS. Manually reviewing the video footage to classify, count and measure the catch is a tedious and costly task that represents a bottleneck in CQMS. Our past and current research focuses on developing computer vision algorithms to automate this process.
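As a toy illustration of how the fixed camera and belt geometry can be exploited (this is not the lab's CQMS software), the sketch below uses OpenCV background subtraction on the static scene plus a single pixels-per-centimetre calibration factor, both of which are assumptions here, to extract candidate fish blobs and rough length estimates from belt footage.

    # Illustrative sketch only: static-camera background subtraction and a
    # hypothetical pixel-to-centimetre calibration for the fixed belt geometry.
    import cv2

    PIXELS_PER_CM = 12.0  # hypothetical calibration from the fixed camera-to-belt distance

    cap = cv2.VideoCapture("belt_footage.mp4")  # placeholder file name
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32, detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small specks
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 2000:  # ignore water splashes and debris
                continue
            (_, _), (w, h), _ = cv2.minAreaRect(c)  # oriented box around the blob
            length_cm = max(w, h) / PIXELS_PER_CM
            print(f"candidate fish, approx. length {length_cm:.1f} cm")

    cap.release()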

In this project we propose to take this research a step further. In the scenario described above, although fish are imaged in a very challenging environment, the fixed geometry of the belt, the fixed camera position and relatively stable indoor lighting provide constraints that can be exploited in subsequent video processing. Here, we propose to analyse videos captured in significantly less constrained environments. A typical scenario would involve video analysis of fish in an outdoor environment such as a port, a market, or a sorting table on board a small fishing vessel. In contrast to the previous scenario, where fixed and known cameras are available, here videos would often need to be captured by handheld devices, typically mobile phones. One of the major challenges to be faced is varying illumination. To succeed in this project, we will build on the most recent research in areas including depth sensing, photogrammetry, image fusion and machine (deep) learning.
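One classical way to mitigate varying illumination, shown here purely as an illustration rather than the method the project will necessarily adopt, is grey-world white balancing, which rescales each colour channel so that the average scene colour is neutral.

    # Minimal sketch of grey-world white balancing for frames captured under
    # uncontrolled outdoor lighting.
    import numpy as np

    def grey_world(image: np.ndarray) -> np.ndarray:
        """Normalise an HxWx3 image so each colour channel has the same mean."""
        img = image.astype(np.float32)
        channel_means = img.reshape(-1, 3).mean(axis=0)            # per-channel average
        gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
        return np.clip(img * gains, 0.0, 255.0).astype(image.dtype)

    # Example usage: balanced = grey_world(cv2.imread("port_frame.jpg"))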

Successful deep learning algorithms often require large datasets of images with ground-truth annotations – here, these would be fish species as well as weight and length annotations. These requirements are often so prohibitive that development of these algorithms must be initiated using synthetic data and later improved using captured data annotated by human experts. In this project, we plan to exploit this approach.
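A minimal sketch of that two-stage strategy might look as follows: the same detector is first trained on cheap synthetic data and then fine-tuned, with a smaller learning rate, on the smaller expert-annotated set. SyntheticFishDataset and AnnotatedFishDataset are hypothetical placeholders, assumed to yield (image, target) pairs in the standard torchvision detection format.

    # Hedged sketch of synthetic pre-training followed by fine-tuning on real data.
    # SyntheticFishDataset and AnnotatedFishDataset are hypothetical placeholders.
    import torch
    from torch.utils.data import DataLoader
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    def run_epochs(model, loader, optimizer, epochs, device="cpu"):
        model.train()
        for _ in range(epochs):
            for images, targets in loader:
                images = [img.to(device) for img in images]
                targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
                losses = model(images, targets)   # dict of detection losses
                loss = sum(losses.values())
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

    model = maskrcnn_resnet50_fpn(num_classes=2)  # background + fish

    # Stage 1: many epochs on cheap synthetic data with a larger learning rate.
    synthetic_loader = DataLoader(SyntheticFishDataset(), batch_size=2,
                                  collate_fn=lambda batch: tuple(zip(*batch)))
    run_epochs(model, synthetic_loader,
               torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9), epochs=20)

    # Stage 2: a few epochs on expert-annotated real images with a smaller learning rate.
    real_loader = DataLoader(AnnotatedFishDataset(), batch_size=2,
                             collate_fn=lambda batch: tuple(zip(*batch)))
    run_epochs(model, real_loader,
               torch.optim.SGD(model.parameters(), lr=5e-4, momentum=0.9), epochs=5)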

Mazvydas’s PhD project is being carried out in collaboration with CEFAS, under the primary supervision of Dr Michal Mackiewicz.