EPSRC Centre for Doctoral Training in Agri-Food Robotics: AgriFoRwArdS - Sean Chow

Sean Chow

  • University of East Anglia in collaboration with CEFAS

Research Interests

Computer Vision, Imaging, Deep Learning, AMR.

Activities and Outputs

  • Member of the AgriFoRwArdS CDT Annual Conference 2024 Programme Committee (March to July 2024).
  • Member of the AgriFoRwArdS CDT Student Panel (March 2023 to present).
  • Associate tutor for BSc and MSc modules at the University of East Anglia (2023/24).

About me

My longstanding interest in technology, especially computer science and engineering, has driven my research in computer vision and in the novel machine learning and AI frameworks that fuel innovation in robotic automation.

Previously, I collaborated with Siemens Rail Automation, where we introduced a GPS-R-powered automated signalling and tracking solution to reduce fatalities on the southwest railways. Additionally, as a former trainee and laboratory technical assistant at the State Key Laboratory of Marine Pollution (City University of Hong Kong), I contributed to data analysis for a super coral research project, where I first realised the critical role of robust computer vision systems in environmental research.

Before joining AgriFoRwArdS, I worked on various projects with AEL (HK), including initiatives for CLP Power. I also designed health and safety monitoring systems that kept worksites operational during COVID-19.

I am currently a PhD student in UEA’s world-leading Colour and Imaging Lab. This incredible opportunity gives me access to a wide range of experience and knowledge, and I hope to play a part in bringing advanced computing technology to industry.

MSc Project

Sean joined the CDT as a 4-year PhD student, having already completed the MSc in Robotics and Autonomous Systems at the University of Lincoln.

Enhancing In-Field Vision Systems Performance through Shadow-Invariant Footage Interpretation with cGAN

This research tackles a critical challenge in agricultural robotics (Agri-robotics): interpreting video footage under inconsistent lighting conditions. The proposed method enhances in-field vision systems through advanced shadow-invariant processing techniques that significantly reduce shadow-related complexities. Validated in experiments with an average PSNR of 22.03 dB and SSIM of 0.84, this open-source solution demonstrates potential for improving Agri-robotic performance, reliability, and adaptability across varied terrain. By mitigating shadow issues in real-time footage, the project aims to boost agricultural robots’ navigational and operational capabilities, promoting productivity, cost efficiency, and sustainable farming. The contribution seeks to foster future advances in agricultural computer vision, paving the way for precise, data-driven practices.
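
As an illustrative aside (not drawn from the project itself), the two reported metrics can be computed with scikit-image; the file names below are placeholders, and the 22.03 / 0.84 figures are averages over the project's test set, not a single frame.

# Illustrative sketch: PSNR and SSIM between a shadow-removed output frame
# and a shadow-free ground-truth frame. File paths are hypothetical.
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = io.imread("ground_truth_frame.png").astype(np.float64) / 255.0    # shadow-free reference
out = io.imread("cgan_output_frame.png").astype(np.float64) / 255.0    # shadow-removed result

psnr = peak_signal_noise_ratio(gt, out, data_range=1.0)
ssim = structural_similarity(gt, out, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.2f}")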

PhD Project

Beyond a Shadow of Doubt: Land Surveying in the Real World 

Understanding and interpreting visual data is a complex process extensively studied in both human and computer vision. While human vision excels in interpreting images despite varying conditions, computer vision systems, particularly in land surveying, struggle significantly with illumination challenges, such as shadows. This project aims to address these challenges by developing advanced image processing techniques to improve the accuracy of land surveys in shadowed environments.

The human visual system and computer vision systems share fundamental similarities, yet the latter lack the analytical capability of the human brain, leading to frequent misclassification of environmental features. In coastal surveying, comprehending vegetation, geomorphology, and ecological changes is crucial for conducting environmental assessments. Current algorithms perform well under direct sunlight but falter in shadowed conditions, often misclassifying the same substrate depending on whether it is exposed to or shaded from the sun. Building on existing shadow removal methods, this project aims to enhance the performance of in-field vision systems for agriculture and coastal surveying. In collaboration with CEFAS, the objective is to calibrate the vision system of the remotely piloted aircraft (RPA) and to develop shadow-invariant processing algorithms that effectively address shadow misclassification and maintain reliability under diverse weather and sunlight conditions.

The core of this research is the development of such algorithms to accurately detect structural, biological, and geomorphological details, revealing temporal and spatial changes obscured by shadows. Leveraging near-infrared (NIR) imaging, which can better penetrate shadows, promises significant advancements in visual data interpretation. The project will culminate in a prototype in-field vision system (Agri-robotics) capable of accurate environmental classification regardless of shadow presence.
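
As a hedged illustration (not the project’s algorithm), one simple way NIR data helps is through band ratios that are far less sensitive to illumination intensity than raw brightness, so a sunlit and a shaded patch of the same material yield similar values; the index below is essentially the NDVI used in remote sensing, and the reflectance numbers are invented for the example.

# Illustrative only: a normalized NIR/red index as a rough illumination-
# insensitive cue. Band ordering and reflectance values are assumptions.
import numpy as np

def nir_red_index(nir, red, eps=1e-6):
    """Normalized difference of NIR and red reflectance (essentially NDVI).
    A cast shadow scales both bands by a broadly similar factor, so the
    ratio shifts far less than raw intensity does."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical example: the same vegetation patch in sun and in shade.
sunlit = nir_red_index(np.array([[0.60]]), np.array([[0.10]]))
shaded = nir_red_index(np.array([[0.30]]), np.array([[0.05]]))
print(sunlit, shaded)   # both ~0.71 despite the ~2x drop in brightness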

Current progress includes developing a Hamiltonian-CNN approach aimed at improving shadow removal. Classical shadow removal methods often face challenges with computational efficiency and adaptability under variable lighting conditions, while deep learning-based methods, although powerful, can introduce artifacts that degrade image structural similarity during shadow mitigation. By combining the robustness of classical methods, such as the Hamiltonian path-based approach, with the computational efficiency and learning capabilities of convolutional neural networks, this hybrid approach seeks to leverage the strengths of both methodologies. It promises artifact-free results with improved efficiency, positioning the research at the forefront of land surveying technology.
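
A minimal sketch of how such a hybrid pipeline could be organised, assuming a classical stage that outputs a shadow mask (a crude luminance threshold stands in for the Hamiltonian path-based component, which is not reproduced here) feeding a small refinement CNN; every name, layer choice, and threshold below is illustrative, not the project’s actual implementation.

# Hypothetical sketch of a classical + CNN hybrid for shadow removal.
# The threshold mask is only a stand-in for the classical stage; the CNN
# is a toy refinement network, not the model used in the research.
import torch
import torch.nn as nn

def classical_shadow_mask(image: torch.Tensor, thresh: float = 0.35) -> torch.Tensor:
    """Crude luminance-threshold shadow mask of shape (B, 1, H, W)."""
    luminance = image.mean(dim=1, keepdim=True)      # average over RGB channels
    return (luminance < thresh).float()

class RefinementCNN(nn.Module):
    """Tiny CNN mapping (image + shadow mask) -> shadow-free estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, image, mask):
        residual = self.net(torch.cat([image, mask], dim=1))
        return torch.clamp(image + residual, 0.0, 1.0)  # predict a correction, not a whole image

# Usage with a random placeholder batch.
x = torch.rand(1, 3, 128, 128)
model = RefinementCNN()
restored = model(x, classical_shadow_mask(x))
print(restored.shape)   # torch.Size([1, 3, 128, 128])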

This research aims to push the boundaries of land surveying technology by addressing pervasive illumination issues and improving the accuracy of image-based surveys. The findings will be instrumental for agricultural and coastal applications, providing robust solutions to real-world surveying challenges.

Sean’s PhD project is being carried out in collaboration with CEFAS, under the primary supervision of Prof. Graham Finlayson.