David (Grey) Churchill
Open-source hardware and software.
Other Activities and Outputs
- Member of the AgriFoRwArdS CDT Equality, Diversity and Inclusion Panel.
- Took part in the AgriFoRwArdS Summer School 2021, resulting in a co-authored presentation at the AgriFoRwArdS CDT Annual Conference 2021: Automatic Detection of Black Rot in Images of Grapes (in collaboration with Mohammed Terry-Jack, Haihui Yan, YoonJu Cho, Callum Lennox, and Charalampos Matsantonis).
- Member of AgriFoRwArdS CDT Annual Conference 2022 discussion panel.
My long-term goal is to create and modify open-source systems that are accessible to as many people as possible.
Machine Learning for the Detection of Weeds among Sugar Beets
This project will create a vision system able to detect and localise weeds in images gathered from an RGB camera mounted on phenotyping robots. Previous work has produced a number of systems that can provide bounding boxes of weeds in images; however, localisation accuracy is rarely used as a metric during evaluation. The output of the system proposed by this project is intended to inform the use of herbicides, so localisation accuracy will be key to its success. The data used to train the model(s) will combine labelled images gathered from the University of Lincoln’s Riseholme campus and Campus Klein Altendorf in Bonn, Germany.
Machine learning-based vision for “green-on-green” spraying
The goal of intelligent spraying is to target herbicides more precisely, which reduces waste and benefits the environment. A key step in such spraying is identifying weeds. Typical approaches use computer vision, usually machine learning-based methods, operating on images taken by cameras that view weeds and crops from above.
Current vision technology has proved able to handle “green-on-brown” scenarios, detecting weeds accurately when weeds and crops are easy to spot against a very distinctly coloured background, such as soil. This is sufficient in the early stages of growth, when crops and weeds are small. However, in later stages of growth, as the canopies of crops and weeds begin to overlap, accurately and efficiently detecting weeds becomes much harder. This “green-on-green” scenario is currently beyond what can feasibly be handled, and solving the “green-on-green” weed detection problem is the focus of this PhD.
The reason that “green-on-green” is hard is that we cannot rely on simple colour segmentation. In the “green-on-brown” scenario, segmenting images into green and brown areas yields green regions with distinctive shapes that can easily be classified. (This is what is going on in existing detectors, whether they are based on classical machine vision or more modern deep learning approaches.) When plants overlap, the green regions no longer contain such distinctive shapes, or such large areas of them, and existing approaches to detection struggle as a result.
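To make the “green-on-brown” idea concrete, here is a minimal sketch of colour segmentation using the excess-green index (ExG = 2g − r − b), a common vegetation index. This is purely illustrative and not the method used in this project; the threshold and the toy pixel values are assumptions chosen for demonstration.

```python
import numpy as np

def green_mask(image, threshold=0.1):
    """Separate vegetation from soil with the excess-green index.

    image: H x W x 3 array of RGB values in [0, 1].
    Returns a boolean mask: True where a pixel looks green (vegetation),
    False where it looks brown (soil). The threshold is illustrative and
    would need tuning on real field images.
    """
    # Normalise each pixel by its channel sum so the index is
    # insensitive to overall brightness.
    totals = image.sum(axis=2, keepdims=True) + 1e-8
    r, g, b = np.moveaxis(image / totals, 2, 0)
    exg = 2.0 * g - r - b  # excess-green index per pixel
    return exg > threshold

# Toy example: one greenish "plant" pixel, one brownish "soil" pixel.
plant = np.array([[[0.2, 0.6, 0.1]]])
soil = np.array([[[0.5, 0.35, 0.2]]])
print(green_mask(plant)[0, 0], green_mask(soil)[0, 0])  # True False
```

Once plants overlap, a mask like this covers the whole canopy in one connected green region, which is exactly why colour alone stops being informative in the “green-on-green” case.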
The answer is to build detectors that look at more than colour alone. This PhD will pursue two lines of inquiry within the framework of deep learning-based vision: adding additional dimensions to the image data, and building detectors that focus on different features.