Callum’s research interests include robot vision, robot navigation and sensing, control of embedded systems, and computer vision and robotics more broadly.
- Darbyshire, M., Salazar-Gomes, A., Lennox, C., Gao, J., Sklar, E., and Parsons, S. (2022) ‘Localising Weeds Using a Prototype Weed Sprayer’, UKRAS 2022 proceedings.
- Gao, X., Xue, W., Lennox, C., Stevens, M., & Gao, J. (2023) ‘Advancing Early Detection of Virus Yellows: Developing a Hybrid Convolutional Neural Network for Automatic Aphid Counting in Sugar Beet Fields’, Preprint (August 2023).
- AgriFoRwArdS CDT Annual Conference 2021 (July 2021): Automatic Detection of Black Rot in Images of Grapes.
- AgriFoRwArdS CDT Annual Conference 2022 (June 2022): Synthetic Image Generation Pipeline for Weed Detection in Fields.
- The Towards Autonomous Robotic Systems (TAROS) Conference 2023 / CDT Annual Conference / Joint Robotics CDT Conference (September 2023): Real-time vision-based spot spraying development for high efficiency and precision weed management.
- Took part in the AgriFoRwArdS Summer School 2021 resulting in a co-authored presentation at AgriFoRwArdS CDT Annual Conference 2021: Automatic Detection of Black Rot in Images of Grapes (in collaboration with Mohammed Terry-Jack, Haihui Yan, YoonJu Cho, Grey Churchill, Charalampos Matsantonis)
- Worked on an AI Unleashed Robotics project with Prof Elizabeth Sklar
- Represented the CDT at the Douglas Bomford Trust bi-annual meeting (Mar 2022)
- UKRAS 2022 Robot Lab Live demonstration – robotic systems demonstrated in a strawberry polytunnel for two applications.
I studied for an MEng in Electronic Engineering at the University of Southampton, during which I also completed two placements at the University of Manchester. I am particularly excited about the industry links within the CDT, as I feel that industry input will provide specific direction and ensure practical application for research. I am attracted to the areas of robot vision and robot navigation and sensing, but also have a general interest in control of embedded systems, computer vision and robotics.
Synthetic Image Generation Pipeline for Weed Detection in Fields
This project involves building a pipeline that produces synthetic images of weeds and crops by taking existing images containing weeds/crops and transposing them into other images of fields. These synthetic images can be used for training alongside the original image dataset, greatly increasing the size of the datasets available for training machine learning models for weed detection. The main area of research for this project is the visual corrections needed to make a transposed weed/crop look like it belongs in the image it is placed into.
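The core compositing step could be sketched roughly as below. This is a minimal illustration, not the project's actual pipeline: the function name `composite_weed`, the alpha-mask representation, and the simple mean-colour shift (a stand-in for the "visual corrections" the project investigates) are all assumptions for the sake of the example.

```python
import numpy as np

def composite_weed(field, weed, mask, x, y, tone_match=True):
    """Paste a weed cutout into a field image at pixel (x, y).

    field: H x W x 3 uint8 background image
    weed:  h x w x 3 uint8 cutout of a weed/crop
    mask:  h x w float alpha mask in [0, 1] (1 = weed pixel)
    tone_match: shift the cutout's mean colour toward the patch it
    covers, a crude proxy for making the weed "belong" in the scene.
    """
    out = field.astype(np.float32).copy()
    h, w = mask.shape
    patch = out[y:y + h, x:x + w]
    weed_f = weed.astype(np.float32)
    if tone_match:
        # match the cutout's per-channel mean to the local background
        weed_f = weed_f + patch.mean(axis=(0, 1)) - weed_f.mean(axis=(0, 1))
    alpha = mask[..., None]
    # alpha-blend the cutout over the background patch
    out[y:y + h, x:x + w] = alpha * weed_f + (1 - alpha) * patch
    return np.clip(out, 0, 255).astype(np.uint8)

# toy example: uniform grey "field", uniform green "weed" blob
field = np.full((64, 64, 3), 120, dtype=np.uint8)
weed = np.zeros((16, 16, 3), dtype=np.uint8)
weed[..., 1] = 200
mask = np.ones((16, 16), dtype=np.float32)
synthetic = composite_weed(field, weed, mask, x=10, y=10, tone_match=False)
```

A real pipeline would additionally handle lighting, scale, perspective, and shadowing, which is precisely where the project's research interest lies.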
Real-time vision-based spot spraying development for high efficiency and precision weed management
Spot spraying, as a spatially variable weed management strategy, targets only weed species in fields to minimise the use of chemicals. Commercially available technologies based on sensing of vegetation optical properties are typically limited to detecting weeds against a soil background (i.e. greenness detection on bare soil) and are not suitable for detecting weeds among a growing crop. A vision-based spot spraying system enables discrimination between vegetation species. One of the key components of vision-based spot spraying development is a reliable and robust weed/crop discrimination model. Traditionally, the development of such a model relies heavily on image analysis using prior knowledge of defined colour, texture and morphology features that distinguish weed from crop, but this approach may fail to generalise across different crop fields with multiple weed species. Recent advances in machine learning and computer vision provide new opportunities to develop a robust and reliable vision-based weed/crop discrimination model under unstructured field conditions.
The main objectives for this project are as follows: to build image libraries containing weeds and crops using various methods of data augmentation and data collection; to develop a deep learning model to detect the presence of weeds/crops in an image; and to integrate this deep learning model onto a physical system that uses a spray boom to spray weeds.
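The final integration step, mapping per-frame weed detections to spray-boom nozzle commands, might look roughly like the following sketch. The nozzle layout, confidence threshold, and the function `nozzles_to_fire` are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def nozzles_to_fire(detections, image_width_px, n_nozzles, conf_threshold=0.5):
    """Map weed detections in one camera frame to boom nozzle commands.

    detections: list of (x_centre_px, confidence) pairs, as might come
    from a weed/crop discrimination model (assumed format).
    Returns a boolean array: True where a nozzle should open.
    """
    fire = np.zeros(n_nozzles, dtype=bool)
    nozzle_width = image_width_px / n_nozzles  # pixels covered per nozzle
    for x_px, conf in detections:
        if conf < conf_threshold:
            continue  # ignore low-confidence weed detections
        idx = min(int(x_px // nozzle_width), n_nozzles - 1)
        fire[idx] = True
    return fire

# two confident weed detections and one uncertain one, 640 px wide frame
dets = [(50.0, 0.9), (600.0, 0.8), (300.0, 0.3)]
commands = nozzles_to_fire(dets, image_width_px=640, n_nozzles=8)
```

In a real system this step would also compensate for the travel delay between the camera's field of view and the nozzles, which depends on vehicle speed.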