Samuel’s research interests include indoor farming, agricultural automation, and mobile robotics.
- Learning robot navigation from demonstrations @ AgriFoRwArdS CDT Annual Conference 2022 (June 2022)
- Represented the CDT at the University of Lincoln’s British Science Week school outreach event (March 2022).
- Boston College visit to Riseholme – robotics demonstrations (June 2022)
- Project featured in Hackaday Magazine.
- Project received an official CERN-OHL-W (open hardware) license.
- Entered the Pi-Wars competition.
My name is Samuel; I am 24 years old and from Maidenhead. My background is in mobile robotics. My MEng project involved 3D autonomous navigation, and I have worked with mobile robots as an intern at Fox Robotics Ltd and Ross Robotics Limited. I believe that software is the core of robotics.
I have chosen to join the AgriFoRwArdS CDT because I want to confront and overcome the demanding challenges of self-sustainability. I was impressed by the University of Lincoln’s world-leading involvement in agri-robotics research and am looking forward to working at the cutting edge of this technology.
One of the areas of research I’m particularly interested in is automated indoor growing, inspired by projects such as the Eden Project. I think there is a future in developing and converting non-farmland into biodomes with the capacity to grow exotic produce.
A fun fact about me is that I’ve had an 11-year career as a dancer doing tap and ballet. I’ve been an associate of the Royal Ballet School and performed in Sleeping Beauty at the Royal Opera House in Covent Garden. I hope one day to acquire my own farm that grows food using robots.
Learning robot navigation from demonstrations
Autonomous navigation of mobile robots is a challenging and complex task. Despite considerable progress, existing approaches are domain-specific and demand significant development effort, so a human operator typically remains in charge of controlling a mobile robot’s movements. This project studies learning from demonstrations (LfD), specifically deep Movement Primitives, for the navigation of a non-holonomic mobile robot. It aims to map visual information into robot movement trajectories and to generalise beyond the demonstrated cases. Simulated experiments on a mobile robot will assess the feasibility of the LfD algorithms and compare their performance, measured by adaptability to new goal positions and via-points, obstacle avoidance, and trajectory smoothness.
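Deep Movement Primitives build on classical Dynamic Movement Primitives (DMPs), which encode a demonstrated trajectory as a spring–damper system plus a learned forcing term. As an illustrative sketch only (a one-dimensional classical DMP with conventional default parameters, not the project’s actual model), the following fits a DMP to a demonstration and then adapts it to a new goal position, one of the adaptation criteria mentioned above:

```python
import numpy as np

def dmp_fit(demo, dt, n_basis=30, alpha_z=25.0, alpha_x=4.0):
    """Fit the forcing-term weights of a 1-D discrete DMP to a demonstration."""
    beta_z = alpha_z / 4.0                       # critically damped spring
    T = len(demo) * dt                           # movement duration (tau = T)
    t = np.arange(len(demo)) * dt
    x = np.exp(-alpha_x * t / T)                 # canonical system, 1 -> ~0
    y0, g = demo[0], demo[-1]
    yd = np.gradient(demo, dt)
    ydd = np.gradient(yd, dt)
    # Forcing term the demonstration implies for the transformation system
    f_target = T**2 * ydd - alpha_z * (beta_z * (g - demo) - T * yd)
    # Gaussian basis functions spread evenly in canonical time
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    h = n_basis**1.5 / c / alpha_x
    psi = np.exp(-h * (x[:, None] - c)**2)       # (len(demo), n_basis)
    s = x * (g - y0)                             # forcing-term scale
    # Locally weighted regression, one weight per basis function
    w = np.array([np.sum(s * psi[:, i] * f_target) /
                  (np.sum(s**2 * psi[:, i]) + 1e-10) for i in range(n_basis)])
    return dict(w=w, c=c, h=h, alpha_z=alpha_z, beta_z=beta_z,
                alpha_x=alpha_x, T=T, y0=y0)

def dmp_rollout(p, g, dt, n_steps):
    """Integrate the fitted DMP towards a (possibly new) goal g."""
    y, yd, x = p["y0"], 0.0, 1.0
    out = []
    for _ in range(n_steps):
        psi = np.exp(-p["h"] * (x - p["c"])**2)
        f = x * (g - p["y0"]) * (psi @ p["w"]) / (psi.sum() + 1e-10)
        ydd = (p["alpha_z"] * (p["beta_z"] * (g - y) - p["T"] * yd) + f) / p["T"]**2
        yd += ydd * dt
        y += yd * dt
        x += -p["alpha_x"] * x / p["T"] * dt     # canonical system decay
        out.append(y)
    return np.array(out)

# Demonstration: a smooth minimum-jerk-like reach from 0 to 1
dt, n = 0.01, 200
tau = np.linspace(0, 1, n)
demo = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
params = dmp_fit(demo, dt)
replay = dmp_rollout(params, g=1.0, dt=dt, n_steps=n)    # reproduce the demo
shifted = dmp_rollout(params, g=1.5, dt=dt, n_steps=n)   # same shape, new goal
```

Because the goal appears explicitly in the spring–damper term, changing `g` adapts the whole trajectory while preserving the demonstrated shape; this goal-adaptation property is one reason movement-primitive formulations suit LfD.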
Learning robot navigation and manipulation from demonstrations
Humans teleoperate machines to perform mobile navigation and manipulation tasks. Current autonomous approaches are domain-specific, so human operators remain in charge of their robots’ movements. This research studies learning from demonstrations (LfD) so that the same teleoperated machines can be transformed to perform autonomously. The proposed research involves taking quantitative control data from human demonstrations. While a robot is learning a task from a demonstration, it must separate useful task information from the noise in the control data. The research extends beyond merely replaying a demonstration to adapting the execution of the task to variations in the robot’s environment.
LfD methods for mobile robots and mobile manipulators already exist; however, they do not generalise the task and depend on the robot’s system dynamics being known. They also rely on expensive sensors such as LIDAR rather than cameras. The LfD methods I will research and implement on mobile robots and mobile manipulators are inspired by Dr Amir Ghalamzan’s work on manipulator robots. However, simply remapping the existing manipulator models onto mobile robots and mobile manipulators will not be enough to make these robots fully autonomous.
I will also investigate state-of-the-art deep learning methods so that the robots not only mimic the demonstrated task but can generate their own ways of emulating demonstrations and incorporate these when learning the task. The idea is to improve the execution of the task and to generalise independently of the robot’s domain. The intended outcome of the research is a computationally efficient and effective method for implementing autonomy on mobile machines. Because LfD lets robots be re-programmed through human demonstration, it will increase the level of robot adaptation, as robot experts will not be required to continually reprogram the robots.
Samuel’s PhD project is being carried out in collaboration with 2 Sisters Food Group, with primary supervision by Dr Amir Ghalamzan Esfahani.