Fleet Management, Robot Vision, Robotic Mapping, Robot Task Planning, Robot Navigation, Swarm Robotics, Agri-Robotics
- Nihla, M.I.F., et al. (2022) ‘ANDTi Virtual Assistant’, 2022 2nd International Conference on Image Processing and Robotics (ICIPRob).
- Human detection and body posture recognition for human-robot collaborative applications @ The Towards Autonomous Robots and Systems (TAROS) Conference 2023 / CDT Annual Conference / Joint Robotics CDT Conference (September 2023)
Other activities and outputs
- Engagement with the Innovate UK Robot Highways project developing a smart trolley device (February 2023)
I am from Sri Lanka. In my spare time, I like to sing songs and read classics.
I am joining the CDT because I am excited to carry out studies related to the emerging trend of using robotics in agriculture, and I believe that the CDT could lay a strong foundation for my future career.
I am looking forward to moving to Lincoln because it is a fantastic place to live and study in the UK, offering a perfect mix of outstanding natural beauty and a lively atmosphere.
I will be studying my PhD at the University of Lincoln.
I chose to study my PhD at Lincoln because it offers a supportive, study-friendly environment, and I cannot think of a better place to study than the University of Lincoln.
My career goal is to pursue further research within my research area and to work in academia. Before joining the CDT, I was working as a research assistant at the Sri Lanka Institute of Information Technology (SLIIT), Sri Lanka.
A fun fact about me is that my retirement plan would be to live in a little cottage by a calm river and to be an old cat lady.
Human detection and body posture recognition for human-robot collaborative applications
Accurate and efficient human detection and body posture recognition are crucial factors that facilitate the safe and productive operation of collaborative robots in applications such as industrial manufacturing, healthcare, and agriculture. This work focuses on developing a human sensing framework that utilizes Red Green Blue (RGB) images from a Three-Dimensional (3D) camera mounted on a mobile robot navigating environments shared with human co-workers. The framework generates bounding box detections of humans, followed by skeleton extraction within the detected bounding boxes. The system is able to recognize various human postures relevant for further activity inference. An important feature of this system is scalability, which allows the human sensing framework to receive images from multiple cameras simultaneously as input without excessively increasing the computational cost. Future work will comprise enhancing the system not only to detect humans in the camera frame but also to determine and track their positions in the 3D world. This will later be used to integrate human-aware motion planning into the robot.
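The pipeline described above (person detection, then skeleton extraction inside each bounding box, then posture recognition, with one shared framework serving multiple cameras) can be sketched as follows. The detector, pose estimator, and posture rule below are purely illustrative stand-ins, since the abstract does not name the models used; only the pipeline shape is taken from the text.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# NOTE: all detector logic here is a hypothetical placeholder. A real system
# would plug in trained models (e.g. a deep person detector and a 2D pose
# estimator); this sketch only shows the data flow of the framework.

@dataclass
class BoundingBox:
    x: int
    y: int
    w: int
    h: int

@dataclass
class Skeleton:
    joints: Dict[str, Tuple[int, int]]  # joint name -> (x, y) pixel position

def detect_humans(frame) -> List[BoundingBox]:
    """Placeholder person detector: returns one fixed box per frame."""
    return [BoundingBox(10, 20, 50, 120)]

def extract_skeleton(frame, box: BoundingBox) -> Skeleton:
    """Placeholder 2D pose estimator restricted to the detected box."""
    cx, cy = box.x + box.w // 2, box.y + box.h // 2
    return Skeleton(joints={"head": (cx, box.y), "hip": (cx, cy)})

def classify_posture(skel: Skeleton) -> str:
    """Toy posture rule: a tall head-to-hip span reads as 'standing'."""
    head_y = skel.joints["head"][1]
    hip_y = skel.joints["hip"][1]
    return "standing" if (hip_y - head_y) > 30 else "crouching"

def process_frames(frames_by_camera: Dict[str, object]) -> Dict[str, List[str]]:
    """One shared pipeline serves every camera, so adding a camera adds only
    per-frame work rather than duplicating the whole detection stack."""
    results: Dict[str, List[str]] = {}
    for cam_id, frame in frames_by_camera.items():
        postures = []
        for box in detect_humans(frame):
            skel = extract_skeleton(frame, box)
            postures.append(classify_posture(skel))
        results[cam_id] = postures
    return results

print(process_frames({"cam_front": None, "cam_rear": None}))
# → {'cam_front': ['standing'], 'cam_rear': ['standing']}
```

Because every stage takes plain frames and returns plain data, the same `process_frames` entry point scales to any number of cameras, matching the scalability property the abstract emphasises.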
Human sensing based on affordable sensors for collaborative robotics in agricultural scenarios
Although most agricultural robots are considered small/medium-sized machinery, they still pose a risk of injuring human operators/collaborators, especially in situations where the robots are not aware of human presence or intentions.
Thus, to accelerate the deployment of robots on industrial-scale farms, it is crucial to develop a reliable human sensing methodology capable of ensuring safe and efficient human-robot interaction. Most existing solutions for human sensing have been developed for robotic applications in controlled environments or require expensive sensors (>£10k each). In contrast, this proposal aims to develop a human sensing system based on affordable sensors (<£1k each), making the proposed solution suitable for adoption by agri-robotic companies as a cost-effective safety solution.
To accomplish this goal, the work plan is divided into two parts. The first part is the core of the proposal and focuses on developing the human sensing system. This system will fuse information collected from multiple sensors with different features. The solution will be able to dynamically adjust the human detection performance according to lighting and occlusion conditions, the agricultural environment context, and the robot's energy-saving requirements. The human sensing will cover not only tracking human position but also inferring human intentions based on motion prediction, human-object interactions, and the actions being performed.
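The dynamic multi-sensor fusion idea above can be illustrated with a minimal sketch: each sensor's contribution to a fused human-presence score is reweighted according to the lighting condition, so that, for instance, a thermal camera dominates in low light. The sensor names, weight tables, and scoring scheme are assumptions for illustration only, not the project's actual design.

```python
# Hypothetical sensor suite and base confidence weights (illustrative values).
BASE_WEIGHTS = {"rgb_camera": 0.6, "thermal_camera": 0.25, "lidar": 0.15}

# Multiplier applied to each sensor's base weight per lighting condition:
# RGB degrades at night while thermal becomes more trustworthy.
LIGHTING_FACTORS = {
    "daylight":  {"rgb_camera": 1.0, "thermal_camera": 0.5, "lidar": 1.0},
    "low_light": {"rgb_camera": 0.3, "thermal_camera": 1.5, "lidar": 1.0},
}

def fuse_detections(scores: dict, lighting: str) -> float:
    """Weighted average of per-sensor human-presence scores in [0, 1],
    with weights renormalised after the lighting adjustment."""
    weights = {s: BASE_WEIGHTS[s] * LIGHTING_FACTORS[lighting][s]
               for s in scores}
    total = sum(weights.values())
    return sum(scores[s] * weights[s] / total for s in scores)

# At night, a confident thermal reading outweighs a noisy RGB one.
night = fuse_detections(
    {"rgb_camera": 0.2, "thermal_camera": 0.9, "lidar": 0.6}, "low_light")
day = fuse_detections(
    {"rgb_camera": 0.9, "thermal_camera": 0.9, "lidar": 0.6}, "daylight")
print(round(night, 2), round(day, 2))
```

The same reweighting pattern could extend to occlusion level or energy budget by adding further factor tables, which is one simple way to realise the "dynamic adjustment" the proposal describes.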
Once the first part is completed, the second part will close the loop by using the information from the human sensing system to feed the robot's decision-making system. Thus, robot actions will depend not only on the agricultural task but will also consider human presence and inferred intentions. The performance of this human-aware navigation will be validated experimentally, demonstrating its potential to be deployed on different robotic platforms and in different agricultural environments.
In this project, the student will acquire technical skills such as robot programming, sensor integration, computer vision techniques, machine learning, human-robot interaction, and scientific writing.