MSc Project Plan
Title
Automatic counting of ducks passing through a gate using PIR sensors and camera systems
Abstract (maximum 100 words)
This project presents the design and implementation of an automated duck counting system aimed at supporting intelligent poultry management. The system integrates a passive infrared (PIR) motion sensor with a computer vision module, enabling counting of ducks passing through a gate. To enhance robustness and accuracy, image processing techniques and lightweight deep learning models are employed to handle challenges such as occlusion, motion blur, and variable lighting. The experiments are expected to demonstrate the system’s effectiveness in achieving high counting accuracy with low latency. The proposed solution offers a scalable, cost-efficient approach for modern farming.
Aims of the Project (maximum 100 words).
The goal of this system is to automatically and accurately count ducks as they pass through a designated gate, using a combination of passive infrared (PIR) sensing and computer vision techniques. It aims to reduce manual labour, improve data accuracy, and support intelligent livestock management. To evaluate the system’s effectiveness, experiments will be conducted in a controlled farm-like environment. The testing will involve tracking duck movements under various lighting and motion conditions, and comparing the system’s count to ground truth values obtained through manual annotation. Performance will be assessed based on accuracy, latency, and robustness to environmental disturbances.
Significance of topic: (maximum 100 words).
The automation of livestock farming activities has become a key driver of modern agricultural transformation. By integrating embedded systems, computer vision, and IoT technologies, automated systems enable real-time monitoring, improve efficiency, and reduce reliance on labour. This is particularly important in addressing global challenges such as rising food demand, labour shortages, and the need for sustainable farming practices. Moreover, automation supports data-driven decision-making and improves animal welfare through early detection of anomalies [1][2]. As precision livestock farming continues to evolve, the development of intelligent, low-cost automation solutions holds great potential for improving both productivity and animal care in large-scale operations [3].
Science/Engineering context: (maximum 150 words).
The automated duck counting system faces several scientific and engineering challenges. Scientifically, occlusion, motion blur, and variable lighting are major hurdles. Occlusion, where ducks overlap or are partially obscured, complicates detection [4]. Motion blur, caused by fast movement, can degrade the clarity of visual data [5], while variable lighting conditions, such as fluctuating light levels, shadows, and reflections, create inconsistencies in object recognition [6]. On the engineering side, the system must deal with false motion triggers from environmental factors like wind or lighting changes. It also needs to maintain real-time, accurate detection while operating on embedded platforms with limited resources [7]. Low-latency inference is crucial, along with ensuring stable coordination between sensors and processing units under real-world conditions [8]. Addressing these challenges is key to ensuring the system’s efficiency and reliability in practical applications.
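One common way to suppress false PIR triggers from wind or lighting changes is to confirm motion only when several pulses arrive within a short window. The sketch below illustrates this debouncing idea in plain Python; the function name and thresholds are illustrative choices, not part of the proposed design, and the actual GPIO wiring is omitted.

```python
from collections import deque

def make_debouncer(min_triggers=3, window_s=1.0):
    """Return a function that confirms motion only when at least
    `min_triggers` PIR pulses arrive within a sliding `window_s`-second
    window, filtering out isolated false triggers (e.g. wind, shadows)."""
    pulses = deque()

    def confirm(timestamp):
        pulses.append(timestamp)
        # Drop pulses that have fallen outside the sliding window.
        while pulses and timestamp - pulses[0] > window_s:
            pulses.popleft()
        return len(pulses) >= min_triggers

    return confirm

confirm = make_debouncer(min_triggers=3, window_s=1.0)
print(confirm(0.0))   # False: a single pulse is ignored
print(confirm(0.2))   # False: two pulses, still below threshold
print(confirm(0.4))   # True: three pulses within 1 s confirm motion
print(confirm(5.0))   # False: earlier pulses have expired
```

On real hardware the timestamps would come from the PIR interrupt handler; the pure-function form here simply makes the filtering logic easy to test in isolation.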
Relevant literature: (maximum 150 words).
An edge-computing poultry monitoring system using YOLOv10 on an Orange Pi 5B was proposed to count chickens in real time with over 93% accuracy under varying environmental conditions [9]. A related approach developed LC-DenseFCN for chicken counting from surveillance footage using point supervision, addressing occlusion and light variation challenges [10]. For pig monitoring, an embedded board system called EmbeddedPigCount used a lightweight YOLOv4 model and a simplified LightSORT tracker to count pigs passing through a hallway with low computational cost [11]. A computer vision-based livestock monitoring system was developed to identify and track specific behaviours of individual nursery pigs within a group-housed environment, enabling continuous behavioural analysis [12]. Additionally, a deep learning approach was applied for the detection of dairy cows in free-stall barns, utilising computer vision techniques to accurately identify individual cows based on their morphological appearance [13].
Design of solution (maximum 200 words).
The automated duck counting system is structured as an integrated solution consisting of three core functional components: a sensing module, a vision module, and a data processing and control module. The system initiates operation when the sensing module, typically composed of passive infrared (PIR) motion sensors, detects movement near the gate area. This detection triggers the vision module, which captures image or video data in real time through cameras placed in suitable locations to cover the passage zone. The vision module is also responsible for performing basic image processing tasks, such as filtering, background subtraction, and region extraction, to enhance the quality and relevance of the captured data. The processed visual data is then passed to the data processing and control module, which acts as the computational core of the system. This module employs deep learning-based object detection models to identify and count individual ducks, and handles system coordination tasks such as managing sensor input, controlling capture events, storing results, and transmitting data to external platforms. Through the seamless collaboration of these components, the system enables accurate, efficient, and automated monitoring of duck passage in real-world poultry farming scenarios.
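The three-module flow above can be sketched as a single trigger-capture-detect cycle. In this illustrative skeleton the sensor, camera, and detector are passed in as callables so that the hypothetical hardware and model details can be swapped without changing the control flow; the per-event aggregation shown (taking the maximum per-frame count) is one simple assumption, not the project's final counting rule.

```python
def count_passage_event(motion_detected, capture_frames, detect_ducks):
    """Run one sensing -> vision -> processing cycle and return a count."""
    if not motion_detected():          # sensing module: PIR trigger
        return 0
    frames = capture_frames()          # vision module: grab frames
    counts = [detect_ducks(f) for f in frames]   # processing module
    # Take the per-event maximum so a duck visible across several
    # frames is not counted more than once within one event.
    return max(counts, default=0)

# Stubbed example: motion fires; three frames show 1, 2, and 2 ducks.
total = count_passage_event(
    motion_detected=lambda: True,
    capture_frames=lambda: ["f1", "f2", "f3"],
    detect_ducks={"f1": 1, "f2": 2, "f3": 2}.get,
)
print(total)  # 2
```

Keeping the modules behind plain callables mirrors the design's separation of sensing, vision, and processing, and lets each be tested with stubs before hardware integration.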
Method of solution (maximum 200 words).
This system is built upon a modular architecture consisting of sensing, vision, and data processing components, all integrated on a Raspberry Pi 5 platform. The sensing module employs a passive infrared (PIR) sensor to detect motion within a predefined range. Upon detecting movement, it sends a trigger signal to activate the camera, thereby reducing unnecessary computation and ensuring energy-efficient operation. The vision module, powered by the Raspberry Pi Camera Module 3, captures images or short video clips when triggered. To enhance image quality under variable lighting conditions, basic preprocessing techniques such as image resizing, color normalization, and optional histogram equalization are applied prior to further analysis. In the data processing module, the preprocessed frames are fed into a lightweight object detection model such as MobileNet-SSD or YOLOv5-Nano, selected for their balance between accuracy and real-time inference capability on resource-constrained devices. Postprocessing includes non-maximum suppression and object tracking (e.g., using SORT or Deep SORT) to reduce false detections and ensure each duck is counted only once. Count data is logged locally and can optionally be synchronized to a remote server via Wi-Fi for further analysis or integration with farm management platforms.
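The tracking step that ensures each duck is counted only once can be reduced to a virtual line-crossing test over tracker output. The sketch below assumes per-frame centroid y-positions keyed by track ID, as a tracker such as SORT would supply; the function name, toy coordinates, and gate position are illustrative only.

```python
def update_counts(prev_y, detections, line_y, counted):
    """detections: {track_id: centroid_y}. Count a track once when its
    centroid crosses the virtual gate line at y = line_y."""
    crossings = 0
    for tid, y in detections.items():
        # Count only on the frame where the centroid first crosses the
        # line, and never count the same track ID twice.
        if tid in prev_y and prev_y[tid] < line_y <= y and tid not in counted:
            counted.add(tid)
            crossings += 1
        prev_y[tid] = y
    return crossings

prev_y, counted, total = {}, set(), 0
# Toy frame-by-frame centroid positions for two tracked ducks.
for frame in [{1: 80, 2: 90}, {1: 105, 2: 95}, {1: 110, 2: 120}]:
    total += update_counts(prev_y, frame, line_y=100, counted=counted)
print(total)  # 2: each duck crosses the gate line exactly once
```

Combining per-ID state with a single crossing condition is what prevents double counting when a duck lingers near the gate across many frames.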