Digital Transformation in Livestock Management

*Header image generated by artificial intelligence.

Precision livestock farming has emerged as an innovative solution to the challenges faced by traditional farming methods in monitoring livestock health and improving operational efficiency. Today, Serket offers a cutting-edge solution for pig farmers to further enhance their precision farming practices.

Serket leverages computer vision and artificial intelligence to help pig farmers improve the welfare of their livestock and the efficiency of their operations. Our software solution, Pig Guard, employs image recognition and deep learning algorithms to detect sick animals early and alert farmers to make timely interventions.

With Pig Guard, farmers can take a proactive approach to managing their livestock. The software streamlines feeding plans, labor hours, and operational structure, enhancing animal health and farm profitability.

Serket’s Computer Vision: The Eyes that Never Sleep

Serket is a game-changing technology that aims to revolutionize the livestock industry. By leveraging computer vision and AI, Serket enables farmers to monitor and analyze animal behavior in real-time.

With this advanced behavioral analysis, farmers can receive real-time warnings before the risk of disease or infection becomes a serious concern. This "predictive maintenance" approach not only improves animal welfare, but can also decrease the amount of antibiotics used in farms. Ultimately, this leads to a decrease in the amount of antibiotics that reach humans through meat or other channels in the agricultural food chain.

Serket identifies the daily rhythm and abnormal patterns in pig behavior, enabling tailored daily operations on the farm that meet the animals' needs and further improve their welfare. Overall, Serket represents an exciting new development in the livestock industry that promises to improve animal welfare and reduce the impact of farming on human health, for example by reducing antibiotic usage.

Overcoming Challenges in Continuous Monitoring of Livestock

Successfully monitoring the behavior of individual pigs is a complex process that involves uniquely identifying each animal in video frames, tracking its movements, and classifying its behavior as one of six possibilities: eating, drinking, standing, walking, lying, or aggression (as depicted in the figure below).

Figure 1: Nursery pigs being monitored by computer vision
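To make the six-way classification concrete, here is a toy rule-based classifier. The feature names and thresholds are hypothetical simplifications of what a learned model infers from video, not Serket's actual model:

```python
from enum import Enum

class Behavior(Enum):
    EATING = "eating"
    DRINKING = "drinking"
    STANDING = "standing"
    WALKING = "walking"
    LYING = "lying"
    AGGRESSION = "aggression"

def classify_behavior(speed, near_feeder, near_drinker, body_low, contact_bites):
    """Toy rule-based stand-in for the learned behavior classifier.
    All inputs are hypothetical per-frame features; in practice these
    signals come from detection, tracking, and pose estimation."""
    if contact_bites:               # biting contact with another pig
        return Behavior.AGGRESSION
    if near_feeder and speed < 0.1:  # stationary at the feeder
        return Behavior.EATING
    if near_drinker and speed < 0.1:  # stationary at the drinker
        return Behavior.DRINKING
    if body_low:                    # body close to the floor
        return Behavior.LYING
    if speed > 0.3:                 # moving across the pen
        return Behavior.WALKING
    return Behavior.STANDING
```

A real system replaces these hand-written rules with a deep network, but the output space is the same six labels.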

Serket's advanced technology is based on deep learning models for computer vision. While these models are incredibly accurate, they require vast amounts of data to train properly. When we began this project, there were no suitable datasets available for our needs on pig farms. Therefore, we curated one of the largest and most diverse datasets from pig farms around the world.

It is crucial to note that deep learning models (or any machine learning models) must be trained on datasets that reflect the diversity found in the real world. For instance, our training data consists of images from various camera angles, lighting conditions, pig sizes and colors, feeder and drinker configurations, and image resolutions. We also focus on collecting video samples from different genetic types, ranging from traditional breeds to hybrids. This ensures that the models we train are resilient enough to handle these changes when deployed across various farms.
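One common way to cover this kind of real-world variation, beyond collecting diverse footage, is to randomize imaging conditions during training. The sketch below samples an augmentation configuration; the parameter ranges are illustrative assumptions, not Serket's actual training settings:

```python
import random

def sample_augmentation(seed=None):
    """Sample one augmentation configuration to mimic farm-to-farm
    variation (lighting, camera mounting, pig size, resolution).
    Ranges are illustrative, not actual training hyperparameters."""
    rng = random.Random(seed)
    return {
        "brightness": rng.uniform(0.5, 1.5),    # dim barns vs daylight
        "rotation_deg": rng.uniform(-15, 15),   # camera mounting angle
        "scale": rng.uniform(0.7, 1.3),         # pig size / camera height
        "resolution": rng.choice([(640, 480), (1280, 720), (1920, 1080)]),
    }
```

Each training image would then be transformed according to one such sampled configuration, so the model never sees only one camera setup.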

Figure 2: Using masks to track pig behavior

Tracking Pig Behavior: A Journey of Innovation

Tracking pig behavior is a complex task that requires careful consideration of various factors. Initially, we employed a simple approach: we took the segmentation masks of all pigs from two consecutive frames and measured their overlap to propagate each pig's ID to the next frame. While this approach worked very well with a high sampling rate and no missed detections or false positives, it was computationally expensive and difficult to implement in real-world conditions, so we switched to pose estimation.
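The overlap-based ID propagation described above can be sketched in a few lines. This is a minimal greedy version using intersection-over-union (IoU) on masks represented as pixel sets; the threshold value is an illustrative assumption:

```python
def mask_iou(mask_a, mask_b):
    """IoU between two binary masks given as sets of (row, col) pixels."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

def propagate_ids(prev_masks, curr_masks, iou_threshold=0.5):
    """Greedy assignment: each current-frame mask inherits the ID of the
    previous-frame mask it overlaps most, if the IoU clears the threshold.
    prev_masks: {pig_id: mask}; curr_masks: list of masks."""
    assignments = {}
    used = set()
    for curr_idx, curr in enumerate(curr_masks):
        best_id, best_iou = None, iou_threshold
        for pid, prev in prev_masks.items():
            if pid in used:
                continue
            iou = mask_iou(prev, curr)
            if iou > best_iou:
                best_id, best_iou = pid, iou
        if best_id is not None:
            assignments[curr_idx] = best_id
            used.add(best_id)
    return assignments
```

The sketch also shows why the method is fragile: a single missed detection breaks the chain of overlaps, and storing and intersecting full masks for every pig is costly on an edge device.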

We switched from masks to pose estimation for two primary reasons:

(i) Pose estimation was faster, and the memory required on the edge device was minimal. With masks, we had to store the whole mask for each pig on the device; when the number of pigs per pen exceeds 25 or 30, the memory required frequently surpasses the capacity of smaller edge devices. With poses, we only need to store about 15 points per pig, a much smaller memory footprint.

(ii) Poses also provide information about the pig's anatomy, such as the positions of the mouth, head, ears, and tail. This information is invaluable for robustly estimating the animal's activity.
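A back-of-the-envelope calculation shows the scale of the saving in (i). The 256x256 mask crop size is an illustrative assumption, not Serket's actual resolution:

```python
# Rough per-pen memory comparison: full masks vs. 15-keypoint poses.
# The 256x256 mask crop is an illustrative assumption.

def mask_bytes(height, width):
    """One byte per pixel for a binary segmentation mask crop."""
    return height * width

def pose_bytes(n_keypoints=15):
    """Each keypoint stored as x, y, and a confidence score (float32)."""
    return n_keypoints * 3 * 4

pigs_per_pen = 30
mask_total = pigs_per_pen * mask_bytes(256, 256)  # about 2 MB per pen
pose_total = pigs_per_pen * pose_bytes()          # 5,400 bytes per pen
```

Even under these generous assumptions, the pose representation is hundreds of times smaller, which is what makes it viable on small edge devices.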

Figure 3: Using pose estimation to track pig behavior

By identifying specific actions performed by pigs within a certain timeframe, such as biting, we can trigger messages to the farmer. Identifying anomalous behavior, such as illness, posed a more challenging task. To tackle it, we developed a method for detecting anomalous behavior through long-term action transitions: by modeling how pigs' actions transition over time with recurrent networks and matching these patterns against prior knowledge of anomalous pig behavior, we can trigger relevant warnings with confidence.
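The idea of scoring long-term action transitions can be illustrated with a first-order transition model, a deliberately simplified stand-in for the recurrent networks mentioned above. A baseline is fitted on healthy-pig action sequences, and sequences with unlikely transitions receive a high anomaly score:

```python
import math
from collections import defaultdict

def fit_transitions(sequences):
    """Estimate first-order action-transition probabilities from
    healthy-pig action sequences (simplified stand-in for the
    recurrent model; a real system conditions on longer history)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        probs[a] = {b: c / total for b, c in nxt.items()}
    return probs

def anomaly_score(seq, probs, floor=1e-3):
    """Mean negative log-likelihood of a sequence's transitions under
    the healthy baseline; high scores flag unusual behavior."""
    nll, steps = 0.0, 0
    for a, b in zip(seq, seq[1:]):
        p = probs.get(a, {}).get(b, floor)
        nll += -math.log(max(p, floor))
        steps += 1
    return nll / max(steps, 1)
```

For example, a pig that cycles between lying, standing, walking, and eating scores low, while a pig that repeatedly breaks off eating to lie down again, a pattern the healthy baseline never produces, scores high and can trigger a warning.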

We are proud to have made the following innovations and contributions:

  • The creation of the most comprehensive dataset on pig behavior
  • The ability to individually recognize each pig based on its physical attributes
  • The implementation of a real-time monitoring system for group-level behavioral analysis
  • The successful deployment of our tracking algorithms on low-energy, low-memory devices to increase scalability

Future Roadmap

Serket has already achieved several significant milestones, including:

  • Achieving group-level behavior detection with an accuracy of 90-95%
  • Successfully detecting heat stress with an accuracy of 80-90%
  • Counting pigs in the pen with a remarkable 100% accuracy

As Serket collaborates with farmers worldwide, we continue to improve and expand our product offerings. Technologically, we have several immediate goals we would like to achieve.

Our roadmap includes creating the most advanced computer-vision weight measurement system that works in the animals' natural setting. This system will measure animal weight to within 0.5 kg without requiring the pigs to be in a specific location. Our team is also refining our tracking system and behavior recognition to detect aggression and playfulness, both of which are important for understanding animal well-being and farm productivity.

Tracking individual animals over long periods remains a challenge without additional sensors to confirm their identity. Camera downtime or darkness can cause periods of lost tracking, requiring re-identification. However, we are committed to overcoming this challenge in order to achieve early sickness detection at the individual level with confidence.

Figure 4: Finisher pigs tracked by computer vision using pose estimation

In conclusion, as the world population grows and the demand for quality protein increases, the livestock industry faces challenges in responding quickly and effectively. Thinking outside the box and being open to innovation, including the utilization of computer vision and artificial intelligence, presents clear advantages in both economic and health aspects, benefiting not only animals but also farmers and industry players globally.

At Serket, we are committed to disrupting the livestock industry with cutting-edge technology that elevates animal welfare and augments operational efficiency, enabling sustainable and efficient farming practices. We are determined to lead the digital transformation of livestock management with our innovative solutions and forward-thinking approach, collaborating with the industry towards a better future.

Written by: Rajat Thomas

Edited by: Hilal Karakaya