<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.korea.ac.kr/handle/2021.sw.korea/2448">
    <title>ScholarWorks Community:</title>
    <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/2448</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.korea.ac.kr/handle/2021.sw.korea/197889" />
        <rdf:li rdf:resource="https://scholar.korea.ac.kr/handle/2021.sw.korea/193730" />
        <rdf:li rdf:resource="https://scholar.korea.ac.kr/handle/2021.sw.korea/168633" />
        <rdf:li rdf:resource="https://scholar.korea.ac.kr/handle/2021.sw.korea/190098" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-09T23:02:05Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.korea.ac.kr/handle/2021.sw.korea/197889">
    <title>Pattern Mining-Based Pig Behavior Analysis for Health and Welfare Monitoring</title>
    <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/197889</link>
    <description>Title: Pattern Mining-Based Pig Behavior Analysis for Health and Welfare Monitoring
Authors: Mluba, Hassan Seif; Atif, Othmane; Lee, Jonguk; Park, Daihee; Chung, Yongwha
Abstract: The increasing popularity of pigs has prompted farmers to increase pig production to meet the growing demand. However, while the number of pigs is increasing, the number of farm workers has been declining, making it challenging to perform various farm tasks, the most important among them being managing the pigs&apos; health and welfare. This study proposes a pattern mining-based pig behavior analysis system to provide visualized information and behavioral patterns, assisting farmers in effectively monitoring and assessing pigs&apos; health and welfare. The system consists of four modules: (1) a data acquisition module for collecting pig videos; (2) a detection and tracking module for localizing and uniquely identifying pigs, using the tracking information to crop pig images; (3) a pig behavior recognition module for recognizing pig behaviors from sequences of cropped images; and (4) a pig behavior analysis module for providing visualized information and behavioral patterns to help farmers understand and manage pigs effectively. In the second module, we utilize ByteTrack, which comprises YOLOX as the detector and the BYTE algorithm as the tracker, while MnasNet and an LSTM serve as the appearance feature and temporal information extractors in the third module. The experimental results show that the system achieved a multi-object tracking accuracy of 0.971 for tracking and an F1 score of 0.931 for behavior recognition, while also highlighting the effectiveness of visualization and pattern mining in helping farmers comprehend and manage pigs&apos; health and welfare.</description>
    <dc:date>2024-04-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.korea.ac.kr/handle/2021.sw.korea/193730">
    <title>SAFP-YOLO: Enhanced Object Detection Speed Using Spatial Attention-Based Filter Pruning</title>
    <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/193730</link>
    <description>Title: SAFP-YOLO: Enhanced Object Detection Speed Using Spatial Attention-Based Filter Pruning
Authors: Ahn, Hanse; Son, Seungwook; Roh, Jaehyeon; Baek, Hwapyeong; Lee, Sungju; Chung, Yongwha; Park, Daihee
Abstract: Because object detection accuracy has significantly improved with advancements in deep learning techniques, many real-time applications have adopted one-stage detectors, such as You Only Look Once (YOLO), owing to their fast execution speed and accuracy. However, for practical deployment, the deployment cost should be considered. In this paper, a method for pruning the unimportant filters of YOLO is proposed to satisfy the real-time requirements of a low-cost embedded board. Attention mechanisms have been widely used to improve the accuracy of deep learning models; in contrast, the proposed method uses spatial attention to improve the execution speed of YOLO by evaluating the importance of each YOLO filter. The feature maps before and after spatial attention are compared, and the unimportant filters of YOLO can then be pruned based on this comparison. To the best of our knowledge, this is the first report considering both accuracy and speed with Spatial Attention-based Filter Pruning (SAFP) for lightweight object detectors. To demonstrate the effectiveness of the proposed method, it was applied to the YOLOv4 and YOLOv7 baseline models. With the pig (baseline YOLOv4 84.4%@3.9FPS vs. proposed SAFP-YOLO 78.6%@20.9FPS) and vehicle (baseline YOLOv7 81.8%@3.8FPS vs. proposed SAFP-YOLO 75.7%@20.0FPS) datasets, the proposed method significantly improved the execution speed of YOLOv4 and YOLOv7 (i.e., by a factor of five) on a low-cost embedded board, TX-2, with acceptable accuracy.</description>
    <dc:date>2023-10-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.korea.ac.kr/handle/2021.sw.korea/168633">
    <title>EnsembleVehicleDet: Detection of Faraway Vehicles with Real-Time Consideration</title>
    <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/168633</link>
    <description>Title: EnsembleVehicleDet: Detection of Faraway Vehicles with Real-Time Consideration
Authors: Yu, Seunghyun; Son, Seungwook; Ahn, Hanse; Baek, Hwapyeong; Nam, Kijeong; Chung, Yongwha; Park, Daihee
Abstract: Featured Application: Autonomous Driving. While detecting surrounding vehicles in autonomous driving is possible with advances in deep learning-based object detection, there are cases where small vehicles are not detected accurately. Additionally, real-time processing requirements must be met for implementation in autonomous vehicles. However, detection accuracy and execution speed are inversely related. To improve the accuracy-speed tradeoff, this study proposes an ensemble method. An input image is first downsampled, and the vehicle detection result is acquired for the downsampled image through an object detector. Then, warping or upsampling is performed on the Region of Interest (RoI) where the small vehicles are located, and the small-vehicle detection result is acquired for the transformed image through another object detector. If the input image is downsampled, the effect on the detection accuracy of large vehicles is minimal, but the effect on the detection accuracy of small vehicles is significant. Therefore, the detection accuracy of small vehicles can be improved by increasing their pixel sizes in the transformed image beyond those in the given input image. To validate the proposed method&apos;s efficiency, experiments were conducted with the Argoverse vehicle data used in an autonomous vehicle contest, and the accuracy-speed tradeoff improved by up to a factor of two using the proposed ensemble method.</description>
    <dc:date>2023-03-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.korea.ac.kr/handle/2021.sw.korea/190098">
    <title>Behavior-Based Video Summarization System for Dog Health and Welfare Monitoring</title>
    <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/190098</link>
    <description>Title: Behavior-Based Video Summarization System for Dog Health and Welfare Monitoring
Authors: Atif, Othmane; Lee, Jonguk; Park, Daihee; Chung, Yongwha
Abstract: The popularity of dogs has been increasing owing to factors such as the physical and mental health benefits associated with raising them. While owners care about their dogs&apos; health and welfare, it is difficult for them to assess these themselves, and frequent veterinary checkups represent a growing financial burden. In this study, we propose a behavior-based video summarization and visualization system for monitoring a dog&apos;s behavioral patterns to help assess its health and welfare. The system proceeds in four modules: (1) a video data collection and preprocessing module; (2) an object detection-based module for retrieving image sequences where the dog is alone and cropping them to reduce background noise; (3) a dog behavior recognition module using a two-stream EfficientNetV2 to extract appearance and motion features from the cropped images and their respective optical flow, followed by a long short-term memory (LSTM) model to recognize the dog&apos;s behaviors; and (4) a summarization and visualization module to provide effective visual summaries of the dog&apos;s location and behavior information to help assess and understand its health and welfare. The experimental results show that the system achieved an average F1 score of 0.955 for behavior recognition, with an execution time that allows real-time processing, while the summarization and visualization results demonstrate how the system can help owners assess and understand their dog&apos;s health and welfare.</description>
    <dc:date>2023-03-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

