<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>ScholarWorks Community:</title>
    <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/836</link>
    <description />
    <pubDate>Sun, 05 Apr 2026 16:12:42 GMT</pubDate>
    <dc:date>2026-04-05T16:12:42Z</dc:date>
    <item>
      <title>SleepWatcher: Detecting sleep apnea/hypopnea syndrome from wearable devices using deep learning</title>
      <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/271409</link>
      <description>Title: SleepWatcher: Detecting sleep apnea/hypopnea syndrome from wearable devices using deep learning
Authors: Kim, Hyungbin; Lee, Hyubjin; Kim, Minsoo; Chung, Yon Dohn
Abstract: Given the scarcity of polysomnography facilities and the cost of testing, it is crucial to detect sleep apnea/hypopnea syndrome (SAHS) from biomedical signals that a wearable device can measure. This paper presents SleepWatcher, a methodology for detecting SAHS using biomedical signals from a wearable device. The work addresses the class imbalance and handcrafted-feature dependencies of biomedical signals in SAHS detection. Heart rate variability (HRV) and blood oxygen saturation (SpO2) signals are used to train two-dimensional convolutional neural networks that classify normal, apnea, and hypopnea events, and the experiments demonstrate that multiple signals can be trained with the same framework without handcrafted features. SleepWatcher consists of two stages. Stage 1 classifies normal and abnormal events; its classifier achieves an accuracy, specificity, and sensitivity of 89%, 89%, and 89% when using only HRV signals, and of 86%, 84%, and 89% when using only SpO2 signals. Trained on each of the HRV and SpO2 signals, SleepWatcher improves classification performance over state-of-the-art methods that use each signal. The final Stage 1 results achieve an accuracy, specificity, and sensitivity of 87%, 83%, and 91%, respectively. Stage 2 classifies apnea and hypopnea events and computes the apnea/hypopnea index; its classifier achieves an accuracy, specificity, and sensitivity of 97%, 99%, and 76%, respectively. With SleepWatcher, SAHS diagnosis could become an out-of-hospital procedure with satisfactory performance in terms of accuracy, specificity, and sensitivity.</description>
      <pubDate>Thu, 01 May 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.korea.ac.kr/handle/2021.sw.korea/271409</guid>
      <dc:date>2025-05-01T00:00:00Z</dc:date>
    </item>
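    <!-- Editor's note: the abstract above describes a two-stage pipeline of 2D CNNs
         over HRV and SpO2 signals. The PyTorch sketch below is illustrative only and
         is not from the article; the input shapes, layer sizes, and Stage 2 class
         ordering are all assumptions.

         import torch
         import torch.nn as nn

         class EventCNN(nn.Module):
             # A small 2D CNN over a spectrogram-like signal window (1 x H x W).
             def __init__(self, n_classes: int):
                 super().__init__()
                 self.features = nn.Sequential(
                     nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                     nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                 )
                 self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

             def forward(self, x):
                 return self.head(self.features(x))

         stage1 = EventCNN(n_classes=2)  # normal vs. abnormal
         stage2 = EventCNN(n_classes=2)  # apnea vs. hypopnea (class order assumed)

         windows = torch.randn(8, 1, 32, 128)           # 8 hypothetical HRV windows
         abnormal = stage1(windows).argmax(dim=1) == 1  # events flagged by Stage 1
         if abnormal.any():
             subtype = stage2(windows[abnormal]).argmax(dim=1)
             # The apnea/hypopnea index (AHI) is then computed as events per hour.
    -->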
    <item>
      <title>An analysis on language transfer of pre-trained language model with cross-lingual post-training</title>
      <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/267891</link>
      <description>Title: An analysis on language transfer of pre-trained language model with cross-lingual post-training
Authors: Son, Suhyune; Park, Chanjun; Lee, Jungseob; Shim, Midan; Lee, Chanhee; Jang, Yoonna; Seo, Jaehyung; Lim, Jungwoo; Lim, Heuiseok
Abstract: As recent pre-trained language models require enormous corpora and resources, an inequality between rich-resource languages and scarce-resource languages has become prominent. To mitigate this problem, studies on cross-lingual transfer learning and multilingual training have attempted to endow long-tail languages with the knowledge acquired from rich-resource languages. Although successful, existing work has mainly focused on experimenting with as many languages as possible, leaving targeted in-depth analysis absent. In this study, we spotlight a single low-resource language and perform extensive evaluation and probing experiments using cross-lingual post-training (XPT). To make the transfer scenario challenging, we adopt Korean as the target language because of its low linguistic similarity to English, which makes it well suited to demonstrating XPT&apos;s transfer capability. Through comprehensive experiments, we observe that XPT outperforms monolingual models trained on a large corpus in language understanding tasks and shows comparable performance even with limited training data. We also find that the XPT-based method is effective when transferring a source language into a target language with low similarity.</description>
      <pubDate>Tue, 01 Apr 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.korea.ac.kr/handle/2021.sw.korea/267891</guid>
      <dc:date>2025-04-01T00:00:00Z</dc:date>
    </item>
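    <!-- Editor's note: a minimal PyTorch sketch (illustrative only) of one common
         cross-lingual transfer recipe consistent with the abstract: freeze a
         pretrained source-language encoder body and train new target-language
         embeddings on top of it. Whether this matches XPT's exact procedure is
         an assumption; model sizes and vocabularies are placeholders.

         import torch
         import torch.nn as nn

         layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
         encoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for a pretrained LM body
         for p in encoder.parameters():
             p.requires_grad = False  # keep the source-language knowledge frozen

         tgt_embed = nn.Embedding(32000, 64)  # new, randomly initialized Korean embeddings
         optimizer = torch.optim.Adam(tgt_embed.parameters(), lr=1e-4)

         tokens = torch.randint(0, 32000, (4, 16))  # a dummy batch of target-language ids
         hidden = encoder(tgt_embed(tokens))        # only tgt_embed is trainable
    -->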
    <item>
      <title>Visual Interfaces to Mitigate Eye Problems in a Virtual Environment via Triggering Eye Blinking and Movement</title>
      <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/268157</link>
      <description>Title: Visual Interfaces to Mitigate Eye Problems in a Virtual Environment via Triggering Eye Blinking and Movement
Authors: Jeong, Jongwook; Kwak, Myeongseok; Kang, HyeongYeop
Abstract: With the increase of virtual reality (VR) applications in daily life, protecting the comfort and health of VR users has become increasingly important. The immersive nature of VR often reduces eye blinking and movement, putting users at risk of developing conditions such as dry eye syndrome and eye strain. In this article, we propose visual interfaces that induce eye blinks or eye movements by briefly drawing users&apos; attention, thereby mitigating the negative effects of VR on eye health. Eye blinking and movement are known to alleviate such eye problems in VR, and the experimental results confirm that our interfaces increase their frequency in VR users.</description>
      <pubDate>Tue, 01 Apr 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.korea.ac.kr/handle/2021.sw.korea/268157</guid>
      <dc:date>2025-04-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>TaPIN: Reinforcing PIN Authentication on Smartphones With Tap Biometrics</title>
      <link>https://scholar.korea.ac.kr/handle/2021.sw.korea/267351</link>
      <description>Title: TaPIN: Reinforcing PIN Authentication on Smartphones With Tap Biometrics
Authors: Lee, Junhyub; Kim, Insu; Oh, Sangeun; Kim, Hyosu
Abstract: PIN authentication is the first line of defense for protecting private data on many smartphone applications, such as lock screens, messengers, and banking apps. However, existing PIN authentication systems have several constraints regarding security, usability, and robustness. To go beyond their limitations, this paper presents TaPIN, a reliable system that authenticates smartphone users with the collaborative use of PINs and tap biometrics. A user is first instructed to enter her PIN by tapping a smartphone screen for authentication. During the PIN entry, the user&apos;s fingertip collides with the screen, producing user-specific vibration and sound signals. TaPIN then senses the tap-induced signals and the collision properties, e.g., pressures and sizes, using the smartphone&apos;s built-in sensors and leverages them as biometric features. That is, it authenticates the user by verifying not only the entered PIN but also the collected features. Our experiments with 20 real-world users demonstrate that this two-factor authentication system is easy to use, more secure than existing methods, and deployable without dedicated hardware. For example, it accurately authenticates users with an average EER of 1.9% in stationary environments and maintains a reasonable level of security regardless of devices, tap styles, and noise.</description>
      <pubDate>Tue, 01 Apr 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.korea.ac.kr/handle/2021.sw.korea/267351</guid>
      <dc:date>2025-04-01T00:00:00Z</dc:date>
    </item>
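    <!-- Editor's note: an illustrative sketch (not from the article) of the
         two-factor decision the abstract describes: accept only when the PIN
         matches and the tap-derived biometric features match an enrolled
         template. The 64-dim feature vector, cosine similarity, and threshold
         are all assumptions for illustration.

         import numpy as np

         def verify(entered_pin, true_pin, tap_features, template, threshold=0.9):
             # First factor: the PIN itself must match.
             if entered_pin != true_pin:
                 return False
             # Second factor: tap biometrics (vibration, sound, pressure, size)
             # summarized as a feature vector, compared by cosine similarity.
             sim = float(tap_features @ template /
                         (np.linalg.norm(tap_features) * np.linalg.norm(template)))
             return sim >= threshold

         template = np.random.rand(64)                 # enrolled user profile (hypothetical)
         probe = template + 0.05 * np.random.rand(64)  # a genuine user's noisy sample
         print(verify("1234", "1234", probe, template))  # True for a close match
    -->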
  </channel>
</rss>

