Deep Representation of a Normal Map for Screen-Space Fluid Rendering
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Choi, Myungjin | - |
dc.contributor.author | Park, Jee-Hyeok | - |
dc.contributor.author | Zhang, Qimeng | - |
dc.contributor.author | Hong, Byeung-Sun | - |
dc.contributor.author | Kim, Chang-Hun | - |
dc.date.accessioned | 2022-02-18T00:40:35Z | - |
dc.date.available | 2022-02-18T00:40:35Z | - |
dc.date.created | 2022-02-08 | - |
dc.date.issued | 2021-10 | - |
dc.identifier.issn | 2076-3417 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/136161 | - |
dc.description.abstract | We propose a novel method for efficiently generating a highly refined normal map for screen-space fluid rendering. Because filtering the normal map is crucial to the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal map representation, thereby refining the low-quality normal map. In particular, we designed a novel loss function dedicated to refining the normal map information, and we use a specific set of auxiliary features to train the cGAN generator to learn features that are more robust with respect to edge details. Additionally, we constructed a dataset of six typical scenes to demonstrate multiple types of fluid simulation. Experiments indicated that our generator inferred clearer and more detailed features on this dataset than a basic screen-space fluid rendering method, and in some cases the results were even smoother than those of the conventional surface reconstruction method. Our method improves the fluid rendering results via the high-quality normal map while preserving the advantages of both screen-space fluid rendering and traditional surface reconstruction methods, including computation time that is independent of the number of simulation particles and spatial resolution that depends only on the image resolution. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | MDPI | - |
dc.title | Deep Representation of a Normal Map for Screen-Space Fluid Rendering | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Chang-Hun | - |
dc.identifier.doi | 10.3390/app11199065 | - |
dc.identifier.scopusid | 2-s2.0-85116058655 | - |
dc.identifier.wosid | 000708019300001 | - |
dc.identifier.bibliographicCitation | APPLIED SCIENCES-BASEL, v.11, no.19 | - |
dc.relation.isPartOf | APPLIED SCIENCES-BASEL | - |
dc.citation.title | APPLIED SCIENCES-BASEL | - |
dc.citation.volume | 11 | - |
dc.citation.number | 19 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Materials Science | - |
dc.relation.journalResearchArea | Physics | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
dc.subject.keywordAuthor | fluid rendering | - |
dc.subject.keywordAuthor | image-based rendering | - |
dc.subject.keywordAuthor | machine learning | - |
dc.subject.keywordAuthor | screen space rendering | - |
dc.subject.keywordAuthor | supervised learning | - |
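The abstract notes that the low-quality input to the cGAN filter comes from a basic screen-space fluid rendering pass, in which a normal map is derived from the rendered depth buffer. A minimal sketch of that step, using finite differences over depth (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """Derive a screen-space normal map (H, W, 3) from a depth buffer (H, W).

    This is the basic view-space reconstruction that screen-space fluid
    rendering uses before any filtering; the paper's cGAN refines its output.
    """
    # Finite-difference depth gradients along image x and y.
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    # Surface normal of z = f(x, y) is proportional to (-dz/dx, -dz/dy, 1).
    n = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    # Normalize to unit length per pixel.
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n
```

For a flat depth buffer the gradients vanish and every pixel's normal is (0, 0, 1); the cGAN generator described in the record then acts as a learned filter on maps like this, in place of the bilateral or curvature-flow smoothing that basic screen-space methods use.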
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.