Scoring of tumor-infiltrating lymphocytes: From visual estimation to machine learning
- Authors
- Klauschen, F.; Mueller, K.-R.; Binder, A.; Bockmayr, M.; Haegele, M.; Seegerer, P.; Wienert, S.; Pruneri, G.; de Maria, S.; Badve, S.; Michiels, S.; Nielsen, T. O.; Adams, S.; Savas, P.; Symmans, F.; Willis, S.; Gruosso, T.; Park, M.; Haibe-Kains, B.; Gallas, B.; Thompson, A. M.; Cree, I.; Sotiriou, C.; Solinas, C.; Preusser, M.; Hewitt, S. M.; Rimm, D.; Viale, G.; Loi, S.; Loibl, S.; Salgado, R.; Denkert, C.
- Issue Date
- October 2018
- Publisher
- ACADEMIC PRESS LTD - ELSEVIER SCIENCE LTD
- Citation
- SEMINARS IN CANCER BIOLOGY, v.52, pp.151 - 157
- Indexed
- SCIE; SCOPUS
- Journal Title
- SEMINARS IN CANCER BIOLOGY
- Volume
- 52
- Start Page
- 151
- End Page
- 157
- URI
- https://scholar.korea.ac.kr/handle/2021.sw.korea/73029
- DOI
- 10.1016/j.semcancer.2018.07.001
- ISSN
- 1044-579X
- Abstract
- The extent of tumor-infiltrating lymphocytes (TILs), along with immunomodulatory ligands, tumor mutational burden and other biomarkers, has been demonstrated to be a marker of response to immune-checkpoint therapy in several cancers. Pathologists have therefore started to devise standardized visual approaches to quantify TILs for therapy prediction. However, despite successful standardization efforts, visual TIL estimation is slow, has limited precision, and cannot evaluate more complex properties such as TIL distribution patterns. Therefore, computational image analysis approaches are needed to provide standardized and efficient TIL quantification. Here, we discuss different automated TIL scoring approaches ranging from classical image segmentation, where cell boundaries are identified and the resulting objects classified according to shape properties, to machine learning-based approaches that directly classify cells without segmentation but rely on large amounts of training data. In contrast to conventional machine learning (ML) approaches that are often criticized for their "black-box" characteristics, we also discuss explainable machine learning. Such approaches render ML results interpretable and explain the computational decision-making process through high-resolution heatmaps that highlight TILs and cancer cells and therefore allow for quantification and plausibility checks in biomedical research and diagnostics.
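The classical segmentation-based scoring the abstract describes (identify cell boundaries, then classify the resulting objects by shape) can be illustrated with a minimal sketch. The `score_tils` function, the area threshold, and the bounding-box roundness heuristic below are illustrative assumptions for a toy binary mask, not the method used in the paper; real pipelines operate on stained histology images with far more elaborate segmentation and feature extraction.

```python
import numpy as np
from scipy import ndimage


def score_tils(mask, max_lymph_area=60, min_roundness=0.7):
    """Toy segmentation-based TIL scoring sketch (illustrative only).

    Labels connected objects in a binary nucleus mask, then classifies
    each object by simple shape heuristics: lymphocyte nuclei are
    approximated as small, round objects; larger or irregular objects
    are treated as non-TIL (e.g. tumor cell nuclei).
    """
    labels, n_objects = ndimage.label(mask)
    til_count = 0
    for i in range(1, n_objects + 1):
        obj = labels == i
        area = int(obj.sum())
        # Roundness proxy (assumption): object area relative to its
        # bounding-box area; a filled square/circle scores near 1.0.
        ys, xs = np.nonzero(obj)
        bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
        roundness = area / bbox_area
        if area <= max_lymph_area and roundness >= min_roundness:
            til_count += 1
    return til_count, n_objects


# Synthetic example: one small round object (TIL-like) and one
# large object (tumor-cell-like) in a 20x20 mask.
mask = np.zeros((20, 20), dtype=bool)
mask[2:5, 2:5] = True     # area 9, roundness 1.0 -> counted as TIL
mask[8:18, 8:18] = True   # area 100 > max_lymph_area -> not a TIL
tils, total = score_tils(mask)
print(tils, total)  # 1 2
```

The shape-heuristic step is exactly where such classical pipelines are brittle (touching nuclei, staining variation), which is the abstract's motivation for segmentation-free ML classifiers and, in turn, for explainable heatmap-based approaches.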
- Files in This Item
- There are no files associated with this item.
- Appears in
Collections - Graduate School > Department of Artificial Intelligence > 1. Journal Articles