Invited Lectures

March 20, 2025 13:20–14:10
"Unsupervised Learning for Perceptual Quality Assessment of Medical Images"
Prof. Jang-Hwan Choi
Department of Mechanical & Biomedical Engineering, Ewha Womans University, Korea
Abstract
Optimizing the balance between radiation dose and image quality is critical in clinical computed tomography (CT) due to the potential risks associated with excessive radiation. However, developing an accurate and reliable CT image quality assessment (IQA) tool is challenging, particularly in the absence of pristine reference images or radiologists’ ground-truth scores. In this talk, I present three recent advances in no-reference IQA methods that tackle these limitations through a combination of self-supervised learning, knowledge distillation, and large-scale dataset curation.
First, I introduce Deep Detector IQA (D2IQA), a method leveraging unsupervised object detection to insert virtual objects into CT scans for training. D2IQA accurately quantifies perceptual image quality, showing strong correlations with radiologists’ ratings while outperforming traditional IQA metrics. Building upon this, I propose the Efficient Deep-Detector Image Quality Assessment (EDIQA) framework, which employs a knowledge-distillation process from a modified D2IQA teacher model to a lightweight student model. The student model dramatically reduces computation time—by more than four orders of magnitude—while preserving accuracy, making it suitable for real-time clinical applications across various imaging modalities and anatomical regions.
Lastly, I highlight the Low-dose CT Perceptual Image Quality Assessment Challenge from MICCAI 2023, which introduced the first open-source CT IQA dataset consisting of 1,000 images independently evaluated by expert radiologists. Serving as a valuable benchmark, the challenge compared six submitted approaches, illuminating key strengths and limitations of existing IQA methods. Moreover, it underscored the potential for no-reference IQA algorithms to meet or exceed the capabilities of full-reference methods, accelerating progress toward universal IQA standards.
Taken together, these three contributions represent significant steps toward robust and efficient IQA in medical imaging, aiming to reduce radiation exposure while ensuring high diagnostic confidence. The challenge dataset is publicly available at https://zenodo.org/records/7833096.
Biography
Prof. Jang-Hwan Choi received his Ph.D. from Stanford University in 2015, focusing on medical big data and image processing. After completing a postdoctoral fellowship at Stanford Medical School, he served as a Senior Researcher at the Electronics and Telecommunications Research Institute in 2016. In 2017, he joined Ewha Womans University as an Associate Professor in the Department of Artificial Intelligence, where he leads the Medical AI and Computer Vision Lab specializing in advanced medical AI and computer vision. Over the past five years, Dr. Choi has published 50 articles in top-tier journals and conferences—such as Medical Image Analysis (1% JCI), IEEE JBHI (1% JCI), IEEE TIM (8% JCI), IEEE TRPMS (7% JCI), Medical Physics (15% JCI), AAAI, ICDM, IPMI, and MICCAI—filed or registered 23 patents (16 domestic and 7 overseas), and completed five technology transfers (including one international). He has also led or participated in major research initiatives funded by the Pan-Ministerial Full-Cycle Medical Device R&D Project, the Small & Medium Business Technology Innovation Development Project, and various National Research Foundation grants. In recognition of his outstanding research and academic contributions, he has received the Orthopedic Research Society New Investigator Recognition Award (2016), the KSME Bioengineering Division Outstanding Early-Career Researcher Award (2024), and Ewha Womans University’s Outstanding Research Achievement Award (2024).

March 20, 2025 14:10–15:00
"Applications and Challenges of Generative AI and Image Recognition Technology in Smart Healthcare"
Prof. Chuan-Yu Chang
Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Taiwan
Abstract
With significant advances in computing power and the rapid development of artificial intelligence (AI) technologies, image recognition technology has gradually matured. Meanwhile, since 2021, generative AI has revolutionized traditional AI applications, leading to the emergence of several unicorn companies in 2022 and 2023, such as OpenAI and Hugging Face, and highlighting the rapid growth of the generative AI market. The development of generative AI is unstoppable, but how can these technologies be integrated into the biomedical industry? And what challenges might be encountered?
This presentation will share and demonstrate several technologies developed and practically applied in various fields by the Service Systems Technology Center of ITRI and the Intelligent Recognition Industry Service Research Center of YunTech, including:
(1) Smart Medical Assistant: Image-assisted diagnostic technology that helps doctors in clinical diagnosis, reducing the difficulty of diagnosis, shortening examination time, and improving diagnosis accuracy.
(2) Three-in-One Wound Care Decision System: AI analysis to measure wound area and size, and automatically identify wound tissues.
(3) "Zero-Contact" Heart Rate and Respiration Detection Technology: Non-contact respiration and heart rate detection developed by combining AI and computer vision technology, capable of real-time, accurate, and continuous measurement, with accuracy close to medical-grade physiological measurement instruments.
(4) Medical Imaging Report Generation System (MrGPU-GPT): Combining object detection and large language models to generate medical reports for parathyroid ultrasound images, reducing the workload of radiologists.
Additionally, the presentation will analyze the technical limitations encountered during the development of smart healthcare and the solutions to overcome them.
Biography
Prof. Chuan-Yu Chang (IET Fellow, IEEE Senior Member) received his Ph.D. in Electrical Engineering from National Cheng Kung University in 2000. He is a Distinguished Professor in the Department of Computer Science and Information Engineering at National Yunlin University of Science and Technology (YunTech). He has served as the department chair, dean of research and development, and director of the Industry-Academia and Intellectual Property Incubation Center. He is currently the director of the Intelligent Recognition Industry Service Research Center. In February 2025, he assumed the role of the sixth president of YunTech.
Dr. Chang was seconded to the Service Systems Technology Center at the Industrial Technology Research Institute (ITRI) as Deputy Executive Director from August 2019 to July 2023. In August 2023, he returned to YunTech and was jointly appointed as Chief Digital Officer at the Service Systems Technology Center of ITRI. Dr. Chang’s current research interests include neural networks and their application to medical image processing, defect detection, and infant cry recognition. He has authored or coauthored more than 300 publications in journals and conference proceedings in these fields.
He has developed technologies with industrial benefits, obtained numerous domestic and international invention patents, and completed technology transfers. He engages actively with related industries and research communities both domestically and internationally. He is a Fellow of the IET, a lifetime member of IPPR and TAAI, and a Senior Member of IEEE. He has served as the chair of the IEEE Signal Processing Society, Tainan Chapter, and as the Representative Committee member for Region 10 of IEEE SPS.
From 2017 to 2021, he was the president of the Taiwan Association for Web Intelligence Consortium. From 2020 to 2022, he was elected president of the Image Processing and Pattern Recognition Society of Taiwan. In 2021, he received the Ministry of Education's National Industry-Academia Master Award, the Outstanding Engineering Professor Award from the Chinese Institute of Electrical Engineering, the Outstanding Research Award from ITRI, and the IEEE Tainan Section Outstanding Technical Achievement Award. Since 2021, he has been listed in Stanford University's World’s Top 2% Scientists (both in the career-long and annual scientific impact rankings). In 2023, he received the Outstanding Engineering Professor Award from the Chinese Institute of Engineers. He was awarded the Future Technology Award by the National Science Council in both 2020 and 2023.

March 21, 2025 13:00–13:50
"Label Efficient Learning in Biomedical Image Analysis"
Prof. Ryoma Bise
Department of Advanced Information Technology, Faculty of Information Science and Electrical Engineering, Kyushu University, Japan
Abstract
Supervised machine learning, including deep learning, is widely utilized in biomedical image analysis for tasks such as detection, segmentation, and tracking. However, in the medical and biological domains, creating accurate labels for supervised learning requires expert knowledge, making the process both costly and time-consuming. Unlike general object recognition tasks, where labels can be created by anyone, the cost of acquiring labeled data in biomedical fields is significantly higher, as expert annotations are essential. As a result, despite having access to a large amount of data, it is common to only annotate a small portion of it and use this limited labeled data for training.
To address this challenge, semi-supervised learning techniques can be employed, where a small amount of labeled data is combined with a larger set of unlabeled data. This approach allows models to make use of all available data, thus improving the accuracy and generalizability of the model without the need for extensive labeling efforts. Additionally, weakly supervised learning techniques offer another promising solution. These methods utilize indirectly related information, often gathered during the diagnostic process, which may not directly answer the task but can still guide the learning process. Such information can serve as weak supervision, reducing the need for fully labeled data while still enhancing model performance.
Another significant challenge in biomedical image analysis is domain shift, where models trained on data from one facility or imaging device may perform poorly when tested on data from a different facility. Domain adaptation techniques aim to address this issue by using supervised data from one facility in combination with unsupervised data from another, allowing the model to generalize better to new environments.
This presentation will explore these methodologies for effectively utilizing weakly supervised and unsupervised data in biomedical image analysis tasks, with a focus on improving detection, segmentation, and tracking performance.
Biography
Prof. Ryoma Bise received an M.S. from the Graduate School of Information Science and Electrical Engineering, Kyushu University, Japan, in 2002, and a Ph.D. in interdisciplinary information studies from The University of Tokyo in 2015. From 2002 to 2015, he was involved in research and development in informatics at Dai Nippon Printing Company, Ltd., Japan. He was a visiting scholar at Carnegie Mellon University under the supervision of Prof. Takeo Kanade from 2008 to 2010. Following this, he worked at the National Institute of Informatics from 2015 to 2017. In 2017, he joined the Faculty of Information Science and Electrical Engineering at Kyushu University as an Associate Professor and was promoted to Full Professor in 2023. His research interests include computer vision, with a particular focus on biomedical image analysis. He has published extensively in leading AI conferences and journals, including CVPR, ECCV, MICCAI, PAMI, and MedIA. Additionally, he has served as an Associate Editor for Pattern Recognition (Elsevier), Area Chair for CVPR 2024 and 2025 and MICCAI 2024, and General Chair for MVA 2025.