A paper on identifying glioblastoma, a highly malignant brain tumor, from brain MRI images, by research assistant Han, Assistant Professor Hayashi of the Uchida Laboratory at Kyushu University, and colleagues, has been published in IEEE Access.

This is the result of two years of steady work by the team that won the "Launching a Research Project" presentation competition at the 2017 MIRU young researchers' program, for which Center Director Satoh served as executive chair. The project, whose preliminary experiments began before the Research Center for Medical Bigdata was even established, has at last borne fruit within the center.

The study uses GANs to generate glioblastoma images and thereby augment the training data. By combining noise-to-image and image-to-image generation into a two-step data augmentation pipeline, the team substantially improved the sensitivity of glioblastoma classification, from 93.67% to 97.48%.

Changhee Han, Leonardo Rundo, Ryosuke Araki, Yudai Nagano, Yujiro Furukawa, Giancarlo Mauri, Hideki Nakayama and Hideaki Hayashi

Combining Noise-to-Image and Image-to-Image GANs: Brain MR Image Augmentation for Tumor Detection

Accepted to IEEE Access

Abstract

Convolutional Neural Networks (CNNs) achieve excellent computer-assisted diagnosis with sufficient annotated training data. However, most medical imaging datasets are small and fragmented. In this context, Generative Adversarial Networks (GANs) can synthesize realistic/diverse additional training images to fill the data gap in the real image distribution; researchers have improved classification by augmenting data with noise-to-image (e.g., random noise samples to diverse pathological images) or image-to-image GANs (e.g., a benign image to a malignant one). Yet, no research has reported results combining noise-to-image and image-to-image GANs for a further performance boost. Therefore, to maximize the Data Augmentation (DA) effect with the GAN combinations, we propose a two-step GAN-based DA that generates and refines brain Magnetic Resonance (MR) images with/without tumors separately: (i) Progressive Growing of GANs (PGGANs), a multi-stage noise-to-image GAN for high-resolution MR image generation, first generates realistic/diverse 256 × 256 images; (ii) Multimodal UNsupervised Image-to-image Translation (MUNIT), which combines GANs/Variational AutoEncoders, or SimGAN, which uses a DA-focused GAN loss, further refines the texture/shape of the PGGAN-generated images to resemble the real ones. We thoroughly investigate CNN-based tumor classification results, also considering the influence of pre-training on ImageNet and discarding weird-looking GAN-generated images. The results show that, when combined with classic DA, our two-step GAN-based DA can significantly outperform the classic DA alone, in tumor detection (i.e., boosting sensitivity from 93.67% to 97.48%) and also in other medical imaging tasks.
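
To illustrate how the two-step GAN-based data augmentation fits around an ordinary CNN classifier, here is a minimal PyTorch sketch. It is not the authors' implementation: pggan_generate and refine are hypothetical placeholders standing in for the trained PGGAN (noise-to-image) and MUNIT/SimGAN (image-to-image) models, and random tensors stand in for the real annotated MR slices; only the overall flow of pooling synthetic and real samples for classifier training follows the abstract.

# Sketch of the two-step GAN-based DA pipeline (assumptions noted in comments).
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def pggan_generate(n, device="cpu"):
    """Placeholder for the noise-to-image step (PGGAN): returns n synthetic
    256x256 single-channel MR slices. Here it only emits random tensors."""
    return torch.rand(n, 1, 256, 256, device=device)

def refine(images):
    """Placeholder for the image-to-image step (MUNIT or SimGAN), which would
    pull PGGAN outputs closer to the real image distribution. Identity here."""
    return images

# Steps (i) and (ii): synthesize and refine tumor / non-tumor images separately.
fake_tumor     = refine(pggan_generate(64))
fake_non_tumor = refine(pggan_generate(64))
fake_images = torch.cat([fake_tumor, fake_non_tumor])
fake_labels = torch.cat([torch.ones(64, dtype=torch.long),
                         torch.zeros(64, dtype=torch.long)])

# Real annotated data would be loaded here; random stand-ins keep the sketch runnable.
real_images = torch.rand(128, 1, 256, 256)
real_labels = torch.randint(0, 2, (128,))

# Pool real and GAN-generated samples, as in "classic DA + two-step GAN-based DA".
train_set = ConcatDataset([TensorDataset(real_images, real_labels),
                           TensorDataset(fake_images, fake_labels)])
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# A tiny CNN stands in for the tumor/non-tumor classifier used in the paper.
classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:  # one epoch over the mixed real + synthetic set
    optimizer.zero_grad()
    loss = loss_fn(classifier(images), labels)
    loss.backward()
    optimizer.step()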

https://arxiv.org/abs/1905.13456