2021-10-04-Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis (MICCAI2019)
Code: https://github.com/MrGiovanni/ModelsGenesis
Motivation:
3D imaging tasks are often reformulated as 2D problems, which discards rich 3D anatomical information and degrades performance.
To address this, the paper proposes Models Genesis, so named because the models are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models).
Hypothesis: the sophisticated yet recurrent anatomy in medical images can serve as a strong supervision signal, allowing deep models to learn common anatomical representations automatically via self-supervision.
Given the marked differences between natural images and medical images, we hypothesize that transfer learning can yield more powerful (application-specific) target models if the source models are built directly from medical images.
Can we utilize the large number of available Chest CT images without systematic annotation to train source models that can yield high-performance target models via transfer learning?
Methods
The encoder alone can be fine-tuned for target classification tasks, while the encoder and decoder together can be fine-tuned for target segmentation tasks.
Learning appearance (shape and intensity distribution)via non-linear transformation.
Intensity information can be used as a strong source of pixel-wise supervision.
To preserve the relative intensity information of anatomical structures under the transformation, a Bezier curve is used as the transformation function: it is smooth and monotonic, assigning each input value a unique output value and ensuring a one-to-one mapping.
[What is a Bezier curve?] 2021-10-04 Bezier curves - Jianshu (簡書)
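The intensity mapping can be sketched in plain Python. This is a minimal sketch, not the official implementation: the function names, the nearest-sample lookup, and the assumption that intensities are normalized to [0, 1] are my own simplifications.

```python
import random

def bezier_point(t, p0, p1, p2, p3):
    """Cubic Bezier curve evaluated at parameter t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def nonlinear_transform(values, n_samples=1000, seed=None):
    """Map normalized intensities in [0, 1] through a random Bezier curve
    anchored at (0, 0) and (1, 1), so relative intensity information is
    preserved and each value has a unique image (one-to-one mapping)."""
    rng = random.Random(seed)
    # Two random interior control points; the fixed endpoints keep the
    # curve inside the unit square.
    x1, y1 = rng.random(), rng.random()
    x2, y2 = rng.random(), rng.random()
    ts = [i / (n_samples - 1) for i in range(n_samples)]
    xs = [bezier_point(t, 0.0, x1, x2, 1.0) for t in ts]
    ys = [bezier_point(t, 0.0, y1, y2, 1.0) for t in ts]
    # Look each intensity up on the sampled curve (nearest x, a simplification).
    return [ys[min(range(n_samples), key=lambda k: abs(xs[k] - v))]
            for v in values]
```

Because the endpoints are pinned at (0, 0) and (1, 1), black stays black and white stays white; only the in-between intensity distribution is distorted.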
Learning texture via local pixel shuffling.
Given an original patch, local pixel shuffling samples random windows from the patch and shuffles the order of the pixels they contain, yielding a transformed patch. The size of the local window controls the task difficulty and must be smaller than the model's receptive field. PatchShuffling [5] is a regularization technique to prevent over-fitting. To recover from local pixel shuffling, the model must memorize local boundaries and texture.
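A minimal 2D sketch in plain Python (the paper operates on 3D sub-volumes; `window` and `n_windows` are illustrative parameter names, not the authors'):

```python
import random

def local_pixel_shuffle(patch, window=3, n_windows=10, seed=None):
    """Shuffle pixel order inside small random windows of a 2D patch.
    The window stays smaller than the model's receptive field, so global
    anatomy survives while local texture is destroyed; restoring the
    original forces the model to learn local boundaries and texture."""
    rng = random.Random(seed)
    out = [row[:] for row in patch]  # work on a copy
    h, w = len(out), len(out[0])
    for _ in range(n_windows):
        r = rng.randrange(0, h - window + 1)
        c = rng.randrange(0, w - window + 1)
        vals = [out[r + i][c + j]
                for i in range(window) for j in range(window)]
        rng.shuffle(vals)
        for i in range(window):
            for j in range(window):
                out[r + i][c + j] = vals[i * window + j]
    return out
```

Note that the transformation is a permutation: the set of pixel values is unchanged, only their local arrangement is scrambled.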
Learning context via out-painting and in-painting.
For self-supervision via out-painting, we generate an arbitrary number of windows of various sizes and superimpose them, producing a single window of complex shape. All pixels outside this window are assigned a random value, while the original intensities inside are preserved. For in-painting, the original intensities are kept outside the window and random values are assigned inside. Out-painting forces Models Genesis to learn the global geometry and spatial layout of organs by extrapolating, whereas in-painting requires it to learn the local continuity of organs by interpolating.
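The two context tasks can be sketched together, again as a 2D plain-Python sketch; the union-of-rectangles mask and all parameter names are my own simplifications of the idea described above:

```python
import random

def _random_windows_mask(h, w, n_windows, rng):
    """True inside the union of a few random rectangles (a window of
    'complex shape' formed by superimposing simple ones)."""
    mask = [[False] * w for _ in range(h)]
    for _ in range(n_windows):
        wh = rng.randrange(h // 4, h // 2 + 1)
        ww = rng.randrange(w // 4, w // 2 + 1)
        r = rng.randrange(0, h - wh + 1)
        c = rng.randrange(0, w - ww + 1)
        for i in range(r, r + wh):
            for j in range(c, c + ww):
                mask[i][j] = True
    return mask

def in_painting(patch, n_windows=3, seed=None):
    """Randomize pixels inside the window, keep the outside: the model
    must interpolate local continuity to restore the patch."""
    rng = random.Random(seed)
    h, w = len(patch), len(patch[0])
    mask = _random_windows_mask(h, w, n_windows, rng)
    return [[rng.random() if mask[i][j] else patch[i][j]
             for j in range(w)] for i in range(h)]

def out_painting(patch, n_windows=3, seed=None):
    """Keep pixels inside the window, randomize the outside: the model
    must extrapolate global geometry and spatial layout."""
    rng = random.Random(seed)
    h, w = len(patch), len(patch[0])
    mask = _random_windows_mask(h, w, n_windows, rng)
    return [[patch[i][j] if mask[i][j] else rng.random()
             for j in range(w)] for i in range(h)]
```

The two tasks are complementary: for the same mask, in-painting destroys exactly the region that out-painting preserves.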
Four unique properties:
1) Autodidactic—requiring no manual labeling.
2) Eclectic—learning from multiple perspectives (appearance, texture, context) to learn a more comprehensive representation.
3) Scalable—eliminating proxy-task-specific heads.
If every proxy task required its own decoder, the framework could not accommodate a large number of self-supervised tasks given limited GPU memory. By unifying all tasks into a single image-restoration task, any favorable transformation can easily be incorporated into the framework, overcoming the scalability problem associated with multi-task learning [2], where network heads are tied to specific proxy tasks.
4) Generic—yielding diverse applications.
Models Genesis learn a general-purpose image representation that can be leveraged for a wide range of target tasks. Specifically, Models Genesis can be utilized to initialize the encoder for target classification tasks and the encoder-decoder for target segmentation tasks.
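The scalability argument (property 3) reduces to one recipe: every transformation produces a (distorted, original) training pair for the same reconstruction head. A minimal sketch with toy stand-in transforms (the real ones are the appearance, texture, and context transformations described above; all names here are mine):

```python
import random

# Toy stand-ins for the real transformations (non-linear intensity mapping,
# local pixel shuffling, in-/out-painting); each maps a patch to a distorted copy.
def add_noise(patch, rng):
    return [[v + rng.uniform(-0.1, 0.1) for v in row] for row in patch]

def zero_window(patch, rng):
    out = [row[:] for row in patch]
    r = rng.randrange(len(out) - 1)
    c = rng.randrange(len(out[0]) - 1)
    for i in (r, r + 1):
        for j in (c, c + 1):
            out[i][j] = 0.0
    return out

def make_restoration_pair(patch, transforms, seed=None):
    """Unified proxy task: distort the patch with one randomly chosen
    transformation; the target is always the untouched original, so every
    transformation shares a single reconstruction head (no per-task decoder)."""
    rng = random.Random(seed)
    distorted = rng.choice(transforms)(patch, rng)
    return distorted, patch

x, y = make_restoration_pair([[0.5] * 4 for _ in range(4)],
                             [add_noise, zero_window], seed=0)
```

Adding a new transformation is just appending to the `transforms` list; the model and loss (e.g. pixel-wise MSE between restoration and original) stay unchanged.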
Experiments and Results
Experiment protocol.
534 CT scans in LIDC-IDRI and 77,074 X-rays in ChestX-ray8.
Models Genesis outperform 3D models trained from scratch.
Models Genesis consistently top any 2D approaches.
Models Genesis (2D) offer performance equivalent to supervised pre-trained models.