2023-03-24 Update
Feature-Conditioned Cascaded Video Diffusion Models for Precise Echocardiogram Synthesis
Authors: Hadrien Reynaud, Mengyun Qiao, Mischa Dombrowski, Thomas Day, Reza Razavi, Alberto Gomez, Paul Leeson, Bernhard Kainz
Image synthesis is expected to provide value for the translation of machine learning methods into clinical practice. Fundamental problems like model robustness, domain transfer, causal modelling, and operator training become approachable through synthetic data. In particular, heavily operator-dependent modalities like ultrasound imaging require robust frameworks for image and video generation. So far, video generation has only been possible by providing input data that is as rich as the output data, e.g., image sequence plus conditioning in, video out. However, clinical documentation is usually scarce and only single images are reported and stored, so retrospective patient-specific analysis or the generation of rich training data becomes impossible with current approaches. In this paper, we extend elucidated diffusion models for video modelling to generate plausible video sequences from single images and arbitrary conditioning with clinical parameters. We explore this idea within the context of echocardiograms by looking into the variation of the Left Ventricle Ejection Fraction, the most essential clinical metric gained from these examinations. We use the publicly available EchoNet-Dynamic dataset for all our experiments. Our image-to-sequence approach achieves an $R^2$ score of 93%, which is 38 points higher than recently proposed sequence-to-sequence generation methods. Code and models will be available at: https://github.com/HReynaud/EchoDiffusion.
PDF Under Review
Click here to view paper screenshots
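The abstract above describes conditioning a video diffusion model on a single anchor image plus a clinical parameter (the ejection fraction). The sketch below is a minimal, hypothetical illustration of that conditioning interface, not the authors' architecture: module names, shapes, and the toy 3D-conv denoiser are all assumptions made for clarity.

```python
# Hypothetical sketch (not the EchoDiffusion code): conditioning a video denoiser on a
# single anchor frame plus a scalar ejection-fraction (EF) value, as the abstract describes.
import torch
import torch.nn as nn

class ToyConditionalVideoDenoiser(nn.Module):
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        # EF scalar -> embedding, later broadcast over space and time
        self.ef_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        # toy 3D conv stack over (noisy video ⊕ repeated anchor frame ⊕ EF embedding)
        self.net = nn.Sequential(
            nn.Conv3d(2 * channels + hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv3d(hidden, channels, 3, padding=1),
        )

    def forward(self, noisy_video, anchor_frame, ef):
        # noisy_video: (B, C, T, H, W); anchor_frame: (B, C, H, W); ef: (B, 1) in [0, 1]
        b, c, t, h, w = noisy_video.shape
        anchor = anchor_frame.unsqueeze(2).expand(-1, -1, t, -1, -1)
        ef_map = self.ef_embed(ef).view(b, -1, 1, 1, 1).expand(-1, -1, t, h, w)
        x = torch.cat([noisy_video, anchor, ef_map], dim=1)
        return self.net(x)  # predicted noise / denoised estimate

# Usage: one denoising call on a random 16-frame clip conditioned on two EF values
model = ToyConditionalVideoDenoiser()
video = torch.randn(2, 1, 16, 112, 112)
frame = torch.randn(2, 1, 112, 112)
ef = torch.tensor([[0.55], [0.30]])
print(model(video, frame, ef).shape)  # torch.Size([2, 1, 16, 112, 112])
```

The point of the sketch is only that the generator sees the video being denoised, one still image, and one scalar; the paper's actual cascaded elucidated diffusion design is considerably more involved.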
AptSim2Real: Approximately-Paired Sim-to-Real Image Translation
Authors: Charles Y Zhang, Ashish Shrivastava
Advancements in graphics technology have increased the use of simulated data for training machine learning models. However, simulated data often differs from real-world data, creating a distribution gap that can decrease the efficacy of models trained on simulation data in real-world applications. To mitigate this gap, sim-to-real domain transfer modifies simulated images to better match real-world data, enabling the effective use of simulation data in model training. Sim-to-real transfer utilizes image translation methods, which fall into two main categories: paired and unpaired image-to-image translation. Paired image translation requires a perfect pixel match, making it difficult to apply in practice due to the lack of pixel-wise correspondence between simulation and real-world data. Unpaired image translation, while more suitable for sim-to-real transfer, is still challenging to learn for complex natural scenes. To address these challenges, we propose a third category: approximately-paired sim-to-real translation, where the source and target images do not need to be exactly paired. Our approximately-paired method, AptSim2Real, exploits the fact that simulators can generate scenes loosely resembling real-world scenes in terms of lighting, environment, and composition. Our novel training strategy results in significant qualitative and quantitative improvements, with up to a 24% improvement in FID score compared to state-of-the-art unpaired image-translation methods.
PDF
Click here to view paper screenshots
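The abstract does not spell out the training objective, but one plausible way to exploit "approximate" pairs is to compare the translated sim image with its loosely matching real image at a coarse scale, so only lighting, environment, and overall composition need to agree. The snippet below is a hypothetical stand-in for such a term, not the AptSim2Real loss; the pooling scale and function name are assumptions.

```python
# Hypothetical sketch (not the AptSim2Real implementation): a misalignment-tolerant
# loss between a translated sim image and its approximately paired real image.
import torch
import torch.nn.functional as F

def approximate_pair_loss(translated_sim, real_approx, scale=8):
    """Low-resolution L1 between a generator output and its loosely matched real
    counterpart; downsampling makes the term tolerant to pixel-level misalignment."""
    t = F.avg_pool2d(translated_sim, scale)
    r = F.avg_pool2d(real_approx, scale)
    return F.l1_loss(t, r)

# Usage on random stand-in tensors of shape (B, C, H, W)
fake = torch.randn(4, 3, 256, 256)   # output of a sim-to-real generator
real = torch.randn(4, 3, 256, 256)   # approximately matching real image
print(approximate_pair_loss(fake, real).item())
```

In practice such a term would be combined with an adversarial loss toward the real-image distribution; the paper should be consulted for the actual training strategy.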
Zero-guidance Segmentation Using Zero Segment Labels
Authors: Pitchaporn Rewatbowornwong, Nattanat Chatthee, Ekapol Chuangsuwanich, Supasorn Suwajanakorn
CLIP has enabled new and exciting joint vision-language applications, one of which is open-vocabulary segmentation, which can locate any segment given an arbitrary text query. In our research, we ask whether it is possible to discover semantic segments without any user guidance in the form of text queries or predefined classes, and to label them automatically using natural language. We propose a novel problem, zero-guidance segmentation, and the first baseline that leverages two pre-trained generalist models, DINO and CLIP, to solve this problem without any fine-tuning or segmentation dataset. The general idea is to first segment an image into small over-segments, encode them into CLIP's visual-language space, translate them into text labels, and merge semantically similar segments together. The key challenge, however, is how to encode a visual segment into a segment-specific embedding that balances global and local context information, both of which are useful for recognition. Our main contribution is a novel attention-masking technique that balances the two contexts by analyzing the attention layers inside CLIP. We also introduce several metrics for the evaluation of this new task. With CLIP's innate knowledge, our method can precisely locate the Mona Lisa painting among a museum crowd. Project page: https://zero-guide-seg.github.io/.
PDF
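The pipeline described in the abstract (over-segment, encode each segment with CLIP, turn embeddings into labels, merge similar segments) can be illustrated with a much-simplified sketch. The code below is hypothetical: it assumes pre-computed segment masks, naively zeroes out pixels outside each segment instead of the paper's attention-masking inside CLIP, and picks labels from a small candidate list rather than generating open-ended text.

```python
# Hypothetical sketch (not the authors' pipeline): label pre-computed over-segments by
# nearest-neighbour matching in CLIP space against a small candidate vocabulary.
import numpy as np
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def label_segments(image: Image.Image, segment_masks, candidate_labels):
    """segment_masks: list of boolean HxW numpy arrays (e.g. from DINO-feature clustering)."""
    text = clip.tokenize(candidate_labels).to(device)
    labels = []
    with torch.no_grad():
        text_feat = model.encode_text(text)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        for mask in segment_masks:
            # Naive context handling: zero out pixels outside the segment before encoding.
            masked = (np.array(image) * mask[..., None]).astype("uint8")
            seg = preprocess(Image.fromarray(masked)).unsqueeze(0).to(device)
            img_feat = model.encode_image(seg)
            img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
            labels.append(candidate_labels[(img_feat @ text_feat.T).argmax().item()])
    return labels
```

The paper's contribution is precisely what this sketch glosses over: balancing global and local context via attention masking inside CLIP and translating segment embeddings into free-form text labels without a predefined vocabulary.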