Title: DreamDiffusion: Generating High-Quality Images from Brain EEG Signals
Authors: Yunpeng Bai, Xintao Wang, Yanpei Cao, Yixiao Ge, Chun Yuan, Ying Shan

  • Word Count: 3,697
  • Estimated Reading Time: ~15 minutes
  • Source Code: No source code provided

Summary: The paper proposes DreamDiffusion, a method to generate high-quality images based on EEG signals recorded from the human brain. This is achieved by:

Pre-training an EEG encoder using masked signal modeling on a large EEG dataset to learn robust EEG representations.
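
A minimal sketch of what this masked-signal-modeling pre-training stage could look like in PyTorch. The patching scheme, model sizes, and 75% mask ratio are illustrative assumptions, not details from the paper (which provides no source code):

```python
import torch
import torch.nn as nn

# Simplified masked-signal-modeling objective: the EEG recording is split into
# time patches, a random subset is replaced by a learned mask token, and a
# transformer encoder is trained to reconstruct the masked patches.
# Dimensions and mask ratio below are illustrative assumptions.

class EEGMaskedPretrainer(nn.Module):
    def __init__(self, n_channels=128, patch_len=16, d_model=256, mask_ratio=0.75):
        super().__init__()
        self.patch_len = patch_len
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(n_channels * patch_len, d_model)      # patch -> token
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4)
        self.decoder = nn.Linear(d_model, n_channels * patch_len)    # token -> patch
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, eeg):                                # eeg: (B, n_channels, T)
        B, C, T = eeg.shape
        patches = eeg.unfold(-1, self.patch_len, self.patch_len)     # (B, C, N, P)
        patches = patches.permute(0, 2, 1, 3).reshape(B, -1, C * self.patch_len)
        tokens = self.embed(patches)                                  # (B, N, d_model)
        mask = torch.rand(B, tokens.size(1), device=eeg.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        recon = self.decoder(self.encoder(tokens))
        # reconstruction loss is computed only on the masked patches
        return ((recon - patches) ** 2)[mask].mean()
```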

Fine-tuning a pre-trained Stable Diffusion text-to-image model using limited paired EEG-image data.
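
The fine-tuning stage can be pictured roughly as below, using the Hugging Face diffusers library: the EEG encoder's output is projected to the width of Stable Diffusion's text conditioning and passed to the UNet as encoder_hidden_states. The model ID, projection layer, and training details are assumptions for this sketch:

```python
import torch
import torch.nn.functional as F
from diffusers import UNet2DConditionModel, DDPMScheduler

# Illustrative training step: EEG embeddings replace the text embeddings that
# normally condition Stable Diffusion's UNet via cross-attention.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler")
eeg_proj = torch.nn.Linear(256, 768)   # EEG token dim -> SD cross-attention dim (assumed)

def training_step(latents, eeg_tokens):
    # latents: (B, 4, 64, 64) VAE latents of the paired image
    # eeg_tokens: (B, N, 256) sequence from the pre-trained EEG encoder
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.size(0),), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    cond = eeg_proj(eeg_tokens)                       # EEG embeddings as conditioning
    pred = unet(noisy, t, encoder_hidden_states=cond).sample
    return F.mse_loss(pred, noise)                    # standard denoising objective
```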

Using CLIP image embeddings to further optimize the EEG embeddings, aligning the EEG, text and image embeddings for improved image generation.
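
The CLIP alignment step can be sketched as an auxiliary loss that pulls a pooled EEG embedding toward the frozen CLIP embedding of the paired training image; the pooling, projection layer, and cosine loss below are illustrative assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F
from transformers import CLIPVisionModelWithProjection, CLIPImageProcessor

# Hypothetical alignment loss: project the pooled EEG embedding into CLIP's
# image-embedding space and penalize its distance to the CLIP embedding of
# the paired image. CLIP itself stays frozen.
clip_vision = CLIPVisionModelWithProjection.from_pretrained(
    "openai/clip-vit-large-patch14")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
clip_vision.requires_grad_(False)
eeg_to_clip = torch.nn.Linear(256, clip_vision.config.projection_dim)

def clip_alignment_loss(eeg_tokens, images):
    # eeg_tokens: (B, N, 256) from the EEG encoder; images: list of PIL images
    pixel_values = processor(images=images, return_tensors="pt").pixel_values
    with torch.no_grad():
        img_emb = clip_vision(pixel_values).image_embeds      # (B, projection_dim)
    eeg_emb = eeg_to_clip(eeg_tokens.mean(dim=1))             # mean-pooled EEG embedding
    # 1 - cosine similarity pulls the EEG and image embeddings together
    return (1 - F.cosine_similarity(eeg_emb, img_emb, dim=-1)).mean()
```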

The results show that DreamDiffusion can generate realistic images from EEG signals alone, demonstrating progress towards more portable and affordable "thought-to-image" systems.

Applicability: The proposed method demonstrates the promising potential of pre-trained diffusion models for applications that generate images directly from brain activity. The key ingredients - masked signal modeling pre-training, fine-tuning a pre-trained diffusion model, and alignment with CLIP's multi-modal embeddings - are applicable techniques for developing other brain-computer interface systems. However, the current results still show limitations in capturing fine-grained semantic information from EEG data. Overall, the paper outlines a path forward for building more capable brain-to-image generation systems.
