
 

Image: viewed images (top) and AI-generated versions (below)

Osaka University scientists used AI models to turn human brain scans into images that resemble what the person viewed. 

The researchers used the Stable Diffusion image generator from the startup Stability AI to reconstruct the images: by reading a participant's brain scans, the method can approximately re-create what that person saw.

  • The researchers used functional magnetic resonance imaging (fMRI) scans collected in a prior study while participants viewed distinct images, such as a teddy bear and an airplane.
    • They first trained a model to link fMRI data recorded from the participants' early visual cortex to the images they had viewed.
    • They then trained a second model to link fMRI data from the ventral (higher-order) visual cortex to text descriptions of those images.
    • Once those models were trained, they fed the brain-imaging data into Stable Diffusion, which reconstructed the images the participants had viewed with around 80% accuracy (see the sketch after this list for how such a two-stage pipeline fits together).
  • Notably, the study involved brain scans from only four participants, and the AI models had to be customized to each person, which required "lengthy brain-scanning sessions and huge fMRI machines," according to Sikun Lin of the University of California, Santa Barbara.
  • The preprint paper, "High-resolution image reconstruction with latent diffusion models from human brain activity," was co-authored by Shinji Nishimoto and Yu Takagi of the Graduate School of Frontier Biosciences (FBS) at Osaka University.
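For readers who want a concrete picture of the two-stage pipeline described above, here is a minimal sketch of how it could be wired together in Python. It is illustrative only: random arrays stand in for the real fMRI recordings and for the Stable Diffusion image latents and CLIP text embeddings, scikit-learn's Ridge regression stands in for the paper's regularized linear models, and the final img2img call is a simplification of the paper's approach of noising the predicted latent and denoising it with the U-Net conditioned on the predicted text embedding. The model ID CompVis/stable-diffusion-v1-4 and all variable names are assumptions, not details taken from the preprint.

```python
# Illustrative sketch only: random arrays stand in for real fMRI data and for
# the Stable Diffusion latents / CLIP text embeddings of the viewed images.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from diffusers import StableDiffusionImg2ImgPipeline

rng = np.random.default_rng(0)
n_train = 120          # number of training stimuli/scans (placeholder)
early_voxels = 300     # voxel count, early visual cortex (placeholder)
ventral_voxels = 300   # voxel count, ventral visual cortex (placeholder)

# Placeholder fMRI features and targets; real data would be preprocessed
# voxel responses paired with each training image and its caption.
X_early = rng.standard_normal((n_train, early_voxels))
X_ventral = rng.standard_normal((n_train, ventral_voxels))
Z = rng.standard_normal((n_train, 4 * 64 * 64))   # SD image latents of the viewed images
C = rng.standard_normal((n_train, 77 * 768))      # CLIP text embeddings of image captions

# Stage 1: early visual cortex -> image latent z (linear model).
z_model = Ridge(alpha=100.0).fit(X_early, Z)
# Stage 2: ventral visual cortex -> text-conditioning embedding c (linear model).
c_model = Ridge(alpha=100.0).fit(X_ventral, C)

# Decode one held-out scan.
x_early_test = rng.standard_normal((1, early_voxels))
x_ventral_test = rng.standard_normal((1, ventral_voxels))
z_pred = torch.from_numpy(z_model.predict(x_early_test)).float().reshape(1, 4, 64, 64)
c_pred = torch.from_numpy(c_model.predict(x_ventral_test)).float().reshape(1, 77, 768)

# Hand the predictions to Stable Diffusion (model ID is an assumption).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

with torch.no_grad():
    # Decode the predicted latent into a rough, low-level reconstruction.
    rough = pipe.vae.decode(z_pred / pipe.vae.config.scaling_factor).sample
rough_image = pipe.image_processor.postprocess(rough, output_type="pil")[0]

# Refine the rough image while conditioning on the predicted semantic embedding.
# (The paper instead adds noise to z in latent space and denoises it with the
# U-Net conditioned on c; img2img is used here as a convenient approximation.)
result = pipe(image=rough_image, prompt_embeds=c_pred, strength=0.75).images[0]
result.save("reconstruction.png")
```

If the preprint's description holds, the pretrained diffusion model does the heavy lifting: only the two linear maps from voxels into Stable Diffusion's existing latent spaces are fitted per participant, which is also why each model must be customized to the person whose scans it was trained on.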
