The paper, at least as presented here [1], is vague about which results came from implanted electrodes and which came from functional MRI data. fMRI shows blood flow: it's like looking at an IC with a thermal imager and trying to figure out what it is doing.
Just to clarify, the paper [0] does use both implanted electrodes and fMRI data, but it is actually quite transparent about which data came from which source. The authors worked with two datasets: the B2G dataset, which includes multi-unit activity from implanted Utah arrays in macaques, and the Shen-19 dataset, which uses noninvasive fMRI from human participants.
You’re right that fMRI measures blood flow rather than direct neural activity, and the authors acknowledge that limitation. But the study doesn’t treat it as a direct window into brain function. Instead, it proposes a predictive attention mechanism (PAM) that learns to selectively weigh signals from different brain areas according to how useful they are for reconstructing the perceived images.
The “thermal imager” analogy might make sense in a different context, but in this case, the model is explicitly designed to deal with those signal differences and works across both modalities. If you’re curious, the paper is available here:
[0] https://www.biorxiv.org/content/10.1101/2024.06.04.596589v2....
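If it helps to picture what "selectively weigh signals from different brain areas" could mean in practice, here is a minimal sketch of that kind of attention pooling. To be clear, this is not the paper's PAM architecture; the region count, dimensions, and the latent target are all placeholders.

```python
# Minimal sketch (not the authors' code): learn softmax weights over brain
# regions so the decoder can emphasise whichever areas carry useful signal.
import torch
import torch.nn as nn

class RegionAttentionDecoder(nn.Module):
    def __init__(self, n_regions, region_dim, latent_dim):
        super().__init__()
        # One linear embedding per brain region (think V1, V4, IT).
        self.embed = nn.ModuleList(nn.Linear(region_dim, latent_dim) for _ in range(n_regions))
        # Learned scores turned into attention weights over regions.
        self.region_scores = nn.Parameter(torch.zeros(n_regions))
        self.head = nn.Linear(latent_dim, latent_dim)

    def forward(self, region_signals):
        # region_signals: list of (batch, region_dim) tensors, one per region
        feats = torch.stack([f(x) for f, x in zip(self.embed, region_signals)], dim=1)
        weights = torch.softmax(self.region_scores, dim=0)
        pooled = (weights[None, :, None] * feats).sum(dim=1)  # weighted sum over regions
        return self.head(pooled)  # predicted image latent, to be fed to a generator

model = RegionAttentionDecoder(n_regions=3, region_dim=96, latent_dim=512)
dummy = [torch.randn(8, 96) for _ in range(3)]  # stand-in for voxel / multi-unit features
print(model(dummy).shape)  # torch.Size([8, 512])
```

Train something like this against latents of the images actually shown, and the learned weights tell you which regions the model found informative for each dataset.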
If you can extract private keys by measuring how much power a chip consumes I don’t really see a problem with extracting images from fMRI data….
Fair point. Side-channel attacks show how much signal you can pull from noise. But fMRI is a different kind of beast. It’s slow, indirect, and coarse. You’re not measuring neural activity directly, just blood flow changes that lag by a few seconds.
The paper [0] doesn’t pretend otherwise. It trains a model (PAM) to learn which brain regions carry useful info for reconstructing images, and applies this to both fMRI data from humans and intracranial recordings from macaques. The two signal types are handled separately.
If you want an analogy, it’s less like tapping power lines and more like trying to figure out which YouTube video someone is watching by measuring heat on the back of their laptop every few seconds. There’s a pattern in there, but pulling it out takes work.
[0] https://www.biorxiv.org/content/10.1101/2024.06.04.596589v2....
That could be an interesting project in itself: take a simple 8-bit microcontroller, a thermal camera, and some code that performs different kinds of operations, then see if you can at least train a classification model, or even reconstruct the running code via an image-to-text LLM.
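Purely as a sketch of the classification half of that project, with everything hypothetical (the label set, the 24x32 frame size, the data itself), it might look something like this:

```python
# Hypothetical sketch: classify which routine an 8-bit MCU is running from
# low-resolution thermal frames. Real data would come from a thermal camera
# pointed at the chip while it loops through known workloads.
import torch
import torch.nn as nn

N_OPS = 4  # e.g. idle, ADC sampling, PWM, flash writes -- labels you define yourself

classifier = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),   # single-channel thermal frame
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, N_OPS),
)

# Placeholder batch: 32 frames of 24x32 pixels (a common cheap thermal-sensor resolution).
frames = torch.randn(32, 1, 24, 32)
labels = torch.randint(0, N_OPS, (32,))
loss = nn.CrossEntropyLoss()(classifier(frames), labels)
loss.backward()  # one gradient step; wrap in an optimizer loop for real training
```

The image-to-text LLM half is a much bigger lift, but a classifier like this would at least tell you whether the thermal signal carries any usable information.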
Ah yes, yet another attack vector chip manufacturers will have to protect against now.
I want to see a cat's POV when it's startled by a cucumber (YouTube has lots of examples). A theory is that part of the brain mistook it for a snake. Also, there's research on "constant bearing, decreasing range" (CBDR), where drivers may not notice another car/cycle at a perfectly clear crossroads until it's too late.
For something like these kinds of reflexes, my understanding is that the response comes from the central nervous system, even before the brain has had the chance to fully process the input. This shortcut makes one avoid, say, burns or snakes, quicker than if it required the brain. Still, I agree with you that seeing what a cat sees (here or anywhere) would be awesome.
I think the distinction you're drawing between "the central nervous system" and "the brain" is mistaken here -- the brain is part of the CNS. This kind of reflex basically has to involve the brain b/c it involves both the visual system and the motor system i.e. there's not a fast path from the retina to moving your appendages etc that doesn't include the brain.
The "fully process" part is part of the story though -- e.g. perhaps some reactions use the dorsal stream based on peripheral vision while ventral stream is still waiting on a saccade and focus to get higher resolution foveal signals. But though these different pathways in the brain operate at different speeds, they're both still very much in the brain.
In this article you can see a typical and a "broken" visual-to-amygdala fear shortcut (the "broken" one is an MRI of the famous climber Honnold):
https://assets.nautil.us/10086_6412121cbb2dc2cb9e460cfee7046...
https://nautil.us/the-strange-brain-of-the-worlds-greatest-s...
(the path runs from the back of the head (V5?), where the visual signal comes into the brain)
Some touch-based reflexes might avoid the higher parts of the brain though no?
Yeah I think there are multiple documented cases of this, where especially well-practiced motor-plans seem to be 'pushed down', and if they're interrupted, correction can start faster than a round trip to the brain.
Reflexes do not necessarily have to exist in the brain, but they do exist in the central nervous system. The peripheral nervous system doesn't handle reflexes as far as I'm aware.
The PNS does not process reflexes, but it is essential for transmitting the sensory and motor components of reflex arcs.
But yeah, reflexes are processed in the central nervous system (CNS), typically the spinal cord or brainstem, not necessarily the brain.
I think it would be interesting to know whether the viewer's familiarity with the object informs how accurate the reconstruction is. This shows presumably lab-raised macaques looking at boats and tarantulas and goldfish -- and that's cool. But a macaque, especially one whose life has been spent indoors in confinement, presumably has no mental concepts for these things, so they're basically seeing still images of unfamiliar objects. If the animal has e.g. some favorite toys, or has eaten a range of foods, do they perceive these things with higher detail and fidelity?
It reminds me of the research where faces that monkeys were viewing were reconstructed almost identically.
I hope one day we can turn this on for coma patients and see if they're dreaming or otherwise processing the world.
Using these techniques, never. The electrode methods can only see a tiny section of processing and are missing all the information elsewhere. fMRI is very low resolution. Because of this they are all very overfitted: they cue off very particular subject-specific quirks that will not generalize well.
More importantly, these techniques operate on the V1, V4 and inferior temporal cortex areas of the brain. These areas will fire in response to retinal stimulation regardless of what's happening in the rest of your brain. V1 in particular is connected directly to your retinas. While deeper areas may be sympathetically activated by hallucinations etc., they aren't really related to your conception of things. In general, if you want to read someone's thoughts you would look elsewhere in the brain.
Maybe I missed this, but isn't the underlying concept here big news?
Am I understanding this right? It seems that by reading areas of the brain, a machine can effectively act as a rendering engine with knowledge of colour, brightness, etc. per pixel, based on an image the person is seeing? And AI is being used to help because this method is lossy?
This seems huge, is there other terminology around this I can kagi to understand more?
>And AI is being used to help because this method is lossy?
AI is the method. They put somebody in a brain scanner and flash images on a screen in front of them. Then they train a neural network on the correlations between their brain activity and the known images.
To test it, you display unknown images on the screen and have the neural network predict the image from the brain activity.
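As a rough illustration of that setup (not the paper's actual pipeline; the shapes and the "image feature" representation here are made up), the simplest linear version looks like this:

```python
# Sketch: fit a decoder on (brain activity, known image) pairs, then predict
# image features for held-out images from brain activity alone.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_trials, n_voxels, feat_dim = 1200, 5000, 256          # all placeholder sizes
X = np.random.randn(n_trials, n_voxels)                 # brain activity per viewed image
Y = np.random.randn(n_trials, feat_dim)                 # features of the images shown

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

decoder = Ridge(alpha=10.0)
decoder.fit(X_train, Y_train)        # learn the activity -> image-feature mapping
Y_pred = decoder.predict(X_test)     # predict features for images the model never saw

print(np.corrcoef(Y_pred.ravel(), Y_test.ravel())[0, 1])
```

The real systems swap the ridge regression for deep networks and condition an image generator on the predicted features, but the train-on-known / test-on-unknown structure is the same.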
> Then they train a neural network on the correlations between their brain activity and the known images.
Not onto known images, onto latent spaces of existing image networks. The recognition network is getting a very approximate representation which it is then mapping onto latent spaces (which may or may not be equivalent) and then the image network is filling in the blanks.
When you're using single-subject, well-framed images like this they're obviously very predictable. If you showed something unexpected, like a teddy bear with blue skin, the network probably would just show you a normal-ish teddy bear. It's also screwy if it doesn't have a well-correlated input, which is how you get those weird distortions. It will also be very off for things that require precision like seeing the actual outlines of an object, because the network is creating all that detail from nothing.
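To make that two-stage picture concrete: one model maps brain activity into the latent space of a frozen image generator, and the generator then invents all the fine detail. In the sketch below, `PretrainedImageGenerator` is just a stand-in class, not a real library model.

```python
# Sketch of the two-stage idea: brain activity -> approximate latent -> generated image.
import torch
import torch.nn as nn

LATENT_DIM = 512

class PretrainedImageGenerator(nn.Module):
    # Stand-in for a frozen generator whose latent space the decoder targets;
    # here it just maps a latent to a fake 64x64 RGB image.
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, 3 * 64 * 64)

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

brain_to_latent = nn.Linear(5000, LATENT_DIM)   # learned mapping: voxels -> latent
generator = PretrainedImageGenerator().eval()   # frozen; only the mapping is trained

activity = torch.randn(1, 5000)                 # one trial of brain activity
with torch.no_grad():
    z = brain_to_latent(activity)               # a very approximate point in latent space
    image = generator(z)                        # the generator fills in the blanks
print(image.shape)  # torch.Size([1, 3, 64, 64])
```

Which is also why a blue-skinned teddy bear would come out normal-coloured: the detail comes from the generator's priors, not from the measured signal.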
At least the stuff using a Utah array (a square implanted electrode array) is not transferable between subjects, and the fMRI stuff also might not be transferable. These models are not able to see enough detail to know what is happening: they only see glimpses of a small section of the process (Utah array) or very vague indirect processes (fMRI). They're all very overfitted.
This requires intrusive electrodes. See also "fMRI visual recognition": https://scholar.google.com/scholar?q=fmri+visual+recognition
There are startups working on less intrusive (e.g. headset) brain-computer interfaces (BCI).
fMRI isn't the one with the electrodes, it's the one with the giant scanner and no metal objects in the room.
A big blocker, I believe, besides the giant expensive fMRI machine, is that each person is different, so a model trained on Bob won't work on Jane.
Big jump when we go from decoding what you’re seeing to what you’re thinking.
This is a big jump ethically, but technically it feels like it's a hop away. If we can do this for visual images, we could use the same strategy on patterns of thought, especially if the person is skilled at visualisation.
In that case, you'll need the equivalent of ad-blockers for the brain, to prevent eavesdropping and intrusions by commercial and state actors.