Glossary

The attached notebook uses some jargon that might be new to you. Hopefully this glossary will help clarify things, but please ask if something is unclear!

  • Bandpass: a filter or model that is most sensitive to frequencies in a middle range and is less sensitive or insensitive to high and low frequencies. Much of the early visual system, including retinal ganglion cells, lateral geniculate nucleus neurons, and primary visual cortical neurons, displays bandpass selectivity. Example functional forms include differences of Gaussians, Gabor filters, Morlet wavelets, and the filters of the steerable pyramid (see the difference-of-Gaussians sketch after this glossary). Compare to highpass and lowpass.

  • Eigendistortions: the image distortions that produce the most and the least noticeable changes in a model’s response. They are the eigenvectors of the model’s Fisher information matrix, which provides a quadratic approximation of the discriminability of distortions of a given image. In the cases we consider, every model is a deterministic, differentiable mapping from images to representations, so the Fisher information matrix equals $J^T J$, where $J$ is the model’s Jacobian evaluated at the target image (see the sketch after this glossary).

  • Gain control: also known as divisive normalization, gain control is ubiquitous in the central nervous system and has been proposed as a canonical neural computation that allows the brain to maximize sensitivity to relevant stimuli in changing contexts. An example is the way the human eye adapts to different light levels: when entering a dark room from a bright environment, we are initially unable to make out any details, but adaptation shifts the range of intensities the eye is sensitive to. Physical processes (e.g., changes in pupil size) account for some of this, but gain control is another mechanism by which it can be implemented (a sketch of divisive normalization follows the glossary).

  • Highpass: a filter or model that is most sensitive to high frequencies and is less sensitive or insensitive to middle and low frequencies. Compare to bandpass and lowpass.

  • Invariances / invariant: if a model is invariant to an image feature, the presence or absence of that feature does not affect the model’s output; more strongly, the feature can be randomized without changing the output at all. Such features are called the model’s invariances (see the sketch after this glossary).

  • Lowpass: a filter or model that is most sensitive to low frequencies and is less sensitive or insensitive to middle and high frequencies. The classic example is a Gaussian (used in the difference-of-Gaussians sketch after this glossary). Compare to bandpass and highpass.

  • Metamers: visual inputs that are physically distinct but perceptually identical, such as a scene and an RGB image of that scene. In plenoptic, we synthesize model metamers: images with different pixel values that produce identical model outputs (see the synthesis sketch after this glossary).

  • Model: a computational model maps some input stimulus to a representation, based on some parameters. Neural networks, Gaussian filters, and the energy model of V1 complex cells are all examples of models. In vision science, we typically use these models to better understand some aspect of a biological visual system, by trying to map the model representation to some aspect of the system being modeled, such as neuronal firing rate, behavioral responses, or fMRI BOLD. The goal of plenoptic is to facilitate understanding and improvement of these models.

  • Parameters: values that govern a model’s behavior, such as the numbers that make up a convolutional filter in a neural network, the standard deviation of a Gaussian, or the orientation of a Gabor. Most models have multiple parameters (some have a great many!), and these parameters are typically fit to experimentally observed data using optimization. In plenoptic, model parameters are held fixed and do not change.

  • Representation: the model output. This is often a vector of numbers or a two-dimensional, image-like array. Representations may be abstract but are often mapped to some aspect of the system being modeled, such as neuronal firing rate, behavioral responses, or fMRI BOLD.

  • Stimuli: the model input. In vision science, these are typically images or videos. In plenoptic, we synthesize stimuli in order to better understand models; see Metamers and Eigendistortions.

  • Synthesis: the process by which stimuli are generated in plenoptic. This is generally, though not always, accomplished via iterative optimization (see the metamer sketch below).
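
Since several entries above reference concrete computations, a few toy sketches follow. They use plain PyTorch rather than plenoptic itself, and every model and helper defined below is our own illustration, not plenoptic's API.

For the bandpass and lowpass entries: each Gaussian is a lowpass filter, and subtracting a wide Gaussian from a narrow one cancels the low frequencies, leaving a difference-of-Gaussians filter that responds most strongly to a middle band of frequencies.

```python
import torch

def gaussian_filter(n, sigma):
    """A unit-sum 1D Gaussian of length n: a lowpass filter."""
    x = torch.arange(n, dtype=torch.float32) - n // 2
    g = torch.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

n = 256
center = gaussian_filter(n, sigma=2)    # narrow Gaussian: passes low and middle frequencies
surround = gaussian_filter(n, sigma=8)  # wide Gaussian: passes only low frequencies
dog = center - surround                 # difference of Gaussians: bandpass

# The amplitude of the frequency response is near zero at frequency 0 (both
# Gaussians sum to 1, so they cancel there), peaks at a middle frequency, and
# falls back toward zero at high frequencies.
response = torch.abs(torch.fft.rfft(dog))
print(f"response at zero frequency: {response[0].item():.4f}")
print(f"peak response {response.max().item():.4f} at frequency bin {response.argmax().item()}")
```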
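
For the eigendistortions entry: compute the model's Jacobian $J$ at an image, form the Fisher information matrix $J^T J$, and take its extreme eigenvectors. The blur model below is a stand-in assumed purely for illustration.

```python
import torch

def model(image):
    """Toy differentiable model: blur the image with a 3x3 box filter. Any
    deterministic, differentiable image-to-representation mapping would do."""
    kernel = torch.ones(1, 1, 3, 3) / 9.0
    return torch.nn.functional.conv2d(image, kernel, padding=1)

image = torch.rand(1, 1, 16, 16)

# Jacobian of the flattened representation with respect to the flattened image.
flat_model = lambda x: model(x.reshape(1, 1, 16, 16)).reshape(-1)
J = torch.autograd.functional.jacobian(flat_model, image.reshape(-1))

# For a deterministic, differentiable model, the Fisher information matrix is J^T J.
F = J.T @ J

# Eigenvectors of F, in ascending order of eigenvalue: the last is the most
# noticeable distortion, the first the least noticeable.
eigenvalues, eigenvectors = torch.linalg.eigh(F)
most_noticeable = eigenvectors[:, -1].reshape(16, 16)
least_noticeable = eigenvectors[:, 0].reshape(16, 16)
print(f"largest eigenvalue: {eigenvalues[-1].item():.4f}, smallest: {eigenvalues[0].item():.2e}")
```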
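
For the gain control entry: one common functional form of divisive normalization (many variants exist in the literature) divides each unit's exponentiated response by a semi-saturation constant plus the pooled activity of the population, so the same relative pattern of inputs maps to nearly the same output across very different overall levels.

```python
import torch

def divisive_normalization(responses, sigma=0.1, n=2.0):
    """Divide each unit's exponentiated response by a semi-saturation constant
    plus the pooled (summed) exponentiated responses of the whole population."""
    numerator = responses.abs() ** n
    pooled = numerator.sum(dim=-1, keepdim=True)
    return numerator / (sigma ** n + pooled)

# The same relative pattern at two very different overall levels ("contexts")
# is mapped to nearly identical normalized outputs.
pattern = torch.tensor([0.2, 0.5, 1.0])
print(divisive_normalization(10.0 * pattern))
print(divisive_normalization(100.0 * pattern))
```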
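
For the invariances entry: a toy model that subtracts the image's mean is invariant to overall (DC) luminance, so randomizing that feature leaves the output unchanged.

```python
import torch

def mean_removing_model(image):
    """Toy model that discards the mean: invariant to overall luminance."""
    return image - image.mean()

image = torch.rand(16, 16)
# Randomize the invariant feature by adding a random constant to every pixel:
# the model's output does not change (up to floating-point precision).
shifted = image + 100 * torch.rand(1)
print(torch.allclose(mean_removing_model(image), mean_removing_model(shifted), atol=1e-4))
```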
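
For the metamers and synthesis entries: a minimal sketch of synthesizing a model metamer by iterative optimization, using a toy local-averaging model and a hand-rolled loop rather than plenoptic's synthesis classes. Starting from noise, the pixels are adjusted until the model's representation of the synthesized image matches its representation of the target, yielding an image with (nearly) the same representation but very different pixel values.

```python
import torch

def model(image):
    """Toy model: non-overlapping 4x4 local averages. Many different images
    share the same local averages, so this model has many metamers for any
    given target image."""
    kernel = torch.ones(1, 1, 4, 4) / 16.0
    return torch.nn.functional.conv2d(image, kernel, stride=4)

target = torch.rand(1, 1, 32, 32)
target_rep = model(target)

# Start from white noise and iteratively adjust the pixels so that the model's
# representation of the synthesized image matches that of the target.
synth = torch.rand(1, 1, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([synth], lr=0.01)
for step in range(2000):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(synth), target_rep)
    loss.backward()
    optimizer.step()

# A model metamer: (nearly) the same representation, very different pixels.
with torch.no_grad():
    print("representation MSE:", torch.nn.functional.mse_loss(model(synth), target_rep).item())
    print("pixel MSE:         ", torch.nn.functional.mse_loss(synth, target).item())
```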