Instruments
AC magnetic field generator for magnetic nanoparticle & fluid hyperthermia applications
Instrumentation | Stimulation

The system consists of a high-power radio frequency (RF) amplifier driving a resonant coil/capacitor combination. The coil surrounds the Sample Test Volume, which is accessible via the top opening lid and removable front panel. The RF amplifier is powered by an external DC Power Supply, and the frequency of operation is set by an external Function Generator.

The system operates over a wide range of frequencies (from 100 kHz to 1 MHz) with field strengths up to 20 kA/m (25 mT).

Purpose of Equipment:

Used for magnetic nanoparticle heating applications based on AC fields and solenoid coils.

We supply the MagneTherm system, which has been designed specifically for this purpose. It operates over a wide range of frequencies (from 100 kHz to 1 MHz) with field strengths up to 20 kA/m (25 mT), all in one system with no hidden extras needed.

Type of research that was enhanced by its use:

The RF field generator is used to stimulate new protein constructs with potential magnetogenetic capabilities. It is also used to stimulate biogenic and artificial magnetic nanoparticles in order to measure the heat produced at the surface of the particles.

Additional Resources: 

User Manual

Video manuals 

Learn more about the tool.

Adaptive Optics
Instrumentation | Data Analysis

UC Santa Barbara’s Che-Hang Yu has built a demo system that involves a deformable mirror and a Hartmann wavefront sensor. In this YouTube video, Yu demonstrates how this system can be used to measure optical aberrations and correct for them. 

Affine and Regularized DEformative Numeric Transform (ARDENT)
Software | Image analysis

Affine and Regularized DEformative Numeric Transform (ARDENT) is a Python package for performing automated image registration using Large Deformation Diffeomorphic Metric Mapping (LDDMM).

See code.

A theory of multineuronal dimensionality, dynamics and measurement
Commentary

We recently discussed this paper by Gao et al. from the Ganguli lab. They present a theory of neural dimensionality and sufficiency conditions for accurate recovery of neural trajectories, providing a much-needed theoretical perspective from which to judge a majority of systems neuroscience studies that rely on dimensionality reduction. Their results also provide a long overdue mathematical justification for drawing conclusions about entire neural systems based on the activity of a small number of neurons. I felt the paper was well written, and the mathematical arguments used in the proofs were pretty engaging — I don’t remember the last time I enjoyed reading supplementary material quite like this. Here’s a brief summary and some additional thoughts on the paper.

Linear dimensionality reduction techniques are widely used in neuroscience to study how behaviourally relevant variables are represented in populations of neurons. The general approach goes like this: (i) apply dimensionality reduction, e.g. PCA, to the trial-averaged activity of a population of M neurons to identify a P-dimensional subspace (P < M) capturing a sufficient fraction of the variance in neural activity, and (ii) examine how neural dynamics evolve within this subspace to (hopefully) gain insights about neural computation. This recipe has largely been successful (ignoring failures that generally go unpublished): the reduced dimensionality of neural datasets is often quite small and the corresponding low-dimensional dynamical portraits are usually interpretable. However, neuroscientists observe only a tiny fraction of the complete neural population. So could the success of dimensionality reduction be an artefact of severe subsampling? This is precisely the question that Gao et al. attempt to answer in their paper.
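To make the recipe concrete, here is a minimal Python sketch of steps (i) and (ii); the synthetic activity matrix, the neuron and time-bin counts, and the 90% variance threshold are all arbitrary choices for illustration, not anything taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic trial-averaged activity: M neurons x T time bins, generated from
# a low-dimensional latent process purely for illustration.
rng = np.random.default_rng(0)
M, T, latent_dim = 200, 500, 5
activity = rng.standard_normal((M, latent_dim)) @ rng.standard_normal((latent_dim, T))
activity += 0.1 * rng.standard_normal((M, T))

# Step (i): PCA with time bins as samples and neurons as features; count the
# components needed to capture ~90% of the variance.
pca = PCA().fit(activity.T)
cum_var = np.cumsum(pca.explained_variance_ratio_)
P = int(np.searchsorted(cum_var, 0.9) + 1)
print(f"P = {P} dimensions capture 90% of the variance")

# Step (ii): project the activity into the P-dimensional subspace and study
# the low-dimensional trajectories.
trajectories = pca.transform(activity.T)[:, :P]   # shape (T, P)
```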

They first develop a theory that describes how neural dimensionality (defined below) is bounded by the task design and some easy-to-measure properties of neurons. Then they adapt the mathematical theory of random projections to the neuroscience setting and derive the amount of geometric distortion introduced into the neural trajectories by subsampling, or equivalently, the minimum number of neurons one has to measure in order to achieve an arbitrarily small distortion in a real experiment. Throughout this post, I use the term neural dimensionality in the same sense that the authors use in the paper: the dimension of the smallest affine subspace that contains a large (~80–90%) fraction of the neural trajectories. Note that this notion of dimensionality differs from the intrinsic dimensionality of the neural manifold, which is usually much smaller.

To derive an analytical expression for dimensionality, the authors note that there is an inherent biological limit to how fast the neural trajectory can evolve as a function of the task parameters. Concretely, consider the response of a population of visual neurons to an oriented bar. As you change the orientation from 0 to \pi, the activity of the neural population will likely change too. If \vartheta denotes the minimum change in orientation required to induce an appreciable change in the population activity (i.e. the width of the autocorrelation in the population activity pattern), then the population will be able to explore roughly \pi/\vartheta linear dimensions. Of course, the scale of autocorrelation will differ across brain areas (presumably increasing as one goes from the retina to higher visual areas), so the neural dimensionality depends on the properties of the population being sampled, not just on the task design. Similar reasoning applies to other task parameters such as time (yes, they consider time as a task parameter because, after all, neural activity is variable in time). If you wait for a time period T, the dimensionality will be roughly equal to T/\tau, where \tau is now the width of the temporal autocorrelation. For the general case of K different task parameters, they prove that the neural dimensionality D is ultimately bounded by (even if you record from millions of neurons):

\displaystyle D \le C\,\frac{\prod_{k=1}^{K} L_k}{\prod_{k=1}^{K} \lambda_k} \qquad \qquad (1)

where L_k is the range of the k^{th} task parameter, \lambda_k is the corresponding autocorrelation length and C is an O(1) constant which they prove is close to 1. The numerator and denominator depend on task design and smoothness of neural dynamics respectively, so they label the term on the right-hand side neural task complexity (NTC). This terminology was a source of confusion among some of us as it appears to downplay the fundamental role of the neural circuit properties in restricting the dimensionality, but its intended meaning is pretty clear if you read the paper.

To derive NTC, the authors assume that the neural response is stationary in the task parameters and that the joint autocorrelation function is factorisable as a product of the individual task parameters’ autocorrelation functions, and then show that the above bound becomes weak when these assumptions do not hold for the particular population being studied. The proof was also facilitated in part by a clever choice of the definition of dimensionality: the ‘participation ratio’ = \left(\sum_i \mu_i\right)^2 / \left(\sum_i \mu_i^2\right), where \mu_i are the eigenvalues of the neuronal covariance matrix, instead of the more common but analytically cumbersome measure based on the ‘fraction x of variance explained’ = \mathrm{argmin}_D \ \text{s.t.} \ \left(\sum_{i=1}^{D} \mu_i\right) / \left(\sum_i \mu_i\right) \geq x, but they demonstrate that their choice is reasonable.
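Both definitions are easy to compute from the eigenvalues of the neuronal covariance matrix; here is a small Python sketch (my own, not the authors' code) for reference.

```python
import numpy as np

def participation_ratio(activity):
    """(sum_i mu_i)^2 / sum_i mu_i^2, with mu_i the eigenvalues of the
    neuron-by-neuron covariance matrix. activity: neurons x samples."""
    mu = np.linalg.eigvalsh(np.cov(activity))
    return mu.sum() ** 2 / (mu ** 2).sum()

def variance_explained_dimension(activity, x=0.9):
    """Smallest D whose top D eigenvalues explain a fraction x of the variance."""
    mu = np.sort(np.linalg.eigvalsh(np.cov(activity)))[::-1]
    return int(np.searchsorted(np.cumsum(mu) / mu.sum(), x) + 1)
```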

Much of the discussion in our journal club centred on whether equation (1) is just circular reasoning, and whether we really gain any new insight from this theory. This view was somewhat understandable because the authors introduce the paper by promising to present a theory that explains the origin of the simplicity betrayed by the low dimensionality of neural recordings… only to show us that it emerges from the specific way in which neural populations respond (smooth dynamics \approx large denominator) to specific tasks (low complexity \approx small numerator). Although this result may seem qualitatively trivial, the strength of their work lies in making our intuitions precise and packaging them in the form of a compact theorem. Moreover, as shown later in the paper, knowing this bound on dimensionality can be practically helpful in determining how many neurons to record. Before discussing that aspect, I’d like to dwell briefly on a potentially interesting corollary and a possible extension of the above theorem.

Based on the above theorem, one can identify three regimes of dimensionality for a recording size of M neurons:
(i) D\approx M;\ D\ll NTC
(ii) D\approx NTC;\ D\ll M
(iii) D\ll M;\ D\ll NTC

The first two regimes are pretty straightforward to interpret. (i) implies that you might not have sampled enough neurons, while (ii) means that the task was not complex enough to elicit richer dynamics. The authors call (iii) the most interesting and say ‘Then, and only then, can one say that the dimensionality of neural state space dynamics is constrained by neural circuit properties above and beyond the constraints imposed by the task and smoothness of dynamics alone’. What could those properties be? Here, it is worth noting that their theory takes the speed of neural dynamics into account, but not the direction. Recurrent connections, for example, might prevent the neural trajectory from wandering in certain directions thereby constraining the dimensionality. Such constraints may in fact lead to nonstationary and/or unfactorisable neuronal covariance, violating the conditions that are necessary for dimensionality to approach NTC. Although this is not explicitly discussed, they simulate a non-normal network to demonstrate that its dimensionality is reduced by recurrent amplification. So I guess it must be possible to derive a stronger theorem with a tighter bound on neural dimensionality by incorporating the influence of the strength and structure of connections between neurons.
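As a toy illustration of that intuition (this is not the authors' non-normal network simulation): if some hypothetical effective connectivity confines activity to a few directions, the participation ratio drops to roughly the number of surviving directions.

```python
import numpy as np

def participation_ratio(activity):
    mu = np.linalg.eigvalsh(np.cov(activity))
    return mu.sum() ** 2 / (mu ** 2).sum()

rng = np.random.default_rng(1)
M, T, rank = 100, 2000, 5
unconstrained = rng.standard_normal((M, T))            # isotropic activity

# Hypothetical low-rank "effective connectivity" that only passes a few directions.
W = rng.standard_normal((M, rank)) @ rng.standard_normal((rank, M))
constrained = W @ unconstrained

print(participation_ratio(unconstrained))   # close to M
print(participation_ratio(constrained))     # at most the rank of W (<= 5)
```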

NTC is a bound on the size of the linear subspace within which neural activity is mostly confined. But even if NTC is small, it is not clear whether we can accurately estimate the neural trajectory within this subspace simply by recording M neurons such that M\gg NTC. After all, M is still only a tiny fraction of the total number of neurons in the population N. To explore this, the authors use the theory of random projection and show that it is possible to achieve some desired level of fractional error \epsilon in estimating the neural trajectory by ensuring:

\displaystyle M(\epsilon) = K\left[O(\log NTC) + O(\log N) + O(1)\right]\,\epsilon^{-2} \qquad \qquad (2)

where K is the number of task parameters. This means that the demands on the size of the neural recording grow only linearly in the number of task parameters and logarithmically (!!) in both NTC and N. Equation (2) holds as long as the recorded sample is statistically homogeneous with the rest of the neurons, a restriction that is guaranteed for most higher brain areas provided the sampling is unbiased, i.e. the experimenter does not cherry-pick which neurons to record/analyse. The authors encourage us to use their theorems to obtain back-of-the-envelope estimates of recording size and to guide experimental design. This is easier said than done, especially when studying a new brain area or when designing a completely new task. Nevertheless, their work is likely to push the status quo in neuroscience experiments by encouraging experimentalists to move boldly towards more complex tasks without radically revising their approach to neural recordings.
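For what such a back-of-the-envelope estimate might look like, here is a rough Python sketch of equations (1) and (2). The task ranges, autocorrelation lengths, population size and target distortion below are invented numbers, and the unknown O(1) constants are simply set to 1, so treat the output as an order-of-magnitude guess at best.

```python
import numpy as np

# Hypothetical two-parameter task: orientation (range pi) and time (2 s).
task_ranges   = {"orientation": np.pi, "time": 2.0}      # L_k
autocorr_lens = {"orientation": np.pi / 8, "time": 0.2}  # lambda_k

# Equation (1): NTC = prod_k L_k / lambda_k (with C ~ 1).
NTC = np.prod([task_ranges[k] / autocorr_lens[k] for k in task_ranges])
print(f"NTC (upper bound on dimensionality): {NTC:.0f}")

# Equation (2) with all O(1) constants set to 1.
K, N, eps = len(task_ranges), 1e7, 0.1
M = K * (np.log(NTC) + np.log(N) + 1) / eps ** 2
print(f"rough recording size: {M:.0f} neurons")
```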

 

AutoGMM: Automatic Gaussian Mixture Modeling in Python
Software | Modeling

Gaussian mixture modeling is a fundamental tool in clustering, as well as discriminant analysis and semiparametric density estimation. However, estimating the optimal model for any given number of components is an NP-hard problem, and estimating the number of components is in some respects an even harder problem. In R, a popular package called mclust addresses both of these problems. However, Python has lacked such a package. We therefore introduce AutoGMM, a Python algorithm for automatic Gaussian mixture modeling. AutoGMM builds upon scikit-learn's AgglomerativeClustering and GaussianMixture classes, with certain modifications to make the results more stable. Empirically, on several different applications, AutoGMM performs approximately as well as mclust. This algorithm is freely available and therefore further shrinks the gap between the functionality of R and Python for data science. Read more
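The AutoGMM code itself is linked below. As a rough sketch of the underlying idea (not a reimplementation of AutoGMM, which adds agglomerative initialization and stability modifications), one can already sweep the number of components and covariance structures with scikit-learn's GaussianMixture and keep the model with the lowest BIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.0, (200, 2)), rng.normal(5, 0.5, (150, 2))])

best_model, best_bic = None, np.inf
for k in range(1, 7):
    for cov_type in ("full", "tied", "diag", "spherical"):
        gmm = GaussianMixture(n_components=k, covariance_type=cov_type,
                              n_init=5, random_state=0).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best_model, best_bic = gmm, bic

print(best_model.n_components, best_model.covariance_type)
labels = best_model.predict(X)
```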

See code (integrated into GraSPy). 

Balanced Network
Software | Modeling

Software repository

Simulation of a plastic balanced network.

The code to run a simulation of this balanced network is composed of four scripts and one function. The main script that runs the simulation is BalancedNetworkMain.m. This is where the duration of the run is set, along with individual neuron parameters and the number of neurons in the network. It is also where plots such as the raster plot and the time-dependent firing rates are generated. Lastly, this script computes the mean-field spike count covariances and correlations. The spike count covariances are computed using the function SpikeCountCov.m, which counts spikes over some time interval (T1 to T2) with some window size, for each neuron. The covariances are then obtained by taking the covariance of the matrix of spike counts.
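For readers who prefer Python, here is a sketch of the spike-count covariance computation as described above (SpikeCountCov.m itself is MATLAB; the spike-matrix layout follows the description of 's' later in this entry):

```python
import numpy as np

def spike_count_cov(s, n_neurons, t1, t2, win):
    """Sketch of the described computation: count spikes per neuron in
    windows of length `win` between t1 and t2, then take the covariance
    of the resulting count matrix.
    s: 2 x n_spikes array; row 0 = spike times, row 1 = neuron indices."""
    edges = np.arange(t1, t2 + win, win)
    counts = np.zeros((n_neurons, len(edges) - 1))
    for i in range(n_neurons):
        counts[i], _ = np.histogram(s[0, s[1] == i], bins=edges)
    return np.cov(counts)   # n_neurons x n_neurons spike-count covariance
```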

A key component that determines the dynamics of the network is the choice of connectivity matrix. This can be adjusted in Connectivity.m; currently the matrix is sparse and is built from mean connection strengths between cell pairs of each type, scaled by 1/\sqrt{N}.
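A Python sketch of that kind of connectivity matrix; the cell-type split, connection probability and mean strengths below are placeholder values, not the ones used in Connectivity.m:

```python
import numpy as np

rng = np.random.default_rng(0)
Ne, Ni = 800, 200                      # excitatory / inhibitory counts (placeholders)
N = Ne + Ni
p = 0.1                                # connection probability (placeholder)
Jm = np.array([[1.0, -2.0],            # mean strengths by (post, pre) cell type,
               [1.0, -2.0]])           # rows/cols ordered [E, I] (placeholder values)
pops = np.array([0] * Ne + [1] * Ni)   # 0 = excitatory, 1 = inhibitory

mask = rng.random((N, N)) < p                      # sparse random connectivity
J = Jm[np.ix_(pops, pops)] * mask / np.sqrt(N)     # scaled by 1/sqrt(N)
```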

The script named ExternalLayer.m determines the structure of the external feedforward layer, which in this case is a network of Poisson neurons with pairwise correlation c.
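One standard construction of such a correlated Poisson layer, sketched in Python (this is a common 'shared mother process' trick and not necessarily what ExternalLayer.m does): draw a mother Poisson process and let each neuron keep each mother spike independently with probability c, which yields Poisson trains with pairwise correlation c.

```python
import numpy as np

def correlated_poisson(n_neurons, rate, c, duration, seed=None):
    """Poisson spike trains (rate in Hz, duration in s) with pairwise
    correlation c, built by thinning a shared mother process of rate rate/c."""
    rng = np.random.default_rng(seed)
    n_mother = rng.poisson(rate / c * duration)
    mother_times = np.sort(rng.uniform(0.0, duration, n_mother))
    return [mother_times[rng.random(n_mother) < c] for _ in range(n_neurons)]
```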

The actual run of the network takes place in Simulation.m. In this script, we first define variables to record from some neurons and preallocate memory. The for loop goes through each time step of the simulation and starts by propagating the feedforward spikes onto the recurrent network. Euler's method on the exponential integrate-and-fire (EIF) differential equation is used to solve for the voltage at each point in time. Next, spikes are recorded into the matrix 's', whose first row stores spike times and whose second row stores the neuron index. Synaptic currents are also updated using Euler's method and are recorded.
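A Python sketch of one such forward-Euler EIF update (the parameter values are placeholders, not those used in Simulation.m):

```python
import numpy as np

def eif_euler_step(V, I_syn, dt=0.1, C=1.0, gL=0.05, EL=-72.0,
                   DeltaT=2.0, VT=-55.0, Vth=-10.0, Vre=-75.0):
    """One Euler step of the exponential integrate-and-fire equation for a
    vector of membrane potentials V driven by synaptic currents I_syn."""
    dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) + I_syn) / C
    V = V + dt * dV
    spiked = V >= Vth                 # threshold crossing -> spike
    V = np.where(spiked, Vre, V)      # reset spiking neurons
    return V, spiked
```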

The simulation can also be run with different neuron models. For example, if we use Simulation_AdEx.m in the main script, then each neuron is modeled by the adaptive EIF (AdEx) instead of the plain EIF formalism. This script adds three parameters that control adaptation (a, b, and tauw), and it creates a figure that shows the evolution of the adaptation current over time.
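A corresponding sketch of the AdEx variant, extending the EIF step above with an adaptation current w controlled by a, b and tauw (again, placeholder parameter values):

```python
import numpy as np

def adex_euler_step(V, w, I_syn, dt=0.1, C=1.0, gL=0.05, EL=-72.0,
                    DeltaT=2.0, VT=-55.0, Vth=-10.0, Vre=-75.0,
                    a=0.01, b=0.05, tauw=200.0):
    """One Euler step of the adaptive EIF: the adaptation current w is
    subtracted from the membrane equation, relaxes with time constant tauw,
    and jumps by b whenever the neuron spikes."""
    dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I_syn) / C
    dw = (a * (V - EL) - w) / tauw
    V, w = V + dt * dV, w + dt * dw
    spiked = V >= Vth
    V = np.where(spiked, Vre, V)
    w = np.where(spiked, w + b, w)    # spike-triggered adaptation increment
    return V, w, spiked
```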

Bayesian Efficient Coding
Commentary

On 15 September 2017, we discussed Bayesian Efficient Coding by Il Memming Park and Jonathan Pillow.

As the title suggests, the authors aim to synthesize Bayesian inference with efficient coding. The Bayesian brain hypothesis states that the brain computes posterior probabilities based on its model of the world (prior) and its sensory measurements (likelihood). Efficient coding assumes that the brain distributes its resources to optimize an objective, typically mutual information. In particular, they note that efficient coding that optimizes mutual information is a special case of their more general framework, but ask whether other objectives based on the Bayesian posterior might better explain data.

Denoting stimulus x, measurements y, and model parameters \theta, they use the following ingredients for their theory: a prior p(x), a likelihood p(y|x), an encoding capacity constraint C(\theta), and a loss functional L(\cdot). They assume that the brain is able to construct the true posterior p(x|y,\theta). The goal is to find a model that optimizes the expected loss

\bar{L}(\theta)=\mathbb{E}_{p(y|\theta)}\left[L(p(x|y,\theta))\right]

under the constraint C(\theta)\leq c.

The loss functional is the key. The authors consider two things the loss might depend on: the posterior L(p(x|y)), or the ground truth L(x,p(x|y)). They needed to make the loss explicitly dependent on the posterior in order to optimize for mutual information. It was unclear whether they also considered a loss depending on both, which seems critical. We communicated with them and they said they’d clarify this in the next version.

They state that there is no clear a priori reason to maximize mutual information (or equivalently to minimize the average posterior entropy, since the prior is fixed). They give a nice example of a multiple choice test for which encodings that maximize information will achieve fewer correct answers than encodings that maximize percent correct for the MAP estimates. The ‘best’ answer depends on how one defines ‘best’.

After another few interesting Gaussian examples, they revisit the famous Laughlin (1981) result on efficient coding in the blowfly. That result was hailed as a triumph for efficient coding theory because it predicted the photoreceptor's nonlinear input-output curve directly from the measured prior over luminance. But here the authors found that a different loss function on the posterior gave a better fit. Interestingly, though, that loss function was based on a point estimate,

L(x,p(x|y))=\mathbb{E}_{p(x|y)}\left[\left|x-\hat{x}(y)\right|^p\right]

where the point estimate is the Bayesian optimum for this cost function and p is a parameter. The limit p\to 0 gives the familiar entropy, p=2 is the conventional squared error, and the best fit to the data was p=1/2, a “square root loss.” While it’s hard to provide any normative explanation of why this or any other choice is best (since the loss is basically the definition of ‘best’, and you’d have to relate the theoretical loss to some real consequences in the world), it is very interesting that the efficient coding solution explains the data worse than their other Bayesian efficient coding losses.
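To get a feel for what the exponent p does to the point estimate, here is a small Python toy (not from the paper): given samples from an arbitrary, made-up posterior, brute-force the Bayes-optimal point estimate under the |x - \hat{x}|^p loss for a few values of p.

```python
import numpy as np

rng = np.random.default_rng(0)
posterior_samples = rng.gamma(shape=2.0, scale=1.0, size=20_000)  # a skewed toy posterior
candidates = np.linspace(0.0, 10.0, 801)

for p in (2.0, 1.0, 0.5):
    risk = [np.mean(np.abs(posterior_samples - c) ** p) for c in candidates]
    xhat = candidates[int(np.argmin(risk))]
    print(f"p = {p}: optimal point estimate = {xhat:.2f}")
# p = 2 recovers the posterior mean, p = 1 the median; p = 1/2 sits elsewhere,
# illustrating that the 'best' estimate depends on the chosen loss.
```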

Besides the minor confusion about whether their loss does/should include the ground truth x, and some minor disagreement about how much others have done things along this line (Ganguli and Simoncelli, Wei and Stocker, whom they do cite), my biggest question is whether the cost really should depend on the posterior as opposed to a point estimate. I’m a fan of Bayesianism, but ultimately one must take a single action, not a distribution. I discussed this with Jonathan over email, and he maintained that it’s important to distinguish an action from a point estimate of the stimulus: there’s a difference between the width of the river and whether to try to jump over it. I countered that one could refer actions back to the stimulus: the river is jumpable, or unjumpable (essentially a Gibsonian affordance). In a world of latent variables, any point estimate based on a posterior is a compromise based on the loss function.

So when should you keep around a posterior, rather than a point estimate? It may be that the appropriate loss function changes with context, and so the best point estimate would change too. While one could certainly consider that to be a bigger computation to produce a context-dependent point estimate, it may be more parsimonious to just represent information about the posterior directly.

Brainbow
Molecular Tools | Viral vectors

Plasmid DNA for packaging AAV; recombinant AAV (serotypes 1-9, retro, PHP.eB, PHP.S); primary antibodies to Brainbow FPs

Description of what it is used for

Stochastically labeling Cre-expressing cells in multiple colors

Description of its capabilities

Allows labeling of the fine neuronal structures of neighboring neurons with membrane-targeted Brainbow FPs

Location (Research Facility) 

Plasmid DNA [Addgene]; AAVs [University of Michigan Vector Core]; Brainbow FP antibodies [Kerafast]

Link to a User Manual 

Roossien DH, Cai D. Methods in Molecular Biology 2017;1642:211-228

Type of research that was enhanced by its use 

Single and multiple neuron morphology reconstruction in densely labeled samples

BrainModules
Software | Image analysis

BrainModules is an online interactive tool for visualizing brain architecture, based on a tracer database. Learn more about the tool.