The multinomial logistic regression model is a generalization of logistic regression to outcomes with more than two categories. It is a minimal model consistent with the chosen set of observations (in this case, the firing rates of neurons) that makes no additional assumptions; in particular, it does not assume that variations in the neural responses follow a Gaussian distribution (Graf et al., 2011). A similar approach is also used for conditional random fields in machine learning research (Lafferty et al., 2001) and for maximum noise entropy models in neuroscience (Fitzgerald et al., 2011). Given a set of neural responses X_i, the classifier produces the probability that they were caused by motif j as:

Pr(Motif = j) = exp(Σ_i β_ji X_i) / Σ_{k=1..K} exp(Σ_i β_ki X_i)

where the β_ji are coefficients fitted to the model by maximum likelihood estimation, the index j (or k in the equation above) denotes one of the K possible classification outputs, and the index i enumerates the neural responses among the n neurons in the population. This technique provides

a convenient and mathematically optimal way to quantify how well a set of neurons can discriminate between multiple motifs. To find the coefficients, we used the MATLAB function mnrfit, and to obtain the probability of each motif from the fitted model, we used the MATLAB function mnrval (Statistics Toolbox, version 7.3, release 2010a). To avoid overfitting, we fit the model to 75% of the trials for each population and predicted motif identity for the remaining 25% of trials. This procedure was then repeated four times, to ensure

that all trials in each population received a prediction. For each trial, the model predicted the probability that the set of firing rates resulted from each of the four motifs. To compute the probability of correct classification, the probability of predicting the correct motif was averaged over all trials and all motifs for each population. Because we were interested in the net effect of correlations on motif discrimination, we needed an estimate of discrimination performance in the absence of correlations. To do this, we shuffled the trial ordering of each neuron in each data set, refit the model, and recomputed the probability of correct classification. This destroys trial-by-trial correlations (i.e., noise correlations) while leaving mean firing rates and signal correlations completely unaltered. To ensure that random correlations introduced by this process did not affect our analysis, we repeated the shuffling process 50 times and used the average probability of correct classification from these shuffles. We then computed the classification ratio as the probability of correct classification divided by the shuffled probability of correct classification.
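The classifier equation can be written in a few lines. Below is a minimal Python sketch of the softmax form given above (the original analysis used MATLAB's mnrfit, which parameterizes the model relative to a reference category but yields equivalent probabilities); the function name motif_probabilities is ours.

```python
import numpy as np

def motif_probabilities(x, beta):
    """Pr(Motif = j) = exp(sum_i beta_ji * x_i) / sum_k exp(sum_i beta_ki * x_i).

    x    : firing rates of the n neurons on one trial, shape (n,)
    beta : fitted coefficients, shape (K, n), one row per motif
    """
    logits = beta @ x          # sum_i beta_ji * x_i, one value per motif
    logits -= logits.max()     # subtract a constant for numerical stability
    w = np.exp(logits)
    return w / w.sum()         # normalize so the K probabilities sum to 1
```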
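The 4-fold fit-and-predict procedure (fit on 75% of trials, predict the held-out 25%, rotate so every trial receives a prediction) can be sketched as follows. This is a Python stand-in that assumes scikit-learn's LogisticRegression in place of MATLAB's mnrfit/mnrval; the function name prob_correct is ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

def prob_correct(X, y, n_folds=4, seed=0):
    """Return the predicted probability of the true motif, averaged
    over all trials, using n_folds-fold cross-validated predictions.

    X : firing rates, shape (n_trials, n_neurons)
    y : motif label for each trial, shape (n_trials,)
    """
    p_true = np.empty(len(y))
    folds = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train, test in folds.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        proba = clf.predict_proba(X[test])            # columns follow clf.classes_
        cols = np.searchsorted(clf.classes_, y[test])  # column of the true motif
        p_true[test] = proba[np.arange(len(test)), cols]
    return p_true.mean()
```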
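The shuffling step can be sketched in the same way. Assuming (as the text implies, since mean rates are stated to be unaltered) that each neuron's trial order is permuted independently within each motif, a Python sketch is below; the function name shuffle_trials is ours. The classification ratio would then be prob_correct on the original data divided by the average prob_correct over 50 such shuffles.

```python
import numpy as np

def shuffle_trials(X, y, rng):
    """Independently permute each neuron's trial order within each motif.

    This destroys trial-by-trial (noise) correlations between neurons
    while leaving each neuron's mean rate per motif -- and therefore the
    signal correlations -- unchanged.
    """
    Xs = X.copy()
    for motif in np.unique(y):
        rows = np.flatnonzero(y == motif)
        for i in range(X.shape[1]):                 # shuffle each neuron separately
            Xs[rows, i] = X[rng.permutation(rows), i]
    return Xs
```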
