
Entropy and Information in Neural Spike Trains

PHYSICAL REVIEW LETTERS, Volume 80, Number 1 (5 January 1998)

S. P. Strong,(1) Roland Koberle,(1,2) Rob R. de Ruyter van Steveninck,(1) and William Bialek(1)
(1) NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540
(2) Department of Physics, Princeton University, Princeton, New Jersey 08544
(Received 29 February 1996; revised manuscript received 11 July 1997)

The nervous system represents time dependent signals in sequences of discrete, identical action potentials or spikes; information is carried only in the spike arrival times. We show how to quantify this information, in bits, free from any assumptions about which features of the spike train or input signal are most important, and we apply this approach to the analysis of experiments on a motion sensitive neuron in the fly visual system. This neuron transmits information about the visual stimulus at rates of up to 90 bits/s, within a factor of 2 of the physical limit set by the entropy of the spike train itself. [S0031-9007(97)04939-9]


As you read this text, optical signals reaching your retina are encoded into sequences of identical pulses, termed action potentials or spikes, that propagate along the ~10^6 fibers of the optic nerve from the eye to the brain. This encoding appears universal, occurring in animals as diverse as worms and humans, and spanning all the sensory modalities [1]. The molecular mechanisms for the generation and propagation of action potentials are well understood [2], as are the mathematical reasons for the selection of stereotyped pulses [3].

Less well understood is the function of these spikes as a code [4]: How do sequences of spikes represent the sensory world, and how much information is conveyed in this representation? The temporal sequence of spikes provides a large capacity for transmitting information [5]. One central question is whether the brain takes advantage of this large capacity, or whether variations in spike timing represent noise which must be averaged away [4,6]. In response to a long sample of time varying stimuli, the spike train of a single neuron varies, and we can quantify this variability by the entropy per unit time of the spike train, S(Δt) [7], which depends on the time resolution Δt with which we record the spike arrival times. If we repeat the same time dependent stimulus, we see a similar, but not precisely identical, sequence of spikes (Fig. 1).

This variability at fixed input can also be quantified by an entropy, which we call the conditional or noise entropy per unit time, N(Δt). The information that the spike train provides about the stimulus is the difference between these entropies, R_info = S − N [7]. Because N is positive (semi)definite, S sets the capacity for transmitting information, and we can define an efficiency ε(Δt) = R_info(Δt)/S(Δt) with which this capacity is used [9]. The question of whether spike timing is important is really the question of whether this efficiency is high at small Δt [4]. For some neurons, we understand enough about what the spike train represents that direct decoding of the spike train is possible; the information extracted by these decoding methods can be more than half of the total spike train entropy with Δt ~ 1 ms [9].
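To make these definitions concrete, here is a minimal sketch (in Python, not part of the original paper) of how one might compute S, N, R_info = S − N, and the efficiency from word distributions. The function names, argument layout, and the choice to average the noise entropy over time slices of the repeated stimulus are our assumptions, not a procedure taken from the paper.

```python
import numpy as np

def entropy_bits(probs):
    """Shannon entropy, in bits, of a normalized probability vector."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                      # convention: 0 * log2(0) = 0
    return -np.sum(p * np.log2(p))

def information_rate(p_total, p_noise_slices, T):
    """Hypothetical helper: entropy rate S, noise entropy rate N, the
    information rate R_info = S - N, and the efficiency R_info / S,
    all in bits per unit time, for words of duration T.

    p_total        -- word probabilities accumulated over the whole experiment
    p_noise_slices -- list of word distributions, one per time slice of the
                      repeated stimulus (variability at fixed input)
    """
    S = entropy_bits(p_total) / T
    N = np.mean([entropy_bits(p) for p in p_noise_slices]) / T
    R_info = S - N
    return S, N, R_info, R_info / S
```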

The idea that sensory neurons provide a maximally efficient representation of the outside world has also been suggested as an optimization principle from which many features of these cells' responses can be derived [10]. But, particularly in the central nervous system [6], assumptions about what is being encoded should be viewed with caution. The goal of this paper is, therefore, to give a completely model independent estimate of entropy and information in neural spike trains as they encode dynamic signals. We begin by discretizing the spike train into time bins of size Δt, and examining segments of the spike train in windows of length T, so that each possible neural response is a word with T/Δt symbols. Let us call the normalized count of the ith word p_i.

Then the naive estimate of the entropy is

S_naive(T, Δt; size) = − Σ_i p_i log2 p_i ,    (1)

where the notation reminds us that our estimate depends on the size of the data set. The true entropy is

S(T, Δt) = lim_{size → ∞} S_naive(T, Δt; size) ,    (2)

and we are interested in the entropy rate S(Δt) = lim_{T → ∞} S(T, Δt)/T. At large T, very large data sets are required to ensure convergence of S_naive to the true entropy S. Imagine a spike train with mean spike rate r ~ 40 spikes/s, sampled with a time resolution Δt of a few milliseconds. In a window of T = 100 ms, the maximum entropy consistent with this mean rate [4,5] is close to 18 bits, and the entropy of real spike trains is not far from this bound. But then there are roughly 2^S ~ 2 × 10^5 words with significant p_i, and our naive estimation procedure seems to require that we observe many samples of each word.
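Before turning to the sampling problem, the sketch below illustrates the word construction and the naive estimate of Eq. (1) under simple assumptions: spikes are binned at resolution dt, at most one spike is assigned per bin, and non-overlapping windows of duration T are read off as words. The function name and these conventions are ours, not the paper's.

```python
import numpy as np
from collections import Counter

def naive_word_entropy(spike_times, dt, T, t_end):
    """Illustrative version of Eq. (1): discretize a spike train (spike
    times in seconds) into bins of width dt, cut it into non-overlapping
    words of duration T, and return -sum_i p_i log2 p_i in bits."""
    n_bins = int(round(t_end / dt))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_end))
    symbols = np.minimum(counts, 1)          # assume at most one spike per bin
    bins_per_word = int(round(T / dt))
    n_words = len(symbols) // bins_per_word
    word_counts = Counter(
        tuple(symbols[k * bins_per_word:(k + 1) * bins_per_word])
        for k in range(n_words)
    )
    p = np.array(list(word_counts.values()), dtype=float) / n_words
    return -np.sum(p * np.log2(p))
```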

If our samples come from nonoverlapping 100 ms windows, then we need much more than 3 hours of data. It is possible to make progress despite these pessimistic estimates. First, we examine explicitly the dependence of S_naive on the size of the data set and find regular behaviors [11] that can be extrapolated to the infinite data limit. Second, we evaluate robust bounds [7,12] on the entropy that serve as a check on our extrapolation.

FIG. 1. (a) Raw voltage records from a tungsten microelectrode near the cell H1 are filtered to isolate the action potentials; an expanded scale shows a superposition of several spikes to illustrate their stereotyped form. (b) Angular velocity of a pattern moving across the fly's visual field produces a sequence of spikes in H1, indicated by dots. Repeated presentations produce slightly different spike sequences. For experimental methods, see Ref. [8].

Third, we are interested in the extensive component of the entropy, and we find that a clean approach to extensivity is visible before sampling problems set in. Finally, for the neuron studied, the motion sensitive neuron H1 in the fly's visual system, we can actually collect many hours of data. H1 responds to motion across the entire visual field, producing more spikes for an inward horizontal motion and fewer spikes for an outward motion; vertical motions are coded by other neurons [13]. These cells provide visual feedback for flight control.
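Returning to the third point above, the extensive component of the entropy, one illustrative way to extract an entropy rate is sketched below: assuming S(T)/T approaches its limit linearly in 1/T, the rate can be read off as the intercept of a straight-line fit. The linear form and the fitting procedure are our assumptions, not necessarily the paper's own analysis.

```python
import numpy as np

def entropy_rate_from_windows(T_values, S_values):
    """Sketch: if S(T)/T ~ S_rate + c/T at large T, a straight-line fit of
    S(T)/T against 1/T gives the entropy rate as the intercept at 1/T -> 0.
    The linear form is an illustrative assumption."""
    T = np.asarray(T_values, dtype=float)
    S = np.asarray(S_values, dtype=float)
    slope, intercept = np.polyfit(1.0 / T, S / T, deg=1)
    return intercept      # estimated entropy rate (bits per unit time)
```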

In the experiments analyzed here, the fly is immobilized and views computer generated images on a display oscilloscope. For simplicity these images consist of vertical stripes with randomly chosen grey levels, and this pattern takes a random walk in the horizontal direction [14]. We begin our analysis with time bins of size Δt = 3 ms. For a window of T = 30 ms, corresponding to the behavioral response time of the fly [15], Fig. 2 shows the histogram {p_i} and the naive entropy estimates. We see that there are very small finite data set corrections (~10^−3), well fit by [11]

S_naive(T, Δt; size) = S(T, Δt) + S_1(T, Δt)/size + S_2(T, Δt)/size^2 .    (3)

Under these conditions we feel confident that the extrapolated S(T, Δt) is the correct entropy.
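A minimal sketch of the extrapolation implied by Eq. (3): treat S_naive as a polynomial in 1/size and read off the constant term as the infinite-data estimate. How "size" is varied in practice (for example, by analyzing fractions of the data) and the use of a simple least-squares fit are assumptions on our part.

```python
import numpy as np

def extrapolate_to_infinite_data(sizes, S_naive_values):
    """Sketch of the fit in Eq. (3): S_naive = S + S1/size + S2/size**2.
    A least-squares polynomial fit in 1/size returns S as the constant
    term, i.e. the estimate at infinite data size."""
    x = 1.0 / np.asarray(sizes, dtype=float)
    y = np.asarray(S_naive_values, dtype=float)
    S2, S1, S = np.polyfit(x, y, deg=2)   # coefficients from highest power down
    return S
```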

For sufficiently large T, finite size corrections are larger, the contribution of the second correction is significant, and the extrapolation to infinite size is less reliable. Ma [12] discussed the problem of entropy estimation in the undersampled limit. For probability distributions that are uniform on a set of N bins (as in the microcanonical ensemble), the entropy is log2 N and the problem is to estimate N. Ma noted that this could be done by counting coincidences.

FIG. 2. The frequency of occurrence for different words in the spike train, with Δt = 3 ms and T = 30 ms, placed in order so that the histogram is monotonically decreasing; at this value of T the most likely word corresponds to no spikes. Inset shows the dependence of the entropy, computed from this histogram according to Eq. (1).
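As one possible reading of Ma's coincidence-counting idea, the sketch below estimates the probability that two independently drawn words coincide and converts it into a lower bound on the entropy. The exact estimator and conventions here are ours; the paper's own use of the bound may differ.

```python
import numpy as np
from collections import Counter

def ma_entropy_bound(word_samples):
    """Sketch of Ma's coincidence counting: estimate the probability that
    two independently drawn words coincide and return
    S_Ma = -log2(P_coincidence), a lower bound on the entropy that is
    tight when the distribution is nearly uniform on its support."""
    counts = np.array(list(Counter(word_samples).values()), dtype=float)
    n = counts.sum()
    # Unbiased pair-counting estimate of sum_i p_i**2; undefined until at
    # least one word has been observed twice.
    p_coincidence = np.sum(counts * (counts - 1.0)) / (n * (n - 1.0))
    return -np.log2(p_coincidence)
```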

