Since a spectrograph selects only one component of the signal to be analyzed (and that component extends infinitely in time), it should seemingly detect the component both before and after the signal actually arrives.
The standard answer is that spectrographs have a finite resolution: when selecting light with a given wavelength λ, the result is in practice a finite interval (λ − Δλ, λ + Δλ), defining the spectral resolution R = λ/Δλ. Let us assume a (respectable) value R = 10⁵ in the visible range, at λ = 500 nm.
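Just to fix the scale of the numbers, here is a minimal Python sketch using only the values assumed above:

```python
# Width of the wavelength interval selected at resolution R = lambda / delta_lambda
# (values from the text: lambda = 500 nm, R = 1e5).
lam = 500e-9            # central wavelength, m
R = 1e5                 # assumed spectral resolution

delta_lam = lam / R     # half-width of the interval (lambda - dl, lambda + dl)
print(f"delta_lambda = {delta_lam * 1e12:.0f} pm")   # -> 5 pm
```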
The uncertainty relation connects the resolution to the length of the pulse: Δω Δt ≥ 1/2 ⇒ Δt = R/(2ω) ≥ 8×10⁻¹¹ s ≃ 100 ps, where in the second step I used |Δω/ω| = |Δλ/λ|. A pulse with spectral sharpness Δλ must then be at least 100 ps long.
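A quick numerical check of this estimate (a sketch only, with the same λ = 500 nm and R = 10⁵ as above; the exact prefactor depends on whether one inserts the angular frequency or the plain optical frequency, but the order of magnitude is tens of picoseconds either way):

```python
import math

# Minimum pulse duration implied by Delta_omega * Delta_t >= 1/2
# with Delta_omega = omega / R (order-of-magnitude estimate).
c = 2.998e8                      # speed of light, m/s
lam = 500e-9                     # central wavelength, m
R = 1e5                          # spectral resolution lambda / delta_lambda

omega = 2 * math.pi * c / lam    # angular frequency, rad/s
delta_omega = omega / R          # frequency spread selected by the spectrograph
dt_min = 1 / (2 * delta_omega)   # = R / (2 * omega)

print(f"dt_min ~ {dt_min * 1e12:.0f} ps")
# With the angular frequency this gives ~13 ps; using the plain optical
# frequency c/lambda instead gives ~80 ps. Either way, a pulse with this
# spectral sharpness must last at least tens of picoseconds.
```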
This is a very short interval by the standards of classical spectroscopy. However, modern optical techniques can produce ultrashort pulses, down to Δt ≃ 10 fs. When observing such a pulse at resolution R, we should then see it spread out to 100 ps. Where is the error?
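For scale, here is a small numpy sketch (assuming an idealized Gaussian 10 fs pulse at 500 nm, nothing specific to any real instrument): the spectrum of such a pulse is tens of nanometres wide, i.e. it spans roughly 10⁴ of the 5 pm resolution elements considered above.

```python
import numpy as np

# Spectral width of an (assumed Gaussian) 10 fs pulse at 500 nm, compared
# with the ~5 pm resolution element of an R = 1e5 spectrograph.
c = 2.998e8
lam0 = 500e-9                          # carrier wavelength, m
nu0 = c / lam0                         # carrier frequency, Hz
sigma_t = 10e-15 / 2.355               # Gaussian field envelope, ~10 fs FWHM

t = np.arange(-500e-15, 500e-15, 0.1e-15)          # time grid, 0.1 fs steps
field = np.exp(-t**2 / (2 * sigma_t**2)) * np.cos(2 * np.pi * nu0 * t)

spectrum = np.abs(np.fft.rfft(field))**2
nu = np.fft.rfftfreq(t.size, d=t[1] - t[0])

# Full width at half maximum of the spectral intensity, in frequency,
# then converted to a wavelength width.
above_half = nu[spectrum > spectrum.max() / 2]
dnu = above_half.max() - above_half.min()
dlam = lam0**2 / c * dnu

print(f"pulse spectral width ~ {dlam * 1e9:.0f} nm")          # tens of nm
print(f"spectrograph bin     ~ {lam0 / 1e5 * 1e12:.0f} pm")   # ~5 pm
```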
More about this in the next part, where I'll also try to give a version of the paradox that is not affected by the finite resolution.