Since a spectrograph selects only a single frequency component of the signal to be analyzed (and a pure frequency component extends infinitely in time), it should detect that component both before and after receiving the signal.
The standard answer is that spectrographs have a finite resolution: when selecting light with a given wavelength \(\lambda\), the result is in practice a finite interval \( ( \lambda - \Delta \lambda, \lambda + \Delta \lambda)\), defining the spectral resolution \( R = \lambda / \Delta \lambda \). Let us assume a (respectable) value \( R = 10^5\) in the visible range, at \(\lambda = 500 \, \text{nm}\).
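For concreteness, with these numbers the selected window is extremely narrow: \[ \Delta \lambda = \frac{\lambda}{R} = \frac{500 \, \text{nm}}{10^5} = 5 \times 10^{-3} \, \text{nm} = 5 \, \text{pm}.\]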
The uncertainty (time-bandwidth) relation connects the resolution to the length of the pulse: \[ \Delta \nu \, \Delta t \gtrsim \frac{1}{2} \Rightarrow \Delta t \gtrsim \frac{R}{2 \nu} \approx 8 \times 10^{-11} \, \text{s} \simeq 100 \, \text{ps},\] where in the second step I used \( \left | \Delta \nu / \nu \right | = \left | \Delta \lambda / \lambda \right | = 1/R\) and \( \nu = c / \lambda = 6 \times 10^{14} \, \text{Hz}\). A pulse with the spectral sharpness \(\Delta \lambda\) must then be at least about 100 ps long.
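As a quick sanity check of this estimate, here is a minimal Python sketch (the rounded constants and variable names are my own):

```python
# Quick numerical check of the estimate above (a sketch; rounded constants).
c = 3.0e8      # speed of light [m/s]
lam = 500e-9   # wavelength [m]
R = 1e5        # spectral resolution lambda / delta_lambda

nu = c / lam   # optical frequency [Hz], about 6e14 Hz
dnu = nu / R   # frequency width passed by the spectrograph [Hz], about 6 GHz

# Time-bandwidth estimate: a pulse confined to dnu cannot be much shorter than 1/(2*dnu).
dt_min = 1.0 / (2.0 * dnu)
print(f"minimum pulse duration ~ {dt_min:.1e} s")  # ~8.3e-11 s, i.e. ~100 ps
```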
This is a very short interval by the standards of classical spectroscopy. However, modern optical techniques can produce ultrashort pulses, down to \( \Delta t \simeq 10 \, \text{fs}\). When observing such a pulse at resolution \( R \) we should then see it spread out to 100 ps. Where is the error?
More about this in the next part, where I'll also try to give a version of the paradox that is not affected by the finite resolution.