Focusing on the universe, one spectrum at a time.
When measuring the spectra of distant celestial bodies, it is important to achieve the highest possible resolution. In spectroscopy, the spectral resolution (R) is a dimensionless ratio that defines how fine the resolution is relative to the wavelength. Formally, it is defined as:

R = λ / Δλ
where λ is the wavelength of the measured light and Δλ is the smallest resolvable wavelength difference.
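As a small illustration, here is a minimal Python sketch of this definition (the function name and example values are mine, chosen purely for illustration):

def spectral_resolution(wavelength_nm, delta_lambda_nm):
    # R = λ / Δλ; dimensionless, higher means finer detail
    return wavelength_nm / delta_lambda_nm

# Example: an instrument that resolves 0.1 nm at 500 nm has R = 5000
print(spectral_resolution(500.0, 0.1))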
The resolution can be calculated for each component of the spectrometer, with the lowest value determining the overall performance of the entire system.
In my Czerny–Turner configuration, I will therefore calculate the resolution of the entrance slit, the reflective diffraction grating, and the CCD detector. Since I am using the optical assembly of a B&W Tek BTC100-2S spectrometer but replacing its original sensor, a direct comparison between the resolutions of the original and the new CCD (TCD1304DG) will provide valuable insight into how the detector influences the instrument’s overall resolving power.
Starting with the calculations for the TCD1304DG, we first need to define its specs:
3648 pixels with an 8 μm pitch, resulting in an active length L = 29.184 mm.
The goal is to see the entire VIS spectrum ± 30 nm, therefore giving us a Δλ_total = 430 nm.
This allows us to calculate the linear dispersion (D):

D = Δλ_total / L = 430 nm / 29.184 mm ≈ 14.73 nm/mm
To calculate the resolution of the CCD (R_CCD), we need to calculate the wavelength per pixel by multiplying the dispersion with the pitch (p):

Δλ_pixel = D · p = 14.73 nm/mm · 0.008 mm ≈ 0.1179 nm
This results in the following calculation for the CCD resolution. Since the spectrometer works in the VIS range, we will be using an example wavelength λ of 500 nm:

R_CCD = λ / Δλ_pixel = 500 nm / 0.1179 nm ≈ 4242
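These steps can be reproduced with a short Python sketch (the variable names are my own; the values are taken from the text above):

# CCD-limited resolution of the TCD1304DG
pixels = 3648
pitch_mm = 0.008                           # 8 μm pixel pitch
span_nm = 430.0                            # VIS spectrum ± 30 nm

active_length_mm = pixels * pitch_mm       # 29.184 mm
dispersion = span_nm / active_length_mm    # ≈ 14.73 nm/mm
d_lambda_pixel = dispersion * pitch_mm     # ≈ 0.1179 nm per pixel
r_ccd = 500.0 / d_lambda_pixel             # ≈ 4242 at 500 nm

print(f"D = {dispersion:.2f} nm/mm, Δλ_pixel = {d_lambda_pixel:.4f} nm, R_CCD = {r_ccd:.0f}")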
To calculate the resolution of the 50 μm slit, we need to determine the bandwidth Δλ_slit by multiplying the slit width with the dispersion:

Δλ_slit = 0.050 mm · 14.73 nm/mm ≈ 0.737 nm
The resolution at 500 nm is then calculated like so:

R_slit = λ / Δλ_slit = 500 nm / 0.737 nm ≈ 679
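The same can be scripted; this sketch assumes the simple model Δλ_slit = slit width × dispersion used above:

# Slit-limited resolution of a 50 μm entrance slit
dispersion = 430.0 / 29.184                # nm/mm, from the CCD sketch above
slit_mm = 0.050
d_lambda_slit = slit_mm * dispersion       # ≈ 0.737 nm
r_slit = 500.0 / d_lambda_slit             # ≈ 679 at 500 nm
print(f"Δλ_slit = {d_lambda_slit:.3f} nm, R_slit = {r_slit:.0f}")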
The grating’s resolution is defined as the product of the diffraction order (here assumed to be the first order, m = 1) and the number of illuminated grooves (N; at 1800 lines/mm and an example beam width of 10 mm):

R_grating = m · N = 1 · 1800 lines/mm · 10 mm = 18,000
The corresponding Δλ_grating can also be calculated like so:

Δλ_grating = λ / R_grating = 500 nm / 18,000 ≈ 0.028 nm
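As a sketch (again assuming first order):

# Grating-limited resolution, assuming first diffraction order (m = 1)
grooves_per_mm = 1800
beam_width_mm = 10.0
m = 1
r_grating = m * grooves_per_mm * beam_width_mm   # 18,000 illuminated grooves
d_lambda_grating = 500.0 / r_grating             # ≈ 0.028 nm at 500 nm
print(f"R_grating = {r_grating:.0f}, Δλ_grating = {d_lambda_grating:.4f} nm")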
When comparing all results, it is very clear that the slit is the main bottleneck of the optical system. Since the resolution of the slit depends on its width, one option would be to decrease its size. If we want to calculate the factor by which the slit must shrink for the resolutions of the CCD and slit to be equal, we can do so like this:

Δλ_slit / Δλ_pixel = 0.737 nm / 0.1179 nm = 6.25
If we then divide the slit width by this factor, we get our final slit width where both the CCD and slit have the same resolution:

50 μm / 6.25 = 8 μm
This makes sense, since each pixel on the CCD is exactly 8 μm wide.
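The matching slit width can be confirmed in the same way (values carried over from the sketches above):

# Slit width at which the slit matches the CCD's resolution
d_lambda_slit = 0.737                      # nm, from the slit calculation
d_lambda_pixel = 0.1179                    # nm, from the CCD calculation
factor = d_lambda_slit / d_lambda_pixel    # ≈ 6.25
matched_slit_um = 50.0 / factor            # ≈ 8 μm, i.e. one pixel pitch
print(f"factor = {factor:.2f}, matched slit width = {matched_slit_um:.1f} μm")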
If we compare the TCD1304DG’s resolution to the BTC100-2S’s original Sony ILX511 sensor, which has a 14 μm pitch at 2048 pixels (Δλ_pixel ≈ 0.210 nm; R(500 nm) = 2381), we can see that we would get a 78.16% increase in resolution, assuming the slit were not the main bottleneck.
If the slit were to stay at 50 μm, the resolution would only improve by 1.8% due to a change in active length and therefore also in the dispersion.
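To make this comparison reproducible, here is a sketch computing both the sensor-limited and slit-limited resolutions for the two CCDs (a minimal model based only on the numbers above):

# Compare sensor-limited and slit-limited resolution of the two CCDs
def resolutions(pixels, pitch_mm, span_nm=430.0, slit_mm=0.050, wavelength_nm=500.0):
    dispersion = span_nm / (pixels * pitch_mm)          # nm/mm
    r_sensor = wavelength_nm / (dispersion * pitch_mm)  # sensor-limited R
    r_slit = wavelength_nm / (dispersion * slit_mm)     # slit-limited R
    return r_sensor, r_slit

new = resolutions(3648, 0.008)    # TCD1304DG
old = resolutions(2048, 0.014)    # Sony ILX511

print(f"sensor-limited gain: {new[0] / old[0] - 1:.2%}")   # ≈ 78.1% (78.16% with the rounded R values above)
print(f"slit-limited gain:   {new[1] / old[1] - 1:.2%}")   # ≈ 1.8%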
While theoretical calculations provide a solid foundation for understanding spectrometer performance, they often don't fully align with real-life results. Practical factors include optical aberrations (e.g., coma or astigmatism in the Czerny-Turner design), imperfect alignment of components, manufacturing tolerances in slit width or grating groove spacing, environmental influences like temperature variations affecting dispersion, signal-to-noise limitations from light intensity or detector noise, and non-ideal sampling (e.g., the Nyquist criterion requiring at least two pixels per resolved feature). Empirical testing with calibration sources is essential to validate and adjust for these discrepancies.