SWIR stands for Short-Wave Infra-Red and refers to non-visible light falling roughly between 1400 and 3000 nanometers (nm) in wavelength. The visible spectrum ranges from 400 nm to 700 nm, so SWIR light is invisible to the human eye. Detecting SWIR wavelengths requires dedicated sensors made of InGaAs (indium gallium arsenide) or MCT (mercury cadmium telluride), because silicon detectors are no longer sensitive to wavelengths longer than 1100 nm. InGaAs sensors are the primary choice across the typical SWIR range. MCT is also an option and can extend the SWIR range, but these sensors are usually more costly and application dependent.
Like visible light, SWIR light is predominantly reflected by objects, so SWIR imagery exhibits shadows and contrast. Images from a SWIR camera are comparable to visible images in resolution and detail.
Objects that appear almost the same color in the visible region can be easily differentiated under SWIR light, making them readily recognizable. This is one tactical advantage of imaging in SWIR over the visible region. Natural sources of SWIR include ambient starlight and night-sky background radiance, which makes SWIR well suited to outdoor imaging. Conventional quartz/halogen bulbs also act as SWIR light sources. Depending on the application, some sensors in SWIR cameras can be set to a linear or logarithmic response to avoid saturation.
Using SWIR offers many advantages over a conventional sensor. Some scenes that cannot be imaged in the visible range can be imaged in the SWIR range. One example is silicon wafer inspection, which is possible only because silicon is transparent in the SWIR range. Other materials transparent in the SWIR region include sodium chloride (NaCl) and quartz (SiO2). Water vapor is also transparent in SWIR, making SWIR cameras desirable when imaging through haze or fog. Applications where SWIR is crucial are detailed in the following section.
SWIR imaging is used for a variety of applications across industry and research, ranging from inspection, quality control, identification, and detection to surveillance and more. Here we summarize many applications where the SWIR range is widely used; more are being discovered all the time.
Small Animal Imaging
Small animal imaging is one of the main research areas for preclinical studies, including but not limited to drug discovery, drug effectiveness, and early detection of cancer. Over time, imaging in the SWIR range has become increasingly valuable for scientists studying small animals. The short-wave infrared (SWIR) range has several advantages over visible and other infrared wavelengths for in vivo imaging. SWIR light provides greater depth of penetration while maintaining high resolution, low light absorption, and reduced scattering within tissue, which makes it well suited to studying living organisms. One of the biggest advantages of the SWIR range is that autofluorescence is negligible. This low autofluorescence increases contrast and sensitivity compared with conventional imaging in the NIR and visible ranges. Some NIR fluorescence contrast agents, such as ICG (indocyanine green), IRDye800CW and IR-12N3, have non-negligible emission tails extending past 1500 nm (NIR-II/SWIR). InGaAs (indium gallium arsenide) based SWIR cameras fill the gap for imaging in the NIR-II/SWIR wavelength range (900-1700 nm), where silicon detectors are no longer sensitive.
We recommend the following products for this application: C-RED2
Active/Range Imaging for Security and Defense
SWIR is considered one of the most versatile technologies for the defense and security sectors. Compared with MWIR and LWIR band cameras, SWIR cameras can provide valuable information such as the ability to identify or recognize a target. SWIR also brings better vision through harsh weather conditions such as fog or smoke. With low readout noise and high dynamic range, SWIR cameras can cope with the challenging requirements of the defense and security industry.
Gated imaging provides the ability to image a specific depth slice of a scene (i.e. 3D imaging). It has multiple applications, including observation through severe weather or other obscurants, and estimation of distance and localization of obstacles (e.g. drone detection) with background suppression. Imaging devices must be fast enough to capture the reflected light from a laser source. SWIR cameras offer precision with the shortest effective exposure time, the shortest rise time, and the highest dynamic range on the market. For this reason, these cameras can cover a broad range of situations in the field.
We recommend the following products for this application: WiDy SenS Gated
Carbon Nanotube Imaging
SWIR cameras can be used for the detection of single-walled carbon nanotubes, which requires fast frame rates. Single-walled carbon nanotubes (SWCNTs) have been established as remarkable fluorophores for probing the nanoscale organization of biological tissues [1,2]. They are stiff, quasi-one-dimensional nanostructures whose small diameter (~1 nm) enables excellent penetration into complex environments and whose large length (100 nm to 1 µm) slows their diffusion, allowing single fluorescent particles to be tracked. Finally, their bright and stable near-infrared (NIR) fluorescence allows long-term tracking deep in biological tissues without suffering from biological autofluorescence. For example, single-walled carbon nanotubes have been detected, and their diffusion tracked, in distant regions of the brain extracellular space (ECS) following their injection into the lateral ventricles of young rat brains. This yields novel and quantitative insights into the local morphology and viscosity variations within the brain ECS [1,2]. Such studies require a camera capable of tracking single-walled carbon nanotubes at high speed, making SWIR cameras desirable. One limiting factor for the spatial resolution of such diffusion analyses is the ability to observe displacements of SWCNTs over short time lags.
Visible and SWIR Comparisons
Oil and Water
Laser Beam Profiling
How to choose the right beam profiler for my application?
When it comes to choosing a laser beam profiler, the options can be overwhelming. What follows are suggestions to help you make the right choice. All our laser beam profilers are based on 2D arrays (not scanning slits).
1- Wavelength
What is my wavelength, and am I working with one or several wavelengths? This will determine which sensor technology (Si-based CMOS, InGaAs-based infrared, etc.) to use for your laser beam profiling application.
For wavelengths <1150 nm, and in some cases <1320 nm, a CMOS- or CCD-based sensor will do the job. All CMOS- and CCD-based laser beam profilers we offer have no cover glass. Nowadays, CMOS sensors perform very well and will be a cost-effective solution in >95% of cases. CCD sensors have the advantage of being available with larger active areas for applications with multiple beams or large beams (>10 mm). The type of neutral density (ND) filter used affects the cut-on wavelength of the sensitivity.
400 – 1150 nm (or 1320 nm)
Use an absorptive type filter. Each filter is fabricated from a glass substrate selected for its spectrally flat absorption coefficient in the visible region. By varying the type and thickness of the glass used, an entire line of absorptive ND filters is possible.
By default, all CinCam CMOS laser beam profilers are delivered with a built-in OD3.0 absorptive filter with a wedge to minimize interference effects due to parallel surfaces.
320 – 1150 nm (or 1320 nm)
Use a reflective type filter. Reflective ND filters consist of thin-film optical coatings, typically metallic, applied to a glass substrate. The coating can be optimized for specific wavelength ranges such as UV, VIS or NIR. The thin-film coating primarily reflects light back toward the source, so users should take special care to ensure the reflected light does not interfere with the system setup.
240 – 1150 nm
This range can be achieved by removing the micro-lens array normally used to increase the fill factor of each pixel. The glass in the array blocks UV light; removing it yields sensitivity down to 240 nm.
<150 – 1150 nm
This range can be achieved using a special fluorescent coating applied directly to the CMOS sensor: a thin UV-to-VIS converting layer that absorbs UV light and emits visible light instead. The robust fluorescent material is ideal for UV imaging, showing an excellent quantum yield of nearly 100% for wavelengths from 450 nm down to 100 nm. In contrast, the material is highly transparent for wavelengths above 480 nm, giving a very good response in the visible and near-infrared as well.
1495 – 1595 nm
This range can be achieved using a special phosphor coating applied directly to the CMOS sensor. The coating is based on a complex anti-Stokes material with unique emission properties that converts 1495-1595 nm photons to visible, detectable wavelengths without fading or image lag. We offer a non-linearity software correction.
Because of the phosphor particles' size, the effective resolution is 5-9 µm regardless of the sensor's pixel pitch.
900 – 1700 nm
This range requires an InGaAs-based sensor. High QE is achievable from 1 µm to 1.6 µm. Even though the QE drops sharply below 900 nm, some sensitivity is possible down to 800 nm.
2- Beam Size
How do I choose the right laser beam profiler model given my beam size? Beam size is usually quoted at 1/e² for a Gaussian beam. It is important to understand that a 1 mm beam size, for instance, does not mean there is 0% energy outside a circle 1 mm in diameter. The tail of the beam, although of low intensity, is needed to compute accurate ISO standard measurements on the beam.
Minimum beam size measurable: The accuracy of the size measurement depends on the number of illuminated pixels. 15 pixels will give very high accuracy; under 12 pixels, the accuracy is acceptable. It is not recommended to profile a laser beam with fewer than 10 illuminated pixels. Therefore, the minimum measurable beam size of a laser beam profiler is pixel pitch (µm) × ~10. For example, the CinCam CMOS 1201 laser beam profiler has a pixel pitch of 5.2 µm, so the minimum recommended beam size is >52 µm. The smallest pixel pitch available is 2.2 µm (see the CinCam CMOS 1204 and CinCam CMOS PICO laser beam profilers).
Maximum beam size measurable: The active area of the laser beam profiler sensor will define the maximum beam size measurable. A rule of thumb is to take ~75% of the length in one direction. For example: The CinCam CMOS Nano 1.001 laser beam profiler has an active area of 11.3 x 11.3 mm. Therefore, the maximum beam size measurable is ~8.5mm.
The largest CMOS sensor is the CinCam CMOS Nano 1.001 laser beam profiler with 11.3 x 11.3 mm active area. The largest CCD sensor is the CinCam CCD 3501 laser beam profiler with 36 x 24 mm active area.
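The two rules of thumb above (minimum ~10 illuminated pixels, maximum ~75% of the sensor side) can be combined in a small helper. This is an illustrative sketch, not vendor code; the function name and unit conventions are ours, and it uses the figures quoted above.

```python
def beam_size_limits_um(pixel_pitch_um, active_area_mm):
    """Rule-of-thumb measurable beam-size range for an array profiler:
    minimum ~10 illuminated pixels across the beam, maximum ~75% of
    the sensor's shorter side."""
    min_beam_um = pixel_pitch_um * 10
    max_beam_um = 0.75 * min(active_area_mm) * 1000  # mm -> um
    return min_beam_um, max_beam_um

# CinCam CMOS 1201 pitch (5.2 um) with the CinCam CMOS Nano 1.001
# active area (11.3 x 11.3 mm), for illustration:
lo, hi = beam_size_limits_um(5.2, (11.3, 11.3))  # lo = 52 um, hi ~ 8.5 mm
```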
3- Pulsed or CW?
Is my laser continuous-wave (CW) or pulsed, and if pulsed, what is its repetition rate? Some sensors have a rolling shutter, meaning the pixels are not all read at the same time but rather row by row. For CW lasers, either a rolling or a global shutter is suitable. For pulsed lasers with a repetition rate <1 kHz, however, a global shutter is necessary. For pulsed lasers with a repetition rate >1 kHz (or much higher), either a global or a rolling shutter is suitable, as such a pulse train is 'seen' as CW by the laser beam profiler; in other words, the sensor cannot tell the difference.
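The decision rule above can be written out explicitly. This helper is purely illustrative (the names and the 1 kHz threshold mirror the guideline in the text):

```python
def shutter_suitable(shutter, laser, rep_rate_hz=None):
    """Return True if a shutter type ('rolling' or 'global') suits the
    laser source ('cw' or 'pulsed'), following the guideline above."""
    if laser == "cw":
        return True  # rolling or global both work for CW lasers
    # Pulsed below ~1 kHz: a global shutter is required.
    if rep_rate_hz is not None and rep_rate_hz < 1_000:
        return shutter == "global"
    # At >=1 kHz the pulse train is effectively seen as CW.
    return True
```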
4- Laser Power
What is my beam power, and what OD or attenuation do I need? Whether using an absorptive or a reflective ND filter, laser beam profilers accept a maximum peak power of ~1 W. For powers greater than 1 W, an attenuation unit can be added directly to the laser beam profiler (it works on all CMOS, CCD and InGaAs models thanks to a large spectral range of 190 nm to 2000 nm). The attenuator is based on two uncoated fused-silica wedges and is designed for pre-attenuation of high-intensity laser beams. The principle relies on the polarization effect of reflection at an optical surface: the s-polarized and p-polarized parts of the laser beam have different reflection factors. The orthogonal arrangement of the wedges compensates this polarization effect and allows neutral attenuation of the laser beam.
The prism attenuator can be used up to intensities of 2 GW/cm² for pulsed operation and 25 kW/cm² for continuous wave. It can be combined with neutral density filters for final power adjustment to the beam profiler's sensitivity level. The high-performance optical design in a compact housing allows precise beam attenuation.
3 models are available: 5W, 100W or 200W.
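As a back-of-the-envelope check, the ND attenuation needed to bring a beam down to the ~1 W limit mentioned above follows directly from the definition of optical density. A hypothetical helper (this only sizes the ND stage; well above 1 W the wedge pre-attenuator described above should come first):

```python
import math

def required_od(power_in_w, max_power_w=1.0):
    """OD needed so that power_in_w * 10**(-OD) <= max_power_w,
    i.e. OD = log10(P_in / P_max)."""
    if power_in_w <= max_power_w:
        return 0.0
    return math.log10(power_in_w / max_power_w)

# A 100 W beam needs OD 2.0 of additional attenuation:
od = required_od(100.0)
```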
5- Form factor
Do I have limited space in my optical setup? For the CMOS laser beam profilers, there are 3 body styles depending on the available room in the optical setup.
The standard CinCam CMOS laser beam profiler models measure 40 x 40 x 20 mm.
The CinCam CMOS Nano laser beam profiler models measure 29 x 29 x 20 mm with standard body.
The CinCam CMOS PICO laser beam profiler is the smallest beam profiler in the world with only 15 x 15 x 11.5 mm.
Click for more information and technical specs on: CinCam CMOS.
Do I need a special calibration for my application? Go beyond standard laser beam profiling by enabling high performance features for your applications!
RayCi software performs many corrections automatically, but some require user activation. Please refer to the RayCi Laser Beam Profiling Software user manual for more information. Additional calibrations, such as absolute power calibration or angular calibration for divergent sources such as VCSELs, are available and must be requested at the time of order.
The saturation level is in some cases linear with input power. Based on this simple principle, the laser beam profiler can be power-calibrated to show the absolute laser power in real time. Because sensor sensitivity depends strongly on the laser wavelength, the calibration must be done for each target wavelength.
Price: $ per wavelength (specify wavelength upon order)
For laser beam profiling applications with a highly divergent beam (typically > ±10°), the QE, and therefore the sensor response, decreases with the angle of incidence. The angular response calibration corrects this effect, allowing accurate results.
Price: $ per wavelength (specify wavelength upon order)
This calibration removes pixel-to-pixel sensitivity variations, measured using a uniformly illuminated flat field (a white, homogeneous image). Once available, the 2D view is permanently corrected for these pixel sensitivity variations.
The background calibration permanently subtracts an acquired background image (reference image) from the live stream. This correction eliminates undesired illumination effects and also includes cold- and hot-pixel correction.
Hardware availability: All
RayCi version availability: Lite, Standard, Pro
Type: Acquired image by the user
RayCi laser beam profiling software calculates a correction plane from every live frame. This correction plane is a dynamic baseline correction: the software subtracts it from the live data, correcting dynamic non-uniformities in the background and eliminating temporal changes in the background level.
In laser science, the parameter M², also known as the beam quality factor, represents the degree of variation of a beam from an ideal Gaussian beam. It is calculated as the ratio of the beam parameter product (BPP) of the beam to that of a Gaussian beam with the same wavelength, and it relates the beam divergence of a laser beam to the minimum focused spot size that can be achieved. For a single-mode TEM00 (Gaussian) laser beam, M² is exactly one.
The M² value for a laser beam is widely used in the laser industry as a specification, and its method of measurement is regulated as an ISO standard (11146-1 and 11146-2).
Why is M² important?
M² is useful because it reflects how well a collimated laser beam can be focused to a small spot, or how well a divergent laser source can be collimated. It is a better guide to beam quality than Gaussian appearance because there are many cases in which a beam can look Gaussian, yet have an M² value far from unity. Likewise, a beam intensity profile can appear very “un-Gaussian”, yet have an M² value close to unity.
The value of M² is determined by measuring D4σ or “second moment” width. Unlike the beam parameter product, M² is unitless and does not vary with wavelength.
The quality of a beam is important for many applications. In fiber-optic communications beams with an M² close to 1 are required for coupling to single-mode optical fiber.
M² determines how tightly a collimated beam of a given diameter can be focused: the diameter of the focal spot scales as M², and the irradiance scales as 1/M⁴. For a given laser cavity, the output beam diameter (collimated or focused) scales as M, and the irradiance as 1/M². This is very important in laser machining and laser welding, which depend on high fluence at the weld location.
Generally, M² increases as a laser's output power increases. It is difficult to obtain excellent beam quality and high average power at the same time due to thermal lensing in the laser gain medium.
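The scaling stated above can be made concrete with the standard Gaussian-beam focusing relation d_f ≈ M²·4λf/(πD). This is a textbook estimate added for illustration, not a formula from this article:

```python
import math

def focused_spot_diameter(m2, wavelength, focal_length, beam_diameter):
    """Focal-spot diameter of a collimated beam of quality M2, using the
    Gaussian-beam estimate d_f = M2 * 4*lambda*f / (pi * D), SI units."""
    return m2 * 4.0 * wavelength * focal_length / (math.pi * beam_diameter)

# Example: 1064 nm beam, 10 mm diameter, focused by a 100 mm lens.
d_ideal = focused_spot_diameter(1.0, 1064e-9, 0.1, 0.01)  # ~13.5 um
d_m2_2 = focused_spot_diameter(2.0, 1064e-9, 0.1, 0.01)   # spot doubles with M2
```

Since the spot diameter scales as M², the focal-spot area scales as M⁴, which is exactly why the irradiance for a fixed power scales as 1/M⁴.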
How is M² measured?
Siegman’s proposal became popular because of its simplicity, but experimentally it isn’t so straightforward, and some uncertainties arise from these principles. For example, if you want to measure the waist radius in the lab, how can you be sure that your measurement device is positioned exactly at the focus?
And how far do you need to go to be in the far field to measure the divergence? Are these two data points enough? The International Organization for Standardization (ISO) decided to put an end to this confusion and wrote a norm explaining how to measure and calculate M² properly: ISO 11146.
The ISO norm explains a method to calculate M² from a set of beam diameter measurements in a way that minimizes sources of error. Here are the main steps:
Focus it with an aberration-free lens
Use the regression equations detailed in the norm to fit a hyperbola to your data points for both the X and Y axes. This improves the accuracy of the calculation by minimizing measurement error.
From this fit, extract the values of θ, w0, R and M² for each axis.
The ISO norm also states a few extra rules about the measurement of diameters (especially when using array sensors such as CCD or CMOS sensors):
Use a region of interest of 3 times the diameter
Always remove the background noise before taking a measurement
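The hyperbola fit prescribed by the norm can be sketched in a few lines. This is a simplified, single-axis illustration using ISO 11146's d²(z) = a + b·z + c·z² parametrization (a sketch, not production code):

```python
import numpy as np

def fit_m2(z, d, wavelength):
    """Fit the ISO 11146 hyperbola d^2(z) = a + b*z + c*z^2 to measured
    D4-sigma diameters d (m) at positions z (m) along one axis, then
    extract waist position, waist diameter, full divergence and M^2."""
    c, b, a = np.polyfit(z, np.asarray(d) ** 2, 2)  # highest order first
    z0 = -b / (2.0 * c)                    # waist location
    d0 = np.sqrt(a - b**2 / (4.0 * c))     # waist diameter
    theta = np.sqrt(c)                     # full far-field divergence
    m2 = (np.pi / (8.0 * wavelength)) * np.sqrt(4.0 * a * c - b**2)
    return z0, d0, theta, m2
```

On noiseless synthetic data generated from a known caustic, this fit recovers the input M² essentially exactly; with real data, the norm's rules on measurement positions and background subtraction are what keep the fit well conditioned.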
Using CinSquare to accurately measure your laser beam quality.
CINOGY’s CinSquare is a compact and fully automated tool to measure the beam quality of CW and pulsed laser systems from the UV to SWIR spectral range. The system consists of a fixed focusing lens in front of a motorized translation stage carrying the camera-based CinCam beam profiler. Its operational robustness and reliability ensures continuous use applications in industry, science, research and development.
According to ISO 11146-1/2, the CinSquare system measures the complete beam caustic and determines M², waist position, divergence, etc., relative to the reference plane. To facilitate its use, the CinSquare system is equipped with two alignment mirrors for exact positioning of the laser beam and a filter wheel for incremental beam attenuation.
There are several ways to measure laser beam divergence. We describe here two methods.
Far-field laser beam divergence measurement using a lens of known focal length: by definition, the full divergence is Θ = D/f, where D is the beam diameter in the focal plane and f the focal length.
By placing the CinCam profiler at the focus distance and inputting the focal length directly in RayCi software, laser beam divergence measurement can be easily achieved.
Measurement by direct beam size calculation at several positions in the beam path
By definition, the divergence (full angle) Θ is given by:
Θ = 2·arctan((D1 − D2)/(2L)), where D1 and D2 are the beam diameters at two positions and L is the distance between the measurement positions.
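Both methods reduce to one-line formulas. A small sketch in SI units (function names are ours):

```python
import math

def divergence_from_focus(spot_diameter, focal_length):
    """Method 1: full-angle divergence from the beam diameter measured
    in the focal plane of a lens of known focal length: theta = D / f."""
    return spot_diameter / focal_length

def divergence_from_two_points(d1, d2, separation):
    """Method 2: full-angle divergence from two diameter measurements a
    known distance L apart: theta = 2 * atan((d1 - d2) / (2 * L))."""
    return 2.0 * math.atan((d1 - d2) / (2.0 * separation))
```

For small angles the two expressions agree: a beam growing from 1 mm to 2 mm over 100 mm gives Θ ≈ 10 mrad either way.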
Confocal microscopy has become an essential tool in biology and biomedical sciences, as well as in materials science, due to its contrast improvement over more traditional microscopy techniques (widefield, phase contrast, dark field and more). One of the key advantages of a confocal microscope is that it provides axial information about the sample: what is the shape of the sample along the optical axis of the microscope objective, how thick are its features, etc. In this application note we will explain what a confocal microscope has to offer compared to a widefield system, and how it works. Then we will review the different confocal microscopy techniques and technologies. Finally, we will focus on the latest developments in confocal microscopes, which go beyond the diffraction limit!
1 – What is confocal microscopy?
Confocal microscopy is an optical imaging technique that uses spatial filtering (in most cases a pinhole) to block out-of-focus light from physically reaching the sensor; in other words, it performs optical sectioning. Although a confocal microscope only slightly improves the lateral and axial resolution compared to a widefield microscope, it brings a huge improvement in optical sectioning, therefore offering a much thinner and more effective axial resolution.
In a widefield configuration the entire specimen is illuminated evenly, and all parts of the sample can be excited at the same time (both in the lateral direction, i.e. X-Y, and in the axial direction, i.e. Z). Every fluorophore contained in that volume will emit light, and all this light will be collected by the camera no matter where it comes from. The camera collects what is called out-of-focus light, and this can drastically decrease the contrast of an image, especially for thick samples (typically thicker than 2 µm).
In a confocal configuration though, the light from the laser is focused onto the sample in the focus plane of the objective. Only a small area is illuminated on that plane, and because the beam is focused, the energy density is much higher at the focus plane compared to the planes above or below. Therefore, fluorophores in the focus plane are much more likely to emit light than fluorophores on the planes above or below. Nonetheless there is still some out-of-focus light being emitted.
Below is a schematic of the light path in a widefield microscope (left) and in a confocal microscope (right).
As we can see in the schematic below the pinhole blocks the light coming from the planes below or above the focus plane. So, using both a focused illumination and a pinhole, a confocal system will acquire only the light coming from the focus plane and therefore the information contained in the image only corresponds to fluorophores in that plane. Acquiring multiple planes then helps to reconstruct the 3D shape of the sample.
Below are images of the same sample, acquired with a widefield microscope on the left and with a confocal on the right:
Nowadays confocal microscopes are much more complex than just a focused illumination and a pinhole, but overall, these are the two features that bring the confocal capability.
But let’s have a closer look at the first scheme comparing widefield and confocal configurations. As mentioned, a widefield microscope evenly illuminates all parts of the sample at the same time, while a confocal microscope only illuminates a small area at one time. That means that to completely illuminate the sample in one plane we need to scan the light all over the FOV (in X-Y). Therefore, confocal microscopes also feature moving mechanical parts to scan the light all over the sample. There are two main categories of confocal systems: laser-scanning confocal microscopes and spinning-disk confocal microscopes.
2 – Laser-scanning & spinning-disk confocal
2.1 – Laser-scanning confocal microscopy
The most straightforward way to illuminate the whole FOV with the confocal configuration is to scan the illumination in X-Y. Microscope manufacturers do that by using scanning mirrors. Usually two mirrors: one to scan along the X direction (fast mirror) and one to scan along the Y direction (slow mirror). For every single point, the detector records the fluorescence light emitted by the sample using a “single-pixel” detector: typically, PMTs (photomultiplier tubes) or HyDs (hybrid detectors). In such a configuration the image is formed digitally after sequentially capturing the light from the sample emitted at each position of the scanners. This technique is called laser-scanning confocal microscopy.
[Image from http://www.physics.emory.edu/faculty/weeks//confocal/]
Laser-scanning confocal microscopy is the most widely used method for confocal imaging. Its main advantages are high optical sectioning (Z resolution as good as 500 nm in raw data), high lateral resolution (X-Y resolution as good as 240 nm in raw data), and versatility of implementation and use (upright or inverted stands, low- or high-magnification objective lenses, different sample holder types, etc.). The two main drawbacks are speed and phototoxicity.
Speed, because we need to point-scan the region of interest sequentially. Typically, the scanning speed of galvo mirrors is 1 µs/pixel, so for an image of 1024×1024 pixels we need about one second to acquire one frame. This is for one plane only; since the goal of a confocal microscope is to capture the 3D structure of the sample, we need to acquire multiple 2D images. This can lead to acquisition times of a few minutes, and sometimes hours. In recent years confocal microscopes have featured a new type of scanning mirror to increase speed: resonant galvo mirrors.
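The arithmetic above is easy to reproduce. A sketch using the 1 µs/pixel dwell time quoted in the text as an illustrative default:

```python
def lscm_frame_time(width_px, height_px, dwell_time_s=1e-6):
    """Time to acquire one laser-scanning confocal frame, ignoring
    mirror fly-back: one dwell per pixel, scanned sequentially."""
    return width_px * height_px * dwell_time_s

frame = lscm_frame_time(1024, 1024)  # ~1.05 s for a single plane
zstack = 100 * frame                 # a 100-plane Z-stack: ~1.7 minutes
```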
Although those mirrors can go much faster, offering ultra-short pixel dwell times, the scanning process remains sequential, so a very short illumination will probably lead to an insufficient signal-to-noise ratio. This becomes even more noticeable considering that the detectors used in laser-scanning confocal microscopes, PMTs or HyDs, are relatively insensitive.
Phototoxicity can be a big problem on a laser-scanning confocal microscope. The main reason is that phototoxicity is not a linear process: applying twice the light intensity for half the time creates more stress and photo-damage than half the intensity for twice the time. Here is an example to understand the process better. Suppose we want to collect as many photons from the sample in a laser-scanning confocal microscope as in a widefield microscope, assuming both systems are equivalent in terms of light efficiency.
For a 1024×1024 image, each point in the sample is excited for only about one millionth of the total acquisition time. To collect as many photons as in a widefield system, we therefore need to apply about one million times more power density to each pixel than in a widefield configuration. As mentioned above, this creates much more photo-damage than in a widefield experiment. And this assumes both systems have similar light efficiency, which is generally not true, as cameras have higher QE than PMTs or HyDs.
Because of those two drawbacks (speed and phototoxicity), laser-scanning confocal microscopes are typically not the best systems to image living cells. Both because of the low temporal resolution and the photo-damage that the illumination will create on the sample.
2.2 – Spinning-disk confocal microscopy
There is another way to increase speed: parallel illumination. This technique was originally introduced 40 years ago, but recent improvements in microscope design and camera technology have significantly expanded its potential. A spinning-disk confocal microscope consists of a disk with multiple pinholes rotating at very high speed (5,000 - 10,000 rpm). The illumination is parallelized before the disk, and the objective lens focuses multiple scanning points onto the sample. This creates parallel illumination, in contrast to the sequential scanning of a laser-scanning confocal microscope. Within one rotation, every part of the sample is illuminated several times, offering acquisition speeds up to 1,000 - 2,000 fps. Again, the limitation is most likely the signal-to-noise ratio, but video rate (~30 fps) can easily be achieved.
Nowadays, spinning-disk confocal microscopes are more complex than that. They usually feature two rotating disks: the first carries microlenses to inject the light more efficiently into the second disk, which carries the pinholes. Furthermore, those systems use cameras as detectors and are not tied to a single technology: CMOS, CCD, sCMOS or EMCCD can all be used!
[Image from https://www.cherrybiotech.com/scientific-note/microscopy/introduction-to-spinning-disk-confocal-microscopy]
This imaging method is much more appropriate for live-cell imaging. The main advantages are the high temporal resolution and lower phototoxicity compared with a regular laser-scanning confocal microscope. But there are several drawbacks, such as the fixed pinhole size, which means that neither the axial nor the lateral resolution is optimized for each objective lens.
Furthermore, several imaging artifacts must be considered when using a spinning disk. First, pinhole crosstalk: out-of-focus light can still reach the detector by travelling through adjacent pinholes. Second, low light efficiency: most of the excitation and emission light does not pass through the disks, and back-reflections can increase the image background. Third, field inhomogeneity, which can be compensated using a liquid light guide. Fourth, the inherent inability to select and scan only a specific ROI (as needed for FRAP experiments). These are the most common reasons for choosing a laser-scanning confocal architecture over a spinning-disk one.
2.3 – Comparisons
There is no perfect system, and everything is a trade-off. Confocal microscopes are the ideal solution for thick tissues, but the best imaging method will mostly depend on your sample. For fixed tissues, which will not suffer too much from photo-damage, a laser-scanning confocal microscope is the ideal solution, offering higher lateral and axial resolution. For living cells, however, if you do not need the extra resolution offered by a laser-scanning confocal (LSCM) architecture, then a spinning-disk confocal system (SDCM) is the ideal solution. The chart below summarizes this section.
3 – Super-Resolution Confocal Microscopy
3.1 – Discussion about lateral resolution
Now that we have a good understanding of how a confocal microscope works and that we have reviewed the different techniques, we can discuss the latest big developments that have been achieved in confocal microscopy.
At the very beginning of this article we mentioned that confocal microscopy increases the lateral resolution compared to a regular widefield microscope, but we did not explain why. Under ideal, aberration-free conditions, the lateral resolution of a regular widefield microscope is given by Abbe's formula: Resolution = 0.6·λem/NA. It depends only on the emission wavelength and the numerical aperture of the objective lens.
Lateral Resolution for Confocal
In a confocal microscope, the spot size is related to the excitation wavelength only and to the pinhole diameter. In addition to rejecting most of the out-of-focus light, the pinhole also cuts some of the signal coming from the edges of the illuminated area. Theoretically, in the extreme case of a fully closed pinhole, we could get almost a two-fold improvement in lateral resolution: Resolution = 0.4·λexc/NA. However, in that case no light would ever reach the detector, which makes this scenario more a theoretical limit than a real case: even a moderate reduction of the pinhole diameter drastically reduces the light throughput.
In practice, researchers use a pinhole diameter of 1 AU (i.e. Airy Unit, 1 AU = 1.2*λ/NA). 1 AU offers good confocal capability without sacrificing too much of the incoming light. In that case we typically get about a 15% improvement, leading to a lateral resolution of Resolution = 0.6*λexc/NA.
These are theoretical limits, but experience has shown that the lateral resolution is usually worse, mostly because optical inhomogeneities in the specimen distort the light and therefore degrade the resolution. In general, using a lens with NA = 1.4 and an excitation of λexc = 488 nm (therefore λem = 520 nm), we get a lateral resolution of ~260 nm in a widefield configuration, ~240 nm on a confocal microscope with a pinhole diameter of 1 AU, and ~170 nm on a confocal microscope in the approximation of a completely closed pinhole.
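For reference, the ideal values given by the formulas above can be computed directly; as noted, real-world measurements are usually somewhat worse than these numbers.

```python
# Theoretical lateral resolution for the three configurations discussed above
# (NA = 1.4 objective, 488 nm excitation, ~520 nm emission).
NA = 1.4
lam_exc, lam_em = 488.0, 520.0  # nm

widefield = 0.6 * lam_em / NA         # widefield limit (Abbe-type formula)
confocal_1au = 0.6 * lam_exc / NA     # confocal, pinhole at 1 AU
confocal_closed = 0.4 * lam_exc / NA  # fully closed pinhole (theoretical only)

print(f"widefield:       {widefield:.0f} nm")
print(f"confocal (1 AU): {confocal_1au:.0f} nm")
print(f"closed pinhole:  {confocal_closed:.0f} nm")
```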
3.2 – Computer-based Image Scanning Microscopy
There has nevertheless been a lot of work done to go beyond this and still get the resolution enhancement brought by a confocal without sacrificing the light throughput. The first paper came out in 1988, when Sheppard et al. proposed an elegant solution to get this 1.4-fold resolution improvement and set the basis of all future developments. The idea basically consists in replacing the pinhole and single-pixel detector by a CCD camera. Each pixel of the camera acts as a single pinhole and captures all the light. With appropriate computation, we can reconstruct an image which exhibits the 1.4-fold resolution improvement without sacrificing the light throughput.
This technique, called Image Scanning Microscopy (ISM), was commercialized by Carl Zeiss through the AiryScan product line. The AiryScan detector uses a matrix of PMTs rather than a CCD chip, but the working principle remains the same. Performing additional deconvolution on the image gives another 1.4-fold improvement, hence the advertised 120 nm lateral resolution (a total 2-fold improvement).
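The pixel-reassignment computation at the heart of ISM can be sketched in a few lines: each detector element records a slightly displaced confocal image, and shifting each of these images back toward the optical axis by a fraction (classically one half) of its offset before summing yields the resolution gain. This is a minimal illustrative sketch with simplified sign and scaling conventions, not Zeiss's actual AiryScan processing.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def pixel_reassignment(data, factor=0.5):
    """ISM pixel reassignment.

    data: 4D array of shape (scan_y, scan_x, det_y, det_x), i.e. one full
    scan image per detector element. Each detector element's scan image is
    shifted back toward the central element by `factor` times its offset,
    then all shifted images are summed.
    """
    ny, nx, dy, dx = data.shape
    cy, cx = dy // 2, dx // 2          # central detector element
    ism = np.zeros((ny, nx))
    for i in range(dy):
        for j in range(dx):
            ism += subpixel_shift(
                data[:, :, i, j],
                (-(i - cy) * factor, -(j - cx) * factor),
                order=1, mode="constant")
    return ism
```

With all the signal on the central detector element, the reassigned image simply equals that element's scan image; off-axis elements contribute shifted copies that sharpen the summed result.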
3.3 – Optics-based Image Scanning Microscopy
Soon after the first demonstration of ISM, it became evident that those improvements could also be achieved in an all-optical way. Within a few years, two papers came out. The first, in 2013, was by York et al., who presented a method called Instant Structured Illumination Microscopy (ISIM). In their setup, they use a microlens array to generate a multi-point pattern that is scanned over the sample. The revolutionary new idea was that after de-scanning the emission light, it is passed through a matching pinhole array and a second microlens array that expands the beam size by a factor of two. This light is finally sent back to the backside of the scanner before being imaged onto the camera.
The main difference is that no further computation is necessary to get the 1.4-fold improvement in lateral resolution. A similar method was detailed by Roth et al., who used a single beam. These two optics-only methods achieve a 1.4-fold improvement in lateral resolution without any computation, but you can still apply deconvolution to gain another 1.4-fold and thereby reach a 120 nm lateral resolution. This led to the development of commercial systems such as the VT-iSIM from VisiTech International and the SoRa spinning disk from Yokogawa.
Optics-only ISM for RCM
Two years later, De Luca et al. realized another optics-only ISM in a different way. Instead of shrinking the image at each focus position and then placing it back in the final image at the corresponding focus position, one can alternatively maintain the image size and place the images at twice the distance from each other. Although both procedures are mathematically equivalent, they are experimentally different. In De Luca's rescan system, a second synchronized scanner rescans the emission onto a camera, but with a scan amplitude twice as large as that of the excitation scanner. This is the main working principle of the Rescan Confocal Microscope (RCM) manufactured by Confocal.nl.
Rescan Confocal Microscope
1 – Working principle
The Rescan Confocal Microscope (RCM) belongs to the same family of "enhanced confocal" systems mentioned above. It is a confocal microscope, but it also beats the diffraction limit by a factor of 1.4 before deconvolution (2-fold after deconvolution), making it a super-resolution system as well.
The excitation part works much like that of any laser-scanning confocal microscope: the sample is point-scanned using scanning mirrors that move the beam in X-Y to cover the entire FOV. The emitted light is de-scanned and focused through a pinhole to get optical sectioning, the same way it would be on a regular laser-scanning confocal system. It is after that that the magic happens. Instead of using a single-pixel detector, we use a camera (typically an sCMOS) and a second pair of galvo mirrors to re-write the signal on the camera chip. These mirrors are called the rescanners (hence the name Rescan Confocal Microscope).
In the scenario where both pairs of mirrors have the same amplitude (scanners and rescanners), the system simply re-writes the fluorescence signal of the sample onto the sensor chip of the camera. But when the rescanners have a bigger amplitude, they stretch the image in both the X and Y directions. As a result, we obtain a magnified image.
Now, any optical magnification, no matter how large, does not increase the resolution of a system: it magnifies the image, but it magnifies the blur in the same way, so in the end the resolving power remains the same. But here we are not talking about an optical magnification; it is a mechanical one, in the sense that it is the movement of the scanners that spreads the light over a bigger area of the sensor and creates the magnification. Therefore, we are no longer bound by the diffraction limit.
Because the rescanners move faster than the scanners (same frequency but higher amplitude means a higher speed), they introduce a motion blur in the image, which counterbalances to some extent the resolution improvement obtained with the mechanical magnification. The graph below (right) shows the resolution and improvement ratio using a 100X / NA 1.4 lens at different sweep factors between the rescanners and the scanners. A sweep factor of 2 means the rescanners have twice the amplitude of the scanners. As we can see, a sweep factor of 2 is optimal for resolution: in that case the resolution improvement due to the mechanical magnification equals 2, the motion blur due to the higher scan speed is only 1.4, and the total resolution improvement equals 2/1.4 = 1.4.
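The trade-off between mechanical magnification and motion blur can be sketched with a simple model chosen because it reproduces the numbers quoted above (a blur factor of 1.4 at a sweep factor of 2, and an optimum at 2); the exact blur model used in the real instrument may differ.

```python
import numpy as np

def rcm_improvement(s):
    """Net lateral-resolution improvement at sweep factor s (rescan/scan
    amplitude ratio): mechanical magnification (factor s) divided by the
    motion blur of the faster rescanner, modelled as sqrt(1 + (s - 1)**2)."""
    return s / np.sqrt(1.0 + (s - 1.0) ** 2)

s = np.linspace(1.0, 4.0, 301)
best = s[np.argmax(rcm_improvement(s))]
print(best, rcm_improvement(2.0))  # optimum at s = 2, net improvement ~1.41
```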
This part is nicely explained in the following video. Starting from the 170 nm lateral resolution, we can run deconvolution to go down to 120 nm. This is the same resolution as SIM, AiryScan, ISIM and SoRa systems.
Other advantages of RCM
The resolution enhancement is only one aspect of this system. Because the RCM uses a camera as a detector, it is much more sensitive than PMT- or HyD-based systems. A typical sCMOS camera offers a QE over 70% between 400-700 nm with a peak at 82%, and the most sensitive ones even peak at 95%! Compare this to 30% at best for a PMT, or 45% for a HyD detector. The RCM's optical architecture is remarkably simple and ensures that all photons captured by the objective lens are imaged onto the camera. As a result, the overall light throughput is very high, approximately 3-4 times higher than on regular confocal microscopes. The RCM is extremely sensitive, which makes it a particularly good choice for samples like live cells.
The RCM remains, though, a laser-scanning confocal system and is therefore rather slow. Because of the precise synchronization required between scanners and rescanners, non-resonant scanners are the only available option, which limits the speed of the system: the RCM typically runs at 1 fps for 512×512. These specs are summarized in the following chart:
2 – Comparisons
As shown in the previous chart and throughout this discussion, there is no perfect system: it is most of the time a question of compromise and depends on your application. If speed is not mandatory for your research, then the RCM is the ideal system. It offers confocal capability, high lateral resolution, low photo-toxicity, and is as modular as any laser-scanning confocal microscope (FRAP, etc.). The camera detector also allows the RCM to operate in a regular widefield configuration, which makes the workflow of finding and focusing on the sample extremely easy. Finally, the RCM is a cost-effective solution thanks to its simple design: it does not require expensive components (only galvo mirrors), and it can be fitted as an upgrade to any widefield microscope brand and to most of the cameras widely used for microscopy.
The images below show nuclear spreads from fixed mouse spermatocytes, immunostained for SYCP3 (a component of the synaptonemal complex) and labeled with Alexa 488. The images on the top were acquired using a regular laser-scanning confocal system with a pinhole of 1 AU; the images on the bottom were acquired using the RCM with a pinhole of 1 AU. On the left side is the whole FOV before and after deconvolution; on the right side, a zoom before and after deconvolution.
The raw data from the regular laser-scanning system exhibit a lateral resolution of 240 nm, which can be brought to 170 nm after deconvolution. The RCM achieves that same resolution without any deconvolution, and its images can be enhanced further with deconvolution to reach a 120 nm lateral resolution. The spreading of the spermatocytes is clearly visible in the bottom-right image, whereas it cannot be clearly seen in the deconvolved images from the regular laser-scanning system.
The video is a timelapse of HO1N1 cells expressing Mitochondria-RFP through the BacMam expression system. It was recorded over 61 hours, with an image every 20 seconds (a total of almost 22,000 images!). Imaging a live sample over such long periods of time is only possible thanks to the low photo-toxicity of the RCM. The laser power at the sample plane, measured with a power meter, was 1.3 µW.
There are more images and movies on our website, as well as more information about the specs of the RCM. Please visit our product page, Rescan Confocal Microscope, and contact us if you have any questions.
Claxton, Nathan S., Thomas J. Fellers, and Michael W. Davidson. "Laser scanning confocal microscopy." Department of Optical Microscopy and Digital Imaging, Florida State University, Tallahassee (2006).
Jonkman, James, Claire M. Brown, and Richard W. Cole. "Quantitative confocal microscopy: beyond a pretty picture." Methods in Cell Biology. Vol. 123. Academic Press, 2014. 113-134.
Sheppard, C. J. R. "Super-resolution in confocal imaging." Optik (Stuttgart) 80.2 (1988): 53-54.
York, Andrew G., et al. "Instant super-resolution imaging in live cells and embryos via analog image processing." Nature Methods 10.11 (2013): 1122.
Roth, Stephan, et al. "Optical photon reassignment microscopy (OPRA)." Optical Nanoscopy 2.1 (2013): 1-6.
De Luca, Giulia M. R., et al. "Re-scan confocal microscopy: scanning twice for better resolution." Biomedical Optics Express 4.11 (2013): 2644-2656.
Laser Additive Manufacturing Diagnostic & Control Tools
This application note focuses on using a focus beam profiler for laser additive manufacturing, answering the following questions: Where is my beam focusing? How big is my spot? Is it stable in time?
Selective laser sintering (SLS) is an additive manufacturing (AM) technique that uses a laser as the power source to sinter powdered material (typically nylon or polyamide), aiming the laser automatically at points in space defined by a 3D model and binding the material together to create a solid structure. Selective laser melting (SLM) is an instantiation of the same concept, but in SLM the material is fully melted rather than sintered, yielding different material properties (crystal structure, porosity, and so on). Both processes require a uniform, symmetrical and stable power density distribution of the laser beam. More specifically, the focus spot size and intensity have to be maintained within a finite acceptance range throughout each build. Because of the high power of the lasers used (typically in the range of several hundreds of watts to kilowatts), thermal lensing can occur and shift the focus position over time. Understanding the stability of this parameter is therefore critical in order to avoid structural weaknesses and captured stress during the build process. The Focus Beam Profiler (FBP) is a proven solution developed by Cinogy Technologies hand-in-hand with major actors in the industry. It is a robust industrial system designed to directly image the high-power beam on a 2D focal plane array after attenuation with passive optical components. The position of the measurement plane is calibrated and known with high accuracy, giving a direct overview of what the beam looks like at that position.
Schematic of the Focus Beam Profiler:
After positioning the Focus Beam Profiler on the build plate, directly under the path of the laser beam, a complete beam caustic can be acquired by changing the build-plate position. The software quickly outputs the beam parameters according to ISO standard 11146-1 (a detailed description of the method used can be found on our application page —> https://axiomoptics.com/application/laser-beam-profiling-applications/).
Top-left corner: 2D profile of the beam at the selected position. The dotted line delimits the active area of the beam; red and blue lines show the beam orientation in beam coordinates (U,V), as opposed to lab coordinates (x,y). Bottom-left corner: 3D view of the beam caustic. Top-right corner: beam size (y-axis) along the beam path (x-axis). Red and blue curves correspond to the beam coordinate system (U,V); the continuous vertical lines indicate the position of the beam waist (U,V). Note that in this case the beam shows some astigmatism. Bottom-right corner: numerical data computed from the beam caustic. Output parameters include:
Caustic position z0 (mm) —> position along the beam path where the focus spot size is minimum (aka waist position or focus position)
Beam waist diameter d0 (mm) —> diameter of the spot at the true focus position
Rayleigh length zR (mm) —> distance from the beam waist (in the propagation direction) at which the beam radius has increased by a factor of the square root of 2
Divergence theta (mrad) —> angular measure of the increase in beam diameter with distance
M2 —> beam quality factor, representing the degree of variation of a beam from an ideal Gaussian beam
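The ISO 11146-1 evaluation behind these parameters amounts to fitting a hyperbola, d(z)² = a + b·z + c·z², to the measured caustic and reading the beam parameters off the fit coefficients. A minimal sketch follows (units must simply be consistent; the 1.07 µm wavelength in the example is an assumed value, typical of fiber lasers, not a Focus Beam Profiler setting):

```python
import numpy as np

def fit_caustic(z, d, wavelength):
    """Fit d(z)^2 = a + b*z + c*z^2 to measured beam diameters d at
    positions z, then derive the ISO 11146-1 beam parameters."""
    c, b, a = np.polyfit(np.asarray(z), np.asarray(d) ** 2, 2)
    z0 = -b / (2.0 * c)                        # waist (focus) position
    d0 = np.sqrt(a - b ** 2 / (4.0 * c))       # waist diameter
    zR = d0 / np.sqrt(c)                       # Rayleigh length
    theta = np.sqrt(c)                         # full far-field divergence angle
    m2 = np.pi * d0 * theta / (4.0 * wavelength)  # beam quality factor
    return z0, d0, zR, theta, m2

# Synthetic caustic in mm: waist 0.1 mm at z = 0, Rayleigh length 5 mm.
z = np.linspace(-10.0, 10.0, 21)
d = 0.1 * np.sqrt(1.0 + (z / 5.0) ** 2)
z0, d0, zR, theta, m2 = fit_caustic(z, d, wavelength=1.07e-3)  # 1.07 µm in mm
```

Feeding an exact hyperbola back into the fit recovers the waist position, waist diameter and Rayleigh length, which is a handy sanity check before running the fit on noisy measured data.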
For measurements in the field, an alternate software solution (Focus Beam Profiler Control Tool) is available for quick step-by-step measurements:
Overview of the Focus Beam Profiler models:
Real-time closed-loop control and monitoring of laser processing
Coaxial imaging of the melt pool in laser processing has enabled a number of approaches to real-time closed-loop control and monitoring of different laser processing applications. CMOS technology has dominated this research area, with most relevant works appearing during the last decade. As a result, the few commercial imaging systems available for this purpose are mostly based on CMOS sensors. However, these sensors present a number of issues that seriously limit their performance in practical settings. First, they are sensitive only to wavelengths under 1 μm and can hardly see thermal emission from bodies at temperatures under 900 °C, making them blind to typical cooling processes (e.g. in laser cladding). Second, they suffer greatly in the presence of reflections and bright spots from projections or powder, due to their high sensitivity in the visible range. Moreover, at process temperatures, radiance increases much faster with body temperature in the visible range than in the IR, so a very limited dynamic range is available for process observation: the images acquired are practically binary, with little information about the actual heat distribution in space.
In recent years, novel uncooled PbSe imagers working in the mid-wave infrared (MWIR) spectral range (1-5 μm wavelength) have appeared, with the potential of being game changers in this field. Being sensitive in the MWIR means that these sensors can see radiance emitted at much lower temperatures (down to 100 °C) and can make better use of their dynamic range, even at high temperatures.
Application to additive manufacturing (AM) by Laser Metal Deposition (LMD)
Directed Energy Deposition (DED) processes are attracting growing interest in industry, as they have strong capabilities to build large-sized components, even over non-flat surfaces, and with fast build rates compared to other AM processes. Among them, laser metal deposition (LMD), also known as direct laser deposition (DLD), is gaining importance and has been investigated heavily in the last several years, as it provides the potential to rapidly prototype metallic parts, produce complex and customized parts, and clad or repair precious metallic components.
Recently, different closed-loop control systems have been implemented to improve the robustness, reliability and geometrical accuracy of components built by powder LMD. Specifically, researchers have monitored laser parameters, melt-pool metrics, part temperature, feed material, geometry, and optical emissions during processing. A common strategy is sensing and control of melt-pool size or temperature. Other efforts have attempted to maintain a constant layer build height by directly sensing build height and adjusting the processing-head position, processing speed, material feed rate or laser power. As a result, the exploitation of LMD processes continues to accelerate.
However, work remains for AM to reach the status of a fully production-ready technology. Production challenges such as quality assurance, right-first-time manufacturing capability, and the complexity of AM processes involving many input parameters are technological barriers preventing widespread deployment in manufacturing sectors at the industrial level. Ensuring AM process qualification and good part quality has many different aspects: part design, feedstock material, process parametrization, process planning, manufacturing strategies, inline and online monitoring and control systems, etc. Besides the geometrical accuracy of the part, microstructure is a very important characteristic of the laser deposit because it has a strong impact on the mechanical properties. The two most common defects or material discontinuities that limit final part quality are porosity and cracks. Thus, wider adoption of AM technologies requires techniques that improve part quality, namely by addressing microstructure anomalies and the main process defects such as porosity and cracks.
CLAMIR by New Infrared Technologies is a closed-loop control system based on high-speed coaxial MWIR imaging. The embedded system, with real-time processing capabilities, obtains IR images of the melt pool at very high frame rates (1 kHz), extracts key features of the image (such as the melt-pool width) and, based on specific algorithms, controls the power of the laser during the process through an embedded proportional-integral (PI) controller. CLAMIR's main features:
Continuous monitoring and measurement of the melt pool geometry using a MWIR infrared camera (1µm – 5.0 µm)
Closed-loop control of the laser power during the complete process, ensuring quality and repeatability
Compatible with most of laser optics and powders
Easy mechanical integration and quick configuration
Consistent operation, no need of reconfiguration during the process
Principle of operation:
Continuous on-axis monitoring of the melt pool geometry using a high-speed MWIR infrared camera (1.1 um – 5.0 um)
Embedded processing electronics performs a real-time dimensional measurement of the melt pool.
The optimum laser power is calculated and controlled through an analog output (0 VDC – 10 VDC)
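The control loop described above (melt-pool width in, laser power command out over a 0-10 VDC analog output) can be sketched as a textbook PI controller. The gains, sampling period and width units below are illustrative assumptions, not CLAMIR's actual tuning.

```python
def make_pi_controller(kp, ki, dt, out_min=0.0, out_max=10.0):
    """Return a PI step function mapping melt-pool width error to a laser
    power command, clamped to a 0-10 V analog output range.
    kp, ki: proportional and integral gains (illustrative values only).
    dt: loop period in seconds (e.g. 1e-3 for a 1 kHz frame rate)."""
    integral = 0.0

    def step(width_setpoint, width_measured):
        nonlocal integral
        error = width_setpoint - width_measured  # pool too narrow -> raise power
        integral += error * dt
        command = kp * error + ki * integral
        return min(max(command, out_min), out_max)  # clamp to 0-10 VDC

    return step

ctrl = make_pi_controller(kp=2.0, ki=50.0, dt=1e-3)
v1 = ctrl(1.0, 0.8)   # persistent error: integral term keeps pushing
v2 = ctrl(1.0, 0.8)
```

The clamp matters in practice: without it, integrator wind-up during long saturated phases (e.g. the first layers on a cold substrate) would cause large overshoot once the error changes sign.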
Advantages over existing CMOS-based solutions:
Wider range of detected temperatures (down to +100 °C) – better accuracy
Robustness against high-power, high-intensity signals and spatters
Wider dynamic range
Advantages compared to pyrometry-based solutions:
Image processing techniques vs single point measurement
Pyrometers require a 2-color configuration to achieve an accurate temperature reading
1. Camera and embedded processing unit
3. Software (configuration, control and visualization)
Desktop application for configuration and data logging (not required for operation of CLAMIR)
Allows configuration of process parameters (range of the laser power) and closed-loop feedback control
Other features: selection of the operation modes, camera control, definition of ROIs (rounded, square)
Data files visualization and analysis
DLL for custom S/W development
On-axis optical system integration to monitor melt pool geometry
Laser head optical path needs IR transmission (>1.1 µm)
Integration in the laser head using an existing optical port
Easy mechanical integration and quick configuration
Dichroic mirror for compatibility with present VIS cameras for alignment and process visualization
Results with LMD processes:
Wavefront Sensing Applications
What is a wavefront ?
The wavefront is an essential parameter in the propagation of light: it can be used to characterize optical surfaces, align optical assemblies, or help improve the performance of optical systems. In this application note we cover the most common applications of wavefront sensors, illustrated with a few examples.
In physics, the term light refers to electromagnetic radiation of any wavelength, visible or not. Like every type of EM radiation, it propagates as waves, and the set of all points where the wave has the same phase of the sinusoid is called the wavefront.
The wavefront can be plane or spherical and carries the aberrations, i.e. the differences from the perfect sphere or plane. Aberrations are generated when light passes through media or optical components.
What is a wavefront sensor ?
A wavefront sensor is a device for measuring the aberrations of an optical wavefront. The term is normally applied to instruments that do not require an unaberrated reference beam to interfere with in order to deliver the wavefront or phase measurement.
It provides a direct measure of the phase and intensity of a wavefront. The most common type of wavefront sensor is the Shack–Hartmann wavefront sensor (SHWFS), which combines a 2D detector with a lenslet array. These devices were developed for adaptive optics and have since been widely used in optical metrology and laser diagnostics, where their level of performance meets the typical standards of optical metrology.
The best factory-calibrated Shack–Hartmann wavefront sensors provide nanometric accuracy and thousands of waves of dynamic range with a linearity of 99.9%. This level of performance, combined with the intrinsic properties of the instrument (insensitivity to vibrations, speed and achromaticity), makes the Shack–Hartmann wavefront sensor a key tool for a wide spectrum of applications in research and industry. Imagine Optic is the leading manufacturer of SHWFS.
Over the last decade, alternative wavefront sensing techniques to the Shack–Hartmann system have been emerging. Mathematical techniques such as phase imaging or curvature sensing are also capable of providing wavefront estimations. While Shack–Hartmann sensors are limited in lateral resolution by the pitch of the lenslet array, these mathematical techniques are only limited by the resolution of the digital images used to compute the wavefront measurements. That being said, such wavefront sensors suffer from linearity issues and are much less robust than the original Shack–Hartmann wavefront sensor.
Measurement principle of the SHWFS
The Shack–Hartmann wavefront sensor measures the phase and the intensity in the same plane. This allows the calculation of many parameters describing the propagation of light, such as the Point Spread Function or the Modulation Transfer Function, with an error below 1%.
A wavefront sensor is able to deliver the following parameters:
– Tip and Tilt – Curvature – Refractive power – Focal point positions – Wavefront PV and rms – Intensity – Spot diagram – Zernike coefficients – Encircled energy – MTF – PSF – MSquare
and many more…
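The measurement principle behind all these outputs can be sketched numerically: each lenslet focuses a spot onto the detector, and the displacement of that spot from its reference position is proportional to the local wavefront slope over the subaperture. A minimal sketch follows (the focal length and pixel pitch in the example are assumed values, not those of any particular HASO model):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of one subaperture spot image."""
    img = np.asarray(img, dtype=float)
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return np.array([(rows * img).sum() / total,
                     (cols * img).sum() / total])

def slopes_from_spots(ref, meas, focal_length, pixel_pitch):
    """Local wavefront slopes (rad) for each lenslet: spot displacement on
    the detector (pixels -> metres) divided by the lenslet focal length.
    ref, meas: (N, 2) arrays of reference and measured spot centroids."""
    disp = (np.asarray(meas) - np.asarray(ref)) * pixel_pitch  # metres
    return disp / focal_length
```

From the per-lenslet slopes, the wavefront itself is obtained by a zonal (integration) or modal (e.g. Zernike) reconstruction, which is where the parameters listed above come from.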
Applications of the Shack–Hartmann wavefront sensor in Optical metrology
Optical testing in reflection (double pass)
The characterization of optical surfaces is an essential step in the manufacturing of any type of optical component. Interferometers such as the Fizeau were developed for that purpose, but the Shack–Hartmann wavefront sensor is a competitive alternative because it offers an excellent trade-off between performance and versatility/ease of use.
For reflective optics, and especially large mirrors, the Shack–Hartmann wavefront sensor can perform a rapid and accurate measurement of the radius of curvature. Accessories such as the R-FLEX system developed by Imagine Optic increase the versatility of the Shack–Hartmann wavefront sensor and simplify the measurement setup without degrading the performance of the wavefront sensor.
The R-FLEX can adapt to the f/# of the component under test thanks to a large choice of optical focusing modules. The measurement is performed after a reference measurement is recorded, in order to distinguish the aberrations coming from the component under test from those coming from the measurement system itself.
Characterization of the primary mirror of Herschel space observatory
The Herschel Space Observatory was a space observatory built and operated by the European Space Agency. Active from 2009 to 2013, it was the largest infrared telescope ever launched, carrying a 3.5-metre mirror and instruments sensitive to the far-infrared and submillimetre wavebands. The characterization of the primary mirror was challenging, since the mirror (SiC) was polished to perform imaging in the far infrared while the wavefront measurement was made in the visible. The required dynamic range was extremely high (1.2 mm), and only a Shack–Hartmann wavefront sensor (Imagine Optic HASO) could make that measurement possible.
Thin Dielectric Mirror Characterization in Reflection
The application case above shows an example of the measurement of a large dielectric mirror in reflection with an R-FLEX Large Aperture, used to characterize the wavefront error over a region of interest of the optical component.
Optical testing in transmission
For the test of optics in transmission, the measurement can be made in single pass or double pass. For the test of filters and dichroics, the Shack–Hartmann wavefront sensor has the advantage of being achromatic and can perform the characterization at several wavelengths. The main challenge for this application is adapting to the size of the area of interest on the component under test; for this, accessories such as the R-FLEX LA were designed to allow seamless integration of a Shack–Hartmann wavefront sensor for the measurement of apertures up to 200 mm in double pass.
Optical testing of eye wears
In the past few years, smart glasses have become accessible to the mass market. These devices offer several features, including hands-free access to all sorts of information relayed directly into the pupil of the eye, potentially improving the user's safety in a number of applications, professional or not. While reducing production costs, manufacturers of this type of optical system have to follow quality standards defined for safety eyewear by norms such as:
– EN166: European Standards for Eye protection – ANSI Z87.1: Eye protection from The American National Standards Institute – SANS 1404: Eye-protectors for industrial and non-industrial use in South Africa
Accuracy of vision is one of the four optical clarity classes; it qualifies image distortion through the eyewear. The highest level of optical clarity, or correctness, is defined as Class 1 (0.06 diopters).
In general, Shack–Hartmann wavefront sensors are used for the characterization of a wide variety of optical components, such as:
– Concave and convex mirrors – Toroidal mirrors – Flat windows such as filters, dichroics and vacuum viewport flanges – Curved windows such as head-up displays, TV displays and heated windshields
Assistance for optical alignment
The alignment of optical assemblies to minimize aberrations has become more and more critical in image-forming optical systems. Over the past decade, the need for high-performance optical alignment has increased drastically with the constant evolution of imaging devices: cameras for smartphones, VR devices, inspection lenses for the semiconductor industry, and optical systems for the defense and security industry are some examples.
The first optical adjustment in which a Shack–Hartmann wavefront sensor can be used is collimation: the Shack–Hartmann wavefront sensor measures the curvature information in real time, with a sensitivity that can reach 1/1000 m⁻¹.
The Shack–Hartmann wavefront sensor is also able to provide a Zernike polynomial decomposition, which can be compared, for instance, with a wavefront error (WFE) established by simulations. The alignment can then be performed by minimizing the off-axis aberrations, with a sensitivity on the wavefront as low as λ/200 rms on the Zernike coefficients of interest.
These alignment processes can be automated thanks to communication between the simulation and the degrees of freedom made available on the system being aligned.
Over the past few years, standard off-the-shelf Shack–Hartmann wavefront sensors have proven their ability to perform optical alignment on very demanding optical systems. The GAIA space telescope was able to reach diffraction-limited performance thanks to the R-FLEX.
A standard HASO R-FLEX, located in the focal plane of the telescope system, was used for the alignment and optimization of the dual-telescope system, the two GAIA three-mirror anastigmatic telescopes. It was able to reach approximately 50 nm of wavefront error.
In industry, the Shack–Hartmann wavefront sensor is used as the primary tool for the alignment of complex optical systems dedicated to the inspection of wafers, or of 8″ telescopes used for Earth observation. Thanks to its versatility, the Shack–Hartmann wavefront sensor is also a prime tool in industrial R&D.
Top: alignment of an 8″ Schmidt-Cassegrain telescope with the R-FLEX in the focal plane. Bottom left: wavefront error before alignment, 226 nm rms. Bottom right: wavefront error after alignment, 19 nm rms.
Laser beam diagnostic
The Shack–Hartmann wavefront sensor measures the phase term but also the amplitude, or intensity, term. The characterization of these two terms in a single plane makes it possible to propagate the electromagnetic field anywhere in free space. Furthermore, the phase carries more weight than the amplitude in the propagation of a laser beam. This makes the Shack–Hartmann wavefront sensor a remarkably interesting option for applications related to the development, integration or maintenance of a laser system.
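The claim that phase plus intensity measured in one plane determines the field everywhere can be illustrated with the angular spectrum method, a standard Fourier-optics propagator. This is a generic sketch of the principle, not Imagine Optic's software; the sampling values in the usage example are arbitrary.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field (amplitude and phase known in one
    plane, sample pitch `pitch` in metres) through free space by a distance
    z, using the angular spectrum of plane waves."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: propagate a Gaussian beam forward and back by 1 mm.
n, pitch, lam = 64, 5e-6, 633e-9
x = (np.arange(n) - n // 2) * pitch
X, Y = np.meshgrid(x, x, indexing="ij")
field = np.exp(-(X ** 2 + Y ** 2) / (2 * (40e-6) ** 2)).astype(complex)
out = angular_spectrum_propagate(field, lam, pitch, 1e-3)
back = angular_spectrum_propagate(out, lam, pitch, -1e-3)
```

For propagating (non-evanescent) components the transfer function is unitary, so the forward-then-back round trip recovers the input field and the total power is conserved.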
Just like for any optical system, the Shack–Hartmann wavefront sensor can be used to minimize the wavefront aberrations of the beam exiting the laser cavity. The measurement can be made in the near field, and reducing the aberrations yields an optimized far field, for instance by maximizing the encircled energy.
Furthermore, some lasers emit over a broad spectral bandwidth and can produce aberrations that vary with wavelength. The achromatic nature of the Shack–Hartmann wavefront sensor, combined with filters, can be used to characterize the spatio-temporal coupling in ultra-short-pulse or continuum laser sources.
Here a Shack–Hartmann wavefront sensor (HASO, Imagine Optic) was used to characterize the wavefront error of the output beam of an Nd:YAG rod laser before and after static correction of spherical aberration by a variable-radius mirror (VRM).
Some amplification methods used in lasers introduce a thermal lensing effect that affects the beam propagation over time. The Shack–Hartmann wavefront sensor can be used to simply characterize and monitor the beam curvature variations. Additionally, it can measure pointing stability.
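The curvature monitoring described above amounts to converting the measured Zernike focus term into an equivalent thermal-lens focal length. A minimal sketch follows, assuming the Noll rms-normalised focus polynomial W(r) = c·√3·(2r²/R² − 1); the function name is illustrative.

```python
import numpy as np

def thermal_lens_focal_length(c_focus_rms, pupil_radius):
    """Equivalent focal length of a thermal lens from the Zernike focus term.

    c_focus_rms  : focus coefficient (Noll rms normalisation), metres
    pupil_radius : beam/pupil radius R on the sensor, metres

    W(r) = c * sqrt(3) * (2 r^2/R^2 - 1); identifying the parabolic part
    with r^2/(2f) gives f = R^2 / (4 * sqrt(3) * c).
    """
    return pupil_radius**2 / (4.0 * np.sqrt(3.0) * c_focus_rms)
```

Tracking this value frame by frame gives the temporal evolution of the thermal lens as the gain medium heats up.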
Above: characterization of the thermal properties of different gain media with a Shack–Hartmann wavefront sensor (HASO, Imagine Optic). The measured aberrations are dominated by focus (thermal lensing) and show detailed residuals for the different media.
The full characterization of the electromagnetic field in one snapshot, together with the possibility to monitor and measure the curvature, allows the Shack–Hartmann wavefront sensor to provide advanced beam diagnostics. Thanks to its very fine sensitivity, collimation of laser diodes can be performed, along with M-squared (M²) measurement. Measuring M² with a SHWFS is certainly possible; however, the initial conditions are critical to obtaining reliable measurements: the beam must be single transverse mode, the measurement has to be done within the Rayleigh length, and the sampling of the beam needs to be sufficient to accurately measure both the aberrations and the wings of the Gaussian beam.
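Whatever the instrument, an M² estimate ultimately reduces to fitting a hyperbola to beam radii measured along the caustic, in the spirit of ISO 11146. The sketch below assumes 1/e² beam radii and illustrative function and variable names.

```python
import numpy as np

def m_squared(z, w, wavelength):
    """Estimate M^2 from beam radii w(z) measured at positions z along the caustic.

    Fits w^2(z) = a + b z + c z^2 (ISO 11146-style hyperbola) and returns
    (M2, waist radius w0, waist position z0). All lengths in metres.
    """
    c, b, a = np.polyfit(z, np.asarray(w) ** 2, 2)   # highest order first
    z0 = -b / (2 * c)                                # waist location
    w0 = np.sqrt(a - b**2 / (4 * c))                 # waist radius
    M2 = (np.pi / wavelength) * np.sqrt(a * c - b**2 / 4)
    return M2, w0, z0
```

For a synthetic ideal Gaussian caustic the fit returns M² = 1, which is a useful check before trusting the routine on real data.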
The history of the Shack-Hartmann wavefront sensor is linked to Adaptive Optics (AO). It was developed to measure phase distortions so they could be corrected with a deformable mirror. Applications of AO have boomed over the past two decades, and the Shack-Hartmann wavefront sensor is still the most commonly used wavefront sensor in these systems.
Beyond its native application of driving the closed loop, the Shack–Hartmann wavefront sensor can also be used to optimize the AO system in other ways and to characterize the wavefront threat of a system. Analysis of the wavefront threat can be used to determine whether an AO correction needs to be deployed, to choose the deformable mirror, and to study the temporal properties of the wavefront distortions.
Study of the wavefront threat of the HAPLS pump laser
Every adaptive optics system combines a wavefront sensor, a control system (RTC or computer), and a deformable mirror. Adaptive optics can operate in closed loop or open loop, but the performance of both control modes relies on the interaction matrix. In that respect, the linearity of the wavefront sensor is crucial for the closed loop to converge and to obtain a stable correction. The Shack–Hartmann wavefront sensor is a very robust candidate for this application as a result of its linearity.
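The role of the interaction matrix can be sketched in a few lines: each actuator is poked, the resulting slope response is recorded, and the command matrix used in the loop is the (regularised) pseudo-inverse of the stacked responses. This is a generic illustration, not any specific RTC's API; names and the push-pull calibration scheme are assumptions.

```python
import numpy as np

def calibrate_command_matrix(push_pull_slopes, poke_amplitude, cond=1e-2):
    """Build an AO command matrix from push-pull poke measurements.

    push_pull_slopes : (n_actuators, n_slopes) array, slope difference measured
                       when each actuator is pushed by +/- poke_amplitude
    Returns C (n_actuators x n_slopes) such that the correction command for
    measured slopes s is u = -C @ s.
    """
    # Interaction matrix D: slopes per unit actuator command (n_slopes x n_act)
    D = (push_pull_slopes / (2.0 * poke_amplitude)).T
    # Regularised pseudo-inverse: singular values below cond * max are discarded,
    # which filters poorly sensed mirror modes
    return np.linalg.pinv(D, rcond=cond)
```

The linearity requirement mentioned above is exactly what makes the single matrix D a valid model of the sensor response over the full correction range.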
Every adaptive optics system requires a wavefront sensor for the correction, and some advanced systems rely on a second one. These setups typically use a Shack–Hartmann wavefront sensor to monitor the corrected wavefront and to adjust the closed-loop correction by reinjecting the non-common-path aberrations (NCPA), i.e. the aberrations not seen by the wavefront sensor upstream in the closed loop. Such sensors are called truth wavefront sensors, and the Shack–Hartmann wavefront sensor, because of its linearity, is a very interesting candidate for this role.
The Gemini Planet Imager (GPI) is an ExAO system dedicated to directly imaging planets located inside and outside our solar system. This state-of-the-art AO system couples atmospheric correction with a coronagraph, allowing imaging and spectrometry of stellar companions at extreme angular resolution.
The spectrum of applications where the Shack–Hartmann wavefront sensor is used is very broad. Visit the Imagine Optic website to explore the application notes available for download and to get an overview of the publications.
Comparison of the Shack–Hartmann wavefront sensor with interferometers
Interferometers have long been the reference tool in optical workshops and polishing labs. The characterization of surface roughness/finish and of mid-spatial frequencies is inevitably reserved for interferometry-based techniques such as optical profilers, interferometric microscopes, and state-of-the-art interferometers.
Fizeau interferometers have also been the reference tool for the characterization of optics in reflection and transmission, as well as of optical systems and components. Over the past two decades, commercial Fizeau interferometers have evolved and overcome part of their inherent limitations related to environmental conditions such as temperature drift, air turbulence, and vibrations, thanks to innovative phase-shifting techniques. On the other hand, their limited dynamic range requires the use of nulling optical components, which can dramatically increase cost and complexity.
The SHWFS exhibits lower spatial resolution but provides higher dynamic range and lower sensitivity to environmental conditions, thanks to its measurement principle and smaller footprint. High-performance SHWFS such as the HASO from Imagine Optic have a factory calibration that allows direct wavefront measurement with λ/100 rms accuracy and λ/200 rms sensitivity in referenced mode. On top of these technical advantages, the SHWFS is also quicker to set up and much more compact: a system such as the R-FLEX from Imagine Optic can be placed at the center of curvature of a large concave mirror and perform a precise characterization within minutes.
Finally, the overall budget for a high-performance SHWFS is usually much lower than for an interferometer setup. In conclusion, the SHWFS can be employed as a cross-check system, or even replace the interferometer in applications where the measurement of the low-frequency aberrations of a component (Zernike coefficients) is the main objective, or for the alignment of optical systems such as a collimator.
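The low-frequency characterization mentioned above boils down to a least-squares projection of the measured wavefront map onto a few Zernike polynomials. A minimal sketch, assuming a square wavefront map and a simple unnormalised low-order basis (function name and normalisation are illustrative choices, not a vendor convention):

```python
import numpy as np

def fit_zernike_low_order(wavefront, pupil_radius_px):
    """Fit piston, tip, tilt, focus and astigmatism to a square wavefront map.

    Returns coefficients for the basis
    [1, x, y, 2(x^2+y^2)-1, x^2-y^2, 2xy] with x, y normalised to the pupil.
    Points outside the unit pupil are ignored.
    """
    n = wavefront.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    x = (xx - (n - 1) / 2) / pupil_radius_px
    y = (yy - (n - 1) / 2) / pupil_radius_px
    mask = x**2 + y**2 <= 1.0
    basis = np.stack([np.ones_like(x), x, y,
                      2 * (x**2 + y**2) - 1,
                      x**2 - y**2, 2 * x * y], axis=-1)
    coeffs, *_ = np.linalg.lstsq(basis[mask], wavefront[mask], rcond=None)
    return coeffs
```

This is the kind of per-component figure (a handful of Zernike coefficients) for which a SHWFS can stand in for an interferometer.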