-
$\textcolor{red}{\text{Ground Segment System Software Engineering - Satellite Camera Design, Calibration and Image Preprocessing}}$ ↗
- Pushbroom Optical Hyperspectral Spectrometer Calibration and Pre-processing (L0-L1b)
- Pushbroom Multispectral High-Resolution Camera Calibration and Pre-processing (L0-L2)
- On-Orbit Radiometric, Spatial, and Geometric Calibration
-
$\textcolor{red}{\text{Remote Sensing Software Engineer}}$ ↗ -
$\textcolor{red}{\text{Goofing Around Berlin}}$ 🍺 ↗
*I am a Satellite Image Processing and CalVal Engineer, as well as a Remote Sensing Engineer.
I specialize in developing and implementing algorithms for the calibration and validation of pushbroom multispectral and hyperspectral (SWIR) spaceborne/airborne optical instruments, and for preprocessing from Level-0 to Level-2 datasets.
So far, I have participated in 11 satellite missions, focusing on multispectral and hyperspectral imaging systems and covering design, manufacturing, testing, launch, on-orbit calibration/validation, and ground segment software development.
My interdisciplinary experience extends to AOCS, Software Engineering, System Engineering, Optical Design, Optoelectronics, and Mission Planning.
-
$\textcolor{red}{\text{Note:}}$ $\textcolor{red}{\text{This document has been prepared using publicly available information}}$ $\textcolor{red}{\text{from the literature and reflects concepts I have learned during my}}$ $\textcolor{red}{\text{professional experience. It does not include any proprietary or confidential}}$ $\textcolor{red}{\text{information.}}$
Ground Segment System Software Engineering - Satellite Camera Design, Calibration and Image Preprocessing
- I perform laboratory calibration of optical hyperspectral spectrometers for spaceborne applications. Following Baumgartner's calibration framework and utilizing DLR's Optical Calibration Laboratory facilities, I ensure the instrument meets its spectral, radiometric, and geometric fidelity requirements. During the calibration campaign, I systematically characterize:
- Detector-related effects
- Spectral response characteristics
- Radiometric accuracy parameters
- Geometric alignment specifications
- This comprehensive calibration process allows me to validate that the instrument will perform according to mission specifications once deployed in space.
-
- Determine the Instrument Spectral Response Function (ISRF) as a function of wavelength and pixel position.
- Verify ISRF Full Width at Half Maximum (FWHM) meets the target.
- Validate spectral oversampling and the spectral sampling interval.
- Characterize and minimize smile and keystone effects to below acceptable thresholds.
- Confirm pixel spectral linearity and ISRF stability under thermal and mechanical perturbations.
- Measure and characterize spectral resolution across the full spectral range.
- Determine wavelength calibration accuracy and precision.
- Evaluate spectral sampling interval and spectral resolution stability.
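A minimal sketch of how the ISRF centre and FWHM could be extracted from a monochromator scan, assuming the sampled response is approximately Gaussian (`fit_isrf` and its arguments are illustrative, not the actual campaign code):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amp, center, sigma):
    return amp * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def fit_isrf(wavelengths, response):
    """Fit a Gaussian to a sampled ISRF and return (centre wavelength, FWHM)."""
    p0 = (response.max(),                       # initial amplitude guess
          wavelengths[np.argmax(response)],     # initial centre guess
          (wavelengths[-1] - wavelengths[0]) / 10)
    popt, _ = curve_fit(gaussian, wavelengths, response, p0=p0)
    _, center, sigma = popt
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)  # FWHM ≈ 2.355 sigma
    return center, fwhm
```

Repeating the fit per pixel and per monochromator line builds the ISRF map as a function of wavelength and pixel position.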
-
- Establish a radiometric reference achieving high multiplicative accuracy using multiple radiance levels.
- Verify overall optical transmission and detector response consistency.
- Ensure radiometric stability and zero-level offset stability.
- Validate detector linearity up to the saturation level and stray light contributions.
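The linearity check over multiple radiance levels can be sketched as a straight-line fit with a residual bound (the function name and the 1 % tolerance are assumptions, not mission requirements):

```python
import numpy as np

def linearity_check(radiance, dn, tol_percent=1.0):
    """Fit DN = g * L + o over the measured radiance levels and report the
    peak deviation from the fit as a percentage of full scale."""
    radiance = np.asarray(radiance, dtype=float)
    dn = np.asarray(dn, dtype=float)
    g, o = np.polyfit(radiance, dn, 1)          # least-squares line
    residual = dn - (g * radiance + o)
    nonlin = 100.0 * np.max(np.abs(residual)) / dn.max()
    return nonlin, nonlin <= tol_percent
```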
-
- Characterize focal length, aperture, slit geometry, and alignment.
- Confirm spatial sampling distance at specified orbital altitude.
- Measure and correct spatial smile and keystone effects to below the design threshold.
-
-
- Mount the instrument on a vibration-isolated optical table.
- Verify alignment of optical components using a collimated reference beam.
- Record slit orientation and grating alignment.
-
- Operate the detector in a dark and thermally controlled environment.
- Characterize dark current and offset using reserved dark pixels.
-
-
-
- Use a monochromator to supply narrow spectral lines across the spectrometer's bandwidth.
- Measure ISRF profiles for each pixel and extract FWHM.
-
- Shift the illumination spot across the Field of View (FoV) and record spectral and spatial shifts.
-
- Simulate thermal and mechanical stresses and evaluate ISRF variation.
-
-
-
- Illuminate the entrance slit with a calibrated integrating sphere.
- Derive gain and offset coefficients for each pixel.
-
- Assess signal-to-noise ratio (SNR), dark current, and readout noise.
- Vary integration time and radiance levels to confirm detector linearity.
-
- Simulate high-contrast scenes and evaluate stray light suppression.
-
-
-
- Use a collimated beam and pinhole mask to map the Point Spread Function (PSF).
-
- Measure FoV and ensure alignment with design specifications.
-
- Introduce distinct spectral lines across the FoV and measure detector output to quantify smile and keystone effects.
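The smile/keystone measurement above can be sketched as a sub-pixel centroid fit of a narrow emission line across the FoV (a simplified model, not the lab software):

```python
import numpy as np

def line_centroids(frame):
    """Sub-pixel centre of a narrow spectral line per spatial column.
    frame: 2-D array, rows = spectral pixels, columns = spatial pixels."""
    rows = np.arange(frame.shape[0])[:, None]
    return (rows * frame).sum(axis=0) / frame.sum(axis=0)

def smile_amplitude(frame):
    """Peak-to-peak curvature of the line centre across the FoV, in pixels."""
    c = line_centroids(frame)
    return c.max() - c.min()
```

The same centroiding along the spatial axis of a monochromatic point source, repeated per wavelength, quantifies keystone.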
-
-
- Spectral calibration files (ISRF maps, wavelength alignment).
- Radiometric calibration files (gain, offset, linearity corrections).
- Geometric calibration files (distortion maps, smile/keystone corrections).
-
- Assess calibration uncertainties based on reference standards and environmental stability.
- Document potential sources of error, including mechanical shifts and signal noise.
-
- Apply calibration files to a test dataset and verify compliance with requirements.
-
Responsible for developing the calibration, validation, and preprocessing pipeline ground segment software, from Level-0 (raw) to Level-2 (science-ready) datasets. The pipeline includes the following algorithms:
-
A demonstration preprocessing pipeline for pushbroom multispectral optical instruments (under development).
This project implements an image preprocessing pipeline for multispectral satellite imagery, with features including:
-
- decoding
- missing packet check, flag generation
-
-
- Using key data (gain, offset)
-
- Using various filters or designing digital signal filters
-
- Using key data from lab or on-orbit calibration campaigns.
-
- Sensor image acquisition model or key point extraction and matching
-
- Using GDAL or rasterio
- If possible, using the central pixel coordinate from the metadata file to georeference the image.
- If GNSS data is missing, downloading Sentinel-2 images via Google Earth Engine using the satellite's TLE. Image-to-image georeferencing is then applied by keypoint extraction and matching.
-
-
-
- Using Py6S
-
-
-
- Implemented various pansharpening algorithms, including Simple Brovey, Brovey, Gram-Schmidt, and ESRI.
-
- To evaluate the quality of raw images and the effectiveness of the applied correction methods, metrics such as Peak Signal-to-Noise Ratio (PSNR), RMSE, SSIM, MSE, GIQE, CE95, and radiometric accuracy are calculated and reported in a PDF file.
-
Requirements, Testing, and Development Framework

Python and Packages:
- Python 3.x
- NumPy
- OpenCV
- GDAL
- rasterio
- scikit-image
- matplotlib
- Earth Engine API
- Dask for distributed computing

Development Environment:
- Docker, Kubernetes

Testing:
- Pytest for automated testing
- Unit testing coverage
- Comprehensive logging system

Repository layout:
- 02_scripts - Core processing scripts
  - level_0.py - Level 0 processing
  - level_1.py - Level 1 processing and corrections
  - band_coreg.py - Band co-registration
  - georeferencing_v1.py - Georeferencing
  - metrics_ips.py - Quality metrics
  - pansharp.py - Pansharpening

This project is licensed under the GNU GPL v3 - see the LICENSE file for details.
-
-
I send commands to capture images of pseudo-invariant sites such as the Mauritania Desert, Dome-C, or Antarctica as flatfield images at different TDI stages and exposure times. I follow the USGS Test Sites Catalog.
-
I use images taken at night during passes over the Atlantic Ocean, ensuring there are no clouds and no light sources, as darkfield images.
-
Non Uniformity Correction (NUC)
-
Calculate the mean of each column of the flatfield and darkfield images; call the results for each column `flatfield_desired` and `darkfield_desired`.
-
Calculate gain and offset as:
$$ gain = \frac{\overline{flatfield_{desired}} - \overline{darkfield_{desired}}}{flatfield_{desired} - darkfield_{desired}} $$
$$ offset = \overline{flatfield_{desired}} - gain \cdot flatfield_{desired} $$
-
Apply non-uniformity correction and flatfielding simultaneously (NUC). A `dark_offset` parameter is taken from laboratory results:

$$ NUC_{frame} = img \cdot gain + offset - dark_{offset} $$
-
Store the gain and offset data in a Calibration Key Data (CKD) container.
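The column-wise NUC equations above translate directly into NumPy (a sketch; the CKD storage itself is omitted):

```python
import numpy as np

def derive_nuc(flatfield, darkfield):
    """Column-wise NUC coefficients from flat- and darkfield frames
    (rows = along-track lines, columns = detector pixels)."""
    ff = flatfield.mean(axis=0)               # flatfield_desired per column
    df = darkfield.mean(axis=0)               # darkfield_desired per column
    gain = (ff.mean() - df.mean()) / (ff - df)
    offset = ff.mean() - gain * ff
    return gain, offset

def apply_nuc(img, gain, offset, dark_offset=0.0):
    """Apply NUC and flatfielding in one step."""
    return img * gain + offset - dark_offset
```

Applying the derived coefficients back to the flatfield itself should flatten every column to the same mean level, which is a quick sanity check on the CKD.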
-
-
Bad Pixel Correction
- Calculate the global variance of the pixels as a threshold.
- Identify pixels exceeding the threshold.
- Replace bad pixels with the average value of neighboring pixels.
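A minimal sketch of the three steps, using a global k-sigma threshold as the variance criterion and the 3x3 neighbour mean for replacement (the value of k is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_bad_pixels(img, k=3.0):
    """Flag pixels deviating from the global mean by more than k standard
    deviations and replace them with the mean of their 3x3 neighbours."""
    img = img.astype(float)
    mean, std = img.mean(), img.std()
    bad = np.abs(img - mean) > k * std                      # global threshold
    neigh_mean = (uniform_filter(img, size=3) * 9 - img) / 8.0  # excludes centre
    out = img.copy()
    out[bad] = neigh_mean[bad]
    return out, bad
```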
-
Denoising
- Implement a Butterworth low-pass filter with parameters chosen by trial and error.
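A frequency-domain Butterworth low-pass for a 2-D image can be sketched as follows (the cutoff and order defaults are placeholders for the trial-and-error values):

```python
import numpy as np

def butterworth_lowpass(img, cutoff=0.2, order=2):
    """2-D frequency-domain Butterworth low-pass, H = 1 / (1 + (f/fc)^(2n)),
    with `cutoff` given as a fraction of the Nyquist frequency."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f = np.hypot(fy, fx) / 0.5        # radial frequency, normalised to Nyquist
    h = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))
```

The Butterworth profile avoids the ringing of an ideal (brick-wall) low-pass while still rolling off high-frequency noise.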
-
Image Restoration
-
In the worst-case scenario, relying on the satellite's internal clock for image capture may introduce a 1-second timing offset, leading to a positional deviation of up to 7 km. Therefore, I use structures such as runways and bridges as MTF targets when I cannot capture dedicated MTF targets such as those at the Baotou calibration site.
-
MTF Calculation and PSF Sharpening
- Identify a suitable edge (close to main scan or cross-scan axes) with sufficient contrast and low noise.
- Construct the Edge Spread Function (ESF) from the edge.
- Derive the Line Spread Function (LSF) by differentiating the ESF.
- Take the normalized Fourier transform of the LSF to obtain the MTF.
- Since the MTF is the magnitude of the Fourier transform of the PSF, the PSF can be recovered from the measured MTF by inverse transform.
- If the PSF is noisy or inaccurate, consider simulating a PSF model.
- Normalize the PSF kernel and convolve the image. Alternatively, use Wiener deconvolution.
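The ESF -> LSF -> MTF chain can be sketched in a few lines (the Hanning window is an assumption to tame differentiation noise):

```python
import numpy as np

def mtf_from_esf(esf):
    """Derive the MTF from an oversampled Edge Spread Function:
    differentiate to the LSF, window it, FFT, and normalise at DC."""
    lsf = np.gradient(np.asarray(esf, dtype=float))  # LSF = d(ESF)/dx
    lsf *= np.hanning(lsf.size)                      # suppress edge noise
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                              # MTF(0) = 1 by definition
```

For a well-sampled edge, the value of this curve at the Nyquist frequency is the usual figure of merit compared against the design MTF.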
-
-
Band Registration
- Because of subpixel alignment issues:
- Convert images to 8-bit and apply CLAHE to create dummy bands.
- Use the selected reference band to find keypoints and descriptors of the other bands with SIFT.
- Match keypoints with a FLANN-based matcher.
- Calculate the homography matrix.
- Warp the bands relative to the reference band using the homography matrix.
-
Georeferencing Process
- Use Sentinel-2 bands as a reference for image-to-image georeferencing.
- Estimate coordinates of the image scene using TLE information.
- Download Sentinel-2 images from Google Earth Engine for those coordinates.
- Apply the same feature matching and warping method used in band registration.
- Copy the corner coordinates, CRS, and transform from Sentinel-2 to the T2.1 bands.
-
Top of Atmospheric (TOA) Conversion
-
The result of the NUC is still in digital numbers (DN). Convert to radiometric units using the radiometric gain/offset for each band; a Sentinel-2-style TOA conversion can be applied:
$$ Radiance_{TOA} = (NUC_{frame} - radiance_{offset}) \cdot radiance_{gain} $$
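The DN-to-radiance step, extended with the standard TOA reflectance normalisation (the per-band `gain`, `offset`, and `esun` values are hypothetical):

```python
import numpy as np

def dn_to_toa(nuc_frame, gain, offset, esun, sun_zenith_deg, d_au=1.0):
    """Convert NUC-corrected DN to TOA radiance, then to TOA reflectance.
    esun: band solar irradiance; d_au: Earth-Sun distance in AU."""
    radiance = (nuc_frame - offset) * gain
    mu0 = np.cos(np.radians(sun_zenith_deg))
    reflectance = np.pi * radiance * d_au ** 2 / (esun * mu0)
    return radiance, reflectance
```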
-
-
Atmospheric Correction
- Using Py6S for atmospheric correction
- Familiar with MODTRAN and LibRadTran atmospheric models.
- Used FLAASH model on ENVI for Landsat 8 OLI dataset.
- Because of MODTRAN licensing, I follow the atmospheric correction algorithm described in the Landsat 8-9 Calibration and Validation (Cal/Val) Algorithm Description Document (ADD), page 776.
- Work in progress on atmospheric modeling.
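As an alternative to a full radiative-transfer run per pixel, 6S-style correction coefficients (often labelled xa, xb, xc in Py6S workflows) can be applied analytically once computed for a scene; a sketch with illustrative coefficient values:

```python
def surface_reflectance(radiance, xa, xb, xc):
    """Invert TOA radiance to surface reflectance using the three
    atmospheric-correction coefficients reported by a 6S/Py6S run:
    y = xa * L - xb;  rho = y / (1 + xc * y)."""
    y = xa * radiance - xb
    return y / (1.0 + xc * y)
```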
-
Mathematical Modeling and SNR Simulation
- Expertise in mathematical modeling of the integrated optics and sensor system to calculate the total photons collected by the camera depending on the satellite's attitude (roll, pitch, yaw).
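A first-order photon-budget/SNR model along these lines might look as follows (every geometry and noise value in the test is an illustrative assumption, not a real instrument's):

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]

def snr_budget(radiance, wavelength, bandwidth, f_number, pixel_pitch,
               t_int, qe, tau_opt, dark_e, read_e):
    """First-order pushbroom SNR model: spectral radiance
    [W m-2 sr-1 um-1] -> signal electrons -> shot-noise-limited SNR."""
    a_pix = pixel_pitch ** 2                        # pixel area [m^2]
    omega = np.pi / (4.0 * f_number ** 2)           # cone solid angle [sr]
    power = radiance * bandwidth * a_pix * omega * tau_opt      # [W]
    signal = power * t_int * qe * wavelength / (H * C)          # [e-]
    noise = np.sqrt(signal + dark_e * t_int + read_e ** 2)      # shot+dark+read
    return signal, signal / noise
```

Attitude enters through the effective radiance and ground-projected pixel footprint; this sketch keeps those folded into the input radiance.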
-
- Proficient in Python, C and MATLAB.
- Experienced in cloud-based deployments, implementing CI/CD, Git, Pytest, Docker, and PEP 8 compliance for robust software engineering.
- Utilize GPU programming, cloud computing, database management, and container-based isolated development environments.
- Skilled in software like ENVI, ArcMap, Global Mapper, QGIS, and PCI Geomatica.
- Comfortable with Linux environments, shell scripting, Git/GitHub, and containerization (Docker).
- Leverage various Python libraries indicated in the skills section (e.g., NumPy, GDAL, etc.).
- Comprehensive understanding of the entire data processing and mission planning lifecycle, from design to deployment, ensuring high-quality data products and efficient satellite operations.
-
- Sentinel-1 (SAR)
- Sentinel-2 A/B
- Sentinel-5P
- Landsat 7/8/9
- ASTER
- MRO CTX and HiRISE
-
- Metallic mineral exploration
- Fault line detection
- Natural disaster analysis
- NDVI, NDWI, NBR
- Image segmentation
- Deep learning approaches
- Surface deformation detection
-
- Deep knowledge of the Sentinel-5P TROPOMI Algorithm Theoretical Basis Documents (ATBDs)
- Deep knowledge of the Sentinel-2 ATBD
-
- Developed image processing pipeline tools.
- Created image database search and download utilities.
-
- Processed images for:
- Normalized Difference Vegetation Index (NDVI)
- Normalized Difference Water Index (NDWI)
- Normalized Burn Ratio (NBR)
- Lineament extraction
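The NDVI computation from the list above, with a division-by-zero guard for water/shadow pixels (band names are generic):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red),
    returning 0 where the denominator vanishes."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)     # avoid division by zero
    return np.where(denom == 0, 0.0, (nir - red) / safe)
```

NDWI and NBR follow the same pattern with the green/SWIR band combinations swapped in.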
-
- Leveraged ASTER's VNIR, SWIR, and TIR channels.
- Processed ASTER data to identify minerals and mineral groups for mining applications.
-
- Purpose: to detect and identify target objects in satellite imagery using both onboard and ground segment software.
- Object detection using color segmentation, template matching, corner/edge/contour detection.
- Implemented:
- Feature matching
- Watershed algorithm
- Face and cat-face recognition
- Pedestrian detection
- Developed various object tracking algorithms.
- Built CNNs for real-time digit classification and object detection.
- Utilized pyramid representation, sliding window, non-maximum suppression, and region proposals.
- Implemented R-CNN, YOLO, and SSD for object detection.
- I just moved to Berlin and I'm practically glued to my computer chair. If anyone's bored and wants to grab a coffee or beer to chat about satellites, space, rock music, Elon Musk's wife, astrophysics, parallel universes, or Laika the dog, feel free to get in touch! 😎







