SafeSpace is an iOS application that uses advanced machine learning to detect objects in space station environments. Leveraging the YOLOv8 object detection model, the app helps astronauts and space personnel identify and categorize objects in various lighting conditions and environments that might be encountered aboard a space station.
The current implementation is trained on the Falcon synthetic dataset and can detect three critical safety-related object classes:
- Fire extinguishers
- Toolboxes
- Oxygen tanks
## Features

### Camera Detection
- Camera-based detection with the YOLOv8 model
- Real-time processing and feedback
- Detection metrics, including accuracy and processing time

### Simulation
- Test object detection in simulated space environments
- Adjustable lighting and occlusion conditions
- Multiple environment presets (Normal Station, Dim Lighting, Emergency Lighting, etc.)
- Interactive zoom functionality for detailed analysis

### Results
- Comprehensive detection results with confidence scores
- Performance metrics for detection accuracy
- List of detected objects with classification
- Classification of fire extinguishers, toolboxes, and oxygen tanks with confidence scores
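The confidence scores mentioned above are typically used to filter and rank raw detections before they are shown to the user. A minimal sketch in Swift, where the `Detection` type and its field names are illustrative rather than taken from the SafeSpace codebase:

```swift
// Hypothetical model of one detection result; names are illustrative.
struct Detection {
    let label: String        // e.g. "fire_extinguisher"
    let confidence: Double   // 0.0 ... 1.0
}

/// Keep only detections at or above a confidence threshold,
/// sorted from most to least confident.
func filterDetections(_ detections: [Detection],
                      minConfidence: Double = 0.5) -> [Detection] {
    detections
        .filter { $0.confidence >= minConfidence }
        .sorted { $0.confidence > $1.confidence }
}

let raw = [
    Detection(label: "toolbox", confidence: 0.91),
    Detection(label: "oxygen_tank", confidence: 0.32),
    Detection(label: "fire_extinguisher", confidence: 0.78),
]
let kept = filterDetections(raw)
// The 0.32 oxygen_tank detection is dropped; the rest are ranked.
```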
## Model Details

- Implements the YOLOv8 (You Only Look Once) object detection model
- Trained on the Falcon synthetic dataset of space station environments
- Optimized for iOS using Core ML framework
- Fast inference time (typically under 50ms)
- Specialized in detecting safety equipment in space habitats
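YOLO-style detectors emit many overlapping candidate boxes per object, which are pruned with non-maximum suppression (NMS) after inference. The sketch below shows that standard post-processing step in plain Swift; the `Box` layout and names are illustrative, not the app's actual types:

```swift
// Minimal sketch of YOLO-style post-processing; names are illustrative.
struct Box {
    let x: Double, y: Double, w: Double, h: Double   // top-left origin
    let score: Double
}

/// Intersection-over-union of two axis-aligned boxes.
func iou(_ a: Box, _ b: Box) -> Double {
    let ix = max(0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    let iy = max(0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    let inter = ix * iy
    let union = a.w * a.h + b.w * b.h - inter
    return union > 0 ? inter / union : 0
}

/// Greedy NMS: keep the highest-scoring box, drop boxes that overlap
/// it beyond the IoU threshold, and repeat on the remainder.
func nms(_ boxes: [Box], iouThreshold: Double = 0.5) -> [Box] {
    var remaining = boxes.sorted { $0.score > $1.score }
    var kept: [Box] = []
    while let best = remaining.first {
        kept.append(best)
        remaining.removeFirst()
        remaining.removeAll { iou($0, best) > iouThreshold }
    }
    return kept
}

let candidates = [
    Box(x: 0, y: 0, w: 10, h: 10, score: 0.9),
    Box(x: 1, y: 1, w: 10, h: 10, score: 0.8),   // overlaps the first
    Box(x: 50, y: 50, w: 10, h: 10, score: 0.7), // a separate object
]
let survivors = nms(candidates)
// The 0.8 box is suppressed (IoU ≈ 0.68 with the 0.9 box).
```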
## Model Performance

The optimized YOLOv8m model was trained on the HackByte_DataSet (the Falcon synthetic dataset) specifically for space station environments. Below are the performance metrics:
The confusion matrix shows the model's classification accuracy across the three object classes:
The PR curve demonstrates the balance between precision and recall across different confidence thresholds:
The results graph shows the model's performance metrics during training, including mAP (mean Average Precision), precision, and recall:
Key performance indicators:
- mAP@50: 0.812 (all classes)
- Precision: 1.00 at confidence threshold 0.952 (all classes)
- Recall: 0.85 at confidence threshold 0.000 (all classes)
- F1-score: 0.810 at confidence threshold 0.344 (all classes)
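The precision, recall, and F1 figures above are points read off the model's confidence-sweep curves. Their underlying definitions can be sketched in a few lines of Swift (the counts below are toy numbers for illustration, not SafeSpace results):

```swift
// Precision, recall, and F1 from true-positive (tp), false-positive
// (fp), and false-negative (fn) counts. Toy numbers for illustration.
func precision(tp: Double, fp: Double) -> Double { tp / (tp + fp) }
func recall(tp: Double, fn: Double) -> Double { tp / (tp + fn) }
func f1(precision p: Double, recall r: Double) -> Double {
    (p + r) > 0 ? 2 * p * r / (p + r) : 0
}

let p = precision(tp: 80, fp: 20)        // 80 / 100 = 0.8
let r = recall(tp: 80, fn: 20)           // 80 / 100 = 0.8
let score = f1(precision: p, recall: r)  // harmonic mean = 0.8
```

Raising the confidence threshold trades recall for precision, which is why each metric above is reported at the threshold where it peaks.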
## Tech Stack

- SwiftUI for a modern, responsive UI
- AVFoundation for camera handling
- Core ML for on-device machine learning
- Combine for reactive state management
## Requirements

- iOS 16.0 or later
- iPhone or iPad with camera
- Xcode 14+ (for development)
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/SafeSpace.git
   ```

2. Install dependencies.

3. Open the project in Xcode:

   ```bash
   cd SafeSpace
   open SafeSpace.xcodeproj
   ```

4. Build and run on a physical device for full functionality.
## Usage

### Camera Detection

1. Launch the app and navigate to the Detection tab.
2. Grant camera permissions when prompted.
3. Point the camera at the objects to detect.
4. Tap the capture button to analyze the current frame.
5. Review the detected objects and their confidence scores.
### Simulation

1. Navigate to the Simulation tab.
2. Upload an image using the photo picker, or use a previously captured image.
3. Select an environment preset, or adjust the lighting/occlusion levels manually.
4. Press "Start Simulation" to run the detection model.
5. Pinch to zoom in/out on the results for detailed inspection.
6. Double-tap to reset the zoom level.
## License

This project is licensed under the MIT License.
## Acknowledgments

- YOLOv8, developed by Ultralytics
- HackByte_DataSet (Falcon synthetic dataset) for space station environments








