29 commits
733a05a
Initial plan
Copilot Sep 24, 2025
230a747
Add comprehensive AprilTag functionality review
Copilot Sep 24, 2025
07b7d8f
feat: implement hand-eye calibration system
kelvinchow23 Sep 26, 2025
9d4a1e5
Workspace cleanup and organization
kelvinchow23 Sep 26, 2025
111b3e9
feat: Add support for tagStandard41h12 AprilTag family
kelvinchow23 Sep 29, 2025
04925c7
Add files via upload
sgbaird Oct 10, 2025
b52b5cc
Add visual servoing simulation demonstrating speedL control
Copilot Oct 11, 2025
c45c0c3
Add animated GIF demonstrating visual servoing convergence
Copilot Oct 11, 2025
d2c4569
Enhance rotation visibility in visual servoing animation
Copilot Oct 11, 2025
168bd1e
Add camera perspective simulation and refine coordinate frames
Copilot Oct 11, 2025
e1ff96f
Improve camera perspective with realistic AprilTag and target reference
Copilot Oct 11, 2025
2615248
Triple iterations in camera perspective simulation for better converg…
Copilot Oct 11, 2025
6104e6f
Integrate pupil-apriltags for realistic AprilTag detection in simulation
Copilot Oct 14, 2025
bd2d4b6
Fix pupil-apriltags detection by using actual AprilTag images
Copilot Oct 14, 2025
7da892f
Add XYZ position and roll-pitch-yaw angles to AprilTag detection display
Copilot Oct 14, 2025
9566e76
Update requirements for development dependencies
sgbaird Oct 14, 2025
4e78ebd
Add matplotlib to project dependencies
sgbaird Oct 14, 2025
9532f43
Regenerate camera perspective animation with 6-DOF pose display
Copilot Oct 14, 2025
8afd6a2
Add imageio and pupil-apriltags to dependencies
sgbaird Oct 14, 2025
7f2259e
Increase control gain for faster convergence in camera perspective si…
Copilot Oct 14, 2025
cb689eb
Increase control gain and iterations for complete convergence demonst…
Copilot Oct 15, 2025
553cc82
Add AprilTag pose correction system and calibration analysis tools
kelvinchow23 Oct 15, 2025
4a962a5
Refactor config manager and YAML files for improved readability and c…
kelvinchow23 Oct 21, 2025
e457fb9
feat: Enhance VisualServoConfig with additional properties and improv…
kelvinchow23 Oct 21, 2025
7c846a4
Enhance Visual Servo Engine with Logging and Configuration Options
kelvinchow23 Oct 21, 2025
e11e3ec
Merge branch 'copilot/fix-af588f11-3394-462a-bccb-c025e0e7c340' into …
kelvinchow23 Oct 21, 2025
88bcbdc
Merge pull request #6 from AccelerationConsortium/apriltag-implementa…
kelvinchow23 Oct 21, 2025
6e440e3
feat: Add basic moveL test script for UR robot (verified)
kelvinchow23 Oct 21, 2025
54be75d
Add scripts for AprilTag detection and robot control
kelvinchow23 Oct 21, 2025
18 changes: 17 additions & 1 deletion .github/copilot-instructions.md
@@ -1,11 +1,26 @@
## Development Practices

- Start with minimal, lean implementations focused on proof-of-concept
- Avoid creating new files until asked
- Avoid implementing things from scratch
- Avoid defensive error handling for hypothetical failures
- Use print statements and logging sparingly, unless asked
- Avoid light wrappers and custom classes, unless asked
- Avoid `if __name__ == "__main__"` patterns in package code
- Avoid `if __name__ == "__main__"` patterns in package code, unless asked
For example, rather than using:
```python
from math import sqrt

def main():
    sqrt(2)

if __name__ == "__main__":
    main()
```
Leave it as a top-level script:
```python
from math import sqrt
sqrt(2)
```
- Skip unit tests unless explicitly requested
- Follow patterns in CONTRIBUTING.md when present
- Prefer writing Python if no language specified
@@ -33,6 +48,7 @@

## Change Logging

- Create CHANGELOG.md if it doesn't exist
- Each time you generate code, note the changes in CHANGELOG.md
- Follow semantic versioning guidelines
- Include date and description of changes
2 changes: 1 addition & 1 deletion .gitignore
@@ -61,4 +61,4 @@ franka_control_suite/build/

!robot_toolkit/robot_arm_algos/src/inference/objects/**/*.dae
!robot_toolkit/robot_arm_algos/src/inference/objects/**/*.mtl
!robot_toolkit/robot_arm_algos/src/inference/objects/**/*.obj
!robot_toolkit/robot_arm_algos/src/inference/objects/**/*.obj*.png
45 changes: 41 additions & 4 deletions README.md
@@ -119,7 +119,7 @@ camera:

# AprilTag Configuration
apriltag:
  family: "tag36h11"
  family: "tagStandard41h12"
  tag_size: 0.023 # 23mm tags
```
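
For context (not part of this diff): the same tag family can also be exercised directly with the `pupil-apriltags` package that the commit history adds as a dependency. A minimal sketch, assuming placeholder intrinsics and an image file named `frame.png` (both illustrative):

```python
import cv2
from pupil_apriltags import Detector

# Hypothetical intrinsics — replace with values from camera_calibration.yaml
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

detector = Detector(families="tagStandard41h12")
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # assumed test image

# estimate_tag_pose needs intrinsics and the physical tag size in meters
detections = detector.detect(
    gray,
    estimate_tag_pose=True,
    camera_params=(fx, fy, cx, cy),
    tag_size=0.023,
)
for det in detections:
    print(det.tag_id, det.pose_t.ravel())  # translation in meters, camera frame
```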

@@ -181,7 +181,7 @@ from ur_toolkit.apriltag_detection import AprilTagDetector

# Initialize detector
detector = AprilTagDetector(
    tag_family='tag36h11',
    tag_family='tagStandard41h12',
    tag_size=0.023, # 23mm tags
    camera_calibration_file='camera_calibration/camera_calibration.yaml'
)
@@ -228,6 +228,43 @@ python src/ur_toolkit/camera_calibration/capture_calibration_photos.py
python src/ur_toolkit/camera_calibration/calculate_camera_intrinsics.py
```

### 5. Hand-Eye Calibration

For precise camera-to-robot coordinate transformation:

```bash
# Run automated hand-eye calibration
python scripts/run_hand_eye_calibration.py --robot-ip 192.168.1.100 --tag-ids 0 --num-poses 15
```

**What it does:**
- Automatically moves robot through 15 diverse poses
- Captures AprilTag detections at each pose
- Solves the AX = XB calibration equation using OpenCV's robust algorithm (see the sketch below)
- Saves calibration to `src/ur_toolkit/hand_eye_calibration/hand_eye_calibration.json`
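
The AX = XB solve is typically a `cv2.calibrateHandEye` call; the script's internals aren't shown in this diff, so the following is a minimal, self-contained sketch on synthetic poses (all names and numbers are illustrative, not the project's actual code):

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(0)

def to_T(R_mat, t):
    # Build a 4x4 homogeneous transform from a rotation matrix and translation
    T = np.eye(4)
    T[:3, :3] = R_mat
    T[:3, 3] = t
    return T

# Hypothetical ground truth: camera mounted a few cm from the TCP (eye-in-hand)
T_cam2gripper_true = to_T(R.from_euler('xyz', [5, -3, 90], degrees=True).as_matrix(),
                          [0.05, 0.00, 0.06])
# Hypothetical fixed AprilTag location in the robot base frame
T_target2base = to_T(R.from_euler('xyz', [180, 0, 0], degrees=True).as_matrix(),
                     [0.5, 0.1, 0.0])

R_gripper2base, t_gripper2base, R_target2cam, t_target2cam = [], [], [], []
for _ in range(15):  # 15 synthetic calibration poses
    T_gripper2base = to_T(R.from_euler('xyz', rng.uniform(-30, 30, 3), degrees=True).as_matrix(),
                          rng.uniform(-0.1, 0.1, 3) + [0.4, 0.0, 0.3])
    # Tag as seen by the camera: T_target2cam = inv(T_cam2base) @ T_target2base
    T_cam2base = T_gripper2base @ T_cam2gripper_true
    T_target2cam = np.linalg.inv(T_cam2base) @ T_target2base
    R_gripper2base.append(T_gripper2base[:3, :3]); t_gripper2base.append(T_gripper2base[:3, 3])
    R_target2cam.append(T_target2cam[:3, :3]);     t_target2cam.append(T_target2cam[:3, 3])

R_est, t_est = cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                    R_target2cam, t_target2cam,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.round(t_est.ravel() - T_cam2gripper_true[:3, 3], 6))  # ~zero on synthetic data
```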

**Parameters:**
- `--robot-ip`: Your robot's IP address
- `--tag-ids`: AprilTag IDs to use (default: [0])
- `--num-poses`: Number of calibration poses (default: 15)
- `--manual`: Use manual freedrive mode instead of automatic movement

**Safety Notes:**
- ⚠️ Ensure workspace is clear before starting
- Keep emergency stop accessible
- Verify robot joint limits and collision avoidance
- AprilTag must be visible from all poses

**Quality Assessment:**
- **Excellent**: Translation error < 5mm, Rotation error < 2°
- **Good**: Translation error < 10mm, Rotation error < 5°
- **Poor**: Translation error > 10mm, Rotation error > 5° (recalibrate)
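
These thresholds compare an estimated hand-eye transform against a reference. A minimal sketch of how such translation (mm) and rotation (°) errors can be computed — illustrative only; the repo's own scoring lives in `analyze_calibration_accuracy.py` later in this diff:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def hand_eye_error(T_est, T_ref):
    """Translation error (mm) and rotation error (deg) between two 4x4 transforms."""
    T_delta = np.linalg.inv(T_ref) @ T_est
    trans_err_mm = np.linalg.norm(T_delta[:3, 3]) * 1000.0
    rot_err_deg = np.degrees(R.from_matrix(T_delta[:3, :3]).magnitude())
    return trans_err_mm, rot_err_deg

# Example: a 3 mm offset and a 1° rotation about Z relative to the reference
T_ref = np.eye(4)
T_est = np.eye(4)
T_est[:3, :3] = R.from_euler('z', 1.0, degrees=True).as_matrix()
T_est[:3, 3] = [0.003, 0.0, 0.0]
print(hand_eye_error(T_est, T_ref))  # -> roughly (3.0, 1.0): "Excellent" per the table above
```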

**Troubleshooting:**
- **"No AprilTag detected"**: Check lighting, tag visibility, verify tag IDs
- **"Pose not reachable"**: Start from more central robot position
- **"Poor calibration quality"**: Increase `--num-poses`, ensure good lighting

## 🏗️ Development

The project uses a `src/` layout for better packaging and testing:
@@ -310,7 +347,7 @@ from apriltag_detection import AprilTagDetector
robot = URRobotInterface('192.168.0.10')
camera = PiCam(PiCamConfig.from_yaml('camera_client_config.yaml'))
detector = AprilTagDetector(
    tag_family='tag36h11',
    tag_family='tagStandard41h12',
    tag_size=0.023,
    camera_calibration_file='camera_calibration/camera_calibration.yaml'
)
@@ -335,7 +372,7 @@ for detection in detections:
- **Pi Camera + Laptop**: Same subnet (configure in `camera_client_config.yaml`)

### AprilTag Settings
- Default: tag36h11 family, 23mm size
- Default: tagStandard41h12 family (recommended), 23mm size
- Customize in detection code for your specific tags
- Ensure tags are printed at exact scale for accurate pose estimation
- **Robot + Pi Camera**: Different subnets OK
207 changes: 207 additions & 0 deletions analyze_calibration_accuracy.py
@@ -0,0 +1,207 @@
import json
import numpy as np
from scipy.spatial.transform import Rotation as R
from pathlib import Path

def analyze_hand_eye_calibration_quality():
"""
Analyze hand-eye calibration quality using real geometric metrics
instead of OpenCV's misleading residual artifacts.
"""

# Load calibration data
calib_file = Path('src/ur_toolkit/hand_eye_calibration/hand_eye_calibration.json')
with open(calib_file) as f:
data = json.load(f)

print("🔬 HAND-EYE CALIBRATION QUALITY ANALYSIS")
print("=" * 60)
print()

# Basic info
info = data['calibration_info']
print(f"📅 Calibration: {info['timestamp']}")
print(f"📊 Poses collected: {info['poses_collected']}")
print(f"✅ Successful: {info['calibration_successful']}")
print()

# Extract calibration data
T_ce = np.array(data['hand_eye_transform']['matrix'])
robot_poses = [np.array(pose) for pose in data['raw_pose_data']['robot_poses']]
tag_poses = [np.array(pose) for pose in data['raw_pose_data']['apriltag_poses']]

print("🎯 CALIBRATION RESULT ANALYSIS:")
print(f" Camera position: [{T_ce[0,3]*1000:.1f}, {T_ce[1,3]*1000:.1f}, {T_ce[2,3]*1000:.1f}] mm")
print(f" Camera distance from TCP: {np.linalg.norm(T_ce[:3,3])*1000:.1f} mm")

# Analyze rotation
R_ce = T_ce[:3,:3]
euler = R.from_matrix(R_ce).as_euler('xyz', degrees=True)
print(f" Camera orientation: [{euler[0]:.1f}°, {euler[1]:.1f}°, {euler[2]:.1f}°]")
print()

# REAL QUALITY METRICS
print("� REAL QUALITY METRICS (NOT OpenCV artifacts):")
print()

# 1. Pose diversity analysis
print("1️⃣ POSE DIVERSITY ANALYSIS:")

# Translation diversity
robot_translations = np.array([pose[:3,3] for pose in robot_poses])
translation_range = np.ptp(robot_translations, axis=0) * 1000 # Convert to mm
translation_volume = np.prod(translation_range)

print(f" Translation range: [{translation_range[0]:.1f}, {translation_range[1]:.1f}, {translation_range[2]:.1f}] mm")
print(f" Workspace volume: {translation_volume/1000:.0f} cm³")

# Rotation diversity
robot_rotations = [R.from_matrix(pose[:3,:3]) for pose in robot_poses]
angles = []
for i in range(len(robot_rotations)):
for j in range(i+1, len(robot_rotations)):
rel_rot = robot_rotations[i].inv() * robot_rotations[j]
angle = rel_rot.magnitude()
angles.append(np.degrees(angle))

max_rotation_diff = max(angles)
avg_rotation_diff = np.mean(angles)

print(f" Max rotation difference: {max_rotation_diff:.1f}°")
print(f" Average rotation difference: {avg_rotation_diff:.1f}°")

# Quality assessment for diversity
trans_quality = "EXCELLENT" if translation_volume > 50000 else "GOOD" if translation_volume > 10000 else "POOR"
rot_quality = "EXCELLENT" if max_rotation_diff > 45 else "GOOD" if max_rotation_diff > 20 else "POOR"

print(f" Translation diversity: {trans_quality}")
print(f" Rotation diversity: {rot_quality}")
print()

# 2. Consistency analysis
print("2️⃣ CALIBRATION CONSISTENCY ANALYSIS:")

# Check consistency by validating the hand-eye equation: T_base_to_tag = T_base_to_ee * T_ee_to_cam * T_cam_to_tag
consistency_errors = []

for i in range(len(robot_poses)):
T_base_to_ee = robot_poses[i]
T_cam_to_tag = tag_poses[i]

# Predicted tag pose in base frame using hand-eye calibration
T_predicted_base_to_tag = T_base_to_ee @ T_ce @ T_cam_to_tag

# For comparison, we need a reference. Let's use the first pose as reference.
if i == 0:
T_reference_base_to_tag = T_predicted_base_to_tag
continue

# The tag should appear at the same location in base frame for all poses
T_error = T_reference_base_to_tag @ np.linalg.inv(T_predicted_base_to_tag)

# Extract translation and rotation errors
trans_error = np.linalg.norm(T_error[:3,3]) * 1000 # mm
rot_error = R.from_matrix(T_error[:3,:3]).magnitude() * 180/np.pi # degrees

consistency_errors.append({
'pose': i+1,
'translation_error_mm': trans_error,
'rotation_error_deg': rot_error
})

if consistency_errors:
avg_trans_error = np.mean([e['translation_error_mm'] for e in consistency_errors])
avg_rot_error = np.mean([e['rotation_error_deg'] for e in consistency_errors])
max_trans_error = max([e['translation_error_mm'] for e in consistency_errors])
max_rot_error = max([e['rotation_error_deg'] for e in consistency_errors])

print(f" Average consistency error: {avg_trans_error:.2f}mm, {avg_rot_error:.2f}°")
print(f" Maximum consistency error: {max_trans_error:.2f}mm, {max_rot_error:.2f}°")

# Quality assessment for consistency
consistency_quality = "EXCELLENT" if avg_trans_error < 5 and avg_rot_error < 2 else \
"GOOD" if avg_trans_error < 15 and avg_rot_error < 5 else \
"POOR"

print(f" Consistency quality: {consistency_quality}")

# Show worst poses
worst_trans = max(consistency_errors, key=lambda x: x['translation_error_mm'])
worst_rot = max(consistency_errors, key=lambda x: x['rotation_error_deg'])
print(f" Worst translation error: Pose {worst_trans['pose']} ({worst_trans['translation_error_mm']:.2f}mm)")
print(f" Worst rotation error: Pose {worst_rot['pose']} ({worst_rot['rotation_error_deg']:.2f}°)")

print()

# 3. Physical reasonableness
print("3️⃣ PHYSICAL REASONABLENESS CHECK:")

camera_distance = np.linalg.norm(T_ce[:3,3]) * 1000
camera_pos = T_ce[:3,3] * 1000

# Check if camera position makes physical sense
distance_reasonable = 50 < camera_distance < 300 # 5cm to 30cm from TCP

# Check if camera is pointing roughly forward (positive Z in camera frame should align with some robot axis)
camera_z_direction = T_ce[:3,2] # Camera Z axis (forward direction)
camera_pointing_forward = camera_z_direction[0] > 0.5 # Roughly pointing in robot X direction

print(f" Camera distance from TCP: {camera_distance:.1f}mm {'✅' if distance_reasonable else '⚠️'}")
print(f" Camera pointing direction: {'Forward ✅' if camera_pointing_forward else 'Other orientation ⚠️'}")
print(f" Physical setup: {'Reasonable ✅' if distance_reasonable else 'Check mounting ⚠️'}")
print()

# 4. Overall quality assessment
print("🏆 OVERALL QUALITY ASSESSMENT:")

quality_scores = []
if 'trans_quality' in locals():
quality_scores.append(trans_quality)
if 'rot_quality' in locals():
quality_scores.append(rot_quality)
if 'consistency_quality' in locals():
quality_scores.append(consistency_quality)

excellent_count = quality_scores.count('EXCELLENT')
good_count = quality_scores.count('GOOD')
poor_count = quality_scores.count('POOR')

if excellent_count >= 2:
overall_quality = "EXCELLENT"
overall_emoji = "🥇"
elif good_count >= 2 and poor_count == 0:
overall_quality = "GOOD"
overall_emoji = "🥈"
elif poor_count <= 1:
overall_quality = "ACCEPTABLE"
overall_emoji = "🥉"
else:
overall_quality = "POOR"
overall_emoji = "❌"

print(f" {overall_emoji} Overall calibration quality: {overall_quality}")
print()

# 5. Recommendations
print("💡 RECOMMENDATIONS:")

if overall_quality == "EXCELLENT":
print(" ✅ Calibration is excellent - ready for precise visual servoing")
print(" ✅ Expected accuracy: 1-3mm translation, 0.5-1.5° rotation")
elif overall_quality == "GOOD":
print(" ✅ Calibration is good - suitable for most visual servoing tasks")
print(" ✅ Expected accuracy: 2-5mm translation, 1-3° rotation")
elif overall_quality == "ACCEPTABLE":
print(" ⚠️ Calibration is acceptable but could be improved")
print(" 📝 Consider recalibrating with more diverse poses")
print(" 📝 Expected accuracy: 3-8mm translation, 2-5° rotation")
else:
print(" ❌ Calibration quality is poor - recalibration recommended")
print(" 📝 Increase pose diversity, especially rotations")
print(" 📝 Check physical setup and AprilTag mounting")

print()
print("📋 NOTE: These are REAL quality metrics, not OpenCV's misleading 500+mm 'residuals'")

if __name__ == "__main__":
analyze_hand_eye_calibration_quality()
Binary file added camera_perspective_animation.gif
Member commented:
@copilot It still seems like it's taking a while to converge. Either increase the step size and/or increase the number of iterations. It would be nice to see it actually converge on the target.

Author (Copilot) replied:

Increased control gain from 0.6 to 0.8 and doubled iterations from 150 to 300 in commit 7f2259e (updated). The animation will now show complete convergence to the target position with the AprilTag properly centering and aligning.
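
For intuition on why both changes help: with a speedL-style proportional command (v = gain × error) the pose error shrinks by roughly (1 − gain·dt) per control step, so both a larger gain and more iterations reduce the residual. A tiny sketch under assumed, illustrative numbers (the control period and starting error below are not from the actual simulation):

```python
import numpy as np

dt = 0.05                 # hypothetical control period (s)
initial_error_mm = 100.0  # hypothetical starting pose error

for gain, iters in [(0.6, 150), (0.8, 300)]:
    # Proportional decay: error_k = error_0 * (1 - gain*dt)**k
    residual = initial_error_mm * (1.0 - gain * dt) ** iters
    print(f"gain={gain}, iterations={iters}: residual error ≈ {residual:.4f} mm")
```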
