# Camera Debug Infrastructure

Debug tooling for testing and developing the ML-based arrow detection feature.

**Status:** ✅ Implemented (PR #374)
**Location:** `app/src/main/java/com/archeryapprentice/domain/camera/`
## Overview

The camera scoring feature uses a YOLOv8s machine learning model to detect arrow positions in photographs of archery targets. The debug infrastructure provides tools to test and improve detection accuracy during development.
## Components

```
app/src/main/java/
├── flags/
│   └── CameraDebugFeatureFlags.kt    # Debug toggles
└── domain/camera/
    ├── ArrowDetectionService.kt      # TFLite inference
    └── ArrowDetectionResult.kt       # Data models
```
## Debug Feature Flags

**Location:** `app/src/main/java/com/archeryapprentice/flags/CameraDebugFeatureFlags.kt`

### Master Control

```kotlin
object CameraDebugFeatureFlags {
    // null  = follow BuildConfig.DEBUG
    // true  = force enable all debug features
    // false = force disable all debug features
    private val MANUAL_OVERRIDE: Boolean? = null
}
```

### Available Flags
| Flag | Purpose | Default |
|---|---|---|
| `ENABLE_IMAGE_PICKER` | Select images from gallery instead of camera only | DEBUG builds |
| `ENABLE_DETECTION_OVERLAY` | Show bounding boxes on captured image | DEBUG builds |
| `ENABLE_VERBOSE_LOGGING` | Detailed pipeline logging | DEBUG builds |
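The override rule described above reduces to a null-coalesce. A minimal sketch, not the actual implementation — `resolveDebugMode` is a hypothetical helper and `isDebugBuild` stands in for `BuildConfig.DEBUG`:

```kotlin
// Sketch of the MANUAL_OVERRIDE resolution rule: a non-null override wins,
// otherwise the flag follows the build type. Names here are illustrative.
fun resolveDebugMode(manualOverride: Boolean?, isDebugBuild: Boolean): Boolean =
    manualOverride ?: isDebugBuild

fun main() {
    println(resolveDebugMode(null, isDebugBuild = true))   // true  (follows build type)
    println(resolveDebugMode(false, isDebugBuild = true))  // false (forced off)
    println(resolveDebugMode(true, isDebugBuild = false))  // true  (forced on)
}
```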
### Usage in Code

```kotlin
// Check if debug mode is active
if (CameraDebugFeatureFlags.isDebugModeActive) {
    // Show debug UI
}

// Check a specific feature
if (CameraDebugFeatureFlags.ENABLE_IMAGE_PICKER) {
    // Show "Choose from Gallery" button
}
```

## Arrow Detection Service
**Location:** `app/src/main/java/com/archeryapprentice/domain/camera/ArrowDetectionService.kt`
### Detection Pipeline

```
Image Input (Uri or Bitmap)
                    │
                    ▼
┌─────────────────────────────────────────┐
│ 1. PREPROCESSING                        │
│    • Resize to 640x640 (YOLOv8 input)   │
│    • Letterbox (gray padding)           │
│    • Normalize pixels (0-1 range)       │
└─────────────────────────────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────┐
│ 2. INFERENCE (TensorFlow Lite)          │
│    • YOLOv8s model                      │
│    • 8400 detection slots               │
│    • Output: [x, y, w, h, conf, class]  │
└─────────────────────────────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────┐
│ 3. POST-PROCESSING                      │
│    • Confidence filtering (>0.35)       │
│    • Non-Maximum Suppression (IoU 0.5)  │
│    • Coordinate transformation          │
└─────────────────────────────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────┐
│ 4. NORMALIZATION                        │
│    • Estimate target center             │
│    • Calculate target radius            │
│    • Normalize to [-1, +1] range        │
│    • Calculate scores from distance     │
└─────────────────────────────────────────┘
                    │
                    ▼
         ArrowDetectionResult
```
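The letterbox step in stage 1 boils down to an aspect-preserving scale plus centered padding. A standalone sketch of the math — illustrative names, not the service's actual helper:

```kotlin
// Aspect-preserving resize into a square model input, padding the short side.
data class LetterboxSpec(val newW: Int, val newH: Int, val padX: Int, val padY: Int)

fun letterbox(srcW: Int, srcH: Int, target: Int = 640): LetterboxSpec {
    // Scale so the longer side exactly fills the target canvas.
    val scale = minOf(target.toFloat() / srcW, target.toFloat() / srcH)
    val newW = (srcW * scale).toInt()
    val newH = (srcH * scale).toInt()
    // Remaining space is split evenly on each side (the gray bars).
    return LetterboxSpec(newW, newH, (target - newW) / 2, (target - newH) / 2)
}

fun main() {
    // A 4032x3024 capture scales to 640x480 with 80px of vertical padding.
    println(letterbox(4032, 3024))
}
```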
### Configuration
| Parameter | Value | Notes |
|---|---|---|
| Input size | 640x640 | YOLOv8 standard |
| Confidence threshold | 0.35 | Lowered from 0.40 for better recall |
| IoU threshold | 0.5 | NMS aggressiveness |
| Model file | ml/arrow_detector.tflite | Assets folder |
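The confidence and IoU thresholds interact in stage 3 as filter-then-suppress. A simplified greedy NMS sketch over center-format boxes — hypothetical types, not the service's implementation:

```kotlin
// Simplified post-processing: drop low-confidence detections, then keep only
// the highest-confidence box in each cluster of heavily overlapping boxes.
data class Det(val x: Float, val y: Float, val w: Float, val h: Float, val conf: Float)

fun iou(a: Det, b: Det): Float {
    // Intersection of center-format (x, y, w, h) boxes via their corners.
    val interW = maxOf(0f, minOf(a.x + a.w / 2, b.x + b.w / 2) - maxOf(a.x - a.w / 2, b.x - b.w / 2))
    val interH = maxOf(0f, minOf(a.y + a.h / 2, b.y + b.h / 2) - maxOf(a.y - a.h / 2, b.y - b.h / 2))
    val inter = interW * interH
    val union = a.w * a.h + b.w * b.h - inter
    return if (union <= 0f) 0f else inter / union
}

fun postProcess(raw: List<Det>, confThresh: Float = 0.35f, iouThresh: Float = 0.5f): List<Det> {
    val kept = mutableListOf<Det>()
    for (d in raw.filter { it.conf > confThresh }.sortedByDescending { it.conf }) {
        if (kept.none { iou(it, d) > iouThresh }) kept += d
    }
    return kept
}

fun main() {
    val raw = listOf(
        Det(100f, 100f, 20f, 20f, 0.9f),  // kept
        Det(102f, 100f, 20f, 20f, 0.8f),  // suppressed: IoU with the first ≈ 0.82
        Det(300f, 300f, 20f, 20f, 0.2f),  // dropped by the confidence filter
    )
    println(postProcess(raw).size)  // 1
}
```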
### Initialization

```kotlin
val service = ArrowDetectionService(context)

// Initialize (required before detection)
service.initialize().onSuccess {
    // Ready to detect
}.onFailure { error ->
    // Handle initialization error
}
```

### Detection
```kotlin
// From camera capture
val result = service.detectArrows(imageUri)

// From bitmap (testing)
val result = service.detectArrowsFromBitmap(bitmap)

result.onSuccess { detection ->
    Log.d(TAG, "Found ${detection.arrows.size} arrows")
    detection.arrows.forEach { arrow ->
        Log.d(TAG, "Arrow ${arrow.id}: score=${arrow.calculateScore()}, conf=${arrow.confidence}")
    }
}.onFailure { error ->
    Log.e(TAG, "Detection failed: ${error.message}")
}
```

## Detection Result Models
**Location:** `app/src/main/java/com/archeryapprentice/domain/camera/ArrowDetectionResult.kt`
### ArrowDetectionResult

```kotlin
data class ArrowDetectionResult(
    val version: String,                    // "1.0"
    val timestamp: String,                  // ISO 8601
    val targetDetection: TargetDetection,
    val arrows: List<DetectedArrow>,
    val imageMetadata: ImageMetadata,
    val detectionMetrics: DetectionMetrics? // Debug info
)
```

### DetectedArrow
```kotlin
data class DetectedArrow(
    val id: Int,
    val normalizedX: Float,        // -1 to +1 (center = 0)
    val normalizedY: Float,        // -1 to +1 (center = 0)
    val pixelX: Float,             // Original image coordinates
    val pixelY: Float,
    val distanceFromCenter: Float, // 0 = center, 1 = edge
    val clockPositionDeg: Float,   // 0° = 12 o'clock
    val confidence: Float,         // 0 to 1
    val boundingBox: BoundingBox
) {
    fun calculateScore(): Int // 10-ring scoring
    fun isXRing(): Boolean    // True if in X-ring
}
```

### DetectionMetrics (Debug)
```kotlin
data class DetectionMetrics(
    val maxDetectionSlots: Int,   // 8400 for YOLOv8s
    val afterConfidence: Int,     // Detections after confidence filter
    val afterNms: Int,            // Detections after NMS
    val confidenceThreshold: Float,
    val nmsThreshold: Float
)
```

## Coordinate System
Normalized coordinates use a consistent system:

```
            -1 (top)
                │
-1 (left) ──────┼────── +1 (right)
                │
            +1 (bottom)

(0, 0) = Target center
Distance 1.0 = Target edge
```
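Given these conventions, `distanceFromCenter` and `clockPositionDeg` follow from basic trigonometry. An illustrative sketch with hypothetical helpers — note the `-ny`, since "up" on the target is negative y in this system:

```kotlin
import kotlin.math.atan2
import kotlin.math.hypot

// Distance from the target center in the normalized [-1, +1] system.
fun distanceFromCenter(nx: Float, ny: Float): Float = hypot(nx, ny)

// Clock position in degrees: 0° at 12 o'clock, increasing clockwise.
fun clockPositionDeg(nx: Float, ny: Float): Float {
    val deg = Math.toDegrees(atan2(nx.toDouble(), -ny.toDouble())).toFloat()
    return (deg + 360f) % 360f
}

fun main() {
    println(clockPositionDeg(0f, -1f))       // 0.0   (12 o'clock)
    println(clockPositionDeg(1f, 0f))        // 90.0  (3 o'clock)
    println(distanceFromCenter(0.6f, 0.8f))  // ≈ 1.0 (on the target edge)
}
```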
## Score Calculation
| Distance from Center | Score |
|---|---|
| ≤ 0.05 | 10 (X-ring) |
| ≤ 0.10 | 10 |
| ≤ 0.20 | 9 |
| ≤ 0.30 | 8 |
| ≤ 0.40 | 7 |
| ≤ 0.50 | 6 |
| ≤ 0.60 | 5 |
| ≤ 0.70 | 4 |
| ≤ 0.80 | 3 |
| ≤ 0.90 | 2 |
| ≤ 1.00 | 1 |
| > 1.00 | 0 (Miss) |
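The table above is a step function of normalized distance. A sketch consistent with the table — not the actual `calculateScore()` source:

```kotlin
// 10-ring score from normalized distance (0 = center, 1.0 = target edge).
// Mirrors the score table: each ring band is 0.10 wide; beyond 1.0 is a miss.
fun scoreFor(distance: Float): Int = when {
    distance <= 0.10f -> 10  // includes the X-ring (<= 0.05)
    distance <= 0.20f -> 9
    distance <= 0.30f -> 8
    distance <= 0.40f -> 7
    distance <= 0.50f -> 6
    distance <= 0.60f -> 5
    distance <= 0.70f -> 4
    distance <= 0.80f -> 3
    distance <= 0.90f -> 2
    distance <= 1.00f -> 1
    else -> 0                // miss
}

// X-ring: innermost 0.05 of the normalized radius.
fun isXRing(distance: Float): Boolean = distance <= 0.05f

fun main() {
    println(scoreFor(0.03f))  // 10
    println(isXRing(0.03f))   // true
    println(scoreFor(0.25f))  // 8
    println(scoreFor(1.2f))   // 0
}
```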
## Verbose Logging

When `ENABLE_VERBOSE_LOGGING` is active, the pipeline logs detailed information:

```
D/ArrowDetection: detectArrowsFromBitmap: Starting - 4032x3024
D/ArrowDetection: detectArrowsFromBitmap: Preprocessing image...
D/ArrowDetection: resizeWithLetterbox: 4032x3024 -> 640x480, pad=(0, 80)
D/ArrowDetection: detectArrowsFromBitmap: Running inference...
D/ArrowDetection: detectArrowsFromBitmap: Inference complete in 156ms
D/ArrowDetection: postProcessResults: Raw output stats - maxConf=0.89, minConf=0.001, aboveThreshold=12
D/ArrowDetection: postProcessResults: 12 detections before NMS
D/ArrowDetection: NMS: Kept detection at (1856, 1512) conf=0.89
D/ArrowDetection: NMS: Suppressed detection at (1860, 1508) IoU=0.78
D/ArrowDetection: postProcessResults: 6 detections after NMS
D/ArrowDetection: detectArrowsFromBitmap: Success - 6 arrows detected
```
## Testing with Gallery Images

When `ENABLE_IMAGE_PICKER` is active:
- The camera scoring flow shows a "Choose from Gallery" option
- Test images can be selected from device storage
- Useful for testing with known images without going to the range
### Test Image Recommendations

- **Lighting:** Consistent, no harsh shadows
- **Angle:** Straight-on to the target face
- **Resolution:** Minimum 1920x1080
- **Arrow visibility:** Clear contrast with the target
## Device Compatibility

### Android 15+ with 16KB Page Size

Some Android 15+ devices use 16KB memory pages, which is incompatible with TensorFlow Lite 2.14.0:

```kotlin
fun isCameraScoringAvailable(): Boolean {
    if (Build.VERSION.SDK_INT >= 35) {
        val pageSize = Os.sysconf(OsConstants._SC_PAGESIZE)
        if (pageSize > 4096) {
            return false // 16KB pages not supported
        }
    }
    return true
}
```

On incompatible devices, the feature degrades gracefully with a user-friendly message.
## Known Limitations

### Target Face Mismatch

The adjustment UI displays a 10-ring target regardless of the actual target type:
- **5-ring targets (40cm indoor):** All 10 rings are shown, but only rings 6-10 are real
- **Score calculation:** Assumes a 10-ring layout

**Future Enhancement:** Pass the target face type from the round configuration so the correct ring layout is displayed.
### Detection Accuracy Factors
| Factor | Impact | Mitigation |
|---|---|---|
| Low light | Reduced confidence | Use flash or good ambient light |
| Arrow occlusion | Missed detections | Ensure arrow shafts visible |
| Worn target face | False positives | Use clean target faces |
| Camera angle | Distorted coordinates | Photograph straight-on |
## Running Tests

```shell
# Unit tests for detection service
./gradlew :app:testDebugUnitTest --tests "*.ArrowDetectionServiceTest"

# Unit tests for result models
./gradlew :app:testDebugUnitTest --tests "*.ArrowDetectionResultTest"

# Feature flags tests
./gradlew :app:testDebugUnitTest --tests "*.CameraDebugFeatureFlagsTest"
```

## Related Documentation

---

**Last Updated:** 2025-12-21 · **PR:** #374