Based on test feedback - 3 issues addressed:
1. Primary Toggle (Frontend Debug):
- Add console.log in handleSaveGoal
- Shows what data is sent to backend
- Helps debug if checkbox state is correct
2. Lean Mass Display (Backend Debug):
- Add error handling in lean_mass calculation
- Log why calculation fails (missing weight/bf data)
- Try-catch for value conversion errors
3. BP/Strength/Flexibility Warning (UI):
- Yellow warning box for incomplete goal types
- BP: "requires 2 values (planned for v2.0)"
- Strength/Flexibility: "No data source"
- Transparent about limitations
Next: User re-tests with debug output to identify root cause.
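The guarded lean-mass calculation from item 2 could look roughly like this minimal sketch; the parameter names and the use of a module logger are assumptions, not the actual implementation:

```python
import logging

log = logging.getLogger(__name__)

def compute_lean_mass(weight_kg, body_fat_pct):
    """Guarded lean-mass calculation; logs why a value is missing instead of failing silently."""
    if weight_kg is None or body_fat_pct is None:
        log.warning("lean_mass skipped: weight=%s, bf=%s", weight_kg, body_fat_pct)
        return None
    try:
        # Values may arrive as strings from the API, so convert defensively
        return float(weight_kg) * (1 - float(body_fat_pct) / 100)
    except (TypeError, ValueError) as exc:
        log.warning("lean_mass conversion failed: %s", exc)
        return None
```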
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tracking document for all open items:
- Phase 0b tasks (120+ placeholders)
- v2.0 redesign problems
- References to Gitea issues
- Timeline & roadmap
Prevents important items from being forgotten.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Created comprehensive redesign document addressing all identified issues:
Problems addressed:
1. Primary goal too simplistic → Weight system (0-100%)
2. Single goal mode too simple → Multi-mode with weights
3. Missing current values → All goal types with data sources
4. Abstract goal types → Concrete, measurable goals
5. Blood pressure single value → Compound goals (systolic/diastolic)
6. No user guidance → Norms, examples, age-specific values
New Concept:
- Focus Areas: Weighted distribution (30% weight loss + 25% endurance + ...)
- Goal Weights: Each goal has individual weight (not binary primary/not)
- Concrete Goal Types: cooper_test, pushups_max, squat_1rm, etc.
- Compound Goals: Support for multi-value targets (BP: 120/80)
- Guidance System: Age/gender-specific norms and examples
Schema Changes:
- New table: focus_areas (replaces single goal_mode)
- goals: Add goal_weight, target_value_secondary, current_value_secondary
- goals: Remove is_primary (replaced by weight)
UI/UX Redesign:
- Slider interface for focus areas (must sum to 100%)
- Goal editor with guidance and norms
- Weight indicators on all goals
- Special UI for compound goals
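The weight mechanics above (individual goal weights that must sum to 100%) can be sketched as follows; the function and key names are hypothetical, not taken from the actual schema:

```python
def validate_focus_areas(areas):
    """Enforce the 'weights must sum to 100%' rule behind the slider UI."""
    total = sum(areas.values())
    if total != 100:
        raise ValueError(f"focus area weights must sum to 100, got {total}")

def weighted_score(scores, areas):
    """Combine per-goal scores (0-100) using focus-area weights (percent)."""
    validate_focus_areas(areas)
    return sum(scores[goal] * weight / 100 for goal, weight in areas.items())
```

For example, 30% weight loss at score 78, 25% endurance at 60, and 45% strength at 40 would yield an overall score of 56.4.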
Implementation Phases: 16-21h total
- Phase 2: Backend Redesign (6-8h)
- Phase 3: Frontend Redesign (8-10h)
- Phase 4: Testing & Refinement (2-3h)
Status: WAITING FOR USER FEEDBACK & APPROVAL
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Analysis Page:
- Add 'Ziele' (Goals) button next to the page title
- Direct navigation to /goals from analysis page
- Thematic link: goals influence AI analysis weighting
Goals Page:
- Fix text-align for text inputs (name, date, description)
- Text fields now left-aligned (numbers remain right-aligned)
- Better UX for non-numeric inputs
Navigation strategy: Goals accessible from Analysis page where
goal_mode directly impacts score calculation and interpretation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Full-width inputs throughout the form
- Labels above inputs (mobile best practice)
- Section headers with emoji (🎯 Zielwert, i.e. target value)
- Consistent spacing (marginBottom: 16)
- Read-only unit display as styled badge
- Primary goal checkbox in highlighted section
- Full-width buttons (btn-full class)
- Scrollable modal with top padding
- Error display above form
Matches VitalsPage design pattern for consistency.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The migration system tracks migrations via filename automatically.
Removed manual DO block that used wrong column name (version vs filename).
Also removed unused json import from goals.py.
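The filename-based tracking described above can be sketched as a simple set difference; the function name is hypothetical and this is not the project's actual migration runner:

```python
def pending_migrations(applied, available):
    """Return migrations not yet applied.

    The tracking table records filenames, so both identity and ordering
    come from the filename itself; no separate version column is needed.
    """
    return sorted(name for name in available if name not in applied)
```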
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Key Decision: Minimal Goal System BEFORE Placeholders
Critical Finding:
- Same data = different interpretation per goal
- Example: -5kg FM, -2kg LBM
- weight_loss: 78/100 (good!)
- strength: 32/100 (LBM loss critical!)
- Without goal: 50/100 (generic, wrong for both)
Recommended Approach (Hybrid):
1. Phase 0a (2-3h): Minimal Goal System
- DB: goal_mode field
- API: Get/Set Goal
- UI: Goal Selector
- Default: health
2. Phase 0b (16-20h): Goal-Aware Placeholders
- 84 placeholders with goal-dependent calculations
- Scores use goal_mode from day 1
- No rework needed later
3. Phase 2+ (6-8h): Full Goal System
- Goal recognition from patterns
- Secondary goals
- Goal progression tracking
Why Hybrid Works:
✅ Charts show correct interpretations immediately
✅ No rework of 84 placeholders later
✅ Goal recognition can come later (needs placeholders anyway)
✅ System is "smart coach" from day 1
File: docs/GOAL_SYSTEM_PRIORITY_ANALYSIS.md (650 lines)
- Normal mode: single values only (uncluttered)
- Expert mode: additionally shows raw stage data
- Complete the descriptions for all placeholders
- Schema-based descriptions for extracted values
Effort: 4-6h, priority: medium
- stage_debug now includes 'output' dict with all stage outputs
- Fixes empty values for stage_X_outputkey in expert mode
- Stage outputs are the actual AI responses passed to next stage
Backend:
- Add ALL stage outputs to metadata (not just referenced ones)
- Format JSON with indent for readability
- Description: 'Zwischenergebnis aus Stage X' (intermediate result from stage X)
Frontend:
- Stage raw values shown in collapsible <details> element
- JSON formatted in <pre> tag with syntax highlighting
- 'JSON anzeigen ▼' summary for better UX
Fixes: 'Stage X - Rohdaten' (raw data) section now shows intermediate results
- Each circumference point shows most recent value (even from different dates)
- Age annotations: heute, gestern, vor X Tagen/Wochen/Monaten (today, yesterday, X days/weeks/months ago)
- Gives AI better context about measurement freshness
- Example: 'Brust 105cm (heute), Nacken 38cm (vor 2 Wochen)'
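The freshness annotation could be implemented roughly like this sketch, which keeps the German labels used in the summary; the thresholds and singular forms ("vor 1 Woche") are simplified here:

```python
from datetime import date

def age_label(measured, today):
    """German freshness label for a measurement date (singular forms omitted)."""
    days = (today - measured).days
    if days == 0:
        return "heute"
    if days == 1:
        return "gestern"
    if days < 7:
        return f"vor {days} Tagen"
    if days < 30:
        return f"vor {days // 7} Wochen"
    return f"vor {days // 30} Monaten"
```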
- Previously only checked c_chest, c_waist, c_hip
- Now includes c_neck, c_belly, c_thigh, c_calf, c_arm
- Fixes 'keine Daten' (no data) when entries exist with only non-primary measurements
BUG: the value table (Wertetabelle) was not displayed for a new analysis
ROOT CAUSE: newResult contained only {scope, content}, no metadata
FIX: build metadata from result.debug.resolved_placeholders
- For base prompts: taken directly from resolved_placeholders
- For pipelines: collected from all stages
- Metadata structure: {prompt_type, placeholders: {key: {value, description}}}
NOTE: the immediate preview has no descriptions (values only);
saved insights (after loadAll) have full metadata with descriptions from the DB
version: 9.6.2 (bugfix)
BUG: the value table was not displayed
FIX: enable_debug=true when save=true (needed for metadata collection)
- metadata is only stored while debug is active
- now: debug or save → metadata is always available
BUG: the {{placeholder|d}} modifier did not work
ROOT CAUSE: the catalog was not added to variables when an exception occurred
FIX:
- variables['_catalog'] = catalog (even when None)
- warning log when the catalog cannot be loaded
- debug warning when |d is used without a catalog
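The catalog fallback fix can be sketched as follows; the function names and the shape of the catalog (a plain key-to-description mapping) are assumptions for illustration:

```python
import logging

log = logging.getLogger(__name__)

def build_variables(load_catalog):
    """Always set _catalog, even when loading fails.

    Previously the key was skipped on exception, which broke the |d modifier.
    """
    variables = {}
    try:
        catalog = load_catalog()
    except Exception as exc:
        log.warning("placeholder catalog could not be loaded: %s", exc)
        catalog = None
    variables["_catalog"] = catalog
    return variables

def describe(key, variables):
    """Resolve {{key|d}}: fall back to the key itself when no catalog exists."""
    catalog = variables.get("_catalog")
    if catalog is None:
        log.debug("|d used for %s but no catalog is available", key)
        return key
    return catalog.get(key, key)
```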
BUG: placeholders in pipeline stages were inserted at the end instead of at the cursor
FIX:
- stageTemplateRefs map for all stage textareas
- onClick + onKeyUp tracking of the cursor position
- Insert at cursor: template.slice(0, pos) + placeholder + template.slice(pos)
- Focus and cursor restored after the insert
TECHNICAL:
- prompt_executor.py: better exception handling for the catalog
- UnifiedPromptModal.jsx: refs for all template fields
- prompts.py: enable_debug=debug or save
version: 9.6.1 (bugfix)
module: prompts 2.1.1
Problem: the dob column is DATE (PostgreSQL) → Python receives a datetime.date,
not a string → strptime() fails → age = "unbekannt"
Fix: check isinstance(dob, str) and handle both types:
- string → strptime()
- date object → use directly
The {{age}} placeholder now works correctly.
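A minimal sketch of the two-type handling, assuming the ISO date format and a hypothetical compute_age helper:

```python
from datetime import date, datetime

def compute_age(dob, today=None):
    """Accept both str (legacy) and datetime.date (PostgreSQL DATE column)."""
    today = today or date.today()
    if isinstance(dob, str):
        dob = datetime.strptime(dob, "%Y-%m-%d").date()
    years = today.year - dob.year
    # Subtract one year if the birthday has not happened yet this year
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years
```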
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Added "📋 Platzhalter exportieren" (export placeholders) button in the debug viewer:
- Exports all resolved placeholders with values
- Includes all available_variables
- For pipelines: exports per-stage placeholder data
- JSON format with timestamp and prompt metadata
- Filename: placeholders-{slug}-{date}.json
Use case: Development aid - see exactly what data is available
for prompt templates without null values.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
BREAKING: Analysis page switched from old /insights/run to new /prompts/execute
Changes:
- Backend: Added save=true parameter to /prompts/execute
- When enabled, saves final output to ai_insights table
- Extracts content from pipeline output (last stage)
- Frontend api.js: Added save parameter to executeUnifiedPrompt()
- Frontend Analysis.jsx: Switched from api.runInsight() to api.executeUnifiedPrompt()
- Transforms new result format to match InsightCard expectations
- Pipeline outputs properly extracted and displayed
Fixes: PIPELINE_MASTER responses (old template being sent to AI)
The old /insights/run endpoint used raw template field, which for the
legacy "pipeline" prompt was literally "PIPELINE_MASTER". The new
executor properly handles stages and data processing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Removed conditional hiding of test button (prompt?.slug)
- Button now always visible with helpful tooltip
- handleTest already has save-check logic
Improves discoverability of test functionality.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- caliper_summary: use body_fat_pct (not bf_jpl)
- circ_summary: use c_chest, c_waist, c_hip (not brust, taille, huefte)
- get_latest_bf: use body_fat_pct for consistency
Fixes SQL errors when running base prompts that feed pipeline prompts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- New placeholder: {{activity_detail}} returns formatted activity log
- Shows last 20 activities with date, type, duration, kcal, HR
- Makes activity analysis prompts work properly
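The {{activity_detail}} rendering could look roughly like this; the row keys (date, type, duration_min, kcal, avg_hr) are assumptions about the activity schema:

```python
def format_activity_detail(rows, limit=20):
    """Render the last `limit` activities as one line each for the prompt."""
    lines = []
    for row in rows[:limit]:
        parts = [row["date"], row["type"], f"{row['duration_min']} min"]
        # kcal and heart rate are optional fields
        if row.get("kcal"):
            parts.append(f"{row['kcal']} kcal")
        if row.get("avg_hr"):
            parts.append(f"HR {row['avg_hr']}")
        lines.append(" | ".join(parts))
    return "\n".join(lines)
```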
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Placeholder resolver returns keys with {{ }} wrappers,
but resolve_placeholders expects clean keys.
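A likely shape of the normalization fix, as a sketch (the function name is hypothetical):

```python
def normalize_key(key):
    """Accept placeholder keys both with and without {{ }} wrappers."""
    key = key.strip()
    if key.startswith("{{") and key.endswith("}}"):
        key = key[2:-2].strip()
    return key
```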
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Backend: integrate get_placeholder_example_values in execute_prompt_with_data
- Backend: now provides BOTH raw data AND processed placeholders
- Backend: unwrap Markdown-wrapped JSON (```json ... ```)
- Fixes old-style prompts that expect name, weight_trend, caliper_summary
Resolves unresolved placeholders issue.
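The Markdown unwrapping step can be sketched like this; the exact regex and function name are assumptions, not the project's implementation:

```python
import json
import re

def unwrap_markdown_json(text):
    """Strip an optional Markdown json code fence before parsing model output."""
    match = re.match(r"^\s*```(?:json)?\s*\n(.*?)\n\s*```\s*$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)
```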
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Frontend: debug viewer now shows even when test fails
- Frontend: export button to download complete prompt config as JSON
- Backend: attach debug info to JSON validation errors
- Backend: include raw output and length in error details
Users can now debug failed prompts and export configs for analysis.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Backend: debug mode in prompt_executor with placeholder tracking
- Backend: show resolved/unresolved placeholders, final prompts, AI responses
- Frontend: test button in UnifiedPromptModal for saved prompts
- Frontend: debug output viewer with JSON preview
- Frontend: wider placeholder example fields in PlaceholderPicker
Resolves pipeline execution debugging issues.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- PlaceholderPicker: Example values in separate full-width row
- Analysis.jsx: Show only pipeline-type prompts
- Analysis.jsx: Remove base prompts and Prompts tab
- Cleanup: Remove PromptEditor component and unused imports
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Major improvements:
1. PlaceholderPicker component (new)
- Loads placeholders dynamically from backend catalog
- Grouped by categories: Profil, Körper, Ernährung, Training, etc. (profile, body, nutrition, training)
- Search/filter functionality
- Shows live example values from user data
- Popup modal with expand/collapse categories
2. Replaced hardcoded placeholder chips
- 'Platzhalter einfügen' (insert placeholder) button opens the picker
- Works in both base templates and pipeline inline templates
- Auto-closes after selection
3. Uses existing backend system
- GET /api/prompts/placeholders
- placeholder_resolver.py with PLACEHOLDER_MAP
- Dynamic, module-based placeholder system
- No manual updates needed when modules add new placeholders
Benefits:
- Scalable: New modules can add placeholders without frontend changes
- User-friendly: Search and categorization
- Context-aware: Shows real example values
- Future-proof: Backend-driven catalog
New features:
1. Placeholder chips now visible in pipeline inline templates
- Click to insert: weight_data, nutrition_data, activity_data, etc.
- Same UX as base prompts
2. Convert to Base Prompt button
- New icon (ArrowDownToLine) in actions column
- Only visible for 1-stage pipeline prompts
- Converts pipeline → base by extracting inline template
- Validates: must be 1-stage, 1-prompt, inline source
This allows migrated prompts to be properly categorized as base prompts
for reuse in other pipelines.