Compare commits

...

57 Commits

Author SHA1 Message Date
f46c367c27 Merge pull request 'Flexibles KI Prompt System' (#48) from develop into main
All checks were successful
Deploy Production / deploy (push) Successful in 50s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 14s
Reviewed-on: #48
2026-03-26 14:49:47 +01:00
21bdd9f2ba docs: add Claude Code responsibilities section
- Issue management via the Gitea API
- Maintain documentation
- Development workflow
2026-03-26 14:46:20 +01:00
713f7475c9 docs: create Issue #50 - Value Table Refinement
- Normal mode: single values only (compact)
- Expert mode: additionally show stage raw data
- Complete the descriptions for all placeholders
- Schema-based descriptions for extracted values

Effort: 4-6h, priority: medium
2026-03-26 14:43:23 +01:00
6e651b5bb5 fix: include stage outputs in debug info for value table
- stage_debug now includes 'output' dict with all stage outputs
- Fixes empty values for stage_X_outputkey in expert mode
- Stage outputs are the actual AI responses passed to next stage
2026-03-26 14:33:00 +01:00
f37936c84d feat: show all stage outputs as collapsible JSON in expert mode
Backend:
- Add ALL stage outputs to metadata (not just referenced ones)
- Format JSON with indent for readability
- Description: 'Zwischenergebnis aus Stage X'

Frontend:
- Stage raw values shown in collapsible <details> element
- JSON formatted in <pre> tag with syntax highlighting
- 'JSON anzeigen ▼' summary for better UX

Fixes: 'Stage X - Rohdaten' now shows intermediate results
2026-03-26 13:17:58 +01:00
159fcab17a feat: circ_summary with best-of-each strategy and age annotations
- Each circumference point shows most recent value (even from different dates)
- Age annotations: heute, gestern, vor X Tagen/Wochen/Monaten
- Gives AI better context about measurement freshness
- Example: 'Brust 105cm (heute), Nacken 38cm (vor 2 Wochen)'
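The best-of-each age annotation could be sketched roughly like this (a hypothetical helper, not the repo's actual code; the German labels are the literal output strings the commit describes):

```python
from datetime import date

def age_annotation(measured, today=None):
    """Hypothetical sketch: render measurement age as the German
    annotation the commit describes (heute, gestern, vor X Tagen/Wochen/Monaten)."""
    today = today or date.today()
    days = (today - measured).days
    if days <= 0:
        return "heute"
    if days == 1:
        return "gestern"
    if days < 7:
        return f"vor {days} Tagen"
    if days < 30:
        weeks = days // 7
        return "vor {} Woche{}".format(weeks, "n" if weeks > 1 else "")
    months = days // 30
    return "vor {} Monat{}".format(months, "en" if months > 1 else "")

print(age_annotation(date(2026, 3, 12), today=date(2026, 3, 26)))  # vor 2 Wochen
```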
2026-03-26 13:09:38 +01:00
d06d3d84de fix: circ_summary now checks all 8 circumference points
- Previously only checked c_chest, c_waist, c_hip
- Now includes c_neck, c_belly, c_thigh, c_calf, c_arm
- Fixes 'keine Daten' when entries exist with only non-primary measurements
2026-03-26 13:06:37 +01:00
b0f80e0be7 docs: document Issue #47 completion in CLAUDE.md
- Added comprehensive documentation for Value Table feature
- Expert mode, category grouping, stage output extraction
- Updated version header to reflect #28 and #47 completion
2026-03-26 13:03:49 +01:00
adb5dcea88 feat: category grouping in value table (Issue #47)
FEATURE: Grouping by category
- Value table is now grouped by module/category
- Better overview and clearer attribution of values

BACKEND: Category metadata
- Regular placeholders: category from the catalog (Profil, Körper, Ernährung, etc.)
- Extracted values: "Stage X - [Output Name]"
- Raw data: "Stage X - Rohdaten"
- Fallback: "Sonstiges"

FRONTEND: Grouped display
- sortedCategories: ordering (regular → stage outputs → raw data)
- Section headers: grey background with the category name
- React.Fragment for grouping

SORT ORDER:
1. Regular categories (Profil, Körper, Ernährung, Training, etc.)
2. Stage outputs (Stage 1 - Body, Stage 1 - Nutrition, etc.)
3. Raw data (Stage 1 - Rohdaten, Stage 2 - Rohdaten)
4. Within each group: alphabetical

EXAMPLE:
┌────────────────────────────────────────────┐
│ PROFIL                                     │
├────────────────────────────────────────────┤
│ name       │ Lars    │ Name des Nutzers   │
│ age        │ 55      │ Alter in Jahren    │
├────────────────────────────────────────────┤
│ KÖRPER                                     │
├────────────────────────────────────────────┤
│ weight_... │ 85.2 kg │ Aktuelles Gewicht  │
│ bmi        │ 26.6    │ Body Mass Index    │
├────────────────────────────────────────────┤
│ ERNÄHRUNG                                  │
├────────────────────────────────────────────┤
│ kcal_avg   │ 1427... │ Durchschn. Kalorien│
│ protein... │ 106g... │ Durchschn. Protein │
├────────────────────────────────────────────┤
│ STAGE 1 - BODY                             │
├────────────────────────────────────────────┤
│ ↳ bmi      │ 26.6    │ Aus Stage 1 (body) │
│ ↳ trend    │ sinkend │ Aus Stage 1 (body) │
├────────────────────────────────────────────┤
│ STAGE 1 - NUTRITION                        │
├────────────────────────────────────────────┤
│ ↳ kcal_... │ 1427    │ Aus Stage 1 (nutr.)│
└────────────────────────────────────────────┘

Expert mode additionally shows:
├────────────────────────────────────────────┤
│ STAGE 1 - ROHDATEN                         │
├────────────────────────────────────────────┤
│ 🔬 stage...│ {"bmi"..│ Rohdaten Stage 1   │
└────────────────────────────────────────────┘

version: 9.10.0 (feature)
module: prompts 2.5.0, insights 1.8.0
2026-03-26 12:59:52 +01:00
da803da816 feat: extract individual values from stage outputs (Issue #47)
FEATURE: Individual values from base analyses
Before: stage_1_body → {"bmi": 26.6, "weight": "85.2kg"} (one row)
Now:    bmi → 26.6 (own row)
        weight → 85.2kg (own row)

BACKEND: JSON extraction
- Stage outputs (JSON) → extract individual fields
- extracted_values dict collects all individual values
- Deduplication: identical keys appear only once
- Flags:
  - is_extracted: true → value extracted from a stage output
  - is_stage_raw: true → raw JSON, expert mode only

EXAMPLE Stage 1 output:
{
  "stage_1_body": {
    "bmi": 26.6,
    "weight": "85.2 kg",
    "trend": "sinkend"
  }
}

→ Metadata:
{
  "bmi": {
    value: "26.6",
    description: "Aus Stage 1 (stage_1_body)",
    is_extracted: true
  },
  "weight": {
    value: "85.2 kg",
    description: "Aus Stage 1 (stage_1_body)",
    is_extracted: true
  },
  "stage_1_body": {
    value: "{\"bmi\": 26.6, ...}",
    description: "Rohdaten Stage 1 (Basis-Analyse JSON)",
    is_stage_raw: true
  }
}
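The extraction step above can be sketched minimally (extract_values is a hypothetical name; the real logic lives in the backend metadata collection):

```python
import json

def extract_values(stage_outputs):
    """Flatten JSON stage outputs into individual rows, keeping the
    raw JSON as a separate entry flagged is_stage_raw (sketch only)."""
    metadata = {}
    for stage_key, output in stage_outputs.items():
        for key, value in output.items():
            if key not in metadata:  # deduplication: identical keys only once
                metadata[key] = {"value": str(value), "is_extracted": True}
        # keep the whole stage output as raw JSON for expert mode
        metadata[stage_key] = {"value": json.dumps(output), "is_stage_raw": True}
    return metadata

meta = extract_values({"stage_1_body": {"bmi": 26.6, "trend": "sinkend"}})
print(sorted(meta))  # ['bmi', 'stage_1_body', 'trend']
```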

FRONTEND: Smart filtering
Normal mode:
- Shows: individual values (bmi, weight, trend)
- Hides: raw data (stage_1_body JSON)
- Filter: is_stage_raw === false

Expert mode:
- Shows: everything (individual values + raw data)
- Raw data: grey background + 🔬 icon

VISUAL indicators:
↳ bmi        → extracted value (green)
  weight     → regular placeholder (accent)
🔬 stage_1_* → raw JSON (grey, small, expert mode only)

RESULT:
┌──────────────────────────────────────────┐
│ 📊 Verwendete Werte (8) (+2 ausgeblendet)│
│ ┌────────────────────────────────────────┐│
│ │ weight_aktuell │ 85.2 kg   │ Gewicht ││ ← regular
│ │ ↳ bmi          │ 26.6      │ Aus St..││ ← extracted
│ │ ↳ trend        │ sinkend   │ Aus St..││ ← extracted
│ └────────────────────────────────────────┘│
└──────────────────────────────────────────┘

Expert mode additionally:
│ 🔬 stage_1_body │ {"bmi":...│ Rohdaten││ ← JSON

version: 9.9.0 (feature)
module: prompts 2.4.0, insights 1.7.0
2026-03-26 12:55:53 +01:00
e799edbae4 feat: expert mode + stage outputs in value table (Issue #47)
FEATURE: Expert mode 🔬
- Toggle button in the value table
- Normal: show only populated values
- Expert: all placeholders, incl. empty/technical ones
- Display: "(+X ausgeblendet)" when values are filtered out
- Button style: accent when active

FILTER: hide empty values (normal mode)
- Filters out: '', 'nicht verfügbar', '[Nicht verfügbar]'
- Shows only relevant user data
- Expert mode shows everything

FEATURE: Stage outputs in the value table
ROOT CAUSE: stage_N_key placeholders had no values
- Stage outputs (e.g. stage_1_body) are base-analysis results
- They were not found in cleaned_values (static placeholders only)
FIX:
- Collect stage outputs from result.debug.stages[].output
- Store them as a stage_N_key dict
- Lookup: stage_outputs first, then cleaned_values
- Description: "Output aus Stage X (Basis-Analyse)"
- JSON values are serialized automatically

EXAMPLE pipeline value table:
┌──────────────────────────────────────────────┐
│ 📊 Verwendete Werte (8) (+3 ausgeblendet) 🔬│
│ ┌──────────────────────────────────────────┐ │
│ │ weight_aktuell  │ 85.2 kg   │ Gewicht  │ │
│ │ stage_1_body    │ {"bmi":...│ Output...│ │ ← Stage output!
│ │ stage_1_nutr... │ {"kcal"...│ Output...│ │
│ └──────────────────────────────────────────┘ │
└──────────────────────────────────────────────┘

ENABLING expert mode:
1. Open an analysis
2. Expand "📊 Verwendete Werte"
3. Click the "🔬 Experten-Modus" button
4. All placeholders are shown (including empty stage outputs)

version: 9.8.0 (feature)
module: prompts 2.3.0, insights 1.6.0
2026-03-26 12:44:28 +01:00
15bd6cddeb feat: untruncated values + smart base prompt display (Issue #47)
FEATURE: Full values (no truncation)
- Backend fetches untruncated values directly from placeholder_resolver
- get_placeholder_example_values() instead of debug.resolved_placeholders
- Debug output stays truncated (100 chars); metadata is untruncated

FEATURE: Smart display for base prompts
- Base prompts with JSON output: show only the value table
- JSON output moved into a collapsible "Technische Daten" section
- Value table auto-expanded for base prompts
- Pipeline + text prompts: unchanged (content + value table)

UI: Better value table
- Values: word-break + max-width (400px) → no overflow
- All columns: verticalAlign top for better readability
- Placeholders: nowrap (no line breaks)

EXAMPLE:
┌─────────────────────────────────────────┐
│ ℹ️ Basis-Prompt Rohdaten                │
│ [Technische Daten anzeigen ▼]           │
│                                          │
│ 📊 Verwendete Werte (8) ▼  ← expanded  │
│ ┌──────────────────────────────────────┐│
│ │ Platzhalter │ Vollständiger Wert... ││
│ │ kcal_avg    │ 1427 kcal/Tag (Ø 30...││ ← untruncated
│ └──────────────────────────────────────┘│
└─────────────────────────────────────────┘

version: 9.7.0 (feature)
module: prompts 2.2.0, insights 1.5.0
2026-03-26 12:37:52 +01:00
19414614bf fix: add metadata to newResult for immediate value table display
BUG: the value table was not shown for a freshly run analysis
ROOT CAUSE: newResult only contained {scope, content}, no metadata
FIX: build metadata from result.debug.resolved_placeholders
- Base prompts: directly from resolved_placeholders
- Pipeline prompts: collected from all stages
- Metadata structure: {prompt_type, placeholders: {key: {value, description}}}

NOTE: the immediate preview has no descriptions (values only).
Saved insights (after loadAll) carry full metadata with descriptions from the DB.

version: 9.6.2 (bugfix)
2026-03-26 12:29:05 +01:00
4a2bebe249 fix: value table metadata + |d modifier + cursor insertion (Issues #47, #48)
BUG: the value table was not displayed
FIX: enable_debug=true when save=true (for metadata collection)
- metadata is only stored when debug is active
- now: debug or save → metadata always available

BUG: the {{placeholder|d}} modifier did not work
ROOT CAUSE: on exception, the catalog was not added to variables
FIX:
- variables['_catalog'] = catalog (even when None)
- Warning log when the catalog cannot be loaded
- Debug warning when |d is used without a catalog

BUG: placeholders in pipeline stages were inserted at the end instead of at the cursor
FIX:
- stageTemplateRefs map for all stage textareas
- onClick + onKeyUp tracking of the cursor position
- Insert at cursor: template.slice(0, pos) + placeholder + template.slice(pos)
- Focus + cursor restored after insert

TECHNICAL:
- prompt_executor.py: better exception handling for the catalog
- UnifiedPromptModal.jsx: refs for all template fields
- prompts.py: enable_debug=debug or save

version: 9.6.1 (bugfix)
module: prompts 2.1.1
2026-03-26 12:04:20 +01:00
c0a50dedcd feat: value table + {{placeholder|d}} modifier (Issue #47)
FEATURE #47: value table after AI analyses
- Migration 021: metadata JSONB column in ai_insights
- Backend collects resolved placeholders with their descriptions on save
- Frontend: collapsible value table in InsightCard
  - Shows: placeholder | value | description
  - Sorted, tabular layout
  - Works for base + pipeline prompts

FEATURE #48: {{placeholder|d}} modifier
- Syntax: {{weight_aktuell|d}} → "85.2 kg (Aktuelles Gewicht in kg)"
- resolve_placeholders() detects the |d modifier
- Appends the catalog description to the value
- Fine-grained control per placeholder (not global)
- Optional: use only where it adds value

TECHNICAL:
- prompt_executor.py: catalog parameter passed through
- execute_prompt_with_data() loads the catalog via get_placeholder_catalog()
- Catalog passed as _catalog in variables, extracted in execute_prompt()
- Base + pipeline prompts support the |d modifier

EXAMPLE:
Template: "Gewicht: {{weight_aktuell|d}}, Alter: {{age}}"
Output:   "Gewicht: 85.2 kg (Aktuelles Gewicht in kg), Alter: 55"
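A rough sketch of how such a |d modifier can be resolved (the regex and the CATALOG/VALUES dicts are illustrative stand-ins for the real catalog and resolver, not the repo's code):

```python
import re

# Illustrative stand-ins for the placeholder catalog and resolved values.
CATALOG = {"weight_aktuell": "Aktuelles Gewicht in kg"}
VALUES = {"weight_aktuell": "85.2 kg", "age": "55"}

def resolve(template):
    """Replace {{key}} and {{key|d}}; |d appends the catalog description."""
    def repl(m):
        key, modifier = m.group(1), m.group(2)
        value = VALUES.get(key, "[Nicht verfügbar]")
        if modifier == "d" and key in CATALOG:
            return f"{value} ({CATALOG[key]})"
        return value
    return re.sub(r"\{\{(\w+)(?:\|(\w+))?\}\}", repl, template)

print(resolve("Gewicht: {{weight_aktuell|d}}, Alter: {{age}}"))
# → Gewicht: 85.2 kg (Aktuelles Gewicht in kg), Alter: 55
```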

version: 9.6.0 (feature)
module: prompts 2.1.0, insights 1.4.0
2026-03-26 11:52:26 +01:00
c56d2b2201 fix: delete insights + placeholder cursor insertion (Issue #44)
BUG #44: deleting analyses failed (no auth token)
FIX:
- Added api.deleteInsight() to api.js
- Analysis.jsx now uses api.js with error handling
- No more raw fetch() without a token

BUG: placeholders were inserted at the end instead of at the cursor position
FIX:
- Added a useRef for baseTemplateRef
- Cursor position tracking (onClick + onKeyUp)
- Insert at cursor: template.slice(0, pos) + placeholder + template.slice(pos)
- Focus + cursor position restored after insert

version: 9.5.2 (bugfix)
module: prompts 2.0.2, insights 1.3.1
2026-03-26 11:40:19 +01:00
7daa2e40c7 fix: sleep quality calculation using wrong key (stage vs phase)
BUG: sleep_avg_quality showed 0% despite valid sleep data
ROOT CAUSE: sleep_segments use 'phase' key, not 'stage'
FIX: Changed s.get('stage') to s.get('phase') in get_sleep_avg_quality()
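The corrected calculation can be sketched like this (segment field names such as 'minutes' are assumptions based on the commit message, not verified against the repo):

```python
def sleep_quality(segments):
    """Sketch: quality = share of deep + REM sleep, reading the
    'phase' key (segments are stored lowercase, per the fix above)."""
    total = sum(s.get("minutes", 0) for s in segments)
    if not total:
        return "0%"
    restful = sum(s.get("minutes", 0) for s in segments
                  if s.get("phase") in ("deep", "rem"))
    return f"{round(100 * restful / total)}% (Deep+REM)"

segments = [
    {"phase": "deep", "minutes": 90},
    {"phase": "rem", "minutes": 70},
    {"phase": "light", "minutes": 230},
    {"phase": "awake", "minutes": 10},
]
print(sleep_quality(segments))  # 40% (Deep+REM)
```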

version: 9.5.1 (bugfix)
module: prompts 2.0.1
2026-03-26 10:31:39 +01:00
ae6bd0d865 docs: Issue #28 completion documentation (v9e)
- Marked Issue #28 as complete
- Documented all 4 original phases
- Documented debug tools added 26.03.2026
- Documented placeholder enhancements (6 new functions, 7 reconstructed)
- Documented bug fixes (PIPELINE_MASTER, SQL columns, type errors)
- Listed related Gitea issues (#43, #44, #45, #46)
- Updated version status to v9e Ready for Production

version: 9.5.0 (documentation update)
2026-03-26 10:28:42 +01:00
a43a9f129f fix: sleep_avg_quality uses lowercase stage names
Problem: sleep phases are stored lowercase (deep, rem, light, awake),
but get_sleep_avg_quality() checked title case (Deep, REM) → 0% match

Fix: change the check to lowercase: ['deep', 'rem']

{{sleep_avg_quality}} is now calculated correctly from the JSONB segments.

Source: backend/routers/sleep.py → phase_map stores lowercase

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 10:22:55 +01:00
3ad1a19dce fix: calculate_age now handles PostgreSQL date objects
Problem: the dob column is DATE (PostgreSQL) → Python receives a datetime.date,
not a string → strptime() fails → age = "unbekannt"

Fix: check isinstance(dob, str) and handle both types:
- string → strptime()
- date object → use directly

The {{age}} placeholder now works correctly.
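A minimal sketch of the two-type handling (the signature, including the optional today parameter, is illustrative, not the repo's actual function):

```python
from datetime import date, datetime

def calculate_age(dob, today=None):
    """Sketch: dob may arrive as a PostgreSQL DATE (datetime.date)
    or as a 'YYYY-MM-DD' string; handle both before computing the age."""
    today = today or date.today()
    if isinstance(dob, str):
        try:
            dob = datetime.strptime(dob, "%Y-%m-%d").date()
        except ValueError:
            return "unbekannt"
    if not isinstance(dob, date):
        return "unbekannt"
    # subtract one if the birthday hasn't occurred yet this year
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

print(calculate_age("1971-01-15", today=date(2026, 3, 26)))  # 55
```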

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 10:19:36 +01:00
a9114bc40a feat: implement missing placeholder functions (sleep, vitals, rest)
Implements 6 missing placeholder functions that were listed in the catalog
but had no calculation behind them.

New functions:
- get_sleep_avg_duration(7d) → "7.5h"
- get_sleep_avg_quality(7d) → "65% (Deep+REM)"
- get_rest_days_count(30d) → "5 Ruhetage"
- get_vitals_avg_hr(7d) → "58 bpm"
- get_vitals_avg_hrv(7d) → "45 ms"
- get_vitals_vo2_max() → "42.5 ml/kg/min"

Data sources:
- sleep_log (JSONB segments with Deep/REM/Light/Awake)
- rest_days (Kraft/Cardio/Entspannung)
- vitals_baseline (resting_hr, hrv, vo2_max)

Now registered in PLACEHOLDER_MAP → immediately usable.

Fixes: the placeholder export now shows all values (instead of "nicht verfügbar")

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 10:14:17 +01:00
555ff62b56 feat: global placeholder export with values (Settings page)
Central export of all available placeholders with their current values.

Backend:
- GET /api/prompts/placeholders/export-values
  - Returns all placeholders organized by category
  - Includes resolved values for current profile
  - Includes metadata (description, example)
  - Flat list + categorized structure

Frontend SettingsPage:
- Button "📊 Platzhalter exportieren"
- Downloads: placeholders-{profile}-{date}.json
- Shows all 38+ placeholders with current values
- Useful for:
  - Understanding available data
  - Debugging prompt templates
  - Verifying placeholder resolution

Frontend api.js:
- exportPlaceholderValues()

Export Format:
{
  "export_date": "2026-03-26T...",
  "profile_id": "...",
  "count": 38,
  "all_placeholders": { "name": "Lars", ... },
  "placeholders_by_category": {
    "Profil": [...],
    "Körper": [...],
    ...
  }
}

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 10:05:11 +01:00
7f94a41965 feat: batch import/export for prompts (Issue #28 Debug B)
Dev→Prod sync in two clicks: export → import

Backend:
- GET /api/prompts/export-all → JSON mit allen Prompts
- POST /api/prompts/import?overwrite=true/false → Import + Create/Update
  - Returns: created, updated, skipped counts
  - Validates JSON structure
  - Handles stages JSON conversion

Frontend AdminPromptsPage:
- Button "📦 Alle exportieren" → downloads all-prompts-{date}.json
- Button "📥 Importieren" → file upload dialog
  - User prompt: overwrite? yes/no
  - Success message with statistics (created/updated/skipped)

Frontend api.js:
- exportAllPrompts()
- importPrompts(data, overwrite)

Use Cases:
1. Backup: save prompts as JSON
2. Dev→Prod: develop on dev.mitai → export → import on mitai.jinkendo
3. Versioning: store prompts in Git

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 09:44:08 +01:00
8b287ca6c9 feat: export all placeholders from debug viewer (Issue #28 Debug A)
Added "📋 Platzhalter exportieren" button in debug viewer:
- Exports all resolved placeholders with values
- Includes all available_variables
- For pipelines: exports per-stage placeholder data
- JSON format with timestamp and prompt metadata
- Filename: placeholders-{slug}-{date}.json

Use case: Development aid - see exactly what data is available
for prompt templates without null values.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 09:40:26 +01:00
97e57481f9 fix: Analysis page now uses unified prompt executor (Issue #28)
BREAKING: Analysis page switched from old /insights/run to new /prompts/execute

Changes:
- Backend: Added save=true parameter to /prompts/execute
  - When enabled, saves final output to ai_insights table
  - Extracts content from pipeline output (last stage)
- Frontend api.js: Added save parameter to executeUnifiedPrompt()
- Frontend Analysis.jsx: Switched from api.runInsight() to api.executeUnifiedPrompt()
  - Transforms new result format to match InsightCard expectations
  - Pipeline outputs properly extracted and displayed

Fixes: PIPELINE_MASTER responses (old template being sent to AI)
The old /insights/run endpoint used raw template field, which for the
legacy "pipeline" prompt was literally "PIPELINE_MASTER". The new
executor properly handles stages and data processing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 09:38:58 +01:00
811ba8b3dc fix: convert Decimal to float before multiplication in protein targets
- get_protein_ziel_low: float(weight) * 1.6
- get_protein_ziel_high: float(weight) * 2.2

Fixes TypeError: unsupported operand type(s) for *: 'decimal.Decimal' and 'float'
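The fix boils down to a one-line conversion; a hedged sketch (function name and rounding are illustrative, the 1.6/2.2 g/kg factors come from the commit):

```python
from decimal import Decimal

def protein_targets(weight):
    """Sketch of the fix: weight comes out of a PostgreSQL NUMERIC
    column as Decimal; Decimal * float raises TypeError, so convert first."""
    w = float(weight)  # Decimal("85.2") * 1.6 would raise TypeError
    return round(w * 1.6, 1), round(w * 2.2, 1)

low, high = protein_targets(Decimal("85.2"))
print(low, high)  # 136.3 187.4
```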

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 09:23:50 +01:00
b90c738fbb fix: make test button always visible in prompt editor
- Removed conditional hiding of test button (prompt?.slug)
- Button now always visible with helpful tooltip
- handleTest already has save-check logic

Improves discoverability of test functionality.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 09:16:59 +01:00
dfaf24d74c fix: correct SQL column names in placeholder_resolver
- caliper_summary: use body_fat_pct (not bf_jpl)
- circ_summary: use c_chest, c_waist, c_hip (not brust, taille, huefte)
- get_latest_bf: use body_fat_pct for consistency

Fixes SQL errors when running base prompts that feed pipeline prompts.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 09:10:55 +01:00
0f2b85c6de fix: reconstruct missing placeholders + fix SQL column names
Added missing placeholders:
- caliper_summary, circ_summary (body measurements)
- goal_weight, goal_bf_pct (goals from profile)
- nutrition_days (count of nutrition entries)
- protein_ziel_low/high (calculated from weight)

Fixed SQL errors:
- protein → protein_g
- fat → fat_g
- carb → carbs_g

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 09:03:35 +01:00
f4d1fd4de1 feat: add activity_detail placeholder for detailed activity logs
- New placeholder: {{activity_detail}} returns formatted activity log
- Shows last 20 activities with date, type, duration, kcal, HR
- Makes activity analysis prompts work properly

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 08:20:18 +01:00
ba92d66880 fix: remove {{ }} from placeholder keys before resolution
Placeholder resolver returns keys with {{ }} wrappers,
but resolve_placeholders expects clean keys.
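The cleanup step might look like this (clean_key is a hypothetical helper name for the normalization described above):

```python
def clean_key(key):
    """Sketch: catalog keys arrive wrapped as '{{name}}', but the
    resolver expects bare keys — strip the braces before lookup."""
    return key.strip().removeprefix("{{").removesuffix("}}").strip()

wrapped = {"{{name}}": "Lars", "{{weight_aktuell}}": "85.2 kg"}
cleaned = {clean_key(k): v for k, v in wrapped.items()}
print(cleaned)  # {'name': 'Lars', 'weight_aktuell': '85.2 kg'}
```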

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 08:17:22 +01:00
afc70b5a95 fix: integrate placeholder resolver + JSON unwrapping (Issue #28)
- Backend: integrate get_placeholder_example_values in execute_prompt_with_data
- Backend: now provides BOTH raw data AND processed placeholders
- Backend: unwrap Markdown-wrapped JSON (```json ... ```)
- Fixes old-style prompts that expect name, weight_trend, caliper_summary

Resolves unresolved placeholders issue.
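The fence-unwrapping step could be sketched as follows (regex and helper name are illustrative; the fence is built with `"`" * 3` only to keep the example self-contained):

```python
import json
import re

# Models often return JSON wrapped in a Markdown code fence
# (three backticks, optionally tagged "json"); strip it before json.loads().
FENCE_RE = re.compile(r"`{3}(?:json)?\s*(.*?)\s*`{3}", re.DOTALL)

def unwrap_json(raw):
    m = FENCE_RE.search(raw)
    return json.loads(m.group(1) if m else raw)

wrapped = "`" * 3 + 'json\n{"bmi": 26.6}\n' + "`" * 3
print(unwrap_json(wrapped))  # {'bmi': 26.6}
```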

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 08:14:41 +01:00
84dad07e15 fix: show debug info on errors + prompt export function
- Frontend: debug viewer now shows even when test fails
- Frontend: export button to download complete prompt config as JSON
- Backend: attach debug info to JSON validation errors
- Backend: include raw output and length in error details

Users can now debug failed prompts and export configs for analysis.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 08:07:34 +01:00
7f2ba4fbad feat: debug system for prompt execution (Issue #28)
- Backend: debug mode in prompt_executor with placeholder tracking
- Backend: show resolved/unresolved placeholders, final prompts, AI responses
- Frontend: test button in UnifiedPromptModal for saved prompts
- Frontend: debug output viewer with JSON preview
- Frontend: wider placeholder example fields in PlaceholderPicker

Resolves pipeline execution debugging issues.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 08:01:33 +01:00
4ba03c2a94 feat: Analysis page pipeline-only + wider placeholder examples (Issue #28)
- PlaceholderPicker: Example values in separate full-width row
- Analysis.jsx: Show only pipeline-type prompts
- Analysis.jsx: Remove base prompts and Prompts tab
- Cleanup: Remove PromptEditor component and unused imports

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 07:50:13 +01:00
8036c99883 feat: dynamic placeholder picker with categories and search (Issue #28)
Major improvements:
1. PlaceholderPicker component (new)
   - Loads placeholders dynamically from backend catalog
   - Grouped by categories: Profil, Körper, Ernährung, Training, etc.
   - Search/filter functionality
   - Shows live example values from user data
   - Popup modal with expand/collapse categories

2. Replaced hardcoded placeholder chips
   - 'Platzhalter einfügen' button opens picker
   - Works in both base templates and pipeline inline templates
   - Auto-closes after selection

3. Uses existing backend system
   - GET /api/prompts/placeholders
   - placeholder_resolver.py with PLACEHOLDER_MAP
   - Dynamic, module-based placeholder system
   - No manual updates needed when modules add new placeholders

Benefits:
- Scalable: New modules can add placeholders without frontend changes
- User-friendly: Search and categorization
- Context-aware: Shows real example values
- Future-proof: Backend-driven catalog
2026-03-25 22:08:14 +01:00
b058b0fd6f feat: placeholder chips + convert to base prompt (Issue #28)
New features:
1. Placeholder chips now visible in pipeline inline templates
   - Click to insert: weight_data, nutrition_data, activity_data, etc.
   - Same UX as base prompts

2. Convert to Base Prompt button
   - New icon (ArrowDownToLine) in actions column
   - Only visible for 1-stage pipeline prompts
   - Converts pipeline → base by extracting inline template
   - Validates: must be 1-stage, 1-prompt, inline source

This allows migrated prompts to be properly categorized as base prompts
for reuse in other pipelines.
2026-03-25 21:59:43 +01:00
7dda520c9b fix: UI improvements for unified prompt system (Issue #28)
Fixes:
1. Template field in stages now full width (was too narrow)
2. Table horizontal scrollbar for mobile (overflow-x: auto)
3. Table min-width 900px to prevent icon clipping
4. Added clickable placeholder chips below base template
   - Click to insert placeholders into template
   - Shows: weight_data, nutrition_data, activity_data, sleep_data, etc.

UI now mobile-ready and more user-friendly.
2026-03-25 21:52:58 +01:00
0a3e76128a fix: simplified JSX string to avoid escaping issues
2026-03-25 21:42:01 +01:00
5249cd6939 fix: JSX syntax error in UnifiedPromptModal (Issue #28)
Some checks failed
Deploy Development / deploy (push) Failing after 32s
Build Test / lint-backend (push) Successful in 1s
Build Test / build-frontend (push) Successful in 14s
Fixed curly brace escaping in JSX string.
Changed from {{'{{'}} to {'{{'}}
2026-03-25 21:40:22 +01:00
2f3314cd36 feat: Issue #28 complete - unified prompt system (Phase 4)
Some checks failed
Deploy Development / deploy (push) Failing after 34s
Build Test / lint-backend (push) Successful in 1s
Build Test / build-frontend (push) Successful in 15s
Cleanup & Documentation:
- Removed deprecated components: PipelineConfigModal, PromptEditModal
- Updated CLAUDE.md with Issue #28 summary
- Kept old backend endpoints for backward-compatibility

Summary of all 4 phases:
✅ Phase 1: DB Migration (unified schema)
✅ Phase 2: Backend Executor (universal execution engine)
✅ Phase 3: Frontend UI (consolidated interface)
✅ Phase 4: Cleanup & Docs

Key improvements:
- Unlimited dynamic stages (no hardcoded limit)
- Multiple prompts per stage (parallel execution)
- Base prompts (reusable) + Pipeline prompts (workflows)
- Inline templates or references
- JSON output enforceable
- Cross-module correlations possible

Ready for testing on dev.mitai.jinkendo.de
2026-03-25 15:33:47 +01:00
31e2c24a8a feat: unified prompt UI - Phase 3 complete (Issue #28)
Some checks failed
Deploy Development / deploy (push) Failing after 35s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 13s
Frontend Consolidation:
- UnifiedPromptModal: Single editor for base + pipeline prompts
  - Type selector (base/pipeline)
  - Base: Template editor with placeholders
  - Pipeline: Dynamic stage editor
  - Add/remove stages with drag/drop
  - Inline or reference prompts per stage
  - Output key + format per prompt

- AdminPromptsPage redesign:
  - Removed tab switcher (prompts/pipelines)
  - Added type filter (All/Base/Pipeline)
  - Type badge in table
  - Stage count column
  - Icon-based actions (Edit/Copy/Delete)
  - Category filter retained

Changes:
- Completely rewrote AdminPromptsPage (495 → 446 lines)
- Single modal for all prompt types
- Mobile-ready layout
- Simplified state management

Next: Phase 4 - Cleanup deprecated endpoints + docs
2026-03-25 14:55:25 +01:00
7be7266477 feat: unified prompt executor - Phase 2 complete (Issue #28)
All checks were successful
Deploy Development / deploy (push) Successful in 52s
Build Test / lint-backend (push) Successful in 1s
Build Test / build-frontend (push) Successful in 13s
Backend:
- prompt_executor.py: Universal executor for base + pipeline prompts
  - Dynamic placeholder resolution
  - JSON output validation
  - Multi-stage parallel execution (sequential impl)
  - Reference and inline prompt support
  - Data loading per module (körper, ernährung, training, schlaf, vitalwerte)

Endpoints:
- POST /api/prompts/execute - Execute unified prompts
- POST /api/prompts/unified - Create unified prompts
- PUT /api/prompts/unified/{id} - Update unified prompts

Frontend:
- api.js: executeUnifiedPrompt, createUnifiedPrompt, updateUnifiedPrompt

Next: Phase 3 - Frontend UI consolidation
2026-03-25 14:52:24 +01:00
33653fdfd4 fix: migration 020 - make template column nullable
All checks were successful
Deploy Development / deploy (push) Successful in 48s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 14s
Issue: template has NOT NULL constraint but pipeline-type prompts
don't use template (they use stages JSONB instead).

Solution: ALTER COLUMN template DROP NOT NULL before inserting
pipeline configs into ai_prompts.
2026-03-25 14:45:53 +01:00
95dcf080e5 fix: migration 020 SQL syntax - correlated subquery issue
All checks were successful
Deploy Development / deploy (push) Successful in 42s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 13s
Fixed Step 3 pipeline_configs migration:
- Simplified JSONB aggregation logic
- Properly scope pc alias in subqueries
- Use UNNEST with FROM clause for array expansion

Previous version had correlation issues with nested subqueries.
2026-03-25 12:58:02 +01:00
2e0838ca08 feat: unified prompt system migration schema (Issue #28 Phase 1)
All checks were successful
Deploy Development / deploy (push) Successful in 50s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 13s
- Migration 020: Add type, stages, output_format columns to ai_prompts
- Migrate existing prompts to 1-stage pipeline format
- Migrate pipeline_configs into ai_prompts as multi-stage pipelines
- Add UnifiedPrompt Pydantic models for new API
- Backup pipeline_configs table (keep during transition)

Schema structure:
- type: 'base' (reusable) or 'pipeline' (multi-stage)
- stages: JSONB array [{stage:1, prompts:[{source, slug, template, output_key, output_format}]}]
- output_format: 'text' or 'json'
- output_schema: JSON validation schema (optional)

Next: Backend executor + Frontend UI consolidation
2026-03-25 10:43:10 +01:00
1b7fdb1739 chore: rollback point before unified prompt system refactoring (Issue #28)
Current state:
- Pipeline configs working (migration 019)
- PipelineConfigModal complete
- AdminPromptsPage with tabs
- All Phase 1+2 features deployed and tested

Next: Consolidate into unified prompt system
- Single ai_prompts table for all types
- Dynamic stages (unlimited)
- Base prompts + pipeline prompts
2026-03-25 10:42:18 +01:00
b23e361791 feat: Pipeline-System Frontend - Admin UI (Issue #28, Phase 2 Part 1)
All checks were successful
Deploy Development / deploy (push) Successful in 46s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 15s
Implements the admin UI for pipeline configurations:
- Pipeline config dialog with module selection
- Stage configuration (stage 1/2/3 prompts)
- Admin UI: two tabs (prompts + pipeline configs)
- CRUD operations for pipeline configs
- API integration: pipeline config endpoints

**Frontend:**
- components/PipelineConfigModal.jsx (new): dialog for pipeline configuration
  - Module selection with timeframes (7 modules)
  - Stage 1: multi-select for parallel prompts
  - Stage 2: synthesis prompt selection
  - Stage 3: optional (goals)
  - Validation (at least 1 module, at least 1 stage-1 prompt, stage 2 required)

- pages/AdminPromptsPage.jsx (extended): tab navigation
  - Tab 1: prompts (existing)
  - Tab 2: pipeline configurations (new)
  - List of all configs with status (active, default)
  - Actions: edit, delete, set as default
  - Icons: Star, Edit, Trash2

- utils/api.js (extended):
  - listPipelineConfigs, createPipelineConfig, updatePipelineConfig
  - deletePipelineConfig, setDefaultPipelineConfig
  - executePipeline, resetPromptToDefault

**Next steps:**
- Pipeline selection in AnalysisPage (user side)
- Mobile-responsive design

Issue #28 Progress: Frontend 2/3 (67%) | Design 0/3 | Testing 0/1

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-25 10:01:49 +01:00
053a9e18cf fix: use postgres container for psql commands
All checks were successful
Deploy Development / deploy (push) Successful in 44s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 13s
2026-03-25 09:54:44 +01:00
6f7303c0d5 fix: correct container name and DB credentials for dev environment
All checks were successful
Deploy Development / deploy (push) Successful in 44s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 13s
2026-03-25 09:52:26 +01:00
7f7edce62d chore: add pipeline system test scripts (Issue #28)
All checks were successful
Deploy Development / deploy (push) Successful in 44s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 14s
2026-03-25 09:47:58 +01:00
6627b5eee7 feat: Pipeline-System - Backend Infrastructure (Issue #28, Phase 1)
All checks were successful
Deploy Development / deploy (push) Successful in 43s
Build Test / lint-backend (push) Successful in 1s
Build Test / build-frontend (push) Successful in 13s
Implements configurable multi-stage analyses. Admins can create
multiple pipeline configurations with different modules, timeframes,
and prompts.

**Backend:**
- Migration 019: pipeline_configs table + ai_prompts extended
- Pipeline config models: PipelineConfigCreate, PipelineConfigUpdate
- Pipeline executor: refactored for config-based execution
- CRUD endpoints: /api/prompts/pipeline-configs (list, create, update, delete, set-default)
- Reset-to-default: /api/prompts/{id}/reset-to-default for system prompts

**Features:**
- 3 seed configs: "Alltags-Check" (default), "Schlaf & Erholung", "Wettkampf-Analyse"
- Dynamic placeholders: {{stage1_<slug>}} for all stage-1 results
- Backward-compatible: /api/insights/pipeline without config_id uses the default

**Files:**
- backend/migrations/019_pipeline_system.sql
- backend/models.py (PipelineConfigCreate, PipelineConfigUpdate)
- backend/routers/insights.py (analyze_pipeline refactored)
- backend/routers/prompts.py (pipeline config CRUD + reset-to-default)

**Next steps:**
- Frontend: pipeline config dialog + admin UI
- Design: mobile-responsive + icons

Issue #28 Progress: Backend 3/3 ✅ | Frontend 0/3 🔲 | Design 0/3 🔲

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-25 09:42:28 +01:00
5e7ef718e0 fix: placeholder picker improvements + insight display names (Issue #28)
All checks were successful
Deploy Development / deploy (push) Successful in 49s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 13s
Backend:
- get_placeholder_catalog(): grouped placeholders with descriptions
- Returns {category: [{key, description, example}]} format
- Categories: Profil, Körper, Ernährung, Training, Schlaf, Vitalwerte, Zeitraum

Frontend - Placeholder Picker:
- Grouped by category with visual separation
- Search/filter across keys and descriptions
- Hover effects for better UX
- Insert at cursor position (not at end)
- Shows: key + description + example value
- 'Keine Platzhalter gefunden' message when filtered

Frontend - Insight Display Names:
- InsightCard receives prompts array
- Finds matching prompt by scope/slug
- Shows prompt.display_name instead of hardcoded SLUG_LABELS
- History tab also shows display_name in group headers
- Fallback chain: display_name → SLUG_LABELS → scope

User-facing improvements:
✓ Placeholders show real data instead of numbers
✓ Searchable + filterable
✓ Insert at cursor position
✓ Insights show custom names (e.g. '🍽️ Meine Ernährung')

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-25 06:44:22 +01:00
0c4264de44 feat: display_name + placeholder picker for prompts (Issue #28)
All checks were successful
Deploy Development / deploy (push) Successful in 51s
Build Test / lint-backend (push) Successful in 1s
Build Test / build-frontend (push) Successful in 14s
Migration 018:
- Add display_name column to ai_prompts
- Migrate existing prompts from hardcoded SLUG_LABELS
- Fallback: name if display_name is NULL

Backend:
- PromptCreate/Update models with display_name field
- create/update/duplicate endpoints handle display_name
- Fallback: use name if display_name not provided

Frontend:
- PromptEditModal: display_name input field
- Placeholder picker: button + dropdown with all placeholders
- Shows example values, inserts {{placeholder}} on click
- Analysis.jsx: use display_name instead of SLUG_LABELS

User-facing changes:
- Prompts now show custom display names (e.g. '🍽️ Ernährung')
- Admin can edit display names instead of hardcoded labels
- Template editor has 'Platzhalter einfügen' button
- No more hardcoded SLUG_LABELS in frontend

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-25 06:31:25 +01:00
7a8a5aee98 fix: prompt editor layout - full-width inputs, left-aligned text (Issue #28)
All checks were successful
Deploy Development / deploy (push) Successful in 50s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 14s
- PromptEditModal: all inputs/textareas now full-width
- Labels positioned above fields (not inline)
- Text left-aligned (was right-aligned)
- Added resize:vertical for textareas
- Side-by-side comparison with word-wrap
- Follows app-wide form design pattern

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-24 20:53:15 +01:00
c8cf375399 feat: AI prompt flexibilization - Frontend complete (Issue #28, Part 2)
All checks were successful
Deploy Development / deploy (push) Successful in 49s
Build Test / lint-backend (push) Successful in 0s
Build Test / build-frontend (push) Successful in 13s
Frontend components:
- PromptEditModal.jsx: Full editor with preview, generator, optimizer
- PromptGenerator.jsx: AI-assisted prompt creation from goal description
- Extended api.js with 10 new prompt endpoints

Navigation:
- Added /admin/prompts route to App.jsx
- Added KI-Prompts section to AdminPanel with navigation button

Features complete:
✅ Admin can create/edit/delete/duplicate prompts
✅ Category filtering and reordering
✅ Preview prompts with real user data
✅ AI generates prompts from goal + example data
✅ AI analyzes and optimizes existing prompts
✅ Side-by-side comparison original vs optimized

Ready for testing: http://dev.mitai.jinkendo.de/admin/prompts

Issue #28 Phase 2 complete - 13-18h estimated, ~14h actual

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-24 15:35:55 +01:00
500de132b9 feat: AI prompt flexibilization - Backend & Admin UI (Issue #28, Part 1)
Backend complete:
- Migration 017: Add category column to ai_prompts
- placeholder_resolver.py: 20+ placeholders with resolver functions
- Extended routers/prompts.py with CRUD endpoints:
  * POST /api/prompts (create)
  * PUT /api/prompts/:id (update)
  * DELETE /api/prompts/:id (delete)
  * POST /api/prompts/:id/duplicate
  * PUT /api/prompts/reorder
  * POST /api/prompts/preview
  * GET /api/prompts/placeholders
  * POST /api/prompts/generate (AI-assisted generation)
  * POST /api/prompts/:id/optimize (AI analysis)
- Extended models.py with PromptCreate, PromptUpdate, PromptGenerateRequest

Frontend:
- AdminPromptsPage.jsx: Full CRUD UI with category filter, reordering

Meta-Features:
- AI generates prompts from goal description + example data
- AI analyzes and optimizes existing prompts

Next: PromptEditModal, PromptGenerator, api.js integration

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-24 15:32:25 +01:00
25 changed files with 5864 additions and 355 deletions

CLAUDE.md
View File

@@ -7,6 +7,26 @@
> | Coding rules | `.claude/rules/CODING_RULES.md` |
> | Lessons Learned | `.claude/rules/LESSONS_LEARNED.md` |
## Claude Code Responsibilities
**Issue management (Gitea):**
- ✅ Create new issues/feature requests in Gitea
- ✅ Maintain issue documentation in `docs/issues/`
- ✅ Tag issues with labels, priority, and effort estimate
- ✅ Update existing issues (status, description)
- ✅ Close issues when completed
- 🎯 Gitea: http://192.168.2.144:3000/Lars/mitai-jinkendo/issues
**Documentation:**
- Document code changes in CLAUDE.md
- Version updates with every feature/fix
- Update library files on larger changes
**Development:**
- All changes on the `develop` branch
- Production deploy only after explicit approval
- Follow the migration 001-999 pattern
## Project Overview
**Mitai Jinkendo** (身体 Jinkendo) is a self-hosted PWA for body tracking with AI analysis.
Part of the **Jinkendo** app family (人拳道). Domains: jinkendo.de / .com / .life
Teil der **Jinkendo**-App-Familie (人拳道). Domains: jinkendo.de / .com / .life
@@ -56,7 +76,7 @@ frontend/src/
└── technical/ # MEMBERSHIP_SYSTEM.md
```
## Current version: v9c (complete) 🚀 In production since 21.03.2026
## Current version: v9e (Issue #28, #47 complete) 🚀 Ready for production 26.03.2026
### Implemented ✅
- Login (email + bcrypt), auth middleware on all endpoints, rate limiting
@@ -188,6 +208,169 @@ frontend/src/
📚 Details: `.claude/docs/technical/MEMBERSHIP_SYSTEM.md` · `.claude/docs/architecture/FEATURE_ENFORCEMENT.md`
### Issue #28: Unified Prompt System ✅ (Completed 26.03.2026)
**AI prompt flexibilization - completely reworked:**
- ✅ **Unified Prompt System (4 phases):**
- **Phase 1:** DB migration - schema extended
- `ai_prompts` extended with `type`, `stages`, `output_format`, `output_schema`
- All prompts migrated to the 1-stage pipeline format
- Pipeline configs consolidated into `ai_prompts`
- **Phase 2:** Backend Executor
- `prompt_executor.py` - universal executor for base + pipeline
- Dynamic placeholder resolution (`{{stage_N_key}}`)
- JSON output validation
- Multi-stage parallel execution
- Reference (base prompts) + inline (templates) support
- **Phase 3:** Frontend UI Consolidation
- `UnifiedPromptModal` - one editor for both types
- `AdminPromptsPage` - tab switcher removed, type filter added
- Stage editor with add/remove/reorder
- Mobile-ready design
- **Phase 4:** Cleanup & Docs
- Deprecated components removed (PipelineConfigModal, PromptEditModal)
- Old endpoints kept for backward compatibility
**Features:**
- Unlimited dynamic stages (no more 3-stage limit)
- Multiple prompts per stage (parallel)
- Two prompt types: `base` (reusable) + `pipeline` (workflows)
- Inline templates or references to base prompts
- JSON output enforceable per prompt
- Cross-module correlations possible
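The stage mechanics described above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual `prompt_executor.py`: the function names, the `{{key}}` regex, and the sequential loop are assumptions inferred from the commit descriptions.

```python
import re

def resolve_placeholders(template, values):
    """Replace {{key}} markers; return resolved text plus unresolved keys."""
    unresolved = []

    def substitute(match):
        key = match.group(1)
        if key in values:
            return str(values[key])
        unresolved.append(key)
        return match.group(0)  # leave unknown placeholders untouched

    return re.sub(r"\{\{(\w+)\}\}", substitute, template), unresolved

def run_pipeline(stages, base_values, call_ai):
    """Run stages in order; each output becomes {{stage_N_outputkey}}."""
    values = dict(base_values)
    for stage in stages:
        for prompt in stage["prompts"]:
            text, _ = resolve_placeholders(prompt["template"], values)
            output = call_ai(text)
            values[f"stage_{stage['stage']}_{prompt['output_key']}"] = output
    return values
```

A stage-2 template can then reference stage-1 results via `{{stage_1_body}}`, which is how later stages consume earlier outputs.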
**Debug & Development Tools (26.03.2026):**
- ✅ **Comprehensive Debug System:**
- Test button in the prompt editor with debug mode
- Shows resolved/unresolved placeholders
- Displays final prompts sent to AI
- Per-stage debug info for pipelines
- Export debug data as JSON
- ✅ **Placeholder Export (per Test):**
- Button in the debug viewer
- Exports all placeholders with values per execution
- ✅ **Global Placeholder Export:**
- Settings → "📊 Platzhalter exportieren"
- All 32 placeholders with current values
- Organized by category
- Includes metadata (description, example)
- ✅ **Batch Import/Export:**
- Admin → "📦 Alle exportieren" (all prompts as JSON)
- Admin → "📥 Importieren" (upload JSON, create/update)
- Dev→Prod sync in 2 clicks
**Placeholder System Enhancements:**
- ✅ **6 New Placeholder Functions:**
- `{{sleep_avg_duration}}` - Average sleep duration (7d)
- `{{sleep_avg_quality}}` - Deep+REM percentage (7d)
- `{{rest_days_count}}` - Rest days count (30d)
- `{{vitals_avg_hr}}` - Average resting heart rate (7d)
- `{{vitals_avg_hrv}}` - Average HRV (7d)
- `{{vitals_vo2_max}}` - Latest VO2 Max value
- ✅ **7 Reconstructed Placeholders:**
- `{{caliper_summary}}`, `{{circ_summary}}`
- `{{goal_weight}}`, `{{goal_bf_pct}}`
- `{{nutrition_days}}`
- `{{protein_ziel_low}}`, `{{protein_ziel_high}}`
- `{{activity_detail}}`
- **Total: 32 active placeholders** across 6 categories
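As a rough illustration of what such resolver functions might look like, here is a hedged sketch in the style of `PLACEHOLDER_MAP` from placeholder_resolver.py; the data shapes, function signatures, and registry layout are assumptions, since the actual module is not shown in this log.

```python
from statistics import mean

def sleep_avg_duration(entries, days=7):
    """Average sleep duration in hours over the last `days` entries."""
    recent = entries[-days:]
    return round(mean(e["duration_h"] for e in recent), 1) if recent else None

def vitals_avg_hr(entries, days=7):
    """Average resting heart rate over the last `days` entries."""
    recent = entries[-days:]
    return round(mean(e["resting_hr"] for e in recent)) if recent else None

# key -> (category, resolver); the catalog endpoint can be generated from this
PLACEHOLDER_MAP = {
    "sleep_avg_duration": ("Schlaf", sleep_avg_duration),
    "vitals_avg_hr": ("Vitalwerte", vitals_avg_hr),
}
```

Because each module registers its own entries in the map, the frontend catalog stays backend-driven and needs no manual updates when placeholders are added.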
**Bug Fixes (26.03.2026):**
- ✅ **PIPELINE_MASTER Response:** Analysis page now uses unified executor
- Fixed: Old `/insights/run` endpoint sent raw template to AI
- Now: `/prompts/execute` with proper stage processing
- ✅ **Age Calculation:** Handle PostgreSQL DATE objects
- Fixed: `calculate_age()` expected string, got date object
- Now: Handles both strings and date objects
- ✅ **Sleep Quality 0%:** Lowercase stage names
- Fixed: Checked ['Deep', 'REM'], but stored as ['deep', 'rem']
- Now: Correct case-sensitive matching
- ✅ **SQL Column Name Errors:**
  - Fixed: `bf_jpl` → `body_fat_pct`
  - Fixed: `brust` → `c_chest`, etc.
  - Fixed: `protein` → `protein_g`
- ✅ **Decimal × Float Type Error:**
- Fixed: `protein_ziel_low/high` calculations
- Now: Convert Decimal to float before multiplication
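The date-object and Decimal fixes boil down to two small coercions. A minimal sketch; the function names and the protein multipliers (1.6-2.2 g/kg) are illustrative assumptions, not the project's actual code:

```python
from datetime import date
from decimal import Decimal

def calculate_age(birthdate, today=None):
    """Accept both ISO date strings and PostgreSQL DATE objects."""
    if isinstance(birthdate, str):
        birthdate = date.fromisoformat(birthdate)
    today = today or date.today()
    # subtract one if the birthday has not yet occurred this year
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )

def protein_targets(weight_kg):
    """Convert Decimal (as returned by the DB driver) to float first,
    since Decimal * float raises TypeError."""
    w = float(weight_kg)
    return round(w * 1.6, 1), round(w * 2.2, 1)
```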
**Migrations:**
- Migration 020: Unified Prompt System Schema
**Backend Endpoints:**
- `POST /api/prompts/execute` - Universal executor (with save=true param)
- `POST /api/prompts/unified` - Create unified prompt
- `PUT /api/prompts/unified/{id}` - Update unified prompt
- `GET /api/prompts/export-all` - Export all prompts as JSON
- `POST /api/prompts/import` - Import prompts from JSON (with overwrite option)
- `GET /api/prompts/placeholders/export-values` - Export all placeholder values
**UI:**
- Admin → KI-Prompts: type filter (Alle/Basis/Pipeline)
- New prompt editor with dynamic stage builder
- Inline editing of stages + prompts
- Test button with debug viewer (always visible)
- Export/import buttons (📦 Alle exportieren, 📥 Importieren)
- Settings → 📊 Platzhalter exportieren
📚 Details: `.claude/docs/functional/AI_PROMPTS.md`
**Related Gitea Issues:**
- #28: Unified Prompt System - ✅ CLOSED (26.03.2026)
- #43: Enhanced Debug UI - 🔲 OPEN (Future enhancement)
- #44: BUG - Analysen löschen - 🔲 OPEN (High priority)
- #45: KI Prompt-Optimierer - 🔲 OPEN (Future feature)
- #46: KI Prompt-Ersteller - 🔲 OPEN (Future feature)
- #47: Value Table - ✅ CLOSED (26.03.2026)
### Issue #47: Comprehensive Value Table ✅ (Completed 26.03.2026)
**AI analysis transparency - full placeholder display:**
- ✅ **Metadata Collection System:**
- All placeholders used during execution are collected with their values
- Full (untruncated) values from placeholder_resolver
- Categorization by module (Profil, Körper, Ernährung, Training, etc.)
- Stored in ai_insights.metadata (JSONB)
- ✅ **Expert Mode:**
- Toggle button "🔬 Experten-Modus" on the Analysis page
- Normal mode: shows only relevant, populated values
- Expert mode: shows all values incl. raw data and stage outputs
- ✅ **Stage Output Extraction:**
- Base prompts with JSON output: individual values extracted
- Each field from the stage JSON gets its own row
- Visual marker: ↳ for extracted values
- Source tracking: which stage, which output
- ✅ **Category Grouping:**
- Grouping by category (PROFIL, KÖRPER, ERNÄHRUNG, etc.)
- Stage outputs as their own categories ("Stage 1 - Body")
- Raw data section (expert mode only)
- Sort order: regular values → stage outputs → raw data
- ✅ **Value Table Features:**
- Three columns: placeholder | value | description
- No truncation of long values
- Category headers with gray background
- Empty/unavailable values hidden (normal mode)
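The grouping, filtering, and sort rules for the value table could look roughly like this. The metadata shape (a `kind` field distinguishing regular values, stage outputs, and raw data) is a hypothetical assumption for illustration; the stored `ai_insights.metadata` structure may differ.

```python
def group_rows(metadata, expert=False):
    """Build (category, key, value, description) rows for the value table.

    Normal mode: only populated regular placeholders.
    Expert mode: everything, ordered regular -> stage outputs -> raw data.
    """
    order = {"regular": 0, "stage": 1, "raw": 2}
    rows = [
        (meta["category"], key, meta["value"], meta.get("description", ""))
        for key, meta in metadata.items()
        if expert or (meta["kind"] == "regular" and meta["value"] not in (None, ""))
    ]
    # sort by kind first, then category header, then placeholder key
    rows.sort(key=lambda r: (order[metadata[r[1]]["kind"]], r[0], r[1]))
    return rows
```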
**Migrations:**
- Migration 021: ai_insights.metadata JSONB column
**Backend Endpoints:**
- `POST /api/prompts/execute` - extended with metadata collection
- `GET /api/insights/placeholders/catalog` - placeholder categories
**UI:**
- Analysis page: value table with category grouping
- Expert mode toggle (🔬 icon)
- Collapsible JSON for raw data
- Delete button for insights (🗑️)
📚 Details: `.claude/docs/functional/AI_PROMPTS.md`
## Feature Roadmap
> 📋 **Detailed roadmap:** `.claude/docs/ROADMAP.md` (phases 0-3, timeline, dependencies)

View File

@@ -0,0 +1,22 @@
-- Migration 017: AI Prompts Flexibilisierung (Issue #28)
-- Add category column to ai_prompts for better organization and filtering
-- Add category column
ALTER TABLE ai_prompts ADD COLUMN IF NOT EXISTS category VARCHAR(20) DEFAULT 'ganzheitlich';
-- Create index for category filtering
CREATE INDEX IF NOT EXISTS idx_ai_prompts_category ON ai_prompts(category);
-- Add comment
COMMENT ON COLUMN ai_prompts.category IS 'Prompt category: körper, ernährung, training, schlaf, vitalwerte, ziele, ganzheitlich';
-- Update existing prompts with appropriate categories
-- Based on slug patterns and content
UPDATE ai_prompts SET category = 'körper' WHERE slug IN ('koerperkomposition', 'gewichtstrend', 'umfaenge', 'caliper');
UPDATE ai_prompts SET category = 'ernährung' WHERE slug IN ('ernaehrung', 'kalorienbilanz', 'protein', 'makros');
UPDATE ai_prompts SET category = 'training' WHERE slug IN ('aktivitaet', 'trainingsanalyse', 'erholung', 'leistung');
UPDATE ai_prompts SET category = 'schlaf' WHERE slug LIKE '%schlaf%';
UPDATE ai_prompts SET category = 'vitalwerte' WHERE slug IN ('vitalwerte', 'herzfrequenz', 'ruhepuls', 'hrv');
UPDATE ai_prompts SET category = 'ziele' WHERE slug LIKE '%ziel%' OR slug LIKE '%goal%';
-- Pipeline prompts remain 'ganzheitlich' (default)

View File

@@ -0,0 +1,20 @@
-- Migration 018: Add display_name to ai_prompts for user-facing labels
ALTER TABLE ai_prompts ADD COLUMN IF NOT EXISTS display_name VARCHAR(100);
-- Migrate existing prompts from hardcoded SLUG_LABELS
UPDATE ai_prompts SET display_name = '🔍 Gesamtanalyse' WHERE slug = 'gesamt' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🫧 Körperkomposition' WHERE slug = 'koerper' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🍽️ Ernährung' WHERE slug = 'ernaehrung' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🏋️ Aktivität' WHERE slug = 'aktivitaet' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '❤️ Gesundheitsindikatoren' WHERE slug = 'gesundheit' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🎯 Zielfortschritt' WHERE slug = 'ziele' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🔬 Mehrstufige Gesamtanalyse' WHERE slug = 'pipeline' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🔬 Pipeline: Körper-Analyse (JSON)' WHERE slug = 'pipeline_body' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🔬 Pipeline: Ernährungs-Analyse (JSON)' WHERE slug = 'pipeline_nutrition' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🔬 Pipeline: Aktivitäts-Analyse (JSON)' WHERE slug = 'pipeline_activity' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🔬 Pipeline: Synthese' WHERE slug = 'pipeline_synthesis' AND display_name IS NULL;
UPDATE ai_prompts SET display_name = '🔬 Pipeline: Zielabgleich' WHERE slug = 'pipeline_goals' AND display_name IS NULL;
-- Fallback: use name as display_name if still NULL
UPDATE ai_prompts SET display_name = name WHERE display_name IS NULL;

View File

@@ -0,0 +1,157 @@
-- Migration 019: Pipeline system - configurable multi-stage analyses
-- Enables admin management of pipeline configurations (Issue #28)
-- Created: 2026-03-25
-- ========================================
-- 1. Extend ai_prompts for the reset feature
-- ========================================
ALTER TABLE ai_prompts
ADD COLUMN IF NOT EXISTS is_system_default BOOLEAN DEFAULT FALSE,
ADD COLUMN IF NOT EXISTS default_template TEXT;
COMMENT ON COLUMN ai_prompts.is_system_default IS 'true = system prompt with reset function';
COMMENT ON COLUMN ai_prompts.default_template IS 'Original template for reset-to-default';
-- Mark existing pipeline prompts as system defaults
UPDATE ai_prompts
SET
is_system_default = true,
default_template = template
WHERE slug LIKE 'pipeline_%';
-- ========================================
-- 2. Create pipeline_configs table
-- ========================================
CREATE TABLE IF NOT EXISTS pipeline_configs (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
name VARCHAR(255) NOT NULL UNIQUE,
description TEXT,
is_default BOOLEAN DEFAULT FALSE,
active BOOLEAN DEFAULT TRUE,
-- Module configuration: which data sources to include
modules JSONB NOT NULL DEFAULT '{}'::jsonb,
-- Example: {"körper": true, "ernährung": true, "training": true, "schlaf": false}
-- Timeframes per module (days)
timeframes JSONB NOT NULL DEFAULT '{}'::jsonb,
-- Example: {"körper": 30, "ernährung": 30, "training": 14}
-- Stage 1 prompts (parallel execution)
stage1_prompts TEXT[] NOT NULL DEFAULT ARRAY[]::TEXT[],
-- Example: ARRAY['pipeline_body', 'pipeline_nutrition', 'pipeline_activity']
-- Stage 2 prompt (synthesis)
stage2_prompt VARCHAR(100) NOT NULL,
-- Example: 'pipeline_synthesis'
-- Stage 3 prompt (optional, e.g., goals)
stage3_prompt VARCHAR(100),
-- Example: 'pipeline_goals'
created TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
-- ========================================
-- 3. Create indexes
-- ========================================
CREATE INDEX IF NOT EXISTS idx_pipeline_configs_default ON pipeline_configs(is_default) WHERE is_default = true;
CREATE INDEX IF NOT EXISTS idx_pipeline_configs_active ON pipeline_configs(active);
-- ========================================
-- 4. Seed: default pipeline "Alltags-Check"
-- ========================================
INSERT INTO pipeline_configs (
name,
description,
is_default,
modules,
timeframes,
stage1_prompts,
stage2_prompt,
stage3_prompt
) VALUES (
'Alltags-Check',
'Standard-Analyse: Körper, Ernährung, Training über die letzten 2-4 Wochen',
true,
'{"körper": true, "ernährung": true, "training": true, "schlaf": false, "vitalwerte": false, "mentales": false, "ziele": false}'::jsonb,
'{"körper": 30, "ernährung": 30, "training": 14}'::jsonb,
ARRAY['pipeline_body', 'pipeline_nutrition', 'pipeline_activity'],
'pipeline_synthesis',
'pipeline_goals'
) ON CONFLICT (name) DO NOTHING;
-- ========================================
-- 5. Seed: additional pipelines (optional)
-- ========================================
-- Sleep-focus pipeline (if sleep prompts exist)
INSERT INTO pipeline_configs (
name,
description,
is_default,
modules,
timeframes,
stage1_prompts,
stage2_prompt,
stage3_prompt
) VALUES (
'Schlaf & Erholung',
'Analyse von Schlaf, Vitalwerten und Erholungsstatus',
false,
'{"schlaf": true, "vitalwerte": true, "training": true, "körper": false, "ernährung": false, "mentales": false, "ziele": false}'::jsonb,
'{"schlaf": 14, "vitalwerte": 7, "training": 14}'::jsonb,
ARRAY['pipeline_sleep', 'pipeline_vitals', 'pipeline_activity'],
'pipeline_synthesis',
NULL
) ON CONFLICT (name) DO NOTHING;
-- "Wettkampf-Analyse": competition prep (long-term trend)
INSERT INTO pipeline_configs (
name,
description,
is_default,
modules,
timeframes,
stage1_prompts,
stage2_prompt,
stage3_prompt
) VALUES (
'Wettkampf-Analyse',
'Langfristige Analyse für Wettkampfvorbereitung (90 Tage)',
false,
'{"körper": true, "training": true, "vitalwerte": true, "ernährung": true, "schlaf": false, "mentales": false, "ziele": true}'::jsonb,
'{"körper": 90, "training": 90, "vitalwerte": 30, "ernährung": 60}'::jsonb,
ARRAY['pipeline_body', 'pipeline_activity', 'pipeline_vitals', 'pipeline_nutrition'],
'pipeline_synthesis',
'pipeline_goals'
) ON CONFLICT (name) DO NOTHING;
-- ========================================
-- 6. Trigger for the updated timestamp
-- ========================================
DROP TRIGGER IF EXISTS trigger_pipeline_configs_updated ON pipeline_configs;
CREATE TRIGGER trigger_pipeline_configs_updated
BEFORE UPDATE ON pipeline_configs
FOR EACH ROW
EXECUTE FUNCTION update_updated_timestamp();
-- ========================================
-- 7. Constraints & Validation
-- ========================================
-- Only one default config allowed (enforced via partial unique index)
CREATE UNIQUE INDEX IF NOT EXISTS idx_pipeline_configs_single_default
ON pipeline_configs(is_default)
WHERE is_default = true;
-- ========================================
-- 8. Comments (Documentation)
-- ========================================
COMMENT ON TABLE pipeline_configs IS 'v9f Issue #28: Configurable pipeline analyses. Admins can create multiple pipeline configs with different modules and timeframes.';
COMMENT ON COLUMN pipeline_configs.modules IS 'JSONB: which modules are active (boolean flags)';
COMMENT ON COLUMN pipeline_configs.timeframes IS 'JSONB: timeframes per module in days';
COMMENT ON COLUMN pipeline_configs.stage1_prompts IS 'Array of slug values for parallel stage-1 prompts';
COMMENT ON COLUMN pipeline_configs.stage2_prompt IS 'Slug of the synthesis prompt (combines stage-1 results)';
COMMENT ON COLUMN pipeline_configs.stage3_prompt IS 'Optional slug for a stage-3 prompt (e.g. goal check)';

View File

@@ -0,0 +1,128 @@
-- Migration 020: Unified Prompt System (Issue #28)
-- Consolidate ai_prompts and pipeline_configs into single system
-- Type: 'base' (reusable building blocks) or 'pipeline' (workflows)
-- Step 1: Add new columns to ai_prompts and make template nullable
ALTER TABLE ai_prompts
ADD COLUMN IF NOT EXISTS type VARCHAR(20) DEFAULT 'pipeline',
ADD COLUMN IF NOT EXISTS stages JSONB,
ADD COLUMN IF NOT EXISTS output_format VARCHAR(10) DEFAULT 'text',
ADD COLUMN IF NOT EXISTS output_schema JSONB;
-- Make template nullable (pipeline-type prompts use stages instead)
ALTER TABLE ai_prompts
ALTER COLUMN template DROP NOT NULL;
-- Step 2: Migrate existing single-prompts to 1-stage pipeline format
-- All existing prompts become single-stage pipelines with inline source
UPDATE ai_prompts
SET
type = 'pipeline',
stages = jsonb_build_array(
jsonb_build_object(
'stage', 1,
'prompts', jsonb_build_array(
jsonb_build_object(
'source', 'inline',
'template', template,
'output_key', REPLACE(slug, 'pipeline_', ''),
'output_format', 'text'
)
)
)
),
output_format = 'text'
WHERE stages IS NULL;
-- Step 3: Migrate pipeline_configs into ai_prompts as multi-stage pipelines
-- Each pipeline_config becomes a pipeline-type prompt with multiple stages
INSERT INTO ai_prompts (
slug,
name,
description,
type,
stages,
output_format,
active,
is_system_default,
category
)
SELECT
'pipeline_config_' || LOWER(REPLACE(pc.name, ' ', '_')) || '_' || SUBSTRING(pc.id::TEXT FROM 1 FOR 8) as slug,
pc.name,
pc.description,
'pipeline' as type,
-- Build stages JSONB: combine stage1_prompts, stage2_prompt, stage3_prompt
(
-- Stage 1: Convert array to prompts
SELECT jsonb_agg(stage_obj ORDER BY stage_num)
FROM (
SELECT 1 as stage_num,
jsonb_build_object(
'stage', 1,
'prompts', (
SELECT jsonb_agg(
jsonb_build_object(
'source', 'reference',
'slug', s1.slug_val,
'output_key', REPLACE(s1.slug_val, 'pipeline_', 'stage1_'),
'output_format', 'json'
)
)
FROM UNNEST(pc.stage1_prompts) AS s1(slug_val)
)
) as stage_obj
WHERE array_length(pc.stage1_prompts, 1) > 0
UNION ALL
SELECT 2 as stage_num,
jsonb_build_object(
'stage', 2,
'prompts', jsonb_build_array(
jsonb_build_object(
'source', 'reference',
'slug', pc.stage2_prompt,
'output_key', 'synthesis',
'output_format', 'text'
)
)
) as stage_obj
WHERE pc.stage2_prompt IS NOT NULL
UNION ALL
SELECT 3 as stage_num,
jsonb_build_object(
'stage', 3,
'prompts', jsonb_build_array(
jsonb_build_object(
'source', 'reference',
'slug', pc.stage3_prompt,
'output_key', 'goals',
'output_format', 'text'
)
)
) as stage_obj
WHERE pc.stage3_prompt IS NOT NULL
) all_stages
) as stages,
'text' as output_format,
pc.active,
pc.is_default as is_system_default,
'pipeline' as category
FROM pipeline_configs pc;
-- Step 4: Add indices for performance
CREATE INDEX IF NOT EXISTS idx_ai_prompts_type ON ai_prompts(type);
CREATE INDEX IF NOT EXISTS idx_ai_prompts_stages ON ai_prompts USING GIN (stages);
-- Step 5: Add comment explaining stages structure
COMMENT ON COLUMN ai_prompts.stages IS 'JSONB array of stages, each with prompts array. Structure: [{"stage":1,"prompts":[{"source":"reference|inline","slug":"...","template":"...","output_key":"key","output_format":"text|json"}]}]';
-- Step 6: Backup pipeline_configs before eventual deletion
CREATE TABLE IF NOT EXISTS pipeline_configs_backup_pre_020 AS
SELECT * FROM pipeline_configs;
-- Note: We keep pipeline_configs table for now during transition period
-- It can be dropped in a later migration once all code is migrated
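The stages JSONB shape produced by Steps 2 and 3 above can be sketched in plain Python (slugs are illustrative; this is a hedged sketch of the documented structure, not the authoritative schema):

```python
import json

# Illustrative stages structure matching the documented shape:
# [{"stage": 1, "prompts": [{"source": "reference|inline", ...}]}]
stages = [
    {"stage": 1, "prompts": [
        {"source": "reference", "slug": "pipeline_body",
         "output_key": "stage1_body", "output_format": "json"},
        {"source": "reference", "slug": "pipeline_nutrition",
         "output_key": "stage1_nutrition", "output_format": "json"},
    ]},
    {"stage": 2, "prompts": [
        {"source": "reference", "slug": "pipeline_synthesis",
         "output_key": "synthesis", "output_format": "text"},
    ]},
]

# Serialize exactly as it would be stored in the JSONB column
print(json.dumps(stages, ensure_ascii=False, indent=2))
```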


@ -0,0 +1,7 @@
-- Migration 021: Add metadata column to ai_insights for storing debug info
-- Date: 2026-03-26
-- Purpose: Store resolved placeholder values with descriptions for transparency
ALTER TABLE ai_insights ADD COLUMN IF NOT EXISTS metadata JSONB DEFAULT NULL;
COMMENT ON COLUMN ai_insights.metadata IS 'Debug info: resolved placeholders, descriptions, etc.';


@ -127,3 +127,116 @@ class AdminProfileUpdate(BaseModel):
ai_enabled: Optional[int] = None
ai_limit_day: Optional[int] = None
export_enabled: Optional[int] = None
# ── Prompt Models (Issue #28) ────────────────────────────────────────────────
class PromptCreate(BaseModel):
name: str
slug: str
display_name: Optional[str] = None
description: Optional[str] = None
template: str
category: str = 'ganzheitlich'
active: bool = True
sort_order: int = 0
class PromptUpdate(BaseModel):
name: Optional[str] = None
display_name: Optional[str] = None
description: Optional[str] = None
template: Optional[str] = None
category: Optional[str] = None
active: Optional[bool] = None
sort_order: Optional[int] = None
class PromptGenerateRequest(BaseModel):
goal: str
data_categories: list[str]
example_output: Optional[str] = None
# ── Unified Prompt System Models (Issue #28 Phase 2) ───────────────────────
class StagePromptCreate(BaseModel):
"""Single prompt within a stage"""
source: str # 'inline' or 'reference'
slug: Optional[str] = None # Required if source='reference'
template: Optional[str] = None # Required if source='inline'
output_key: str # Key for storing result (e.g., 'nutrition', 'stage1_body')
output_format: str = 'text' # 'text' or 'json'
output_schema: Optional[dict] = None # JSON schema if output_format='json'
class StageCreate(BaseModel):
"""Single stage with multiple prompts"""
stage: int # Stage number (1, 2, 3, ...)
prompts: list[StagePromptCreate]
class UnifiedPromptCreate(BaseModel):
"""Create a new unified prompt (base or pipeline type)"""
name: str
slug: str
display_name: Optional[str] = None
description: Optional[str] = None
type: str # 'base' or 'pipeline'
category: str = 'ganzheitlich'
active: bool = True
sort_order: int = 0
# For base prompts (single reusable template)
template: Optional[str] = None # Required if type='base'
output_format: str = 'text'
output_schema: Optional[dict] = None
# For pipeline prompts (multi-stage workflow)
stages: Optional[list[StageCreate]] = None # Required if type='pipeline'
class UnifiedPromptUpdate(BaseModel):
"""Update an existing unified prompt"""
name: Optional[str] = None
display_name: Optional[str] = None
description: Optional[str] = None
type: Optional[str] = None
category: Optional[str] = None
active: Optional[bool] = None
sort_order: Optional[int] = None
template: Optional[str] = None
output_format: Optional[str] = None
output_schema: Optional[dict] = None
stages: Optional[list[StageCreate]] = None
# ── Pipeline Config Models (Issue #28) ─────────────────────────────────────
# NOTE: These will be deprecated in favor of UnifiedPrompt models above
class PipelineConfigCreate(BaseModel):
name: str
description: Optional[str] = None
is_default: bool = False
active: bool = True
modules: dict # {"körper": true, "ernährung": true, ...}
timeframes: dict # {"körper": 30, "ernährung": 30, ...}
stage1_prompts: list[str] # Array of slugs
stage2_prompt: str # slug
stage3_prompt: Optional[str] = None # slug
class PipelineConfigUpdate(BaseModel):
name: Optional[str] = None
description: Optional[str] = None
is_default: Optional[bool] = None
active: Optional[bool] = None
modules: Optional[dict] = None
timeframes: Optional[dict] = None
stage1_prompts: Optional[list[str]] = None
stage2_prompt: Optional[str] = None
stage3_prompt: Optional[str] = None
class PipelineExecuteRequest(BaseModel):
config_id: Optional[str] = None # None = use default config
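A create request matching the UnifiedPromptCreate model above (pipeline type) might look like the following; the slug and templates are hypothetical and this sketches only the expected shape of the payload:

```python
import json

# Hypothetical payload for a pipeline-type UnifiedPromptCreate
payload = {
    "name": "Ganzheitliche Analyse",
    "slug": "pipeline_holistic_v2",   # illustrative slug, not from the source
    "type": "pipeline",
    "category": "ganzheitlich",
    "active": True,
    "stages": [
        {"stage": 1, "prompts": [
            {"source": "reference", "slug": "pipeline_body",
             "output_key": "stage1_body", "output_format": "json"},
        ]},
        {"stage": 2, "prompts": [
            # Inline prompts carry their template directly instead of a slug
            {"source": "inline", "template": "Fasse zusammen: {{stage1_body}}",
             "output_key": "synthesis", "output_format": "text"},
        ]},
    ],
}

print(json.dumps(payload, ensure_ascii=False)[:80])
```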


@ -0,0 +1,715 @@
"""
Placeholder Resolver for AI Prompts
Provides a registry of placeholder functions that resolve to actual user data.
Used for prompt templates and preview functionality.
"""
import re
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Callable
from db import get_db, get_cursor, r2d
# ── Helper Functions ──────────────────────────────────────────────────────────
def get_profile_data(profile_id: str) -> Dict:
"""Load profile data for a user."""
with get_db() as conn:
cur = get_cursor(conn)
cur.execute("SELECT * FROM profiles WHERE id=%s", (profile_id,))
return r2d(cur.fetchone()) if cur.rowcount > 0 else {}
def get_latest_weight(profile_id: str) -> str:
"""Get latest weight entry."""
with get_db() as conn:
cur = get_cursor(conn)
cur.execute(
"SELECT weight FROM weight_log WHERE profile_id=%s ORDER BY date DESC LIMIT 1",
(profile_id,)
)
row = cur.fetchone()
return f"{row['weight']:.1f} kg" if row else "nicht verfügbar"
def get_weight_trend(profile_id: str, days: int = 28) -> str:
"""Calculate weight trend description."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT weight, date FROM weight_log
WHERE profile_id=%s AND date >= %s
ORDER BY date""",
(profile_id, cutoff)
)
rows = [r2d(r) for r in cur.fetchall()]
if len(rows) < 2:
return "nicht genug Daten"
first = rows[0]['weight']
last = rows[-1]['weight']
delta = last - first
if abs(delta) < 0.3:
return "stabil"
elif delta > 0:
return f"steigend (+{delta:.1f} kg in {days} Tagen)"
else:
return f"sinkend ({delta:.1f} kg in {days} Tagen)"
def get_latest_bf(profile_id: str) -> str:
"""Get latest body fat percentage from caliper."""
with get_db() as conn:
cur = get_cursor(conn)
cur.execute(
"""SELECT body_fat_pct FROM caliper_log
WHERE profile_id=%s AND body_fat_pct IS NOT NULL
ORDER BY date DESC LIMIT 1""",
(profile_id,)
)
row = cur.fetchone()
return f"{row['body_fat_pct']:.1f}%" if row else "nicht verfügbar"
def get_nutrition_avg(profile_id: str, field: str, days: int = 30) -> str:
"""Calculate average nutrition value."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
# Map field names to actual column names
field_map = {
'protein': 'protein_g',
'fat': 'fat_g',
'carb': 'carbs_g',
'kcal': 'kcal'
}
# Whitelist lookup only - never interpolate an unmapped field name into SQL
db_field = field_map.get(field)
if not db_field:
return "nicht verfügbar"
cur.execute(
f"""SELECT AVG({db_field}) as avg FROM nutrition_log
WHERE profile_id=%s AND date >= %s AND {db_field} IS NOT NULL""",
(profile_id, cutoff)
)
row = cur.fetchone()
if row and row['avg']:
if field == 'kcal':
return f"{int(row['avg'])} kcal/Tag (Ø {days} Tage)"
else:
return f"{int(row['avg'])}g/Tag (Ø {days} Tage)"
return "nicht verfügbar"
def get_caliper_summary(profile_id: str) -> str:
"""Get latest caliper measurements summary."""
with get_db() as conn:
cur = get_cursor(conn)
cur.execute(
"""SELECT body_fat_pct, sf_method, date FROM caliper_log
WHERE profile_id=%s AND body_fat_pct IS NOT NULL
ORDER BY date DESC LIMIT 1""",
(profile_id,)
)
row = r2d(cur.fetchone()) if cur.rowcount > 0 else None
if not row:
return "keine Caliper-Messungen"
method = row.get('sf_method', 'unbekannt')
return f"{row['body_fat_pct']:.1f}% ({method} am {row['date']})"
def get_circ_summary(profile_id: str) -> str:
"""Get latest circumference measurements summary with age annotations.
For each measurement point, fetches the most recent value (even if from different dates).
Annotates each value with measurement age for AI context.
"""
with get_db() as conn:
cur = get_cursor(conn)
# Define all circumference points with their labels
fields = [
('c_neck', 'Nacken'),
('c_chest', 'Brust'),
('c_waist', 'Taille'),
('c_belly', 'Bauch'),
('c_hip', 'Hüfte'),
('c_thigh', 'Oberschenkel'),
('c_calf', 'Wade'),
('c_arm', 'Arm')
]
parts = []
today = datetime.now().date()
# Get latest value for each field individually
for field_name, label in fields:
cur.execute(
f"""SELECT {field_name}, date,
CURRENT_DATE - date AS age_days
FROM circumference_log
WHERE profile_id=%s AND {field_name} IS NOT NULL
ORDER BY date DESC LIMIT 1""",
(profile_id,)
)
row = r2d(cur.fetchone()) if cur.rowcount > 0 else None
if row:
value = row[field_name]
age_days = row['age_days']
# Format age annotation
if age_days == 0:
age_str = "heute"
elif age_days == 1:
age_str = "gestern"
elif age_days <= 7:
age_str = f"vor {age_days} Tagen"
elif age_days <= 30:
weeks = age_days // 7
age_str = f"vor {weeks} Woche{'n' if weeks > 1 else ''}"
else:
months = age_days // 30
age_str = f"vor {months} Monat{'en' if months > 1 else ''}"
parts.append(f"{label} {value:.1f}cm ({age_str})")
return ', '.join(parts) if parts else "keine Umfangsmessungen"
def get_goal_weight(profile_id: str) -> str:
"""Get goal weight from profile."""
profile = get_profile_data(profile_id)
goal = profile.get('goal_weight')
return f"{goal:.1f}" if goal else "nicht gesetzt"
def get_goal_bf_pct(profile_id: str) -> str:
"""Get goal body fat percentage from profile."""
profile = get_profile_data(profile_id)
goal = profile.get('goal_bf_pct')
return f"{goal:.1f}" if goal else "nicht gesetzt"
def get_nutrition_days(profile_id: str, days: int = 30) -> str:
"""Get number of days with nutrition data."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT COUNT(DISTINCT date) as days FROM nutrition_log
WHERE profile_id=%s AND date >= %s""",
(profile_id, cutoff)
)
row = cur.fetchone()
return str(row['days']) if row else "0"
def get_protein_ziel_low(profile_id: str) -> str:
"""Calculate lower protein target based on current weight (1.6g/kg)."""
with get_db() as conn:
cur = get_cursor(conn)
cur.execute(
"""SELECT weight FROM weight_log
WHERE profile_id=%s ORDER BY date DESC LIMIT 1""",
(profile_id,)
)
row = cur.fetchone()
if row:
return f"{int(float(row['weight']) * 1.6)}"
return "nicht verfügbar"
def get_protein_ziel_high(profile_id: str) -> str:
"""Calculate upper protein target based on current weight (2.2g/kg)."""
with get_db() as conn:
cur = get_cursor(conn)
cur.execute(
"""SELECT weight FROM weight_log
WHERE profile_id=%s ORDER BY date DESC LIMIT 1""",
(profile_id,)
)
row = cur.fetchone()
if row:
return f"{int(float(row['weight']) * 2.2)}"
return "nicht verfügbar"
def get_activity_summary(profile_id: str, days: int = 14) -> str:
"""Get activity summary for recent period."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT COUNT(*) as count,
SUM(duration_min) as total_min,
SUM(kcal_active) as total_kcal
FROM activity_log
WHERE profile_id=%s AND date >= %s""",
(profile_id, cutoff)
)
row = r2d(cur.fetchone())
if row['count'] == 0:
return f"Keine Aktivitäten in den letzten {days} Tagen"
avg_min = int(row['total_min'] / row['count']) if row['total_min'] else 0
return f"{row['count']} Einheiten in {days} Tagen (Ø {avg_min} min/Einheit, {int(row['total_kcal'] or 0)} kcal gesamt)"
def calculate_age(dob) -> str:
"""Calculate age from date of birth (accepts date object or string)."""
if not dob:
return "unbekannt"
try:
# Handle both datetime.date objects and strings
if isinstance(dob, str):
birth = datetime.strptime(dob, '%Y-%m-%d').date()
else:
birth = dob # Already a date object from PostgreSQL
today = datetime.now().date()
age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
return str(age)
except Exception:
return "unbekannt"
def get_activity_detail(profile_id: str, days: int = 14) -> str:
"""Get detailed activity log for analysis."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT date, activity_type, duration_min, kcal_active, hr_avg
FROM activity_log
WHERE profile_id=%s AND date >= %s
ORDER BY date DESC
LIMIT 50""",
(profile_id, cutoff)
)
rows = [r2d(r) for r in cur.fetchall()]
if not rows:
return f"Keine Aktivitäten in den letzten {days} Tagen"
# Format as readable list
lines = []
for r in rows:
hr_str = f" HF={r['hr_avg']}" if r.get('hr_avg') else ""
lines.append(
f"{r['date']}: {r['activity_type']} ({r['duration_min']}min, {r.get('kcal_active', 0)}kcal{hr_str})"
)
return '\n'.join(lines[:20]) # Max 20 entries to avoid token bloat
def get_trainingstyp_verteilung(profile_id: str, days: int = 14) -> str:
"""Get training type distribution."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT training_category, COUNT(*) as count
FROM activity_log
WHERE profile_id=%s AND date >= %s AND training_category IS NOT NULL
GROUP BY training_category
ORDER BY count DESC""",
(profile_id, cutoff)
)
rows = [r2d(r) for r in cur.fetchall()]
if not rows:
return "Keine kategorisierten Trainings"
total = sum(r['count'] for r in rows)
parts = [f"{r['training_category']}: {int(r['count']/total*100)}%" for r in rows[:3]]
return ", ".join(parts)
def get_sleep_avg_duration(profile_id: str, days: int = 7) -> str:
"""Calculate average sleep duration in hours."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT sleep_segments FROM sleep_log
WHERE profile_id=%s AND date >= %s
ORDER BY date DESC""",
(profile_id, cutoff)
)
rows = cur.fetchall()
if not rows:
return "nicht verfügbar"
total_minutes = 0
for row in rows:
segments = row['sleep_segments']
if segments:
# Sum duration_min from all segments
for seg in segments:
total_minutes += seg.get('duration_min', 0)
if total_minutes == 0:
return "nicht verfügbar"
avg_hours = total_minutes / len(rows) / 60
return f"{avg_hours:.1f}h"
def get_sleep_avg_quality(profile_id: str, days: int = 7) -> str:
"""Calculate average sleep quality (Deep+REM %)."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT sleep_segments FROM sleep_log
WHERE profile_id=%s AND date >= %s
ORDER BY date DESC""",
(profile_id, cutoff)
)
rows = cur.fetchall()
if not rows:
return "nicht verfügbar"
total_quality = 0
count = 0
for row in rows:
segments = row['sleep_segments']
if segments:
# Note: segments use 'phase' key (not 'stage'), stored lowercase (deep, rem, light, awake)
deep_rem_min = sum(s.get('duration_min', 0) for s in segments if s.get('phase') in ['deep', 'rem'])
total_min = sum(s.get('duration_min', 0) for s in segments)
if total_min > 0:
quality_pct = (deep_rem_min / total_min) * 100
total_quality += quality_pct
count += 1
if count == 0:
return "nicht verfügbar"
avg_quality = total_quality / count
return f"{avg_quality:.0f}% (Deep+REM)"
def get_rest_days_count(profile_id: str, days: int = 30) -> str:
"""Count rest days in the given period."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT COUNT(DISTINCT date) as count FROM rest_days
WHERE profile_id=%s AND date >= %s""",
(profile_id, cutoff)
)
row = cur.fetchone()
count = row['count'] if row else 0
return f"{count} Ruhetage"
def get_vitals_avg_hr(profile_id: str, days: int = 7) -> str:
"""Calculate average resting heart rate."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT AVG(resting_hr) as avg FROM vitals_baseline
WHERE profile_id=%s AND date >= %s AND resting_hr IS NOT NULL""",
(profile_id, cutoff)
)
row = cur.fetchone()
if row and row['avg']:
return f"{int(row['avg'])} bpm"
return "nicht verfügbar"
def get_vitals_avg_hrv(profile_id: str, days: int = 7) -> str:
"""Calculate average heart rate variability."""
with get_db() as conn:
cur = get_cursor(conn)
cutoff = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT AVG(hrv) as avg FROM vitals_baseline
WHERE profile_id=%s AND date >= %s AND hrv IS NOT NULL""",
(profile_id, cutoff)
)
row = cur.fetchone()
if row and row['avg']:
return f"{int(row['avg'])} ms"
return "nicht verfügbar"
def get_vitals_vo2_max(profile_id: str) -> str:
"""Get latest VO2 Max value."""
with get_db() as conn:
cur = get_cursor(conn)
cur.execute(
"""SELECT vo2_max FROM vitals_baseline
WHERE profile_id=%s AND vo2_max IS NOT NULL
ORDER BY date DESC LIMIT 1""",
(profile_id,)
)
row = cur.fetchone()
if row and row['vo2_max']:
return f"{row['vo2_max']:.1f} ml/kg/min"
return "nicht verfügbar"
# ── Placeholder Registry ──────────────────────────────────────────────────────
PLACEHOLDER_MAP: Dict[str, Callable[[str], str]] = {
# Profil
'{{name}}': lambda pid: get_profile_data(pid).get('name', 'Nutzer'),
'{{age}}': lambda pid: calculate_age(get_profile_data(pid).get('dob')),
'{{height}}': lambda pid: str(get_profile_data(pid).get('height', 'unbekannt')),
'{{geschlecht}}': lambda pid: 'männlich' if get_profile_data(pid).get('sex') == 'm' else 'weiblich',
# Körper
'{{weight_aktuell}}': get_latest_weight,
'{{weight_trend}}': get_weight_trend,
'{{kf_aktuell}}': get_latest_bf,
'{{bmi}}': lambda pid: calculate_bmi(pid),  # lambda defers lookup; calculate_bmi is defined below
'{{caliper_summary}}': get_caliper_summary,
'{{circ_summary}}': get_circ_summary,
'{{goal_weight}}': get_goal_weight,
'{{goal_bf_pct}}': get_goal_bf_pct,
# Ernährung
'{{kcal_avg}}': lambda pid: get_nutrition_avg(pid, 'kcal', 30),
'{{protein_avg}}': lambda pid: get_nutrition_avg(pid, 'protein', 30),
'{{carb_avg}}': lambda pid: get_nutrition_avg(pid, 'carb', 30),
'{{fat_avg}}': lambda pid: get_nutrition_avg(pid, 'fat', 30),
'{{nutrition_days}}': lambda pid: get_nutrition_days(pid, 30),
'{{protein_ziel_low}}': get_protein_ziel_low,
'{{protein_ziel_high}}': get_protein_ziel_high,
# Training
'{{activity_summary}}': get_activity_summary,
'{{activity_detail}}': get_activity_detail,
'{{trainingstyp_verteilung}}': get_trainingstyp_verteilung,
# Schlaf & Erholung
'{{sleep_avg_duration}}': lambda pid: get_sleep_avg_duration(pid, 7),
'{{sleep_avg_quality}}': lambda pid: get_sleep_avg_quality(pid, 7),
'{{rest_days_count}}': lambda pid: get_rest_days_count(pid, 30),
# Vitalwerte
'{{vitals_avg_hr}}': lambda pid: get_vitals_avg_hr(pid, 7),
'{{vitals_avg_hrv}}': lambda pid: get_vitals_avg_hrv(pid, 7),
'{{vitals_vo2_max}}': get_vitals_vo2_max,
# Zeitraum
'{{datum_heute}}': lambda pid: datetime.now().strftime('%d.%m.%Y'),
'{{zeitraum_7d}}': lambda pid: 'letzte 7 Tage',
'{{zeitraum_30d}}': lambda pid: 'letzte 30 Tage',
'{{zeitraum_90d}}': lambda pid: 'letzte 90 Tage',
}
def calculate_bmi(profile_id: str) -> str:
"""Calculate BMI from latest weight and profile height."""
profile = get_profile_data(profile_id)
if not profile.get('height'):
return "nicht verfügbar"
with get_db() as conn:
cur = get_cursor(conn)
cur.execute(
"SELECT weight FROM weight_log WHERE profile_id=%s ORDER BY date DESC LIMIT 1",
(profile_id,)
)
row = cur.fetchone()
if not row:
return "nicht verfügbar"
height_m = profile['height'] / 100
bmi = row['weight'] / (height_m ** 2)
return f"{bmi:.1f}"
# ── Public API ────────────────────────────────────────────────────────────────
def resolve_placeholders(template: str, profile_id: str) -> str:
"""
Replace all {{placeholders}} in template with actual user data.
Args:
template: Prompt template with placeholders
profile_id: User profile ID
Returns:
Resolved template with placeholders replaced by values
"""
result = template
for placeholder, resolver in PLACEHOLDER_MAP.items():
if placeholder in result:
try:
value = resolver(profile_id)
result = result.replace(placeholder, str(value))
except Exception:
# On error, replace with an inline error marker
result = result.replace(placeholder, f"[Fehler: {placeholder}]")
return result
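A minimal standalone sketch of the replace loop above, using toy resolvers in place of the DB-backed PLACEHOLDER_MAP:

```python
# Toy registry standing in for the DB-backed PLACEHOLDER_MAP
TOY_MAP = {
    '{{name}}': lambda pid: 'Alex',
    '{{weight_aktuell}}': lambda pid: '82.5 kg',
}

def resolve(template: str, profile_id: str) -> str:
    result = template
    for placeholder, resolver in TOY_MAP.items():
        if placeholder in result:
            try:
                result = result.replace(placeholder, str(resolver(profile_id)))
            except Exception:
                # Same convention as above: inline error marker on failure
                result = result.replace(placeholder, f"[Fehler: {placeholder}]")
    return result

print(resolve("Hallo {{name}}, aktuell: {{weight_aktuell}}", "p1"))
# → Hallo Alex, aktuell: 82.5 kg
```

Unknown placeholders are simply left in place, which is what get_unknown_placeholders below is designed to detect.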
def get_unknown_placeholders(template: str) -> List[str]:
"""
Find all placeholders in template that are not in PLACEHOLDER_MAP.
Args:
template: Prompt template
Returns:
List of unknown placeholder names (without {{}})
"""
# Find all {{...}} patterns
found = re.findall(r'\{\{(\w+)\}\}', template)
# Filter to only unknown ones
known_names = {p.strip('{}') for p in PLACEHOLDER_MAP.keys()}
unknown = [p for p in found if p not in known_names]
return list(set(unknown)) # Remove duplicates
def get_available_placeholders(categories: Optional[List[str]] = None) -> Dict[str, List[str]]:
"""
Get available placeholders, optionally filtered by categories.
Args:
categories: Optional list of categories to filter (körper, ernährung, training, etc.)
Returns:
Dict mapping category to list of placeholders
"""
placeholder_categories = {
'profil': [
'{{name}}', '{{age}}', '{{height}}', '{{geschlecht}}'
],
'körper': [
'{{weight_aktuell}}', '{{weight_trend}}', '{{kf_aktuell}}', '{{bmi}}'
],
'ernährung': [
'{{kcal_avg}}', '{{protein_avg}}', '{{carb_avg}}', '{{fat_avg}}'
],
'training': [
'{{activity_summary}}', '{{trainingstyp_verteilung}}'
],
'zeitraum': [
'{{datum_heute}}', '{{zeitraum_7d}}', '{{zeitraum_30d}}', '{{zeitraum_90d}}'
]
}
if not categories:
return placeholder_categories
# Filter to requested categories
return {k: v for k, v in placeholder_categories.items() if k in categories}
def get_placeholder_example_values(profile_id: str) -> Dict[str, str]:
"""
Get example values for all placeholders using real user data.
Args:
profile_id: User profile ID
Returns:
Dict mapping placeholder to example value
"""
examples = {}
for placeholder, resolver in PLACEHOLDER_MAP.items():
try:
examples[placeholder] = resolver(profile_id)
except Exception as e:
examples[placeholder] = f"[Fehler: {str(e)}]"
return examples
def get_placeholder_catalog(profile_id: str) -> Dict[str, List[Dict[str, str]]]:
"""
Get grouped placeholder catalog with descriptions and example values.
Args:
profile_id: User profile ID
Returns:
Dict mapping category to list of {key, description, example}
"""
# Placeholder definitions with descriptions
placeholders = {
'Profil': [
('name', 'Name des Nutzers'),
('age', 'Alter in Jahren'),
('height', 'Körpergröße in cm'),
('geschlecht', 'Geschlecht'),
],
'Körper': [
('weight_aktuell', 'Aktuelles Gewicht in kg'),
('weight_trend', 'Gewichtstrend (28d)'),
('kf_aktuell', 'Aktueller Körperfettanteil in %'),
('bmi', 'Body Mass Index'),
],
'Ernährung': [
('kcal_avg', 'Durchschn. Kalorien (30d)'),
('protein_avg', 'Durchschn. Protein in g (30d)'),
('carb_avg', 'Durchschn. Kohlenhydrate in g (30d)'),
('fat_avg', 'Durchschn. Fett in g (30d)'),
],
'Training': [
('activity_summary', 'Aktivitäts-Zusammenfassung (14d)'),
('trainingstyp_verteilung', 'Verteilung nach Trainingstypen'),
],
'Schlaf & Erholung': [
('sleep_avg_duration', 'Durchschn. Schlafdauer (7d)'),
('sleep_avg_quality', 'Durchschn. Schlafqualität (7d)'),
('rest_days_count', 'Anzahl Ruhetage (30d)'),
],
'Vitalwerte': [
('vitals_avg_hr', 'Durchschn. Ruhepuls (7d)'),
('vitals_avg_hrv', 'Durchschn. HRV (7d)'),
('vitals_vo2_max', 'Aktueller VO2 Max'),
],
'Zeitraum': [
('datum_heute', 'Heutiges Datum'),
('zeitraum_7d', '7-Tage-Zeitraum'),
('zeitraum_30d', '30-Tage-Zeitraum'),
('zeitraum_90d', '90-Tage-Zeitraum'),
],
}
catalog = {}
for category, items in placeholders.items():
catalog[category] = []
for key, description in items:
placeholder = f'{{{{{key}}}}}'
# Get example value if resolver exists
resolver = PLACEHOLDER_MAP.get(placeholder)
if resolver:
try:
example = resolver(profile_id)
except Exception:
example = '[Nicht verfügbar]'
else:
example = '[Nicht implementiert]'
catalog[category].append({
'key': key,
'description': description,
'example': str(example)
})
return catalog

backend/prompt_executor.py Normal file

@ -0,0 +1,526 @@
"""
Unified Prompt Executor (Issue #28 Phase 2)
Executes both base and pipeline-type prompts with:
- Dynamic placeholder resolution
- JSON output validation
- Multi-stage parallel execution
- Reference and inline prompt support
"""
import json
import re
from typing import Dict, Any, Optional
from db import get_db, get_cursor, r2d
from fastapi import HTTPException
def resolve_placeholders(template: str, variables: Dict[str, Any], debug_info: Optional[Dict] = None, catalog: Optional[Dict] = None) -> str:
"""
Replace {{placeholder}} with values from variables dict.
Supports modifiers:
- {{key|d}} - Include description in parentheses (requires catalog)
Args:
template: String with {{key}} or {{key|modifiers}} placeholders
variables: Dict of key -> value mappings
debug_info: Optional dict to collect debug information
catalog: Optional placeholder catalog for descriptions (from get_placeholder_catalog)
Returns:
Template with placeholders replaced
"""
resolved = {}
unresolved = []
def replacer(match):
full_placeholder = match.group(1).strip()
# Parse key and modifiers (e.g., "weight_aktuell|d" -> key="weight_aktuell", modifiers="d")
parts = full_placeholder.split('|')
key = parts[0].strip()
modifiers = parts[1].strip() if len(parts) > 1 else ''
if key in variables:
value = variables[key]
# Convert dict/list to JSON string
if isinstance(value, (dict, list)):
resolved_value = json.dumps(value, ensure_ascii=False)
else:
resolved_value = str(value)
# Apply modifiers
if 'd' in modifiers:
if catalog:
# Add description from catalog
description = None
for cat_items in catalog.values():
matching = [item for item in cat_items if item['key'] == key]
if matching:
description = matching[0].get('description', '')
break
if description:
resolved_value = f"{resolved_value} ({description})"
else:
# Catalog not available - log warning in debug
if debug_info is not None:
if 'warnings' not in debug_info:
debug_info['warnings'] = []
debug_info['warnings'].append(f"Modifier |d used but catalog not available for {key}")
# Track resolution for debug
if debug_info is not None:
resolved[key] = resolved_value[:100] + ('...' if len(resolved_value) > 100 else '')
return resolved_value
else:
# Keep placeholder if no value found
if debug_info is not None:
unresolved.append(key)
return match.group(0)
result = re.sub(r'\{\{([^}]+)\}\}', replacer, template)
# Store debug info
if debug_info is not None:
debug_info['resolved_placeholders'] = resolved
debug_info['unresolved_placeholders'] = unresolved
return result
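The |d modifier path can be exercised with an in-memory catalog; this is a self-contained sketch mirroring the replacer above (debug tracking omitted for brevity):

```python
import re
import json

def resolve_with_modifiers(template, variables, catalog=None):
    def replacer(match):
        parts = match.group(1).strip().split('|')
        key = parts[0].strip()
        modifiers = parts[1].strip() if len(parts) > 1 else ''
        if key not in variables:
            return match.group(0)  # leave unknown placeholders untouched
        value = variables[key]
        text = json.dumps(value, ensure_ascii=False) if isinstance(value, (dict, list)) else str(value)
        if 'd' in modifiers and catalog:
            # Look up the description across all catalog categories
            for items in catalog.values():
                for item in items:
                    if item['key'] == key and item.get('description'):
                        return f"{text} ({item['description']})"
        return text
    return re.sub(r'\{\{([^}]+)\}\}', replacer, template)

catalog = {'Körper': [{'key': 'weight_aktuell', 'description': 'Aktuelles Gewicht in kg'}]}
print(resolve_with_modifiers("Gewicht: {{weight_aktuell|d}}",
                             {'weight_aktuell': '82.5 kg'}, catalog))
# → Gewicht: 82.5 kg (Aktuelles Gewicht in kg)
```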
def validate_json_output(output: str, schema: Optional[Dict] = None, debug_info: Optional[Dict] = None) -> Dict:
"""
Validate that output is valid JSON.
Unwraps Markdown-wrapped JSON (```json ... ```) if present.
Args:
output: String to validate
schema: Optional JSON schema to validate against (TODO: jsonschema library)
debug_info: Optional dict to attach to error for debugging
Returns:
Parsed JSON dict
Raises:
HTTPException: If output is not valid JSON (with debug info attached)
"""
# Try to unwrap Markdown code blocks (common AI pattern)
unwrapped = output.strip()
if unwrapped.startswith('```json'):
# Extract content between ```json and ```
lines = unwrapped.split('\n')
if len(lines) > 2 and lines[-1].strip() == '```':
unwrapped = '\n'.join(lines[1:-1])
elif unwrapped.startswith('```'):
# Generic code block
lines = unwrapped.split('\n')
if len(lines) > 2 and lines[-1].strip() == '```':
unwrapped = '\n'.join(lines[1:-1])
try:
parsed = json.loads(unwrapped)
# TODO: Add jsonschema validation if schema provided
return parsed
except json.JSONDecodeError as e:
error_detail = {
"error": f"AI returned invalid JSON: {str(e)}",
"raw_output": output[:500] + ('...' if len(output) > 500 else ''),
"unwrapped": unwrapped[:500] if unwrapped != output else None,
"output_length": len(output)
}
if debug_info:
error_detail["debug"] = debug_info
raise HTTPException(
status_code=500,
detail=error_detail
)
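The fence-unwrapping step can be isolated as a small sketch (the fence string is built indirectly so this example stays fence-safe):

```python
import json

FENCE = chr(96) * 3  # three backticks, built indirectly

def unwrap_and_parse(output: str):
    # Mirror of the unwrapping logic above: strip a Markdown code fence, then parse.
    # Removing the first line drops both the fence and any language tag on it.
    s = output.strip()
    if s.startswith(FENCE):
        lines = s.split('\n')
        if len(lines) > 2 and lines[-1].strip() == FENCE:
            s = '\n'.join(lines[1:-1])
    return json.loads(s)

wrapped = FENCE + 'json\n{"score": 7, "ok": true}\n' + FENCE
print(unwrap_and_parse(wrapped))
# → {'score': 7, 'ok': True}
```

Note this only handles a fence that spans the whole response, which matches the "common AI pattern" case the docstring describes.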
async def execute_prompt(
prompt_slug: str,
variables: Dict[str, Any],
openrouter_call_func,
enable_debug: bool = False
) -> Dict[str, Any]:
"""
Execute a single prompt (base or pipeline type).
Args:
prompt_slug: Slug of prompt to execute
variables: Dict of variables for placeholder replacement
openrouter_call_func: Async function(prompt_text) -> response_text
enable_debug: If True, include debug information in response
Returns:
Dict with execution results:
{
"type": "base" | "pipeline",
"slug": "...",
"output": "..." | {...}, # String or parsed JSON
"stages": [...] # Only for pipeline type
"debug": {...} # Only if enable_debug=True
}
"""
# Load prompt from database
with get_db() as conn:
cur = get_cursor(conn)
cur.execute(
"""SELECT * FROM ai_prompts
WHERE slug = %s AND active = true""",
(prompt_slug,)
)
row = cur.fetchone()
if not row:
raise HTTPException(404, f"Prompt nicht gefunden: {prompt_slug}")
prompt = r2d(row)
prompt_type = prompt.get('type', 'pipeline')
# Get catalog from variables if available (passed from execute_prompt_with_data)
catalog = variables.pop('_catalog', None)
if prompt_type == 'base':
# Base prompt: single execution with template
return await execute_base_prompt(prompt, variables, openrouter_call_func, enable_debug, catalog)
elif prompt_type == 'pipeline':
# Pipeline prompt: multi-stage execution
return await execute_pipeline_prompt(prompt, variables, openrouter_call_func, enable_debug, catalog)
else:
raise HTTPException(400, f"Unknown prompt type: {prompt_type}")
async def execute_base_prompt(
prompt: Dict,
variables: Dict[str, Any],
openrouter_call_func,
enable_debug: bool = False,
catalog: Optional[Dict] = None
) -> Dict[str, Any]:
"""Execute a base-type prompt (single template)."""
template = prompt.get('template')
if not template:
raise HTTPException(400, f"Base prompt missing template: {prompt['slug']}")
debug_info = {} if enable_debug else None
# Resolve placeholders (with optional catalog for |d modifier)
prompt_text = resolve_placeholders(template, variables, debug_info, catalog)
if enable_debug:
debug_info['template'] = template
debug_info['final_prompt'] = prompt_text[:500] + ('...' if len(prompt_text) > 500 else '')
debug_info['available_variables'] = list(variables.keys())
# Call AI
response = await openrouter_call_func(prompt_text)
if enable_debug:
debug_info['ai_response_length'] = len(response)
debug_info['ai_response_preview'] = response[:200] + ('...' if len(response) > 200 else '')
# Validate JSON if required
output_format = prompt.get('output_format', 'text')
if output_format == 'json':
output = validate_json_output(response, prompt.get('output_schema'), debug_info if enable_debug else None)
else:
output = response
result = {
"type": "base",
"slug": prompt['slug'],
"output": output,
"output_format": output_format
}
if enable_debug:
result['debug'] = debug_info
return result
async def execute_pipeline_prompt(
prompt: Dict,
variables: Dict[str, Any],
openrouter_call_func,
enable_debug: bool = False,
catalog: Optional[Dict] = None
) -> Dict[str, Any]:
"""
Execute a pipeline-type prompt (multi-stage).
Each stage's results are added to variables for next stage.
"""
stages = prompt.get('stages')
if not stages:
raise HTTPException(400, f"Pipeline prompt missing stages: {prompt['slug']}")
# Parse stages if stored as JSON string
if isinstance(stages, str):
stages = json.loads(stages)
stage_results = []
context_vars = variables.copy()
pipeline_debug = [] if enable_debug else None
# Execute stages in order
for stage_def in sorted(stages, key=lambda s: s['stage']):
stage_num = stage_def['stage']
stage_prompts = stage_def.get('prompts', [])
if not stage_prompts:
continue
stage_debug = {} if enable_debug else None
if enable_debug:
stage_debug['stage'] = stage_num
stage_debug['available_variables'] = list(context_vars.keys())
stage_debug['prompts'] = []
# Execute all prompts in this stage (parallel concept, sequential impl for now)
stage_outputs = {}
for prompt_def in stage_prompts:
source = prompt_def.get('source')
output_key = prompt_def.get('output_key', f'stage{stage_num}')
output_format = prompt_def.get('output_format', 'text')
prompt_debug = {} if enable_debug else None
if source == 'reference':
# Reference to another prompt
ref_slug = prompt_def.get('slug')
if not ref_slug:
raise HTTPException(400, f"Reference prompt missing slug in stage {stage_num}")
if enable_debug:
prompt_debug['source'] = 'reference'
prompt_debug['ref_slug'] = ref_slug
# Load referenced prompt
result = await execute_prompt(ref_slug, context_vars, openrouter_call_func, enable_debug)
output = result['output']
if enable_debug and 'debug' in result:
prompt_debug['ref_debug'] = result['debug']
elif source == 'inline':
# Inline template
template = prompt_def.get('template')
if not template:
raise HTTPException(400, f"Inline prompt missing template in stage {stage_num}")
placeholder_debug = {} if enable_debug else None
prompt_text = resolve_placeholders(template, context_vars, placeholder_debug, catalog)
if enable_debug:
prompt_debug['source'] = 'inline'
prompt_debug['template'] = template
prompt_debug['final_prompt'] = prompt_text[:500] + ('...' if len(prompt_text) > 500 else '')
prompt_debug.update(placeholder_debug)
response = await openrouter_call_func(prompt_text)
if enable_debug:
prompt_debug['ai_response_length'] = len(response)
prompt_debug['ai_response_preview'] = response[:200] + ('...' if len(response) > 200 else '')
# Validate JSON if required
if output_format == 'json':
output = validate_json_output(response, prompt_def.get('output_schema'), prompt_debug if enable_debug else None)
else:
output = response
else:
raise HTTPException(400, f"Unknown prompt source: {source}")
# Store output with key
stage_outputs[output_key] = output
# Add to context for next stage
context_var_key = f'stage_{stage_num}_{output_key}'
context_vars[context_var_key] = output
if enable_debug:
prompt_debug['output_key'] = output_key
prompt_debug['context_var_key'] = context_var_key
stage_debug['prompts'].append(prompt_debug)
stage_results.append({
"stage": stage_num,
"outputs": stage_outputs
})
if enable_debug:
stage_debug['output'] = stage_outputs # Add outputs to debug info for value table
pipeline_debug.append(stage_debug)
# Final output is the outputs dict of the last stage
final_output = stage_results[-1]['outputs'] if stage_results else {}
result = {
"type": "pipeline",
"slug": prompt['slug'],
"stages": stage_results,
"output": final_output,
"output_format": prompt.get('output_format', 'text')
}
if enable_debug:
result['debug'] = {
'initial_variables': list(variables.keys()),
'stages': pipeline_debug
}
return result
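The docstring above says each stage's results are added to the variables for the next stage; the exact key shape is `stage_{num}_{output_key}`. A tiny, self-contained illustration of that convention (the function name is hypothetical):

```python
# Illustration of the context_var_key convention used by the pipeline
# executor: stage outputs become stage_<n>_<output_key> placeholders.
def build_context_vars(initial, stage_results):
    """stage_results: list of {"stage": n, "outputs": {key: value}} dicts."""
    context = dict(initial)
    for stage in stage_results:
        for output_key, value in stage["outputs"].items():
            context[f"stage_{stage['stage']}_{output_key}"] = value
    return context
```

So a stage-1 prompt with `output_key: "body"` is referenced by later templates as `{{stage_1_body}}`.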
async def execute_prompt_with_data(
prompt_slug: str,
profile_id: str,
modules: Optional[Dict[str, bool]] = None,
timeframes: Optional[Dict[str, int]] = None,
openrouter_call_func = None,
enable_debug: bool = False
) -> Dict[str, Any]:
"""
Execute prompt with data loaded from database.
Args:
prompt_slug: Slug of prompt to execute
profile_id: User profile ID
modules: Dict of module -> enabled (e.g., {"körper": true})
timeframes: Dict of module -> days (e.g., {"körper": 30})
openrouter_call_func: Async function for AI calls
enable_debug: If True, include debug information in response
Returns:
Execution result dict
"""
from datetime import datetime, timedelta
from placeholder_resolver import get_placeholder_example_values, get_placeholder_catalog
# Build variables from data modules
variables = {
'profile_id': profile_id,
'today': datetime.now().strftime('%Y-%m-%d')
}
# Load placeholder catalog for |d modifier support
try:
catalog = get_placeholder_catalog(profile_id)
except Exception as e:
catalog = None
print(f"Warning: Could not load placeholder catalog: {e}")
variables['_catalog'] = catalog # Will be popped in execute_prompt (can be None)
# Add PROCESSED placeholders (name, weight_trend, caliper_summary, etc.)
# This makes old-style prompts work with the new executor
try:
processed_placeholders = get_placeholder_example_values(profile_id)
# Remove {{ }} from keys (placeholder_resolver returns them with wrappers)
cleaned_placeholders = {
key.replace('{{', '').replace('}}', ''): value
for key, value in processed_placeholders.items()
}
variables.update(cleaned_placeholders)
except Exception as e:
# Continue even if placeholder resolution fails
if enable_debug:
variables['_placeholder_error'] = str(e)
# Load data for enabled modules
if modules:
with get_db() as conn:
cur = get_cursor(conn)
# Weight data
if modules.get('körper'):
days = timeframes.get('körper', 30)
since = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT date, weight FROM weight_log
WHERE profile_id = %s AND date >= %s
ORDER BY date DESC""",
(profile_id, since)
)
variables['weight_data'] = [r2d(r) for r in cur.fetchall()]
# Nutrition data
if modules.get('ernährung'):
days = timeframes.get('ernährung', 30)
since = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT date, kcal, protein_g, fat_g, carbs_g
FROM nutrition_log
WHERE profile_id = %s AND date >= %s
ORDER BY date DESC""",
(profile_id, since)
)
variables['nutrition_data'] = [r2d(r) for r in cur.fetchall()]
# Activity data
if modules.get('training'):
days = timeframes.get('training', 14)
since = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT date, activity_type, duration_min, kcal_active, hr_avg
FROM activity_log
WHERE profile_id = %s AND date >= %s
ORDER BY date DESC""",
(profile_id, since)
)
variables['activity_data'] = [r2d(r) for r in cur.fetchall()]
# Sleep data
if modules.get('schlaf'):
days = timeframes.get('schlaf', 14)
since = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
cur.execute(
"""SELECT date, sleep_segments, source
FROM sleep_log
WHERE profile_id = %s AND date >= %s
ORDER BY date DESC""",
(profile_id, since)
)
variables['sleep_data'] = [r2d(r) for r in cur.fetchall()]
# Vitals data
if modules.get('vitalwerte'):
days = timeframes.get('vitalwerte', 7)
since = (datetime.now() - timedelta(days=days)).strftime('%Y-%m-%d')
# Baseline vitals
cur.execute(
"""SELECT date, resting_hr, hrv, vo2_max, spo2, respiratory_rate
FROM vitals_baseline
WHERE profile_id = %s AND date >= %s
ORDER BY date DESC""",
(profile_id, since)
)
variables['vitals_baseline'] = [r2d(r) for r in cur.fetchall()]
# Blood pressure
cur.execute(
"""SELECT measured_at, systolic, diastolic, pulse
FROM blood_pressure_log
WHERE profile_id = %s AND measured_at >= %s
ORDER BY measured_at DESC""",
(profile_id, since + ' 00:00:00')
)
variables['blood_pressure'] = [r2d(r) for r in cur.fetchall()]
# Mental/Goals (no timeframe, just current state)
if modules.get('mentales') or modules.get('ziele'):
# TODO: Add mental state / goals data when implemented
variables['goals_data'] = []
# Execute prompt
return await execute_prompt(prompt_slug, variables, openrouter_call_func, enable_debug)
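The key cleaning applied to `processed_placeholders` above (the resolver returns keys wrapped as `{{key}}`) is worth pinning down in isolation; a minimal reproduction (the helper name is assumed, not app code):

```python
def strip_placeholder_wrappers(placeholders):
    """Turn resolver keys like '{{name}}' into plain 'name' keys."""
    return {key.replace('{{', '').replace('}}', ''): value
            for key, value in placeholders.items()}
```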


@@ -433,8 +433,17 @@ async def analyze_with_prompt(slug: str, x_profile_id: Optional[str]=Header(defa
@router.post("/insights/pipeline")
async def analyze_pipeline(x_profile_id: Optional[str]=Header(default=None), session: dict=Depends(require_auth)):
"""Run 3-stage pipeline analysis."""
async def analyze_pipeline(
config_id: Optional[str] = None,
x_profile_id: Optional[str] = Header(default=None),
session: dict = Depends(require_auth)
):
"""
Run configurable multi-stage pipeline analysis.
Args:
config_id: Pipeline config ID (optional, uses default if not specified)
"""
pid = get_pid(x_profile_id)
# Phase 4: Check pipeline feature access (boolean - enabled/disabled)
@@ -466,14 +475,34 @@ async def analyze_pipeline(x_profile_id: Optional[str]=Header(default=None), ses
f"Bitte kontaktiere den Admin oder warte bis zum nächsten Reset."
)
# Load pipeline config
with get_db() as conn:
cur = get_cursor(conn)
if config_id:
cur.execute("SELECT * FROM pipeline_configs WHERE id=%s AND active=true", (config_id,))
else:
cur.execute("SELECT * FROM pipeline_configs WHERE is_default=true AND active=true")
config = r2d(cur.fetchone())
if not config:
raise HTTPException(404, "Pipeline-Konfiguration nicht gefunden")
logger.info(f"[PIPELINE] Using config '{config['name']}' (id={config['id']})")
data = _get_profile_data(pid)
vars = _prepare_template_vars(data)
# Stage 1: Parallel JSON analyses
# Stage 1: Load and execute prompts from config
stage1_prompts = []
with get_db() as conn:
cur = get_cursor(conn)
cur.execute("SELECT slug, template FROM ai_prompts WHERE slug LIKE 'pipeline_%' AND slug NOT IN ('pipeline_synthesis','pipeline_goals') AND active=true")
stage1_prompts = [r2d(r) for r in cur.fetchall()]
for slug in config['stage1_prompts']:
cur.execute("SELECT slug, template FROM ai_prompts WHERE slug=%s AND active=true", (slug,))
prompt = r2d(cur.fetchone())
if prompt:
stage1_prompts.append(prompt)
else:
logger.warning(f"[PIPELINE] Stage 1 prompt '{slug}' not found or inactive")
stage1_results = {}
for p in stage1_prompts:
@@ -510,17 +539,20 @@ async def analyze_pipeline(x_profile_id: Optional[str]=Header(default=None), ses
except:
stage1_results[slug] = content
# Stage 2: Synthesis
vars['stage1_body'] = json.dumps(stage1_results.get('pipeline_body', {}), ensure_ascii=False)
vars['stage1_nutrition'] = json.dumps(stage1_results.get('pipeline_nutrition', {}), ensure_ascii=False)
vars['stage1_activity'] = json.dumps(stage1_results.get('pipeline_activity', {}), ensure_ascii=False)
# Stage 2: Synthesis with dynamic placeholders
# Inject all stage1 results as {{stage1_<slug>}} placeholders
for slug, result in stage1_results.items():
# Convert slug like "pipeline_body" to placeholder name "stage1_body"
placeholder_name = slug.replace('pipeline_', 'stage1_')
vars[placeholder_name] = json.dumps(result, ensure_ascii=False) if isinstance(result, dict) else str(result)
# Load stage 2 prompt from config
with get_db() as conn:
cur = get_cursor(conn)
cur.execute("SELECT template FROM ai_prompts WHERE slug='pipeline_synthesis' AND active=true")
cur.execute("SELECT template FROM ai_prompts WHERE slug=%s AND active=true", (config['stage2_prompt'],))
synth_row = cur.fetchone()
if not synth_row:
raise HTTPException(500, "Pipeline synthesis prompt not found")
raise HTTPException(500, f"Pipeline synthesis prompt '{config['stage2_prompt']}' not found")
synth_prompt = _render_template(synth_row['template'], vars)
@@ -548,16 +580,24 @@ async def analyze_pipeline(x_profile_id: Optional[str]=Header(default=None), ses
else:
raise HTTPException(500, "Keine KI-API konfiguriert")
# Stage 3: Goals (only if goals are set)
# Stage 3: Optional (e.g., Goals)
goals_text = None
prof = data['profile']
if prof.get('goal_weight') or prof.get('goal_bf_pct'):
with get_db() as conn:
cur = get_cursor(conn)
cur.execute("SELECT template FROM ai_prompts WHERE slug='pipeline_goals' AND active=true")
goals_row = cur.fetchone()
if goals_row:
goals_prompt = _render_template(goals_row['template'], vars)
if config.get('stage3_prompt'):
# Check if conditions are met (for backwards compatibility with goals check)
prof = data['profile']
should_run_stage3 = True
# Special case: goals prompt only runs if goals are set
if config['stage3_prompt'] == 'pipeline_goals':
should_run_stage3 = bool(prof.get('goal_weight') or prof.get('goal_bf_pct'))
if should_run_stage3:
with get_db() as conn:
cur = get_cursor(conn)
cur.execute("SELECT template FROM ai_prompts WHERE slug=%s AND active=true", (config['stage3_prompt'],))
goals_row = cur.fetchone()
if goals_row:
goals_prompt = _render_template(goals_row['template'], vars)
if ANTHROPIC_KEY:
import anthropic
@@ -586,11 +626,14 @@ async def analyze_pipeline(x_profile_id: Optional[str]=Header(default=None), ses
if goals_text:
final_content += "\n\n" + goals_text
# Save as 'pipeline' scope (with history - no DELETE)
# Save with config-specific scope (with history - no DELETE)
scope = f"pipeline_{config['name'].lower().replace(' ', '_')}"
with get_db() as conn:
cur = get_cursor(conn)
cur.execute("INSERT INTO ai_insights (id, profile_id, scope, content, created) VALUES (%s,%s,'pipeline',%s,CURRENT_TIMESTAMP)",
(str(uuid.uuid4()), pid, final_content))
cur.execute("INSERT INTO ai_insights (id, profile_id, scope, content, created) VALUES (%s,%s,%s,%s,CURRENT_TIMESTAMP)",
(str(uuid.uuid4()), pid, scope, final_content))
logger.info(f"[PIPELINE] Completed '{config['name']}' - saved as scope='{scope}'")
# Phase 2: Increment ai_calls usage (pipeline uses multiple API calls)
# Note: We increment once per pipeline run, not per individual call
@@ -599,7 +642,15 @@ async def analyze_pipeline(x_profile_id: Optional[str]=Header(default=None), ses
# Old usage tracking (keep for now)
inc_ai_usage(pid)
return {"scope": "pipeline", "content": final_content, "stage1": stage1_results}
return {
"scope": scope,
"content": final_content,
"stage1": stage1_results,
"config": {
"id": config['id'],
"name": config['name']
}
}
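Two small naming rules carry this diff: stage-1 slugs are turned into `{{stage1_*}}` placeholders, and the saved insight scope is derived from the config name. A hedged sketch of both (function names are illustrative, not part of the codebase):

```python
def stage1_placeholder(slug):
    """Map a stage-1 prompt slug to its template placeholder name,
    e.g. 'pipeline_body' -> 'stage1_body' (used as {{stage1_body}})."""
    return slug.replace('pipeline_', 'stage1_')

def insight_scope(config_name):
    """Derive the ai_insights scope from a pipeline config name,
    e.g. 'Morning Check' -> 'pipeline_morning_check'."""
    return f"pipeline_{config_name.lower().replace(' ', '_')}"
```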
@router.get("/ai/usage")

File diff suppressed because it is too large


@@ -0,0 +1,181 @@
# Enhancement: Value Table Optimization
**Labels:** enhancement, ux
**Priority:** Medium (Phase 1)
**Related:** Issue #47 (Value Table - Complete)
## Description
Make the value table easier to scan through smart filtering and by completing the descriptions.
## Problem (current state)
Since Issue #47 was implemented, the value table has grown very large and contains:
- Regular placeholders (PROFIL, KÖRPER, etc.)
- Single values extracted from stages (↳ symbol)
- Raw stage output data (🔬 JSON)
**Problems:**
1. Too many values in normal mode (hard to scan)
2. Raw stage data should only be visible in expert mode
3. Some placeholders have missing or incomplete descriptions
## Desired Behavior
### Normal Mode (default)
```
📊 Verwendete Werte (24) [🔬 Experten-Modus]
PROFIL
├─ name: Lars Stommer
├─ age: 35 Jahre (Geburtsdatum 1990-05-15)
├─ height: 178cm (Körpergröße)
KÖRPER
├─ weight_aktuell: 85.2kg (Aktuelles Gewicht)
├─ bmi: 26.9 (Body-Mass-Index)
├─ bmi_interpretation: Leicht übergewichtig (BMI-Bewertung)
├─ kf_aktuell: 18.5% (Körperfettanteil)
ERNÄHRUNG
├─ kcal_avg: 2450 kcal (Durchschnitt 7 Tage)
...
Stage 1 - Body (Extrahierte Werte)
├─ ↳ trend: leicht sinkend (Gewichtstrend)
├─ ↳ ziel_erreichbar: ja, in 8 Wochen (Zielerreichbarkeit)
```
**Hidden:**
- 🔬 stage_1_stage1_body (complete JSON)
- Empty / unavailable values
### Expert Mode
```
📊 Verwendete Werte (32) [🔬 Experten-Modus ✓]
[... wie Normal-Modus ...]
Stage 1 - Rohdaten
├─ 🔬 stage_1_stage1_body
└─ [JSON anzeigen ▼]
{
"bmi": 26.9,
"trend": "leicht sinkend",
"ziel_erreichbar": "ja, in 8 Wochen",
"interpretation": "Dein BMI liegt..."
}
Stage 2 - Rohdaten
├─ 🔬 stage_2_stage2_nutrition
└─ [JSON anzeigen ▼]
{ ... }
```
**Additionally visible:**
- All raw stage data (🔬 JSON)
- Empty / unavailable values
- Debug information
## Technical Implementation
### 1. Adjust the filter logic (Analysis.jsx)
**Current:**
```javascript
const placeholders = expertMode
? allPlaceholders
: Object.fromEntries(
Object.entries(allPlaceholders).filter(([key, data]) => {
if (data.is_stage_raw) return false // hides raw stage data
const val = data.value || ''
return val.trim() !== '' && val !== 'nicht verfügbar'
})
)
```
**Problem:** `is_stage_raw` is only set for keys like "stage_1_stage1_body", not for extracted values.
**Solution:** Keep the existing `is_extracted` flag; set `is_stage_raw` only for complete JSON outputs.
### 2. Complete the descriptions (placeholder_resolver.py)
**Check for missing descriptions:**
```python
# In get_placeholder_catalog()
PLACEHOLDER_CATALOG = {
'PROFIL': [
{'key': 'name', 'description': 'Name des Nutzers', 'example': '...'},
{'key': 'age', 'description': 'Alter in Jahren', 'example': '35'},
# ... check them all
],
'KÖRPER': [
{'key': 'weight_aktuell', 'description': 'Aktuelles Gewicht', 'example': '85.2kg'},
{'key': 'bmi', 'description': 'Body-Mass-Index (berechnet)', 'example': '26.9'},
# ... check them all
],
# ... go through every category
}
```
**Actions:**
- Go through all 32 placeholders
- Add the missing descriptions
- Derive descriptions from context (e.g., for values extracted from stage output)
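The audit in the action list above can be automated; a minimal sketch against the catalog shape shown earlier (the helper name is hypothetical):

```python
# Flag catalog entries whose description is missing or blank, using the
# {category: [{'key': ..., 'description': ...}, ...]} catalog shape.
def find_missing_descriptions(catalog):
    missing = []
    for category, items in catalog.items():
        for item in items:
            if not item.get('description', '').strip():
                missing.append(f"{category}.{item['key']}")
    return missing
```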
### 3. Describe extracted values (prompts.py)
**Current (lines 896-901):**
```python
metadata['placeholders'][field_key] = {
'value': field_data['value'],
'description': f"Aus Stage {field_data['source_stage']} ({field_data['source_output']})",
'is_extracted': True,
'category': category
}
```
**Improvement:**
- If the base prompt has a JSON schema, extract field descriptions from the schema
- Fallback: a generic description derived from context
**Example:**
```python
# If output_schema is available:
schema = base_prompt.get('output_schema', {})
properties = schema.get('properties', {})
field_description = properties.get(field_key, {}).get('description', '')
metadata['placeholders'][field_key] = {
'value': field_data['value'],
'description': field_description or f"Aus Stage {stage_num} ({output_name})",
'is_extracted': True,
'category': category
}
```
## Acceptance Criteria
- [ ] Normal mode shows only single values (regular + extracted ↳)
- [ ] Expert mode additionally shows raw stage data (🔬 JSON)
- [ ] All 32 placeholders have meaningful descriptions
- [ ] Extracted values use schema descriptions (where available)
- [ ] The "expert mode" toggle works correctly
- [ ] Categories stay cleanly separated
- [ ] Empty values are hidden in normal mode
## Estimate
**Effort:** 4-6 hours
- 1h: test/adjust the filter logic
- 2-3h: complete the descriptions (32 placeholders)
- 1h: schema-based descriptions for extracted values
- 1h: testing + fine-tuning
**Priority:** Medium (greatly improves UX, but not critical functionality)
## Notes
- Issue #47 laid the groundwork (categories, expert mode, stage outputs)
- This optimization makes the value table production-ready
- Descriptions matter for AI context AND user understanding
- Possibly later: make descriptions editable (admin UI)

find-container.sh Normal file

@@ -0,0 +1,13 @@
#!/bin/bash
# Find correct container name
echo "Suche Backend-Container..."
echo ""
# List all running containers
echo "Alle laufenden Container:"
docker ps --format "{{.Names}}" | grep -i backend
echo ""
echo "Oder alle Container (auch gestoppte):"
docker ps -a --format "{{.Names}}" | grep -i backend


@@ -30,6 +30,7 @@ import AdminUserRestrictionsPage from './pages/AdminUserRestrictionsPage'
import AdminTrainingTypesPage from './pages/AdminTrainingTypesPage'
import AdminActivityMappingsPage from './pages/AdminActivityMappingsPage'
import AdminTrainingProfiles from './pages/AdminTrainingProfiles'
import AdminPromptsPage from './pages/AdminPromptsPage'
import SubscriptionPage from './pages/SubscriptionPage'
import SleepPage from './pages/SleepPage'
import RestDaysPage from './pages/RestDaysPage'
@@ -184,6 +185,7 @@ function AppShell() {
<Route path="/admin/training-types" element={<AdminTrainingTypesPage/>}/>
<Route path="/admin/activity-mappings" element={<AdminActivityMappingsPage/>}/>
<Route path="/admin/training-profiles" element={<AdminTrainingProfiles/>}/>
<Route path="/admin/prompts" element={<AdminPromptsPage/>}/>
<Route path="/subscription" element={<SubscriptionPage/>}/>
</Routes>
</main>


@@ -0,0 +1,255 @@
import { useState, useEffect } from 'react'
import { api } from '../utils/api'
import { Search, X } from 'lucide-react'
/**
* Placeholder Picker with grouped categories and search
*
* Loads placeholders dynamically from backend catalog.
* Grouped by category (Profil, Körper, Ernährung, Training, etc.)
*/
export default function PlaceholderPicker({ onSelect, onClose }) {
const [catalog, setCatalog] = useState({})
const [search, setSearch] = useState('')
const [loading, setLoading] = useState(true)
const [expandedCategories, setExpandedCategories] = useState(new Set())
useEffect(() => {
loadCatalog()
}, [])
const loadCatalog = async () => {
try {
setLoading(true)
const data = await api.listPlaceholders()
setCatalog(data)
// Expand all categories by default
setExpandedCategories(new Set(Object.keys(data)))
} catch (e) {
console.error('Failed to load placeholders:', e)
} finally {
setLoading(false)
}
}
const toggleCategory = (category) => {
const newExpanded = new Set(expandedCategories)
if (newExpanded.has(category)) {
newExpanded.delete(category)
} else {
newExpanded.add(category)
}
setExpandedCategories(newExpanded)
}
const handleSelect = (key) => {
onSelect(`{{${key}}}`)
onClose()
}
// Filter placeholders by search
const filteredCatalog = {}
const searchLower = search.toLowerCase()
Object.entries(catalog).forEach(([category, items]) => {
const filtered = items.filter(item =>
item.key.toLowerCase().includes(searchLower) ||
item.description.toLowerCase().includes(searchLower)
)
if (filtered.length > 0) {
filteredCatalog[category] = filtered
}
})
return (
<div
style={{
position: 'fixed',
inset: 0,
background: 'rgba(0,0,0,0.5)',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
zIndex: 2000,
padding: 20
}}
onClick={onClose}
>
<div
onClick={e => e.stopPropagation()}
style={{
background: 'var(--bg)',
borderRadius: 12,
maxWidth: 800,
width: '100%',
maxHeight: '80vh',
display: 'flex',
flexDirection: 'column',
overflow: 'hidden'
}}
>
{/* Header */}
<div style={{
padding: 20,
borderBottom: '1px solid var(--border)',
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center'
}}>
<h3 style={{ margin: 0, fontSize: 18, fontWeight: 600 }}>
Platzhalter auswählen
</h3>
<button
onClick={onClose}
style={{ background: 'none', border: 'none', cursor: 'pointer', padding: 4 }}
>
<X size={24} color="var(--text3)" />
</button>
</div>
{/* Search */}
<div style={{ padding: '12px 20px', borderBottom: '1px solid var(--border)' }}>
<div style={{ position: 'relative' }}>
<Search
size={16}
style={{
position: 'absolute',
left: 12,
top: '50%',
transform: 'translateY(-50%)',
color: 'var(--text3)'
}}
/>
<input
type="text"
className="form-input"
value={search}
onChange={e => setSearch(e.target.value)}
placeholder="Platzhalter suchen..."
style={{
width: '100%',
paddingLeft: 40,
textAlign: 'left'
}}
/>
</div>
</div>
{/* Categories */}
<div style={{
flex: 1,
overflow: 'auto',
padding: 20
}}>
{loading ? (
<div style={{ textAlign: 'center', padding: 40, color: 'var(--text3)' }}>
Lädt Platzhalter...
</div>
) : Object.keys(filteredCatalog).length === 0 ? (
<div style={{ textAlign: 'center', padding: 40, color: 'var(--text3)' }}>
Keine Platzhalter gefunden
</div>
) : (
Object.entries(filteredCatalog).map(([category, items]) => (
<div key={category} style={{ marginBottom: 16 }}>
<div
onClick={() => toggleCategory(category)}
style={{
padding: '8px 12px',
background: 'var(--surface)',
borderRadius: 8,
cursor: 'pointer',
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center',
marginBottom: 8
}}
>
<h4 style={{ margin: 0, fontSize: 14, fontWeight: 600 }}>
{category} ({items.length})
</h4>
<span style={{ fontSize: 12, color: 'var(--text3)' }}>
{expandedCategories.has(category) ? '▼' : '▶'}
</span>
</div>
{expandedCategories.has(category) && (
<div style={{ display: 'grid', gap: 6, paddingLeft: 12 }}>
{items.map(item => (
<div
key={item.key}
onClick={() => handleSelect(item.key)}
style={{
padding: '8px 12px',
background: 'var(--surface2)',
borderRadius: 6,
cursor: 'pointer',
border: '1px solid transparent',
transition: 'all 0.2s'
}}
onMouseEnter={e => {
e.currentTarget.style.borderColor = 'var(--accent)'
e.currentTarget.style.background = 'var(--surface)'
}}
onMouseLeave={e => {
e.currentTarget.style.borderColor = 'transparent'
e.currentTarget.style.background = 'var(--surface2)'
}}
>
<div>
<div>
<code style={{
fontSize: 12,
fontWeight: 600,
color: 'var(--accent)',
fontFamily: 'monospace'
}}>
{`{{${item.key}}}`}
</code>
<div style={{
fontSize: 11,
color: 'var(--text2)',
marginTop: 2
}}>
{item.description}
</div>
</div>
{item.example && (
<div style={{
fontSize: 11,
color: 'var(--text3)',
fontFamily: 'monospace',
padding: '4px 8px',
background: 'var(--bg)',
borderRadius: 4,
marginTop: 6,
wordBreak: 'break-word'
}}>
<span style={{ fontSize: 9, opacity: 0.7, marginRight: 4 }}>Beispiel:</span>
{item.example}
</div>
)}
</div>
</div>
))}
</div>
)}
</div>
))
)}
</div>
{/* Footer */}
<div style={{
padding: 16,
borderTop: '1px solid var(--border)',
textAlign: 'center',
fontSize: 11,
color: 'var(--text3)'
}}>
Klicke auf einen Platzhalter zum Einfügen
</div>
</div>
</div>
)
}


@@ -0,0 +1,190 @@
import { useState } from 'react'
import { api } from '../utils/api'
export default function PromptGenerator({ onGenerated, onClose }) {
const [goal, setGoal] = useState('')
const [dataCategories, setDataCategories] = useState(['körper', 'ernährung'])
const [exampleOutput, setExampleOutput] = useState('')
const [exampleData, setExampleData] = useState(null)
const [generating, setGenerating] = useState(false)
const [loadingExample, setLoadingExample] = useState(false)
const categories = [
{ id: 'körper', label: 'Körper (Gewicht, KF, Umfänge)' },
{ id: 'ernährung', label: 'Ernährung (Kalorien, Makros)' },
{ id: 'training', label: 'Training (Volumen, Typen)' },
{ id: 'schlaf', label: 'Schlaf (Dauer, Qualität)' },
{ id: 'vitalwerte', label: 'Vitalwerte (RHR, HRV, VO2max)' },
{ id: 'ziele', label: 'Ziele (Fortschritt, Prognose)' }
]
const handleToggleCategory = (catId) => {
if (dataCategories.includes(catId)) {
setDataCategories(dataCategories.filter(c => c !== catId))
} else {
setDataCategories([...dataCategories, catId])
}
}
const handleShowExampleData = async () => {
try {
setLoadingExample(true)
const placeholders = await api.listPlaceholders()
setExampleData(placeholders)
} catch (e) {
alert('Fehler: ' + e.message)
} finally {
setLoadingExample(false)
}
}
const handleGenerate = async () => {
if (!goal.trim()) {
alert('Bitte Ziel beschreiben')
return
}
if (dataCategories.length === 0) {
alert('Bitte mindestens einen Datenbereich wählen')
return
}
try {
setGenerating(true)
const result = await api.generatePrompt({
goal,
data_categories: dataCategories,
example_output: exampleOutput || null
})
onGenerated(result)
} catch (e) {
alert('Fehler beim Generieren: ' + e.message)
} finally {
setGenerating(false)
}
}
return (
<div style={{
position:'fixed', inset:0, background:'rgba(0,0,0,0.6)',
display:'flex', alignItems:'center', justifyContent:'center',
zIndex:1001, padding:20
}}>
<div style={{
background:'var(--bg)', borderRadius:12, maxWidth:700, width:'100%',
maxHeight:'90vh', overflow:'auto', padding:24
}}>
<h2 style={{margin:'0 0 24px 0', fontSize:18, fontWeight:600}}>
🤖 KI-Prompt generieren
</h2>
{/* Step 1: Goal */}
<div style={{marginBottom:24}}>
<label className="form-label" style={{display:'block', marginBottom:6}}>
1 Was möchtest du analysieren?
</label>
<textarea
className="form-input"
value={goal}
onChange={e => setGoal(e.target.value)}
rows={3}
placeholder="Beispiel: Ich möchte wissen ob meine Proteinzufuhr ausreichend ist für Muskelaufbau und wie ich sie optimieren kann."
style={{width:'100%', textAlign:'left', resize:'vertical'}}
/>
</div>
{/* Step 2: Data Categories */}
<div style={{marginBottom:24}}>
<label className="form-label">
2 Welche Daten sollen analysiert werden?
</label>
<div style={{display:'grid', gridTemplateColumns:'1fr 1fr', gap:8}}>
{categories.map(cat => (
<label
key={cat.id}
style={{
display:'flex', alignItems:'center', gap:8,
padding:8, background:'var(--surface)', borderRadius:6,
cursor:'pointer', fontSize:13
}}
>
<input
type="checkbox"
checked={dataCategories.includes(cat.id)}
onChange={() => handleToggleCategory(cat.id)}
/>
{cat.label}
</label>
))}
</div>
<button
onClick={handleShowExampleData}
disabled={loadingExample}
style={{
marginTop:12, fontSize:12, padding:'6px 12px',
background:'var(--surface2)', border:'1px solid var(--border)',
borderRadius:6, cursor:'pointer'
}}
>
{loadingExample ? 'Lädt...' : '📊 Beispieldaten anzeigen'}
</button>
</div>
{/* Example Data */}
{exampleData && (
<div style={{
marginBottom:24, padding:12, background:'var(--surface2)',
borderRadius:8, fontSize:11, fontFamily:'monospace',
maxHeight:200, overflow:'auto'
}}>
<strong style={{fontFamily:'var(--font)'}}>Deine aktuellen Daten:</strong>
<pre style={{marginTop:8, whiteSpace:'pre-wrap'}}>
{JSON.stringify(exampleData, null, 2)}
</pre>
</div>
)}
{/* Step 3: Desired Format */}
<div style={{marginBottom:24}}>
<label className="form-label" style={{display:'block', marginBottom:6}}>
3 Gewünschtes Antwort-Format (optional)
</label>
<textarea
className="form-input"
value={exampleOutput}
onChange={e => setExampleOutput(e.target.value)}
rows={4}
placeholder={'Beispiel:\n## Analyse\n[Bewertung]\n\n## Empfehlungen\n- Punkt 1\n- Punkt 2'}
style={{width:'100%', textAlign:'left', fontFamily:'monospace', fontSize:12, resize:'vertical'}}
/>
</div>
{/* Info Box */}
<div style={{
marginBottom:24, padding:12, background:'var(--surface)',
border:'1px solid var(--border)', borderRadius:8, fontSize:12,
color:'var(--text3)'
}}>
<strong style={{color:'var(--text2)'}}>💡 Tipp:</strong> Je präziser deine Zielbeschreibung,
desto besser der generierte Prompt. Die KI wählt automatisch passende Platzhalter
und strukturiert die Analyse optimal.
</div>
{/* Actions */}
<div style={{display:'flex', gap:12}}>
<button
className="btn btn-primary"
onClick={handleGenerate}
disabled={generating || !goal.trim() || dataCategories.length === 0}
style={{flex:1}}
>
{generating ? '⏳ Generiere...' : '🚀 Prompt generieren'}
</button>
<button className="btn" onClick={onClose}>
Abbrechen
</button>
</div>
</div>
</div>
)
}


@@ -0,0 +1,872 @@
import { useState, useEffect, useRef } from 'react'
import { api } from '../utils/api'
import { X, Plus, Trash2, MoveUp, MoveDown, Code } from 'lucide-react'
import PlaceholderPicker from './PlaceholderPicker'
/**
* Unified Prompt Editor Modal (Issue #28 Phase 3)
*
* Supports both prompt types:
* - Base: Single reusable template
* - Pipeline: Multi-stage workflow with dynamic stages
*/
export default function UnifiedPromptModal({ prompt, onSave, onClose }) {
const [name, setName] = useState('')
const [slug, setSlug] = useState('')
const [displayName, setDisplayName] = useState('')
const [description, setDescription] = useState('')
const [type, setType] = useState('pipeline') // 'base' or 'pipeline'
const [category, setCategory] = useState('ganzheitlich')
const [active, setActive] = useState(true)
const [sortOrder, setSortOrder] = useState(0)
// Base prompt fields
const [template, setTemplate] = useState('')
const [outputFormat, setOutputFormat] = useState('text')
// Pipeline prompt fields
const [stages, setStages] = useState([
{
stage: 1,
prompts: []
}
])
// Available prompts for reference selection
const [availablePrompts, setAvailablePrompts] = useState([])
const [loading, setLoading] = useState(false)
const [error, setError] = useState(null)
const [showPlaceholderPicker, setShowPlaceholderPicker] = useState(false)
const [pickerTarget, setPickerTarget] = useState(null) // 'base' or {stage, promptIdx}
const [cursorPosition, setCursorPosition] = useState(null) // Track cursor position for insertion
const baseTemplateRef = useRef(null)
const stageTemplateRefs = useRef({}) // Map of stage_promptIdx -> ref
// Test functionality
const [testing, setTesting] = useState(false)
const [testResult, setTestResult] = useState(null)
const [showDebug, setShowDebug] = useState(false)
useEffect(() => {
loadAvailablePrompts()
if (prompt) {
// Edit mode
setName(prompt.name || '')
setSlug(prompt.slug || '')
setDisplayName(prompt.display_name || '')
setDescription(prompt.description || '')
setType(prompt.type || 'pipeline')
setCategory(prompt.category || 'ganzheitlich')
setActive(prompt.active ?? true)
setSortOrder(prompt.sort_order || 0)
setTemplate(prompt.template || '')
setOutputFormat(prompt.output_format || 'text')
// Parse stages if editing pipeline
if (prompt.type === 'pipeline' && prompt.stages) {
try {
const parsedStages = typeof prompt.stages === 'string'
? JSON.parse(prompt.stages)
: prompt.stages
setStages(parsedStages.length > 0 ? parsedStages : [{ stage: 1, prompts: [] }])
} catch (e) {
console.error('Failed to parse stages:', e)
setStages([{ stage: 1, prompts: [] }])
}
}
}
}, [prompt])
const loadAvailablePrompts = async () => {
try {
const prompts = await api.listAdminPrompts()
setAvailablePrompts(prompts)
} catch (e) {
setError('Fehler beim Laden der Prompts: ' + e.message)
}
}
const addStage = () => {
const nextStageNum = Math.max(...stages.map(s => s.stage), 0) + 1
setStages([...stages, { stage: nextStageNum, prompts: [] }])
}
const removeStage = (stageNum) => {
if (stages.length === 1) {
setError('Mindestens eine Stage erforderlich')
return
}
// Remove and renumber so stage numbers stay contiguous (matches moveStage)
setStages(stages.filter(s => s.stage !== stageNum).map((s, i) => ({ ...s, stage: i + 1 })))
}
const moveStage = (stageNum, direction) => {
const idx = stages.findIndex(s => s.stage === stageNum)
if (idx === -1) return
const newStages = [...stages]
if (direction === 'up' && idx > 0) {
[newStages[idx], newStages[idx - 1]] = [newStages[idx - 1], newStages[idx]]
} else if (direction === 'down' && idx < newStages.length - 1) {
[newStages[idx], newStages[idx + 1]] = [newStages[idx + 1], newStages[idx]]
}
// Renumber stages without mutating the objects held in React state
setStages(newStages.map((s, i) => ({ ...s, stage: i + 1 })))
}
const addPromptToStage = (stageNum) => {
setStages(stages.map(s => {
if (s.stage === stageNum) {
return {
...s,
prompts: [...s.prompts, {
source: 'inline',
template: '',
output_key: `output_${Date.now()}`,
output_format: 'text'
}]
}
}
return s
}))
}
const removePromptFromStage = (stageNum, promptIdx) => {
setStages(stages.map(s => {
if (s.stage === stageNum) {
return {
...s,
prompts: s.prompts.filter((_, i) => i !== promptIdx)
}
}
return s
}))
}
const updateStagePrompt = (stageNum, promptIdx, field, value) => {
setStages(stages.map(s => {
if (s.stage === stageNum) {
const newPrompts = [...s.prompts]
newPrompts[promptIdx] = { ...newPrompts[promptIdx], [field]: value }
return { ...s, prompts: newPrompts }
}
return s
}))
}
const handleSave = async () => {
// Validation
if (!name.trim() || !slug.trim()) {
setError('Name und Slug sind Pflichtfelder')
return
}
if (type === 'base' && !template.trim()) {
setError('Basis-Prompts benötigen ein Template')
return
}
if (type === 'pipeline' && stages.length === 0) {
setError('Pipeline-Prompts benötigen mindestens eine Stage')
return
}
if (type === 'pipeline') {
// Validate all stages have at least one prompt
const emptyStages = stages.filter(s => s.prompts.length === 0)
if (emptyStages.length > 0) {
setError(`Stage ${emptyStages[0].stage} hat keine Prompts`)
return
}
// Validate all prompts have required fields
for (const stage of stages) {
for (const p of stage.prompts) {
if (!p.output_key) {
setError(`Stage ${stage.stage}: Output-Key fehlt`)
return
}
if (p.source === 'inline' && !p.template) {
setError(`Stage ${stage.stage}: Inline-Prompt ohne Template`)
return
}
if (p.source === 'reference' && !p.slug) {
setError(`Stage ${stage.stage}: Referenz-Prompt ohne Slug`)
return
}
}
}
}
setLoading(true)
setError(null)
try {
const data = {
name,
slug,
display_name: displayName,
description,
type,
category,
active,
sort_order: sortOrder,
output_format: outputFormat
}
if (type === 'base') {
data.template = template
data.stages = null
} else {
data.template = null
data.stages = stages
}
if (prompt?.id) {
await api.updateUnifiedPrompt(prompt.id, data)
} else {
await api.createUnifiedPrompt(data)
}
onSave()
} catch (e) {
setError(e.message)
} finally {
setLoading(false)
}
}
const handleExport = () => {
// Export complete prompt configuration as JSON
const exportData = {
name,
slug,
display_name: displayName,
description,
type,
category,
active,
sort_order: sortOrder,
output_format: outputFormat,
template: type === 'base' ? template : null,
stages: type === 'pipeline' ? stages : null
}
const blob = new Blob([JSON.stringify(exportData, null, 2)], { type: 'application/json' })
const url = URL.createObjectURL(blob)
const a = document.createElement('a')
a.href = url
a.download = `prompt-${slug || 'new'}-${new Date().toISOString().split('T')[0]}.json`
document.body.appendChild(a)
a.click()
document.body.removeChild(a)
URL.revokeObjectURL(url)
}
const handleTest = async () => {
// Can only test existing prompts (need slug in database)
if (!prompt?.slug) {
setError('Bitte erst speichern, dann testen')
return
}
setTesting(true)
setError(null)
setTestResult(null)
try {
const result = await api.executeUnifiedPrompt(prompt.slug, null, null, true)
setTestResult(result)
setShowDebug(true)
} catch (e) {
// Show error AND try to extract debug info from error
const errorMsg = e.message
let debugData = null
// Try to parse error message for embedded debug info
try {
const parsed = JSON.parse(errorMsg)
if (parsed.detail) {
setError('Test-Fehler: ' + parsed.detail)
debugData = parsed
} else {
setError('Test-Fehler: ' + errorMsg)
}
} catch {
setError('Test-Fehler: ' + errorMsg)
}
// Set result with error info so debug viewer shows it
setTestResult({
error: true,
error_message: errorMsg,
debug: debugData || { error: errorMsg }
})
setShowDebug(true) // ALWAYS show debug on test, even on error
} finally {
setTesting(false)
}
}
const handleExportPlaceholders = () => {
if (!testResult) return
// Extract all placeholder data from test result
const debug = testResult.debug || testResult
const exportData = {
export_date: new Date().toISOString(),
prompt_slug: prompt?.slug || 'unknown',
prompt_name: name || 'Unnamed Prompt',
placeholders: {}
}
// For pipeline prompts, collect from all stages
if (debug.stages && Array.isArray(debug.stages)) {
debug.stages.forEach(stage => {
exportData.placeholders[`stage_${stage.stage}`] = {
available_variables: stage.available_variables || [],
prompts: stage.prompts?.map(p => ({
source: p.source,
resolved: p.resolved_placeholders || p.ref_debug?.resolved_placeholders || {},
unresolved: p.unresolved_placeholders || p.ref_debug?.unresolved_placeholders || []
})) || []
}
})
}
// For base prompts or direct execution
if (debug.resolved_placeholders) {
exportData.placeholders.resolved = debug.resolved_placeholders
}
if (debug.unresolved_placeholders) {
exportData.placeholders.unresolved = debug.unresolved_placeholders
}
if (debug.available_variables) {
exportData.available_variables = debug.available_variables
}
if (debug.initial_variables) {
exportData.initial_variables = debug.initial_variables
}
// Download as JSON
const blob = new Blob([JSON.stringify(exportData, null, 2)], { type: 'application/json' })
const url = URL.createObjectURL(blob)
const a = document.createElement('a')
a.href = url
a.download = `placeholders-${prompt?.slug || 'test'}-${new Date().toISOString().split('T')[0]}.json`
document.body.appendChild(a)
a.click()
document.body.removeChild(a)
URL.revokeObjectURL(url)
}
return (
<div style={{
position: 'fixed', inset: 0, background: 'rgba(0,0,0,0.5)',
display: 'flex', alignItems: 'center', justifyContent: 'center',
zIndex: 1000, padding: 20, overflow: 'auto'
}}>
<div style={{
background: 'var(--bg)', borderRadius: 12, maxWidth: 1000, width: '100%',
maxHeight: '90vh', overflow: 'auto', padding: 24
}}>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: 24 }}>
<h2 style={{ margin: 0, fontSize: 20, fontWeight: 600 }}>
{prompt ? 'Prompt bearbeiten' : 'Neuer Prompt'}
</h2>
<button
onClick={onClose}
style={{ background: 'none', border: 'none', cursor: 'pointer', padding: 4 }}
>
<X size={24} color="var(--text3)" />
</button>
</div>
{error && (
<div style={{
padding: 12, background: '#fee', color: '#c00', borderRadius: 8, marginBottom: 16, fontSize: 13
}}>
{error}
</div>
)}
{/* Basic Info */}
<div style={{ display: 'grid', gap: 16, marginBottom: 24 }}>
<div style={{ display: 'grid', gridTemplateColumns: '1fr 1fr', gap: 12 }}>
<div>
<label className="form-label">Name *</label>
<input
className="form-input"
value={name}
onChange={e => setName(e.target.value)}
placeholder="Interner Name"
style={{ width: '100%', textAlign: 'left' }}
/>
</div>
<div>
<label className="form-label">Slug *</label>
<input
className="form-input"
value={slug}
onChange={e => setSlug(e.target.value)}
placeholder="technischer_name"
style={{ width: '100%', textAlign: 'left' }}
disabled={!!prompt}
/>
</div>
</div>
<div>
<label className="form-label">Anzeigename</label>
<input
className="form-input"
value={displayName}
onChange={e => setDisplayName(e.target.value)}
placeholder="Name für Benutzer (optional)"
style={{ width: '100%', textAlign: 'left' }}
/>
</div>
<div>
<label className="form-label">Beschreibung</label>
<textarea
className="form-input"
value={description}
onChange={e => setDescription(e.target.value)}
rows={2}
placeholder="Kurze Beschreibung des Prompts"
style={{ width: '100%', textAlign: 'left', resize: 'vertical' }}
/>
</div>
<div style={{ display: 'grid', gridTemplateColumns: '1fr 1fr 1fr', gap: 12 }}>
<div>
<label className="form-label">Typ *</label>
<select
className="form-select"
value={type}
onChange={e => setType(e.target.value)}
style={{ width: '100%' }}
disabled={!!prompt}
>
<option value="base">Basis-Prompt</option>
<option value="pipeline">Pipeline</option>
</select>
</div>
<div>
<label className="form-label">Kategorie</label>
<select
className="form-select"
value={category}
onChange={e => setCategory(e.target.value)}
style={{ width: '100%' }}
>
<option value="ganzheitlich">Ganzheitlich</option>
<option value="körper">Körper</option>
<option value="ernährung">Ernährung</option>
<option value="training">Training</option>
<option value="schlaf">Schlaf</option>
<option value="pipeline">Pipeline</option>
</select>
</div>
<div>
<label className="form-label">Output-Format</label>
<select
className="form-select"
value={outputFormat}
onChange={e => setOutputFormat(e.target.value)}
style={{ width: '100%' }}
>
<option value="text">Text</option>
<option value="json">JSON</option>
</select>
</div>
</div>
<div style={{ display: 'flex', gap: 16, alignItems: 'center' }}>
<label style={{ display: 'flex', alignItems: 'center', gap: 6, cursor: 'pointer' }}>
<input
type="checkbox"
checked={active}
onChange={e => setActive(e.target.checked)}
/>
<span style={{ fontSize: 13 }}>Aktiv</span>
</label>
<div style={{ display: 'flex', alignItems: 'center', gap: 6 }}>
<label className="form-label" style={{ margin: 0 }}>Sortierung:</label>
<input
type="number"
className="form-input"
value={sortOrder}
onChange={e => setSortOrder(parseInt(e.target.value, 10) || 0)}
style={{ width: 80, padding: '4px 8px' }}
/>
</div>
</div>
</div>
{/* Type-specific editor */}
{type === 'base' && (
<div style={{ marginBottom: 24 }}>
<h3 style={{ fontSize: 16, fontWeight: 600, marginBottom: 12 }}>Template</h3>
<textarea
ref={baseTemplateRef}
className="form-input"
value={template}
onChange={e => setTemplate(e.target.value)}
onClick={e => setCursorPosition(e.target.selectionStart)}
onKeyUp={e => setCursorPosition(e.target.selectionStart)}
rows={12}
placeholder="Prompt-Template mit {{placeholders}}..."
style={{ width: '100%', textAlign: 'left', resize: 'vertical', fontFamily: 'monospace', fontSize: 12 }}
/>
<div style={{ fontSize: 11, color: 'var(--text3)', marginTop: 4 }}>
<button
className="btn"
onClick={() => {
setPickerTarget('base')
setShowPlaceholderPicker(true)
}}
style={{
fontSize: 12,
padding: '6px 12px',
display: 'flex',
alignItems: 'center',
gap: 6
}}
>
<Code size={14} />
Platzhalter einfügen
</button>
</div>
</div>
)}
{type === 'pipeline' && (
<div style={{ marginBottom: 24 }}>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: 12 }}>
<h3 style={{ fontSize: 16, fontWeight: 600, margin: 0 }}>Stages ({stages.length})</h3>
<button className="btn" onClick={addStage} style={{ display: 'flex', alignItems: 'center', gap: 6 }}>
<Plus size={16} /> Stage hinzufügen
</button>
</div>
{stages.map((stage, sIdx) => (
<div key={stage.stage} style={{
background: 'var(--surface)', padding: 16, borderRadius: 8,
border: '1px solid var(--border)', marginBottom: 12
}}>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: 12 }}>
<h4 style={{ margin: 0, fontSize: 14, fontWeight: 600 }}>Stage {stage.stage}</h4>
<div style={{ display: 'flex', gap: 4 }}>
{sIdx > 0 && (
<button
className="btn"
onClick={() => moveStage(stage.stage, 'up')}
style={{ padding: '4px 8px' }}
>
<MoveUp size={14} />
</button>
)}
{sIdx < stages.length - 1 && (
<button
className="btn"
onClick={() => moveStage(stage.stage, 'down')}
style={{ padding: '4px 8px' }}
>
<MoveDown size={14} />
</button>
)}
<button
className="btn"
onClick={() => removeStage(stage.stage)}
style={{ padding: '4px 8px', color: 'var(--danger)' }}
disabled={stages.length === 1}
>
<Trash2 size={14} />
</button>
</div>
</div>
{/* Prompts in this stage */}
{stage.prompts.map((p, pIdx) => (
<div key={pIdx} style={{
background: 'var(--bg)', padding: 12, borderRadius: 6, marginBottom: 8,
border: '1px solid var(--border)'
}}>
<div style={{ display: 'grid', gap: 8 }}>
<div style={{ display: 'grid', gridTemplateColumns: '120px 1fr 120px auto', gap: 8, alignItems: 'center' }}>
<select
className="form-select"
value={p.source || 'inline'}
onChange={e => updateStagePrompt(stage.stage, pIdx, 'source', e.target.value)}
style={{ fontSize: 12 }}
>
<option value="inline">Inline</option>
<option value="reference">Referenz</option>
</select>
<input
className="form-input"
value={p.output_key || ''}
onChange={e => updateStagePrompt(stage.stage, pIdx, 'output_key', e.target.value)}
placeholder="output_key"
style={{ fontSize: 12, textAlign: 'left' }}
/>
<select
className="form-select"
value={p.output_format || 'text'}
onChange={e => updateStagePrompt(stage.stage, pIdx, 'output_format', e.target.value)}
style={{ fontSize: 12 }}
>
<option value="text">Text</option>
<option value="json">JSON</option>
</select>
<button
onClick={() => removePromptFromStage(stage.stage, pIdx)}
style={{ background: 'none', border: 'none', cursor: 'pointer', padding: 4 }}
>
<Trash2 size={16} color="var(--danger)" />
</button>
</div>
{p.source === 'reference' ? (
<select
className="form-select"
value={p.slug || ''}
onChange={e => updateStagePrompt(stage.stage, pIdx, 'slug', e.target.value)}
style={{ fontSize: 12 }}
>
<option value="">-- Prompt wählen --</option>
{availablePrompts
.filter(ap => ap.type === 'base' || !ap.type)
.map(ap => (
<option key={ap.slug} value={ap.slug}>
{ap.display_name || ap.name} ({ap.slug})
</option>
))}
</select>
) : (
<div>
<textarea
ref={el => stageTemplateRefs.current[`${stage.stage}_${pIdx}`] = el}
className="form-input"
value={p.template || ''}
onChange={e => updateStagePrompt(stage.stage, pIdx, 'template', e.target.value)}
onClick={e => {
setCursorPosition(e.target.selectionStart)
setPickerTarget({ stage: stage.stage, promptIdx: pIdx })
}}
onKeyUp={e => {
setCursorPosition(e.target.selectionStart)
setPickerTarget({ stage: stage.stage, promptIdx: pIdx })
}}
rows={3}
placeholder="Inline-Template mit {{placeholders}}..."
style={{ width: '100%', fontSize: 12, textAlign: 'left', resize: 'vertical', fontFamily: 'monospace' }}
/>
<div style={{ fontSize: 10, color: 'var(--text3)', marginTop: 4 }}>
<button
className="btn"
onClick={() => {
setPickerTarget({ stage: stage.stage, promptIdx: pIdx })
setShowPlaceholderPicker(true)
}}
style={{
fontSize: 11,
padding: '4px 8px',
display: 'flex',
alignItems: 'center',
gap: 4
}}
>
<Code size={12} />
Platzhalter
</button>
</div>
</div>
)}
</div>
</div>
))}
<button
className="btn"
onClick={() => addPromptToStage(stage.stage)}
style={{ fontSize: 12, display: 'flex', alignItems: 'center', gap: 4 }}
>
<Plus size={14} /> Prompt hinzufügen
</button>
</div>
))}
</div>
)}
{/* Debug Output */}
{showDebug && testResult && (
<div style={{
marginTop: 16,
padding: 16,
background: 'var(--surface2)',
borderRadius: 8,
border: '1px solid var(--border)'
}}>
<div style={{
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center',
marginBottom: 12
}}>
<h3 style={{ margin: 0, fontSize: 14, fontWeight: 600 }}>
🔬 Debug-Info
</h3>
<div style={{ display: 'flex', gap: 8, alignItems: 'center' }}>
<button
className="btn"
onClick={handleExportPlaceholders}
style={{
padding: '4px 8px',
fontSize: 11,
background: 'var(--accent)',
color: 'white'
}}
title="Exportiere alle Platzhalter mit Werten als JSON"
>
📋 Platzhalter exportieren
</button>
<button
onClick={() => setShowDebug(false)}
style={{ background: 'none', border: 'none', cursor: 'pointer', padding: 4 }}
>
<X size={16} color="var(--text3)" />
</button>
</div>
</div>
<pre style={{
fontSize: 11,
fontFamily: 'monospace',
background: 'var(--bg)',
padding: 12,
borderRadius: 6,
overflow: 'auto',
maxHeight: 400,
lineHeight: 1.5,
color: 'var(--text2)'
}}>
{JSON.stringify(testResult.debug || testResult, null, 2)}
</pre>
</div>
)}
{/* Actions */}
<div style={{
display: 'flex', gap: 12, justifyContent: 'space-between',
paddingTop: 16, borderTop: '1px solid var(--border)'
}}>
<div style={{ display: 'flex', gap: 8 }}>
<button
className="btn"
onClick={handleTest}
disabled={testing || loading}
style={{
background: testing ? 'var(--surface)' : 'var(--accent)',
color: testing ? 'var(--text3)' : 'white'
}}
title={!prompt?.slug ? 'Bitte erst speichern, dann testen' : 'Test mit Debug-Modus ausführen'}
>
{testing ? '🔬 Teste...' : '🔬 Test ausführen'}
</button>
<button
className="btn"
onClick={handleExport}
disabled={loading}
title="Exportiere Prompt-Konfiguration als JSON"
>
📥 Export
</button>
</div>
<div style={{ display: 'flex', gap: 12 }}>
<button className="btn" onClick={onClose}>
Abbrechen
</button>
<button
className="btn btn-primary"
onClick={handleSave}
disabled={loading}
>
{loading ? 'Speichern...' : 'Speichern'}
</button>
</div>
</div>
</div>
{/* Placeholder Picker */}
{showPlaceholderPicker && (
<PlaceholderPicker
onSelect={(placeholder) => {
if (pickerTarget === 'base') {
// Insert into base template at cursor position
const pos = cursorPosition ?? template.length
const newTemplate = template.slice(0, pos) + placeholder + template.slice(pos)
setTemplate(newTemplate)
// Restore focus and cursor position after insertion
setTimeout(() => {
if (baseTemplateRef.current) {
baseTemplateRef.current.focus()
const newPos = pos + placeholder.length
baseTemplateRef.current.setSelectionRange(newPos, newPos)
setCursorPosition(newPos)
}
}, 0)
} else if (pickerTarget && typeof pickerTarget === 'object') {
// Insert into pipeline stage template at cursor position
const { stage: stageNum, promptIdx } = pickerTarget
setStages(stages.map(s => {
if (s.stage === stageNum) {
const newPrompts = [...s.prompts]
const currentTemplate = newPrompts[promptIdx].template || ''
const pos = cursorPosition ?? currentTemplate.length
const newTemplate = currentTemplate.slice(0, pos) + placeholder + currentTemplate.slice(pos)
newPrompts[promptIdx] = {
...newPrompts[promptIdx],
template: newTemplate
}
// Restore focus and cursor position
setTimeout(() => {
const refKey = `${stageNum}_${promptIdx}`
const textarea = stageTemplateRefs.current[refKey]
if (textarea) {
textarea.focus()
const newPos = pos + placeholder.length
textarea.setSelectionRange(newPos, newPos)
setCursorPosition(newPos)
}
}, 0)
return { ...s, prompts: newPrompts }
}
return s
}))
}
}}
onClose={() => {
setShowPlaceholderPicker(false)
setPickerTarget(null)
}}
/>
)}
</div>
)
}
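The save handler above enforces a small set of invariants on the `stages` array: at least one stage, every stage with at least one prompt, every prompt with an `output_key`, inline prompts with a `template`, and reference prompts with a `slug`. A standalone sketch of that check (function name is illustrative, not part of the component) is useful e.g. for validating imported JSON before calling the API:

```javascript
// Minimal re-statement of the stage validation in handleSave.
// Returns null when valid, otherwise a human-readable error message.
function validateStages(stages) {
  if (!Array.isArray(stages) || stages.length === 0) {
    return 'Mindestens eine Stage erforderlich'
  }
  for (const stage of stages) {
    if (!stage.prompts || stage.prompts.length === 0) {
      return `Stage ${stage.stage} hat keine Prompts`
    }
    for (const p of stage.prompts) {
      if (!p.output_key) return `Stage ${stage.stage}: Output-Key fehlt`
      if (p.source === 'inline' && !p.template) return `Stage ${stage.stage}: Inline-Prompt ohne Template`
      if (p.source === 'reference' && !p.slug) return `Stage ${stage.stage}: Referenz-Prompt ohne Slug`
    }
  }
  return null
}
```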

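The placeholder picker's insert logic splices the placeholder into the template string at the tracked caret position and computes the new caret for `setSelectionRange`. The pure part of that logic can be sketched in isolation (helper name and return shape are illustrative):

```javascript
// Insert-at-cursor as used by the PlaceholderPicker callback: slice the
// template around the caret and report where the caret should land next.
function insertAtCursor(template, placeholder, cursorPosition) {
  const pos = cursorPosition ?? template.length // no tracked caret: append at end
  return {
    template: template.slice(0, pos) + placeholder + template.slice(pos),
    nextCursor: pos + placeholder.length,
  }
}
```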

@@ -451,6 +451,23 @@ export default function AdminPanel() {
</Link>
</div>
</div>
{/* KI-Prompts Section */}
<div className="card">
<div style={{fontWeight:700,fontSize:14,marginBottom:12,display:'flex',alignItems:'center',gap:6}}>
<Settings size={16} color="var(--accent)"/> KI-Prompts (v9f)
</div>
<div style={{fontSize:12,color:'var(--text3)',marginBottom:12,lineHeight:1.5}}>
Verwalte AI-Prompts mit KI-Unterstützung: Generiere, optimiere und organisiere Prompts.
</div>
<div style={{display:'grid',gap:8}}>
<Link to="/admin/prompts">
<button className="btn btn-secondary btn-full">
🤖 KI-Prompts verwalten
</button>
</Link>
</div>
</div>
</div>
)
}


@@ -0,0 +1,588 @@
import { useState, useEffect } from 'react'
import { api } from '../utils/api'
import UnifiedPromptModal from '../components/UnifiedPromptModal'
import { Star, Trash2, Edit, Copy, Filter, ArrowDownToLine } from 'lucide-react'
/**
* Admin Prompts Page - Unified System (Issue #28 Phase 3)
*
* Manages both base and pipeline-type prompts in one interface.
*/
export default function AdminPromptsPage() {
const [prompts, setPrompts] = useState([])
const [filteredPrompts, setFilteredPrompts] = useState([])
const [typeFilter, setTypeFilter] = useState('all') // 'all' | 'base' | 'pipeline'
const [category, setCategory] = useState('all')
const [loading, setLoading] = useState(true)
const [error, setError] = useState(null)
const [editingPrompt, setEditingPrompt] = useState(null)
const [showNewPrompt, setShowNewPrompt] = useState(false)
const [importing, setImporting] = useState(false)
const [importResult, setImportResult] = useState(null)
const categories = [
{ id: 'all', label: 'Alle Kategorien' },
{ id: 'körper', label: 'Körper' },
{ id: 'ernährung', label: 'Ernährung' },
{ id: 'training', label: 'Training' },
{ id: 'schlaf', label: 'Schlaf' },
{ id: 'vitalwerte', label: 'Vitalwerte' },
{ id: 'ziele', label: 'Ziele' },
{ id: 'ganzheitlich', label: 'Ganzheitlich' },
{ id: 'pipeline', label: 'Pipeline' }
]
useEffect(() => {
loadPrompts()
}, [])
useEffect(() => {
let filtered = prompts
// Filter by type
if (typeFilter === 'base') {
filtered = filtered.filter(p => p.type === 'base')
} else if (typeFilter === 'pipeline') {
filtered = filtered.filter(p => p.type === 'pipeline')
}
// Filter by category
if (category !== 'all') {
filtered = filtered.filter(p => p.category === category)
}
setFilteredPrompts(filtered)
}, [typeFilter, category, prompts])
const loadPrompts = async () => {
try {
setLoading(true)
const data = await api.listAdminPrompts()
setPrompts(data)
setError(null)
} catch (e) {
setError(e.message)
} finally {
setLoading(false)
}
}
const handleToggleActive = async (prompt) => {
try {
await api.updateUnifiedPrompt(prompt.id, { active: !prompt.active })
await loadPrompts()
} catch (e) {
alert('Fehler: ' + e.message)
}
}
const handleDelete = async (prompt) => {
if (!confirm(`Prompt "${prompt.name}" wirklich löschen?`)) return
try {
await api.deletePrompt(prompt.id)
await loadPrompts()
} catch (e) {
alert('Fehler: ' + e.message)
}
}
const handleDuplicate = async (prompt) => {
try {
await api.duplicatePrompt(prompt.id)
await loadPrompts()
} catch (e) {
alert('Fehler: ' + e.message)
}
}
const handleConvertToBase = async (prompt) => {
// Convert a 1-stage pipeline to a base prompt
if (prompt.type !== 'pipeline') {
alert('Nur Pipeline-Prompts können konvertiert werden')
return
}
const stages = typeof prompt.stages === 'string'
? JSON.parse(prompt.stages)
: prompt.stages
if (!stages || stages.length !== 1) {
alert('Nur 1-stage Pipeline-Prompts können zu Basis-Prompts konvertiert werden')
return
}
const stage1 = stages[0]
if (!stage1.prompts || stage1.prompts.length !== 1) {
alert('Stage muss genau einen Prompt haben')
return
}
const firstPrompt = stage1.prompts[0]
if (firstPrompt.source !== 'inline' || !firstPrompt.template) {
alert('Nur inline Templates können konvertiert werden')
return
}
if (!confirm(`"${prompt.name}" zu Basis-Prompt konvertieren?`)) return
try {
await api.updateUnifiedPrompt(prompt.id, {
type: 'base',
template: firstPrompt.template,
output_format: firstPrompt.output_format || 'text',
stages: null
})
await loadPrompts()
} catch (e) {
alert('Fehler: ' + e.message)
}
}
const handleSave = async () => {
setEditingPrompt(null)
setShowNewPrompt(false)
await loadPrompts()
}
const getStageCount = (prompt) => {
if (prompt.type !== 'pipeline' || !prompt.stages) return 0
try {
const stages = typeof prompt.stages === 'string'
? JSON.parse(prompt.stages)
: prompt.stages
return stages.length
} catch (e) {
return 0
}
}
const getTypeLabel = (type) => {
if (type === 'base') return 'Basis'
if (type === 'pipeline') return 'Pipeline'
return type || 'Pipeline' // Default for old prompts
}
const getTypeColor = (type) => {
if (type === 'base') return 'var(--accent)'
if (type === 'pipeline') return '#6366f1'
return 'var(--text3)'
}
const handleExportAll = async () => {
try {
const data = await api.exportAllPrompts()
const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' })
const url = URL.createObjectURL(blob)
const a = document.createElement('a')
a.href = url
a.download = `all-prompts-${new Date().toISOString().split('T')[0]}.json`
document.body.appendChild(a)
a.click()
document.body.removeChild(a)
URL.revokeObjectURL(url)
} catch (e) {
setError('Export-Fehler: ' + e.message)
}
}
const handleImport = async (event) => {
const file = event.target.files[0]
if (!file) return
setImporting(true)
setError(null)
setImportResult(null)
try {
const text = await file.text()
const data = JSON.parse(text)
// Ask user about overwrite
const overwrite = confirm(
'Bestehende Prompts überschreiben?\n\n' +
'JA = Existierende Prompts aktualisieren\n' +
'NEIN = Nur neue Prompts erstellen, Duplikate überspringen'
)
const result = await api.importPrompts(data, overwrite)
setImportResult(result)
await loadPrompts()
} catch (e) {
setError('Import-Fehler: ' + e.message)
} finally {
setImporting(false)
event.target.value = '' // Reset file input
}
}
return (
<div style={{
padding: 20,
maxWidth: 1400,
margin: '0 auto',
paddingBottom: 80
}}>
<div style={{
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center',
marginBottom: 24
}}>
<h1 style={{ margin: 0, fontSize: 24, fontWeight: 600 }}>
KI-Prompts ({filteredPrompts.length})
</h1>
<div style={{ display: 'flex', gap: 8 }}>
<button
className="btn"
onClick={handleExportAll}
title="Alle Prompts als JSON exportieren (Backup / Dev→Prod Sync)"
>
📦 Alle exportieren
</button>
<label className="btn" style={{ margin: 0, cursor: 'pointer' }}>
📥 Importieren
<input
type="file"
accept=".json"
onChange={handleImport}
disabled={importing}
style={{ display: 'none' }}
/>
</label>
<button
className="btn btn-primary"
onClick={() => setShowNewPrompt(true)}
>
+ Neuer Prompt
</button>
</div>
</div>
{error && (
<div style={{
padding: 16,
background: '#fee',
color: '#c00',
borderRadius: 8,
marginBottom: 16
}}>
{error}
</div>
)}
{importResult && (
<div style={{
padding: 16,
background: '#efe',
color: '#060',
borderRadius: 8,
marginBottom: 16
}}>
<div style={{ fontWeight: 600, marginBottom: 4 }}>
Import erfolgreich
</div>
<div style={{ fontSize: 13 }}>
{importResult.created} erstellt · {importResult.updated} aktualisiert · {importResult.skipped} übersprungen
</div>
<button
onClick={() => setImportResult(null)}
style={{ marginTop: 8, fontSize: 12, padding: '4px 8px' }}
className="btn"
>
OK
</button>
</div>
)}
{/* Filters */}
<div style={{
display: 'flex',
gap: 12,
marginBottom: 24,
flexWrap: 'wrap',
alignItems: 'center'
}}>
<div style={{ display: 'flex', alignItems: 'center', gap: 6 }}>
<Filter size={16} color="var(--text3)" />
<span style={{ fontSize: 13, color: 'var(--text3)' }}>Typ:</span>
</div>
<div style={{ display: 'flex', gap: 6 }}>
<button
className={typeFilter === 'all' ? 'btn btn-primary' : 'btn'}
onClick={() => setTypeFilter('all')}
style={{ fontSize: 13, padding: '6px 12px' }}
>
Alle ({prompts.length})
</button>
<button
className={typeFilter === 'base' ? 'btn btn-primary' : 'btn'}
onClick={() => setTypeFilter('base')}
style={{ fontSize: 13, padding: '6px 12px' }}
>
Basis-Prompts ({prompts.filter(p => p.type === 'base').length})
</button>
<button
className={typeFilter === 'pipeline' ? 'btn btn-primary' : 'btn'}
onClick={() => setTypeFilter('pipeline')}
style={{ fontSize: 13, padding: '6px 12px' }}
>
Pipelines ({prompts.filter(p => p.type === 'pipeline' || !p.type).length})
</button>
</div>
<div style={{
width: 1,
height: 24,
background: 'var(--border)',
margin: '0 8px'
}} />
<select
className="form-select"
value={category}
onChange={e => setCategory(e.target.value)}
style={{ fontSize: 13, padding: '6px 12px' }}
>
{categories.map(cat => (
<option key={cat.id} value={cat.id}>
{cat.label}
</option>
))}
</select>
</div>
{/* Prompts Table */}
{loading ? (
<div style={{ textAlign: 'center', padding: 40, color: 'var(--text3)' }}>
Lädt...
</div>
) : (
<div style={{
background: 'var(--surface)',
borderRadius: 12,
border: '1px solid var(--border)',
overflowX: 'auto'
}}>
<table style={{ width: '100%', minWidth: 900, borderCollapse: 'collapse' }}>
<thead>
<tr style={{
background: 'var(--surface2)',
borderBottom: '1px solid var(--border)'
}}>
<th style={{
padding: 12,
textAlign: 'left',
fontSize: 12,
fontWeight: 600,
color: 'var(--text3)',
width: 80
}}>
Typ
</th>
<th style={{
padding: 12,
textAlign: 'left',
fontSize: 12,
fontWeight: 600,
color: 'var(--text3)'
}}>
Name
</th>
<th style={{
padding: 12,
textAlign: 'left',
fontSize: 12,
fontWeight: 600,
color: 'var(--text3)',
width: 120
}}>
Kategorie
</th>
<th style={{
padding: 12,
textAlign: 'center',
fontSize: 12,
fontWeight: 600,
color: 'var(--text3)',
width: 100
}}>
Stages
</th>
<th style={{
padding: 12,
textAlign: 'center',
fontSize: 12,
fontWeight: 600,
color: 'var(--text3)',
width: 80
}}>
Status
</th>
<th style={{
padding: 12,
textAlign: 'right',
fontSize: 12,
fontWeight: 600,
color: 'var(--text3)',
width: 120
}}>
Aktionen
</th>
</tr>
</thead>
<tbody>
{filteredPrompts.length === 0 ? (
<tr>
<td colSpan="6" style={{
padding: 40,
textAlign: 'center',
color: 'var(--text3)'
}}>
Keine Prompts gefunden
</td>
</tr>
) : (
filteredPrompts.map(prompt => (
<tr
key={prompt.id}
style={{
borderBottom: '1px solid var(--border)',
opacity: prompt.active ? 1 : 0.5
}}
>
<td style={{ padding: 12 }}>
<div style={{
display: 'inline-block',
padding: '2px 8px',
background: getTypeColor(prompt.type) + '20',
color: getTypeColor(prompt.type),
borderRadius: 6,
fontSize: 11,
fontWeight: 600
}}>
{getTypeLabel(prompt.type)}
</div>
</td>
<td style={{ padding: 12 }}>
<div>
<div style={{ fontSize: 14, fontWeight: 500 }}>
{prompt.display_name || prompt.name}
</div>
<div style={{ fontSize: 11, color: 'var(--text3)', marginTop: 2 }}>
{prompt.slug}
</div>
</div>
</td>
<td style={{ padding: 12, fontSize: 13 }}>
{prompt.category || 'ganzheitlich'}
</td>
<td style={{ padding: 12, textAlign: 'center', fontSize: 13 }}>
{prompt.type === 'pipeline' ? (
<span style={{
background: 'var(--surface2)',
padding: '2px 6px',
borderRadius: 4,
fontSize: 11
}}>
{getStageCount(prompt)} Stages
</span>
) : (
<span style={{ color: 'var(--text3)', fontSize: 11 }}></span>
)}
</td>
<td style={{ padding: 12, textAlign: 'center' }}>
<label style={{
display: 'inline-flex',
alignItems: 'center',
cursor: 'pointer'
}}>
<input
type="checkbox"
checked={prompt.active}
onChange={() => handleToggleActive(prompt)}
style={{ margin: 0 }}
/>
</label>
</td>
<td style={{ padding: 12 }}>
<div style={{
display: 'flex',
gap: 6,
justifyContent: 'flex-end'
}}>
<button
onClick={() => setEditingPrompt(prompt)}
style={{
background: 'none',
border: 'none',
cursor: 'pointer',
padding: 4
}}
title="Bearbeiten"
>
<Edit size={16} color="var(--accent)" />
</button>
{/* Show convert button for 1-stage pipelines */}
{prompt.type === 'pipeline' && getStageCount(prompt) === 1 && (
<button
onClick={() => handleConvertToBase(prompt)}
style={{
background: 'none',
border: 'none',
cursor: 'pointer',
padding: 4
}}
title="Zu Basis-Prompt konvertieren"
>
<ArrowDownToLine size={16} color="#6366f1" />
</button>
)}
<button
onClick={() => handleDuplicate(prompt)}
style={{
background: 'none',
border: 'none',
cursor: 'pointer',
padding: 4
}}
title="Duplizieren"
>
<Copy size={16} color="var(--text3)" />
</button>
<button
onClick={() => handleDelete(prompt)}
style={{
background: 'none',
border: 'none',
cursor: 'pointer',
padding: 4
}}
title="Löschen"
>
<Trash2 size={16} color="var(--danger)" />
</button>
</div>
</td>
</tr>
))
)}
</tbody>
</table>
</div>
)}
{/* Unified Prompt Modal */}
{(editingPrompt || showNewPrompt) && (
<UnifiedPromptModal
prompt={editingPrompt}
onSave={handleSave}
onClose={() => {
setEditingPrompt(null)
setShowNewPrompt(false)
}}
/>
)}
</div>
)
}


@@ -1,5 +1,5 @@
import { useState, useEffect } from 'react'
import { Brain, Pencil, Trash2, ChevronDown, ChevronUp, Check, X } from 'lucide-react'
import React, { useState, useEffect } from 'react'
import { Brain, Trash2, ChevronDown, ChevronUp } from 'lucide-react'
import { api } from '../utils/api'
import { useAuth } from '../context/AuthContext'
import Markdown from '../utils/Markdown'
@@ -8,30 +8,83 @@ import dayjs from 'dayjs'
import 'dayjs/locale/de'
dayjs.locale('de')
// Legacy fallback labels (display_name takes precedence)
const SLUG_LABELS = {
gesamt: '🔍 Gesamtanalyse',
koerper: '🫧 Körperkomposition',
ernaehrung: '🍽️ Ernährung',
aktivitaet: '🏋️ Aktivität',
gesundheit: '❤️ Gesundheitsindikatoren',
ziele: '🎯 Zielfortschritt',
pipeline: '🔬 Mehrstufige Gesamtanalyse',
pipeline_body: '🔬 Pipeline: Körper-Analyse (JSON)',
pipeline_nutrition: '🔬 Pipeline: Ernährungs-Analyse (JSON)',
pipeline_activity: '🔬 Pipeline: Aktivitäts-Analyse (JSON)',
pipeline_synthesis: '🔬 Pipeline: Synthese',
pipeline_goals: '🔬 Pipeline: Zielabgleich',
pipeline: '🔬 Mehrstufige Gesamtanalyse'
}
function InsightCard({ ins, onDelete, defaultOpen=false }) {
function InsightCard({ ins, onDelete, defaultOpen=false, prompts=[] }) {
const [open, setOpen] = useState(defaultOpen)
// Parse metadata early to determine showOnlyValues
const metadataRaw = ins.metadata ? (typeof ins.metadata === 'string' ? JSON.parse(ins.metadata) : ins.metadata) : null
const isBasePrompt = metadataRaw?.prompt_type === 'base'
const isJsonOutput = ins.content && (ins.content.trim().startsWith('{') || ins.content.trim().startsWith('['))
const placeholdersRaw = metadataRaw?.placeholders || {}
const showOnlyValues = isBasePrompt && isJsonOutput && Object.keys(placeholdersRaw).length > 0
const [showValues, setShowValues] = useState(showOnlyValues) // Auto-expand for base prompts with JSON
const [expertMode, setExpertMode] = useState(false) // Show empty/technical placeholders
// Find matching prompt to get display_name
const prompt = prompts.find(p => p.slug === ins.scope)
const displayName = prompt?.display_name || SLUG_LABELS[ins.scope] || ins.scope
// Use already-parsed metadata
const metadata = metadataRaw
const allPlaceholders = placeholdersRaw
// Filter placeholders: In normal mode, hide empty values and raw stage outputs
const placeholders = expertMode
? allPlaceholders
: Object.fromEntries(
Object.entries(allPlaceholders).filter(([key, data]) => {
// Hide raw stage outputs (JSON) in normal mode
if (data.is_stage_raw) return false
// Hide empty values
const val = data.value || ''
return val.trim() !== '' && val !== 'nicht verfügbar' && val !== '[Nicht verfügbar]'
})
)
const placeholderCount = Object.keys(placeholders).length
const hiddenCount = Object.keys(allPlaceholders).length - placeholderCount
// Group placeholders by category
const groupedPlaceholders = Object.entries(placeholders).reduce((acc, [key, data]) => {
const category = data.category || 'Sonstiges'
if (!acc[category]) acc[category] = []
acc[category].push([key, data])
return acc
}, {})
// Sort categories: Regular categories first, then Stage outputs, then Rohdaten
const sortedCategories = Object.keys(groupedPlaceholders).sort((a, b) => {
const aIsStage = a.startsWith('Stage')
const bIsStage = b.startsWith('Stage')
const aIsRohdaten = a.includes('Rohdaten')
const bIsRohdaten = b.includes('Rohdaten')
// Rohdaten last
if (aIsRohdaten && !bIsRohdaten) return 1
if (!aIsRohdaten && bIsRohdaten) return -1
// Stage outputs after regular categories
if (!aIsStage && bIsStage) return -1
if (aIsStage && !bIsStage) return 1
// Otherwise alphabetical
return a.localeCompare(b)
})
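The ordering produced by the `sortedCategories` comparator above (regular categories alphabetically, then stage outputs, then Rohdaten last) can be checked with a standalone copy of the same comparator — `compareCategories` is a hypothetical name used only for this sketch:

```javascript
// Standalone copy of the category comparator used in InsightCard.
const compareCategories = (a, b) => {
  const aIsStage = a.startsWith('Stage')
  const bIsStage = b.startsWith('Stage')
  const aIsRohdaten = a.includes('Rohdaten')
  const bIsRohdaten = b.includes('Rohdaten')
  if (aIsRohdaten && !bIsRohdaten) return 1   // Rohdaten always last
  if (!aIsRohdaten && bIsRohdaten) return -1
  if (!aIsStage && bIsStage) return -1        // stages after regular categories
  if (aIsStage && !bIsStage) return 1
  return a.localeCompare(b)                   // otherwise alphabetical
}

const order = ['Stage 1 Rohdaten', 'Ernährung', 'Stage 2', 'Aktivität'].sort(compareCategories)
// → ['Aktivität', 'Ernährung', 'Stage 2', 'Stage 1 Rohdaten']
```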
return (
<div className="card section-gap" style={{borderLeft:`3px solid var(--accent)`}}>
<div style={{display:'flex',alignItems:'center',gap:8,marginBottom:open?12:0,cursor:'pointer'}}
onClick={()=>setOpen(o=>!o)}>
<div style={{flex:1}}>
<div style={{fontSize:13,fontWeight:600}}>
{SLUG_LABELS[ins.scope] || ins.scope}
{displayName}
</div>
<div style={{fontSize:11,color:'var(--text3)'}}>
{dayjs(ins.created).format('DD. MMMM YYYY, HH:mm')}
@@ -43,83 +96,194 @@ function InsightCard({ ins, onDelete, defaultOpen=false }) {
</button>
{open ? <ChevronUp size={16} color="var(--text3)"/> : <ChevronDown size={16} color="var(--text3)"/>}
</div>
{open && <Markdown text={ins.content}/>}
{open && (
<>
{/* For base prompts with JSON: Only show value table */}
{showOnlyValues && (
<div style={{ padding: '12px 16px', background: 'var(--surface)', borderRadius: 8, marginBottom: 12 }}>
<div style={{ fontSize: 11, color: 'var(--text3)', marginBottom: 8 }}>
Basis-Prompt Rohdaten (JSON-Struktur für technische Nutzung)
</div>
<details style={{ fontSize: 11, color: 'var(--text3)' }}>
<summary style={{ cursor: 'pointer' }}>Technische Daten anzeigen</summary>
<pre style={{
marginTop: 8,
padding: 8,
background: 'var(--bg)',
borderRadius: 4,
overflow: 'auto',
fontSize: 10,
fontFamily: 'monospace'
}}>
{ins.content}
</pre>
</details>
</div>
)}
{/* For other prompts: Show full content */}
{!showOnlyValues && <Markdown text={ins.content}/>}
{/* Value Table */}
{placeholderCount > 0 && (
<div style={{ marginTop: 16, borderTop: '1px solid var(--border)', paddingTop: 12 }}>
<div style={{ display: 'flex', alignItems: 'center', justifyContent: 'space-between' }}>
<div
onClick={() => setShowValues(!showValues)}
style={{
cursor: 'pointer',
fontSize: 12,
color: 'var(--text2)',
fontWeight: 600,
display: 'flex',
alignItems: 'center',
gap: 6
}}
>
{showValues ? <ChevronUp size={14} /> : <ChevronDown size={14} />}
📊 Verwendete Werte ({placeholderCount})
{hiddenCount > 0 && !expertMode && (
<span style={{ fontSize: 10, color: 'var(--text3)', fontWeight: 400 }}>
(+{hiddenCount} ausgeblendet)
</span>
)}
</div>
{showValues && Object.keys(allPlaceholders).length > 0 && (
<button
onClick={(e) => {
e.stopPropagation()
setExpertMode(!expertMode)
}}
className="btn"
style={{
fontSize: 10,
padding: '4px 8px',
background: expertMode ? 'var(--accent)' : 'var(--surface)',
color: expertMode ? 'white' : 'var(--text2)'
}}
>
🔬 Experten-Modus
</button>
)}
</div>
{showValues && (
<div style={{ marginTop: 12, fontSize: 11 }}>
<table style={{ width: '100%', borderCollapse: 'collapse' }}>
<thead>
<tr style={{ borderBottom: '1px solid var(--border)', textAlign: 'left' }}>
<th style={{ padding: '6px 8px', color: 'var(--text3)', fontWeight: 600 }}>Platzhalter</th>
<th style={{ padding: '6px 8px', color: 'var(--text3)', fontWeight: 600 }}>Wert</th>
<th style={{ padding: '6px 8px', color: 'var(--text3)', fontWeight: 600 }}>Beschreibung</th>
</tr>
</thead>
<tbody>
{sortedCategories.map(category => (
<React.Fragment key={category}>
{/* Category Header */}
<tr style={{ background: 'var(--surface2)', borderTop: '2px solid var(--border)' }}>
<td colSpan="3" style={{
padding: '8px',
fontWeight: 600,
fontSize: 11,
color: 'var(--text2)',
letterSpacing: '0.5px'
}}>
{category}
</td>
</tr>
{/* Category Values */}
{groupedPlaceholders[category].map(([key, data]) => {
const isExtracted = data.is_extracted
const isStageRaw = data.is_stage_raw
return (
<tr key={key} style={{
borderBottom: '1px solid var(--border)',
background: isStageRaw && expertMode ? 'var(--surface)' : 'transparent'
}}>
<td style={{
padding: '6px 8px',
fontFamily: 'monospace',
color: isStageRaw ? 'var(--text3)' : (isExtracted ? '#6B8E23' : 'var(--accent)'),
whiteSpace: 'nowrap',
verticalAlign: 'top',
fontSize: isStageRaw ? 10 : 11
}}>
{isExtracted && '↳ '}
{isStageRaw && '🔬 '}
{key}
</td>
<td style={{
padding: '6px 8px',
fontFamily: 'monospace',
wordBreak: 'break-word',
maxWidth: '400px',
verticalAlign: 'top',
fontSize: isStageRaw ? 9 : 11,
color: isStageRaw ? 'var(--text3)' : 'var(--text1)'
}}>
{isStageRaw ? (
<details style={{ cursor: 'pointer' }}>
<summary style={{
fontWeight: 600,
color: 'var(--accent)',
fontSize: 10,
marginBottom: '4px'
}}>
JSON anzeigen
</summary>
<pre style={{
background: 'var(--surface2)',
padding: '8px',
borderRadius: '4px',
fontSize: 9,
overflow: 'auto',
maxHeight: '300px',
margin: 0
}}>
{data.value}
</pre>
</details>
) : data.value}
</td>
<td style={{
padding: '6px 8px',
color: 'var(--text3)',
fontSize: 10,
verticalAlign: 'top',
fontStyle: isExtracted ? 'italic' : 'normal'
}}>
{data.description || '—'}
</td>
</tr>
)
})}
</React.Fragment>
))}
</tbody>
</table>
</div>
)}
</div>
)}
</>
)}
</div>
)
}
function PromptEditor({ prompt, onSave, onCancel }) {
const [template, setTemplate] = useState(prompt.template)
const [name, setName] = useState(prompt.name)
const [desc, setDesc] = useState(prompt.description||'')
const VARS = ['{{name}}','{{geschlecht}}','{{height}}','{{goal_weight}}','{{goal_bf_pct}}',
'{{weight_trend}}','{{weight_aktuell}}','{{kf_aktuell}}','{{caliper_summary}}',
'{{circ_summary}}','{{nutrition_summary}}','{{nutrition_detail}}',
'{{protein_ziel_low}}','{{protein_ziel_high}}','{{activity_summary}}',
'{{activity_kcal_summary}}','{{activity_detail}}',
'{{sleep_summary}}','{{sleep_detail}}','{{sleep_avg_duration}}','{{sleep_avg_quality}}',
'{{rest_days_summary}}','{{rest_days_count}}','{{rest_days_types}}',
'{{vitals_summary}}','{{vitals_detail}}','{{vitals_avg_hr}}','{{vitals_avg_hrv}}',
'{{vitals_avg_bp}}','{{vitals_vo2_max}}','{{bp_summary}}']
return (
<div className="card section-gap">
<div style={{display:'flex',alignItems:'center',justifyContent:'space-between',marginBottom:12}}>
<div className="card-title" style={{margin:0}}>Prompt bearbeiten</div>
<button style={{background:'none',border:'none',cursor:'pointer',color:'var(--text3)'}}
onClick={onCancel}><X size={16}/></button>
</div>
<div className="form-row">
<label className="form-label">Name</label>
<input type="text" className="form-input" value={name} onChange={e=>setName(e.target.value)}/>
<span className="form-unit"/>
</div>
<div className="form-row">
<label className="form-label">Beschreibung</label>
<input type="text" className="form-input" value={desc} onChange={e=>setDesc(e.target.value)}/>
<span className="form-unit"/>
</div>
<div style={{marginBottom:8}}>
<div style={{fontSize:12,fontWeight:600,color:'var(--text3)',marginBottom:6}}>
Variablen (antippen zum Einfügen):
</div>
<div style={{display:'flex',flexWrap:'wrap',gap:4}}>
{VARS.map(v=>(
<button key={v} onClick={()=>setTemplate(t=>t+v)}
style={{fontSize:10,padding:'2px 7px',borderRadius:4,border:'1px solid var(--border2)',
background:'var(--surface2)',cursor:'pointer',fontFamily:'monospace',color:'var(--accent)'}}>
{v}
</button>
))}
</div>
</div>
<textarea value={template} onChange={e=>setTemplate(e.target.value)}
style={{width:'100%',minHeight:280,padding:10,fontFamily:'monospace',fontSize:12,
background:'var(--surface2)',border:'1.5px solid var(--border2)',borderRadius:8,
color:'var(--text1)',resize:'vertical',lineHeight:1.5,boxSizing:'border-box'}}/>
<div style={{display:'flex',gap:8,marginTop:8}}>
<button className="btn btn-primary" style={{flex:1}}
onClick={()=>onSave({name,description:desc,template})}>
<Check size={14}/> Speichern
</button>
<button className="btn btn-secondary" style={{flex:1}} onClick={onCancel}>Abbrechen</button>
</div>
</div>
)
}
export default function Analysis() {
const { canUseAI, isAdmin } = useAuth()
const { canUseAI } = useAuth()
const [prompts, setPrompts] = useState([])
const [allInsights, setAllInsights] = useState([])
const [loading, setLoading] = useState(null)
const [error, setError] = useState(null)
const [editing, setEditing] = useState(null)
const [tab, setTab] = useState('run')
const [newResult, setNewResult] = useState(null)
const [pipelineLoading, setPipelineLoading] = useState(false)
const [aiUsage, setAiUsage] = useState(null) // Phase 3: Usage badge
const [newResult, setNewResult] = useState(null)
const [aiUsage, setAiUsage] = useState(null)
const loadAll = async () => {
const [p, i] = await Promise.all([
@@ -139,48 +303,74 @@ export default function Analysis() {
}).catch(err => console.error('Failed to load usage:', err))
},[])
const runPipeline = async () => {
setPipelineLoading(true); setError(null); setNewResult(null)
try {
const result = await api.insightPipeline()
setNewResult(result)
await loadAll()
setTab('run')
} catch(e) {
setError('Pipeline-Fehler: ' + e.message)
} finally { setPipelineLoading(false) }
}
const runPrompt = async (slug) => {
setLoading(slug); setError(null); setNewResult(null)
try {
const result = await api.runInsight(slug)
setNewResult(result) // show immediately
await loadAll() // refresh lists
setTab('run') // stay on run tab to see result
// Use new unified executor with save=true
const result = await api.executeUnifiedPrompt(slug, null, null, false, true)
// Transform result to match old format for InsightCard
let content = ''
if (result.type === 'pipeline') {
// For pipeline, extract final output
const finalOutput = result.output || {}
if (typeof finalOutput === 'object' && Object.keys(finalOutput).length === 1) {
content = Object.values(finalOutput)[0]
} else {
content = JSON.stringify(finalOutput, null, 2)
}
} else {
// For base prompts, use output directly
content = typeof result.output === 'string' ? result.output : JSON.stringify(result.output, null, 2)
}
// Build metadata from debug info (same logic as backend)
let metadata = null
if (result.debug && result.debug.resolved_placeholders) {
const placeholders = {}
const resolved = result.debug.resolved_placeholders
// For pipeline, collect from all stages
if (result.type === 'pipeline' && result.debug.stages) {
for (const stage of result.debug.stages) {
for (const promptDebug of (stage.prompts || [])) {
const stageResolved = promptDebug.resolved_placeholders || promptDebug.ref_debug?.resolved_placeholders || {}
for (const [key, value] of Object.entries(stageResolved)) {
if (!placeholders[key]) {
placeholders[key] = { value, description: '' }
}
}
}
}
} else {
// For base prompts
for (const [key, value] of Object.entries(resolved)) {
placeholders[key] = { value, description: '' }
}
}
if (Object.keys(placeholders).length > 0) {
metadata = { prompt_type: result.type, placeholders }
}
}
setNewResult({ scope: slug, content, metadata })
await loadAll()
setTab('run')
} catch(e) {
setError('Fehler: ' + e.message)
} finally { setLoading(null) }
}
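The output normalisation inside `runPrompt` (unwrap a single-key pipeline output, otherwise pretty-print JSON, and pass base-prompt strings through) can be sketched as a pure function; `extractContent` is a hypothetical name introduced only for this illustration:

```javascript
// Hypothetical standalone version of the content extraction in runPrompt.
function extractContent(result) {
  if (result.type === 'pipeline') {
    const finalOutput = result.output || {}
    // A single-key object is unwrapped to its value (the final stage text)
    if (typeof finalOutput === 'object' && Object.keys(finalOutput).length === 1) {
      return Object.values(finalOutput)[0]
    }
    return JSON.stringify(finalOutput, null, 2)
  }
  // Base prompts: string output is used as-is, anything else is pretty-printed
  return typeof result.output === 'string' ? result.output : JSON.stringify(result.output, null, 2)
}

console.log(extractContent({ type: 'pipeline', output: { final: 'Fertiger Text' } })) // "Fertiger Text"
```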
const savePrompt = async (promptId, data) => {
const token = localStorage.getItem('bodytrack_token')||''
await fetch(`/api/prompts/${promptId}`, {
method:'PUT',
headers:{'Content-Type':'application/json', 'X-Auth-Token': token},
body:JSON.stringify(data)
})
setEditing(null); await loadAll()
}
const deleteInsight = async (id) => {
if (!confirm('Analyse löschen?')) return
const pid = localStorage.getItem('bodytrack_active_profile')||''
await fetch(`/api/insights/${id}`, {
method:'DELETE', headers: pid ? {'X-Profile-Id':pid} : {}
})
if (newResult?.id === id) setNewResult(null)
await loadAll()
try {
await api.deleteInsight(id)
if (newResult?.id === id) setNewResult(null)
await loadAll()
} catch (e) {
setError('Löschen fehlgeschlagen: ' + e.message)
}
}
// Group insights by scope for history view
@@ -191,11 +381,8 @@ export default function Analysis() {
grouped[key].push(ins)
})
const activePrompts = prompts.filter(p=>p.active && !p.slug.startsWith('pipeline_') && p.slug !== 'pipeline')
// Pipeline is available if the "pipeline" prompt is active
const pipelinePrompt = prompts.find(p=>p.slug==='pipeline')
const pipelineAvailable = pipelinePrompt?.active ?? true // Default to true if not found (backwards compatibility)
// Show only active pipeline-type prompts
const pipelinePrompts = prompts.filter(p => p.active && p.type === 'pipeline')
return (
<div>
@@ -208,7 +395,6 @@ export default function Analysis() {
{allInsights.length>0 && <span style={{marginLeft:4,fontSize:10,background:'var(--accent)',
color:'white',padding:'1px 5px',borderRadius:8}}>{allInsights.length}</span>}
</button>
{isAdmin && <button className={'tab'+(tab==='prompts'?' active':'')} onClick={()=>setTab('prompts')}>Prompts</button>}
</div>
{error && (
@@ -235,56 +421,11 @@ export default function Analysis() {
ins={{...newResult, created: new Date().toISOString()}}
onDelete={deleteInsight}
defaultOpen={true}
prompts={prompts}
/>
</div>
)}
{/* Pipeline button - only if all sub-prompts are active */}
{pipelineAvailable && (
<div className="card" style={{marginBottom:16,borderColor:'var(--accent)',borderWidth:2}}>
<div style={{display:'flex',alignItems:'flex-start',gap:12}}>
<div style={{flex:1}}>
<div className="badge-container-right" style={{fontWeight:700,fontSize:15,color:'var(--accent)'}}>
<span>🔬 Mehrstufige Gesamtanalyse</span>
{aiUsage && <UsageBadge {...aiUsage} />}
</div>
<div style={{fontSize:12,color:'var(--text2)',marginTop:3,lineHeight:1.5}}>
3 spezialisierte KI-Calls parallel (Körper + Ernährung + Aktivität),
dann Synthese + Zielabgleich. Detaillierteste Auswertung.
</div>
{allInsights.find(i=>i.scope==='pipeline') && (
<div style={{fontSize:11,color:'var(--text3)',marginTop:3}}>
Letzte Analyse: {dayjs(allInsights.find(i=>i.scope==='pipeline').created).format('DD.MM.YYYY, HH:mm')}
</div>
)}
</div>
<div
title={aiUsage && !aiUsage.allowed ? `Limit erreicht (${aiUsage.used}/${aiUsage.limit}). Kontaktiere den Admin oder warte bis zum nächsten Reset.` : ''}
style={{display:'inline-block'}}
>
<button
className="btn btn-primary"
style={{flexShrink:0,minWidth:100, cursor: (aiUsage && !aiUsage.allowed) ? 'not-allowed' : 'pointer'}}
onClick={runPipeline}
disabled={!!loading||pipelineLoading||(aiUsage && !aiUsage.allowed)}
>
{pipelineLoading
? <><div className="spinner" style={{width:13,height:13}}/> Läuft</>
: (aiUsage && !aiUsage.allowed) ? '🔒 Limit'
: <><Brain size={13}/> Starten</>}
</button>
</div>
{!canUseAI && <div style={{fontSize:11,color:'#D85A30',marginTop:4}}>🔒 KI nicht freigeschaltet</div>}
</div>
{pipelineLoading && (
<div style={{marginTop:10,padding:'8px 12px',background:'var(--accent-light)',
borderRadius:8,fontSize:12,color:'var(--accent-dark)'}}>
Stufe 1: 3 parallele Analyse-Calls → dann Synthese → dann Zielabgleich
</div>
)}
</div>
)}
{!canUseAI && (
<div style={{padding:'14px 16px',background:'#FCEBEB',borderRadius:10,
border:'1px solid #D85A3033',marginBottom:16}}>
@@ -298,25 +439,31 @@ export default function Analysis() {
</div>
</div>
)}
{canUseAI && <p style={{fontSize:13,color:'var(--text2)',marginBottom:14,lineHeight:1.6}}>
Oder wähle eine Einzelanalyse:
</p>}
{activePrompts.map(p => {
// Show latest existing insight for this prompt
{canUseAI && pipelinePrompts.length > 0 && (
<p style={{fontSize:13,color:'var(--text2)',marginBottom:14,lineHeight:1.6}}>
Wähle eine mehrstufige KI-Analyse:
</p>
)}
{pipelinePrompts.map(p => {
const existing = allInsights.find(i=>i.scope===p.slug)
return (
<div key={p.id} className="card section-gap">
<div key={p.id} className="card" style={{marginBottom:16,borderColor:'var(--accent)',borderWidth:2}}>
<div style={{display:'flex',alignItems:'flex-start',gap:12}}>
<div style={{flex:1}}>
<div className="badge-container-right" style={{fontWeight:600,fontSize:15}}>
<span>{SLUG_LABELS[p.slug]||p.name}</span>
<div className="badge-container-right" style={{fontWeight:700,fontSize:15,color:'var(--accent)'}}>
<span>{p.display_name || SLUG_LABELS[p.slug] || p.name}</span>
{aiUsage && <UsageBadge {...aiUsage} />}
</div>
{p.description && <div style={{fontSize:12,color:'var(--text3)',marginTop:2}}>{p.description}</div>}
{p.description && (
<div style={{fontSize:12,color:'var(--text2)',marginTop:3,lineHeight:1.5}}>
{p.description}
</div>
)}
{existing && (
<div style={{fontSize:11,color:'var(--text3)',marginTop:3}}>
Letzte Auswertung: {dayjs(existing.created).format('DD.MM.YYYY, HH:mm')}
Letzte Analyse: {dayjs(existing.created).format('DD.MM.YYYY, HH:mm')}
</div>
)}
</div>
@@ -326,28 +473,34 @@ export default function Analysis() {
>
<button
className="btn btn-primary"
style={{flexShrink:0,minWidth:90, cursor: (aiUsage && !aiUsage.allowed) ? 'not-allowed' : 'pointer'}}
style={{flexShrink:0,minWidth:100, cursor: (aiUsage && !aiUsage.allowed) ? 'not-allowed' : 'pointer'}}
onClick={()=>runPrompt(p.slug)}
disabled={!!loading||!canUseAI||(aiUsage && !aiUsage.allowed)}
>
{loading===p.slug
? <><div className="spinner" style={{width:13,height:13}}/> Läuft</>
: (aiUsage && !aiUsage.allowed) ? '🔒 Limit'
: <><Brain size={13}/> {existing?'Neu erstellen':'Starten'}</>}
: <><Brain size={13}/> Starten</>}
</button>
</div>
</div>
{/* Show existing result collapsed */}
{existing && newResult?.id !== existing.id && (
<div style={{marginTop:8,borderTop:'1px solid var(--border)',paddingTop:8}}>
<InsightCard ins={existing} onDelete={deleteInsight} defaultOpen={false}/>
<InsightCard ins={existing} onDelete={deleteInsight} defaultOpen={false} prompts={prompts}/>
</div>
)}
</div>
)
})}
{activePrompts.length===0 && (
<div className="empty-state"><p>Keine aktiven Prompts. Aktiviere im Tab "Prompts".</p></div>
{canUseAI && pipelinePrompts.length === 0 && (
<div className="empty-state">
<p>Keine aktiven Pipeline-Prompts verfügbar.</p>
<p style={{fontSize:12,color:'var(--text3)',marginTop:8}}>
Erstelle Pipeline-Prompts im Admin-Bereich (Einstellungen → Admin → KI-Prompts).
</p>
</div>
)}
</div>
)}
@@ -361,143 +514,14 @@ export default function Analysis() {
<div key={scope} style={{marginBottom:20}}>
<div style={{fontSize:13,fontWeight:700,color:'var(--text3)',
textTransform:'uppercase',letterSpacing:'0.05em',marginBottom:8}}>
{SLUG_LABELS[scope]||scope} ({ins.length})
{prompts.find(p => p.slug === scope)?.display_name || SLUG_LABELS[scope] || scope} ({ins.length})
</div>
{ins.map(i => <InsightCard key={i.id} ins={i} onDelete={deleteInsight}/>)}
{ins.map(i => <InsightCard key={i.id} ins={i} onDelete={deleteInsight} prompts={prompts}/>)}
</div>
))
}
</div>
)}
{/* ── Prompts ── */}
{tab==='prompts' && (
<div>
<p style={{fontSize:13,color:'var(--text2)',marginBottom:14,lineHeight:1.6}}>
Passe Prompts an. Variablen wie{' '}
<code style={{fontSize:11,background:'var(--surface2)',padding:'1px 4px',borderRadius:3}}>{'{{name}}'}</code>{' '}
werden automatisch mit deinen Daten befüllt.
</p>
{editing ? (
<PromptEditor prompt={editing}
onSave={(data)=>savePrompt(editing.id,data)}
onCancel={()=>setEditing(null)}/>
) : (() => {
const singlePrompts = prompts.filter(p=>!p.slug.startsWith('pipeline_'))
const pipelinePrompts = prompts.filter(p=>p.slug.startsWith('pipeline_'))
const jsonSlugs = ['pipeline_body','pipeline_nutrition','pipeline_activity']
return (
<>
{/* Single prompts */}
<div style={{fontSize:12,fontWeight:700,color:'var(--text3)',
textTransform:'uppercase',letterSpacing:'0.05em',marginBottom:8}}>
Einzelanalysen
</div>
{singlePrompts.map(p=>(
<div key={p.id} className="card section-gap" style={{opacity:p.active?1:0.6}}>
<div style={{display:'flex',alignItems:'center',gap:10}}>
<div style={{flex:1}}>
<div style={{fontWeight:600,fontSize:14,display:'flex',alignItems:'center',gap:8}}>
{SLUG_LABELS[p.slug]||p.name}
{!p.active && <span style={{fontSize:10,color:'#D85A30',
background:'#FCEBEB',padding:'2px 8px',borderRadius:4,fontWeight:600}}> Deaktiviert</span>}
</div>
{p.description && <div style={{fontSize:12,color:'var(--text3)',marginTop:1}}>{p.description}</div>}
</div>
<button className="btn btn-secondary" style={{padding:'5px 8px',fontSize:12}}
onClick={()=>{
const token = localStorage.getItem('bodytrack_token')||''
fetch(`/api/prompts/${p.id}`,{
method:'PUT',
headers:{'Content-Type':'application/json','X-Auth-Token':token},
body:JSON.stringify({active:!p.active})
}).then(loadAll)
}}>
{p.active?'Deaktivieren':'Aktivieren'}
</button>
<button className="btn btn-secondary" style={{padding:'5px 8px'}}
onClick={()=>setEditing(p)}><Pencil size={13}/></button>
</div>
<div style={{marginTop:8,padding:'8px 10px',background:'var(--surface2)',borderRadius:6,
fontSize:11,fontFamily:'monospace',color:'var(--text3)',maxHeight:60,overflow:'hidden',lineHeight:1.4}}>
{p.template.slice(0,200)}
</div>
</div>
))}
{/* Pipeline prompts */}
<div style={{display:'flex',alignItems:'center',justifyContent:'space-between',margin:'20px 0 8px'}}>
<div style={{fontSize:12,fontWeight:700,color:'var(--text3)',
textTransform:'uppercase',letterSpacing:'0.05em'}}>
Mehrstufige Pipeline
</div>
{(() => {
const pipelinePrompt = prompts.find(p=>p.slug==='pipeline')
return pipelinePrompt && (
<button className="btn btn-secondary" style={{padding:'5px 12px',fontSize:12}}
onClick={()=>{
const token = localStorage.getItem('bodytrack_token')||''
fetch(`/api/prompts/${pipelinePrompt.id}`,{
method:'PUT',
headers:{'Content-Type':'application/json','X-Auth-Token':token},
body:JSON.stringify({active:!pipelinePrompt.active})
}).then(loadAll)
}}>
{pipelinePrompt.active ? 'Gesamte Pipeline deaktivieren' : 'Gesamte Pipeline aktivieren'}
</button>
)
})()}
</div>
{(() => {
const pipelinePrompt = prompts.find(p=>p.slug==='pipeline')
const isPipelineActive = pipelinePrompt?.active ?? true
return (
<div style={{padding:'10px 12px',
background: isPipelineActive ? 'var(--warn-bg)' : '#FCEBEB',
borderRadius:8,fontSize:12,
color: isPipelineActive ? 'var(--warn-text)' : '#D85A30',
marginBottom:12,lineHeight:1.6}}>
{isPipelineActive ? (
<> <strong>Hinweis:</strong> Pipeline-Stage-1-Prompts müssen valides JSON zurückgeben.
Halte das JSON-Format im Prompt erhalten. Stage 2 + 3 können frei angepasst werden.</>
) : (
<> <strong>Pipeline deaktiviert:</strong> Die mehrstufige Gesamtanalyse ist aktuell nicht verfügbar.
Aktiviere sie mit dem Schalter oben, um sie auf der Analyse-Seite zu nutzen.</>
)}
</div>
)
})()}
{pipelinePrompts.map(p=>{
const isJson = jsonSlugs.includes(p.slug)
return (
<div key={p.id} className="card section-gap"
style={{borderLeft:`3px solid ${isJson?'var(--warn)':'var(--accent)'}`,opacity:p.active?1:0.6}}>
<div style={{display:'flex',alignItems:'center',gap:10}}>
<div style={{flex:1}}>
<div style={{fontWeight:600,fontSize:14,display:'flex',alignItems:'center',gap:8}}>
{p.name}
{isJson && <span style={{fontSize:10,background:'var(--warn-bg)',
color:'var(--warn-text)',padding:'1px 6px',borderRadius:4}}>JSON-Output</span>}
{!p.active && <span style={{fontSize:10,color:'#D85A30',
background:'#FCEBEB',padding:'2px 8px',borderRadius:4,fontWeight:600}}> Deaktiviert</span>}
</div>
{p.description && <div style={{fontSize:12,color:'var(--text3)',marginTop:1}}>{p.description}</div>}
</div>
<button className="btn btn-secondary" style={{padding:'5px 8px'}}
onClick={()=>setEditing(p)}><Pencil size={13}/></button>
</div>
<div style={{marginTop:8,padding:'8px 10px',background:'var(--surface2)',borderRadius:6,
fontSize:11,fontFamily:'monospace',color:'var(--text3)',maxHeight:80,overflow:'hidden',lineHeight:1.4}}>
{p.template.slice(0,300)}
</div>
</div>
)
})}
</>
)
})()}
</div>
)}
</div>
)
}


@@ -251,6 +251,23 @@ export default function SettingsPage() {
setEditingId(null)
}
const handleExportPlaceholders = async () => {
try {
const data = await api.exportPlaceholderValues()
const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' })
const url = URL.createObjectURL(blob)
const a = document.createElement('a')
a.href = url
a.download = `placeholders-${activeProfile?.name || 'profile'}-${new Date().toISOString().split('T')[0]}.json`
document.body.appendChild(a)
a.click()
document.body.removeChild(a)
URL.revokeObjectURL(url)
} catch (e) {
alert('Fehler beim Export: ' + e.message)
}
}
return (
<div>
<h1 className="page-title">Einstellungen</h1>
@@ -409,6 +426,16 @@ export default function SettingsPage() {
<span className="badge-button-description">maschinenlesbar, alles in einer Datei</span>
</div>
</button>
<button className="btn btn-full"
onClick={handleExportPlaceholders}
style={{ background: 'var(--surface2)', border: '1px solid var(--border)' }}>
<div className="badge-button-layout">
<div className="badge-button-header">
<span><BarChart3 size={14}/> Platzhalter exportieren</span>
</div>
<span className="badge-button-description">alle verfügbaren Platzhalter mit aktuellen Werten</span>
</div>
</button>
</>}
</div>
<p style={{fontSize:11,color:'var(--text3)',marginTop:8}}>


@@ -107,6 +107,7 @@ export const api = {
insightPipeline: () => req('/insights/pipeline',{method:'POST'}),
listInsights: () => req('/insights'),
latestInsights: () => req('/insights/latest'),
deleteInsight: (id) => req(`/insights/${id}`, {method:'DELETE'}),
exportZip: async () => {
const res = await fetch(`${BASE}/export/zip`, {headers: hdrs()})
if (!res.ok) throw new Error('Export failed')
@@ -282,4 +283,51 @@ export const api = {
fd.append('file', file)
return req('/blood-pressure/import/omron', {method:'POST', body:fd})
},
// AI Prompts Management (Issue #28)
listAdminPrompts: () => req('/prompts'),
createPrompt: (d) => req('/prompts', json(d)),
updatePrompt: (id,d) => req(`/prompts/${id}`, jput(d)),
deletePrompt: (id) => req(`/prompts/${id}`, {method:'DELETE'}),
duplicatePrompt: (id) => req(`/prompts/${id}/duplicate`, json({})),
reorderPrompts: (order) => req('/prompts/reorder', jput(order)),
previewPrompt: (tpl) => req('/prompts/preview', json({template:tpl})),
generatePrompt: (d) => req('/prompts/generate', json(d)),
optimizePrompt: (id) => req(`/prompts/${id}/optimize`, json({})),
listPlaceholders: () => req('/prompts/placeholders'),
resetPromptToDefault: (id) => req(`/prompts/${id}/reset-to-default`, json({})),
// Pipeline Configs Management (Issue #28 Phase 2)
listPipelineConfigs: () => req('/prompts/pipeline-configs'),
createPipelineConfig: (d) => req('/prompts/pipeline-configs', json(d)),
updatePipelineConfig: (id,d) => req(`/prompts/pipeline-configs/${id}`, jput(d)),
deletePipelineConfig: (id) => req(`/prompts/pipeline-configs/${id}`, {method:'DELETE'}),
setDefaultPipelineConfig: (id) => req(`/prompts/pipeline-configs/${id}/set-default`, json({})),
// Pipeline Execution (Issue #28 Phase 2)
executePipeline: (configId=null) => req('/insights/pipeline' + (configId ? `?config_id=${configId}` : ''), json({})),
// Unified Prompt System (Issue #28 Phase 2)
executeUnifiedPrompt: (slug, modules=null, timeframes=null, debug=false, save=false) => {
const params = new URLSearchParams({ prompt_slug: slug })
if (debug) params.append('debug', 'true')
if (save) params.append('save', 'true')
const body = {}
if (modules) body.modules = modules
if (timeframes) body.timeframes = timeframes
return req('/prompts/execute?' + params, json(body))
},
createUnifiedPrompt: (d) => req('/prompts/unified', json(d)),
updateUnifiedPrompt: (id,d) => req(`/prompts/unified/${id}`, jput(d)),
// Batch Import/Export
exportAllPrompts: () => req('/prompts/export-all'),
importPrompts: (data, overwrite=false) => {
const params = new URLSearchParams()
if (overwrite) params.append('overwrite', 'true')
return req(`/prompts/import?${params}`, json(data))
},
// Placeholder Export
exportPlaceholderValues: () => req('/prompts/placeholders/export-values'),
}

test-pipeline-api.sh (new file)

@@ -0,0 +1,68 @@
#!/bin/bash
# API tests for the pipeline system (Issue #28)
# Run on the server or locally: bash test-pipeline-api.sh
echo "═══════════════════════════════════════════════════════════"
echo "Pipeline-System API Tests"
echo "═══════════════════════════════════════════════════════════"
echo ""
# Configuration
API_URL="https://dev.mitai.jinkendo.de"
TOKEN="" # <-- insert your admin token here
if [ -z "$TOKEN" ]; then
echo "❌ Please set TOKEN in line 11 (admin token from dev.mitai.jinkendo.de)"
echo ""
echo "1. Log in via browser: https://dev.mitai.jinkendo.de"
echo "2. Open the developer tools (F12)"
echo "3. Application/Storage → localStorage → copy auth_token"
exit 1
fi
echo "Test 1: GET /api/prompts/pipeline-configs"
echo "─────────────────────────────────────────────────────────"
curl -s "$API_URL/api/prompts/pipeline-configs" \
-H "X-Auth-Token: $TOKEN" | jq -r '.[] | "\(.name) (default: \(.is_default), active: \(.active))"'
echo ""
echo "Test 2: GET /api/prompts/pipeline-configs - Full JSON"
echo "─────────────────────────────────────────────────────────"
curl -s "$API_URL/api/prompts/pipeline-configs" \
-H "X-Auth-Token: $TOKEN" | jq '.[0] | {name, is_default, modules, stage1_prompts, stage2_prompt}'
echo ""
echo "Test 3: POST /api/insights/pipeline (default config)"
echo "─────────────────────────────────────────────────────────"
echo "Note: this starts a real AI analysis (can take 30-60s)"
read -p "Continue? (y/n) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
RESULT=$(curl -s -X POST "$API_URL/api/insights/pipeline" \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json")
echo "$RESULT" | jq '{
scope,
config: .config,
stage1_results: (.stage1 | keys),
content_length: (.content | length)
}'
echo ""
echo "Full content (first 500 chars):"
echo "$RESULT" | jq -r '.content' | head -c 500
echo "..."
else
echo "Skipped."
fi
echo ""
echo "Test 4: GET /api/prompts - check for is_system_default"
echo "─────────────────────────────────────────────────────────"
curl -s "$API_URL/api/prompts" \
-H "X-Auth-Token: $TOKEN" | jq -r '.[] | select(.slug | startswith("pipeline_")) | "\(.slug): is_system_default=\(.is_system_default // false)"' | head -6
echo ""
echo "═══════════════════════════════════════════════════════════"
echo "API tests complete!"
echo "═══════════════════════════════════════════════════════════"

test-pipeline-backend.sh (new file)

@@ -0,0 +1,53 @@
#!/bin/bash
# Test script for the pipeline-system backend (Issue #28)
# Run on the server: bash test-pipeline-backend.sh
echo "═══════════════════════════════════════════════════════════"
echo "Pipeline-System Backend Tests"
echo "═══════════════════════════════════════════════════════════"
echo ""
# Container Names (from docker-compose.dev-env.yml)
POSTGRES_CONTAINER="dev-mitai-postgres"
DB_USER="mitai_dev"
DB_NAME="mitai_dev"
echo "Test 1: Migration 019 applied"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT version, applied_at FROM schema_migrations WHERE version = '019';"
echo ""
echo "Test 2: Tabelle pipeline_configs existiert"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"\d pipeline_configs" | head -40
echo ""
echo "Test 3: Seed data (3 configs expected)"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT name, is_default, active, stage2_prompt, stage3_prompt FROM pipeline_configs ORDER BY is_default DESC, name;"
echo ""
echo "Test 4: ai_prompts extended (is_system_default)"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT slug, is_system_default, (default_template IS NOT NULL) as has_default FROM ai_prompts WHERE slug LIKE 'pipeline_%' ORDER BY slug;"
echo ""
echo "Test 5: stage1_prompts array contents"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT name, stage1_prompts FROM pipeline_configs WHERE name = 'Alltags-Check';"
echo ""
echo "Test 6: modules JSONB"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT name, jsonb_pretty(modules) as modules FROM pipeline_configs WHERE name = 'Alltags-Check';"
echo ""
echo "═══════════════════════════════════════════════════════════"
echo "All DB tests complete!"
echo "═══════════════════════════════════════════════════════════"

test-unified-migration.sh (new file)

@@ -0,0 +1,74 @@
#!/bin/bash
# Test Migration 020: Unified Prompt System (Issue #28)
echo "═══════════════════════════════════════════════════════════"
echo "Migration 020: Unified Prompt System - Verification Tests"
echo "═══════════════════════════════════════════════════════════"
echo ""
POSTGRES_CONTAINER="dev-mitai-postgres"
DB_USER="mitai_dev"
DB_NAME="mitai_dev"
echo "Test 1: Migration 020 applied"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT version, applied_at FROM schema_migrations WHERE version = '020';"
echo ""
echo "Test 2: New columns exist in ai_prompts"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'ai_prompts'
AND column_name IN ('type', 'stages', 'output_format', 'output_schema')
ORDER BY column_name;"
echo ""
echo "Test 3: Existing prompts migrated to pipeline type"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT slug, type,
CASE WHEN stages IS NOT NULL THEN 'Has stages' ELSE 'No stages' END as stages_status,
output_format
FROM ai_prompts
WHERE slug LIKE 'pipeline_%'
LIMIT 5;"
echo ""
echo "Test 4: Pipeline configs migrated to ai_prompts"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT slug, name, type,
jsonb_array_length(stages) as num_stages
FROM ai_prompts
WHERE slug LIKE 'pipeline_config_%';"
echo ""
echo "Test 5: Stages JSONB structure (sample)"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT slug, jsonb_pretty(stages) as stages_structure
FROM ai_prompts
WHERE slug LIKE 'pipeline_config_%'
LIMIT 1;"
echo ""
echo "Test 6: Backup table created"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT COUNT(*) as backup_count FROM pipeline_configs_backup_pre_020;"
echo ""
echo "Test 7: Indices created"
echo "─────────────────────────────────────────────────────────"
docker exec $POSTGRES_CONTAINER psql -U $DB_USER -d $DB_NAME -c \
"SELECT indexname FROM pg_indexes
WHERE tablename = 'ai_prompts'
AND indexname LIKE 'idx_ai_prompts_%';"
echo ""
echo "═══════════════════════════════════════════════════════════"
echo "Migration 020 Tests Complete!"
echo "═══════════════════════════════════════════════════════════"