Compare commits
329 Commits
v2.6.1-gra...main
| SHA1 | Author | Date | |
|---|---|---|---|
| 0d61a9e191 | |||
| 55d1a7e290 | |||
| 4537e65428 | |||
| 43327c1f6d | |||
| 39a6998123 | |||
| 273c4c6919 | |||
| 2ed4488cf6 | |||
| 36490425c5 | |||
| b8cb8bb89b | |||
| 6d268d9dfb | |||
| df5f9b3fe4 | |||
| 5e67cd470c | |||
| 0b2a1f1a63 | |||
| d0012355b9 | |||
| 1056078e6a | |||
| c42a76b3d7 | |||
| ec9b3c68af | |||
| f9118a36f8 | |||
| e52eed40ca | |||
| 43641441ef | |||
| c613d81846 | |||
| de5db09b51 | |||
| 7cb8fd6602 | |||
| 6047e94964 | |||
| 78fbc9b31b | |||
| 742792770c | |||
| b19f91c3ee | |||
| 9b0d8c18cb | |||
| f2a2f4d2df | |||
| ea0fd951f2 | |||
| c8c828c8a8 | |||
| 716a063849 | |||
| 3dc81ade0f | |||
| 1df89205ac | |||
| 2445f7cb2b | |||
| 47fdcf8eed | |||
| 3e27c72b80 | |||
| 2d87f9d816 | |||
| d7d6155203 | |||
| f8506c0bb2 | |||
| c91910ee9f | |||
| ee91583614 | |||
| 3a17b646e1 | |||
| 727de50290 | |||
| a780104b3c | |||
| f51e1cb2c4 | |||
| 20fb1e92e2 | |||
| 1d66ca0649 | |||
| 55b64c331a | |||
| 4d43cc526e | |||
| 6131b315d7 | |||
| dfff46e45c | |||
| 003a270548 | |||
| 39fd15b565 | |||
| be2bed9927 | |||
| 2da98e8e37 | |||
| a852975811 | |||
| 8fd7ef804d | |||
| b0f4309a29 | |||
| c33b1c644a | |||
| 7cc823e2f4 | |||
| 7e00344b84 | |||
| ec89d83916 | |||
| 57656bbaaf | |||
| 7953acf3ee | |||
| 3f528f2184 | |||
| 29e334625e | |||
| 114cea80de | |||
| 981b0cba1f | |||
| e2c40666d1 | |||
| c9ae58725c | |||
| 4318395c83 | |||
| 00264a9653 | |||
| 7e4ea670b1 | |||
| 008a470f02 | |||
| 7ed82ad82e | |||
| 72cf71fa87 | |||
| 9cb08777fa | |||
| 2c18f8b3de | |||
| d5d6987ce2 | |||
| 61a319a049 | |||
| a392dc2786 | |||
| 5e2a074019 | |||
| 9b3fd7723e | |||
| 4802eba27b | |||
| 745352ff3f | |||
| 13f0a0c9bc | |||
| 4a404d74de | |||
| 8ed4efaadc | |||
| d17c966301 | |||
| 548c503e7c | |||
| 277444ec0a | |||
| 62a00d1ac3 | |||
| 8505538b34 | |||
| a9d0874fe9 | |||
| 1563ebbdf9 | |||
| 38fac89f73 | |||
| 7026fc4fed | |||
| d41da670fc | |||
| 5541ceb13d | |||
| ac26cc4940 | |||
| 9b906bbabf | |||
| 9a98093e70 | |||
| de05784428 | |||
| f62983b08f | |||
| d0eae8e43c | |||
| 3d2f3d12d9 | |||
| 0d2469f8fa | |||
| 124849c580 | |||
| ea38743a2a | |||
| 5ab01c5150 | |||
| ed3f3e5588 | |||
| bb6959a090 | |||
| d49d509451 | |||
| 008167268f | |||
| 67d7154328 | |||
| 4de9a4f649 | |||
| d35bdc64b9 | |||
| 3a768be488 | |||
| 1965c984f4 | |||
| e3e1700de5 | |||
| f05d766b64 | |||
| 58e414041a | |||
| cd5056d4c9 | |||
| 39fb821481 | |||
| beb87a8c43 | |||
| 4327fc939c | |||
| ef1046c6f5 | |||
| ef8cf719f2 | |||
| 6aa6b32a6c | |||
| 33dff04d47 | |||
| 65d697b7be | |||
| 06fc42ed37 | |||
| 3c5c567077 | |||
| 8f65e550c8 | |||
| 6b83879741 | |||
| be265e9cc0 | |||
| 680c36ab59 | |||
| 96b4f65cd1 | |||
| b1a897e51c | |||
| e5a34efee9 | |||
| f9ac4e4dbf | |||
| 1b40e29f40 | |||
| 7eba1fb487 | |||
| 838083b909 | |||
| 8f5eb36b5f | |||
| b7d1bcce3d | |||
| 03d3173ca6 | |||
| 38a61d7b50 | |||
| 0a429e1f7b | |||
| 857ba953e3 | |||
| e180018c99 | |||
| ac9956bf00 | |||
| 62b5a8bf65 | |||
| 303efefcb7 | |||
| feeb7c2d92 | |||
| ea9a54421a | |||
| fdf99b2bb0 | |||
| c7cd641f89 | |||
| 18b90c8df3 | |||
| 8d3bc1c2e2 | |||
| 079d988034 | |||
| aa9d388337 | |||
| 92bd3d9a47 | |||
| 53058d1504 | |||
| 3fe8463a03 | |||
| 5c4ce5d727 | |||
| 459193e7b1 | |||
| 98f21323fb | |||
| 515248d438 | |||
| b0c69ad3e0 | |||
| c5f29ab4ae | |||
| 876ee898d8 | |||
| e93bab6ea7 | |||
| 5225090490 | |||
| e9532e8878 | |||
| 23b1cb2966 | |||
| fa909e2e7d | |||
| 7fa9ce81bd | |||
| 8490911958 | |||
| 19d899b277 | |||
| 37ec8b614e | |||
| e045371969 | |||
| cd5383432e | |||
| 8b8baa27b3 | |||
| 386fa3ef0c | |||
| 19c96fd00f | |||
| ecb35fb869 | |||
| 21cda0072a | |||
| e3858e8bc3 | |||
| f08a331bc6 | |||
| cfcaa926cd | |||
| 8ade34af0a | |||
| a6d37c92d2 | |||
| 1b7b8091a3 | |||
| 94e5ebf577 | |||
| cf302e8334 | |||
| 82c7752266 | |||
| c676c8263f | |||
| f6b2375d65 | |||
| d1a065fec8 | |||
| f686ecf947 | |||
| 6ac1f318d0 | |||
| cbaf664123 | |||
| 470e653da6 | |||
| 43d3d8f7f3 | |||
| 83c0c9944d | |||
| f6f3213b84 | |||
| e5db7011f3 | |||
| b0bc8518ed | |||
| 046b648286 | |||
| e04aecb0c5 | |||
| d64ba06809 | |||
| b0d73cb053 | |||
| 5213d262a2 | |||
| 2985f5288b | |||
| 27b404560c | |||
| 16e128668c | |||
| ecfdc67485 | |||
| c9cf1b7e4c | |||
| fa6eb0795a | |||
| 4ac504a4ae | |||
| 56dd1bcd84 | |||
| b4a07a05af | |||
| 5c55229376 | |||
| f3fd71b828 | |||
| 079cf174d4 | |||
| 4ab44e36a2 | |||
| 5278c75ac1 | |||
| a908853c30 | |||
| 867a7a8b44 | |||
| 2c073c7d3c | |||
| 0157faab89 | |||
| f1bfa40b5b | |||
| dcc3083455 | |||
| a5b4dfb31f | |||
| a733212c0f | |||
| 49b454d2ec | |||
| 18780e5330 | |||
| 36fb27edf0 | |||
| c60aba63a4 | |||
| 2a98c37ca1 | |||
| 0ac8a14ea7 | |||
| 234949800b | |||
| c68c7404cf | |||
| f8ed7bb62e | |||
| 81d005d969 | |||
| 99482ad65a | |||
| 2d43e0596c | |||
| 74cac7e16c | |||
| 0c2dc61cb5 | |||
| 97c3461685 | |||
| 41e7a49d52 | |||
| 725dc6bda4 | |||
| 5da8eb6543 | |||
| 4f2296b9b0 | |||
| cb9addb1b6 | |||
| c5198a80d8 | |||
| 0df266b2e9 | |||
| 444f64e94c | |||
| 59bcd227f9 | |||
| 7f707cffb9 | |||
| a50c4494c6 | |||
| 418612cd10 | |||
| 828fe1f8ce | |||
| c184c3cae2 | |||
| 2cd3605017 | |||
| 0d71f41a13 | |||
| f5bfb0cfb4 | |||
| f7ab32ebf4 | |||
| 64dbd57fc5 | |||
| ba46957556 | |||
| 33b0c83c87 | |||
| 5dd58f49f0 | |||
| cc12dcf993 | |||
| cbfdd96152 | |||
| babab3167b | |||
| c61d9c8236 | |||
| e47241740d | |||
| 6db1714449 | |||
| 136c3bb43f | |||
| 2c3ee8efd6 | |||
| 48729e6f5d | |||
| 3eac646cb6 | |||
| 9a18f3cc8b | |||
| 5dd20d683f | |||
| 8af744fc97 | |||
| 342d3e5103 | |||
| 7f7d8c87db | |||
| 43f695de54 | |||
| 20b219d86c | |||
| 0ff39d7b14 | |||
| e2ee5df815 | |||
| 77a6db7e92 | |||
| d53912a8f1 | |||
| cdcaff184f | |||
| 25ec3880bb | |||
| edbd8f0ca8 | |||
| a4272c17a9 | |||
| c8cdf218f2 | |||
| b3833f2051 | |||
| a272c39613 | |||
| 6df9b54626 | |||
| 156c2c2fd5 | |||
| 6011b96fc1 | |||
| 7639bb8472 | |||
| c61b66b49d | |||
| cf49715c66 | |||
| bf8a814c58 | |||
| 8fadec5c2c | |||
| 7263fee4c7 | |||
| 4204c2c974 | |||
| 9372cfb8ca | |||
| 179949b289 | |||
| cd3946bd11 | |||
| 12e374bc05 | |||
| fd2fac7112 | |||
| 60092b378b | |||
| 9025af62f0 | |||
| 83bb18b6a7 | |||
| ec759dd1dc | |||
| c454fc40bb | |||
| bfb42cfc24 | |||
| 49dd91fee0 | |||
| 56c1862205 | |||
| dbefd30dce | |||
| bc06ce5b90 | |||
| b26d9fd10e | |||
| 909ee338d2 |
ANALYSE_TYPES_YAML_ZUGRIFFE.md
Normal file, 237 lines
@@ -0,0 +1,237 @@
# Analysis: Accesses to config/types.yaml

## Summary

This analysis checks which scripts access `config/types.yaml` and whether they access elements that no longer exist in the current `types.yaml`.

**Date:** 2025-01-XX
**types.yaml version:** 2.7.0

---

## ❌ CRITICAL ISSUES

### 1. `edge_defaults` is missing from types.yaml but is still used in code

**Status:** ⚠️ **PROBLEM** - The code looks up `edge_defaults` in types.yaml, but this field no longer exists.

**Affected files:**

#### a) `app/core/graph/graph_utils.py` (lines 101-112)
```python
def get_edge_defaults_for(note_type: Optional[str], reg: dict) -> List[str]:
    """Determines the default edges for a type."""
    types_map = reg.get("types", reg) if isinstance(reg, dict) else {}
    if note_type and isinstance(types_map, dict):
        t = types_map.get(note_type)
        if isinstance(t, dict) and isinstance(t.get("edge_defaults"), list):  # ❌ looks up edge_defaults
            return [str(x) for x in t["edge_defaults"] if isinstance(x, str)]
    for key in ("defaults", "default", "global"):
        v = reg.get(key)
        if isinstance(v, dict) and isinstance(v.get("edge_defaults"), list):  # ❌ looks up edge_defaults
            return [str(x) for x in v["edge_defaults"] if isinstance(x, str)]
    return []
```
**Problem:** The function always returns `[]` because `edge_defaults` does not exist in types.yaml.

#### b) `app/core/graph/graph_derive_edges.py` (line 64)
```python
defaults = get_edge_defaults_for(note_type, reg)  # ❌ still called, but always returns []
```
**Problem:** No automatic default edges are generated any more.

#### c) `app/services/discovery.py` (line 212)
```python
defaults = type_def.get("edge_defaults")  # ❌ looks up edge_defaults
return defaults[0] if defaults else "related_to"
```
**Problem:** The fallback works, but it does not use the new dynamic solution.

#### d) `tests/check_types_registry_edges.py` (line 170)
```python
eddefs = (tdef or {}).get("edge_defaults") or []  # ❌ looks up edge_defaults
```
**Problem:** The test no longer finds any `edge_defaults` and emits a warning.

**✅ Solution already implemented:**
- `app/core/ingestion/ingestion_note_payload.py` (WP-24c, lines 124-134) already uses the new dynamic solution via `edge_registry.get_topology_info()`.

**Recommendation:**
- `get_edge_defaults_for()` in `graph_utils.py` should be migrated to the EdgeRegistry (see the sketch below).
- `discovery.py` should also use the EdgeRegistry.
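A minimal sketch of that migration, assuming the EdgeRegistry exposes `get_topology_info()` roughly as described for WP-24c; the module path, signature, and return shape are assumptions here, not confirmed code. The legacy types.yaml lookup is kept as a compatibility fallback:

```python
# Sketch only: EdgeRegistry first, legacy types.yaml lookup second.
from typing import List, Optional

def get_edge_defaults_for(note_type: Optional[str], reg: dict) -> List[str]:
    """Default edges for a type, preferring the dynamic EdgeRegistry."""
    try:
        # Assumed API: module path, signature, and return shape are guesses
        # based on the WP-24c description above.
        from app.core.edge_registry import get_topology_info
        info = get_topology_info(note_type) or {}
        defaults = info.get("edge_defaults")
        if isinstance(defaults, list):
            return [str(x) for x in defaults if isinstance(x, str)]
    except Exception:
        pass  # registry unavailable -> fall back to the legacy lookup
    # Legacy fallback: static edge_defaults in types.yaml (kept for compatibility).
    types_map = reg.get("types", reg) if isinstance(reg, dict) else {}
    t = types_map.get(note_type) if isinstance(types_map, dict) else None
    if isinstance(t, dict) and isinstance(t.get("edge_defaults"), list):
        return [str(x) for x in t["edge_defaults"] if isinstance(x, str)]
    return []
```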
---

### 2. Inconsistency: `chunk_profile` vs `chunking_profile`

**Status:** ⚠️ **WARNING** - Mostly caught by fallback logic.

**Problem:**
- In `types.yaml` the key is: `chunking_profile` ✅
- `app/core/type_registry.py` (line 88) looks up: `chunk_profile` ❌

```python
def effective_chunk_profile(note_type: Optional[str], reg: Dict[str, Any]) -> Optional[str]:
    cfg = get_type_config(note_type, reg)
    prof = cfg.get("chunk_profile")  # ❌ looks up "chunk_profile", but types.yaml has "chunking_profile"
    if isinstance(prof, str) and prof.strip():
        return prof.strip().lower()
    return None
```

**Affected files:**
- `app/core/type_registry.py` (line 88) - uses `chunk_profile` instead of `chunking_profile`

**✅ Handled well:**
- `app/core/ingestion/ingestion_chunk_payload.py` (line 33) - has a fallback: `t_cfg.get(key) or t_cfg.get(key.replace("ing", ""))`
- `app/core/ingestion/ingestion_note_payload.py` (line 120) - checks both variants

**Recommendation:**
- `type_registry.py` should also check `chunking_profile` (or both variants), as sketched below.
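A minimal sketch of that fix; the `get_type_config()` below is a simplified stand-in for the real helper in `type_registry.py`, included only to make the example self-contained:

```python
from typing import Any, Dict, Optional

def get_type_config(note_type: Optional[str], reg: Dict[str, Any]) -> Dict[str, Any]:
    # Simplified stand-in for the real helper in type_registry.py.
    types_map = reg.get("types", {}) if isinstance(reg, dict) else {}
    t = types_map.get(note_type) if note_type else None
    return t if isinstance(t, dict) else {}

def effective_chunk_profile(note_type: Optional[str], reg: Dict[str, Any]) -> Optional[str]:
    cfg = get_type_config(note_type, reg)
    # Check the current types.yaml key first, then the legacy spelling.
    prof = cfg.get("chunking_profile") or cfg.get("chunk_profile")
    if isinstance(prof, str) and prof.strip():
        return prof.strip().lower()
    return None
```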
---

## ✅ CORRECTLY USED ELEMENTS

### 1. `chunking_profiles` ✅
- **Used in:**
  - `app/core/chunking/chunking_utils.py` (line 33) ✅
- **Status:** Present in types.yaml as expected

### 2. `defaults` ✅
- **Used in:**
  - `app/core/ingestion/ingestion_chunk_payload.py` (line 36) ✅
  - `app/core/ingestion/ingestion_note_payload.py` (line 104) ✅
  - `app/core/chunking/chunking_utils.py` (line 35) ✅
- **Status:** Present in types.yaml as expected

### 3. `ingestion_settings` ✅
- **Used in:**
  - `app/core/ingestion/ingestion_note_payload.py` (line 105) ✅
- **Status:** Present in types.yaml as expected

### 4. `llm_settings` ✅
- **Used in:**
  - `app/core/registry.py` (line 37) ✅
- **Status:** Present in types.yaml as expected

### 5. `types` (main structure) ✅
- **Used in:** many files
- **Status:** Present in types.yaml as expected

### 6. `types[].chunking_profile` ✅
- **Used in:**
  - `app/core/chunking/chunking_utils.py` (line 35) ✅
  - `app/core/ingestion/ingestion_chunk_payload.py` (line 67) ✅
  - `app/core/ingestion/ingestion_note_payload.py` (line 120) ✅
- **Status:** Present in types.yaml as expected

### 7. `types[].retriever_weight` ✅
- **Used in:**
  - `app/core/ingestion/ingestion_chunk_payload.py` (line 71) ✅
  - `app/core/ingestion/ingestion_note_payload.py` (line 111) ✅
  - `app/core/retrieval/retriever_scoring.py` (line 87) ✅
- **Status:** Present in types.yaml as expected

### 8. `types[].detection_keywords` ✅
- **Used in:**
  - `app/routers/chat.py` (lines 104, 150) ✅
- **Status:** Present in types.yaml as expected

### 9. `types[].schema` ✅
- **Used in:**
  - `app/routers/chat.py` (presumably) ✅
- **Status:** Present in types.yaml as expected
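For orientation, the elements above can be pictured as the Python dict that `yaml.safe_load` would return for a minimal types.yaml; all concrete values here are illustrative, not the real configuration:

```python
# Hypothetical minimal registry; values are illustrative only.
reg = {
    "defaults": {"chunking_profile": "sliding_standard", "retriever_weight": 1.0},
    "chunking_profiles": {
        "sliding_standard": {"strategy": "sliding_window", "target": 400, "max": 600},
    },
    "ingestion_settings": {},
    "llm_settings": {"cleanup_patterns": []},
    "types": {
        "concept": {
            "chunking_profile": "sliding_standard",
            "retriever_weight": 1.2,
            "detection_keywords": ["definition", "term"],
            "schema": {},
        },
    },
}

assert reg["types"]["concept"]["retriever_weight"] == 1.2
```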
---

## 📋 SUMMARY OF ACCESSES

### Files that access types.yaml:

1. **app/core/type_registry.py** ⚠️
   - Uses: `types`, `chunk_profile` (should be `chunking_profile`)
   - Problem: looks up `chunk_profile` instead of `chunking_profile`

2. **app/core/registry.py** ✅
   - Uses: `llm_settings.cleanup_patterns`
   - Status: OK

3. **app/core/ingestion/ingestion_chunk_payload.py** ✅
   - Uses: `types`, `defaults`, `chunking_profile`, `retriever_weight`
   - Status: OK (has a fallback for chunk_profile/chunking_profile)

4. **app/core/ingestion/ingestion_note_payload.py** ✅
   - Uses: `types`, `defaults`, `ingestion_settings`, `chunking_profile`, `retriever_weight`
   - Status: OK (uses the new EdgeRegistry for edge_defaults)

5. **app/core/chunking/chunking_utils.py** ✅
   - Uses: `chunking_profiles`, `types`, `defaults.chunking_profile`
   - Status: OK

6. **app/core/retrieval/retriever_scoring.py** ✅
   - Uses: `retriever_weight` (from the payload, originally from types.yaml)
   - Status: OK

7. **app/core/graph/graph_utils.py** ❌
   - Uses: `types[].edge_defaults` (no longer exists!)
   - Problem: looks up `edge_defaults` in types.yaml

8. **app/core/graph/graph_derive_edges.py** ❌
   - Uses: `get_edge_defaults_for()` → looks up `edge_defaults`
   - Problem: no default edges any more

9. **app/services/discovery.py** ⚠️
   - Uses: `types[].edge_defaults` (no longer exists!)
   - Problem: the fallback works, but the new solution is not used

10. **app/routers/chat.py** ✅
    - Uses: `types[].detection_keywords`
    - Status: OK

11. **tests/test_type_registry.py** ⚠️
    - Uses: `types[].chunk_profile`, `types[].edge_defaults`
    - Problem: the test uses the old structure

12. **tests/check_types_registry_edges.py** ❌
    - Uses: `types[].edge_defaults` (no longer exists!)
    - Problem: the test finds no edge_defaults

13. **scripts/payload_dryrun.py** ✅
    - Uses: types.yaml indirectly via `make_note_payload()` and `make_chunk_payloads()`
    - Status: OK

---

## 🔧 RECOMMENDED FIXES

### Priority 1 (critical):

1. **`app/core/graph/graph_utils.py` - `get_edge_defaults_for()`**
   - Should be migrated to `edge_registry.get_topology_info()`
   - Or: keep backward compatibility, but use the EdgeRegistry as the primary source

2. **`app/core/graph/graph_derive_edges.py`**
   - Uses `get_edge_defaults_for()`; should work once graph_utils.py is fixed

3. **`app/services/discovery.py`**
   - Should use the EdgeRegistry for `edge_defaults`

### Priority 2 (warning):

4. **`app/core/type_registry.py` - `effective_chunk_profile()`**
   - Should also check `chunking_profile` (not only `chunk_profile`)

5. **`tests/test_type_registry.py`**
   - The test should be updated to use `chunking_profile` instead of `chunk_profile`

6. **`tests/check_types_registry_edges.py`**
   - The test should be migrated to the EdgeRegistry or marked as deprecated

---

## 📝 NOTES

- **WP-24c** already implemented a solution for `edge_defaults`: dynamic lookup via `edge_registry.get_topology_info()`
- The old solution (static `edge_defaults` in types.yaml) was replaced by the dynamic one
- Code locations that still use the old solution should be migrated
@@ -1,17 +1,10 @@
"""
app — mindnet API package

Purpose:
    Marks 'app/' as a Python package so that 'from app.main import create_app'
    works in tests and scripts.
Compatibility:
    Python 3.12+
Version:
    0.1.0 (initial version)
Date:
    2025-10-07
Notes:
    No logic, just package initialization.
FILE: app/__init__.py
DESCRIPTION: Package initialization.
VERSION: 0.1.0
STATUS: Active
DEPENDENCIES: None
LAST_ANALYSIS: 2025-12-15
"""

__version__ = "0.1.0"
@@ -1,36 +1,107 @@
"""
app/config.py — central configuration
Version: 0.4.0 (WP-06 Complete)
FILE: app/config.py
DESCRIPTION: Central Pydantic configuration.
WP-20: Hybrid cloud mode support (OpenRouter/Gemini/Ollama).
FIX: Introduces parameters for intelligent rate-limit control (429 handling).
VERSION: 0.6.7
STATUS: Active
DEPENDENCIES: os, functools, pathlib, python-dotenv
"""
from __future__ import annotations
import os
from functools import lru_cache
from pathlib import Path
from dotenv import load_dotenv

# WP-20: Load environment variables from the .env file.
# override=True guarantees that changes in the .env always take precedence.
# WP-24c v4.5.10: Explicit path for the .env file to avoid working-directory issues.
# Look for .env in the project root (3 levels above app/config.py: app/config.py -> app/ -> root/)
_project_root = Path(__file__).parent.parent.parent
_env_file = _project_root / ".env"
_env_loaded = False

# Try the explicit path first
if _env_file.exists():
    _env_loaded = load_dotenv(_env_file, override=True)
    if _env_loaded:
        # Optional: logging (only if logging is already initialized)
        try:
            import logging
            _logger = logging.getLogger(__name__)
            _logger.debug(f"✅ .env loaded from: {_env_file}")
        except Exception:
            pass  # logging not initialized yet

# Fallback: automatic search (for dev/test, or when the .env lives elsewhere)
if not _env_loaded:
    _env_loaded = load_dotenv(override=True)
    if _env_loaded:
        try:
            import logging
            _logger = logging.getLogger(__name__)
            _logger.debug(f"✅ .env loaded via automatic search (cwd: {Path.cwd()})")
        except Exception:
            pass

class Settings:
    # Qdrant
    # --- Qdrant database ---
    QDRANT_URL: str = os.getenv("QDRANT_URL", "http://127.0.0.1:6333")
    QDRANT_API_KEY: str | None = os.getenv("QDRANT_API_KEY")
    COLLECTION_PREFIX: str = os.getenv("MINDNET_PREFIX", "mindnet")
    VECTOR_SIZE: int = int(os.getenv("MINDNET_VECTOR_SIZE", "384"))
    # WP-24c v4.5.10: Harmonization - supports both environment variables for backwards compatibility.
    # COLLECTION_PREFIX takes priority, MINDNET_PREFIX is the fallback.
    # WP-24c v4.5.10-FIX: Default is "mindnet" (prod) instead of "mindnet_dev" (dev).
    # Dev must explicitly set COLLECTION_PREFIX=mindnet_dev in .env.
    COLLECTION_PREFIX: str = os.getenv("COLLECTION_PREFIX") or os.getenv("MINDNET_PREFIX") or "mindnet"

    # WP-22: Vector dimension for the embedding model (nomic)
    VECTOR_SIZE: int = int(os.getenv("VECTOR_DIM", "768"))
    DISTANCE: str = os.getenv("MINDNET_DISTANCE", "Cosine")

    # Embeddings
    # --- Local embeddings (Ollama & Sentence-Transformers) ---
    EMBEDDING_MODEL: str = os.getenv("MINDNET_EMBEDDING_MODEL", "nomic-embed-text")
    MODEL_NAME: str = os.getenv("MINDNET_MODEL", "sentence-transformers/all-MiniLM-L6-v2")

    # WP-05 LLM / Ollama
    # --- WP-20 hybrid LLM provider ---
    # Allowed: "ollama" | "gemini" | "openrouter"
    MINDNET_LLM_PROVIDER: str = os.getenv("MINDNET_LLM_PROVIDER", "openrouter").lower()
    # Default value 10000 if nothing is set in the .env
    MAX_OLLAMA_CHARS: int = int(os.getenv("MAX_OLLAMA_CHARS", 10000))

    # Google AI Studio (2025 Lite model for higher capacity)
    GOOGLE_API_KEY: str | None = os.getenv("GOOGLE_API_KEY")
    GEMINI_MODEL: str = os.getenv("MINDNET_GEMINI_MODEL", "gemini-2.5-flash-lite")

    # OpenRouter integration (available free model, 2025)
    OPENROUTER_API_KEY: str | None = os.getenv("OPENROUTER_API_KEY")
    OPENROUTER_MODEL: str = os.getenv("OPENROUTER_MODEL", "mistralai/mistral-7b-instruct:free")

    LLM_FALLBACK_ENABLED: bool = os.getenv("MINDNET_LLM_FALLBACK", "true").lower() == "true"

    # --- NEW: intelligent rate-limit control (usage sketched after this file) ---
    # Wait time in seconds when an HTTP 429 (rate limit) occurs
    LLM_RATE_LIMIT_WAIT: float = float(os.getenv("MINDNET_LLM_RATE_LIMIT_WAIT", "60.0"))
    # Number of cloud retries on 429 before the Ollama fallback kicks in
    LLM_RATE_LIMIT_RETRIES: int = int(os.getenv("MINDNET_LLM_RATE_LIMIT_RETRIES", "3"))

    # --- WP-05 local LLM (Ollama) ---
    OLLAMA_URL: str = os.getenv("MINDNET_OLLAMA_URL", "http://127.0.0.1:11434")
    LLM_MODEL: str = os.getenv("MINDNET_LLM_MODEL", "phi3:mini")
    PROMPTS_PATH: str = os.getenv("MINDNET_PROMPTS_PATH", "config/prompts.yaml")

    # NEW for WP-06
    LLM_TIMEOUT: float = float(os.getenv("MINDNET_LLM_TIMEOUT", "120.0"))
    # --- WP-06 / WP-14 performance & load control ---
    LLM_TIMEOUT: float = float(os.getenv("MINDNET_LLM_TIMEOUT", "300.0"))
    DECISION_CONFIG_PATH: str = os.getenv("MINDNET_DECISION_CONFIG", "config/decision_engine.yaml")
    BACKGROUND_LIMIT: int = int(os.getenv("MINDNET_LLM_BACKGROUND_LIMIT", "2"))

    # API
    # --- System paths & ingestion logic ---
    DEBUG: bool = os.getenv("DEBUG", "false").lower() == "true"
    MINDNET_VAULT_ROOT: str = os.getenv("MINDNET_VAULT_ROOT", "./vault_master")
    MINDNET_TYPES_FILE: str = os.getenv("MINDNET_TYPES_FILE", "config/types.yaml")
    MINDNET_VOCAB_PATH: str = os.getenv("MINDNET_VOCAB_PATH", "/mindnet/vault/mindnet/_system/dictionary/edge_vocabulary.md")
    CHANGE_DETECTION_MODE: str = os.getenv("MINDNET_CHANGE_DETECTION_MODE", "full")

    # WP-04 Retriever Defaults
    # --- WP-04 retriever weights ---
    RETRIEVER_W_SEM: float = float(os.getenv("MINDNET_WP04_W_SEM", "0.70"))
    RETRIEVER_W_EDGE: float = float(os.getenv("MINDNET_WP04_W_EDGE", "0.25"))
    RETRIEVER_W_CENT: float = float(os.getenv("MINDNET_WP04_W_CENT", "0.05"))
@@ -40,4 +111,5 @@ class Settings:

@lru_cache
def get_settings() -> Settings:
    """Returns the central settings as a singleton."""
    return Settings()
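The two rate-limit fields (`LLM_RATE_LIMIT_WAIT`, `LLM_RATE_LIMIT_RETRIES`) only declare policy; the retry loop that consumes them lives in the LLM client, which is not part of this diff. A minimal sketch of how such a loop could combine them with `LLM_FALLBACK_ENABLED`, assuming the provider calls are passed in as plain callables (the error type is a stand-in, not this repo's real exception):

```python
import time
from typing import Callable

def with_rate_limit_fallback(
    cloud_call: Callable[[], str],
    local_call: Callable[[], str],
    *,
    retries: int,
    wait_seconds: float,
    fallback_enabled: bool,
) -> str:
    """Retry the cloud call on a rate-limit error, then fall back locally."""
    for _ in range(retries):
        try:
            return cloud_call()
        except RuntimeError:  # stand-in for the client's HTTP-429 error type
            time.sleep(wait_seconds)  # back off before the next attempt
    if fallback_enabled:
        return local_call()
    raise RuntimeError("rate limit exceeded and fallback disabled")
```

Wired up, this would presumably be called as `with_rate_limit_fallback(cloud, local, retries=settings.LLM_RATE_LIMIT_RETRIES, wait_seconds=settings.LLM_RATE_LIMIT_WAIT, fallback_enabled=settings.LLM_FALLBACK_ENABLED)`.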
app/core/chunk_payload.py
Deleted file, 136 lines
@@ -1,136 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
app/core/chunk_payload.py (Mindnet V2 — types.yaml authoritative)
- neighbors_prev / neighbors_next are lists ([], [id]).
- retriever_weight / chunk_profile come from types.yaml (frontmatter is ignored).
- Fallbacks: defaults.* in types.yaml; otherwise 1.0 / "default".
- WP-11 update: injects 'title' into the chunk payload for the Discovery Service.
"""
from __future__ import annotations
from typing import Any, Dict, List, Optional
import os, yaml

def _env(n: str, d: Optional[str]=None) -> str:
    v = os.getenv(n)
    return v if v is not None else (d or "")

def _load_types() -> dict:
    p = _env("MINDNET_TYPES_FILE", "./config/types.yaml")
    try:
        with open(p, "r", encoding="utf-8") as f:
            return yaml.safe_load(f) or {}
    except Exception:
        return {}

def _get_types_map(reg: dict) -> dict:
    if isinstance(reg, dict) and isinstance(reg.get("types"), dict):
        return reg["types"]
    return reg if isinstance(reg, dict) else {}

def _get_defaults(reg: dict) -> dict:
    if isinstance(reg, dict) and isinstance(reg.get("defaults"), dict):
        return reg["defaults"]
    if isinstance(reg, dict) and isinstance(reg.get("global"), dict):
        return reg["global"]
    return {}

def _as_float(x: Any):
    try:
        return float(x)
    except Exception:
        return None

def _resolve_chunk_profile(note_type: str, reg: dict) -> str:
    types = _get_types_map(reg)
    if isinstance(types, dict):
        t = types.get(note_type, {})
        if isinstance(t, dict) and isinstance(t.get("chunk_profile"), str):
            return t["chunk_profile"]
    defs = _get_defaults(reg)
    if isinstance(defs, dict) and isinstance(defs.get("chunk_profile"), str):
        return defs["chunk_profile"]
    return "default"

def _resolve_retriever_weight(note_type: str, reg: dict) -> float:
    types = _get_types_map(reg)
    if isinstance(types, dict):
        t = types.get(note_type, {})
        if isinstance(t, dict) and (t.get("retriever_weight") is not None):
            v = _as_float(t.get("retriever_weight"))
            if v is not None:
                return float(v)
    defs = _get_defaults(reg)
    if isinstance(defs, dict) and (defs.get("retriever_weight") is not None):
        v = _as_float(defs.get("retriever_weight"))
        if v is not None:
            return float(v)
    return 1.0

def _as_list(x):
    if x is None:
        return []
    if isinstance(x, list):
        return x
    return [x]

def make_chunk_payloads(note: Dict[str, Any],
                        note_path: str,
                        chunks_from_chunker: List[Any],
                        *,
                        note_text: str = "",
                        types_cfg: Optional[dict] = None,
                        file_path: Optional[str] = None) -> List[Dict[str, Any]]:
    fm = (note or {}).get("frontmatter", {}) or {}
    note_type = fm.get("type") or note.get("type") or "concept"

    # WP-11 FIX: title extraction for the Discovery Service.
    # Take the title from frontmatter, or fall back to the id / "Untitled".
    title = fm.get("title") or note.get("title") or fm.get("id") or "Untitled"

    reg = types_cfg if isinstance(types_cfg, dict) else _load_types()

    # types.yaml authoritative
    cp = _resolve_chunk_profile(note_type, reg)
    rw = _resolve_retriever_weight(note_type, reg)

    tags = fm.get("tags") or []
    if isinstance(tags, str):
        tags = [tags]

    out: List[Dict[str, Any]] = []
    for idx, ch in enumerate(chunks_from_chunker):
        # Attributes or keys (chunk object or dict)
        cid = getattr(ch, "id", None) or (ch.get("id") if isinstance(ch, dict) else None)
        nid = getattr(ch, "note_id", None) or (ch.get("note_id") if isinstance(ch, dict) else fm.get("id"))
        index = getattr(ch, "index", None) or (ch.get("index") if isinstance(ch, dict) else idx)
        text = getattr(ch, "text", None) or (ch.get("text") if isinstance(ch, dict) else "")
        window = getattr(ch, "window", None) or (ch.get("window") if isinstance(ch, dict) else text)
        prev_id = getattr(ch, "neighbors_prev", None) or (ch.get("neighbors_prev") if isinstance(ch, dict) else None)
        next_id = getattr(ch, "neighbors_next", None) or (ch.get("neighbors_next") if isinstance(ch, dict) else None)

        pl: Dict[str, Any] = {
            "note_id": nid,
            "chunk_id": cid,
            "title": title,  # <--- insert the title into the payload here
            "index": int(index),
            "ord": int(index) + 1,
            "type": note_type,
            "tags": tags,
            "text": text,
            "window": window,
            "neighbors_prev": _as_list(prev_id),
            "neighbors_next": _as_list(next_id),
            "section": getattr(ch, "section", None) or (ch.get("section") if isinstance(ch, dict) else ""),
            "path": note_path,
            "source_path": file_path or note_path,
            "retriever_weight": float(rw),
            "chunk_profile": cp,
        }
        # Clean up legacy fields
        for alias in ("chunk_num", "Chunk_Number"):
            pl.pop(alias, None)

        out.append(pl)

    return out
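Even though this module is removed in the diff, its payload shape is what the analysis above keys on. A quick sketch of calling the old `make_chunk_payloads()` in its pre-diff state (all inputs are illustrative):

```python
from app.core.chunk_payload import make_chunk_payloads  # removed module (pre-diff state)

note = {"frontmatter": {"id": "demo-note", "type": "concept", "title": "Demo", "tags": ["a"]}}
chunks = [{"id": "demo-note#c00", "note_id": "demo-note", "index": 0,
           "text": "Hello.", "window": "Hello.",
           "neighbors_prev": None, "neighbors_next": "demo-note#c01"}]
payloads = make_chunk_payloads(note, "notes/demo.md", chunks,
                               types_cfg={"defaults": {"retriever_weight": 1.0}})
# No chunk_profile configured -> falls back to "default"; weight from defaults.
print(payloads[0]["chunk_profile"], payloads[0]["retriever_weight"])  # -> default 1.0
```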
Deleted file, 330 lines
@@ -1,330 +0,0 @@
from __future__ import annotations
from dataclasses import dataclass
from typing import List, Dict, Optional, Tuple, Any, Set
import re
import math
import yaml
from pathlib import Path
from markdown_it import MarkdownIt
from markdown_it.token import Token
import asyncio
import logging

# Services
from app.services.semantic_analyzer import get_semantic_analyzer

# Core imports
try:
    from app.core.derive_edges import build_edges_for_note
except ImportError:
    # Mock for tests
    def build_edges_for_note(note_id, chunks, note_level_references=None, include_note_scope_refs=False): return []

logger = logging.getLogger(__name__)

# ==========================================
# 1. HELPER & CONFIG
# ==========================================

BASE_DIR = Path(__file__).resolve().parent.parent.parent
CONFIG_PATH = BASE_DIR / "config" / "types.yaml"
DEFAULT_PROFILE = {"strategy": "sliding_window", "target": 400, "max": 600, "overlap": (50, 80)}
_CONFIG_CACHE = None

def _load_yaml_config() -> Dict[str, Any]:
    global _CONFIG_CACHE
    if _CONFIG_CACHE is not None: return _CONFIG_CACHE
    if not CONFIG_PATH.exists(): return {}
    try:
        with open(CONFIG_PATH, "r", encoding="utf-8") as f:
            data = yaml.safe_load(f)
        _CONFIG_CACHE = data
        return data
    except Exception: return {}

def get_chunk_config(note_type: str) -> Dict[str, Any]:
    full_config = _load_yaml_config()
    profiles = full_config.get("chunking_profiles", {})
    type_def = full_config.get("types", {}).get(note_type.lower(), {})
    profile_name = type_def.get("chunking_profile")
    if not profile_name:
        profile_name = full_config.get("defaults", {}).get("chunking_profile", "sliding_standard")

    config = profiles.get(profile_name, DEFAULT_PROFILE).copy()
    if "overlap" in config and isinstance(config["overlap"], list):
        config["overlap"] = tuple(config["overlap"])
    return config

def extract_frontmatter_from_text(md_text: str) -> Tuple[Dict[str, Any], str]:
    fm_match = re.match(r'^\s*---\s*\n(.*?)\n---', md_text, re.DOTALL)
    if not fm_match: return {}, md_text
    try:
        frontmatter = yaml.safe_load(fm_match.group(1))
        if not isinstance(frontmatter, dict): frontmatter = {}
    except yaml.YAMLError:
        frontmatter = {}
    text_without_fm = re.sub(r'^\s*---\s*\n(.*?)\n---', '', md_text, flags=re.DOTALL)
    return frontmatter, text_without_fm.strip()

# ==========================================
# 2. DATA CLASSES
# ==========================================

_SENT_SPLIT = re.compile(r'(?<=[.!?])\s+(?=[A-ZÄÖÜ0-9„(])'); _WS = re.compile(r'\s+')

def estimate_tokens(text: str) -> int:
    return max(1, math.ceil(len(text.strip()) / 4))

def split_sentences(text: str) -> list[str]:
    text = _WS.sub(' ', text.strip())
    if not text: return []
    parts = _SENT_SPLIT.split(text)
    return [p.strip() for p in parts if p.strip()]

@dataclass
class RawBlock:
    kind: str; text: str; level: Optional[int]; section_path: str; section_title: Optional[str]

@dataclass
class Chunk:
    id: str; note_id: str; index: int; text: str; window: str; token_count: int
    section_title: Optional[str]; section_path: str
    neighbors_prev: Optional[str]; neighbors_next: Optional[str]
    suggested_edges: Optional[List[str]] = None

# ==========================================
# 3. PARSING & STRATEGIES (SYNCHRONOUS)
# ==========================================

def parse_blocks(md_text: str) -> Tuple[List[RawBlock], str]:
    """Splits text into logical blocks (paragraphs, headers)."""
    blocks = []
    h1_title = "Dokument"
    section_path = "/"
    current_h2 = None

    fm, text_without_fm = extract_frontmatter_from_text(md_text)

    h1_match = re.search(r'^#\s+(.*)', text_without_fm, re.MULTILINE)
    if h1_match:
        h1_title = h1_match.group(1).strip()

    lines = text_without_fm.split('\n')
    buffer = []

    for line in lines:
        stripped = line.strip()
        if stripped.startswith('# '):
            continue
        elif stripped.startswith('## '):
            if buffer:
                content = "\n".join(buffer).strip()
                if content:
                    blocks.append(RawBlock("paragraph", content, None, section_path, current_h2))
                buffer = []
            current_h2 = stripped[3:].strip()
            section_path = f"/{current_h2}"
            blocks.append(RawBlock("heading", stripped, 2, section_path, current_h2))
        elif not stripped:
            if buffer:
                content = "\n".join(buffer).strip()
                if content:
                    blocks.append(RawBlock("paragraph", content, None, section_path, current_h2))
                buffer = []
        else:
            buffer.append(line)

    if buffer:
        content = "\n".join(buffer).strip()
        if content:
            blocks.append(RawBlock("paragraph", content, None, section_path, current_h2))

    return blocks, h1_title

def _strategy_sliding_window(blocks: List[RawBlock], config: Dict[str, Any], note_id: str, doc_title: str = "", context_prefix: str = "") -> List[Chunk]:
    target = config.get("target", 400)
    max_tokens = config.get("max", 600)
    overlap_val = config.get("overlap", (50, 80))
    overlap = sum(overlap_val) // 2 if isinstance(overlap_val, tuple) else overlap_val
    chunks = []; buf = []

    def _create_chunk(txt, win, sec, path):
        idx = len(chunks)
        chunks.append(Chunk(
            id=f"{note_id}#c{idx:02d}", note_id=note_id, index=idx,
            text=txt, window=win, token_count=estimate_tokens(txt),
            section_title=sec, section_path=path, neighbors_prev=None, neighbors_next=None,
            suggested_edges=[]
        ))

    def flush_buffer():
        nonlocal buf
        if not buf: return

        text_body = "\n\n".join([b.text for b in buf])
        win_body = f"{context_prefix}\n{text_body}".strip() if context_prefix else text_body

        if estimate_tokens(text_body) <= max_tokens:
            _create_chunk(text_body, win_body, buf[-1].section_title, buf[-1].section_path)
        else:
            sentences = split_sentences(text_body)
            current_chunk_sents = []
            current_len = 0

            for sent in sentences:
                sent_len = estimate_tokens(sent)
                if current_len + sent_len > target and current_chunk_sents:
                    c_txt = " ".join(current_chunk_sents)
                    c_win = f"{context_prefix}\n{c_txt}".strip() if context_prefix else c_txt
                    _create_chunk(c_txt, c_win, buf[-1].section_title, buf[-1].section_path)

                    overlap_sents = []
                    ov_len = 0
                    for s in reversed(current_chunk_sents):
                        if ov_len + estimate_tokens(s) < overlap:
                            overlap_sents.insert(0, s)
                            ov_len += estimate_tokens(s)
                        else:
                            break

                    current_chunk_sents = list(overlap_sents)
                    current_chunk_sents.append(sent)
                    current_len = ov_len + sent_len
                else:
                    current_chunk_sents.append(sent)
                    current_len += sent_len

            if current_chunk_sents:
                c_txt = " ".join(current_chunk_sents)
                c_win = f"{context_prefix}\n{c_txt}".strip() if context_prefix else c_txt
                _create_chunk(c_txt, c_win, buf[-1].section_title, buf[-1].section_path)

        buf = []

    for b in blocks:
        if b.kind == "heading": continue
        current_buf_text = "\n\n".join([x.text for x in buf])
        if estimate_tokens(current_buf_text) + estimate_tokens(b.text) >= target:
            flush_buffer()
        buf.append(b)
        if estimate_tokens(b.text) >= target:
            flush_buffer()

    flush_buffer()
    return chunks

def _strategy_by_heading(blocks: List[RawBlock], config: Dict[str, Any], note_id: str, doc_title: str = "") -> List[Chunk]:
    return _strategy_sliding_window(blocks, config, note_id, doc_title, context_prefix=f"# {doc_title}")

# ==========================================
# 4. ORCHESTRATION (ASYNC) - WP-15 CORE
# ==========================================

async def assemble_chunks(note_id: str, md_text: str, note_type: str, config: Optional[Dict] = None) -> List[Chunk]:
    if config is None:
        config = get_chunk_config(note_type)

    fm, body_text = extract_frontmatter_from_text(md_text)
    note_status = fm.get("status", "").lower()

    primary_strategy = config.get("strategy", "sliding_window")
    enable_smart_edges = config.get("enable_smart_edge_allocation", False)

    if enable_smart_edges and note_status in ["draft", "initial_gen"]:
        logger.info(f"Chunker: Skipping Smart Edges for draft '{note_id}'.")
        enable_smart_edges = False

    blocks, doc_title = parse_blocks(md_text)

    if primary_strategy == "by_heading":
        chunks = await asyncio.to_thread(_strategy_by_heading, blocks, config, note_id, doc_title)
    else:
        chunks = await asyncio.to_thread(_strategy_sliding_window, blocks, config, note_id, doc_title)

    if not chunks:
        return []

    if enable_smart_edges:
        # Now invoke the smart edge allocation
        chunks = await _run_smart_edge_allocation(chunks, md_text, note_id, note_type)

    for i, ch in enumerate(chunks):
        ch.neighbors_prev = chunks[i-1].id if i > 0 else None
        ch.neighbors_next = chunks[i+1].id if i < len(chunks)-1 else None

    return chunks

def _extract_all_edges_from_md(md_text: str, note_id: str, note_type: str) -> List[str]:
    """
    Helper: builds a dummy chunk for the whole text and calls the
    edge parser to find ALL edges of the note.
    """
    # 1. Create a dummy chunk that contains the whole text.
    # This is necessary because build_edges_for_note only extracts edges from chunks.
    dummy_chunk = {
        "chunk_id": f"{note_id}#full",
        "text": md_text,
        "content": md_text,  # make sure the parser finds text
        "window": md_text,
        "type": note_type
    }

    # 2. Call the parser (signature fix!)
    # derive_edges.py: build_edges_for_note(note_id, chunks, note_level_references=None, include_note_scope_refs=False)
    raw_edges = build_edges_for_note(
        note_id,
        [dummy_chunk],
        note_level_references=None,
        include_note_scope_refs=False
    )

    # 3. Extract the edges
    all_candidates = set()
    for e in raw_edges:
        kind = e.get("kind")
        target = e.get("target_id")
        if target and kind not in ["belongs_to", "next", "prev", "backlink"]:
            all_candidates.add(f"{kind}:{target}")

    return list(all_candidates)

async def _run_smart_edge_allocation(chunks: List[Chunk], full_text: str, note_id: str, note_type: str) -> List[Chunk]:
    analyzer = get_semantic_analyzer()

    # A. Collect all potential edges of the note (via the dummy-chunk trick)
    candidate_list = _extract_all_edges_from_md(full_text, note_id, note_type)

    if not candidate_list:
        return chunks

    # B. LLM filtering per chunk (in parallel)
    tasks = []
    for chunk in chunks:
        tasks.append(analyzer.assign_edges_to_chunk(chunk.text, candidate_list, note_type))

    results_per_chunk = await asyncio.gather(*tasks)

    # C. Injection & fallback
    assigned_edges_global = set()

    for i, confirmed_edges in enumerate(results_per_chunk):
        chunk = chunks[i]
        chunk.suggested_edges = confirmed_edges
        assigned_edges_global.update(confirmed_edges)

        if confirmed_edges:
            injection_str = "\n" + " ".join([f"[[rel:{e.split(':')[0]}|{e.split(':')[1]}]]" for e in confirmed_edges if ':' in e])
            chunk.text += injection_str
            chunk.window += injection_str

    # D. Fallback: attach unassigned edges to every chunk
    unassigned = set(candidate_list) - assigned_edges_global
    if unassigned:
        fallback_str = "\n" + " ".join([f"[[rel:{e.split(':')[0]}|{e.split(':')[1]}]]" for e in unassigned if ':' in e])
        for chunk in chunks:
            chunk.text += fallback_str
            chunk.window += fallback_str
            if chunk.suggested_edges is None: chunk.suggested_edges = []
            chunk.suggested_edges.extend(list(unassigned))

    return chunks
app/core/chunking/__init__.py
Normal file, 10 lines
@@ -0,0 +1,10 @@
"""
|
||||
FILE: app/core/chunking/__init__.py
|
||||
DESCRIPTION: Package-Einstiegspunkt für Chunking. Exportiert assemble_chunks.
|
||||
VERSION: 3.3.0
|
||||
"""
|
||||
from .chunking_processor import assemble_chunks
|
||||
from .chunking_utils import get_chunk_config, extract_frontmatter_from_text
|
||||
from .chunking_models import Chunk
|
||||
|
||||
__all__ = ["assemble_chunks", "get_chunk_config", "extract_frontmatter_from_text", "Chunk"]
|
||||
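A short usage sketch of the re-exported API; `assemble_chunks` is async, so it runs under `asyncio.run` (the note content and type here are illustrative):

```python
import asyncio
from app.core.chunking import assemble_chunks

async def main() -> None:
    md = "# Demo\n\nFirst paragraph.\n\n## Details\n\nSecond paragraph."
    chunks = await assemble_chunks(note_id="demo-note", md_text=md, note_type="concept")
    for ch in chunks:
        print(ch.id, ch.token_count, ch.section_path)

asyncio.run(main())
```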
app/core/chunking/chunking_models.py
Normal file, 33 lines
@@ -0,0 +1,33 @@
"""
|
||||
FILE: app/core/chunking/chunking_models.py
|
||||
DESCRIPTION: Datenklassen für das Chunking-System.
|
||||
"""
|
||||
from dataclasses import dataclass, field
|
||||
from typing import List, Dict, Optional, Any
|
||||
|
||||
@dataclass
|
||||
class RawBlock:
|
||||
"""Repräsentiert einen logischen Block aus dem Markdown-Parsing."""
|
||||
kind: str
|
||||
text: str
|
||||
level: Optional[int]
|
||||
section_path: str
|
||||
section_title: Optional[str]
|
||||
exclude_from_chunking: bool = False # WP-24c v4.2.0: Flag für Edge-Zonen, die nicht gechunkt werden sollen
|
||||
is_meta_content: bool = False # WP-24c v4.2.6: Flag für Meta-Content (Callouts), der später entfernt wird
|
||||
|
||||
@dataclass
|
||||
class Chunk:
|
||||
"""Das finale Chunk-Objekt für Embedding und Graph-Speicherung."""
|
||||
id: str
|
||||
note_id: str
|
||||
index: int
|
||||
text: str
|
||||
window: str
|
||||
token_count: int
|
||||
section_title: Optional[str]
|
||||
section_path: str
|
||||
neighbors_prev: Optional[str]
|
||||
neighbors_next: Optional[str]
|
||||
candidate_pool: List[Dict[str, Any]] = field(default_factory=list)
|
||||
suggested_edges: Optional[List[str]] = None
|
||||
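For reference, a minimal construction of the two dataclasses; all field values here are illustrative:

```python
from app.core.chunking.chunking_models import Chunk, RawBlock

block = RawBlock(kind="paragraph", text="Hello world.", level=None,
                 section_path="/Intro", section_title="Intro")
chunk = Chunk(id="demo-note#c00", note_id="demo-note", index=0,
              text=block.text, window=f"# Demo\n{block.text}", token_count=3,
              section_title=block.section_title, section_path=block.section_path,
              neighbors_prev=None, neighbors_next=None)
# candidate_pool starts as an empty list via default_factory.
chunk.candidate_pool.append({"kind": "related_to", "to": "other-note", "provenance": "explicit"})
```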
app/core/chunking/chunking_parser.py
Normal file, 251 lines
@@ -0,0 +1,251 @@
"""
|
||||
FILE: app/core/chunking/chunking_parser.py
|
||||
DESCRIPTION: Zerlegt Markdown in logische Einheiten (RawBlocks).
|
||||
Hält alle Überschriftenebenen (H1-H6) im Stream.
|
||||
Stellt die Funktion parse_edges_robust zur Verfügung.
|
||||
WP-24c v4.2.0: Identifiziert Edge-Zonen und markiert sie für Chunking-Ausschluss.
|
||||
WP-24c v4.2.5: Callout-Exclusion - Callouts werden als separate RawBlocks identifiziert und ausgeschlossen.
|
||||
"""
|
||||
import re
|
||||
import os
|
||||
from typing import List, Tuple, Set, Dict, Any, Optional
|
||||
from .chunking_models import RawBlock
|
||||
from .chunking_utils import extract_frontmatter_from_text
|
||||
|
||||
_WS = re.compile(r'\s+')
|
||||
_SENT_SPLIT = re.compile(r'(?<=[.!?])\s+(?=[A-ZÄÖÜ0-9„(])')
|
||||
|
||||
def split_sentences(text: str) -> list[str]:
|
||||
"""Teilt Text in Sätze auf unter Berücksichtigung deutscher Interpunktion."""
|
||||
text = _WS.sub(' ', text.strip())
|
||||
if not text: return []
|
||||
# Splittet bei Punkt, Ausrufezeichen oder Fragezeichen, gefolgt von Leerzeichen und Großbuchstabe
|
||||
return [p.strip() for p in _SENT_SPLIT.split(text) if p.strip()]
|
||||
|
||||
def parse_blocks(md_text: str) -> Tuple[List[RawBlock], str]:
|
||||
"""
|
||||
Zerlegt Text in logische Einheiten (RawBlocks), inklusive H1-H6.
|
||||
WP-24c v4.2.0: Identifiziert Edge-Zonen (LLM-Validierung & Note-Scope) und markiert sie für Chunking-Ausschluss.
|
||||
WP-24c v4.2.6: Callouts werden mit is_meta_content=True markiert (werden gechunkt, aber später entfernt).
|
||||
"""
|
||||
blocks = []
|
||||
h1_title = "Dokument"
|
||||
section_path = "/"
|
||||
current_section_title = None
|
||||
|
||||
# Frontmatter entfernen
|
||||
fm, text_without_fm = extract_frontmatter_from_text(md_text)
|
||||
|
||||
# WP-24c v4.2.0: Konfigurierbare Header-Namen und -Ebenen
|
||||
llm_validation_headers = os.getenv(
|
||||
"MINDNET_LLM_VALIDATION_HEADERS",
|
||||
"Unzugeordnete Kanten,Edge Pool,Candidates"
|
||||
)
|
||||
llm_validation_header_list = [h.strip() for h in llm_validation_headers.split(",") if h.strip()]
|
||||
if not llm_validation_header_list:
|
||||
llm_validation_header_list = ["Unzugeordnete Kanten", "Edge Pool", "Candidates"]
|
||||
|
||||
note_scope_headers = os.getenv(
|
||||
"MINDNET_NOTE_SCOPE_ZONE_HEADERS",
|
||||
"Smart Edges,Relationen,Global Links,Note-Level Relations,Globale Verbindungen"
|
||||
)
|
||||
note_scope_header_list = [h.strip() for h in note_scope_headers.split(",") if h.strip()]
|
||||
if not note_scope_header_list:
|
||||
note_scope_header_list = ["Smart Edges", "Relationen", "Global Links", "Note-Level Relations", "Globale Verbindungen"]
|
||||
|
||||
# Header-Ebenen konfigurierbar (Default: LLM=3, Note-Scope=2)
|
||||
llm_validation_level = int(os.getenv("MINDNET_LLM_VALIDATION_HEADER_LEVEL", "3"))
|
||||
note_scope_level = int(os.getenv("MINDNET_NOTE_SCOPE_HEADER_LEVEL", "2"))
|
||||
|
||||
# Status-Tracking für Edge-Zonen
|
||||
in_exclusion_zone = False
|
||||
exclusion_zone_type = None # "llm_validation" oder "note_scope"
|
||||
|
||||
# H1 für Note-Titel extrahieren (Metadaten-Zweck)
|
||||
h1_match = re.search(r'^#\s+(.*)', text_without_fm, re.MULTILINE)
|
||||
if h1_match:
|
||||
h1_title = h1_match.group(1).strip()
|
||||
|
||||
lines = text_without_fm.split('\n')
|
||||
buffer = []
|
||||
|
||||
# WP-24c v4.2.5: Callout-Erkennung (auch verschachtelt: >>)
|
||||
# Regex für Callouts: >\s*[!edge] oder >\s*[!abstract] (auch mit mehreren >)
|
||||
callout_pattern = re.compile(r'^\s*>{1,}\s*\[!(edge|abstract)\]', re.IGNORECASE)
|
||||
|
||||
# WP-24c v4.2.5: Markiere verarbeitete Zeilen, um sie zu überspringen
|
||||
processed_indices = set()
|
||||
|
||||
for i, line in enumerate(lines):
|
||||
if i in processed_indices:
|
||||
continue
|
||||
|
||||
stripped = line.strip()
|
||||
|
||||
# WP-24c v4.2.5: Callout-Erkennung (VOR Heading-Erkennung)
|
||||
# Prüfe, ob diese Zeile ein Callout startet
|
||||
callout_match = callout_pattern.match(line)
|
||||
if callout_match:
|
||||
# Vorherigen Text-Block abschließen
|
||||
if buffer:
|
||||
content = "\n".join(buffer).strip()
|
||||
if content:
|
||||
blocks.append(RawBlock(
|
||||
"paragraph", content, None, section_path, current_section_title,
|
||||
exclude_from_chunking=in_exclusion_zone
|
||||
))
|
||||
buffer = []
|
||||
|
||||
# Sammle alle Zeilen des Callout-Blocks
|
||||
callout_lines = [line]
|
||||
leading_gt_count = len(line) - len(line.lstrip('>'))
|
||||
processed_indices.add(i)
|
||||
|
||||
# Sammle alle Zeilen, die zum Callout gehören (gleiche oder höhere Einrückung)
|
||||
j = i + 1
|
||||
while j < len(lines):
|
||||
next_line = lines[j]
|
||||
if not next_line.strip().startswith('>'):
|
||||
break
|
||||
next_leading_gt = len(next_line) - len(next_line.lstrip('>'))
|
||||
if next_leading_gt < leading_gt_count:
|
||||
break
|
||||
callout_lines.append(next_line)
|
||||
processed_indices.add(j)
|
||||
j += 1
|
||||
|
||||
# WP-24c v4.2.6: Erstelle Callout-Block mit is_meta_content = True
|
||||
# Callouts werden gechunkt (für Chunk-Attribution), aber später entfernt (Clean-Context)
|
||||
callout_content = "\n".join(callout_lines)
|
||||
blocks.append(RawBlock(
|
||||
"callout", callout_content, None, section_path, current_section_title,
|
||||
exclude_from_chunking=in_exclusion_zone, # Nur Edge-Zonen werden ausgeschlossen
|
||||
is_meta_content=True # WP-24c v4.2.6: Markierung für spätere Entfernung
|
||||
))
|
||||
continue
|
||||
|
||||
# Heading-Erkennung (H1 bis H6)
|
||||
heading_match = re.match(r'^(#{1,6})\s+(.*)', stripped)
|
||||
if heading_match:
|
||||
# Vorherigen Text-Block abschließen
|
||||
if buffer:
|
||||
content = "\n".join(buffer).strip()
|
||||
if content:
|
||||
blocks.append(RawBlock(
|
||||
"paragraph", content, None, section_path, current_section_title,
|
||||
exclude_from_chunking=in_exclusion_zone
|
||||
))
|
||||
buffer = []
|
||||
|
||||
level = len(heading_match.group(1))
|
||||
title = heading_match.group(2).strip()
|
||||
|
||||
# WP-24c v4.2.0: Prüfe, ob dieser Header eine Edge-Zone startet
|
||||
is_llm_validation_zone = (
|
||||
level == llm_validation_level and
|
||||
any(title.lower() == h.lower() for h in llm_validation_header_list)
|
||||
)
|
||||
is_note_scope_zone = (
|
||||
level == note_scope_level and
|
||||
any(title.lower() == h.lower() for h in note_scope_header_list)
|
||||
)
|
||||
|
||||
if is_llm_validation_zone:
|
||||
in_exclusion_zone = True
|
||||
exclusion_zone_type = "llm_validation"
|
||||
elif is_note_scope_zone:
|
||||
in_exclusion_zone = True
|
||||
exclusion_zone_type = "note_scope"
|
||||
elif in_exclusion_zone:
|
||||
# Neuer Header gefunden, der keine Edge-Zone ist -> Zone beendet
|
||||
in_exclusion_zone = False
|
||||
exclusion_zone_type = None
|
||||
|
||||
# Pfad- und Titel-Update für die Metadaten der folgenden Blöcke
|
||||
if level == 1:
|
||||
current_section_title = title; section_path = "/"
|
||||
elif level == 2:
|
||||
current_section_title = title; section_path = f"/{current_section_title}"
|
||||
|
||||
# Die Überschrift selbst als regulären Block hinzufügen (auch markiert, wenn in Zone)
|
||||
blocks.append(RawBlock(
|
||||
"heading", stripped, level, section_path, current_section_title,
|
||||
exclude_from_chunking=in_exclusion_zone
|
||||
))
|
||||
continue
|
||||
|
||||
# Trenner (---) oder Leerzeilen beenden Blöcke, außer innerhalb von Callouts
|
||||
if (not stripped or stripped == "---") and not line.startswith('>'):
|
||||
if buffer:
|
||||
content = "\n".join(buffer).strip()
|
||||
if content:
|
||||
blocks.append(RawBlock(
|
||||
"paragraph", content, None, section_path, current_section_title,
|
||||
exclude_from_chunking=in_exclusion_zone
|
||||
))
|
||||
buffer = []
|
||||
if stripped == "---":
|
||||
blocks.append(RawBlock(
|
||||
"separator", "---", None, section_path, current_section_title,
|
||||
exclude_from_chunking=in_exclusion_zone
|
||||
))
|
||||
else:
|
||||
buffer.append(line)
|
||||
|
||||
if buffer:
|
||||
content = "\n".join(buffer).strip()
|
||||
if content:
|
||||
blocks.append(RawBlock(
|
||||
"paragraph", content, None, section_path, current_section_title,
|
||||
exclude_from_chunking=in_exclusion_zone
|
||||
))
|
||||
|
||||
return blocks, h1_title
|
||||
|
||||
def parse_edges_robust(text: str) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Extrahiert Kanten-Kandidaten aus Wikilinks und Callouts.
|
||||
WP-24c v4.2.7: Gibt Liste von Dicts zurück mit is_callout Flag für Chunk-Attribution.
|
||||
WP-24c v4.2.9 Fix A: current_edge_type bleibt über Leerzeilen hinweg erhalten,
|
||||
damit alle Links in einem Callout-Block korrekt verarbeitet werden.
|
||||
|
||||
Returns:
|
||||
List[Dict] mit keys: "edge" (str: "kind:target"), "is_callout" (bool)
|
||||
"""
|
||||
found_edges: List[Dict[str, any]] = []
|
||||
# 1. Wikilinks [[rel:kind|target]]
|
||||
inlines = re.findall(r'\[\[rel:([^\|\]]+)\|?([^\]]*)\]\]', text)
|
||||
for kind, target in inlines:
|
||||
k = kind.strip().lower()
|
||||
t = target.strip()
|
||||
if k and t:
|
||||
found_edges.append({"edge": f"{k}:{t}", "is_callout": False})
|
||||
|
||||
# 2. Callout Edges > [!edge] kind
|
||||
lines = text.split('\n')
|
||||
current_edge_type = None
|
||||
for line in lines:
|
||||
stripped = line.strip()
|
||||
callout_match = re.match(r'>+\s*\[!edge\]\s*([^:\s]+)', stripped)
|
||||
if callout_match:
|
||||
current_edge_type = callout_match.group(1).strip().lower()
|
||||
# Links in der gleichen Zeile des Callouts
|
||||
links = re.findall(r'\[\[([^\]]+)\]\]', stripped)
|
||||
for l in links:
|
||||
if "rel:" not in l:
|
||||
found_edges.append({"edge": f"{current_edge_type}:{l}", "is_callout": True})
|
||||
continue
|
||||
# Links in Folgezeilen des Callouts
|
||||
# WP-24c v4.2.9 Fix A: current_edge_type bleibt über Leerzeilen hinweg erhalten
|
||||
# innerhalb eines Callout-Blocks, damit alle Links korrekt verarbeitet werden
|
||||
if current_edge_type and stripped.startswith('>'):
|
||||
# Fortsetzung des Callout-Blocks: Links extrahieren
|
||||
links = re.findall(r'\[\[([^\]]+)\]\]', stripped)
|
||||
for l in links:
|
||||
if "rel:" not in l:
|
||||
found_edges.append({"edge": f"{current_edge_type}:{l}", "is_callout": True})
|
||||
elif current_edge_type and not stripped.startswith('>') and stripped:
|
||||
# Nicht-Callout-Zeile mit Inhalt: Callout-Block beendet
|
||||
current_edge_type = None
|
||||
# Leerzeilen werden ignoriert - current_edge_type bleibt erhalten
|
||||
return found_edges
|
||||
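A small worked example of `parse_edges_robust()` covering both supported syntaxes, inline `[[rel:kind|target]]` wikilinks and `> [!edge]` callouts; the note IDs are made up:

```python
from app.core.chunking.chunking_parser import parse_edges_robust

text = (
    "See [[rel:supports|note-b]].\n"
    "> [!edge] refines\n"
    "> [[note-c]] and [[note-d]]\n"
)
for e in parse_edges_robust(text):
    print(e)
# {'edge': 'supports:note-b', 'is_callout': False}
# {'edge': 'refines:note-c', 'is_callout': True}
# {'edge': 'refines:note-d', 'is_callout': True}
```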
app/core/chunking/chunking_processor.py
Normal file, 204 lines
@@ -0,0 +1,204 @@
"""
|
||||
FILE: app/core/chunking/chunking_processor.py
|
||||
DESCRIPTION: Der zentrale Orchestrator für das Chunking-System.
|
||||
AUDIT v3.3.4: Wiederherstellung der "Gold-Standard" Qualität.
|
||||
- Fix: Synchronisierung der Parameter (context_prefix) für alle Strategien.
|
||||
- Integriert physikalische Kanten-Injektion (Propagierung).
|
||||
- Stellt H1-Kontext-Fenster sicher.
|
||||
- Baut den Candidate-Pool für die WP-15b Ingestion auf.
|
||||
WP-24c v4.2.0: Konfigurierbare Header-Namen für LLM-Validierung.
|
||||
WP-24c v4.2.5: Wiederherstellung der Chunking-Präzision
|
||||
- Frontmatter-Override für chunking_profile
|
||||
- Callout-Exclusion aus Chunks
|
||||
- Strict-Mode ohne Carry-Over
|
||||
WP-24c v4.2.6: Finale Härtung - "Semantic First, Clean Second"
|
||||
- Callouts werden gechunkt (Chunk-Attribution), aber später entfernt (Clean-Context)
|
||||
- remove_callouts_from_text erst nach propagate_section_edges und Candidate Pool
|
||||
WP-24c v4.2.7: Wiederherstellung der Chunk-Attribution
|
||||
- Callout-Kanten erhalten explicit:callout Provenance im candidate_pool
|
||||
- graph_derive_edges.py erkennt diese und verhindert Note-Scope Duplikate
|
||||
"""
|
||||
import asyncio
|
||||
import re
|
||||
import os
|
||||
import logging
|
||||
from typing import List, Dict, Optional
|
||||
from .chunking_models import Chunk
|
||||
from .chunking_utils import get_chunk_config, extract_frontmatter_from_text
|
||||
from .chunking_parser import parse_blocks, parse_edges_robust
|
||||
from .chunking_strategies import strategy_sliding_window, strategy_by_heading
|
||||
from .chunking_propagation import propagate_section_edges
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
async def assemble_chunks(note_id: str, md_text: str, note_type: str, config: Optional[Dict] = None) -> List[Chunk]:
    """
    Main entry point for splitting a note.
    Combines the splitting strategies with physical context enrichment.
    WP-24c v4.2.5: A frontmatter override for chunking_profile is honored.
    """
    # 1. WP-24c v4.2.5: Extract frontmatter BEFORE configuration (for the override)
    fm, body_text = extract_frontmatter_from_text(md_text)

    # 2. Configuration with frontmatter override
    if config is None:
        config = get_chunk_config(note_type, frontmatter=fm)

    blocks, doc_title = parse_blocks(md_text)

    # WP-24c v4.2.6: Filter ONLY edge zones (LLM validation & note scope)
    # Callouts (is_meta_content=True) must pass through so chunk attribution is preserved
    blocks_for_chunking = [b for b in blocks if not getattr(b, 'exclude_from_chunking', False)]

    # Prepare the H1 prefix for the embedding windows (breadcrumbs)
    h1_prefix = f"# {doc_title}" if doc_title else ""

    # 3. Apply the splitting strategy
    # All strategies now uniformly use context_prefix to build the window.
    # WP-24c v4.2.6: Callouts are included in blocks_for_chunking (for chunk attribution)
    if config.get("strategy") == "by_heading":
        chunks = await asyncio.to_thread(
            strategy_by_heading, blocks_for_chunking, config, note_id, context_prefix=h1_prefix
        )
    else:
        chunks = await asyncio.to_thread(
            strategy_sliding_window, blocks_for_chunking, config, note_id, context_prefix=h1_prefix
        )

    if not chunks:
        return []
    # 4. Physical context enrichment (the quality fix)
    # WP-24c v4.2.6: Operates on the original text incl. callouts (for correct chunk attribution)
    # Writes edges from callouts/inlines directly into the text for Qdrant.
    chunks = propagate_section_edges(chunks)

    # 5. WP-15b: Build the candidate pool (metadata for the IngestionService)
    # WP-24c v4.2.7: Mark callout edges explicitly for chunk attribution
    # First collect the edges explicitly present in the text.
    # WP-24c v4.4.0-DEBUG: Interface 1 - extraction
    for idx, ch in enumerate(chunks):
        # We extract from the text already enriched by propagation.
        # ch.candidate_pool is initialized as an empty list in the model constructor.
        for edge_info in parse_edges_robust(ch.text):
            edge_str = edge_info["edge"]
            is_callout = edge_info.get("is_callout", False)
            parts = edge_str.split(':', 1)
            if len(parts) == 2:
                k, t = parts
                # WP-24c v4.2.7: Callout edges receive explicit:callout provenance
                # WP-24c v4.4.1: Harmonization - provenance must be exactly "explicit:callout"
                provenance = "explicit:callout" if is_callout else "explicit"
                # WP-24c v4.4.1: Use "to" for compatibility (also expected in graph_derive_edges.py)
                # Additionally "target_id" for maximum compatibility with the ingestion_processor validation
                pool_entry = {"kind": k, "to": t, "provenance": provenance}
                if is_callout:
                    # WP-24c v4.4.1: For callouts, also add "target_id" for validation
                    pool_entry["target_id"] = t
                ch.candidate_pool.append(pool_entry)

                # WP-24c v4.4.0-DEBUG: Interface 1 - logging
                if is_callout:
                    logger.debug(f"DEBUG-TRACER [Extraction]: Chunk Index: {idx}, Chunk ID: {ch.id}, Kind: {k}, Target: {t}, Provenance: {provenance}, Is_Callout: {is_callout}, Raw_Edge_Str: {edge_str}")
    # 6. Global pool (unassigned edges - may sit mid-document or at the end)
    # WP-24c v4.2.0: Configurable header names and level via .env
    # Scans for ALL edge-pool blocks in the original markdown (not just at the end).
    llm_validation_headers = os.getenv(
        "MINDNET_LLM_VALIDATION_HEADERS",
        "Unzugeordnete Kanten,Edge Pool,Candidates"
    )
    header_list = [h.strip() for h in llm_validation_headers.split(",") if h.strip()]
    # Fall back to the defaults if empty
    if not header_list:
        header_list = ["Unzugeordnete Kanten", "Edge Pool", "Candidates"]

    # Header level is configurable (default: 3 for ###)
    llm_validation_level = int(os.getenv("MINDNET_LLM_VALIDATION_HEADER_LEVEL", "3"))
    header_level_pattern = "#" * llm_validation_level

    # Regex pattern with configurable headers and level
    # WP-24c v4.2.0: finditer instead of search, to find ALL zones (even mid-document)
    # A zone ends at a new header (of any level) or at the end of the document
    header_pattern = "|".join(re.escape(h) for h in header_list)
    zone_pattern = rf'^{re.escape(header_level_pattern)}\s*(?:{header_pattern})\s*\n(.*?)(?=\n#|$)'

    for pool_match in re.finditer(zone_pattern, body_text, re.DOTALL | re.IGNORECASE | re.MULTILINE):
        global_edges = parse_edges_robust(pool_match.group(1))
        for edge_info in global_edges:
            edge_str = edge_info["edge"]
            parts = edge_str.split(':', 1)
            if len(parts) == 2:
                k, t = parts
                # These edges are marked as "global_pool" for the later AI review.
                for ch in chunks:
                    ch.candidate_pool.append({"kind": k, "to": t, "provenance": "global_pool"})
    # 7. De-duplication of the pool & linking
    for ch in chunks:
        seen = set()
        unique = []
        for c in ch.candidate_pool:
            # Uniqueness over kind, target and origin (provenance)
            key = (c["kind"], c["to"], c["provenance"])
            if key not in seen:
                seen.add(key)
                unique.append(c)
        ch.candidate_pool = unique
    # 8. WP-24c v4.2.6: Clean context - strip callout syntax from the chunk text
    # IMPORTANT: This happens AFTER propagate_section_edges and the candidate pool build,
    # so that chunk attribution is preserved and edges are extracted correctly.
    # Note: Callouts can span multiple lines (including nested ones: >>)
    def remove_callouts_from_text(text: str) -> str:
        """Removes all callout lines (> [!edge] or > [!abstract]) from the text."""
        if not text:
            return text

        lines = text.split('\n')
        cleaned_lines = []
        i = 0

        # NEW (v4.2.8):
        # WP-24c v4.2.8: Callout pattern for edge and abstract
        callout_start_pattern = re.compile(r'^>\s*\[!(edge|abstract)[^\]]*\]', re.IGNORECASE)

        while i < len(lines):
            line = lines[i]
            callout_match = callout_start_pattern.match(line)

            if callout_match:
                # Callout found: skip all lines of the callout block
                leading_gt_count = len(line) - len(line.lstrip('>'))
                i += 1

                # Skip every line that belongs to the callout
                while i < len(lines):
                    next_line = lines[i]
                    if not next_line.strip().startswith('>'):
                        break
                    next_leading_gt = len(next_line) - len(next_line.lstrip('>'))
                    if next_leading_gt < leading_gt_count:
                        break
                    i += 1
            else:
                # Normal line: keep it
                cleaned_lines.append(line)
                i += 1

        # Normalize blank lines (at most 2 in a row)
        result = '\n'.join(cleaned_lines)
        result = re.sub(r'\n\s*\n\s*\n+', '\n\n', result)
        return result

    for ch in chunks:
        ch.text = remove_callouts_from_text(ch.text)
        if ch.window:
            ch.window = remove_callouts_from_text(ch.window)

    # Link neighborhoods for graph traversal
    for i, ch in enumerate(chunks):
        ch.neighbors_prev = chunks[i-1].id if i > 0 else None
        ch.neighbors_next = chunks[i+1].id if i < len(chunks)-1 else None

    return chunks
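For orientation, this is roughly how the orchestrator is driven (a hedged sketch; the note text and the note_type value are invented, and it assumes the assemble_chunks signature above plus the repository's chunking package):

# Hypothetical driver for assemble_chunks
import asyncio
from app.core.chunking.chunking_processor import assemble_chunks

md = "# Demo\n\n> [!edge] depends_on: [[Other Note]]\n\nBody text."

async def main():
    chunks = await assemble_chunks("note-001", md, note_type="decision")
    for ch in chunks:
        # Callout syntax is gone from ch.text, but the edge survives in the pool
        print(ch.id, ch.candidate_pool)

asyncio.run(main())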
69
app/core/chunking/chunking_propagation.py
Normal file

@@ -0,0 +1,69 @@
"""
|
||||
FILE: app/core/chunking/chunking_propagation.py
|
||||
DESCRIPTION: Injiziert Sektions-Kanten physisch in den Text (Embedding-Enrichment).
|
||||
Fix v3.3.6: Nutzt robustes Parsing zur Erkennung vorhandener Kanten,
|
||||
um Dopplungen direkt hinter [!edge] Callouts format-agnostisch zu verhindern.
|
||||
"""
|
||||
from typing import List, Dict, Set
|
||||
from .chunking_models import Chunk
|
||||
from .chunking_parser import parse_edges_robust
|
||||
|
||||
def propagate_section_edges(chunks: List[Chunk]) -> List[Chunk]:
|
||||
"""
|
||||
Sammelt Kanten pro Sektion und schreibt sie hart in den Text und das Window.
|
||||
Verhindert Dopplungen, wenn Kanten bereits via [!edge] Callout vorhanden sind.
|
||||
"""
|
||||
# 1. Sammeln: Alle expliziten Kanten pro Sektions-Pfad aggregieren
|
||||
section_map: Dict[str, Set[str]] = {} # path -> set(kind:target)
|
||||
|
||||
for ch in chunks:
|
||||
# Root-Level "/" ignorieren (zu global), Fokus auf spezifische Kapitel
|
||||
if not ch.section_path or ch.section_path == "/":
|
||||
continue
|
||||
|
||||
# Nutzt den robusten Parser aus dem Package
|
||||
# WP-24c v4.2.7: parse_edges_robust gibt jetzt Liste von Dicts zurück
|
||||
edge_infos = parse_edges_robust(ch.text)
|
||||
if edge_infos:
|
||||
if ch.section_path not in section_map:
|
||||
section_map[ch.section_path] = set()
|
||||
for edge_info in edge_infos:
|
||||
section_map[ch.section_path].add(edge_info["edge"])
|
||||
|
||||
# 2. Injizieren: Kanten in jeden Chunk der Sektion zurückschreiben (Broadcasting)
|
||||
for ch in chunks:
|
||||
if ch.section_path in section_map:
|
||||
edges_to_add = section_map[ch.section_path]
|
||||
if not edges_to_add:
|
||||
continue
|
||||
|
||||
# Vorhandene Kanten (Typ:Ziel) in DIESEM Chunk ermitteln,
|
||||
# um Dopplungen (z.B. durch Callouts) zu vermeiden.
|
||||
# WP-24c v4.2.7: parse_edges_robust gibt jetzt Liste von Dicts zurück
|
||||
existing_edge_infos = parse_edges_robust(ch.text)
|
||||
existing_edges = {ei["edge"] for ei in existing_edge_infos}
|
||||
|
||||
injections = []
|
||||
# Sortierung für deterministische Ergebnisse
|
||||
for e_str in sorted(list(edges_to_add)):
|
||||
# Wenn die Kante (Typ + Ziel) bereits vorhanden ist (egal welches Format),
|
||||
# überspringen wir die Injektion für diesen Chunk.
|
||||
if e_str in existing_edges:
|
||||
continue
|
||||
|
||||
kind, target = e_str.split(':', 1)
|
||||
injections.append(f"[[rel:{kind}|{target}]]")
|
||||
|
||||
if injections:
|
||||
# Physische Anreicherung
|
||||
# Triple-Newline für saubere Trennung im Embedding-Fenster
|
||||
block = "\n\n\n" + " ".join(injections)
|
||||
ch.text += block
|
||||
|
||||
# Auch ins Window schreiben, da Qdrant hier sucht!
|
||||
if ch.window:
|
||||
ch.window += block
|
||||
else:
|
||||
ch.window = ch.text
|
||||
|
||||
return chunks
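The injected markers use the same [[rel:KIND|Target]] syntax the parser recognizes, which makes the propagation idempotent. A small sketch of the effect (the Chunk field values are invented; it assumes the Chunk constructor arguments used by the strategies below):

# Hypothetical demonstration of propagate_section_edges
from app.core.chunking.chunking_models import Chunk
from app.core.chunking.chunking_propagation import propagate_section_edges

a = Chunk(id="n#c00", note_id="n", index=0, text="> [!edge] depends_on: [[X]]",
          window=None, token_count=5, section_title="S", section_path="/S",
          neighbors_prev=None, neighbors_next=None)
b = Chunk(id="n#c01", note_id="n", index=1, text="More prose in the same section.",
          window=None, token_count=5, section_title="S", section_path="/S",
          neighbors_prev=None, neighbors_next=None)

propagate_section_edges([a, b])
# b.text should now end with "\n\n\n[[rel:depends_on|X]]"; a.text is left alone
# because the edge is already present there via the callout.
print(b.text)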
190
app/core/chunking/chunking_strategies.py
Normal file

@@ -0,0 +1,190 @@
"""
|
||||
FILE: app/core/chunking/chunking_strategies.py
|
||||
DESCRIPTION: Strategien für atomares Sektions-Chunking v3.9.9.
|
||||
Implementiert das 'Pack-and-Carry-Over' Verfahren nach Regel 1-3.
|
||||
- Keine redundante Kanten-Injektion.
|
||||
- Strikte Einhaltung von Sektionsgrenzen via Look-Ahead.
|
||||
- Fix: Synchronisierung der Parameter mit dem Orchestrator (context_prefix).
|
||||
WP-24c v4.2.5: Strict-Mode ohne Carry-Over - Bei strict_heading_split wird nach jeder Sektion geflasht.
|
||||
"""
|
||||
from typing import List, Dict, Any, Optional
|
||||
from .chunking_models import RawBlock, Chunk
|
||||
from .chunking_utils import estimate_tokens
|
||||
from .chunking_parser import split_sentences
|
||||
|
||||
def _create_win(context_prefix: str, sec_title: Optional[str], text: str) -> str:
|
||||
"""Baut den Breadcrumb-Kontext für das Embedding-Fenster."""
|
||||
parts = [context_prefix] if context_prefix else []
|
||||
# Verhindert Dopplung, falls der Context-Prefix (H1) bereits den Sektionsnamen enthält
|
||||
if sec_title and f"# {sec_title}" != context_prefix and sec_title not in (context_prefix or ""):
|
||||
parts.append(sec_title)
|
||||
prefix = " > ".join(parts)
|
||||
return f"{prefix}\n{text}".strip() if prefix else text
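The breadcrumb is simply "H1 > Section" glued on top of the chunk text, for example (a sketch reusing _create_win exactly as defined above):

# Expected breadcrumb shape produced by _create_win
print(_create_win("# Manual", "Install", "Run the setup script."))
# -> "# Manual > Install\nRun the setup script."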

def strategy_by_heading(blocks: List[RawBlock], config: Dict[str, Any], note_id: str, context_prefix: str = "") -> List[Chunk]:
    """
    Universal heading strategy with carry-over logic.
    Synchronized on context_prefix for compatibility with the orchestrator.
    """
    smart_edge = config.get("enable_smart_edge_allocation", True)
    strict = config.get("strict_heading_split", False)
    target = config.get("target", 400)
    max_tokens = config.get("max", 600)
    split_level = config.get("split_level", 2)
    overlap_cfg = config.get("overlap", (50, 80))
    overlap = sum(overlap_cfg) // 2 if isinstance(overlap_cfg, (list, tuple)) else overlap_cfg

    chunks: List[Chunk] = []

    def _emit(txt, title, path):
        """Writes the final chunk without modifying the text."""
        idx = len(chunks)
        win = _create_win(context_prefix, title, txt)
        chunks.append(Chunk(
            id=f"{note_id}#c{idx:02d}", note_id=note_id, index=idx,
            text=txt, window=win, token_count=estimate_tokens(txt),
            section_title=title, section_path=path, neighbors_prev=None, neighbors_next=None
        ))

    # --- STEP 1: Group blocks into atomic section units ---
    sections: List[Dict[str, Any]] = []
    curr_blocks = []
    for b in blocks:
        if b.kind == "heading" and b.level <= split_level:
            if curr_blocks:
                sections.append({
                    "text": "\n\n".join([x.text for x in curr_blocks]),
                    "meta": curr_blocks[0],
                    "is_empty": len(curr_blocks) == 1 and curr_blocks[0].kind == "heading"
                })
            curr_blocks = [b]
        else:
            curr_blocks.append(b)
    if curr_blocks:
        sections.append({
            "text": "\n\n".join([x.text for x in curr_blocks]),
            "meta": curr_blocks[0],
            "is_empty": len(curr_blocks) == 1 and curr_blocks[0].kind == "heading"
        })

    # --- STEP 2: Process the queue ---
    queue = list(sections)
    current_chunk_text = ""
    current_meta = {"title": None, "path": "/"}

    # Mode selection: hard split when smart_edge=False OR strict=True
    is_hard_split_mode = (not smart_edge) or (strict)

    while queue:
        item = queue.pop(0)
        item_text = item["text"]

        # Initialization for a new chunk
        if not current_chunk_text:
            current_meta["title"] = item["meta"].section_title
            current_meta["path"] = item["meta"].section_path

        # CASE A: HARD SPLIT MODE (WP-24c v4.2.5: strict mode without carry-over)
        if is_hard_split_mode:
            # WP-24c v4.2.5: With strict_heading_split: true, a flush happens after EVERY section
            # No carry-over allowed, not even for empty headings
            if current_chunk_text:
                # Flush the previous chunk
                _emit(current_chunk_text, current_meta["title"], current_meta["path"])
                current_chunk_text = ""

            # New section: initialize the meta data
            current_meta["title"] = item["meta"].section_title
            current_meta["path"] = item["meta"].section_path

            # WP-24c v4.2.5: Empty sections are also emitted as a separate chunk
            # (heading only, no content)
            if item.get("is_empty", False):
                # Empty section: heading-only chunk
                _emit(item_text, current_meta["title"], current_meta["path"])
            else:
                # Normal section: check against the token limit
                if estimate_tokens(item_text) > max_tokens:
                    # Section too large: smart splitting (but still into separate chunks)
                    sents = split_sentences(item_text)
                    header_prefix = item["meta"].text if item["meta"].kind == "heading" else ""

                    take_sents = []; take_len = 0
                    while sents:
                        s = sents.pop(0); slen = estimate_tokens(s)
                        if take_len + slen > target and take_sents:
                            _emit(" ".join(take_sents), current_meta["title"], current_meta["path"])
                            take_sents = [s]; take_len = slen
                        else:
                            take_sents.append(s); take_len += slen

                    if take_sents:
                        _emit(" ".join(take_sents), current_meta["title"], current_meta["path"])
                else:
                    # Section fits: emit directly as a chunk
                    _emit(item_text, current_meta["title"], current_meta["path"])

            current_chunk_text = ""
            continue

        # CASE B: SMART MODE (rules 1-3)
        combined_text = (current_chunk_text + "\n\n" + item_text).strip() if current_chunk_text else item_text
        combined_est = estimate_tokens(combined_text)

        if combined_est <= max_tokens:
            # Rules 1 & 2: Fits according to the estimate -> absorb
            current_chunk_text = combined_text
        else:
            if current_chunk_text:
                # Rule 2: Flush at the section boundary, push the item back
                _emit(current_chunk_text, current_meta["title"], current_meta["path"])
                current_chunk_text = ""
                queue.insert(0, item)
            else:
                # Rule 3: A single section is too large -> smart splitting
                sents = split_sentences(item_text)
                header_prefix = item["meta"].text if item["meta"].kind == "heading" else ""

                take_sents = []; take_len = 0
                while sents:
                    s = sents.pop(0); slen = estimate_tokens(s)
                    if take_len + slen > target and take_sents:
                        sents.insert(0, s); break
                    take_sents.append(s); take_len += slen

                _emit(" ".join(take_sents), current_meta["title"], current_meta["path"])

                if sents:
                    remainder = " ".join(sents)
                    # Context preservation: repeat the heading for the remainder
                    if header_prefix and not remainder.startswith(header_prefix):
                        remainder = header_prefix + "\n\n" + remainder
                    # Carry-over: the remainder is pushed to the front of the queue
                    queue.insert(0, {"text": remainder, "meta": item["meta"], "is_split": True})

    if current_chunk_text:
        _emit(current_chunk_text, current_meta["title"], current_meta["path"])

    return chunks
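To see the strict flush in action, feed the strategy a couple of synthetic blocks (a hedged sketch; the RawBlock constructor arguments are invented, since only the fields kind, level, text, section_title and section_path are visible in this diff):

# Hypothetical smoke test for strategy_by_heading
from app.core.chunking.chunking_models import RawBlock
from app.core.chunking.chunking_strategies import strategy_by_heading

blocks = [
    RawBlock(kind="heading", level=2, text="## Intro", section_title="Intro", section_path="/Intro"),
    RawBlock(kind="paragraph", level=0, text="Short intro text.", section_title="Intro", section_path="/Intro"),
]
cfg = {"strategy": "by_heading", "target": 400, "max": 600, "strict_heading_split": True}
for ch in strategy_by_heading(blocks, cfg, note_id="demo"):
    print(ch.id, repr(ch.section_path), ch.token_count)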

def strategy_sliding_window(blocks: List[RawBlock], config: Dict[str, Any], note_id: str, context_prefix: str = "") -> List[Chunk]:
    """Standard sliding window for flat texts without a section focus."""
    target = config.get("target", 400); max_tokens = config.get("max", 600)
    chunks: List[Chunk] = []; buf: List[RawBlock] = []

    for b in blocks:
        b_tokens = estimate_tokens(b.text)
        curr_tokens = sum(estimate_tokens(x.text) for x in buf) if buf else 0
        if curr_tokens + b_tokens > max_tokens and buf:
            txt = "\n\n".join([x.text for x in buf]); idx = len(chunks)
            win = _create_win(context_prefix, buf[0].section_title, txt)
            chunks.append(Chunk(id=f"{note_id}#c{idx:02d}", note_id=note_id, index=idx, text=txt, window=win, token_count=curr_tokens, section_title=buf[0].section_title, section_path=buf[0].section_path, neighbors_prev=None, neighbors_next=None))
            buf = []
        buf.append(b)

    if buf:
        txt = "\n\n".join([x.text for x in buf]); idx = len(chunks)
        win = _create_win(context_prefix, buf[0].section_title, txt)
        chunks.append(Chunk(id=f"{note_id}#c{idx:02d}", note_id=note_id, index=idx, text=txt, window=win, token_count=estimate_tokens(txt), section_title=buf[0].section_title, section_path=buf[0].section_path, neighbors_prev=None, neighbors_next=None))

    return chunks
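The window closes as soon as the next block would push the running estimate past max; note that target and overlap are read from the config but not used in this strategy. A standalone sketch of that budget rule (the token numbers are illustrative):

# The core budget rule of the sliding window, isolated
def pack(block_tokens, max_tokens=600):
    windows, buf, total = [], [], 0
    for t in block_tokens:
        if total + t > max_tokens and buf:
            windows.append(buf); buf, total = [], 0
        buf.append(t); total += t
    if buf:
        windows.append(buf)
    return windows

print(pack([250, 250, 250, 50]))  # -> [[250, 250], [250, 50]]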
74
app/core/chunking/chunking_utils.py
Normal file

@@ -0,0 +1,74 @@
"""
|
||||
FILE: app/core/chunking/chunking_utils.py
|
||||
DESCRIPTION: Hilfswerkzeuge für Token-Schätzung und YAML-Konfiguration.
|
||||
"""
|
||||
import math
|
||||
import yaml
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from typing import Dict, Any, Tuple, Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
BASE_DIR = Path(__file__).resolve().parent.parent.parent.parent
|
||||
CONFIG_PATH = BASE_DIR / "config" / "types.yaml"
|
||||
DEFAULT_PROFILE = {"strategy": "sliding_window", "target": 400, "max": 600, "overlap": (50, 80)}
|
||||
|
||||
_CONFIG_CACHE = None
|
||||
|
||||
def load_yaml_config() -> Dict[str, Any]:
|
||||
global _CONFIG_CACHE
|
||||
if _CONFIG_CACHE is not None: return _CONFIG_CACHE
|
||||
if not CONFIG_PATH.exists(): return {}
|
||||
try:
|
||||
with open(CONFIG_PATH, "r", encoding="utf-8") as f:
|
||||
data = yaml.safe_load(f)
|
||||
_CONFIG_CACHE = data
|
||||
return data
|
||||
except Exception: return {}
|
||||
|
||||
def get_chunk_config(note_type: str, frontmatter: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
|
||||
"""
|
||||
Lädt die Chunking-Strategie basierend auf dem Note-Type.
|
||||
WP-24c v4.2.5: Frontmatter-Override für chunking_profile hat höchste Priorität.
|
||||
|
||||
Args:
|
||||
note_type: Der Typ der Note (z.B. "decision", "experience")
|
||||
frontmatter: Optionales Frontmatter-Dict mit chunking_profile Override
|
||||
|
||||
Returns:
|
||||
Dict mit Chunking-Konfiguration
|
||||
"""
|
||||
full_config = load_yaml_config()
|
||||
profiles = full_config.get("chunking_profiles", {})
|
||||
type_def = full_config.get("types", {}).get(note_type.lower(), {})
|
||||
|
||||
# WP-24c v4.2.5: Priorität: Frontmatter > Type-Def > Defaults
|
||||
profile_name = None
|
||||
if frontmatter and "chunking_profile" in frontmatter:
|
||||
profile_name = frontmatter.get("chunking_profile") or frontmatter.get("chunk_profile")
|
||||
if not profile_name:
|
||||
profile_name = type_def.get("chunking_profile")
|
||||
if not profile_name:
|
||||
profile_name = full_config.get("defaults", {}).get("chunking_profile", "sliding_standard")
|
||||
|
||||
config = profiles.get(profile_name, DEFAULT_PROFILE).copy()
|
||||
if "overlap" in config and isinstance(config["overlap"], list):
|
||||
config["overlap"] = tuple(config["overlap"])
|
||||
return config
|
||||
|
||||
def estimate_tokens(text: str) -> int:
|
||||
"""Grobe Schätzung der Token-Anzahl."""
|
||||
return max(1, math.ceil(len(text.strip()) / 4))
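The estimator is deliberately crude: roughly one token per four characters, floored at one. A quick check against the formula above:

# Spot-checking the len/4 heuristic
assert estimate_tokens("Hello world") == 3   # ceil(11 / 4)
assert estimate_tokens("   ") == 1           # stripped to "", floored at 1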

def extract_frontmatter_from_text(md_text: str) -> Tuple[Dict[str, Any], str]:
    """Separates the YAML frontmatter from the text."""
    import re
    fm_match = re.match(r'^\s*---\s*\n(.*?)\n---', md_text, re.DOTALL)
    if not fm_match: return {}, md_text
    try:
        frontmatter = yaml.safe_load(fm_match.group(1))
        if not isinstance(frontmatter, dict): frontmatter = {}
    except Exception: frontmatter = {}
    text_without_fm = re.sub(r'^\s*---\s*\n(.*?)\n---', '', md_text, flags=re.DOTALL)
    return frontmatter, text_without_fm.strip()
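So a note carrying a chunking_profile override is split like this (a sketch; the profile name is invented):

# Hypothetical frontmatter round-trip
doc = "---\nchunking_profile: strict_sections\n---\n# Body starts here"
fm, body = extract_frontmatter_from_text(doc)
print(fm)    # {'chunking_profile': 'strict_sections'}
print(body)  # '# Body starts here'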
35
app/core/database/__init__.py
Normal file

@@ -0,0 +1,35 @@
"""
|
||||
PACKAGE: app.core.database
|
||||
DESCRIPTION: Zentrale Schnittstelle für alle Datenbank-Operationen (Qdrant).
|
||||
Bündelt Client-Initialisierung und Point-Konvertierung.
|
||||
"""
|
||||
from .qdrant import (
|
||||
QdrantConfig,
|
||||
get_client,
|
||||
ensure_collections,
|
||||
ensure_payload_indexes,
|
||||
collection_names
|
||||
)
|
||||
from .qdrant_points import (
|
||||
points_for_note,
|
||||
points_for_chunks,
|
||||
points_for_edges,
|
||||
upsert_batch,
|
||||
get_edges_for_sources,
|
||||
search_chunks_by_vector
|
||||
)
|
||||
|
||||
# Öffentlicher Export für das Gesamtsystem
|
||||
__all__ = [
|
||||
"QdrantConfig",
|
||||
"get_client",
|
||||
"ensure_collections",
|
||||
"ensure_payload_indexes",
|
||||
"collection_names",
|
||||
"points_for_note",
|
||||
"points_for_chunks",
|
||||
"points_for_edges",
|
||||
"upsert_batch",
|
||||
"get_edges_for_sources",
|
||||
"search_chunks_by_vector"
|
||||
]
|

@@ -1,38 +1,23 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
app/core/qdrant.py
Version: 2.2.0 (2025-11-11)

Purpose
-------
- Central Qdrant access (client, config)
- Collection creation (notes/chunks/edges)
- **Ensure payload indexes** (idempotent)

Note
----
This file is intended as a drop-in replacement in case your project does not
yet have a robust ensure_payload_indexes() implementation. The signatures
remain compatible with scripts.import_markdown and scripts.reset_qdrant.

API notes
---------
- Payload indexes are created with `create_payload_index`.
- Types come from `qdrant_client.http.models.PayloadSchemaType`:
  KEYWORD | TEXT | INTEGER | FLOAT | BOOL | GEO | DATETIME
- For frequent filter fields (note_id, kind, scope, type, tags, ...) we create
  indexes. According to the Qdrant docs this is best practice for fast filters.
FILE: app/core/database/qdrant.py
DESCRIPTION: Qdrant client factory and schema management.
Creates collections and payload indexes.
MODULARIZATION: Moved into the database package for WP-14.
VERSION: 2.2.2 (WP fix: index for target_section)
STATUS: Active
DEPENDENCIES: qdrant_client, dataclasses, os
"""
from __future__ import annotations

import os
import logging
from dataclasses import dataclass
from typing import Optional, Tuple, Dict, List

from qdrant_client import QdrantClient
from qdrant_client.http import models as rest

logger = logging.getLogger(__name__)

# ---------------------------------------------------------------------------
# Configuration
@@ -40,6 +25,7 @@ from qdrant_client.http import models as rest

@dataclass
class QdrantConfig:
    """Configuration object for establishing the Qdrant connection."""
    host: Optional[str] = None
    port: Optional[int] = None
    url: Optional[str] = None

@@ -51,16 +37,20 @@ class QdrantConfig:

    @classmethod
    def from_env(cls) -> "QdrantConfig":
        """Creates the configuration from environment variables."""
        # Either URL OR host/port; the API key is optional
        url = os.getenv("QDRANT_URL") or None
        host = os.getenv("QDRANT_HOST") or None
        port = os.getenv("QDRANT_PORT")
        port = int(port) if port else None
        api_key = os.getenv("QDRANT_API_KEY") or None
        prefix = os.getenv("COLLECTION_PREFIX") or "mindnet"
        # WP-24c v4.5.10: Harmonization - supports both environment variables for backward compatibility
        # COLLECTION_PREFIX takes priority, MINDNET_PREFIX is the fallback
        prefix = os.getenv("COLLECTION_PREFIX") or os.getenv("MINDNET_PREFIX") or "mindnet"
        dim = int(os.getenv("VECTOR_DIM") or 384)
        distance = os.getenv("DISTANCE", "Cosine")
        on_disk_payload = (os.getenv("ON_DISK_PAYLOAD", "true").lower() == "true")

        return cls(
            host=host, port=port, url=url, api_key=api_key,
            prefix=prefix, dim=dim, distance=distance, on_disk_payload=on_disk_payload
@@ -68,6 +58,7 @@ class QdrantConfig:


def get_client(cfg: QdrantConfig) -> QdrantClient:
    """Initializes the Qdrant client based on the configuration."""
    # QdrantClient accepts either url=... or host/port
    if cfg.url:
        return QdrantClient(url=cfg.url, api_key=cfg.api_key, timeout=60.0)

@@ -79,17 +70,19 @@ def get_client(cfg: QdrantConfig) -> QdrantClient:
# ---------------------------------------------------------------------------

def collection_names(prefix: str) -> Tuple[str, str, str]:
    """Returns the standardized collection names."""
    return f"{prefix}_notes", f"{prefix}_chunks", f"{prefix}_edges"


def _vector_params(dim: int, distance: str) -> rest.VectorParams:
    """Creates vector parameters for the collection schema."""
    # Distance: "Cosine" | "Dot" | "Euclid"
    dist = getattr(rest.Distance, distance.capitalize(), rest.Distance.COSINE)
    return rest.VectorParams(size=dim, distance=dist)


def ensure_collections(client: QdrantClient, prefix: str, dim: int) -> None:
    """Creates mindnet_notes, mindnet_chunks, mindnet_edges (if not present)."""
    """Creates the notes, chunks and edges collections if they do not exist."""
    notes, chunks, edges = collection_names(prefix)

    # notes
@@ -106,7 +99,7 @@ def ensure_collections(client: QdrantClient, prefix: str, dim: int) -> None:
            vectors_config=_vector_params(dim, os.getenv("DISTANCE", "Cosine")),
            on_disk_payload=True,
        )
    # edges (dummy vector, filtering via payload)
    # edges (dummy vector, since filtering happens primarily via the payload)
    if not client.collection_exists(edges):
        client.create_collection(
            collection_name=edges,
@@ -120,21 +113,20 @@ def ensure_collections(client: QdrantClient, prefix: str, dim: int) -> None:
# ---------------------------------------------------------------------------

def _ensure_index(client: QdrantClient, collection: str, field: str, schema: rest.PayloadSchemaType) -> None:
    """Idempotently creates a payload index for a field."""
    """Idempotently creates a payload index for a specific field."""
    try:
        client.create_payload_index(collection_name=collection, field_name=field, field_schema=schema, wait=True)
    except Exception as e:
        # Ignore errors if the index already exists or the server reports "already indexed".
        # Add logging here for debugging if needed.
        _ = e
        # Ignore errors if the index already exists
        logger.debug(f"Index check for {field} in {collection}: {e}")


def ensure_payload_indexes(client: QdrantClient, prefix: str) -> None:
    """
    Ensures that all required payload indexes exist.
    - notes: note_id(KEYWORD), type(KEYWORD), title(TEXT), updated(INTEGER), tags(KEYWORD)
    - chunks: note_id(KEYWORD), chunk_id(KEYWORD), index(INTEGER), type(KEYWORD), tags(KEYWORD)
    - edges: note_id(KEYWORD), kind(KEYWORD), scope(KEYWORD), source_id(KEYWORD), target_id(KEYWORD), chunk_id(KEYWORD)
    Ensures that all payload indexes required for search exist.
    - notes: note_id, type, title, updated, tags
    - chunks: note_id, chunk_id, index, type, tags
    - edges: note_id, kind, scope, source_id, target_id, chunk_id, target_section
    """
    notes, chunks, edges = collection_names(prefix)

@@ -166,6 +158,8 @@ def ensure_payload_indexes(client: QdrantClient, prefix: str) -> None:
        ("source_id", rest.PayloadSchemaType.KEYWORD),
        ("target_id", rest.PayloadSchemaType.KEYWORD),
        ("chunk_id", rest.PayloadSchemaType.KEYWORD),
        # NEW: index for section links (WP-15b)
        ("target_section", rest.PayloadSchemaType.KEYWORD),
    ]:
        _ensure_index(client, edges, field, schema)
@@ -176,4 +170,4 @@ __all__ = [
    "ensure_collections",
    "ensure_payload_indexes",
    "collection_names",
]
]

@@ -1,18 +1,11 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
app/core/qdrant_points.py - robust points helpers for Qdrant

- Single source of truth for building PointStruct for notes/chunks/edges
- Backward-compatible payloads for edges
- Handles both Single-Vector and Named-Vector collections
- Deterministic overrides via ENV to avoid auto-detection traps:
  * NOTES_VECTOR_NAME, CHUNKS_VECTOR_NAME, EDGES_VECTOR_NAME
  * MINDNET_VECTOR_NAME (fallback)
  > Set to a concrete name (e.g. "text") to force Named-Vector with that name
  > Set to "__single__" (or "single") to force Single-Vector

Version: 1.5.0 (2025-11-08)
FILE: app/core/database/qdrant_points.py
DESCRIPTION: Object mapper for Qdrant. Converts JSON payloads (notes, chunks, edges)
into PointStructs and generates deterministic UUIDs.
VERSION: 4.1.0 (WP-24c: Gold-Standard Identity v4.1.0 - target_section support)
STATUS: Active
DEPENDENCIES: qdrant_client, uuid, os, app.core.graph.graph_utils
LAST_ANALYSIS: 2026-01-10
"""
from __future__ import annotations
import os
@@ -22,25 +15,44 @@ from typing import List, Tuple, Iterable, Optional, Dict, Any
from qdrant_client.http import models as rest
from qdrant_client import QdrantClient

# WP-24c: Import of the central identity logic to avoid ID drift
from app.core.graph.graph_utils import _mk_edge_id

# --------------------- ID helpers ---------------------

def _to_uuid(stable_key: str) -> str:
    return str(uuid.uuid5(uuid.NAMESPACE_URL, stable_key))
    """
    Generates a deterministic UUIDv5 based on a stable key.
    Hardening v1.5.2: Guard against empty keys to avoid Pydantic errors.
    """
    if not stable_key:
        raise ValueError("UUID generation failed: stable_key is empty or None")
    return str(uuid.uuid5(uuid.NAMESPACE_URL, str(stable_key)))
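Determinism is the point here: the same stable key always maps to the same point ID, so re-ingesting a note overwrites instead of duplicating. For instance (standard-library behaviour, reproducible on any machine):

# Deterministic UUIDv5 from a stable key
import uuid
print(uuid.uuid5(uuid.NAMESPACE_URL, "note-001#1"))
# Always prints the same UUID for "note-001#1", across runs and hosts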

def _names(prefix: str) -> Tuple[str, str, str]:
    """Internal resolution of the collection names based on the prefix."""
    return f"{prefix}_notes", f"{prefix}_chunks", f"{prefix}_edges"

# --------------------- Points builders ---------------------

def points_for_note(prefix: str, note_payload: dict, note_vec: List[float] | None, dim: int) -> Tuple[str, List[rest.PointStruct]]:
    """Converts note metadata into Qdrant points."""
    notes_col, _, _ = _names(prefix)
    # Uses a zero vector as fallback if no embedding is available
    vector = note_vec if note_vec is not None else [0.0] * int(dim)

    raw_note_id = note_payload.get("note_id") or note_payload.get("id") or "missing-note-id"
    point_id = _to_uuid(raw_note_id)
    pt = rest.PointStruct(id=point_id, vector=vector, payload=note_payload)

    pt = rest.PointStruct(
        id=point_id,
        vector=vector,
        payload=note_payload
    )
    return notes_col, [pt]

def points_for_chunks(prefix: str, chunk_payloads: List[dict], vectors: List[List[float]]) -> Tuple[str, List[rest.PointStruct]]:
    """Converts chunks and their vectors into Qdrant points."""
    _, chunks_col, _ = _names(prefix)
    points: List[rest.PointStruct] = []
    for i, (pl, vec) in enumerate(zip(chunk_payloads, vectors), start=1):
@@ -49,43 +61,93 @@ def points_for_chunks(prefix: str, chunk_payloads: List[dict], vectors: List[Lis
        note_id = pl.get("note_id") or pl.get("parent_note_id") or "missing-note"
        chunk_id = f"{note_id}#{i}"
        pl["chunk_id"] = chunk_id

        point_id = _to_uuid(chunk_id)
        points.append(rest.PointStruct(id=point_id, vector=vec, payload=pl))
        points.append(rest.PointStruct(
            id=point_id,
            vector=vec,
            payload=pl
        ))
    return chunks_col, points

def _normalize_edge_payload(pl: dict) -> dict:
    """Normalizes edge fields and ensures schema conformance."""
    kind = pl.get("kind") or pl.get("edge_type") or "edge"
    source_id = pl.get("source_id") or pl.get("src_id") or "unknown-src"
    target_id = pl.get("target_id") or pl.get("dst_id") or "unknown-tgt"
    seq = pl.get("seq") or pl.get("order") or pl.get("index")

    # WP fix: pass target_section through explicitly
    target_section = pl.get("target_section")

    pl.setdefault("kind", kind)
    pl.setdefault("source_id", source_id)
    pl.setdefault("target_id", target_id)

    if seq is not None and "seq" not in pl:
        pl["seq"] = seq

    if target_section is not None:
        pl["target_section"] = target_section

    return pl
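Normalization tolerates legacy field names while guaranteeing the canonical keys, e.g. (a sketch using _normalize_edge_payload exactly as defined above; the payload values are invented):

# Legacy aliases are mapped onto the canonical schema
edge = {"edge_type": "references", "src_id": "n1#1", "dst_id": "n2"}
print(_normalize_edge_payload(edge))
# -> {'edge_type': 'references', 'src_id': 'n1#1', 'dst_id': 'n2',
#     'kind': 'references', 'source_id': 'n1#1', 'target_id': 'n2'}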

def points_for_edges(prefix: str, edge_payloads: List[dict]) -> Tuple[str, List[rest.PointStruct]]:
    """
    Converts edge payloads into PointStructs.
    WP-24c v4.1.0: Uses the central _mk_edge_id function from graph_utils.
    This eliminates the ID drift between manual and virtual edges.

    GOLD-STANDARD v4.1.0: ID generation uses 4 parameters + an optional target_section
    (kind, source_id, target_id, scope, target_section).
    rule_id and variant are ignored; target_section feeds in (multigraph support).
    """
    _, _, edges_col = _names(prefix)
    points: List[rest.PointStruct] = []

    for raw in edge_payloads:
        pl = _normalize_edge_payload(raw)
        edge_id = pl.get("edge_id")
        if not edge_id:
            kind = pl.get("kind", "edge")
            s = pl.get("source_id", "unknown-src")
            t = pl.get("target_id", "unknown-tgt")
            seq = pl.get("seq") or ""
            edge_id = f"{kind}:{s}->{t}#{seq}"
            pl["edge_id"] = edge_id
        point_id = _to_uuid(edge_id)
        points.append(rest.PointStruct(id=point_id, vector=[0.0], payload=pl))

        # Extraction of the identity parameters (GOLD-STANDARD v4.1.0)
        kind = pl.get("kind", "edge")
        s = pl.get("source_id", "unknown-src")
        t = pl.get("target_id", "unknown-tgt")
        scope = pl.get("scope", "note")
        target_section = pl.get("target_section")  # WP-24c v4.1.0: target_section for section links

        # Note: rule_id and variant are stored in the payload,
        # but do NOT feed into the ID generation (v4.0.0 standard)
        # target_section does feed into the ID (v4.1.0: multigraph support for section links)

        try:
            # Call into the single source of truth for IDs
            # GOLD-STANDARD v4.1.0: 4 parameters + optional target_section
            point_id = _mk_edge_id(
                kind=kind,
                s=s,
                t=t,
                scope=scope,
                target_section=target_section
            )

            # Synchronize the payload with the computed ID
            pl["edge_id"] = point_id

            points.append(rest.PointStruct(
                id=point_id,
                vector=[0.0],
                payload=pl
            ))
        except ValueError as e:
            # Malformed edges are skipped to avoid Pydantic crashes
            continue

    return edges_col, points

# --------------------- Vector schema & overrides ---------------------

def _preferred_name(candidates: List[str]) -> str:
    """Determines the primary vector name from a list of candidates."""
    for k in ("text", "default", "embedding", "content"):
        if k in candidates:
            return k
@@ -93,10 +155,11 @@ def _preferred_name(candidates: List[str]) -> str:

def _env_override_for_collection(collection: str) -> Optional[str]:
    """
    Checks for environment-variable overrides for vector names.
    Returns:
    - "__single__" to force single-vector
    - concrete name (str) to force named-vector with that name
    - None to auto-detect
    - "__single__" to force single-vector mode
    - a name (str) for a specific named vector
    - None for automatic detection
    """
    base = os.getenv("MINDNET_VECTOR_NAME")
    if collection.endswith("_notes"):

@@ -111,19 +174,17 @@ def _env_override_for_collection(collection: str) -> Optional[str]:
    val = base.strip()
    if val.lower() in ("__single__", "single"):
        return "__single__"
    return val  # concrete name
    return val

def _get_vector_schema(client: QdrantClient, collection_name: str) -> dict:
    """
    Return {"kind": "single", "size": int} or {"kind": "named", "names": [...], "primary": str}.
    """
    """Determines the vector schema of an existing collection via the API."""
    try:
        info = client.get_collection(collection_name=collection_name)
        vecs = getattr(info, "vectors", None)
        # Single-vector config
        # Check for a single-vector configuration
        if hasattr(vecs, "size") and isinstance(vecs.size, int):
            return {"kind": "single", "size": vecs.size}
        # Named-vectors config (dict-like in .config)
        # Check for a named-vectors configuration
        cfg = getattr(vecs, "config", None)
        if isinstance(cfg, dict) and cfg:
            names = list(cfg.keys())
@@ -134,6 +195,7 @@ def _get_vector_schema(client: QdrantClient, collection_name: str) -> dict:
    return {"kind": "single", "size": None}

def _as_named(points: List[rest.PointStruct], name: str) -> List[rest.PointStruct]:
    """Transforms PointStructs into the named-vector format."""
    out: List[rest.PointStruct] = []
    for pt in points:
        vec = getattr(pt, "vector", None)

@@ -141,7 +203,6 @@ def _as_named(points: List[rest.PointStruct], name: str) -> List[rest.PointStruc
            if name in vec:
                out.append(pt)
            else:
                # take any existing entry; if empty dict fallback to [0.0]
                fallback_vec = None
                try:
                    fallback_vec = list(next(iter(vec.values())))
@@ -156,35 +217,42 @@ def _as_named(points: List[rest.PointStruct], name: str) -> List[rest.PointStruc

# --------------------- Qdrant ops ---------------------

def upsert_batch(client: QdrantClient, collection: str, points: List[rest.PointStruct]) -> None:
def upsert_batch(client: QdrantClient, collection: str, points: List[rest.PointStruct], wait: bool = True) -> None:
    """
    Writes points into a collection efficiently.
    Supports automatic schema detection and named-vector transformation.
    WP fix: 'wait=True' is the default, for data consistency between the ingest phases.
    """
    if not points:
        return

    # 1) ENV overrides come first
    # 1) Check ENV overrides
    override = _env_override_for_collection(collection)
    if override == "__single__":
        client.upsert(collection_name=collection, points=points, wait=True)
        client.upsert(collection_name=collection, points=points, wait=wait)
        return
    elif isinstance(override, str):
        client.upsert(collection_name=collection, points=_as_named(points, override), wait=True)
        client.upsert(collection_name=collection, points=_as_named(points, override), wait=wait)
        return

    # 2) Auto-detect schema
    # 2) Automatic schema detection (live check)
    schema = _get_vector_schema(client, collection)
    if schema.get("kind") == "named":
        name = schema.get("primary") or _preferred_name(schema.get("names") or [])
        client.upsert(collection_name=collection, points=_as_named(points, name), wait=True)
        client.upsert(collection_name=collection, points=_as_named(points, name), wait=wait)
        return

    # 3) Fallback single-vector
    client.upsert(collection_name=collection, points=points, wait=True)
    # 3) Fallback: single-vector upsert
    client.upsert(collection_name=collection, points=points, wait=wait)
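A typical call site ties the builders and the writer together (a hedged sketch; the payload values are illustrative, it assumes the functions defined in this diff, and it needs a reachable Qdrant instance):

# Hypothetical end-to-end write of one note point
cfg = QdrantConfig.from_env()
client = get_client(cfg)
col, pts = points_for_note(cfg.prefix, {"note_id": "note-001", "title": "Demo"}, None, cfg.dim)
upsert_batch(client, col, pts)  # wait=True by default, so the write is visible immediately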

# --- Optional search helpers ---

def _filter_any(field: str, values: Iterable[str]) -> rest.Filter:
    """Helper for manual filter construction (logical OR)."""
    return rest.Filter(should=[rest.FieldCondition(key=field, match=rest.MatchValue(value=v)) for v in values])

def _merge_filters(*filters: Optional[rest.Filter]) -> Optional[rest.Filter]:
    """Merges several filter objects into one consolidated filter."""
    fs = [f for f in filters if f is not None]
    if not fs:
        return None
@@ -199,6 +267,7 @@ def _merge_filters(*filters: Optional[rest.Filter]) -> Optional[rest.Filter]:
    return rest.Filter(must=must)

def _filter_from_dict(filters: Optional[Dict[str, Any]]) -> Optional[rest.Filter]:
    """Converts a Python dict into a Qdrant filter object."""
    if not filters:
        return None
    parts = []
@@ -210,9 +279,17 @@ def _filter_from_dict(filters: Optional[Dict[str, Any]]) -> Optional[rest.Filter
    return _merge_filters(*parts)

def search_chunks_by_vector(client: QdrantClient, prefix: str, vector: List[float], top: int = 10, filters: Optional[Dict[str, Any]] = None) -> List[Tuple[str, float, dict]]:
    """Searches for semantically similar chunks in the vector database."""
    _, chunks_col, _ = _names(prefix)
    flt = _filter_from_dict(filters)
    res = client.search(collection_name=chunks_col, query_vector=vector, limit=top, with_payload=True, with_vectors=False, query_filter=flt)
    res = client.search(
        collection_name=chunks_col,
        query_vector=vector,
        limit=top,
        with_payload=True,
        with_vectors=False,
        query_filter=flt
    )
    out: List[Tuple[str, float, dict]] = []
    for r in res:
        out.append((str(r.id), float(r.score), dict(r.payload or {})))
@@ -228,41 +305,18 @@ def get_edges_for_sources(
    edge_types: Optional[Iterable[str]] = None,
    limit: int = 2048,
) -> List[Dict[str, Any]]:
    """Retrieve edge payloads from the <prefix>_edges collection.

    Args:
        client: QdrantClient instance.
        prefix: Mindnet collection prefix (e.g. "mindnet").
        source_ids: Iterable of source_id values (typically chunk_ids or note_ids).
        edge_types: Optional iterable of edge kinds (e.g. ["references", "depends_on"]). If None,
            all kinds are returned.
        limit: Maximum number of edge payloads to return.

    Returns:
        A list of edge payload dicts, e.g.:
        {
            "note_id": "...",
            "chunk_id": "...",
            "kind": "references" | "depends_on" | ...,
            "scope": "chunk",
            "source_id": "...",
            "target_id": "...",
            "rule_id": "...",
            "confidence": 0.7,
            ...
        }
    """
    """Retrieves all edges that originate from a set of source notes."""
    source_ids = list(source_ids)
    if not source_ids or limit <= 0:
        return []

    # Resolve collection name
    # Resolve the name of the edges collection
    _, _, edges_col = _names(prefix)

    # Build filter: source_id IN source_ids
    src_filter = _filter_any("source_id", [str(s) for s in source_ids])

    # Optional: kind IN edge_types
    # Optional filter on the edge kind
    kind_filter = None
    if edge_types:
        kind_filter = _filter_any("kind", [str(k) for k in edge_types])
@@ -273,7 +327,7 @@ def get_edges_for_sources(
    next_page = None
    remaining = int(limit)

    # Use paginated scroll API; we don't need vectors, only payloads.
    # Paginated scroll API (payload ONLY, no vectors)
    while remaining > 0:
        batch_limit = min(256, remaining)
        res, next_page = client.scroll(
@@ -297,4 +351,4 @@ def get_edges_for_sources(
        if next_page is None or remaining <= 0:
            break

    return out
    return out

@@ -1,435 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Module: app/core/derive_edges.py
Purpose:
- Preserves the existing edge logic (belongs_to, prev/next, references, backlink)
- Adds type-based default edges (edge_defaults from config/types.yaml)
- Supports "typed inline relations":
  * [[rel:KIND | Target]]
  * [[rel:KIND Target]]
  * rel: KIND [[Target]]
- Supports Obsidian callouts:
  * > [!edge] KIND: [[Target]] [[Target2]] ...
Compatibility:
- build_edges_for_note(...) signature unchanged
- rule_id values:
  * structure:belongs_to
  * structure:order
  * explicit:wikilink
  * inline:rel
  * callout:edge
  * edge_defaults:<type>:<relation>
  * derived:backlink
"""
from __future__ import annotations

import os
import re
from typing import Iterable, List, Optional, Tuple, Set, Dict

try:
    import yaml  # optional, only for types.yaml
except Exception:  # pragma: no cover
    yaml = None

# --------------------------------------------------------------------------- #
# Utilities
# --------------------------------------------------------------------------- #

def _get(d: dict, *keys, default=None):
    for k in keys:
        if isinstance(d, dict) and k in d and d[k] is not None:
            return d[k]
    return default

def _chunk_text_for_refs(chunk: dict) -> str:
    # prefers 'window' → then 'text' → 'content' → 'raw'
    return (
        _get(chunk, "window")
        or _get(chunk, "text")
        or _get(chunk, "content")
        or _get(chunk, "raw")
        or ""
    )

def _dedupe_seq(seq: Iterable[str]) -> List[str]:
    seen: Set[str] = set()
    out: List[str] = []
    for s in seq:
        if s not in seen:
            seen.add(s)
            out.append(s)
    return out

def _edge(kind: str, scope: str, source_id: str, target_id: str, note_id: str, extra: Optional[dict] = None) -> dict:
    pl = {
        "kind": kind,
        "relation": kind,  # alias (v2)
        "scope": scope,  # "chunk" | "note"
        "source_id": source_id,
        "target_id": target_id,
        "note_id": note_id,  # note that carries the edge
    }
    if extra:
        pl.update(extra)
    return pl

def _mk_edge_id(kind: str, s: str, t: str, scope: str, rule_id: Optional[str] = None) -> str:
    base = f"{kind}:{s}->{t}#{scope}"
    if rule_id:
        base += f"|{rule_id}"
    try:
        import hashlib
        return hashlib.blake2s(base.encode("utf-8"), digest_size=12).hexdigest()
    except Exception:  # pragma: no cover
        return base
# --------------------------------------------------------------------------- #
# Type registry (types.yaml)
# --------------------------------------------------------------------------- #

def _env(n: str, default: Optional[str] = None) -> str:
    v = os.getenv(n)
    return v if v is not None else (default or "")

def _load_types_registry() -> dict:
    """Loads the YAML registry from MINDNET_TYPES_FILE or ./config/types.yaml"""
    p = _env("MINDNET_TYPES_FILE", "./config/types.yaml")
    if not os.path.isfile(p) or yaml is None:
        return {}
    try:
        with open(p, "r", encoding="utf-8") as f:
            data = yaml.safe_load(f) or {}
        return data
    except Exception:
        return {}

def _get_types_map(reg: dict) -> dict:
    if isinstance(reg, dict) and isinstance(reg.get("types"), dict):
        return reg["types"]
    return reg if isinstance(reg, dict) else {}

def _edge_defaults_for(note_type: Optional[str], reg: dict) -> List[str]:
    """
    Returns the edge_defaults list for the given note type.
    Fallback order:
    1) reg['types'][note_type]['edge_defaults']
    2) reg['defaults']['edge_defaults'] (or 'default'/'global')
    3) []
    """
    types_map = _get_types_map(reg)
    if note_type and isinstance(types_map, dict):
        t = types_map.get(note_type)
        if isinstance(t, dict) and isinstance(t.get("edge_defaults"), list):
            return [str(x) for x in t["edge_defaults"] if isinstance(x, str)]
    for key in ("defaults", "default", "global"):
        v = reg.get(key)
        if isinstance(v, dict) and isinstance(v.get("edge_defaults"), list):
            return [str(x) for x in v["edge_defaults"] if isinstance(x, str)]
    return []
# --------------------------------------------------------------------------- #
# Parsers for links / relations
# --------------------------------------------------------------------------- #

# Plain wikilinks (fallback)
_WIKILINK_RE = re.compile(r"\[\[(?:[^\|\]]+\|)?([a-zA-Z0-9_\-#:. ]+)\]\]")

# Typed inline relations:
# [[rel:KIND | Target]]
# [[rel:KIND Target]]
_REL_PIPE = re.compile(r"\[\[\s*rel:(?P<kind>[a-z_]+)\s*\|\s*(?P<target>[^\]]+?)\s*\]\]", re.IGNORECASE)
_REL_SPACE = re.compile(r"\[\[\s*rel:(?P<kind>[a-z_]+)\s+(?P<target>[^\]]+?)\s*\]\]", re.IGNORECASE)
# rel: KIND [[Target]] (pure text pattern)
_REL_TEXT = re.compile(r"rel\s*:\s*(?P<kind>[a-z_]+)\s*\[\[\s*(?P<target>[^\]]+?)\s*\]\]", re.IGNORECASE)

def _extract_typed_relations(text: str) -> Tuple[List[Tuple[str,str]], str]:
    """
    Returns a list of (kind, target) pairs plus the text with the typed relation links
    removed, so that the generic wikilink detection does not count them twice.
    Supports three variants:
    - [[rel:KIND | Target]]
    - [[rel:KIND Target]]
    - rel: KIND [[Target]]
    """
    pairs: List[Tuple[str,str]] = []
    def _collect(m):
        k = (m.group("kind") or "").strip().lower()
        t = (m.group("target") or "").strip()
        if k and t:
            pairs.append((k, t))
        return ""  # remove the link

    text = _REL_PIPE.sub(_collect, text)
    text = _REL_SPACE.sub(_collect, text)
    text = _REL_TEXT.sub(_collect, text)
    return pairs, text
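All three syntaxes collapse into the same (kind, target) pairs, and the matched links vanish from the returned text (a sketch exercising _extract_typed_relations exactly as defined above):

# The three inline syntaxes yield identical pairs
pairs, rest = _extract_typed_relations(
    "[[rel:depends_on | A]] and [[rel:depends_on B]] and rel: depends_on [[C]]"
)
print(pairs)  # [('depends_on', 'A'), ('depends_on', 'B'), ('depends_on', 'C')]
print(rest)   # ' and  and ' - only the untyped text remains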

# Obsidian callout parser
_CALLOUT_START = re.compile(r"^\s*>\s*\[!edge\]\s*(.*)$", re.IGNORECASE)
_REL_LINE = re.compile(r"^(?P<kind>[a-z_]+)\s*:\s*(?P<targets>.+?)\s*$", re.IGNORECASE)
_WIKILINKS_IN_LINE = re.compile(r"\[\[([^\]]+)\]\]")

def _extract_callout_relations(text: str) -> Tuple[List[Tuple[str,str]], str]:
    """
    Finds [!edge] callouts and extracts (kind, target). Removes the entire
    callout block from the text (so that wikilinks inside it are not also
    counted as "references").
    """
    if not text:
        return [], text

    lines = text.splitlines()
    out_pairs: List[Tuple[str,str]] = []
    keep_lines: List[str] = []
    i = 0

    while i < len(lines):
        m = _CALLOUT_START.match(lines[i])
        if not m:
            keep_lines.append(lines[i])
            i += 1
            continue

        block_lines: List[str] = []
        first_rest = m.group(1) or ""
        if first_rest.strip():
            block_lines.append(first_rest)

        i += 1
        while i < len(lines) and lines[i].lstrip().startswith('>'):
            block_lines.append(lines[i].lstrip()[1:].lstrip())
            i += 1

        for bl in block_lines:
            mrel = _REL_LINE.match(bl)
            if not mrel:
                continue
            kind = (mrel.group("kind") or "").strip().lower()
            targets = mrel.group("targets") or ""
            found = _WIKILINKS_IN_LINE.findall(targets)
            if found:
                for t in found:
                    t = t.strip()
                    if t:
                        out_pairs.append((kind, t))
            else:
                for raw in re.split(r"[,;]", targets):
                    t = raw.strip()
                    if t:
                        out_pairs.append((kind, t))

        # The callout is NOT carried over into keep_lines
        continue

    remainder = "\n".join(keep_lines)
    return out_pairs, remainder

def _extract_wikilinks(text: str) -> List[str]:
    ids: List[str] = []
    for m in _WIKILINK_RE.finditer(text or ""):
        ids.append(m.group(1).strip())
    return ids
# --------------------------------------------------------------------------- #
# Main function
# --------------------------------------------------------------------------- #

def build_edges_for_note(
    note_id: str,
    chunks: List[dict],
    note_level_references: Optional[List[str]] = None,
    include_note_scope_refs: bool = False,
) -> List[dict]:
    """
    Generates the edges for a note.

    - belongs_to: for every chunk (chunk -> note)
    - next / prev: between consecutive chunks
    - references: per chunk from window/text (via wikilinks)
    - typed inline relations: [[rel:KIND | Target]] / [[rel:KIND Target]] / rel: KIND [[Target]]
    - Obsidian callouts: > [!edge] KIND: [[Target]] [[Target2]]
    - optional note-scope references/backlinks: deduplicated over all chunk hits + note_level_references
    - type-based default edges (edge_defaults) per found reference
    """
    edges: List[dict] = []

    # Note type (expected from the first chunk)
    note_type = None
    if chunks:
        note_type = _get(chunks[0], "type")

    # 1) belongs_to
    for ch in chunks:
        cid = _get(ch, "chunk_id", "id")
        if not cid:
            continue
        edges.append(_edge("belongs_to", "chunk", cid, note_id, note_id, {
            "chunk_id": cid,
            "edge_id": _mk_edge_id("belongs_to", cid, note_id, "chunk", "structure:belongs_to"),
            "provenance": "rule",
            "rule_id": "structure:belongs_to",
            "confidence": 1.0,
        }))

    # 2) next / prev
    for i in range(len(chunks) - 1):
        a, b = chunks[i], chunks[i + 1]
        a_id = _get(a, "chunk_id", "id")
        b_id = _get(b, "chunk_id", "id")
        if not a_id or not b_id:
            continue
        edges.append(_edge("next", "chunk", a_id, b_id, note_id, {
            "chunk_id": a_id,
            "edge_id": _mk_edge_id("next", a_id, b_id, "chunk", "structure:order"),
            "provenance": "rule",
            "rule_id": "structure:order",
            "confidence": 0.95,
        }))
        edges.append(_edge("prev", "chunk", b_id, a_id, note_id, {
            "chunk_id": b_id,
            "edge_id": _mk_edge_id("prev", b_id, a_id, "chunk", "structure:order"),
            "provenance": "rule",
            "rule_id": "structure:order",
            "confidence": 0.95,
        }))

    # 3) references + typed inline + callouts + defaults (chunk scope)
    reg = _load_types_registry()
    defaults = _edge_defaults_for(note_type, reg)
    refs_all: List[str] = []

    for ch in chunks:
        cid = _get(ch, "chunk_id", "id")
        if not cid:
            continue
        raw = _chunk_text_for_refs(ch)

        # 3a) typed inline relations
        typed, remainder = _extract_typed_relations(raw)
        for kind, target in typed:
            kind = kind.strip().lower()
            if not kind or not target:
                continue
            edges.append(_edge(kind, "chunk", cid, target, note_id, {
                "chunk_id": cid,
                "edge_id": _mk_edge_id(kind, cid, target, "chunk", "inline:rel"),
                "provenance": "explicit",
                "rule_id": "inline:rel",
                "confidence": 0.95,
            }))
            if kind in {"related_to", "similar_to"}:
                edges.append(_edge(kind, "chunk", target, cid, note_id, {
                    "chunk_id": cid,
                    "edge_id": _mk_edge_id(kind, target, cid, "chunk", "inline:rel"),
                    "provenance": "explicit",
                    "rule_id": "inline:rel",
                    "confidence": 0.95,
                }))

        # 3b) callouts
        call_pairs, remainder2 = _extract_callout_relations(remainder)
        for kind, target in call_pairs:
            k = (kind or "").strip().lower()
            if not k or not target:
                continue
            edges.append(_edge(k, "chunk", cid, target, note_id, {
                "chunk_id": cid,
                "edge_id": _mk_edge_id(k, cid, target, "chunk", "callout:edge"),
                "provenance": "explicit",
                "rule_id": "callout:edge",
                "confidence": 0.95,
            }))
            if k in {"related_to", "similar_to"}:
                edges.append(_edge(k, "chunk", target, cid, note_id, {
                    "chunk_id": cid,
                    "edge_id": _mk_edge_id(k, target, cid, "chunk", "callout:edge"),
                    "provenance": "explicit",
                    "rule_id": "callout:edge",
                    "confidence": 0.95,
                }))

        # 3c) generic wikilinks → references (+ defaults per ref)
        refs = _extract_wikilinks(remainder2)
        for r in refs:
            edges.append(_edge("references", "chunk", cid, r, note_id, {
                "chunk_id": cid,
                "ref_text": r,
                "edge_id": _mk_edge_id("references", cid, r, "chunk", "explicit:wikilink"),
                "provenance": "explicit",
                "rule_id": "explicit:wikilink",
                "confidence": 1.0,
            }))
            for rel in defaults:
                if rel == "references":
                    continue
                edges.append(_edge(rel, "chunk", cid, r, note_id, {
                    "chunk_id": cid,
                    "edge_id": _mk_edge_id(rel, cid, r, "chunk", f"edge_defaults:{note_type}:{rel}"),
|
||||
"provenance": "rule",
|
||||
"rule_id": f"edge_defaults:{note_type}:{rel}",
|
||||
"confidence": 0.7,
|
||||
}))
|
||||
if rel in {"related_to", "similar_to"}:
|
||||
edges.append(_edge(rel, "chunk", r, cid, note_id, {
|
||||
"chunk_id": cid,
|
||||
"edge_id": _mk_edge_id(rel, r, cid, "chunk", f"edge_defaults:{note_type}:{rel}"),
|
||||
"provenance": "rule",
|
||||
"rule_id": f"edge_defaults:{note_type}:{rel}",
|
||||
"confidence": 0.7,
|
||||
}))
|
||||
|
||||
refs_all.extend(refs)
|
||||
|
||||
# 4) optional note-scope refs/backlinks (+ defaults)
|
||||
if include_note_scope_refs:
|
||||
refs_note = list(refs_all or [])
|
||||
if note_level_references:
|
||||
refs_note.extend([r for r in note_level_references if isinstance(r, str) and r])
|
||||
refs_note = _dedupe_seq(refs_note)
|
||||
for r in refs_note:
|
||||
edges.append(_edge("references", "note", note_id, r, note_id, {
|
||||
"edge_id": _mk_edge_id("references", note_id, r, "note", "explicit:note_scope"),
|
||||
"provenance": "explicit",
|
||||
"rule_id": "explicit:note_scope",
|
||||
"confidence": 1.0,
|
||||
}))
|
||||
edges.append(_edge("backlink", "note", r, note_id, note_id, {
|
||||
"edge_id": _mk_edge_id("backlink", r, note_id, "note", "derived:backlink"),
|
||||
"provenance": "rule",
|
||||
"rule_id": "derived:backlink",
|
||||
"confidence": 0.9,
|
||||
}))
|
||||
for rel in defaults:
|
||||
if rel == "references":
|
||||
continue
|
||||
edges.append(_edge(rel, "note", note_id, r, note_id, {
|
||||
"edge_id": _mk_edge_id(rel, note_id, r, "note", f"edge_defaults:{note_type}:{rel}"),
|
||||
"provenance": "rule",
|
||||
"rule_id": f"edge_defaults:{note_type}:{rel}",
|
||||
"confidence": 0.7,
|
||||
}))
|
||||
if rel in {"related_to", "similar_to"}:
|
||||
edges.append(_edge(rel, "note", r, note_id, note_id, {
|
||||
"edge_id": _mk_edge_id(rel, r, note_id, "note", f"edge_defaults:{note_type}:{rel}"),
|
||||
"provenance": "rule",
|
||||
"rule_id": f"edge_defaults:{note_type}:{rel}",
|
||||
"confidence": 0.7,
|
||||
}))
|
||||
|
||||
# 5) De-Dupe (source_id, target_id, relation, rule_id)
|
||||
seen: Set[Tuple[str,str,str,str]] = set()
|
||||
out: List[dict] = []
|
||||
for e in edges:
|
||||
s = str(e.get("source_id") or "")
|
||||
t = str(e.get("target_id") or "")
|
||||
rel = str(e.get("relation") or e.get("kind") or "edge")
|
||||
rule = str(e.get("rule_id") or "")
|
||||
key = (s, t, rel, rule)
|
||||
if key in seen:
|
||||
continue
|
||||
seen.add(key)
|
||||
out.append(e)
|
||||
return out
|
||||
|
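For orientation, a minimal usage sketch of build_edges_for_note as defined above. The note/chunk IDs and text are made up for illustration, and it assumes the module's _chunk_text_for_refs helper reads the chunk's "text" field (the v1 docstring names window|text as the text sources):

# Hedged sketch: two hypothetical chunks, one with a plain wikilink,
# one with a typed inline relation.
chunks = [
    {"chunk_id": "n1:c0", "type": "project", "text": "Overview, see [[roadmap-2025]]."},
    {"chunk_id": "n1:c1", "type": "project", "text": "[[rel:depends_on | qdrant-setup]]"},
]
edges = build_edges_for_note("n1", chunks, include_note_scope_refs=True)
# Expected (among others): belongs_to for both chunks, next/prev between them,
# a "references" edge n1:c0 -> roadmap-2025, and a "depends_on" edge
# n1:c1 -> qdrant-setup with provenance "explicit" / rule_id "inline:rel".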
app/core/edges.py (deleted)
@@ -1,296 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Module: app/core/edges.py
Version: 2.0.0 (V2 superset, backwards compatible with v1 of 2025-09-09)

Purpose
-------
Preserves the existing edge logic (belongs_to, prev/next, references, backlink)
and adds V2 fields plus type-default edges per config/types.yaml (edge_defaults).
The function is **idempotent** and **backwards compatible** with the previous signature.

Compatibility guarantees (relative to v1):
- **Input**: accepts the same chunk payloads as v1:
  * `id` (chunk ID), `note_id` (owner), `neighbors.prev|next` (optional),
    `references: [{target_id: ...}]` (optional),
    alternatively: `chunk_id`, `chunk_index|ord`, `window|text`
- **Output (v1 fields)**: `kind`, `source_id`, `target_id`, `scope`, `note_id`, `edge_id`
- **New (v2 fields)**: `relation`, `src_note_id`, `src_chunk_id?`, `dst_note_id`, `dst_chunk_id?`,
  `provenance` (`explicit|rule`), `rule_id?`, `confidence?`

Rules
-----
- Deduplication key: (source_id, target_id, relation, rule_id)
- Structural edges:
  * belongs_to: once per chunk
  * next/prev: chunk sequence; prefers neighbors, otherwise ord/chunk_index
- Explicit references:
  * from the chunk: `references[].target_id` (if present)
  * fallback: wikilinks in `window|text`: [[Some Title|some-id]] or [[some-id]]
- Note scope:
  * backlink always; references only when include_note_scope_refs=True
- Type defaults (edge_defaults from config/types.yaml of the **source note type**):
  * for each explicit reference, one rule edge is created per default relation
  * rule_id: "type_default:{note_type}:{relation}:v1", provenance="rule"

Configuration
-------------
- ENV MINDNET_TYPES_FILE (default: ./config/types.yaml)

License/Author
--------------
- Initial implementation v1 (2025-09-09) — Project Mindnet
- Extension v2 (2025-11-11) — compatible superset implementation
"""
from __future__ import annotations

import os
import re
from typing import Dict, Iterable, List, Optional, Tuple, Set

try:
    import yaml  # optional, only needed for types.yaml
except Exception:  # pragma: no cover
    yaml = None

# ------------------------------------------------------------
# Helpers: load types.yaml (edge_defaults)
# ------------------------------------------------------------

def _types_path() -> str:
    return os.getenv("MINDNET_TYPES_FILE") or "./config/types.yaml"


def _load_types() -> Dict[str, dict]:
    p = _types_path()
    if not os.path.isfile(p) or yaml is None:
        return {}
    try:
        with open(p, "r", encoding="utf-8") as f:
            data = yaml.safe_load(f) or {}
        if isinstance(data, dict) and "types" in data and isinstance(data["types"], dict):
            return data["types"]
        return data if isinstance(data, dict) else {}
    except Exception:
        return {}


def _edge_defaults_for(note_type: Optional[str]) -> List[str]:
    types = _load_types()
    t = (note_type or "").strip().lower()
    cfg = types.get(t) or {}
    defaults = cfg.get("edge_defaults") or []
    if isinstance(defaults, str):
        defaults = [defaults]
    return [str(x) for x in defaults if isinstance(x, (str, int, float))]


# ------------------------------------------------------------
# Wikilink parser (fallback when ch["references"] is missing)
# ------------------------------------------------------------

_WIKILINK_RE = re.compile(r"\[\[(?:[^\|\]]+\|)?([a-zA-Z0-9_\-#:. ]+)\]\]")


def _extract_wikilinks(text: str) -> List[str]:
    ids: List[str] = []
    for m in _WIKILINK_RE.finditer(text or ""):
        ids.append(m.group(1).strip())
    return ids


# ------------------------------------------------------------
# Utilities
# ------------------------------------------------------------

def _mk_edge_id(kind: str, s: str, t: str, scope: str, rule_id: Optional[str] = None) -> str:
    base = f"{kind}:{s}->{t}#{scope}"
    if rule_id:
        base += f"|{rule_id}"
    try:
        import hashlib
        return hashlib.blake2s(base.encode("utf-8"), digest_size=12).hexdigest()
    except Exception:  # pragma: no cover
        return base


def _dedupe(edges: List[Dict]) -> List[Dict]:
    seen: Set[Tuple[str, str, str, str]] = set()
    out: List[Dict] = []
    for e in edges:
        s = str(e.get("source_id") or "")
        t = str(e.get("target_id") or "")
        rel = str(e.get("relation") or e.get("kind") or "edge")
        rule = str(e.get("rule_id") or "")
        key = (s, t, rel, rule)
        if key in seen:
            continue
        seen.add(key)
        out.append(e)
    return out


def _first(v: dict, *keys, default=None):
    for k in keys:
        if k in v and v[k] is not None:
            return v[k]
    return default


# ------------------------------------------------------------
# Main function
# ------------------------------------------------------------

def build_edges_for_note(
    note_id: str,
    chunk_payloads: List[Dict],
    note_level_refs: Optional[List[str]] = None,
    *,
    include_note_scope_refs: bool = False,
) -> List[Dict]:
    edges: List[Dict] = []
    chunks = list(chunk_payloads or [])
    # Derive the note type from the first chunk (compatible with existing payloads)
    note_type = (chunks[0].get("type") if chunks else None) or (chunks[0].get("note_type") if chunks else None)

    # --- Structural edges ------------------------------------------------------
    # belongs_to
    for ch in chunks:
        cid = _first(ch, "id", "chunk_id")
        if not cid:
            continue
        owner = ch.get("note_id") or note_id
        e = {
            "edge_id": _mk_edge_id("belongs_to", cid, note_id, "chunk", "structure:belongs_to:v1"),
            "kind": "belongs_to",
            "relation": "belongs_to",
            "scope": "chunk",
            "source_id": cid,
            "target_id": note_id,
            "note_id": owner,  # v1 compatibility
            # v2
            "src_note_id": owner,
            "src_chunk_id": cid,
            "dst_note_id": note_id,
            "provenance": "rule",
            "rule_id": "structure:belongs_to:v1",
            "confidence": 1.0,
        }
        edges.append(e)

    # next/prev — prefers neighbors.prev/next; otherwise via ord/chunk_index
    # order the chunks by index
    ordered = list(chunks)

    def _idx(c):
        return _first(c, "chunk_index", "ord", default=0)

    ordered.sort(key=_idx)

    for i, ch in enumerate(ordered):
        cid = _first(ch, "id", "chunk_id")
        if not cid:
            continue
        owner = ch.get("note_id") or note_id
        nb = ch.get("neighbors") or {}
        prev_id = nb.get("prev")
        next_id = nb.get("next")
        # fallback ordering
        if prev_id is None and i > 0:
            prev_id = _first(ordered[i - 1], "id", "chunk_id")
        if next_id is None and i + 1 < len(ordered):
            next_id = _first(ordered[i + 1], "id", "chunk_id")

        if prev_id:
            edges.append({
                "edge_id": _mk_edge_id("prev", cid, prev_id, "chunk", "structure:order:v1"),
                "kind": "prev", "relation": "prev", "scope": "chunk",
                "source_id": cid, "target_id": prev_id, "note_id": owner,
                "src_note_id": owner, "src_chunk_id": cid,
                "dst_note_id": owner, "dst_chunk_id": prev_id,
                "provenance": "rule", "rule_id": "structure:order:v1", "confidence": 0.95,
            })
            edges.append({
                "edge_id": _mk_edge_id("next", prev_id, cid, "chunk", "structure:order:v1"),
                "kind": "next", "relation": "next", "scope": "chunk",
                "source_id": prev_id, "target_id": cid, "note_id": owner,
                "src_note_id": owner, "src_chunk_id": prev_id,
                "dst_note_id": owner, "dst_chunk_id": cid,
                "provenance": "rule", "rule_id": "structure:order:v1", "confidence": 0.95,
            })

    # --- Explicit references (chunk scope) -----------------------------------
    explicit_refs: List[Dict] = []
    for ch in chunks:
        cid = _first(ch, "id", "chunk_id")
        if not cid:
            continue
        owner = ch.get("note_id") or note_id
        # 1) prefer existing ch["references"]
        refs = ch.get("references") or []
        targets = [r.get("target_id") for r in refs if isinstance(r, dict) and r.get("target_id")]
        # 2) fallback: wikilinks from the text
        if not targets:
            text = _first(ch, "window", "text", default="") or ""
            targets = _extract_wikilinks(text)
        for tid in targets:
            if not isinstance(tid, str) or not tid.strip():
                continue
            e = {
                "edge_id": _mk_edge_id("references", cid, tid, "chunk"),
                "kind": "references",
                "relation": "references",
                "scope": "chunk",
                "source_id": cid,
                "target_id": tid,
                "note_id": owner,
                # v2
                "src_note_id": owner,
                "src_chunk_id": cid,
                "dst_note_id": tid,
                "provenance": "explicit",
                "rule_id": "",
                "confidence": 1.0,
            }
            edges.append(e)
            explicit_refs.append(e)

    # --- Note scope: references (optional) + backlink (always) ----------------
    unique_refs = []
    if note_level_refs:
        seen = set()
        for tid in note_level_refs:
            if isinstance(tid, str) and tid.strip() and tid not in seen:
                unique_refs.append(tid)
                seen.add(tid)

    for tid in unique_refs:
        if include_note_scope_refs:
            edges.append({
                "edge_id": _mk_edge_id("references", note_id, tid, "note"),
                "kind": "references", "relation": "references", "scope": "note",
                "source_id": note_id, "target_id": tid, "note_id": note_id,
                "src_note_id": note_id, "dst_note_id": tid,
                "provenance": "explicit", "rule_id": "", "confidence": 1.0,
            })
        edges.append({
            "edge_id": _mk_edge_id("backlink", tid, note_id, "note", "derived:backlink:v1"),
            "kind": "backlink", "relation": "backlink", "scope": "note",
            "source_id": tid, "target_id": note_id, "note_id": note_id,
            "src_note_id": tid, "dst_note_id": note_id,
            "provenance": "rule", "rule_id": "derived:backlink:v1", "confidence": 0.9,
        })

    # --- Type defaults per explicit reference ---------------------------------
    defaults = [d for d in _edge_defaults_for(note_type) if d and d != "references"]
    if defaults:
        for e in explicit_refs:
            # reuse the already created explicit edges as templates
            src = e["source_id"]
            tgt = e["target_id"]
            scope = e.get("scope", "chunk")
            s_note = e.get("src_note_id") or note_id
            s_chunk = e.get("src_chunk_id")
            t_note = e.get("dst_note_id") or tgt
            for rel in defaults:
                rule_id = f"type_default:{(note_type or 'unknown')}:{rel}:v1"
                edges.append({
                    "edge_id": _mk_edge_id(rel, src, tgt, scope, rule_id),
                    "kind": rel, "relation": rel, "scope": scope,
                    "source_id": src, "target_id": tgt, "note_id": s_note,
                    "src_note_id": s_note, "src_chunk_id": s_chunk,
                    "dst_note_id": t_note,
                    "provenance": "rule", "rule_id": rule_id, "confidence": 0.7,
                })

    # --- Dedupe & return -------------------------------------------------------
    return _dedupe(edges)
app/core/edges_writer.py (deleted)
@@ -1,94 +0,0 @@
# app/core/edges_writer.py
from __future__ import annotations
import hashlib
from typing import Dict, List, Iterable, Tuple

try:
    # The module with the schema definition and the builder function
    from app.core.edges import build_edges_for_note  # noqa: F401
except Exception as e:
    raise RuntimeError("Could not import app.core.edges. "
                       "Please make sure app/core/edges.py exists.") from e


def _edge_uid(kind: str, source_id: str, target_id: str, scope: str) -> str:
    """
    Deterministic, short ID for an edge.
    Collisions are practically impossible (BLAKE2s over the canonical key).
    """
    key = f"{kind}|{source_id}|{target_id}|{scope}"
    return hashlib.blake2s(key.encode("utf-8"), digest_size=12).hexdigest()


def ensure_edges_collection(qdrant_client, collection: str) -> None:
    """
    Creates the edge collection if it does not exist yet.
    Minimal setup: 1-dimensional dummy vector, cosine distance. Payload-only
    collections are fragile depending on the Qdrant version.
    """
    from qdrant_client.http import models as qm

    existing = [c.name for c in qdrant_client.get_collections().collections]
    if collection in existing:
        return

    qdrant_client.recreate_collection(
        collection_name=collection,
        vectors_config=qm.VectorParams(size=1, distance=qm.Distance.COSINE),
        on_disk_payload=True,
    )


def edges_from_note(
    note_id: str,
    chunk_payloads: List[Dict],
    note_level_refs: Iterable[str] | None,
    *,
    include_note_scope_refs: bool = False,
) -> List[Dict]:
    """
    Calls the edge builder and returns the (deduplicated) edge payloads.
    No schema change — exactly what app/core/edges.py produces.
    """
    return build_edges_for_note(
        note_id=note_id,
        chunk_payloads=chunk_payloads,
        note_level_refs=list(note_level_refs or []),
        include_note_scope_refs=include_note_scope_refs,
    )


def upsert_edges(
    qdrant_client,
    collection: str,
    edge_payloads: List[Dict],
) -> Tuple[int, int]:
    """
    Writes edges as points into Qdrant.
    - id: deterministic from (kind, source_id, target_id, scope)
    - vector: [0.0] dummy
    - payload: the edge dict (unchanged, see schema in app/core/edges.py)
    Returns (number_of_points, number_of_unique_keys).
    """
    from qdrant_client.models import PointStruct

    if not edge_payloads:
        return 0, 0

    points = []
    seen = set()
    for e in edge_payloads:
        key = (e.get("kind"), e.get("source_id"), e.get("target_id"), e.get("scope"))
        if key in seen:
            continue
        seen.add(key)
        eid = _edge_uid(*key)
        points.append(
            PointStruct(
                id=eid,
                vector=[0.0],
                payload=e,
            )
        )

    if not points:
        return 0, 0

    ensure_edges_collection(qdrant_client, collection)
    qdrant_client.upsert(collection_name=collection, points=points)
    return len(points), len(seen)
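A minimal end-to-end sketch of the writer path above. The client URL, collection name, and sample note payload are assumptions for illustration, not project defaults:

# Hedged sketch: derive edges for one note and upsert them into Qdrant.
from qdrant_client import QdrantClient

client = QdrantClient(url="http://127.0.0.1:6333")  # assumed local instance
chunks = [{"id": "n1:c0", "note_id": "n1", "text": "see [[other-note]]"}]
edges = edges_from_note("n1", chunks, note_level_refs=["other-note"])
written, unique = upsert_edges(client, "mindnet_edges", edges)
print(f"wrote {written} edge points ({unique} unique keys)")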
(deleted file)
@@ -1,82 +0,0 @@
from __future__ import annotations
import os, time, json
import urllib.request
from typing import List, Dict, Any

# Backend selection:
# - EMBED_BACKEND=ollama -> EMBED_URL=/api/embeddings (Ollama), EMBED_MODEL=e.g. nomic-embed-text
# - EMBED_BACKEND=mini   -> EMBED_URL=/embed (our MiniLM server), EMBED_MODEL=minilm-384
EMBED_BACKEND = os.getenv("EMBED_BACKEND", "mini").lower()
EMBED_URL = os.getenv("EMBED_URL", "http://127.0.0.1:8990/embed")
EMBED_MODEL = os.getenv("EMBED_MODEL", "minilm-384")
EMBED_BATCH = int(os.getenv("EMBED_BATCH", "64"))
TIMEOUT = 60


class EmbedError(RuntimeError): ...


def _post_json(url: str, payload: Dict[str, Any]) -> Dict[str, Any]:
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=TIMEOUT) as resp:
        return json.loads(resp.read().decode("utf-8"))


def _embed_mini(inputs: List[str], model: str, batch: int) -> List[List[float]]:
    out: List[List[float]] = []
    i = 0
    while i < len(inputs):
        chunk = inputs[i:i + batch]
        # simple retries with linear backoff
        for attempt in range(5):
            try:
                resp = _post_json(EMBED_URL, {"model": model, "inputs": chunk})
                vecs = resp.get("embeddings") or resp.get("vectors") or resp.get("data")
                if not isinstance(vecs, list):
                    raise EmbedError(f"Bad embed response keys: {list(resp.keys())}")
                out.extend(vecs)
                break
            except Exception:
                if attempt == 4:
                    raise
                time.sleep(1.5 * (attempt + 1))
        i += batch
    return out


def _embed_ollama(inputs: List[str], model: str, batch: int) -> List[List[float]]:
    # Ollama's /api/embeddings accepts "input" as a string OR an array.
    # The response contains:
    # - for single input: {"embedding":[...], "model":"...", ...}
    # - for array input: {"embeddings":[[...],[...],...], "model":"...", ...} (version-dependent)
    # For maximum compatibility we call the endpoint once per text.
    out: List[List[float]] = []
    for text in inputs:
        # retries
        for attempt in range(5):
            try:
                resp = _post_json(EMBED_URL, {"model": model, "input": text})
                if "embedding" in resp and isinstance(resp["embedding"], list):
                    out.append(resp["embedding"])
                elif "embeddings" in resp and isinstance(resp["embeddings"], list):
                    # If the server returns an array, take the first element
                    vecs = resp["embeddings"]
                    out.append(vecs[0] if vecs else [])
                else:
                    raise EmbedError(f"Ollama response unexpected keys: {list(resp.keys())}")
                break
            except Exception:
                if attempt == 4:
                    raise
                time.sleep(1.5 * (attempt + 1))
    return out


def embed_texts(texts: List[str], model: str | None = None, batch_size: int | None = None) -> List[List[float]]:
    model = model or EMBED_MODEL
    batch = batch_size or EMBED_BATCH
    if not texts:
        return []
    if EMBED_BACKEND == "ollama":
        return _embed_ollama(texts, model, batch)
    # default: mini
    return _embed_mini(texts, model, batch)


def embed_one(text: str, model: str | None = None) -> List[float]:
    return embed_texts([text], model=model, batch_size=1)[0]
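A short usage sketch for the embedding client above; it assumes a compatible /embed server is reachable at EMBED_URL, and the texts are illustrative:

# Hedged sketch: embed a couple of texts against the default "mini" backend.
vectors = embed_texts(["graph edges", "note chunking"], batch_size=2)
print(len(vectors), "vectors,", len(vectors[0]), "dimensions each")
query_vec = embed_one("obsidian callouts")  # single-text convenience wrapper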
app/core/env_vars.py (deleted)
@@ -1,103 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
File: app/core/env_vars.py
Version: 1.1.0 (2025-11-08)

Purpose
Unified resolution of ENV variables (prefix, Qdrant, embeddings, hashing)
with backwards compatibility.

Principle
- For Qdrant functions, 'COLLECTION_PREFIX' is the primary key.
- 'MINDNET_PREFIX' remains usable for app/UI/exporter contexts.
- Fallbacks keep older environments working.

Important
- Optionally loads a .env file (when python-dotenv is available).
- Does not override OS variables that are already set (override=False).
"""
from __future__ import annotations

import os
from typing import Optional, Dict

# Optional: load .env automatically (no hard failure when it is missing)
try:
    from dotenv import load_dotenv, find_dotenv  # type: ignore
    _p = find_dotenv()
    if _p:
        load_dotenv(_p, override=False)
except Exception:
    pass

# -------- Prefix resolution --------

def get_collection_prefix(cli_override: Optional[str] = None) -> str:
    """
    For Qdrant-facing functions:
    1) CLI override (--prefix)
    2) ENV COLLECTION_PREFIX
    3) ENV MINDNET_PREFIX (fallback)
    4) 'mindnet' (default)
    """
    if cli_override and str(cli_override).strip():
        return str(cli_override).strip()
    return (
        os.getenv("COLLECTION_PREFIX")
        or os.getenv("MINDNET_PREFIX")
        or "mindnet"
    )


def get_mindnet_prefix(cli_override: Optional[str] = None) -> str:
    """
    For app/UI/exporter contexts:
    1) CLI override (--prefix)
    2) ENV MINDNET_PREFIX
    3) ENV COLLECTION_PREFIX (fallback)
    4) 'mindnet'
    """
    if cli_override and str(cli_override).strip():
        return str(cli_override).strip()
    return (
        os.getenv("MINDNET_PREFIX")
        or os.getenv("COLLECTION_PREFIX")
        or "mindnet"
    )


def get_prefix(cli_override: Optional[str] = None, target: str = "qdrant") -> str:
    """
    Universal wrapper (backwards compatible):
    target='qdrant' -> get_collection_prefix
    target='app'    -> get_mindnet_prefix
    """
    if target.lower() == "app":
        return get_mindnet_prefix(cli_override)
    return get_collection_prefix(cli_override)


# -------- Qdrant / embeddings / hashing --------

def get_qdrant_url(default: str = "http://127.0.0.1:6333") -> str:
    return os.getenv("QDRANT_URL", default)


def get_qdrant_api_key(default: str = "") -> str:
    return os.getenv("QDRANT_API_KEY", default)


def get_vector_dim(default: int = 384) -> int:
    try:
        return int(os.getenv("VECTOR_DIM", str(default)))
    except Exception:
        return default


def get_embed_url(default: Optional[str] = None) -> Optional[str]:
    return os.getenv("EMBED_URL", default)


def get_hash_env() -> Dict[str, str]:
    """
    Returns the hash configuration (aggregation only; evaluation stays in the scripts).
    """
    return {
        "MINDNET_HASH_COMPARE": os.getenv("MINDNET_HASH_COMPARE", ""),
        "MINDNET_HASH_SOURCE": os.getenv("MINDNET_HASH_SOURCE", ""),
        "MINDNET_HASH_NORMALIZE": os.getenv("MINDNET_HASH_NORMALIZE", ""),
    }
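A quick sketch of the resolution order documented above; the environment values are illustrative only:

# Hedged sketch: verify which prefix wins in each context.
import os

os.environ["COLLECTION_PREFIX"] = "prod"
os.environ["MINDNET_PREFIX"] = "dev"

assert get_collection_prefix() == "prod"   # Qdrant side prefers COLLECTION_PREFIX
assert get_mindnet_prefix() == "dev"       # app side prefers MINDNET_PREFIX
assert get_prefix("cli-wins", target="app") == "cli-wins"  # a CLI override always wins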
app/core/graph/__init__.py (new file, 16 lines)
@@ -0,0 +1,16 @@
"""
|
||||
FILE: app/core/graph/__init__.py
|
||||
DESCRIPTION: Unified Graph Package. Exportiert Kanten-Ableitung und Graph-Adapter.
|
||||
"""
|
||||
from .graph_derive_edges import build_edges_for_note
|
||||
from .graph_utils import PROVENANCE_PRIORITY
|
||||
from .graph_subgraph import Subgraph, expand
|
||||
from .graph_weights import EDGE_BASE_WEIGHTS
|
||||
|
||||
__all__ = [
|
||||
"build_edges_for_note",
|
||||
"PROVENANCE_PRIORITY",
|
||||
"Subgraph",
|
||||
"expand",
|
||||
"EDGE_BASE_WEIGHTS"
|
||||
]
|
||||
app/core/graph/graph_db_adapter.py (new file, 101 lines)
@@ -0,0 +1,101 @@
"""
|
||||
FILE: app/core/graph/graph_db_adapter.py
|
||||
DESCRIPTION: Datenbeschaffung aus Qdrant für den Graphen.
|
||||
AUDIT v1.2.0: Gold-Standard v4.1.0 - Scope-Awareness & Section-Filtering.
|
||||
- Erweiterte Suche nach chunk_id-Edges für Scope-Awareness
|
||||
- Optionales target_section-Filtering für präzise Section-Links
|
||||
- Vollständige Metadaten-Unterstützung (provenance, confidence, virtual)
|
||||
VERSION: 1.2.0 (WP-24c: Gold-Standard v4.1.0)
|
||||
"""
|
||||
from typing import List, Dict, Optional
|
||||
from qdrant_client import QdrantClient
|
||||
from qdrant_client.http import models as rest
|
||||
|
||||
# Nutzt die zentrale Infrastruktur für konsistente Collection-Namen (WP-14)
|
||||
from app.core.database import collection_names
|
||||
|
||||
def fetch_edges_from_qdrant(
|
||||
client: QdrantClient,
|
||||
prefix: str,
|
||||
seeds: List[str],
|
||||
edge_types: Optional[List[str]] = None,
|
||||
target_section: Optional[str] = None,
|
||||
chunk_ids: Optional[List[str]] = None,
|
||||
limit: int = 2048,
|
||||
) -> List[Dict]:
|
||||
"""
|
||||
Holt Edges aus der Datenbank basierend auf Seed-IDs.
|
||||
WP-24c v4.1.0: Scope-Aware Edge Retrieval mit Section-Filtering.
|
||||
|
||||
Args:
|
||||
client: Qdrant Client
|
||||
prefix: Collection-Präfix
|
||||
seeds: Liste von Note-IDs für die Suche
|
||||
edge_types: Optionale Filterung nach Kanten-Typen
|
||||
target_section: Optionales Section-Filtering (für präzise Section-Links)
|
||||
chunk_ids: Optionale Liste von Chunk-IDs für Scope-Awareness (Chunk-Level Edges)
|
||||
limit: Maximale Anzahl zurückgegebener Edges
|
||||
"""
|
||||
if not seeds or limit <= 0:
|
||||
return []
|
||||
|
||||
# Konsistente Namensauflösung via database-Paket
|
||||
# Rückgabe: (notes_col, chunks_col, edges_col)
|
||||
_, _, edges_col = collection_names(prefix)
|
||||
|
||||
# WP-24c v4.1.0: Scope-Awareness - Suche nach Note- UND Chunk-Level Edges
|
||||
seed_conditions = []
|
||||
for field in ("source_id", "target_id", "note_id"):
|
||||
for s in seeds:
|
||||
seed_conditions.append(
|
||||
rest.FieldCondition(key=field, match=rest.MatchValue(value=str(s)))
|
||||
)
|
||||
|
||||
# Chunk-Level Edges: Wenn chunk_ids angegeben, suche auch nach chunk_id als source_id
|
||||
if chunk_ids:
|
||||
for cid in chunk_ids:
|
||||
seed_conditions.append(
|
||||
rest.FieldCondition(key="source_id", match=rest.MatchValue(value=str(cid)))
|
||||
)
|
||||
|
||||
seeds_filter = rest.Filter(should=seed_conditions) if seed_conditions else None
|
||||
|
||||
# Optionaler Filter auf spezifische Kanten-Typen (z.B. für Intent-Routing)
|
||||
type_filter = None
|
||||
if edge_types:
|
||||
type_conds = [
|
||||
rest.FieldCondition(key="kind", match=rest.MatchValue(value=str(k)))
|
||||
for k in edge_types
|
||||
]
|
||||
type_filter = rest.Filter(should=type_conds)
|
||||
|
||||
# WP-24c v4.1.0: Section-Filtering für präzise Section-Links
|
||||
section_filter = None
|
||||
if target_section:
|
||||
section_filter = rest.Filter(must=[
|
||||
rest.FieldCondition(key="target_section", match=rest.MatchValue(value=str(target_section)))
|
||||
])
|
||||
|
||||
must = []
|
||||
if seeds_filter:
|
||||
must.append(seeds_filter)
|
||||
if type_filter:
|
||||
must.append(type_filter)
|
||||
if section_filter:
|
||||
must.append(section_filter)
|
||||
|
||||
flt = rest.Filter(must=must) if must else None
|
||||
|
||||
# Abfrage via Qdrant Scroll API
|
||||
# WICHTIG: with_payload=True lädt alle Metadaten (target_section, provenance etc.)
|
||||
pts, _ = client.scroll(
|
||||
collection_name=edges_col,
|
||||
scroll_filter=flt,
|
||||
limit=limit,
|
||||
with_payload=True,
|
||||
with_vectors=False,
|
||||
)
|
||||
|
||||
# Wir geben das vollständige Payload zurück, damit der Retriever
|
||||
# alle Signale für die Super-Edge-Aggregation und das Scoring hat.
|
||||
return [dict(p.payload) for p in pts if p.payload]
|
||||
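For context, a hedged usage sketch of the adapter above; the client URL, prefix, and seed IDs are made-up examples:

# Hedged sketch: fetch "references"/"depends_on" edges around one seed note.
from qdrant_client import QdrantClient

client = QdrantClient(url="http://127.0.0.1:6333")
payloads = fetch_edges_from_qdrant(
    client, prefix="mindnet",
    seeds=["note-roadmap"],
    edge_types=["references", "depends_on"],
    limit=256,
)
for pl in payloads:
    print(pl.get("kind"), pl.get("source_id"), "->", pl.get("target_id"))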
app/core/graph/graph_derive_edges.py (new file, 1008 lines)
File diff suppressed because it is too large.

app/core/graph/graph_extractors.py (new file, 164 lines)
@@ -0,0 +1,164 @@
"""
|
||||
FILE: app/core/graph/graph_extractors.py
|
||||
DESCRIPTION: Regex-basierte Extraktion von Relationen aus Text.
|
||||
AUDIT:
|
||||
- Regex für Wikilinks liberalisiert (Umlaute, Sonderzeichen).
|
||||
- Callout-Parser erweitert für Multi-Line-Listen und Header-Typen.
|
||||
"""
|
||||
import re
|
||||
from typing import List, Tuple
|
||||
|
||||
# Erlaube alle Zeichen außer ']' im Target (fängt Umlaute, Emojis, '&', '#' ab)
|
||||
_WIKILINK_RE = re.compile(r"\[\[(?:[^\|\]]+\|)?([^\]]+)\]\]")
|
||||
|
||||
_REL_PIPE = re.compile(r"\[\[\s*rel:(?P<kind>[a-z_]+)\s*\|\s*(?P<target>[^\]]+?)\s*\]\]", re.IGNORECASE)
|
||||
_REL_SPACE = re.compile(r"\[\[\s*rel:(?P<kind>[a-z_]+)\s+(?P<target>[^\]]+?)\s*\]\]", re.IGNORECASE)
|
||||
_REL_TEXT = re.compile(r"rel\s*:\s*(?P<kind>[a-z_]+)\s*\[\[\s*(?P<target>[^\]]+?)\s*\]\]", re.IGNORECASE)
|
||||
|
||||
# Erkennt [!edge] Callouts mit einem oder mehreren '>' am Anfang (für verschachtelte Callouts)
|
||||
_CALLOUT_START = re.compile(r"^\s*>{1,}\s*\[!edge\]\s*(.*)$", re.IGNORECASE)
|
||||
# Erkennt "kind: targets..."
|
||||
_REL_LINE = re.compile(r"^(?P<kind>[a-z_]+)\s*:\s*(?P<targets>.+?)\s*$", re.IGNORECASE)
|
||||
# Erkennt reine Typen (z.B. "depends_on" im Header)
|
||||
_SIMPLE_KIND = re.compile(r"^[a-z_]+$", re.IGNORECASE)
|
||||
|
||||
def extract_typed_relations(text: str) -> Tuple[List[Tuple[str, str]], str]:
|
||||
"""
|
||||
Findet Inline-Relationen wie [[rel:depends_on Target]].
|
||||
Gibt (Liste[(kind, target)], bereinigter_text) zurück.
|
||||
"""
|
||||
if not text: return [], ""
|
||||
pairs = []
|
||||
def _collect(m):
|
||||
k, t = m.group("kind").strip().lower(), m.group("target").strip()
|
||||
pairs.append((k, t))
|
||||
return ""
|
||||
text = _REL_PIPE.sub(_collect, text)
|
||||
text = _REL_SPACE.sub(_collect, text)
|
||||
text = _REL_TEXT.sub(_collect, text)
|
||||
return pairs, text
|
||||
|
||||
def extract_callout_relations(text: str) -> Tuple[List[Tuple[str,str]], str]:
|
||||
"""
|
||||
Verarbeitet Obsidian [!edge]-Callouts.
|
||||
Unterstützt zwei Formate:
|
||||
1. Explizit: "kind: [[Target]]"
|
||||
2. Implizit (Header): "> [!edge] kind" gefolgt von "[[Target]]" Zeilen
|
||||
3. Verschachtelt: ">> [!edge] kind" in verschachtelten Callouts
|
||||
"""
|
||||
if not text: return [], text
|
||||
lines = text.splitlines()
|
||||
out_pairs = []
|
||||
keep_lines = []
|
||||
i = 0
|
||||
|
||||
while i < len(lines):
|
||||
line = lines[i]
|
||||
m = _CALLOUT_START.match(line)
|
||||
if not m:
|
||||
keep_lines.append(line)
|
||||
i += 1
|
||||
continue
|
||||
|
||||
# Callout-Block gefunden. Wir sammeln alle relevanten Zeilen.
|
||||
block_lines = []
|
||||
|
||||
# Header Content prüfen (z.B. "type" aus "> [!edge] type" oder ">> [!edge] type")
|
||||
header_raw = m.group(1).strip()
|
||||
if header_raw:
|
||||
block_lines.append(header_raw)
|
||||
|
||||
# Bestimme die Einrückungsebene (Anzahl der '>' am Anfang der ersten Zeile)
|
||||
leading_gt_count = len(line) - len(line.lstrip('>'))
|
||||
if leading_gt_count == 0:
|
||||
leading_gt_count = 1 # Fallback für den Fall, dass kein '>' gefunden wurde
|
||||
|
||||
i += 1
|
||||
# Sammle alle Zeilen, die mit mindestens der gleichen Anzahl '>' beginnen
|
||||
while i < len(lines):
|
||||
next_line = lines[i]
|
||||
stripped = next_line.lstrip()
|
||||
# Prüfe, ob die Zeile mit mindestens der gleichen Anzahl '>' beginnt
|
||||
if not stripped.startswith('>'):
|
||||
break
|
||||
next_leading_gt_count = len(next_line) - len(next_line.lstrip('>'))
|
||||
# Wenn die Einrückung kleiner wird, haben wir den Block verlassen
|
||||
if next_leading_gt_count < leading_gt_count:
|
||||
break
|
||||
# Entferne genau die Anzahl der führenden '>' entsprechend der Einrückungsebene
|
||||
# und dann führende Leerzeichen
|
||||
if next_leading_gt_count >= leading_gt_count:
|
||||
# Entferne die führenden '>' (entsprechend der Einrückungsebene)
|
||||
content = stripped[leading_gt_count:].lstrip()
|
||||
if content:
|
||||
block_lines.append(content)
|
||||
i += 1
|
||||
|
||||
# Verarbeitung des Blocks
|
||||
current_kind = None
|
||||
|
||||
# Heuristik: Ist die allererste Zeile (meist aus dem Header) ein reiner Typ?
|
||||
# Dann setzen wir diesen als Default für den Block.
|
||||
if block_lines:
|
||||
first = block_lines[0]
|
||||
# Wenn es NICHT wie "Key: Value" aussieht, aber wie ein Wort:
|
||||
if not _REL_LINE.match(first) and _SIMPLE_KIND.match(first):
|
||||
current_kind = first.lower()
|
||||
|
||||
for bl in block_lines:
|
||||
# Prüfe, ob diese Zeile selbst ein neuer [!edge] Callout ist (für verschachtelte Blöcke)
|
||||
edge_match = re.match(r"^\s*\[!edge\]\s*(.*)$", bl, re.IGNORECASE)
|
||||
if edge_match:
|
||||
# Neuer Edge-Callout gefunden, setze den Typ
|
||||
edge_content = edge_match.group(1).strip()
|
||||
if edge_content:
|
||||
# Prüfe, ob es ein "kind: targets" Format ist
|
||||
mrel = _REL_LINE.match(edge_content)
|
||||
if mrel:
|
||||
current_kind = mrel.group("kind").strip().lower()
|
||||
targets = mrel.group("targets")
|
||||
# Links extrahieren
|
||||
found = _WIKILINK_RE.findall(targets)
|
||||
if found:
|
||||
for t in found: out_pairs.append((current_kind, t.strip()))
|
||||
elif _SIMPLE_KIND.match(edge_content):
|
||||
# Reiner Typ ohne Targets
|
||||
current_kind = edge_content.lower()
|
||||
continue
|
||||
|
||||
# 1. Prüfen auf explizites "Kind: Targets" (überschreibt Header-Typ für diese Zeile)
|
||||
mrel = _REL_LINE.match(bl)
|
||||
if mrel:
|
||||
line_kind = mrel.group("kind").strip().lower()
|
||||
targets = mrel.group("targets")
|
||||
|
||||
# Links extrahieren
|
||||
found = _WIKILINK_RE.findall(targets)
|
||||
if found:
|
||||
for t in found: out_pairs.append((line_kind, t.strip()))
|
||||
else:
|
||||
# Fallback für kommagetrennten Plaintext
|
||||
for raw in re.split(r"[,;]", targets):
|
||||
if raw.strip(): out_pairs.append((line_kind, raw.strip()))
|
||||
|
||||
# Aktualisiere current_kind für nachfolgende Zeilen
|
||||
current_kind = line_kind
|
||||
continue
|
||||
|
||||
# 2. Kein Key:Value Muster -> Prüfen auf Links, die den current_kind nutzen
|
||||
found = _WIKILINK_RE.findall(bl)
|
||||
if found:
|
||||
if current_kind:
|
||||
for t in found: out_pairs.append((current_kind, t.strip()))
|
||||
else:
|
||||
# Link ohne Typ und ohne Header-Typ.
|
||||
# Wird ignoriert oder könnte als 'related_to' fallback dienen.
|
||||
# Aktuell: Ignorieren, um False Positives zu vermeiden.
|
||||
pass
|
||||
|
||||
return out_pairs, "\n".join(keep_lines)
|
||||
|
||||
def extract_wikilinks(text: str) -> List[str]:
|
||||
"""Findet Standard-Wikilinks [[Target]] oder [[Alias|Target]]."""
|
||||
if not text: return []
|
||||
return [m.strip() for m in _WIKILINK_RE.findall(text) if m.strip()]
|
||||
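To make the two supported callout styles concrete, a small sketch with a made-up note body:

# Hedged sketch: header-kind lines and explicit "kind: [[Target]]" lines in one block.
sample = """Intro text with [[plain-link]].
> [!edge] depends_on
> [[qdrant-setup]]
> related_to: [[chunking]], [[embeddings]]
"""
pairs, remainder = extract_callout_relations(sample)
# pairs -> [("depends_on", "qdrant-setup"),
#           ("related_to", "chunking"), ("related_to", "embeddings")]
# remainder keeps "Intro text with [[plain-link]]." for the generic wikilink pass.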
app/core/graph/graph_subgraph.py (new file, 180 lines)
@@ -0,0 +1,180 @@
"""
|
||||
FILE: app/core/graph/graph_subgraph.py
|
||||
DESCRIPTION: In-Memory Repräsentation eines Graphen für Scoring und Analyse.
|
||||
Zentrale Komponente für die Graph-Expansion (BFS) und Bonus-Berechnung.
|
||||
WP-15c Update: Erhalt von Metadaten (target_section, provenance)
|
||||
für präzises Retrieval-Reasoning.
|
||||
WP-24c v4.1.0: Scope-Awareness und Section-Filtering Support.
|
||||
VERSION: 1.3.0 (WP-24c: Gold-Standard v4.1.0)
|
||||
STATUS: Active
|
||||
"""
|
||||
import math
|
||||
from collections import defaultdict
|
||||
from typing import Dict, List, Optional, DefaultDict, Any, Set
|
||||
from qdrant_client import QdrantClient
|
||||
|
||||
# Lokale Paket-Imports
|
||||
from .graph_weights import EDGE_BASE_WEIGHTS, calculate_edge_weight
|
||||
from .graph_db_adapter import fetch_edges_from_qdrant
|
||||
|
||||
class Subgraph:
|
||||
"""
|
||||
Leichtgewichtiger Subgraph mit Adjazenzlisten & Kennzahlen.
|
||||
Wird für die Berechnung von Graph-Boni im Retriever genutzt.
|
||||
"""
|
||||
|
||||
def __init__(self) -> None:
|
||||
# adj speichert nun vollständige Payloads statt nur Tripel
|
||||
self.adj: DefaultDict[str, List[Dict]] = defaultdict(list)
|
||||
self.reverse_adj: DefaultDict[str, List[Dict]] = defaultdict(list)
|
||||
self.in_degree: DefaultDict[str, int] = defaultdict(int)
|
||||
self.out_degree: DefaultDict[str, int] = defaultdict(int)
|
||||
# WP-24c v4.1.0: Chunk-Level In-Degree für präzise Scoring-Aggregation
|
||||
self.chunk_level_in_degree: DefaultDict[str, int] = defaultdict(int)
|
||||
|
||||
def add_edge(self, e: Dict) -> None:
|
||||
"""
|
||||
Fügt eine Kante hinzu und aktualisiert Indizes.
|
||||
WP-15c: Speichert das vollständige Payload für den Explanation Layer.
|
||||
"""
|
||||
src = e.get("source")
|
||||
tgt = e.get("target")
|
||||
kind = e.get("kind")
|
||||
|
||||
# Das gesamte Payload wird als Kanten-Objekt behalten
|
||||
# Wir stellen sicher, dass alle relevanten Metadaten vorhanden sind
|
||||
edge_data = {
|
||||
"source": src,
|
||||
"target": tgt,
|
||||
"kind": kind,
|
||||
"weight": e.get("weight", EDGE_BASE_WEIGHTS.get(kind, 0.0)),
|
||||
"provenance": e.get("provenance", "rule"),
|
||||
"confidence": e.get("confidence", 1.0),
|
||||
"target_section": e.get("target_section"), # Essentiell für Präzision
|
||||
"is_super_edge": e.get("is_super_edge", False),
|
||||
"virtual": e.get("virtual", False), # WP-24c v4.1.0: Für Authority-Priorisierung
|
||||
"chunk_id": e.get("chunk_id") # WP-24c v4.1.0: Für RAG-Kontext
|
||||
}
|
||||
|
||||
owner = e.get("note_id")
|
||||
|
||||
if not src or not tgt:
|
||||
return
|
||||
|
||||
# 1. Forward-Kante
|
||||
self.adj[src].append(edge_data)
|
||||
self.out_degree[src] += 1
|
||||
self.in_degree[tgt] += 1
|
||||
|
||||
# 2. Reverse-Kante (für Explanation Layer & Backlinks)
|
||||
self.reverse_adj[tgt].append(edge_data)
|
||||
|
||||
# 3. Kontext-Note Handling (erhöht die Zentralität der Parent-Note)
|
||||
if owner and owner != src:
|
||||
# Wir erstellen eine virtuelle Kontext-Kante
|
||||
ctx_edge = edge_data.copy()
|
||||
ctx_edge["source"] = owner
|
||||
ctx_edge["via_context"] = True
|
||||
|
||||
self.adj[owner].append(ctx_edge)
|
||||
self.out_degree[owner] += 1
|
||||
if owner != tgt:
|
||||
self.reverse_adj[tgt].append(ctx_edge)
|
||||
self.in_degree[owner] += 1
|
||||
|
||||
def aggregate_edge_bonus(self, node_id: str) -> float:
|
||||
"""Summe der ausgehenden Kantengewichte (Hub-Score)."""
|
||||
return sum(edge["weight"] for edge in self.adj.get(node_id, []))
|
||||
|
||||
def edge_bonus(self, node_id: str) -> float:
|
||||
"""API für Retriever (WP-04a Kompatibilität)."""
|
||||
return self.aggregate_edge_bonus(node_id)
|
||||
|
||||
def centrality_bonus(self, node_id: str) -> float:
|
||||
"""
|
||||
Log-gedämpfte Zentralität basierend auf dem In-Degree.
|
||||
Begrenzt auf einen maximalen Boost von 0.15.
|
||||
"""
|
||||
indeg = self.in_degree.get(node_id, 0)
|
||||
if indeg <= 0:
|
||||
return 0.0
|
||||
# math.log1p(x) entspricht log(1+x)
|
||||
return min(math.log1p(indeg) / 10.0, 0.15)
|
||||
|
||||
def get_outgoing_edges(self, node_id: str) -> List[Dict[str, Any]]:
|
||||
"""Gibt alle ausgehenden Kanten einer Node inkl. Metadaten zurück."""
|
||||
return self.adj.get(node_id, [])
|
||||
|
||||
def get_incoming_edges(self, node_id: str) -> List[Dict[str, Any]]:
|
||||
"""Gibt alle eingehenden Kanten einer Node inkl. Metadaten zurück."""
|
||||
return self.reverse_adj.get(node_id, [])
|
||||
|
||||
|
||||
def expand(
|
||||
client: QdrantClient,
|
||||
prefix: str,
|
||||
seeds: List[str],
|
||||
depth: int = 1,
|
||||
edge_types: Optional[List[str]] = None,
|
||||
chunk_ids: Optional[List[str]] = None,
|
||||
target_section: Optional[str] = None,
|
||||
) -> Subgraph:
|
||||
"""
|
||||
Expandiert ab Seeds entlang von Edges bis zu einer bestimmten Tiefe.
|
||||
WP-24c v4.1.0: Unterstützt Scope-Awareness (chunk_ids) und Section-Filtering.
|
||||
|
||||
Args:
|
||||
client: Qdrant Client
|
||||
prefix: Collection-Präfix
|
||||
seeds: Liste von Note-IDs für die Expansion
|
||||
depth: Maximale Tiefe der Expansion
|
||||
edge_types: Optionale Filterung nach Kanten-Typen
|
||||
chunk_ids: Optionale Liste von Chunk-IDs für Scope-Awareness
|
||||
target_section: Optionales Section-Filtering
|
||||
"""
|
||||
sg = Subgraph()
|
||||
frontier = set(seeds)
|
||||
visited = set()
|
||||
|
||||
for _ in range(max(depth, 0)):
|
||||
if not frontier:
|
||||
break
|
||||
|
||||
# WP-24c v4.1.0: Erweiterte Edge-Retrieval mit Scope-Awareness und Section-Filtering
|
||||
payloads = fetch_edges_from_qdrant(
|
||||
client, prefix, list(frontier),
|
||||
edge_types=edge_types,
|
||||
chunk_ids=chunk_ids,
|
||||
target_section=target_section
|
||||
)
|
||||
next_frontier: Set[str] = set()
|
||||
|
||||
for pl in payloads:
|
||||
src, tgt = pl.get("source_id"), pl.get("target_id")
|
||||
if not src or not tgt: continue
|
||||
|
||||
# WP-15c: Wir übergeben das vollständige Payload an add_edge
|
||||
# WP-24c v4.1.0: virtual Flag wird für Authority-Priorisierung benötigt
|
||||
edge_payload = {
|
||||
"source": src,
|
||||
"target": tgt,
|
||||
"kind": pl.get("kind", "edge"),
|
||||
"weight": calculate_edge_weight(pl),
|
||||
"note_id": pl.get("note_id"),
|
||||
"provenance": pl.get("provenance", "rule"),
|
||||
"confidence": pl.get("confidence", 1.0),
|
||||
"target_section": pl.get("target_section"),
|
||||
"virtual": pl.get("virtual", False), # WP-24c v4.1.0: Für Authority-Priorisierung
|
||||
"chunk_id": pl.get("chunk_id") # WP-24c v4.1.0: Für RAG-Kontext
|
||||
}
|
||||
|
||||
sg.add_edge(edge_payload)
|
||||
|
||||
# BFS Logik: Neue Ziele in die nächste Frontier aufnehmen
|
||||
if tgt not in visited:
|
||||
next_frontier.add(str(tgt))
|
||||
|
||||
visited |= frontier
|
||||
frontier = next_frontier - visited
|
||||
|
||||
return sg
|
||||
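A hedged usage sketch for the BFS expansion above; the client URL, prefix, and seed ID are illustrative assumptions:

# Hedged sketch: two-hop expansion around a seed note, then score lookups.
from qdrant_client import QdrantClient

client = QdrantClient(url="http://127.0.0.1:6333")
sg = expand(client, prefix="mindnet", seeds=["note-roadmap"], depth=2,
            edge_types=["references", "depends_on"])
print("hub score:", sg.edge_bonus("note-roadmap"))
print("centrality:", sg.centrality_bonus("note-roadmap"))
for edge in sg.get_outgoing_edges("note-roadmap"):
    print(edge["kind"], "->", edge["target"], f"({edge['weight']:.2f})")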
app/core/graph/graph_utils.py (new file, 177 lines)
@@ -0,0 +1,177 @@
"""
|
||||
FILE: app/core/graph/graph_utils.py
|
||||
DESCRIPTION: Basale Werkzeuge, ID-Generierung und Provenance-Konfiguration für den Graphen.
|
||||
AUDIT v4.0.0:
|
||||
- GOLD-STANDARD v4.0.0: Strikte 4-Parameter-ID für Kanten (kind, source, target, scope).
|
||||
- Eliminiert ID-Inkonsistenz zwischen Phase 1 (Autorität) und Phase 2 (Symmetrie).
|
||||
- rule_id und variant werden ignoriert in der ID-Generierung (nur im Payload gespeichert).
|
||||
- Fix für das "Steinzeitaxt"-Problem durch konsistente ID-Generierung.
|
||||
VERSION: 4.0.0 (WP-24c: Gold-Standard Identity)
|
||||
STATUS: Active
|
||||
"""
|
||||
import os
|
||||
import uuid
|
||||
import hashlib
|
||||
from typing import Iterable, List, Optional, Set, Any, Tuple
|
||||
|
||||
try:
|
||||
import yaml
|
||||
except ImportError:
|
||||
yaml = None
|
||||
|
||||
# WP-15b: Prioritäten-Ranking für die De-Duplizierung von Kanten unterschiedlicher Herkunft
|
||||
PROVENANCE_PRIORITY = {
|
||||
"explicit:wikilink": 1.00,
|
||||
"inline:rel": 0.95,
|
||||
"callout:edge": 0.90,
|
||||
"explicit:callout": 0.90, # WP-24c v4.2.7: Callout-Kanten aus candidate_pool
|
||||
"semantic_ai": 0.90, # Validierte KI-Kanten
|
||||
"structure:belongs_to": 1.00,
|
||||
"structure:order": 0.95, # next/prev
|
||||
"explicit:note_scope": 1.00,
|
||||
"explicit:note_zone": 1.00, # WP-24c v4.2.0: Note-Scope Zonen (höchste Priorität)
|
||||
"derived:backlink": 0.90,
|
||||
"edge_defaults": 0.70 # Heuristik basierend auf types.yaml
|
||||
}
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Pfad-Auflösung (Integration der .env Umgebungsvariablen)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def get_vocab_path() -> str:
|
||||
"""Liefert den Pfad zum Edge-Vokabular aus der .env oder den Default."""
|
||||
return os.getenv("MINDNET_VOCAB_PATH", "/mindnet/vault/mindnet/_system/dictionary/edge_vocabulary.md")
|
||||
|
||||
def get_schema_path() -> str:
|
||||
"""Liefert den Pfad zum Graph-Schema aus der .env oder den Default."""
|
||||
return os.getenv("MINDNET_SCHEMA_PATH", "/mindnet/vault/mindnet/_system/dictionary/graph_schema.md")
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# ID & String Helper
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def _get(d: dict, *keys, default=None):
|
||||
"""Sicherer Zugriff auf tief verschachtelte Dictionary-Keys."""
|
||||
for k in keys:
|
||||
if isinstance(d, dict) and k in d and d[k] is not None:
|
||||
return d[k]
|
||||
return default
|
||||
|
||||
def _dedupe_seq(seq: Iterable[str]) -> List[str]:
|
||||
"""Dedupliziert eine Sequenz von Strings unter Beibehaltung der Reihenfolge."""
|
||||
seen: Set[str] = set()
|
||||
out: List[str] = []
|
||||
for s in seq:
|
||||
if s not in seen:
|
||||
seen.add(s)
|
||||
out.append(s)
|
||||
return out
|
||||
|
||||
def parse_link_target(raw: str, current_note_id: Optional[str] = None) -> Tuple[str, Optional[str]]:
|
||||
"""
|
||||
Trennt einen Obsidian-Link [[Target#Section]] in seine Bestandteile Target und Section.
|
||||
Behandelt Self-Links (z.B. [[#Ziele]]), indem die aktuelle note_id eingesetzt wird.
|
||||
|
||||
Returns:
|
||||
Tuple (target_id, target_section)
|
||||
"""
|
||||
if not raw:
|
||||
return "", None
|
||||
|
||||
parts = raw.split("#", 1)
|
||||
target = parts[0].strip()
|
||||
section = parts[1].strip() if len(parts) > 1 else None
|
||||
|
||||
# Spezialfall: Self-Link innerhalb derselben Datei
|
||||
if not target and section and current_note_id:
|
||||
target = current_note_id
|
||||
|
||||
return target, section
|
||||
|
||||
def _mk_edge_id(kind: str, s: str, t: str, scope: str, target_section: Optional[str] = None) -> str:
|
||||
"""
|
||||
WP-24c v4.0.0: DER GLOBALE STANDARD für Kanten-IDs.
|
||||
Erzeugt eine deterministische UUIDv5. Dies stellt sicher, dass manuelle Links
|
||||
und systemgenerierte Symmetrien dieselbe Point-ID in Qdrant erhalten.
|
||||
|
||||
GOLD-STANDARD v4.0.0: Die ID basiert STRICT auf vier Parametern:
|
||||
f"edge:{kind}:{source}:{target}:{scope}"
|
||||
|
||||
Die Parameter rule_id und variant werden IGNORIERT und fließen NICHT in die ID ein.
|
||||
Sie können weiterhin im Payload gespeichert werden, haben aber keinen Einfluss auf die Identität.
|
||||
|
||||
Args:
|
||||
kind: Typ der Relation (z.B. 'mastered_by')
|
||||
s: Kanonische ID der Quell-Note
|
||||
t: Kanonische ID der Ziel-Note
|
||||
scope: Granularität (Standard: 'note')
|
||||
rule_id: Optionale ID der Regel (aus graph_derive_edges) - IGNORIERT in ID-Generierung
|
||||
variant: Optionale Variante für multiple Links zum selben Ziel - IGNORIERT in ID-Generierung
|
||||
"""
|
||||
if not all([kind, s, t]):
|
||||
raise ValueError(f"Incomplete data for edge ID: kind={kind}, src={s}, tgt={t}")
|
||||
|
||||
# Der String enthält nun alle distinkten semantischen Merkmale
|
||||
base = f"edge:{kind}:{s}:{t}:{scope}"
|
||||
|
||||
# Wenn ein Link auf eine spezifische Sektion zeigt, ist es eine andere Relation
|
||||
if target_section:
|
||||
base += f":{target_section}"
|
||||
|
||||
return str(uuid.uuid5(uuid.NAMESPACE_URL, base))
|
||||
|
||||
def _edge(kind: str, scope: str, source_id: str, target_id: str, note_id: str, extra: Optional[dict] = None) -> dict:
|
||||
"""
|
||||
Konstruiert ein standardisiertes Kanten-Payload für Qdrant.
|
||||
Wird von graph_derive_edges.py benötigt.
|
||||
"""
|
||||
pl = {
|
||||
"kind": kind,
|
||||
"relation": kind,
|
||||
"scope": scope,
|
||||
"source_id": source_id,
|
||||
"target_id": target_id,
|
||||
"note_id": note_id,
|
||||
"virtual": False # Standardmäßig explizit, solange nicht anders in Phase 2 gesetzt
|
||||
}
|
||||
if extra:
|
||||
pl.update(extra)
|
||||
return pl
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Registry Operations
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def load_types_registry() -> dict:
|
||||
"""
|
||||
Lädt die zentrale YAML-Registry (types.yaml).
|
||||
Pfad wird über die Umgebungsvariable MINDNET_TYPES_FILE gesteuert.
|
||||
"""
|
||||
p = os.getenv("MINDNET_TYPES_FILE", "./config/types.yaml")
|
||||
if not os.path.isfile(p) or yaml is None:
|
||||
return {}
|
||||
try:
|
||||
with open(p, "r", encoding="utf-8") as f:
|
||||
data = yaml.safe_load(f)
|
||||
return data if data is not None else {}
|
||||
except Exception:
|
||||
return {}
|
||||
|
||||
def get_edge_defaults_for(note_type: Optional[str], reg: dict) -> List[str]:
|
||||
"""
|
||||
Ermittelt die konfigurierten Standard-Kanten für einen Note-Typ.
|
||||
Greift bei Bedarf auf die globalen Defaults in der Registry zurück.
|
||||
"""
|
||||
types_map = reg.get("types", reg) if isinstance(reg, dict) else {}
|
||||
if note_type and isinstance(types_map, dict):
|
||||
t_cfg = types_map.get(note_type)
|
||||
if isinstance(t_cfg, dict) and isinstance(t_cfg.get("edge_defaults"), list):
|
||||
return [str(x) for x in t_cfg["edge_defaults"]]
|
||||
|
||||
# Fallback auf globale Defaults
|
||||
for key in ("defaults", "default", "global"):
|
||||
v = reg.get(key)
|
||||
if isinstance(v, dict) and isinstance(v.get("edge_defaults"), list):
|
||||
return [str(x) for x in v["edge_defaults"] if isinstance(x, str)]
|
||||
|
||||
return []
|
||||
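A small sketch of the ID determinism documented above; the note IDs are illustrative:

# Hedged sketch: section-aware link parsing feeding deterministic edge IDs.
target, section = parse_link_target("roadmap-2025#Goals")
# -> ("roadmap-2025", "Goals")

a = _mk_edge_id("references", "note-a", target, "note", target_section=section)
b = _mk_edge_id("references", "note-a", target, "note", target_section=section)
assert a == b  # same inputs -> same UUIDv5

c = _mk_edge_id("references", "note-a", target, "note")
assert a != c  # a section link is a distinct relation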
app/core/graph/graph_weights.py (new file, 39 lines)
@@ -0,0 +1,39 @@
"""
|
||||
FILE: app/core/graph/graph_weights.py
|
||||
DESCRIPTION: Definition der Basisgewichte und Berechnung der Kanteneffektivität.
|
||||
"""
|
||||
from typing import Dict
|
||||
|
||||
# Basisgewichte je Edge-Typ (WP-04a Config)
|
||||
EDGE_BASE_WEIGHTS: Dict[str, float] = {
|
||||
# Struktur
|
||||
"belongs_to": 0.10,
|
||||
"next": 0.06,
|
||||
"prev": 0.06,
|
||||
"backlink": 0.04,
|
||||
"references_at": 0.08,
|
||||
|
||||
# Wissen
|
||||
"references": 0.20,
|
||||
"depends_on": 0.18,
|
||||
"related_to": 0.15,
|
||||
"similar_to": 0.12,
|
||||
}
|
||||
|
||||
def calculate_edge_weight(pl: Dict) -> float:
|
||||
"""Berechnet das effektive Edge-Gewicht aus kind + confidence."""
|
||||
kind = pl.get("kind", "edge")
|
||||
base = EDGE_BASE_WEIGHTS.get(kind, 0.0)
|
||||
|
||||
conf_raw = pl.get("confidence", None)
|
||||
try:
|
||||
conf = float(conf_raw) if conf_raw is not None else None
|
||||
except Exception:
|
||||
conf = None
|
||||
|
||||
if conf is None:
|
||||
return base
|
||||
|
||||
# Clamp confidence 0.0 - 1.0
|
||||
conf = max(0.0, min(1.0, conf))
|
||||
return base * conf
|
||||
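A quick worked example of the weighting rule above: a "references" edge has base weight 0.20, so confidence 0.5 halves it to 0.10, while a missing confidence falls back to the plain base weight:

# Hedged sketch of calculate_edge_weight behaviour.
assert calculate_edge_weight({"kind": "references", "confidence": 0.5}) == 0.10
assert calculate_edge_weight({"kind": "depends_on"}) == 0.18       # no confidence -> base
assert calculate_edge_weight({"kind": "unknown_kind", "confidence": 1.0}) == 0.0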
app/core/graph_adapter.py (deleted)
@@ -1,256 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
app/core/graph_adapter.py — adjacency construction & subgraph expansion

Purpose:
Builds a lightweight in-memory graph from Qdrant edges (collection: *_edges).

Compatibility:
- WP-04a: provides scores (edge_bonus, centrality).
- WP-04b: now also provides structural data for explanations (reverse lookup).

Version:
0.4.0 (update for WP-04b: reverse adjacency for explainability)
"""

from __future__ import annotations

from typing import Dict, List, Optional, DefaultDict, Any
from collections import defaultdict

from qdrant_client import QdrantClient
from qdrant_client.http import models as rest

from app.core.qdrant import collection_names

# legacy import fallback
try:  # pragma: no cover
    from app.core.qdrant_points import get_edges_for_sources  # type: ignore
except Exception:  # pragma: no cover
    get_edges_for_sources = None  # type: ignore


# Base weights per edge kind (WP-04a config)
EDGE_BASE_WEIGHTS: Dict[str, float] = {
    # structure
    "belongs_to": 0.10,
    "next": 0.06,
    "prev": 0.06,
    "backlink": 0.04,
    "references_at": 0.08,

    # knowledge
    "references": 0.20,
    "depends_on": 0.18,
    "related_to": 0.15,
    "similar_to": 0.12,
}


def _edge_weight(pl: Dict) -> float:
    """Computes the effective edge weight from kind + confidence."""
    kind = pl.get("kind", "edge")
    base = EDGE_BASE_WEIGHTS.get(kind, 0.0)

    conf_raw = pl.get("confidence", None)
    try:
        conf = float(conf_raw) if conf_raw is not None else None
    except Exception:
        conf = None

    if conf is None:
        return base

    if conf < 0.0:
        conf = 0.0
    if conf > 1.0:
        conf = 1.0

    return base * conf


def _fetch_edges(
    client: QdrantClient,
    prefix: str,
    seeds: List[str],
    edge_types: Optional[List[str]] = None,
    limit: int = 2048,
) -> List[Dict]:
    """
    Fetches edges directly from the *_edges collection.
    Filter: source_id IN seeds OR target_id IN seeds OR note_id IN seeds
    """
    if not seeds or limit <= 0:
        return []

    _, _, edges_col = collection_names(prefix)

    seed_conditions = []
    for field in ("source_id", "target_id", "note_id"):
        for s in seeds:
            seed_conditions.append(
                rest.FieldCondition(key=field, match=rest.MatchValue(value=str(s)))
            )
    seeds_filter = rest.Filter(should=seed_conditions) if seed_conditions else None

    type_filter = None
    if edge_types:
        type_conds = [
            rest.FieldCondition(key="kind", match=rest.MatchValue(value=str(k)))
            for k in edge_types
        ]
        type_filter = rest.Filter(should=type_conds)

    must = []
    if seeds_filter:
        must.append(seeds_filter)
    if type_filter:
        must.append(type_filter)

    flt = rest.Filter(must=must) if must else None

    pts, _ = client.scroll(
        collection_name=edges_col,
        scroll_filter=flt,
        limit=limit,
        with_payload=True,
        with_vectors=False,
    )

    out: List[Dict] = []
    for p in pts or []:
        pl = dict(p.payload or {})
        if pl:
            out.append(pl)
    return out


class Subgraph:
    """Lightweight subgraph with adjacency lists & metrics."""

    def __init__(self) -> None:
        # forward: source -> [targets]
        self.adj: DefaultDict[str, List[Dict]] = defaultdict(list)
        # reverse: target -> [sources] (new for the WP-04b explanation layer)
        self.reverse_adj: DefaultDict[str, List[Dict]] = defaultdict(list)

        self.in_degree: DefaultDict[str, int] = defaultdict(int)
        self.out_degree: DefaultDict[str, int] = defaultdict(int)

    def add_edge(self, e: Dict) -> None:
        """
        Adds an edge and updates the forward/reverse indexes.
        e must contain: source, target, kind, weight.
        """
        src = e.get("source")
        tgt = e.get("target")
        kind = e.get("kind")
        weight = e.get("weight", EDGE_BASE_WEIGHTS.get(kind, 0.0))
        owner = e.get("note_id")

        if not src or not tgt:
            return
# 1. Primäre Adjazenz (Forward)
|
||||
edge_data = {"target": tgt, "kind": kind, "weight": weight}
|
||||
self.adj[src].append(edge_data)
|
||||
self.out_degree[src] += 1
|
||||
self.in_degree[tgt] += 1
|
||||
|
||||
# 2. Reverse Adjazenz (Neu für Explanation)
|
||||
# Wir speichern, woher die Kante kam.
|
||||
rev_data = {"source": src, "kind": kind, "weight": weight}
|
||||
self.reverse_adj[tgt].append(rev_data)
|
||||
|
||||
# 3. Kontext-Note Handling (Forward & Reverse)
|
||||
# Wenn eine Kante "im Kontext einer Note" (owner) definiert ist,
|
||||
# schreiben wir sie der Note gut, damit der Retriever Scores auf Note-Ebene findet.
|
||||
if owner and owner != src:
|
||||
# Forward: Owner -> Target
|
||||
self.adj[owner].append(edge_data)
|
||||
self.out_degree[owner] += 1
|
||||
|
||||
# Reverse: Target wird vom Owner referenziert (indirekt)
|
||||
if owner != tgt:
|
||||
rev_owner_data = {"source": owner, "kind": kind, "weight": weight, "via_context": True}
|
||||
self.reverse_adj[tgt].append(rev_owner_data)
|
||||
self.in_degree[owner] += 1 # Leichter Centrality Boost für den Owner
|
||||
|
||||
def aggregate_edge_bonus(self, node_id: str) -> float:
|
||||
"""Summe der ausgehenden Kantengewichte (Hub-Score)."""
|
||||
return sum(edge["weight"] for edge in self.adj.get(node_id, []))
|
||||
|
||||
def edge_bonus(self, node_id: str) -> float:
|
||||
"""API für Retriever (WP-04a Kompatibilität)."""
|
||||
return self.aggregate_edge_bonus(node_id)
|
||||
|
||||
def centrality_bonus(self, node_id: str) -> float:
|
||||
"""Log-gedämpfte Zentralität (In-Degree)."""
|
||||
import math
|
||||
indeg = self.in_degree.get(node_id, 0)
|
||||
if indeg <= 0:
|
||||
return 0.0
|
||||
return min(math.log1p(indeg) / 10.0, 0.15)
|
||||
|
||||
# --- WP-04b Explanation Helpers ---
|
||||
|
||||
def get_outgoing_edges(self, node_id: str) -> List[Dict[str, Any]]:
|
||||
"""Liefert Liste aller Ziele, auf die dieser Knoten zeigt."""
|
||||
return self.adj.get(node_id, [])
|
||||
|
||||
def get_incoming_edges(self, node_id: str) -> List[Dict[str, Any]]:
|
||||
"""Liefert Liste aller Quellen, die auf diesen Knoten zeigen."""
|
||||
return self.reverse_adj.get(node_id, [])
|
||||
|
||||
|
||||
def expand(
|
||||
client: QdrantClient,
|
||||
prefix: str,
|
||||
seeds: List[str],
|
||||
depth: int = 1,
|
||||
edge_types: Optional[List[str]] = None,
|
||||
) -> Subgraph:
|
||||
"""
|
||||
Expandiert ab Seeds entlang von Edges (bis `depth`).
|
||||
"""
|
||||
sg = Subgraph()
|
||||
frontier = set(seeds)
|
||||
visited = set()
|
||||
|
||||
max_depth = max(depth, 0)
|
||||
|
||||
for _ in range(max_depth):
|
||||
if not frontier:
|
||||
break
|
||||
|
||||
edges_payloads = _fetch_edges(
|
||||
client=client,
|
||||
prefix=prefix,
|
||||
seeds=list(frontier),
|
||||
edge_types=edge_types,
|
||||
limit=2048,
|
||||
)
|
||||
|
||||
next_frontier = set()
|
||||
for pl in edges_payloads:
|
||||
src = pl.get("source_id")
|
||||
tgt = pl.get("target_id")
|
||||
|
||||
# Skip invalid edges
|
||||
if not src or not tgt:
|
||||
continue
|
||||
|
||||
e = {
|
||||
"source": src,
|
||||
"target": tgt,
|
||||
"kind": pl.get("kind", "edge"),
|
||||
"weight": _edge_weight(pl),
|
||||
"note_id": pl.get("note_id"),
|
||||
}
|
||||
sg.add_edge(e)
|
||||
|
||||
# Nur weitersuchen, wenn Target noch nicht besucht
|
||||
if tgt and tgt not in visited:
|
||||
next_frontier.add(tgt)
|
||||
|
||||
visited |= frontier
|
||||
frontier = next_frontier - visited
|
||||
|
||||
return sg
|
||||
|
|
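A minimal usage sketch of the (now removed) expansion API above, assuming a reachable Qdrant instance and a populated *_edges collection; the endpoint and IDs are illustrative:

# Expand two hops out from a seed note and read the WP-04a/04b scores.
client = QdrantClient(url="http://localhost:6333")  # hypothetical endpoint
sg = expand(client, prefix="mindnet", seeds=["note-123"], depth=2,
            edge_types=["references", "depends_on"])
print(sg.edge_bonus("note-123"))          # hub score (sum of outgoing weights)
print(sg.centrality_bonus("note-123"))    # log-damped in-degree, capped at 0.15
print(sg.get_incoming_edges("note-123"))  # reverse lookup for explanations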
app/core/ingestion.py (deleted file, 305 lines)
@@ -1,305 +0,0 @@
"""
app/core/ingestion.py

Central service for transforming Markdown files into Qdrant objects.
Version: 2.5.2 (full feature set: change detection + robust IO + clean config)
"""
import os
import logging
import asyncio
import time
from typing import Dict, List, Optional, Tuple, Any

# Core module imports
from app.core.parser import (
    read_markdown,
    normalize_frontmatter,
    validate_required_frontmatter,
)
from app.core.note_payload import make_note_payload
from app.core.chunker import assemble_chunks, get_chunk_config
from app.core.chunk_payload import make_chunk_payloads

# Fallback for edges
try:
    from app.core.derive_edges import build_edges_for_note
except ImportError:
    def build_edges_for_note(*args, **kwargs):
        return []

from app.core.qdrant import QdrantConfig, get_client, ensure_collections, ensure_payload_indexes
from app.core.qdrant_points import (
    points_for_chunks,
    points_for_note,
    points_for_edges,
    upsert_batch,
)

from app.services.embeddings_client import EmbeddingsClient

logger = logging.getLogger(__name__)

# --- Helpers ---
def load_type_registry(custom_path: Optional[str] = None) -> dict:
    import yaml
    path = custom_path or os.getenv("MINDNET_TYPES_FILE", "config/types.yaml")
    if not os.path.exists(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return yaml.safe_load(f) or {}
    except Exception:
        return {}

def resolve_note_type(requested: Optional[str], reg: dict) -> str:
    types = reg.get("types", {})
    if requested and requested in types:
        return requested
    return "concept"

def effective_chunk_profile(note_type: str, reg: dict) -> str:
    t_cfg = reg.get("types", {}).get(note_type, {})
    if t_cfg and t_cfg.get("chunk_profile"):
        return t_cfg.get("chunk_profile")
    return reg.get("defaults", {}).get("chunk_profile", "default")

def effective_retriever_weight(note_type: str, reg: dict) -> float:
    t_cfg = reg.get("types", {}).get(note_type, {})
    if t_cfg and "retriever_weight" in t_cfg:
        return float(t_cfg["retriever_weight"])
    return float(reg.get("defaults", {}).get("retriever_weight", 1.0))


class IngestionService:
    def __init__(self, collection_prefix: str = None):
        env_prefix = os.getenv("COLLECTION_PREFIX", "mindnet")
        self.prefix = collection_prefix or env_prefix

        self.cfg = QdrantConfig.from_env()
        self.cfg.prefix = self.prefix
        self.client = get_client(self.cfg)
        self.dim = self.cfg.dim
        self.registry = load_type_registry()
        self.embedder = EmbeddingsClient()

        try:
            ensure_collections(self.client, self.prefix, self.dim)
            ensure_payload_indexes(self.client, self.prefix)
        except Exception as e:
            logger.warning(f"DB init warning: {e}")

    async def process_file(
        self,
        file_path: str,
        vault_root: str,
        force_replace: bool = False,
        apply: bool = False,
        purge_before: bool = False,
        note_scope_refs: bool = False,
        hash_mode: str = "body",
        hash_source: str = "parsed",
        hash_normalize: str = "canonical"
    ) -> Dict[str, Any]:
        """
        Processes a single file (async),
        including change detection (hash check) against Qdrant.
        """
        result = {
            "path": file_path,
            "status": "skipped",
            "changed": False,
            "error": None
        }

        # 1. Parse & frontmatter validation
        try:
            parsed = read_markdown(file_path)
            if not parsed:
                return {**result, "error": "Empty or unreadable file"}

            fm = normalize_frontmatter(parsed.frontmatter)
            validate_required_frontmatter(fm)
        except Exception as e:
            logger.error(f"Validation failed for {file_path}: {e}")
            return {**result, "error": f"Validation failed: {str(e)}"}

        # 2. Type & config resolution
        note_type = resolve_note_type(fm.get("type"), self.registry)
        fm["type"] = note_type
        fm["chunk_profile"] = effective_chunk_profile(note_type, self.registry)

        weight = fm.get("retriever_weight")
        if weight is None:
            weight = effective_retriever_weight(note_type, self.registry)
        fm["retriever_weight"] = float(weight)

        # 3. Build note payload
        try:
            note_pl = make_note_payload(
                parsed,
                vault_root=vault_root,
                hash_mode=hash_mode,
                hash_normalize=hash_normalize,
                hash_source=hash_source,
                file_path=file_path
            )
            if not note_pl.get("fulltext"):
                note_pl["fulltext"] = getattr(parsed, "body", "") or ""
            note_pl["retriever_weight"] = fm["retriever_weight"]

            note_id = note_pl["note_id"]
        except Exception as e:
            logger.error(f"Payload build failed: {e}")
            return {**result, "error": f"Payload build failed: {str(e)}"}

        # 4. Change detection (the missing piece!)
        old_payload = None
        if not force_replace:
            old_payload = self._fetch_note_payload(note_id)

        has_old = old_payload is not None
        key_current = f"{hash_mode}:{hash_source}:{hash_normalize}"
        old_hash = (old_payload or {}).get("hashes", {}).get(key_current)
        new_hash = note_pl.get("hashes", {}).get(key_current)

        hash_changed = (old_hash != new_hash)
        chunks_missing, edges_missing = self._artifacts_missing(note_id)

        should_write = force_replace or (not has_old) or hash_changed or chunks_missing or edges_missing

        if not should_write:
            return {**result, "status": "unchanged", "note_id": note_id}

        if not apply:
            return {**result, "status": "dry-run", "changed": True, "note_id": note_id}

        # 5. Processing (chunking, embedding, edges)
        try:
            body_text = getattr(parsed, "body", "") or ""

            # --- Config loading (clean) ---
            chunk_config = get_chunk_config(note_type)
            # The logic from types.yaml applies here (smart=True/False)

            chunks = await assemble_chunks(fm["id"], body_text, fm["type"], config=chunk_config)
            chunk_pls = make_chunk_payloads(fm, note_pl["path"], chunks, note_text=body_text)

            # Embedding
            vecs = []
            if chunk_pls:
                texts = [c.get("window") or c.get("text") or "" for c in chunk_pls]
                try:
                    if hasattr(self.embedder, 'embed_documents'):
                        vecs = await self.embedder.embed_documents(texts)
                    else:
                        for t in texts:
                            v = await self.embedder.embed_query(t)
                            vecs.append(v)
                except Exception as e:
                    logger.error(f"Embedding failed: {e}")
                    raise RuntimeError(f"Embedding failed: {e}")

            # Edges
            try:
                edges = build_edges_for_note(
                    note_id,
                    chunk_pls,
                    note_level_references=note_pl.get("references", []),
                    include_note_scope_refs=note_scope_refs
                )
            except TypeError:
                edges = build_edges_for_note(note_id, chunk_pls)

        except Exception as e:
            logger.error(f"Processing failed: {e}", exc_info=True)
            return {**result, "error": f"Processing failed: {str(e)}"}

        # 6. Upsert action
        try:
            if purge_before and has_old:
                self._purge_artifacts(note_id)

            n_name, n_pts = points_for_note(self.prefix, note_pl, None, self.dim)
            upsert_batch(self.client, n_name, n_pts)

            if chunk_pls and vecs:
                c_name, c_pts = points_for_chunks(self.prefix, chunk_pls, vecs)
                upsert_batch(self.client, c_name, c_pts)

            if edges:
                e_name, e_pts = points_for_edges(self.prefix, edges)
                upsert_batch(self.client, e_name, e_pts)

            return {
                "path": file_path,
                "status": "success",
                "changed": True,
                "note_id": note_id,
                "chunks_count": len(chunk_pls),
                "edges_count": len(edges)
            }
        except Exception as e:
            logger.error(f"Upsert failed: {e}", exc_info=True)
            return {**result, "error": f"DB Upsert failed: {e}"}

    # --- Qdrant helpers (restored) ---

    def _fetch_note_payload(self, note_id: str) -> Optional[dict]:
        from qdrant_client.http import models as rest
        col = f"{self.prefix}_notes"
        try:
            f = rest.Filter(must=[rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))])
            pts, _ = self.client.scroll(collection_name=col, scroll_filter=f, limit=1, with_payload=True)
            return pts[0].payload if pts else None
        except Exception:
            return None

    def _artifacts_missing(self, note_id: str) -> Tuple[bool, bool]:
        from qdrant_client.http import models as rest
        c_col = f"{self.prefix}_chunks"
        e_col = f"{self.prefix}_edges"
        try:
            f = rest.Filter(must=[rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))])
            c_pts, _ = self.client.scroll(collection_name=c_col, scroll_filter=f, limit=1)
            e_pts, _ = self.client.scroll(collection_name=e_col, scroll_filter=f, limit=1)
            return (not bool(c_pts)), (not bool(e_pts))
        except Exception:
            return True, True

    def _purge_artifacts(self, note_id: str):
        from qdrant_client.http import models as rest
        f = rest.Filter(must=[rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))])
        selector = rest.FilterSelector(filter=f)
        for suffix in ["chunks", "edges"]:
            try:
                self.client.delete(collection_name=f"{self.prefix}_{suffix}", points_selector=selector)
            except Exception:
                pass

    async def create_from_text(
        self,
        markdown_content: str,
        filename: str,
        vault_root: str,
        folder: str = "00_Inbox"
    ) -> Dict[str, Any]:
        """
        WP-11 persistence API entry point.
        """
        target_dir = os.path.join(vault_root, folder)
        os.makedirs(target_dir, exist_ok=True)

        file_path = os.path.join(target_dir, filename)

        try:
            # Robust write: ensure flush & sync
            with open(file_path, "w", encoding="utf-8") as f:
                f.write(markdown_content)
                f.flush()
                os.fsync(f.fileno())

            await asyncio.sleep(0.1)

            logger.info(f"Written file to {file_path}")
        except Exception as e:
            return {"status": "error", "error": f"Disk write failed: {str(e)}"}

        return await self.process_file(
            file_path=file_path,
            vault_root=vault_root,
            apply=True,
            force_replace=True,
            purge_before=True
        )
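The change-detection key in step 4 above is simply the three hash parameters joined with colons; a tiny sketch of the comparison with made-up payloads:

# Key scheme: "mode:source:normalize" (defaults: body:parsed:canonical).
key = "body:parsed:canonical"
old = {"hashes": {key: "abc123"}}
new = {"hashes": {key: "def456"}}
hash_changed = old["hashes"].get(key) != new["hashes"].get(key)  # True -> re-ingest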
app/core/ingestion/__init__.py (new file, 26 lines)
@@ -0,0 +1,26 @@
"""
FILE: app/core/ingestion/__init__.py
DESCRIPTION: Package entry point for ingestion. Exports the IngestionService.
AUDIT v2.13.10: Completes the modularization (WP-14).
             Breaks circular imports by using the neutral registry.py.
VERSION: 2.13.10
"""
# The IngestionService is the primary orchestrator for data import
from .ingestion_processor import IngestionService

# Helper utilities for JSON handling and configuration management.
# load_type_registry is re-exported here to preserve backwards compatibility,
# even though the implementation now lives in app.core.registry.
from .ingestion_utils import (
    extract_json_from_response,
    load_type_registry,
    resolve_note_type
)

# Public API of the package
__all__ = [
    "IngestionService",
    "extract_json_from_response",
    "load_type_registry",
    "resolve_note_type"
]
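Both import paths therefore keep working after the refactor; a quick illustration:

# Old call sites keep importing from the package root...
from app.core.ingestion import IngestionService, load_type_registry
# ...while new code may import the registry helper from its actual home.
from app.core.registry import load_type_registry as load_type_registry_direct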
app/core/ingestion/ingestion_chunk_payload.py (new file, 131 lines)
@@ -0,0 +1,131 @@
"""
FILE: app/core/ingestion/ingestion_chunk_payload.py
DESCRIPTION: Builds the JSON object for 'mindnet_chunks'.
Fix v2.4.3: Integrates the central registry (WP-14) for consistent defaults.
WP-24c v4.3.0: candidate_pool is carried over explicitly for chunk attribution.
VERSION: 2.4.4 (WP-24c v4.3.0)
STATUS: Active
"""
from __future__ import annotations
from typing import Any, Dict, List, Optional
import logging

# CRUCIAL FIX: import the neutral registry logic to avoid circular imports
from app.core.registry import load_type_registry

logger = logging.getLogger(__name__)

# ---------------------------------------------------------------------------
# Resolution helpers (audited)
# ---------------------------------------------------------------------------

def _as_list(x):
    """Guarantees list integrity for metadata such as tags."""
    if x is None:
        return []
    return x if isinstance(x, list) else [x]

def _resolve_val(note_type: str, reg: dict, key: str, default: Any) -> Any:
    """
    Hierarchical registry lookup: type-specific > global default.
    WP-14: Allows dynamic configuration via types.yaml.
    """
    types = reg.get("types", {})
    if isinstance(types, dict):
        t_cfg = types.get(note_type, {})
        if isinstance(t_cfg, dict):
            # Fallback for key variants (e.g. chunking_profile vs chunk_profile)
            val = t_cfg.get(key) or t_cfg.get(key.replace("ing", ""))
            if val is not None:
                return val

    defs = reg.get("defaults", {}) or reg.get("global", {})
    if isinstance(defs, dict):
        val = defs.get(key) or defs.get(key.replace("ing", ""))
        if val is not None:
            return val

    return default

# ---------------------------------------------------------------------------
# Main API
# ---------------------------------------------------------------------------

def make_chunk_payloads(note: Dict[str, Any], note_path: str, chunks_from_chunker: List[Any], **kwargs) -> List[Dict[str, Any]]:
    """
    Builds the payloads for the chunks, including audit resolution.
    Now uses the central registry for all fallbacks.
    """
    if isinstance(note, dict) and "frontmatter" in note:
        fm = note["frontmatter"]
    else:
        fm = note or {}

    # WP-14 fix: uses the registry passed in, or loads it globally
    reg = kwargs.get("types_cfg") or load_type_registry()

    note_type = fm.get("type") or "concept"
    title = fm.get("title") or fm.get("id") or "Untitled"
    tags = _as_list(fm.get("tags") or [])

    # Audit: resolution hierarchy (frontmatter > registry)
    cp = fm.get("chunking_profile") or fm.get("chunk_profile")
    if not cp:
        cp = _resolve_val(note_type, reg, "chunking_profile", "sliding_standard")

    rw = fm.get("retriever_weight")
    if rw is None:
        rw = _resolve_val(note_type, reg, "retriever_weight", 1.0)
    try:
        rw = float(rw)
    except Exception:
        rw = 1.0

    out: List[Dict[str, Any]] = []
    for idx, ch in enumerate(chunks_from_chunker):
        is_dict = isinstance(ch, dict)
        cid = getattr(ch, "id", None) if not is_dict else ch.get("id")
        nid = getattr(ch, "note_id", None) if not is_dict else ch.get("note_id")
        index = getattr(ch, "index", idx) if not is_dict else ch.get("index", idx)
        text = getattr(ch, "text", "") if not is_dict else ch.get("text", "")
        window = getattr(ch, "window", text) if not is_dict else ch.get("window", text)
        prev_id = getattr(ch, "neighbors_prev", None) if not is_dict else ch.get("neighbors_prev")
        next_id = getattr(ch, "neighbors_next", None) if not is_dict else ch.get("neighbors_next")
        section = getattr(ch, "section_title", "") if not is_dict else ch.get("section", "")
        # WP-24c v4.3.0: candidate_pool must be preserved for chunk attribution
        candidate_pool = getattr(ch, "candidate_pool", []) if not is_dict else ch.get("candidate_pool", [])

        pl: Dict[str, Any] = {
            "note_id": nid or fm.get("id"),
            "chunk_id": cid,
            "title": title,
            "index": int(index),
            "ord": int(index) + 1,
            "type": note_type,
            "tags": tags,
            "text": text,
            "window": window,
            "neighbors_prev": _as_list(prev_id),
            "neighbors_next": _as_list(next_id),
            "section": section,
            "path": note_path,
            "source_path": kwargs.get("file_path") or note_path,
            "retriever_weight": rw,
            "chunk_profile": cp,
            "candidate_pool": candidate_pool  # WP-24c v4.3.0: critical for chunk attribution
        }

        # Audit: cleanup pop (avoids redundant alias fields)
        for alias in ("chunk_num", "Chunk_Number"):
            pl.pop(alias, None)

        # WP-24c v4.4.0-DEBUG: interface 2 - transfer.
        # Log output immediately before the dictionary is returned
        pool_size = len(candidate_pool) if candidate_pool else 0
        pool_content = candidate_pool if candidate_pool else []
        explicit_callout_in_pool = [c for c in pool_content if isinstance(c, dict) and c.get("provenance") == "explicit:callout"]
        logger.debug(f"DEBUG-TRACER [Payload]: Chunk ID: {cid}, Index: {index}, Pool-Size: {pool_size}, Pool-Content: {pool_content}, Explicit-Callout-Count: {len(explicit_callout_in_pool)}, Has_Candidate_Pool_Key: {'candidate_pool' in pl}")
        if explicit_callout_in_pool:
            for ec in explicit_callout_in_pool:
                logger.debug(f"DEBUG-TRACER [Payload]: Explicit-Callout detail - Kind: {ec.get('kind')}, To: {ec.get('to')}, Provenance: {ec.get('provenance')}")

        out.append(pl)

    return out
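To make the resolution order above concrete, a small sketch with an invented registry; note how the key variant ("chunking_profile" vs. "chunk_profile") is bridged by the replace() fallback:

# Hypothetical registry: the type entry uses the short key variant.
reg = {
    "types": {"source": {"chunk_profile": "heading_split"}},
    "defaults": {"chunking_profile": "sliding_standard"},
}
assert _resolve_val("source", reg, "chunking_profile", "sliding_standard") == "heading_split"
assert _resolve_val("person", reg, "chunking_profile", "x") == "sliding_standard"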
app/core/ingestion/ingestion_db.py (new file, 116 lines)
@@ -0,0 +1,116 @@
"""
FILE: app/core/ingestion/ingestion_db.py
DESCRIPTION: Database interface for note metadata and artifact checks.
WP-14: Migrated to the central database infrastructure.
WP-24c: Integrates the authority check for point IDs,
        letting the processor distinguish between
        manual user authority and virtual symmetries.
VERSION: 2.2.0 (WP-24c: authority lookup integration)
STATUS: Active
"""
import logging
from typing import Optional, Tuple, List
from qdrant_client import QdrantClient
from qdrant_client.http import models as rest

# Import of the modularized naming logic to guarantee consistency
from app.core.database import collection_names

logger = logging.getLogger(__name__)

def fetch_note_payload(client: QdrantClient, prefix: str, note_id: str) -> Optional[dict]:
    """
    Fetches a note's metadata from Qdrant via the scroll API.
    Used primarily for change detection (hash comparison).
    """
    notes_col, _, _ = collection_names(prefix)
    try:
        f = rest.Filter(must=[
            rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))
        ])
        pts, _ = client.scroll(
            collection_name=notes_col,
            scroll_filter=f,
            limit=1,
            with_payload=True
        )
        return pts[0].payload if pts else None
    except Exception as e:
        logger.debug(f"Note {note_id} not found or error during fetch: {e}")
        return None

def artifacts_missing(client: QdrantClient, prefix: str, note_id: str) -> Tuple[bool, bool]:
    """
    Actively checks Qdrant for existing chunks and edges for a note.
    Returns (chunks_missing, edges_missing) as a boolean tuple.
    """
    _, chunks_col, edges_col = collection_names(prefix)
    try:
        # Filter for the note_id lookup
        f = rest.Filter(must=[
            rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))
        ])
        c_pts, _ = client.scroll(collection_name=chunks_col, scroll_filter=f, limit=1)
        e_pts, _ = client.scroll(collection_name=edges_col, scroll_filter=f, limit=1)
        return (not bool(c_pts)), (not bool(e_pts))
    except Exception as e:
        logger.error(f"Error checking artifacts for {note_id}: {e}")
        return True, True

def is_explicit_edge_present(client: QdrantClient, prefix: str, edge_id: str) -> bool:
    """
    WP-24c: Checks via point ID whether an explicit edge already exists.
    Used by the IngestionProcessor in phase 2 to prevent virtual
    symmetry edges from overwriting manual knowledge.

    Args:
        edge_id: The deterministically computed UUID of the edge.
    Returns:
        True if a physical edge (virtual=False) exists.
    """
    if not edge_id:
        return False

    _, _, edges_col = collection_names(prefix)
    try:
        # retrieve is the most efficient method for access by ID
        res = client.retrieve(
            collection_name=edges_col,
            ids=[edge_id],
            with_payload=True
        )

        if res and len(res) > 0:
            # We check the 'virtual' flag in the payload
            is_virtual = res[0].payload.get("virtual", False)
            if not is_virtual:
                return True  # It is an explicit user edge

        return False
    except Exception as e:
        logger.debug(f"Authority check failed for ID {edge_id}: {e}")
        return False

def purge_artifacts(client: QdrantClient, prefix: str, note_id: str):
    """
    Deletes a note's orphaned chunks and edges before a re-import.
    Ensures that content changes do not produce duplicates.
    """
    _, chunks_col, edges_col = collection_names(prefix)
    try:
        f = rest.Filter(must=[
            rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))
        ])
        # Delete chunks
        client.delete(
            collection_name=chunks_col,
            points_selector=rest.FilterSelector(filter=f)
        )
        # Delete edges
        client.delete(
            collection_name=edges_col,
            points_selector=rest.FilterSelector(filter=f)
        )
        logger.info(f"🧹 [PURGE] Local artifacts for '{note_id}' cleared.")
    except Exception as e:
        logger.error(f"❌ [PURGE ERROR] Failed to clear artifacts for {note_id}: {e}")
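A short usage sketch of the module-level helpers above; the client endpoint, prefix, and note ID are illustrative:

# Decide whether a note needs re-ingestion and clean up before rewriting.
client = QdrantClient(url="http://localhost:6333")  # hypothetical endpoint
payload = fetch_note_payload(client, "mindnet", "note-123")
c_missing, e_missing = artifacts_missing(client, "mindnet", "note-123")
if payload is None or c_missing or e_missing:
    purge_artifacts(client, "mindnet", "note-123")
    # ...rebuild and upsert chunks/edges here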
app/core/ingestion/ingestion_note_payload.py (new file, 176 lines)
@@ -0,0 +1,176 @@
"""
FILE: app/core/ingestion/ingestion_note_payload.py
DESCRIPTION: Builds the JSON object for mindnet_notes.
WP-14: Integrates the central registry.
WP-24c: Resolves edge_defaults dynamically from the graph schema.
VERSION: 2.5.0 (WP-24c: dynamic topology integration)
STATUS: Active
"""
from __future__ import annotations
from typing import Any, Dict, Tuple, Optional
import os
import json
import pathlib
import hashlib

# Import of the central registry logic
from app.core.registry import load_type_registry
# WP-24c: access to the dynamic graph schema
from app.services.edge_registry import registry as edge_registry

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def _as_dict(x) -> Dict[str, Any]:
    """Attempts to convert an object into a dict."""
    if isinstance(x, dict):
        return dict(x)
    out: Dict[str, Any] = {}
    for attr in ("frontmatter", "body", "id", "note_id", "title", "path", "tags", "type", "created", "modified", "date"):
        if hasattr(x, attr):
            val = getattr(x, attr)
            if val is not None:
                out[attr] = val
    if not out:
        out["raw"] = str(x)
    return out

def _ensure_list(x) -> list:
    """Guarantees string-list integrity."""
    if x is None:
        return []
    if isinstance(x, list):
        return [str(i) for i in x]
    if isinstance(x, (set, tuple)):
        return [str(i) for i in x]
    return [str(x)]

def _compute_hash(content: str) -> str:
    """SHA-256 hash computation."""
    if not content:
        return ""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def _get_hash_source_content(n: Dict[str, Any], mode: str) -> str:
    """
    Generates the hash input string based on the body or the metadata.
    Includes every profile parameter relevant to the decision.
    """
    body = str(n.get("body") or "").strip()
    if mode == "body":
        return body
    if mode == "full":
        fm = n.get("frontmatter") or {}
        meta_parts = []
        # All fields that influence chunking or retrieval
        keys = [
            "title", "type", "status", "tags",
            "chunking_profile", "chunk_profile",
            "retriever_weight", "split_level", "strict_heading_split"
        ]
        for k in sorted(keys):
            val = fm.get(k)
            if val is not None:
                meta_parts.append(f"{k}:{val}")
        return f"{'|'.join(meta_parts)}||{body}"
    return body

def _cfg_for_type(note_type: str, reg: dict) -> dict:
    """Extracts the type-specific config from the registry."""
    if not isinstance(reg, dict):
        return {}
    types = reg.get("types") if isinstance(reg.get("types"), dict) else reg
    return types.get(note_type, {}) if isinstance(types, dict) else {}

def _cfg_defaults(reg: dict) -> dict:
    """Extracts the global default values from the registry."""
    if not isinstance(reg, dict):
        return {}
    for key in ("defaults", "default", "global"):
        v = reg.get(key)
        if isinstance(v, dict):
            return v
    return {}

# ---------------------------------------------------------------------------
# Main API
# ---------------------------------------------------------------------------

def make_note_payload(note: Any, *args, **kwargs) -> Dict[str, Any]:
    """
    Builds the note payload including multi-hash and audit validation.
    WP-24c: Uses the EdgeRegistry for dynamic resolution of typical edges.
    """
    n = _as_dict(note)

    # Registry & context settings
    reg = kwargs.get("types_cfg") or load_type_registry()
    hash_source = kwargs.get("hash_source", "parsed")
    hash_normalize = kwargs.get("hash_normalize", "canonical")

    fm = n.get("frontmatter") or {}
    note_type = str(fm.get("type") or n.get("type") or "concept")

    cfg_type = _cfg_for_type(note_type, reg)
    cfg_def = _cfg_defaults(reg)
    ingest_cfg = reg.get("ingestion_settings", {})

    # --- retriever_weight audit ---
    default_rw = float(os.environ.get("MINDNET_DEFAULT_RETRIEVER_WEIGHT", 1.0))
    retriever_weight = fm.get("retriever_weight")
    if retriever_weight is None:
        retriever_weight = cfg_type.get("retriever_weight", cfg_def.get("retriever_weight", default_rw))
    try:
        retriever_weight = float(retriever_weight)
    except Exception:
        retriever_weight = default_rw

    # --- chunk_profile audit ---
    chunk_profile = fm.get("chunking_profile") or fm.get("chunk_profile")
    if chunk_profile is None:
        chunk_profile = cfg_type.get("chunking_profile") or cfg_type.get("chunk_profile")
    if chunk_profile is None:
        chunk_profile = ingest_cfg.get("default_chunk_profile", cfg_def.get("chunking_profile", "sliding_standard"))

    # --- WP-24c: dynamic edge_defaults ---
    # 1st priority: manual definition in the frontmatter
    edge_defaults = fm.get("edge_defaults")

    # 2nd priority: dynamic lookup of the 'typical edges' from the graph schema
    if edge_defaults is None:
        topology = edge_registry.get_topology_info(note_type, "any")
        edge_defaults = topology.get("typical", [])

    # 3rd fallback: empty list if no schema entry exists
    edge_defaults = _ensure_list(edge_defaults)

    # --- Base metadata ---
    note_id = n.get("note_id") or n.get("id") or fm.get("id")
    title = n.get("title") or fm.get("title") or ""
    path = n.get("path") or kwargs.get("file_path") or ""
    if isinstance(path, pathlib.Path):
        path = str(path)

    payload: Dict[str, Any] = {
        "note_id": note_id,
        "title": title,
        "type": note_type,
        "path": path,
        "retriever_weight": retriever_weight,
        "chunk_profile": chunk_profile,
        "edge_defaults": edge_defaults,
        "hashes": {}
    }

    # --- MULTI-HASH ---
    # Generates hashes for change detection (WP-15b)
    for mode in ["body", "full"]:
        content = _get_hash_source_content(n, mode)
        payload["hashes"][f"{mode}:{hash_source}:{hash_normalize}"] = _compute_hash(content)

    # Metadata enrichment (tags, aliases, timestamps)
    tags = fm.get("tags") or fm.get("keywords") or n.get("tags")
    if tags:
        payload["tags"] = _ensure_list(tags)

    aliases = fm.get("aliases")
    if aliases:
        payload["aliases"] = _ensure_list(aliases)

    for k in ("created", "modified", "date"):
        v = fm.get(k) or n.get(k)
        if v:
            payload[k] = str(v)

    if n.get("body"):
        payload["fulltext"] = str(n["body"])

    # Final JSON validation audit
    json.loads(json.dumps(payload, ensure_ascii=False))

    return payload
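The multi-hash block above derives one key per mode from the scheme "mode:source:normalize"; a small illustration of the hash inputs using the pure helpers, with an invented note dict:

# With hash_source="parsed" and hash_normalize="canonical" (the defaults),
# every note carries the keys "body:parsed:canonical" and "full:parsed:canonical".
n = {"frontmatter": {"title": "Demo", "type": "concept"}, "body": "Hello"}
assert _compute_hash(_get_hash_source_content(n, "body")) == _compute_hash("Hello")
# "full" mode prefixes the sorted, retrieval-relevant frontmatter fields:
assert _get_hash_source_content(n, "full") == "title:Demo|type:concept||Hello"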
app/core/ingestion/ingestion_processor.py (new file, 652 lines)
@@ -0,0 +1,652 @@
"""
FILE: app/core/ingestion/ingestion_processor.py
DESCRIPTION: The central IngestionService (orchestrator).
WP-25a: Integrates the Mixture of Experts (MoE) architecture.
WP-15b: Two-pass workflow with a global context cache.
WP-20/22: Cloud resilience and content lifecycle integrated.
AUDIT v4.2.4:
- GOLD STANDARD v4.2.4: Hash-based change detection (MINDNET_CHANGE_DETECTION_MODE).
- Restores the iterative reconciliation based on content hashes.
- Phase 2 uses exactly the same ID generation as phase 1 (incl. target_section).
- The authority check in phase 2 verifies with consistent ID generation.
- Eliminates duplicates caused by inconsistent ID generation (the "stone axe" problem).
VERSION: 4.2.4 (WP-24c: hash integrity)
STATUS: Active
"""
import logging
import asyncio
import os
import re
from typing import Dict, List, Optional, Tuple, Any

# Core module imports
from app.core.parser import (
    read_markdown, pre_scan_markdown, normalize_frontmatter,
    validate_required_frontmatter, NoteContext
)
from app.core.chunking import assemble_chunks
# WP-24c: import of the central identity logic
from app.core.graph.graph_utils import _mk_edge_id

# Database layer (modularized database infrastructure)
from app.core.database.qdrant import QdrantConfig, get_client, ensure_collections, ensure_payload_indexes
from app.core.database.qdrant_points import points_for_chunks, points_for_note, points_for_edges, upsert_batch
from qdrant_client.http import models as rest

# Services
from app.services.embeddings_client import EmbeddingsClient
from app.services.edge_registry import registry as edge_registry
from app.services.llm_service import LLMService

# Package-internal imports (refactoring WP-14)
from .ingestion_utils import load_type_registry, resolve_note_type, get_chunk_config_by_profile
from .ingestion_db import fetch_note_payload, artifacts_missing, purge_artifacts, is_explicit_edge_present
from .ingestion_validation import validate_edge_candidate
from .ingestion_note_payload import make_note_payload
from .ingestion_chunk_payload import make_chunk_payloads

# Fallback for edges (structural linking)
try:
    from app.core.graph.graph_derive_edges import build_edges_for_note
except ImportError:
    def build_edges_for_note(*args, **kwargs):
        return []

logger = logging.getLogger(__name__)

class IngestionService:
    def __init__(self, collection_prefix: str = None):
        """Initializes the service and uses the new database infrastructure."""
        from app.config import get_settings
        self.settings = get_settings()

        # --- LOGGING CLEANUP ---
        # Suppresses library noise while keeping substantive service logs
        for lib in ["httpx", "httpcore", "qdrant_client", "urllib3", "openai"]:
            logging.getLogger(lib).setLevel(logging.WARNING)

        self.prefix = collection_prefix or self.settings.COLLECTION_PREFIX
        self.cfg = QdrantConfig.from_env()
        self.cfg.prefix = self.prefix
        self.client = get_client(self.cfg)

        self.registry = load_type_registry()
        self.embedder = EmbeddingsClient()
        self.llm = LLMService()

        # WP-25a: resolves the dimension via the embedding profile (MoE)
        embed_cfg = self.llm.profiles.get("embedding_expert", {})
        self.dim = embed_cfg.get("dimensions") or self.settings.VECTOR_SIZE

        self.active_hash_mode = self.settings.CHANGE_DETECTION_MODE

        # WP-15b: context memory for ID resolution (global cache)
        self.batch_cache: Dict[str, NoteContext] = {}

        # WP-24c: buffer for phase 2 (symmetry injection at the end of the whole import)
        self.symmetry_buffer: List[Dict[str, Any]] = []

        try:
            ensure_collections(self.client, self.prefix, self.dim)
            ensure_payload_indexes(self.client, self.prefix)
        except Exception as e:
            logger.warning(f"DB initialization warning: {e}")
    def _log_id_collision(
        self,
        note_id: str,
        existing_path: str,
        conflicting_path: str,
        action: str = "ERROR"
    ) -> None:
        """
        WP-24c v4.5.10: Logs ID collisions to a dedicated log file.

        Writes every ID collision to logs/id_collisions.log for manual analysis.
        Format: JSONL (one collision per line) with all relevant metadata.

        Args:
            note_id: The duplicated note_id
            existing_path: Path of the file already present
            conflicting_path: Path of the colliding file
            action: Chosen action (e.g. "ERROR", "SKIPPED")
        """
        import json
        from datetime import datetime

        # Create the log directory if it does not exist
        log_dir = "logs"
        if not os.path.exists(log_dir):
            os.makedirs(log_dir)

        log_file = os.path.join(log_dir, "id_collisions.log")

        # Build the log entry with all relevant information
        log_entry = {
            "timestamp": datetime.now().isoformat(),
            "note_id": note_id,
            "existing_file": {
                "path": existing_path,
                "filename": os.path.basename(existing_path) if existing_path else None
            },
            "conflicting_file": {
                "path": conflicting_path,
                "filename": os.path.basename(conflicting_path) if conflicting_path else None
            },
            "action": action,
            "collection_prefix": self.prefix
        }

        # Write as JSONL (one line per entry)
        try:
            with open(log_file, "a", encoding="utf-8") as f:
                f.write(json.dumps(log_entry, ensure_ascii=False) + "\n")
        except Exception as e:
            logger.warning(f"⚠️ Could not write ID collision to the log file: {e}")
    def _persist_rejected_edges(self, note_id: str, rejected_edges: List[Dict[str, Any]]) -> None:
        """
        WP-24c v4.5.9: Persists rejected edges for audit purposes.

        Writes rejected_edges to a JSONL file in the _system folder or logs/rejected_edges.log.
        This enables analysis of rejection reasons and improvement of the validation logic.

        Args:
            note_id: ID of the note the rejected edges belong to
            rejected_edges: List of rejected edge dicts
        """
        if not rejected_edges:
            return

        import json
        import os
        from datetime import datetime

        # WP-24c v4.5.9: create the log directory if it does not exist
        log_dir = "logs"
        if not os.path.exists(log_dir):
            os.makedirs(log_dir)

        log_file = os.path.join(log_dir, "rejected_edges.log")

        # WP-24c v4.5.9: write as JSONL (one edge per line)
        try:
            with open(log_file, "a", encoding="utf-8") as f:
                for edge in rejected_edges:
                    log_entry = {
                        "timestamp": datetime.now().isoformat(),
                        "note_id": note_id,
                        "edge": {
                            "kind": edge.get("kind", "unknown"),
                            "source_id": edge.get("source_id", "unknown"),
                            "target_id": edge.get("target_id") or edge.get("to", "unknown"),
                            "scope": edge.get("scope", "unknown"),
                            "provenance": edge.get("provenance", "unknown"),
                            "rule_id": edge.get("rule_id", "unknown"),
                            "confidence": edge.get("confidence", 0.0),
                            "target_section": edge.get("target_section")
                        }
                    }
                    f.write(json.dumps(log_entry, ensure_ascii=False) + "\n")

            logger.debug(f"📝 [AUDIT] Stored {len(rejected_edges)} rejected edges for '{note_id}' in {log_file}")
        except Exception as e:
            logger.error(f"❌ [AUDIT] Error while storing rejected_edges: {e}")

    def _is_valid_id(self, text: Optional[str]) -> bool:
        """WP-24c: Checks IDs for domain validity (ghost-ID protection)."""
        if not text or not isinstance(text, str) or len(text.strip()) < 2:
            return False
        blacklisted = {"none", "unknown", "insight", "source", "task", "project", "person", "concept"}
        if text.lower().strip() in blacklisted:
            return False
        return True
    async def run_batch(self, file_paths: List[str], vault_root: str) -> Dict[str, Any]:
        """
        WP-15b: Phase 1 of the two-pass workflow.
        Processes batches and writes ONLY user authority (explicit edges).
        """
        self.batch_cache.clear()
        logger.info(f"--- 🔍 START BATCH PHASE 1 ({len(file_paths)} files) ---")

        # Step 1: pre-scan (fill the context cache)
        for path in file_paths:
            try:
                ctx = pre_scan_markdown(path, registry=self.registry)
                if ctx:
                    self.batch_cache[ctx.note_id] = ctx
                    self.batch_cache[ctx.title] = ctx
                    self.batch_cache[os.path.splitext(os.path.basename(path))[0]] = ctx
            except Exception as e:
                logger.warning(f" ⚠️ Pre-scan failed for {path}: {e}")

        # Step 2: batch processing (authority only)
        processed_count = 0
        success_count = 0
        for p in file_paths:
            processed_count += 1
            res = await self.process_file(p, vault_root, apply=True, purge_before=True)
            if res.get("status") == "success":
                success_count += 1

        logger.info(f"--- ✅ Batch phase 1 finished ({success_count}/{processed_count}) ---")
        return {
            "status": "success",
            "processed": processed_count,
            "success": success_count,
            "buffered_symmetries": len(self.symmetry_buffer)
        }
    async def commit_vault_symmetries(self) -> Dict[str, Any]:
        """
        WP-24c: Runs PHASE 2 (global symmetry injection).
        Called at the end of the whole import.
        """
        if not self.symmetry_buffer:
            return {"status": "skipped", "reason": "buffer_empty"}

        logger.info(f"🔄 PHASE 2: Validating {len(self.symmetry_buffer)} symmetries against the live DB...")
        final_virtuals = []
        for v_edge in self.symmetry_buffer:
            # WP-24c v4.1.0: correct extraction of the identity parameters
            src = v_edge.get("source_id") or v_edge.get("note_id")  # source_id takes priority
            tgt = v_edge.get("target_id")
            kind = v_edge.get("kind")
            scope = v_edge.get("scope", "note")
            target_section = v_edge.get("target_section")  # WP-24c v4.1.0: honor target_section

            if not all([src, tgt, kind]):
                continue

            # WP-24c v4.1.0: uses the centralized ID logic from graph_utils.
            # GOLD STANDARD v4.1.0: ID generation must be exactly in sync with phase 1
            # - If target_section is present, it must flow into the ID
            # - This guarantees that the authority check works correctly
            try:
                v_id = _mk_edge_id(kind, src, tgt, scope, target_section=target_section)
            except ValueError:
                continue

            # AUTHORITY CHECK: only write if no manual edge exists.
            # Checks with exactly the same ID that was used in phase 1 (incl. target_section)
            if not is_explicit_edge_present(self.client, self.prefix, v_id):
                final_virtuals.append(v_edge)
                section_info = f" (section: {target_section})" if target_section else ""
                logger.info(f" 🔄 [SYMMETRY] Add inverse: {src} --({kind})--> {tgt}{section_info}")
            else:
                logger.info(f" 🛡️ [PROTECTED] Manual edge found. Symmetry for {kind} suppressed.")

        if final_virtuals:
            col, pts = points_for_edges(self.prefix, final_virtuals)
            upsert_batch(self.client, col, pts, wait=True)

        count = len(final_virtuals)
        self.symmetry_buffer.clear()
        return {"status": "success", "added": count}
    async def process_file(self, file_path: str, vault_root: str, **kwargs) -> Dict[str, Any]:
        """
        Transforms a Markdown file (phase 1).
        Writes notes/chunks/explicit edges immediately.
        """
        apply = kwargs.get("apply", False)
        force_replace = kwargs.get("force_replace", False)
        purge_before = kwargs.get("purge_before", False)

        result = {"path": file_path, "status": "skipped", "changed": False, "error": None}

        try:
            # Folder filter (.trash / .obsidian)
            if ".trash" in file_path or any(part.startswith('.') for part in file_path.split(os.sep)):
                return {**result, "status": "skipped", "reason": "ignored_folder"}

            # WP-24c v4.5.9: path normalization for consistent hash checks.
            # Normalize file_path to an absolute path for consistent processing
            normalized_file_path = os.path.abspath(file_path) if not os.path.isabs(file_path) else file_path

            parsed = read_markdown(normalized_file_path)
            if not parsed:
                return {**result, "error": "Empty file"}
            fm = normalize_frontmatter(parsed.frontmatter)
            validate_required_frontmatter(fm)

            note_pl = make_note_payload(parsed, vault_root=vault_root, file_path=normalized_file_path, types_cfg=self.registry)
            note_id = note_pl.get("note_id")

            if not note_id:
                return {**result, "status": "error", "error": "missing_id"}

            logger.info(f"📄 Processing: '{note_id}' | Path: {normalized_file_path} | Title: {note_pl.get('title', 'N/A')}")

            # WP-24c v4.5.9: strict change detection (hash-based content check).
            # Checks the hash BEFORE processing to avoid redundant ingestion
            old_payload = None if force_replace else fetch_note_payload(self.client, self.prefix, note_id)

            # WP-24c v4.5.10: check for ID collisions (two files with the same note_id)
            if old_payload and not force_replace:
                old_path = old_payload.get("path", "")
                if old_path and old_path != normalized_file_path:
                    # ID collision detected: two different files share the same note_id.
                    # Log the collision to the dedicated log file
                    self._log_id_collision(
                        note_id=note_id,
                        existing_path=old_path,
                        conflicting_path=normalized_file_path,
                        action="ERROR"
                    )
                    logger.error(
                        f"❌ [ID COLLISION] Critical error: the note_id '{note_id}' is already used by another file!\n"
                        f"   Already present: '{old_path}'\n"
                        f"   Conflicts with: '{normalized_file_path}'\n"
                        f"   Resolution: please change the 'id' in the frontmatter of one of the two files to guarantee a unique ID.\n"
                        f"   Details were stored in logs/id_collisions.log."
                    )
                    return {**result, "status": "error", "error": "id_collision", "note_id": note_id, "existing_path": old_path, "conflicting_path": normalized_file_path}

            logger.debug(f"🔍 [CHANGE-DETECTION] Start for '{note_id}': force_replace={force_replace}, old_payload={old_payload is not None}")

            content_changed = True
            hash_match = False
            if old_payload and not force_replace:
                # Uses the accuracy level controlled via MINDNET_CHANGE_DETECTION_MODE.
                # Mapping: 'full' -> 'full:parsed:canonical', 'body' -> 'body:parsed:canonical'
                h_key = f"{self.active_hash_mode or 'full'}:parsed:canonical"
                new_h = note_pl.get("hashes", {}).get(h_key)
                old_h = old_payload.get("hashes", {}).get(h_key)

                # WP-24c v4.5.9-DEBUG: detailed hash diagnostics (INFO level)
                logger.info(f"🔍 [CHANGE-DETECTION] Hash comparison for '{note_id}':")
                logger.debug(f"   -> Hash key: '{h_key}'")
                logger.debug(f"   -> Active hash mode: '{self.active_hash_mode or 'full'}'")
                logger.debug(f"   -> New hash present: {bool(new_h)}")
                logger.debug(f"   -> Old hash present: {bool(old_h)}")
                if new_h:
                    logger.debug(f"   -> New hash (first 32 chars): {new_h[:32]}...")
                if old_h:
                    logger.debug(f"   -> Old hash (first 32 chars): {old_h[:32]}...")
                logger.debug(f"   -> Available hash keys in new: {list(note_pl.get('hashes', {}).keys())}")
                logger.debug(f"   -> Available hash keys in old: {list(old_payload.get('hashes', {}).keys())}")

                if new_h and old_h:
                    hash_match = (new_h == old_h)
                    if hash_match:
                        content_changed = False
                        logger.info(f"🔍 [CHANGE-DETECTION] ✅ Hash identical for '{note_id}': {h_key} = {new_h[:16]}...")
                    else:
                        logger.warning(f"🔍 [CHANGE-DETECTION] ❌ Hash changed for '{note_id}': old={old_h[:16]}..., new={new_h[:16]}...")
                        # Find the first differing position
                        diff_pos = next((i for i, (a, b) in enumerate(zip(new_h, old_h)) if a != b), None)
                        if diff_pos is not None:
                            logger.debug(f"   -> Hash difference: first differing position: {diff_pos}")
                        else:
                            logger.debug(f"   -> Hash difference: lengths differ (new={len(new_h)}, old={len(old_h)})")

                        # WP-24c v4.5.10: log the hash input for diagnostics (DEBUG level).
                        # IMPORTANT: _get_hash_source_content needs a dictionary, not the ParsedNote object!
                        from app.core.ingestion.ingestion_note_payload import _get_hash_source_content, _as_dict
                        hash_mode = self.active_hash_mode or 'full'
                        # Convert parsed to a dictionary for _get_hash_source_content
                        parsed_dict = _as_dict(parsed)
                        hash_input = _get_hash_source_content(parsed_dict, hash_mode)
                        logger.debug(f"   -> Hash input (first 200 chars): {hash_input[:200]}...")
                        logger.debug(f"   -> Hash input length: {len(hash_input)}")

                        # WP-24c v4.5.10: also compare body length and frontmatter (DEBUG level).
                        # Use parsed.body instead of note_pl.get("body")
                        new_body = str(getattr(parsed, "body", "") or "").strip()
                        old_body = str(old_payload.get("body", "")).strip() if old_payload else ""
                        logger.debug(f"   -> Body length: new={len(new_body)}, old={len(old_body)}")
                        if len(new_body) != len(old_body):
                            logger.debug(f"   -> ⚠️ Body lengths differ! Possible cause: parsing differences")

                        # Use parsed.frontmatter instead of note_pl.get("frontmatter")
                        new_fm = getattr(parsed, "frontmatter", {}) or {}
                        old_fm = old_payload.get("frontmatter", {}) if old_payload else {}
                        logger.debug(f"   -> Frontmatter keys: new={sorted(new_fm.keys())}, old={sorted(old_fm.keys())}")
                        # Check the relevant frontmatter fields
                        relevant_keys = ["title", "type", "status", "tags", "chunking_profile", "chunk_profile", "retriever_weight", "split_level", "strict_heading_split"]
                        for key in relevant_keys:
                            new_val = new_fm.get(key) if isinstance(new_fm, dict) else getattr(new_fm, key, None)
                            old_val = old_fm.get(key) if isinstance(old_fm, dict) else None
                            if new_val != old_val:
                                logger.debug(f"   -> ⚠️ Frontmatter '{key}' differs: new={new_val}, old={old_val}")
                else:
                    # WP-24c v4.5.10: if a hash is missing, treat the note as changed (safety)
                    logger.debug(f"⚠️ [CHANGE-DETECTION] Hash missing for '{note_id}': new_h={bool(new_h)}, old_h={bool(old_h)}")
                    logger.debug(f"   -> Reason: treated as 'changed' because hash values are missing")
            else:
                if force_replace:
                    logger.debug(f"🔍 [CHANGE-DETECTION] '{note_id}': force_replace=True -> skipping hash check")
                elif not old_payload:
                    logger.debug(f"🔍 [CHANGE-DETECTION] '{note_id}': ⚠️ No old payload found -> first processing or deleted")

            # WP-24c v4.5.9: strict logic - skip entirely if the hash is identical.
            # IMPORTANT: artifact check AFTER the hash check, since purge_before may delete the artifacts.
            # If the hash is identical, the artifacts either exist or are being rewritten right now
            if not force_replace and hash_match and old_payload:
                # WP-24c v4.5.9: hash identical -> skip entirely (even if artifacts are missing after a PURGE).
                # The hash is the authoritative source for "content unchanged".
                # Artifacts are recreated on the next regular import if needed
                logger.info(f"⏭️ [SKIP] '{note_id}' unchanged (hash identical - skipping entirely, even if artifacts are missing)")
                return {**result, "status": "unchanged", "note_id": note_id, "reason": "hash_identical"}
            elif not force_replace and old_payload and not hash_match:
                # WP-24c v4.5.10: hash changed - allow processing (DEBUG level)
                logger.debug(f"🔍 [CHANGE-DETECTION] '{note_id}': hash changed -> allowing processing")

            # WP-24c v4.5.10: hash changed or no old payload - check artifacts for normal processing
            c_miss, e_miss = artifacts_missing(self.client, self.prefix, note_id)
            logger.debug(f"🔍 [CHANGE-DETECTION] '{note_id}': artifact check: c_miss={c_miss}, e_miss={e_miss}")

            if not apply:
                return {**result, "status": "dry-run", "changed": True, "note_id": note_id}

            # Chunks & MoE
            profile = note_pl.get("chunk_profile", "sliding_standard")
            note_type = resolve_note_type(self.registry, fm.get("type"))
            chunk_cfg = get_chunk_config_by_profile(self.registry, profile, note_type)
            enable_smart = chunk_cfg.get("enable_smart_edge_allocation", False)
            chunks = await assemble_chunks(note_id, getattr(parsed, "body", ""), note_type, config=chunk_cfg)

            # WP-24c v4.5.8: validation removed from the chunk loop.
            # All candidate: edges are now validated in phase 3 (after build_edges_for_note).
            # This ensures that note-scope edges from LLM validation zones are checked as well.
            # The candidate_pool is passed through unchanged so build_edges_for_note sees every edge.
            # WP-24c v4.5.8: only ID validation remains here (ghost-ID protection), no LLM validation anymore
            for ch in chunks:
                new_pool = []
                for cand in getattr(ch, "candidate_pool", []):
                    # WP-24c v4.5.8: ID validation only (ghost-ID protection)
                    t_id = cand.get('target_id') or cand.get('to') or cand.get('note_id')
                    if not self._is_valid_id(t_id):
                        continue
                    # WP-24c v4.5.8: every edge passes through - LLM validation happens in phase 3
                    new_pool.append(cand)
                ch.candidate_pool = new_pool

            # chunk_pls = make_chunk_payloads(fm, note_pl["path"], chunks, file_path=file_path, types_cfg=self.registry)
            # v4.2.8 fix C: pass the profile name explicitly for the chunk payload
            chunk_pls = make_chunk_payloads(fm, note_pl["path"], chunks, file_path=file_path, types_cfg=self.registry, chunk_profile=profile)

            vecs = await self.embedder.embed_documents([c.get("window") or "" for c in chunk_pls]) if chunk_pls else []
|
||||
|
||||
# WP-24c v4.2.0: Kanten-Extraktion mit Note-Scope Zonen Support
|
||||
# Übergabe des Original-Markdown-Texts für Note-Scope Zonen-Extraktion
|
||||
markdown_body = getattr(parsed, "body", "")
|
||||
raw_edges = build_edges_for_note(
|
||||
note_id,
|
||||
chunk_pls,
|
||||
note_level_references=note_pl.get("references", []),
|
||||
markdown_body=markdown_body
|
||||
)
|
||||
|
||||
# WP-24c v4.5.8: Phase 3 - Finaler Validierungs-Gate für candidate: Kanten
|
||||
# Prüfe alle Kanten mit rule_id ODER provenance beginnend mit "candidate:"
|
||||
# Dies schließt alle Kandidaten ein, unabhängig von ihrer Herkunft (global_pool, explicit:callout, etc.)
|
||||
|
||||
# WP-24c v4.5.8: Kontext-Optimierung für Note-Scope Kanten
|
||||
# Aggregiere den gesamten Note-Text für bessere Validierungs-Entscheidungen
|
||||
note_text = markdown_body or " ".join([c.get("text", "") or c.get("window", "") for c in chunk_pls])
|
||||
# Erstelle eine Note-Summary aus den wichtigsten Chunks (für bessere Kontext-Qualität)
|
||||
note_summary = " ".join([c.get("window", "") or c.get("text", "") for c in chunk_pls[:5]]) # Top 5 Chunks
|
||||
|
||||
validated_edges = []
|
||||
rejected_edges = []
|
||||
|
||||
for e in raw_edges:
|
||||
rule_id = e.get("rule_id", "")
|
||||
provenance = e.get("provenance", "")
|
||||
|
||||
# WP-24c v4.5.8: Trigger-Kriterium - rule_id ODER provenance beginnt mit "candidate:"
|
||||
is_candidate = (rule_id and rule_id.startswith("candidate:")) or (provenance and provenance.startswith("candidate:"))
|
||||
|
||||
if is_candidate:
|
||||
# Extrahiere target_id für Validierung (aus verschiedenen möglichen Feldern)
|
||||
target_id = e.get("target_id") or e.get("to")
|
||||
if not target_id:
|
||||
# Fallback: Versuche aus Payload zu extrahieren
|
||||
payload = e.get("extra", {}) if isinstance(e.get("extra"), dict) else {}
|
||||
target_id = payload.get("target_id") or payload.get("to")
|
||||
|
||||
if not target_id:
|
||||
logger.warning(f"⚠️ [PHASE 3] Keine target_id gefunden für Kante: {e}")
|
||||
rejected_edges.append(e)
|
||||
continue
|
||||
|
||||
kind = e.get("kind", "related_to")
|
||||
source_id = e.get("source_id", note_id)
|
||||
scope = e.get("scope", "chunk")
|
||||
|
||||
# WP-24c v4.5.8: Kontext-Optimierung für Note-Scope Kanten
|
||||
# Für scope: note verwende Note-Summary oder gesamten Note-Text
|
||||
# Für scope: chunk verwende den spezifischen Chunk-Text (falls verfügbar)
|
||||
if scope == "note":
|
||||
validation_text = note_summary or note_text
|
||||
context_info = "Note-Scope (aggregiert)"
|
||||
else:
|
||||
# Für Chunk-Scope: Versuche Chunk-Text zu finden, sonst Note-Text
|
||||
chunk_id = e.get("chunk_id") or source_id
|
||||
chunk_text = None
|
||||
for ch in chunk_pls:
|
||||
if ch.get("chunk_id") == chunk_id or ch.get("id") == chunk_id:
|
||||
chunk_text = ch.get("text") or ch.get("window", "")
|
||||
break
|
||||
validation_text = chunk_text or note_text
|
||||
context_info = f"Chunk-Scope ({chunk_id})"
|
||||
|
||||
# Erstelle Edge-Dict für Validierung (kompatibel mit validate_edge_candidate)
|
||||
edge_for_validation = {
|
||||
"kind": kind,
|
||||
"to": target_id, # validate_edge_candidate erwartet "to"
|
||||
"target_id": target_id,
|
||||
"provenance": provenance if not provenance.startswith("candidate:") else provenance.replace("candidate:", "").strip(),
|
||||
"confidence": e.get("confidence", 0.9)
|
||||
}
|
||||
|
||||
logger.info(f"🚀 [PHASE 3] Validierung: {source_id} -> {target_id} ({kind}) | Scope: {scope} | Kontext: {context_info}")
|
||||
|
||||
# WP-24c v4.5.8: Validiere gegen optimierten Kontext
|
||||
is_valid = await validate_edge_candidate(
|
||||
chunk_text=validation_text,
|
||||
edge=edge_for_validation,
|
||||
batch_cache=self.batch_cache,
|
||||
llm_service=self.llm,
|
||||
profile_name="ingest_validator"
|
||||
)
|
||||
|
||||
if is_valid:
|
||||
# WP-24c v4.5.8: Entferne candidate: Präfix (Kante wird zum Fakt)
|
||||
new_rule_id = rule_id.replace("candidate:", "").strip() if rule_id else provenance.replace("candidate:", "").strip() if provenance.startswith("candidate:") else provenance
|
||||
if not new_rule_id:
|
||||
new_rule_id = e.get("provenance", "explicit").replace("candidate:", "").strip()
|
||||
|
||||
# Aktualisiere rule_id und provenance im Edge
|
||||
e["rule_id"] = new_rule_id
|
||||
if provenance.startswith("candidate:"):
|
||||
e["provenance"] = provenance.replace("candidate:", "").strip()
|
||||
|
||||
validated_edges.append(e)
|
||||
logger.info(f"✅ [PHASE 3] VERIFIED: {source_id} -> {target_id} ({kind}) | rule_id: {new_rule_id}")
|
||||
else:
|
||||
# WP-24c v4.5.8: Kante ablehnen (nicht zu validated_edges hinzufügen)
|
||||
rejected_edges.append(e)
|
||||
logger.info(f"🚫 [PHASE 3] REJECTED: {source_id} -> {target_id} ({kind})")
|
||||
else:
|
||||
# WP-24c v4.5.8: Keine candidate: Kante -> direkt übernehmen
|
||||
validated_edges.append(e)
|
||||
|
||||
# WP-24c v4.5.8: Phase 3 abgeschlossen - rejected_edges werden NICHT weiterverarbeitet
|
||||
# WP-24c v4.5.9: Persistierung von rejected_edges für Audit-Zwecke
|
||||
if rejected_edges:
|
||||
logger.info(f"🚫 [PHASE 3] {len(rejected_edges)} Kanten abgelehnt und werden nicht in die DB geschrieben")
|
||||
self._persist_rejected_edges(note_id, rejected_edges)
|
||||
|
||||
# WP-24c v4.5.8: Verwende validated_edges statt raw_edges für weitere Verarbeitung
|
||||
# Nur verified Kanten (ohne candidate: Präfix) werden in Phase 2 (Symmetrie) verarbeitet
|
||||
explicit_edges = []
|
||||
for e in validated_edges:
|
||||
t_raw = e.get("target_id")
|
||||
t_ctx = self.batch_cache.get(t_raw)
|
||||
t_id = t_ctx.note_id if t_ctx else t_raw
|
||||
|
||||
if not self._is_valid_id(t_id): continue
|
||||
|
||||
resolved_kind = edge_registry.resolve(e.get("kind", "related_to"), provenance="explicit")
|
||||
# WP-24c v4.1.0: target_section aus dem Edge-Payload extrahieren und beibehalten
|
||||
target_section = e.get("target_section")
|
||||
e.update({
|
||||
"kind": resolved_kind,
|
||||
"relation": resolved_kind, # Konsistenz: kind und relation identisch
|
||||
"target_id": t_id,
|
||||
"source_id": e.get("source_id") or note_id, # Sicherstellen, dass source_id gesetzt ist
|
||||
"origin_note_id": note_id,
|
||||
"virtual": False
|
||||
})
|
||||
explicit_edges.append(e)
|
||||
|
||||
# Symmetrie puffern (WP-24c v4.1.0: Korrekte Symmetrie-Integrität)
|
||||
inv_kind = edge_registry.get_inverse(resolved_kind)
|
||||
if inv_kind and t_id != note_id:
|
||||
# GOLD-STANDARD v4.1.0: Symmetrie-Integrität
|
||||
v_edge = {
|
||||
"note_id": t_id, # Besitzer-Wechsel: Symmetrie gehört zum Link-Ziel
|
||||
"source_id": t_id, # Neue Quelle ist das Link-Ziel
|
||||
"target_id": note_id, # Ziel ist die ursprüngliche Quelle
|
||||
"kind": inv_kind, # Inverser Kanten-Typ
|
||||
"relation": inv_kind, # Konsistenz: kind und relation identisch
|
||||
"scope": "note", # Symmetrien sind immer Note-Level
|
||||
"virtual": True,
|
||||
"origin_note_id": note_id, # Tracking: Woher kommt die Symmetrie
|
||||
}
|
||||
# target_section beibehalten, falls vorhanden (für Section-Links)
|
||||
if target_section:
|
||||
v_edge["target_section"] = target_section
|
||||
self.symmetry_buffer.append(v_edge)
|
||||
|
||||
# DB Upsert
|
||||
if purge_before and old_payload: purge_artifacts(self.client, self.prefix, note_id)
|
||||
|
||||
col_n, pts_n = points_for_note(self.prefix, note_pl, None, self.dim)
|
||||
upsert_batch(self.client, col_n, pts_n, wait=True)
|
||||
|
||||
if chunk_pls and vecs:
|
||||
col_c, pts_c = points_for_chunks(self.prefix, chunk_pls, vecs)
|
||||
upsert_batch(self.client, col_c, pts_c, wait=True)
|
||||
|
||||
if explicit_edges:
|
||||
col_e, pts_e = points_for_edges(self.prefix, explicit_edges)
|
||||
upsert_batch(self.client, col_e, pts_e, wait=True)
|
||||
|
||||
logger.info(f" ✨ Phase 1 fertig: {len(explicit_edges)} explizite Kanten für '{note_id}'.")
|
||||
return {"status": "success", "note_id": note_id}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Fehler bei {file_path}: {e}", exc_info=True)
|
||||
return {**result, "status": "error", "error": str(e)}
|
||||
|
||||
async def create_from_text(self, markdown_content: str, filename: str, vault_root: str, folder: str = "00_Inbox") -> Dict[str, Any]:
|
||||
"""Erstellt eine Note aus einem Textstream."""
|
||||
target_path = os.path.join(vault_root, folder, filename)
|
||||
os.makedirs(os.path.dirname(target_path), exist_ok=True)
|
||||
with open(target_path, "w", encoding="utf-8") as f:
|
||||
f.write(markdown_content)
|
||||
await asyncio.sleep(0.1)
|
||||
return await self.process_file(file_path=target_path, vault_root=vault_root, apply=True, force_replace=True, purge_before=True)
|
||||
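The skip decision above boils down to a single hash comparison per note. A minimal sketch of that contract, assuming simplified payloads (the real payloads carry a "hashes" dict keyed by the active hash mode):

    h_key = "full"  # illustrative hash-mode key
    old_payload = {"hashes": {h_key: "ab12cd34"}}
    new_payload = {"hashes": {h_key: "ab12cd34"}}
    old_h = old_payload["hashes"].get(h_key)
    new_h = new_payload["hashes"].get(h_key)
    if new_h and old_h and new_h == old_h:
        decision = {"status": "unchanged", "reason": "hash_identical"}  # skip re-ingestion
    else:
        decision = {"status": "process"}  # missing or differing hash -> treat as changed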
app/core/ingestion/ingestion_utils.py (new file, 71 lines)
@@ -0,0 +1,71 @@
"""
|
||||
FILE: app/core/ingestion/ingestion_utils.py
|
||||
DESCRIPTION: Hilfswerkzeuge für JSON-Recovery, Typ-Registry und Konfigurations-Lookups.
|
||||
AUDIT v2.13.9: Behebung des Circular Imports durch Nutzung der app.core.registry.
|
||||
"""
|
||||
import json
|
||||
import re
|
||||
from typing import Any, Optional, Dict
|
||||
|
||||
# ENTSCHEIDENDER FIX: Import der Basis-Logik aus dem neutralen Registry-Modul.
|
||||
# Dies bricht den Zirkelbezug auf, da dieses Modul keine Services mehr importiert.
|
||||
from app.core.registry import load_type_registry, clean_llm_text
|
||||
|
||||
def extract_json_from_response(text: str, registry: Optional[dict] = None) -> Any:
|
||||
"""
|
||||
Extrahiert JSON-Daten und bereinigt LLM-Steuerzeichen.
|
||||
WP-14: Nutzt nun die zentrale clean_llm_text Funktion aus app.core.registry.
|
||||
"""
|
||||
if not text:
|
||||
return []
|
||||
|
||||
# 1. Text zentral bereinigen via neutralem Modul
|
||||
clean = clean_llm_text(text, registry)
|
||||
|
||||
# 2. Markdown-Code-Blöcke extrahieren
|
||||
match = re.search(r"```(?:json)?\s*(.*?)\s*```", clean, re.DOTALL)
|
||||
payload = match.group(1) if match else clean
|
||||
|
||||
try:
|
||||
return json.loads(payload.strip())
|
||||
except json.JSONDecodeError:
|
||||
# Recovery: Suche nach Liste
|
||||
start = payload.find('[')
|
||||
end = payload.rfind(']') + 1
|
||||
if start != -1 and end > start:
|
||||
try: return json.loads(payload[start:end])
|
||||
except: pass
|
||||
|
||||
# Recovery: Suche nach Objekt
|
||||
start_obj = payload.find('{')
|
||||
end_obj = payload.rfind('}') + 1
|
||||
if start_obj != -1 and end_obj > start_obj:
|
||||
try: return json.loads(payload[start_obj:end_obj])
|
||||
except: pass
|
||||
return []
|
||||
|
||||
def resolve_note_type(registry: dict, requested: Optional[str]) -> str:
|
||||
"""
|
||||
Bestimmt den finalen Notiz-Typ.
|
||||
WP-14: Fallback wird nun über ingestion_settings.default_note_type gesteuert.
|
||||
"""
|
||||
types = registry.get("types", {})
|
||||
if requested and requested in types:
|
||||
return requested
|
||||
|
||||
# Dynamischer Fallback aus der Registry (Standard: 'concept')
|
||||
ingest_cfg = registry.get("ingestion_settings", {})
|
||||
return ingest_cfg.get("default_note_type", "concept")
|
||||
|
||||
def get_chunk_config_by_profile(registry: dict, profile_name: str, note_type: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Holt die Chunker-Parameter für ein spezifisches Profil aus der Registry.
|
||||
"""
|
||||
from app.core.chunking import get_chunk_config
|
||||
profiles = registry.get("chunking_profiles", {})
|
||||
if profile_name in profiles:
|
||||
cfg = profiles[profile_name].copy()
|
||||
if "overlap" in cfg and isinstance(cfg["overlap"], list):
|
||||
cfg["overlap"] = tuple(cfg["overlap"])
|
||||
return cfg
|
||||
return get_chunk_config(note_type)
|
||||
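A short usage sketch for the recovery helper above; the raw response string is purely illustrative:

    raw = 'Sure! ```json\n[{"to": "note-42", "kind": "supports"}]\n```'
    edges = extract_json_from_response(raw)
    # -> [{'to': 'note-42', 'kind': 'supports'}]
    # Responses that defeat all recovery paths degrade to [].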
app/core/ingestion/ingestion_validation.py (new file, 150 lines)
@@ -0,0 +1,150 @@
"""
|
||||
FILE: app/core/ingestion/ingestion_validation.py
|
||||
DESCRIPTION: WP-15b semantische Validierung von Kanten gegen den LocalBatchCache.
|
||||
WP-24c: Erweiterung um automatische Symmetrie-Generierung (Inverse Kanten).
|
||||
WP-25b: Konsequente Lazy-Prompt-Orchestration (prompt_key + variables).
|
||||
VERSION: 3.0.0 (WP-24c: Symmetric Edge Management)
|
||||
STATUS: Active
|
||||
FIX:
|
||||
- WP-24c: Integration der EdgeRegistry zur dynamischen Inversions-Ermittlung.
|
||||
- WP-24c: Implementierung von validate_and_symmetrize für bidirektionale Graphen.
|
||||
- WP-25b: Beibehaltung der hierarchischen Prompt-Resolution und Modell-Spezi-Logik.
|
||||
"""
|
||||
import logging
|
||||
from typing import Dict, Any, Optional, List
|
||||
from app.core.parser import NoteContext
|
||||
|
||||
# Import der neutralen Bereinigungs-Logik zur Vermeidung von Circular Imports
|
||||
from app.core.registry import clean_llm_text
|
||||
# WP-24c: Zugriff auf das dynamische Vokabular
|
||||
from app.services.edge_registry import registry as edge_registry
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
async def validate_edge_candidate(
|
||||
chunk_text: str,
|
||||
edge: Dict,
|
||||
batch_cache: Dict[str, NoteContext],
|
||||
llm_service: Any,
|
||||
provider: Optional[str] = None,
|
||||
profile_name: str = "ingest_validator"
|
||||
) -> bool:
|
||||
"""
|
||||
WP-15b/25b: Validiert einen Kandidaten semantisch gegen das Ziel im Cache.
|
||||
Nutzt Lazy-Prompt-Loading (PROMPT-TRACE) für deterministische YES/NO Entscheidungen.
|
||||
"""
|
||||
target_id = edge.get("to")
|
||||
target_ctx = batch_cache.get(target_id)
|
||||
|
||||
# Robust Lookup Fix (v2.12.2): Support für Anker (Note#Section)
|
||||
if not target_ctx and "#" in str(target_id):
|
||||
base_id = target_id.split("#")[0]
|
||||
target_ctx = batch_cache.get(base_id)
|
||||
|
||||
# Sicherheits-Fallback (Hard-Link Integrity)
|
||||
# Wenn das Ziel nicht im Cache ist, erlauben wir die Kante (Link-Erhalt).
|
||||
if not target_ctx:
|
||||
logger.info(f"ℹ️ [VALIDATION SKIP] No context for '{target_id}' - allowing link.")
|
||||
return True
|
||||
|
||||
try:
|
||||
logger.info(f"⚖️ [VALIDATING] Relation '{edge.get('kind')}' -> '{target_id}' (Profile: {profile_name})...")
|
||||
|
||||
# WP-25b: Lazy-Prompt Aufruf.
|
||||
# Übergabe von prompt_key und Variablen für modell-optimierte Formatierung.
|
||||
raw_response = await llm_service.generate_raw_response(
|
||||
prompt_key="edge_validation",
|
||||
variables={
|
||||
"chunk_text": chunk_text[:1500],
|
||||
"target_title": target_ctx.title,
|
||||
"target_summary": target_ctx.summary,
|
||||
"edge_kind": edge.get("kind", "related_to")
|
||||
},
|
||||
priority="background",
|
||||
profile_name=profile_name
|
||||
)
|
||||
|
||||
# Bereinigung zur Sicherstellung der Interpretierbarkeit (Mistral/Qwen Safe)
|
||||
response = clean_llm_text(raw_response)
|
||||
|
||||
# Semantische Prüfung des Ergebnisses
|
||||
is_valid = "YES" in response.upper()
|
||||
|
||||
if is_valid:
|
||||
logger.info(f"✅ [VALIDATED] Relation to '{target_id}' confirmed.")
|
||||
else:
|
||||
logger.info(f"🚫 [REJECTED] Relation to '{target_id}' irrelevant for this chunk.")
|
||||
return is_valid
|
||||
|
||||
except Exception as e:
|
||||
error_str = str(e).lower()
|
||||
error_type = type(e).__name__
|
||||
|
||||
# WP-25b: Differenzierung zwischen transienten und permanenten Fehlern
|
||||
# Transiente Fehler (Netzwerk) → erlauben (Integrität vor Präzision)
|
||||
if any(x in error_str for x in ["timeout", "connection", "network", "unreachable", "refused"]):
|
||||
logger.warning(f"⚠️ Transient error for {target_id}: {error_type} - {e}. Allowing edge.")
|
||||
return True
|
||||
|
||||
# Permanente Fehler → ablehnen (Graph-Qualität schützen)
|
||||
logger.error(f"❌ Permanent validation error for {target_id}: {error_type} - {e}")
|
||||
return False
|
||||
|
||||
async def validate_and_symmetrize(
|
||||
chunk_text: str,
|
||||
edge: Dict,
|
||||
source_id: str,
|
||||
batch_cache: Dict[str, NoteContext],
|
||||
llm_service: Any,
|
||||
profile_name: str = "ingest_validator"
|
||||
) -> List[Dict]:
|
||||
"""
|
||||
WP-24c: Erweitertes Validierungs-Gateway.
|
||||
Prüft die Primärkante und erzeugt bei Erfolg automatisch die inverse Kante.
|
||||
|
||||
Returns:
|
||||
List[Dict]: Eine Liste mit 0, 1 (nur Primär) oder 2 (Primär + Invers) Kanten.
|
||||
"""
|
||||
# 1. Semantische Prüfung der Primärkante (A -> B)
|
||||
is_valid = await validate_edge_candidate(
|
||||
chunk_text=chunk_text,
|
||||
edge=edge,
|
||||
batch_cache=batch_cache,
|
||||
llm_service=llm_service,
|
||||
profile_name=profile_name
|
||||
)
|
||||
|
||||
if not is_valid:
|
||||
return []
|
||||
|
||||
validated_edges = [edge]
|
||||
|
||||
# 2. WP-24c: Symmetrie-Generierung (B -> A)
|
||||
# Wir laden den inversen Typ dynamisch aus der EdgeRegistry (Single Source of Truth)
|
||||
original_kind = edge.get("kind", "related_to")
|
||||
inverse_kind = edge_registry.get_inverse(original_kind)
|
||||
|
||||
# Wir erzeugen eine inverse Kante nur, wenn ein sinnvoller inverser Typ existiert
|
||||
# und das Ziel der Primärkante (to) valide ist.
|
||||
target_id = edge.get("to")
|
||||
|
||||
if target_id and source_id:
|
||||
# Die inverse Kante zeigt vom Ziel der Primärkante zurück zur Quelle.
|
||||
# Sie wird als 'virtual' markiert, um sie im Retrieval/UI identifizierbar zu machen.
|
||||
inverse_edge = {
|
||||
"to": source_id,
|
||||
"kind": inverse_kind,
|
||||
"provenance": "structure", # System-generiert, geschützt durch Firewall
|
||||
"confidence": edge.get("confidence", 0.9) * 0.9, # Leichte Dämpfung für virtuelle Pfade
|
||||
"virtual": True,
|
||||
"note_id": target_id, # Die Note, von der die inverse Kante ausgeht
|
||||
"rule_id": f"symmetry:{original_kind}"
|
||||
}
|
||||
|
||||
# Wir fügen die Symmetrie nur hinzu, wenn sie einen echten Mehrwert bietet
|
||||
# (Vermeidung von redundanten related_to -> related_to Loops)
|
||||
if inverse_kind != original_kind or original_kind not in ["related_to", "references"]:
|
||||
validated_edges.append(inverse_edge)
|
||||
logger.info(f"🔄 [SYMMETRY] Generated inverse edge: '{target_id}' --({inverse_kind})--> '{source_id}'")
|
||||
|
||||
return validated_edges
|
||||
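A hypothetical call site for the gateway above (cache, llm and the edge values are stand-ins):

    edges = await validate_and_symmetrize(
        chunk_text="...",  # the chunk the edge was found in
        edge={"to": "note-b", "kind": "supports", "confidence": 0.9},
        source_id="note-a",
        batch_cache=cache,
        llm_service=llm,
    )
    # len(edges) == 0 -> rejected; 1 -> primary only; 2 -> primary + virtual inverse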
app/core/logging_setup.py (new file, 53 lines)
@@ -0,0 +1,53 @@
import logging
import os
from logging.handlers import RotatingFileHandler
from typing import Optional

def setup_logging(log_level: Optional[int] = None):
    """
    Configures the logging system with a file and a console handler.
    WP-24c v4.4.0-DEBUG: supports DEBUG level for end-to-end tracing.

    Args:
        log_level: Optional log level (logging.DEBUG, logging.INFO, etc.).
                   If not set, it is read from the DEBUG environment variable.
    """
    # 1. Determine the log level
    if log_level is None:
        # WP-24c v4.4.0-DEBUG: support for DEBUG level via environment variable
        debug_mode = os.getenv("DEBUG", "false").lower() == "true"
        log_level = logging.DEBUG if debug_mode else logging.INFO

    # 2. Create the log directory (if it does not exist)
    log_dir = "logs"
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)

    log_file = os.path.join(log_dir, "mindnet.log")

    # 3. Define the formatter (timestamp | level | module | message)
    formatter = logging.Formatter(
        '%(asctime)s | %(levelname)-8s | %(name)s | %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )

    # 4. File handler: writes to disk (max. 5 MB per file, keeps 5 backups)
    file_handler = RotatingFileHandler(
        log_file, maxBytes=5*1024*1024, backupCount=5, encoding='utf-8'
    )
    file_handler.setFormatter(formatter)
    file_handler.setLevel(log_level)  # WP-24c v4.4.0-DEBUG: respects log_level

    # 5. Stream handler: still writes to the console
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(formatter)
    console_handler.setLevel(log_level)  # WP-24c v4.4.0-DEBUG: respects log_level

    # 6. Configure the root logger
    logging.basicConfig(
        level=log_level,
        handlers=[file_handler, console_handler],
        force=True  # overrides any existing configuration
    )

    level_name = "DEBUG" if log_level == logging.DEBUG else "INFO"
    logging.info(f"📝 Logging initialized (Level: {level_name}). Writing to {log_file}")
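Typical start-up usage, assuming the module is imported before any other logging call:

    import logging
    from app.core.logging_setup import setup_logging

    setup_logging()                # level comes from the DEBUG env var (DEBUG=true -> DEBUG)
    # setup_logging(logging.DEBUG) # or force the level explicitly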
app/core/note_payload.py (deleted, 236 lines)
@@ -1,236 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Module: app/core/note_payload.py
Version: 2.1.0 (WP-11 update: aliases support)

Purpose
-----
Builds a robust note payload. Values such as `retriever_weight`, `chunk_profile`
and `edge_defaults` are resolved in the following priority order:
1) Frontmatter (note)
2) Type registry (config/types.yaml: types.<type>.*)
3) Registry defaults (config/types.yaml: defaults.*)
4) ENV defaults (MINDNET_DEFAULT_RETRIEVER_WEIGHT / MINDNET_DEFAULT_CHUNK_PROFILE)
"""

from __future__ import annotations

from typing import Any, Dict, Tuple, Optional
import os
import json
import pathlib

try:
    import yaml  # type: ignore
except Exception:
    yaml = None


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def _as_dict(x) -> Dict[str, Any]:
    """Tries to convert a ParsedMarkdown-like object into a dict."""
    if isinstance(x, dict):
        return dict(x)

    out: Dict[str, Any] = {}
    # copy known attributes where present
    for attr in (
        "frontmatter",
        "body",
        "id",
        "note_id",
        "title",
        "path",
        "tags",
        "type",
        "created",
        "modified",
        "date",
    ):
        if hasattr(x, attr):
            val = getattr(x, attr)
            if val is not None:
                out[attr] = val

    # fallback: if still empty, store the raw representation
    if not out:
        out["raw"] = str(x)

    return out


def _pick_args(*args, **kwargs) -> Tuple[Optional[str], Optional[dict]]:
    """Extracts optional extra arguments such as path and types_cfg."""
    path = kwargs.get("path") or (args[0] if args else None)
    types_cfg = kwargs.get("types_cfg") or kwargs.get("types") or None
    return path, types_cfg


def _env_float(name: str, default: float) -> float:
    """Reads a float from the environment, with a robust fallback."""
    try:
        return float(os.environ.get(name, default))
    except Exception:
        return default


def _ensure_list(x) -> list:
    """Guarantees a list of strings."""
    if x is None:
        return []
    if isinstance(x, list):
        return [str(i) for i in x]
    if isinstance(x, (set, tuple)):
        return [str(i) for i in x]
    return [str(x)]


# ---------------------------------------------------------------------------
# Loading the type registry
# ---------------------------------------------------------------------------

def _load_types_config(explicit_cfg: Optional[dict] = None) -> dict:
    """Loads the type registry from YAML/JSON, or uses an explicitly passed dict."""
    if explicit_cfg and isinstance(explicit_cfg, dict):
        return explicit_cfg

    path = os.getenv("MINDNET_TYPES_FILE") or "./config/types.yaml"
    if not os.path.isfile(path) or yaml is None:
        return {}

    try:
        with open(path, "r", encoding="utf-8") as f:
            data = yaml.safe_load(f) or {}
        return data if isinstance(data, dict) else {}
    except Exception:
        return {}


def _cfg_for_type(note_type: str, reg: dict) -> dict:
    """Returns the configuration for a concrete note type from the registry."""
    if not isinstance(reg, dict):
        return {}
    types = reg.get("types") if isinstance(reg.get("types"), dict) else reg
    return types.get(note_type, {}) if isinstance(types, dict) else {}


def _cfg_defaults(reg: dict) -> dict:
    """Returns the defaults block from the registry (defaults/global)."""
    if not isinstance(reg, dict):
        return {}
    for key in ("defaults", "default", "global"):
        v = reg.get(key)
        if isinstance(v, dict):
            return v
    return {}


# ---------------------------------------------------------------------------
# Main API
# ---------------------------------------------------------------------------

def make_note_payload(note: Any, *args, **kwargs) -> Dict[str, Any]:
    """
    Builds the note payload for mindnet_notes.

    Expected fields in the payload:
    - note_id: stable ID from frontmatter (id) or the note object
    - title: title of the note
    - type: note type (e.g. concept, project, journal, ...)
    - path: path inside the vault
    - retriever_weight: effective weight for the retriever
    - chunk_profile: chunking profile (short|medium|long|default|...)
    - edge_defaults: list of edge types that act as defaults
    - aliases: list of synonyms (WP-11)
    """
    n = _as_dict(note)
    path_arg, types_cfg_explicit = _pick_args(*args, **kwargs)
    reg = _load_types_config(types_cfg_explicit)

    fm = n.get("frontmatter") or {}
    fm_type = fm.get("type") or n.get("type") or "concept"
    note_type = str(fm_type)

    cfg_type = _cfg_for_type(note_type, reg)
    cfg_def = _cfg_defaults(reg)

    # --- retriever_weight: frontmatter > type config > registry defaults > ENV ---
    default_rw = _env_float("MINDNET_DEFAULT_RETRIEVER_WEIGHT", 1.0)
    retriever_weight = fm.get("retriever_weight")
    if retriever_weight is None:
        retriever_weight = cfg_type.get(
            "retriever_weight",
            cfg_def.get("retriever_weight", default_rw),
        )
    try:
        retriever_weight = float(retriever_weight)
    except Exception:
        retriever_weight = default_rw

    # --- chunk_profile: frontmatter > type config > registry defaults > ENV ---
    chunk_profile = fm.get("chunk_profile")
    if chunk_profile is None:
        chunk_profile = cfg_type.get(
            "chunk_profile",
            cfg_def.get(
                "chunk_profile",
                os.environ.get("MINDNET_DEFAULT_CHUNK_PROFILE", "medium"),
            ),
        )
    if not isinstance(chunk_profile, str):
        chunk_profile = "medium"

    # --- edge_defaults: frontmatter > type config > registry defaults ---
    edge_defaults = fm.get("edge_defaults")
    if edge_defaults is None:
        edge_defaults = cfg_type.get(
            "edge_defaults",
            cfg_def.get("edge_defaults", []),
        )
    edge_defaults = _ensure_list(edge_defaults)

    # --- base metadata (IDs, title, path) ---
    note_id = n.get("note_id") or n.get("id") or fm.get("id")
    title = n.get("title") or fm.get("title") or ""
    path = n.get("path") or path_arg
    if isinstance(path, pathlib.Path):
        path = str(path)

    payload: Dict[str, Any] = {
        "note_id": note_id,
        "title": title,
        "type": note_type,
        "path": path or "",
        "retriever_weight": retriever_weight,
        "chunk_profile": chunk_profile,
        "edge_defaults": edge_defaults,
    }

    # copy tags / keywords
    tags = fm.get("tags") or fm.get("keywords") or n.get("tags")
    if tags:
        payload["tags"] = _ensure_list(tags)

    # WP-11: copy aliases (for the discovery service)
    aliases = fm.get("aliases")
    if aliases:
        payload["aliases"] = _ensure_list(aliases)

    # temporal metadata (where present)
    for k in ("created", "modified", "date"):
        v = fm.get(k) or n.get(k)
        if v:
            payload[k] = str(v)

    # fulltext (fallback, if the input carries a body)
    if "body" in n and n["body"]:
        payload["fulltext"] = str(n["body"])

    # JSON round-trip as a hard validation step (keep non-ASCII intact)
    json.loads(json.dumps(payload, ensure_ascii=False))

    return payload
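A sketch of the priority cascade this (now removed) module implemented, with an illustrative registry:

    registry = {
        "types": {"project": {"retriever_weight": 1.5}},
        "defaults": {"chunk_profile": "long"},
    }
    note = {"frontmatter": {"id": "p1", "title": "Demo", "type": "project"}}
    payload = make_note_payload(note, types_cfg=registry)
    # retriever_weight == 1.5 (type config wins over defaults/ENV)
    # chunk_profile == "long" (registry defaults, since neither frontmatter nor type set it)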
app/core/parser.py (deleted, 266 lines)
@@ -1,266 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Module: app/core/parser.py
Version: 1.7.1 (fault-tolerant, API-compatible)
Date: 2025-10-01

Purpose
-----
Fault-tolerant reading of Markdown files with YAML frontmatter.
Compatible with the previous parser API, but robust against non-UTF-8 files:
- Tries, in order: utf-8 → utf-8-sig → cp1252 → latin-1.
- On fallback a JSON warning is printed to stdout; the import does NOT abort.
- YAML frontmatter is recognised with '---' at the start and '---' as terminator.
- extract_wikilinks() normalises [[id#anchor|label]] → 'id'.

Public API (compatible):
- class ParsedNote(frontmatter: dict, body: str, path: str)
- read_markdown(path) -> ParsedNote | None
- normalize_frontmatter(fm) -> dict
- validate_required_frontmatter(fm, required: tuple[str,...]=("id","title")) -> None
- extract_wikilinks(text) -> list[str]
- FRONTMATTER_RE (compatibility constant; regex for '---' lines)

Examples
---------
from app.core.parser import read_markdown, normalize_frontmatter, validate_required_frontmatter
parsed = read_markdown("./vault/30_projects/project-demo.md")
fm = normalize_frontmatter(parsed.frontmatter)
validate_required_frontmatter(fm)
body = parsed.body

from app.core.parser import extract_wikilinks
links = extract_wikilinks(body)

Dependencies
--------------
- PyYAML (yaml)

License: MIT (project-internal)
"""
from __future__ import annotations

from dataclasses import dataclass
from typing import Any, Dict, Optional, Tuple, Iterable, List
import io
import json
import os
import re

try:
    import yaml  # PyYAML
except Exception as e:  # pragma: no cover
    yaml = None  # the error is raised at runtime if YAML is actually needed


# ---------------------------------------------------------------------
# Data model
# ---------------------------------------------------------------------

@dataclass
class ParsedNote:
    frontmatter: Dict[str, Any]
    body: str
    path: str


# ---------------------------------------------------------------------
# Frontmatter detection
# ---------------------------------------------------------------------

# Public compatibility constant: older scripts import FRONTMATTER_RE
FRONTMATTER_RE = re.compile(r"^\s*---\s*$")  # <- public
# Additional internal alias (in case anyone references it)
FRONTMATTER_END = FRONTMATTER_RE  # <- public alias

# the internal names remain in place
_FRONTMATTER_HEAD = FRONTMATTER_RE
_FRONTMATTER_END = FRONTMATTER_RE


def _split_frontmatter(text: str) -> Tuple[Dict[str, Any], str]:
    """
    Splits text into (frontmatter: dict, body: str).
    Frontmatter is only recognised if the first line is '---' and a second '---' follows later.
    YAML errors in the frontmatter do NOT abort: an empty dict is used instead.
    """
    lines = text.splitlines(True)  # keep line endings
    if not lines:
        return {}, ""

    if not _FRONTMATTER_HEAD.match(lines[0]):
        # no frontmatter header → the whole text is the body
        return {}, text

    end_idx = None
    # look for the next '---' (max. 2000 lines as a safety limit)
    for i in range(1, min(len(lines), 2000)):
        if _FRONTMATTER_END.match(lines[i]):
            end_idx = i
            break

    if end_idx is None:
        # incomplete frontmatter block → treat everything as body
        return {}, text

    fm_raw = "".join(lines[1:end_idx])
    body = "".join(lines[end_idx + 1:])

    data: Dict[str, Any] = {}
    if yaml is None:
        raise RuntimeError("PyYAML is not installed (pip install pyyaml).")

    try:
        loaded = yaml.safe_load(fm_raw) or {}
        if isinstance(loaded, dict):
            data = loaded
        else:
            data = {}
    except Exception as e:
        # do not make YAML errors fatal
        print(json.dumps({"warn": "frontmatter_yaml_parse_failed", "error": str(e)}))
        data = {}

    # optional cosmetic trim: remove one leading blank line from the body
    if body.startswith("\n"):
        body = body[1:]

    return data, body


# ---------------------------------------------------------------------
# Robust reading with encoding fallback
# ---------------------------------------------------------------------

_FALLBACK_ENCODINGS: Tuple[str, ...] = ("utf-8", "utf-8-sig", "cp1252", "latin-1")


def _read_text_with_fallback(path: str) -> Tuple[str, str, bool]:
    """
    Reads a file with multiple decoding attempts.
    Returns: (text, used_encoding, had_fallback)
    - had_fallback=True if anything other than 'utf-8' was used (including 'utf-8-sig').
    """
    last_err: Optional[str] = None
    for enc in _FALLBACK_ENCODINGS:
        try:
            with io.open(path, "r", encoding=enc, errors="strict") as f:
                text = f.read()
            # 'utf-8-sig' counts as a fallback here (because of the BOM), but is harmless
            return text, enc, (enc != "utf-8")
        except UnicodeDecodeError as e:
            last_err = f"{type(e).__name__}: {e}"
            continue

    # last, extremely defensive fallback: bytes → UTF-8 with REPLACE (no exception)
    with open(path, "rb") as fb:
        raw = fb.read()
    text = raw.decode("utf-8", errors="replace")
    print(json.dumps({
        "path": path,
        "warn": "encoding_fallback_exhausted",
        "info": last_err or "unknown"
    }, ensure_ascii=False))
    return text, "utf-8(replace)", True


# ---------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------

def read_markdown(path: str) -> Optional[ParsedNote]:
    """
    Reads a Markdown file fault-tolerantly:
    - Allows different encodings (UTF-8 preferred, cp1252/latin-1 as fallback).
    - Does NOT fail with UnicodeDecodeError.
    - Returns ParsedNote(frontmatter, body, path), or None if the file does not exist.

    On a decoding fallback a JSON warning is logged:
      {"path": "...", "warn": "encoding_fallback_used", "used": "cp1252"}
    """
    if not os.path.exists(path):
        return None

    text, enc, had_fb = _read_text_with_fallback(path)
    if had_fb:
        print(json.dumps({"path": path, "warn": "encoding_fallback_used", "used": enc}, ensure_ascii=False))

    fm, body = _split_frontmatter(text)
    return ParsedNote(frontmatter=fm or {}, body=body or "", path=path)


def validate_required_frontmatter(fm: Dict[str, Any],
                                  required: Tuple[str, ...] = ("id", "title")) -> None:
    """
    Checks that all required fields are present.
    Default-compatible: ('id', 'title'), but the caller can extend it, e.g.:
      validate_required_frontmatter(fm, required=("id","title","type","status","created"))

    Raises ValueError if fields are missing or empty.
    """
    if fm is None:
        fm = {}
    missing = []
    for k in required:
        v = fm.get(k)
        if v is None:
            missing.append(k)
        elif isinstance(v, str) and not v.strip():
            missing.append(k)
    if missing:
        raise ValueError(f"Missing required frontmatter fields: {', '.join(missing)}")

    # plausibility: 'tags' must be a list if present
    if "tags" in fm and fm["tags"] not in (None, "") and not isinstance(fm["tags"], (list, tuple)):
        raise ValueError("frontmatter 'tags' must be a list of strings")


def normalize_frontmatter(fm: Dict[str, Any]) -> Dict[str, Any]:
    """
    Gentle normalisation without changing semantics:
    - 'tags' → list of strings (trimmed)
    - 'embedding_exclude' → bool
    - other fields unchanged
    """
    out = dict(fm or {})
    if "tags" in out:
        if isinstance(out["tags"], str):
            out["tags"] = [out["tags"].strip()] if out["tags"].strip() else []
        elif isinstance(out["tags"], list):
            out["tags"] = [str(t).strip() for t in out["tags"] if t is not None]
        else:
            out["tags"] = [str(out["tags"]).strip()] if out["tags"] not in (None, "") else []
    if "embedding_exclude" in out:
        out["embedding_exclude"] = bool(out["embedding_exclude"])
    return out


# ------------------------------ Wikilinks ---------------------------- #

# Base pattern for [[...]]; the normalisation (id before '#', before '|') is done by extract_wikilinks
_WIKILINK_RE = re.compile(r"\[\[([^\]]+)\]\]")


def extract_wikilinks(text: str) -> List[str]:
    """
    Extracts wikilinks such as [[id]], [[id#anchor]], [[id|label]], [[id#anchor|label]].
    Returns ONLY the target IDs (without anchor/label), trimmed of leading/trailing whitespace.
    No aggressive slug normalisation (that can happen later in the resolver).
    """
    if not text:
        return []
    out: List[str] = []
    for m in _WIKILINK_RE.finditer(text):
        raw = (m.group(1) or "").strip()
        if not raw:
            continue
        # split at the pipe (label) → keep the part before '|'
        if "|" in raw:
            raw = raw.split("|", 1)[0].strip()
        # split at the anchor
        if "#" in raw:
            raw = raw.split("#", 1)[0].strip()
        if raw:
            out.append(raw)
    return out
app/core/parser/__init__.py (new file, 22 lines)
@@ -0,0 +1,22 @@
"""
|
||||
FILE: app/core/parser/__init__.py
|
||||
DESCRIPTION: Package-Einstiegspunkt für den Parser.
|
||||
Ermöglicht das Löschen der parser.py Facade.
|
||||
VERSION: 1.10.0
|
||||
"""
|
||||
from .parsing_models import ParsedNote, NoteContext
|
||||
from .parsing_utils import (
|
||||
FRONTMATTER_RE, validate_required_frontmatter,
|
||||
normalize_frontmatter, extract_wikilinks, extract_edges_with_context
|
||||
)
|
||||
from .parsing_markdown import read_markdown
|
||||
from .parsing_scanner import pre_scan_markdown
|
||||
|
||||
# Kompatibilitäts-Alias
|
||||
FRONTMATTER_END = FRONTMATTER_RE
|
||||
|
||||
__all__ = [
|
||||
"ParsedNote", "NoteContext", "FRONTMATTER_RE", "FRONTMATTER_END",
|
||||
"read_markdown", "pre_scan_markdown", "validate_required_frontmatter",
|
||||
"normalize_frontmatter", "extract_wikilinks", "extract_edges_with_context"
|
||||
]
|
||||
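The re-exports keep the old facade imports working unchanged, e.g.:

    from app.core.parser import read_markdown, extract_wikilinks  # same paths as before the split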
app/core/parser/parsing_markdown.py (new file, 60 lines)
@@ -0,0 +1,60 @@
"""
|
||||
FILE: app/core/parsing/parsing_markdown.py
|
||||
DESCRIPTION: Fehlertolerantes Einlesen von Markdown und Frontmatter-Splitting.
|
||||
"""
|
||||
import io
|
||||
import os
|
||||
import json
|
||||
from typing import Any, Dict, Optional, Tuple
|
||||
from .parsing_models import ParsedNote
|
||||
from .parsing_utils import FRONTMATTER_RE
|
||||
|
||||
try:
|
||||
import yaml
|
||||
except ImportError:
|
||||
yaml = None
|
||||
|
||||
_FALLBACK_ENCODINGS: Tuple[str, ...] = ("utf-8", "utf-8-sig", "cp1252", "latin-1")
|
||||
|
||||
def _split_frontmatter(text: str) -> Tuple[Dict[str, Any], str]:
|
||||
"""Zerlegt Text in Frontmatter-Dict und Body."""
|
||||
lines = text.splitlines(True)
|
||||
if not lines or not FRONTMATTER_RE.match(lines[0]):
|
||||
return {}, text
|
||||
end_idx = None
|
||||
for i in range(1, min(len(lines), 2000)):
|
||||
if FRONTMATTER_RE.match(lines[i]):
|
||||
end_idx = i
|
||||
break
|
||||
if end_idx is None: return {}, text
|
||||
fm_raw = "".join(lines[1:end_idx])
|
||||
body = "".join(lines[end_idx + 1:])
|
||||
if yaml is None: raise RuntimeError("PyYAML not installed.")
|
||||
try:
|
||||
loaded = yaml.safe_load(fm_raw) or {}
|
||||
data = loaded if isinstance(loaded, dict) else {}
|
||||
except Exception as e:
|
||||
print(json.dumps({"warn": "frontmatter_yaml_parse_failed", "error": str(e)}))
|
||||
data = {}
|
||||
if body.startswith("\n"): body = body[1:]
|
||||
return data, body
|
||||
|
||||
def _read_text_with_fallback(path: str) -> Tuple[str, str, bool]:
|
||||
"""Liest Datei mit Encoding-Fallback-Kette."""
|
||||
last_err = None
|
||||
for enc in _FALLBACK_ENCODINGS:
|
||||
try:
|
||||
with io.open(path, "r", encoding=enc, errors="strict") as f:
|
||||
return f.read(), enc, (enc != "utf-8")
|
||||
except UnicodeDecodeError as e:
|
||||
last_err = str(e); continue
|
||||
with open(path, "rb") as fb:
|
||||
text = fb.read().decode("utf-8", errors="replace")
|
||||
return text, "utf-8(replace)", True
|
||||
|
||||
def read_markdown(path: str) -> Optional[ParsedNote]:
|
||||
"""Öffentliche API zum Einlesen einer Datei."""
|
||||
if not os.path.exists(path): return None
|
||||
text, enc, had_fb = _read_text_with_fallback(path)
|
||||
fm, body = _split_frontmatter(text)
|
||||
return ParsedNote(frontmatter=fm or {}, body=body or "", path=path)
|
||||
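Minimal usage of the reader above (the vault path is illustrative):

    parsed = read_markdown("./vault/00_Inbox/demo.md")
    if parsed:
        print(parsed.frontmatter.get("title"), len(parsed.body))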
app/core/parser/parsing_models.py (new file, 22 lines)
@@ -0,0 +1,22 @@
"""
|
||||
FILE: app/core/parsing/parsing_models.py
|
||||
DESCRIPTION: Datenklassen für das Parsing-System.
|
||||
"""
|
||||
from dataclasses import dataclass
|
||||
from typing import Any, Dict, List
|
||||
|
||||
@dataclass
|
||||
class ParsedNote:
|
||||
"""Container für eine vollständig eingelesene Markdown-Datei."""
|
||||
frontmatter: Dict[str, Any]
|
||||
body: str
|
||||
path: str
|
||||
|
||||
@dataclass
|
||||
class NoteContext:
|
||||
"""Metadaten-Container für den flüchtigen LocalBatchCache (Pass 1)."""
|
||||
note_id: str
|
||||
title: str
|
||||
type: str
|
||||
summary: str
|
||||
tags: List[str]
|
||||
app/core/parser/parsing_scanner.py (new file, 40 lines)
@@ -0,0 +1,40 @@
"""
|
||||
FILE: app/core/parsing/parsing_scanner.py
|
||||
DESCRIPTION: Pre-Scan für den LocalBatchCache (Pass 1).
|
||||
AUDIT v1.1.0: Dynamisierung der Scan-Parameter (WP-14).
|
||||
"""
|
||||
import os
|
||||
import re
|
||||
from typing import Optional, Dict, Any
|
||||
from .parsing_models import NoteContext
|
||||
from .parsing_markdown import read_markdown
|
||||
|
||||
def pre_scan_markdown(path: str, registry: Optional[Dict[str, Any]] = None) -> Optional[NoteContext]:
|
||||
"""
|
||||
Extrahiert Identität und Kurz-Kontext zur Validierung.
|
||||
WP-14: Scan-Tiefe und Summary-Länge sind nun über die Registry steuerbar.
|
||||
"""
|
||||
parsed = read_markdown(path)
|
||||
if not parsed: return None
|
||||
|
||||
# WP-14: Konfiguration laden oder Standardwerte nutzen
|
||||
reg = registry or {}
|
||||
summary_cfg = reg.get("summary_settings", {})
|
||||
scan_depth = summary_cfg.get("pre_scan_depth", 600)
|
||||
max_len = summary_cfg.get("max_summary_length", 500)
|
||||
|
||||
fm = parsed.frontmatter
|
||||
# ID-Findung: Frontmatter ID oder Dateiname als Fallback
|
||||
note_id = str(fm.get("id") or os.path.splitext(os.path.basename(path))[0])
|
||||
|
||||
# Erstelle Kurz-Zusammenfassung mit dynamischen Limits
|
||||
clean_body = re.sub(r'[#*`>]', '', parsed.body[:scan_depth]).strip()
|
||||
summary = clean_body[:max_len] + "..." if len(clean_body) > max_len else clean_body
|
||||
|
||||
return NoteContext(
|
||||
note_id=note_id,
|
||||
title=str(fm.get("title", note_id)),
|
||||
type=str(fm.get("type", "concept")),
|
||||
summary=summary,
|
||||
tags=fm.get("tags", []) if isinstance(fm.get("tags"), list) else []
|
||||
)
|
||||
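The registry keys below mirror the lookups in pre_scan_markdown; the values shown are its defaults:

    registry = {"summary_settings": {"pre_scan_depth": 600, "max_summary_length": 500}}
    ctx = pre_scan_markdown("./vault/demo.md", registry)  # -> NoteContext | None
    # ctx.summary is the cleaned body prefix, truncated to max_summary_length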
app/core/parser/parsing_utils.py (new file, 69 lines)
@@ -0,0 +1,69 @@
"""
|
||||
FILE: app/core/parsing/parsing_utils.py
|
||||
DESCRIPTION: Werkzeuge zur Validierung, Normalisierung und Wikilink-Extraktion.
|
||||
"""
|
||||
import re
|
||||
from typing import Any, Dict, List, Tuple, Optional
|
||||
from .parsing_models import ParsedNote
|
||||
|
||||
# Öffentliche Konstanten für Abwärtskompatibilität
|
||||
FRONTMATTER_RE = re.compile(r"^\s*---\s*$")
|
||||
_WIKILINK_RE = re.compile(r"\[\[([^\]]+)\]\]")
|
||||
|
||||
def validate_required_frontmatter(fm: Dict[str, Any], required: Tuple[str, ...] = ("id", "title")) -> None:
|
||||
"""Prüft, ob alle Pflichtfelder vorhanden sind."""
|
||||
if fm is None: fm = {}
|
||||
missing = []
|
||||
for k in required:
|
||||
v = fm.get(k)
|
||||
if v is None or (isinstance(v, str) and not v.strip()):
|
||||
missing.append(k)
|
||||
if missing:
|
||||
raise ValueError(f"Missing required frontmatter fields: {', '.join(missing)}")
|
||||
if "tags" in fm and fm["tags"] not in (None, "") and not isinstance(fm["tags"], (list, tuple)):
|
||||
raise ValueError("frontmatter 'tags' must be a list of strings")
|
||||
|
||||
def normalize_frontmatter(fm: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Normalisierung von Tags und Boolean-Feldern."""
|
||||
out = dict(fm or {})
|
||||
if "tags" in out:
|
||||
if isinstance(out["tags"], str):
|
||||
out["tags"] = [out["tags"].strip()] if out["tags"].strip() else []
|
||||
elif isinstance(out["tags"], list):
|
||||
out["tags"] = [str(t).strip() for t in out["tags"] if t is not None]
|
||||
else:
|
||||
out["tags"] = [str(out["tags"]).strip()] if out["tags"] not in (None, "") else []
|
||||
if "embedding_exclude" in out:
|
||||
out["embedding_exclude"] = bool(out["embedding_exclude"])
|
||||
return out
|
||||
|
||||
def extract_wikilinks(text: str) -> List[str]:
|
||||
"""Extrahiert Wikilinks als einfache Liste von IDs."""
|
||||
if not text: return []
|
||||
out: List[str] = []
|
||||
for m in _WIKILINK_RE.finditer(text):
|
||||
raw = (m.group(1) or "").strip()
|
||||
if not raw: continue
|
||||
if "|" in raw: raw = raw.split("|", 1)[0].strip()
|
||||
if "#" in raw: raw = raw.split("#", 1)[0].strip()
|
||||
if raw: out.append(raw)
|
||||
return out
|
||||
|
||||
def extract_edges_with_context(parsed: ParsedNote) -> List[Dict[str, Any]]:
|
||||
"""WP-22: Extrahiert Wikilinks mit Zeilennummern für die EdgeRegistry."""
|
||||
edges = []
|
||||
if not parsed or not parsed.body: return edges
|
||||
lines = parsed.body.splitlines()
|
||||
for line_num, line_content in enumerate(lines, 1):
|
||||
for match in _WIKILINK_RE.finditer(line_content):
|
||||
raw = (match.group(1) or "").strip()
|
||||
if not raw: continue
|
||||
if "|" in raw:
|
||||
parts = raw.split("|", 1)
|
||||
target, kind = parts[0].strip(), parts[1].strip()
|
||||
else:
|
||||
target, kind = raw.strip(), "related_to"
|
||||
if "#" in target: target = target.split("#", 1)[0].strip()
|
||||
if target:
|
||||
edges.append({"to": target, "kind": kind, "line": line_num, "provenance": "explicit"})
|
||||
return edges
|
||||
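A worked example for the context-aware extractor above:

    note = ParsedNote(frontmatter={}, body="See [[note-b|supports]] and [[note-c#intro]].", path="demo.md")
    extract_edges_with_context(note)
    # -> [{'to': 'note-b', 'kind': 'supports',   'line': 1, 'provenance': 'explicit'},
    #     {'to': 'note-c', 'kind': 'related_to', 'line': 1, 'provenance': 'explicit'}]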
app/core/ranking.py (deleted, 56 lines)
@@ -1,56 +0,0 @@
"""
|
||||
app/core/ranking.py — Kombiniertes Scoring (WP-04)
|
||||
|
||||
Zweck:
|
||||
Zusammenführen von semantischem Score (normalisiert), Edge-Bonus und
|
||||
Centrality-Bonus in einen Gesamtscore für die Ergebnisreihung.
|
||||
Kompatibilität:
|
||||
Python 3.12+
|
||||
Version:
|
||||
0.1.0 (Erstanlage)
|
||||
Stand:
|
||||
2025-10-07
|
||||
Bezug:
|
||||
WP-04 Ranking-Formel (w_sem, w_edge, w_cent)
|
||||
Nutzung:
|
||||
from app.core.ranking import combine_scores
|
||||
Änderungsverlauf:
|
||||
0.1.0 (2025-10-07) – Erstanlage.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
from typing import List, Tuple, Dict
|
||||
|
||||
|
||||
def normalize_scores(values: List[float]) -> List[float]:
|
||||
"""Min-Max-Normalisierung über die Kandidatenmenge (Fallback 0.5 bei Konstanz)."""
|
||||
if not values:
|
||||
return values
|
||||
lo, hi = min(values), max(values)
|
||||
if hi - lo < 1e-9:
|
||||
return [0.5] * len(values)
|
||||
return [(v - lo) / (hi - lo) for v in values]
|
||||
|
||||
|
||||
def combine_scores(
|
||||
hits: List[Tuple[str, float, dict]], # (id, semantic_score, payload)
|
||||
edge_bonus_map: Dict[str, float],
|
||||
centrality_map: Dict[str, float],
|
||||
w_sem: float = 0.70,
|
||||
w_edge: float = 0.25,
|
||||
w_cent: float = 0.05,
|
||||
) -> List[Tuple[str, float, float, float, float]]:
|
||||
"""
|
||||
Liefert Liste von (point_id, total_score, edge_bonus, centrality_bonus, raw_semantic_score),
|
||||
absteigend nach total_score sortiert.
|
||||
"""
|
||||
sem = [h[1] for h in hits]
|
||||
sem_n = normalize_scores(sem)
|
||||
out = []
|
||||
for (pid, s, payload), s_norm in zip(hits, sem_n):
|
||||
e = edge_bonus_map.get(pid, 0.0)
|
||||
c = centrality_map.get(pid, 0.0)
|
||||
total = w_sem * s_norm + w_edge * e + w_cent * c
|
||||
out.append((pid, total, e, c, s))
|
||||
out.sort(key=lambda t: t[1], reverse=True)
|
||||
return out
|
||||
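A worked example of the formula this (now removed) module implemented, using its default weights:

    hits = [("n1", 0.92, {}), ("n2", 0.80, {})]
    combine_scores(hits, edge_bonus_map={"n2": 1.0}, centrality_map={})
    # normalised semantics: n1 -> 1.0, n2 -> 0.0
    # totals: n1 = 0.70*1.0 = 0.70 ; n2 = 0.25*1.0 = 0.25 -> n1 ranks first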
app/core/registry.py (new file, 43 lines)
@@ -0,0 +1,43 @@
"""
|
||||
FILE: app/core/registry.py
|
||||
DESCRIPTION: Zentraler Base-Layer für Konfigurations-Loading und Text-Bereinigung.
|
||||
Bricht Zirkelbezüge zwischen Ingestion und LLMService auf.
|
||||
VERSION: 1.0.0
|
||||
"""
|
||||
import os
|
||||
import yaml
|
||||
from typing import Optional, List
|
||||
|
||||
def load_type_registry(custom_path: Optional[str] = None) -> dict:
|
||||
"""Lädt die types.yaml zur Steuerung der typ-spezifischen Logik."""
|
||||
# Wir nutzen hier einen direkten Import von Settings, um Zyklen zu vermeiden
|
||||
from app.config import get_settings
|
||||
settings = get_settings()
|
||||
path = custom_path or settings.MINDNET_TYPES_FILE
|
||||
if not os.path.exists(path):
|
||||
return {}
|
||||
try:
|
||||
with open(path, "r", encoding="utf-8") as f:
|
||||
return yaml.safe_load(f) or {}
|
||||
except Exception:
|
||||
return {}
|
||||
|
||||
def clean_llm_text(text: str, registry: Optional[dict] = None) -> str:
|
||||
"""
|
||||
Entfernt LLM-Steuerzeichen (<s>, [OUT] etc.) aus einem Text.
|
||||
Wird sowohl für JSON-Parsing als auch für Chat-Antworten genutzt.
|
||||
"""
|
||||
if not text or not isinstance(text, str):
|
||||
return ""
|
||||
|
||||
default_patterns = ["<s>", "</s>", "[OUT]", "[/OUT]"]
|
||||
reg = registry or load_type_registry()
|
||||
|
||||
# Lade Patterns aus llm_settings (WP-14)
|
||||
patterns: List[str] = reg.get("llm_settings", {}).get("cleanup_patterns", default_patterns)
|
||||
|
||||
clean = text
|
||||
for p in patterns:
|
||||
clean = clean.replace(p, "")
|
||||
|
||||
return clean.strip()
|
||||
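Usage sketch; passing an explicit non-empty registry avoids the settings lookup (an empty dict is falsy and falls through to load_type_registry):

    clean_llm_text("<s>[OUT]YES[/OUT]</s>", registry={"llm_settings": {}})  # -> "YES"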
app/core/retrieval/__init__.py (new file, 25 lines)
@@ -0,0 +1,25 @@
"""
|
||||
PACKAGE: app.core.retrieval
|
||||
DESCRIPTION: Zentrale Schnittstelle für Retrieval-Operationen (Vektor- & Graph-Suche).
|
||||
Bündelt Suche und mathematische Scoring-Engine.
|
||||
"""
|
||||
from .retriever import (
|
||||
Retriever,
|
||||
hybrid_retrieve,
|
||||
semantic_retrieve
|
||||
)
|
||||
|
||||
from .retriever_scoring import (
|
||||
get_weights,
|
||||
compute_wp22_score,
|
||||
get_status_multiplier
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
"Retriever",
|
||||
"hybrid_retrieve",
|
||||
"semantic_retrieve",
|
||||
"get_weights",
|
||||
"compute_wp22_score",
|
||||
"get_status_multiplier"
|
||||
]
|
||||
app/core/retrieval/decision_engine.py (new file, 378 lines)
@@ -0,0 +1,378 @@
"""
|
||||
FILE: app/core/retrieval/decision_engine.py
|
||||
DESCRIPTION: Der Agentic Orchestrator für MindNet (WP-25b Edition).
|
||||
Realisiert Multi-Stream Retrieval, Intent-basiertes Routing
|
||||
und die neue Lazy-Prompt Orchestrierung (Module A & B).
|
||||
VERSION: 1.3.2 (WP-25b: Full Robustness Recovery & Regex Parsing)
|
||||
STATUS: Active
|
||||
FIX:
|
||||
- WP-25b: ULTRA-Robustes Intent-Parsing via Regex (Fix: 'CODING[/S]' -> 'CODING').
|
||||
- WP-25b: Wiederherstellung der prepend_instruction Logik via variables.
|
||||
- WP-25a: Voller Erhalt der Profil-Kaskade via LLMService v3.5.5.
|
||||
- WP-25: Beibehaltung von Stream-Tracing, Edge-Boosts und Pre-Initialization.
|
||||
- RECOVERY: Wiederherstellung der lokalen Sicherheits-Gates aus v1.2.1.
|
||||
"""
|
||||
import asyncio
|
||||
import logging
|
||||
import yaml
|
||||
import os
|
||||
import re # Neu für robustes Intent-Parsing
|
||||
from typing import List, Dict, Any, Optional
|
||||
|
||||
# Core & Service Imports
|
||||
from app.models.dto import QueryRequest, QueryResponse
|
||||
from app.core.retrieval.retriever import Retriever
|
||||
from app.services.llm_service import LLMService
|
||||
from app.config import get_settings
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class DecisionEngine:
|
||||
def __init__(self):
|
||||
"""Initialisiert die Engine und lädt die modularen Konfigurationen."""
|
||||
self.settings = get_settings()
|
||||
self.retriever = Retriever()
|
||||
self.llm_service = LLMService()
|
||||
self.config = self._load_engine_config()
|
||||
|
||||
def _load_engine_config(self) -> Dict[str, Any]:
|
||||
"""Lädt die Multi-Stream Konfiguration (WP-25/25a)."""
|
||||
path = os.getenv("MINDNET_DECISION_CONFIG", "config/decision_engine.yaml")
|
||||
if not os.path.exists(path):
|
||||
logger.error(f"❌ Decision Engine Config not found at {path}")
|
||||
return {"strategies": {}, "streams_library": {}}
|
||||
try:
|
||||
with open(path, "r", encoding="utf-8") as f:
|
||||
config = yaml.safe_load(f) or {}
|
||||
|
||||
# WP-25b FIX: Schema-Validierung
|
||||
required_keys = ["strategies", "streams_library"]
|
||||
missing = [k for k in required_keys if k not in config]
|
||||
if missing:
|
||||
logger.error(f"❌ Missing required keys in decision_engine.yaml: {missing}")
|
||||
return {"strategies": {}, "streams_library": {}}
|
||||
|
||||
# Warnung bei unbekannten Top-Level-Keys
|
||||
known_keys = {"version", "settings", "strategies", "streams_library"}
|
||||
unknown = set(config.keys()) - known_keys
|
||||
if unknown:
|
||||
logger.warning(f"⚠️ Unknown keys in decision_engine.yaml: {unknown}")
|
||||
|
||||
logger.info(f"⚙️ Decision Engine Config loaded (v{config.get('version', 'unknown')})")
|
||||
return config
|
||||
except yaml.YAMLError as e:
|
||||
logger.error(f"❌ YAML syntax error in decision_engine.yaml: {e}")
|
||||
return {"strategies": {}, "streams_library": {}}
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Failed to load decision_engine.yaml: {e}")
|
||||
return {"strategies": {}, "streams_library": {}}
|
||||
|
||||
async def ask(self, query: str) -> str:
|
||||
"""
|
||||
Hauptmethode des MindNet Chats.
|
||||
Orchestriert den agentischen Prozess: Routing -> Retrieval -> Kompression -> Synthese.
|
||||
"""
|
||||
# 1. Intent Recognition (Strategy Routing)
|
||||
strategy_key = await self._determine_strategy(query)
|
||||
|
||||
strategies = self.config.get("strategies", {})
|
||||
strategy = strategies.get(strategy_key)
|
||||
|
||||
if not strategy:
|
||||
logger.warning(f"⚠️ Unknown strategy '{strategy_key}'. Fallback to FACT_WHAT.")
|
||||
strategy_key = "FACT_WHAT"
|
||||
strategy = strategies.get("FACT_WHAT")
|
||||
|
||||
if not strategy and strategies:
|
||||
strategy_key = next(iter(strategies))
|
||||
strategy = strategies[strategy_key]
|
||||
|
||||
if not strategy:
|
||||
return "Entschuldigung, meine Wissensbasis ist aktuell nicht konfiguriert."
|
||||
|
||||
# 2. Multi-Stream Retrieval & Pre-Synthesis (Parallel Tasks inkl. Kompression)
|
||||
stream_results = await self._execute_parallel_streams(strategy, query)
|
||||
|
||||
# 3. Finale Synthese
|
||||
return await self._generate_final_answer(strategy_key, strategy, query, stream_results)
|
||||
|
||||
async def _determine_strategy(self, query: str) -> str:
|
||||
"""WP-25b: Nutzt den LLM-Router via Lazy-Loading und bereinigt Modell-Artefakte via Regex."""
|
||||
settings_cfg = self.config.get("settings", {})
|
||||
prompt_key = settings_cfg.get("router_prompt_key", "intent_router_v1")
|
||||
router_profile = settings_cfg.get("router_profile")
|
||||
|
||||
try:
|
||||
# Delegation an LLMService ohne manuelle Vor-Formatierung
|
||||
response = await self.llm_service.generate_raw_response(
|
||||
prompt_key=prompt_key,
|
||||
variables={"query": query},
|
||||
max_retries=1,
|
||||
priority="realtime",
|
||||
profile_name=router_profile
|
||||
)
|
||||
|
||||
# --- ULTRA-ROBUST PARSING (Fix für 'CODING[/S]') ---
|
||||
# 1. Alles in Großbuchstaben umwandeln
|
||||
raw_text = str(response).upper()
|
||||
|
||||
# 2. Regex: Suche das erste Wort, das nur aus A-Z und Unterstrichen besteht
|
||||
# Dies ignoriert [/S], </s>, Newlines oder Plaudereien des Modells
|
||||
match = re.search(r'\b(FACT_WHEN|FACT_WHAT|DECISION|EMPATHY|CODING|INTERVIEW)\b', raw_text)
|
||||
|
||||
if match:
|
||||
intent = match.group(1)
|
||||
logger.info(f"🎯 [ROUTING] Parsed Intent: '{intent}' from raw response: '{response.strip()}'")
|
||||
return intent
|
||||
|
||||
# Fallback, falls Regex nicht greift
|
||||
logger.warning(f"⚠️ Unmapped intent '{response.strip()}' from router. Falling back to FACT_WHAT.")
|
||||
return "FACT_WHAT"
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Strategy Routing failed: {e}")
|
||||
return "FACT_WHAT"
|
||||
|
||||
async def _execute_parallel_streams(self, strategy: Dict, query: str) -> Dict[str, str]:
|
||||
"""Führt Such-Streams aus und komprimiert überlange Ergebnisse (Pre-Synthesis)."""
|
||||
stream_keys = strategy.get("use_streams", [])
|
||||
library = self.config.get("streams_library", {})
|
||||
|
||||
# Phase 1: Retrieval Tasks starten
|
||||
retrieval_tasks = []
|
||||
active_streams = []
|
||||
for key in stream_keys:
|
||||
stream_cfg = library.get(key)
|
||||
if stream_cfg:
|
||||
active_streams.append(key)
|
||||
retrieval_tasks.append(self._run_single_stream(key, stream_cfg, query))
|
||||
|
||||
# Ergebnisse sammeln
|
||||
retrieval_results = await asyncio.gather(*retrieval_tasks, return_exceptions=True)
|
||||
|
||||
# Phase 2: Formatierung und optionale Kompression
|
||||
# WP-24c v4.5.5: Context-Reuse - Sicherstellen, dass formatted_context auch bei Kompressions-Fehlern erhalten bleibt
|
||||
final_stream_tasks = []
|
||||
formatted_contexts = {} # WP-24c v4.5.5: Persistenz für Fallback-Zugriff
|
||||
|
||||
for name, res in zip(active_streams, retrieval_results):
|
||||
if isinstance(res, Exception):
|
||||
logger.error(f"Stream '{name}' failed during retrieval: {res}")
|
||||
error_msg = f"[Fehler im Wissens-Stream {name}]"
|
||||
formatted_contexts[name] = error_msg
|
||||
async def _err(msg=error_msg): return msg
|
||||
final_stream_tasks.append(_err())
|
||||
continue
|
||||
|
||||
formatted_context = self._format_stream_context(res)
|
||||
formatted_contexts[name] = formatted_context # WP-24c v4.5.5: Persistenz für Fallback
|
||||
|
||||
# WP-25a: Kompressions-Check (Inhaltsverdichtung)
|
||||
stream_cfg = library.get(name, {})
|
||||
threshold = stream_cfg.get("compression_threshold", 4000)
|
||||
|
||||
if len(formatted_context) > threshold:
|
||||
logger.info(f"⚙️ [WP-25b] Triggering Lazy-Compression for stream '{name}'...")
|
||||
comp_profile = stream_cfg.get("compression_profile")
|
||||
# WP-24c v4.5.5: Kompression mit Context-Reuse - bei Fehler wird formatted_context zurückgegeben
|
||||
final_stream_tasks.append(
|
||||
self._compress_stream_content(name, formatted_context, query, comp_profile)
|
||||
)
|
||||
else:
|
||||
async def _direct(c=formatted_context): return c
|
||||
final_stream_tasks.append(_direct())
|
||||
|
||||
# Finale Inhalte parallel fertigstellen
|
||||
# WP-24c v4.5.5: Bei Kompressions-Fehlern wird der Original-Content zurückgegeben (siehe _compress_stream_content)
|
||||
final_contents = await asyncio.gather(*final_stream_tasks, return_exceptions=True)
|
||||
|
||||
# WP-24c v4.5.5: Exception-Handling für finale Inhalte - verwende Original-Content bei Fehlern
|
||||
final_results = {}
|
||||
for name, content in zip(active_streams, final_contents):
|
||||
if isinstance(content, Exception):
|
||||
logger.warning(f"⚠️ [CONTEXT-REUSE] Stream '{name}' Fehler in finaler Verarbeitung: {content}. Verwende Original-Context.")
|
||||
final_results[name] = formatted_contexts.get(name, f"[Fehler im Stream {name}]")
|
||||
else:
|
||||
final_results[name] = content
|
||||
|
||||
logger.debug(f"📊 [STREAMS] Finale Stream-Ergebnisse: {[(k, len(v)) for k, v in final_results.items()]}")
|
||||
return final_results
|
||||
|
||||
async def _compress_stream_content(self, stream_name: str, content: str, query: str, profile: Optional[str]) -> str:
|
||||
"""
|
||||
WP-25b: Inhaltsverdichtung via Lazy-Loading 'compression_template'.
|
||||
WP-24c v4.5.5: Context-Reuse - Bei Fehlern wird der Original-Content zurückgegeben,
|
||||
um Re-Retrieval zu vermeiden.
|
||||
"""
|
||||
try:
|
||||
# WP-24c v4.5.5: Logging für LLM-Trace im Kompressions-Modus
|
||||
logger.debug(f"🔧 [COMPRESSION] Starte Kompression für Stream '{stream_name}' (Content-Länge: {len(content)})")
|
||||
|
||||
summary = await self.llm_service.generate_raw_response(
|
||||
prompt_key="compression_template",
|
||||
variables={
|
||||
"stream_name": stream_name,
|
||||
"content": content,
|
||||
"query": query
|
||||
},
|
||||
profile_name=profile,
|
||||
priority="background",
|
||||
max_retries=1
|
||||
)
|
||||
|
||||
# WP-24c v4.5.5: Validierung des Kompressions-Ergebnisses
|
||||
if summary and len(summary.strip()) > 10:
|
||||
logger.debug(f"✅ [COMPRESSION] Kompression erfolgreich für '{stream_name}' (Original: {len(content)}, Komprimiert: {len(summary)})")
|
||||
return summary.strip()
|
||||
else:
|
||||
logger.warning(f"⚠️ [COMPRESSION] Kompressions-Ergebnis zu kurz für '{stream_name}', verwende Original-Content")
|
||||
return content
|
||||
|
||||
except Exception as e:
|
||||
# WP-24c v4.5.5: Context-Reuse - Bei Fehlern Original-Content zurückgeben (kein Re-Retrieval)
|
||||
logger.error(f"❌ [COMPRESSION] Kompression von '{stream_name}' fehlgeschlagen: {e}")
|
||||
logger.info(f"🔄 [CONTEXT-REUSE] Verwende Original-Content für '{stream_name}' (Länge: {len(content)}) - KEIN Re-Retrieval")
|
||||
return content
|
||||
|
||||
async def _run_single_stream(self, name: str, cfg: Dict, query: str) -> QueryResponse:
|
||||
"""Spezialisierte Graph-Suche mit Stream-Tracing und Edge-Boosts."""
|
||||
transformed_query = cfg.get("query_template", "{query}").format(query=query)
|
||||
|
||||
request = QueryRequest(
|
||||
query=transformed_query,
|
||||
top_k=cfg.get("top_k", 5),
|
||||
filters={"type": cfg.get("filter_types", [])},
|
||||
expand={"depth": 1},
|
||||
boost_edges=cfg.get("edge_boosts", {}), # Erhalt der Gewichtung
|
||||
explain=True
|
||||
)
|
||||
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Protokollierung vor der Suche
|
||||
logger.info(f"🔍 [RETRIEVAL] Starte Stream: '{name}'")
|
||||
logger.info(f" -> Transformierte Query: '{transformed_query}'")
|
||||
logger.debug(f" ⚙️ [FILTER] Angewandte Metadaten-Filter: {request.filters}")
|
||||
logger.debug(f" ⚙️ [FILTER] Top-K: {request.top_k}, Expand-Depth: {request.expand.get('depth') if request.expand else None}")
|
||||
|
||||
response = await self.retriever.search(request)
|
||||
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Protokollierung nach der Suche
|
||||
if not response.results:
|
||||
logger.warning(f"⚠️ [EMPTY] Stream '{name}' lieferte 0 Ergebnisse.")
|
||||
else:
|
||||
logger.info(f"✨ [SUCCESS] Stream '{name}' lieferte {len(response.results)} Treffer.")
|
||||
# Top 3 Treffer im DEBUG-Level loggen
|
||||
# WP-24c v4.5.4: QueryHit hat kein chunk_id Feld - verwende node_id (enthält die Chunk-ID)
|
||||
for i, hit in enumerate(response.results[:3]):
|
||||
chunk_id = hit.node_id # node_id ist die Chunk-ID (pid)
|
||||
score = hit.total_score # QueryHit hat total_score, nicht score
|
||||
logger.debug(f" [{i+1}] Chunk: {chunk_id} | Score: {score:.4f} | Path: {hit.source.get('path', 'N/A') if hit.source else 'N/A'}")
|
||||
|
||||
for hit in response.results:
|
||||
hit.stream_origin = name
|
||||
return response
|
||||
|
||||
def _format_stream_context(self, response: QueryResponse) -> str:
|
||||
"""Wandelt QueryHits in einen formatierten Kontext-String um."""
|
||||
if not response.results:
|
||||
return "Keine spezifischen Informationen gefunden."
|
||||
lines = []
|
||||
for i, hit in enumerate(response.results, 1):
|
||||
source = hit.source.get("path", "Unbekannt")
|
||||
content = hit.source.get("text", "").strip()
|
||||
lines.append(f"[{i}] QUELLE: {source}\nINHALT: {content}")
|
||||
return "\n\n".join(lines)
|
||||
|
||||
async def _generate_final_answer(
|
||||
self,
|
||||
strategy_key: str,
|
||||
strategy: Dict,
|
||||
query: str,
|
||||
stream_results: Dict[str, str]
|
||||
) -> str:
|
||||
"""WP-25b: Finale Synthese via Lazy-Prompt mit Robustheit aus v1.2.1."""
|
||||
profile = strategy.get("llm_profile")
|
||||
template_key = strategy.get("prompt_template", "fact_synthesis_v1")
|
||||
system_prompt = self.llm_service.get_prompt("system_prompt")
|
||||
|
||||
# WP-25 ROBUSTNESS: Pre-Initialization der Variablen
|
||||
all_possible_streams = ["values_stream", "facts_stream", "biography_stream", "risk_stream", "tech_stream"]
|
||||
template_vars = {s: "" for s in all_possible_streams}
|
||||
template_vars.update(stream_results)
|
||||
template_vars["query"] = query
|
||||
|
||||
# WP-25a Erhalt: Prepend Instructions aus der strategy_config
|
||||
prepend = strategy.get("prepend_instruction", "")
|
||||
template_vars["prepend_instruction"] = prepend
|
||||
|
||||
try:
|
||||
# WP-25b: Delegation der Synthese an den LLMService
|
||||
response = await self.llm_service.generate_raw_response(
|
||||
prompt_key=template_key,
|
||||
variables=template_vars,
|
||||
system=system_prompt,
|
||||
profile_name=profile,
|
||||
priority="realtime"
|
||||
)
|
||||
|
||||
# WP-25a RECOVERY: Falls dieprepend_instruction nicht im Template-Key
|
||||
# der prompts.yaml enthalten ist (WP-25b Lazy Loading), fügen wir sie
|
||||
# hier manuell an den Anfang, um die Logik aus v1.2.1 zu bewahren.
|
||||
if prepend and prepend not in response[:len(prepend)+50]:
|
||||
logger.info("ℹ️ Adding prepend_instruction manually (not found in response).")
|
||||
response = f"{prepend}\n\n{response}"
|
||||
|
||||
return response
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Final Synthesis failed: {e}")
|
||||
# WP-24c v4.5.5: ROBUST FALLBACK mit Context-Reuse
|
||||
# WICHTIG: stream_results werden Wiederverwendet - KEIN Re-Retrieval
|
||||
logger.info(f"🔄 [FALLBACK] Verwende vorhandene stream_results (KEIN Re-Retrieval)")
|
||||
logger.debug(f" -> Verfügbare Streams: {list(stream_results.keys())}")
|
||||
logger.debug(f" -> Stream-Längen: {[(k, len(v)) for k, v in stream_results.items()]}")
|
||||
|
||||
# WP-24c v4.5.5: Context-Reuse - Nutze vorhandene stream_results
|
||||
fallback_context = "\n\n".join([v for v in stream_results.values() if len(v) > 20])
|
||||
|
||||
if not fallback_context or len(fallback_context.strip()) < 20:
|
||||
logger.warning(f"⚠️ [FALLBACK] Fallback-Context zu kurz ({len(fallback_context)} Zeichen). Stream-Ergebnisse möglicherweise leer.")
|
||||
return f"Entschuldigung, ich konnte keine relevanten Informationen zu Ihrer Anfrage finden. (Fehler: {str(e)})"
|
||||
|
||||
try:
|
||||
# WP-24c v4.5.5: Fallback-Synthese mit LLM-Trace-Logging
|
||||
logger.info(f"🔄 [FALLBACK] Starte Fallback-Synthese mit vorhandenem Context (Länge: {len(fallback_context)})")
|
||||
logger.debug(f" -> Fallback-Profile: {profile}, Template: fallback_synthesis")
|
||||
|
||||
result = await self.llm_service.generate_raw_response(
|
||||
prompt_key="fallback_synthesis",
|
||||
variables={"query": query, "context": fallback_context},
|
||||
system=system_prompt, priority="realtime", profile_name=profile
|
||||
)
|
||||
|
||||
logger.info(f"✅ [FALLBACK] Fallback-Synthese erfolgreich (Antwort-Länge: {len(result) if result else 0})")
|
||||
return result
|
||||
|
||||
except (ValueError, KeyError) as template_error:
|
||||
# WP-24c v4.5.9: Fallback auf generisches Template mit variables
|
||||
# Nutzt Lazy-Loading aus WP-25b für modell-spezifische Fallback-Prompts
|
||||
logger.warning(f"⚠️ [FALLBACK] Template 'fallback_synthesis' nicht gefunden: {template_error}. Versuche generisches Template.")
|
||||
logger.debug(f" -> Fallback-Profile: {profile}, Context-Länge: {len(fallback_context)}")
|
||||
|
||||
try:
|
||||
# WP-24c v4.5.9: Versuche generisches Template mit variables (Lazy-Loading)
|
||||
result = await self.llm_service.generate_raw_response(
|
||||
prompt_key="fallback_synthesis_generic", # Fallback-Template
|
||||
variables={"query": query, "context": fallback_context},
|
||||
system=system_prompt, priority="realtime", profile_name=profile
|
||||
)
|
||||
logger.info(f"✅ [FALLBACK] Generisches Template erfolgreich (Antwort-Länge: {len(result) if result else 0})")
|
||||
return result
|
||||
except (ValueError, KeyError) as fallback_error:
|
||||
# WP-24c v4.5.9: Letzter Fallback - direkter Prompt (nur wenn beide Templates fehlen)
|
||||
logger.error(f"❌ [FALLBACK] Auch generisches Template nicht gefunden: {fallback_error}. Verwende direkten Prompt als letzten Fallback.")
|
||||
result = await self.llm_service.generate_raw_response(
|
||||
prompt=f"Beantworte: {query}\n\nKontext:\n{fallback_context}",
|
||||
system=system_prompt, priority="realtime", profile_name=profile
|
||||
)
|
||||
logger.info(f"✅ [FALLBACK] Direkter Prompt erfolgreich (Antwort-Länge: {len(result) if result else 0})")
|
||||
return result
|
||||
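A self-contained sketch of the WP-25b intent parsing above, showing why the regex whitelist survives model artifacts such as 'CODING[/S]'; the sample strings are illustrative:

import re

INTENT_RE = re.compile(r'\b(FACT_WHEN|FACT_WHAT|DECISION|EMPATHY|CODING|INTERVIEW)\b')

for raw in ["CODING[/S]", "</s> decision\nblah", "no label at all"]:
    match = INTENT_RE.search(raw.upper())
    print(match.group(1) if match else "FACT_WHAT")
# -> CODING, DECISION, FACT_WHAT (fallback)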
587
app/core/retrieval/retriever.py
Normal file

@ -0,0 +1,587 @@
"""
|
||||
FILE: app/core/retrieval/retriever.py
|
||||
DESCRIPTION: Haupt-Schnittstelle für die Suche. Orchestriert Vektorsuche und Graph-Expansion.
|
||||
WP-15c Update: Note-Level Diversity Pooling & Super-Edge Aggregation.
|
||||
WP-24c v4.1.0: Gold-Standard - Scope-Awareness, Section-Filtering, Authority-Priorisierung.
|
||||
VERSION: 0.8.0 (WP-24c: Gold-Standard v4.1.0)
|
||||
STATUS: Active
|
||||
DEPENDENCIES: app.config, app.models.dto, app.core.database*, app.core.graph_adapter
|
||||
"""
|
||||
from __future__ import annotations
|
||||
|
||||
import os
|
||||
import time
|
||||
import logging
|
||||
from typing import Any, Dict, List, Tuple, Iterable, Optional
|
||||
from collections import defaultdict
|
||||
|
||||
from app.config import get_settings
|
||||
from app.models.dto import (
|
||||
QueryRequest, QueryResponse, QueryHit,
|
||||
Explanation, ScoreBreakdown, Reason, EdgeDTO
|
||||
)
|
||||
|
||||
# MODULARISIERUNG: Neue Import-Pfade für die Datenbank-Ebene
|
||||
import app.core.database.qdrant as qdr
|
||||
import app.core.database.qdrant_points as qp
|
||||
|
||||
import app.services.embeddings_client as ec
|
||||
import app.core.graph.graph_subgraph as ga
|
||||
import app.core.graph.graph_db_adapter as gdb
|
||||
from app.core.graph.graph_utils import PROVENANCE_PRIORITY
|
||||
from qdrant_client.http import models as rest
|
||||
|
||||
# Mathematische Engine importieren
|
||||
from app.core.retrieval.retriever_scoring import get_weights, compute_wp22_score
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# ==============================================================================
|
||||
# 1. CORE HELPERS & CONFIG LOADERS
|
||||
# ==============================================================================
|
||||
|
||||
def _get_client_and_prefix() -> Tuple[Any, str]:
|
||||
"""Initialisiert Qdrant Client und lädt Collection-Prefix via database-Paket."""
|
||||
cfg = qdr.QdrantConfig.from_env()
|
||||
return qdr.get_client(cfg), cfg.prefix
|
||||
|
||||
|
||||
def _get_query_vector(req: QueryRequest) -> List[float]:
|
||||
"""
|
||||
Vektorisiert die Anfrage.
|
||||
FIX: Enthält try-except Block für unterschiedliche Signaturen von ec.embed_text.
|
||||
"""
|
||||
if req.query_vector:
|
||||
return list(req.query_vector)
|
||||
if not req.query:
|
||||
raise ValueError("Kein Text oder Vektor für die Suche angegeben.")
|
||||
|
||||
settings = get_settings()
|
||||
|
||||
try:
|
||||
# Versuch mit modernem Interface (WP-03 kompatibel)
|
||||
return ec.embed_text(req.query, model_name=settings.MODEL_NAME)
|
||||
except TypeError:
|
||||
# Fallback für Signaturen, die 'model_name' nicht als Keyword akzeptieren
|
||||
logger.debug("ec.embed_text does not accept 'model_name' keyword. Falling back.")
|
||||
return ec.embed_text(req.query)
|
||||
|
||||
|
||||
def _get_chunk_ids_for_notes(
|
||||
client: Any,
|
||||
prefix: str,
|
||||
note_ids: List[str]
|
||||
) -> List[str]:
|
||||
"""
|
||||
WP-24c v4.1.0: Lädt alle Chunk-IDs für gegebene Note-IDs.
|
||||
Wird für Scope-Aware Edge Retrieval benötigt.
|
||||
"""
|
||||
if not note_ids:
|
||||
return []
|
||||
|
||||
_, chunks_col, _ = qp._names(prefix)
|
||||
chunk_ids = []
|
||||
|
||||
try:
|
||||
# Filter: note_id IN note_ids
|
||||
note_filter = rest.Filter(should=[
|
||||
rest.FieldCondition(key="note_id", match=rest.MatchValue(value=str(nid)))
|
||||
for nid in note_ids
|
||||
])
|
||||
|
||||
pts, _ = client.scroll(
|
||||
collection_name=chunks_col,
|
||||
scroll_filter=note_filter,
|
||||
limit=2048,
|
||||
with_payload=True,
|
||||
with_vectors=False
|
||||
)
|
||||
|
||||
for pt in pts:
|
||||
pl = pt.payload or {}
|
||||
cid = pl.get("chunk_id")
|
||||
if cid:
|
||||
chunk_ids.append(str(cid))
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to load chunk IDs for notes: {e}")
|
||||
|
||||
return chunk_ids
|
||||
|
||||
def _semantic_hits(
|
||||
client: Any,
|
||||
prefix: str,
|
||||
vector: List[float],
|
||||
top_k: int,
|
||||
filters: Optional[Dict] = None,
|
||||
target_section: Optional[str] = None
|
||||
) -> List[Tuple[str, float, Dict[str, Any]]]:
|
||||
"""
|
||||
Führt die Vektorsuche via database-Points-Modul durch.
|
||||
WP-24c v4.1.0: Unterstützt optionales Section-Filtering.
|
||||
"""
|
||||
# WP-24c v4.1.0: Section-Filtering für präzise Section-Links
|
||||
if target_section and filters:
|
||||
filters = {**filters, "section": target_section}
|
||||
elif target_section:
|
||||
filters = {"section": target_section}
|
||||
|
||||
raw_hits = qp.search_chunks_by_vector(client, prefix, vector, top=top_k, filters=filters)
|
||||
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Protokollierung der rohen Qdrant-Antwort
|
||||
logger.debug(f"📊 [RAW-HITS] Qdrant lieferte {len(raw_hits)} Roh-Treffer (Top-K: {top_k})")
|
||||
if filters:
|
||||
logger.debug(f" ⚙️ [FILTER] Angewandte Filter: {filters}")
|
||||
|
||||
# Logge die Top 3 Roh-Scores für Diagnose
|
||||
for i, hit in enumerate(raw_hits[:3]):
|
||||
hit_id = str(hit[0]) if hit else "N/A"
|
||||
hit_score = float(hit[1]) if hit and len(hit) > 1 else 0.0
|
||||
hit_payload = dict(hit[2] or {}) if hit and len(hit) > 2 else {}
|
||||
hit_path = hit_payload.get('path', 'N/A')
|
||||
logger.debug(f" [{i+1}] ID: {hit_id} | Raw-Score: {hit_score:.4f} | Path: {hit_path}")
|
||||
|
||||
# Strikte Typkonvertierung für Stabilität
|
||||
return [(str(hit[0]), float(hit[1]), dict(hit[2] or {})) for hit in raw_hits]
|
||||
|
||||
# ==============================================================================
|
||||
# 2. EXPLANATION LAYER (DEBUG & VERIFIABILITY)
|
||||
# ==============================================================================
|
||||
|
||||
def _build_explanation(
|
||||
semantic_score: float,
|
||||
payload: Dict[str, Any],
|
||||
scoring_debug: Dict[str, Any],
|
||||
subgraph: Optional[ga.Subgraph],
|
||||
target_note_id: Optional[str],
|
||||
applied_boosts: Optional[Dict[str, float]] = None
|
||||
) -> Explanation:
|
||||
"""
|
||||
Transformiert mathematische Scores und Graph-Signale in eine menschenlesbare Erklärung.
|
||||
"""
|
||||
_, edge_w_cfg, _ = get_weights()
|
||||
base_val = scoring_debug["base_val"]
|
||||
|
||||
# 1. Detaillierter mathematischer Breakdown
|
||||
breakdown = ScoreBreakdown(
|
||||
semantic_contribution=base_val,
|
||||
edge_contribution=base_val * scoring_debug["edge_impact_final"],
|
||||
centrality_contribution=base_val * scoring_debug["cent_impact_final"],
|
||||
raw_semantic=semantic_score,
|
||||
raw_edge_bonus=scoring_debug["edge_bonus"],
|
||||
raw_centrality=scoring_debug["cent_bonus"],
|
||||
node_weight=float(payload.get("retriever_weight", 1.0)),
|
||||
status_multiplier=scoring_debug["status_multiplier"],
|
||||
graph_boost_factor=scoring_debug["graph_boost_factor"]
|
||||
)
|
||||
|
||||
reasons: List[Reason] = []
|
||||
edges_dto: List[EdgeDTO] = []
|
||||
|
||||
# 2. Gründe für Semantik hinzufügen
|
||||
if semantic_score > 0.85:
|
||||
reasons.append(Reason(kind="semantic", message="Sehr hohe textuelle Übereinstimmung.", score_impact=base_val))
|
||||
elif semantic_score > 0.70:
|
||||
reasons.append(Reason(kind="semantic", message="Inhaltliche Übereinstimmung.", score_impact=base_val))
|
||||
|
||||
# 3. Gründe für Typ und Lifecycle (WP-25 Vorbereitung)
|
||||
type_weight = float(payload.get("retriever_weight", 1.0))
|
||||
if type_weight != 1.0:
|
||||
msg = "Bevorzugt" if type_weight > 1.0 else "De-priorisiert"
|
||||
reasons.append(Reason(kind="type", message=f"{msg} durch Typ-Profil.", score_impact=base_val * (type_weight - 1.0)))
|
||||
|
||||
# NEU: Explizite Ausweisung des Lifecycle-Status (WP-22)
|
||||
status_mult = scoring_debug.get("status_multiplier", 1.0)
|
||||
if status_mult != 1.0:
|
||||
status_msg = "Belohnt (Stable)" if status_mult > 1.0 else "De-priorisiert (Draft)"
|
||||
reasons.append(Reason(
|
||||
kind="status",
|
||||
message=f"{status_msg} durch Content-Lifecycle.",
|
||||
score_impact=semantic_score * (status_mult - 1.0)
|
||||
))
|
||||
|
||||
# 4. Kanten-Verarbeitung (Graph-Intelligence)
|
||||
if subgraph and target_note_id and scoring_debug["edge_bonus"] > 0:
|
||||
raw_edges = []
|
||||
if hasattr(subgraph, "get_incoming_edges"):
|
||||
raw_edges.extend(subgraph.get_incoming_edges(target_note_id) or [])
|
||||
if hasattr(subgraph, "get_outgoing_edges"):
|
||||
raw_edges.extend(subgraph.get_outgoing_edges(target_note_id) or [])
|
||||
|
||||
for edge in raw_edges:
|
||||
src = str(edge.get("source") or "note_root")
|
||||
tgt = str(edge.get("target") or target_note_id or "unknown_target")
|
||||
kind = str(edge.get("kind", "related_to"))
|
||||
prov = str(edge.get("provenance", "rule"))
|
||||
conf = float(edge.get("confidence", 1.0))
|
||||
|
||||
direction = "in" if tgt == target_note_id else "out"
|
||||
|
||||
# WP-24c v4.5.10: Robuste EdgeDTO-Erstellung mit Fehlerbehandlung
|
||||
# Falls Provenance-Wert nicht unterstützt wird, verwende Fallback
|
||||
try:
|
||||
edge_obj = EdgeDTO(
|
||||
id=f"{src}->{tgt}:{kind}",
|
||||
kind=kind,
|
||||
source=src,
|
||||
target=tgt,
|
||||
weight=conf,
|
||||
direction=direction,
|
||||
provenance=prov,
|
||||
confidence=conf
|
||||
)
|
||||
edges_dto.append(edge_obj)
|
||||
except Exception as e:
|
||||
# WP-24c v4.5.10: Fallback bei Validierungsfehler (z.B. alte EdgeDTO-Version im Cache)
|
||||
logger.warning(
|
||||
f"⚠️ [EDGE-DTO] Provenance '{prov}' nicht unterstützt für Edge {src}->{tgt} ({kind}). "
|
||||
f"Fehler: {e}. Verwende Fallback 'explicit'."
|
||||
)
|
||||
# Fallback: Verwende 'explicit' als sicheren Default
|
||||
try:
|
||||
edge_obj = EdgeDTO(
|
||||
id=f"{src}->{tgt}:{kind}",
|
||||
kind=kind,
|
||||
source=src,
|
||||
target=tgt,
|
||||
weight=conf,
|
||||
direction=direction,
|
||||
provenance="explicit", # Fallback
|
||||
confidence=conf
|
||||
)
|
||||
edges_dto.append(edge_obj)
|
||||
except Exception as e2:
|
||||
logger.error(f"❌ [EDGE-DTO] Auch Fallback fehlgeschlagen: {e2}. Überspringe Edge.")
|
||||
# Überspringe diese Kante - besser als kompletter Fehler
|
||||
|
||||
# Die 3 wichtigsten Kanten als Begründung formulieren
|
||||
top_edges = sorted(edges_dto, key=lambda e: e.confidence, reverse=True)
|
||||
for e in top_edges[:3]:
|
||||
peer = e.source if e.direction == "in" else e.target
|
||||
# WP-24c v4.5.3: Unterstütze alle explicit-Varianten (explicit, explicit:callout, etc.)
|
||||
prov_txt = "Bestätigte" if e.provenance and e.provenance.startswith("explicit") else "KI-basierte"
|
||||
boost_txt = f" [Boost x{applied_boosts.get(e.kind)}]" if applied_boosts and e.kind in applied_boosts else ""
|
||||
|
||||
reasons.append(Reason(
|
||||
kind="edge",
|
||||
message=f"{prov_txt} Kante '{e.kind}'{boost_txt} von/zu '{peer}'.",
|
||||
score_impact=edge_w_cfg * e.confidence
|
||||
))
|
||||
|
||||
if scoring_debug["cent_bonus"] > 0.01:
|
||||
reasons.append(Reason(kind="centrality", message="Die Notiz ist ein zentraler Informations-Hub.", score_impact=breakdown.centrality_contribution))
|
||||
|
||||
return Explanation(
|
||||
breakdown=breakdown,
|
||||
reasons=reasons,
|
||||
related_edges=edges_dto if edges_dto else None,
|
||||
applied_boosts=applied_boosts
|
||||
)
|
||||
|
||||
# ==============================================================================
|
||||
# 3. CORE RETRIEVAL PIPELINE
|
||||
# ==============================================================================
|
||||
|
||||
def _build_hits_from_semantic(
|
||||
hits: Iterable[Tuple[str, float, Dict[str, Any]]],
|
||||
top_k: int,
|
||||
used_mode: str,
|
||||
subgraph: ga.Subgraph | None = None,
|
||||
explain: bool = False,
|
||||
dynamic_edge_boosts: Dict[str, float] = None
|
||||
) -> QueryResponse:
|
||||
"""
|
||||
Wandelt semantische Roh-Treffer in bewertete QueryHits um.
|
||||
WP-15c: Implementiert Note-Level Diversity Pooling.
|
||||
"""
|
||||
t0 = time.time()
|
||||
enriched = []
|
||||
|
||||
# Erstes Scoring für alle Kandidaten
|
||||
for pid, semantic_score, payload in hits:
|
||||
edge_bonus, cent_bonus = 0.0, 0.0
|
||||
target_id = payload.get("note_id")
|
||||
|
||||
if subgraph and target_id:
|
||||
try:
|
||||
edge_bonus = float(subgraph.edge_bonus(target_id))
|
||||
cent_bonus = float(subgraph.centrality_bonus(target_id))
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
debug_data = compute_wp22_score(
|
||||
semantic_score, payload, edge_bonus, cent_bonus, dynamic_edge_boosts
|
||||
)
|
||||
enriched.append((pid, semantic_score, payload, debug_data))
|
||||
|
||||
# 1. Sortierung nach finalem mathematischen Score
|
||||
enriched_sorted = sorted(enriched, key=lambda h: h[3]["total"], reverse=True)
|
||||
|
||||
# 2. WP-15c: Note-Level Diversity Pooling
|
||||
# Wir behalten pro note_id nur den Hit mit dem höchsten total_score.
|
||||
# Dies verhindert, dass 10 Chunks derselben Note andere KeyNotes verdrängen.
|
||||
unique_note_hits = []
|
||||
seen_notes = set()
|
||||
|
||||
for item in enriched_sorted:
|
||||
_, _, payload, _ = item
|
||||
note_id = str(payload.get("note_id", "unknown"))
|
||||
|
||||
if note_id not in seen_notes:
|
||||
unique_note_hits.append(item)
|
||||
seen_notes.add(note_id)
|
||||
|
||||
# 3. Begrenzung auf top_k nach dem Diversity-Pooling
|
||||
limited_hits = unique_note_hits[: max(1, top_k)]
|
||||
|
||||
results: List[QueryHit] = []
|
||||
for pid, s_score, pl, dbg in limited_hits:
|
||||
explanation_obj = None
|
||||
if explain:
|
||||
explanation_obj = _build_explanation(
|
||||
semantic_score=float(s_score),
|
||||
payload=pl,
|
||||
scoring_debug=dbg,
|
||||
subgraph=subgraph,
|
||||
target_note_id=pl.get("note_id"),
|
||||
applied_boosts=dynamic_edge_boosts
|
||||
)
|
||||
|
||||
text_content = pl.get("page_content") or pl.get("text") or pl.get("content", "[Kein Text]")
|
||||
|
||||
# WP-24c v4.1.0: RAG-Kontext - source_chunk_id aus Edge-Payload extrahieren
|
||||
source_chunk_id = None
|
||||
if explanation_obj and explanation_obj.related_edges:
|
||||
# Finde die erste Edge mit chunk_id als source
|
||||
for edge in explanation_obj.related_edges:
|
||||
# Prüfe, ob source eine Chunk-ID ist (enthält # oder ist chunk_id)
|
||||
if edge.source and ("#" in edge.source or edge.source.startswith("chunk:")):
|
||||
source_chunk_id = edge.source
|
||||
break
|
||||
|
||||
results.append(QueryHit(
|
||||
node_id=str(pid),
|
||||
note_id=str(pl.get("note_id", "unknown")),
|
||||
semantic_score=float(s_score),
|
||||
edge_bonus=dbg["edge_bonus"],
|
||||
centrality_bonus=dbg["cent_bonus"],
|
||||
total_score=dbg["total"],
|
||||
source={
|
||||
"path": pl.get("path"),
|
||||
"section": pl.get("section") or pl.get("section_title"),
|
||||
"text": text_content
|
||||
},
|
||||
payload=pl,
|
||||
explanation=explanation_obj,
|
||||
source_chunk_id=source_chunk_id # WP-24c v4.1.0: RAG-Kontext
|
||||
))
|
||||
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Finale Ergebnisse
|
||||
latency_ms = int((time.time() - t0) * 1000)
|
||||
if not results:
|
||||
logger.warning(f"⚠️ [EMPTY] Hybride Suche lieferte 0 Ergebnisse (Latency: {latency_ms}ms)")
|
||||
else:
|
||||
logger.info(f"✨ [SUCCESS] Hybride Suche lieferte {len(results)} Treffer (Latency: {latency_ms}ms)")
|
||||
# Top 3 finale Scores loggen
|
||||
# WP-24c v4.5.4: QueryHit hat kein chunk_id Feld - verwende node_id (enthält die Chunk-ID)
|
||||
for i, hit in enumerate(results[:3]):
|
||||
chunk_id = hit.node_id # node_id ist die Chunk-ID (pid)
|
||||
logger.debug(f" [{i+1}] Final: Chunk={chunk_id} | Total-Score={hit.total_score:.4f} | Semantic={hit.semantic_score:.4f} | Edge={hit.edge_bonus:.4f}")
|
||||
|
||||
return QueryResponse(results=results, used_mode=used_mode, latency_ms=latency_ms)
|
||||
|
||||
|
||||
def hybrid_retrieve(req: QueryRequest) -> QueryResponse:
|
||||
"""
|
||||
Die Haupt-Einstiegsfunktion für die hybride Suche.
|
||||
WP-15c: Implementiert Edge-Aggregation (Super-Kanten).
|
||||
WP-24c v4.5.0-DEBUG: Retrieval-Tracer für Diagnose.
|
||||
"""
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Start der hybriden Suche
|
||||
logger.info(f"🔍 [RETRIEVAL] Starte hybride Suche")
|
||||
logger.info(f" -> Query: '{req.query[:100]}...' (Länge: {len(req.query)})")
|
||||
logger.debug(f" ⚙️ [FILTER] Request-Filter: {req.filters}")
|
||||
logger.debug(f" ⚙️ [FILTER] Top-K: {req.top_k}, Expand: {req.expand}, Target-Section: {req.target_section}")
|
||||
client, prefix = _get_client_and_prefix()
|
||||
vector = list(req.query_vector) if req.query_vector else _get_query_vector(req)
|
||||
top_k = req.top_k or 10
|
||||
|
||||
# 1. Semantische Seed-Suche (Wir laden etwas mehr für das Pooling)
|
||||
# WP-24c v4.1.0: Section-Filtering unterstützen
|
||||
target_section = getattr(req, "target_section", None)
|
||||
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Vor semantischer Suche
|
||||
logger.debug(f"🔍 [RETRIEVAL] Starte semantische Seed-Suche (Top-K: {top_k * 3}, Target-Section: {target_section})")
|
||||
|
||||
hits = _semantic_hits(client, prefix, vector, top_k=top_k * 3, filters=req.filters, target_section=target_section)
|
||||
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Nach semantischer Suche
|
||||
logger.debug(f"📊 [SEED-HITS] Semantische Suche lieferte {len(hits)} Seed-Treffer")
|
||||
|
||||
# 2. Graph Expansion Konfiguration
|
||||
expand_cfg = req.expand if isinstance(req.expand, dict) else {}
|
||||
depth = int(expand_cfg.get("depth", 1))
|
||||
boost_edges = getattr(req, "boost_edges", {}) or {}
|
||||
|
||||
subgraph: ga.Subgraph | None = None
|
||||
if depth > 0 and hits:
|
||||
# WP-24c v4.5.2: Chunk-Aware Graph Traversal
|
||||
# Extrahiere sowohl note_id als auch chunk_id (pid) direkt aus den Hits
|
||||
# Dies stellt sicher, dass Chunk-Scope Edges gefunden werden
|
||||
seed_note_ids = list({h[2].get("note_id") for h in hits if h[2].get("note_id")})
|
||||
seed_chunk_ids = list({h[0] for h in hits if h[0]}) # pid ist die Chunk-ID
|
||||
|
||||
# Kombiniere beide Sets für vollständige Seed-Abdeckung
|
||||
# Chunk-IDs können auch als Note-IDs fungieren (für Note-Scope Edges)
|
||||
all_seed_ids = list(set(seed_note_ids + seed_chunk_ids))
|
||||
|
||||
if all_seed_ids:
|
||||
try:
|
||||
# WP-24c v4.5.2: Chunk-IDs sind bereits aus Hits extrahiert
|
||||
# Zusätzlich können wir noch weitere Chunk-IDs für die Note-IDs laden
|
||||
# (für den Fall, dass nicht alle Chunks in den Top-K Hits sind)
|
||||
additional_chunk_ids = _get_chunk_ids_for_notes(client, prefix, seed_note_ids)
|
||||
# Kombiniere direkte Chunk-IDs aus Hits mit zusätzlich geladenen
|
||||
all_chunk_ids = list(set(seed_chunk_ids + additional_chunk_ids))
|
||||
|
||||
# WP-24c v4.5.2: Erweiterte Edge-Retrieval mit Chunk-Scope und Section-Filtering
|
||||
# Verwende all_seed_ids (enthält sowohl note_id als auch chunk_id)
|
||||
# und all_chunk_ids für explizite Chunk-Scope Edge-Suche
|
||||
subgraph = ga.expand(
|
||||
client, prefix, all_seed_ids,
|
||||
depth=depth,
|
||||
edge_types=expand_cfg.get("edge_types"),
|
||||
chunk_ids=all_chunk_ids,
|
||||
target_section=target_section
|
||||
)
|
||||
|
||||
# WP-24c v4.5.2: Debug-Logging für Chunk-Awareness
|
||||
logger.debug(f"🔍 [SEEDS] Note-IDs: {len(seed_note_ids)}, Chunk-IDs: {len(seed_chunk_ids)}, Total Seeds: {len(all_seed_ids)}")
|
||||
logger.debug(f" -> Zusätzliche Chunk-IDs geladen: {len(additional_chunk_ids)}, Total Chunk-IDs: {len(all_chunk_ids)}")
|
||||
|
||||
# --- WP-24c v4.1.0: Chunk-Level Edge-Aggregation & Deduplizierung ---
|
||||
# Verhindert Score-Explosion durch multiple Links auf versch. Abschnitte.
|
||||
# Logik: 1. Kante zählt voll, weitere dämpfen auf Faktor 0.1.
|
||||
# Erweitert um Chunk-Level Tracking für präzise In-Degree-Berechnung.
|
||||
if subgraph and hasattr(subgraph, "adj"):
|
||||
# WP-24c v4.1.0: Chunk-Level In-Degree Tracking
|
||||
chunk_level_in_degree = defaultdict(int) # target -> count of chunk sources
|
||||
|
||||
for src, edge_list in subgraph.adj.items():
|
||||
# Gruppiere Kanten nach Ziel-Note (Deduplizierung ID_A -> ID_B)
|
||||
by_target = defaultdict(list)
|
||||
for e in edge_list:
|
||||
by_target[e["target"]].append(e)
|
||||
|
||||
# WP-24c v4.1.0: Chunk-Level In-Degree Tracking
|
||||
# Wenn source eine Chunk-ID ist, zähle für Chunk-Level In-Degree
|
||||
if e.get("chunk_id") or (src and ("#" in src or src.startswith("chunk:"))):
|
||||
chunk_level_in_degree[e["target"]] += 1
|
||||
|
||||
aggregated_list = []
|
||||
for tgt, edges in by_target.items():
|
||||
if len(edges) > 1:
|
||||
# Sortiere: Stärkste Kante zuerst (Authority-Priorisierung)
|
||||
sorted_edges = sorted(
|
||||
edges,
|
||||
key=lambda x: (
|
||||
x.get("weight", 0.0) *
|
||||
(1.0 if not x.get("virtual", False) else 0.5) * # Virtual-Penalty
|
||||
float(x.get("confidence", 1.0)) # Confidence-Boost
|
||||
),
|
||||
reverse=True
|
||||
)
|
||||
primary = sorted_edges[0]
|
||||
|
||||
# Aggregiertes Gewicht berechnen (Sättigungs-Logik)
|
||||
total_w = primary.get("weight", 0.0)
|
||||
chunk_count = 0
|
||||
for secondary in sorted_edges[1:]:
|
||||
total_w += secondary.get("weight", 0.0) * 0.1
|
||||
if secondary.get("chunk_id") or (secondary.get("source") and ("#" in secondary.get("source", "") or secondary.get("source", "").startswith("chunk:"))):
|
||||
chunk_count += 1
|
||||
|
||||
primary["weight"] = total_w
|
||||
primary["is_super_edge"] = True # Flag für Explanation Layer
|
||||
primary["edge_count"] = len(edges)
|
||||
primary["chunk_source_count"] = chunk_count + (1 if (primary.get("chunk_id") or (primary.get("source") and ("#" in primary.get("source", "") or primary.get("source", "").startswith("chunk:")))) else 0)
|
||||
aggregated_list.append(primary)
|
||||
else:
|
||||
edge = edges[0]
|
||||
# WP-24c v4.1.0: Chunk-Count auch für einzelne Edges
|
||||
if edge.get("chunk_id") or (edge.get("source") and ("#" in edge.get("source", "") or edge.get("source", "").startswith("chunk:"))):
|
||||
edge["chunk_source_count"] = 1
|
||||
aggregated_list.append(edge)
|
||||
|
||||
# In-Place Update der Adjazenzliste des Graphen
|
||||
subgraph.adj[src] = aggregated_list
|
||||
|
||||
# Re-Sync der In-Degrees für Centrality-Bonus (Aggregation konsistent halten)
|
||||
subgraph.in_degree = defaultdict(int)
|
||||
for src, edges in subgraph.adj.items():
|
||||
for e in edges:
|
||||
subgraph.in_degree[e["target"]] += 1
|
||||
|
||||
# WP-24c v4.1.0: Chunk-Level In-Degree als Attribut speichern
|
||||
subgraph.chunk_level_in_degree = chunk_level_in_degree
|
||||
|
||||
# --- WP-24c v4.1.0: Authority-Priorisierung (Provenance & Confidence) ---
|
||||
if subgraph and hasattr(subgraph, "adj"):
|
||||
for src, edges in subgraph.adj.items():
|
||||
for e in edges:
|
||||
# A. Provenance Weighting (nutzt PROVENANCE_PRIORITY aus graph_utils)
|
||||
prov = e.get("provenance", "rule")
|
||||
prov_key = f"{prov}:{e.get('kind', 'related_to')}" if ":" not in prov else prov
|
||||
prov_w = PROVENANCE_PRIORITY.get(prov_key, PROVENANCE_PRIORITY.get(prov, 0.7))
|
||||
|
||||
# B. Confidence-Weighting (aus Edge-Payload)
|
||||
confidence = float(e.get("confidence", 1.0))
|
||||
|
||||
# C. Virtual-Flag De-Priorisierung
|
||||
is_virtual = e.get("virtual", False)
|
||||
virtual_penalty = 0.5 if is_virtual else 1.0
|
||||
|
||||
# D. Intent Boost Multiplikator
|
||||
kind = e.get("kind")
|
||||
intent_multiplier = boost_edges.get(kind, 1.0)
|
||||
|
||||
# Gewichtung anpassen (Authority-Priorisierung)
|
||||
e["weight"] = e.get("weight", 1.0) * prov_w * confidence * virtual_penalty * intent_multiplier
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Graph Expansion failed: {e}")
|
||||
subgraph = None
|
||||
|
||||
# 3. Scoring & Explanation Generierung
|
||||
# top_k wird erst hier final angewandt
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Vor finaler Hit-Erstellung
|
||||
if subgraph:
|
||||
# WP-24c v4.5.1: Subgraph hat kein .edges Attribut, sondern .adj (Adjazenzliste)
|
||||
# Zähle alle Kanten aus der Adjazenzliste
|
||||
edge_count = sum(len(edges) for edges in subgraph.adj.values()) if hasattr(subgraph, 'adj') else 0
|
||||
logger.debug(f"📊 [GRAPH] Subgraph enthält {edge_count} Kanten")
|
||||
else:
|
||||
logger.debug(f"📊 [GRAPH] Kein Subgraph (depth=0 oder keine Seed-IDs)")
|
||||
|
||||
result = _build_hits_from_semantic(hits, top_k, "hybrid", subgraph, req.explain, boost_edges)
|
||||
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Nach finaler Hit-Erstellung
|
||||
if not result.results:
|
||||
logger.warning(f"⚠️ [EMPTY] Hybride Suche lieferte nach Scoring 0 finale Ergebnisse")
|
||||
else:
|
||||
logger.info(f"✨ [SUCCESS] Hybride Suche lieferte {len(result.results)} finale Treffer (Mode: {result.used_mode})")
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def semantic_retrieve(req: QueryRequest) -> QueryResponse:
|
||||
"""Standard Vektorsuche ohne Graph-Einfluss."""
|
||||
client, prefix = _get_client_and_prefix()
|
||||
vector = _get_query_vector(req)
|
||||
hits = _semantic_hits(client, prefix, vector, req.top_k or 10, req.filters)
|
||||
return _build_hits_from_semantic(hits, req.top_k or 10, "semantic", explain=req.explain)
|
||||
|
||||
|
||||
class Retriever:
|
||||
"""Schnittstelle für die asynchrone Suche."""
|
||||
async def search(self, request: QueryRequest) -> QueryResponse:
|
||||
return hybrid_retrieve(request)
|
||||
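A toy illustration of the WP-15c super-edge saturation used above: the strongest of several parallel edges counts fully, every further edge is damped to 10% (the numbers are made up):

# Three parallel edges A -> B with raw weights 0.9, 0.6, 0.3
weights = sorted([0.9, 0.6, 0.3], reverse=True)
total_w = weights[0] + sum(w * 0.1 for w in weights[1:])
print(total_w)  # 0.99 instead of the naive sum 1.8 - no score explosion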
128
app/core/retrieval/retriever_scoring.py
Normal file

@ -0,0 +1,128 @@
"""
|
||||
FILE: app/core/retrieval/retriever_scoring.py
|
||||
DESCRIPTION: Mathematische Kern-Logik für das WP-22/WP-15c Scoring.
|
||||
Berechnet Relevanz-Scores basierend auf Semantik, Graph-Intelligence und Content Lifecycle.
|
||||
FIX v1.0.3: Optimierte Interaktion zwischen Typ-Boost und Status-Dämpfung.
|
||||
VERSION: 1.0.3
|
||||
STATUS: Active
|
||||
"""
|
||||
import os
|
||||
import logging
|
||||
from functools import lru_cache
|
||||
from typing import Any, Dict, Tuple, Optional
|
||||
|
||||
try:
|
||||
import yaml
|
||||
except ImportError:
|
||||
yaml = None
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@lru_cache
|
||||
def get_weights() -> Tuple[float, float, float]:
|
||||
"""
|
||||
Liefert die Basis-Gewichtung (semantic, edge, centrality) aus der Konfiguration.
|
||||
"""
|
||||
from app.config import get_settings
|
||||
settings = get_settings()
|
||||
|
||||
# Defaults aus Settings laden
|
||||
sem = float(getattr(settings, "RETRIEVER_W_SEM", 1.0))
|
||||
edge = float(getattr(settings, "RETRIEVER_W_EDGE", 0.0))
|
||||
cent = float(getattr(settings, "RETRIEVER_W_CENT", 0.0))
|
||||
|
||||
# Optionaler Override via YAML
|
||||
config_path = os.getenv("MINDNET_RETRIEVER_CONFIG", "config/retriever.yaml")
|
||||
if yaml and os.path.exists(config_path):
|
||||
try:
|
||||
with open(config_path, "r", encoding="utf-8") as f:
|
||||
data = yaml.safe_load(f) or {}
|
||||
scoring = data.get("scoring", {})
|
||||
sem = float(scoring.get("semantic_weight", sem))
|
||||
edge = float(scoring.get("edge_weight", edge))
|
||||
cent = float(scoring.get("centrality_weight", cent))
|
||||
except Exception as e:
|
||||
logger.warning(f"Retriever Configuration could not be fully loaded from {config_path}: {e}")
|
||||
|
||||
return sem, edge, cent
|
||||
|
||||
def get_status_multiplier(payload: Dict[str, Any]) -> float:
|
||||
"""
|
||||
WP-22 A: Content Lifecycle Multiplier.
|
||||
Steuert das Ranking basierend auf dem Reifegrad der Information.
|
||||
|
||||
- stable: 1.2 (Belohnung für verifiziertes Wissen)
|
||||
- active: 1.0 (Standard-Gewichtung)
|
||||
- draft: 0.5 (Dämpfung für unfertige Fragmente)
|
||||
"""
|
||||
status = str(payload.get("status", "active")).lower().strip()
|
||||
if status == "stable":
|
||||
return 1.2
|
||||
if status == "draft":
|
||||
return 0.5
|
||||
return 1.0
|
||||
|
||||
def compute_wp22_score(
|
||||
semantic_score: float,
|
||||
payload: Dict[str, Any],
|
||||
edge_bonus_raw: float = 0.0,
|
||||
cent_bonus_raw: float = 0.0,
|
||||
dynamic_edge_boosts: Optional[Dict[str, float]] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Die zentrale mathematische Scoring-Formel (WP-15c optimiert).
|
||||
Implementiert das Hybrid-Scoring (Semantic * Lifecycle * Graph).
|
||||
|
||||
LOGIK:
|
||||
1. Base = Similarity * StatusMult (Lifecycle-Filter).
|
||||
2. Boosts = (TypeBoost - 1) + (GraphBoni * IntentFactor).
|
||||
3. Final = Base * (1 + Boosts).
|
||||
|
||||
Der edge_bonus_raw enthält bereits die Super-Edge-Aggregation (WP-15c).
|
||||
"""
|
||||
sem_w, edge_w_cfg, cent_w_cfg = get_weights()
|
||||
status_mult = get_status_multiplier(payload)
|
||||
|
||||
# Retriever Weight (Typ-Boost aus types.yaml, z.B. 1.1 für Decisions)
|
||||
node_weight = float(payload.get("retriever_weight", 1.0))
|
||||
|
||||
# 1. Berechnung des Base Scores (Semantik gewichtet durch Lifecycle-Status)
|
||||
# WICHTIG: Der Status wirkt hier als Multiplikator auf die Basis-Relevanz.
|
||||
base_val = float(semantic_score) * status_mult
|
||||
|
||||
# 2. Graph Boost Factor (Intent-spezifische Verstärkung aus decision_engine.yaml)
|
||||
# Erhöht das Gewicht des gesamten Graphen um 50%, wenn ein spezifischer Intent vorliegt.
|
||||
graph_boost_factor = 1.5 if dynamic_edge_boosts and (edge_bonus_raw > 0 or cent_bonus_raw > 0) else 1.0
|
||||
|
||||
# 3. Einzelne Graph-Komponenten berechnen
|
||||
# WP-15c Hinweis: edge_bonus_raw ist durch den retriever.py bereits gedämpft/aggregiert.
|
||||
edge_impact_final = (edge_w_cfg * edge_bonus_raw) * graph_boost_factor
|
||||
cent_impact_final = (cent_w_cfg * cent_bonus_raw) * graph_boost_factor
|
||||
|
||||
# 4. Finales Zusammenführen (Merging)
|
||||
# (node_weight - 1.0) wandelt das Gewicht in einen relativen Bonus um (z.B. 1.2 -> +0.2).
|
||||
# Alle Boni werden addiert und wirken dann auf den base_val.
|
||||
type_impact = node_weight - 1.0
|
||||
total_boost = 1.0 + type_impact + edge_impact_final + cent_impact_final
|
||||
|
||||
total = base_val * total_boost
|
||||
|
||||
# Sicherstellen, dass der Score niemals 0 oder negativ ist (Floor)
|
||||
final_score = max(0.0001, float(total))
|
||||
|
||||
# WP-24c v4.5.0-DEBUG: Retrieval-Tracer - Protokollierung der Score-Berechnung
|
||||
chunk_id = payload.get("chunk_id", payload.get("id", "unknown"))
|
||||
logger.debug(f"📈 [SCORE-TRACE] Chunk: {chunk_id} | Base: {base_val:.4f} | Multiplier: {total_boost:.2f} | Final: {final_score:.4f}")
|
||||
logger.debug(f" -> Details: StatusMult={status_mult:.2f}, TypeImpact={type_impact:.2f}, EdgeImpact={edge_impact_final:.4f}, CentImpact={cent_impact_final:.4f}")
|
||||
|
||||
return {
|
||||
"total": final_score,
|
||||
"edge_bonus": float(edge_bonus_raw),
|
||||
"cent_bonus": float(cent_bonus_raw),
|
||||
"status_multiplier": status_mult,
|
||||
"graph_boost_factor": graph_boost_factor,
|
||||
"type_impact": type_impact,
|
||||
"base_val": base_val,
|
||||
"edge_impact_final": edge_impact_final,
|
||||
"cent_impact_final": cent_impact_final
|
||||
}
|
||||
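A worked example of the scoring formula above, with assumed configuration weights (semantic 1.0, edge 0.2, centrality 0.0) and an active intent boost:

base_val = 0.8 * 1.2                    # semantic 0.8 on a 'stable' note -> 0.96
type_impact = 1.1 - 1.0                 # retriever_weight 1.1 as relative bonus -> 0.10
edge_impact = (0.2 * 0.5) * 1.5         # edge weight * edge bonus * graph boost -> 0.15
total = base_val * (1.0 + type_impact + edge_impact)
# = 0.96 * 1.25 = 1.2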
@ -1,336 +0,0 @@
"""
|
||||
app/core/retriever.py — Hybrider Such-Algorithmus
|
||||
|
||||
Version:
|
||||
0.5.3 (WP-06 Fix: Populate 'payload' in QueryHit for meta-data access)
|
||||
"""
|
||||
from __future__ import annotations
|
||||
|
||||
import os
|
||||
import time
|
||||
from functools import lru_cache
|
||||
from typing import Any, Dict, List, Tuple, Iterable, Optional
|
||||
|
||||
from app.config import get_settings
|
||||
from app.models.dto import (
|
||||
QueryRequest,
|
||||
QueryResponse,
|
||||
QueryHit,
|
||||
Explanation,
|
||||
ScoreBreakdown,
|
||||
Reason,
|
||||
EdgeDTO
|
||||
)
|
||||
import app.core.qdrant as qdr
|
||||
import app.core.qdrant_points as qp
|
||||
import app.services.embeddings_client as ec
|
||||
import app.core.graph_adapter as ga
|
||||
|
||||
try:
|
||||
import yaml # type: ignore[import]
|
||||
except Exception: # pragma: no cover
|
||||
yaml = None # type: ignore[assignment]
|
||||
|
||||
|
||||
@lru_cache
|
||||
def _get_scoring_weights() -> Tuple[float, float, float]:
|
||||
"""Liefert (semantic_weight, edge_weight, centrality_weight) für den Retriever."""
|
||||
settings = get_settings()
|
||||
sem = float(getattr(settings, "RETRIEVER_W_SEM", 1.0))
|
||||
edge = float(getattr(settings, "RETRIEVER_W_EDGE", 0.0))
|
||||
cent = float(getattr(settings, "RETRIEVER_W_CENT", 0.0))
|
||||
|
||||
config_path = os.getenv("MINDNET_RETRIEVER_CONFIG", "config/retriever.yaml")
|
||||
if yaml is None:
|
||||
return sem, edge, cent
|
||||
try:
|
||||
if os.path.exists(config_path):
|
||||
with open(config_path, "r", encoding="utf-8") as f:
|
||||
data = yaml.safe_load(f) or {}
|
||||
scoring = data.get("scoring", {}) or {}
|
||||
sem = float(scoring.get("semantic_weight", sem))
|
||||
edge = float(scoring.get("edge_weight", edge))
|
||||
cent = float(scoring.get("centrality_weight", cent))
|
||||
except Exception:
|
||||
return sem, edge, cent
|
||||
return sem, edge, cent
|
||||
|
||||
|
||||
def _get_client_and_prefix() -> Tuple[Any, str]:
|
||||
"""Liefert (QdrantClient, prefix)."""
|
||||
cfg = qdr.QdrantConfig.from_env()
|
||||
client = qdr.get_client(cfg)
|
||||
return client, cfg.prefix
|
||||
|
||||
|
||||
def _get_query_vector(req: QueryRequest) -> List[float]:
|
||||
"""Liefert den Query-Vektor aus dem Request."""
|
||||
if req.query_vector:
|
||||
return list(req.query_vector)
|
||||
|
||||
if not req.query:
|
||||
raise ValueError("QueryRequest benötigt entweder query oder query_vector")
|
||||
|
||||
settings = get_settings()
|
||||
model_name = settings.MODEL_NAME
|
||||
|
||||
try:
|
||||
return ec.embed_text(req.query, model_name=model_name)
|
||||
except TypeError:
|
||||
return ec.embed_text(req.query)
|
||||
|
||||
|
||||
def _semantic_hits(
|
||||
client: Any,
|
||||
prefix: str,
|
||||
vector: List[float],
|
||||
top_k: int,
|
||||
filters: Dict[str, Any] | None = None,
|
||||
) -> List[Tuple[str, float, Dict[str, Any]]]:
|
||||
"""Führt eine semantische Suche aus."""
|
||||
flt = filters or None
|
||||
raw_hits = qp.search_chunks_by_vector(client, prefix, vector, top=top_k, filters=flt)
|
||||
results: List[Tuple[str, float, Dict[str, Any]]] = []
|
||||
for pid, score, payload in raw_hits:
|
||||
results.append((str(pid), float(score), dict(payload or {})))
|
||||
return results
|
||||
|
||||
|
||||
def _compute_total_score(
|
||||
semantic_score: float,
|
||||
payload: Dict[str, Any],
|
||||
edge_bonus: float = 0.0,
|
||||
cent_bonus: float = 0.0,
|
||||
) -> Tuple[float, float, float]:
|
||||
"""Berechnet total_score."""
|
||||
raw_weight = payload.get("retriever_weight", 1.0)
|
||||
try:
|
||||
weight = float(raw_weight)
|
||||
except (TypeError, ValueError):
|
||||
weight = 1.0
|
||||
if weight < 0.0:
|
||||
weight = 0.0
|
||||
|
||||
sem_w, edge_w, cent_w = _get_scoring_weights()
|
||||
total = (sem_w * float(semantic_score) * weight) + (edge_w * edge_bonus) + (cent_w * cent_bonus)
|
||||
return float(total), float(edge_bonus), float(cent_bonus)
|
||||
|
||||
|
||||
# --- WP-04b Explanation Logic ---
|
||||
|
||||
def _build_explanation(
|
||||
semantic_score: float,
|
||||
payload: Dict[str, Any],
|
||||
edge_bonus: float,
|
||||
cent_bonus: float,
|
||||
subgraph: Optional[ga.Subgraph],
|
||||
node_key: Optional[str]
|
||||
) -> Explanation:
|
||||
"""Erstellt ein Explanation-Objekt."""
|
||||
sem_w, _edge_w, _cent_w = _get_scoring_weights()
|
||||
# Scoring weights erneut laden für Reason-Details
|
||||
_, edge_w_cfg, cent_w_cfg = _get_scoring_weights()
|
||||
|
||||
try:
|
||||
type_weight = float(payload.get("retriever_weight", 1.0))
|
||||
except (TypeError, ValueError):
|
||||
type_weight = 1.0
|
||||
|
||||
note_type = payload.get("type", "unknown")
|
||||
|
||||
breakdown = ScoreBreakdown(
|
||||
semantic_contribution=(sem_w * semantic_score * type_weight),
|
||||
edge_contribution=(edge_w_cfg * edge_bonus),
|
||||
centrality_contribution=(cent_w_cfg * cent_bonus),
|
||||
raw_semantic=semantic_score,
|
||||
raw_edge_bonus=edge_bonus,
|
||||
raw_centrality=cent_bonus,
|
||||
node_weight=type_weight
|
||||
)
|
||||
|
||||
reasons: List[Reason] = []
|
||||
edges_dto: List[EdgeDTO] = []
|
||||
|
||||
if semantic_score > 0.85:
|
||||
reasons.append(Reason(kind="semantic", message="Sehr hohe textuelle Übereinstimmung.", score_impact=breakdown.semantic_contribution))
|
||||
elif semantic_score > 0.70:
|
||||
reasons.append(Reason(kind="semantic", message="Gute textuelle Übereinstimmung.", score_impact=breakdown.semantic_contribution))
|
||||
|
||||
if type_weight != 1.0:
|
||||
msg = "Bevorzugt" if type_weight > 1.0 else "Leicht abgewertet"
|
||||
reasons.append(Reason(kind="type", message=f"{msg} aufgrund des Typs '{note_type}'.", score_impact=(sem_w * semantic_score * (type_weight - 1.0))))
|
||||
|
||||
if subgraph and node_key and edge_bonus > 0:
|
||||
if hasattr(subgraph, "get_outgoing_edges"):
|
||||
outgoing = subgraph.get_outgoing_edges(node_key)
|
||||
for edge in outgoing:
|
||||
target = edge.get("target", "Unknown")
|
||||
kind = edge.get("kind", "edge")
|
||||
weight = edge.get("weight", 0.0)
|
||||
if weight > 0.05:
|
||||
edges_dto.append(EdgeDTO(id=f"{node_key}->{target}:{kind}", kind=kind, source=node_key, target=target, weight=weight, direction="out"))
|
||||
|
||||
if hasattr(subgraph, "get_incoming_edges"):
|
||||
incoming = subgraph.get_incoming_edges(node_key)
|
||||
for edge in incoming:
|
||||
src = edge.get("source", "Unknown")
|
||||
kind = edge.get("kind", "edge")
|
||||
weight = edge.get("weight", 0.0)
|
||||
if weight > 0.05:
|
||||
edges_dto.append(EdgeDTO(id=f"{src}->{node_key}:{kind}", kind=kind, source=src, target=node_key, weight=weight, direction="in"))
|
||||
|
||||
all_edges = sorted(edges_dto, key=lambda e: e.weight, reverse=True)
|
||||
for top_edge in all_edges[:3]:
|
||||
impact = edge_w_cfg * top_edge.weight
|
||||
dir_txt = "Verweist auf" if top_edge.direction == "out" else "Referenziert von"
|
||||
tgt_txt = top_edge.target if top_edge.direction == "out" else top_edge.source
|
||||
reasons.append(Reason(kind="edge", message=f"{dir_txt} '{tgt_txt}' via '{top_edge.kind}'", score_impact=impact, details={"kind": top_edge.kind}))
|
||||
|
||||
if cent_bonus > 0.01:
|
||||
reasons.append(Reason(kind="centrality", message="Knoten liegt zentral im Kontext.", score_impact=breakdown.centrality_contribution))
|
||||
|
||||
return Explanation(breakdown=breakdown, reasons=reasons, related_edges=edges_dto if edges_dto else None)

def _extract_expand_options(req: QueryRequest) -> Tuple[int, List[str] | None]:
    """Extracts depth and edge_types."""
    expand = getattr(req, "expand", None)
    if not expand:
        return 0, None

    depth = 1
    edge_types: List[str] | None = None

    if hasattr(expand, "depth") or hasattr(expand, "edge_types"):
        depth = int(getattr(expand, "depth", 1) or 1)
        types_val = getattr(expand, "edge_types", None)
        if types_val:
            edge_types = list(types_val)
        return depth, edge_types

    if isinstance(expand, dict):
        if "depth" in expand:
            depth = int(expand.get("depth") or 1)
        if "edge_types" in expand and expand["edge_types"] is not None:
            edge_types = list(expand["edge_types"])
        return depth, edge_types

    return 0, None


def _build_hits_from_semantic(
    hits: Iterable[Tuple[str, float, Dict[str, Any]]],
    top_k: int,
    used_mode: str,
    subgraph: ga.Subgraph | None = None,
    explain: bool = False,
) -> QueryResponse:
    """Builds structured QueryHits."""
    t0 = time.time()
    enriched: List[Tuple[str, float, Dict[str, Any], float, float, float]] = []

    for pid, semantic_score, payload in hits:
        edge_bonus = 0.0
        cent_bonus = 0.0
        node_key = payload.get("chunk_id") or payload.get("note_id")

        if subgraph is not None and node_key:
            try:
                edge_bonus = float(subgraph.edge_bonus(node_key))
            except Exception:
                edge_bonus = 0.0
            try:
                cent_bonus = float(subgraph.centrality_bonus(node_key))
            except Exception:
                cent_bonus = 0.0

        total, edge_bonus, cent_bonus = _compute_total_score(semantic_score, payload, edge_bonus=edge_bonus, cent_bonus=cent_bonus)
        enriched.append((pid, float(semantic_score), payload, total, edge_bonus, cent_bonus))

    enriched_sorted = sorted(enriched, key=lambda h: h[3], reverse=True)
    limited = enriched_sorted[: max(1, top_k)]

    results: List[QueryHit] = []
    for pid, semantic_score, payload, total, edge_bonus, cent_bonus in limited:
        explanation_obj = None
        if explain:
            explanation_obj = _build_explanation(
                semantic_score=float(semantic_score),
                payload=payload,
                edge_bonus=edge_bonus,
                cent_bonus=cent_bonus,
                subgraph=subgraph,
                node_key=payload.get("chunk_id") or payload.get("note_id")
            )

        text_content = payload.get("page_content") or payload.get("text") or payload.get("content")

        results.append(QueryHit(
            node_id=str(pid),
            note_id=payload.get("note_id"),
            semantic_score=float(semantic_score),
            edge_bonus=edge_bonus,
            centrality_bonus=cent_bonus,
            total_score=total,
            paths=None,
            source={
                "path": payload.get("path"),
                "section": payload.get("section") or payload.get("section_title"),
                "text": text_content
            },
            # --- FIX: populate the payload field explicitly ---
            payload=payload,
            explanation=explanation_obj
        ))

    dt = int((time.time() - t0) * 1000)
    return QueryResponse(results=results, used_mode=used_mode, latency_ms=dt)


def semantic_retrieve(req: QueryRequest) -> QueryResponse:
    """Purely semantic retriever."""
    client, prefix = _get_client_and_prefix()
    vector = _get_query_vector(req)
    top_k = req.top_k or get_settings().RETRIEVER_TOP_K

    hits = _semantic_hits(client, prefix, vector, top_k=top_k, filters=req.filters)
    return _build_hits_from_semantic(hits, top_k=top_k, used_mode="semantic", subgraph=None, explain=req.explain)


def hybrid_retrieve(req: QueryRequest) -> QueryResponse:
    """Hybrid retriever: semantic search plus optional edge expansion."""
    client, prefix = _get_client_and_prefix()
    if req.query_vector:
        vector = list(req.query_vector)
    else:
        vector = _get_query_vector(req)

    top_k = req.top_k or get_settings().RETRIEVER_TOP_K
    hits = _semantic_hits(client, prefix, vector, top_k=top_k, filters=req.filters)

    depth, edge_types = _extract_expand_options(req)
    subgraph: ga.Subgraph | None = None
    if depth and depth > 0:
        seed_ids: List[str] = []
        for _pid, _score, payload in hits:
            key = payload.get("chunk_id") or payload.get("note_id")
            if key and key not in seed_ids:
                seed_ids.append(key)
        if seed_ids:
            try:
                subgraph = ga.expand(client, prefix, seed_ids, depth=depth, edge_types=edge_types)
            except Exception:
                subgraph = None

    return _build_hits_from_semantic(hits, top_k=top_k, used_mode="hybrid", subgraph=subgraph, explain=req.explain)


class Retriever:
    """
    Wrapper class for WP-05 (Chat).
    """
    def __init__(self):
        pass

    async def search(self, request: QueryRequest) -> QueryResponse:
        return hybrid_retrieve(request)
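
# Usage sketch (the QueryRequest field names here are assumptions inferred
# from the accesses above, not a confirmed schema):
#   req = QueryRequest(query="note taking", top_k=5, explain=True,
#                      expand={"depth": 1, "edge_types": ["references"]})
#   resp = hybrid_retrieve(req)
#   for hit in resp.results:
#       print(hit.total_score, hit.source["path"])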

@@ -1,116 +0,0 @@
"""app/core/retriever_config.py
---------------------------------
Central configuration for the mindnet retriever (WP-04).

Purpose:
- Loads config/retriever.yaml (if present) or falls back to sensible defaults.
- Provides cached access to the retriever config for
  other modules (e.g. graph_adapter, retriever).

Note on future development (self-tuning):
- The parameters defined here are chosen so that they can later be
  overridden by a feedback/learning-to-rank model without touching
  the rest of the architecture.
"""

from __future__ import annotations

import os
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path
from typing import Dict

try:
    import yaml  # type: ignore
except Exception:  # pragma: no cover - fallback if PyYAML is not installed.
    yaml = None  # type: ignore

@dataclass(frozen=True)
class RetrieverConfig:
    semantic_scale: float
    edge_scale: float
    centrality_scale: float
    edge_weights: Dict[str, float]

@lru_cache
def get_retriever_config() -> RetrieverConfig:
    """Loads the retriever configuration (YAML + defaults).

    Order:
    1. Defaults (sensibly chosen starting values).
    2. Optional: config/retriever.yaml, or the path from the
       MINDNET_RETRIEVER_CONFIG env var, overrides the defaults.

    The function is deliberately cached, since the configuration does not
    usually change at runtime. For dynamic reloading the cache could be
    cleared explicitly.
    """

    # 1) Defaults - deliberately conservative.
    semantic_scale = 1.0
    edge_scale = 1.0
    centrality_scale = 1.0

    edge_weights: Dict[str, float] = {
        # Knowledge edges
        "depends_on": 1.0,
        "related_to": 0.7,
        "similar_to": 0.7,
        "references": 0.5,
        # Structural edges
        "belongs_to": 0.2,
        "next": 0.1,
        "prev": 0.1,
        # Other / technical edges
        "backlink": 0.2,
        "references_at": 0.2,
    }

    # 2) Optional: load the YAML configuration
    cfg_path_env = os.getenv("MINDNET_RETRIEVER_CONFIG")
    if cfg_path_env:
        cfg_path = Path(cfg_path_env)
    else:
        # Project root = two levels above app/core/
        cfg_path = Path(__file__).resolve().parents[2] / "config" / "retriever.yaml"

    if yaml is not None and cfg_path.exists():
        try:
            data = yaml.safe_load(cfg_path.read_text(encoding="utf-8")) or {}
        except Exception:
            data = {}

        retr = data.get("retriever") or {}

        # Override the scale values if given
        try:
            semantic_scale = float(retr.get("semantic_scale", semantic_scale))
        except (TypeError, ValueError):
            pass

        try:
            edge_scale = float(retr.get("edge_scale", edge_scale))
        except (TypeError, ValueError):
            pass

        try:
            centrality_scale = float(retr.get("centrality_scale", centrality_scale))
        except (TypeError, ValueError):
            pass

        # Edge weights per edge type
        ew_cfg = retr.get("edge_weights") or {}
        if isinstance(ew_cfg, dict):
            for k, v in ew_cfg.items():
                try:
                    edge_weights[str(k)] = float(v)
                except (TypeError, ValueError):
                    continue

    return RetrieverConfig(
        semantic_scale=semantic_scale,
        edge_scale=edge_scale,
        centrality_scale=centrality_scale,
        edge_weights=edge_weights,
    )

@@ -1,22 +0,0 @@
from __future__ import annotations
import json
import os
from functools import lru_cache
from jsonschema import Draft202012Validator, RefResolver

SCHEMAS_DIR = os.getenv("SCHEMAS_DIR", os.path.join(os.path.dirname(os.path.dirname(__file__)), "..", "schemas"))

@lru_cache(maxsize=16)
def load_schema(name: str) -> dict:
    # name: "note.schema.json" | "chunk.schema.json" | "edge.schema.json"
    path = os.path.join(SCHEMAS_DIR, name)
    if not os.path.isfile(path):
        raise FileNotFoundError(f"Schema not found: {path}")
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

@lru_cache(maxsize=16)
def get_validator(name: str) -> Draft202012Validator:
    schema = load_schema(name)
    resolver = RefResolver.from_schema(schema)
    return Draft202012Validator(schema, resolver=resolver)
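
# Usage sketch (assumes a note.schema.json exists under SCHEMAS_DIR):
#   validator = get_validator("note.schema.json")
#   errors = list(validator.iter_errors({"note_id": "x"}))
#   # errors is empty when the payload conforms to the schema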

@@ -1,30 +1,12 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Module: app/core/type_registry.py
Version: 1.0.0
Date: 2025-11-08

Purpose
-----
Loads an optional type registry (config/types.yaml) and provides
convenient accessor functions. The registry is *optional*:
- If the file is missing or the YAML is broken, a conservative
  default (type "concept") is used and a warning is emitted.
- Changes to the file take effect after a process restart.

Public API
---------------
- load_type_registry(path: str = "config/types.yaml") -> dict
- get_type_config(note_type: str, reg: dict) -> dict
- resolve_note_type(fm_type: str | None, reg: dict) -> str
- effective_chunk_profile(note_type: str, reg: dict) -> str | None
- profile_overlap(profile: str | None) -> tuple[int,int]  # overlap recommendation only

Note
-------
The registry introduces NO breaking changes. Without the file/type the
behavior stays exactly as in the 20251105 release.
FILE: app/core/type_registry.py
DESCRIPTION: Loader for types.yaml.
WP-24c: Robustness fix for chunking_profile vs chunk_profile.
WP-14: Support for central registry structures.
VERSION: 1.1.0 (Audit fix: Profile Key Consistency)
STATUS: Active (support for legacy loaders)
DEPENDENCIES: yaml, os, functools
EXTERNAL_CONFIG: config/types.yaml
"""
from __future__ import annotations


@@ -37,12 +19,12 @@ try:
except Exception:
    yaml = None  # only needed once a file is actually read

# Conservative default - deliberately minimal
# Conservative default - WP-24c: now consistently uses 'chunking_profile'
_DEFAULT_REGISTRY: Dict[str, Any] = {
    "version": "1.0",
    "types": {
        "concept": {
            "chunk_profile": "medium",
            "chunking_profile": "medium",
            "edge_defaults": ["references", "related_to"],
            "retriever_weight": 1.0,
        }

@@ -52,7 +34,6 @@ _DEFAULT_REGISTRY: Dict[str, Any] = {
}

# Chunk profile -> overlap recommendations (only for synthetic windowing)
# The absolute chunk lengths remain the chunker's job (assemble_chunks).
_PROFILE_TO_OVERLAP: Dict[str, Tuple[int, int]] = {
    "short": (20, 30),
    "medium": (40, 60),

@@ -64,7 +45,7 @@ _PROFILE_TO_OVERLAP: Dict[str, Tuple[int, int]] = {
def load_type_registry(path: str = "config/types.yaml") -> Dict[str, Any]:
    """
    Loads the registry from 'path'. On errors, a conservative default is returned.
    The return value is cached *process-wide*.
    The return value is cached process-wide.
    """
    if not path:
        return dict(_DEFAULT_REGISTRY)

@@ -73,7 +54,6 @@ def load_type_registry(path: str = "config/types.yaml") -> Dict[str, Any]:
        return dict(_DEFAULT_REGISTRY)

    if yaml is None:
        # PyYAML missing -> fall back to the default
        return dict(_DEFAULT_REGISTRY)

    try:

@@ -90,6 +70,7 @@ def load_type_registry(path: str = "config/types.yaml") -> Dict[str, Any]:


def get_type_config(note_type: Optional[str], reg: Dict[str, Any]) -> Dict[str, Any]:
    """Extracts the configuration for a specific type."""
    t = (note_type or "concept").strip().lower()
    types = (reg or {}).get("types", {}) if isinstance(reg, dict) else {}
    return types.get(t) or types.get("concept") or _DEFAULT_REGISTRY["types"]["concept"]

@@ -103,8 +84,13 @@ def resolve_note_type(fm_type: Optional[str], reg: Dict[str, Any]) -> str:


def effective_chunk_profile(note_type: Optional[str], reg: Dict[str, Any]) -> Optional[str]:
    """
    Determines the active chunking profile for a note type.
    Fix (audit problem 2): checks both key variants for 100% compatibility.
    """
    cfg = get_type_config(note_type, reg)
    prof = cfg.get("chunk_profile")
    # Check 'chunking_profile' (standard) OR 'chunk_profile' (legacy/fallback)
    prof = cfg.get("chunking_profile") or cfg.get("chunk_profile")
    if isinstance(prof, str) and prof.strip():
        return prof.strip().lower()
    return None

@@ -114,4 +100,4 @@ def profile_overlap(profile: Optional[str]) -> Tuple[int, int]:
    """Returns an overlap recommendation (low, high) for the profile."""
    if not profile:
        return _PROFILE_TO_OVERLAP["medium"]
    return _PROFILE_TO_OVERLAP.get(profile.strip().lower(), _PROFILE_TO_OVERLAP["medium"])
    return _PROFILE_TO_OVERLAP.get(profile.strip().lower(), _PROFILE_TO_OVERLAP["medium"])
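
# e.g. profile_overlap("short") -> (20, 30); unknown or empty profiles fall
# back to the "medium" recommendation of (40, 60).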

@@ -1,16 +0,0 @@
from __future__ import annotations
from typing import Dict
from jsonschema import ValidationError
from .schema_loader import get_validator

NOTE_SCHEMA_NAME = "note.schema.json"

def validate_note_payload(payload: Dict) -> None:
    validator = get_validator(NOTE_SCHEMA_NAME)
    errors = sorted(validator.iter_errors(payload), key=lambda e: e.path)
    if errors:
        msgs = []
        for e in errors:
            loc = ".".join([str(x) for x in e.path]) or "<root>"
            msgs.append(f"{loc}: {e.message}")
        raise ValidationError(" | ".join(msgs))
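
# Usage sketch: raises a single ValidationError whose message joins all
# violations as "location: message | location: message ...":
#   validate_note_payload({"note_id": "x"})  # assuming required fields are missing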

@@ -1,40 +0,0 @@
"""
Version 1
"""
from __future__ import annotations
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List, Optional
from sentence_transformers import SentenceTransformer

app = FastAPI(title="mindnet-embed", version="1.0")

MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"  # 384-dim
_model: SentenceTransformer | None = None

class EmbedIn(BaseModel):
    model: Optional[str] = None
    inputs: List[str]

class EmbedOut(BaseModel):
    embeddings: List[List[float]]

@app.on_event("startup")
def _load_model():
    global _model
    _model = SentenceTransformer(MODEL_NAME)

@app.get("/health")
def health():
    return {"ok": True, "model": MODEL_NAME, "dim": 384}

@app.post("/embed", response_model=EmbedOut)
def embed(payload: EmbedIn) -> EmbedOut:
    if _model is None:
        raise HTTPException(status_code=503, detail="Model not loaded")
    if not payload.inputs:
        return EmbedOut(embeddings=[])
    vecs = _model.encode(payload.inputs, normalize_embeddings=False).tolist()
    if any(len(v) != 384 for v in vecs):
        raise HTTPException(status_code=500, detail="Embedding size mismatch (expected 384)")
    return EmbedOut(embeddings=vecs)
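
# Client-side sketch against a locally running instance (host/port are
# assumptions, not part of this file):
#   import requests
#   r = requests.post("http://localhost:8000/embed",
#                     json={"inputs": ["hello world"]}, timeout=30)
#   vectors = r.json()["embeddings"]  # list of 384-dim float vectors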

@@ -1,6 +1,10 @@
"""
Version 0.1

FILE: app/embeddings.py
DESCRIPTION: Local wrapper for SentenceTransformer embeddings.
VERSION: 0.1.0
STATUS: Active (confirmation by callers required)
DEPENDENCIES: app.config, sentence_transformers
LAST_ANALYSIS: 2025-12-15
"""

from __future__ import annotations

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui.py
DESCRIPTION: Main entrypoint for Streamlit. Router that loads the modules (Chat, Editor, Graph) based on the sidebar selection.
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: streamlit, ui_config, ui_sidebar, ui_chat, ui_editor, ui_graph_service, ui_graph*, ui_graph_cytoscape
LAST_ANALYSIS: 2025-12-15
"""

import streamlit as st
import uuid

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui_api.py
DESCRIPTION: Wrapper for backend calls (Chat, Ingest, Feedback). Encapsulates requests and error handling.
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: requests, streamlit, ui_config
LAST_ANALYSIS: 2025-12-15
"""

import requests
import streamlit as st
from ui_config import CHAT_ENDPOINT, INGEST_ANALYZE_ENDPOINT, INGEST_SAVE_ENDPOINT, FEEDBACK_ENDPOINT, API_TIMEOUT

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui_callbacks.py
DESCRIPTION: Event handlers for UI interactions. Implements the transition from the graph to the editor (state transfer).
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: streamlit, os, ui_utils
LAST_ANALYSIS: 2025-12-15
"""

import streamlit as st
import os
from ui_utils import build_markdown_doc

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui_chat.py
DESCRIPTION: Chat UI. Renders the message history and source expanders with feedback buttons, and delegates to the editor when needed.
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: streamlit, ui_api, ui_editor
LAST_ANALYSIS: 2025-12-15
"""

import streamlit as st
from ui_api import send_chat_message, submit_feedback
from ui_editor import render_draft_editor

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui_config.py
DESCRIPTION: Central configuration for the frontend. Defines API endpoints, timeouts and graph styles (colors).
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: os, hashlib, dotenv, pathlib
LAST_ANALYSIS: 2025-12-15
"""

import os
import hashlib
from dotenv import load_dotenv

@@ -1,3 +1,11 @@
"""
FILE: app/frontend/ui_editor.py
DESCRIPTION: Markdown editor with live preview.
Refactored for WP-14: asynchronous feedback handling (queued state).
VERSION: 2.7.0 (Fix: Async Save UI)
STATUS: Active
DEPENDENCIES: streamlit, uuid, re, datetime, ui_utils, ui_api
"""
import streamlit as st
import uuid
import re

@@ -68,14 +76,11 @@ def render_draft_editor(msg):

    # --- UI LAYOUT ---

    # Header info (show the debug path so we can be sure)
    origin_fname = st.session_state.get(f"{key_base}_origin_filename")

    if origin_fname:
        # Extract the file name for a clean display
        display_name = str(origin_fname).split("/")[-1]
        st.success(f"📂 **Update-Modus**: `{display_name}`")
        # Debugging: show the full path in an expander
        with st.expander("Dateipfad Details", expanded=False):
            st.code(origin_fname)
    st.markdown(f'<div class="draft-box" style="border-left: 5px solid #ff9f43;">', unsafe_allow_html=True)

@@ -165,21 +170,33 @@ def render_draft_editor(msg):
    save_label = "💾 Update speichern" if origin_fname else "💾 Neu anlegen & Indizieren"

    if st.button(save_label, type="primary", key=f"{key_base}_save"):
        with st.spinner("Speichere im Vault..."):
        with st.spinner("Sende an Backend..."):
            if origin_fname:
                # UPDATE: the target is the exact path
                target_file = origin_fname
            else:
                # CREATE: new file name
                raw_title = final_meta.get("title", "draft")
                target_file = f"{datetime.now().strftime('%Y%m%d')}-{slugify(raw_title)[:60]}.md"

            result = save_draft_to_vault(final_doc, filename=target_file)

            # --- WP-14 CHANGE START: handling the async response ---
            if "error" in result:
                st.error(f"Fehler: {result['error']}")
            else:
                st.success(f"Gespeichert: {result.get('file_path')}")
                status = result.get("status", "success")
                file_path = result.get("file_path", "unbekannt")

                if status == "queued":
                    # New status for async processing
                    st.info(f"✅ **Eingereiht:** Datei `{file_path}` wurde gespeichert.")
                    st.caption("Die KI-Analyse und Indizierung läuft im Hintergrund. Du kannst weiterarbeiten.")
                else:
                    # Legacy / synchronous case
                    st.success(f"Gespeichert: {file_path}")

                st.balloons()
            # --- WP-14 CHANGE END ---

    with b2:
        if st.button("📋 Code anzeigen", key=f"{key_base}_btn_copy"):
            st.code(final_doc, language="markdown")

@@ -189,25 +206,18 @@ def render_draft_editor(msg):
def render_manual_editor():
    """
    Renders the manual editor.
    CHECKS whether an edit request from the graph is pending!
    """

    target_msg = None

    # 1. Check: are there messages in the history?
    if st.session_state.messages:
        last_msg = st.session_state.messages[-1]

        # 2. Is the last message an edit request? (Recognizable by the query_id prefix 'edit_')
        qid = str(last_msg.get("query_id", ""))
        if qid.startswith("edit_"):
            target_msg = last_msg

    # 3. Fallback: empty template if no edit request is pending
    if not target_msg:
        target_msg = {
            "content": "---\ntype: concept\ntitle: Neue Notiz\nstatus: draft\ntags: []\n---\n# Titel\n",
            "query_id": f"manual_{uuid.uuid4()}"  # Own id so that fresh state is created
            "query_id": f"manual_{uuid.uuid4()}"
        }

    render_draft_editor(target_msg)

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui_graph.py
DESCRIPTION: Legacy graph explorer (Streamlit-Agraph). Implements the physics simulation (BarnesHut) and the direct jump to the editor.
VERSION: 2.6.0
STATUS: Maintenance (Active Fallback)
DEPENDENCIES: streamlit, streamlit_agraph, qdrant_client, ui_config, ui_callbacks
LAST_ANALYSIS: 2025-12-15
"""

import streamlit as st
from streamlit_agraph import agraph, Config
from qdrant_client import models

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui_graph_cytoscape.py
DESCRIPTION: Modern graph explorer (Cytoscape.js). Features: COSE layout, deep linking (URL params), active inspector pattern (CSS styling without re-render).
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: streamlit, st_cytoscape, qdrant_client, ui_config, ui_callbacks
LAST_ANALYSIS: 2025-12-15
"""

import streamlit as st
from st_cytoscape import cytoscape
from qdrant_client import models

@@ -60,20 +69,35 @@ def render_graph_explorer_cytoscape(graph_service):
    search_term = st.text_input("Suche Notiz", placeholder="Titel eingeben...", key="cy_search")

    if search_term:
        hits, _ = graph_service.client.scroll(
            collection_name=f"{COLLECTION_PREFIX}_notes",
            limit=10,
            scroll_filter=models.Filter(must=[models.FieldCondition(key="title", match=models.MatchText(text=search_term))])
        )
        options = {h.payload['title']: h.payload['note_id'] for h in hits}

        if options:
            selected_title = st.selectbox("Ergebnisse:", list(options.keys()), key="cy_select")
            if st.button("Laden", use_container_width=True, key="cy_load"):
                new_id = options[selected_title]
                st.session_state.graph_center_id = new_id
                st.session_state.graph_inspected_id = new_id
                st.rerun()
        try:
            hits, _ = graph_service.client.scroll(
                collection_name=f"{COLLECTION_PREFIX}_notes",
                limit=10,
                scroll_filter=models.Filter(must=[models.FieldCondition(key="title", match=models.MatchText(text=search_term))])
            )
            options = {}
            for h in hits:
                if h.payload and 'title' in h.payload and 'note_id' in h.payload:
                    title = h.payload['title']
                    note_id = h.payload['note_id']
                    # Avoid duplicates (in case several chunks/notes share the same title)
                    if title not in options:
                        options[title] = note_id

            if options:
                selected_title = st.selectbox("Ergebnisse:", list(options.keys()), key="cy_select")
                if st.button("Laden", use_container_width=True, key="cy_load"):
                    new_id = options[selected_title]
                    st.session_state.graph_center_id = new_id
                    st.session_state.graph_inspected_id = new_id
                    st.rerun()
            else:
                # Show an info message when no results were found
                st.info(f"Keine Notizen mit '{search_term}' im Titel gefunden.")
        except Exception as e:
            st.error(f"Fehler bei der Suche: {e}")
            import traceback
            st.code(traceback.format_exc())

    st.divider()

@@ -129,6 +153,26 @@ def render_graph_explorer_cytoscape(graph_service):
    )
    # 2. Detail data (inspector)
    inspected_data = graph_service.get_note_with_full_content(inspected_id)

    # DEBUG: show debug information
    with st.expander("🔍 Debug-Informationen", expanded=False):
        st.write(f"**Gefundene Knoten:** {len(nodes_data) if nodes_data else 0}")
        st.write(f"**Gefundene Kanten:** {len(edges_data) if edges_data else 0}")
        if nodes_data:
            st.write("**Knoten-IDs:**")
            for n in nodes_data[:10]:
                nid = getattr(n, 'id', 'N/A')
                st.write(f" - {nid}")
            if len(nodes_data) > 10:
                st.write(f" ... und {len(nodes_data) - 10} weitere")
        if edges_data:
            st.write("**Kanten:**")
            for e in edges_data[:10]:
                src = getattr(e, "source", "N/A")
                tgt = getattr(e, "to", getattr(e, "target", "N/A"))
                st.write(f" - {src} -> {tgt}")
            if len(edges_data) > 10:
                st.write(f" ... und {len(edges_data) - 10} weitere")

    # --- ACTION BAR ---
    action_container = st.container()

@@ -165,7 +209,12 @@ def render_graph_explorer_cytoscape(graph_service):
        st.markdown(f"**ID:** `{inspected_data.get('note_id')}`")
        st.markdown(f"**Typ:** `{inspected_data.get('type')}`")
    with col_i2:
        st.markdown(f"**Tags:** {', '.join(inspected_data.get('tags', []))}")
        tags = inspected_data.get('tags', [])
        if isinstance(tags, list):
            tags_str = ', '.join(tags) if tags else "Keine"
        else:
            tags_str = str(tags) if tags else "Keine"
        st.markdown(f"**Tags:** {tags_str}")
        path_check = "✅" if inspected_data.get('path') else "❌"
        st.markdown(f"**Pfad:** {path_check}")

@@ -180,12 +229,27 @@ def render_graph_explorer_cytoscape(graph_service):
    # --- GRAPH ELEMENTS ---
    cy_elements = []

    # Validation: check whether nodes_data is present
    if not nodes_data:
        st.warning("⚠️ Keine Knoten gefunden. Bitte wähle eine andere Notiz.")
        # Still show the inspector if data is available
        if inspected_data:
            st.info(f"**Hinweis:** Die Notiz '{inspected_data.get('title', inspected_id)}' wurde gefunden, hat aber keine Verbindungen im Graphen.")
        return

    # Build a set of all node ids for fast validation
    node_ids = {n.id for n in nodes_data if hasattr(n, 'id') and n.id}

    # Add nodes
    for n in nodes_data:
        if not hasattr(n, 'id') or not n.id:
            continue

        is_center = (n.id == center_id)
        is_inspected = (n.id == inspected_id)

        tooltip_text = n.title if n.title else n.label
        display_label = n.label
        tooltip_text = getattr(n, 'title', None) or getattr(n, 'label', '')
        display_label = getattr(n, 'label', str(n.id))
        if len(display_label) > 15 and " " in display_label:
            display_label = display_label.replace(" ", "\n", 1)

@@ -193,7 +257,7 @@ def render_graph_explorer_cytoscape(graph_service):
            "data": {
                "id": n.id,
                "label": display_label,
                "bg_color": n.color,
                "bg_color": getattr(n, 'color', '#8395a7'),
                "tooltip": tooltip_text
            },
            # Appearance is controlled purely via classes (.inspected / .center)

@@ -202,18 +266,22 @@ def render_graph_explorer_cytoscape(graph_service):
        }
        cy_elements.append(cy_node)

    for e in edges_data:
        target_id = getattr(e, "to", getattr(e, "target", None))
        if target_id:
            cy_edge = {
                "data": {
                    "source": e.source,
                    "target": target_id,
                    "label": e.label,
                    "line_color": e.color
    # Add edges - only when both nodes are present in the graph
    if edges_data:
        for e in edges_data:
            source_id = getattr(e, "source", None)
            target_id = getattr(e, "to", getattr(e, "target", None))
            # Only add when both ids exist AND both nodes are in the graph
            if source_id and target_id and source_id in node_ids and target_id in node_ids:
                cy_edge = {
                    "data": {
                        "source": source_id,
                        "target": target_id,
                        "label": getattr(e, "label", ""),
                        "line_color": getattr(e, "color", "#bdc3c7")
                    }
                }
            }
            cy_elements.append(cy_edge)
                cy_elements.append(cy_edge)

    # --- STYLESHEET ---
    stylesheet = [

@@ -283,43 +351,47 @@ def render_graph_explorer_cytoscape(graph_service):
    ]

    # --- RENDER ---
    graph_key = f"cy_{center_id}_{st.session_state.cy_depth}_{st.session_state.cy_ideal_edge_len}"
    # Only render when there are elements to show
    if not cy_elements:
        st.warning("⚠️ Keine Graph-Elemente zum Anzeigen gefunden.")
    else:
        graph_key = f"cy_{center_id}_{st.session_state.cy_depth}_{st.session_state.cy_ideal_edge_len}"

    clicked_elements = cytoscape(
        elements=cy_elements,
        stylesheet=stylesheet,
        layout={
            "name": "cose",
            "idealEdgeLength": st.session_state.cy_ideal_edge_len,
            "nodeOverlap": 20,
            "refresh": 20,
            "fit": True,
            "padding": 50,
            "randomize": False,
            "componentSpacing": 100,
            "nodeRepulsion": st.session_state.cy_node_repulsion,
            "edgeElasticity": 100,
            "nestingFactor": 5,
            "gravity": 80,
            "numIter": 1000,
            "initialTemp": 200,
            "coolingFactor": 0.95,
            "minTemp": 1.0,
            "animate": False
        },
        key=graph_key,
        height="700px"
    )
        clicked_elements = cytoscape(
            elements=cy_elements,
            stylesheet=stylesheet,
            layout={
                "name": "cose",
                "idealEdgeLength": st.session_state.cy_ideal_edge_len,
                "nodeOverlap": 20,
                "refresh": 20,
                "fit": True,
                "padding": 50,
                "randomize": False,
                "componentSpacing": 100,
                "nodeRepulsion": st.session_state.cy_node_repulsion,
                "edgeElasticity": 100,
                "nestingFactor": 5,
                "gravity": 80,
                "numIter": 1000,
                "initialTemp": 200,
                "coolingFactor": 0.95,
                "minTemp": 1.0,
                "animate": False
            },
            key=graph_key,
            height="700px"
        )

    # --- EVENT HANDLING ---
    if clicked_elements:
        clicked_nodes = clicked_elements.get("nodes", [])
        if clicked_nodes:
            clicked_id = clicked_nodes[0]

            if clicked_id != st.session_state.graph_inspected_id:
                st.session_state.graph_inspected_id = clicked_id
                st.rerun()
        # --- EVENT HANDLING ---
        if clicked_elements:
            clicked_nodes = clicked_elements.get("nodes", [])
            if clicked_nodes:
                clicked_id = clicked_nodes[0]

                if clicked_id != st.session_state.graph_inspected_id:
                    st.session_state.graph_inspected_id = clicked_id
                    st.rerun()

    else:
        st.info("👈 Bitte wähle links eine Notiz aus.")

@@ -1,15 +1,28 @@
"""
FILE: app/frontend/ui_graph_service.py
DESCRIPTION: Data layer for the graph. Accesses Qdrant directly (for performance) to load nodes/edges and reconstruct texts ("stitching").
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: qdrant_client, streamlit_agraph, ui_config, re
LAST_ANALYSIS: 2025-12-15
"""

import re
from qdrant_client import QdrantClient, models
from streamlit_agraph import Node, Edge
from ui_config import GRAPH_COLORS, get_edge_color, SYSTEM_EDGES
from ui_config import COLLECTION_PREFIX, GRAPH_COLORS, get_edge_color, SYSTEM_EDGES

class GraphExplorerService:
    def __init__(self, url, api_key=None, prefix="mindnet"):
    def __init__(self, url, api_key=None, prefix=None):
        """
        Initializes the service. Uses COLLECTION_PREFIX from the config
        unless a specific prefix was passed in.
        """
        self.client = QdrantClient(url=url, api_key=api_key)
        self.prefix = prefix
        self.notes_col = f"{prefix}_notes"
        self.chunks_col = f"{prefix}_chunks"
        self.edges_col = f"{prefix}_edges"
        self.prefix = prefix if prefix else COLLECTION_PREFIX
        self.notes_col = f"{self.prefix}_notes"
        self.chunks_col = f"{self.prefix}_chunks"
        self.edges_col = f"{self.prefix}_edges"
        self._note_cache = {}

    def get_note_with_full_content(self, note_id):

@@ -154,31 +167,33 @@ class GraphExplorerService:
        return previews

    def _find_connected_edges(self, note_ids, note_title=None):
        """Finds incoming and outgoing edges."""
        """
        Finds incoming and outgoing edges.

        IMPORTANT: target_id contains only the title (without #section).
        target_section is a separate field for section information.
        """
        results = []
        if not note_ids:
            return results

        # 1. OUTGOING EDGES (the "owner" fix)
        # We look for edges whose 'note_id' field (owner) is one of our notes.
        # This finds ALL outgoing edges, whether attached to a chunk or the note itself.
        if note_ids:
            out_filter = models.Filter(must=[
                models.FieldCondition(key="note_id", match=models.MatchAny(any=note_ids)),
                models.FieldCondition(key="kind", match=models.MatchExcept(**{"except": SYSTEM_EDGES}))
            ])
            # High limit so we find everything
            res_out, _ = self.client.scroll(self.edges_col, scroll_filter=out_filter, limit=500, with_payload=True)
            results.extend(res_out)
        out_filter = models.Filter(must=[
            models.FieldCondition(key="note_id", match=models.MatchAny(any=note_ids)),
            models.FieldCondition(key="kind", match=models.MatchExcept(**{"except": SYSTEM_EDGES}))
        ])
        res_out, _ = self.client.scroll(self.edges_col, scroll_filter=out_filter, limit=2000, with_payload=True)
        results.extend(res_out)

        # 2. INCOMING EDGES (target = chunk id or title or note id)
        # Here we have to resolve chunks to find hits on chunks.
        # 2. INCOMING EDGES (target = chunk id, note id or title)
        # IMPORTANT: target_id contains only the title, target_section is separate

        # Fetch the chunk ids of the current notes
        chunk_ids = []
        if note_ids:
            c_filter = models.Filter(must=[models.FieldCondition(key="note_id", match=models.MatchAny(any=note_ids))])
            chunks, _ = self.client.scroll(self.chunks_col, scroll_filter=c_filter, limit=300)
            chunk_ids = [c.id for c in chunks]
        c_filter = models.Filter(must=[models.FieldCondition(key="note_id", match=models.MatchAny(any=note_ids))])
        chunks, _ = self.client.scroll(self.chunks_col, scroll_filter=c_filter, limit=1000, with_payload=False)
        chunk_ids = [c.id for c in chunks]

        shoulds = []
        # Case A: the edge points at one of our chunks

@@ -186,42 +201,92 @@ class GraphExplorerService:
            shoulds.append(models.FieldCondition(key="target_id", match=models.MatchAny(any=chunk_ids)))

        # Case B: the edge points directly at our note id
        if note_ids:
            shoulds.append(models.FieldCondition(key="target_id", match=models.MatchAny(any=note_ids)))

        # Case C: the edge points at our title (wikilinks)
        if note_title:
            shoulds.append(models.FieldCondition(key="target_id", match=models.MatchValue(value=note_title)))
        shoulds.append(models.FieldCondition(key="target_id", match=models.MatchAny(any=note_ids)))

        # Case C: the edge points at our title
        # IMPORTANT: target_id contains only the title (e.g. "Meine Prinzipien 2025")
        # target_section carries the section information (e.g. "P3 – Disziplin"), if set

        # Collect all relevant titles (including aliases)
        titles_to_search = []
        if note_title:
            titles_to_search.append(note_title)

        # Also load titles from the notes themselves (in case note_title was not passed)
        for nid in note_ids:
            note = self._fetch_note_cached(nid)
            if note:
                note_title_from_db = note.get("title")
                if note_title_from_db and note_title_from_db not in titles_to_search:
                    titles_to_search.append(note_title_from_db)
                # Add aliases
                aliases = note.get("aliases", [])
                if isinstance(aliases, str):
                    aliases = [aliases]
                for alias in aliases:
                    if alias and alias not in titles_to_search:
                        titles_to_search.append(alias)

        # For each title: look for an exact match
        # target_id contains only the title, so MatchValue is sufficient
        for title in titles_to_search:
            shoulds.append(models.FieldCondition(key="target_id", match=models.MatchValue(value=title)))

        if shoulds:
            in_filter = models.Filter(
                must=[models.FieldCondition(key="kind", match=models.MatchExcept(**{"except": SYSTEM_EDGES}))],
                should=shoulds
            )
            res_in, _ = self.client.scroll(self.edges_col, scroll_filter=in_filter, limit=500, with_payload=True)
            res_in, _ = self.client.scroll(self.edges_col, scroll_filter=in_filter, limit=2000, with_payload=True)
            results.extend(res_in)

        return results

    def _find_connected_edges_batch(self, note_ids):
        # Wrapper for the level 2 search
        return self._find_connected_edges(note_ids)
        """
        Wrapper for the level 2 search.
        Loads the title of the first note for the title-based search.
        """
        if not note_ids:
            return []
        first_note = self._fetch_note_cached(note_ids[0])
        note_title = first_note.get("title") if first_note else None
        return self._find_connected_edges(note_ids, note_title=note_title)

    def _process_edge(self, record, nodes_dict, unique_edges, current_depth):
        """Processes a raw edge, resolves ids and adds it to the dictionaries."""
        """
        Processes a raw edge, resolves ids and adds it to the dictionaries.

        IMPORTANT: both directions are supported:
        - Outgoing edges: source_id belongs to our note (via the note_id owner)
        - Incoming edges: target_id points at our note (via the target_id match)
        """
        if not record or not record.payload:
            return None, None

        payload = record.payload
        src_ref = payload.get("source_id")
        tgt_ref = payload.get("target_id")
        kind = payload.get("kind")
        provenance = payload.get("provenance", "explicit")

        # Check that both references are present
        if not src_ref or not tgt_ref:
            return None, None

        # Resolve ids to notes
        # IMPORTANT: source_id can be a chunk id (note_id#c01), a note id or a title
        # IMPORTANT: target_id can be a chunk id, a note id or a title (without #section)
        src_note = self._resolve_note_from_ref(src_ref)
        tgt_note = self._resolve_note_from_ref(tgt_ref)

        if src_note and tgt_note:
            src_id = src_note['note_id']
            tgt_id = tgt_note['note_id']
            src_id = src_note.get('note_id')
            tgt_id = tgt_note.get('note_id')

            # Check that both ids are present
            if not src_id or not tgt_id:
                return None, None

            if src_id != tgt_id:
                # Add nodes

@@ -236,7 +301,7 @@ class GraphExplorerService:
                # Prefer explicit edges over smart edges
                is_current_explicit = (provenance in ["explicit", "rule"])
                if existing:
                    is_existing_explicit = (existing['provenance'] in ["explicit", "rule"])
                    is_existing_explicit = (existing.get('provenance', '') in ["explicit", "rule"])
                    if is_existing_explicit and not is_current_explicit:
                        should_update = False

@@ -258,38 +323,109 @@ class GraphExplorerService:
        return None

    def _resolve_note_from_ref(self, ref_str):
        """Resolves an id (chunk, note or title) to a note payload."""
        if not ref_str: return None
        """
        Resolves a reference to a note payload.

        # Case A: chunk id (contains #)
        IMPORTANT: if ref_str has a Title#Section format, only the title part is used.
        Supports:
        - Note id: "20250101-meine-note"
        - Chunk id: "20250101-meine-note#c01"
        - Title: "Meine Prinzipien 2025"
        - Title#Section: "Meine Prinzipien 2025#P3 – Disziplin" (strips the section, searches for the title only)
        """
        if not ref_str:
            return None

        # Case A: contains # (can be a chunk id or Title#Section)
        if "#" in ref_str:
            try:
                # Attempt 1: chunk id directly
                # Attempt 1: chunk id directly (format: note_id#c01)
                res = self.client.retrieve(self.chunks_col, ids=[ref_str], with_payload=True)
                if res: return self._fetch_note_cached(res[0].payload.get("note_id"))
            except: pass
                if res and res[0].payload:
                    note_id = res[0].payload.get("note_id")
                    if note_id:
                        return self._fetch_note_cached(note_id)
            except:
                pass

            # Attempt 2: NoteID#Section (strip the hash)
            possible_note_id = ref_str.split("#")[0]
            if self._fetch_note_cached(possible_note_id): return self._fetch_note_cached(possible_note_id)
            # Attempt 2: NoteID#Section (strip the hash and try it as a note id)
            # e.g. "20250101-meine-note#Abschnitt" -> "20250101-meine-note"
            possible_note_id = ref_str.split("#")[0].strip()
            note = self._fetch_note_cached(possible_note_id)
            if note:
                return note

            # Attempt 3: Title#Section (strip the hash and search for the title)
            # e.g. "Meine Prinzipien 2025#P3 – Disziplin" -> "Meine Prinzipien 2025"
            # IMPORTANT: target_id contains only the title, so we search for the title part only
            possible_title = ref_str.split("#")[0].strip()
            if possible_title:
                res, _ = self.client.scroll(
                    collection_name=self.notes_col,
                    scroll_filter=models.Filter(must=[
                        models.FieldCondition(key="title", match=models.MatchValue(value=possible_title))
                    ]),
                    limit=1, with_payload=True
                )
                if res and res[0].payload:
                    payload = res[0].payload
                    self._note_cache[payload['note_id']] = payload
                    return payload

                # Fallback: text search for fuzzy matching
                res, _ = self.client.scroll(
                    collection_name=self.notes_col,
                    scroll_filter=models.Filter(must=[
                        models.FieldCondition(key="title", match=models.MatchText(text=possible_title))
                    ]),
                    limit=10, with_payload=True
                )
                if res:
                    # Take the first result that matches possible_title exactly or as a prefix
                    for r in res:
                        if r.payload:
                            note_title = r.payload.get("title", "")
                            if note_title == possible_title or note_title.startswith(possible_title):
                                payload = r.payload
                                self._note_cache[payload['note_id']] = payload
                                return payload

        # Case B: note id directly
        if self._fetch_note_cached(ref_str): return self._fetch_note_cached(ref_str)
        note = self._fetch_note_cached(ref_str)
        if note:
            return note

        # Case C: title
        # Case C: title (exact match)
        res, _ = self.client.scroll(
            collection_name=self.notes_col,
            scroll_filter=models.Filter(must=[models.FieldCondition(key="title", match=models.MatchValue(value=ref_str))]),
            scroll_filter=models.Filter(must=[
                models.FieldCondition(key="title", match=models.MatchValue(value=ref_str))
            ]),
            limit=1, with_payload=True
        )
        if res:
            self._note_cache[res[0].payload['note_id']] = res[0].payload
            return res[0].payload
        if res and res[0].payload:
            payload = res[0].payload
            self._note_cache[payload['note_id']] = payload
            return payload

        # Case D: title (text search for fuzzy matching)
        res, _ = self.client.scroll(
            collection_name=self.notes_col,
            scroll_filter=models.Filter(must=[
                models.FieldCondition(key="title", match=models.MatchText(text=ref_str))
            ]),
            limit=1, with_payload=True
        )
        if res and res[0].payload:
            payload = res[0].payload
            self._note_cache[payload['note_id']] = payload
            return payload

        return None
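
        # Resolution order sketch (hypothetical inputs):
        #   _resolve_note_from_ref("20250101-meine-note#c01")   # -> via chunk lookup
        #   _resolve_note_from_ref("Meine Prinzipien 2025#P3")  # -> strips "#P3", matches the title
        #   _resolve_note_from_ref("20250101-meine-note")       # -> direct note id
        #   _resolve_note_from_ref("Meine Prinz")               # -> fuzzy text match, or None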

    def _add_node_to_dict(self, node_dict, note_payload, level=1):
        nid = note_payload.get("note_id")
        if nid in node_dict: return
        if not nid or nid in node_dict: return

        ntype = note_payload.get("type", "default")
        color = GRAPH_COLORS.get(ntype, GRAPH_COLORS["default"])

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui_sidebar.py
DESCRIPTION: Renders the sidebar. Controls the mode switch (Chat/Editor/Graph) and global settings (Top-K).
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: streamlit, ui_utils, ui_config
LAST_ANALYSIS: 2025-12-15
"""

import streamlit as st
from ui_utils import load_history_from_logs
from ui_config import HISTORY_FILE

@@ -1,3 +1,12 @@
"""
FILE: app/frontend/ui_utils.py
DESCRIPTION: String utilities. Parsers for Markdown/YAML (LLM healing) and helpers for history loading.
VERSION: 2.6.0
STATUS: Active
DEPENDENCIES: re, yaml, unicodedata, json, datetime
LAST_ANALYSIS: 2025-12-15
"""

import re
import yaml
import unicodedata

@@ -1,172 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Module: app/graph/service.py
Version: 0.1.0
Date: 2025-09-10

Purpose
-----
Lightweight graph layer on top of Qdrant:
- get_note(note_id)
- get_chunks(note_id)
- neighbors(source_id, kinds=[...], scope=['note','chunk'], depth=1)
- walk_bfs(source_id, kinds, max_depth)
- context_for_note(note_id, max_neighbors): heuristic context collection

Notes
--------
- Uses the existing collections <prefix>_notes/_chunks/_edges.
- Edges are queried via payload fields (`kind`, `source_id`, `target_id`).
"""
from __future__ import annotations
from typing import List, Dict, Any, Optional, Iterable, Set, Tuple
from qdrant_client.http import models as rest
from app.core.qdrant import QdrantConfig, get_client

def _cols(prefix: str):
    return f"{prefix}_notes", f"{prefix}_chunks", f"{prefix}_edges"

class GraphService:
    def __init__(self, cfg: Optional[QdrantConfig] = None, prefix: Optional[str] = None):
        self.cfg = cfg or QdrantConfig.from_env()
        if prefix:
            self.cfg.prefix = prefix
        self.client = get_client(self.cfg)
        self.notes_col, self.chunks_col, self.edges_col = _cols(self.cfg.prefix)

    # ------------------------ fetch helpers ------------------------
    def _scroll(self, col: str, flt: Optional[rest.Filter] = None, limit: int = 256):
        out = []
        nextp = None
        while True:
            pts, nextp = self.client.scroll(
                collection_name=col,
                with_payload=True,
                with_vectors=False,
                limit=limit,
                offset=nextp,
                scroll_filter=flt,
            )
            if not pts:
                break
            out.extend(pts)
            if nextp is None:
                break
        return out

    # ------------------------ public API ---------------------------
    def get_note(self, note_id: str) -> Optional[Dict[str, Any]]:
        f = rest.Filter(must=[rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))])
        pts, _ = self.client.scroll(self.notes_col, with_payload=True, with_vectors=False, limit=1, scroll_filter=f)
        return (pts[0].payload or None) if pts else None

    def get_chunks(self, note_id: str) -> List[Dict[str, Any]]:
        f = rest.Filter(must=[rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))])
        pts = self._scroll(self.chunks_col, f)
        # Sorting analogous to the export
        def key(pl):
            p = pl.payload or {}
            s = p.get("seq") or 0
            ci = p.get("chunk_index") or 0
            n = 0
            cid = p.get("chunk_id") or ""
            if isinstance(cid, str) and "#" in cid:
                try:
                    n = int(cid.rsplit("#", 1)[-1])
                except Exception:
                    n = 0
            return (int(s), int(ci), n)
        pts_sorted = sorted(pts, key=key)
        return [p.payload or {} for p in pts_sorted]

    def neighbors(self, source_id: str, kinds: Optional[Iterable[str]] = None,
                  scope: Optional[Iterable[str]] = None, depth: int = 1) -> Dict[str, List[Dict[str, Any]]]:
        """
        Returns incoming & outgoing neighbors (filtered by kind only).
        depth==1: direct edges.
        """
        kinds = list(kinds) if kinds else None
        must = [rest.FieldCondition(key="source_id", match=rest.MatchValue(value=source_id))]
        if kinds:
            must.append(rest.FieldCondition(key="kind", match=rest.MatchAny(any=kinds)))
        f = rest.Filter(must=must)
        edges = self._scroll(self.edges_col, f)
        out = {"out": [], "in": []}
        for e in edges:
            out["out"].append(e.payload or {})
        # Inverse direction (incoming)
        must_in = [rest.FieldCondition(key="target_id", match=rest.MatchValue(value=source_id))]
        if kinds:
            must_in.append(rest.FieldCondition(key="kind", match=rest.MatchAny(any=kinds)))
        f_in = rest.Filter(must=must_in)
        edges_in = self._scroll(self.edges_col, f_in)
        for e in edges_in:
            out["in"].append(e.payload or {})
        return out

    def walk_bfs(self, source_id: str, kinds: Iterable[str], max_depth: int = 2) -> Set[str]:
        visited: Set[str] = {source_id}
        frontier: Set[str] = {source_id}
        kinds = list(kinds)
        for _ in range(max_depth):
            nxt: Set[str] = set()
            for s in frontier:
                neigh = self.neighbors(s, kinds=kinds)
                for e in neigh["out"]:
                    t = e.get("target_id")
                    if isinstance(t, str) and t not in visited:
                        visited.add(t)
                        nxt.add(t)
            frontier = nxt
            if not frontier:
                break
        return visited

    def context_for_note(self, note_id: str, kinds: Iterable[str] = ("references","backlink"), max_neighbors: int = 12) -> Dict[str, Any]:
        """
        Heuristic context: the note's own chunks plus neighbors by edge kind, deduplicated.
        """
        note = self.get_note(note_id) or {}
        chunks = self.get_chunks(note_id)
        neigh = self.neighbors(note_id, kinds=list(kinds))
        targets = []
        for e in neigh["out"]:
            t = e.get("target_id")
            if isinstance(t, str):
                targets.append(t)
        for e in neigh["in"]:
            s = e.get("source_id")
            if isinstance(s, str):
                targets.append(s)
        # de-dupe
        seen = set()
        uniq = []
        for t in targets:
            if t not in seen:
                seen.add(t)
                uniq.append(t)
        uniq = uniq[:max_neighbors]
        neighbor_notes = [self.get_note(t) for t in uniq]
        return {
            "note": note,
            "chunks": chunks,
            "neighbors": [n for n in neighbor_notes if n],
            "edges_out": neigh["out"],
            "edges_in": neigh["in"],
        }

# Optional: mini CLI
if __name__ == "__main__":  # pragma: no cover
    import argparse, json
    ap = argparse.ArgumentParser()
    ap.add_argument("--prefix", help="Collection prefix (overrides ENV)")
    ap.add_argument("--note-id", required=True)
    ap.add_argument("--neighbors", action="store_true", help="Show neighbors only")
    args = ap.parse_args()
    svc = GraphService(prefix=args.prefix)
    if args.neighbors:
        out = svc.neighbors(args.note_id, kinds=["references","backlink","prev","next","belongs_to"])
    else:
        out = svc.context_for_note(args.note_id)
    print(json.dumps(out, ensure_ascii=False, indent=2))

app/main.py (130 lines)
@ -1,19 +1,29 @@
|
|||
"""
|
||||
app/main.py — mindnet API bootstrap
|
||||
FILE: app/main.py
|
||||
DESCRIPTION: Bootstrap der FastAPI Anwendung für WP-25a (Agentic MoE).
|
||||
Orchestriert Lifespan-Events, globale Fehlerbehandlung und Routing.
|
||||
Prüft beim Start die Integrität der Mixture of Experts Konfiguration.
|
||||
VERSION: 1.1.0 (WP-25a: MoE Integrity Check)
|
||||
STATUS: Active
|
||||
DEPENDENCIES: app.config, app.routers.*, app.services.llm_service
|
||||
"""
|
||||
from __future__ import annotations
|
||||
from fastapi import FastAPI
|
||||
from .config import get_settings
|
||||
from .routers.embed_router import router as embed_router
|
||||
from .routers.qdrant_router import router as qdrant_router
|
||||
|
||||
from __future__ import annotations
|
||||
import logging
|
||||
import os
|
||||
from contextlib import asynccontextmanager
|
||||
from fastapi import FastAPI, Request
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
from .config import get_settings
|
||||
from .services.llm_service import LLMService
|
||||
|
||||
# Import der Router
|
||||
from .routers.query import router as query_router
|
||||
from .routers.graph import router as graph_router
|
||||
from .routers.tools import router as tools_router
|
||||
from .routers.feedback import router as feedback_router
|
||||
# NEU: Chat Router (WP-05)
|
||||
from .routers.chat import router as chat_router
|
||||
# NEU: Ingest Router (WP-11)
|
||||
from .routers.ingest import router as ingest_router
|
||||
|
||||
try:
|
||||
|
|
@@ -21,26 +31,109 @@ try:
except Exception:
    admin_router = None

from .core.logging_setup import setup_logging

# Initialize logging BEFORE create_app()
setup_logging()

logger = logging.getLogger(__name__)

# --- WP-25a: lifespan management with MoE integrity check ---

@asynccontextmanager
async def lifespan(app: FastAPI):
    """
    Manages the application lifecycle (startup/shutdown).
    Verifies that the MoE expert profiles and strategies are available.
    """
    settings = get_settings()
    logger.info("🚀 mindnet API: Starting up (WP-25a MoE Mode)...")

    # 1. Startup: integrity check of the MoE configuration.
    # We verify the three pillars of the agentic RAG architecture.
    decision_cfg = os.getenv("MINDNET_DECISION_CONFIG", "config/decision_engine.yaml")
    profiles_cfg = getattr(settings, "LLM_PROFILES_PATH", "config/llm_profiles.yaml")
    prompts_cfg = settings.PROMPTS_PATH

    missing_files = []
    if not os.path.exists(decision_cfg): missing_files.append(decision_cfg)
    if not os.path.exists(profiles_cfg): missing_files.append(profiles_cfg)
    if not os.path.exists(prompts_cfg): missing_files.append(prompts_cfg)

    if missing_files:
        logger.error(f"❌ CRITICAL: Missing MoE config files: {missing_files}")
    else:
        logger.info("✅ MoE Configuration files verified.")

    yield

    # 2. Shutdown: clean up resources.
    logger.info("🛑 mindnet API: Shutting down...")
    try:
        llm = LLMService()
        await llm.close()
        logger.info("✨ LLM resources cleaned up.")
    except Exception as e:
        logger.warning(f"⚠️ Error during LLMService cleanup: {e}")

    logger.info("Goodbye.")

# --- App factory ---

def create_app() -> FastAPI:
    app = FastAPI(title="mindnet API", version="0.6.0")  # version bump WP-11
    """Initializes the FastAPI app with the WP-25a extensions."""
    app = FastAPI(
        title="mindnet API",
        version="1.1.0",  # WP-25a milestone
        lifespan=lifespan,
        description="Digital Twin Knowledge Engine mit Mixture of Experts Orchestration."
    )

    s = get_settings()

    # --- Global error handling (WP-25a resilience) ---

    @app.exception_handler(Exception)
    async def global_exception_handler(request: Request, exc: Exception):
        """Catches unexpected errors in the MoE processing chain."""
        logger.error(f"❌ Unhandled Engine Error: {exc}", exc_info=True)
        return JSONResponse(
            status_code=500,
            content={
                "detail": "Ein interner Fehler ist in der MoE-Kette aufgetreten.",
                "error_type": type(exc).__name__
            }
        )

    # Healthcheck
    @app.get("/healthz")
    def healthz():
        return {"status": "ok", "qdrant": s.QDRANT_URL, "prefix": s.COLLECTION_PREFIX}

    app.include_router(embed_router)
    app.include_router(qdrant_router)
        """Provides status information about the engine and the database connection."""
        # WP-24c v4.5.10: check the EdgeDTO version at runtime
        edge_dto_supports_callout = False
        try:
            from app.models.dto import EdgeDTO
            import inspect
            source = inspect.getsource(EdgeDTO)
            edge_dto_supports_callout = "explicit:callout" in source
        except Exception:
            pass  # a failed check is not critical

        return {
            "status": "ok",
            "version": "1.1.0",
            "qdrant": s.QDRANT_URL,
            "prefix": s.COLLECTION_PREFIX,
            "moe_enabled": True,
            "edge_dto_supports_callout": edge_dto_supports_callout  # WP-24c v4.5.10: diagnostic aid
        }

    # Include the routers (100% compatibility preserved)
    app.include_router(query_router, prefix="/query", tags=["query"])
    app.include_router(graph_router, prefix="/graph", tags=["graph"])
    app.include_router(tools_router, prefix="/tools", tags=["tools"])
    app.include_router(feedback_router, prefix="/feedback", tags=["feedback"])

    # NEW: chat endpoint
    app.include_router(chat_router, prefix="/chat", tags=["chat"])

    # NEW: ingest endpoint
    app.include_router(chat_router, prefix="/chat", tags=["chat"])  # WP-25a agentic chat
    app.include_router(ingest_router, prefix="/ingest", tags=["ingest"])

    if admin_router:
@@ -48,4 +141,5 @@ def create_app() -> FastAPI:

    return app

# App instantiation
app = create_app()
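With the lifespan hook in place, a failed config check is only logged, so the service still comes up; the extended healthcheck is the quickest way to confirm the MoE state. A minimal client sketch, assuming the API listens on localhost:8000 (host and port are assumptions):

import httpx

# Response keys follow the new healthz() body above.
data = httpx.get("http://localhost:8000/healthz").json()
print(data["version"], data["moe_enabled"], data["edge_dto_supports_callout"])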
app/models/dto.py

@@ -1,14 +1,9 @@
"""
app/models/dto.py — Pydantic models (DTOs) for WP-04/WP-05/WP-06

Purpose:
    Runtime models for FastAPI (requests/responses).
    WP-06 update: intent & intent source in ChatResponse.

Version:
    0.6.2 (WP-06: decision engine transparency, extension of the feedback request)
As of:
    2025-12-09
FILE: app/models/dto.py
DESCRIPTION: Pydantic models (DTOs) for request/response bodies. Defines the API schema.
VERSION: 0.7.1 (WP-25: Stream-Tracing Support)
STATUS: Active
DEPENDENCIES: pydantic, typing, uuid
"""

from __future__ import annotations
@@ -16,7 +11,14 @@ from pydantic import BaseModel, Field
from typing import List, Literal, Optional, Dict, Any
import uuid

EdgeKind = Literal["references", "references_at", "backlink", "next", "prev", "belongs_to", "depends_on", "related_to", "similar_to"]
# WP-25: extended edge kinds per the new decision_engine.yaml
EdgeKind = Literal[
    "references", "references_at", "backlink", "next", "prev",
    "belongs_to", "depends_on", "related_to", "similar_to",
    "caused_by", "derived_from", "based_on", "solves", "blocks",
    "uses", "guides", "enforced_by", "implemented_in", "part_of",
    "experienced_in", "impacts", "risk_of"
]


# --- Base DTOs ---
@@ -44,13 +46,24 @@ class EdgeDTO(BaseModel):
    target: str
    weight: float
    direction: Literal["out", "in", "undirected"] = "out"
    # WP-24c v4.5.3: extended provenance values for chunk-aware edges.
    # Covers all provenance types actually used in the system.
    provenance: Optional[Literal[
        "explicit", "rule", "smart", "structure",
        "explicit:callout", "explicit:wikilink", "explicit:note_zone", "explicit:note_scope",
        "inline:rel", "callout:edge", "semantic_ai", "structure:belongs_to", "structure:order",
        "derived:backlink", "edge_defaults", "global_pool"
    ]] = "explicit"
    confidence: float = 1.0
    target_section: Optional[str] = None


# --- Request models ---

class QueryRequest(BaseModel):
    """
    Request for /query.
    Request for /query. Supports multi-stream isolation via filters.
    WP-24c v4.1.0: extended with section filtering and scope awareness.
    """
    mode: Literal["semantic", "edge", "hybrid"] = "hybrid"
    query: Optional[str] = None
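The widened provenance vocabulary lets chunk-aware edges carry their extraction origin directly on the DTO. A minimal construction sketch; the source and kind fields sit above the hunk shown here, so their names are assumptions:

# Hedged sketch: 'source' and 'kind' are assumed fields; all values are illustrative.
edge = EdgeDTO(
    source="note_a",
    kind="references",
    target="note_b",
    weight=0.8,
    provenance="explicit:callout",  # one of the WP-24c chunk-aware origins
    confidence=0.9,
    target_section="Decisions",
)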
@@ -60,36 +73,34 @@ class QueryRequest(BaseModel):
    filters: Optional[Dict] = None
    ret: Dict = {"with_paths": True, "with_notes": True, "with_chunks": True}
    explain: bool = False

    # WP-22/25: dynamic weighting of the graph highways
    boost_edges: Optional[Dict[str, float]] = None

    # WP-24c v4.1.0: section filtering for precise section links
    target_section: Optional[str] = None


class FeedbackRequest(BaseModel):
    """
    User feedback on a specific hit or on the overall answer.
    """
    """User feedback on a specific hit or on the overall answer."""
    query_id: str = Field(..., description="ID of the original search")
    # node_id is optional: if empty or "generated_answer", the feedback applies to the answer.
    # If it is a real chunk ID, it applies to the source.
    node_id: str = Field(..., description="ID of the rated hit, or 'generated_answer'")
    # Update: range widened to 1-5 for finer-grained tuning
    score: int = Field(..., ge=1, le=5, description="1 (irrelevant/wrong) to 5 (perfect)")
    score: int = Field(..., ge=1, le=5, description="1 (irrelevant) to 5 (perfect)")
    comment: Optional[str] = None


class ChatRequest(BaseModel):
    """
    WP-05: request for /chat.
    """
    """Request for /chat (WP-25 entry point)."""
    message: str = Field(..., description="The user's message")
    conversation_id: Optional[str] = Field(None, description="Optional: ID for the chat history (not yet implemented)")
    # RAG parameters (override defaults)
    conversation_id: Optional[str] = Field(None, description="ID for the chat history")
    top_k: int = 5
    explain: bool = False


# --- WP-04b explanation models ---
# --- Explanation models ---

class ScoreBreakdown(BaseModel):
    """Breakdown of the score components."""
    """Breakdown of the score components per the WP-22 formula."""
    semantic_contribution: float
    edge_contribution: float
    centrality_contribution: float
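boost_edges lets a caller re-weight individual graph highways per request, while target_section narrows hits to one section. A minimal sketch with illustrative values:

# Hedged sketch: boost factors, filter, and section name are made up for illustration.
req = QueryRequest(
    mode="hybrid",
    query="Which decisions block the migration?",
    filters={"type": ["decision"]},
    boost_edges={"depends_on": 1.5, "blocks": 2.0},
    target_section="Risks",
    explain=True,
)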
@@ -97,11 +108,14 @@ class ScoreBreakdown(BaseModel):
    raw_edge_bonus: float
    raw_centrality: float
    node_weight: float
    status_multiplier: float = 1.0
    graph_boost_factor: float = 1.0


class Reason(BaseModel):
    """A semantic reason for the ranking."""
    kind: Literal["semantic", "edge", "type", "centrality"]
    # WP-25: 'status' added to stay in sync with retriever.py
    kind: Literal["semantic", "edge", "type", "centrality", "lifecycle", "status"]
    message: str
    score_impact: Optional[float] = None
    details: Optional[Dict[str, Any]] = None
@@ -112,26 +126,34 @@ class Explanation(BaseModel):
    breakdown: ScoreBreakdown
    reasons: List[Reason]
    related_edges: Optional[List[EdgeDTO]] = None
    applied_intent: Optional[str] = None
    applied_boosts: Optional[Dict[str, float]] = None


# --- Response models ---

class QueryHit(BaseModel):
    """A single hit object for /query."""
    """
    A single hit object.
    WP-25: stream_origin added for tracing and feedback optimization.
    WP-24c v4.1.0: source_chunk_id added for the RAG context.
    """
    node_id: str
    note_id: Optional[str]
    note_id: str
    semantic_score: float
    edge_bonus: float
    centrality_bonus: float
    total_score: float
    paths: Optional[List[List[Dict]]] = None
    source: Optional[Dict] = None
    payload: Optional[Dict] = None  # added for flexibility & WP-06 metadata
    payload: Optional[Dict] = None
    explanation: Optional[Explanation] = None
    stream_origin: Optional[str] = Field(None, description="Name of the originating stream")
    source_chunk_id: Optional[str] = Field(None, description="Chunk ID of the source (for the RAG context)")


class QueryResponse(BaseModel):
    """Response structure for /query."""
    """Response structure for /query (consumed by the DecisionEngine streams)."""
    query_id: str = Field(default_factory=lambda: str(uuid.uuid4()))
    results: List[QueryHit]
    used_mode: str
@@ -148,13 +170,12 @@ class GraphResponse(BaseModel):

class ChatResponse(BaseModel):
    """
    WP-05/06: response structure for /chat.
    Response structure for /chat.
    WP-25: 'intent' now mirrors the chosen strategy.
    """
    query_id: str = Field(..., description="Traceability ID (the same one used for search)")
    query_id: str = Field(..., description="Traceability ID")
    answer: str = Field(..., description="Generated answer from the LLM")
    sources: List[QueryHit] = Field(..., description="The sources used for the answer")
    sources: List[QueryHit] = Field(..., description="The sources used (all streams)")
    latency_ms: int
    intent: Optional[str] = Field("FACT", description="WP-06: detected intent (FACT/DECISION)")
    intent_source: Optional[str] = Field("Unknown", description="WP-06: source of the intent detection (keyword vs. LLM)")

    intent: Optional[str] = Field("FACT", description="The chosen WP-25 strategy")
    intent_source: Optional[str] = Field("LLM_Router", description="Source of the intent detection")
app/routers/admin.py

@@ -1,20 +1,10 @@
"""
app/routers/admin.py — admin/monitoring endpoints (optional)

Purpose:
    Provides simple metrics for collections (counts) and config.
Compatibility:
    Python 3.12+, FastAPI 0.110+, qdrant-client 1.x
Version:
    0.1.0 (initial version)
As of:
    2025-10-07
References:
    - Qdrant collections: *_notes, *_chunks, *_edges
Usage:
    app.include_router(admin.router, prefix="/admin", tags=["admin"])
Changelog:
    0.1.0 (2025-10-07) – initial version.
FILE: app/routers/admin.py
DESCRIPTION: Monitoring endpoint. Shows Qdrant collection counts and the loaded config.
VERSION: 0.1.0
STATUS: Active (optional)
DEPENDENCIES: qdrant_client, app.config
LAST_ANALYSIS: 2025-12-15
"""

from __future__ import annotations
app/routers/chat.py

@@ -1,59 +1,78 @@
"""
app/routers/chat.py — RAG endpoint
Version: 2.5.0 (fix: question detection guards against false-positive interviews)
FILE: app/routers/chat.py
DESCRIPTION: Main chat interface (WP-25b edition).
             Combines the specialized interview logic with the new
             lazy prompt orchestration and MoE synthesis.
             WP-24c: integration of the Discovery API for proactive linking.
VERSION: 3.1.0 (WP-24c: Discovery API Integration)
STATUS: Active
FIX:
    - WP-24c: new endpoint /query/discover for proactive edge suggestions.
    - WP-25b: interview mode switched to lazy prompts (prompt_key + variables).
    - WP-25b: RAG phase delegated to engine v1.3.0 for consistent MoE control.
    - WP-25a: full retention of the v3.0.2 logic (interview, schema resolution, fast paths).
"""

from fastapi import APIRouter, HTTPException, Depends
from typing import List, Dict, Any, Optional
from pydantic import BaseModel
import time
import uuid
import logging
import yaml
import os
import asyncio
from pathlib import Path

from app.config import get_settings
from app.models.dto import ChatRequest, ChatResponse, QueryRequest, QueryHit
from app.models.dto import ChatRequest, ChatResponse, QueryHit, QueryRequest
from app.services.llm_service import LLMService
from app.core.retriever import Retriever
from app.services.feedback_service import log_search

router = APIRouter()
logger = logging.getLogger(__name__)

# --- Helper: config loader ---
# --- LEVEL 0: DTOs FOR DISCOVERY (WP-24c) ---

class DiscoveryRequest(BaseModel):
    content: str
    top_k: int = 8
    min_confidence: float = 0.6

class DiscoveryHit(BaseModel):
    target_note: str           # note ID
    target_title: str          # human-readable title
    suggested_edge_type: str   # canonical type from edge_vocabulary
    confidence_score: float    # combined vector + AI score
    reasoning: str             # short justification from the AI

# --- LEVEL 1: CONFIG LOADER & CACHING (WP-25 standard) ---

_DECISION_CONFIG_CACHE = None
_TYPES_CONFIG_CACHE = None

def _load_decision_config() -> Dict[str, Any]:
    """Loads the strategy configuration."""
    settings = get_settings()
    path = Path(settings.DECISION_CONFIG_PATH)
    default_config = {
        "strategies": {
            "FACT": {"trigger_keywords": []}
        }
    }

    if not path.exists():
        logger.warning(f"Decision config not found at {path}, using defaults.")
        return default_config

    try:
        with open(path, "r", encoding="utf-8") as f:
            return yaml.safe_load(f)
        if path.exists():
            with open(path, "r", encoding="utf-8") as f:
                return yaml.safe_load(f) or {}
    except Exception as e:
        logger.error(f"Failed to load decision config: {e}")
        return default_config
    return {"strategies": {}}

def _load_types_config() -> Dict[str, Any]:
    """Loads types.yaml for keyword detection."""
    """Loads types.yaml for type detection."""
    path = os.getenv("MINDNET_TYPES_FILE", "config/types.yaml")
    try:
        with open(path, "r", encoding="utf-8") as f:
            return yaml.safe_load(f) or {}
    except Exception:
        return {}
        if os.path.exists(path):
            with open(path, "r", encoding="utf-8") as f:
                return yaml.safe_load(f) or {}
    except Exception as e:
        logger.error(f"Failed to load types config: {e}")
    return {}

def get_full_config() -> Dict[str, Any]:
    global _DECISION_CONFIG_CACHE
@@ -70,21 +89,17 @@ def get_types_config() -> Dict[str, Any]:
def get_decision_strategy(intent: str) -> Dict[str, Any]:
    config = get_full_config()
    strategies = config.get("strategies", {})
    return strategies.get(intent, strategies.get("FACT", {}))
    return strategies.get(intent, strategies.get("FACT_WHAT", {}))

# --- Helper: target type detection (WP-07) ---
# --- LEVEL 2: SPECIAL LOGIC (INTERVIEW & DETECTION) ---

def _detect_target_type(message: str, configured_schemas: Dict[str, Any]) -> str:
    """
    Tries to guess which note type the user wants to create.
    Uses keywords from types.yaml AND mappings.
    """
    """WP-07: identifies the desired note type (keyword-based)."""
    message_lower = message.lower()

    # 1. Check types.yaml detection_keywords (priority!)
    types_cfg = get_types_config()
    types_def = types_cfg.get("types", {})

    # 1. Check types.yaml detection_keywords
    for type_name, type_data in types_def.items():
        keywords = type_data.get("detection_keywords", [])
        for kw in keywords:
@@ -97,254 +112,251 @@ def _detect_target_type(message: str, configured_schemas: Dict[str, Any]) -> str
            if type_key in message_lower:
                return type_key

    # 3. Synonym mapping (legacy fallback)
    # 3. Synonym mapping (legacy)
    synonyms = {
        "projekt": "project", "vorhaben": "project",
        "entscheidung": "decision", "beschluss": "decision",
        "ziel": "goal",
        "erfahrung": "experience", "lektion": "experience",
        "wert": "value",
        "prinzip": "principle",
        "notiz": "default", "idee": "default"
        "projekt": "project", "entscheidung": "decision", "ziel": "goal",
        "erfahrung": "experience", "wert": "value", "prinzip": "principle"
    }

    for term, schema_key in synonyms.items():
        if term in message_lower:
            return schema_key

    return "default"

# --- Dependencies ---
def _is_question(query: str) -> bool:
    """Checks whether the input is a question."""
    q = query.strip().lower()
    if "?" in q: return True
    starters = ["wer", "wie", "was", "wo", "wann", "warum", "weshalb", "wozu", "welche", "bist du"]
    return any(q.startswith(s + " ") for s in starters)

async def _classify_intent(query: str, llm: LLMService) -> tuple[str, str]:
    """Hybrid router: keyword fast paths & DecisionEngine LLM router."""
    config = get_full_config()
    strategies = config.get("strategies", {})
    query_lower = query.lower()

    # 1. FAST PATH: keyword triggers
    for intent_name, strategy in strategies.items():
        keywords = strategy.get("trigger_keywords", [])
        for k in keywords:
            if k.lower() in query_lower:
                return intent_name, "Keyword (FastPath)"

    # 2. FAST PATH B: type keywords -> INTERVIEW
    if not _is_question(query_lower):
        types_cfg = get_types_config()
        for type_name, type_data in types_cfg.get("types", {}).items():
            for kw in type_data.get("detection_keywords", []):
                if kw.lower() in query_lower:
                    return "INTERVIEW", "Keyword (Interview)"

    # 3. SLOW PATH: DecisionEngine LLM router (MoE-controlled)
    intent = await llm.decision_engine._determine_strategy(query)
    return intent, "DecisionEngine (LLM)"

# --- LEVEL 3: RETRIEVAL AGGREGATION ---

def _collect_all_hits(stream_responses: Dict[str, Any]) -> List[QueryHit]:
    """Collects deduplicated hits from all streams for tracing."""
    all_hits = []
    seen_node_ids = set()
    for _, response in stream_responses.items():
        # Collect the hits from the QueryResponse objects
        if hasattr(response, 'results'):
            for hit in response.results:
                if hit.node_id not in seen_node_ids:
                    all_hits.append(hit)
                    seen_node_ids.add(hit.node_id)
    return sorted(all_hits, key=lambda h: h.total_score, reverse=True)

# --- LEVEL 4: ENDPOINTS ---

def get_llm_service():
    return LLMService()

def get_retriever():
    return Retriever()


# --- Logic ---

def _build_enriched_context(hits: List[QueryHit]) -> str:
    context_parts = []
    for i, hit in enumerate(hits, 1):
        source = hit.source or {}
        content = (
            source.get("text") or source.get("content") or
            source.get("page_content") or source.get("chunk_text") or
            "[Kein Text]"
        )
        title = hit.note_id or "Unbekannt"

        payload = hit.payload or {}
        note_type = payload.get("type") or source.get("type", "unknown")
        note_type = str(note_type).upper()

        entry = (
            f"### QUELLE {i}: {title}\n"
            f"TYP: [{note_type}] (Score: {hit.total_score:.2f})\n"
            f"INHALT:\n{content}\n"
        )
        context_parts.append(entry)

    return "\n\n".join(context_parts)

def _is_question(query: str) -> bool:
    """Checks whether the input is probably a question."""
    q = query.strip().lower()
    if "?" in q: return True

    # W-question indicators (in case the user forgets the "?")
    starters = ["wer", "wie", "was", "wo", "wann", "warum", "weshalb", "wozu", "welche", "bist du", "entspricht"]
    if any(q.startswith(s + " ") for s in starters):
        return True

    return False

async def _classify_intent(query: str, llm: LLMService) -> tuple[str, str]:
    """
    Hybrid router v5:
    1. Decision keywords (strategy)      -> priority 1
    2. Type keywords (interview trigger) -> priority 2, BUT ONLY IF NOT A QUESTION!
    3. LLM (fallback)                    -> priority 3
    """
    config = get_full_config()
    strategies = config.get("strategies", {})
    settings = config.get("settings", {})

    query_lower = query.lower()

    # 1. FAST PATH A: strategy keywords (e.g. "Soll ich...")
    for intent_name, strategy in strategies.items():
        if intent_name == "FACT": continue
        keywords = strategy.get("trigger_keywords", [])
        for k in keywords:
            if k.lower() in query_lower:
                return intent_name, "Keyword (Strategy)"

    # 2. FAST PATH B: type keywords (e.g. "Projekt", "Werte") -> INTERVIEW
    # FIX: we check whether it is a question. Questions about types should go to RAG
    # (FACT/DECISION), not into interviews. We then leave it to the LLM router (slow path).

    if not _is_question(query_lower):
        types_cfg = get_types_config()
        types_def = types_cfg.get("types", {})

        for type_name, type_data in types_def.items():
            keywords = type_data.get("detection_keywords", [])
            for kw in keywords:
                if kw.lower() in query_lower:
                    return "INTERVIEW", f"Keyword (Type: {type_name})"

    # 3. SLOW PATH: LLM router
    if settings.get("llm_fallback_enabled", False):
        # Use prompts from prompts.yaml (via the LLM service)
        router_prompt_template = llm.prompts.get("router_prompt", "")

        if router_prompt_template:
            prompt = router_prompt_template.replace("{query}", query)
            logger.info("Keywords failed (or Question detected). Asking LLM for Intent...")

            try:
                # Use priority="realtime" for the router so it does not wait
                raw_response = await llm.generate_raw_response(prompt, priority="realtime")
                llm_output_upper = raw_response.upper()

                # Check INTERVIEW first
                if "INTERVIEW" in llm_output_upper or "CREATE" in llm_output_upper:
                    return "INTERVIEW", "LLM Router"

                for strat_key in strategies.keys():
                    if strat_key in llm_output_upper:
                        return strat_key, "LLM Router"

            except Exception as e:
                logger.error(f"Router LLM failed: {e}")

    return "FACT", "Default (No Match)"
@router.post("/", response_model=ChatResponse)
|
||||
async def chat_endpoint(
|
||||
request: ChatRequest,
|
||||
llm: LLMService = Depends(get_llm_service),
|
||||
retriever: Retriever = Depends(get_retriever)
|
||||
llm: LLMService = Depends(get_llm_service)
|
||||
):
|
||||
start_time = time.time()
|
||||
query_id = str(uuid.uuid4())
|
||||
logger.info(f"Chat request [{query_id}]: {request.message[:50]}...")
|
||||
logger.info(f"🚀 [WP-25b] Chat request [{query_id}]: {request.message[:50]}...")
|
||||
|
||||
try:
|
||||
# 1. Intent Detection
|
||||
intent, intent_source = await _classify_intent(request.message, llm)
|
||||
logger.info(f"[{query_id}] Final Intent: {intent} via {intent_source}")
|
||||
logger.info(f"[{query_id}] Intent: {intent} via {intent_source}")
|
||||
|
||||
# Strategy Load
|
||||
strategy = get_decision_strategy(intent)
|
||||
prompt_key = strategy.get("prompt_template", "rag_template")
|
||||
engine = llm.decision_engine
|
||||
|
||||
sources_hits = []
|
||||
final_prompt = ""
|
||||
|
||||
answer_text = ""
|
||||
|
||||
# 2. INTERVIEW MODE (WP-25b Lazy-Prompt Logik)
|
||||
if intent == "INTERVIEW":
|
||||
# --- INTERVIEW MODE ---
|
||||
target_type = _detect_target_type(request.message, strategy.get("schemas", {}))
|
||||
|
||||
types_cfg = get_types_config()
|
||||
type_def = types_cfg.get("types", {}).get(target_type, {})
|
||||
fields_list = type_def.get("schema", [])
|
||||
|
||||
# WP-07: Restaurierte Fallback Logik
|
||||
if not fields_list:
|
||||
configured_schemas = strategy.get("schemas", {})
|
||||
fallback_schema = configured_schemas.get(target_type, configured_schemas.get("default"))
|
||||
if isinstance(fallback_schema, dict):
|
||||
fields_list = fallback_schema.get("fields", [])
|
||||
else:
|
||||
fields_list = fallback_schema or []
|
||||
fallback = configured_schemas.get(target_type, configured_schemas.get("default", {}))
|
||||
fields_list = fallback.get("fields", []) if isinstance(fallback, dict) else (fallback or [])
|
||||
|
||||
logger.info(f"[{query_id}] Interview Type: {target_type}. Fields: {len(fields_list)}")
|
||||
fields_str = "\n- " + "\n- ".join(fields_list)
|
||||
template_key = strategy.get("prompt_template", "interview_template")
|
||||
|
||||
template = llm.prompts.get(prompt_key, "")
|
||||
final_prompt = template.replace("{context_str}", "Dialogverlauf...") \
|
||||
.replace("{query}", request.message) \
|
||||
.replace("{target_type}", target_type) \
|
||||
.replace("{schema_fields}", fields_str) \
|
||||
.replace("{schema_hint}", "")
|
||||
sources_hits = []
|
||||
|
||||
else:
|
||||
# --- RAG MODE ---
|
||||
inject_types = strategy.get("inject_types", [])
|
||||
prepend_instr = strategy.get("prepend_instruction", "")
|
||||
|
||||
query_req = QueryRequest(
|
||||
query=request.message,
|
||||
mode="hybrid",
|
||||
top_k=request.top_k,
|
||||
explain=request.explain
|
||||
# WP-25b: Lazy Loading Call
|
||||
answer_text = await llm.generate_raw_response(
|
||||
prompt_key=template_key,
|
||||
variables={
|
||||
"query": request.message,
|
||||
"target_type": target_type,
|
||||
"schema_fields": fields_str
|
||||
},
|
||||
system=llm.get_prompt("system_prompt"),
|
||||
priority="realtime",
|
||||
profile_name="compression_fast",
|
||||
max_retries=0
|
||||
)
|
||||
retrieve_result = await retriever.search(query_req)
|
||||
hits = retrieve_result.results
|
||||
sources_hits = []
|
||||
|
||||
# 3. RAG MODE (WP-25b Delegation an Engine v1.3.0)
|
||||
else:
|
||||
# Phase A & B: Retrieval & Kompression (Delegiert an Engine v1.3.0)
|
||||
formatted_context_map = await engine._execute_parallel_streams(strategy, request.message)
|
||||
|
||||
if inject_types:
|
||||
strategy_req = QueryRequest(
|
||||
query=request.message,
|
||||
mode="hybrid",
|
||||
top_k=3,
|
||||
filters={"type": inject_types},
|
||||
explain=False
|
||||
)
|
||||
strategy_result = await retriever.search(strategy_req)
|
||||
existing_ids = {h.node_id for h in hits}
|
||||
for strat_hit in strategy_result.results:
|
||||
if strat_hit.node_id not in existing_ids:
|
||||
hits.append(strat_hit)
|
||||
|
||||
if not hits:
|
||||
context_str = "Keine relevanten Notizen gefunden."
|
||||
else:
|
||||
context_str = _build_enriched_context(hits)
|
||||
|
||||
template = llm.prompts.get(prompt_key, "{context_str}\n\n{query}")
|
||||
# Erfassung der Quellen für das Tracing
|
||||
raw_stream_map = {}
|
||||
stream_keys = strategy.get("use_streams", [])
|
||||
library = engine.config.get("streams_library", {})
|
||||
|
||||
if prepend_instr:
|
||||
context_str = f"{prepend_instr}\n\n{context_str}"
|
||||
retrieval_tasks = []
|
||||
active_streams = []
|
||||
for key in stream_keys:
|
||||
if key in library:
|
||||
active_streams.append(key)
|
||||
retrieval_tasks.append(engine._run_single_stream(key, library[key], request.message))
|
||||
|
||||
responses = await asyncio.gather(*retrieval_tasks, return_exceptions=True)
|
||||
for name, res in zip(active_streams, responses):
|
||||
if not isinstance(res, Exception):
|
||||
raw_stream_map[name] = res
|
||||
|
||||
sources_hits = _collect_all_hits(raw_stream_map)
|
||||
|
||||
final_prompt = template.replace("{context_str}", context_str).replace("{query}", request.message)
|
||||
sources_hits = hits
|
||||
|
||||
# --- GENERATION ---
|
||||
system_prompt = llm.prompts.get("system_prompt", "")
|
||||
|
||||
# Chat nutzt IMMER realtime priority
|
||||
answer_text = await llm.generate_raw_response(
|
||||
prompt=final_prompt,
|
||||
system=system_prompt,
|
||||
priority="realtime"
|
||||
)
|
||||
# Phase C: Finale MoE Synthese (Delegiert an Engine v1.3.0)
|
||||
answer_text = await engine._generate_final_answer(
|
||||
intent, strategy, request.message, formatted_context_map
|
||||
)
|
||||
|
||||
duration_ms = int((time.time() - start_time) * 1000)
|
||||
|
||||
# Logging
|
||||
|
||||
# Logging (WP-15)
|
||||
try:
|
||||
log_search(
|
||||
query_id=query_id,
|
||||
query_text=request.message,
|
||||
results=sources_hits,
|
||||
mode="interview" if intent == "INTERVIEW" else "chat_rag",
|
||||
metadata={"intent": intent, "source": intent_source}
|
||||
query_id=query_id, query_text=request.message, results=sources_hits,
|
||||
mode=f"wp25b_{intent.lower()}", metadata={"strategy": intent, "source": intent_source}
|
||||
)
|
||||
except: pass
|
||||
|
||||
return ChatResponse(
|
||||
query_id=query_id,
|
||||
answer=answer_text,
|
||||
sources=sources_hits,
|
||||
latency_ms=duration_ms,
|
||||
intent=intent,
|
||||
intent_source=intent_source
|
||||
query_id=query_id, answer=answer_text, sources=sources_hits,
|
||||
latency_ms=duration_ms, intent=intent, intent_source=intent_source
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in chat endpoint: {e}", exc_info=True)
|
||||
raise HTTPException(status_code=500, detail=str(e))
|
||||
logger.error(f"❌ Chat Endpoint Failure: {e}", exc_info=True)
|
||||
raise HTTPException(status_code=500, detail="Fehler bei der Verarbeitung der Anfrage.")
|
||||
|
||||
@router.post("/query/discover", response_model=List[DiscoveryHit])
|
||||
async def discover_edges(
|
||||
request: DiscoveryRequest,
|
||||
llm: LLMService = Depends(get_llm_service)
|
||||
):
|
||||
"""
|
||||
WP-24c: Analysiert Text auf potenzielle Kanten zu bestehendem Wissen.
|
||||
Nutzt Vektor-Suche und DecisionEngine-Logik (WP-25b PROMPT-TRACE konform).
|
||||
"""
|
||||
start_time = time.time()
|
||||
logger.info(f"🔍 [WP-24c] Discovery triggered for content: {request.content[:50]}...")
|
||||
|
||||
try:
|
||||
# 1. Kandidaten-Suche via Retriever (Vektor-Match)
|
||||
search_req = QueryRequest(
|
||||
query=request.content,
|
||||
top_k=request.top_k,
|
||||
explain=True
|
||||
)
|
||||
candidates = await llm.decision_engine.retriever.search(search_req)
|
||||
|
||||
if not candidates.results:
|
||||
logger.info("ℹ️ No candidates found for discovery.")
|
||||
return []
|
||||
|
||||
# 2. KI-gestützte Beziehungs-Extraktion (WP-25b)
|
||||
discovery_results = []
|
||||
|
||||
# Zugriff auf gültige Kanten-Typen aus der Registry
|
||||
from app.services.edge_registry import registry as edge_reg
|
||||
valid_types_str = ", ".join(list(edge_reg.valid_types))
|
||||
|
||||
# Parallele Evaluierung der Kandidaten für maximale Performance
|
||||
async def evaluate_candidate(hit: QueryHit) -> Optional[DiscoveryHit]:
|
||||
if hit.total_score < request.min_confidence:
|
||||
return None
|
||||
|
||||
try:
|
||||
# Nutzt ingest_extractor Profil für präzise semantische Analyse
|
||||
# Wir verwenden das prompt_key Pattern (edge_extraction) gemäß WP-24c Vorgabe
|
||||
raw_suggestion = await llm.generate_raw_response(
|
||||
prompt_key="edge_extraction",
|
||||
variables={
|
||||
"note_id": "NEUER_INHALT",
|
||||
"text": f"PROXIMITY_TARGET: {hit.source.get('text', '')}\n\nNEW_CONTENT: {request.content}",
|
||||
"valid_types": valid_types_str
|
||||
},
|
||||
profile_name="ingest_extractor",
|
||||
priority="realtime"
|
||||
)
|
||||
|
||||
# Parsing der LLM Antwort (Erwartet JSON Liste)
|
||||
from app.core.ingestion.ingestion_utils import extract_json_from_response
|
||||
suggestions = extract_json_from_response(raw_suggestion)
|
||||
|
||||
if isinstance(suggestions, list) and len(suggestions) > 0:
|
||||
sugg = suggestions[0] # Wir nehmen den stärksten Vorschlag pro Hit
|
||||
return DiscoveryHit(
|
||||
target_note=hit.note_id,
|
||||
target_title=hit.source.get("title") or hit.note_id,
|
||||
suggested_edge_type=sugg.get("kind", "related_to"),
|
||||
confidence_score=hit.total_score,
|
||||
reasoning=f"Semantische Nähe ({int(hit.total_score*100)}%) entdeckt."
|
||||
)
|
||||
except Exception as e:
|
||||
logger.warning(f"⚠️ Discovery evaluation failed for hit {hit.note_id}: {e}")
|
||||
return None
|
||||
|
||||
tasks = [evaluate_candidate(hit) for hit in candidates.results]
|
||||
results = await asyncio.gather(*tasks)
|
||||
|
||||
# Zusammenführung und Duplikat-Bereinigung
|
||||
seen_targets = set()
|
||||
for r in results:
|
||||
if r and r.target_note not in seen_targets:
|
||||
discovery_results.append(r)
|
||||
seen_targets.add(r.target_note)
|
||||
|
||||
duration = int((time.time() - start_time) * 1000)
|
||||
logger.info(f"✨ Discovery finished: found {len(discovery_results)} edges in {duration}ms")
|
||||
|
||||
return discovery_results
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Discovery API failure: {e}", exc_info=True)
|
||||
raise HTTPException(status_code=500, detail="Discovery-Prozess fehlgeschlagen.")
|
||||
|
|
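End to end, the endpoint routes via keyword fast paths first, falls back to the LLM router, and then delegates retrieval and synthesis to the engine. A minimal client sketch; the base URL is an assumption and the payload fields follow ChatRequest above:

import httpx

payload = {"message": "Warum wurde Qdrant gewählt?", "top_k": 5, "explain": False}
body = httpx.post("http://localhost:8000/chat/", json=payload, timeout=120.0).json()
print(body["intent"], body["intent_source"], body["latency_ms"])
print(body["answer"])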
app/routers/embed_router.py

@@ -1,5 +1,10 @@
"""
Version 0.1
FILE: app/routers/embed_router.py
DESCRIPTION: Exposes the local embedding function as an API endpoint.
VERSION: 0.1.0
STATUS: Active
DEPENDENCIES: app.embeddings, pydantic
LAST_ANALYSIS: 2025-12-15
"""

from __future__ import annotations
app/routers/feedback.py

@@ -1,6 +1,10 @@
"""
app/routers/feedback.py
Endpoint for user feedback (WP-04c).
FILE: app/routers/feedback.py
DESCRIPTION: Endpoint for explicit user feedback (WP-04c).
VERSION: 0.1.0
STATUS: Active
DEPENDENCIES: app.models.dto, app.services.feedback_service
LAST_ANALYSIS: 2025-12-15
"""
from fastapi import APIRouter, HTTPException
from app.models.dto import FeedbackRequest
app/routers/graph.py

@@ -1,21 +1,10 @@
"""
app/routers/graph.py — graph endpoints (WP-04)

Purpose:
    Returns the neighborhood of a note/ID as a JSON graph (nodes/edges/stats).
Compatibility:
    Python 3.12+, FastAPI 0.110+, qdrant-client 1.x
Version:
    0.1.0 (initial version)
As of:
    2025-10-07
References:
    - app/core/graph_adapter.py
    - app/models/dto.py
Usage:
    app.include_router(graph.router, prefix="/graph", tags=["graph"])
Changelog:
    0.1.0 (2025-10-07) – initial version.
FILE: app/routers/graph.py
DESCRIPTION: Returns graph data (nodes/edges) for UI visualizations based on a seed ID. (WP4)
VERSION: 0.1.0
STATUS: Active
DEPENDENCIES: qdrant_client, app.models.dto, app.core.graph_adapter, app.config
LAST_ANALYSIS: 2025-12-15
"""

from __future__ import annotations

@@ -23,7 +12,7 @@ from typing import List, Optional
from fastapi import APIRouter, Query
from qdrant_client import QdrantClient
from app.models.dto import GraphResponse, NodeDTO, EdgeDTO
from app.core.graph_adapter import expand
from app.core.graph.graph_subgraph import expand
from app.config import get_settings

router = APIRouter()
app/routers/ingest.py

@@ -1,12 +1,18 @@
"""
app/routers/ingest.py
API endpoints for WP-11 (Discovery & Persistence).
Delegates to services.
FILE: app/routers/ingest.py
DESCRIPTION: Endpoints for WP-11. Accepts Markdown.
             Refactored for WP-14: uses BackgroundTasks for a non-blocking save.
             Update WP-20: support for hybrid-cloud analysis feedback.
VERSION: 0.8.0 (WP-20 Hybrid Ready)
STATUS: Active
DEPENDENCIES: app.core.ingestion, app.services.discovery, fastapi, pydantic
"""

import os
import time
import logging
from fastapi import APIRouter, HTTPException
import asyncio
from fastapi import APIRouter, HTTPException, BackgroundTasks
from pydantic import BaseModel
from typing import Optional, Dict, Any

@@ -16,7 +22,7 @@ from app.services.discovery import DiscoveryService
logger = logging.getLogger(__name__)
router = APIRouter()

# Services init (global or via dependency injection)
# Services init
discovery_service = DiscoveryService()

class AnalyzeRequest(BaseModel):

@@ -32,7 +38,33 @@ class SaveResponse(BaseModel):
    status: str
    file_path: str
    note_id: str
    stats: Dict[str, Any]
    message: str            # new, for UX feedback
    stats: Dict[str, Any]   # may be empty during async processing

# --- Background task wrapper ---
async def run_ingestion_task(markdown_content: str, filename: str, vault_root: str, folder: str):
    """
    Runs the ingestion in the background so the request does not block.
    Integrates the WP-20 hybrid mode via the IngestionService.
    """
    logger.info(f"🔄 Background Task started: Ingesting {filename}...")
    try:
        ingest_service = IngestionService()
        result = await ingest_service.create_from_text(
            markdown_content=markdown_content,
            filename=filename,
            vault_root=vault_root,
            folder=folder
        )
        # Notification services (websockets) could be triggered here later
        if result.get("status") == "error":
            logger.error(f"❌ Background Ingestion Error for {filename}: {result.get('error')}")
        else:
            logger.info(f"✅ Background Task finished: {filename} ({result.get('chunks_count')} Chunks)")

    except Exception as e:
        logger.error(f"❌ Critical Background Task Failure: {e}", exc_info=True)


@router.post("/analyze")
async def analyze_draft(req: AnalyzeRequest):

@@ -40,7 +72,6 @@ async def analyze_draft(req: AnalyzeRequest):
    WP-11 intelligence: returns link suggestions via the DiscoveryService.
    """
    try:
        # This now calls the improved service
        result = await discovery_service.analyze_draft(req.text, req.type)
        return result
    except Exception as e:

@@ -48,42 +79,47 @@ async def analyze_draft(req: AnalyzeRequest):
        return {"suggestions": [], "error": str(e)}

@router.post("/save", response_model=SaveResponse)
async def save_note(req: SaveRequest):
async def save_note(req: SaveRequest, background_tasks: BackgroundTasks):
    """
    WP-11 persistence: saves and indexes.
    WP-14 fix: starts the ingestion in the background (fire & forget).
    Prevents timeouts when smart edge allocation (WP-15) and the cloud hybrid mode (WP-20) are active.
    """
    try:
        vault_root = os.getenv("MINDNET_VAULT_ROOT", "./vault")
        abs_vault_root = os.path.abspath(vault_root)

        if not os.path.exists(abs_vault_root):
            try: os.makedirs(abs_vault_root, exist_ok=True)
            except: pass
            try:
                os.makedirs(abs_vault_root, exist_ok=True)
            except Exception as e:
                logger.warning(f"Could not create vault root: {e}")

        final_filename = req.filename or f"draft_{int(time.time())}.md"
        ingest_service = IngestionService()

        # Async call
        result = await ingest_service.create_from_text(
        # We optimistically return an ID right away,
        # even though the real ID is only known after parsing.
        # The filename is used for UI feedback.

        # Push the task onto the queue
        background_tasks.add_task(
            run_ingestion_task,
            markdown_content=req.markdown_content,
            filename=final_filename,
            vault_root=abs_vault_root,
            folder=req.folder
        )

        if result.get("status") == "error":
            raise HTTPException(status_code=500, detail=result.get("error"))

        return SaveResponse(
            status="success",
            file_path=result.get("path", "unknown"),
            note_id=result.get("note_id", "unknown"),
            status="queued",
            file_path=os.path.join(req.folder, final_filename),
            note_id="pending",
            message="Speicherung & Hybrid-KI-Analyse (WP-20) im Hintergrund gestartet.",
            stats={
                "chunks": result.get("chunks_count", 0),
                "edges": result.get("edges_count", 0)
                "chunks": -1,  # marker for async processing
                "edges": -1
            }
        )
    except HTTPException as he: raise he

    except Exception as e:
        logger.error(f"Save failed: {e}", exc_info=True)
        raise HTTPException(status_code=500, detail=f"Save failed: {str(e)}")
        logger.error(f"Save dispatch failed: {e}", exc_info=True)
        raise HTTPException(status_code=500, detail=f"Save dispatch failed: {str(e)}")
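Since /save now only enqueues the work, clients receive an immediate 'queued' response and must treat the -1 stats as not yet known. A minimal client sketch; base URL and payload values are assumptions:

import httpx

payload = {"markdown_content": "# Neue Notiz\n...", "filename": "idee.md", "folder": "inbox"}
body = httpx.post("http://localhost:8000/ingest/save", json=payload).json()
assert body["status"] == "queued" and body["stats"]["chunks"] == -1  # async markers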
app/routers/qdrant_router.py (deleted)

@@ -1,160 +0,0 @@
"""
Version 0.1
"""

from __future__ import annotations

from typing import Any, Optional, List
import uuid

from fastapi import APIRouter
from pydantic import BaseModel, Field
from qdrant_client import QdrantClient
from qdrant_client.http.models import (
    Distance,
    VectorParams,
    PointStruct,
    Filter,
    FieldCondition,
    MatchValue,
)

from ..config import get_settings
from ..embeddings import embed_texts

router = APIRouter(prefix="/qdrant", tags=["qdrant"])

def _client() -> QdrantClient:
    s = get_settings()
    return QdrantClient(url=s.QDRANT_URL, api_key=s.QDRANT_API_KEY)

def _col(name: str) -> str:
    return f"{get_settings().COLLECTION_PREFIX}_{name}"

def _uuid5(s: str) -> str:
    """Deterministic UUIDv5 from arbitrary string (server-side point id)."""
    return str(uuid.uuid5(uuid.NAMESPACE_URL, s))

# --- Models ---
class BaseMeta(BaseModel):
    note_id: str = Field(..., description="Stable ID of the note (e.g., hash of vault-relative path)")
    title: Optional[str] = Field(None, description="Note or chunk title")
    path: Optional[str] = Field(None, description="Vault-relative path to the .md file")
    Typ: Optional[str] = None
    Status: Optional[str] = None
    tags: Optional[List[str]] = None
    Rolle: Optional[List[str]] = None  # allow list

class UpsertChunkRequest(BaseMeta):
    chunk_id: str = Field(..., description="Stable ID of the chunk within the note")
    text: str = Field(..., description="Chunk text content")
    links: Optional[List[str]] = Field(default=None, description="Outbound links detected in the chunk")

class UpsertNoteRequest(BaseMeta):
    text: Optional[str] = Field(None, description="Full note text (optional)")

class UpsertEdgeRequest(BaseModel):
    src_note_id: str
    dst_note_id: Optional[str] = None
    src_chunk_id: Optional[str] = None
    dst_chunk_id: Optional[str] = None
    relation: str = Field(default="links_to")
    link_text: Optional[str] = None

class QueryRequest(BaseModel):
    query: str
    limit: int = 5
    note_id: Optional[str] = None
    path: Optional[str] = None
    tags: Optional[List[str]] = None

# --- Helpers ---
def _ensure_collections():
    s = get_settings()
    cli = _client()
    # chunks
    try:
        cli.get_collection(_col("chunks"))
    except Exception:
        cli.recreate_collection(_col("chunks"), vectors_config=VectorParams(size=s.VECTOR_SIZE, distance=Distance.COSINE))
    # notes
    try:
        cli.get_collection(_col("notes"))
    except Exception:
        cli.recreate_collection(_col("notes"), vectors_config=VectorParams(size=s.VECTOR_SIZE, distance=Distance.COSINE))
    # edges (dummy vector of size 1)
    try:
        cli.get_collection(_col("edges"))
    except Exception:
        cli.recreate_collection(_col("edges"), vectors_config=VectorParams(size=1, distance=Distance.COSINE))

@router.post("/upsert_chunk", summary="Upsert a chunk into mindnet_chunks")
def upsert_chunk(req: UpsertChunkRequest) -> dict:
    _ensure_collections()
    cli = _client()
    vec = embed_texts([req.text])[0]
    payload: dict[str, Any] = req.model_dump()
    payload.pop("text", None)
    payload["preview"] = (req.text[:240] + "…") if len(req.text) > 240 else req.text
    qdrant_id = _uuid5(f"chunk:{req.chunk_id}")
    pt = PointStruct(id=qdrant_id, vector=vec, payload=payload)
    cli.upsert(collection_name=_col("chunks"), points=[pt])
    return {"status": "ok", "id": qdrant_id}

@router.post("/upsert_note", summary="Upsert a note into mindnet_notes")
def upsert_note(req: UpsertNoteRequest) -> dict:
    _ensure_collections()
    cli = _client()
    text_for_embedding = req.text if req.text else (req.title or req.note_id)
    vec = embed_texts([text_for_embedding])[0]
    payload: dict[str, Any] = req.model_dump()
    payload.pop("text", None)
    qdrant_id = _uuid5(f"note:{req.note_id}")
    pt = PointStruct(id=qdrant_id, vector=vec, payload=payload)
    cli.upsert(collection_name=_col("notes"), points=[pt])
    return {"status": "ok", "id": qdrant_id}

@router.post("/upsert_edge", summary="Upsert a graph edge into mindnet_edges")
def upsert_edge(req: UpsertEdgeRequest) -> dict:
    _ensure_collections()
    cli = _client()
    payload = req.model_dump()
    vec = [0.0]
    raw_edge_id = f"{req.src_note_id}|{req.src_chunk_id or ''}->{req.dst_note_id or ''}|{req.dst_chunk_id or ''}|{req.relation}"
    qdrant_id = _uuid5(f"edge:{raw_edge_id}")
    pt = PointStruct(id=qdrant_id, vector=vec, payload=payload)
    cli.upsert(collection_name=_col("edges"), points=[pt])
    return {"status": "ok", "id": qdrant_id}

@router.post("/query", summary="Vector query over mindnet_chunks with optional filters")
def query(req: QueryRequest) -> dict:
    _ensure_collections()
    cli = _client()
    vec = embed_texts([req.query])[0]

    flt: Optional[Filter] = None
    conds = []
    if req.note_id:
        conds.append(FieldCondition(key="note_id", match=MatchValue(value=req.note_id)))
    if req.path:
        conds.append(FieldCondition(key="path", match=MatchValue(value=req.path)))
    if req.tags:
        for t in req.tags:
            conds.append(FieldCondition(key="tags", match=MatchValue(value=t)))
    if conds:
        flt = Filter(must=conds)

    res = cli.search(collection_name=_col("chunks"), query_vector=vec, limit=req.limit, with_payload=True, with_vectors=False, query_filter=flt)
    hits = []
    for p in res:
        pl = p.payload or {}
        hits.append({
            "chunk_id": p.id,
            "score": p.score,
            "note_id": pl.get("note_id"),
            "title": pl.get("title"),
            "path": pl.get("path"),
            "preview": pl.get("preview"),
            "tags": pl.get("tags"),
        })
    return {"results": hits}
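The one idea worth keeping from the removed router is the deterministic UUIDv5 point ID: re-ingesting the same chunk always maps to the same Qdrant ID, which makes upserts idempotent. A standalone sketch of that property:

import uuid

# Same namespace trick as the removed _uuid5 helper: equal input, equal ID.
def uuid5_id(s: str) -> str:
    return str(uuid.uuid5(uuid.NAMESPACE_URL, s))

assert uuid5_id("chunk:note_1#0") == uuid5_id("chunk:note_1#0")  # stable across runs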
app/routers/query.py

@@ -1,27 +1,16 @@
"""
app/routers/query.py — query endpoints (WP-04)
FILE: app/routers/query.py
DESCRIPTION: Classic search endpoints (semantic & hybrid). Initiates asynchronous feedback logging and calls the appropriate retriever mode.
VERSION: 0.2.0
STATUS: Active
DEPENDENCIES: app.models.dto, app.core.retriever, app.services.feedback_service
LAST_ANALYSIS: 2025-12-15
"""

Purpose:
    Provides POST /query and calls the matching retriever mode.
Compatibility:
    Python 3.12+, FastAPI 0.110+
Version:
    0.1.0 (initial version)
As of:
    2025-10-07
References:
    - app/core/retriever.py
    - app/models/dto.py
Usage:
    app.include_router(query.router, prefix="/query", tags=["query"])
Changelog:
    0.2.0 (2025-12-07) - update for WP-04c feedback
    0.1.0 (2025-10-07) – initial version.
"""
from __future__ import annotations
from fastapi import APIRouter, HTTPException, BackgroundTasks
from app.models.dto import QueryRequest, QueryResponse
from app.core.retriever import hybrid_retrieve, semantic_retrieve
from app.core.retrieval.retriever import hybrid_retrieve, semantic_retrieve
# NEW:
from app.services.feedback_service import log_search
app/routers/tools.py

@@ -1,21 +1,10 @@
"""
app/routers/tools.py — tool definitions for Ollama/n8n/MCP (read-only)

Purpose:
    Provides function schemas (OpenAI/Ollama-compatible tool JSON) for:
    - mindnet_query    -> POST /query
    - mindnet_subgraph -> GET /graph/{note_id}
Compatibility:
    Python 3.12+, FastAPI 0.110+
Version:
    0.1.1 (query OR query_vector possible)
As of:
    2025-10-07
Usage:
    app.include_router(tools.router, prefix="/tools", tags=["tools"])
Changelog:
    0.1.1 (2025-10-07) – mindnet_query: oneOf(query, query_vector).
    0.1.0 (2025-10-07) – initial version.
FILE: app/routers/tools.py
DESCRIPTION: Provides JSON schemas for integration as 'tools' in agents (Ollama/OpenAI). Read-only.
VERSION: 0.1.1
STATUS: Active
DEPENDENCIES: fastapi
LAST_ANALYSIS: 2025-12-15
"""

from __future__ import annotations
@ -1,12 +1,12 @@
|
|||
"""
|
||||
app/services/discovery.py
|
||||
Service für Link-Vorschläge und Knowledge-Discovery (WP-11).
|
||||
|
||||
Features:
|
||||
- Sliding Window Analyse für lange Texte.
|
||||
- Footer-Scan für Projekt-Referenzen.
|
||||
- 'Matrix-Logic' für intelligente Kanten-Typen (Experience -> Value = based_on).
|
||||
- Async & Nomic-Embeddings kompatibel.
|
||||
FILE: app/services/discovery.py
|
||||
DESCRIPTION: Service für WP-11 (Discovery API). Analysiert Entwürfe, findet Entitäten
|
||||
und schlägt typisierte Verbindungen basierend auf der Topologie vor.
|
||||
WP-24c: Vollständige Umstellung auf EdgeRegistry für dynamische Vorschläge.
|
||||
WP-15b: Unterstützung für hybride Suche und Alias-Erkennung.
|
||||
VERSION: 1.1.0 (WP-24c: Full Registry Integration & Audit Fix)
|
||||
STATUS: Active
|
||||
COMPATIBILITY: 100% (Identische API-Signatur wie v0.6.0)
|
||||
"""
|
||||
import logging
|
||||
import asyncio
|
||||
|
|
@ -14,207 +14,184 @@ import os
|
|||
from typing import List, Dict, Any, Optional, Set
|
||||
import yaml
|
||||
|
||||
from app.core.qdrant import QdrantConfig, get_client
|
||||
from app.core.database.qdrant import QdrantConfig, get_client
|
||||
from app.models.dto import QueryRequest
|
||||
from app.core.retriever import hybrid_retrieve
|
||||
from app.core.retrieval.retriever import hybrid_retrieve
|
||||
# WP-24c: Zentrale Topologie-Quelle
|
||||
from app.services.edge_registry import registry as edge_registry
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class DiscoveryService:
|
||||
def __init__(self, collection_prefix: str = None):
|
||||
"""Initialisiert den Discovery Service mit Qdrant-Anbindung."""
|
||||
self.cfg = QdrantConfig.from_env()
|
||||
self.prefix = collection_prefix or self.cfg.prefix or "mindnet"
|
||||
self.client = get_client(self.cfg)
|
||||
|
||||
# Die Registry wird für Typ-Metadaten geladen (Schema-Validierung)
|
||||
self.registry = self._load_type_registry()
|
||||
|
||||
async def analyze_draft(self, text: str, current_type: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Analysiert den Text und liefert Vorschläge mit kontext-sensitiven Kanten-Typen.
|
||||
Analysiert einen Textentwurf auf potenzielle Verbindungen.
|
||||
1. Findet exakte Treffer (Titel/Aliasse).
|
||||
2. Führt semantische Suchen für verschiedene Textabschnitte aus.
|
||||
3. Schlägt topologisch korrekte Kanten-Typen vor.
|
||||
"""
|
||||
if not text or len(text.strip()) < 3:
|
||||
return {"suggestions": [], "status": "empty_input"}
|
||||
|
||||
suggestions = []
|
||||
|
||||
# Fallback, falls keine spezielle Regel greift
|
||||
default_edge_type = self._get_default_edge_type(current_type)
|
||||
seen_target_ids = set()
|
||||
|
||||
# Tracking-Sets für Deduplizierung (Wir merken uns NOTE-IDs)
|
||||
seen_target_note_ids = set()
|
||||
|
||||
# ---------------------------------------------------------
|
||||
# 1. Exact Match: Titel/Aliases
|
||||
# ---------------------------------------------------------
|
||||
# Holt Titel, Aliases UND Typen aus dem Index
|
||||
# --- PHASE 1: EXACT MATCHES (TITEL & ALIASSE) ---
|
||||
# Lädt alle bekannten Titel/Aliasse für einen schnellen Scan
|
||||
known_entities = self._fetch_all_titles_and_aliases()
|
||||
found_entities = self._find_entities_in_text(text, known_entities)
|
||||
exact_matches = self._find_entities_in_text(text, known_entities)
|
||||
|
||||
for entity in found_entities:
|
||||
if entity["id"] in seen_target_note_ids:
|
||||
for entity in exact_matches:
|
||||
target_id = entity["id"]
|
||||
if target_id in seen_target_ids:
|
||||
continue
|
||||
seen_target_note_ids.add(entity["id"])
|
||||
|
||||
# INTELLIGENTE KANTEN-LOGIK (MATRIX)
|
||||
|
||||
seen_target_ids.add(target_id)
|
||||
target_type = entity.get("type", "concept")
|
||||
smart_edge = self._resolve_edge_type(current_type, target_type)
|
||||
|
||||
# WP-24c: Dynamische Kanten-Ermittlung statt Hardcoded Matrix
|
||||
suggested_kind = self._resolve_edge_type(current_type, target_type)
|
||||
|
||||
suggestions.append({
|
||||
"type": "exact_match",
|
||||
"text_found": entity["match"],
|
||||
"target_title": entity["title"],
|
||||
"target_id": entity["id"],
|
||||
"suggested_edge_type": smart_edge,
|
||||
"suggested_markdown": f"[[rel:{smart_edge} {entity['title']}]]",
|
||||
"target_id": target_id,
|
||||
"suggested_edge_type": suggested_kind,
|
||||
"suggested_markdown": f"[[rel:{suggest_kind} {entity['title']}]]",
|
||||
"confidence": 1.0,
|
||||
"reason": f"Exakter Treffer: '{entity['match']}' ({target_type})"
|
||||
"reason": f"Direkte Erwähnung von '{entity['match']}' ({target_type})"
|
||||
})
|
||||
|
||||
        # --- PHASE 2: SEMANTIC MATCHES (VECTOR SEARCH) ---
        # Builds search queries for different windows of the text
        search_queries = self._generate_search_queries(text)

        # Run the searches in parallel (cloud performance)
        tasks = [self._get_semantic_suggestions_async(q) for q in search_queries]
        results_list = await asyncio.gather(*tasks)

        # Process the results
        for hits in results_list:
            for hit in hits:
                payload = hit.payload or {}
                target_id = payload.get("note_id")

                if not target_id or target_id in seen_target_ids:
                    continue

                # Relevance threshold (model-specific for nomic)
                if hit.total_score > 0.55:
                    seen_target_ids.add(target_id)
                    target_type = payload.get("type", "concept")
                    target_title = payload.get("title") or "Unbenannt"

                    # WP-24c: use the topology engine
                    suggested_kind = self._resolve_edge_type(current_type, target_type)

                    suggestions.append({
                        "type": "semantic_match",
                        "text_found": (hit.source.get("text") or "")[:80] + "...",
                        "target_title": target_title,
                        "target_id": target_id,
                        "suggested_edge_type": suggested_kind,
                        "suggested_markdown": f"[[rel:{suggested_kind} {target_title}]]",
                        "confidence": round(hit.total_score, 2),
                        "reason": f"Semantischer Bezug zu {target_type} ({int(hit.total_score*100)}%)"
                    })

        # Sort by confidence
        suggestions.sort(key=lambda x: x["confidence"], reverse=True)

        return {
            "draft_length": len(text),
            "analyzed_windows": len(search_queries),
            "suggestions_count": len(suggestions),
            "suggestions": suggestions[:12]  # top 12 suggestions
        }
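
    # A minimal sketch of the analyze_draft contract. The note ID, title, and
    # edge kind below are hypothetical; real values come from the index and
    # the edge registry:
    #
    #   result = await DiscoveryService().analyze_draft("Setup von Qdrant im Projekt MindNet", "journal")
    #   result["suggestions"][0] == {
    #       "type": "exact_match",
    #       "text_found": "MindNet",
    #       "target_title": "MindNet",
    #       "target_id": "note_123",
    #       "suggested_edge_type": "part_of",
    #       "suggested_markdown": "[[rel:part_of MindNet]]",
    #       "confidence": 1.0,
    #       "reason": "Direkte Erwähnung von 'MindNet' (project)"
    #   }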

    # --- LOGIC HUB (WP-24c) ---

    def _resolve_edge_type(self, source_type: str, target_type: str) -> str:
        """
        Determines the optimal edge type between two note types.
        Uses the EdgeRegistry (graph_schema.md) instead of a local matrix.
        """
        # 1. Specific check: is there a rule for source -> target?
        info = edge_registry.get_topology_info(source_type, target_type)
        typical = info.get("typical", [])
        if typical:
            return typical[0]  # first suggestion from the schema

        # 2. Fallback: what is generally typical for the source type? (source -> any)
        info_fallback = edge_registry.get_topology_info(source_type, "any")
        typical_fallback = info_fallback.get("typical", [])
        if typical_fallback:
            return typical_fallback[0]

        # 3. Global fallback (safety net)
        return "related_to"
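
    # A usage sketch of the three-level resolution. The schema rules assumed
    # here (project -> concept mapping to "uses") are hypothetical examples:
    #
    #   svc._resolve_edge_type("project", "concept")   # -> "uses" (level 1: specific rule)
    #   svc._resolve_edge_type("project", "artifact")  # -> first "project -> any" edge, if defined (level 2)
    #   svc._resolve_edge_type("foo", "bar")           # -> "related_to" (level 3: safety net)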

    # --- HELPERS (FULLY PRESERVED) ---

    def _generate_search_queries(self, text: str) -> List[str]:
        """Builds overlapping windows for the vector search (sliding window)."""
        if not text:
            return []
        text_len = len(text)
        queries = []

        # Focus A: start of the document (context)
        queries.append(text[:600])

        # Focus B: end of the document (current writing focus)
        if text_len > 250:
            footer = text[-350:]
            if footer not in queries:
                queries.append(footer)

        # Focus C: intermediate sections for long texts
        if text_len > 1200:
            window_size = 500
            step = 1200
            for i in range(600, text_len - 400, step):
                chunk = text[i:i+window_size]
                if len(chunk) > 100:
                    queries.append(chunk)

        return queries
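
    # Worked example: for len(text) == 2000 the method returns three queries:
    #   text[:600]       (focus A: head)
    #   text[-350:]      (focus B: footer)
    #   text[600:1100]   (focus C: one mid-window, since range(600, 1600, 1200) yields only i=600)
    # A short 200-character note produces just the head window.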

    # ---------------------------------------------------------
    # Standard Helpers
    # ---------------------------------------------------------

    async def _get_semantic_suggestions_async(self, text: str):
        """Runs an asynchronous vector search through the retriever."""
        req = QueryRequest(query=text, top_k=6, explain=False)
        try:
            # Uses hybrid_retrieve (WP-15b standard)
            res = hybrid_retrieve(req)
            return res.results
        except Exception as e:
            logger.error(f"Discovery retrieval error: {e}")
            return []

    def _load_type_registry(self) -> dict:
        """Loads types.yaml for the type definitions."""
        path = os.getenv("MINDNET_TYPES_FILE", "config/types.yaml")
        if not os.path.exists(path):
            return {}
        try:
            with open(path, "r", encoding="utf-8") as f:
                return yaml.safe_load(f) or {}
        except Exception:
            return {}

    def _get_default_edge_type(self, note_type: str) -> str:
        types_cfg = self.registry.get("types", {})
        type_def = types_cfg.get(note_type, {})
        defaults = type_def.get("edge_defaults")
        return defaults[0] if defaults else "related_to"
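
    # _get_default_edge_type only returns something other than "related_to"
    # when types.yaml defines per-type edge_defaults. A minimal assumed shape,
    # shown as the parsed dict (illustrative only):
    #
    #   self.registry == {"types": {"project": {"edge_defaults": ["part_of", "uses"]}}}
    #   self._get_default_edge_type("project")  # -> "part_of"
    #   self._get_default_edge_type("unknown")  # -> "related_to"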

    def _fetch_all_titles_and_aliases(self) -> List[Dict]:
        """Fetches all note IDs, titles, and aliases for the exact-match scan."""
        entities = []
        next_page = None
        col = f"{self.prefix}_notes"
        try:

@@ -226,30 +203,40 @@ class DiscoveryService:
            for point in res:
                pl = point.payload or {}
                aliases = pl.get("aliases") or []
                if isinstance(aliases, str):
                    aliases = [aliases]

                entities.append({
                    "id": pl.get("note_id"),
                    "title": pl.get("title"),
                    "aliases": aliases,
                    "type": pl.get("type", "concept")
                })
            if next_page is None:
                break
        except Exception as e:
            logger.warning(f"Error fetching entities for discovery: {e}")
        return entities

    def _find_entities_in_text(self, text: str, entities: List[Dict]) -> List[Dict]:
        """Scans the text for mentions of known entities."""
        found = []
        text_lower = text.lower()
        for entity in entities:
            # Title check
            title = entity.get("title")
            if title and title.lower() in text_lower:
                found.append({
                    "match": title, "title": title,
                    "id": entity["id"], "type": entity["type"]
                })
                continue
            # Alias check
            for alias in entity.get("aliases", []):
                if str(alias).lower() in text_lower:
                    found.append({
                        "match": str(alias), "title": title,
                        "id": entity["id"], "type": entity["type"]
                    })
                    break
        return found

app/services/edge_registry.py (new file, 227 lines)

@@ -0,0 +1,227 @@
"""
|
||||
FILE: app/services/edge_registry.py
|
||||
DESCRIPTION: Single Source of Truth für Kanten-Typen, Symmetrien und Graph-Topologie.
|
||||
WP-24c: Implementierung der dualen Registry (Vocabulary & Schema).
|
||||
Unterstützt dynamisches Laden von Inversen und kontextuellen Vorschlägen.
|
||||
VERSION: 1.0.1 (WP-24c: Verified Atomic Topology)
|
||||
STATUS: Active
|
||||
"""
|
||||
import re
import os
import json
import logging
import time
from typing import Dict, Optional, Set, Tuple, List

from app.config import get_settings

logger = logging.getLogger(__name__)

class EdgeRegistry:
    """
    Central manager for the edge vocabulary and the graph schema.
    Singleton pattern to guarantee consistent validation.
    """
    _instance = None

    # SYSTEM PROTECTION: these edges are reserved for structural integrity (kept from v0.8.0)
    FORBIDDEN_SYSTEM_EDGES = {"next", "prev", "belongs_to"}

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super(EdgeRegistry, cls).__new__(cls)
            cls._instance.initialized = False
        return cls._instance

    def __init__(self):
        if self.initialized:
            return

        settings = get_settings()

        # --- Path configuration (WP-24c: variable paths for vault mirroring) ---
        # The vocabulary (semantics)
        self.full_vocab_path = os.path.abspath(settings.MINDNET_VOCAB_PATH)

        # The schema (topology), configurable via ENV: MINDNET_SCHEMA_PATH
        schema_env = getattr(settings, "MINDNET_SCHEMA_PATH", None)
        if schema_env:
            self.full_schema_path = os.path.abspath(schema_env)
        else:
            # Fallback: lives in the same directory as the vocabulary
            self.full_schema_path = os.path.join(os.path.dirname(self.full_vocab_path), "graph_schema.md")

        self.unknown_log_path = "data/logs/unknown_edges.jsonl"

        # --- Internal data stores ---
        self.canonical_map: Dict[str, str] = {}
        self.inverse_map: Dict[str, str] = {}
        self.valid_types: Set[str] = set()

        # Topology: source_type -> { target_type -> {"typical": set, "prohibited": set} }
        self.topology: Dict[str, Dict[str, Dict[str, Set[str]]]] = {}

        self._last_vocab_mtime = 0.0
        self._last_schema_mtime = 0.0

        logger.info(f">>> [EDGE-REGISTRY] Initializing WP-24c Dual-Engine")
        logger.info(f"    - Vocab-Path: {self.full_vocab_path}")
        logger.info(f"    - Schema-Path: {self.full_schema_path}")

        self.ensure_latest()
        self.initialized = True

    def ensure_latest(self):
        """Checks the timestamps of both files and hot-reloads on change."""
        try:
            # Reload the vocabulary on change
            if os.path.exists(self.full_vocab_path):
                v_mtime = os.path.getmtime(self.full_vocab_path)
                if v_mtime > self._last_vocab_mtime:
                    self._load_vocabulary()
                    self._last_vocab_mtime = v_mtime

            # Reload the schema on change
            if os.path.exists(self.full_schema_path):
                s_mtime = os.path.getmtime(self.full_schema_path)
                if s_mtime > self._last_schema_mtime:
                    self._load_schema()
                    self._last_schema_mtime = s_mtime

        except Exception as e:
            logger.error(f"!!! [EDGE-REGISTRY] Sync failure: {e}")

    def _load_vocabulary(self):
        """Parses edge_vocabulary.md: | Canonical | Inverse | Aliases | Description |"""
        self.canonical_map.clear()
        self.inverse_map.clear()
        self.valid_types.clear()

        # Regex for the 4-column structure (WP-24c compliant)
        # Expects: | **`type`** | `inverse` | alias1, alias2 | ... |
        pattern = re.compile(r"\|\s*\*\*`?([a-zA-Z0-9_-]+)`?\*\*\s*\|\s*`?([a-zA-Z0-9_-]+)`?\s*\|\s*([^|]+)\|")

        try:
            with open(self.full_vocab_path, "r", encoding="utf-8") as f:
                c_count = 0
                for line in f:
                    match = pattern.search(line)
                    if match:
                        canonical = match.group(1).strip().lower()
                        inverse = match.group(2).strip().lower()
                        aliases_raw = match.group(3).strip()

                        self.valid_types.add(canonical)
                        self.canonical_map[canonical] = canonical
                        if inverse:
                            self.inverse_map[canonical] = inverse

                        # Process aliases (normalized to snake_case)
                        if aliases_raw and "Kein Alias" not in aliases_raw:
                            aliases = [a.strip() for a in aliases_raw.split(",") if a.strip()]
                            for alias in aliases:
                                clean_alias = alias.replace("`", "").lower().strip().replace(" ", "_")
                                if clean_alias:
                                    self.canonical_map[clean_alias] = canonical
                        c_count += 1

            logger.info(f"✅ [VOCAB] Loaded {c_count} edge definitions and their inverses.")
        except Exception as e:
            logger.error(f"❌ [VOCAB ERROR] {e}")
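
    # A vocabulary row this regex is meant to match might look like the
    # following (hypothetical entry; the real rows live in edge_vocabulary.md):
    #
    #   | **`supports`** | `supported_by` | unterstützt, stützt | ... |
    #
    # group(1) -> "supports" (canonical), group(2) -> "supported_by" (inverse),
    # group(3) -> the alias column, so canonical_map["unterstützt"] == "supports".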

    def _load_schema(self):
        """Parses graph_schema.md: ## Source: `type` | Target | Typical | Prohibited |"""
        self.topology.clear()
        current_source = None

        try:
            with open(self.full_schema_path, "r", encoding="utf-8") as f:
                for line in f:
                    # Detect headers (atomic sections)
                    src_match = re.search(r"## Source:\s*`?([a-zA-Z0-9_-]+)`?", line)
                    if src_match:
                        current_source = src_match.group(1).strip().lower()
                        if current_source not in self.topology:
                            self.topology[current_source] = {}
                        continue

                    # Parse table rows
                    if current_source and "|" in line and not line.startswith("|-") and "Target" not in line:
                        cols = [c.strip().replace("`", "").lower() for c in line.split("|")]
                        if len(cols) >= 4:
                            target_type = cols[1]
                            typical_edges = [e.strip() for e in cols[2].split(",") if e.strip() and e != "-"]
                            prohibited_edges = [e.strip() for e in cols[3].split(",") if e.strip() and e != "-"]

                            if target_type not in self.topology[current_source]:
                                self.topology[current_source][target_type] = {"typical": set(), "prohibited": set()}

                            self.topology[current_source][target_type]["typical"].update(typical_edges)
                            self.topology[current_source][target_type]["prohibited"].update(prohibited_edges)

            logger.info(f"✅ [SCHEMA] Topology matrix built for {len(self.topology)} source types.")
        except Exception as e:
            logger.error(f"❌ [SCHEMA ERROR] {e}")
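
    # A schema section in the expected format could look like this (assumed
    # example; the real sections live in graph_schema.md):
    #
    #   ## Source: `project`
    #   | Target  | Typical    | Prohibited |
    #   |---------|------------|------------|
    #   | concept | uses       | belongs_to |
    #   | any     | related_to | -          |
    #
    # which would populate topology["project"]["concept"] with
    # typical={"uses"} and prohibited={"belongs_to"}.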

    def resolve(self, edge_type: str, provenance: str = "explicit", context: dict = None) -> str:
        """
        Resolves aliases to canonical names and protects system edges.
        Preserves the v0.8.0 protection logic.
        """
        self.ensure_latest()
        if not edge_type:
            return "related_to"

        clean_type = edge_type.lower().strip().replace(" ", "_").replace("-", "_")
        ctx = context or {}

        # Security gate: guard against illegal use of system edges
        restricted_provenance = ["explicit", "semantic_ai", "inherited", "global_pool", "rule"]
        if provenance in restricted_provenance and clean_type in self.FORBIDDEN_SYSTEM_EDGES:
            self._log_issue(clean_type, f"forbidden_system_edge_manipulation_by_{provenance}", ctx)
            return "related_to"

        # System edges are allowed ONLY with structural provenance (code-generated)
        if provenance == "structure" and clean_type in self.FORBIDDEN_SYSTEM_EDGES:
            return clean_type

        # Alias resolution
        return self.canonical_map.get(clean_type, clean_type)

    def get_inverse(self, edge_type: str) -> str:
        """WP-24c: returns the symmetric counterpart."""
        canonical = self.resolve(edge_type)
        return self.inverse_map.get(canonical, "related_to")

    def get_topology_info(self, source_type: str, target_type: str) -> Dict[str, List[str]]:
        """
        WP-24c: provides contextual edge recommendations for Obsidian and the backend.
        """
        self.ensure_latest()

        # Hierarchical lookup: specific -> 'any' -> empty
        src_cfg = self.topology.get(source_type, self.topology.get("any", {}))
        tgt_cfg = src_cfg.get(target_type, src_cfg.get("any", {"typical": set(), "prohibited": set()}))

        return {
            "typical": sorted(list(tgt_cfg["typical"])),
            "prohibited": sorted(list(tgt_cfg["prohibited"]))
        }
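
    # Putting the three public lookups together. A hedged usage sketch; the
    # concrete edge names depend on the actual vocabulary and schema files:
    #
    #   registry.resolve("unterstützt")    # alias -> "supports" (given the entry above)
    #   registry.resolve("next")           # -> "related_to" (system edge blocked for "explicit")
    #   registry.get_inverse("supports")   # -> "supported_by"
    #   registry.get_topology_info("project", "concept")
    #   # -> {"typical": ["uses"], "prohibited": ["belongs_to"]}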

    def _log_issue(self, edge_type: str, error_kind: str, ctx: dict):
        """JSONL logging for unknown/forbidden edges (kept from v0.8.0)."""
        try:
            os.makedirs(os.path.dirname(self.unknown_log_path), exist_ok=True)
            entry = {
                "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
                "edge_type": edge_type,
                "error": error_kind,
                "note_id": ctx.get("note_id", "unknown"),
                "provenance": ctx.get("provenance", "unknown")
            }
            with open(self.unknown_log_path, "a", encoding="utf-8") as f:
                f.write(json.dumps(entry) + "\n")
        except Exception:
            pass

# Singleton export
registry = EdgeRegistry()

@@ -1,42 +1,74 @@
"""
|
||||
app/services/embeddings_client.py — Text→Embedding Service
|
||||
|
||||
Zweck:
|
||||
Einheitlicher Client für Embeddings via Ollama (Nomic).
|
||||
Stellt sicher, dass sowohl Async (Ingestion) als auch Sync (Retriever)
|
||||
denselben Vektorraum (768 Dim) nutzen.
|
||||
|
||||
Version: 2.5.0 (Unified Ollama)
|
||||
FILE: app/services/embeddings_client.py
|
||||
DESCRIPTION: Unified Embedding Client. Nutzt MoE-Profile zur Modellsteuerung.
|
||||
WP-25a: Integration der llm_profiles.yaml für konsistente Vektoren.
|
||||
VERSION: 2.6.0 (WP-25a: MoE & Profile Support)
|
||||
STATUS: Active
|
||||
DEPENDENCIES: httpx, requests, app.config, yaml
|
||||
"""
|
||||
from __future__ import annotations
import os
import logging
import httpx
import requests
import yaml
from pathlib import Path
from typing import List, Dict, Any
from app.config import get_settings

logger = logging.getLogger(__name__)

class EmbeddingsClient:
    """
    Async client for embeddings.
    Controlled via the 'embedding_expert' profile in llm_profiles.yaml.
    """
    def __init__(self):
        self.settings = get_settings()

        # 1. Load the MoE profile (WP-25a)
        self.profile = self._load_embedding_profile()

        # 2. Resolve model & URL
        # Priority: llm_profiles.yaml -> .env (legacy) -> fallback
        self.model = self.profile.get("model") or os.getenv("MINDNET_EMBEDDING_MODEL")

        provider = self.profile.get("provider", "ollama")
        if provider == "ollama":
            self.base_url = self.settings.OLLAMA_URL
        else:
            # Placeholder for future cloud embedding providers
            self.base_url = os.getenv("MINDNET_OLLAMA_URL", "http://127.0.0.1:11434")

        if not self.model:
            self.model = os.getenv("MINDNET_LLM_MODEL", "phi3:mini")
            logger.warning(f"⚠️ Kein Embedding-Modell in Profil oder .env gefunden. Fallback auf '{self.model}'.")
        else:
            logger.info(f"🧬 Embedding-Experte aktiv: Model='{self.model}' via {provider}")

    def _load_embedding_profile(self) -> Dict[str, Any]:
        """Loads the configuration for the embedding_expert."""
        path_str = getattr(self.settings, "LLM_PROFILES_PATH", "config/llm_profiles.yaml")
        path = Path(path_str)
        if not path.exists():
            return {}
        try:
            with open(path, "r", encoding="utf-8") as f:
                data = yaml.safe_load(f) or {}
            profiles = data.get("profiles", {})
            return profiles.get("embedding_expert", {})
        except Exception as e:
            logger.error(f"❌ Failed to load embedding profile: {e}")
            return {}
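
    # The lookup assumes an llm_profiles.yaml with a profiles.embedding_expert
    # entry. A minimal assumed shape, shown as the parsed dict (the model name
    # is an example; nomic-embed-text is the 768-dim model referenced elsewhere
    # in this changeset):
    #
    #   {"profiles": {"embedding_expert": {"provider": "ollama", "model": "nomic-embed-text"}}}
    #
    # With that file, self.model == "nomic-embed-text" and base_url comes from
    # settings.OLLAMA_URL.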

    async def embed_query(self, text: str) -> List[float]:
        """Creates a vector for a search query."""
        return await self._request_embedding(text)

    async def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Creates vectors for a batch of documents."""
        vectors = []
        # Longer timeout for batches (WP-20 resilience)
        async with httpx.AsyncClient(timeout=120.0) as client:
            for text in texts:
                vec = await self._request_embedding_with_client(client, text)

@@ -44,18 +76,23 @@ class EmbeddingsClient:
            return vectors

    async def _request_embedding(self, text: str) -> List[float]:
        """Internal request handler for single queries."""
        async with httpx.AsyncClient(timeout=30.0) as client:
            return await self._request_embedding_with_client(client, text)

    async def _request_embedding_with_client(self, client: httpx.AsyncClient, text: str) -> List[float]:
        """Performs the HTTP call against the embedding API."""
        if not text or not text.strip():
            return []

        url = f"{self.base_url}/api/embeddings"
        try:
            # WP-25: currently optimized for the Ollama API structure
            response = await client.post(url, json={"model": self.model, "prompt": text})
            response.raise_for_status()
            return response.json().get("embedding", [])
        except Exception as e:
            logger.error(f"Async embedding failed (Model: {self.model}): {e}")
            return []

# ==============================================================================

@@ -64,27 +101,38 @@ class EmbeddingsClient:

def embed_text(text: str) -> List[float]:
    """
    LEGACY/SYNC: also uses the profile logic, for consistency.
    Replaces local sentence-transformers to avoid dimension conflicts.
    """
    if not text or not text.strip():
        return []

    settings = get_settings()

    # Fast profile lookup for sync mode
    path = Path(getattr(settings, "LLM_PROFILES_PATH", "config/llm_profiles.yaml"))
    model = os.getenv("MINDNET_EMBEDDING_MODEL")
    base_url = settings.OLLAMA_URL

    if path.exists():
        try:
            with open(path, "r", encoding="utf-8") as f:
                data = yaml.safe_load(f) or {}
            prof = data.get("profiles", {}).get("embedding_expert", {})
            if prof.get("model"):
                model = prof["model"]
        except Exception:
            pass

    if not model:
        model = os.getenv("MINDNET_LLM_MODEL", "phi3:mini")

    url = f"{base_url}/api/embeddings"

    try:
        # Synchronous request via requests
        response = requests.post(url, json={"model": model, "prompt": text}, timeout=30)
        response.raise_for_status()
        return response.json().get("embedding", [])
    except Exception as e:
        logger.error(f"Sync embedding failed (Model: {model}): {e}")
        return []

@@ -1,9 +1,10 @@
"""
|
||||
app/services/feedback_service.py
|
||||
Service zum Loggen von Suchanfragen und Feedback (WP-04c).
|
||||
Speichert Daten als JSONL für späteres Self-Tuning (WP-08).
|
||||
|
||||
Version: 1.1 (Chat-Support)
|
||||
FILE: app/services/feedback_service.py
|
||||
DESCRIPTION: Schreibt Search- und Feedback-Logs in JSONL-Dateien.
|
||||
VERSION: 1.1
|
||||
STATUS: Active
|
||||
DEPENDENCIES: app.models.dto
|
||||
LAST_ANALYSIS: 2025-12-15
|
||||
"""
|
||||
import json
|
||||
import os
|
||||
|
|
|
|||
|
|
@ -1,88 +0,0 @@
|
|||
"""
|
||||
app/services/llm_ollama.py — Ollama-Integration & Prompt-Bau (WP-04)
|
||||
|
||||
Zweck:
|
||||
Prompt-Template & (optionaler) lokaler Aufruf von Ollama. Der Aufruf ist
|
||||
bewusst gekapselt und kann gefahrlos deaktiviert bleiben, bis ihr ein
|
||||
konkretes Modell konfigurieren wollt.
|
||||
Kompatibilität:
|
||||
Python 3.12+
|
||||
Version:
|
||||
0.1.0 (Erstanlage)
|
||||
Stand:
|
||||
2025-10-07
|
||||
Bezug:
|
||||
WP-04/05 Kontextbereitstellung für LLM
|
||||
Nutzung:
|
||||
from app.services.llm_ollama import build_prompt, call_ollama
|
||||
Änderungsverlauf:
|
||||
0.1.0 (2025-10-07) – Erstanlage.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
from typing import List, Dict, Optional
import subprocess
import json

PROMPT_TEMPLATE = """System: You are a helpful expert.
User: {question}

Context (ranked):
{contexts}

Task: Answer precisely. At the end, list sources (note title + section) and important edge paths.
"""

def build_context_block(items: List[Dict]) -> str:
    """Formats the top-k contexts (chunks) for the prompt."""
    lines = []
    for i, it in enumerate(items, 1):
        note = it.get("note_title", "") or it.get("note_id", "")
        sec = it.get("section", "") or it.get("section_title", "")
        sc = it.get("score", 0)
        txt = it.get("text", "") or it.get("body", "") or ""
        lines.append(f"{i}) {note} — {sec} [score={sc:.2f}]\n{txt}\n")
    return "\n".join(lines)


def build_prompt(question: str, contexts: List[Dict]) -> str:
    """Combines the question and contexts into a consistent template."""
    return PROMPT_TEMPLATE.format(question=question, contexts=build_context_block(contexts))


def call_ollama(prompt: str, model: str = "llama3.1:8b", timeout_s: int = 120) -> Optional[str]:
    """
    Optional local invocation of `ollama run`.
    Returns: the generated text, or None on error/abort.
    Note: only use this if Ollama is installed/configured locally.
    """
    try:
        proc = subprocess.run(
            ["ollama", "run", model],
            input=prompt.encode("utf-8"),
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            timeout=timeout_s,
            check=False,
        )
        out = proc.stdout.decode("utf-8", errors="replace")
        # Many Ollama builds stream JSON lines; extract robustly:
        try:
            # If JSONL, concatenate the "response" fields
            texts = []
            for line in out.splitlines():
                line = line.strip()
                if not line:
                    continue
                try:
                    obj = json.loads(line)
                    if "response" in obj:
                        texts.append(obj["response"])
                except Exception:
                    texts.append(line)
            return "".join(texts).strip()
        except Exception:
            return out.strip()
    except Exception:
        return None

@@ -1,142 +1,337 @@
"""
|
||||
app/services/llm_service.py — LLM Client
|
||||
Version: 2.8.0 (Configurable Concurrency Limit)
|
||||
FILE: app/services/llm_service.py
|
||||
DESCRIPTION: Hybrid-Client für Ollama, Google GenAI (Gemini) und OpenRouter.
|
||||
WP-25b: Implementierung der Lazy-Prompt-Orchestration (Modell-spezifisch).
|
||||
VERSION: 3.5.5 (WP-25b: Prompt Orchestration & Full Resilience)
|
||||
STATUS: Active
|
||||
FIX:
|
||||
- WP-25b: get_prompt() unterstützt Hierarchie: Model-ID -> Provider -> Default.
|
||||
- WP-25b: generate_raw_response() unterstützt prompt_key + variables für Lazy-Formatting.
|
||||
- WP-25a: Voller Erhalt der rekursiven Fallback-Kaskade und visited_profiles Schutz.
|
||||
- WP-20: Restaurierung des internen Ollama-Retry-Loops für Hardware-Stabilität.
|
||||
"""
|
||||

import httpx
import yaml
import logging
import os
import asyncio
import json
from google import genai
from google.genai import types
from openai import AsyncOpenAI
from pathlib import Path
from typing import Optional, Dict, Any, Literal
from app.config import get_settings

# Import of the neutral clean-up logic
from app.core.registry import clean_llm_text

logger = logging.getLogger(__name__)

class LLMService:
    # GLOBAL SEMAPHORE (lazy initialization)
    # Only initialized once the settings are known.
    _background_semaphore = None

    def __init__(self):
        self.settings = get_settings()
        self.prompts = self._load_prompts()
        self.profiles = self._load_llm_profiles()
        self._decision_engine = None

        if LLMService._background_semaphore is None:
            limit = getattr(self.settings, "BACKGROUND_LIMIT", 2)
            logger.info(f"🚦 LLMService: Initializing Background Semaphore with limit: {limit}")
            LLMService._background_semaphore = asyncio.Semaphore(limit)

        # 1. Local Ollama client
        self.ollama_client = httpx.AsyncClient(
            base_url=self.settings.OLLAMA_URL,
            timeout=httpx.Timeout(self.settings.LLM_TIMEOUT)
        )

        # 2. Google GenAI client
        self.google_client = None
        if self.settings.GOOGLE_API_KEY:
            self.google_client = genai.Client(
                api_key=self.settings.GOOGLE_API_KEY,
                http_options={'api_version': 'v1'}
            )
            logger.info("✨ LLMService: Google GenAI (Gemini) active.")

        # 3. OpenRouter client
        self.openrouter_client = None
        if self.settings.OPENROUTER_API_KEY:
            self.openrouter_client = AsyncOpenAI(
                base_url="https://openrouter.ai/api/v1",
                api_key=self.settings.OPENROUTER_API_KEY,
                timeout=45.0
            )
            logger.info("🛰️ LLMService: OpenRouter Integration active.")

    @property
    def decision_engine(self):
        if self._decision_engine is None:
            from app.core.retrieval.decision_engine import DecisionEngine
            self._decision_engine = DecisionEngine()
        return self._decision_engine

    def _load_prompts(self) -> dict:
        path = Path(self.settings.PROMPTS_PATH)
        if not path.exists():
            return {}
        try:
            with open(path, "r", encoding="utf-8") as f:
                return yaml.safe_load(f) or {}
        except Exception as e:
            logger.error(f"❌ Failed to load prompts: {e}")
            return {}

    def _load_llm_profiles(self) -> dict:
        """WP-25a: loads the central MoE profiles from llm_profiles.yaml."""
        path_str = getattr(self.settings, "LLM_PROFILES_PATH", "config/llm_profiles.yaml")
        path = Path(path_str)
        if not path.exists():
            logger.warning(f"⚠️ LLM Profiles file not found at {path}.")
            return {}
        try:
            with open(path, "r", encoding="utf-8") as f:
                data = yaml.safe_load(f) or {}
            return data.get("profiles", {})
        except Exception as e:
            logger.error(f"❌ Failed to load llm_profiles.yaml: {e}")
            return {}

    def get_prompt(self, key: str, model_id: str = None, provider: str = None) -> str:
        """
        WP-25b: high-precision prompt lookup with detailed trace logging.
        """
        data = self.prompts.get(key, "")
        if not isinstance(data, dict):
            return str(data)

        # 1. Most specific match: exact model ID
        if model_id and model_id in data:
            logger.info(f"🎯 [PROMPT-TRACE] Level 1 Match: Model-specific ('{model_id}') for key '{key}'")
            return str(data[model_id])

        # 2. Middle level: provider
        if provider and provider in data:
            logger.info(f"📡 [PROMPT-TRACE] Level 2 Match: Provider-fallback ('{provider}') for key '{key}'")
            return str(data[provider])

        # 3. Global fallback
        default_val = data.get("default", data.get("gemini", data.get("ollama", "")))
        logger.info(f"⚓ [PROMPT-TRACE] Level 3 Match: Global Default for key '{key}'")
        return str(default_val)
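
    # get_prompt expects prompts.yaml values that are either plain strings or
    # dicts keyed by model ID, provider, and "default". A minimal assumed entry
    # illustrating the three levels (the wording is invented):
    #
    #   self.prompts["intent_router_v1"] == {
    #       "phi3:mini": "...model-specific wording...",   # level 1: exact model ID
    #       "ollama":    "...provider wording...",         # level 2: provider
    #       "default":   "...global wording..."           # level 3: fallback
    #   }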

    async def generate_raw_response(
        self,
        prompt: str = None,
        prompt_key: str = None,   # WP-25b: lazy loading key
        variables: dict = None,   # WP-25b: data for formatting
        system: str = None,
        force_json: bool = False,
        max_retries: int = 2,
        base_delay: float = 2.0,
        priority: Literal["realtime", "background"] = "realtime",
        provider: Optional[str] = None,
        model_override: Optional[str] = None,
        json_schema: Optional[Dict[str, Any]] = None,
        json_schema_name: str = "mindnet_json",
        strict_json_schema: bool = True,
        profile_name: Optional[str] = None,
        visited_profiles: Optional[list] = None
    ) -> str:
        """Main entry point for LLM requests, with lazy prompt orchestration."""
        visited_profiles = visited_profiles or []
        target_provider = provider
        target_model = model_override
        target_temp = None
        fallback_profile = None

        # 1. Profile resolution (Mixture of Experts)
        if profile_name and self.profiles:
            profile = self.profiles.get(profile_name)
            if profile:
                target_provider = profile.get("provider", target_provider)
                target_model = profile.get("model", target_model)
                target_temp = profile.get("temperature")
                fallback_profile = profile.get("fallback_profile")
                visited_profiles.append(profile_name)
                logger.info(f"🎭 MoE Dispatch: Profil='{profile_name}' -> Provider='{target_provider}' | Model='{target_model}'")
            else:
                logger.warning(f"⚠️ Profil '{profile_name}' nicht in llm_profiles.yaml gefunden!")

        if not target_provider:
            target_provider = self.settings.MINDNET_LLM_PROVIDER

        # 2. WP-25b: lazy prompt resolving
        # The prompt is loaded only NOW, based on the currently active model.
        current_prompt = prompt
        if prompt_key:
            template = self.get_prompt(prompt_key, model_id=target_model, provider=target_provider)
            # WP-25b FIX: validate the loaded prompt
            if not template or not template.strip():
                available_keys = list(self.prompts.keys())
                logger.error(f"❌ Prompt key '{prompt_key}' not found or empty. Available keys: {available_keys[:10]}...")
                raise ValueError(f"Invalid prompt_key: '{prompt_key}' (not found in prompts.yaml)")

            try:
                # Formatting with the supplied variables
                current_prompt = template.format(**(variables or {}))
            except KeyError as e:
                logger.error(f"❌ Prompt formatting failed for key '{prompt_key}': Missing variable {e}")
                raise ValueError(f"Missing variable in prompt '{prompt_key}': {e}")
            except Exception as e:
                logger.error(f"❌ Prompt formatting failed for key '{prompt_key}': {e}")
                current_prompt = template  # safety fallback

        # 3. Execution, with error handling for the cascade
        try:
            if priority == "background":
                async with LLMService._background_semaphore:
                    res = await self._dispatch(
                        target_provider, current_prompt, system, force_json,
                        max_retries, base_delay, target_model,
                        json_schema, json_schema_name, strict_json_schema, target_temp
                    )
            else:
                res = await self._dispatch(
                    target_provider, current_prompt, system, force_json,
                    max_retries, base_delay, target_model,
                    json_schema, json_schema_name, strict_json_schema, target_temp
                )

            # Check for empty cloud responses (WP-25 stability)
            if not res and target_provider != "ollama":
                logger.warning(f"⚠️ Empty response from {target_provider}. Triggering fallback.")
                raise ValueError(f"Empty response from {target_provider}")

            return clean_llm_text(res) if not force_json else res

        except Exception as e:
            logger.error(f"❌ Error during execution of profile '{profile_name}' ({target_provider}): {e}")

            # 4. WP-25b cascade logic (recursive, with model-specific prompt re-loading)
            if fallback_profile and fallback_profile not in visited_profiles:
                logger.info(f"🔄 Switching to fallback profile: '{fallback_profile}'")
                return await self.generate_raw_response(
                    prompt=prompt,
                    prompt_key=prompt_key,
                    variables=variables,  # allows re-formatting for the fallback model
                    system=system, force_json=force_json,
                    max_retries=max_retries, base_delay=base_delay,
                    priority=priority, provider=None, model_override=None,
                    json_schema=json_schema, json_schema_name=json_schema_name,
                    strict_json_schema=strict_json_schema,
                    profile_name=fallback_profile,
                    visited_profiles=visited_profiles
                )

            # 5. Ultimate safety anchor: if everything fails, go straight to Ollama
            if target_provider != "ollama" and self.settings.LLM_FALLBACK_ENABLED:
                logger.warning(f"🚨 Kaskade erschöpft. Nutze finalen Ollama-Notanker.")
                res = await self._execute_ollama(current_prompt, system, force_json, max_retries, base_delay, target_temp, target_model)
                return clean_llm_text(res) if not force_json else res

            raise e
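
    # A call sketch for the lazy-prompt path. The prompt key and the profile
    # name must exist in prompts.yaml / llm_profiles.yaml; the values here are
    # assumptions based on names used elsewhere in this changeset:
    #
    #   text = await llm.generate_raw_response(
    #       prompt_key="intent_router_v1",
    #       variables={"query": "Soll ich Qdrant nutzen?"},
    #       profile_name="compression_fast",
    #       priority="background",
    #   )
    #
    # On failure the call recurses into the profile's fallback_profile and, as
    # a last resort, into the local Ollama executor.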

    async def _dispatch(
        self, provider, prompt, system, force_json, max_retries, base_delay,
        model_override, json_schema, json_schema_name, strict_json_schema, temperature
    ) -> str:
        """Routes the request to the provider-specific executor."""
        rate_limit_attempts = 0
        max_rate_retries = min(max_retries, getattr(self.settings, "LLM_RATE_LIMIT_RETRIES", 3))
        wait_time = getattr(self.settings, "LLM_RATE_LIMIT_WAIT", 60.0)

        while rate_limit_attempts <= max_rate_retries:
            try:
                if provider == "openrouter" and self.openrouter_client:
                    return await self._execute_openrouter(
                        prompt=prompt, system=system, force_json=force_json,
                        model_override=model_override, json_schema=json_schema,
                        json_schema_name=json_schema_name, strict_json_schema=strict_json_schema,
                        temperature=temperature
                    )

                if provider == "gemini" and self.google_client:
                    return await self._execute_google(prompt, system, force_json, model_override, temperature)

                return await self._execute_ollama(prompt, system, force_json, max_retries, base_delay, temperature, model_override)

            except Exception as e:
                err_str = str(e)
                if any(x in err_str for x in ["429", "RESOURCE_EXHAUSTED", "rate_limited"]):
                    rate_limit_attempts += 1
                    logger.warning(f"⏳ Rate Limit {provider}. Attempt {rate_limit_attempts}. Wait {wait_time}s.")
                    await asyncio.sleep(wait_time)
                    continue
                raise e

    async def _execute_google(self, prompt, system, force_json, model_override, temperature):
        model = (model_override or self.settings.GEMINI_MODEL).replace("models/", "")
        config_kwargs = {
            "system_instruction": system,
            "response_mime_type": "application/json" if force_json else "text/plain"
        }
        if temperature is not None:
            config_kwargs["temperature"] = temperature

        config = types.GenerateContentConfig(**config_kwargs)
        response = await asyncio.wait_for(
            asyncio.to_thread(self.google_client.models.generate_content, model=model, contents=prompt, config=config),
            timeout=45.0
        )
        return response.text.strip()

    async def _execute_openrouter(self, prompt, system, force_json, model_override, json_schema, json_schema_name, strict_json_schema, temperature) -> str:
        model = model_override or self.settings.OPENROUTER_MODEL
        logger.info(f"🛰️ OpenRouter Call: Model='{model}' | Temp={temperature}")
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})

        kwargs: Dict[str, Any] = {}
        if temperature is not None:
            kwargs["temperature"] = temperature

        if force_json:
            if json_schema:
                kwargs["response_format"] = {"type": "json_schema", "json_schema": {"name": json_schema_name, "strict": strict_json_schema, "schema": json_schema}}
            else:
                kwargs["response_format"] = {"type": "json_object"}

        response = await self.openrouter_client.chat.completions.create(model=model, messages=messages, **kwargs)
        if not response.choices:
            return ""
        return response.choices[0].message.content.strip() if response.choices[0].message.content else ""

    async def _execute_ollama(self, prompt, system, force_json, max_retries, base_delay, temperature=None, model_override=None):
        # WP-20: restored retry loop for local hardware resilience
        effective_model = model_override or self.settings.LLM_MODEL
        effective_temp = temperature if temperature is not None else (0.1 if force_json else 0.7)

        payload = {
            "model": effective_model,
            "prompt": prompt, "stream": False,
            "options": {"temperature": effective_temp, "num_ctx": 8192}
        }
        if force_json:
            payload["format"] = "json"
        if system:
            payload["system"] = system

        attempt = 0
        while True:
            try:
                res = await self.ollama_client.post("/api/generate", json=payload)
                res.raise_for_status()
                return res.json().get("response", "").strip()
            except Exception as e:
                attempt += 1
                if attempt > max_retries:
                    logger.error(f"❌ Ollama failure after {attempt} attempts: {e}")
                    raise e
                await asyncio.sleep(base_delay * (2 ** (attempt - 1)))
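
    # The backoff doubles per attempt. With max_retries=2 and base_delay=2.0
    # (the generate_raw_response defaults), a persistently failing request is
    # retried after 2 s (2.0 * 2**0) and again after 4 s (2.0 * 2**1) before
    # the third failure raises.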

    async def generate_rag_response(self, query: str, context_str: Optional[str] = None) -> str:
        return await self.decision_engine.ask(query)

    async def close(self):
        if self.ollama_client:
            await self.ollama_client.aclose()

@@ -1,138 +0,0 @@
"""
|
||||
app/services/semantic_analyzer.py — Edge Validation & Filtering
|
||||
Version: 2.0 (Update: Background Priority for Batch Jobs)
|
||||
"""
|
||||
|
||||
import json
|
||||
import logging
|
||||
from typing import List, Optional
|
||||
from dataclasses import dataclass
|
||||
|
||||
# Importe
|
||||
from app.services.llm_service import LLMService
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class SemanticAnalyzer:
    def __init__(self):
        self.llm = LLMService()

    async def assign_edges_to_chunk(self, chunk_text: str, all_edges: List[str], note_type: str) -> List[str]:
        """
        Sends a chunk and a list of candidate edges to the LLM.
        The LLM filters out which edges are relevant for this chunk.

        Features:
        - Retry strategy: waits under overload (max_retries=5).
        - Priority queue: runs as a "background" task so it does not block the chat.
        - Observability: logs input size, raw response, and parsing details.
        """
        if not all_edges:
            return []

        # 1. Load the prompt
        prompt_template = self.llm.prompts.get("edge_allocation_template")

        if not prompt_template:
            logger.warning("⚠️ [SemanticAnalyzer] Prompt 'edge_allocation_template' fehlt. Nutze Fallback.")
            prompt_template = (
                "TASK: Wähle aus den Kandidaten die relevanten Kanten für den Text.\n"
                "TEXT: {chunk_text}\n"
                "KANDIDATEN: {edge_list}\n"
                "OUTPUT: JSON Liste von Strings [\"kind:target\"]."
            )

        # 2. Format the candidate list
        edges_str = "\n".join([f"- {e}" for e in all_edges])

        # LOG: request info
        logger.debug(f"🔍 [SemanticAnalyzer] Request: {len(chunk_text)} chars Text, {len(all_edges)} Candidates.")

        # 3. Fill the prompt
        final_prompt = prompt_template.format(
            chunk_text=chunk_text[:3500],
            edge_list=edges_str
        )

        try:
            # 4. LLM call with traffic control (NEW: priority="background")
            # We use the "slow lane" so the user never has to wait in chat.
            response_json = await self.llm.generate_raw_response(
                prompt=final_prompt,
                force_json=True,
                max_retries=5,
                base_delay=5.0,
                priority="background"  # <--- IMPORTANT: enables throttling
            )

            # LOG: raw response preview
            logger.debug(f"📥 [SemanticAnalyzer] Raw Response (Preview): {response_json[:200]}...")

            # 5. Parsing & cleaning
            clean_json = response_json.replace("```json", "").replace("```", "").strip()

            if not clean_json:
                logger.warning("⚠️ [SemanticAnalyzer] Leere Antwort vom LLM erhalten. Trigger Fallback.")
                return []

            try:
                data = json.loads(clean_json)
            except json.JSONDecodeError as json_err:
                logger.error(f"❌ [SemanticAnalyzer] JSON Decode Error.")
                logger.error(f"   Grund: {json_err}")
                logger.error(f"   Empfangener String: {clean_json[:500]}")
                logger.info("   -> Workaround: Fallback auf 'Alle Kanten' (durch Chunker).")
                return []

            valid_edges = []

            # 6. Robust validation (list vs dict)
            if isinstance(data, list):
                # Standard case: ["kind:target", ...]
                valid_edges = [str(e) for e in data if isinstance(e, str) and ":" in e]

            elif isinstance(data, dict):
                # Handle deviating formats
                logger.info(f"ℹ️ [SemanticAnalyzer] LLM lieferte Dict statt Liste. Versuche Reparatur. Keys: {list(data.keys())}")

                for key, val in data.items():
                    # Case A: {"edges": ["kind:target"]}
                    if key.lower() in ["edges", "results", "kanten", "matches"] and isinstance(val, list):
                        valid_edges.extend([str(e) for e in val if isinstance(e, str) and ":" in e])

                    # Case B: {"kind": "target"}
                    elif isinstance(val, str):
                        valid_edges.append(f"{key}:{val}")

                    # Case C: {"kind": ["target1", "target2"]}
                    elif isinstance(val, list):
                        for target in val:
                            if isinstance(target, str):
                                valid_edges.append(f"{key}:{target}")

            # Safety: keep only edges that look halfway valid
            final_result = [e for e in valid_edges if ":" in e]

            # LOG: result
            if final_result:
                logger.info(f"✅ [SemanticAnalyzer] Success. {len(final_result)} Kanten zugewiesen.")
            else:
                logger.debug("   [SemanticAnalyzer] Keine spezifischen Kanten erkannt (Empty Result).")

            return final_result

        except Exception as e:
            logger.error(f"💥 [SemanticAnalyzer] Kritischer Fehler: {e}", exc_info=True)
            return []

    async def close(self):
        if self.llm:
            await self.llm.close()

# Singleton helper
_analyzer_instance = None
def get_semantic_analyzer():
    global _analyzer_instance
    if _analyzer_instance is None:
        _analyzer_instance = SemanticAnalyzer()
    return _analyzer_instance

@@ -1,122 +1,141 @@
# config/decision_engine.yaml
# VERSION: 3.2.2 (WP-25a: Decoupled MoE Logic)
# STATUS: Active
# DESCRIPTION: Central orchestration of the multi-stream engine.
# FIX:
# - LLM profiles moved out into llm_profiles.yaml for central maintainability.
# - Integration of compression_thresholds for content compression (WP-25a).
# - 100% preservation of all WP-25 edge boosts and filter types (v3.1.6).

version: 3.2

settings:
  llm_fallback_enabled: true

  # "auto" uses the global default provider from the .env
  router_provider: "auto"
  # Reference to the intent classifier in prompts.yaml
  router_prompt_key: "intent_router_v1"
  # Path to the new expert configuration (WP-25a architectural cleanliness)
  profiles_config_path: "config/llm_profiles.yaml"
  router_profile: "compression_fast"

# --- LEVEL 1: STREAM LIBRARY (building blocks based on types.yaml v2.7.0) ---
streams_library:
  values_stream:
    name: "Identität & Ethik"
    # Reference to an expert profile (e.g. local via Ollama, for privacy)
    llm_profile: "identity_safe"
    compression_profile: "identity_safe"
    compression_threshold: 2500
    query_template: "Welche meiner Werte und Prinzipien betreffen: {query}"
    filter_types: ["value", "principle", "belief", "trait", "boundary", "need", "motivation"]
    top_k: 5
    edge_boosts:
      guides: 3.0
      depends_on: 2.5
      based_on: 2.0
      upholds: 2.5
      violates: 2.5
      aligned_with: 2.0
      conflicts_with: 2.0
      supports: 1.5
      contradicts: 1.5

  facts_stream:
    name: "Operative Realität"
    llm_profile: "synthesis_pro"
    compression_profile: "compression_fast"
    compression_threshold: 3500
    query_template: "Status, Ressourcen und Fakten zu: {query}"
    filter_types: ["project", "decision", "task", "goal", "event", "state"]
    top_k: 5
    edge_boosts:
      part_of: 2.0
      depends_on: 1.5
      implemented_in: 1.5

  biography_stream:
    name: "Persönliche Erfahrung"
    llm_profile: "synthesis_pro"
    compression_profile: "compression_fast"
    compression_threshold: 3000
    query_template: "Welche Erlebnisse habe ich im Kontext von {query} gemacht?"
    filter_types: ["experience", "journal", "profile", "person"]
    top_k: 3
    edge_boosts:
      related_to: 1.5
      experienced_in: 2.0
      expert_for: 2.5
      followed_by: 2.0
      preceded_by: 2.0

  risk_stream:
    name: "Risiko-Radar"
    llm_profile: "synthesis_pro"
    compression_profile: "compression_fast"
    compression_threshold: 2500
    query_template: "Gefahren, Hindernisse oder Risiken bei: {query}"
    filter_types: ["risk", "obstacle", "bias"]
    top_k: 3
    edge_boosts:
      blocks: 2.5
      impacts: 2.0
      risk_of: 2.5

  tech_stream:
    name: "Wissen & Technik"
    llm_profile: "tech_expert"
    compression_profile: "compression_fast"
    compression_threshold: 4500
    query_template: "Inhaltliche Details und Definitionen zu: {query}"
    filter_types: ["concept", "source", "glossary", "idea", "insight", "skill", "habit"]
    top_k: 5
    edge_boosts:
      uses: 2.5
      implemented_in: 3.0
||||
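Each stream bundles a focused query, a type filter, a result budget (`top_k`) and edge boosts. A hedged sketch of how one stream definition could drive retrieval; `vector_search` stands in for the Qdrant-backed retriever and is not repository code:

```python
def vector_search(query: str, allowed_types: list, top_k: int) -> list:
    return []  # stand-in stub for the Qdrant-backed retriever

def run_stream(stream: dict, query: str) -> list:
    focused = stream["query_template"].format(query=query)
    hits = vector_search(focused, stream["filter_types"], stream["top_k"])
    boosts = stream.get("edge_boosts", {})
    for hit in hits:  # multiply scores of hits reached via boosted edge kinds
        hit["score"] *= boosts.get(hit.get("edge_kind"), 1.0)
    return sorted(hits, key=lambda h: h["score"], reverse=True)
```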
# --- LEVEL 2: STRATEGIES (final composition via MoE profiles) ---
strategies:
  # 1. Fact query (fallback & default)
  FACT:
    description: "Reine Wissensabfrage."
    trigger_keywords: []
    inject_types: []
    prompt_template: "rag_template"
    prepend_instruction: null
  FACT_WHEN:
    description: "Abfrage von exakten Zeitpunkten und Terminen."
    llm_profile: "synthesis_pro"
    trigger_keywords: ["wann", "datum", "uhrzeit", "zeitpunkt"]
    use_streams: ["facts_stream", "biography_stream", "tech_stream"]
    prompt_template: "fact_synthesis_v1"

  FACT_WHAT:
    description: "Abfrage von Definitionen, Listen und Inhalten."
    llm_profile: "synthesis_pro"
    trigger_keywords: ["was ist", "welche sind", "liste", "übersicht", "zusammenfassung"]
    use_streams: ["facts_stream", "tech_stream", "biography_stream"]
    prompt_template: "fact_synthesis_v1"

  # 2. Decision question
  DECISION:
    description: "Der User sucht Rat, Strategie oder Abwägung."
    trigger_keywords:
      - "soll ich"
      - "meinung"
      - "besser"
      - "empfehlung"
      - "strategie"
      - "entscheidung"
      - "abwägung"
      - "vergleich"
    inject_types: ["value", "principle", "goal", "risk"]
    prompt_template: "decision_template"
    llm_profile: "synthesis_pro"
    trigger_keywords: ["soll ich", "sollte ich", "entscheidung", "abwägen", "priorität", "empfehlung"]
    use_streams: ["values_stream", "facts_stream", "risk_stream"]
    prompt_template: "decision_synthesis_v1"
    prepend_instruction: |
      !!! ENTSCHEIDUNGS-MODUS !!!
      BITTE WÄGE FAKTEN GEGEN FOLGENDE WERTE, PRINZIPIEN UND ZIELE AB:
      !!! ENTSCHEIDUNGS-MODUS (AGENTIC MULTI-STREAM) !!!
      Analysiere die Fakten vor dem Hintergrund meiner Werte und evaluiere die Risiken.
      Wäge ab, ob das Vorhaben mit meiner langfristigen Identität kompatibel ist.

  # 3. Empathy / "I" mode
  EMPATHY:
    description: "Reaktion auf emotionale Zustände."
    trigger_keywords:
      - "ich fühle"
      - "traurig"
      - "glücklich"
      - "gestresst"
      - "angst"
      - "nervt"
      - "überfordert"
      - "müde"
    inject_types: ["experience", "belief", "profile"]
    llm_profile: "synthesis_pro"
    trigger_keywords: ["fühle", "traurig", "glücklich", "stress", "angst"]
    use_streams: ["biography_stream", "values_stream"]
    prompt_template: "empathy_template"
    prepend_instruction: null

  # 4. Coding / technical
  CODING:
    description: "Technische Anfragen und Programmierung."
    trigger_keywords:
      - "code"
      - "python"
      - "script"
      - "funktion"
      - "bug"
      - "syntax"
      - "json"
      - "yaml"
      - "bash"
    inject_types: ["snippet", "reference", "source"]
    llm_profile: "tech_expert"
    trigger_keywords: ["code", "python", "script", "bug", "syntax"]
    use_streams: ["tech_stream", "facts_stream"]
    prompt_template: "technical_template"
    prepend_instruction: null

  # 5. Interview / data capture
  # NOTE: Specific types (project, goal, etc.) are detected automatically
  # via types.yaml. Only generic triggers live here.
  INTERVIEW:
    description: "Der User möchte Wissen erfassen."
    trigger_keywords:
      - "neue notiz"
      - "etwas notieren"
      - "festhalten"
      - "erstellen"
      - "dokumentieren"
      - "anlegen"
      - "interview"
      - "erfassen"
      - "idee speichern"
      - "draft"
    inject_types: []
    prompt_template: "interview_template"
    prepend_instruction: null

# Schemas: only the fallback lives here.
# Specific schemas (Project, Experience) now come from types.yaml!
schemas:
  default:
    fields:
      - "Titel"
      - "Thema/Inhalt"
      - "Tags"
    hint: "Halte es einfach und übersichtlich."
    description: "Der User möchte Wissen erfassen (Eingabemodus)."
    llm_profile: "compression_fast"
    use_streams: []
    prompt_template: "interview_template"
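Each strategy composes its answer from the streams listed in `use_streams`, filling the matching `{facts_stream}`-style placeholders of its `prompt_template` and optionally prefixing `prepend_instruction`. A sketch under those assumptions (illustrative names, not repository code):

```python
def compose_prompt(strategy: dict, stream_texts: dict, templates: dict, query: str) -> str:
    # stream_texts maps e.g. "facts_stream" -> retrieved (possibly compressed)
    # text; its keys must line up with the template's placeholders.
    wanted = {k: stream_texts.get(k, "") for k in strategy.get("use_streams", [])}
    prompt = templates[strategy["prompt_template"]].format(query=query, **wanted)
    if strategy.get("prepend_instruction"):
        prompt = strategy["prepend_instruction"] + "\n" + prompt
    return prompt
```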
64 config/llm_profiles.yaml Normal file

@ -0,0 +1,64 @@
# config/llm_profiles.yaml
# VERSION: 1.3.0 (WP-25a: Global MoE & Fallback Cascade)
# STATUS: Active
# DESCRIPTION: Central definition of the LLM roles incl. failure logic (cascade).

profiles:
  # --- CHAT & SYNTHESIS ---
  # The "architect": high-quality synthesis. Falls back to the backup cloud expert on errors.
  synthesis_pro:
    provider: "openrouter"
    model: "google/gemini-2.0-flash-exp:free"
    temperature: 0.7
    fallback_profile: "synthesis_backup"

  # The "deputy": a strong model at a different provider (resilience).
  synthesis_backup:
    provider: "openrouter"
    model: "meta-llama/llama-3.3-70b-instruct:free"
    temperature: 0.5
    fallback_profile: "identity_safe"  # last resort: local

  # The "engineer": specialist for code. Uses the generalist on failure.
  tech_expert:
    provider: "openrouter"
    model: "qwen/qwen-2.5-vl-7b-instruct:free"
    temperature: 0.3
    fallback_profile: "synthesis_pro"

  # The "steam hammer": fast, for routing and summaries.
  compression_fast:
    provider: "openrouter"
    model: "mistralai/mistral-7b-instruct:free"
    temperature: 0.1
    fallback_profile: "identity_safe"

  # --- INGESTION EXPERTS ---
  # Specialist for extracting complex data structures from documents.
  ingest_extractor:
    provider: "openrouter"
    model: "mistralai/mistral-7b-instruct:free"
    temperature: 0.2
    fallback_profile: "synthesis_backup"

  # Specialist for binary checks (YES/NO). Must be extremely deterministic.
  ingest_validator:
    provider: "openrouter"
    model: "mistralai/mistral-7b-instruct:free"
    temperature: 0.0
    fallback_profile: "compression_fast"

  # --- LOCAL ANCHOR & PRIVACY ---
  # The "guardian": local model for maximum privacy. End of the cascade.
  identity_safe:
    provider: "ollama"
    model: "phi3:mini"
    temperature: 0.2
    # No fallback_profile defined = terminal endpoint

  # --- EMBEDDING EXPERT ---
  # Centralizes the embedding model so it can be removed from the .env.
  embedding_expert:
    provider: "ollama"
    model: "nomic-embed-text"
    dimensions: 768
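The `fallback_profile` links form the cascade the header describes (e.g. synthesis_pro → synthesis_backup → identity_safe). A minimal sketch of walking it; `call_profile` is an assumed stand-in for the provider client, not repository code:

```python
def call_profile(provider: str, model: str, prompt: str, temperature: float) -> str:
    raise ConnectionError("stub: real provider client goes here")  # stand-in

def complete_with_cascade(profiles: dict, name: str, prompt: str) -> str:
    seen = set()
    while name and name not in seen:  # cycle guard
        seen.add(name)
        p = profiles[name]
        try:
            return call_profile(p["provider"], p["model"], prompt,
                                temperature=p.get("temperature", 0.0))
        except Exception:
            # e.g. synthesis_pro -> synthesis_backup -> identity_safe (terminal)
            name = p.get("fallback_profile")
    raise RuntimeError("all profiles in the cascade failed")
```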
63 config/prod.env Normal file

@ -0,0 +1,63 @@
# --- FastAPI server (production) ---
UVICORN_HOST=0.0.0.0
UVICORN_PORT=8000
DEBUG=false

# --- Qdrant vector database ---
# Data is separated via its own prefix
QDRANT_URL=http://127.0.0.1:6333
QDRANT_API_KEY=
COLLECTION_PREFIX=mindnet

# --- Vector configuration ---
# Must be 768 for 'nomic-embed-text'
VECTOR_DIM=768

# --- AI models (local/fallback) ---
MINDNET_LLM_MODEL=phi3:mini
MINDNET_OLLAMA_URL=http://127.0.0.1:11434
MINDNET_LLM_TIMEOUT=300.0
MINDNET_LLM_BACKGROUND_LIMIT=2

# Vector model for semantic search
MINDNET_EMBEDDING_MODEL=nomic-embed-text

# --- WP-20/WP-76: hybrid cloud & resilience ---
# Primary provider for highest quality
MINDNET_LLM_PROVIDER=openrouter
MINDNET_LLM_FALLBACK=true

# Intelligent rate-limit control (seconds/attempts)
MINDNET_LLM_RATE_LIMIT_WAIT=60.0
MINDNET_LLM_RATE_LIMIT_RETRIES=3

# --- Cloud provider keys (insert prod keys here) ---
GOOGLE_API_KEY=AIzaSy... (your prod key)
MINDNET_GEMINI_MODEL=gemini-2.5-flash-lite

OPENROUTER_API_KEY=sk-or-v1-... (your prod key)
# Most stable free model for structured extraction
OPENROUTER_MODEL=mistralai/mistral-7b-instruct:free

# --- Paths & system (production vault) ---
MINDNET_TYPES_FILE=./config/types.yaml
MINDNET_VAULT_ROOT=./vault_prod
MINDNET_VOCAB_PATH=/mindnet/vault/mindnet/_system/dictionary/edge_vocabulary.md

# Change detection for efficient re-imports
MINDNET_CHANGE_DETECTION_MODE=full

# --- WP-24c v4.2.0: configurable Markdown headers for edge zones ---
# Comma-separated list of headers for LLM validation
# Format: Header1,Header2,Header3
MINDNET_LLM_VALIDATION_HEADERS=Unzugeordnete Kanten,Edge Pool,Candidates

# Header level for LLM validation (1-6, default: 3 for ###)
MINDNET_LLM_VALIDATION_HEADER_LEVEL=3

# Comma-separated list of headers for note-scope zones
# Format: Header1,Header2,Header3
MINDNET_NOTE_SCOPE_ZONE_HEADERS=Smart Edges,Relationen,Global Links,Note-Level Relations,Globale Verbindungen

# Header level for note-scope zones (1-6, default: 2 for ##)
MINDNET_NOTE_SCOPE_HEADER_LEVEL=2
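The comma-separated header settings above are plain strings; a small illustration of how they could be split into lists at startup (standard library only; the parsing itself is an assumption, not repository code):

```python
import os

def load_edge_zone_config() -> dict:
    headers = os.environ.get("MINDNET_LLM_VALIDATION_HEADERS", "")
    return {
        "validation_headers": [h.strip() for h in headers.split(",") if h.strip()],
        # levels 1-6 map to Markdown '#' .. '######'; default 3 = '###'
        "validation_level": int(os.environ.get("MINDNET_LLM_VALIDATION_HEADER_LEVEL", "3")),
    }
```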
337 config/prompts - Kopie.yaml Normal file

@ -0,0 +1,337 @@
# config/prompts.yaml — VERSION 3.1.2 (WP-25 Cleanup: Multi-Stream Sync)
# STATUS: Active
# FIX:
# - 100% restoration of the ingest & validation logic (sections 5-8).
# - Categories 1-4 migrated into the multi-stream structure while keeping the content.
# - Consolidation: section 9 (v3.0.0) was merged into sections 1 & 2 (no redundancy).

system_prompt: |
  Du bist 'mindnet', mein digitaler Zwilling und strategischer Partner.

  DEINE IDENTITÄT:
  - Du bist nicht nur eine Datenbank, sondern handelst nach MEINEN Werten und Zielen.
  - Du passt deinen Stil dynamisch an die Situation an (Analytisch, Empathisch oder Technisch).

  DEINE REGELN:
  1. Deine Antwort muss zu 100% auf dem bereitgestellten KONTEXT basieren.
  2. Halluziniere keine Fakten, die nicht in den Quellen stehen.
  3. Antworte auf Deutsch (außer bei Code/Fachbegriffen).

# ---------------------------------------------------------
# 1. STANDARD: Facts & knowledge (Intent: FACT_WHAT / FACT_WHEN)
# ---------------------------------------------------------
# Replaces the old 'rag_template'. Now uses parallel streams.
fact_synthesis_v1:
  ollama: |
    WISSENS-STREAMS:
    =========================================
    FAKTEN & STATUS:
    {facts_stream}

    ERFAHRUNG & BIOGRAFIE:
    {biography_stream}

    WISSEN & TECHNIK:
    {tech_stream}
    =========================================

    FRAGE:
    {query}

    ANWEISUNG:
    Beantworte die Frage präzise basierend auf den Quellen.
    Kombiniere harte Fakten mit persönlichen Erfahrungen, falls vorhanden.
    Fasse die Informationen zusammen. Sei objektiv und neutral.
  gemini: |
    Beantworte die Wissensabfrage "{query}" basierend auf diesen Streams:
    FAKTEN: {facts_stream}
    BIOGRAFIE/ERFAHRUNG: {biography_stream}
    TECHNIK: {tech_stream}
    Kombiniere harte Fakten mit persönlichen Erfahrungen, falls vorhanden. Antworte strukturiert und präzise.
  openrouter: |
    Synthese der Wissens-Streams für: {query}
    Inhalt: {facts_stream} | {biography_stream} | {tech_stream}
    Antworte basierend auf dem bereitgestellten Kontext.

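In this file every prompt key maps provider names (ollama/gemini/openrouter) to a template. A sketch of the implied lookup; the explicit `default` fallback only exists in the later v3.2.2 revision, so the last line here is an assumption:

```python
def get_prompt(prompts: dict, key: str, provider: str) -> str:
    variants = prompts[key]          # e.g. prompts["fact_synthesis_v1"]
    if provider in variants:
        return variants[provider]    # exact provider match
    # assumed last resort; v3.2.2 below adds an explicit "default" key
    return variants.get("default", next(iter(variants.values())))
```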
# ---------------------------------------------------------
# 2. DECISION: Strategy & trade-offs (Intent: DECISION)
# ---------------------------------------------------------
# Replaces the old 'decision_template'. Now uses parallel streams.
decision_synthesis_v1:
  ollama: |
    ENTSCHEIDUNGS-STREAMS:
    =========================================
    WERTE & PRINZIPIEN (Identität):
    {values_stream}

    OPERATIVE FAKTEN (Realität):
    {facts_stream}

    RISIKO-RADAR (Konsequenzen):
    {risk_stream}
    =========================================

    ENTSCHEIDUNGSFRAGE:
    {query}

    ANWEISUNG:
    Du agierst als mein Entscheidungs-Partner.
    1. Analysiere die Faktenlage aus den Quellen.
    2. Prüfe dies hart gegen meine strategischen Notizen (Werte & Prinzipien).
    3. Wäge ab: Passt die technische/faktische Lösung zu meinen Werten?

    FORMAT:
    - **Analyse:** (Kurze Zusammenfassung der Fakten)
    - **Abgleich:** (Gibt es Konflikte mit Werten/Zielen? Nenne die Quelle!)
    - **Empfehlung:** (Klare Meinung: Ja/Nein/Vielleicht mit Begründung)
  gemini: |
    Agiere als mein strategischer Partner. Analysiere die Frage: {query}
    Werte: {values_stream} | Fakten: {facts_stream} | Risiken: {risk_stream}.
    Wäge ab und gib eine klare strategische Empfehlung ab.
  openrouter: |
    Strategische Multi-Stream Analyse für: {query}
    Werte-Basis: {values_stream} | Fakten: {facts_stream} | Risiken: {risk_stream}
    Bitte wäge ab und gib eine Empfehlung.

# ---------------------------------------------------------
# 3. EMPATHY: The mirror / "I" mode (Intent: EMPATHY)
# ---------------------------------------------------------
empathy_template:
  ollama: |
    KONTEXT (ERFAHRUNGEN & WERTE):
    =========================================
    ERLEBNISSE & BIOGRAFIE:
    {biography_stream}

    WERTE & BEDÜRFNISSE:
    {values_stream}
    =========================================

    SITUATION:
    {query}

    ANWEISUNG:
    Du agierst jetzt als mein empathischer Spiegel.
    1. Versuche nicht sofort, das Problem technisch zu lösen.
    2. Zeige Verständnis für die Situation basierend auf meinen eigenen Erfahrungen ([EXPERIENCE]) oder Werten, falls im Kontext vorhanden.
    3. Antworte in der "Ich"-Form oder "Wir"-Form. Sei unterstützend.

    TONFALL:
    Ruhig, verständnisvoll, reflektiert. Keine Aufzählungszeichen, sondern fließender Text.
  gemini: "Sei mein digitaler Spiegel für {query}. Kontext: {biography_stream}, {values_stream}"
  openrouter: "Empathische Reflexion der Situation {query}. Persönlicher Kontext: {biography_stream}, {values_stream}"

# ---------------------------------------------------------
# 4. TECHNICAL: The coder (Intent: CODING)
# ---------------------------------------------------------
technical_template:
  ollama: |
    KONTEXT (WISSEN & PROJEKTE):
    =========================================
    TECHNIK & SNIPPETS:
    {tech_stream}

    PROJEKT-STATUS:
    {facts_stream}
    =========================================

    TASK:
    {query}

    ANWEISUNG:
    Du bist Senior Developer.
    1. Ignoriere Smalltalk. Komm sofort zum Punkt.
    2. Generiere validen, performanten Code basierend auf den Quellen.
    3. Wenn Quellen fehlen, nutze dein allgemeines Programmierwissen, aber weise darauf hin.

    FORMAT:
    - Kurze Erklärung des Ansatzes.
    - Markdown Code-Block (Copy-Paste fertig).
    - Wichtige Edge-Cases.
  gemini: "Generiere Code für {query} unter Berücksichtigung von {tech_stream} und {facts_stream}."
  openrouter: "Technischer Support für {query}. Referenzen: {tech_stream}, Projekt-Kontext: {facts_stream}"

# ---------------------------------------------------------
# 5. INTERVIEW: The "one-shot extractor" (WP-07)
# ---------------------------------------------------------
interview_template:
  ollama: |
    TASK:
    Du bist ein professioneller Ghostwriter. Verwandle den "USER INPUT" in eine strukturierte Notiz vom Typ '{target_type}'.

    STRUKTUR (Nutze EXAKT diese Überschriften):
    {schema_fields}

    USER INPUT:
    "{query}"

    ANWEISUNG ZUM INHALT:
    1. Analysiere den Input genau.
    2. Schreibe die Inhalte unter die passenden Überschriften aus der STRUKTUR-Liste oben.
    3. STIL: Schreibe flüssig, professionell und in der Ich-Perspektive. Korrigiere Grammatikfehler, aber behalte den persönlichen Ton bei.
    4. Wenn Informationen für einen Abschnitt fehlen, schreibe nur: "[TODO: Ergänzen]". Erfinde nichts dazu.

    OUTPUT FORMAT (YAML + MARKDOWN):
    ---
    type: {target_type}
    status: draft
    title: (Erstelle einen treffenden, kurzen Titel für den Inhalt)
    tags: [Tag1, Tag2]
    ---

    # (Wiederhole den Titel hier)

    ## (Erster Begriff aus STRUKTUR)
    (Text...)

    ## (Zweiter Begriff aus STRUKTUR)
    (Text...)
  gemini: "Extrahiere Daten für {target_type} aus {query}."
  openrouter: "Strukturiere den Input {query} nach dem Schema {schema_fields} für Typ {target_type}."

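These templates are Python-style format strings: `{query}`, `{schema_fields}` and friends are substituted at runtime, which is also why literal braces in the JSON examples of the following sections are doubled (`{{` / `}}`). A short demonstration of that mechanic:

```python
# Placeholders like {target} are substituted via str.format; literal braces
# in the template (e.g. JSON examples) must therefore be doubled as {{ and }}.
template = 'OUTPUT: [ {{"to": "{target}", "kind": "{kind}"}} ]'
print(template.format(target="Qdrant", kind="uses"))
# -> OUTPUT: [ {"to": "Qdrant", "kind": "uses"} ]
```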
# ---------------------------------------------------------
# 6. EDGE_ALLOCATION: Edge filter (ingest)
# ---------------------------------------------------------
edge_allocation_template:
  ollama: |
    TASK:
    Du bist ein strikter Selektor. Du erhältst eine Liste von "Kandidaten-Kanten" (Strings).
    Wähle jene aus, die inhaltlich im "Textabschnitt" vorkommen oder relevant sind.

    TEXTABSCHNITT:
    """
    {chunk_text}
    """

    KANDIDATEN (Auswahl-Pool):
    {edge_list}

    REGELN:
    1. Die Kanten haben das Format "typ:ziel". Der "typ" ist variabel und kann ALLES sein.
    2. Gib NUR die Strings aus der Kandidaten-Liste zurück, die zum Text passen.
    3. Erfinde KEINE neuen Kanten.
    4. Antworte als flache JSON-Liste.

    DEIN OUTPUT (JSON):
  gemini: |
    TASK: Ordne Kanten einem Textabschnitt zu.
    ERLAUBTE TYPEN: {valid_types}
    TEXT: {chunk_text}
    KANDIDATEN: {edge_list}
    OUTPUT: STRIKT eine flache JSON-Liste ["typ:ziel"]. Kein Text davor/danach. Wenn nichts: []. Keine Objekte!
  openrouter: |
    TASK: Filtere relevante Kanten aus dem Pool.
    ERLAUBTE TYPEN: {valid_types}
    TEXT: {chunk_text}
    POOL: {edge_list}
    ANWEISUNG: Gib NUR eine flache JSON-Liste von Strings zurück.
    BEISPIEL: ["kind:target", "kind:target"]
    REGEL: Kein Text, keine Analyse, keine Kommentare. Wenn nichts passt, gib [] zurück.
    OUTPUT:

# ---------------------------------------------------------
# 7. SMART EDGE ALLOCATION: Extraction (ingest)
# ---------------------------------------------------------
edge_extraction:
  ollama: |
    TASK:
    Du bist ein Wissens-Ingenieur für den digitalen Zwilling 'mindnet'.
    Deine Aufgabe ist es, semantische Relationen (Kanten) aus dem Text zu extrahieren,
    die die Hauptnotiz '{note_id}' mit anderen Konzepten verbinden.

    ANWEISUNGEN:
    1. Identifiziere wichtige Entitäten, Konzepte oder Ereignisse im Text.
    2. Bestimme die Art der Beziehung (z.B. part_of, uses, related_to, blocks, caused_by).
    3. Das Ziel (target) muss ein prägnanter Begriff sein.
    4. Antworte AUSSCHLIESSLICH in validem JSON als Liste von Objekten.

    BEISPIEL:
    [ {{"to": "Ziel-Konzept", "kind": "beziehungs_typ"}} ]

    TEXT:
    """
    {text}
    """

    DEIN OUTPUT (JSON):
  gemini: |
    Analysiere '{note_id}'. Extrahiere semantische Beziehungen.
    ERLAUBTE TYPEN: {valid_types}
    TEXT: {text}
    OUTPUT: STRIKT JSON-Array von Objekten: [{{"to": "Ziel", "kind": "typ"}}]. Kein Text davor/danach. Wenn nichts: [].
  openrouter: |
    TASK: Extrahiere semantische Relationen für '{note_id}'.
    ERLAUBTE TYPEN: {valid_types}
    TEXT: {text}
    ANWEISUNG: Antworte AUSSCHLIESSLICH mit einem JSON-Array von Objekten.
    FORMAT: [{{"to": "Ziel-Begriff", "kind": "typ"}}]
    STRIKTES VERBOT: Schreibe keine Einleitung, keine Analyse und keine Erklärungen.
    Wenn keine Relationen existieren, antworte NUR mit: []
    OUTPUT:

# ---------------------------------------------------------
# 8. WP-15b: EDGE VALIDATION (ingest/validate)
# ---------------------------------------------------------
edge_validation:
  gemini: |
    Bewerte die semantische Validität dieser Verbindung im Wissensgraph.

    KONTEXT DER QUELLE (Chunk):
    "{chunk_text}"

    ZIEL-NOTIZ: "{target_title}"
    ZIEL-BESCHREIBUNG (Zusammenfassung):
    "{target_summary}"

    GEPLANTE RELATION: "{edge_kind}"

    FRAGE: Bestätigt der Kontext der Quelle die Beziehung '{edge_kind}' zum Ziel?
    REGEL: Antworte NUR mit 'YES' oder 'NO'. Keine Erklärungen oder Smalltalk.
  openrouter: |
    Verify semantic relation for graph construction.
    Source Context: {chunk_text}
    Target Note: {target_title}
    Target Summary: {target_summary}
    Proposed Relation: {edge_kind}
    Instruction: Does the source context support this relation to the target?
    Result: Respond ONLY with 'YES' or 'NO'.
  ollama: |
    Bewerte die semantische Korrektheit dieser Verbindung.
    QUELLE: {chunk_text}
    ZIEL: {target_title} ({target_summary})
    BEZIEHUNG: {edge_kind}
    Ist diese Verbindung valide? Antworte NUR mit YES oder NO.

# ---------------------------------------------------------
# 10. WP-25: INTENT ROUTING (Intent: CLASSIFY)
# ---------------------------------------------------------
intent_router_v1:
  ollama: |
    Analysiere die Nutzeranfrage und wähle die passende Strategie.
    Antworte NUR mit dem Namen der Strategie.

    STRATEGIEN:
    - FACT_WHEN: Nur für explizite Fragen nach einem exakten Datum, Uhrzeit oder dem "Wann" eines Ereignisses.
    - FACT_WHAT: Fragen nach Inhalten, Listen von Objekten/Projekten, Definitionen oder "Was/Welche" Anfragen (auch bei Zeiträumen).
    - DECISION: Rat, Meinung, "Soll ich?", Abwägung gegen Werte.
    - EMPATHY: Emotionen, Reflexion, Befindlichkeit.
    - CODING: Programmierung, Skripte, technische Syntax.
    - INTERVIEW: Dokumentation neuer Informationen, Notizen anlegen.

    NACHRICHT: "{query}"
    STRATEGIE:
  gemini: |
    Classify intent:
    - FACT_WHEN: Exact dates/times only.
    - FACT_WHAT: Content, lists of entities (projects, etc.), definitions, "What/Which" queries.
    - DECISION: Strategic advice/values.
    - EMPATHY: Emotions.
    - CODING: Tech/Code.
    - INTERVIEW: Data entry.
    Query: "{query}"
    Result (One word only):
  openrouter: |
    Select strategy for Mindnet:
    FACT_WHEN (timing/dates), FACT_WHAT (entities/lists/what/which), DECISION, EMPATHY, CODING, INTERVIEW.
    Query: "{query}"
    Response:
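The edge prompts demand a bare, flat JSON list, and `llm_settings.cleanup_patterns` in types.yaml (further below) lists the wrapper tokens models tend to add anyway. A hedged sketch of the matching parse step, not repository code:

```python
import json

CLEANUP = ["<s>", "</s>", "[OUT]", "[/OUT]", "```json", "```"]  # from types.yaml

def parse_edge_list(raw: str) -> list:
    for token in CLEANUP:
        raw = raw.replace(token, "")
    try:
        data = json.loads(raw.strip())
    except json.JSONDecodeError:
        return []                      # the prompts specify [] as the empty answer
    # keep only flat "typ:ziel" strings, as the prompts require
    return [e for e in data if isinstance(e, str)]
```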
@ -1,4 +1,9 @@
# config/prompts.yaml — Final V2.3.1 (Multi-Personality Support)
# config/prompts.yaml — VERSION 3.2.2 (WP-25b: Hierarchical Model Sync)
# STATUS: Active
# FIX:
# - 100% preservation of the original v3.1.2 prompts at the provider level (ollama, gemini, openrouter).
# - Model-specific overrides integrated for Gemini 2.0, Llama 3.3 and Qwen 2.5.
# - Added the 'compression_template' required by DecisionEngine v1.3.0.

system_prompt: |
  Du bist 'mindnet', mein digitaler Zwilling und strategischer Partner.

@ -13,150 +18,436 @@ system_prompt: |
  3. Antworte auf Deutsch (außer bei Code/Fachbegriffen).

# ---------------------------------------------------------
# 1. STANDARD: Facts & knowledge (Intent: FACT)
# 1. STANDARD: Facts & knowledge (Intent: FACT_WHAT / FACT_WHEN)
# ---------------------------------------------------------
rag_template: |
  QUELLEN (WISSEN):
  =========================================
  {context_str}
  =========================================
fact_synthesis_v1:
  # --- Model-specific (WP-25b optimization) ---
  "google/gemini-2.0-flash-exp:free": |
    Analysiere die Wissens-Streams für: {query}
    FAKTEN: {facts_stream} | BIOGRAFIE: {biography_stream} | TECHNIK: {tech_stream}
    Nutze deine hohe Reasoning-Kapazität für eine tiefe Synthese. Antworte präzise auf Deutsch.

  "meta-llama/llama-3.3-70b-instruct:free": |
    Erstelle eine fundierte Synthese für die Frage: "{query}"
    Nutze die Daten: {facts_stream}, {biography_stream} und {tech_stream}.
    Trenne klare Fakten von Erfahrungen. Bleibe strikt beim bereitgestellten Kontext.

  FRAGE:
  {query}
  # --- EXACT provider fallbacks from v3.1.2 ---
  ollama: |
    WISSENS-STREAMS:
    =========================================
    FAKTEN & STATUS:
    {facts_stream}

    ERFAHRUNG & BIOGRAFIE:
    {biography_stream}

    WISSEN & TECHNIK:
    {tech_stream}
    =========================================

  ANWEISUNG:
  Beantworte die Frage präzise basierend auf den Quellen.
  Fasse die Informationen zusammen. Sei objektiv und neutral.
    FRAGE:
    {query}

    ANWEISUNG:
    Beantworte die Frage präzise basierend auf den Quellen.
    Kombiniere harte Fakten mit persönlichen Erfahrungen, falls vorhanden.
    Fasse die Informationen zusammen. Sei objektiv und neutral.

  gemini: |
    Beantworte die Wissensabfrage "{query}" basierend auf diesen Streams:
    FAKTEN: {facts_stream}
    BIOGRAFIE/ERFAHRUNG: {biography_stream}
    TECHNIK: {tech_stream}
    Kombiniere harte Fakten mit persönlichen Erfahrungen, falls vorhanden. Antworte strukturiert und präzise.

  openrouter: |
    Synthese der Wissens-Streams für: {query}
    Inhalt: {facts_stream} | {biography_stream} | {tech_stream}
    Antworte basierend auf dem bereitgestellten Kontext.

  default: "Beantworte {query} basierend auf dem Kontext: {facts_stream} {biography_stream} {tech_stream}."

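The v3.2.2 layout nests three levels per prompt key: exact model ID, provider name, and `default`. A sketch of the hierarchical resolution that the header's "Hierarchical Model Sync" implies (assumed, not copied from the repository):

```python
def resolve_prompt(variants: dict, model: str, provider: str) -> str:
    if model in variants:        # e.g. "google/gemini-2.0-flash-exp:free"
        return variants[model]
    if provider in variants:     # e.g. "ollama" / "gemini" / "openrouter"
        return variants[provider]
    return variants["default"]   # every key in v3.2.2 carries a default
```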
# ---------------------------------------------------------
# 2. DECISION: Strategy & trade-offs (Intent: DECISION)
# ---------------------------------------------------------
decision_template: |
  KONTEXT (FAKTEN & STRATEGIE):
  =========================================
  {context_str}
  =========================================
decision_synthesis_v1:
  # --- Model-specific (WP-25b optimization) ---
  "google/gemini-2.0-flash-exp:free": |
    Agiere als strategischer Partner für: {query}
    WERTE: {values_stream} | FAKTEN: {facts_stream} | RISIKEN: {risk_stream}
    Prüfe die Fakten gegen meine Werte. Zeige Zielkonflikte auf. Gib eine klare Empfehlung.

  ENTSCHEIDUNGSFRAGE:
  {query}
  # --- EXACT provider fallbacks from v3.1.2 ---
  ollama: |
    ENTSCHEIDUNGS-STREAMS:
    =========================================
    WERTE & PRINZIPIEN (Identität):
    {values_stream}

    OPERATIVE FAKTEN (Realität):
    {facts_stream}

    RISIKO-RADAR (Konsequenzen):
    {risk_stream}
    =========================================

  ANWEISUNG:
  Du agierst als mein Entscheidungs-Partner.
  1. Analysiere die Faktenlage aus den Quellen.
  2. Prüfe dies hart gegen meine strategischen Notizen (Typ [VALUE], [PRINCIPLE], [GOAL]).
  3. Wäge ab: Passt die technische/faktische Lösung zu meinen Werten?

  FORMAT:
  - **Analyse:** (Kurze Zusammenfassung der Fakten)
  - **Abgleich:** (Gibt es Konflikte mit Werten/Zielen? Nenne die Quelle!)
  - **Empfehlung:** (Klare Meinung: Ja/Nein/Vielleicht mit Begründung)
    ENTSCHEIDUNGSFRAGE:
    {query}

    ANWEISUNG:
    Du agierst als mein Entscheidungs-Partner.
    1. Analysiere die Faktenlage aus den Quellen.
    2. Prüfe dies hart gegen meine strategischen Notizen (Werte & Prinzipien).
    3. Wäge ab: Passt die technische/faktische Lösung zu meinen Werten?

    FORMAT:
    - **Analyse:** (Kurze Zusammenfassung der Fakten)
    - **Abgleich:** (Gibt es Konflikte mit Werten/Zielen? Nenne die Quelle!)
    - **Empfehlung:** (Klare Meinung: Ja/Nein/Vielleicht mit Begründung)

  gemini: |
    Agiere als mein strategischer Partner. Analysiere die Frage: {query}
    Werte: {values_stream} | Fakten: {facts_stream} | Risiken: {risk_stream}.
    Wäge ab und gib eine klare strategische Empfehlung ab.

  openrouter: |
    Strategische Multi-Stream Analyse für: {query}
    Werte-Basis: {values_stream} | Fakten: {facts_stream} | Risiken: {risk_stream}
    Bitte wäge ab und gib eine Empfehlung.

  default: "Prüfe {query} gegen Werte {values_stream} und Fakten {facts_stream}."

# ---------------------------------------------------------
# 3. EMPATHY: The mirror / "I" mode (Intent: EMPATHY)
# ---------------------------------------------------------
empathy_template: |
  KONTEXT (ERFAHRUNGEN & GLAUBENSSÄTZE):
  =========================================
  {context_str}
  =========================================
empathy_template:
  # --- EXACT provider fallbacks from v3.1.2 ---
  ollama: |
    KONTEXT (ERFAHRUNGEN & WERTE):
    =========================================
    ERLEBNISSE & BIOGRAFIE:
    {biography_stream}

    WERTE & BEDÜRFNISSE:
    {values_stream}
    =========================================

  SITUATION:
  {query}
    SITUATION:
    {query}

  ANWEISUNG:
  Du agierst jetzt als mein empathischer Spiegel.
  1. Versuche nicht sofort, das Problem technisch zu lösen.
  2. Zeige Verständnis für die Situation basierend auf meinen eigenen Erfahrungen ([EXPERIENCE]) oder Glaubenssätzen ([BELIEF]), falls im Kontext vorhanden.
  3. Antworte in der "Ich"-Form oder "Wir"-Form. Sei unterstützend.
    ANWEISUNG:
    Du agierst jetzt als mein empathischer Spiegel.
    1. Versuche nicht sofort, das Problem technisch zu lösen.
    2. Zeige Verständnis für die Situation basierend auf meinen eigenen Erfahrungen ([EXPERIENCE]) oder Werten, falls im Kontext vorhanden.
    3. Antworte in der "Ich"-Form oder "Wir"-Form. Sei unterstützend.

    TONFALL:
    Ruhig, verständnisvoll, reflektiert. Keine Aufzählungszeichen, sondern fließender Text.

  gemini: "Sei mein digitaler Spiegel für {query}. Kontext: {biography_stream}, {values_stream}"
  openrouter: "Empathische Reflexion der Situation {query}. Persönlicher Kontext: {biography_stream}, {values_stream}"

  TONFALL:
  Ruhig, verständnisvoll, reflektiert. Keine Aufzählungszeichen, sondern fließender Text.
  default: "Reflektiere empathisch über {query} basierend auf {biography_stream}."

# ---------------------------------------------------------
# 4. TECHNICAL: The coder (Intent: CODING)
# ---------------------------------------------------------
technical_template: |
  KONTEXT (DOCS & SNIPPETS):
  =========================================
  {context_str}
  =========================================
technical_template:
  # --- Model-specific (WP-25b optimization) ---
  "qwen/qwen-2.5-vl-7b-instruct:free": |
    Du bist Senior Software Engineer. TASK: {query}
    REFERENZEN: {tech_stream} | KONTEXT: {facts_stream}
    Generiere validen, performanten Code. Nutze die Snippets aus dem Kontext.

  TASK:
  {query}
  # --- EXACT provider fallbacks from v3.1.2 ---
  ollama: |
    KONTEXT (WISSEN & PROJEKTE):
    =========================================
    TECHNIK & SNIPPETS:
    {tech_stream}

    PROJEKT-STATUS:
    {facts_stream}
    =========================================

  ANWEISUNG:
  Du bist Senior Developer.
  1. Ignoriere Smalltalk. Komm sofort zum Punkt.
  2. Generiere validen, performanten Code basierend auf den Quellen.
  3. Wenn Quellen fehlen, nutze dein allgemeines Programmierwissen, aber weise darauf hin.

  FORMAT:
  - Kurze Erklärung des Ansatzes.
  - Markdown Code-Block (Copy-Paste fertig).
  - Wichtige Edge-Cases.
# ---------------------------------------------------------
# 5. INTERVIEW: The "one-shot extractor" (performance mode)
# ---------------------------------------------------------
interview_template: |
  TASK:
  Du bist ein professioneller Ghostwriter. Verwandle den "USER INPUT" in eine strukturierte Notiz vom Typ '{target_type}'.

  STRUKTUR (Nutze EXAKT diese Überschriften):
  {schema_fields}

  USER INPUT:
  "{query}"

  ANWEISUNG ZUM INHALT:
  1. Analysiere den Input genau.
  2. Schreibe die Inhalte unter die passenden Überschriften aus der STRUKTUR-Liste oben.
  3. STIL: Schreibe flüssig, professionell und in der Ich-Perspektive. Korrigiere Grammatikfehler, aber behalte den persönlichen Ton bei.
  4. Wenn Informationen für einen Abschnitt fehlen, schreibe nur: "[TODO: Ergänzen]". Erfinde nichts dazu.

  OUTPUT FORMAT (YAML + MARKDOWN):
  ---
  type: {target_type}
  status: draft
  title: (Erstelle einen treffenden, kurzen Titel für den Inhalt)
  tags: [Tag1, Tag2]
  ---

  # (Wiederhole den Titel hier)

  ## (Erster Begriff aus STRUKTUR)
  (Text...)

  ## (Zweiter Begriff aus STRUKTUR)
  (Text...)

  (usw.)

    TASK:
    {query}

    ANWEISUNG:
    Du bist Senior Developer.
    1. Ignoriere Smalltalk. Komm sofort zum Punkt.
    2. Generiere validen, performanten Code basierend auf den Quellen.
    3. Wenn Quellen fehlen, nutze dein allgemeines Programmierwissen, aber weise darauf hin.

    FORMAT:
    - Kurze Erklärung des Ansatzes.
    - Markdown Code-Block (Copy-Paste fertig).
    - Wichtige Edge-Cases.

  gemini: "Generiere Code für {query} unter Berücksichtigung von {tech_stream} und {facts_stream}."
  openrouter: "Technischer Support für {query}. Referenzen: {tech_stream}, Projekt-Kontext: {facts_stream}"

  default: "Erstelle eine technische Lösung für {query}."

# ---------------------------------------------------------
# 6. EDGE_ALLOCATION: Edge filter (Intent: OFFLINE_FILTER)
# 5. INTERVIEW: The "one-shot extractor" (WP-07)
# ---------------------------------------------------------
edge_allocation_template: |
  TASK:
  Du bist ein strikter Selektor. Du erhältst eine Liste von "Kandidaten-Kanten" (Strings).
  Wähle jene aus, die inhaltlich im "Textabschnitt" vorkommen oder relevant sind.
interview_template:
  # --- EXACT provider fallbacks from v3.1.2 ---
  ollama: |
    TASK:
    Du bist ein professioneller Ghostwriter. Verwandle den "USER INPUT" in eine strukturierte Notiz vom Typ '{target_type}'.

    STRUKTUR (Nutze EXAKT diese Überschriften):
    {schema_fields}

    USER INPUT:
    "{query}"

    ANWEISUNG ZUM INHALT:
    1. Analysiere den Input genau.
    2. Schreibe die Inhalte unter die passenden Überschriften aus der STRUKTUR-Liste oben.
    3. STIL: Schreibe flüssig, professionell und in der Ich-Perspektive. Korrigiere Grammatikfehler, aber behalte den persönlichen Ton bei.
    4. Wenn Informationen für einen Abschnitt fehlen, schreibe nur: "[TODO: Ergänzen]". Erfinde nichts dazu.

    OUTPUT FORMAT (YAML + MARKDOWN):
    ---
    type: {target_type}
    status: draft
    title: (Erstelle einen treffenden, kurzen Titel für den Inhalt)
    tags: [Tag1, Tag2]
    ---

    # (Wiederhole den Titel hier)

    ## (Erster Begriff aus STRUKTUR)
    (Text...)

    ## (Zweiter Begriff aus STRUKTUR)
    (Text...)

  TEXTABSCHNITT:
  """
  {chunk_text}
  """
  gemini: "Extrahiere Daten für {target_type} aus {query}."
  openrouter: "Strukturiere den Input {query} nach dem Schema {schema_fields} für Typ {target_type}."

  KANDIDATEN (Auswahl-Pool):
  {edge_list}
  default: "Extrahiere Informationen für {target_type} aus dem Input: {query}"

  REGELN:
  1. Die Kanten haben das Format "typ:ziel". Der "typ" ist variabel und kann ALLES sein (z.B. uses, blocks, inspired_by, loves, etc.).
  2. Gib NUR die Strings aus der Kandidaten-Liste zurück, die zum Text passen.
  3. Erfinde KEINE neuen Kanten. Nutze exakt die Schreibweise aus der Liste.
  4. Antworte als flache JSON-Liste.
# ---------------------------------------------------------
# 6. WP-25b: PRE-SYNTHESIS COMPRESSION (new!)
# ---------------------------------------------------------
compression_template:
  "mistralai/mistral-7b-instruct:free": |
    Reduziere den Stream '{stream_name}' auf die Informationen, die für die Beantwortung der Frage '{query}' absolut notwendig sind.
    BEHALTE: Harte Fakten, Projektnamen, konkrete Werte und Quellenangaben.
    ENTFERNE: Redundante Einleitungen, Füllwörter und irrelevante Details.

    INHALT:
    {content}

    KOMPRIMIERTE ANALYSE:

  BEISPIEL (Zur Demonstration der Logik):
  Input Text: "Das Projekt Alpha scheitert, weil Budget fehlt."
  Input Kandidaten: ["blocks:Projekt Alpha", "inspired_by:Buch der Weisen", "needs:Budget"]
  Output: ["blocks:Projekt Alpha", "needs:Budget"]
  default: "Fasse das Wichtigste aus {stream_name} für die Frage {query} kurz zusammen: {content}"

  DEIN OUTPUT (JSON):
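Section 6 pairs with the per-stream `compression_threshold` values in the engine config above: oversized stream output is condensed with this template before synthesis. A hedged sketch of that step; `call_llm` is a stand-in for the `compression_fast` profile client:

```python
def call_llm(prompt: str, temperature: float) -> str:
    return "komprimierte Analyse"  # stand-in stub for the compression_fast profile

def compress_stream(stream_name: str, query: str, content: str,
                    threshold: int, template: str) -> str:
    if len(content) <= threshold:
        return content  # under budget: skip the extra LLM call entirely
    prompt = template.format(stream_name=stream_name, query=query, content=content)
    return call_llm(prompt, temperature=0.1)  # compression_fast runs at 0.1
```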
# ---------------------------------------------------------
# 7. EDGE_ALLOCATION: Edge filter (ingest)
# ---------------------------------------------------------
edge_allocation_template:
  ollama: |
    TASK:
    Du bist ein strikter Selektor. Du erhältst eine Liste von "Kandidaten-Kanten" (Strings).
    Wähle jene aus, die inhaltlich im "Textabschnitt" vorkommen oder relevant sind.

    TEXTABSCHNITT:
    """
    {chunk_text}
    """

    KANDIDATEN (Auswahl-Pool):
    {edge_list}

    REGELN:
    1. Die Kanten haben das Format "typ:ziel". Der "typ" ist variabel und kann ALLES sein.
    2. Gib NUR die Strings aus der Kandidaten-Liste zurück, die zum Text passen.
    3. Erfinde KEINE neuen Kanten.
    4. Antworte als flache JSON-Liste.

    DEIN OUTPUT (JSON):

  gemini: |
    TASK: Ordne Kanten einem Textabschnitt zu.
    ERLAUBTE TYPEN: {valid_types}
    TEXT: {chunk_text}
    KANDIDATEN: {edge_list}
    OUTPUT: STRIKT eine flache JSON-Liste ["typ:ziel"]. Kein Text davor/danach. Wenn nichts: []. Keine Objekte!

  openrouter: |
    TASK: Filtere relevante Kanten aus dem Pool.
    ERLAUBTE TYPEN: {valid_types}
    TEXT: {chunk_text}
    POOL: {edge_list}
    ANWEISUNG: Gib NUR eine flache JSON-Liste von Strings zurück.
    BEISPIEL: ["kind:target", "kind:target"]
    REGEL: Kein Text, keine Analyse, keine Kommentare. Wenn nichts passt, gib [] zurück.
    OUTPUT:

  default: "[]"

# ---------------------------------------------------------
# 8. SMART EDGE ALLOCATION: Extraction (ingest)
# ---------------------------------------------------------
edge_extraction:
  ollama: |
    TASK:
    Du bist ein Wissens-Ingenieur für den digitalen Zwilling 'mindnet'.
    Deine Aufgabe ist es, semantische Relationen (Kanten) aus dem Text zu extrahieren,
    die die Hauptnotiz '{note_id}' mit anderen Konzepten verbinden.

    ANWEISUNGEN:
    1. Identifiziere wichtige Entitäten, Konzepte oder Ereignisse im Text.
    2. Bestimme die Art der Beziehung (z.B. part_of, uses, related_to, blocks, caused_by).
    3. Das Ziel (target) muss ein prägnanter Begriff sein.
    4. Antworte AUSSCHLIESSLICH in validem JSON als Liste von Objekten.

    BEISPIEL:
    [ {{"to": "Ziel-Konzept", "kind": "beziehungs_typ"}} ]

    TEXT:
    """
    {text}
    """

    DEIN OUTPUT (JSON):

  gemini: |
    Analysiere '{note_id}'. Extrahiere semantische Beziehungen.
    ERLAUBTE TYPEN: {valid_types}
    TEXT: {text}
    OUTPUT: STRIKT JSON-Array von Objekten: [{{"to": "Ziel", "kind": "typ"}}]. Kein Text davor/danach. Wenn nichts: [].

  openrouter: |
    TASK: Extrahiere semantische Relationen für '{note_id}'.
    ERLAUBTE TYPEN: {valid_types}
    TEXT: {text}
    ANWEISUNG: Antworte AUSSCHLIESSLICH mit einem JSON-Array von Objekten.
    FORMAT: [{{"to": "Ziel-Begriff", "kind": "typ"}}]
    STRIKTES VERBOT: Schreibe keine Einleitung, keine Analyse und keine Erklärungen.
    Wenn keine Relationen existieren, antworte NUR mit: []
    OUTPUT:

  default: "[]"

# ---------------------------------------------------------
# 9. INGESTION: EDGE VALIDATION (ingest/validate)
# ---------------------------------------------------------
edge_validation:
  # --- Model-specific (WP-25b optimization) ---
  "mistralai/mistral-7b-instruct:free": |
    Verify relation '{edge_kind}' for graph integrity.
    Chunk: "{chunk_text}"
    Target: "{target_title}" ({target_summary})
    Respond ONLY with 'YES' or 'NO'.

  # --- EXACT provider fallbacks from v3.1.2 ---
  gemini: |
    Bewerte die semantische Validität dieser Verbindung im Wissensgraph.

    KONTEXT DER QUELLE (Chunk):
    "{chunk_text}"

    ZIEL-NOTIZ: "{target_title}"
    ZIEL-BESCHREIBUNG (Zusammenfassung):
    "{target_summary}"

    GEPLANTE RELATION: "{edge_kind}"

    FRAGE: Bestätigt der Kontext der Quelle die Beziehung '{edge_kind}' zum Ziel?
    REGEL: Antworte NUR mit 'YES' oder 'NO'. Keine Erklärungen oder Smalltalk.

  openrouter: |
    Verify semantic relation for graph construction.
    Source Context: {chunk_text}
    Target Note: {target_title}
    Target Summary: {target_summary}
    Proposed Relation: {edge_kind}
    Instruction: Does the source context support this relation to the target?
    Result: Respond ONLY with 'YES' or 'NO'.

  ollama: |
    Bewerte die semantische Korrektheit dieser Verbindung.
    QUELLE: {chunk_text}
    ZIEL: {target_title} ({target_summary})
    BEZIEHUNG: {edge_kind}
    Ist diese Verbindung valide? Antworte NUR mit YES oder NO.

  default: "YES"

# ---------------------------------------------------------
# 10. WP-25: INTENT ROUTING (Intent: CLASSIFY)
# ---------------------------------------------------------
intent_router_v1:
  # --- Model-specific (WP-25b optimization) ---
  "mistralai/mistral-7b-instruct:free": |
    Classify query "{query}" into exactly one of these categories:
    FACT_WHEN, FACT_WHAT, DECISION, EMPATHY, CODING, INTERVIEW.
    Respond with the category name only.

  # --- EXACT provider fallbacks from v3.1.2 ---
  ollama: |
    Analysiere die Nutzeranfrage und wähle die passende Strategie.
    Antworte NUR mit dem Namen der Strategie.

    STRATEGIEN:
    - FACT_WHEN: Nur für explizite Fragen nach einem exakten Datum, Uhrzeit oder dem "Wann" eines Ereignisses.
    - FACT_WHAT: Fragen nach Inhalten, Listen von Objekten/Projekten, Definitionen oder "Was/Welche" Anfragen (auch bei Zeiträumen).
    - DECISION: Rat, Meinung, "Soll ich?", Abwägung gegen Werte.
    - EMPATHY: Emotionen, Reflexion, Befindlichkeit.
    - CODING: Programmierung, Skripte, technische Syntax.
    - INTERVIEW: Dokumentation neuer Informationen, Notizen anlegen.

    NACHRICHT: "{query}"
    STRATEGIE:

  gemini: |
    Classify intent:
    - FACT_WHEN: Exact dates/times only.
    - FACT_WHAT: Content, lists of entities (projects, etc.), definitions, "What/Which" queries.
    - DECISION: Strategic advice/values.
    - EMPATHY: Emotions.
    - CODING: Tech/Code.
    - INTERVIEW: Data entry.
    Query: "{query}"
    Result (One word only):

  openrouter: |
    Select strategy for Mindnet:
    FACT_WHEN (timing/dates), FACT_WHAT (entities/lists/what/which), DECISION, EMPATHY, CODING, INTERVIEW.
    Query: "{query}"
    Response:

  default: "FACT_WHAT"

# ---------------------------------------------------------
# 11. WP-25b: FALLBACK SYNTHESIS (error recovery)
# ---------------------------------------------------------
fallback_synthesis:
  ollama: |
    Beantworte die folgende Frage basierend auf dem bereitgestellten Kontext.

    FRAGE:
    {query}

    KONTEXT:
    {context}

    ANWEISUNG:
    Nutze den Kontext, um eine präzise Antwort zu geben. Falls der Kontext unvollständig ist, weise darauf hin.

  gemini: |
    Frage: {query}
    Kontext: {context}
    Antworte basierend auf dem Kontext.

  openrouter: |
    Answer the question "{query}" using the provided context: {context}

  default: "Answer: {query}\n\nContext: {context}"
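Section 11 suggests a last-resort path: if a strategy-specific synthesis fails, retry with the generic prompt over the concatenated stream texts. A sketch under that assumption (`call_llm` is a stand-in, not repository code):

```python
def call_llm(prompt: str) -> str:
    return "Antwort"  # stand-in stub for the cascade-backed client

def synthesize(query: str, streams: dict, strategy_prompt: str, fallback_prompt: str) -> str:
    try:
        return call_llm(strategy_prompt.format(query=query, **streams))
    except Exception:
        # error recovery: collapse all stream texts into one generic context
        context = "\n\n".join(streams.values())
        return call_llm(fallback_prompt.format(query=query, context=context))
```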
@ -1,4 +1,4 @@
version: 1.0
version: 1.2

scoring:
  # W_sem: scales the term (semantic_score * retriever_weight)

@ -6,20 +6,54 @@ scoring:
  semantic_weight: 1.0

  # W_edge: scales edge_bonus from the subgraph
  # Recommendation: 0.7 → the graph is clearly felt but does not fully override semantics
  edge_weight: 0.7
  # Recommendation: 0.8 → the graph is clearly felt but does not fully override semantics
  edge_weight: 0.8

  # W_cent: scales centrality_bonus (node centrality in the subgraph)
  # Recommendation: 0.5 → central nodes are preferred, but moderately
  centrality_weight: 0.5

  # WP-22 tuning knob: lifecycle (status-based scoring)
  # Bonus for verified knowledge, penalty for drafts
  lifecycle_weights:
    stable: 1.2   # +20% bonus
    active: 1.0   # default
    draft: 0.5    # -50% penalty
    system: 0.0   # hard skip via ingestion

# The values below override the defaults from app/core/retriever_config.
# If new edge types (e.g. created by references inside a vault .md file) should
# be weighted differently, configure them here.
edge_types:
  references: 0.20
  depends_on: 0.18
  related_to: 0.15
  similar_to: 0.12
  belongs_to: 0.10
  next: 0.06
  prev: 0.06
  # --- CATEGORY 1: LOGIC BOOSTS (relevance drivers) ---
  # These edges have the power to actively reshape the semantic ranking.
  blocks: 1.6       # Critical: risks/blockers must surface immediately.
  solves: 1.5       # Goal-oriented: solutions are primary search targets.
  depends_on: 1.4   # Logical: hard domain dependency.
  resulted_in: 1.4  # Causal: results and immediate consequences.
  followed_by: 1.3  # Sequential (user): deliberately curated knowledge paths.
  caused_by: 1.2    # Causal: cause reference (basis for the intent boost).
  preceded_by: 1.1  # Sequential (user): backward reference in logic chains.
  impacts: 1.2      # Long-term effect/influence

  # --- CATEGORY 2: QUALITATIVE CONTEXT (stability props) ---
  # These edges provide important context without distorting the result.
  guides: 1.1       # Qualitative: principles or values guide the topic.
  part_of: 1.1      # Structural: pulls parent contexts up with the hit.
  based_on: 0.8     # Foundation: reference to base values (calibrated for safe retrieval).
  derived_from: 0.6 # Historical: documents the origin of knowledge.
  uses: 0.6         # Instrumental: tools, methods or resources in use.

  # --- CATEGORY 3: THEMATIC PROXIMITY (similarity signal) ---
  # These values prevent "drift" into unrelated domains.
  similar_to: 0.4   # Analytical: thematic proximity (often AI-generated).

  # --- CATEGORY 4: SYSTEM NUDGES (technical structure) ---
  # Pure orientation aids for the system; almost no ranking influence.
  belongs_to: 0.2   # System: links chunks to their note (metadata carrier).
  next: 0.1         # System: technical reading order of paragraphs.
  prev: 0.1         # System: technical reading order of paragraphs.

  # --- CATEGORY 5: SOFT ASSOCIATIONS (noise suppression) ---
  # Prevents loose links from "watering down" the result.
  references: 0.1   # Associative: simple cross-reference or mention.
  related_to: 0.05  # Minimal: weakest thematic connection.
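Read together, the scoring comments imply a combination along the following lines. This is a hedged reconstruction; the authoritative formula lives in app/core/retriever_config:

```python
def final_score(semantic: float, retriever_weight: float, edge_bonus: float,
                centrality_bonus: float, status: str, cfg: dict) -> float:
    base = (cfg["semantic_weight"] * semantic * retriever_weight   # W_sem term
            + cfg["edge_weight"] * edge_bonus                      # W_edge term
            + cfg["centrality_weight"] * centrality_bonus)         # W_cent term
    return base * cfg["lifecycle_weights"].get(status, 1.0)        # WP-22 status factor
```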
@ -1,4 +1,4 @@
version: 2.4.0  # Optimized for Async Intelligence & Hybrid Router
version: 2.7.0  # WP-14 update: dynamization of the ingestion pipeline

# ==============================================================================
# 1. CHUNKING PROFILES

@ -7,7 +7,6 @@ version: 2.4.0 # Optimized for Async Intelligence & Hybrid Router
chunking_profiles:

  # A. SHORT & FAST
  # For glossary, tasks, risks. Small snippets.
  sliding_short:
    strategy: sliding_window
    enable_smart_edge_allocation: false

@ -16,7 +15,6 @@ chunking_profiles:
    overlap: [30, 50]

  # B. STANDARD & FAST
  # The "tractor": robust for sources, journal, daily logs.
  sliding_standard:
    strategy: sliding_window
    enable_smart_edge_allocation: false

@ -24,10 +22,7 @@ chunking_profiles:
    max: 650
    overlap: [50, 100]

  # C. SMART FLOW (performance-safe mode)
  # For concepts, projects, experiences.
  # NOTE: 'enable_smart_edge_allocation' is FALSE for now so Ollama is not
  # overloaded during generation. Re-enable later.
  # C. SMART FLOW (text flow)
  sliding_smart_edges:
    strategy: sliding_window
    enable_smart_edge_allocation: true

@ -35,12 +30,32 @@ chunking_profiles:
    max: 600
    overlap: [50, 80]

  # D. SMART STRUCTURE
  # For profiles, values, principles. Splits hard at headings (H2).
  # D. SMART STRUCTURE (soft split)
  structured_smart_edges:
    strategy: by_heading
    enable_smart_edge_allocation: true
    split_level: 2
    strict_heading_split: false
    max: 600
    target: 400
    overlap: [50, 80]

  # E. SMART STRUCTURE STRICT (H2 hard split)
  structured_smart_edges_strict:
    strategy: by_heading
    enable_smart_edge_allocation: true
    split_level: 2
    strict_heading_split: true  # hard mode
    max: 600
    target: 400
    overlap: [50, 80]

  # F. SMART STRUCTURE DEEP (H3 hard split + merge check)
  structured_smart_edges_strict_L3:
    strategy: by_heading
    enable_smart_edge_allocation: true
    split_level: 3
    strict_heading_split: true
    max: 600
    target: 400
    overlap: [50, 80]
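The sliding-window profiles are parameterized by target/max sizes and an overlap range. An illustrative chunker under the assumption that sizes are measured in characters (the config does not state the unit):

```python
def sliding_window(text: str, target: int = 400, overlap: int = 50) -> list:
    # 'target' is the preferred chunk size; a 'max' cap and merge checks
    # exist in the profiles above but are omitted in this sketch.
    chunks, start = [], 0
    while start < len(text):
        end = min(start + target, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # windows overlap so context is not cut mid-thought
    return chunks
```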
@ -51,151 +66,245 @@ chunking_profiles:
defaults:
  retriever_weight: 1.0
  chunking_profile: sliding_standard
  edge_defaults: []

# ==============================================================================
# 3. TYPE DEFINITIONS
# 3. INGESTION SETTINGS (WP-14 dynamization)
# ==============================================================================
ingestion_settings:
  ignore_statuses: ["system", "template", "archive", "hidden"]
  default_note_type: "concept"

# ==============================================================================
# 4. SUMMARY & SCAN SETTINGS
# ==============================================================================
summary_settings:
  max_summary_length: 500
  pre_scan_depth: 600

# ==============================================================================
# 5. LLM SETTINGS
# ==============================================================================
llm_settings:
  cleanup_patterns: ["<s>", "</s>", "[OUT]", "[/OUT]", "```json", "```"]

# ==============================================================================
# 6. TYPE DEFINITIONS
# ==============================================================================

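A small sketch of the WP-14 ingestion gate these settings describe: statuses in `ignore_statuses` are skipped outright, untyped notes fall back to `default_note_type` (assumed logic, not repository code):

```python
IGNORE_STATUSES = {"system", "template", "archive", "hidden"}

def classify_note(frontmatter: dict):
    if frontmatter.get("status") in IGNORE_STATUSES:
        return None                             # hard skip at ingestion time
    return frontmatter.get("type", "concept")   # default_note_type fallback
```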
types:
|
||||
|
||||
  # --- CORE TYPES (high priority & smart) ---

  experience:
    chunking_profile: structured_smart_edges
    retriever_weight: 1.10
    edge_defaults: ["derived_from", "references"]
    # Hybrid classifier: if these words occur, it is an experience
    detection_keywords: ["erleben", "reagieren", "handeln", "prägen", "reflektieren"]
    # Ghostwriter schema: descriptive headings for better text flow
    schema:
      - "Situation (Was ist passiert?)"
      - "Meine Reaktion (Was habe ich getan?)"
      - "Ergebnis & Auswirkung"
      - "Reflexion & Learning (Was lerne ich daraus?)"

  insight:
    chunking_profile: structured_smart_edges
    retriever_weight: 1.20
    detection_keywords: ["beobachten", "erkennen", "verstehen", "analysieren", "schlussfolgern"]
    schema:
      - "Beobachtung (Was sehe ich?)"
      - "Interpretation (Was bedeutet das?)"
      - "Bedürfnis (Was steckt dahinter?)"
      - "Handlungsempfehlung"

  project:
    chunking_profile: structured_smart_edges
    retriever_weight: 0.97
    edge_defaults: ["references", "depends_on"]
    detection_keywords: ["umsetzen", "planen", "starten", "bauen", "abschließen"]
    schema:
      - "Mission & Zielsetzung"
      - "Aktueller Status & Blockaden"
      - "Nächste konkrete Schritte"
      - "Stakeholder & Ressourcen"

  decision:
    chunking_profile: structured_smart_edges_strict
    retriever_weight: 1.00  # max: decisions are law
    edge_defaults: ["caused_by", "references"]
    detection_keywords: ["entscheiden", "wählen", "abwägen", "priorisieren", "festlegen"]
    schema:
      - "Kontext & Problemstellung"
      - "Betrachtete Optionen"
      - "Die Entscheidung"
      - "Begründung"

  # --- PERSONALITY & IDENTITY ---

  value:
    chunking_profile: structured_smart_edges_strict
    retriever_weight: 1.00
    edge_defaults: ["related_to"]
    detection_keywords: ["werten", "achten", "verpflichten", "bedeuten"]
    schema:
      - "Definition"
      - "Warum mir das wichtig ist"
      - "Leitsätze"

  principle:
    chunking_profile: structured_smart_edges_strict_L3
    retriever_weight: 0.95
    edge_defaults: ["derived_from", "references"]
    detection_keywords: ["leiten", "steuern", "ausrichten", "handhaben"]
    schema:
      - "Das Prinzip"
      - "Anwendung & Beispiele"

  trait:
    chunking_profile: structured_smart_edges_strict
    retriever_weight: 1.10
    detection_keywords: ["begeistern", "können", "auszeichnen", "befähigen", "stärken"]
    schema:
      - "Eigenschaft / Talent"
      - "Beispiele aus der Praxis"
      - "Potenzial für die Zukunft"

  obstacle:
    chunking_profile: structured_smart_edges_strict
    retriever_weight: 1.00
    detection_keywords: ["blockieren", "fürchten", "vermeiden", "hindern", "zweifeln"]
    schema:
      - "Beschreibung der Hürde"
      - "Ursprung / Auslöser"
      - "Auswirkung auf Ziele"
      - "Gegenstrategie"

  belief:
    chunking_profile: sliding_short
    retriever_weight: 0.90
    edge_defaults: ["related_to"]
    detection_keywords: ["glauben", "meinen", "annehmen", "überzeugen"]
    schema:
      - "Der Glaubenssatz"
      - "Ursprung & Reflexion"

  profile:
    chunking_profile: structured_smart_edges_strict
    retriever_weight: 0.70
    edge_defaults: ["references", "related_to"]
    detection_keywords: ["verkörpern", "verantworten", "agieren", "repräsentieren"]
    schema:
      - "Rolle / Identität"
      - "Fakten & Daten"
      - "Historie"

  # --- STRATEGY & RISK ---

  idea:
    chunking_profile: sliding_short
    retriever_weight: 0.70
    detection_keywords: ["einfall", "gedanke", "potenzial", "möglichkeit"]
    schema:
      - "Der Kerngedanke"
      - "Potenzial & Auswirkung"
      - "Nächste Schritte"

  skill:
    chunking_profile: sliding_smart_edges
    retriever_weight: 0.90
    detection_keywords: ["lernen", "beherrschen", "üben", "fertigkeit", "kompetenz"]
    schema:
      - "Definition der Fähigkeit"
      - "Aktueller Stand & Lernpfad"
      - "Evidenz (Proof of Work)"

  habit:
    chunking_profile: sliding_short
    retriever_weight: 0.85
    detection_keywords: ["gewohnheit", "routine", "automatismus", "immer wenn"]
    schema:
      - "Auslöser (Trigger)"
      - "Routine (Handlung)"
      - "Belohnung (Reward)"
      - "Strategie"

  need:
    chunking_profile: structured_smart_edges
    retriever_weight: 1.05
    detection_keywords: ["bedürfnis", "brauchen", "mangel", "erfüllung"]
    schema:
      - "Das Bedürfnis"
      - "Zustand (Mangel vs. Erfüllung)"
      - "Bezug zu Werten"

  motivation:
    chunking_profile: structured_smart_edges
    retriever_weight: 0.95
    detection_keywords: ["motivation", "antrieb", "warum", "energie"]
    schema:
      - "Der Antrieb"
      - "Zielbezug"
      - "Energiequelle"

  bias:
    chunking_profile: sliding_short
    retriever_weight: 0.80
    detection_keywords: ["denkfehler", "verzerrung", "vorurteil", "falle"]
    schema: ["Beschreibung der Verzerrung", "Typische Situationen", "Gegenstrategie"]

  state:
    chunking_profile: sliding_short
    retriever_weight: 0.60
    detection_keywords: ["stimmung", "energie", "gefühl", "verfassung"]
    schema: ["Aktueller Zustand", "Auslöser", "Auswirkung auf den Tag"]

  boundary:
    chunking_profile: structured_smart_edges
    retriever_weight: 0.90
    detection_keywords: ["grenze", "nein sagen", "limit", "schutz"]
    schema: ["Die Grenze", "Warum sie wichtig ist", "Konsequenz bei Verletzung"]

  goal:
    chunking_profile: structured_smart_edges
    retriever_weight: 0.95
    edge_defaults: ["depends_on", "related_to"]
    detection_keywords: ["ziel", "zielzustand", "kpi", "zeitrahmen", "deadline", "meilenstein"]
    schema: ["Zielzustand", "Zeitrahmen & KPIs", "Motivation"]

  risk:
    chunking_profile: sliding_short
    retriever_weight: 0.85
    edge_defaults: ["related_to", "blocks"]
    detection_keywords: ["risiko", "gefahr", "bedrohung"]
    schema: ["Beschreibung des Risikos", "Auswirkungen", "Gegenmaßnahmen"]

  # --- BASICS & KNOWLEDGE ---

  concept:
    chunking_profile: structured_smart_edges
    retriever_weight: 0.6
    edge_defaults: ["references", "related_to"]
    detection_keywords: ["definition", "konzept", "begriff", "modell", "rahmen", "theorie"]
    schema: ["Definition", "Kontext", "Verwandte Konzepte"]

  task:
    chunking_profile: sliding_short
    retriever_weight: 0.8
    edge_defaults: ["depends_on", "part_of"]
    detection_keywords: ["aufgabe", "todo", "next_action", "erledigen", "definition_of_done", "checkliste"]
    schema: ["Aufgabe", "Kontext", "Definition of Done"]

  journal:
    chunking_profile: sliding_standard
    retriever_weight: 0.8
    edge_defaults: ["references", "related_to"]
    detection_keywords: ["journal", "tagebuch", "log", "eintrag", "reflexion", "heute"]
    schema: ["Log-Eintrag", "Gedanken"]

  source:
    chunking_profile: sliding_standard
    retriever_weight: 0.5
    edge_defaults: []
    detection_keywords: ["quelle", "paper", "buch", "artikel", "link", "zitat", "studie"]
    schema: ["Metadaten", "Zusammenfassung", "Zitate"]

  glossary:
    chunking_profile: sliding_short
    retriever_weight: 0.4
    edge_defaults: ["related_to"]
    detection_keywords: ["glossar", "begriff", "definition", "terminologie"]
    schema: ["Begriff", "Definition"]

  person:
    chunking_profile: sliding_standard
    retriever_weight: 0.5
    detection_keywords: ["person", "mensch", "kontakt", "name", "beziehung", "stakeholder"]
    schema: ["Profile", "Beziehung", "Kontext"]

  event:
    chunking_profile: sliding_standard
    retriever_weight: 0.6
    detection_keywords: ["ereignis", "termin", "datum", "ort", "teilnehmer", "meeting"]
    schema: ["Datum & Ort", "Teilnehmer", "Ergebnisse"]

  default:
    chunking_profile: sliding_standard
    retriever_weight: 1.0
    detection_keywords: []
    schema: ["Inhalt"]

debug.log  (new file, +1 line)
@@ -0,0 +1 @@
[0114/152756.633:ERROR:third_party\crashpad\crashpad\util\win\registration_protocol_win.cc:108] CreateFile: Das System kann die angegebene Datei nicht finden. (0x2)

docs/00_General/00_Marketing_V3.md  (new file, +69 lines)
@@ -0,0 +1,69 @@
# Mindnet V3.0: The Rise of the Digital Twin
## From Knowledge Base to Strategic Partner – A Paradigm Shift

### Introduction: The Vision of Version 3.0
With the completion of milestone WP25 (including the architecture extensions 25a and 25b), Mindnet transforms from a pure retrieval system (V2) into an autonomous, agentic ecosystem (V3.0). Mindnet V3.0 is no longer just a tool for reproducing information; it is a **Digital Twin** that captures complex realities through multi-stream analysis, gives strategic recommendations based on individual values, and guarantees a previously unattained level of resilience.

---

### The 6 Pillars of the Mindnet V3.0 Architecture

#### 1. Agentic Multi-Stream Retrieval (WP-25)
The heart of V3.0 is the new `DecisionEngine`. While conventional systems merely run a simple vector search, the DecisionEngine orchestrates parallel knowledge streams:
* **Values stream:** Matches requests against your ethical and strategic identity.
* **Facts stream:** Analyzes operational reality and current project data.
* **Biography stream:** Integrates personal experiences and historical context.
* **Risk radar:** Proactively identifies obstacles and conflicting goals.
* **Tech knowledge:** Deep domain expertise for specialized tasks.

This lets Mindnet examine a request from five different perspectives at once before a final synthesis is produced; the fan-out is sketched below.
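
A minimal sketch of this fan-out, assuming asyncio-based stream handlers; all names are hypothetical illustrations rather than the actual `DecisionEngine` API:

```python
# Sketch: parallel multi-stream retrieval with tagged results.
# Stream handlers and their names are hypothetical.
import asyncio

STREAMS = ["values", "facts", "biography", "risk", "tech"]

async def query_stream(stream: str, question: str) -> dict:
    # In the real system each stream would run its own filtered vector search.
    await asyncio.sleep(0)  # placeholder for the actual retrieval call
    return {"stream_origin": stream, "hits": []}

async def multi_stream_retrieval(question: str) -> list[dict]:
    # Fan out to all five perspectives at once, then hand the tagged
    # results to the synthesis template.
    tasks = [query_stream(s, question) for s in STREAMS]
    return await asyncio.gather(*tasks)

results = asyncio.run(multi_stream_retrieval("Should I take on project X?"))
print([r["stream_origin"] for r in results])
```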

#### 2. Mixture of Experts (MoE) & Dynamic Profiling (WP-25a)
Mindnet V3.0 no longer relies on a single model. Via the central configuration in `llm_profiles.yaml`, the ideal "expert" is called for each subtask:
* **The Architect (Gemini 2.0 Flash):** For highly complex, reasoning-intensive syntheses.
* **The Engineer (Qwen 2.5):** Specialized in precise code generation and technical problem solving.
* **The Steam Hammer (Mistral 7B):** Optimized for lightning-fast routing and asynchronous content compression.
* **The Guardian (Phi-3 Mini):** A local model via Ollama that guarantees maximum privacy for sensitive identity data.

#### 3. Hierarchical Lazy-Prompt Orchestration (WP-25b)
A technological highlight is the introduction of **lazy prompting**. Prompts are no longer maintained statically in code; they are resolved hierarchically at the moment a model is selected:
1. **Model level:** Instructions tuned for the specific model ID.
2. **Provider level:** Fallback instructions for OpenRouter or Ollama.
3. **Global level:** Safety instructions as the ultimate anchor.

This guarantees that every model is addressed in its "native tongue", which drastically improves answer quality; a minimal sketch of the resolution logic follows below.
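
The sketch below illustrates the three-level lookup; the prompt table and its keys are illustrative assumptions, not the actual `prompts.yaml` layout:

```python
# Sketch: three-level lazy prompt resolution (model -> provider -> global).
PROMPTS = {
    "model:qwen-2.5":      "You are a precise coding assistant ...",
    "provider:openrouter": "Answer concisely in the requested format ...",
    "default":             "Follow the global safety instructions ...",
}

def resolve_prompt(model_id: str, provider: str) -> tuple[str, str]:
    # Level 1: instructions tuned for the exact model ID.
    if f"model:{model_id}" in PROMPTS:
        return "Level 1", PROMPTS[f"model:{model_id}"]
    # Level 2: fallback instructions for the provider.
    if f"provider:{provider}" in PROMPTS:
        return "Level 2", PROMPTS[f"provider:{provider}"]
    # Level 3: the global anchor.
    return "Level 3", PROMPTS["default"]

level, prompt = resolve_prompt("qwen-2.5", "openrouter")
print(f"[PROMPT-TRACE] resolved at {level}")
```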

#### 4. The Indestructible Fallback Cascade
In V3.0, resilience is not a buzzword but an algorithm. If a cloud provider (such as OpenRouter) goes down or runs into a rate limit, the system reacts autonomously, as sketched below:
* Automatic switch to the backup profile (e.g. from Gemini to Llama).
* As a last resort: retreat to local hardware (Ollama/Phi-3), so Mindnet remains fully operational even offline.
* **Lazy re-formatting:** When the model changes, the prompt is immediately reloaded and optimized for the new model.
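
Conceptually, the cascade reduces to a small recursive loop. A sketch under the assumption that each profile carries a `fallback_profile` pointer; `synthesis_backup` is a hypothetical intermediate profile, and the provider call is a stand-in:

```python
# Sketch: recursive fallback cascade with cycle protection.
PROFILES = {
    "synthesis_pro":    {"provider": "openrouter", "fallback_profile": "synthesis_backup"},
    "synthesis_backup": {"provider": "openrouter", "fallback_profile": "identity_safe"},
    "identity_safe":    {"provider": "ollama", "fallback_profile": None},  # terminal endpoint
}

def call_provider(provider: str, prompt: str) -> str:
    # Stand-in for the real HTTP call; simulate a cloud outage here.
    if provider == "openrouter":
        raise ConnectionError("rate limited")
    return f"answer from {provider}"

def call_with_fallback(profile: str, prompt: str, visited: set | None = None) -> str:
    visited = visited or set()
    if profile in visited:
        raise RuntimeError(f"circular fallback chain at {profile!r}")
    visited.add(profile)
    cfg = PROFILES[profile]
    try:
        return call_provider(cfg["provider"], prompt)
    except ConnectionError:
        if cfg["fallback_profile"] is None:
            raise  # even the terminal local profile failed
        # Lazy re-formatting would reload the prompt for the new model here.
        return call_with_fallback(cfg["fallback_profile"], prompt, visited)

print(call_with_fallback("synthesis_pro", "Hello"))  # -> answer from ollama
```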

#### 5. High-Precision Intent Routing with Regex Cleaning
With the new, ultra-robust router in `DecisionEngine` v1.3.2, Mindnet recognizes user intents with surgical precision. Model artifacts (such as stop markers or stray tags from free models) are eliminated by aggressive regex filters before they can disturb the system routing; the sketch below shows the idea. This ensures that a coding question never mistakenly lands in facts mode.
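
A minimal sketch of such a filter; the artifact list mirrors the cleanup patterns shown in the config above, while the intent-label format is a hypothetical illustration:

```python
# Sketch: stripping model artifacts before intent routing.
import re

# \x60 is a backtick, so \x60{3} matches a ``` code-fence marker.
ARTIFACTS = re.compile(r"</?s>|\[/?OUT\]|\[/?S\]|\x60{3}(?:json)?", re.IGNORECASE)

def clean_intent(raw: str) -> str:
    cleaned = ARTIFACTS.sub("", raw).strip()
    # Keep only the first line; default to RAG if nothing survives.
    return cleaned.splitlines()[0].strip().upper() if cleaned else "RAG"

print(clean_intent("</s> tech_query\n[OUT]"))  # -> TECH_QUERY
```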

#### 6. Semantic Ingestion Validation v2.14.0
The quality of the knowledge graph is protected by a new validation layer. During import, Mindnet checks semantically whether proposed connections (edges) between pieces of information actually make sense. The system distinguishes between temporary network errors and permanent logic errors in order to preserve the integrity of your digital memory; the policy is sketched below.
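
A minimal sketch of that distinction; the exception mapping is an assumption for illustration, but the policy follows the description above (keep the edge on transient errors, reject it on permanent ones):

```python
# Sketch: differentiated edge validation (transient vs. permanent failures).
class PermanentValidationError(Exception):
    """The validator deems the proposed edge semantically wrong."""

def validate_edge(validator, edge: dict) -> bool:
    try:
        return validator(edge)  # LLM-based semantic check
    except (ConnectionError, TimeoutError):
        return True   # transient: accept now, re-validate on the next run
    except PermanentValidationError:
        return False  # permanent: drop the candidate edge

print(validate_edge(lambda e: True, {"source": "a", "target": "b"}))  # -> True
```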

---

### Technical Highlights for Power Users

| Feature | Technology | Benefit |
| :--- | :--- | :--- |
| **Orchestrator** | `DecisionEngine v1.3.2` | Agentic control & multi-stream retrieval |
| **Hybrid Cloud** | OpenRouter & Ollama | Maximum flexibility between performance and privacy |
| **Traceability** | `[PROMPT-TRACE]` logs | Full transparency about the AI instructions in use |
| **Context Guard** | Asynchronous compression | Context-window optimization for maximum cost efficiency |
| **Resilience** | Recursive fallback cascade | 100% availability through cloud-to-local automation |

---

### Conclusion: Your Brain, Extended by Mindnet V3.0
Mindnet V3.0 is the result of a consistent evolution toward a **zero-failure architecture**. Combining agentic intelligence, hybrid model usage, and the new lazy-prompt infrastructure, it provides a foundation that not only grows with your knowledge but actively helps translate that knowledge into strategic action.

**Welcome to the era of Mindnet V3.0 – your strategic partner is ready.**

---
*Documentation identifier: `mindnet_v3_core_release`*
*Sync status: WP-25b Final*
@@ -18,9 +18,12 @@ The repository is divided into **logical domains**.
*Audience: everyone*
| File | Content & purpose |
| :--- | :--- |
| `README.md` | **Entry point.** Overview of the documentation structure, quick access by role, and navigation. |
| `00_quickstart.md` | **Quick start.** Installation and first steps in 15 minutes. Ideal for new users. |
| `00_vision_and_strategy.md` | **Strategy.** Why are we building this? Principles (privacy, local-first), high-level architecture. |
| `00_glossary.md` | **Definitions.** What do "Smart Edge", "Traffic Control", and "Chunk" mean? Prevents terminology confusion. |
| `00_documentation_map.md` | **This index.** Navigation aid. |
| `00_quality_checklist.md` | **Quality assurance.** Systematic completeness checklist for all roles. |

### 📂 01_User_Manual (Usage)
*Audience: Mindmasters, authors, power users*
@@ -28,6 +31,8 @@ The repository is divided into **logical domains**.
| :--- | :--- |
| `01_chat_usage_guide.md` | **Operation.** How do I steer the personas (advisor, mirror)? How do I use the feedback feature? |
| `01_knowledge_design.md` | **Content rules.** The "bible" for the vault. Explains note types, matrix logic, and Markdown syntax. |
| `01_authoring_guidelines.md` | **Structuring content.** The primary tool for structuring knowledge so that Mindnet mirrors your personality, responds empathetically, and advises strategically. |
| `01_obsidian_integration_guide.md` | **Obsidian setup.** Technical guide to integrating Mindnet with Obsidian (Templater, scripts, workflows). |

### 📂 02_Concepts (Domain Logic)
*Audience: architects, product owners*
@@ -35,6 +40,7 @@ The repository is divided into **logical domains**.
| :--- | :--- |
| `02_concept_graph_logic.md` | **Graph theory.** Abstract explanation of nodes, edges, provenance, and idempotency. |
| `02_concept_ai_personality.md` | **AI behavior.** The concepts behind the hybrid router, the empathy model, and "Teach-the-AI". |
| `02_concept_architecture_patterns.md` | **Architecture patterns.** Design decisions, modular structure (WP-14), resilience patterns, and extensibility. |

### 📂 03_Technical_Reference (Tech & Code)
*Audience: developers, DevOps. (Contains JSON/YAML examples)*
@@ -45,19 +51,24 @@ The repository is divided into **logical domains**.
| `03_tech_retrieval_scoring.md` | **Search.** The mathematical formulas for scoring, hybrid search, and the explanation layer. |
| `03_tech_chat_backend.md` | **API & LLM.** Implementation of the router, traffic control (semaphore), and feedback traceability. |
| `03_tech_frontend.md` | **UI & graph.** Architecture of the Streamlit frontend, state management, Cytoscape integration, and editor logic. |
| `03_tech_configuration.md` | **Config.** Reference tables for `.env`, `types.yaml`, `decision_engine.yaml`, `llm_profiles.yaml`, `prompts.yaml`. **New:** Links between config files, a worked example, and a Mermaid diagram. |
| `03_tech_api_reference.md` | **API reference.** Complete documentation of all endpoints (`/query`, `/chat`, `/ingest`, `/graph`, etc.). |

### 📂 04_Operations (Operations)
*Audience: administrators*
| File | Content & purpose |
| :--- | :--- |
| `04_admin_operations.md` | **Runbook.** Installation, Docker setup, backup/restore, troubleshooting guide. |
| `04_server_operation_manual.md` | **Server operations.** Detailed documentation for running on llm-node (systemd, Borgmatic, disaster recovery). |
| `04_deployment_guide.md` | **Deployment.** CI/CD pipelines, rollout strategies, versioning, rollback, and pre/post-deployment checklists. |

### 📂 05_Development (Code)
*Audience: developers*
| File | Content & purpose |
| :--- | :--- |
| `05_developer_guide.md` | **Workflow.** Hardware setup (Win/Pi/Beelink), Git flow, test commands, module internals. |
| `05_genai_best_practices.md` | **AI workflow.** Prompt library, templates, and best practices for developing with LLMs. |
| `05_testing_guide.md` | **Testing.** Comprehensive test guide: strategies, frameworks, test data, best practices. |

### 📂 06_Roadmap & 99_Archive
*Audience: project management*
@@ -80,7 +91,9 @@ Use this matrix when you work on a work package, to keep the documentation consistent.
| **Retrieval / Scoring** | `03_tech_retrieval_scoring.md` (adjust formulas) |
| **Frontend / Visualization** | 1. `03_tech_frontend.md` (technical details)<br>2. `01_chat_usage_guide.md` (operation) |
| **Chat logic / Prompts** | 1. `02_concept_ai_personality.md` (concept)<br>2. `03_tech_chat_backend.md` (tech)<br>3. `01_chat_usage_guide.md` (user view) |
| **Architecture / Design patterns** | 1. `02_concept_architecture_patterns.md` (patterns & decisions)<br>2. `02_concept_graph_logic.md` (graph theory)<br>3. `05_developer_guide.md` (modular structure) |
| **Deployment / Server** | 1. `04_deployment_guide.md` (CI/CD, rollout)<br>2. `04_admin_operations.md` (installation, maintenance)<br>3. `04_server_operation_manual.md` (server operations) |
| **Testing / QA** | 1. `05_testing_guide.md` (test strategies & frameworks)<br>2. `05_developer_guide.md` (test commands) |
| **New features (general)** | `06_active_roadmap.md` (status update) |

---
@@ -108,4 +121,38 @@ So that this system stays maintainable (including for AI agents such as NotebookLM), the following applies
audience: developer
context: "Description of the scoring formula."
---
```

---

## 5. Quick Access & Recommendations

### For new users
1. Start with the **[Quickstart](00_quickstart.md)** for the installation
2. Read **[Vision & Strategy](00_vision_and_strategy.md)** for the big picture
3. Use the **[Chat Usage Guide](../01_User_Manual/01_chat_usage_guide.md)** for your first steps

### For developers
1. **[Developer Guide](../05_Development/05_developer_guide.md)** - Comprehensive technical guide
2. **[Technical References](../03_Technical_References/)** - Detailed API and architecture documentation
3. **[GenAI Best Practices](../05_Development/05_genai_best_practices.md)** - Workflow with LLMs

### For administrators
1. **[Admin Operations](../04_Operations/04_admin_operations.md)** - Installation and maintenance
2. **[Server Operations Manual](../04_Operations/04_server_operation_manual.md)** - Server operations and disaster recovery
3. **[Troubleshooting Guide](../04_Operations/04_admin_operations.md#33-troubleshooting-guide)** - Common problems and solutions

### For authors
1. **[Knowledge Design](../01_User_Manual/01_knowledge_design.md)** - Content rules and best practices
2. **[Authoring Guidelines](../01_User_Manual/01_authoring_guidelines.md)** - Structuring for the Digital Twin
3. **[Obsidian Integration](../01_User_Manual/01_obsidian_integration_guide.md)** - Workflow optimization

---

## 6. Documentation Status

**Current version:** 3.1.1
**Last updated:** 2026-01-02
**Status:** ✅ Complete and actively maintained

**Note:** This documentation is updated continuously. For questions or suggestions, please report them in the repository.
@@ -2,41 +2,73 @@
doc_type: glossary
audience: all
status: active
version: 4.5.8
context: "Central glossary for Mindnet v4.5.8. Covers definitions for hybrid-cloud resilience, WP-14 modularization, WP-15b two-pass ingestion, WP-15c multigraph support, WP-25 Agentic Multi-Stream RAG, WP-25a Mixture of Experts (MoE), WP-25b Lazy-Prompt Orchestration, WP-24c Phase 3 Agentic Edge Validation, and Mistral-safe parsing."
---

# Mindnet Glossary

**Sources:** `01_edge_vocabulary.md`, `llm_service.py`, `ingestion.py`, `edge_registry.py`, `registry.py`, `qdrant.py`

## Core Entities

* **Note:** Represents a Markdown file. The primary domain unit. Has a **status** (stable, draft, system) that influences scoring.
* **Chunk:** A text section of a note. The technical search unit (vector).
* **Edge:** A directed connection between two nodes. Validated by the registry since WP-22. Since v2.9.1, edges support **section-based links** (`target_section`), so multiple edges can exist between the same nodes as long as they point to different sections.
* **Vault:** The local folder containing the Markdown files (source of truth).
* **Frontmatter:** The YAML header at the top of a note (contains `id`, `type`, `title`, `status`).

## Components

* **Importer:** The Python script (`ingestion.py`) that reads Markdown and writes it to Qdrant.
* **Hybrid Router v5:** The logic that detects whether the user is asking a question (`RAG`) or giving a command (`INTERVIEW`).
* **Draft Editor:** The web UI component in which generated notes are edited.
* **Edge Registry:** The central service (SSOT) that validates edge types and resolves aliases to canonical types. Uses `01_edge_vocabulary.md` as its basis.
* **LLM Service:** The hybrid client (v3.3.6) that routes requests between OpenRouter, Google Gemini, and local Ollama. Manages cloud timeouts and quota handling. Now uses the neutral `registry.py` for text cleanup to avoid circular imports.
* **Retriever:** Since v2.7 consists of the orchestration layer (`retriever.py`) and the mathematical scoring engine (`retriever_scoring.py`). Encapsulated in the `app.core.retrieval` package since WP-14.
* **Decision Engine (WP-25):** The central **agentic orchestrator** that detects intents, orchestrates parallel knowledge streams, and synthesizes the results. Implements multi-stream retrieval and intent-based routing.
* **Agentic Multi-Stream RAG (WP-25):** Architecture paradigm in which user requests are split into parallel, specialized knowledge streams (values, facts, biography, risk, tech) that are queried simultaneously and synthesized into one context-rich answer.
* **Stream Tracing (WP-25):** Tags every hit with its stream of origin (`stream_origin`) to enable feedback optimization per knowledge area.
* **Intent-Based Routing (WP-25):** Hybrid mode for intent detection with a keyword fast path (instant recognition of triggers) and an LLM slow path (semantic analysis of ambiguous requests); see the sketch after this list.
* **Knowledge Synthesis (WP-25):** Template-based merging of the results from parallel streams using explicit stream variables (e.g. `{values_stream}`, `{risk_stream}`), enabling the LLM to weigh them against each other.
* **Traffic Control:** Manages priorities and throttles background tasks (e.g. smart edges) via semaphores and timeouts (45s) to prevent system hangs.
* **Unknown Edges Log:** The file `unknown_edges.jsonl`, in which the system records edge types that were not found in the dictionary.
* **Database Package (WP-14):** Centralized infrastructure package (`app.core.database`) that manages the Qdrant client (`qdrant.py`) and point mapping (`qdrant_points.py`).
* **LocalBatchCache (WP-15b):** A global in-memory index, built during the pass 1 scan, that holds metadata (IDs, titles, summaries) of all notes for edge validation.
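
A minimal sketch of the fast-path/slow-path split used by intent-based routing; the trigger table, intent labels, and classifier stub are hypothetical illustrations:

```python
# Sketch: hybrid intent detection with a keyword fast path and an LLM slow path.
FAST_TRIGGERS = {
    "code": "TECH_QUERY",
    "warum": "CAUSAL_QUERY",
    "interview": "INTERVIEW",
}

def detect_intent(question: str, llm_classify=None) -> str:
    q = question.lower()
    # Fast path: a known trigger word decides immediately, no LLM round-trip.
    for trigger, intent in FAST_TRIGGERS.items():
        if trigger in q:
            return intent
    # Slow path: ask a small model to classify ambiguous requests.
    return llm_classify(question) if llm_classify else "RAG"

print(detect_intent("Warum ist Projekt X blockiert?"))  # -> CAUSAL_QUERY
```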

## Concepts & Features

* **Active Intelligence:** Web editor feature that automatically suggests links while you write.
* **Healing Parser:** UI function that automatically repairs faulty LLM output (e.g. broken YAML).
* **Explanation Layer:** The layer that explains to the user *why* a search result was found (e.g. "Because project X depends on it").
* **Idempotency:** The importer's property of producing the same result on repeated runs, without duplicates.
* **Resurrection Pattern:** UI technique for preserving user input across tab switches.
* **Hybrid Provider Cascade:** The intelligent order in which models are addressed. If the cloud (OpenRouter/Gemini) fails, a fallback to local Ollama occurs after retries (quota protection).
* **Deep Fallback (v2.11.14, WP20):** A content-based rescue mechanism in the ingestion. Unlike the technical fallback (on connection errors), the deep fallback triggers when a cloud model responds successfully at the technical level but delivers no usable content (e.g. on "data policy violations").
* **Silent Refusal (WP20):** A state in which cloud providers (such as OpenRouter) refuse to process a document due to internal filters ("no data training") without sending an HTTP error. Caught by the deep fallback.
* **Rate-Limit Resilience (WP-20):** Automated detection of HTTP 429 errors. The system pauses (configurable via `LLM_RATE_LIMIT_WAIT`) and retries the cloud call before the slow fallback is triggered.
* **Mistral-safe Parsing:** Robust extraction logic in ingestion and analyzer that recognizes and removes technical control tokens (`<s>`, `[OUT]`) and framework tags in order to obtain valid JSON from free models.
* **Lifecycle Scoring (WP-22):** A mechanism that weights a note's relevance based on its status (e.g. bonus for `stable`, penalty for `draft`).
* **Intent Boosting:** Dynamic increase of edge weights based on the user's question (e.g. focus on `caused_by` for "why" questions).
* **Provenance Weighting:** Weighting of an edge by its origin:
  * `explicit`: Set by a human (priority 1).
  * `semantic_ai`: Extracted and validated by the AI in turbo mode (priority 2).
  * `structure`: Generated by system rules/matrix (priority 3).
* **Smart Edge Allocation (WP-15b):** AI procedure for checking the relevance of links to specific text sections. Validates candidates semantically against the target in the LocalBatchCache.
* **Matrix Logic:** Determines the edge type based on source and target entity (e.g. experience -> value = `based_on`).
* **Two-Pass Workflow (WP-15b):** Optimized ingestion procedure (a sketch follows after this list):
  * **Pass 1 (pre-scan):** Quick scan of all files to populate the LocalBatchCache.
  * **Pass 2 (semantic processing):** Deep processing (chunking, embedding, validation) only for changed files.
* **Circular Import Registry (WP-14):** Decoupling of core logic (such as text cleanup) into a neutral `registry.py` to prevent dependency cycles between services and ingestion utilities.
* **Deep Link / Section-Based Link:** A link such as `[[Note#Section]]` that points to a specific section within a note. Since v2.9.1 it is split into `target_id="Note"` and `target_section="Section"` to avoid "phantom nodes" and to enable multigraph support.
* **Atomic Section Logic (v3.9.9):** Chunking procedure that keeps section headings and their content atomic within chunks (pack-and-carry-over). Prevents headings from being split across chunk boundaries.
* **Registry-First Profiling (v2.13.12):** Hierarchical resolution of the chunking profile: frontmatter > types.yaml type config > global defaults. Ensures that note types automatically receive the correct profile.
* **Mixture of Experts (MoE) - WP-25a:** Profile-based expert architecture in which each system task (synthesis, ingestion validation, routing, compression) is assigned a dedicated profile that defines model, provider, and parameters independently of the global configuration.
* **LLM Profile:** Central definition in `llm_profiles.yaml` that sets provider, model, temperature, and fallback profile for a specific task (e.g. `synthesis_pro`, `tech_expert`, `ingest_validator`).
* **Fallback Cascade (WP-25a):** Recursive fallback logic that automatically switches to the `fallback_profile` on errors until the terminal endpoint (`identity_safe`) is reached. Protected against circular references via `visited_profiles` tracking.
* **Pre-Synthesis Compression (WP-25a):** Asynchronous condensation of overlong knowledge streams before synthesis, reducing token consumption and speeding up the synthesis. Uses `compression_profile` (e.g. `compression_fast`).
* **Profile-Driven Validation (WP-25a):** Semantic edge validation during ingestion always runs through the MoE profile `ingest_validator` (temperature 0.0 for determinism), independent of the global provider configuration.
* **Lazy-Prompt Orchestration (WP-25b):** Hierarchical prompt-resolution system that loads prompts only at the moment of a model switch, based on the exact active model. Enables model-specific tuning and maximum resilience during model fallbacks.
* **Hierarchical Prompt Resolution (WP-25b):** Three-stage resolution logic: level 1 (model ID) → level 2 (provider) → level 3 (default). Guarantees that every model receives the optimal template.
* **PROMPT-TRACE (WP-25b):** Logging mechanism that records the prompt-resolution level in use (`🎯 Level 1`, `📡 Level 2`, `⚓ Level 3`). Provides full transparency about the instructions used.
* **Ultra-Robust Intent Parsing (WP-25b):** Regex-based intent parser in the DecisionEngine that reliably strips model artifacts such as `[/S]`, `</s>`, or newlines to guarantee precise strategy routing.
* **Differentiated Ingestion Validation (WP-25b):** Distinction between transient errors (network, timeout) and permanent errors (config, validation). Transient errors allow the edge (avoiding data loss); permanent errors reject it (protecting graph quality).
* **Phase 3 Agentic Edge Validation (WP-24c v4.5.8):** Final validation gate for all edges with a `candidate:` prefix. Uses LLM-based semantic checking to verify knowledge links. Prevents "ghost links" and protects graph quality against misinterpretation.
* **candidate: Prefix (WP-24c v4.5.8):** Marker for unconfirmed edges in `rule_id` or `provenance`. All edges with this prefix are presented to the LLM validator in phase 3. After successful validation the prefix is removed.
* **verified Status (WP-24c v4.5.8):** Implicit status for edges after successful phase 3 validation. Edges without the `candidate:` prefix are considered verified and are written to the database.
* **Note Scope (WP-24c v4.2.0):** Global connections assigned to the note as a whole (not to a specific chunk). Defined by special header zones (e.g. `## Smart Edges`). In phase 3 validation, `note_summary` or `note_text` is used as context.
* **Chunk Scope (WP-24c v4.2.0):** Local connections assigned to a specific text section (chunk). In phase 3 validation, the specific chunk text is used as context if available.
* **Context Optimization (WP-24c v4.5.8):** Dynamic context selection in phase 3 validation based on `scope`. Note scope uses the aggregated note text, chunk scope the specific chunk text. Improves validation accuracy through matching context.
* **rejected_edges (WP-24c v4.5.8):** List of edges rejected in phase 3 validation. These edges are **not** written to the database and are ignored entirely. Prevents persistent "ghost links" in the knowledge graph.
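
A compact sketch of the two-pass workflow mentioned above; all helper names are illustrative assumptions, not the real `ingestion.py` API:

```python
# Sketch: two-pass ingestion (WP-15b).
from pathlib import Path

def process_deeply(md: Path, cache: dict) -> None:
    pass  # placeholder for chunking, embedding, and edge validation

def two_pass_ingest(vault: Path, changed: set) -> dict:
    # Pass 1 (pre-scan): cheap metadata harvest over the whole vault, so edge
    # validation in pass 2 can resolve every link target from memory.
    cache = {md.stem: {"path": md, "title": md.stem} for md in vault.rglob("*.md")}
    # Pass 2 (semantic processing): deep work only for changed files.
    for md in changed:
        process_deeply(md, cache)
    return cache
```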

docs/00_General/00_quality_checklist.md  (new file, +180 lines)
@@ -0,0 +1,180 @@
---
doc_type: quality_assurance
audience: all
status: active
version: 4.5.8
context: "Documentation quality check for all roles: completeness, correctness, and applicability. Includes WP-24c Phase 3 Agentic Edge Validation, automatic mirror edges, and note-scope zones."
---

# Documentation Quality Check

This checklist is used to systematically verify that the documentation fully answers every role's questions.

## ✅ Developers

### Setup & Installation
- [x] **Local setup:** [Developer Guide](../05_Development/05_developer_guide.md#6-lokales-setup-development)
- [x] **Quick start:** [Quickstart](00_quickstart.md)
- [x] **Hardware requirements:** [Admin Operations](../04_Operations/04_admin_operations.md#11-voraussetzungen)

### Architecture & Code
- [x] **Modular structure:** [Developer Guide - Architecture](../05_Development/05_developer_guide.md#4-projektstruktur--modul-referenz-deep-dive)
- [x] **Design patterns:** [Architecture Patterns](../02_concepts/02_concept_architecture_patterns.md)
- [x] **API reference:** [API Reference](../03_Technical_References/03_tech_api_reference.md)
- [x] **Data model:** [Data Model](../03_Technical_References/03_tech_data_model.md)

### Development & Extension
- [x] **Workflow:** [Developer Guide - Workflow](../05_Development/05_developer_guide.md#7-der-entwicklungs-zyklus-workflow)
- [x] **Extension guide:** [Teach-the-AI](../05_Development/05_developer_guide.md#8-erweiterungs-guide-teach-the-ai)
- [x] **GenAI best practices:** [GenAI Best Practices](../05_Development/05_genai_best_practices.md)

### Testing
- [x] **Test strategies:** [Testing Guide](../05_Development/05_testing_guide.md)
- [x] **Test frameworks:** [Testing Guide - Frameworks](../05_Development/05_testing_guide.md#3-test-frameworks--tools)
- [x] **Test data:** [Testing Guide - Test Data](../05_Development/05_testing_guide.md#2-test-daten--vaults)

### Debugging & Troubleshooting
- [x] **Troubleshooting:** [Developer Guide - Troubleshooting](../05_Development/05_developer_guide.md#10-troubleshooting--one-liners)
- [x] **Debug tools:** [Testing Guide - Debugging](../05_Development/05_testing_guide.md#7-debugging--diagnose)

---

## ✅ Administrators

### Installation & Setup
- [x] **Installation:** [Admin Operations](../04_Operations/04_admin_operations.md#1-installation--setup)
- [x] **Docker setup:** [Admin Operations - Qdrant](../04_Operations/04_admin_operations.md#12-qdrant-docker)
- [x] **Systemd services:** [Admin Operations - Deployment](../04_Operations/04_admin_operations.md#2-deployment-systemd-services)

### Operations & Maintenance
- [x] **Monitoring:** [Admin Operations - Maintenance](../04_Operations/04_admin_operations.md#3-wartung--monitoring)
- [x] **Backup & restore:** [Admin Operations - Backup](../04_Operations/04_admin_operations.md#4-backup--restore)
- [x] **Troubleshooting:** [Admin Operations - Troubleshooting](../04_Operations/04_admin_operations.md#33-troubleshooting-guide)

### Server Operations
- [x] **Server configuration:** [Server Operations Manual](../04_Operations/04_server_operation_manual.md)
- [x] **Disaster recovery:** [Server Operations - DR](../04_Operations/04_server_operation_manual.md#5-disaster-recovery-wiederherstellung-two-stage-dr)
- [x] **Backup strategy:** [Server Operations - Backup](../04_Operations/04_server_operation_manual.md#4-backup-strategie-borgmatic)

### Configuration
- [x] **ENV variables:** [Configuration Reference](../03_Technical_References/03_tech_configuration.md#1-environment-variablen-env)
- [x] **YAML configs:** [Configuration Reference - YAML](../03_Technical_References/03_tech_configuration.md#2-typ-registry-typesyaml)
- [x] **Phase 3 validation:** [Configuration Reference - ENV](../03_Technical_References/03_tech_configuration.md#1-environment-variablen-env) (MINDNET_LLM_VALIDATION_HEADERS, MINDNET_NOTE_SCOPE_ZONE_HEADERS)
- [x] **LLM profiles:** [Configuration Reference - LLM Profiles](../03_Technical_References/03_tech_configuration.md#6-llm-profile-registry-llm_profilesyaml-v130)

---

## ✅ Users

### First Steps
- [x] **Quick start:** [Quickstart](00_quickstart.md)
- [x] **What is Mindnet:** [Vision & Strategy](00_vision_and_strategy.md)
- [x] **Basics:** [Glossary](00_glossary.md)

### Usage
- [x] **Chat operation:** [Chat Usage Guide](../01_User_Manual/01_chat_usage_guide.md)
- [x] **Graph explorer:** [Chat Usage Guide - Graph](../01_User_Manual/01_chat_usage_guide.md#22-modus--graph-explorer-cytoscape)
- [x] **Editor:** [Chat Usage Guide - Editor](../01_User_Manual/01_chat_usage_guide.md#23-modus--manueller-editor)

### Content Creation
- [x] **Knowledge design:** [Knowledge Design Manual](../01_User_Manual/01_knowledge_design.md)
- [x] **Authoring guidelines:** [Authoring Guidelines](../01_User_Manual/01_authoring_guidelines.md)
- [x] **Obsidian integration:** [Obsidian Integration](../01_User_Manual/01_obsidian_integration_guide.md)
- [x] **Note-scope zones:** [Note-Scope Zones](../01_User_Manual/NOTE_SCOPE_ZONEN.md) (WP-24c v4.2.0)
- [x] **LLM validation:** [LLM Validation of Links](../01_User_Manual/LLM_VALIDIERUNG_VON_LINKS.md) (WP-24c v4.5.8)

### Frequently Asked Questions
- [x] **How do I structure notes?** → [Knowledge Design](../01_User_Manual/01_knowledge_design.md)
- [x] **Which note types exist?** → [Knowledge Design - Type Reference](../01_User_Manual/01_knowledge_design.md#31-typ-referenz--stream-logik)
- [x] **How do I link notes?** → [Knowledge Design - Edges](../01_User_Manual/01_knowledge_design.md#4-edges--verlinkung)
- [x] **How do I use the chat?** → [Chat Usage Guide](../01_User_Manual/01_chat_usage_guide.md)
- [x] **What are automatic mirror edges?** → [Knowledge Design - Mirror Edges](../01_User_Manual/01_knowledge_design.md#43-automatische-spiegelkanten-invers-logik---wp-24c-v458)
- [x] **What is phase 3 validation?** → [Knowledge Design - Phase 3](../01_User_Manual/01_knowledge_design.md#44-explizite-vs-validierte-kanten-phase-3-validierung---wp-24c-v458)
- [x] **What are note-scope zones?** → [Note-Scope Zones](../01_User_Manual/NOTE_SCOPE_ZONEN.md)
- [x] **When do I use explicit vs. validated links?** → [Knowledge Design - Explicit vs. Validated](../01_User_Manual/01_knowledge_design.md#44-explizite-vs-validierte-kanten-phase-3-validierung---wp-24c-v458)

---

## ✅ Testers

### Test Strategies
- [x] **Test pyramid:** [Testing Guide - Strategies](../05_Development/05_testing_guide.md#1-test-strategie--ebenen)
- [x] **Unit tests:** [Testing Guide - Unit Tests](../05_Development/05_testing_guide.md#11-unit-tests-pytest)
- [x] **Integration tests:** [Testing Guide - Integration](../05_Development/05_testing_guide.md#12-integration-tests)
- [x] **E2E tests:** [Testing Guide - E2E](../05_Development/05_testing_guide.md#13-e2e--smoke-tests)

### Test Frameworks
- [x] **Pytest:** [Testing Guide - Frameworks](../05_Development/05_testing_guide.md#31-pytest-unit-tests)
- [x] **Unittest:** [Testing Guide - Unittest](../05_Development/05_testing_guide.md#32-unittest-e2e-tests)
- [x] **Shell scripts:** [Testing Guide - Shell](../05_Development/05_testing_guide.md#33-shell-skripte-e2e-roundtrip)

### Test Data & Tools
- [x] **Create a test vault:** [Testing Guide - Test Data](../05_Development/05_testing_guide.md#21-test-vault-erstellen)
- [x] **Test scripts:** [Developer Guide - Scripts](../05_Development/05_developer_guide.md#44-scripts--tooling-die-admin-toolbox)
- [x] **Test checklist:** [Testing Guide - Checklist](../05_Development/05_testing_guide.md#8-test-checkliste-für-pull-requests)

---

## ✅ Deployment

### Deployment Processes
- [x] **Deployment guide:** [Deployment Guide](../04_Operations/04_deployment_guide.md)
- [x] **CI/CD pipeline:** [Deployment Guide - CI/CD](../04_Operations/04_deployment_guide.md#9-cicd-pipeline-details)
- [x] **Rollout strategies:** [Deployment Guide - Rollout](../04_Operations/04_deployment_guide.md#4-rollout-strategien)

### Versioning & Releases
- [x] **Version schema:** [Deployment Guide - Versioning](../04_Operations/04_deployment_guide.md#51-version-schema)
- [x] **Release process:** [Deployment Guide - Release](../04_Operations/04_deployment_guide.md#52-release-prozess)

### Rollback & Recovery
- [x] **Rollback strategies:** [Deployment Guide - Rollback](../04_Operations/04_deployment_guide.md#6-rollback-strategien)
- [x] **Disaster recovery:** [Server Operations - DR](../04_Operations/04_server_operation_manual.md#5-disaster-recovery-wiederherstellung-two-stage-dr)

### Pre/Post-Deployment
- [x] **Pre-deployment checklist:** [Deployment Guide - Checklist](../04_Operations/04_deployment_guide.md#7-pre-deployment-checkliste)
- [x] **Post-deployment validation:** [Deployment Guide - Validation](../04_Operations/04_deployment_guide.md#8-post-deployment-validierung)

---

## 📊 Summary

### Completeness by Role

| Role | Topics covered | Status |
| :--- | :--- | :--- |
| **Developers** | Setup, architecture, code, testing, debugging | ✅ Complete |
| **Administrators** | Installation, operations, maintenance, backup, DR | ✅ Complete |
| **Users** | Usage, content creation, workflows | ✅ Complete |
| **Testers** | Test strategies, frameworks, tools | ✅ Complete |
| **Deployment** | CI/CD, rollout, versioning, rollback | ✅ Complete |

### New Documents

1. ✅ `05_testing_guide.md` - Comprehensive test guide
2. ✅ `04_deployment_guide.md` - Complete deployment guide
3. ✅ `02_concept_architecture_patterns.md` - Architecture patterns
4. ✅ `03_tech_api_reference.md` - API reference
5. ✅ `00_quickstart.md` - Quick-start guide
6. ✅ `README.md` - Documentation entry point

### Updated Documents

1. ✅ `00_documentation_map.md` - All new documents included
2. ✅ `04_admin_operations.md` - Troubleshooting extended, phase 3 validation documented
3. ✅ `05_developer_guide.md` - Modular structure added, WP-24c phase 3 documented
4. ✅ `03_tech_ingestion_pipeline.md` - Background tasks documented, phase 3 agentic validation added
5. ✅ `03_tech_configuration.md` - Missing ENV variables added, WP-24c configuration documented
6. ✅ `00_vision_and_strategy.md` - Design decisions added
7. ✅ `01_knowledge_design.md` - Automatic mirror edges, phase 3 validation, note-scope zones documented
8. ✅ `02_concept_graph_logic.md` - Phase 3 validation, automatic mirror edges, note scope vs. chunk scope documented
9. ✅ `03_tech_data_model.md` - candidate: prefix, verified status, virtual flag documented
10. ✅ `NOTE_SCOPE_ZONEN.md` - Phase 3 validation integrated
11. ✅ `LLM_VALIDIERUNG_VON_LINKS.md` - Phase 3 instead of global_pool, context optimization documented
12. ✅ `05_testing_guide.md` - WP-24c test scenarios added

---

**Status:** ✅ All roles fully covered
**Last check:** 2025-01-XX
**Version:** 2.9.1

docs/00_General/00_quickstart.md  (new file, +156 lines)
@@ -0,0 +1,156 @@
---
doc_type: quickstart_guide
audience: user, developer, admin
status: active
version: 2.9.1
context: "Quick-start guide for new Mindnet users"
---

# Mindnet Quick Start

This guide gets you up and running with Mindnet in 15 minutes.

## 🎯 What is Mindnet?

Mindnet is a **personal AI memory** that:
- Stores your knowledge in Markdown notes
- Links it semantically (knowledge graph)
- Acts as an intelligent dialogue partner (RAG chat)
- Runs **locally and privately** (privacy first)

## 📋 Prerequisites

- **Python 3.10+** installed
- **Docker** installed (for Qdrant)
- **Ollama** installed (for local LLMs)
- Optional: **Obsidian** (for comfortable writing)

## ⚡ Installation (5 minutes)

### Step 1: Clone the repository

```bash
git clone <repository-url> mindnet
cd mindnet
```

### Step 2: Create a virtual environment

```bash
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
```

### Step 3: Install dependencies

```bash
pip install -r requirements.txt
```

### Step 4: Start Qdrant (Docker)

```bash
docker compose up -d qdrant
```

### Step 5: Pull the Ollama models

```bash
ollama pull phi3:mini
ollama pull nomic-embed-text
```

### Step 6: Adjust the configuration

Create a `.env` file in the project root:

```ini
QDRANT_URL=http://localhost:6333
MINDNET_OLLAMA_URL=http://localhost:11434
MINDNET_LLM_MODEL=phi3:mini
MINDNET_EMBEDDING_MODEL=nomic-embed-text
COLLECTION_PREFIX=mindnet
MINDNET_VAULT_ROOT=./vault
```

## 🚀 First Steps (5 minutes)

### Step 1: Start the backend

```bash
uvicorn app.main:app --reload --port 8001
```

### Step 2: Start the frontend (new terminal)

```bash
streamlit run app/frontend/ui.py --server.port 8501
```

### Step 3: Open the browser

Open `http://localhost:8501` in your browser.

### Step 4: Import your first note

Create a test note in the `vault/` folder:

```markdown
---
id: 20250101-test
title: My first note
type: concept
status: active
---

# My first note

This is a test note for Mindnet.

[[rel:related_to Mindnet]]
```

Then import it:

```bash
python3 -m scripts.import_markdown --vault ./vault --prefix mindnet --apply
```

### Step 5: Your first chat request

Type in the browser chat:
```
What is Mindnet?
```

## 📚 Next Steps

After the quick start, I recommend:

1. **[Chat Usage Guide](../01_User_Manual/01_chat_usage_guide.md)** - Get to know the chat features
2. **[Knowledge Design](../01_User_Manual/01_knowledge_design.md)** - Understand how to structure your notes
3. **[Authoring Guidelines](../01_User_Manual/01_authoring_guidelines.md)** - Learn best practices for writing

## 🆘 Help & Troubleshooting

**Problem:** Qdrant does not start
- **Solution:** Check whether Docker is running: `docker ps`

**Problem:** Ollama model not found
- **Solution:** Check with `ollama list` whether the models are pulled

**Problem:** Import fails
- **Solution:** Check the logs and make sure Qdrant is running

For detailed troubleshooting information see [Admin Operations](../04_Operations/04_admin_operations.md#33-troubleshooting-guide).

## 🔗 Further Resources

- **[Documentation Map](00_documentation_map.md)** - Overview of all documents
- **[Glossary](00_glossary.md)** - Key terms explained
- **[Vision & Strategy](00_vision_and_strategy.md)** - The philosophy behind Mindnet

---

**Good luck with Mindnet!** 🚀

Some files were not shown because too many files have changed in this diff.