Merge pull request 'WP19' (#10) from WP19 into main
All checks were successful
Deploy mindnet to llm-node / deploy (push) Successful in 3s
Reviewed-on: #10 Merge feature/wp19-graph-viz into main

WP-19: Frontend Modularization & Advanced Graph Visualization

This update comprehensively transforms the frontend architecture:
- **Refactoring:** The monolithic `ui.py` was split into a modular structure (`ui_graph_service.py`, `ui_editor.py`, `ui_cytoscape.py`, etc.).
- **Feature (Graph):** Integration of `st-cytoscape` with the COSE layout for stable, overlap-free visualization ("Active Inspector, Passive Graph" pattern).
- **Feature (Editor):** Implementation of the "Single Source of Truth" logic. The editor now loads content directly from the filesystem instead of from database payloads.
- **Feature (UX):** Layout settings and graph depth are now persisted via URL parameters (deep linking).
- **Documentation:** Created `03_tech_frontend.md` and updated all relevant guides.
- **Dependencies:** Switched from `streamlit-cytoscapejs` to `st-cytoscape`.
This commit is contained in:
commit 84e319a42f

@@ -1,533 +1,50 @@
import streamlit as st
import requests
import uuid
import os
import json
import re
import yaml
import unicodedata
from datetime import datetime
from pathlib import Path
from dotenv import load_dotenv
# --- CONFIGURATION ---
load_dotenv()
API_BASE_URL = os.getenv("MINDNET_API_URL", "http://localhost:8002")
CHAT_ENDPOINT = f"{API_BASE_URL}/chat"
FEEDBACK_ENDPOINT = f"{API_BASE_URL}/feedback"
INGEST_ANALYZE_ENDPOINT = f"{API_BASE_URL}/ingest/analyze"
INGEST_SAVE_ENDPOINT = f"{API_BASE_URL}/ingest/save"
HISTORY_FILE = Path("data/logs/search_history.jsonl")

# Timeout strategy: prefer the dedicated API timeout, fall back to the LLM timeout
timeout_setting = os.getenv("MINDNET_API_TIMEOUT") or os.getenv("MINDNET_LLM_TIMEOUT")
API_TIMEOUT = float(timeout_setting) if timeout_setting else 300.0
# --- CONFIG & STYLING ---
# Note: st.set_page_config may only be called once, as the first Streamlit command.
st.set_page_config(page_title="mindnet v2.6", page_icon="🧠", layout="wide")
st.markdown("""
<style>
.block-container { padding-top: 2rem; max-width: 1200px; margin: auto; }
.intent-badge { background-color: #e8f0fe; color: #1a73e8; padding: 4px 10px; border-radius: 12px; font-size: 0.8rem; font-weight: 600; border: 1px solid #d2e3fc; display: inline-block; margin-bottom: 0.5rem; }
.draft-box { border: 1px solid #d0d7de; border-radius: 6px; padding: 16px; background-color: #f6f8fa; margin: 10px 0; }
.preview-box { border: 1px solid #e0e0e0; border-radius: 6px; padding: 24px; background-color: white; font-family: -apple-system,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif; }
</style>
""", unsafe_allow_html=True)
# --- MODULE IMPORTS ---
try:
    from ui_config import QDRANT_URL, QDRANT_KEY, COLLECTION_PREFIX
    from ui_graph_service import GraphExplorerService

    # Components
    from ui_sidebar import render_sidebar
    from ui_chat import render_chat_interface
    from ui_editor import render_manual_editor

    # The two graph engines
    from ui_graph import render_graph_explorer as render_graph_agraph
    from ui_graph_cytoscape import render_graph_explorer_cytoscape

except ImportError as e:
    st.error(f"Import Error: {e}. Please make sure all UI files are in this folder and 'st-cytoscape' is installed.")
    st.stop()
# --- SESSION STATE ---
|
||||
if "messages" not in st.session_state: st.session_state.messages = []
|
||||
if "user_id" not in st.session_state: st.session_state.user_id = str(uuid.uuid4())
|
||||
|
||||
# --- HELPER FUNCTIONS ---
|
||||
# --- SERVICE INIT ---
|
||||
graph_service = GraphExplorerService(QDRANT_URL, QDRANT_KEY, COLLECTION_PREFIX)
|
||||
|
||||
def slugify(value):
    if not value: return ""
    value = str(value).lower()
    replacements = {'ä': 'ae', 'ö': 'oe', 'ü': 'ue', 'ß': 'ss', '&': 'und', '+': 'und'}
    for k, v in replacements.items():
        value = value.replace(k, v)

    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value).strip()
    return re.sub(r'[-\s]+', '-', value)
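Run standalone, the umlaut transliteration behaves like this (a condensed copy of the helper above, for illustration only):

```python
import re
import unicodedata

def slugify(value):
    # Condensed copy of the helper above, for a standalone demo
    if not value: return ""
    value = str(value).lower()
    for k, v in {'ä': 'ae', 'ö': 'oe', 'ü': 'ue', 'ß': 'ss', '&': 'und', '+': 'und'}.items():
        value = value.replace(k, v)
    # Strip any remaining non-ASCII marks, then squash punctuation and whitespace
    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value).strip()
    return re.sub(r'[-\s]+', '-', value)

print(slugify("Ärger & Stress-Bewältigung!"))  # aerger-und-stress-bewaeltigung
```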
def normalize_meta_and_body(meta, body):
    ALLOWED_KEYS = {"title", "type", "status", "tags", "id", "created", "updated", "aliases", "lang"}
    clean_meta = {}
    extra_content = []

    if "titel" in meta and "title" not in meta:
        meta["title"] = meta.pop("titel")

    tag_candidates = ["tags", "emotionale_keywords", "keywords", "schluesselwoerter"]
    all_tags = []
    for key in tag_candidates:
        if key in meta:
            val = meta[key]
            if isinstance(val, list): all_tags.extend(val)
            elif isinstance(val, str): all_tags.extend([t.strip() for t in val.split(",")])

    for key, val in meta.items():
        if key in ALLOWED_KEYS:
            clean_meta[key] = val
        elif key in tag_candidates:
            pass
        else:
            if val and isinstance(val, str):
                header = key.replace("_", " ").title()
                extra_content.append(f"## {header}\n{val}\n")

    if all_tags:
        clean_tags = []
        for t in all_tags:
            t_clean = str(t).replace("#", "").strip()
            if t_clean: clean_tags.append(t_clean)
        clean_meta["tags"] = list(set(clean_tags))

    if extra_content:
        new_section = "\n".join(extra_content)
        final_body = f"{new_section}\n{body}"
    else:
        final_body = body

    return clean_meta, final_body
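The demotion of unknown frontmatter keys into `##` body sections can be seen in isolation (the `lessons_learned` key is hypothetical, chosen just for the demo):

```python
# Unknown metadata keys are moved out of the frontmatter into body sections.
meta = {"title": "Demo", "lessons_learned": "Ship smaller batches."}
ALLOWED_KEYS = {"title", "type", "status", "tags"}

extra_content = []
for key, val in meta.items():
    if key not in ALLOWED_KEYS and isinstance(val, str) and val:
        header = key.replace("_", " ").title()  # lessons_learned -> Lessons Learned
        extra_content.append(f"## {header}\n{val}\n")

print(extra_content[0])
```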
def parse_markdown_draft(full_text):
    """
    HEALING PARSER: Repairs broken LLM output (e.g. a missing closing '---').
    """
    clean_text = full_text.strip()

    # 1. Remove code-block wrappers
    pattern_block = r"```(?:markdown|md|yaml)?\s*(.*?)\s*```"
    match_block = re.search(pattern_block, clean_text, re.DOTALL | re.IGNORECASE)
    if match_block:
        clean_text = match_block.group(1).strip()

    meta = {}
    body = clean_text
    yaml_str = ""

    # 2. Attempt A: standard split (ideal case)
    parts = re.split(r"^---+\s*$", clean_text, maxsplit=2, flags=re.MULTILINE)

    if len(parts) >= 3:
        yaml_str = parts[1]
        body = parts[2]

    # 3. Attempt B: healing (when the LLM forgot the closing ---)
    elif clean_text.startswith("---"):
        # Look for the first '#' heading, since the frontmatter must come before it.
        # Pattern: match --- at the start, then take everything up to the first # at a line start.
        fallback_match = re.search(r"^---\s*(.*?)(?=\n#)", clean_text, re.DOTALL | re.MULTILINE)
        if fallback_match:
            yaml_str = fallback_match.group(1)
            # The body is everything AFTER the YAML group (including the '#').
            # Slice after the group instead of using str.replace: the '\s*' in
            # the pattern swallows the newline after '---', so the literal
            # f"---{yaml_str}" would never occur in clean_text.
            body = clean_text[fallback_match.end(1):].strip()

    # 4. YAML parsing
    if yaml_str:
        yaml_str_clean = yaml_str.replace("#", "")  # strip '#' so tag lists parse cleanly
        try:
            parsed = yaml.safe_load(yaml_str_clean)
            if isinstance(parsed, dict):
                meta = parsed
        except Exception as e:
            print(f"YAML Parsing Warning: {e}")

    # Fallback: title from the first H1
    if not meta.get("title"):
        h1_match = re.search(r"^#\s+(.*)$", body, re.MULTILINE)
        if h1_match:
            meta["title"] = h1_match.group(1).strip()

    # Correction: type/status swap
    if meta.get("type") == "draft":
        meta["status"] = "draft"
        meta["type"] = "experience"

    return normalize_meta_and_body(meta, body)
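The healing fallback can be exercised on its own. This sketch recovers the frontmatter of a draft whose closing `---` is missing; the body is taken by slicing after the matched group, which sidesteps the newline that `\s*` swallows:

```python
import re

# An LLM draft that forgot the closing '---' of its frontmatter
broken = "---\ntitle: Demo Note\ntype: concept\n# Demo Note\nSome body text."

# Attempt B: the frontmatter is everything between the opening '---'
# and the first '#' heading at a line start
m = re.search(r"^---\s*(.*?)(?=\n#)", broken, re.DOTALL | re.MULTILINE)
yaml_str = m.group(1)
body = broken[m.end(1):].strip()  # everything after the YAML block

print(repr(yaml_str))  # 'title: Demo Note\ntype: concept'
print(repr(body))      # '# Demo Note\nSome body text.'
```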
def build_markdown_doc(meta, body):
    """Assembles the final document."""
    if "id" not in meta or meta["id"] == "generated_on_save":
        raw_title = meta.get('title', 'note')
        clean_slug = slugify(raw_title)[:50] or "note"
        meta["id"] = f"{datetime.now().strftime('%Y%m%d')}-{clean_slug}"

    meta["updated"] = datetime.now().strftime("%Y-%m-%d")

    ordered_meta = {}
    prio_keys = ["id", "type", "title", "status", "tags"]
    for k in prio_keys:
        if k in meta: ordered_meta[k] = meta.pop(k)
    ordered_meta.update(meta)

    try:
        yaml_str = yaml.dump(ordered_meta, default_flow_style=None, sort_keys=False, allow_unicode=True).strip()
    except yaml.YAMLError:
        yaml_str = "error: generating_yaml"

    return f"---\n{yaml_str}\n---\n\n{body}"
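The priority ordering of the frontmatter keys can be checked standalone:

```python
# Priority keys come first; everything else keeps its original insertion order
meta = {"created": "2024-01-01", "title": "Demo", "id": "x1", "tags": ["a"], "type": "concept"}

ordered_meta = {}
for k in ["id", "type", "title", "status", "tags"]:
    if k in meta: ordered_meta[k] = meta.pop(k)
ordered_meta.update(meta)  # remaining keys are appended afterwards

print(list(ordered_meta))  # ['id', 'type', 'title', 'tags', 'created']
```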
def load_history_from_logs(limit=10):
    queries = []
    if HISTORY_FILE.exists():
        try:
            with open(HISTORY_FILE, "r", encoding="utf-8") as f:
                lines = f.readlines()
            for line in reversed(lines):
                try:
                    entry = json.loads(line)
                    q = entry.get("query_text")
                    if q and q not in queries:
                        queries.append(q)
                    if len(queries) >= limit: break
                except json.JSONDecodeError: continue
        except OSError: pass
    return queries
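The newest-first de-duplication over the JSONL log can be sketched without touching the filesystem (in-memory lines stand in for the history file):

```python
import json

# In-memory stand-in for data/logs/search_history.jsonl
lines = [json.dumps({"query_text": q}) for q in ["alpha", "beta", "alpha", "gamma"]]

queries = []
for line in reversed(lines):  # newest entries sit at the end of the file
    q = json.loads(line).get("query_text")
    if q and q not in queries:
        queries.append(q)

print(queries)  # ['gamma', 'alpha', 'beta']
```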
# --- API CLIENT ---

def send_chat_message(message: str, top_k: int, explain: bool):
    try:
        response = requests.post(
            CHAT_ENDPOINT,
            json={"message": message, "top_k": top_k, "explain": explain},
            timeout=API_TIMEOUT
        )
        response.raise_for_status()
        return response.json()
    except Exception as e:
        return {"error": str(e)}

def analyze_draft_text(text: str, n_type: str):
    try:
        response = requests.post(
            INGEST_ANALYZE_ENDPOINT,
            json={"text": text, "type": n_type},
            timeout=15
        )
        response.raise_for_status()
        return response.json()
    except Exception as e:
        return {"error": str(e)}

def save_draft_to_vault(markdown_content: str, filename: str = None):
    try:
        response = requests.post(
            INGEST_SAVE_ENDPOINT,
            json={"markdown_content": markdown_content, "filename": filename},
            timeout=API_TIMEOUT
        )
        response.raise_for_status()
        return response.json()
    except Exception as e:
        return {"error": str(e)}

def submit_feedback(query_id, node_id, score, comment=None):
    try:
        requests.post(FEEDBACK_ENDPOINT, json={"query_id": query_id, "node_id": node_id, "score": score, "comment": comment}, timeout=2)
        st.toast(f"Feedback ({score}) gesendet!")
    except Exception: pass
# --- UI COMPONENTS ---

def render_sidebar():
    with st.sidebar:
        st.title("🧠 mindnet")
        st.caption("v2.5 | Healing Parser")
        mode = st.radio("Modus", ["💬 Chat", "📝 Manueller Editor"], index=0)
        st.divider()
        st.subheader("⚙️ Settings")
        top_k = st.slider("Quellen (Top-K)", 1, 10, 5)
        explain = st.toggle("Explanation Layer", True)
        st.divider()
        st.subheader("🕒 Verlauf")
        for q in load_history_from_logs(8):
            if st.button(f"🔎 {q[:25]}...", key=f"hist_{q}", use_container_width=True):
                st.session_state.messages.append({"role": "user", "content": q})
                st.rerun()
    return mode, top_k, explain
def render_draft_editor(msg):
    if "query_id" not in msg or not msg["query_id"]:
        msg["query_id"] = str(uuid.uuid4())

    qid = msg["query_id"]
    key_base = f"draft_{qid}"

    # State keys
    data_meta_key = f"{key_base}_data_meta"
    data_sugg_key = f"{key_base}_data_suggestions"
    widget_body_key = f"{key_base}_widget_body"
    data_body_key = f"{key_base}_data_body"

    # --- 1. INIT STATE ---
    if f"{key_base}_init" not in st.session_state:
        meta, body = parse_markdown_draft(msg["content"])
        if "type" not in meta: meta["type"] = "default"
        if "title" not in meta: meta["title"] = ""
        tags = meta.get("tags", [])
        meta["tags_str"] = ", ".join(tags) if isinstance(tags, list) else str(tags)

        # Persistent data
        st.session_state[data_meta_key] = meta
        st.session_state[data_sugg_key] = []
        st.session_state[data_body_key] = body.strip()

        # Init widget keys
        st.session_state[f"{key_base}_wdg_title"] = meta["title"]
        st.session_state[f"{key_base}_wdg_type"] = meta["type"]
        st.session_state[f"{key_base}_wdg_tags"] = meta["tags_str"]

        st.session_state[f"{key_base}_init"] = True

    # --- 2. RESURRECTION ---
    if widget_body_key not in st.session_state and data_body_key in st.session_state:
        st.session_state[widget_body_key] = st.session_state[data_body_key]

    # --- CALLBACKS ---
    def _sync_meta():
        meta = st.session_state[data_meta_key]
        meta["title"] = st.session_state.get(f"{key_base}_wdg_title", "")
        meta["type"] = st.session_state.get(f"{key_base}_wdg_type", "default")
        meta["tags_str"] = st.session_state.get(f"{key_base}_wdg_tags", "")
        st.session_state[data_meta_key] = meta

    def _sync_body():
        st.session_state[data_body_key] = st.session_state[widget_body_key]

    def _insert_text(text_to_insert):
        current = st.session_state.get(widget_body_key, "")
        new_text = f"{current}\n\n{text_to_insert}"
        st.session_state[widget_body_key] = new_text
        st.session_state[data_body_key] = new_text

    def _remove_text(text_to_remove):
        current = st.session_state.get(widget_body_key, "")
        new_text = current.replace(text_to_remove, "").strip()
        st.session_state[widget_body_key] = new_text
        st.session_state[data_body_key] = new_text

    # --- UI LAYOUT ---
    st.markdown('<div class="draft-box">', unsafe_allow_html=True)
    st.markdown("### 📝 Entwurf bearbeiten")

    meta_ref = st.session_state[data_meta_key]
    c1, c2 = st.columns([2, 1])
    with c1:
        st.text_input("Titel", key=f"{key_base}_wdg_title", on_change=_sync_meta)

    with c2:
        known_types = ["concept", "project", "decision", "experience", "journal", "value", "goal", "principle", "risk", "belief"]
        curr_type = st.session_state.get(f"{key_base}_wdg_type", meta_ref["type"])
        if curr_type not in known_types: known_types.append(curr_type)
        st.selectbox("Typ", known_types, key=f"{key_base}_wdg_type", on_change=_sync_meta)

    st.text_input("Tags", key=f"{key_base}_wdg_tags", on_change=_sync_meta)

    tab_edit, tab_intel, tab_view = st.tabs(["✏️ Inhalt", "🧠 Intelligence", "👁️ Vorschau"])

    # --- TAB 1: EDITOR ---
    with tab_edit:
        st.text_area(
            "Body",
            key=widget_body_key,
            height=500,
            on_change=_sync_body,
            label_visibility="collapsed"
        )

    # --- TAB 2: INTELLIGENCE ---
    with tab_intel:
        st.info("Klicke auf 'Analysieren', um Verknüpfungen für den AKTUELLEN Text zu finden.")

        if st.button("🔍 Analyse starten", key=f"{key_base}_analyze"):
            st.session_state[data_sugg_key] = []

            text_to_analyze = st.session_state.get(widget_body_key, st.session_state.get(data_body_key, ""))
            current_doc_type = st.session_state.get(f"{key_base}_wdg_type", "concept")

            with st.spinner("Analysiere..."):
                analysis = analyze_draft_text(text_to_analyze, current_doc_type)

            if "error" in analysis:
                st.error(f"Fehler: {analysis['error']}")
            else:
                suggestions = analysis.get("suggestions", [])
                st.session_state[data_sugg_key] = suggestions
                if not suggestions:
                    st.warning("Keine Vorschläge gefunden.")
                else:
                    st.success(f"{len(suggestions)} Vorschläge gefunden.")

        suggestions = st.session_state[data_sugg_key]
        if suggestions:
            current_text_state = st.session_state.get(widget_body_key, "")

            for idx, sugg in enumerate(suggestions):
                link_text = sugg.get('suggested_markdown', '')
                is_inserted = link_text in current_text_state

                bg_color = "#e6fffa" if is_inserted else "#ffffff"
                border = "3px solid #28a745" if is_inserted else "3px solid #1a73e8"

                st.markdown(f"""
                <div style="border-left: {border}; background-color: {bg_color}; padding: 10px; margin-bottom: 8px; border-radius: 4px; box-shadow: 0 1px 3px rgba(0,0,0,0.1);">
                    <b>{sugg.get('target_title')}</b> <small>({sugg.get('type')})</small><br>
                    <i>{sugg.get('reason')}</i><br>
                    <code>{link_text}</code>
                </div>
                """, unsafe_allow_html=True)

                if is_inserted:
                    st.button("❌ Entfernen", key=f"del_{idx}_{key_base}", on_click=_remove_text, args=(link_text,))
                else:
                    st.button("➕ Einfügen", key=f"add_{idx}_{key_base}", on_click=_insert_text, args=(link_text,))

    # --- TAB 3: SAVE ---
    final_tags_str = st.session_state.get(f"{key_base}_wdg_tags", "")
    final_tags = [t.strip() for t in final_tags_str.split(",") if t.strip()]

    final_meta = {
        "id": "generated_on_save",
        "type": st.session_state.get(f"{key_base}_wdg_type", "default"),
        "title": st.session_state.get(f"{key_base}_wdg_title", "").strip(),
        "status": "draft",
        "tags": final_tags
    }

    final_body = st.session_state.get(widget_body_key, st.session_state[data_body_key])

    if not final_meta["title"]:
        h1_match = re.search(r"^#\s+(.*)$", final_body, re.MULTILINE)
        if h1_match:
            final_meta["title"] = h1_match.group(1).strip()

    final_doc = build_markdown_doc(final_meta, final_body)

    with tab_view:
        st.markdown('<div class="preview-box">', unsafe_allow_html=True)
        st.markdown(final_doc)
        st.markdown('</div>', unsafe_allow_html=True)

    st.markdown("---")

    b1, b2 = st.columns([1, 1])
    with b1:
        if st.button("💾 Speichern & Indizieren", type="primary", key=f"{key_base}_save"):
            with st.spinner("Speichere im Vault..."):

                raw_title = final_meta.get("title", "")
                if not raw_title:
                    clean_body = re.sub(r"[#*_\[\]()]", "", final_body).strip()
                    raw_title = clean_body[:40] if clean_body else "draft"

                safe_title = slugify(raw_title)[:60] or "draft"
                fname = f"{datetime.now().strftime('%Y%m%d')}-{safe_title}.md"

                result = save_draft_to_vault(final_doc, filename=fname)
                if "error" in result:
                    st.error(f"Fehler: {result['error']}")
                else:
                    st.success(f"Gespeichert: {result.get('file_path')}")
                    st.balloons()
    with b2:
        if st.button("📋 Code anzeigen", key=f"{key_base}_btn_copy"):
            st.code(final_doc, language="markdown")

    st.markdown("</div>", unsafe_allow_html=True)
def render_chat_interface(top_k, explain):
    for idx, msg in enumerate(st.session_state.messages):
        with st.chat_message(msg["role"]):
            if msg["role"] == "assistant":
                intent = msg.get("intent", "UNKNOWN")
                src = msg.get("intent_source", "?")
                icon = {"EMPATHY": "❤️", "DECISION": "⚖️", "CODING": "💻", "FACT": "📚", "INTERVIEW": "📝"}.get(intent, "🧠")
                st.markdown(f'<div class="intent-badge">{icon} Intent: {intent} <span style="opacity:0.6; font-size:0.8em">({src})</span></div>', unsafe_allow_html=True)

                with st.expander("🐞 Debug Raw Payload", expanded=False):
                    st.json(msg)

                if intent == "INTERVIEW":
                    render_draft_editor(msg)
                else:
                    st.markdown(msg["content"])

                if "sources" in msg and msg["sources"]:
                    for hit in msg["sources"]:
                        with st.expander(f"📄 {hit.get('note_id', '?')} ({hit.get('total_score', 0):.2f})"):
                            st.markdown(f"_{hit.get('source', {}).get('text', '')[:300]}..._")
                            if hit.get('explanation'):
                                st.caption(f"Grund: {hit['explanation']['reasons'][0]['message']}")
                            def _cb(qid=msg.get("query_id"), nid=hit.get('node_id')):
                                val = st.session_state.get(f"fb_src_{qid}_{nid}")
                                if val is not None: submit_feedback(qid, nid, val + 1)
                            st.feedback("faces", key=f"fb_src_{msg.get('query_id')}_{hit.get('node_id')}", on_change=_cb)

                if "query_id" in msg:
                    qid = msg["query_id"]
                    # Bind qid as a default argument so each message's callback
                    # keeps its own query_id instead of the loop's last value.
                    st.feedback("stars", key=f"fb_glob_{qid}", on_change=lambda qid=qid: submit_feedback(qid, "generated_answer", st.session_state[f"fb_glob_{qid}"] + 1))
            else:
                st.markdown(msg["content"])

    if prompt := st.chat_input("Frage Mindnet..."):
        st.session_state.messages.append({"role": "user", "content": prompt})
        st.rerun()

    if len(st.session_state.messages) > 0 and st.session_state.messages[-1]["role"] == "user":
        with st.chat_message("assistant"):
            with st.spinner("Thinking..."):
                resp = send_chat_message(st.session_state.messages[-1]["content"], top_k, explain)
            if "error" in resp:
                st.error(resp["error"])
            else:
                st.session_state.messages.append({
                    "role": "assistant",
                    "content": resp.get("answer"),
                    "intent": resp.get("intent", "FACT"),
                    "intent_source": resp.get("intent_source", "Unknown"),
                    "sources": resp.get("sources", []),
                    "query_id": resp.get("query_id")
                })
                st.rerun()
def render_manual_editor():
    mock_msg = {
        "content": "---\ntype: concept\ntitle: Neue Notiz\nstatus: draft\ntags: []\n---\n# Titel\n",
        "query_id": "manual_mode_v2"
    }
    render_draft_editor(mock_msg)
# --- MAIN ROUTING ---
mode, top_k, explain = render_sidebar()

if mode == "💬 Chat":
    render_chat_interface(top_k, explain)
elif mode == "📝 Manueller Editor":
    render_manual_editor()
elif mode == "🕸️ Graph (Agraph)":
    render_graph_agraph(graph_service)
elif mode == "🕸️ Graph (Cytoscape)":
    render_graph_explorer_cytoscape(graph_service)
app/frontend/ui_api.py (new file)

@@ -0,0 +1,37 @@
import requests
import streamlit as st
from ui_config import CHAT_ENDPOINT, INGEST_ANALYZE_ENDPOINT, INGEST_SAVE_ENDPOINT, FEEDBACK_ENDPOINT, API_TIMEOUT

def send_chat_message(message: str, top_k: int, explain: bool):
    try:
        response = requests.post(
            CHAT_ENDPOINT,
            json={"message": message, "top_k": top_k, "explain": explain},
            timeout=API_TIMEOUT
        )
        response.raise_for_status()
        return response.json()
    except Exception as e:
        return {"error": str(e)}

def analyze_draft_text(text: str, n_type: str):
    try:
        response = requests.post(INGEST_ANALYZE_ENDPOINT, json={"text": text, "type": n_type}, timeout=15)
        response.raise_for_status()
        return response.json()
    except Exception as e:
        return {"error": str(e)}

def save_draft_to_vault(markdown_content: str, filename: str = None):
    try:
        response = requests.post(INGEST_SAVE_ENDPOINT, json={"markdown_content": markdown_content, "filename": filename}, timeout=API_TIMEOUT)
        response.raise_for_status()
        return response.json()
    except Exception as e:
        return {"error": str(e)}

def submit_feedback(query_id, node_id, score, comment=None):
    try:
        requests.post(FEEDBACK_ENDPOINT, json={"query_id": query_id, "node_id": node_id, "score": score, "comment": comment}, timeout=2)
        st.toast(f"Feedback ({score}) gesendet!")
    except Exception: pass
app/frontend/ui_callbacks.py (new file)

@@ -0,0 +1,61 @@
import streamlit as st
import os
from ui_utils import build_markdown_doc

def switch_to_editor_callback(note_payload):
    """
    Callback for the 'Edit' button in the graph.
    Tries to read the file directly from the vault (filesystem).
    This guarantees that frontmatter and content are complete (Single Source of Truth).
    """
    # 1. Determine the path (preferring 'path' from Qdrant)
    origin_fname = note_payload.get('path')

    # Fallback for legacy data fields
    if not origin_fname:
        origin_fname = note_payload.get('file_path') or note_payload.get('filename')

    content = ""
    file_loaded = False

    # 2. First attempt: read straight from disk.
    # Check that the path exists and read the current state of the file.
    if origin_fname and os.path.exists(origin_fname):
        try:
            with open(origin_fname, "r", encoding="utf-8") as f:
                content = f.read()
            file_loaded = True
        except Exception as e:
            # Log the error in the terminal, but don't crash the UI
            print(f"Fehler beim Lesen von {origin_fname}: {e}")

    # 3. Fallback: take the content from Qdrant (if the file is not accessible)
    if not file_loaded:
        # Use 'fulltext' from the payload
        content = note_payload.get('fulltext', '')

        if not content:
            # Last resort: take the metadata and build dummy content
            content = build_markdown_doc(note_payload, "Inhalt konnte nicht geladen werden (Datei nicht gefunden).")
        else:
            # Check: does the text have frontmatter? If not, reconstruct it.
            if not content.strip().startswith("---"):
                content = build_markdown_doc(note_payload, content)

    # Emergency path construction (if no path exists in the system at all)
    if not origin_fname and 'note_id' in note_payload:
        origin_fname = f"{note_payload['note_id']}.md"

    # 4. Hand the data over to the editor.
    # The chat history is used as the transport vehicle for the state.
    st.session_state.messages.append({
        "role": "assistant",
        "intent": "INTERVIEW",
        "content": content,
        "query_id": f"edit_{note_payload.get('note_id', 'unknown')}",  # trigger for the editor
        "origin_filename": origin_fname,
        "origin_note_id": note_payload.get('note_id')
    })

    # 5. Switch modes (changes the tab on the next rerun)
    st.session_state["sidebar_mode_selection"] = "📝 Manueller Editor"
app/frontend/ui_chat.py (new file)

@@ -0,0 +1,78 @@
import streamlit as st
from ui_api import send_chat_message, submit_feedback
from ui_editor import render_draft_editor

def render_chat_interface(top_k, explain):
    """
    Renders the chat interface.
    Displays messages and handles user input.
    """
    # 1. Show the history
    for idx, msg in enumerate(st.session_state.messages):
        with st.chat_message(msg["role"]):
            if msg["role"] == "assistant":
                # Intent badge
                intent = msg.get("intent", "UNKNOWN")
                st.markdown(f'<div class="intent-badge">Intent: {intent}</div>', unsafe_allow_html=True)

                # Debugging (optional, handy during development)
                with st.expander("🐞 Payload", expanded=False):
                    st.json(msg)

                # Distinguish: plain text vs. editor mode (interview)
                if intent == "INTERVIEW":
                    render_draft_editor(msg)
                else:
                    st.markdown(msg["content"])

                # Show sources
                if "sources" in msg and msg["sources"]:
                    for hit in msg["sources"]:
                        score = hit.get('total_score', 0)
                        # If score is None, assume 0.0
                        if score is None: score = 0.0

                        with st.expander(f"📄 {hit.get('note_id', '?')} ({score:.2f})"):
                            st.markdown(f"_{hit.get('source', {}).get('text', '')[:300]}..._")

                            # Explanation layer
                            if hit.get('explanation'):
                                st.caption(f"Grund: {hit['explanation']['reasons'][0]['message']}")

                            # Feedback buttons per source
                            def _cb(qid=msg.get("query_id"), nid=hit.get('node_id')):
                                val = st.session_state.get(f"fb_src_{qid}_{nid}")
                                if val is not None: submit_feedback(qid, nid, val + 1)

                            st.feedback("faces", key=f"fb_src_{msg.get('query_id')}_{hit.get('node_id')}", on_change=_cb)

                # Global feedback for the answer
                if "query_id" in msg:
                    qid = msg["query_id"]
                    # Bind qid as a default argument so each message's callback
                    # keeps its own query_id instead of the loop's last value.
                    st.feedback("stars", key=f"fb_glob_{qid}", on_change=lambda qid=qid: submit_feedback(qid, "generated_answer", st.session_state[f"fb_glob_{qid}"] + 1))
            else:
                # User message
                st.markdown(msg["content"])

    # 2. Input field
    if prompt := st.chat_input("Frage Mindnet..."):
        st.session_state.messages.append({"role": "user", "content": prompt})
        st.rerun()

    # 3. Generate an answer (when the last message is from the user)
    if len(st.session_state.messages) > 0 and st.session_state.messages[-1]["role"] == "user":
        with st.chat_message("assistant"):
            with st.spinner("Thinking..."):
                resp = send_chat_message(st.session_state.messages[-1]["content"], top_k, explain)

            if "error" in resp:
                st.error(resp["error"])
            else:
                st.session_state.messages.append({
                    "role": "assistant",
                    "content": resp.get("answer"),
                    "intent": resp.get("intent", "FACT"),
                    "sources": resp.get("sources", []),
                    "query_id": resp.get("query_id")
                })
                st.rerun()
app/frontend/ui_config.py (new file)

@@ -0,0 +1,79 @@
import os
import hashlib
from dotenv import load_dotenv
from pathlib import Path

load_dotenv()

# --- API & PORTS ---
API_BASE_URL = os.getenv("MINDNET_API_URL", "http://localhost:8002")
CHAT_ENDPOINT = f"{API_BASE_URL}/chat"
FEEDBACK_ENDPOINT = f"{API_BASE_URL}/feedback"
INGEST_ANALYZE_ENDPOINT = f"{API_BASE_URL}/ingest/analyze"
INGEST_SAVE_ENDPOINT = f"{API_BASE_URL}/ingest/save"

# --- QDRANT ---
QDRANT_URL = os.getenv("QDRANT_URL", "http://localhost:6333")
QDRANT_KEY = os.getenv("QDRANT_API_KEY", None)
if QDRANT_KEY == "":
    QDRANT_KEY = None
COLLECTION_PREFIX = os.getenv("COLLECTION_PREFIX", "mindnet")

# --- FILES & TIMEOUTS ---
HISTORY_FILE = Path("data/logs/search_history.jsonl")
timeout_setting = os.getenv("MINDNET_API_TIMEOUT") or os.getenv("MINDNET_LLM_TIMEOUT")
API_TIMEOUT = float(timeout_setting) if timeout_setting else 300.0

# --- STYLING CONSTANTS ---

# Based on types.yaml
GRAPH_COLORS = {
    # Core types
    "experience": "#feca57",  # yellow/orange
    "project": "#ff9f43",     # darker orange
    "decision": "#5f27cd",    # purple

    # Personality
    "value": "#00d2d3",       # cyan
    "principle": "#0abde3",   # dark cyan
    "belief": "#48dbfb",      # light blue
    "profile": "#1dd1a1",     # green

    # Strategy & risk
    "goal": "#ff9ff3",        # pink
    "risk": "#ff6b6b",        # red

    # Base
    "concept": "#54a0ff",     # blue
    "task": "#8395a7",        # grey-blue
    "journal": "#c8d6e5",     # light grey
    "source": "#576574",      # dark grey
    "glossary": "#222f3e",    # very dark

    "default": "#8395a7"      # fallback
}

# System edges we do NOT want to show in the graph, to reduce noise
SYSTEM_EDGES = ["prev", "next", "belongs_to"]

def get_edge_color(kind: str) -> str:
    """Generates a deterministic color based on the edge type."""
    if not kind:
        return "#bdc3c7"

    # A few fixed colors for important semantic types
    fixed_colors = {
        "depends_on": "#ff6b6b",   # red (blocker/dependency)
        "blocks": "#ee5253",       # dark red
        "caused_by": "#ff9ff3",    # pink
        "related_to": "#c8d6e5",   # light grey (background)
        "references": "#bdc3c7",   # grey
        "derived_from": "#1dd1a1"  # green
    }

    if kind in fixed_colors:
        return fixed_colors[kind]

    # Fallback: hash-based color for dynamic types.
    # We derive only the hue, so the result stays muted rather than garish.
    hash_obj = hashlib.md5(kind.encode())
    hue = int(hash_obj.hexdigest(), 16) % 360
    return f"hsl({hue}, 60%, 50%)"
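The hash-based fallback above keeps colors stable across reruns: the same edge kind always maps to the same hue, while saturation and lightness stay fixed. A minimal standalone sketch of that behavior (the function reproduced outside the module, with a shortened fixed palette):

```python
import hashlib

def get_edge_color(kind: str) -> str:
    """Deterministic edge color: fixed palette first, then an MD5-derived hue."""
    if not kind:
        return "#bdc3c7"
    # Shortened palette for the sketch; the module defines more entries
    fixed_colors = {"depends_on": "#ff6b6b", "derived_from": "#1dd1a1"}
    if kind in fixed_colors:
        return fixed_colors[kind]
    # Only the hue varies with the kind; 60%/50% keep the colors muted
    hue = int(hashlib.md5(kind.encode()).hexdigest(), 16) % 360
    return f"hsl({hue}, 60%, 50%)"
```

Because the hue comes from a hash of the kind string, edge types added later get a color without any palette maintenance, and the color never changes between sessions.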
213
app/frontend/ui_editor.py
Normal file
@@ -0,0 +1,213 @@
import streamlit as st
import uuid
import re
from datetime import datetime

from ui_utils import parse_markdown_draft, build_markdown_doc, slugify
from ui_api import save_draft_to_vault, analyze_draft_text

def render_draft_editor(msg):
    """
    Renders the markdown editor.
    Uses 'origin_filename' from the message to distinguish between update and create.
    """
    if "query_id" not in msg or not msg["query_id"]:
        msg["query_id"] = str(uuid.uuid4())

    qid = msg["query_id"]
    key_base = f"draft_{qid}"

    # State keys
    data_meta_key = f"{key_base}_data_meta"
    data_sugg_key = f"{key_base}_data_suggestions"
    widget_body_key = f"{key_base}_widget_body"
    data_body_key = f"{key_base}_data_body"

    # --- INIT STATE ---
    if f"{key_base}_init" not in st.session_state:
        meta, body = parse_markdown_draft(msg["content"])
        if "type" not in meta:
            meta["type"] = "default"
        if "title" not in meta:
            meta["title"] = ""
        tags = meta.get("tags", [])
        meta["tags_str"] = ", ".join(tags) if isinstance(tags, list) else str(tags)

        st.session_state[data_meta_key] = meta
        st.session_state[data_sugg_key] = []
        st.session_state[data_body_key] = body.strip()

        st.session_state[f"{key_base}_wdg_title"] = meta["title"]
        st.session_state[f"{key_base}_wdg_type"] = meta["type"]
        st.session_state[f"{key_base}_wdg_tags"] = meta["tags_str"]

        # Adopt the path (source of truth)
        st.session_state[f"{key_base}_origin_filename"] = msg.get("origin_filename")
        st.session_state[f"{key_base}_init"] = True

    # --- RESURRECTION ---
    if widget_body_key not in st.session_state and data_body_key in st.session_state:
        st.session_state[widget_body_key] = st.session_state[data_body_key]

    # --- SYNC HELPERS ---
    def _sync_meta():
        meta = st.session_state[data_meta_key]
        meta["title"] = st.session_state.get(f"{key_base}_wdg_title", "")
        meta["type"] = st.session_state.get(f"{key_base}_wdg_type", "default")
        meta["tags_str"] = st.session_state.get(f"{key_base}_wdg_tags", "")
        st.session_state[data_meta_key] = meta

    def _sync_body():
        st.session_state[data_body_key] = st.session_state[widget_body_key]

    def _insert_text(t):
        st.session_state[widget_body_key] = f"{st.session_state.get(widget_body_key, '')}\n\n{t}"
        st.session_state[data_body_key] = st.session_state[widget_body_key]

    def _remove_text(t):
        st.session_state[widget_body_key] = st.session_state.get(widget_body_key, '').replace(t, "").strip()
        st.session_state[data_body_key] = st.session_state[widget_body_key]

    # --- UI LAYOUT ---

    # Header info (show the path for debugging, so we can be sure)
    origin_fname = st.session_state.get(f"{key_base}_origin_filename")

    if origin_fname:
        # Extract the file name for a clean display
        display_name = str(origin_fname).split("/")[-1]
        st.success(f"📂 **Update mode**: `{display_name}`")
        # Debugging: show the full path in an expander
        with st.expander("File path details", expanded=False):
            st.code(origin_fname)
        st.markdown('<div class="draft-box" style="border-left: 5px solid #ff9f43;">', unsafe_allow_html=True)
    else:
        st.info("✨ **Create mode**: a new file will be created.")
        st.markdown('<div class="draft-box">', unsafe_allow_html=True)

    st.markdown("### Editor")

    # Meta fields
    meta_ref = st.session_state[data_meta_key]
    c1, c2 = st.columns([2, 1])
    with c1:
        st.text_input("Title", key=f"{key_base}_wdg_title", on_change=_sync_meta)
    with c2:
        known_types = ["concept", "project", "decision", "experience", "journal", "value", "goal", "principle", "risk", "belief"]
        curr_type = st.session_state.get(f"{key_base}_wdg_type", meta_ref["type"])
        if curr_type not in known_types:
            known_types.append(curr_type)
        st.selectbox("Type", known_types, key=f"{key_base}_wdg_type", on_change=_sync_meta)

    st.text_input("Tags", key=f"{key_base}_wdg_tags", on_change=_sync_meta)

    # Tabs
    tab_edit, tab_intel, tab_view = st.tabs(["✏️ Content", "🧠 Intelligence", "👁️ Preview"])

    with tab_edit:
        st.text_area("Body", key=widget_body_key, height=600, on_change=_sync_body, label_visibility="collapsed")

    with tab_intel:
        st.info("Analyzes the text for linking opportunities.")
        if st.button("🔍 Start analysis", key=f"{key_base}_analyze"):
            st.session_state[data_sugg_key] = []
            text_to_analyze = st.session_state.get(widget_body_key, st.session_state.get(data_body_key, ""))
            with st.spinner("Analyzing..."):
                analysis = analyze_draft_text(text_to_analyze, st.session_state.get(f"{key_base}_wdg_type", "concept"))
                if "error" in analysis:
                    st.error(f"Error: {analysis['error']}")
                else:
                    suggestions = analysis.get("suggestions", [])
                    st.session_state[data_sugg_key] = suggestions
                    if not suggestions:
                        st.warning("No suggestions.")
                    else:
                        st.success(f"{len(suggestions)} suggestions found.")

        suggestions = st.session_state[data_sugg_key]
        if suggestions:
            current_text = st.session_state.get(widget_body_key, "")
            for idx, sugg in enumerate(suggestions):
                link_text = sugg.get('suggested_markdown', '')
                is_inserted = link_text in current_text
                bg_color = "#e6fffa" if is_inserted else "#ffffff"
                border = "3px solid #28a745" if is_inserted else "3px solid #1a73e8"
                st.markdown(f"<div style='border-left: {border}; background-color: {bg_color}; padding: 10px; margin-bottom: 8px;'><b>{sugg.get('target_title')}</b> <small>({sugg.get('type')})</small><br><i>{sugg.get('reason')}</i><br><code>{link_text}</code></div>", unsafe_allow_html=True)
                if is_inserted:
                    st.button("❌ Remove", key=f"del_{idx}_{key_base}", on_click=_remove_text, args=(link_text,))
                else:
                    st.button("➕ Insert", key=f"add_{idx}_{key_base}", on_click=_insert_text, args=(link_text,))

    # Save logic preparation
    final_tags = [t.strip() for t in st.session_state.get(f"{key_base}_wdg_tags", "").split(",") if t.strip()]
    final_meta = {
        "id": "generated_on_save",
        "type": st.session_state.get(f"{key_base}_wdg_type", "default"),
        "title": st.session_state.get(f"{key_base}_wdg_title", "").strip(),
        "status": "draft",
        "tags": final_tags
    }
    if "origin_note_id" in msg:
        final_meta["id"] = msg["origin_note_id"]

    final_body = st.session_state.get(widget_body_key, st.session_state[data_body_key])
    if not final_meta["title"]:
        h1_match = re.search(r"^#\s+(.*)$", final_body, re.MULTILINE)
        if h1_match:
            final_meta["title"] = h1_match.group(1).strip()

    final_doc = build_markdown_doc(final_meta, final_body)

    with tab_view:
        st.markdown('<div class="preview-box">', unsafe_allow_html=True)
        st.markdown(final_doc)
        st.markdown('</div>', unsafe_allow_html=True)

    st.markdown("---")

    # Save actions
    b1, b2 = st.columns([1, 1])
    with b1:
        save_label = "💾 Save update" if origin_fname else "💾 Create & index"

        if st.button(save_label, type="primary", key=f"{key_base}_save"):
            with st.spinner("Saving to vault..."):
                if origin_fname:
                    # UPDATE: the target is the exact path
                    target_file = origin_fname
                else:
                    # CREATE: new file name
                    raw_title = final_meta.get("title", "draft")
                    target_file = f"{datetime.now().strftime('%Y%m%d')}-{slugify(raw_title)[:60]}.md"

                result = save_draft_to_vault(final_doc, filename=target_file)
                if "error" in result:
                    st.error(f"Error: {result['error']}")
                else:
                    st.success(f"Saved: {result.get('file_path')}")
                    st.balloons()
    with b2:
        if st.button("📋 Show code", key=f"{key_base}_btn_copy"):
            st.code(final_doc, language="markdown")

    st.markdown("</div>", unsafe_allow_html=True)

def render_manual_editor():
    """
    Renders the manual editor.
    CHECKS whether an edit request from the graph is pending!
    """

    target_msg = None

    # 1. Check: are there messages in the history?
    if st.session_state.messages:
        last_msg = st.session_state.messages[-1]

        # 2. Is the last message an edit request? (Recognizable by the query_id prefix 'edit_')
        qid = str(last_msg.get("query_id", ""))
        if qid.startswith("edit_"):
            target_msg = last_msg

    # 3. Fallback: empty template if no edit request is pending
    if not target_msg:
        target_msg = {
            "content": "---\ntype: concept\ntitle: New note\nstatus: draft\ntags: []\n---\n# Title\n",
            "query_id": f"manual_{uuid.uuid4()}"  # Own ID, so that fresh state is created
        }

    render_draft_editor(target_msg)
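The save path above falls back to the first H1 heading when the title field is empty. A small standalone check of that regex (same pattern as in the editor, wrapped in a hypothetical helper for illustration):

```python
import re

def title_from_body(body: str) -> str:
    """Mirror of the editor's H1 fallback: the first '# ...' heading wins."""
    # MULTILINE lets ^ match at every line start; '#\s+' skips sub-headings (##, ###)
    m = re.search(r"^#\s+(.*)$", body, re.MULTILINE)
    return m.group(1).strip() if m else ""
```

Note that `#\s+` requires whitespace directly after a single `#`, so `## Subheading` lines never become the title.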
153
app/frontend/ui_graph.py
Normal file
@@ -0,0 +1,153 @@
import streamlit as st
from streamlit_agraph import agraph, Config
from qdrant_client import models
from ui_config import COLLECTION_PREFIX, GRAPH_COLORS
from ui_callbacks import switch_to_editor_callback

def render_graph_explorer(graph_service):
    st.header("🕸️ Graph Explorer")

    # Initialize session state
    if "graph_center_id" not in st.session_state:
        st.session_state.graph_center_id = None

    # Set defaults for view & physics
    st.session_state.setdefault("graph_depth", 2)
    st.session_state.setdefault("graph_show_labels", True)
    # Higher defaults for spacing
    st.session_state.setdefault("graph_spacing", 250)
    st.session_state.setdefault("graph_gravity", -4000)

    col_ctrl, col_graph = st.columns([1, 4])

    # --- LEFT COLUMN: CONTROLS ---
    with col_ctrl:
        st.subheader("Focus")

        # Search input
        search_term = st.text_input("Search note", placeholder="Enter a title...")

        # Qdrant search logic
        if search_term:
            hits, _ = graph_service.client.scroll(
                collection_name=f"{COLLECTION_PREFIX}_notes",
                scroll_filter=models.Filter(must=[models.FieldCondition(key="title", match=models.MatchText(text=search_term))]),
                limit=10
            )
            options = {h.payload['title']: h.payload['note_id'] for h in hits}

            if options:
                selected_title = st.selectbox("Results:", list(options.keys()))
                if st.button("Load", use_container_width=True):
                    st.session_state.graph_center_id = options[selected_title]
                    st.rerun()

        st.divider()

        # Layout & physics settings
        with st.expander("👁️ View & Layout", expanded=True):
            st.session_state.graph_depth = st.slider("Depth (tier)", 1, 3, st.session_state.graph_depth)
            st.session_state.graph_show_labels = st.checkbox("Edge labels", st.session_state.graph_show_labels)

            st.markdown("**Physics (BarnesHut)**")
            st.session_state.graph_spacing = st.slider("Spring length (spacing)", 50, 800, st.session_state.graph_spacing)
            st.session_state.graph_gravity = st.slider("Repulsion (gravity)", -20000, -500, st.session_state.graph_gravity)

            if st.button("Reset layout"):
                st.session_state.graph_spacing = 250
                st.session_state.graph_gravity = -4000
                st.rerun()

        st.divider()
        st.caption("Legend (top types)")
        for k, v in list(GRAPH_COLORS.items())[:8]:
            st.markdown(f"<span style='color:{v}'>●</span> {k}", unsafe_allow_html=True)

    # --- RIGHT COLUMN: GRAPH & ACTION BAR ---
    with col_graph:
        center_id = st.session_state.graph_center_id

        if center_id:
            # Pin the action container at the top (layout fix)
            action_container = st.container()

            # Load graph and data
            with st.spinner("Loading graph..."):
                nodes, edges = graph_service.get_ego_graph(
                    center_id,
                    depth=st.session_state.graph_depth,
                    show_labels=st.session_state.graph_show_labels
                )

                # IMPORTANT: fetch the data for the editor (incl. path)
                note_data = graph_service.get_note_with_full_content(center_id)

            # Render the action bar
            with action_container:
                c1, c2 = st.columns([3, 1])
                with c1:
                    st.caption(f"Active center: **{center_id}**")
                with c2:
                    if note_data:
                        st.button("📝 Edit",
                                  use_container_width=True,
                                  on_click=switch_to_editor_callback,
                                  args=(note_data,))
                    else:
                        st.error("Data error: note not found")

            # Debug inspector
            with st.expander("🕵️ Data Inspector", expanded=False):
                if note_data:
                    st.json(note_data)
                    if 'path' in note_data:
                        st.success(f"Path OK: {note_data['path']}")
                    else:
                        st.error("Path missing!")
                else:
                    st.info("Empty.")

            if not nodes:
                st.warning("No data found.")
            else:
                # --- CONFIGURATION (BarnesHut) ---
                # Height trick to force a re-render (the key parameter sometimes crashes)
                dyn_height = 800 + (abs(st.session_state.graph_gravity) % 5)

                config = Config(
                    width=1000,
                    height=dyn_height,
                    directed=True,
                    physics={
                        "enabled": True,
                        "solver": "barnesHut",
                        "barnesHut": {
                            "gravitationalConstant": st.session_state.graph_gravity,
                            "centralGravity": 0.005,  # crucial for spreading the graph out
                            "springLength": st.session_state.graph_spacing,
                            "springConstant": 0.04,
                            "damping": 0.09,
                            "avoidOverlap": 0.1
                        },
                        "stabilization": {"enabled": True, "iterations": 600}
                    },
                    hierarchical=False,
                    nodeHighlightBehavior=True,
                    highlightColor="#F7A7A6",
                    collapsible=False
                )

                return_value = agraph(nodes=nodes, edges=edges, config=config)

                # Interaction logic (click on a node)
                if return_value:
                    if return_value != center_id:
                        # Navigation: set a new center
                        st.session_state.graph_center_id = return_value
                        st.rerun()
                    else:
                        # Click on the center itself
                        st.toast(f"Center: {return_value}")

        else:
            st.info("👈 Please select a note on the left to start the graph.")
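The height trick above nudges the component height by 0 to 4 px depending on the gravity slider, so a changed value forces agraph to re-mount without using the crash-prone `key` parameter. Isolated for inspection (a sketch of the arithmetic only, not the module's API):

```python
def dynamic_height(gravity: int, base: int = 800) -> int:
    """Height varies slightly with the gravity slider, forcing a re-mount on change."""
    return base + (abs(gravity) % 5)
```

One caveat of this approach: two gravity values that collide modulo 5 (e.g. -4000 and -4005) yield the same height and therefore do not trigger a re-mount.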
325
app/frontend/ui_graph_cytoscape.py
Normal file
@@ -0,0 +1,325 @@
import streamlit as st
from st_cytoscape import cytoscape
from qdrant_client import models
from ui_config import COLLECTION_PREFIX, GRAPH_COLORS
from ui_callbacks import switch_to_editor_callback

def update_url_params():
    """Callback: writes the slider values into the URL and syncs the state."""
    # Copy values from the slider keys into the logic variables
    if "cy_depth_slider" in st.session_state:
        st.session_state.cy_depth = st.session_state.cy_depth_slider
    if "cy_len_slider" in st.session_state:
        st.session_state.cy_ideal_edge_len = st.session_state.cy_len_slider
    if "cy_rep_slider" in st.session_state:
        st.session_state.cy_node_repulsion = st.session_state.cy_rep_slider

    # Write into the URL
    st.query_params["depth"] = st.session_state.cy_depth
    st.query_params["len"] = st.session_state.cy_ideal_edge_len
    st.query_params["rep"] = st.session_state.cy_node_repulsion

def render_graph_explorer_cytoscape(graph_service):
    st.header("🕸️ Graph Explorer (Cytoscape)")

    # ---------------------------------------------------------
    # 1. STATE & PERSISTENCE
    # ---------------------------------------------------------
    if "graph_center_id" not in st.session_state:
        st.session_state.graph_center_id = None

    if "graph_inspected_id" not in st.session_state:
        st.session_state.graph_inspected_id = None

    # Load settings from the URL (if present), otherwise use defaults
    params = st.query_params

    # Helper to parse ints safely
    def get_param(key, default):
        try:
            return int(params.get(key, default))
        except (TypeError, ValueError):
            return default

    # Initialize session-state variables if not yet present
    if "cy_depth" not in st.session_state:
        st.session_state.cy_depth = get_param("depth", 2)

    if "cy_ideal_edge_len" not in st.session_state:
        st.session_state.cy_ideal_edge_len = get_param("len", 150)

    if "cy_node_repulsion" not in st.session_state:
        st.session_state.cy_node_repulsion = get_param("rep", 1000000)

    col_ctrl, col_graph = st.columns([1, 4])

    # ---------------------------------------------------------
    # 2. LEFT PANEL (controls)
    # ---------------------------------------------------------
    with col_ctrl:
        st.subheader("Focus")

        search_term = st.text_input("Search note", placeholder="Enter a title...", key="cy_search")

        if search_term:
            hits, _ = graph_service.client.scroll(
                collection_name=f"{COLLECTION_PREFIX}_notes",
                limit=10,
                scroll_filter=models.Filter(must=[models.FieldCondition(key="title", match=models.MatchText(text=search_term))])
            )
            options = {h.payload['title']: h.payload['note_id'] for h in hits}

            if options:
                selected_title = st.selectbox("Results:", list(options.keys()), key="cy_select")
                if st.button("Load", use_container_width=True, key="cy_load"):
                    new_id = options[selected_title]
                    st.session_state.graph_center_id = new_id
                    st.session_state.graph_inspected_id = new_id
                    st.rerun()

        st.divider()

        # LAYOUT SETTINGS (with URL sync)
        with st.expander("👁️ Layout settings", expanded=True):
            st.slider("Depth (tier)", 1, 3,
                      value=st.session_state.cy_depth,
                      key="cy_depth_slider",
                      on_change=update_url_params)

            st.markdown("**COSE Layout**")
            st.slider("Edge length", 50, 600,
                      value=st.session_state.cy_ideal_edge_len,
                      key="cy_len_slider",
                      on_change=update_url_params)

            st.slider("Node repulsion", 100000, 5000000, step=100000,
                      value=st.session_state.cy_node_repulsion,
                      key="cy_rep_slider",
                      on_change=update_url_params)

            if st.button("Recompute", key="cy_rerun"):
                st.rerun()

        st.divider()
        st.caption("Legend")
        for k, v in list(GRAPH_COLORS.items())[:8]:
            st.markdown(f"<span style='color:{v}'>●</span> {k}", unsafe_allow_html=True)

    # ---------------------------------------------------------
    # 3. RIGHT PANEL (GRAPH & INSPECTOR)
    # ---------------------------------------------------------
    with col_graph:
        center_id = st.session_state.graph_center_id

        # Fallback init
        if not center_id and st.session_state.graph_inspected_id:
            center_id = st.session_state.graph_inspected_id
            st.session_state.graph_center_id = center_id

        if center_id:
            # Sync inspection
            if not st.session_state.graph_inspected_id:
                st.session_state.graph_inspected_id = center_id

            inspected_id = st.session_state.graph_inspected_id

            # --- LOAD DATA ---
            with st.spinner(f"Loading graph (depth {st.session_state.cy_depth})..."):
                # 1. Graph data
                nodes_data, edges_data = graph_service.get_ego_graph(
                    center_id,
                    depth=st.session_state.cy_depth
                )
                # 2. Detail data (inspector)
                inspected_data = graph_service.get_note_with_full_content(inspected_id)

            # --- ACTION BAR ---
            action_container = st.container()
            with action_container:
                c1, c2, c3 = st.columns([2, 1, 1])

                with c1:
                    title_show = inspected_data.get('title', inspected_id) if inspected_data else inspected_id
                    st.info(f"**Selected:** {title_show}")

                with c2:
                    # NAVIGATION
                    if inspected_id != center_id:
                        if st.button("🎯 Set as center", use_container_width=True, key="cy_nav_btn"):
                            st.session_state.graph_center_id = inspected_id
                            st.rerun()
                    else:
                        st.caption("_(Is the current center)_")

                with c3:
                    # EDIT
                    if inspected_data:
                        st.button("📝 Edit",
                                  use_container_width=True,
                                  on_click=switch_to_editor_callback,
                                  args=(inspected_data,),
                                  key="cy_edit_btn")

            # --- DATA INSPECTOR ---
            with st.expander("🕵️ Data Inspector (details)", expanded=False):
                if inspected_data:
                    col_i1, col_i2 = st.columns(2)
                    with col_i1:
                        st.markdown(f"**ID:** `{inspected_data.get('note_id')}`")
                        st.markdown(f"**Type:** `{inspected_data.get('type')}`")
                    with col_i2:
                        st.markdown(f"**Tags:** {', '.join(inspected_data.get('tags', []))}")
                        path_check = "✅" if inspected_data.get('path') else "❌"
                        st.markdown(f"**Path:** {path_check}")

                    st.caption("Content (preview):")
                    st.text_area("Content Preview", inspected_data.get('fulltext', '')[:1000], height=200, disabled=True, label_visibility="collapsed")

                    with st.expander("📄 Show raw JSON"):
                        st.json(inspected_data)
                else:
                    st.warning("No data loaded.")

            # --- GRAPH ELEMENTS ---
            cy_elements = []

            for n in nodes_data:
                is_center = (n.id == center_id)
                is_inspected = (n.id == inspected_id)

                tooltip_text = n.title if n.title else n.label
                display_label = n.label
                if len(display_label) > 15 and " " in display_label:
                    display_label = display_label.replace(" ", "\n", 1)

                cy_node = {
                    "data": {
                        "id": n.id,
                        "label": display_label,
                        "bg_color": n.color,
                        "tooltip": tooltip_text
                    },
                    # Appearance is controlled purely via classes (.inspected / .center)
                    "classes": " ".join([c for c in ["center" if is_center else "", "inspected" if is_inspected else ""] if c]),
                    "selected": False
                }
                cy_elements.append(cy_node)

            for e in edges_data:
                target_id = getattr(e, "to", getattr(e, "target", None))
                if target_id:
                    cy_edge = {
                        "data": {
                            "source": e.source,
                            "target": target_id,
                            "label": e.label,
                            "line_color": e.color
                        }
                    }
                    cy_elements.append(cy_edge)

            # --- STYLESHEET ---
            stylesheet = [
                {
                    "selector": "node",
                    "style": {
                        "label": "data(label)",
                        "width": "30px", "height": "30px",
                        "background-color": "data(bg_color)",
                        "color": "#333", "font-size": "12px",
                        "text-valign": "center", "text-halign": "center",
                        "text-wrap": "wrap", "text-max-width": "90px",
                        "border-width": 2, "border-color": "#fff",
                        "title": "data(tooltip)"
                    }
                },
                # Inspected (yellow border)
                {
                    "selector": ".inspected",
                    "style": {
                        "border-width": 6,
                        "border-color": "#FFC300",
                        "width": "50px", "height": "50px",
                        "font-weight": "bold",
                        "z-index": 999
                    }
                },
                # Center (red border)
                {
                    "selector": ".center",
                    "style": {
                        "border-width": 4,
                        "border-color": "#FF5733",
                        "width": "40px", "height": "40px"
                    }
                },
                # Mix
                {
                    "selector": ".center.inspected",
                    "style": {
                        "border-width": 6,
                        "border-color": "#FF5733",
                        "width": "55px", "height": "55px"
                    }
                },
                # Suppress the default selection styling
                {
                    "selector": "node:selected",
                    "style": {
                        "border-width": 0,
                        "overlay-opacity": 0
                    }
                },
                {
                    "selector": "edge",
                    "style": {
                        "width": 2,
                        "line-color": "data(line_color)",
                        "target-arrow-color": "data(line_color)",
                        "target-arrow-shape": "triangle",
                        "curve-style": "bezier",
                        "label": "data(label)",
                        "font-size": "10px", "color": "#666",
                        "text-background-opacity": 0.8, "text-background-color": "#fff"
                    }
                }
            ]

            # --- RENDER ---
            graph_key = f"cy_{center_id}_{st.session_state.cy_depth}_{st.session_state.cy_ideal_edge_len}"

            clicked_elements = cytoscape(
                elements=cy_elements,
                stylesheet=stylesheet,
                layout={
                    "name": "cose",
                    "idealEdgeLength": st.session_state.cy_ideal_edge_len,
                    "nodeOverlap": 20,
                    "refresh": 20,
                    "fit": True,
                    "padding": 50,
                    "randomize": False,
                    "componentSpacing": 100,
                    "nodeRepulsion": st.session_state.cy_node_repulsion,
                    "edgeElasticity": 100,
                    "nestingFactor": 5,
                    "gravity": 80,
                    "numIter": 1000,
                    "initialTemp": 200,
                    "coolingFactor": 0.95,
                    "minTemp": 1.0,
                    "animate": False
                },
                key=graph_key,
                height="700px"
            )

            # --- EVENT HANDLING ---
            if clicked_elements:
                clicked_nodes = clicked_elements.get("nodes", [])
                if clicked_nodes:
                    clicked_id = clicked_nodes[0]

                    if clicked_id != st.session_state.graph_inspected_id:
                        st.session_state.graph_inspected_id = clicked_id
                        st.rerun()

        else:
            st.info("👈 Please select a note on the left.")
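The deep linking in this module is a write-through from the slider callbacks into `st.query_params` plus a tolerant read on the next page load. The read side can be sketched with a plain dict (hypothetical `parse_graph_params` helper for illustration; the module inlines this as `get_param`):

```python
def parse_graph_params(params: dict) -> dict:
    """Tolerant int parsing with defaults, mirroring get_param in the module."""
    def get_int(key, default):
        try:
            return int(params.get(key, default))
        except (TypeError, ValueError):
            return default  # malformed or missing URL values fall back silently
    return {
        "depth": get_int("depth", 2),
        "len": get_int("len", 150),
        "rep": get_int("rep", 1000000),
    }
```

Falling back to defaults instead of raising keeps hand-edited or stale URLs from breaking the page.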
312
app/frontend/ui_graph_service.py
Normal file
@@ -0,0 +1,312 @@
import re
from qdrant_client import QdrantClient, models
from streamlit_agraph import Node, Edge
from ui_config import GRAPH_COLORS, get_edge_color, SYSTEM_EDGES

class GraphExplorerService:
    def __init__(self, url, api_key=None, prefix="mindnet"):
        self.client = QdrantClient(url=url, api_key=api_key)
        self.prefix = prefix
        self.notes_col = f"{prefix}_notes"
        self.chunks_col = f"{prefix}_chunks"
        self.edges_col = f"{prefix}_edges"
        self._note_cache = {}

    def get_note_with_full_content(self, note_id):
        """
        Loads the note's metadata and reconstructs the entire text
        from its chunks (stitching). Important for the editor fallback.
        """
        # 1. Fetch metadata
        meta = self._fetch_note_cached(note_id)
        if not meta:
            return None

        # 2. Build the full text from the chunks
        full_text = self._fetch_full_text_stitched(note_id)

        # 3. Combine the result (we overwrite the 'fulltext' field with the freshly stitched text).
        # We return a copy so we do not corrupt the cache.
        complete_note = meta.copy()
        if full_text:
            complete_note['fulltext'] = full_text

        return complete_note

    def get_ego_graph(self, center_note_id: str, depth=2, show_labels=True):
        """
        Builds the ego graph around a central note.
        Loads the full text for the center and snippets for neighbors.
        """
        nodes_dict = {}
        unique_edges = {}

        # 1. Load the center note
        center_note = self._fetch_note_cached(center_note_id)
        if not center_note:
            return [], []
        self._add_node_to_dict(nodes_dict, center_note, level=0)

        # Initial set for the search
        level_1_ids = {center_note_id}

        # Find edges for the center (L1)
        l1_edges = self._find_connected_edges([center_note_id], center_note.get("title"))

        for edge_data in l1_edges:
            src_id, tgt_id = self._process_edge(edge_data, nodes_dict, unique_edges, current_depth=1)
            if src_id:
                level_1_ids.add(src_id)
            if tgt_id:
                level_1_ids.add(tgt_id)

        # Level-2 search (bounded for performance)
        if depth > 1 and len(level_1_ids) > 1 and len(level_1_ids) < 80:
            l1_subset = list(level_1_ids - {center_note_id})
            if l1_subset:
                l2_edges = self._find_connected_edges_batch(l1_subset)
                for edge_data in l2_edges:
                    self._process_edge(edge_data, nodes_dict, unique_edges, current_depth=2)

        # --- SMART CONTENT LOADING ---

        # A. Fetch the full text for the center node (stitch the chunks together)
        center_text = self._fetch_full_text_stitched(center_note_id)
        if center_note_id in nodes_dict:
            orig_title = nodes_dict[center_note_id].title
            clean_full = self._clean_markdown(center_text[:2000])
            # We pack the text into the tooltip (title attribute)
            nodes_dict[center_note_id].title = f"{orig_title}\n\n📄 CONTENT:\n{clean_full}..."

        # B. Fetch previews for all neighbors (batch)
        all_ids = list(nodes_dict.keys())
        previews = self._fetch_previews_for_nodes(all_ids)

        for nid, node_obj in nodes_dict.items():
            if nid != center_note_id:
                prev_raw = previews.get(nid, "No preview text.")
                clean_prev = self._clean_markdown(prev_raw[:600])
                node_obj.title = f"{node_obj.title}\n\n🔍 PREVIEW:\n{clean_prev}..."

        # Build the graph (finalize nodes & edges)
        final_edges = []
        for (src, tgt), data in unique_edges.items():
            kind = data['kind']
            prov = data['provenance']
            color = get_edge_color(kind)
            is_smart = (prov != "explicit" and prov != "rule")

            # Label logic
            label_text = kind if show_labels else " "

            final_edges.append(Edge(
                source=src, target=tgt, label=label_text, color=color, dashes=is_smart,
                title=f"Relation: {kind}\nProvenance: {prov}"
            ))

        return list(nodes_dict.values()), final_edges

    def _clean_markdown(self, text):
        """Strips markdown special characters for clean tooltips in the browser."""
        if not text:
            return ""
        # Remove header markers (## )
        text = re.sub(r'#+\s', '', text)
        # Remove bold/italic (** or *)
        text = re.sub(r'\*\*|__|\*|_', '', text)
        # Remove links: [text](url) -> text
        text = re.sub(r'\[([^\]]+)\]\([^\)]+\)', r'\1', text)
        # Remove wikilinks: [[Link]] -> Link
        text = re.sub(r'\[\[([^\]]+)\]\]', r'\1', text)
        return text
|
||||
def _fetch_full_text_stitched(self, note_id):
|
||||
"""Lädt alle Chunks einer Note und baut den Text zusammen."""
|
||||
try:
|
||||
scroll_filter = models.Filter(
|
||||
must=[models.FieldCondition(key="note_id", match=models.MatchValue(value=note_id))]
|
||||
)
|
||||
# Limit hoch genug setzen
|
||||
chunks, _ = self.client.scroll(self.chunks_col, scroll_filter=scroll_filter, limit=100, with_payload=True)
|
||||
# Sortieren nach 'ord' (Reihenfolge im Dokument)
|
||||
chunks.sort(key=lambda x: x.payload.get('ord', 999))
|
||||
|
||||
full_text = []
|
||||
for c in chunks:
|
||||
# 'text' ist der reine Inhalt ohne Overlap
|
||||
txt = c.payload.get('text', '')
|
||||
if txt: full_text.append(txt)
|
||||
|
||||
return "\n\n".join(full_text)
|
||||
except:
|
||||
return "Fehler beim Laden des Volltexts."
|
||||
|
||||
def _fetch_previews_for_nodes(self, node_ids):
|
||||
"""Holt Batch-weise den ersten Chunk für eine Liste von Nodes."""
|
||||
if not node_ids: return {}
|
||||
previews = {}
|
||||
try:
|
||||
scroll_filter = models.Filter(must=[models.FieldCondition(key="note_id", match=models.MatchAny(any=node_ids))])
|
||||
# Limit = Anzahl Nodes * 3 (Puffer)
|
||||
chunks, _ = self.client.scroll(self.chunks_col, scroll_filter=scroll_filter, limit=len(node_ids)*3, with_payload=True)
|
||||
|
||||
for c in chunks:
|
||||
nid = c.payload.get("note_id")
|
||||
# Nur den ersten gefundenen Chunk pro Note nehmen
|
||||
if nid and nid not in previews:
|
||||
previews[nid] = c.payload.get("window") or c.payload.get("text") or ""
|
||||
except: pass
|
||||
return previews
|
||||
|
||||
def _find_connected_edges(self, note_ids, note_title=None):
|
||||
"""Findet eingehende und ausgehende Kanten."""
|
||||
|
||||
results = []
|
||||
|
||||
# 1. OUTGOING EDGES (Der "Owner"-Fix)
|
||||
# Wir suchen Kanten, die im Feld 'note_id' (Owner) eine unserer Notizen haben.
|
||||
# Das findet ALLE ausgehenden Kanten, egal ob sie an einem Chunk oder der Note hängen.
|
||||
if note_ids:
|
||||
out_filter = models.Filter(must=[
|
||||
models.FieldCondition(key="note_id", match=models.MatchAny(any=note_ids)),
|
||||
models.FieldCondition(key="kind", match=models.MatchExcept(**{"except": SYSTEM_EDGES}))
|
||||
])
|
||||
# Limit hoch, um alles zu finden
|
||||
res_out, _ = self.client.scroll(self.edges_col, scroll_filter=out_filter, limit=500, with_payload=True)
|
||||
results.extend(res_out)
|
||||
|
||||
# 2. INCOMING EDGES (Ziel = Chunk ID oder Titel oder Note ID)
|
||||
# Hier müssen wir Chunks auflösen, um Treffer auf Chunks zu finden.
|
||||
|
||||
# Chunk IDs der aktuellen Notes holen
|
||||
chunk_ids = []
|
||||
if note_ids:
|
||||
c_filter = models.Filter(must=[models.FieldCondition(key="note_id", match=models.MatchAny(any=note_ids))])
|
||||
chunks, _ = self.client.scroll(self.chunks_col, scroll_filter=c_filter, limit=300)
|
||||
chunk_ids = [c.id for c in chunks]
|
||||
|
||||
shoulds = []
|
||||
# Case A: Edge zeigt auf einen unserer Chunks
|
||||
if chunk_ids:
|
||||
shoulds.append(models.FieldCondition(key="target_id", match=models.MatchAny(any=chunk_ids)))
|
||||
|
||||
# Case B: Edge zeigt direkt auf unsere Note ID
|
||||
if note_ids:
|
||||
shoulds.append(models.FieldCondition(key="target_id", match=models.MatchAny(any=note_ids)))
|
||||
|
||||
# Case C: Edge zeigt auf unseren Titel (Wikilinks)
|
||||
if note_title:
|
||||
shoulds.append(models.FieldCondition(key="target_id", match=models.MatchValue(value=note_title)))
|
||||
|
||||
if shoulds:
|
||||
in_filter = models.Filter(
|
||||
must=[models.FieldCondition(key="kind", match=models.MatchExcept(**{"except": SYSTEM_EDGES}))],
|
||||
should=shoulds
|
||||
)
|
||||
res_in, _ = self.client.scroll(self.edges_col, scroll_filter=in_filter, limit=500, with_payload=True)
|
||||
results.extend(res_in)
|
||||
|
||||
return results
|
||||
|
||||
def _find_connected_edges_batch(self, note_ids):
|
||||
# Wrapper für Level 2 Suche
|
||||
return self._find_connected_edges(note_ids)
|
||||
|
||||
def _process_edge(self, record, nodes_dict, unique_edges, current_depth):
|
||||
"""Verarbeitet eine rohe Edge, löst IDs auf und fügt sie den Dictionaries hinzu."""
|
||||
payload = record.payload
|
||||
src_ref = payload.get("source_id")
|
||||
tgt_ref = payload.get("target_id")
|
||||
kind = payload.get("kind")
|
||||
provenance = payload.get("provenance", "explicit")
|
||||
|
||||
# IDs zu Notes auflösen
|
||||
src_note = self._resolve_note_from_ref(src_ref)
|
||||
tgt_note = self._resolve_note_from_ref(tgt_ref)
|
||||
|
||||
if src_note and tgt_note:
|
||||
src_id = src_note['note_id']
|
||||
tgt_id = tgt_note['note_id']
|
||||
|
||||
if src_id != tgt_id:
|
||||
# Nodes hinzufügen
|
||||
self._add_node_to_dict(nodes_dict, src_note, level=current_depth)
|
||||
self._add_node_to_dict(nodes_dict, tgt_note, level=current_depth)
|
||||
|
||||
# Kante hinzufügen (mit Deduplizierung)
|
||||
key = (src_id, tgt_id)
|
||||
existing = unique_edges.get(key)
|
||||
|
||||
should_update = True
|
||||
# Bevorzuge explizite Kanten vor Smart Kanten
|
||||
is_current_explicit = (provenance in ["explicit", "rule"])
|
||||
if existing:
|
||||
is_existing_explicit = (existing['provenance'] in ["explicit", "rule"])
|
||||
if is_existing_explicit and not is_current_explicit:
|
||||
should_update = False
|
||||
|
||||
if should_update:
|
||||
unique_edges[key] = {"source": src_id, "target": tgt_id, "kind": kind, "provenance": provenance}
|
||||
return src_id, tgt_id
|
||||
return None, None
|
||||
|
||||
def _fetch_note_cached(self, note_id):
|
||||
if note_id in self._note_cache: return self._note_cache[note_id]
|
||||
res, _ = self.client.scroll(
|
||||
collection_name=self.notes_col,
|
||||
scroll_filter=models.Filter(must=[models.FieldCondition(key="note_id", match=models.MatchValue(value=note_id))]),
|
||||
limit=1, with_payload=True
|
||||
)
|
||||
if res:
|
||||
self._note_cache[note_id] = res[0].payload
|
||||
return res[0].payload
|
||||
return None
|
||||
|
||||
def _resolve_note_from_ref(self, ref_str):
|
||||
"""Löst eine ID (Chunk, Note oder Titel) zu einer Note Payload auf."""
|
||||
if not ref_str: return None
|
||||
|
||||
# Fall A: Chunk ID (enthält #)
|
||||
if "#" in ref_str:
|
||||
try:
|
||||
# Versuch 1: Chunk ID direkt
|
||||
res = self.client.retrieve(self.chunks_col, ids=[ref_str], with_payload=True)
|
||||
if res: return self._fetch_note_cached(res[0].payload.get("note_id"))
|
||||
except: pass
|
||||
|
||||
# Versuch 2: NoteID#Section (Hash abtrennen)
|
||||
possible_note_id = ref_str.split("#")[0]
|
||||
if self._fetch_note_cached(possible_note_id): return self._fetch_note_cached(possible_note_id)
|
||||
|
||||
# Fall B: Note ID direkt
|
||||
if self._fetch_note_cached(ref_str): return self._fetch_note_cached(ref_str)
|
||||
|
||||
# Fall C: Titel
|
||||
res, _ = self.client.scroll(
|
||||
collection_name=self.notes_col,
|
||||
scroll_filter=models.Filter(must=[models.FieldCondition(key="title", match=models.MatchValue(value=ref_str))]),
|
||||
limit=1, with_payload=True
|
||||
)
|
||||
if res:
|
||||
self._note_cache[res[0].payload['note_id']] = res[0].payload
|
||||
return res[0].payload
|
||||
return None
|
||||
|
||||
def _add_node_to_dict(self, node_dict, note_payload, level=1):
|
||||
nid = note_payload.get("note_id")
|
||||
if nid in node_dict: return
|
||||
|
||||
ntype = note_payload.get("type", "default")
|
||||
color = GRAPH_COLORS.get(ntype, GRAPH_COLORS["default"])
|
||||
|
||||
# Basis-Tooltip (wird später erweitert)
|
||||
tooltip = f"Titel: {note_payload.get('title')}\nTyp: {ntype}"
|
||||
|
||||
if level == 0: size = 45
|
||||
elif level == 1: size = 25
|
||||
else: size = 15
|
||||
|
||||
node_dict[nid] = Node(
|
||||
id=nid,
|
||||
label=note_payload.get('title', nid),
|
||||
size=size,
|
||||
color=color,
|
||||
shape="dot" if level > 0 else "diamond",
|
||||
title=tooltip,
|
||||
font={'color': 'black', 'face': 'arial', 'size': 14 if level < 2 else 0}
|
||||
)
|
||||
36	app/frontend/ui_sidebar.py	Normal file
@ -0,0 +1,36 @@
import streamlit as st
from ui_utils import load_history_from_logs
from ui_config import HISTORY_FILE

def render_sidebar():
    with st.sidebar:
        st.title("🧠 mindnet")
        st.caption("v2.6 | WP-19 Graph View")

        if "sidebar_mode_selection" not in st.session_state:
            st.session_state["sidebar_mode_selection"] = "💬 Chat"

        mode = st.radio(
            "Modus",
            [
                "💬 Chat",
                "📝 Manueller Editor",
                "🕸️ Graph (Agraph)",
                "🕸️ Graph (Cytoscape)"  # <-- new entry
            ],
            key="sidebar_mode_selection"
        )

        st.divider()
        st.subheader("⚙️ Settings")
        top_k = st.slider("Quellen (Top-K)", 1, 10, 5)
        explain = st.toggle("Explanation Layer", True)

        st.divider()
        st.subheader("🕒 Verlauf")
        for q in load_history_from_logs(HISTORY_FILE, 8):
            if st.button(f"🔎 {q[:25]}...", key=f"hist_{q}", use_container_width=True):
                st.session_state.messages.append({"role": "user", "content": q})
                st.rerun()

    return mode, top_k, explain
137	app/frontend/ui_utils.py	Normal file
@ -0,0 +1,137 @@
import re
import yaml
import unicodedata
import json
from datetime import datetime

def slugify(value):
    if not value: return ""
    value = str(value).lower()
    replacements = {'ä': 'ae', 'ö': 'oe', 'ü': 'ue', 'ß': 'ss', '&': 'und', '+': 'und'}
    for k, v in replacements.items():
        value = value.replace(k, v)

    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
    value = re.sub(r'[^\w\s-]', '', value).strip()
    return re.sub(r'[-\s]+', '-', value)

def normalize_meta_and_body(meta, body):
    ALLOWED_KEYS = {"title", "type", "status", "tags", "id", "created", "updated", "aliases", "lang"}
    clean_meta = {}
    extra_content = []

    if "titel" in meta and "title" not in meta:
        meta["title"] = meta.pop("titel")

    tag_candidates = ["tags", "emotionale_keywords", "keywords", "schluesselwoerter"]
    all_tags = []
    for key in tag_candidates:
        if key in meta:
            val = meta[key]
            if isinstance(val, list): all_tags.extend(val)
            elif isinstance(val, str): all_tags.extend([t.strip() for t in val.split(",")])

    for key, val in meta.items():
        if key in ALLOWED_KEYS:
            clean_meta[key] = val
        elif key in tag_candidates:
            pass
        else:
            if val and isinstance(val, str):
                header = key.replace("_", " ").title()
                extra_content.append(f"## {header}\n{val}\n")

    if all_tags:
        clean_tags = []
        for t in all_tags:
            t_clean = str(t).replace("#", "").strip()
            if t_clean: clean_tags.append(t_clean)
        clean_meta["tags"] = list(set(clean_tags))

    if extra_content:
        new_section = "\n".join(extra_content)
        final_body = f"{new_section}\n{body}"
    else:
        final_body = body

    return clean_meta, final_body

def parse_markdown_draft(full_text):
    clean_text = full_text.strip()
    pattern_block = r"```(?:markdown|md|yaml)?\s*(.*?)\s*```"
    match_block = re.search(pattern_block, clean_text, re.DOTALL | re.IGNORECASE)
    if match_block:
        clean_text = match_block.group(1).strip()

    meta = {}
    body = clean_text
    yaml_str = ""

    parts = re.split(r"^---+\s*$", clean_text, maxsplit=2, flags=re.MULTILINE)

    if len(parts) >= 3:
        yaml_str = parts[1]
        body = parts[2]
    elif clean_text.startswith("---"):
        fallback_match = re.search(r"^---\s*(.*?)(?=\n#)", clean_text, re.DOTALL | re.MULTILINE)
        if fallback_match:
            yaml_str = fallback_match.group(1)
            body = clean_text.replace(f"---{yaml_str}", "", 1).strip()

    if yaml_str:
        yaml_str_clean = yaml_str.replace("#", "")
        try:
            parsed = yaml.safe_load(yaml_str_clean)
            if isinstance(parsed, dict):
                meta = parsed
        except Exception as e:
            print(f"YAML Parsing Warning: {e}")

    if not meta.get("title"):
        h1_match = re.search(r"^#\s+(.*)$", body, re.MULTILINE)
        if h1_match:
            meta["title"] = h1_match.group(1).strip()

    if meta.get("type") == "draft":
        meta["status"] = "draft"
        meta["type"] = "experience"

    return normalize_meta_and_body(meta, body)

def build_markdown_doc(meta, body):
    if "id" not in meta or meta["id"] == "generated_on_save":
        raw_title = meta.get('title', 'note')
        clean_slug = slugify(raw_title)[:50] or "note"
        meta["id"] = f"{datetime.now().strftime('%Y%m%d')}-{clean_slug}"

    meta["updated"] = datetime.now().strftime("%Y-%m-%d")

    ordered_meta = {}
    prio_keys = ["id", "type", "title", "status", "tags"]
    for k in prio_keys:
        if k in meta: ordered_meta[k] = meta.pop(k)
    ordered_meta.update(meta)

    try:
        yaml_str = yaml.dump(ordered_meta, default_flow_style=None, sort_keys=False, allow_unicode=True).strip()
    except Exception:
        yaml_str = "error: generating_yaml"

    return f"---\n{yaml_str}\n---\n\n{body}"

def load_history_from_logs(filepath, limit=10):
    queries = []
    if filepath.exists():
        try:
            with open(filepath, "r", encoding="utf-8") as f:
                lines = f.readlines()
            for line in reversed(lines):
                try:
                    entry = json.loads(line)
                    q = entry.get("query_text")
                    if q and q not in queries:
                        queries.append(q)
                        if len(queries) >= limit: break
                except Exception:
                    continue
        except Exception:
            pass
    return queries
@ -44,6 +44,7 @@ The repository is split into **logical domains**.
| `03_tech_ingestion_pipeline.md`| **Import.** Flow logic (13 steps), chunker profiles, smart edge allocation. |
| `03_tech_retrieval_scoring.md` | **Search.** The mathematical formulas for scoring, hybrid search, and the explanation layer. |
| `03_tech_chat_backend.md` | **API & LLM.** Implementation of the router, traffic control (semaphore), and feedback traceability. |
| `03_tech_frontend.md` | **UI & Graph.** Architecture of the Streamlit frontend, state management, Cytoscape integration, and editor logic. |
| `03_tech_configuration.md` | **Config.** Reference tables for `.env`, `types.yaml`, and `retriever.yaml`. |

### 📂 04_Operations

@ -77,6 +78,7 @@ Use this matrix when working on a workpackage to keep the documentation consistent.
| **Importer / Parsing** | `03_tech_ingestion_pipeline.md` |
| **Database schema** | `03_tech_data_model.md` (adjust payloads) |
| **Retrieval / Scoring** | `03_tech_retrieval_scoring.md` (adjust formulas) |
| **Frontend / Visualization** | 1. `03_tech_frontend.md` (technical details)<br>2. `01_chat_usage_guide.md` (usage) |
| **Chat logic / Prompts**| 1. `02_concept_ai_personality.md` (concept)<br>2. `03_tech_chat_backend.md` (tech)<br>3. `01_chat_usage_guide.md` (user view) |
| **Deployment / Server** | `04_admin_operations.md` |
| **New features (general)**| `06_active_roadmap.md` (status update) |
@ -1,13 +1,13 @@
---
doc_type: user_manual
audience: user, mindmaster
scope: chat, ui, feedback
scope: chat, ui, feedback, graph
status: active
version: 2.6
context: "Guide to using the web interface, the chat personas, and feedback."
context: "Guide to using the web interface, the chat personas, and the Graph Explorer."
---

# Chat Usage Guide
# Chat & Graph Usage Guide

**Sources:** `user_guide.md`
@ -36,10 +36,25 @@ Since version 2.3.1 you operate Mindnet through a graphical interface in the browser.
* **Green dot:** High relevance (score > 0.8).
* **Click on it:** Shows the text excerpt and the **reasoning** (Explanation Layer).

### 2.2 The Sidebar
* **Mode selection:** Switch between "💬 Chat" and "📝 Manueller Editor".
  * *Feature:* The editor uses a "Resurrection Pattern": your input survives even when you switch tabs.
* **Settings:** Here you control `Top-K` (number of sources) and the `Explanation Layer`.
### 2.2 Mode: 🕸️ Graph Explorer (Cytoscape)
*New in v2.6*: an interactive map of your knowledge.

**The color logic:**
* 🔴 **Red border:** The current **center** (ego). All loaded nodes are neighbors of this node.
* 🟡 **Yellow border:** The **inspected** node. Its details appear in the "Data Inspector" and in the action bar.

**Interaction:**
1. **Click a node:** Selects it (yellow). The graph stays stable, but the info bar at the top updates.
2. **Button "🎯 Als Zentrum setzen":** Reloads the graph and makes the selected node the red center.
3. **Button "📝 Bearbeiten":** Jumps straight into the editor with this node's content.

**Layout & persistence:**
In the left column you can tune the physics (repulsion, edge length). These settings are stored in the **URL**. You can bookmark the link to return to exactly this view.

### 2.3 Mode: 📝 Manual Editor
An editor with a **"File System First"** guarantee.
* When you edit a file, Mindnet reads it directly from disk.
* **Resurrection:** If you switch to the graph in between and come back, your typed text is still there.

---
160	docs/03_Technical_References/03_tech_frontend.md	Normal file
@ -0,0 +1,160 @@
---
doc_type: technical_reference
audience: developer, frontend_architect
scope: architecture, graph_viz, state_management
status: active
version: 2.6
context: "Technical documentation of the modular Streamlit frontend, the graph engines, and the editor."
---

# Technical Reference: Frontend & Visualization

**Context:** Mindnet uses Streamlit not just as a simple UI, but as a full application with its own state management, routing, and persistence.

---

## 1. Architecture & Modularization (WP19)

Since version 2.6 the frontend (`app/frontend/`) is no longer a monolith; it is split into functional modules.

### 1.1 File Structure & Responsibilities

| Module | Purpose |
| :--- | :--- |
| `ui.py` | **Router.** The entrypoint. Initializes session state and decides, based on the sidebar selection, which view gets rendered. |
| `ui_config.py` | **Constants.** Central place for colors (`GRAPH_COLORS`), API endpoints, and timeouts. |
| `ui_api.py` | **Backend bridge.** Encapsulates all `requests` calls to the FastAPI backend. |
| `ui_callbacks.py` | **State transitions.** Logic for view switches (e.g. jumping from the graph into the editor). |
| `ui_utils.py` | **Helpers.** Markdown parsing (`parse_markdown_draft`) and string normalization. |
| `ui_graph_service.py`| **Data logic.** Fetches data from Qdrant and prepares nodes/edges (independent of the vis library). |
| `ui_graph_cytoscape.py`| **View: Graph.** Implementation with `st-cytoscape` (COSE layout). |
| `ui_editor.py` | **View: Editor.** Logic for drafts and manual editing. |

### 1.2 Configuration (`ui_config.py`)

Central control of the visual semantics.

```python
# Mapping of node types to colors (hex)
GRAPH_COLORS = {
    "project": "#ff9f43",     # orange
    "decision": "#5f27cd",    # purple
    "experience": "#feca57",  # yellow (empathy)
    "concept": "#54a0ff",     # blue
    "risk": "#ff6b6b"         # red
}
```

---

## 2. Graph Visualization

Mindnet primarily uses **Cytoscape** for stability with large graphs.

### 2.1 Engine: Cytoscape (`st-cytoscape`)
* **Algorithm:** `COSE` (Compound Spring Embedder).
* **Advantage:** Actively prevents overlaps (`nodeRepulsion`).

### 2.2 Architecture Pattern: "Active Inspector, Passive Graph"
A common problem with Streamlit graphs is "flickering" (re-renders) on every click. We solve this through decoupling:

1. **Stable key:** The React key of the component does *not* depend on the selection, only on the center (`center_id`) and the layout settings.
2. **CSS selection:** We do **not** use the native `:selected` state (buggy with single-select); instead we inject a CSS class `.inspected`.

**Stylesheet implementation (`ui_graph_cytoscape.py`):**

```python
stylesheet = [
    {
        "selector": "node",
        "style": { "background-color": "data(bg_color)" }
    },
    # We control the highlight manually via a class
    {
        "selector": ".inspected",
        "style": {
            "border-width": 6,
            "border-color": "#FFC300",  # yellow/gold
            "z-index": 999
        }
    },
    # Native selection is suppressed / made invisible
    {
        "selector": "node:selected",
        "style": { "overlay-opacity": 0, "border-width": 0 }
    }
]
```

---

## 3. Editor & Single Source of Truth

A critical design pattern is how data consistency is handled while editing ("File System First").

### 3.1 The Problem
Qdrant stores metadata and chunks, but the `fulltext` field in the payload can be stale or lose formatting.

### 3.2 The Solution (Logic Flow)
The `switch_to_editor_callback` in `ui_callbacks.py` implements the following cascade:

```python
def switch_to_editor_callback(note_payload):
    # 1. Read the path from the Qdrant payload
    origin_fname = note_payload.get('path')

    content = ""
    # 2. Attempt: hard read from disk (source of truth)
    if origin_fname and os.path.exists(origin_fname):
        with open(origin_fname, "r", encoding="utf-8") as f:
            content = f.read()
    else:
        # 3. Fallback: reconstruction from the DB ("stitching")
        # Emergency only, e.g. if the Docker volume is missing
        content = note_payload.get('fulltext', '')

    # Set the state (transport via message bus)
    st.session_state.messages.append({
        "role": "assistant",
        "intent": "INTERVIEW",
        "content": content,
        "origin_filename": origin_fname
    })
    st.session_state["sidebar_mode_selection"] = "📝 Manueller Editor"
```

This guarantees that the editor always shows the **real, current state** of the Markdown file.

---

## 4. State Management Patterns

### 4.1 URL Persistence (Deep Linking)
Layout settings are stored in the URL so they survive a page refresh (F5).

```python
# ui_graph_cytoscape.py
def update_url_params():
    st.query_params["depth"] = st.session_state.cy_depth
    st.query_params["rep"] = st.session_state.cy_node_repulsion

# Init
if "cy_depth" not in st.session_state:
    st.session_state.cy_depth = int(st.query_params.get("depth", 2))
```

### 4.2 Resurrection Pattern
Prevents data loss when the user switches tabs mid-typing. The editor content is mirrored into `st.session_state` on every keystroke (`on_change`) and restored from there when the component reloads.
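The mechanics of this pattern can be sketched framework-free, with a plain dict standing in for `st.session_state` (the key names `editor_draft` and `editor_widget` are illustrative, not the actual implementation):

```python
# A plain dict stands in for st.session_state; "editor_draft" is the
# persistent mirror, "editor_widget" the volatile widget key.
session_state = {}

def on_change():
    # Mirror the widget value on every keystroke
    session_state["editor_draft"] = session_state["editor_widget"]

def mount_editor():
    # On re-mount (after a view switch destroyed the widget),
    # seed the widget from the mirror instead of starting empty
    session_state["editor_widget"] = session_state.get("editor_draft", "")
    return session_state["editor_widget"]

# The user types, and the on_change callback fires:
session_state["editor_widget"] = "Half-written note"
on_change()

# View switch: the widget's own state is discarded...
del session_state["editor_widget"]

# ...but re-mounting restores the draft
assert mount_editor() == "Half-written note"
```

In real Streamlit code the mirror key just has to be distinct from the widget key, since widget-bound keys are dropped when the widget is not rendered.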
### 4.3 Healing Parser (`ui_utils.py`)
The LLM often returns invalid YAML or Markdown. The parser (`parse_markdown_draft`):
* Repairs missing frontmatter separators (`---`).
* Extracts JSON/YAML from code blocks.
* Normalizes tags (strips `#`).
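One of these repairs, tag normalization, in isolation (a simplified, illustrative extract, not the actual parser code):

```python
import re

def heal_tags(raw_tags):
    """Normalize LLM-produced tags: accept a list or a comma string, strip '#'."""
    if isinstance(raw_tags, str):
        raw_tags = raw_tags.split(",")
    cleaned = []
    for t in raw_tags:
        t = str(t).replace("#", "").strip()
        if t:  # drop entries that were only '#' or whitespace
            cleaned.append(t)
    return cleaned

print(heal_tags("#projekt, #ki , "))  # ['projekt', 'ki']
```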
---

## 5. Constraints & Security (Known Limitations)

### 5.1 File System Security
The editor ("File System First") trusts the path stored in the Qdrant field `path`.
* **Risk:** Path traversal (e.g. `../../etc/passwd`).
* **Mitigation:** There is currently no strict check that the path lies inside the `./vault` folder. The system assumes the vector database is a **trusted source**, populated only by the internal importer.
* **ToDo:** Before opening the API to third parties, a `Path.resolve().is_relative_to(VAULT_ROOT)` check must be implemented here.
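The planned check could look like this (a sketch; `VAULT_ROOT` and `safe_vault_path` are assumed names, and `Path.is_relative_to` requires Python 3.9+):

```python
from pathlib import Path

VAULT_ROOT = Path("./vault").resolve()

def safe_vault_path(raw_path: str) -> Path:
    """Resolve a payload-supplied path and reject anything outside the vault."""
    candidate = Path(raw_path).resolve()
    if not candidate.is_relative_to(VAULT_ROOT):
        raise ValueError(f"Path escapes vault: {raw_path}")
    return candidate

# safe_vault_path("vault/note.md")     -> resolved Path inside the vault
# safe_vault_path("../../etc/passwd")  -> raises ValueError
```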
### 5.2 Browser Performance
The graph visualization (`st-cytoscape`) renders client-side in the browser.
* **Limit:** Above roughly **500 nodes/edges**, rendering can become sluggish.
* **Design decision:** The UI is built for **ego graphs** (neighbors of a node) and filtered views, not for rendering the entire knowledge graph ("whole brain visualization").
@ -85,6 +85,7 @@ Environment="STREAMLIT_SERVER_PORT=8501"
Environment="STREAMLIT_SERVER_ADDRESS=0.0.0.0"
Environment="STREAMLIT_SERVER_HEADLESS=true"

# IMPORTANT: path to the new router file (ui.py)
ExecStart=/home/llmadmin/mindnet/.venv/bin/streamlit run app/frontend/ui.py
Restart=always
RestartSec=5

@ -107,21 +108,28 @@ Runs the sync hourly. Uses `--purge-before-upsert` for cleanliness.

### 3.2 Troubleshooting Guide

**Error: "ModuleNotFoundError: No module named 'st_cytoscape'"**
* Cause: stale dependencies or the wrong package installed.
* Fix: update the environment.
```bash
source .venv/bin/activate
pip uninstall streamlit-cytoscapejs
pip install st-cytoscape
# Or, to be safe:
pip install -r requirements.txt
```

**Error: "Vector dimension error: expected 768, got 384"**
* Cause: old DB (v2.2) with the new model (v2.4).
* Fix: **full reset** (see chapter 4.2).

**Error: "500 Internal Server Error" (Ollama)**
* Cause: timeout during the model's cold start.
* Fix: set `MINDNET_LLM_TIMEOUT=300.0` in `.env`.

**Error: import is very slow**
* Cause: smart edges are active and analyze every chunk.
* Fix: check `MINDNET_LLM_BACKGROUND_LIMIT` or disable the feature in `types.yaml`.

**Error: UI "Read timed out"**
* Cause: the backend needs longer than 60 s for smart edges.
* Fix: set `MINDNET_API_TIMEOUT=300.0`.
* Fix: set `MINDNET_API_TIMEOUT=300.0` in `.env` (or in the systemd service).

---
@ -32,6 +32,7 @@ Mindnet runs in a distributed environment (post-WP15 setup).
The code is split into modular domains: `app` (logic), `scripts` (CLI), and `config` (control).

### 2.1 Directory Tree

```text
mindnet/
├── app/

@ -50,8 +51,15 @@ mindnet/
│   │   ├── semantic_analyzer.py  # LLM filter for edges (WP15)
│   │   ├── embeddings_client.py  # Async embeddings (HTTPX)
│   │   └── discovery.py          # Intelligence logic (WP11)
│   ├── frontend/
│   │   └── ui.py                 # Streamlit app incl. healing parser
│   ├── frontend/                 # UI logic (WP19 modularization)
│   │   ├── ui.py                 # Main entrypoint & routing
│   │   ├── ui_config.py          # Styles & constants
│   │   ├── ui_api.py             # Backend connector
│   │   ├── ui_callbacks.py       # State transitions
│   │   ├── ui_utils.py           # Helpers & parsing
│   │   ├── ui_graph_service.py   # Graph data logic
│   │   ├── ui_graph_cytoscape.py # Modern graph view (st-cytoscape)
│   │   └── ui_editor.py          # Editor view
│   └── main.py                   # API entrypoint
├── config/                       # YAML configuration (single source of truth)
├── scripts/                      # CLI tools (import, diagnostics, reset)

@ -61,6 +69,12 @@ mindnet/

### 2.2 Module Details (How It Works)

**The frontend (`app.frontend`) - *new in v2.6***
* **Router (`ui.py`):** Decides which view gets loaded.
* **Service layer (`ui_graph_service.py`):** Encapsulates the Qdrant queries and returns raw nodes/edges, which the views then visualize.
* **Graph engine:** We use `st-cytoscape` for the layout. The logic that avoids re-renders (stable keys, CSS selection) is essential.
* **Data consistency:** The editor (`ui_editor.py`) prefers reading files directly from the file system ("File System First") to avoid data loss from stale DB entries.

**The importer (`scripts.import_markdown`)**
* The most complex module.
* Uses `app.core.chunker` and `app.services.semantic_analyzer` (smart edges).

@ -76,10 +90,6 @@ mindnet/
* Implements the scoring formula (`semantics + graph + type`).
* **Hybrid search:** Dynamically loads the subgraph (`graph_adapter.expand`).

**The frontend (`app.frontend.ui`)**
* **Resurrection pattern:** Uses `st.session_state` to preserve input across tab switches.
* **Healing parser:** The function `parse_markdown_draft` automatically repairs broken YAML frontmatter from the LLM.

**Traffic control (`app.services.llm_service`)**
* Ensures the chat stays responsive even while an import is running.
* Uses an `asyncio.Semaphore` (`MINDNET_LLM_BACKGROUND_LIMIT`) to throttle background jobs.
|
||||
|
|
@ -91,6 +101,7 @@ mindnet/
|
|||
**Voraussetzungen:** Python 3.10+, Docker, Ollama.
|
||||
|
||||
**Installation:**
|
||||
|
||||
```bash
|
||||
# 1. Repo & Venv
|
||||
git clone <repo> mindnet
|
||||
|
|
@ -98,7 +109,7 @@ cd mindnet
|
|||
python3 -m venv .venv
|
||||
source .venv/bin/activate
|
||||
|
||||
# 2. Dependencies
|
||||
# 2. Dependencies (inkl. st-cytoscape)
|
||||
pip install -r requirements.txt
|
||||
|
||||
# 3. Ollama (Nomic ist Pflicht!)
|
||||
|
|
@ -107,6 +118,7 @@ ollama pull nomic-embed-text
|
|||
```
|
||||
|
||||
**Konfiguration (.env):**
|
||||
|
||||
```ini
|
||||
QDRANT_URL="http://localhost:6333"
|
||||
COLLECTION_PREFIX="mindnet_dev"
|
||||
|
|

Mindnet learns through configuration, not through training.

```yaml
DECISION:
  inject_types: ["value", "risk"]  # add 'risk'
```

3. **Cognition (`prompts.yaml`):** (Optional) Adjust the system prompt if needed.
4. **Visualization (`ui_config.py`):**
   Add an entry to the `GRAPH_COLORS` dictionary:
   ```python
   "risk": "#ff6b6b"
   ```

### Workflow B: Adjusting the Interview Schema (WP07)

If Mindnet should ask new questions (e.g. "budget" for projects):

## 6. Tests & Debugging

**Unit tests:**

```bash
pytest tests/test_retriever_basic.py
pytest tests/test_chunking.py
```

**Pipeline tests:**

```bash
# Check the JSON schema
python3 -m scripts.payload_dryrun --vault ./test_vault
python3 -m scripts.edges_full_check
```

**E2E smoke tests:**

```bash
# Decision engine
python tests/test_wp06_decision.py -p 8002 -e DECISION -q "Soll ich X tun?"
python tests/test_feedback_smoke.py --url http://localhost:8002/query
```

## 7. Troubleshooting & One-Liners

**Reset the DB completely (careful!):**

```bash
python3 -m scripts.reset_qdrant --mode wipe --prefix "mindnet_dev" --yes
```

**Inspect a single file (parser's view):**

```bash
python3 tests/inspect_one_note.py --file ./vault/MeinFile.md
```

**Watch live logs:**

```bash
journalctl -u mindnet-dev -f
journalctl -u mindnet-ui-dev -f
```

**"Read timed out" in the frontend:**
* Cause: The backend takes longer than 60s for Smart Edges.
* Fix: Set `MINDNET_API_TIMEOUT=300.0` in `.env`.

# Mindnet Active Roadmap

**Current state:** v2.6.0 (post-WP19)
**Focus:** Visualization, exploration & deep search.

## 1. Program Status

With the implementation of the Graph Explorer (WP19) we have reached a milestone in **Phase E (Maintenance & Scaling)**. The architecture is now modular. The next step (WP19a) deepens the analysis capabilities.

| Phase | Focus | Status |
| :--- | :--- | :--- |
| **Phase A** | Foundation & Import | ✅ Done |
| **Phase B** | Semantics & Graph | ✅ Done |
| **Phase C** | Personality | ✅ Done |
| **Phase D** | Interaction & Tools | ✅ Done |
| **Phase E** | Maintenance & Visualization | 🚀 Active |

---

| **WP-10a**| Draft Editor | GUI component for editing and saving generated notes. |
| **WP-11** | Backend Intelligence | `nomic-embed-text` (768d) and matrix logic for edge typing. |
| **WP-15** | Smart Edge Allocation | LLM filter for edges in chunks + traffic control (semaphore). |
| **WP-19** | Graph Visualization | **Frontend modularization:** restructuring into `ui_*.py`.<br>**Graph engines:** Cytoscape (COSE) and Agraph running in parallel.<br>**Tools:** "Single Source of Truth" editor, persistence via URL. |

---

These features are up next.

### WP-19a – Graph Intelligence & Discovery (sprint focus)
**Status:** 🚀 Ready to start
**Goal:** From "viewing" to "understanding": deep-dive tools for the graph.
* **Discovery screen:** A new tab for semantic search ("find notes about fatherhood") and wildcard filters.
* **Filter logic:** "Show only paths that lead to `type:decision`".
* **Chunk inspection:** Switchable granularity (note vs. chunk) for validating the smart chunker.

### WP-16 – Auto-Discovery & Enrichment
**Status:** 🟡 Planned
**Goal:** Automatically detect missing edges in "dumb" text *before* saving.
* **Problem:** Users forget wikilinks.
* **Solution:** An "enricher" scans the text before import, finds keywords (e.g. "Mindnet") and suggests links (`[[Mindnet]]`).
* **Scope:** Unlike *Active Intelligence* (WP11, UI-based), this runs in the backend (importer).

### WP-17 – Conversational Memory
**Status:** 🟡 Planned
**Goal:** Real dialogue instead of request-response.
* **Tech:** Extend the `ChatRequest` DTO with a `history` field.
* **Logic:** Token management (context-window balancing between RAG documents and chat history).
* **Benefit:** Follow-up questions ("What do you mean by that?") become possible.
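The DTO extension could look roughly like this. This is a sketch using dataclasses; the actual `ChatRequest` and its field names may differ:

```python
from dataclasses import dataclass, field

# Hypothetical shapes; the actual DTOs in the backend may differ.
@dataclass
class ChatMessage:
    role: str     # "user" or "assistant"
    content: str

@dataclass
class ChatRequest:
    question: str
    history: list[ChatMessage] = field(default_factory=list)  # new in WP-17

req = ChatRequest(
    question="Was meinst du damit?",
    history=[ChatMessage(role="user", content="Erkläre Smart Edges.")],
)
print(len(req.history))  # 1
```

Defaulting `history` to an empty list keeps existing single-turn clients working unchanged.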

### WP-18 – Graph Health & Maintenance
**Status:** 🟡 Planned (prio 2)
* **Feature:** Cron job `check_graph_integrity.py`.
* **Function:** Finds "dangling edges" (links pointing to deleted notes) and repairs or deletes them.
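The core of such an integrity check can be sketched in a few lines. Function name and data shapes here are assumptions, not the actual script:

```python
# Hypothetical sketch; the real check_graph_integrity.py may differ.
def find_dangling_edges(edges: list[tuple[str, str]],
                        existing_notes: set[str]) -> list[tuple[str, str]]:
    # An edge dangles if its target note no longer exists.
    return [(src, dst) for src, dst in edges if dst not in existing_notes]

notes = {"ProjectX", "Mindnet"}
edges = [("ProjectX", "Mindnet"), ("ProjectX", "DeletedNote")]
print(find_dangling_edges(edges, notes))  # [('ProjectX', 'DeletedNote')]
```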

### WP-20 – Cloud Hybrid Mode (optional)
**Status:** ⚪ Optional
**Goal:** "Turbo mode" for mass imports.
* **Concept:** A switch in `.env` to use Google Gemini (cloud) instead of Ollama (local).

---

```mermaid
graph TD
    WP19(Graph Viz) --> WP19a(Discovery)
    WP19a --> WP17(Memory)
    WP15(Smart Edges) --> WP16(Auto-Discovery)
    WP03(Import) --> WP18(Health Check)
```

**Next release (v2.7):**
* Goal: Visualization (WP19) + Conversational Memory (WP17).
* ETA: Q1 2026.

---

## 5. Governance

* **Versioning:** Semantic versioning via Gitea tags.
* **Feature branches:** Every WP gets a branch `feature/wpXX-name`.
* **Sync first:** Pull `main` before creating a new branch.

doc_type: archive
audience: historian, architect
status: archived
context: "Archived details on completed workpackages (WP01-WP19). Reference for historical design decisions."
---

# Legacy Workpackages (Archive)

**Sources:** `Programmplan_V2.2.md`, `Active Roadmap`

**Status:** Completed / implemented.
This document serves as a reference for the history of Mindnet v2.6.

### WP-15 – Smart Edge Allocation (milestone)
* **Problem:** "Broadcasting". A chunk inherited all of its note's links, including irrelevant ones, which diluted search results.
* **Solution:** The LLM checks each chunk for link relevance.
* **Tech:** Introduction of **traffic control** (semaphore) to run import and chat in parallel without overloading the hardware.

---

## Phase E: Visualization & Maintenance (WP19)

### WP-19 – Graph Visualization & Modularization
* **Goal:** Create transparency about the data structure and pay down technical debt (the monolith).
* **Result:**
  * **Modularization:** Splitting `ui.py` into router, services and views (`ui_*.py`).
  * **Graph Explorer:** Introduction of `st-cytoscape` for stable, non-overlapping layouts (COSE) alongside the legacy engine (Agraph).
  * **Single Source of Truth:** The editor now loads content directly from the file system instead of (potentially stale) vector payloads.
  * **UX:** URL persistence for layout settings and CSS-based highlighting to avoid re-renders.
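For orientation, Cytoscape-based components expect graph data as a flat list of element dicts. A sketch of such a conversion; the actual `ui_cytoscape.py` may shape its elements differently:

```python
# Hypothetical converter; the real ui_cytoscape.py may differ.
def to_cytoscape_elements(nodes: list[str],
                          edges: list[tuple[str, str]]) -> list[dict]:
    # Cytoscape.js expects nodes and edges as {"data": {...}} entries.
    elements = [{"data": {"id": n, "label": n}} for n in nodes]
    elements += [{"data": {"source": s, "target": t}} for s, t in edges]
    return elements

print(to_cytoscape_elements(["A", "B"], [("A", "B")]))
```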

tqdm>=4.67.1
pytest>=8.4.2

# --- Frontend (WP-10) ---
streamlit>=1.39.0

# Visualization (parallel operation)
streamlit-agraph>=0.0.45
st-cytoscape>=1.0.0