scripts/import_markdown.py updated
Some checks failed
Deploy mindnet to llm-node / deploy (push) Failing after 1s

This commit is contained in:
Lars 2025-09-05 09:02:20 +02:00
parent 41d43c2bb6
commit d9e30d5fb4


@@ -2,23 +2,23 @@
 # -*- coding: utf-8 -*-
 """
 Name: scripts/import_markdown.py
-Version: v2.2.2 (2025-09-05)
+Version: v2.2.3 (2025-09-05)
 Short description:
     Imports Obsidian Markdown notes into Qdrant (Notes/Chunks/Edges).
-    - Derives wikilink edges (references/backlink/references_at) directly from the fulltext + real chunk texts.
-    - Idempotency: computes hash_fulltext; on change, the note's old chunks/edges are removed (replace-on-change).
-    - Unchanged notes are skipped (fast).
-    - Edge purge is robust against client/API differences (filter + scroll-delete fallback).
+    - Edges directly from wikilinks: references / backlink / references_at (chunk-based).
+    - Idempotency: hash_fulltext; on change, purge & rebuild (chunks/edges) per note.
+    - Unchanged notes are skipped.
 Invocation (from the project root, inside the venv):
-    python3 -m scripts.import_markdown --vault ./vault [--apply] [--note-id NOTE_ID] [--embed-note] [--force-replace]
+    python3 -m scripts.import_markdown --vault ./vault [--apply] [--note-id NOTE_ID] [--embed-note] [--force-replace] [--version]
 Parameters:
     --vault          Path to the vault (e.g. ./vault)
-    --apply          Performs upserts into Qdrant (without the flag: dry run with JSON summaries)
-    --note-id        Process only one specific note ID
+    --apply          Writes to Qdrant (without the flag: dry run; one JSON line per note)
+    --note-id        Process only the given note ID
     --embed-note     Optional: additionally embed the note vector (fulltext)
     --force-replace  Force purge & rebuild even without a hash change (debug)
+    --version        Print only the script version and exit
 Environment variables (optional):
     QDRANT_URL, QDRANT_API_KEY, COLLECTION_PREFIX, VECTOR_DIM (default 384)
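The replace-on-change idempotency described in the docstring can be sketched as follows. This is a minimal sketch: the normalization inside the hash and the `needs_replace` helper are assumptions for illustration, not the script's actual implementation.

```python
import hashlib

def compute_hash_fulltext(body: str) -> str:
    # Stable fingerprint of the note body; line endings are normalized here
    # so the hash does not differ between platforms (assumed normalization).
    normalized = body.replace("\r\n", "\n").strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def needs_replace(existing_hash, new_body: str, force: bool = False) -> bool:
    # Purge & rebuild only when the fulltext changed (or --force-replace is set);
    # a missing existing hash (new note) always triggers a build.
    return force or existing_hash != compute_hash_fulltext(new_body)
```

Unchanged notes then short-circuit before any Qdrant writes, which is what makes re-imports fast.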
@@ -26,21 +26,17 @@ Environment variables (optional):
 Exit codes:
     0 = OK, 2 = no Markdown files found
-Notes:
-    - Wikilink derivation is based on app.core.derive_edges (slug/ID resolution, unresolved status).
-    - For references_at, real chunk texts are passed in (otherwise they would be missing).
-    - Purge uses:
-        * Chunks: payload.note_id == NOTE_ID
-        * Edges : (source_id == NOTE_ID) OR (target_id == NOTE_ID) OR (source_id startswith NOTE_ID + "#")
-          If MatchText/prefix is not supported: scroll & delete-by-IDs as fallback.
-    - Notes/Chunks/Edges stay compatible with validator & backfill (UUIDv5, 1D edges).
+Key sources:
+    - Edge derivation (new, wikilinks): app/core/derive_edges.py (build_note_index, derive_wikilink_edges)
+    - Qdrant setup (1D edges, COSINE): app/core/qdrant.py
+    - Upsert/ID assignment (UUIDv5): app/core/qdrant_points.py
+    - (Legacy) app/core/edges.py is NOT wired into the importer any more (kept for compatibility, but not active).
 Changelog:
-    v2.2.2: Removed minimum_should (compatibility); edge purge with scroll fallback for the source_id prefix.
+    v2.2.3: Guard against the legacy import path; --version flag; clear log line with the script version.
+    v2.2.2: Made purge compatible (without minimum_should); scroll fallback for the source_id prefix.
     v2.2.1: Bugfix typo (args.force_replace).
     v2.2.0: Hash-based replace-on-change logic; purge per note; skip unchanged.
-    v2.1.1: Ensured references_at by passing real chunk texts.
-    v2.1.0: Upfront note index over the vault; direct edge derivation.
 """
 from __future__ import annotations
 import argparse, os, glob, json, sys, hashlib
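The docstring points to app/core/qdrant_points.py for UUIDv5-based ID assignment. The idea behind deterministic point IDs can be illustrated like this; the namespace constant and key format are hypothetical, only the UUIDv5 mechanism itself is taken from the source.

```python
import uuid

# Hypothetical namespace; the real constant lives in app/core/qdrant_points.py.
NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "mindnet")

def point_id_for(kind: str, logical_id: str) -> str:
    # UUIDv5 is a hash of (namespace, name): the same note/chunk/edge key
    # always yields the same point ID, so repeated upserts overwrite in place
    # instead of accumulating duplicates.
    return str(uuid.uuid5(NAMESPACE, f"{kind}:{logical_id}"))
```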
@@ -50,15 +46,17 @@ from dotenv import load_dotenv
 from qdrant_client import QdrantClient
 from qdrant_client.http import models as rest
 from app.core.parser import read_markdown, normalize_frontmatter, validate_required_frontmatter
 from app.core.note_payload import make_note_payload
 from app.core.validate_note import validate_note_payload
 from app.core.chunker import assemble_chunks
 from app.core.chunk_payload import make_chunk_payloads
 from app.core.embed import embed_texts, embed_one
 from app.core.qdrant import QdrantConfig, ensure_collections, get_client
 from app.core.qdrant_points import points_for_chunks, points_for_note, points_for_edges, upsert_batch
-from app.core.derive_edges import build_note_index, derive_wikilink_edges  # wikilinks
+from app.core.derive_edges import build_note_index, derive_wikilink_edges  # NEW/active
+
+SCRIPT_VERSION = "v2.2.3"

 # -------------------
 # Utility / Helpers
@@ -108,7 +106,6 @@ def _client_delete_points(client: QdrantClient, collection: str, selector_or_fil
 def _scroll_edge_ids_by_source_prefix(client: QdrantClient, edges_col: str, source_prefix: str, batch: int = 1000) -> List[int]:
     """Finds edge point IDs whose payload.source_id starts with 'source_prefix' (via scroll)."""
-    # We fetch all edges (with payload) and filter locally, robust against missing prefix operators.
     next_offset = None
     ids: List[int] = []
     while True:
@@ -131,7 +128,7 @@ def _scroll_edge_ids_by_source_prefix(client: QdrantClient, edges_col: str, sour
 def purge_note(client: QdrantClient, cfg: QdrantConfig, note_id: str) -> None:
     """Deletes all chunks & edges of a note (replace-on-change)."""
-    notes_col, chunks_col, edges_col = f"{cfg.prefix}_notes", f"{cfg.prefix}_chunks", f"{cfg.prefix}_edges"
+    _, chunks_col, edges_col = f"{cfg.prefix}_notes", f"{cfg.prefix}_chunks", f"{cfg.prefix}_edges"
     # Chunks: payload.note_id == NOTE_ID
     f_chunks = rest.Filter(must=[rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))])
@@ -142,20 +139,15 @@ def purge_note(client: QdrantClient, cfg: QdrantConfig, note_id: str) -> None:
         rest.FieldCondition(key="source_id", match=rest.MatchValue(value=note_id)),
         rest.FieldCondition(key="target_id", match=rest.MatchValue(value=note_id)),
     ]
-    # Try MatchText (if available) for the prefix "NOTE_ID#"
     try:
         conds.append(rest.FieldCondition(key="source_id", match=rest.MatchText(text=f"{note_id}#")))
-        f_edges = rest.Filter(should=conds)  # no minimum_should in this client version
+        f_edges = rest.Filter(should=conds)
         _client_delete_points(client, edges_col, f_edges)
     except Exception:
-        # Fallback: scroll & delete-by-IDs for the source_id prefix
         f_edges_basic = rest.Filter(should=conds)
+        # delete the exact note edges first
         _client_delete_points(client, edges_col, f_edges_basic)
-        # now look up all edges with source_id == NOTE_ID#* by ID and delete them specifically
         ids = _scroll_edge_ids_by_source_prefix(client, edges_col, f"{note_id}#")
         if ids:
-            # delete-by-IDs
             selector = rest.PointIdsList(points=ids)
             _client_delete_points(client, edges_col, selector)
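The selection rule behind this purge (exact match on source_id/target_id plus the "NOTE_ID#" chunk prefix) can be shown without a live Qdrant instance. Below, `records` stands in for the (id, payload) pairs the scroll fallback iterates over; this sketches the local fallback filter, not the server-side `rest.Filter`.

```python
def edge_ids_for_note(records, note_id: str):
    # An edge belongs to the note if it starts or ends at the note itself,
    # or starts at one of its chunks (source_id like "NOTE_ID#...").
    prefix = f"{note_id}#"
    ids = []
    for point_id, payload in records:
        src = payload.get("source_id", "")
        tgt = payload.get("target_id", "")
        if src == note_id or tgt == note_id or src.startswith(prefix):
            ids.append(point_id)
    return ids
```

Note the `#` in the prefix: it prevents a note ID like "n1" from accidentally matching edges of "n10".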
@@ -171,8 +163,15 @@ def main():
     ap.add_argument("--note-id", help="Process only one note ID")
     ap.add_argument("--embed-note", action="store_true", help="Also embed the note fulltext (optional)")
     ap.add_argument("--force-replace", action="store_true", help="Force purge & rebuild (debug)")
+    ap.add_argument("--version", action="store_true", help="Show the script version and exit")
     args = ap.parse_args()
+
+    if args.version:
+        print(SCRIPT_VERSION); return
+
+    # A clear signature line on the console
+    print(f"[import_markdown] {SCRIPT_VERSION}", file=sys.stderr)

     # Qdrant
     cfg = QdrantConfig(
         url=os.getenv("QDRANT_URL", "http://127.0.0.1:6333"),
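The changelog's v2.2.1 typo fix and the new --version flag both hinge on argparse behavior: a flag spelled `--force-replace` is exposed as `args.force_replace` (dashes become underscores). A minimal sketch of the relevant argument setup, with help texts omitted:

```python
import argparse

SCRIPT_VERSION = "v2.2.3"

def parse_args(argv):
    ap = argparse.ArgumentParser(prog="import_markdown")
    ap.add_argument("--apply", action="store_true")
    ap.add_argument("--force-replace", action="store_true")  # -> args.force_replace
    ap.add_argument("--version", action="store_true")        # checked before any Qdrant work
    return ap.parse_args(argv)
```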
@@ -181,14 +180,14 @@ def main():
         dim=int(os.getenv("VECTOR_DIM", "384")),
     )
     client = get_client(cfg)
-    ensure_collections(client, cfg.prefix, cfg.dim)
+    ensure_collections(client, cfg.prefix, cfg.dim)  # creates, among others, the edges collection (1D DOT)
     root = os.path.abspath(args.vault)
     files = iter_md(root)
     if not files:
         print("No Markdown files found.", file=sys.stderr); sys.exit(2)
-    # 1) Upfront pass: global note index for robust resolution
+    # 1) Upfront pass: global note index for robust wikilink resolution
     index_payloads: List[Dict] = []
     for path in files:
         try:
@@ -200,7 +199,7 @@ def main():
             index_payloads.append(pl)
         except Exception:
             continue
     note_index = build_note_index(index_payloads)  # by_id/by_slug/by_file_slug

     notes_col = f"{cfg.prefix}_notes"
     total_notes = 0
@@ -222,7 +221,7 @@
         note_pl = make_note_payload(parsed, vault_root=root)
         validate_note_payload(note_pl)
         h = compute_hash_fulltext(parsed.body)
-        note_pl["hash_fulltext"] = h  # the field is provided for in the schema
+        note_pl["hash_fulltext"] = h  # provided for in the schema

         # Chunks + payloads
         chunks = assemble_chunks(fm["id"], parsed.body, fm.get("type", "concept"))
@@ -235,14 +234,14 @@
         # Optional: note vector
         note_vec = embed_one(parsed.body) if args.embed_note else None

         # Edges (from fulltext + real chunk texts)
         note_pl_for_edges = {"note_id": fm["id"], "title": fm.get("title"), "path": note_pl["path"], "fulltext": parsed.body}
         chunks_for_links = []
         for i, pl in enumerate(chunk_pls):
             cid = pl.get("chunk_id") or pl.get("id")
             txt = chunks[i].text if i < len(chunks) else ""
             chunks_for_links.append({"chunk_id": cid, "text": txt})
         edges = derive_wikilink_edges(note_pl_for_edges, chunks_for_links, note_index)

         # Load the existing note (for the hash comparison)
         existing = fetch_existing_note_payload(client, notes_col, fm["id"])
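The derive_wikilink_edges call above resolves [[wikilinks]] found in the real chunk texts against the prebuilt note index. A toy version of that derivation, assuming a slug-keyed index and ignoring backlink edges (the real resolution order in app/core/derive_edges.py is by_id/by_slug/by_file_slug), could look like:

```python
import re

# Matches [[Target]], [[Target#Heading]], and [[Target|Alias]]
WIKILINK = re.compile(r"\[\[([^\]|#]+)(?:#[^\]|]*)?(?:\|[^\]]*)?\]\]")

def derive_edges_sketch(note_id, chunks, slug_index):
    """Minimal illustration: one 'references' edge per wikilink, with
    references_at pointing at the chunk that contains the link."""
    edges = []
    for chunk in chunks:
        for match in WIKILINK.finditer(chunk["text"]):
            slug = match.group(1).strip().lower().replace(" ", "-")
            target = slug_index.get(slug)
            edges.append({
                "source_id": note_id,
                "target_id": target,            # None => link did not resolve
                "type": "references",
                "references_at": chunk["chunk_id"],
                "status": "resolved" if target else "unresolved",
            })
    return edges
```

Passing the real chunk texts (as the importer does via chunks_for_links) is what makes references_at possible at all; deriving from the fulltext alone would lose the chunk granularity.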