scripts/export_markdown.py updated
Some checks failed
Deploy mindnet to llm-node / deploy (push) Failing after 1s
This commit is contained in:
parent
c2954b5663
commit
a807cc8bd1
@@ -1,82 +1,101 @@
 #!/usr/bin/env python3
 # -*- coding: utf-8 -*-
 """
-export_markdown.py — export from Qdrant → Markdown (vault structure)
-
-Version: 1.3 (2025-09-09)
+Script: export_markdown.py
+Version: 1.3.0
+Date: 2025-09-09

 Short description
-- Exports notes (plus the body reconstructed from chunks) from Qdrant into files with YAML frontmatter.
-- Uses ENV variables from .env (QDRANT_URL, QDRANT_API_KEY, COLLECTION_PREFIX, VECTOR_DIM).
-- The optional CLI argument --prefix overrides COLLECTION_PREFIX so that all tools stay consistent.
-- Supports selecting several notes via --note-id (repeatable).
+---------------
+Exports Markdown notes from Qdrant into an Obsidian-compatible vault folder.
+For each note, the YAML frontmatter and the body are reconstructed.
+
+Use cases
+- Complete vault rebuild from Qdrant
+- Partial export of individual notes
+- Backup / migration
+
+Body reconstruction (priority):
+1) From notes.payload.fulltext, if present (lossless round-trip).
+2) Otherwise all chunks of the note are loaded and their text fields
+   (text -> content -> raw) are joined in a stable order.
+
+Prerequisites
+- Activated venv (recommended): `source .venv/bin/activate`
+- Running Qdrant (URL/API key matching your environment)
+- Collections: <prefix>_notes, <prefix>_chunks
+- The chunk payload carries its text in `text` (fallback: `raw`); ordering comes from `seq` or the number in `chunk_id`.
+
+Important notes
+-----------------
+- Qdrant access is configured via environment variables:
+  * QDRANT_URL (e.g. http://127.0.0.1:6333)
+  * QDRANT_API_KEY (optional)
+  * COLLECTION_PREFIX (e.g. mindnet)
+  * VECTOR_DIM (only needed here for collection setup; default 384)
+- No --prefix parameter is expected; the prefix comes from the environment variables.
+- Exports to --out. Subfolders according to payload['path'] are created automatically.
+- Default: existing files are NOT overwritten; --overwrite changes that.

-Invocations (examples)
-- Prefix via ENV (recommended):
-    export COLLECTION_PREFIX="mindnet"
-    python3 -m scripts.export_markdown --out ./_exportVault
+Command-line parameters
+---------------
+--out PATH       target folder (path to the export vault) [required]
+--note-id ID     export only a single note (optional)
+--overwrite      overwrite target files if they exist (optional)

-- Prefix via CLI (overrides ENV):
-    python3 -m scripts.export_markdown --out ./_exportVault --prefix mindnet
+Examples
+---------
+COLLECTION_PREFIX=mindnet QDRANT_URL=http://127.0.0.1:6333 \\
+    python3 -m scripts.export_markdown --out ./_exportVault

-- Export only specific notes:
-    python3 -m scripts.export_markdown --out ./_exportVault \
-        --prefix mindnet \
-        --note-id 20250821-architektur-ki-trainerassistent-761cfe \
-        --note-id 20250821-personal-mind-ki-projekt-7b0d79
+Only one note:
+COLLECTION_PREFIX=mindnet python3 -m scripts.export_markdown \\
+    --out ./_exportVault --note-id 20250821-architektur-ki-trainerassistent-761cfe

-- Overwrite existing files:
-    python3 -m scripts.export_markdown --out ./_exportVault --prefix mindnet --overwrite
+With overwrite:
+COLLECTION_PREFIX=mindnet python3 -m scripts.export_markdown \\
+    --out ./_exportVault --overwrite

-Parameters
-- --out PATH    (required) target directory of the export vault (created if missing).
-- --prefix TEXT (optional) collection prefix; overrides ENV COLLECTION_PREFIX.
-- --note-id ID  (optional, repeatable) limit the export to specific note IDs.
-- --overwrite   (optional) overwrite files that already exist (default: skip).
-- --dry-run     (optional) only show what would be written; create no files.
-
-Changes vs. v1.2
-- New optional CLI argument --prefix (ENV fallback remains).
-- More robust Qdrant scroll logic (new client signature: (points, next_offset)).
-- Improved chunk ordering (seq > number from chunk_id > fallback).
-- Defensive frontmatter handling (only meaningful fields; date values as strings).
+Changes (1.3.0)
+------------------
+- Body reconstruction made robust:
+  * uses 'fulltext' from the notes payload, if present
+  * otherwise assembled from chunks; text-field fallbacks: 'text' -> 'content' -> 'raw'
+  * stable chunk ordering: seq -> chunk_index -> number in chunk_id
+- Removed the obsolete --prefix parameter; uses QdrantConfig.from_env()
+- Improved YAML output (strings stay strings), clean '---' separator
 """

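[review note] A minimal sketch of the two-step body reconstruction described in the docstring; the note payload below is invented, and the real logic lives in _export_one_note further down:

    # hypothetical payload; 'fulltext' wins, chunks are only the fallback
    note_pl = {"note_id": "demo", "fulltext": "# Demo\n\nFull body."}
    fulltext = note_pl.get("fulltext")
    if isinstance(fulltext, str) and fulltext.strip():
        body = fulltext                                # 1) lossless round-trip
    else:
        body = "<join the note's chunk texts here>"    # 2) reassembled from chunks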
 from __future__ import annotations
+import os
+import sys
 import argparse
 import json
-from pathlib import Path
-from typing import Dict, List, Optional, Tuple
-import os
+import re
+from typing import List, Optional, Tuple

 from dotenv import load_dotenv
+import yaml
 from qdrant_client.http import models as rest

-from app.core.qdrant import QdrantConfig, get_client, ensure_collections
+# ensure_collections is not needed for the export
+from app.core.qdrant import QdrantConfig, get_client
+from qdrant_client import QdrantClient

 # -------------------------
 # Helpers
 # -------------------------

-def collections(prefix: str) -> Tuple[str, str, str]:
+def _names(prefix: str) -> Tuple[str, str, str]:
     return f"{prefix}_notes", f"{prefix}_chunks", f"{prefix}_edges"


-def scroll_all(client, collection: str, flt: Optional[rest.Filter] = None, with_payload=True, with_vectors=False):
-    """Iterator over all points of a collection (new Qdrant client signature)."""
+def _ensure_dir(path: str) -> None:
+    d = os.path.dirname(path)
+    if d and not os.path.isdir(d):
+        os.makedirs(d, exist_ok=True)
+
+
+def _to_md(frontmatter: dict, body: str) -> str:
+    fm = yaml.safe_dump(frontmatter, sort_keys=False, allow_unicode=True).strip()
+    # frontmatter, then a blank line, then the body
+    return f"---\n{fm}\n---\n{body.rstrip()}\n"


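[review note] As a sanity check for _to_md, a sketch of what it should return for a small frontmatter dict (values invented; output shown roughly, as produced by PyYAML's safe_dump):

    fm = {"id": "20250821-demo-000000", "title": "Demo: Note", "tags": ["idea", "draft"]}
    print(_to_md(fm, "Body text."))
    # ---
    # id: 20250821-demo-000000
    # title: 'Demo: Note'
    # tags:
    # - idea
    # - draft
    # ---
    # Body text.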
+def _scroll_all(
+    client: QdrantClient,
+    collection: str,
+    flt: Optional[rest.Filter] = None,
+    with_payload: bool = True,
+    with_vectors: bool = False,
+    limit: int = 256,
+) -> List:
+    """Fetches *all* points via scroll (QdrantClient.scroll returns (points, next_offset))."""
+    pts_all = []
+    next_offset = None
     while True:
         points, next_offset = client.scroll(

@@ -84,201 +103,160 @@ def scroll_all(client, collection: str, flt: Optional[rest.Filter] = None, with_
             scroll_filter=flt,
             with_payload=with_payload,
             with_vectors=with_vectors,
-            limit=256,
+            limit=limit,
             offset=next_offset,
         )
-        for p in points:
-            yield p
-        if next_offset is None:
+        pts_all.extend(points or [])
+        if not next_offset:
             break
+    return pts_all


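[review note] A usage sketch for _scroll_all, assuming a running Qdrant reachable via the ENV variables and an existing <prefix>_notes collection:

    cfg = QdrantConfig.from_env()
    client = get_client(cfg)
    notes_col, _, _ = _names(cfg.prefix)
    # one call drains the whole collection, paging internally (limit points per page)
    points = _scroll_all(client, notes_col, limit=512)
    print(len(points), "note points")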
-def ensure_dir(path: Path):
-    path.parent.mkdir(parents=True, exist_ok=True)
+_NUM_IN_CHUNK_ID = re.compile(r"#(?:c)?(\d+)$")


-def select_frontmatter(note_pl: Dict) -> Dict:
-    """
-    Reduces the payload to the frontmatter fields that make sense for the vault.
-    Required fields per schema: note_id (id), title, type, status, created, path
-    Optional: updated, tags
-    """
-    # backward compat: some payloads use 'id' instead of 'note_id'
-    note_id = note_pl.get("note_id") or note_pl.get("id")
-    fm = {
-        "id": note_id,
-        "title": note_pl.get("title"),
-        "type": note_pl.get("type"),
-        "status": note_pl.get("status"),
-        "created": note_pl.get("created"),
-        "path": note_pl.get("path"),
-    }
-    # optional
-    if note_pl.get("updated") is not None:
-        fm["updated"] = note_pl.get("updated")
-    if note_pl.get("tags"):
-        fm["tags"] = note_pl.get("tags")
-    return fm
-
-
-def yaml_block(frontmatter: Dict) -> str:
-    """
-    Very simple YAML serialization (without extra dependencies).
-    Assumption: values are strings/lists; dates already given as strings.
-    """
-    lines = ["---"]
-    for k, v in frontmatter.items():
-        if v is None:
-            continue
-        if isinstance(v, list):
-            # simple list notation
-            lines.append(f"{k}:")
-            for item in v:
-                lines.append(f"  - {item}")
-        else:
-            # double-quote strings that contain special characters
-            s = str(v)
-            if any(ch in s for ch in [":", "-", "#", "{", "}", "[", "]", ","]):
-                lines.append(f'{k}: "{s}"')
-            else:
-                lines.append(f"{k}: {s}")
-    lines.append("---")
-    return "\n".join(lines)
-
-
-def chunk_sort_key(pl: Dict) -> Tuple[int, int]:
+def _chunk_sort_key(pl: dict) -> Tuple[int, int, str]:
     """
-    Determine a stable sort order:
-    1) seq (if present)
-    2) number from chunk_id (…#<n>)
-    3) fallback: 0
+    Stable order:
+    1) 'seq' (if present),
+    2) 'chunk_index' (if present),
+    3) the number from the 'chunk_id' suffix (#c02 -> 2),
+    4) as a last fallback: the whole 'chunk_id' as a string.
     """
     seq = pl.get("seq")
     if isinstance(seq, int):
-        return (0, seq)
-    cid = pl.get("chunk_id") or pl.get("id") or ""
-    n = 0
-    if "#" in cid:
+        return (0, seq, "")
+    idx = pl.get("chunk_index")
+    if isinstance(idx, int):
+        return (1, idx, "")
+    cid = str(pl.get("chunk_id") or "")
+    m = _NUM_IN_CHUNK_ID.search(cid)
+    if m:
         try:
-            n = int(cid.split("#", 1)[1])
+            return (2, int(m.group(1)), "")
         except ValueError:
-            n = 0
-    return (1, n)
+            pass
+    return (3, 0, cid)


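[review note] A quick check of the four ordering tiers on made-up chunk payloads:

    chunks = [
        {"chunk_id": "note#c02"},   # tier 2: number parsed from the chunk_id suffix
        {"chunk_index": 1},         # tier 1: explicit chunk_index
        {"seq": 0},                 # tier 0: seq always wins
        {"chunk_id": "no-number"},  # tier 3: whole chunk_id as string fallback
    ]
    print([_chunk_sort_key(c) for c in sorted(chunks, key=_chunk_sort_key)])
    # -> [(0, 0, ''), (1, 1, ''), (2, 2, ''), (3, 0, 'no-number')]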
-def reconstruct_body(chunk_payloads: List[Dict]) -> str:
-    parts: List[str] = []
-    chunk_payloads_sorted = sorted(chunk_payloads, key=chunk_sort_key)
-    for pl in chunk_payloads_sorted:
-        txt = pl.get("text") or pl.get("raw") or ""
-        parts.append(txt.rstrip("\n"))
-    return "\n\n".join(parts).rstrip() + "\n"
+def _join_chunk_texts(chunks_payloads: List[dict]) -> str:
+    """Takes the sorted chunk payloads and assembles the body."""
+    parts: List[str] = []
+    for pl in chunks_payloads:
+        txt = pl.get("text") or pl.get("content") or pl.get("raw") or ""
+        if txt:
+            parts.append(txt.rstrip())
+    # a blank line between chunks is usually a good default in Markdown
+    return ("\n\n".join(parts)).rstrip() + ("\n" if parts else "")


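[review note] The 'text' -> 'content' -> 'raw' fallback in _join_chunk_texts, demonstrated on invented payloads:

    chunks = [{"text": "First chunk."}, {"content": "Second chunk."}, {"raw": "Third chunk."}]
    print(_join_chunk_texts(chunks))
    # First chunk.
    #
    # Second chunk.
    #
    # Third chunk.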
-def safe_write(out_path: Path, content: str, overwrite: bool) -> str:
-    if out_path.exists() and not overwrite:
-        return "skip"
-    ensure_dir(out_path)
-    out_path.write_text(content, encoding="utf-8")
-    return "write"
+def _export_one_note(
+    client: QdrantClient,
+    prefix: str,
+    note_pl: dict,
+    out_root: str,
+    overwrite: bool,
+) -> dict:
+    notes_col, chunks_col, _ = _names(prefix)
+    note_id = note_pl.get("note_id") or note_pl.get("id")
+    path = note_pl.get("path") or f"{note_id}.md"
+
+    # target path relative to out_root
+    out_path = os.path.join(out_root, path).replace("\\", "/")

-def fetch_note_chunks(client, chunks_col: str, note_id: str) -> List[Dict]:
-    flt = rest.Filter(
-        must=[rest.FieldCondition(key="note_id", match=rest.MatchValue(value=note_id))]
-    )
-    out: List[Dict] = []
-    for p in scroll_all(client, chunks_col, flt, with_payload=True, with_vectors=False):
-        if p.payload:
-            out.append(p.payload)
-    return out
+    # map the payload back to frontmatter (known fields only)
+    fm = {}
+    # write back well-known fields; do not put unknown keys into the YAML
+    for k in [
+        "title", "id", "type", "status", "created", "updated", "tags",
+        "priority", "effort_min", "due", "people", "aliases",
+        "depends_on", "assigned_to", "lang"
+    ]:
+        v = note_pl.get(k) if k in note_pl else note_pl.get(f"note_{k}")  # tolerance for possible name variants
+        if v not in (None, [], ""):
+            fm[k] = v
+
+    # ensure required fields
+    fm["id"] = fm.get("id") or note_id
+    fm["title"] = fm.get("title") or note_pl.get("title") or note_id
+    fm["type"] = fm.get("type") or "concept"
+    fm["status"] = fm.get("status") or "draft"

-def make_export_path(export_root: Path, note_pl: Dict) -> Path:
-    # prefer payload 'path'; otherwise a title-based fallback file
-    rel = (note_pl.get("path") or f"{note_pl.get('title') or 'Note'}.md").strip("/")
-    # normalize Windows backslashes etc.
-    rel = rel.replace("\\", "/")
-    return export_root.joinpath(rel)
+    # body reconstruction
+    body = ""
+    fulltext = note_pl.get("fulltext")
+    if isinstance(fulltext, str) and fulltext.strip():
+        body = fulltext
+    else:
+        # fetch the note's chunks and sort them
+        flt = rest.Filter(must=[rest.FieldCondition(
+            key="note_id",
+            match=rest.MatchValue(value=note_id)
+        )])
+        chunk_pts = _scroll_all(client, chunks_col, flt, with_payload=True, with_vectors=False)
+        # sorting
+        chunk_payloads = [p.payload or {} for p in chunk_pts]
+        chunk_payloads.sort(key=_chunk_sort_key)
+        body = _join_chunk_texts(chunk_payloads)
+
+    # write the target
+    if (not overwrite) and os.path.exists(out_path):
+        return {"note_id": note_id, "path": out_path, "status": "skip_exists"}
+
+    _ensure_dir(out_path)
+    with open(out_path, "w", encoding="utf-8") as f:
+        f.write(_to_md(fm, body))
+
+    return {
+        "note_id": note_id,
+        "path": out_path,
+        "status": "written",
+        "body_from": "fulltext" if isinstance(fulltext, str) and fulltext.strip() else "chunks",
+        "chunks_used": None if (isinstance(fulltext, str) and fulltext.strip()) else len(chunk_payloads),
+    }

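[review note] End-to-end sketch for a single note; the payload and target folder are illustrative, while the signature and result keys match the function above:

    cfg = QdrantConfig.from_env()
    client = get_client(cfg)
    note_pl = {"note_id": "demo-note", "title": "Demo", "path": "Inbox/demo-note.md",
               "fulltext": "Body text."}  # hypothetical note payload
    res = _export_one_note(client, cfg.prefix, note_pl, "./_exportVault", overwrite=True)
    print(res["status"], res["path"])  # e.g. written ./_exportVault/Inbox/demo-note.md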
 # -------------------------
 # Main
 # -------------------------

 def main():
     load_dotenv()

     ap = argparse.ArgumentParser()
-    ap.add_argument("--out", required=True, help="target directory for the export vault")
-    ap.add_argument("--prefix", help="collection prefix (overrides ENV COLLECTION_PREFIX)")
-    ap.add_argument("--note-id", action="append", help="export only specific note IDs (repeatable)")
-    ap.add_argument("--overwrite", action="store_true", help="overwrite existing files")
-    ap.add_argument("--dry-run", action="store_true", help="only show, write no files")
+    ap.add_argument("--out", required=True, help="target folder for the export vault")
+    ap.add_argument("--note-id", help="export only one note (note ID)")
+    ap.add_argument("--overwrite", action="store_true", help="overwrite existing files")
     args = ap.parse_args()

     # Qdrant configuration
     cfg = QdrantConfig.from_env()
-    if args.prefix:
-        # the CLI prefix takes precedence
-        cfg.prefix = args.prefix
-
     client = get_client(cfg)
-    notes_col, chunks_col, _ = collections(cfg.prefix)
-    ensure_collections(client, cfg.prefix, cfg.dim)
-
-    # filter for note selection
-    note_filter: Optional[rest.Filter] = None
+    notes_col, _, _ = _names(cfg.prefix)
+
+    # fetch notes
+    flt = None
     if args.note_id:
-        should = [rest.FieldCondition(key="note_id", match=rest.MatchValue(value=nid)) for nid in args.note_id]
-        note_filter = rest.Filter(should=should)
+        flt = rest.Filter(must=[rest.FieldCondition(
+            key="note_id",
+            match=rest.MatchValue(value=args.note_id)
+        )])

-    export_root = Path(args.out).resolve()
-    export_root.mkdir(parents=True, exist_ok=True)
+    note_pts = _scroll_all(client, notes_col, flt, with_payload=True, with_vectors=False)

-    total = 0
-    written = 0
-    skipped = 0
+    if not note_pts:
+        print(json.dumps({"exported": 0, "out": args.out, "message": "No notes found."}, ensure_ascii=False))
+        return

-    # fetch the notes from Qdrant
-    for p in scroll_all(client, notes_col, note_filter, with_payload=True, with_vectors=False):
+    results = []
+    for p in note_pts:
         pl = p.payload or {}
-        note_id = pl.get("note_id") or pl.get("id")
-        title = pl.get("title")
-
-        # frontmatter & body
-        fm = select_frontmatter(pl)
-        yaml = yaml_block(fm)
-        chunks = fetch_note_chunks(client, chunks_col, note_id)
-        body = reconstruct_body(chunks)
-
-        content = f"{yaml}\n{body}"
-        out_path = make_export_path(export_root, pl)
-
-        decision = "dry-run"
-        if not args.dry_run:
-            decision = safe_write(out_path, content, args.overwrite)
-        if decision == "write":
-            written += 1
-        elif decision == "skip":
-            skipped += 1
-
-        total += 1
-        print(json.dumps({
-            "note_id": note_id,
-            "title": title,
-            "file": str(out_path),
-            "chunks": len(chunks),
-            "decision": decision
-        }, ensure_ascii=False))
-
-    print(json.dumps({
-        "summary": {
-            "notes_total": total,
-            "written": written,
-            "skipped": skipped,
-            "out_dir": str(export_root)
-        }
-    }, ensure_ascii=False))
+        try:
+            res = _export_one_note(client, cfg.prefix, pl, args.out, args.overwrite)
+        except Exception as e:
+            res = {"note_id": pl.get("note_id") or pl.get("id"), "error": str(e)}
+        results.append(res)
+
+    print(json.dumps({"exported": len([r for r in results if r.get('status') == 'written']),
+                      "skipped": len([r for r in results if r.get('status') == 'skip_exists']),
+                      "out": args.out,
+                      "details": results}, ensure_ascii=False))

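[review note] For reference, a run over two notes would print a summary shaped roughly like this (values invented; the detail keys are the ones _export_one_note returns):

    {"exported": 1, "skipped": 1, "out": "./_exportVault",
     "details": [{"note_id": "a", "path": "./_exportVault/a.md", "status": "written",
                  "body_from": "chunks", "chunks_used": 3},
                 {"note_id": "b", "path": "./_exportVault/b.md", "status": "skip_exists"}]}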
 if __name__ == "__main__":