GraphRAG Advanced: LLM Evidence Selection and Cross-Layer Score Fusion

When multi-hop traversal returns too many nodes, you need an LLM to pick a minimal sufficient evidence set. When vector, BM25, and graph-traversal scores need to be ranked on a single scale, you need cross-layer score fusion. This post walks through the implementation of both advanced features.

Problem 1: Too Many Nodes After Expansion

GraphRAG's multi-hop traversal expands outward from the seed nodes:

Query: "LangChain RAG 經驗"
    ↓ Seed
[LangChain, RAG]
    ↓ 2-hop 遍歷
[LangChain, RAG, Python, Docker, Kubernetes, ML, FastAPI, LLM...]

Problem: expansion pulls in many only indirectly related nodes, and running secondary retrieval on all of them will:

  1. Consume too many API calls
  2. Introduce noise
  3. Dilute the truly relevant evidence

Solution 1: LLM Evidence Selection

Use an LLM to pick the minimal sufficient evidence set from the expanded nodes:

graph LR
    A[8 expanded nodes] --> B[LLM with Structured Output]
    B --> C[2 selected]
    B --> D[6 excluded]
    C --> E[Secondary retrieval uses only the 2 selected]

Pydantic Schema

from pydantic import BaseModel, Field

class EvidenceNode(BaseModel):
    node: str = Field(description="Node name from the graph")
    relevance: str = Field(description="Why this node is relevant")

class EvidenceSelection(BaseModel):
    selected_nodes: list[EvidenceNode]
    reasoning_path: list[str]
    excluded_nodes: list[str]

Implementation

def select_evidence(query: str, expanded_nodes: list[str], graph_context: dict) -> EvidenceSelection:
    """Ask the LLM for the minimal sufficient evidence set among the expanded nodes."""
    prompt = f"""Select the MINIMAL SUFFICIENT set of nodes needed to answer:
    
    Query: {query}
    
    Available Nodes: {expanded_nodes}
    Graph Relationships: {graph_context['edges']}
    
    Aim for 3-7 nodes maximum."""
    
    llm = get_langchain_model()
    # Bind the Pydantic schema so invoke() returns a parsed EvidenceSelection
    structured_llm = llm.with_structured_output(EvidenceSelection)
    return structured_llm.invoke(prompt)

Result

Query: "LangChain RAG 專案經驗"

輸入: [LangChain, RAG, Python, Docker, Kubernetes, ML, FastAPI]
輸出:
  Selected: [LangChain, RAG]
  Excluded: [Python, Docker, Kubernetes, ML, FastAPI]
  Reasoning: 
    1. Query seeks LangChain RAG projects
    2. LangChain is the primary framework
    3. RAG is the key technique
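As a usage sketch, the call below would reproduce an example like the one above; the edge list is a hypothetical stand-in for whatever the traversal step returns as graph_context:

# Hypothetical invocation; the edges here are illustrative only.
graph_context = {"edges": [("LangChain", "RAG"), ("LangChain", "Python"), ("RAG", "LLM")]}

selection = select_evidence(
    query="LangChain RAG project experience",
    expanded_nodes=["LangChain", "RAG", "Python", "Docker", "Kubernetes", "ML", "FastAPI"],
    graph_context=graph_context,
)

print([n.node for n in selection.selected_nodes])  # e.g. ['LangChain', 'RAG']
print(selection.excluded_nodes)                    # e.g. ['Python', 'Docker', ...]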

Problem 2: Three Score Sources Without a Unified Ranking

GraphRAG has three sources of signal:

Source       What it measures
Vector       Semantic similarity
BM25/FTS     Keyword matching
Graph Hop    Graph-structural proximity

Problem: at this point the results were simply concatenated, with no weighted fusion.

# Previous approach
all_materials = seed_materials + secondary_materials  # hop distance is not taken into account

Solution 2: Cross-Layer Score Fusion

Design

┌─────────────────────────────────────────────────────────────┐
│                    Score Fusion                              │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│   combined = vector_weight × vector_score                   │
│            + fts_weight × fts_score                         │
│            + graph_weight × graph_score                     │
│                                                             │
│   Default: 0.5 × vector + 0.2 × fts + 0.3 × graph           │
│                                                             │
└─────────────────────────────────────────────────────────────┘
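For intuition, plugging the default weights into the seed material used in the verification example later in the post:

# 0.5 × 0.90 (vector) + 0.2 × 0.00 (fts) + 0.3 × 1.00 (graph)
combined = 0.5 * 0.90 + 0.2 * 0.00 + 0.3 * 1.00
print(combined)  # 0.75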

Computing the Graph Score

def compute_graph_score(hop_distance: int, decay: float = 0.5) -> float:
    """
    Seed (hop=-1): 1.0
    Hop 0: 0.5
    Hop 1: 0.25
    Hop 2: 0.125
    """
    if hop_distance < 0:
        return 1.0  # Seed = full score
    return decay ** (hop_distance + 1)

Decay curve:

Hop                    Score
Seed (-1)              1.00
0 (direct neighbors)   0.50
1                      0.25
2                      0.12
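A quick sanity check that compute_graph_score reproduces this curve:

for hop in (-1, 0, 1, 2):
    print(hop, compute_graph_score(hop))
# -1 1.0
# 0 0.5
# 1 0.25
# 2 0.125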

The ScoredMaterial Data Structure

from dataclasses import dataclass
from typing import Optional

@dataclass
class ScoredMaterial:
    material: dict
    vector_score: float = 0.0
    fts_score: float = 0.0
    graph_score: float = 0.0
    hop_distance: int = -1
    source_skill: Optional[str] = None
    
    def combined_score(self, vector_weight=0.5, fts_weight=0.2, graph_weight=0.3):
        return (
            vector_weight * self.vector_score +
            fts_weight * self.fts_score +
            graph_weight * self.graph_score
        )

Fusion and Ranking

def fuse_and_rank(scored_materials, top_k=10):
    # Deduplicate materials reached via multiple paths
    unique = deduplicate_by_id(scored_materials)
    
    # Compute the combined score for each material
    results = []
    for sm in unique:
        m = sm.material.copy()
        m["combined_score"] = sm.combined_score()
        m["hop_distance"] = sm.hop_distance
        results.append(m)
    
    # Sort by combined score, highest first
    results.sort(key=lambda x: x["combined_score"], reverse=True)
    return results[:top_k]
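deduplicate_by_id is not shown in the post; a minimal sketch, assuming each material dict carries an 'id' key and keeping the closest-hop (highest graph_score) instance when the same material is reached via several skills:

def deduplicate_by_id(scored_materials: list[ScoredMaterial]) -> list[ScoredMaterial]:
    # Keep one entry per material id; prefer the higher graph_score,
    # i.e. the copy reached via the shortest hop path.
    best: dict[str, ScoredMaterial] = {}
    for sm in scored_materials:
        material_id = sm.material["id"]
        current = best.get(material_id)
        if current is None or sm.graph_score > current.graph_score:
            best[material_id] = sm
    return list(best.values())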

Verification

scored = [
    ScoredMaterial({'id': 'a'}, vector_score=0.9, graph_score=1.0, hop_distance=-1),  # Seed
    ScoredMaterial({'id': 'b'}, vector_score=0.7, graph_score=0.5, hop_distance=1),   # 1-hop
    ScoredMaterial({'id': 'c'}, vector_score=0.95, graph_score=0.25, hop_distance=2), # 2-hop
]

results = fuse_and_rank(scored)
# a: combined=0.750 (vec=0.90, graph=1.00) ← Seed wins!
# c: combined=0.550 (vec=0.95, graph=0.25) ← highest vector score, but 2 hops away
# b: combined=0.500 (vec=0.70, graph=0.50)

Even though c has the highest vector score (0.95), it is a 2-hop result, so its graph_score is only 0.25 and it ends up ranked below the seed material a.

Integrating Both into hybrid_search

def hybrid_search(
    query: str,
    graph: CareerKnowledgeGraph,
    use_evidence_selection: bool = False,
    vector_weight: float = 0.5,
    fts_weight: float = 0.2,
    graph_weight: float = 0.3,
    top_k: int = 10,
) -> list[dict]:
    # 1. Seed retrieval
    seed_materials = seed_retrieval(query)
    
    # 2. Graph expansion with hop tracking
    expanded, skill_to_hop = weighted_expansion_with_hop(...)
    
    # 3. Optional: LLM evidence selection keeps only the minimal sufficient nodes
    if use_evidence_selection:
        selection = select_evidence(query, expanded, graph_context)
        expanded = [n.node for n in selection.selected_nodes]
    
    # 4. Collect scored materials (seeds first, then materials from expanded skills)
    scored_materials = []
    for m in seed_materials:
        scored_materials.append(ScoredMaterial(m, hop_distance=-1, ...))
    
    for skill in new_skills:  # expanded skills that were not already seeds
        hop = skill_to_hop[skill]
        for m in seed_retrieval(skill):
            scored_materials.append(ScoredMaterial(m, hop_distance=hop, source_skill=skill))
    
    # 5. Fuse and rank across all three score sources
    return fuse_and_rank(scored_materials, top_k=top_k)
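A call-site sketch, assuming graph is an already-loaded CareerKnowledgeGraph:

results = hybrid_search(
    "LangChain RAG project experience",
    graph,
    use_evidence_selection=True,  # more precise but slower, as with the CLI flag below
)
for m in results[:3]:
    print(m["id"], round(m["combined_score"], 3), m["hop_distance"])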

CLI Usage Examples

# Basic search
career-kb graph query "LangChain RAG" --mode local

# Enable LLM evidence selection (more precise, but slower)
career-kb graph query "LangChain RAG" --mode local --use-evidence-selection

# Adjust the weights (favor graph structure)
career-kb graph query "LangChain RAG" --vector-weight 0.4 --graph-weight 0.4

Module Structure

py-kb/src/career_kb/graph/
├── evidence_selection.py   # [NEW] LLM evidence selection
├── score_fusion.py         # [NEW] cross-layer score fusion
├── hybrid_retrieval.py     # Updated: integrates the two modules above
├── traversal.py            # Graph traversal (BFS, weighted expansion)
└── global_retrieval.py     # Map-Reduce aggregation

Summary

Feature                     Problem solved              Technique
LLM Evidence Selection      Too many expanded nodes     LangChain Structured Output
Cross-Layer Score Fusion    Ranking multiple signals    Weighted linear combination + hop decay

These two features make GraphRAG more precise:

  1. Secondary retrieval uses only the truly relevant nodes
  2. Ranking jointly considers semantics, keywords, and graph structure

Career Knowledge Base is a local-first résumé knowledge base system built with Python + LanceDB + petgraph (Rust) + LlamaIndex.