
Purpose

The Reranker Server is the module in UR-2.0 that reranks retrieval results. It takes the initial results returned by the Retriever Server and reorders the candidate documents by semantic relevance, improving retrieval precision and, in turn, the quality of the final generated output. The module natively supports several mainstream backends, including Sentence-Transformers, Infinity, and OpenAI.
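To give an intuition for what the reranking step does, here is a minimal, illustrative Python sketch of cross-encoder reranking in the style of the sentence_transformers backend, using the BAAI/bge-reranker-large model named in the parameter file below. It is not the Reranker Server's actual implementation, and the query and candidate passages are made up for the example.

# Illustrative sketch only, not UltraRAG's internal code.
from sentence_transformers import CrossEncoder

# Model name taken from the parameter file below; device is assumed to be a GPU.
model = CrossEncoder("BAAI/bge-reranker-large", device="cuda")

# Hypothetical query and candidate passages (e.g. the retriever's initial results).
query = "who wrote the declaration of independence"
candidates = [
    "The Declaration of Independence was drafted by Thomas Jefferson.",
    "The Eiffel Tower is located in Paris.",
    "Jefferson served as the third President of the United States.",
]

# Score each (query, passage) pair, then keep the highest-scoring passages.
scores = model.predict([(query, passage) for passage in candidates])
reranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)
top_k = [passage for passage, _ in reranked[:5]]  # top_k: 5, as in the config
print(top_k)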

Usage Example

examples/corpus_rerank.yaml
# MCP Server
servers:
  benchmark: servers/benchmark
  retriever: servers/retriever
  reranker: servers/reranker

# MCP Client Pipeline
pipeline:
- benchmark.get_data
- retriever.retriever_init
- retriever.retriever_embed
- retriever.retriever_index
- retriever.retriever_search
- reranker.reranker_init
- reranker.reranker_rerank
Run the following command to build the Pipeline:
ultrarag build examples/corpus_rerank.yaml
Modify the parameters:
examples/parameters/corpus_search_parameter.yaml
benchmark:
  benchmark:
    key_map:
      gt_ls: golden_answers
      q_ls: question
    limit: -1
    name: nq
    path: data/sample_nq_10.jsonl
    seed: 42
    shuffle: false
reranker:
  backend: sentence_transformers
  backend_configs:
    infinity:
      bettertransformer: false
      device: cuda
      model_warmup: false
      pooling_method: auto
      trust_remote_code: true
    openai:
      api_key: ''
      base_url: https://api.openai.com/v1
      model_name: text-embedding-3-small
    sentence_transformers:
      device: cuda
      trust_remote_code: true
  batch_size: 16
  gpu_ids: 0
  model_name_or_path: BAAI/bge-reranker-large  # alternative: openbmb/MiniCPM-Reranker-Light
  query_instruction: ''
  top_k: 5
retriever:
  backend: sentence_transformers
  backend_configs:
    bm25:
      lang: en
    infinity:
      bettertransformer: false
      device: cuda
      model_warmup: false
      pooling_method: auto
      trust_remote_code: true
    openai:
      api_key: ''
      base_url: https://api.openai.com/v1
      model_name: text-embedding-3-small
    sentence_transformers:
      device: cuda
      sentence_transformers_encode:
        encode_chunk_size: 10000
        normalize_embeddings: false
        psg_prompt_name: document
        psg_task: null
        q_prompt_name: query
        q_task: null
      trust_remote_code: true
  batch_size: 16
  corpus_path: data/corpus_example.jsonl
  embedding_path: embedding/embedding.npy
  faiss_use_gpu: true
  gpu_ids: 1  # alternative: 0,1 to use multiple GPUs
  index_chunk_size: 50000
  index_path: index/index.index
  is_multimodal: false
  model_name_or_path: Qwen/Qwen3-Embedding-0.6B  # alternative: openbmb/MiniCPM-Embedding-Light
  overwrite: false
  query_instruction: ''
  top_k: 5

Run the Pipeline:
ultrarag run examples/corpus_rerank.yaml