evaluate

Signature
@app.tool(output="pred_ls,gt_ls,metrics,save_path->eval_res")
def evaluate(
    pred_ls: List[str],
    gt_ls: List[List[str]],
    metrics: List[str] | None,
    save_path: str,
) -> Dict[str, Any]
Function
  • Executes automatic metric evaluation for QA/Generation tasks.
  • Supported Metrics: acc, em, coverem, stringem, f1, rouge-1, rouge-2, rouge-l (a rough sketch of several of these follows below).
  • Results are automatically saved as a .json file and printed as a Markdown table.
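As an illustration of how several of the QA metrics behave, here is a minimal sketch. It is not the server's exact implementation; normalization rules (e.g. stripping punctuation or articles) may differ.

```python
# Illustrative approximations of em, coverem, and f1; not the server's exact code.
from collections import Counter
from typing import List

def exact_match(pred: str, golds: List[str]) -> float:
    """em: 1.0 if the prediction equals any gold answer (case-insensitive)."""
    p = pred.strip().lower()
    return float(any(p == g.strip().lower() for g in golds))

def cover_exact_match(pred: str, golds: List[str]) -> float:
    """coverem: 1.0 if any gold answer appears as a substring of the prediction."""
    p = pred.strip().lower()
    return float(any(g.strip().lower() in p for g in golds))

def token_f1(pred: str, golds: List[str]) -> float:
    """f1: maximum token-level F1 between the prediction and any gold answer."""
    best = 0.0
    pred_tokens = pred.lower().split()
    for g in golds:
        gold_tokens = g.lower().split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            continue
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best
```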

evaluate_trec

Signature
@app.tool(output="run_path,qrels_path,ir_metrics,ks,save_path->eval_res")
def evaluate_trec(
    run_path: str,
    qrels_path: str,
    metrics: List[str] | None,
    ks: List[int] | None,
    save_path: str,
)
Function
  • Performs IR retrieval metric evaluation based on pytrec_eval.
  • Reads standard TREC format:
    • qrels: <qid> <iter> <docid> <rel>
    • run: <qid> Q0 <docid> <rank> <score> <tag>
  • Supported Metrics: mrr, map, recall@k, precision@k, ndcg@k.
  • Automatically aggregates per-query statistics and outputs them as a table (see the sketch below).
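As a rough sketch of the underlying pytrec_eval flow (the actual tool handles parsing, truncation levels, aggregation, and formatting itself; the paths and measure names below are illustrative):

```python
# Minimal sketch of evaluating a TREC run with pytrec_eval.
import pytrec_eval

def load_qrels(path):
    # qrels line: <qid> <iter> <docid> <rel>
    qrels = {}
    with open(path) as f:
        for line in f:
            qid, _, docid, rel = line.split()
            qrels.setdefault(qid, {})[docid] = int(rel)
    return qrels

def load_run(path):
    # run line: <qid> Q0 <docid> <rank> <score> <tag>
    run = {}
    with open(path) as f:
        for line in f:
            qid, _, docid, _, score, _ = line.split()
            run.setdefault(qid, {})[docid] = float(score)
    return run

qrels = load_qrels("data/qrels.txt")
run = load_run("data/run_a.txt")

# Cutoff-parameterized measures (e.g. ndcg_cut.10, P.5) can also be requested.
evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"map", "recip_rank", "ndcg"})
per_query = evaluator.evaluate(run)  # {qid: {measure: value}}

mean_map = sum(q["map"] for q in per_query.values()) / len(per_query)
print(f"MAP over {len(per_query)} queries: {mean_map:.4f}")
```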

evaluate_trec_pvalue

Signature
@app.tool(
    output="run_new_path,run_old_path,qrels_path,ir_metrics,ks,n_resamples,save_path->eval_res"
)
def evaluate_trec_pvalue(
    run_new_path: str,
    run_old_path: str,
    qrels_path: str,
    metrics: List[str] | None,
    ks: List[int] | None,
    n_resamples: int | None,
    save_path: str,
)
Function
  • Compares two TREC run files for statistical significance, using a two-sided permutation test to compute the p-value (see the sketch below).
  • Default resampling count is n_resamples=10000.
  • Outputs the mean of each run, their difference, the p-value, and a significance flag.
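A minimal sketch of a paired two-sided permutation (sign-flip) test over per-query metric differences, assuming per-query scores for both runs are already available; the tool's exact procedure and output format may differ.

```python
# Illustrative two-sided permutation (sign-flip) test; not the tool's exact code.
import numpy as np

def permutation_test(scores_new, scores_old, n_resamples=10000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_new) - np.asarray(scores_old)  # paired per-query differences
    observed = diffs.mean()
    count = 0
    for _ in range(n_resamples):
        signs = rng.choice([-1.0, 1.0], size=diffs.size)  # randomly swap system labels per query
        if abs((signs * diffs).mean()) >= abs(observed):
            count += 1
    p_value = (count + 1) / (n_resamples + 1)
    return observed, p_value

# Toy per-query scores, purely for illustration.
diff, p = permutation_test([0.61, 0.70, 0.55], [0.58, 0.66, 0.57])
print(f"mean diff={diff:.4f}, p-value={p:.4f}, significant={p < 0.05}")
```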

Configuration

servers/evaluation/parameter.yaml
save_path: output/evaluate_results.json

# QA task
metrics: [ 'acc', 'f1', 'em', 'coverem', 'stringem', 'rouge-1', 'rouge-2', 'rouge-l' ]

# Retrieval task
qrels_path: data/qrels.txt
run_path: data/run_a.txt
ks: [ 1, 5, 10, 20, 50, 100 ]
ir_metrics: [ "mrr", "map", "recall", "ndcg", "precision" ]

# Significance test
run_new_path: data/run_a.txt
run_old_path: data/run_b.txt
n_resamples: 10000
Parameter Description:
| Parameter | Type | Description |
| --- | --- | --- |
| save_path | str | Path for saving evaluation results (a timestamp is automatically appended) |
| metrics | list[str] | Metric set used for QA / Generation tasks |
| qrels_path | str | Path to the TREC-format ground-truth (qrels) file |
| run_path | str | Result (run) file for the retrieval task |
| ks | list[int] | Truncation levels for computing NDCG@K, P@K, Recall@K, etc. |
| ir_metrics | list[str] | Retrieval metric names; supports mrr, map, recall, ndcg, precision |
| run_new_path | str | Run file produced by the new model (significance analysis) |
| run_old_path | str | Run file of the old model (significance analysis) |
| n_resamples | int | Resampling count for the permutation test |