Here we test the all-MiniLM-L6-v2 sentence-transformer embedding, which is one of the fastest models for its accuracy range. We set the retriever's top_k value to 30, and we use the nfcorpus dataset.
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-embeddings-huggingface
!pip install llama-index
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core.evaluation.benchmarks import BeirEvaluator
from llama_index.core import VectorStoreIndex
def create_retriever(documents):
    # Build a vector index over the BEIR documents with a HuggingFace embedding
    # model, then return a retriever that fetches the top 30 most similar nodes.
    embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
    index = VectorStoreIndex.from_documents(
        documents, embed_model=embed_model, show_progress=True
    )
    return index.as_retriever(similarity_top_k=30)
BeirEvaluator().run(
    create_retriever, datasets=["nfcorpus"], metrics_k_values=[3, 10, 30]
)
Dataset: nfcorpus downloaded at: /home/jonch/.cache/llama_index/datasets/BeIR__nfcorpus
Evaluating on dataset: nfcorpus
-------------------------------------
100%|███████████████████████████████████| 3633/3633 [00:00<00:00, 141316.79it/s]
Parsing documents into nodes: 100%|████████| 3633/3633 [00:06<00:00, 569.35it/s]
Generating embeddings: 100%|████████████████| 3649/3649 [04:22<00:00, 13.92it/s]
Retriever created for: nfcorpus
Evaluating retriever on questions against qrels
100%|█████████████████████████████████████████| 323/323 [01:26<00:00, 3.74it/s]
Results for: nfcorpus
{'NDCG@3': 0.35476, 'MAP@3': 0.07489, 'Recall@3': 0.08583, 'precision@3': 0.33746}
{'NDCG@10': 0.31403, 'MAP@10': 0.11003, 'Recall@10': 0.15885, 'precision@10': 0.23994}
{'NDCG@30': 0.28636, 'MAP@30': 0.12794, 'Recall@30': 0.21653, 'precision@30': 0.14716}
-------------------------------------
Higher is better for all of these evaluation metrics.
This towardsdatascience article covers NDCG, MAP and MRR in greater depth.
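To make the reported numbers concrete, here is a minimal sketch (not the BEIR implementation) of precision@k, recall@k, and NDCG@k for a single query with binary relevance; the document ids and qrels below are made up for illustration:

import math

def precision_at_k(retrieved, relevant, k):
    # Fraction of the top-k retrieved docs that are relevant.
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k):
    # Fraction of all relevant docs found in the top-k.
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

def ndcg_at_k(retrieved, relevant, k):
    # DCG with binary gains, normalized by the ideal DCG
    # (all relevant docs ranked first).
    dcg = sum(
        1.0 / math.log2(rank + 2)
        for rank, doc in enumerate(retrieved[:k])
        if doc in relevant
    )
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

retrieved = ["d3", "d7", "d1", "d9", "d2"]  # hypothetical ranking
relevant = {"d1", "d2", "d5"}               # hypothetical qrels
print(precision_at_k(retrieved, relevant, 3))  # 0.33...
print(recall_at_k(retrieved, relevant, 3))     # 0.33...
print(ndcg_at_k(retrieved, relevant, 3))       # ~0.23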