In this notebook we are going to show how to use Postgresql and pgvector to perform vector searches in LlamaIndex.
If you're opening this notebook on colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-vector-stores-postgres
!pip install llama-index
Running the following cell will install Postgres with PGVector in Colab.
!sudo apt update
!echo | sudo apt install -y postgresql-common
!echo | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
!echo | sudo apt install postgresql-15-pgvector
!sudo service postgresql start
!sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'password';"
!sudo -u postgres psql -c "CREATE DATABASE vector_db;"
# import logging
# import sys
# Uncomment to see debug logs
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import SimpleDirectoryReader, StorageContext
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.postgres import PGVectorStore
import textwrap
import openai
The first step is to configure the OpenAI key. It will be used to create embeddings for the documents loaded into the index.
import os
os.environ["OPENAI_API_KEY"] = "<your key>"
openai.api_key = os.environ["OPENAI_API_KEY"]
Download Data
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
--2024-03-14 02:56:30--  https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75042 (73K) [text/plain]
Saving to: ‘data/paul_graham/paul_graham_essay.txt’

data/paul_graham/pa 100%[===================>]  73.28K  --.-KB/s    in 0.001s

2024-03-14 02:56:30 (72.2 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]
Loading documents. Load the documents stored in data/paul_graham/ using the SimpleDirectoryReader.
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
print("Document ID:", documents[0].doc_id)
Document ID: 1306591e-cc2d-430b-a74c-03ae7105ecab
Create the Database. Using an existing postgres running locally, create the database we'll be using.
import psycopg2
connection_string = "postgresql://postgres:password@localhost:5432"
db_name = "vector_db"
conn = psycopg2.connect(connection_string)
conn.autocommit = True
with conn.cursor() as c:
    c.execute(f"DROP DATABASE IF EXISTS {db_name}")
    c.execute(f"CREATE DATABASE {db_name}")
Create the index. Here we create an index backed by Postgres using the documents loaded previously. PGVectorStore takes a few arguments.
from sqlalchemy import make_url
url = make_url(connection_string)
vector_store = PGVectorStore.from_params(
    database=db_name,
    host=url.host,
    password=url.password,
    port=url.port,
    user=url.username,
    table_name="paul_graham_essay",
    embed_dim=1536,  # openai embedding dimension
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, show_progress=True
)
query_engine = index.as_query_engine()
Parsing nodes: 0%| | 0/1 [00:00<?, ?it/s]
Generating embeddings: 0%| | 0/22 [00:00<?, ?it/s]
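As a quick sanity check, you can count the stored chunks directly in Postgres. This sketch assumes PGVectorStore's default behavior of prefixing table names with data_, so paul_graham_essay becomes data_paul_graham_essay (an assumption; adjust if your version names the table differently):
# Count the embedded chunks in the underlying table.
# The "data_" prefix is an assumption about PGVectorStore's default naming.
with psycopg2.connect(f"{connection_string}/{db_name}") as check_conn:
    with check_conn.cursor() as c:
        c.execute("SELECT count(*) FROM data_paul_graham_essay")
        print("rows:", c.fetchone()[0])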
Query the index. We can now ask questions using our index.
response = query_engine.query("What did the author do?")
print(textwrap.fill(str(response), 100))
The author worked on writing and programming before college, initially focusing on writing short stories and later transitioning to programming on early computers like the IBM 1401 using Fortran. The author continued programming on microcomputers like the TRS-80, creating simple games and a word processor. In college, the author initially planned to study philosophy but switched to studying AI due to a lack of interest in philosophy courses. The author was inspired to work on AI after encountering works like Heinlein's novel "The Moon is a Harsh Mistress" and seeing Terry Winograd using SHRDLU in a PBS documentary.
response = query_engine.query("What happened in the mid 1980s?")
print(textwrap.fill(str(response), 100))
AI was in the air in the mid 1980s, with two main influences that sparked interest in working on it: a novel by Heinlein called The Moon is a Harsh Mistress, featuring an intelligent computer called Mike, and a PBS documentary showing Terry Winograd using SHRDLU.
Querying existing index
vector_store = PGVectorStore.from_params(
    database="vector_db",
    host="localhost",
    password="password",
    port=5432,
    user="postgres",
    table_name="paul_graham_essay",
    embed_dim=1536,  # openai embedding dimension
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do?")
print(textwrap.fill(str(response), 100))
The author worked on writing short stories and programming before college. Initially, the author wrote short stories and later started programming on an IBM 1401 using an early version of Fortran. With the introduction of microcomputers, the author's interest in programming grew, leading to writing simple games, predictive programs, and a word processor. Despite initially planning to study philosophy in college, the author switched to studying AI due to a lack of interest in philosophy courses. The author was inspired to work on AI after encountering a novel featuring an intelligent computer and a PBS documentary showcasing AI technology.
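If you want to see which chunks back an answer, you can also use the index as a retriever instead of a query engine; a small sketch:
# Retrieve the top nodes along with their similarity scores.
retriever = index.as_retriever(similarity_top_k=3)
for node_with_score in retriever.retrieve("What did the author do?"):
    print(node_with_score.score, node_with_score.node.get_content()[:80])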
Hybrid Search
To enable hybrid search, you need to:
1. pass in hybrid_search=True when constructing the PGVectorStore (and optionally configure text_search_config with the desired language)
2. pass in vector_store_query_mode="hybrid" when constructing the query engine (this config is passed to the underlying retriever). You can also optionally set sparse_top_k to configure how many results we should obtain from sparse text search (defaults to the same value as similarity_top_k).
from sqlalchemy import make_url
url = make_url(connection_string)
hybrid_vector_store = PGVectorStore.from_params(
    database=db_name,
    host=url.host,
    password=url.password,
    port=url.port,
    user=url.username,
    table_name="paul_graham_essay_hybrid_search",
    embed_dim=1536,  # openai embedding dimension
    hybrid_search=True,
    text_search_config="english",
)
storage_context = StorageContext.from_defaults(
    vector_store=hybrid_vector_store
)
hybrid_index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
hybrid_query_engine = hybrid_index.as_query_engine(
    vector_store_query_mode="hybrid", sparse_top_k=2
)
hybrid_response = hybrid_query_engine.query(
    "Who does Paul Graham think of with the word schtick"
)
/workspaces/llama_index/llama-index-integrations/vector_stores/llama-index-vector-stores-postgres/llama_index/vector_stores/postgres/base.py:571: SAWarning: UserDefinedType REGCONFIG() will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this warning at: https://sqlalche.me/e/20/cprf)
  res = session.execute(stmt)
print(hybrid_response)
Roy Lichtenstein
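To see what the text-search side contributes on its own, you can retrieve in sparse-only mode; a minimal sketch (truncating the text to 80 characters just for display):
# Retrieve with full-text search only, to inspect the sparse results and scores.
sparse_retriever = hybrid_index.as_retriever(
    vector_store_query_mode="sparse",
    similarity_top_k=2,
)
for n in sparse_retriever.retrieve(
    "Who does Paul Graham think of with the word schtick"
):
    print(n.score, n.node.get_content()[:80])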
Improving hybrid search with QueryFusionRetriever
Since the scores for text search and vector search are calculated differently, the nodes that were found only by text search will have a much lower score.
You can often improve hybrid search performance by using QueryFusionRetriever, which makes better use of the mutual information to rank the nodes.
from llama_index.core.response_synthesizers import CompactAndRefine
from llama_index.core.retrievers import QueryFusionRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
vector_retriever = hybrid_index.as_retriever(
    vector_store_query_mode="default",
    similarity_top_k=5,
)
text_retriever = hybrid_index.as_retriever(
    vector_store_query_mode="sparse",
    similarity_top_k=5,  # interchangeable with sparse_top_k in this context
)
retriever = QueryFusionRetriever(
    [vector_retriever, text_retriever],
    similarity_top_k=5,
    num_queries=1,  # set this to 1 to disable query generation
    mode="relative_score",
    use_async=False,
)
response_synthesizer = CompactAndRefine()
query_engine = RetrieverQueryEngine(
    retriever=retriever,
    response_synthesizer=response_synthesizer,
)
response = query_engine.query(
    "Who does Paul Graham think of with the word schtick, and why?"
)
print(response)
Paul Graham thinks of Roy Lichtenstein when he uses the word "schtick" because he recognizes paintings resembling a specific type of cartoon style as being created by Roy Lichtenstein.
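To inspect the relative scores the fusion retriever assigns, you can call it directly; a short sketch:
# Inspect the fused (relative) scores across both retrievers.
for n in retriever.retrieve(
    "Who does Paul Graham think of with the word schtick, and why?"
):
    print(round(n.score, 3), n.node.node_id)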
Metadata Filters
PGVectorStore supports storing metadata in nodes, and filtering based on that metadata during the retrieval step.
!mkdir -p 'data/git_commits/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/csv/commit_history.csv' -O 'data/git_commits/commit_history.csv'
--2024-03-14 02:56:46--  https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/csv/commit_history.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1753902 (1.7M) [text/plain]
Saving to: ‘data/git_commits/commit_history.csv’

data/git_commits/co 100%[===================>]   1.67M  --.-KB/s    in 0.02s

2024-03-14 02:56:46 (106 MB/s) - ‘data/git_commits/commit_history.csv’ saved [1753902/1753902]
import csv
with open("data/git_commits/commit_history.csv", "r") as f:
    commits = list(csv.DictReader(f))
print(commits[0])
print(len(commits))
{'commit': '44e41c12ab25e36c202f58e068ced262eadc8d16', 'author': 'Lakshmi Narayanan Sreethar<lakshmi@timescale.com>', 'date': 'Tue Sep 5 21:03:21 2023 +0530', 'change summary': 'Fix segfault in set_integer_now_func', 'change details': 'When an invalid function oid is passed to set_integer_now_func, it finds out that the function oid is invalid but before throwing the error, it calls ReleaseSysCache on an invalid tuple causing a segfault. Fixed that by removing the invalid call to ReleaseSysCache. Fixes #6037 '}
4167
# Create a TextNode for each of the first 100 commits
from llama_index.core.schema import TextNode
from datetime import datetime
import re
nodes = []
dates = set()
authors = set()
for commit in commits[:100]:
    author_email = commit["author"].split("<")[1][:-1]
    commit_date = datetime.strptime(
        commit["date"], "%a %b %d %H:%M:%S %Y %z"
    ).strftime("%Y-%m-%d")
    commit_text = commit["change summary"]
    if commit["change details"]:
        commit_text += "\n\n" + commit["change details"]
    fixes = re.findall(r"#(\d+)", commit_text, re.IGNORECASE)
    nodes.append(
        TextNode(
            text=commit_text,
            metadata={
                "commit_date": commit_date,
                "author": author_email,
                "fixes": fixes,
            },
        )
    )
    dates.add(commit_date)
    authors.add(author_email)
print(nodes[0])
print(min(dates), "to", max(dates))
print(authors)
Node ID: 69513543-dee5-4c65-b4b8-39295f11e669
Text: Fix segfault in set_integer_now_func  When an invalid function oid is passed to set_integer_now_func, it finds out that the function oid is invalid but before throwing the error, it calls ReleaseSysCache on an invalid tuple causing a segfault. Fixed that by removing the invalid call to ReleaseSysCache. Fixes #6037
2023-03-22 to 2023-09-05
{'rafia.sabih@gmail.com', 'erik@timescale.com', 'jguthrie@timescale.com', 'sven@timescale.com', '36882414+akuzm@users.noreply.github.com', 'me@noctarius.com', 'satish.8483@gmail.com', 'nikhil@timescale.com', 'konstantina@timescale.com', 'dmitry@timescale.com', 'mats@timescale.com', 'jan@timescale.com', 'lakshmi@timescale.com', 'fabriziomello@gmail.com', 'engel@sero-systems.de'}
# Create a vector_store object by passing the parameters to PGVectorStore
vector_store = PGVectorStore.from_params(
    database=db_name,
    host=url.host,
    password=url.password,
    port=url.port,
    user=url.username,
    table_name="metadata_filter_demo3",
    embed_dim=1536,  # openai embedding dimension
)
# Create an index from the vector_store object
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
# Insert the nodes into the index
index.insert_nodes(nodes)
print(index.as_query_engine().query("How did Lakshmi fix the segfault?"))
Lakshmi fixed the segfault by removing the invalid call to ReleaseSysCache that was causing the issue.
Now we can filter by commit author or by date when retrieving nodes.
from llama_index.core.vector_stores.types import (
    MetadataFilter,
    MetadataFilters,
)
filters = MetadataFilters(
    filters=[
        MetadataFilter(key="author", value="mats@timescale.com"),
        MetadataFilter(key="author", value="sven@timescale.com"),
    ],
    condition="or",
)
retriever = index.as_retriever(
    similarity_top_k=10,
    filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
    print(node.node.metadata)
{'commit_date': '2023-08-07', 'author': 'mats@timescale.com', 'fixes': []}
{'commit_date': '2023-08-27', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-07-13', 'author': 'mats@timescale.com', 'fixes': []}
{'commit_date': '2023-08-07', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-30', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-15', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-23', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-10', 'author': 'mats@timescale.com', 'fixes': []}
{'commit_date': '2023-07-25', 'author': 'mats@timescale.com', 'fixes': ['5892']}
{'commit_date': '2023-08-21', 'author': 'sven@timescale.com', 'fixes': []}
filters = MetadataFilters(
    filters=[
        MetadataFilter(key="commit_date", value="2023-08-15", operator=">="),
        MetadataFilter(key="commit_date", value="2023-08-25", operator="<="),
    ],
    condition="and",
)
retriever = index.as_retriever(
    similarity_top_k=10,
    filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
    print(node.node.metadata)
{'commit_date': '2023-08-23', 'author': 'erik@timescale.com', 'fixes': []}
{'commit_date': '2023-08-17', 'author': 'konstantina@timescale.com', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}
{'commit_date': '2023-08-24', 'author': 'lakshmi@timescale.com', 'fixes': []}
{'commit_date': '2023-08-15', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-23', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-21', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-20', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-21', 'author': 'sven@timescale.com', 'fixes': []}
Apply nested filters. In the above examples, we combined multiple filters using AND or OR. We can also combine multiple sets of filters.
For example, in SQL:
WHERE (commit_date >= '2023-08-01' AND commit_date <= '2023-08-15') AND (author = 'mats@timescale.com' OR author = 'sven@timescale.com')
filters = MetadataFilters(
    filters=[
        MetadataFilters(
            filters=[
                MetadataFilter(
                    key="commit_date", value="2023-08-01", operator=">="
                ),
                MetadataFilter(
                    key="commit_date", value="2023-08-15", operator="<="
                ),
            ],
            condition="and",
        ),
        MetadataFilters(
            filters=[
                MetadataFilter(key="author", value="mats@timescale.com"),
                MetadataFilter(key="author", value="sven@timescale.com"),
            ],
            condition="or",
        ),
    ],
    condition="and",
)
retriever = index.as_retriever(
    similarity_top_k=10,
    filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
    print(node.node.metadata)
{'commit_date': '2023-08-07', 'author': 'mats@timescale.com', 'fixes': []}
{'commit_date': '2023-08-07', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-15', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-10', 'author': 'mats@timescale.com', 'fixes': []}
The above can be simplified by using the IN operator. PGVectorStore supports in, nin, and contains for comparing an element against a list.
filters = MetadataFilters(
    filters=[
        MetadataFilter(key="commit_date", value="2023-08-01", operator=">="),
        MetadataFilter(key="commit_date", value="2023-08-15", operator="<="),
        MetadataFilter(
            key="author",
            value=["mats@timescale.com", "sven@timescale.com"],
            operator="in",
        ),
    ],
    condition="and",
)
retriever = index.as_retriever(
    similarity_top_k=10,
    filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
    print(node.node.metadata)
{'commit_date': '2023-08-07', 'author': 'mats@timescale.com', 'fixes': []}
{'commit_date': '2023-08-07', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-15', 'author': 'sven@timescale.com', 'fixes': []}
{'commit_date': '2023-08-10', 'author': 'mats@timescale.com', 'fixes': []}
# Same thing, with NOT IN
filters = MetadataFilters(
    filters=[
        MetadataFilter(key="commit_date", value="2023-08-01", operator=">="),
        MetadataFilter(key="commit_date", value="2023-08-15", operator="<="),
        MetadataFilter(
            key="author",
            value=["mats@timescale.com", "sven@timescale.com"],
            operator="nin",
        ),
    ],
    condition="and",
)
retriever = index.as_retriever(
    similarity_top_k=10,
    filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
    print(node.node.metadata)
{'commit_date': '2023-08-09', 'author': 'me@noctarius.com', 'fixes': ['5805']}
{'commit_date': '2023-08-15', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}
{'commit_date': '2023-08-11', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}
{'commit_date': '2023-08-09', 'author': 'konstantina@timescale.com', 'fixes': ['5923', '5680', '5774', '5786', '5906', '5912']}
{'commit_date': '2023-08-03', 'author': 'dmitry@timescale.com', 'fixes': []}
{'commit_date': '2023-08-03', 'author': 'dmitry@timescale.com', 'fixes': ['5908']}
{'commit_date': '2023-08-01', 'author': 'nikhil@timescale.com', 'fixes': []}
{'commit_date': '2023-08-10', 'author': 'konstantina@timescale.com', 'fixes': []}
{'commit_date': '2023-08-10', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}
# CONTAINS
filters = MetadataFilters(
    filters=[
        MetadataFilter(
            key="fixes", value="5680", operator="contains"
        ),  # the "fixes" list contains "5680"
    ]
)
retriever = index.as_retriever(
    similarity_top_k=10,
    filters=filters,
)
retrieved_nodes = retriever.retrieve("How did these commits fix the issue?")
for node in retrieved_nodes:
    print(node.node.metadata)
{'commit_date': '2023-08-09', 'author': 'konstantina@timescale.com', 'fixes': ['5923', '5680', '5774', '5786', '5906', '5912']}
PgVector Query Options
IVFFlat probes. Specify the number of IVFFlat probes (1 by default). When retrieving from the index, you can specify an appropriate number of IVFFlat probes (higher is better for recall, lower is better for speed).
retriever = index.as_retriever(
    vector_store_query_mode="hybrid",
    similarity_top_k=5,
    vector_store_kwargs={"ivfflat_probes": 10},
)
HNSW EF search. Specify the size of the dynamic candidate list for search (40 by default).
retriever = index.as_retriever(
    vector_store_query_mode="hybrid",
    similarity_top_k=5,
    vector_store_kwargs={"hnsw_ef_search": 300},
)
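These vector_store_kwargs compose with the other retriever options shown earlier; for example, a sketch combining the HNSW search setting with the metadata filters defined above:
# Combine an HNSW search-time setting with metadata filters (illustrative only).
retriever = index.as_retriever(
    similarity_top_k=5,
    filters=filters,  # reuses the "contains" filters defined above
    vector_store_kwargs={"hnsw_ef_search": 300},
)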