We show how LlamaIndex can fit into your Make.com workflow by sending the GPT Index response to a scenario webhook.
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-readers-make-com
!pip install llama-index
import logging
import sys

# Configure logging so LlamaIndex progress is printed to stdout
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.readers.make_com import MakeWrapper
Download Data
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
# Load the essay from disk and build an in-memory vector index over it
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(documents=documents)
# Set logging level to DEBUG for more detailed outputs

# Query the index
query_str = "What did the author do growing up?"
query_engine = index.as_query_engine()
response = query_engine.query(query_str)
# Send the response to a Make.com scenario webhook
wrapper = MakeWrapper()
wrapper.pass_response_to_webhook("<webhook_url>", response, query_str)
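Under the hood, `pass_response_to_webhook` delivers the result to Make.com as an HTTP POST. If you want to replicate (or debug) what your scenario webhook receives, here is a minimal sketch using `requests`; the payload keys ("response", "query") are an assumption for illustration, so check the fields your webhook actually reports in the Make.com scenario editor.

import requests

# A hand-rolled POST to a Make.com scenario webhook (sketch, not the
# library's exact implementation).
# NOTE: the payload keys below are assumed for illustration; inspect the
# data structure your scenario webhook actually receives.
payload = {
    "response": str(response),  # the synthesized answer text
    "query": query_str,  # the original question
}
r = requests.post("<webhook_url>", json=payload)
r.raise_for_status()  # raise if Make.com did not accept the request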