You might want to consider starting at the beginning of this tutorial.
%load_ext autoreload
%autoreload 2
from tf.app import use
VERSION = "2021"
A = use("ETCBC/bhsa", hoist=globals())
Locating corpus resources ...
Name | # of nodes | # slots/node | % coverage |
---|---|---|---|
book | 39 | 10938.21 | 100 |
chapter | 929 | 459.19 | 100 |
lex | 9230 | 46.22 | 100 |
verse | 23213 | 18.38 | 100 |
half_verse | 45179 | 9.44 | 100 |
sentence | 63717 | 6.70 | 100 |
sentence_atom | 64514 | 6.61 | 100 |
clause | 88131 | 4.84 | 100 |
clause_atom | 90704 | 4.70 | 100 |
phrase | 253203 | 1.68 | 100 |
phrase_atom | 267532 | 1.59 | 100 |
subphrase | 113850 | 1.42 | 38 |
word | 426590 | 1.00 | 100 |
It might be helpful to peek under the hood, especially when exploring searches that run slowly.
If you went through the previous parts of the tutorial, you have encountered cases where things come to a grinding halt.
Even in those cases we can get a hunch of what is going on.
For that, we use the lower-level search API `S` of Text-Fabric, not the wrappers that the high-level `A` API provides.
The main difference is that `S.search()` returns a generator of the results, whereas `A.search()` returns a list of the results.
In fact, `A.search()` calls the generator function delivered by `S.search()` as often as needed.
For some queries, fetching results is quite costly, so costly that we do not want to fetch
all results up-front. Rather, we want to fetch a few, to see how it goes.
In these cases, directly using `S.search()` is preferred over `A.search()`.
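The generator-versus-list distinction is plain Python, not something specific to Text-Fabric. Here is a minimal sketch, using a stand-in generator rather than the real `S.search()`, of why fetching a few results from a generator is cheap:

```python
from itertools import islice

def slow_results():
    """Stand-in for S.search(): yields results one at a time, on demand."""
    for n in range(10**9):       # a huge result space ...
        yield n                  # ... but nothing is computed until asked for

gen = slow_results()             # no results computed yet
first_three = list(islice(gen, 3))
print(first_three)               # [0, 1, 2] -- like S.fetch(limit=3)
```

`A.search()`, by contrast, behaves like `list(...)` over such a generator: it exhausts it (up to a limit) before returning.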
query = """
book
  chapter
    verse
      phrase det=und
        word lex=>LHJM/
"""
First we call `S.study(query)`.
The syntax will be checked, features loaded, the search space will be set up, narrowed down, and the fetching of results will be prepared, but not yet executed.
In order to make the query a bit more interesting, we lift the constraint that the results must be in Genesis 1-2.
S.study(query)
0.00s Checking search template ...
0.00s Setting up search space for 5 objects ...
0.25s Constraining search space with 4 relations ...
0.29s 2 edges thinned
0.29s Setting up retrieval plan with strategy small_choice_multi ...
0.32s Ready to deliver results from 3345 nodes
Iterate over S.fetch() to get the results
See S.showPlan() to interpret the results
Before we rush to the results, let's have a look at the plan.
S.showPlan()
1.47s The results are connected to the original search template as follows:
 0
 1 R0 book
 2 R1 chapter
 3 R2 verse
 4 R3 phrase det=und
 5 R4 word lex=>LHJM
 6
Here you see already what your results will look like.
Each result `r` is a tuple of nodes `(R0, R1, R2, R3, R4)` that instantiate the objects in your template.
In case you are curious, you can get details about the search space as well:
S.showPlan(details=True)
Search with 5 objects and 4 relations
Results are instantiations of the following objects:
node 0-book     39 choices
node 1-chapter 929 choices
node 2-verse   754 choices
node 3-phrase  805 choices
node 4-word    818 choices
Performance parameters:
  yarnRatio    = 1.25
  tryLimitFrom = 40
  tryLimitTo   = 40
Instantiations are computed along the following relations:
node 0-book                   39 choices
edge 0-book    [[ 1-chapter 23.8 choices
edge 1-chapter [[ 2-verse    1.0 choices
edge 2-verse   [[ 3-phrase   1.1 choices (thinned)
edge 3-phrase  [[ 4-word     1.0 choices (thinned)
3.15s The results are connected to the original search template as follows:
 0
 1 R0 book
 2 R1 chapter
 3 R2 verse
 4 R3 phrase det=und
 5 R4 word lex=>LHJM
 6
The part about the nodes shows you how many possible instantiations have been found for each object in your template. These are not results yet, because only combinations of instantiations that satisfy all constraints are results.
The constraints come from the relations between the objects that you specified.
In this case, there is only an implicit relation: embedding (`[[`).
Later on we'll examine all spatial relations.
The part about the edges shows you the constraints, and in what order they will be computed when stitching results together. In this case the order is exactly the order by which the relations appear in the template, but that will not always be the case. Text-Fabric spends some time and ingenuity to find out an optimal stitch plan. Fetching results is like selecting a node, stitching it to another node with an edge, and so on, until a full stitch of nodes intersects with all the node sets from which they must be chosen (the yarns).
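In miniature, stitching can be pictured as a nested search with pruning. Here is a toy sketch (not Text-Fabric's actual strategy code), with two small yarns and a single edge constraint:

```python
# Two "yarns": candidate nodes for two template objects.
yarn_a = [1, 2, 3]
yarn_b = [2, 3, 4]

def stitches():
    """Yield full stitches: pairs satisfying the edge constraint a < b."""
    for a in yarn_a:         # select a node from the first yarn
        for b in yarn_b:     # try to stitch a node from the second yarn
            if a < b:        # the edge constraint; failed tries are abandoned
                yield (a, b)

print(list(stitches()))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

The real engine has many yarns, many edge types, and a carefully chosen stitching order, but the shape of the computation is the same.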
Fetching results may take time.
For some queries, it can take a long time to walk through all results. Even worse, it may take a long time before you get even the first result, because during stitching many partial stitches are tried and abandoned before one can be completed.
This has to do with search strategies on the one hand, and with the very real possibility of encountering pathological search patterns on the other: patterns with billions of results, mostly unintended. For example, a simple query that asks for 5 words in the Hebrew Bible, without further constraints, has 425,000 to the power of 5 results. That is roughly 10^28 (a one with 28 zeros), about the number of molecules in a few hundred litres of air. That may not sound like much, but it is 10,000 times the number of bytes that can currently be stored on the whole Internet.
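The arithmetic is easy to check (a sketch; 425,000 rounds down the actual word count of 426,590):

```python
n_words = 425_000
pathological = n_words ** 5      # 5 unconstrained words
print(f"{pathological:.2e}")     # about 1.4e+28
```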
Text-Fabric search is not yet done with finding optimal search strategies, and I hope to refine its arsenal of methods in the future, depending on what you report.
It is always a good idea to get a feel for the amount of results, before you dive into them head-on.
S.count(progress=1, limit=5)
0.00s Counting results per 1 up to 5 ... | 0.00s 1 | 0.00s 2 | 0.00s 3 | 0.00s 4 | 0.00s 5 0.01s Done: 6 results
We asked for 5 results in total, with a progress message for every one. That was a bit conservative.
S.count(progress=100, limit=500)
0.00s Counting results per 100 up to 500 ... | 0.01s 100 | 0.02s 200 | 0.03s 300 | 0.04s 400 | 0.05s 500 0.05s Done: 501 results
Still pretty quick; now we want to count all results.
S.count(progress=200, limit=None)
0.00s Counting results per 200 ... | 0.02s 200 | 0.04s 400 | 0.06s 600 | 0.07s 800 0.07s Done: 818 results
It is time to see something of those results.
S.fetch(limit=10)
((426626, 427478, 1435353, 882995, 381820), (426626, 427478, 1435364, 883090, 382059), (426627, 427485, 1435532, 884992, 385801), (426627, 427486, 1435548, 885229, 386188), (426627, 427492, 1435804, 887032, 390487), (426627, 427493, 1435830, 887367, 391119), (426627, 427493, 1435831, 887394, 391159), (426628, 427497, 1435979, 888253, 392968), (426628, 427498, 1436032, 888574, 393786), (426628, 427498, 1436037, 888618, 393895))
Not very informative.
Just a quick observation: look at the last column.
These are the result nodes for the `word` part in the query, indicated as `R4` by `showPlan()` before.
And indeed, they are all below 426,590, the number of words in the Hebrew Bible.
Nevertheless, we want to glean a bit more information from them.
for r in S.fetch(limit=10):
print(S.glean(r))
Ezra 8:17 phrase[מְשָׁרְתִ֖ים לְבֵ֥ית אֱלֹהֵֽינוּ׃ ] אֱלֹהֵֽינוּ׃ Ezra 8:28 phrase[נְדָבָ֔ה לַיהוָ֖ה אֱלֹהֵ֥י אֲבֹתֵיכֶֽם׃ ] אֱלֹהֵ֥י Nehemiah 5:15 phrase[מִפְּנֵ֖י יִרְאַ֥ת אֱלֹהִֽים׃ ] אֱלֹהִֽים׃ Nehemiah 6:12 phrase[אֱלֹהִ֖ים ] אֱלֹהִ֖ים Nehemiah 12:46 phrase[לֵֽאלֹהִֽים׃ ] אלֹהִֽים׃ Nehemiah 13:25 phrase[בֵּֽאלֹהִ֗ים ] אלֹהִ֗ים Nehemiah 13:26 phrase[אֱלֹהִ֔ים ] אֱלֹהִ֔ים 1_Chronicles 4:10 phrase[אֱלֹהִ֖ים ] אֱלֹהִ֖ים 1_Chronicles 5:20 phrase[לֵאלֹהִ֤ים ] אלֹהִ֤ים 1_Chronicles 5:25 phrase[אֱלֹהִ֖ים ] אֱלֹהִ֖ים
It is not possible to do `len(S.fetch())`, because `fetch()` is a generator, not a list.
It will deliver a result every time it is asked, for as long as there are results,
but it does not know in advance how many there will be.
Fetching a result can be costly: due to the constraints, a lot of possibilities
may have to be tried and rejected before the next result is found.
That is why you often see results coming in at varying speeds when counting them.
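Again, this is ordinary generator behaviour in Python. A quick illustration with a stand-in generator, not `S.fetch()` itself:

```python
def fetch():
    """Stand-in for S.fetch(): a generator of three results."""
    yield from ((1,), (2,), (3,))

try:
    len(fetch())                 # generators do not support len()
except TypeError:
    print("generators have no len()")

# The only way to count is to consume the generator, as S.count() does:
n_results = sum(1 for _ in fetch())
print(n_results)                 # 3
```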
We can also use `A.table()` to make a list of results.
This function is part of the `Bhsa` API, not of the generic Text-Fabric machinery, as opposed to `S.glean()`.
So you can use `S.glean()` for every Text-Fabric corpus, but its output is still not very nice.
`A.table()` gives much nicer output.
A.table(S.fetch(limit=5))
n | p | book | chapter | verse | phrase | word |
---|---|---|---|---|---|---|
1 | Ezra 8:17 | Ezra | Ezra 8 | מְשָׁרְתִ֖ים לְבֵ֥ית אֱלֹהֵֽינוּ׃ | אֱלֹהֵֽינוּ׃ | |
2 | Ezra 8:28 | Ezra | Ezra 8 | נְדָבָ֔ה לַיהוָ֖ה אֱלֹהֵ֥י אֲבֹתֵיכֶֽם׃ | אֱלֹהֵ֥י | |
3 | Nehemiah 5:15 | Nehemiah | Nehemiah 5 | מִפְּנֵ֖י יִרְאַ֥ת אֱלֹהִֽים׃ | אֱלֹהִֽים׃ | |
4 | Nehemiah 6:12 | Nehemiah | Nehemiah 6 | אֱלֹהִ֖ים | אֱלֹהִ֖ים | |
5 | Nehemiah 12:46 | Nehemiah | Nehemiah 12 | לֵֽאלֹהִֽים׃ | אלֹהִֽים׃ |
Above we mentioned that there are queries with astronomically many results. Here we present one:
query = """
word
# word
"""
We are asking for any pair of different words. That will give roughly 425,000 * 425,000 results, which is 180 billion. This is a lot to produce; it will take time on even the best of computers, and once you have the results, what would you do with them? Let's see what happens if we count these results.
S.study(query)
0.00s Checking search template ...
0.00s Setting up search space for 2 objects ...
0.10s Constraining search space with 1 relations ...
0.10s 0 edges thinned
0.10s Setting up retrieval plan with strategy small_choice_multi ...
0.10s Ready to deliver results from 853180 nodes
Iterate over S.fetch() to get the results
See S.showPlan() to interpret the results
S.count(progress=500000)
0.00s Counting results per 500000 ... | 0.16s 500000 | 0.31s 1000000 | 0.46s 1500000 | 0.61s 2000000 | 0.76s 2500000 | 0.91s 3000000 | 1.05s 3500000 | 1.20s 4000000 | 1.35s 4500000 | 1.50s 5000000 | 1.65s 5500000
1.74s cut off at 5787324 results. There are more ...
Text-Fabric has cut off the process at a certain limit. By default, this limit is a multiple of the highest node number in your corpus:
5787324 / F.otype.maxNode
4.0
If you really need more results than this limit, you can specify a higher limit:
S.count(progress=500000, limit=8 * F.otype.maxNode)
0.00s Counting results per 500000 up to 11574648 ... | 0.16s 500000 | 0.31s 1000000 | 0.46s 1500000 | 0.61s 2000000 | 0.76s 2500000 | 0.90s 3000000 | 1.05s 3500000 | 1.20s 4000000 | 1.35s 4500000 | 1.50s 5000000 | 1.65s 5500000 | 1.80s 6000000 | 1.94s 6500000 | 2.09s 7000000 | 2.24s 7500000 | 2.39s 8000000 | 2.54s 8500000 | 2.69s 9000000 | 2.84s 9500000 | 2.99s 10000000 | 3.14s 10500000 | 3.28s 11000000 | 3.43s 11500000 3.46s Done: 11574649 results
Now you do not get a cut-off message, because you got what you asked for.
Or, in the advanced interface, let's fetch the standard maximum number of results:
results = A.search(query)
3.11s cut off at 5787324 results. There are more ...
4.92s 5787324 results
Or, with a modified limit:
results = A.search(query, limit=5 * F.otype.maxNode)
6.45s 7234155 results
Again, you do not get a cut-off message, because you got what you asked for.
Our first search template had some pretty tight constraints on one of its objects, so the amount of data to deal with was pretty limited.
If the constraints are weak, search may become slow.
For example, here is a query that looks for pairs of phrases in the same clause in such a way that one is engulfed by the other.
query = """
% test
% verse book=Genesis chapter=2 verse=25
verse
  clause
    p1:phrase
      w1:word
      w3:word
      w1 < w3

    p2:phrase
      w2:word
      w1 < w2
      w3 > w2

    p1 < p2
"""
A couple of remarks on things you may have encountered before.
`<` means: comes before, and `>`: comes after, in the canonical order for nodes.
For words this means: comes textually before/after; for other node types the meaning
of the canonical order is explained in the Text-Fabric documentation.
Note on order
Look at the words `w1` and `w3` below phrase `p1`.
Although in the template `w1` comes before `w3`, this is not
translated into a search constraint of the same nature.
Order between objects in a template is never significant, only embedding is.
Because order is not significant, you have to specify order yourself, using relations.
It turns out that this is better than the other way around.
In MQL order is significant, and it is very difficult to
search for `w1` and `w2` in any order.
Especially if you are looking for more than 2 complex objects with lots of feature
conditions, your search template would explode if you had to spell out all
possible permutations. See the example of Reinoud Oosting below.
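To see how quickly spelled-out orderings would explode: with order-significant templates you would need one template variant per permutation of the n objects, i.e. n! variants. A back-of-the-envelope sketch:

```python
from math import factorial

# One template variant per ordering of n objects:
for n in (2, 3, 5, 7):
    print(f"{n} objects -> {factorial(n)} template variants")
```

Even at 7 objects you would be maintaining over five thousand templates.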
Note on gaps
Look at the phrases `p1` and `p2`.
We do not need a particular order between them, only that they are different;
but to prevent duplicate results with `p1` and `p2` interchanged, we
stipulate that `p1 < p2`.
There are many possible spatial relationships between different objects.
In many cases, neither comes entirely before the other, nor vice versa.
They can overlap, one can occur in a gap of the other, they can be completely disjoint
and interleaved, and so on.
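Since Text-Fabric nodes are linked to sets of word slots, these relationships can be pictured with plain Python sets. A sketch with made-up slot numbers, not the real TF operators:

```python
p1 = {1, 2, 6, 7}   # a "phrase" with a gap at slots 3-5
p2 = {3, 4}         # sits entirely inside p1's gap
p3 = {4, 5, 6}      # overlaps both the gap and p1 itself

# p2 lies within p1's span but shares no slots: it is in p1's gap
in_gap = not (p1 & p2) and min(p1) < min(p2) and max(p2) < max(p1)

# p3 and p1 overlap without either embedding the other
overlapping = bool(p1 & p3) and bool(p3 - p1)

print(in_gap)       # True
print(overlapping)  # True
```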
# ignore this
# S.tweakPerformance(yarnRatio=2)
S.study(query)
0.00s Checking search template ...
0.00s Setting up search space for 7 objects ...
0.22s Constraining search space with 10 relations ...
0.80s 6 edges thinned
0.80s Setting up retrieval plan with strategy small_choice_multi ...
0.84s Ready to deliver results from 1894471 nodes
Iterate over S.fetch() to get the results
See S.showPlan() to interpret the results
Text-Fabric knows that narrowing down the search space in this case would take ages, without resulting in a significantly shrunken space. So it skips doing so for most constraints.
Let us see the plan, with details.
S.showPlan(details=True)
Search with 7 objects and 10 relations
Results are instantiations of the following objects:
node 0-verse   23207 choices
node 1-clause  88081 choices
node 2-phrase 252998 choices
node 3-word   425729 choices
node 4-word   425729 choices
node 5-phrase 252998 choices
node 6-word   425729 choices
Performance parameters:
  yarnRatio    = 1.25
  tryLimitFrom = 40
  tryLimitTo   = 40
Instantiations are computed along the following relations:
node 0-verse              23207 choices
edge 0-verse  [[ 1-clause   4.4 choices (thinned)
edge 1-clause [[ 5-phrase   2.8 choices (thinned)
edge 5-phrase [[ 6-word     1.6 choices (thinned)
edge 1-clause [[ 2-phrase   3.2 choices (thinned)
edge 5-phrase >  2-phrase   0   choices
edge 2-phrase [[ 4-word     1.7 choices (thinned)
edge 6-word   <  4-word     0   choices
edge 2-phrase [[ 3-word     1.9 choices (thinned)
edge 6-word   >  3-word     0   choices
edge 3-word   <  4-word     0   choices
6.61s The results are connected to the original search template as follows:
 0
 1 % test
 2 % verse book=Genesis chapter=2 verse=25
 3 R0 verse
 4 R1 clause
 5
 6 R2 p1:phrase
 7 R3 w1:word
 8 R4 w3:word
 9    w1 < w3
10
11 R5 p2:phrase
12 R6 w2:word
13    w1 < w2
14    w3 > w2
15
16    p1 < p2
17
As you see, we have a hefty search space here.
Let us play with the `count()` function.
S.count(progress=10, limit=100)
0.00s Counting results per 10 up to 100 ... | 0.02s 10 | 0.02s 20 | 0.02s 30 | 0.03s 40 | 0.03s 50 | 0.03s 60 | 0.03s 70 | 0.03s 80 | 0.03s 90 | 0.03s 100 0.03s Done: 101 results
We can be bolder than this!
S.count(progress=100, limit=1000)
0.00s Counting results per 100 up to 1000 ... | 0.03s 100 | 0.03s 200 | 0.04s 300 | 0.06s 400 | 0.08s 500 | 0.08s 600 | 0.09s 700 | 0.12s 800 | 0.12s 900 | 0.16s 1000 0.16s Done: 1001 results
OK, not too bad, but note that it takes a good fraction of a second to get just 1000 results.
Now let us go for all of them by the thousand.
S.count(progress=1000, limit=None)
0.00s Counting results per 1000 ... | 0.15s 1000 | 0.23s 2000 | 0.32s 3000 | 0.40s 4000 | 0.50s 5000 | 0.68s 6000 | 1.07s 7000 1.27s Done: 7593 results
See? This is substantial work.
A.table(S.fetch(limit=5))
n | p | verse | clause | phrase | word | word | phrase | word |
---|---|---|---|---|---|---|---|---|
1 | Genesis 2:25 | וַיִּֽהְי֤וּ שְׁנֵיהֶם֙ עֲרוּמִּ֔ים הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו | שְׁנֵיהֶם֙ הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו | שְׁנֵיהֶם֙ | הָֽ | עֲרוּמִּ֔ים | עֲרוּמִּ֔ים | |
2 | Genesis 2:25 | וַיִּֽהְי֤וּ שְׁנֵיהֶם֙ עֲרוּמִּ֔ים הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו | שְׁנֵיהֶם֙ הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו | שְׁנֵיהֶם֙ | אָדָ֖ם | עֲרוּמִּ֔ים | עֲרוּמִּ֔ים | |
3 | Genesis 2:25 | וַיִּֽהְי֤וּ שְׁנֵיהֶם֙ עֲרוּמִּ֔ים הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו | שְׁנֵיהֶם֙ הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו | שְׁנֵיהֶם֙ | וְ | עֲרוּמִּ֔ים | עֲרוּמִּ֔ים | |
4 | Genesis 2:25 | וַיִּֽהְי֤וּ שְׁנֵיהֶם֙ עֲרוּמִּ֔ים הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו | שְׁנֵיהֶם֙ הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו | שְׁנֵיהֶם֙ | אִשְׁתֹּ֑ו | עֲרוּמִּ֔ים | עֲרוּמִּ֔ים | |
5 | Genesis 4:4 | וְהֶ֨בֶל הֵבִ֥יא גַם־ה֛וּא מִבְּכֹרֹ֥ות צֹאנֹ֖ו וּמֵֽחֶלְבֵהֶ֑ן | הֶ֨בֶל גַם־ה֛וּא | הֶ֨בֶל | גַם־ | הֵבִ֥יא | הֵבִ֥יא |
As a check, here is some code that looks for basically the same phenomenon: a phrase within the gap of another phrase. It does not use search, and it gets a bit more focused results, in half the time compared to the search with the template.
Hint
If you are comfortable with programming, and what you look for is fairly generic, you may be better off without search, provided you can translate your insight into the data into an effective procedure within Text-Fabric. But wait till we are completely done with this example!
TF.indent(reset=True)
TF.info("Getting gapped phrases")
results = []
for v in F.otype.s("verse"):
for c in L.d(v, otype="clause"):
ps = L.d(c, otype="phrase")
first = {}
last = {}
slots = {}
# make index of phrase boundaries
for p in ps:
words = L.d(p, otype="word")
first[p] = words[0]
last[p] = words[-1]
slots[p] = set(words)
for p1 in ps:
for p2 in ps:
if p2 < p1:
continue
if len(slots[p1] & slots[p2]) != 0:
continue
if first[p1] < first[p2] and last[p2] < last[p1]:
results.append(
(v, c, p1, p2, first[p1], first[p2], last[p2], last[p1])
)
TF.info("{} results".format(len(results)))
0.00s Getting gapped phrases 0.79s 368 results
We can use the pretty printing of `A.table()` and `A.show()` here as well, even though we have not used search!
Note that you can show the node numbers. In this case it helps to see where the gaps are.
A.table(results, withNodes=True, end=5)
A.show(results, start=1, end=1)
n | p | verse | clause | phrase | phrase | word | word | word | word |
---|---|---|---|---|---|---|---|---|---|
1 | Genesis 2:25 | 1414444 | 427773 וַיִּֽהְי֤וּ 652217 1159 שְׁנֵיהֶם֙ 652218 1160 עֲרוּמִּ֔ים 652217 הָֽאָדָ֖ם וְ1164 אִשְׁתֹּ֑ו | 652217 1159 שְׁנֵיהֶם֙ 652217 הָֽאָדָ֖ם וְ1164 אִשְׁתֹּ֑ו | 652218 1160 עֲרוּמִּ֔ים | 1159 שְׁנֵיהֶם֙ | 1160 עֲרוּמִּ֔ים | 1160 עֲרוּמִּ֔ים | 1164 אִשְׁתֹּ֑ו |
2 | Genesis 4:4 | 1414472 | 427895 וְ652574 1720 הֶ֨בֶל 652575 1721 הֵבִ֥יא 652574 גַם־1723 ה֛וּא מִבְּכֹרֹ֥ות צֹאנֹ֖ו וּמֵֽחֶלְבֵהֶ֑ן | 652574 1720 הֶ֨בֶל 652574 גַם־1723 ה֛וּא | 652575 1721 הֵבִ֥יא | 1720 הֶ֨בֶל | 1721 הֵבִ֥יא | 1721 הֵבִ֥יא | 1723 ה֛וּא |
3 | Genesis 10:21 | 1414644 | 428392 654172 4819 גַּם־ה֑וּא 654173 4821 אֲבִי֙ כָּל־בְּנֵי־4824 עֵ֔בֶר 654172 אֲחִ֖י יֶ֥פֶת הַ4828 גָּדֹֽול׃ | 654172 4819 גַּם־ה֑וּא 654172 אֲחִ֖י יֶ֥פֶת הַ4828 גָּדֹֽול׃ | 654173 4821 אֲבִי֙ כָּל־בְּנֵי־4824 עֵ֔בֶר | 4819 גַּם־ | 4821 אֲבִי֙ | 4824 עֵ֔בֶר | 4828 גָּדֹֽול׃ |
4 | Genesis 12:17 | 1414704 | 428575 וַיְנַגַּ֨ע יְהוָ֧ה׀ 654748 5803 אֶת־פַּרְעֹ֛ה 654749 5805 נְגָעִ֥ים 5806 גְּדֹלִ֖ים 654748 וְאֶת־5809 בֵּיתֹ֑ו עַל־דְּבַ֥ר שָׂרַ֖י אֵ֥שֶׁת אַבְרָֽם׃ | 654748 5803 אֶת־פַּרְעֹ֛ה 654748 וְאֶת־5809 בֵּיתֹ֑ו | 654749 5805 נְגָעִ֥ים 5806 גְּדֹלִ֖ים | 5803 אֶת־ | 5805 נְגָעִ֥ים | 5806 גְּדֹלִ֖ים | 5809 בֵּיתֹ֑ו |
5 | Genesis 13:1 | 1414708 | 428591 וַיַּעַל֩ 654795 5868 אַבְרָ֨ם 654796 5869 מִ5870 מִּצְרַ֜יִם 654795 ה֠וּא וְאִשְׁתֹּ֧ו וְ5875 כָל־428591 הַנֶּֽגְבָּה׃ | 654795 5868 אַבְרָ֨ם 654795 ה֠וּא וְאִשְׁתֹּ֧ו וְ5875 כָל־ | 654796 5869 מִ5870 מִּצְרַ֜יִם | 5868 אַבְרָ֨ם | 5869 מִ | 5870 מִּצְרַ֜יִם | 5875 כָל־ |
result 1
NB: Gaps are a tricky phenomenon. In the notebook on gaps we will deal with them cruelly.
Here is an example by Yanniek van der Schans (2018-09-21).
query = """
c:clause
  PreGap:phrase_atom
  LastPhrase:phrase_atom
  :=

Gap:clause_atom
  :: word

PreGap < Gap
Gap < LastPhrase
c || Gap
"""
Here are the current settings of the performance parameters:
S.tweakPerformance()
Performance parameters, current values: tryLimitFrom = 40 tryLimitTo = 40 yarnRatio = 1.25
S.study(query)
S.showPlan(details=True)
0.00s Checking search template ... 0.00s Setting up search space for 5 objects ... 0.13s Constraining search space with 8 relations ... 0.29s 2 edges thinned 0.29s Setting up retrieval plan with strategy small_choice_multi ... 0.30s Ready to deliver results from 454184 nodes Iterate over S.fetch() to get the results See S.showPlan() to interpret the results Search with 5 objects and 8 relations Results are instantiations of the following objects: node 0-clause 88131 choices node 1-phrase_atom 267532 choices node 2-phrase_atom 88131 choices node 3-clause_atom 5195 choices node 4-word 5195 choices Performance parameters: yarnRatio = 1.25 tryLimitFrom = 40 tryLimitTo = 40 Instantiations are computed along the following relations: node 3-clause_atom 5195 choices edge 3-clause_atom [[ 4-word 1.0 choices edge 3-clause_atom :: 4-word 0 choices edge 3-clause_atom < 2-phrase_atom 44065.5 choices edge 2-phrase_atom := 0-clause 1.0 choices (thinned) edge 2-phrase_atom ]] 0-clause 0 choices edge 0-clause || 3-clause_atom 0 choices edge 0-clause [[ 1-phrase_atom 2.7 choices edge 1-phrase_atom < 3-clause_atom 0 choices 0.31s The results are connected to the original search template as follows: 0 1 R0 c:clause 2 R1 PreGap:phrase_atom 3 R2 LastPhrase:phrase_atom 4 := 5 6 R3 Gap:clause_atom 7 R4 :: word 8 9 PreGap < Gap 10 Gap < LastPhrase 11 c || Gap 12
S.count(progress=1, limit=3)
0.00s Counting results per 1 up to 3 ... | 0.00s 1 | 0.00s 2 | 1.62s 3 3.32s Done: 4 results
Can we do better?
The performance parameter `yarnRatio` can be used to increase the amount of pre-processing,
and we can increase the number of random samples that are taken by means of
`tryLimitFrom` and `tryLimitTo`.
We start with increasing the amount of up-front edge-spinning.
S.tweakPerformance(yarnRatio=0.2, tryLimitFrom=10000, tryLimitTo=10000)
Performance parameters, current values: tryLimitFrom = 10000 tryLimitTo = 10000 yarnRatio = 0.2
S.study(query)
S.showPlan(details=True)
0.00s Checking search template ... 0.00s Setting up search space for 5 objects ... 0.12s Constraining search space with 8 relations ... 0.41s 2 edges thinned 0.41s Setting up retrieval plan with strategy small_choice_multi ... 0.50s Ready to deliver results from 454184 nodes Iterate over S.fetch() to get the results See S.showPlan() to interpret the results Search with 5 objects and 8 relations Results are instantiations of the following objects: node 0-clause 88131 choices node 1-phrase_atom 267532 choices node 2-phrase_atom 88131 choices node 3-clause_atom 5195 choices node 4-word 5195 choices Performance parameters: yarnRatio = 0.2 tryLimitFrom = 10000 tryLimitTo = 10000 Instantiations are computed along the following relations: node 3-clause_atom 5195 choices edge 3-clause_atom [[ 4-word 1.0 choices edge 3-clause_atom :: 4-word 0 choices edge 3-clause_atom < 2-phrase_atom 44065.5 choices edge 2-phrase_atom := 0-clause 1.0 choices (thinned) edge 2-phrase_atom ]] 0-clause 0 choices edge 0-clause || 3-clause_atom 0 choices edge 0-clause [[ 1-phrase_atom 3.0 choices edge 1-phrase_atom < 3-clause_atom 0 choices 0.50s The results are connected to the original search template as follows: 0 1 R0 c:clause 2 R1 PreGap:phrase_atom 3 R2 LastPhrase:phrase_atom 4 := 5 6 R3 Gap:clause_atom 7 R4 :: word 8 9 PreGap < Gap 10 Gap < LastPhrase 11 c || Gap 12
It seems to be the same plan.
S.count(progress=1, limit=3)
0.00s Counting results per 1 up to 3 ... | 0.00s 1 | 0.00s 2 | 1.58s 3 3.29s Done: 4 results
No improvement.
What if we decrease the amount of edge spinning?
S.tweakPerformance(yarnRatio=5, tryLimitFrom=10000, tryLimitTo=10000)
Performance parameters, current values: tryLimitFrom = 10000 tryLimitTo = 10000 yarnRatio = 5
S.study(query)
S.showPlan(details=True)
0.00s Checking search template ... 0.00s Setting up search space for 5 objects ... 0.14s Constraining search space with 8 relations ... 0.32s 2 edges thinned 0.32s Setting up retrieval plan with strategy small_choice_multi ... 0.41s Ready to deliver results from 454184 nodes Iterate over S.fetch() to get the results See S.showPlan() to interpret the results Search with 5 objects and 8 relations Results are instantiations of the following objects: node 0-clause 88131 choices node 1-phrase_atom 267532 choices node 2-phrase_atom 88131 choices node 3-clause_atom 5195 choices node 4-word 5195 choices Performance parameters: yarnRatio = 5 tryLimitFrom = 10000 tryLimitTo = 10000 Instantiations are computed along the following relations: node 3-clause_atom 5195 choices edge 3-clause_atom [[ 4-word 1.0 choices edge 3-clause_atom :: 4-word 0 choices edge 3-clause_atom < 2-phrase_atom 44065.5 choices edge 2-phrase_atom := 0-clause 1.0 choices (thinned) edge 2-phrase_atom ]] 0-clause 0 choices edge 0-clause || 3-clause_atom 0 choices edge 0-clause [[ 1-phrase_atom 3.0 choices edge 1-phrase_atom < 3-clause_atom 0 choices 0.42s The results are connected to the original search template as follows: 0 1 R0 c:clause 2 R1 PreGap:phrase_atom 3 R2 LastPhrase:phrase_atom 4 := 5 6 R3 Gap:clause_atom 7 R4 :: word 8 9 PreGap < Gap 10 Gap < LastPhrase 11 c || Gap 12
S.count(progress=1, limit=3)
0.00s Counting results per 1 up to 3 ... | 0.00s 1 | 0.00s 2 | 1.61s 3 3.33s Done: 4 results
Again, no improvement.
We'll look for queries where the parameters matter more in the future.
Here is how to reset the performance parameters:
S.tweakPerformance(yarnRatio=None, tryLimitFrom=None, tryLimitTo=None)
Performance parameters, current values: tryLimitFrom = 40 tryLimitTo = 40 yarnRatio = 1.25
Further chapters: advanced, sets, relations, quantifiers, from MQL, rough.
You have seen cases where the implementation is to blame.
Now I want to point to gaps in your understanding.
CC-BY Dirk Roorda