The source data for the conversion are the LowFat XML tree files representing the macula-greek version of the Nestle 1904 Greek New Testament. The most recent source data can be found on GitHub: https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat. Attribution: "MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/".
The production of the Text-Fabric files consists of two steps: first the creation of pickle files (part 1), then the actual Text-Fabric creation process (part 2). Both steps are independent, so one can also start directly with part 2 by using the pickle files as input.
Be advised that this Text-Fabric version is a test version (proof of concept) and requires further fine-tuning, especially with regard to the nomenclature and presentation of (sub)phrases and clauses.
This script harvests all information from the LowFat tree data (XML nodes), puts it into a pandas DataFrame and stores the result per book in a pickle file. Note: pickling (in Python) is serialising an object into a disk file (or buffer).
In the context of this script, 'leaf' refers to the nodes containing the Greek word as data, which happen to be the nodes without any children (hence the analogy with the leaves of a tree). These leaves can also be referred to as 'terminal nodes'. Further, Parent1 is the leaf's parent, Parent2 is Parent1's parent, etc.
For a full description of the source data see the document 'MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf'.
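To illustrate what pickling means in practice, here is a minimal sketch (with a hypothetical two-row DataFrame and file name) of serialising a pandas DataFrame to disk and reading it back:

import pandas as pd
import pickle

demo = pd.DataFrame({'word': ['Βίβλος', 'γενέσεως'], 'monad': [1, 2]})
with open('demo.pkl', 'wb') as handle:    # serialise ('pickle') the DataFrame into a disk file
    pickle.dump(demo, handle)
with open('demo.pkl', 'rb') as handle:    # read it back ('unpickle')
    restored = pickle.load(handle)
print(restored.equals(demo))              # True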
import pandas as pd
import sys
import os
import time
import pickle
import re #regular expressions
from os import listdir
from os.path import isfile, join
import xml.etree.ElementTree as ET
Change BaseDir, XmlDir and PklDir to match the location of the data and the conventions of the OS used.
BaseDir = 'C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\'
XmlDir = BaseDir+'xml\\'
PklDir = BaseDir+'pkl\\'
XlsxDir = BaseDir+'xlsx\\'
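The note below assumes the output directories already exist before running this part; a small convenience sketch (not part of the original workflow) to create the pickle and xlsx directories if needed:

import os
for directory in (PklDir, XlsxDir):
    os.makedirs(directory, exist_ok=True)   # create the directory if it does not exist yet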
# note: create the output directories prior to running this part
# key: filename, [0]=book_long, [1]=book_num, [2]=book_short
bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],
'02-mark': ['Mark', '2', 'Mark'],
'03-luke': ['Luke', '3', 'Luke'],
'04-john': ['John', '4', 'John'],
'05-acts': ['Acts', '5', 'Acts'],
'06-romans': ['Romans', '6', 'Rom'],
'07-1corinthians': ['I_Corinthians', '7', '1Cor'],
'08-2corinthians': ['II_Corinthians', '8', '2Cor'],
'09-galatians': ['Galatians', '9', 'Gal'],
'10-ephesians': ['Ephesians', '10', 'Eph'],
'11-philippians': ['Philippians', '11', 'Phil'],
'12-colossians': ['Colossians', '12', 'Col'],
'13-1thessalonians':['I_Thessalonians', '13', '1Thess'],
'14-2thessalonians':['II_Thessalonians','14', '2Thess'],
'15-1timothy': ['I_Timothy', '15', '1Tim'],
'16-2timothy': ['II_Timothy', '16', '2Tim'],
'17-titus': ['Titus', '17', 'Titus'],
'18-philemon': ['Philemon', '18', 'Phlm'],
'19-hebrews': ['Hebrews', '19', 'Heb'],
'20-james': ['James', '20', 'Jas'],
'21-1peter': ['I_Peter', '21', '1Pet'],
'22-2peter': ['II_Peter', '22', '2Pet'],
'23-1john': ['I_John', '23', '1John'],
'24-2john': ['II_John', '24', '2John'],
'25-3john': ['III_John', '25', '3John'],
'26-jude': ['Jude', '26', 'Jude'],
'27-revelation': ['Revelation', '27', 'Rev']}
bo2book_ = {'25-3john': ['III_John', '25', '3John']}  # reduced dictionary (only 3 John), e.g. for quick test runs
In order to traverse from the 'leaves' (terminal nodes) up to the root of the tree, it is necessary to add to each node information pointing to its parent.
(concept taken from https://stackoverflow.com/questions/2170610/access-elementtree-node-parent-node)
def addParentInfo(et):
    for child in et:
        child.attrib['parent'] = et
        addParentInfo(child)

def getParent(et):
    if 'parent' in et.attrib:
        return et.attrib['parent']
    else:
        return None
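As a usage illustration, the two helpers above can be combined to climb from a leaf up to the root; a minimal sketch with a hypothetical XML fragment (the real data comes from the LowFat tree files):

example = ET.fromstring('<sentence><wg><w ref="MAT 1:1!1">Βίβλος</w></wg></sentence>')
addParentInfo(example)            # store a reference to the parent on every node
leaf = example.find('.//w')       # pick a leaf ('w') node
node = getParent(leaf)
while node is not None:           # climb until getParent() returns None at the root
    print(node.tag)               # prints: wg, then sentence
    node = getParent(node)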
# set some globals
monad=1
CollectedItems= 0

# process books in order
for bo, bookinfo in bo2book.items():
    CollectedItems=0
    SentenceNumber=0
    WordGroupNumber=0
    full_df=pd.DataFrame({})
    book_long=bookinfo[0]
    booknum=bookinfo[1]
    book_short=bookinfo[2]
    InputFile = os.path.join(XmlDir, f'{bo}.xml')
    OutputFile = os.path.join(PklDir, f'{bo}.pkl')
    print(f'Processing {book_long} at {InputFile}')

    # send xml document to parsing process
    tree = ET.parse(InputFile)
    # Now add all the parent info to the nodes in the xtree [important!]
    addParentInfo(tree.getroot())
    start_time = time.time()

    # walk over all the XML data
    for elem in tree.iter():
        if elem.tag == 'sentence':
            # add running number to 'sentence' tags
            SentenceNumber+=1
            elem.set('SN', SentenceNumber)
        if elem.tag == 'wg':
            # add running number to 'wg' tags
            WordGroupNumber+=1
            elem.set('WGN', WordGroupNumber)
        if elem.tag == 'w':
            # all nodes containing words are tagged with 'w'
            # show progress on screen
            CollectedItems+=1
            if (CollectedItems%100==0): print (".",end='')
            # Leafref will contain a list with book, chapter, verse and word number
            Leafref = re.sub(r'[!: ]'," ", elem.attrib.get('ref')).split()
            # push value for monad to element tree
            elem.set('monad', monad)
            monad+=1
            # add some important computed data to the leaf
            elem.set('LeafName', elem.tag)
            elem.set('word', elem.text)
            elem.set('book_long', book_long)
            elem.set('booknum', int(booknum))
            elem.set('book_short', book_short)
            elem.set('chapter', int(Leafref[1]))
            elem.set('verse', int(Leafref[2]))
            # the following code will trace the parents up the tree and store the attributes found
            parentnode=getParent(elem)
            index=0
            while (parentnode):
                index+=1
                elem.set('Parent{}Name'.format(index), parentnode.tag)
                elem.set('Parent{}Type'.format(index), parentnode.attrib.get('type'))
                elem.set('Parent{}Appos'.format(index), parentnode.attrib.get('appositioncontainer'))
                elem.set('Parent{}Class'.format(index), parentnode.attrib.get('class'))
                elem.set('Parent{}Rule'.format(index), parentnode.attrib.get('rule'))
                elem.set('Parent{}Role'.format(index), parentnode.attrib.get('role'))
                elem.set('Parent{}Cltype'.format(index), parentnode.attrib.get('cltype'))
                elem.set('Parent{}Unit'.format(index), parentnode.attrib.get('unit'))
                elem.set('Parent{}Junction'.format(index), parentnode.attrib.get('junction'))
                elem.set('Parent{}SN'.format(index), parentnode.attrib.get('SN'))
                elem.set('Parent{}WGN'.format(index), parentnode.attrib.get('WGN'))
                currentnode=parentnode
                parentnode=getParent(currentnode)
            elem.set('parents', int(index))
            # this will push all elements found in the tree into a DataFrame
            df=pd.DataFrame(elem.attrib, index={monad})
            full_df=pd.concat([full_df,df])

    # store the resulting DataFrame per book into a pickle file for further processing
    full_df = full_df.convert_dtypes(convert_string=True)
    # sort by xml:id
    sortkey='{http://www.w3.org/XML/1998/namespace}id'
    full_df.rename(columns={sortkey: 'id'}, inplace=True)
    full_df = full_df.sort_values(by=['id'])
    output = open(r"{}".format(OutputFile), 'wb')
    pickle.dump(full_df, output)
    output.close()
    print("\nFound ",CollectedItems, " items in %s seconds\n" % (time.time() - start_time))
Processing Matthew at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\01-matthew.xml ...................................................................................................................................................................................... Found 18299 items in 337.3681836128235 seconds Processing Mark at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\02-mark.xml ................................................................................................................ Found 11277 items in 144.04719877243042 seconds Processing Luke at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\03-luke.xml .................................................................................................................................................................................................. Found 19456 items in 1501.197922706604 seconds Processing John at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\04-john.xml ............................................................................................................................................................ Found 15643 items in 237.1071105003357 seconds Processing Acts at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\05-acts.xml ....................................................................................................................................................................................... Found 18393 items in 384.3644151687622 seconds Processing Romans at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\06-romans.xml ....................................................................... Found 7100 items in 71.03568935394287 seconds Processing I_Corinthians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\07-1corinthians.xml .................................................................... Found 6820 items in 58.47511959075928 seconds Processing II_Corinthians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\08-2corinthians.xml ............................................ Found 4469 items in 31.848721027374268 seconds Processing Galatians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\09-galatians.xml ...................... Found 2228 items in 13.850211143493652 seconds Processing Ephesians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\10-ephesians.xml ........................ Found 2419 items in 17.529520511627197 seconds Processing Philippians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\11-philippians.xml ................ Found 1630 items in 9.271572589874268 seconds Processing Colossians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\12-colossians.xml ............... Found 1575 items in 10.389309883117676 seconds Processing I_Thessalonians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\13-1thessalonians.xml .............. Found 1473 items in 8.413437604904175 seconds Processing II_Thessalonians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\14-2thessalonians.xml ........ Found 822 items in 4.284915447235107 seconds Processing I_Timothy at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\15-1timothy.xml ............... Found 1588 items in 10.419771671295166 seconds Processing II_Timothy at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\16-2timothy.xml ............ 
Found 1237 items in 7.126454591751099 seconds Processing Titus at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\17-titus.xml ...... Found 658 items in 3.1472580432891846 seconds Processing Philemon at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\18-philemon.xml ... Found 335 items in 1.3175146579742432 seconds Processing Hebrews at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\19-hebrews.xml ................................................. Found 4955 items in 44.31139326095581 seconds Processing James at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\20-james.xml ................. Found 1739 items in 8.570415496826172 seconds Processing I_Peter at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\21-1peter.xml ................ Found 1676 items in 10.489561557769775 seconds Processing II_Peter at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\22-2peter.xml .......... Found 1098 items in 6.005697250366211 seconds Processing I_John at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\23-1john.xml ..................... Found 2136 items in 10.843079566955566 seconds Processing II_John at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\24-2john.xml .. Found 245 items in 0.9535031318664551 seconds Processing III_John at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\25-3john.xml .. Found 219 items in 1.0913233757019043 seconds Processing Jude at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\26-jude.xml .... Found 457 items in 1.8929190635681152 seconds Processing Revelation at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\27-revelation.xml .................................................................................................. Found 9832 items in 125.92533278465271 seconds
# just dump some things to test the result
for bo in bo2book:
    '''
    load all data into a dataframe
    process books in order (bookinfo is a list!)
    '''
    InputFile = os.path.join(PklDir, f'{bo}.pkl')
    print(f'\tloading {InputFile}...')
    pkl_file = open(InputFile, 'rb')
    df = pickle.load(pkl_file)
    pkl_file.close()

    # not sure if this is needed
    # fill dictionary of column names for this book
    IndexDict = {} # init an empty dictionary
    ItemsInRow=1
    for itemname in df.columns.to_list():
        IndexDict.update({'i_{}'.format(itemname): ItemsInRow})
        print (itemname)
        ItemsInRow+=1
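The purpose of IndexDict becomes clear when iterating over a DataFrame with itertuples(): the dictionary maps a column name to the positional index of that field in each row tuple. A minimal sketch with a hypothetical two-column DataFrame:

demo = pd.DataFrame({'word': ['Βίβλος', 'γενέσεως'], 'gloss': ['book', 'of origin']})
IndexDict = {}
ItemsInRow = 1
for itemname in demo.columns.to_list():
    IndexDict.update({'i_{}'.format(itemname): ItemsInRow})
    ItemsInRow += 1
for row in demo.itertuples():     # row[0] is the index, so the first column is at position 1
    print(row[IndexDict.get('i_word')], '-', row[IndexDict.get('i_gloss')])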
This script creates the Text-Fabric files by recursively calling the TF walker function. API info: https://annotation.github.io/text-fabric/tf/convert/walker.html
The pickle files created in step 1 are stored at a GitHub location (T.B.D.).
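Before the full conversion below, a minimal sketch of the walker pattern may be helpful (a toy corpus with hypothetical data, not the actual director used later): the director function fires node, feature, slot and terminate events, and cv.walk() turns these events into a Text-Fabric dataset.

from tf.fabric import Fabric
from tf.convert.walker import CV

TF_demo = Fabric(locations='tf_demo')        # hypothetical output location
cv_demo = CV(TF_demo)

def demo_director(cv):
    this_book = cv.node('book')              # open a (non-slot) book node
    cv.feature(this_book, book='Demo')
    for verse_num, words in enumerate((['Βίβλος', 'γενέσεως'], ['Ἰησοῦ', 'Χριστοῦ']), start=1):
        this_verse = cv.node('verse')        # open a verse node
        cv.feature(this_verse, verse=verse_num)
        for word in words:
            this_word = cv.slot()            # every word is a slot
            cv.feature(this_word, word=word, after=' ')
            cv.terminate(this_word)
        cv.terminate(this_verse)             # close the verse node
    cv.terminate(this_book)                  # close the book node

good = cv_demo.walk(
    demo_director,
    'word',                                  # the slot type
    otext={'fmt:text-orig-full': '{word}{after}',
           'sectionTypes': 'book,verse',
           'sectionFeatures': 'book,verse'},
    generic={'Name': 'walker demo'},
    intFeatures={'verse'},
    featureMeta={'book': {'description': 'book name'},
                 'verse': {'description': 'verse number'},
                 'word': {'description': 'word'},
                 'after': {'description': 'material after the word'}},
)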
import pandas as pd
import os
import re
import gc
from tf.fabric import Fabric
from tf.convert.walker import CV
from tf.parameters import VERSION
from datetime import date
import pickle
BaseDir = 'C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\'
XmlDir = BaseDir+'xml\\'
PklDir = BaseDir+'pkl\\'
XlsxDir = BaseDir+'xlsx\\'
# key: filename, [0]=book_long, [1]=book_num, [2]=book_short
bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],
'02-mark': ['Mark', '2', 'Mark'],
'03-luke': ['Luke', '3', 'Luke'],
'04-john': ['John', '4', 'John'],
'05-acts': ['Acts', '5', 'Acts'],
'06-romans': ['Romans', '6', 'Rom'],
'07-1corinthians': ['I_Corinthians', '7', '1Cor'],
'08-2corinthians': ['II_Corinthians', '8', '2Cor'],
'09-galatians': ['Galatians', '9', 'Gal'],
'10-ephesians': ['Ephesians', '10', 'Eph'],
'11-philippians': ['Philippians', '11', 'Phil'],
'12-colossians': ['Colossians', '12', 'Col'],
'13-1thessalonians':['I_Thessalonians', '13', '1Thess'],
'14-2thessalonians':['II_Thessalonians','14', '2Thess'],
'15-1timothy': ['I_Timothy', '15', '1Tim'],
'16-2timothy': ['II_Timothy', '16', '2Tim'],
'17-titus': ['Titus', '17', 'Titus'],
'18-philemon': ['Philemon', '18', 'Phlm'],
'19-hebrews': ['Hebrews', '19', 'Heb'],
'20-james': ['James', '20', 'Jas'],
'21-1peter': ['I_Peter', '21', '1Pet'],
'22-2peter': ['II_Peter', '22', '2Pet'],
'23-1john': ['I_John', '23', '1John'],
'24-2john': ['II_John', '24', '2John'],
'25-3john': ['III_John', '25', '3John'],
'26-jude': ['Jude', '26', 'Jude'],
'27-revelation': ['Revelation', '27', 'Rev']}
bo2book_ = {'26-jude': ['Jude', '26', 'Jude']}  # reduced dictionary (only Jude), e.g. for quick test runs
# test: sorting the data
import openpyxl
import pickle

#if True:
for bo in bo2book:
    '''
    load all data into a dataframe
    process books in order (bookinfo is a list!)
    '''
    InputFile = os.path.join(PklDir, f'{bo}.pkl')
    #InputFile = os.path.join(PklDir, '01-matthew.pkl')
    print(f'\tloading {InputFile}...')
    pkl_file = open(InputFile, 'rb')
    df = pickle.load(pkl_file)
    pkl_file.close()

    # not sure if this is needed
    # fill dictionary of column names for this book
    IndexDict = {} # init an empty dictionary
    ItemsInRow=1
    for itemname in df.columns.to_list():
        IndexDict.update({'i_{}'.format(itemname): ItemsInRow})
        ItemsInRow+=1
        #print(itemname)

    # sort by id
    #print(df)
    df_sorted=df.sort_values(by=['id'])
    df_sorted.to_excel(os.path.join(XlsxDir, f'{bo}.xlsx'), index=False)
loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\26-jude.pkl...
API info: https://annotation.github.io/text-fabric/tf/convert/walker.html
The logic of interpreting the data is included in the director function.
TF = Fabric(locations=BaseDir, silent=False)
cv = CV(TF)
version = "0.1.4 (visualize all wordgroups - recursive clause generation)"
def sanitize(input):
    # return an empty string for missing values (NaN floats or None), otherwise return the value unchanged
    if isinstance(input, float): return ''
    if isinstance(input, type(None)): return ''
    else: return (input)
def director(cv):
NoneType = type(None) # needed as tool to validate certain data
prev_book = "Matthew" # start at first book
IndexDict = {} # init an empty dictionary
Arrays2Dump=200
DumpedArrays=0
ClauseDict={} # init a dummy dictionary
PrevClauseSet = []
PrevClauseList = []
for bo,bookinfo in bo2book.items():
'''
load all data into a dataframe
process books in order (bookinfo is a list!)
'''
book=bookinfo[0]
booknum=int(bookinfo[1])
book_short=bookinfo[2]
book_loc = os.path.join(PklDir, f'{bo}.pkl')
print(f'\tWe are loading {book_loc}...')
pkl_file = open(book_loc, 'rb')
df_unsorted = pickle.load(pkl_file)
pkl_file.close()
df=df_unsorted.sort_values(by=['id'])
FoundWords=0
phrasefunction='TBD'
phrasefunction_long='TBD'
this_clausetype="unknown" #just signal a not found case
this_clauserule="unknown"
this_clauseconst="unknown"
prev_clausetype = "unknown"
this_phrasetype="unknown" #just signal a not found case
this_phrasefunction="unknown"
this_phrasefunction_long="unknown"
this_role="unknown"
ClausesSuspended = False
PreSuspendClauseList = {}
phrase_resume = False
ClosedPhrase=''
PhraseToRestoreTo=''
phrase_close=False
prev_chapter = int(1) # start at 1
prev_verse = int(1) # start at 1
prev_sentence = int(1) # start at 1
prev_clause = int(1) # start at 1
prev_phrase = int(1) # start at 1
prev_role = int(1) # start at 1
# ClauseArrayNumbers contains the active clause numbers (wg in the LowFatTree data)
# reset/load the following initial variables (we are at the start of a new book)
sentence_track = clause_track = clauseconst_track = phrase_track = 1
sentence_done = clause_done = clauseconst_done = phrase_done = verse_done = chapter_done = book_done = False
PhraseInterupted = False
wrdnum = 0 # start at 0
# fill dictionary of column names for this book
ItemsInRow=1
for itemname in df.columns.to_list():
IndexDict.update({'i_{}'.format(itemname): ItemsInRow})
ItemsInRow+=1
df.sort_values(by=['id'])
'''
Walks through the texts and triggers
slot and node creation events.
'''
# iterate through words and construct objects
for row in df.itertuples():
wrdnum += 1
FoundWords +=1
'''
First get all the relevant information from the dataframe
'''
# get number of parent nodes
parents = row[IndexDict.get("i_parents")]
# get chapter and verse from the data
chapter = sanitize(row[IndexDict.get("i_chapter")])
verse = sanitize(row[IndexDict.get("i_verse")])
# experimental (decoding the clause structure)
ClauseList=[] # stores current active clauses (wg numbers)
if True:
for i in range(parents-1,1,-1): # reversed iteration
item = IndexDict.get("i_Parent{}Class".format(i))
#if row[item]=="cl":
if True:
WGN=row[IndexDict.get("i_Parent{}WGN".format(i))]
ClauseList.append(WGN)
ClauseDict[(WGN,0)]=WGN
ClauseDict[(WGN,1)]=sanitize(row[IndexDict.get("i_Parent{}Rule".format(i))])
ClauseDict[(WGN,2)]=sanitize(row[IndexDict.get("i_Parent{}Cltype".format(i))])
ClauseDict[(WGN,3)]=sanitize(row[IndexDict.get("i_Parent{}Junction".format(i))])
ClauseDict[(WGN,6)]=sanitize(row[IndexDict.get("i_Parent{}Class".format(i))])
if not PrevClauseList==ClauseList:
#print ('\nPrevClauseDict ={',end='')
#for item in PrevClauseDict:
# print (' ',item,'=',PrevClauseDict[item],end='')
#print ('}')
#print('\nClauseList=',ClauseList, 'PrevClauseList=',PrevClauseList, 'ClausesSuspended=',ClausesSuspended,end='')
#print('\nAction(s) = ',end=' ')
ResumePhrase=False
if ClausesSuspended==True:
for item in ClauseList:
try:
ItemToResume=ClauseDict[(item,4)]
#print (' resume: '+str(item),end=' ')
cv.resume(ClauseDict[(item,4)])
#ResumePhrase=True
PhraseToRestoreTo=prev_phrasefunction
except:
#print("This is the odd condition for wg=",item)
# CREATE CASE
#print ('\n create: '+str(item),end=' ')
ClauseDict[(item,4)]=cv.node('clause')
ClauseDict[(item,5)]=clause_track
clause_track += 1
#print ('link=',ClauseDict[(item,4)])
if not bool(ClauseList): # this means it is a part outside a clause
for item in PrevClauseList:
# SUSPEND CASE
#print (' suspend: '+str(item),end=' ')
cv.feature(ClauseDict[(item,4)], clause=ClauseDict[(item,5)], wg=ClauseDict[(item,0)], junction=ClauseDict[(item,3)], clausetype=ClauseDict[(item,2)], clauserule=ClauseDict[(item,1)], wgclass=ClauseDict[(item,6)])
cv.terminate(ClauseDict[(item,4)])
ClausesSuspended= True
PreSuspendClauseList = PrevClauseList
else:
for item in PrevClauseList:
if (item not in ClauseList):
# CLOSE CASE
#print ('\n close: '+str(item),end=' ')
cv.feature(ClauseDict[(item,4)], clause=ClauseDict[(item,5)], wg=ClauseDict[(item,0)], junction=ClauseDict[(item,3)], clausetype=ClauseDict[(item,2)], clauserule=ClauseDict[(item,1)] , wgclass=ClauseDict[(item,6)])
cv.terminate(ClauseDict[item,4])
for k in range(6): del ClauseDict[item,k]
for item in ClauseList:
if (item not in PrevClauseList):
if not ClausesSuspended:
# CREATE CASE
#print ('\n create: '+str(item),end=' ')
ClauseDict[(item,4)]=cv.node('clause')
ClauseDict[(item,5)]=clause_track
clause_track += 1
#print ('link=',ClauseDict[(item,4)])
ClausesSuspended=False
PrevClauseList = ClauseList
clause_done=True
#print ('\nClauseDict=',ClauseDict,end='')
#print ('\n\n',book,chapter,":",verse,'\t',row[IndexDict.get("i_word")]+row[IndexDict.get("i_after")],end='')
#else:
#print (row[IndexDict.get("i_word")]+row[IndexDict.get("i_after")],end='')
# determine syntactic categories of clause parts. See also the description in
# "MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf" page 5&6
# (section 2.4 Syntactic Categories at Clause Level)
prev_phrasefunction=this_phrasefunction
prev_phrasefunction_long=this_phrasefunction_long
role=row[IndexDict.get("i_role")]
ValidRoles=["adv","io","o","o2","s","p","v","vc"]
this_phrasefunction=''
PhraseWGN=''
if isinstance (role,str) and role in ValidRoles:
this_phrasefunction=role
else:
for i in range(1,parents-1):
role = row[IndexDict.get("i_Parent{}Role".format(i))]
if isinstance (role,str) and role in ValidRoles:
this_phrasefunction=role
PhraseWGN=row[IndexDict.get("i_Parent{}WGN".format(i))]
break
if this_phrasefunction!=prev_phrasefunction:
#print ('\n',chapter,':',verse,' this_phrasefunction=',this_phrasefunction,' prev_phrasefunction=',prev_phrasefunction)
if phrase_close:
cv.resume(ClosedPhrase)
#print ('resume phrase',ClosedPhrase)
if this_phrasefunction!='':
phrase_done = True
else:
phrase_close = True
ClosedPhrase=this_phrase
PhraseToRestoreTo=prev_phrasefunction
if this_phrasefunction=="adv": this_phrasefunction_long='Adverbial'
elif this_phrasefunction=="io": this_phrasefunction_long='Indirect Object'
elif this_phrasefunction=="o": this_phrasefunction_long='Object'
elif this_phrasefunction=="o2": this_phrasefunction_long='Second Object'
elif this_phrasefunction=="s": this_phrasefunction_long='Subject'
elif this_phrasefunction=="p": this_phrasefunction_long='Predicate'
elif this_phrasefunction=="v": this_phrasefunction_long='Verbal'
elif this_phrasefunction=="vc": this_phrasefunction_long='Verbal Copula'
#print ('this_phrasefunction=',this_phrasefunction, end=" | ")
'''
determine if conditions are met to trigger some action
action will be executed after next word
'''
# detect book boundary
if prev_book != book:
prev_book=book
book_done = True
chapter_done = True
verse_done=True
sentence_done = True
clause_done = True
phrase_done = True
# detect chapter boundary
if prev_chapter != chapter:
chapter_done = True
verse_done=True
sentence_done = True
clause_done = True
phrase_done = True
# detect verse boundary
if prev_verse != verse:
verse_done=True
# determine syntactic categories at word level. See also the description in
# "MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf" page 6&7
# (2.2. Syntactic Categories at Word Level: Part of Speech Labels)
sp=sanitize(row[IndexDict.get("i_class")])
if sp=='adj':
sp_full='adjective'
elif sp=='adv':
sp_full='adverb'
elif sp=='conj':
sp_full='conjunction'
elif sp=='det':
sp_full='determiner'
elif sp=='intj':
sp_full='interjection'
elif sp=='noun':
sp_full='noun'
elif sp=='num':
sp_full='numeral'
elif sp=='prep':
sp_full='preposition'
elif sp=='ptcl':
sp_full='particle'
elif sp=='pron':
sp_full='pronoun'
elif sp=='verb':
sp_full='verb'
# Manage first word per book
if wrdnum==1:
prev_phrasetype=this_phrasetype
prev_phrasefunction=this_phrasefunction
prev_phrasefunction_long=this_phrasefunction_long
prev_clauserule=this_clauserule
book_done = chapter_done = verse_done = phrase_done = clause_done = sentence_done = False
# create the first set of nodes
this_book = cv.node('book')
cv.feature(this_book, book=prev_book)
this_chapter = cv.node('chapter')
this_verse = cv.node('verse')
this_sentence = cv.node('sentence')
#this_clause = cv.node('clause')
this_phrase = cv.node('phrase')
#print ('new phrase',this_phrase)
sentence_track += 1
#clause_track += 1
phrase_track += 1
'''
-- handle TF events --
Determine what actions need to be done if proper condition is met.
'''
# act upon end of phrase (close)
if phrase_done or phrase_close:
cv.feature(this_phrase, phrase=prev_phrase, phrasetype=prev_phrasetype, phrasefunction=prev_phrasefunction, phrasefunction_long=prev_phrasefunction_long,wg=PhraseWGN)
cv.terminate(this_phrase)
#print ('terminate phrase',this_phrase,':',prev_phrasefunction,'\n')
ClosedPhrase=this_phrase
PhraseToRestoreTo=prev_phrasefunction
phrase_close=False
# cv.feature(this_clause, clause=prev_clause, clausetype=prev_clausetype, clauserule=prev_clauserule)
# cv.terminate(this_clause)
# act upon end of sentence (close)
if sentence_done:
cv.feature(this_sentence, sentence=prev_sentence)
cv.terminate(this_sentence)
# act upon end of verse (close)
if verse_done:
cv.feature(this_verse, verse=prev_verse)
cv.terminate(this_verse)
prev_verse = verse
# act upon end of chapter (close)
if chapter_done:
cv.feature(this_chapter, chapter=prev_chapter)
cv.terminate(this_chapter)
prev_chapter = chapter
# act upon end of book (close and open new)
if book_done:
cv.terminate(this_book)
this_book = cv.node('book')
cv.feature(this_book, book=book)
prev_book = book
wrdnum = 1
phrase_track = 1
clause_track = 1
sentence_track = 1
book_done = False
# start of chapter (create new)
if chapter_done:
this_chapter = cv.node('chapter')
chapter_done = False
# start of verse (create new)
if verse_done:
this_verse = cv.node('verse')
verse_done = False
# start of sentence (create new)
if sentence_done:
this_sentence= cv.node('sentence')
prev_sentence = sentence_track
sentence_track += 1
sentence_done = False
# start of phrase (create new)
if phrase_done and not ResumePhrase:
this_phrase = cv.node('phrase')
#print ('new phrase',this_phrase)
prev_phrase = phrase_track
prev_phrasefunction=this_phrasefunction
prev_phrasefunction_long=this_phrasefunction_long
phrase_track += 1
phrase_done = False
# Detect boundaries of sentences
text=sanitize(row[IndexDict.get("i_after")])[-1:]
if text == "." :
sentence_done = True
#phrase_done = True
#if text == ";" or text == ",":
# phrase_done = True
'''
-- create word nodes --
'''
# some attributes are not present inside some (small) books. The following is to prevent exceptions.
degree=''
if 'i_degree' in IndexDict:
degree=sanitize(row[IndexDict.get("i_degree")])
subjref=''
if 'i_subjref' in IndexDict:
subjref=sanitize(row[IndexDict.get("i_subjref")])
#print (chapter,':',verse," ",row[IndexDict.get("i_word")],' - ',row[IndexDict.get("i_gloss")],' - ', sp,end='\n')
# make word object
this_word = cv.slot()
cv.feature(this_word,
after=sanitize(row[IndexDict.get("i_after")]),
id=sanitize(row[IndexDict.get("i_id")]),
unicode=sanitize(row[IndexDict.get("i_unicode")]),
word=sanitize(row[IndexDict.get("i_word")]),
monad=sanitize(row[IndexDict.get("i_monad")]),
orig_order=FoundWords,
book_long=sanitize(row[IndexDict.get("i_book_long")]),
booknum=booknum,
book_short=sanitize(row[IndexDict.get("i_book_short")]),
chapter=chapter,
ref=sanitize(row[IndexDict.get("i_ref")]),
sp=sanitize(sp),
sp_full=sanitize(sp_full),
verse=verse,
sentence=sanitize(prev_sentence),
clause=sanitize(prev_clause),
phrase=sanitize(prev_phrase),
normalized=sanitize(row[IndexDict.get("i_normalized")]),
morph=sanitize(row[IndexDict.get("i_morph")]),
strongs=sanitize(row[IndexDict.get("i_strong")]),
lex_dom=sanitize(row[IndexDict.get("i_domain")]),
ln=sanitize(row[IndexDict.get("i_ln")]),
gloss=sanitize(row[IndexDict.get("i_gloss")]),
gn=sanitize(row[IndexDict.get("i_gender")]),
nu=sanitize(row[IndexDict.get("i_number")]),
case=sanitize(row[IndexDict.get("i_case")]),
lemma=sanitize(row[IndexDict.get("i_lemma")]),
person=sanitize(row[IndexDict.get("i_person")]),
mood=sanitize(row[IndexDict.get("i_mood")]),
tense=sanitize(row[IndexDict.get("i_tense")]),
number=sanitize(row[IndexDict.get("i_number")]),
voice=sanitize(row[IndexDict.get("i_voice")]),
degree=degree,
type=sanitize(row[IndexDict.get("i_type")]),
reference=sanitize(row[IndexDict.get("i_ref")]),
subj_ref=subjref,
nodeID=sanitize(row[1]) #this is a fixed position.
)
cv.terminate(this_word)
'''
-- wrap up the book --
'''
# close all nodes (phrase, clause, sentence, verse, chapter and book)
cv.feature(this_phrase, phrase=phrase_track, phrasetype=prev_phrasetype,phrasefunction=prev_phrasefunction,phrasefunction_long=prev_phrasefunction_long, wg=PhraseWGN)
cv.terminate(this_phrase)
#print ('terminate phrase',this_phrase,':',prev_phrasetype,'\n')
#print (ClauseDict)
for item in ClauseList:
cv.feature(ClauseDict[(item,4)], clause=ClauseDict[(item,5)], clausetype=prev_clausetype, clauserule=prev_clauserule)
cv.terminate(ClauseDict[item,4])
cv.feature(this_sentence, sentence=prev_sentence)
cv.terminate(this_sentence)
cv.feature(this_verse, verse=prev_verse)
cv.terminate(this_verse)
cv.feature(this_chapter, chapter=prev_chapter)
cv.terminate(this_chapter)
cv.feature(this_book, book=prev_book)
cv.terminate(this_book)
# clear dataframe for this book
del df
# clear the index dictionary
IndexDict.clear()
gc.collect()
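# Side note (design alternative, not used in the director above): the long if/elif ladders that map
# the abbreviated part-of-speech and phrase-function labels to their long descriptions could also be
# written as dictionary lookups, for example:
#
#   SP_FULL = {'adj': 'adjective', 'conj': 'conjunction', 'det': 'determiner', 'intj': 'interjection',
#              'noun': 'noun', 'num': 'numeral', 'prep': 'preposition', 'ptcl': 'particle',
#              'pron': 'pronoun', 'verb': 'verb'}
#   PHRASE_FUNCTION_LONG = {'adv': 'Adverbial', 'io': 'Indirect Object', 'o': 'Object', 'o2': 'Second Object',
#                           's': 'Subject', 'p': 'Predicate', 'v': 'Verbal', 'vc': 'Verbal Copula'}
#   sp_full = SP_FULL.get(sp, '')
#   this_phrasefunction_long = PHRASE_FUNCTION_LONG.get(this_phrasefunction, '')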
'''
-- output definitions --
'''
slotType = 'word' # or whatever you choose
otext = { # dictionary of config data for sections and text formats
'fmt:text-orig-full':'{word}{after}',
'sectionTypes':'book,chapter,verse',
'sectionFeatures':'book,chapter,verse',
'structureFeatures': 'book,chapter,verse',
'structureTypes': 'book,chapter,verse',
}
# configure metadata
generic = { # dictionary of metadata meant for all features
'Name': 'Greek New Testament (NA1904)',
'Version': '1904',
'Editors': 'Nestle',
'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat',
'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)',
'Converter_author': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands',
'Converter_execution': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands',
'Convertor_source': 'https://github.com/tonyjurg/n1904_lft',
'Converter_version': '{}'.format(version),
'TextFabric version': '{}'.format(VERSION) #imported from tf.parameters
}
intFeatures = { # set of integer valued feature names
'booknum',
'chapter',
'verse',
'sentence',
'clause',
'phrase',
'orig_order',
'monad'
}
featureMeta = { # per feature dicts with metadata
'after': {'description': 'Characters (e.g. punctuation) following the word'},
'id': {'description': 'id of the word'},
'book': {'description': 'Book'},
'book_long': {'description': 'Book name (fully spelled out)'},
'booknum': {'description': 'NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},
'book_short': {'description': 'Book name (abbreviated)'},
'chapter': {'description': 'Chapter number inside book'},
'verse': {'description': 'Verse number inside chapter'},
'sentence': {'description': 'Sentence number (counted per chapter)'},
'clause': {'description': 'Clause number (counted per chapter)'},
'clausetype' : {'description': 'Clause type information (verb, verbless, elided, minor, etc.)'},
'clauserule' : {'description': 'Clause rule information '},
'phrase' : {'description': 'Phrase number (counted per book)'},
'phrasetype' : {'description': 'Phrase type information'},
'phrasefunction' : {'description': 'Phrase function (abbreviated)'},
'phrasefunction_long' : {'description': 'Phrase function (long description)'},
'orig_order': {'description': 'Word order within corpus (per book)'},
'monad':{'description': 'Monad (currently: order of words in XML tree file!)'},
'word': {'description': 'Word as it appears in the text (excl. punctuations)'},
'unicode': {'description': 'Word as it appears in the text in Unicode (incl. punctuation)'},
'ref': {'description': 'ref Id'},
'sp': {'description': 'Part of Speech (abbreviated)'},
'sp_full': {'description': 'Part of Speech (long description)'},
'normalized': {'description': 'Surface word stripped of punctuation'},
'lemma': {'description': 'Lexeme (lemma)'},
'morph': {'description': 'Morphological tag (Sandborg-Petersen morphology)'},
# see also discussion on relation between lex_dom and ln @ https://github.com/Clear-Bible/macula-greek/issues/29
'lex_dom': {'description': 'Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)'},
'ln': {'description': 'Louw-Nida lexical classification (not present everywhere?)'},
'strongs': {'description': 'Strongs number'},
'gloss': {'description': 'English gloss'},
'gn': {'description': 'Grammatical gender (Masculine, Feminine, Neuter)'},
'nu': {'description': 'Grammatical number (Singular, Plural)'},
'case': {'description': 'Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},
'person': {'description': 'Grammatical person of the verb (first, second, third)'},
'mood': {'description': 'Grammatical mood of the verb (e.g. indicative, imperative)'},
'tense': {'description': 'Grammatical tense of the verb (e.g. Present, Aorist)'},
'number': {'description': 'Grammatical number of the verb'},
'voice': {'description': 'Grammatical voice of the verb'},
'degree': {'description': 'Degree (e.g. Comparative, Superlative)'},
'type': {'description': 'Grammatical type of noun or pronoun (e.g. Common, Personal)'},
'reference': {'description': 'Reference (to nodeID in XML source data, not yet post-processed)'},
'subj_ref': {'description': 'Subject reference (to nodeID in XML source data, not yet post-processed)'},
'nodeID': {'description': 'Node ID (as in the XML source data, not yet post-processed)'},
'junction': {'description': 'Junction data related to a clause'},
'wg' : {'description': 'wg number in orig xml'},
'wgclass' : {'description': 'class of the wg'}
}
'''
-- the main function --
'''
good = cv.walk(
director,
slotType,
otext=otext,
generic=generic,
intFeatures=intFeatures,
featureMeta=featureMeta,
warn=True,
force=True
)
if good:
print ("done")
This is Text-Fabric 11.2.3 49 features found and 0 ignored 0.00s Importing data from walking through the source ... | 0.00s Preparing metadata... | SECTION TYPES: book, chapter, verse | SECTION FEATURES: book, chapter, verse | STRUCTURE TYPES: book, chapter, verse | STRUCTURE FEATURES: book, chapter, verse | TEXT FEATURES: | | text-orig-full after, word | 0.00s OK | 0.00s Following director... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\01-matthew.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\02-mark.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\03-luke.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\04-john.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\05-acts.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\06-romans.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\07-1corinthians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\08-2corinthians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\09-galatians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\10-ephesians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\11-philippians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\12-colossians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\13-1thessalonians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\14-2thessalonians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\15-1timothy.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\16-2timothy.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\17-titus.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\18-philemon.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\19-hebrews.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\20-james.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\21-1peter.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\22-2peter.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\23-1john.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\24-2john.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\25-3john.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\26-jude.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\27-revelation.pkl... | 40s "edge" actions: 0 | 40s "feature" actions: 328053 | 40s "node" actions: 180871 | 40s "resume" actions: 0 | 40s "slot" actions: 137779 | 40s "terminate" actions: 328026 | 27 x "book" node | 260 x "chapter" node | 95811 x "clause" node | 71106 x "phrase" node | 5724 x "sentence" node | 7943 x "verse" node | 137779 x "word" node = slot type | 318650 nodes of all types | 40s OK | 0.00s checking for nodes and edges ... | 0.00s OK | 0.00s checking (section) features ... | 0.24s OK | 0.00s reordering nodes ... 
| 0.03s Sorting 27 nodes of type "book" | 0.04s Sorting 260 nodes of type "chapter" | 0.05s Sorting 95811 nodes of type "clause" | 0.16s Sorting 71106 nodes of type "phrase" | 0.24s Sorting 5724 nodes of type "sentence" | 0.26s Sorting 7943 nodes of type "verse" | 0.28s Max node = 318650 | 0.28s OK | 0.00s reassigning feature values ... | | 0.00s node feature "after" with 137779 nodes | | 0.04s node feature "book" with 27 nodes | | 0.05s node feature "book_long" with 137779 nodes | | 0.09s node feature "book_short" with 137779 nodes | | 0.14s node feature "booknum" with 137779 nodes | | 0.18s node feature "case" with 137779 nodes | | 0.24s node feature "chapter" with 138039 nodes | | 0.29s node feature "clause" with 233590 nodes | | 0.38s node feature "clauserule" with 95811 nodes | | 0.43s node feature "clausetype" with 95811 nodes | | 0.47s node feature "degree" with 137779 nodes | | 0.52s node feature "gloss" with 137779 nodes | | 0.57s node feature "gn" with 137779 nodes | | 0.62s node feature "id" with 137779 nodes | | 0.66s node feature "junction" with 95808 nodes | | 0.69s node feature "lemma" with 137779 nodes | | 0.73s node feature "lex_dom" with 137779 nodes | | 0.78s node feature "ln" with 137779 nodes | | 0.82s node feature "monad" with 137779 nodes | | 0.86s node feature "mood" with 137779 nodes | | 0.89s node feature "morph" with 137779 nodes | | 0.93s node feature "nodeID" with 137779 nodes | | 0.97s node feature "normalized" with 137779 nodes | | 1.01s node feature "nu" with 137779 nodes | | 1.05s node feature "number" with 137779 nodes | | 1.09s node feature "orig_order" with 137779 nodes | | 1.13s node feature "person" with 137779 nodes | | 1.16s node feature "phrase" with 208885 nodes | | 1.22s node feature "phrasefunction" with 71106 nodes | | 1.25s node feature "phrasefunction_long" with 71106 nodes | | 1.27s node feature "phrasetype" with 71106 nodes | | 1.29s node feature "ref" with 137779 nodes | | 1.33s node feature "reference" with 137779 nodes | | 1.37s node feature "sentence" with 143503 nodes | | 1.41s node feature "sp" with 137779 nodes | | 1.45s node feature "sp_full" with 137779 nodes | | 1.49s node feature "strongs" with 137779 nodes | | 1.53s node feature "subj_ref" with 137779 nodes | | 1.56s node feature "tense" with 137779 nodes | | 1.61s node feature "type" with 137779 nodes | | 1.65s node feature "unicode" with 137779 nodes | | 1.69s node feature "verse" with 145722 nodes | | 1.73s node feature "voice" with 137779 nodes | | 1.77s node feature "wg" with 166914 nodes | | 1.81s node feature "wgclass" with 95808 nodes | | 1.85s node feature "word" with 137779 nodes | 1.98s OK 0.00s Exporting 47 node and 1 edge and 1 config features to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data/: 0.00s VALIDATING oslots feature 0.02s VALIDATING oslots feature 0.02s maxSlot= 137779 0.02s maxNode= 318650 0.04s OK: oslots is valid | 0.13s T after to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.00s T book to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T book_long to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T book_short to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.12s T booknum to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T case to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.12s T chapter to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.21s T clause to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 
0.10s T clauserule to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.09s T clausetype to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T degree to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T gloss to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T gn to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T id to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.09s T junction to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T lemma to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T lex_dom to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T ln to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T monad to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T mood to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T morph to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T nodeID to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T normalized to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T nu to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T number to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T orig_order to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.06s T otype to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T person to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.20s T phrase to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.07s T phrasefunction to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.07s T phrasefunction_long to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.07s T phrasetype to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T ref to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T reference to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T sentence to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T sp to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T sp_full to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T strongs to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T subj_ref to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T tense to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T type to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.16s T unicode to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T verse to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T voice to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.22s T wg to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.09s T wgclass to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T word to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.51s T oslots to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data | 0.00s M otext to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data 6.57s Exported 47 node features and 1 edge features and 1 config features to C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data/ done
The Text-Fabric data will be loaded from the GitHub repository https://github.com/tonyjurg/n1904_lft.
%load_ext autoreload
%autoreload 2
# First, I have to load different modules that I use for analyzing the data and for plotting:
import sys, os, collections
import pandas as pd
import numpy as np
import re
from tf.fabric import Fabric
from tf.app import use
The following cell loads the Text-Fabric files from the GitHub repository.
# Loading the New Testament Text-Fabric (add a specific version, e.g. 0.1.2)
NA = use ("tonyjurg/n1904_lft", version="0.1.4", hoist=globals())
Locating corpus resources ...
The requested data is not available offline C:/Users/tonyj/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 not found rate limit is 5000 requests per hour, with 4998 left for this hour connecting to online GitHub repo tonyjurg/n1904_lft ... connected cannot find releases cannot find releases tf/0.1.4/after.tf...downloaded tf/0.1.4/book.tf...downloaded tf/0.1.4/book_long.tf...downloaded tf/0.1.4/book_short.tf...downloaded tf/0.1.4/booknum.tf...downloaded tf/0.1.4/case.tf...downloaded tf/0.1.4/chapter.tf...downloaded tf/0.1.4/class.tf...downloaded tf/0.1.4/clause.tf...downloaded tf/0.1.4/clauserule.tf...downloaded tf/0.1.4/clausetype.tf...downloaded tf/0.1.4/degree.tf...downloaded tf/0.1.4/gloss.tf...downloaded tf/0.1.4/gn.tf...downloaded tf/0.1.4/id.tf...downloaded tf/0.1.4/junction.tf...downloaded tf/0.1.4/lemma.tf...downloaded tf/0.1.4/lex_dom.tf...downloaded tf/0.1.4/ln.tf...downloaded tf/0.1.4/monad.tf...downloaded tf/0.1.4/mood.tf...downloaded tf/0.1.4/morph.tf...downloaded tf/0.1.4/nodeID.tf...downloaded tf/0.1.4/normalized.tf...downloaded tf/0.1.4/nu.tf...downloaded tf/0.1.4/number.tf...downloaded tf/0.1.4/orig_order.tf...downloaded tf/0.1.4/oslots.tf...downloaded tf/0.1.4/otext.tf...downloaded tf/0.1.4/otype.tf...downloaded tf/0.1.4/person.tf...downloaded tf/0.1.4/phrase.tf...downloaded tf/0.1.4/phrasefunction.tf...downloaded tf/0.1.4/phrasefunction_long.tf...downloaded tf/0.1.4/phrasetype.tf...downloaded tf/0.1.4/ref.tf...downloaded tf/0.1.4/reference.tf...downloaded tf/0.1.4/sentence.tf...downloaded tf/0.1.4/sp.tf...downloaded tf/0.1.4/sp_full.tf...downloaded tf/0.1.4/strongs.tf...downloaded tf/0.1.4/subj_ref.tf...downloaded tf/0.1.4/tense.tf...downloaded tf/0.1.4/type.tf...downloaded tf/0.1.4/unicode.tf...downloaded tf/0.1.4/verse.tf...downloaded tf/0.1.4/voice.tf...downloaded tf/0.1.4/wg.tf...downloaded tf/0.1.4/wgclass.tf...downloaded tf/0.1.4/word.tf...downloaded OK
| 0.33s T otype from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 3.44s T oslots from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.53s T verse from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.56s T after from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.00s T book from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.68s T word from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.50s T chapter from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | | 0.09s C __levels__ from otype, oslots, otext | | 1.95s C __order__ from otype, oslots, __levels__ | | 0.09s C __rank__ from otype, __order__ | | 5.51s C __levUp__ from otype, oslots, __rank__ | | 3.12s C __levDown__ from otype, __levUp__, __rank__ | | 0.06s C __characters__ from otext | | 1.34s C __boundary__ from otype, oslots, __rank__ | | 0.04s C __sections__ from otype, oslots, otext, __levUp__, __levels__, book, chapter, verse | | 0.26s C __structure__ from otype, oslots, otext, __rank__, __levUp__, book, chapter, verse | 0.57s T book_long from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.57s T book_short from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.49s T booknum from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.54s T case from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.83s T clause from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.39s T clauserule from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.34s T clausetype from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.48s T degree from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.65s T gloss from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.55s T gn from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.72s T id from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.37s T junction from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.62s T lemma from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.59s T lex_dom from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.60s T ln from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.51s T monad from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.51s T mood from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.58s T morph from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.73s T nodeID from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.65s T normalized from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.55s T nu from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.55s T number from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.54s T orig_order from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.54s T person from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.83s T phrase from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.29s T phrasefunction from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.30s T phrasefunction_long from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.30s T phrasetype from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.75s T ref from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.57s T sentence from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.59s T sp from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.62s T sp_full from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.60s T strongs from 
~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.50s T subj_ref from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.52s T tense from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.52s T type from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.69s T unicode from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.50s T voice from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.74s T wg from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4 | 0.39s T wgclass from ~/text-fabric-data/github/tonyjurg/n1904_lft/tf/0.1.4
Name | # of nodes | # slots/node | % coverage |
---|---|---|---|
book | 27 | 5102.93 | 100 |
chapter | 260 | 529.92 | 100 |
sentence | 5724 | 24.07 | 100 |
verse | 7943 | 17.35 | 100 |
clause | 95811 | 7.73 | 538 |
phrase | 71106 | 1.80 | 93 |
word | 137779 | 1.00 | 100 |
N.otypeRank
{'word': 0, 'phrase': 1, 'clause': 2, 'verse': 3, 'sentence': 4, 'chapter': 5, 'book': 6}
Note: the implementation of how phrases should be displayed (especially with regard to conjunctions) still needs to be done.
Search0 = '''
book book=Matthew
  chapter chapter=1
    verse
'''
Search0 = NA.search(Search0)
NA.show(Search0, start=1, end=20, condensed=True, extraFeatures={'sp','wg','clausetype','gloss','clauserule', 'junction','phrasefunction_long', }, withNodes=True)
0.01s 25 results
verse 1
verse 2
verse 3
verse 4
verse 5
verse 6
verse 7
verse 8
verse 9
verse 10
verse 11
verse 12
verse 13
verse 14
verse 15
verse 16
verse 17
verse 18
verse 19
verse 20
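As a further illustration, the word-level features created by the conversion can be queried in the same manner; a small sketch (the value 'verb' for sp follows the class values harvested above):

Search1 = '''
book book=Matthew
  chapter chapter=1
    word sp=verb
'''
Search1 = NA.search(Search1)
NA.show(Search1, start=1, end=3, condensed=True)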
T.structureInfo()
A heading is a tuple of pairs (node type, feature value) of node types and features that have been configured as structural elements These 3 structural elements have been configured node type book with heading feature book node type chapter with heading feature chapter node type verse with heading feature verse You can get them as a tuple with T.headings. Structure API: T.structure(node=None) gives the structure below node, or everything if node is None T.structurePretty(node=None) prints the structure below node, or everything if node is None T.top() gives all top-level nodes T.up(node) gives the (immediate) parent node T.down(node) gives the (immediate) children nodes T.headingFromNode(node) gives the heading of a node T.nodeFromHeading(heading) gives the node of a heading T.ndFromHd complete mapping from headings to nodes T.hdFromNd complete mapping from nodes to headings T.hdMult are all headings with their nodes that occur multiple times There are 1097 structural elements in the dataset.
T.structure(20619)
20619 is an phrase which is not configured as a structure type
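Since phrase nodes are not configured as a structure type, the Structure API has to be addressed via the configured section headings; a minimal sketch using the calls listed by T.structureInfo() above (the heading values are illustrative):

heading = (('book', 'Matthew'), ('chapter', 1), ('verse', 1))
node = T.nodeFromHeading(heading)   # look up the verse node for Matthew 1:1
print(T.headingFromNode(node))      # round-trip: back to the heading tuple
print(T.up(node))                   # the immediate parent structure node (the chapter)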
TF.features['otext'].metaData
{'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)', 'Converter_author': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands', 'Converter_execution': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands', 'Converter_version': '0.1 (Initial)', 'Convertor_source': 'https://github.com/tonyjurg/n1904_lft', 'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat', 'Editors': 'Nestle', 'Name': 'Greek New Testament (NA1904)', 'TextFabric version': '11.2.3', 'Version': '1904', 'fmt:text-orig-full': '{word}', 'sectionFeatures': 'book,chapter,verse', 'sectionTypes': 'book,chapter,verse', 'structureFeatures': 'book,chapter,verse', 'structureTypes': 'book,chapter,verse', 'writtenBy': 'Text-Fabric', 'dateWritten': '2023-04-06T16:41:52Z'}
!text-fabric app
This is Text-Fabric 11.2.3 Connecting to running kernel via 19685 Connecting to running webserver via 29685 Opening app in browser Press <Ctrl+C> to stop the TF browser
!text-fabric app -k
This is Text-Fabric 11.2.3 Killing processes: kernel % 10804: 19685 app: terminated web % 3564: 29685 app: terminated text-fabric % 10076 app: terminated 3 processes done.
tf.core.nodes.Nodes.otypeRank
--------------------------------------------------------------------------- NameError Traceback (most recent call last) Input In [44], in <cell line: 1>() ----> 1 tf.core.nodes.Nodes.otypeRank NameError: name 'tf' is not defined
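The NameError above occurs because the bare name tf was never imported in this notebook (only Fabric and use were imported from tf.fabric and tf.app); the rank mapping itself is already available through the loaded API as N.otypeRank, as shown a few cells earlier.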