Version: 0.1.9 (May 10, 2023 - add phrases again)
The source data for the conversion are the LowFat XML tree files representing the macula-greek version of the Nestle 1904 Greek New Testament. The most recent source data can be found on GitHub: https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat. Attribution: "MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/".
The production of the Text-Fabric files consists of two steps. First the creation of pickle files (part 1), then the actual Text-Fabric creation process (part 2). Both steps are independent, which allows starting directly at part 2 using the pickle files as input.
Be advised that this Text-Fabric version is a test version (proof of concept) and requires further fine-tuning, especially with regard to the nomenclature and presentation of (sub)phrases and clauses.
This script harvests all information from the LowFat tree data (XML nodes), puts it into a Pandas DataFrame and stores the result per book in a pickle file. Note: pickling (in Python) is serialising an object into a disk file (or buffer).
In the context of this script, 'leaf' refers to those nodes containing the Greek word as data, which happen to be the nodes without any children (hence the analogy with the leaves of a tree). These leaves can also be referred to as 'terminal nodes'. Further, Parent1 is the leaf's parent, Parent2 is Parent1's parent, etc.
For a full description of the source data, see the document "MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf".
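Since everything in part 1 revolves around pickling DataFrames, a minimal round-trip sketch (using a hypothetical two-row DataFrame) may help to illustrate what 'pickling' means in practice:
import pandas as pd
import pickle
# build a tiny example DataFrame (hypothetical data, just for illustration)
df = pd.DataFrame({'word': ['Βίβλος', 'γενέσεως'], 'monad': [1, 2]})
# serialise ('pickle') the DataFrame into a disk file
with open('example.pkl', 'wb') as output:
    pickle.dump(df, output)
# read it back; the restored object is an identical DataFrame
with open('example.pkl', 'rb') as pkl_file:
    df_restored = pickle.load(pkl_file)
print(df_restored.equals(df))    # True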
import pandas as pd
import sys
import os
import time
import pickle
import re #regular expressions
from os import listdir
from os.path import isfile, join
import xml.etree.ElementTree as ET
Change BaseDir, XmlDir and PklDir to match the location of the data and the OS used.
BaseDir = 'C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\'
XmlDir = BaseDir+'xml\\'
PklDir = BaseDir+'pkl\\'
XlsxDir = BaseDir+'xlsx\\'
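As a sketch of a more OS-independent alternative (the folder names are the same as above), the paths could also be built with pathlib; this has the added benefit that the output directories mentioned in the next note can be created on the fly (os.path.join, as used further down, accepts Path objects as well):
from pathlib import Path
BaseDir = Path('C:/Users/tonyj/my_new_Jupyter_folder/Read_from_lowfat/data')
XmlDir  = BaseDir / 'xml'
PklDir  = BaseDir / 'pkl'
XlsxDir = BaseDir / 'xlsx'
# create the output directories if they do not exist yet
PklDir.mkdir(parents=True, exist_ok=True)
XlsxDir.mkdir(parents=True, exist_ok=True)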
# note: create the output directories prior to running this part
# key: filename, [0]=book_long, [1]=book_num, [2]=book_short
bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],
'02-mark': ['Mark', '2', 'Mark'],
'03-luke': ['Luke', '3', 'Luke'],
'04-john': ['John', '4', 'John'],
'05-acts': ['Acts', '5', 'Acts'],
'06-romans': ['Romans', '6', 'Rom'],
'07-1corinthians': ['I_Corinthians', '7', '1Cor'],
'08-2corinthians': ['II_Corinthians', '8', '2Cor'],
'09-galatians': ['Galatians', '9', 'Gal'],
'10-ephesians': ['Ephesians', '10', 'Eph'],
'11-philippians': ['Philippians', '11', 'Phil'],
'12-colossians': ['Colossians', '12', 'Col'],
'13-1thessalonians':['I_Thessalonians', '13', '1Thess'],
'14-2thessalonians':['II_Thessalonians','14', '2Thess'],
'15-1timothy': ['I_Timothy', '15', '1Tim'],
'16-2timothy': ['II_Timothy', '16', '2Tim'],
'17-titus': ['Titus', '17', 'Titus'],
'18-philemon': ['Philemon', '18', 'Phlm'],
'19-hebrews': ['Hebrews', '19', 'Heb'],
'20-james': ['James', '20', 'Jas'],
'21-1peter': ['I_Peter', '21', '1Pet'],
'22-2peter': ['II_Peter', '22', '2Pet'],
'23-1john': ['I_John', '23', '1John'],
'24-2john': ['II_John', '24', '2John'],
'25-3john': ['III_John', '25', '3John'],
'26-jude': ['Jude', '26', 'Jude'],
'27-revelation': ['Revelation', '27', 'Rev']}
bo2book_ = {'01-matthew': ['Matthew', '1', 'Matt']}
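Each value in bo2book is a list that is unpacked later on; for example:
book_long, booknum, book_short = bo2book['01-matthew']
print(book_long, booknum, book_short)    # Matthew 1 Matt
The bo2book_ variant (trailing underscore) holds a single-book subset; swapping it in for bo2book, presumably for quick test runs, limits processing to Matthew.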
In order to traverse from the 'leaves' (terminal nodes) up to the root of the tree, it is required to add to each node information pointing to its parent.
(concept taken from https://stackoverflow.com/questions/2170610/access-elementtree-node-parent-node)
def addParentInfo(et):
for child in et:
child.attrib['parent'] = et
addParentInfo(child)
def getParent(et):
if 'parent' in et.attrib:
return et.attrib['parent']
else:
return None
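A minimal sketch of how these two helpers work together, using a hypothetical three-level XML fragment: once addParentInfo has been applied to the root, any leaf can be traced back up to the root.
import xml.etree.ElementTree as ET
root = ET.fromstring('<sentence><wg><w>λόγος</w></wg></sentence>')
addParentInfo(root)                 # inject 'parent' attributes into all nodes below the root
node = root.find('.//w')            # start at the terminal node carrying the Greek word
while node is not None:
    print(node.tag)                 # prints: w, wg, sentence
    node = getParent(node)          # the root has no 'parent' attribute, so the loop ends there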
# set some globals
monad=1
CollectedItems= 0
# process books in order
for bo, bookinfo in bo2book.items():
CollectedItems=0
SentenceNumber=0
WordGroupNumber=0
full_df=pd.DataFrame({})
book_long=bookinfo[0]
booknum=bookinfo[1]
book_short=bookinfo[2]
InputFile = os.path.join(XmlDir, f'{bo}.xml')
OutputFile = os.path.join(PklDir, f'{bo}.pkl')
print(f'Processing {book_long} at {InputFile}')
DataFrameList = []
# send xml document to parsing process
tree = ET.parse(InputFile)
# Now add all the parent info to the nodes in the xtree [important!]
addParentInfo(tree.getroot())
start_time = time.time()
# walk over all the XML data
for elem in tree.iter():
if elem.tag == 'sentence':
# add running number to 'sentence' tags
SentenceNumber+=1
elem.set('SN', SentenceNumber)
if elem.tag == 'wg':
# add running number to 'wg' tags
WordGroupNumber+=1
elem.set('WGN', WordGroupNumber)
if elem.tag == 'w':
# all nodes containing words are tagged with 'w'
# show progress on screen
CollectedItems+=1
if (CollectedItems%100==0): print (".",end='')
# Leafref will contain a list with book, chapter, verse and word number
Leafref = re.sub(r'[!: ]'," ", elem.attrib.get('ref')).split()
#push value for monad to element tree
elem.set('monad', monad)
monad+=1
# add some important computed data to the leaf
elem.set('LeafName', elem.tag)
elem.set('word', elem.text)
elem.set('book_long', book_long)
elem.set('booknum', int(booknum))
elem.set('book_short', book_short)
elem.set('chapter', int(Leafref[1]))
elem.set('verse', int(Leafref[2]))
# the following code will trace the parents up the tree and store the attributes found
parentnode=getParent(elem)
index=0
while (parentnode):
index+=1
elem.set('Parent{}Name'.format(index), parentnode.tag)
elem.set('Parent{}Type'.format(index), parentnode.attrib.get('type'))
elem.set('Parent{}Appos'.format(index), parentnode.attrib.get('appositioncontainer'))
elem.set('Parent{}Class'.format(index), parentnode.attrib.get('class'))
elem.set('Parent{}Rule'.format(index), parentnode.attrib.get('rule'))
elem.set('Parent{}Role'.format(index), parentnode.attrib.get('role'))
elem.set('Parent{}Cltype'.format(index), parentnode.attrib.get('cltype'))
elem.set('Parent{}Unit'.format(index), parentnode.attrib.get('unit'))
elem.set('Parent{}Junction'.format(index), parentnode.attrib.get('junction'))
elem.set('Parent{}SN'.format(index), parentnode.attrib.get('SN'))
elem.set('Parent{}WGN'.format(index), parentnode.attrib.get('WGN'))
currentnode=parentnode
parentnode=getParent(currentnode)
elem.set('parents', int(index))
# add the attributes of this element, as a one-row DataFrame, to the list of DataFrames
DataFrameChunk=pd.DataFrame(elem.attrib, index={monad})
DataFrameList.append(DataFrameChunk)
#store the resulting DataFrame per book into a pickle file for further processing
full_df = pd.concat([df for df in DataFrameList])
output = open(r"{}".format(OutputFile), 'wb')
pickle.dump(full_df, output)
output.close()
print("\nFound ",CollectedItems, " items in %s seconds\n" % (time.time() - start_time))
Processing Matthew at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\01-matthew.xml ...................................................................................................................................................................................... Found 18299 items in 75.79531121253967 seconds Processing Mark at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\02-mark.xml ................................................................................................................ Found 11277 items in 45.66975522041321 seconds Processing Luke at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\03-luke.xml .................................................................................................................................................................................................. Found 19456 items in 251.0713756084442 seconds Processing John at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\04-john.xml ............................................................................................................................................................ Found 15643 items in 62.22022581100464 seconds Processing Acts at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\05-acts.xml ....................................................................................................................................................................................... Found 18393 items in 80.11186122894287 seconds Processing Romans at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\06-romans.xml ....................................................................... Found 7100 items in 33.103615522384644 seconds Processing I_Corinthians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\07-1corinthians.xml .................................................................... Found 6820 items in 28.27225947380066 seconds Processing II_Corinthians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\08-2corinthians.xml ............................................ Found 4469 items in 21.04596710205078 seconds Processing Galatians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\09-galatians.xml ...................... Found 2228 items in 9.386286497116089 seconds Processing Ephesians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\10-ephesians.xml ........................ Found 2419 items in 12.377203464508057 seconds Processing Philippians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\11-philippians.xml ................ Found 1630 items in 6.929890871047974 seconds Processing Colossians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\12-colossians.xml ............... Found 1575 items in 7.252684116363525 seconds Processing I_Thessalonians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\13-1thessalonians.xml .............. Found 1473 items in 6.500782251358032 seconds Processing II_Thessalonians at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\14-2thessalonians.xml ........ Found 822 items in 3.1195180416107178 seconds Processing I_Timothy at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\15-1timothy.xml ............... Found 1588 items in 8.0901620388031 seconds Processing II_Timothy at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\16-2timothy.xml ............ 
Found 1237 items in 6.2979772090911865 seconds Processing Titus at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\17-titus.xml ...... Found 658 items in 2.693586826324463 seconds Processing Philemon at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\18-philemon.xml ... Found 335 items in 1.189115047454834 seconds Processing Hebrews at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\19-hebrews.xml ................................................. Found 4955 items in 20.371201038360596 seconds Processing James at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\20-james.xml ................. Found 1739 items in 5.6465394496917725 seconds Processing I_Peter at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\21-1peter.xml ................ Found 1676 items in 8.954313278198242 seconds Processing II_Peter at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\22-2peter.xml .......... Found 1098 items in 4.5939764976501465 seconds Processing I_John at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\23-1john.xml ..................... Found 2136 items in 8.40240740776062 seconds Processing II_John at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\24-2john.xml .. Found 245 items in 0.5407323837280273 seconds Processing III_John at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\25-3john.xml .. Found 219 items in 0.882843017578125 seconds Processing Jude at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\26-jude.xml .... Found 457 items in 1.5982708930969238 seconds Processing Revelation at C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\xml\27-revelation.xml .................................................................................................. Found 9832 items in 46.32647252082825 seconds
This script creates the Text-Fabric files by recursively calling the TF walker function. API info: https://annotation.github.io/text-fabric/tf/convert/walker.html
The pickle files created by step 1 are stored at a GitHub location (T.B.D.).
import pandas as pd
import os
import re
import gc
from tf.fabric import Fabric
from tf.convert.walker import CV
from tf.parameters import VERSION
from datetime import date
import pickle
BaseDir = 'C:\\Users\\tonyj\\my_new_Jupyter_folder\\Read_from_lowfat\\data\\'
XmlDir = BaseDir+'xml\\'
PklDir = BaseDir+'pkl\\'
XlsxDir = BaseDir+'xlsx\\'
# key: filename, [0]=book_long, [1]=book_num, [2]=book_short
bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],
'02-mark': ['Mark', '2', 'Mark'],
'03-luke': ['Luke', '3', 'Luke'],
'04-john': ['John', '4', 'John'],
'05-acts': ['Acts', '5', 'Acts'],
'06-romans': ['Romans', '6', 'Rom'],
'07-1corinthians': ['I_Corinthians', '7', '1Cor'],
'08-2corinthians': ['II_Corinthians', '8', '2Cor'],
'09-galatians': ['Galatians', '9', 'Gal'],
'10-ephesians': ['Ephesians', '10', 'Eph'],
'11-philippians': ['Philippians', '11', 'Phil'],
'12-colossians': ['Colossians', '12', 'Col'],
'13-1thessalonians':['I_Thessalonians', '13', '1Thess'],
'14-2thessalonians':['II_Thessalonians','14', '2Thess'],
'15-1timothy': ['I_Timothy', '15', '1Tim'],
'16-2timothy': ['II_Timothy', '16', '2Tim'],
'17-titus': ['Titus', '17', 'Titus'],
'18-philemon': ['Philemon', '18', 'Phlm'],
'19-hebrews': ['Hebrews', '19', 'Heb'],
'20-james': ['James', '20', 'Jas'],
'21-1peter': ['I_Peter', '21', '1Pet'],
'22-2peter': ['II_Peter', '22', '2Pet'],
'23-1john': ['I_John', '23', '1John'],
'24-2john': ['II_John', '24', '2John'],
'25-3john': ['III_John', '25', '3John'],
'26-jude': ['Jude', '26', 'Jude'],
'27-revelation': ['Revelation', '27', 'Rev']}
bo2book_ = {'01-matthew': ['Matthew', '1', 'Matt']}
# test: dump the data per book to an Excel file for inspection
import openpyxl
import pickle
#if True:
for bo in bo2book:
'''
load all data into a dataframe
process books in order (bookinfo is a list!)
'''
InputFile = os.path.join(PklDir, f'{bo}.pkl')
print(f'\tloading {InputFile}...')
pkl_file = open(InputFile, 'rb')
df = pickle.load(pkl_file)
pkl_file.close()
df.to_excel(os.path.join(XlsxDir, f'{bo}.xlsx'), index=False)
loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\01-matthew.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\02-mark.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\03-luke.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\04-john.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\05-acts.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\06-romans.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\07-1corinthians.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\08-2corinthians.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\09-galatians.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\10-ephesians.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\11-philippians.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\12-colossians.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\13-1thessalonians.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\14-2thessalonians.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\15-1timothy.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\16-2timothy.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\17-titus.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\18-philemon.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\19-hebrews.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\20-james.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\21-1peter.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\22-2peter.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\23-1john.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\24-2john.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\25-3john.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\26-jude.pkl... loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\27-revelation.pkl...
API info: https://annotation.github.io/text-fabric/tf/convert/walker.html
The logic of interpreting the data is included in the director function.
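For orientation: the director receives a converter object cv and emits node, feature and terminate actions while walking over the data; Text-Fabric assembles the graph from those actions. A minimal sketch of that pattern (not the actual director; node types, features and words are only examples):
def minimal_director(cv):
    book = cv.node('book')                 # open a non-slot node; it becomes 'active'
    cv.feature(book, book='Matthew')       # attach features to the open node
    for greek_word in ('Βίβλος', 'γενέσεως'):
        word = cv.slot()                   # create a slot; it is linked to all active nodes
        cv.feature(word, word=greek_word)  # attach word-level features
    cv.terminate(book)                     # close the book node again
The real director below follows the same pattern, but opens and closes chapter, verse, sentence and wg nodes based on the rows of the pickled DataFrames, and uses cv.resume to reopen wordgroups that turn out to be discontinuous.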
TF = Fabric(locations=BaseDir, silent=False)
cv = CV(TF)
version = "0.1.9"
###############################################
# Common helper functions #
###############################################
#Function to prevent errors during conversion due to missing data
def sanitize(input):
if isinstance(input, float): return ''
if isinstance(input, type(None)): return ''
else: return (input)
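The effect of sanitize in a few calls: missing values, which Pandas hands back as NaN (a float) or None, are replaced by an empty string so they never reach cv.feature; everything else passes through unchanged. (Any float is treated as missing data; in this dataset genuine values are strings or integers, so nothing real is lost.)
print(sanitize(float('nan')))    # ''  (NaN, i.e. a missing value from the DataFrame)
print(sanitize(None))            # ''
print(sanitize('nominative'))    # 'nominative'  (passed through unchanged)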
# Function to expand the syntactic categories of words or wordgroups
# See also "MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf"
# page 5&6 (section 2.4 Syntactic Categories at Clause Level)
def ExpandRole(input):
if input=="adv": return 'Adverbial'
if input=="io": return 'Indirect Object'
if input=="o": return 'Object'
if input=="o2": return 'Second Object'
if input=="s": return 'Subject'
if input=="p": return 'Predicate'
if input=="v": return 'Verbal'
if input=="vc": return 'Verbal Copula'
if input=='aux': return 'Auxiliary'
return ''
# Function to expand the Part of Speech labels. See also the description in
# "MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf" page 6&7
# (2.2. Syntactic Categories at Word Level: Part of Speech Labels)
def ExpandSP(input):
if input=='adj': return 'adjective'
if input=='conj': return 'conjunction'
if input=='det': return 'determiner'
if input=='intj': return 'interjection'
if input=='noun': return 'noun'
if input=='num': return 'numeral'
if input=='prep': return 'preposition'
if input=='ptcl': return 'particle'
if input=='pron': return 'pronoun'
if input=='verb': return 'verb'
return ''
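Both expansion functions are essentially lookup tables; an equivalent, slightly more compact formulation would be a dictionary with .get() (a sketch, functionally the same as ExpandSP above):
SP_LABELS = {
    'adj': 'adjective', 'conj': 'conjunction', 'det': 'determiner',
    'intj': 'interjection', 'noun': 'noun', 'num': 'numeral',
    'prep': 'preposition', 'ptcl': 'particle', 'pron': 'pronoun',
    'verb': 'verb',
}
def ExpandSP(input):
    # unknown or empty labels fall back to the empty string
    return SP_LABELS.get(input, '')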
###############################################
# The director routine #
###############################################
def director(cv):
###############################################
# Initial setup of data etc. #
###############################################
NoneType = type(None) # needed as tool to validate certain data
IndexDict = {} # init an empty dictionary
WordGroupDict={} # init a dummy dictionary
PrevWordGroupSet = WordGroupSet = []
PrevWordGroupList = WordGroupList = []
RootWordGroup = 0
WordNumber=FoundWords=WordGroupTrack=0
# The following is required to recover successfully from an abnormal condition
# in the LowFat tree data where a <wg> element is labeled as <error>
# this number is arbitrary but should be high enough not to clash with 'real' WG numbers
DummyWGN=200000
for bo,bookinfo in bo2book.items():
###############################################
# start of section executed for each book #
###############################################
# note: bookinfo is a list! Split the data
Book = bookinfo[0]
BookNumber = int(bookinfo[1])
BookShort = bookinfo[2]
BookLoc = os.path.join(PklDir, f'{bo}.pkl')
# load data for this book into a dataframe.
# make sure wordorder is correct
print(f'\tWe are loading {BookLoc}...')
pkl_file = open(BookLoc, 'rb')
df_unsorted = pickle.load(pkl_file)
pkl_file.close()
df=df_unsorted.sort_values(by=['ref'])
# set up nodes for new book
ThisBookPointer = cv.node('book')
cv.feature(ThisBookPointer, book=Book, booknumber=BookNumber, bookshort=BookShort)
ThisChapterPointer = cv.node('chapter')
cv.feature(ThisChapterPointer, chapter=1)
PreviousChapter=1
ThisVersePointer = cv.node('verse')
cv.feature(ThisVersePointer, verse=1)
PreviousVerse=1
ThisSentencePointer = cv.node('sentence')
cv.feature(ThisSentencePointer, verse=1)
PreviousSentence=1
'''
fill dictionary of column names for this book
sort to ensure proper wordorder
'''
ItemsInRow=1
for itemname in df.columns.to_list():
IndexDict.update({'i_{}'.format(itemname): ItemsInRow})
ItemsInRow+=1
###############################################
# Iterate through words and construct objects #
###############################################
for row in df.itertuples():
WordNumber += 1
FoundWords +=1
# Detect and act upon changes in sentences, verse and chapter
# the order of terminating and creating the nodes is critical:
# close verse - close chapter - open chapter - open verse
NumberOfParents = sanitize(row[IndexDict.get("i_parents")])
ThisSentence=int(row[IndexDict.get("i_Parent{}SN".format(NumberOfParents-1))])
ThisVerse = sanitize(row[IndexDict.get("i_verse")])
ThisChapter = sanitize(row[IndexDict.get("i_chapter")])
if (ThisSentence!=PreviousSentence):
#cv.feature(ThisSentencePointer, statdata?)
cv.terminate(ThisSentencePointer)
if (ThisVerse!=PreviousVerse):
#cv.feature(ThisVersePointer, statdata?)
cv.terminate(ThisVersePointer)
if (ThisChapter!=PreviousChapter):
#cv.feature(ThisChapterPointer, statdata?)
cv.terminate(ThisChapterPointer)
PreviousChapter = ThisChapter
ThisChapterPointer = cv.node('chapter')
cv.feature(ThisChapterPointer, chapter=ThisChapter)
if (ThisVerse!=PreviousVerse):
PreviousVerse = ThisVerse
ThisVersePointer = cv.node('verse')
cv.feature(ThisVersePointer, verse=ThisVerse, chapter=ThisChapter)
if (ThisSentence!=PreviousSentence):
PreviousSentence=ThisSentence
ThisSentencePointer = cv.node('sentence')
cv.feature(ThisSentencePointer, verse=ThisVerse, chapter=ThisChapter)
###############################################
# analyze and process <WG> tags #
###############################################
PrevWordGroupList=WordGroupList
WordGroupList=[] # stores current active WordGroup numbers
for i in range(NumberOfParents-2,0,-1): # important: reversed iteration!
_WGN=row[IndexDict.get("i_Parent{}WGN".format(i))]
if isinstance(_WGN, type(None)):
# handling conditions where XML data has <error role="err_clause-complex-met-no-conditionsClCl2"> e.g. Acts 26:12
# to recover, we need to create a dummy WG with a sufficiently high WGN so it can never match any real WGN.
WGN=DummyWGN
else:
WGN=int(_WGN)
if WGN!='':
WordGroupList.append(WGN)
WordGroupDict[(WGN,0)]=WGN
WGclass=sanitize(row[IndexDict.get("i_Parent{}Class".format(i))])
WGrule=sanitize(row[IndexDict.get("i_Parent{}Rule".format(i))])
WGtype=sanitize(row[IndexDict.get("i_Parent{}Type".format(i))])
if WGclass==WGrule==WGtype=='':
WGclass='to be skipped?'
if WGrule[-2:]=='CL' and WGclass=='':
WGclass='cl*' # to simulate the way Logos presents this condition
WordGroupDict[(WGN,6)]=WGclass
WordGroupDict[(WGN,1)]=WGrule
WordGroupDict[(WGN,8)]=WGtype
WordGroupDict[(WGN,3)]=sanitize(row[IndexDict.get("i_Parent{}Junction".format(i))])
WordGroupDict[(WGN,2)]=sanitize(row[IndexDict.get("i_Parent{}Cltype".format(i))])
WordGroupDict[(WGN,7)]=sanitize(row[IndexDict.get("i_Parent{}Role".format(i))])
WordGroupDict[(WGN,9)]=sanitize(row[IndexDict.get("i_Parent{}Appos".format(i))])
WordGroupDict[(WGN,10)]=NumberOfParents-1-i # = number of parent wordgroups
if not PrevWordGroupList==WordGroupList:
if RootWordGroup != WordGroupList[0]:
RootWordGroup = WordGroupList[0]
SuspendableWordGoupList = []
# we have a new sentence. rebuild suspendable wordgroup list
# some cleaning of data may be added here to save on memory...
#for k in range(6): del WordGroupDict[item,k]
for item in reversed(PrevWordGroupList):
if (item not in WordGroupList):
# CLOSE/SUSPEND CASE
SuspendableWordGoupList.append(item)
cv.terminate(WordGroupDict[item,4])
for item in WordGroupList:
if (item not in PrevWordGroupList):
if (item in SuspendableWordGoupList):
# RESUME CASE
#print ('\n resume: '+str(item),end=' ')
cv.resume(WordGroupDict[(item,4)])
else:
# CREATE CASE
#print ('\n create: '+str(item),end=' ')
WordGroupDict[(item,4)]=cv.node('wg')
WordGroupDict[(item,5)]=WordGroupTrack
WordGroupTrack += 1
cv.feature(WordGroupDict[(item,4)], wgnum=WordGroupDict[(item,0)], junction=WordGroupDict[(item,3)],
clausetype=WordGroupDict[(item,2)], rule=WordGroupDict[(item,1)], wgclass=WordGroupDict[(item,6)],
wgrole=WordGroupDict[(item,7)],wgrolelong=ExpandRole(WordGroupDict[(item,7)]),
wgtype=WordGroupDict[(item,8)],appos=WordGroupDict[(item,9)],wglevel=WordGroupDict[(item,10)])
# These roles are performed either by a WG or just a single word.
Role=row[IndexDict.get("i_role")]
ValidRoles=["adv","io","o","o2","s","p","v","vc","aux"]
DistanceToRoleClause=0
if isinstance (Role,str) and Role in ValidRoles:
# role is assigned to this word (uniquely)
WordRole=Role
WordRoleLong=ExpandRole(WordRole)
else:
# role details need to be taken from a wordgroup further up the tree
WordRole=WordRoleLong=''
for item in range(1,NumberOfParents-1):
Role = sanitize(row[IndexDict.get("i_Parent{}Role".format(item))])
if isinstance (Role,str) and Role in ValidRoles:
WordRole=Role
WordRoleLong=ExpandRole(WordRole)
DistanceToRoleClause=item
break
# find the number of the WG containing the clause definition
for item in range(1,NumberOfParents-1):
WGrule = sanitize(row[IndexDict.get("i_Parent{}Rule".format(item))])
if row[IndexDict.get("i_Parent{}Class".format(item))]=='cl' or WGrule[-2:]=='CL':
ContainedClause=sanitize(row[IndexDict.get("i_Parent{}WGN".format(item))])
break
###############################################
# analyze and process <W> tags #
###############################################
# determine syntactic categories at word level.
PartOfSpeech=sanitize(row[IndexDict.get("i_class")])
PartOfSpeechFull=ExpandSP(PartOfSpeech)
# some attributes are not present inside some (small) books. The following is to prevent exceptions.
degree=''
if 'i_degree' in IndexDict: degree=sanitize(row[IndexDict.get("i_degree")])
subjref=''
if 'i_subjref' in IndexDict: subjref=sanitize(row[IndexDict.get("i_subjref")])
# create the word slots
this_word = cv.slot()
cv.feature(this_word,
after= sanitize(row[IndexDict.get("i_after")]),
unicode= sanitize(row[IndexDict.get("i_unicode")]),
word= sanitize(row[IndexDict.get("i_word")]),
monad= sanitize(row[IndexDict.get("i_monad")]),
orig_order= FoundWords,
book_long= sanitize(row[IndexDict.get("i_book_long")]),
booknumber= BookNumber,
bookshort= sanitize(row[IndexDict.get("i_book_short")]),
chapter= ThisChapter,
ref= sanitize(row[IndexDict.get("i_ref")]),
sp= PartOfSpeech,
sp_full= PartOfSpeechFull,
verse= ThisVerse,
sentence= ThisSentence,
normalized= sanitize(row[IndexDict.get("i_normalized")]),
morph= sanitize(row[IndexDict.get("i_morph")]),
strongs= sanitize(row[IndexDict.get("i_strong")]),
lex_dom= sanitize(row[IndexDict.get("i_domain")]),
ln= sanitize(row[IndexDict.get("i_ln")]),
gloss= sanitize(row[IndexDict.get("i_gloss")]),
gn= sanitize(row[IndexDict.get("i_gender")]),
nu= sanitize(row[IndexDict.get("i_number")]),
case= sanitize(row[IndexDict.get("i_case")]),
lemma= sanitize(row[IndexDict.get("i_lemma")]),
person= sanitize(row[IndexDict.get("i_person")]),
mood= sanitize(row[IndexDict.get("i_mood")]),
tense= sanitize(row[IndexDict.get("i_tense")]),
number= sanitize(row[IndexDict.get("i_number")]),
voice= sanitize(row[IndexDict.get("i_voice")]),
degree= degree,
type= sanitize(row[IndexDict.get("i_type")]),
reference= sanitize(row[IndexDict.get("i_ref")]),
subj_ref= subjref,
nodeID= sanitize(row[4]), #this is a fixed position in dataframe
wordrole= WordRole,
wordrolelong= WordRoleLong,
wordlevel= NumberOfParents-1,
roleclausedistance = DistanceToRoleClause,
containedclause = ContainedClause
)
cv.terminate(this_word)
'''
wrap up the book. At the end of the book we need to close all nodes in proper order.
'''
# close all open WordGroup nodes
for item in WordGroupList:
#cv.feature(WordGroupDict[(item,4)], add some stats?)
cv.terminate(WordGroupDict[item,4])
#cv.feature(ThisSentencePointer, statdata?)
cv.terminate(ThisSentencePointer)
#cv.feature(ThisVersePointer, statdata?)
cv.terminate(ThisVersePointer)
#cv.feature(ThisChapterPointer, statdata?)
cv.terminate(ThisChapterPointer)
#cv.feature(ThisBookPointer, statdata?)
cv.terminate(ThisBookPointer)
# clear dataframe for this book, clear the index dictionary
del df
IndexDict.clear()
gc.collect()
###############################################
# end of section executed for each book #
###############################################
###############################################
# end of director function #
###############################################
###############################################
# Output definitions #
###############################################
slotType = 'word'
otext = { # dictionary of config data for sections and text formats
'fmt:text-orig-full':'{word}{after}',
'sectionTypes':'book,chapter,verse',
'sectionFeatures':'book,chapter,verse',
'structureFeatures': 'book,chapter,verse',
'structureTypes': 'book,chapter,verse',
}
# configure metadata
generic = { # dictionary of metadata meant for all features
'Name': 'Greek New Testament (N1904 based on Low Fat Tree)',
'Version': '1904',
'Editors': 'Nestle',
'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat',
'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)',
'Converter_author': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands',
'Converter_execution': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands',
'Convertor_source': 'https://github.com/tonyjurg/n1904_lft',
'Converter_version': '{}'.format(version),
'TextFabric version': '{}'.format(VERSION) #imported from tf.parameters
}
# set of integer valued feature names
intFeatures = {
'booknumber',
'chapter',
'verse',
'sentence',
'wgnum',
'orig_order',
'monad',
'wglevel'
}
# per feature dicts with metadata
featureMeta = {
'after': {'description': 'Characters (e.g. punctuation) following the word'},
'book': {'description': 'Book'},
'book_long': {'description': 'Book name (fully spelled out)'},
'booknumber': {'description': 'NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},
'bookshort': {'description': 'Book name (abbreviated)'},
'chapter': {'description': 'Chapter number inside book'},
'verse': {'description': 'Verse number inside chapter'},
'sentence': {'description': 'Sentence number (counted per chapter)'},
'rule': {'description': 'Wordgroup rule information'},
'orig_order': {'description': 'Word order within corpus (per book)'},
'monad': {'description': 'Monad (currently: order of words in XML tree file!)'},
'word': {'description': 'Word as it appears in the text (excl. punctuations)'},
'unicode': {'description': 'Word as it appears in the text in Unicode (incl. punctuation)'},
'ref': {'description': 'ref ID (book, chapter, verse and word number, as found in the XML source data)'},
'sp': {'description': 'Part of Speech (abbreviated)'},
'sp_full': {'description': 'Part of Speech (long description)'},
'normalized': {'description': 'Surface word stripped of punctuation'},
'lemma': {'description': 'Lexeme (lemma)'},
'morph': {'description': 'Morphological tag (Sandborg-Petersen morphology)'},
# see also discussion on relation between lex_dom and ln
# @ https://github.com/Clear-Bible/macula-greek/issues/29
'lex_dom': {'description': 'Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)'},
'ln': {'description': 'Louw-Nida lexical classification (not present everywhere?)'},
'strongs': {'description': 'Strongs number'},
'gloss': {'description': 'English gloss'},
'gn': {'description': 'Grammatical gender (Masculine, Feminine, Neuter)'},
'nu': {'description': 'Grammatical number (Singular, Plural)'},
'case': {'description': 'Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},
'person': {'description': 'Grammatical person of the verb (first, second, third)'},
'mood': {'description': 'Grammatical mood of the verb (indicative, imperative, etc.)'},
'tense': {'description': 'Grammatical tense of the verb (e.g. Present, Aorist)'},
'number': {'description': 'Grammatical number of the verb'},
'voice': {'description': 'Grammatical voice of the verb'},
'degree': {'description': 'Degree (e.g. Comparative, Superlative)'},
'type': {'description': 'Grammatical type of noun or pronoun (e.g. Common, Personal)'},
'reference': {'description': 'Reference (to nodeID in XML source data, not yet post-processed)'},
'subj_ref': {'description': 'Subject reference (to nodeID in XML source data, not yet post-processed)'},
'nodeID': {'description': 'Node ID (as in the XML source data, not yet post-processed)'},
'junction': {'description': 'Junction data related to a wordgroup'},
'wgnum': {'description': 'Wordgroup number (counted per book)'},
'wgclass': {'description': 'Class of the wordgroup'},
'wgrole': {'description': 'Role of the wordgroup (abbreviated)'},
'wgrolelong': {'description': 'Role of the wordgroup (full)'},
'wordrole': {'description': 'Role of the word (abbreviated)'},
'wordrolelong':{'description': 'Role of the word (full)'},
'wgtype': {'description': 'Wordgroup type details'},
'clausetype': {'description': 'Clause type details'},
'appos': {'description': 'Apposition details'},
'wglevel': {'description': 'Number of parent wordgroups for a wordgroup'},
'wordlevel': {'description': 'Number of parent wordgroups for a word'},
'roleclausedistance': {'description': 'Distance to the wordgroup defining the role of this word'},
'containedclause': {'description': 'Contained clause (WG number)'}
}
###############################################
# the main function #
###############################################
good = cv.walk(
director,
slotType,
otext=otext,
generic=generic,
intFeatures=intFeatures,
featureMeta=featureMeta,
warn=False,
force=False
)
if good:
print ("done")
This is Text-Fabric 11.4.10 55 features found and 0 ignored 0.00s Importing data from walking through the source ... | 0.00s Preparing metadata... | SECTION TYPES: book, chapter, verse | SECTION FEATURES: book, chapter, verse | STRUCTURE TYPES: book, chapter, verse | STRUCTURE FEATURES: book, chapter, verse | TEXT FEATURES: | | text-orig-full after, word | 0.00s OK | 0.00s Following director... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\01-matthew.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\02-mark.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\03-luke.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\04-john.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\05-acts.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\06-romans.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\07-1corinthians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\08-2corinthians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\09-galatians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\10-ephesians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\11-philippians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\12-colossians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\13-1thessalonians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\14-2thessalonians.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\15-1timothy.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\16-2timothy.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\17-titus.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\18-philemon.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\19-hebrews.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\20-james.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\21-1peter.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\22-2peter.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\23-1john.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\24-2john.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\25-3john.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\26-jude.pkl... We are loading C:\Users\tonyj\my_new_Jupyter_folder\Read_from_lowfat\data\pkl\27-revelation.pkl... | 53s "edge" actions: 0 | 53s "feature" actions: 290649 | 53s "node" actions: 152870 | 53s "resume" actions: 37890 | 53s "slot" actions: 137779 | 53s "terminate" actions: 328692 | 27 x "book" node | 270 x "chapter" node | 12170 x "sentence" node | 7943 x "verse" node | 132460 x "wg" node | 137779 x "word" node = slot type | 290649 nodes of all types | 53s OK | 0.05s Removing unlinked nodes ... | | 0.00s 10 unlinked "chapter" nodes: [1, 30, 47, 72, 94] ... | | 0.00s 10 unlinked "sentence" nodes: [1, 1678, 2765, 4473, 6092] ... 
| | 0.00s 20 unlinked nodes | | 0.00s Leaving 290629 nodes | 0.00s checking for nodes and edges ... | 0.00s OK | 0.00s checking (section) features ... | 0.25s OK | 0.00s reordering nodes ... | 0.04s Sorting 27 nodes of type "book" | 0.06s Sorting 260 nodes of type "chapter" | 0.07s Sorting 12160 nodes of type "sentence" | 0.11s Sorting 7943 nodes of type "verse" | 0.13s Sorting 132460 nodes of type "wg" | 0.30s Max node = 290629 | 0.30s OK | 0.00s reassigning feature values ... | | 0.69s node feature "after" with 137779 nodes | | 0.75s node feature "appos" with 132460 nodes | | 0.82s node feature "book" with 27 nodes | | 0.82s node feature "book_long" with 137779 nodes | | 0.87s node feature "booknumber" with 137806 nodes | | 0.93s node feature "bookshort" with 137806 nodes | | 0.98s node feature "case" with 137779 nodes | | 1.04s node feature "chapter" with 158098 nodes | | 1.12s node feature "clausetype" with 132460 nodes | | 1.19s node feature "containedclause" with 137779 nodes | | 1.24s node feature "degree" with 137779 nodes | | 1.30s node feature "gloss" with 137779 nodes | | 1.35s node feature "gn" with 137779 nodes | | 1.41s node feature "junction" with 132460 nodes | | 1.47s node feature "lemma" with 137779 nodes | | 1.53s node feature "lex_dom" with 137779 nodes | | 1.58s node feature "ln" with 137779 nodes | | 1.65s node feature "monad" with 137779 nodes | | 1.70s node feature "mood" with 137779 nodes | | 1.76s node feature "morph" with 137779 nodes | | 1.83s node feature "nodeID" with 137779 nodes | | 1.88s node feature "normalized" with 137779 nodes | | 1.95s node feature "nu" with 137779 nodes | | 2.00s node feature "number" with 137779 nodes | | 2.05s node feature "orig_order" with 137779 nodes | | 2.11s node feature "person" with 137779 nodes | | 2.17s node feature "ref" with 137779 nodes | | 2.22s node feature "reference" with 137779 nodes | | 2.27s node feature "roleclausedistance" with 137779 nodes | | 2.32s node feature "rule" with 132460 nodes | | 2.39s node feature "sentence" with 137779 nodes | | 2.45s node feature "sp" with 137779 nodes | | 2.50s node feature "sp_full" with 137779 nodes | | 2.56s node feature "strongs" with 137779 nodes | | 2.61s node feature "subj_ref" with 137779 nodes | | 2.67s node feature "tense" with 137779 nodes | | 2.73s node feature "type" with 137779 nodes | | 2.78s node feature "unicode" with 137779 nodes | | 2.83s node feature "verse" with 157882 nodes | | 2.89s node feature "voice" with 137779 nodes | | 2.94s node feature "wgclass" with 132460 nodes | | 3.00s node feature "wglevel" with 132460 nodes | | 3.06s node feature "wgnum" with 132460 nodes | | 3.11s node feature "wgrole" with 132460 nodes | | 3.17s node feature "wgrolelong" with 132460 nodes | | 3.22s node feature "wgtype" with 132460 nodes | | 3.29s node feature "word" with 137779 nodes | | 3.34s node feature "wordlevel" with 137779 nodes | | 3.38s node feature "wordrole" with 137779 nodes | | 3.43s node feature "wordrolelong" with 137779 nodes | 2.94s OK 0.00s Exporting 51 node and 1 edge and 1 config features to ~/my_new_Jupyter_folder/Read_from_lowfat/data: 0.00s VALIDATING oslots feature 0.02s VALIDATING oslots feature 0.02s maxSlot= 137779 0.02s maxNode= 290629 0.04s OK: oslots is valid | 0.13s T after to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T appos to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.00s T book to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T book_long to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.12s T booknumber to 
~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T bookshort to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T case to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T chapter to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T clausetype to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T containedclause to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T degree to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T gloss to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T gn to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T junction to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.16s T lemma to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T lex_dom to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T ln to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T monad to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T mood to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T morph to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T nodeID to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.17s T normalized to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T nu to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T number to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T orig_order to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.06s T otype to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T person to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T ref to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T reference to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T roleclausedistance to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T rule to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T sentence to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T sp to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T sp_full to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T strongs to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T subj_ref to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T tense to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T type to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.16s T unicode to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.15s T verse to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T voice to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T wgclass to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.12s T wglevel to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.19s T wgnum to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T wgrole to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T wgrolelong to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T wgtype to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.16s T word to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.13s T wordlevel to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T wordrole to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.14s T wordrolelong to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.49s T oslots to ~/my_new_Jupyter_folder/Read_from_lowfat/data | 0.00s M otext to ~/my_new_Jupyter_folder/Read_from_lowfat/data 7.53s Exported 51 node features and 1 edge features and 1 config features to ~/my_new_Jupyter_folder/Read_from_lowfat/data 7.53s WARNING section chapter "1" of level 2 ends inside a verse "1" of level 3 (10 x): | | 11s WARNING section chapter "??" 
of level 2 starts inside a verse "1" of level 3 (10 x): | | 11s use `cv.walk(..., warn=True)` to stop after warnings | | 11s use `cv.walk(..., warn=None)` to suppress warnings done
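As a quick sanity check before loading from GitHub, the freshly written feature files can also be loaded straight from the local output directory; a minimal sketch (assuming the .tf files were written directly under BaseDir, as reported above):
from tf.fabric import Fabric
TFcheck = Fabric(locations=BaseDir)
api = TFcheck.loadAll()          # load every feature that was just written
F = api.F
print(F.otype.maxSlot)           # number of word (slot) nodes; expected: 137779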
The Text-Fabric data will be loaded from the GitHub repository https://github.com/tonyjurg/n1904_lft.
%load_ext autoreload
%autoreload 2
# First, I have to load the different modules that I use for analyzing the data and for plotting:
import sys, os, collections
import pandas as pd
import numpy as np
import re
from tf.fabric import Fabric
from tf.app import use
The following cell loads the Text-Fabric files from the GitHub repository.
# Loading-the-New-Testament-Text-Fabric
NA = use ("tonyjurg/n1904_lft:clone", version="0.1.9", hoist=globals())
Locating corpus resources ...
Name | # of nodes | # slots/node | % coverage |
---|---|---|---|
book | 27 | 5102.93 | 100 |
chapter | 260 | 529.92 | 100 |
verse | 7943 | 17.35 | 100 |
sentence | 12160 | 11.33 | 100 |
wg | 132460 | 6.59 | 633 |
word | 137779 | 1.00 | 100 |
Note: the implementation of how phrases should be displayed (especially with regard to conjunctions) still needs to be done.
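Once loaded, the section API can be used to fetch and display an arbitrary passage, for example (using the book/chapter/verse structure defined during the conversion):
verse_node = T.nodeFromSection(('Matthew', 1, 1))   # node of Matthew 1:1
print(T.text(verse_node))                           # Greek text, rendered with format 'text-orig-full'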
Search0 = '''
book book=Matthew
chapter chapter=1
verse
'''
Search0 = NA.search(Search0)
NA.show(Search0, start=20, end=21, condensed=True, extraFeatures={'containedclause','wordrole'}, suppress={'chapter'}, withNodes=False)
0.01s 25 results
verse 20
verse 21
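The same result set can also be rendered as a plain table instead of the pretty display, e.g.:
NA.table(Search0)    # one row per result tuple (book, chapter, verse)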
Search0 = '''
book book=Matthew
chapter chapter=1
verse verse=20
wg1:wg wgclass=cl wglevel*
word wordrole=v
word wordrole=o
word wordrole=aux
'''
Search0 = NA.search(Search0)
NA.show(Search0, start=1, end=1, condensed=True, colorMap={4:'pink', 5:'turquoise', 6:'lightblue', 7:'red'}, multiFeatures=False)
0.43s 6 results
verse 1
T.structureInfo()
A heading is a tuple of pairs (node type, feature value) of node types and features that have been configured as structural elements These 3 structural elements have been configured node type book with heading feature book node type chapter with heading feature chapter node type verse with heading feature verse You can get them as a tuple with T.headings. Structure API: T.structure(node=None) gives the structure below node, or everything if node is None T.structurePretty(node=None) prints the structure below node, or everything if node is None T.top() gives all top-level nodes T.up(node) gives the (immediate) parent node T.down(node) gives the (immediate) children nodes T.headingFromNode(node) gives the heading of a node T.nodeFromHeading(heading) gives the node of a heading T.ndFromHd complete mapping from headings to nodes T.hdFromNd complete mapping from nodes to headings T.hdMult are all headings with their nodes that occur multiple times There are 8230 structural elements in the dataset.
TF.features['otext'].metaData
{'Availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)', 'Converter_author': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands', 'Converter_execution': 'Tony Jurg, Vrije Universiteit Amsterdam, Netherlands', 'Converter_version': '0.1.9', 'Convertor_source': 'https://github.com/tonyjurg/n1904_lft', 'Data source': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat', 'Editors': 'Nestle', 'Name': 'Greek New Testament (N1904 based on Low Fat Tree)', 'TextFabric version': '11.4.10', 'Version': '1904', 'fmt:text-orig-full': '{word}{after}', 'sectionFeatures': 'book,chapter,verse', 'sectionTypes': 'book,chapter,verse', 'structureFeatures': 'book,chapter,verse', 'structureTypes': 'book,chapter,verse', 'writtenBy': 'Text-Fabric', 'dateWritten': '2023-05-10T18:14:07Z'}
!tf app:\text-fabric-data\github\tonyjurg\n1904_lft\app data:\text-fabric-data\github\tonyjurg\n1904_lft\tf\0.1.8
This is Text-Fabric 11.4.10 Connecting to running kernel via 18794 Connecting to running webserver via 28794 Opening app:/text-fabric-data/github/tonyjurg/n1904_lft/app in browser Press <Ctrl+C> to stop the TF browser