Code version: 0.6 (November 17, 2023) Data version: August 16, 2023
The source data for the conversion are the LowFat XML tree files representing the macula-greek version of the Nestle 1904 Greek New Testament (British Foreign Bible Society, 1904). The starting dataset is formatted according to the syntax diagram markup by the Global Bible Initiative (GBI). The most recent source data can be found on GitHub: https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat.
Attribution: "MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/".
The production of the Text-Fabric files consists of two phases. The first is the creation of pickle files (section 2). The second is the actual Text-Fabric creation process (section 3). The process can be depicted as follows:
This script harvests all information from the LowFat tree data (XML nodes), puts it into a pandas DataFrame and stores the result per book in a pickle file. Note: pickling (in Python) is serialising an object into a disk file (or buffer). See also the Python 3 documentation.
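As a minimal illustration of this step (a sketch using a small, hypothetical DataFrame; the real script builds one DataFrame per book):

import pandas as pd
import pickle

# serialise a small DataFrame to disk and read it back unchanged
df = pd.DataFrame({'word': ['Βίβλος', 'γενέσεως'], 'verse': [1, 1]})
with open('demo.pkl', 'wb') as f:
    pickle.dump(df, f)
with open('demo.pkl', 'rb') as f:
    restored = pickle.load(f)
print(restored.equals(df))  # True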
Within the context of this script, the term 'Leaf' refers to nodes that contain the Greek word as data. These nodes are also referred to as 'terminal nodes' since they do not have any children, similar to leaves on a tree. Additionally, Parent1 represents the parent of the leaf, Parent2 represents the parent of Parent1, and so on. For a visual representation, please refer to the following diagram.
For a full description of the source data see document MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf
The scripts in this notebook require (besides Text-Fabric) the following Python libraries to be installed in the environment:
pandas openpyxl
You can install any missing library from within Jupyter Notebook using either pip or pip3 (e.g.: !pip3 install pandas).
import pandas as pd
import sys
import os
import time
import pickle
import re #regular expressions
from os import listdir
from os.path import isfile, join
import xml.etree.ElementTree as ET
The following global data initializes the script that gathers the XML data and stores it in the pickle files.
IMPORTANT: To ensure proper creation of the Text-Fabric files on your system, it is crucial to adjust the values of BaseDir, XmlDir, PklDir, and XlsxDir to match the location of the data and the operating system you are using. In this Jupyter Notebook, Windows is the operating system employed.
BaseDir = 'D:\\TF\\'
XmlDir = BaseDir+'xml\\'
PklDir = BaseDir+'pkl\\'
XlsxDir = BaseDir+'xlsx\\'
# note: create the output directory prior to running this part
# key: filename, [0]=book_long, [1]=book_num, [2]=book_short
bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],
'02-mark': ['Mark', '2', 'Mark'],
'03-luke': ['Luke', '3', 'Luke'],
'04-john': ['John', '4', 'John'],
'05-acts': ['Acts', '5', 'Acts'],
'06-romans': ['Romans', '6', 'Rom'],
'07-1corinthians': ['I_Corinthians', '7', '1Cor'],
'08-2corinthians': ['II_Corinthians', '8', '2Cor'],
'09-galatians': ['Galatians', '9', 'Gal'],
'10-ephesians': ['Ephesians', '10', 'Eph'],
'11-philippians': ['Philippians', '11', 'Phil'],
'12-colossians': ['Colossians', '12', 'Col'],
'13-1thessalonians':['I_Thessalonians', '13', '1Thess'],
'14-2thessalonians':['II_Thessalonians','14', '2Thess'],
'15-1timothy': ['I_Timothy', '15', '1Tim'],
'16-2timothy': ['II_Timothy', '16', '2Tim'],
'17-titus': ['Titus', '17', 'Titus'],
'18-philemon': ['Philemon', '18', 'Phlm'],
'19-hebrews': ['Hebrews', '19', 'Heb'],
'20-james': ['James', '20', 'Jas'],
'21-1peter': ['I_Peter', '21', '1Pet'],
'22-2peter': ['II_Peter', '22', '2Pet'],
'23-1john': ['I_John', '23', '1John'],
'24-2john': ['II_John', '24', '2John'],
'25-3john': ['III_John', '25', '3John'],
'26-jude': ['Jude', '26', 'Jude'],
'27-revelation': ['Revelation', '27', 'Rev']}
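Since the conversion below expects all 27 source files, one can optionally check up front that they are all present in XmlDir (a small sketch based on the dictionary above):

# optional sanity check: report any missing XML source file
for bo in bo2book:
    XmlFile = os.path.join(XmlDir, f'{bo}.xml')
    if not os.path.isfile(XmlFile):
        print(f'Missing source file: {XmlFile}')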
In order to be able to traverse from the 'leaves' up to the root of the tree, it is required to add information to each node pointing to its parent. The terminating nodes of an XML tree are called "leaf nodes" or "leaves." These nodes do not have any child elements and are located at the end of a branch in the XML tree. Leaf nodes contain the actual data or content of an XML document. In contrast, non-leaf nodes are called "internal nodes," which have one or more child elements.
(Attribution: the concept of following functions is taken from https://stackoverflow.com/questions/2170610/access-elementtree-node-parent-node)
def addParentInfo(et):
for child in et:
child.attrib['parent'] = et
addParentInfo(child)
def getParent(et):
if 'parent' in et.attrib:
return et.attrib['parent']
else:
return None
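The following sketch shows how the two helpers work together on a small dummy tree (note that storing an Element object inside attrib works in memory, but such a tree can no longer be serialized with ET.tostring):

# demonstrate walking from a leaf up to the root
demo = ET.fromstring('<sentence><wg><w>λόγος</w></wg></sentence>')
addParentInfo(demo)
leaf = demo.find('.//w')
node = getParent(leaf)
while node is not None:
    print(node.tag)  # prints 'wg', then 'sentence'
    node = getParent(node)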
This code processes the books in the correct order. First, it parses the XML and adds parent information to each node. Then it loops through the nodes and checks whether each one is a 'leaf' node, i.e. a node containing a single word. For each leaf node, a unique word-order number is assigned, some computed data (book, chapter, verse, etc.) is attached, the attributes of all its parent nodes are collected, and the result is appended as a row to the book's DataFrame.
Note that this script takes a long time to execute (due to the large number of iterations). However, once the XML data is converted to PKL, there is no need to rerun it (unless the source XML data is updated).
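Since a rerun is only needed when the source XML changes, the loop below could be guarded with a freshness check like the following (a sketch of a hypothetical helper; the notebook itself simply reprocesses all books):

# sketch: skip a book if its pickle file is newer than its XML source
def isUpToDate(xmlFile, pklFile):
    return os.path.isfile(pklFile) and os.path.getmtime(pklFile) >= os.path.getmtime(xmlFile)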
# set some globals
WordOrder=1 # stores the word order as it is found in the XML files (unique number for each word in the full corpus)
CollectedItems= 0
# process books in order
for bo, bookinfo in bo2book.items():
CollectedItems=0
SentenceNumber=0
WordGroupNumber=0
full_df=pd.DataFrame({})
book_long=bookinfo[0]
booknum=bookinfo[1]
book_short=bookinfo[2]
InputFile = os.path.join(XmlDir, f'{bo}.xml')
OutputFile = os.path.join(PklDir, f'{bo}.pkl')
print(f'Processing {book_long} at {InputFile}')
DataFrameList = []
# Send XML document to parsing process
tree = ET.parse(InputFile)
# Now add all the parent info to the nodes in the xtree [important!]
addParentInfo(tree.getroot())
start_time = time.time()
# walk over all the XML data
for elem in tree.iter():
if elem.tag == 'sentence':
# add running number to 'sentence' tags
SentenceNumber+=1
elem.set('SN', SentenceNumber)
# handling conditions where XML data has <error ... > by converting it to a WG
if elem.tag == 'error':
elem.tag = 'wg'
if elem.tag == 'wg':
# add running number to 'wg' tags
WordGroupNumber+=1
elem.set('WGN', WordGroupNumber)
if elem.tag == 'w':
# all nodes containing words are tagged with 'w'
# show progress on screen
CollectedItems+=1
if (CollectedItems%100==0): print (".",end='')
            # Leafref will contain a list with book, chapter, verse and word number
Leafref = re.sub(r'[!: ]'," ", elem.attrib.get('ref')).split()
#push value for word_order to element tree
elem.set('word_order', WordOrder)
WordOrder+=1
# add some important computed data to the leaf
elem.set('LeafName', elem.tag)
elem.set('word', elem.text)
elem.set('book_long', book_long)
elem.set('booknum', int(booknum))
elem.set('book_short', book_short)
elem.set('chapter', int(Leafref[1]))
elem.set('verse', int(Leafref[2]))
            # the following code will trace the parents up the tree and store the attributes found
parentnode=getParent(elem)
index=0
while (parentnode):
index+=1
elem.set('Parent{}Name'.format(index), parentnode.tag)
elem.set('Parent{}Type'.format(index), parentnode.attrib.get('type'))
elem.set('Parent{}Appos'.format(index), parentnode.attrib.get('appositioncontainer'))
elem.set('Parent{}Class'.format(index), parentnode.attrib.get('class'))
elem.set('Parent{}Rule'.format(index), parentnode.attrib.get('rule'))
elem.set('Parent{}Role'.format(index), parentnode.attrib.get('role'))
elem.set('Parent{}Cltype'.format(index), parentnode.attrib.get('cltype'))
elem.set('Parent{}Unit'.format(index), parentnode.attrib.get('unit'))
elem.set('Parent{}Junction'.format(index), parentnode.attrib.get('junction'))
elem.set('Parent{}SN'.format(index), parentnode.attrib.get('SN'))
elem.set('Parent{}WGN'.format(index), parentnode.attrib.get('WGN'))
currentnode=parentnode
parentnode=getParent(currentnode)
elem.set('parents', int(index))
#this will add all elements found in the tree to a list of dataframes
DataFrameChunk=pd.DataFrame(elem.attrib, index=['word_order'])
DataFrameList.append(DataFrameChunk)
#store the resulting DataFrame per book into a pickle file for further processing
full_df = pd.concat([df for df in DataFrameList])
output = open(r"{}".format(OutputFile), 'wb')
pickle.dump(full_df, output)
output.close()
print("\nFound ",CollectedItems, " items in %s seconds\n" % (time.time() - start_time))
Processing Acts at D:\TF\xml\05-acts.xml .......................................................................................................................................................................................
C:\Users\tonyj\AppData\Local\Temp\ipykernel_21964\1214332617.py:86: FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation. full_df = pd.concat([df for df in DataFrameList])
Found 18393 items in 62.032474994659424 seconds
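The result of this phase can be checked by loading one of the pickle files and inspecting a few rows (a sketch; any of the 27 files will do):

# quick inspection of one harvested book
with open(os.path.join(PklDir, '01-matthew.pkl'), 'rb') as f:
    df = pickle.load(f)
print(df.shape)
print(df[['word', 'book_short', 'chapter', 'verse']].head())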
This script creates the Text-Fabric files by calling the TF walker function. API info: https://annotation.github.io/text-fabric/tf/convert/walker.html
The pickle files created by the script in section 2.4 are stored in the GitHub repository under /resources/pickle.
The following global data initializes the Text-Fabric conversion script.
IMPORTANT: To ensure the proper creation of the Text-Fabric files on your system, it is crucial to adjust the values of BaseDir, PklDir, etc., to match the location of the data and the operating system you are using. This Jupyter Notebook employs the Windows operating system.
import pandas as pd
import os
import re
import gc
from tf.fabric import Fabric
from tf.convert.walker import CV
from tf.parameters import VERSION
from datetime import date
import pickle
from unidecode import unidecode
import unicodedata
BaseDir = 'D:\\TF\\'
XmlDir = BaseDir+'xml\\'
PklDir = BaseDir+'pkl\\'
XlsxDir = BaseDir+'xlsx\\'
# key: filename, [0]=book_long, [1]=book_num, [2]=book_short
bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],
'02-mark': ['Mark', '2', 'Mark'],
'03-luke': ['Luke', '3', 'Luke'],
'04-john': ['John', '4', 'John'],
'05-acts': ['Acts', '5', 'Acts'],
'06-romans': ['Romans', '6', 'Rom'],
'07-1corinthians': ['I_Corinthians', '7', '1Cor'],
'08-2corinthians': ['II_Corinthians', '8', '2Cor'],
'09-galatians': ['Galatians', '9', 'Gal'],
'10-ephesians': ['Ephesians', '10', 'Eph'],
'11-philippians': ['Philippians', '11', 'Phil'],
'12-colossians': ['Colossians', '12', 'Col'],
'13-1thessalonians':['I_Thessalonians', '13', '1Thess'],
'14-2thessalonians':['II_Thessalonians','14', '2Thess'],
'15-1timothy': ['I_Timothy', '15', '1Tim'],
'16-2timothy': ['II_Timothy', '16', '2Tim'],
'17-titus': ['Titus', '17', 'Titus'],
'18-philemon': ['Philemon', '18', 'Phlm'],
'19-hebrews': ['Hebrews', '19', 'Heb'],
'20-james': ['James', '20', 'Jas'],
'21-1peter': ['I_Peter', '21', '1Pet'],
'22-2peter': ['II_Peter', '22', '2Pet'],
'23-1john': ['I_John', '23', '1John'],
'24-2john': ['II_John', '24', '2John'],
'25-3john': ['III_John', '25', '3John'],
'26-jude': ['Jude', '26', 'Jude'],
'27-revelation': ['Revelation', '27', 'Rev']}
# uncomment to debug using only one small book
#bo2book = {'18-philemon': ['Philemon', '18', 'Phlm']}
This step is optional. It allows for manually examining the input data to the Text-Fabric conversion script.
# test: sorting the data
import openpyxl
import pickle
#if True:
for bo in bo2book:
'''
load all data into a dataframe
process books in order (bookinfo is a list!)
'''
InputFile = os.path.join(PklDir, f'{bo}.pkl')
print(f'\tloading {InputFile}...')
pkl_file = open(InputFile, 'rb')
df = pickle.load(pkl_file)
pkl_file.close()
df.to_excel(os.path.join(XlsxDir, f'{bo}.xlsx'), index=False)
loading D:\TF\pkl\18-philemon.pkl...
API info: https://annotation.github.io/text-fabric/tf/convert/walker.html
Explanatory notes about the data interpretation logic are incorporated within the Python code of the director function.
TF = Fabric(locations=BaseDir, silent=False)
cv = CV(TF)
###############################################
# Common helper functions #
###############################################
#Function to prevent errors during conversion due to missing data
def sanitize(input):
if isinstance(input, float): return ''
    if input is None: return ''
    return input
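# e.g. sanitize(float('nan')) -> ''  (missing XML attributes surface as NaN floats
# after the pandas round-trip); sanitize(None) -> ''; sanitize('np') -> 'np'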
# Function to expand the syntactic categories of words or wordgroup
# See also "MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf"
# page 5&6 (section 2.4 Syntactic Categories at Clause Level)
def ExpandRole(input):
if input=="adv": return 'Adverbial'
if input=="io": return 'Indirect Object'
if input=="o": return 'Object'
if input=="o2": return 'Second Object'
if input=="s": return 'Subject'
if input=="p": return 'Predicate'
if input=="v": return 'Verbal'
if input=="vc": return 'Verbal Copula'
if input=='aux': return 'Auxiliar'
return ''
# Function to expand the Part of Speech labels. See also the description in
# "MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf" page 6&7
# (2.2. Syntactic Categories at Word Level: Part of Speech Labels)
def ExpandSP(input):
if input=='adj': return 'Adjective'
if input=='conj': return 'Conjunction'
if input=='det': return 'Determiner'
if input=='intj': return 'Interjection'
if input=='noun': return 'Noun'
if input=='num': return 'Numeral'
if input=='prep': return 'Preposition'
if input=='ptcl': return 'Particle'
if input=='pron': return 'Pronoun'
if input=='verb': return 'Verb'
return ''
# Small function to remove accents from Greek words
def removeAccents(text):
return ''.join(c for c in unicodedata.normalize('NFD', text) if unicodedata.category(c) != 'Mn')
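# e.g. removeAccents('λόγος') -> 'λογος': NFD splits off the combining accents,
# which carry Unicode category 'Mn'; unidecode('λόγος') -> 'logos' is used
# further down for the wordtranslit feature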
###############################################
# The director routine #
###############################################
def director(cv):
###############################################
    # Initial setup of data etc. #
###############################################
NoneType = type(None) # needed as tool to validate certain data
IndexDict = {} # init an empty dictionary
WordGroupDict={} # init a dummy dictionary
PrevWordGroupSet = WordGroupSet = []
PrevWordGroupList = WordGroupList = []
RootWordGroup = 0
WordNumber=FoundWords=WordGroupTrack=0
    # The following is required to recover successfully from an abnormal condition
# in the LowFat tree data where a <wg> element is labeled as <error>
# this number is arbitrary but should be high enough not to clash with 'real' WG numbers
DummyWGN=200000 # first dummy WG number
# Following variables are used for textual critical data
criticalMarkCharacters = "[]()—"
punctuationCharacters = ",.;·"
translationTableMarkers = str.maketrans("", "", criticalMarkCharacters)
translationTablePunctuations = str.maketrans("", "", punctuationCharacters)
punctuations=('.',',',';','·')
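    # e.g. '(λόγος),'.translate(translationTableMarkers) -> 'λόγος,'
    # and '(λόγος),'.translate(translationTablePunctuations) -> '(λόγος)'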
for bo,bookinfo in bo2book.items():
###############################################
# start of section executed for each book #
###############################################
# note: bookinfo is a list! Split the data
Book = bookinfo[0]
BookNumber = int(bookinfo[1])
BookShort = bookinfo[2]
BookLoc = os.path.join(PklDir, f'{bo}.pkl')
# load data for this book into a dataframe.
# make sure wordorder is correct
print(f'\tWe are loading {BookLoc}...')
pkl_file = open(BookLoc, 'rb')
df_unsorted = pickle.load(pkl_file)
pkl_file.close()
'''
Fill dictionary of column names for this book
sort to ensure proper wordorder
'''
ItemsInRow=1
for itemname in df_unsorted.columns.to_list():
IndexDict.update({'i_{}'.format(itemname): ItemsInRow})
            # This is to identify the column containing the key to sort upon
if itemname=="{http://www.w3.org/XML/1998/namespace}id": SortKey=ItemsInRow-1
ItemsInRow+=1
df=df_unsorted.sort_values(by=df_unsorted.columns[SortKey])
del df_unsorted
# Set up nodes for new book
ThisBookPointer = cv.node('book')
cv.feature(ThisBookPointer, book=Book, booknumber=BookNumber, bookshort=BookShort)
ThisChapterPointer = cv.node('chapter')
cv.feature(ThisChapterPointer, chapter=1, book=Book)
PreviousChapter=1
ThisVersePointer = cv.node('verse')
cv.feature(ThisVersePointer, verse=1, chapter=1, book=Book)
PreviousVerse=1
ThisSentencePointer = cv.node('sentence')
cv.feature(ThisSentencePointer, sentence=1, headverse=1, chapter=1, book=Book)
PreviousSentence=1
###############################################
# Iterate through words and construct objects #
###############################################
for row in df.itertuples():
WordNumber += 1
FoundWords +=1
# Detect and act upon changes in sentences, verse and chapter
# the order of terminating and creating the nodes is critical:
# close verse - close chapter - open chapter - open verse
NumberOfParents = sanitize(row[IndexDict.get("i_parents")])
ThisSentence=int(row[IndexDict.get("i_Parent{}SN".format(NumberOfParents-1))])
ThisVerse = sanitize(row[IndexDict.get("i_verse")])
ThisChapter = sanitize(row[IndexDict.get("i_chapter")])
if (ThisVerse!=PreviousVerse):
cv.terminate(ThisVersePointer)
if (ThisSentence!=PreviousSentence):
cv.terminate(ThisSentencePointer)
if (ThisChapter!=PreviousChapter):
cv.terminate(ThisChapterPointer)
PreviousChapter = ThisChapter
ThisChapterPointer = cv.node('chapter')
cv.feature(ThisChapterPointer, chapter=ThisChapter, book=Book)
if (ThisVerse!=PreviousVerse):
PreviousVerse = ThisVerse
ThisVersePointer = cv.node('verse')
cv.feature(ThisVersePointer, verse=ThisVerse, chapter=ThisChapter, book=Book)
if (ThisSentence!=PreviousSentence):
PreviousSentence=ThisSentence
ThisSentencePointer = cv.node('sentence')
cv.feature(ThisSentencePointer, sentence=ThisSentence, headverse=ThisVerse, chapter=ThisChapter, book=Book)
###############################################
# analyze and process <WG> tags #
###############################################
PrevWordGroupList=WordGroupList
WordGroupList=[] # stores current active WordGroup numbers
            for i in range(NumberOfParents-2,0,-1): # important: reversed iteration!
                _WGN=sanitize(row[IndexDict.get("i_Parent{}WGN".format(i))]) # '' when the attribute is absent
                if _WGN!='':
WGN=int(_WGN)
if WGN!='':
WGclass=sanitize(row[IndexDict.get("i_Parent{}Class".format(i))])
WGrule=sanitize(row[IndexDict.get("i_Parent{}Rule".format(i))])
WGtype=sanitize(row[IndexDict.get("i_Parent{}Type".format(i))])
if WGclass==WGrule==WGtype=='':
WGclass='empty'
else:
#print ('---',WordGroupList)
if WGN not in WordGroupList:
WordGroupList.append(WGN)
#print(f'append WGN={WGN}')
WordGroupDict[(WGN,0)]=WGN
if WGrule[-2:]=='CL' and WGclass=='':
WGclass='cl*' # to simulate the way Logos presents this condition
WordGroupDict[(WGN,6)]=WGclass
WordGroupDict[(WGN,1)]=WGrule
WordGroupDict[(WGN,8)]=WGtype
WordGroupDict[(WGN,3)]=sanitize(row[IndexDict.get("i_Parent{}Junction".format(i))])
WordGroupDict[(WGN,2)]=sanitize(row[IndexDict.get("i_Parent{}Cltype".format(i))])
WordGroupDict[(WGN,7)]=sanitize(row[IndexDict.get("i_Parent{}Role".format(i))])
WordGroupDict[(WGN,9)]=sanitize(row[IndexDict.get("i_Parent{}Appos".format(i))]) # appos is not pressent any more in the newer dataset. kept here for the time being...
WordGroupDict[(WGN,10)]=NumberOfParents-1-i # = number of parent wordgroups
if not PrevWordGroupList==WordGroupList:
#print ('##',PrevWordGroupList,WordGroupList,NumberOfParents)
if RootWordGroup != WordGroupList[0]:
RootWordGroup = WordGroupList[0]
SuspendableWordGoupList = []
                # we have a new sentence: rebuild the suspendable wordgroup list
                # some cleaning of data may be added here to save on memory...
#for k in range(6): del WordGroupDict[item,k]
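                # Word groups can be discontinuous: a group may drop out of the current
                # parent list and reappear a few words later. Such a group is terminated
                # (suspended) below and, if it shows up again, reopened with cv.resume()
                # instead of being created anew.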
for item in reversed(PrevWordGroupList):
if (item not in WordGroupList):
# CLOSE/SUSPEND CASE
SuspendableWordGoupList.append(item)
#print ('\n close: '+str(WordGroupDict[(item,0)])+' '+ WordGroupDict[(item,6)]+' '+ WordGroupDict[(item,1)]+' '+WordGroupDict[(item,8)],end=' ')
cv.terminate(WordGroupDict[(item,4)])
for item in WordGroupList:
if (item not in PrevWordGroupList):
if (item in SuspendableWordGoupList):
# RESUME CASE
#print ('\n resume: '+str(WordGroupDict[(item,0)])+' '+ WordGroupDict[(item,6)]+' '+WordGroupDict[(item,1)]+' '+WordGroupDict[(item,8)],end=' ')
cv.resume(WordGroupDict[(item,4)])
else:
# CREATE CASE
#print ('\n create: '+str(WordGroupDict[(item,0)])+' '+ WordGroupDict[(item,6)]+' '+ WordGroupDict[(item,1)]+' '+WordGroupDict[(item,8)],end=' ')
WordGroupDict[(item,4)]=cv.node('wg')
WordGroupDict[(item,5)]=WordGroupTrack
WordGroupTrack += 1
cv.feature(WordGroupDict[(item,4)], wgnum=WordGroupDict[(item,0)], junction=WordGroupDict[(item,3)],
clausetype=WordGroupDict[(item,2)], wgrule=WordGroupDict[(item,1)], wgclass=WordGroupDict[(item,6)],
wgrole=WordGroupDict[(item,7)],wgrolelong=ExpandRole(WordGroupDict[(item,7)]),
wgtype=WordGroupDict[(item,8)],wglevel=WordGroupDict[(item,10)])
# These roles are performed either by a WG or just a single word.
Role=row[IndexDict.get("i_role")]
ValidRoles=["adv","io","o","o2","s","p","v","vc","aux"]
DistanceToRoleClause=0
if isinstance (Role,str) and Role in ValidRoles:
                # Role is assigned to this word (uniquely)
WordRole=Role
WordRoleLong=ExpandRole(WordRole)
else:
                # Role details need to be taken from some wordgroup up the tree
WordRole=WordRoleLong=''
for item in range(1,NumberOfParents-1):
Role = sanitize(row[IndexDict.get("i_Parent{}Role".format(item))])
if isinstance (Role,str) and Role in ValidRoles:
WordRole=Role
WordRoleLong=ExpandRole(WordRole)
DistanceToRoleClause=item
break
# Find the number of the WG containing the clause definition
for item in range(1,NumberOfParents-1):
WGrule = sanitize(row[IndexDict.get("i_Parent{}Rule".format(item))])
if row[IndexDict.get("i_Parent{}Class".format(item))]=='cl' or WGrule[-2:]=='CL':
ContainedClause=sanitize(row[IndexDict.get("i_Parent{}WGN".format(item))])
break
###############################################
# analyze and process <W> tags #
###############################################
# Determine syntactic categories at word level.
PartOfSpeech=sanitize(row[IndexDict.get("i_class")])
PartOfSpeechFull=ExpandSP(PartOfSpeech)
            # The following part of the code reproduces the features 'word' and 'after',
            # which currently contain incorrect data in a few specific cases.
            # See https://github.com/tonyjurg/Nestle1904LFT/blob/main/resources/identifying_odd_afters.ipynb
            # Get the word details and detect the presence of punctuation;
            # this also creates the text-critical features
rawWord=sanitize(row[IndexDict.get("i_unicode")])
cleanWord= rawWord.translate(translationTableMarkers)
rawWithoutPunctuations=rawWord.translate(translationTablePunctuations)
markBefore=markAfter=PunctuationMarkOrder=''
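            # worked examples (hypothetical words): rawWord='λόγος,' gives word='λόγος',
            # punctuation=',', after=', ' and markorder stays '' (no critical marks);
            # rawWord='—λόγος' gives word='λόγος', markBefore='—', markorder='0';
            # rawWord='λόγος—,' gives word='λόγος', punctuation=',', markAfter='—', markorder='2'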
if cleanWord[-1] in punctuations:
punctuation=cleanWord[-1]
after=punctuation+' '
word=cleanWord[:-1]
else:
after=' '
word=cleanWord
punctuation=''
if rawWithoutPunctuations!=word:
markAfter=markBefore=''
if rawWord.find(word)==0:
markAfter=rawWithoutPunctuations.replace(word,"")
if punctuation!='':
if rawWord.find(markAfter)-rawWord.find(punctuation)>0:
PunctuationMarkOrder="3" # punct. before mark
else:
PunctuationMarkOrder="2" # punct. after mark.
else:
PunctuationMarkOrder="1" #no punctuation, mark after word
else:
markBefore=rawWithoutPunctuations.replace(word,"")
PunctuationMarkOrder="0" #mark is before word
# Some attributes are not present inside some (small) books. The following is to prevent exceptions.
degree=''
if 'i_degree' in IndexDict:
degree=sanitize(row[IndexDict.get("i_degree")])
subjref=''
if 'i_subjref' in IndexDict:
subjref=sanitize(row[IndexDict.get("i_subjref")])
# Create the word slots
this_word = cv.slot()
cv.feature(this_word,
after= after,
unicode= rawWord,
word= word,
wordtranslit= unidecode(word),
wordunacc= removeAccents(word),
punctuation= punctuation,
markafter= markAfter,
markbefore= markBefore,
markorder= PunctuationMarkOrder,
monad= FoundWords,
orig_order= sanitize(row[IndexDict.get("i_word_order")]),
book= Book,
booknumber= BookNumber,
bookshort= BookShort,
chapter= ThisChapter,
ref= sanitize(row[IndexDict.get("i_ref")]),
sp= PartOfSpeech,
sp_full= PartOfSpeechFull,
verse= ThisVerse,
sentence= ThisSentence,
normalized= sanitize(row[IndexDict.get("i_normalized")]),
morph= sanitize(row[IndexDict.get("i_morph")]),
strongs= sanitize(row[IndexDict.get("i_strong")]),
lex_dom= sanitize(row[IndexDict.get("i_domain")]),
ln= sanitize(row[IndexDict.get("i_ln")]),
gloss= sanitize(row[IndexDict.get("i_gloss")]),
gn= sanitize(row[IndexDict.get("i_gender")]),
nu= sanitize(row[IndexDict.get("i_number")]),
case= sanitize(row[IndexDict.get("i_case")]),
lemma= sanitize(row[IndexDict.get("i_lemma")]),
person= sanitize(row[IndexDict.get("i_person")]),
mood= sanitize(row[IndexDict.get("i_mood")]),
tense= sanitize(row[IndexDict.get("i_tense")]),
number= sanitize(row[IndexDict.get("i_number")]),
voice= sanitize(row[IndexDict.get("i_voice")]),
degree= degree,
type= sanitize(row[IndexDict.get("i_type")]),
reference= sanitize(row[IndexDict.get("i_ref")]),
subj_ref= subjref,
nodeID= sanitize(row[4]), #this is a fixed position in dataframe
wordrole= WordRole,
wordrolelong= WordRoleLong,
wordlevel= NumberOfParents-1,
roleclausedistance = DistanceToRoleClause,
containedclause = ContainedClause
)
cv.terminate(this_word)
'''
wrap up the book. At the end of the book we need to close all nodes in proper order.
'''
# close all open WordGroup nodes
for item in WordGroupList:
#cv.feature(WordGroupDict[(item,4)], add some stats?)
cv.terminate(WordGroupDict[item,4])
cv.terminate(ThisSentencePointer)
cv.terminate(ThisVersePointer)
cv.terminate(ThisChapterPointer)
cv.terminate(ThisBookPointer)
# clear dataframe for this book, clear the index dictionary
del df
IndexDict.clear()
gc.collect()
###############################################
# end of section executed for each book #
###############################################
###############################################
# end of director function #
###############################################
###############################################
# Output definitions #
###############################################
slotType = 'word'
otext = { # dictionary of config data for sections and text formats
'fmt:text-orig-full': '{word}{after}',
'fmt:text-normalized': '{normalized}{after}',
'fmt:text-unaccented': '{wordunacc}{after}',
'fmt:text-transliterated':'{wordtranslit}{after}',
'fmt:text-critical': '{unicode} ',
'sectionTypes':'book,chapter,verse',
'sectionFeatures':'book,chapter,verse',
'structureFeatures': 'book,chapter,verse',
'structureTypes': 'book,chapter,verse',
}
# configure metadata
generic = { # dictionary of metadata meant for all features
    'textFabricVersion': '{}'.format(VERSION), # imported from tf.parameters
'xmlSourceLocation': 'https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/xml/20230816',
'xmlSourceDate': 'August 16, 2023',
'author': 'Evangelists and apostles',
'availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)',
'converters': 'Tony Jurg',
'converterSource': 'https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/converter',
'converterVersion': '0.6',
'dataSource': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes',
    'editors': 'Eberhard Nestle (1904)',
    'sourceDescription': 'Greek New Testament (British Foreign Bible Society, 1904)',
'sourceFormat': 'XML (Low Fat tree XML data)',
'title': 'Greek New Testament (Nestle1904LFT)'
}
# set of integer valued feature names
intFeatures = {
'booknumber',
'chapter',
'verse',
'sentence',
'wgnum',
'orig_order',
'monad',
'wglevel'
}
# per feature dicts with metadata
# icon provides guidance on feature maturity (✅ = trustworthy, 🆗 = usable, ⚠️ = be careful when using)
#
featureMeta = {
    'after': {'description': '✅ Characters (e.g. punctuation) following the word'},
'book': {'description': '✅ Book name (in English language)'},
'booknumber': {'description': '✅ NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},
'bookshort': {'description': '✅ Book name (abbreviated)'},
'chapter': {'description': '✅ Chapter number inside book'},
'verse': {'description': '✅ Verse number inside chapter'},
'headverse': {'description': '✅ Start verse number of a sentence'},
'sentence': {'description': '✅ Sentence number (counted per chapter)'},
'wgrule': {'description': '✅ Wordgroup rule information (e.g. Np-Appos, ClCl2, PrepNp)'},
'orig_order': {'description': '✅ Word order (in source XML file)'},
'monad': {'description': '✅ Monad (smallest token matching word order in the corpus)'},
    'word': {'description': '✅ Word as it appears in the text (excl. punctuation)'},
    'wordtranslit':{'description': '🆗 Transliteration of the text (in Latin letters, excl. punctuation)'},
    'wordunacc': {'description': '✅ Word without accents (excl. punctuation)'},
    'unicode': {'description': '✅ Word as it appears in the text in Unicode (incl. punctuation)'},
    'punctuation': {'description': '✅ Punctuation after word'},
'markafter': {'description': '🆗 Text critical marker after word'},
'markbefore': {'description': '🆗 Text critical marker before word'},
'markorder': {'description': ' Order of punctuation and text critical marker'},
'ref': {'description': '✅ Value of the ref ID (taken from XML sourcedata)'},
'sp': {'description': '✅ Part of Speech (abbreviated)'},
'sp_full': {'description': '✅ Part of Speech (long description)'},
'normalized': {'description': '✅ Surface word with accents normalized and trailing punctuations removed'},
'lemma': {'description': '✅ Lexeme (lemma)'},
'morph': {'description': '✅ Morphological tag (Sandborg-Petersen morphology)'},
# see also discussion on relation between lex_dom and ln
# @ https://github.com/Clear-Bible/macula-greek/issues/29
'lex_dom': {'description': '✅ Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)'},
    'ln': {'description': '✅ Louw-Nida lexical classification (not present everywhere?)'},
'strongs': {'description': '✅ Strongs number'},
'gloss': {'description': '✅ English gloss'},
    'gn': {'description': '✅ Grammatical gender (Masculine, Feminine, Neuter)'},
    'nu': {'description': '✅ Grammatical number (Singular, Plural)'},
    'case': {'description': '✅ Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},
    'person': {'description': '✅ Grammatical person of the verb (first, second, third)'},
    'mood': {'description': '✅ Grammatical mood of the verb (e.g. indicative, imperative)'},
    'tense': {'description': '✅ Grammatical tense of the verb (e.g. Present, Aorist)'},
    'number': {'description': '✅ Grammatical number of the verb (e.g. singular, plural)'},
    'voice': {'description': '✅ Grammatical voice of the verb (e.g. active, passive)'},
    'degree': {'description': '✅ Degree (e.g. Comparative, Superlative)'},
    'type': {'description': '✅ Grammatical type of noun or pronoun (e.g. Common, Personal)'},
    'reference': {'description': '✅ Reference (to nodeID in XML source data, not yet post-processed)'},
    'subj_ref': {'description': '🆗 Subject reference (to nodeID in XML source data, not yet post-processed)'},
'nodeID': {'description': '✅ Node ID (as in the XML source data)'},
'junction': {'description': '✅ Junction data related to a wordgroup'},
'wgnum': {'description': '✅ Wordgroup number (counted per book)'},
'wgclass': {'description': '✅ Class of the wordgroup (e.g. cl, np, vp)'},
'wgrole': {'description': '✅ Syntactical role of the wordgroup (abbreviated)'},
'wgrolelong': {'description': '✅ Syntactical role of the wordgroup (full)'},
'wordrole': {'description': '✅ Syntactical role of the word (abbreviated)'},
'wordrolelong':{'description': '✅ Syntactical role of the word (full)'},
'wgtype': {'description': '✅ Wordgroup type details (e.g. group, apposition)'},
'clausetype': {'description': '✅ Clause type details (e.g. Verbless, Minor)'},
'wglevel': {'description': '🆗 Number of the parent wordgroups for a wordgroup'},
'wordlevel': {'description': '🆗 Number of the parent wordgroups for a word'},
'roleclausedistance': {'description': '⚠️ Distance to the wordgroup defining the syntactical role of this word'},
'containedclause': {'description': '🆗 Contained clause (WG number)'}
}
###############################################
# the main function #
###############################################
good = cv.walk(
director,
slotType,
otext=otext,
generic=generic,
intFeatures=intFeatures,
featureMeta=featureMeta,
warn=True,
force=True
)
if good:
print ("done")
This is Text-Fabric 12.1.5
59 features found and 0 ignored
  0.00s Importing data from walking through the source ...
   |   0.00s Preparing metadata...
   | SECTION TYPES: book, chapter, verse
   | SECTION FEATURES: book, chapter, verse
   | STRUCTURE TYPES: book, chapter, verse
   | STRUCTURE FEATURES: book, chapter, verse
   | TEXT FEATURES:
   |    | text-critical unicode
   |    | text-normalized after, normalized
   |    | text-orig-full after, word
   |    | text-transliterated after, wordtranslit
   |    | text-unaccented after, wordunacc
   |   0.00s OK
   |   0.00s Following director...
    We are loading D:\TF\pkl\01-matthew.pkl...
    We are loading D:\TF\pkl\02-mark.pkl...
    We are loading D:\TF\pkl\03-luke.pkl...
    We are loading D:\TF\pkl\04-john.pkl...
    We are loading D:\TF\pkl\05-acts.pkl...
    We are loading D:\TF\pkl\06-romans.pkl...
    We are loading D:\TF\pkl\07-1corinthians.pkl...
    We are loading D:\TF\pkl\08-2corinthians.pkl...
    We are loading D:\TF\pkl\09-galatians.pkl...
    We are loading D:\TF\pkl\10-ephesians.pkl...
    We are loading D:\TF\pkl\11-philippians.pkl...
    We are loading D:\TF\pkl\12-colossians.pkl...
    We are loading D:\TF\pkl\13-1thessalonians.pkl...
    We are loading D:\TF\pkl\14-2thessalonians.pkl...
    We are loading D:\TF\pkl\15-1timothy.pkl...
    We are loading D:\TF\pkl\16-2timothy.pkl...
    We are loading D:\TF\pkl\17-titus.pkl...
    We are loading D:\TF\pkl\18-philemon.pkl...
    We are loading D:\TF\pkl\19-hebrews.pkl...
    We are loading D:\TF\pkl\20-james.pkl...
    We are loading D:\TF\pkl\21-1peter.pkl...
    We are loading D:\TF\pkl\22-2peter.pkl...
    We are loading D:\TF\pkl\23-1john.pkl...
    We are loading D:\TF\pkl\24-2john.pkl...
    We are loading D:\TF\pkl\25-3john.pkl...
    We are loading D:\TF\pkl\26-jude.pkl...
    We are loading D:\TF\pkl\27-revelation.pkl...
   |     47s "delete" actions: 0
   |     47s "edge" actions: 0
   |     47s "feature" actions: 259450
   |     47s "node" actions: 121671
   |     47s "resume" actions: 9626
   |     47s "slot" actions: 137779
   |     47s "terminate" actions: 269177
   |      27 x "book" node
   |     260 x "chapter" node
   |    8011 x "sentence" node
   |    7943 x "verse" node
   |  105430 x "wg" node
   |  137779 x "word" node = slot type
   |  259450 nodes of all types
   |     47s OK
   |   0.00s checking for nodes and edges ...
   |   0.00s OK
   |   0.00s checking (section) features ...
   |   0.19s OK
   |   0.00s reordering nodes ...
   |   0.00s No slot sorting needed
   |   0.03s Sorting 27 nodes of type "book"
   |   0.05s Sorting 260 nodes of type "chapter"
   |   0.06s Sorting 8011 nodes of type "sentence"
   |   0.08s Sorting 7943 nodes of type "verse"
   |   0.11s Sorting 105430 nodes of type "wg"
   |   0.23s Max node = 259450
   |   0.23s OK
   |   0.00s reassigning feature values ...
   |    | node feature "after" with 137779 nodes
   |    | node feature "book" with 154020 nodes
   |    | node feature "booknumber" with 137806 nodes
   |    | node feature "bookshort" with 137806 nodes
   |    | node feature "case" with 137779 nodes
   |    | node feature "chapter" with 153993 nodes
   |    | node feature "clausetype" with 105430 nodes
   |    | node feature "containedclause" with 137779 nodes
   |    | node feature "degree" with 137779 nodes
   |    | node feature "gloss" with 137779 nodes
   |    | node feature "gn" with 137779 nodes
   |    | node feature "headverse" with 8011 nodes
   |    | node feature "junction" with 105430 nodes
   |    | node feature "lemma" with 137779 nodes
   |    | node feature "lex_dom" with 137779 nodes
   |    | node feature "ln" with 137779 nodes
   |    | node feature "markafter" with 137779 nodes
   |    | node feature "markbefore" with 137779 nodes
   |    | node feature "markorder" with 137779 nodes
   |    | node feature "monad" with 137779 nodes
   |    | node feature "mood" with 137779 nodes
   |    | node feature "morph" with 137779 nodes
   |    | node feature "nodeID" with 137779 nodes
   |    | node feature "normalized" with 137779 nodes
   |    | node feature "nu" with 137779 nodes
   |    | node feature "number" with 137779 nodes
   |    | node feature "orig_order" with 137779 nodes
   |    | node feature "person" with 137779 nodes
   |    | node feature "punctuation" with 137779 nodes
   |    | node feature "ref" with 137779 nodes
   |    | node feature "reference" with 137779 nodes
   |    | node feature "roleclausedistance" with 137779 nodes
   |    | node feature "sentence" with 145790 nodes
   |    | node feature "sp" with 137779 nodes
   |    | node feature "sp_full" with 137779 nodes
   |    | node feature "strongs" with 137779 nodes
   |    | node feature "subj_ref" with 137779 nodes
   |    | node feature "tense" with 137779 nodes
   |    | node feature "type" with 137779 nodes
   |    | node feature "unicode" with 137779 nodes
   |    | node feature "verse" with 145722 nodes
   |    | node feature "voice" with 137779 nodes
   |    | node feature "wgclass" with 105430 nodes
   |    | node feature "wglevel" with 105430 nodes
   |    | node feature "wgnum" with 105430 nodes
   |    | node feature "wgrole" with 105430 nodes
   |    | node feature "wgrolelong" with 105430 nodes
   |    | node feature "wgrule" with 105430 nodes
   |    | node feature "wgtype" with 105430 nodes
   |    | node feature "word" with 137779 nodes
   |    | node feature "wordlevel" with 137779 nodes
   |    | node feature "wordrole" with 137779 nodes
   |    | node feature "wordrolelong" with 137779 nodes
   |    | node feature "wordtranslit" with 137779 nodes
   |    | node feature "wordunacc" with 137779 nodes
   |   2.25s OK
    50s Features ready to write
  0.00s Exporting 56 node and 1 edge and 1 config features to D:/TF:
  0.00s VALIDATING oslots feature
  0.02s VALIDATING oslots feature
  0.02s maxSlot= 137779
  0.02s maxNode= 259450
  0.03s OK: oslots is valid
   |   0.12s T after to D:/TF
   |   0.13s T book to D:/TF
   |   0.10s T booknumber to D:/TF
   |   0.13s T bookshort to D:/TF
   |   0.12s T case to D:/TF
   |   0.12s T chapter to D:/TF
   |   0.08s T clausetype to D:/TF
   |   0.11s T containedclause to D:/TF
   |   0.11s T degree to D:/TF
   |   0.13s T gloss to D:/TF
   |   0.13s T gn to D:/TF
   |   0.01s T headverse to D:/TF
   |   0.08s T junction to D:/TF
   |   0.14s T lemma to D:/TF
   |   0.12s T lex_dom to D:/TF
   |   0.12s T ln to D:/TF
   |   0.11s T markafter to D:/TF
   |   0.11s T markbefore to D:/TF
   |   0.12s T markorder to D:/TF
   |   0.11s T monad to D:/TF
   |   0.11s T mood to D:/TF
   |   0.12s T morph to D:/TF
   |   0.12s T nodeID to D:/TF
   |   0.14s T normalized to D:/TF
   |   0.12s T nu to D:/TF
   |   0.12s T number to D:/TF
   |   0.11s T orig_order to D:/TF
   |   0.04s T otype to D:/TF
   |   0.11s T person to D:/TF
   |   0.12s T punctuation to D:/TF
   |   0.12s T ref to D:/TF
   |   0.13s T reference to D:/TF
   |   0.11s T roleclausedistance to D:/TF
   |   0.12s T sentence to D:/TF
   |   0.12s T sp to D:/TF
   |   0.11s T sp_full to D:/TF
   |   0.11s T strongs to D:/TF
   |   0.11s T subj_ref to D:/TF
   |   0.12s T tense to D:/TF
   |   0.13s T type to D:/TF
   |   0.14s T unicode to D:/TF
   |   0.12s T verse to D:/TF
   |   0.12s T voice to D:/TF
   |   0.10s T wgclass to D:/TF
   |   0.08s T wglevel to D:/TF
   |   0.09s T wgnum to D:/TF
   |   0.09s T wgrole to D:/TF
   |   0.09s T wgrolelong to D:/TF
   |   0.11s T wgrule to D:/TF
   |   0.09s T wgtype to D:/TF
   |   0.14s T word to D:/TF
   |   0.11s T wordlevel to D:/TF
   |   0.13s T wordrole to D:/TF
   |   0.12s T wordrolelong to D:/TF
   |   0.13s T wordtranslit to D:/TF
   |   0.14s T wordunacc to D:/TF
   |   0.32s T oslots to D:/TF
   |   0.00s M otext to D:/TF
  6.68s Exported 56 node features and 1 edge features and 1 config features to D:/TF
done
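As a quick check of the newly created dataset, the Text-Fabric files can be loaded straight from the output directory (a minimal sketch; it assumes the files were written to D:\TF as configured above):

from tf.fabric import Fabric

# load the freshly built dataset and print the first verse of Matthew
TF = Fabric(locations='D:\\TF')
api = TF.loadAll()
api.makeAvailableIn(globals())
firstVerse = T.nodeFromSection(('Matthew', 1, 1))
print(T.text(firstVerse, fmt='text-orig-full'))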