#!/usr/bin/env python
# coding: utf-8

# # Finding non-English newspapers in Trove
# 
# There are a growing number of non-English newspapers digitised in Trove. However, if you're only searching using English keywords, you might never know that they're there. I thought it would be useful to generate a list of non-English newspapers, but it wasn't quite as straightforward as I thought.
# 
# ## How not to do it...
# 
# My first thought was that I could start by searching for digitised newspapers amongst the library records in Trove. My theory was that the catalogue metadata would include language information. For example, you can search for newspapers using `format:Periodical/Newspaper` in the books and libraries category (or the `article` API zone). To find those that are digitised, you can add a search for 'trove.nla.gov.au'. Here's the [sort of results](https://trove.nla.gov.au/search/category/books?keyword=%22trove.nla.gov.au%22%20format%3APeriodical%2FNewspaper) you get. Unfortunately, you only get about 826 results, and there are many more newspapers than that in Trove. It seems that links to digitised newspapers are not consistently recorded.
# 
# My second approach was to get the list of digitised newspapers from the API, extract the ISSN, then use this to search for catalogue records. Here's the code snippet I used.
# 
# ``` python
# params = {
#     'zone': 'article',
#     'encoding': 'json',
#     'l-format': 'Periodical/Newspaper',
#     'reclevel': 'full',
#     'key': TROVE_API_KEY
# }
# 
# newspapers = get_newspapers()
# for newspaper in newspapers:
#     print(f'\n{newspaper["title"]}')
#     issn = newspaper.get('issn')
#     params['q'] = f'issn:{issn}'
#     response = s.get('https://api.trove.nla.gov.au/v2/result', params=params)
#     data = response.json()
#     try:
#         works = data['response']['zone'][0]['records']['work']
#     except KeyError:
#         print('Not found')
#     else:
#         for work in works:
#             print(work.get('language'))
#     if not response.from_cache:
#         time.sleep(0.2)
# ```
# 
# The main problem here is that not all titles have ISSNs. You could try searching on the titles if there's no ISSN, but this would involve a fair bit of disambiguation. In any case, in running this I discovered that while there is some language information in the metadata, it's not consistently applied. So basically a metadata-only approach is not going to work. Sigh...
# 
# ## How I actually did it
# 
# If I couldn't get language details from the metadata, then I had to try and extract them from the resources themselves. I spent quite a bit of time looking around for Python packages that provided reliable language detection. The first one I tried regularly identified Mandarin as Korean (it turns out this was a known issue). Another one sent me into dependency hell. Finally I found [pycld3](https://pypi.org/project/pycld3/), which installed with `pip` and *just worked*.
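# Before going any further, it's worth seeing the sort of prediction pycld3 returns. This is just an illustrative sketch – the sample sentence is my own, not from Trove – but it shows the fields used in the harvesting code below.
# 
# ``` python
# import cld3
# 
# # Any short piece of non-English text will do for a quick test
# sample = 'Die Zeitung erscheint jede Woche in deutscher Sprache.'
# prediction = cld3.get_language(sample)
# 
# # get_language() returns a named tuple with the predicted language code,
# # a probability, and an is_reliable flag for discarding weak guesses
# print(prediction.language, prediction.probability, prediction.is_reliable)
# ```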
# My plan was to get the list of newspapers via the API as before, then fire off an empty search for each one. I'd then loop through the results, running the language detector over the article text. I set the query parameters to retrieve the maximum number of results in one request – 100. That seemed like a reasonable sample. To give the language detector enough text to work with, I set the number of words parameter to return articles with between 100 and 1000 words. So the query parameters I used were:
# 
# ``` python
# params = {
#     'zone': 'newspaper',
#     'encoding': 'json',
#     'l-word': '100 - 1000 Words',
#     'include': 'articletext',
#     'key': TROVE_API_KEY,
#     'q': ' ',
#     'n': 100,
# }
# ```
# 
# Because some of the newspapers had short runs, and the word count filter limits the results, I found that I wasn't always getting 100 results per newspaper. To work around this, I detected the likely language of each article, aggregated the counts, and then calculated each language's share of the results. This gave me the proportion of articles in each language – a number I could compare across newspapers to find the non-English titles.
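# The aggregation step is simple enough that a toy example might help. This is just a sketch using made-up language codes, but it's the same calculation as in the harvesting loop below.
# 
# ``` python
# from collections import Counter
# 
# # Pretend these are the reliable predictions for one newspaper's sample of articles
# langs = ['de', 'de', 'de', 'en', 'de']
# 
# # Count each language, then divide by the sample size to get a proportion
# for lang, count in Counter(langs).items():
#     print(lang, count / len(langs))
# ```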
# In general this worked pretty well, and the result was a [list of 48 newspapers](non-english-newspapers.md) (also available as a [Gist](https://gist.github.com/wragge/9aa385648cff5f0de0c7d4837896df97)) that have significant amounts of non-English content. However, I had to do a fair bit of fiddling to filter out dodgy results. All the details are included below.
# 
# ## Problems / limitations
# 
# * It's no surprise that the results of the language detection are affected by the quality of the OCR.
# * In filtering out what seem to be the products of dodgy OCR, it's possible that I'm excluding some genuine non-English content.
# * I'm only detecting the predominant language of each article, so articles containing a mix of languages might be missed.
# * I'm just taking the first 100 results from a blank search in each newspaper. Larger, or more randomised, samples might produce different results.
# * Some dodgy detection results remain in the list of newspapers, but the point of this exercise was to find non-English newspapers. If you wanted to accurately determine the quantity of non-English content, you'd have to do a lot more fine-grained analysis.

# ## Import what we need

# In[1]:


import re
import time
from collections import Counter
from pathlib import Path

import altair as alt
import cld3
import pandas as pd
import pycountry
import requests
import requests_cache
from langdetect import detect
from language_tags import tags
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from tqdm.auto import tqdm

# Use a cached session so repeated runs don't re-request everything, and retry on server errors
s = requests_cache.CachedSession()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[502, 503, 504])
s.mount('https://', HTTPAdapter(max_retries=retries))
s.mount('http://', HTTPAdapter(max_retries=retries))


# In[2]:


TROVE_API_KEY = '[YOUR API KEY]'


# ## Harvest the data and run language detection on articles

# In[3]:


def get_newspapers():
    '''
    Get a list of newspapers in Trove.
    '''
    response = s.get('https://api.trove.nla.gov.au/v2/newspaper/titles', params={'encoding': 'json', 'key': TROVE_API_KEY})
    data = response.json()
    return data['response']['records']['newspaper']


# In[4]:


params = {
    'zone': 'newspaper',
    'encoding': 'json',
    # 'l-category': 'Article',
    'l-word': '100 - 1000 Words',
    'include': 'articletext',
    'key': TROVE_API_KEY,
    'q': ' ',
    'n': 100,
}

newspaper_langs = []
newspapers = get_newspapers()

for newspaper in tqdm(newspapers):
    langs = []
    # print(f'\n{newspaper["title"]}')
    params['l-title'] = newspaper['id']
    response = s.get('https://api.trove.nla.gov.au/v2/result', params=params)
    data = response.json()
    n = data['response']['zone'][0]['records']['n']
    try:
        articles = data['response']['zone'][0]['records']['article']
    except KeyError:
        # print('Not found')
        pass
    else:
        # Detect language for each article in results
        for article in articles:
            if 'articleText' in article:
                # Clean up OCRd text by removing HTML tags and extra whitespace
                text = article['articleText']
                text = re.sub(r'<[^<]+?>', '', text)
                text = re.sub(r'\s\s+', ' ', text)
                # Get the language
                ld = cld3.get_language(text)
                # If the language prediction is reliable, save it
                if ld.is_reliable:
                    langs.append(ld.language)
    # Find the count of each language detected in the sample of articles
    for lang, count in dict(Counter(langs)).items():
        # Calculate the language count as a proportion of the reliably detected articles
        prop = int(count) / len(langs)
        newspaper_langs.append({'id': newspaper['id'], 'title': newspaper['title'], 'language': lang, 'proportion': prop, 'number': n})
    # Pause between requests, but not if the response came from the cache
    if not response.from_cache:
        time.sleep(0.2)


# Convert the results into a dataframe.

# In[5]:


df = pd.DataFrame(newspaper_langs)
df.head()


# ## Add full language names
# 
# The language detector returns BCP-47-style language codes. To translate these into something that's a bit easier for humans to understand, we can use the [language-tags](https://github.com/OnroerendErfgoed/language-tags) package.

# In[50]:


def get_full_language(lc):
    '''
    Get full language names from codes.
    '''
    lang = tags.description(lc)
    if lang:
        return lang[0]
    else:
        print(lc)
        return lc

df['language_full'] = df['language'].apply(get_full_language)
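# For example, here's the sort of thing `tags.description()` gives back for a couple of codes that turn up in the results below (the codes here are just for illustration):
# 
# ``` python
# from language_tags import tags
# 
# # description() returns a list of human-readable names for a language tag
# print(tags.description('mt'))  # Maltese
# print(tags.description('gd'))  # Scottish Gaelic
# ```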
# ## Filtering the results
# 
# If we just look at the list of languages detected, we might think that Australia's cultural diversity was much greater than we expected! But the likelihood that there were ten newspapers publishing articles in Igbo (the language of the Igbo people in south-eastern Nigeria) seems small. Obviously there are a considerable number of false positives here.

# In[59]:


df['language_full'].value_counts()


# Remember that for each language detected in a newspaper we calculated the proportion of articles in our results set in that language. So we can, for example, just look at newspapers where 100% of the articles are in a single language. This highlights a few non-English newspapers, but obviously we're missing a lot of others.

# In[70]:


df.loc[df['proportion'] == 1]['language_full'].value_counts()


# If we chart the proportions, we see them bunched up at either end of the scale. So there are lots of languages detected in only a small proportion of articles.

# In[66]:


alt.Chart(df).mark_bar().encode(
    x=alt.X('proportion:Q', bin=True),
    y='count():Q'
)


# If we zoom in on the proportions less than 0.1 (that's 10 articles in a sample of 100), we see that they're mostly less than 0.01 (or 1 article in 100). It seems likely that these are false positives.

# In[72]:


alt.Chart(df.loc[df['proportion'] < 0.1]).mark_bar().encode(
    x=alt.X('proportion:Q', bin=True),
    y='count():Q'
)


# Let's be fairly conservative and filter out languages that have a proportion (per newspaper) of less than 0.05. This list seems a bit more in line with what we would expect, but there are still some surprises – 48 newspapers published articles in Maltese?

# In[74]:


df.loc[df['proportion'] >= 0.05]['language_full'].value_counts()


# If we focus on the newspapers that supposedly have a significant proportion of articles in Maltese, we see some very strange results. I seriously doubt that 80% of the *Mildura Irrigationist* from 1892-3 is in Maltese. So what's going on?

# In[76]:


df.loc[(df['proportion'] > 0.1) & (df['language_full'] == 'Maltese')]


# If you look at the results for the *Mildura Irrigationist* [in Trove](https://trove.nla.gov.au/search/advanced/category/newspapers?l-advtitle=1583&l-advWord=100%20-%201000%20Words) you'll see that many of the page images are blurry, and as a result the OCR is very, very bad. Here's a sample:
# 
# > ill Tatr W lyltwililUmt aat aa«v aa MwOkaWtOPMlkMrf faiflftMMRltitlWBfMNM fmiMW^M^K IMIOHIpM^fQBMMI ft tWMmrwl tWWiltjfNMStW ffw aailwt«M wtMitiar«lH*a ifcmH af tlw ial«««l ion «M««f ffantoif wwtMaaM. tto tf h «frwringmhw torf M hr toaiy. Im*4. ar, fc> mmirf awlUW wefllaM aA. aaytMaa. l «Wa A tfc» tow waliw Macks b aaM, b wil fVfbH Ja ^IMntaam* Mm' ls tolliac. rt Tto aad nf ttoar UhKMimiw*a afM» ftjrwl ans W l OtfWOar jpaaofTwSi aJwwr la'aahS^*— attor aakwt mm rvfimMiMh* ttoai. day - Why. aa IH thrf t«fl almd yaa."iw. aal wwifciha m OiO all tto laM amnavaA, fawawNl I r aa4 f wa* tm enr a Mtcfc tto watrr tto wiaaal m a* a* day pfaMat. aa4 (h* ilj amintir* ilm tTtsjtvL.f**' ""j •fria—lhati* tow ««4M k." tlml t | r 4m» wtn .aa rUa* I h ha«« t ctoantaf InMM* aM*toclt ttopnaMaf II It la Mat rtgM, t jmi awl a 1 : af but d awtliqg a Mr. Jafc Matwa-(MMa M t «wl y gha yaar «toa anl yaar (ma as «fpai ta af t«l. i pwwiaf Mtan (tot jw. twy MwUI «*a1 a«ry ftajr «ndl tar tlw aad annaH* a*«r aarf a««r aaria. tiaa
# 
# What happens when we feed this fragment of bad OCR to the language detector? Remarkably, the language detector is 96% sure that it's Maltese! To find out why this is the case, we'd probably have to dig into the way the language detection model was trained. But for our purposes it's enough to know that some of the languages detected seem to be the result of bad OCR.
# In[79]:


ocr = '''ill Tatr W lyltwililUmt aat aa«v aa MwOkaWtOPMlkMrf faiflftMMRltitlWBfMNM fmiMW^M^K IMIOHIpM^fQBMMI ft tWMmrwl tWWiltjfNMStW ffw aailwt«M wtMitiar«lH*a ifcmH af tlw ial«««l ion «M««f ffantoif wwtMaaM. tto tf h «frwringmhw torf M hr toaiy. Im*4. ar, fc> mmirf awlUW wefllaM aA. aaytMaa. l «Wa A tfc» tow waliw Macks b aaM, b wil fVfbH Ja ^IMntaam* Mm' ls tolliac. rt Tto aad nf ttoar UhKMimiw*a afM» ftjrwl ans W l OtfWOar jpaaofTwSi aJwwr la'aahS^*— attor aakwt mm rvfimMiMh* ttoai. day - Why. aa IH thrf t«fl almd yaa."iw. aal wwifciha m OiO all tto laM amnavaA, fawawNl I r aa4 f wa* tm enr a Mtcfc tto watrr tto wiaaal m a* a* day pfaMat. aa4 (h* ilj amintir* ilm tTtsjtvL.f**' ""j •fria—lhati* tow ««4M k." tlml t | r 4m» wtn .aa rUa* I h ha«« t ctoantaf InMM* aM*toclt ttopnaMaf II It la Mat rtgM, t jmi awl a 1 : af but d awtliqg a Mr. Jafc Matwa-(MMa M t «wl y gha yaar «toa anl yaar (ma as «fpai ta af t«l. i pwwiaf Mtan (tot jw. twy MwUI «*a1 a«ry ftajr «ndl tar tlw aad annaH* a*«r aarf a««r aaria.
tiaa'''

cld3.get_language(ocr)


# Of course, there might actually be newspapers with articles in Maltese, so we don't want to filter them all out. Instead, let's do some manual inspection of the newspapers that *seem* to have non-English content. First we'll filter our results to include only languages with proportions of at least 0.05, and then drop out newspapers where the only language detected is English. We end up with 105 different titles.

# In[89]:


# The filter on the groupby drops out newspapers that only have articles in English.
filtered = df.loc[df['proportion'] >= 0.05].groupby(by=['title', 'id']).filter(lambda x: (len(x) > 1) or (x['language'].iloc[0] != 'en'))
papers = filtered.groupby(by=['title', 'id'])
len(papers)


# Let's list those 105 newspapers. From the list below, I think it's pretty easy to pick out the results that are likely to be the product of bad OCR.

# In[86]:


for n, l in papers:
    if not l.loc[(~l['language'].isin(['en'])) & (l['proportion'] >= 0.05)].empty:
        print(f'\n{n[0]} ({n[1]})')
        display(l[['language_full', 'language', 'proportion']].loc[(l['proportion'] > 0.05)].sort_values(by='proportion', ascending=False))


# I went through the titles above and compiled a list of title identifiers that seem to be producing dodgy results. We can use this to filter these newspapers out of our results.

# In[32]:


# Titles where dodgy OCR causes false positives in language detection
# This list was manually created after scanning the results above
dodgy = ['1036', '1043', '1103', '116', '1207', '1265', '13', '1320', '1336', '140',
         '1400', '145', '1488', '1543', '1546', '1581', '1582', '1583', '1623', '1626',
         '1678', '171', '196', '213', '224', '286', '292', '318', '329', '34',
         '384', '389', '394', '418', '430', '431', '452', '479', '499', '500',
         '570', '623', '763', '810', '860', '886', '892', '906', '92', '926',
         '927', '935', '937', '94', '946', '970', '986']


# Here we'll add the dodgy title ids into our filter. It seems that we have 48 newspapers with significant amounts of non-English content.

# In[90]:


# The filter removes titles where the only language detected is English
filtered = df.loc[(~df['id'].isin(dodgy)) & (df['proportion'] >= 0.05)].groupby(by=['title', 'id']).filter(lambda x: (len(x) > 1) or (x['language'].iloc[0] != 'en'))
papers = filtered.groupby(by=['title', 'id'])
len(papers)


# Let's list them.

# In[92]:


for n, l in papers:
    print(n[0])


# That's looking pretty good. Let's save the results as a Markdown file to make them easy to explore. We'll include links into Trove. Here's the [list of all 48 newspapers](non-english-newspapers.md) (also available as a [Gist](https://gist.github.com/wragge/9aa385648cff5f0de0c7d4837896df97)).

# In[97]:


with open(Path('non-english-newspapers.md'), 'w') as md_file:
    i = 1
    for n, l in papers:
        md_file.write(f'\n### {i}. [{n[0]}](http://nla.gov.au/nla.news-title{n[1]})\n\n')
        md_file.write('| Language | Language code | Proportion of sample |\n')
        md_file.write('|---|---|---|\n')
        for row in l[['language_full', 'language', 'proportion']].loc[(l['proportion'] > 0.05)].sort_values(by='proportion', ascending=False).itertuples():
            md_file.write(f'| {row.language_full} | {row.language} | {row.proportion} |\n')
        i += 1


# If you look at the Markdown file you'll see that there are still some dodgy results – for example, 16% of the *Chinese Advertiser* is detected as 'Scottish Gaelic'. But the point of this exercise was to find non-English newspapers, rather than accurately measure the proportion of non-English content, so I think we can live with it for now.
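# If you also want to keep the underlying data, rather than just the Markdown summary, something like this would save the filtered results as a CSV (the filename here is just a suggestion):
# 
# ``` python
# # Save the filtered language proportions for later reuse
# filtered.to_csv('non-english-newspapers.csv', index=False)
# ```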
# ----
# 
# Created by [Tim Sherratt](https://timsherratt.org/) for the [GLAM Workbench](https://glam-workbench.github.io/).