```python
# flake8: noqa
import warnings
import os

# Suppress noisy requests warnings.
warnings.filterwarnings("ignore")
os.environ["PYTHONWARNINGS"] = "ignore"
```
In this example, we will show you how to run optical character recognition (OCR) on a set of documents and analyze the resulting text with the natural language processing library spaCy. Running OCR on a large dataset is very computationally expensive, so using Ray for distributed processing can significantly speed up the analysis. Ray Data makes it easy to compose the different steps of the pipeline, namely the OCR and the natural language processing. Ray Data's actor support also allows us to be more efficient by sharing the spaCy NLP context between several data points.
To make it more interesting, we will run the analysis on the LightShot dataset. It is a large publicly available OCR dataset with a wide variety of different documents, all of them screenshots of various forms. It is easy to replace that dataset with your own data and adapt the example to your own use cases!
This tutorial will cover:

- Creating a Ray Dataset from the raw images and running OCR on it with tesseract
- Saving the OCR results to Parquet so the expensive OCR step does not need to be re-run
- Analyzing the extracted text with spaCy, using Ray Data's actor support and batched inference
Let's start by preparing the dependencies and downloading the dataset. First, we install the OCR software tesseract and its Python client:
`````{tab-set}

````{tab-item} macOS
```
brew install tesseract
pip install pytesseract
```
````

````{tab-item} Linux
```
sudo apt-get install tesseract-ocr
pip install pytesseract
```
````

`````
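To quickly verify the installation, you can ask pytesseract for the version of the tesseract binary it finds on your `PATH`. This is a small sanity check, not part of the original example:

```python
import pytesseract

# Prints the version of the tesseract binary found on the PATH;
# raises pytesseract.TesseractNotFoundError if tesseract is not installed.
print(pytesseract.get_tesseract_version())
```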
By default, the following example will run on a tiny dataset we provide. If you want to run it on the full dataset, we recommend running it on a cluster, since processing all the images with tesseract takes a long time.
```{note}
If you want to run the example on the full [LightShot](https://www.kaggle.com/datasets/datasnaek/lightshot) dataset, you need to download and extract it first. You can extract the dataset by running `unzip archive.zip` followed by `unrar x LightShot13k.rar .`, and then upload it to S3 with `aws s3 cp LightShot13k/ s3://<bucket>/<folder> --recursive`.
```
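For convenience, here are the same commands from the note collected into a single shell snippet; `<bucket>` and `<folder>` are placeholders you need to substitute with your own S3 location:

```
unzip archive.zip
unrar x LightShot13k.rar .
aws s3 cp LightShot13k/ s3://<bucket>/<folder> --recursive
```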
Let's now import Ray and initialize a local Ray cluster. If you want to run OCR at a very large scale, you should run this workload on a multi-node cluster.
```python
# Import ray and initialize a local Ray cluster.
import ray
ray.init()
```
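If you are running this example on a multi-node cluster instead, you would typically connect to the existing cluster rather than starting a local one. A minimal sketch (how the cluster is launched and addressed depends on your setup):

```python
import ray

# Connect to an already running Ray cluster; "auto" discovers the
# address of the cluster this node belongs to.
ray.init(address="auto")
```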
We can now use the {meth}`ray.data.read_binary_files <ray.data.read_binary_files>` function to read all the images from S3. We set the `include_paths=True` option to create a dataset of the S3 paths and image contents. We then run the {meth}`ds.map <ray.data.Dataset.map>` function on this dataset to execute the actual OCR process on each file and convert the screenshots into text. This creates a tabular dataset with columns `path` and `text`.
````{note}
If you want to load the data from a private bucket, you have to run

```python
import pyarrow.fs

ds = ray.data.read_binary_files("s3://<bucket>/<folder>",
    include_paths=True,
    filesystem=pyarrow.fs.S3FileSystem(
        access_key="...",
        secret_key="...",
        session_token="..."))
```
````
```python
from io import BytesIO

from PIL import Image
import pytesseract


def perform_ocr(data):
    path, img = data
    return {
        "path": path,
        "text": pytesseract.image_to_string(Image.open(BytesIO(img)))
    }


ds = ray.data.read_binary_files(
    "s3://anonymous@air-example-data/ocr_tiny_dataset",
    include_paths=True)

results = ds.map(perform_ocr)
```
Let's have a look at some of the data points with the {meth}`take <ray.data.Dataset.take>` function.

```python
results.take(10)
```
```{note}
Saving the dataset is optional; you can also continue with the in-memory data without persisting it to storage.
```
We can save the result of running tesseract on the dataset to disk so we can read it back later if we want to re-run the NLP analysis without having to re-run the OCR (which is very expensive on the whole dataset). This can be done with the {meth}`write_parquet <ray.data.Dataset.write_parquet>` function:
```python
import os

results.write_parquet(os.path.expanduser("~/LightShot13k_results"))
```
You can later reload the data with the {meth}`read_parquet <ray.data.read_parquet>` function:

```python
results = ray.data.read_parquet(os.path.expanduser("~/LightShot13k_results"))
```
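On a cluster, the head node's home directory may not be a convenient place to keep intermediate results. Since `write_parquet` and `read_parquet` also accept S3 URIs, a sketch like the following (reusing the `<bucket>`/`<folder>` placeholders from above) would persist the OCR results to object storage instead:

```python
# <bucket> and <folder> are placeholders -- substitute your own S3 location.
results.write_parquet("s3://<bucket>/<folder>/LightShot13k_results")

# Later, reload the OCR results without having to re-run tesseract.
results = ray.data.read_parquet("s3://<bucket>/<folder>/LightShot13k_results")
```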
This is the part where the fun begins. Depending on your task, you will have different post-processing needs for the extracted text.
In our specific example, let's try to determine all the documents in the LightShot dataset that are chat logs and extract named entities from those documents. We will extract this data with spaCy. Let's first make sure the libraries are installed:
```
!pip install "spacy>=3"
!python -m spacy download en_core_web_sm
!pip install spacy_langdetect
```
This is some code to determine the language of a piece of text:
```python
import spacy
from spacy.language import Language
from spacy_langdetect import LanguageDetector

nlp = spacy.load('en_core_web_sm')


@Language.factory("language_detector")
def get_lang_detector(nlp, name):
    return LanguageDetector()


nlp.add_pipe('language_detector', last=True)
nlp("This is an English sentence. Ray rocks!")._.language
```
It gives both the language and a confidence score for that language.
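Concretely, the `._.language` extension attribute holds a dictionary with the language code and the score; the value below is only illustrative:

```python
# Illustrative output -- the exact confidence score will vary.
{'language': 'en', 'score': 0.99}
```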
In order to run the code on the dataset, we should use Ray Data's built-in support for actors, since the `nlp` object is not serializable and we want to avoid having to recreate it for each individual record. We also batch the computation with the {meth}`map_batches <ray.data.Dataset.map_batches>` function to ensure spaCy can use more efficient vectorized operations where available:
```python
import spacy
from spacy.language import Language
from spacy_langdetect import LanguageDetector


class SpacyBatchInference:
    def __init__(self):
        self.nlp = spacy.load('en_core_web_sm')

        @Language.factory("language_detector")
        def get_lang_detector(nlp, name):
            return LanguageDetector()

        self.nlp.add_pipe('language_detector', last=True)

    def __call__(self, df):
        docs = list(self.nlp.pipe(list(df["text"])))
        df["language"] = [doc._.language["language"] for doc in docs]
        df["score"] = [doc._.language["score"] for doc in docs]
        return df


results.limit(10).map_batches(SpacyBatchInference, compute=ray.data.ActorPoolStrategy())
```
We can now get language statistics over the whole dataset:
```python
languages = results.map_batches(SpacyBatchInference, compute=ray.data.ActorPoolStrategy())
languages.groupby("language").count().show()
```
````{note}
On the full LightShot dataset, you would get the following:
```text
{'language': 'UNKNOWN', 'count()': 2815}
{'language': 'af', 'count()': 109}
{'language': 'ca', 'count()': 268}
{'language': 'cs', 'count()': 13}
{'language': 'cy', 'count()': 80}
{'language': 'da', 'count()': 33}
{'language': 'de', 'count()': 281}
{'language': 'en', 'count()': 5640}
{'language': 'es', 'count()': 453}
{'language': 'et', 'count()': 82}
{'language': 'fi', 'count()': 32}
{'language': 'fr', 'count()': 168}
{'language': 'hr', 'count()': 143}
{'language': 'hu', 'count()': 57}
{'language': 'id', 'count()': 128}
{'language': 'it', 'count()': 139}
{'language': 'lt', 'count()': 17}
{'language': 'lv', 'count()': 12}
{'language': 'nl', 'count()': 982}
{'language': 'no', 'count()': 56}
```
````
We can now filter to include only the English documents and also sort them according to their score.

```python
languages.filter(lambda row: row["language"] == "en").sort("score", descending=True).take(1000)
```
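The example above stops at language detection, but the stated goal also includes extracting named entities from the English documents. Here is a minimal sketch of how that could look, reusing the same actor-based `map_batches` pattern and pandas batch format; the `SpacyNER` class and `entities` column are illustrative and not part of the original example:

```python
import spacy


class SpacyNER:
    def __init__(self):
        # Reuse the small English pipeline; its "ner" component tags entities.
        self.nlp = spacy.load('en_core_web_sm')

    def __call__(self, df):
        docs = list(self.nlp.pipe(list(df["text"])))
        # Store (entity text, entity label) pairs for each document.
        df["entities"] = [
            [(ent.text, ent.label_) for ent in doc.ents] for doc in docs
        ]
        return df


english = languages.filter(lambda row: row["language"] == "en")
entities = english.map_batches(SpacyNER, compute=ray.data.ActorPoolStrategy())
entities.take(5)
```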
If you are interested in this example and want to extend it, you can run the pipeline above on the full LightShot dataset and build further analyses on top of the extracted text. Contributions that extend the example in this direction with a PR are welcome!