Deep Aramaic: Towards a synthetic data paradigm enabling machine learning in epigraphy.

Bibliographic Details
Title: Deep Aramaic: Towards a synthetic data paradigm enabling machine learning in epigraphy.
Authors: Aioanei, Andrei C. (aaioanei@proton.me); Hunziker-Rodewald, Regine R.; Klein, Konstantin M.; Michels, Dominik L.
Source: PLoS ONE, Vol. 19, Issue 4 (19 April 2024), pp. 1-29.
Subject Terms: *INSCRIPTIONS, *ARTIFICIAL intelligence, *ENGINEERS, *COVER letters
Abstract: Epigraphy is witnessing a growing integration of artificial intelligence, notably through its subfield of machine learning (ML), especially in tasks such as extracting insights from ancient inscriptions. However, the scarcity of labeled data for training ML algorithms severely limits current techniques, especially for ancient scripts like Old Aramaic. Our research pioneers a methodology for generating synthetic training data tailored to Old Aramaic letters. Our pipeline synthesizes photo-realistic Aramaic letter datasets, incorporating textural features, lighting, damage, and augmentations to mimic real-world inscription diversity. Despite having only minimal real examples, we engineer a dataset of 250,000 training and 25,000 validation images covering the 22 letter classes of the Aramaic alphabet. This comprehensive corpus provides a robust volume of data for training a residual neural network (ResNet) to classify highly degraded Aramaic letters. The ResNet model achieves 95% accuracy in classifying real images from the 8th-century BCE Hadad statue inscription. Additional experiments validate performance on varying materials and styles, demonstrating effective generalization. Our results confirm the model's ability to handle diverse real-world scenarios, establishing the viability of our synthetic data approach and removing the dependence on scarce training data that has constrained epigraphic analysis. Our framework thus improves interpretation accuracy on damaged inscriptions, enhancing knowledge extraction from these historical resources. [ABSTRACT FROM AUTHOR]
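Editor's note: the abstract describes training a ResNet classifier on synthetic images covering the 22 Aramaic letter classes. The Python/PyTorch sketch below illustrates one way such a training setup could look; it is not the authors' implementation, and the ResNet variant (resnet50), image size, augmentation choices, directory layout ("synthetic_aramaic/train"), and hyperparameters are all assumptions made for illustration.

# Minimal sketch (not the authors' code): train a ResNet on synthetic Aramaic letter images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

NUM_CLASSES = 22  # letter classes of the Aramaic alphabet (from the abstract)

# Augmentations loosely mimicking the variation described in the abstract
# (lighting changes, geometric jitter); the actual pipeline is an assumption here.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: one subdirectory per letter class.
train_ds = datasets.ImageFolder("synthetic_aramaic/train", transform=train_tf)
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=4)

# ResNet backbone with a 22-way classification head; ResNet-50 is an assumption,
# the abstract only states that a residual neural network was used.
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):  # epoch count chosen arbitrarily for the sketch
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()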
Database: Academic Search Premier