
Technology does not like me

To date, I have yet to analyze my selected text through any software: no matter what I do or how many problems I solve, I hit roadblock after roadblock.

As previously mentioned, I intend to analyze text from the House of Commons and Senate -- specifically the readings pertaining to the first Canadian Citizenship Act. My initial issue was that although this resource had been digitized and OCRed (Optical Character Recognition -- when software converts images of textual documents into readable, editable and searchable text), the OCR was conducted years ago and was not wholly accurate. Many words were incorrectly read, and although each page has two separate columns, the OCR sometimes read sections of them as a single column.

Therefore my first task was to remove the old, bad OCR and redo it with newer technology to improve the accuracy. On the recommendation of another digital humanities student, I spent several weeks attempting to write Python code with the help of ChatGPT. As someone with no formal training, or even basic training, in Python, this proved fairly fruitless. After some research, I found articles reporting that studies had observed a significant decline in the accuracy of ChatGPT over time (Paulo Confino, Fortune). In light of this information I switched tactics and began looking into programs that would be able to re-OCR my documents. Several individuals recommended Adobe Acrobat, but Carleton was unable to provide me with a license and I cannot pay for it with my own funds; however, Carleton was able to provide me with a license for Foxit PDF Reader. After a couple of online tutorials, I discovered how to remove OCR from my documents as well as apply it.
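For what it's worth, the workflow I was trying to build in Python can be sketched in a few lines. This is only a sketch, not what I ended up using: it assumes the third-party pdf2image and pytesseract packages are installed (they wrap the poppler and tesseract command-line tools), and the filenames are placeholders.

```python
# Rough sketch of a re-OCR workflow: render each PDF page to an image,
# OCR the image, and save the recognized text as plain text.
# pdf2image and pytesseract are assumed to be installed (third-party).

def reocr_pdf(pdf_path: str, txt_path: str, dpi: int = 300) -> None:
    """Re-OCR a scanned PDF and write the result to a .txt file."""
    # Imports are inside the function so the sketch loads even where
    # the packages are not installed.
    from pdf2image import convert_from_path
    import pytesseract

    pages = convert_from_path(pdf_path, dpi=dpi)  # higher DPI helps accuracy
    # Join page texts with a form-feed so page breaks survive in the .txt.
    text = "\n\f\n".join(pytesseract.image_to_string(page) for page in pages)
    with open(txt_path, "w", encoding="utf-8") as out:
        out.write(text)

# Example (placeholder filenames):
# reocr_pdf("commons_debates_1946.pdf", "commons_debates_1946.txt")
```

In the end Foxit did this job for me with no code at all, but this is roughly the shape of the script ChatGPT and I never quite got working.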

This new OCR did prove more accurate within Foxit PDF Reader. Excited and looking forward to finally beginning my textual analysis after months of struggling, I exported the newly re-OCRed documents to .txt files to create my corpus, only to discover that while two-column OCRed documents may read as two columns in PDF format, they revert to reading as one column in .txt format.
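One possible workaround I may try is fixing the column order after export rather than before. This is a minimal sketch only, and it rests on one big assumption: that the exported .txt preserves the page gutter as a run of spaces on each line, so every line can be split into a left half and a right half.

```python
import re

def merge_columns(two_col_text: str) -> str:
    """Fold a two-column .txt export back into a single column.

    Assumes the exporter kept the page gutter as a run of 3+ spaces on
    each line. Everything left of the gutter is emitted first, then
    everything right of it, restoring reading order.
    """
    left, right = [], []
    for line in two_col_text.splitlines():
        parts = re.split(r" {3,}", line.rstrip(), maxsplit=1)
        left.append(parts[0])
        if len(parts) == 2:
            right.append(parts[1])
    return "\n".join(left + right)

# Example:
# merge_columns("left a    right a\nleft b    right b")
# → "left a\nleft b\nright a\nright b"
```

If the OCR collapses the gutter to a single space, this breaks down, which is part of why the problem is harder than it looks.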

Another roadblock occurs...
