This lesson from Programming Historian introduces the basic use of Map Warper for working with historical maps. It guides you from upload to export, demonstrating methods for georeferencing and producing visualizations.
In this video, presented as part of the Friday Frontiers series, Bernard Pochet traces the evolution of Open Science at the University of Liège in the early 2000s, focusing on Open Access and the implementation of a Diamond Open Access journal publishing platform (PoPuPS) and an institutional repository (ORBi).
In this lesson, you will learn how to apply a Generative Pre-trained Transformer language model to a large-scale corpus so that you can locate broad themes and trends within written text.
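As a taste of the workflow, the sketch below (not the lesson's own code) shows how a GPT-style model can be loaded and prompted with the Hugging Face transformers library; the model name and prompt are illustrative placeholders, and the lesson works with its own corpus and fine-tuning steps.

```python
# Minimal sketch: loading a GPT-style model and generating text with Hugging Face transformers.
# The model name and prompt below are illustrative placeholders, not the lesson's own choices.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "In the nineteenth century, the railway transformed daily life by",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```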
This resource from the CLS INFRA project offers an introduction to several research areas and issues that are prominent within Computational Literary Studies (CLS), including authorship attribution, literary history, literary genre, gender in literature, and canonicity/prestige, as well as to several key methodological concerns that arise when performing research in CLS.
This tutorial introduces the concept of photogrammetry and its application using Kiri Engine, a 3D scanning app. It guides users through preparing an object for scanning, capturing photos, and creating a 3D model in Kiri Engine.
This three-day international training school in Knowledge Extraction from Text from the CLS Infra project offered a crash course in how to “Dig for Gold” in a corpus of texts. From stylometry to Natural Language Processing, learners could follow along using 'plug and play' tools, while also getting a brief introduction to Python and R.
This is the first of a two-part lesson introducing deep learning-based computer vision methods for humanities research. Using a dataset of historical newspaper advertisements and the fastai Python library, the lesson walks through the pipeline of training a computer vision model to perform image classification.
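For orientation, a minimal sketch of this kind of fastai training loop is shown below; the folder name and model choice are assumptions for illustration, not the lesson's actual dataset or code.

```python
# Minimal sketch of image classification with fastai (not the lesson's exact code).
# Assumes a hypothetical folder "newspaper_ads/" with one subfolder of images per class.
from fastai.vision.all import *

dls = ImageDataLoaders.from_folder(
    Path("newspaper_ads"), valid_pct=0.2, item_tfms=Resize(224)
)
learn = vision_learner(dls, resnet18, metrics=accuracy)  # transfer learning from a pretrained model
learn.fine_tune(1)                                       # one quick fine-tuning pass
```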
This is the second of a two-part lesson introducing deep learning-based computer vision methods for humanities research. It digs deeper into the details of training a computer vision model, covering challenges that can arise from the training data used, the importance of choosing an appropriate metric for your model, and methods for evaluating a model's performance.
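As a quick illustration of why metric choice matters (a toy example, not drawn from the lesson), accuracy can look deceptively strong on imbalanced data while a metric such as F1 exposes a model that ignores the minority class:

```python
# Toy example: on imbalanced labels, a model that never predicts the rare class
# still scores 95% accuracy, while its F1 score for that class is 0.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 95 + [1] * 5   # 5% positive class
y_pred = [0] * 100            # model always predicts the majority class
print(accuracy_score(y_true, y_pred))              # 0.95
print(f1_score(y_true, y_pred, zero_division=0))   # 0.0
```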
This is the second in a two-part lesson focusing on regression analysis. It provides an overview of logistic regression, shows how to use Python (Scikit-learn) to build a logistic regression model, and discusses how to interpret the results of such an analysis.
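A minimal Scikit-learn sketch of the kind of model the lesson builds might look like the following; the synthetic dataset here is a stand-in for the lesson's own data, not its actual features.

```python
# Minimal sketch: fitting and evaluating a logistic regression model with scikit-learn.
# Uses a synthetic dataset as a stand-in for the lesson's own data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```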
How would you as a person with deafblindness navigate the world – a world filled with navigation and mobility challenges, inaccessible information, and technologies that rely on the senses of sight and hearing? In this talk, Nasrine Olson (PhD, Associate Professor) introduces the idea behind the formation of the Centre for Inclusive Studies at the University of Borås and presents a few projects that have explored ways in which technology can be leveraged to level the playing field.
This is the first in a two-part lesson focusing on an indispensable set of data analysis methods: logistic and linear regression. It provides an overview of linear regression and walks through running both algorithms in Python (using Scikit-learn). The lesson also discusses how to interpret the results of a regression model and some common pitfalls to avoid.
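For reference, a minimal linear regression sketch in Scikit-learn follows; the synthetic data is an assumption for illustration and not the lesson's dataset.

```python
# Minimal sketch: fitting a linear regression model with scikit-learn and
# checking its fit on held-out data. Synthetic data stands in for the lesson's dataset.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=3, noise=15.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("Coefficients:", model.coef_)
print("R^2 on held-out data:", model.score(X_test, y_test))
```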