Presentation Information
This blog post is based on my five-minute lightning talk at the Symposium Artificial Intelligence in Open, Social Scholarship: Canadian and Brazilian Contexts.
The symposium was held on January 21, 2026, in the Digital Scholarship Commons at the University of Victoria Libraries, and was hosted by the Electronic Textual Cultures Lab.
- Presentation Slides (please look at the speaker notes)
- Companion Infographic in English
- Companion Infographic in Portuguese (Infográfico complementar em português)
Introduction
Back in 2014, as a graduate Research Assistant, I was assigned a project to do sentiment analysis on over 2,000 tweets!
- The work was boring, but the results were quite interesting. I call this type of work "digital ditch digging"!
- This is why I created a workshop activity for the Digital Scholarship Commons that uses a GenAI tool running locally on a laptop to perform sentiment analysis on individual social media posts or short-form survey feedback, for example.
- By default, the model we use also helpfully explains the reasoning behind the sentiment it assigns to each tweet or text.
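The post doesn't name the specific tool or model from the workshop, but as a rough sketch of what locally run sentiment analysis with a built-in explanation can look like, here is a minimal example. It assumes the Ollama Python client and a small local model such as llama3.2, both of which are my assumptions rather than the workshop's actual setup:

```python
# Minimal sketch of local sentiment analysis with a one-sentence rationale.
# Assumes Ollama is installed and a small local model (here "llama3.2")
# has already been pulled; the workshop may use a different tool entirely.
import ollama

PROMPT = (
    "Classify the sentiment of the following text as Positive, Negative, or "
    "Neutral, then explain your reasoning in one sentence.\n\nText: {text}"
)

def classify_sentiment(text: str, model: str = "llama3.2") -> str:
    """Return the model's sentiment label plus its short explanation."""
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(classify_sentiment("The workshop was short, but I learned a lot!"))
```

Because everything runs on the laptop, no survey or social media data has to leave the researcher's machine.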
Principle 0: GenAI Is a Double-Edged Sword

GenAI is like a double-edged sword: very useful in the hands of an adult, but very dangerous in the hands of a child.
GenAI tools in research contexts:
- Can be very useful if you are a subject expert and can validate their outputs simply by reading them.
- Are less useful if you have to validate every fact and claim they generate.
- Are potentially dangerous if you don’t validate the facts and claims they generate, which could seriously damage your professional reputation.
Principle 1: Automate Repetitive Tasks but Leave Humans in the Loop
Maintain “human-in-the-loop” control for any low-risk research task that you choose to outsource to a GenAI tool. If you cannot explain the logic of a GenAI-generated conclusion, you should not automate the task.
Lower Risk Tasks to Automate:
- Initial OCR Correction: A first pass before human review.
- Basic Sentiment Analysis: Especially on large amounts of short-form text that would be difficult or impossible to do otherwise. Samples would be validated before proceeding to the complete data set (see the sketch after this list).
- Translating Summaries: Scan research in languages you can’t read to identify papers worth having translated.
- Code Generation: Writing scripts for data scraping. Always validate logic & outputs.
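To make the “validate samples first” step above concrete, here is a small sketch of a human-in-the-loop workflow: label a random sample, pause for a person to review those labels, and only then run the full data set. The `classify` callable is whatever labelling function you trust enough to spot-check, for example the hypothetical `classify_sentiment` helper sketched earlier; none of this code comes from the talk itself.

```python
# Sketch of a human-in-the-loop check: label a small random sample,
# let a person review the results, and only then automate the full run.
import random
from typing import Callable

def human_in_the_loop_run(
    texts: list[str],
    classify: Callable[[str], str],
    sample_size: int = 10,
) -> list[str]:
    """Ask for human sign-off on a sample before labelling everything."""
    sample = random.sample(texts, min(sample_size, len(texts)))
    print("Spot-check these sample labels before the full run:")
    for text in sample:
        print(f"- {text!r}\n  -> {classify(text)}")
    answer = input("Proceed with the full data set? (y/n) ").strip().lower()
    if answer != "y":
        raise SystemExit("Stopped: revise the prompt or model before automating.")
    return [classify(text) for text in texts]
```

The point of the explicit pause is that a human reads real outputs and signs off before any large-scale automation happens.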
Higher Risk Tasks to Automate:
- Synthesis of Evidence: Constructing arguments from contradictory sources.
- Close Reading: Detecting irony, context, and nuance.
- Defining Research Questions: Deciding what is “worth” studying based on human values.
Principle 2: Preserve Serendipity & Context
Avoid “Black Box” automation that collapses the messiness of research into statistical averages or “Regression to the Mean.”
Lower Risk Tasks to Automate:
- Preliminary Thematic Tagging: Identify word and concept frequencies in very large bodies of text. Always validate (a simple frequency sketch follows this list).
- Metadata Extraction: Harvesting bibliographic data.
- Summarization of Logistical Documents: Condensing grant guidelines or meeting transcripts.
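As a plain-Python illustration of the word-frequency side of preliminary tagging (my own sketch, not a tool from the talk), a simple count can surface candidate themes that a human then validates before any GenAI-generated tags are trusted:

```python
# Minimal sketch: surface candidate themes by counting word frequencies.
# The stopword list is a tiny placeholder; results still need human review.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "was"}

def top_terms(documents: list[str], n: int = 20) -> list[tuple[str, int]]:
    """Return the n most frequent non-stopword terms across all documents."""
    counts: Counter[str] = Counter()
    for doc in documents:
        words = re.findall(r"[a-z']+", doc.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

if __name__ == "__main__":
    docs = ["The archive visit was inspiring.", "Archive metadata needs cleanup."]
    print(top_terms(docs, n=5))
```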
Higher Risk Tasks to Automate:
- Archival Serendipity: Browsing physical archives for “accidental” connections that AI tools would filter out.
- Contextualizing Nuance: Interpreting subtext that relies on historical, societal or organizational context.
- Ethical Evaluation: Assessing potentially sensitive archival materials for possible inclusion in your research.
Conclusions
One more metaphor that I find helpful is to think of GenAI tools as Research Assistants who want to please us and occasionally make things up to meet the requests we give them. So how do I work with RAs like this?
- I check their work carefully for inaccuracies and omissions
- I keep their limitations in mind before assigning them a task
- I give them specific directions when I do assign a task
I hope you find this helpful in some way.
AI Disclosure

https://aiusagefacts.abacusai.app/
