### Feeling the Past
*Feeling the Past* is an interactive, text-based educational game that comments on the role of objects in historical source selection. In the game the player, slipping into the role of a master's student doing an internship in a museum, is confronted with various tasks regarding objects and their role in historical research. The goal is to illustrate that objects are an undervalued source group in historical research and that a lot can be gained from using them as sources. For more discussion see the reflection piece [here](https://feeling-the-past.readthedocs.io/en/latest/).
For a view of the planning process at an early stage, see the file session-7.md, also found in this directory.
The game is publicly available and can be played on [*Lives in Transit*](https://livesintransit.org/login).
The repository for the game is hosted on [GitHub](https://github.com/henokemp/lit-feelingthepast).
# Session 7 *Feeling the Past* (Working Title)
In this session we're going to look at my (Henrik's) project ideas. Because I've talked, written and babbled about it in various places over the past few weeks I'll introduce it once more from the ground up.
As you'll probably remember, my project is all about *Objects* in scientific History, specifically how *Objects* as primary sources are underrepresented. I made this observation while working at the *Historisches und Völkerkunde Museum St.Gallen* during the summer of 2019, when I encountered a wide variety of *Objects* up close for the first time. Before that I had only very rarely had anything to do with non-text-based sources.
In my opinion there is a lot to be gained from using more *Objects* as sources for historical papers, or from considering them in research more generally. There is simply a fragment of the past missing if historians disregard them and focus only on text-based sources. Another important point is that arguments can be strengthened by adding a more varied collection of primary sources.
The reasons for the current deficit can be found mainly in two fields:
#### Accessibility
One reason why text-based sources are currently so dominant in scientific History is accessibility. Written sources can easily be replicated digitally without losing much of what I'll be calling their **source-value** in comparison with the original. Thanks to the internet they can be easily distributed around the world with two clicks.
This is not the case for objects. The digital representation in the form of database entries has lost a lot of **source-value** compared with the experience of actually having the object in front of you and being able to feel it. For an example of a typical database entry for an object, take a look at the [online archive of the British Museum](https://research.britishmuseum.org/research/collection_online/search.aspx).
#### Pedagogy
The other reason why *Objects* play only a minor role in historical source selection is that working with them is, in most cases, simply not taught at university. I personally had to get out of university to realize that there is a lot more than just texts that can be used as historical sources. I've talked with other students about it and their experience was similar. Admittedly, working with *Objects* as sources is in most cases more abstract and difficult than using texts, but this shouldn't be a reason not to do it, as there is much to gain.
## The Project *Feeling the Past* (Working Title)
The goal of my project is to combine two ideas. The plan is to follow @martin.dusinberre's suggestion and create a *Lives in Transit* chapter. In this chapter I plan to conceptually introduce *Objects* as sources to the player using the narrative structure of *Marugoto*. The aim is to influence the way a historian might think about their source selection and get more *Objects* into scientific history.
The second idea is packaged inside the first one. Because I've identified a deficit in the way *Objects* are digitally represented, I want to think about how to reconceptualize datasets for *Objects* in such a way as to raise their **source-value**. I won't try to actually build a database/website that utilizes a new way of representation. The idea is more to have the player in the LiT chapter access a (reconceptualized) dataset from a hip, new archive far away from their home base, based on which they have to start thinking about how they might use the given information for their historical analysis.
## Session 7
During Session 7 I primarily want to talk about how historians perceive objects and how they might write History with them. I'll allow myself to open an issue to give some of you homework regarding objects. Depending on how far I get by next Tuesday, I may be able to give a rough run-down of the storyline and what I want to achieve with the game.
## Homework
As preparation for Session 7 I want you to look at the BBC podcast [A History of the World in 100 Objects](http://www.bbc.co.uk/ahistoryoftheworld/about/british-museum-objects/). It doesn't really matter which objects specifically you look at/listen to/read about, but pay attention to the information you're given. Maybe compare it to the information you can access via the online archive of the British Museum (link above). Additionally, I want to ask three of you to pick an object from your current place of stay and think about how you might use it as a source for historical work.
## Linux Journey
After being curious for a while and toying with the idea of trying out Linux, I -- motivated by Daniel McDonald -- decided that trying out a new OS in parallel to the other new digital stuff from this course shouldn't be too big of a bother. So I began my *Linux Journey* as described in journey_henrik.md. It's not strictly limited to Linux, as it progressed in parallel with learning Git. At the end are some copied commands, either annotated by Daniel McDonald or ones I began using along the way.
Personally I feel like I've found a new home in Linux, for working that is. It has a rather steep learning curve, but the productivity benefits are definitely worth it. I was lucky to have someone to ask when I ran into problems, but the internet also holds a plethora of information and solutions for any problem. I would recommend Linux to anyone wanting to be more productive in their workflow and to have more influence over their OS.
I solved it by using third-party software, even though sadly it wasn't open source.
### Second Update (22:40)
![And we're live](live.png "And we're live")
... Though I haven't figured out how to embed this picture. Hmm, this seems to be how it's done.
### First impressions
You do this by looking for an option in the editor of your choice called `Create new branch`.
Basically, that's it. Like the link above says, `git clone` is supposed to do that (it's not a bug, it's a feature!). In my opinion it's not really self-explanatory, so it might result in some confusion.
Small edit: So I found out how it works in Sublime (both on Linux and Windows), but I'm still struggling to repeat the same in Atom. I'm able to pull all the branches and check out the branch I want in Git Bash, but then getting the branch into Atom appears to be difficult.
Small edit-edit: Okay, I think I figured out how it works in Atom. After doing what I described in the previous paragraph, one has to go to the branch selection, go to `New Branch`, type in the same name as the branch, and then fetch said branch via right-click and Fetch. I'm not sure if this is the intended way, but it works. To be honest, doing it in Sublime was much more intuitive.
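For reference, the plain-terminal equivalent of all this clicking is only a few commands. A rough sketch, with `my-branch` standing in as a placeholder for whichever branch you want:

```bash
git fetch origin                             # download all branches from the remote
git branch -a                                # list local and remote branches
git checkout -b my-branch origin/my-branch   # create a local branch that tracks the remote one
```

Once the branch exists locally, the editor should pick it up, since it just reads the repository on disk.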
### Updates (23.3.20)
In other news, I'm still having some trouble with my touchpad, so I started disabling it via the terminal (just Linux things), which works pretty well as I'm slowly getting more adept at navigating with my keyboard.
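In case it helps anyone else, here is a sketch of the kind of commands involved on a standard (X11-based) Ubuntu install; the device id varies from machine to machine, so check the list first:

```bash
xinput list          # list all input devices and note the touchpad's id
xinput disable 12    # disable the device with that id (12 is just an example); "enable" turns it back on
```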
I'm slowly beginning to really like Ubuntu, even though it at times can be frustrating. It's a love-hate relationship. The great thing about Linux is: it does what you tell it to do. The bad thing about Linux is: it does what you tell it to do. I think as soon as I'm settled in with all the software I need and am comfortable with everything, it'll be a very enjoyable experience working on Linux. What I really need to get into are the different file-types on Linux. I'm sometimes not sure how to do installations or what to do with other files. But I think this will improve over time.
### (26.3.20)
So, I like Linux. Outside the whole Git/Atom/coding stuff it's just really nice to work with. Opening stuff with one click (for example in Files) is so comfortable. Most software I use also remembers which tabs I had open after I restart the computer, which saves me some thought every time I have to restart. After finding the right software (PDF viewer, text editor etc.) for my needs, everything works well, without having to wait for features to load that I don't even use. It's quick, intuitive, and visually appealing. So after about 1.5 weeks in quarantine and with Linux, I definitely recommend Linux to anyone who wants to improve their workflow. It has a steep learning curve, but after a few weeks it just feels nice to work with. Like I already said, it does what you tell it to do. That's a nice thing and a bad thing.
I'm currently thinking about and preparing to write either my current papers or my DigiZeit paper in LaTeX. My expectation is that the learning experience is going to be similar, but the nerves saved on Word formatting will be worth it.
Let's see how that goes; I'm going to keep updating this .md or a separate one. Maybe even in a LaTeX-based PDF file.
### Competent Linux guidance
So that Danny's work is not in vain, I'll be adding some of his "better-written" Linux summaries below.
Since you might find yourself inside the terminal (command prompt) more often on Linux, do try to document which commands you find yourself using, and what they do. You'll probably paste a few in from Google/Stack Overflow answers. Some of the common commands (a short example session follows the list):
* `ls` -> show contents of directory (`ls -alh`, show it more readably)
* `cd` -> change directory (learn `cd /`, `cd ~`, `cd ..` for moving around using relative, rather than absolute paths to things)
* `find` -> print paths to files/folders matching some criteria
* `grep` -> use regular expressions to find lines in files that match some search string
* `cat <file.txt>` -> print contents of file to terminal
* `less <file.txt>` -> view a file one screen at a time, letting you scroll up and down. Press `q` to get out
* `echo "something"` -> print "something" in terminal
* `mkdir my-folder` / `touch file.md` -> make new folders/files
* `history` -> print my previously entered commands
* `apt` / `apt-get` -> Linux commands for installing packages
* `clear` -> clear your terminal screen
* `yes` -> keep printing the word yes, truly amazing command
* `mv <source-path> <target-path>` -> move a file from source to target
* `cp -R <source-dir> <target-dir>` -> copy a directory and its contents recursively to a new location
* `rm file.txt` -> delete a file
* `rm -r -f <directory>` -> remove a folder and everything inside it recursively. be careful with this, you could rm most of your system by mistake...
* `man <other-command>` -> show me the manual (i.e. documentation) for another command. Learn to read these man pages, they typically have all the info you need
* `ctrl+r` -> find from your command history (really useful, learn to use this one as an instinct!)
* `git` -> this one you know!
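Just as an illustration, here is what a few of these might look like strung together in one terminal session (the paths are placeholders for wherever your own files live):

```bash
cd ~/digizeit                  # jump into the course repo, assuming it was cloned to your home folder
ls -alh                        # list everything here, including hidden files, in a readable format
find . -name "*.md"            # print paths to all Markdown files below the current directory
grep -r "Objects" projects/    # search recursively for lines containing the string "Objects"
man grep                       # read the manual page for grep (quit with q)
```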
All programming is, is stringing together a bunch of commands in a file, and then executing that file (i.e. running the script). For example, you could write a nice little script that pulls the latest for every git repository in your home folder. It might look something like:
```bash
#!/usr/bin/env bash
cd ~ # go to your home folder
cd digizeit # go into our repo, assuming it was cloned to ~/digizeit
git checkout master # make sure you're on master branch
git pull # get latest
cd .. # go to parent directory
cd <other-project> # some other project you're working on, same thing
git checkout master
git pull
cd ~
echo "Finished! I am now a computer programmer"
```
The first line of the script, called a shebang, tells your system which language the program is written in (bash, the default language of your terminal). You'd save this as `update.sh`, then you'd tell Linux you want to be able to execute the file with:
```bash
chmod +x update.sh
```
You can then do `./update.sh` to run your script. If it prints the Finished line, it worked, and you're a programmer now. You can even move the file to `/usr/bin`, and then you can run it from any directory at any time just by typing `update.sh`! But, since you're a programmer now, you actually make a repository containing all your cool scripts, and you keep this and other scripts that you write under version control in that repository. That way you can improve it over time, while keeping its history, blah blah.
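Put together, that workflow might look something like the sketch below (the `sudo` is needed because `/usr/bin` is a system directory; many people keep personal scripts in `/usr/local/bin` or `~/bin` instead, but the idea is the same):

```bash
chmod +x update.sh            # tell Linux the file may be executed
./update.sh                   # run it from the current directory
sudo mv update.sh /usr/bin/   # optional: put it somewhere on your PATH
update.sh                     # now it runs from any directory, no ./ needed
```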
#### One interesting thing to try:
Open a terminal and use the following command to install telnet:
```bash
sudo apt install telnet
```
And then use this command to watch Star Wars in ASCII art.
```bash
telnet towel.blinkenlights.nl
```
One other thing you'll want to do quite quickly: install Atom and/or Sublime Text, and then set one of them as your default editor in the terminal, so that files are opened for editing using those apps rather than in the terminal's default (terminal-based) editor (nano or vim, both of which require some practice).
https://askubuntu.com/questions/777410/how-to-set-atom-editor-to-main-editor
These instructions tell you something really valuable: that you can add lines to this `~/.bashrc` file. This file contains your terminal preferences, shortcuts and so on. For example, I add:
```bash
alias la="ls -alh"
```
And then do `source ~/.bashrc` to reload the settings. This gives me a new command, `la`, which is just the same as doing `ls -alh` but shorter. So yeah, back to the *point*, you use this same preferences file to store your preferred text editor, so that when you do things in terminal requiring a text editor, it'll open the file with Atom, much easier than learning `vim` right now.
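A minimal sketch of what that might look like, assuming Atom is installed and following the idea in the link above (the `--wait` flag just makes the terminal wait until you close the file):

```bash
# lines you might add to ~/.bashrc to make Atom the default editor
export EDITOR="atom --wait"   # programs run in the terminal (e.g. git) will open Atom and wait
export VISUAL="atom --wait"
```

As with the alias, reload the settings afterwards with `source ~/.bashrc`.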
You can find everything about our project here:
[Project Page](https://gitlab.uzh.ch/maryam.joseph/historians-and-corona)
and our thought process up to that point here: (#14)
## My project
Please check out my project's GitLab page [here](https://gitlab.uzh.ch/patrick.gut/blended-readings)!
# Student projects (assessable task 1)
> This directory contains spaces for each student assignment for the first assessable task. They may simply be links to other repositories, branches, etc., as students have had the choice in how they create and submit their work.
## Assignment 1
Requirements for these projects are not particularly strict. That said, they must target (a) history and (b) digital space. Any requests for clarifications on assessment should be made as an Issue in this repository, which by now you all know how to do.
## Merge requests?
Changes to these folders will gladly be merged, provided the student is responsible for the modified directory. Merge requests related to the projects of other students will be rejected unless there's a compelling reason for this to be happening.
You can find information on my project right [here](https://gitlab.uzh.ch/ricardo.stalder/cr-from-afar).
# Readings
> Readings for the course can be optional or mandatory, and assigned by instructor or student. Simply add the text, or a link to it, inside this directory, provide an explanation in this file if you like, and perhaps make an Issue to alert everyone that you've added something new.
## Reading list (maybe this should become a table?)
* [Ghost Work (Grey & Suri)](https://gitlab.uzh.ch/lit/digizeit/-/blob/master/readings/GhostWork_GreySuri.pdf)
* [The design of everyday things (Norman)](https://gitlab.uzh.ch/lit/digizeit/-/blob/master/readings/optional/Norman_EverydayThings.pdf)
* [Introduction to Functional Grammar (Halliday & Matthiessen)](https://gitlab.uzh.ch/lit/digizeit/-/blob/master/readings/optional/ifg.pdf), for the language-inclined, and those in need of rigorous terminology for describing system vs instance, genre, text, semantics, meaning and so on
# Scripts
> In here, we host some scripts that can serve as an introduction to programming for the humanities. If you develop a script, you're welcome to add it here. If it requires data to process, that should go in the `data` directory of this repo.
#! /usr/bin/env python3
# above is the shebang, which tells machine to use python to run this file
# below is a *module docstring*, telling us what this file is and how to use
"""
This is an improved version of wordcount.py that can be run on the command line.
On the command line, you can pass an argument that points to any text file you
would like to process. So, example use is:
```
> ./count.py <path-to-file.txt>
> python3 count.py <path-to-file.txt>
```
It works on any plain text, for example, the following file, which is plaintext
of Lewis Carroll's Alice's Adventures in Wonderland, found at:
https://www.gutenberg.org/files/11/11-0.txt
If you put this file in your `/usr/bin` directory, you can call it from anywhere
on your machine, without writing out the full path to the script each time. You
can name it just `count`, without an extension, and then do:
```
> count <my-file.txt>
```
Finally note: you can do `diff -y wordcount.py count.py` to see the difference
between the two scripts.
"""
# we need these two modules in order to run our file as a script (see end)
import os
import sys
# import a tool that counts occurrences of things in python
from collections import Counter
# create a function -- a chunk of code that can be *called* multiple times
def wordcount(filepath):
    """
    This function, called wordcount, takes one argument, `filepath`,
    which is a path to plaintext data.
    It returns a Counter, with word frequencies in this filepath
    """
    # open the file (like double clicking on it)
    raw_text = open(filepath).read()
    # print the first 500 characters to screen ("\n" is newline)
    print(f"Raw data:\n\n{raw_text[:500]}\n\n")
    # make everything lowercase (i.e. normalise)
    lowered = raw_text.lower()
    # split text on spaces. is this reliable?
    wordlist = lowered.split(' ')
    # loop over each word, remove those that are not alphanumeric
    cleaned = [word for word in wordlist if word.strip().isalnum()]
    # perform the counting using the Counter class
    counted = Counter(cleaned)
    # print top 10 most frequent
    print(f"Counts:\n\n{counted.most_common(10)}\n\n")
    # return the wordcount data to us after we call this function
    return counted
# Below is actually the first bit of code that will be run. The rest is just
# *defining* functions for actual use now
# get the *argument* we passed to the script (-1 is the last item in a list)
input_file = sys.argv[-1]
# check that it exists, raise error if it does not
if not os.path.isfile(input_file):
    raise ValueError(f"File {input_file} does not exist! Stopping.")
# if there is no error, we can do the processing
print(f"Processing {input_file} ...")
# call wordcount function to get our counter full of data
results = wordcount(input_file)
# processing done! let's make sure there are results, or we raise an error.
# this prevents us from writing an empty file. this is 'defensive programming'.
if not results:  # i.e. if there are no words in our counter
    raise ValueError("No results found for some reason. Stopping.")
print("Processing finished. Saving file as results.csv ... ")
# store the results of the analysis in results.csv
with open("results.csv", "w") as results_file:
    # loop over the words starting with most frequent
    # or, we could use sorted(results.items()) to alphabetise
    for word, count in results.most_common():
        # write the word and count to file, separated by a tab
        results_file.write(f"{word}\t{count}\n")
# the hard part is now done, and the file has been saved as results.csv.
# now we just want to tell the user that it worked...
# first, calculate number of items in the counter (i.e. its length)
uniq = len(results)
# now, for extra credit: figure out how to calculate the total number of words
# in the file, i.e. the sum of counts. hint: you may have to loop over the
# *values* of the Counter object with `results.values()`. there is also a nice
# function called `sum` which will sum a list of numbers...
total_words = "???"
# print our success message ... we're all done now!
print(f"All done! {uniq} unique words counted. Total words: {total_words}")
#! /usr/bin/env python3
# above is the shebang, which tells machine to use python to run this file
# below is a *module docstring*, telling us what this file is and how to use
"""
This file contains a basic script for counting word frequencies in a text file.
It is written in Python 3 according to the PEP8 style guide. It is not (yet)
configured for command-line use, because paths to text are hardcoded in and
cannot be easily changed. Right now, you need to edit the script itself to
point to a text file for processing. So, to use this script, edit it so that it
points to a text file, and then from the command line (inside digizeit dir) do:
```
> ./scripts/wordcount.py
# or
> python3 ./scripts/wordcount.py
```
It works on any plain text, for example, the following file, which is plaintext
of Lewis Carroll's Alice's Adventures in Wonderland, found at:
https://www.gutenberg.org/files/11/11-0.txt
"""
# import a tool that counts occurrences of things in python
from collections import Counter
# create a function -- a chunk of code that can be *called* multiple times
def wordcount(filepath):
    """
    This function, called wordcount, takes one argument, `filepath`,
    which is a path to plaintext data.
    It returns a Counter, with word frequencies in this filepath
    """
    # open the file (like double clicking on it)
    raw_text = open(filepath).read()
    # print the first 500 characters to screen ("\n" is newline)
    print(f"Raw data:\n\n{raw_text[:500]}\n\n")
    # make everything lowercase (i.e. normalise)
    lowered = raw_text.lower()
    # split text on spaces. is this reliable?
    wordlist = lowered.split(' ')
    # loop over each word, remove those that are not alphanumeric
    cleaned = [word for word in wordlist if word.strip().isalnum()]
    # perform the counting using the Counter class
    counted = Counter(cleaned)
    # print top 10 most frequent
    print(f"Counts:\n\n{counted.most_common(10)}\n\n")
    # return the wordcount data to us after we call this function
    return counted
# all we have done so far is *define* a function ...
# we haven't yet *called* it (i.e. run it). do it like this:
wordcounts = wordcount("./data/alice.txt")
# have a look at what is inside
print(wordcounts)
# get (and print) the count for a particular word
print(wordcounts["test"])
# show the 10 most common words
print(wordcounts.most_common(10))
# store the results of the analysis in results.csv
with open("results.csv", "w") as results_file:
    # loop over the words starting with most frequent
    # or, we could use sorted(wordcounts.items()) to alphabetise
    for word, count in wordcounts.most_common():
        # write the word and count to file, separated by a tab
        results_file.write(f"{word}\t{count}\n")