In this tutorial, we will show you how to create simple human- and machine-readable metadata files in JavaScript Object Notation [(JSON)](https://www.json.org/json-en.html). JSON files consist of key-value pairs. These are *sidecar* metadata files, that is, they accompany a separate source data file and provide essential information about the data.
JSON metadata files differ from structured metadata files (i.e., tables) in their degree of machine-readability. While structured metadata files may contain columns that only humans can interpret (e.g., "comment" columns with free-text notes), JSON files should not. On the other hand, JSON files can carry more detail than the metadata tables.
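For instance, a sidecar file accompanying an image called `image_001.tiff` could be named `image_001.json` and contain a handful of key-value pairs (the field names and values below are purely illustrative):

```json
{
  "subject_id": "subject_042",
  "sex": "F",
  "condition": "control",
  "treatment": "saline"
}
```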
::: {.callout-tip collapse="false"}
## Making JSON files
Good editors with a graphical interface are available online to read and write JSON files. We recommend the following: <https://jsoneditoronline.org/>
However, we recommend creating JSON files with a script rather than manually, to save time and prevent data entry errors. We will demonstrate how in this tutorial.
:::
## 1.1 Setting the stage for this tutorial
Here, we will work with an example situation derived from an imaging experiment. We will automate the creation of a JSON file from a metadata table containing image file names and locations, as well as information about the images (e.g., subject ID, subject sex, condition in which the subject was observed, treatment the subject received).
#### Requirements for creating the JSON files
- A metadata table containing the names of the reference images and their metadata. *If we rely entirely on the metadata in this table (provided it has sufficient information), we do not need access to the actual image files.*
- Optionally, additional table(s) specifying JSON fields. The JSON files can then include information not found in the metadata table.
- A description of the file-naming convention, a *codebook* (i.e., a document explaining what each variable in the table means), and a glossary of the abbreviations used in the metadata table
- The [jsonlite](https://cran.r-project.org/web/packages/jsonlite/index.html) R package, used to write the JSON string
- Naturally, some R code
## 1.2 Let's get to work!
Let's assume a relatively common structure for the dummy dataset we'll be using for this tutorial:
``` {style="background-color: WhiteSmoke"}
experiment_results                     # The base folder of our dataset
│
├── ...                                # Folder(s) with other kinds of data
│
└── imaging                            # The folder containing the imaging data
    │
    ├── ...
    │
    └── subject_n                      # Each measured subject has a folder
        │
        └── subject_n_imgfile.tiff     # The image file
```
The code we provide will parse a dummy metadata table and create one JSON file for each row of the table, with each row describing one data file (in this case, an image). Our script will thus generate a companion file for every image file, as required for machine readability.
As we are automating a task, it is essential that our metadata table is formatted to be machine readable. This means that, when preparing the table, one should (among other things) avoid blank rows, avoid empty cells where possible, and use only the first row for header information (i.e., variable names).
Furthermore, the metadata table should be part of a spreadsheet that also contains a *codebook* explaining what each variable is. The number of variables (i.e., the number of columns) and their names should match between the metadata table and the codebook.
For further information on the readability of spreadsheets, see [Six tips for better spreadsheets](https://doi.org/10.1038/d41586-022-02076-1) by J. M. Perkel.
::: {.cell}
```{.r .cell-code}
# these packages are required for correct functioning of the tutorial
easypackages::packages("dplyr", # data operations
"kableExtra", # for rendering of tables in HTML or PDF
"knitr") # rendering of the report
```
:::
First of all, let's create a simple dummy (or toy) metadata table:
::: {.cell}
```{.r .cell-code}
n_rows = 30 # defining how many rows (in this case, how many study "subjects") we want in the table
```
:::
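Here is a minimal sketch of how such a dummy table (called `metadata_table` below) could be assembled. The column names (`subject_id`, `sex`, `condition`, `treatment`, `img_location`) and their values are illustrative and should be adapted to your own conventions:

```r
set.seed(1) # for reproducible dummy data

metadata_table <- data.frame(
  subject_id = sprintf("subject_%02d", seq_len(n_rows)),
  sex        = sample(c("F", "M"), n_rows, replace = TRUE),
  condition  = sample(c("control", "stimulated"), n_rows, replace = TRUE),
  treatment  = sample(c("saline", "drug_A"), n_rows, replace = TRUE)
)
metadata_table$img_location <- file.path(
  "experiment_results", "imaging", metadata_table$subject_id,
  paste0(metadata_table$subject_id, "_imgfile.tiff")
)

kable(head(metadata_table)) # preview the first rows (kable comes from knitr, loaded above)
```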
Now, let's make the corresponding JSON files. For readability purposes, here we use a for loop to iterate over the rows of the metadata table. This solution can be very slow when dealing with large metadata tables, so below we will illustrate an alternative, faster solution.
::: {.cell}
```{.r .cell-code}
library(jsonlite)
library(stringr)

saveoutput <- F # set to TRUE or T to automate JSON saving

for (i in seq_len(nrow(metadata_table))) {
  # convert one row of the metadata table into a JSON string
  json_metadata <- toJSON(metadata_table[i, ], pretty = TRUE)

  # build the output path: take the image path and swap its extension
  json_path <- metadata_table[i, ] %>%
    pull(img_location) %>%            # this will give us the full path to the image
    str_replace(".tiff", ".json")     # and this will replace the extension with .json

  # write JSON file to the appropriate location if triggered
  if (saveoutput) {
    write(json_metadata, file = json_path)
    print(paste0("Wrote ", json_path))
  }
}
```
:::
::: callout-tip
The code snippet you just saw includes the possibility to save the generated JSON files into the folders containing image files mentioned in the `img_location` column of the metadata table. If you want to use this functionality during your execution, simply change the `saveoutput` variable to `TRUE` or `T`.
:::
The `toJSON()` function of `jsonlite` converts tabular input (in our case, a single row of the metadata table) into an R character vector of length 1, i.e., a single string. This string is formatted according to the JSON specification. Here's what one of our JSON files looks like:
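The exact values depend on the (randomly generated) dummy table, but with the illustrative `metadata_table` from above the result resembles this:

```r
toJSON(metadata_table[1, ], pretty = TRUE)
#> [
#>   {
#>     "subject_id": "subject_01",
#>     "sex": "F",
#>     "condition": "control",
#>     "treatment": "saline",
#>     "img_location": "experiment_results/imaging/subject_01/subject_01_imgfile.tiff"
#>   }
#> ]
```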
Notice that the file starts with a `[` and ends with a `]`. The content of a row of the metadata table is delimited by `{}`. This delimited field contains `column_name: value` pairs, one per line (separated by a newline, `\n`). We could therefore say that `toJSON` "expands" the row of the metadata table into a list describing each of its cells.
And just like that, you've created your first JSON files. Congratulations!
## 1.3 Code
Here, grouped in one place, is all the code you'll need to do what we covered in this section of the tutorial.
::: {.cell}
```{.r .cell-code code-fold="show"}
library(jsonlite)
library(stringr)
n_rows = 30 # defining how many rows (in this case, how many study "subjects") we want in the table
# ... build the dummy metadata table (here called metadata_table) as shown in section 1.2 ...
saveoutput <- F # set to TRUE or T to automate JSON saving
for (i in seq_len(nrow(metadata_table))) {
  json_metadata <- toJSON(metadata_table[i, ], pretty = TRUE)
  json_path <- metadata_table[i, ] %>%
    pull(img_location) %>%          # this will give us the full path to the image
    str_replace(".tiff", ".json")   # and this will replace the extension with .json
  # write JSON file to the appropriate location if triggered
  if (saveoutput) {
    write(json_metadata, file = json_path)
    print(paste0("Wrote ", json_path))
  }
}
```
:::
# 2. Creating JSON files with a metadata table and information from additional files
The example above covered the simplest situation you might encounter when creating JSON files. A more realistic scenario is one in which you have created a metadata table, but want to create JSON files that combine its information with information stored in other files.
## 2.1 Setting the stage for this tutorial
In this part of the tutorial, we will create a script that allows you to customise the JSON-making process by editing a table that specifies which fields will be included in the JSON files. The information for these fields will be extracted from both the metadata table and the file names of the data files.
## 2.2 Requirements for creating the JSON files
- A `.csv`-format table specifying JSON *keys* (e.g., "FacilityName", "GenusSpecies") and their *values* (e.g., "My_Lab", "Mus_Musculus") that apply to all data files;
  (if a value is left blank, it will be filled with information from the file name or from the table with subject information; see below)
- A collection of data files, with a subject ID encoded in their file name;
- A metadata table with information on study subjects (e.g., sex, body weight) that can be added to the JSON file that will accompany each data file;
- The [jsonlite](https://cran.r-project.org/web/packages/jsonlite/index.html) R package, used to write the JSON strings;
Metadata tables should be prepared in a spreadsheet during data collection (or, in edge cases, afterwards), and can be saved in different file formats (e.g., `.csv` or Excel `.xlsx`). For the sake of interoperability, the `.csv` file format is preferred.
## 2.3 Let's get to work
This example revolves around a dummy data set, so the JSON fields and metadata have no real-life meaning. Let's imagine that this dummy data set contains microscopy images of plant tissues.
Let's assume a structure for the dummy dataset that is different from the one we described before, and that is more common in research:
``` {style="background-color: WhiteSmoke"}
experiment_results                     # The base folder of our dataset
│
├── metadata.csv                       # Metadata table with information on study subjects
│
├── ...                                # Folder(s) with other kinds of data
│
└── imaging                            # The folder containing the imaging data
    │
    ├── ...
    │
    └── subject_n_imgfile.tiff         # All image files are collected in a single folder
```
In this example, we have a `metadata.csv` table in the dataset base folder that contains one row per file present in the `imaging` sub-folder. Each row contains the path to an imaging file and information on the study subject the file was derived from; in other words, each row holds both file-specific and subject-specific information.
We will now create a dummy dataset following the structure we just reviewed, so that we can illustrate reading/writing automation as well.
Let's start by creating the dummy experimental dataset:
::: {.cell}
:::
::: {.cell}
```{.r .cell-code}
# we need to create a "tutorial" folder to organise input and output files
dataset_dir <- file.path("./Tutorial_Metadata_JSON/experiment_results") # will work on every OS
if (!dir.exists(dataset_dir)) {
dir.create(dataset_dir, recursive = T)
}
# then, let's define how many study "subjects" we want in this example
n_subjects = 10
# just to make things more interesting, let's come up with random subject IDs
# (the exact ID format below is just an example)
subject_ids <- sprintf("subject_%03d", sample(100:999, n_subjects))
```
:::
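Here is a minimal sketch of how the dummy image files and the accompanying `metadata.csv` table could be generated; the column names (`subject_id`, `sex`, `img_location`) are illustrative:

```r
# create the imaging sub-folder and one empty placeholder "image" per subject
imaging_dir <- file.path(dataset_dir, "imaging")
if (!dir.exists(imaging_dir)) {
  dir.create(imaging_dir)
}
img_files <- file.path(imaging_dir, paste0(subject_ids, "_imgfile.tiff"))
file.create(img_files) # empty files standing in for real images

# build a metadata table with one row per subject/image and save it as metadata.csv
metadata <- data.frame(
  subject_id   = subject_ids,
  sex          = sample(c("F", "M"), n_subjects, replace = TRUE),
  img_location = img_files
)
write.csv(metadata, file.path(dataset_dir, "metadata.csv"), row.names = FALSE)
```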
Having created the dummy dataset, we can now move to the "operational" part of the tutorial, in which we'll work as if the dataset were real and we had not just created it. First, let's import the *metadata table* and the table with *user-defined JSON fields*:
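A minimal import sketch; the file name of the user-defined fields table (`my_JSON_fields.csv`) is hypothetical, so point it to your own `[...]_JSON_fields.csv`:

```r
# import the metadata table that ships with the dataset
metadata <- read.csv(file.path(dataset_dir, "metadata.csv"))

# import the table with the user-defined JSON fields
# (hypothetical path; replace it with the location of your own file)
json_fields <- read.csv("./Tutorial_Metadata_JSON/my_JSON_fields.csv")

head(metadata)    # quick sanity checks
head(json_fields)
```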
### 2.3.1 Retrieving the subject IDs from imaging files
If our metadata table does not contain the location of each imaging file, we can always use the file names of the (dummy) images to retrieve it (let's just make sure that we match the locations with subject IDs):
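Here is a sketch of this step; it assumes the subject ID is encoded in the file name as in the dummy data above, and the object names and regular expression are illustrative:

```r
library(stringr)

# list all image files in the imaging folder, keeping their full paths
img_files_found <- list.files(file.path(dataset_dir, "imaging"),
                              pattern = "\\.tiff$", full.names = TRUE)

# extract the subject ID from each file name and keep it next to the file location
img_table <- data.frame(
  subject_id   = str_extract(basename(img_files_found), "subject_[0-9]+"),
  img_location = img_files_found
)
```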
The `json_fields` table we created contains a **variable_name** column, holding what will become the *keys*, and a **variable_values** column, holding what will become the *values* in the JSON files. The other columns are not necessary for this example, but can help users when specifying the content of their JSON files.
Entries in **permissible_values** will be filled with file-specific information.
::: {.callout-important title="Naming and contents of columns"}
The names of the `json_fields` columns are arbitrary; you can use any names you like. It is important, however, to pay attention to the format of the values (e.g., numeric, alphanumeric, or strings). Values can also be arrays, e.g. `[1,2,3]`. Note that a JSON file can have a more hierarchical structure, with keys and sub-keys, but we will keep things simple for this example (see the short sketch after this callout).
:::
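As a brief illustration of arrays and nested keys (the field names below are made up for this example), `jsonlite::toJSON()` handles nested R lists directly:

```r
nested_example <- list(
  FacilityName = "My_Lab",
  ROI_ids      = c(1, 2, 3),           # becomes a JSON array: [1, 2, 3]
  Microscope   = list(                 # becomes a nested object with sub-keys
    Manufacturer = "ExampleCorp",
    Objective    = "20x"
  )
)
jsonlite::toJSON(nested_example, pretty = TRUE, auto_unbox = TRUE)
```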
### 2.3.3 Creating the JSON files
First, we'll need to convert the human-readable `json_fields` table specifying the JSON fields into a suitable format for `jsonlite::toJSON()`:
::: {.cell}
```{.r .cell-code code-fold="show"}
# Preserve the order of the field names as in the table
# (one possible approach: build a named list in table order, then coerce every value to character)
json_list <- setNames(as.list(json_fields$variable_values), json_fields$variable_name)
json_list <- lapply(json_list, as.character)
```
:::

You'll notice that the second line of code we just looked at applies the `as.character()` function to each key-value pair we created from `json_fields`. This keeps everything consistent in the resulting JSON files, but `jsonlite` can handle multiple data types, such as dates, integers, real numbers, and more.
We can now loop over the image files and create a JSON file for each of them. Besides the information we provided with the `[...]_JSON_fields.csv` file (which will be the same for every JSON file), we can add image-specific information to each one. We will retrieve this information from the metadata table.
::: {.cell}
```{.r .cell-code}
# Join the tables with file names and subject information by subject ID
# (the object names img_table and metadata follow the sketches above and are illustrative)
files_metadata <- left_join(img_table, metadata, by = "subject_id")
```
:::
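Here is a sketch of the loop itself, under the same assumptions as above; the keys `SubjectID` and `SubjectSex` are illustrative and should correspond to the blank fields in your own `json_fields` table:

```r
for (i in seq_len(nrow(files_metadata))) {
  # start from the user-defined fields, which are identical for every file...
  json_content <- json_list

  # ...then fill in the file-specific information from the joined table
  json_content[["SubjectID"]]  <- files_metadata$subject_id[i]
  json_content[["SubjectSex"]] <- files_metadata$sex[i]

  # convert to a JSON string and write it next to the image file
  json_string <- jsonlite::toJSON(json_content, pretty = TRUE, auto_unbox = TRUE)
  json_path   <- str_replace(files_metadata$img_location[i], "\\.tiff$", ".json")
  write(json_string, file = json_path)
}
```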
The `jsonlite` package offers several encoding and formatting options. You can check out all of them [online](https://cran.r-project.org/web/packages/jsonlite/index.html), and here we illustrate the most relevant formatting possibilities:
(@) The `pretty` option of the `toJSON` function controls the appearance of the JSON output by adding indentation and line breaks when set to `TRUE`.
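    A quick comparison on a small illustrative list:

    ```r
    x <- list(SubjectID = "subject_001", SubjectSex = "F")

    jsonlite::toJSON(x, auto_unbox = TRUE)
    #> {"SubjectID":"subject_001","SubjectSex":"F"}

    # with pretty = TRUE the same content is spread over indented lines,
    # which is much easier for humans to read
    jsonlite::toJSON(x, pretty = TRUE, auto_unbox = TRUE)
    ```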