---
license: cc
task_categories:
- text-classification
language:
- en
tags:
- math
---

TLDR: MIT OCW Math Lectures with Student Questions

# SIGHT: A Large Annotated Dataset on Student Insights Gathered from Higher Education Transcripts

[Project Page](https://rosewang2008.github.io/sight/), [Paper](https://arxiv.org/pdf/2306.09343.pdf), [Poster](assets/poster.pdf)

Authors: Rose E. Wang\*, Pawan Wirawarn\*, Noah Goodman, and Dorottya Demszky

\*Equal contribution

In the Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (2023)

If you find our work useful or interesting, please consider citing it!

```
@inproceedings{wang2023sight,
  title={SIGHT: A Large Annotated Dataset on Student Insights Gathered from Higher Education Transcripts},
  author={Wang, Rose E and Wirawarn, Pawan and Goodman, Noah and Demszky, Dorottya},
  year={2023},
  month=jun,
  booktitle={18th Workshop on Innovative Use of NLP for Building Educational Applications},
  month_numeric={6}
}
```

![Main Figure](assets/fig1.png)

## Motivation
Lectures are a learning experience for both students and teachers.
Students learn from teachers about the subject material, while teachers learn from students about how to refine their instruction.
Unfortunately, online student feedback is unstructured and abundant, making it challenging for teachers to learn from it and improve. We take a step towards tackling this challenge.
First, we contribute a dataset for studying this problem: SIGHT is a large dataset of 288 math lecture transcripts and 15,784 comments collected from the Massachusetts Institute of Technology OpenCourseWare (MIT OCW) YouTube channel.
Second, we develop a rubric for categorizing feedback types using qualitative analysis.
Qualitative analysis methods are powerful in uncovering domain-specific insights; however, they are costly to apply to large data sources.
To overcome this challenge, we propose a set of best practices for using large language models (LLMs) to cheaply classify the comments at scale.
We observe a striking correlation between the model's and humans' annotations:
categories with consistent human annotations (>$0.9$ inter-rater reliability, IRR) also display higher human-model agreement (>$0.7$), while categories with less consistent human annotations ($0.7$-$0.8$ IRR) correspondingly demonstrate lower human-model agreement ($0.3$-$0.5$).
These techniques uncover useful student feedback from thousands of comments, costing around $\$0.002$ per comment.
We conclude by discussing exciting future directions on using online student feedback and improving automated annotation techniques for qualitative research.
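
To make the annotation pipeline concrete, here is a minimal sketch of prompting an LLM to label a single comment. The category list, the prompt wording, and the use of the `openai` 0.x chat API are illustrative assumptions, not our exact setup; the actual prompts we used are in the `prompts` directory.

```
# Illustrative sketch of LLM comment classification; the real prompts live
# in the `prompts` directory. Assumes the openai 0.x package and an
# OPENAI_API_KEY in the environment. The category list here is made up.
import openai

CATEGORIES = ["gratitude", "confusion", "pedagogy", "other"]  # hypothetical rubric

def classify_comment(comment: str) -> str:
    """Return the rubric category GPT-3.5 assigns to a YouTube comment."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep labels as deterministic as possible
        messages=[
            {"role": "system",
             "content": f"Label the student comment with one of: {', '.join(CATEGORIES)}. "
                        "Answer with the category name only."},
            {"role": "user", "content": comment},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()

print(classify_comment("I finally understand eigenvalues. Thank you!"))
```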

## Repository structure

The scripts for replicating the paper's analysis are collected in `run_analysis.sh`. Please refer to the `prompts` directory for replicating the annotations.

The repo structure:

```
.
├── data
│   ├── annotations       # Sample (human) and full SIGHT annotations
│   ├── comments          # Per-video comments
│   ├── metadata          # Per-video metadata like playlist ID or video name
│   └── transcripts       # Per-video transcript, transcribed with Whisper Large V2
├── prompts               # Prompts used for annotation
├── results               # Result plots used in paper
├── scripts               # Python scripts for analysis
├── requirements.txt      # Install requirements for running code
├── run_analysis.sh       # Complete analysis script
├── LICENSE
└── README.md
```
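
As a quick orientation, the sketch below shows one way to peek at the data with pandas. The CSV assumption and the file names (`VIDEO_ID.csv`, the annotation file name) are hypothetical placeholders for the layout above; check `data/` for the actual names and formats.

```
# Hypothetical data-loading sketch: file names and the CSV assumption are
# illustrative; substitute real file names from the data/ directory.
import pandas as pd

video_id = "VIDEO_ID"  # pick a real ID from data/metadata
comments = pd.read_csv(f"data/comments/{video_id}.csv")
print(comments.head())  # inspect the per-video comments

annotations = pd.read_csv("data/annotations/full_annotations.csv")  # hypothetical name
print(annotations.columns.tolist())
```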

## Installation

To install the required libraries:

```
conda create -n sight python=3
conda activate sight
pip install -r requirements.txt
```

## Experiments

TLDR: Running `source run_analysis.sh` replicates all the results we report in the paper.

Plots (e.g., the IRR comparison in Figure 3) are saved under `results/` as PDF files.
Numbers (e.g., the sample data information in Table 2 or the IRR values in Table 3) are written to `results/` as `.txt` files.

## Annotations

The automated annotations provided in this repository were scaled up only for the categories with high inter-rater reliability (IRR) scores.
While we have made efforts to ensure the reliability of these annotations, they may not be completely error-free.
We recommend using these annotations as a starting point and validating them through additional human annotation or other means as necessary; one way to spot-check agreement is sketched below.
By using these annotations, you acknowledge and accept the potential limitations and inherent uncertainties associated with automated annotation methods, such as annotating at scale with GPT-3.5.
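
For example, one simple validation is to hand-label a random sample of the automatically annotated comments and measure chance-corrected agreement with the model labels. A minimal sketch with made-up labels, using Cohen's kappa from scikit-learn (the paper's IRR analysis may use a different statistic):

```
# Spot-check sketch: compare your own hand labels against the automated
# labels on the same comments. Labels below are made up for illustration.
from sklearn.metrics import cohen_kappa_score

human = ["gratitude", "confusion", "confusion", "other", "gratitude"]
model = ["gratitude", "confusion", "other", "other", "gratitude"]

# 1.0 = perfect agreement, 0.0 = chance-level agreement
print(f"Cohen's kappa: {cohen_kappa_score(human, model):.2f}")
```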

We welcome any contributions to improve the quality of the annotations in this repository!
If you have improved or expanded the annotations, feel free to submit a pull request with your changes.
We appreciate all efforts to make these annotations more useful for the education and NLP community!