saridormi committed

Commit 310e730 • 1 Parent(s): d6bb9c2

Edit README a bit & add config
README.md CHANGED
@@ -1,42 +1,96 @@
  ---
  license: apache-2.0
  configs:
- - config_name: default
-   data_files:
-   - split: test
-     path: data/*.json
  ---

- ## Dataset
- The following dataset contains logs of the GitHub action for a failed workflow of some commits,
- followed by the commit that passes the workflow successfully. A full list of the datapoints' content is given below.

- ## Task
- The intended task for this dataset is to fix the repo to pass GitHub actions workflow.
- Note that the dataset does not contain repo snapshot.
- During benchmark, the method clones the necessary repo on the user's local machine.
- The user's model should correct the files of the repo, and benchmark pushes repo to GitHub, returning the result of the workflow run aggregated by all datapoints.

- ## List of items of the datapoint:

  **TODO** Add http links to failed commit
  **TODO** Add file list os changed files

- **id**: unique id of the dp
- **language**: the main language of the repo
- **repo_name**: original repo name
- **repo_owner**: original repo owner
- **head_branch**: name of the original branch that the commit was pushed at
- **contributor**: username of the contributor that committed changes
- **difficulty**: the difficulty of the problem (accessor-based)
- **sha_fail**: sha of the failed commit
- **sha_success**: sha of the successful commit
- **diff:** the content of diff file between failed and successful commits
- **logs**: list of dicts [{"log": log, "step_name": step_name}]:
- - log: logs of the failed job, particular step
- - step_name: name of the failed step of the job
- **workflow**: content of the workflow file that has been used to run jobs
- **workflow_filename**: workflow filename that has been used to run jobs
- **workflow_name**: name of the workflow that was run
- **workflow_path**: path to the workflow file that was run

  ---
  license: apache-2.0
  configs:
+ - config_name: python
+   data_files:
+   - split: test
+     path: data/python/*.json
  ---

+ # 🏟️ Long Code Arena (CI Fixing)

+ > 🛠️ CI Fixing: given logs of a failed GitHub Actions workflow and the corresponding repository snapshot, fix the
+ > repository contents in order to make the workflow pass.

+ This is the benchmark for the **CI Fixing** task as part of
+ 🏟️ [**Long Code Arena** benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).

+ ## How-to
+
+ 1. List all the available configs
+ via [`datasets.get_dataset_config_names`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.get_dataset_config_names)
+ and choose an appropriate one.
+
+ Current configs: `python`
+
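As a minimal sketch of step 1 (assuming network access to the Hugging Face Hub), listing the configs looks like this:

```
from datasets import get_dataset_config_names

# Fetch the list of available configurations for this dataset.
configs = get_dataset_config_names("JetBrains-Research/lca-ci-fix")
print(configs)  # expected to include 'python'
```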
+ 2. Load the data
+ via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
+
+ ```
+ from datasets import load_dataset
+
+ configuration = "TODO" # select a configuration
+ dataset = load_dataset("JetBrains-Research/lca-ci-fix", configuration, split="test")
+ ```
+
+ Note that all the data we have is considered to be in the test split.
+
+ ## Dataset Structure
+
+ This dataset contains logs of the failed GitHub Action workflows for some commits
+ followed by the commit that passes the workflow successfully.
+
+ Note that, unlike many other 🏟️ Long Code Arena datasets, this dataset doesn't contain repositories.
+
+ * Our [CI Fixing benchmark](todo) (🚧 todo) clones the necessary repos to the user's local machine. The user should run their model to
+ fix the failing CI workflows, and the benchmark will push commits to GitHub, returning the results of the workflow runs
+ for all the datapoints.
+
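To make this flow concrete, here is a hypothetical helper (not the benchmark's actual interface) that checks out a datapoint's failing commit locally, assuming `git` is installed; the field names follow the schema below:

```
import os
import subprocess


def checkout_failed_commit(dp, workdir="repos"):
    """Clone the datapoint's repository and check out the failing commit.

    Hypothetical helper for illustration only; `dp` is a single datapoint
    loaded from this dataset.
    """
    os.makedirs(workdir, exist_ok=True)
    url = f"https://github.com/{dp['repo_owner']}/{dp['repo_name']}.git"
    path = os.path.join(workdir, f"{dp['repo_owner']}__{dp['repo_name']}")
    subprocess.run(["git", "clone", url, path], check=True)
    subprocess.run(["git", "checkout", dp["sha_fail"]], cwd=path, check=True)
    return path
```

A model would then edit the files under the returned path; the benchmark itself takes care of pushing the commit and collecting the workflow results.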
+ ### Datapoint Schema
+
+ **TODO** Add http links to failed commit
+ **TODO** Add file list of changed files
+
+ Each example has the following fields:
+
+ | Field | Description |
+ |-------|-------------|
+ | `contributor` | Username of the contributor that committed changes |
+ | `difficulty` | Difficulty of the problem (assessor-based) |
+ | `diff` | Contents of the diff between the failed and the successful commits |
+ | `head_branch` | Name of the original branch that the commit was pushed to |
+ | `id` | Unique ID of the datapoint |
+ | `language` | Main language of the repo |
+ | `logs` | List of dicts with keys `log` (logs of the failed job, particular step) and `step_name` (name of the failed step of the job) |
+ | `repo_name` | Name of the original repo (second part of the `owner/name` on GitHub) |
+ | `repo_owner` | Owner of the original repo (first part of the `owner/name` on GitHub) |
+ | `sha_fail` | SHA of the failed commit |
+ | `sha_success` | SHA of the successful commit |
+ | `workflow` | Contents of the workflow file |
+ | `workflow_filename` | The name of the workflow file (without directories) |
+ | `workflow_name` | The name of the workflow |
+ | `workflow_path` | The full path to the workflow file |
+
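For instance, the fields can be inspected as follows (a small sketch that loads the `python` config and looks at the first datapoint):

```
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-ci-fix", "python", split="test")
dp = dataset[0]

# Each datapoint stores the failing steps together with their logs.
for entry in dp["logs"]:
    print(dp["repo_owner"], dp["repo_name"], "failed step:", entry["step_name"])

# `diff` holds the ground-truth patch between `sha_fail` and `sha_success`.
print(dp["diff"][:200])
```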
+ ### Datapoint Example

  **TODO** Add http links to failed commit
  **TODO** Add file list of changed files

+ ```
+ {'contributor': 'Gallaecio',
+ 'diff': 'diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n<...>',
+ 'difficulty': '1',
+ 'head_branch': 'component-getters',
+ 'id': 18,
+ 'language': 'Python',
+ 'logs': [{'log': '##[group]Run pip install -U tox\n<...>',
+ 'step_name': 'checks (3.12, pylint)/4_Run check.txt'}],
+ 'repo_name': 'scrapy',
+ 'repo_owner': 'scrapy',
+ 'sha_fail': '0f71221cf9875ed8ef3400e1008408e79b6691e6',
+ 'sha_success': 'c1ba9ccdf916b89d875628ba143dc5c9f6977430',
+ 'workflow': 'name: Checks\non: [push, pull_request]\n\n<...>',
+ 'workflow_filename': 'checks.yml',
+ 'workflow_name': 'Checks',
+ 'workflow_path': '.github/workflows/checks.yml'}
+ ```
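Building on the example above, one hypothetical way to sanity-check a datapoint locally is to apply its ground-truth `diff` on top of a checkout of `sha_fail` (a sketch assuming `git` is installed and the repository has already been cloned, e.g. with the helper shown earlier):

```
import subprocess
import tempfile


def apply_ground_truth(dp, repo_path):
    """Apply the datapoint's ground-truth diff on top of `sha_fail`.

    Hypothetical helper; `repo_path` must already contain the repository
    checked out at dp['sha_fail'].
    """
    with tempfile.NamedTemporaryFile("w", suffix=".patch", delete=False) as f:
        f.write(dp["diff"])
        patch_path = f.name
    subprocess.run(["git", "apply", patch_path], cwd=repo_path, check=True)
```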
data/{0828c8d.json → python/0828c8d.json} RENAMED
File without changes
data/{0f71221.json → python/0f71221.json} RENAMED
File without changes
data/{102f918.json → python/102f918.json} RENAMED
File without changes
data/{2a104bf.json → python/2a104bf.json} RENAMED
File without changes
data/{2c06ffa.json → python/2c06ffa.json} RENAMED
File without changes
data/{2e41e78.json → python/2e41e78.json} RENAMED
File without changes
data/{434321a.json → python/434321a.json} RENAMED
File without changes
data/{43dd59c.json → python/43dd59c.json} RENAMED
File without changes
data/{72cd8be.json → python/72cd8be.json} RENAMED
File without changes
data/{79f4668.json → python/79f4668.json} RENAMED
File without changes
data/{cc2ad92.json → python/cc2ad92.json} RENAMED
File without changes
data/{cdfe3ca.json → python/cdfe3ca.json} RENAMED
File without changes
data/{d2e06b5.json → python/d2e06b5.json} RENAMED
File without changes
data/{db6550a.json → python/db6550a.json} RENAMED
File without changes
data/{eaba357.json → python/eaba357.json} RENAMED
File without changes
data/{f9f4b05.json → python/f9f4b05.json} RENAMED
File without changes