---
license: mit
language:
- fr
task_categories:
- image-to-text
pretty_name: RIMES-2011-Lines
tags:
- Handwritten Text Recognition
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_examples: 10188
  - name: validation
    num_examples: 1138
  - name: test
    num_examples: 778
  dataset_size: 12104
size_categories:
- 10K<n<100K
---

# RIMES-2011-Lines Dataset

## Table of Contents
- [RIMES-2011-Lines Dataset](#rimes-2011-lines-dataset)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Loading](#data-loading)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)

## Dataset Description

- **Homepage:** [ARTEMIS](https://artemis.telecom-sudparis.eu/2012/10/05/rimes/)
- **PapersWithCode:** [Papers using the RIMES dataset](https://paperswithcode.com/dataset/rimes)
- **Point of Contact:** [TEKLIA](https://teklia.com)

### Dataset Summary

The RIMES (Reconnaissance et Indexation de données Manuscrites et de fac similÉS) database was created to evaluate automatic recognition and indexing systems for handwritten documents, in particular letters such as those sent by mail or fax from individuals to companies or administrations.

The database was collected by asking volunteers to write handwritten letters in exchange for gift certificates. Each volunteer was given a fictitious identity (of the same gender as their real one) and up to 5 scenarios. Each scenario combined a topic, chosen from 9 realistic themes (change of personal data such as address or bank account, request for information, opening or closing of a customer account, change of contract or order, complaint about poor quality of service, payment difficulties such as a request for a delay or a tax exemption, reminder, or a complaint with other circumstances), with a target recipient (an administration or a service provider such as a telephone, electricity, bank, or insurance company). Volunteers then wrote a letter matching this information in their own words. The layout was free; the only requirements were to use white paper and to write legibly in black ink.

The campaign was a success, with more than 1,300 people contributing up to 5 letters each. The resulting RIMES database contains 12,723 pages, corresponding to 5,605 letters of two to three pages each.

Note that all images are resized to a fixed height of 128 pixels.

### Languages

All the documents in the dataset are written in French.

## Dataset Structure

### Data Loading

The dataset can be loaded using this simple code:

```py
from datasets import load_dataset
dataset = load_dataset("Teklia/rimes-2011-lines")
```
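
If you only need one split, or want to avoid downloading the full dataset up front, the `split` and `streaming` arguments of `load_dataset` can be used (a minimal sketch; these are standard `datasets` options, not specific to this dataset):

```py
from datasets import load_dataset

# Load only the test split.
test_set = load_dataset("Teklia/rimes-2011-lines", split="test")

# Stream the training split without downloading it entirely.
train_stream = load_dataset("Teklia/rimes-2011-lines", split="train", streaming=True)
```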

### Data Instances

Each instance represents an image and its transcription:
```json
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2560x128 at 0x1A800E8E190>,
  'text': "Comme indiqué dans les conditions particulières de mon contrat d'assurance"
}
```
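
For a quick look at a sample, something along these lines can be used (a short sketch, assuming the dataset was loaded as shown above):

```py
sample = dataset["train"][0]

print(sample["text"])        # the transcription string
print(sample["image"].size)  # (width, 128): all images are resized to a height of 128 pixels
```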

### Data Fields

- `image`: a `PIL.Image.Image` object containing the image. Note that accessing the image column (e.g. `dataset[0]["image"]`) automatically decodes the image file, and decoding a large number of images can take a significant amount of time. It is therefore important to index the sample before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]` (see the sketch after this list).
- `text`: the label transcription of the image.
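
The indexing order mentioned above makes a practical difference (a minimal sketch; exact timings depend on the environment):

```py
# Selects one row, then decodes a single image — fast.
image = dataset["train"][0]["image"]

# Decodes every image in the split before indexing — avoid on large splits.
# image = dataset["train"]["image"][0]
```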

### Data Splits

Three sets are available: train, validation and test.

|                         | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Number of Lines         | 10188 |   1138     | 778  |
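
The split sizes can be checked programmatically (a short sketch, assuming the full `DatasetDict` was loaded as shown above):

```py
for name, split in dataset.items():
    print(f"{name}: {split.num_rows} lines")
```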