Evaluation report
Revision as of 16:04, 18 February 2022
General information
After the evaluator has uploaded the files, they can start the evaluation.
The evaluation can be started either with an automatic word count or by entering the word count manually when starting the process:
For more info on both methods, please check the relevant sections below.
Automatic vs. manual word count
Automatic:
1. Used for fully reviewed files.
2. The "Evaluation sample word count limit" determines how many segments will be displayed for evaluation.
3. The system will display only corrected segments (selected randomly) with the total word count specified as "Evaluation sample word count limit".
For example, if 1000 was specified as "Evaluation sample word count limit", the system will display around 100 segments with around 1000 words in total.
- Please note that the number of segments varies depending on the size of the segments.
- Note: If the evaluator specifies 1000 as the "Evaluation sample word count limit" while all corrected segments contain only 500 words in total (say, out of 900 words in the file), the system will still display corrected segments with around 500 words in total. This means that 1000 can be safely used as the "Evaluation sample word count limit" even if the real total word count is lower.
4. When calculating the score, the "Total source words" value from the "Evaluation details" section (not the total source words of the file) is used:
For example, if the evaluation report includes corrected segments with around 1000 words and the total source words is 1757, 1757 will be used in the formula.
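The sampling behavior described above can be sketched as follows. This is a minimal illustration with an assumed segment structure (`corrected` flag and `words` count); the actual selection logic is internal to the system:

```python
import random

def pick_evaluation_sample(segments, word_limit):
    """Randomly pick corrected segments until their total source word
    count reaches the evaluation sample word count limit (or until all
    corrected segments are used, whichever comes first)."""
    corrected = [s for s in segments if s["corrected"]]
    random.shuffle(corrected)
    sample, total = [], 0
    for seg in corrected:
        if total >= word_limit:
            break
        sample.append(seg)
        total += seg["words"]
    return sample, total

# A 900-word file in which only every other 10-word segment was corrected.
segments = [{"corrected": i % 2 == 0, "words": 10} for i in range(90)]
sample, total = pick_evaluation_sample(segments, 1000)
# Only ~450 corrected words exist, so the sample stays below the limit.
print(len(sample), total)  # -> 45 450
```

This also illustrates the note above: a limit larger than the corrected word count is harmless, because the loop simply runs out of corrected segments.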
Manual:
1. Used for partially reviewed files (in order not to split the file into parts and import only the reviewed part).
2. The "Evaluated source words" should reflect the total number of words in the reviewed part of the file.
For example, a reviewer reviewed only 1500 words in a 5000-word file. Then they should specify 1500 as "Evaluated source words" and the system will not take the remaining 3500 words into account.
3. The system will display all the corrected segments, so if the reviewed part of the file is large, the evaluator will have to evaluate considerably more segments.
4. When calculating the score, the "Total source words" is used. In this case, "Evaluated source words" = "Total source words".
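In both modes, the score denominator is the "Total source words" value rather than the number of sampled words. The sketch below assumes a hypothetical errors-per-1000-words metric purely for illustration; the actual scoring formula is not given in this article:

```python
def quality_score(error_points, evaluated_source_words):
    """Hypothetical errors-per-1000-words metric; the real formula used
    by the system is not documented in this article."""
    return error_points / evaluated_source_words * 1000

# Automatic mode: ~1000 sampled words, but "Total source words" = 1757
# from the "Evaluation details" section is used as the denominator.
auto = quality_score(5, 1757)
# Manual mode: 1500 reviewed words out of a 5000-word file;
# "Evaluated source words" = "Total source words" = 1500.
manual = quality_score(5, 1500)
print(round(auto, 2), round(manual, 2))  # -> 2.85 3.33
```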
Start evaluation (automatic word count)
If you select this option, the system will display randomly selected segments containing only corrected units for evaluation:
Then you may configure the evaluation process:
- "Skip repetitions" — the system will hide repeated segments (only one of them will be displayed).
- "Skip locked units" — "frozen" units will not be displayed (for example, this setting is used if a client wants some important parts of the translated text to stay unchanged).
- "Skip units with match >=" — units with a match rate greater than or equal to the specified number will not be displayed.
- "Evaluation sample word count limit" — the number of words in corrected segments chosen for evaluation; this value determines how many segments will be displayed.
Having adjusted the settings, click "Start evaluation" to initiate the process.
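Taken together, the settings above act as a filter over the list of units. Here is a rough sketch with assumed field names (`source`, `match`, `locked`), not the product's actual data model:

```python
def filter_units(units, skip_repetitions=True, skip_locked=True,
                 skip_match_at_least=None):
    """Apply the evaluation display settings to a list of unit dicts."""
    shown, seen_sources = [], set()
    for u in units:
        if skip_locked and u["locked"]:
            continue  # "frozen" units are hidden
        if skip_match_at_least is not None and u["match"] >= skip_match_at_least:
            continue  # high-match units are hidden
        if skip_repetitions:
            if u["source"] in seen_sources:
                continue  # only the first occurrence is displayed
            seen_sources.add(u["source"])
        shown.append(u)
    return shown

units = [
    {"source": "Hello", "match": 0, "locked": False},
    {"source": "Hello", "match": 0, "locked": False},        # repetition -> hidden
    {"source": "Do not touch", "match": 0, "locked": True},  # locked -> hidden
    {"source": "World", "match": 101, "locked": False},      # high match -> hidden
]
print(len(filter_units(units, skip_match_at_least=100)))  # -> 1
```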
Start evaluation (manual word count)
If the file was only partially reviewed, you can use the evaluation with a manual word count.
To do this, click "Start evaluation (manual word count)":
Enter the number of evaluated source words (total number of words in the reviewed part of the file):
Then click "Start evaluation" and the system will display all corrected segments of the document.
Mistakes
To add a mistake, click the "Add mistake" button within the needed segment:
Specify each mistake's type and severity, leave a comment if needed, and click "Submit":
You can edit or delete a mistake or comment, or add another mistake, by clicking the corresponding buttons:
- "View in comparison" — this link redirects you to the page with the comparison report:
When all the mistakes are added and classified, click "Complete evaluation", write an evaluation summary, and click the "Complete" button. The translator will receive a notification.
Note: If you click "Complete" when no mistakes have been added to the report, the system will warn you:
Markup display
Markup display settings allow you to choose how tags will be displayed:
- "Full" — tags have original length, so you can see the data within:
- "Short" — the contents of the tags are not displayed and you see only their position in the text:
- "None" — tags are not displayed:
Units display
- "All" — units with and without mistakes are displayed:
- "With mistakes" — only units with mistakes are displayed:
- "Last commented by evaluator" — only units with the last comment left by the evaluator are displayed.
- "Last commented by translator" — only units with the last comment left by the translator are displayed.
- "Last commented by arbiter" — only units with the last comment left by the arbiter are displayed.
Click "Apply" after changing the preferences:
Reevaluation and arbitration requests
When the mistake classification is done, the project evaluator clicks "Complete evaluation" => "Complete",
and the system sends the quality assessment report to the translator.
After looking through the classification of each mistake, the translator can click "Complete project"
(if they agree with the evaluator; in this case, the project will be completed) or "Request reevaluation" (if they disagree with the mistake severities):
If the translator requests a reevaluation, the project is sent back to the evaluator, who has to reply to all the translator's comments and, if they are convincing, decrease the mistake severities.
The translator will receive the reevaluated project and can send it for reevaluation one more time.
Note: Unless the maximum number of evaluation attempts has been adjusted, the translator can request a reevaluation up to 2 times.
If the translator and evaluator cannot reach an agreement, the translator can send the project to the arbiter
by clicking "Request arbitration" (this button appears instead of "Request reevaluation"):
The arbiter provides a final score that cannot be disputed and completes the project. Once the arbitration is completed, all the project participants will receive an email notification.