'''Evaluation report''' (TQAuditor Wiki)

=='''General information'''==

After the evaluator has uploaded the files, they can start the evaluation.
 
The evaluator can start the evaluation either with automatic word count or by entering the word count manually when starting the process:

[[File:Evaluations.png|border|200px]]

For more info on both methods, please check the relevant sections below.
=='''Automatic vs. manual word count'''==

'''Automatic:'''
1. Used for fully reviewed files.

2. The "Evaluation sample word count limit" setting adjusts how many segments are displayed for evaluation.
3. The system will display only corrected segments (selected randomly), with the total word count limited by the "Evaluation sample word count limit".

''For example'', if 1000 was specified as "Evaluation sample word count limit", the system will display around 100 segments with around 1000 words in total.
*'''Skip locked units'''—the program will hide "frozen" units. For example, the client wants some parts, extremely important for him, stayed unchanged. Besides, extra units slow the editor’s work down.
+
::''Please note that the number of segments varies depending on the size of segments''.
  
*'''Evaluation sample word count limit'''—the number of words in edited segments chosen for evaluation.
+
::<span style="color:orange">'''Note'''</span>: If the evaluator specifies 1000 as "Evaluation sample word count limit" while there are only 500 words in all corrected segments (let's say, there are 900 words in the file), the system will still display corrected segments with around 500 words in total. It means that ''1000 can be safely used as "Evaluation sample word count limit" even if the real total word count is lower''.
  
4. When calculating the score, the "Total source words" value from the "Evaluation details" section (not the total source words of the file) is used:

[[File:TSW.png|border|300px]]
''For example'', if the evaluation report includes corrected segments with around 1000 words and the total source words value is 1757, then 1757 will be used in the formula.
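The random sampling behaviour described in the steps above can be sketched as follows. This is an illustrative model only; the segment structure and function name are assumptions, not TQAuditor's actual implementation:

```python
import random

def pick_evaluation_sample(segments, word_limit, seed=None):
    """Randomly pick corrected segments until the word limit is reached.

    `segments` is an assumed list of (text, is_corrected) pairs. Only
    corrected segments are eligible; if they contain fewer words than
    `word_limit`, all of them are returned (as the note above explains).
    """
    corrected = [text for text, is_corrected in segments if is_corrected]
    random.Random(seed).shuffle(corrected)  # random selection order
    sample, total = [], 0
    for text in corrected:
        if total >= word_limit:
            break  # the limit is approximate: whole segments are added
        sample.append(text)
        total += len(text.split())
    return sample, total
```

Because whole segments are added until the limit is reached, the sampled word count only approximates the limit, which matches the note that the number of segments varies with segment size.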
'''Manual:'''
1. Used for partially reviewed files (so that you do not have to split the file into parts and import only the reviewed part).

2. The "Evaluated source words" value should reflect the total number of words in the reviewed part of the file.
''For example'', if a reviewer reviewed only 1500 words of a 5000-word file, they should specify 1500 as "Evaluated source words", and the system will not take the remaining 3500 words into account.

3. The system will display all the corrected segments, so if the reviewed part of the file is large, the evaluator will have to evaluate considerably more segments.
4. When calculating the score, the "Total source words" is used. In this case, "Evaluated source words" = "Total source words".
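As a hedged illustration of how the "Total source words" denominator affects the result, here is a simple weighted-mistakes-per-1000-words model. The severity names, weights, and the formula itself are assumptions for illustration; TQAuditor's actual scoring formula is configurable and is not reproduced here:

```python
def error_score(mistake_severities, total_source_words):
    """Weighted error points per 1000 source words (illustrative model only)."""
    weights = {"minor": 1, "major": 3, "critical": 9}  # assumed weights
    points = sum(weights[s] for s in mistake_severities)
    return points * 1000 / total_source_words

# With the figures from the automatic example above, 1757 (not the ~1000
# sampled words) would be the denominator:
score = error_score(["minor", "major"], 1757)  # 4 points over 1757 words
```

In the manual case the denominator is simply the "Evaluated source words" value entered by the evaluator.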
=='''Start evaluation (automatic word count)'''==
If you select this option, the system will display randomly selected segments containing only corrected units for evaluation:

[[File:Start evaluation automatic.png|border|200px]]
Then you may configure the evaluation process:

*"Skip repetitions" — the system will hide repeated segments (only one of them will be displayed).
*"Skip locked units" — the "frozen" units will not be displayed (for example, this setting is used if a client wants some important parts of the translated text to stay unchanged).

*"Skip units with match >=" — units with a fuzzy match greater than or equal to the specified percentage will not be displayed.
*"Evaluation sample word count limit" — this value adjusts how many segments are displayed for evaluation.

[[File:Start automatic evaluation settings final.png|border|500px]]
Adjust the settings and click "Start evaluation".
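The settings above act as filters applied to the unit list before sampling. The sketch below models that behaviour; the unit fields (`source`, `locked`, `match`) and the function name are assumptions for illustration, not TQAuditor internals:

```python
def filter_units(units, skip_repetitions=True, skip_locked=True,
                 skip_match_at_least=None):
    """Drop units according to the evaluation settings (illustrative model).

    Each unit is an assumed dict with 'source' (source text), 'locked'
    (bool), and 'match' (fuzzy match percentage) keys.
    """
    seen_sources = set()
    kept = []
    for unit in units:
        if skip_locked and unit["locked"]:
            continue  # "Skip locked units"
        if skip_match_at_least is not None and unit["match"] >= skip_match_at_least:
            continue  # "Skip units with match >="
        if skip_repetitions:
            if unit["source"] in seen_sources:
                continue  # "Skip repetitions": only the first copy is kept
            seen_sources.add(unit["source"])
        kept.append(unit)
    return kept
```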
=='''Start evaluation (manual word count)'''==
If the file was reviewed only partially, you can use evaluation with manual word count.
To do this, click '''"Start evaluation (manual word count)"''':
[[File:Start manual evaluation.png|border|200px]]
Enter the number of evaluated source words (the total number of words in the reviewed part of the file):
[[File:Start manual evaluation settings 3.png|border|450px]]
Then click "Start evaluation", and the system will display all corrected segments of the document.
=='''Mistakes'''==
Click the "Add mistake" button within the needed segment to add a mistake:
[[File:1. 91.png|border|700px]]
Specify the mistake type and severity, leave a comment if needed, and click "Submit":

[[File:2. mistake.png|border|290px]]

You can edit or delete mistakes and comments, and add further mistakes, by clicking the corresponding buttons:
[[File:3. mistal.png|border|700px]]

*'''View in comparison''' — this link redirects you to the page with the '''<u>[[Comparison report|comparison report]]</u>''':
[[File:View in comparison.jpg|border|700px]]
When all the mistakes are added and classified, click "Complete evaluation", write an evaluation summary, and click the "Complete" button. The translator will receive a notification.
[[File:1 complete evaluation.png|border|530px]]
::<span style="color:orange">'''Note:'''</span> If you press '''"Complete"''' and no mistakes are added to the report, the system will warn you:
[[File:Evaluation no mistake are added.jpg|border|400px]]
==='''Markup display'''===
The Markup display setting allows you to choose how tags will be displayed:
*"Full" — tags keep their original length, so you can see the data within:
[[File:1 full.png|border|140px]] [[File:1.png|border|650px]]
*"Short" — the contents of the tags are not displayed; you see only their position in the text:
[[File:2 short.png|border|140px]] [[File:2.png|border|350px]]
*"None" — tags are not displayed:
[[File:3 none.png|border|140px]] [[File:3.png|border|270px]]
==='''Units display'''===
*"All" — units both with and without mistakes are displayed.
 
*"With mistakes" — only units with mistakes are displayed.

*"Last commented by evaluator" — only units where the last comment was left by the evaluator are displayed.

*"Last commented by translator" — only units where the last comment was left by the translator are displayed.

*"Last commented by arbiter" — only units where the last comment was left by the arbiter are displayed.

[[File:Units display.png|border|170px]]
 
=='''Reevaluation and arbitration requests'''==
When the evaluation is done, the translator can complete the project or request a reevaluation if they disagree with the mistake severities:
 
[[File:Request reevaluation.png|border|200px]]
If the translator requests a reevaluation, the evaluator has to reply to all of the translator's comments.
::<span style="color:orange">'''Note:'''</span> Unless the number of [https://wiki.tqauditor.com/wiki/Evaluation_settings maximum evaluation attempts] has been adjusted, the translator can request a reevaluation up to 2 times.
If there is no agreement between the translator and the evaluator, the translator can request arbitration:
[[File:Arb.png|border|200px]]
The arbiter provides a final score that cannot be disputed and completes the project. Once the arbitration is completed, all project participants will receive an email notification.
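The whole workflow (reevaluation rounds followed by a final, undisputable arbitration) can be summarised as a small decision model. The two-attempt default mirrors the note above; the function itself is an illustrative assumption, not part of TQAuditor:

```python
MAX_REEVALUATION_ATTEMPTS = 2  # default; adjustable in Evaluation settings

def next_step(reevaluations_used, translator_agrees):
    """What the translator can do after receiving an evaluation (sketch)."""
    if translator_agrees:
        return "complete project"
    if reevaluations_used < MAX_REEVALUATION_ATTEMPTS:
        return "request reevaluation"
    return "request arbitration"  # the arbiter's score is final
```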
[[File:Redirect.jpg|40px|link=Quality evaluation]] Back to the table of contents.

Latest revision as of 17:52, 18 February 2022
