Evaluation report

From TQAuditor Wiki
Revision as of 15:26, 20 January 2022

General information

After the evaluator has uploaded files, they can start the evaluation.

The evaluator can start the evaluation either with an automatic word count or by entering the word count manually when starting the process:

Evaluations.png

For more info on both methods, please check the relevant sections below.

Automatic vs. manual word count

To put it shortly, the evaluation with automatic word count is used for files that were fully reviewed.

The evaluation with manual word count was designed to be used with files that were not fully reviewed.

Automatic:

1. Used for fully reviewed files.

2. The "Evaluation sample word count limit" is used to adjust how many segments will be displayed for evaluation (for example, 1000 words - around 100 segments, 1500 words - around 150 segments, etc.).

3. When calculating the score, the "Total source words" is used.

Manual:

1. Used for partially reviewed files (in order not to split the file into parts and import only the reviewed part).

2. The "Evaluated source words" should reflect the number of words in the reviewed part of the file. For example, a reviewer reviewed only 1500 words in a 5000-word file. Then they should specify 1500 as "Evaluated source words" and the system will not take the remaining 3500 words into account.

3. When calculating the score, the "Total source words" is used. In this case, "Evaluated source words" = "Total source words".
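The two word-count rules above can be sketched as follows. This is a hypothetical illustration only: TQAuditor's actual scoring internals are not public, so the function names, the mode strings, and the 10-words-per-segment estimate are assumptions based on the examples given above.

```python
def words_used_for_score(mode, total_source_words, evaluated_source_words=None):
    """Return the word count that feeds the score calculation (illustrative)."""
    if mode == "automatic":
        # Fully reviewed file: the file's own total is used.
        return total_source_words
    # Manual mode: the evaluator's figure replaces the total,
    # so "Evaluated source words" = "Total source words".
    return evaluated_source_words


def estimated_sample_segments(word_limit, avg_words_per_segment=10):
    """Rough segment count for a sample limit (1000 words -> ~100 segments)."""
    return word_limit // avg_words_per_segment


print(words_used_for_score("manual", 5000, 1500))   # only the 1500 reviewed words count
print(estimated_sample_segments(1000))              # roughly 100 segments
```

The 10-words-per-segment figure is simply the ratio implied by the "1000 words - around 100 segments" example; real segment lengths vary.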

Start evaluation (automatic word count)

If you select this option, the system will display randomly selected segments containing only corrected units for evaluation:

Start evaluation automatic.png

Then you may configure the evaluation process:

  • Skip repetitions — the system will hide repeated segments (only one of them will be displayed in this case).
  • Skip locked units — the program will hide "frozen" units. For example, the client may want certain parts that are extremely important to them to stay unchanged. Besides, extra units slow down the editor's work.
  • Evaluation sample word count limit — the number of words in edited segments chosen for evaluation.

Start automatic evaluation settings final.png
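The settings above could be captured in a simple structure like the one below. This is a hypothetical representation for illustration; the key names mirror the UI labels and are not an actual TQAuditor API.

```python
# Illustrative defaults for an automatic evaluation run (assumed names).
evaluation_settings = {
    "skip_repetitions": True,        # show only one copy of repeated segments
    "skip_locked_units": True,       # hide "frozen" units the client locked
    "sample_word_count_limit": 1000, # words drawn into the evaluation sample
}

for option, value in evaluation_settings.items():
    print(f"{option}: {value}")
```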

Having applied the settings you need, press "Start evaluation" to initiate the process.

Start evaluation (manual word count)

If the word count given by the system does not correspond to the word count you want, you may manually enter the total word count before starting evaluation.

To do this, press "Start evaluation (manual word count)":

Start manual evaluation.png

Enter the number of evaluated source words:

Start manual evaluation settings 3.png

Then press "Start evaluation" and the system will display all corrected segments of the document.

Thus, you will be able to select the segments for evaluation on your own.

Mistakes

You can press "Add mistake":

1. 91.png

You may describe it:

2. mistake.png

You can edit or delete a mistake or comment, or add another mistake, by clicking the corresponding buttons:

3. mistal.png

  • View in comparison — this link redirects you to the page with the Comparison report:

View in comparison.jpg

When the mistake classification is done, the project evaluator has to press "Complete evaluation":

1 complete evaluation.png

Note: If you press "Complete" when no mistakes have been added to the report, the system will warn you:

Evaluation no mistake are added.jpg

Markup display

The Markup display option defines how tags are displayed:

  • Full - tags keep their original length, so you can see the data within them:

1 full.png 1.png

  • Short - tags are compressed, and you see only their position in the text:

2 short.png 2.png

  • None - tags are completely hidden, so they will not distract you:

3 none.png 3.png

Units display

  • All - all text segments are displayed.
  • With mistakes - only segments with mistakes are displayed.
  • Last commented by evaluator - only segments last commented on by the evaluator are displayed.
  • Last commented by translator - only segments last commented on by the translator are displayed.
  • Last commented by arbiter - only segments last commented on by the arbiter are displayed.

Press "Apply" after changing the preferences:

Units display.png
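The display modes listed above act as filters over the document's segments. The sketch below shows the filtering logic they describe; the segment fields (`mistakes`, `last_comment_by`) and mode strings are assumptions for illustration, not TQAuditor's actual data model.

```python
def filter_units(segments, mode):
    """Return the segments matching the selected "Units display" mode (illustrative)."""
    if mode == "all":
        return segments
    if mode == "with_mistakes":
        # Keep only segments that have at least one mistake attached.
        return [s for s in segments if s.get("mistakes")]
    # "last_commented_by_*" modes keep segments whose most recent
    # comment came from that role (evaluator, translator, or arbiter).
    role = mode.removeprefix("last_commented_by_")
    return [s for s in segments if s.get("last_comment_by") == role]


segments = [
    {"text": "Segment 1", "mistakes": ["typo"], "last_comment_by": "evaluator"},
    {"text": "Segment 2", "mistakes": [], "last_comment_by": "translator"},
]
print(len(filter_units(segments, "with_mistakes")))  # 1
```

Note that `str.removeprefix` requires Python 3.9 or later.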

Reevaluation and arbitration requests

When the mistake classification is done, the project evaluator has to press "Complete evaluation" => "Complete",

and the system will send the quality assessment report to the translator.

When the translator receives this report and looks through the classification of each mistake, they may press "Complete project"

(if they agree with the evaluator; in this case, the project will be completed) or "Request reevaluation" (if they disagree):

Request reevaluation.png

The project will be sent to the evaluator, who will review the translator's comments.

If they are convincing, the evaluator may change the mistake severity. The translator will then receive the reevaluated project.

The translator can send this project for reevaluation one more time.

If an agreement between the translator and the evaluator is not reached, the translator can send the project to the arbiter

by pressing "Request arbitration" (it appears instead of "Request reevaluation"):

Arb.png

Redirect.jpg Back to the table of contents.