===Manual word count===
=='''TQAuditor v3.0'''==

===How Quality Score is calculated===

In TQAuditor v3.0 the Quality Score formula is similar to the MQM score formula: http://www.qt21.eu/mqm-definition/definition-2015-12-30.html#scoring-algorithm

The quality score reflects the number of mistakes made per 1000 words of the translated text.

'''The formula is:'''

ROUND(GREATEST(account.score_limit - SUM(mistake_severity.score * mistake_type_spec.weight) / project.evaluation_total_word_count * 1000, 0), 2)

'''The formula interpretation:'''

ROUND(var1, 2) — returns the value rounded to two decimal places.

ROUND(GREATEST(var2, 0), 2) — returns "0" in case of a negative value (for example, "-5" becomes "0").

'''Variables interpretation:'''

var1 = GREATEST(var2, 0)

var2 = account.score_limit - (SUM(mistake_severity.score * mistake_type_spec.weight) / project.evaluation_total_word_count) * 1000

*'''mistake_severity.score''' — the mistake severity score.

*'''mistake_type_spec.weight''' — the weight coefficient of the mistake type per project specialization.

*'''project.evaluation_total_word_count''' — total source words after the '''<U>[[Evaluation report#General information|Start evaluation]]</U>''' filter is applied (whether with '''<U>[[Evaluation report#Start evaluation (automatic word count)|automatic]]</U>''' or '''<U>[[Evaluation report#Start evaluation (manual word count)|manually entered]]</U>''' word count).

'''<span style="color:red"> Note:</span>''' '''account.score_limit''' is equal to 100 by default, but you may define the highest score limit you need in the '''<U>[[Evaluation settings]]</U>'''.
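The v3.0 formula can be sketched in Python. This is a minimal illustration, not TQAuditor's actual code; the function name, the argument layout, and the sample mistakes are invented for the example:

```python
def quality_score_v3(mistakes, total_word_count, score_limit=100):
    """Quality Score per the v3.0 formula: the weighted mistake penalty
    per 1000 words is subtracted from the score limit and clamped at zero.

    mistakes: iterable of (severity_score, type_weight) pairs.
    """
    penalty = sum(severity * weight for severity, weight in mistakes)
    raw = score_limit - penalty / total_word_count * 1000
    return round(max(raw, 0), 2)

# Two minor mistakes (1 * 0.9) and one critical mistake (20 * 1.0) in 1000 words:
print(quality_score_v3([(1, 0.9), (1, 0.9), (20, 1.0)], 1000))  # 78.2
```

Note the clamping: if the weighted penalty exceeds the score limit, the result is 0 rather than a negative number, matching GREATEST(var2, 0).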
=='''TQAuditor v2.14'''==

=== How Quality Score is calculated ===

Quality score reflects the number of mistakes made per 1000 words of translated text.

The formula is:

Quality score = Σ(mistake_severity.score * mistake_type_spec.weight) * project.evaluation_corrected_word_count / project.evaluation_sample_word_count / project.evaluation_total_word_count * 1000,

where:

*'''Quality score''' - the number of "base" mistakes per 1000 words (a "base" mistake is one with a severity score of 1 and a weight coefficient of 1).
*'''Σ''' - the sum of products of the mistake severity score and the weight coefficient of the mistake type per specialization.
*'''mistake_severity.score''' - the mistake severity score.
*'''mistake_type_spec.weight''' - the weight coefficient of the mistake type per project specialization.
*'''project.evaluation_corrected_word_count''' - source words in corrected units.
*'''project.evaluation_sample_word_count''' - total source words in the evaluation sample.
*'''project.evaluation_total_word_count''' - total source words after the evaluation start filter is applied.
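As a sketch, the v2.14 formula translates to Python like this (the function and argument names are invented for illustration; this is not TQAuditor's code):

```python
def quality_score_v2(mistakes, corrected_wc, sample_wc, total_wc):
    """Quality Score per the v2.14 formula: the estimated number of
    "base" mistakes per 1000 words of the translated text.

    mistakes: iterable of (severity_score, type_weight) pairs found in the sample.
    """
    weighted = sum(severity * weight for severity, weight in mistakes)
    return weighted * corrected_wc / sample_wc / total_wc * 1000

# One "base" mistake (1 * 1) when sample, corrected, and total counts all equal
# 1000 words yields exactly one mistake per 1000 words:
print(quality_score_v2([(1, 1)], 1000, 1000, 1000))  # 1.0
```

Unlike v3.0, this score counts mistakes, so a higher value means lower quality.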
=== How the number of translated words is selected ===

To make it simpler, let's look at an example.

Before starting the evaluation, the reviewer selects the number of words to evaluate:

[[File:Start evaluation filter.jpg|border|350px]]

Now imagine that the whole translated text (marketing specialization) contains 3126 words, and the editor corrects the segments containing 2323 source words.

You select a 1000-word sample, which the system randomly takes out of the 2323 source words in corrected units, and add 5 mistakes of different types and severities. Two of them are minor punctuation mistakes (a severity score of 1 and a weight coefficient of 0.9 for each mistake), and three of them are major grammar mistakes (a severity score of 5 and a weight coefficient of 1.2 for each mistake):

[[File:QS types and severities 1.jpg|border|600px]]

<span style="color:red">'''Note:''' By default, the system has pre-defined quality standards, but you can define your own corporate '''<U>[[System|quality standards]]</U>'''.</span>

As a result, we get:

'''Quality score''' = Σ(1*0.9 + 1*0.9 + 5*1.2 + 5*1.2 + 5*1.2) * 2323 / 3126 / 1001 * 1000 = 14.699

You'll be able to see these numbers on the evaluation page:

[[File:QSF result 1.jpg|border|200px]]

Where:

*'''Quality score''' = 14.7.
*'''project.evaluation_corrected_word_count''' = 2323.
*'''project.evaluation_sample_word_count''' = 1001.
*'''project.evaluation_total_word_count''' = 3126.
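Plugging the example's numbers into the formula, a quick arithmetic check in Python:

```python
# Two minor punctuation mistakes (1 * 0.9) and three major grammar mistakes (5 * 1.2):
weighted_sum = 2 * (1 * 0.9) + 3 * (5 * 1.2)  # 19.8
quality_score = weighted_sum * 2323 / 3126 / 1001 * 1000
print(round(quality_score, 3))  # 14.699
```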
=== Why is it made that way? ===

We came to this through several stages of evolution.

At first, the system selected just the beginning of the text, and we found out that translators started to translate the first 1000 words better than the rest of the text. So we decided to select a random part of the text. But then the system sometimes selected pieces containing no corrections while skipping heavily corrected parts. So we changed the logic: the system now returns the required number of corrections to the evaluator, but remembers how much text it took to find these corrections.

The mistake severity scores, mistake type weights, and reports are based on this 1000-word amount. For example, a critical mistake with a score of 20 and a weight coefficient of 1 reduces the quality score by 20 per 1000 words (by 10 per 2000 words, accordingly, and so on).
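That proportionality is easy to verify on the penalty term of the formula (a hypothetical helper written for this illustration, not TQAuditor code):

```python
def score_reduction(severity, weight, word_count):
    """How much a single mistake lowers the quality score:
    its weighted penalty scaled to a per-1000-words rate."""
    return severity * weight / word_count * 1000

print(score_reduction(20, 1, 1000))  # 20.0
print(score_reduction(20, 1, 2000))  # 10.0
```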