Quality score formula
=='''Quality score formula visualisation and interpretation'''==

The quality score reflects the number of mistakes made per 1000 words of translated text.

[[File:QSF.png|border|1000px]]

;1. Account score limit
:The highest possible score (100 by default). It can be set on the [https://cloud.tqauditor.com/system/evaluation-settings Evaluation settings] page.

;2. Mistake severity score
:The score of a mistake according to its severity (for example, 1 point for a minor mistake and 5 points for a major one). It can be set on the [https://cloud.tqauditor.com/mistake-severity/index Mistake severities list] page.

;3. Mistake type weight
:The weight of a specific mistake type for a given specialization. It can be adjusted on the [https://cloud.tqauditor.com/mistake-type/index Mistake types list] page.

;4. SUM
:The sum of the products of #2 and #3 over all mistakes.

;5. Project evaluation word count
:''Automatic word count'': the value of the "Total source words" field in the "Evaluation details" section (not the total source words of a file). Please see the [[Evaluation_report#Automatic_vs._manual_word_count|Automatic vs. manual word count]] page.
:''Manual word count'': the value of the "Evaluated source words" field (specified when starting the evaluation with a manual word count).

;6. 1000
:The result is always normalized to 1000 words of source text, so scores are comparable regardless of project size.
The mistake severity scores, mistake type weights, and reports are based on this 1000-word amount. For example, a critical mistake with a score of 20 and a weight coefficient of 1 reduces the quality score by 20 per 1000 words (by 10 per 2000 words, and so on).
  
The formula is:

 ROUND(GREATEST(account.score_limit - SUM(mistake_severity.score * mistake_type_spec.weight) / project.evaluation_total_word_count * 1000, 0), 2)

Here GREATEST(..., 0) replaces a negative result with 0 (the score cannot drop below zero), and ROUND(..., 2) rounds the result to two decimal places.
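
The same calculation can be written as a short Python sketch (the <code>quality_score</code> function below is illustrative only and is not part of TQAuditor):

<syntaxhighlight lang="python">
def quality_score(mistakes, word_count, score_limit=100):
    """mistakes is a list of (severity_score, type_weight) pairs."""
    total = sum(score * weight for score, weight in mistakes)   # SUM (#4)
    deduction = total / word_count * 1000                       # normalized to 1000 words (#6)
    return round(max(score_limit - deduction, 0), 2)            # GREATEST(..., 0) and ROUND(..., 2)

# A critical mistake of 20 points with weight 1 costs 20 points per 1000 words
# and only 10 points per 2000 words:
print(quality_score([(20, 1)], 1000))  # 80.0
print(quality_score([(20, 1)], 2000))  # 90.0
</syntaxhighlight>
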
For more information, please check the [[Quality score formula:Details and versions|Quality score formula:Details and versions]] page.
  
=='''Quality score calculation examples'''==
  
In the examples below, the following quality standard is used:

[[File:QS.png|border|300px]]

[[File:QS1.png|border|1000px]]

===Automatic word count===

;File details&#58;
:Total source words: 3126
:Fully reviewed: yes
:Number of units (segments): 392
:Project specialization - marketing

;Evaluation settings&#58;
[[File:EAS.png|border|300px]]

;Evaluation details&#58;
:Total source words: 1749
:Source words in corrected units: 1002

'''Mistakes:'''
# <span style="color:red">'''Major'''</span> ('''<span style="color:blue">3</span>''' points each):
 +
## Terminology: '''1''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
 +
## Grammar: '''1''' (weight coefficient '''<span style="color:DarkTurquoise">1.2</span>''')
 +
# <span style="color:orange"> '''Minor'''</span> ('''<span style="color:blue">1</span>''' point each):
 +
## Grammar: '''3''' (weight coefficient '''<span style="color:DarkTurquoise">1.2</span>''')
 +
## Functional: '''2''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
 +
# <span style="color:green"> '''Repetitive'''</span> ('''<span style="color:blue">0.01</span>''' point each):
 +
## Functional: '''6''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
 +
## Layout: '''8''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
 +
# <span style="color:#6C3483"> '''Non-scoring'''</span> ('''<span style="color:blue">0</span>''' points each):
 +
## Style: '''5''' (weight coefficient '''<span style="color:DarkTurquoise">1.2</span>''')
 +
## Terminology: '''3''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
  
*'''project.evaluation_total_word_count''' - total source words after evaluation start filter is applied.
+
:'''Total mistakes''': '''29'''
 +
:'''SUM''': '''1'''*'''<span style="color:blue">3</span>'''*'''<span style="color:DarkTurquoise">1</span>'''+'''1'''*'''<span style="color:blue">3</span>'''*'''<span style="color:DarkTurquoise">1.2</span>'''+'''3'''*'''<span style="color:blue">1</span>'''*'''<span style="color:DarkTurquoise">1.2</span>'''+'''2'''*'''<span style="color:blue">1</span>'''*'''<span style="color:DarkTurquoise">1</span>'''+'''6'''*'''<span style="color:blue">0.01</span>'''*'''<span style="color:DarkTurquoise">1</span>'''+'''8'''*'''<span style="color:blue">0.01</span>'''*'''<span style="color:DarkTurquoise">1</span>'''+'''5'''*'''<span style="color:blue">0</span>'''*'''<span style="color:DarkTurquoise">1.2</span>'''+'''3'''*'''<span style="color:blue">0</span>'''*'''<span style="color:DarkTurquoise">1</span>'''=3+3.6+3.6+2+0.06+0.08+0+0='''12.34'''
  
;Quality score calculation&#58;
Using the [[Quality_score_formula#Quality_score_formula_visualisation_and_interpretation|formula]]:
100 - (12.34 / 1749 * 1000) = 92.94
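
For reference, the same numbers can be reproduced with a short Python sketch (illustrative only, not TQAuditor code); each tuple is (number of mistakes, severity score, weight coefficient):

<syntaxhighlight lang="python">
mistakes = [
    (1, 3, 1),     # major / terminology
    (1, 3, 1.2),   # major / grammar
    (3, 1, 1.2),   # minor / grammar
    (2, 1, 1),     # minor / functional
    (6, 0.01, 1),  # repetitive / functional
    (8, 0.01, 1),  # repetitive / layout
    (5, 0, 1.2),   # non-scoring / style
    (3, 0, 1),     # non-scoring / terminology
]

total = sum(count * score * weight for count, score, weight in mistakes)
print(round(total, 2))                              # 12.34 (the SUM above)
print(round(max(100 - total / 1749 * 1000, 0), 2))  # 92.94
</syntaxhighlight>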
  
[[File:QSA.png|border|300px]]
  
----
===Manual word count===
  
;File details&#58;
:Total source words: 3126
:Fully reviewed: no
:Number of words in a reviewed part of the file: around 1500
:Number of units (segments): 392
:Project specialization - marketing

;Evaluation settings&#58;
:Evaluated source words: 1500

;Evaluation details&#58;
:Corrected units: 227
:Total source words: 1500

'''Mistakes:'''

# <span style="color:red">'''Major'''</span> ('''<span style="color:blue">3</span>''' points each):
## Terminology: '''1''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
## Grammar: '''1''' (weight coefficient '''<span style="color:DarkTurquoise">1.2</span>''')
# <span style="color:orange"> '''Minor'''</span> ('''<span style="color:blue">1</span>''' point each):
## Grammar: '''3''' (weight coefficient '''<span style="color:DarkTurquoise">1.2</span>''')
## Functional: '''2''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
# <span style="color:green"> '''Repetitive'''</span> ('''<span style="color:blue">0.01</span>''' point each):
## Functional: '''6''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
## Layout: '''8''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
# <span style="color:#6C3483"> '''Non-scoring'''</span> ('''<span style="color:blue">0</span>''' points each):
## Style: '''5''' (weight coefficient '''<span style="color:DarkTurquoise">1.2</span>''')
## Terminology: '''3''' (weight coefficient '''<span style="color:DarkTurquoise">1</span>''')
  
:'''Total mistakes''': '''29'''
:'''SUM''': '''1'''*'''<span style="color:blue">3</span>'''*'''<span style="color:DarkTurquoise">1</span>'''+'''1'''*'''<span style="color:blue">3</span>'''*'''<span style="color:DarkTurquoise">1.2</span>'''+'''3'''*'''<span style="color:blue">1</span>'''*'''<span style="color:DarkTurquoise">1.2</span>'''+'''2'''*'''<span style="color:blue">1</span>'''*'''<span style="color:DarkTurquoise">1</span>'''+'''6'''*'''<span style="color:blue">0.01</span>'''*'''<span style="color:DarkTurquoise">1</span>'''+'''8'''*'''<span style="color:blue">0.01</span>'''*'''<span style="color:DarkTurquoise">1</span>'''+'''5'''*'''<span style="color:blue">0</span>'''*'''<span style="color:DarkTurquoise">1.2</span>'''+'''3'''*'''<span style="color:blue">0</span>'''*'''<span style="color:DarkTurquoise">1</span>'''=3+3.6+3.6+2+0.06+0.08+0+0='''12.34'''
  
;Quality score calculation&#58;

Using the [[Quality_score_formula#Quality_score_formula_visualisation_and_interpretation|formula]]:

100 - (12.34 / 1500 * 1000) = 91.77

[[File:QSM.png|border|300px]]
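
The only difference from the automatic word count example is the word count used in the denominator, so the check is a one-liner (again, illustrative Python only):

<syntaxhighlight lang="python">
# The same SUM of 12.34, divided by the manually specified 1500 source words
print(round(max(100 - 12.34 / 1500 * 1000, 0), 2))  # 91.77
</syntaxhighlight>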
 
 
 
 
=='''TQAuditor v2.14'''==

===How is the Quality Score calculated?===

The quality score reflects the number of mistakes made per 1000 words of translated text.
 
 
 
The formula is:
 
 
 
Quality score = Σ(mistake_severity.score * mistake_type_spec.weight) * project.evaluation_corrected_word_count / project.evaluation_sample_word_count / project.evaluation_total_word_count * 1000,
 
where:
 
 
 
*'''Quality score''' - the number of "base" mistakes per 1000 words ("base" mistake - the mistake that has the severity score of 1 and the weight coefficient of 1).
 
*'''Σ''' - the sum of products of the mistake severity score and the weight coefficient of the mistake type per specialization.
 
*'''mistake_severity.score''' - the mistake severity score.
 
*'''mistake_type_spec.weight''' - the weight coefficient of the mistake type per project specialization.
 
*'''project.evaluation_corrected_word_count''' - source words in corrected units in the evaluation sample.
 
*'''project.evaluation_sample_word_count''' - total source words in the evaluation sample.
 
*'''project.evaluation_total_word_count''' - total source words after evaluation start filter is applied.
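
For clarity, here is a minimal Python sketch of the v2.14 formula (the function and parameter names are illustrative, not part of TQAuditor):

<syntaxhighlight lang="python">
def quality_score_v214(mistakes, corrected_words, sample_words, total_words):
    """mistakes is a list of (severity_score, type_weight) pairs found in the sample."""
    weighted_sum = sum(score * weight for score, weight in mistakes)
    return weighted_sum * corrected_words / sample_words / total_words * 1000
</syntaxhighlight>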
 
 
 
===How is the number of translated words selected?===
 
 
 
To make it simpler, let's look at an example:
 
 
 
Before starting the evaluation, the reviewer selects the number of words to evaluate:
 
 
 
[[File:Start evaluation filter.jpg|border|350px]]
 
 
 
Now imagine that the whole translated text (marketing specialization) contains 3126 words, and the editor corrects the segments containing 2323 source words.
 
 
 
You select a 1000-word sample, which the system takes at random out of the 2323 source words in corrected units, and add 5 mistakes of different types and severities. Two of them are minor punctuation mistakes (severity score 1 and weight coefficient 0.9 each), and three of them are major grammar mistakes (severity score 5 and weight coefficient 1.2 each):
 
 
 
[[File:QS types and severities 1.jpg|border|600px]]
 
 
 
<span style="color:red">'''Note:''' By default, the system has pre-defined quality standards, but you can define your own corporate '''<U>[[System|quality standards]]</U>'''.</span>
 
 
 
As a result, we get:
 
 
 
'''Quality score''' = Σ(1*0.9 + 1*0.9 + 5*1.2 + 5*1.2 + 5*1.2) * 2323 / 3126 / 1001 * 1000 = 14.699
 
 
 
You’ll be able to see these numbers on the evaluation page:
 
 
 
[[File:QSF result 1.jpg|border|200px]]
 
 
 
Where:
 
 
 
*'''Quality score''' = 14.7.
 
*'''project.evaluation_corrected_word_count''' = 2323.
 
*'''project.evaluation_sample_word_count''' = 3126.
 
*'''project.evaluation_total_word_count''' = 1001.
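
These numbers can be checked with a few lines of Python (illustrative only):

<syntaxhighlight lang="python">
# 2 minor punctuation mistakes (1 * 0.9) and 3 major grammar mistakes (5 * 1.2)
weighted_sum = 2 * (1 * 0.9) + 3 * (5 * 1.2)       # 19.8
score = weighted_sum * 2323 / 3126 / 1001 * 1000   # the three word counts shown above
print(round(score, 1))                             # 14.7
</syntaxhighlight>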
 
 
 
===Why is it made that way?===
 
 
 
We came to this through several stages of evolution.
 
 
 
First, the system selected just the beginning of the text, and we found out that translators started to translate the first 1000 words better than the rest of the text. So we decided to select a random part of the text. But then the system sometimes selected pieces that contained no corrections while skipping heavily corrected parts. So we changed the logic: the system now returns the required number of corrections to the evaluator, but remembers how much text it had to take to find those corrections.
 
 
 
===How is the Quality Score calculated for several projects?===
 
 
 
When the quality score is calculated for several projects (for example, when generating reports), the indicators of the individual projects are summed and combined according to the formula:
 
 
 
Average quality score = Σ(Σ(A)) * Σ(B) / Σ(C) / Σ(D) * 1000, where:
 
 
 
*'''Average quality score''' - the number of "base" mistakes per 1000 words in evaluated projects ("base" mistake - the mistake that has the severity score of 1 and the weight coefficient of 1).
 
 
 
*'''Σ(Σ(A))''' - the total sum of products of the mistake severity scores and the weight coefficients of the mistake types per specialization of evaluated projects.
 
 
 
*'''Σ(B)''' - the sum of source words in corrected units in the evaluation samples of evaluated projects.
 
 
 
*'''Σ(C)''' - the sum of total source words in the evaluation samples of evaluated projects.
 
 
 
*'''Σ(D)''' - the sum of total source words in evaluated projects.
 
 
 
To put it simply, let's calculate the quality score of two projects:
 
 
 
{| style="width:650px" border="1" 
 
 
|-style="height:50px"
 
|1. " valign="top" align="center" width="300"  |
 
 
 
|2. " valign="center" align="center" width="100" |  '''Project A'''
 
 
|3. " valign="center" align="center" width="100" |  '''Project B'''
 
 
|- style="height:50px"
 
 
 
|4. " valign="top" align="left" | '''Σ(A)'''  - the sum of products of the mistake severity scores and the weight coefficients of the mistake types per specialization
 
 
|5. " valign="center" align="center" | 10
 
 
 
|6. " valign="center" align="center" | 2,01
 
 
|- style="height:50px"
 
 
|7. " valign="center" align="left" | '''B''' - source words in corrected units in the evaluation sample
 
 
|8. " valign="center" align="center" | 428
 
 
|9. " valign="center" align="center" | 8956
 
 
|- style="height:50px"
 
 
 
|10. " valign="center" align="Left" | '''C'''  - source words in the evaluation sample
 
 
 
|11. " valign="center" align="center" | 428
 
 
 
|12. " valign="center" align="center" | 308
 
 
|- style="height:50px"
 
 
|13. " valign="center" align="Left" | '''D''' - total source words in the project
 
 
|14. " valign="center" align="center" | 1489
 
 
|15. " valign="center" align="center" | 9979
 
 
|- style="height:50px"
 
 
|16. " valign="center" align="Left" style="color: blue" | Quality score
 
 
|17. " valign="center" align="center" style="color: red"| 6,72
 
 
|18. " valign="center" align="center" style="color: red" | 5,86
 
 
 
|- style="height:50px"
 
 
|19. " valign="center" align="Left" style="color: blue" | Quality level
 
 
|20. " valign="center" align="center" style="color: red"| Good
 
 
|21. " valign="center" align="center" style="color: red" | Good
 
 
 
|-
 
 
 
 
|}
 
 
 
As a result, we get:
 
 
 
Average quality score = (10 + 2.01) * (428 + 8956) / (428 + 308) / (1489 + 9979) * 1000 = <span style="color:red">13.35</span> (which corresponds to the "Satisfactory" quality level).
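
The aggregation can be written as a small Python sketch (the function and parameter names are illustrative, not part of TQAuditor):

<syntaxhighlight lang="python">
def average_quality_score(projects):
    """projects is a list of (sum_a, b, c, d) tuples, one per evaluated project."""
    sum_a = sum(p[0] for p in projects)  # Σ(Σ(A))
    sum_b = sum(p[1] for p in projects)  # Σ(B)
    sum_c = sum(p[2] for p in projects)  # Σ(C)
    sum_d = sum(p[3] for p in projects)  # Σ(D)
    return sum_a * sum_b / sum_c / sum_d * 1000

# Project A and Project B from the table above:
print(round(average_quality_score([(10, 428, 428, 1489),
                                   (2.01, 8956, 308, 9979)]), 2))  # 13.35
</syntaxhighlight>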
 
 
 
As you can see in the above example, when the system aggregates data from several projects, the average quality score and average quality level can differ from the quality scores and quality levels of the individual projects.
 
