The Scribendi Sentence Evaluator uses a perplexity score to evaluate whether a correction has improved a sentence's grammar.

Perplexity is a measurement used in machine learning to capture a model's degree of uncertainty. As an evaluation tool, it processes natural language data to interpret written text, including its underlying contextual nuances, and assigns probabilities that help flag likely grammatical errors. The perplexity score is used to evaluate the grammatical accuracy of sentences and to suggest language corrections where needed.

A Simple Evaluation of Sentence Grammaticality

Improvements to the grammaticality and fluency of a sentence correlate with a lower perplexity score. If a sentence's perplexity score is low, sequences like it occur commonly in grammatically correct text, so the sentence itself is more likely to be correct.

Based on Language Models

The Scribendi Sentence Evaluator calculates a perplexity score that is inversely related to the probability of a sequence of words: the likelier the sequence, the lower the score. Our editing tools are based on language models that use a mathematical framework for prediction and inference, thereby modeling the likeliest sequences of words.
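As a minimal sketch of the relationship described above (not Scribendi's actual implementation), perplexity can be computed as the exponentiated average negative log-probability that a language model assigns to each token. The per-token probabilities below are hypothetical values chosen only for illustration.

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence: exp of the negative mean
    log-probability the model assigned to each token."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# Hypothetical per-token probabilities a language model might assign:
fluent = [0.30, 0.45, 0.25, 0.40]    # a common, grammatical sequence
awkward = [0.30, 0.02, 0.25, 0.40]   # one unlikely word raises perplexity

print(perplexity(fluent) < perplexity(awkward))  # lower score = more fluent
```

A single improbable token pushes the score up sharply, which is why a lower perplexity is taken as evidence that a corrected sentence is more fluent than the original.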

The Scribendi Sentence Evaluator detects sentence errors before you start editing, leaving you with more time to focus on crucial tasks, such as clarifying an author’s meaning and strengthening their writing overall.