S-nlp

Rank:

Average Model Cost: $0.0000

Number of Runs: 13,561

Models by this creator

russian_toxicity_classifier


s-nlp

The russian_toxicity_classifier is a BERT-based classifier fine-tuned to detect toxic comments in Russian. It was trained on a merged dataset of toxic comments collected from 2ch.hk and ok.ru, split into training, development, and test sets in an 80/10/10 proportion. The model was evaluated on the test set, but the resulting metrics are not given in the description. It is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
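A minimal sketch of how such a sequence classifier is typically applied. The model ID "s-nlp/russian_toxicity_classifier" and the label names are assumptions inferred from the creator and model names above; the softmax/argmax helpers show how raw logits become a label.

```python
import math

def softmax(logits):
    """Convert raw classifier logits into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits, labels=("neutral", "toxic")):
    """Pick the label with the highest probability (label names assumed)."""
    probs = softmax(logits)
    return labels[probs.index(max(probs))]

def classify_toxicity(texts):
    """Sketch of running the classifier itself; downloads the model on first call.
    The model ID below is an assumption, not confirmed by this page."""
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    model_id = "s-nlp/russian_toxicity_classifier"  # assumed Hugging Face ID
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**batch).logits
    return [predict_label(row.tolist()) for row in logits]

# The pure helpers can be checked with dummy logits:
print(predict_label([0.2, 3.1]))  # → "toxic"
```

`classify_toxicity` is defined but not called here, since it requires the `transformers` library and a network download.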


$-/run

9.1K

Huggingface

roberta_toxicity_classifier


This model is trained for the toxicity classification task. The training data is a merge of the English parts of three Jigsaw datasets (Jigsaw 2018, Jigsaw 2019, Jigsaw 2020), containing around 2 million examples. We split it into two parts and fine-tune a RoBERTa model (RoBERTa: A Robustly Optimized BERT Pretraining Approach) on it. The classifiers perform closely on the test set of the first Jigsaw competition, reaching an AUC-ROC of 0.98 and an F1-score of 0.76. Licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
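The F1-score quoted above is the harmonic mean of precision and recall. A toy illustration of how the metric is computed for binary toxicity labels:

```python
def f1_score(y_true, y_pred):
    """F1 for binary labels: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# One true positive, one false positive, one false negative:
print(f1_score([1, 1, 0, 0], [1, 0, 0, 1]))  # precision 0.5, recall 0.5 → 0.5
```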


$-/run

1.9K

Huggingface

roberta-base-formality-ranker


The model has been trained to predict, for English sentences, whether they are formal or informal. Base model: roberta-base. Datasets: GYAFC from Rao and Tetreault, 2018, and the online formality corpus from Pavlick and Tetreault, 2016. Data augmentation: changing texts to upper or lower case, removing all punctuation, and adding a dot at the end of a sentence. It was applied because otherwise the model over-relies on punctuation and capitalization and does not pay enough attention to other features. Loss: binary classification (on GYAFC) and in-batch ranking (on the Pavlick and Tetreault data). Performance metrics on the test data:
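The augmentation described above (case changes, punctuation removal, adding a final dot) can be sketched with simple text transforms. This is an illustrative reconstruction, not the authors' training code:

```python
import random
import string

def augment(text, rng):
    """Apply one of the described case/punctuation perturbations at random."""
    ops = [
        str.upper,                                                    # all caps
        str.lower,                                                    # all lowercase
        lambda t: t.translate(str.maketrans("", "", string.punctuation)),  # strip punctuation
        lambda t: t if t.endswith(".") else t + ".",                  # add a final dot
    ]
    return rng.choice(ops)(text)

rng = random.Random(0)
print(augment("Would you kindly reply?", rng))
```

Training on such perturbed copies forces the ranker to rely on word choice rather than surface cues like capitalization and punctuation.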


$-/run

974

Huggingface

gpt2-base-gedi-detoxification


This is a conditional language model based on gpt2-medium but with the vocabulary of t5-base, for compatibility with T5-based paraphrasers such as t5-paranmt-detox. The model is conditioned on two styles, toxic and normal, and was fine-tuned on the dataset from the Jigsaw toxic comment classification challenge. It was trained for the paper Text Detoxification using Large Pre-trained Neural Models (Dale et al., 2021), which describes its possible usage in more detail; an example of its use and the code for its training are given at https://github.com/skoltech-nlp/detox.

The model is intended for use as a discriminator in a text detoxification pipeline based on the ParaGeDi approach (see the paper for more details). It can also be used for text generation conditioned on toxic or non-toxic style, but we do not know how to condition it on anything other than toxicity, so we do not recommend this usage. Another possible use is as a toxicity classifier (via the Bayes rule), but the model is not expected to perform better than, e.g., a standard BERT-based classifier.

The model inherits all the risks of its parent model, gpt2-medium, as well as all the biases of the Jigsaw dataset on which it was fine-tuned. Although the model is intended to be conditioned on style, it does not clearly separate style from content, so it may regard some texts as toxic or safe based not on their style but on their topics or keywords. See the paper Text Detoxification using Large Pre-trained Neural Models and the associated code. The model has not been evaluated on its own, only as part of the ParaGeDi text detoxification pipeline (see the paper). BibTeX:
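The Bayes-rule classification mentioned above works by comparing how well each style condition explains a text: P(toxic | text) ∝ P(text | toxic) · P(toxic). A sketch of the posterior computation, assuming the two conditional log-likelihoods have already been obtained from the language model (the prior value here is an illustrative assumption):

```python
import math

def bayes_posterior(loglik_toxic, loglik_normal, prior_toxic=0.5):
    """P(toxic | text) from the two conditional log-likelihoods via Bayes rule."""
    log_joint_toxic = loglik_toxic + math.log(prior_toxic)
    log_joint_normal = loglik_normal + math.log(1.0 - prior_toxic)
    # Normalize in log space for numerical stability.
    m = max(log_joint_toxic, log_joint_normal)
    z = math.exp(log_joint_toxic - m) + math.exp(log_joint_normal - m)
    return math.exp(log_joint_toxic - m) / z

# A text that the "toxic" condition explains much better gets a high posterior:
print(bayes_posterior(loglik_toxic=-40.0, loglik_normal=-45.0))  # ≈ 0.993
```

With equal priors this reduces to a sigmoid of the log-likelihood difference, which is why the description warns the model is unlikely to beat a discriminatively trained BERT classifier.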


$-/run

80

Huggingface

rubert-base-corruption-detector


This is a model for evaluating the naturalness of short Russian texts. It has been trained to distinguish human-written texts from their corrupted versions. Corruption sources: random replacement, deletion, addition, shuffling, and re-inflection of words and characters; random changes of capitalization; round-trip translation; and filling random gaps with T5 and RoBERTa models. For each original text, we sampled three corrupted texts, so the model is uniformly biased towards the "unnatural" label. Data sources: web corpora from the Leipzig collection (rus_news_2020_100K, rus_newscrawl-public_2018_100K, rus-ru_web-public_2019_100K, rus_wikipedia_2021_100K) and comments from OK and Pikabu. On our private test dataset, the model achieved a 40% rank correlation with human judgements of naturalness, which is higher than that of GPT perplexity, another popular fluency metric.
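The negative-example generation described above can be sketched with a few of the listed corruptions (word deletion, shuffling, character replacement). This is an illustrative reconstruction under assumed details, not the authors' pipeline:

```python
import random

def corrupt(text, rng):
    """Produce one corrupted variant of a human-written text."""
    words = text.split()
    op = rng.randrange(3)
    if op == 0 and len(words) > 1:        # random word deletion
        del words[rng.randrange(len(words))]
    elif op == 1 and len(words) > 1:      # shuffle two adjacent words
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    else:                                 # random character replacement
        w = rng.randrange(len(words))
        chars = list(words[w])
        chars[rng.randrange(len(chars))] = "x"
        words[w] = "".join(chars)
    return " ".join(words)

# Three corrupted variants per original, as in the described training setup:
rng = random.Random(42)
samples = [corrupt("the quick brown fox jumps", rng) for _ in range(3)]
print(samples)
```

Pairing each original with its corrupted variants yields natural/unnatural training labels without any manual annotation.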


$-/run

55

Huggingface

Similar creators