Alexandrainst
Rank:
Average Model Cost: $0.0000
Number of Runs: 852,514
Models by this creator
scandi-nli-large
$-/run
845.8K
Huggingface
da-sentiment-base
Model Card for Danish BERT
Danish BERT Tone for sentiment polarity detection
Model Details
Model Description: The BERT Tone model detects sentiment polarity (positive, neutral or negative) in Danish texts. It has been fine-tuned on the pretrained Danish BERT model by BotXO.
Developed by: DaNLP
Shared by [Optional]: Hugging Face
Model type: Text Classification
Language(s) (NLP): Danish (da)
License: cc-by-sa-4.0
Related Models: More information needed
Parent Model: BERT
Resources for more information: GitHub Repo, Associated Documentation
Uses
Direct Use: This model can be used for text classification.
Downstream Use [Optional]: More information needed.
Out-of-Scope Use: The model should not be used to intentionally create hostile or alienating environments for people.
Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
Recommendations: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
Training Details
Training Data: The data used for training comes from the Twitter Sentiment and EuroParl Sentiment 2 datasets.
Training Procedure / Preprocessing: The model has been fine-tuned on the pretrained Danish BERT model by BotXO.
Speeds, Sizes, Times: More information needed.
Evaluation
Testing Data, Factors & Metrics
Testing Data: More information needed.
Factors:
Metrics: F1
Results: More information needed.
Model Examination: More information needed.
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
Hardware Type: More information needed.
Hours used: More information needed.
Cloud Provider: More information needed.
Compute Region: More information needed.
Carbon Emitted: More information needed.
Technical Specifications [optional]
Model Architecture and Objective: More information needed.
Compute Infrastructure: More information needed.
Hardware: More information needed.
Software: More information needed.
Citation
BibTeX: More information needed.
APA: More information needed.
Glossary [optional]: More information needed.
More Information [optional]: More information needed.
Model Card Authors [optional]: DaNLP in collaboration with Ezi Ozoani and the Hugging Face team
Model Card Contact: More information needed.
How to Get Started with the Model
Use the code below to get started with the model.
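The quick-start code from the original card is not reproduced in this listing. Below is a minimal sketch using the Hugging Face transformers text-classification pipeline; the Hub ID alexandrainst/da-sentiment-base and the example sentence are assumptions based on this listing, not taken from the card.

from transformers import pipeline

# Minimal sketch: load the Danish sentiment polarity model.
# Hub ID assumed from this listing (creator Alexandrainst,
# model da-sentiment-base); adjust if the repository name differs.
sentiment = pipeline(
    "text-classification",
    model="alexandrainst/da-sentiment-base",
)

# Example Danish sentence ("I am very happy with the new library.")
print(sentiment("Jeg er meget glad for det nye bibliotek."))
# Output shape: [{'label': ..., 'score': ...}] with a
# positive / neutral / negative polarity label.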
$-/run
1.3K
Huggingface
da-binary-emotion-classification-base
Danish BERT for emotion detection
The BERT Emotion model detects whether a Danish text is emotional or not. It is based on the pretrained Danish BERT model by BotXO, which has been fine-tuned on social media data. See the DaNLP documentation for more details. Here is how to use the model (see the usage sketch after the training data note):
Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
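A minimal usage sketch for the binary emotional/not-emotional classifier, using the transformers pipeline; the Hub ID alexandrainst/da-binary-emotion-classification-base and the example sentence are assumptions based on this listing.

from transformers import pipeline

# Sketch only: binary "emotional vs. not emotional" classifier.
# Hub ID assumed from this listing; adjust if it differs.
emotion_binary = pipeline(
    "text-classification",
    model="alexandrainst/da-binary-emotion-classification-base",
)

# "I am so much looking forward to the holidays!"
print(emotion_binary("Jeg glæder mig så meget til ferien!"))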
$-/run
1.1K
Huggingface
da-hatespeech-classification-base
Danish BERT for hate speech classification
The BERT HateSpeech model classifies offensive Danish text into 4 categories:
Særlig opmærksomhed (special attention, e.g. a threat)
Personangreb (personal attack)
Sprogbrug (offensive language)
Spam & indhold (spam & content)
This model is intended to be used after the BERT HateSpeech detection model. It is based on the pretrained Danish BERT model by BotXO, which has been fine-tuned on social media data. See the DaNLP documentation for more details. Here is how to use the model (see the usage sketch after the training data note):
Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
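A minimal sketch for classifying already-detected offensive text into the four categories above; the Hub ID alexandrainst/da-hatespeech-classification-base and the (deliberately mild) example sentence are assumptions based on this listing.

from transformers import pipeline

# Sketch: assign one of the four offense categories listed above to a
# text that has already been flagged as offensive by the detection model.
# Hub ID assumed from this listing.
hatespeech_categories = pipeline(
    "text-classification",
    model="alexandrainst/da-hatespeech-classification-base",
)

# Mild illustrative example ("Your comment is completely idiotic.")
print(hatespeech_categories("Din kommentar er fuldstændig idiotisk."))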
$-/run
924
Huggingface
da-subjectivivity-classification-base
Danish BERT Tone for the detection of subjectivity/objectivity
The BERT Tone model detects whether a text (in Danish) is subjective or objective. The model is obtained by fine-tuning the pretrained Danish BERT model by BotXO. See the DaNLP documentation for more details. Here is how to use the model (see the usage sketch after the training data note):
Training data
The data used for training comes from the Twitter Sentiment and EuroParl Sentiment 2 datasets.
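A minimal sketch for the subjective/objective classifier; the Hub ID alexandrainst/da-subjectivivity-classification-base (spelled as in this listing) and the example sentence are assumptions.

from transformers import pipeline

# Sketch: subjective vs. objective classification of Danish text.
# Hub ID assumed from this listing (note the spelling of the model name).
subjectivity = pipeline(
    "text-classification",
    model="alexandrainst/da-subjectivivity-classification-base",
)

# "The film was incredibly boring." (a subjective statement)
print(subjectivity("Filmen var utrolig kedelig."))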
$-/run
858
Huggingface
da-hatespeech-detection-small
Danish ELECTRA for hate speech (offensive language) detection
The ELECTRA Offensive model detects whether a Danish text is offensive or not. It is based on the pretrained Danish Ælæctra model. See the DaNLP documentation for more details. Here is how to use the model (see the usage sketch after the training data note):
Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
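A minimal sketch for the lightweight ELECTRA-based offensive language detector; the Hub ID alexandrainst/da-hatespeech-detection-small and the example sentence are assumptions based on this listing.

from transformers import pipeline

# Sketch: lightweight ELECTRA-based offensive/not-offensive detector.
# Hub ID assumed from this listing.
offensive_small = pipeline(
    "text-classification",
    model="alexandrainst/da-hatespeech-detection-small",
)

# "Thanks for a really nice evening." (a non-offensive example)
print(offensive_small("Tak for en rigtig hyggelig aften."))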
$-/run
842
Huggingface
da-emotion-classification-base
Danish BERT for emotion classification
The BERT Emotion model classifies a Danish text into one of the following classes:
Glæde/Sindsro (joy/serenity)
Tillid/Accept (trust/acceptance)
Forventning/Interrese (anticipation/interest)
Overasket/Målløs (surprised/speechless)
Vrede/Irritation (anger/irritation)
Foragt/Modvilje (contempt/aversion)
Sorg/trist (sorrow/sadness)
Frygt/Bekymret (fear/worry)
It is based on the pretrained Danish BERT model by BotXO, which has been fine-tuned on social media data. This model should be used after detecting whether the text contains emotion or not, using the binary BERT Emotion model. See the DaNLP documentation for more details. Here is how to use the model (see the usage sketch after the training data note):
Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
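A minimal sketch for the eight-class emotion classifier; the Hub ID alexandrainst/da-emotion-classification-base and the example sentence are assumptions based on this listing.

from transformers import pipeline

# Sketch: eight-class emotion classifier, intended to run after the
# binary BERT Emotion model has flagged a text as emotional.
# Hub ID assumed from this listing.
emotion = pipeline(
    "text-classification",
    model="alexandrainst/da-emotion-classification-base",
)

# "I am so angry about that decision."
print(emotion("Jeg er så vred over den beslutning."))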
$-/run
840
Huggingface
da-hatespeech-detection-base
Danish BERT for hate speech (offensive language) detection
The BERT HateSpeech model detects whether a Danish text is offensive or not. It is based on the pretrained Danish BERT model by BotXO, which has been fine-tuned on social media data. See the DaNLP documentation for more details. Here is how to use the model (see the usage sketch after the training data note):
Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
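A minimal sketch for the BERT-based binary offensive/not-offensive detector; the Hub ID alexandrainst/da-hatespeech-detection-base and the example sentences are assumptions based on this listing.

from transformers import pipeline

# Sketch: BERT-based binary offensive/not-offensive detector.
# Hub ID assumed from this listing.
offensive = pipeline(
    "text-classification",
    model="alexandrainst/da-hatespeech-detection-base",
)

# An offensive example ("Shut up, you idiot.") and a neutral one
# ("The weather is fine today."); the pipeline accepts a list of texts.
print(offensive(["Hold kæft, din idiot.", "Vejret er fint i dag."]))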
$-/run
451
Huggingface
da-offensive-detection-base
$-/run
276
Huggingface
da-ner-base
BERT fine-tuned for Named Entity Recognition in Danish
The model tags tokens (in Danish sentences) with named entity tags in BIO format: PER, ORG, LOC, MISC. The pretrained language model used for fine-tuning is the Danish BERT by BotXO. See the DaNLP documentation for more details. Here is how to use the model (see the usage sketch after the training data note):
Training Data
The model has been trained on the DaNE dataset.
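A minimal sketch for the Danish NER tagger using the transformers token-classification pipeline; the Hub ID alexandrainst/da-ner-base and the example sentence are assumptions based on this listing.

from transformers import pipeline

# Sketch: token classification (NER) with BIO tags PER, ORG, LOC, MISC.
# Hub ID assumed from this listing. aggregation_strategy="simple" merges
# word pieces into whole entity spans.
ner = pipeline(
    "ner",
    model="alexandrainst/da-ner-base",
    aggregation_strategy="simple",
)

# "Mette Frederiksen visited Aarhus University yesterday."
print(ner("Mette Frederiksen besøgte Aarhus Universitet i går."))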
$-/run
88
Huggingface