Monologg
Rank:
Average Model Cost: $0.0000
Number of Runs: 739,537
Models by this creator
bert-base-cased-goemotions-original
The bert-base-cased-goemotions-original model is a BERT (Bidirectional Encoder Representations from Transformers) model fine-tuned on Google's GoEmotions dataset, a corpus of Reddit comments annotated with 27 emotion categories plus neutral. It classifies sentences into emotional categories such as joy, sadness, and anger, making it useful in applications that require sentiment analysis or emotion detection.
$-/run
174.6K
Huggingface
koelectra-small-v2-distilled-korquad-384
The koelectra-small-v2-distilled-korquad-384 model is a distilled KoELECTRA model fine-tuned for question answering in Korean. It is trained on the KorQuAD dataset, which pairs Korean questions and passages with their answers. Given a question and a passage as input, the model extracts the answer from the passage. It achieves good performance on the KorQuAD benchmark, demonstrating its ability to accurately answer questions in Korean.
$-/run
172.8K
Huggingface
kobert
KoBERT is a language model trained specifically for Korean text. It is based on the BERT architecture and trained on a large corpus of Korean text. The model is capable of performing a range of natural language processing tasks such as text classification, named entity recognition, and sentiment analysis. It can also be used for feature extraction from text. KoBERT provides state-of-the-art performance on various Korean language tasks and is a valuable tool for NLP applications in the Korean language.
$-/run
143.2K
Huggingface
koelectra-small-v3-discriminator
koelectra-small-v3-discriminator is the discriminator component of KoELECTRA small v3, a Korean ELECTRA model pretrained with replaced-token detection. It serves as a compact base model that can be fine-tuned for downstream tasks such as text classification.
$-/run
127.8K
Huggingface
koelectra-base-v3-discriminator
koelectra-base-v3-discriminator is the discriminator component of KoELECTRA base v3, a transformer-based Korean ELECTRA model. During pretraining, the discriminator learns to predict whether each token in a sentence is original or was replaced by a small generator network. The resulting model serves as a base for fine-tuning on downstream Korean NLP tasks such as text classification.
$-/run
53.6K
Huggingface
koelectra-base-v3-goemotions
koelectra-base-v3-goemotions is a KoELECTRA model fine-tuned on the GoEmotions dataset, a corpus of Reddit comments annotated with 27 emotion categories plus neutral, such as joy, sadness, and anger. The model classifies the emotions present in a given text sample; because GoEmotions is multi-label, a sample may receive more than one emotion label.
$-/run
36.5K
Huggingface
biobert_v1.1_pubmed
The biobert_v1.1_pubmed model is BioBERT v1.1, a BERT model further pretrained on PubMed abstracts for the biomedical domain. It can be fine-tuned for natural language processing tasks such as text classification, named entity recognition, and question answering, and is particularly useful for understanding and analyzing biomedical text data.
$-/run
23.4K
Huggingface
kobigbird-bert-base
$-/run
3.3K
Huggingface
kocharelectra-base-discriminator
Platform did not provide a description for this model.
$-/run
2.9K
Huggingface
koelectra-base-discriminator
$-/run
1.5K
Huggingface