dbmdz

Average Model Cost: $0.0000

Number of Runs: 487,076

Models by this creator

bert-large-cased-finetuned-conll03-english

No description available.

Runs: 181.8K

Platform: Huggingface

bert-base-german-cased

The bert-base-german-cased model is a German-language version of BERT (Bidirectional Encoder Representations from Transformers), pre-trained on a large corpus of German text. It can be used for natural language processing tasks such as text classification, named entity recognition, and question answering. The model is cased, meaning it distinguishes between lowercase and uppercase letters, and its pre-training objective lets it fill in missing words in a sentence, a task known as fill-mask.
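
As a quick illustration of the fill-mask task, here is a minimal sketch using the Huggingface transformers pipeline; the hub id dbmdz/bert-base-german-cased is inferred from the creator and model names on this page, and the example sentence is made up:

```python
from transformers import pipeline

# Load a fill-mask pipeline with the German BERT model (hub id inferred).
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-german-cased")

# [MASK] marks the word the model should predict.
for prediction in fill_mask("Heute ist ein schöner [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```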

Runs: 25.6K

Platform: Huggingface

bert-base-turkish-128k-cased

BERT (Bidirectional Encoder Representations from Transformers) is a popular pre-trained language model used for various natural language processing tasks. The "bert-base-turkish-128k-cased" model is a Turkish version of BERT trained on a large corpus of Turkish text. It is cased, meaning it retains information about the capitalization of words. This model can be fine-tuned or used as a feature extractor for tasks such as text classification, named entity recognition, and sentiment analysis in the Turkish language.
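
To show the feature-extractor use the description mentions, here is a hedged sketch that pulls contextual token embeddings with transformers; the hub id dbmdz/bert-base-turkish-128k-cased is inferred and the sample text is made up:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "dbmdz/bert-base-turkish-128k-cased"  # inferred hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Bugün hava çok güzel.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token; these can feed a downstream classifier.
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, seq_len, 768])
```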

Runs: 15.8K

Platform: Huggingface

bert-base-turkish-128k-uncased

The bert-base-turkish-128k-uncased model is an uncased BERT model for the Turkish language. It was trained for 2M steps on a TPU v3-8 over a large corpus of Turkish text, including the Turkish OSCAR corpus, Wikipedia dumps, OPUS corpora, and a special corpus provided by Kemal Oflazer, and uses a vocabulary of 128k subwords. The model weights are currently only available in PyTorch-Transformers format. It is hosted on the Huggingface model hub and can be used for tasks like PoS tagging and named entity recognition.
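
Since the description points at PoS tagging and NER, here is a sketch of attaching a token-classification head for fine-tuning; the label set is an illustrative assumption and the hub id is inferred:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative NER label set; the real labels come from your dataset.
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

model_id = "dbmdz/bert-base-turkish-128k-uncased"  # inferred hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
)
# The classification head is randomly initialized; fine-tune it on labeled
# Turkish data, e.g. with the transformers Trainer API.
```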

Runs: 15.8K

Platform: Huggingface

bert-base-italian-xxl-cased

The bert-base-italian-xxl-cased model is a pre-trained language model for Italian, based on the BERT (Bidirectional Encoder Representations from Transformers) architecture and trained on a large corpus of Italian text. It can be fine-tuned for natural language processing tasks such as text classification and named entity recognition, and its masked-language-modeling objective makes it particularly useful for filling in masked words or phrases in Italian sentences.
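
For the masked-word use case, here is a sketch that scores candidates for a masked token directly from the model's logits; the hub id is inferred and the Italian sentence is made up:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "dbmdz/bert-base-italian-xxl-cased"  # inferred hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

inputs = tokenizer("Roma è la [MASK] d'Italia.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and read off the top-5 predicted tokens.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```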

Runs: 10.4K

Platform: Huggingface

electra-large-discriminator-finetuned-conll03-english

The electra-large-discriminator-finetuned-conll03-english model is a fine-tuned version of the ELECTRA large discriminator for token classification. It was fine-tuned on the CoNLL-2003 English dataset, a standard benchmark for named entity recognition. The model labels tokens in text with entity types such as person, location, organization, and miscellaneous.
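
A minimal token-classification sketch with the transformers NER pipeline; the hub id is inferred from the creator and model names, and the input sentence is made up:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="dbmdz/electra-large-discriminator-finetuned-conll03-english",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

for entity in ner("George Washington lived in Mount Vernon, Virginia."):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```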

Runs: 9.8K

Platform: Huggingface

german-gpt2

German-gpt2 is a language model trained specifically for the German language. It is based on GPT-2, a transformer architecture known for its text generation capabilities, and was trained on a large corpus of German text to produce coherent, fluent continuations on a wide range of topics. It can be used for tasks such as text completion and open-ended text generation.
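
A short text-generation sketch with the transformers pipeline; the hub id dbmdz/german-gpt2 is inferred, and the prompt and sampling settings are arbitrary:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="dbmdz/german-gpt2")

result = generator(
    "Heute ist ein schöner Tag und",  # arbitrary German prompt
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
)
print(result[0]["generated_text"])
```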

Runs: 7.9K

Platform: Huggingface
