cl-tohoku
Rank:
Average Model Cost: $0.0000
Number of Runs: 3,486,760
Models by this creator
bert-base-japanese-whole-word-masking
$-/run
3.0M
Huggingface
bert-base-japanese
The bert-base-japanese model is a language model trained on Japanese text. It is based on the BERT (Bidirectional Encoder Representations from Transformers) architecture, which lets it capture the context and meaning of Japanese words and sentences. It can be applied to natural language processing tasks such as text classification, named entity recognition, and question answering, as well as the fill-mask task, in which it predicts missing words in a given sentence (a short usage sketch follows this entry). For developers working with Japanese text data, it provides a pretrained starting point that can improve the accuracy and performance of their NLP applications.
$-/run
229.7K
Huggingface
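As a rough illustration of the fill-mask usage described above, here is a minimal sketch. It assumes the transformers library and the MeCab-based tokenizer dependencies (fugashi, ipadic) are installed, that the model is available on the Hugging Face Hub under the id cl-tohoku/bert-base-japanese, and the example sentence is illustrative only.

from transformers import pipeline

# Load a fill-mask pipeline with the cl-tohoku model from the Hugging Face Hub.
fill_mask = pipeline("fill-mask", model="cl-tohoku/bert-base-japanese")

# "[MASK]" marks the position the model should fill in.
predictions = fill_mask("東京大学で[MASK]の研究をしています。")
for p in predictions:
    print(p["token_str"], round(p["score"], 3))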
bert-base-japanese-char
The bert-base-japanese-char model is a pre-trained language model designed for Japanese text. It is based on the BERT (Bidirectional Encoder Representations from Transformers) architecture and uses character-level tokenization, so input text is split into individual characters rather than whole words. It can be used for various natural language processing tasks, including fill-mask, where it predicts a missing token in a given text (a tokenization sketch follows this entry).
$-/run
120.3K
Huggingface
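To make the character-level tokenization concrete, here is a minimal sketch, assuming transformers plus fugashi and ipadic are installed and that the Hub id is cl-tohoku/bert-base-japanese-char; the printed output is indicative rather than exact.

from transformers import AutoTokenizer

# The char variant splits text into single characters after word segmentation.
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")
print(tokenizer.tokenize("自然言語処理"))
# Roughly: ['自', '然', '言', '語', '処', '理']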
bert-base-japanese-v2
$-/run
72.6K
Huggingface
bert-base-japanese-char-v2
$-/run
33.8K
Huggingface
bert-base-japanese-v3
The bert-base-japanese-v3 model is a Japanese version of BERT (Bidirectional Encoder Representations from Transformers), a transformer-based model for natural language processing. It is trained on Japanese text and can be used for a wide range of tasks in the Japanese language, such as text classification, named entity recognition, and question answering (an encoder sketch follows this entry).
$-/run
11.9K
Huggingface
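As a sketch of using the model as an encoder for the downstream tasks mentioned above, the following assumes transformers, torch, fugashi, and unidic-lite are installed and that the Hub id is cl-tohoku/bert-base-japanese-v3; pooling the [CLS] vector is one common convention, not the only option.

import torch
from transformers import AutoModel, AutoTokenizer

name = "cl-tohoku/bert-base-japanese-v3"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("これはテスト用の文です。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The first ([CLS]) token vector is a common sentence representation that a
# task-specific classifier head can be trained on.
cls_vector = outputs.last_hidden_state[:, 0, :]
print(cls_vector.shape)  # e.g. torch.Size([1, 768])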
bert-large-japanese
The bert-large-japanese model is a language model trained on a large dataset of Japanese text and based on the BERT (Bidirectional Encoder Representations from Transformers) architecture. It is published for the fill-mask task, in which masked tokens in a sentence are filled in, and can be used to suggest appropriate Japanese words for masked positions in a given sentence.
$-/run
7.9K
Huggingface
bert-large-japanese-v2
$-/run
1.9K
Huggingface
bert-base-japanese-char-whole-word-masking
Platform did not provide a description for this model.
$-/run
1.8K
Huggingface
bert-base-japanese-char-v3
$-/run
1.1K
Huggingface