jhgan
Rank: -
Average Model Cost: $0.0000
Number of Runs: 86,346
Models by this creator
ko-sroberta-multitask
ko-sroberta-multitask is a model designed for sentence similarity; the listing provides no further details about its architecture or training process.
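As a quick illustration of how such an embedding model is typically used, here is a minimal sketch via the sentence-transformers library. The model id jhgan/ko-sroberta-multitask and the 768-dimensional output are assumptions based on the other cards in this listing, not confirmed by this one.

# Minimal sketch, assuming the model id jhgan/ko-sroberta-multitask and
# 768-dimensional output embeddings (not confirmed by this listing).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jhgan/ko-sroberta-multitask")
sentences = [
    "오늘 날씨가 정말 좋다.",   # "The weather is really nice today."
    "날씨가 화창하고 맑다.",    # "The weather is sunny and clear."
]
embeddings = model.encode(sentences)  # one dense vector per sentence
print(embeddings.shape)               # expected: (2, 768)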
$-/run
48.5K
Huggingface
ko-sbert-sts
ko-sbert-sts is a model trained to measure the semantic similarity between two Korean sentences. It is based on the SBERT architecture, which uses a siamese network to generate dense vector embeddings for sentences. Trained on the Semantic Textual Similarity (STS) task, it produces a similarity score for a pair of Korean sentences.
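Since the card describes scoring a sentence pair, here is a minimal sketch of that flow. The Hugging Face model id jhgan/ko-sbert-sts is an assumption based on this listing.

# Minimal sketch: cosine similarity between two Korean sentences.
# The model id jhgan/ko-sbert-sts is an assumption based on this listing.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jhgan/ko-sbert-sts")
emb = model.encode(
    ["한 남자가 기타를 친다.",    # "A man plays a guitar."
     "남자가 악기를 연주한다."],  # "A man plays an instrument."
    convert_to_tensor=True,
)
score = util.cos_sim(emb[0], emb[1])  # cosine similarity, roughly in [-1, 1]
print(float(score))                   # higher means more similar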
$-/run
37.3K
Huggingface
ko-sbert-multitask
ko-sbert-multitask is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Usage (Sentence-Transformers): with the sentence-transformers package installed, the model can be loaded and used directly.

Usage (HuggingFace Transformers): without sentence-transformers, pass the input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings (see the pooling sketch below).

Evaluation Results: results of multi-task training on the KorSTS and KorNLI training datasets, evaluated on the KorSTS evaluation dataset.
Cosine Pearson: 84.13, Cosine Spearman: 84.71
Euclidean Pearson: 82.42, Euclidean Spearman: 82.66
Manhattan Pearson: 81.41, Manhattan Spearman: 81.69
Dot Pearson: 80.05, Dot Spearman: 79.69

Training: the model was trained with a sentence_transformers.datasets.NoDuplicatesDataLoader of length 8885 paired with sentence_transformers.losses.MultipleNegativesRankingLoss, and a torch.utils.data.dataloader.DataLoader of length 719 paired with sentence_transformers.losses.CosineSimilarityLoss.

Citing & Authors:
Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New benchmark datasets for Korean natural language understanding. arXiv preprint arXiv:2004.03289.
Reimers, Nils and Iryna Gurevych. "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks." arXiv abs/1908.10084 (2019).
Reimers, Nils and Iryna Gurevych. "Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation." EMNLP (2020).
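The transformers-only path described above (encode, then pool) can be sketched as follows. Mean pooling is assumed here as the pooling operation, since that is the common choice for these models; the card itself only says to apply "the right pooling-operation".

# Sketch of the transformers-only path: run the encoder, then mean-pool the
# token embeddings using the attention mask. Mean pooling is an assumption;
# the listing does not reproduce the exact pooling code.
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # token-level embeddings, (batch, seq, dim)
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("jhgan/ko-sbert-multitask")
model = AutoModel.from_pretrained("jhgan/ko-sbert-multitask")

sentences = ["안녕하세요.", "만나서 반갑습니다."]  # "Hello." / "Nice to meet you."
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)
embeddings = mean_pooling(output, encoded["attention_mask"])  # (2, 768)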
$-/run
431
Huggingface
ko-sroberta-nli
$-/run
60
Huggingface
ko-sbert-nli
ko-sbert-nli is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Usage (Sentence-Transformers): with the sentence-transformers package installed, the model can be loaded and used directly (see the semantic-search sketch below).

Usage (HuggingFace Transformers): without sentence-transformers, pass the input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings, as in the mean-pooling sketch under ko-sbert-multitask above.

Evaluation Results: results of training on the KorNLI training dataset, evaluated on the KorSTS evaluation dataset.
Cosine Pearson: 82.24, Cosine Spearman: 83.16
Euclidean Pearson: 82.19, Euclidean Spearman: 82.31
Manhattan Pearson: 82.18, Manhattan Spearman: 82.30
Dot Pearson: 79.30, Dot Spearman: 78.78

Training: the model was trained with a sentence_transformers.datasets.NoDuplicatesDataLoader of length 8885 paired with sentence_transformers.losses.MultipleNegativesRankingLoss.

Citing & Authors:
Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New benchmark datasets for Korean natural language understanding. arXiv preprint arXiv:2004.03289.
Reimers, Nils and Iryna Gurevych. "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks." arXiv abs/1908.10084 (2019).
Reimers, Nils and Iryna Gurevych. "Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation." EMNLP (2020).
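As an example of the semantic-search use the card mentions, here is a minimal sketch; the model id jhgan/ko-sbert-nli is an assumption based on this listing.

# Minimal semantic-search sketch; jhgan/ko-sbert-nli is assumed as the model id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jhgan/ko-sbert-nli")
corpus = [
    "서울은 한국의 수도이다.",     # "Seoul is the capital of Korea."
    "고양이가 소파 위에서 잔다.",  # "A cat sleeps on the sofa."
    "부산은 큰 항구 도시이다.",    # "Busan is a large port city."
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode("한국의 수도는 어디인가?",  # "What is the capital of Korea?"
                         convert_to_tensor=True)
# Return the two corpus sentences closest to the query by cosine similarity.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))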
$-/run
55
Huggingface
ko-sroberta-sts
$-/run
38
Huggingface