Setu4993

Rank:

Average Model Cost: $0.0000

Number of Runs: 37,511

Models by this creator

LaBSE

setu4993

No description available.

$-/run

36.1K

Huggingface

LEALLA-small

Model description

LEALLA is a collection of lightweight, language-agnostic sentence embedding models supporting 109 languages, distilled from LaBSE. The models are useful for obtaining multilingual sentence embeddings and for bi-text retrieval.

Model: HuggingFace's model hub. Paper: arXiv. Original model: TensorFlow Hub. Conversion from TensorFlow to PyTorch: GitHub.

This model was migrated from the v1 model on the TF Hub. The embeddings produced by the two versions are equivalent, although for some languages (such as Japanese) the LEALLA models appear to require higher tolerances when comparing embeddings and similarities.

Usage

To get sentence embeddings, use the pooler output. For similarity between sentences, an L2-norm is recommended before computing the similarity.

Details

Details about data, training, evaluation, and performance metrics are available in the original paper.
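The usage notes above describe taking the model's pooler output and applying an L2-norm before computing similarity. A minimal sketch of that similarity step, using random arrays in place of real LEALLA `pooler_output` embeddings (the shapes and data here are illustrative assumptions, not the card's actual snippet):

```python
import numpy as np

# Stand-ins for the model's `pooler_output` embeddings; in real use these
# would come from running sentences through a LEALLA model (e.g. via
# transformers' BertModel and BertTokenizerFast). Shapes are illustrative.
rng = np.random.default_rng(0)
english_embeddings = rng.standard_normal((3, 128))
italian_embeddings = rng.standard_normal((3, 128))

def l2_normalize(x):
    # Scale each row (one embedding per sentence) to unit length.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def similarity(a, b):
    # After L2-normalization, a dot product is cosine similarity.
    return l2_normalize(a) @ l2_normalize(b).T

sim = similarity(english_embeddings, italian_embeddings)
print(sim.shape)  # one similarity score per sentence pair
```

Comparing a set of embeddings with itself yields 1.0 on the diagonal, which is a quick sanity check that the normalization is correct.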

$-/run

184

Huggingface

LEALLA-large

Model description: identical to LEALLA-small above (a lightweight, language-agnostic sentence embedding model supporting 109 languages, distilled from LaBSE).

$-/run

102

Huggingface

LEALLA-base

Model description: identical to LEALLA-small above (a lightweight, language-agnostic sentence embedding model supporting 109 languages, distilled from LaBSE).

$-/run

23

Huggingface
