Flax-sentence-embeddings

Rank:

Average Model Cost: $0.0000

Number of Runs: 23,386

Models by this creator

all_datasets_v4_MiniLM-L6

flax-sentence-embeddings

The all_datasets_v4_MiniLM-L6 model is a sentence-embedding model trained on a large collection of datasets and intended for sentence similarity tasks: it maps sentences to vectors whose similarity reflects how closely two sentences match in meaning, which is useful in many natural language processing applications.
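A minimal usage sketch, assuming the sentence-transformers package is installed and the model is available on the Hugging Face Hub under the id flax-sentence-embeddings/all_datasets_v4_MiniLM-L6 (the example sentences are invented for illustration):

```python
from sentence_transformers import SentenceTransformer, util

# Load the sentence-embedding model from the Hugging Face Hub.
model = SentenceTransformer("flax-sentence-embeddings/all_datasets_v4_MiniLM-L6")

sentences = [
    "A man is playing a guitar on stage.",
    "Someone performs music in front of an audience.",
    "The stock market fell sharply today.",
]

# Encode all sentences into dense vectors.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the first sentence and the others:
# the related sentence should score noticeably higher than the unrelated one.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)
```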

Read more

$-/run

17.8K

Huggingface

stackoverflow_mpnet-base

stackoverflow_mpnet-base is a microsoft/mpnet-base model trained on 18,562,443 (title, body) pairs from StackOverflow. SentenceTransformers is a set of models and frameworks for training and generating sentence embeddings from given data; the generated sentence embeddings can be used for clustering, semantic search and other tasks. We used a pretrained microsoft/mpnet-base model and trained it with a Siamese network setup and a contrastive learning objective, using the 18,562,443 (title, body) pairs from StackOverflow as training data. For this model, mean pooling of the hidden states was used as the sentence embedding. See data_config.json and train_script.py in this repository for how the model was trained and which datasets were used.

We developed this model during the Community week using JAX/Flax for NLP & CV, organized by Hugging Face, as part of the project Train the Best Sentence Embedding Model Ever with 1B Training Pairs. We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8s), as well as assistance from Google’s Flax, JAX, and Cloud team members on efficient deep learning frameworks.

Intended uses: Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures the sentence’s semantic information. The sentence vector may be used for semantic search, clustering or sentence similarity tasks.

How to use: The model can be used with the SentenceTransformers library to get the features (sentence embeddings) of a given text; a usage sketch is shown after this description.

Training procedure. Pre-training: We use the pretrained microsoft/mpnet-base; please refer to its model card for more detailed information about the pre-training procedure. Fine-tuning: We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity of each possible sentence pair in the batch, then apply a cross-entropy loss by comparing against the true pairs.

Hyperparameters: We trained the model on a TPU v3-8 for 80k steps with a batch size of 1024 (128 per TPU core), a learning-rate warm-up of 500 steps, and a maximum sequence length of 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository.

Training data: We used 18,562,443 (title, body) pairs from StackOverflow as training data.
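A minimal sketch of the usage described above, assuming sentence-transformers is installed and the model id flax-sentence-embeddings/stackoverflow_mpnet-base on the Hugging Face Hub; the question titles and query are invented for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("flax-sentence-embeddings/stackoverflow_mpnet-base")

# A tiny "corpus" of StackOverflow-style titles (made up for this example).
corpus = [
    "How do I merge two dictionaries in Python?",
    "Segmentation fault when dereferencing a null pointer in C",
    "What is the difference between git pull and git fetch?",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Encode a query and retrieve the closest corpus entries by cosine similarity.
query_embedding = model.encode("combine two dicts into one", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]

for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```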

Read more

$-/run

2.0K

Huggingface

st-codesearch-distilroberta-base

flax-sentence-embeddings/st-codesearch-distilroberta-base is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on the code_search_net dataset and can be used to search program code given a text query.

Usage (Sentence-Transformers): Using this model is easy once you have sentence-transformers installed; a usage sketch is shown after this description.

Training: The model was trained from a DistilRoBERTa-base model for 10k training steps on the code_search_net dataset with a batch size of 256 and MultipleNegativesRankingLoss. This is a preliminary model: it has not been tested, and the training was not particularly sophisticated.

The model was trained with a MultiDatasetDataLoader.MultiDatasetDataLoader of length 5371 and the sentence_transformers.losses.MultipleNegativesRankingLoss loss. The exact DataLoader and loss parameters, the parameters of the fit() method, the full model architecture, and citation information are given in the full model card.
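A minimal code-search sketch along the lines of the usage described above, assuming sentence-transformers is installed and the model id flax-sentence-embeddings/st-codesearch-distilroberta-base; the code snippets and query are invented for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("flax-sentence-embeddings/st-codesearch-distilroberta-base")

# A few candidate code snippets (made up for this example).
code_snippets = [
    "def reverse_string(s):\n    return s[::-1]",
    "def read_json(path):\n    with open(path) as f:\n        return json.load(f)",
    "def fibonacci(n):\n    a, b = 0, 1\n    for _ in range(n):\n        a, b = b, a + b\n    return a",
]
code_embeddings = model.encode(code_snippets, convert_to_tensor=True)

# Search the code with a natural-language query.
query_embedding = model.encode("load a json file from disk", convert_to_tensor=True)
best = util.semantic_search(query_embedding, code_embeddings, top_k=1)[0][0]

print(code_snippets[best["corpus_id"]])
```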

Read more

$-/run

785

Huggingface

all_datasets_v3_MiniLM-L12

Model description: The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained MiniLM-L12 model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.

We developed this model during the Community week using JAX/Flax for NLP & CV, organized by Hugging Face, as part of the project Train the Best Sentence Embedding Model Ever with 1B Training Pairs. We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8s), as well as assistance from Google’s Flax, JAX, and Cloud team members on efficient deep learning frameworks.

Intended uses: Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the sentence’s semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

How to use: The model can be loaded with the SentenceTransformers library to get the features of a given text, in the same way as the other sentence-embedding models listed above (only the model name changes).

Training procedure. Pre-training: We use the pretrained MiniLM-L12; please refer to its model card for more detailed information about the pre-training procedure. Fine-tuning: We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity of each possible sentence pair in the batch, then apply a cross-entropy loss by comparing against the true pairs; a minimal sketch of this objective is shown after this description.

Hyperparameters: We trained the model on a TPU v3-8 for 540k steps with a batch size of 1024 (128 per TPU core), a learning-rate warm-up of 500 steps, and a maximum sequence length of 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository.

Training data: We use the concatenation of multiple datasets to fine-tune our model; the total number of sentence pairs is above 1 billion. Each dataset was sampled with a weighted probability, and the configuration is detailed in the data_config.json file.
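The fine-tuning objective described above (cosine similarity over all sentence pairs in a batch, followed by a cross-entropy loss against the true pairs) corresponds to an in-batch-negatives contrastive loss. A minimal PyTorch sketch, assuming one (anchor, positive) embedding pair per batch row; the scale factor of 20 and the helper name are illustrative assumptions, not the exact training code:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb: torch.Tensor,
                              positive_emb: torch.Tensor,
                              scale: float = 20.0) -> torch.Tensor:
    """Cross-entropy over cosine similarities, where row i's true pair is column i."""
    # Normalise so the dot product equals cosine similarity.
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)

    # (batch, batch) matrix of cosine similarities between every anchor and
    # every positive in the batch; off-diagonal entries act as negatives.
    scores = anchor_emb @ positive_emb.T * scale

    # The true pair for anchor i is positive i, i.e. the diagonal.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Toy usage with random embeddings standing in for the model's outputs.
loss = in_batch_contrastive_loss(torch.randn(8, 384), torch.randn(8, 384))
print(loss)
```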

Read more

$-/run

261

Huggingface

Similar creators