Pritamdeka

Average Model Cost: $0.0000

Number of Runs: 21,670

Models by this creator

S-Bluebert-snli-multinli-stsb

The S-Bluebert-snli-multinli-stsb model is a sentence similarity model trained on the SNLI, MultiNLI, and STSB datasets. It is designed to score how similar two sentences are (see the sketch below).
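A minimal usage sketch, assuming the model is published on the Hugging Face Hub under pritamdeka/S-Bluebert-snli-multinli-stsb (as listed here) and that the sentence-transformers package is installed; the example sentence pair is illustrative only:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pritamdeka/S-Bluebert-snli-multinli-stsb")

# Encode two sentences and compute their cosine similarity.
embeddings = model.encode(
    ["The drug lowered the patient's blood pressure.",
     "Blood pressure decreased after the medication was given."],
    convert_to_tensor=True,
)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(round(score, 3))  # values closer to 1.0 indicate more similar sentences
```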

$-/run

17.3K runs

Huggingface

BioBERT-mnli-snli-scinli-scitail-mednli-stsb

pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on the SNLI, MNLI, SciNLI, SciTail, MedNLI, and STSB datasets to provide robust sentence embeddings.

Usage: with the sentence-transformers library installed, the model can be loaded and used directly (see the sketch below). Without sentence-transformers, you pass the input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings.

Evaluation results: for an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net

Training: the model was trained with a torch.utils.data.dataloader.DataLoader of length 90 and the sentence_transformers.losses.CosineSimilarityLoss loss; the fit() parameters and the full model architecture are listed on the model card.

Citing & authors: if you use the model, kindly cite the corresponding work.
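A minimal sentence-transformers sketch, assuming the model resolves on the Hugging Face Hub under the ID shown above; the example sentences are illustrative only:

```python
from sentence_transformers import SentenceTransformer

# Load the model from the Hugging Face Hub (ID as listed above).
model = SentenceTransformer("pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb")

# Encode a few sentences into 768-dimensional embeddings.
sentences = [
    "Aspirin inhibits platelet aggregation.",
    "The study enrolled 120 patients with type 2 diabetes.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # expected: (2, 768)
```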

$-/run

1.3K runs

Huggingface

PubMedBERT-MNLI-MedNLI

PubMedBERT-MNLI-MedNLI is a fine-tuned version of PubMedBERT, trained first on the MNLI dataset and then on the MedNLI dataset. It achieves the following results on the evaluation set: Loss: 0.9501, Accuracy: 0.8667.

Intended uses & limitations: the model can be used for NLI tasks on biomedical data and can also be adapted to fact-checking tasks. It can be called through the Hugging Face pipeline API (a usage sketch follows below).

Training hyperparameters: learning_rate: 2e-05, train_batch_size: 32, eval_batch_size: 32, seed: 42, optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08, lr_scheduler_type: linear, num_epochs: 20.0.

Framework versions: Transformers 4.22.0.dev0, PyTorch 1.12.1+cu113, Datasets 2.4.0, Tokenizers 0.12.1.

Citing & authors: if you use the model, kindly cite the corresponding work.
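A hedged usage sketch, shown with the explicit tokenizer and model classes rather than the pipeline helper, assuming the model is hosted on the Hub under pritamdeka/PubMedBERT-MNLI-MedNLI; the premise/hypothesis pair is illustrative and the label names are read from the model config rather than taken from the original card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "pritamdeka/PubMedBERT-MNLI-MedNLI"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative premise/hypothesis pair for a biomedical NLI query.
premise = "The patient was treated with metformin for type 2 diabetes."
hypothesis = "The patient received an antidiabetic drug."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map logits to the label names stored in the model config
# (typically entailment / neutral / contradiction for NLI checkpoints).
probs = logits.softmax(dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 3))
```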

$-/run

192 runs

Huggingface

S-Biomed-Roberta-snli-multinli-stsb

S-Biomed-Roberta-snli-multinli-stsb is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. The base model is allenai/biomed_roberta_base, fine-tuned for sentence similarity.

Usage: with the sentence-transformers library installed, the model can be loaded and used directly. Without sentence-transformers, you pass the input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings (see the sketch below).

Evaluation results: for an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net

Training: the model was trained with a torch.utils.data.dataloader.DataLoader of length 90 and the sentence_transformers.losses.CosineSimilarityLoss loss; the fit() parameters and the full model architecture are listed on the model card.

Citing & authors: if you use the model, kindly cite the corresponding work.
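A sketch of the "without sentence-transformers" path described above: run the tokenizer and base transformer, then mean-pool the token embeddings using the attention mask. The model ID and example sentences are assumptions based on this listing:

```python
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(token_embeddings, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

model_id = "pritamdeka/S-Biomed-Roberta-snli-multinli-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["The biopsy revealed a benign lesion.", "No malignancy was found in the sample."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)

embeddings = mean_pooling(output.last_hidden_state, encoded["attention_mask"])
print(embeddings.shape)  # expected: (2, 768)
```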

$-/run

86 runs

Huggingface

PubMedBERT-mnli-snli-scinli-scitail-mednli-stsb

pritamdeka/PubMedBERT-mnli-snli-scinli-scitail-mednli-stsb is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on the SNLI, MNLI, SciNLI, SciTail, MedNLI, and STSB datasets to provide robust sentence embeddings.

Usage: with the sentence-transformers library installed, the model can be loaded and used directly; a semantic-search sketch follows below. Without sentence-transformers, you pass the input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings.

Evaluation results: for an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net

Training: the model was trained with a torch.utils.data.dataloader.DataLoader of length 90 and the sentence_transformers.losses.CosineSimilarityLoss loss; the fit() parameters and the full model architecture are listed on the model card.

Citing & authors: if you use the model, kindly cite the corresponding work.
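A semantic-search sketch for the use case named above, assuming the model ID resolves on the Hugging Face Hub; the corpus, query, and top_k value are illustrative:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pritamdeka/PubMedBERT-mnli-snli-scinli-scitail-mednli-stsb")

# Toy corpus of biomedical sentences (illustrative only).
corpus = [
    "Statins lower LDL cholesterol levels.",
    "The MRI showed no evidence of a tumor.",
    "Beta blockers reduce heart rate and blood pressure.",
]
query = "Which drugs decrease cholesterol?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])
```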

$-/run

71 runs

Huggingface

SapBERT-mnli-snli-scinli-scitail-mednli-stsb

pritamdeka/SapBERT-mnli-snli-scinli-scitail-mednli-stsb is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on the SNLI, MNLI, SciNLI, SciTail, MedNLI, and STSB datasets to provide robust sentence embeddings.

Usage: with the sentence-transformers library installed, the model can be loaded and used directly; a clustering sketch follows below. Without sentence-transformers, you pass the input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings.

Evaluation results: for an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net

Training: the model was trained with a torch.utils.data.dataloader.DataLoader of length 90 and the sentence_transformers.losses.CosineSimilarityLoss loss; the fit() parameters and the full model architecture are listed on the model card.

Citing & authors: if you use the model, kindly cite the corresponding work.
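A clustering sketch for the "clustering or semantic search" use case named above, assuming the model ID resolves on the Hub and scikit-learn is installed; the sentences and the choice of two clusters are illustrative:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("pritamdeka/SapBERT-mnli-snli-scinli-scitail-mednli-stsb")

sentences = [
    "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
    "Naproxen relieves pain and inflammation.",
    "The CT scan showed a fracture of the left femur.",
    "X-ray imaging confirmed a broken leg bone.",
]

embeddings = model.encode(sentences)

# Group the sentence embeddings into two clusters (illustrative choice).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(embeddings)
for label, sentence in zip(kmeans.labels_, sentences):
    print(label, sentence)
```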

$-/run

65 runs

Huggingface
