Models by this creator
plbart-base is a text-to-text generation model based on the BART (Bidirectional and Auto-Regressive Transformers) architecture. It is pre-trained and can be fine-tuned for natural language processing tasks such as text summarization, machine translation, and question answering. The model uses a transformer encoder-decoder and leverages its pre-training data to generate high-quality text, making it flexible and adaptable across text generation tasks.
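As a rough sketch of how such a checkpoint is typically loaded and used for generation with the Hugging Face transformers API: the hub id `uclanlp/plbart-base` and the `generate_text` helper below are assumptions for illustration, not details stated in the card.

```python
CHECKPOINT = "uclanlp/plbart-base"  # hub id is an assumption, not stated above

def generate_text(model, tokenizer, prompt, max_new_tokens=60):
    """Run seq2seq generation with a BART-style encoder-decoder checkpoint."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Requires: pip install transformers torch sentencepiece
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)
    print(generate_text(model, tokenizer, "def add(a, b): return a + b"))
```

In practice the base checkpoint is fine-tuned on a task dataset first; raw generations from the pre-trained weights are not task-specific.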
keyphrase-mpnet-v1 is a sentence-transformers model specialized for phrases: it maps phrases to a 768-dimensional dense vector space and can be used for tasks such as clustering or semantic search. In the original paper, this model is used to compute semantic-based evaluation metrics for keyphrase models. It is based on sentence-transformers/all-mpnet-base-v2 and further fine-tuned on 1 million keyphrase examples with SimCSE.

Citing & Authors
Paper: KPEval: Towards Fine-grained Semantic-based Evaluation of Keyphrase Extraction and Generation Systems

Usage (Sentence-Transformers)
Using this model is straightforward once you have sentence-transformers installed: you can encode phrases directly with the SentenceTransformer API.

Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model with the transformers library directly: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

Training
The model was trained on phrases from four keyphrase datasets covering a wide range of domains, with the following setup:
DataLoader: torch.utils.data.dataloader.DataLoader of length 2025
Loss: sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss
The exact parameters of the DataLoader, the loss, and the fit() method are not reproduced here.

Full Model Architecture
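The Sentence-Transformers usage described above can be sketched as follows. The hub id `uclanlp/keyphrase-mpnet-v1`, the example phrases, and the cosine-similarity helper are assumptions for illustration; check the model card for the exact checkpoint name.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors (e.g. phrase embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

if __name__ == "__main__":
    # Requires: pip install sentence-transformers
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("uclanlp/keyphrase-mpnet-v1")  # hub id is an assumption
    phrases = ["keyphrase extraction", "keyphrase generation"]
    embeddings = model.encode(phrases)  # array of shape (2, 768)
    print(cosine_similarity(embeddings[0].tolist(), embeddings[1].tolist()))
```

A similarity close to 1.0 indicates the model treats two phrases as near-synonyms, which is the property the paper exploits for semantic-based evaluation.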
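For the plain HuggingFace Transformers path, the "right pooling operation" for mpnet-style sentence embedders is typically mean pooling over the attention mask. The sketch below assumes that pooling strategy and the same (assumed) hub id; the heavy model-loading part sits under the `__main__` guard.

```python
import torch

def mean_pooling(model_output, attention_mask):
    """Average token embeddings, ignoring padding positions via the attention mask."""
    token_embeddings = model_output[0]  # (batch, seq_len, hidden)
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = torch.sum(token_embeddings * mask, dim=1)
    counts = torch.clamp(mask.sum(dim=1), min=1e-9)
    return summed / counts

if __name__ == "__main__":
    # Requires: pip install transformers torch
    from transformers import AutoTokenizer, AutoModel
    name = "uclanlp/keyphrase-mpnet-v1"  # hub id is an assumption
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    encoded = tokenizer(["keyphrase extraction"], padding=True,
                        truncation=True, return_tensors="pt")
    with torch.no_grad():
        output = model(**encoded)
    embeddings = mean_pooling(output, encoded["attention_mask"])  # (1, 768)
    print(embeddings.shape)
```

This mirrors the two-step recipe in the card: run the transformer, then pool the contextualized word embeddings into one phrase vector.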