Ml6team


Average Model Cost: $0.0000

Number of Runs: 46,075

Models by this creator

keyphrase-extraction-kbir-inspec



The keyphrase-extraction-kbir-inspec model is a token classification model trained to extract keyphrases from a given text. It identifies and labels the words or phrases in the input that are considered important or relevant. The model can be used for tasks such as document summarization, information retrieval, and content analysis.


$-/run

27.5K

Hugging Face

keyphrase-extraction-distilbert-inspec


The keyphrase-extraction-distilbert-inspec model is a token classification model that extracts keyphrases from a given input text. It is trained on the Inspec dataset and uses the DistilBERT architecture. The model classifies words or tokens in the input text as keyphrases: important terms or phrases that represent the main topics or concepts of the text. It can help with tasks such as information retrieval, document summarization, and text understanding.


$-/run

14.1K

Hugging Face

keyphrase-extraction-kbir-kpcrowd


🔑 Keyphrase Extraction Model: KBIR-KPCrowd

Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases, humans can understand the content of a text quickly and easily without reading it completely. Keyphrase extraction was originally done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that, when you work with many documents, this process can take a lot of time ⏳.

Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods that use statistical and linguistic features are widely used for the extraction process. With deep learning, it is now possible to capture the semantic meaning of a text even better than with these classical methods. Classical methods look at the frequency, occurrence, and order of words in the text, whereas neural approaches can capture long-term semantic dependencies and the context of words in a text.

📓 Model Description

This model uses KBIR as its base model and fine-tunes it on the KPCrowd dataset. KBIR, or Keyphrase Boundary Infilling with Replacement, is a pre-trained model that uses a multi-task learning setup to optimize a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI), and Keyphrase Replacement Classification (KRC). You can find more information about the architecture in this paper.

Keyphrase extraction models are transformer models fine-tuned on a token classification problem, where each word in the document is classified as being part of a keyphrase or not.

Kulkarni, Mayank, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. "Learning Rich Representation of Keyphrases from Text." arXiv preprint arXiv:2112.08547 (2021).

Sahrawat, Dhruva, Debanjan Mahata, Haimin Zhang, Mayank Kulkarni, Agniv Sharma, Rakesh Gosangi, Amanda Stent, Yaman Kumar, Rajiv Ratn Shah, and Roger Zimmermann. "Keyphrase extraction as sequence labeling using contextualized embeddings." In European Conference on Information Retrieval, pp. 328-335. Springer, Cham, 2020.

✋ Intended Uses & Limitations

🛑 Limitations

This keyphrase extraction model is very dataset-specific; it is not recommended for other domains, but you are free to test it out. It only works for English documents. The training data is annotated with a large number of keyphrases per document, so the model tends to predict many keyphrases.

❓ How To Use

📚 Training Dataset

KPCrowd is a broadcast news transcription dataset consisting of 500 English broadcast news stories from 10 different categories (art and culture, business, crime, fashion, health, politics US, politics world, science, sports, technology), with 50 documents per category. The dataset was annotated by multiple annotators who were required to look at the same news story and assign a set of keyphrases taken from the text itself. You can find more information in the paper.

👷‍♂️ Training Procedure

Training Parameters

Preprocessing

The documents in the dataset are already preprocessed into lists of words with the corresponding labels. The only remaining steps are tokenization and realignment of the labels so that they correspond to the right subword tokens.

Postprocessing (Without Pipeline Function)

If you do not use the pipeline function, you must filter out the B- and I-labeled tokens. Each run of B and I tokens is then merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed.

📝 Evaluation Results

Traditional evaluation metrics are precision, recall, and F1-score @k,m, where k stands for the first k predicted keyphrases and m for the average number of predicted keyphrases. The model achieves the following results on the KPCrowd test set:

🚨 Issues

Please feel free to start discussions in the Community Tab.
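The postprocessing step described above (merging B- and I-labeled tokens into keyphrases and stripping whitespace) can be sketched in pure Python. The function name and the token/label data here are illustrative, assuming the usual B/I/O tagging scheme; they are not part of the model's actual API:

```python
def merge_keyphrases(tokens, labels):
    """Merge runs of B/I-labeled tokens into keyphrases, stripping extra spaces.

    tokens: list of word-level tokens
    labels: matching list of "B", "I", or "O" tags
    """
    keyphrases, current = [], []
    for token, label in zip(tokens, labels):
        if label == "B":                # a new keyphrase starts here
            if current:
                keyphrases.append(" ".join(current).strip())
            current = [token]
        elif label == "I" and current:  # continue the open keyphrase
            current.append(token)
        else:                           # "O" (or a stray "I") closes it
            if current:
                keyphrases.append(" ".join(current).strip())
            current = []
    if current:                         # flush a keyphrase ending at the last token
        keyphrases.append(" ".join(current).strip())
    return keyphrases


# Illustrative input, not real model output:
tokens = ["Deep", "learning", "improves", "keyphrase", "extraction", "."]
labels = ["B", "I", "O", "B", "I", "O"]
print(merge_keyphrases(tokens, labels))  # ['Deep learning', 'keyphrase extraction']
```

The same merge logic is what the pipeline function performs for you when it aggregates token-level predictions into spans.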
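The precision, recall, and F1-score @k metrics mentioned in the evaluation section can be sketched as follows. The gold and predicted keyphrase lists are made up for illustration and do not come from the model card:

```python
def precision_recall_f1_at_k(predicted, gold, k):
    """Precision, recall, and F1 over the top-k predicted keyphrases."""
    top_k = predicted[:k]
    hits = len(set(top_k) & set(gold))        # predicted keyphrases that are gold
    precision = hits / len(top_k) if top_k else 0.0
    recall = hits / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1


# Illustrative lists, not real evaluation data:
predicted = ["keyphrase extraction", "deep learning", "news", "annotators"]
gold = ["keyphrase extraction", "broadcast news", "deep learning"]
p, r, f1 = precision_recall_f1_at_k(predicted, gold, k=2)
print(p, r, f1)  # both top-2 predictions are gold: p=1.0, r=2/3, f1=0.8
```

Evaluating @m simply sets k to the average number of keyphrases the model predicts per document.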


$-/run

439

Hugging Face

Similar creators