Voidism

Rank:

Average Model Cost: $0.0000

Number of Runs: 9,446

Models by this creator

diffcse-roberta-base-sts


DiffCSE is an unsupervised contrastive learning framework for learning sentence embeddings. It learns embeddings that are sensitive to the difference between an original sentence and an edited sentence. The edited sentence is obtained by masking out the original sentence and sampling from a masked language model. DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks. Pretrained models are available for evaluation and transfer learning tasks.
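To illustrate how such a checkpoint might be used for evaluation or transfer, here is a minimal sketch that loads the model through the Hugging Face transformers `AutoModel` interface and scores two sentences by cosine similarity. The model id comes from the listing above, but the `[CLS]`-pooling choice and the helper names (`cosine`, `similarity`) are illustrative assumptions, not the official SimCSE/DiffCSE API, which wraps the pooling choice for you.

```python
# Sketch: compare two sentences with a DiffCSE checkpoint.
# [CLS] pooling and the helper names are assumptions; the official
# SimCSE tool provides its own wrapper around this.
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm


def similarity(sent_a, sent_b, name="voidism/diffcse-roberta-base-sts"):
    """Download the checkpoint and score two sentences (network required)."""
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name).eval()
    batch = tok([sent_a, sent_b], padding=True, return_tensors="pt")
    with torch.no_grad():
        # Take the [CLS] vector of each sentence as its embedding.
        cls = model(**batch).last_hidden_state[:, 0]
    return cosine(cls[0].tolist(), cls[1].tolist())
```

For paraphrase pairs such as `similarity("A man is playing guitar.", "Someone is playing an instrument.")` the score should be noticeably higher than for unrelated sentences, which is what the STS benchmarks measure.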


$-/run

6.2K

Huggingface

diffcse-bert-base-uncased-sts


DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings

arXiv link: https://arxiv.org/abs/2204.10298 (to be published in NAACL 2022)

Authors: Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljačić, Shang-Wen Li, Scott Wen-tau Yih, Yoon Kim, James Glass

Our code is mainly based on the code of SimCSE; please refer to their repository for more detailed information.

Overview

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning (Dangovski et al., 2021), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.

Setup

Requirements: Python 3.9.5. Install our customized Transformers package and the other required packages, then download the pretraining dataset and the downstream datasets.

Training

(The same as run_diffcse.sh.) Our new arguments:

- --lambda_weight: the lambda coefficient described in Section 3 of our paper.
- --masking_ratio: the masking ratio used by the MLM generator to randomly replace tokens.
- --generator_name: the model name of the generator. For bert-base-uncased we use distilbert-base-uncased; for roberta-base we use distilroberta-base.

Arguments inherited from SimCSE:

- --train_file: training file path (data/wiki1m_for_simcse.txt).
- --model_name_or_path: pre-trained checkpoint to start from, such as BERT-based models (bert-base-uncased, bert-large-uncased, etc.) or RoBERTa-based models (roberta-base, roberta-large).
- --temp: temperature for the contrastive loss. We always use 0.05.
- --pooler_type: pooling method.
- --mlp_only_train: for unsupervised SimCSE or DiffCSE, it works better to train the model with the MLP layer but test it without the MLP layer. Use this argument when training unsupervised SimCSE/DiffCSE models.

For the results in our paper we used an NVIDIA 2080 Ti GPU with CUDA 11.2. Different devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.

Evaluation

We provide a simple Colab notebook to reproduce our results easily. You can also evaluate our pretrained DiffCSE checkpoints by running the evaluation scripts for BERT or RoBERTa. For more detailed information, please check SimCSE's GitHub repo.

Pretrained models

- DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
- DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
- DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
- DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans

The models can be loaded with the API provided by SimCSE; see their Getting Started section for more information.

Citations

Please cite our paper and the SimCSE paper if they are helpful to your work!
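Assembled from the argument descriptions above, a training invocation in the spirit of run_diffcse.sh might look like the following. The script name (train.py), the output path, the cls pooler, and the numeric values for --lambda_weight and --masking_ratio are illustrative placeholders, not the paper's settings; consult run_diffcse.sh in the repository for the real ones.

```shell
# Illustrative DiffCSE training command; script name, output path, and
# numeric hyperparameter values are placeholders, not the paper's settings.
python train.py \
    --model_name_or_path bert-base-uncased \
    --generator_name distilbert-base-uncased \
    --train_file data/wiki1m_for_simcse.txt \
    --temp 0.05 \
    --pooler_type cls \
    --mlp_only_train \
    --lambda_weight 0.005 \
    --masking_ratio 0.30 \
    --output_dir result/diffcse-bert-base-uncased
```

Note how the generator is a distilled counterpart of the base encoder (distilbert-base-uncased for bert-base-uncased), matching the pairing described above.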


$-/run

3.0K

Huggingface

diffcse-bert-base-uncased-trans


(This model card shares the DiffCSE README shown above for diffcse-bert-base-uncased-sts; this checkpoint is the BERT-base variant trained for transfer tasks.)


$-/run

250

Huggingface

diffcse-roberta-base-trans


(This model card shares the DiffCSE README shown above for diffcse-bert-base-uncased-sts; this checkpoint is the RoBERTa-base variant trained for transfer tasks.)


$-/run

27

Huggingface
