zjunlp


Average Model Cost: $0.0000

Number of Runs: 3,305

Models by this creator

MolGen-large

MolGen-large was introduced in the paper "Domain-Agnostic Molecular Generation with Self-feedback" and first released in this repository. It is a pre-trained molecular generative model built on SELFIES, a 100% robust molecular language representation, and is the first pre-trained model that produces only chemically valid molecules. Trained on a corpus of over 100 million molecules in SELFIES representation, MolGen-large learns the intrinsic structural patterns of molecules by mapping corrupted SELFIES back to their original forms. Architecturally, it pairs a bidirectional Transformer encoder with an autoregressive Transformer decoder. Through its multi-task molecular prefix tuning (MPT), MolGen-large can generate molecules with desired properties, making it a useful tool for molecular optimization. You can use the raw model for molecule generation or fine-tune it on a downstream task. Note that the following example only demonstrates the use of the pre-trained model for molecule generation; see the repository for fine-tuning details on a task that interests you. Molecule generation example:
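
The following is a minimal sketch of SELFIES-to-SELFIES generation, assuming the checkpoint is published as zjunlp/MolGen-large on the Hugging Face Hub and loads as a standard seq2seq model; the input SELFIES string and decoding parameters are illustrative only.

```python
# Minimal molecule-generation sketch; assumes the checkpoint is available on the
# Hugging Face Hub as "zjunlp/MolGen-large" and follows the standard seq2seq API.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen-large")
model = AutoModelForSeq2SeqLM.from_pretrained("zjunlp/MolGen-large")

# A SELFIES input string (benzene here, purely illustrative).
sf_input = tokenizer("[C][=C][C][=C][C][=C][Ring1][=Branch1]", return_tensors="pt")

# Beam search over candidate molecules; the decoding parameters are illustrative.
outputs = model.generate(
    input_ids=sf_input["input_ids"],
    attention_mask=sf_input["attention_mask"],
    max_length=15,
    min_length=5,
    num_beams=5,
    num_return_sequences=5,
)

# Decode back to SELFIES strings, stripping the tokenizer's whitespace.
molecules = [
    tokenizer.decode(g, skip_special_tokens=True).replace(" ", "")
    for g in outputs
]
print(molecules)
```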

$-/run · 2.7K runs · Huggingface

llama-molinst-protein-7b

This repo contains a fully fine-tuned LLaMA-7B trained on the protein-oriented instructions from the Mol-Instructions dataset. Instructions for running it can be found at https://github.com/zjunlp/Mol-Instructions.

Demo: As illustrated in our repository, we provide an example of how to perform generation. For the model fine-tuned on protein-oriented instructions, you can recover the model weights we trained as follows: download llama-7b-hf to obtain the pre-training weights of LLaMA-7B and point --base_model to the location where those weights are saved; replace $DIFF_WEIGHT_PATH with the path to our provided diff weights; and replace $RECOVER_WEIGHT_PATH with the desired path for the recovered weights. If the directory of recovered weights lacks required files (e.g., tokenizer configuration files), copy them from $DIFF_WEIGHT_PATH. After that, you can run generation with the fine-tuned LLaMA model (see the sketch after this description).

Limitations: The current state of the model, obtained via instruction tuning, is a preliminary demonstration; its capacity to handle real-world, production-grade tasks remains limited.

Acknowledgements: We appreciate LLaMA, Hugging Face Transformers Llama, Alpaca, Alpaca-LoRA, Chatbot Service, and many other related works for their open-source contributions.
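
As a concrete illustration of the generation step, here is a hedged sketch that assumes the recovered weights at $RECOVER_WEIGHT_PATH form a standard Hugging Face LLaMA checkpoint; the prompt text is a made-up example, and the exact recovery command and prompt template are documented in the Mol-Instructions repository.

```python
# Generation sketch with the recovered fine-tuned LLaMA weights.
# Assumes $RECOVER_WEIGHT_PATH holds a standard Hugging Face LLaMA checkpoint
# (model weights plus tokenizer files, copied from $DIFF_WEIGHT_PATH if missing).
from transformers import LlamaForCausalLM, LlamaTokenizer

recover_weight_path = "/path/to/recovered-weights"  # i.e. $RECOVER_WEIGHT_PATH

tokenizer = LlamaTokenizer.from_pretrained(recover_weight_path)
model = LlamaForCausalLM.from_pretrained(
    recover_weight_path,
    device_map="auto",  # requires the accelerate package
)

# An illustrative protein-oriented instruction; the real prompt template is
# defined in https://github.com/zjunlp/Mol-Instructions.
prompt = (
    "Instruction: Describe the likely function of the following protein sequence.\n"
    "Input: MKTAYIAKQR...\n"
    "Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```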

$-/run · 23 runs · Huggingface

mt5-ie

We trained the MT5-base model for the CCKS2023 Instruction-based KGC task using 270,000 weakly supervised samples, without employing any additional techniques. To learn more about the training process and how to use the model, please consult the following GitHub repository: https://github.com/zjunlp/DeepKE/tree/main/example/triple/mt5. There you will find detailed information on how to train the model and leverage its capabilities for the task.
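
A minimal inference sketch follows, assuming the checkpoint is published as zjunlp/mt5-ie on the Hugging Face Hub and used as a plain seq2seq model; the instruction format shown is illustrative, and the authoritative prompt format is defined in the DeepKE repository linked above.

```python
# Inference sketch; the hub id "zjunlp/mt5-ie" and the prompt wording are
# assumptions — see the DeepKE repository for the exact instruction format.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("zjunlp/mt5-ie")
model = AutoModelForSeq2SeqLM.from_pretrained("zjunlp/mt5-ie")

# Illustrative instruction-style input for triple extraction.
text = "Extract (head, relation, tail) triples from the following sentence: ..."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```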

$-/run · 11 runs · Huggingface

llama-7b-lora-ie

We employed the LoRA method to train LLaMA-7B for the CCKS2023 Instruction-based KGC task. The training was conducted on 270,000 weakly supervised samples, without relying on any additional techniques or tricks. To learn more about the training process and how to use the model, please consult the following GitHub repository: https://github.com/zjunlp/DeepKE/tree/main/example/llm/InstructKGC. There you will find detailed information on how to train the model and leverage its capabilities for the task.
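
Since the model was trained with LoRA, a typical way to use it is to attach the released adapter to a base LLaMA-7B with PEFT. The sketch below assumes the adapter is published as zjunlp/llama-7b-lora-ie and that a local llama-7b-hf checkpoint is available; both identifiers are assumptions, and the exact inference scripts and prompt format live in the DeepKE repository linked above.

```python
# LoRA-adapter loading sketch; the hub/path identifiers are assumptions — see
# the DeepKE InstructKGC example for the authoritative scripts and prompts.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_path = "/path/to/llama-7b-hf"    # pre-trained LLaMA-7B weights
adapter_id = "zjunlp/llama-7b-lora-ie"      # assumed adapter location

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
base = LlamaForCausalLM.from_pretrained(base_model_path)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "Extract (head, relation, tail) triples from the following sentence: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```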

$-/run · 0 runs · Huggingface
