Psmathur
Rank:
Average Model Cost: $0.0000
Number of Runs: 7,103
Models by this creator
orca_mini_3b
Use orca-mini-3b on a free Google Colab with a T4 GPU :)

An OpenLLaMA-3B model trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the dataset construction approaches from the Orca Research Paper.

Dataset
We built explain-tuned datasets from WizardLM (~70K), Alpaca (~52K) & Dolly-V2 (~15K) using the approaches from the Orca Research Paper. We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction tuning used by the original datasets. This helps the student model (i.e. this model) learn the thought process of the teacher model, ChatGPT (gpt-3.5-turbo-0301). See the example usage below for how the system prompt is added before each instruction.

Training
The training configuration is provided in the table below. Training ran on 8x A100 (80G) GPUs and took around 4 hours, at a cost of $48 using Lambda Labs. We used DeepSpeed with fully sharded data parallelism, also known as ZeRO stage 3, by writing our own fine-tuning scripts and leveraging some of the model training code provided by the excellent OpenAlpaca repo. Here are some of the parameters used during training:

Example Usage
Below is an example of how to use this model.

P.S. I am #opentowork and #collaboration; if you can help, please reach out to me at www.linkedin.com/in/pankajam

Next Goals:
Try more data, e.g. actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions).
Provide more options for a text generation UI (maybe https://github.com/oobabooga/text-generation-webui).
Provide 4-bit GGML/GPTQ quantized models (maybe TheBloke can help here).

Limitations & Biases:
This model can produce factually incorrect output and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great effort has been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Citation:
If you found wizardlm_alpaca_dolly_orca_open_llama_3b useful in your research or applications, please kindly cite using the following BibTeX:
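As a rough illustration of the usage described above, here is a minimal sketch assuming the standard transformers API and the Hugging Face repo id psmathur/orca_mini_3b; the exact prompt template, system instruction, and generation settings are assumptions and should be checked against the model card.

```python
# Minimal usage sketch (assumed): repo id "psmathur/orca_mini_3b" and the
# "### System / ### User / ### Response" prompt layout are not confirmed here,
# only inferred from the "system prompt before each instruction" scheme above.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "psmathur/orca_mini_3b"  # assumed Hugging Face repo id
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def generate_text(system, instruction, input_text=None):
    # Build the explain-tuned prompt: system prompt first, then the user instruction.
    if input_text:
        prompt = (f"### System:\n{system}\n\n### User:\n{instruction}\n\n"
                  f"### Input:\n{input_text}\n\n### Response:\n")
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
    tokens = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    output = model.generate(
        tokens, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.7
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example system instruction in the style of the Orca Research Paper.
system = "You are an AI assistant that follows instruction extremely well. Help as much as you can."
print(generate_text(system, "Tell me about orcas."))
```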
$-/run
3.0K
Huggingface
orca_mini_13b
$-/run
2.1K
Huggingface
orca_mini_7b
$-/run
1.6K
Huggingface
orca_mini_v2_7b
$-/run
166
Huggingface
orca_alpaca_3b
$-/run
150
Huggingface
orca_dolly_3b
$-/run
24
Huggingface
lora-alpaca-LLaMa7B
This is a LoRA adapter fine-tuned using a modified version of the Alpaca dataset (a cleaned version, as the original Alpaca dataset has null entries and quality issues). For details on LoRA, see https://github.com/microsoft/LoRA. For details on the data and hyperparameters, see https://crfm.stanford.edu/2023/03/13/alpaca.html. In initial evaluation, this fine-tuned LoRA model seems to produce outputs comparable to the Stanford Alpaca model; further tuning might achieve better performance. This repo only contains the LoRA weights, not the original LLaMA weights, which are research-only.
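Since the repo ships only the adapter, using it requires separately obtained LLaMA-7B base weights plus the PEFT library. The sketch below is an assumed illustration: the adapter repo id, the base-model path, and the Alpaca-style prompt are placeholders and assumptions, not documented by the card.

```python
# Minimal sketch of attaching a LoRA adapter to LLaMA-7B with PEFT.
# Assumptions: adapter repo id "psmathur/lora-alpaca-LLaMa7B" (not confirmed),
# and local LLaMA-7B weights at BASE_MODEL_PATH (obtain separately; research-only).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL_PATH = "path/to/llama-7b-hf"       # placeholder: your local base weights
ADAPTER_ID = "psmathur/lora-alpaca-LLaMa7B"   # assumed Hugging Face repo id

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL_PATH)
base_model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)

# Alpaca-style prompt (assumed, based on the dataset this adapter was trained on).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three benefits of LoRA fine-tuning.\n\n### Response:\n"
)
tokens = tokenizer(prompt, return_tensors="pt").input_ids.to(base_model.device)
output = model.generate(tokens, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```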
$-/run
1
Huggingface
bloom-7b1-lora-quote-generator
Generates an English quote based on keywords or tags. For example: 'books', 'humor' ==> "So many books, so little time."
$-/run
0
Huggingface