Anon8231489123
Rank:
Average Model Cost: $0.0000
Number of Runs: 9,177
Models by this creator
gpt4-x-alpaca-13b-native-4bit-128g
GPT4-x-alpaca-13b-native-4bit-128g is a 13-billion-parameter text-generation model quantized to 4-bit precision with a group size of 128. It generates coherent, human-like text from a given prompt, tracks conversational context, and mimics human writing style, while the 4-bit quantization reduces memory requirements compared with the full-precision model (see the loading sketch below the listing).
$-/run
5.2K
Huggingface
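The listing does not say how to load the checkpoint; as a rough, non-authoritative sketch, a 4-bit group-size-128 GPTQ model like this can typically be loaded with the AutoGPTQ library. The repo id, checkpoint basename, and generation settings below are assumptions and may need to be adjusted to the files actually published on Huggingface.

# Minimal sketch: loading a 4-bit / group-size-128 GPTQ checkpoint and generating text.
# Assumes the auto-gptq package is installed; repo id and checkpoint basename are guesses.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

repo_id = "anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="gpt4-x-alpaca-13b-native-4bit-128g",  # assumed checkpoint name, no extension
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128),  # matches "4bit-128g" in the name
    use_safetensors=False,
    device="cuda:0",
)

prompt = "Explain in two sentences what 4-bit quantization does to a language model."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))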
vicuna-13b-GPTQ-4bit-128g
Converted model for GPTQ from https://huggingface.co/lmsys/vicuna-13b-delta-v0. This is the best local model I've ever tried. I hope someone makes a version based on the uncensored dataset...

GPTQ conversion command (on the CUDA branch):
CUDA_VISIBLE_DEVICES=0 python llama.py ../lmsys/vicuna-13b-v0 c4 --wbits 4 --true-sequential --groupsize 128 --save vicuna-13b-4bit-128g.pt

One token was added to the tokenizer model (see the sketch below the listing):
python llama-tools/add_tokens.py lmsys/vicuna-13b-v0/tokenizer.model /content/tokenizer.model llama-tools/test_list.txt

Use in Oobabooga with these flags: --wbits 4 --groupsize 128

Enjoy.
$-/run
4.0K
Huggingface
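The token-addition step above is performed by llama-tools/add_tokens.py; the snippet below is only a generic illustration of that kind of edit using the sentencepiece package. The paths and the token string are placeholders (the real token list lives in llama-tools/test_list.txt), not the exact values used for this model.

# Sketch: appending one extra piece to a SentencePiece tokenizer.model,
# the same kind of edit the add_tokens.py command above performs.
from sentencepiece import sentencepiece_model_pb2 as sp_pb2

src_path = "lmsys/vicuna-13b-v0/tokenizer.model"  # original tokenizer (path as in the command above)
dst_path = "tokenizer.model"                      # patched copy

proto = sp_pb2.ModelProto()
with open(src_path, "rb") as f:
    proto.ParseFromString(f.read())

piece = sp_pb2.ModelProto.SentencePiece()
piece.piece = "<extra_token>"  # placeholder token; the real one comes from test_list.txt
piece.score = 0.0
proto.pieces.append(piece)

with open(dst_path, "wb") as f:
    f.write(proto.SerializeToString())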