h2oai

Rank:

Average Model Cost: $0.0000

Number of Runs: 114,294

Models by this creator

h2ogpt-gm-oasst1-en-2048-falcon-7b-v3

h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 is an instruction-tuned text generation model trained with H2O LLM Studio. As the name indicates, it is built on the tiiuae/falcon-7b base model and fine-tuned on English conversations from the OpenAssistant oasst1 dataset with a 2048-token context window. It produces coherent, contextually relevant text and can be used for tasks such as drafting emails, answering questions, and holding conversations.
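
As a minimal sketch of how a model like this is typically loaded with the Hugging Face transformers library (illustrative, not the card's exact code; the sample question is arbitrary):

```python
# Minimal sketch: loading the model on a GPU machine.
# Assumes transformers, accelerate, torch and einops are installed.
import torch
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # the repo ships a custom h2oai pipeline
    device_map="auto",
)

out = generate("Why is drinking water good for you?", max_new_tokens=128)
print(out[0]["generated_text"])
```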

$-/run

57.5K

Huggingface

h2ogpt-gm-oasst1-en-2048-falcon-7b-v2

h2ogpt-gm-oasst1-en-2048-falcon-7b-v2 is a conversational model fine-tuned from tiiuae/falcon-7b with H2O LLM Studio on English OpenAssistant (oasst1) data, again with a 2048-token context; it is the predecessor of the v3 release above. It generates human-like English responses and is suited to chatbots, virtual assistants, and customer-support applications.
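
A sketch of a single-turn chat call built by hand. The `<|prompt|>`/`<|answer|>` template is an assumption based on other H2O LLM Studio model cards; verify it against this model's card:

```python
# Sketch of a single-turn chat prompt constructed manually.
# The <|prompt|>/<|answer|> template is assumed, not confirmed here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True  # needs accelerate
)

prompt = "<|prompt|>How do I brew good coffee?<|endoftext|><|answer|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.3)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```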

$-/run

18.0K

Huggingface

h2ogpt-oig-oasst1-512-6_9b

h2ogpt-oig-oasst1-512-6_9b is a 6.9-billion-parameter instruction-following large language model licensed for commercial use. It is based on the EleutherAI/pythia-6.9b base model and fine-tuned on multiple instruction datasets (OIG and OpenAssistant oasst1, as the name indicates). It can serve as a chatbot and integrates with the transformers library. Users should be aware that it may exhibit biases or generate incorrect or nonsensical responses, and should use it responsibly and ethically.
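
A sketch of using the model for a Q&A turn with transformers. The "<human>:/<bot>:" template is an assumption drawn from H2O.ai's original h2ogpt cards; confirm it on the model page:

```python
# Sketch of a single instruction-following turn.
# The "<human>: ... <bot>:" template is assumed, not confirmed here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2ogpt-oig-oasst1-512-6_9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<human>: Summarize the water cycle in two sentences.\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=96)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```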

$-/run

13.7K

Huggingface

h2ogpt-oasst1-512-12b

h2ogpt-oasst1-512-12b is a 12-billion-parameter instruction-following large language model available for commercial use. It is based on the EleutherAI/pythia-12b base model and fine-tuned on the h2oai/openassistant_oasst1_h2ogpt_graded dataset. It can be used as a chatbot; instructions for running your own are in the H2O.ai GitHub repository, and the model has been validated with the EleutherAI lm-evaluation-harness. As with the other models here, it may exhibit biases, produce incorrect or nonsensical responses, or generate offensive content; users are responsible for critically evaluating the output, using the model responsibly and ethically, and reporting biased or inappropriate content to the repository maintainers.
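
A minimal terminal chatbot loop, as a sketch of the "run your own chatbot" use case (not H2O.ai's actual chatbot code; the prompt template is again an assumption to verify on the card):

```python
# Sketch: minimal terminal chatbot with streamed output.
# TextStreamer prints tokens to stdout as they are generated.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "h2oai/h2ogpt-oasst1-512-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

while True:
    user = input("you> ")
    prompt = f"<human>: {user}\n<bot>:"  # assumed template
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```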

$-/run

7.2K

Huggingface

h2ogpt-research-oasst1-llama-65b

Model Card Summary

H2O.ai's h2ogpt-research-oasst1-llama-65b is a 65-billion-parameter instruction-following large language model (NOT licensed for commercial use).

Base model: decapoda-research/llama-65b-hf
Fine-tuning dataset: h2oai/openassistant_oasst1_h2ogpt_graded
Data-prep and fine-tuning code: H2O.ai GitHub
Training logs: zip

Chatbot

Run your own chatbot: H2O.ai GitHub

Usage

To use the model with the transformers library on a machine with GPUs, first make sure you have the required libraries installed. Alternatively, if you prefer not to use trust_remote_code=True, you can download instruct_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer (see the sketch after this card).

Model Validation

Model validation results using the EleutherAI lm-evaluation-harness: TBD

Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

Biases and Offensiveness: The model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or be offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.

Limitations: The model is an AI-based tool, not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.

Use at Your Own Risk: Users must assume full responsibility for any consequences arising from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from use or misuse of the model.

Ethical Considerations: Users are encouraged to use the model responsibly and ethically. By using it, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activity.

Reporting Issues: If you encounter biased, offensive, or otherwise inappropriate content generated by the model, please report it to the repository maintainers through the provided channels. Your feedback helps improve the model and mitigate issues.

Changes to this Disclaimer: The developers reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to review it periodically. If you do not agree with any part of this disclaimer, refrain from using the model and any content generated by it.
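
A sketch of the Usage approach above: constructing the text-generation pipeline yourself from an explicitly loaded model and tokenizer. The 8-bit option is an illustrative choice, not the card's prescription; a 65B model typically needs several GPUs or quantized weights:

```python
# Sketch: building the pipeline manually from loaded model and tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "h2oai/h2ogpt-research-oasst1-llama-65b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",   # shards the weights across available GPUs
    load_in_8bit=True,   # optional; requires bitsandbytes (newer
                         # transformers prefer BitsAndBytesConfig)
)

generate = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generate("Explain overfitting briefly.", max_new_tokens=128)[0]["generated_text"])
```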

$-/run

2.1K

Huggingface

h2ogpt-gm-oasst1-multilang-2048-falcon-7b

Model Card Summary

This model was trained using H2O LLM Studio.

Base model: tiiuae/falcon-7b
Dataset preparation: OpenAssistant/oasst1

Usage

To use the model with the transformers library on a machine with GPUs, first make sure you have the transformers, accelerate, torch, and einops libraries installed. You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer. Alternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer, taking the preprocessing steps into account (see the sketch after this card).

Model Configuration

This model was trained using H2O LLM Studio with the configuration in cfg.yaml. Visit H2O LLM Studio to learn how to train your own large language models.

Model Validation

Model validation results using the EleutherAI lm-evaluation-harness.

Disclaimer

The same disclaimer as for h2ogpt-research-oasst1-llama-65b above applies; please read it carefully before using the model.
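
A sketch of the "print a sample prompt after preprocessing" step described above. The template string is an assumption taken from similar H2O LLM Studio cards:

```python
# Sketch: inspecting the preprocessed prompt before it is fed to
# the tokenizer. The <|prompt|>/<|answer|> template is assumed.
from transformers import AutoTokenizer

model_id = "h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

question = "Warum ist Wasser nass?"  # the model is multilingual
prompt = f"<|prompt|>{question}<|endoftext|><|answer|>"
print(prompt)                          # the exact string the model sees
print(tokenizer(prompt).input_ids)     # and the token ids it becomes
```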

$-/run

1.6K

Huggingface

h2ogpt-gm-oasst1-en-xgen-7b-8k

Model Card Summary

This model was trained using H2O LLM Studio.

Base model: Salesforce/xgen-7b-8k-base
Dataset preparation: OpenAssistant/oasst1, personalized

Usage

To use the model with the transformers library on a machine with GPUs, first make sure you have the transformers, accelerate, and torch libraries installed. You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer. Alternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer; if the model and tokenizer are fully supported in the transformers package, this allows you to set trust_remote_code=False (see the sketch after this card).

Model Configuration

This model was trained using H2O LLM Studio with the configuration in cfg.yaml. Visit H2O LLM Studio to learn how to train your own large language models.

Disclaimer

The same disclaimer as for h2ogpt-research-oasst1-llama-65b above applies; please read it carefully before using the model.
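
A sketch of the trust_remote_code=False path described above, building the pipeline from an explicitly loaded model and tokenizer. Whether False actually works depends on your transformers version supporting this architecture and tokenizer:

```python
# Sketch: explicit load with trust_remote_code=False. If your
# transformers version does not fully support the xgen architecture
# or tokenizer, flip these flags back to True.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "h2oai/h2ogpt-gm-oasst1-en-xgen-7b-8k"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=False
)

chat = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(chat("<|prompt|>What is an 8k context window good for?<|endoftext|><|answer|>",
           max_new_tokens=128)[0]["generated_text"])
```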

$-/run

1.5K

Huggingface
