Wolfram

Models by this creator


miquliz-120b-v2.0

wolfram

Total Score: 85

The miquliz-120b-v2.0 is a 120-billion-parameter large language model created by interleaving layers of the miqu-1-70b-sf and lzlv_70b_fp16_hf models with the mergekit tool (a hedged sketch of this style of merge configuration appears below). Version 2.0 improves on v1.0 by incorporating merge techniques from the TheProfessor-155b model. The model is inspired by goliath-120b and is maintained by Wolfram.

Model inputs and outputs

Inputs
- Text prompts of up to 32,768 tokens in length

Outputs
- A continuation of the provided prompt, generating new text relevant to the input

Capabilities

The miquliz-120b-v2.0 model achieved top ranks and double perfect scores in the maintainer's own language model comparisons and tests, and it demonstrates strong general language understanding and generation across a variety of tasks.

What can I use it for?

The model's scale and performance suit it to applications that require powerful text generation, such as content creation, question answering, and conversational AI (see the prompting sketch below). It could be fine-tuned for specific domains or integrated into products via the CopilotKit open-source platform.

Things to try

Prompt the model with a variety of tasks, from creative writing to analysis and problem solving. Its size and breadth of knowledge make it a strong starting point for developing custom language models tailored to your needs.
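To make the layer-interleaving merge concrete, here is a minimal sketch of a mergekit "passthrough" configuration in the style used for frankenmerges like this one. The slice boundaries and layer ranges below are invented for illustration and are not the maintainer's actual recipe; the real configuration is published on the model's Hugging Face page. The two source model IDs are the published Hugging Face repos.

```yaml
# Hypothetical mergekit passthrough config illustrating a layer-interleaving
# merge in the style of miquliz-120b-v2.0. The layer ranges are invented for
# this example; the actual recipe is on the model's Hugging Face page.
merge_method: passthrough
slices:
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [0, 20]
  - sources:
      - model: lizpreciatior/lzlv_70b_fp16_hf
        layer_range: [10, 30]
  # ... further alternating slices continue through both 80-layer models ...
dtype: float16
```

A config like this is run with mergekit's command-line entry point, e.g. `mergekit-yaml config.yml ./output-dir`, which stitches the selected layer ranges end to end into a single, deeper model.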
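For the inputs and outputs described above, a minimal prompting sketch with Hugging Face transformers might look like the following. It assumes the published wolfram/miquliz-120b-v2.0 repo, hardware able to host a 120B model (quantized GGUF/EXL2 variants are common in practice), and a Mistral-style [INST] prompt wrapper inherited from the miqu lineage; treat these as assumptions, not the maintainer's prescribed setup.

```python
# A minimal prompting sketch, assuming the wolfram/miquliz-120b-v2.0 repo
# and a Mistral-style [INST] prompt wrapper (inherited from the miqu lineage).
# Running the full FP16 model requires multiple large GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wolfram/miquliz-120b-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (requires the accelerate package) shards the model
# across all available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] Summarize the plot of Hamlet in two sentences. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a continuation of the prompt; sampling settings are illustrative.
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The same prompt format works with inputs up to the model's 32,768-token context window; only `max_new_tokens` bounds the length of the generated continuation.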


Updated 5/27/2024