![Introducing Llama2-70B-Chat with MosaicML Inference](https://log4dev.com/articles/introducing-llama2-70b-chat-with-mosaicml-inference-46/a173afe6efbc1d708b98e1b8e6a49a48bf280381309e032b50c70830138f9034225b5e219dde26f8febafd1b5d08360c971ed13b2fe4cca478a2e6340dad8b05.png)
# Introducing Llama2-70B-Chat with MosaicML Inference
On July 18th, Meta published Llama2-70B-Chat: a 70B-parameter language model pre-trained on 2 trillion tokens of text with a context length of 4,096 that outperforms all open-source models on many benchmarks and is comparable in quality to closed proprietary models such as OpenAI's ChatGPT and Google PaLM-Bison. Llama2-70B-Chat was fine-tuned for dialog use cases and carefully optimized for safety and helpfulness, leveraging over 1 million human annotations.

Figure 1: Human raters prefer Llama2-70B-Chat to ChatGPT and PaLM-Bison. Adapted from the Llama2 technical paper. See the paper for additional data on model-based evaluation using GPT-4.

Llama2-70B-Chat is available via MosaicML Inference. To get started, sign up here and check out our inference product page.
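Because Llama2-70B-Chat was fine-tuned on dialog data, prompts generally follow the Llama2 chat template with `[INST]` turn markers and an optional `<<SYS>>` system block. As a minimal sketch (a serving layer such as MosaicML Inference may apply this templating for you, so check the product docs before formatting prompts by hand):

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in the Llama2-Chat template.

    Llama2's chat fine-tuning wraps each user turn in [INST] ... [/INST]
    and places the system prompt inside a <<SYS>> ... <</SYS>> block.
    """
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"{user} [/INST]"
    )

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Summarize the Llama2 paper in one sentence.",
)
print(prompt)
```

Multi-turn conversations repeat the `[INST] ... [/INST]` pattern, with each model reply appended between a closing `[/INST]` and the next `<s>[INST]`.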