Introducing Llama2-70B-Chat with MosaicML Inference

On July 18th, Meta published Llama2-70B-Chat: a 70B-parameter language model pre-trained on 2 trillion tokens of text with a context length of 4096 that outperforms all open-source models on many benchmarks and is comparable in quality to closed proprietary models such as OpenAI’s ChatGPT and Google PaLM-Bison. Llama2-70B-Chat was fine-tuned for dialog use cases and carefully optimized for safety and helpfulness, leveraging over 1 million human annotations. Llama2-70B-Chat is available via MosaicML Inference; to get started, sign up here and check out our inference product page....

Figure 1: Human raters prefer Llama2-70B-Chat to ChatGPT and PaLM-Bison. Adapted from the Llama2 technical paper; see the paper for additional data on model-based evaluation using GPT-4.
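The excerpt doesn’t show what a call to a hosted Llama2-70B-Chat endpoint looks like. As a rough sketch only, assuming a placeholder endpoint URL, auth header, and request/response fields (not the documented MosaicML Inference API), a chat request over HTTPS might look like this:

```typescript
// Hypothetical sketch of calling a hosted Llama2-70B-Chat endpoint over HTTP.
// The endpoint URL, auth scheme, and payload/response fields are assumptions;
// consult the MosaicML Inference docs for the real request format.
const ENDPOINT = "https://example.mosaicml.hosting/llama2-70b-chat/predict"; // placeholder URL
const API_KEY = "REPLACE_WITH_YOUR_KEY"; // placeholder credential

async function chat(prompt: string): Promise<string> {
  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: API_KEY, // header name/scheme assumed
    },
    body: JSON.stringify({
      inputs: [prompt],                                      // field name assumed
      parameters: { temperature: 0.2, max_new_tokens: 256 }, // knobs assumed
    }),
  });
  if (!response.ok) {
    throw new Error(`Inference request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.outputs?.[0] ?? ""; // response shape assumed
}

chat("Summarize the Llama2 paper in two sentences.").then(console.log);
```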

Published in www.mosaicml.com · by Hagay Lupesko, Margaret Qian, Daya Khudia, Sam Havens, Daniel King, Erica Ji Yuen · 13 min read · August 28, 2023
Figma is powered by WebAssembly

WebAssembly cut Figma’s load time by 3x. WebAssembly was just released this past March but has already generated a lot of excitement in the web community. It’s a new binary format for machine code that was specifically designed with browsers in mind. Because apps compiled to WebAssembly can run as fast as native apps, it has the potential to change the way software is written on the web....
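As a minimal sketch of what running a compiled WebAssembly app in the browser involves, the standard `WebAssembly.instantiateStreaming` API downloads, compiles, and instantiates a module, after which its exports can be called from script. The module name and export below are illustrative, not Figma’s actual build artifacts:

```typescript
// Minimal sketch: load a compiled .wasm module in the browser and call an export.
// "figma_core.wasm" and "render_frame" are illustrative names.
const imports: WebAssembly.Imports = {
  env: {
    // Host functions the module expects; the exact set depends on the toolchain.
    log: (value: number) => console.log("wasm says:", value),
  },
};

async function loadModule(): Promise<void> {
  // instantiateStreaming compiles the binary while it downloads, which is part
  // of why startup can be much faster than parsing an equivalent JS bundle.
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("figma_core.wasm"),
    imports
  );
  // Exported functions run as compiled machine code once instantiated.
  const renderFrame = instance.exports.render_frame as unknown as (deltaMs: number) => void;
  renderFrame(16.7);
}

loadModule().catch(console.error);
```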

Published in figma.com · by Evan Wallace · 7 min read · August 10, 2023