
Try Meta's Code Llama models on your laptop or cloud VM in seconds.


Code Llama

Accept Terms & Acceptable Use Policy

Visit the Meta website to request access, then accept the license and acceptable use policy before accessing these models.

Note: Your XetHub user account email address must match the email you provide on this Meta website.

Code Llama is distributed for both research and commercial use under the license and acceptable use policy linked above. It is hosted on XetHub as a convenience. Please refer to Meta's documentation for more information about these models.

Why Code Llama on XetHub?

Downloading models is time-consuming, and the Code Llama models take up more than 300 GB on disk. With XetHub, you can mount this repository in seconds and load a model within minutes for fast inference on your own machine.

Repo mounted in 3.5 seconds

(.venv) ➜  Documents xet mount --prefetch 32 xet://XetHub/codellama/main codellama
Mounting to "/Users/srinikadamati/Documents/codellama"
Cloning into temporary directory "/var/folders/z3/v554nrl160q0pq6z_6_139l80000gn/T/.tmpv4oTmo"
Mounting as a background task...
Setting up mount point...
Mount at "/Users/srinikadamati/Documents/codellama" successful. Unmount with 'umount "/Users/srinikadamati/Documents/codellama"'
Mount complete in 3.487839s
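
Once the mount completes, the repository behaves like a local directory, with file contents fetched only when they are read. As a quick sanity check, a sketch like the following (assuming Python 3 and the mount point shown in the log above; adjust the path for your machine) lists the available quantized GGUF models:

```python
from pathlib import Path

# Mount point from the session above; change this to wherever you mounted the repo.
mount_point = Path.home() / "Documents" / "codellama"

# Listing files only reads metadata from the mount, so it should not pull down
# the full 300+ GB of model weights.
for model_file in sorted(mount_point.glob("GGUF/**/*.gguf")):
    size_gb = model_file.stat().st_size / 1e9
    print(f"{model_file.relative_to(mount_point)}  (~{size_gb:.1f} GB)")
```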

Inference in seconds, after a one-time model load of 291 seconds

llama.cpp/main -ngl 1 --model codellama/GGUF/7b/codellama-7b-python.Q8_0.gguf --prompt "Write me a Python code snippet that returns the maximum value of the Revenue column in a DataFrame. Only return the Python code syntax."

...

Write me a Python code snippet that returns the maximum value of the Revenue column in a DataFrame. Only return the Python code syntax.

# Solution

revenue_max = int(df['Revenue'].max()) [end of text]

llama_print_timings:        load time = 291070.41 ms
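
The command above uses the llama.cpp CLI. If you would rather drive the model from Python, the llama-cpp-python bindings load the same GGUF files. The sketch below is illustrative: it assumes llama-cpp-python is installed (`pip install llama-cpp-python`) and that the repository is mounted at `codellama/` as in the earlier example.

```python
from llama_cpp import Llama

# Load the 7B Python-specialized Code Llama model directly from the mounted repository.
# n_gpu_layers=1 mirrors the -ngl 1 flag passed to the llama.cpp CLI above.
llm = Llama(
    model_path="codellama/GGUF/7b/codellama-7b-python.Q8_0.gguf",
    n_gpu_layers=1,
)

prompt = (
    "Write me a Python code snippet that returns the maximum value of the "
    "Revenue column in a DataFrame. Only return the Python code syntax."
)

# Generate a completion; max_tokens and temperature are illustrative values.
output = llm(prompt, max_tokens=128, temperature=0.2)
print(output["choices"][0]["text"])
```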

Tutorial

For full instructions on how to use Code Llama with llama.cpp, we recommend reading our companion blog post.
