Code generation model based on Code Llama.
34B parameters · 31.6K pulls · updated 4 months ago
Tag                   Size
34b-python            19GB
34b-v2                19GB
34b-q4_0              19GB
34b-q4_1              21GB
34b-q5_0              23GB
34b-q5_1              25GB
34b-q8_0              36GB
34b-q2_K              14GB
34b-q3_K_S            15GB
34b-q3_K_M            16GB
34b-q3_K_L            18GB
34b-q4_K_S            19GB
34b-q4_K_M            20GB
34b-q5_K_S            23GB
34b-q5_K_M            24GB
34b-q6_K              28GB
34b-fp16              67GB
34b-python-q4_0       19GB
34b-python-q4_1       21GB
34b-python-q5_0       23GB
34b-python-q5_1       25GB
34b-python-q8_0       36GB
34b-python-q2_K       14GB
34b-python-q3_K_S     15GB
34b-python-q3_K_M     16GB
34b-python-q3_K_L     18GB
34b-python-q4_K_S     19GB
34b-python-q4_K_M     20GB
34b-python-q5_K_S     23GB
34b-python-q5_K_M     24GB
34b-python-q6_K       28GB
34b-python-fp16       67GB
34b-v2-q4_0           19GB
34b-v2-q4_1           21GB
34b-v2-q5_0           23GB
34b-v2-q5_1           25GB
34b-v2-q8_0           36GB
34b-v2-q2_K           14GB
34b-v2-q3_K_S         15GB
34b-v2-q3_K_M         16GB
34b-v2-q3_K_L         18GB
34b-v2-q4_K_S         19GB
34b-v2-q4_K_M         20GB
34b-v2-q5_K_S         23GB
34b-v2-q5_K_M         24GB
34b-v2-q6_K           28GB
34b-v2-fp16           67GB
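Each tag names a quantization level (e.g. q4_0, q5_K_M, fp16); lower-bit quantizations are smaller and faster but slightly less accurate. A minimal sketch of pulling and running one of these tags with the Ollama CLI, assuming the model is published under a repository name written here as the placeholder `<model>` (the actual name is not shown in this listing):

```shell
# Pull a specific quantization tag; <model> is a placeholder for this
# repository's name on the registry.
ollama pull <model>:34b-q4_K_M

# Start an interactive session with that tag.
ollama run <model>:34b-q4_K_M
```

If no tag is given, `ollama pull <model>` fetches the repository's default tag.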