ylai@lemmy.ml to LocalLLaMA@sh.itjust.works · English · 1 year ago

LLaMA Now Goes Faster on CPUs

justine.lol

I wrote 84 new matmul kernels to improve llamafile CPU performance.
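The linked post is about rewriting llamafile's CPU matmul kernels for speed. As a rough illustration of the kind of optimization involved (this is not Justine Tunney's actual code, which relies on unrolling and SIMD intrinsics; function names here are hypothetical), a minimal C sketch contrasting a naive triple loop with a cache-friendlier loop order:

```c
#include <assert.h>
#include <string.h>

/* Naive matmul: C = A * B, where A is m-by-k, B is k-by-n, row-major.
   The inner loop strides through B column-wise, which is cache-hostile. */
static void matmul_naive(int m, int n, int k,
                         const float *A, const float *B, float *C) {
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j) {
            float sum = 0.0f;
            for (int l = 0; l < k; ++l)
                sum += A[i * k + l] * B[l * n + j];
            C[i * n + j] = sum;
        }
}

/* Loop-reordered (i, k, j) variant: the inner loop now streams rows of
   B and C sequentially, which is friendlier to the cache and easier for
   compilers to auto-vectorize. */
static void matmul_ikj(int m, int n, int k,
                       const float *A, const float *B, float *C) {
    memset(C, 0, (size_t)m * (size_t)n * sizeof(float));
    for (int i = 0; i < m; ++i)
        for (int l = 0; l < k; ++l) {
            float a = A[i * k + l];
            for (int j = 0; j < n; ++j)
                C[i * n + j] += a * B[l * n + j];
        }
}
```

Both variants compute the same result; real kernels go further with register blocking and explicit vector instructions per CPU architecture.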

LocalLLaMA@sh.itjust.works


Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.
