cm0002@piefed.world to Technology@lemmy.zip · English · 2 days ago
My new laptop chip has an 'AI' processor in it, and it's a complete waste of space (www.pcgamer.com)
girsaysdoom@sh.itjust.works · English · 1 day ago
This might partially answer your question: https://github.com/ollama/ollama/issues/5186. It looks like the answer is: it depends on what you want to run. Some configurations are partially supported, but there's no clear-cut support yet?
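Worth noting: ollama doesn't target the NPU itself yet (that's what the issue above tracks), so on Intel chips the usual path is OpenVINO, which exposes the NPU as a plain device. A quick sketch of how you'd check what your machine actually supports; this assumes the `openvino` Python package, and `model.xml` is a placeholder for whatever IR model you're testing:

```python
import openvino as ov

core = ov.Core()

# List the devices OpenVINO can see on this machine; the NPU shows up
# as "NPU" on Meteor Lake and newer if the driver is installed.
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

# Try compiling a model for the NPU. Unsupported ops/configs fail here,
# which is one way to check whether *your* model is actually covered.
model = core.read_model("model.xml")  # placeholder path
try:
    compiled = core.compile_model(model, "NPU")
    print("NPU: supported")
except RuntimeError as err:
    print(f"NPU: not supported for this model ({err})")
```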
sheogorath@lemmy.world · English · 1 day ago
I tried running some models on an Intel 155H NPU, and the performance is actually worse than using the CPU directly for inference. However, it wins on the power consumption front, IIRC.
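That lines up with what a simple latency comparison tends to show: the NPU is built for sustained low-power inference, not peak speed (the power win you'd have to confirm with an external meter or powertop). A rough sketch of the timing side with OpenVINO; the model path is a placeholder, and it assumes a static input shape:

```python
import time
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder IR model

def bench(device: str, runs: int = 50) -> float:
    """Average inference latency in ms on the given OpenVINO device."""
    compiled = core.compile_model(model, device)
    # Dummy input matching the model's first input (static shapes only).
    shape = compiled.input(0).shape
    data = np.random.rand(*shape).astype(np.float32)
    compiled(data)  # warm-up
    start = time.perf_counter()
    for _ in range(runs):
        compiled(data)
    return (time.perf_counter() - start) / runs * 1000

for dev in ("CPU", "NPU"):
    print(f"{dev}: {bench(dev):.1f} ms/inference")
```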