enricomgian / ipex-llm-arc-140v-lunar-lake-windows-fix.md
Last active April 15, 2026 15:32
[Windows + Lunar Lake] Working solution: IPEX-LLM + Ollama v0.9.3 + modern models (qwen3:8b, gemma4) on Intel Arc 140V — undocumented fix

Clarifications (based on community feedback)

  • This uses IPEX-LLM as an optimization layer over SYCL; it is not the same as vanilla SYCL llama.cpp
  • Ollama handles all model loading (GGUF format, standard registry); IPEX-LLM provides GPU acceleration underneath
  • The archived intel/ipex-llm repo is irrelevant here: the pip package ipex-llm[cpp]==2.3.0b20251029 ships Ollama v0.9.3 and was updated in October 2025
  • This is experimental; treat it as "sharing what worked on my hardware", not an officially supported setup
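
For reference, a minimal setup sketch using the exact version pin named above. The `init-ollama.bat` helper is the usual IPEX-LLM convention for generating the bundled Ollama binaries on Windows, but its name and location may differ between releases, so check the package docs for your build:

```shell
# Install the pinned IPEX-LLM build that bundles Ollama v0.9.3
# (version taken from this gist; newer nightlies may behave differently)
pip install --pre --upgrade "ipex-llm[cpp]==2.3.0b20251029"

# In a fresh working directory, generate the bundled Ollama binaries.
# init-ollama.bat is the IPEX-LLM Windows helper; path/name may vary by release.
init-ollama.bat

# Start the Ollama server, then pull and run a model from the standard registry.
# qwen3:8b is one of the models reported working on Arc 140V in this gist.
ollama serve
ollama run qwen3:8b
```

Since Ollama handles the GGUF loading itself, no model conversion is needed; the IPEX-LLM layer only changes which GPU backend does the compute.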