Joey Hafner
1. homelab: [Gitea](https://gitea.jafner.tools/Jafner/homelab), [Github (docker_config)](https://github.com/Jafner/docker_config), [Github (wiki)](https://github.com/Jafner/wiki), [Github (cloud_tools)](https://github.com/Jafner/cloud_tools), [Github (self-hosting)](https://github.com/Jafner/self-hosting)
   - Rename to Jafner.net? Wouldn't that make the path `Jafner/Jafner.net/Jafner.net`?
2. Jafner.dev: [Github](https://github.com/Jafner/Jafner.dev)
3. dotfiles: [Gitea](https://gitea.jafner.tools/Jafner/dotfiles), [Github](https://github.com/Jafner/dotfiles)
4. nvgm: [Gitea](https://gitea.jafner.tools/Jafner/nvgm)
5. pamidi: [Gitea](https://gitea.jafner.tools/Jafner/pamidi), [Github](https://github.com/Jafner/pamidi)
6. docker-llm-amd: [Gitea](https://gitea.jafner.tools/Jafner/docker-llm-amd)
7. doradash: [Gitea](https://gitea.jafner.tools/Jafner/doradash)
8. clip-it-and-ship-it: [Gitea (PyClipIt)](https://gitea.jafner.tools/Jafner/PyClipIt), [Github](https://github.com/Jafner/clip-it-and-ship-it)
9. razer battery led: [Github](https://github.com/Jafner/Razer-BatteryLevelRGB)
10. 5etools-docker: [Github](https://github.com/Jafner/5etools-docker)
11. jafner-homebrew: [Github](https://github.com/Jafner/jafner-homebrew)
# Flash Attention in Docker on AMD is Not Yet Working
Below are my notes on the efforts I've made to get it working.
```dockerfile
FROM rocm/pytorch-nightly:latest

COPY . .

# Clone the ROCm fork of flash-attention with its submodules
RUN git clone --recursive https://github.com/ROCm/flash-attention.git /tmp/flash-attention
WORKDIR /tmp/flash-attention

# Cap parallel compile jobs so the build doesn't exhaust memory
ENV MAX_JOBS=8
RUN pip install -v .
```
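If the image builds at all, the next question is whether `pip install -v .` actually produced an importable wheel. A minimal check to run inside the container (the function name and messages here are illustrative, not from my notes):

```python
import importlib.util


def flash_attention_available() -> bool:
    """Return True if the flash_attn package is importable.

    find_spec only probes the import machinery, so this works
    even when the package is absent (it just returns None).
    """
    return importlib.util.find_spec("flash_attn") is not None


if __name__ == "__main__":
    if flash_attention_available():
        print("flash_attn found; the ROCm build produced a wheel")
    else:
        print("flash_attn not found; the build likely failed")
```

Running this with `docker run --rm <image> python check_flash_attn.py` gives a quick pass/fail signal before wiring the container into Oobabooga or another frontend.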
## Resources
- What is Flash-attention? (How do i use it with Oobabooga?) :...
- Adding flash attention to one click installer · Issue #4015 ...
- Accelerating Large Language Models with Flash Attention on A...
- GitHub - Dao-AILab/flash-attention: Fast and memory-efficien...
- GitHub - ROCm/llvm-project: This is the AMD-maintained fork ...
- GitHub - ROCm/AITemplate: AITemplate is a Python framework w...
- Stable diffusion with RX7900XTX on ROCm5.7 · ROCm/composable...
- Current state of training on AMD Radeon 7900 XTX (with benchmarks) · r/LocalLLaMA
- llm-tracker - howto/AMD GPUs
- RDNA3 support · Issue #27 · ROCm/flash-attention · GitHub
- GitHub - ROCm/xformers: Hackable and optimized Transformers ...
- [ROCm] support Radeon™ 7900 series (gfx1100) without using...