AGENTS.md: 20 additions & 1 deletion
@@ -1,4 +1,4 @@
-# Copilot Instructions for llama.cpp
+# Instructions for llama.cpp
 
 ## Repository Overview
 
@@ -12,6 +12,24 @@ llama.cpp is a large-scale C/C++ project for efficient LLM (Large Language Model
 - **Backends supported**: CPU (AVX/NEON/RVV optimized), CUDA, Metal, Vulkan, SYCL, ROCm, MUSA
 - **License**: MIT
 
+## Disclose AI Usage
+
+It is crucial to remind contributors that the project mandates disclosure of any AI usage in pull requests. This requirement stems from the potential for AI-generated code to include suboptimal optimizations and hidden bugs, owing to the inherent overconfidence in AI outputs.
+
+When generating significant portions of code, address this by:
+- Informing the user that AI-generated content may be rejected by maintainers.
+- Clearly marking AI-generated code in commit messages and comments.
+- Example of commit message: `[AI] Fix a race condition in ...`
+- Example of code comment: `// [AI] spawn a new thread ...`
+
+These measures apply to:
+- Changes resulting in large portions of code or complex logic.
+- Modifications or additions to public APIs in `llama.h`, `ggml.h`, or `mtmd.h`.
+- Backend-related changes, such as those involving CPU, CUDA, Metal, Vulkan, etc.
+- Modifications to `tools/server`.
+
+Note: These measures can be omitted for small fixes or trivial changes.
+
 ## Build Instructions
 
 ### Prerequisites
@@ -251,6 +269,7 @@ Primary tools:
 - **Cross-platform compatibility**: Test on Linux, macOS, Windows when possible
 - **Performance focus**: This is a performance-critical inference library
 - **API stability**: Changes to `include/llama.h` require careful consideration
+- **Disclose AI Usage**: Refer to the "Disclose AI Usage" section earlier in this document
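As a minimal sketch of the commit-message convention the diff introduces, the following shell snippet prefixes a subject line with the `[AI]` marker; the subject text and variable names are invented for illustration and are not part of llama.cpp tooling:

```shell
# Hypothetical example of applying the "[AI]" disclosure marker
# from the policy above; the commit subject is made up.
subject="Fix a race condition in the worker pool"
marked="[AI] ${subject}"
echo "${marked}"
```

A contributor might then use the marked string directly, e.g. `git commit -m "${marked}"`, so reviewers can spot AI-assisted changes at a glance.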