Cline Rules Optimization
What Are Cline Rules?
Cline rules, stored in `.clinerules`, are user-defined instructions that developers can add to the system prompt of the popular open-source coding agent Cline, similar to `.cursor/rules` in Cursor or `CLAUDE.md` in Claude Code.
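Concretely, a ruleset is just plain-language guidance appended to Cline's system prompt. The rules below are a hypothetical illustration, not the optimized ruleset produced by the approach described later:

```
# .clinerules (hypothetical example)
- Before changing code, read the failing test and the module it exercises end to end.
- Prefer the smallest diff that makes the failing test pass; avoid refactoring unrelated code.
- After every edit, re-run the relevant unit tests and summarize the results.
```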

How Can You Boost Coding Accuracy Via Cline Rules?
One proven strategy for optimizing an agent's system prompt is a research-driven approach called prompt learning. Inspired by reinforcement learning, it follows the same action → evaluation → improvement loop but uses meta-prompting instead of gradients, with a key addition: LLM-generated feedback explaining why outputs were right or wrong, which gives the optimizer a richer signal for refining future prompts.
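As a rough sketch of that feedback step (the model name, prompt wording, and function shape are illustrative assumptions, not the exact pipeline), an evaluator LLM turns each pass/fail result into an explanation the optimizer can learn from:

```python
from openai import OpenAI

client = OpenAI()

def explain_result(task: str, patch: str, tests_passed: bool) -> str:
    """Ask an LLM why a generated patch passed or failed its unit tests.

    The English-language explanation is the extra training signal that
    prompt learning feeds to the meta-prompt optimizer.
    """
    verdict = "passed" if tests_passed else "failed"
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{
            "role": "user",
            "content": (
                f"A coding agent produced this patch for the task below, "
                f"and the unit tests {verdict}.\n\n"
                f"Task:\n{task}\n\nPatch:\n{patch}\n\n"
                "Explain concretely why the patch was right or wrong, and what "
                "general behavior would have avoided the mistake."
            ),
        }],
    )
    return response.choices[0].message.content
```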

Based on early testing, prompt learning can improve Cline's accuracy by over 15% on SWE-Bench without retraining or fine-tuning an LLM, changing any tools, or modifying the architecture — bringing GPT-4.1's performance on SWE-Bench Lite to near state-of-the-art levels (matching Claude Sonnet 4.5) purely through ruleset optimization.
The Approach
Optimizing Cline in Act Mode — with full permission to read, write, and edit code files — and testing its accuracy on SWE-Bench Lite showcases the power of this approach.

The loop works as follows:
- Run Cline on SWE-Bench Lite (150 train, 150 test) and record its train/test accuracy.
- Collect the patches it produces and verify correctness via unit tests.
- Use GPT-5 to explain why each fix succeeded or failed on the training set.
- Feed those training evals — along with Cline’s system prompt and current ruleset — into a Meta-Prompt LLM to generate an improved ruleset.
- Update `.clinerules`, re-run, and repeat (a condensed sketch of the loop follows this list).
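Below is a condensed, hypothetical sketch of that loop in Python. run_cline() and run_unit_tests() are stand-ins for the actual agent harness and test runner, explain_result() is the evaluator sketched earlier, and the meta-prompt model and wording are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical stand-ins for the real harness: run Cline (Act Mode) on a task
# with a given ruleset, and check the resulting patch against the task's tests.
def run_cline(task: str, ruleset: str) -> str: ...
def run_unit_tests(task: str, patch: str) -> bool: ...

def optimize_ruleset(train_tasks: list[str], system_prompt: str, ruleset: str, epochs: int = 3) -> str:
    """One possible shape of the prompt-learning loop (illustrative, not the exact code)."""
    for _ in range(epochs):
        # 1. Run Cline on the training split and collect its patches.
        patches = [(task, run_cline(task, ruleset)) for task in train_tasks]

        # 2. Verify each patch against the task's unit tests.
        evals = [(task, patch, run_unit_tests(task, patch)) for task, patch in patches]

        # 3. Ask an LLM to explain each success or failure
        #    (explain_result is sketched in the earlier evaluation example).
        feedback = [explain_result(task, patch, passed) for task, patch, passed in evals]

        # 4. Feed the system prompt, current ruleset, and feedback to a
        #    meta-prompt LLM that proposes an improved ruleset.
        meta = client.chat.completions.create(
            model="gpt-5",
            messages=[{
                "role": "user",
                "content": (
                    "You are optimizing the .clinerules ruleset of a coding agent.\n\n"
                    f"Agent system prompt:\n{system_prompt}\n\n"
                    f"Current ruleset:\n{ruleset}\n\n"
                    "Feedback on the agent's training-set attempts:\n"
                    + "\n---\n".join(feedback)
                    + "\n\nWrite an improved ruleset that addresses the recurring failures."
                ),
            }],
        )
        # 5. Update the ruleset and repeat.
        ruleset = meta.choices[0].message.content

    return ruleset
```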
Get Started
Try optimizing Cline on SWE-Bench with prompt learning and see the improvement for yourself! Code/notebook here.