This post literally begins with “Let’s test and refine it together”. OP writes,
“I’ve been experimenting with a mathematical axiom-based approach to prompt engineering that’s yielding consistently strong results across different LLM use cases. I’d love to share it with fellow prompt engineers and see how we can collectively improve it.”
Axiom equation prompts incorporate axioms (fundamental truths or established facts) relevant to the task at hand, much as axioms serve as the foundation of a theory. The intuition is that supplying an LLM with the axioms needed to solve a problem can enhance its ability to solve math problems and generate proofs.
See this as an example:
Hey, thank you for the award! Very kind. Here is one I made for resume and LinkedIn analysis. I have a few others; I will probably just create a repo and compile all of our tests! Make sure you attach your resume and LinkedIn.
Axiom: max(InterviewProbability(resume, profile)) subject to
∀element ∈ {Resume ∪ LinkedIn}, ( impact_metrics(element, I) ∧ ats_optimization(element, A) ∧ human_engagement(element, H) ∧ market_alignment(element, M) ∧ competitive_differentiation(element, D) )

Optimization Parameters:
• I = f(quantified_achievements, result_demonstration)
• A = g(keyword_density, role_alignment)
• H = h(narrative_strength, visual_clarity)
• M = i(industry_trends, skill_relevance)
• D = j(unique_value_prop, competitive_advantage)

Implementation Vectors:
1. max(ats_pass_rate)
2. max(recruiter_engagement)
3. max(interview_conversion)
4. min(initial_rejection)

Document Analysis Protocol:
1. Evaluate current resume against market standards
2. Analyze LinkedIn profile optimization
3. Identify improvement opportunities
4. Generate enhanced content
5. Optimize for both ATS and human readers

Output Requirements:
1. Complete modernized resume
2. LinkedIn profile optimization recommendations
3. Achievement metrics enhancement
4. Keyword optimization strategy
5. Visual format recommendations

Action Sequence:
1. Review current materials
2. Generate optimized resume
3. Provide LinkedIn recommendations
4. Detail improvement rationale

Terminal Condition: Career_Document_Score ≥ top_percentile(market_competition)

Begin analysis and optimization sequence now.
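For anyone who wants to try this style programmatically, here is a minimal sketch of how an axiom-style prompt could be templated and sent to a model. It assumes the official OpenAI Python client; the helper name build_axiom_prompt, the model choice, and the trimmed-down fields are illustrative, not part of OP's method.

```python
from openai import OpenAI

def build_axiom_prompt(axiom: str, parameters: list[str],
                       protocol: list[str], terminal: str) -> str:
    """Assemble an axiom-style prompt mirroring the structure of OP's example."""
    lines = [f"Axiom: {axiom}", "", "Optimization Parameters:"]
    lines += [f"• {p}" for p in parameters]
    lines += ["", "Document Analysis Protocol:"]
    lines += [f"{i}. {step}" for i, step in enumerate(protocol, 1)]
    lines += ["", f"Terminal Condition: {terminal}",
              "Begin analysis and optimization sequence now."]
    return "\n".join(lines)

prompt = build_axiom_prompt(
    axiom="max(InterviewProbability(resume, profile))",
    parameters=["I = f(quantified_achievements, result_demonstration)",
                "A = g(keyword_density, role_alignment)"],
    protocol=["Evaluate current resume against market standards",
              "Identify improvement opportunities",
              "Generate enhanced content"],
    terminal="Career_Document_Score ≥ top_percentile(market_competition)",
)

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```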
Some feedback:
Outside of possibly being used for AI testing, this looks WAY overcomplicated. All you need to do is: be clear about your goal, what specific outcome you want, what format would be most useful to you, and what level of detail you need. That’s it.

I understand your attempt to be rigorous, but using mathematical notation here is overcomplicated. These concepts could be expressed more clearly in plain language. The relationships described (like the f(), g(), h() functions) are too abstract to be practically helpful - they don’t specify HOW to actually achieve these qualities.

The “Implementation Vectors” and “Response Generation Protocol” sections are better; they break the prompt down into practical steps for understanding the question and constructing a good response. The “Terminal Condition” requiring the response to be the absolute best possible solution sets an unrealistic goal - in real communication, “good enough” solutions that meet the core needs are often more practical than pursuing theoretical perfection.
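To make that advice concrete, here is one possible plain-language equivalent of the resume prompt above (the wording is ours, not the commenter’s):

I’m attaching my resume and LinkedIn profile. Please review both against current market standards for my target role, then: (1) rewrite the resume with quantified achievements and keywords that will pass ATS screening, (2) list specific LinkedIn profile improvements, and (3) briefly explain the reasoning behind each change. Format the output as a revised resume followed by a bulleted list of recommendations.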
Interestingly, the OP made the technique an open-source project after seeing the discussion it spurred. Similar to… PocketPal
I love novel ideas, and these comments are meant to be constructive: Why are there no example implementations and example results? This is so abstract. Couldn’t this be implemented as a list of rules or conditions on how to perform the task and how to format the response? LLMs can’t do lookahead or search trees, and CoT-style prompting strategies with scores help with that; I don’t see that in the post or in that repo, though I might have missed something. Instead of farming out testing, use something like DSPy, textgrad, or roll out a custom test framework in the repo. That way we can all submit our prompts as datasets that can be evaluated.
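As a concrete version of that last suggestion, here is a minimal sketch of a DSPy evaluation harness (assuming DSPy ≥ 2.5; the model name, dataset entries, and metric are placeholders, not from OP’s repo):

```python
import dspy

# Point DSPy at any supported model; this name is a placeholder.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A shared devset lets everyone submit prompts as data that can be scored.
devset = [
    dspy.Example(task="Summarize the key risks of X in two sentences.",
                 answer="...").with_inputs("task"),
]

# Any prompt strategy (axiom-style or plain) can be wrapped as a module.
program = dspy.Predict("task -> answer")

def overlap_metric(example, prediction, trace=None):
    # Placeholder metric: reference answer appears in the model output.
    return example.answer.lower() in prediction.answer.lower()

evaluate = dspy.Evaluate(devset=devset, metric=overlap_metric,
                         display_progress=True)
evaluate(program)
```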
It looks like OP is still open to constructive criticism even though most of the comments are critiques of his approach. More importantly, this is an attempt at innovation in the prompt engineering space by a community member who seems enthusiastic about testing and refining it together, and who provides a (seemingly LLM-generated, but I’ll overlook that) repository where said testing and refining can happen. OP also uses the repository to give examples of how the prompt strategy can be used.
Others point to similar work going on: