11 changes: 10 additions & 1 deletion README.md
@@ -126,6 +126,7 @@ Explore our extensive list of prompt engineering techniques, ranging from basic
| 20 | 🌍 **Advanced Applications** | [Ethical Considerations](https://github.com/NirDiamant/Prompt_Engineering/blob/main/all_prompt_engineering_techniques/ethical-prompt-engineering.ipynb) | Bias avoidance and inclusivity |
| 21 | 🌍 **Advanced Applications** | [Prompt Security](https://github.com/NirDiamant/Prompt_Engineering/blob/main/all_prompt_engineering_techniques/prompt-security-and-safety.ipynb) | Preventing injections |
| 22 | 🌍 **Advanced Applications** | [Effectiveness Evaluation](https://github.com/NirDiamant/Prompt_Engineering/blob/main/all_prompt_engineering_techniques/evaluating-prompt-effectiveness.ipynb) | Evaluating prompt performance |
| 23 | 🌍 **Advanced Applications** | [Multi-Provider Prompting](https://github.com/NirDiamant/Prompt_Engineering/blob/main/all_prompt_engineering_techniques/multi-provider-prompting.ipynb) | Using and comparing multiple LLM providers (OpenAI, MiniMax) |

### 🌱 Fundamental Concepts

@@ -310,13 +311,21 @@ Explore our extensive list of prompt engineering techniques, ranging from basic
Covers techniques for prompt injection prevention, content filtering implementation, and testing the effectiveness of security and safety measures.

22. **[Evaluating Prompt Effectiveness](https://github.com/NirDiamant/Prompt_Engineering/blob/main/all_prompt_engineering_techniques/evaluating-prompt-effectiveness.ipynb)**

#### Overview 🔎
Explores methods and techniques for evaluating the effectiveness of prompts in AI language models.

#### Implementation 🛠️
Covers setting up evaluation metrics, implementing manual and automated evaluation techniques, and providing practical examples using OpenAI and LangChain.

23. **[Multi-Provider Prompting](https://github.com/NirDiamant/Prompt_Engineering/blob/main/all_prompt_engineering_techniques/multi-provider-prompting.ipynb)**

#### Overview 🔎
Demonstrates how to use multiple LLM providers (OpenAI, MiniMax) with the same prompt engineering techniques, enabling provider comparison and avoiding vendor lock-in.

#### Implementation 🛠️
Covers the `utils/llm_provider.py` helper for multi-provider support, side-by-side provider comparison for zero-shot/few-shot/CoT prompts, auto-detection via environment variables, and using MiniMax M2.5/M2.7 models through an OpenAI-compatible API.
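
The `utils/llm_provider.py` helper itself is not shown in this diff; a minimal sketch of the environment-variable auto-detection it describes might look like the following (the function name, env var names, and the MiniMax endpoint URL are assumptions for illustration, not the actual helper's API):

```python
import os

def detect_provider() -> dict:
    """Pick a provider config based on which API key env vars are set.

    Hypothetical sketch: checks MiniMax first, then falls back to OpenAI.
    """
    if os.getenv("MINIMAX_API_KEY"):
        # MiniMax exposes an OpenAI-compatible endpoint, so the same client
        # code can be reused by pointing base_url at it (endpoint assumed).
        return {
            "provider": "minimax",
            "base_url": "https://api.minimax.io/v1",
            "api_key": os.environ["MINIMAX_API_KEY"],
        }
    if os.getenv("OPENAI_API_KEY"):
        return {
            "provider": "openai",
            "base_url": "https://api.openai.com/v1",
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    raise RuntimeError("No supported provider API key found in the environment")
```

The returned `base_url`/`api_key` pair can then be passed to any OpenAI-compatible client, which is what makes the side-by-side comparison of zero-shot/few-shot/CoT prompts across providers practical.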
Comment on lines +323 to +327
⚠️ Potential issue | 🟡 Minor

Duplicate subheadings trigger markdownlint MD024.

Lines 323 and 326 reuse heading text already used elsewhere (`#### Overview 🔎`, `#### Implementation 🛠️`). Consider converting these to bold labels to preserve the style while avoiding duplicate-heading lint failures.

✅ Lint-safe wording example
-    #### Overview 🔎
-    Demonstrates how to use multiple LLM providers (OpenAI, MiniMax) with the same prompt engineering techniques, enabling provider comparison and avoiding vendor lock-in.
+    **Overview 🔎**  
+    Demonstrates how to use multiple LLM providers (OpenAI, MiniMax) with the same prompt engineering techniques, enabling provider comparison and avoiding vendor lock-in.

-    #### Implementation 🛠️
-    Covers the `utils/llm_provider.py` helper for multi-provider support, side-by-side provider comparison for zero-shot/few-shot/CoT prompts, auto-detection via environment variables, and using MiniMax M2.5/M2.7 models through an OpenAI-compatible API.
+    **Implementation 🛠️**  
+    Covers the `utils/llm_provider.py` helper for multi-provider support, side-by-side provider comparison for zero-shot/few-shot/CoT prompts, auto-detection via environment variables, and using MiniMax M2.5/M2.7 models through an OpenAI-compatible API.
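
If the repeated per-technique subheadings are intentional, an alternative to rewording is to relax MD024 in a markdownlint config (assuming the repo uses one, e.g. `.markdownlint.json`):

```json
{
  "MD024": { "siblings_only": true }
}
```

With `siblings_only` enabled, identical headings are flagged only when they share the same parent heading, so the repeated `Overview`/`Implementation` subheadings under different technique sections would pass.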
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 323-323: Multiple headings with the same content (MD024, no-duplicate-heading)
[warning] 326-326: Multiple headings with the same content (MD024, no-duplicate-heading)


## Getting Started

To begin exploring and implementing prompt engineering techniques: