Free Republic
Browse · Search
General/Chat
Topics · Post Article

To: ProtectOurFreedom

You can also tune your conversations by “programming” the AI session to do exactly what you want. Writing good prompts and knowing how to format them is the art of using AI. Look at these tunable parameters, which generally apply to most, if not all, AI systems. I copied this from ChatGPT after telling it:

“Give me a list of tunable parameters to make working with ChatGPT (and other AI systems) more powerful, useful, creative, productive and easier to use for business, science and research use. Also give me tips and examples of how to accomplish real meaningful work and science using AI prompts.”

Then I refined the reply by asking it to be more specific with,

“Show me ALL tunable parameters.”

It resulted in this -

🔧 ALL Tunable Parameters in AI Systems

These parameters control how the AI responds and can be adjusted based on the task.

1. Temperature (Controls Creativity & Randomness)
• Range: 0.0 – 1.2
• Low (0.0 - 0.3): More deterministic, best for factual, precise tasks.
• High (0.7 - 1.2): More diverse and creative responses, useful for brainstorming.
• Example:
• Temperature = 0.1 → “Explain Newton’s laws in a precise manner.”
• Temperature = 1.0 → “Imagine a futuristic world where Newton’s laws don’t exist.”
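Under the hood, temperature divides the model’s raw scores (logits) before they are turned into probabilities. A minimal, self-contained sketch of that math in Python (toy logits, not a real model):

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature, then softmax into probabilities.
    Low temperature sharpens the distribution (near-deterministic);
    high temperature flattens it (more random)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                  # made-up scores for three words
cold = apply_temperature(logits, 0.1)     # top word takes nearly all the mass
hot = apply_temperature(logits, 1.2)      # probability spreads out
```

Notice the same logits produce a near-certain pick at 0.1 and a much more even spread at 1.2 — that is all “creativity” means at the sampling level.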

2. Top-k Sampling (Restricts Word Choices)
• Range: 1 – 100
• Limits the number of words the AI can pick from at each step.
• Low k (e.g., k=10): More predictable and structured responses.
• High k (e.g., k=100): More diverse and creative responses.
• Example:
• Top-k = 10 → “Only choose from the 10 most likely next words.”
• Top-k = 100 → “Choose from the top 100 possible next words.”
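The top-k step itself is simple to sketch: keep the k most likely candidates and renormalize their probabilities (toy example, not tied to any particular API):

```python
def top_k_filter(word_probs, k):
    """Keep only the k most likely words and renormalize so the
    surviving probabilities sum to 1."""
    top = sorted(word_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {word: p / total for word, p in top}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
filtered = top_k_filter(probs, 2)   # only "the" and "a" survive
```

Note that many hosted APIs (including OpenAI’s) do not expose top-k directly; it is more common in local-model runtimes, so check your provider’s documentation.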

3. Top-p (Nucleus Sampling)
• Range: 0.0 – 1.0
• Instead of picking from a fixed number (like top-k), it picks words based on cumulative probability.
• Low p (e.g., 0.1): More focused, deterministic outputs.
• High p (e.g., 0.9): More diverse and unexpected outputs.
• Example:
• Top-p = 0.2 → “Only consider words that account for 20% probability mass.”
• Top-p = 0.9 → “Consider words covering 90% probability mass.”
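Nucleus sampling can be sketched the same way: walk down the ranked words until their cumulative probability reaches p, then renormalize over that “nucleus”:

```python
def top_p_filter(word_probs, p):
    """Keep the smallest set of top-ranked words whose cumulative
    probability reaches p, then renormalize (nucleus sampling)."""
    ranked = sorted(word_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for word, prob in ranked:
        kept.append((word, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(pr for _, pr in kept)
    return {word: pr / total for word, pr in kept}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
nucleus = top_p_filter(probs, 0.9)   # "the" + "a" + "cat" reach 95% >= 90%
```

Unlike top-k, the number of surviving words adapts to the shape of the distribution: a confident model keeps few words, an uncertain one keeps many.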

4. Max Tokens (Controls Response Length)
• Range: 1 – model limit (e.g., 4096 tokens for GPT-4-turbo)
• Defines the maximum number of words/tokens the AI can generate in one response.
• Example:
• Max tokens = 50 → “Give a very short answer.”
• Max tokens = 500 → “Provide a detailed explanation.”
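In practice these knobs are set as fields on an API request. A hedged sketch of a request payload in the style of the OpenAI Chat Completions API (field names match that API, but verify against your provider’s current documentation):

```python
# Illustrative request body; "gpt-4-turbo" is just an example model name.
request = {
    "model": "gpt-4-turbo",
    "messages": [
        {"role": "user", "content": "Summarize Newton's laws."},
    ],
    "max_tokens": 50,      # hard cap on generated tokens
    "temperature": 0.1,    # keep the summary factual and stable
}
```

Note that max tokens counts tokens, not words — a token is roughly three-quarters of an English word — so 50 tokens is only a sentence or two.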

5. Frequency Penalty (Prevents Repetition)
• Range: -2.0 – 2.0
• Affects how often words are repeated.
• High value (e.g., 1.5): Reduces repetition.
• Low value (e.g., -1.0): Allows more repetition.
• Example:
• Frequency penalty = 0.0 → “May repeat certain words.”
• Frequency penalty = 2.0 → “Avoids repeating words too often.”

6. Presence Penalty (Encourages Novelty)
• Range: -2.0 – 2.0
• Controls how likely the AI is to introduce new concepts.
• High value (e.g., 1.5): Forces AI to generate new ideas.
• Low value (e.g., -1.0): Allows repetition of existing topics.
• Example:
• Presence penalty = 0.0 → “Might stick to safe, common topics.”
• Presence penalty = 2.0 → “Encourages AI to introduce new words.”
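Both penalties work by docking the score of words that have already appeared, roughly per the formula in OpenAI’s documentation (simplified here): frequency penalty scales with how many times a word was used, while presence penalty is a flat hit for any word seen at least once.

```python
def penalize(logits, counts, frequency_penalty=0.0, presence_penalty=0.0):
    """Lower each word's logit based on how often it already appeared
    in the output so far."""
    return {
        word: logit
        - frequency_penalty * counts.get(word, 0)
        - presence_penalty * (1 if counts.get(word, 0) > 0 else 0)
        for word, logit in logits.items()
    }

logits = {"cat": 2.0, "dog": 2.0}
counts = {"cat": 3}                 # "cat" already used three times
adjusted = penalize(logits, counts, frequency_penalty=0.5, presence_penalty=1.0)
# "cat" drops to 2.0 - 0.5*3 - 1.0 = -0.5; "dog" is untouched at 2.0
```

Negative penalty values push the other way, making repetition more likely — occasionally useful for chants, refrains, or strict templates.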

7. Stop Sequences (Controls Where AI Stops)
• Example:
• "stop": ["###", "END"] → Stops generating when encountering "###" or "END."
• Useful for:
• Ensuring structured responses (e.g., stopping at section breaks).
• Avoiding unnecessary output (e.g., stopping after a complete response).
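The behavior is easy to picture as a post-processing step: cut the output at the earliest stop sequence that appears. A minimal sketch:

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the earliest stop sequence, if any appears."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

out = truncate_at_stop("Answer: 42\n###\nextra chatter", ["###", "END"])
# everything from "###" onward is discarded
```

Real APIs apply this inside generation (the model stops producing tokens), but the effect on the returned text is the same.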

8. Context Window (Memory Length of AI)
• Range: Varies by model (e.g., GPT-4-turbo = 128k tokens)
• Determines how much past conversation AI can “remember.”
• Example:
• Short context window (4K tokens) → “Forgets earlier parts of long conversations.”
• Long context window (128K tokens) → “Can track large discussions or research papers.”
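When a conversation outgrows the window, something has to be dropped — typically the oldest turns. A toy sketch of that trimming, using word count as a stand-in for real tokenization (production code would use a tokenizer such as tiktoken):

```python
def trim_to_window(messages, max_tokens,
                   count_tokens=lambda m: len(m["content"].split())):
    """Drop the oldest messages until the conversation fits the
    context window. Word count approximates token count here."""
    trimmed = list(messages)
    while trimmed and sum(count_tokens(m) for m in trimmed) > max_tokens:
        trimmed.pop(0)      # forget the oldest turn first
    return trimmed

history = [
    {"role": "user", "content": "one two three four"},   # 4 "tokens"
    {"role": "assistant", "content": "five six"},        # 2
    {"role": "user", "content": "seven eight nine"},     # 3
]
kept = trim_to_window(history, max_tokens=6)   # oldest turn is dropped
```

This is why long chats seem to “forget” their beginnings: the early turns literally fall out of the window.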

9. System Instructions (Customizing AI’s Behavior)
• Example:
• “You are an expert in astrophysics. Only provide research-backed answers.”
• Use cases:
• Define tone (formal/informal).
• Restrict AI’s knowledge scope.
• Enforce bias mitigation (e.g., “Give multiple perspectives.”).
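In API terms, the system instruction is just the first message in the conversation, carrying a special role. A hedged sketch in the OpenAI-style messages format (other APIs use a similar role convention or a dedicated system field):

```python
# The "system" message steers every subsequent reply in the session.
messages = [
    {"role": "system",
     "content": "You are an expert in astrophysics. "
                "Only provide research-backed answers."},
    {"role": "user", "content": "What powers a quasar?"},
]
```

In the chat interface you get the same effect by stating the role up front, exactly as the game-show prompt below does.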

10. Formatting Constraints (Structured Output)
• Example:
• “Output the response in JSON format.”
• “Format your answer in Markdown.”
• “Use step-by-step reasoning.”
• Useful for:
• Data extraction
• Code generation
• Scientific analysis

11. Fine-tuning & Custom Models
• Adjust AI for specific tasks by training on custom datasets.
• Example:
• Fine-tune on medical texts → More accurate for healthcare.
• Fine-tune on legal cases → Better for legal research.
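Fine-tuning data is usually uploaded as JSONL: one complete example conversation per line. A hedged sketch in the OpenAI-style chat format (the exact schema varies by provider, and the medical content here is purely illustrative):

```python
import json

# One training record: a full example conversation the model should imitate.
example = {
    "messages": [
        {"role": "system", "content": "You are a medical coding assistant."},
        {"role": "user", "content": "Code for an uncomplicated appendectomy?"},
        {"role": "assistant", "content": "CPT 44950."},
    ]
}
line = json.dumps(example)      # one record serializes to one line of JSONL
restored = json.loads(line)     # and round-trips cleanly
```

A training file is simply thousands of such lines; quality and consistency of the examples matter far more than raw volume.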

12. Retrieval-Augmented Generation (RAG)
• AI can access external databases, documents, or live sources.
• Example:
• “Look up recent climate data from NASA and summarize.”
• Useful for:
• Live research (via web search or APIs).
• Citing sources.
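The RAG flow — retrieve relevant documents, then stuff them into the prompt — can be sketched with a toy keyword-overlap ranker (real systems use embedding similarity, but the pipeline shape is the same):

```python
def retrieve(query, documents, top_n=1):
    """Toy retrieval: rank documents by how many query words they share."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_n]

docs = [
    "Global mean temperature rose about 1.1 C since pre-industrial times.",
    "Quasars are powered by supermassive black holes.",
]
hits = retrieve("recent climate temperature data", docs)
# Ground the model's answer in the retrieved text:
prompt = f"Using only this source:\n{hits[0]}\n\nSummarize recent climate data."
```

Grounding the prompt in retrieved text is what lets the model cite sources and work with data newer than its training cutoff.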

You can further tune, add to, exclude from, and reformat the output to get closer to what you want, and move beyond using the AI interface as just a chat box. If you know how to program it, you can get some real work, research, and fun done too.

Try this prompt for some fun and education -

“Forget everything in this conversation and let’s start a new one. Your role is a quiz game show host. Your role is also that of the quiz game show research staff. The quiz game show is only about hard science related topics and no popular culture will be asked. Let’s play a game together like Jeopardy, with scoring, category selection, and simulated opponents. I will be player “1” and there will be two other simulated players. You will act as the host, be knowledgeable about the questions being asked, and have a fun sense of humor as you interact with me and the simulated contestants. Draw the game board in ASCII art or unique generated graphics, update the game board after each question is asked, and keep score after every question and every round of play. Start putting this game show together and let’s play when you are ready.”

I’m playing this game now that I created from the above instructions.


13 posted on 02/23/2025 11:54:01 PM PST by lefty-lie-spy (Stay Metal)
[ Post Reply | Private Reply | To 2 | View Replies ]


To: lefty-lie-spy

Wow, I had NO idea you could tune the systems like that! That is very helpful.

I saw a graphical game last night on X built in Grok. The user provided some graphics and sound files to use for the tokens in the game. It was astonishing what could be created without any programming. It sounds similar to your game show.


25 posted on 02/24/2025 5:16:46 AM PST by ProtectOurFreedom (Democrats who say ‘no one is above the law’ won’t mind going to prison for the money they stole.)
[ Post Reply | Private Reply | To 13 | View Replies ]



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson