Frequently Asked Questions
What are tokens? The units of what the AI sees
Tokens are the units of information that our AI models see when they look at text. Before the text is fed to the model, it's chopped up into discrete units, slightly shorter than words. For context, a 100-page book is about 30,000 tokens.
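The page-to-token figure above can be turned into a quick estimator. This is a rough rule-of-thumb sketch based only on the "100 pages ≈ 30,000 tokens" ratio stated here, not an exact tokenizer:

```python
# Rough arithmetic from the figure above: a 100-page book is about
# 30,000 tokens, i.e. ~300 tokens per page. This is an estimate only;
# real token counts depend on the tokenizer and the text itself.
TOKENS_PER_PAGE = 30_000 / 100  # ~300 tokens per page

def estimate_tokens(pages: float) -> int:
    """Estimate the token count of a text from its page count."""
    return round(pages * TOKENS_PER_PAGE)

print(estimate_tokens(100))  # 30000
```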
What is a context window? How much text the AI can see
The “context window” determines how much of the story or role-play the AI can consider when generating its next response. It's like the AI's short-term memory.
We always give the AI the scenario definition (plot, characters, etc.) first. Then, the remaining tokens in the context window are filled with the most recent parts of the role-play or story. In role-play, you can even mark certain interactions as “sticky,” so they'll always be included, even if they're older.
Keep in mind that your stories and role-plays can be longer than the context window. The AI just won't be able to see all of it at once when generating new responses.
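The fill order described above (scenario first, then sticky interactions, then the most recent history that fits) can be sketched in a few lines. This is an illustrative simplification, not DreamGen's internal implementation; token counting is stood in for by word counting:

```python
def build_context(scenario: str, history: list[dict], window: int) -> list[str]:
    """Sketch of the fill order: the scenario definition always goes
    first, sticky interactions are always kept, and the remaining
    budget is filled with the most recent interactions that fit."""
    def cost(text: str) -> int:
        return len(text.split())  # stand-in for a real tokenizer

    budget = window - cost(scenario)
    # Sticky interactions are always included, even if they're old.
    sticky = [m for m in history if m.get("sticky")]
    for m in sticky:
        budget -= cost(m["text"])
    # Fill what's left with the most recent non-sticky interactions.
    recent = []
    for m in reversed([m for m in history if not m.get("sticky")]):
        c = cost(m["text"])
        if c > budget:
            break  # older interactions no longer fit the window
        recent.append(m)
        budget -= c
    # Emit everything that made the cut, in original story order.
    chosen = {id(m) for m in sticky + recent}
    return [scenario] + [m["text"] for m in history if id(m) in chosen]
```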
What are the different DreamGen models?
We've got three AI models ready to bring your stories to life:
- 🐣 opus-v1-sm (small): Swift, nimble, and perfect for simpler adventures. Plus, it has the longest context window on our paid plans.
- 🐓 opus-v1-lg (large): The balanced choice. Not too big, not too small, just right for most adventures.
- 🦚 opus-v1-xl (extra large): The heavyweight champ. Ideal for intricate, epic narratives.
Each model has its own unique “personality” and writing style, so make sure to experiment with all of them to find your favorite. And don't underestimate the small model - it's still got plenty of storytelling chops!
To pick a model, go to the “Model” settings. On the desktop the settings are in the sidebar, while on mobile you can access them via the “gearbox” button in the corner.
What are credits? Your AI adventure fuel
To generate responses, the models use “credits.” Different models use different amounts of credits per input & output token.
If you run out of credits, don't fret! Your monthly credits get reset at the start of each calendar month, and our credit fairy 🧚‍♀️ sprinkles extra credits into your account every day, so you can keep the adventure going all month long! This applies to all plans, including the free one.
Want to keep an eye on your credit usage? Check out your usage page to see how many credits you've used and how many you have left.
Also check out our credit calculator on the pricing page to estimate how many credits you'll need.
Example:
Assume you are writing a story and that on average, each "Continue" will require ~3000 input tokens and produce ~100 output tokens (the actual values will depend on the length of the story, etc.).
This will cost 0.11 credits with the opus-v1-sm model (as of 2024/04/01).
Therefore, if you have 100 credits, you can generate (100 / 0.11) * 100 ~= 90909 output tokens, which is roughly 300 pages of an average book.
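The arithmetic above, worked through step by step. The 0.11-credits-per-"Continue" rate for opus-v1-sm is taken from the example (as of 2024/04/01) and may change:

```python
# Worked version of the example above. Assumes ~3000 input tokens and
# ~100 output tokens per "Continue", at 0.11 credits per "Continue".
credits = 100
cost_per_continue = 0.11    # credits (opus-v1-sm, as of 2024/04/01)
output_per_continue = 100   # output tokens per "Continue"

continues = credits / cost_per_continue          # ~909 generations
output_tokens = continues * output_per_continue  # ~90,909 tokens
pages = output_tokens / 300                      # ~300 tokens per page
print(round(output_tokens), round(pages))  # 90909 303
```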
Is DreamGen Free?
Yes! DreamGen offers a generous free plan that lets you try out all of the features and models. The free credits reset at the start of each calendar month, and you also get extra credits every day if you run out.
How to make the character responses longer?
The response length is mostly influenced by the conversation history. To that end, you should make sure that the intro of your scenario has good examples of long character messages (here is an example scenario that consistently produces long messages).
There are a few other "knobs" you can use to make the responses longer:
- Slightly increase temperature: this tends to make responses longer.
- Set "Minimum message tokens" in the model settings: this will force messages ot be longer, but might have side effects (such as the model speaking out of turn).
- Check "Disable “text” interactions" in the model settings: this will turn off the "narrator" and so the narrative will get pushed to the character messages to some degree.
- Use instructions: try for example "The next message is from <name of character> and must be at least 100 words long".
How to prevent the model from speaking for your character?
Make sure that the conversation history does not include messages where this happens -- the model might otherwise repeat the pattern.
This often happens as a result of the "narrator" describing what's happening, sometimes including a description of an action or reaction of your own character. You can turn off the narrator by checking "Disable “text” interactions" in the model settings.
How to prevent the model from repeating itself?
Sometimes the model may get stuck in a loop where it repeats the same sentence or paragraph (example below).
This tends to happen when the model generates a lot of text on its own, without much, if any, other input. Here are a few strategies to get the model unstuck:
- Remove the repetitive text. The first step is to remove the repetitive text from the story or role-play.
- Penalties. Experiment with frequency penalty 0.1-0.15 and presence penalty 0.1-0.15. Frequency and presence penalty do not take context into account, so you may also want to use repetition penalty of around 1.05-1.1. Keep in mind that higher isn't always better, and high values can lower the quality of output. For that reason it's recommended to lower the values once the issue is overcome.
- Provide instructions. Tell the model what should happen next using instructions.
- Rewrite the text. Try replacing the repetitive segment with your own few sentences.
- Switch up the model. Try temporarily switching to a different model. Alternatively try to increase the temperature of the current model, but don't go above 1.5-1.75.
- Lower "Max total tokens". You can find this option at the bottom of the Model settings. Sometimes the reason for repetition is that the model is close to its natural token limit, and lowering this value can help. You can experiment with values from 3000 to 7000 depending on the model.
- Unselect "Do not generate instructions". You can find this option in the Model settings. This will allow the model to generate instructions for itself, which can help it work through the scene and potentially avoid the issue.
The bartender then asked the woman if she wanted it made with a dash of citrus. The woman replied that she wanted it made with a dash of citrus, and the bartender nodded again.
The bartender then asked the woman if she wanted it made with a dash of bitters. The woman replied that she wanted it made with a dash of bitters, and the bartender nodded once more.
The bartender then asked the woman if she wanted it made with a dash of water. The woman replied that she wanted it made with a dash of water, and the bartender nodded once more.
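For intuition on how the three penalties from the list above differ, here is a simplified sketch of how they typically act on a token's raw score (logit). This is a generic illustration of the standard techniques, not DreamGen's internal implementation:

```python
def apply_penalties(logits, counts, freq_pen=0.0, pres_pen=0.0, rep_pen=1.0):
    """Sketch of how frequency, presence, and repetition penalties
    typically modify token logits. `logits` maps token id -> raw score;
    `counts` maps token id -> occurrences so far in the generated text."""
    out = dict(logits)
    for tok, n in counts.items():
        if n == 0 or tok not in out:
            continue
        # Frequency penalty scales with how often the token has appeared.
        out[tok] -= freq_pen * n
        # Presence penalty is a flat hit for having appeared at all.
        out[tok] -= pres_pen
        # Repetition penalty divides positive logits (and multiplies
        # negative ones), so values above 1.0 discourage repeats.
        out[tok] = out[tok] / rep_pen if out[tok] > 0 else out[tok] * rep_pen
    return out
```

This is why higher isn't always better: large penalties suppress tokens the model legitimately needs (names, pronouns), which degrades output quality.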