DreamGen API

We offer an API for our Opus V1+ models. The API is available on some of our premium subscription plans. Keep in mind that the DreamGen API is strictly for personal use, and we actively prohibit any form of sharing.

You can interact with the API through HTTP requests from any language. Some APIs are also compatible with OpenAI's client libraries.

Authentication

The DreamGen API uses API keys for authentication. You can create and remove API keys in your account page.

To authenticate the API requests, you submit your API key through the HTTP Authorization header:

Authorization: Bearer YOUR_DREAMGEN_API_KEY

Keep your API keys secret. Do not share them with other people and do not expose them publicly.
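As a minimal sketch, this is how the Authorization header could be assembled in Python (the helper name auth_headers is illustrative, not part of the API):

```python
def auth_headers(api_key):
    """Build the HTTP headers expected by the DreamGen API."""
    return {"Authorization": f"Bearer {api_key}"}

headers = auth_headers("YOUR_DREAMGEN_API_KEY")
# Pass `headers` to any HTTP client, e.g. with requests:
# requests.post("https://dreamgen.com/api/v1/model/completion",
#               headers=headers, json=payload)
```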

Chat API

This API is currently undocumented.

Text API

POST https://dreamgen.com/api/v1/model/completion

Request

type CompletionRequest = {
  modelId: 'opus-v1-sm' | 'opus-v1-xl';
  input: string;
  samplingParams: SamplingParams;
};
 
type SamplingParams = {
  kind: 'basic';
  presencePenalty?: number;
  frequencyPenalty?: number;
  repetitionPenalty?: number;
  temperature?: number;
  minP?: number;
  topP?: number;
  topK?: number;
 
  // Maximum number of output tokens.
  maxTokens?: number;
 
  // Maximum number of input + output tokens.
  maxTotalTokens?: number;
 
  // Array of strings upon which the generation will stop.
  // The stop sequence is included in the output.
  stopSequences?: string[];
 
  // If set to true, the model will not generate the EOS token.
  // It won't stop generating until either:
  // - the `maxTokens` limit is reached; or
  // - a `stopSequences` entry is encountered; or
  // - another stop condition is met.
  ignoreEos?: boolean;
 
  // Opus V1+ only.
  // Set of roles that the model is allowed to generate, e.g.
  // ["text", "user"].
  allowedRoles?: string[];
 
  // Opus V1+ only.
  // If set to true, the model will not generate
  // the EOM token (`<|im_end|>`).
  disallowMessageEnd?: boolean;
 
  // Opus V1+ only.
  // If set, the model will not generate
  // the EOM token (`<|im_end|>`) until the message
  // has at least this many content tokens.
  minimumMessageContentTokens?: number;
};

Response

The response is streamed in JSON-lines format, each line containing a JSON object of the type CompletionResponse as defined below.

type CompletionResponse = CompletionOkResponse | CompletionErrResponse;
 
type CompletionOkResponse = {
  success: true,
 
  // The generated output.
  // Either the full output or a delta, depending on the request params.
  output: string,
 
  // The reason why the generation stopped.
  // - length: the maxTokens limit was reached
  // - stop: one of the stopSequences was encountered
  finishReason?: 'length' | 'stop' | undefined,
 
  // The number of tokens and credits used.
  usage: CompletionUsage
};
 
type CompletionErrResponse = {
  message: string,
  success: false,
  status?: number | undefined
};
 
type CompletionUsage = {
  inputTokens: number,
  outputTokens: number,
  completionCredits: number
};

OpenAI compatible APIs

We provide several OpenAI compatible APIs, namely:

  • Chat API: POST https://dreamgen.com/api/openai/v1/chat/completions
  • Text API: POST https://dreamgen.com/api/openai/v1/completions
  • Models API: GET https://dreamgen.com/api/openai/v1/models

OpenAI Chat Completion

NOTE: This is a stateless API, message contents are never logged or stored.

POST https://dreamgen.com/api/openai/v1/chat/completions

This is a version of the native Chat API that is compatible with OpenAI's streaming chat completion specification.

See this Python Colab for an example of using it with the official OpenAI Python client library.

Role specification

This section explains how to specify the text role that's required for story-writing and role-playing when using the OpenAI Chat API.

We offer a few ways, in order to accommodate different OpenAI clients with various limitations.

So what's the problem? A traditional OpenAI chat request may look like this:

{
  "model": "opus-v1-sm",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello!"
    },
    {
      "role": "assistant",
      "content": "Hi there!"
    },
    {
      "role": "user",
      "content": "What are you up to?"
    }
  ]
}

Most OpenAI client libraries will not let you set the role field to anything other than system, assistant or user, but they will let you set a name field.

The DreamGen API supports specifying the name field for messages, for example:

{
  "model": "opus-v1-sm",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "name": "Ron",
      "content": "Hello!"
    },
    {
      "role": "assistant",
      "name": "Hermione",
      "content": "Hi there!"
    },
    {
      "role": "user",
      "name": "Ron",
      "content": "What are you up to?"
    }
  ]
}

Any turn whose name is neither null nor undefined will be interpreted as a text (writing) turn with the respective character name. When the name is empty, i.e. "name": "", the turn will be interpreted as a narrator text turn.

You can also specify the role and name of the turn that is being generated by the model in response to your request: assistant or text (with or without a name).

To that end, you can provide a RoleConfig:

type RoleConfig = {
  assistant: {
    role: 'assistant' | 'text';
    name?: string;
    open?: boolean;
  };
  user?: {
    role: 'user' | 'text';
    name?: string;
  };
};

The assistant and user config determines how the OpenAI assistant and user roles are converted to DreamGen roles, and the assistant role also determines the role of the model's response.

You can provide RoleConfig in 2 ways:

Role config inside the model field

The model field of the request has this shape:

  • "opus-v1-{size}/{ROLE_CONFIG}"

Where {size} is either sm or xl and {ROLE_CONFIG} is a JSON object following the RoleConfig schema defined above.

You can also use the following shorthands:

  • opus-v1-{size}/text: stands for {"assistant": {"role": "text", "open": true}}
  • opus-v1-{size}/assistant: stands for {"assistant": {"role": "assistant", "open": false}}
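A small sketch of composing the model field in Python (the helper name is illustrative; whether the embedded JSON additionally needs URL-encoding is not stated here, so check against your client):

```python
import json

def model_with_role_config(size, role_config):
    """Compose the model field "opus-v1-{size}/{ROLE_CONFIG}".

    role_config may be the shorthand "text" or "assistant",
    or a RoleConfig dict serialized to JSON."""
    if isinstance(role_config, dict):
        role_config = json.dumps(role_config, separators=(",", ":"))
    return f"opus-v1-{size}/{role_config}"

print(model_with_role_config("sm", "text"))
# opus-v1-sm/text
print(model_with_role_config(
    "xl", {"assistant": {"role": "text", "name": "Hermione"}}))
# opus-v1-xl/{"assistant":{"role":"text","name":"Hermione"}}
```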

Role config in the request body

You can also provide RoleConfig in the request body using the role_config field.

{
  "model": "opus-v1-sm",
  "role_config": {
    "assistant": {
      "role": "text",
      "name": "Hermione"
    }
  },
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "name": "Ron",
      "content": "Hello!"
    },
    {
      "role": "assistant",
      "name": "Hermione",
      "content": "Hi there!"
    },
    {
      "role": "user",
      "name": "Ron",
      "content": "What are you up to?"
    }
  ]
}

When using OpenAI's Python SDK, you can pass role_config using the extra_body param, see below.

Additional request parameters:

These are additional parameters supported by DreamGen's OpenAI API:

  • min_p
  • top_k
  • repetition_penalty
  • dry
  • role_config (see above)

When using OpenAI's Python SDK, you can pass these using the extra_body param:

completion = client.chat.completions.create(
  model="opus-v1-xl/text",
  stream=True,
  max_tokens=200,
  messages=[...],
  temperature=0.5,
  frequency_penalty=0.1,
  presence_penalty=0.1,
  extra_body={
    "min_p": 0.05,
    "repetition_penalty": 1.02,
    "dry": {
      "multiplier": 0.8,
      "base": 1.75,
      "allowed_length": 2
    },
    "role_config": {
      "assistant": {
        "role": "text",
        "name": "Hermione"
      }
    }
  }
)

The dry param represents the setting for the DRY sampler, and should follow this schema:

type DrySampler = {
  multiplier: number;
  base: number;
  allowedLength: number;
};

Unsupported (ignored) request parameters:

  • logit_bias
  • logprobs
  • top_logprobs
  • n
  • response_format
  • seed
  • stream
  • tools
  • tool_choice
  • user
  • function_call
  • functions

OpenAI Text Completion

NOTE: This is a stateless API, prompt contents are never logged or stored.

POST https://dreamgen.com/api/openai/v1/completions

This is a version of the native Text API that is compatible with OpenAI's streaming text completion specification.

See this Python Colab for an example of using it with the official OpenAI Python client library.

The text prompt should follow the ChatML+Text prompt format as described in the Opus V1 model guide.
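The exact role header syntax (e.g. how character names are attached to text turns) is defined in the Opus V1 model guide; purely as an illustration, the sketch below assumes plain ChatML-style delimiters around the roles used in this document, with `<|im_end|>` as the EOM token mentioned above:

```python
def chatml_prompt(messages, next_role="text"):
    """Assemble a ChatML-style prompt from (role, content) pairs and
    open a new turn for `next_role` so the model continues from there.

    Illustrative only: consult the Opus V1 model guide for the exact
    role header syntax."""
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>"
             for role, content in messages]
    parts.append(f"<|im_start|>{next_role}\n")
    return "\n".join(parts)

prompt = chatml_prompt([
    ("system", "You are an expert storyteller."),
    ("text", "Hermione waved at Ron."),
])
```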

The model can be either:

  • opus-v1-{size}
  • opus-v1-{size}/text -- the model is only allowed to use the text role
  • opus-v1-{size}/assistant -- the model is only allowed to use the assistant role

Where {size} is either sm or xl.

Additional request parameters:

These are additional parameters supported by DreamGen's OpenAI API:

  • min_p
  • top_k
  • repetition_penalty

When using OpenAI's Python SDK, you can pass these using the extra_body param:

completion = client.completions.create(
  model="opus-v1-sm/text",
  stream=True,
  max_tokens=200,
  prompt=prompt,
  temperature=0.5,
  frequency_penalty=0.1,
  presence_penalty=0.1,
  extra_body={
    "min_p": 0.05,
    "repetition_penalty": 1.02,
    "dry": {
      "multiplier": 0.8,
      "base": 1.75,
      "allowed_length": 2
    },
    "role_config": {
      "assistant": {
        "role": "text",
        "name": "Hermione"
      }
    }
  }
)

Unsupported (ignored) request parameters:

  • best_of
  • echo
  • logit_bias
  • logprobs
  • n
  • seed
  • stream
  • suffix
  • user

OpenAI List Models

GET https://dreamgen.com/api/openai/v1/models/list

This API is compatible with OpenAI's List Models specification.