DreamGen API

We offer an API for our Opus V1+ models. The API is available on select premium subscription plans. Keep in mind that the DreamGen API is strictly for personal use; any form of sharing is prohibited.

You can interact with the API through HTTP requests from any language. Some APIs are also compatible with OpenAI's client libraries.

Authentication

The DreamGen API uses API keys for authentication. You can create and remove API keys on your account page.

To authenticate API requests, submit your API key in the HTTP Authorization header:

Authorization: Bearer YOUR_DREAMGEN_API_KEY

Keep your API keys secret. Do not share them with others or expose them publicly.
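For example, you might build the header in Python like this (the `DREAMGEN_API_KEY` environment variable name is just a convention for this sketch; store the key wherever your environment keeps secrets):

```python
import os

# Read the key from the environment rather than hard-coding it.
api_key = os.environ.get("DREAMGEN_API_KEY", "YOUR_DREAMGEN_API_KEY")

# Every request carries the key in the Authorization header:
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```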

Chat API

This API is currently undocumented.

Text API

POST https://dreamgen.com/api/v1/model/completion

Request

type CompletionRequest = {
  modelId: 'opus-v1-sm' | 'opus-v1-lg' | 'opus-v1-xl';
  input: string;
  samplingParams: SamplingParams;
};
 
type SamplingParams = {
  kind: 'basic';
  presencePenalty?: number;
  frequencyPenalty?: number;
  repetitionPenalty?: number;
  temperature?: number;
  minP?: number;
  topP?: number;
  topK?: number;
 
  // Maximum number of output tokens.
  maxTokens?: number;
 
  // Maximum number of input + output tokens.
  maxTotalTokens?: number;
 
  // Array of strings upon which the generation will stop.
  // The stop sequence is included in the output.
  stopSequences?: string[];
 
  // If set to true, the model will not generate the EOS token.
  // It won't stop generating until either:
  // - `maxTokens` limit is reached; or
  // - a `stopSequences` entry is encountered; or
  // - another stop condition is met
  ignoreEos?: boolean;
 
  // Opus V1+ only.
  // Set of roles that the model is allowed to generate, e.g.
  // ["text", "user"].
  allowedRoles?: string[];
 
  // Opus V1+ only.
  // If set to true, the model will not generate
  // the EOM token (`<|im_end|>`).
  disallowMessageEnd?: boolean;
 
  // Opus V1+ only.
  // If set to true, the model will not generate
  // the EOM token (`<|im_end|>`) until the message
  // has at least `minimumMessageContentTokens` tokens.
  minimumMessageContentTokens?: number;
};
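As an illustration, a request body matching the schema above can be assembled as plain JSON (the concrete field values here are arbitrary examples):

```python
import json

# A CompletionRequest body mirroring the schema above.
request_body = {
    "modelId": "opus-v1-sm",
    "input": "Once upon a time",
    "samplingParams": {
        "kind": "basic",
        "temperature": 0.8,
        "minP": 0.05,
        "maxTokens": 200,
        "stopSequences": ["<|im_end|>"],
    },
}

payload = json.dumps(request_body)
# POST `payload` (with the Authorization header) to
# https://dreamgen.com/api/v1/model/completion
```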

Response

The response is streamed in JSON-lines format, each line containing a JSON object of the type CompletionResponse as defined below.

type CompletionResponse = CompletionOkResponse | CompletionErrResponse;
 
type CompletionOkResponse = {
  success: true,
 
  // The generated output.
  // Either the full output or a delta, depending on the request params.
  output: string,
 
  // The reason why the generation stopped.
  // - length: maxTokens reached
  // - stop: stopSequences reached
  finishReason?: 'length' | 'stop' | undefined,
 
  // The number of tokens and credits used.
  usage: CompletionUsage
};
 
type CompletionErrResponse = {
  message: string,
  success: false,
  status?: number | undefined
};
 
type CompletionUsage = {
  inputTokens: number,
  outputTokens: number,
  completionCredits: number
};
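The stream can be consumed line by line. This sketch assumes the output arrives as deltas (per the `output` field note above) and stands in a canned stream for a real HTTP response body:

```python
import json

def collect_completion(lines):
    """Accumulate a completion from a JSON-lines stream of
    CompletionResponse objects (assuming delta output)."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if not event.get("success"):
            raise RuntimeError(event.get("message", "request failed"))
        parts.append(event["output"])
        if event.get("finishReason"):  # 'length' or 'stop'
            break
    return "".join(parts)

# Canned stream standing in for a real response body:
sample = [
    '{"success": true, "output": "Hello", "usage": {"inputTokens": 4, "outputTokens": 1, "completionCredits": 1}}',
    '{"success": true, "output": ", world!", "finishReason": "stop", "usage": {"inputTokens": 4, "outputTokens": 3, "completionCredits": 1}}',
]
```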

OpenAI compatible APIs

We provide several OpenAI compatible APIs, namely:

  • Chat API: POST https://dreamgen.com/api/openai/v1/chat/completions
  • Text API: POST https://dreamgen.com/api/openai/v1/completions
  • Models API: GET https://dreamgen.com/api/openai/v1/models

OpenAI Chat Completion

NOTE: This is a stateless API, message contents are never logged or stored.

POST https://dreamgen.com/api/openai/v1/chat/completions

This is a version of the native Chat API that is compatible with OpenAI's streaming chat completion specification.

See this Python Colab to see how to use it with the official OpenAI Python client library.

Model specification

Many OpenAI clients only support the system, assistant and user roles. To support Opus's text role, which is necessary for story-writing and role-playing, we encode extra information in the request's model field.

Possible model values are:

  • "opus-v1-{size}/{MODEL_SPEC}"

Where {size} is either sm or lg, and {MODEL_SPEC} is either text, assistant, or a JSON object with the following schema:

type ModelSpec = {
  assistant: {
    role: 'assistant' | 'text';
    name?: string;
    open?: boolean;
  };
  user?: {
    role: 'user' | 'text';
    name?: string;
  };
};

The assistant and user specification determines how the OpenAI assistant and user roles are converted to Opus roles.

You can also use the following shorthands:

  • opus-v1-{size}/text: stands for {"assistant": {"role": "text", "open": true}}
  • opus-v1-{size}/assistant: stands for {"assistant": {"role": "assistant", "open": false}}
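For instance, a full ModelSpec can be serialized into the model field like this (`make_model_id` is a hypothetical helper for this sketch, not part of the API):

```python
import json

# Hypothetical helper: encode a ModelSpec into the `model` field.
def make_model_id(size, spec):
    return f"opus-v1-{size}/" + json.dumps(spec, separators=(",", ":"))

# Equivalent to the `text` shorthand above:
model = make_model_id("lg", {"assistant": {"role": "text", "open": True}})
# model == 'opus-v1-lg/{"assistant":{"role":"text","open":true}}'
```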

Additional request parameters:

These are additional parameters supported by DreamGen's OpenAI API:

  • min_p
  • top_k
  • repetition_penalty

When using OpenAI's Python SDK, you can pass these using the extra_body param:

completion = client.chat.completions.create(
  model="opus-v1-lg/text",
  stream=True,
  max_tokens=200,
  messages=[...],
  temperature=0.5,
  frequency_penalty=0.1,
  presence_penalty=0.1,
  extra_body={
    "min_p": 0.05,
    "repetition_penalty": 1.1
  }
)

Unsupported (ignored) request parameters:

  • logit_bias
  • logprobs
  • top_logprobs
  • n
  • response_format
  • seed
  • stream
  • tools
  • tool_choice
  • user
  • function_call
  • functions

OpenAI Text Completion

NOTE: This is a stateless API, prompt contents are never logged or stored.

POST https://dreamgen.com/api/openai/v1/completions

This is a version of the native Text API that is compatible with OpenAI's streaming text completion specification.

See this Python Colab to see how to use it with the official OpenAI Python client library.

The text prompt should follow the ChatML+Text prompt format as described in the Opus V1 model guide.

The model can be either:

  • opus-v1-{size}
  • opus-v1-{size}/text -- the model is only allowed to use the text role
  • opus-v1-{size}/assistant -- the model is only allowed to use the assistant role

Where {size} is either sm, lg or xl.
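As a rough sketch of assembling such a prompt (illustrative only; the Opus V1 model guide is authoritative for the exact format):

```python
# Illustrative ChatML-style prompt assembly. The `<|im_end|>` EOM token
# is documented above; the rest of the layout here is an assumption --
# defer to the Opus V1 model guide for the real format.
def chatml_turn(role, content):
    return f"<|im_start|>{role}\n{content}<|im_end|>\n"

prompt = (
    chatml_turn("system", "You are an expert storyteller.")
    + chatml_turn("text", "The rain hammered against the windows.")
    + "<|im_start|>text\n"  # leave the last turn open for the model
)
```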

Additional request parameters:

These are additional parameters supported by DreamGen's OpenAI API:

  • min_p
  • top_k
  • repetition_penalty

When using OpenAI's Python SDK, you can pass these using the extra_body param:

completion = client.completions.create(
  model="opus-v1-sm/text",
  stream=True,
  max_tokens=200,
  prompt=prompt,
  temperature=0.5,
  frequency_penalty=0.1,
  presence_penalty=0.1,
  extra_body={
    "min_p": 0.05,
    "repetition_penalty": 1.1
  }
)

Unsupported (ignored) request parameters:

  • best_of
  • echo
  • logit_bias
  • logprobs
  • n
  • seed
  • stream
  • suffix
  • user

OpenAI List Models

GET https://dreamgen.com/api/openai/v1/models/list

This API is compatible with OpenAI's List Models specification.