Commit 9b8ca64: Update openai switch kit blog and guide (#1556)

1 parent debd9ae

2 files changed: 38 additions, 40 deletions

pgml-cms/blog/introducing-the-openai-switch-kit-move-from-closed-to-open-source-ai-in-minutes.md

Lines changed: 16 additions & 18 deletions
@@ -41,8 +41,8 @@ The Switch Kit is an open-source AI SDK that provides a drop in replacement for
 {% tabs %}
 {% tab title="JavaScript" %}
 ```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
 const results = client.chat_completions_create(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -62,8 +62,8 @@ console.log(results);

 {% tab title="Python" %}
 ```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
 results = client.chat_completions_create(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -117,17 +117,15 @@ The above is an example using our open-source AI SDK with Meta-Llama-3-8B-Instru

 Notice there is near one to one relation between the parameters and return type of OpenAI’s `chat.completions.create` and our `chat_completion_create`.

-The best part of using open-source AI is the flexibility with models. Unlike OpenAI, we are not restricted to using a few censored models, but have access to almost any model out there.
-
-Here is an example of streaming with the popular Mythalion model, an uncensored MythoMax variant designed for chatting.
+Here is an example of streaming:

 {% tabs %}
 {% tab title="JavaScript" %}
 ```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
 const it = client.chat_completions_create_stream(
-"PygmalionAI/mythalion-13b",
+"meta-llama/Meta-Llama-3-8B-Instruct",
 [
 {
 role: "system",
@@ -149,10 +147,10 @@ while (!result.done) {

 {% tab title="Python" %}
 ```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
 results = client.chat_completions_create_stream(
-"PygmalionAI/mythalion-13b",
+"meta-llama/Meta-Llama-3-8B-Instruct",
 [
 {
 "role": "system",
@@ -184,7 +182,7 @@ for c in results:
 ],
 "created": 1701296792,
 "id": "62a817f5-549b-43e0-8f0c-a7cb204ab897",
-"model": "PygmalionAI/mythalion-13b",
+"model": "meta-llama/Meta-Llama-3-8B-Instruct",
 "object": "chat.completion.chunk",
 "system_fingerprint": "f366d657-75f9-9c33-8e57-1e6be2cf62f3"
 }
@@ -200,7 +198,7 @@ for c in results:
 ],
 "created": 1701296792,
 "id": "62a817f5-549b-43e0-8f0c-a7cb204ab897",
-"model": "PygmalionAI/mythalion-13b",
+"model": "meta-llama/Meta-Llama-3-8B-Instruct",
 "object": "chat.completion.chunk",
 "system_fingerprint": "f366d657-75f9-9c33-8e57-1e6be2cf62f3"
 }
@@ -212,15 +210,15 @@ We have truncated the output to two items

 !!!

-We also have asynchronous versions of the create and `create_stream` functions relatively named `create_async` and `create_stream_async`. Checkout [our documentation](https://postgresml.org/docs/introduction/machine-learning/sdks/opensourceai) for a complete guide of the open-source AI SDK including guides on how to specify custom models.
+We also have asynchronous versions of the create and `create_stream` functions relatively named `create_async` and `create_stream_async`. Checkout [our documentation](https://postgresml.org/docs/guides/opensourceai) for a complete guide of the open-source AI SDK including guides on how to specify custom models.

-PostgresML is free and open source. To run the above examples yourself[ create an account](https://postgresml.org/signup), install pgml, and get running!
+PostgresML is free and open source. To run the above examples yourself [create an account](https://postgresml.org/signup), install korvus, and get running!

 ### Why use open-source models on PostgresML?

 PostgresML is a complete MLOps platform in a simple PostgreSQL extension. It’s the tool our team wished they’d had scaling MLOps at Instacart during its peak years of growth. You can host your database with us or locally. However you want to engage, we know from experience that it’s better to bring your ML workload to the database rather than bringing the data to the codebase.

-Fundamentally, PostgresML enables PostgreSQL to act as a GPU-powered AI application database — where you can both save models and index data. That eliminates the need for the myriad of separate services you have to tie together for your ML workflow. Pgml + pgvector create a complete ML platform (vector DB, model store, inference service, open-source LLMs) all within open-source extensions for PostgreSQL. That takes a lot of the complexity out of your infra, and it's ultimately faster for your users.
+Fundamentally, PostgresML enables PostgreSQL to act as a GPU-powered AI application database — where you can both save models and index data. That eliminates the need for the myriad of separate services you have to tie together for your ML workflow. pgml + pgvector create a complete ML platform (vector DB, model store, inference service, open-source LLMs) all within open-source extensions for PostgreSQL. That takes a lot of the complexity out of your infra, and it's ultimately faster for your users.

 We're bullish on the power of in-database and open-source ML/AI, and we’re excited for you to see the power of this approach yourself. You can try it out in our serverless database for $0, with usage based billing starting at just five cents an hour per GB GPU cache. You can even mess with it for free on our homepage.
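The blog changes above are essentially a package rename (`pgml` to `korvus`) plus a model swap to `meta-llama/Meta-Llama-3-8B-Instruct`. As a sanity check for a migration like this, the streamed chunks can be validated against the shape shown in the post's sample output. The sketch below is not part of the korvus SDK: `collect_chunks` is a hypothetical helper, and the sample dicts simply mirror the truncated output in the diff.

```python
# Minimal sketch (not the korvus SDK): validate streamed chunks against
# the shape shown in the blog post's sample output. `collect_chunks` is
# a hypothetical helper for illustration only.

def collect_chunks(stream, expected_model):
    """Keep only well-formed chat.completion.chunk items for one model."""
    kept = []
    for chunk in stream:
        if chunk.get("object") != "chat.completion.chunk":
            continue  # skip anything that is not a streaming chunk
        if chunk.get("model") != expected_model:
            raise ValueError(f"unexpected model: {chunk.get('model')}")
        kept.append(chunk)
    return kept

# Two chunks shaped like the (truncated) sample output in the post.
sample_stream = [
    {
        "created": 1701296792,
        "id": "62a817f5-549b-43e0-8f0c-a7cb204ab897",
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "object": "chat.completion.chunk",
        "system_fingerprint": "f366d657-75f9-9c33-8e57-1e6be2cf62f3",
    },
    {
        "created": 1701296792,
        "id": "62a817f5-549b-43e0-8f0c-a7cb204ab897",
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "object": "chat.completion.chunk",
        "system_fingerprint": "f366d657-75f9-9c33-8e57-1e6be2cf62f3",
    },
]

chunks = collect_chunks(sample_stream, "meta-llama/Meta-Llama-3-8B-Instruct")
print(len(chunks))  # 2
```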

pgml-cms/docs/guides/opensourceai.md

Lines changed: 22 additions & 22 deletions
@@ -6,26 +6,26 @@ OpenSourceAI is a drop in replacement for OpenAI's chat completion endpoint.

 Follow the instillation section in [getting-started.md](../api/client-sdk/getting-started.md "mention")

-When done, set the environment variable `DATABASE_URL` to your PostgresML database url.
+When done, set the environment variable `KORVUS_DATABASE_URL` to your PostgresML database url.

 ```bash
-export DATABASE_URL=postgres://user:pass@.db.cloud.postgresml.org:6432/pgml
+export KORVUS_DATABASE_URL=postgres://user:pass@.db.cloud.postgresml.org:6432/pgml
 ```

 Note that an alternative to setting the environment variable is passing the url to the constructor of `OpenSourceAI`

 {% tabs %}
 {% tab title="JavaScript" %}
 ```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI(YOUR_DATABASE_URL);
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI(YOUR_DATABASE_URL);
 ```
 {% endtab %}

 {% tab title="Python" %}
 ```python
-import pgml
-client = pgml.OpenSourceAI(YOUR_DATABASE_URL)
+import korvus
+client = korvus.OpenSourceAI(YOUR_DATABASE_URL)
 ```
 {% endtab %}
 {% endtabs %}
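The hunk above renames the connection variable to `KORVUS_DATABASE_URL` and notes that the url can instead be passed to the `OpenSourceAI` constructor. A minimal sketch of that precedence (an explicit constructor argument wins, otherwise the environment variable is used) is below; `resolve_database_url` is a hypothetical helper for illustration, and the real korvus constructor handles this internally. The connection strings are placeholders.

```python
import os

# Sketch of the precedence the guide describes: an explicit database url
# passed to the constructor wins; otherwise fall back to the
# KORVUS_DATABASE_URL environment variable. `resolve_database_url` is a
# hypothetical helper, not part of the korvus SDK.

def resolve_database_url(explicit_url=None):
    url = explicit_url or os.environ.get("KORVUS_DATABASE_URL")
    if url is None:
        raise RuntimeError(
            "set KORVUS_DATABASE_URL or pass the url to OpenSourceAI(...)"
        )
    return url

# Placeholder urls, for illustration only.
os.environ["KORVUS_DATABASE_URL"] = "postgres://user:pass@host:6432/pgml"
print(resolve_database_url())                       # env var fallback
print(resolve_database_url("postgres://other/db"))  # explicit wins
```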
@@ -59,8 +59,8 @@ Here is a simple example using zephyr-7b-beta, one of the best 7 billion paramet
 {% tabs %}
 {% tab title="JavaScript" %}
 ```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
 const results = client.chat_completions_create(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -80,8 +80,8 @@ console.log(results);

 {% tab title="Python" %}
 ```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
 results = client.chat_completions_create(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -138,8 +138,8 @@ Here is an example of streaming with the popular `meta-llama/Meta-Llama-3-8B-Ins
 {% tabs %}
 {% tab title="JavaScript" %}
 ```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
 const it = client.chat_completions_create_stream(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -163,8 +163,8 @@ while (!result.done) {

 {% tab title="Python" %}
 ```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
 results = client.chat_completions_create_stream(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -231,8 +231,8 @@ We also have asynchronous versions of the `chat_completions_create` and `chat_co
 {% tabs %}
 {% tab title="JavaScript" %}
 ```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
 const results = await client.chat_completions_create_async(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -252,8 +252,8 @@ console.log(results);

 {% tab title="Python" %}
 ```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
 results = await client.chat_completions_create_async(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -300,8 +300,8 @@ Notice the return types for the sync and async variations are the same.
 {% tabs %}
 {% tab title="JavaScript" %}
 ```javascript
-const pgml = require("pgml");
-const client = pgml.newOpenSourceAI();
+const korvus = require("korvus");
+const client = korvus.newOpenSourceAI();
 const it = await client.chat_completions_create_stream_async(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
@@ -325,8 +325,8 @@ while (!result.done) {

 {% tab title="Python" %}
 ```python
-import pgml
-client = pgml.OpenSourceAI()
+import korvus
+client = korvus.OpenSourceAI()
 results = await client.chat_completions_create_stream_async(
 "meta-llama/Meta-Llama-3-8B-Instruct",
 [
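The guide's final hunks switch the async streaming examples to korvus. Consuming such a stream is plain async iteration; the sketch below shows the pattern with `fake_stream`, a stub standing in for `chat_completions_create_stream_async` (the real call needs a PostgresML database), and assumes the OpenAI-style `choices[0].delta.content` chunk shape that the SDK's endpoint mirrors.

```python
import asyncio

# Sketch of consuming an async stream like the one
# `chat_completions_create_stream_async` returns. `fake_stream` is a stub
# standing in for the real korvus call, which needs a PostgresML database.

async def fake_stream():
    # Yield chunks in the OpenAI-style streaming shape (an assumption here).
    for piece in ["He", "llo", "!"]:
        yield {"choices": [{"delta": {"content": piece}}]}

async def main():
    parts = []
    async for chunk in fake_stream():
        parts.append(chunk["choices"][0]["delta"]["content"])
    return "".join(parts)

result = asyncio.run(main())
print(result)  # Hello!
```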
