diff --git a/pgml-cms/docs/.gitbook/assets/Chatbots_Flow-Diagram.svg b/pgml-cms/docs/.gitbook/assets/Chatbots_Flow-Diagram.svg
new file mode 100644
index 000000000..382cab6e3
--- /dev/null
+++ b/pgml-cms/docs/.gitbook/assets/Chatbots_Flow-Diagram.svg
@@ -0,0 +1,281 @@
+…
diff --git a/pgml-cms/docs/.gitbook/assets/Chatbots_King-Diagram.svg b/pgml-cms/docs/.gitbook/assets/Chatbots_King-Diagram.svg
new file mode 100644
index 000000000..8f9d7f7fd
--- /dev/null
+++ b/pgml-cms/docs/.gitbook/assets/Chatbots_King-Diagram.svg
@@ -0,0 +1,78 @@
+…
diff --git a/pgml-cms/docs/.gitbook/assets/Chatbots_Limitations-Diagram.svg b/pgml-cms/docs/.gitbook/assets/Chatbots_Limitations-Diagram.svg
new file mode 100644
index 000000000..c96b30ec4
--- /dev/null
+++ b/pgml-cms/docs/.gitbook/assets/Chatbots_Limitations-Diagram.svg
@@ -0,0 +1,275 @@
+…
diff --git a/pgml-cms/docs/.gitbook/assets/Chatbots_Tokens-Diagram.svg b/pgml-cms/docs/.gitbook/assets/Chatbots_Tokens-Diagram.svg
new file mode 100644
index 000000000..0b7c0915a
--- /dev/null
+++ b/pgml-cms/docs/.gitbook/assets/Chatbots_Tokens-Diagram.svg
@@ -0,0 +1,238 @@
+…
diff --git a/pgml-cms/docs/.gitbook/assets/chatbot_flow.png b/pgml-cms/docs/.gitbook/assets/chatbot_flow.png
deleted file mode 100644
index f9107d99f..000000000
Binary files a/pgml-cms/docs/.gitbook/assets/chatbot_flow.png and /dev/null differ
diff --git a/pgml-cms/docs/.gitbook/assets/embedding_king.png b/pgml-cms/docs/.gitbook/assets/embedding_king.png
deleted file mode 100644
index 03deebbe8..000000000
Binary files a/pgml-cms/docs/.gitbook/assets/embedding_king.png and /dev/null differ
diff --git a/pgml-cms/docs/.gitbook/assets/embeddings_tokens.png b/pgml-cms/docs/.gitbook/assets/embeddings_tokens.png
deleted file mode 100644
index 6f7a13221..000000000
Binary files a/pgml-cms/docs/.gitbook/assets/embeddings_tokens.png and /dev/null differ
diff --git a/pgml-cms/docs/guides/chatbots/README.md b/pgml-cms/docs/guides/chatbots/README.md
index cd65d9125..9237f5c38 100644
--- a/pgml-cms/docs/guides/chatbots/README.md
+++ b/pgml-cms/docs/guides/chatbots/README.md
@@ -30,7 +30,7 @@ Here is an example flowing from:
 
 text -> tokens -> LLM -> probability distribution -> predicted token -> text
 
-<figure><img src="….png"><figcaption>The flow of inputs through an LLM. In this case the inputs are "What is Baldur's Gate 3?" and the output token "14" maps to the word "I"</figcaption></figure>
+<figure><img src="….svg"><figcaption>The flow of inputs through an LLM. In this case the inputs are "What is Baldur's Gate 3?" and the output token "14" maps to the word "I"</figcaption></figure>
 
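A minimal sketch of that same loop, text to tokens, tokens through the model to a probability distribution, then back to text. It assumes the HuggingFace `transformers` library referenced in the hint below; `gpt2` is an arbitrary small model chosen for illustration, not the model or tokenizer behind the diagram's numbers.

```python
# Sketch of: text -> tokens -> LLM -> probability distribution -> predicted token -> text.
# "gpt2" is an illustrative choice only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "What is Baldur's Gate 3?"
inputs = tokenizer(text, return_tensors="pt")          # text -> tokens
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()))
# Note how "Baldur's" is split into several sub-word tokens.

logits = model(**inputs).logits                        # tokens -> LLM
probs = torch.softmax(logits[0, -1], dim=-1)           # -> probability distribution
next_token_id = int(torch.argmax(probs))               # -> predicted token
print(tokenizer.decode([next_token_id]))               # -> text
```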
 {% hint style="info" %}
 We have simplified the tokenization process. Words do not always map directly to tokens. For instance, the word "Baldur's" may actually map to multiple tokens. For more information on tokenization, check out [HuggingFace's summary](https://huggingface.co/docs/transformers/tokenizer\_summary).
@@ -108,11 +108,11 @@ What does an `embedding` look like? `Embeddings` are just vectors (for our use c
 embedding_1 = embed("King") # embed returns something like [0.11, -0.32, 0.46, ...]
 ```
 
-<figure><img src="….png"><figcaption>The flow of word -> token -> embedding</figcaption></figure>
+<figure><img src="….svg"><figcaption>The flow of word -> token -> embedding</figcaption></figure>
 
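The `embed()` call in the context above is pseudocode. As a hedged illustration, here is one way such a helper could be implemented; the `sentence-transformers` library and the `all-MiniLM-L6-v2` model are assumptions for this sketch, not something the guide prescribes.

```python
# One possible implementation of the guide's embed() helper.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def embed(text: str):
    # Returns a fixed-length vector; 384 floats for this particular model.
    return model.encode(text)

embedding_1 = embed("King")
print(embedding_1[:5])  # something like [0.11, -0.32, 0.46, ...]
```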
 `Embeddings` aren't limited to words; we have models that can embed entire sentences.
 
-<figure><img src="….png"><figcaption>The flow of sentence -> tokens -> embedding</figcaption></figure>
+<figure><img src="….svg"><figcaption>The flow of sentence -> tokens -> embedding</figcaption></figure>
 
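Continuing the sketch above, the same assumed model embeds a whole sentence into a single fixed-length vector:

```python
# The embed() helper from the previous sketch works on full sentences too.
sentence_embedding = embed("What is Baldur's Gate 3?")
print(sentence_embedding.shape)  # same dimensionality as the single-word embedding
```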
 Why do we care about `embeddings`? `Embeddings` have a very interesting property. Words and sentences that have close [semantic similarity](https://en.wikipedia.org/wiki/Semantic\_similarity) sit closer to one another in vector space than words and sentences that do not have close semantic similarity.
@@ -157,7 +157,7 @@ print(context)
 
 There is a lot going on with this, so let's check out this diagram and step through it.
 
-<figure><img src="….png"><figcaption>The flow of taking a document, splitting it into chunks, embedding those chunks, and then retrieving a chunk based off of a user's query</figcaption></figure>
+<figure><img src="….svg"><figcaption>The flow of taking a document, splitting it into chunks, embedding those chunks, and then retrieving a chunk based off of a user's query</figcaption></figure>
 
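Before the numbered steps that follow, here is a compressed, self-contained sketch of the flow in the diagram: chunk a document, embed the chunks, then pull back the chunk closest to the query. The toy document, the paragraph-based chunking, and the cosine-similarity helper are illustrative assumptions, not the guide's prescribed implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Same assumed embedding model as in the earlier sketches.
model = SentenceTransformer("all-MiniLM-L6-v2")

def embed(text: str) -> np.ndarray:
    return model.encode(text)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A toy two-paragraph document standing in for real content.
document = """Baldur's Gate 3 is a role-playing game set in the Dungeons & Dragons universe.

It was developed and published by Larian Studios."""

# Step 1: split the document into chunks (here, one chunk per paragraph).
chunks = [c.strip() for c in document.split("\n\n") if c.strip()]

# Step 2: embed every chunk.
chunk_embeddings = [embed(chunk) for chunk in chunks]

# Step 3: embed the user's query and retrieve the closest chunk as context.
query = "Who made Baldur's Gate 3?"
query_embedding = embed(query)
scores = [cosine_similarity(query_embedding, e) for e in chunk_embeddings]

context = chunks[int(np.argmax(scores))]
print(context)  # expected: the chunk mentioning Larian Studios
```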
 Step 1: We take the document and split it into chunks. Chunks are typically a paragraph or two in size. There are many ways to split documents into chunks; for more information, check out [this guide](https://www.pinecone.io/learn/chunking-strategies/).
diff --git a/pgml-cms/docs/introduction/getting-started/import-your-data/README.md b/pgml-cms/docs/introduction/getting-started/import-your-data/README.md
index 0ab10669c..755d94cf7 100644
--- a/pgml-cms/docs/introduction/getting-started/import-your-data/README.md
+++ b/pgml-cms/docs/introduction/getting-started/import-your-data/README.md
@@ -12,11 +12,11 @@ Just like any PostgreSQL database, PostgresML can be configured as the primary a
 If your intention is to use PostgresML as your primary database, your job here is done. You can use the connection credentials provided and start building your application on top of in-database AI right away.
 
-## [Logical replica](logical-replication/)
+## [Logical replication](logical-replication/)
 
 If your primary database is hosted elsewhere, for example AWS RDS or Azure Postgres, you can get your data replicated to PostgresML in real time using logical replication.
 
-<img src="…" alt="Logical replication">
+<img src="…" alt="Logical replication">
 
 Having access to your data immediately is very useful to accelerate your machine learning use cases and removes the need for moving data multiple times between microservices. Latency-sensitive applications should consider using this approach.
@@ -25,7 +25,7 @@
 Foreign data wrappers are a set of PostgreSQL extensions that allow making connections from inside the database directly to other databases, even if they aren't running on Postgres. For example, Postgres has foreign data wrappers for MySQL, S3, Snowflake and many others.
 
-<img src="…" alt="Foreign data wrappers">
+<img src="…" alt="Foreign data wrappers">
 
 FDWs are useful when data access is infrequent and not latency-sensitive. For many use cases, like offline batch workloads and not very busy websites, this approach is suitable and easy to get started with.
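For readers who want to try either path, below is a hedged sketch of both setups driven from Python with psycopg. All hostnames, database names, tables, and credentials are placeholders; logical replication additionally requires `wal_level = logical` on the primary and the table schemas to already exist on the PostgresML side.

```python
# Sketch: logical replication from an existing primary into PostgresML, via psycopg (v3).
import psycopg

PRIMARY_DSN = "postgres://user:password@primary-host:5432/app"       # e.g. AWS RDS
POSTGRESML_DSN = "postgres://user:password@postgresml-host:6432/app"  # placeholder

# On the primary: publish the tables you want replicated.
with psycopg.connect(PRIMARY_DSN, autocommit=True) as primary:
    primary.execute("CREATE PUBLICATION pgml_pub FOR TABLE users, orders")

# On PostgresML: subscribe to that publication.
# CREATE SUBSCRIPTION cannot run inside a transaction block, hence autocommit.
with psycopg.connect(POSTGRESML_DSN, autocommit=True) as pgml:
    pgml.execute(
        "CREATE SUBSCRIPTION pgml_sub "
        "CONNECTION 'host=primary-host dbname=app user=replicator password=...' "
        "PUBLICATION pgml_pub"
    )
```

A similarly hedged sketch for the foreign data wrapper route, using `postgres_fdw` with placeholder server, schema, and credential values:

```python
# Sketch: expose tables from the primary inside PostgresML through postgres_fdw.
with psycopg.connect(POSTGRESML_DSN, autocommit=True) as pgml:
    pgml.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
    pgml.execute(
        "CREATE SERVER primary_db FOREIGN DATA WRAPPER postgres_fdw "
        "OPTIONS (host 'primary-host', port '5432', dbname 'app')"
    )
    pgml.execute(
        "CREATE USER MAPPING FOR CURRENT_USER SERVER primary_db "
        "OPTIONS (user 'app_user', password '...')"
    )
    pgml.execute("CREATE SCHEMA IF NOT EXISTS primary_public")
    pgml.execute("IMPORT FOREIGN SCHEMA public FROM SERVER primary_db INTO primary_public")
```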