Sébastien De Greef committed on
Commit
f32aa09
1 Parent(s): 6ed5f5f

chore: Update website navigation to include "Embeddings" section

src/_quarto.yml CHANGED
@@ -3,7 +3,7 @@ project:
  website:
  title: "My AI Cookbook"
  sidebar:
- style: "docked"
+ style: "floating"
  search: true
  collapse-level: 3
  contents:
@@ -22,51 +22,33 @@ website:
  text: "Layer Types"
  - href: theory/metrics.qmd
  text: "Metric Types"
-
-
+ - href: theory/optimizers.qmd
+ text: "Optimizers"
+ - href: theory/training.qmd
+ text: "Training"

  - section: "Large Language Models"
  contents:
+ - href: llms/tokenizers.qmd
+ text: "Tokenizers"
+ - href: llms/embeddings.qmd
+ text: "Embeddings"
+
  - href: llms/prompting.qmd
  text: "Prompting"
  - href: theory/chainoftoughts.qmd
  text: "Chain of toughts"
- - href: llms/embeddings.qmd
- text: "Embeddings"
- - href: llms/index.qmd
- text: "LLM'xs"
+ - href: llms/finetuning.qmd
+ text: "Fine-tuning and Lora"
+ - href: llms/rag_systems.qmd
+ text: "Retrival Augmented Generation"

- - section: "Retrival Augmented Generation"
- contents:
- - section: "RAG Techniques"
- contents:
- - href: notebooks/rag_zephyr_langchain.qmd
- text: "RAG Zephyr & LangChain"
- - href: notebooks/advanced_rag.qmd
- text: "Advanced RAG"
- - href: notebooks/rag_evaluation.qmd
- text: "RAG Evaluation"

- - section: "Additional Techniques"
- contents:
- - href: notebooks/automatic_embedding.ipynb
- text: "Automatic Embedding"
- - href: notebooks/faiss.ipynb
- text: "FAISS for Efficient Search"
- - href: notebooks/single_gpu.ipynb
- text: "Single GPU Optimization"
-
- - section: "Computer Vision"
- contents:
- - href: notebooks/automatic_embedding.ipynb
- text: "Automatic Embedding"
- - href: notebooks/faiss.ipynb
- text: "FAISS for Efficient Search"
- - href: notebooks/single_gpu.ipynb
- text: "Single GPU Optimization"
+ - section: "Computer Vision Models"
+ - section: "Image Generation Models"

  format:
  html:
- theme: cosmo
+ theme: sketchy
  css: styles.css
  toc: true
src/index.qmd CHANGED
@@ -5,7 +5,7 @@ title: "About My AI Cookbook"
  This repository is my personal collection of recipes and notebooks, documenting my journey of learning and exploring various aspects of Artificial Intelligence (AI). As a self-taught AI enthusiast, I created this cookbook to serve as a knowledge base, a "how-to" guide, and a reference point for my own projects and experiments.

  ::: {.callout-tip}
- ## Tip with Title
+ ## Use the search box

- This is an example of a callout with a title.
+ If you are looking for something in particular, use the search box to find the correct page.
  :::
src/llms/context_window.qmd ADDED
@@ -0,0 +1,38 @@
# Understanding the Context Window in Natural Language Processing

When working with natural language processing (NLP), one of the foundational concepts is the "context window". This term refers to the segment of text that a model considers when making predictions or processing language. The context window is crucial for understanding how language models like GPT (Generative Pre-trained Transformer) and BERT (Bidirectional Encoder Representations from Transformers) process and generate text. Here, we will explore what a context window is, why it's important, and how it influences the performance and capabilities of AI models.

## What is a Context Window?

A context window in NLP is the range of words or tokens around a focal word that an algorithm uses to understand or predict that word. This window can be fixed or variable in size, depending on the model's architecture. For instance, in a fixed-size model, a window might include five words before and five words after a target word; in more dynamic architectures, the size and scope of the context window adjust based on the model's training and objectives.
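
To make the fixed-size case concrete, here is a minimal, library-agnostic Python sketch that extracts a symmetric window of tokens around a target word; the function name and window size are purely illustrative.

```python
def context_window(tokens, target_index, size=5):
    """Return the tokens inside a symmetric window around the target token."""
    start = max(0, target_index - size)              # clamp at the start of the text
    end = min(len(tokens), target_index + size + 1)  # clamp at the end of the text
    return tokens[start:end]

sentence = "the quick brown fox jumps over the lazy dog near the river bank".split()
target = sentence.index("lazy")
print(context_window(sentence, target, size=3))
# ['jumps', 'over', 'the', 'lazy', 'dog', 'near', 'the']
```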

## Importance of the Context Window

The context window is vital for several reasons:

1. **Language Understanding**: It allows models to capture more than just the meaning of a single word; they also incorporate surrounding text to grasp context, idiomatic expressions, and syntactic relationships.
2. **Coherence and Cohesion**: By considering words beyond the immediate vicinity, models can generate text that is coherent and contextually appropriate, maintaining logical flow in language generation tasks.
3. **Disambiguation**: Words with multiple meanings can be interpreted correctly based on the words surrounding them. For example, the word "bank" would be understood differently in "river bank" compared to "savings bank".

## Applications of Context Windows

The context window concept is applied in various tasks across NLP:

- **Machine Translation**: Larger context windows help in understanding the full meaning of sentences, which improves the accuracy of translations.
- **Sentiment Analysis**: The sentiment conveyed in a text often depends on phrases and context, not just individual words.
- **Autocomplete and Predictive Text**: Effective prediction of the next word or series of words in a sentence requires understanding the context provided by previous words.
- **Information Retrieval**: When searching for documents or answers, a broader context window can help identify more relevant results based on the query's context.

## Challenges with Context Windows

While context windows are beneficial, they also present certain challenges:

1. **Computational Cost**: Larger context windows require more memory and processing power, which can slow down model training and inference.
2. **Noise Introduction**: Including too much context can introduce noise, potentially leading to less accurate predictions or understandings, especially if the additional context is not relevant.
3. **Optimal Size Determination**: Determining the ideal size of a context window is often challenging and may require extensive experimentation. Different tasks might also require different window sizes for optimal performance.

## Future Directions

As AI research advances, the exploration of optimal context window sizes and mechanisms continues. Techniques like attention mechanisms, which allow models to dynamically focus on different parts of the input data, help address some of the challenges posed by fixed-size context windows. These innovations enable more sophisticated processing of language, improving both the efficiency and effectiveness of NLP applications.

In conclusion, the context window is a critical concept in the field of NLP, playing a pivotal role in how machines understand and generate human language. By effectively leveraging context windows, AI models can achieve a deeper understanding of language nuances, resulting in more accurate and human-like language processing capabilities.
src/llms/embeddings.qmd CHANGED
@@ -1,10 +1,10 @@
- # Understanding Embeddings in Large Language Models (LLMs)
- Embeddings in Large Language Models (LLMs) like GPT, BERT, and others, are a foundational component in the field of natural language processing (NLP). These embeddings transform words, phrases, or even longer texts into a vector space, capturing the semantic meaning that enables LLMs to perform a variety of language-based tasks with remarkable proficiency. This article focuses on the role of embeddings in LLMs, how they are generated, and their impact on the performance of these models.
+ # Embeddings
+ Embeddings in Large Language Models (LLMs) are a foundational component in the field of natural language processing (NLP). These embeddings transform words, phrases, or even longer texts into a vector space, capturing the semantic meaning that enables LLMs to perform a variety of language-based tasks with remarkable proficiency. This article focuses on the role of embeddings in LLMs, how they are generated, and their impact on the performance of these models.

  ## What are Embeddings in LLMs?
  In the context of LLMs, embeddings are dense vector representations of text. Each vector aims to encapsulate aspects of linguistic meaning such as syntax, semantics, and context. Unlike simpler models that might use one-hot encoding, LLM embeddings map words or tokens to vectors in a way that reflects their semantic and contextual relationships.

- ## Role of Embeddings in LLMs
+ ## Role of Embeddings
  Embeddings are the input layer of LLMs, where each word or token from the input text is converted into vectors. These vectors are then processed by the model’s deeper layers to perform tasks such as text classification, question answering, translation, and more. Here’s how embeddings contribute to the functionality of LLMs:

  ## Pre-trained Word Embeddings
@@ -13,7 +13,7 @@ Many LLMs start with a layer of pre-trained word embeddings obtained from vast a
  ## Contextual Embeddings
  Advanced models like BERT and GPT use embeddings that adjust according to the context of a word in a sentence, differing from static word embeddings used in earlier models. This means the embedding for the word "bank" would differ when used in the context of a river compared to a financial institution.

- ## Generating Embeddings in LLMs
+ ## Generating Embeddings
  LLMs typically generate embeddings using one of two architectures:

  ### Transformer-Based Models
@@ -22,7 +22,7 @@ These models, including BERT and GPT, utilize the transformer architecture that
  ### Autoencoder Models
  Some LLMs employ autoencoders to generate embeddings that are then used to reconstruct the input data. This process helps in learning efficient representations of the data.

- ## Applications of Embeddings in LLMs
+ ## Applications of Embeddings
  Embeddings in LLMs enable a wide range of applications:

  **Machine Translation**: Understanding and translating languages by capturing contextual nuances.
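
To make the "bank" example above concrete, here is a short sketch of turning sentences into embedding vectors and comparing them with cosine similarity. It assumes the `sentence-transformers` package and the `all-MiniLM-L6-v2` checkpoint are available; any embedding model exposing an `encode` method would work the same way.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed to be installed

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

sentences = [
    "She deposited the check at the bank.",
    "They had a picnic on the river bank.",
    "The savings account earns two percent interest.",
]
embeddings = model.encode(sentences)  # one dense vector per sentence

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A contextual model will usually place the two finance-related sentences
# closer together than the finance/river pair, even though both mention "bank".
print(cosine(embeddings[0], embeddings[2]))
print(cosine(embeddings[0], embeddings[1]))
```
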
src/llms/finetuning.qmd ADDED
@@ -0,0 +1,32 @@
# Fine-Tuning and LoRA

Fine-tuning and LoRA (Low-Rank Adaptation) are two approaches commonly used to adapt Large Language Models (LLMs) to specific tasks or datasets. These methods are crucial for leveraging the power of pre-trained models and making them more effective for particular applications without needing to train a model from scratch.

## Fine-Tuning in LLMs

**Fine-tuning** is a process where a pre-trained model is further trained (or "fine-tuned") on a new dataset with a possibly different but related task. This approach leverages the learned features and knowledge the model has acquired during its initial training on a vast corpus of data, typically involving general tasks like language modeling.

### How Fine-Tuning Works
1. **Start with a Pre-trained Model**: Begin with a model that has been pre-trained on a large dataset to learn a wide range of language patterns and tasks.
2. **Continue Training**: The model is further trained on a smaller, task-specific dataset that directly relates to the tasks it needs to perform in its deployment environment.
3. **Adjust Weights**: During fine-tuning, most or all of the neural network layers are adjusted. The learning rate is typically much lower during this phase to make smaller adjustments to the weights, which helps refine the model's capabilities without losing the generalized knowledge it has previously acquired.
4. **Task-Specific Adjustments**: The output layer often undergoes significant changes, especially if the new task differs in nature (e.g., from classification to regression).

Fine-tuning allows the model to specialize towards specific nuances of a new task or dataset, enhancing its performance on particular domains or problems.
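
The steps above can be sketched in a few lines of PyTorch. The backbone below is a tiny stand-in for a pre-trained network so that the snippet runs on its own; with a real LLM you would load a published checkpoint instead, but the pattern of swapping the output head and training with a small learning rate is the same.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone (in practice: a loaded checkpoint).
backbone = nn.Sequential(
    nn.Embedding(1000, 64), nn.Flatten(1), nn.Linear(64 * 8, 128), nn.ReLU()
)

# Step 4: replace the output layer for the new task (here: 3-class classification).
head = nn.Linear(128, 3)
model = nn.Sequential(backbone, head)

# Step 3: a much smaller learning rate than the one used for pre-training.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

# Step 2: continue training on a small, task-specific dataset (random tensors here).
tokens = torch.randint(0, 1000, (16, 8))  # 16 examples of 8 token ids each
labels = torch.randint(0, 3, (16,))
for _ in range(3):                        # a few passes of further training
    optimizer.zero_grad()
    loss = loss_fn(model(tokens), labels)
    loss.backward()
    optimizer.step()
```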

## LoRA in LLMs

**LoRA (Low-Rank Adaptation)** is a more recent and less resource-intensive approach compared to traditional fine-tuning. It introduces task-specific trainable parameters while freezing most of the pre-trained parameters. This method is particularly useful for adapting large models efficiently with fewer trainable parameters.

### How LoRA Works
1. **Modify Architecture**: LoRA introduces low-rank matrices to the architecture of a pre-trained model. Instead of updating the original weight matrices of the model, LoRA adds trainable rank-decomposition matrices that capture the task-specific deviations from the pre-trained setup.
2. **Train Few Parameters**: In LoRA, you train these additional low-rank matrices while keeping the original high-dimensional weight matrices frozen. This reduces the number of trainable parameters significantly.
3. **Parameter Efficiency**: The key advantage of LoRA is that it allows for the efficient tuning of large models without the need to re-train or fine-tune millions of parameters. This makes it computationally cheaper and faster while still leveraging the power of large-scale pre-training.
4. **Integration**: The low-rank matrices are integrated into the model during the forward pass, modifying the model's behavior dynamically based on the learned task-specific adjustments (see the sketch below).
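
The mechanism in steps 1, 2, and 4 can be illustrated with a small, self-contained PyTorch module. This is a sketch, not the reference implementation from the LoRA paper or the `peft` library: the pre-trained weight is frozen and only the two low-rank matrices `A` and `B` are trained; the rank `r` and scaling `alpha` are illustrative defaults.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the original weights (step 2)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # low-rank factor
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # starts at zero
        self.scale = alpha / r

    def forward(self, x):
        # Original output plus the task-specific low-rank correction (step 4).
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable values versus 590,592 frozen ones
```

In practice, libraries such as `peft` wrap the attention projection layers of an LLM in exactly this way, which is what keeps the number of trainable parameters so small.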

## Comparison and Use Cases

- **Fine-Tuning**: More comprehensive and potentially more powerful as it adapts all model weights to the new task, but it is resource-intensive. Ideal for situations where you have sufficient computational resources and the task-specific dataset is large enough to warrant extensive re-training.
- **LoRA**: Best suited for scenarios where computational resources are limited, or you need to quickly adapt a large model to multiple specific tasks. LoRA is also beneficial when you need to maintain the original model's integrity and avoid the catastrophic forgetting that might occur during extensive fine-tuning.

Both fine-tuning and LoRA are vital tools in the adaptation of LLMs to specific tasks, offering a balance between leveraging massive pre-trained models and customizing them to meet particular needs effectively.
src/llms/rag_systems.qmd ADDED
@@ -0,0 +1,46 @@
# Retrieval-Augmented Generation (RAG)

In the rapidly evolving field of natural language processing (NLP), the development of Retrieval-Augmented Generation (RAG) systems marks a significant advancement. RAG systems are designed to enhance the capabilities of language models by extending beyond traditional context window limitations. This technology allows models to incorporate vast amounts of external information dynamically, leading to improved comprehension, reasoning, and generative abilities. This article delves into what RAG systems are, how they operate, and their impact on extending context window limitations in NLP.

**Dive straight into the subject with LangChain:**

* [Rag From Scratch](https://github.com/langchain-ai/rag-from-scratch)
* [Videos of Rag From Scratch](https://www.youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x)

## What are Retrieval-Augmented Generation Systems?

Retrieval-Augmented Generation systems integrate traditional language models with information retrieval techniques. These systems first retrieve relevant documents or data from a large corpus and then use this information to generate responses. The integration allows the model to access a broader range of information than what is contained within the immediate context window of the input.

## How Do RAG Systems Work?

RAG systems operate in two primary phases (a minimal sketch follows this list):

1. **Retrieval Phase**: In this phase, the system uses a query generated from the input to fetch relevant information from an external knowledge base or document collection. This is typically done using vector-based search techniques where documents are converted into embeddings that capture semantic meanings and can be compared for relevance.

2. **Generation Phase**: After retrieval, the system uses the retrieved documents as an extended context to generate an output. This phase is powered by a language model that synthesizes the input and the retrieved data to produce coherent and contextually enriched responses.
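
In code, the two phases look roughly like this. Everything here is a self-contained toy: `embed` is a stub that hashes words into a fixed-size vector and `generate` simply returns a placeholder, standing in for a real embedding model and a real LLM call.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size vector (stand-in for a real model)."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would send the prompt to a model."""
    return f"[model answer based on a {len(prompt)}-character prompt]"

documents = [
    "LoRA adds low-rank trainable matrices to a frozen pre-trained model.",
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Quarto renders .qmd files into websites and books.",
]
doc_vectors = np.stack([embed(d) for d in documents])

# Retrieval phase: score every document against the query and keep the best matches.
query = "How does LoRA adapt a frozen model?"
scores = doc_vectors @ embed(query)
top_docs = [documents[i] for i in np.argsort(scores)[::-1][:2]]

# Generation phase: the retrieved passages become extra context for the language model.
prompt = "Answer using the context below.\n\nContext:\n" + "\n".join(top_docs) + f"\n\nQuestion: {query}"
print(generate(prompt))
```

A production system replaces the stubs with a trained embedding model, a vector store such as FAISS, and an actual LLM, but the retrieve-then-generate structure stays the same.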

## Extending Beyond Context Window Limitations

Traditional language models, such as those based solely on transformers, are constrained by fixed-size context windows. This limitation restricts the model's understanding to only the immediate surrounding text. RAG systems, however, transcend these boundaries in several key ways:

- **Expanded Knowledge Base**: By accessing external databases or documents, RAG systems are not limited to pre-encoded knowledge or the immediate input provided. This enables them to incorporate updated, extensive, and specific information that might not be present within the model's trained parameters.

- **Dynamic Context Adaptation**: RAG systems dynamically adjust the context used for generating responses based on the input. This adaptability allows for more precise and appropriate responses, especially in complex or niche queries.

- **Enhanced Reasoning Abilities**: With access to more comprehensive data, RAG systems can perform more complex reasoning tasks. This capability is particularly valuable in applications such as question answering and decision support systems.

## Applications of RAG Systems

RAG systems have been successfully applied in various NLP tasks, demonstrating substantial improvements over traditional models:

- **Question Answering**: RAG systems excel in open-domain question answering, where they can retrieve information from vast corpora to answer questions that require external knowledge.

- **Content Generation**: In creative and content generation tasks, they provide richer and more diverse content by referencing a broad range of sources.

- **Dialogue Systems**: They enhance conversational AI by providing more informed, accurate, and engaging responses, drawing from a larger pool of conversational contexts and facts.

## Challenges and Future Directions

Despite their advantages, RAG systems also face challenges such as retrieval accuracy, integration of retrieved information with generative models, and computational demands. Future advancements are likely to focus on improving the efficiency and accuracy of the retrieval phase, better integration techniques for blending retrieved information with generative processes, and scaling up to handle even larger and more diverse datasets.

In conclusion, Retrieval-Augmented Generation systems represent a pivotal development in NLP. By effectively breaking the constraints of fixed context windows, RAG systems provide a pathway towards more intelligent, knowledgeable, and capable language models. This technology not only broadens the scope of what is achievable with AI in language tasks but also sets the stage for more sophisticated and contextually aware AI systems in the future.
src/llms/tokenizers.qmd ADDED
@@ -0,0 +1,58 @@
---
title: Tokenizers
---

Tokenization is a fundamental step in natural language processing (NLP) that involves breaking down text into smaller components, such as words, phrases, or symbols. These smaller components are called tokens. Tokenizers, the tools that perform tokenization, play a crucial role in preparing text for various NLP tasks like machine translation, sentiment analysis, and text summarization. This article provides an exhaustive overview of tokenizers, exploring their types, how they function, their importance, and the challenges they present.

[Excellent video by Andrej Karpathy about tokenizers](https://www.youtube.com/watch?v=zduSFxRajkE)

## What is Tokenization?

Tokenization is the process of converting a sequence of characters into a sequence of tokens. It is a form of text segmentation that helps in structuring text to be processed by NLP models. The primary goal is to interpret the input text by analyzing its composition of words, phrases, or other meaningful elements.

## Types of Tokenizers

Tokenizers can be broadly classified into several types based on the method and granularity of tokenization (a short comparison sketch follows the list):

1. **Word Tokenizers**: These split text into words using spaces and punctuation as delimiters. Common in many Western languages where words are clearly delineated by spaces.

2. **Subword Tokenizers**: These break words into smaller meaningful units (subwords or morphemes), which can be beneficial for handling rare words and morphologically rich languages. Examples include Byte Pair Encoding (BPE), WordPiece, and SentencePiece.

3. **Character Tokenizers**: These tokenize text into individual characters. This approach is useful in certain contexts, like character-level text generation or languages without clear word boundaries.

4. **Morphological Tokenizers**: These analyze the morphological structure of words, useful particularly in agglutinative languages like Turkish or Finnish, where words can be formed by stringing together multiple morphemes.

5. **Whitespace Tokenizers**: These are the simplest form of tokenizers that split text on whitespace. They are fast but naive, as they do not consider punctuation or other delimiters.

6. **Regex Tokenizers**: These use regular expressions to define tokens and are highly customizable. They can be designed to capture specific patterns like dates, names, or specialized terms.
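
The whitespace, regex, and character approaches can be compared directly with Python's standard library; the regular expression below is just one reasonable choice for separating words from punctuation.

```python
import re

text = "Tokenizers aren't magic: they just split text!"

whitespace_tokens = text.split()                          # naive: punctuation sticks to words
regex_tokens = re.findall(r"\w+(?:'\w+)?|[^\w\s]", text)  # words (with apostrophes) or punctuation marks
char_tokens = list(text.replace(" ", ""))                 # character-level view

print(whitespace_tokens)  # ['Tokenizers', "aren't", 'magic:', 'they', 'just', 'split', 'text!']
print(regex_tokens)       # ['Tokenizers', "aren't", 'magic', ':', 'they', 'just', 'split', 'text', '!']
print(char_tokens[:10])   # ['T', 'o', 'k', 'e', 'n', 'i', 'z', 'e', 'r', 's']
```

Subword tokenizers such as BPE and WordPiece sit between the word and character extremes: they are learned from a corpus rather than written as a rule, which is why they are shown with a trained tokenizer further below.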

## Importance of Tokenization

Tokenization is critical in NLP for several reasons:

- **Preprocessing**: It is often the first step in preprocessing text data, preparing it for more complex operations like parsing or entity recognition.
- **Vocabulary Construction**: Tokenizers help in building the vocabulary of a model, which is crucial for embedding layers and the overall understanding of the text.
- **Consistency**: Effective tokenization ensures that text data is uniformly structured, aiding in the consistency of subsequent analyses.
- **Flexibility**: Advanced tokenizers handle different languages and scripts, accommodating global and diverse linguistic features.

## How Tokenizers Influence Model Performance

The choice of tokenizer can significantly impact the performance of NLP models:

- **Language Coverage**: Some tokenizers are better suited for specific languages or linguistic phenomena. For example, subword tokenizers can be particularly effective for languages with rich morphology.
- **Handling of Rare Words**: Subword and character tokenizers reduce the problem of out-of-vocabulary (OOV) words, as they can decompose unknown words into known subunits (see the sketch after this list).
- **Training Efficiency**: Efficient tokenization can speed up the training process by reducing the size of the vocabulary and simplifying the model architecture.
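
To see subword decomposition in practice, a trained tokenizer can be loaded from the Hugging Face `transformers` library; this assumes the package is installed and the `bert-base-uncased` vocabulary is downloadable or cached. WordPiece marks word-internal pieces with a `##` prefix.

```python
from transformers import AutoTokenizer  # assumed to be installed

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# A rare or invented word is broken into known subword units instead of becoming
# a single out-of-vocabulary token; the exact pieces depend on the learned vocabulary.
print(tokenizer.tokenize("untranslatability"))  # e.g. something like ['un', '##tra', ...]
print(tokenizer.tokenize("hello world"))        # common words usually stay whole
```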

## Challenges with Tokenizers

Despite their utility, tokenizers also face several challenges:

- **Complexity**: Designing a tokenizer that effectively handles various languages, scripts, and special cases (like emojis or code snippets) can be complex.
- **Bias**: Tokenizers can introduce or perpetuate biases if not properly designed, especially when handling dialects or non-standard language forms.
- **Upkeep**: Languages evolve, and tokenizers must be updated to accommodate new words, slang, or changes in language use.

## Future Directions

Advancements in tokenization are moving towards more adaptive and intelligent systems that can handle the intricacies of human language more effectively. Developments in deep learning might lead to models that can learn to tokenize optimally for specific tasks without manual intervention.

In conclusion, tokenizers are a backbone technology in NLP that facilitate the understanding and processing of text. Their design and implementation are critical for the success of language models and applications. As NLP continues to evolve, so too will the methodologies and technologies surrounding tokenization, shaping the future of how machines understand human language.