Metadata-Version: 2.4
Name: graphrag-llm
Version: 3.0.2
Summary: GraphRAG LLM package.
Project-URL: Source, https://github.com/microsoft/graphrag
Author: Mónica Carvajal
Author-email: Alonso Guevara Fernández <alonsog@microsoft.com>, Andrés Morales Esquivel <andresmor@microsoft.com>, Chris Trevino <chtrevin@microsoft.com>, David Tittsworth <datittsw@microsoft.com>, Dayenne de Souza <ddesouza@microsoft.com>, Derek Worthen <deworthe@microsoft.com>, Gaudy Blanco Meneses <gaudyb@microsoft.com>, Ha Trinh <trinhha@microsoft.com>, Jonathan Larson <jolarso@microsoft.com>, Josh Bradley <joshbradley@microsoft.com>, Kate Lytvynets <kalytv@microsoft.com>, Kenny Zhang <zhangken@microsoft.com>, Nathan Evans <naevans@microsoft.com>, Rodrigo Racanicci <rracanicci@microsoft.com>, Sarah Smith <smithsarah@microsoft.com>
License-Expression: MIT
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: <3.14,>=3.10
Requires-Dist: azure-identity~=1.25
Requires-Dist: graphrag-cache==3.0.2
Requires-Dist: graphrag-common==3.0.2
Requires-Dist: jinja2~=3.1
Requires-Dist: litellm~=1.80
Requires-Dist: nest-asyncio2~=1.7
Requires-Dist: pydantic~=2.10
Requires-Dist: typing-extensions~=4.12
Description-Content-Type: text/markdown

# GraphRAG LLM

## Basic Completion

This example demonstrates basic usage of the LLM library to interact with Azure OpenAI. It loads environment variables for API configuration, creates a `ModelConfig` for Azure OpenAI, and sends a simple question to the model. The code handles both streaming and non-streaming responses: streaming responses are printed chunk by chunk in real time, while non-streaming responses are printed all at once. It also shows how to use the `gather_completion_response` utility as a simpler alternative that handles both response types and returns the complete text.

[Open the notebook to explore the basic completion example code](example_notebooks/basic_completion_example.ipynb)
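The streaming/non-streaming handling described above can be sketched as a small standalone helper. Note this is a hypothetical illustration of the pattern, not the actual `gather_completion_response` implementation; the real function's signature and response types in graphrag-llm may differ.

```python
# Hypothetical sketch of a gather-style helper: accept either a complete
# string (non-streaming) or an async iterator of chunks (streaming) and
# return the full text either way. Names and types are illustrative only.
import asyncio
from typing import AsyncIterator, Union


async def gather_text(response: Union[str, AsyncIterator[str]]) -> str:
    """Return the complete completion text for either response type."""
    if isinstance(response, str):
        # Non-streaming: the response is already the full text.
        return response
    # Streaming: accumulate chunks as they arrive, then join them.
    chunks = []
    async for chunk in response:
        chunks.append(chunk)
    return "".join(chunks)


async def fake_stream() -> AsyncIterator[str]:
    """Stand-in for a streaming completion, yielding chunks."""
    for part in ("Hello", ", ", "world"):
        yield part


print(asyncio.run(gather_text("Hello, world")))  # non-streaming path
print(asyncio.run(gather_text(fake_stream())))   # streaming path
```

Both calls print the same complete text, which is the point of the utility: callers don't need to branch on whether streaming was enabled.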


## Basic Embedding

This example demonstrates how to generate text embeddings using the GraphRAG LLM library with Azure OpenAI's embedding service. It loads API credentials from environment variables, creates a `ModelConfig` for the Azure embedding model, and configures authentication to use either an API key or Azure Managed Identity. The script then creates an embedding client and processes a batch of two text strings ("Hello world" and "How are you?") to generate their vector embeddings.

[Open the notebook to explore the basic embeddings example code](example_notebooks/basic_embedding_example.ipynb)
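The API-key-versus-managed-identity choice described above can be sketched as follows. This is a hypothetical illustration: the environment variable name (`AZURE_OPENAI_API_KEY`) and the `AuthChoice` type are assumptions for this sketch, not graphrag-llm's actual configuration fields.

```python
# Hypothetical sketch of selecting an authentication mode: prefer an
# explicit API key from the environment, otherwise fall back to Azure
# Managed Identity (which azure-identity's DefaultAzureCredential
# would resolve at request time). Names are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AuthChoice:
    mode: str                    # "api_key" or "managed_identity"
    api_key: Optional[str] = None


def choose_auth(env: dict) -> AuthChoice:
    """Pick an auth mode from an environment-variable mapping."""
    key = env.get("AZURE_OPENAI_API_KEY")
    if key:
        return AuthChoice(mode="api_key", api_key=key)
    # No key configured: defer to managed identity.
    return AuthChoice(mode="managed_identity")


print(choose_auth({"AZURE_OPENAI_API_KEY": "sk-..."}).mode)  # api_key
print(choose_auth({}).mode)                                  # managed_identity
```

Passing the environment in as a mapping (rather than reading `os.environ` directly) keeps the selection logic easy to test; the notebook itself loads these values from environment variables as described above.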

View the [notebooks](example_notebooks/README.md) for more examples.