root/local/: graphrag-llm-3.0.2 metadata and description


GraphRAG LLM package.

author Mónica Carvajal
author_email Alonso Guevara Fernández <alonsog@microsoft.com>, Andrés Morales Esquivel <andresmor@microsoft.com>, Chris Trevino <chtrevin@microsoft.com>, David Tittsworth <datittsw@microsoft.com>, Dayenne de Souza <ddesouza@microsoft.com>, Derek Worthen <deworthe@microsoft.com>, Gaudy Blanco Meneses <gaudyb@microsoft.com>, Ha Trinh <trinhha@microsoft.com>, Jonathan Larson <jolarso@microsoft.com>, Josh Bradley <joshbradley@microsoft.com>, Kate Lytvynets <kalytv@microsoft.com>, Kenny Zhang <zhangken@microsoft.com>, Nathan Evans <naevans@microsoft.com>, Rodrigo Racanicci <rracanicci@microsoft.com>, Sarah Smith <smithsarah@microsoft.com>
classifiers
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Programming Language :: Python :: 3.12
  • Programming Language :: Python :: 3.13
description_content_type text/markdown
license_expression MIT
project_urls
  • Source, https://github.com/microsoft/graphrag
requires_dist
  • azure-identity~=1.25
  • graphrag-cache==3.0.2
  • graphrag-common==3.0.2
  • jinja2~=3.1
  • litellm~=1.80
  • nest-asyncio2~=1.7
  • pydantic~=2.10
  • typing-extensions~=4.12
requires_python <3.14,>=3.10

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

Files

graphrag_llm-3.0.2-py3-none-any.whl
  • Size: 81 KB
  • Type: Python Wheel
  • Python: 3
  • Replaced 6 time(s)
  • Uploaded to root/local by root 2026-02-19 21:53:54

graphrag_llm-3.0.2.tar.gz
  • Size: 58 KB
  • Type: Source
  • Replaced 6 time(s)
  • Uploaded to root/local by root 2026-02-19 21:54:05

GraphRAG LLM

Basic Completion

This example demonstrates basic usage of the LLM library to interact with Azure OpenAI. It loads environment variables for API configuration, creates a ModelConfig for Azure OpenAI, and sends a simple question to the model. The code handles both streaming and non-streaming responses (streaming responses are printed chunk by chunk in real-time, while non-streaming responses are printed all at once). It also shows how to use the gather_completion_response utility function as a simpler alternative that automatically handles both response types and returns the complete text.

Open the notebook to explore the basic completion example code
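As a rough illustration of the streaming/non-streaming pattern described above, the sketch below shows how a `gather_completion_response`-style utility can accept either a complete string or an async stream of chunks and return the full text either way. The stub names (`fake_stream`, the function body itself) are hypothetical stand-ins, not the library's actual API; see the notebook for the real usage.

```python
import asyncio
from typing import AsyncIterator, Union

async def fake_stream() -> AsyncIterator[str]:
    # Stand-in for a streaming completion: chunks arrive one at a time.
    for chunk in ["Graph", "RAG ", "is ", "neat."]:
        yield chunk

async def gather_completion_response(resp: Union[str, AsyncIterator[str]]) -> str:
    # Non-streaming responses arrive as complete text...
    if isinstance(resp, str):
        return resp
    # ...while streaming responses are consumed chunk by chunk and joined.
    parts = []
    async for chunk in resp:
        parts.append(chunk)
    return "".join(parts)

print(asyncio.run(gather_completion_response("GraphRAG is neat.")))  # → GraphRAG is neat.
print(asyncio.run(gather_completion_response(fake_stream())))        # → GraphRAG is neat.
```

Either input shape yields the same complete text, which is why such a utility is a convenient alternative to branching on the response type at each call site.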

Basic Embedding

This example demonstrates how to generate text embeddings using the GraphRAG LLM library with Azure OpenAI's embedding service. It loads API credentials from environment variables, creates a ModelConfig for the Azure embedding model, and configures authentication to use either an API key or Azure Managed Identity. The script then creates an embedding client and processes a batch of two text strings ("Hello world" and "How are you?") to generate their vector embeddings.

Open the notebook to explore the basic embeddings example code
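The flow described above (environment variables → model config → batch embedding) can be sketched roughly as follows. The `ModelConfig` fields, `load_config`, and `embed_batch` here are illustrative assumptions, and the embedder is a local stub; a real client would call the Azure endpoint instead.

```python
import os
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelConfig:
    # Hypothetical stand-in for the library's ModelConfig.
    model: str
    api_base: str
    api_key: Optional[str] = None  # None -> fall back to managed identity

def load_config() -> ModelConfig:
    # API settings come from environment variables, as in the notebook.
    return ModelConfig(
        model=os.environ.get("EMBEDDING_MODEL", "text-embedding-3-small"),
        api_base=os.environ.get("AZURE_OPENAI_ENDPOINT", "https://example.openai.azure.com"),
        api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    )

def embed_batch(config: ModelConfig, texts: List[str]) -> List[List[float]]:
    # Stub embedder: returns one small placeholder vector per input string.
    # A real embedding client would send the batch to the configured endpoint.
    return [[float(len(t)), float(sum(map(ord, t)) % 97)] for t in texts]

vectors = embed_batch(load_config(), ["Hello world", "How are you?"])
print(len(vectors), "vectors")  # one vector per input text
```

The batch interface matters here: sending both strings in one request amortizes per-call overhead, which is the usual reason embedding clients accept lists rather than single strings.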

View the notebooks for more examples.