# Hugging Face for Excel

Breaking changes to the API are likely. Please provide feedback.
Run inference on Hugging Face models and spaces directly from Excel functions. Free to use.
## Overview
This add-in provides Excel custom functions that wrap existing Hugging Face JavaScript libraries (@huggingface/inference, @gradio/client, @xenova/transformers). Use models and spaces on Hugging Face without building, hosting, or publishing your own Excel add-in.
### Target Users
| User Type | Use Case |
|---|---|
| Data scientists | Provide end-users an easy way to use models from Excel |
| Excel power users | Build custom AI functions using existing Hugging Face models |
| Model testers & labelers | Run datasets through a model and annotate results |
Example: Build a custom translation function using a Hugging Face model, similar to the Translate for Excel add-in, which uses Transformers.js and the Opus-MT series of models.
These functions are designed to be wrapped in a named Excel LAMBDA function, which hides implementation details (e.g., task, model, hf_token, endpoint) from end-users.
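For example (a sketch; the name TRANSLATE_EN_DE and the choice of model are illustrative, not prescribed by the add-in), you could define a named LAMBDA in Excel's Name Manager:

```
=LAMBDA(text, HF.TRANSLATION("Helsinki-NLP/opus-mt-en-de", text))
```

Saved under the name TRANSLATE_EN_DE, end-users can then call `=TRANSLATE_EN_DE(A1)` without ever seeing the model name or token.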
## Features
| Feature | Description |
|---|---|
| 🆓 Free | Unlimited free use |
| 🔒 Private | None of your data is sent to our servers |
| 💻 Local Option | Run inference locally in your browser |
| 🚀 Fast Setup | Quickly test or operationalize your model |
## Demo
Download the demo workbook to see the functions in action. You’ll need to add your hf_token on the ReadMe sheet to use the serverless API inference.
## Functions
The Gradio function uses a repeating parameter to pass arbitrary data payloads. All other functions are designed around Transformers pipeline tasks (translation, summarization, etc.), supporting both local and API inference. We support a subset of Transformers tasks with their most common parameters.
### HF.GRADIO

Gradio is a popular way to build AI demos and production APIs. This function uses the Gradio JavaScript client to connect to a Gradio space API with user feedback on queue status and ETA.

```
=HF.GRADIO(hf_space, hf_token, arg1, arg2, ...)
```
| Parameter | Required | Description |
|---|---|---|
| hf_space | Yes | Gradio Space ID (e.g., "boardflare/translation") |
| hf_token | No | Required for private spaces (e.g., "hf_eiffw3..") |
| arg1, arg2, ... | Yes | Data elements passed to the space |
Example:

```
=HF.GRADIO("boardflare/translation", "hf_eiffw3..", "Hello", "en", "fr")
```
How arguments work: Arguments are packaged as an array and sent as {data: [arg1, arg2, ...]}. Pass arguments in the order expected by your Gradio app. The return must be a scalar type (string, number, boolean).
Note: Matrix arguments and returns are technically possible but not currently supported, because all arguments would be sent as matrices, even single cells (e.g., `{data: [[["foo"]], [["bar"]], ...]}`), which doesn't work with standard single-line Gradio interfaces. Use Python code on both sides to shape data as needed. See Architecture and Roadmap for more details.
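The packaging step above can be sketched in plain JavaScript. This helper is hypothetical (the add-in's actual source may differ), but it mirrors the `{data: [arg1, arg2, ...]}` shape and the scalar-only rule described in the note:

```javascript
// Hypothetical sketch of how HF.GRADIO might package its repeating
// arguments into the payload sent to a Gradio space; the real add-in's
// implementation may differ.
function buildGradioPayload(args) {
  // Only scalar cell values are supported (string, number, boolean).
  for (const arg of args) {
    const t = typeof arg;
    if (t !== "string" && t !== "number" && t !== "boolean") {
      throw new Error("HF.GRADIO arguments must be scalar values");
    }
  }
  // Arguments are sent in order as {data: [arg1, arg2, ...]}.
  return { data: args };
}
```

For example, `buildGradioPayload(["Hello", "en", "fr"])` produces `{ data: ["Hello", "en", "fr"] }`, matching the argument order expected by the Gradio app.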
### HF.TRANSLATION

```
=HF.TRANSLATION(hf_model, text, [src_lang], [tgt_lang], [hf_token], [hf_endpoint])
```
| Parameter | Required | Description |
|---|---|---|
| hf_model | Yes | Model name (e.g., Helsinki-NLP/opus-mt-en-de) |
| text | Yes | Text to translate |
| src_lang | No | Source language code (e.g., "en") |
| tgt_lang | No | Target language code (e.g., "de") |
| hf_token | No | Hugging Face token (e.g., hf_ddie3kd...) |
| hf_endpoint | No | Inference endpoint URL |
Output: Translated text string.
Examples:

```
=HF.TRANSLATION("facebook/mbart-large-50-many-to-many-mmt", "Hello, how are you?", "en", "de")
```

Serverless API translation from English to German.

```
=HF.TRANSLATION("Xenova/m2m100_418M", "Hello, how are you?", "en", "de")
```

Local inference translation from English to German.
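The pattern these examples illustrate can be sketched as a simple dispatch. This logic is an assumption about the behavior, not the add-in's documented source: an explicit inference endpoint takes priority, a token enables the serverless API, and otherwise inference runs locally in the browser:

```javascript
// Hypothetical sketch: choosing where an HF.TRANSLATION-style function
// runs inference, based on its optional hf_token / hf_endpoint
// parameters. This is an assumption, not the add-in's actual source.
function chooseBackend(hfToken, hfEndpoint) {
  if (hfEndpoint) {
    // An explicit inference endpoint URL wins.
    return { mode: "endpoint", url: hfEndpoint };
  }
  if (hfToken) {
    // A token enables the serverless Inference API.
    return { mode: "serverless" };
  }
  // Neither supplied: run locally in the browser (e.g., Xenova/* models).
  return { mode: "local" };
}
```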
Tips for translation models:
| Model Type | src_lang/tgt_lang Required | Notes |
|---|---|---|
| Single language-pair (e.g., Helsinki-NLP/opus-mt-en-de) | No | Trained for one language pair only |
| Multilingual (e.g., facebook/mbart-large-50-many-to-many-mmt) | Yes | Formats vary by model (en, en_XX, etc.). See the Transformers.js translation pipeline docs for common model parameters |
| Text-to-text (e.g., T5) | Prepend prompt | Use prompts like "translate English to French: ..." when using the inference API |
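For the text-to-text row above, the prompt prepending can be sketched as follows; the helper function and its small language-name map are hypothetical, shown only to make the prompt format concrete:

```javascript
// Hypothetical helper: builds a T5-style text-to-text prompt by
// prepending the instruction, as the tips table describes.
// Minimal language-name map (an assumption for illustration only).
const LANG_NAMES = { en: "English", fr: "French", de: "German" };

function t5TranslationPrompt(srcLang, tgtLang, text) {
  return `translate ${LANG_NAMES[srcLang]} to ${LANG_NAMES[tgtLang]}: ${text}`;
}
```

For example, `t5TranslationPrompt("en", "fr", "Hello")` yields `"translate English to French: Hello"`.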
### HF.SUMMARIZATION

```
=HF.SUMMARIZATION(hf_model, text, [max_length], [hf_token], [hf_endpoint])
```
| Parameter | Required | Description |
|---|---|---|
| hf_model | Yes | Model name (e.g., facebook/bart-large-cnn) |
| text | Yes | Text to summarize |
| max_length | No | Maximum summary length (tokens) |
| hf_token | No | Hugging Face token |
| hf_endpoint | No | Inference endpoint URL |
Output: Summarized text string.
Example:

```
=HF.SUMMARIZATION("facebook/bart-large-cnn", "The tower is 324 meters (1,063 ft) tall...")
```
### HF.ZERO_SHOT_CLASSIFICATION

```
=HF.ZERO_SHOT_CLASSIFICATION(hf_model, text, labels, [hf_token], [hf_endpoint])
```
| Parameter | Required | Description |
|---|---|---|
| hf_model | Yes | Model name (e.g., facebook/bart-large-mnli) |
| text | Yes | Text to classify |
| labels | Yes | Range of labels to score (e.g., D2:D5) |
| hf_token | No | Hugging Face token |
| hf_endpoint | No | Inference endpoint URL |
Output: A range of scores for each label, spilling across columns.
Example:

```
=HF.ZERO_SHOT_CLASSIFICATION("facebook/bart-large-mnli", "I am feeling great today", D2:D5)
```
Where D2:D5 contains labels like “positive”, “negative”, “neutral”, “mixed”.
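A sketch of how the scores might be shaped for spilling. The helper is hypothetical; `result` follows the typical Hugging Face zero-shot classification output shape, with parallel `labels` and `scores` arrays sorted by score:

```javascript
// Hypothetical sketch: turn a zero-shot classification result into a
// single-row 2D array, which Excel spills across columns in the same
// order as the labels range.
function toSpillRow(labels, result) {
  // result.labels and result.scores are parallel arrays.
  const scoreByLabel = new Map(
    result.labels.map((label, i) => [label, result.scores[i]])
  );
  // One row, one column per label; labels missing from the result score 0.
  return [labels.map((label) => scoreByLabel.get(label) ?? 0)];
}
```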
### HF.TEXT_GENERATION

```
=HF.TEXT_GENERATION(hf_model, text, [max_length], [hf_token], [hf_endpoint])
```
| Parameter | Required | Description |
|---|---|---|
| hf_model | Yes | Model name (e.g., Xenova/distilgpt2) |
| text | Yes | Text prompt to generate from |
| max_length | No | Maximum generated text length (tokens) |
| hf_token | No | Hugging Face token |
| hf_endpoint | No | Inference endpoint URL |
Output: Generated text string.
Example:

```
=HF.TEXT_GENERATION("Xenova/distilgpt2", "Once upon a time")
```
## Architecture
Excel add-ins with custom functions are web apps containing:

- functions.json: Defines function signatures in Excel
- functions.js: Runs when functions are called
- manifest.xml: Tells Excel where to find these files and provides add-in metadata
Example functions.json:

```json
{
  "functions": [
    {
      "description": "Performs sentiment analysis on a string.",
      "id": "SENTIMENT_ANALYSIS",
      "name": "SENTIMENT_ANALYSIS",
      "parameters": [
        {
          "name": "text",
          "description": "The text to analyze.",
          "type": "string"
        }
      ]
    }
  ]
}
```

Example functions.js:

```js
// Called by Excel whenever =SENTIMENT_ANALYSIS(text) is evaluated.
async function sentimentAnalysis(text) {
  const response = await fetch('https://api.example.com/sentiment', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: text })
  });
  const result = await response.json();
  return result.sentiment;
}

// Binds the JavaScript implementation to the function id in functions.json.
CustomFunctions.associate("SENTIMENT_ANALYSIS", sentimentAnalysis);
```

### Runtime Environment
functions.js runs in a browser—either the WebView2 control in Excel for Windows (Chromium-based) or an iframe in Excel for the web.
- Local inference: Uses a web worker running Transformers.js
- Python option: Pyodide or PyScript can run Python in the browser. The Anaconda Toolbox for Excel uses this approach.
### Python in Excel Integration
The built-in Python in Excel feature runs Python code in an Azure-hosted runtime. Anaconda Code provides local Python via PyScript. Both could wrap a generic Excel custom function that calls Hugging Face APIs, similar to the Gradio function but with more flexibility.
Note: The Python in Excel runtime is sandboxed and doesn’t allow network access, similar to LLM runtimes.
### Hosting Options
Since add-ins are web apps, you could host a custom Excel add-in on a Hugging Face static space (a reverse proxy is needed for CORS support in Excel for the web). You could even prompt an LLM to create an entirely custom Excel add-in and push the code to a static space.
## The Opportunity
There are over a billion Excel users worldwide, many of whom will start using Python with the upcoming general availability of Python in Excel. Adoption will be accelerated by LLMs such as Copilot in Excel that write Python code, enabling more Excel power users to become DIY data scientists who want to use Hugging Face tools and models from Excel.
## Roadmap
This is an initial version to gather feedback. Planned enhancements include:
| Enhancement | Description |
|---|---|
| More tasks | Support additional Transformers tasks, particularly chat completion prompts |
| WebGPU | Upgrade to Transformers.js v3 with WebGPU support for faster local inference |
| AutoTrain | AutoTrain API support to fine-tune models directly from Excel |
| Static space deployment | Deploy add-in to a HF static space for easy duplication and customization |
| Enterprise auth | Add user authentication and centralized hf_token management so tokens aren’t hard-coded in workbooks, with user-level metrics for endpoint consumption |
| Generic functions | Generic Gradio and Transformers functions that accept arbitrary range inputs and return range outputs, with one variant using a Python in Excel wrapper, eliminating the need to build, host, or publish an Excel add-in |
If any of these are of particular interest to you, please let us know.
