
AI Assistant

The SpreadJS AI add-on provides a framework that enhances AI interactions by supplying contextual spreadsheet data and parsing capabilities. This enables AI models to generate more accurate and spreadsheet-specific responses.

Installation and Setup

Adding the AI Add-on

To enable AI functionality in SpreadJS, you must include the AI add-on script in your project.

For Script Tag (header) Reference:

<script src="gc.spread.sheets.ai.x.x.x.min.js"></script>

For Module Import:

import '@mescius/spread-sheets-ai-addon';

Contextual Intelligence

SpreadJS intelligently extracts and organizes worksheet data to provide AI models with relevant context, resulting in more precise outputs.

​Example Scenario​​:

  • Without context: the AI guesses data ranges (=SUM(A1:A10))

  • With context: the AI references named ranges (=SUM(table1[sales]))

AI Model Integration Methods

SpreadJS provides flexible approaches to connect with AI models. Below are the detailed implementation methods:

1. Secure Backend Proxy

If you do not want to expose your API key in the browser, route each request through your own server, which forwards it to the AI service and returns the response data.

This is the most secure approach: API keys never leave the server.

Frontend Implementation:

const backendAIProxy = async (request) => {
    // Add SpreadJS metadata
    request.metadata = {
        spreadsheetId: workbook.getActiveSheet().name(),
        userId: currentUser.id,
        timestamp: new Date().toISOString()
    };
    
    const response = await fetch('/api/spreadjs-ai', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'X-Request-ID': generateUUID()
        },
        body: JSON.stringify(request)
    });
    
    if (!response.ok) {
        const error = await response.json();
        throw new Error(error.message || 'AI request failed');
    }
    
    return response;
};

workbook.injectAI(backendAIProxy);
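The frontend snippet above calls a `generateUUID` helper that is not part of SpreadJS; a minimal sketch using the built-in Web Crypto API (available in modern browsers and Node.js), with a fallback for older environments:

```javascript
// Generate an RFC 4122 v4 UUID for the X-Request-ID correlation header.
// crypto.randomUUID() is built into modern browsers and Node.js >= 16.7.
const generateUUID = () => {
    if (typeof crypto !== 'undefined' && crypto.randomUUID) {
        return crypto.randomUUID();
    }
    // Fallback for environments without crypto.randomUUID
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, c => {
        const r = Math.random() * 16 | 0;
        const v = c === 'x' ? r : (r & 0x3) | 0x8;
        return v.toString(16);
    });
};
```

Any unique-per-request string works here; the ID only needs to let you correlate frontend requests with backend logs.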

Backend Implementation (Node.js):

const { OpenAI } = require('openai');
const express = require('express');
const app = express();

// Parse JSON request bodies (required for req.body below)
app.use(express.json());

// Initialize AI client
const aiClient = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    organization: process.env.ORG_ID,
    timeout: 30000
});

// AI Proxy Endpoint
app.post('/api/spreadjs-ai', async (req, res) => {
    try {
        // 1. Request validation
        if (!req.body.messages || !Array.isArray(req.body.messages)) {
            return res.status(400).json({ error: 'Invalid request format' });
        }

        // 2. Process request with custom logic
        const completion = await aiClient.chat.completions.create({
            model: req.body.model || 'gpt-4-turbo',
            messages: req.body.messages,
            temperature: req.body.temperature || 0.5,
            max_tokens: req.body.max_tokens || 1000,
            stream: false
        });

        // 3. Log analytics (logAnalytics is your own logging helper)
        logAnalytics(req.body.metadata, completion.usage);

        // 4. Return formatted response
        res.json({
            success: true,
            data: completion.choices[0].message.content,
            usage: completion.usage
        });
        
    } catch (error) {
        console.error('AI Processing Error:', error);
        res.status(500).json({ 
            error: error.message,
            type: error.type || 'ai_service_error' 
        });
    }
});
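The endpoint above calls a `logAnalytics` helper that is not shown; a minimal sketch (the field names are illustrative, not a SpreadJS API — adapt them to your own logging pipeline):

```javascript
// Record per-request usage so token spend can be attributed to
// spreadsheets and users. `metadata` is the object attached by the
// frontend proxy; `usage` comes from the AI provider's response.
const analyticsLog = [];

function logAnalytics(metadata = {}, usage = {}) {
    const entry = {
        spreadsheetId: metadata.spreadsheetId || 'unknown',
        userId: metadata.userId || 'anonymous',
        timestamp: metadata.timestamp || new Date().toISOString(),
        promptTokens: usage.prompt_tokens || 0,
        completionTokens: usage.completion_tokens || 0,
        totalTokens: usage.total_tokens || 0
    };
    analyticsLog.push(entry);
    return entry;
}
```

In production you would write these entries to a database or metrics service rather than an in-memory array.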

2. Direct API Configuration

If you are comfortable with the AI configuration appearing in HTTP request bodies (shipping your API key to the client is not recommended), you can pass the configuration object directly, for example injected from an environment variable at build time.

// Initialize SpreadJS workbook
const workbook = new GC.Spread.Sheets.Workbook('ss');

// Directly configure AI service credentials
workbook.injectAI({
    model: 'gpt-4-turbo',  // Specify your AI model
    key: 'sk-your-api-key-here',  // Your API key
    basePath: 'https://api.openai.com/v1',  // API endpoint
    
    // Optional advanced parameters
    organization: 'your-org-id',  // For OpenAI organizations
    timeout: 30000,  // Request timeout in ms
    defaultTemperature: 0.7  // Default creativity level
});
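To keep the key itself out of source control, the configuration can be assembled from environment variables at build or start time. The variable names below are assumptions; use whatever your build tool (webpack, Vite, etc.) injects:

```javascript
// Build the injectAI configuration from environment variables instead of
// hard-coding credentials. Bundlers can inline these values at build time.
function loadAIConfig(env = process.env) {
    if (!env.OPENAI_API_KEY) {
        throw new Error('OPENAI_API_KEY is not set');
    }
    return {
        model: env.AI_MODEL || 'gpt-4-turbo',
        key: env.OPENAI_API_KEY,
        basePath: env.AI_BASE_PATH || 'https://api.openai.com/v1'
    };
}

// Usage: workbook.injectAI(loadAIConfig());
```

Note that values inlined by a bundler still ship to the browser; this only avoids committing the key to your repository, not exposing it to users.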

3. Custom Client-Side Handler

If you are comfortable with the AI configuration appearing in HTTP request bodies (again, shipping your API key to the client is not recommended) but want to inspect each request for sensitive data and sanitize it before it is sent, you can supply a custom handler:

const aiHandler = async (requestConfig) => {
    // 1. Add required model configuration
    requestConfig.model = 'gpt-4-turbo';
    
    // 2. Data sanitization (example)
    const sanitizedMessages = requestConfig.messages.map(msg => ({
        ...msg,
        content: msg.content.replace(/credit-card-\d{4}/g, '****')
    }));
    
    // 3. Custom headers and parameters
    const requestOptions = {
        method: 'POST',
        headers: {
            'Authorization': `Bearer ${API_KEY}`,  // API_KEY loaded from your own config
            'Content-Type': 'application/json',
            'X-SpreadJS-Version': '16.0.0'
        },
        body: JSON.stringify({
            ...requestConfig,
            messages: sanitizedMessages
        })
    };
    
    // 4. Error handling with retry logic
    let retries = 3;
    while (retries > 0) {
        try {
            const response = await fetch('https://api.openai.com/v1/chat/completions', requestOptions);
            if (!response.ok) throw new Error(`HTTP ${response.status}`);
            return response;
        } catch (error) {
            if (--retries === 0) throw error;
            await new Promise(resolve => setTimeout(resolve, 1000));
        }
    }
};

workbook.injectAI(aiHandler);

Language Localization

SpreadJS automatically requests AI responses in the workbook's current language:

let culture = GC.Spread.Common.CultureManager.culture(); // e.g. 'ja-jp'
let language = GC.Spread.Common.CultureManager.getCultureInfo(culture).displayName; // e.g. 'Japanese (Japan)'

// SpreadJS appends an instruction like this to its prompts:
// 'please return the answer by this language: ' + language
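The same pattern can be wrapped in a small helper when building your own prompts (this function is illustrative, not part of the SpreadJS API):

```javascript
// Append SpreadJS's language instruction to a prompt so the model answers
// in the workbook's culture. `language` is the display name returned by
// CultureManager.getCultureInfo(culture).displayName.
function localizePrompt(prompt, language) {
    if (!language) return prompt;
    return prompt + '\nplease return the answer by this language: ' + language;
}
```

If no language is supplied, the prompt is passed through unchanged and the model falls back to its default response language.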

Security Best Practices

  1. ​Data Protection​​:

    • Always sanitize sensitive spreadsheet data

    • Consider field redaction in callbacks

  2. ​Credential Security​​:

    • Never expose API keys directly in client-side code

    • Use server proxies in production

  3. ​Validation​​:

    • Verify all AI-generated formulas/content

    • Implement output sanitization
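The sanitization and redaction points above can be sketched as a small pre-send filter. The patterns are examples only; extend them for the sensitive fields in your own data:

```javascript
// Redact common sensitive patterns from message content before it is
// sent to a third-party AI service. Patterns here are illustrative.
const REDACTION_RULES = [
    { pattern: /\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b/g, replacement: '[CARD]' },  // card-like numbers
    { pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, replacement: '[EMAIL]' }              // email addresses
];

function sanitizeMessages(messages) {
    return messages.map(msg => ({
        ...msg,
        content: REDACTION_RULES.reduce(
            (text, rule) => text.replace(rule.pattern, rule.replacement),
            msg.content
        )
    }));
}
```

A filter like this slots naturally into the custom client-side handler (method 3) or the backend proxy (method 1) before the request is forwarded.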

AI-Generated Content Disclaimer


1. Content Generation Risks

This service utilizes third-party AI models injected by users to generate outputs. Results may contain inaccuracies, omissions, or misleading content due to inherent limitations in model architectures and training data. While we implement prompt engineering and technical constraints to optimize outputs, we cannot eliminate all error risks stemming from fundamental model deficiencies.


2. User Verification Obligations

By using this service, you acknowledge and agree to:

  • Conduct manual verification of all generated content

  • Refrain from using unvalidated outputs in high-risk scenarios (legal, medical, financial, etc.)

  • Hold us harmless for any direct/indirect damages caused by reliance on generated content

3. Technical Limitations

We disclaim responsibility for:

  • Output failures caused by third-party model defects or logic errors

  • Unsuccessful error recovery attempts through fault-tolerant procedures

  • Technical constraints inherent in current AI technologies

4. Intellectual Property Compliance

You must ensure:

  • Injected models/content do not infringe third-party rights

  • No illegal/sensitive material is processed through the service

  • Compliance with model providers' IP agreements

5. Agreement Updates

We reserve the right to modify these terms to align with:

  • Technological advancements (e.g. new AI safety protocols)

  • Regulatory changes (e.g. updated AI governance frameworks)

  • Service architecture improvements