AI-Powered Features
Master integrating AI services (OpenAI, Anthropic) into Buzzy apps. Build intelligent features with Buzzy Functions, prompt engineering, and cost management.
Overview
What we're building: A content assistant app that helps users write, edit, and improve text using AI via Buzzy Functions.
Non-technical explanation: Imagine adding a smart writing assistant directly into your app—like having ChatGPT built in, but customized for your specific use case. Users can generate content, improve writing, summarize text, or translate languages without leaving your app. We'll use Buzzy Functions to safely connect to AI services like OpenAI.
Time commitment: 6-10 hours total
Setup and API access: 45 minutes
Building Buzzy Function: 2-3 hours
App interface and UX: 2-3 hours
Prompt engineering and testing: 2-3 hours
Cost tracking and error handling: 1-2 hours
Difficulty: 🔴 Advanced - Requires understanding of APIs, server-side code, and AI prompting
Prerequisites:
✅ Completed External API Integration tutorial
✅ Understanding of Buzzy Functions architecture
✅ Familiarity with AI prompting concepts
✅ Reviewed Buzzy Functions documentation
✅ Payment method for AI service (OpenAI requires credit card)
✅ Understanding of token-based pricing models
What you'll learn:
🤖 Creating Buzzy Functions for AI service integration (OpenAI, Anthropic)
🔒 Using Buzzy Constants for secure API key storage with AES encryption
✍️ Prompt engineering for AI features (getting quality results)
💰 Managing AI API costs with usage tracking and limits
⚠️ Error handling for AI services in Buzzy apps
📞 Calling Buzzy Functions from app actions with parameters
🎯 Best practices for production AI features
AI Services Overview
Available AI APIs
OpenAI (GPT-4, GPT-3.5):
Most popular
Good documentation
Variety of models
Pricing: $0.03-$0.06 per 1K tokens (GPT-4)
Anthropic (Claude):
Strong safety features
Long context windows
Good for complex tasks
Similar pricing to OpenAI
Others:
Google (Gemini)
Cohere
AI21 Labs
For this example: we'll use OpenAI, but the patterns apply to all of these services
Cost Considerations
Pricing factors:
Model used (GPT-4 more expensive than GPT-3.5)
Input tokens (your prompt)
Output tokens (AI response)
Features (embeddings, fine-tuning cost extra)
Cost management:
Cache common prompts
Use cheaper models when possible
Limit response length
Implement usage limits per user
Monitor spending
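The "cache common prompts" idea above can be sketched as a small helper. This is a minimal in-memory sketch (the `cachedAiCall` name and shape are illustrative, not a Buzzy API); in a real app you would persist results to a Datatable, since Lambda instances are short-lived.

```javascript
// Minimal in-memory cache for AI results, keyed by feature + input text.
// Illustrative only: in production, store results in a Datatable instead.
const cache = new Map();

async function cachedAiCall(feature, text, callAi) {
  const key = `${feature}:${text}`;
  if (cache.has(key)) {
    return cache.get(key); // reuse the earlier result, no tokens spent
  }
  const result = await callAi(text);
  cache.set(key, result);
  return result;
}
```

Repeated requests for the same feature and text then cost nothing after the first call.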
The Content Assistant Project
Features:
Write new content from prompts
Improve existing text
Summarize long text
Change tone (professional, casual, etc.)
Check grammar and spelling
Translate to different languages
Chat interface
Use case: Helps users create better content, similar to ChatGPT but integrated into your app.
Step 1: Setup (45 minutes)
Get API Access
OpenAI:
Go to platform.openai.com
Create account
Add payment method (required)
Generate API key
Set spending limits (recommended: start with $10/month)
Test API key with simple request
API Key security with Buzzy:
Store in Buzzy Constants (AES encrypted)
Never expose to client
Access via BUZZYCONSTANTS() in Functions
Rotate periodically in Settings
Understand Token Usage
Tokens:
Chunks of text (roughly 4 characters = 1 token)
"Hello world" = ~2 tokens
Average English word = ~1.3 tokens
Example costs (GPT-3.5-turbo):
Short article (500 words) = ~650 tokens
Cost: $0.001-0.002 per generation
1000 articles = $1-2
Token calculator: platform.openai.com/tokenizer
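The rules of thumb above (roughly 4 characters per token) can be turned into a quick estimator. This is a rough sketch using the GPT-3.5-turbo rate quoted in this guide; for exact counts use the official tokenizer, and verify current pricing.

```javascript
// Rough token and cost estimator based on the ~4 characters = 1 token
// rule of thumb. For exact counts, use platform.openai.com/tokenizer.
const RATE_PER_1K_TOKENS = { 'gpt-3.5-turbo': 0.002 }; // illustrative rate

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function estimateCost(text, model = 'gpt-3.5-turbo') {
  const tokens = estimateTokens(text);
  return (tokens / 1000) * RATE_PER_1K_TOKENS[model];
}
```

With this rule, "Hello world" estimates to 3 tokens, close to the actual ~2 — good enough for budgeting, not for billing.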
Plan the Buzzy Application
Data Model in Buzzy:
Documents Datatable:
title (text field)
content (long text field)
original_content (long text field, for comparison)
created_by (automatically set to current user)
created_at (date field)
updated_at (date field)
Viewers field (current user - security)
AI_Requests Datatable (optional, for tracking):
user_id (automatically set to current user)
feature (text field: "improve", "summarize", etc.)
tokens_used (number field)
cost (number field)
created_at (date field)
Viewers field (current user - security)
User Flow:
User creates/opens document in Buzzy app
↓
Types or pastes content
↓
Clicks AI action button (improve, summarize, etc.)
↓
Buzzy app calls Buzzy Function → Function calls OpenAI → Returns result
↓
Result displayed in modal
↓
User can accept, modify, or try again
↓
Document saved to Datatable
Step 2: Initial Build with Buzzy AI v3 (60 minutes)
The Prompt
In Buzzy Workspace, create a new app:
Create a content assistant application with AI-powered writing features:
Data Model:
- Documents Datatable:
- title (text field, required)
- content (long text field)
- original_content (long text field)
- created_at (date field, automatic)
- updated_at (date field, automatic)
- Viewers field (current user)
Screens:
1. Documents List screen:
- Show all user's documents
- Display title, preview of content (first 100 chars), last updated
- "New Document" button at top
- Search bar for documents
- Empty state: "No documents yet. Create your first document!"
2. Document Editor screen:
- Title input field
- Large text area for content editing
- Word count display below text area
- AI Tools panel with buttons:
* "Improve Writing"
* "Summarize"
* "Change Tone"
* "Check Grammar"
- "Save" button
- "Back to List" button
3. AI Result Modal (popup):
- Shows AI-generated result text
- "Use This" button (replaces editor content)
- "Copy to Clipboard" button
- "Try Again" button
- "Cancel" button
- Loading indicator for when AI is processing
Features:
- Auto-save when user clicks Save button
- Mobile-responsive design
- Clean, modern interface
- Viewers field ensures users only see their own documents
Security:
- Use Viewers field on Documents to ensure data privacy
Note: We'll add actual AI functionality using Buzzy Functions in next steps.
Step 3: Create Buzzy Functions for AI Integration (90-120 minutes)
Step 3a: Store OpenAI API Key in Buzzy Constants
Security first: Never hard-code API keys
Create a Buzzy Constant:
In Buzzy Workspace, go to Settings tab
Click Constants section
Click Add Constant
Name: OPENAI_API_KEY
Value: [paste your OpenAI API key]
Description: "OpenAI API key for AI features"
Save
Buzzy Constants are encrypted with AES encryption. Your Buzzy Functions can access them securely using BUZZYCONSTANTS(), but they're never exposed to the client.
Step 3b: Create Buzzy Functions for AI Features
Create a Buzzy Function for each AI feature. Here's the pattern:
Function 1: improveWriting
In Settings → Functions, click Add Function
Name: improveWriting
Description: "Improves writing quality using OpenAI"
Runtime: Node.js 22
Lambda function code (minimal example):
export const handler = async (event) => {
  try {
    const { text } = event;
    if (!text || text.length < 10) {
      return {
        statusCode: 400,
        body: JSON.stringify({ error: 'Text is required (min 10 characters)' })
      };
    }
    const apiKey = BUZZYCONSTANTS('OPENAI_API_KEY');
    const url = 'https://api.openai.com/v1/chat/completions';
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${apiKey}`
      },
      body: JSON.stringify({
        model: 'gpt-3.5-turbo',
        messages: [
          {
            role: 'system',
            content: 'You are a professional writing assistant. Improve the text to be clearer, more engaging, and better written while maintaining the original meaning and tone.'
          },
          {
            role: 'user',
            content: `Please improve this text:\n\n${text}`
          }
        ],
        max_tokens: 1000,
        temperature: 0.7
      })
    });
    if (!response.ok) {
      const error = await response.json();
      throw new Error(error.error?.message || 'OpenAI API error');
    }
    const data = await response.json();
    return {
      statusCode: 200,
      body: JSON.stringify({
        improvedText: data.choices[0].message.content,
        tokensUsed: data.usage.total_tokens,
        model: data.model
      })
    };
  } catch (error) {
    console.error('Error:', error);
    return {
      statusCode: 500,
      body: JSON.stringify({ error: 'Failed to improve text' })
    };
  }
};
Function 2: summarizeText (similar pattern):
export const handler = async (event) => {
  try {
    const { text } = event;
    if (!text || text.length < 100) {
      return {
        statusCode: 400,
        body: JSON.stringify({ error: 'Text too short to summarize (min 100 characters)' })
      };
    }
    const apiKey = BUZZYCONSTANTS('OPENAI_API_KEY');
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${apiKey}`
      },
      body: JSON.stringify({
        model: 'gpt-3.5-turbo',
        messages: [
          {
            role: 'system',
            content: 'You are a summarization assistant. Create concise, accurate summaries.'
          },
          {
            role: 'user',
            content: `Summarize this text in 2-3 sentences:\n\n${text}`
          }
        ],
        max_tokens: 500
      })
    });
    if (!response.ok) {
      const error = await response.json();
      throw new Error(error.error?.message || 'OpenAI API error');
    }
    const data = await response.json();
    return {
      statusCode: 200,
      body: JSON.stringify({
        summary: data.choices[0].message.content,
        tokensUsed: data.usage.total_tokens
      })
    };
  } catch (error) {
    console.error('Error:', error);
    return {
      statusCode: 500,
      body: JSON.stringify({ error: 'Failed to summarize text' })
    };
  }
};
Create similar Functions for:
changeTone (accepts text and tone parameters)
checkGrammar (accepts text parameter)
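For changeTone, the only real change from the improveWriting pattern is the request body. A sketch of building it as a helper (the `buildToneRequest` name, the allowed-tone list, and the prompt wording are illustrative, not a Buzzy or OpenAI API):

```javascript
// Builds the OpenAI chat-completions request body for a changeTone
// Function. Prompt wording is illustrative; tune it for your app.
const ALLOWED_TONES = ['professional', 'casual', 'friendly', 'formal'];

function buildToneRequest(text, tone) {
  if (!ALLOWED_TONES.includes(tone)) {
    throw new Error(`Unsupported tone: ${tone}`);
  }
  return {
    model: 'gpt-3.5-turbo',
    messages: [
      {
        role: 'system',
        content: `You are a writing assistant. Rewrite the user's text in a ${tone} tone while preserving its meaning.`
      },
      { role: 'user', content: `Rewrite this text:\n\n${text}` }
    ],
    max_tokens: 1000
  };
}
```

Inside the handler, pass `JSON.stringify(buildToneRequest(text, tone))` as the fetch body, keeping the same validation and error handling as the other Functions.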
Step 3c: Call Buzzy Functions from Your App
Integrate Functions with button actions:
Option 1 - Use Buzzy AI to integrate:
Connect the AI Functions to the editor buttons:
1. "Improve Writing" button:
- On click, show loading indicator
- Call improveWriting Function with the content from editor
- On success: Show result in AI Result Modal
- On error: Show error message
- Track tokens used
2. "Summarize" button:
- Validate content is at least 100 characters
- If too short: show "Please add more content to summarize"
- Call summarizeText Function
- Display summary in modal
3. "Change Tone" button:
- Show dropdown to select tone: Professional, Casual, Friendly, Formal
- Call changeTone Function with selected tone
- Display result in modal
4. "Check Grammar" button:
- Call checkGrammar Function
- Display corrected text in modal
5. For all buttons:
- Disable button while Function is executing
- Show loading spinner
- Handle Function errors with user-friendly messages
- Enable button after completion
6. In AI Result Modal:
- "Use This" button replaces editor content with result
- "Copy" button copies result to clipboard
- "Try Again" button calls same Function again
Option 2 - Manual configuration in Design tab:
Go to Design tab → Document Editor screen
Click on "Improve Writing" button
In button properties, add Action: Call Function
Select Function: improveWriting
Set Parameters:
{ "text": "[content from editor]" }
Configure success action: Show AI Result Modal with response data
Configure error action: Show error message
Add loading indicator
Repeat for each AI button
Step 4: Advanced - Streaming AI Responses (Optional, 60 minutes)
Why Stream?
Benefits:
Better user experience (see results appear word-by-word)
Feels faster and more engaging
User can stop if result isn't helpful
More ChatGPT-like experience
Implementation with Buzzy Functions
Streaming requires more complex setup:
Buzzy Functions can stream responses using AWS Lambda streaming
Requires additional code to handle Server-Sent Events (SSE)
Your Buzzy app needs to handle streaming data reception
For most use cases, standard (non-streaming) responses work well. Consider streaming only if:
Responses are typically very long (500+ tokens)
User experience requires instant feedback
You have experience with streaming APIs
If you need streaming:
Modify your Buzzy Function to use Lambda streaming responses
Use Server-Sent Events (SSE) pattern
Handle stream chunks in your Buzzy app using Code Widget
Display partial results as they arrive
This is an advanced topic beyond the scope of this basic guide.
Step 5: Cost Management in Buzzy (45 minutes)
Track Usage in Your Buzzy App
Create tracking in your Buzzy Functions:
Each Function can save usage data to the AI_Requests Datatable:
// At the end of your Buzzy Function, add tracking
const tokensUsed = data.usage.total_tokens;
const costPer1kTokens = 0.002; // GPT-3.5-turbo rate
const cost = (tokensUsed / 1000) * costPer1kTokens;
// You can use BuzzyFrameAPI from Code Widget to save this
// Or return it to the app and save from there
Better approach - Track from Buzzy app:
When a Function succeeds, save the usage data:
After each successful AI Function call:
1. Extract tokensUsed from Function response
2. Calculate cost based on model:
- GPT-3.5: $0.002 per 1K tokens
- GPT-4: $0.03 per 1K tokens
3. Save to AI_Requests Datatable:
- feature name (e.g., "improveWriting")
- tokens_used
- cost
- current user
- current date/time
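The cost arithmetic in step 2 can be sketched as a small helper. The rates are the ones quoted in this guide and may be out of date; check current OpenAI pricing before relying on them.

```javascript
// Converts a Function's tokensUsed into an estimated dollar cost, using
// the per-1K-token rates quoted in this guide (verify current pricing).
const COST_PER_1K_TOKENS = {
  'gpt-3.5-turbo': 0.002,
  'gpt-4': 0.03
};

function calculateCost(tokensUsed, model) {
  const rate = COST_PER_1K_TOKENS[model];
  if (rate === undefined) {
    throw new Error(`Unknown model: ${model}`);
  }
  return (tokensUsed / 1000) * rate;
}
```

Save the returned value into the cost field of the AI_Requests Datatable alongside tokens_used.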
Implement Usage Limits
Option 1 - Check limits in Buzzy app:
Before calling AI Functions:
1. Count today's AI_Requests for current user
2. If count >= daily limit (e.g., 50):
- Show message: "Daily AI limit reached (50 requests). Resets tomorrow."
- Disable AI buttons
- Show when limit resets
3. Otherwise, allow Function call
Option 2 - Check limits in Buzzy Function:
Add limit checking logic at the start of each Function that queries AI_Requests count.
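Either way, the decision itself is simple once you have today's request count. A sketch of the guard (the `checkDailyLimit` helper is illustrative; the 50-per-day limit matches the one used in this guide):

```javascript
// Decides whether an AI call is allowed, given how many AI_Requests the
// current user has already made today. 50/day matches this guide.
const DAILY_LIMIT = 50;

function checkDailyLimit(requestsToday, limit = DAILY_LIMIT) {
  if (requestsToday >= limit) {
    return {
      allowed: false,
      message: `Daily AI limit reached (${limit} requests). Resets tomorrow.`
    };
  }
  return { allowed: true, remaining: limit - requestsToday };
}
```

In the app, call this before each AI Function; when `allowed` is false, show the message and disable the AI buttons.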
Display Usage to Users
Add usage tracking UI using Buzzy AI:
Add AI usage tracking display to the app:
1. On Documents List screen header:
- Display text: "AI Requests Today: X/50"
- Use formula field to count today's AI_Requests
- Update after each AI Function call
2. Visual indicators:
- Green when usage < 40
- Yellow warning at 40-47 requests
- Red warning at 48-49 requests
- Disabled buttons at 50 requests with clear message
3. Optional settings screen:
- Show total AI requests this month
- Show estimated monthly cost
- Breakdown by feature type
Optimize AI Costs
Best practices for your Buzzy Functions:
Use GPT-3.5-turbo for most tasks (10x cheaper than GPT-4)
Set appropriate max_tokens limits (don't use 2000 if 500 is enough)
Use shorter, focused system prompts
Cache results when appropriate (store in Datatable)
Consider batching similar requests
Monitor usage via AI_Requests Datatable
Step 6: Error Handling in Buzzy (30 minutes)
Common Errors from AI Functions
API Errors from OpenAI:
Rate limit exceeded (429)
Invalid API key (401)
Model overloaded (503)
Request too large (400)
Content policy violation (400)
Buzzy Function Errors:
Function timeout (30 second limit)
Invalid parameters
Missing Constants
Graceful Error Handling
Implement in your Buzzy Functions (return appropriate statusCodes):
// In your Buzzy Function handler
try {
  // ... OpenAI API call ...
} catch (error) {
  if (error.message.includes('429')) {
    return {
      statusCode: 429,
      body: JSON.stringify({
        error: 'AI service is temporarily busy. Please try again in a moment.'
      })
    };
  }
  if (error.message.includes('401')) {
    return {
      statusCode: 401,
      body: JSON.stringify({
        error: 'Configuration error. Please contact support.'
      })
    };
  }
  // Generic error
  return {
    statusCode: 500,
    body: JSON.stringify({
      error: 'Unable to process request. Please try again.'
    })
  };
}
Handle errors in your Buzzy app:
Configure error actions in Design tab for each AI button:
1. On Function error (statusCode 429):
- Show message: "AI service is busy. Please wait and try again."
- Add "Retry" button
2. On Function error (statusCode 400):
- Check error message
- If "too long": "Text is too long. Please shorten it and try again."
- If "too short": "Text is too short. Please add more content."
- If "empty": "Please enter some text first."
3. On Function timeout:
- Show message: "Request timed out. Try with shorter text."
- Provide "Try Again" button
4. On usage limit reached (from app-side check):
- Show message: "Daily AI limit reached (50 requests). Resets at midnight."
- Disable AI buttons
- Show countdown to reset
5. General error handling:
- Never show technical errors to users
- Always suggest an action ("Try again", "Shorten text", "Contact support")
- Don't lose user's document content on error
- Log errors for debugging (Console logs in Functions)
Step 7: Advanced Features (Optional)
Conversation Memory for Chat Features
For chat-style interactions:
Store conversation history in a Datatable (Messages Subtable under Documents)
Pass conversation history to your Buzzy Function
Function includes history in OpenAI API call
Keep last 10 messages to manage token costs
Example: See Buzzy AI Chat App for a complete chat implementation.
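The "keep last 10 messages" step above can be sketched as a helper that builds the messages array for each chat call (the `buildChatMessages` name is illustrative):

```javascript
// Builds the messages array for a chat-style OpenAI call: system prompt
// plus recent history, trimmed to the last 10 messages to cap token costs.
const MAX_HISTORY = 10;

function buildChatMessages(systemPrompt, history, newUserMessage) {
  const recent = history.slice(-MAX_HISTORY);
  return [
    { role: 'system', content: systemPrompt },
    ...recent,
    { role: 'user', content: newUserMessage }
  ];
}
```

Pass the stored Messages Subtable rows (as `{ role, content }` objects) in as `history`; older messages simply fall out of the window.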
Custom Templates Using Buzzy
Add template library:
Create Templates Datatable:
- template_name (text)
- template_prompt (long text)
- category (text)
- placeholders (JSON - list of fields to fill)
Add Templates screen:
- List of available templates
- User selects template
- Fill in placeholder fields
- Click "Generate" to call AI Function with filled prompt
- Display result in editor
Example templates:
- "Write a blog post about [topic]"
- "Create social media post for [product]"
- "Write email to [recipient] about [subject]"
AI-Powered Search (Advanced)
For finding similar documents:
Create a Buzzy Function that generates embeddings using OpenAI embeddings API
Store embeddings in Documents Datatable (JSON field)
Create search Function that compares embeddings
Use for semantic search (find documents with similar meaning)
This is an advanced topic requiring vector similarity calculations.
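The core of that vector comparison is cosine similarity. A minimal sketch of the calculation a search Function would run against each stored embedding:

```javascript
// Cosine similarity between two embedding vectors. A search Function can
// rank stored document embeddings against a query embedding with this:
// values near 1 mean similar meaning, near 0 mean unrelated.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) {
    throw new Error('Vectors must have the same length');
  }
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Ranking all documents this way is O(n) per search, which is fine for hundreds of documents; beyond that, a dedicated vector store is worth considering.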
Testing AI Features in Buzzy
Test Buzzy Functions Independently
In Settings → Functions, test each Function with sample input before wiring it to the app's buttons.
Test in Buzzy App Preview Mode
Work through four areas: functional testing (each AI button returns a sensible result), edge cases (empty, very short, and very long text), error scenarios (invalid API key, rate limits, timeouts), and cost and limits testing (usage is recorded in AI_Requests and the daily limit disables the buttons).
Best Practices Summary
AI Integration with Buzzy Functions:
✅ Always use Buzzy Functions for AI API calls (never from client)
✅ Store API keys in Buzzy Constants (AES encrypted)
✅ Implement usage limits and tracking
✅ Track token usage and costs in Datatables
✅ Handle errors gracefully with user-friendly messages
✅ Test Functions independently before integrating
✅ Choose appropriate model (GPT-3.5 vs GPT-4) based on task complexity
✅ Set reasonable max_tokens limits
✅ Cache results when appropriate
What to avoid:
❌ Never expose API keys in app or Function code
❌ Don't call AI APIs directly from Buzzy app (use Functions)
❌ Don't allow unlimited free AI usage without limits
❌ Don't skip error handling in Functions
❌ Don't use GPT-4 for simple tasks (costs 10x more)
❌ Don't ignore token limits and costs
❌ Don't show technical errors to users
Buzzy Functions for AI - Key Benefits:
Secure API key storage with Constants
Server-side execution (no CORS, no key exposure)
Automatic scaling on AWS Lambda
Managed infrastructure by Buzzy
Easy to test and debug independently
Reusable across multiple apps
Next Steps
Enhance your AI-powered Buzzy app:
Add more AI Functions (translation, tone adjustment, content generation)
Implement chat interface with conversation memory (see Buzzy AI Chat App)
Add image generation using DALL-E API via Buzzy Functions
Build custom AI workflows combining multiple Functions
Add embeddings for semantic search
Integrate other AI services:
Anthropic Claude via Buzzy Functions
Google Gemini via Buzzy Functions
Custom fine-tuned models
Specialized AI services (sentiment analysis, language detection)
Pattern to remember:
Store AI API keys in Buzzy Constants
Create Buzzy Function for each AI capability
Configure app buttons to call Functions
Handle loading states and errors
Track usage and enforce limits
Display results in your app
Congratulations! You've built an AI-powered Buzzy application using Buzzy Functions. The patterns you learned—secure API integration, cost management, error handling—apply to any AI service. You can now add intelligence to any Buzzy app you build. The Buzzy Functions architecture keeps your API keys secure, scales automatically, and provides professional-grade reliability.