AI-Powered Features
Master integrating AI services (OpenAI, Anthropic) into Buzzy apps. Build intelligent features with Buzzy Functions, prompt engineering, and cost management.
Overview
What we're building: A content assistant app that helps users write, edit, and improve text using AI via Buzzy Functions.
Non-technical explanation: Imagine adding a smart writing assistant directly into your app—like having ChatGPT built in, but customized for your specific use case. Users can generate content, improve writing, summarize text, or translate languages without leaving your app. We'll use Buzzy Functions to safely connect to AI services like OpenAI.
Time commitment: 6-10 hours total
Setup and API access: 45 minutes
Building Buzzy Function: 2-3 hours
App interface and UX: 2-3 hours
Prompt engineering and testing: 2-3 hours
Cost tracking and error handling: 1-2 hours
Difficulty: 🔴 Advanced - Requires understanding of APIs, server-side code, and AI prompting
Prerequisites:
✅ Completed External API Integration tutorial
✅ Understanding of Buzzy Functions architecture
✅ Familiarity with AI prompting concepts
✅ Reviewed Buzzy Functions documentation
✅ Payment method for AI service (OpenAI requires credit card)
✅ Understanding of token-based pricing models
What you'll learn:
🤖 Creating Buzzy Functions for AI service integration (OpenAI, Anthropic)
🔒 Using Buzzy Constants for secure API key storage with AES encryption
✍️ Prompt engineering for AI features (getting quality results)
💰 Managing AI API costs with usage tracking and limits
⚠️ Error handling for AI services in Buzzy apps
📞 Calling Buzzy Functions from app actions with parameters
🎯 Best practices for production AI features
AI Services Overview
Available AI APIs
OpenAI (GPT-4, GPT-3.5):
Most popular
Good documentation
Variety of models
Pricing: ~$0.03-$0.06 per 1K tokens for GPT-4 (check current rates)
Anthropic (Claude):
Strong safety features
Long context windows
Good for complex tasks
Similar pricing to OpenAI
Others:
Google (Gemini)
Cohere
AI21 Labs
For this example: We'll use OpenAI, but the same patterns apply to all of these services
Cost Considerations
Pricing factors:
Model used (GPT-4 more expensive than GPT-3.5)
Input tokens (your prompt)
Output tokens (AI response)
Features (embeddings, fine-tuning cost extra)
Cost management:
Cache common prompts
Use cheaper models when possible
Limit response length
Implement usage limits per user
Monitor spending
The Content Assistant Project
Features:
Write new content from prompts
Improve existing text
Summarize long text
Change tone (professional, casual, etc.)
Check grammar and spelling
Translate to different languages
Chat interface
Use case: Helps users create better content, similar to ChatGPT but integrated into your app.
Step 1: Setup (45 minutes)
Get API Access
OpenAI:
Go to platform.openai.com
Create account
Add payment method (required)
Generate API key
Set spending limits (recommended: start with $10/month)
Test API key with simple request
API Key security with Buzzy:
Store in Buzzy Constants (AES encrypted)
Never expose to client
Access via BUZZYCONSTANTS() in Functions
Rotate periodically in Settings
Understand Token Usage
Tokens:
Chunks of text (roughly 4 characters = 1 token)
"Hello world" = ~2 tokens
Average English word = ~1.3 tokens
Example costs (GPT-3.5-turbo):
Short article (500 words) = ~650 tokens
Cost: $0.001-0.002 per generation
1000 articles = $1-2
Token calculator: platform.openai.com/tokenizer
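The arithmetic above can be sketched in code. This is a rough estimate only (the API reports exact usage after each call); the function names and rates here are ours, not a Buzzy or OpenAI API:

```javascript
// Rough token estimate using the ~4 characters per token rule of thumb for English.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Estimate cost in USD from token counts and per-1K-token rates.
// The default rates are illustrative -- check OpenAI's pricing page for current values.
function estimateCost(inputTokens, outputTokens, inputRatePer1K = 0.0005, outputRatePer1K = 0.0015) {
  return (inputTokens / 1000) * inputRatePer1K + (outputTokens / 1000) * outputRatePer1K;
}

const article = "word ".repeat(500); // a ~500-word article
console.log(estimateTokens(article)); // → 625 (rough estimate)
```

For precise numbers, use the `usage` object in the API response rather than estimating.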
Plan the Buzzy Application
Data Model in Buzzy:
Documents Datatable:
title (text field)
content (long text field)
original_content (long text field, for comparison)
created_by (automatically set to current user)
created_at (date field)
updated_at (date field)
Viewers field (current user - security)
AI_Requests Datatable (optional, for tracking):
user_id (automatically set to current user)
feature (text field: "improve", "summarize", etc.)
tokens_used (number field)
cost (number field)
created_at (date field)
Viewers field (current user - security)
User Flow:
User opens or creates a document
User clicks an AI action button (Improve Writing, Summarize, etc.)
The app calls the matching Buzzy Function with the document text
The Function calls OpenAI and returns the result
The app displays the result; the user accepts or discards the change
Step 2: Initial Build with Buzzy AI v3 (60 minutes)
The Prompt
In Buzzy Workspace, create a new app with Buzzy AI, describing the screens, AI action buttons, and the Datatables planned above.
Step 3: Create Buzzy Functions for AI Integration (90-120 minutes)
Step 3a: Store OpenAI API Key in Buzzy Constants
Security first: Never hard-code API keys
Create a Buzzy Constant:
In Buzzy Workspace, go to Settings tab
Click Constants section
Click Add Constant
Name: OPENAI_API_KEY
Value: [paste your OpenAI API key]
Description: "OpenAI API key for AI features"
Save
Buzzy Constants are encrypted with AES encryption. Your Buzzy Functions can access them securely using BUZZYCONSTANTS(), but they're never exposed to the client. Learn more.
Step 3b: Create Buzzy Functions for AI Features
Create a Buzzy Function for each AI feature. Here's the pattern:
Function 1: improveWriting
In Settings → Functions, click Add Function
Name: improveWriting
Description: "Improves writing quality using OpenAI"
Runtime: Node.js 22
Lambda function code (minimal example):
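A minimal sketch of such a Function, assuming Node.js 22's built-in fetch, a Lambda-style handler, and Buzzy's BUZZYCONSTANTS() accessor (its exact signature, the event shape, and the helper name are assumptions; adjust to match your workspace):

```javascript
// Sketch only -- BUZZYCONSTANTS() and the event/response shapes are assumptions
// based on the Buzzy Functions docs; adjust to match your runtime.

// Pure helper: build the chat messages for the "improve writing" task.
function buildMessages(text) {
  return [
    {
      role: "system",
      content:
        "You are an editor. Improve the clarity and grammar of the user's text. Return only the improved text.",
    },
    { role: "user", content: text },
  ];
}

// Lambda-style handler; export as your Function's entry point (e.g. exports.handler = handler).
async function handler(event) {
  const text = event.text;
  if (!text || !text.trim()) {
    return { statusCode: 400, body: JSON.stringify({ error: "Missing 'text' parameter" }) };
  }

  const apiKey = BUZZYCONSTANTS("OPENAI_API_KEY"); // stored AES-encrypted in Buzzy Constants

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // cheaper default; use GPT-4 only for complex tasks
      messages: buildMessages(text),
      max_tokens: 500, // cap output length to control cost
    }),
  });

  if (!response.ok) {
    return { statusCode: response.status, body: JSON.stringify({ error: "AI service error" }) };
  }

  const data = await response.json();
  return {
    statusCode: 200,
    body: JSON.stringify({
      result: data.choices[0].message.content,
      tokensUsed: data.usage.total_tokens, // save to AI_Requests for tracking
    }),
  };
}
```

Validating parameters before touching the API key or the network keeps failed calls free.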
Function 2: summarizeText (similar pattern):
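Only the prompt and the output cap need to change from Function 1. A sketch of the differing piece (the helper name is ours):

```javascript
// summarizeText differs from improveWriting mainly in its system prompt and
// a smaller max_tokens cap (e.g. 150). Helper name is an assumption.
function buildSummarizeMessages(text) {
  return [
    {
      role: "system",
      content: "Summarize the user's text in 2-3 sentences. Return only the summary.",
    },
    { role: "user", content: text },
  ];
}
```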
Create similar Functions for:
changeTone (accepts text and tone parameters)
checkGrammar (accepts text parameter)
Step 3c: Call Buzzy Functions from Your App
Integrate Functions with button actions:
Option 1 - Use Buzzy AI to integrate: describe in a prompt which buttons should call which Functions, with what parameters, and how to display the results.
Option 2 - Manual configuration in Design tab:
Go to Design tab → Document Editor screen
Click on "Improve Writing" button
In button properties, add Action: Call Function
Select Function: improveWriting
Set Parameters: { "text": "[content from editor]" }
Configure success action: Show AI Result Modal with response data
Configure error action: Show error message
Add loading indicator
Repeat for each AI button
Step 4: Advanced - Streaming AI Responses (Optional, 60 minutes)
Why Stream?
Benefits:
Better user experience (see results appear word-by-word)
Feels faster and more engaging
User can stop if result isn't helpful
More ChatGPT-like experience
Implementation with Buzzy Functions
Streaming requires more complex setup:
Buzzy Functions can stream responses using AWS Lambda streaming
Requires additional code to handle Server-Sent Events (SSE)
Your Buzzy app needs to handle streaming data reception
For most use cases, standard (non-streaming) responses work well. Consider streaming only if:
Responses are typically very long (500+ tokens)
User experience requires instant feedback
You have experience with streaming APIs
If you need streaming:
Modify your Buzzy Function to use Lambda streaming responses
Use Server-Sent Events (SSE) pattern
Handle stream chunks in your Buzzy app using Code Widget
Display partial results as they arrive
This is an advanced topic beyond the scope of this basic guide.
Step 5: Cost Management in Buzzy (45 minutes)
Track Usage in Your Buzzy App
Create tracking in your Buzzy Functions:
Each Function can save usage data to the AI_Requests Datatable directly.
Better approach - track from the Buzzy app: when a Function succeeds, save the returned usage data (tokens and cost) to AI_Requests.
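A sketch of building that record from the API's usage object (the function name is ours; the rate is illustrative, and the field names match the AI_Requests Datatable above):

```javascript
// Build the row to save to AI_Requests from a completed request.
// ratePer1K is illustrative -- check current OpenAI pricing.
function buildUsageRecord(feature, usage, ratePer1K = 0.002) {
  return {
    feature, // "improve", "summarize", ...
    tokens_used: usage.total_tokens,
    cost: (usage.total_tokens / 1000) * ratePer1K,
    created_at: new Date().toISOString(),
  };
}

const record = buildUsageRecord("improve", { total_tokens: 650 });
console.log(record.cost); // ≈ $0.0013
```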
Implement Usage Limits
Option 1 - Check limits in Buzzy app: before calling a Function, query the AI_Requests Datatable for the current user's usage this month and block the call with a friendly message if they're over the limit.
Option 2 - Check limits in Buzzy Function:
Add limit checking logic at the start of each Function that queries AI_Requests count.
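The decision logic can be sketched as follows. The Datatable query itself is Buzzy-specific, so only the threshold check is shown, and the limit value is an example:

```javascript
// Returns true when the user has hit their monthly allowance.
// The count comes from querying AI_Requests for the current user/month.
function isOverLimit(requestsThisMonth, maxRequestsPerMonth = 100) {
  return requestsThisMonth >= maxRequestsPerMonth;
}

if (isOverLimit(100)) {
  // In the Function: return { statusCode: 429, body: JSON.stringify({ error: "Monthly AI limit reached" }) };
}
```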
Display Usage to Users
Add usage tracking UI using Buzzy AI: for example, a screen that sums tokens_used and cost from the AI_Requests Datatable for the current user's month.
Optimize AI Costs
Best practices for your Buzzy Functions:
Use GPT-3.5-turbo for most tasks (10x cheaper than GPT-4)
Set appropriate max_tokens limits (don't use 2000 if 500 is enough)
Use shorter, focused system prompts
Cache results when appropriate (store in Datatable)
Consider batching similar requests
Monitor usage via AI_Requests Datatable
Step 6: Error Handling in Buzzy (30 minutes)
Common Errors from AI Functions
API Errors from OpenAI:
Rate limit exceeded (429)
Invalid API key (401)
Model overloaded (503)
Request too large (400)
Content policy violation (400)
Buzzy Function Errors:
Function timeout (30 second limit)
Invalid parameters
Missing Constants
Graceful Error Handling
Implement in your Buzzy Functions (return appropriate statusCodes):
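One way to do this is a small mapping from the status codes listed above to user-friendly messages, so the app never surfaces raw technical errors (function name and wording are ours):

```javascript
// Map OpenAI/Function error statuses to messages safe to show users.
function friendlyError(statusCode) {
  const messages = {
    400: "That request was too large or not allowed. Try shorter text.",
    401: "The AI service is misconfigured. Please contact support.",
    429: "Too many requests right now. Please wait a moment and try again.",
    503: "The AI service is busy. Please try again shortly.",
  };
  return messages[statusCode] || "Something went wrong. Please try again.";
}
```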
Handle errors in your Buzzy app: map Function error responses to friendly messages, and offer a retry option for transient failures (rate limits, overloaded model).
Step 7: Advanced Features (Optional)
Conversation Memory for Chat Features
For chat-style interactions:
Store conversation history in a Datatable (Messages Subtable under Documents)
Pass conversation history to your Buzzy Function
Function includes history in OpenAI API call
Keep last 10 messages to manage token costs
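The trimming step can be sketched as follows (function name is ours; the system prompt is always kept, and only the most recent messages are sent):

```javascript
// Keep the system prompt plus the last `keep` messages to cap token costs.
function trimHistory(systemPrompt, messages, keep = 10) {
  return [{ role: "system", content: systemPrompt }, ...messages.slice(-keep)];
}
```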
Example: See Buzzy AI Chat App for a complete chat implementation.
Custom Templates Using Buzzy
Add a template library: store reusable prompt templates in a Datatable and let users pick one before generating.
AI-Powered Search (Advanced)
For finding similar documents:
Create a Buzzy Function that generates embeddings using OpenAI embeddings API
Store embeddings in Documents Datatable (JSON field)
Create search Function that compares embeddings
Use for semantic search (find documents with similar meaning)
This is an advanced topic requiring vector similarity calculations.
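The core calculation is cosine similarity between two embedding vectors (as returned by OpenAI's embeddings API); higher values mean closer meaning. A sketch:

```javascript
// Cosine similarity between two equal-length embedding vectors.
// 1 = identical direction, 0 = unrelated, -1 = opposite.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A search Function would compute this between the query's embedding and each stored document embedding, then return the top matches.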
Testing AI Features in Buzzy
Test Buzzy Functions Independently
In Settings → Functions, test each Function:
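For example, the improveWriting Function can be exercised with a minimal payload (the text parameter matches the button configuration above):

```json
{ "text": "this sentence have a few grammar problem in it" }
```

A successful run should return a 200 response containing the improved text; then exercise the failure paths (empty text, invalid key) the same way.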
Test in Buzzy App Preview Mode
Functional Testing: verify each AI button returns a sensible result and saves it to the document
Edge Cases: empty text, very long text (near token limits), non-English input
Error Scenarios: invalid API key, rate limit (429), Function timeout - confirm user-friendly messages appear
Cost and Limits Testing: confirm usage is recorded in AI_Requests and limits block further requests
Best Practices Summary
AI Integration with Buzzy Functions:
✅ Always use Buzzy Functions for AI API calls (never from client)
✅ Store API keys in Buzzy Constants (AES encrypted)
✅ Implement usage limits and tracking
✅ Track token usage and costs in Datatables
✅ Handle errors gracefully with user-friendly messages
✅ Test Functions independently before integrating
✅ Choose appropriate model (GPT-3.5 vs GPT-4) based on task complexity
✅ Set reasonable max_tokens limits
✅ Cache results when appropriate
What to avoid:
❌ Never expose API keys in app or Function code
❌ Don't call AI APIs directly from Buzzy app (use Functions)
❌ Don't allow unlimited free AI usage without limits
❌ Don't skip error handling in Functions
❌ Don't use GPT-4 for simple tasks (costs 10x more)
❌ Don't ignore token limits and costs
❌ Don't show technical errors to users
Buzzy Functions for AI - Key Benefits:
Secure API key storage with Constants
Server-side execution (no CORS, no key exposure)
Automatic scaling on AWS Lambda
Managed infrastructure by Buzzy
Easy to test and debug independently
Reusable across multiple apps
Next Steps
Enhance your AI-powered Buzzy app:
Add more AI Functions (translation, tone adjustment, content generation)
Implement chat interface with conversation memory (see Buzzy AI Chat App)
Add image generation using DALL-E API via Buzzy Functions
Build custom AI workflows combining multiple Functions
Add embeddings for semantic search
Integrate other AI services:
Anthropic Claude via Buzzy Functions
Google Gemini via Buzzy Functions
Custom fine-tuned models
Specialized AI services (sentiment analysis, language detection)
Learn more:
Pattern to remember:
Store AI API keys in Buzzy Constants
Create Buzzy Function for each AI capability
Configure app buttons to call Functions
Handle loading states and errors
Track usage and enforce limits
Display results in your app
Congratulations! You've built an AI-powered Buzzy application using Buzzy Functions. The patterns you learned—secure API integration, cost management, error handling—apply to any AI service. You can now add intelligence to any Buzzy app you build. The Buzzy Functions architecture keeps your API keys secure, scales automatically, and provides professional-grade reliability.