This chapter implements the function calling and system instruction capabilities demonstrated in Chapter 7, but uses the Vertex AI API instead of the Development API.
Key Differences from Chapter 7:
- API Endpoint: Uses the Vertex AI WebSocket endpoint through a proxy instead of the direct Development API endpoint
- Authentication: Uses service account authentication through the proxy instead of an API key
- Model Path: Uses the full Vertex AI model path format:
  `projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/gemini-2.0-flash-exp`
- Setup Configuration: Includes additional Vertex AI-specific configuration parameters in the setup message (see the setup-message sketch after this list)
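The following is a minimal sketch of what the proxied connection and setup message might look like. The proxy URL, project ID, location, system instruction text, and response modality are all placeholder assumptions, and the snake_case field names follow the wire format used in earlier chapters; adjust them to match your own proxy and configuration.

```javascript
// Assumed placeholders: replace with your own project, region, and proxy endpoint.
const PROJECT_ID = 'your-project-id';
const LOCATION = 'us-central1';
const PROXY_URL = 'ws://localhost:8080';

const ws = new WebSocket(PROXY_URL);

ws.addEventListener('open', () => {
  // The setup message names the full Vertex AI model path instead of the
  // short model name used with the Development API endpoint.
  const setupMessage = {
    setup: {
      model: `projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/gemini-2.0-flash-exp`,
      generation_config: {
        response_modalities: ['AUDIO']   // assumption: audio-only responses
      },
      system_instruction: {
        parts: [{ text: 'You are a helpful assistant.' }]  // assumption: example instruction
      }
    }
  };
  ws.send(JSON.stringify(setupMessage));
});
```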
⚠️ Important Tool Limitation: Unlike the Development API, which supports multiple tools, the Vertex AI API currently supports only one tool declaration. This means you must choose a single function to expose to the model (weather, search, or code execution) rather than providing all three simultaneously; a single-tool declaration sketch follows below.
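Here is a sketch of declaring a single tool in the setup message. The `get_weather` function, its parameters, and the way it is attached to the setup object are illustrative assumptions, not the exact declaration used in the chapter's code.

```javascript
// Because only one tool declaration is accepted, expose a single function
// (here a hypothetical get_weather) instead of listing weather, search, and
// code execution together as in Chapter 7.
const weatherTool = {
  function_declarations: [{
    name: 'get_weather',                       // hypothetical function name
    description: 'Get the current weather for a city',
    parameters: {
      type: 'OBJECT',
      properties: {
        city: { type: 'STRING', description: 'City name' }
      },
      required: ['city']
    }
  }]
};

// In the setup message from the previous sketch, attach exactly one tool:
// setupMessage.setup.tools = [weatherTool];
```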
The core functionality remains similar to Chapter 7, but adapted for the single-tool limitation:
- System instructions support
- Function calling (limited to one function; see the tool-call handling sketch after this list)
- WebSocket communication patterns
- Enhanced connection handling
- All multimodal capabilities from Chapter 10
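As a rough illustration of the function-calling flow over the WebSocket, the sketch below handles an incoming tool call and returns a tool response. It assumes the `ws` connection from the earlier setup sketch, uses the camelCase response field names seen in Chapter 7's handler (your proxy may differ), and the `getWeather` helper is a hypothetical stand-in for a real lookup.

```javascript
ws.addEventListener('message', async (event) => {
  // Messages may arrive as text or as a Blob; normalize to a string first.
  const data = typeof event.data === 'string' ? event.data : await event.data.text();
  const response = JSON.parse(data);

  if (response.toolCall) {
    // With a single declared tool, only get_weather calls should appear here.
    const functionResponses = response.toolCall.functionCalls.map((call) => ({
      id: call.id,
      name: call.name,
      response: { result: getWeather(call.args.city) }   // hypothetical helper
    }));

    ws.send(JSON.stringify({ toolResponse: { functionResponses } }));
  }
});

// Hypothetical stand-in for a real weather lookup.
function getWeather(city) {
  return { city, condition: 'sunny', temperature_c: 22 };
}
```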
Please refer to the comprehensive documentation in Chapter 7's README, keeping in mind that you'll need to adapt the implementation to work with a single tool at a time.
You can compare the implementations by looking at:
- Chapter 11 index.html (Vertex AI API version)
- Chapter 7 index.html (Development API version)
The main differences are in the initialization, configuration, and tool declaration sections, while the core system instruction handling remains the same.