Gemini Pro API is now live
Google has officially launched the Gemini Pro API, giving developers and enterprises access to the second most capable model in the Gemini family. Gemini Pro is currently free to start building with.
What’s going on here?
Gemini Pro is now accessible via API for app development.
What does this mean?
Here’s what you have access to right now (not gonna bother with what’s coming up).
The API offers a 32K context window for text and a vision endpoint for multimodal use.
Usage is free until general availability next year, limited to 60 queries per minute.
Functionality includes features such as function calling, embeddings, semantic retrieval, custom knowledge grounding, and chat functionality.
Software Development Kits (SDKs) are provided for Python, Kotlin, Node.js, Swift, and JavaScript (see the Python sketch after this list).
Integration is possible through Google AI Studio and Vertex AI.
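To make that concrete, here is a minimal sketch using the Python SDK (the google-generativeai package) with an API key from Google AI Studio. The model names gemini-pro, gemini-pro-vision, and models/embedding-001 match Google's docs at launch; treat the exact calls as illustrative rather than canonical.

```python
# Minimal sketch of calling the Gemini Pro API via the official Python SDK.
# Assumes: `pip install google-generativeai` and an API key from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # free tier, currently limited to 60 queries/minute

# Plain text generation against the 32K-context text model.
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize what a 32K context window lets an app do.")
print(response.text)

# Multi-turn chat: the SDK keeps the conversation history for you.
chat = model.start_chat()
print(chat.send_message("Give me one use case for function calling.").text)

# Embeddings for semantic retrieval / custom knowledge grounding.
embedding = genai.embed_content(
    model="models/embedding-001",
    content="Gemini Pro API is now live.",
    task_type="retrieval_document",
)
print(len(embedding["embedding"]))  # dimensionality of the returned vector

# Multimodal prompts go through the separate vision model, e.g.
# genai.GenerativeModel("gemini-pro-vision").generate_content([prompt, image]).
```

The same models are reachable through Vertex AI for teams already on Google Cloud; the AI Studio path above is the quickest way to use the free tier.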
After general availability, Gemini Pro's pricing will be comparable to GPT-3.5-turbo's.
For enterprise users, Google Cloud is introducing features like Duet AI for code, AI Hypercomputer, and various hardware upgrades.
Why should I care?
Google claims superior performance compared with models of similar size, along with multilingual support and availability in 180+ countries.
Despite those claims, developer access arriving a week after the initial launch has raised some concerns. Developer engagement on X is still limited, with more attention currently directed toward Mistral.
Testing the models now is advisable, given the generous free options. Being part of the alpha phase lets you explore areas where others may not be focusing.