When Capacity sought to enhance its Answer Engine to better retrieve accurate, context-aware, and multilingual answers from organizational knowledge, it knew a shift to a generative AI–native architecture was crucial.
The company chose Microsoft's Phi small language models, delivered via Azure AI Foundry, judging them the clear winner on quantitative and qualitative output, pricing, deployment capabilities, and third-party benchmarks.
The resulting solution is more scalable, secure, and cost-effective, and has already delivered 4.2x cost savings, 97% tagging accuracy, and faster document summarization, driving significant gains in performance and customer satisfaction.