5 result(s)
Foundry Local is a high-performance local AI runtime from Azure AI Foundry that brings model execution and hardware acceleration to client devices. It enables developers to build and ship cross-platform AI apps that run accelerated models on a wide range of silicon.
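As a rough illustration of the kind of app this enables, the sketch below talks to a locally running Foundry Local service through an OpenAI-compatible chat endpoint. The base URL port and the model alias are placeholder assumptions to check against your own installation; they are not details taken from the announcement.

```python
import os
from openai import OpenAI

# Minimal sketch: call a locally hosted model via Foundry Local's
# OpenAI-compatible endpoint. The port and model alias below are
# placeholders (assumptions); check your local service for real values.
client = OpenAI(
    base_url=os.environ.get("FOUNDRY_LOCAL_ENDPOINT", "http://localhost:5273/v1"),
    api_key="local",  # a local service typically does not validate the key
)

response = client.chat.completions.create(
    model="phi-3.5-mini",  # placeholder alias for a locally downloaded model
    messages=[
        {"role": "user", "content": "In one sentence, why can local inference help with data privacy?"}
    ],
)
print(response.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, existing client code can be pointed at the local service by swapping the base URL, which keeps prompts and data on-device.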
The post highlights how running AI locally on developers' devices addresses long-standing concerns about data privacy, dependency on cloud services, and limited control over tooling. By using local models, teams (especially those working on sensitive or regulated projects) can keep data on-device, customize workflows, reduce latency, and retain full control over model behavior. The article, published on the Azure AI Foundry Blog, positions local models as a practical approach to AI-assisted development.
Azure AI Foundry can now create a production-ready Azure AI Search vector index natively during agent grounding, enabling immediate indexing with no prior search setup and speeding agent deployment to production.
Microsoft announced the Computer Use tool (Preview) for the Azure AI Foundry Agent Service. The preview brings feature parity with the Azure OpenAI Responses API while offering seamless integration into the Foundry agent runtime and enterprise-grade security, allowing developers to build agents that reason and use tools within a secure environment.
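Since the preview tracks the Azure OpenAI Responses API, a request could look roughly like the computer-use tool shape in the public Responses API documentation. The sketch below is an assumption-heavy illustration: the endpoint and key variables, the API version, the computer-use-preview deployment name, and the display settings are placeholders rather than details confirmed by the announcement.

```python
import os
from openai import AzureOpenAI

# Hedged sketch of a Responses API call that enables the computer-use tool.
# Endpoint, api_version, and the deployment name are assumptions.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="preview",
)

response = client.responses.create(
    model="computer-use-preview",  # placeholder deployment name
    tools=[{
        "type": "computer_use_preview",
        "display_width": 1024,   # assumed screen size of the controlled environment
        "display_height": 768,
        "environment": "browser",
    }],
    input=[{"role": "user", "content": "Open the pricing page and read the first plan name."}],
    truncation="auto",  # the computer-use tool requires automatic truncation
)

# The model returns computer_call actions (click, type, screenshot requests)
# that the host application executes and reports back in follow-up requests.
for item in response.output:
    print(item.type)
```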
August 2025 update for Azure AI Foundry: GPT-5 is available, Model Router adds GPT-5 support, the Responses API reaches general availability, Browser Automation enters public preview, and further additions and updates include Sora, Mistral Document AI, FLUX image models, and OpenAI gpt-oss with Foundry Local, plus SDK and documentation improvements.
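For Model Router specifically, the calling pattern is expected to remain an ordinary chat completions request against a router deployment, with the router selecting the underlying model (now including GPT-5). The deployment name and API version in the sketch below are assumptions, not values taken from the update.

```python
import os
from openai import AzureOpenAI

# Hedged sketch: call a Model Router deployment like any other chat deployment.
# The deployment name "model-router" and the api_version are assumptions.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="model-router",  # placeholder deployment name for the router
    messages=[{"role": "user", "content": "Draft a two-line release note for a bug fix."}],
)

# The response reports which underlying model the router actually selected.
print(response.model)
print(response.choices[0].message.content)
```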