Microsoft is also creating guidelines for red teams to help make sure generative AI is secure and responsible.
Microsoft and OpenAI have gone hand-in-hand for a long time, from Microsoft’s initial funding of the company behind ChatGPT to embedding GPT services inside the Azure AI platform. Azure OpenAI Service is at the “forefront” of a generative AI transformation that includes GPT-4, while the Azure AI infrastructure is the “backbone,” the Redmond tech giant wrote in a blog post detailing updates to the Azure AI platform on Monday.
Two big changes — new offerings from OpenAI and upgraded virtual machine hardware — show Microsoft’s ongoing commitment to putting generative AI in play in more ways.
OpenAI’s GPT-4 and GPT-3.5 Turbo will now be available through Azure OpenAI Service in four new regions: eastern Canada, eastern Japan, southern U.K. and an additional swath of the eastern U.S. (East US 2 on the availability map). GPT-4 is the most advanced generative AI model available from OpenAI today.
Azure OpenAI Service has about 11,000 customers who use it for tasks such as customer service, writing content and analyzing documents.
“What we’re seeing is that the ChatGPT editor [from Azure OpenAI] is helping users create content that is more relevant, personalized, even more creative,” Aprimo chief product officer Kevin Souers said.
Existing Azure OpenAI customers can now join a waitlist for access to GPT-4.
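For developers with access, a GPT-4 deployment in Azure OpenAI Service is reached through a REST endpoint tied to the customer’s own resource. The sketch below builds such a chat completion request; the resource name (`my-resource`), deployment name (`gpt-4`) and `api-version` value are illustrative assumptions, not details from the article, and the actual network call is left commented out since it requires a real resource and API key.

```python
import json
import os
import urllib.request

# Illustrative api-version; check your resource's supported versions.
API_VERSION = "2023-05-15"


def chat_request(endpoint: str, deployment: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) an Azure OpenAI chat completion request."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={API_VERSION}")
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # The key is read from the environment; "<your-key>" is a placeholder.
            "api-key": os.environ.get("AZURE_OPENAI_KEY", "<your-key>"),
        },
        method="POST",
    )


req = chat_request(
    "https://my-resource.openai.azure.com",   # hypothetical resource endpoint
    "gpt-4",                                  # hypothetical deployment name
    [{"role": "user", "content": "Summarize this support ticket."}],
)

# Sending the request needs real credentials, so it is left to the reader:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Note the difference from the public OpenAI API: requests target the customer’s own endpoint and a named deployment rather than a global model name.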
SEE: Threat actors spun ChatGPT out into the malicious WormGPT. Learn about this and other possible ChatGPT-related security risks. (TechRepublic)
A new virtual machine series, the Azure ND H100 v5, is now generally available in the East U.S. and South Central U.S. Azure regions for existing enterprise customers. These VMs are designed to help organizations build and run generative AI applications.
The new hardware, NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking, is tuned for AI performance. The low-latency networking uses NVIDIA Quantum-2 ConnectX-7 InfiniBand, delivering 400Gb/s per GPU and 3.2Tb/s of cross-node bandwidth per VM for supercomputer-level performance.
The PCIe Gen5 data transfer standard in the ND H100 v5 VMs provides 64GB/s of bandwidth per GPU for improved performance between the CPU and GPU, Nidhi Chappell, general manager of Azure AI infrastructure, and Eric Boyd, corporate vice president for AI platforms at Microsoft, detailed in a blog post.
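The per-VM networking figure follows from the per-GPU one. A quick arithmetic check, assuming the ND H100 v5 series’ standard configuration of eight GPUs per VM (an assumption; the article does not state the GPU count):

```python
# Sanity-check the quoted networking figures for the ND H100 v5 series.
GPUS_PER_VM = 8            # assumed GPU count per VM (not stated in the article)
IB_PER_GPU_GBPS = 400      # NVIDIA Quantum-2 InfiniBand bandwidth per GPU (Gb/s)

# 8 GPUs x 400 Gb/s = 3,200 Gb/s = 3.2 Tb/s of cross-node bandwidth per VM.
cross_node_tbps = GPUS_PER_VM * IB_PER_GPU_GBPS / 1000
print(cross_node_tbps)  # 3.2
```

This matches Microsoft’s quoted 3.2Tb/s per VM figure.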
Operations will run faster on the ND H100 v5 VMs, and some large language models will see up to two times faster speeds when run on them, Microsoft said.
Safety and security remain concerns when it comes to AI. Microsoft assures customers that it “… incorporates robust safety systems and leverages human feedback mechanisms to handle harmful inputs responsibly.” Microsoft also encourages red teaming AI applications, that is, inviting ethical hackers to play the role of threat actors to see how AI applications might be vulnerable to attack.
Microsoft’s guidelines for red teams gearing up to work with AI include:
Competitors to Microsoft’s Azure AI include Amazon Web Services, IBM Watson, Google AI, DataRobot’s custom AI model service, Salesforce Einstein AI for marketing, ServiceNow AIOps for IT operations management, Oracle Cloud Infrastructure and H2O.ai.