OpenAI Launches New AI Safety Initiative

OpenAI has introduced a new online resource, the "Safety Evaluations Hub", with the aim of publicly sharing data and insights on the safety performance of its artificial intelligence models. The initiative is intended to provide transparency on crucial aspects such as the models' hallucination rates, their tendency to generate harmful content, the accuracy with which they follow instructions, and their resilience to jailbreak attempts. The company emphasizes that the new hub represents a step toward greater openness, at a time when it faces several legal challenges, including allegations that it unlawfully used copyrighted material to train its models.

The Safety Evaluations Hub is designed to expand on the information already available in OpenAI's system cards. While system cards offer a snapshot of a model's safety measures at launch, the hub is meant to provide continuous updates. In an official note, OpenAI stated that it wants to share its progress in developing ways to measure the capability and safety of its models. The intent is twofold: to make the systems' performance easier to understand, and to encourage industry-wide efforts toward greater transparency. The company has also expressed its intention to communicate more proactively on safety at all levels.

Within the hub, interested users can explore dedicated sections with relevant information for various models, from GPT-4.1 up to the most recent versions, such as GPT-4.5.
Source: ANSA