Back in May, OpenAI announced that it was forming a new Safety and Security Committee (SSC) to evaluate its current processes and safeguards and recommend changes. At the time, the company said the SSC would conduct its evaluation over 90 days and then present its findings to the board.
Now that the process has been completed, OpenAI is sharing five changes it will be making based on the SSC’s evaluation.
First, the SSC will become an independent oversight committee of the OpenAI board, continuing to provide independent governance on safety and security. The board committee will be led by Zico Kolter, director of the machine learning department in Carnegie Mellon University's School of Computer Science. Other members include Adam D’Angelo, co-founder and CEO of Quora; Paul Nakasone, a retired US Army general; and Nicole Seligman, former EVP and general counsel of Sony Corporation.
The committee has already reviewed the safety of the o1 release and will continue reviewing future releases, both during development and after launch. It will also oversee model launches and will have the power to delay a release over safety concerns until those concerns have been sufficiently addressed.
Second, the SSC will enhance the company’s security measures by increasing internal information segmentation, expanding security operations teams to provide 24/7 coverage, and investing further in the security of the company’s research and product infrastructure.
“Cybersecurity is a critical component of AI safety, and we have been at the forefront of establishing necessary security measures to protect advanced AI. We will maintain a risk-based approach to our security tactics and adapt as both the threat landscape and risk profiles of our models evolve,” OpenAI stated in a post.
The third recommendation is for the company to increase transparency regarding its work. It consistently produces system cards that outline the capabilities and risks associated with models, and plans to explore new methods for sharing and elaborating on safety initiatives.
The system cards for GPT-4o and o1-preview include results from external red teaming, evaluations of cutting-edge risks within the Preparedness Framework, and descriptions of risk mitigation strategies incorporated into the systems.
Fourth, it will explore new ways to independently test its systems by collaborating with more external companies. For example, OpenAI is establishing new partnerships with safety organizations and non-governmental labs to carry out model safety assessments.
It is also collaborating with government entities such as Los Alamos National Labs to examine how AI can be employed safely in labs to further bioscientific research.
OpenAI has also recently entered into agreements with the U.S. and U.K. AI Safety Institutes to collaborate on research into emerging AI safety risks.