infotechlead

OpenAI Unveils Safety Framework for Advanced AI Models

OpenAI, a pioneering AI company backed by Microsoft, has introduced a comprehensive safety framework for its most cutting-edge models in a decisive move aimed at addressing the growing concerns surrounding artificial intelligence safety.
The framework, disclosed in a plan released on the company’s official website on Monday, includes a notable provision allowing the company’s board to reverse safety-related decisions, according to a Reuters report.

OpenAI’s ChatGPT AI tool is used by 100 million people on a weekly basis, OpenAI CEO Sam Altman said recently at the company’s developer conference. Since releasing its ChatGPT and Whisper models via API in March, the company now serves over two million developers, and its tools are used by over 92 percent of Fortune 500 companies.

The development is significant because AI deployment faces regulatory scrutiny in several markets.

Under the outlined strategy, OpenAI will deploy its latest AI technology only in areas vetted as safe, with particular scrutiny of high-risk domains such as cybersecurity and nuclear threats.

To bolster safety protocols, the company is establishing an advisory group tasked with reviewing safety reports and forwarding them to both company executives and the board. While executives will be responsible for making decisions, the board has the authority to overturn them, a multi-tiered approach to ensuring AI safety.

Ever since the debut of ChatGPT a year ago, concerns regarding the potential hazards of AI have been paramount among AI researchers and the wider public. While generative AI technology has captivated users with its proficiency in crafting poetry and essays, it has concurrently raised apprehensions due to its capacity to disseminate misinformation and manipulate human behavior.

The unease surrounding AI’s implications reached a crescendo in April when a coalition of AI industry leaders and experts penned an open letter advocating for a six-month hiatus in developing systems surpassing the capabilities of OpenAI’s GPT-4, citing inherent risks to society.

Subsequently, a Reuters/Ipsos poll conducted in May found that more than two-thirds of Americans are concerned about the potential adverse impacts of AI, with 61 percent saying it could threaten civilization.
