Facebook-owner Meta is opening up access to a large language model for artificial intelligence research, the social media company said.
The Open Pretrained Transformer (OPT-175B), a 175-billion-parameter language model, will “improve researchers’ abilities to understand how large language models work”.
Meta said restrictions on access to such models have hindered efforts to improve their robustness and to mitigate known issues such as bias and toxicity. Artificial intelligence is increasingly used in research and product development across the technology industry.
According to the MIT Technology Review, this is the first time a fully trained large language model of this scale has been made available to any researcher. The news has been welcomed by experts concerned that such technology is being developed “by small teams behind closed doors”.
“We strongly believe that the ability for others to scrutinize your work is an important part of research. We really invite that collaboration,” said Joelle Pineau, managing director at Meta AI.
Meta said access to the model would be granted to academic researchers and people affiliated with government, civil society and academic organizations, as well as industry research laboratories.
Earlier this month Meta announced a partnership with the neuroimaging research centre NeuroSpin on a long-term project to build next-generation AI, which it described as “an effort to create a human-level AI”.