Meta, which owns Facebook, is opening up access to its large language model for artificial intelligence research.
For the first time, Meta is releasing a model containing 175 billion parameters to the broader AI research community.
“Large language models” are natural language processing systems trained on massive volumes of text, capable of answering reading comprehension questions or generating new text of their own.
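The idea of learning from text and then generating new text can be illustrated with a toy sketch. The bigram model below is an assumption-laden simplification, not how OPT-175B works: real large language models use transformer neural networks with billions of parameters, but the train-then-generate loop is the same in spirit.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" trained on a tiny corpus.
# OPT-175B and similar systems use transformer networks at vastly larger
# scale, but the core idea -- learn from text, then generate new text --
# is the same.

def train_bigram_model(corpus):
    """Count which word follows which in the training text."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate new text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no continuation seen in training
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the model reads text and the model writes text"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Even this miniature version shows why training data matters so much: the model can only ever reproduce patterns present in its corpus, which is one reason researchers worry about bias in the much larger datasets used to train models like OPT-175B.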
In a blog post, Meta said the release of its “Open Pretrained Transformer (OPT-175B)” model would improve researchers’ ability to understand how large language models work.
Meta said restrictions on access to such models had been “hindering progress on efforts to improve their robustness and mitigate known issues such as bias and toxicity.”
Some researchers are concerned about the harmful effects of large language models, which underpin the technology behind many major online platforms and can perpetuate societal biases around issues such as race and gender.
Meta said it “hoped to increase the diversity of voices defining the ethical considerations of such technologies.”
The tech giant said that, to prevent misuse and “maintain integrity,” it was releasing the model under a noncommercial license focused on research use cases.
Meta said access to the model would be granted to academic researchers; people affiliated with government, civil society, and academic organizations; and industry research laboratories. The release includes the pretrained models and the code needed to train and use them.