Opening up AI: have the front-runners met(a) their match?

19 July 2023

Meta is set to become the latest tech giant to try to grab its piece of the artificial intelligence (AI) pie, with the reported imminent release of its own commercial AI model, following the early moves made by its rivals Google and OpenAI (backed by Microsoft).

However, Meta is seemingly choosing a different route to that taken by its competitors: the source code that powers its AI model (a type of system known as a large language model, or LLM) will be released publicly on an open source basis, rather than being kept confidential. Despite the leaks of Meta’s open source AI models earlier this year, this move will still have taken some by surprise and, although the finer details remain to be released, it is certainly a bold statement in an AI race which has until now been dominated by a small number of closed source models such as ChatGPT.

What is Open Source Software (OSS)?

OSS is software that is provided by its developer or owner (Meta, in this instance) under licence terms which typically include:

  • the freedom for the user to run the program as they wish for any purpose;
  • the freedom for the user to study how the program works by accessing the source code;
  • the ability to make derivative works and modify the code; and
  • the freedom to re-distribute copies of the code.

This does not mean that OSS does not benefit from any legal protections: the code is still protected by copyright and the licensee must comply with the terms of the relevant open source licence.

Meta has not yet publicly confirmed the licensing terms for its AI model, and whether it will choose to include minimal restrictions on redistribution of the software (known as a “permissive” licence) or will instead impose conditions on any modifications or derivative works (known as a “copyleft”, or restrictive, licence). In either case, however, providing its technology on an open source basis will mean that Meta’s AI model could potentially be customised by businesses using their own data sets, used to build applications and ultimately improved by its users.

What are the benefits and potential challenges of Meta’s approach?

It is not yet clear if, or how, Meta will choose to monetise its open source AI model, but adopting this alternative approach may benefit Meta in other ways. Meta’s president of global affairs, Nick Clegg, believes that “openness is the best antidote to the fears surrounding AI”, implying that it may be better for AI technology to be in the hands of the many rather than the few.

Other benefits for Meta include the potential for broader usage of its AI model than its closed source competitors enjoy, in turn generating more data for it to process and thereby increasing its capabilities. Further, an open model could lead to enhanced safety and security, as users will be able to help identify, and tackle, any bugs in the technology. Reports suggest Google is concerned that open source AI developers are quickly closing the gap on it, as open source models are proving to be more adaptable, and there are fears that they will ultimately become more capable than closed source AI models such as Google’s own Bard offering.

On the other hand, perhaps the greatest challenge posed by such “openness” is the scope for the AI model to be abused by bad actors who may seek to spread misinformation, commit fraud or carry out other online crimes. OpenAI, for example, has stated that it is reluctant to release an open source version of its AI model until such misuse can be more effectively limited.

Finally, the data used to train AI models through the process of machine learning is likely to be subject to copyright protection, meaning that an open source AI model could facilitate the infringement of third party intellectual property rights. Meta itself will be all too aware of the risk of such infringement, in light of Sarah Silverman’s recently filed US lawsuit against it and OpenAI for copyright infringement, relating to claims that their AI models used her work for training without her permission.


It remains to be seen whether Meta’s bold move will be a success, and if it will further increase its dominance in the tech markets and even beyond. Legislators and regulators all around the world will be keeping a close eye on how this progresses, and they will no doubt be scrutinising this development further in the coming months as they continue to consider the practical and legal implications of broadening the generative AI market out to open source models.

“The competitive landscape of AI is going to completely change in the coming months, in the coming weeks maybe, when there will be open source platforms that are actually as good as the ones that are not”.