The power of large language models is obvious to everyone - but choosing the right LLM to underpin your products and services requires some careful consideration.
Everyone is getting into generative AI, which has become one of the most rapidly adopted technologies in history and which may prove to be one of the most consequential over the long term.
A recent survey of Accelerance’s global partner network revealed that many partners are developing and deploying generative AI systems, both to streamline and enhance their software development processes, and to build new products and services for their customers.
The big IT services firms are also ramping up to make AI central to their offerings. Indian IT provider Infosys says it has 80 generative AI projects in the works and is upskilling 40,000 employees to work with generative AI. In the first four months of the year alone, Accenture secured more than $100 million in deals to help customers deploy generative AI systems.
A plethora of LLM options
At the heart of these services are large language models (LLMs), which researchers have been developing for years. However, LLMs only really left the lab and hit the mainstream with the debut of ChatGPT in November 2022.
OpenAI’s GPT-4 (generative pre-trained transformer) technology, which businesses can access via an API license and through Microsoft’s Azure OpenAI platform, is powering thousands of new AI services - everything from customer service bots to contract analysis in the insurance industry.
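To give a sense of how lightweight that kind of integration can be, here is a minimal sketch of calling a hosted GPT-4-class model from Python. It assumes the openai package (v1+ client interface) and an API key in the OPENAI_API_KEY environment variable; the model name, system prompt, and user prompt are illustrative only.

```python
# Minimal sketch: querying a hosted GPT-4-class model via the openai Python package.
# Assumes `pip install openai` (v1+ client) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name; substitute whichever tier you license
    messages=[
        {"role": "system", "content": "You are a customer service assistant for an insurer."},
        {"role": "user", "content": "Summarise the key exclusions in this policy: ..."},
    ],
    temperature=0.2,  # lower temperature for more predictable output
)

print(response.choices[0].message.content)
```

On Azure OpenAI the request shape is essentially the same; you point the client at your Azure endpoint and reference the deployment you have configured rather than the default OpenAI endpoint.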
But OpenAI’s LLMs are certainly not the only ones available. Google, AWS, Microsoft, IBM, and Meta all have LLMs on offer - mainly as cloud-based services. A growing roster of smaller OpenAI rivals - Anthropic, Stability AI, MosaicML, and Cohere among them - has joined the race. Meta last month released its Llama 2 large language model as a freely available open-source tool for users to run and adapt as needed.
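For teams weighing the self-hosted route, the sketch below shows roughly what running Llama 2 on your own hardware looks like. It assumes the Hugging Face transformers library, access to the gated meta-llama/Llama-2-7b-chat-hf weights (Meta requires accepting its licence terms), and a machine with a capable GPU - all assumptions for illustration, not requirements of any particular product.

```python
# Rough sketch: running Llama 2 locally with Hugging Face transformers.
# Assumes `pip install transformers accelerate torch`, a GPU with enough memory,
# and that you have accepted Meta's licence to access the gated model repository.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # gated repo; requires Hugging Face auth
    device_map="auto",                      # spread the model across available devices
)

result = generator(
    "Draft a short reply to a customer asking about delivery delays.",
    max_new_tokens=150,
    do_sample=False,  # deterministic output makes runs easier to compare
)

print(result[0]["generated_text"])
```

The trade-off is that you take on the hosting, scaling, and tuning burden yourself, in exchange for keeping data and the model inside your own infrastructure.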
Businesses have options and important choices to make. Thankfully, the widespread uptake of Agile software development and CI/CD (continuous integration, continuous delivery) methodologies over the last decade means businesses can experiment with generative AI without throwing significant dollars at projects that ultimately go nowhere.
Gone are the 9-12 month waterfall programs of work on a product. Sprints allow you to beta test functionality, trial usage, and change or pivot if necessary. This is how OpenAI and most other LLM developers operate: constantly beta testing and iterating based on feedback.
Here are six key things to keep in mind when you are surveying the options for deploying LLMs as part of your own products and services.
A fast-shifting landscape
The future development potential of an LLM is also important to consider. Is there a strong community of developers and users to draw ideas and resources from? Is there a clear roadmap of future releases and upgrades? The generative AI landscape is shifting so quickly that these are not easy questions to answer.
Ultimately, the best choice of a large language model depends on your specific requirements and constraints. It's a good idea to experiment with different models and evaluate their performance before committing to one for your project.
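One lightweight way to structure that experimentation is a small harness that sends the same prompts to each candidate model and records the answers and latency for side-by-side review. The sketch below is generic: the candidate callables are placeholders you would wire to the hosted or self-hosted clients shown earlier, and the prompts are illustrative.

```python
# Illustrative sketch: comparing candidate LLMs on the same prompts.
# The candidate callables are placeholders - connect them to whichever
# APIs or local models you are actually evaluating.
import time
from typing import Callable, Dict, List

def compare_models(models: Dict[str, Callable[[str], str]], prompts: List[str]) -> List[dict]:
    """Run every prompt through every candidate model, capturing output and latency."""
    results = []
    for name, ask in models.items():
        for prompt in prompts:
            start = time.perf_counter()
            answer = ask(prompt)
            results.append({
                "model": name,
                "prompt": prompt,
                "answer": answer,
                "latency_s": round(time.perf_counter() - start, 2),
            })
    return results

if __name__ == "__main__":
    # Placeholder candidates - substitute real client calls from the sketches above.
    candidates = {
        "hosted-gpt4": lambda p: "stub answer from a hosted model",
        "self-hosted-llama2": lambda p: "stub answer from a local model",
    }
    sample_prompts = ["Summarise our refund policy in two sentences."]
    for row in compare_models(candidates, sample_prompts):
        print(row["model"], "->", row["answer"], f"({row['latency_s']}s)")
```

At this stage, human review of the captured answers - for accuracy, tone, and cost - tends to matter more than any single automated score.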
The Accelerance global partner network can help you do just that. We have partners with generative AI experience all over the world who can help you test the water with this powerful and disruptive new technology.
Get in touch with our analysts to find out more.