    The Battle of Closed vs. Open Source in GenAI


    This post was originally published on April 30, 2024 on LinkedIn

    With the release of Meta’s new open source LLM, Llama 3, on April 18, 2024, the tech community has been captivated, sharing stories about the power of this model and the impact it could have on the startup ecosystem. Some speculate that it will wipe out billions of dollars in startup capital and valuations now that there is a viable open source competitor to OpenAI’s GPT-4. I don’t disagree that some startups, specifically those betting on building the next “GPT killer”, will find themselves falling ever further behind the blistering pace of the open source community, especially with Meta investing billions in the space. However, as an operator actually deploying GenAI solutions in the enterprise ecosystem, I have a few different thoughts:

    1. This is more of the same – The pace of change right now in GenAI is exponential and we can expect it to continue to accelerate, which means model performance will continue to rapidly improve.
    2. You must maintain optionality – As an enterprise, your only viable path to navigate the pace of change is to remain flexible, in both the models you use, and how and where they are deployed.

    We are only weeks away from the highly anticipated release of OpenAI’s next GPT series model. Only 13 months have passed since OpenAI released GPT-4, and it has remained near the leading edge of performance since that time. Meta may have made a significant step in closing the gap for now, but I strongly suspect the next 12 months will continue to be owned by the closed source models. In a recent interview with Lex Fridman, Sam Altman revealed his views on what is to come:

    …relative to where we need to get to and where I believe we will get to, at the time of GPT-3, people are like, “Oh, this is amazing. This is a marvel of technology.” And it is, it was. But now we have GPT-4 and look at GPT-3 and you’re like, “That’s unimaginably horrible.” I expect that the delta between 5 and 4 will be the same as between 4 and 3 and I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck looking backwards at them and that’s how we make sure the future is better.

    At Fisent Technologies, we’ve been architecting BizAI with a focus on the enterprise since day one. That focus is why we chose not to bet on a single LLM. Instead, by design, we support a capability we call multi-model, multi-host, which gives our clients total optionality to use the right model for the job, regardless of where it is hosted. It also means that as the next most powerful model becomes available, our clients can seamlessly transition to that model to power their underlying automations, taking advantage of material, immediate improvements in reliability, speed and accuracy.
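    To make the multi-model, multi-host idea concrete, here is a minimal sketch of what such an abstraction layer could look like. This is purely illustrative and does not reflect BizAI’s actual implementation: the class names, the `ModelConfig` fields, and the stub backends are all invented for the example. The point it demonstrates is that when automations call models only through a registry keyed by (model, host), swapping in a newer or differently hosted model becomes a configuration change rather than a code change.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass(frozen=True)
class ModelConfig:
    """Identifies a model and where it runs (names are hypothetical)."""
    name: str  # e.g. "gpt-4" or "llama-3-70b"
    host: str  # e.g. "openai-cloud" or "on-prem-cluster"


class ModelRegistry:
    """Maps (model, host) configs to callables that produce completions,
    so calling code never depends on any one vendor's client library."""

    def __init__(self) -> None:
        self._backends: Dict[ModelConfig, Callable[[str], str]] = {}

    def register(self, config: ModelConfig, backend: Callable[[str], str]) -> None:
        self._backends[config] = backend

    def complete(self, config: ModelConfig, prompt: str) -> str:
        try:
            return self._backends[config](prompt)
        except KeyError:
            raise ValueError(f"No backend registered for {config}") from None


# Stub backends stand in for real API clients or on-prem inference servers.
registry = ModelRegistry()
registry.register(ModelConfig("gpt-4", "openai-cloud"),
                  lambda p: f"[gpt-4] {p}")
registry.register(ModelConfig("llama-3-70b", "on-prem-cluster"),
                  lambda p: f"[llama-3] {p}")

# Switching the automation to a new model is a one-line config change.
active = ModelConfig("llama-3-70b", "on-prem-cluster")
print(registry.complete(active, "Classify this invoice."))
```

    In a real deployment the registered callables would wrap vendor SDKs or self-hosted endpoints, but the calling automations would look the same either way.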

    As actual practitioners in the enterprise space, the number one concern we come across is the accuracy and reliability of these models when applied to specific business challenges. That is why the models and services that can deliver the most accurate and consistent results will continue to win at the enterprise level, while the competition, even if close behind, will continue to find themselves relegated to the bench.