Jason Knight, Co-founder and VP of ML at OctoAI – Interview Series

Jason Knight is Co-founder and Vice President of Machine Learning at OctoAI, a platform that delivers a complete stack for app builders to run, tune, and scale their AI applications in the cloud or on-premises.

OctoAI was spun out of the University of Washington by the original creators of Apache TVM, an open-source stack for ML portability and performance. TVM enables ML models to run efficiently on any hardware backend, and has quickly become a key part of the architecture of popular consumer devices like Amazon Alexa.

Can you share the inspiration behind founding OctoAI and the core problem you aimed to solve?

AI has historically been a complex field accessible only to those comfortable with the mathematics and high-performance computing required to build something with it. But AI unlocks the ultimate computing interfaces of text, voice, and imagery, programmed by examples and feedback, and brings the full power of computing to everyone on Earth. Before AI, only programmers were able to get computers to do what they wanted by writing arcane programming language texts.

OctoAI was created to accelerate our path to that reality so that more people can use and benefit from AI. And people, in turn, can use AI to create even more benefits by accelerating the sciences, medicine, art, and more.

Reflecting on your experience at Intel, how did your earlier roles prepare you for co-founding and leading development at OctoAI?

Intel, and the AI hardware and biotech startups before it, gave me the perspective to see how hard AI is for even the most sophisticated technology companies, and yet how valuable it can be to those who have figured out how to use it. I also saw that the gap between those benefiting from AI and those who aren't yet is primarily one of infrastructure, compute, and best practices, not magic.

What differentiates OctoStack from other AI deployment solutions available on the market today?

OctoStack is the industry's first complete technology stack designed specifically for serving generative AI models anywhere. It is a turnkey production platform that provides highly optimized inference, model customization, and asset management at enterprise scale.

OctoStack allows organizations to achieve AI autonomy by running any model in their preferred environment with full control over data, models, and hardware. It also delivers unmatched performance and cost efficiency, with savings of up to 12x compared to alternatives like GPT-4.

Can you explain the advantages of deploying AI models in a private environment using OctoStack?

Models today are ubiquitous, but assembling the right infrastructure to run these models and apply them to your own data is where the business-value flywheel really begins to spin. Using these models on your most sensitive data, and then turning that into insights, better prompt engineering, RAG pipelines, and fine-tuning, is where you get the most value out of generative AI. But it's still difficult for all but the most sophisticated companies to do this alone, which is where a turnkey solution like OctoStack can accelerate you and bring the best practices together in one place for your practitioners.
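As a rough illustration of that workflow, here is a minimal retrieval-augmented generation (RAG) sketch against a privately hosted, OpenAI-compatible endpoint. The endpoint URL, model name, and toy retriever are placeholder assumptions for the example, not specifics of OctoStack.

```python
# Minimal RAG sketch: retrieve relevant private documents, then ask a
# privately hosted LLM to answer using only that context.
# Assumes an OpenAI-compatible endpoint; URL, model, and key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # hypothetical private endpoint
    api_key="YOUR_KEY",
)

DOCS = [
    "Q3 revenue grew 18% year over year, driven by the enterprise tier.",
    "The on-call rotation changes every Monday at 09:00 UTC.",
    "Customer churn in EMEA dropped to 2.1% after the pricing change.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever standing in for a real vector store."""
    scored = [(len(set(query.lower().split()) & set(d.lower().split())), d) for d in DOCS]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    response = client.chat.completions.create(
        model="my-private-llm",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How did churn change in EMEA?"))
```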

Deploying AI models in a private environment using OctoStack offers several benefits, including enhanced security and control over data and models. Customers can run generative AI applications inside their own VPCs or on-premises, ensuring that their data remains secure and within their chosen environments. This approach also gives businesses the flexibility to run any model, be it open-source, custom, or proprietary, while benefiting from cost reductions and performance improvements.

What challenges did you face in optimizing OctoStack to support a wide range of hardware, and how were these challenges overcome?

Optimizing OctoStack to support a wide range of hardware involved ensuring compatibility and performance across diverse devices, such as NVIDIA and AMD GPUs and AWS Inferentia. OctoAI overcame these challenges by leveraging its deep AI systems expertise, developed through years of research and development, to create a platform that continuously updates and supports additional hardware types, GenAI use cases, and best practices. This allows OctoAI to deliver market-leading performance and cost efficiency.

Additionally, getting the latest capabilities in generative AI, such as multi-modality, function calling, strict JSON schema following, efficient fine-tune hosting, and more, into the hands of your internal developers will accelerate your AI takeoff point.
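For a sense of what strict JSON schema following looks like in practice, here is a hedged sketch using the structured-output convention that many OpenAI-compatible servers support; the endpoint, model name, and schema are placeholder assumptions rather than a specific OctoStack API.

```python
# Sketch of schema-constrained generation: the server is asked to return
# output that validates against a caller-supplied JSON Schema.
# Endpoint, model name, and schema are placeholder assumptions.
import json
from openai import OpenAI

client = OpenAI(base_url="https://llm.internal.example.com/v1", api_key="YOUR_KEY")

ticket_schema = {
    "name": "support_ticket",
    "schema": {
        "type": "object",
        "properties": {
            "severity": {"type": "string", "enum": ["low", "medium", "high"]},
            "summary": {"type": "string"},
        },
        "required": ["severity", "summary"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="my-private-llm",  # placeholder
    messages=[{"role": "user", "content": "The checkout page 500s for all EU users."}],
    response_format={"type": "json_schema", "json_schema": ticket_schema},
)

ticket = json.loads(response.choices[0].message.content)
print(ticket["severity"], "-", ticket["summary"])
```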

OctoAI has a rich history of leveraging Apache TVM. How has this framework influenced your platform's capabilities?

We created Apache TVM to make it easier for sophisticated developers to write efficient AI libraries for GPUs and accelerators. We did this because getting the most performance from GPU and accelerator hardware was critical for AI inference then, as it is now.
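As a simplified illustration of what TVM automates, the sketch below compiles an ONNX model into an optimized module for an NVIDIA GPU and runs one inference. The model file and input shape are placeholders.

```python
# Minimal Apache TVM sketch: compile an ONNX model for a CUDA target and
# run a single inference. Model path and input shape are placeholders.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")          # placeholder model file
shape_dict = {"input": (1, 3, 224, 224)}      # placeholder input shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# TVM applies graph- and operator-level optimizations and emits
# hardware-specific code for the chosen target.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda", params=params)

dev = tvm.cuda(0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
runtime.run()
print(runtime.get_output(0).numpy().shape)
```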

We've since leveraged that same mindset and expertise across the entire GenAI serving stack to deliver automation for a broader set of developers.

Can you discuss any significant performance improvements that OctoStack offers, such as the 10x performance boost in large-scale deployments?

OctoStack delivers significant performance improvements, including up to 12x cost savings compared to models like GPT-4 without sacrificing speed or quality. It also provides 4x better GPU utilization and a 50 percent reduction in operational costs, enabling organizations to run large-scale deployments efficiently and cost-effectively.

Can you share some notable use cases where OctoStack has significantly improved AI deployment for your clients?

A notable use case is Apate.ai, a global service combating phone scams using generative conversational AI. Apate.ai leveraged OctoStack to efficiently run their suite of language models across multiple geographies, benefiting from OctoStack's flexibility, scale, and security. This deployment allowed Apate.ai to deliver custom models supporting multiple languages and regional dialects, meeting their performance and security-sensitive requirements.

In addition, we serve hundreds of fine-tunes for our customer OpenPipe. Were they to spin up dedicated instances for each of these, their customers' use cases would be infeasible as they grow and evolve their use cases and continuously re-train their parameter-efficient fine-tunes for optimal output quality at cost-effective prices.
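What makes this economical is serving many parameter-efficient (LoRA-style) fine-tunes on top of a shared base-model deployment and selecting the adapter per request, rather than dedicating GPUs to each fine-tune. A rough sketch of what that can look like from the client side, with a placeholder endpoint and hypothetical adapter names (not OpenPipe's or OctoAI's actual identifiers):

```python
# Sketch of multi-tenant fine-tune serving from the caller's perspective:
# one shared base model, many lightweight adapters selected per request.
# Endpoint, key, and adapter identifiers are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://llm.internal.example.com/v1", api_key="YOUR_KEY")

def complete(adapter_id: str, prompt: str) -> str:
    # Many OpenAI-compatible servers expose each fine-tune as its own
    # "model" name while sharing the base-model weights on the GPU.
    response = client.chat.completions.create(
        model=adapter_id,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Two different customers' fine-tunes, served from the same deployment.
print(complete("base-llm:customer-a-v3", "Classify this support email ..."))
print(complete("base-llm:customer-b-v7", "Extract line items from this invoice ..."))
```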

Thank you for the great interview; readers who wish to learn more should visit OctoAI.