OpenAI o1: Is This the Enigmatic Power That Will Reshape Every Information Sector We Know?

My first encounters with the o1 model

An image generated by DALL-E with a prompt the same as the blog title.

On the twelfth of September at 10:00 a.m., I was in the class “Frontier Topics in Generative AI,” a graduate-level course at Arizona State University. A day earlier, on the eleventh of September, I had submitted a team assignment that involved trying to identify flaws and faulty outputs generated by GPT-4 (primarily prompting GPT-4 to see if it makes errors on trivial questions or high-school-level reasoning questions) as part of another graduate-level class, “Topics in Natural Language Processing.” We identified several trivial errors that GPT-4 made, one of them being its inability to count the number of r’s in the word strawberry. Before submitting the assignment, I read through several peer-reviewed papers that identified where and why GPT-4 made errors and how you might rectify them. Most of the papers I came across identified two main domains where GPT-4 erred: planning and reasoning.

This paper¹ (although almost a year old) goes in depth through several cases where GPT-4 fails to answer trivial questions that involve simple counting, simple arithmetic, elementary logic, and even common sense. The paper¹ argues that these questions require some level of reasoning and that, because GPT-4 is entirely incapable of reasoning, it almost always gets them wrong. The author also states that reasoning is a (very) computationally hard problem. Although GPT-4 is very compute-intensive, that compute is not directed toward reasoning about the questions it is prompted with. Several other papers echo this notion of GPT-4 being unable to reason or plan²³.
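
For a sense of how simple these probes are, here is a minimal sketch of the kind of check we ran, using the official OpenAI Python client; the exact prompt wording here is illustrative rather than the code from our assignment.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A trivial counting question that GPT-4 regularly got wrong at the time.
prompt = "How many times does the letter 'r' appear in the word 'strawberry'?"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# The correct answer is 3; GPT-4 would frequently insist on 2.
```

Nothing about the question is hard; the failure sits squarely with the model, not the prompt.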

Well, let’s get back to the twelfth of September. My class ends at around 10:15 a.m., and I come straight home and open up YouTube on my phone as I dig into my morning brunch. The first suggestion on my YouTube homepage was a video from OpenAI announcing the release of o1, titled “Building OpenAI o1.” They announced that this model is a straight-up reasoning model and that it would take more time to reason about your questions, providing more accurate answers. They state that they have put more compute into RL (Reinforcement Learning) than for earlier models in order to generate coherent chains of thought⁴. Essentially, they have trained the chain-of-thought generation process using reinforcement learning, so the model generates and hones its own chain of thought. With the o1 models, the engineers were able to ask the model why it was wrong (whenever it was wrong) in its chain-of-thought process, and it could identify the errors and correct itself. The model could question itself, reflect (see “Reflection in LLMs”) on its outputs, and correct itself.
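
If you want to try o1 yourself, it is essentially just a model-name swap in the same chat completions API. Below is a minimal sketch; “o1-preview” is the public preview identifier at the time of writing, and the visible output is only the final answer, since the reasoning chain itself stays hidden.

```python
from openai import OpenAI

client = OpenAI()

question = "How many times does the letter 'r' appear in the word 'strawberry'?"

# Same chat completions endpoint, pointed at the o1 preview model.
# o1 spends extra (hidden) reasoning tokens before it writes the visible answer,
# so responses take noticeably longer than GPT-4's.
response = client.chat.completions.create(
    model="o1-preview",
    # User message only: system messages were not supported in the preview at launch.
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
```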

In another video, “Reasoning with OpenAI o1,” Jerry Tworek demonstrates how earlier OpenAI models and most other LLMs on the market tend to fail on the following prompt:

“Assume the laws of physics on Earth. A small strawberry is put into a normal cup and the cup is placed upside down on a table. Someone then takes the cup and puts it inside the microwave. Where is the strawberry now? Explain your reasoning step by step.”

Legacy GPT-4 answers as follows: