GPT-4 Turbo unveiled at OpenAI DevDay

GPT-4 Turbo marks a significant upgrade over the original GPT-4, offering enhanced capabilities, lower costs, and an expanded 128K context window. With this advanced version, users can expect more thorough and informative text outputs, along with improved consistency and coherence in generated content thanks to the model's larger effective memory.

Notably, GPT-4 Turbo offers these advanced features at a reduced cost compared to its predecessor, GPT-4. This strategic pricing move has the potential to democratize access to cutting-edge AI technology, paving the way for an influx of innovative AI-driven applications and solutions across various sectors.

What does GPT-4 Turbo have to offer?

Unveiled at OpenAI DevDay, GPT-4 Turbo and the Assistants API mark a significant leap forward for developers keen on crafting bespoke assistive AI applications. By defining an assistant's goals and giving it access to different models and tools, developers are now equipped to build AI applications tailored to assist with an array of tasks. These range from mundane chores like scheduling meetings to more complex creative endeavors such as composing emails or generating original content.

Currently in its early access stage, the Assistants API holds immense promise for altering our interaction with digital tools. Envision a future where AI assistants are intricately woven into the fabric of our daily lives, adeptly managing personal tasks or seamlessly administering business operations.
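To make the idea concrete, here is a minimal sketch of what an assistant built on the Assistants API (in beta at the time of DevDay) might look like using the official openai Python SDK. The assistant's name, instructions, and example request are illustrative placeholders rather than anything OpenAI announced.

```python
# A minimal sketch of the Assistants API (beta at the time of DevDay), using the
# official openai Python SDK. Names and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create an assistant with a goal, a model, and a built-in tool.
assistant = client.beta.assistants.create(
    name="Scheduling helper",
    instructions="Help the user plan meetings and draft short follow-up emails.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model announced at DevDay
)

# Conversations live in threads; runs execute the assistant against a thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Draft an email proposing a 30-minute sync on Thursday afternoon.",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
print(run.status)  # poll until the run reports "completed", then read the thread's messages
```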

GPT-4 Turbo, much like its predecessors, operates as a sophisticated statistical instrument crafted for word prediction (Image credit)

According to OpenAI, the pricing model for this innovative service is enticingly economical: $0.01 per 1,000 input tokens (which amounts to approximately 750 words) and $0.03 per 1,000 output tokens. Here, “tokens” refer to snippets of text; for instance, the word “fantastic” is broken down into segments like “fan,” “tas,” and “tic.” This token-based system applies to both the data fed into the model and the responses generated by it. Furthermore, the cost for image processing with GPT-4 Turbo will vary based on the dimensions of the image, with a 1080×1080 pixel image costing $0.00765.
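As a rough illustration of how that per-token pricing adds up, the short Python sketch below computes the cost of a hypothetical request using the rates quoted above; the token counts are made-up example values.

```python
# Back-of-the-envelope cost calculator using the per-1,000-token prices quoted
# above ($0.01 input / $0.03 output for GPT-4 Turbo). Token counts are examples.
INPUT_PRICE_PER_1K = 0.01
OUTPUT_PRICE_PER_1K = 0.03

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD of a single GPT-4 Turbo request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a ~3,000-token prompt (roughly 2,250 words) with a ~750-token reply.
print(f"${request_cost(3000, 750):.4f}")  # prints $0.0525
```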

GPT-4 Turbo, much like its predecessors, operates as a sophisticated statistical instrument crafted for word prediction. Ingesting a vast corpus of data, predominantly sourced from the web, it has honed its capabilities to determine the probability of word sequences, taking into account not just the patterns of language but also the underlying semantic context.

While GPT-4’s training encompassed web content up to September 2021, GPT-4 Turbo extends its knowledge horizon to April 2023. This extension in temporal awareness ensures that inquiries regarding more recent occurrences, up until the updated cut-off, are met with enhanced precision and relevance.

About GPT-4 Turbo with Vision

GPT-4 Turbo with Vision represents an evolutionary leap in the GPT-4 Turbo lineage, boasting the dual capacity to understand both textual and visual inputs: it can analyze images and reason about them in addition to generating text. This enhancement unlocks new horizons for developers, enabling them to craft AI applications that seamlessly bridge the realms of language and imagery.

Unveiled at OpenAI DevDay, GPT-4 Turbo with Vision is poised to catalyze the creation of a novel class of AI-driven creative instruments. Imagine image editing suites or video editing platforms that harness AI to assist users in producing content of exceptional quality with unprecedented ease and speed. This technology stands as a testament to the rapidly advancing capabilities of AI in augmenting human creativity and efficiency.
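For developers curious what this looks like in practice, here is a minimal sketch of passing an image to the vision-capable preview model through the Chat Completions API; the image URL and prompt are placeholders, and the model name assumes the preview released around DevDay.

```python
# A minimal sketch of sending an image to GPT-4 Turbo with Vision via the
# Chat Completions API. The image URL and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-capable preview model from DevDay
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```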

Unveiled at OpenAI DevDay, GPT-4 Turbo with Vision is poised to catalyze the creation of a novel class of AI-driven creative instruments (Image credit)

Context window

GPT-4 Turbo marks a significant advancement in conversational AI models with its expanded context window. The context window, measured in tokens, is essentially the span of text a model can reference when generating responses. Models equipped with smaller context windows tend to lose track of earlier parts of a conversation, causing the dialogue to drift off topic or become inconsistent.

Boasting a context window of 128,000 tokens, GPT-4 Turbo sets a new industry standard, quadrupling the 32,000-token maximum of its predecessor, GPT-4, and outstripping the likes of Claude 2 from Anthropic, which accommodates up to 100,000 tokens. To contextualize, 128,000 tokens are roughly equivalent to 100,000 words or about 300 pages of text, offering a deep well of information for the model to draw from for coherent and relevant dialogue continuity.
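To get a feel for how text maps onto tokens, the sketch below uses the open source tiktoken tokenizer, with the cl100k_base encoding used by the GPT-4 family, to count tokens in a sample sentence and relate that to the 128,000-token window; the sample text is arbitrary.

```python
# Counting tokens with tiktoken to see how text fills a 128,000-token window.
# The cl100k_base encoding is the one used by the GPT-4 family of models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "GPT-4 Turbo was unveiled at OpenAI DevDay with a 128K context window."
tokens = enc.encode(text)

print(len(tokens))                     # number of tokens in the sample sentence
print(128_000 // max(len(tokens), 1))  # roughly how many such sentences fit in the window
```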

Furthermore, GPT-4 Turbo introduces a ‘JSON mode’ that ensures outputs in valid JSON format, greatly benefiting web applications in the transmission of data from servers to clients for web page displays, as per OpenAI. Accompanying parameters are also introduced, enhancing the model’s ability to provide consistent completions and, for specialized uses, to log probabilities of the most likely output tokens. This suite of features not only expands the utility of GPT-4 Turbo but also enhances the precision and applicability of its outputs across a multitude of web-based applications.
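Here is a minimal sketch of how JSON mode and the reproducibility parameter might be used through the Chat Completions API, assuming the preview model name from DevDay; the prompt is illustrative, and note that JSON mode expects the word "JSON" to appear somewhere in the messages.

```python
# A minimal sketch of JSON mode and the seed parameter via the Chat Completions
# API. The prompt is illustrative; JSON mode requires "JSON" in the messages.
from openai import OpenAI
import json

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # guarantees syntactically valid JSON output
    seed=42,                                  # best-effort reproducible completions
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three DevDay announcements as a JSON array under the key 'announcements'."},
    ],
)
print(json.loads(response.choices[0].message.content))
```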

GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats (e.g. ‘always respond in XML’). And GPT-4 Turbo is more likely to return the right function parameters.

-OpenAI

GPT-4 updates

OpenAI is ensuring that the evolution of GPT-4 does not stop with the advent of GPT-4 Turbo. In a move to further advance the capabilities of their latest model, they are initiating an experimental program to allow fine-tuning of GPT-4. This program represents an upgrade from the GPT-3.5 fine-tuning process, incorporating enhanced oversight and support from the OpenAI teams to navigate the complex technical challenges involved.

Preliminary results indicate that GPT-4 fine-tuning requires more work to achieve meaningful improvements over the base model compared to the substantial gains realized with GPT-3.5 fine-tuning.

-OpenAI

In addition to these fine-tuning improvements, OpenAI is significantly increasing the value proposition for its current GPT-4 clientele. They have announced a doubling of the tokens-per-minute rate limit for all GPT-4 subscribers without altering the existing price structure. The costs will continue to be $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens for the standard GPT-4 model, which offers an 8,000-token context window. Meanwhile, for those utilizing the more extensive 32,000-token context window variant of GPT-4, the rate remains at $0.06 per 1,000 input tokens and $0.12 per 1,000 output tokens.

OpenAI is ensuring that the evolution of GPT-4 does not stop with the advent of GPT-4 Turbo (Image credit)

GPT-4 Turbo pricing details

OpenAI DevDay introduced many new tools for users, as well as new charges for using the AI company's models. All prices below are per 1,000 tokens; a small cost-comparison sketch follows the list.

  • GPT-4 8K:
    • Input price: $0.03
    • Output price: $0.06
  • GPT-4 Turbo 128K:
    • Input price: $0.01
    • Output price: $0.03
  • GPT-3.5 Turbo 4K:
    • Input price: $0.0015
    • Output price: $0.002
  • GPT-3.5 Turbo 16K:
    • Input price: $0.003
    • Output price: $0.004
  • GPT-3.5 Turbo fine-tuning (4K and 16K):
    • Training: $0.008
    • Input price: $0.003
    • Output price: $0.006
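The sketch below encodes the table above as a small Python lookup (prices in USD per 1,000 tokens, as listed) and compares what the same hypothetical request would cost on each model; the token counts are arbitrary example values.

```python
# The pricing table above as a lookup (USD per 1,000 tokens), with a helper
# that compares the cost of one hypothetical request across models.
PRICES = {
    "gpt-4-8k":          {"input": 0.03,   "output": 0.06},
    "gpt-4-turbo-128k":  {"input": 0.01,   "output": 0.03},
    "gpt-3.5-turbo-4k":  {"input": 0.0015, "output": 0.002},
    "gpt-3.5-turbo-16k": {"input": 0.003,  "output": 0.004},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request with the given token counts."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Example: the same 2,000-token prompt and 500-token reply on each model.
for model in PRICES:
    print(f"{model}: ${cost(model, 2000, 500):.4f}")
```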

Featured image credit: Zac Wolff/Unsplash
