Did OpenAI quietly release GPT-5?
The AI-ronman 🚀
Quick Takes ⚡
OpenAI’s groundbreaking memory feature released
Mysterious 'gpt2-chatbot' sparks GPT-5 speculation
Gradient releases Llama 3 8B with 1M-token context
China’s answer to Sora: Vidu
GitHub releases much-anticipated Workspace with agents
Deep Dive 🔍
OpenAI’s groundbreaking memory feature released 💹
OpenAI is piloting a new memory feature in ChatGPT for Plus accounts (excluding Europe and Korea) that allows the system to learn from past conversations and recall that context in new ones. Memory might be one of the core pillars needed to create agents, and OpenAI claims the capability is intended to help ChatGPT better understand individual users' preferences and styles, improving over time. You can reset it, remove specific memories or all of them, or disable the feature entirely in your settings. OpenAI also claims that Team and Enterprise user data, including memory, is excluded from model training.
Mysterious 'gpt2-chatbot' sparks GPT-5 speculation 🫢
A new AI model, "gpt2-chatbot," is causing a stir after going viral despite being released without any official documentation. Its impressive reasoning and human-like responses are fueling speculation, and industry rumors suggest it could be an early release of GPT-5 put up for benchmarking, since it shows distinct similarities to OpenAI's models, including specific vulnerabilities and response styles.
Gradient releases Llama 3 8B with a context length of over 1M tokens 😅
This model, developed by Gradient and supported by compute from Crusoe Energy, extends Llama 3 8B's context length from 8K to over 1040K tokens. It shows that state-of-the-art large language models (LLMs) can efficiently manage long contexts with minimal training by fine-tuning RoPE theta: this stage was trained on 830 million tokens, with a total of 1.4 billion tokens across all stages, which is less than 0.01% of the data used in Llama 3's original pre-training. Does it potentially mean we've also reached GPT-4 and Gemini 1.5 Pro level context windows?
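To make the RoPE theta trick concrete, here is a minimal sketch (not Gradient's actual training code) of the common "NTK-aware" heuristic for choosing a larger theta: stretching the context by a factor s while scaling theta by s^(dim/(dim-2)) keeps the rotation angle of the slowest RoPE channel at 1M tokens in the same range the model saw during its 8K pre-training. All values besides Llama 3's stock settings are illustrative:

```python
# Sketch of why raising RoPE theta extends context. The scaling rule below
# is the widely used "NTK-aware" heuristic, not necessarily Gradient's
# exact recipe; numbers other than Llama 3's defaults are illustrative.

DIM = 128                 # per-head dimension in Llama 3 8B
OLD_THETA = 500_000.0     # RoPE base that Llama 3 ships with
OLD_CTX, NEW_CTX = 8_192, 1_048_576

def slowest_angle(pos: int, theta: float, dim: int = DIM) -> float:
    """Rotation angle (radians) of the lowest-frequency RoPE channel pair at `pos`."""
    inv_freq = theta ** (-(dim - 2) / dim)  # smallest rotation frequency
    return pos * inv_freq

# NTK-aware rule of thumb: to stretch context by s, scale theta by s**(dim/(dim-2)).
s = NEW_CTX / OLD_CTX
new_theta = OLD_THETA * s ** (DIM / (DIM - 2))

print(f"scale factor s = {s:.0f}, suggested new theta ~= {new_theta:,.0f}")
print(f"angle at {OLD_CTX:>9} (old theta): {slowest_angle(OLD_CTX, OLD_THETA):.4f} rad")
print(f"angle at {NEW_CTX:>9} (old theta): {slowest_angle(NEW_CTX, OLD_THETA):.4f} rad  <- far outside the trained range")
print(f"angle at {NEW_CTX:>9} (new theta): {slowest_angle(NEW_CTX, new_theta):.4f} rad  <- back inside the trained range")
```

The larger base slows the low-frequency channels, so positions a million tokens apart still map to angles the pre-trained model knows how to attend over, and only a short fine-tune is needed to adapt.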
China's answer to Sora: Vidu 🎥
Shengshu Technology and Tsinghua University introduced Vidu, a text-to-video model that produces videos from text prompts with just one click. Vidu can create 16-second clips at 1080p resolution; Sora, in contrast, is capable of generating 60-second videos. Vidu utilizes a Universal Vision Transformer (U-ViT) architecture, which, according to the company, enables it to realistically simulate the physical world, with multi-camera view generation.
GitHub releases much-anticipated Workspace with E2E agents 🧑‍💻
Following a sneak peek at last year's GitHub Universe, GitHub has officially introduced the technical preview of Copilot Workspace, a groundbreaking development environment designed around Copilot technology. In Copilot Workspace, developers can seamlessly brainstorm, plan, build, test, and run code entirely in natural language. This innovative, task-centric environment is powered by various Copilot agents that draft a complete plan for each task, while developers retain full control throughout the coding process. Pretty impressive, right?
GlazeGPT just announced their industry-leading innovation: text-to-MongoDB queries, a feature that translates natural language into precise MongoDB queries. 🔥 This latest update empowers users to interact effortlessly with complex MongoDB datasets, generating complex queries in seconds while understanding your business glossary.
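For a flavor of what text-to-MongoDB translation means in practice, here is a hypothetical example, not GlazeGPT's actual output: the database, collection, and field names are invented, and it shows the kind of aggregation pipeline a request like "top 5 customers by total order value this year" maps to:

```python
from datetime import datetime
from pymongo import MongoClient

# Hypothetical illustration of the text-to-MongoDB idea; "shop", "orders",
# and the field names are invented for this example.
client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

pipeline = [
    {"$match": {"created_at": {"$gte": datetime(2024, 1, 1)}}},          # "this year"
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},   # value per customer
    {"$sort": {"total": -1}},                                            # biggest spenders first
    {"$limit": 5},                                                       # "top 5"
]
for doc in orders.aggregate(pipeline):
    print(doc["_id"], doc["total"])
```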
Level up your skills with this 14-day free trial - don't miss out! ⚡️
Thanks for joining us on this journey through the world of AI! Until next time, stay curious and keep exploring!🚀🌟
Ciao for now!
Karan 😎 🚀
CEO, GlazeGPT.com